[compiler-rt] [scudo] Add primary option to zero block on dealloc. (PR #142394)
via llvm-commits
llvm-commits at lists.llvm.org
Fri Jun 20 09:23:52 PDT 2025
piwicode wrote:
> If this is supposed to be generally supported, then it shouldn't be only for the primary allocator. Like zero-on-allocation, it should apply to all blocks. Unless we have a good reason and it's for general cases (not for some specific cases).
My proposal aims for performance gains, not security, which is why it differs from the standard zero-on-allocation behavior. Could a new name help clarify this distinction?
My understanding is that a memset is unnecessary in the secondary allocator because its pages are unmapped upon deallocation. The next mapping automatically provides zero-initialized pages, so a memset there would be wasted work (assuming no quarantine).
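To illustrate the point about the secondary (this is a generic POSIX sketch, not Scudo code): anonymous mappings come back zero-filled from the kernel, so zeroing a block right before unmapping it buys nothing.

```cpp
// Minimal sketch: secondary-sized blocks are backed by their own mappings,
// and MAP_ANONYMOUS memory is zero-initialized by the kernel.
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

int main() {
  const size_t Size = 1 << 20; // A large, secondary-sized allocation.
  void *P = mmap(nullptr, Size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  assert(P != MAP_FAILED);
  // Every byte is already zero; an explicit memset would be redundant.
  assert(static_cast<unsigned char *>(P)[0] == 0);
  // On dealloc the mapping goes away entirely, so the next mapping
  // starts out zeroed again.
  munmap(P, Size);
  return 0;
}
```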
> OTOH, we have the page releasing mechanism which also helps with memory usage. You can do a mandatory clean up as well. One thing I'm not sure about is the cost of having the kernel detect zero pages; it doesn't seem like a cheap operation. Does it check the page status periodically?
This process already happens as part of zram compression. This change makes it more efficient and avoids processing the data of deallocated blocks.
When the operating system is under memory pressure, it scans pages in least-recently-accessed order, with the primary intent of compressing them. When a page happens to be full of zeroes, the operating system can skip the compression and decompression CPU time and save some compressed storage. On Fuchsia we observe a reduction in time spent compressing and decompressing, and a slight increase in compression ratio (I suspect because the average page contains more zeroes).
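For context, here is a sketch of the zero-page shortcut I am describing. The names are made up and this is not the actual zram code, just the idea:

```cpp
// Illustrative sketch: if a page is entirely zero, the compressor can
// record that fact instead of compressing, and "decompression" becomes
// a cheap fill.
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr size_t PageSize = 4096;

bool isZeroPage(const uint8_t *Page) {
  static const uint8_t Zeros[PageSize] = {};
  return std::memcmp(Page, Zeros, PageSize) == 0;
}

// Hypothetical compression path: zero pages skip the compressor entirely.
void storePage(const uint8_t *Page) {
  if (isZeroPage(Page)) {
    // Record "zero-filled" in the page's metadata; no CPU spent
    // compressing, no storage spent on a compressed payload.
    return;
  }
  // ... fall through to the real compressor for non-zero pages ...
}
```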
> Sorry, I forgot you also want to clean the header. Then this is another behavior which is specific to a certain case. For zeroing-on-free, I would expect the zeroing to clean everything except the header (because keeping it helps with things like double-free detection).
Ack. Keeping the headers defeats the purpose, as the memory pages would not be entirely zero.
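To make the difference concrete, here is a sketch of the two policies with hypothetical names (Scudo's real header layout and field names differ):

```cpp
// Zeroing everything except the header leaves non-zero bytes on the page,
// so the zero-page shortcut never fires; zeroing the header too makes a
// page of freed blocks all zeroes.
#include <cstddef>
#include <cstring>

struct ChunkHeader { // Hypothetical stand-in for the real chunk header.
  unsigned ClassId;
  unsigned State;
};

void zeroOnFreeKeepHeader(void *Block, size_t BlockSize) {
  // Double-free detection still works, but the page keeps non-zero bytes.
  std::memset(static_cast<char *>(Block) + sizeof(ChunkHeader), 0,
              BlockSize - sizeof(ChunkHeader));
}

void zeroOnFreeWholeBlock(void *Block, size_t BlockSize) {
  // What this proposal wants: a region of freed blocks becomes all zeroes,
  // so the OS can reclaim the page without compressing it.
  std::memset(Block, 0, BlockSize);
}
```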
> Not only the micro-benchmarks; we should also take real cases into consideration. When turning on zeroing-on-alloc, we saw a significant performance impact on Android (especially for processes like the camera, which is sensitive to fps). Thus I'm not sure what you mean by "the effect is small". Even in a microbenchmark, I believe you would still see a performance regression on free, because it introduces additional operations. I would like to see more details about the measurements.
So far, macro-benchmarks show no significant impact on Android boot-sequence timing.
Data from dedicated performance test suites is temporarily unavailable, so I cannot speak to specific processes yet.
Reducing the memory pressure on the system decreases the amount of time spent paging in and out.
Like many performance knobs, this is a tradeoff: it pays off on systems under memory pressure and is less interesting when plenty of memory is available.
> We can do it here. The main concern for me is that I'm not persuaded by the reason why we only support zeroing memory on free for the primary, and clean everything in the block. Scudo is a userspace memory allocator; if a performance/memory optimization relies on the OS/kernel, then we should be careful about the expectations set by new features. For example, it's better not to say the OS will release the zero pages. And the users are not only Fuchsia/Android; there are still a few other platforms (yes, very few of them, but it's not only one or two platforms). It's better to make changes based on more general use cases.
That is a good discussion to have. This proposal aims at performance, and the option should only be enabled on systems where it is beneficial. I understand that whoever enables it is assuming the operating system can take advantage of the zeroed blocks to release pages when needed, and that this is a new reclamation code path.
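For example, the knob could be gated at compile time per platform config, so platforms that do not benefit pay nothing. The names below are hypothetical and may not match what the PR actually adds:

```cpp
// Sketch of an opt-in, per-config zero-on-dealloc knob in a primary
// deallocation path; ZeroBlockOnDealloc is a hypothetical flag name.
#include <cstddef>
#include <cstring>

template <typename Config>
void deallocateBlock(void *Block, size_t BlockSize) {
  if (Config::ZeroBlockOnDealloc) // Compile-time gated per config.
    std::memset(Block, 0, BlockSize);
  // ... push the block back onto the primary's free list ...
}

struct FuchsiaConfig { static constexpr bool ZeroBlockOnDealloc = true; };
struct DefaultConfig { static constexpr bool ZeroBlockOnDealloc = false; };
```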
https://github.com/llvm/llvm-project/pull/142394