"llvm/git@repo.hca.bsc.es:rferrer/llvm-epi-0.8.git" did not exist on "446122ed572696673ff1d1dadea8765823b03c39"
[Sanitizers] Allocator: new "release memory to OS" implementation
Summary:
The current implementation of the allocator returning freed memory back to the OS (controlled by the allocator_release_to_os_interval_ms flag) requires sorting of the free chunks list, which has two major issues. First, when the free list grows to millions of chunks, sorting, even the fastest one, is simply too slow. Second, sorting chunks in place is unacceptable for the Scudo allocator, as it makes allocations more predictable and less secure.

The proposed approach is linear in complexity (although it requires quite a bit more temporary memory). The idea is to count the number of free chunks on each memory page and release the pages containing free chunks only. It requires one iteration over the free list of chunks and one iteration over the array of page counters. The obvious disadvantage is the allocation of the array of counters, but even in the worst case we support (4T allocator space, 64 buckets, 16 byte bucket size, full free list, which leads to 2 bytes per page counter and ~17M page counters), it requires only about 34MB for the intermediate buffer (compared to ~64GB of actually allocated chunks), and it usually stays under 100K and is released after each use. Releasing memory back to the OS is expected to be a relatively rare event, so keeping the buffer between those runs and the added bookkeeping complexity seem unnecessary here (it can always be improved later, though; never say never).

The most interesting problem here is how to calculate the number of chunks falling into each memory page in the bucket. Skipping all the details, there are three cases when the number of chunks per page is constant:
1) P >= C, P % C == 0                      --> N = P / C
2) C > P,  C % P == 0                      --> N = 1
3) C <= P, P % C != 0 && C % (P % C) == 0  --> N = P / C + 1
where P is the page size, C is the chunk size and N is the number of chunks per page. In the remaining cases, the number of chunks per page is calculated on the go, during the iteration over the page counter array. Among those, there are still cases where N can be deduced from the page index, but they do not require much less calculation per page than the current "brute force" way, and 2/3 of the buckets fall into the first three categories anyway, so, for the sake of simplicity, it was decided to stick to those two variations. It can always be refined and improved later, should we see that the brute force way slows us down unacceptably.

Reviewers: eugenis, cryptoad, dvyukov

Subscribers: kubamracek, mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D38245

llvm-svn: 314311
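To make the counting scheme concrete, here is a minimal, self-contained C++ sketch of the two passes described above. It is illustrative only: kPageSize, FindReleasablePages, and the plain std::vector of counters are assumptions of this sketch, not the actual sanitizer_common code (which packs counters more tightly and releases pages through the allocator's memory mapping layer).

```
#include <cstdint>
#include <vector>

// Page size assumed constant for the sketch; the real allocator queries it.
static const uint64_t kPageSize = 4096;

// Returns the constant number of chunks per page for the three special
// cases from the commit message, or 0 when the count varies per page.
static uint64_t ConstantChunksPerPage(uint64_t C, uint64_t P = kPageSize) {
  if (P >= C && P % C == 0) return P / C;                          // case 1
  if (C > P && C % P == 0) return 1;                               // case 2
  if (C <= P && P % C != 0 && C % (P % C) == 0) return P / C + 1;  // case 3
  return 0;  // number of chunks per page varies; compute on the go
}

// Returns the indices of pages consisting of free chunks only, i.e. the
// pages that could be returned to the OS (e.g. via madvise(MADV_DONTNEED)).
static std::vector<uint64_t> FindReleasablePages(
    const std::vector<uint64_t> &free_chunk_offsets,  // from region start
    uint64_t chunk_size, uint64_t region_size) {
  uint64_t num_pages = region_size / kPageSize;
  // The temporary counter array: one counter per page, allocated only for
  // the duration of the release and freed after each use.
  std::vector<uint32_t> counters(num_pages, 0);

  // Pass 1: one iteration over the free list; count how many free chunks
  // touch each page (a chunk larger than a page touches several pages).
  for (uint64_t off : free_chunk_offsets) {
    uint64_t first = off / kPageSize;
    uint64_t last = (off + chunk_size - 1) / kPageSize;
    for (uint64_t p = first; p <= last && p < num_pages; p++) counters[p]++;
  }

  // Pass 2: one iteration over the counters; a page is releasable when
  // every chunk overlapping it is free.
  uint64_t n = ConstantChunksPerPage(chunk_size);
  std::vector<uint64_t> releasable;
  for (uint64_t p = 0; p < num_pages; p++) {
    uint64_t full = n;
    if (full == 0) {
      // "Brute force" path: count the chunks overlapping page p directly.
      uint64_t begin = p * kPageSize;
      full = (begin + kPageSize - 1) / chunk_size - begin / chunk_size + 1;
    }
    if (counters[p] == full) releasable.push_back(p);
  }
  return releasable;
}
```

For example, with P = 4096 and C = 48 (case 3: 4096 % 48 == 16 and 48 % 16 == 0), ConstantChunksPerPage returns 86, so a page becomes releasable once all 86 chunks touching it appear on the free list.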