[all-commits] [llvm/llvm-project] 268428: Add a test for evicting unreachable modules from t...
jimingham via All-commits
all-commits at lists.llvm.org
Tue Dec 12 11:28:20 PST 2023
Branch: refs/heads/main
Home: https://github.com/llvm/llvm-project
Commit: 2684281d208612a746b05c891f346bd7b95318d5
https://github.com/llvm/llvm-project/commit/2684281d208612a746b05c891f346bd7b95318d5
Author: jimingham <jingham at apple.com>
Date: 2023-12-12 (Tue, 12 Dec 2023)
Changed paths:
A lldb/test/API/python_api/global_module_cache/Makefile
A lldb/test/API/python_api/global_module_cache/TestGlobalModuleCache.py
A lldb/test/API/python_api/global_module_cache/one-print.c
A lldb/test/API/python_api/global_module_cache/two-print.c
Log Message:
-----------
Add a test for evicting unreachable modules from the global module cache (#74894)
When you debug a binary, then change & rebuild it and rerun in lldb
w/o quitting lldb, the Modules in the Global Module Cache for the old
binary (& .o files, if used) become "unreachable". Nothing in lldb is
holding them alive, and they've already been unlinked. lldb will
properly discard them if there's no other Target referencing them.
However, this only works in simple cases at present. If you have several
Targets that reference the same modules, it's pretty easy to end up
stranding Modules that are no longer reachable, and if you use a
sequence of SBDebuggers, unreachable modules can also get stranded. If
you run a long-lived lldb process and are iteratively developing on a
large code base, lldb's memory fills up with useless Modules.
This patch adds a test for the mode that currently works:
(lldb) target create foo
(lldb) run
<rebuild foo outside lldb>
(lldb) run
In that case, we do delete the unreachable Modules.
The next step will be to add tests for the cases where we fail to do
this, then see how to safely/efficiently evict unreachable modules in
those cases as well.
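The eviction criterion described above - a cached Module is discardable once no
Target holds a reference to it - can be sketched as a toy Python model. This is
not lldb's actual implementation (the real cache is C++ and checks shared_ptr
use counts); here CPython's sys.getrefcount stands in for use_count, and the
Module and GlobalModuleCache names are illustrative only:

```python
import sys

class Module:
    """Stand-in for an lldb Module (debug info for one binary or .o file)."""
    def __init__(self, path):
        self.path = path

class GlobalModuleCache:
    """Toy model of a shared module cache that evicts entries no
    Target references anymore. Not lldb's real implementation."""
    def __init__(self):
        self._modules = {}

    def get_module(self, path):
        # Hand out the cached Module, creating it on first request.
        mod = self._modules.get(path)
        if mod is None:
            mod = Module(path)
            self._modules[path] = mod
        return mod

    def evict_unreachable(self):
        # A Module is unreachable when the cache holds the only strong
        # reference. For such a module, getrefcount (CPython-specific)
        # sees exactly two refs: the dict entry and its own argument.
        dead = [p for p in list(self._modules)
                if sys.getrefcount(self._modules[p]) <= 2]
        for p in dead:
            del self._modules[p]

# A "target" keeps foo alive; the stale module from before the rebuild
# has no holder and gets evicted.
cache = GlobalModuleCache()
live = cache.get_module("foo")   # still referenced by a Target
cache.get_module("foo.old")      # stranded: no Target holds it
cache.evict_unreachable()
print(sorted(cache._modules))    # prints ['foo']
```

The rerun scenario in the commit maps onto this model directly: after the
rebuild, the old binary's Module has no remaining Target holder, so the
eviction pass drops it while Modules still in use survive.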