[Lldb-commits] [PATCH] D151292: lldb WIP/RFC: Adding support for address fixing on AArch64 with high and low memory addresses

Jason Molenda via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Tue May 23 22:47:54 PDT 2023

jasonmolenda created this revision.
jasonmolenda added reviewers: DavidSpickett, JDevlieghere, jingham, omjavaid.
jasonmolenda added a project: LLDB.
Herald added a subscriber: kristof.beyls.
Herald added a project: All.
jasonmolenda requested review of this revision.
Herald added a subscriber: lldb-commits.

The number of bits used for addressing on AArch64 is more complicated than lldb represents today.  The same control values apply to both EL0 and EL1 execution, but the number of addressing bits may differ for low memory (0x000...) versus high memory (0xfff...) addresses.  The Darwin kernel always uses the same page table setup for low and high memory, but I'm supporting some teams that need different settings for each, and they capture a single corefile / JTAG connection containing virtual addresses from both halves of the address space in a single process.  A single number of addressing bits cannot represent this situation.  Internally we use the Linux model of separate code and data address masks, but that also doesn't cover this concept.
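To make the low-versus-high distinction concrete, here is a minimal sketch (names and helpers are illustrative, not lldb's actual API) of deriving a mask from a number of addressing bits and fixing an address differently depending on which half of the address space it falls in:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: compute an address mask from a number of
// addressable bits. A mask bit of 1 marks a non-addressable bit.
static uint64_t MaskFromAddressableBits(uint32_t bits) {
  if (bits == 0 || bits >= 64)
    return 0; // 0 means "no mask / all bits addressable"
  return ~((1ULL << bits) - 1);
}

// High-memory (TTBR1-style) addresses have the top bit set; low-memory
// (TTBR0-style) addresses have it clear. Each half may use its own mask.
static uint64_t FixAddress(uint64_t addr, uint64_t low_mask,
                           uint64_t high_mask) {
  if (addr & (1ULL << 63))
    return addr | high_mask; // sign-extend: force masked bits to 1
  return addr & ~low_mask;   // zero-extend: clear masked bits
}
```

With, say, 47 addressable bits for both halves, a high-memory pointer gets its top bits forced to 1 while a low-memory pointer gets them cleared.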

This patch adds a high memory code mask and data mask, mirroring our existing code/data masks.  By default the high memory masks will either be 0 (unset) or will mirror the low memory mask values.  I'll need three ways of receiving the correct number of addressing bits:  a setting, an LC_NOTE to get it from a corefile, and a qProcessInfo key to get it from a gdb-remote connection to a JTAG etc device.

To start, I changed `target.process.virtual-addressable-bits` from taking a uint to an array of strings (it should be an array of uints, but I wasn't able to get that working correctly; I'll experiment with it more later).  The array can have zero elements (no user override), one element (used for all addresses, like today), or two elements (the first number is for low memory, the second for high memory).  The alternative is a separate additional setting for those environments that need to specify a different address mask for high memory vs. low memory.
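The zero/one/two-element interpretation could be sketched roughly like this (an illustrative stand-in, not the patch's actual parsing code; the struct and function names are made up):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <string>
#include <vector>

// 0 means "unspecified" for each half of the address space.
struct AddressableBits {
  uint32_t low = 0;
  uint32_t high = 0;
};

// Empty array: no override. One element: applies to both low and high
// memory. Two elements: {low, high}.
static AddressableBits ParseSetting(const std::vector<std::string> &values) {
  AddressableBits bits;
  if (values.size() >= 1)
    bits.low = bits.high =
        static_cast<uint32_t>(std::strtoul(values[0].c_str(), nullptr, 0));
  if (values.size() >= 2)
    bits.high =
        static_cast<uint32_t>(std::strtoul(values[1].c_str(), nullptr, 0));
  return bits;
}
```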

I also changed `Process::GetDataAddressMask` and `Process::GetCodeAddressMask` to follow a clear order of precedence:  if the user specifies a number of addressing bits in that setting, it overrides whatever the system tells lldb.  The user's specified values do not overwrite/set the Process address mask.

Current behavior is that the setting value overwrites the mask only when the mask is unset; once the mask is set, the user setting is ignored.  In practice this means you can change the setting ONCE in a Process lifetime, and any further changes are ignored.  That made it a little annoying to experiment with this environment when I first started working on it. :)
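The precedence described above can be sketched as follows (a hypothetical simplification, not lldb's actual implementation): reading the mask consults the setting first, and never mutates the stored process-provided mask, so the setting keeps working for the whole Process lifetime.

```cpp
#include <cassert>
#include <cstdint>

class ProcessSketch {
public:
  // Set by the platform / remote stub.
  void SetCodeAddressMask(uint64_t mask) { m_code_mask = mask; }

  // A nonzero user setting wins, and is computed fresh on every call
  // rather than being written into m_code_mask.
  uint64_t GetCodeAddressMask(uint32_t setting_bits) const {
    if (setting_bits != 0 && setting_bits < 64)
      return ~((1ULL << setting_bits) - 1);
    return m_code_mask; // fall back to what the system told us
  }

  uint64_t StoredMask() const { return m_code_mask; }

private:
  uint64_t m_code_mask = 0; // 0 == unset / all bits addressable
};
```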

None of this should change behavior on Linux, but if folks from the Linux world have comments or reactions to this change, I'd be interested to hear them.  I haven't done much testing beyond the one test corefile, and I still need to work out how the two values are communicated in corefiles & live debug sessions, but those are trivial details compared to the core idea.

FTR, these are the AArch64 TCR_EL1.T0SZ and TCR_EL1.T1SZ fields.  The values of this control register apply to both EL0 and EL1 execution, but T0SZ applies to TTBR0_EL1 translations for low memory addresses and T1SZ applies to TTBR1_EL1 translations for high memory addresses.
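Per the Arm architecture reference, each TnSZ field gives the size of the non-addressable region, so the number of addressable bits for that half of the address space is 64 - TnSZ; e.g. T0SZ=17 yields 47-bit low memory addresses while T1SZ=25 would yield 39-bit high memory addresses. A trivial sketch:

```cpp
#include <cassert>
#include <cstdint>

// Number of virtual address bits for the half of the address space
// controlled by TCR_EL1.T0SZ (low memory) or T1SZ (high memory).
static uint32_t AddressableBitsFromTnSZ(uint32_t tnsz) {
  return 64 - tnsz;
}
```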

  rG LLVM Github Monorepo



-------------- next part --------------
A non-text attachment was scrubbed...
Name: D151292.525008.patch
Type: text/x-patch
Size: 10493 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/lldb-commits/attachments/20230524/b8960188/attachment-0001.bin>
