<div dir="ltr">Ok. So it looks like x86_64h is not currently supported by llvm::Triple. I will find out about adding support for it. If it's possible to get that into llvm::Triple, I'll post a patch that updates Host::GetArchitecture() to use it, and then maybe one of you guys can test it in the various configurations.<div>
<br></div><div>Thanks!</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Aug 19, 2014 at 2:18 PM, <span dir="ltr"><<a href="mailto:jingham@apple.com" target="_blank">jingham@apple.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sorry, I didn't read closely enough. Greg's answer is actually relevant to your question...<br>
<span class="HOEnZb"><font color="#888888"><br>
Jim<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
> On Aug 19, 2014, at 2:04 PM, Zachary Turner <<a href="mailto:zturner@google.com">zturner@google.com</a>> wrote:<br>
><br>
> In this case though, we're talking about Host::GetArchitecture, which is a static function and is supposed to report information about the OS that the debugger is running on, not the target that the debuggee is running on. Does this mean that a single instance of LLDB cannot debug both an app running in the simulator and an app running natively on Darwin? So you have to exit LLDB and start a new instance of LLDB in "simulator mode" or "mac mode"? Or am I misunderstanding?<br>
><br>
><br>
> On Tue, Aug 19, 2014 at 1:58 PM, <<a href="mailto:jingham@apple.com">jingham@apple.com</a>> wrote:<br>
> Mac OS X is more complex because lldb also supports apps running in the iOS simulator on OS X. Those apps are x86 or x86_64 processes, but their OS is "ios", not "darwin". The platforms and a bunch of other fiddly bits rely on getting the OS right as well.<br>
><br>
> Jim<br>
><br>
> > On Aug 19, 2014, at 1:52 PM, Zachary Turner <<a href="mailto:zturner@google.com">zturner@google.com</a>> wrote:<br>
> ><br>
> > Looking over the apple code path for Host::GetArchitecture, I'm a little confused about why all this custom logic is needed. What are the situations in which llvm::sys::getDefaultTargetTriple() will return something other than what we want? Specifically, a concrete example might help illustrate the problem.<br>
> ><br>
> > I understand from the comment that it has something to do with being able to run 32- and 64-bit executables on the same operating system. Isn't this the case everywhere? I can run 32-bit executables on Windows x64 as well. llvm::Triple has a function called get32BitArchVariant() that I thought returned a 32-bit triple for this case. Does this not work for some Apple configuration?<br>
> ><br>
> > It seems like this logic could be sunk into llvm::Triple somehow. Conceptually speaking, there should be two cases:<br>
> ><br>
> > 64-bit:<br>
> > host_arch_64 = llvm::Triple(llvm::sys::getDefaultTargetTriple())<br>
> > host_arch_32 = llvm::Triple(llvm::sys::getDefaultTargetTriple()).get32BitArchVariant()<br>
> ><br>
> > 32-bit:<br>
> > host_arch_64 = <empty><br>
> > host_arch_32 = llvm::Triple(llvm::sys::getDefaultTargetTriple())<br>
> ><br>
> > Why doesn't this work?<br>
> > _______________________________________________<br>
> > lldb-dev mailing list<br>
> > <a href="mailto:lldb-dev@cs.uiuc.edu">lldb-dev@cs.uiuc.edu</a><br>
> > <a href="http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev</a><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>