[PATCH] D54731: [lit] Enable the use of custom user-defined lit commands
Zachary Turner via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Wed Nov 21 09:32:58 PST 2018
zturner added a comment.
In https://reviews.llvm.org/D54731#1305183, @labath wrote:
> I think that something like this would go a long way towards solving the problems with lit tests we're having in lldb.
>
> However, the part that is not clear to me is whether it is actually necessary to modify lit (shtest) to achieve this. It seems to me that an effect equivalent to the command from the motivating example could be achieved via something like
>
>   RUN: %compile --source=%p/Inputs/foo.cpp --mode=debug --opt=none --link=no --output=%t.o --clean=yes
>
> where `%compile` expands to a Python script living somewhere in the lldb repository. This script could do the same thing that the implementation of `COMPILE:` would do, except it would be done in a separate process.
>
> The only downside I see is that the extra process will incur some overhead and slow down testing, but I am not sure we care about that (or whether it would even be measurable). OTOH, the benefits are:
>
> - decreased complexity of lit
> - less surprise for developers encountering new lit commands
> - easier reproducibility of tests when debugging (just copy paste the `%compile` run-line to rebuild the executable)
I did consider this, and I'm still open to the possibility of doing things this way. I chose this route instead for two reasons:
1. We have a lot of setup that runs in lit before we ever get to this point, and a builder could reuse it: the environment, any additional lit configuration parameters specified on the command line, etc. We could of course pass these to the `compile.py` script via hidden arguments (one way is sketched at the end of this comment), so this isn't a total blocker; it was just something I thought of.
2. We could repurpose this machinery for other uses. For example, I could imagine rewriting many lldb inline tests in terms of a custom command prefix. Here's `test/functionalities/data-formatter/dump_dynamic/main.cpp`:
class Base {
public:
  Base () = default;
  virtual int func() { return 1; }
  virtual ~Base() = default;
};

class Derived : public Base {
private:
  int m_derived_data;

public:
  Derived () : Base(), m_derived_data(0x0fedbeef) {}
  virtual ~Derived() = default;
  virtual int func() { return m_derived_data; }
};

int main (int argc, char const *argv[])
{
  Base *base = new Derived();
  return 0; //% stream = lldb.SBStream()
            //% base = self.frame().FindVariable("base")
            //% base.SetPreferDynamicValue(lldb.eDynamicDontRunTarget)
            //% base.GetDescription(stream)
            //% if self.TraceOn(): print(stream.GetData())
            //% self.assertTrue(stream.GetData().startswith("(Derived *"))
}
I could imagine writing this as:
class Base {
public:
  Base () = default;
  virtual int func() { return 1; }
  virtual ~Base() = default;
};

class Derived : public Base {
private:
  int m_derived_data;

public:
  Derived () : Base(), m_derived_data(0x0fedbeef) {}
  virtual ~Derived() = default;
  virtual int func() { return m_derived_data; }
};

int main (int argc, char const *argv[])
{
  Base *base = new Derived();
  return 0;
}

//SCRIPT: stream = lldb.SBStream()
//SCRIPT: base = self.frame().FindVariable("base")
//SCRIPT: base.SetPreferDynamicValue(lldb.eDynamicDontRunTarget)
//SCRIPT: base.GetDescription(stream)
//EXPECT: stream.GetData().startswith("(Derived *"))
where the `lldb` module is loaded in-process, similar to how it is with `dotest.py`. (I do wonder whether all `lldbinline` tests could actually be converted to lit / FileCheck tests right now, today, using an lldbinit file such as:
  script stream = lldb.SBStream()
  script base = self.frame().FindVariable("base")
  script base.SetPreferDynamicValue(lldb.eDynamicDontRunTarget)
  script base.GetDescription(stream)
  script stream.GetData()
and then FileCheck'ing the output, but I haven't tried it, and I haven't investigated every single lldbinline test to see whether they would all fit into this model.)
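
To make the in-process idea concrete, here is a rough sketch of how a builder might collect `//SCRIPT:` and `//EXPECT:` lines and evaluate them against an already-imported `lldb` module. This is purely illustrative, not what this patch implements; `run_inline_commands` and the `ctx` dictionary are names I made up:

  import re

  # Illustrative only: scan a source file for //SCRIPT: and //EXPECT: lines
  # and evaluate them in-process. 'ctx' is assumed to already contain the
  # objects the snippets reference (e.g. the 'lldb' module and a test
  # fixture bound to 'self').
  INLINE_RE = re.compile(r'//(SCRIPT|EXPECT):\s*(.*)')

  def run_inline_commands(source_path, ctx):
      with open(source_path) as f:
          for line in f:
              m = INLINE_RE.search(line)
              if not m:
                  continue
              kind, code = m.group(1), m.group(2)
              if kind == 'SCRIPT':
                  exec(code, ctx)  # run the statement for its side effects
              else:
                  # An EXPECT line must evaluate to a true value.
                  assert eval(code, ctx), 'EXPECT failed: %s' % code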
In any case, the point is that being able to run Python code in-process opens up a lot of interesting possibilities, considering that's how all the dotest tests are written. Whether we need that flexibility is open for discussion, though. Like I said, I'm willing to give the external script a try if people think we should start with a more conservative approach.
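
And for comparison, the external-script alternative from the quoted comment could be wired up with lit's stock substitution mechanism, with no shtest changes. A minimal sketch of a `lit.local.cfg`, assuming a hypothetical `helpers/compile.py` and a hypothetical `lldb_build_mode` config attribute to show how lit-level configuration could be forwarded to the script:

  # Hypothetical lit.local.cfg sketch ('config' is supplied by lit itself;
  # the helper path and the lldb_build_mode attribute are made up).
  import os
  import sys

  compile_py = os.path.join(config.test_source_root, 'helpers', 'compile.py')

  # Forward a piece of lit-level configuration down to the script, so it
  # sees the same settings an in-process builder would.
  mode = getattr(config, 'lldb_build_mode', 'debug')

  config.substitutions.append(
      ('%compile',
       '"%s" "%s" --default-mode=%s' % (sys.executable, compile_py, mode)))

With that in place, the quoted `RUN: %compile ...` line works unchanged, and the command can be copy-pasted for reproduction just as @labath describes.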
https://reviews.llvm.org/D54731