[PATCH] D39571: [clangd] DidChangeConfiguration Notification

Simon Marchi via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Thu Jan 25 12:33:50 PST 2018


simark added inline comments.


================
Comment at: clangd/ClangdLSPServer.cpp:302
 
+// FIXME: This function needs to be properly tested.
+void ClangdLSPServer::onChangeConfiguration(
----------------
ilya-biryukov wrote:
> simark wrote:
> > ilya-biryukov wrote:
> > > simark wrote:
> > > > ilya-biryukov wrote:
> > > > > Are you planning to address this FIXME before checking the code in?
> > > > Following what you said here:
> > > > 
> > > > https://reviews.llvm.org/D39571?id=124024#inline-359345
> > > > 
> > > > I have not really looked into what was wrong with the test, and what is missing in the infrastructure to make it work.  But I assumed that the situation has not changed since then.  Can you enlighten me on what the problem was, and what is missing?
> > > We usually write unittests for that kind of thing, since they allow us to plug in an in-memory filesystem, but we only test `ClangdServer` (examples are in `unittests/clangd/ClangdTests.cpp`). `ClangdLSPServer` does not allow plugging in a virtual filesystem (vfs). Even if we add vfs, it's still hard to unit-test because we'll have to match the json input/output directly.
> > > 
> > > This leaves us with the option of a lit test that runs `clangd` directly, similar to the tests in `test/clangd`.
> > > The lit test would need to create a temporary directory, create a proper `compile_commands.json` there, then send LSP commands containing the test path to clangd.
> > > One major complication is that in LSP we have to specify the size of each message, but in our case the size would change depending on the created temp path. It means we'll have to patch the test input to set up proper paths and message sizes.
> > > If we choose to go down this path, `clang-tools-extra/test/clang-tidy/vfsoverlay.cpp` does a similar setup (create temp-dir, patch up some configuration files to point into the temp directory, etc) and could be used as a starting point.
> > > 
> > > It's not impossible to write that test, it's just a bit involved. Having a test would be nice, though, to ensure we don't break this method while doing other things. Especially given that this functionality is not used anywhere in clangd.
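> > > To make that concrete, the RUN lines of such a test could look roughly like this (only a sketch: the Inputs file names and the DIRECTORY placeholder are invented for illustration, nothing like this exists in the tree yet):
> > >
> > >   # Create a temp dir, generate a compile_commands.json pointing into it,
> > >   # patch the same path into the LSP input, then run clangd over it.
> > >   # RUN: rm -rf %t.dir && mkdir -p %t.dir
> > >   # RUN: sed -e "s|DIRECTORY|%t.dir|g" %S/Inputs/compile_commands.json.in > %t.dir/compile_commands.json
> > >   # RUN: sed -e "s|DIRECTORY|%t.dir|g" %S/Inputs/did-change-configuration.test.in > %t.test
> > >   # RUN: clangd -run-synchronously < %t.test | FileCheck %t.test
> > >
> > > The second sed is where the Content-Length problem bites: the substituted path changes the byte length of each message, so the patched input has to keep the sizes correct.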
> > > We usually write unittests for that kind of thing, since they allow us to plug in an in-memory filesystem, but we only test ClangdServer (examples are in unittests/clangd/ClangdTests.cpp). ClangdLSPServer does not allow plugging in a virtual filesystem (vfs). Even if we add vfs, it's still hard to unit-test because we'll have to match the json input/output directly.
> > 
> > What do you mean by "we'll have to match the json input/output directly"?  That we'll have to match the complete JSON output textually?  Couldn't the test parse the JSON into some data structures, so that we could then assert specific things, for example that a particular field is present and contains a certain substring?
> > 
> > > This leaves us with the option of a lit test that runs clangd directly, similar to the tests in test/clangd.
> > > The lit test would need to create a temporary directory, create a proper compile_commands.json there, then send LSP commands containing the test path to clangd.
> > > One major complication is that in LSP we have to specify the size of each message, but in our case the size would change depending on the created temp path. It means we'll have to patch the test input to set up proper paths and message sizes.
> > > If we choose to go down this path, clang-tools-extra/test/clang-tidy/vfsoverlay.cpp does a similar setup (create temp-dir, patch up some configuration files to point into the temp directory, etc) and could be used as a starting point.
> > 
> > Ok, I see the complication with the Content-Length.  I am not familiar with lit yet, so I don't know what it is capable of.  But being able to craft and send arbitrary LSP messages would certainly be useful in the future for all kinds of black-box tests, so having a framework that allows us to do this would be helpful, I think.  I'm not familiar enough with the ecosystem to do this right now, but I'll keep it in mind.
> > 
> > One question about this particular test.  Would there be some race condition here?  If we do:
> > 
> > 1. Start clangd with compile_commands.json #1
> > 2. Ask for the definition of a function, expecting a result
> > 3. Change the configuration to compile_commands.json #2
> > 4. Ask for the definition of the same function, expecting a different result
> > 
> > Since clangd is multi-threaded and does work in the background, are we sure that we'll get the result we want in #4?
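> > For concreteness, I picture step 3 as a notification along these lines (the layout of the settings object is just my reading of this patch, so treat the compilationDatabasePath field as a guess, and the Content-Length has to match the byte count of the body exactly):
> > 
> >   Content-Length: 131
> > 
> >   {"jsonrpc":"2.0","method":"workspace/didChangeConfiguration","params":{"settings":{"compilationDatabasePath":"/path/to/project2"}}}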
> > 
> > > It's not impossible to write that test, it's just a bit involved. Having a test would be nice, though, to ensure we don't break this method while doing other things. Especially given that this functionality is not used anywhere in clangd.
> > 
> > I agree.  For the time being, is it fine to leave the FIXME there?  We can work on improving the test frameworks to get rid of it.
> > What do you mean by "we'll have to match the json input/output directly"? That we'll have to match the complete JSON output textually? Couldn't the test parse the JSON into some data structures, so that we could then assert specific things, for example that a particular field is present and contains a certain substring?
> 
> The interface to interact with `ClangdLSPServer` is `JSONOutput`, which at the moment only allows passing the output of requests to the stream. That means not only parsing json, but also finding the individual responses in the combined output.
> 
> > One question about this particular test. Would there be some race condition here? If we do:
> Technically clangd can do everything in parallel, but we have a flag `-run-synchronously` that makes it do all the work on the main thread, and we use that in the tests.
> 
> LLVM has lots of tests that do substring matches; there's a special tool called FileCheck that makes writing them simpler. See the test from my previous message (specifically the `# CHECK: ` lines); they should be enough to get started.
> Lit itself is a set of Python scripts that allow you, among other things, to specify directly in the test file which commands the test needs to run. Again, see the example tests (specifically, the `# RUN: clangd ...` lines).
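> A minimal test file in that style would look something like this (sketched after the existing tests in test/clangd; those files also need CRLF line endings, each Content-Length must be the exact byte count of the JSON body that follows, and the CHECK line here is only a placeholder substring, not the real expected output):
> 
>   # RUN: clangd -run-synchronously < %s | FileCheck %s
>   Content-Length: 125
> 
>   {"jsonrpc":"2.0","id":0,"method":"initialize","params":{"processId":123,"rootPath":"clangd","capabilities":{},"trace":"off"}}
>   # CHECK: "capabilities"
>   Content-Length: 44
> 
>   {"jsonrpc":"2.0","id":3,"method":"shutdown"}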
> 
> > I agree. For the time being, is it fine to leave the FIXME there? We can work on improving the test frameworks to get rid of it.
> FIXME looks fine for now, but make sure to test that this code really works for your use case.
> FIXME looks fine for now, but make sure to test that this code really works for your use case.

So far I have just tested with some small hello-world projects, but I'll take the time to test with a bigger project (gdb).


Repository:
  rCTE Clang Tools Extra

https://reviews.llvm.org/D39571




