<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/140799">140799</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[clang] Clang unit test refactoring appears to be creating/exposing testing gaps and/or test counting madness and/or test invocation confusion
</td>
</tr>
<tr>
<th>Labels</th>
<td>
clang
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
tahonermann
</td>
</tr>
</table>
<pre>
Grab a fresh cup of coffee, this might take a while. You might want to set aside a cup of something stronger for when you reach the end.
This all started with me investigating a downstream performance issue concerning the `ASTMatchersTests` unit test taking 30 minutes to run on Windows (serially, with `-j1`) compared to 45 seconds on Linux (serially). That is why this report focuses on the `ASTMatchers` tests. I assume the concerns presented here are applicable to other tests that have been folded into the new `AllClangUnitTests` executable, but I haven't verified that. For the record, the performance issue hasn't been found, but upstream appears to be unaffected.
Recent changes made via commit [5ffd9bd](https://github.com/llvm/llvm-project/commit/5ffd9bdb50b5753bbf668e4eab3647dfb46cd0d6) (which was merged April 1st, reverted later that day via commit [03a791f](https://github.com/llvm/llvm-project/commit/03a791f70364921ec3d3b7de8ddc6be8279c2fba), then reapplied on April 2nd via commit [e3c0565](https://github.com/llvm/llvm-project/commit/e3c0565b74b1f5122ab4dbabc3e941924e116330)) and commit [db2315a](https://github.com/llvm/llvm-project/commit/db2315afa8db1153e3b85d452cd14d5a1b957350) (which was merged April 29th) have merged most Clang unit tests into a single `AllClangUnitTests` executable with the intent to reduce the number of executables produced by the build. Ok, good, mostly, though I lost a lot of time debugging a stale `ASTMatchersTests` executable left in my work space before I discovered these changes. Related PRs include https://github.com/llvm/llvm-project/pull/133545 and https://github.com/llvm/llvm-project/pull/134196.
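As an aside, for anyone else bitten by a stale per-suite executable, something like the following should flush out leftovers in a build tree (a rough sketch; the exact paths depend on the build configuration):
```
$ find ./tools/clang/unittests -type f -name 'ASTMatchersTests*'
```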
I'm now finding it challenging to convince myself that all of the tests that were previously being run via the `ASTMatchersTests` executable are still being run via the `AllClangUnitTests` executable.
My Linux and Windows build environments aren't configured the same. This is at least partially why the test counts shown below don't correlate well between them. So, ignore those; the interesting data concerns the test count comparisons for the same platform. For reference, my builds are configured as follows:
- Linux:
- Built with GCC 11.2.1.
- Built with the following commands:
- `cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=On -DLLVM_ENABLE_PROJECTS=clang -DLLVM_ENABLE_RUNTIMES="compiler-rt;libunwind;libcxx;libcxxabi;openmp" -DLLVM_BUILD_TESTS=ON -DLLVM_LIT_ARGS='-sv -o lit-results.json'`
- `make -j 80 && make install`
- `make -j 80 && make check-clang`
- Windows:
- Built within an MSVS 2022 x64 developer command prompt.
- Built with the following commands:
- `cmake -G "Visual Studio 16 2019" -A x64 -Thost=x64 -DLLVM_ENABLE_ASSERTIONS=On -DLLVM_ENABLE_PROJECTS=clang -DLLVM_BUILD_TESTS=ON -DLLVM_LIT_ARGS='-sv -o lit-results.json'`
- `cmake --build . --config Release --target install`
- `cmake --build . --config Release --target check-clang`
Running the Clang unit tests via a command like the following results in `lit` reporting that all tests passed. In each case, the current working directory is the build directory.
- Linux:
```
$ ./bin/llvm-lit -vv -j 1 tools/clang/test/Unit
-- Testing: 372 tests, 96 workers --
PASS: Clang-Unit :: ./AllClangUnitTests/36/96 (1 of 372)
...
PASS: Clang-Unit :: ./AllClangUnitTests/94/96 (97 of 372)
PASS: Clang-Unit :: Format/./FormatTests/75/151 (98 of 372)
...
PASS: Clang-Unit :: Basic/./BasicTests/22/95 (372 of 372)
Testing Time: 94.48s
Total Discovered Tests: 25808
Skipped: 4 (0.02%)
Passed : 25804 (99.98%)
```
- Windows:
```
$ python ./Release/bin/llvm-lit.py -vv -j 1 tools/clang/test/Unit
-- Testing: 156 tests, 24 workers --
PASS: Clang-Unit :: Release/AllClangUnitTests.exe/45/48 (1 of 156)
...
PASS: Clang-Unit :: Release/AllClangUnitTests.exe/24/48 (48 of 156)
PASS: Clang-Unit :: Format/Release/FormatTests.exe/21/38 (49 of 156)
...
PASS: Clang-Unit :: Basic/Release/BasicTests.exe/7/43 (156 of 156)
Testing Time: 215.54s
Total Discovered Tests: 25706
Skipped: 4 (0.02%)
Passed : 25702 (99.98%)
```
Personally, I find it rather confusing that `lit` says it is running 372 (156) tests and then reports that 25804 (25702) passed. I acknowledge that "Discovered" is presumably intended to signal that these tests are not those tests, but it still seems odd to me. But I digress.
None of the `lit` output indicates which of those "PASS" lines, if any, corresponds to the tests formerly run by `ASTMatchersTests`. Fortunately, my build configuration passes `-sv -o lit-results.json` to `lit` and I can search.
- Linux:
```
$ grep ASTMatchersTests ./tools/clang/test/lit-results.json | wc -l
12558
```
- Windows:
```
$ grep ASTMatchersTests ./tools/clang/test/lit-results.json | wc -l
12558
```
Ok, so lots of different numbers above, but the counts obtained by searching the `lit` results for "ASTMatchersTests" match across platforms! They are also pretty close to the numbers I had noted when running the stale `ASTMatchersTests` executables (12914 discovered tests across 140 lit tests for Linux, 12911 discovered tests across 65 lit tests for Windows; those numbers are from a downstream build with other changes and might include additional tests).
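For a somewhat more targeted count than a raw `grep`, and assuming the JSON report stores each discovered sub-test as an entry in a top-level `tests` array with a `name` field (my assumption about the schema, not verified), something like this should work:
```
$ jq '[.tests[] | select(.name | contains("ASTMatchersTests"))] | length' ./tools/clang/test/lit-results.json
```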
### Issue 1: Lit says all is good but `AllClangUnitTests` aborts with a failed assertion.
Per the detail above, `lit` claims that all tests passed. However, if I run `AllClangUnitTests` by itself, the results are ... different; the program aborts with a failed assertion (on both Linux and Windows).
- Linux:
```
$ ./tools/clang/unittests/AllClangUnitTests
[==========] Running 24477 tests from 539 test suites.
...
[----------] 5 tests from ASTMatchersTest
[ RUN ] ASTMatchersTest.NamesMember_CXXDependentScopeMemberExpr
[ OK ] ASTMatchersTest.NamesMember_CXXDependentScopeMemberExpr (10 ms)
[ RUN ] ASTMatchersTest.ArgumentCountIs_CXXUnresolvedConstructExpr
[ OK ] ASTMatchersTest.ArgumentCountIs_CXXUnresolvedConstructExpr (2 ms)
[ RUN ] ASTMatchersTest.HasArgument_CXXUnresolvedConstructExpr
[ OK ] ASTMatchersTest.HasArgument_CXXUnresolvedConstructExpr (2 ms)
[ RUN ] ASTMatchersTest.DecompositionDecl
[ OK ] ASTMatchersTest.DecompositionDecl (6 ms)
[ RUN ] ASTMatchersTest.Finder_DynamicOnlyAcceptsSomeMatchers
[ OK ] ASTMatchersTest.Finder_DynamicOnlyAcceptsSomeMatchers (0 ms)
[----------] 5 tests from ASTMatchersTest (21 ms total)
...
[ RUN ] CodeGenTest.CodeGenFromIRMemBuffer
warning: overriding the module target triple with i386-unknown-linux-gnu
AllClangUnitTests: /.../llvm/lib/CodeGen/CodeGenTargetMachineImpl.cpp:48: void llvm::CodeGenTargetMachineImpl::initAsmInfo(): Assertion `MRI && "Unable to create reg info"' failed.
```
- Windows:
```
$ ./tools/clang/unittests/Release/AllClangUnitTests.exe
[==========] Running 24393 tests from 520 test suites.
...
[----------] 5 tests from ASTMatchersTest
[ RUN ] ASTMatchersTest.NamesMember_CXXDependentScopeMemberExpr
[ OK ] ASTMatchersTest.NamesMember_CXXDependentScopeMemberExpr (22 ms)
[ RUN ] ASTMatchersTest.ArgumentCountIs_CXXUnresolvedConstructExpr
[ OK ] ASTMatchersTest.ArgumentCountIs_CXXUnresolvedConstructExpr (4 ms)
[ RUN ] ASTMatchersTest.HasArgument_CXXUnresolvedConstructExpr
[ OK ] ASTMatchersTest.HasArgument_CXXUnresolvedConstructExpr (5 ms)
[ RUN ] ASTMatchersTest.DecompositionDecl
[ OK ] ASTMatchersTest.DecompositionDecl (13 ms)
[ RUN ] ASTMatchersTest.Finder_DynamicOnlyAcceptsSomeMatchers
[ OK ] ASTMatchersTest.Finder_DynamicOnlyAcceptsSomeMatchers (0 ms)
[----------] 5 tests from ASTMatchersTest (46 ms total)
...
[ RUN ] CodeGenTest.CodeGenFromIRMemBuffer
warning: overriding the module target triple with i386-unknown-linux-gnu
Assertion failed: MRI && "Unable to create reg info", file D:\iusers\thonerma\llvm-project\llvm\lib\CodeGen\CodeGenTargetMachineImpl.cpp, line 48
Exception Code: 0x80000003
```
There appear to be at least three issues here:
1. The assertion failure.
2. That the assertion failure doesn't cause `lit` to report a failure.
3. The output suggests that only 5 tests from `ASTMatchersTests` were run. Is that because the program aborted before completing? Or is this yet another way in which "tests" are counted differently?
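Regarding the first point, the assertion can presumably be chased in isolation with a `--gtest_filter` invocation like the following (a sketch; if the failure depends on global state established by earlier tests in the merged binary, it might not reproduce this way):
```
$ ./tools/clang/unittests/AllClangUnitTests --gtest_filter='CodeGenTest.CodeGenFromIRMemBuffer'
```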
### Issue 2: How do I run just the `ASTMatchersTests` now?
If there is a way to use `lit` to run just the `ASTMatchersTests` portions of the `Clang-Unit` set of tests, I'd love to know how to do that. Previously, a command like the following sufficed to run all the relevant tests via `lit`:
```
$ python bin/llvm-lit.py -vv tools/clang/test/Unit/ASTMatchers
```
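For what it's worth, `lit` does have a `--filter` option (also settable via the `LIT_FILTER` environment variable) that matches a regular expression against lit test names, e.g.:
```
$ ./bin/llvm-lit -vv --filter='ASTMatchers' tools/clang/test/Unit
```
but since the merged shards appear to be named numerically (e.g. `./AllClangUnitTests/36/96`), it isn't obvious that this can select just the `ASTMatchers` subset anymore.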
I am aware that the Google Test framework results in an executable that supports a lot of options for controlling how the tests are run. For example, I can do this:
```
$ ./tools/clang/unittests/AllClangUnitTests --gtest_filter='ASTMatchersTests*'
...
[==========] 12558 tests from 1 test suite ran. (30913 ms total)
[ PASSED ] 12558 tests.
```
Hey, look! That number matches what I got from the `lit` log! Yay!
But, as seen above, running tests via `lit` and via the standalone unit test executable doesn't seem to produce the same behavior. It is therefore important to be able to do both.
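As a cross-check that doesn't involve running anything, the filtered tests can also be listed and counted (assuming, as I believe is the case, that `--gtest_list_tests` indents each test name under its suite line):
```
$ ./tools/clang/unittests/AllClangUnitTests --gtest_filter='ASTMatchersTests*' --gtest_list_tests | grep -c '^  '
```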
### Issue 3: How is the typical developer expected to investigate a failed `lit` test?
Let's say that `lit` did fail one of the tests (as it seems it should have above). The `lit` output is not as helpful as it could be:
```
********************
FAIL: Clang-Unit :: ./AllClangUnitTests/16/96 (1288 of 21587)
******************** TEST 'Clang-Unit :: ./AllClangUnitTests/16/96' FAILED ********************
...
Failed Tests (26):
Clang-Unit :: ./AllClangUnitTests/ASTMatchersTests/ASTMatchersTest/IsExpandedFromMacro_MatchesObjectMacro/C11
...
```
The astute developer will eventually find that the key information they need to actually investigate/debug the problem is hidden in the output:
```
Script:
--
/.../tools/clang/unittests/./AllClangUnitTests --gtest_filter=ASTMatchersTests/ASTMatchersTest.IsExpandedFromMacro_MatchesObjectMacro/CXX11
--
```
Room for improvement:
- Better terminology to distinguish test suites, tests, grouped tests, discovered/unit tests, etc.
- What do those "16/96" numbers above mean? Are they useful? Can they be replaced with something more useful? (See the sketch after this list.)
- Better guidance to help a developer:
- Correlate a failed test message<br/>
`FAIL: Clang-Unit :: ./AllClangUnitTests/16/96 (1288 of 21587)`<br/>
to the failed test:<br/>
` Clang-Unit :: ./AllClangUnitTests/ASTMatchersTests/ASTMatchersTest/IsExpandedFromMacro_MatchesObjectMacro/C11`<br/>
to the command needed to run just that test:<br/>
`/.../tools/clang/unittests/./AllClangUnitTests --gtest_filter=ASTMatchersTests/ASTMatchersTest.IsExpandedFromMacro_MatchesObjectMacro/CXX11`<br/>
to the command needed to run just that test via `lit`:<br/>
`<I wish I knew>`
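If, as I presume, the `16/96` is a googletest shard index and shard count, then a given shard can probably be re-run directly via the standard googletest sharding environment variables (an untested sketch):
```
$ GTEST_TOTAL_SHARDS=96 GTEST_SHARD_INDEX=16 ./tools/clang/unittests/AllClangUnitTests
```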
### Summary
I know some of the above discusses issues that predate the recent refactoring, but I think the refactoring has made them worse. I spent a lot of time over the last week trying to wrap my head around what was going on in my environments.
</pre>
<img width="1" height="1" alt="" src="http://email.email.llvm.org/o/eJzkW1tz27iS_jXMS5dYJCjq8pAH-aIZ7YmTlO2cM-cpBZItEWMQYAGgZP37rQZISbbsxEmm9szuupKyTAKNRqP76wta3FqxUYjvo_wiyq_e8c7V2rx3vNYKTcOVelfoav_-N8ML4LA2aGsouxb0Gkq9XiNG7BJcLSw0YlM7cPwBgcOuFhJj-Lfu-uc7rhw4DRYdcCsqGtTTsbpBVwu1AeuMVhs0sNYGdjUq2OsODPKyBlcjoKriKFlEyeKeVuRSgnXcOKxgJ1wNDYJQW7RObLgjghwqvVPWGeQNtGjWmvZUIghrO4RSqxKNopFEPpoki7v7G-7KGo29R-tsNEmgU8KBQ-s3R2OzBBqhOoeWdmQ6BVrBv4Sq9M5CxGYWjeBS7kk0nq9okoz-TKNJErE5lLppucGK5o5zsFhqVVki8UGo7vEpgXkM9zV3ICzs6n0QtMFWGwdrXXYW_cRz5olvYtnGsAJubdegH9Xv2EJr0KIiydVoEDj9b1spSl5IJNa0q9EEGuCIhZpvEQpEBWstK6xAKKc9UYU7v7yUl5KrzRcl3EF4-Ihl54goSaPoHKw8JRWxqYMtGrEWJIuauxiW2niCBkttqqBZ-MK51dyG-T07naoG6l3bnzZvW-TGn1CB0Cm-XmPpcFCgWyxROShrrjZooeEVwlZwOp1GOIjyi3y9ruZFFeVXEZvVzrU2yhYRW0ZsuRGu7oq41E3EllJuh1-j1ug_sXQRWwY6EVv2ZIo8KfJpnhXFejKZ4Rh5kU3G02pdjCdllVQTUo2IzXa1KGvYcQsNmg1WsGiNkJBaRzs0uEWv7pI7Oh06l4rvn3GeZHw6T9e_ynlPZppkk_GcpVhmVVZMK5xVVTkpcMam85KtCx6xeX9UimyVtAgrUsvAOlPVM_4wK5N8kv8qfz2ZYjou0nWeMsaLcVXwosxwPk7nbIxpOsmyxPM3B66qEx6qgmVpzn-Vh57Mms-qIk3zDLNillfjnJVVOq5ynhbzfJrlybdPl81dTSO8ifVvGm0deHs6ApANNsfBCrWR-AajC_hDViSUwwDBBquuDHCguqZAQyB8nELYoGlEBcXejyo6IasYPj3QMW-09sZG_AWIc7XuNjWsQBLLHKR2RNGJBqHCottsAhRbx-WrIHvCssS1A6Gg2cNOmwewLS8JedbaIKygErbUW_QIWqPFwYZjuEUyiwo-35KgStlVCD98tG0nZcSWaZbl49wrzc-TGKfzSY83q4hNG1B6B2uhKhKI8OgjJSovH6cJnLeCUK7ZW5TrYN7k5EiaNZ6C8Y5AuzW4Fbqzcg8FEg1yRWRq33BmJ3ImzLdOSPnK7G9rVr-vm33vtkhSgw_0CgOotsJo1aBylhYLkF1qtRabrj8-sLxB8nHCko_jDiRy66DlxnkX2Lu9sHkodUfEbK13CgqUegeVHuga488fdui35HbkHFyNTQx3mhRVbBSpkKu1xSi7OJiFoYhBbaDijh895NNFe7ctrFbWhycD89BK7sg_BfdlcI0GVendXbMPovDbP905JxpS6p3Xq2QxCkIMfwCM4KIT0gXj_e3yEtI0ZnEan78kLgIl2gDBEleVHejQ4GiSlA0FZaPfIGLsixKPcMMfcC0k2ogxGF1d3iz-cf314svqw9XX-39_vo6yq1ukc0AYXX348M-br9cfFxcfrr8u7u6ub-9Xnz7eRdnVJ_Xs7efbT_91fXlP70qPW09f3375eL-6uabXEWMkTiHRjIyLsgspik7thKrC5_Lx8fCBFyLKLnSLqmkDu55oz-31nV_v08fh-YfV_dfF7W9hlenIbmGkQQo3Mmg76Wz8p_UKQ8HYiYiChP6EWQIRm0RsAv6JUARa8m2DyxrLh5HfepgwGgyiP4_TkxMKONzc_fMOWMIYPE7GUOEWpW7RDOdIQNy07q849n8K23EJd66rhIZ0AixJ516cC7_26L7W1kXZlf_jLzj0v_h8-r2MArLEMBoFa4KDoo4cNxt0rxzZ2-efnSIFi506JAlnLpkgkx-OTIoHfHY8_c7IpUWTRApHQBqC-EC0R_lAruXWYhXDSoHPe0pucYiEy84YcuLkFj1gCYOl02ZP2Hnw1MfH8Tm0APHQ_-v_ZmOII7YshBpcmRQORtst6XgKTmtpKebxImFLYjNiS3ILg4RHcB8gNMoWkE1Z2ApxPZ94ZtFYGI364Z8Xd3c00EtyRHSAuMsWno1zt8OW2SRiy_mEIqiUnGE2ZRTUBXJxHP804fn4QHg-PaP8Or0l5SMkBSIc_hhITnNy_Xnqac5-itsLbkXZE_efB9qMEbs5kSYpP6fd_-rPAu5Fg0RuPo7HM0svhwHacQlXxzgq0M8WwPJZMutHAdw9iLbFil7Qz5jWTeKERSzvF_3slRWGqX7EfB7PZ8chzzTuOSa-opDt3tVa-XPrLfRMQ-N2__NKmuaTo5Ky8Y8o6ZGfM42K8ZGej0kFxrODuqb55McU4PtLsPFhifHsbI03KO5xiRP1HYinZHOB-PynNjBo8HGVox73i0xpA5mXUT45W4R-PddjluZxPrbPlP0bujxNJjTq-2rsf57o8jRh39flKFl8RmO1Goo9Kx_cU2RvuC-fkJPp7AHlj_Bv-d7SOGEp7PbOhSw6SINSweAMyKP0eTW5iz74P5ia55NGH5wG8PJB6Z3EahPcUGeRhBsxdpQR-X1ugfsiUNcgeQzLm0JsOu50X1YIuVXPhkFQ2oXI-Wg3RedoDyGHsIiNBV35uhbF9Be-1lOJjUFr-2zho1Y4ZDNHYejOtURKVaLkDi2ENNmPowUjxryqMQZSKPRrizVw5YXuA3_b-iJaX5AKXFNQjkbufWJT7F_JiHzY7jrFHYZDHIL2Q7zOndAqiNj6Ut4rQcskofWP26LDW0HJFVjkpqzPvfFLyLcx2MJzPj0QvoJxz_mAaHoJuxJGsqeasjyf_TwU_88xFCWLUGOwGqR2ljSgEmufULm-VmGBF3p7KCYeixgWdOG4UFjB2ugm5Gde7j5dixg7O3zGjkOPxyb1BhoaF7EU7mvch-KotJrsxbk9lNKbgT6svIKaV2QhWIWytTkJGN9Y9vCV45TN0_GTCkdf9YF0nBzVulcidgk0IX1xwiQ_GX846Iverg_CNBhk8KRSHvTfZxqhDDyUSUmlQz1_qK_wqhJkH1wOuDDvbT1iWfgHK1-0TQlZPxBeEPZRwCusryb5g3yt4MALj3ueFw5rLqRPny0aWjU-oLAXdYWOC3nUkOOhlpKLxr4Wa_-ud7hF0-PKygPGawwVexDOolwPQfkQ3pMo4zg-auxQYmiN3hjefGcrdPxaQaFdfV5R6aX6ffR4ySwpV-nP5oUguJ-bX1BG9o1_-RUMaRAbj6fTQbtIefIsOCywnXBo47NIIcovRocfIpWfTn9mGMc5cPvlY3DONOfZsPgjb9DeICny18s__rjCFlWFyt2VusXw_PqxNSfkws-nf_wKOW-mCTT2JCboOX2J6sJ
sugaVu9SdcitLlL8og1bLLVaXmiyuK90zPl_h8O20fHDwIo-vSvN3bgf6v8Tk2-j8BINXWOqm1dbDzRWW8q0HezaRFp_82OJLoSo0X6_2ijei_KTkflGW2Dp7pxs8XLq9kaE3EfNR6pHJH7IgL90UCPAoPH4pdj_V2Utd4W-oPG_956XRzer2BpuLjtCsn7Tj_rKUgJzcjRHV4OQaXXUSoa-hOCPa4e5BZLPJqFMUlaqRJPgabVTXEzyHo2wBlPnG8Ul1XRQRW_aMHT_d-7VueFkLhaumlXHZtlG2GM-IyFaLCjwBn5G8Nie8FUq4hW1Waq0jNiNxZQtYHJF5ktzcroZany-iDrekpUHuyAdsQPjZLGLTHtvjn4-5vgfj30sQfwrUs3n2BNRZ8v8K1NnLePT3AvXx3x3U8_8kqKfZ_3VUH0_-N6D6ATkDEBLlHwHQS1gLiXBFGJlfis7SKeSXru9JivLLJxeu4U_6JYoovxwcxeHTy46CXfpiAoyHTPT6kU6LuKZ5xHPyOEv8T_ZysnofWmd8o0nfZ3K4wHS1wb5dxfoWmwD4aUwp5UnUTxLqjL9NZX2vj3tpAFQa-6aXknf2NGH1V_q-JYifksvCUn1xxXabzfH6WCu5f6prr-Sn_p7ZdCqGVT-1wLD-WWaD1XBHT5YpMZRal_DJhNsJYWGPDrgKKeWO7ylNDaWeiDE3pOSURx1yKLkP16-kRMuX80pGR_W7vwnuU7c_O-u-dQeu9O5AbeWLUQb97bNnymk4l-8bqPr7HK3sSX3rWBj1NT8MfRFD-WwVsWkFUm-9IZApQa139LnSfUPU58MNP0345i2T7dZrUYa2MmLXZ7k-PZW49c13h9uqw9aCTj7R6mPt_eVq-7cK7Wx5Ihd4TjlZrIB0Zcf9HXyv6L9pvZHoq7awNrxB3_JxcmXG1WnPgp9nuzZUQ_tOE90Gwa-1r7g6o6UkkXhp1qdFTK_JS20AHznpaCjallwFkQv7skR-OKuG0WhDL7-uhXRowmXnef1pEbFplCwCfr8pWvOls1O7TU9iNDBcxf5yKJl7V3jqKLz_-ry4u7u-Cs7whFR8fla_o1c5qfVDqIPxoQIXimO-SssdrGCj3StlNJr4b76PWBpoXnS-iY1bsIjqWKY5VMvONdQXQYamFOu4qrjUCk96Mk-U4wiRFrEhQ-g7mY7NGgXWfCu0iWHl-ktTE0BLNKRTfZcqIXnvoCrtazKvFLWyAXxEUGe3b0XJ5cldPj62vu-QaB2bU_FYAZL9Tg6Q9AFdxKYWLIHR02uDSlR-GpzU0YPUIjbj_kYh1OHpQ607WYWesl7U8-ASzivv1pf3OXkq2a47CYFW6SkU-JJV_MK_ZLFcrD782E1tenIFzGb-youl-WzaK_cvsAP313f3ELHpDzNDqR7t5PoKflEeAQGWQSHuhwNlk5CK-ujj7dyd48zzRxFbruz1Y8tVhRUFhTe8NPprGGI_FRRW-UeUbadkvX2MeR4AAbeuc3ii8DshJeAWlet8A5e_Ejvg_QP6dq49KAw2wct-3IlxRGzpGweHIKOQ2JCS1qKqUJFXcIfI5hlXd6URreu7qkZeNfpqwrcA_EVRnoP49yUbv12uf_zhJRuYfCrXW60b781E0xq9xcZXk0Oj2AU63_qLphFKS73xEUsl_F1pJ2wdAMHn7L5EPYQbG6O7drghoAfVyYXg8tjQQq_QleHAR_Av32Osj1dxB-VnT69koEGuKOBbeP-Oewqj1p2kR5dchUcFBSSt5OXQsn9s_m8IhQ9TTve66UTlu7-d9gAF_Khux5a5y0MD4AFavYNo0Fq-wSi7LEzEllF2fYzm_1ocooDq-SL9jeQJQ36BF3j5z5r4Ge8940PESeZ6DC77WJi7V3bkv-rwd7a7X9nueQz90nFG2eUKdmSPK3hQuKOXg30_CSTuuqbhZj-EyD4P8CmgNyphvZ121ncR9PmkZ6U1WJG291-YQOXA4JqXThvKvA7ftiDreuiv_YZ89PBVDv-9h502FsPFVgwrsC3RetrH7cy-b1LeGd5Cs4caeQXc6E5VIRKkhGPHLWw0jdSqb-F-0gVMcOMZlpQj7xAf4nfV-6yaZ3P-Dt-n0_F0Os3m8_xd_b4q-SSp8hKTgmfTssySdDblk3EyWad5ns3eifcsYXmSs4Ql40mWx3wyn2DKeLrGalzO82icYMOFjCmJibXZvPP7f5-Ok-l8_k7yAqX133xirNdOFuVX78x7n_UU3cZG40QKipAPJJxw0n9dKszIr553BJ6ewrNvofhihz-cJT622g5BL_3e8Nbfr0bMfwvm0HjsoZFXCu3z10JtdRmaE_o-E63edUa-_-GG9aAWhG5BMtv37L8DAAD__1uXM6k">