<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Nov 18, 2017 at 5:02 AM, Saar Raz <span dir="ltr"><<a href="mailto:saar@raz.email" target="_blank">saar@raz.email</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Great to hear that you're willing to get this going too :)<div><br><div class="gmail_quote"><span class="gmail-"><div dir="ltr">On Sat, Nov 18, 2017 at 3:41 AM Richard Smith <<a href="mailto:richard@metafoo.co.uk" target="_blank">richard@metafoo.co.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">the result of evaluating a concept with a set of arguments may validly change throughout the compilation of a program, so any notion of tracking / caching concept specializations might not be workable. This needs more analysis.</div></div></div></blockquote></span><div>Can the value of a concept really change throughout the compilation of a program? Don't two-phase lookup and the guarantee that templates cannot be further specialized after they've been instantiated ensure that this cannot happen? ([temp.spec]p5.3)</div></div></div></div></blockquote><div>Concepts are not instantiated. They are evaluated for satisfaction. 
Much of the wording to allow "global" knowledge to be used for instantiating templates relies on point-of-instantiation.<br></div><div><br>void foo(int, int);<br><br>template <typename T><br>concept C = requires (T t) { ::foo(t); };<br><br>constexpr bool a = C<int>;<br><br>void foo(int, int = 0);<br>constexpr bool b = C<int>;<br><br>static_assert(a != b);<br></div> <br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div class="gmail_quote"><span class="gmail-"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><div>* Consider how constraint normalization and subsumption checking will fit into the system. These are probably the two biggest pieces to design and implement.</div></div></div></div></blockquote></span><div>Well yes, I see a few challenges with normalization and subsumption - please give your input:</div><div>1. The paper introduces "conjunction" and "disjunction" as their own 'operators' that act on constraints. I guess the goal there was to avoid people overloading operator&& and operator||. A possible implementation we could do is take the expression as parsed, and replace BinaryOperator nodes with either the same BinaryOperator with its arguments wrapped with casts to bool, or create a new operator opcode and substituting that in. The former has the problem that those bool casts weren't really written by the user and will be confusing in diagnostics, and frankly do not conform to the paper as they would allow 'atomic constraints' to have a type other than bool that has a conversion to bool. 
The latter is probably the more correct solution, but I don't know how much work introducing a new operator would be - any input on that?</div></div></div></div></blockquote><div>My understanding is that it is viable for conjunction and disjunction to be represented by && and || for satisfaction checking; it would be context-dependent during the evaluation whether the node is within an atomic constraint (and thus not representative of a conjunction or disjunction). When checking for subsumption, the ordering ceases to matter, and likewise the contents of atomic constraints: a non-AST representation would likely be what is cached.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div class="gmail_quote"><div>2. Regarding normalization - my roadmap already incorporates caching a 'complete' version of the constraint-expression; we might as well calculate a normalized version right there and then.</div><div>3. Normalization requires that we break up ConceptSpecializationExprs into their constituents, which would circumvent our proposed caching of concept specializations if we were to then calculate the satisfaction of the normalized referencing concept. How about using the non-normalized version of the constraints to calculate satisfaction, and using the normalized version only for subsumption checking? Can you think of any non-conformance issues we would get from this approach?</div></div></div></div></blockquote><div>Normalizing only for subsumption checking sounds good.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div class="gmail_quote"><div>4. 
If we're using the normalized version for subsumption checking only, we might as well delay the normalization until we need to calculate subsumption.</div></div></div></div></blockquote><div>Yes.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div class="gmail_quote"><div>5. Should we or should we not cache the result of subsumptions? I tend to think that we should - what do you think?</div></div></div></div></blockquote><div>I think we should on the basis of candidate pairs. Of course, some associated constraints may manifestly be such that they cannot be subsumed except by themselves, and it would make sense to keep that as the first thing to check.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div class="gmail_quote"><span class="gmail-"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><div>* Decouple the implementation of requires-expressions from everything else. I would be inclined to implement them first rather than last, but that's up to whoever actually does the implementation work.</div></div></div></div></blockquote></span><div>They are decoupled except for their required support of partial-concept-ids, like so:</div><div>requires (T t) {</div><div> { t.foo() } -> Same<bool>;</div><div>}</div><div>So I'd rather implement them last, after we have concept-ids.</div></div></div></div>
</blockquote></div>That's also my inclination; just my 2 cents.<br><br></div></div>