Thanks for writing this up. The issue was already observed in our paper (p. 27).
Thanks, I’ll add a remark along those lines. I didn’t mean to imply that we were the first to discover this issue or anything like that.
The second is instance resolution efficiency. In more complex parts of our development we are now experiencing increasingly serious efficiency problems.
I read this as referring to the time it takes to find an instance (which can also be a problem), as opposed to the size of the resulting instance. The latter is in some sense worse, because the larger term increases RAM usage, Qed time, and the cost of any other pass during interactive proofs that (for whatever reason) traverses parts of the already computed term again.
That’s something I’d like to understand at some point. AFAIK they used to use a more bundled approach, which does not suffer from an exponential blowup, but moved to indices recently. I was once told that Lean exploits sharing in the terms better than Coq does. But Lean also has not been used for developments the size of what ssreflect was built for, so they may just not have hit the “bad part” of the high-degree polynomials in their complexity yet (if they have them).
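To make the bundled-vs-indexed distinction concrete, here is a minimal Lean sketch (the names `MonoidOn` and `Monoid` are illustrative, not the actual hierarchy of any library). In the indexed/unbundled style, every layer of a hierarchy takes the carrier and operations as parameters, so each fully applied instance term repeats the components of all the layers below it; in the bundled style, those components live inside one structure that is named once at each use site.

```lean
-- Indexed/unbundled style: carrier and operations are parameters.
-- A deeper class (e.g. a hypothetical `CommMonoidOn M op e`) must
-- re-mention `M`, `op`, `e` at every use, and its instance term
-- contains copies of the lower layers, so term size grows with depth.
structure MonoidOn (M : Type) (op : M → M → M) (e : M) : Prop where
  assoc   : ∀ a b c, op (op a b) c = op a (op b c)
  id_left : ∀ a, op e a = a

-- Bundled style: carrier and operations live inside the structure,
-- so a term refers to the whole package by one name and the
-- components are shared rather than re-listed.
structure Monoid where
  carrier : Type
  op      : carrier → carrier → carrier
  e       : carrier
  assoc   : ∀ a b c, op (op a b) c = op a (op b c)
  id_left : ∀ a, op e a = a
```

This is only a two-layer sketch; the blowup discussed above shows up when hierarchies are many layers deep and each unbundled layer's instance term transitively duplicates the ones beneath it.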
Your second stdlib2 wiki link seems to be broken?