Inefficiency related to "use"
Hello,
In SPARK, we try to be very modular and generate many small theories so that we can control the proof context. On customer code, we now have a situation where SPARK generates around 800 modules with lots of "use" statements (a `grep` for "use" gives a count of over 6000).
Currently, Why3 seems to be inefficient on such code. As an example, Task.split_theory accumulates several seconds on this input file, even though it mostly returns the empty list: in our case, only a single module actually contains lemmas or goals. If I check for the presence of lemmas or goals before doing the work in Task.split_theory, this time is recovered, but the code related to "use" in Typing.ml still takes a lot of time.
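For reference, the kind of guard I mean looks roughly like the sketch below. This is a simplified version written from my reading of Why3's Theory and Decl interfaces (declarations in th_decls, lemmas and goals as Dprop nodes with Plemma/Pgoal), so the exact names and the split_theory signature may not match the current API.

```ocaml
open Why3

(* Does this theory declare any lemma or goal at all?
   If not, split_theory will only return the empty list,
   so we can skip its work entirely. *)
let theory_has_goal (th : Theory.theory) : bool =
  List.exists
    (fun td ->
      match td.Theory.td_node with
      | Theory.Decl
          { Decl.d_node = Decl.Dprop ((Decl.Plemma | Decl.Pgoal), _, _); _ } ->
          true
      | _ -> false)
    th.Theory.th_decls

(* Guarded variant: only pay the cost of split_theory when the
   theory actually contains something to split (signature of
   split_theory recalled from memory). *)
let split_if_needed th =
  if theory_has_goal th then Task.split_theory th None None else []
```

This only scans the top-level declarations of the theory, so it stays cheap even with thousands of "use"d modules in the context.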
Unfortunately, I can't directly share the reproducer, as it derives from customer code, but if needed I could prepare something similar. Do you have any idea how to improve the running time?