There are a number of recent papers on machine learning for Coq.
E.g.
and similar work for HOL:
https://arxiv.org/pdf/1904.03241.pdf
Has anyone tried to use them while producing proofs?
Note that the tooling and datasets for the Coq learning paper linked here (CoqGym) just became public late this week, so I don't believe anyone has had the opportunity to try it yet outside of the evaluation. Another earlier paper on machine learning for Coq only looks at a restricted rewriting problem, so applications using that toolchain are likely absent because of that.
The machine learning approach that is likely most ready for practical use is TacticToe, but that's only for the HOL4 proof assistant. They report around a 65% success rate on the HOL4 standard library, while CoqGym has around a 12% success rate on its Coq dataset.
An important difficulty when integrating with these kinds of (very cool) tools is that Coq doesn't provide any interface or help suitable for them. For example, Isabelle is much better in this regard (at least as of when their developers showed me two years ago; maybe it is even better today).
A proper implementation of such an interface for interacting with external tools would require some non-trivial changes in Coq itself. Unfortunately, even much more basic changes are hard to get into the codebase, so the feasibility of such a project is open.
Right now it's not possible to use it in the official release of Coq. The primary reason is just as @ejgallego pointed out: we have to hack Coq in order to access the runtime information (environment, local context, goals, etc.), which is why we have a modified version of Coq in the CoqGym repo. I believe these kinds of external tools would be much improved if Coq had a friendlier interface for exposing its internal structures.
Disclaimer: I'm an author of CoqGym.
Maybe you can collaborate with Dan Huang and the Coq developers to define the API you need.
The coq_hammer and ML4PG teams may also be interested.
@spitters there are a few longstanding issues already open in the Coq repos, and PRs fixing them. However, I'm much afraid that such work has been rejected [in my opinion for totally bogus reasons] a few times, so we got big flamewars and zero benefit for users, who are still in big pain.
I'm much afraid that such a project would require an extent of technical leadership and control over the codebase that is just not available today.
I concur with @spitters, Coq developers would be very happy to discuss such an API proposal. The issues @ejgallego mentions were aiming at making the internal structures transparent, which is not the same as designing an API.
I disagree; what you regard as "internal structure" is an important piece of public API for some (available in all releases until 8.9), as was pointed out in the PR by me and two users. Even so, it was rejected.
This should allow CoqGym a more stable way of pulling the archives:
I'm looking forward to what it can do on the other libraries.
Here is one more paper that describes a (deep) learning approach for Coq proof automation. The code for the project, called Proverbot9001, is available on GitHub.
Note also that CoqHammer is out for Coq 8.10 (also via OPAM), and the continuously updated Coq OPAM JSON file should have all the data needed to install/process a project from the Coq OPAM repository.
In other words, I think the stage is nearly set for a comparison of automation techniques for Coq 8.10.
@ejgallego @maximedenes I'd be interested in contributing to the development of such an API for machine learning applications.
Why is building a Python API on top of SerAPI not good enough? Or are there additional hurdles? @ejgallego
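To make the SerAPI suggestion concrete, here is a rough sketch of what a thin Python layer over sertop could look like. This assumes coq-serapi is installed (e.g. via opam) and that sertop speaks its S-expression protocol with `Add`, `Exec`, and `Query ... Goals` commands; the exact wire format varies between SerAPI versions, so treat the strings and the `run_sertop` helper below as illustrative, not as a definitive client.

```python
import subprocess

# Build the S-expression commands that sertop (SerAPI's toplevel) accepts.
# The exact protocol shapes below are a sketch; check your SerAPI version's
# documentation before relying on them.

def add_cmd(coq_text):
    # (Add () "...") asks sertop to parse and register the given sentences.
    # Note: real Coq text would need its double quotes escaped first.
    return f'(Add () "{coq_text}")'

def exec_cmd(sid):
    # (Exec N) executes the document up to sentence id N.
    return f'(Exec {sid})'

def goals_query(sid):
    # (Query ((sid N)) Goals) asks for the proof state at sentence id N --
    # exactly the runtime information (goals, context) discussed above.
    return f'(Query ((sid {sid})) Goals)'

def run_sertop(commands):
    # Hypothetical driver: pipe the commands into a sertop process and
    # collect its answers. Requires coq-serapi to be installed, so it is
    # not exercised here.
    proc = subprocess.Popen(
        ["sertop", "--printer=human"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    out, _ = proc.communicate("\n".join(commands) + "\n")
    return out

if __name__ == "__main__":
    # Only print the commands we would send; actually sending them needs
    # a working sertop on PATH (see run_sertop above).
    for c in [add_cmd("Lemma trivial : True. Proof."),
              exec_cmd(2),
              goals_query(2)]:
        print(c)
```

A machine learning tool could loop over such queries, feed the returned goal state to a model, and send the predicted tactic back as another `Add`/`Exec` pair, which is roughly the interaction pattern tools like CoqGym implement against their patched Coq.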
Another option could be giving OCaml access to deep learning frameworks. Would that help?
related link: https://discuss.ocaml.org/t/is-it-possible-to-use-machine-learning-and-deep-learning-frameworks-through-ocaml/4570
Does anyone know if the HOL Light interface provided by Google is any easier to use than Coq's? Does anyone know the advantages and disadvantages?
Here's a nice benchmark suite for Lean:
Some interesting progress on sauto/CoqHammer by Łukasz Czajka. It would be good to have some more uniform benchmarks.
Incidentally, a TacticToe for Coq:
Note that the linked paper is the submitted version from before review, which I'm going to update substantially, taking the reviews into account.
Hi. May I ask if there is any machine learning tool that works well at predicting and using ssreflect tactics for proofs in Coq? From what I have found, most tools mention that they do not work well with ssreflect. Are there tools we can use to train on the ssreflect/mathcomp libraries?
I believe @palmskog made some progress on combining math-comp with hammers.