Using a proof assistant for probability and learning theory? How to get started?

Hi,

I was wondering whether it would be feasible/useful to formalize ML, optimization, and learning theory in order to help with nuanced/tricky derivations, taking the term "proof assistant" literally. I've seen some works doing this for specific theorems/statements, but couldn't find anywhere to get started. If I wanted to, e.g., prove the convergence of SGD on convex functions, are there standard libraries/tricks to help me get started? Apparently real numbers are already formalized, but there are different constructions?
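For concreteness, here is roughly the shape of statement I have in mind. This is only a sketch: `f`, `xstar`, and `xbar` are hypothetical placeholders I would have to define (convexity, gradients, the SGD update), and a faithful SGD statement would also need expectations over the sampling noise:

```coq
(* Sketch only: f, xstar, and xbar are hypothetical placeholders.
   A real development would have to define convexity, (sub)gradients,
   the SGD update, and take expectations over the sampling noise. *)
Require Import Reals.
Open Scope R_scope.

Parameter f : R -> R.        (* convex objective *)
Parameter xstar : R.         (* a minimizer of f *)
Parameter xbar : nat -> R.   (* averaged SGD iterates *)

(* Deterministic stand-in for the usual O(1/sqrt n) rate on convex
   Lipschitz objectives; the stochastic version would bound the
   expectation of the left-hand side instead. *)
Conjecture sgd_convergence :
  exists C : R,
    0 < C /\
    (forall n : nat, (0 < n)%nat ->
       f (xbar n) - f xstar <= C / sqrt (INR n)).
```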


There is already some work in Coq to formalize various aspects of and results in machine learning:

Bagnall and Stewart, "Certifying the True Error: Machine Learning in Coq with Verified Generalization Guarantees" (AAAI 2019): http://ace.cs.ohio.edu/~gstewart/papers/aaai19-bagnall.pdf

Also for related systems: https://dl.acm.org/doi/10.5555/3305890.3305996

There are indeed several real number constructions in the standard library; see for example:
https://coq.inria.fr/distrib/V8.11.0/stdlib/Coq.Reals.ClassicalDedekindReals.html
https://coq.inria.fr/distrib/V8.11.0/stdlib/Coq.Reals.ConstructiveCauchyReals.html
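For getting started, though, you usually don't have to pick a construction at all: `Require Import Reals` gives you the standard classical real numbers, which is what most analysis-style developments use; the constructions above mainly matter if you care about constructivity. A minimal example that compiles against the stock standard library:

```coq
(* Minimal use of the standard axiomatic reals; sqrt_lt_R0 and the
   lra tactic are both part of the stock standard library. *)
Require Import Reals Lra.
Open Scope R_scope.

Lemma sqrt_two_pos : 0 < sqrt 2.
Proof. apply sqrt_lt_R0. lra. Qed.
```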

For more on probability theory, see the topic "(Finite, discrete) Probabilities in Coq?"