Any formally verified open source libraries?

Does anyone know if there are any open source software libraries that have been formally verified (whether using Coq or some other system)? I’m thinking of something perhaps as simple as a sorting library. I assume they wouldn’t be verified all the way down to the silicon, but would rely on assumptions about the correctness of compilers, the kernel, etc.
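
For concreteness, here is roughly the kind of specification I have in mind for a verified sort (just a sketch of my own; my_sort is a placeholder, not an existing library):

```coq
(* Sketch only: "verified" here would mean that the output is sorted
   and is a permutation of the input. *)
Require Import List Sorted Permutation.

(* Hypothetical sorting function; [my_sort] is just a placeholder name. *)
Parameter my_sort : list nat -> list nat.

(* The two properties one would actually prove about a real implementation. *)
Conjecture my_sort_sorted      : forall l, Sorted le (my_sort l).
Conjecture my_sort_permutation : forall l, Permutation l (my_sort l).
```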

Does a repository or directory of such software exist?

1 Like

“Open source library” is a very general concept. Most Coq projects are open source, and most are libraries, as opposed to plugins. If “library” is interpreted as system software, then many of the projects described in this thread may be of interest.

For the narrower interpretation of a standard utility library, see hs-to-coq, which has been used to verify many Haskell library functions. Outside of Coq and Haskell, the CakeML project has verified many standard utility functions (down to the processor ISA).
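
To give a flavor of what such verified utility functions involve, here is a small standalone Coq example of my own (not actual hs-to-coq or CakeML output, just the kind of lemma such projects prove about standard list functions):

```coq
Require Import List.
Import ListNotations.

(* A typical "standard utility" theorem: reversing a concatenation. *)
Lemma rev_app_example : forall (A : Type) (l1 l2 : list A),
  rev (l1 ++ l2) = rev l2 ++ rev l1.
Proof.
  intros A l1 l2; induction l1 as [|x l1 IH]; simpl.
  - (* l1 = [] *) rewrite app_nil_r; reflexivity.
  - (* l1 = x :: l1 *) rewrite IH, app_assoc; reflexivity.
Qed.
```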

1 Like

OK, so the main reason I asked this question is that I’ve been thinking about ways that a person like me, who isn’t in academia, might get started learning about formal verification. I’ve begun learning Coq using Software Foundations, along with Coq'Art for some deeper background material. However, I generally prefer to learn by doing, and I thought that working on the formal verification of some simple utility libraries might be a sensible path, but I would prefer to avoid duplicating work that others have already done.

At this point I’m wondering if this approach even makes sense. I’ve known about HOL, Isabelle, Coq, and a few others for a long time, and I was under the impression that Coq was currently the most widely used and best documented of these, or at least the one with the clearest and most accessible learning path.

However, in the past few days I’ve learned about Why3 and now CakeML, and I feel more confused and uncertain about which would be the best to learn.

My general belief is that we need to get to a world where software is much more reliable and secure than it currently is, and that the best path toward that goal is formal verification. But given how much software is running the world today, it seems to me that this will require a much larger number of developers to become familiar with formal methods than are coming out of a handful of PhD programs.

Are these ideas completely unrealistic?

1 Like

In my reading, you’re interpreting “library” as software used in production, rather than for research in computer science and mathematics. This is far from the context most proof assistant practitioners work in, and most research funding bodies (which fund the development of most verification tools, including Coq) do not fund development of production software to any significant degree, because they consider it outside their scope. Meanwhile, outside of some commercial research labs and startups, industry does not apply proof assistants or develop verified software.

Verification of production-grade software is extremely demanding, and typical production users (both in industry and among hobbyists) are in my experience unwilling to make even small concessions on performance and features to gain higher trustworthiness. Industrial applications typically arise when verified software has both superior performance and near feature parity with its non-verified counterparts in some domain, e.g., seL4 and CompCert for embedded systems and Fiat Crypto for certain cryptographic applications.

If contributing to non-production libraries is enough, I would advise taking a look at Coq-community, which hosts many projects that welcome contributions and also lists “abandoned” projects that volunteers are encouraged to take over, maintain, and (possibly) expand.

2 Likes

There are some projects aimed at creating general-purpose verified libraries. For instance, this one for OCaml: the ANR project VOCaL / the VOCaL library (although it uses a bit of Coq and a lot of Why3).

Another project you might have heard of is the RustBelt project, which aims to verify (among other things) the unsafe code in Rust’s standard library.

In any case, these are academic projects (albeit aimed at having industrial impact). As of today, it is not straightforward for an individual to contribute a new verified library to a given language ecosystem.

Hmm. I may be way off base, but it seems to me that the problem of producing verified software has some characteristics that could make it amenable to some kind of crowd-sourced effort.

  1. Proving theorems is very labor-intensive.

  2. A verified proof is a verified proof; the background of the person who created it is irrelevant.

  3. I suspect some people might find the process fun, almost like a game, but one that creates something useful.

It seems like one of the biggest obstacles to this working at the moment is the extremely small number of people with any experience in writing formally verified proofs.

You’re not off base. But let me mention that open source is already underfunded in general, so you could say we’re doing as well as could be expected.

Exactly, but many of the existing libraries accept contributions; some bug tracker issues just need somebody to contribute an implementation.

Absolutely! But not all proofs are good enough. Some criteria:

  1. Which lemmas should you prove? Some choices are better than others. And especially, which definitions? (See the sketch after this list.)
  2. Is the code easy to maintain when, e.g., a new Coq version comes along?
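
To illustrate the point about definitions with a toy example of my own: both predicates below say that a list of naturals is sorted, but proofs by induction are usually much easier against the second, structural one.

```coq
Require Import List.
Import ListNotations.

(* Index-based: perfectly correct, but awkward to reason about by induction. *)
Definition sorted_ix (l : list nat) : Prop :=
  forall i j d, i <= j -> j < length l -> nth i l d <= nth j l d.

(* Structural: follows the shape of the list, so proofs about list-building
   functions (insertion, merge, ...) tend to go through much more smoothly. *)
Inductive sorted : list nat -> Prop :=
  | sorted_nil  : sorted []
  | sorted_one  : forall x, sorted [x]
  | sorted_cons : forall x y l,
      x <= y -> sorted (y :: l) -> sorted (x :: y :: l).
```
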
2 Likes

If we are talking about program verification, I think the central problem hinted at here is determining, formalizing, and validating the corresponding program specifications. While proving that programs satisfy given specifications is amenable to crowd-sourcing, it’s doubtful to me whether specification development itself works well in this model. For example, considerable time (several person-years) was spent developing a prototype/specification of the seL4 kernel in Haskell before anything much was verified in Isabelle/HOL.

The main way to attract manpower in the open source world appears to be wide adoption by end users (see Linux). At the same time, as I hinted at above, trustworthiness is not typically a convincing value proposition to most users if features and performance are lacking. This leads to a catch-22 of sorts in attracting contributors to Coq projects. Plain program verification, without fancy languages and frameworks, is also a somewhat cold topic in the research world and in general.

With all that said, I think a lot could be done with improved marketing, e.g., to bachelor-level computer science students and functional programming enthusiasts.

Indeed, I agree that formal proofs / verified software seem a good fit for crowdsourced efforts, because the strong guarantees are supposed to bring more modularity (you do not have to understand everything you rely upon, only its specifications, unlike in, e.g., an untyped language where specifications may not exist or may be plainly wrong). In fact, back in 2014 I published a couple of blog posts about crowdsourced formal proofs. More recently, I talked to other researchers who have been nurturing similar ideas.
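
To make the modularity point concrete, here is a minimal sketch of my own (a hypothetical COUNTER interface, not an existing library): a client can be written and verified against the interface and its specification alone, without ever looking at a particular implementation.

```coq
(* Hypothetical interface: the specification is all a client needs to know. *)
Module Type COUNTER.
  Parameter t     : Type.
  Parameter zero  : t.
  Parameter incr  : t -> t.
  Parameter value : t -> nat.

  Axiom value_zero : value zero = 0.
  Axiom value_incr : forall c, value (incr c) = S (value c).
End COUNTER.

(* A client verified against the interface only; any correct
   implementation of COUNTER can be plugged in later. *)
Module Client (C : COUNTER).
  Lemma value_twice : C.value (C.incr (C.incr C.zero)) = 2.
  Proof.
    rewrite C.value_incr, C.value_incr, C.value_zero. reflexivity.
  Qed.
End Client.
```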

However, as of today, the lack of a crowd is not the only obstacle. There is also a lack of support for these individual volunteers. Some questions:

  • Which technology to use? There are many technologies available to verify software. Even without leaving the Coq sphere of influence, we can find plain extraction to OCaml (or Haskell), CFML, Iris, Why3, etc.

  • How to maintain verified libraries? Once a library has been verified, if it needs to evolve (which is to be expected), how hard is it going to be to adapt or redo the proofs and update the statements? If you evolve it without redoing the proofs, you might reintroduce bugs.

  • How to combine verified libraries? Here the question of modularity arises again. While modularity is made easier in theory by the presence of formal specifications, these specifications might not combine well together (especially if each library was verified using a different technology). And it is sometimes hard to anticipate which properties you will need to prove to make it easy to build upon a library.

  • How to distribute verified libraries to end users? I can use traditional channels and just claim the library has been verified. But unless I am more specific, this doesn’t say which properties were proved, nor whether the code I make available is really the code that was verified. It is just like on npm, where some people publish compiled/minified code that does not fully correspond to the source code they make available online.

None of these questions are impossible to solve, but they make the story harder for an individual contributor as of today. So I would reiterate the suggestion of @Blaisorblade to go help an existing verified library project, and add to it a call to verified library authors to clearly indicate whether contributions are welcome and what kinds of issues would be “good first issues”.

2 Likes

Are future versions of Coq likely to break existing proofs? I would think that would be a highly undesirable characteristic in a proof assistant.

Not to break the proofs, but to break the proof scripts. That’s well known among Coq users, though some practices ease maintenance (say, not relying on generated names), and nowadays (in my experience, since 8.6) there is excellent testing to prevent unwanted breakage.
And I can’t think of a language in which projects need no maintenance when the implementation is upgraded. The design of proof assistants is even less settled.
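
A toy example of the kind of fragility meant here (my own, not from any real project): the first script relies on the auto-generated hypothesis name H, which a version or setting change is free to alter, while the second names its hypotheses explicitly and is therefore easier to maintain.

```coq
Lemma le_S_fragile : forall n m, n <= m -> n <= S m.
Proof.
  intros. apply le_S. exact H.             (* depends on the generated name [H] *)
Qed.

Lemma le_S_robust : forall n m, n <= m -> n <= S m.
Proof.
  intros n m Hnm. apply le_S. exact Hnm.   (* explicit names throughout *)
Qed.
```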

1 Like

I’m not sure I understand the distinction. Is there anything besides a proof script that gets checked to establish the truth of a given theorem?

Proof scripts are sequences of tactic calls, or more precisely lists of Vernacular sentences containing tactic expressions. Excluding “no-op” tactics, each successfully executed tactic sentence builds a fragment of the tentative proof term (the proof object). When the proof term is complete (closed), it is sent to the Coq kernel for final certification.
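
As a tiny illustration (my own example), the script below is what the user writes and maintains, while the proof term it elaborates to is what the kernel ultimately checks.

```coq
Lemma example_andb_true_l : forall b : bool, andb true b = b.
Proof.
  intro b. simpl. reflexivity.   (* the proof script: three tactic sentences *)
Qed.

(* [Print example_andb_true_l.] displays the underlying proof term, which is
   roughly [fun b : bool => eq_refl]. A different script that elaborates to an
   equivalent term is just as valid a proof. *)
```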

The only way to fully guarantee that a sequence of tactic sentences always produces the same proof term across a wide range of projects is to preserve the original meaning of all built-in tactics and of the tactic language across all versions of the proof assistant, since tactics can perform sophisticated searches that depend on the proof context and can therefore be affected by even minor changes. Doing this would severely restrict the improvements that can be made to a proof assistant, so no widely used system does it.

2 Likes