The 4.2.0 release of Interval departs a bit from the previous versions in that it is no longer just about proving properties of real-valued expressions. It can now be used to investigate such properties visually, that is, it can now plot function graphs from inside Coq. The main difference with traditional computer algebra systems is that a plot is not a list of points obtained by sampling; it is a formal proof that the function passes only through filled pixels. So, no longer do you have to wonder whether a plot is actually faithful.
To do so, the library now provides a plot tactic to generate the proof term of a function graph, and a Plot command to display it using Gnuplot. Here is a simple example, with the corresponding screenshot. While the screenshot was obtained using CoqIDE, the Plot command does not depend on the actual user interface and can even be used from coqtop.
From Coq Require Import Reals.
From Interval Require Import Plot Tactic.
Open Scope R_scope.
Definition p1 := ltac:(plot (fun x => x^2 * sin (x^2)) (-4) 4).
Plot p1.
This is very cool. Russell did something similar, but at lower resolution, using the Coq notation system.
With @gares we then made a quick hack to do a hi-res version.
Russell O’Connor. A Computer Verified Theory of Compact Sets. SCSS 2008, RISC-Linz Report Series 08-08, pp. 148–162, July 2008.
@VincentSe has been working on speeding up the computation to improve the resolution.
Perhaps it is possible to separate out the plotting part, so that it can also be used in other projects.
I haven’t looked at how modular it is.
This is definitely a big step forward towards having a general purpose CAS in Coq and will likely solve many plotting problems I had in the past! I will share some examples of what can go wrong with common math tools …
It will be interesting to see what happens in the presence of rounded arithmetic, e.g. with Zfloor.
If I understand this right, there is a difference in specification strength between your work and Russell’s / Vincent’s. Essentially, Interval produces a list of boxes which are guaranteed to cover the function graph. Russell’s / Vincent’s work (afaik) proves an upper bound on the Hausdorff distance between the “intersection of the (mathematical) function’s graph with the plot area” and the “set of the centers of the set pixels”. I think it was around 2 pixels or so (the minimum would obviously be 1/sqrt(2)).
I guess one can prove that, for continuous functions, the Hausdorff distance of a covering set of boxes is bounded by the Pythagorean sum of the box width and the maximum overlap of neighboring boxes.
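Written out, the conjectured bound would read as follows (my rendering of the claim above, with $w$ the box width and $v$ the maximum overlap of neighboring boxes; this is a guess in the same spirit as the sentence above, not a proved statement):

```latex
d_H\Bigl(\mathrm{graph}(f)\,,\;\textstyle\bigcup_i B_i\Bigr) \;\le\; \sqrt{w^2 + v^2}
```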
Not that I think the Hausdorff distance is required in practice; one can see visually if the plot is “too big”. But I think it is the strongest specification one can have for a plot, so it might be worthwhile to compute it (even without a proof).
There is a small difference in the notion of correctness. I guarantee that, if the function graph passes through a pixel, then the pixel is filled, while O’Connor guarantees that the pixel or one of its neighbors is filled (if I understand correctly). It also seems that O’Connor’s work guarantees some completeness, that is, if a pixel is filled, then the function graph passes through the pixel or one of its neighbors. Completeness is missing from Interval; it is only achieved on a best-effort basis, not formally proved.
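Spelled out (my formalization, not notation from either development), with $p$ a pixel and $N(p)$ the pixel together with its neighbors:

```latex
\text{Interval (soundness):}\quad \mathrm{graph}(f)\cap p\neq\emptyset \;\Rightarrow\; \mathrm{filled}(p)\\
\text{O'Connor (soundness):}\quad \mathrm{graph}(f)\cap p\neq\emptyset \;\Rightarrow\; \exists q\in N(p),\ \mathrm{filled}(q)\\
\text{O'Connor (completeness):}\quad \mathrm{filled}(p) \;\Rightarrow\; \mathrm{graph}(f)\cap N(p)\neq\emptyset
```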
It is not modular. But the plotting part was not the hard part. It is only a handful of lines of OCaml with nothing especially clever. So, it could easily be replicated.
The hard part was to get the computations performed as fast as possible. The given example has a resolution of 512x384 and computes in under one second. (Unfortunately, the time has to be multiplied by two, because Coq performs the same computation twice: once to get the graph and a second time to formally verify it.)
This plotting function is very nice indeed. The specification of correctness of the graphs is slightly different between Coq interval and CoRN, but both are visually faithful. Both will show the singularities and pathologies of the plotted functions. For instance, if the actual function has very narrow spikes, those spikes will appear in the plots (instead of being lost because of the 64-bit precision).
A bigger difference would be that Coq interval only handles a hard-coded list of a dozen functions and their compositions (exp, sin, cos and the like if I recall), whereas CoRN plots any function, given a proof of uniform continuity. So probably a user would first try plotting with Coq interval, and if their function is not supported there, prove it uniformly continuous and plot it in CoRN.
That is a misconception. More often than not, precision will have absolutely no impact on whether spikes will appear or not. The issue is that traditional tools perform (uniform) sampling when drawing function graphs. For example, they might decide to sample 150 points and be done with it. So, if one is unlucky enough for a spike to occur between two sampled points, then it will be invisible, whatever the precision.
Let us take a very simple example: exp (-x*x) between -1000 and 1000. If you plot the function using a traditional tool, you get the following:
If you use a uniform continuity proof, it will correctly tell you that you need to sample at least 412.8k points to achieve a faithful plot at a resolution of 512x384 (that is the default resolution of Interval, which is why I am using it here). Obviously, evaluating hundreds of thousands of points with Coq is out of the question. And that is assuming that uniform continuity was proved in a perfect way. Any simpler proof will increase the lower bound on the number of sampled points and thus the plotting time.
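As a sanity check on the 412.8k figure, here is a back-of-envelope computation. It is not from the thread; it assumes a uniform-sampling bound of the form L·width/pixel_height, the Lipschitz constant L = sqrt(2/e) ≈ 0.86 of exp(-x*x) mentioned below in the discussion, and an effective vertical extent of 1.6 units (matching the plot ratio 2000/1.6 that also appears later in the thread):

```python
import math

# Lipschitz constant of exp(-x^2): the maximum of |-2x * exp(-x^2)|
# is attained at x = 1/sqrt(2) and equals sqrt(2/e).
L = math.sqrt(2 / math.e)      # ~ 0.8578, rounded to 0.86 in the thread
width = 2000.0                 # x ranges over [-1000, 1000]
rows = 384                     # vertical resolution of a 512x384 plot
y_extent = 1.6                 # assumed vertical extent (plot ratio 2000/1.6)
pixel_height = y_extent / rows

# With uniform sampling, consecutive samples must be close enough that the
# function cannot move by more than one pixel height between them.
samples = 0.86 * width / pixel_height  # using the rounded constant 0.86
print(round(L, 2), round(samples))     # → 0.86 412800
```

Under these assumptions, the bound reproduces the 412.8k figure exactly.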
Fortunately, Interval does not use sampling, so it is not bound by the above considerations. That is why it is able to instantly produce a plot:
Time Plot ltac:(plot (fun x => exp(-x*x)) (-1000) 1000 (-1) 2).
(* Finished transaction in 0.17 secs (0.17u,0.s) (successful) *)
I agree that Coq interval is easier to use when it understands the function to plot. But we need solutions for the other functions, those outside Coq interval’s syntax.
I have been thinking of implementing piecewise uniformly continuous functions in CoRN, to vary the density of sampling in different parts of the domain. Also, a gigantic modulus of continuity like in your exp(-x*x) example is maybe all there is to know about the function: it has a very narrow spike (given the scale of the plot). If we can compute where the spike is, even better. Then we might avoid the plotting altogether.
What do you mean “gigantic”? The modulus of continuity is 0.86. Even the cosine function has a larger modulus of continuity. exp (-x*x) is a very well behaved function.
But there are cases where you cannot. For example, I just talked about the cosine function. If you were to plot it between -1000 and 1000 at a resolution of 512x384, you would need to sample it at 480k points. Varying the density of sampling would not make much of a difference. Indeed, the theoretical lower bound for faithfully sampling the cosine function is about 195k points. But that assumes the availability of an omniscient oracle that tells you precisely where to evaluate your function. In the absence of such an oracle, you need more points, hence 480k.
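The 480k figure for the cosine comes out of the same uniform-sampling bound sketched for the exp(-x*x) example above (again my assumption, including the effective vertical extent of 1.6 units, not a computation from the thread), now with Lipschitz constant 1 since |cos'| = |sin| ≤ 1:

```python
# Same uniform-sampling bound as for exp(-x^2), with Lipschitz constant 1
# for cos over [-1000, 1000] and 384 rows of vertical resolution.
L = 1.0
width = 2000.0
pixel_height = 1.6 / 384       # assumed vertical extent of 1.6 units
print(round(L * width / pixel_height))  # → 480000
```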
Multiplying the modulus of continuity 0.86 by the plot ratio 2000/1.6 gives 1075, which warns you that the plot has narrow spikes. It also warns you that the plot might not tell you much, as your next cosine example shows.
By the way, many real-world functions have no closed form. They are rather defined as integrals or solutions of partial differential equations. How long does it take to plot those in Coq interval?
But in principle it should be possible to extend Coq interval with a “nice enough” class of solutions of differential equations, shouldn’t it? The code seems almost ready to handle ∂-finite functions, and this would extend the catalogue in a significant way. No idea whether efficiency would be sufficient, though.
Sure. I don’t foresee any issue on the side of Taylor models; it should be straightforward.
On the floating-point side, however, I do not know what rerooting the power series of a differential equation entails, from an algorithmic point of view. (If the convergence radius is infinite, this issue does not exist in the first place, obviously. Though it might still be worth investigating for efficiency reasons, as it is a generic algorithm for argument reduction.)
My guess is that you are using a crippled version of Gnuplot, i.e., one compiled without any graphical user interface. If you type gnuplot -p -e 'plot x' in a console, does it plot anything? I do not know about macOS; but on my Linux, if I were to install gnuplot-nox instead of gnuplot-qt or gnuplot-x11, I would get no user interface for Gnuplot.