Dear Coq Community,
I am humbly working on proof-of-concept code that modifies the llama.cpp LLM engine with callbacks for calling native code.
The goals are that we can:
- construct proofs about the state of neurons or tokens inside a language model,
- check the output of the LLM in real time,
- provide a feedback loop to the LLM from Coq,
- generate better outputs and fine-tune the model.
This allows callbacks in script engines to be invoked while the tokens are being generated, or even earlier, inside the LLM's tensor computations, as in the sketch below.
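To make the callback idea concrete, here is a minimal sketch of a per-token hook. The `TokenCallback` type and the `generate` loop are hypothetical stand-ins for the hook points in my branch, not llama.cpp's actual API:

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical per-token callback: receives the step index and the token
// text, and returns false to stop generation early (e.g. when an external
// checker such as Coq rejects the output so far).
using TokenCallback = std::function<bool(int step, const std::string &token)>;

// Toy stand-in for the decode loop: emits tokens one at a time and gives
// the callback a chance to observe or veto each one.
void generate(const std::vector<std::string> &tokens, const TokenCallback &cb) {
    for (int i = 0; i < (int)tokens.size(); ++i) {
        if (!cb(i, tokens[i])) {
            std::cerr << "generation stopped by callback at step " << i << "\n";
            return;
        }
        std::cout << tokens[i];
    }
    std::cout << "\n";
}

int main() {
    // Example: reject in real time any token containing "Admitted."
    generate({"Lemma ", "foo ", ": ", "True.\n", "Proof. ", "trivial. ", "Qed.\n"},
             [](int, const std::string &tok) {
                 return tok.find("Admitted.") == std::string::npos;
             });
}
```

The same shape lets a Coq-backed checker veto or steer tokens as they stream out, instead of only post-processing the finished output.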
My goal is to have it validate Coq vernacular syntax. I was fighting with coq-lsp, so instead I am embedding Coq natively.
My architecture is: a driver application (main binary, web server, Ollama, or a Python application with a libllama.cpp plugin), then llama.cpp itself, and inside llama.cpp, plugins for Python, Node.js, metacall.io, and now OCaml; a loading sketch follows below.
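One plausible shape for that plugin layer is plain dlopen/dlsym loading of one shared object per scripting runtime. This is a sketch of the general technique; the library path and the `plugin_init` symbol are made up for illustration, not the branch's actual layout:

```cpp
#include <dlfcn.h>
#include <cstdio>

// Hypothetical plugin entry point: each scripting plugin (python, nodejs,
// metacall, ocaml) exports a C symbol that registers its callbacks.
typedef int (*plugin_init_fn)(void);

int main() {
    // Load a plugin shared object at runtime.
    void *handle = dlopen("./libllama_plugin_ocaml.so", RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    plugin_init_fn init = (plugin_init_fn)dlsym(handle, "plugin_init");
    if (!init) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    int rc = init(); // the plugin registers its token callbacks here
    std::printf("plugin_init returned %d\n", rc);
    dlclose(handle);
    return 0;
}
```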
The OCaml plugin now has the Coq modules loaded, and I am trying to parse the text; a sketch of the C++/OCaml boundary is below.
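Here is a minimal sketch of that boundary using OCaml's standard C embedding interface. `caml_startup`, `caml_named_value`, and `caml_callback` are real OCaml runtime calls, but the `validate_vernac` function is one the OCaml/Coq side would have to register itself; it is not a Coq API, and the actual Coq parsing entry point depends on the Coq version:

```cpp
#include <caml/mlvalues.h>
#include <caml/alloc.h>
#include <caml/callback.h>
#include <cstdio>

// Ask the embedded OCaml runtime (which has the Coq libraries linked in)
// whether a string parses as Coq vernacular. "validate_vernac" is a name
// we register ourselves on the OCaml side, not something Coq provides.
bool validate_coq(const char *text) {
    static const value *checker = nullptr;
    if (!checker) checker = caml_named_value("validate_vernac");
    if (!checker) return false; // OCaml side did not register the function
    return Bool_val(caml_callback(*checker, caml_copy_string(text)));
}

int main(int argc, char **argv) {
    caml_startup(argv); // boot the embedded OCaml runtime
    const char *candidate = "Lemma foo : True. Proof. trivial. Qed.";
    std::printf("valid vernacular? %s\n", validate_coq(candidate) ? "yes" : "no");
    return 0;
}
```

On the OCaml side, `Callback.register "validate_vernac" validate_vernac` would expose a function whose body calls into the loaded Coq parser and returns a bool.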
Screenshot:
I am in Discord voice chat (str server), or text chat, if anyone wants to go over this.
The code branch is here: https://github.com/meta-introspector/llama.cpp/pull/8
Twitter: twitter.com/introsp3ctor