100 Current Papers in Artificial Intelligence, Automated Reasoning and Agent Programming. Number 7
Sabine Glesner, Johannes Leitner, and Jan Olaf Blech. 2007. Coinductive Verification of Program Optimizations Using Similarity Relations. Electron. Notes Theor. Comput. Sci. 176, 3 (July 2007), 61-77. DOI=10.1016/j.entcs.2006.02.037 http://dx.doi.org/10.1016/j.entcs.2006.02.037
DOI: 10.1016/j.entcs.2006.02.037
Open Access?: Yes
I did my PhD on automating proof by coinduction which is largely why this paper caught my eye. My work isn't cited :( though that may be fair enough since I last did anything with it about 15 years ago, I strongly doubt the system I implemented is easy to get hold of or run, and it didn't really go any further. Still, :(
However, bruised pride aside, can I say anything useful about this paper? I'm a little daunted, since explaining proof by coinduction normally starts with "OK, so you know proof by induction", and I once tried to explain proof by induction to sophievdennis, and by the end of it she was looking at me as if all mathematicians were clearly crazy. Then, once you've explained proof by induction, you start muttering about greatest fixed points and final coalgebras, and even if the reader has followed as far as proof by induction, the introduction of category theory into the mix is likely to be a deal breaker.
So just assume that coinduction is a handy mechanism for proving that two streams of data are the same. The idea is that, as you watch the data points go past, knowing how each data point relates to the previous one (or to several of the previous ones), you want to find some kind of relationship between the two streams that will always hold. The interesting part, from the perspective of automating the proof, was coming up with ways to "find some kind of relationship".
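To make the "streams plus a relationship" idea a bit more concrete, here's a minimal Python sketch (my own illustration, nothing to do with the paper's formalism): two different definitions of the same infinite stream, and a bounded stand-in for a bisimulation check. A real coinductive proof establishes the relationship for *all* positions at once; running code can only spot-check finitely many of them.

```python
from itertools import islice

def nats():
    """Stream of naturals defined directly: 0, 1, 2, ..."""
    n = 0
    while True:
        yield n
        n += 1

def nats_via_shift():
    """Same stream defined corecursively: 0, then successor of each
    element of the stream itself."""
    yield 0
    for x in nats_via_shift():
        yield x + 1

def bisimilar_up_to(s1, s2, depth):
    """Bounded stand-in for a bisimulation check: the candidate relation
    (equal heads) holds at every step up to `depth`. Coinduction is what
    lets you conclude it holds at every depth."""
    return all(a == b for a, b in zip(islice(s1, depth), islice(s2, depth)))

print(bisimilar_up_to(nats(), nats_via_shift(), 20))  # True
```

The point of the coinductive proof principle is precisely that you never have to pick a `depth`: exhibiting a relation that is preserved by one step of each stream is enough.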
The authors of this paper introduce two ways of defining "some kind of relationship" in the theorem prover Isabelle/HOL. The first is closer to the "mathematical intuition" of how these proofs work, while the other is better suited to integration into mechanized frameworks (frameworks where a computer checks each step in the proof; these blend into automated proof systems, since mechanized theorem provers offer more and more sophisticated "steps" to the user as the system develops). They prove the two techniques equivalent and carry out some example mechanized proofs verifying compiler transformations. The outcome of the work is a set of new techniques for performing these kinds of proofs, and in particular the techniques are available in Isabelle/HOL, which is a widely used theorem prover.
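As a toy analogue of what "verifying a compiler transformation" means here (again my own hypothetical example, not the paper's Isabelle/HOL development): an "optimization" that constant-folds additions, and a check that the original and optimized programs produce the same observation trace. The paper does this properly, for genuinely infinite traces, with coinduction.

```python
# Toy setup: a program is a list of expressions whose values it emits;
# two programs are equivalent if they emit the same trace.

def fold(expr):
    """Constant-fold: ('add', a, b) with literal int operands becomes
    their sum; otherwise rebuild the node with folded children."""
    if isinstance(expr, tuple) and expr[0] == 'add':
        a, b = fold(expr[1]), fold(expr[2])
        if isinstance(a, int) and isinstance(b, int):
            return a + b
        return ('add', a, b)
    return expr

def run(program):
    """Observation trace: the value each expression evaluates to."""
    def ev(e):
        if isinstance(e, tuple) and e[0] == 'add':
            return ev(e[1]) + ev(e[2])
        return e
    return [ev(e) for e in program]

prog = [('add', 1, 2), ('add', ('add', 3, 4), 5)]
opt = [fold(e) for e in prog]
assert opt == [3, 12]          # the optimization actually rewrote the program
assert run(prog) == run(opt)   # and the observable traces still agree
```

The correctness statement "for every program, the original and optimized traces agree" is exactly the kind of stream equality the paper's coinductive similarity relations are built to discharge.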