Thank you very much for submitting a paper to CLIMA XIV. We are delighted to let
you know that your paper "Using Agent JPF to Build Models for Other Model Checkers" is accepted for presentation and inclusion in the Springer LNAI Proceedings. Yay! That is my fifth paper acceptance this year, which, considering the publication deserts that were 2012 and 2011, is a welcome relief.
Agent JPF is a model checking tool for so-called rational agent programming languages that I built on top of NASA's Java Pathfinder as part of the project before last. I've been tweaking it ever since and grappling slightly with the fact that, out of the box, JPF only checks that there are no deadlocks or exceptions in a Java program, whereas most model checkers have fairly sophisticated languages for describing properties based on temporal logics.
Hunter et al. recently published an interesting paper in which they used JPF to create a model of an agent program written in the Brahms language (a human-agent interaction modelling language used by NASA), but then shipped the model off to the Spin model checker for the actual verification. I knew about this work in advance since they worked with my (very nearly almost graduated) PhD student, and he brought one of the authors, Franco Raimondi, up to Liverpool to give a talk, after which I picked his brains about how it all worked. At some point during the conversation I had one of those "why didn't I think of this?" moments, because it is a very elegant and simple idea and I was within a hair's breadth of implementing it myself - I just hadn't realised the fact.
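To give a flavour of the mechanics - and this is a toy sketch I've made up for this post, not code from their paper or from AJPF - JPF lets you attach listeners to its search, so something along these lines can record the states and transitions it explores, ready to be printed out in Promela for Spin to chew on:

    import gov.nasa.jpf.ListenerAdapter;
    import gov.nasa.jpf.search.Search;
    import java.util.*;

    // Toy sketch only: record the state graph JPF explores so it can be
    // written out for an external model checker. Names are illustrative,
    // not AJPF's real classes.
    public class StateGraphRecorder extends ListenerAdapter {
      private final Map<Integer, Set<Integer>> edges = new HashMap<>();
      private final Deque<Integer> path = new ArrayDeque<>();

      @Override
      public void stateAdvanced(Search search) {
        int id = search.getStateId();
        if (!path.isEmpty()) {
          edges.computeIfAbsent(path.peek(), k -> new HashSet<>()).add(id);
        }
        path.push(id);
      }

      @Override
      public void stateBacktracked(Search search) {
        if (!path.isEmpty()) path.pop();
      }

      @Override
      public void searchFinished(Search search) {
        // 'edges' now holds the transition relation; a real exporter would
        // also record which propositions hold in each state and print the
        // whole thing in Promela so Spin can check temporal properties over it.
        edges.forEach((from, tos) -> System.out.println(from + " -> " + tos));
      }
    }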
So I quickly adapted AJPF to do the same. Outputting models to Spin proved to be a doddle, but I was really interested in outputting stuff to a model checker called PRISM, which can verify probabilistic properties - e.g., "if the aircraft's sensors are correct 90% of the time and the risk of an aircraft appearing on a collision course is 10%, then the chance that the aircraft collides with another is less than 1%" - because I really didn't want to implement algorithms for reasoning about probabilities over Java programs.
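The PRISM end is conceptually just as simple. Again, a made-up sketch rather than AJPF's actual output code: once you have the state graph you attach probabilities to the edges (from experimental data about sensors and so on) and print it as an explicit transition list that PRISM can import - though the exact header line PRISM expects is worth double-checking against its manual:

    import java.io.PrintWriter;
    import java.util.*;

    // Toy sketch only: dump a probability-annotated state graph as an explicit
    // transition list ("source target probability", one per line). PRISM can
    // import models in roughly this form.
    public class PrismExporter {
      public static void export(Map<Integer, Map<Integer, Double>> probEdges,
                                PrintWriter out) {
        Set<Integer> states = new HashSet<>(probEdges.keySet());
        probEdges.values().forEach(m -> states.addAll(m.keySet()));
        int transitions = probEdges.values().stream().mapToInt(Map::size).sum();
        out.println(states.size() + " " + transitions);  // header: #states #transitions
        probEdges.forEach((from, tos) ->
            tos.forEach((to, p) -> out.println(from + " " + to + " " + p)));
        out.flush();
        // Against the imported model PRISM could then check a property like
        //   P<0.01 [ F "collision" ]
        // i.e. "the probability of ever reaching a collision state is below 1%".
      }
    }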
It was all implemented and written up in something of a rush, so I'm pleased it got accepted, though slightly alarmed by the fact that all the reviewers have commented on how well written the paper is. Maybe I should write more papers in a frantic hurry!
(no subject)
Date: 2013-06-28 12:12 am (UTC)
When you say model checker for a Java program... is this in the vein of verifying correctness of a program, i.e., provably saying that it actually does X? Or is this more like, "this program models an airplane" and then "this checker verifies that the program models an airplane"? I've some experience with the idea of the former, and of course, the latter sounds somewhat close to the idea of unit testing (except at a more meta level?).
(no subject)
Date: 2013-06-28 07:55 am (UTC)
It's called model checking because, outside a few fairly modern systems like Java Pathfinder, you couldn't apply it to actual code: you had to create a model of your algorithm or protocol in the input language of the checker. Even with "program model checkers" such as Java Pathfinder, any time you have to worry about what a user will do, or about parts of the overall system that aren't written in Java, you basically have to model those parts in Java, and then you test that the code works correctly in that model (but not necessarily in the real world).
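To make that concrete, here's a toy example (not one of our actual models) of the sort of thing I mean - JPF's Verify class gives you non-deterministic choice, so a "sensor" modelled like this gets explored with both possible readings:

    import gov.nasa.jpf.vm.Verify;  // package location varies between JPF versions

    // Toy example: the "sensor" stands in for hardware we can't run under JPF,
    // and Verify.getBoolean() makes JPF explore both possible readings.
    public class SensorModel {
      public boolean intruderDetected() {
        return Verify.getBoolean();  // JPF branches here: one path sees an intruder, one doesn't
      }
    }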
One of the things we are pushing with the aircraft modelling is working with the aircraft industry and certification authorities in the UK to attempt to agree on how unmanned aircraft should be certified for use in UK civilian airspace. One idea is that you restrict your verification results purely to the code part, so you prove things like "Assuming the sensor input is correct, the program will activate the right controls (even if those controls are broken and don't do what they should)". But the certification authorities (and, to be fair, the aircraft manufacturers and designers) really want the end results to be probabilistic estimates of the chance of catastrophic failure. So we want to be able to combine experimental results about the accuracy of sensors and the reliability of actuators with verification results about which actuators the program will use given which sensor inputs, in order to generate an overall estimate of the chance of failure.
(no subject)
Date: 2013-07-08 09:18 pm (UTC)
Thank you for the explanation!
That is pretty interesting, and a whole lot different from building consumer software. I spend a lot of my time thinking about reliability of systems, but usually at a higher level -- I assume that systems will fail, so I (try to) design architectures around those failures. What if I lose X or Y, what happens?
It sounds like you have to go a lot lower level and verify everything. (An endless task?) I suppose some of it gets rolled up -- i.e., you don't have to worry about, say, motor burnout specifically, but more just "there is a 0.01% cumulative chance of failure for every 1 hour of usage" and that includes physical wear and tear, unexpected failures, etc?
(no subject)
Date: 2013-07-11 10:04 am (UTC)
And then yes, it's a matter of working out how all the cumulative potential points of failure (with numbers obtained in various ways) combine together with the onboard software to give some overall (hopefully small) number.
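As a back-of-envelope illustration, with made-up numbers along the lines of your 0.01%-per-hour example - if you assume the hours are independent, the chance of at least one failure over a flight is one minus the chance of surviving every hour:

    public class FailureEstimate {
      // Back-of-envelope: if the per-hour failure probability is p and hours are
      // independent, the chance of at least one failure over n hours is 1-(1-p)^n.
      public static double probFailureOver(double perHourFailureProb, int hours) {
        return 1.0 - Math.pow(1.0 - perHourFailureProb, hours);
      }

      public static void main(String[] args) {
        // e.g. a 0.01% chance of failure per hour, over a ten-hour flight
        System.out.printf("%.4f%%%n", 100 * probFailureOver(0.0001, 10));  // ~0.1%
      }
    }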