Swarm Verification
Oct. 19th, 2008 08:09 pm
I keep telling people (well, academics interested in computer verification) about Gerard Holzmann's talk at Automated Software Engineering on Swarm Verification.
Gerard Holzmann develops a model checker called SPIN, in many ways the best-known model checker out there. Model checkers check whether a program is correct by running it on all possible inputs (sort of) - obviously this takes a long time. Anyway, Holzmann said that model checkers had been getting better and better, partly because computers had been getting faster and faster.
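To make that concrete, here is a toy sketch of what an explicit-state model checker does in spirit: walk every reachable state of a system and check a property at each one. This is just an illustration I've put together, not SPIN's actual algorithm - SPIN works on Promela models and uses far cleverer search and state-compression techniques.

```python
# A toy explicit-state model checker: breadth-first search over all
# reachable states of a system, checking an invariant at each one.
# Purely illustrative; real tools like SPIN are vastly more sophisticated.
from collections import deque

def check_invariant(initial_state, successors, invariant):
    """Explore every reachable state; return a violating state, or None."""
    visited = {initial_state}
    frontier = deque([initial_state])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                 # counterexample found
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return None                          # invariant holds on every reachable state

# Example: a counter that wraps at 16 never reaches 20.
violation = check_invariant(
    0,
    lambda s: [(s + 1) % 16],
    lambda s: s != 20,
)
print("counterexample:", violation)      # prints None
```

Even in a toy like this the visited set is the pain point: it grows with the number of reachable states, which is why the memory and multi-core issues below matter so much.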
However, in the past five to ten years actual CPU speeds have not been increasing at the same rate (certainly not at the Moore's Law rate); instead, computer manufacturers have been building multi-core machines. This means that for model-checking software to keep up it has to exploit multiple cores. The interesting bit was that Holzmann had done some tests running SPIN with several processes at once. The "obvious" idea would be to get one run of SPIN to search several different inputs, using shared memory to avoid duplicating the search. There was then lots of technical stuff about computer hardware, from which I understood that you build a multi-core machine by clustering groups of four cores around a bank of memory. Accessing the memory a core is close to is "quick", but accessing memory close to a different core cluster takes a long time.
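As I understood it, the "obvious" shared-memory approach looks roughly like the sketch below: several workers pull states off a common work queue and consult a single shared visited set so nothing gets explored twice. The names, and the use of Python threads and a lock, are mine purely for illustration - SPIN's multi-core mode is written in C and lays out that shared table far more carefully - but it shows why every worker is constantly touching the same shared memory.

```python
# A minimal sketch of the "one search, shared visited set" idea.
# Every worker hits the same visited set on every new state it finds,
# which is exactly the shared-memory traffic discussed in the post.
import threading
from queue import Queue

def shared_memory_search(initial_state, successors, invariant, workers=4):
    """Parallel search with one shared visited set; returns violating states."""
    visited = {initial_state}
    lock = threading.Lock()              # guards the shared visited set
    frontier = Queue()
    frontier.put(initial_state)
    violations = []

    def worker():
        while True:
            state = frontier.get()
            if state is None:            # sentinel: time to shut down
                frontier.task_done()
                return
            if not invariant(state):
                violations.append(state)
            for nxt in successors(state):
                with lock:               # every new state touches the shared set
                    if nxt in visited:
                        continue
                    visited.add(nxt)
                frontier.put(nxt)
            frontier.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    frontier.join()                      # wait until all discovered states are processed
    for _ in threads:
        frontier.put(None)               # release the workers
    for t in threads:
        t.join()
    return violations
```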
Holzmann had done some experiments with multiple processes and shared memory, and it seemed pretty clear that with up to four processes you got a good speed-up on the multi-core machine, but above four processes the speed-up was wiped out by the slow-down caused by accessing memory at a different core cluster. Worse, once the nearby processes had finished, the far-away ones didn't migrate closer to the memory - they stayed slow.
So instead of using shared memory, Holzmann ran lots of completely separate instances of SPIN. He set them to search randomly - so potentially there was duplication in the search - but in practice the system still covered much of the search space. In a simple trial, where the system had to find 100 numbers hidden in a large set of possibilities, the multi-process version found 98 of the numbers in an hour while a single process found only 2.
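If I've understood the approach, a minimal caricature of swarm verification looks like the sketch below: many completely independent searches, each seeded differently, each remembering only what it has seen itself, and sharing nothing. The toy state space, the seeds and the step counts are all invented for illustration - Holzmann's actual swarm drives many differently configured runs of SPIN itself.

```python
# A rough sketch of the swarm idea: independent randomised searches in
# separate processes, no shared memory, overlap accepted.
import random
from multiprocessing import Pool

# Toy state space: a big wrap-around counter; "targets" stand in for bugs.
def successors(state):
    return [(state * 3 + 1) % 1_000_003, (state + 7) % 1_000_003]

def is_target(state):
    return state % 9973 == 0

def swarm_member(seed, start=1, max_steps=200_000):
    """One independent, randomised search; shares nothing with the others."""
    rng = random.Random(seed)            # members differ only by their seed
    state, found = start, set()
    for _ in range(max_steps):
        if is_target(state):
            found.add(state)
        state = rng.choice(successors(state))   # randomised successor choice
    return found

if __name__ == "__main__":
    with Pool(8) as pool:                # one OS process per swarm member
        results = pool.map(swarm_member, range(8))
    union = set().union(*results)
    print(f"swarm of 8 found {len(union)} targets")
    print(f"a single member found {len(results[0])} targets")
```

The appealing thing is that the members never talk to each other, so the cross-cluster memory traffic that killed the shared-memory version simply doesn't exist here.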
(no subject)
Date: 2008-10-20 11:07 am (UTC)
Is there any way to bias them in different directions, so they don't overlap so much, or would that risk missing out some areas altogether?
I speak from complete ignorance here, of course :-D
(no subject)
Date: 2008-10-20 11:22 am (UTC)
I'm fairly sure they're not doing that but, now you've suggested it, it seems a fairly obvious direction to go in, in order to try and improve coverage.
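For what it's worth, here is one hypothetical way the commenter's suggestion could look on the toy state space from the swarm sketch above: each member of the swarm is biased toward a different successor, so their walks overlap less. The 90% bias and the per-seed preference are made up for illustration; as the reply says, this is not what SPIN's swarm mode was doing at the time.

```python
# A hedged sketch of "bias each member in a different direction".
# Reuses successors() and is_target() from the swarm sketch above.
import random

def biased_member(seed, start=1, max_steps=200_000):
    """An independent search that mostly follows its own preferred direction."""
    rng = random.Random(seed)
    preferred = seed % 2                 # each seed favours a different successor index
    state, found = start, set()
    for _ in range(max_steps):
        if is_target(state):
            found.add(state)
        options = successors(state)
        if rng.random() < 0.9:           # 90% of the time: follow this member's bias
            state = options[preferred % len(options)]
        else:                            # 10% of the time: pick at random
            state = rng.choice(options)
    return found
```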