I spent two hours this afternoon, when I really should have been writing either the TPHOLs or ProMAS papers, listening to Aaron Sloman talk in the "Thinking about Mathematics and Science" seminar series. Aaron apologised in advance that his thoughts were not well organised, but the topic had turned out to be rather larger than he had at first thought.
Aaron was interested in the way that mathematics (or reasoning in general) was neither an empirical nor a completely trivial process, in how we learned to reason, and in how that might impact on the design of intelligent systems. One of his earlier remarks was that, when studying philosophy at Oxford, he had been taught the Humean/Russellian approach to the philosophy of mathematics - essentially that mathematics is unrelated to the real world and is about the manipulation of symbol systems. He had later discovered, and agreed with, Kant's approach that mathematics is about some other kind of non-empirical knowledge. I never studied Kant and did not even know he had written about mathematical philosophy, which shows that this attitude still survived when I was studying mathematics and philosophy, I would guess, 20 years after Aaron did. I was a little dubious about some of this. I accept that when learning and studying mathematics we do acquire knowledge, but I'm not convinced this knowledge is non-trivial (in the sense that it doesn't follow logically from things already known). I think we are inclined to forget that there are distinct resource bounds on reasoning, in both humans and computers, which means it takes time to discover the logical consequences of known facts. However, I'm also thinking I should try and find a textbook on Kant's approach to the philosophy of mathematics.
Aaron used, as examples, a number of spatial and geometrical reasoning problems. These are problems which are generally extremely simple when thought about in the right way, but become tortuous if reduced to mathematical logic. His argument was that logic could not be the underlying process of mathematics. He was later picked up on this when an audience member pointed out that all computations on von Neumann machines are underpinned by logic, so logic would be the underlying representation even if such machines did spatial reasoning. I thought the obvious answer here was that although logic might be the underlying representation, you were doomed if you attempted to search for the solution at the logic level, and you need the more diagrammatic representation to make the reasoning problem tractable. So while logic may form an underlying representation into which all these problems can be embedded, it's not necessarily the language in which we, or computers, can or should reason about them, at least not if we wish to reason efficiently. My former PhD supervisor, Alan Bundy, has become extremely keen in recent years on the idea that the significant task in approaching a mathematical problem is finding the right representation of the problem, not the actual search for a solution. Aaron instead argued that you could not treat von Neumann or Turing machines as logical since they were embedded in the real world. Alan has also done some interesting work (I think, anyway) on the way reasoning works with these representations and how this can account for some common mistakes, and some classic non-theorems.
I had not known that, when he first moved into Computer Science, Aaron had worked in Computer Vision, but he made the interesting point here that he felt the computer vision people were still making the representation mistake: seeking to interpret visual signals in order to form a precise model of the world, complete with exact angles and so forth, rather than an appropriate representation for problem solving.
Niggles about logic and representation aside, it was a stimulating talk and Aaron easily held our attention for an hour and a half, followed by half an hour of questions, which is no mean feat in itself.
(no subject)
Date: 2008-01-21 09:08 pm (UTC)
My favourite book on this sort of thing was Grayling's Introduction to Philosophical Logic, but there are probably newer books around, and I can't remember if Kant was mentioned much in it anyway!
The slides for the talk (still developing) are available online
Date: 2008-01-26 05:28 pm (UTC)
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk56
Comments, criticisms, and suggestions are welcome.
I have included some references to papers explaining some of the ideas of Kant and Frege concerning the nature of geometrical knowledge.
I thought I had argued in the talk that for some purposes logic was not the best form of representation or medium for reasoning with. But I probably was not clear enough in my presentation.
It's a very old theme for me, going back to my 1971 IJCAI paper criticising Logicist AI as too narrow, available here: http://www.cs.bham.ac.uk/research/projects/cogaff/04.html#200407
There is still much work to be done, including clarifying the difference between "empirical" and "non-empirical" knowledge.
Aaron
http://www.cs.bham.ac.uk/~axs
Re: The slides for the talk (still developing) are available online
Date: 2008-02-01 03:19 pm (UTC)
No, you did argue that, and I think we agree. But I would still argue that most, if not all, of your examples were reducible to logic, even if that were not the appropriate medium for either knowledge acquisition or proof discovery, though it might be an appropriate medium for proof checking...
Re: The slides for the talk (still developing) are available online
Date: 2008-02-02 12:32 pm (UTC)
OK, I did not realise that that was your point. The notion of something being "reducible" to logic is quite problematic.
Suppose Hilbert did manage to produce a logical axiomatisation of Euclidean geometry: i.e. a set of logical axioms (plus the standard rules of inference) such that every theorem and every valid inference in Euclidean geometry is mirrored by a logical theorem and a valid proof in the formal system.
This structural correspondence between geometry and a formal system would certainly be interesting and in some cases might be useful, but it would be a *discovery* rather than a *reduction*. As I think Frege argued somewhere, establishing the correspondence would require use of geometrical knowledge.
As one of my slides points out, the arithmetisation of geometry by Descartes was one of the most important intellectual achievements of a human mind, which has made many things possible that would have been very difficult or impossible without it.
But it still remains the case that there is such a thing as using geometric capabilities to investigate problems and that is not the same thing as using logical or algebraic capabilities. However, as we agree, it is not easy to implement that collection of capabilities on current computers -- though various fragments have been demonstrated, e.g. in the PhD theses of Mateja Jamnik and Daniel Winterstein (both in Edinburgh).
I suspect we don't yet have a good requirements specification for the kind of virtual machine that is needed. When we do, it may turn out that such a VM cannot be implemented in von Neumann computers, though I don't think there is evidence yet that establishes that.
However, current computers don't have to limit the possibilities for information-processing engines to go into future robots, though there are some rigid and dogmatic people who insist that *only* what can be implemented on von Neumann or Turing-equivalent machines should be used for AI.
If it turns out, for example, that chemical computers are needed then they will be used. (Evolution produced a wide variety of chemistry-based information processing systems, including all the organisms that don't have brains.)
On re-reading your original posting ("Aaron instead argued that you could not treat von Neumann or Turing machines as logical since they were embedded in the real world"), I think it is possible that I missed the point of the question being asked. However, if you build a robot that shares some of its information-processing between internal processing and external mechanisms (e.g. diagrams on paper, etc.) then it is not clear that the whole process is one that can be run on a Turing machine, and so it may just be wrong to say it is a logic machine, even if a logic machine is a part of the total system. That was the point I was trying to make.
Most of the learning that humans do depends crucially on learning from interactions with the environment: the resulting system is not one that could have been derived by some kind of logical inference from the initial state of a new-born infant.
Humans are to some extent able to dispense with external diagrams and to use imagined ones just as well.
The question of whether some of the uses of external reasoning aids can be internalised on a VN-based computer, or whether some different kind of engine is needed (perhaps something used by human brains but not yet understood), is a separate question, mentioned above.
Aaron
http://www.cs.bham.ac.uk/~axs/
Re: The slides for the talk (still developing) are available online
Date: 2008-02-03 08:59 pm (UTC)
Maybe "reducible" was the wrong word to use here. I'm a passionate believer in formal proof. I recognise the arguments about the unreliability of formal and/or computer-assisted proof, and I can see that the incomprehensibility of many formal proofs makes them of questionable use to the practice of mathematics. However, I do believe there is value in reliability, in and of itself, and that formal computer-assisted proofs provide a better guarantee of reliability than normal mathematical practice. This is completely beside your point, but my interest in "reducibility", or perhaps "representability", stems from this direction. Human reasoning and discovery would appear to happen at the diagrammatic level in these cases; converting this to logic is, as you say, a discovery, but it is useful for guaranteeing correctness, even if this is by non-human-like processes.
It wouldn't surprise me if non-Turing, particularly analogue, devices or add-ons turned out to be necessary for reasoning in some cases, especially since there are many problems which are solved more easily by physical interaction with the real world than by simulation (all those boundary detection problems), but my gut feeling is that you didn't present any of those. I'd be surprised if the problems you presented were not amenable to solution by a Turing machine reasoning with the right representation.