Could a child robot grow up to be a mathematician and philosopher?
I spent two hours this afternoon, when I really should have been writing either the TPHOLs or ProMAS papers, listening to Aaron Sloman talk in the "Thinking about Mathematics and Science" seminar series. Aaron apologised in advance that his thoughts were not well organised, but the topic had turned out to be rather larger than he had at first thought.
Aaron was interested in the way that mathematics (or reasoning in general) is neither an empirical nor a completely trivial process, in how we learn to reason, and in how that might impact the design of intelligent systems. One of his earlier remarks was that, when studying philosophy at Oxford, he had been taught the Humean/Russellian approach to the philosophy of mathematics - essentially that mathematics is unrelated to the real world and is about the manipulation of symbol systems. He had later discovered, and agreed with, Kant's approach: that mathematics is about some other kind of non-empirical knowledge. I never studied Kant and did not even know he had written about mathematical philosophy, which shows that this attitude still survived when I was studying mathematics and philosophy, I would guess 20 years after Aaron did. I was a little dubious about some of this. I accept that when learning and studying mathematics we do acquire knowledge, but I'm not convinced this knowledge is non-trivial (in the sense that it doesn't follow logically from things already known). I think we are inclined to forget that there are distinct resource bounds on reasoning, both in humans and in computers, which means it takes time to discover the logical consequences of known facts. However, I'm also thinking I should try to find a textbook on Kant's approach to the philosophy of mathematics.
Aaron used, as examples, a number of spatial and geometrical reasoning problems. These are problems which are generally extremely simple when thought about the right way, but become tortuous if reduced to mathematical logic. His argument was that logic could not be the underlying process of mathematics. He was later picked up on this when an audience member pointed out that all computations on von Neumann machines are underpinned by logic, so logic would be the representation if such machines did spatial reasoning. I thought the obvious answer here was that although logic might be the underlying representation, you are doomed if you attempt to search for the solution at the logic level; you need the more diagrammatic representation to make the reasoning problem tractable. So while logic may form an underlying representation into which all these problems can be embedded, it's not necessarily the language in which we, or computers, can or should reason about them, at least not if we wish to reason efficiently. My former PhD supervisor, Alan Bundy, has become extremely keen, in recent years, on the idea that the significant task in approaching a mathematical problem is to find the right representation of the problem, not the actual search for a solution. Aaron instead argued that you could not treat von Neumann or Turing machines as logical since they were embedded in the real world. Alan has also done some interesting work (I think, anyway) on the way reasoning works with these representations and how this can account for some common mistakes, and some classic non-theorems.
I had not known that when he first moved into Computer Science, Aaron had worked in Computer Vision, but he made the interesting point here that he felt the computer vision people were still making the representation mistake: seeking to interpret visual signals in order to form a precise model of the world, complete with exact angles and so forth, rather than an appropriate representation for problem solving.
Niggles about logic and representation aside, it was a stimulating talk and Aaron easily held our attention for an hour and a half, followed by half an hour of questions, which is no mean feat in itself.
Re: The slides for the talk (still developing) are available online
Maybe "reducible" was the wrong word to use here. I'm a passionate believer in formal proof. I recognise the arguments about the unreliability of formal and/or computer-assisted proof, and I can see that the incomprehensibility of many formal proofs makes them of questionable use to the practice of mathematics. However, I do believe there is a value in reliability, in and of itself, and that formal computer-assisted proofs provide a better guarantee of reliability than normal mathematical practice. This is completely beside your point, but my interest in "reducibility", or perhaps "representability", stems from this direction. Human reasoning and discovery would appear to happen at the diagrammatic level in these cases; converting this to logic is, as you say, a discovery, but it is useful for establishing correctness, even if this is by non-human-like processes.
It wouldn't surprise me if non-Turing, particularly analogue, devices or add-ons turned out to be necessary for reasoning in some cases, especially since there are many problems which are solved more easily by physical interaction with the real world than by simulation (all those boundary-detection problems), but my gut feeling is that you didn't present any of those. I'd be surprised if the problems you presented were not amenable to solution by a Turing machine reasoning with the right representation.