Jan. 9th, 2010

purplecat: Programming the Eniac Computer (computing)
I gave a talk at a SciBar on Monday evening. For those of you who don't know, a SciBar is a meeting in a pub. The assembled throng invite a tame scientist to give a presentation for half an hour and then grill them for the next hour. This whole prospect is kind of scary, should you happen to be a scientist. B. got roped in to do one by the Knutsford crowd and, in a surprising failure of spousal responsibility, blithely said "Oh yes! My wife does Artificial Intelligence and Satellites, she'll come and talk to you."

I then, of course, made things much worse by saying, in response to the email I got sent, "Well, I could talk about artificial intelligence based programming languages for satellites, but everyone* always seems to want to know whether a machine could think". Suddenly I found I'd agreed to give a talk, to a pub full of members of the public accustomed to putting scientists on the spot, on the subject of "Could machines think" when I consider myself, at best, an educated layman on that particular topic.

Various members of my flist were witness to the ensuing panic.

Knutsford SciBar were really nice.

I had calmed down a little when I read their website and discovered that I was billed as an expert in Automated Reasoning and was giving a talk titled "Reasoning Machines". This let me split my talk into two and discuss Automated Reasoning for fifteen minutes** before biting the bullet and discussing the Turing Test, the Chinese Room argument and my own belief that a machine will, one day, pass almost any variant of the Turing Test you might choose, but that the Chinese Room*** argument, and some other attacks on the test, have convinced me that we have a very poor grasp of what we mean by words like intelligence, thought and consciousness. I'm not convinced that the Turing Test actually is a good test of these things, but I've no idea what would be a good test.

I wasn't entirely surprised that we spent the next hour and a half discussing the Turing Test, the difference between behaving as if you think or feel something and actually thinking or feeling it, and whether human thought was necessarily analogue, quantum, or non-digital in some other fashion. I didn't feel I got any questions I simply couldn't answer****, though there were a few where I wasn't quite sure what the question was, or at least which statement of mine the questioner was challenging. I think maybe, by starting the talk with a look at Automated Reasoning techniques, I gave the impression I thought they were, in some fashion, the one true approach to computational intelligence when there are, of course, statistical and other approaches that are almost certainly going to prove hugely important in producing anything like a machine intelligence.

They then took me out for a very nice meal and put me on a train just as the snow started to fall.

Conclusion: The general public are not nearly as scary as they may, at first, appear.

* By everyone I, of course, meant random roleplayers met in pubs.

** One of the committee very nicely said he thought my explanation of Turing Machines, Logic and Automated Reasoning was one of the most accessible he'd come across.

*** The Chinese Room argument, broadly speaking, points out that behaving intelligently and being intelligent may not be quite the same thing.

**** B. spent the preceding week or two pointing out that I studied Philosophy at university, did a Masters with components on the philosophy of AI, am interested in the subject and have a habit of going to talks and lectures on it when I get the chance, and that, really, I'm about as educated as a lay person can get on the subject.
