purplecat: Hand Drawn picture of a Toy Cat (agents)
Pierre-Yves Oudeyer gave another plenary talk on how you control/direct curiosity-driven learning in robotic systems. The talk was, I think, a little bit stolen by the Acroban robot. Broadly speaking, Oudeyer observed that many existing robot platforms for AI experimentation are optimised for things like ease of changing batteries and are, in fact, quite difficult to program because their locomotor abilities are very rigid and inflexible. The Acroban robot is engineered to be a lot more "human-like" in its mechanics, essentially with a lot of the intelligence in the mechanics itself, rather than in the software.


Cute video follows:



This entry was originally posted at http://purplecat.dreamwidth.org/42142.html.
purplecat: Hand Drawn picture of a Toy Cat (agents)
I spent the last week in Taiwan at AAMAS 2011 and thought I'd do my usual thing of randomly blogging about bits of various talks that piqued my interest.

Game Theory under the Cut )

This entry was originally posted at http://purplecat.dreamwidth.org/41160.html.
purplecat: Hand Drawn picture of a Toy Cat (ai)
There were a lot of people from DLR (Deutsches Zentrum für Luft- und Raumfahrt) at iSAIRAS. I spent some time with a group who were working on the "DLR Crawler" which, apparently, has been cobbled together out of the "DLR Hand". I had high hopes of this, with visions of this disembodied hand crawling about on the surface of Mars. In reality, the DLR Crawler looks a lot less like a hand than you might expect.

They were looking at the use of stereoscopic vision to assess the roughness of any terrain the crawler had to traverse, so that it could plan a route and modify its gait accordingly.
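Just to give a flavour of the kind of computation involved (this is my own toy sketch, not the DLR pipeline; all names and thresholds are invented), roughness can be approximated as the local variation in a depth map reconstructed from the stereo pair, with the gait picked from that score:

```python
import numpy as np

def roughness(depth_map: np.ndarray, cell: int = 16) -> float:
    """Crude roughness score: mean per-cell standard deviation of depth.

    `depth_map` is assumed to be a dense depth image (in metres) already
    reconstructed from the stereo pair; flat ground gives a low score,
    rubble a high one. Illustrative stand-in only, not the DLR method."""
    h, w = depth_map.shape
    scores = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            scores.append(depth_map[i:i + cell, j:j + cell].std())
    return float(np.mean(scores))

def choose_gait(score: float) -> str:
    # Hypothetical thresholds: smoother terrain allows a faster gait.
    if score < 0.02:
        return "fast tripod gait"
    elif score < 0.08:
        return "careful wave gait"
    return "slow, statically stable gait"

bumpy = np.random.rand(64, 64)   # stand-in for a reconstructed depth map
print(choose_gait(roughness(bumpy)))
```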

Pictures under the cut )

This entry was originally posted at http://purplecat.dreamwidth.org/19820.html.

DEOS

Sep. 12th, 2010 05:14 pm
purplecat: Hand Drawn picture of a Toy Cat (ai)
One of the applications we've been looking at on the project is the problem of orbital debris and how it might be cleaned up. One suggestion, apparently, is to put up a big wall of that forensic jelly stuff they fire bullets into in CSI - then all the debris would hit it and get stuck. Of course, then the problem would be the large wall of flying jelly with bits of broken satellite stuck to it!

Anyway, the Germans are building a satellite with a grasping arm. They plan to launch this and test its ability to grab hold of objects that aren't "co-operating" (i.e. aren't also working to some predefined docking sequence). As far as I can tell the mission objectives consist entirely of going up into orbit, grabbing stuff and then letting go of it again. They haven't any plans to do anything useful with anything they grab, they just want to see if they can. This project is called DEOS (Deutsche Orbitale Servicing Mission). They have some interesting problems to solve, for instance, in space, if you reach forwards with your arm then your body is likely to move backwards!
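That reach-forward problem is just conservation of momentum: with nothing to push against, the combined centre of mass stays put, so the body drifts backwards in proportion to the mass being moved. A back-of-the-envelope sketch, with the masses invented purely for illustration:

```python
# Free-floating satellite: the centre of mass is fixed, so moving the arm
# forward shifts the satellite body backward.
arm_mass = 50.0      # kg, hypothetical arm mass
body_mass = 700.0    # kg, hypothetical body mass
arm_reach = 1.5      # m, how far the arm's centre of mass moves forward

# m_arm * d_arm + m_body * d_body = 0  (centre of mass unchanged)
body_drift = -arm_mass * arm_reach / body_mass
print(f"body drifts {body_drift:.3f} m")   # about -0.107 m, i.e. backwards
```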

I'm unreasonably amused by the pictures of DEOS for reasons I can't quite put my finger on )

My preferred solution to the orbital debris problem is to arm a satellite with a laser and give it some kind of utility function, so it autonomously decides whether something is useless and shoots it if it is. As I keep asking, what could possibly go wrong with that? Sadly the most obvious thing that could go wrong is that you'd just end up with lots of very small debris floating around, rather than big bits of debris, so it doesn't solve the problem at all.

This entry was originally posted at http://purplecat.dreamwidth.org/19422.html.

purplecat: Hand Drawn picture of a Toy Cat (ai)
The project I'm currently working on involves the programming of multiple satellites to work in coordination using multi-agent techniques. Steve Chien, from JPL, seems to be one of the key people when it comes to getting artificial intelligence technology onto satellite systems. He gave about 10 talks at iSAIRAS. I'm not sure if that is typical of the field or a result of various belt-tightening measures (he wasn't the first author on many of the papers he was presenting and actually commented at one point on the difference in style between his various sets of slides). Something similar happened at this year's AAMAS where Mehdi Dastani gave about six talks because his institution had refused to fund PhD students to travel to the conference.

Two of Chien's talks involved cooperation between a mixture of satellites and on-ground sensor systems, or robots. One was a very speculative piece of work ultimately aimed at seismic and atmospheric events (e.g. dust devils) on Mars. The practical work involved a small rover robot and a couple of mounted cameras in a constructed "Mars Yard" which could be coordinated to make observations.



Picture of the JPL Mars Yard


The other talk discussed an existing volcano-monitoring sensor web which involves ground sensors at Mount St. Helens (and other networks) that can request observations from the EO1 satellite. This is already deployed, and indeed observations were automatically triggered during the Icelandic volcano eruptions this year. The volcano web already uses multi-agent technology so seems very relevant to our work.

EDIT: One thing that we (at least the Liverpool end of the project) hadn't really clocked, but which became very obvious listening to Chien's talks, was that all these satellites have a complex schedule of observations they have to make. These present quite hard planning problems since the observations are constrained not only by the time the satellite is over the right bit of the world but also by data storage, uplink and downlink times and bandwidths, and instrument heating. We've been talking about cases where a group of satellites needs to move into some configuration in order to make an observation or, alternatively, where one satellite malfunctions (or one of its instruments malfunctions) and they have to change formation to compensate. Clearly we now need to at least think about how such reconfigurations would affect the large-scale planning process, as well as the immediate observation at hand.
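To give a concrete flavour of that kind of scheduling problem, here is a minimal sketch (my own toy formulation, not Chien's planner) of greedy observation scheduling under just two of the constraints mentioned above - visibility windows and onboard storage:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    name: str
    window: tuple       # (start, end) times when the target is visible
    duration: float     # observation time needed
    data_volume: float  # gigabits produced
    priority: int

def greedy_schedule(observations, storage_capacity):
    """Toy scheduler: take observations in priority order if they fit an
    unused part of their visibility window and the onboard storage budget.
    Real planners also model downlink slots, bandwidth and instrument
    thermal limits; this sketch ignores those."""
    schedule, time_free, storage_used = [], 0.0, 0.0
    for obs in sorted(observations, key=lambda o: -o.priority):
        start = max(obs.window[0], time_free)
        fits_window = start + obs.duration <= obs.window[1]
        fits_storage = storage_used + obs.data_volume <= storage_capacity
        if fits_window and fits_storage:
            schedule.append((start, obs.name))
            time_free = start + obs.duration
            storage_used += obs.data_volume
    return schedule

plan = greedy_schedule(
    [Observation("volcano plume", (10, 40), 5, 8, priority=3),
     Observation("dust storm",    (0, 20),  5, 6, priority=2),
     Observation("calibration",   (30, 90), 10, 4, priority=1)],
    storage_capacity=15)
print(plan)   # calibration is dropped because the storage budget runs out
```

Even in this toy version you can see why a reconfiguration ripples through the plan: change which satellite carries which instrument and every window, storage budget and downlink opportunity downstream has to be re-examined.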

This entry was originally posted at http://purplecat.dreamwidth.org/19088.html.

Hayabusa

Sep. 11th, 2010 05:40 pm
purplecat: Hand Drawn picture of a Toy Cat (Default)
People have been vaguely asking about the conference I attended in Japan. This was iSAIRAS (which stands for the International Symposium on Artificial Intelligence, Robotics and Automation in Space). The attendees were predominantly engineers and roboticists rather than computer scientists. Although many of the talks were way outside my field of expertise, I found much of it really interesting. So, on the grounds that space exploration is interesting just because, I thought I might blog about some of the talks.

Hayabusa, a round-trip sample collection satellite )

This entry was originally posted at http://purplecat.dreamwidth.org/18775.html.


BundyFest

Aug. 15th, 2008 06:24 pm
purplecat: Hand Drawn picture of a Toy Cat (rippling)
I've got this long list of things I've been meaning to blog about over the past few months. I recall making a rather drunken post from BundyFest at the time, but I thought I might try a more sober round-up of the event. In brief, Alan Bundy, my PhD supervisor, is 61, so we had a symposium to celebrate his 60th birthday and the opening of the "Informatics Forum" - the rather fancy title of the new departmental building created for Informatics in Edinburgh, after their previous department burned down in mysterious circumstances.

Alan has worked primarily in Automated Reasoning, a field which, on the whole, I would say really took off in 1965 when Alan Robinson published "A Machine-Oriented Logic Based on the Resolution Principle". Alan Bundy started working in what was then the Metamathematics Unit at the University of Edinburgh in 1971 and this became part of the Department of Artificial Intelligence in 1974. It's worth remembering how young the field was at this time. Automated Reasoning was only 10 years on from its first major result (Robinson's resolution paper mentioned above) and it was only 20 years since John McCarthy coined the term "Artificial Intelligence". So it's not surprising that Alan Bundy's early work also involved what are now considered separate subfields such as Machine Learning, Automated Software Engineering and Natural Language. At the symposium a number of his colleagues and students were invited to talk and they were spread across all these fields.

I would identify two major themes from the talks and panel sessions held at the symposium:

What happened to Strong General Artificial Intelligence )

Ontologies are the Grand Challenge )

Other interesting factlets:

Ewen MacLean wrote a jazz piece based on rippling (probably Alan Bundy's most famous automated reasoning technique) which was performed at the reception by what I gather are some of Scotland's best jazz musicians. I don't pretend to understand how the music related to rippling though.

Everything was being filmed for posterity but that meant everyone taking the microphone to ask a question was supposed to identify themselves. Much amusement was thereby had every time Alan Bundy asked a question and started it by saying "I'm Alan Bundy".

Alan's PhD supervisor Reuben Goodstein was, in turn, supervised by Wittgenstein. I've always been rather pleased by the fact that, in PhD terms, I am a descendant of Wittgenstein and, of course, via Wittgenstein of Bertrand Russell.

The most important thing Alan ever told me was that really intelligent people worry more about understanding things than looking stupid. Therefore they ask questions whenever they don't understand something. I'm not as good at this as I should be.
purplecat: Hand Drawn picture of a Toy Cat (aisb)
Luciano Floridi gave two invited talks at the AISB convention. The first was a two-handed public lecture with Aaron Sloman. Aaron's talk was broadly similar to his recent Thinking about Mathematics and Science lecture at Liverpool. The second was an invited talk for the academics at the conference but Floridi treated them as two halves of the same thing.

Floridi is primarily a philosopher. His interest, as I understood it, is in understanding philosophically what is happening at the moment in the interaction between humans and computational systems, in particular with a hope that this will allow us to avoid pitfalls down the road. He made a number of interesting points which I'm going to cover in no particular order:


  • We are on the edge of a shift in how we view ourselves; "the fourth revolution". Once we thought we were the centre of the universe but then we had to change that self-perception (the Copernican Revolution). Then we thought we were uniquely created and had to change that (the Darwinian Revolution). Then we viewed ourselves as entirely rational and explicable organisms (Freud put a stop to that one). I wasn't entirely clear exactly what change in self-perception the fourth revolution was, but I think it involved challenging our perception of ourselves as discrete physical objects in favour of one that viewed ourselves as interconnected informational objects. There was a surprisingly vehement negative response to this idea in much of the room (though that response was linked to my next point) which suggested that, at the very least, the concept does challenge people's perception of self in some way.

  • We are a long way from producing intelligent programs but we already have a lot of dumb but smart systems. For instance Neopets are very basic but nevertheless clearly fill an emotional need for a lot of people. Floridi posited an upsurge of dumb programs designed to mimic human companionship in very specific ways - some of these would be for entertainment only (like Neopets) but some would have more specific assistive functions (e.g., monitoring of the elderly). None would be anything like intelligent. A lot of discussion followed on whether people would be "fooled" by this. Further discussion suggested that people wouldn't be "fooled" - they'd be quite aware of the limitations of such companions - but they would use them and become attached to them anyway, just as they do to pets or, perhaps more relevantly, sentimental objects.

  • The Ancient Greeks had an animist view of the world in which all objects had, to some extent, a personality. With the advent of pervasive systems and RFID tags making it practical to embed limited interactivity into everyday objects we might well be cyclically re-entering a view of the world in which objects once more have personality (or at least a form of interactivity). Right now it's only cars that talk back to us (and only if we have GPS installed).

  • At the moment most of us view the online/informational world as, in some sense, separate from the real or physical world. As pervasive systems become more widespread this concept of separation will fade and we will less and less compartmentalise what we are doing as either an informational act (working at a computer) or a physical act (not working at a computer).

purplecat: Hand Drawn picture of a Toy Cat (aisb)
You all thought I'd finished, didn't you! I was just pausing. The last slew of talk posts were all from Thursday. This is a Friday talk, but I actually have less to blog about for Friday: I heard fewer interesting talks, and several of those I did hear I had heard before. Ethan Kennerly of Fine Game Design was invited to talk about, you know, actual games rather than game-theoretic ones. He was particularly interested in the connection between the game mechanics (or simulation, as he called it) and the game story. So, for instance, in chess the simulation (rules) is embedded loosely in a battle/combat story. Obviously in some games the story aspect is more important than in others. Kennerly attempted to highlight some practical problems in game design, particularly those related to making the simulation and story match up in an aesthetically pleasing fashion, which he felt might benefit from some sort of theoretical framework that would help game designers understand what they were doing. The areas he highlighted were:


  • Correlating the simulation dynamics of risk and return to the aesthetic experience of play - so when a player does something risky in game mechanical terms they should understand it as risky on the story level.

  • Developing a theoretical relationship between challenge construction, skill acquisition and the aesthetics of drama

  • How does the combinatorial game theoretic heat of a simulation state correlate to the aesthetic experience of its users?

  • How do knowledge games (e.g., ones of bluff and information acquisition) extend to model correlations of a user's aesthetic experience?

  • How to make the dynamics of the simulation and story channel in a dramatic game reveal character and advance conflict toward a conclusion



The presentation was a lot of fun as well as being interesting. Kennerly kept getting us to suggest either story choices that might match a particular simulation or a simulation that might match a particular story.

He also brought with him a card game he'd devised for his son to help him learn mental arithmetic. It was surprisingly complex to play, especially in a version where you played as partners, and had us all sitting around trying to do subtraction and multiplication in our heads.

On Friday evening four of us, including Kennerly, went out for dinner during which it became clear that three of us were roleplayers. We then went on to have one of those conversations which is very boring if you happen to be the fourth person at the table - though he asked for it somewhat by asking us to explain what roleplaying actually was, which devolved into an argument about whether one-off freeform political type games counted as roleplaying; how character and mechanics should be balanced; and the extent to which the player's skills determined the character's skills. Not to mention the aesthetics of rolling lots of dice at once in order to simulate a really big fireball.
purplecat: Hand Drawn picture of a Toy Cat (aisb)
All I'm going to note about this talk is that I find the iCat more disturbing than cute.

I sat next to the speaker, Frank Dignum, at dinner that evening and, as well as being very nice, he said some very perspicacious things about organisations of agents which I'm going to have to think about.
purplecat: Hand Drawn picture of a Toy Cat (aisb)
Maarten Schadd (with co-authors Mark Winands, Jaap van den Herik and Huib Aldewereld) gave a talk whose primary interest, from my POV, was that the bricks breaking game on Facebook is NP-Complete.

I'm going to have to explain that, aren't I.

A P-time puzzle is one which, to all intents and purposes, can be solved quickly (according to a technical definition of quick). An NP puzzle is one in which, if you have the right answer, you can check it is right quickly, but you can't necessarily find the right answer quickly. NP-complete puzzles are, in a precise sense, the hardest of the NP ones: a quick method for solving any one of them would give a quick method for all of them. No one knows if P=NP, though most people suspect not. Fields Medals will be won and a lot of research will get torn up if it turns out that P does equal NP.
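To make the asymmetry concrete, here is a toy sketch (my own one-dimensional simplification of a bricks-breaking-style game, with a made-up scoring rule - nothing from the talk): checking that a claimed sequence of moves reaches a target score takes time proportional to the number of moves, whereas the obvious way of finding a good sequence tries every possible ordering and blows up exponentially.

```python
from itertools import groupby

def runs(row):
    """Maximal runs of equal adjacent bricks as (start, length, colour)."""
    out, i = [], 0
    for colour, grp in groupby(row):
        n = len(list(grp))
        out.append((i, n, colour))
        i += n
    return out

def apply_move(row, start):
    """Remove the run of >= 2 equal bricks beginning at `start`;
    return (new_row, points) or None if the move is illegal.
    Toy scoring rule: a run of n bricks scores n * n."""
    for s, n, _ in runs(row):
        if s == start and n >= 2:
            return row[:s] + row[s + n:], n * n
    return None

def verify(row, moves, target):
    """Quick (polynomial-time) check that a claimed solution reaches `target`."""
    score = 0
    for start in moves:
        result = apply_move(row, start)
        if result is None:
            return False
        row, points = result
        score += points
    return score >= target

def best_score(row):
    """Exponential-time brute force over all possible move sequences."""
    best = 0
    for s, n, _ in runs(row):
        if n >= 2:
            new_row, points = apply_move(row, s)
            best = max(best, points + best_score(new_row))
    return best

row = list("aabbbaab")
print(best_score(row))           # exhaustive search: 25
print(verify(row, [2, 0], 20))   # quick check of one claimed solution: True
```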

I rather like bricks breaking.
purplecat: Hand Drawn picture of a Toy Cat (aisb)
This is the last experimental game theory talk I'm going to precis - but it was a fascinating session and an area I'd not come across before.

This talk was by local man, Juergen Bracht. The "trust game" is one which involves an investor and an allocator. The investor starts with 2 points and can keep them or invest them. If (s)he invests them they automatically grow to 8 points which the allocator can then either keep entirely for themselves or split between the investor and the allocator. Juergen was interested in the effects of two processes on the trust game. The first, "cheap talk", was where the allocator was allowed to tell the investor what they intended to do with the points in advance, but was not held to that utterance. The second, "observation", was where the investor had access to the allocator's previous actions.

Unsurprisingly, especially since the interaction was computer mediated, not face-to-face, cheap talk had little effect on investor or allocator actions. More surprisingly, at least to a game theorist, observation did have an effect (except in the last round - where both investor and allocator knew it was the last round). The reason I say "more surprisingly" is because in pure game theoretic terms the reasoning goes: in the last round the allocator should keep all the points since no one will ever use the outcome of the round in an observation, therefore the investor should not invest their points. This being the case, the second-to-last round is the last one where anyone will invest, so the allocator should keep all the points since no one will ever use the outcome of the round in an observation, therefore the investor should not invest their points. This being the case... and so on, so no one ever invests anything. Clearly classical game theory needs some rethinking if we expect it to realistically model human actions.
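For anyone who wants that backward-induction argument spelled out, here is a minimal sketch (my own toy model using the 2-point/8-point numbers above; the experimental set-up will have differed in detail):

```python
def predicted_payoffs(rounds: int, endowment: int = 2, grown: int = 8):
    """Backward-induction prediction vs. a fully trusting benchmark.

    Toy model: each round the investor either keeps `endowment` points or
    invests them, in which case they grow to `grown` points that the
    allocator splits however they like."""
    # Classical reasoning, from the last round backwards: with no future
    # rounds in which generosity could be observed and rewarded, the
    # allocator keeps everything, so the investor never invests.
    selfish = (rounds * endowment, 0)
    # For contrast: if the investor always invested and the allocator
    # always split the grown pot evenly, both would do better.
    trusting = (rounds * grown // 2, rounds * grown // 2)
    return {"backward induction": selfish, "invest-and-share": trusting}

print(predicted_payoffs(10))
# {'backward induction': (20, 0), 'invest-and-share': (40, 40)}
```

The backward-induction line is the classical prediction that the behaviour observed in the experiment contradicts.
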
purplecat: Hand Drawn picture of a Toy Cat (aisb)
Sobei H. Oda (together with co-authors Gen Masumoto and Hiroyasu Yoneda) had been simulating (I think) a futures market. In particular he had been testing the hypothesis that you make more money if you have better information about the future price of a commodity. He had a graph that showed the people with no information doing a little better than those with just a little information. The people with lots of information still did best. He had some maths to explain this, which it must be said I didn't follow, but I thought it was an interesting effect.
purplecat: Hand Drawn picture of a Toy Cat (aisb)
Vincent Wiegel presented joint work with Jan van den Berg investigating a criticism of a philosophical standpoint called act utilitarianism (contrasted to rule utilitarianism).

Roughly speaking, an act utilitarian evaluates each action, as it occurs, in order to decide the utility of acting, while a rule utilitarian acts according to a general rule about the utility. The thought experiment used to debunk act utilitarianism was that of an election. In a population of 100, an act utilitarian only votes if they are the 51st person to vote for their preferred candidate; in all other situations they gain more utility by going and doing something else more interesting. Wiegel and van den Berg simulated this situation computationally. Obviously they first had some issues about why an act utilitarian might conclude they get utility only by being the 51st person to vote and, of course, how they might determine that they have the deciding vote. Interestingly, when they varied their assumptions a bit so that act utilitarians only voted if they had reason to believe they were in the range of the 46th - 56th voter (or similar) - i.e., that their vote was likely to be decisive - then they did very well, frequently getting the outcome they wished for in an election while getting to do other more interesting things when the outcome was essentially a foregone conclusion anyway.
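Out of curiosity, here is a minimal sketch of that kind of simulation (my own reconstruction from the description above, with invented numbers, not Wiegel and van den Berg's model): every voter is an act utilitarian who only turns out when a noisy poll suggests their side is within a few votes of the 51-vote threshold, i.e., that their vote might be decisive.

```python
import random

def election(n_voters=100, support=0.55, margin=5, poll_noise=4, trials=2000):
    """Toy model: voters prefer candidate A with probability `support`.
    Each voter sees a noisy poll of how many people back their candidate and
    only bothers to vote if that estimate is within `margin` of the 51-vote
    majority threshold. Returns how often A wins and the average turnout."""
    wins, turnout = 0, 0
    for _ in range(trials):
        prefers_a = sum(random.random() < support for _ in range(n_voters))
        prefers_b = n_voters - prefers_a
        votes_a = votes_b = 0
        for side, supporters in (("A", prefers_a), ("B", prefers_b)):
            for _ in range(supporters):
                poll = supporters + random.randint(-poll_noise, poll_noise)
                if abs(poll - 51) <= margin:   # "my vote might be decisive"
                    if side == "A":
                        votes_a += 1
                    else:
                        votes_b += 1
        turnout += votes_a + votes_b
        # If literally nobody votes, call it for whoever has more supporters.
        if votes_a > votes_b or (votes_a == votes_b and prefers_a > prefers_b):
            wins += 1
    return wins / trials, turnout / trials

print(election())   # the majority usually gets its way despite low turnout
```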
