This paper derives from the work of one of my PGR students - one who has just passed his viva (yay!). I'm not sure how much interest it carries outside the field of belief-desire-intention agent programming. In that paradigm agents select plans of action from a set generally provided by a programmer. Peter looked at how those plans could be automatically adapted if the actions stopped working properly - e.g., if some part of a robot's machinery wore down. The idea was to track the outcomes of actions against descriptions of what each action should do, and then swap actions in and out of plans based on those descriptions.
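For the non-agent-programmers, here is roughly the shape of the thing as a toy Python sketch - my own rendering for illustration, not Peter's actual mechanism; the action descriptions, the success-rate threshold and all the names are invented:

```python
class ActionDescription:
    """Hypothetical description of an action: a name, the state it is
    supposed to bring about, and a running record of observed outcomes."""
    def __init__(self, name, postcondition):
        self.name = name
        self.postcondition = postcondition   # e.g. "holding(block)"
        self.attempts = 0
        self.successes = 0

    def record(self, succeeded):
        """Track the outcome of one execution against the description."""
        self.attempts += 1
        if succeeded:
            self.successes += 1

    def reliability(self):
        return self.successes / self.attempts if self.attempts else 1.0


def adapt_plan(plan, library, threshold=0.5):
    """Swap any action whose observed outcomes have drifted from its
    description for another action in the library that advertises the
    same postcondition and still appears to work."""
    repaired = []
    for action in plan:
        if action.reliability() < threshold:
            candidates = [a for a in library
                          if a.postcondition == action.postcondition
                          and a.reliability() >= threshold]
            if candidates:
                action = max(candidates, key=lambda a: a.reliability())
        repaired.append(action)
    return repaired


grip = ActionDescription("grip", "holding(block)")
suction = ActionDescription("suction", "holding(block)")
for _ in range(4):
    grip.record(succeeded=False)          # the gripper has worn down
print([a.name for a in adapt_plan([grip], library=[grip, suction])])
# ['suction']
```

The point is only the shape: outcomes are checked against the description after each execution, and plan repair becomes a lookup over descriptions rather than replanning from scratch.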

We are pleased to inform you that your paper #671
Title: Agent Reasoning for Norm Compliance: A Semantic Approach
has been accepted for full publication and oral presentation in the proceedings of the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS2013).

We (meaning my research group) have recently become interested in "ethical autonomy". From our point of view our interest is quite narrowly prescribed. Having formalised the "rules of the air" and created autonomous programs that can be shown to obey them, we were then faced with the issue of when you would want a pilot to deliberately break the rules of the air because there is some compelling ethical reason to do so. One of the examples we look at is when another aircraft is failing to obey the rules of the air, potentially maliciously, by moving left to avoid a collision instead of moving right. If the autonomous pilot continues to move right then eventually the two planes will collide, where a human pilot would have deduced that the other aircraft was breaking the rules and would eventually have moved left instead - breaking the rules of the air, but nevertheless taking the ethical course of action.
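If you like your ethics in code, a toy version of that deduction might look like the following - entirely my own sketch, where the observation encoding and the three-observation threshold are invented and the real system reasons rather more carefully:

```python
def choose_manoeuvre(intruder_observations):
    """Head-on collision avoidance with an ethical override.

    The rules of the air say: turn right. But if repeated observations
    show the intruder turning left - i.e. breaking the rule, so that
    turning right just keeps the two aircraft mirroring each other into
    a collision - deduce the rule-breaking and turn left instead.
    """
    evidence = sum(1 for obs in intruder_observations
                   if obs == "turning_left")
    if evidence >= 3:        # confident the intruder is breaking the rules
        return "turn_left"   # break the rule of the air, avoid the collision
    return "turn_right"      # the rule-compliant default


print(choose_manoeuvre(["turning_left"] * 4))   # turn_left
```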

Since ethical autonomy is obviously part of a much wider set of concerns my boss got involved in organising a seminar on Legal, Ethical, and Social Autonomous Systems as part of a cross-disciplinary venture with the departments of Law, Psychology and Philosophy.

It was an interesting day. From my point of view the most useful part was meeting Kerstin Eder from Bristol. I knew quite a bit about her but we'd not met before. She's primarily a verification person; her talk looked at potential certification processes for autonomous systems and pointed me in the direction of Runtime Verification, which I suspect I shall have to tangle with at some point in the next few years.

There was a moment when one of the philosophers asserted that sex-bots were obviously unethical and I had to bite my tongue. I took the spur-of-the-moment decision that arguing about the ethics of what would, presumably, be glorified vibrators with a philosopher while my boss was in the room was possibly not something I wanted to get involved in.

The most interesting ethical problem raised was that of anthropomorphic or otherwise lifelike robots. EPSRC have, it transpires, a set of principles for roboticists which include: "Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users." The problem here is that there is genuine therapeutic interest in the use of robots that mimic pets to act as companions for the elderly, especially those with Alzheimer's. While this is partially to compensate for the lack of money/people to provide genuine companionship, it's not at all clear-cut that the alternative should be rejected out of hand. Alan Winfield, who raised the issue and helped write EPSRC's principles, confessed that he was genuinely conflicted about the ethics here. In the later discussion we also strayed into whether the language of beliefs, desires and intentions used to describe cognitive agent programs carries with it the risk that people will over-anthropomorphise the program.

In which we show that using AI/Agent-based approaches lets you write shorter code, which, I will confess, isn't really news in the AI/Agent community, but this has gone to a "Space" conference (albeit one on Artificial Intelligence, Robotics and Automation in Space).

Reviewers' Comments in their entirety:

This is a quite simple approach; not academically sophisticated.
Interesting: related to efficient coding of control algorithms by means of BDI..


It's always nice to know the peer review process is rigorous and robust.

This entry was originally posted at http://purplecat.dreamwidth.org/3136.html.

MP40

Jul. 6th, 2009 12:28 pm
Forty years ago Oxford decided to offer a degree in Mathematics and Philosophy. This past weekend there were celebrations.

We are delighted to inform you that your paper entitled "A Common Basis for Agent Organisation in BDI Languages",
submitted to EUMAS-2007, has been accepted for presentation at the workshop
(recall that the proceedings are only informally published).


This is actually a reheat of our LADS paper, but M. assures A. and me that EUMAS is intended for reheats and work in progress, and that the presentations are supposed merely to form a basis for the exchange of ideas. This means that A. gets to go to Tunisia in six weeks' time. M. said I could go too, but I felt two weeks before Christmas was cutting things a bit fine.

"Isn't that where they filmed Star Wars?" I said. A. and M. looked at me blankly.

Dear Author(s),

Thank you very much for submitting a paper to LADS'007. We are
delighted to let you know that your paper is accepted.


This is another "groups are agents" paper, much like this one, only this time without any actual logic in it and more of a literature survey, with explanations of why everything else people have done fits into our framework. This was the first paper we wrote on the subject, but it went so far over the page limit that we were forced to write the CLIMA paper to cover the logical bits.

AIL Lives

Jun. 22nd, 2007 06:13 pm
I spent the first 3 months of this job designing AIL* - a low-level BDI programming language. Then I wrote a paper about the design and got sent to Hawaii.

I have also spent some time implementing AIL using the Maude rewriting logic system. Today the first ever AIL agent executed: it wanted to pick something up, it had a plan to pick something up, and it did so (meaning it changed its beliefs so that it believed it had picked something up; it didn't actually pick anything up). You heard it here first, folks!!!

* Short for Agent Infrastructure Layer. Since one of the other languages we are interested in is an interpreter for AgentSpeak called Jason (which has a painting of the golden fleece as its logo), I briefly considered trying to change the name so the acronym was GRAIL and then going on an Arthurian/mythology theme. However, my boss's eyes rather glazed over at that point, so I thought it best to leave well alone.
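For the curious, that first execution amounts to something like the following - a Python transliteration of my own devising, not actual AIL (or Maude) syntax:

```python
beliefs = set()
goals = ["picked_up(block)"]

def pick_up(thing):
    """The body of the only plan in the library. Its sole effect is a
    belief update - the agent believes it picked the thing up; nothing
    is actually picked up."""
    beliefs.add(f"picked_up({thing})")

while goals:
    goal = goals.pop()           # the goal is flagged...
    if goal not in beliefs:
        pick_up("block")         # ...the one applicable plan is executed

assert "picked_up(block)" in beliefs   # it believes, therefore it "did"
```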

"Dear Author(s),

Thank you very much for submitting a paper to CLIMA-VIII. We are
delighted to let you know that your paper is accepted..."


This has to be one of the easiest papers I've ever written. Michael is keen on the use of groups to form multi-agent systems. Key idea: an agent is a group and a group is an agent - all agents can contain, and be contained by, other agents. This lets groups of agents have plans and goals external to the agents that compose them. Anyway, I wrote him some inference rules explaining how this might work in what is known as the operational semantics of a typical BDI (Beliefs, Desires, Intentions) programming language and then left for Hawaii. Sometime while I was away he and his PhD student fleshed it out with some text and an example in a language called AgentSpeak and, presto!, one more publication. They even put me down as first author (which means I really should read the paper!!).
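In toy Python (my rendering, not the paper's - the paper does this with operational-semantics rules, and all the names here are invented) the key idea looks something like this:

```python
class Agent:
    """An agent is a group and a group is an agent: every agent may
    contain members and be contained in turn, and a group carries plans
    and goals external to the agents that compose it."""
    def __init__(self, name, plans=None):
        self.name = name
        self.plans = plans if plans is not None else []
        self.goals = []
        self.members = []

    def add_member(self, agent):
        self.members.append(agent)

    def all_agents(self):
        """This agent and, recursively, everything it contains."""
        yield self
        for member in self.members:
            yield from member.all_agents()


team = Agent("team", plans=["explore_in_formation"])  # the group's own plan
team.add_member(Agent("rover1"))
team.add_member(Agent("rover2"))

mission = Agent("mission")    # a group is an agent, so it can be a member
mission.add_member(team)
print([a.name for a in mission.all_agents()])
# ['mission', 'team', 'rover1', 'rover2']
```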

"As the authors admit, the whole idea would rather lack a concrete justification in the paper." - ah! the anonymous referees spotted that then!

In the world of agents (or at least BDI agents) we think a lot about goals. There are many different sorts of goals, among which achieve, perform, query and maintain goals figure prominently. In general the idea is that when an agent acquires a goal this gets flagged up (somehow, depending on the language you are using), the agent deliberates and selects a plan for the goal, which it then executes. The plan may involve setting up some new sub-goals. So far so good. In the language of BDI theory (BDI stands for Belief, Desire and Intention, btw - in case you are interested) goals also have a logical/philosophical meaning - they are states of the world you want to bring about. In particular they are statements that the agent is hoping to be able to believe.

I'm all indignant about perform goals. Recall the cycle: flag a goal -> select a plan -> execute the plan? Well, if you have a perform goal you forget all about the goal once you've selected the plan - so if, after executing the plan, you've not actually achieved your goal, you don't realise this. Achievement goals, on the other hand, you don't forget about until they actually appear in the list of your beliefs (irrespective of what plans you have executed). So an agent working in an uncertain environment, or one over which it has little control, doesn't believe it has achieved its goals until it has actually checked that it has achieved them. I approve of this.
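The contrast is easiest to see in code. A toy sketch (mine - the names and the cycle are simplified for illustration) in which the one available plan silently fails:

```python
def pursue(goal, kind, plan, beliefs, max_tries=5):
    """Run one goal to completion, BDI-style.

    A perform goal is dropped as soon as its plan has executed, whether
    or not it worked; an achieve goal is re-posted until the goal
    actually turns up among the beliefs."""
    for _ in range(max_tries):
        plan(beliefs)                 # select-and-execute the one plan
        if kind == "perform":
            return True               # goal forgotten: "done"
        if goal in beliefs:           # achieve: check it really happened
            return True
    return False                      # honest failure


def flaky_pick_up(beliefs):
    """A plan whose action has stopped working: no effect on the world."""
    pass


beliefs = set()
print(pursue("picked_up(block)", "perform", flaky_pick_up, beliefs))  # True!
print(pursue("picked_up(block)", "achieve", flaky_pick_up, beliefs))  # False
```

Run against a plan that actually works, both calls return True; the difference only bites when the world refuses to cooperate - which is rather the point.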

An additional problem with perform goals (where the agent doesn't check whether it has succeeded) is that they get used as sub-routines. The sub-routine is a common and useful software engineering construct used to structure programming code and to allow easy reuse of code fragments which appear multiple times in a program. For the non-programmers: a sub-routine lets you tag a sequence of instructions with a name and then refer to them by that name elsewhere in the program, letting you break your program up into manageable chunks. Sub-routines are an extremely important software engineering tool but have s*d all to do with states of the world an agent wants to come to believe.

The crazy thing is that, having seen a number of BDI-agent languages in the past few months, none of them provides a decent sub-routine mechanism, so programmers are forced to use perform goals to stand in for sub-routines. This means that every time they want to run a common sequence of instructions the agent flags up a goal, reasons about it, selects a plan, etc., etc. - instead of just jumping to the sub-routine, recognising it as a software engineering tool, and not trying to treat it on the same level as modelling cognition. In fact I can't think of any philosophical justification for perform goals at all; their existence seems to be required entirely by this lack of adequate sub-routining.

Thanks for listening, normal service in the form of Doctor Who book reviews will resume shortly.
