This paper derives from the work of one of my PGR students - one who has just passed his viva (yay!). I'm not sure how much interest it carries outside the field of belief-desire-intention (BDI) agent programming. In that particular paradigm agents select plans of action from a set generally provided by a programmer. Peter looked at how those plans could be automatically adapted if the actions stopped working properly - e.g., if some part of a robot's machinery wore down. The idea was to track the outcomes of actions against descriptions of what each action should do and then swap actions in and out of plans based on those descriptions.
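For the curious, the shape of the idea is something like this - a minimal sketch in Java, with names I've invented for illustration rather than anything from Peter's actual system:

```java
import java.util.*;

// Sketch only: each action carries a description of the effect it is
// supposed to achieve; we track how often the observed outcome matches
// that description, and a plan can swap in an alternative action that
// promises the same effect once the current one becomes unreliable.
class ActionRecord {
    final String name;
    final String promisedEffect;      // what the description says it does
    private int attempts = 0, successes = 0;

    ActionRecord(String name, String promisedEffect) {
        this.name = name;
        this.promisedEffect = promisedEffect;
    }

    void record(boolean effectObserved) {
        attempts++;
        if (effectObserved) successes++;
    }

    double reliability() {
        return attempts == 0 ? 1.0 : (double) successes / attempts;
    }
}

class PlanAdapter {
    private final List<ActionRecord> repertoire = new ArrayList<>();

    void register(ActionRecord action) { repertoire.add(action); }

    // Pick the most reliable known action promising the effect a plan needs,
    // e.g. preferring "turn_using_left_wheel" once "turn_using_right_wheel"
    // starts failing because the machinery has worn down.
    Optional<ActionRecord> bestFor(String effect) {
        return repertoire.stream()
                .filter(a -> a.promisedEffect.equals(effect))
                .max(Comparator.comparingDouble(ActionRecord::reliability));
    }
}
```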
One of the projects for which I have a reasonable time allocation is about modelling responsibility. We're focusing on what we're calling prospective responsibility, which is how responsibilities shape choices of goal and action (in contrast to retrospective responsibility, which is about allocating praise and/or blame). This short paper sets out our initial representation of a framework for how someone (e.g., a programmer or user) could describe responsibilities to a computational system, which could then use that description to choose goals/actions. It's a short paper, which meant we had to cut the examples to get it to length; the result is therefore rather dry and a lot of the motivation has been lost.
To be honest, this is still very much a work in progress. The framework feels a bit too complex to me (and has become more complex since we wrote this paper as we try to incorporate ideas of moral and legal responsibilities) and I'm not sure we're really certain yet that responsibilities are a good way to frame these kinds of high-level motivational constraints.

We had a successful day, mind you, about which I may blog in due course.
Incidentally, I've always quite fancied doing one of those question memes except that my life doesn't work in a way where I can say "pick a day next month on which you would like me to answer your question in a blog post." On the other hand I can manage, "leave me a question and I'll answer it in a blog post in my own sweet time." Since the whole Doctor Who rewatch is on hiatus until we can figure out a way to watch Blink which doesn't involve NLSS Child refusing to take a shower ever again (see last post) I thought it might be fun to fill the gap with question answers so, you know, ask away!!
NB. I do not promise to answer everything but I'll give anything that doesn't seem overly embarrassing or personal a shot.
We are pleased to inform you that your paper #671
Title: Agent Reasoning for Norm Compliance: A Semantic Approach
has been accepted for full publication and oral presentation in the proceedings of the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS2013).
( Explanation under the Cut )
We (meaning my research group) have recently become interested in "ethical autonomy". From our point of view our interest is quite circumscribed. Having formalised the "rules of the air" and created autonomous programs that can be shown to obey them, we were then faced with the issue of when you would want a pilot to deliberately break the rules of the air because there is some compelling ethical reason to do so. One of the examples we look at is when another aircraft is failing to obey the rules of the air, potentially maliciously, by moving left to avoid a collision instead of moving right. If the autonomous pilot continues to move right then eventually the two planes will collide, where a human pilot would have deduced that the other aircraft was breaking the rules and would eventually have moved left instead - breaking the rules of the air but nevertheless taking the ethical course of action.
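The toy version of that decision - entirely my own illustration here, not our actual formalisation of the rules of the air - is only a few lines:

```java
// Toy illustration (not the real formalisation): comply with the rule
// of the air unless the other aircraft has been observed breaking it,
// in which case complying would keep us on a collision course.
enum Turn { LEFT, RIGHT }

class CollisionAvoidance {
    static final Turn RULE = Turn.RIGHT;   // the rule: turn right to avoid

    static Turn decide(Turn observedOtherAircraft) {
        if (observedOtherAircraft == Turn.LEFT) {
            // The other aircraft is breaking the rule; turning right would
            // lead to collision, so break the rule for an ethical reason.
            return Turn.LEFT;
        }
        return RULE;                       // otherwise comply
    }
}
```

The hard part, of course, is not this conditional but justifying, and verifying, when an agent is permitted to take the rule-breaking branch.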
Since ethical autonomy is obviously part of a much wider set of concerns my boss got involved in organising a seminar on Legal, Ethical, and Social Autonomous Systems as part of a cross-disciplinary venture with the departments of Law, Psychology and Philosophy.
It was an interesting day. From my point of view the most useful part was meeting Kirsten Eder from Bristol. I knew quite a bit about her but we'd not met before. She's primarily a verification person and her talk looked at the potential certification processes for autonomous systems and pointed me in the direction of Runtime Verification which I suspect I shall have to tangle with at some point in the next few years.
There was a moment when one of the philosophers asserted that sex-bots were obviously unethical and I had to bite my tongue. I took the spur-of-the-moment decision that arguing about the ethics of what would, presumably, be glorified vibrators with a philosopher while my boss was in the room was possibly not something I wanted to get involved in.
The most interesting ethical problem raised was that of anthropomorphic or otherwise lifelike robots. EPSRC has, it transpires, a set of principles for roboticists which includes the principle: "Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users." The problem here is that there is genuine therapeutic interest in the use of robots that mimic pets to act as companions for the elderly, especially those with Alzheimer's. While this is partially to compensate for the lack of money/people to provide genuine companionship, it's not at all clear-cut that the alternative should be rejected out of hand. Alan Winfield, who raised the issue and helped write EPSRC's set of principles, confessed that he was genuinely conflicted about the ethics here. In the later discussion we also strayed into whether the language of beliefs, desires and intentions used to describe cognitive agent programs also carried with it the risk that people would over-anthropomorphise the program.
100 Current Papers in Artificial Intelligence, Automated Reasoning and Agent Programming. Number 5
Adam Sadilek and Henry Kautz. Location-Based Reasoning about Complex Multi-Agent Behaviour. Journal of Artificial Intelligence Research 43 (2012) 87-133
DOI: 10.1613/jair.3421
Open Access?: Yes!
( People got to play Capture The Flag for Science!! )
100 Current Papers in Artificial Intelligence, Automated Reasoning and Agent Programming. Number 1.
Catholijn M. Jonker, Viara Popova, Alexei Sharpanskykh, Jan Treur, and Pınar Yolum. Formal Framework to Support Organizational Design. Knowledge-Based Systems 31:89-105, 2012.
DOI: http://dx.doi.org/10.1016/j.knosys.2012.02.011
Open Access?: Sadly no.
I was half hoping to start these posts with a terribly exciting AI paper but sadly nothing terribly exciting caught my eye in the various tables of contents last week. However this is very much a bread-and-butter AI paper and so in some ways I'm not so sorry to be starting out with it, since it gives a good impression of what a lot of AI types do when they are not trying to develop machines that will take over the world and wipe out humanity.
( More Under the Cut )
So there you go, not terribly exciting, but a fairly typical current paper in artificial intelligence.
In which we show that using AI/agent-based approaches lets you write shorter code, which, I will confess, isn't really news in the AI/agent community, but this has gone to a "Space" conference (albeit Artificial Intelligence, Robotics and Automation in Space).
Reviewers' Comments in their entirety:
This is a quite simple approach; not academically sophisticated.
Interesting: related to efficient coding of control algorithms by means of BDI..
It's always nice to know the peer review process is rigorous and robust.
So for the past two weeks I've had a visitor at work, who I shall call NT (since I'm so great at thinking up pseudonyms). He's a PhD student from the Netherlands who has invented a programming language for organisations of agents. I was in favour of calling this language Orwell (the original paper describes them as Orwellian Agents) but NT opted for the rather more prosaic OOPL (Organisation Oriented Programming Language). Somehow, last year at AAMAS, I managed to convince NT's second supervisor that my AIL framework was the obvious implementation platform for this language and we arranged for NT to come to Liverpool to implement it.
Cue much frantic manual writing; OK, some half-hearted manual writing which, after about six months' effort, resulted in me emailing 94 entirely un-proofread pages to the poor lamb, these representing about 3/4 of a manual. He was remarkably polite about this.
I have been asserting confidently in papers for at least a year that it is easier to implement a language interpreter in my framework than it is to attach a model checker to an existing language (the purpose of the framework being to let you model check your agent programs). So I really shouldn't have been as pleasantly surprised as I was that we managed to write an interpreter for OOPL inside a week. This week included us breaking Windows (so support had to scrub NT's account and do a fresh install) and my frantic attendance at the Automated Reasoning Workshop, which was inconsiderately scheduled in Liverpool during the second two days of NT's stay.
We spent this week trying to get the model checker to work and to write a parser. In both endeavours we suffered at the hands of other people's tools. The model checker, Java PathFinder, is developed at NASA and they've broken it (fortunately, after some messing about, NT and I got our hands on an old version). Meanwhile I struggled to learn to use a parser generator - time taken to work out how to use the tool: two and a half days; time taken to actually write the parser: one day. TBH, if I'd known it was that easy to write parsers I'd have written some long ago rather than assuming they were horrible scary things best steered well clear of.
So all in all a busy but rewarding two weeks.
Dagstuhl: Programming Multi-Agent Systems
Sep. 29th, 2008 07:08 pm
I vaguely threatened to write up my trip to Schloss Dagstuhl when I posted about visiting Trier. I don't really feel up to an account of even the most interesting talks but I thought a bit of general waffle might not go amiss.
( waffle )
All I'm going to note about this talk is that I find the iCat more disturbing than cute.
I sat next to the speaker, Frank Dignum, at dinner that evening and, as well as being very nice, he said some very perspicacious things about organisations of agents which I'm going to have to think about.
Heterogeneous Agents
Jan. 16th, 2008 03:50 pm
I just got three agents written in three different agent programming languages to talk to each other and even to co-operate in order to achieve two goals.
*does a little my-agents-are-talking-to-each-other dance*
Which leaves me with a luxurious 8 days to write a paper about it before the deadline*.
* remember students, do as we say not as we do.
We are delighted to inform you that your paper entitled "A Common Basis for Agent Organisation in BDI Languages",
submitted to EUMAS-2007, has been accepted for presentation at the workshop
(recall that the proceedings are only informally published).
This is actually a reheat of our LADS paper but M. assures A. and myself that EUMAS is intended for reheats and work in progress and the presentations are supposed merely to form a basis for the exchange of ideas. This means that A. gets to go to Tunisia in six weeks' time. M. said I could go too but I felt two weeks before Christmas was cutting things a bit fine.
"Isn't that where they filmed Star Wars?" I said. A. and M. looked at me blankly.
State of the AIL
Nov. 7th, 2007 10:08 am
Several people have now told me these posts are entirely gibberish; however, I want to document the progress of our thought on the project. It's all under the cut but I don't suggest you read it unless you are actually interested in agents, programming and/or model checking.
( And now the science bit )
On hearing that a programming language was named after her, G. immediately asked if she could program in it. I baulked a bit since it's a long way off being anything approaching a sensible teaching language. In the end I said she could once she could program, confident in the knowledge that this is probably several years away. With some satisfaction, a few days later when we attended a Curriculum Morning at the school, B. pointed out the note that listed "programming" in the IT curriculum for the summer term this year...
All Change
Sep. 19th, 2007 07:31 pm
So today we had a productive project meeting in which we realised that large parts of our approach were probably wrong. While this doesn't mean all the work I've done in the past year designing and implementing the AIL agent programming language was pointless, it's going to be put to rather different uses than originally intended and in a much simplified form.
Meanwhile, if it works, the new solution we've got for the problem we are supposed to be addressing is going to be much simpler and far more elegant than AIL. I'm not terribly sorry to be losing AIL, despite the work, since it was beginning to look like something of an unwieldy behemoth and the new solution is largely my idea, but I do feel incredibly exhausted and, frankly, like I've drunk too much coffee (can't think what the cause of that could be).
Possibly foolishly I've said I will produce a prototype for the new solution by next Friday (this being a reflection of how simple it really will be if it works). Watch this space.
Java is a silly language
Aug. 3rd, 2007 04:44 pm
People are always telling me that my usual languages, like Prolog, are silly or, at the very least, impractical, but Java has just got me flummoxed.
I have a list (of logical formulae as it happens) and I want to know if my agent believes everything in the list. To complicate matters, the list can contain variables so the agent might believe the formulae in different ways. So my list might be: do I believe there is something, x say, such that 1) x is blue and 2) x has four legs? The answer might be "blue cow is blue and blue cow has four legs"* but I might also believe that the sky is blue and any number of other things are blue. The object is to find something that I believe is blue and which I also believe has four legs.
This is incredibly easy in Prolog. I just go through the list of formulae one at a time, creating a candidate answer; if the answer fails at any point the programming language automatically backtracks up the list looking for alternatives.
Java doesn't do backtracking. I'm going to have to do something complicated involving keeping track of where I am in the list and which of the alternative candidates I have explored and then zipping back and forth in a sensible fashion. I've appealed to B., who is supposed to be a bit of a dab hand with imperative languages, and he looked a bit blank and then started talking about custom iterators.
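In the end the trick seems to be to let recursion do the work: returning from a failed recursive call is the backtracking. Here's a rough sketch of what I mean - all the names are invented for illustration and this isn't the real AIL code:

```java
import java.util.*;

// A minimal sketch (invented names, not the actual AIL implementation).
// believesMatching(formula, env) stands in for the belief base query:
// it returns every substitution extending env under which the agent
// believes the formula.
class BeliefSearch {

    interface BeliefBase {
        List<Map<String, String>> believesMatching(String formula,
                                                   Map<String, String> env);
    }

    // Does the agent believe every formula in the list under one
    // consistent substitution? Try each candidate for the head and
    // recurse on the tail; falling out of the loop and returning to
    // the caller is exactly the backtracking Prolog gives you for free.
    static Optional<Map<String, String>> solve(List<String> formulae,
                                               Map<String, String> env,
                                               BeliefBase beliefs) {
        if (formulae.isEmpty()) {
            return Optional.of(env);               // everything matched
        }
        String head = formulae.get(0);
        List<String> rest = formulae.subList(1, formulae.size());
        for (Map<String, String> candidate : beliefs.believesMatching(head, env)) {
            Optional<Map<String, String>> answer = solve(rest, candidate, beliefs);
            if (answer.isPresent()) {
                return answer;                     // success propagates up
            }
            // otherwise try the next candidate binding: this is the backtrack
        }
        return Optional.empty();                   // no candidate worked
    }
}
```

So "x is blue" might bind x to blue cow or to the sky, and only the blue cow binding survives the "x has four legs" check.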
* with apologies to those of you who don't watch CBeebies.
Dear Author(s),
Thank you very much for submitting a paper to LADS'007. We are
delighted to let you know that your paper is accepted.
This is another groups-are-agents paper, much like this one, only this time without any actual logic in it and more of a literature survey, with explanations for why everything else people have done fits into our framework. This was the first paper we wrote on the subject but it went so far over the page limit that we were forced to write the CLIMA paper to cover the logical bits.