purplecat: Hand Drawn picture of a Toy Cat (agents)
In the world of agents, or at least BDI agents, we think a lot about goals. There are many different sorts of goal, among which achieve, perform, query and maintain goals figure prominently. In general the idea is that when an agent acquires a goal this gets flagged up (somehow, depending on the language you are using), the agent deliberates and selects a plan for the goal, which it then executes. The plan may involve setting up some new sub-goals. So far so good. In the language of BDI theory (BDI stands for Belief, Desire and Intention btw - in case you are interested) goals also have a logical/philosophical meaning - they are states of the world the agent wants to bring about. In particular they are statements that the agent is hoping to be able to believe.
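The cycle above can be sketched in a few lines of Python. This is a toy illustration, not any real BDI language's machinery; the plan library, goal names and the `subgoal:` convention are all invented for the example.

```python
# Hypothetical sketch of the flag-goal -> select-plan -> execute cycle.
# A plan is a list of steps; a step may be an action or a sub-goal.
plan_library = {
    "tea_made": ["boil_water", "subgoal:cup_ready", "pour_water"],
    "cup_ready": ["fetch_cup", "add_teabag"],
}

def pursue(goal):
    actions = []
    plan = plan_library[goal]          # deliberate: select a plan for the goal
    for step in plan:                  # execute the plan
        if step.startswith("subgoal:"):
            actions += pursue(step.split(":", 1)[1])  # set up a sub-goal
        else:
            actions.append(step)
    return actions

print(pursue("tea_made"))
# -> ['boil_water', 'fetch_cup', 'add_teabag', 'pour_water']
```

Note that nothing here checks whether the tea actually got made - which is exactly the complaint below.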

I'm all indignant about perform goals. Recall the cycle: flag a goal -> select a plan -> execute the plan? Well, if you have a perform goal you forget all about the goal once you've selected the plan - so if, after executing the plan, you've not actually achieved your goal, you don't realise this. Achievement goals, on the other hand, you don't forget about until they actually appear in your list of beliefs (irrespective of what plans you have executed). So an agent working in an uncertain environment, or one over which it has little control, doesn't believe it has achieved its goals until it has actually checked that it has. I approve of this.
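The difference is easy to see in a sketch. Here an (invented) unreliable action only succeeds on the third try; the names and the `beliefs` set are assumptions for illustration, not any particular language's semantics.

```python
# Hypothetical contrast between perform and achieve goal semantics.
beliefs = set()

def unreliable_open(attempts):
    """An action in an uncertain world: only succeeds on the third try."""
    attempts["n"] += 1
    if attempts["n"] >= 3:
        beliefs.add("door_open")

def perform(goal, plan, attempts):
    plan(attempts)                  # run the plan once, then forget the goal

def achieve(goal, plan, attempts):
    while goal not in beliefs:      # keep the goal until it is believed
        plan(attempts)

a1 = {"n": 0}
perform("door_open", unreliable_open, a1)
print("door_open" in beliefs)       # False: the plan ran, the goal was dropped

a2 = {"n": 0}
achieve("door_open", unreliable_open, a2)
print("door_open" in beliefs)       # True: retried until actually believed
```

The perform goal runs its plan exactly once and walks away; the achieve goal keeps checking the agent's beliefs and so notices the failure.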

An additional problem with perform goals (where the agent doesn't check if it has succeeded) is that they get used as sub-routines. The sub-routine is a common and useful software engineering construct used to structure programming code and to allow easy reuse of code fragments which appear multiple times in a program. For the non-programmers: a sub-routine lets you tag a sequence of instructions with a name and then refer to them by that name elsewhere in the program - letting you break your program up into manageable chunks. Sub-routines are an extremely important software engineering tool but have s*d all to do with states of the world an agent wants to come to believe.

The crazy thing is that, of the BDI-agent languages I've seen in the past few months, none provides a decent sub-routine mechanism, and so programmers are forced to use perform goals to stand in for sub-routines. Which means every time they want to run a common sequence of instructions the agent flags up a goal, reasons about it, selects a plan etc. etc. - instead of just jumping to the sub-routine, recognising it as a software engineering tool, and not trying to treat it on the same level as modelling cognition. In fact I can't think of any philosophical justification for perform goals at all; their existence seems to be required entirely by this lack of adequate sub-routining.
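The overhead complained about above can be made concrete. In this invented sketch the same two instructions are run once as a plain sub-routine call and once routed through the goal machinery, with each step recorded in a trace; everything here is hypothetical.

```python
# Hypothetical contrast: a direct sub-routine call versus pushing the
# same instruction sequence through the flag/deliberate/select cycle.
trace = []

def greet_sequence():
    trace.append("say_hello")
    trace.append("wave")

def via_subroutine():
    greet_sequence()                # one jump, no reasoning

plan_library = {"greeted": greet_sequence}

def via_perform_goal(goal):
    trace.append(f"flag:{goal}")    # flag the goal
    plan = plan_library[goal]       # deliberate over the plan library
    trace.append("select_plan")     # select a plan
    plan()                          # finally, execute the same two steps

via_subroutine()
via_perform_goal("greeted")
print(trace)
# -> ['say_hello', 'wave', 'flag:greeted', 'select_plan', 'say_hello', 'wave']
```

Same two instructions either way; the perform-goal route just drags the cognitive machinery through the loop for no philosophical gain.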

Thanks for listening, normal service in the form of Doctor Who book reviews will resume shortly.

