Jun. 8th, 2007

purplecat: Hand Drawn picture of a Toy Cat (Default)
National Geographic want to film B. "arriving off the plane with his latest simulation" - this entails a five day trip to America at a rather awkward point in time.

I can't help feeling that National Geographic are labouring under a fundamental misunderstanding of the nature of digital information...

Old Friends

Jun. 8th, 2007 04:14 pm
purplecat: Hand Drawn picture of a Toy Cat (books)
I can't quite decide if there is any point in reviewing a run-of-the-mill Bernice Summerfield book here. One of you might decide to pick up a good one if I recommended it, but I don't suppose any of you would be disposed to read one otherwise, so there seems little point in informing you that "this one's a bit naff" - but here goes anyway.

To recap: Benny Summerfield is a space-faring archaeologist and former companion of the Doctor who proved sufficiently popular to generate her own series of spin-off novels.

Review of Old Friends )
purplecat: Hand Drawn picture of a Toy Cat (books)
I was interested to read a passing comment in one of [livejournal.com profile] parrot_knight's recent posts that he is currently reading Michael Moorcock's "The Dancers at the End of Time" sequence. I am also doing this, prompted by a review of his work that appeared in SFX last year. I previously read one of his Elric novels but remember absolutely nothing about it at all beyond that it had Elric in it (well *duh*) and that I found the ending vaguely unsatisfying and open-ended. "The Hollow Lands" is the second book in the sequence after "An Alien Heat", which I read earlier this year, before I started this blog.

Review of The Hollow Lands )
purplecat: Hand Drawn picture of a Toy Cat (agents)
In the world of agents (or at least BDI-agents) we think a lot about goals. There are many different sorts of goals, among which achieve, perform, query and maintain figure prominently. In general the idea is that when an agent acquires a goal this gets flagged up (somehow, depending on the language you are using), the agent deliberates and selects a plan for the goal, which it then executes. The plan may involve setting up some new sub-goals. So far so good. In the language of BDI theory (BDI stands for Belief, Desire and Intention btw - in case you are interested) goals also have a logical/philosophical meaning - they are states of the world you want to bring about. In particular they are statements that the agent is hoping to be able to believe.
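The cycle above can be sketched in a few lines of Python. This is purely illustrative - the names (`Agent`, `flag_goal`, `select_plan`) are my own, not from any real BDI language, and "executing" a step just means adding a belief.

```python
# Minimal sketch of the goal-handling cycle: flag a goal, select a plan,
# execute it (possibly flagging sub-goals along the way). All names are
# illustrative, not taken from any actual BDI language.

class Agent:
    def __init__(self):
        self.beliefs = set()
        self.goals = []          # flagged goals awaiting deliberation
        self.plan_library = {}   # goal -> list of steps (or sub-goals)

    def flag_goal(self, goal):
        self.goals.append(goal)

    def select_plan(self, goal):
        # Deliberation, crudely: look up a plan for the goal.
        return self.plan_library.get(goal, [])

    def execute(self, plan):
        for step in plan:
            if step in self.plan_library:
                self.flag_goal(step)     # a step may itself be a sub-goal
            else:
                self.beliefs.add(step)   # stand-in for acting on the world

    def cycle(self):
        while self.goals:
            goal = self.goals.pop(0)
            self.execute(self.select_plan(goal))

agent = Agent()
agent.plan_library = {"make_tea": ["boil_water", "add_teabag"]}
agent.flag_goal("make_tea")
agent.cycle()
# agent.beliefs now contains "boil_water" and "add_teabag"
```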

I'm all indignant about perform goals. Recall the cycle: flag a goal -> select a plan -> execute the plan? Well, with a perform goal you forget all about the goal once you've selected the plan - so if, after executing the plan, you've not actually achieved your goal, you don't realise this. Achievement goals, on the other hand, you don't forget about until they actually appear in the list of your beliefs (irrespective of what plans you have executed). So an agent working in an uncertain environment, or one over which it has little control, doesn't believe it's achieved its goals until it has actually checked that it has achieved them. I approve of this.
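The difference shows up as soon as actions can fail. Here's a hypothetical sketch (again, my own names, not any real agent language) in which an unreliable action fails its first couple of attempts: the perform goal runs its plan once and never notices the failure, while the achievement goal persists until the goal actually appears among the beliefs.

```python
# Sketch contrasting perform and achievement goals in an uncertain
# environment. All names are illustrative.

def pursue_perform(goal, act, beliefs):
    # Perform goal: select and execute a plan once, then forget the
    # goal - success is never checked.
    act(goal, beliefs)

def pursue_achieve(goal, act, beliefs, max_attempts=10):
    # Achievement goal: the goal is only dropped once it is believed.
    attempts = 0
    while goal not in beliefs and attempts < max_attempts:
        act(goal, beliefs)
        attempts += 1

def make_flaky_act(failures):
    # Deterministic stand-in for an unreliable action: it fails the
    # first `failures` calls, then succeeds (adds the goal as a belief).
    state = {"calls": 0}
    def act(goal, beliefs):
        state["calls"] += 1
        if state["calls"] > failures:
            beliefs.add(goal)
    return act

beliefs = set()
pursue_perform("door_open", make_flaky_act(2), beliefs)
# "door_open" is not in beliefs - the perform goal never noticed the failure

beliefs = set()
pursue_achieve("door_open", make_flaky_act(2), beliefs)
# "door_open" is in beliefs - the achievement goal retried until it held
```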

An additional problem with perform goals (where the agent doesn't check if it has succeeded) is that they get used as sub-routines. The sub-routine is a common and useful software engineering construct used to structure programming code and to allow easy reuse of code fragments which appear multiple times in a program. For the non-programmers: a sub-routine lets you tag a sequence of instructions with a name and then refer to them by that name elsewhere in the program - letting you break your program up into manageable chunks. Sub-routines are an extremely important software engineering tool but have s*d all to do with states of the world an agent wants to come to believe.
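For the non-programmers, the idea looks like this in Python (names made up for the example): tag some instructions with a name, then invoke them by that name wherever they're needed - no goals, no deliberation, no plan selection involved.

```python
# A sub-routine: a named, reusable sequence of instructions.
# The name and contents are purely illustrative.

def greet(name):
    # Tag these instructions with the name "greet"...
    return "Hello, " + name + "!"

# ...and refer to them by that name elsewhere in the program.
print(greet("Benny"))
print(greet("Doctor"))
```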

The crazy thing is that, having seen a number of BDI-agent languages in the past few months, none of them provide decent sub-routine mechanisms, and so programmers are forced to use perform goals to stand in for sub-routines. Which means every time they want to run a common sequence of instructions the agent flags up a goal, reasons about it, selects a plan etc. etc. - instead of just jumping to the sub-routine, recognising it as a software engineering tool, and not trying to treat it on the same level as modelling cognition. In fact I can't think of any philosophical justification for perform goals at all; their existence seems to be required entirely because of this lack of adequate sub-routining.

Thanks for listening, normal service in the form of Doctor Who book reviews will resume shortly.
