purplecat: An open book with a quill pen and a lamp. (General:Academia)
I have a publication in the Agents and Robots for Reliable Engineered Autonomy (AREA) workshop: Corroborative V&V for Autonomous Systems: Integrating Evidence and Discrepancy Analysis for Safety Assurance. It should be open access, but does not appear to be. It's not a super-exciting paper. It starts from the observation that, if you are doing assurance of robotic systems, you will take a variety of approaches - abstract models, simulated tests, hardware tests... - and then have to reconcile the results of these approaches. The paper describes a first stab at a tool for this, but it is a very early prototype.
purplecat: An open book with a quill pen and a lamp. (General:Academia)
Both appeared in a new conference called ERAS (Engineering Reliable Autonomous Systems). The first looks at how a system - e.g., a drone on a patrol - might decide to skip some patrol points if it realises it can't reach all of them. The second attempts to catalogue the kinds of things people want from a computer explanation. Explainability is big in AI at the moment, but explanation is quite a slippery term and it's not clear that the support for explainability that's being developed actually meets what people want.
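
Purely for flavour, here's a toy sketch of the kind of choice the first paper is about. This is my own invention rather than the paper's algorithm, and the point names, importance scores and battery costs are all made up:

```python
# Toy patrol-point skipping (not the paper's algorithm): greedily keep
# the points with the best importance per unit of battery until the
# budget runs out. Travel order and inter-point costs are ignored.

def plan_patrol(points, battery):
    """points: list of (name, importance, battery_cost) tuples."""
    kept, skipped = [], []
    for name, importance, cost in sorted(
            points, key=lambda p: p[1] / p[2], reverse=True):
        if cost <= battery:
            kept.append(name)
            battery -= cost
        else:
            skipped.append(name)
    return kept, skipped

points = [("gate", 5, 10), ("fence_ne", 3, 25), ("depot", 4, 15)]
print(plan_patrol(points, battery=30))
# -> (['gate', 'depot'], ['fence_ne'])
```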

purplecat: An open book with a quill pen and a lamp. (General:Academia)
My PhD student had a paper published in AAMAS on Uncertain Machine Ethics Planning. This is a good conference which, for my sins, I'm currently joint Programme Chair for (this means I'm currently in the process of trying to find 1,300 potential referees in the hope of ending up with 650). Anyhoo... AAMAS rewards pretty theory-heavy papers and this was no exception, but the bottom line is that he's developed a technique in which a system can reason across several potential plans of action, using different moral theories, in order to work out which plan of action is least unacceptable across all the moral theories (I hope this makes sense; we keep running into double negatives in the theory). It's grounded in a philosophical concept called hypothetical retrospection, in which even if something turns out badly you can argue it was still the correct choice, because at the time you made the choice the chance of it turning out badly was low. There are some details, such as the ranking of outcomes: in a situation where you can get an apple (for sure) or gamble on a low chance of getting an all-expenses-paid holiday (yes, I know this isn't a moral choice), no number of apples can outweigh the small chance of getting the holiday. I guess the moral equivalent might be that no number of people made a little bit happier can outweigh killing someone.

Moral theories can be big theoretical juggernauts like utilitarianism or Kantian morality - or more subtle distinctions around which values are preferred (though this doesn't really come out in the paper, even if you can wade through all the formalism).
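
For anyone who'd like the flavour without the formalism, here is a drastically simplified sketch of the "least unacceptable across all the moral theories" idea - my own toy minimax rendering with invented theories, plans and scores, not the hypothetical-retrospection machinery that's actually in the paper:

```python
# Toy version of "least unacceptable across all moral theories":
# judge each plan by its worst unacceptability score under any theory,
# then choose the plan whose worst case is least bad. Everything below
# is invented for illustration.

plans = ["swerve", "brake", "continue"]

# Each theory assigns each plan an unacceptability score (higher = worse).
unacceptability = {
    "utilitarian": {"swerve": 2, "brake": 1, "continue": 5},
    "kantian":     {"swerve": 4, "brake": 2, "continue": 3},
}

def least_unacceptable(plans, unacceptability):
    worst = {p: max(theory[p] for theory in unacceptability.values())
             for p in plans}
    return min(plans, key=lambda p: worst[p])

print(least_unacceptable(plans, unacceptability))  # -> 'brake'
```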
purplecat: An open book with a quill pen and a lamp. (General:Academia)
One of my post-docs (I was previously on the supervisory team for her PhD) has been partly occupying her time generating publications from her thesis. This is one such. She's addressing the question of what people actually want when they ask that an autonomous system provide explanations. In particular, though she doesn't really get into this in the paper, most explainability research has focused specifically on neural networks that are classifying things into groups, not on robotic systems that are taking decisions about what to do next.

Eliciting Explainability Requirements for Safety-Critical Systems: A Nuclear Case Study talks through her approach and tries to categorise her results into groups. There is also some formalisation of the requirements into logic, though via the use of structured English to make it more comprehensible (an invented example of the general idea is below). Lastly, she reports on some lessons learned.
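
To give a sense of what that formalisation step looks like - this example is invented, not one of her actual requirements - a structured-English requirement such as "whenever the robot abandons an inspection task, it shall eventually explain why" might come out in temporal logic as something like:

```latex
% Invented illustration, not a requirement from the paper:
% "Whenever the robot abandons an inspection task,
%  it shall eventually explain why."
\[
  \square\,\bigl(\mathit{abandon\_task} \rightarrow \Diamond\,\mathit{explain\_reason}\bigr)
\]
```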
purplecat: An open book with a quill pen and a lamp. (General:Academia)
The award-winning paper I mentioned recently actually had a sequel. In The human factor: Addressing computing risks for critical national infrastructure towards 2040 we performed a similar exercise, asking a number of experts about risks to Critical National Infrastructure arising from computing developments and synthesising the results.

I am, honestly, happier with this paper: I thought we had a better range of genuine expertise among the people we talked to, and a more focused area of consideration. We had a little trouble with the third referee, who thought our experts were wrong about Quantum Computing and that we should rewrite the paper so that they gave the answer the referee thought was correct. Our experts did not think Quantum Computing was among the biggest risks to be considered in the next 15 years - instead they thought there were a number of issues relating to human factors (sophisticated phishing, difficulty tracing the cause of problems, and poor incident response in complex situations).
purplecat: A painting of Alan Turing (General:AI)
Do you recall this paper (also summarised in this article in The Conversation), about which a YouTube video was made?

Well it's just gone and won the journal's best paper award.

I continue to think one should be wary of indulging in futurism and remain glad I managed to keep the words "Rogue AI" out of it.
purplecat: An open book with a quill pen and a lamp. (General:Academia)
The work I'm involved with on the CRADLE Project involves trying to put together an assurance case for an autonomous robot to be deployed... somewhere. At the moment the various bodies - the HSE, the Office for Nuclear Regulation, the Civil Aviation Authority and so on - don't really want to be drawn on what evidence they would need to be certain due diligence had been performed for the deployment of an autonomous robot. We're therefore trying to produce some evidence that a robot is safe and see if they might at least admit they wouldn't throw such evidence out immediately.

We're also interested in how the production of such evidence could be made more streamlined, to avoid having to come up with new processes each time a different robot is considered. Hence Towards Patterns for a Reference Assurance Case for Autonomous Inspection Robots (which doesn't appear to be Open Access even though it should be, but I'm pretty sure anyone reading this here knows how to contact one of the authors and request a copy, should they be interested).
purplecat: An open book with a quill pen and a lamp. (General:Academia)
I mentioned a while back that I was helping to supervise a PhD student who was using various symbolic AI tools to validate the output from LLMs. At the time he was using a version of Prolog, but he has since switched to using a theorem-proving tool called Isabelle.

He recently won a best paper award at Empirical Methods in Natural Language Processing (EMNLP) for his paper about this, which can be found at https://aclanthology.org/2024.emnlp-main.172
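
In rough outline the approach is a generate-and-check loop. The sketch below is purely schematic and mine, not his system: `llm` and `prover` are hypothetical stand-ins for whatever LLM interface and Isabelle bridge the real work uses.

```python
# Schematic generate-and-check loop (not the actual system): an LLM
# proposes an answer, a symbolic tool tries to verify a formalised
# version of it, and failures are fed back as revision prompts.
# `llm` and `prover` are hypothetical stand-in objects.

def validate_llm_output(llm, prover, question, max_attempts=3):
    feedback = ""
    for _ in range(max_attempts):
        answer = llm.generate(question + feedback)  # candidate answer
        formal = prover.formalise(answer)           # translate into logic
        ok, reason = prover.check(formal)           # attempt verification
        if ok:
            return answer                           # symbolically validated
        feedback = f"\nPrevious answer failed check: {reason}. Please revise."
    return None                                     # gave up validating
```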

New Lab

Oct. 14th, 2024 06:43 pm
purplecat: An open book with a quill pen and a lamp. (General:Academia)
Here are some photos of my new lab at work:

( Photos under the Cut )

My colleagues in Engineering - who three years ago were all moved into a ginormous, custom-built, state-of-the-art building where there is not enough room and they have to share offices - are all a bit gob-smacked by how I managed to get this much space. I'm honestly not sure I know either - I foresee spending the rest of my career defending my territory.
purplecat: An open book with a quill pen and a lamp. (General:Academia)
For those of you wondering what paper I was presenting at AAMAS, it was Safeguard Privacy for Minimal Data Collection with Trustworthy Autonomous Agents. Most of the work was done by Mengwei Xu, who was working for me at the time.

We took an idea that a number of organisations and individuals, including Tim Berners-Lee, have been pushing for some time: that instead of giving all our personal data to web services in return for access, we should have a service that holds our personal data and grants web services access to it. On top of this you could then implement policies - e.g., never give out my marital status.

As I say, the idea isn't new, but we looked at how such a personal data service/negotiator could be implemented in a way that would let you check it was enforcing the policies you wanted, and at how you might verify that. There is quite a bit of technical material in the paper, but I don't think you need to get to grips with all the technical details in order to follow the argument, should you be so interested. A toy sketch of the policy idea is below.
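
This is my own sketch, nothing like the verified agent machinery in the paper, and all the names and data are invented:

```python
# Toy personal data negotiator (not the paper's verified agent): it
# holds the user's data and releases an attribute to a web service
# only if no policy forbids it, logging refusals for audit.

from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    data: dict
    never_release: set = field(default_factory=set)

    def request(self, service: str, attributes: list) -> dict:
        released = {}
        for attr in attributes:
            if attr in self.never_release:
                print(f"refused {attr!r} to {service}")  # auditable refusal
            elif attr in self.data:
                released[attr] = self.data[attr]
        return released

store = PersonalDataStore(
    data={"email": "me@example.org", "marital_status": "single"},
    never_release={"marital_status"},  # e.g. never give out my marital status
)
print(store.request("shop.example", ["email", "marital_status"]))
# -> {'email': 'me@example.org'} (after logging the refusal)
```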
purplecat: An open book with a quill pen and a lamp. (General:Academia)
The department recently partnered with the British Computer Society (BCS) to make some videos for... I don't know, honestly, generic publicity reasons I presume. Anyway the final results prominently feature one of my postdocs (Hazel) and my former boss (Michael). I get a bit part in Hazel's video and the book Michael and I wrote features in his video.

The interested can find them here.
purplecat: An open book with a quill pen and a lamp. (General:Academia)
Apparently - or perhaps I should say allegedly, since I've received this news second hand - the paper Matryoshka and I wrote (along with a couple of other people) on verifying machine ethics has been awarded Outstanding Publication of the Decade 2013 – 2023 by the Norwegian Artificial Intelligence Research Consortium, possibly in celebration of Matryoshka's recently acquired Norwegian citizenship.

We were lucky that it was a comparatively early paper in the Machine Ethics field, so it gets cited a lot as part of the standard literature on the subject. It's also still one of only a handful of papers that really deal with the issue of correctness.
purplecat: An open book with a quill pen and a lamp. (General:Academia)
I was going over my early LJ posts and regretting that I don't post about science much any more. Science blogging requires a fair bit of mental energy to put everything together in a coherent, yet understandable, way.

Anyway, here is a paper in which I, another computer scientist, and a legal scholar discuss what kinds of explanations offered by autonomous and AI systems might be of interest to lawyers. This includes both explanations that might be offered to lawyers after some event, and what lawyers might want to know about explanations offered to users during some interaction that led to an event. It's fairly preliminary work, aimed at mapping the space more than offering specific conclusions or calls to action.
purplecat: An open book with a quill pen and a lamp. (General:Academia)
Verifiable Autonomous Systems

I wouldn't expect anyone here to buy it, and certainly not at the price CUP are charging. But still, BOOK!
purplecat: An open book with a quill pen and a lamp. (General:Academia)
I thought we had hideously missed our deadline but, on review, I think we're only about two and a half years late, which is probably good for academic authors.

Part of me thinks that I've spent an awful lot of my life (more or less an hour per working day for the past two years) writing something that barely anyone will read. On the other hand, one doesn't really feel one is a proper academic until one has written a book.
purplecat: Picture of a Satellite dish under a starry sky. (General:Space)

[Image: Formal group shot from 1906, in the region of 50 men in boring suits and five women in splendid hats]

This is (according to a marketing email I just received) the University of Manchester's Department of Physics in 1906. My immediate take-away is that clearly the hat situation in the Faculty of Science and Engineering has deteriorated. My second thought is a vague curiosity about those women - are my assumptions about how many female academics there were in Physics at the turn of the last century incorrect? Were there an unusual number of female physicists at Manchester in 1906? Or are they support staff of some kind?
purplecat: An open book with a quill pen and a lamp. (General:Academia)
Because I do a lot of outreach, I'm on the school Recruitment, Outreach and Public Relations committee (though I am not an outreach, publicity or admissions officer for any of the departments in the school). The committee has decided to get involved in a Moon Landing anniversary event, which I cannot attend, and Electrical Engineering have designed a special pop-up banner for the event explaining how space exploration has driven their field. The head of the outreach committee has emailed me to tell me to design a computer science banner for the event. I have decided not to do this (not my event, not my idea, I have better things to do with my time), but I would like to minimise the chance of this blowing up into some kind of drama.

Poll #22142 Minimal Drama Way of Saying No
Open to: Registered Users, detailed results viewable to: All, participants: 13


What is the minimal drama way of getting out of this:

Respond with "I do not have time to do this, sorry".
7 (53.8%)

Delegate to two people who might vaguely be considered my minions and will be attending the event.
5 (38.5%)

Delegate to the Computer Science outreach officer even though I know they don't want to do it, won't be attending the event and are more than likely to cause tiresome drama.
0 (0.0%)

Don't do it and reply with "I forgot" or "I ran out of time" if called upon to answer for not doing it at a later date.
0 (0.0%)

Something else I will explain in the comments
1 (7.7%)

purplecat: Hand Drawn picture of a Toy Cat (Default)
Model-Checking for Analysis of Information Leakage in Social Networks has been accepted into the post-proceedings volume of the conference it was given at. The acceptance is notable, in particular, for the referees' comments, which in all cases amount to "previous comments have been dealt with more than sufficiently; nothing more to do" - which is good in the light of how much time I don't have between now and the camera-ready deadline.
purplecat: Hand Drawn picture of a Toy Cat (lego robots)
One of the odder things that happened to me at the tail end of last year was the below appearance in the EPSRC's Pioneer magazine. It was bizarre chiefly because the first I knew about it was when a colleague showed me the article. The text is cut-n-pasted from a piece the University's Corporate Communications department wrote about me at the time of the first NASA Space Apps Challenge (so approx. 5 years ago), and the pictures were lifted from my website. Still, not complaining...


