[personal profile] purplecat
We (meaning my research group) have recently become interested in "ethical autonomy". From our point of view, our interest is quite narrowly circumscribed. Having formalised the "rules of the air" and created autonomous programs that can be shown to obey them, we were then faced with the question of when you would want a pilot to deliberately break the rules of the air because there is some compelling ethical reason to do so. One of the examples we look at is when another aircraft is failing to obey the rules of the air, potentially maliciously, by moving left to avoid a collision instead of moving right. If the autonomous pilot continues to move right then eventually the two planes will collide, whereas a human pilot would have deduced that the other aircraft was breaking the rules and would eventually have moved left instead, thus breaking the rules of the air but nevertheless taking the ethical course of action.
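As a toy illustration (the function names and the turn-inference rule here are invented for this post, not taken from our actual agent code), the shape of that decision looks something like this in Python:

    # Default behaviour: obey the rules of the air and turn right.
    # If the intruder appears to be turning left (breaking the rule),
    # continuing right keeps both aircraft on a collision course, so the
    # agent deliberately breaks the rule and turns left as well.

    def infer_intruder_turn(headings):
        """Infer the intruder's turn from its two most recent compass headings."""
        if len(headings) < 2:
            return None
        delta = headings[-1] - headings[-2]
        return "right" if delta > 0 else "left" if delta < 0 else None

    def choose_avoidance(intruder_headings):
        if infer_intruder_turn(intruder_headings) == "left":
            return "left"   # rule-breaking but ethical: mirror the intruder
        return "right"      # rule-abiding default

    print(choose_avoidance([90, 85, 80]))  # intruder turning left -> "left"

The hard part, of course, is not this last step but justifying (and verifying) the inference that the other aircraft really is misbehaving.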

Since ethical autonomy is obviously part of a much wider set of concerns, my boss got involved in organising a seminar on Legal, Ethical, and Social Autonomous Systems as part of a cross-disciplinary venture with the departments of Law, Psychology and Philosophy.

It was an interesting day. From my point of view the most useful part was meeting Kerstin Eder from Bristol. I knew quite a bit about her work but we'd not met before. She's primarily a verification person; her talk looked at potential certification processes for autonomous systems and pointed me in the direction of Runtime Verification, which I suspect I shall have to tangle with at some point in the next few years.
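For the unfamiliar, the core idea is easy to sketch: a monitor watches the running system's event trace and checks it against a formal property. The toy monitor below (event names and the two-step deadline are invented for illustration, not from any real tool) checks "every collision warning is answered by an avoidance manoeuvre within two steps":

    def monitor(trace, deadline=2):
        """Flag the first point at which the property is violated."""
        pending = None  # step of the most recent unanswered warning
        for step, event in enumerate(trace):
            if event == "collision_warning":
                pending = step
            elif event == "avoidance_manoeuvre":
                pending = None  # warning answered in time
            if pending is not None and step - pending > deadline:
                return f"violation at step {step}"
        return "no violation observed"

    print(monitor(["cruise", "collision_warning", "cruise", "cruise", "cruise"]))
    # -> violation at step 4: no manoeuvre followed the warning in time

The attraction for certification is that you check the actual running system rather than (or as well as) a model of it.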

There was a moment when one of the philosophers asserted that sex-bots were obviously unethical and I had to bite my tongue. I made the spur-of-the-moment decision that arguing about the ethics of what would, presumably, be glorified vibrators with a philosopher while my boss was in the room was possibly not something I wanted to get involved in.

The most interesting ethical problem raised was that of anthropomorphic or otherwise lifelike robots. EPSRC have, it transpires, a set of principles for roboticists which include the principle: "Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users." The problem here is that there is genuine therapeutic interest in the use of robots that mimic pets to act as companions for the elderly, especially those with Alzheimer's. While this is partly to compensate for the lack of money/people to provide genuine companionship, it's not at all clear-cut that the alternative should be rejected out of hand. Alan Winfield, who raised the issue and helped write EPSRC's set of principles, confessed that he was genuinely conflicted about the ethics here. In the later discussion we also strayed into whether the language of beliefs, desires and intentions used to describe cognitive agent programs carries with it the risk that people will over-anthropomorphise the program.
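For readers who haven't met the jargon: "beliefs, desires and intentions" (BDI) is just a way of structuring an agent program, not a claim about inner life. A minimal sketch of the deliberation cycle (the beliefs, desires and plans below are invented; real BDI languages are much richer):

    # Beliefs: what the agent currently holds true about the world.
    beliefs = {"battery_low": True}
    # Desires: states of affairs the agent would like to bring about.
    desires = ["patrol", "recharge"]
    # Plans: recipes for achieving an adopted intention.
    plans = {"recharge": ["goto_dock", "dock"], "patrol": ["goto_waypoint"]}

    def deliberate(beliefs, desires):
        """Commit to one desire as the current intention, given the beliefs."""
        return "recharge" if beliefs.get("battery_low") else desires[0]

    intention = deliberate(beliefs, desires)
    for action in plans[intention]:
        print("executing", action)  # executing goto_dock, executing dock

The anthropomorphism worry is precisely that the vocabulary suggests rather more than this mechanical select-and-execute loop.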

(no subject)

Date: 2012-11-25 06:01 pm (UTC)
From: [identity profile] daniel-saunders.livejournal.com
I thought this was really interesting, although I don't really know enough about AI or ethics to contribute meaningfully. That said, my uneducated opinion for some time has been to wonder whether a free-standing code of ethics that ignores context can ever be sufficient. For example, you probably know that it's pretty easy to formulate cases where a simple (perhaps simplistic) implementation of Kant's Categorical Imperative gets you doing all kinds of terrible things, or at least letting other people do them when you could easily stop them.
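To make the worry concrete, here's a deliberately simplistic, context-free version (the maxim list and the murderer-at-the-door numbers are pure invention, just to show the failure mode):

    # Maxims that fail the universalisation test are forbidden outright.
    FORBIDDEN_MAXIMS = {"lie"}

    def permitted(action, harm_prevented=0):
        # Context-free: harm_prevented is deliberately ignored, which is
        # exactly what makes this implementation simplistic.
        return action not in FORBIDDEN_MAXIMS

    # Murderer-at-the-door: lying would prevent enormous harm, but the
    # context-free check still forbids it.
    print(permitted("lie", harm_prevented=100))  # False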

Ethics, Robots, Unmanned vehicles.

Date: 2012-11-26 12:29 am (UTC)
From: [identity profile] a-cubed.livejournal.com
This is an interest of mine as well. On a different project (privacy and social media) last week, Lilian Edwards (Prof of IT Law at Strathclyde, and another member of the panel that created those EPSRC principles; she mentioned Winfield as the primary instigator of this stuff) briefly mentioned this kind of issue, including the question of the emotional attachment we're seeking to generate in emotionally vulnerable groups (children with learning difficulties or some shade of Asperger's/autism, the elderly) in developing care robots, which are one of the main foci for domestic robots at the moment.
We've just submitted a grant proposal to JSPS (Japan's EPSRC-equivalent) on a new theory of information ethics involving embodiment, where we argue that a radical revision of information ethics is needed because of both embodiment (robots, control of vehicles, mobile data, location data, DNA information...) and disembodiment (data moving to the cloud, ubiquitous networking), which will include (if it gets funded) some work on care robots and "caring surveillance".
If it gets funded we're hoping that Ishiguro-sensei (of the geminoid and telenoid robots) will be involved.
The sex-bots question is a very interesting one, which, as a prof of information ethics, I'm able to discuss, but I can understand your reluctance given your position at present.

(no subject)

Date: 2012-11-26 06:52 am (UTC)
From: [identity profile] kargicq.livejournal.com
Interesting! I'm teaching Ethics at A-level at the moment, an area of philosophy almost completely new to me. AFAICS, there are three main ways of slicing the ethical pie:

Distinction 1: (i) Theories which consider the good of the community first (e.g. utilitarianism, social-contract theory); (ii) Theories which consider the good of the individual first (e.g. Virtue Ethics, where morality arises from the urge to be a Good Person, the Ancient Greek idea); (iii) Theories which are all about an abstract Duty (e.g. Kant).

(Of course, in class (i) the Social Contract can originate from a collection of self-interested motives. But once it's in place, the community trumps the individual.)

Distinction 2: (i) Theories in which the consequences of an individual act outweigh the general rule ("Consequentialist"), and (ii) Theories in which they don't ("Deontological").

Distinction 3: (i) Theories in which moral propositions have, at least in principle, a well-defined truth-value ("Cognitivist"), and (ii) Theories in which they don't ("Non-Cognitivist," e.g. full-blown relativism, emotivism, prescriptivism).

Sounds like the philosopher you mention was concentrating on the first of these distinctions. Perhaps the very fact you're trying to code this up means that the question raised by the third distinction has been well and truly begged. Distinction 2 is useful to get the fine shading of the big classes in Distinction 1 (e.g. "act" vs "rule" versions of Utilitarianism: do we look over every act with a utilitarian eye, or go for the set of rules which, on the whole, works best?).
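The act/rule split is mechanical enough to sketch in code, which seems apt given what you're trying to do. The actions, rule sets and utility numbers below are pure invention, just to show the two decision procedures diverging:

    # Utility of each individual act in the situation at hand.
    ACTIONS = {"tell_truth": 3, "white_lie": 5}

    # Candidate rule sets, scored by how well they work on the whole.
    RULE_SETS = {
        "always_honest": {"allows": {"tell_truth"}, "long_run_utility": 8},
        "lie_when_convenient": {"allows": {"tell_truth", "white_lie"},
                                "long_run_utility": 4},
    }

    def act_utilitarian():
        # Look over every individual act with a utilitarian eye.
        return max(ACTIONS, key=ACTIONS.get)

    def rule_utilitarian():
        # Commit to the rule set that works best overall, then act within it.
        best = max(RULE_SETS.values(), key=lambda r: r["long_run_utility"])
        return max(best["allows"], key=ACTIONS.get)

    print(act_utilitarian())   # "white_lie": the best act in isolation
    print(rule_utilitarian())  # "tell_truth": the honest rule set wins overall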

I've found the study of these classifications and their consequences to be fascinating, even if every single actual ethical theory seems to be fatally flawed. Heigh ho, that's philosophy for you. Maybe the best it can do is provide a technical vocabulary to facilitate debate...?
Edited Date: 2012-11-26 06:58 am (UTC)
