We (meaning my research group) have recently become interested in "ethical autonomy". From our point of view the interest is quite narrowly scoped. Having formalised the "rules of the air" and created autonomous programs that can be shown to obey them, we were then faced with the question of when you want a pilot to deliberately break the rules of the air because there is some compelling ethical reason to do so. One of the examples we look at is when another aircraft is failing to obey the rules of the air, potentially maliciously, by moving left to avoid a collision instead of moving right. If the autonomous pilot keeps moving right the two planes will eventually collide, whereas a human pilot would have deduced that the other aircraft was breaking the rules and would eventually have moved left instead: breaking the rules of the air, but nevertheless taking the ethical course of action.
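For concreteness, here is a minimal sketch of how that override might look. This is not our actual agent code; the function names, the observation format and the threshold of three observations are all invented for illustration.

```python
# Toy sketch of the head-on scenario: the default "rules of the air"
# response is to turn right, but if the intruder is repeatedly observed
# turning the wrong way (left, towards us), an ethically-motivated
# override kicks in. Threshold and names are illustrative only.

def choose_manoeuvre(intruder_turn_history):
    """Return 'right' (rule-compliant) or 'left' (rule-breaking override).

    intruder_turn_history: observed intruder turns, e.g. ['left', 'left'].
    """
    rule_compliant = "right"  # what the rules of the air demand head-on
    # Evidence that the intruder is violating the rules: it keeps turning left.
    violations = sum(1 for turn in intruder_turn_history if turn == "left")
    if violations >= 3:       # enough evidence that it will not comply
        return "left"         # break the rule to avoid the collision
    return rule_compliant

# After three observations of the intruder turning left,
# the agent abandons the rule-compliant manoeuvre.
print(choose_manoeuvre(["left", "left", "left"]))  # -> 'left'
```

The interesting part, of course, is not the code but deciding how much evidence of rule-breaking justifies the override.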
Since ethical autonomy is obviously part of a much wider set of concerns, my boss got involved in organising a seminar on Legal, Ethical, and Social Autonomous Systems as part of a cross-disciplinary venture with the departments of Law, Psychology and Philosophy.
It was an interesting day. From my point of view the most useful part was meeting Kirsten Eder from Bristol. I knew quite a bit about her but we'd not met before. She's primarily a verification person and her talk looked at the potential certification processes for autonomous systems and pointed me in the direction of Runtime Verification which I suspect I shall have to tangle with at some point in the next few years.
There was a moment when one of the philosophers asserted that sex-bots were obviously unethical and I had to bite my tongue. I took the spur of the moment decision that arguing about the ethics of what would, presumably, be glorified vibrators with a philosopher while my boss was in the room was possibly not something I wanted to get involved in.
The most interesting ethical problem raised was that of anthropomorphic or otherwise lifelike robots. EPSRC have, it transpires, a set of principles for roboticists which includes: "Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users." The problem here is that there is genuine therapeutic interest in using robots that mimic pets as companions for the elderly, especially those with Alzheimer's. While this is partly to compensate for the lack of money and people to provide genuine companionship, it's not at all clear-cut that the alternative should be rejected out of hand. Alan Winfield, who raised the issue and helped write EPSRC's principles, confessed that he was genuinely conflicted about the ethics here. In the later discussion we also strayed into whether the language of beliefs, desires and intentions used to describe cognitive agent programs also carries the risk that people will over-anthropomorphise the program.
(no subject)
Date: 2012-11-25 06:01 pm (UTC)

(no subject)
Date: 2012-11-25 06:40 pm (UTC)

Then there was a class of systems into which utilitarianism falls, which provides you with a way to, sort of, score actions; the most ethical action is then the one with the highest score - greatest good for the greatest number and so forth. Though it was clear there were philosophical systems with different measures for the outcomes of actions.
Then there were the systems into which the Categorical Imperative falls (as I understand it), which suggest there is an underlying set of laws which are ethically justified irrespective of the actual outcomes of applying them in some given situation.
It must be said, if there is an absolute ethical system I'm inclined to believe it will fall in the second group somewhere, but I suspect that is the group of most attraction to atheists since we tend not to believe in absolute principles which are divorced from actual outcomes. Though I'm also inclined to think the second group are a special case of the third (in which the over-riding principle is outcome based). There is also, obviously, a massive amount of question-begging in the phrase "the greatest good"...
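The scoring view described above is essentially an algorithm, so here is a toy illustration of the contrast (my own construction, not from the comment): a utilitarian chooser scores outcomes and picks the maximum, while a law-based chooser first discards any action that violates an absolute rule, whatever its score. The actions and scores are made up.

```python
# Utilitarian-style choice: score every action, take the maximum.
def utilitarian_choice(actions, utility):
    """Pick the action with the greatest total good."""
    return max(actions, key=utility)

# Law-based choice: forbidden actions are ruled out regardless of score.
def rule_based_choice(actions, utility, forbidden):
    """Discard rule-violating actions outright, then choose among the rest."""
    permitted = [a for a in actions if a not in forbidden]
    return max(permitted, key=utility) if permitted else None

actions = ["lie", "tell_truth", "stay_silent"]
utility = {"lie": 10, "tell_truth": 3, "stay_silent": 5}.get

print(utilitarian_choice(actions, utility))                       # -> 'lie'
print(rule_based_choice(actions, utility, forbidden={"lie"}))     # -> 'stay_silent'
```

The question-begging, of course, is hidden in where the utility numbers and the forbidden set come from.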
Ethics, Robots, Unmanned vehicles.
Date: 2012-11-26 12:29 am (UTC)

We've just submitted a grant proposal to JSPS (Japan's EPSRC-equivalent) on a new theory of information ethics involving embodiment, where we argue that a radical revision of information ethics is needed because of both embodiment (robots, control of vehicles, mobile data, location data, DNA information...) and disembodiment (data moving to the cloud, ubiquitous networking), which will include (if it gets funded) some work on care robots and "caring surveillance".
If it gets funded we're hoping that Ishiguro-sensei (of the geminoid and telenoid robots) will be involved.
The sex-bots question is a very interesting one, which as a prof of information ethics, I'm able to discuss, but I can understand your reluctance given your position at present.
(no subject)
Date: 2012-11-26 06:52 am (UTC)

Distinction 1: (i) Theories which consider the good of the community first (e.g. utilitarianism, social-contract theory); (ii) theories which consider the good of the individual first (e.g. Virtue Ethics, where morality arises from the urge to be a Good Person -- the Ancient Greek idea); (iii) theories which are all about an abstract Duty (e.g. Kant).
(Of course, in class (i) the Social Contract can originate from a collection of self-interested motives. But once it's in place, the community trumps the individual.)
Distinction 2: (i) Theories in which the consequences of an individual act outweigh the general rule ("Consequentialist"), and (ii) Theories in which they don't ("Deontological").
Distinction 3: (i) Theories in which moral propositions have, at least in principle, a well-defined truth-value ("Cognitivist"), and (ii) Theories in which they don't ("Non-Cognitivist," e.g. full-blown relativism, emotivism, prescriptivism).
Sounds like the philosopher you mention was concentrating on the first of these distinctions. Perhaps the very fact you're trying to code this up means that the question raised by the third distinction has been well and truly begged. Distinction 2 is useful to get the fine-shading of the big classes in Distinction 1 (e.g. "act" vs "rule" versions of Utilitarianism; do we look over every act with a utilitarian eye, or go for the set of rules which, on the whole, works best?)
I've found the study of these classifications and their consequences to be fascinating, even if every single actual ethical theory seems to be fatally flawed. Heigh ho, that's philosophy for you. Maybe the best it can do is provide a technical vocabulary to facilitate debate..?
(no subject)
Date: 2012-11-26 12:07 pm (UTC)

My colleague who is doing most of the theoretical lifting here (let's call her Matryoshka) tells me that we are implementing Distinction 1, type (iii), though I rather thought we were implementing Distinction 2, type (i), without necessarily distinguishing between individuals and communities (we're at the level of "if you have to crash into something living, pick the cow instead of the human").
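To give a flavour of what "that level" means in practice, here is a back-of-the-envelope sketch of the crash-into-the-cow-not-the-human rule; the ranking and the candidate obstacles are invented for illustration and are not from our actual system.

```python
# Crude harm ranking over things the vehicle might be forced to hit;
# higher means worse to hit. Purely illustrative numbers.
HARM_RANK = {"human": 3, "cow": 2, "empty_field": 1}

def least_bad_crash(options):
    """Given unavoidable crash options, pick the one ranked least harmful."""
    return min(options, key=lambda o: HARM_RANK[o])

print(least_bad_crash(["human", "cow"]))        # -> 'cow'
print(least_bad_crash(["cow", "empty_field"]))  # -> 'empty_field'
```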
He was talking specifically about coding but with, I felt, a fairly naive view of the nuances of AI type programming. That said, he probably felt we had a rather naive view of the nuances of ethics. Hence, I suppose, the value of the workshop.
Re: Ethics, Robots, Unmanned vehicles.
Date: 2012-11-26 12:12 pm (UTC)

I really didn't want to be remembered as "that woman who kept going on about sex-bots", especially since I was working from a gut feeling that calling them "clearly unethical" was a rather sweeping statement given the range of possibilities, rather than from a specific thought-out standpoint on the issue.
Good luck with the grant application.