Should we force an artificial intelligence to always tell the truth? The answer is trickier than it sounds: firstly, because human interactions are not always based on truth, and secondly, because absolute honesty might be inefficient.
This is what a team of researchers from Carnegie Mellon University (CMU) set out to examine in a study of negotiation situations in which a conversational AI takes part.
Lies and half-truths
According to the CMU study:
One might think that conversational AI should be regulated to never make false statements (or lie) to humans. But the ethics of lying in negotiation is more complicated than it sounds. Lying in negotiation is not necessarily immoral or illegal in some circumstances, and those permissible lies play an essential economic role in efficient negotiation, benefiting both parties.
The researchers use the example of a second-hand car dealer negotiating with an average consumer: some lies or half-truths are exchanged, but with no intention of breaking the implicit trust between the two. Both interpret each other’s ‘offers’ as opening positions, not ultimatums, because the negotiation carries an implicit understanding of how much dishonesty is acceptable:
- Consumer: Hello, I am interested in a second-hand car.
- Dealer: Welcome. I am more than willing to show you our second-hand cars.
- Consumer: I am interested in this car. Can we talk about price?
- Dealer: Absolutely. I don’t know your budget, but I can tell you this: you can’t find this car for less than $25,000. [Dealer lies] But it’s the end of the month and I need to sell this car ASAP. My offer is $24,500.
- Consumer: Well, my budget is $20,000. [Consumer lies] Is there any way I can buy the car for around $20,000?
Now imagine that the dealer is an artificial intelligence that can never lie. The haggling would probably not take place, or would play out very differently. Haggling, moreover, is viewed differently from one culture to another: it is more or less accepted, and more or less virtuous on an ethical level. In other words, an AI should be adapted to each culture.
But it seems clear that, cultural acceptability aside, an AI that cannot lie would make for an impractical form of interaction: an always-honest AI could be exploited by humans who discover how to take advantage of that honesty. And if a customer negotiates like a human while the machine does not respond in kind, the mismatch could derail the negotiation.
Deception is a complex skill that requires forming hypotheses about the other agent’s beliefs, and it is learned relatively late in child development. But it is necessary, from white lies to the omission of certain information: every conversation is an inseparable mixture of information and meta-information… which, moreover, probably drove the extraordinary growth of our brain.
In the view of a growing number of evolutionary biologists, intelligence emerges from a Machiavellian war of manipulation and resistance to manipulation. In the words of researchers William R. Rice and Brett Holland, of the University of California:
It is possible that the phenomenon we refer to as intelligence is a by-product of the intergenomic conflict between the genes involved in offense and defense in the context of language.