Human beings are sociable by nature. We don't tend to exploit others. We experience empathy. We are pleased when justice is done. However, these tendencies may not be so clear-cut when it comes to artificial intelligence.
That, at least, is what a new study suggests. The study explored how humans may interact with machines in future social situations, such as encounters with self-driving cars, by asking participants to play a series of social dilemma games.
The participants faced four different forms of social dilemma, each presenting a choice between pursuing personal or mutual interests, but with different levels of risk and commitment. The researchers then compared what participants chose to do when interacting with an AI versus with anonymous people.
Study co-author Jurgis Karpus, a behavioral game theorist and philosopher at the Ludwig Maximilian University of Munich, notes that they found a consistent pattern:
People expected artificial agents to be as cooperative as other humans. However, they did not reciprocate that benevolence as much, and exploited the AI more than they exploited humans.
One of the experiments used in the study was the prisoner's dilemma. In this game, players accused of a crime must choose between mutually beneficial cooperation and self-interested betrayal. While participants took the risk with both humans and artificial intelligence, they betrayed the trust of the AI much more frequently, even while expecting their algorithmic partners to be as cooperative as humans.
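The incentive structure behind that choice can be sketched with a standard prisoner's dilemma payoff matrix. The numbers below are the classic textbook values, not the actual payoffs used in the study, and the `play` helper is purely illustrative:

```python
# Illustrative prisoner's dilemma payoff matrix (hypothetical values;
# the study's actual payoffs are not given in the article).
# Each entry maps (my_choice, partner_choice) -> (my_payoff, partner_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation rewards both
    ("cooperate", "defect"):    (0, 5),  # the cooperator is exploited
    ("defect",    "cooperate"): (5, 0),  # the defector exploits a trusting partner
    ("defect",    "defect"):    (1, 1),  # mutual betrayal leaves both worse off
}

def play(my_choice: str, partner_choice: str) -> tuple:
    """Return (my payoff, partner payoff) for one round."""
    return PAYOFFS[(my_choice, partner_choice)]

# Betraying a cooperative partner yields the highest individual payoff,
# which is exactly the temptation participants gave in to more often
# when the trusting partner was an AI.
print(play("defect", "cooperate"))     # (5, 0)
print(play("cooperate", "cooperate"))  # (3, 3)
```

The dilemma is that defection always pays more for the individual, yet mutual cooperation beats mutual defection for both players, so exploiting a partner who is expected to cooperate is the tempting move.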
The findings suggest that the benefits of smart machines could be limited by human exploitation. Take the example of a self-driving car trying to merge onto a road from a side street, waiting for human drivers to give way. The autonomous car tries to be polite and follow the rules, but human drivers may act more selfishly and refuse to let it merge. Should the autonomous car then be less cooperative and behave more aggressively in order to be more useful to humans (in this case, its own passengers)?