In machines we trust?

Question Title: 
In machines we trust?
Short answer: 

Technology controls an important part of our lives. Every day we use many different devices, trusting that they will work. At ISTC, the Trust, Theory and Technology Group (T3) focuses on the interaction between humans and machines in order to understand what technological trust really means.

Extended answer: 

Complex social phenomena are closely associated with trust, which is a key factor in understanding how cooperation, economic exchanges and communication develop in society. But what about artificial objects? Can we still talk about trust? At ISTC, the Trust, Theory and Technology Group (T3) has shown that the answer is more complex than expected.

The starting point is that trust is one of the major issues for the success of machines. It is important to study people's trust in the computational infrastructure, because human-computer interaction is, at least implicitly, based on trust. Most of us don't know how our laptop works, but we keep switching it on every day without any fear that it will blow up.

However, trust is a much wider concept than security: a safe environment is not sufficient to produce trust, and trust can even be damaged if security is pushed too far by an invasive technology. Technology can easily provide security: every step of an online communication has procedures for transmitting users' data safely, e.g. cryptography, security protocols, biometric technologies and so on. This does not amount to trust, though. Imagine we have indeed obtained a secure environment: here agents can act freely and confidently because they are protected by technology. But this is not a real trust-building atmosphere, because trust can exist only where there is risk, when agents do perceive the possibility of being cheated yet decide to run some risk and trust their partners anyway. In the case of technology, users do not decide to engage in cooperation despite the risks they perceive: they accept technology simply because they do not see any risk. So a heavily technology-protected environment kills the possibility of trust: agents will feel safe, not trusting.

But in a world in which machines are becoming more and more autonomous, this is no longer enough. In fact, "safety" is a poor concept when applied to machines able to solve problems as well as, or better than, humans: how can we be sure they are actually reliable? Here the concept of trust becomes essential. The more autonomous an artificial agent is, the more we need a way to measure its trustworthiness.

For these reasons, the T3 group has come to the conviction that building trust in technology is a fundamental goal for using machines responsibly. This goal is not just a matter of protocols, architectures, mind design, clear rules and constraints: trust is in fact a mental state, one that is deeply rooted in the social context. For this reason, the T3 team is trying to develop computational models able to include the risk component in human-computer interaction. These models are also necessary for relations between artificial agents: since such agents can evolve autonomously, we need to keep this process under control if we want their evolution to be effective. Only this way will we really trust machines.
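To make the idea of a computational trust model with an explicit risk component more concrete, here is a minimal, purely illustrative sketch. It assumes a simple expected-utility view in which trust combines the trustor's estimate of the other agent's competence and willingness, and the decision to rely on it weighs the perceived gain against the perceived cost of failure. The class, function names and numbers are hypothetical and are not the T3 group's actual model.

```python
from dataclasses import dataclass


@dataclass
class TrustEvaluation:
    """Toy socio-cognitive trust estimate (illustrative assumption only)."""
    competence: float   # belief that the agent *can* do the task, in [0, 1]
    willingness: float  # belief that the agent *will* do the task, in [0, 1]

    def trustworthiness(self) -> float:
        # Combine the two beliefs into a single degree of trust.
        return self.competence * self.willingness


def decide_to_rely(evaluation: TrustEvaluation,
                   value_of_success: float,
                   cost_of_failure: float) -> bool:
    """Decide whether to delegate a task, keeping the risk component explicit.

    The trustor relies on the agent only if the expected gain outweighs
    the expected loss weighted by the perceived chance of failure.
    """
    p = evaluation.trustworthiness()
    expected_gain = p * value_of_success
    expected_loss = (1.0 - p) * cost_of_failure
    return expected_gain > expected_loss


if __name__ == "__main__":
    laptop = TrustEvaluation(competence=0.95, willingness=0.99)
    # A high perceived cost of failure raises the bar for relying on the machine.
    print(decide_to_rely(laptop, value_of_success=10.0, cost_of_failure=50.0))
```

The point of the sketch is only that risk stays visible in the decision: trusting here means accepting a possible loss, not assuming the loss is impossible.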

Contact: Rino Falcone

ISTC Group: Trust, Theory and Technology Group

Connection: 
TECHNOLOGY / COGNITION / SOCIETY