Trust is extremely important in human interactions, and it is also a vital element of the relationship between humans and machines. Do you trust this software to do what it says? Is this brand of computers more reliable than its competitor? How to build trust in artificial intelligence (AI) is a question that also needs to be addressed.
Jesus Rodriguez, managing partner at Investor Labs, says, “Trust is a dynamic derived from the process of minimizing risk.” There are established ways to approach this with software: testing, auditing, and documentation all have a role in building the reputation of a software product. These practices are harder to apply to AI, however. Rodriguez neatly explains why: “In traditional software applications, their behavior is dictated by explicit rules expressed in the code; in the case of AI agents, their behavior is based on knowledge that evolves over time. The former approach is deterministic and predictable, the latter is non-deterministic and difficult to understand.”
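Rodriguez's distinction shows up concretely in how the two kinds of systems are tested. The sketch below contrasts an exact assertion on a rule-based function with a statistical, tolerance-based check on a learned model. The names `approve_loan` and `model_score`, and the noise term standing in for learned variability, are hypothetical illustrations, not anyone's real API.

```python
import random

def approve_loan(income: float, debt: float) -> bool:
    """Traditional software: behavior follows an explicit, auditable rule."""
    return income > 3 * debt

# Deterministic check: the same input always yields the same output.
assert approve_loan(income=90_000, debt=20_000) is True

def model_score(income: float, debt: float) -> float:
    """Stand-in for a learned model whose exact output we cannot predict."""
    noise = random.uniform(0.0, 0.2)  # stands in for learned variability
    return min(1.0, max(0.0, income / (3 * debt) - noise))

# Statistical check: we can only assert properties over many runs,
# within a tolerance, rather than exact outputs.
scores = [model_score(90_000, 20_000) for _ in range(1_000)]
assert sum(s > 0.5 for s in scores) / len(scores) > 0.95
```

The rule-based function can be verified once and for all; the model can only be characterized statistically, which is exactly why trust is harder to establish.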
So, what steps can we take to establish and measure trust in AI? At the moment, confidence in an AI product is highly subjective and often acquired without a clear understanding of the AI’s capabilities.
A team at IBM has proposed four pillars of trusted AI: fairness, robustness, explainability and lineage. What does each of them mean?
Fairness
“AI systems should use training data and models that are free of bias, to avoid unfair treatment of certain groups.” Establishing tests for identifying, curating, and minimizing bias in training datasets should be a key element of making AI systems fair; one simple example of such a test is sketched below.
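As a rough illustration, here is a minimal fairness test built on demographic parity: comparing the rate of favorable predictions across groups. The metric choice, the sample data, and the 0.25 threshold are all hypothetical; real audits use several metrics and domain-specific thresholds.

```python
def demographic_parity_difference(preds, groups):
    """Difference in positive-prediction rates between groups."""
    rate = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    return max(rate.values()) - min(rate.values())

# Hypothetical predictions (1 = favorable outcome) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
assert gap <= 0.25, "positive-outcome rates diverge too much between groups"
```

A check like this can run automatically every time the model is retrained, turning fairness from a one-off review into a regression test.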
Robustness
“AI systems should be safe and secure, not vulnerable to tampering or compromising the data they are trained on.” AI safety is typically associated with a model’s ability to build knowledge that incorporates the societal norms, policies, and regulations that correspond to well-established safe behaviors. The second half of the quote, resistance to tampering, can be probed directly, for example by checking that small perturbations of an input do not flip a model’s decision.
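Here is a minimal sketch of such a perturbation check. The `classify` stand-in model, the `epsilon` budget, and the number of trials are hypothetical choices for illustration; in practice you would probe a trained model with adversarial or noisy variants of real inputs.

```python
import random

def classify(features):
    """Stand-in model: flags a case as positive above a fixed score."""
    score = 0.8 * features[0] + 0.2 * features[1]
    return score > 0.5

def is_robust(features, trials=100, epsilon=0.01):
    """True if the decision is stable under small random perturbations."""
    baseline = classify(features)
    for _ in range(trials):
        noisy = [x + random.uniform(-epsilon, epsilon) for x in features]
        if classify(noisy) != baseline:
            return False
    return True

print(is_robust([0.9, 0.7]))   # far from the decision boundary: stable
print(is_robust([0.55, 0.3]))  # near the boundary: likely fragile
```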
Explainability
“AI systems should provide decisions or suggestions that can be understood by their users and developers.” We need to know how an AI system arrives at a specific decision and be able to explain that reasoning to the people affected by it.
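One widely used family of techniques is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. A large drop means the model leans heavily on that feature. The sketch below uses a hypothetical stand-in model and synthetic data purely to show the mechanics.

```python
import random

def model(row):
    """Stand-in model that relies almost entirely on the first feature."""
    return 1 if row[0] > 0.5 else 0

# Hypothetical labeled data: rows of [feature_0, feature_1], plus labels.
data   = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)
for i in range(2):
    shuffled_col = [row[i] for row in data]
    random.shuffle(shuffled_col)
    permuted = [row[:i] + [v] + row[i + 1:] for row, v in zip(data, shuffled_col)]
    print(f"feature {i}: importance = {baseline - accuracy(permuted):.2f}")
```

Here feature 0 shows a large importance score and feature 1 shows roughly zero, which matches how the stand-in model actually behaves; the same probe applied to a black-box model surfaces what it is really relying on.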
Lineage
“AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.” The history and evolution of an AI model are an important part of building trust in it.
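In practice, lineage often takes the form of structured metadata stored alongside each model artifact. The sketch below shows one possible shape for such a record; the field names and values are illustrative assumptions, and real systems typically track this in a model registry rather than a flat file.

```python
import json
from dataclasses import asdict, dataclass, field
from typing import Optional

@dataclass
class ModelLineage:
    model_name: str
    version: str
    training_data: str              # dataset identifier and version
    training_date: str
    trained_by: str
    parent_version: Optional[str]   # model this one was retrained from
    evaluation_results: dict = field(default_factory=dict)

record = ModelLineage(
    model_name="credit-risk-classifier",
    version="2.3.0",
    training_data="loans-dataset@v14",
    training_date="2024-06-01",
    trained_by="risk-ml-team",
    parent_version="2.2.1",
    evaluation_results={"auc": 0.87, "demographic_parity_gap": 0.03},
)

# Persist alongside the model artifact so every deployment is traceable.
print(json.dumps(asdict(record), indent=2))
```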
IBM also proposes publishing a Supplier’s Declaration of Conformity that documents how a system addresses the four key pillars of trusted AI. It’s a simple solution, and although it may not be the ultimate one, it represents progress in the world of AI.
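To make the idea concrete, here is a rough sketch of what a machine-readable declaration covering the four pillars might contain. The structure and every field value are purely illustrative assumptions; IBM’s published proposal (its FactSheets work) is considerably richer.

```python
# Hypothetical machine-readable Supplier's Declaration of Conformity,
# organized around the four pillars of trusted AI.
declaration = {
    "product": "credit-risk-classifier v2.3.0",
    "fairness": {
        "bias_metrics_reported": ["demographic_parity_gap"],
        "protected_attributes_tested": ["age_band", "gender"],
    },
    "robustness": {
        "adversarial_testing": "input-perturbation suite",
        "data_poisoning_review": True,
    },
    "explainability": {
        "method": "permutation importance",
        "audience": ["developers", "end users"],
    },
    "lineage": {
        "training_data": "loans-dataset@v14",
        "audit_trail": "model registry, versions 1.0.0-2.3.0",
    },
}

# A declaration that omits a pillar should fail review automatically.
for pillar in ("fairness", "robustness", "explainability", "lineage"):
    assert pillar in declaration, f"declaration missing the {pillar} pillar"
```

Keeping the declaration machine-readable means conformance can be checked automatically each time a new model version ships, rather than relying on a document that is read once and forgotten.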