Human trust in AI: 5 challenges and how to overcome them


As artificial intelligence makes its way into so many aspects of how we live and do business, trust in AI is hard to find.

61% of people are unwilling to trust AI or are ambivalent about the technology, according to a global survey by the University of Queensland and KPMG Australia.

While people believe AI systems can be helpful, they're skeptical about their safety and fairness. "Many people feel ambivalent about the use of AI, reporting optimism and excitement, coupled with fear and worry," the researchers said.

Another study, by MITRE and The Harris Poll, found that only 48% of Americans surveyed believe AI is safe and secure.

Why humans don’t trust AI

From the virtual assistants on our phones to the AI that’s incorporated in healthcare, finance, and many other industries, AI is inescapable.

But humans have plenty of reasons to be skeptical of AI technology as it continues to advance with the meteoric rise of large language models:

  1. Lack of transparency. Many AI systems operate as a "black box," meaning they make decisions without providing clear explanations for how those decisions were reached. This lack of transparency can make it difficult for humans to understand and trust the reasoning behind AI-generated outcomes, especially when our children are involved. For example, school bus problems in Kentucky that left children stranded or not home until 10 p.m. have been blamed on a bus-routing vendor that touted its machine-learning tech.
  2. Bias and fairness issues. If the data AI systems are trained on contain biases, the systems are likely to inherit those biases, potentially leading to unfair or discriminatory outcomes. There are a number of examples here, including automated mortgage systems that charged Black and Latino borrowers higher interest rates and racial disparities in automated speech recognition systems. If humans perceive that an AI system is biased or unfair, they won't trust its decision-making capabilities.
  3. Lack of accountability. AI systems are typically designed to optimize for specific objectives, but they may not consider broader ethical or moral considerations. This lack of accountability can lead to distrust if humans perceive that AI systems prioritize certain objectives at the expense of others.
  4. Fear of job displacement. AI is often associated with the potential for job automation, which can create anxiety and fear among workers. This fear can contribute to a lack of trust in AI, as people may perceive it as a threat to their livelihoods. According to a McKinsey Global Institute study, activities that account for 30% of hours worked in the US could be automated. Certain job categories will be more heavily impacted, including office support and customer service.
  5. Ethical concerns. AI feeds on our data, raising serious ethical questions, such as privacy violations, surveillance, and the potential for malicious use. These concerns can make it difficult for humans to trust AI systems, particularly when they perceive that their rights or values are being compromised.

Building trustworthy AI

Not all AI systems suffer from these issues, and AI developers are taking steps to mitigate concerns with the technology. Elected leaders and regulators also are working to develop AI safeguards, including President Biden’s directive to federal agencies to uncover bias in AI and other new technologies.

Building trust in AI requires transparent and accountable development practices, unbiased training data, and clear communication about the capabilities – and limitations – of AI systems.

Some things to keep in mind:

  • Remember that AI isn't emotional; we need to react quickly when it fails. It doesn't recognize the precious difference between children on a school bus and a truckload of paper products. System failures can have devastating consequences, putting our loved ones in potential danger.
  • Responsible, ethical AI starts with following rules to avoid bias, protecting the privacy of users and their data, and reducing environmental impact. Companies can implement codes for the ethical development and use of AI, and follow government-led regulatory frameworks.
  • The human element. AI has the potential to unlock amazing breakthroughs in medicine and other industries, but to reach its full potential in a responsible and safe way, AI will always need human input and creativity.

The promise of AI is undeniable. But without taking the right steps, human trust will remain elusive, as will AI’s benefits.

At SAP, we’re committed to ethical and transparent development of AI. Our approach upholds the United Nations guiding principles on business and human rights and follows our own seven guiding principles, including placing data protection at the core of every product and service.

