How Can We Trust AI If We Don't Know How It Works (2024)

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

But AI systems have a major limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Why AI is unpredictable

Trust is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes.

Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables or “parameters” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it “learns” how to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before. It doesn’t memorize what each data point is, but instead predicts what a data point might be.
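To make the parameter-adjustment idea concrete, here is a minimal sketch in Python (using NumPy). It is purely illustrative: the toy data, network size and learning rate are all invented, and real systems have billions or trillions of parameters rather than a few dozen. The point is only that training nudges the weights until the network classifies points it has never seen, while the weights themselves explain nothing in human terms.

```python
# A minimal, hypothetical sketch: a tiny neural network whose "parameters"
# (connection weights) are adjusted as training data is presented, so it
# learns to classify points it has never seen before.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: classify 2-D points by whether x + y > 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# One hidden layer of 8 "neurons"; the parameters are W1, b1, W2, b2.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: compute predictions from the current parameters.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every parameter to reduce classification error.
    grad_p = p - y
    grad_W2 = h.T @ grad_p / len(X)
    grad_b2 = grad_p.mean(axis=0)
    grad_h = grad_p @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h / len(X)
    grad_b1 = grad_h.mean(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

# The trained network now predicts labels for unseen points, but nothing in
# the numbers stored in W1, b1, W2, b2 "explains" those decisions.
X_new = rng.random((5, 2))
print(np.round(sigmoid(np.tanh(X_new @ W1 + b1) @ W2 + b2), 2))
```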

Many of the most powerful AI systems contain trillions of parameters. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the AI explainability problem – the impenetrable black box of AI decision-making.

Consider a variation of the “Trolley Problem.” Imagine that you are a passenger in a self-driving vehicle, controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the benefit of being able to explain their decision. Their rationalization – shaped by ethical norms, the perceptions of others and expected behavior – supports trust.

In contrast, an AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.

AI behavior and human expectations

Trust relies not only on predictability, but also on normative or ethical motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a dynamic process, shaped by ethical standards and others’ perceptions.

Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s proving challenging.

The self-driving car scenario illustrates this issue. How can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.

Critical systems and trusting AI

One way to reduce uncertainty and boost trust is to ensure people are in on the decisions AI systems make. This is the approach taken by the U.S. Department of Defense, which requires that for all AI decision-making, a human must be either in the loop or on the loop. In the loop means the AI system makes a recommendation but a human is required to initiate an action. On the loop means that while an AI system can initiate an action on its own, a human monitor can interrupt or alter it.
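The control-flow difference between the two oversight patterns can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names and structure are invented for this essay and do not describe any real Department of Defense system.

```python
# Hypothetical sketch of "in the loop" vs. "on the loop" oversight.

def ai_recommend(situation):
    """Stand-in for an AI system producing a proposed action."""
    return f"proposed action for {situation!r}"

def human_in_the_loop(situation, human_approves):
    # "In the loop": the AI only recommends; a human must initiate the action.
    action = ai_recommend(situation)
    return action if human_approves(action) else None

def human_on_the_loop(situation, human_interrupts):
    # "On the loop": the AI initiates the action itself; a human monitor
    # can interrupt or alter it while it runs.
    action = ai_recommend(situation)
    return None if human_interrupts(action) else action

# Same recommendation, two different oversight policies.
print(human_in_the_loop("situation A", human_approves=lambda a: False))    # blocked
print(human_on_the_loop("situation A", human_interrupts=lambda a: False))  # proceeds
```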

While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.

Avoiding that threshold is especially important because AI is increasingly being integrated into critical systems, which include things such as electric grids, the internet and military systems. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.

Can people ever trust AI?

AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.

If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. More research in this area will hopefully shed light on this issue, ensuring that AI systems of the future are worthy of our trust.

This article was originally published on The Conversation. Read the original article.
