Can We Solve AI’s ‘Trust Problem’?

To address users’ wariness, makers of AI applications should stop overpromising, become more transparent, and consider third-party certification.

The sad fact is that many people don’t trust decisions, answers, or recommendations from artificial intelligence. In one survey of U.S. consumers, when presented with a list of popular AI services (for example, home assistants, financial planning, medical diagnosis, and hiring), 41.5% of respondents said they didn’t trust any of these services. Only 9% said they trusted AI with their finances, and only 4% trusted AI in the employee hiring process.1 In another survey, 2,000 U.S. consumers were asked, “When you think about AI, which feelings best describe your emotions?” “Interested” was the most common response (45%), but it was closely followed by “concerned” (40.5%), “skeptical” (40.1%), “unsure” (39.1%), and “suspicious” (29.8%).2

What’s the problem here? And can it be overcome? I believe several issues need to be addressed if AI is to be trusted in businesses and in society.

Rein in the Promises

The IT research firm Gartner suggests that technologies like cognitive computing, machine learning, deep learning, and cognitive expert advisers are at the peak of their hype cycle and are headed toward the “trough of disillusionment.”3

Vendors may be largely to blame for this issue. Consider IBM’s very large advertising budget for Watson and its extravagant claims about Watson’s abilities. One prominent AI researcher, Oren Etzioni, has called Watson “the Donald Trump of the AI industry — [making] outlandish claims that aren’t backed by credible data.”4

Source: MIT Sloan Management Review
