Thursday, October 4, 2018

The state of Artificial Intelligence

It seems that a day doesn't go by without artificial intelligence (AI) making the headlines. On the one hand, we are being warned about the potential negative impact of AI on the job market; on the other, we are being made to believe that an AI-driven techno-utopia is right around the corner. The truth is, as is usually the case, far from black and white.


How We Got Here 

If you enter the term "artificial intelligence" into Google Trends, you'll notice a sharp uptick in search interest around the beginning of 2017. "I'd say in 2017 we reached the tipping or inflection point when it comes to enterprise adoption of AI technologies", says Kashyap Kompella, a contributing analyst at Real Story Group.

"We saw enterprise use cases emerge in text analytics, natural language processing, voice recognition, image recognition, and, of course, structured data analysis in a variety of domains." Indeed, 2017 was the year AI became a household name, but its story starts much earlier than that.

Thought-capable artificial beings can be found in tales dating back to antiquity, and the word "robot" was introduced in 1921 in a science fiction play titled R.U.R. (Rossum's Universal Robots) by the Czech writer Karel Čapek. The first work that we now recognize as an example of AI comes from 1943, and the field of AI research was born in 1956 at Dartmouth College.

Despite the lofty aspirations of AI researchers during the post-war era, progress soon slowed to a crawl and real-world use cases were nowhere to be found. That changed in the late 1990s thanks to increasing computational power and a greater emphasis on solving specific problems. In 1997, IBM reminded the whole world that computers had been getting smarter when its chess-playing computer Deep Blue defeated reigning world champion Garry Kasparov.

In 2011, another IBM supercomputer, Watson, participated in a "Jeopardy!" quiz show exhibition match and defeated two of the show's greatest champions by a significant margin. What made the spectacle especially memorable was the fact that Watson could not only understand questions posed in natural language but also respond to them using a voice synthesized in real time, something that had seemed unthinkable just a decade earlier.

Today, just a few years after Watson's incredible performance, computers that understand questions posed in natural language and respond to them in real time with a synthesized voice are everywhere, including in our pockets. From Apple's Siri to Amazon's Alexa, Microsoft's Cortana, and Google Assistant, the technology that was once a dream is now a reality, and we have all helped to make it happen.


Big Data Revolution

"I've been working in AI for now more than 30 years and in the past eight years there are things that have occurred that I never thought would happen in my lifetime", said Adam Cheyer, co-founder of Viv Labs. "I believe within the next two to three years you will use an assistant for everything you use the web and apps for and it will be better than either of them."

The reason AI is suddenly so much more useful despite having been around for decades has everything to do with the digitization of virtually every aspect of our lives. We have arrived at a point where we create around 2.5 quintillion bytes of data each day. In fact, 90 percent of all the data that has ever been generated was produced in the last two years alone, and our data output will only increase in the future.

The enormous quantities of data we produce, collectively referred to as big data, have been instrumental in an area of AI called deep learning. "The [deep learning] system basically learns by itself using a lot of data and computation. If you keep showing it pictures of an orange, it eventually figures out what an orange is - or a Chihuahua versus a Labrador versus a small pony", explains Nvidia CEO Jensen Huang.
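To make that description slightly more concrete, the minimal Python sketch below (using PyTorch, with invented stand-in data and made-up class counts) shows the basic supervised training loop behind such a system: show the network labeled examples over and over, and let an optimizer adjust its weights until its guesses match the labels. A real image classifier would be a much larger convolutional network trained on millions of labeled photographs.

# Illustrative sketch only: the data, class names, and network size are invented.
import torch
import torch.nn as nn

# Hypothetical stand-in data: 1,000 tiny 32x32 RGB images with 3 labels
# (say "orange", "Chihuahua", "Labrador").
images = torch.randn(1000, 3 * 32 * 32)
labels = torch.randint(0, 3, (1000,))

model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 3),          # one output per class
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Keep showing it pictures": repeated passes over the data nudge the weights
# until the network's predictions match the labels.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()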

According to an analysis of more than 400 use cases across 19 industries and nine business functions by McKinsey & Company, deep learning is among the AI techniques most likely to be used in real-world AI applications. The analysis further suggests that, depending on the industry, deep learning techniques have the potential to deliver between 30 and 128 percent more value than traditional analytics techniques.

Current Use Cases

Use cases for AI have emerged in nearly every industry, and some industries are already being transformed by it:

Automotive: 

Just a few years ago, it would have been unthinkable for a vehicle to drive by itself, but multiple car manufacturers now offer vehicles with self-driving capabilities. Tesla's fleet has accumulated over a billion miles of autonomous driving, and the company is getting ready for its first autonomous coast-to-coast drive. It will still take some time before self-driving cars are seen as first-class citizens on the road, but mining companies have been relying on fleets of autonomous trucks for years as an efficient way to mitigate the effects of widespread labor shortages in the industry.

Customer support: 

It's estimated that the global chatbot market will be worth around $1.3 billion by 2025, up from only $190 million in 2016. Chatbots, computer programs that conduct a conversation via auditory or textual methods, are quickly becoming a preferred way to get customer support because they are convenient to use, are available around the clock, and don't require users to install an additional application to interact with them. Companies such as Facebook, Lyft, Fandango, Spotify, Whole Foods, Sephora, Mastercard, Staples, The Wall Street Journal, and Marriott International are all using chatbots to improve their customer support.
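To illustrate the textual side of this in the simplest possible terms, the toy Python sketch below answers messages by matching keywords against canned responses. This is not how the commercial bots used by the companies above work; production chatbots typically rely on machine-learned intent classification and dialogue management, and every keyword and reply here is invented for the example.

# Toy, rule-based illustration of a text chatbot; not a production approach.
RESPONSES = {
    "hours": "We're available around the clock, 24/7.",
    "order": "I can help with that. What is your order number?",
    "refund": "Refunds are processed within 5 business days.",
}
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the canned answer whose keyword appears in the user's message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("What are your opening hours?"))  # -> "We're available around the clock, 24/7."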

Healthcare: 

AI has many use cases in healthcare. For example, Qure's deep learning algorithms have been trained on over a million curated X-rays and radiology reports to detect abnormal chest X-rays, screen for tuberculosis, and identify common abnormalities. The system has been successfully used in public health screening programs, reaching up to 99 percent accuracy. AI is also likely to play an important role in medical diagnosis. Around 10 percent of patient deaths and between 6 and 17 percent of all hospital complications are attributed to diagnostic errors, and automated diagnostic tools could be trained to provide a second opinion and warn about potential misdiagnoses.

Manufacturing: 

AI has many use cases in manufacturing, from real-time maintenance of equipment, using virtual models that pair the virtual and physical worlds and allow data to be analyzed and systems to be monitored so problems can be headed off before they occur, to generative design, in which a program generates all the possible permutations of a solution that meet certain constraints. The trend of automation and data exchange in manufacturing even has a name: Industry 4.0, commonly referred to as the fourth industrial revolution.
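To give a rough feel for the generative-design idea, the deliberately naive Python sketch below enumerates every permutation of a few design parameters and keeps only those that satisfy the constraints. Real generative-design tools rely on physics simulation and sophisticated optimization; all dimensions, constraints, and formulas here are invented for illustration.

# Brute-force sketch of generative design: enumerate candidates, filter by constraints.
from itertools import product

# Candidate bracket designs: every combination of thickness (mm), rib count, and material.
thicknesses = [2, 3, 4, 5]
rib_counts = [0, 1, 2, 3]
materials = {"aluminum": 2.7, "steel": 7.8}   # density in g/cm^3 (illustrative)

def weight(thickness, ribs, density):
    """Very rough stand-in for a weight estimate."""
    return density * (thickness * 10 + ribs * 4)

def strength(thickness, ribs):
    """Very rough stand-in for a strength estimate."""
    return thickness * 15 + ribs * 20

# Keep only permutations that meet the constraints, then rank by weight.
feasible = [
    (t, r, m)
    for t, r, m in product(thicknesses, rib_counts, materials)
    if strength(t, r) >= 80 and weight(t, r, materials[m]) <= 300
]
best = sorted(feasible, key=lambda d: weight(d[0], d[1], materials[d[2]]))
print(best[:3])  # the three lightest designs that satisfy the constraints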


Roadblocks to AI Adoption and Evolution

All the above-mentioned AI use cases are achieved by highly specialized algorithms that excel at performing specific tasks, often ones that take a human just a second to perform. Outside that narrow scope, however, the same algorithm can't accomplish things any toddler can do.

"When an app can look at the cancer diagnosis and provide results better than a human doctor, when an autonomous car can … control the car in a safer way than a human, when a weapon system is able to guide itself without the aid of the pilot -when it can only do that one task with reliability and accuracy -that is narrow AI", said Amir Husain, founder and CEO of SparkCognition and author of The Sentient Machine: The Coming Age of Artificial Intelligence. "Today, that is where we are poised. That is not to say we aren't going toward more generalized AI."

Artificial general intelligence, or AGI for short, is the kind of intelligence that would allow machines to successfully perform any intellectual task that a human can. Currently, over 40 organizations around the world are actively researching AGI, but there are still many challenges to overcome before we even come close to it.

To put things into perspective, consider that leading speech recognition systems are built using about 50,000 hours of audio data and corresponding transcripts. Before AlphaGo became the first computer program to defeat a world champion at the ancient Chinese board game Go, a team of leading AI experts had been working on it for years.

Still, progress is being made, as was demonstrated at the International Conference on Learning Representations (ICLR) in Vancouver, British Columbia. Researchers have, for example, described a technique that allowed a robot to tie a knot and navigate an office with no specific guidance after only one demonstration. Just a few years ago, it was unthinkable for AI to do even a fraction of the things it does today, so who knows what the future might bring.


Conclusion

Artificial intelligence has evolved significantly over the last few years and found application in virtually every industry. Many experts believe that it will have the same impact on the world as electricity did, but it's impossible to estimate what the timeline may be.

At the moment, we are still far away from the general intelligence imagined by early sci-fi authors, but technology is known to progress at an exponential rate.  


Sources