What are three current applications for AI?

Three Applications for Artificial Intelligence

Artificial Intelligence (AI) is on a trajectory to become ubiquitous in every area of our lives, with machine learning and large datasets now making it possible to teach machines to perform at human-level capacities. AI uses machine learning techniques to perceive the world, reason through solutions to problems, and make decisions, much as human intelligence does. In this post, we'll explore three current AI applications and the machine learning techniques that make them work.

1. Intelligent Personal Assistants

Even the least tech-savvy people encounter AI on their phones through an intelligent personal assistant like the iPhone's Siri, or through other voice-based services like Alexa on Amazon's Echo platform. These services let users control integrated applications with voice commands, using a variety of machine learning techniques to bridge the gap between the user's spoken words and application actions.

"Siri" by Sean MacEntee personal assistant AI

For instance, if you asked Siri to "Play Road Trip playlist" on your iPhone, Siri would first need to transform your speech into a text representation. This first step alone is a significant challenge in a country with accents as diverse as those in the United States; people with Southern accents, for example, often articulate words quite differently from speakers in other regions.

When Siri first rolled out, it struggled to correctly transcribe speech from people with these divergent accents. However, voice services like Siri and Alexa continually retrain their neural networks, using people's voices as a growing training dataset so the networks learn to recognize words regardless of accent.

Once the speech has been transcribed, voice services like Siri use standard Natural Language Processing (NLP) techniques to extract the text's meaning and activate the correct application. This involves parsing the text for the grammatical elements relevant to its integrated applications, interpreting their semantic meaning (using, for instance, topic modeling methods like Latent Dirichlet allocation), and finally responding by launching the appropriate music application to play your "Road Trip" playlist. The same process holds for any application Siri might interact with to assist you.
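
As a rough illustration of those last steps, here is a deliberately simplified Python sketch that maps a transcribed command to an application and an entity, then dispatches it. The keyword table, the playlist pattern, and the handler are invented for illustration; real assistants rely on trained NLP models (including topic models such as LDA) rather than hand-written rules.

```python
import re

# Hypothetical keyword-to-application table; a production assistant would use
# trained NLP models rather than hand-written rules like these.
INTENTS = {
    "play": "music",
    "call": "phone",
    "remind": "reminders",
}

def parse_command(text):
    """Map a transcribed command to (application, entity)."""
    words = text.lower().split()
    app = next((INTENTS[w] for w in words if w in INTENTS), None)
    # Pull out the playlist name, e.g. "Play Road Trip playlist" -> "Road Trip".
    match = re.search(r"play (.+?) playlist", text, re.IGNORECASE)
    entity = match.group(1) if match else None
    return app, entity

def dispatch(app, entity):
    """Hand the parsed request to the (hypothetical) integrated application."""
    if app == "music" and entity:
        print(f"Opening the music app and playing the '{entity}' playlist")
    else:
        print("Sorry, I didn't understand that request")

app, entity = parse_command("Play Road Trip playlist")
dispatch(app, entity)  # -> Opening the music app and playing the 'Road Trip' playlist
```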

2. Games

The potential for worthy artificially intelligent game opponents came to the public eye in 1997, when IBM's Deep Blue beat world champion Garry Kasparov at chess.

Still, many naysayers claimed that while winning at chess was a feat, it would be more impressive to win at a game as subtle and complex as Go. In 2016, though, Google's AlphaGo program managed to beat one of the world's top Go players, Lee Sedol, proving that even top human players can be beaten by an AI.

While Go is extremely complex, it is not so complex that machine learning techniques cannot train a machine to successfully play it.

"Game of Go" by Jaro Larnos is licensed under CC by 2.0.

"Game of Go" by Jaro Larnos is licensed under CC by 2.0.

For each move, AlphaGo performs a Monte Carlo tree search (MCTS). MCTS repeatedly selects promising moves down the game tree, expands the tree with an additional possible move if the game is not yet over, simulates how the game might progress from that point, and then backpropagates the simulated outcome up the current move sequence. In this way, the value of each candidate move can be estimated, as the sketch below illustrates.
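
Below is a minimal, generic MCTS sketch in Python showing the select / expand / simulate / backpropagate cycle with random playouts. The GameState interface it assumes (copy, legal_moves, play, is_terminal, result, player_to_move) is hypothetical, and AlphaGo's real search additionally uses its neural networks to guide selection and evaluate positions.

```python
import math
import random

class Node:
    """One node in the search tree; `move` is the move that led here."""
    def __init__(self, state, parent=None, move=None, mover=None):
        self.state = state                # assumed GameState (see lead-in)
        self.parent = parent
        self.move = move
        self.mover = mover                # player who played `move`
        self.children = []
        self.untried = state.legal_moves()
        self.wins = 0.0
        self.visits = 0

    def select_child(self, c=1.4):
        # Selection: UCB1 balances exploitation (win rate) and exploration.
        return max(
            self.children,
            key=lambda n: n.wins / n.visits
            + c * math.sqrt(math.log(self.visits) / n.visits),
        )

def mcts(root_state, iterations=1000):
    root = Node(root_state.copy())
    for _ in range(iterations):
        node, state = root, root_state.copy()

        # 1. Selection: walk down the tree along the best-scoring children.
        while not node.untried and node.children:
            node = node.select_child()
            state.play(node.move)

        # 2. Expansion: if the game is not over, add one unexplored move.
        if node.untried:
            move = node.untried.pop()
            mover = state.player_to_move
            state.play(move)
            child = Node(state.copy(), parent=node, move=move, mover=mover)
            node.children.append(child)
            node = child

        # 3. Simulation: play the rest of the game with random moves.
        while not state.is_terminal():
            state.play(random.choice(state.legal_moves()))

        # 4. Backpropagation: credit the outcome to every node on the path,
        #    from the perspective of the player who made that node's move.
        while node is not None:
            node.visits += 1
            if node.mover is not None:
                node.wins += state.result(node.mover)  # 1.0 win, 0.0 loss
            node = node.parent

    # Play the most-visited move at the root.
    return max(root.children, key=lambda n: n.visits).move
```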

To narrow the number of possible moves considered in the tree search, Google's team trained two neural networks on 30 million moves from games played by human experts, taking a description of the Go board before each move as input.

One network (the policy network) predicts the best move in each position, and the other (the value network) evaluates who has the upper hand in the overall match. Each network is extremely deep, with 12 layers that allow it to learn multiple levels of data abstraction.
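
To make the two roles concrete, here is a hedged PyTorch sketch of a small convolutional policy network and value network for a 19x19 board. The number of input feature planes (17 here), the channel counts, and the head designs are assumptions chosen for illustration; they do not reproduce AlphaGo's published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Predicts a probability for each of the 19 * 19 board points."""
    def __init__(self, planes=17, channels=64, layers=12):
        super().__init__()
        convs = [nn.Conv2d(planes, channels, 3, padding=1)]
        convs += [nn.Conv2d(channels, channels, 3, padding=1)
                  for _ in range(layers - 2)]
        self.convs = nn.ModuleList(convs)       # 11 conv layers...
        self.head = nn.Conv2d(channels, 1, 1)   # ...plus a 1x1 head, 12 total

    def forward(self, board):                   # board: (batch, planes, 19, 19)
        x = board
        for conv in self.convs:
            x = F.relu(conv(x))
        logits = self.head(x).flatten(1)        # (batch, 361)
        return F.softmax(logits, dim=1)         # move probabilities

class ValueNet(nn.Module):
    """Scores a position: roughly how likely the current player is to win."""
    def __init__(self, planes=17, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(planes, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.fc1 = nn.Linear(channels * 19 * 19, 256)
        self.fc2 = nn.Linear(256, 1)

    def forward(self, board):
        x = F.relu(self.conv1(board))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc1(x.flatten(1)))
        return torch.tanh(self.fc2(x))          # -1 (losing) .. +1 (winning)
```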

Once the networks had been trained on games played by human experts, the team had AlphaGo play against other instances of itself, using reinforcement learning to tune its network weights even further based on its successes and failures against itself.
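
The self-play step can be sketched as a simple policy-gradient (REINFORCE-style) update in which moves from won games become more likely and moves from lost games less likely. Everything below, including the self_play_game helper and its random stand-in data, is a hypothetical simplification rather than AlphaGo's actual training procedure.

```python
import torch

def self_play_game(policy, n_moves=30):
    # Hypothetical stand-in: a real implementation would have two copies of
    # `policy` play a full game of Go. Random tensors are used here only so
    # that the loop below runs end to end.
    boards = torch.randn(n_moves, 17, 19, 19)   # fake board feature planes
    moves = torch.randint(0, 361, (n_moves,))   # fake chosen move indices
    outcome = 1.0 if torch.rand(1).item() > 0.5 else -1.0  # +1 win, -1 loss
    return boards, moves, outcome

def reinforce_from_self_play(policy, optimizer, games=100):
    """Very simplified self-play policy-gradient loop (illustrative only)."""
    for _ in range(games):
        boards, moves, outcome = self_play_game(policy)
        probs = policy(boards)                               # (n_moves, 361)
        chosen = probs.gather(1, moves.unsqueeze(1)).squeeze(1)
        # REINFORCE: scale log-probabilities by the game outcome so that
        # winning moves become more likely and losing moves less likely.
        loss = -(outcome * torch.log(chosen)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Example wiring with the PolicyNet sketched above (also an assumption):
# policy = PolicyNet()
# optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
# reinforce_from_self_play(policy, optimizer, games=10)
```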

While mastering Go is a noble goal in and of itself, the creators of AlphaGo argue that these AI techniques are generalizable and may serve to enhance other domains such as climate modeling or disease analysis.

3. Intelligent robots that can navigate their environments

While there are many examples of remote-controllable robots in any toy store, intelligent robots are also well on their way to active deployment. Beyond the traction self-driving cars are gaining in the auto industry, we may also soon be looking at robots that can perform the many minute physical tasks a human can.

Valkyrie Robot image by NASA/JPL is in the Public Domain.

NASA, for instance, faces a considerable time delay in communications between Earth and Mars, meaning that humans would not be able to remotely control robots needed for emergency repairs and future settlement construction. For this reason, NASA is developing an autonomous robot, "Valkyrie," that can learn to act on its own and construct habitation spaces in preparation for human settlement of Mars.

When something needs repair, Valkyrie should be able to identify and fix it amid extreme Mars-like terrain, with varying gravity conditions and irregular ground surfaces. Researchers also talk about teaching the robots to handle dynamic objects coming at them (by throwing balls at them, for instance).

Like other AI technologies, robots need to be able to perceive the world and decide on a proper action to take. Unlike a service like Siri, however, intelligent robots take in information about their external environment through a variety of motion, visual, and sound sensors, and make physical decisions about how to move in the world.

Therefore, a lot of their machine learning training involves learning rules about how to move in different environmental conditions and accomplish specified tasks based on sensory input. Even the act of walking or climbing a set of stairs could require different robotic movements depending on the scenario.

As such, the robot requires a certain degree of autonomy, which is only possible through machine learning techniques. Thus, in the same way that Siri uses neural networks to learn how to interpret different American English accents, Valkyrie learns how to move in a variety of scenarios.
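
As a toy illustration of this perceive-decide-act loop, here is a short Python sketch in which hypothetical sensor readings are mapped to movement adjustments by hand-written rules. In a real robot like Valkyrie, a learned controller would replace those rules and the sensors would be far richer.

```python
import random

ACTIONS = ["step_forward", "shorten_stride", "shift_weight", "stop"]

def read_sensors():
    # Stand-in for real IMU / force / vision sensors (values are simulated).
    return {
        "tilt_deg": random.uniform(-15, 15),      # body tilt
        "foot_force_n": random.uniform(0, 900),   # ground contact force
        "obstacle_m": random.uniform(0.1, 3.0),   # distance to nearest obstacle
    }

def choose_action(sensors):
    # A learned policy would replace these hand-written rules.
    if sensors["obstacle_m"] < 0.3:
        return "stop"
    if abs(sensors["tilt_deg"]) > 10:
        return "shift_weight"
    if sensors["foot_force_n"] < 100:   # foot not yet firmly planted
        return "shorten_stride"
    return "step_forward"

def control_loop(steps=5):
    # Perceive -> decide -> act, repeated every control cycle.
    for _ in range(steps):
        sensors = read_sensors()
        action = choose_action(sensors)
        print(f"{sensors} -> {action}")

if __name__ == "__main__":
    control_loop()
```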

NASA notes that the same technology may also make a direct impact in disaster areas on Earth, where it is too dangerous for human crews to help people but robots could be safely deployed. In a sense, researchers working on the Valkyrie robots may be creating real-life superheroes who can go where ordinary humans cannot.

Final Remarks

Artificial intelligence has the capacity to play a part in nearly everything we do, performing even the most complex human tasks. While most of the population has, at this point, only come into contact with personal assistant AI programs like Siri, similar machine learning and AI techniques are being used to build disaster-zone robots and machines that solve extremely complex problems, such as mapping out game scenarios in Go.

However, these are just three examples of how machine learning and artificial intelligence have been applied to solve problems; they are by no means the only ones. Artificial intelligence that learns will no doubt continue to expand beyond these narrow domains and touch every field that requires human-level decision-making and action.

