Primer – Artificial Intelligence (Part 2)
/ Chris Collins / 11.26.2018 /
This is the second post in the Primer Series, where I will cover emerging technologies over the next couple of months and attempt to explain them in the simplest way possible.

In Part 1, I covered the broad AI landscape as well as what the future could look like. As I briefly mentioned in that post, narrow AI is already pervasive in our everyday lives. Below I’ll dig into exactly how AI is being used today.
Overview of AI Use Cases
Natural Language Processing
AI can be used to help machines analyze and communicate in human language. Everything from basic spell check in Word to voice assistants like Alexa and Siri leverages NLP. A more recent example that has been widely rolled out is Gmail’s autocomplete feature, which suggests phrases depending on which words you begin a sentence with. As you can imagine, Gmail is taking in massive amounts of data every minute, and these suggestions will keep getting better (assuming your writing style is similar to the average person’s).
At a high level, NLP works as follows. For voice-based applications, the first step is translating spoken words into text using Hidden Markov Models, statistical models that map audio signals to the most likely sequences of words. Then, the system breaks the text down into parts of speech through coded grammar rules, incorporating machine learning to determine the context of what was said (a step known as semantic analysis). Finally, the system can categorize what was said in different ways depending on the intended use case.
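To make those last two steps concrete, here is a deliberately tiny sketch of the tagging-and-categorizing stage for a voice assistant. Everything in it is hypothetical: the lexicon, the intent table, and the `parse_command` function are stand-ins for the learned grammar rules and classifiers a real NLP system would use.

```python
import re

# Toy lexicon standing in for learned grammar rules (illustrative only)
POS_LEXICON = {
    "turn": "VERB", "play": "VERB", "set": "VERB",
    "the": "DET", "a": "DET", "some": "DET",
    "lights": "NOUN", "music": "NOUN", "timer": "NOUN",
    "on": "PARTICLE", "off": "PARTICLE",
}

# Keyword-to-intent table standing in for a trained semantic classifier
INTENTS = {
    "lights": "smart_home",
    "music": "media_playback",
    "timer": "productivity",
}

def parse_command(text: str) -> dict:
    """Tokenize a command, tag parts of speech, and categorize the intent."""
    tokens = re.findall(r"[a-z']+", text.lower())
    tags = [(tok, POS_LEXICON.get(tok, "UNKNOWN")) for tok in tokens]
    # Categorize by the first recognized noun (a very rough semantic step)
    intent = next(
        (INTENTS[tok] for tok, tag in tags if tag == "NOUN" and tok in INTENTS),
        "unknown",
    )
    return {"tokens": tokens, "tags": tags, "intent": intent}

print(parse_command("Turn on the lights")["intent"])  # smart_home
```

A production system would replace both tables with statistical models trained on millions of utterances, but the shape of the pipeline — tokenize, tag, categorize — is the same.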
Computer Vision
AI can be leveraged to understand and categorize digital images. The process entails machines processing raw visual inputs and making decisions based on those inputs. Facebook used an early implementation of computer vision to suggest tagging individuals in photos that its AI recognized from other photos on the platform. Now, the company is researching ways to facilitate photo retouching, such as removing red eyes or opening accidentally closed eyes using your own eyes from other photos.

Under the hood, machines can develop computer vision through neural networks that process a picture’s individual pixels. The process improves as the machine is fed as many pictures as possible, so that it can learn to distinguish whatever it is being asked to look for. In this way, a system powering autonomous vehicles, for example, can only get better as more miles are driven and it has more data to base decisions on.
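To show the "one weight per pixel" idea at its absolute smallest, here is a sketch of a single artificial neuron (a perceptron) learning to tell a vertical bar from a horizontal bar in a 3x3 image. The images, labels, and training loop are all invented for illustration; real computer vision stacks use deep networks over millions of images, not one neuron over two.

```python
import random

# Tiny 3x3 "images" flattened to 9 pixels: a vertical bar (label 1)
# vs a horizontal bar (label 0)
VERTICAL   = [0, 1, 0,  0, 1, 0,  0, 1, 0]
HORIZONTAL = [0, 0, 0,  1, 1, 1,  0, 0, 0]
DATA = [(VERTICAL, 1), (HORIZONTAL, 0)]

def train_perceptron(data, epochs=50, lr=0.1):
    """Learn one weight per pixel: a single 'neuron' over raw pixel values."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(9)]
    b = 0.0
    for _ in range(epochs):
        for pixels, label in data:
            pred = 1 if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else 0
            err = label - pred  # -1, 0, or 1
            # Nudge each weight toward the correct answer
            w = [wi + lr * err * p for wi, p in zip(w, pixels)]
            b += lr * err
    return w, b

def classify(pixels, w, b):
    return 1 if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else 0

w, b = train_perceptron(DATA)
print(classify(VERTICAL, w, b), classify(HORIZONTAL, w, b))
```

The key point the sketch illustrates is the feedback loop in the paragraph above: every additional example nudges the weights, so more data means better discrimination.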
Pattern Recognition
AI can predict future outcomes based on how historical scenarios have played out. An important emerging application is the ability for machines to detect cancer faster and more accurately than doctors. The inputs in this case are x-rays related to the cancer type in question, in addition to the doctor’s diagnosis for each. Another use case is how alternative lenders judge a consumer’s ability and willingness to repay. These lenders go beyond traditional FICO scores and analyze data that correlates with creditworthiness, such as an applicant’s educational background, social network, and how long they spend reading through the terms of a loan application.

Pattern recognition is all about recognizing correlations between data points that lead to an output. Statistical methods are applied to the data set in order to assign probability values to particular outcomes given certain inputs. As always, the more data the algorithm has to work with, the better it will get at forecasting.
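In the simplest possible form, "assigning probability values to outcomes given inputs" is just counting how often an outcome followed those inputs historically. The sketch below does exactly that for a toy version of the lending example; the `HISTORY` records and the two features (has a degree, read the contract) are made up for illustration, and a real lender would use a proper statistical model over far more variables.

```python
from collections import defaultdict

# Hypothetical historical records: (has_degree, read_contract) -> repaid?
HISTORY = [
    ((True,  True),  True),  ((True,  True),  True),
    ((True,  False), True),  ((True,  False), False),
    ((False, True),  True),  ((False, True),  False),
    ((False, False), False), ((False, False), False),
]

def fit_frequencies(history):
    """Estimate P(repaid | features) as a simple frequency table."""
    counts = defaultdict(lambda: [0, 0])  # features -> [repaid, total]
    for features, repaid in history:
        counts[features][1] += 1
        if repaid:
            counts[features][0] += 1
    return {f: repaid / total for f, (repaid, total) in counts.items()}

probs = fit_frequencies(HISTORY)
print(probs[(True, True)])   # 1.0 (every such historical case repaid)
print(probs[(False, False)]) # 0.0 (none did)
```

Real pattern-recognition systems generalize this idea so that unseen feature combinations still get sensible probabilities, but the core move is the same: learn the input-to-outcome correlations from history, then apply them to new cases.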
Reasoning & Optimization
AI is able to predict required maintenance for equipment as well as augment process optimization. Farming is one beneficiary: AI can enable farmers to optimize crop yield while minimizing water consumption and energy usage. Plenty, for example, uses various IoT devices to measure variables such as temperature, moisture, and plant growth to optimize yield in its vertical hydroponic farms.

AI technology is well equipped to deal with optimization problems, which in any business means maximizing output for each unit of input. By analyzing how various inputs affect a desired outcome, algorithms are able to zero in on the conditions that generate the highest probability of achieving a goal. Eventually, AI will even be able to suggest additional data points worth collecting to better optimize all types of processes.
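The "zeroing in" described above can be sketched with the farming example. Assume, purely for illustration, that a model learned from sensor data predicts yield as a function of temperature and moisture (the `predicted_yield` function below is an invented stand-in, not Plenty’s actual model); the simplest optimizer then just tries candidate input combinations and keeps the best.

```python
# Hypothetical yield model: peaks at 25 degrees C and 60% moisture.
# In practice this function would be learned from IoT sensor data.
def predicted_yield(temp_c, moisture_pct):
    return 100 - (temp_c - 25) ** 2 - 0.1 * (moisture_pct - 60) ** 2

def find_best_conditions():
    """Exhaustively try input combinations and keep the best: the simplest
    way an algorithm can zero in on ideal growing conditions."""
    best = None
    for temp in range(10, 41):             # candidate temperatures, degrees C
        for moisture in range(30, 91, 5):  # candidate moisture levels, %
            score = predicted_yield(temp, moisture)
            if best is None or score > best[0]:
                best = (score, temp, moisture)
    return best

score, temp, moisture = find_best_conditions()
print(temp, moisture)  # 25 60
```

Grid search like this only works for a handful of inputs; with dozens of interacting variables, real systems swap in smarter search strategies, but the goal is identical: find the input settings that maximize the predicted outcome.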