Pinning down Artificial Intelligence
Artificial Intelligence is hard to pin down as a concept, but most definitions of AI share similar elements. For example, both definitions below describe AI as a technology that is responsive to its environment, learns from it, and operates with relative autonomy.
“Artificial Intelligence (AI) can be used to indicate any technology (software, algorithm, a set of processes, a robot, etc.) that is able to function appropriately and with foresight in its environment” – N. J. Nilsson
“Artificial intelligence (AI) refers to systems that show intelligent behaviour: by analysing their environment they can perform various tasks with some degree of autonomy to achieve specific goals” – European Commission
However, “AI is not a well-defined technology and no universally agreed definition exists”.
In this article, we describe AI through a collection of technological approaches, innovation frontiers and business goals.
A first distinction we make is between Data Science and Artificial Intelligence. A good way to distinguish the two is to examine their end products: insights for Data Science, actions and outcomes for Artificial Intelligence. The distinction is not without overlap: Data Science methods are often used to design models for AI, and AI applications often have humans in the loop. In practice the distinction is therefore too vague to serve as a strict boundary. Instead, we will note when a company or industry leans more towards Data Science (insights) or Artificial Intelligence (actions and outcomes).
New AI approaches are primarily fueled by processing power and data
AI technologies require three crucial elements in order to be successful.
- Processing power: More computing power and storage at reasonable costs enable AI to process more complex calculations in less time.
- Data: An increase in the amount of available high-quality (annotated and labelled) data fuels the viability and usefulness of algorithms.
- Algorithms: The rules by which an AI program operates. Innovation in algorithms accelerates AI developments and enables new possibilities.
The biggest differentiators in recent AI innovations are not the algorithms themselves, but the rapid growth of processing power and the sharp increase in available data.
Today’s AI is driven by Machine Learning
Traditional algorithms are described as rule-based: systems built on them take input and follow a set of pre-defined rules and instructions to generate output. The current uptake in AI is largely due to the application of Machine Learning algorithms, whose performance improves as they are exposed to more data over time. These algorithms use (dynamic) input to derive machine-made patterns from the information and translate these into insights and actions.
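The contrast can be sketched in a few lines of code. This is a deliberately toy illustration, not a real spam filter: the rule-based version applies a hand-written threshold, while the "learned" version derives its threshold from labelled examples.

```python
# Toy contrast between a rule-based and a machine-learned classifier.
# The spam-filter framing and all data below are invented for illustration.

def rule_based_spam_filter(message: str) -> bool:
    """Rule-based: a human hard-codes the rule (3+ exclamation marks = spam)."""
    return message.count("!") >= 3

def learn_threshold(examples: list[tuple[str, bool]]) -> int:
    """Machine-learned: pick the exclamation-mark threshold that best
    separates the labelled examples, instead of hard-coding it."""
    best_threshold, best_accuracy = 0, 0.0
    for threshold in range(10):
        correct = sum((msg.count("!") >= threshold) == is_spam
                      for msg, is_spam in examples)
        accuracy = correct / len(examples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold

training_data = [
    ("Win money now!!!!", True),
    ("Free prize!!!", True),
    ("Meeting at noon", False),
    ("See you tomorrow!", False),
]
threshold = learn_threshold(training_data)
```

The key difference is where the rule comes from: in the first function it is fixed by the programmer; in the second it is a parameter fitted to data, so exposing the learner to new labelled examples can change its behaviour.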
Types of Machine Learning-Based AI:
- Supervised Learning: Data labelled by a human is fed to an algorithm that models the relationships between the labels and the input values.
- Unsupervised Learning: Unlabelled data is put through an algorithm that identifies rules, detects patterns, and summarizes and groups data points to derive insights.
- Reinforcement Learning: An autonomous, self-teaching system learns by trial and error to achieve the best outcomes. It performs actions aiming to maximize rewards.
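To make the supervised case concrete, here is a minimal sketch of the fit/predict cycle: a nearest-centroid classifier that "trains" by storing the mean of the labelled input values per label, then assigns a new point to the label with the closest mean. The clusters and labels are invented for illustration; real systems use richer models and far more data.

```python
# Minimal supervised-learning sketch: a nearest-centroid classifier.
# Training models the relationship between labels and input values by
# averaging the points seen under each label.
from collections import defaultdict
import math

def fit(points: list[tuple[float, float]], labels: list[str]) -> dict:
    sums = defaultdict(lambda: [0.0, 0.0, 0])   # per label: x-sum, y-sum, count
    for (x, y), label in zip(points, labels):
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict(centroids: dict, point: tuple[float, float]) -> str:
    # Assign the label whose learned centroid is nearest to the new point.
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], point))

# Labelled training data: two clearly separated clusters.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (9.0, 9.0), (10.0, 9.0), (9.0, 10.0)]
y = ["low", "low", "low", "high", "high", "high"]
model = fit(X, y)
```

Unsupervised learning would start from the same points without `y` and discover the two clusters itself; reinforcement learning replaces the labelled dataset with rewards earned through trial and error.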
Deep Learning: a highly complex subset of Machine Learning that uses algorithms mimicking the neural networks of the brain to progressively extract higher-level patterns and learn from vast amounts of (un)labelled data. Recent developments in Deep Learning have outperformed humans and classical computers at several goals, such as winning complex games (Go, StarCraft), translating text (Google Translate) and classifying radiology images.
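The core mechanic behind deep learning can be shown at toy scale: layers of weighted sums passed through nonlinearities, with the weights adjusted by gradient descent. The sketch below trains a tiny two-layer network on XOR, a pattern no single-layer (linear) model can capture. Real systems use frameworks such as PyTorch or TensorFlow and networks many orders of magnitude larger; this only illustrates the principle.

```python
# Toy two-layer neural network trained by gradient descent on XOR.
# Everything here (sizes, learning rate, epoch count) is illustrative.
import math, random

random.seed(0)
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]                      # XOR truth table

# Weights: 2 inputs -> 2 hidden units -> 1 output
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(inputs, targets))

loss_before = total_loss()
lr = 0.5
for _ in range(2000):                       # plain backpropagation, one example at a time
    for x, t in zip(inputs, targets):
        h, o = forward(x)
        d_o = 2 * (o - t) * o * (1 - o)     # squared-error and sigmoid derivatives
        for j in range(2):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
        b2 -= lr * d_o
loss_after = total_loss()
```

The "deep" in deep learning refers to stacking many such layers, which lets the network build progressively higher-level patterns out of the ones below.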
AI approaches often show one or more goal patterns
- Hyper-personalization: goal is to develop unique profiles of individual users that are adaptable over time for a variety of purposes. Example: a credit scoring system that looks at each individual’s personal credit history and uses it to create a personalised score.
- Goal-Driven Systems: goal is giving a system the ability to learn the optimal solution to a problem through trial and error. Example: A system learning to win a game of chess through thousands of games.
- Autonomous Systems: goal is to accomplish a task in an environment with little to no human involvement, minimising human labour. Example: The operating system of a self-driving car.
- Predictive Analytics & Decision Support: goal is to help humans make decisions through applications which suggest actions and predict future outcomes learned through data. Example: Using machine learning-based regression to predict failures in the electrical grid.
- Human Interaction: goal is to interact with humans through conversation, via voice or text. Example: A chatbot interpreting messages and generating responses.
- Pattern & Anomaly Detection: goal is to identify patterns in data and learn connections between variables that provide insight into whether given data fits an existing pattern or is an outlier. Example: A fraud detection system at a bank classifying transactions as ordinary or anomalous.
- Recognition: goal is to identify and understand unstructured content by segmenting and recognizing it as something that can be labelled and structured. Example: A facial recognition system that recognizes the curves and lines in a face as a pair of eyes, a nose, a mouth, etc.
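The predictive analytics pattern can be sketched with the simplest possible learner: an ordinary least-squares line fitted to historical data and extrapolated forward. The grid-load framing and all numbers below are invented for illustration; production systems use far richer models and features.

```python
# Minimal sketch of machine learning-based regression for decision support:
# fit a least-squares line to (hypothetical) historical data, then predict.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical history: grid load (x) vs observed fault rate (y).
loads = [10.0, 20.0, 30.0, 40.0, 50.0]
faults = [1.0, 2.2, 2.9, 4.1, 5.0]
slope, intercept = fit_line(loads, faults)
predicted = slope * 60.0 + intercept     # extrapolate to a higher load
```

The output is a suggestion, not an action: a human operator decides what to do with the predicted fault rate, which is what places this pattern under decision support rather than autonomy.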
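The pattern and anomaly detection goal can likewise be reduced to a minimal sketch: learn what "ordinary" looks like from past transactions, then flag new ones that deviate too far. Here the learned pattern is just a mean and standard deviation, and a z-score test marks outliers; real fraud systems learn far richer patterns, and the data is invented for illustration.

```python
# Simple anomaly-detection sketch: flag transactions that lie more than
# `threshold` standard deviations from the mean of past (normal) amounts.
import statistics

def zscore_flagger(history: list[float], threshold: float = 3.0):
    """Fit on historical amounts; return a function that scores new ones."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    def is_anomalous(amount: float) -> bool:
        return abs(amount - mean) / stdev > threshold
    return is_anomalous

# Hypothetical history of ordinary transaction amounts.
history = [20.0, 25.0, 22.0, 19.0, 24.0, 21.0, 23.0]
flag = zscore_flagger(history)
```

A transaction of 5000.0 would be flagged as anomalous against this history, while 23.0 fits the learned pattern and passes as ordinary.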
The next decade is expected to be the age of implementation
The spectrum of AI applications is broad and many innovations have yet to mature; maturity varies widely depending on the underlying technology and the area of application. Currently, supervised machine learning provides the most value in the deployment phase for industries where data is plentiful and structured, and where there is little room for ambiguity or uncertainty. Most new innovative applications are expected from other forms of machine learning; Deep Learning in particular is expected to revolutionise the tasks computers are capable of. The expectation is that the next decade will be an ‘age of implementation’. However, not all predictions and expectations about AI are realistic.