WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
It depends who you ask.
Back in the 1950s, the fathers of the
field, Minsky and McCarthy, described artificial
intelligence as any task performed by a machine that would have previously been
considered to require human intelligence.
Modern definitions of what it means to create intelligence are
more specific. Francois Chollet, AI researcher at Google and creator of the
machine-learning software library Keras, has said intelligence is tied to a
system's ability to adapt and improvise in a new environment, to generalise its
knowledge and apply it to unfamiliar scenarios.
"Intelligence is the efficiency with which you acquire new
skills at tasks you didn't previously prepare for," he said.
"Intelligence is not skill itself, it's not what you can
do, it's how well and how efficiently you can learn new things."
It's a definition under which modern AI-powered systems, such as
virtual assistants, would be characterised as having demonstrated 'narrow AI':
the ability to generalise their training when carrying out a limited set of
tasks, such as speech recognition or computer vision.
Typically, AI systems demonstrate at least some of the following
behaviours associated with human intelligence: planning, learning, reasoning,
problem solving, knowledge representation, perception, motion and manipulation,
and, to a lesser extent, social intelligence and creativity.
WHAT ARE THE USES FOR AI?
AI is ubiquitous today: it is used to recommend what you should buy
next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's
Siri, to recognise who and what is in
a photo, to spot spam, and to detect credit card fraud.
WHAT ARE THE DIFFERENT TYPES OF AI?
At a very high level, artificial intelligence can be split into
two broad types: narrow AI and general AI.
As mentioned above, narrow AI is what we see all around us in
computers today: intelligent systems that have been taught or have learned how
to carry out specific tasks without being explicitly programmed how to do so.
This type of machine intelligence is evident in the speech and
language recognition of the Siri virtual assistant on the Apple iPhone, in the
vision-recognition systems on self-driving cars, or in the recommendation
engines that suggest products you might like based on what you bought in the
past. Unlike humans, these systems can only learn or be taught how to do
defined tasks, which is why they are called narrow AI.
WHAT CAN NARROW AI DO?
There are a vast number of emerging applications for narrow AI:
interpreting video feeds from drones carrying out visual inspections of
infrastructure such as oil pipelines, organizing personal and business
calendars, responding to simple customer-service queries, coordinating with
other intelligent systems to carry out tasks like booking a hotel at a suitable
time and location, helping radiologists to spot
potential tumors in X-rays, flagging inappropriate content
online, detecting wear and tear in elevators from data gathered by IoT devices, generating
a 3D model of the world from satellite imagery, the list goes on and
on.
New applications of these learning systems are emerging all the
time. Graphics card designer Nvidia recently revealed an AI-based system called Maxine,
which allows people to make good-quality video calls, almost regardless of the
speed of their internet connection. The system reduces the bandwidth needed for
such calls by a factor of 10 by not transmitting the full video stream over the
internet and instead animating a small number of static images of the caller,
in a manner designed to reproduce the caller's facial expressions and movements
in real time and to be indistinguishable from the video.
However, as much untapped potential as these systems have,
ambitions for the technology sometimes outstrip reality. A case in point is
self-driving cars, which are themselves underpinned by AI-powered systems such
as computer vision. Electric car company Tesla is lagging some way behind CEO
Elon Musk's original timeline for the car's Autopilot system being upgraded to
"full self-driving" from the system's more limited assisted-driving
capabilities, with the Full Self-Driving option only recently rolled out to a
select group of expert drivers as part of a beta testing program.
WHAT CAN GENERAL AI DO?
General AI is very different, and is the type of adaptable
intellect found in humans, a flexible form of intelligence capable of learning
how to carry out vastly different tasks, anything from haircutting to building
spreadsheets, or reasoning about a wide variety of topics based on its
accumulated experience. This is the sort of AI more commonly seen in movies,
the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist
today – and AI experts are fiercely divided over how soon it will become a
reality.
A survey conducted among four groups of
experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick
Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would
be developed between 2040 and 2050, rising to 90% by 2075. The group went even
further, predicting that so-called 'superintelligence' –
which Bostrom defines as "any intellect that greatly exceeds the cognitive
performance of humans in virtually all domains of interest" – was expected
some 30 years after the achievement of AGI.
However, recent assessments by AI experts are
more cautious. Pioneers in the field of modern AI research such as Geoffrey
Hinton, Demis Hassabis and Yann LeCun say society is nowhere near
developing AGI. Given the skepticism of leading lights in the field
of modern AI and the very different nature of modern narrow AI systems to AGI,
there is perhaps little basis to fears that society will be disrupted by a
general artificial intelligence in the near future.
Indeed, some AI experts believe such
projections are wildly optimistic given our limited understanding of the human
brain, and that AGI is still centuries away.
WHAT ARE RECENT LANDMARKS IN THE DEVELOPMENT OF AI?
While modern narrow AI may be limited to performing specific
tasks, within their specialisms these systems are sometimes capable of
superhuman performance, in some instances even demonstrating superior
creativity, a trait often held up as intrinsically human.
There have been too many breakthroughs to put together a
definitive list, but some highlights include: in 2009 Google showed it was
possible for its self-driving Toyota Prius to complete more than 10 journeys of
100 miles each, setting society on a path towards driverless vehicles.
In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show
Jeopardy!, beating two of the best players the show had ever
produced. To win, Watson used natural language processing and analytics across
vast repositories of data to answer human-posed questions,
often in a fraction of a second.
In 2012, another breakthrough heralded AI's
potential to tackle a multitude of new tasks previously thought of as too
complex for any machine. That year, the AlexNet system decisively triumphed in
the ImageNet Large Scale Visual Recognition Challenge. AlexNet's accuracy was
such that it halved the error rate compared to rival systems in the
image-recognition contest.
AlexNet's performance demonstrated the power
of learning systems based on neural networks, a model for machine learning that
had existed for decades but that was finally realising its potential due to
refinements to architecture and leaps in parallel processing power made possible
by Moore's Law. The prowess of machine-learning systems at carrying out
computer vision also hit the headlines that year, with Google training
a system to recognise an internet favorite: pictures of cats.
The next demonstration of the efficacy of
machine-learning systems that caught the public's attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human
grandmaster in Go, an ancient Chinese game whose complexity stumped
computers for decades. Go has about 200 possible moves per turn, compared to
about 20 in chess. Over the course of a game of Go, there are so many possible
moves that searching through each of them in advance to identify the best play
is too costly from a computational point of view. Instead, AlphaGo was trained
how to play the game by taking moves played by human experts in 30 million Go
games and feeding them into deep-learning neural networks.
Training these deep learning networks can take
a very long time, requiring vast amounts of data to be ingested and iterated
over as the system gradually refines its model in order to achieve the best
outcome.
However, more recently Google refined the training process with AlphaGo Zero,
a system that played "completely random" games against itself, and
then learnt from the results. Google DeepMind CEO Demis Hassabis has also
unveiled a new version of AlphaGo Zero that has mastered the games of chess and
shogi.
And AI continues to sprint past new
milestones: a system trained by OpenAI has defeated the world's top players in
one-on-one matches of the online multiplayer game Dota 2.
That same year, OpenAI created AI agents that
invented their own language to cooperate and achieve their goal more effectively,
shortly followed by Facebook training agents to negotiate and even lie.
2020 was the year in which an AI system
seemingly gained the ability to write and talk like a human, about almost any
topic you could think of.
The system in question, known as Generative
Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on
billions of English language articles available on the open web.
From soon after it was made available for
testing by the not-for-profit organisation OpenAI, the internet was abuzz with
GPT-3's ability to generate articles on almost any topic that was fed to it,
articles that at first glance were often hard to distinguish from those written
by a human. Similarly impressive results followed in other areas, with its
ability to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.
But while many GPT-3 generated articles had an
air of verisimilitude, further testing found the sentences generated often
didn't pass muster, offering up superficially plausible but confused statements, as well as
sometimes outright nonsense.
There's still considerable interest in using
the model's natural language understanding as the basis of future services and
it is available to
select developers to build into software via OpenAI's beta API. It
will also be incorporated into future services available via Microsoft's Azure cloud
platform.
Perhaps the most striking example of AI's
potential came late in 2020, when the Google attention-based neural network
AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for
Chemistry.
The system's ability to look at a protein's
building blocks, known as amino acids, and derive that protein's 3D structure
could have a profound impact on the rate at which diseases are understood and
medicines are developed. In the Critical Assessment of protein Structure
Prediction contest, AlphaFold 2 was able to determine the 3D structure of a
protein with an accuracy rivaling crystallography, the gold standard for
convincingly modelling proteins.
Unlike crystallography, which takes months to
return results, AlphaFold 2 can model proteins in hours. With the 3D structure
of proteins playing such an important role in human biology and disease, such a
speed-up has been heralded as a landmark breakthrough for medical science, not
to mention potential applications in other areas where enzymes are used in biotech.
WHAT IS MACHINE LEARNING?
Practically all of the achievements mentioned
so far stemmed from machine learning, a subset of AI that accounts for the vast
majority of achievements in the field in recent years. When people talk about
AI today they are generally talking about machine learning.
Currently enjoying something of a resurgence,
in simple terms machine learning is where a computer system learns how to
perform a task, rather than being programmed how to do so. This description of
machine learning dates all the way back to 1959, when it was coined by Arthur
Samuel, a pioneer of the field who developed one of the world's first self-learning
systems, the Samuel Checkers-playing Program.
To learn, these systems are fed huge amounts
of data, which they then use to learn how to carry out a specific task, such as
understanding speech or captioning a photograph. The quality and size of this
dataset is important for building a system able to accurately carry out its
designated task. For example, if you were building a machine-learning system to
predict house prices, the training data should include not just the
property size but also other salient factors, such as the number of bedrooms or the
size of the garden.
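To make that concrete, here is a minimal sketch in Python using the scikit-learn library; the house features and prices are invented purely for illustration, not drawn from any real dataset.

```python
# A minimal sketch of a house-price predictor. Each row describes a house
# as (size in square metres, number of bedrooms, garden size in square
# metres); the numbers are made up purely for illustration.
from sklearn.linear_model import LinearRegression

X = [
    [50, 1, 0],
    [70, 2, 20],
    [90, 3, 50],
    [120, 4, 100],
]
y = [150_000, 210_000, 270_000, 360_000]  # asking price for each house

model = LinearRegression().fit(X, y)  # learn the mapping from features to price

# Estimate the price of an unseen 80 sq m, two-bedroom house with a 30 sq m garden.
print(model.predict([[80, 2, 30]]))
```

The trained model can then estimate a price for a property it has never seen, based purely on the patterns found in its training data.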
WHAT ARE NEURAL NETWORKS?
Key to machine learning success are neural
networks. These mathematical models are able to tweak internal parameters to
change what they output. During training, a neural network is fed datasets that
teach it what it should spit out when presented with certain data. In concrete
terms, the network might be fed greyscale images of the numbers between zero
and 9, alongside a string of binary digits – zeroes and ones – that indicate
which number is shown in each greyscale image. The network would then be
trained, adjusting its internal parameters, until it classifies the number
shown in each image with a high degree of accuracy. This trained neural network
could then be used to classify other greyscale images of numbers between zero
and 9. Such a network was used in a seminal paper showing the application of
neural networks published by Yann LeCun in 1989 and has been used by the US
Postal Service to recognise handwritten zip codes.
The structure and functioning of neural
networks is very loosely based on the connections between neurons in the brain.
Neural networks are made up of interconnected layers of algorithms, which
feed data into each other, and which can be trained to carry out specific tasks
by modifying the importance attributed to data as it passes between these
layers. During training of these neural networks, the weights attached to data
as it passes between layers will continue to be varied until the output from
the neural network is very close to what is desired, at which point the network
will have 'learned' how to carry out a particular task. The desired output
could be anything from correctly labelling fruit in an image to predicting when
an elevator might fail based on its sensor data.
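As a rough sketch of the greyscale-digit example described above, assuming the Keras library (created by Francois Chollet, mentioned earlier) and its bundled MNIST digit images, a small network can be trained in a few lines; the layer sizes and number of training passes are illustrative choices rather than a recommended recipe.

```python
# A minimal sketch of a neural network that classifies greyscale images of
# the digits 0-9. During training, the network's internal weights are
# repeatedly adjusted until its outputs match the labels.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to 0-1

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 greyscale image -> vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer of adjustable weights
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)     # adjust weights using the labelled examples
print(model.evaluate(x_test, y_test))     # loss and accuracy on unseen digit images
```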
A subset of machine learning is deep learning,
where neural networks are expanded into sprawling networks with a large number
of sizeable layers that are trained using massive amounts of data. It is these
deep neural networks that have fuelled the current leap forward in the ability
of computers to carry out tasks like speech recognition and computer vision.
There are various types of neural networks,
with different strengths and weaknesses. Recurrent Neural Networks (RNN) are a
type of neural net particularly well suited to Natural Language Processing
(NLP) – understanding the meaning of text – and speech recognition, while
convolutional neural networks have their roots in image recognition, and have
uses as diverse as recommender systems and NLP. The design of neural networks
is also evolving, with researchers refining a more effective form of deep neural network called long
short-term memory or LSTM – a type of RNN architecture used for
tasks such as NLP and for stock market predictions – allowing it to operate
fast enough to be used in on-demand systems like Google Translate.
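To illustrate what a recurrent, LSTM-based network looks like in code, here is a toy Keras sketch that classifies sequences of word IDs; the random data stands in for real tokenised text and exists only to show the shape of such a model.

```python
# A toy recurrent network with an LSTM layer, of the kind used for language
# and other sequence tasks. The data here is random and purely illustrative;
# a real NLP model would be trained on tokenised text.
import numpy as np
import tensorflow as tf

# 1,000 toy "sentences", each a sequence of 20 word IDs from a 5,000-word vocabulary,
# each with a binary label (for example, positive or negative sentiment).
x = np.random.randint(0, 5000, size=(1000, 20))
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),  # word ID -> vector
    tf.keras.layers.LSTM(32),                                   # reads the sequence in order
    tf.keras.layers.Dense(1, activation="sigmoid"),             # final yes/no prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2)
```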
WHAT ARE OTHER TYPES OF AI?
Another area of AI research is evolutionary
computation, which borrows from Darwin's theory of natural selection, and
sees genetic algorithms undergo random mutations and combinations between
generations in an attempt to evolve the optimal solution to a given problem.
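A toy genetic algorithm makes the idea clearer; the fitness function below is an arbitrary stand-in, and real evolutionary computation systems use far more sophisticated representations, crossover and selection schemes.

```python
# A toy genetic algorithm: a population of candidate solutions is repeatedly
# mutated and selected, gradually "evolving" towards the value that
# maximises a simple fitness function.
import random

def fitness(x):
    return -(x - 3.14) ** 2   # the best possible candidate is x = 3.14

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Keep the fittest half of the population...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and refill it with randomly mutated copies of the survivors.
    children = [s + random.gauss(0, 0.1) for s in survivors]
    population = survivors + children

print(population[0])  # the best candidate found, close to 3.14
```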
This approach has even been used to help
design AI models, effectively using AI to help build AI. This use of
evolutionary algorithms to optimize neural networks is called neuroevolution,
and could have an important role to play in helping design efficient AI as the
use of intelligent systems becomes more prevalent, particularly as demand for
data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on
using genetic algorithms to train deep neural networks for reinforcement
learning problems.
Finally, there are expert systems,
where computers are programmed with rules that allow them to take a series of
decisions based on a large number of inputs, allowing that machine to mimic the
behaviour of a human expert in a specific domain. An example of these
knowledge-based systems is an autopilot system flying a plane.
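A drastically simplified sketch shows the flavour of such rule-based reasoning; the function name, rules and thresholds below are invented for illustration and bear no relation to a real autopilot.

```python
# A toy rule-based "expert system": hand-written if-then rules map a set of
# inputs to a decision, mimicking how a domain expert might reason.
# All thresholds here are invented purely for illustration.
def autopilot_advice(altitude_ft, airspeed_kt, stall_speed_kt):
    if airspeed_kt < stall_speed_kt * 1.2:
        return "increase thrust"      # rule: too close to stall speed
    if altitude_ft < 1000:
        return "climb"                # rule: flying too low
    return "maintain course"          # default rule

print(autopilot_advice(altitude_ft=800, airspeed_kt=140, stall_speed_kt=100))
```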
WHAT IS FUELING THE RESURGENCE IN AI?
As outlined above, the biggest breakthroughs
for AI research in recent years have been in the field of machine learning, in
particular within the field of deep learning.
This has been driven in part by the easy
availability of data, but even more so by an explosion in parallel computing
power, with the use of clusters of graphics processing units
(GPUs) to train machine-learning systems becoming ever more prevalent.
Not only do these clusters offer vastly more
powerful systems for training machine-learning models, but they are now widely
available as cloud services over the internet. Over time the major tech firms,
the likes of Google, Microsoft, and Tesla, have moved to using specialised
chips tailored to both running, and more recently training, machine-learning
models.
An example of one of these custom chips is
Google's Tensor Processing Unit (TPU), the latest version of which accelerates
the rate at which useful machine-learning models built using Google's TensorFlow
software library can infer information from data, as well as the rate at which
they can be trained.
These chips are not just used to train up
models for DeepMind and Google Brain, but also the models that underpin Google
Translate and the image recognition in Google Photos, as well as services that
allow the public to build machine-learning models using Google's TensorFlow Research Cloud.
The third generation of these chips was unveiled at Google's I/O conference in
May 2018, and has since been packaged into machine-learning powerhouses called
pods that can carry out more than one hundred thousand trillion floating-point
operations per second (100 petaflops). These ongoing TPU upgrades have allowed
Google to improve its services built on top of machine-learning models, for
instance halving the time taken to train
models used in Google Translate.
WHAT ARE THE ELEMENTS OF MACHINE LEARNING?
As mentioned, machine learning is a subset of
AI and is generally split into two main categories: supervised and unsupervised
learning.
Supervised learning
A common technique for teaching AI systems is
by training them using a very large number of labelled examples. These
machine-learning systems are fed huge amounts of data, which has been annotated
to highlight the features of interest. These might be photos labelled to
indicate whether they contain a dog or written sentences that have footnotes to
indicate whether the word 'bass' relates to music or a fish. Once trained, the
system can then apply these labels to new data, for example to a dog in a photo
that's just been uploaded.
This process of teaching a machine by example
is called supervised learning and the role of labelling these examples is
commonly carried out by online workers, employed through
platforms like Amazon Mechanical Turk.
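As a minimal illustration of supervised learning, the sketch below teaches a scikit-learn classifier the 'bass' distinction from a handful of hand-labelled sentences; the sentences, labels and choice of model are purely illustrative.

```python
# A minimal sketch of supervised learning: a handful of hand-labelled
# sentences teach a classifier whether 'bass' refers to music or a fish.
# The sentences and labels are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "the bass line in that song is amazing",
    "he plays bass in a jazz band",
    "we caught a huge bass in the lake",
    "sea bass is delicious grilled",
]
labels = ["music", "music", "fish", "fish"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)   # learn from the labelled examples

# Apply the learned labels to a new, unlabelled sentence.
print(model.predict(["she tuned her bass before the gig"]))
```

In practice such a classifier would need many more labelled examples than this, which is exactly why labelling work is farmed out at scale, as described above.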
Training these systems typically requires vast
amounts of data, with some systems needing to scour millions of examples to
learn how to carry out a task effectively – although this is increasingly
possible in an age of big data and widespread data mining. Training datasets
are huge and growing in size – Google's Open Images Dataset has
about nine million images, while its labelled video repository YouTube-8M links to
seven million labelled videos. ImageNet, one of the
early databases of this kind, has more than 14 million categorized images.
Compiled over two years, it was put together by nearly 50,000 people – most of
whom were recruited through Amazon Mechanical Turk – who checked, sorted, and
labelled almost one billion candidate pictures.
In the long run, having access to huge
labelled datasets may also prove less important than access to large amounts of
compute power.
In recent years, Generative Adversarial
Networks (GANs) have been used in
machine-learning systems that only require a small amount of labelled data
alongside a large amount of unlabelled data, which, as the name suggests,
requires less manual work to prepare.
This approach could allow for the increased
use of semi-supervised learning, where systems can learn how to carry out tasks
using a far smaller amount of labelled data than is necessary for training
systems using supervised learning today.
Unsupervised learning
In contrast, unsupervised learning uses a
different approach, where algorithms try to identify patterns in data, looking
for similarities that can be used to categorise that data.
An example might be clustering together fruits
that weigh a similar amount or cars with a similar engine size.
The algorithm isn't set up in advance to pick
out specific types of data; it simply looks for data that can be grouped by its
similarities, for example Google News grouping together stories on similar
topics each day.
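A minimal sketch of that idea, using the k-means algorithm from scikit-learn on invented car data, shows how an algorithm can group items by similarity without ever being told what the groups mean.

```python
# A minimal sketch of unsupervised learning: k-means has no labels to work
# from, it simply groups cars by the similarity of the features given here
# (engine size in litres, weight in kg). The numbers are illustrative only.
from sklearn.cluster import KMeans

cars = [
    [1.0, 950], [1.2, 1000], [1.4, 1100],    # small city cars
    [3.0, 1900], [3.5, 2100], [4.0, 2200],   # large SUVs
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cars)
print(kmeans.labels_)   # similar cars end up with the same cluster ID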
Reinforcement learning
A crude analogy for reinforcement learning is
rewarding a pet with a treat when it performs a trick. In reinforcement
learning, the system attempts to maximise a reward based on its input data,
basically going through a process of trial and error until it arrives at the
best possible outcome.
An example of reinforcement learning is Google
DeepMind's Deep Q-network, which has been used to best human performance in a variety of
classic video games. The system is fed pixels from each game and from them
works out information about the state of play, such as the distance between objects on screen.
By also looking at the score achieved in each
game, the system builds a model of which action will maximise the score in different
circumstances, for instance, in the case of the video game Breakout, where the
paddle should be moved to in order to intercept the ball.
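The sketch below shows the trial-and-error idea in miniature, using tabular Q-learning on a made-up five-square corridor rather than a real game; the agent only ever sees its position and the reward for reaching the goal, yet learns which action maximises its score.

```python
# A toy illustration of reinforcement learning: tabular Q-learning on a
# five-square corridor. The agent learns by trial and error which action
# (0 = move left, 1 = move right) maximises its reward.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Explore occasionally, otherwise pick the action with the best estimate so far.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the estimate for this state-action pair towards the observed return.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# The learned policy: mostly 1s, i.e. keep moving right towards the reward.
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)])
```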
The approach is also used in robotics research, where
reinforcement learning can help teach autonomous robots the optimal way to
behave in real-world environments.
WHICH ARE THE LEADING FIRMS IN AI?
With AI playing an increasingly major role in modern software
and services, each of the major tech firms is battling to develop robust
machine-learning technology for use in-house and to sell to the public via
cloud services.
Each regularly makes headlines for breaking new ground in AI
research, although it is probably Google, with its DeepMind AlphaFold and
AlphaGo systems, that has made the biggest impact on public
awareness of AI.
WHICH AI SERVICES ARE AVAILABLE?
All of the major cloud platforms – Amazon Web Services,
Microsoft Azure and Google Cloud Platform – provide access to GPU arrays for
training and running machine-learning models, with Google
also gearing up to let users use its Tensor Processing Units –
custom chips whose design is optimized for training and running
machine-learning models.
All of the necessary associated infrastructure and services are
available from the big three: cloud-based data stores capable of holding
the vast amounts of data needed to train machine-learning models, services to
transform data to prepare it for analysis, visualisation tools to display the
results clearly, and software that simplifies the building of models.
These cloud platforms are even simplifying the creation of
custom machine-learning models, with Google offering a service that automates the
creation of AI models, called Cloud AutoML. This drag-and-drop
service builds custom image-recognition models without requiring the user to have any
machine-learning expertise.
Cloud-based, machine-learning services are constantly evolving.
Amazon now offers a host of AWS offerings designed to streamline the process of training up
machine-learning models and recently
launched Amazon SageMaker Clarify, a tool to help organizations root
out biases and imbalances in training data that could lead to skewed
predictions by the trained model.
For those firms that don't want to build their own
machine-learning models but instead want to consume AI-powered, on-demand
services, such as voice, vision, and language recognition, Microsoft Azure
stands out for the breadth of services on offer, closely followed by Google
Cloud Platform and then AWS. Meanwhile IBM, alongside its more general
on-demand offerings, is also attempting to sell sector-specific AI services
aimed at everything from healthcare to retail, grouping these offerings
together under its IBM Watson umbrella, and
having invested $2bn in buying The Weather Channel to unlock a
trove of data to augment its AI services.
WHICH OF THE MAJOR TECH FIRMS IS WINNING THE AI RACE?
Internally, each of the tech giants – and others such as
Facebook – use AI to help drive myriad public services: serving search results,
offering recommendations, recognizing people and things in photos, on-demand
translation, spotting spam – the list is extensive.
But one of the most visible manifestations of this AI war has
been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the
Google Assistant, and Microsoft Cortana.
A huge amount of tech goes into developing these assistants, which
rely heavily on voice recognition and natural-language processing, as well as
needing an immense corpus to draw upon when answering queries.
But while Apple's Siri may have come to
prominence first, it is Google and Amazon whose assistants have since overtaken
Apple in the AI space – Google Assistant with its ability to answer a wide
range of queries and Amazon's Alexa with the massive number of 'Skills' that
third-party devs have created to add to its capabilities.
Over time, these assistants are gaining
abilities that make them more responsive and better able to handle the types of
questions people ask in regular conversations. For example, Google Assistant
now offers a feature called Continued Conversation, where a user can ask follow-up
questions to their initial query, such as 'What's the weather like today?',
followed by 'What about tomorrow?' and the system understands the follow-up
question also relates to the weather.
These assistants and associated services can
also handle far more than just speech, with the latest incarnation of the
Google Lens able to translate text in images and allow you to search for
clothes or furniture using photos.
Despite being built into Windows 10, Cortana
has had a particularly rough time of late, with Amazon's Alexa now available
for free on Windows 10 PCs, while Microsoft revamped Cortana's role in the operating system to focus
more on productivity tasks, such as managing the user's schedule, rather than
more consumer-focused features found in other assistants, such as playing
music.
WHICH COUNTRIES ARE LEADING THE WAY IN AI?
It'd be a big mistake to think the US tech
giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo
are investing heavily in AI in fields ranging from ecommerce to autonomous
driving. China is pursuing a three-step plan to turn AI into a
core industry for the country, one that will be
worth 150 billion yuan ($22bn) by the end of 2020, with the aim of
becoming the world's leading AI power by 2030.
Baidu has
invested in developing self-driving cars, powered by its
deep-learning algorithm, Baidu AutoBrain. Following several years of
tests, its Apollo self-driving car has racked up more than three million miles of driving and carried
over 100,000 passengers in 27 cities worldwide.
Baidu launched a fleet of 40 Apollo Go
Robotaxis in Beijing this year and the company's founder has predicted that
self-driving vehicles will be common in China's cities within five years.
The combination of weak privacy laws, huge investment, concerted
data-gathering, and big data analytics by major firms like Baidu, Alibaba, and
Tencent, means that some analysts believe China will have an advantage over the
US when it comes to future AI research, with one analyst describing the chances
of China taking the
lead over the US as 500 to one in China's favor.
HOW CAN I GET STARTED WITH AI?
While you could buy a moderately powerful Nvidia GPU for your PC
– somewhere around the Nvidia GeForce RTX 2060 or faster – and start training a
machine-learning model, probably the easiest way to experiment with AI-related
services is via the cloud.
All of the major tech firms offer various AI services, from the
infrastructure to build and train your own machine-learning models through to
web services that allow you to access AI-powered tools such as speech,
language, vision and sentiment recognition on-demand.
Robotics and driverless cars
The desire for robots to be able to act autonomously and
understand and navigate the world around them means there is a natural overlap
between robotics and AI. While AI is only one of the technologies used in
robotics, it is helping robots move into new areas such as self-driving cars
and delivery robots, as well as helping robots learn new skills. At
the start of 2020, General Motors
and Honda revealed the Cruise Origin, an electric-powered driverless
car, and Waymo, the self-driving group inside Google parent Alphabet, recently
opened its robotaxi service to the general public in Phoenix, Arizona, offering a
service covering a 50-square-mile area of the city.
Fake news
We are on the verge of having neural networks that can create photo-realistic images or replicate
someone's voice in a pitch-perfect fashion. With that comes the
potential for hugely disruptive social change, such as no longer being able to
trust video or audio footage as genuine. Concerns are also starting to be
raised about how such technologies will be used to misappropriate people's
image, with tools already being created to convincingly
splice famous faces into adult films.
Speech and language recognition
Machine-learning systems have helped computers recognise what
people are saying with an accuracy of almost 95%. Microsoft's Artificial
Intelligence and Research group also reported it had developed a system able to transcribe spoken English as accurately as human
transcribers.
With researchers pursuing a goal of 99% accuracy, expect
speaking to computers to become increasingly common alongside more traditional
forms of human-machine interaction.
Meanwhile, OpenAI's language prediction model GPT-3 recently
caused a stir with its ability to create articles that could pass as being
written by a human.
Facial recognition and surveillance
In recent years, the accuracy of facial-recognition systems has
leapt forward, to the point where Chinese tech
giant Baidu says it can match faces with 99% accuracy, providing the
face is clear enough on the video. While police forces in western countries
have generally only trialled using facial-recognition systems at large events,
in China the authorities are mounting a nationwide program to connect CCTV
across the country to facial recognition and to use AI systems to
track suspects and suspicious behavior, and have also expanded
the use of facial-recognition glasses by police.
Although privacy regulations vary across the world, it's likely
this more intrusive use of AI technology – including AI that can recognize
emotions – will gradually become more widespread, although a growing backlash
and questions about the fairness of facial-recognition systems have led to
Amazon, IBM and Microsoft pausing or halting the sale of these systems to law
enforcement.
Healthcare
AI could eventually have a dramatic impact on healthcare,
helping radiologists to pick out tumors in x-rays, aiding researchers in
spotting genetic sequences related to diseases and identifying molecules that
could lead to more effective drugs. The recent breakthrough by Google's
AlphaFold 2 machine-learning system is expected to reduce the time taken during
a key step when developing new drugs from months to hours.
There have been trials of AI-related technology in hospitals
across the world. These include IBM's Watson clinical decision support tool,
which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and
the use of Google DeepMind systems by the UK's National Health
Service, where they will help spot eye abnormalities and streamline
the process of screening patients for head and neck cancers.
Reinforcing discrimination and bias
A growing concern is the way that machine-learning systems can
codify the human biases and societal inequities reflected in their training
data. These fears have been borne out by multiple examples of how a lack of
variety in the data used to train such systems has negative real-world
consequences.
In 2018, an MIT and
Microsoft research paper found that facial recognition systems
sold by major tech companies suffered from error rates that were significantly
higher when identifying people with darker skin, an issue attributed to
training datasets being composed mainly of white men.
Another study a year
later highlighted that Amazon's Rekognition facial recognition
system had issues identifying the gender of individuals with darker skin, a charge that was
challenged by Amazon executives, prompting one of the
researchers to address the points raised in the Amazon rebuttal.
Since the studies were published, many of the major tech
companies have, at least temporarily, ceased selling facial recognition systems
to police departments.
Another example of insufficiently varied training data skewing
outcomes made headlines in 2018, when Amazon scrapped a machine-learning
recruitment tool that identified male applicants as preferable.
Today research is ongoing into ways to
offset biases in self-learning systems.
AI and global warming
As the size of machine-learning models and the datasets used to
train them grows, so does the carbon footprint of the vast compute clusters
that shape and run these models. The environmental impact of powering and
cooling these compute farms was the subject of a
paper by the World Economic Forum in 2018. One 2019 estimate
was that the power required by machine-learning systems is doubling every 3.4
months.
The issue of the vast amount of energy needed to train powerful
machine-learning models was brought into
focus recently by the release of the language prediction model GPT-3,
a sprawling neural network with some 175 billion parameters.
While the resources needed to train such models can be immense,
and largely only available to major corporations, once trained the energy
needed to run these models is significantly less. However, as demand for
services based on these models grows, power consumption and the resulting
environmental impact again become an issue.
One argument is that the environmental impact of training and
running larger models needs to be
weighed against the potential machine learning has to make a significant
positive impact, for example through the more rapid advances in healthcare
that look likely following the breakthrough made by Google DeepMind's AlphaFold
2.
WILL AI KILL US ALL?
Again, it depends who you ask. As AI-powered systems have grown
more capable, so warnings of the downsides have become more dire.
Tesla and SpaceX CEO Elon Musk has claimed that AI
is a "fundamental risk to the existence of human civilization". As
part of his push for stronger regulatory oversight and more responsible
research into mitigating the downsides of AI, he set up OpenAI, a non-profit
artificial intelligence research company that aims to promote and develop
friendly AI that will benefit society as a whole. Similarly, the esteemed
physicist Stephen Hawking warned that once a sufficiently advanced AI is
created it will
rapidly advance to the point at which it vastly outstrips human capabilities,
a phenomenon known as the singularity, and could pose an existential threat to
the human race.
Yet the notion that humanity is on the verge of an AI explosion
that will dwarf our intellect seems ludicrous to some AI researchers.
Chris Bishop, Microsoft's director of research in Cambridge,
England, stresses how
different the narrow intelligence of AI today is from the general intelligence
of humans, saying of such fears: "Terminator and
the rise of the machines and so on? Utter nonsense, yes. At best, such
discussions are decades away."
WILL AN AI STEAL YOUR JOB?
The possibility of artificially intelligent systems replacing
much of modern manual labour is perhaps a more credible near-future
possibility.