Artificial Intelligence for Business Leaders


These are the materials for the training Artificial Intelligence for Business Leaders, published under the MIT licence.

Authors: Marc Steel & dstar55

Table of Contents

Introduction

Understand AI

Who is who in AI

AI Use Cases

How AI is changing industry

What is AI

What is Artificial Intelligence

When people talk about artificial intelligence (AI), their minds often go straight to the movies. They imagine some company creating Skynet and manufacturing Terminators to initiate the extinction of humanity. This is a common misconception of AI: movies have associated it with sci-fi and fantasy, but in reality it is quite different.

AI is the use of computer science and programming to train machines to imitate human tasks and thought processes. It works by analysing data and surroundings to solve problems, incrementally learning for itself to continually improve. AI functions are initially based on, or trained using, instructions given by humans, and when these are vague or incomplete, in theory we could get Skynet-type consequences (albeit not quite so extreme).

Right now, all forms of AI use some sort of human intervention. That could be loading the training data or analysing the results and perfecting them. AI is not yet at a point where it has its own conscious decision-making process and can see the world as well as, or better than, a human. This is likely to remain quite a long way in the future, and it is important not to exaggerate its capabilities.

Instead of talking about AI in the abstract, discussing its applications helps to make better sense of the term and shows how it has impacted many parts of everyday life.

Applications of AI

You are probably exposed to AI every day. Whether it be using Facebook Messenger, talking to Alexa, watching Netflix, listening to Spotify or searching on Google, you are using a form of AI. Most of these examples are powered by an AI application known as machine learning.

Machine learning is the use of existing data to make future decisions. Algorithms are designed to enable machines to make choices based on the data they have been supplied with, without being explicitly programmed for each decision. One of the best ways to explain machine learning is by comparing it to how a baby learns to walk.

A baby would start by taking in the surroundings and watching other children or adults walking. Nobody explicitly tells a baby to move their left foot forward, then the right, then the left again, then the right and so on. In gathering data from the environment, a baby will learn for themselves and attempt their first steps. Initially, they might fail so next time they use a table to help them up. Over time, the baby connects all the dots provided by data and begins to walk.

A machine will learn in the same way. Let’s say you want the machine to separate pictures of cats from pictures of dogs. To start, you give it a large collection of labelled photos and it looks through them to find patterns. When it is presented with a new photo, it tries to work out whether it shows a cat or a dog. Every time the machine fails, it learns from the mistake and becomes more accurate. It can do this against vast volumes of data, and in theory it should approach near-perfect accuracy at the task.
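
To make this error-driven learning concrete, here is a minimal sketch of a perceptron, one of the simplest learning algorithms: it nudges its weights every time it misclassifies an example. The two "photo measurements" and all the values are invented purely for illustration.

```python
# A toy "learns from its mistakes" loop: a perceptron adjusts its weights
# each time it misclassifies an example. Feature values are invented
# stand-ins for measurements taken from cat/dog photos.
import numpy as np

# Each row: [ear pointiness, snout length]; label +1 = cat, -1 = dog
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.7, 0.1],
              [0.2, 0.9], [0.3, 0.8], [0.1, 0.7]])
y = np.array([1, 1, 1, -1, -1, -1])

w = np.zeros(2)
for epoch in range(10):
    for xi, target in zip(X, y):
        if np.sign(xi @ w) != target:   # a mistake...
            w += target * xi            # ...so nudge the weights towards the answer

print(np.sign(X @ w))  # after training: [ 1.  1.  1. -1. -1. -1.]
```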

Machine learning works using data, so a human never has to program every rule. It might even find patterns that a human never would have. A real-world example in Hong Kong has shown machine learning becoming more accurate than doctors in cancer diagnosis by analysing images of patients who have symptoms.

The digital world is full of data meaning machine learning has a major part to play. One of the key applications today is in conversational chatbots which use a form of machine learning known as natural language processing. Amazon Alexa can take voice commands and analyse them against a huge knowledge base to return the most relevant response to the user. Sensors on factory machines are being used to constantly record data and predict when maintenance could be needed before any problems arise. In contract law, algorithms can review thousands of articles simultaneously and potentially solve cases in a split second that would normally take a human weeks or months.

Those are just some applications, but they exemplify the value of treating data as a business asset. AI is more about data than about the fantasy we see in the movies.

Companies that use AI

AI applications are used in almost every company that we interact with. Here are a few popular examples.

  • Google – every Google search uses machine learning. It takes what the user writes or says and applies that to algorithms, returning the most relevant results. Google can even do this with video now as the AI has advanced over several iterations since 2010.

  • Netflix – the streaming service uses what we would call a recommender system. Instead of users choosing a show to watch, Netflix uses data to predict what its subscribers will want to watch next. Recent statistics have suggested as much as 80% of viewing choices come via the recommendations. Subscribers are even presented with different show thumbnails based upon their likely preferences. Spotify and Amazon use similar models.

  • Facebook – the social network is massively based on data. Users get ads based on their preferences, and it is never just a coincidence when they are relevant. Messenger is now a conversational chatbot used by major companies to complete retail actions without any human involvement. Facebook has utilised the purchase of WhatsApp to gain a greater understanding of how people converse.

The future prospect of AI

At the moment, we are only really at the start of an AI revolution. Applications like machine learning are still relatively new outside the big enterprises and have only been deployed with a very light touch. That said, many of the applications seem so normal that we don’t even remember they exist. Talking to Alexa to turn on your lights has become standard in some households. In the future, the same will most likely happen with driverless cars and robotics, but we are a little way off that yet.

What is becoming quite scary is that humans are beginning to trust AI applications more than other humans. In fact, even when in a retail store, over 60% of people surveyed said they would rather use their smartphone to answer questions than ask a human assistant. As new technology like driverless cars comes into play, this erosion of trust in humans could start a societal breakdown of sorts, which is perhaps why we are holding back just a bit with such game changers.

History of AI

Introduction

We tend to see artificial intelligence (AI) as a brand new development, but the history books tell us otherwise. No longer the domain of science fiction, robotics and artificial intelligence are becoming important business drivers. This article looks at how we got to where we are today.

The timeline below from the University of Queensland gives a brief overview of how AI has progressed over the years into becoming a standard part of university offerings.

[Timeline image: milestones in the history of AI]

Source – The University of Queensland

Whilst this timeline provides us with a great overview, we are going to start back in 1921, at a time when the term “robot” was first used.

The rise of robotics and AI

The term robot was first used by Czech writer Karel Capek almost 100 years ago in 1921, although he credited his brother Josef Capek as the inventor of the word. It comes from the word robota, which is associated with labour or work in Czech and other Slavic languages, and gives us an insight into its intended meaning.

In 1939, a humanoid robot named Elektro was presented at the World Fair, smoking cigarettes and blowing up balloons for the audience. This was a couple of years before Isaac Asimov formulated his “three laws of robotics” that most of us will be familiar with from movies like “I, Robot.” The three laws of robotics state:

  1. A robot may not injure a human being or through inaction allow a human being to be harmed

  2. A robot must obey orders given to it by human beings except where such orders would conflict with the first law

  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law

These laws still stand firm today and in their own way are built into artificially intelligent devices.

Moving into the 1940s and 1950s, the foundations of neural networks in machines started to be developed.
In many papers, this period is considered to be the true start of AI, as computer science started being used to solve real-world problems, moving away from just theory and fantasy.

During the Second World War, the British computer scientist Alan Turing worked to crack the “Enigma” code, which was used by German forces to send secure messages. Turing and his team did this using the Bombe machine, laying down the foundations for the application of machine learning: using data to imitate human tasks.

Turing was amongst the first to consider that a machine could converse with a human without the human knowing it was a machine. Many know this as the “imitation game”, which has since been made into a popular movie.

The standard was set for AI, and in the 50s and 60s research into the domain began to boom. In 1951, Marvin Minsky built the first neurocomputer, and a machine known as the Ferranti Mark 1 successfully used an algorithm to master checkers. At a similar time, John McCarthy, often dubbed the father of AI, developed the LISP programming language, which became very important in machine learning.

In the 1960s, there was more exploration around robotics as GM installed the Unimate robot to lift and stack hot pieces of metal. Frank Rosenblatt also constructed the Mark I Perceptron computer, which was able to learn skills by trial and error. By 1968, a mobile robot known as “Shakey” had been introduced, controlled by a computer the size of a room.

The AI Winters

Despite all this progress, during the 1970s AI hit a period known as the AI Winters, coined as an analogy to nuclear winter. Scientists were finding it very difficult to create truly intelligent machines as there simply wasn’t enough data to do so. This led to a slide in government funding as confidence began to dwindle.

Research slowed until the 1990s, apart from a few notable projects such as SCARA, a robotic arm invented for assembly lines in 1979, and research by Doug Lenat and his team in the 80s, which looked to codify what we call human common sense. Also, in 1988, the first version of a conversational chatbot was launched, and we saw a service bot in hospitals for the first time. Some of these developments created a spark and a renewed interest in the potential of AI going into the 1990s.

New Opportunities for AI

During the 90s and going into the new Millennium, companies started showing a new interest in AI and it had a second coming of sorts. The Japanese government announced plans to develop a new generation of computers to advance machine learning. In 1997, IBM’s Deep Blue computer famously defeated world chess champion Garry Kasparov and really propelled AI into the limelight.

Improvements in computer hardware meant companies had more data and therefore, greater opportunity to develop machine learning propositions. In 1999, although we were a way off seeing the likes of Pokemon Go, augmented reality was first coined as a framework and term. It seems as if these theories started to drive a wave of developments in the early 2000s as the Big 4 (Google, Amazon, Facebook and Apple) gained a major share of the AI market.

In 2005, autonomous cars took a huge leap, driving 183 miles without intervention. IBM introduced its Watson AI assistant in 2006, which later defeated a Jeopardy champion in the US. Google launched Street View in 2010, and in 2011 we met Siri for the first time. It wasn’t until 2015 that Alexa finally hit the marketplace; then drone deliveries started, Google Lens became a reality, smart homes were the in thing and people started having their own virtual reality platforms at home.

In fact, for all the history of AI, the last ten years have been huge, creating so much data and allowing us to change life as we know it. It is difficult to imagine life without AI in the modern digital world. This is quite an amazing feat when we consider the tumultuous times of the last century.

Future of AI

Introduction

Artificial Intelligence (AI) is not a new innovation, although it certainly feels like it. Its origins are usually dated back to the 1920s when the term “robot” was first coined but it came into prominence during the Second World War thanks to Turing and the cracking of the Enigma code.
Between the 1970s and the new Millennium, it struggled to gain traction as there simply wasn’t enough data for it to develop, meaning nobody was willing to invest. However, the 21st century has seen a huge resurgence and, since 2010, AI has dramatically changed everyday life.

If we think about some of the things we do today, many didn’t even exist 10 or 15 years ago. We use smartphones, check our social media, search using Google, talk to Alexa, stream on Netflix, listen to music on Spotify, buy products from Amazon, turn our lights on from an app, catch Pokemon in the streets, play virtual reality games from our home, get an Uber or order food from Deliveroo. All these things are standard and show how far we have come in a short space of time. Every single one of them has a foundation of data and artificial intelligence.

Whilst AI has brought us these amazing innovations, they are not that exciting anymore. If we expect Alexa to respond to voice commands, the fact that the technology is starting to do that better is great but not really a new development. A lot of the technology we have talked about falls under machine learning applications, ones that use data to make decisions. Google returns webpages that it predicts are most likely to be relevant, and Netflix uses data to give us the shows it predicts we will like. This idea has become an expectation rather than an innovation. So, what’s next?

The future of AI – beyond expectation

Experts in the field are primarily focusing on artificial general intelligence (AGI). An AGI machine is one which can perform any cognitive task that a human can. Whilst the technology we have is amazing and useful, it is not cognitive; it does not have a concept of the world, or a conscious mind if you will. Existing AI would not be able to pass the Turing test, in which a computer must prove its intelligence is indistinguishable from that of a human. That is the goal for AGI, and the future aims to move us closer to that point. Many believe that could be in the next 5 to 10 years, whereas others would suggest 20 to 30 years is more accurate.

A machine with AGI would be able to perform any task that a human being can without being programmed to do so. For example, if asked to hammer a nail, it would simply start guessing at a few approaches and continue to fail until it got it right. The basis of AGI would be trial and error, the same as when a human learns to walk, for example. We should probably remember that scientists don’t even fully understand the human brain yet, let alone how to train a machine to do the same.

Computing Power

One of the limitations to further development has been the lack of computing power available. Some of the aforementioned slow periods were also caused by a lack of data to work with. For AI to get close to the intelligence of a human brain, we need what is known as quantum computing. Whilst the technology has been released by the likes of IBM, it is still very much in its infancy and we don’t know how quickly it will become mainstream.

What this means is that whilst AI might be able to look at images of cars and build the models, it is some way off being creative enough to come up with its own ideas for building a car like a human would.

Emerging Technology

Whilst AGI might not be imminent, advances in cloud technology, edge computing and Big Data platforms should bridge the gap between AI and robotics. These technologies are helping to process data faster and more effectively, meaning robots can make better decisions and become more useful. For example, AI-powered robots can carry out dangerous tasks, knowing what they are doing thanks to existing datasets. Machine learning won’t just power Google, Facebook and Netflix but also more practical applications that make our lives safer.

Assistants will become predictive

As it stands, Alexa and Google Home respond to the commands we give them. What if they were able to start predicting what we needed? The same applies to other smart home devices. For example, a smart refrigerator can work out when you have run out of milk and order some to be delivered to your door without any intervention. Instead of asking Alexa to turn off the lights, smarter assistants will know when you need events to happen. As assistants gather data, they are becoming intelligent enough to be predictive.

Affective Computing

Emotional intelligence is something that AI traditionally lacks. If you say the same thing to Alexa in a happy tone or a sad tone, you receive the same response. Affective computing is a field that studies speech, body language and images to recognise our needs. For example, conversational chatbots can respond differently based on how we speak or type. This is done by monitoring pauses in speech or the pitch of a voice as a measure of emotion. The aim is to start making devices appear more human.

Changing Professions

Given that AI can diagnose medical conditions through image scanning and data, we will start to see changes in healthcare. Doctors will be able to care for patients rather than spending time on diagnosing them, because AI can carry out the task in a split second. AI can also review contracts, provide measurements for construction or pay out insurance claims. Professional interactions will slowly evolve over the next few years.

Let’s get digital

Everything will be digital. Technology such as blockchain can be used to verify our identity so we don’t need to carry ID cards. The movement towards cryptocurrencies like Bitcoin will continue. Ultimately, we will interact digitally, making everyday life far more efficient than it is today.

Understand AI

AI, Machine Learning and Deep Learning

Introduction

Although Artificial Intelligence (AI) has been around for some time, it is still common to get caught up in the buzz without truly understanding what it means. People think that AI is a robot that can do things a smart person would, knowing everything and being able to answer every question. This is what television and movies have led us to believe. Creating these ‘conscious’ machines is the goal of researchers and professionals but we are not close to that yet.

AI is classified into two groups. General AI is the concept explained above where machines can intelligently solve problems without human input. A machine with general AI capabilities would have cognitive abilities and interpret the environment around it. It would be able to process this information far quicker than any human could leading to these sci-fi ideas of superior beings. General AI currently is beyond our reach but as the volume of data in the world grows and computing power increases, we will get closer.

In the here and now, we are in a time of what is called narrow AI. A machine with narrow AI capabilities is one that operates from a predefined set of rules. This could be a Netflix recommendation engine or a voice command system like Alexa. Both are examples of artificial intelligence but are fed criteria or training data in order to function.
A good example is a driverless car which, although impressive, is still narrow AI as it is given a set of rules to operate by. Until a car can understand its environment and think for itself, this will always be the case.

Narrow AI applications are driven by two subsets of AI known as machine learning and deep learning. The best way of explaining the link between the three is that AI is an all-encompassing term, inside of which is machine learning and then within that we get more complex deep learning.

What Is Machine Learning?

Machine learning is often described as a method for realising AI. A computer or machine is loaded with vast amounts of data which it will use to train itself. The data might be labelled initially to make things easier. For example, if we want a machine to recognise photos of cats, we may load it with thousands of images of cats and dogs, labelling them appropriately. The machine will take a new image and find the label it matches to learn whether it is a cat or not.

Machine learning is the process of enabling machines to learn through data. The predictions the machine makes from that data are what we know as AI. Going back to the example of Alexa: Alexa receives a voice command, interprets it using an algorithm (known as natural language processing, or NLP), matches the result against all existing data stored in the cloud to find the appropriate response and sends that back as a reply. Alexa gives the impression of being a cognitive machine but is far from it.

There are four common machine learning methods.

  1. Supervised learning

This method takes existing data and trains a model to work out how to classify a new piece of data. For example, it could hold data on the symptoms of diabetes and when it receives blood test results of a new patient, it is able to diagnose accurately from the data. It will classify the patient as having diabetes or not having diabetes.
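
As a minimal sketch of this idea, using scikit-learn; the blood-test numbers and labels below are invented for illustration, not real medical data:

```python
# Minimal supervised-learning sketch: train on labelled examples,
# then classify a new, unseen patient.
from sklearn.tree import DecisionTreeClassifier

# Each row: [fasting glucose, BMI]; label 1 = diabetes, 0 = no diabetes
X_train = [[9.1, 31.0], [5.0, 22.5], [8.4, 29.2], [4.7, 21.0], [7.9, 33.1], [5.3, 24.0]]
y_train = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X_train, y_train)

# Classify a new patient's blood-test results
new_patient = [[8.8, 30.5]]
print(model.predict(new_patient))  # e.g. [1] -> classified as having diabetes
```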

  2. Unsupervised learning

Unlike supervised learning, these models will attempt to classify data without any prior knowledge. The algorithms look to find patterns themselves and put data into groups. A common example is something like customer purchasing behaviours. The algorithm won’t have existing labels and will decide on its own how to classify the data, a process often known as clustering. Imagine going to a party where everybody is a stranger. Your mind will probably classify people based on age, gender or clothing. You don’t know them but have still worked out the classifications.
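
A minimal sketch of clustering with scikit-learn’s k-means; the customer figures are invented for illustration:

```python
# Minimal unsupervised-learning sketch: group customers with k-means.
from sklearn.cluster import KMeans

# Each row: [annual spend, visits per month] for one customer
customers = [[120, 1], [150, 2], [900, 8], [950, 9], [500, 4], [480, 5]]

# Ask the algorithm to find 3 groups on its own -- no labels supplied
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 1 1 2 2] -- each customer assigned to a cluster
```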

  3. Semi-supervised learning

As the title suggests, this is a mix between supervised and unsupervised learning. In our data, some items are labelled but some are not. Where you have vast amounts of data, this can be quite common. A semi-supervised model would have some labelled data to know what classifications exist. It is then trained on the unlabelled data to define the boundaries of what it is looking at, and potentially to identify new classifications that the human did not specify when labelling.
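
A minimal sketch using scikit-learn’s SelfTrainingClassifier, with invented one-feature data; a label of -1 marks the unlabelled items:

```python
# Minimal semi-supervised sketch: only two points carry labels,
# the rest (-1) are unlabelled and get pseudo-labelled during training.
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.linear_model import LogisticRegression

X = [[1.0], [1.2], [0.9], [5.0], [5.2], [4.8]]
y = [0, -1, -1, 1, -1, -1]  # -1 means "unknown label"

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict([[1.1], [5.1]]))  # e.g. [0 1]
```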

  4. Reinforcement learning

This application is about positive and negative rewards for certain behaviours. This will be a common method in robotics, where machines learn to optimise behaviour by experiencing positive or negative results. For example, if a robot found a TV remote and decided to throw it, the remote would break, a negative result. However, pressing a button turns the TV on and produces a positive result, so it continues to do so. The robot will continue this process until it finds the best possible result.
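
A minimal sketch of the reward idea, written as a simple reward-averaging loop in plain Python; the actions and reward values are invented to mirror the TV-remote example:

```python
# Minimal reinforcement-learning sketch: the agent tries actions,
# observes rewards and gravitates towards the best-scoring action.
import random

actions = ["press_button", "throw_remote"]
value = {a: 0.0 for a in actions}   # estimated value of each action
count = {a: 0 for a in actions}

def reward(action):
    # Environment: pressing the button turns the TV on (+1), throwing breaks it (-1)
    return 1.0 if action == "press_button" else -1.0

for step in range(100):
    # Explore occasionally, otherwise exploit the best-known action
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    r = reward(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]  # incremental average of rewards

print(value)  # converges towards {'press_button': 1.0, 'throw_remote': -1.0}
```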

Whilst every AI-based project is unique as they all run from different datasets and rules, there are key algorithms that you will find in the library of virtually every Data Scientist.

What Is Deep Learning?

Deep learning is a subset of machine learning. In a sense, it is an evolved version of machine learning methods. It is inspired by the processing patterns of the human brain, known as neural networks. Whereas machine learning techniques will take an input of data and learn from it, deep learning neural networks learn through their own data processing.

Unlike in machine learning where even in unsupervised methods a human still jumps in if the model gets too confused, deep learning algorithms decide for themselves whether a prediction is accurate.

It is difficult to ensure deep learning neural networks don’t come up with incorrect conclusions but when it works, it can get us a step closer to general AI.

AlphaGo by Google is one of the most popular cases of machine learning. Google trained a machine to learn the board game Go, which requires a lot of intellect. Without being told what move to make, the machine learnt the rules itself and began to outperform humans. If the computer had been fed rules through machine learning, this might not be hugely impressive, but the fact it learnt how to win on its own is incredible.

There are many types of neural network that are used for different applications.

Conclusion

Narrow AI can be seen everywhere from GPS systems to Alexa and recommendation platforms. Machine learning and deep learning have benefitted from large investments at the start of the 21st century as consumers seek ways to become more efficient and have an easier life.

However, the ultimate goal is artificial general intelligence, a self-teaching system that can outperform humans across a wide range of disciplines. It is thought that this could be 30 years away or some say as long as a century but as computing power and data evolve, we will continue to see more amazing developments.

Machine Learning Algorithms

Introduction

In our previous article, we gave an overview of the difference between artificial intelligence (AI) and its subsets known as machine learning (ML) and deep learning (DL).

Both ML and DL have a wide range of uses across several industries and whilst the applications might be different, the algorithms and neural networks tend to have the same foundations.

This article talks about some of the most popular technical algorithms used in machine learning and what they do as well as examples of neural networks used for deep learning.

Machine Learning Algorithms

When you start trying to learn Data Science and begin to research programming platforms like R and Python, it can be intimidating. A lot of the techniques come with multi-page definitions and descriptions, and it is very difficult to put the detail to practical use. If you dream of becoming a true expert and earning the big salaries, you will definitely need a handle on the ins and outs of everything, but as a starter, this brief guide attempts to define a realistic beginning. (Source: https://blog.usejournal.com/machine-learning-algorithms-use-cases-72646df1245f)

  1. Linear Regression

This is one of the quickest algorithms for a machine learning beginner to master. Essentially, if we have a set of input variables (x) that are used to determine an output variable (y), the goal is to quantify the relationship between the two. This would be used in something like sales forecasting or risk assessment. It will show you what happens to the dependent variable, i.e. sales, when changes are made to the independent variables.
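
A minimal sketch with scikit-learn; the ad-spend and sales figures are invented for illustration:

```python
# Minimal linear-regression sketch: quantify how ad spend (x) relates to sales (y).
from sklearn.linear_model import LinearRegression

X = [[10], [20], [30], [40], [50]]   # ad spend (thousands)
y = [120, 210, 290, 405, 500]        # sales (units)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # slope and intercept of the fitted line
print(model.predict([[60]]))          # forecast sales at a new spend level
```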

  2. K-means Clustering

Used in applications such as grouping images together or detecting activity types in motion sensors as well as structured data use cases like customer segmentation. This unsupervised learning algorithm takes unstructured data and separates it into ‘K’ groups. It will classify the data and categorise it based on specific features.

  3. Logistic Regression

Logistic regression makes predictions based upon continuous input values after applying a transformation function. Unlike linear regression, the output is the likelihood of an event occurring rather than a precise number like a sales figure. This could be whether a student will pass a test or whether an employee is likely to be off sick, for example.
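
A minimal sketch with invented hours-studied data; note the output is a probability rather than a raw number:

```python
# Minimal logistic-regression sketch: the model outputs a likelihood.
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [3], [4], [5], [6]]   # hours studied
y = [0, 0, 0, 1, 1, 1]               # 0 = failed, 1 = passed

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[3.5]]))  # e.g. [[0.5, 0.5]] -> chance of fail vs pass
```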

  4. Support Vector Machine (SVM)

This is a classification algorithm used for category assignment, like detecting spam emails and sentiment analysis projects. It is a form of supervised learning that looks for support vectors along what is known as a ‘hyperplane’, the line that separates and classifies a set of data. It is designed for smaller datasets and is often more efficient than other algorithms given that it uses a subset of training points.
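
A minimal sketch of a spam classifier, assuming scikit-learn’s LinearSVC; the example messages are invented:

```python
# Minimal SVM sketch: classify short messages as spam or not spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "cheap pills free offer",
         "meeting at ten tomorrow", "please review the attached report"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn words into counts, then fit a linear SVM on top
model = make_pipeline(CountVectorizer(), LinearSVC()).fit(texts, labels)
print(model.predict(["free prize offer"]))  # e.g. [1] -> flagged as spam
```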

  5. Decision Trees

A supervised learning method used in classification type problems. A decision tree is probably best explained using an example. Imagine we have 30 students with boy/girl, height and class variables. 15 out of 30 of them play soccer in their spare time. A decision tree will segregate the students based on the values of the three variables and identify which splits create the most homogeneous sets of students.
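
A minimal sketch of that example, with the learned rules printed out; the student data is invented:

```python
# Minimal decision-tree sketch for the students example.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [is_boy (1/0), height in cm, class (1/2/3)]
X = [[1, 150, 1], [1, 160, 2], [0, 145, 1], [0, 155, 3], [1, 140, 3], [0, 165, 2]]
y = [1, 1, 0, 0, 1, 0]  # 1 = plays soccer, 0 = does not

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["is_boy", "height", "class"]))  # the splits
```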

  6. Naive Bayes

A probability algorithm that outputs the chance of an event occurring given that another event has already occurred. For example, if a student fails one test, what is the probability of them passing or failing the next one? It assumes all variables are independent of each other, hence the term ‘naive.’
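
A worked sketch of the conditional-probability arithmetic underneath, with invented counts:

```python
# The kind of "given that" probability a naive Bayes model works with:
# P(fail_next | failed_first) = P(both happen) / P(failed_first)
p_failed_first = 0.30  # 30% of students failed the first test (invented)
p_both = 0.12          # 12% failed the first AND fail the next (invented)

p_fail_next_given_failed = p_both / p_failed_first
print(p_fail_next_given_failed)  # 0.4 -> a 40% chance of failing the next test
```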

  7. Random Forest

Taking decision trees to the next level, a random forest algorithm constructs a number of trees together. The output takes a majority vote from the trees, so to speak, or the average if the trees are producing numerical values.
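
A minimal sketch, reusing the invented diabetes-style data from the supervised-learning example earlier:

```python
# Minimal random-forest sketch: many trees vote on the answer.
from sklearn.ensemble import RandomForestClassifier

X = [[9.1, 31.0], [5.0, 22.5], [8.4, 29.2], [4.7, 21.0], [7.9, 33.1], [5.3, 24.0]]
y = [1, 0, 1, 0, 1, 0]

# 100 trees are built on random slices of the data; prediction is a majority vote
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict([[8.8, 30.5]]))  # e.g. [1]
```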

  8. Principal Component Analysis (PCA)

This algorithm is used in applications such as stock market prediction and pattern classification tasks. A principal component analysis tries to identify patterns in data and the correlations between the variables within it, compressing many related variables into a few summary components.
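
A minimal sketch with scikit-learn, assuming invented, strongly correlated variables:

```python
# Minimal PCA sketch: compress correlated variables into one component.
from sklearn.decomposition import PCA

# Four observations of three correlated variables
X = [[2.0, 4.1, 6.2], [3.0, 6.0, 9.1], [1.0, 2.1, 3.0], [4.0, 8.2, 12.1]]

pca = PCA(n_components=1)
reduced = pca.fit_transform(X)
print(reduced)                        # each observation summarised by one number
print(pca.explained_variance_ratio_)  # how much of the pattern one component keeps
```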

  9. K-Nearest Neighbour

As the term neighbour suggests, this algorithm looks for items that are similar to others. It works well with unstructured data like images, where the algorithm needs to make a “best guess” about the likely output or classification of some input. It may not be accurate at first but can become very powerful; take Amazon Alexa as an example.
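
A minimal sketch with invented two-feature data points:

```python
# Minimal k-nearest-neighbour sketch: a new item is labelled by majority
# vote of the most similar known items.
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = ["cat", "cat", "cat", "dog", "dog", "dog"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[2, 2], [9, 9]]))  # ['cat' 'dog'] -- nearest neighbours decide
```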

  10. Recommender System

This algorithm filters and predicts user ratings and preferences by using collaborative and content-based techniques. The most popular examples of how this is used in the real world are Netflix, Spotify and Amazon. In essence, it makes recommendations based on how different pieces of data have been classified, e.g. genre on Netflix.
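
A minimal collaborative-filtering sketch in plain numpy, assuming an invented ratings matrix; real systems are far more sophisticated, but the core idea is the same:

```python
# Minimal collaborative filtering: recommend what a similar user rated highly.
import numpy as np

# Rows = users, columns = shows; 0 means "not yet watched"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Find the user most similar to user 0
sims = [cosine(ratings[0], ratings[i]) for i in range(1, 3)]
most_similar = np.argmax(sims) + 1

# Among shows user 0 hasn't watched, suggest the similar user's favourite
unseen = np.where(ratings[0] == 0)[0]
print(unseen[np.argmax(ratings[most_similar][unseen])])  # index of show to suggest
```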

Neural Networks

Introduction

Whilst neural networks are not as complex as somebody starting out in Data Science might think, it would be wrong to say they are simple to learn. These deep learning networks transform data until it can be classified into an output. They consist of neurons (for information processing, like the brain) which multiply an initial value by some weighting, add a bias as new values come in and then normalise the output with a function. It is a bit like how our brain develops concepts and learns about the environment, but based on mathematical functions and programming instead.
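
A minimal sketch of that neuron arithmetic in numpy; the input values and weights are invented for illustration:

```python
# One artificial neuron: multiply inputs by weights, add a bias,
# then normalise the result with an activation function.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))  # squashes any number into the range 0..1

inputs = np.array([0.5, 0.8, 0.2])    # values arriving at the neuron
weights = np.array([0.4, -0.6, 0.9])  # learned importance of each input
bias = 0.1

output = sigmoid(inputs @ weights + bias)
print(output)  # a single normalised value passed on to the next layer
```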

Types Of Neural Networks

There are 6 types of neural networks used for different applications. The list below provides an overview of each. There is a lot more technical detail behind these available online and via publications to help work out which is best for your application.

  1. Feedforward

In this neural network, the data or input travels in one direction, which is why it is often known as the simplest. Its applications tend to lie in computer vision and speech recognition, where classifying the target classes is complicated. The feedforward network copes well with noisy data and is quite easy to maintain.

  2. Radial Basis

This type of neural network considers the distance of a point in relation to the centre. They are used for time-series calculations and system control amongst other applications. A radial basis network will use a set of prototypes along with other training examples and find the distance between an input and a prototype. The activation functions of the artificial neurons drive outputs that can be represented in different ways to show how the network classifies data points.

  3. Multilayer Perceptron

Comprised of one or more layers of neurons. Data is fed into an input layer, there may be one or more hidden layers providing levels of abstraction, and predictions are made on the output layer, also known as the visible layer. This is suitable for classification and prediction problems where inputs can be assigned to a class or label. Data is commonly provided in tabular format like a CSV or Excel sheet, as in the sketch below.
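
A minimal multilayer-perceptron sketch on tabular data, using scikit-learn; the rows stand in for a small CSV file and all values are invented:

```python
# Minimal multilayer-perceptron sketch: input layer -> hidden layer -> output.
from sklearn.neural_network import MLPClassifier

# Each row: [age, salary in £10k]; labels are invented classes
X = [[25, 5.0], [47, 6.4], [52, 3.0], [33, 9.0], [23, 2.0], [60, 7.5]]
y = [0, 1, 0, 1, 0, 1]

# One hidden layer of 8 neurons between the input and output layers
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(mlp.predict([[40, 6.0]]))
```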

  4. Convolutional

Used for image classification, object detection and image segmentation. Networks have convolutional layers that act as hierarchical object extractors.

  5. Recurrent

This type of neural network models sequences by applying the same set of weights recursively to the state of the aggregation at time (t) and the input at time (t). Recurrent networks are used for text classification, machine translation and language modeling.

  6. Modular

A modular network is one composed of more than one neural network, connected by some intermediary. They allow for more sophisticated use of basic neural network systems, managed and handled in conjunction. Each individual network within the model accomplishes some subtask of the wider objective.

Summary

We have provided a brief overview of popular machine learning algorithms and types of neural network. The technology and scripting behind them is of course far more technical than can be explained in a short article and does require previous experience in mathematics and data science fields.

However, for anybody beginning an AI journey, knowing about how these models function is important and it is worth getting to know them in more detail.

Why AI is taking off

Introduction

The term Artificial Intelligence (AI) has been around for a long time. It was coined by John McCarthy in the 1950s and he is considered one of the founding fathers of AI alongside Marvin Minsky.
As well as that, some of the key applications like machine learning, deep learning and neural networks have been widely researched and deployed during the 70s, 80s and 90s. However, during each of those periods, the industry suffered from what we call “AI Winters” where initiatives didn’t have enough investment to succeed.

In the 2000s, big companies like Google, Facebook and Baidu arrived on the scene and started putting big money into research again, realising the potential of the solutions. Whilst AI is flying right now, what’s to say that we won’t have another “Winter” up ahead?

How do we know that AI has really taken off this time?

AI Is Only Just Starting

The big difference between the AI applications of the 2000s and those of the past is commercialisation. Whilst the Turing Test was revolutionary in 1950 and machines winning at checkers, chess and Jeopardy were impressive developments, they didn’t lead to any real-world, practical application of AI in a commercial sense.

There are four key elements as to why AI has been able to become so powerful in the last decade.

  1. Computing Power

Looking back to the 1950s, the early computers didn’t have sufficient power to create truly autonomous functioning systems. Some of the hardware and infrastructure might have been out there, but it was incredibly costly and there were no real use cases that could prove the value of any investment.

Imagine trying to run Alexa on a dial-up internet connection. It simply would not have been possible. Today, people will abandon a webpage if it hasn’t loaded within 3 seconds. That is the extent to which computing power has changed consumer expectations. Only in the last few decades has computer processing power been enough to support an AI system.

IBM is now working towards developing even more powerful quantum computing platforms that will take AI to the next level (although full adoption is a little way in the future).

  2. Big Data

With the hyperconnected digital world, we are creating more data now than ever before and it is still growing. In fact, by 2025, intelligence firm International Data Corporation (IDC) are forecasting we will have 10 times more data than we did in 2016.

Big Data underpins most of what we are doing with AI. For example, social networks have unleashed a whole amalgam of data about human behaviours that never previously existed. Amazon has collected enough shopping data from its consumers to now accurately predict what they are likely to purchase next. Netflix can tell its subscribers what they want to watch before they even know.

The fact is, as more of our lives become digitalised, the more data becomes available and the potential for AI applications can only increase.

  3. Models and Algorithms

For Big Data to be effective, we need good algorithms. These are scripts that instruct AI technology what to do. In some of the earlier applications of AI, these algorithms were very prescriptive and told machines what to do on a step by step basis. They have now become so sophisticated that computers can build their own algorithms without supervision to an incredibly high degree of accuracy.

Data and computational power have led to a rise in deep learning and neural networks with refined algorithms. For example, algorithms can now be modelled that were not possible 20 years ago, when there wasn’t the volume, speed and accuracy of information to create commercial value.

  4. Democratisation of AI

Data Science and AI resource is still hard to find and for that reason, it can be expensive. Data Scientists were in a position where they could virtually create their own salaries given their unique skillsets.

A growing number of tools are ensuring AI capabilities can be put into the hands of non-technical experts. Large enterprises including Google, Microsoft and IBM are releasing cloud-based tools that allow almost anyone to create their own machine learning models. These tend to use pre-built algorithms that can be applied to various situations without the need for technical support.

Companies like DataRobot allow users to upload data and quickly try different algorithms to see which obtains the best results.

BI platforms such as Tableau, Qlik and Sisense amongst countless others can analyse data in an instant without needing any specialist resource.

As AI becomes commonplace in business, all these attempts to democratise access to it will speed up adoption across numerous functions.

AI Deployments Will Continue To Accelerate

Whilst what we have seen from AI so far has greatly impacted everyday life as well as our jobs and the ways many industries work, we are only at the start. To date, what we have is “narrow AI.”

These are applications that have programmed rules to adhere to and function within, but the goal for researchers and developers is to achieve artificial general intelligence (AGI). This is where machines can truly imitate human actions in a conscious way, such as understanding the environment around them. A robot may be able to walk and perform activities against set rules, but it is not yet capable of designing those rules for itself.

Narrow forms of AI are still in the early stages of adoption and have already started diagnosing disease, solving legal cases and educating pupils. With the data available, potential computing power and democratisation of systems, applications will become mainstream in the forthcoming years.

Following that, we may be able to set our sights on AGI but we are some way off that kind of singularity just yet.

How AI can benefit your organization

Introduction

There is a huge buzz around artificial intelligence (AI) and its subsets such as machine learning and natural language processing. This hype has largely been created by the transformational impact it is having on businesses. The speed at which organisations can develop AI based technology and increase their value is key to having a competitive advantage. Whilst many organisations are still trying to understand exactly what AI is capable of, this article summarises some of the key benefits that are already being realised.

Customer Experience And Service

One of the most common forms of AI is the conversational chatbot. These are messaging apps, speech-based assistants or voice activated devices that are used to automate communication and create a very personalised customer experience. These Internet of Things (IoT) based applications can process vast amounts of data instantly meaning they can make faster and more accurate responses than a human would ever be able to.

Similar personalisation that makes best use of data can be used in marketing. This is where we get emails that are relevant to us and social media ads that just happen to be something we are interested in. In some cases, each customer can even see different website homepages depending on their likely preferences and what will interest them the most.

Utilising AI in these ways is a great way to ensure customer loyalty through a personalised experience.

Business Process Automation

Businesses that have been established for a long time tend to have several manual processes. AI is a natural partner to optimise these efforts given its efficiency at handling routine tasks, improving interfaces, willingness and speed to do monotonous tasks and ability to handle massive amounts of data.

There are some obvious processes like using robotics in factories, managing conditions in product storage, processing payments and registering customer requests, but these only touch the surface of the possibilities. Doctors can use AI devices to dictate clinical notes, which automatically fill in the relevant forms and order prescriptions. Lawyers will use AI to process contracts and agreements in a split second, work that may have taken them days or weeks.

Essentially, anything that can be turned into a digital format has the potential for automation.

Cost Reductions And Operational Efficiency

An improved customer experience and process automation will have a clear knock-on effect when it comes to reducing business costs.

In the context of efficiency, where AI can process vast amounts of data so accurately, it can greatly reduce the number of business errors, improve workflows to increase production outputs and free up employee time for higher-level tasks. As an example, in healthcare, unnecessary tests are estimated to cost $210 billion per year in the US alone. Even if AI reduces this by half, it is doing an amazing job.

Gartner predicts that 85% of service interactions will take place between humans and AI in 2020. This reduces the cost of business call centres and allows businesses to focus their staff elsewhere. For example, they may be redeployed into jobs that support the AI and digital technology. More to the point, AI is never sick and doesn’t need a holiday!

Whilst AI can be costly to deploy, in the long run the savings will heavily outweigh the investment.

Data Security And Fraud

AI can be used to help identify fraudulent transactions and prevent unauthorised access to data. In an exponentially growing digital world, this is especially important when it comes to defending cyber-attacks. Powerful algorithms can find malware and combat spam for example. Machine learning will detect irregular patterns in the data and inform businesses when there is a potential threat.

As well as this we are seeing the increased utilisation of identity checks other than passwords such as facial recognition and fingerprint technology. These unique identifiers based on unstructured data are far more difficult to hack and offer a great layer of protection for businesses.

Predictive Analytics

Predictive analytics could be called the heartbeat of AI. Traditionally, management information would report on what has happened in the business, e.g. we sold 100 pairs of shoes yesterday. Machine learning algorithms will make predictions about what is going to happen in the future. They do this by finding patterns in data and making decisions based on them. For example, they can predict when a customer is next likely to want to buy a pair of shoes and ensure you are at the front of the queue when they come to market.

Another example comes in supply chain management where machine learning can predict when stock is likely to run out or whether there is going to be product surplus.

Staff Training

AI is being used in businesses to create personalised training plans. Some companies could have huge knowledge bases that take staff weeks or even months to learn. AI has been shown to cut this in half by presenting content to the learner in the way that best suits them. This could include the order they learn items in, the length of time between when learners are presented with repeat information or the type of material such as written, visual and audio. Training is both more useful and enjoyable.

Summary

Whilst many applications are still relatively immature, companies need to prepare for investment in AI if they are going to keep up with the competition. It should be made clear that AI is not going to replace humans but instead, it will provide the technology to run monotonous, tedious and repetitive jobs, allowing people to focus on what they are best at. For example, AI will help diagnose disease so doctors can concentrate on caring for patients. AI will deliver learning curriculums, so teachers can worry about helping the students.

To make the most of these powerful efficiencies, AI should be considered as a means of augmenting and not replacing human capabilities.

Who is who in AI

The Godfathers of AI

Introduction

With so much press and hype surrounding artificial intelligence (AI) and its applications over recent years, it is very difficult to keep track of everything that is going on. Social media has a vast array of channels for professionals, reaching over many industries and focussing on differing levels of technical and strategic ability. To stay connected, there are some experts and influencers that everybody with a vested interest in AI should ensure they follow. This article talks about some of those key players in the data-centric world around us.

Andrew Ng

Considered one of the top minds in machine learning, Andrew Ng is having a huge impact on AI education. He is best known for co-founding Coursera, the online education portal, and his machine learning course at Stanford University remains the most popular on the site. In 2017, Ng started Deeplearning.ai, a project specialising in deep learning education aimed at engineering and mathematics students.

At the start of 2019, “AI for Everyone” was released on Coursera. Ng said on his blog that the AI-powered future must be built by both engineers and application domain experts. The course is designed to ensure we have experts in every industry who can apply AI to their organisations.

Credited as the founder of the Google Brain project and featured in the Time 100 Most Influential People, Andrew Ng is a go-to source when it comes to AI and machine learning. In 2018 he also launched, and currently heads, the AI Fund, a $175 million investment fund for backing artificial intelligence start-ups.

Geoffrey Hinton

Hinton is a leading figure in the deep learning community and has even been dubbed the “Godfather of Deep Learning” by his peers. He was recently named as one of the three recipients of the A.M. Turing Award for his decades of work advancing the field of artificial intelligence. The deep learning techniques inspired by Hinton and his collaborators have achieved significant breakthroughs in several applications, from voice recognition in mobile devices all the way through to diagnosing cancerous tumours in medical scans.

Hinton has taught his own course via the Andrew Ng co-founded Coursera and has worked for Google since 2013. The focus of his research is the ways in which neural networks can be used for machine learning, memory and perception. He has put his name to many research papers and is the co-inventor of Boltzmann machines, one of the first neural networks capable of learning internal representations.

AI is in Hinton’s blood, being the great-great-grandson of logician George Boole, whose work formed the foundations of modern computer science.

Yann LeCun

As the VP and Chief AI Scientist at Facebook, Yann LeCun has quite the list of credentials. He was part of the trio awarded the Turing Award in 2018 alongside Geoffrey Hinton, and is the founding Director of Facebook AI Research and the NYU Center for Data Science.

LeCun is best known for his contributions to deep learning and neural networks with applications used in computer vision and speech recognition technology. With over 190 papers published in this area as well as several other AI topics, LeCun is high up the list of experts when it comes to the subject.

Beyond his role at Facebook, LeCun has co-founded start-ups including Elements Inc and Museami. He has too many qualifications to mention as well as being in the New Jersey Inventor Hall of Fame and a member of the US National Academy of Engineering.

Like Hinton, LeCun sees an amazing future in using AI to solve real world problems such as diagnosing diseases through medical image analysis.

Ian Goodfellow

Sometimes referred to as “The GANFather,” Ian Goodfellow is the inventor of a powerful AI tool that can pit different neural networks against each other. GAN stands for a Generative Adversarial Network, an approach to machine learning frequently used at Facebook. Given a training set, the approach learns to generate new data with the same statistics as the training set. For example, a GAN trained with images can generate new images that look at least superficially authentic to observers.

One use case could be in fashion where GANs can create photos of imaginary models with no need to hire a model, photographer or equipment. This could drive entire campaigns.

Yann LeCun has said that GANs are the “coolest idea in machine learning in the last twenty years.”

Goodfellow obtained his BS and MS in computer science under the supervision of Andrew Ng. He joined Google as part of the Google Brain research team and, after a brief stint away at OpenAI, re-joined Google in 2017.

Fei-Fei Li

As a former Google employee and Professor at Stanford University, Fei-Fei Li is one of the most prominent women in the world of AI. She is well known for her non-profit work as the Co-Founder and Chairperson of the organisation AI4ALL, whose mission is to educate the next generation of AI thinkers and leaders. In 2018, AI4ALL launched five more summer programs in addition to the one at Stanford University due to its overwhelming success.

Li has published around 200 papers on AI, machine learning, computer vision and neuroscience-based topics. She has been highly commended as a pioneer and researcher who strives to bring “humanity to AI.” Amongst her portfolio is the ImageNet project, which has revolutionised the field of visual recognition. Many see the work as a catalyst for the current AI boom.

Big Companies and AI

Introduction

Artificial intelligence (AI) is the new weapon that is starting to define how large companies compete with each other. The Big Four or GAFA (Google, Amazon, Facebook, Apple) companies have had an advantage of large budgets, talented resource pools and vast amounts of data to develop AI solutions at a faster rate than anyone else. This is creating a problem for smaller businesses who cannot compete with many of the offerings and get drowned by a GAFA tidal wave. This means small to mid-sized businesses need to find their own niches to remain competitive.

Rather than focussing on the trials and tribulations of those small businesses, this article looks at GAFA and how they are progressing their AI offerings.

Google

AI radiates through almost everything that Google does. In fact, they have stated that they are moving towards an “AI first” development structure. It has been around 12 years since Google introduced us to their Android operating system, an open source platform for mobile devices. In April 2019, they announced they would be doing the same with AI, using TensorFlow, their open source platform for machine learning. Essentially, anyone who can connect to the internet has access to one of the most powerful AI platforms ever created.

Google has virtually become an AI company, far removed from simply a search engine, and they are starting to introduce that to the rest of the world. Although there are other companies with platforms like TensorFlow, they don’t have the same development, research and funding power that Google has.

Whilst in theory providing these complex machine and deep learning tools to everyone will allow them to compete with Google, the objective is that it accelerates computer science and the community gives back. This is how Google sees the future, and they have set themselves up as a transformative AI company.

Beyond TensorFlow, there are a vast number of ways that Google already deploys AI applications. Below is a summary of some of the most well-known.

  • Google Search

Most of us would know that the Google Search algorithms are powered by AI, or more specifically machine learning and natural language processing. However, this has come a long way over the years, especially with the prominence of voice search, and deep learning techniques are allowing the models to learn on their own.

  • Adwords

Google Adwords now uses AI to enable Smart Bidding. Instead of employing a digital marketing team to manually arrange auction bids, machine learning algorithms will automate the process to improve conversions whilst reducing the cost of labour. It is able to use a wider range of contextual signals within the machine learning models that may not have been picked up otherwise.

  • Maps

Consumers are now able to use Google Maps just like a satellite navigation system. It has effectively replaced the need for an old-fashioned GPS unit (such as a TomTom) and allowed one less device in the car. The integrated AI models can flag driver alerts and link to places of interest. It also detects traffic delays and route problems for drivers. The predictive nature of Google Maps means you can navigate without any commands.

  • YouTube

Owned by Google, one of the core uses of machine learning at YouTube is ensuring that brands don’t have their ads placed next to what might be deemed as inappropriate content. Users are also fed recommended videos in a form of predictive AI.

  • Photos

Although not one of the most used features, Photos will recommend images you should share and which friends you should be sharing them with.

  • Gmail

The email platform from Google has really stepped up the AI game in the last year. Platform users now have access to smart replies. The platform will offer up predictive sentences as you start writing, based on the data within your inbox. As it gathers more data, it learns your writing style and is able to accurately draft an email for the user.

  • Calendar

Within Calendar is Smart Scheduling, which can suggest meetings and appointments based on the regular habits of the user.

  • Drive

The AI used within Google Drive and Documents is able to predict which files you are looking for and claims to reduce the time spent finding them by up to 50%. It will display the files it thinks you need at the top of the screen each time. In Google Sheets, it can now even automatically generate formulas, whilst Documents uses natural language processing for better functionality.

  • Assistant

Google Assistant, the voice activated agent, gets answers almost instantly straight from the web, pretty much as if you were doing a search yourself. Google Assistant is also great at remembering previous conversations and syncing with Maps and other Google products. The assistant is the power behind the Google Home IoT device, which has laid down its marker in the voice command device market. Google Home has been learning at an accelerated rate thanks to the massive amount of data that the company holds.

  • Allo

This is the Google venture into the messaging App world. It goes beyond some of the other apps through deep learning neural networks which Google say will be powerful enough to create its own emojis, rather than the standard ones that come with other platforms. Everything Google is doing has an AI first thought process.

Amazon

Amazon has not become the retail giant that it is today by accident. AI is shaping everything they do from the warehouse right through to the Alexa smart speakers.

  • Product recommendations

Virtually from the start, Amazon has used machine learning algorithms to recommend products to consumers. The predictive AI will present users with products they are most likely to want based on either their purchasing behaviour or the actions of other consumers just like them.

The correlations for the recommendations have gone through a number of changes over the years to account for fast-paced and dynamic markets. For example, if a new product line comes to market, it can tell instantly which consumers are going to be interested in it. In the earlier days of the machine learning algorithms this may have taken some time to work itself out.

Reports have suggested that as much as 35% of Amazon’s retail revenue comes from products that are purchased via its recommendation engine. To put that into context, AI is able to sell products that a consumer would not have made a conscious decision to buy otherwise. If you add personalised emails into the mix and the excellent on-site search, Amazon is driving much of its retail sales using data alone.

  • Voice control devices

Unless you have been hibernating for the last few years, you will have noticed the amazing rate at which Alexa has come to the market. Millions of consumers have now bought Alexa devices and there are almost 50,000 skills available. This means they can help with a wide variety of tasks from simple voice commands.

We should not underestimate the amount of data in the Alexa engine and the immense processing power that can send answers back almost instantly. There are now integrations with many other devices, capitalising on Alexa's market dominance. Amazon is also creating customisable skills for third parties, e.g. Marriott Hotels, where Alexa acts as a type of concierge system for guests.

The future of Alexa is in letting consumers create their own skills via the Blueprints platform, which doesn’t require any development knowledge.

  • Amazon Web Services (AWS)

AWS is the massive cloud-based computing and storage service provided by Amazon. It has become one of the market leaders in this sector and is a huge contributor towards the trillion-dollar valuation of the company.

The aim of AWS is to give users the same capabilities that Amazon has in creating its machine learning models for recommendations, Alexa, DeepLens or Amazon-Go (see below). AWS is tailored for different levels of users, from beginners all the way to those with Data Science or equivalent type degrees.

Usage of AWS grew by 250% during 2018 and is set to continue as the cloud platform of choice for many businesses.

  • The Warehouse

Albeit not consumer facing, one of the most important areas for AI is in the Amazon warehouse. The facilities are filled with robots that spur into action as soon as somebody places an order. They are completely autonomous and will deliver items to a human who checks them and puts them on a conveyor belt. With the volume of orders received, every second counts in the warehouse, which is why AI is vital for improving these efficiencies.

As well as this, Amazon use machine learning to predict what customers are likely to be ordering and put it in the right spot of the warehouse, improving speed of processing. Using computer vision, Amazon have also optimised the scanning processes for goods in the warehouse with a new system saving workers huge amounts of time.

Delivering items to the customer on time is imperative to the success of the Amazon offering so every second saved is another happy buyer.

  • Amazon-Go

Trials of the Amazon-Go stores showed customers walking in, picking up groceries and walking back out again. Sensors and scanners can tell exactly what they have picked up and charge them via their SmartPhone as they go in and out of the store. There is no need for cash or any human interactions whilst in the store.

The system is far more complex than the warehouse: in busy stores, cameras can have a blocked view or lighting can change and impact the algorithms. This is why there are only a few stores for now, whilst the experts attempt to build models that can account for these glitches.

The number of sensors needed right now could also be costly, which is another reason why the stores haven't reached a wide commercial release. Automated stores are most likely the way forward and Amazon will be the pioneers if so.

  • DeepLens

AWS DeepLens is a wireless-enabled video camera and development platform integrated with the AWS Cloud. It lets you use the latest AI tools and technology to develop computer vision applications based on a deep learning model.

Everything about the hardware is fully customisable for developers. For example, it is possible to create deep learning networks capable of recognising the objects in a room or even the faces of people in a room.

Facebook

The bad press in the last 24 months, with events like the Cambridge Analytica scandal, has led to people questioning how Facebook use AI and data, but the fact of the matter is they have been using both to great effect for a long time now.

With a dedicated research lab and billions of users, Facebook has an infrastructure designed for growth in AI.

  • Recommendations

A bit like Amazon, recommendations are high on the list of AI deployments at Facebook. As well as suggesting new friends, the items you see in your news feed are all there because algorithms have told them to appear. Every time you do something on Facebook, it learns from your behaviour and can serve content that better matches your intent next time around.

Businesses can use Facebook Ads to target the right users and you will see campaigns based on your activity and behaviours. One of the most impressive (some may say intrusive) aspects is re-targeting of products. If you’ve ever been shopping on a site and suddenly see the same item or similar items appear on Facebook, that is AI driven re-targeting. I promise Mark Zuckerberg isn’t listening to every conversation but deep learning models are getting very strong at tailoring these ads.

  • Content

With so much press around internet trolls and offensive posts, Facebook has algorithms that can flag problematic content. A recent innovation was developing a script capable of spotting if teenagers were showing signs of depression and suicidal thoughts. This has been built for the public good after some high-profile incidents that could potentially have been prevented.

Other use cases include detecting terrorist content and racial abuse.

  • Language

Facebook acquired Wit.ai, a natural language processing startup. With the technology, they are able to decipher what their users are saying and analyse the context and meaning as well as the words themselves. The use case for this is tackling fake news and hate speech, but it will also be used to better understand the behaviour and needs of their users.

Some analytics companies are using language to recognise personality traits. Many say these attributes are a better indicator of trustworthiness and integrity when it comes to things like credit. Trials are in place that use social data, as opposed to traditional financial backgrounds, for making loan or mortgage decisions.

The AI platform for understanding context is known as DeepText. Facebook say it has “human like accuracy” in understanding the context of language through the deep analysis of intents and entities.

  • Image Recognition

Millions of images are posted on Facebook. Identifying faces and automatically tagging them is one of the major advances machine learning algorithms have brought to the site over the years. This was trained using billions of photos from Instagram once Facebook acquired it, allowing them to build accurate models incredibly quickly.

One of the most important use cases is that the algorithms can identify images and describe them to the visually impaired. This works by simply taking a photo and allowing the deep learning networks to quickly identify and explain them to users.

Facebook even believe they have an algorithm which can work out the mood of a person based on their stature or pose in an image.

  • Chatbots

Facebook Messenger is probably the most used platform for conversational chatbot AI. Businesses are now able to let their customers serve themselves or purchase products using the technology.

Rich amounts of data feed the Messenger platform allowing it to conduct all kinds of activities. They have even given it the capability to negotiate with humans and it created its own way to bluff and lie in conversations.

Apple

Many say that Apple is a bit of a follower when it comes to AI and machine learning, with the other big companies doing much more in the way of leading innovation. In fact, even though Siri was the first voice assistant to enter the market, Amazon and Google quickly took over with their own products once they had a lot more data available to make things happen faster.

The Apple model has typically relied on hardware rather than on developing their own AI and machine learning models, as the other companies in this article have done. However, in April 2018, Apple hired John Giannandrea from Google. He was one of the core reasons why Google integrated AI into just about every product and a major part of their success. His appointment showed the direction that Apple desired to go in.

With that, the latest iPhone models come with heaps of computational power and AI based algorithms. The objective of this was slicker camera effects and the ability to create amazing augmented reality (AR) experiences for the users. A little like Google and Amazon, customisation is key and non-developers are able to run their own algorithms using the Apple hardware.

With non-developers having this sort of scope, the iTunes store is set to be livened up with new experiences for socialising and getting things done. The machine learning algorithms use image recognition to better understand photos, as an example. The new computer chip in the iPhone is dedicated to running neural network software that starts to understand the concepts of speech as well as images, more so than ever before.

The new technology installed by Apple allows developers to run neural networks up to 10 times faster than on the iPhone X. To put this into some context, the basketball app HomeCourt has been able to improve drastically. The app analyses video and images to get on-court analytics on shots, misses and dribbles. This used to take a couple of seconds to process, but the analysis can now be completed in real time, such is the computational power.

AI has become a key area of focus at Apple as they seek to keep up with Google, Amazon and Facebook. They hired a number of staff into AI roles at the start of 2019, from developers to industry-specific professionals. However, they don’t talk it up as loudly as their competitors do.

In 2019, Apple announced that Siri is set to have a far more natural voice, personalised music will be available via their HomePod speaker system and Siri will be able to read incoming messages to your AirPods. The Core ML platform is available to iOS developers for developing their own applications, as we’ve already said, and is continually improving.

Apple have also hired former Google man Ian Goodfellow as Director of Machine Learning, further signalling their intent. However, it seems that although they have the resources and knowledge, the culture of the company remains rooted in hardware and it will be hard to pull away from that completely. Some have suggested there are roadblocks in using machine learning given Apple’s commitment to privacy, and that could be causing more problems with development than we realise.

Although Apple are, and for the foreseeable future will be, a huge power in the technology field, their AI strategy still seems somewhat lacking against their core competition. They need to release something big soon or risk falling short whilst Google, Amazon and Facebook all power ahead.

The AI Startups Scene

Introduction

Artificial Intelligence (AI) investments and developments have grown in earnest since the start of the 21st century. With so many different applications across data, robotics, deep learning, IT and IoT, there is a huge opportunity for businesses offering innovative services. Hardly a day goes by where we don’t read about a new technology startup in the news or via social media. In fact, there is such a plethora of them in the market that it is becoming near-on impossible to decipher which offer genuinely unique AI services and which are simply trying to cash in on the hype.

The startup scene

A survey by venture capital firm MMC at the start of 2019 found that of 2,830 startups in Europe that classified themselves as AI companies, only 1,580 accurately fit the description. The survey report goes on to say it is very important to research a company, their products, website and documentation before diving in at the deep end. It isn’t necessarily the fault of the company. They are not deliberately trying to deceive the public; third-party analysts have simply classified them that way.

The issue here isn’t necessarily with the companies but the fact of the matter is those who are labelled as AI are attracting 15% to 50% more funding than other technology firms. Startups are aware of how they are classified but there is probably very limited incentive to correct listings when they are getting the investments they desperately need to survive.

In 2019, one in twelve startups feature AI as part of their products and services. Beyond that, more than 10% of large enterprises are using AI-based applications as part of their business, a growth of 4% in only 12 months. If that trend continues as more industries recognise the benefits of AI in promoting efficiency, reducing cost and increasing revenue, the startup market looks set to blossom.

What are startups focussing on

Around one in four AI startups are primarily serving the company’s marketing department. This is where the sweet spot appears to sit with a wealth of data, variety of digital channels and the prospect of creating unique customer experiences. One of the most popular AI solutions is the conversational chatbot. Several startups are using this as their base as firms try to use natural language processing to understand their customers. Beyond this, in utilising a chatbot, businesses can reduce call centre costs and open a 24/7 customer service portal. There are a huge number of benefits which is why startups are swooping in.

AI startups tend to cluster around where data resides. For example, we have seen a lot of traction in healthcare, finance, retail and entertainment because there is a vast volume of consumer information within those industries. Data is key to AI given that most solutions involve applications like machine learning and natural language processing. Some of the most significant areas are image recognition, text to speech and making predictions or decisions akin to the recommendation engines you see Amazon or Netflix using.

The most successful AI startups

The list below shows the top 10 AI startups in the US ranked by funding amount, as published in February 2019.

  1. Dataiku - $147 million

  2. Landing AI - $175 million

  3. Signifyd - $206 million

  4. Pony.ai - $214 million

  5. Data Robot - $225 million

  6. C3 - $243 million

  7. Butterfly Network - $350 million

  8. UiPath - $448 million

  9. Automation Anywhere - $550 million

  10. Zymergen - $574 million

Worldwide, the best-funded startups are both from China: SenseTime and Face++.

SenseTime has had far and away more investment than any other and focusses on facial recognition technology. It has had significant government support. The solutions are heavily used in surveillance with support for police bureaus in identifying faces or looking for car number plates.

Toutiao, another Chinese company, is worth circa $3.1B, having been founded in 2012. The platform was the first of its kind to take content based on keywords and feed it to readers via an application. It gradually learns what readers like through analysing social media accounts.

Fourth in the US by funding amount, Butterfly Network also have the highest number of patents filed for their portable, hand-held ultrasound, which has been built for under $2k. It uses computer vision AI to help the hardware interpret the images.

Some of the largest growth has been seen in self-service AI platforms like DataRobot and Dataiku. These companies provide platforms to businesses that don’t have expert resources but still want to develop AI and machine learning projects. Where resource is thin on the ground, businesses are starting to rely on such startups to get them going.

The list could be endless and a quick Google search for Top AI companies gives you a different list in virtually every article. It is worth reviewing which ones can work best for you and ensuring they offer genuine AI potential and not simply hype.

Summary

It is no secret that AI is big business. With so many startups in the space, it is important to be strategic and pick the right ones for you. Many will be industry specific and you must be sure they can adapt to ever changing business needs in a fast-paced digital economy. AI is where the future lies.

Autonomous Drive and AI

Introduction

When thinking of autonomous cars, many of us jump straight to David Hasselhoff talking to KITT in the 1980s (for the Millennials, just Google Knight Rider!). This type of driverless car is the dream and, depending on which report you read, some think we are not too far away from commercial use of fully autonomous vehicles.

However, whilst massive investment has helped progression no end, the arrival of a true driverless car that doesn’t need any type of human interaction doesn’t look to be around the corner just yet. Elon Musk (co-founder and CEO of Tesla) claims that Teslas will have full self-driving capability by the end of 2020. However, experts say that the technology is still too unpredictable and expensive, and that cars navigating exactly as a human would is close to impossible. John Krafcik, the CEO of Waymo (Google’s self-driving car project), echoes the view that autonomy will have some constraints.

Instead of thinking about fully automated vehicles, it is perhaps best to see the technology in stages of automation.

• Level 1 automation

Some small steering or acceleration tasks are performed by the car without human intervention, but everything else is fully under human control.

• Level 2 automation

This is like advanced cruise control, or the original Autopilot system on some Tesla vehicles; the car can automatically take safety actions, but the driver needs to stay alert at the wheel.

• Level 3 automation

This still requires a human driver, but the human is able to hand over some “safety-critical functions” to the vehicle under certain traffic or environmental conditions. This poses some potential dangers as humans pass the major tasks of driving to and from the car itself, which is why some car companies (Ford included) are interested in jumping directly to Level 4.

• Level 4 automation

The car can drive itself almost all the time without any human input, but might be programmed not to drive in unmapped areas or during severe weather. This is a car you could sleep in.

• Level 5 automation

Full automation in all conditions.

Whilst Level 5 automation is somewhere in the future, hitting a Level 3 or 4 automation project could be well within the grasp of the leading companies. In fact, this is most likely what Elon Musk is referring to with his claims. We already have driver-free shuttles operating in cities like Detroit, driverless university campus run-arounds and self-driving machines on farms, to name a few successful pilots.

Within these stages could be several other AI developments beyond autonomy itself. For example, connected vehicles will rely on vast amounts of data. With hundreds of sensors, applications of AI will be able to alert us to problems before they happen and leave us stranded in the middle of a highway.

There are even potential applications in marketing. In the digital age, marketing is all about data. Just imagine if social media feeds flag that someone is going on holiday and whilst in their car, they are recommended the best travel money exchange stores near them. It could even flag restaurants as they drive past them. AI can know exactly what a driver needs and wants.

Other possible uses for AI are in risk and manufacturing.

Waymo, Tesla and Baidu Apollo are three of the main players in the driverless vehicle industry. Below are the key developments and progress for each of them.

Waymo

In May 2019, Waymo announced a partnership with Lyft whereby they would deploy 10 vehicles in the Phoenix area. Formerly known as the Google Self Driving Car Project set up in 2009, Waymo’s mission is to make it safe and easy for people and things to move around. They believe that fully self-driving technology can both improve mobility and give people the freedom to get around whilst saving thousands of lives by negating traffic crashes. “We’re not building a car, we’re building a driver.”

Waymo design all the core components of their technology in-house and see themselves as being able to advance vehicles much faster than flashier competitors like Tesla or Uber. Their sensors have a high amount of computational power, hence the rather large exterior but they value functionality more than they do looks.

A foundation of machine learning from its association with parent company Alphabet gives Waymo a very solid infrastructure.

Tesla

Probably the most well-known company in the world of autonomous and electric vehicles, Tesla have progressed the field significantly in the last decade.

Elon Musk stated at the start of 2019 that all vehicles now have the hardware installed to be fully self-driven. All that is required is that they improve the software but there is a long way to go with this.

Right now, Tesla vehicles are only considered to be at Level 2 automation. This is a more advanced assistance system than most other vehicles currently on the road. Whilst Musk has promised this will improve, it is unlikely this can get to Level 5 in the near future as he predicted. He says that by the middle of 2020, Tesla’s autonomous system will have improved to the point where drivers will not have to pay attention to the road.

We should note that Musk originally said he would have Level 5 vehicles on the road by the end of 2018. Tesla are definitely helping to drive the technology forward, but the common expectation is that it will be a good few years yet before Musk’s claims are realised.

Baidu Apollo

The Baidu Apollo platform is an open source self-driving vehicle technology system. It provides a hardware and software solution including cloud data services as well as a vehicle hardware platform. Baidu offers the source code and capabilities in obstacle perception, trajectory planning, vehicle control and operating systems.

In 2019 Baidu announced that Apollo Enterprise for vehicles will be put into mass production. It is already being used by 130 partners around the world, and one of its Chinese users plans to deploy 3 autonomous vehicles by 2021.

The Apollo 3.5 release now supports “complex urban and suburban driving environments” pushing it closer to Level 5 automation capability. It is already being utilised by companies and piloted by Walmart for grocery deliveries.

Baidu has grand plans such as 100 robo-taxis covering 130 miles of city in China which are all able to communicate with the road infrastructure like traffic lights.

Where next with autonomous vehicles?

When companies talk about autonomous vehicles they are probably being quite optimistic to ensure they get the required investment and interest in the future of the industry. We also need to be aware of potential regulatory developments for autonomous vehicle deployment such as who is liable if something does go wrong e.g. an accident on a busy road.

It is very likely that within the next decade we will have some sort of truly autonomous vehicle in a city somewhere, if companies can handle the scepticism. Level 3 and 4 automation will definitely continue to prosper as AI becomes more a part of everyday life, and it is expected this will become available in the majority of city environments.

AI Use Cases

Computer Vision

Introduction

Artificial Intelligence (AI) is revolutionising everyday life and causing a positive disruption in several industries. There are many applications of AI, such as natural language processing (NLP) and machine learning (ML), but one of the key ones that perhaps hasn’t had as much exposure, although it is equally as ground-breaking, is known as computer vision. Sticking with the abbreviations, computer vision is often referred to as CV.

What is Computer Vision?

CV is defined as a field of study that looks to develop techniques that allow computers to see and understand the content of digital images like photographs or videos. CV goes beyond just putting a camera on your computer or laptop. The objective is to help machines view the world like people or animals do, which is no small feat. Whilst the camera attached to your laptop might be able to see things, CV ensures the machine can also understand them. One example that we’ve all known for years is the ability to turn black lines into information about a product, commonly known as a barcode.

Computer vision is like the part of the human brain that processes what an image means with the camera being the part that sees the image, like our eyes.

Overview of Computer Vision

Many popular computer vision applications involve trying to recognize things in photographs; for example:

  • Object Classification: What broad category of object is in this photograph?

    • Grouping items into different categories like animals, people or buildings could be a basic example of this.
  • Object Identification: Which type of a given object is in this photograph?

    • If we know the image is classified as an animal, is it a dog or a cat?
  • Object Verification: Is the object in the photograph?

    • Is the object we know as a dog or a cat in the image being presented?
  • Object Detection: Where are the objects in the photograph?

    • This locates where objects sit within the image, typically working out the outer edges of each one to better identify what is present
  • Object Landmark Detection: What are the key points for the object in the photograph?

    • This could mean looking for key patterns that help recognise the object within the image: shapes, colours and other visual indicators
  • Object Segmentation: What pixels belong to the object in the image?

    • Pieces of the image can be examined separately for a more accurate analysis
  • Object Recognition: What objects are in this photograph and where are they?

    • Not only detecting that an image is there but specifically identifying what it is.

Applications of computer vision will often only need to incorporate one of these techniques. However, more advanced cases such as driverless cars rely on several different methods to accomplish their goals.

Whilst to a human these tasks might not sound hugely complicated, machines struggle when an image is in a state that they might not expect. One of the best examples of CV in practice is the app “Not Hotdog”. The concept of the app itself is incredibly unimpressive, but when you consider the neural networks (an advanced type of AI) running behind the scenes, it suddenly falls nothing short of amazing.

The app itself simply determines whether an image is a hotdog or not. It sounds ridiculous as, nine times out of ten, a small child could recognise whether an object is a hotdog. However, what about when that hotdog is in different states? For example, in a bun or out of a bun, at different angles, in a jar or replaced with a banana. The machine needs to be smart enough to recognise whether the item it is looking for is still present. Not Hotdog is amazingly accurate, to the point where it can even tell the difference between a hotdog and a bratwurst. That is where machines have become impressive with image recognition.

Typically, this type of image recognition operates via processes that examine all the individual pixels of an image. A machine is trained with millions of images that humans pre-label, helping it recognise whether future images are hotdogs or not. The AI builds a view of what should be included within an image of a hotdog and makes the appropriate decision having compared every pixel. Once a minimum confidence threshold is met, the machine declares the result.
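
To make that concrete, below is a minimal, hypothetical sketch of such a pixel-based classifier. The data, labels and threshold are all invented for illustration; a real system would train a deep neural network on millions of human-labelled photos rather than a simple logistic regression on random pixels.

```python
# Toy sketch of a pixel-based "hotdog / not hotdog" classifier.
# The training data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each image is 32x32 grayscale, flattened to 1,024 pixel values.
n_images = 200
X_train = rng.random((n_images, 32 * 32))      # pixel intensities, 0..1
y_train = rng.integers(0, 2, n_images)         # human pre-labels: 1 = hotdog, 0 = not

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                    # "learn" what a hotdog looks like

new_image = rng.random((1, 32 * 32))           # an unseen photo
confidence = model.predict_proba(new_image)[0, 1]  # P(hotdog)

THRESHOLD = 0.8                                # minimum confidence before declaring a result
print("Hotdog" if confidence >= THRESHOLD else "Not hotdog",
      f"(confidence {confidence:.2f})")
```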

Not Hotdog is a trivial example, but CV is making important inroads into many industries where it is having a big impact.

Data, data, data

Although we tend to focus on the end user technology and results of any such system, the truth is that they are only effective if they have quality data feeding them. Beyond CV, this goes for virtually any application that operates using AI.

Think about a machine that needed to determine whether an image was of a cat or a dog. The best way to approach computer vision is in the same way that you would approach a jigsaw puzzle. You start with all the pieces and the task is to assemble them in such a way that makes sense. The machine isn’t just given a final image and left to work it out but instead is fed hundreds, thousands or millions of items (data points/pixels) to train it to recognise what the objects might be.

So, to find a cat, we wouldn’t just be telling a computer to search for whiskers and pointy ears. Millions of photos defining a cat or dog would be uploaded for the model to learn on its own the types of features that make each of them up.

Use Cases for Computer Vision

We know what computer vision does and some of the different types but the key to any application of artificial intelligence comes with ensuring we can put it into practice. Here are some case studies reviewing how computer vision is changing the face of different industries.

Healthcare

Using computer vision in healthcare is simultaneously one of the most ground-breaking and most controversial developments in the industry (we’ll come on to this shortly). Using machine vision, AI vendors are able to create applications that diagnose patients as well as assist with tasks like surgery, clinical trials and diagnostics.

Diagnostics

Computer vision software is being used to help find abnormalities in patient scan images that may lead to an accurate diagnosis. For example, the US and Israeli-based company MaxQ AI has developed software that allows physicians to identify anomalies in patient brain scans. The professionals can then focus on the best course of treatment for the patients rather than spending a long time diagnosing them.

One of the controversies, or at least talking points, here is that patients are wary of a machine being in charge of their fate. For example, what would happen if the computer vision technology gets something wrong? Who is responsible? How can a machine be reprimanded? All of these are fair questions and one reason why such computer vision is yet to come into mainstream practice. In fact, companies like MaxQ AI specifically say their software is designed to accompany human knowledge and in no way replace it.

All that said, the potential benefits outweigh the questions hanging over computer vision. Tests in Hong Kong have shown it to be more accurate than humans when diagnosing some forms of cancer. If these diagnostics can happen instantly, in the event of something like a stroke, the chances of a full recovery are drastically improved.

Medical Imaging

Computer vision software like Arterys has been developed which can create 3D models of a patient’s heart on a radiologist’s computer screen. The software can reduce the time a radiologist needs to spend scanning patients and, just like the applications used in diagnostics, allows them to focus their efforts on effective patient treatment. A comprehensive evaluation of the heart can be completed in 10 minutes and requires no technical training.

This is achieved through training the software using millions of images of the heart and its surrounding areas.

Clinical Trials

The New York based start-up, AiCure, have developed an app which monitors patients as they undergo clinical trials in an attempt to reduce the number of people who drop out. Using facial recognition technology, the app determines whether a patient has ingested a prescribed drug through the camera of their smartphone. Hours of footage collecting data would have been used to train the models and make sure the app is efficient.

Surgery

The Triton software developed by Gauss is able to calculate surgical blood loss using computer vision. Surgeons can hold their surgical sponge to a screen and Triton works out the current blood loss rate. As with the other applications discussed, this would have required millions of training images to be accurate, depicting all kinds of different states for the sponge.

In trials, Triton has been more accurate in determining the loss of blood during a C section than human counterparts. Those whose surgeries involved Triton experienced a shorter stay in hospital.

Transportation

Most of us will be aware of the promise of driverless or autonomous vehicles. Innovations in computer vision are bringing that sci-fi fantasy much closer to reality.

To be autonomous, cars need a lot of input devices like cameras, radars and lasers so they can attempt to perceive the world around them. For example, they will need to classify objects to identify whether they are a car, person, sign or something else. They will then need to detect where the object is, segment it and finally recognise it. In theory, driverless cars need to apply all the different types of computer vision techniques we talked about at the start of this article.

Autonomous vehicles can only become mainstream once vehicles can do this highly effectively. One error in their perception could cause an accident which is why we haven’t seen them fully commercialised just yet.

With more than one million people killed in car accidents every year, many of them caused by human error, detecting objects and driving safely has the potential to save a lot of lives. For example, if the camera notices a cyclist raise their arm, computer vision will ascertain whether there is a genuine signal of intent and create an action accordingly, e.g. to slow down.

Retail

Amazon have trialled using computer vision to run fully automated stores that don’t require staff. The Amazon Go stores have facial recognition cameras at the entrance and track each person as they go through the store. The system recognises if that person removes something from a shelf or puts it back and adjusts their virtual basket accordingly.

Once the shopper has finished in the store they simply walk out and are charged directly from their Amazon account for anything within their virtual basket. Trials of the store have been successful, but there are a few flaws around “fuzziness”, which is when the view of the recognition software is blocked in some way.

Once the minor glitches have been fixed, we could easily begin to see a host of computer vision operated stores popping up globally.

Agriculture

We don’t always think about agriculture as an industry that is ripe for disruption. However, it is one of those that has been employing computer vision technology to optimise operational efficiency to good effect.

One of the most common applications is the use of drones. Traditionally, a farmer might have to manually review their crops and determine any threats to the yield. In using drones, they are able to do this by taking pictures of their crop and using those to scan for issues like infestation or slow growth.

Computer vision models are loaded with millions of images showing what good and bad growth look like so they can classify multiple states. The data is filtered into analytics systems that provide insights, allowing farmers to take action and save their crops.

Financial Services

Banks are increasingly turning to computer vision technology to reduce the risk of fraudulent activity. This includes using fingerprint or retina scans as a method for customers to log in to their accounts, which is becoming commonplace in banking applications. As well as this, customers needing to deposit a cheque have the option to scan it and send it to the bank, where it is authorised by software.

Summary

Some industries are certainly ahead of others when it comes to computer vision technology, but we are starting to see greater adoption across the board as companies begin to realise the ground-breaking potential. Right now, as the public still comes to terms with trusting machines over human interpretation, computer vision technology still requires supervision. There are no major use cases where it can completely replace human resource. For example, whilst some cars are driverless, they still have a human at the wheel supervising what happens. In the future, as we tweak and research computer vision, it is possible that some applications can completely eradicate the need for human input.

Speech Recognition

Introduction

Artificial Intelligence (AI) and machine learning are paving the way for the creation of new, innovative technologies which can be deployed in many industries. One of the technologies that has experienced a big leap forward is speech recognition, also commonly referred to as voice recognition. Speech recognition is the ability of a machine or program to receive, process and interpret spoken sentences.

Usually, the machine is equipped with the hardware capabilities to provide auditory feedback, allowing for a person to communicate with it by means of a spoken conversation. This article will provide an overview of the history of speech recognition and the role artificial intelligence has had on its further development. In addition, different practical use-cases will be discussed, indicating the large influence speech recognition technology has had on a wide range of industries.

The History of Speech Recognition

Whereas Artificial Intelligence and machine learning have had a profound effect on the performance of speech recognition software, the core technology actually dates back to the early 50’s.

In general, the growth path of speech recognition technology can be divided into five distinct periods, each of which was characterized by a major technological breakthrough.

Pre 1970’s

The first ever machine able to recognize speech was invented by Bell Laboratories in 1952. The machine named ‘Audrey’ was an automatic digit recognizer able to detect digits from spoken words.

However, due to the severe limits on the system’s capacity, the machine could only distinguish between the numbers 0 to 9, which was still an unprecedented achievement during those times.

In 1962, IBM presented the successor of Audrey, which was called the ‘IBM shoebox’. The machine was given this name because it was approximately the size and shape of a standard American shoebox. In addition to recognizing the numbers 0 to 9, the IBM shoebox was able to recognize 16 spoken words. The machine was equipped with a microphone and a display which contained lamps that would turn on and off as certain words were spoken through the microphone. However, there still remained a long road to transform this speech recognition innovation into a commercially feasible product.

1970-1980

Research in the field of speech recognition halted after IBM’s shoebox invention in 1962. The main reason for this was the open letter from the influential John Pierce in which he criticized the practical feasibility of speech recognition and in which he openly encouraged the halting of the technology’s funding. This period came to an end when the U.S. Department of Defense funded the DARPA speech understanding research program with the goal of creating a speech recognition system that was able to recognize up to 1000 spoken words. IBM, Carnegie Mellon University (CMU) and Stanford Research Institute all participated in the program.

The results were promising: CMU developed a speech recognition system, named ‘Harpy’, which was able to comprehend up to 1,011 words. This was a tremendous increase in recognition capacity in comparison to the systems that had been developed during the previous decade.

In addition, Bell Laboratories revived funding for speech recognition in parallel to DARPA’s speech understanding research program. Whereas previously speech recognition systems needed to be trained on the voice of an individual person, Bell Laboratories developed a system that was able to understand the voices of more than one person.

1980-1990

The 80’s were characterized by a major breakthrough in speech recognition technology due to the development of so-called hidden Markov models (HMM). Hidden Markov models are statistical models which can be used to calculate the probability of a certain outcome, given a chain of past events. These models turned out to be exceptionally well-suited for the purpose of speech recognition, where they were used to determine the probability of a word originating from an unknown sound. Using hidden Markov models, IBM was able to create a voice activated typewriter in the mid-1980’s.

This transcription system, called Tangora, was able to recognize up to 20,000 words, which were automatically typed out on paper. Although the machine could only understand the voice of the person it was trained on and recognized only a small part of the full English vocabulary, it offered a glimpse of the future possibilities and implementations of voice recognition technology.
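
As a loose illustration of the hidden Markov model idea described above (every probability below is invented, and real systems model acoustic features rather than two toy states), the classic forward algorithm computes how likely a sequence of observed sounds is under a given model:

```python
# Toy HMM: 2 hidden states (say, two phonemes) and 3 observable sounds.
# All probabilities are invented for illustration.
import numpy as np

start = np.array([0.6, 0.4])        # P(first hidden state)
trans = np.array([[0.7, 0.3],       # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1],   # P(observed sound | hidden state)
                 [0.1, 0.3, 0.6]])

def forward(observations):
    """Probability of the observed sound sequence under the HMM."""
    alpha = start * emit[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

# A recogniser would compare this likelihood across candidate words
# and pick the word whose model best explains the sounds.
print(forward([0, 2, 1]))
```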

1990-2000

Up until the 90’s, voice recognition systems operated rather slowly due to the limited computational power that was available at that time. However, the widespread introduction of far more powerful microprocessors in the mid 90’s provided machines with the much-needed computational power for speech recognition technology to become more feasible.

In addition, personal computers were finding their way to the households of the middle class in more developed countries. This motivated the development of the world’s first speech recognition software for consumers. The software, developed by the company ‘Dragon’ and called ‘Dragon Dictate’, was able to recognize up to 100 words per minute, which is close to regular speaking rates of 100 to 150 words per minute.

Post–2000

The early years of the new millennium were characterized by a plateau in speech recognition development and the technology was still being dominated by traditional approaches such as hidden Markov models. However, the funding of speech recognition programs, ever-evolving computational power and the widespread interests of private companies caused the technology to flourish during the past two decades.

The real breakthrough came with the implementation of recurrent artificial neural networks and deep learning. These machine learning techniques mimic the working principles of the biological brain and are able to process very complex data structures, making them well-suited for speech recognition purposes. This caused a giant leap in the development of speech recognition and drastically increased the performance of the technology.

Many private companies started to develop their own speech recognition software, which was picked up by large technology companies with the aim of equipping their current product portfolio with speech recognition software.

Currently, big-tech companies such as Google (Google Assistant), Amazon (Alexa) and Apple (Siri) have implemented their voice assistants in numerous products, allowing customers to interact with their devices in a more convenient way. As the technology advances, we are evolving towards a future where machine-interaction will be governed more and more by means of spoken language rather than touch-based interaction.

Use-cases of Speech Recognition

Mature speech recognition software has found its way into all sorts of industries, reducing costs by means of automation, providing new revenue-generating services or increasing overall customer experience. The following sections provide an overview of the industries which have experienced massive changes due to the implementation of voice recognition applications.

Telecommunication

There are two classes of voice recognition applications appearing within the field of telecommunication.

The first class of applications focusses on reducing costs for tasks that are currently accomplished by a human attendant. An example of such a task is the automation of operator services. Whereas before, calls to operator services were handled by human employees, the implementation of voice recognition software has enabled the full or partial automation of this process. Incoming calls are handled by speech recognition systems which identify the caller’s problem by asking a series of relevant questions. Based on the answers, the caller may be redirected to a human attendant who specialises in the matter concerned. This has enabled operator services to increase efficiency and provide better customer service using fewer resources.
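
As a highly simplified sketch of that routing step (the intents, keywords and departments below are invented, and production systems use trained intent classifiers rather than keyword lists), the logic applied once speech has been transcribed to text might look like this:

```python
# Hypothetical keyword-based call routing after speech-to-text.
ROUTES = {
    "billing":   ["invoice", "charge", "refund", "payment"],
    "technical": ["outage", "broken", "error", "not working"],
}

def route_call(transcript: str) -> str:
    """Pick a destination for a transcribed caller utterance."""
    text = transcript.lower()
    for department, keywords in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return department
    return "human attendant"  # fall back to a person when unsure

print(route_call("I was charged twice on my last invoice"))  # -> billing
```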

The second class of applications focusses on providing services which are able to generate new revenue streams. A prime example of such applications within the field of telecommunication is voice dialing. Whereas previously, one had to dial complete phone numbers or navigate through the phone’s digital contact book in order to ring someone up, voice dialing allows people to complete calls without having to push any buttons.

This is done by pronouncing the name of the person you are trying to reach, prompting the voice recognition software to initiate a call to the person in question. Such applications are especially useful in situations where touch-based interaction is limited or not possible. For example, connecting your mobile device to your car’s audio system allows you to make on-demand phone calls without ever taking your hands off the wheel.

Education

Speech recognition has also proven to be useful in different parts of the educational system. One of the main applications of the technology is helping people learn a second language and teaching them proper pronunciation. This is done by recognizing the student’s spoken words and sentences and correcting them when an error in pronunciation is detected.

In addition, speech recognition can improve the quality of education for students who suffer from a physical disability affecting their motor capabilities. Such students may be unable to carry out intensive hand movements, which prevents them from writing or typing. However, speech-to-text programs equipped with speech recognition software allow these students to dictate their school assignments and browse the internet without physically operating a pen, mouse or keyboard.

Business

The use of speech recognition software finds many practical solutions in the business environment. As discussed in the section about telecommunication, businesses can reduce costs and increase efficiency by partly or fully automating operating services.

In addition, powerful speech recognition software can perform in-depth data mining on the audio files obtained from customer calls. Such data analysis may provide key demographic information about the caller, such as gender, age, accent, emotion and sentiment. This information allows businesses to gain powerful insights into their customer base, launch highly targeted marketing campaigns and improve support and sales performance.

Another application of speech recognition within a business context is automatic text transcription software. Such software converts audio and video fragments into accurate text documents containing the spoken sentences from the imported file. This may be useful for acquiring transcriptions of board meetings, conference calls or shareholders’ meetings in order to easily transfer the information to people who were not able to attend the event.

Daily Life

Large tech companies have invested billions of dollars in the development of voice-activated smart assistants which provide intelligent assistance to their owners in day-to-day tasks.

The current industry leaders providing such systems are Google (Google Assistant), Amazon (Alexa) and Apple (Siri), which have implemented their software in all sorts of devices such as phones, smart speakers and cars. These intelligent personal assistants can help their owner with basic day-to-day tasks by understanding short sentences and commands that are dictated to it.

One can, for example, request to schedule a meeting at a specific time and date, ask for an overview of the scores of recently ended sports games or demand a weather report for the upcoming hours. Usually, these systems are backed by complex artificial intelligence algorithms which learn the owner’s preferences, enabling them to surface relevant information according to the owner’s needs.

In addition, these voice-activated assistants are being integrated into other industries at a rapid pace, expanding the fields in which they operate. This is the case in the financial industry, where applications are being integrated which allow customers to pay bills using voice-activated assistants such as Apple’s Siri.

Conclusion

The world of speech recognition is rapidly changing and evolving, with new applications being discovered on a frequent basis. The future promises to bring a significant increase in speech recognition performance with more robustness to individual voices, increased capability of handling background noise and widespread software being available for a large range of different languages and dialects.

As speech recognition technology advances, the future will evolve to a situation where machine-interaction will be governed by means of spoken language rather than touch-based interaction, allowing more efficient and faster data transfer between human and machine.

Natural Language Processing

Introduction

Just about everyone will be familiar with artificial intelligence (AI) given the huge amount of buzz and hype in recent times. However, what businesses sometimes fail to grasp are the benefits that the applications of AI can bring to their business and why they need to ensure they understand them.

AI can be defined as intelligence demonstrated by machines which, on its own, isn’t actually a tangible output. In fact, AI is an umbrella term for a number of different applications that deliver that intelligence. One of the rapidly emerging subfields of AI is natural language processing (NLP). In this article, we will look at what NLP is and some of the use cases being deployed in different businesses and industries.

What is Natural Language Processing?

Natural Language Processing or NLP is an area of artificial intelligence that aims to help computers make sense of human language. It is very powerful and looks like it is going to have a massive impact on the future of how people interact with businesses and technology.

Humans converse without giving it a second thought. We call this natural language. For computers to interpret us correctly is incredibly difficult given the different dialects and contexts that we use to talk with one another.

The objective of NLP is to decipher, understand and make sense of human language in a way that offers business value. Most techniques rely on machine learning, another AI application, which uses data from previous interactions as a method of predicting responses. It’s perhaps best to look at NLP in a practical example.

(Diagram: a human connected to a machine via an NLP layer. Source: Upwork)

What we see here are a number of steps linking a human to a machine via an NLP layer; a minimal sketch of such a layer follows the numbered steps below.

  1. A human interacts with the machine through speech, chat, social media or some way that involves language
  2. The machine captures the language via the NLP Layer and sends it off to be converted into text or data
  3. The data is processed to understand it. It is mapped against a knowledge base or data store, and the most appropriate response is determined.
  4. The response data is converted back into language
  5. The machine responds to the human
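
A minimal, hypothetical sketch of steps 2 to 4 is shown below. The intents, keywords and canned responses are all invented, and real NLP layers use trained language models rather than keyword matching, but the shape of the flow is the same: capture text, map it to a knowledge base, return a response.

```python
# Toy NLP layer: map incoming text to an intent, then look up a response.
KNOWLEDGE_BASE = {
    "opening_hours": "We are open 9am-5pm, Monday to Friday.",
    "returns":       "You can return any item within 30 days.",
}

INTENT_KEYWORDS = {
    "opening_hours": ["open", "hours", "close"],
    "returns":       ["return", "refund", "send back"],
}

def nlp_layer(utterance: str) -> str:
    """Steps 2-4: capture the language, map it, build a response."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return KNOWLEDGE_BASE[intent]
    return "Sorry, I didn't understand that."

print(nlp_layer("What time do you open on Monday?"))
```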

Take an everyday example of using an iPhone. You will see word suggestions based on what you are typing (most of us call it predictive text) which is NLP in real-time action. We take this for granted as it has been around for so long but some people really struggle to live without it.

Millions of us now own devices like Amazon Alexa and Google Home which are solely based on NLP technology. They work using what are known as Hidden Markov Models (HMM) that can determine what you’ve said and turn it into something usable. Without getting too technical, they split your speech into 10-20 millisecond clips and compare those with the knowledge base. They might look for nouns, verbs and adjectives to get a view of the context as well.

Why do businesses need to use natural language processing?

Businesses need customers to interact and engage with them in order to survive. As the world becomes more digital, the challenge of getting them to do so becomes ever greater. It is no secret that consumers are drawn to social media and new technology like Chatbots. Over the last few years, this has in turn created a drastic increase in the amount of data businesses hold. This data would be classified as “unstructured” given that people write in different languages and formats.

Natural language processing is designed to help businesses overcome the potential language and format barriers of digital interactions. The techniques being used enable machines to understand the meaning of sentences and their context, ensuring deployed technology works efficiently. For example, imagine a business decides to deploy a voice activated device like Alexa or Google Home. At face value, this is a great idea to get their customers interacting with them. However, if the technology doesn’t have the right NLP layer set up to translate voice into data, the device will send irrelevant responses. Consumers will soon stop buying from your business if the technology doesn’t work, regardless of how innovative it might be.

If organisations can get it right, there are plenty of use cases for NLP.

User Experience

Arguably one of the most obvious use cases for NLP is to create more user-friendly experiences. We take things like spell-checks, autocomplete and predictive text applications for granted but these are all great examples of how NLP is being used by people every day to make them more efficient.

Those of us who use predictive text on a regular basis will know that it improves over time when the user has provided the phone with more data. This is an example of how NLP and machine learning work together to create seamless process flows within technology.

Automating Customer Support

Chatbots are the most common technology that uses NLP as a primary function. Conversational bots are gradually taking over from live customer service agents given their ability to operate on a 24/7 basis, provide immediate and accurate answers and heavily reduce call centre resource costs.

When we talk about Chatbots, we mean digital messaging platforms that respond to users on an automated basis with answers to their questions. Businesses are spending time loading these Chatbots with data and creating conversation flows, enabling them to talk to customers in a very natural way.

NLP layers work to ensure that the Chatbots don’t simply provide a response but emotionally connect with the customer. For example, it can look at the way the customer forms their sentences to provide the most appropriate response and personalise it. A younger customer might say “Hey” whereas older customers might use “Hello” and the Chatbot can decide the right language to use.
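
As a toy illustration of that register matching (the greetings and replies below are invented; real systems infer style from much richer signals than the first word):

```python
# Hypothetical register matching: mirror the customer's greeting style.
CASUAL_GREETINGS = {"hey", "hi", "yo"}

def greet(message: str) -> str:
    words = message.lower().split()
    casual = bool(words) and words[0].strip(",!") in CASUAL_GREETINGS
    if casual:
        return "Hey! What can I do for you?"
    return "Hello, how may I help you today?"

print(greet("Hey, where's my order?"))    # casual reply
print(greet("Hello, I have a query."))    # formal reply
```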

This form of NLP is heavily based on data. Chatbots need to be well trained initially to operate correctly and produce the right experiences. The key reason businesses fail in this area is poor data quality. There are countless papers available on how to train Chatbots effectively and we’d suggest reading those before diving in!

Products such as the one offered by Digital Genius can split your customer messages into manageable detail, as the screenshot below shows.

[Image: Digital Genius message categorisation screenshot]

Using this type of categorisation, the bot software can automatically assign a ticket and direct the problem to the right area of support. When it gets things wrong, it learns from the experience and modifies how it categorises interactions next time.
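
To make the categorise-and-learn loop concrete, here is an illustrative keyword-count router with a feedback step: when an agent corrects a misrouted ticket, the corrected category’s word counts are strengthened so similar tickets route better next time. The categories and keywords are invented, and products such as Digital Genius use trained models rather than anything this simple.

```python
# Toy ticket routing with a feedback loop; all keywords are invented.
from collections import Counter

keyword_counts = {
    "billing": Counter({"invoice": 3, "refund": 2, "charge": 2}),
    "delivery": Counter({"parcel": 3, "late": 2, "tracking": 2}),
}

def categorise(message):
    words = message.lower().split()
    scores = {cat: sum(counts[w] for w in words)
              for cat, counts in keyword_counts.items()}
    return max(scores, key=scores.get)

def learn_correction(message, correct_category):
    # Feedback step: strengthen the corrected category's keywords.
    for word in message.lower().split():
        keyword_counts[correct_category][word] += 1

ticket = "i was sent the wrong invoice for my parcel"
print(categorise(ticket))              # initial routing on current counts
learn_correction(ticket, "delivery")   # an agent re-routes it to delivery
print(categorise(ticket))              # similar tickets now lean to delivery
```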

Content Marketing

With improvements in AI technology, NLP platforms can assist human researchers by summarising, in real time, articles that could otherwise take hours or even days to work through. Beyond that, through its sophisticated algorithms, NLP will likely pick up on content that would otherwise never have been discovered.

Summarising and researching information are two of the major content marketing applications for NLP.

Summarising Content

When we talk about summarising content with NLP, it can be extractive, where the system distils text into its relevant parts, or abstractive, whereby it comes up with its own wording from the text via machine learning algorithms. As an example, consider the paragraph: “John and Jane rode in a car to attend the annual event in Las Vegas. In the city, Jane gave birth to a child named Jennifer.”

An extractive summary would give us: “John and Jane attend event Las Vegas. Jane birth Jennifer.” The words have been extracted to create a summary which, although quite odd grammatically, highlights the salient points of the paragraph.

For content marketing, machine learning models can apply this technique against massive volumes of text and supply teams with details quickly. This will save a huge amount of time in deploying human resource to do that job.
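
Below is a minimal sketch of an extractive summariser, assuming plain English input: it scores each sentence by how frequent its words are across the passage and keeps the top-scoring sentence(s) verbatim. Real systems add far more linguistics, but the “extract, don’t rewrite” principle is the same.

```python
# Toy extractive summarisation: rank sentences by word frequency and
# return the strongest ones unchanged.
import re
from collections import Counter

def extractive_summary(text, num_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Frequency of each lower-cased word across the whole passage.
    word_freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        return sum(word_freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))
    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:num_sentences])
    # Return the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)

text = ("John and Jane rode in a car to attend the annual event in Las Vegas. "
        "In the city, Jane gave birth to a child named Jennifer.")
print(extractive_summary(text))
```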

However, abstractive techniques take this to the next level. Instead of just taking the important words from the phrase, abstraction involves paraphrasing and shortening parts of the source document. In essence, the machine takes the content and curates it in its own words. This helps to overcome the grammatical issues we see with extraction and provides genuine value to content marketing teams.

An abstractive summary of the sentence above would be: “John and Jane came to Las Vegas where Jennifer was born.” The limitation right now is that developing abstractive algorithms that are grammatically correct is incredibly difficult. This is why teams tend to rely on extractive methods, which can be applied with relative ease as they don’t require any manipulation of the content.

In practice, using NLP for extractive summarisation might mean taking the top 50 results for a specific keyword in Excel, opening up each of the links and reviewing everything. The models will take all of that content, convert it to data and then produce a summary based on the key phrases they discover. It is a great way of condensing a mass of information into understandable chunks which digital marketing teams can use.

Over time, AI is capable of learning the best order for the sentences and constructing them in a way that makes sense to a reader. This requires a lot of training data, but it will get there eventually.

Researching Content

Marketers will probably be the first to confess that they think too much. Before they have created and developed one piece of content, they are probably already researching the next one and the next and the next one (you get the idea). The problem here is that it can lead to misplaced focus and ambiguous-looking content. A lot of time is probably spent copying and pasting material from one place to another.

Many of these tasks could be done by a machine, letting teams focus on strategy and insight rather than pouring hours of time into researching information.

Machine learning algorithms can quickly review vast amounts of data and analyse the patterns or trends that are key to your industry. NLP models might be given a list of phrases or keywords to monitor the market for and return relevant research based on those. This can provide real-time insight for content marketing teams and mean they don’t need to spend hours or days finding data themselves.
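
A toy version of that monitoring step might look like the sketch below, which scans a stream of (invented) articles for the phrases on a watch list and surfaces the matches.

```python
# Illustrative keyword-driven research monitoring; the watch list and
# articles are invented placeholders.

WATCH_LIST = ["machine learning", "customer experience", "chatbot"]

articles = [
    {"title": "Chatbot adoption doubles", "body": "Retail chatbot usage grew..."},
    {"title": "Q3 market update", "body": "Commodity prices were flat..."},
    {"title": "ML in retail", "body": "Machine learning is reshaping customer experience..."},
]

def relevant_articles(articles, watch_list):
    for article in articles:
        text = (article["title"] + " " + article["body"]).lower()
        hits = [phrase for phrase in watch_list if phrase in text]
        if hits:
            yield article["title"], hits

for title, hits in relevant_articles(articles, WATCH_LIST):
    print(f"{title}: matched {hits}")
```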

If you then combine the research with what we know about how NLP summarises language, suddenly you have a very efficient process that can really augment content creation.

Over time, NLP platforms can start learning from the content you write and ensure the research and summarisation become even more relevant.

Social Media Monitoring

It is vital that you know what your customers are saying about you on social media. With multiple channels out there, it is very easy to be swamped by buckets of data. NLP has the ability to monitor and respond to feedback easily.

Technology like Sprout Social listens to social media and analyses the activity surrounding your brand. For example, it will flag if you get a high number of mentions on Twitter and recommend potential actions if necessary.

Natural Language Generation (NLG)

Natural Language Generation (NLG) is an important branch of Artificial Intelligence that is set to impact the future of NLP. Whilst NLP can research text and extract data to summarise it, NLG can take that and turn it into a written narrative. This is the core of content marketing, voice activated systems and IoT devices. NLG might quickly take us into the future of the field.

Natural Language Generation essentially uses specifically created algorithms to translate data into human-like language. This will result in the automation of news reporting, website copy and headline generation, among other tasks. A study by Gartner predicted that almost 20% of business-focused content would be generated solely by machines by 2018.

However, this does not spell doom for fields like content marketing. It only means that content marketers will have access to superior, advanced tools and technologies, enabling them to better analyse the content they create. Deeper insights will allow teams to effectively predict content performance and patterns in audience engagement.

Wherever there is a need for content generation, NLG can help. Some of the most common examples are:

  • Written analysis for business intelligence dashboards
  • Personalised customer communications
  • Product descriptions and landing pages
  • Client portfolio summaries

NLG narratives are designed to read as if they were written by a human and are usually based on a set of pre-defined rules (laid out by humans). These would be conditional logic or triggers based on the vast amounts of data sitting behind the content. Users can edit the rules based on their digital marketing strategy, but the complex algorithms are able to adapt and instantly create relevant content.
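
The sketch below illustrates that rule-based approach: human-written conditional logic turns a row of data into a readable sentence. The data fields, thresholds and phrasings are all invented for the example.

```python
# A minimal rule-based NLG sketch: pre-defined rules choose the phrasing
# based on the data, then a template renders the narrative.

def sales_narrative(region: str, revenue: float, target: float) -> str:
    delta = revenue - target
    if delta >= 0.1 * target:
        verdict = "comfortably beat its target"
    elif delta >= 0:
        verdict = "narrowly beat its target"
    else:
        verdict = "fell short of its target"
    return (f"{region} {verdict}, posting ${revenue:,.0f} in revenue "
            f"against a target of ${target:,.0f}.")

print(sales_narrative("EMEA", 1_250_000, 1_000_000))
print(sales_narrative("APAC", 870_000, 900_000))
```

Editing the rules (the thresholds and verdict phrases here) is exactly the kind of strategy-level control the paragraph above describes.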

Case Study – Orlando Magic NBA Team

Orlando Magic wanted to reach out to their fans with completely personalised email content. They realised that every one of their fans is different and needs to be treated as an individual. They used the Wordsmith platform, a powerful NLG system that can transform data into insightful narratives.

The platform was able to take the data Orlando Magic held on each of their customers and turn that into a very specific personalised email. The emails were curated in such a way that every fan’s unique situation was taken into account when it came to ticket servicing. The result was that 80% of fans responded positively to the email they received as it was so targeted through NLG.

[Image: a personalised Orlando Magic fan email]

Google have recently released their Gmail Smart Compose algorithm which has changed the way their users interact with the platform. As you start typing an email, the NLG models predict and suggest what you are likely to want to say next. This can range from one word to entire sentences. To apply the recommendations, users simply swipe to the right. It saves them time when constructing an email whilst avoiding errors in content.

The algorithm Google use learns from experience. If users continually reject recommendations, it won’t offer them again, whilst it will keep offering the ones they accept. The most advanced NLG systems are able to self-learn in this way without the need for human intervention after the initial rules and triggers have been set, as in the Orlando Magic case.

Put this into the context of content marketing. To start, a human sets up the rules, keywords or phrases that they want the machine to research and summarise. This will be based on the digital marketing strategy and business insight.

The machine will go off and start searching for the most relevant content and extracting that information for the user. Every time it does this, the machine will learn based on what the end user decides to accept or reject. It will incrementally adjust how it does things.

Where some digital teams have dismissed NLP and NLG, it is usually because they expected it to be perfect from day one. Unless you have a perfect strategy and millions of rows of data straight away, this will be almost impossible. NLG takes some time to perfect, but it will get better quickly. In fact, it probably learns faster than humans ever could.

Only recently, Facebook created AI that was able to beat professional poker players. Just like NLG, it did this through incremental learning. At the start it probably only just about knew the difference between a Heart and a Diamond.

If you get everything right, suddenly you have a machine that can read, analyse, suggest and even curate content faster and more efficiently than humans have ever been able to. Marketing teams can spend time on strategy and insight and have far more satisfying jobs that don’t focus on laborious tasks.

The future of NLP and NLG

At the start of 2019, a non-profit AI research company backed by Elon Musk and others announced that they had built an AI model that was able to coherently write paragraphs of text at scale.

The GPT-2 model from OpenAI learned how to write by extracting data from eight million web pages. Just think how much data that is for a second and how long it would take a human to scan that volume of information. You probably won’t even look at eight million websites in your lifetime. “The world’s best economies are directly linked to a culture of encouragement and positive feedback.”

Although it sounds like it, this is not a quote from an economist or philosopher but was generated straight from the GPT-2 model based on what it learned from its research. Below is a full extract (source: Analytics Vidhya) created by GPT-2.

“I called Donna and told her I had just adopted her. She thought my disclosure was a donation, but I’m not sure if Donna met the criteria. Donna was a genuinely sweet, talented woman who put her life here as a love story. I know she thanked me because I saw her from the photo gallery and she appreciated my outrage. It was most definitely not a gift. I appreciate that I was letting her care about kids, and that she saw something in me. I also didn’t have much choice but to let her know about her new engagement, although this doesn’t mean I wasn’t concerned, I am extremely thankful for all that she’s done to this country. When I saw it, I said, “Why haven’t you become like Betty or Linda?” “It’s our country’s baby and I can’t take this decision lightly.” “But don’t tell me you’re too impatient.” Donna wept and hugged me. She never expresses milk, otherwise I’d think sorry for her but sometimes they immediately see how much it’s meant to her. She apologized publicly and raised flagrant error of judgment in front of the society of hard choices to act which is appalling and didn’t grant my request for a birth certificate. Donna was highly emotional. I forgot that she is a scout. She literally didn’t do anything and she basically was her own surrogate owner.”

There is no reason why you wouldn’t think that narrative was written by a human. Quite amazing progress.

Whilst these breakthroughs are amazing, we are not saying that AI will replace the need for humans in generating content and other customer experience tasks. It will do some tasks far more efficiently than humans ever could and must form part of data and digital strategies.

ChatBots

Introduction

The digital world is growing exponentially and changing everyday life as we know it. Everybody wants to be connected, wherever they are and at any time, through whichever device they want to do it with. The Internet of Things (IoT) is an enabler to do exactly that. Although some of the concepts are a little tricky to grasp, IoT itself is quite a simple notion: a device or series of devices that take things from the world and connect them to the Internet (McClelland, 2019). As soon as a device is connected to the internet, it can send and receive data to make it smart (hence smartphone).

[Image: a high level overview of how IoT devices function]

Through using this data, the device can carry out some sort of action such as Amazon Alexa responding to a command or Google returning a search result. This report looks at how data drives IoT and the way organisations exploit the technology with conversational Chatbots.

About Internet of Things (IoT) Data

IoT devices and data are intrinsically related and heavily dependent on each other to create a real-world impact (Joseph, 2018). If we searched for something on Google and the results were not relevant because of poor data, would it still be the global colossus that it is today? Probably not. The amount of data worldwide is expanding constantly along with IoT adoption, and it is thought that around 31 billion devices will be connected by 2020. That means 31 billion devices constantly sending and receiving data, and organisations are moving towards Platform as a Service (PaaS) models which allow scalable cloud-based storage, enabling fast and effective data processing.

[Image: Microsoft Azure example of how IoT uses cloud computing]

The data collected from IoT devices can solve problems and there have already been successful trials (Sahu, 2018). Doctors use IoT scanners that use Big Data platforms capable of interpreting images and detecting the early signs of cancer in seconds (IoT For All, 2018). Smart Home devices such as Nest can maintain temperatures and ensure health and safety in the home. Ultimately, an IoT device is only as powerful as the data it collects and analyses.

About Conversational Chatbots

When somebody mentions the term “conversational Chatbot”, most of us probably think of Amazon Alexa or Google Home. These are both great examples of voice activated IoT Chatbots, but the technology is actually more far reaching than that alone. As with any other device, the Chatbot is almost entirely data driven, as shown in Figure 1.3 below.

[Figure 1.3: How Alexa works with data (Ovenden, 2018)]

Wikipedia defines a Chatbot as “a computer program or an artificial intelligence which conducts a conversation via auditory or textual methods.” After years of conversing via text messaging on a smartphone, people have become comfortable with conversational interfaces. The conversational bot (known as a Chatbot) takes that to the next level. Using data, the intelligent software is designed to make you feel like you are talking to a real person and has the ability to automate tasks that were previously done with human intervention. Chatbots use applications of artificial intelligence (AI) such as machine learning (ML) and natural language processing (NLP) to process unstructured data like speech and text, map it to a knowledge base and return an answer to the user.

For example, a user might ask a customer service Chatbot what time a store is open. The Chatbot will interpret the question, send the results to a knowledge store or large data storage bank, analyse the matches and send back the most likely result. The best Chatbots are around 99.9% accurate and it is almost impossible to determine whether they are a human or a robot. The accuracy all depends on the quality of data feeding them and the ongoing maintenance of the knowledge base, as well as the technical algorithms between those processes.
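
As an illustration of that question-to-knowledge-base step, the sketch below matches the user’s words against each stored question and returns the answer with the best overlap. The knowledge base is an invented placeholder, and real bots use trained language models rather than simple word overlap.

```python
# Toy FAQ matching: pick the stored question whose words overlap most
# with the user's question, then return its answer.
import re

knowledge_base = {
    "what time does the store open": "We open at 9am, Monday to Saturday.",
    "how do i return an item": "Returns are free within 30 days via our returns portal.",
    "where is my nearest store": "You can use the store finder on our website.",
}

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(question):
    asked = words(question)
    best = max(knowledge_base, key=lambda q: len(asked & words(q)))
    if not asked & words(best):
        # Fall back politely when nothing really matches.
        return "Sorry, I didn't catch that. Could you rephrase?"
    return knowledge_base[best]

print(answer("What time are you open?"))
```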

Benefits of Using Conversational Chatbots with IoT Data

Chatbots go together perfectly with IoT devices and data. They have a predefined workflow and help drive engagement and facilitate faster conversions by answering questions or even offering suggestions. The main scenario we see them in is customer support (Kar et al, 2018).

“By 2020, the average person will have more conversations with bots than with their spouse, in fact, it is estimated that 85% of interactions will be with Chatbots.” (Gartner)

Some of the most common applications are seen in booking flights or tickets, searching for hotels and events, ordering food and buying clothes. Whilst the objectives are different, the key concept of using data to present a decision or recommendation without the need for human intervention remains the same. Beyond this, using data with IoT devices has a major benefit over human resource; they never need to take a break.

Assuming they have the requisite storage and are switched on, cloud servers are fully operative 24/7, in line with the demands of IoT users. Customers don’t want to wait for a company executive to help them with their query; they want results there and then. Having an intelligent Chatbot with a maintained knowledge base enables full-time availability, increasing revenue at a reduced cost as fewer staff are required.

A further benefit of using data in IoT devices is the application of machine learning. This is the process of using historical data, and the experience gained from it, to make decisions. For example, if a retailer sells a series of products at different price points and the data shows customers have never purchased when the price goes over $10, the machine will know not to offer that. However, if customers start spending more, the machine might change the price points accordingly, learning from the data. Algorithms can update themselves without needing to be retrained by humans. As long as the quality of data is strong, they won’t make mistakes and will provide a far better experience to the IoT user. Chatbots can even provide these responses instantly, rather than an agent having to research a topic or perhaps review paper documents.
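
The sketch below shows the price example in miniature: it estimates the purchase rate at each price point from (invented) order history and picks the price with the highest expected revenue per visitor. Real pricing systems are far more sophisticated, but the learn-from-history principle is the same.

```python
# Illustrative price learning from historical purchase data.
from collections import defaultdict

# (price offered, did the customer buy?)
history = [(8, True), (8, True), (9, True), (9, False),
           (10, True), (10, False), (11, False), (11, False)]

stats = defaultdict(lambda: [0, 0])  # price -> [offers, purchases]
for price, bought in history:
    stats[price][0] += 1
    stats[price][1] += int(bought)

def expected_revenue(price):
    offers, purchases = stats[price]
    return price * purchases / offers

best = max(stats, key=expected_revenue)
print(f"Offer at ${best}: expected revenue {expected_revenue(best):.2f} per visitor")
```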

Limitations to Using Conversational Chatbots with IoT Data

Whilst Chatbots come with several benefits when used with IoT data, they also inherently come with challenges and limitations.

First and foremost is the technology landscape of IoT, which is going through a period of constant change. As new IoT devices gain a growing number of sensors, nodes and networks, Chatbots face a daunting challenge in gaining enough technical knowledge to interact with all the different components. Cloud and multi-cloud environments mean that data might be stored in different locations and in multiple contexts, also posing challenges for encryption and security.

A further challenge with Chatbots is the need for being multi-lingual given the widescale reach of IoT devices. Not only does the knowledge base need to contain quality data, but it requires this information in multiple languages to suit the user of the IoT device being used. Chatbots need to be able to respond to phrases in various contexts and cultures which could require substantial coding and database maintenance. (Kar et al, 2016)

As well as these limitations, UX Collective give various reasons why Chatbots fail such as not being able to understand emotional context or not having the correct AI infrastructure. Companies often deploy Chatbots as they seem like the “cool” thing to do without realising that they only succeed with a strong data foundation.

Using Conversational Chatbots with IoT Data

Whilst Chatbots have mainly been used in consumer facing environments, they have also started to prove useful within Industrial IoT and software development. This is because it has become easy to integrate them with existing platforms like Facebook Messenger, Slack, Skype or even into a website (Fagella, 2019).

The Microsoft Bot Ecosystem (Figure 1.5) consists of a Bot Framework (Boyd, 2017) to create the flows, a Bot Connector to integrate with the user’s preferred communication channels, Microsoft Cognitive Services that use AI to process the input data and Microsoft Language Understanding Services. Platforms such as this allow users to create a Chatbot using familiar development languages like C++ and Java.

There are many use cases now for Chatbots that rely on data. Concierge tells users if they are receiving the best travel pricing, communicating via mobile, website or Slack. GWYN is a development by IBM Watson that allows users to find the perfect gift via a Chatbot and place the order online. The H&M clothing Chatbot helps users find the perfect outfit, matching data to customer profiles via its app.

Conversational Chatbots are becoming mainstream in many industries to manage customer support and experience and, given their potential for cost saving, speed and learning, the art of human to human conversation could easily die out over the next decade.

[Figure 1.5: Example of the Microsoft Bot Framework]

Top Conversational Chatbot Use Cases

Whilst most use cases tend to be in customer services, as that is where businesses see the biggest impact, there are plenty of other ways that conversational Chatbots can be deployed. These are some of the main ones we see today that we haven’t mentioned yet in this article, showing there is more to the technology than meets the eye.

Automating Transactions

Retailers are using Chatbots to allow customers to complete transactions online without human interaction. The major benefits here come in saving the cost of labour and the fact that consumers don’t have to spend time on a phone call, improving their experience in most cases. MasterCard have an AI application that allows their customers to check their account balance, set up alerts and pay their bills.

Marketing

Chatbots can think more like marketers than a traditional customer service agent ever could. With access to massive amounts of data and knowledge, Chatbots have the potential to upsell, cross-sell and recommend products. For example, if an existing customer is using your Chatbot, it can provide responses based on their previous behaviours and purchase patterns, using experience it has gathered from data. A Chatbot can do this almost instantly, whereas a customer service agent would have to spend time reviewing the account. Figure 1.6 shows an example from HelloFresh where the Chatbot provides a response based on the type of customer.

[Figure 1.6: How HelloFresh do marketing on a Chatbot]

Supporting Social

Facebook Messenger is one of the most popular platforms for developing Chatbots. One of the key reasons for that is the ability to support Facebook Marketing and Ads. For example, a Facebook Ad can have a call to action that automatically directs a user to their Facebook Messenger Chatbot. This allows the user to do everything on one site and, with the mass of data Facebook have available, be given a truly personalised experience. Chatbots are a bit like hallowed ground for marketers.

Human Resources

HR departments are using Chatbots to monitor employee satisfaction. Integrations such as Polly will ask staff questions each day to get a better understanding of how happy and engaged they are. This is far more productive than six-monthly or annual surveys that only gather responses at a very specific moment in time. Other applications in HR include booking holidays and notifying of absences, rather than requiring phone calls or emails.

IT

IT teams use Chatbots to facilitate helpdesk enquiries and respond to common questions. The objective of this is to ensure employees are not having to answer simple queries and instead, focus on more complex and skilled tasks. Other applications currently on the market include investment management automation, credit applications or money transfers as well as travel tips or accommodation bookings. Companies are continually finding more bespoke use cases that will continue to creep into our lives.

Summary

Chatbots are one of the most revolutionary technologies of the 21st century given the impact they have had across several applications. However, there is still a long way to go before they are perfect.

The majority of Chatbots are still simply responding to questions and, although they do that very well, the interaction doesn’t quite have the same feeling as one with a human. The bots of the future will talk, think and draw insights from the knowledge they gain. In this way, they will build emotional connections with people rather than just responding to requests.

As consumers continue to prefer Chatbot communications over and above other channels, investments in the technology will continue to grow. Businesses need to ensure they have a strategy in place to work with AI and keep up with the competition.

Robots

Introduction

Whilst in the last decade there has been a lot of hype around how robots are taking over, they have actually been part of our lives for almost 100 years. In fact, some say you could trace the theories of robotics all the way back to the Ancient Egyptians, whose water clocks were one of the very first examples of machinery use. Further examples have existed through time, with the likes of Da Vinci, De Vaucanson and Kauffman inventing automated ways of doing things.

In this article, we look at the history, present and future of robotics from 1921, when writer Karel Capek first used the term “robot”, accelerating the field as we know it.

What is a robot and what is robotics?

The term robot is quite ambiguous and every website you look at seems to have a variation on the definition. In general, a robot is a machine that is capable of carrying out both routine and complex actions that are programmed by engineers. In theory, everyday objects like dishwashers, ovens and thermostats are all robots, but they have become so ingrained in life that we don’t really see them as part of the “robotics” school anymore. However, today, expert definitions of a robot go a bit beyond the household dishwasher.

Robots are able to do three things: sense, compute and act. Although the scale of these abilities varies widely between robots, it differentiates them from standard machines like a dishwasher. Using something like sonar, robots may be able to sense the world around them. Some will have multiple sensors and cameras with different functions, whereas others could just have one to carry out a single task. Similarly, when it comes to computing, robots will have anything from a single chip all the way up to a clustered network of systems. When robots need to act, some will move around and others may just be deployed to manipulate things.

One of the key differentiators between what a robot is and what a robot is not would be the idea of a “feedback loop.”

A robot is in a constant cycle of sensing, computing and acting. Feedback is what makes machines smart and, in turn, makes them into robots. This cycle sits at the heart of the field of robotics.
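
A minimal sense-compute-act loop might look like the sketch below, using a thermostat-style controller as the example. The sensor reading is simulated; on a real robot, sense() would read hardware and act() would drive an actuator.

```python
# Toy sense-compute-act feedback loop (thermostat-style controller).
import random

TARGET_TEMP = 21.0

def sense():
    # Stand-in for a temperature sensor.
    return TARGET_TEMP + random.uniform(-3, 3)

def compute(reading):
    # Decide what to do based on what was sensed.
    if reading < TARGET_TEMP - 0.5:
        return "heat on"
    if reading > TARGET_TEMP + 0.5:
        return "heat off"
    return "hold"

def act(command):
    print(f"actuator command: {command}")

for _ in range(5):  # the loop runs forever on a real device
    act(compute(sense()))
```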

What is Robotics?

Robotics is a branch of technology that deals with robots. Robots are programmable machines which are usually able to carry out a series of actions autonomously, or semi-autonomously. They can interact with the physical world via sensors or actuators. Robotics is often confused with artificial intelligence (AI) due to sci-fi movies such as Terminator portraying a far higher level of singularity than is actually present in the real-world.

However, robotics requires the human programming of robots with a defined set of rules. These robots are not artificially intelligent in their own right. For example, you could build a robot to pick up an object and place it somewhere else. That is robotics. However, an AI programmed robot would be able to use a camera to recognise an object independently, understand what the item is and decide on where it should go.

Robotics is the part that builds the robots and AI involves programming it with some sort of intelligence.

How does a robot work?

In theory, on a physical level, a robot works exactly the same way as a human being. A robot will have a moveable structure, a motor, power supply and a computer chip which acts like its brain. Humans have muscles and senses that do all these things. A robot has the components to replicate human-like behaviours.

Robots are built with moving parts. These can range from very complex systems consisting of dozens of parts to those with only a single wheel. They are usually made of metal or plastic and are constructed with joints, just like the human body. To move the joints, robots are fitted with something called an actuator. Wikipedia defines an actuator as follows:

“An actuator is a component of a machine that is responsible for moving and controlling a mechanism or system, for example by opening a valve. In simple terms, it is a "mover". An actuator requires a control signal and a source of energy.”

To power the actuator, robots tend to have a battery or can plug into the wall depending on their purpose i.e. you can’t really plug a robot designed to move into a wall as it quickly becomes impractical. Everything is then wired into an electrical circuit. This powers the motors and any hydraulic systems.

The computer built into the robot programs its actions. This will control everything that has been built into the circuits. The majority are reprogrammable through written code that is put onto the chips. Robots are fitted with sensors to integrate all of this together and form what we know as robotics.

Autonomous Robots

In the last few years, there has been a rise in the autonomous robot. You may have seen the robots that can vacuum your home without human intervention. These robots use a bumper sensor to detect obstacles. When the robot hits something, the sensor is activated, telling the programming to make it change direction or go backwards. Every time the robot hits something, it goes in a different direction. It can do this until it runs out of battery. Infra-red sensors can take this a step further and make the robot aware of an obstacle before it strikes it.
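
The sketch below is a toy version of that bumper-sensor behaviour: drive forward until the (simulated) bumper triggers, turn a random amount, and carry on until the battery runs out. All the hardware calls are stand-ins.

```python
# Toy robot vacuum logic: bump, turn, keep going until the battery dies.
import random

battery = 10  # simulated charge units

def bumper_pressed():
    return random.random() < 0.3  # pretend we hit something 30% of the time

def drive_forward():
    print("driving forward")

def turn(degrees):
    print(f"turning {degrees} degrees")

while battery > 0:
    if bumper_pressed():
        turn(random.choice([90, 135, 180]))  # pick a new direction
    else:
        drive_forward()
    battery -= 1

print("battery empty - returning to dock")
```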

Recently, artificial intelligence (AI) has taken this automation to a new level. We will come onto that shortly, after a brief history of robots.

A brief history of robots

Starting from the penning of the term in 1921, robots have come a long way in the course of a century. We’ve already spoken about the standard make-up of a robot and how they have moved into automation. The future firmly lies in AI, but let’s quickly see how we got here. There are many events beyond those below, but we’ve tried to highlight the key ones in the timeline.

  • 1921 – Karel Capek coins the term robot to describe automata in fiction
  • 1942 – Isaac Asimov pens the term “robotics” to describe the field. The three laws of robotics (famed in the movie “I, Robot”) were coined by Asimov.
  • 1950 – George Devol invented the first industrial robot, Unimate, to transport die castings from an assembly line.
  • 1960s and 70s – a lot of innovation in the field was around the use of robotics arms to complete tasks in the field of medicine.
  • 1971 – Mars 2 was the first robot to land on Mars (even if it did crash land!)
  • 1984 – Wabot-2 was created as the first robot capable of playing the organ
  • 1994 – the CyberKnife robot was able to conduct surgery
  • 2002 – Roomba the robotic vacuum cleaner was first released commercially.
  • 2005 – Self driving cars make their first appearance but are not very successful at this point
  • 2017 – a robot called Sophia was granted Saudi Arabian citizenship

Move on to 2019 and engineers at the University of Pennsylvania created millions of nanorobots in just a few weeks using technology borrowed from semiconductors. These microscopic robots, small enough to be hypodermically injected into the human body and controlled wirelessly, could one day deliver medications and perform surgeries, revolutionising medicine and health.

Artificial Intelligence, Machine Learning and Automation

Robotics is at a state where engineers are quite comfortable building robots, so they can spend more time focussing on the software that makes them more powerful. Without efficient AI and algorithms, robots won’t ever be able to do anything particularly ground-breaking. Consider an everyday device like Amazon Alexa. Essentially it is a black box sitting on the table at home. All the clever work is done by AI and its applications, like machine learning and natural language processing, in the background.

[Image: Amazon Alexa]

An AI robot works in the same way (Alexa would be considered a voice-activated device as opposed to a robot). A sensor will gather information from the environment. This information could be data, images, voice or video, for example. The information is processed by powerful computers making use of modern cloud infrastructures and new technology like edge computing. The AI system can analyse the information and return a response in milliseconds.

In machine learning, AI will compare the new information to all data it has been previously programmed with to work out the best and most relevant action. As the robot carries out more activities, it learns from that and continually adds to its knowledge base.

For example, in September 2019, OpenAI created an AI program that could teach itself how to play the children’s game of hide and seek. At the start, with only some programmed data, the virtual game yielded poor results with the hiders often getting caught. However, after thousands of experiments, the AI began to learn how to use obstacles to its advantage and evade capture. A robot does the same. It will remember the bad choices it makes and improve on them next time round.

The benefits of robotics are perhaps best explained through some industry use cases.

Robots Industry Use Cases

Smart Homes

The Roxxter range of robotic vacuum cleaners from Bosch are leveraging AI to be more efficient. They can draw interactive maps of their environment, like a built-in GPS system. As well as that, they integrate with Alexa for voice-activated control.

Olly is a robot built by EuroTech that uses AI to recognise users and their emotions during use. It is able to proactively start a conversation after recognising how the user might be feeling through its camera sensors (known as computer vision). It can predict what music the user might want to listen to based on their mood.

Agriculture

In agriculture, drones are being deployed to improve the efficiency of farming. Farmers can deploy a robotic drone to scan the terrain or even plant seeds. This can be done at a much faster rate than a human could ever achieve. Using sensors, the drone can take pictures of the land. AI systems can analyse those images in real-time to ensure the farmer is getting the best possible yield. For example, it can ascertain what the best crops would be for the current conditions or provide feedback on potential pests.

Factory and Manufacturing

The smart factory is expected to be one of the biggest developments when it comes to AI and robotics. Machines with integrated AI can be much more efficient, cost effective and productive. One example is predictive maintenance. Sensors collect machine data which is analysed in real-time. From that, it is possible to see if any maintenance is required based on historical data, and automatic actions are taken.

Healthcare

AI can be programmed with knowledge on how to conduct surgical procedures. This will come from existing images and videos which can help computer systems know exactly what to do. When integrated with robotics, there is no risk of tiredness or shaky hands, meaning better accuracy with procedures. Surgeons can oversee the procedure rather than carrying it out as long as they are confident in the programmed software.

Recently, the US Department of Defense funded research at Carnegie Mellon University to create an autonomous robot that can treat injured soldiers in remote locations using such AI systems.

Service, Sales and Retail

The Amazon fulfilment centres make use of AI and robotics. When a customer places an order, the sensors on the robots at the fulfilment centre allow them to move autonomously to find the right product. They take the items to the edge of a fenced-off robotic field where workers pick them up ready for distribution. Every second saved by a robot is a massive cost reduction for Amazon given their millions of orders each year.

Amazon have also purchased Canvas Technology who created automated robots that know where to move products in the warehouse.

The Future of Robotics

How AI is changing industry

Using Data and AI in Telecoms

Introduction

The telecoms industry is about more than simply providing phone and internet services. In the digital world, with accelerated development in Artificial Intelligence (AI) and Internet of Things (IoT) devices, telecoms is thriving. In the telecom industry, AI applications are being used to streamline operations, maximise profits and build effective marketing strategies, amongst other use cases.

A lot of the activity in telecoms relates to data transfer, exchange and import, and the sheer volume of this process is getting larger by the minute as more devices are connected. Old techniques are becoming less relevant and AI is becoming a pre-requisite to success.

Telecoms providers are under increased pressure to deliver high quality service as more devices are connected every day. Gartner have forecast there could be as many as 20 billion connected devices by 2020. That means there is a lot of data available and, therefore, increased network traffic. Service providers will find it tough to manage the sheer volume of information with their current infrastructure, which is why many are turning to AI investments. This article looks at some of the applications of AI in the telecoms industry.

Enhanced Customer Service

Virtual Assistants (sometimes called Chatbots) are changing the way people interact with customer support. In fact, they are so effective that it is thought they will be saving up to $8 billion by 2022. Several requests can be dealt with simultaneously, like installations, set up and troubleshooting. All of these would previously have required a tremendous amount of human resource, and that accounts for the main bulk of the cost savings. After releasing a virtual assistant to help customers, Vodafone saw a 68% increase in customer satisfaction scores. Whilst this is mainly focused on the simple questions, it means customers don’t have to spend time on a call, nobody has to be on the other end to answer it, and a virtual assistant can respond instantly. Beyond that, all of this service is available 24/7, which is vital to a worldwide company.

Robotic Process Automation (RPA)

Telecoms service providers have a lot of customers. In turn, these customers carry out millions of transactions every single day. RPA is a type of automation that utilises AI to manage back office tasks that are both repetitive and laborious. Some of the key tasks might be data entry, accounting or fulfilling orders that would traditionally require human intervention. In automating these tasks, people are freed up to do more complex and valuable business tasks.

Network Analysis and Predictive Maintenance

Using AI, service providers can analyse networks and automatically optimise quality based upon the traffic and timezone. They do this using machine learning, an application of AI which takes data and looks for patterns and trends over time. The AI platform will learn from the patterns and trends it spots and, once it has collated enough data, it will be able to start predicting when a failure is likely and recommend the best course of action. These notifications allow telecoms companies to maintain networks proactively rather than reactively, ensuring a far better customer experience.

Lots of companies are investing in AI as part of their infrastructure. For example, Sedona Systems’ NetFusion is designed to optimise the routing of traffic and speed delivery of 5G enabled services such as Virtual and Augmented Reality. Nokia has a platform that is able to predict service degradations and negate them wherever possible.

As the algorithms learn, they can be used for predictive maintenance. Operators can use data-driven insights to monitor the state of equipment, anticipate failure based on patterns, and proactively fix problems with communications hardware such as cell towers, power lines, data centre servers, and even set-top boxes in customers’ homes. The Dutch company KPN have a system whereby they can automatically change their internal IVR system using the data generated from the contact centre. This has proved to be a very effective way to service their customers.
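
As a hedged illustration of the predictive idea, the sketch below compares each new telemetry reading with a rolling average of recent readings and raises an alert when a reading drifts beyond a threshold. Real platforms learn their thresholds from historical failure data; the numbers here are invented.

```python
# Toy predictive maintenance: flag equipment telemetry that drifts away
# from the rolling average of recent readings.
from collections import deque

WINDOW = 5       # how many recent readings to average
THRESHOLD = 1.5  # flag readings this far from the recent average

readings = [70.1, 70.4, 69.9, 70.2, 70.0, 70.3, 74.8]  # e.g. amplifier temperature
recent = deque(maxlen=WINDOW)

for value in readings:
    if len(recent) == WINDOW:
        average = sum(recent) / WINDOW
        if abs(value - average) > THRESHOLD:
            print(f"maintenance alert: reading {value} vs recent average {average:.1f}")
    recent.append(value)
```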

Summary

AI has already driven substantial growth in telecoms and, with cloud and edge computing developments on the horizon improving speed, latency and bandwidth, the future is very exciting. As well as that, 2019 has seen the first deployments of 5G technology. It is thought that the bandwidth, latency and average speed of 5G will be the catalyst for connecting humans and machines on an unprecedented level. Industry analysts IHS Markit have proposed that 5G will enable $12.3 trillion of global economic output in 2035. The International Data Corporation say that the amount of data created, captured and replicated across the world could grow from 33 ZB in 2018 to 175 ZB by 2025. With all this additional computing power, the opportunities discussed in this article will only become more advanced and telecoms will continue to evolve at pace.

Using AI in Media

Introduction

Artificial Intelligence (AI) offers huge promise and opportunities for media companies. It has the capacity to impact everything from research to content curation and the customer experience. It is thought that AI will start influencing all parts of the media value chain, helping creators to become more creative and editors to be more productive. The objective is to take a lot of the effort out of finding relevant content and ensuring it is presented to the reader. This article looks at some of the key ways AI is infiltrating the media sector.

Marketing and Advertising

Machine learning, an application of AI that uses data to create automated and predictive processes, is accelerating the media sector. For example, algorithms can be trained to extract text, video segments, audio and images from any number of resources and make suggestions on how to improve marketing or advertising efficiency.

Alibaba’s Luban is a fantastic example of how AI can virtually reinvent content creation. The platform is capable of generating approximately 8,000 different banner designs in only one second. Imagine how long it would take a marketeer to do that volume of work! Of those designs, people are not able to tell the difference between generated and human work.

In a similar vein, IBM trained their Watson technology to design a movie trailer based on classic audio and visual moments that had been tagged and classified. It was able to create a full movie trailer within only 24 hours; this would normally take weeks to produce. Companies investing in this technology have far more time to focus on strategy and insight rather than purely creating their content.

Personalisation

Consumers are demanding personalised experiences. This goes for all industries, whether it is Amazon recommending products, Spotify giving them unique music or Netflix telling them the shows they need to watch. Even social media now displays very tailored content designed for each user and what they are likely to need and want to see. Predicting user behaviour is key to the future modelling of media.

As people become ever more connected through electronic devices, the need for fast delivery is ever greater. If a consumer can’t find what they want straight away, with so much competition on the market, they’ll simply go elsewhere. In some industries, businesses might only get four seconds to convince somebody to stay on their site.

Google mail (Gmail) now has predictive text. As somebody writes an email, it is able to accurately predict what they will want to write next and shows the next sentence without the user having to type it. Personalisation often means reducing user effort at the same time.

Search and Classification

Let’s face it, there is a lot of media available online. A few years back, when talking about search, the only real option was to go onto Google or Bing and type in some keywords related to what you were looking for. AI and machine learning techniques have changed all of that. In fact, it is thought that in 2020 the market will be dominated by voice and image searches.

Google have evolved their platform in readiness for voice and image searching. Rather than having to type a keyword, users can upload a picture, and image recognition technology will search for similar pictures. This uses complex tagging and identification of features. The impact on the media industry will be in the way they prepare content; tagging images well will be vital to make sure their content gets seen.

An AI startup called Clarifai have a computer vision API designed to accelerate the classification of content in movies. If a human were to categorise everything in a movie, it might take hours or even days. The AI platform can do this in real-time. Similar technology will be used by Netflix and Amazon Video to tag scenes and objects.

Experiences

Traditionally, paper and books were the main medium for sharing words and images. Over time, email became a primary channel, and then we moved into social media, blogs and vlogs. AI may well be heralding a new era of experiences for visualising content with the booming force of augmented reality (AR) and virtual reality (VR).

Whilst initially these technologies sounded like gaming fads or novelties, over the last few years they have started being put to incredible practical use. Machine learning algorithms can now build complex holographic scenes, all through a pair of goggles with a lens. Brand new markets will open up in the media sector.

As an example, fans can watch sports in a holographic view using VR headsets and get truly immersed in the game and atmosphere. This was massively publicised when Intel were able to launch such a service during the 2018 NFL Super Bowl. Viewing sport from the point of view of an athlete takes experiences to a new level.

Data

One of the reasons that AI in media has only been adopted by a minimal number of pioneers is the need for data. In order to be accurate and effective, algorithms require a massive volume of data. Media companies need to have some mechanism to gather data at scale. That can include consumer data, content data and operational data.

For example, consider how Netflix operates. To deliver recommendations effectively, they needed data on consumer behaviour whilst watching streams. It took them many years to get that right but, now they have, Netflix are far and away the leading streaming service. In fact, major shows are developed based on data, House of Cards being a prime example.

Summary

AI looks set to be at the forefront of creativity in the media industry over the next decade and beyond. From automating content to scraping media, personalised experiences, AR, VR and enhanced search potential, there is a lot for those working in media to be extremely excited about.

Adoption will be slow until the cost of investment reduces but it is important to start budgeting for the AI revolution.

Using AI in Healthcare

Introduction

Most of us are using IoT devices in our everyday lives, whether it be wearable fitness trackers or voice activated systems such as Alexa or Google Home. However, applications are now infiltrating several industries, making them more efficient and effective whilst enhancing their societal benefit.

One of the key industries is healthcare, so much so that it even has its own acronym, IoMT (Internet of Medical Things), to differentiate just how large the opportunity is. The potential of IoMT is huge as a world of managing previously unstructured data becomes easy through AI, machine learning, image processing and natural language processing. Doctors can tailor plans for specific patients based on all their historical data at the click of a button, avoiding oversight and potential complications. In healthcare, IoT could literally be lifesaving.

Applications of Artificial Intelligence

As we know, IoT technology is fuelled by data. It is important to keep re-iterating that the device is really just a box to collect and store that data via sensors; the AI applications then process it and make decisions. By 2020, it is thought that healthcare providers and organisations will spend an average of $54 million on artificial intelligence projects. There has even been some discussion as to whether human physicians could be replaced by machines and, whilst this isn’t really feasible for ethical reasons beyond anything else, AI will definitely become a highly skilled assistant in clinical decision making.

Training the AI

Before IoT devices can be used within healthcare, they need to be trained using existing data. These devices learn from experience to analyse the correlations between subjects, symptoms and decisions.

For example, in July 2018, researchers in Japan ran a successful experiment in training AI to detect stomach cancer. This was done on a relatively small scale, loading the device with 100 early-stage cancer images and 100 normal stomach tissue images so it could learn the patterns. Through this training data, the AI took just 0.004 seconds to detect the images with early stage symptoms with 80% accuracy, and those with normal symptoms with 95% accuracy. These early signs are thought to be incredibly difficult to detect and often misconstrued as inflammation by doctors, meaning this AI made quite an amazing breakthrough.

Just imagine if this device had 1,000 or 1,000,000 images to work with, and the potential for diagnosing serious illnesses. As the AI gains more experience through training, the results will only become more accurate and give doctors time to focus on the treatment rather than the diagnosis and mining through data or scanning images.
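
As a hedged sketch of what a training step can look like (and emphatically not the Japanese team’s actual pipeline), the code below fits a simple classifier on labelled image arrays and checks its accuracy on held-out examples. Random arrays stand in for real scans, and production systems use deep neural networks on far richer data.

```python
# Illustrative training sketch: random arrays stand in for real images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 100 "early-stage" and 100 "normal" images, flattened to feature vectors.
# The first group is shifted slightly so there is a pattern to learn.
abnormal = rng.normal(0.2, 1.0, size=(100, 64))
normal = rng.normal(0.0, 1.0, size=(100, 64))
X = np.vstack([abnormal, normal])
y = np.array([1] * 100 + [0] * 100)  # 1 = early-stage, 0 = normal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```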

Medical imaging

In just one year, a leading medical facility in Texas generated more than half a million medical images in its fight against cancer. With so many images to analyse, harnessing the power of IoT was a must for early diagnosis and presenting the correct treatments. The facility installed a smart CT scanner that uses something known as computer vision. The scanner sends data directly to the cloud, or a series of connected clouds, and uses neural networks and deep learning algorithms to process it in a split second.

The application is able to interpret the image from everything it has learnt in the past and identify the indicators of cancer that could have potentially gone unnoticed. This isn’t a slight against healthcare professionals, but there are some early-stage symptoms that are virtually impossible to spot, and the AI was able to pinpoint those. Doctors are able to provide patients with an on-the-spot diagnosis and treatment plan.

Smart image processing connected technologies like the CT scanner will also allow medical device manufacturers to innovate, integrating smart cloud platforms into the medical devices they bring to market and licensing cloud analytics capabilities to their customers as a premium service. Subscription-based cloud analytics services for medical diagnosis have the potential to drastically improve workflows by allowing for faster, more accurate diagnosis. There are obstacles around data privacy and ownership of decisions before image processing becomes mainstream, but it will be a part of the future given the clear advantages in accuracy of diagnosis and efficiency in repetitive and laborious tasks.

Managing beds

WiseWard use IBM Watson technology to predict when patients are likely to be discharged and their bed will be free. As providing quality care is heavily dependent on adequate bed space and the ability to move patients around, technology like this is revolutionary for care and for ensuring that patients get the experience they need. The AI platform can predict availability as much as five to seven days ahead using existing datasets, including variables like gender, ward, surgery type and patient age.
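
A stripped-down version of the underlying idea is sketched below: estimate a new patient’s expected length of stay from the average stay of similar historical patients. The records and groupings are invented, and platforms like WiseWard use far richer statistical models.

```python
# Toy length-of-stay estimate from similar historical patients.
from statistics import mean

# (ward, surgery type, actual length of stay in days)
records = [
    ("orthopaedic", "hip replacement", 5),
    ("orthopaedic", "hip replacement", 6),
    ("orthopaedic", "knee replacement", 4),
    ("general", "appendectomy", 2),
    ("general", "appendectomy", 3),
]

def expected_stay(ward, surgery):
    similar = [days for w, s, days in records if w == ward and s == surgery]
    return mean(similar) if similar else None

print(expected_stay("orthopaedic", "hip replacement"))  # -> 5.5 days
```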

Repetitive jobs

Administrative tasks are thought to cost the healthcare industry $18 billion. New technology, including voice-to-text transcription, could help quickly order tests or prescribe medications without manually writing charts and notes. One of the most beneficial uses of AI is in interpreting records and papers using natural language processing (another application of AI). It can take doctors hours, weeks or even months to research a disease or treatment plan, but with AI this can be completed in an instant. Doctors may eventually have the capability of providing an instant test result rather than keeping their patients waiting.

Virtual doctors and nursing assistants

AI applications can now offer 24/7 advice and care, reducing waiting room times whilst giving patients what they need whenever they need it. It is thought that virtual assistants in healthcare could ultimately save around $20 billion annually. They can answer questions, monitor patients and provide quick answers as well as provide regular communications and proactive notifications. If they are able to do these jobs, it reduces the time spent by doctors on repetitive tasks and allows them to focus on care.

Medication and inventory management

Using AI, medication and stocks can be tracked across departments and even locations with ease. Using sensors, data can be collected on where different equipment is and how long it is due to be there. Each time medication is taken from stock, logs can be updated to ensure sufficient amounts are always available. Whilst you’d expect medical facilities to be doing this already, most have to do so manually, which is prone to error.
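
The sketch below illustrates the idea with an invented stock ledger fed by withdrawal events: every withdrawal updates the log automatically, and a reorder alert fires when stock falls below a minimum level.

```python
# Toy inventory ledger; item names and thresholds are invented.

stock = {"saline 500ml": 40, "bandage pack": 12}
REORDER_LEVEL = {"saline 500ml": 20, "bandage pack": 10}

def record_withdrawal(item, quantity):
    stock[item] -= quantity
    print(f"log: {quantity} x {item} taken, {stock[item]} remaining")
    if stock[item] <= REORDER_LEVEL[item]:
        print(f"alert: reorder {item}")

record_withdrawal("bandage pack", 3)  # drops to 9 -> triggers an alert
record_withdrawal("saline 500ml", 5)  # drops to 35 -> no alert
```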

Remote surgery

More powerful cloud computing and the invention of 5G technology should enable remote treatments like surgery. 5G will reduce latency to a point where data is transferred almost in real-time. With that, it is much faster than 4G and has a greater bandwidth. All of this combined will create a landscape where remote operations are very plausible. Robots can even analyse data from pre-op medical records to guide the surgeon’s instrument. Coupled with the remote location option, experts in their field can provide assistance without being physically present.

The future of AI in healthcare

AI has a bright future in the healthcare industry and this article only touches the surface on the possibilities. As new technology such as the Connected Cloud, Edge Computing and 5G become commonplace, the capabilities of innovation will be pushed further to save time, lower costs and ultimately improve accuracy.

AI in Agriculture

Introduction

Artificial Intelligence (AI) is a hot topic in various industries, from financial services to healthcare, law, education and retail. However, one that is sometimes left out of the discussion and floats under the radar is agriculture. Agriculture-based firms are seeking to use AI technology to boost productivity and create a competitive advantage in the sector. Innovation is helping them to deliver new products and services and cement their place within a populated market. According to UN projections, food production is going to have to increase by as much as 70% given climate change, population growth and food security concerns. With that in mind, agriculture firms need to seek innovative solutions if they are to fulfil the demand. This article discusses some of the ways in which AI is supporting the agriculture industry.

Driverless vehicles

Autonomous vehicles have been talked about for a long time and there have been various public trials from companies like Uber and Tesla. Farming is also expected to benefit from driverless tractors. A driverless tractor will operate without human intervention, working continuously and efficiently based on a defined set of rules. This means farmers don’t have to take up their time carrying out monotonous tasks, and safety concerns around using equipment are negated. A supervisor will still need to monitor progress at this stage until the technology advances again, and some trials are conducted via remote control.

Image Recognition

Drone technology has started to be used for industrial purposes. Most of us will have heard how the technology is being piloted for deliveries, e.g. by Amazon, but there is also an opportunity within agriculture. Drones can be used to take images of crops and monitor the production cycle at every stage. These images are tagged and converted to data, and that data can be used for making recommendations and decisions and exposing potential issues. For example, it might pick up on pests or irrigation problems that a farmer would otherwise be unaware of. Farmers can spend less time surveying their crops and more time thinking about production. On large farms, drones can scout far bigger areas of land than humans could, in less time and more accurately. Drone technology should ultimately pay for itself quickly and be worth the initial investment.
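
As a crude illustration of how tagged drone imagery becomes actionable data, the Python sketch below scores image tiles by how "green" they are and flags possible problem areas. The tile size, threshold and green-ratio heuristic are all invented for illustration; real platforms use trained computer-vision models.

```python
# Crude crop-health check on a drone image tile (illustrative only).
# Real systems use trained vision models; this simply measures how
# green each tile is and flags tiles that may need attention.
import numpy as np

def flag_unhealthy_tiles(image, tile=64, min_green_ratio=0.4):
    """image: HxWx3 RGB array from a drone camera."""
    h, w, _ = image.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image[y:y+tile, x:x+tile].astype(float)
            green = patch[..., 1].sum() / (patch.sum() + 1e-9)
            if green < min_green_ratio:
                print(f"Tile at ({x},{y}) looks sparse or stressed (green={green:.2f})")

# Random pixels stand in for a real aerial photograph
flag_unhealthy_tiles(np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8))
```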

Big Data

Data is the essence of all things to do with agriculture. AI-based sensor equipment can be deployed to monitor temperature, soil conditions, rainfall or pest invasions over time. The data collected is sent back to a processor and analysed. Cloud computing platforms have the ability to store and provide insight on incredibly large datasets. Using all of this information, farmers can be presented with the optimal time to sow their seeds so as to maximise production. This can improve return on investment and reduce the likelihood of wasted yield. Sensors can also detect weeds and work out which herbicides need to be applied, reducing the toxins that find their way into food. Forecasting is integral to farming, especially in smaller countries where farmers don’t have as much knowledge or access to the right technology. The irony is that these smaller farms are responsible for as much as 70% of the world’s crops. If we can start predicting seasonality and weather patterns, smaller farms can benefit from improved crops and continue to thrive.
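
To show the kind of rule such analysis might surface, here is a minimal Python sketch that scans sensor readings for days that fall inside an assumed optimal sowing window. The thresholds and data are invented; a real platform would learn them from years of field history.

```python
# Toy planting-window finder (illustrative only).
# Field readings and the "optimal" thresholds are invented.
daily_readings = [
    {"day": "2019-04-01", "soil_temp_c": 7,  "soil_moisture": 0.35},
    {"day": "2019-04-02", "soil_temp_c": 9,  "soil_moisture": 0.33},
    {"day": "2019-04-03", "soil_temp_c": 11, "soil_moisture": 0.30},
    {"day": "2019-04-04", "soil_temp_c": 12, "soil_moisture": 0.22},
]

def good_sowing_day(r):
    # Assumed crop needs: soil above 10C and moderate moisture
    return r["soil_temp_c"] >= 10 and 0.25 <= r["soil_moisture"] <= 0.40

for r in daily_readings:
    if good_sowing_day(r):
        print(f"{r['day']}: conditions suitable for sowing")
```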

Precision Farming

Precision farming is the umbrella term used for methods involving AI, data and new technology. It is best explained through the 4 R’s.

  • Right Source – matches the fertiliser type to crop needs
  • Right Rate – matches the amount of fertiliser to crop needs
  • Right Time – makes nutrients available when crops need them
  • Right Place – keeps nutrients where crops can use them

The method relies on having the right technology in place to optimise agricultural health and productivity, with a goal of sustaining and protecting the environment. It uses everything we have already talked about, including drones, sensors and vast volumes of data. Precision farming used to be something only larger businesses could deploy, but mobile apps, cloud computing, edge computing and advanced infrastructures are starting to make it commonplace.

Robotics

Let’s face it, it is rare to find somebody who lists farming as their dream job. Traditionally, farms were family-owned businesses and plenty of people lived on or near the land. Today, that is no longer the case and there is a workforce shortage. AI agriculture bots are being deployed to augment the human workforce. Bots are being used to harvest crops and can do so at a faster pace, and for longer, than humans have ever been able to. They can also identify weeds more accurately, reducing potential costs for the farm. Robots are being utilised within picking and packing areas that would previously have required manual labour. Again, they can do this faster than humans and work 24/7, meaning a far greater productivity rate. Hours of manual labour are eliminated.

Case Study – NatureFresh Farms

Across almost 200 acres of greenhouses in Ohio and Ontario, NatureFresh Farms is a great example of using almost every facet of AI at its disposal. Robotic cameras collect images of plants and feed the data through to algorithms that predict when vegetables will be fully ripe. Sensors measure variables like crop temperature as well as the amount of water and fertiliser each plant needs. Workers can simply adjust the settings using a smartphone app based on what the data recommends. This has reduced an hour of work to only 5 minutes. The same data is used to create forecasts for harvests and yield, with scenarios adapted to potential climate changes. All food is tagged so that the end recipient knows exactly where it came from and the farm can respond quickly if there is any problem with the crop. This uses technology similar to Blockchain, which is well suited to tracing contaminated food and is already in operation within large supermarket chains.

Summary

Without a doubt, AI is helping farms run more efficiently. Technology like bots and drones can revolutionise efficiency and productivity whilst ensuring more accurate decisions are made to optimise yields. With demand for more food, investment in agricultural AI technology is important for the future, and whilst change is tough, those in the sector need to start moving quickly.

AI in Manufacturing

Introduction

Connected equipment built around the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) is generating more data than we’ve ever had before. The value of data is undeniable, and it is becoming an essential part of manufacturing operations for those wishing to remain competitive. AI platforms that capitalise on this data are slowly being adopted within the industry. Whilst such systems can be costly, factories are realising the major benefits to their processes, productivity and efficiency if they begin to invest. Cloud computing and access to real-time analytics are bringing significant advantages to manufacturing, accelerated by innovative technology such as Blockchain and 5G. This article looks at the manufacturing industry and where the opportunities lie for those in the sector.

Industrial Internet of Things

When we talk about the Internet of Things (IoT), it refers to any device or service that is connected to the Internet. This could be our smartphone, an Amazon Echo or even streaming services like Netflix and Spotify. Whilst IIoT is also about connectivity, it relates to the large number of industrial devices that are filled with sensors and connected via wireless networks to gather and share data with each other.

Preventative vs Predictive Maintenance

Traditionally, machine optimisation and repairs within an industrial environment would be conducted on a schedule. In performing regular maintenance, the chance of the equipment failing is minimised. It’s the same reason you might take a car for an annual check-up. It is based on the assumption that a machine or its components will degrade over time. Companies might use some data, like the average time between previous failures, but it is quite limited. The problem is that carrying out maintenance just because it is scheduled may not be necessary and costs a business money.

A predictive maintenance strategy is determined by the condition of equipment rather than assumptions about its degradation. It tries to predict failure before it even happens. Predictive maintenance systems use sensors to monitor industrial equipment and return information about productivity levels, consumption or status. The data can be used to decide whether maintenance is required, without necessarily completing such tasks on a preventative schedule.

Predictive maintenance is founded on an application of artificial intelligence known as machine learning. This takes large sets of historical data, often referred to as training data, to run different scenarios and predict what is likely to go wrong and when. As the algorithms learn, they will recognise potential problems without the need for any human intervention. For example, if a temperature of 50 degrees always causes a machine to break, it could automatically be switched off without the need for human analysis.
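
To make the temperature example concrete, the sketch below shows roughly how such a rule could be learned from historical sensor data rather than hard-coded, using the popular scikit-learn library. The readings, the 30-day failure label and the 80% shutdown threshold are all invented for illustration.

```python
# Minimal predictive maintenance sketch (illustrative only).
# Historical readings and failure labels are invented; a real system
# would train on years of machine telemetry.
from sklearn.ensemble import RandomForestClassifier

# Each row: [temperature_c, vibration_mm_s, runtime_hours]
history = [
    [35, 1.2, 100], [38, 1.5, 450], [50, 3.1, 900],
    [52, 3.4, 950], [36, 1.1, 200], [49, 2.9, 880],
]
failed_within_30_days = [0, 0, 1, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(history, failed_within_30_days)

def check_machine(temperature_c, vibration_mm_s, runtime_hours):
    """Switch the machine off if predicted failure risk is too high."""
    risk = model.predict_proba([[temperature_c, vibration_mm_s, runtime_hours]])[0][1]
    if risk > 0.8:
        print(f"Failure risk {risk:.0%}: switching machine off for inspection")
    else:
        print(f"Failure risk {risk:.0%}: continue running")

check_machine(51, 3.2, 910)   # high readings -> likely shutdown
```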

Using real-time data

According to Fero Labs, typical manufacturing companies discard 98% of the data they can collect because they simply can’t integrate it into their operations.
Hitachi is one company that has been trying to tap into its unused data, tracking many more variables with AI sensors and unlocking data it was not aware of in the past. Real-time monitoring of these variables allows immediate intervention before an issue arises. Let’s say you have a sensor in place monitoring the vibrations of a machine. Increased vibration can be a sign that components are failing, but a single data point in isolation, with an alert when the numbers hit a specified level, won’t be enough to prevent problems. Other common use cases are infrared thermography, motor condition analysis, precision balancing and laser alignment. Realistically, anything that generates data has its own case for a predictive maintenance strategy.
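
Building on the vibration example, a minimal monitoring loop might compare each new reading against the recent trend rather than a single fixed alert number. The window size and three-sigma rule below are illustrative assumptions, not anything Hitachi has published.

```python
# Toy real-time vibration monitor (illustrative only).
# Flags a reading that is far outside the recent rolling average,
# which is more robust than one fixed alert threshold.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=50)   # last 50 vibration readings

def on_vibration_reading(value_mm_s):
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value_mm_s - mu) > 3 * sigma:
            print(f"ALERT: vibration {value_mm_s} deviates from recent trend")
    window.append(value_mm_s)

for reading in [1.1, 1.2, 1.0, 1.3, 1.1, 1.2, 1.1, 1.0, 1.2, 1.1, 4.8]:
    on_vibration_reading(reading)
```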

Generative Design

Companies like Autodesk are using AI for what is known as generative design. Creating a new product or part can take companies weeks, months or even years. Using AI, experiments and recommendations can in theory be generated in seconds, greatly speeding up the process. In generative design, a designer or engineer starts by inputting all of their variables, like performance and spatial requirements, materials, manufacturing methods and cost constraints. Software like Autodesk’s will provide all the different permutations for a solution, quickly showing potential alternatives. Beyond just making suggestions, the software learns from each iteration and optimises the experiment as it tries to find the perfect solution. A computer can generate thousands of designs in the time a human creates one, some of them things that weren’t even thought to be possible. In resolving the constraints this way, designers and engineers can focus their time on innovating and developing better process strategies.
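
A heavily simplified picture of the idea: the loop below randomly proposes designs, discards any that fail a strength constraint and keeps the lightest survivor. The "physics" here is a stand-in formula, not anything Autodesk actually uses.

```python
# Crude generative design loop (illustrative only).
import random

def propose():
    return {"thickness_mm": random.uniform(2, 20),
            "width_mm": random.uniform(10, 100)}

def strength(d):   # stand-in for a real physics simulation
    return d["thickness_mm"] ** 2 * d["width_mm"] * 0.05

def weight(d):     # stand-in cost function: lighter is better
    return d["thickness_mm"] * d["width_mm"] * 0.01

best = None
for _ in range(10_000):            # thousands of candidates in seconds
    d = propose()
    if strength(d) >= 50:          # must hold the required load
        if best is None or weight(d) < weight(best):
            best = d

print("Lightest design meeting the constraint:", best)
```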

Robotics

A report from the International Federation of Robotics predicted that by the end of 2018 there would be more than 1.3 million industrial robots at work in factories all over the world. It is unknown whether this came to fruition, but deployment of such technology has certainly accelerated in the last 12 to 24 months. The objective, in theory, is that robots carry out the repetitive jobs done by humans, allowing workers to be trained for more complex roles in design or programming. The key to success in robotics is a collaborative environment. We are not at a point where robots can operate completely without humans, and there is a strong case that we would not want that to happen. However, robots will become more cognitive over time and start making autonomous decisions against real-time data. The speed and efficiency benefits, as well as improved health and safety as robots complete dangerous jobs, will ensure robotics are a major part of the future factory.

Summary

Investors are pouring billions of dollars into IIoT startups hoping to develop more revolutionary sensor solutions that utilise new technology such as edge equipment and Blockchain (both of these emerging technologies would need their own write-ups). What we must remember is that the technology is still in its infancy. It will take time for algorithms to gather enough data to self-learn and become truly efficient. There is also a huge reliance on networks of connected devices, creating security risks which companies will have to mitigate. The challenges are not insurmountable, and IIoT will be part of the factory of the future for a faster and more efficient manufacturing industry.

AI in Retail

Introduction

Retail is going through a period of change, with much of the innovation coming from new technology and artificial intelligence (AI). Customers are becoming more demanding in an exponentially growing digital world where billions of connected devices collate vast amounts of data, leaving retail ripe for disruption. AI and its applications, like machine learning and natural language processing, are able to answer many of the questions facing retail businesses. At a time when there is demand for personalised experiences, fast customer service, intelligent analytics and smarter shops, AI platforms are primed to drive the sector into the future. Although many favour e-commerce over traditional retail stores, understanding the usefulness of AI and data could be critical if businesses don’t want to be swamped by the likes of Amazon. In fact, a report by Euromonitor International predicted that over 80% of goods are still purchased in brick and mortar stores, and AI is the ideal way to bridge the experience gap between virtual and physical stores if retailers want to keep it that way. This article looks at the key use cases in retail and how AI has helped businesses keep up with their customers.

In Store Experience

Retailers are making use of AI for shoppers whilst they are in the store. For example, Macy’s On Call app helps shoppers navigate their way around the store. Customers input questions into the app, such as asking where they can find a specific product, department or brand, and the app replies with a customised, relevant response. It is the power of digital technology in a physical setting. The idea is based on research showing that consumers are more likely to use their phone in store to find information than approach a member of floor staff. Using machine learning, the app becomes more intelligent through every interaction. If it doesn’t know the answer to a question, it remembers it so the answer can be learned for the next time a customer asks. In time, the hope is that the machine can answer any question thrown at it in the store.

Luxury fashion e-tailer Farfetch has come up with a solution to change the in-store experience for fashion retailers with its Store of the Future platform. The platform, in a similar way to the Macy’s solution, looks to bring the digital and physical worlds together for an optimised customer experience. Businesses can collect data in-store, such as in the fitting room, where smart mirror technology allows customers to choose different sizes or colours of the goods they are trying on. Information and shopping habits are stored so brands can learn about the specific styling requirements of their customers.
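
As a toy version of a Macy’s-style in-store assistant, the sketch below matches customer questions against a small FAQ and logs anything it cannot answer so a better response can be learned later. The questions, answers and matching rule are all invented for illustration.

```python
# Minimal in-store Q&A bot (illustrative only). Production assistants
# use far richer natural language models than fuzzy string matching.
import difflib

faq = {
    "where are mens shoes": "Men's shoes are on level 2, near the east escalator.",
    "what time do you close": "We close at 9pm Monday to Saturday.",
}
unanswered = []   # logged so staff can teach the bot new answers

def answer(question):
    q = question.lower().strip("?! ")
    match = difflib.get_close_matches(q, faq.keys(), n=1, cutoff=0.6)
    if match:
        return faq[match[0]]
    unanswered.append(question)   # "remember" it for next time
    return "I don't know yet - let me find a colleague for you."

print(answer("Where are men's shoes?"))
```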

Image Recognition

Customers like engaging with different types of content, and the popularity of social media platforms such as Instagram and YouTube shows how images and video are the preferred choice right now. The department store Neiman Marcus has taken advantage of the trend through its app, ‘Snap.Find.Shop’. The AI-based app allows users to take photos of items they see whilst they are out shopping. It will then search the Neiman Marcus inventory to show all similar items. The more photos that get added to the database, the smarter the app becomes. Think of the solution as being similar to how Amazon are able to recommend products to customers based on what they have previously purchased. Recommendation engines are a great way to improve revenue; Amazon have reported as much as 40% of their sales coming via recommended products that the consumer may not have bought otherwise.
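
A minimal sketch of the recommendation idea: from an invented customer-by-product purchase matrix, count which products are most often bought alongside a given item. Real engines such as Amazon’s are vastly more sophisticated, but the principle is similar.

```python
# Tiny "customers also bought" engine (illustrative only).
# Rows are customers, columns are products; all data is invented.
import numpy as np

products = ["handbag", "scarf", "boots", "belt"]
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
])

def also_bought(product):
    i = products.index(product)
    col = purchases[:, i].astype(float)
    scores = purchases.T @ col      # co-purchase counts per product
    scores[i] = -1                  # don't recommend the item itself
    return products[int(scores.argmax())]

print(also_bought("handbag"))       # -> "scarf" in this toy data
```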

Robotics

The hardware store Lowe’s successfully deployed in-store robotics back in 2016. The LoweBot is able to show customers where to find products and answer simple questions in multiple languages. The objective was to free up employees so they had more time to answer complex queries from customers and show their expertise. As well as this, the robots monitor stock levels so the inventory can be kept up to date automatically, without relying on staff to carry out a repetitive and monotonous task. Walmart have also used robotics to scan store shelves. The bots roam the aisles and check for missing items or areas that are low on stock. They can even check whether price tags need changing by using AI sensors. Like the Lowe’s solution, employees are given more time to spend with customers and improve the experience rather than worrying about stocking shelves. Fashion brand Zara are also using robots in their warehouses to quickly fetch items when a customer comes into store for a pick-up.

Amazon Go

We all know Amazon primarily as an e-commerce retailer, but they have ventured into traditional retail (of sorts) with their Amazon Go stores. Sensors and cameras in the store allow customers to pick up what they need, put it in a bag and walk out. There are no cashiers or cash; instead, their Amazon account is charged as they leave. AI creates a seamless shopping experience without the need for lots of staff. The stores have not become mainstream yet, as there are still a few glitches, e.g. in a busy store the system doesn’t always interpret properly what people pick up. In time, however, it is expected that similar retail outlets will pop up.

Mind-Reading

Clothing store Uniqlo has been harnessing the power of in-store emotion. When customers look at images of products on screen, their reaction is tracked via neuro-sensors as a gauge of how much they like the style or colour. Based on that reaction, the customer is recommended the products most likely to suit them. This is a great example of AI creating an experience with virtually zero input required from the customer.

Receipts

Popular clothing brand H&M has had the novel idea of using in-store receipts to work out when shelves need re-stocking. For example, instead of supplying every store with the same items, if the data shows that one store outperforms another with a certain line, algorithms adjust the stock levels accordingly. There is a clear cost saving in monitoring stock levels this way, as well as ensuring customers in-store only see the products they are going to be interested in.
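
The underlying arithmetic can be surprisingly simple. The sketch below splits an incoming delivery across stores in proportion to invented weekly sales figures; H&M’s actual algorithms are, of course, more involved.

```python
# Toy stock allocation from receipt data (illustrative only).
# Send more of a line to the stores that actually sell it.
weekly_sales = {"Store A": 120, "Store B": 45, "Store C": 15}
incoming_stock = 600

total = sum(weekly_sales.values())
allocation = {store: round(incoming_stock * sold / total)
              for store, sold in weekly_sales.items()}

print(allocation)   # {'Store A': 400, 'Store B': 150, 'Store C': 50}
```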

Removing Effort

Buying make-up has traditionally been quite a stressful experience in stores. If a customer doesn’t know what shade or brand they want, it can put them off shopping. Brands like Sephora are making the experience better with their Color IQ platform. In the store, the AI-based system scans the customer’s face and makes recommendations for the concealer or foundation shades that suit them best. It is a huge help to customers who would normally have to find the right product via trial and error.

Summary

To keep customers coming into store, retail outlets are using AI to enhance the customer experience, improve staff productivity and utilisation and reduce customer effort whilst shopping. With e-commerce causing major disruptions to traditional brick and mortar stores, creating an environment that attracts and excites customers is becoming fundamental rather than optional. In the future, retailers will continue to deploy this cutting-edge technology to keep up with their digital counterparts. Personalised and engaging environments will keep retail alive.

AI in Transportation

Introduction

Artificial Intelligence (AI) in transportation has always seemed like sci-fi as it’s been part of the entertainment scene for so long. Whether you remember Knight Rider, the DeLorean, a flying car in Grease or happen to be a fan of Herbie, thinking up ways of innovating transportation has clearly been at the forefront of creative minds. However, today, it is not so futuristic and thanks to AI, many of the fantasies from the movies are coming to life. Breakthroughs in technology are being announced virtually every month and, if you listen to Elon Musk (Tesla), it won’t be long until AI has taken over the transportation industry. This article looks at some of the ways in which AI is transforming the sector.

Autonomous/Driverless Vehicles

When we talk about autonomous vehicles, there is more to it than the cars we’ve been promised for a while now. Cars have been trialled but are still in their early stages and don’t look set to become mainstream in the near future. However, there are plenty of other applications that are making best use of the technology. Small-scale autonomous buses have been deployed in some parts of the world, with China, Singapore and Finland the countries where the technology has been used most. The UK started trialling autonomous buses in March 2019 and there are plans to launch more pilots. These solutions still have human drivers behind the wheel in case of emergency; although the AI is advancing, it is not able to think for itself in a way that makes it safe to leave to its own devices. With that in mind, it is likely that autonomous buses will become mainstream technology way before driverless cars, purely because of the human resource requirement. In Texas, the same technology is being used for grocery deliveries, whereby a human operator is always at hand but the system is very much AI driven.

With several sensors and detectors, AI systems can provide alerts about dangers like cyclists or pedestrians. The result would be safer roads, as accidents are most frequent when human drivers don’t spot these hazards. At this level of automation there are lots of potential applications: garbage trucks, snow ploughs and other large vehicles could be made more efficient using AI with a human overseeing what is going on. Driverless trains have been tested with success on the London Underground. One of the benefits here is that it frees up critical space by not requiring a driver’s cab, and anyone who travels on the Underground regularly will know how vital space is. One of the smartest solutions may well be remote-controlled cargo ships. These are thought to have been in the making for a long time, and if deployed, the savings on crew costs would be a massive win for the industry. Many driverless vehicles are also powered by electricity, making them far more environmentally friendly.

Traffic Management

Complex data algorithms can be used to manage traffic and create redirection routes if required. Traffic sensors are already being used to predict potential accidents based on current conditions and to help make recommendations on speed limits or routes. In 2012, the Surtrac AI system was installed in 9 traffic signals within the Pittsburgh area. By using data, it was able to reduce travel times by more than 25% on average and wait times by as much as 40%. The solution was so successful that it was added to other traffic signals around the city. In Bengaluru, India, which regularly faces long traffic jams and where the average speed on some roads at peak hours is just 4km/h (2.5mph), Siemens Mobility has built a prototype monitoring system that uses AI through traffic cameras. The cameras automatically detect vehicles and send this information back to a central control centre, where algorithms estimate the density of traffic on the road. The system then alters the traffic lights based on real-time congestion. Traffic management requires a lot of data, and with so many cameras on the road and enormous computing power, we can now make it successful. Just imagine a world without traffic jams!
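
As a toy version of adaptive signal control, the sketch below splits one signal cycle between approaches in proportion to estimated queue lengths. The queue numbers, cycle length and minimum green time are invented; systems like Surtrac use far richer real-time models.

```python
# Toy adaptive traffic signal (illustrative only). Queue estimates
# would come from camera analytics; here they are invented numbers.
def green_times(queues, cycle_seconds=120, minimum=10):
    """Split one cycle in rough proportion to each approach's queue
    (rounded, with a guaranteed minimum per approach)."""
    total = sum(queues.values())
    return {road: max(minimum, round(cycle_seconds * q / total))
            for road, q in queues.items()}

# Estimated vehicles waiting on each approach
print(green_times({"north": 40, "south": 35, "east": 5, "west": 20}))
# -> north and south get most of the cycle; east still gets a minimum
```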

Drones

Amazon are starting to take to the air for deliveries. They are testing drone technology to allow faster and safer delivery of goods to customers. It is an expensive investment but one that could really push them way out ahead of the competition. Uber have trialled a drone taxi in Dubai, and whilst the future of that technology isn’t known yet, it is amazing to think that it’s available.

Digital Number Plates

Over time, we can expect cars to start losing physical number plates as they become digitalised. There are several benefits to such a system, such as instantly notifying the authorities, with your GPS location, if something is wrong. The amount of vehicle theft should naturally reduce, as it will be impossible to get away from a system that knows exactly where your car is using AI.

Air Travel

Pilotless planes have been talked about already, but the challenge will be gaining public trust for such a huge innovation. Whilst there arguably isn’t much a pilot needs to do during a flight (without disrespecting what they do), having nobody there might be a step too far just now. However, the way we travel is likely to change, with proposals for digital passports based on face scanners at airports and for tracking baggage via GPS to ensure it never gets lost.

Summary

AI brings a number of benefits to the transportation industry. With the ability to improve efficiency, provide better customer experiences and reduce accidents, to name a few, AI will be a driving force in the sector over the next decade. This article only considers some of the main developments, with robot police cars, driver-assist programs, AI taxi hailing and smart highways all on the industry radar. According to the US Transportation Research Board, emerging applications of AI in transportation planning include travel behaviour models, city infrastructure design and planning, and demand modelling for public and cargo transport. On-demand services like Uber are also likely to start moving to entirely autonomous services over time, provided there are more successful trials. Ethical constraints may mean it takes a while to reach full adoption in some cases, but what originally looked like sci-fi is now a distinct possibility.

AI in Finance

Introduction

Whilst Artificial Intelligence (AI) and its applications such as machine learning are disrupting several industries, one that has benefitted the most is financial services. We are seeing such huge strides in the sector because it has been incredibly traditional and, in some cases, quite old-fashioned. For example, we have banked in the same way for a long time, but with people so digitally connected, this is not necessarily the way to carry on doing things. This article looks at some of the major innovations within financial services over the last few years.

Credit Decisions

Using AI, credit scoring can be more sophisticated than traditional methods. The aim is to use data that can distinguish between applicants who are high risk and those who simply don’t have a detailed credit history. Some establishments have even experimented with using social data to gauge creditworthiness; for example, if somebody has a lot of secure connections with family and friends they can trust, it has been shown to be a strong signal that they are a good risk. Ford Motor Company has shown that machine learning is able to more accurately predict the risk of thin-file applicants. Beyond this, AI is not biased in the way a human might be. Decisions on credit are based purely on the data processed, without human judgment of any kind, and a more rational decision ensures enhanced risk management.
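
A minimal sketch of the idea, assuming invented applicant features and repayment labels: train a simple model and score a new applicant. A real lender would need far more data, plus fairness and regulatory validation, before anything like this could be used.

```python
# Minimal credit-decision model (illustrative only).
from sklearn.linear_model import LogisticRegression

# Each row: [monthly_income, existing_debt, years_at_address]
applicants = [[3000, 200, 5], [1200, 900, 1], [4500, 100, 8],
              [1500, 800, 2], [2800, 300, 4], [1000, 950, 1]]
repaid = [1, 0, 1, 0, 1, 0]   # invented outcomes

model = LogisticRegression(max_iter=1000).fit(applicants, repaid)

new_applicant = [[2600, 250, 3]]
print("Probability of repayment:", model.predict_proba(new_applicant)[0][1])
```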

Risk

AI can analyse vast amounts of data almost in real-time. The advent of cloud computing and the recent deployment of 5G technology will only extend the availability of such services. In analysing a lot of data quickly, multiple sources can be used, whether structured or unstructured, to help make decisions, for example predicting the future risk of an applicant. US leasing company Crest Financial has successfully deployed real-time machine learning to reduce the lag in making risk-based decisions.

Fraud Prevention

Fraud is, and always will be, a big problem within financial services, and AI has been very influential in combatting this type of crime. Whenever new technology is integrated, as much as it is great for the business, it gives those looking to cause malicious damage a new channel to explore. Think “Catch Me If You Can” to get an idea of what can happen in the extremes. Using machine learning, algorithms can track user behaviour and spot patterns that are irregular compared to the norm. Growing e-commerce transaction volumes have made this more difficult than ever, and AI is becoming fundamental in combatting cybercrime. In another big area of criminal activity, money laundering, banks have reported that AI can reduce investigation times by as much as 20%. As consumers become more connected, there is as much opportunity to reduce digital crime as there is to commit it.
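
One common technique here is anomaly detection. The sketch below, with invented transaction features, trains scikit-learn’s IsolationForest on normal card activity and flags a transaction that deviates sharply from it.

```python
# Toy fraud spotter (illustrative only). It learns what "normal"
# card activity looks like and flags transactions that deviate.
from sklearn.ensemble import IsolationForest

# Each row: [amount, hour_of_day, distance_from_home_km]
normal_activity = [[25, 12, 2], [40, 18, 5], [12, 9, 1],
                   [60, 20, 8], [30, 13, 3], [45, 19, 6]]

detector = IsolationForest(random_state=0).fit(normal_activity)

suspicious = [[2500, 3, 4000]]        # huge amount, 3am, far from home
print(detector.predict(suspicious))   # -1 means "anomalous"
```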

Trading

Data-driven investments and automatic trading are becoming big business and reinventing the industry. There are several benefits to using AI in the stock markets over human advisors, albeit both still have a part to play. With the capacity to process and analyse large volumes of data, trading algorithms are perfectly placed to help investors make the right decisions. Real-time data processing means fast decisions and fast trading: a win/win for everybody involved. AI makes recommendations purely based on the information it is provided with, so there is no bias in its decisions. Humans can easily be influenced by their own “gut feel”, but this is not the case for machines, which are primed to make unemotional decisions. Huge companies like Bloomberg now use AI for their forecasting and market predictions, identifying patterns quickly and accurately for traders.
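
To illustrate the unemotional, rule-driven nature of algorithmic trading, here is a deliberately simple moving-average crossover signal. It is a textbook toy, not how Bloomberg or any real trading desk operates.

```python
# Toy rule-based trading signal (illustrative only).
def signal(prices, short=3, long=5):
    if len(prices) < long:
        return "hold"
    short_avg = sum(prices[-short:]) / short
    long_avg = sum(prices[-long:]) / long
    if short_avg > long_avg:
        return "buy"    # recent momentum is upward
    if short_avg < long_avg:
        return "sell"   # recent momentum is downward
    return "hold"

print(signal([100, 101, 103, 104, 107, 110]))   # -> "buy"
```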

Banking

Whether it be checking your balance, scheduling bills or making payments, banking is becoming heavily driven by technology and AI. Most people have a banking app, and the number of occasions where somebody needs to call or visit a physical bank has massively decreased. I’m not even sure a Millennial would know what to do with a cheque! AI applications within mobile apps have the ability to create financial goals and automate customer savings. They do this by tracking income and expenditure to create fully optimised plans for their customers. Banking has been crying out to be more streamlined for years, and with the promise of Open Banking slowly creeping into society, the sector is set for a lot of redevelopment in the coming years.

Automating Processes

AI has reduced resource costs within financial services. Think about applying for a loan: traditionally, you’d have to fill in various forms, sign documents, send photographic ID and maybe even pay your bank a visit. Machine learning algorithms can take this structured and unstructured data, process it in seconds and make a decision. There doesn’t need to be a human at the other end reviewing all the documentation; instead, staff can focus on customer care. Financial services involve a lot of repetitive and mundane tasks, and Ernst & Young have reported up to 70% cost reductions from automating them, mostly by removing the human involvement and deploying those staff elsewhere. JP Morgan Chase have started to successfully deploy Robotic Process Automation (RPA) as a way to better capture documents and automate cash management tasks.

Summary

Financial services are continually being reshaped by AI. New technology like blockchain and the adoption of cryptocurrency over the next decade have the potential to drastically change the landscape again. Whilst some are slow to adopt AI applications because of the cost of investment, those who do deploy new solutions are starting to reap the benefits. Data protection and privacy remain among the major obstacles when attempting to deploy AI, and firms within the sector will have to battle to overcome them; much of that will come from consumer trust and understanding over time. The AI hype is real, and financial services is among the most highly investable sectors for it.

AI in Travel

Introduction

Artificial Intelligence (AI) is changing the dynamic of many industries and one of those at the very forefront is travel. As businesses seek to improve their efficiency and create more personalised customer experiences, travel is one sector that has the ability to generate a lot of investor excitement.

Chatbots and virtual assistants

Conversational bots, sometimes known as virtual assistants, are becoming very important to the travel industry. Bookings are a major problem for hotels, with reports suggesting that only one in twenty potential reservations is actually taken up. Chatbots are trying to buck the trend. Applications like Hijiffy, which uses the Facebook Messenger platform, allow users to ask questions and get instant responses. For example, they can ask about the destination, available services and even book a room, all through the popular messaging app. The beauty of a chatbot is that it can learn from interactions. For example, if it is asked a question it doesn’t know the answer to, it tracks how the customer behaves so it knows the right answer for next time. This is the core of artificial intelligence: doing what humans do and then trying to improve on it. In providing real-time, accurate, conversational support, hotels are able to reduce abandonment rates and improve conversion via a platform their customers are familiar with. Similar technology can be used in the hotel room to respond to guests’ questions; imagine having an Amazon Alexa or Google Home in your room as standard. IBM Watson is trying to take this even further, as you’ll see below. Chatbots are also a great way of collecting customer feedback that guests would not have proactively provided.

Connie

Hilton hotels have deployed Connie, a robot concierge, to help visitors at the front desk. It was developed using IBM Watson technology. Guests can ask Connie questions about where to go, where to dine or how to find something at the hotel. This means there isn’t a need for somebody to be on the desk 24/7, providing a great cost saving whilst maintaining customer satisfaction. In time, as it learns, Hilton hope that Connie will recognise the faces of guests and remember previous conversations, taking it to the next level of artificial intelligence. If it can truly delight customers in this way, it will be revolutionary for the travel industry. If a flight has arrived late, the Watson technology can recognise that and offer specific services. The same applies to behaviours like proactively offering breakfast or treats to guests.

Recommendations

Through the popularity of sites such as TripAdvisor as well as social media, recommendations have become highly important within the travel industry. It would be very unlikely for somebody to book a holiday without looking for reviews of some kind first. Research from Booking.com has shown that one third of customers would now be comfortable in letting a computer plan their next trip based on information from their travel history. Using this data, travel brands can create very tailored recommendations based on unique preferences. Going a step further than that, Utrip (powered by TUI Group) can recommend a full itinerary for trips based on user preferences. It can filter through millions of potential combinations to accomplish this.

Improved Search

Online travel agents are using a technology known as computer vision to improve the search mechanisms on their sites. They do this by optimising the tags used within their listings. For example, a hotel might tag its property as having a “beach view” so that relevant searches return it in the results. Tagging is very important for remaining competitive, and this is even more true as consumers edge towards voice search over traditional typed searches. Travel sites need to consider what customers will be searching for in a voice context, which can be very different to text.

Smart Cruises

Carnival cruises have developed a solution based almost entirely on AI for their trips. The company operates more than 100 ships travelling to over 740 destinations across the globe. On the cruise ships, they are using wearable technology to create a seamless customer experience. The project is led by John Padgett, who was responsible for bringing similar innovations to Disney, allowing visitors to quickly find and see their favourite characters. The wearable device, called the Ocean Medallion, relies on 7,000 sensors placed around the ship and hundreds of miles of cables. Passengers are connected to all of this to receive personalised recommendations and experiences, with over 4,000 digital interaction points across the ship. Investment in these devices will no doubt extend to other companies, given the success Carnival have seen.

Data, data, data

Almost everything in travel creates vast amounts of data. As well as travellers themselves, planes, trains, ships and cars are generating data through an enormous number of sensors every single second. Understanding that data can help to improve efficiency, streamline processes and eliminate costs. For example, Boeing is using augmented reality so that engineers can see circuit diagrams overlaid on planes to find possible faults. Cruise ships are using data to spot possible engine failures as early as 10 months before they happen. As well as ensuring safety, this helps to avoid traveller disappointment and cancellations. AI applications are also using data to create alerts for areas where security could be a problem: travel apps can warn travellers about dangerous places, and image recognition in CCTV can help people avoid certain areas. As we gather more data, process efficiency in travel will continue to improve.

Summary

AI is changing the entire ecosystem of the travel industry right now. Chatbots are taking over bookings and assistance, planning platforms can generate personalised itineraries and guests are getting a truly unique experience. On top of this, operational improvements are making AI a very investable technology for those involved in the travel industry. The key to success will be in using AI wisely. It is designed as an aid to human interaction, not a replacement. Those who can best work out how the two complement each other will be the ones who succeed.

AI and IoT for Special Needs

Introduction

The Internet of Things (IoT) is a network of connected devices that operate via sensors and software, allowing them to collect data. This data helps to form the basis of Artificial Intelligence (AI) in those devices through its subsets, such as machine learning and natural language processing. As an example, Amazon Echo (Alexa) is an IoT device that uses natural language processing to convert our voice commands into data. The data is then processed using machine learning algorithms and sent back to the IoT device, and Alexa talks back to us. Whilst platforms like Alexa are the ones we witness in everyday life, several other industries are benefitting from IoT and AI applications. One of the key sectors, and one likely to be revolutionised over the next few years, is healthcare. McKinsey have estimated that by 2020 as much as 40% of IoT devices will be health-related, showing how big the industry is. This article looks at some examples of IoT and AI assisting people with special needs.
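
The Alexa-style loop described above can be sketched end to end, with the speech-to-text step stubbed out. Everything here is a stand-in for illustration; real assistants hand audio to cloud transcription and intent services.

```python
# Sketch of a voice-assistant pipeline (illustrative only).
def speech_to_text(audio):
    # Stand-in transcription; real devices call a cloud service
    return "remind me to take my medication"

def detect_intent(text):
    # Toy keyword-based intent detection
    if "remind" in text and "medication" in text:
        return "set_medication_reminder"
    return "unknown"

def respond(intent):
    replies = {"set_medication_reminder": "OK, I will remind you.",
               "unknown": "Sorry, I didn't catch that."}
    return replies[intent]

print(respond(detect_intent(speech_to_text(b"...raw audio..."))))
```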

The elderly and our aging demographic

The population of the world is aging, especially in some of the larger economies such as Europe, Japan and China. It is thought that by 2050, 22% of the world’s population will be over 60 years old, versus just 12% in 2000. This means it is important for healthcare providers and governments to invest in solutions for the elderly. If they don’t, the costs of care could become astronomical in years to come and a burden on society. Some providers are already introducing AI into their care journeys. At the diagnosis stage, it has been shown to be accurate in detecting diseases like cancer and early signs of diabetes. This means clinicians can look at the right form of treatment early on and treat the conditions effectively. Discovering the early onset of these diseases through AI can save lives, but we could be some way off full adoption due to issues with trust and data privacy, amongst other limitations. However, there are numerous ways that AI is helping the elderly to improve their quality of life.

At home health monitoring

Continuous supervision is key to a quick diagnosis for elderly patients, and some companies and devices now collect data all the time to monitor patient health. One example is Biotricity, whose biometric devices connect remotely with patients in need. Another is CarePredict, who use AI to detect behaviour changes in their patients for early detection of health issues. This allows professionals to administer help before an event happens rather than risk critical illness.

Medicine management

As smartphones and similar devices have dropped in price, everybody has gained access to the likes of voice technology and chatbots. AI bots can help elderly patients keep track of medication plans and provide alerts at set times. As well as assisting with dosage, this provides comfort and reduces the anxiety of forgetting to take medication.
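
A minimal sketch of such an alert schedule, with invented reminder times; a production bot would also confirm the dose was actually taken and escalate to a caregiver if not.

```python
# Minimal medication reminder loop (illustrative only).
import datetime
import time

reminder_times = ["08:00", "13:00", "20:00"]   # invented schedule

def run_reminders():
    while True:
        now = datetime.datetime.now().strftime("%H:%M")
        if now in reminder_times:
            print("Time to take your medication.")
            time.sleep(60)   # avoid repeating within the same minute
        time.sleep(1)
```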

Tackling isolation

Some bots have gone a step beyond medication management and offer social companionship to the elderly. For example, soft toys can be programmed with social feeds about local events or healthy food choices to help when they are needed most. Whilst it may sound juvenile, having an outlet to the outside world can extend the life of the elderly by keeping the mind active. In theory, the device could take any form, with the underlying AI application and data being key, which means there is a huge opportunity for very personalised patient plans. As well as companionship, one of the biggest fears for seniors is falling whilst nobody else is around to help. Having a device with them alleviates these fears, gives them more confidence and allows them to feel less isolated in their own home.

Caring for specific conditions

Some IoT devices have been created to deal with very specific patient needs.

Fractures

When somebody has a fracture, a device called the Myo can be used as a motion controller for patients who need to exercise during their recovery. It can be controlled via a phone or computer and allows doctors to measure the angle of movement and provide better medical advice.

Heart Monitoring

Devices such as the Zio Patch can monitor heart rate and ECG. It can monitor 24 hours per day for up to 2 weeks, collecting vital data to diagnose and prevent conditions.

Glucose Checking

For those with diabetes, systems such as EverSense can send the results of blood sugar tests to an app and monitor the trends. It is also possible to provide access to caregivers, such as parents or those looking after elderly patients who cannot easily manage the medication on their own.

Asthma and COPD

Devices are available to help track both of these breathing-related conditions. A sensor attaches to a standard inhaler and sends usage data to an app via Bluetooth. The data reveals trends and identifies potential triggers, which can heavily reduce the effects.

Autism

Brain Power offer game-like apps that use AI to produce quick insights for children, parents and teachers. The apps teach skills to anybody on the spectrum, geared to the specific individual. Some of the key metrics include measuring anxiety at different points whilst using the app or checking the body language of the app user. Many of the apps deal with emotions, which are usually hard for those with autism to detect. Analysis using AI techniques could create significant breakthroughs in time.

Voice Activated Devices

Devices such as Amazon Echo and Google Home, whilst a novelty to some, have created a new world for those with special needs. For example, people with limited sight or mobility have been given a brand new way of finding things out or getting things done. The Royal National Institute of Blind People (RNIB) has a page dedicated to how Alexa can help blind people interact with the internet. The elderly and disabled can have instant access to learning materials, friends and family through simple voice commands rather than having to learn complicated new technology. Caregivers can check on activity where applicable, e.g. ensuring an elderly person has taken their medication (or at least listened to the notification). Families are less anxious about those they care for, and those in need have the comfort that somebody is on the other end of the device. Alexa and Google Home are still becoming more conversational through learning and, in time, may be able to perform predictive functions, e.g. flagging a problem if nobody has spoken for XX hours or notifications have been ignored.

Image and Visual Recognition

Microsoft’s Seeing AI app narrates the visual world for the blind and low-vision community, helping them to do things like identify currency, read handwriting and text, recognise products from barcodes, recognise colours and recognise the people around them and their emotions, all through their mobile phone camera. IoT and AI like this can provide a far better quality of life for so many people, which is why enterprises including Microsoft, Google and Facebook have all made significant recent investments in the area. These companies now have so much data available, and the computing power to match, that all of this is possible in a way it wasn’t 3, 5 or 10 years ago.

Summary

Whilst the opportunity for using IoT and AI in healthcare is huge, adoption to date has been slow for several reasons. Both consumers and doctors have data privacy concerns, with massive amounts of data being stored in the cloud and prone to hacking and malicious use. Professionals also argue that in using AI there is a lack of ownership, and therefore responsibility, in making decisions on patient diagnoses: if the AI gets it wrong, who is to blame? What must be remembered is that AI and IoT are not designed to replace human expertise, but rather to aid it. AI can take care of the repetitive and tedious jobs to allow human professionals to focus on the patients. It will enable patient care to come before admin. The examples in this article are revolutionary, and as we enter a new decade, there is a whole lot more to come.
