Artificial Intelligence (AI) has become a buzzword in almost every walk of life, with meteoric growth in recent years. Over the last few years it has emerged as a superpower shaping the future of every scientific endeavour. Be it AI-powered self-driving cars, disease detection in medical research, image/speech recognition or big data, these are just the tip of the iceberg with respect to the enormous possibilities Artificial Intelligence is capable of. This article covers the basics of Artificial Intelligence along with its genesis and modern history.
Artificial Intelligence is a broad term encompassing both Machine Learning and Deep Learning: Machine Learning is a subdomain of Artificial Intelligence, and Deep Learning, in turn, is a subdomain of Machine Learning. These three domains of advanced computing can be represented by the following diagram.
Background of Artificial Intelligence and its genesis
Before we start with the basics of Artificial Intelligence, we should know its background. The first instance of a machine having some intelligence akin to a human's was developed by Charles Babbage and the English mathematician Lady Ada Lovelace in Victorian England during 1830-40.
It was called a mechanical computer and had the capacity to perform different mathematical computations. The machine algorithm she developed led to the creation of an early computer which, until then, had existed only on paper. So Ada Lovelace, the daughter of the famous poet Lord Byron, was named the world's first computer programmer.
Turing machine: a step towards the modern computer
Another similar example is the Turing Machine, conceived by Alan Turing in 1936. It can be designated as the first model of a machine capable of Artificial Intelligence. In 1950 he wrote a famous article on machine intelligence titled "Computing Machinery and Intelligence".
The Turing machine was the first theoretical model of a general-purpose computer. Turing developed his ideas while working at the Government Code and Cypher School at Bletchley Park, where the mission was to break the German Enigma code during the Second World War. It was theoretically similar to modern-day electronic computers. In 1951, the US got its first commercially available electronic stored-program computer, UNIVAC.
The modern history of Artificial Intelligence
After that, many years passed with lots of trial and error in research and development, without any significant advancement in the field. The main limitations were the lack of training data, as images were not abundant at the time, and computing power insufficient to analyze voluminous data.
However, the scenario took a sharp turn with the advent of computers with higher computational power. The term Artificial Intelligence was first coined at a conference at Dartmouth College, Hanover, New Hampshire in 1956, and once again a group of researchers threw themselves into unveiling the potential of AI.
Setbacks
Critics, however, are always there, and their arguments against AI became more prominent due to the lack of practical evidence for it. Governments also appeared convinced by these arguments, given the lack of success in AI projects, and as a result funding for AI research projects was stopped. It was a big blow, and eventually a winter period in AI research started in 1974 and lasted till 1980.
In 1980, AI research returned to the headlines for a brief period when the British government showed some interest, with an intention to compete with Japanese advances in AI research. But that did not last long; the commercial failure of some early AI machines soon pushed the field into another prolonged winter period, which lasted seven long years (1987 to 1993).
Breakthrough
But the triumph of AI was inevitable and just a matter of time. As industry leaders like IBM set foot in the AI arena and took up the challenge of showing the world what AI is capable of, things started to change. A team of highly qualified scientists and computer programmers threw themselves into this mission, and the result was pathbreaking.
Deep Blue: the chess champion supercomputer
The first big success of an AI project was the creation of the supercomputer Deep Blue by IBM. The computer created history when it defeated the then world chess champion, Garry Kasparov, in a six-game match that concluded on May 11, 1997.
Back then it was so surprising that the reigning champion was not ready to admit he had lost to a computer with Artificial Intelligence. He cried foul play, suspecting that some grandmaster was actually playing for the computer.
The computer was extremely accurate in making its moves, and it played without any human emotions, which is where Garry, being human, lagged behind. This is where a computer always steps ahead of a human being: applying only hard logic based on the vast amount of information fed to it. This victory of Deep Blue over human intelligence ushered in a new age of Artificial Intelligence.
Watson: the question-answering AI-based computer
Another historic feat of AI establishing supremacy over human intelligence came in 2011, when a supercomputer named Watson won the famous quiz show Jeopardy!. In this competition, Watson defeated the former champions Ken Jennings and Brad Rutter.
Watson is a question-answering computer based on Natural Language Processing, created by IBM's DeepQA project in 2010. David Ferrucci of IBM was the key brain behind the idea of Watson, and it was named after IBM's founder and first CEO, Thomas J. Watson.
Artificial Intelligence basics
The concept of Artificial Intelligence reverses the traditional idea of finding a solution to any data-oriented problem. The classical programming or statistical modelling approach usually sets the rules first and then applies them to the input data to obtain answers, whereas Artificial Intelligence uses example answers along with the input data to learn the rules. See the schematic diagram below to understand it:
This means Artificial Intelligence puts more emphasis on the hands-on training part: learning from the data. This process needs a large amount of data so that the algorithm can be certain about the actual relation between the variables. Thus the idea is to establish the rules empirically rather than theoretically.
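To make this reversal concrete, here is a minimal Python sketch; the toy data and the linear rule y = 2x + 1 are assumptions for illustration only. Classical programming hard-codes the rule, while the learning approach recovers it from the example answers:

```python
import numpy as np

# Toy problem: the "true" rule is y = 2x + 1 (assumed purely for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1

# Classical programming: the rule is written by hand and applied to the data
def classical_rule(x):
    return 2 * x + 1

# Learning approach: the rule (a line's slope and intercept) is inferred
# from the input data together with the example answers
slope, intercept = np.polyfit(x, y, deg=1)

print(classical_rule(5.0))         # 11.0, rule supplied by the programmer
print(slope * 5.0 + intercept)     # ~11.0, rule recovered from the examples
```

Both approaches give the same answer here, but only the second one would keep working if the underlying rule were unknown and had to be discovered from data.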
The concept of Artificial Intelligence is not a new one, though; it first came into existence long back, around 1950. At its inception, besides the concepts of Deep Learning and Machine Learning, it also involved hard-coded programming rules. For example, playing a chess game back then required a large set of rules programmed into the computer. This kind of Artificial Intelligence got the name Symbolic AI.
During the 1980s, the concept of Expert Systems got the limelight across industries. An expert system on any topic provides an interactive information-delivery system: a machine plays the expert's role and, based on the user's input, provides suitable information. In the process of developing such expert systems, Symbolic AI transformed into Machine Learning.
Components of Artificial Intelligence
Artificial Intelligence has three main components, as shown in the figure above; a minimal code sketch after the list ties them together:
Input data:
This is obvious and also common to traditional programming and statistical modelling: we need to feed in the input data in order to arrive at an estimate. The sample data in our hands, whether labelled or unlabelled, plays this role of input data.
Labelled data:
This is the part unique to Artificial Intelligence. We need to provide some example answer data to train the programme, and the larger this example answer data, the more accurate the training. This example data set is the labelled data: both the features and the label are present, and we expect the algorithm to learn from these examples and identify the relationship between them.
Error optimization:
This is the third important component, which calibrates the algorithm by identifying how close the estimate is to the actual value. There are several metrics which provide a good measure of how well the model is performing.
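Here is a hedged illustration of the three components working together; the points, labels and the simple nearest-neighbour rule are invented for this sketch, not taken from the article:

```python
import numpy as np

# 1. Input data: feature vectors (toy 2-D points, assumed for illustration)
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 5.0], [6.0, 4.5]])
# 2. Labelled data: the example answers the algorithm should learn from
y_train = np.array([0, 0, 1, 1])

def nearest_neighbour(x, X, y):
    """Predict the label of the closest training example."""
    distances = np.linalg.norm(X - x, axis=1)
    return y[np.argmin(distances)]

# 3. Error optimization: a metric telling us how well the model performs
X_test = np.array([[1.2, 1.5], [5.5, 5.0]])
y_test = np.array([0, 1])
preds = np.array([nearest_neighbour(x, X_train, y_train) for x in X_test])
accuracy = np.mean(preds == y_test)
print(f"accuracy = {accuracy:.0%}")
```

The particular model does not matter here; any learning method combines these same three ingredients.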
Algorithm to represent the input data
In a nutshell, this is the main essence of Artificial Intelligence. All machine learning and deep learning algorithms try to find an effective way to represent the input data. This representation is of the utmost importance, as it is the key to successful prediction.
For example, when the problem at hand is to identify an image composed of the colours red, green and blue, a very effective way to represent the image can be to count the number of pixels with red colour. In similar fashion, in speech recognition, if the algorithm can represent the language and voice modulation effectively, the accuracy of recognition gets much higher.
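A minimal sketch of that red-pixel representation, assuming a toy random image in place of a real photograph (the red-dominance criterion is one of many reasonable choices):

```python
import numpy as np

# A toy RGB "image": height x width x 3 array of intensities in [0, 255]
# (random data here, purely for illustration)
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3))

# One possible representation: count the pixels where the red channel
# dominates the green and blue channels
red, green, blue = image[..., 0], image[..., 1], image[..., 2]
red_pixels = np.sum((red > green) & (red > blue))

print(f"red-dominant pixels: {red_pixels} of {image.shape[0] * image.shape[1]}")
```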
An example of data representation
Here is an example of this representation problem with an easy graphical classification task. I came across this example in the book "Deep Learning with Python" by the Keras creator and Google AI researcher François Chollet; it is a great book to start your journey with Artificial Intelligence.
See in the above figure the scattered points in two colour groups, red and blue. The problem is to find some rule to classify the two groups. A good solution to this representation problem is to create a new coordinate system, as in the figure below. After the change of coordinates, the dots can be easily classified with a simple rule: a dot is blue when X > 0 and red when X < 0.
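The following toy reconstruction of that idea (the point clouds and the 45-degree rotation are my assumptions, not the book's exact data) shows how a change of coordinates turns the classification into a one-line rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two point clouds along the diagonal, so neither original axis
# separates them on its own
blue = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
red = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(50, 2))
points = np.vstack([blue, red])

# The representation change: rotate the coordinate system by 45 degrees
theta = -np.pi / 4
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
new_coords = points @ rotation.T

# In the new coordinates one simple rule classifies every point:
# blue when the new x > 0, red when the new x < 0
labels = np.where(new_coords[:, 0] > 0, "blue", "red")
correct = np.sum(labels[:50] == "blue") + np.sum(labels[50:] == "red")
print(f"correctly classified: {correct} / 100")
```

The hard part was never the rule itself; it was finding the representation in which the rule becomes trivial.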
AI algorithms: not creative but effective
These types of transformations are handled by Artificial Intelligence algorithms automatically. Like this coordinate change, other transformations such as linear projections and nonlinear operations are all frequently used functions, and they are available for Artificial Intelligence algorithms to choose from a predefined space of possibilities called the hypothesis space. In this sense Artificial Intelligence algorithms are not very creative; all they do is select functions from this space of possibilities.
Although the algorithm is not creative, it often does the job. The algorithm takes the input data, applies a suitable transformation chosen from the hypothesis space, and then uses the feedback signal obtained by comparing its output with the expected output; with this guidance, it refines its representation of the input data.
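A minimal sketch of this feedback loop, assuming a one-parameter hypothesis space (all straight lines through the origin) and toy data following y = 3x. The feedback signal, the gap between output and expected output, guides each adjustment:

```python
import numpy as np

# Toy data following y = 3x (assumed for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3 * x

# Hypothesis space: all lines through the origin, y = w * x.
# The algorithm's job is to pick the best w from this space.
w = 0.0             # initial guess
learning_rate = 0.01

for step in range(200):
    predictions = w * x
    # Feedback signal: how far the output is from the expected output
    error = predictions - y
    # Adjust w in the direction that reduces the mean squared error
    gradient = 2 * np.mean(error * x)
    w -= learning_rate * gradient

print(f"learned w = {w:.3f}")   # approaches 3.0
```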
The following diagram represents this flow of information for ease of understanding.
Final words
So, in the simplest terms, Artificial Intelligence is all about learning through trials and examples. You provide lots and lots of example answers, and the algorithm goes on perfecting itself. Unlike other prediction algorithms, which reach a plateau after a certain number of trials, AI algorithms keep improving.
A good practical example of such a learning process is Google's Quick, Draw!, an AI-driven drawing game hosted by Google. As claimed by Google, it has the world's largest doodling data set, and you can also add your own drawing samples to it.
It is experimental research on the use of AI. You will be surprised to see how effortlessly and quickly it recognizes drawings using AI: you can draw a picture in less than 20 seconds! The reason behind its high accuracy in pattern recognition is, again, as I mentioned, a huge database of example answers: almost 15 million people have uploaded more than 50 million drawings to the database.
It is not only about drawing: it is a collection of several other experiments with music, video, natural language processing and more, all with open-access code. You can try the code, as it is open-sourced, and also add your own AI application code.
Expectations from AI should be rational and long-term
One problem with Artificial Intelligence has been that its possibilities were always hyped out of proportion. Goals and expectations were set for too short a term, and the obvious result was disappointment and loss of interest. Such disappointment resulted in the two winter periods in AI research mentioned before.
Such winter periods slow down the development process for years at a stretch and are not at all good for the researchers and scientists putting tremendous effort into AI research. They become the victims of the irrational hype created by the press, the media and some over-enthusiasts.
When the dreams get shattered, all research projects experience a crunch in funding. Scientists who may be on the verge of some significant result get stuck with their research just because of insufficient funds. This is heartbreaking and may deprive a scientist of his or her lifelong research achievements.
Many of the expectations set for AI technology during 1960-70 remain distant possibilities even in 2020. Similarly, the hype around AI in recent years may be an exaggeration too, and may lead to another winter period.
Conclusion
So, we need to be very careful to keep our expectations of AI realistic. Instead of setting short-term goals, we should look for a long-term, broad objective and give researchers sufficient time to proceed with their research and development activities.
There is no denying that AI is going to be our everyday best friend, making our lives much easier in the coming days. The day is not far off when we will take AI's help with every problem we face: it will suggest remedies when we feel sick, help educate our kids, take us to our destinations and help us understand foreign languages, and in doing so AI will take the whole of humanity to a new level of evolution.
This is not an unrealistic expectation, and the day will eventually come. We just need to be patient and have faith in the highly talented AI scientists working hard to make this dream a reality.
References
- Chollet, F., 2018. Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek. MITP-Verlags GmbH & Co. KG.
- https://thenextweb.com/artificial-intelligence
- https://www.roboticsbusinessreview.com
- https://en.wikipedia.org