It’s a term we often hear when talking about technology’s future and how it will affect us.
But artificial intelligence isn't science fiction; it's here right now.
From online banking to your Alexa speaker to Grammarly, which helped me write this blog, A.I. is part of our everyday life.
But first, what is artificial intelligence, and when did it start?
In technical terms, Artificial Intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
What that generally means to you and me is a computer programmed with characteristics we associate with humans, such as the ability to:
- discover meaning
- learn from experience
The concept of A.I. has been around since the 1950s, following the development of the digital computer in the 1940s.
In those early days, computers could be programmed to prove mathematical theorems or play chess, but the ultimate test was: could a machine match or even surpass human intelligence?
Human Intelligence vs Artificial Intelligence
Human intelligence sets us apart from other living beings and is essential to the human experience. It includes the ability to:
- adapt to new situations
- solve problems
- improvise
- learn new things
Today's A.I. systems can mimic some of these traits, such as:
- social intelligence
Key Moments in the Development of Artificial Intelligence
A running theme throughout A.I.'s history has been the financial cost of proving that the theory worked.
In the 1950s, mathematician Alan Turing explored the possibility of artificial intelligence. Turing suggested that humans use available information and reason to solve problems and make decisions, so why couldn't machines do the same?
Computing was extremely expensive in the 1950s: leasing a computer could cost up to $200,000 a month, so research required backing from big business or a wealthy university.
This created a catch-22: to get money for research you had to show that the idea worked, but you needed the expensive equipment to show it worked.
From 1957 to 1974, computers became faster, cheaper, and more accessible.
In 1970 Marvin Minsky told Life Magazine, “From three to eight years we will have a machine with the general intelligence of an average human being.”
Again, this was all theory, as A.I. still couldn't manage the following:
- natural language processing
- abstract thinking
In the 1980s, interest in A.I. was reignited by an influx of investment.
A key development in this period was "deep learning" techniques, which allowed computers to learn from experience:
- recognize complex patterns in pictures, text, and sounds
- use data to produce accurate insights and predictions
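To make "learning from experience" concrete, here is a minimal sketch (not any specific system from that era) of the simplest trainable unit, a perceptron: it nudges its weights every time it misclassifies an example, until its predictions match the data.

```python
# A single perceptron "learning from experience": it adjusts its
# weights whenever it misclassifies a training example.
# Illustrative sketch only; deep-learning systems stack many layers
# of units like this and train them with backpropagation.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - pred          # -1, 0, or +1
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

# Learn the logical OR function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

After training, `predict` reproduces OR for all four inputs; nobody wrote the rule by hand, the program found it from the data, which is the core idea behind pattern recognition and prediction.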
During the '80s and '90s, the Japanese government heavily funded A.I. research, investing $400 million with the goals of revolutionising computer processing and implementing logic programming.
But research into A.I. thrived without government funding during the 1990s and 2000s, and many of its landmark goals were achieved.
In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program. This was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision-making program.
Also in 1997, speech recognition software developed by Dragon Systems was implemented on Microsoft Windows.
In 1998, Tiger Electronics' Furby was released, becoming the first successful attempt at bringing A.I. into the toy market.
In the new millennium, A.I. became a key factor in the development of autonomous cars.
Voice activation expanded with Apple's Siri (2011), Google Now (2012), and Microsoft's Cortana (2014), which use voice requests to answer questions, make recommendations, and perform actions.
For all its progress, A.I. has yet to live up to its science-fiction portrayals.
But artificial intelligence has already had a massive impact on our everyday life, in areas as diverse as medical diagnosis, search engines, and voice and handwriting recognition.
What started as a theory in the 1940s is now part of our everyday life, with limitless possibilities still to come.