History of Artificial Intelligence

Hello French Fries!

So, my office Secret Santa gave me a bolo tie a couple weeks ago, and I’ve been wearing it ever since because I think it looks cool and unique. Two days ago, I decided that since I have been wearing a bolo tie to my office (instead of a normal tie), I should maybe watch a western or two, since I really don’t watch westerns. However, I can’t seem to find any that are decent quality, modern, and that I haven’t seen before. I watched The Magnificent Seven, Django Unchained, 3:10 to Yuma and The Good, the Bad and the Ugly in the past year, so if you could suggest some interesting or good westerns in the comments, I’d greatly appreciate it!

For my rant of the day: there has been a lot of focus in the media and in the realm of science fiction on the concept of artificial intelligence. Artists and social commentators have popularly portrayed Artificial Intelligence (AI for short) as manmade, self-aware entities that pose a threat to humanity because of our ignorance or arrogance. From HAL 9000 to Ultron, popular culture has depicted AI as nefarious plots and machinations hatched by artificially produced human-brain equivalents housed in metal or plastic bodies, able to think abstractly but lacking moral constraints. Consequently, there is a lot of misconception about what AI actually entails and about the purpose of our scientific community’s investment in the study and development of AI.

Artificial Intelligence is the ability of non-biological machines to achieve goals through computation. This definition has two parts. ‘Non-biological machines’ means manufactured devices or computers (the Artificial aspect of AI); there is nothing out of the norm there, but the next part is where things get tricky. The ability to achieve goals through computation (the Intelligence aspect) is a very bare-bones definition of intelligence. There are many competing theories about what actually counts as computation and as a goal, so the scientific community has yet to reach a consensus on what qualifies as intelligence when it comes to AI. Consequently, as machine learning and automated task achievement become more commonplace, what we as the casual public call an ‘Intelligent Machine’ keeps getting more advanced and harder to attain, because previous achievements come to be viewed as mundane. And while the general public might assume that Artificial Intelligence is the equivalent of human intelligence, this could not be farther from the truth.

Despite what you may think, the concept of AI is not a modern innovation. The ancient Greeks believed that their god of fire, Hephaestus, had the ability to construct self-aware robots in the form of humanoid assistants and animal-like pets made of gold and silver. Similar ideas were echoed by, and served as inspiration to, figures from a wide range of ancient and medieval civilizations, such as the Seljuq-era engineer Ismail al-Jazari, the Florentine polymath Leonardo da Vinci, and the legendary Chinese craftsman Yan Shi, among others. These studious scholars built self-operating humanoid machines called Automatons. Automatons were the earliest forms of robotics; their designs often used clockwork-like technology to automate a process or task through movement triggered by certain cues. Famous examples include Leonardo da Vinci’s mechanical Knight, which could reportedly sit up and move its arms, head and visor; King Philip II of Spain’s Mechanical Monk, which could walk around, kiss its rosary and mime prayers; and many examples by al-Jazari, including an automated soap and towel dispenser (900 years ago, well before the automated ones we have today), a mechanized wine servant said to sense when your glass was empty and refill it, and a full floating orchestra. The Greek myth of Talos has been theorized to have been based on a real giant Automaton designed to distinguish foreign vessels from local ones and throw rocks at hostile ships to scare them away. For those of you who put weight on faith over science, there are even references to Automatons in the lore surrounding the Bible: King Solomon’s throne was said to have six steps leading up to it, each with a different golden statue of an animal of Solomon’s own design that would bow whenever Solomon walked past but not for anyone else, and the throne itself was said to move on its own. While primitive, these machines met the basic definition of AI: they could follow an if-then logic that triggered them to carry out a task based on their own judgment, without management or regular input from an overseeing human.
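To put that if-then logic into modern terms, here is a toy sketch in Python (purely illustrative; the Cup class and wine_servant function are my own inventions, not a reconstruction of any real automaton) of the kind of sense-and-act rule al-Jazari’s wine servant embodied:

```python
# A toy model of an automaton's if-then rule: sense a condition, act on it.
# Everything here is hypothetical and exists only to illustrate the idea.

class Cup:
    def __init__(self, volume_ml=0):
        self.volume_ml = volume_ml

    def is_empty(self):
        return self.volume_ml <= 0


def wine_servant(cup, pour_ml=150):
    """If the cup is empty, refill it; otherwise wait."""
    if cup.is_empty():            # the "if": a sensed cue
        cup.volume_ml += pour_ml  # the "then": an automatic action
        return "poured"
    return "waited"


guest_cup = Cup()
print(wine_servant(guest_cup))  # "poured" - no overseeing human required
print(wine_servant(guest_cup))  # "waited" - the cup is still full
```

The point is simply that the machine senses a cue and acts on it by itself, which is all the ‘judgment’ those early automatons needed.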

Artificial Intelligence is based on the ability of machines to conduct reasoning, and is in essence an attempt to mechanize human logic. Thus, while Automatons could act on simple logic, there wasn’t much progress past them until Charles Babbage and Ada Lovelace founded the field of Computer Science, which was then advanced from theory to practical application thanks to the work of Alan Turing. I went into detail about Babbage and Lovelace in a previous post, so I won’t say much here, but when Babbage designed the first mechanical computer and Lovelace wrote the first algorithm, they sowed the seeds that the Second World War harvested. The war pushed Alan Turing’s theoretical machines into real codebreaking hardware, which opened a new door for the study of AI. The funny thing about World War 2, with all of its horrors and desperation, is that its total-war nature led to large-scale advancements in computer science and technology. AI as we know it today was born in the fires of World War 2, and after the war many of the scientists and theorists who had been assigned to wartime computing work decided to continue in the field full time. Many were inspired by Alan Turing, who even today remains one of the most influential academics in the study of AI. Turing wanted to go beyond simple logic-based intelligence and design machines that could actually think like humans. To this end, he repurposed a Victorian-era parlour guessing game into what he called the Imitation Game, now known as the Turing Test. In the Turing Test, an ordinary person communicates blindly, through a text-only chat interface, with an unseen partner that is either a person or a computer, and must determine which one they are talking to. If a computer can convince the tester that it is a person, it passes. This test, which Turing proposed in 1950, would become a focal point for AI research and development: for the last 70 years, much of the field has been built around developing computers and algorithms capable of overcoming the Turing Test to prove their intelligence.
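To make the shape of that protocol concrete, here is a minimal Python sketch of the setup (the canned-answer machine_respondent and the human_respondent stand-in are placeholder names of my own, not anything Turing specified): the judge chats blindly over text and then guesses.

```python
import random

def machine_respondent(question):
    # A deliberately crude stand-in for a chatbot.
    canned = {
        "What is 2 + 2?": "4",
        "How do you feel today?": "I feel fine, thank you for asking.",
    }
    return canned.get(question, "That is an interesting question.")

def human_respondent(question):
    # A stand-in for a real person typing answers over the text channel.
    return input(f"(human, please answer) {question}\n> ")

def run_turing_test(questions):
    """The judge converses over text only, then guesses who was answering."""
    respondent = random.choice([machine_respondent, human_respondent])
    for q in questions:
        print(f"Judge asks: {q}")
        print(f"Reply:      {respondent(q)}")
    guess = input("Judge, was that a 'human' or a 'machine'? ").strip().lower()
    actual = "machine" if respondent is machine_respondent else "human"
    print(f"It was actually the {actual}.")
    if actual == "machine" and guess == "human":
        print("The machine fooled the judge - it passes this round.")

run_turing_test(["What is 2 + 2?", "How do you feel today?"])
```

Of course, real Turing Test trials use human judges, many rounds and far more capable programs; the sketch only shows the blind, text-only exchange at the heart of the idea.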

This brings me back to my original definition of Artificial Intelligence. While the technology has developed, and we long ago achieved AI as it was originally conceived centuries ago, the parameters of what we define as intelligence, and thus the goals of computer scientists working on AI, have evolved along with the technology. Since 1950, AI development and machine learning have been almost exclusively focused on beating the Turing Test, an achievement that was not claimed until a scant 3 years ago. This has left the field today somewhat lacking in purpose and direction. That uncertainty, combined with the fatalistic creativity of the collective human mind, has left the future somewhat cloudy yet full of possibility. There are industry titans such as Elon Musk who fear that advances in AI will lead to the destruction of humanity. I have made no attempt to hide my admiration for Elon Musk, but I do not agree with him in this instance. Artificial Intelligence is limited by human parameters and design, and even where nature (the perfect programmer) has produced an abundance of intelligence, it isn’t as intelligent as we believe. I personally believe that before we build a HAL 9000 or an Ultron, we would be able to minimize the harm to our collective selves, because Artificial Intelligence is still limited by how we as humans perceive intelligence. While our definition of intelligence has changed over the centuries, our compassion and fear of self-harm have not, and our natural, human parameters will limit the scope of artificial intelligence to those same parameters. After all, the goal of Artificial Intelligence in today’s understanding is to mimic human intelligence, and as Albert Einstein reportedly said, “Only two things are infinite, the Universe and human stupidity, and I’m not so sure about the former.” So while I respect Mr. Musk, I must disagree with his assessment: the parameters we have currently set for AI mimic our own intelligence, and we are not as intelligent a species as we like to think, so we shouldn’t fear Judgement Day anytime soon.


Now for my tip of the day: if you are like me and spend a significant amount of time surfing the web, then you know the annoyance of autoplay videos. These are videos or advertisements on various websites that start playing automatically as soon as you open the page. They annoy the crap out of me because when I’m on my computer I normally have my Spotify playlists playing in the background, and the racket of overlapping sounds gives me a headache. There is good news though: most browsers have a way of disabling autoplay. Intrigued by this proposition? Then read on.

1.) Chrome

When you open Chrome, go to the three-dots icon in the top right-hand corner of the browser, open that and select Settings. On the chrome://settings page, at the top left there will be three lines next to the word Settings. Select the three lines, expand the Advanced menu underneath, and select ‘Privacy and security’. On the chrome://settings/privacy page, underneath the Privacy and security menu, select Content Settings. Go to Flash in the subsequent menu and check ‘Ask first’.

2.) Edge (and IE)

Go to the menu icon and select Settings. In Settings, select View Advanced Settings. From there, look for the ‘Use Adobe Flash Player’ option and turn it off.

3.) Safari

Select the Safari menu at the top and open Preferences. In Preferences, click the Websites tab and scroll down the left-side menu until you see Plug-ins. Uncheck the Adobe Flash Player plug-in.

4.) Firefox

Click on the main menu (three horizontal lines). In the menu, navigate to Add-ons and click on it. Select the Plugins option in the left panel. Open the dropdown menu for Shockwave Flash and select ‘Never Activate’.

Just as a disclaimer: while these settings do work for Flash-player videos, which are the ones that traditionally start as soon as you open a website, there has been a sneaky and nefarious trend over the past year of website developers moving away from Flash plugins in favor of embedding videos directly in the page’s HTML5 markup. If the website you are opening has the video embedded directly in the HTML instead of served through a Flash plugin, I apologize, but you are SOL and will have to live with the nuisance, since there isn’t a built-in way of preventing those. Sometimes an ad blocker might help, but not always.


Well folks, that concludes today’s post. Until next time…

…the ketchup is in the sauce.

