
A Brief History of Artificial Intelligence

Aug 30, 2016 11:18:14 AM

A short history of AI: AI and its growing impact on business and law

Technology is advancing in leaps and bounds, and enterprises are trying to keep up with the pace. The legal industry, despite being naturally risk averse and slow to change, is also grappling with how to adopt new technologies, including those which use artificial intelligence (AI). Why is the legal industry so resistant to using new technologies? If the legal profession does get past this hurdle of resisting change, what is likely to happen? What will a legal profession that has fully embraced AI look like? These are all interesting questions to ponder, but before we look at any of them in more detail, let's look at what AI is exactly and where it came from. So, here's a short history of AI.

Download "What the Future Holds for Contract Management"

What is AI?

AI is a field of technology in which machines mimic human thinking and reasoning in their processing and carry out tasks usually performed by people. Many definitions of AI have been offered. Ram Sriharsha, senior architect and machine learning expert at Hortonworks, a data collection and data analytics software company, is quoted in InformationWeek describing AI as "very broad": "AI is being able to communicate, being able to plan and reason and take actions". The American author, computer scientist, inventor and futurist Raymond Kurzweil notes that the most durable definition of AI is "the art of creating machines that perform functions that require intelligence when performed by people". However, he explains that this definition is not a perfect one, as it does not always fit "actual usage" and doesn't explain what exactly is meant by "artificial" or "intelligent". Nick Bostrom, a Swedish philosopher at the University of Oxford, defines a form of artificial intelligence he calls "superintelligence" as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." In his publication on superintelligence he also specifies that "this definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you".

The concept of Artificial Intelligence

Although it has recently been getting a lot of press, AI is not a completely new concept. The idea of AI has lived in the imaginations of human beings for centuries and can even perhaps be said to have its roots in ancient mythology. There are several ancient myths about statues or other inanimate objects that take on a life of their own. One of these, told by the Hellenistic poet Philostephanus, is the story of Pygmalion, a sculptor who carved a statue of a beautiful woman and then fell in love with her; his love and desire were so strong that eventually she came to life. In more recent folklore there is the legend of Pinocchio, and in the last century there are stories such as Douglas Adams's The Hitchhiker's Guide to the Galaxy, which imagines the Earth to be an artificially intelligent computer created to determine the meaning of life. Star Wars is also filled with droids and other artificially intelligent beings, and movies such as I, Robot ponder how artificially intelligent robots could become a threat to humanity.

Download "A Short History of Legal Drafting"

The first roots of AI

As for the technology from which artificial intelligence has evolved, calculating machines have been around since the 1600s. Wilhelm Schickard, a German professor of Hebrew and astronomy, could perhaps be said to be one of the founding fathers of AI. Although many believe Blaise Pascal to be the first creator of the calculator, Schickard is actually known to have built a calculating clock first: drawings of this clock predate the public release of Pascal's calculator, the Pascaline, by 20 years. These machines could assist in performing calculations that humans carry out with their own cognitive thinking. However, it is noted that they could not "think" and "reason" like humans.

Four Industrial Revolutions?

Not long after the age of Schickard and Pascal came the first Industrial Revolution. According to Irving Wladawsky-Berger, adjunct professor at Imperial College London, writing for the Wall Street Journal in Preparing for the Fourth Industrial Revolution, there have been three industrial revolutions so far, and the coming age of AI is predicted to be the fourth. He writes:

“The First, in the last third of the 18th century, introduced new tools and manufacturing processes based on steam and water power, ushering the transition from hand-made goods to mechanized, machine-based production. The Second, a century later, revolved around steel, railroads, cars, chemicals, petroleum, electricity, the telephone and radio, leading to the age of mass production. The Third, starting in the 1960s, saw the advent of digital technologies, computers, the IT industry, and the automation of process in just about all industries.”

Download "Building a Transformative Contract Management Practice"

The fourth revolution involving AI can be said to have its roots in the later part of the 19th century through to the mid-20th century. During this time chess-playing machines were developed and improved. As deciding which moves to make in a game of chess is considered a form of cognitive thought, these programs could be considered a form of artificial intelligence. Professor Bruce G. Buchanan of the Department of Computer Science at the University of Pittsburgh, who has lectured in computer science, philosophy and medicine, notes in his A (Very) Brief History of Artificial Intelligence that, as Pamela McCorduck observed, it was considered a very big achievement when the program Deep Blue defeated world chess champion Garry Kasparov in 1997.

The rise of electronics and modern computers during this time period also led to several advances. Perhaps the most famous is associated with Alan Turing. He realised there was a need to be able to show whether or not a machine was able to think, and developed the "Turing Test" in 1950. It took a long time, but in 2014 a program was reported to have passed this test. Alan Turing's life was recently explored in the 2014 historical thriller The Imitation Game.

One reason no machine passed this test for so long may simply be that the power of computers wasn't there until very recently. However, perhaps the achievement also required a greater understanding of human thought and thinking, and not simply advances in computing. Between the middle of the 20th century and today a great deal of time has been dedicated to this research.

Greater Understanding about Human Minds and Thought

Perhaps the best-known researcher and theorist on the human mind and thinking is the medical doctor, psychologist and physiologist Sigmund Freud. Although many of his theories are now considered erroneous, some argue that his central ideas still "form the bedrock of contemporary therapy...and his musings on the mind are more relevant than ever". Freud's work influenced many later theorists. Marvin Minsky, cognitive scientist and co-founder of MIT's AI laboratory, developed his own theories about the workings of the mind in relation to artificial intelligence, referred to Freud as "one of the first computer scientists, because he studied the importance of memory", and was known to have called Freud his favorite theorist of mind. Minsky published his theory of the mind in his 1986 book The Society of Mind. He thought the mind was made up of "agents" - little entities that each have the ability to perform a specific action (e.g. remembering, motivation, fantasizing) while collectively forming the human mind.

Daniel C. Dennett, an American philosopher, writer, cognitive scientist and professor at Tufts University in Medford, Massachusetts, has conducted various studies and developed theories about the mind, particularly as it relates to biology and cognitive science. In one of his books, Kinds of Minds, published in 1996, he combines ideas from philosophy, artificial intelligence and neurobiology and discusses questions such as "What distinguishes the human mind from the minds of animals, especially those capable of complex behavior?" and "Will robots, once they have been endowed with sensory systems like those that provide us with experience, ever exhibit the particular traits long thought to distinguish the human mind, including the ability to think about thinking?"

Another academic, Margaret Boden, research professor of cognitive science in the Department of Informatics at the University of Sussex, published Mind as Machine: A History of Cognitive Science in 2006. In this book she considers the mind from the perspectives of psychology, philosophy, anthropology and artificial intelligence. Gilbert Harman, reviewing the work in American Scientist, notes that in chapter 7 Boden "offers an extensive discussion of computational psychology as it has evolved since 1960" and that Boden concludes by acknowledging that "we're still a very long way from a plausible understanding of the mind's architecture, never mind computer models of it," though she "believes that the advent of models of artificial intelligence has been extraordinarily important for the development of psychology". The reverse could equally be said: the development of theories of psychology and the mind has been important for the development of artificial intelligence.

The future with Artificial Intelligence

Despite the fact that some academics feel AI still has a long way to go, several people believe we are on the brink of an artificial intelligence revolution - the fourth industrial revolution - such as Irving Wladawsky-Berger, mentioned above. Rollo Carpenter, creator of the AI software Cleverbot, is another believer in the imminence of the AI revolution. He concedes that "we are a long way from having the computing power or developing the algorithms needed to achieve full artificial intelligence", but believes it will come in the next few decades.

Atanu Basu, the founder of Ayata, a platform that uses artificial intelligence software to improve mission-critical processes, and Michael Simmons, the co-founder of Empact, a company which helps build entrepreneurship, recently wrote an article for the online magazine Inc.com, The Future Belongs To Leaders Who Get Artificial Intelligence, in which they described what they feel the adoption of AI and machine intelligence will be like in the future. They wrote:

“The transition from human intelligence to machine intelligence in our daily lives and in the enterprise is going to be messy.

It will challenge our identity.

It will go against our expert intuition that we've spent our careers building.

It will inherently require giving trust and control away to decisions we don't understand.

It will create larger and more diverse opportunities than we can even fathom today.

That's why we call it the surprise-fear-embrace curve, and that's why we'd argue that one of the most important skills we can learn is how to ride it.”

David Ferrucci, the principal investigator behind perhaps the most famous AI software today, IBM Watson, is also quoted in this article, explaining that he thinks it's going to be a strange transition for people to start listening to computers, rather than renowned human experts, for advice and opinions in their specific fields.

So there you have a short history of AI and some thoughts on what its future may look like. It will be very interesting to see how that future plays out.

What are your thoughts? How do you think AI will change our lives, particularly in relation to business and contracting?

ContractRoom is a contract management software system. It uses machine learning and artificial intelligence to change the way deals get done and decisions are made with Predictive Agreement™. To find out more about ContractRoom, go to www.contractroom.com and/or book a demo here: Request Demo

* Mind as Machine: A History of Cognitive Science. Margaret A. Boden. Two volumes, xlviii + 1631 pp. Oxford University Press, 2006.

This article was written with assistance from Jennifer Tran who is interning with ContractRoom over the summer.

Written by Katie Cook

Katie Cook is the Director of Marketing, Communications and Legal Standards at ContractRoom. Originally from the east coast of Australia, she has a background as an attorney, having worked in both public and private practice in Brisbane and Melbourne. While working as an attorney, Katie completed studies in journalism, and she now combines her legal and writing skill sets in her role at ContractRoom.

