Introduction to Artificial Intelligence (AI)

What is AI?

AI is the study of intelligence, particularly of intelligent machines

AI is mainly interested in understanding and creating things that think, or act, rationally (or intelligently)

e.g. we want self-driving cars to “do the right thing” in all situations, and this seems to require acting and thinking rationally

humans are a special case: so far, they are the only known things that are capable of general rational thought and action

so AI is often compared to human performance, and we will often take humans as inspiration for how computers might solve similar problems

  • but keep in mind there are some things that humans perform badly at compared to computers, e.g. most people can’t do arithmetic as quickly and accurately as computers, and so a calculator that performed at human-level would not be very good!

The Turing Test

determining if something is truly intelligent is a tricky problem

famously, in the 1950s computer scientist Alan Turing proposed what is now called the Turing Test

the Turing Test is a thought experiment: a computer and a human questioner have a conversation through a computer terminal, such that the questioner has no idea if they are conversing with a computer or a real human; the computer’s goal is to make the questioner think they are human

the Turing Test is open-ended: the questioner can ask anything they like, and may judge the computer’s responses however they like

of course, some questions are human-centric, e.g. the questioner might ask “What was your mother’s name?”, or “How many toes do you have on your left foot?”, which would seem to require that the computer lie

but such details aside, it is interesting to think about the sorts of things a program that could pass the Turing Test would need to be able to do; it would have to:

  • understand, and generate, natural language (natural language processing, NLP)
  • store and retrieve the things it knows and hears (knowledge representation, KR)
  • use its stored information to infer new conclusions (automated reasoning)
  • learn about new circumstances, or find new patterns (machine learning, ML)

all of these are significant and challenging parts of AI research today, and the general consensus is that we are nowhere near being able to make a program that could pass the Turing Test

  • sometimes people make the claim that programs can pass a “limited” Turing Test, e.g. a Turing Test-like situation that is very limited in scope; but this seems to go against the intention of the Turing Test, which specifically allows for the conversation to go in whatever direction the participants take it

in practice, the Turing Test has had little direct impact on AI research, since it is too complex and too vague, and researchers have preferred to focus on the underlying principles of AI

yet you will often hear mention of the Turing Test in movies, books, and the popular press; it is certainly useful for inspiring people, but it has not been a direct topic of much AI research — it’s just too much to ask (yet!)

Thinking Humanly: Cognitive Science

cognitive science is its own discipline, although it is closely related to AI

cognitive science is essentially interested in how humans think: it builds models of cognition, intelligence, rational behaviour, etc. that describe how humans might do such things

the goal of cognitive science is to explain how humans think, and if that helps further the cause of AI in general, then that’s a bonus

in contrast, the “computer science” approach to AI that we will be taking is more engineering-oriented: depending upon the problem we are trying to solve, mimicking how humans solve that problem may, or may not, be a good idea

  • we will use any idea or approach that gives good results!
  • it won’t matter to us if a good approach to solving a problem is not the way a human would do it

Rational Agents

a useful perspective on AI is that it is interested in creating and understanding rational agents

an agent is something that “acts” in the world, and by rational we mean “doing the right thing”

e.g. we want self-driving cars to be rational agents; we also want Alexa and Siri and other such “intelligent assistants” to be rational agents
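
to make this concrete, here is a minimal sketch, in Python, of the standard picture of an agent as a mapping from percepts to actions; the class and action names are made up for illustration:

    # a minimal sketch of the agent idea: an agent maps percepts to actions
    class Agent:
        def act(self, percept):
            """Given the current percept, return an action."""
            raise NotImplementedError

    class ReflexVacuumAgent(Agent):
        """A toy agent for a two-square vacuum world: suck if dirty, else move."""
        def act(self, percept):
            location, is_dirty = percept
            if is_dirty:
                return "suck"
            return "move-right" if location == "left" else "move-left"

    agent = ReflexVacuumAgent()
    print(agent.act(("left", True)))   # suck
    print(agent.act(("left", False)))  # move-right

rationality is then a question about this mapping: does the agent choose the actions that lead to the best outcomes?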

a human is a rational agent, and a major question in AI is if anything other than a human is a rational agent

  • many AI researchers think the answer to this question is obviously “yes”, although we don’t yet have any examples of rational agents other than humans
  • in principle, most researchers believe there is nothing magical about human brains: a brain is just “meat that thinks”, and in principle a computer simulation of a brain would also think, and thus be a rational agent
  • some people argue that computers don’t have emotions, but emotions don’t seem to be a requirement for rational thought; however, some researchers have certainly studied them in depth, and they can be simulated if necessary!
  • some people (passionately!) argue that consciousness is important, and suggest that while computers may one day be truly intelligent, they will never be truly conscious
    • but a major problem with consciousness is that it is not clearly defined, and different scientists take it to mean different things
    • it is just not clear how consciousness helps with intelligence, and so we will have little to say about it in this course

The Chinese Room

in a famous argument, the philosopher John Searle claims that a computer that can pass the Turing Test is not necessarily intelligent

here is a video of Searle explaining the argument: https://www.youtube.com/watch?v=18SXA-G2peY

this argument has provoked a lot of discussion in AI and philosophy about what it means to be intelligent, and what a mind is

Searle uses the term weak AI to refer to the proposition “a mind can be simulated”

  • Searle grants that weak AI is possible

Searle uses the term strong AI to refer to the proposition “a simulated mind is an actual mind”

  • Searle believes this to be false, and uses the Chinese Room argument to try to prove it is false

in the Chinese Room argument, Searle asks you to imagine being in the room (and knowing no Chinese)

  • you are the CPU
  • you have a (huge!) rule book with instructions that tell you how to convert any input Chinese symbols into the correct output symbols, in such a way that you pass the Turing Test
    • note that Searle grants this rule book, i.e. he assumes the existence of such a book is conceptually possible because he believes weak AI is possible
    • if you don’t think weak AI is possible, then you have no reason to believe in strong AI since it depends upon weak AI being true
  • Searle then claims that even though the Chinese Room can pass the Turing Test, you, the human acting as the CPU, obviously don’t understand Chinese
    • he claims that the important difference is that “strong AI” requires meaning, and “mental content”, while all that computers can do is manipulate symbols

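as a toy illustration of what “mere symbol manipulation” means, here is a hypothetical, absurdly oversimplified rule book implemented as a Python lookup table; the point is that the program can produce sensible-looking Chinese replies even though nothing in it understands Chinese:

    # a toy "rule book": pure symbol manipulation, no understanding anywhere
    # (a real rule book would be astronomically large; this just shows the idea)
    RULE_BOOK = {
        "你好": "你好！",                  # "hello" -> "hello!"
        "你会说中文吗": "会，说得很好。",  # "do you speak Chinese?" -> "yes, very well."
    }

    def chinese_room(input_symbols):
        # the "person in the room" just matches symbols against the book;
        # at no point does any meaning enter the process
        return RULE_BOOK.get(input_symbols, "请再说一遍。")  # "please say that again"

    print(chinese_room("你好"))  # 你好！
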
note the following about Searle’s argument

  • Searle seems to have a strong common-sense intuition that is analogous to this:

    • imagine a video game that simulates water so well you can’t tell the difference between a picture from the game and a picture of real water
    • but no matter how good the simulation, simulated water will never be wet, because the simulation lacks the physical properties necessary to create wetness
    • in the same way, Searle believes a simulated mind will never really be a mind
  • the claim of strong AI is that the software, the program, is where the “mental content” lies; Searle says that you, the person in the Chinese Room, are the CPU, but strong AI locates the mind in the running software, i.e. the rule book plus the CPU together

    • on this view, the mental content is in the rule book, not in you
    • this is known as the “systems reply”: yes, the person doesn’t understand Chinese, but the room as a whole does understand Chinese
  • it seems Searle’s argument can be paraphrased like this:

    • Premise 1: A mind must have “mental content”, and be able to manipulate symbols.
    • Premise 2: Computers can only manipulate symbols (and have no mental content).
    • Therefore: Computers cannot be minds.

    the problem with this argument is that it is not obvious whether the premises are true, or even what exactly they mean

    in particular, in the video Searle does not define “mental content”, and does not say why a computer cannot have “mental content”

among practicing AI researchers, the Chinese Room argument has had little, if any, practical impact

  • it’s a fun topic you might discuss at lunch
  • for example, if minds are just programs, then imagine “MindHub”, which is like GitHub but for minds
    • MindHub lets you upload copies of your mind
    • you can easily clone other minds
    • you could “branch” off someone’s mind and try out different things
  • another fun idea: if minds are programs, it doesn’t necessarily mean we have the source code for the program; but imagine if we did, e.g. what would God’s original source code for the human mind be like?
    • What language would it be in? C++? Python? COBOL?
    • What sort of source code comments, if any, would God put in the program?
      • would there be “FIX THIS!” or “TODO” comments everywhere?
      • would functions be commented-out, e.g. maybe the “do-not-leave-your-essay-until-the-night-before-its-due” function is commented-out?

Major Approaches and Inspirations in AI

  • logic: encode rationality as logical rules
  • mathematics: use ideas and techniques from math, like probability theory, to model rationality
  • computer science: computer science is the study of algorithms, and most people believe that rational agents use algorithms
  • economics: use notions like utility, and maximizing payoffs; in particular, the fields of decision theory and game theory are precisely about making rational decisions in the face of real-world complexities (which mathematics and logic often abstract away); see the sketch after this list
    • some of the most well-known early work in AI was done by researchers coming from economics: Herbert Simon and Allen Newell
  • neuroscience: study the human brain as a biological object, and try to simulate it
  • psychology: how do humans (and animals) act? this is at a higher level than neuroscience, and is where ideas like “belief”, “intention”, and “perception” come from; even the term “rational” is a psychological term
  • hardware engineering: design a computer to be intelligent by carefully studying the actual components used to make computers; robotics is part of this: some researchers believe that intelligence needs to be embodied in something like a robot to be truly rational
    • it’s interesting to note that a significant factor in the recent success of “deep” neural networks was figuring out ways to increase the performance of basic neural-network algorithms using things like GPUs and other special-purpose hardware
  • linguistics: how are language and thought related? language is a very important part of human rationality, and the careful study of language has brought a lot of insight into understanding thought
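
to give a flavour of the decision-theoretic approach mentioned above, here is a minimal sketch of choosing the action with the highest expected utility; the actions, probabilities, and utilities are made up:

    # decision-theoretic rationality: pick the action whose expected
    # utility, summed over its possible outcomes, is highest
    actions = {
        # action: list of (probability, utility) pairs over its outcomes
        "take-umbrella":  [(0.3, 8), (0.7, 6)],    # rain, no rain
        "leave-umbrella": [(0.3, -10), (0.7, 10)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # take-umbrella (6.6 vs 4.0 for leave-umbrella)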

we can’t cover all these areas of influence, and so we will focus on the areas closest to computer science

A Brief History of AI

  • 1943: McCulloch and Pitts propose the first “artificial neuron”
  • 1956: Dartmouth 2-month summer workshop, where many of the early pioneers of AI met; their ideas and approaches came to dominate AI for decades afterwards
  • 1958: John McCarthy developed the LISP programming language, to help implement AI-related programs that needed to do symbolic processing (which was very difficult to do in other languages of the day)
  • 1960s: some success in microworlds, i.e. problem-solving in carefully limited domains
  • 1966 - 1973 (“a dose of reality”): initial optimism about the pace of AI decreased when, for example, the intractability of many of the problems researchers were trying to solve was understood (thanks to computer science!); many early AI programs were doing simple syntactic manipulations, and clearly had no real understanding of their domains
    • also, in 1969 Marvin Minsky and Seymour Papert published the book Perceptrons, in which they famously proved that the simple single-layer neural networks popular at the time cannot even represent some extremely simple functions, such as XOR (see the sketch after this timeline)
  • 1970s (“knowledge-based systems”): many interesting approaches to storing and processing “knowledge” were explored, including expert systems (which were once thought to be the crowning success of AI)
  • 1986 onwards: neural networks regained popularity thanks to new, and better, methods for learning with them; work has progressed steadily since then, and today so-called “deep neural nets” are considered by many to be the pinnacle of AI achievement, able to solve useful hard problems like image recognition
  • 2000 onwards: large data sets started to become available (thanks to the computerization of most data, including the web), and so this inspired a lot more interest in machine-learning techniques
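
as a small illustration of the Minsky and Papert result mentioned above, here is a sketch (illustrative code, not theirs) that brute-force searches a grid of weights and finds no single perceptron that computes XOR; in fact none exists, because XOR is not linearly separable:

    # a single perceptron computes step(w1*x1 + w2*x2 + b); no choice of
    # w1, w2, b gives XOR, and a brute-force search over a grid confirms it
    import itertools

    def perceptron(w1, w2, b, x1, x2):
        return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

    XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    grid = [x / 4 for x in range(-8, 9)]  # weights and bias from -2.0 to 2.0
    solutions = [
        (w1, w2, b)
        for w1, w2, b in itertools.product(grid, repeat=3)
        if all(perceptron(w1, w2, b, x1, x2) == y for (x1, x2), y in XOR.items())
    ]
    print(solutions)  # [] -- no single perceptron computes XOR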