Final Examination CS540: Introduction to Artificial Intelligence

December 2008

LAST NAME: ____________    FIRST NAME: ____________

Problem    Score    Max Score
1                   15
2                   15
3                   10
4                   20
5                   10
6                   20
7                   10
Total               100

Question 1. [15] Probabilistic Reasoning

Consider the age group of women over forty. 1% of women who are screened have breast cancer. 80% of women who really do have breast cancer will have a positive mammography (meaning the test indicates she has cancer). 9.6% of women who do not actually have breast cancer will have a positive mammography (meaning that they are incorrectly diagnosed with cancer). Define two Boolean random variables, M and C: M means a positive mammography test and ¬M means a negative test; C means the woman has breast cancer and ¬C means she does not.

(a) [5] If a woman in this age group gets a positive mammography, what is the probability that she actually has breast cancer? Show the key steps.

(b) [2] True or False: The "prior" probability, indicating the percentage of women with breast cancer, is not needed to compute the "posterior" probability of a woman having breast cancer given a positive mammography.

(c) [6] Say a woman who gets a positive mammography test, M1, goes back and gets a second mammography, M2, which is also positive. Use the Naive Bayes assumption to compute the probability that she has breast cancer given the results from these two tests.

(d) [2] True or False: P(C | M1, M2) can be calculated in general given only P(C) and P(M1, M2 | C).
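For reference, parts (a) and (c) can be checked numerically. A minimal Python sketch (not part of the original exam), assuming the standard Bayes-rule reading of the quantities above:

    # Bayes' rule with the numbers from the problem statement.
    p_c = 0.01               # P(C): prior probability of breast cancer
    p_m_c = 0.80             # P(M | C): true positive rate
    p_m_not_c = 0.096        # P(M | ~C): false positive rate

    # (a) P(C | M) = P(M | C) P(C) / P(M)
    p_m = p_m_c * p_c + p_m_not_c * (1 - p_c)
    print(p_m_c * p_c / p_m)                 # ~0.0776

    # (c) Naive Bayes: P(M1, M2 | C) = P(M | C)^2, and likewise for ~C.
    num = p_m_c**2 * p_c
    den = num + p_m_not_c**2 * (1 - p_c)
    print(num / den)                         # ~0.4123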

Question 2. [15] Bayesian Networks

Consider the following Bayesian network containing four Boolean random variables, with edges A → B, A → C, B → D, and C → D, and conditional probability tables:

P(A) = 0.6
P(B | A) = 0.3    P(B | ¬A) = 0.8
P(C | A) = 0.9    P(C | ¬A) = 0.4
P(D | B, C) = 0.1    P(D | ¬B, C) = 0.5    P(D | B, ¬C) = 0.2    P(D | ¬B, ¬C) = 0.7

(a) [4] How many independent values are required to store the full joint probability distribution for this problem?

(b) [2] True or False: From the above network it is possible to compute any joint probability of the four variables.

(c) [2] True or False: Based on the topology of the network alone, P(A, C, D) = P(A)P(C)P(D).

(d) [2] True or False: Based on the topology of the network alone, P(D | C) = P(D | A, C).

(e) [5] Compute P(A, ¬B, C, ¬D), where A means A = true, ¬D means D = false, etc.
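As a cross-check, the network topology factorizes the joint distribution as P(A, B, C, D) = P(A) P(B | A) P(C | A) P(D | B, C). A minimal Python sketch (not part of the original exam), assuming the CPT reconstruction above:

    # Chain-rule factorization implied by the network topology.
    p_a = 0.6
    p_b = {True: 0.3, False: 0.8}    # P(B | A), P(B | ~A)
    p_c = {True: 0.9, False: 0.4}    # P(C | A), P(C | ~A)
    p_d = {(True, True): 0.1, (False, True): 0.5,
           (True, False): 0.2, (False, False): 0.7}   # P(D | B, C)

    def joint(a, b, c, d):
        """P(A=a, B=b, C=c, D=d), complementing each factor as needed."""
        pa = p_a if a else 1 - p_a
        pb = p_b[a] if b else 1 - p_b[a]
        pc = p_c[a] if c else 1 - p_c[a]
        pd = p_d[(b, c)] if d else 1 - p_d[(b, c)]
        return pa * pb * pc * pd

    # Part (e): P(A, ~B, C, ~D) = 0.6 * 0.7 * 0.9 * 0.5
    print(joint(True, False, True, False))   # 0.189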

Question 3. [10] Support Vector Machines

We want to construct a Support Vector Machine (SVM) that computes the XOR function. Instead of input and output values of 1 and 0, we'll use values 1 and -1, respectively. So, for example, if the input is [x1 = -1, x2 = 1] we want the output to be 1.

(a) [5] Using the four possible input vectors and their associated outputs, can a LINEAR SVM be constructed to correctly compute XOR? If it can, show how by drawing the four possible input value pairs in the 2D input space (x1, x2) and the separator (i.e., decision boundary) computed by the SVM. If it cannot, explain or show why not.

(b) [5] Suppose we re-express the input data using the computed features [x1, f(x1, x2)] instead of the original [x1, x2] pair of values, where f(x1, x2) is some function of both x1 and x2. Can a LINEAR SVM be constructed to correctly compute XOR using the computed features rather than the raw features? If it can, show how by defining the function f(x1, x2) and drawing the four possible input value pairs in the 2D feature space (x1, f(x1, x2)), along with the separator (i.e., decision boundary) computed by the SVM. If it cannot, explain or show why not.
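For part (b), one commonly used computed feature is the product f(x1, x2) = x1 * x2: with the 1/-1 encoding, XOR(x1, x2) = -x1 * x2, so the horizontal line f = 0 separates the two classes in the (x1, f) plane. A minimal Python check (not part of the original exam):

    # With inputs and outputs encoded as +1/-1, XOR(x1, x2) = -x1 * x2.
    for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
        f = x1 * x2              # the computed feature
        print((x1, x2), "f =", f, " XOR =", -f)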

Question 4. [20] Hidden Markov Models

Andy is a three-month-old baby. He can be happy, hungry, or have a wet diaper. Initially, when he wakes up from his nap at 1pm, he is happy. If he is happy, there is a 50% chance that he will remain happy one hour later, a 25% chance to be hungry by then, and a 25% chance to have a wet diaper. Similarly, if he is hungry, one hour later he will be happy with 25% chance, hungry with 25% chance, and have a wet diaper with 50% chance. If he has a wet diaper, one hour later he will be happy with 50% chance, hungry with 25% chance, and have a wet diaper with 25% chance. When he is happy, he smiles 75% of the time and cries 25% of the time; when he is hungry, he smiles 25% and cries 75%; when he has a wet diaper, he smiles 50% and cries 50%.

(a) [5] Draw the HMM that corresponds to the above story. Clearly mark the transition probabilities and output probabilities.

(b) [5] The nanny left a note: 1pm: smile. 2pm: cry. 3pm: smile. What is the probability that this particular observed sequence happens?
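Part (b) is a forward-algorithm computation. A minimal Python sketch (not part of the original exam), assuming the note's three symbols are the observations at 1pm, 2pm, and 3pm, and that Andy is happy with probability 1 at 1pm as stated:

    states = ["happy", "hungry", "wet"]
    init = {"happy": 1.0, "hungry": 0.0, "wet": 0.0}
    trans = {"happy":  {"happy": 0.50, "hungry": 0.25, "wet": 0.25},
             "hungry": {"happy": 0.25, "hungry": 0.25, "wet": 0.50},
             "wet":    {"happy": 0.50, "hungry": 0.25, "wet": 0.25}}
    emit = {"happy":  {"smile": 0.75, "cry": 0.25},
            "hungry": {"smile": 0.25, "cry": 0.75},
            "wet":    {"smile": 0.50, "cry": 0.50}}

    obs = ["smile", "cry", "smile"]
    alpha = {s: init[s] * emit[s][obs[0]] for s in states}   # 1pm
    for o in obs[1:]:                                        # 2pm, 3pm
        alpha = {s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][o]
                 for s in states}
    print(sum(alpha.values()))   # P(smile, cry, smile) = 0.17578125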

(c) [5] What is the most likely hidden sequence (in terms of happy, hungry, or wet diaper) for the note in (b)?

(d) [5] (This question is not related to the above.) Describe the McGurk effect ("hear with your eyes") in one sentence. In another sentence, discuss its implications for automatic speech recognition.
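Part (c) is the corresponding Viterbi computation. Continuing the sketch above (same states, init, trans, emit, and obs):

    # Track (probability, path) per state; ties are broken arbitrarily.
    v = {s: (init[s] * emit[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        v = {s: max(((v[r][0] * trans[r][s] * emit[s][o], v[r][1] + [s])
                     for r in states), key=lambda t: t[0])
             for s in states}
    print(max(v.values(), key=lambda t: t[0]))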

Question 5. [10] Clustering

K-means clustering tries to minimize the distortion Σi (xi − ci)², where ci is the center (mean) of the cluster that point xi is in.

(a) [4] Given a dataset with five points {1, 4, 6, 7, 8} and K = 2 clusters whose initial centers are c1 = 0, c2 = 9, run K-means clustering by hand. Show i) the final cluster centers, ii) the points in the two clusters respectively, and iii) the distortion.

(b) [4] Repeat (a), but with initial centers at c1 = 0, c2 = 6.

(c) [2] Briefly discuss what property of K-means parts (a) and (b) show.
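A minimal Python sketch of Lloyd's algorithm (not part of the original exam), for checking parts (a) and (b) by hand:

    def kmeans(points, centers):
        while True:
            # Assign each point to its nearest center.
            clusters = [[] for _ in centers]
            for x in points:
                i = min(range(len(centers)),
                        key=lambda j: (x - centers[j]) ** 2)
                clusters[i].append(x)
            # Recompute centers as cluster means (empty clusters stay put).
            new = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
            if new == centers:
                distortion = sum((x - centers[i]) ** 2
                                 for i, c in enumerate(clusters) for x in c)
                return centers, clusters, distortion
            centers = new

    print(kmeans([1, 4, 6, 7, 8], [0.0, 9.0]))   # part (a)
    print(kmeans([1, 4, 6, 7, 8], [0.0, 6.0]))   # part (b)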

Question 6. [20] Game Theory

Consider the game in matrix normal form below:

         x    y    z
    a    0   -1    1
    b    1    0   -1
    c   -1    1    0

In this question you will derive the optimal mixed strategy for both players. We will refer to the players as the XYZ player and the ABC player. The numbers are payoffs from the ABC player's perspective. For all the questions below, you do NOT need to formally prove your answer.

(a) [4] Say player XYZ plays strategies x, y, and z with probabilities ¼, ½, ¼. Give the best pure strategy and expected payoff for the ABC player.

(b) [4] Say player XYZ plays strategies x, y, and z with probabilities 0, p, 1-p. Give the best pure strategy and expected payoff for the ABC player. This shows that if player XYZ mixes between only two strategies, then player ABC has an advantage.
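For parts (a) and (b), the expected payoff of each ABC pure strategy against a given XYZ mix can be tabulated directly. A minimal Python sketch (not part of the original exam):

    payoff = {"a": {"x": 0, "y": -1, "z": 1},
              "b": {"x": 1, "y": 0, "z": -1},
              "c": {"x": -1, "y": 1, "z": 0}}

    def best_response(mix):
        """Expected payoff per ABC pure strategy against an XYZ mix."""
        ev = {row: sum(mix[col] * payoff[row][col] for col in mix)
              for row in payoff}
        return ev, max(ev, key=ev.get)

    print(best_response({"x": 0.25, "y": 0.5, "z": 0.25}))   # part (a)
    p = 0.5   # part (b) asks about general p; try several values
    print(best_response({"x": 0.0, "y": p, "z": 1 - p}))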

(c) [4] Say player XYZ plays strategies x, y, and z with probabilities px, py, pz. What will the payoff for player ABC be, given that ABC plays a pure strategy?

(d) [4] What is the best strategy for player XYZ, such that XYZ can even tell ABC the values of px, py, pz and ABC still cannot take advantage of this knowledge?

(e) [3] Give a mixed-strategy Nash equilibrium of this game.

(f) [1] Name an instance of this game in real life.
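Continuing the sketch above, a quick check of the intuition behind parts (c) through (e): against the uniform mix, every ABC pure strategy earns the same expected payoff, so knowing the mix gives ABC no advantage:

    ev, _ = best_response({"x": 1/3, "y": 1/3, "z": 1/3})
    print(ev)   # all three expected payoffs are 0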

Question 7. [10] Neural Networks

The following feedforward neural network takes three binary (0 or 1) inputs and produces two binary (0 or 1) outputs. Each node uses a Linear Threshold Unit as its activation function with the associated threshold value. Call an input vector that has exactly n inputs equal to "1" an input vector with "count" n, for n = 0, 1, 2, or 3. Note that for this particular neural network, all input vectors with the same "count" have the same output at both output units. In other words, for a given "count," the output of the network is the same no matter which particular input units are the ones with input equal to "1."

(a) [5] For a given input "count" of n, describe what is computed as the output of each hidden unit. Give your answer in terms of n; do not simply give a literal translation of each individual calculation performed.

(b) [5] For a given input "count" of n, give an interpretation of the computed outputs O1 and O2.
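For reference, a minimal Python sketch of a Linear Threshold Unit of the kind the question describes; the weights and threshold below are hypothetical, since the exam's network diagram is not reproduced in this transcription:

    def ltu(inputs, weights, threshold):
        """Output 1 if the weighted sum reaches the threshold, else 0."""
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # Example: unit weights and threshold 2 make the unit fire exactly
    # when the input "count" n is at least 2.
    for x in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
        print(x, ltu(x, (1, 1, 1), 2))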