Student Details: Name: SOLUTIONS. CEC login: ________

Instructions: You have roughly 1 minute per point, so schedule your time accordingly. There is only one correct answer per question. Good luck!
Question 1. Searching

Circle the correct answer for each question (there is exactly one):

a) [2] What is the total number of nodes that iterative deepening visits? (If a node is visited multiple times, count it multiple times.) Assume the tree has a branching factor of b, depth d, and the goal is at depth g.
a. O(bd)
b. O(b^d)
c. O(bg)
d. O(b^(g+1))

Solution: d. This answer could be O(b^g), but in class we derived it loosely as O(b^(g+1)). By the definition of O-notation, O(b^g) is O(b^(g+1)) (but not the other way around).

b) [2] Which of the following statements are true about breadth-first graph search? Assume the tree has a branching factor of b, depth d, and the goal is at depth g.
a. Breadth-first graph search is complete on problems with finite search graphs.
b. Breadth-first graph search uses O(b^g) space.
c. Both (a) and (b) are true.
d. Both (a) and (b) are false.

Solution: c. BFS searches the complete state space, so on finite graphs, using graph search so as not to re-explore nodes, it will explore all nodes and therefore find a solution; and its fringe can hold all O(b^g) nodes at the goal depth.

c) [4] Consider A* search. g is the cumulative path cost of a node n, h is a lower bound on the cost of the shortest path to a goal state, and n' is the parent of n. Assume all costs are positive. Note: enqueuing = putting a node onto the fringe; dequeuing = removing, then expanding, a node from the fringe. Which of the following search algorithms are guaranteed to be optimal?
i. A*, but apply the goal test before enqueuing nodes rather than after dequeuing.
ii. A*, but prioritize n by g(n) only.
iii. A*, but prioritize n by h(n) only.
iv. A*, but prioritize n by g(n) + h(n').
v. A*, but prioritize n by g(n') + h(n).
a. All but i.
b. ii. and v.
c. iv. and ii.
d. iv. and iii.
e. Only iv.
f. Only ii.

Solution: b. ii. is equivalent to A* with the heuristic h = 0 for all nodes. v. is equivalent to A* with the heuristic h'(n) = h(n) - step(n', n), since g(n') + h(n) = g(n) + (h(n) - step(n', n)); so if h is admissible, so is h'. iv. is wrong because it uses the true cost to node n plus the heuristic from the parent of n; this could overestimate the total path length through node n, and therefore never expand a node on the optimal path. [IS THIS TRUE?]

We apply a variety of queue-based graph-search algorithms to the state graph on the right (the graph figure is not reproduced in this transcription). Initially the fringe contains the start state A. When there are multiple options, nodes are enqueued (put onto the fringe) in alphabetical order.

d) [2] In the BFS algorithm, we perform the goal test when we enqueue a node. How many nodes have been dequeued when we find the solution?
a. 2
b. 3
c. 4
d. 5
e. 6

Solution: d. In this case, all of the nodes A, B, C, D, E will be dequeued.

e) [2] In the DFS algorithm, we perform the goal test when we enqueue a node. What is the sequence of dequeued nodes?
a. A, B, E, G
b. A, B, C, D, E
c. A, B, E
d. A, D, E
e. None of the above

Solution: d. Dequeue A, add B, C, D. Dequeue D, add A, C, E. Dequeue E, add B, C, D, G. As we add G, we perform the goal test. So the dequeued nodes are A, D, E.

f) [2] In the UCS algorithm, we perform the goal test when we dequeue (expand) a node. How many nodes have been dequeued when we find the solution? (Do not count the dequeuing of the goal state itself.)
a. 3
b. 4
c. 5
d. 6
e. None of the above

Solution: b. Expand A to get fringe: (B,1), (C,3), (D,7). Expand B to get fringe: (E,2), (C,3), (D,7). Expand E to get fringe: (C,3), (G,4), (D,7). Expand C to get fringe: (G,4), (D,6). Dequeue G and it passes the goal test. (A runnable sketch of this trace appears after part h.)

[Table: the heuristic values H1, H2, H3 at each of the nodes A, B, C, D, E, G; the numeric entries did not survive transcription. The solution text quotes the values it needs, e.g. H1(A) = 2, H1(B) = 2.]

g) [2] The above table shows 3 heuristics, H1, H2, H3, and their values at each node. (For example, H1(A) = 2, H1(B) = 2, ...) Which of these heuristics are admissible? (The graph is copied again below for your convenience.)
a. H1 and H2.
b. H2 only
c. H3 only
d. All are admissible
e. None are admissible

Solution: a. H3 is not admissible because H3(B) = 4 but B's distance from the goal is 3.

h) [2] Which of these heuristics are consistent?
a. H1 and H2
b. H2 only
c. H3 only
d. All are consistent
e. None are consistent

Solution: b. H1 and H3 are not consistent: H1(B) - H1(E) = 2 and H3(B) - H3(E) = 3, but consistency requires the drop in heuristic value across an edge to be at most the edge cost, and cost(B, E) = 1.
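Since the graph figure was lost, the following sketch replays uniform-cost graph search with the goal test at dequeue time. The adjacency list is an assumed reconstruction containing only the edges and costs implied by the UCS trace in part f.

```python
import heapq

# Assumed reconstruction: only the edges/costs implied by the UCS trace in
# part (f); the original graph figure is not reproduced here.
GRAPH = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('E', 1)],
    'C': [('D', 3)],
    'D': [],
    'E': [('G', 2)],
    'G': [],
}

def ucs(start, goal):
    """Uniform-cost graph search; the goal test happens at dequeue time."""
    fringe = [(0, start)]   # priority queue ordered by path cost g
    closed = set()
    expanded = []           # dequeue (expansion) order, to compare with the trace
    while fringe:
        g, node = heapq.heappop(fringe)
        if node in closed:
            continue        # stale duplicate fringe entry
        if node == goal:
            return expanded, g
        closed.add(node)
        expanded.append(node)
        for child, cost in GRAPH[node]:
            if child not in closed:
                heapq.heappush(fringe, (g + cost, child))
    return expanded, None

print(ucs('A', 'G'))  # (['A', 'B', 'E', 'C'], 4): four dequeues before the goal
```

Running it reproduces the four expansions A, B, E, C before the goal is dequeued at cost 4, matching answer b.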
Question 2. CSPs

Circle the correct answer for each question (there is exactly one):

a) [2] Which of the following statements are true about the runtime for CSPs?
a. Tree-structured CSPs may be solved in time that is linear in the number of variables.
b. Arc-consistency may be checked in time that is polynomial in the number of variables.
c. Both (a) and (b) are true.
d. Both (a) and (b) are false.

Solution: c. Tree-structured CSPs can be solved in time O(nd^2); arc-consistency can be checked in O(n^2 d^2), although in class we showed an algorithm that is O(n^2 d^3).

b) [2] When solving a CSP by backtracking, which of the following are good heuristics?
a. Pick the value that is least likely to fail.
b. Pick the variable that is least constrained.
c. Both (a) and (b) are good heuristics.
d. Both (a) and (b) are bad heuristics.

Solution: a. Picking the value least likely to fail (the least constraining value) is a good heuristic; for variables you should pick the most constrained variable, so (b) is bad.

c) [2] Suppose you have a highly efficient solver for tree-structured CSPs. Given a CSP with the following binary constraints, for which re-formulations will you find a fast and correct solution?

[Constraint graph over the variables A, B, C, D, E, F, G, H, I; the figure did not survive transcription.]

a. Set the value of E, solve the remaining CSP, and try another value for E if no solution is found.
b. Replace variables D and E with a variable DE with domain {(d, e) : d ∈ D and e ∈ E}, then solve.
c. Ignore either variable D or E, solve, then pick a consistent value.
d. Both (a) and (b).
e. Both (a) and (c).

Solution: d. These both create a tree-like structure that can be solved quickly.
d) [2] Which of the following statements are true?
a. Additional constraints always make CSPs easier to solve.
b. CSP solvers incorporating randomness are always a bad idea.
c. Both (a) and (b) are true.
d. Both (a) and (b) are false.

Solution: [WE NEVER DISCUSSED RANDOMNESS IN CLASS, SO EVERYONE GOT CREDIT FOR ALL ANSWERS TO THIS QUESTION]

e) [2] Which of the following are true about CSPs?
a. If a CSP is arc-consistent, it can be solved without backtracking.
b. A CSP with only binary constraints can be solved in time polynomial in n (the number of variables) and d (the number of options per variable).
c. Both (a) and (b) are true.
d. Both (a) and (b) are false.

Solution: d. (a) is not true: every arc could be consistent, yet no solution exists. (b) is not true: a general CSP with binary constraints is NP-hard. (A minimal arc-consistency sketch follows.)
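To make arc-consistency concrete, here is a minimal AC-3 sketch. It is not code from the course; the queue-based structure is the standard algorithm, and the example variables and constraints are made up.

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc-consistency. `domains` maps variable -> set of values;
    `constraints` maps each directed arc (X, Y) -> predicate(x, y)."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        # Remove values of x that have no supporting value in y's domain.
        pruned = {vx for vx in domains[x]
                  if not any(constraints[(x, y)](vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False    # domain wipe-out: the CSP has no solution
            # Re-check every other arc pointing into x.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Made-up example: X < Y < Z over {1, 2, 3}.
doms = {'X': {1, 2, 3}, 'Y': {1, 2, 3}, 'Z': {1, 2, 3}}
cons = {('X', 'Y'): lambda a, b: a < b, ('Y', 'X'): lambda a, b: a > b,
        ('Y', 'Z'): lambda a, b: a < b, ('Z', 'Y'): lambda a, b: a > b}
print(ac3(doms, cons), doms)    # True {'X': {1}, 'Y': {2}, 'Z': {3}}
```

Note, per (e)(a), that AC-3 succeeding does not by itself guarantee a backtrack-free solution in general; it only prunes domains.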
Question 3. Adversarial Search

Circle the correct answer for each question (there is exactly one):

a) [2] Which statement is true about reflex agents?
a. Reflex agents can be learned with Q-learning.
b. You can design reflex agents that play optimally.
c. Both a) and b) are true.
d. Both a) and b) are false.

Solution: c. Q-learning defines optimal moves in every state, and therefore defines a table for a reflex agent to follow. Q-learning is one way to define reflex agents that play optimally.

b) [2] Which statement is true about multi-player games?
a. Each multi-player game is also a search problem.
b. Multi-player games are easier (in complexity) than general search problems.
c. Both a) and b) are true.
d. Both a) and b) are false.

Solution: c. Easier because things like alpha-beta pruning let you explore less of the search tree.

c) [2] When doing alpha-beta pruning on a game tree visited left to right,
a. the right-most branch will always be pruned.
b. the left-most branch will always be pruned.
c. Both a) and b) are true.
d. Both a) and b) are false.

Solution: d. The left-most branch is NEVER pruned, but otherwise there are no guarantees.

d) [2] When applying alpha-beta pruning to minimax game trees:
a. Pruning nodes does not change the value of the root to the max player.
b. Alpha-beta pruning can prune different numbers of nodes if the children of the root node are reordered.
c. Both a) and b) are true.
d. Both a) and b) are false.

Solution: c. Pruning never changes the minimax value at the root, while the amount of pruning depends on the order in which children are visited; a sketch demonstrating both appears below.
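A minimal sketch supporting both claims in d), using a made-up tree (the numbers are illustrative, not from the exam): pruning leaves the root value unchanged, while reordering the root's children changes how many leaves are examined.

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    """Minimax with alpha-beta pruning over a nested-list game tree.
    Leaves are numbers; internal nodes are lists of children.
    Returns (minimax value, number of leaves evaluated)."""
    if isinstance(node, (int, float)):
        return node, 1
    best = float('-inf') if maximizing else float('inf')
    leaves = 0
    for child in node:
        value, n = alphabeta(child, alpha, beta, not maximizing)
        leaves += n
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:
            break               # prune the remaining siblings
    return best, leaves

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]  # max root over three min nodes
print(alphabeta(tree))                      # (3, 7): value 3, 7 of 9 leaves seen
print(alphabeta([[2, 4, 6], [3, 12, 8], [14, 5, 2]]))  # (3, 9): same value, no pruning
```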
e) [2] Normally, alpha-beta pruning is not used with expectimax. Which one of the following conditions allows you to perform pruning with expectimax?
a. All values are positive.
b. Children of expectation nodes have values within a finite pre-specified range.
c. All transition probabilities are within a finite pre-specified range.
d. The probabilities sum to one, and you only ever prune the last child.

Solution: b. The finite range allows lower and upper bounds to be computed and used as in alpha-beta pruning.

f) [2] You have a game that you play sometimes against a talented opponent and sometimes against a random opponent, so you have implemented both Minimax and Expectimax. You discover that your evaluation function has a bug: instead of returning the actual value of a terminal state, it returns the square root of the value of the terminal state. All terminal states have positive values. Which of the following statements is true?
a. The resulting policy might be sub-optimal for Minimax.
b. The resulting policy might be sub-optimal for Expectimax.
c. Both a) and b) are true.
d. Both a) and b) are false.

Solution: b. While minimax just looks at the sorted order, which doesn't change if you take the square root of positive values, expectimax needs to compute the average, which is affected by the square root. (A small demonstration follows.)
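A toy demonstration of f), using made-up positive terminal values: taking square roots preserves the ordering that minimax compares, but changes the averages that expectimax compares.

```python
import math

def minimax_choice(options):
    """Max player's pick when each option leads to a min node over leaf values."""
    return max(range(len(options)), key=lambda i: min(options[i]))

def expectimax_choice(options):
    """Max player's pick when each option leads to a uniform chance node."""
    return max(range(len(options)), key=lambda i: sum(options[i]) / len(options[i]))

options = [[36, 1], [16, 16]]                # made-up terminal values
buggy = [[math.sqrt(v) for v in leaf] for leaf in options]

print(minimax_choice(options), minimax_choice(buggy))        # 1 1: unchanged
print(expectimax_choice(options), expectimax_choice(buggy))  # 0 1: the policy flips
```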
Consider the following game tree (which is evaluated left to right):

[Game tree figure: a max root with a LEFT and a RIGHT branch, min nodes below the root, and max nodes above the leaves; the leaf values did not survive transcription.]

g) [2] What is the minimax value at the root node of this tree?

Solution: 4

h) [4] How many leaf nodes would alpha-beta pruning prune?

Solution: 3

i) [4] Suppose player #2 (formerly min) switches to a new strategy and picks the left action with probability 1/4 and the right action with probability 3/4. What is the maximum expected utility of player #1?

Solution: 6. Left side: Max chooses options 4, 8, so the expected value is 6. Right side: Max chooses 2, 8, so the expected value is 5. Max chooses the better of these to get 6.
Question 4. MDP + RL

Circle the correct answer for each question (there is exactly one):

a) [1] A rational agent (who uses dollar amounts as utility) prefers to be given an envelope containing $X rather than one containing either $0 or $10 (with equal probability). What is the smallest $X such that the agent may be acting in accordance with the principle of maximum expected utility?
a. There is no such minimum $X.
b. $0.
c. $5.
d. $10.

Solution: c. Money is not always a good model of the utility of choices for people, but this problem explicitly states that dollar amounts are the utility function, so the envelope must be worth at least the gamble's expected value, (1/2)($0) + (1/2)($10) = $5.

b) [2] Which of the following are true about Markov Decision Processes?
a. If the only difference between two MDPs is the value of the discount factor, then they must have the same optimal policy.
b. Rational policies can be learned before values converge.
c. Both (a) and (b) are true.
d. Neither (a) nor (b) is true.

Solution: b. This is why we use policy iteration: the policy may stay constant while the value is still changing. (a) is false: if the discount factor is small, the MDP may avoid long paths whose (larger) payoff comes only at the end. [CHECK THIS ANSWER]

c) [2] Which of the following are true about Q-learning?
a. Q-learning will only learn the optimal Q-values if actions are eventually selected according to the optimal policy.
b. In a deterministic MDP (i.e., one in which each state/action pair leads to a single deterministic next state), the Q-learning update with a learning rate of α = 1 will correctly learn the optimal Q-values.
c. Both (a) and (b) are true.
d. Neither (a) nor (b) is true.

Solution: b. (b) is true: the learning rate is only there because we are trying to approximate a summation with a single sample. In a deterministic MDP where s' is always the state we reach after applying action a in state s, the update rule is Q(s,a) = R(s,a,s') + γ max_{a'} Q(s',a'), which is exactly the update we make. (a) is false because any strategy that visits all states and actions will eventually allow Q-learning to converge to the optimal Q-values. (A small sketch follows.)
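A tiny sketch of claim (b), on a made-up deterministic chain MDP (not the soccer MDP of question d below): with α = 1 each update is an exact Bellman backup, so a few sweeps over all state-action pairs recover Q* exactly.

```python
GAMMA = 0.9          # made-up discount for this illustration
ACTIONS = ['right']  # a single action keeps the chain deterministic and tiny
TERMINAL = 3

def step(s, a):
    """Deterministic dynamics: 'right' moves one state right; reward 1 on
    entering the terminal state, 0 otherwise."""
    s2 = s + 1
    return s2, (1.0 if s2 == TERMINAL else 0.0)

Q = {(s, a): 0.0 for s in range(TERMINAL) for a in ACTIONS}
for sweep in range(3):                  # sweep all (s, a) pairs a few times
    for s in range(TERMINAL):
        for a in ACTIONS:
            s2, r = step(s, a)
            future = 0.0 if s2 == TERMINAL else max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] = r + GAMMA * future  # alpha = 1: overwrite with the sample

print(Q)  # {(0, 'right'): 0.81..., (1, 'right'): 0.9, (2, 'right'): 1.0} = exact Q*
```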
[The MDP figure did not survive transcription; its dynamics are fully specified below.]

d) The above MDP has states 1, 2, 3, 4, G, and M, where G and M are terminal states. The reward for transitioning from any state to G is 1 (you scored a goal!). The reward for transitioning from any state to M is 0. All other rewards are zero. There is no discounting (so γ = 1). The transition distributions are:

From state i, if you shoot (S), you have a probability i/4 of scoring, so T(i, S, G) = i/4, and otherwise you miss: T(i, S, M) = 1 - i/4.

If you dribble (D) from state i, you have a 3/4 probability of reaching state i+1 and a 1/4 probability of losing the ball and going to state M (unless you are in state 4, where the goalie stops you every time). So T(i, D, i+1) = 3/4 and T(i, D, M) = 1/4 for i = 1, 2, 3, and T(4, D, M) = 1.

a. [3] Let π be the policy that always shoots. What is V^π(1)?

Solution: 1/4. V^π(1) = T(1,S,G) · R(1,S,G) + T(1,S,M) · R(1,S,M) = (1/4)(1) + (3/4)(0) = 1/4.

b. [3] Define Q* to be the Q-values under the optimal policy; what is Q*(3,D)?

Solution: 3/4. Q*(3,D) is the value of dribbling from state 3, so the actions will be dribble, then shoot. Rewards for missing are zero, so those terms can be dropped as soon as we see them. Thus Q*(3,D) = T(3,D,4) · [R(3,D,4) + V*(4)]. Plugging in values: T(3,D,4) = 3/4 and V*(4) = T(4,S,G) · R(4,S,G) = 1 · 1 = 1, so Q*(3,D) = 3/4.

c. [3] If you use value iteration to compute the values V* for each node, what is the sequence of values after the first three iterations for V*(1)? (Your answer should be a set of three values, such as 1/12, 1/3, 1/2; you may have to compute the value-iteration values for all states to compute these.)

Solution: 1/4, 6/16 (= 3/8), 27/64. The box below is just for your work; the only thing that will be graded is what you put here ^^^^.
Work:

Iteration | V*(1) | V*(2) | V*(3) | V*(4)
1 | 1/4 | 2/4 | 3/4 | 4/4
2 | 6/16 | 9/16 | 3/4 | 4/4
3 | 27/64 | 9/16 | 3/4 | 4/4

(A value-iteration sketch that reproduces this table appears after Question 5.)

Question 5. Probabilities

We continue the same soccer problem, but now imagine that sometimes there is a defender D between the agent A and the goal. A has no way of detecting whether D is present, but does know the statistics of the environment: D is present 2/3 of the time. D does not affect shots at all, only dribbling. When D is absent, the chance of dribbling forward successfully is 3/4 (as it was in the problem above); when D is present, the chance of dribbling forward successfully is 1/4. In either case, if dribbling forward fails, the game goes to the M (missed) state.

a. [2] If the defender is present, what is the optimal action from state 1?

Solution: S (shoot).

b. [4] Suppose that A dribbles twice successfully from state 1 to state 3, then shoots and scores. Given this observation, what is the probability that the defender D was present?

Solution: 2/11. We can use Bayes' rule, where d is a random variable denoting the presence of the defender, and e is the evidence that A dribbled twice and then scored. We want to compute P(d | e). By Bayes' rule,

P(d | e) = P(e | d) P(d) / P(e)

Building up these pieces, we have:

P(e) = P(e | d) P(d) + P(e | ~d) P(~d)
P(e | d) = probability of our observations given the defender = (1/4)(1/4)(3/4) = 3/64
P(e | ~d) = probability of our observations without the defender = (3/4)(3/4)(3/4) = 27/64
P(e) = (3/64)(2/3) + (27/64)(1/3) = 2/64 + 9/64 = 11/64
P(d | e) = (3/64)(2/3) / (11/64) = 2/11
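As a check on question 4(c), here is a small value-iteration sketch for the soccer MDP in exact rational arithmetic. It is a verification aid, not part of the original exam, and it reproduces the table above (with 2/4 and 6/16 normalized to 1/2 and 3/8).

```python
from fractions import Fraction as F

STATES = [1, 2, 3, 4]   # gamma = 1, so no discount factor appears below

def backup(V):
    """One value-iteration sweep: V(i) = max(shoot, dribble)."""
    new = {}
    for i in STATES:
        shoot = F(i, 4)                                  # T(i,S,G) * 1 = i/4
        dribble = F(3, 4) * V[i + 1] if i < 4 else F(0)  # T(4,D,M) = 1, reward 0
        new[i] = max(shoot, dribble)
    return new

V = {i: F(0) for i in STATES}
for k in (1, 2, 3):
    V = backup(V)
    print(k, [str(V[i]) for i in STATES])
# 1 ['1/4', '1/2', '3/4', '1']
# 2 ['3/8', '9/16', '3/4', '1']
# 3 ['27/64', '9/16', '3/4', '1']
```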
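And the posterior from question 5(b), checked the same way:

```python
from fractions import Fraction as F

p_d = F(2, 3)                                # prior: defender present 2/3 of the time
# Evidence e: two successful dribbles, then a successful shot from state 3 (prob 3/4).
p_e_given_d = F(1, 4) * F(1, 4) * F(3, 4)    # = 3/64
p_e_given_nd = F(3, 4) * F(3, 4) * F(3, 4)   # = 27/64
p_e = p_e_given_d * p_d + p_e_given_nd * (1 - p_d)
print(p_e)                                   # 11/64
print(p_e_given_d * p_d / p_e)               # 2/11
```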