Ensemble Methods for Reinforcement Learning with Function Approximation

Stefan Faußer and Friedhelm Schwenker
Institute of Neural Information Processing, University of Ulm, Ulm, Germany

Abstract. Ensemble methods allow multiple models to be combined to increase predictive performance, but they mostly utilize labelled data. In this paper we propose several ensemble methods to learn a combined parameterized state-value function of multiple agents. For this purpose the Temporal-Difference (TD) and Residual-Gradient (RG) update methods as well as a policy function are adapted to learn from joint decisions. Such joint decisions include Majority Voting and Averaging of the state-values. We apply these ensemble methods to the simple pencil-and-paper game Tic-Tac-Toe and show that an ensemble of three agents outperforms a single agent in terms of the Mean-Squared Error (MSE) to the true values as well as in terms of the resulting policy. Further we apply the same methods to learn the shortest path in a maze and empirically show that the learning speed is faster and the resulting policy, i.e. the number of correctly chosen actions, is better for an ensemble of multiple agents than for a single agent.

1 Introduction

In a single-agent problem, multiple agents can be combined to act as a committee agent. The aim here is to raise the performance of the single acting agent. In contrast, in a multi-agent problem multiple agents are needed that act in the same environment with the same (cooperative) or opposed (competitive) goals. Such multi-agent problems are formulated in a Collaborative Multiagent MDP (CMMDP) model. The Sparse Cooperative Q-Learning algorithm has been successfully applied to the distributed sensor network (DSN) problem, where the agents cooperatively focus the sensors to capture a target (Kok et al.). In the predator-prey problem multiple agents are predators (one agent as one predator) hunting the prey. For this problem the Q-learning algorithm has also been used, where each agent maintains its own independent Q-table (Partalas et al.). Furthermore, a mechanism to add and remove agents during learning, i.e. to perform self-organization in a network of agents, has been proposed (Abdallah et al.). While Multi-Agent Reinforcement Learning (MARL) as described above is well-grounded with research work, only little is known for the case where multiple agents are combined to a single agent (committee agent) for single-agent problems.

One reason may be that RL algorithms with state tables theoretically converge to a global minimum independent of the initialized state-values, and therefore multiple runs with distinct state-value initializations result in the same solution, with always the same bias and no variance. In Ensemble Learning methods like Bagging (Breiman), however, the idea is to reduce variance and thereby improve the ensemble's overall performance. Sun et al. studied the partitioning of the input and output space and developed several techniques using Genetic Algorithms (GA) to partition the spaces. Multiple agents applied the Q-Learning algorithm to learn the action-values in subspaces and were combined through a weighting scheme into a single agent. However, the extensive use of heuristics and the high computation time of the GA make this approach unusable for MDPs with large state spaces. In a more recent work (Wiering et al.), the action-values resulting from multiple independently learnt RL algorithms (Q-Learning, SARSA, Actor-Critic, etc.) are combined to decide about the best action to take. As Q-Learning tends to converge to a different fixed point than SARSA and Actor-Critic, the action-value functions have a different bias and variance.

A Markov Decision Process (MDP) with a large state space imposes several problems on Reinforcement Learning (RL) algorithms. Depending on the number of states it may be possible to use RL algorithms that store the state-values in tables. For huge state spaces, however, another technique is to learn a parameterized state-value function by linear or nonlinear function approximation. While the state-values in tables are independent of each other, the function-approximated state-values are highly dependent, based on the selection of the feature space, and may therefore converge faster. In an application to English Draughts (Fausser et al.), which has a very large number of states, the training of the parameterized state-value function needed about 5,000,000 episodes to reach an amateur player level. Although a parameterized state-value function with simple features can be learnt, it may not converge to a global fixed point, and multiple runs with distinct initial weights tend to result in functions with different solutions (different bias and large variance) of the state-values.

Our contribution in this paper is to describe several ensemble methods that aim to increase the learning speed and the final performance compared to that of a single agent. We present the newly derived TD update method as well as the new policy to learn from joint decisions. Although we use parameterized state-value functions in order to deal with large-state MDPs, we have applied the methods to simpler problems to be able to compare performances. Our work differs from others in that we combine multiple agents for a single-agent problem, and in our general way of combining multiple state-values, which makes it possible to target problems with large state spaces. It can be empirically shown that these ensemble methods improve the overall performance of multiple agents for the pencil-and-paper game Tic-Tac-Toe as well as for several mazes.

2 Reinforcement Learning with Parameterized State-Value Functions

Assume we want to estimate a smooth (differentiable) state-value function V_θ^π(s) with its parameter vector θ, where V_θ^π(s) ≈ V^π(s) for all s, using the TD Prediction method (Sutton & Barto). Starting in a certain state s_t we take an action a_t defined by a given policy π, observe the reward (signal) r_{t+1} and move from state s_t to state s_{t+1}. For this single state transition we can model the Squared TD Prediction Error (TDPE) as follows:

$$\mathrm{TDPE}(\theta) = \Big[\, r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big]^2 \qquad (1)$$

The aim is to minimize the above error by updating the parameter vector θ. Applying the gradient-descent technique to the TDPE results in two possible update functions. The first one is Temporal-Difference learning, with γV_θ(s_{t+1}) kept fixed as the training signal:

$$\Delta\theta_{TD} := \alpha \Big[\, r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big] \frac{\partial V_\theta(s_t)}{\partial \theta} \qquad (2)$$

and the second one is Residual-Gradient learning (Baird), with γV_θ(s_{t+1}) variable in terms of θ:

$$\Delta\theta_{RG} := -\alpha \Big[\, r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t) \,\Big|\, s_{t+1} = \pi(s_t) \Big] \left( \gamma \frac{\partial V_\theta(s_{t+1})}{\partial \theta} - \frac{\partial V_\theta(s_t)}{\partial \theta} \right) \qquad (3)$$

In both equations α > 0 is the learning rate and γ ∈ (0, 1] discounts future state-values. Now suppose that the policy π is a function that chooses one successor state s_{t+1} out of the set of all possible successor states S_successor(s_t) based on its state-value:

$$\pi(s_t) := \operatorname*{argmax}_{s_{t+1} \in S_{successor}(s_t)} V_\theta(s_{t+1}) \qquad (4)$$

It is quite clear that this simple policy can only be as good as the estimates of V_θ(s_{t+1}). Thus an improvement of the estimates of V_θ(s_{t+1}) results in a more accurate policy π and therefore in a better choice of a successor state s_{t+1}. An agent using this policy tries to maximize its summed high-rated rewards and avoids receiving low-rated rewards as much as possible. The optimal state-value function V*(s) is:

$$V^*(s) = E\left\{ \sum_{t=0}^{\infty} \gamma^t r_{t+1} \,\middle|\, s_0 = s \right\} \qquad (5)$$

While a parameterized state-value function can only approximate the optimal state-values to a certain degree, it is expected that a function approximation of these state-values results in faster learning, i.e. needs fewer learning iterations than learning with independent state-values. Furthermore, different initializations of the weights θ may result in different state-values after each learning step.
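To make the two update rules concrete, the following is a minimal sketch of a single TD and RG weight update, assuming a linear approximator V_θ(s) = θᵀφ(s) in place of the MLPs used later in the paper; the feature vectors and hyperparameter values in the example are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the TD update (2) and the RG update (3) for a single
# transition. For brevity a linear approximator V_theta(s) = theta . phi(s)
# replaces the MLPs used in the paper; phi_s, phi_s_next, alpha and gamma
# below are illustrative assumptions.

def value(theta, phi_s):
    """State-value V_theta(s) for a feature vector phi(s)."""
    return float(theta @ phi_s)

def td_update(theta, phi_s, phi_s_next, r, alpha=0.01, gamma=0.9):
    """Eq. (2): gamma * V_theta(s_{t+1}) is treated as a fixed training signal."""
    delta = r + gamma * value(theta, phi_s_next) - value(theta, phi_s)
    return theta + alpha * delta * phi_s  # dV_theta(s_t)/dtheta = phi(s_t)

def rg_update(theta, phi_s, phi_s_next, r, alpha=0.01, gamma=0.9):
    """Eq. (3): V_theta(s_{t+1}) is also differentiated with respect to theta."""
    delta = r + gamma * value(theta, phi_s_next) - value(theta, phi_s)
    return theta - alpha * delta * (gamma * phi_s_next - phi_s)

# Example transition with random features (illustration only).
rng = np.random.default_rng(0)
theta = np.zeros(8)
phi_s, phi_s_next, reward = rng.random(8), rng.random(8), 0.0
theta = td_update(theta, phi_s, phi_s_next, reward)
```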

3 Ensemble Methods

Suppose a single-agent problem is given, e.g. finding the shortest path through a maze. Given a set of M agents {A_1, A_2, ..., A_M}, each with its own nonlinear function approximator, for instance a Multi-Layer Perceptron (MLP), and either TD updates (2) or RG updates (3) to adapt the weights, it is possible to train the M agents independently or to train them dependently in terms of their state-value updates and their decisions. Irrespective of the training method, the decisions of all M agents can be combined in a Joint Majority Decision:

$$\pi_{VO}(s_t) := \operatorname*{argmax}_{s_{t+1}} \sum_{i=1}^{M} N_i(s_t, s_{t+1}) \qquad (6)$$

where N_i(s_t, s_{t+1}) models the willingness of agent i to move from state s_t to state s_{t+1}:

$$N_i(s_t, s_{t+1}) = \begin{cases} 1, & \text{if } \pi_i(s_t) = s_{t+1}, \\ 0, & \text{else} \end{cases} \qquad (7)$$

Policy π_i(s_t) is equivalent to equation (4), but with a subscript to note which agent, i.e. which function approximator, to use. The state-values of all agents can further be combined into an Average Decision based on averaging the state-values:

$$\pi_{AV}(s_t) := \operatorname*{argmax}_{s_{t+1} \in S_{successor}(s_t)} \frac{1}{M} \sum_{i=1}^{M} V_{\theta_i}(s_{t+1}) \qquad (8)$$

Here V_{θ_i}(s_{t+1}) is the state-value of agent i using the weights θ_i of this agent. Summed up, three policies, namely π_S(s_t) (4), π_VO(s_t) (6) and π_AV(s_t) (8), are available to decide about the best successor state, where only the last two include the state-values of the other agents in an ensemble, i.e. perform a joint decision. One way of constructing an RL ensemble is to train the M agents independently and to combine their state-values for a joint decision, using one of the above described policies after the training. Another way is to use the joint decision during the learning process. In this case it may be necessary to add some noise to the policies to keep the agents (state-value functions) diverse. Another suggestion is to have different starting state positions for each agent in the MDP, resulting in a better exploration of the MDP.
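The following minimal sketch illustrates the three policies. It assumes that each agent's value function is available as a callable and that a successors(s) helper enumerates S_successor(s); both are assumptions made for illustration, not part of the original implementation.

```python
import numpy as np

# A minimal sketch of the single policy (4), the Joint Majority Decision (6)/(7)
# and the Average Decision (8). Each agent's value function is assumed to be a
# callable V_i(s); successors(s) enumerates S_successor(s). Successor states are
# assumed to be hashable so that they can be counted as votes.

def single_policy(v_i, s, successors):
    """Eq. (4): the agent greedily picks the successor with its highest own value."""
    return max(successors(s), key=v_i)

def majority_vote_policy(value_fns, s, successors):
    """Eqs. (6)/(7): each agent casts one vote; the most-voted successor wins."""
    votes = {}
    for v_i in value_fns:
        choice = single_policy(v_i, s, successors)
        votes[choice] = votes.get(choice, 0) + 1
    return max(votes, key=votes.get)

def average_policy(value_fns, s, successors):
    """Eq. (8): pick the successor with the highest state-value averaged over agents."""
    return max(successors(s),
               key=lambda s_next: np.mean([v_i(s_next) for v_i in value_fns]))
```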

3.1 Combining the State-Values

While joint decisions during the learning process implicitly update the state-values of one agent dependent on the state-values of all other M − 1 agents, it can be a further improvement to explicitly combine the state-values. Assume agent i is currently in state s_t and, based on one of the policies described in the last section, moves to state s_{t+1} and receives reward r_{t+1}. Independent of the chosen policy, the state-values of the successor state s_{t+1} of all M agents can be combined into an Average Predicted Value:

$$V_{AV}(s_{t+1}) = \frac{1}{M} \sum_{i=1}^{M} V_{\theta_i}(s_{t+1}) \qquad (9)$$

As the weights of the function approximators of all agents differ, because of diverse initialization of the weights, exploration, different starting positions of the agents, decision noise and instabilities in the weight updates, it is expected that the combination of the state-values results in a more stable and better predicted state-value. This single state transition can now be modelled in a Squared Average Predicted TD Error (ATDPE) function, including the Average Predicted Value instead of the single state-value of agent i:

$$\mathrm{ATDPE}(\theta_i) = \left[\, r_{t+1} + \gamma \frac{1}{M} \sum_{j=1}^{M} V_{\theta_j}(s_{t+1}) - V_{\theta_i}(s_t) \,\middle|\, s_{t+1} = \pi(s_t) \right]^2 \qquad (10)$$

Gradient descent of the ATDPE function, as done in Section 2 with the TDPE function, yields a new combined TD update function:

$$\Delta\theta_i^{CTD} := \alpha \left[\, r_{t+1} + \gamma V_{AV}(s_{t+1}) - V_{\theta_i}(s_t) \,\middle|\, s_{t+1} = \pi(s_t) \right] \frac{\partial V_{\theta_i}(s_t)}{\partial \theta_i} \qquad (11)$$

as well as a new combined RG update function:

$$\Delta\theta_i^{CRG} := -\alpha \left[\, r_{t+1} + \gamma V_{AV}(s_{t+1}) - V_{\theta_i}(s_t) \,\middle|\, s_{t+1} = \pi(s_t) \right] \left( \frac{\gamma}{M} \frac{\partial V_{\theta_i}(s_{t+1})}{\partial \theta_i} - \frac{\partial V_{\theta_i}(s_t)}{\partial \theta_i} \right) \qquad (12)$$

With one of the above update functions the agents learn from the average predicted state-values. Theoretically this can be further combined with one of the previously described joint decision policies. Using the simple single-decision policy (4) this results in an interesting ensemble where each agent decides based on its own state-values but learns from the average predicted state-values. In this case less noise for the decision functions is required, as the agents mainly keep their bias. With one of the joint policies, i.e. Joint Majority Decision (6) or Average Decision (8), all agents perform joint decisions and learn from the average predicted state-values. As the combined update functions and the policies for joint decisions only need some additional memory space to save the state-values of all agents, and this memory space is far lower than the memory space of the function approximator weights, it can be ignored in memory space considerations. Therefore, training an ensemble of M agents takes M times the memory space and M times the computation time of a single agent.
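Continuing the linear-approximator sketch from Section 2, the combined updates (11) and (12) could be implemented along the following lines; the list thetas of all agents' weight vectors and the hyperparameter values are again illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the Average Predicted Value (9) and the combined updates
# (11) and (12), continuing the linear-approximator sketch used for Eqs. (2)/(3).
# `thetas` is the list of all M agents' weight vectors; feature vectors and
# hyperparameters are illustrative assumptions.

def v_avg(thetas, phi_s_next):
    """Eq. (9): average predicted value of the successor state over all agents."""
    return float(np.mean([theta @ phi_s_next for theta in thetas]))

def combined_td_update(thetas, i, phi_s, phi_s_next, r, alpha=0.01, gamma=0.9):
    """Eq. (11): agent i's TD update against the averaged prediction."""
    delta = r + gamma * v_avg(thetas, phi_s_next) - float(thetas[i] @ phi_s)
    return thetas[i] + alpha * delta * phi_s

def combined_rg_update(thetas, i, phi_s, phi_s_next, r, alpha=0.01, gamma=0.9):
    """Eq. (12): residual-gradient variant; only the 1/M share of V_AV(s_{t+1})
    depends on theta_i, hence the gamma/M factor."""
    M = len(thetas)
    delta = r + gamma * v_avg(thetas, phi_s_next) - float(thetas[i] @ phi_s)
    return thetas[i] - alpha * delta * (gamma / M * phi_s_next - phi_s)
```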

Fig. 1. Empirical results of different ensemble methods applied to five mazes. Measured are the number of states that are correctly chosen by the resulting policy, where in the left plot the agents have learnt from joint decisions and in the right plot the agents have learnt from joint decisions and averaged state-values.

4 Experiments and Results

To evaluate the behaviour of the ensemble methods described in the last sections we have performed experiments with the pencil-and-paper game Tic-Tac-Toe and several mazes. For fair evaluations we have performed multiple runs, giving the same seed to the pseudo random number generator for all ensemble methods to ensure that the weights of the parameterized state-value function were identically initialized. For example, if we performed 2 test runs, we used seed1 for all evaluated methods in the first test run and seed2 ≠ seed1 in the second test run. The given values are the averaged values of the multiple runs.

4.1 Maze

In the maze problem an agent tries to find the shortest path to the goal. For our experiments we have created five mazes, each with 100 randomly positioned barriers. A barrier can be set horizontally or vertically between two states and does not fill out a whole state. Each maze has one goal, where the goal position is roughly in the upper-left, upper-right, lower-left, lower-right corner or in the middle of the maze. An agent receives a reward of 1 if it moves to a goal and a reward of 0 otherwise. From each state there are up to 4 possible successor states. The agent cannot move over a barrier or outside the maze. We have applied the Breadth-first search algorithm (Russell & Norvig) to calculate the true state values and the optimal policy. For the experiments we have designed M = 5 agents, where each agent has its own 2-layer MLP with 8 input neurons, 3 hidden neurons and one output neuron. The input neurons are coded as follows:

1. x position, 2. y position, 3. 20 − x, 4. 20 − y, 5. 1 if x ≥ 11, otherwise 0, 6. 1 if x ≤ 10, otherwise 0, 7. 1 if y ≥ 11, otherwise 0, 8. 1 if y ≤ 10, otherwise 0. For all evaluations the agents had the following common training parameters: (combined) TD update, α = 0.01, γ = 0.9, epsilon-greedy exploration strategy with ε = 0.3, hyperbolic tangent (tanh) transfer functions for the hidden layer and the output layer, and uniformly distributed random noise between ±0.05 for joint decisions. Each agent has its own randomly initialized start state and maintains its current state. If an agent reaches a goal or fails to reach the goal within 100 iterations, it restarts at a new randomly initialized start state. The results of an ensemble of five agents compared to a single agent, with values averaged over 10 test runs and 5 mazes, can be seen in Figure 1. Comparing the ensemble methods in terms of the number of states that are correctly chosen by the resulting policy, the methods with joint decisions are better than the methods learning from joint decisions and average predicted state-values. Even more, a simple combination of five independently trained agents (the "5 agents single decision" curve) seems to be the best, followed by a combination of five dependently trained agents with Joint Majority Voting decisions. All ensemble methods learn faster and have a better final performance than a single agent within 30,000 iterations.

4.2 Tic-Tac-Toe

The pencil-and-paper game Tic-Tac-Toe is a competitive 2-player game where each player in turn marks one of the at most nine available spaces, until one player either has three of his own marks horizontally, vertically or diagonally, resulting in a win, or all spaces are marked, resulting in a draw. Tic-Tac-Toe has 5477 valid states excluding the empty position, and starting from the empty position the game always results in a draw if both players perform the best moves. For our experiments we have designed M = 3 agents, where each agent has its own 2-layer MLP with 9 input neurons, 5 hidden neurons and one output neuron. One input neuron codes one game space and is −1 if the space is occupied by the opponent, 1 if the space is occupied by the agent, or 0 if the space is empty. The weights of the MLP are updated by the (combined) RG update function. A reward of 1 is received if the agent moves to a terminal state where it has won, and a reward of 0 otherwise, i.e. for a transition to a non-terminal state or to a terminal state where it has lost or reached a draw. For all evaluations the agents had the following common training parameters: α = 0.0025, γ = 0.9, epsilon-greedy exploration strategy with ε = 0.3, hyperbolic tangent (tanh) transfer functions for the hidden layer and the output layer, and uniformly distributed random noise between ±0.05 for joint decisions. Each agent learns by self-play, i.e. it uses the same decision policy and state-values on an inverted position to decide which action the opponent should take. Irrespective of the ensemble method, all agents learn episode-wise and start from the same initial state (the empty position). To calculate the true state-values and the optimal policy we have slightly modified the Minimax algorithm (Russell & Norvig) to include the rewards and the discounting rate γ.
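As a concrete illustration of this input coding and the self-play inversion, here is a minimal sketch; the board representation and helper names are assumptions rather than the original implementation.

```python
import numpy as np

# A minimal sketch of the Tic-Tac-Toe input coding and the self-play inversion
# described above. The board is a length-9 list with entries 'X' (agent),
# 'O' (opponent) or None (empty); this representation and the helper names are
# assumptions, not taken from the paper.

def board_features(board, agent_mark='X', opponent_mark='O'):
    """One input neuron per space: +1 agent, -1 opponent, 0 empty."""
    coding = {agent_mark: 1.0, opponent_mark: -1.0, None: 0.0}
    return np.array([coding[space] for space in board])

def inverted_features(board, agent_mark='X', opponent_mark='O'):
    """Self-play: evaluate the opponent's choices with the same value function
    by inverting the position, i.e. swapping the roles of the two marks."""
    return -board_features(board, agent_mark, opponent_mark)

# Example: the agent occupies the centre, the opponent a corner.
board = ['O', None, None,
         None, 'X', None,
         None, None, None]
print(board_features(board))   # [-1.  0.  0.  0.  1.  0.  0.  0.  0.]
```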

Fig. 2. Empirical results of different ensemble methods applied to Tic-Tac-Toe. The first two figures show the Mean-Squared Error (MSE) to the true state-values. The next two figures show the number of best states that are chosen by the resulting policy; higher values are better. The last two figures compare the MSE to the true state-values (left) and the number of best states (right) of an ensemble of three agents to a single agent.

The results of an ensemble of three agents compared to a single agent, with values averaged over 10 test runs, can be seen in Figure 2. Examining the MSE to the true values, an ensemble of three agents with single independent decisions and learning from average predicted state-values reaches the lowest error. During the first 100,000 training episodes the MSE is almost always lower than the MSE of a single agent with three times the training episodes. This is especially true for the first 20,000 and the last 40,000 iterations. All other ensemble methods perform better than a single agent, except the three agents that learnt from joint Majority Voting decisions but did not learn from the Average Predicted state-values. Perhaps lowering the noise for the joint decision would result in better MSE values for this case. Comparing the number of best states that are chosen by the resulting policy, all ensembles without exception perform better than a single agent. Consider that Tic-Tac-Toe has 4520 non-terminal states.

5 Conclusion

We have described several ensemble methods that are new in their integration into Reinforcement Learning with function approximation. The necessary extensions to the TD and RG update formulas to learn from average predicted state-values have been shown. Further, the policies for joint decisions, such as Majority Voting and Averaging based on averaging the state-values, have been formulated. For two applications, namely Tic-Tac-Toe and five different mazes, we have empirically shown that these ensembles have a faster learning speed and a better final performance than a single agent. While we have chosen simple applications to be able to unify the measure of the performances, we emphasize that our ensemble methods are most useful for large-state MDPs with simple feature spaces and a small number of hidden neurons. Such an application to a large-state MDP to further evaluate these ensemble methods may be done in another contribution.

References

1. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
2. Baird, L.: Residual Algorithms: Reinforcement Learning with Function Approximation. In: Proceedings of the 12th International Conference on Machine Learning (1995)
3. Breiman, L.: Bagging Predictors. Machine Learning 24 (1996)
4. Schapire, R.E.: The Strength of Weak Learnability. Machine Learning 5(2) (1990)
5. Sun, R., Peterson, T.: Multi-Agent Reinforcement Learning: Weighting and Partitioning. Neural Networks 12(4-5) (1999)
6. Kok, J.R., Vlassis, N.: Collaborative Multiagent Reinforcement Learning by Payoff Propagation. Journal of Machine Learning Research 7 (2006)
7. Partalas, I., Feneris, I., Vlahavas, I.: Multi-Agent Reinforcement Learning using Strategies and Voting. In: 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), vol. 2 (2007)

8. Abdallah, S., Lesser, V.: Multiagent Reinforcement Learning and Self-Organization in a Network of Agents. In: Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems, AAMAS 2007 (2007)
9. Wiering, M.A., van Hasselt, H.: Ensemble Algorithms in Reinforcement Learning. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 38 (2008)
10. Faußer, S., Schwenker, F.: Learning a Strategy with Neural Approximated Temporal-Difference Methods in English Draughts. In: ICPR 2010 (2010)
11. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1995)
12. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn. Prentice-Hall, Englewood Cliffs (2002)
