Decision Trees: An Early Classifier


Jason Corso, SUNY at Buffalo
January 19, 2012

Introduction to Non-Metric Methods

This chapter covers problems involving nominal data, that is, data that are discrete and without any natural notion of similarity or even ordering. For example (DHS), some teeth are small and fine (as in baleen whales) for straining tiny prey from the sea; others (as in sharks) come in multiple rows; other sea creatures have tusks (as in walruses); yet others lack teeth altogether (as in squid). There is no clear notion of similarity for this information about teeth.

Most of the other methods we study involve real-valued feature vectors with clear metrics. We may also consider problems involving data tuples and data strings, and for recognition of these, decision trees and string grammars, respectively.

20 Questions

I am thinking of a person. Ask me up to 20 yes/no questions to determine who this person is. Consider your questions wisely...

How did you ask the questions? What underlying measure led you to those questions, if any?

Most importantly, iterative yes/no questions of this sort require no metric and are well suited to nominal data.

Such a sequence of questions is a decision tree.

[Figure 8.1 (DHS): a tree for classifying fruit. Classification in a basic decision tree proceeds from top to bottom: the question asked at each node concerns a particular property of the pattern (Color?, Size?, Shape?, Taste?), and the downward links correspond to the possible values. Successive nodes are visited until a terminal (leaf) node is reached, where the category label is read. Note that the same question, Size?, appears in different places in the tree, and that different questions can appear at the same level.]

Decision Trees 101

The root node of the tree, displayed at the top, is connected by successive branches to the other nodes. The connections continue until the leaf nodes are reached, implying a decision.

The classification of a particular pattern begins at the root node, which queries a particular property (selected during tree learning). The links off of the root node correspond to the different possible values of the property. We follow the link corresponding to the pattern's value and continue to a new node, at which we check the next property. And so on.

Decision trees have a particularly high degree of interpretability.
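To make the mechanics concrete, here is a minimal sketch, not from the lecture, of top-to-bottom classification in the fruit tree of Figure 8.1. Internal nodes are (property, branches) pairs, leaves are category labels, and all names are illustrative:

```python
# Minimal sketch of top-down decision tree classification (the fruit tree of
# Figure 8.1). Internal nodes are (property, {value: subtree}) pairs; leaves
# are plain category labels.
tree = ("Color?", {
    "green":  ("Size?", {"big": "Watermelon", "medium": "Apple", "small": "Grape"}),
    "yellow": ("Shape?", {"round": ("Size?", {"big": "Grapefruit", "small": "Lemon"}),
                          "thin": "Banana"}),
    "red":    ("Size?", {"medium": "Apple",
                         "small": ("Taste?", {"sweet": "Cherry", "sour": "Grape"})}),
})

def classify(node, pattern):
    """Follow links from the root until a leaf (a plain string) is reached."""
    while not isinstance(node, str):
        prop, branches = node
        node = branches[pattern[prop]]
    return node

print(classify(tree, {"Color?": "yellow", "Shape?": "thin"}))  # -> Banana
```

Note that no metric on the feature values is ever used: each query simply consumes one nominal property of the pattern.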

When to Consider Decision Trees

- Instances are wholly or partly described by attribute-value pairs.
- The target function is discrete-valued.
- A disjunctive hypothesis may be required.
- The training data are possibly noisy.

Examples: equipment or medical diagnosis, credit risk analysis, modeling calendar scheduling preferences.

Decision Tree Learning

Assume we have a set D of labeled training data and have decided on a set of properties that can be used to discriminate patterns. Now we want to learn how to organize these properties into a decision tree so as to maximize accuracy.

Any decision tree progressively splits the data into subsets. If at any point all of the elements of a particular subset are of the same category, then we say the node is pure and we can stop splitting. Unfortunately, this rarely happens, and we have to decide between stopping the splitting and accepting an imperfect decision, or selecting another property and growing the tree further.

The basic strategy for recursively defining the tree is the following: given the data represented at a node, either declare that node to be a leaf or find another property to use to split the data into subsets.

There are six general kinds of questions that arise:

1. How many branches will be selected from a node?
2. Which property should be tested at a node?
3. When should a node be declared a leaf?
4. How can we prune a tree once it has become too large?
5. If a leaf node is impure, how should the category be assigned?
6. How should missing data be handled?

Number of Splits

The number of splits at a node, or its branching factor B, is generally set by the designer (as a function of the way the test is selected) and can vary throughout the tree.

Note that any split with a branching factor greater than 2 can easily be converted into a sequence of binary splits, so DHS focuses on binary tree learning only. But we note that in certain circumstances, selecting or evaluating the test at a node may be computationally expensive, and a 3- or 4-way split may be more desirable for computational reasons.

Query Selection and Node Impurity

The fundamental principle underlying tree creation is that of simplicity: we prefer decisions that lead to a simple, compact tree with few nodes. We seek a property query T at each node N that makes the data reaching the immediate descendant nodes as pure as possible.

Let i(N) denote the impurity of a node N. In all cases, we want i(N) to be 0 if all of the patterns that reach the node bear the same category label, and to be large if the categories are equally represented.

Entropy impurity is the most popular measure:

    i(N) = -\sum_j P(\omega_j) \log_2 P(\omega_j).    (1)

It is minimized for a node that contains elements of only one class (a pure node).

For the two-category case, a useful definition of impurity is the variance impurity:

    i(N) = P(\omega_1) P(\omega_2).    (2)

Its generalization to the multi-class case is the Gini impurity:

    i(N) = \sum_{i \neq j} P(\omega_i) P(\omega_j) = 1 - \sum_j P^2(\omega_j),    (3)

which is the expected error rate at node N if the category label is selected randomly from the class distribution present at the node.

The misclassification impurity measures the minimum probability that a training pattern would be misclassified at N:

    i(N) = 1 - \max_j P(\omega_j).    (4)
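All three measures are simple functions of the class distribution at a node. A small sketch, assuming NumPy (the function names are our own):

```python
import numpy as np

def entropy_impurity(p):
    """Eq. (1): i(N) = -sum_j P(w_j) log2 P(w_j), with 0 log 0 taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gini_impurity(p):
    """Eq. (3): i(N) = 1 - sum_j P(w_j)^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def misclassification_impurity(p):
    """Eq. (4): i(N) = 1 - max_j P(w_j)."""
    return 1.0 - max(p)

print(entropy_impurity([0.5, 0.5]))  # 1.0: maximal for two balanced classes
print(gini_impurity([1.0, 0.0]))     # 0.0: a pure node
```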

[Plot: the entropy, Gini/variance, and misclassification impurities as functions of the class probability P on [0, 1]. For the two-category case, the impurity functions peak at equal class frequencies (P = 0.5), and the variance and the Gini impurity functions are identical.]

Query Selection

Key question: given a partial tree down to node N, what feature (split) s should we choose for the property test T?

The obvious heuristic is to choose the feature that yields as large a decrease in impurity as possible. The impurity gradient (the drop in impurity due to the split) is

    \Delta i(N) = i(N) - P_L \, i(N_L) - (1 - P_L) \, i(N_R),    (5)

where N_L and N_R are the left and right descendants, respectively, and P_L is the fraction of the data that goes to the left sub-tree when property test T is used.

The strategy is then to choose the feature that maximizes \Delta i(N). If the entropy impurity is used, this corresponds to choosing the feature that yields the highest information gain.
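For a single real-valued feature, maximizing Eq. (5) is the one-dimensional search described next: try each candidate threshold and keep the largest drop. A sketch under those assumptions, using the Gini impurity (names are our own):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def impurity_drop(y, left_mask):
    """Eq. (5): i(N) - P_L i(N_L) - (1 - P_L) i(N_R) for a binary split."""
    p_l = left_mask.mean()
    return gini(y) - p_l * gini(y[left_mask]) - (1 - p_l) * gini(y[~left_mask])

def best_threshold(x, y):
    """Greedy 1-D search over candidate thresholds on one real feature."""
    best_t, best_drop = None, -np.inf
    for t in np.unique(x)[:-1]:        # boundaries between distinct values
        drop = impurity_drop(y, x <= t)
        if drop > best_drop:
            best_t, best_drop = t, drop
    return best_t, best_drop

x = np.array([0.1, 0.2, 0.4, 0.7, 0.9])
y = np.array([0, 0, 0, 1, 1])
print(best_threshold(x, y))            # (0.4, 0.48): a clean separation
```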

What can we say about this strategy?

For the binary case, it yields a one-dimensional optimization problem (which may have non-unique optima); in the higher-branching-factor case, it yields a higher-dimensional optimization problem.

In multi-class binary tree creation, we would want to use the twoing criterion. The goal is to find the split that best separates groups of the c categories: a candidate supercategory C_1 consists of all patterns in some subset of the categories, and C_2 has the remainder. When searching for the split s, we also need to search over the possible category groupings.

This is a local, greedy optimization strategy. Hence, there is no guarantee that we reach either the global optimum (in classification accuracy) or the smallest tree. In practice, it has been observed that the particular choice of impurity function rarely affects the final classifier and its accuracy.

A Note About Multiway Splits

In the case of selecting a multiway split with branching factor B, the direct generalization of the impurity gradient is

    \Delta i(s) = i(N) - \sum_{k=1}^{B} P_k \, i(N_k).    (6)

This direct generalization is biased toward higher branching factors; to see this, consider the uniform splitting case. So we need to normalize:

    \Delta i_B(s) = \frac{\Delta i(s)}{-\sum_{k=1}^{B} P_k \log_2 P_k}.    (7)

We can then again choose the feature that maximizes this normalized criterion.
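A sketch of Eq. (7) with entropy impurity, in which case the normalized criterion coincides with what C4.5 calls the gain ratio (the function names are our own):

```python
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def normalized_drop(parent_counts, child_counts):
    """Eq. (7): the raw drop of Eq. (6) divided by -sum_k P_k log2 P_k,
    which penalizes splits with many branches."""
    n_k = np.array([sum(c) for c in child_counts], dtype=float)
    p_k = n_k / n_k.sum()
    raw = entropy(parent_counts) - sum(
        p * entropy(c) for p, c in zip(p_k, child_counts))
    return raw / -np.sum(p_k * np.log2(p_k))

# Parent [9+, 5-] split into [3+, 4-] and [6+, 1-]:
print(normalized_drop([9, 5], [[3, 4], [6, 1]]))  # ~0.152 (for B = 2 with
                                                  # equal halves, the
                                                  # normalizer is exactly 1)
```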

When to Stop Splitting?

If we continue to grow the tree until each leaf node has its lowest impurity (just one sample datum each), then we will likely have over-trained: such a tree will almost certainly not generalize well. Conversely, if we stop growing the tree too early, the error on the training data will not be sufficiently low and performance will again suffer.

So, how do we decide when to stop splitting?

1. Cross-validation.
2. Thresholding the impurity gradient.
3. Incorporating a tree-complexity term and minimizing.
4. Testing the statistical significance of the impurity gradient.

Stopping by Thresholding the Impurity Gradient

Splitting is stopped if the best candidate split at a node reduces the impurity by less than a preset amount β:

    \max_s \Delta i(s) \leq \beta.    (8)

Benefit 1: unlike cross-validation, the tree is trained on the complete training data set.

Benefit 2: leaf nodes can lie at different levels of the tree, which is desirable whenever the complexity of the data varies throughout the range of values.

Drawback: how do we set the value of the threshold β?

Stopping with a Complexity Term

Define a new global criterion function

    \alpha \cdot \text{size} + \sum_{\text{leaf nodes}} i(N),    (9)

which trades complexity for accuracy. Here, size could represent the number of nodes or links, and α is some positive constant. The strategy is then to split until a minimum of this global criterion function has been reached.

Given the entropy impurity, this global measure is related to the minimum description length principle: the sum of the impurities at the leaf nodes is a measure of the uncertainty in the training data given the model represented by the tree.

But, again, how do we set the constant α?
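The criterion itself is trivial to evaluate; the difficulty lies entirely in choosing α. A one-line sketch (the α value here is arbitrary and illustrative):

```python
def global_criterion(size, leaf_impurities, alpha):
    """Eq. (9): alpha * size + the sum of impurities at the leaf nodes.
    size counts nodes (or links); alpha > 0 is set by the designer."""
    return alpha * size + sum(leaf_impurities)

# Grow the tree while this keeps decreasing; stop at its first minimum.
print(global_criterion(size=7, leaf_impurities=[0.0, 0.12, 0.05], alpha=0.02))
```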

Stopping by Testing Statistical Significance

During construction, estimate the distribution of the impurity gradients Δi for the current collection of nodes. For any candidate split, test whether its gradient is statistically different from zero; one possibility is the chi-squared test.

More generally, we can take a hypothesis-testing approach to stopping: we seek to determine whether a candidate split differs significantly from a random split. Suppose we have n samples at node N. A particular split s sends Pn patterns to the left branch and (1 - P)n patterns to the right branch. A random split would place P n_1 of the ω_1 samples to the left, P n_2 of the ω_2 samples to the left, and the corresponding amounts to the right.

The chi-squared statistic measures the deviation of a particular split s from this random one:

    \chi^2 = \sum_{i=1}^{2} \frac{(n_{iL} - n_{ie})^2}{n_{ie}},    (10)

where n_{iL} is the number of ω_i patterns sent to the left under s, and n_{ie} = P n_i is the number expected under the random rule.

The larger the chi-squared statistic, the more the candidate split deviates from a random one. When it is greater than a critical value (based on the desired significance level), we reject the null hypothesis (the random split) and proceed with s.
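A sketch of the test for the two-class case, using SciPy's chi-squared quantiles for the critical value (names are our own):

```python
import numpy as np
from scipy.stats import chi2

def chi_squared_stat(n_left_by_class, n_by_class, p_left):
    """Eq. (10): sum_i (n_iL - n_ie)^2 / n_ie, with n_ie = P * n_i the count
    a random split would send left."""
    n_il = np.asarray(n_left_by_class, dtype=float)
    n_ie = p_left * np.asarray(n_by_class, dtype=float)
    return float(np.sum((n_il - n_ie) ** 2 / n_ie))

# A split sending 20 of 25 w1 patterns and 5 of 25 w2 patterns left (P = 0.5):
stat = chi_squared_stat([20, 5], [25, 25], p_left=0.5)  # 9.0
critical = chi2.ppf(0.95, df=1)                         # ~3.84 for 2 classes
print(stat > critical)  # True: reject the random-split null, keep the split
```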

Pruning

Tree construction based on deciding when to stop splitting biases the learning algorithm toward trees in which the greatest impurity reduction occurs near the root; it makes no attempt to look ahead at what splits may occur at the leaves and beyond.

Pruning is the principal alternative strategy for tree construction. In pruning, we first exhaustively build the tree. Then, all pairs of neighboring leaf nodes are considered for elimination: any pair whose elimination yields only a satisfactory (small) increase in impurity is removed, and the common ancestor node is declared a leaf. Unbalanced trees often result from this style of pruning/merging.

Pruning avoids the local-ness of the earlier methods and uses all of the training data, but it does so at added computational cost during tree construction. A minimal sketch of such leaf merging follows.
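This is a sketch of merge-based pruning, assuming each node caches its training-set impurity and sample count (the Node class and all names are our own):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    impurity: float                 # i(N) measured on the training data at N
    n: int                          # number of training patterns reaching N
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    def is_leaf(self) -> bool:
        return self.left is None

def prune(node: Node, beta: float = 0.05) -> Node:
    """Bottom-up: collapse sibling leaves whose elimination raises impurity
    by less than beta; their common ancestor becomes a leaf."""
    if node.is_leaf():
        return node
    node.left, node.right = prune(node.left, beta), prune(node.right, beta)
    if node.left.is_leaf() and node.right.is_leaf():
        p_l = node.left.n / node.n
        kept = p_l * node.left.impurity + (1 - p_l) * node.right.impurity
        if node.impurity - kept < beta:   # only a small increase in impurity
            node.left = node.right = None
    return node
```

The label of each surviving leaf would then be assigned by majority vote, as discussed next.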

Assignment of Leaf Node Labels

This part is easy: a particular leaf node should make its label assignment based on the distribution of training samples that reached it, taking the label of the maximally represented class. We will see a clear justification for this in the next chapter, on decision theory.

Instability of the Tree Construction

Importance of Feature Choice

The selection of features ultimately plays a major role in accuracy, generalization, and complexity. This is an instance of the Ugly Duckling principle.

[Figure 8.5 (DHS): if the class of node decisions does not match the form of the training data, a very complicated decision tree results, as shown in the top panel, where many axis-aligned tests (x_1 < 0.27, x_2 < 0.32, ...) are needed; the bottom panel separates the same data with the single oblique decision -1.2 x_1 + x_2 < 0.1.]

Furthermore, the use of multiple variables in selecting a decision rule may greatly improve accuracy and generalization.

[Figure 8.6 (DHS): one form of multivariate tree employs general linear decisions at each node, e.g., 0.04 x_1 + 0.16 x_2 < 0.11 at the root, yielding far fewer nodes than the corresponding axis-aligned tree (x_2 < 0.5, x_1 < 0.95, ...).]

ID3 Method

ID3 is another tree-growing method. It assumes nominal inputs, and every split has a branching factor B_j, where B_j is the number of discrete attribute bins of the variable j chosen for splitting; the splits are hence seldom binary. The number of levels in the tree is at most the number of input variables.

The algorithm continues until all nodes are pure or there are no more variables on which to split. One can follow this by pruning.
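A compact sketch of the ID3 recursion on nominal data, with entropy impurity; the names are our own, and details such as tie-breaking and empty branches are glossed over:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attributes):
    """rows: dicts of nominal attribute values. Returns a tree of the form
    (attribute, {value: subtree}) with plain labels at the leaves."""
    if len(set(labels)) == 1:              # pure node: stop splitting
        return labels[0]
    if not attributes:                     # no variables left: majority label
        return Counter(labels).most_common(1)[0][0]

    def gain(a):                           # information gain of attribute a
        rem = 0.0
        for v in {r[a] for r in rows}:
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem

    best = max(attributes, key=gain)       # greedy choice, no backtracking
    rest = [a for a in attributes if a != best]
    return (best, {v: id3([r for r in rows if r[best] == v],
                          [l for r, l in zip(rows, labels) if r[best] == v],
                          rest)
                   for v in {r[best] for r in rows}})
```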

C4.5 Method (in brief)

C4.5 is a successor to the ID3 method. It additionally handles real-valued variables, and uses ID3-style multiway splits for nominal data. Pruning is performed based on statistical significance tests.

Example from T. Mitchell's book: PlayTennis

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No

Example: Which attribute is the best classifier?

Splitting S = [9+, 5-] (E = 0.940) on Humidity gives High = [3+, 4-] (E = 0.985) and Normal = [6+, 1-] (E = 0.592); splitting on Wind gives Weak = [6+, 2-] (E = 0.811) and Strong = [3+, 3-] (E = 1.00). So

    Gain(S, Humidity) = 0.940 - (7/14)(0.985) - (7/14)(0.592) = 0.151
    Gain(S, Wind)     = 0.940 - (8/14)(0.811) - (6/14)(1.00)  = 0.048
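These gains are easy to verify; a few lines reproduce them up to rounding:

```python
import math

def entropy(pos, neg):
    """Binary entropy of a node with pos positive and neg negative samples."""
    total, out = pos + neg, 0.0
    for c in (pos, neg):
        if c:
            out -= c / total * math.log2(c / total)
    return out

e_s = entropy(9, 5)                                   # 0.940
gain_humidity = e_s - 7/14 * entropy(3, 4) - 7/14 * entropy(6, 1)
gain_wind     = e_s - 8/14 * entropy(6, 2) - 6/14 * entropy(3, 3)
print(round(gain_humidity, 3), round(gain_wind, 3))   # 0.152 0.048, matching
                                                      # the slide up to rounding
```

Of these two, Humidity gives the larger gain; comparing all four attributes selects Outlook at the root, as the next slide shows.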

Example: Which attribute should be tested next?

Splitting {D1, D2, ..., D14} = [9+, 5-] on Outlook gives: Sunny -> {D1, D2, D8, D9, D11} = [2+, 3-] (still impure), Overcast -> {D3, D7, D12, D13} = [4+, 0-] (a pure leaf: Yes), and Rain -> {D4, D5, D6, D10, D14} = [3+, 2-] (still impure). Which attribute should be tested at the Sunny node? With S_sunny = {D1, D2, D8, D9, D11}:

    Gain(S_sunny, Humidity)    = 0.970 - (3/5)(0.0) - (2/5)(0.0)              = 0.970
    Gain(S_sunny, Temperature) = 0.970 - (2/5)(0.0) - (2/5)(1.0) - (1/5)(0.0) = 0.570
    Gain(S_sunny, Wind)        = 0.970 - (2/5)(1.0) - (3/5)(0.918)            = 0.019

so Humidity is chosen.

Example: Hypothesis Space Search by ID3

[Figure (T. Mitchell): ID3's greedy search through the space of decision trees, adding one attribute test (A1, A2, A3, A4, ...) at a time.]

Learned Tree

Outlook
├─ Sunny → Humidity
│    ├─ High → No
│    └─ Normal → Yes
├─ Overcast → Yes
└─ Rain → Wind
     ├─ Strong → No
     └─ Weak → Yes

Example: An Overfitting Instance

Consider adding a new, noisy training example #15: (Sunny, Hot, Normal, Strong, PlayTennis = No). What effect would it have on the earlier tree?

[Plot (T. Mitchell): accuracy (0.5 to 0.9) versus tree size (0 to 100 nodes), measured on the training data and on the test data. Accuracy on the training data keeps increasing as the tree grows, while accuracy on the test data peaks and then falls off: the larger tree overfits.]