Decision tree analysis


Abstract


Decision tree analysis is a widely used machine learning algorithm for classifying data and predicting outcomes from a set of input features (Breiman et al., 1984). Each internal node represents a decision, and each branch represents one or more possible outcomes, forming a tree-like model of decisions and their potential consequences. The algorithm learns from a training set of labeled data which test to apply at each node based on the input features.
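As an illustrative sketch of how such a tree is fitted to labeled training data and then used for prediction, the example below uses scikit-learn's DecisionTreeClassifier on the built-in iris dataset; the library, dataset, and hyperparameters are assumptions made here for illustration and are not specified in the text above.

```python
# A minimal sketch of decision tree learning and prediction with scikit-learn.
# The library, dataset, and max_depth=3 are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Labeled training data: feature matrix X and class labels y
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the tree: each internal node becomes a test on one feature,
# each branch an outcome of that test, and each leaf a predicted class
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Evaluate on held-out data and print the learned decision rules
print("test accuracy:", clf.score(X_test, y_test))
print(export_text(clf))
```

Printing the fitted tree with export_text makes the node tests and branches explicit, which is one reason decision trees are often chosen when the resulting model needs to be interpretable.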

Decision tree analysis is used in many industries, including banking, medicine, marketing, and engineering (Kotsiantis et al., 2006). It is especially helpful for problems with binary outcomes, such as predicting customer churn, assessing credit risk, and detecting fraud (Wasserman, 2013).
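As a hedged sketch of one such binary-outcome setting, the example below predicts customer churn from two entirely synthetic, hypothetical features (monthly spend and tenure in months); the data-generating rule and all names are invented for illustration and do not come from the cited sources.

```python
# A hedged sketch of binary-outcome prediction (customer churn) with a decision tree.
# The features, labeling rule, and thresholds below are synthetic and illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: monthly spend and months as a customer
X = np.column_stack([rng.normal(50, 15, n), rng.integers(1, 60, n)])
# Hypothetical rule producing the churn label (1 = churned)
y = ((X[:, 0] < 40) & (X[:, 1] < 12)).astype(int)

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Estimated churn probability for a new customer: spend 35/month, 6 months tenure
print(clf.predict_proba([[35, 6]]))
```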


Application



Limitations



References


Breiman, L., Friedman, J., Stone, C. J., & Olshen, R. A. (1984). Classification and regression trees. Chapman and Hall.
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: Data mining, inference, and prediction (2nd ed.). Springer.
Kotsiantis, S. B., Zaharakis, I. D., & Pintelas, P. E. (2006). Machine learning: A review of classification and combining techniques. Artificial Intelligence Review, 26(3), 159-190.
Wasserman, L. (2013). All of statistics: A concise course in statistical inference. Springer Science & Business Media.
