Assessing and Improving Prediction and Classification: Theory and Algorithms in C++

Download the book Assessing and Improving Prediction and Classification: Theory and Algorithms in C++

47,000 Toman, available

The book Assessing and Improving Prediction and Classification: Theory and Algorithms in C++, original-language edition

Downloading Assessing and Improving Prediction and Classification: Theory and Algorithms in C++ will be possible after payment.
The book's description appears in the Details section below.


This is the original English edition; it is not in Persian.




About the book Assessing and Improving Prediction and Classification: Theory and Algorithms in C++

Title: Assessing and Improving Prediction and Classification: Theory and Algorithms in C++
Persian title: ارزیابی و بهبود پیش‌بینی و طبقه‌بندی: نظریه و الگوریتم‌ها در C++
Series:
Authors:
Publisher: Apress
Year: 2017
Pages: 529
ISBN: 9781484233368, 1484233360
Language: English
Format: PDF
File size: 5 MB



The download link will be provided once payment is complete. If you register and sign in to your account, you can view the list of books you have purchased.


Table of Contents:


Table of Contents
About the Author
About the Technical Reviewers
Preface
Chapter 1: Assessment of Numeric Predictions
Notation
Overview of Performance Measures
Consistency and Evolutionary Stability
Selection Bias and the Need for Three Datasets
Cross Validation and Walk-Forward Testing
Bias in Cross Validation
Overlap Considerations
Assessing Nonstationarity Using Walk-Forward Testing
Nested Cross Validation Revisited
Common Performance Measures
Mean Squared Error
Mean Absolute Error
R-Squared
RMS Error
Nonparametric Correlation
Success Ratios
Alternatives to Common Performance Measures
Stratification for Consistency
Confidence Intervals
The Confidence Set
Serial Correlation
Multiplicative Data
Normally Distributed Errors
Empirical Quantiles as Confidence Intervals
Confidence Bounds for Quantiles
Tolerance Intervals
Chapter 2: Assessment of Class Predictions
The Confusion Matrix
Expected Gain/Loss
ROC (Receiver Operating Characteristic) Curves
Hits, False Alarms, and Related Measures
Computing the ROC Curve
Area Under the ROC Curve
Cost and the ROC Curve
Optimizing ROC-Based Statistics
Optimizing the Threshold: Now or Later?
Maximizing Precision
Generalized Targets
Maximizing Total Gain
Maximizing Mean Gain
Maximizing the Standardized Mean Gain
Confidence in Classification Decisions
Hypothesis Testing
Confidence in the Confidence
Bayesian Methods
Multiple Classes
Hypothesis Testing vs. Bayes’ Method
Final Thoughts on Hypothesis Testing
Confidence Intervals for Future Performance
Chapter 3: Resampling for Assessing Parameter Estimates
Bias and Variance of Statistical Estimators
Plug-in Estimators and Empirical Distributions
Bias of an Estimator
Variance of an Estimator
Bootstrap Estimation of Bias and Variance
Code for Bias and Variance Estimation
Plug-in Estimators Can Provide Better Bootstraps
A Model Parameter Example
Confidence Intervals
Is the Interval Backward?
Improving the Percentile Method
Hypothesis Tests for Parameter Values
Bootstrapping Ratio Statistics
Jackknife Estimates of Bias and Variance
Bootstrapping Dependent Data
Estimating the Extent of Autocorrelation
The Stationary Bootstrap
Choosing a Block Size for the Stationary Bootstrap
The Tapered Block Bootstrap
Choosing a Block Size for the Tapered Block Bootstrap
What If the Block Size Is Wrong?
Chapter 4: Resampling for Assessing Prediction and Classification
Partitioning the Error
Cross Validation
Bootstrap Estimation of Population Error
Efron’s E0 Estimate of Population Error
Efron’s E632 Estimate of Population Error
Comparing the Error Estimators for Prediction
Comparing the Error Estimators for Classification
Summary
Chapter 5: Miscellaneous Resampling Techniques
Bagging
A Quasi-theoretical Justification
The Component Models
Code for Bagging
AdaBoost
Binary AdaBoost for Pure Classification Models
Probabilistic Sampling for Inflexible Models
Binary AdaBoost When the Model Provides Confidence
AdaBoost.MH for More Than Two Classes
AdaBoost.OC for More Than Two Classes
Comparing the Boosting Algorithms
A Binary Classification Problem
A Multiple-Class Problem
Final Thoughts on Boosting
Permutation Training and Testing
The Permutation Training Algorithm
Partitioning the Training Performance
A Demonstration of Permutation Training
Chapter 6: Combining Numeric Predictions
Simple Average
Code for Averaging Predictions
Unconstrained Linear Combinations
Constrained Linear Combinations
Constrained Combination of Unbiased Models
Variance-Weighted Interpolation
Combination by Kernel Regression Smoothing
Code for the GRNN
Comparing the Combination Methods
Chapter 7: Combining Classification Models
Introduction and Notation
Reduction vs. Ordering
The Majority Rule
Code for the Majority Rule
The Borda Count
The Average Rule
Code for the Average Rule
The Median Alternative
The Product Rule
The MaxMax and MaxMin Rules
The Intersection Method
The Union Rule
Logistic Regression
Code for the Combined Weight Method
The Logit Transform and Maximum Likelihood Estimation
Code for Logistic Regression
Separate Weight Sets
Model Selection by Local Accuracy
Code for Local Accuracy Selection
Maximizing the Fuzzy Integral
What Does This Have to Do with Classifier Combination?
Code for the Fuzzy Integral
Pairwise Coupling
Pairwise Threshold Optimization
A Cautionary Note
Comparing the Combination Methods
Small Training Set, Three Models
Large Training Set, Three Models
Small Training Set, Three Good Models, One Worthless
Large Training Set, Three Good Models, One Worthless
Small Training Set, Worthless and Noisy Models Included
Large Training Set, Worthless and Noisy Models Included
Five Classes
Chapter 8: Gating Methods
Preordained Specialization
Learned Specialization
After-the-Fact Specialization
Code for After-the-Fact Specialization
Some Experimental Results
General Regression Gating
Code for GRNN Gating
Experiments with GRNN Gating
Chapter 9: Information and Entropy
Entropy
Entropy of a Continuous Random Variable
Partitioning a Continuous Variable for Entropy
An Example of Improving Entropy
Joint and Conditional Entropy
Code for Conditional Entropy
Mutual Information
Fano’s Bound and Selection of Predictor Variables
Confusion Matrices and Mutual Information
Extending Fano’s Bound for Upper Limits
Simple Algorithms for Mutual Information
The TEST_DIS Program
Continuous Mutual Information
The Parzen Window Method
Adaptive Partitioning
The TEST_CON Program
Predictor Selection Using Mutual Information
Maximizing Relevance While Minimizing Redundancy
The MI_DISC and MI_CONT Programs
A Contrived Example of Information Minus Redundancy
A Superior Selection Algorithm for Binary Variables
Screening Without Redundancy
Asymmetric Information Measures
Uncertainty Reduction
Transfer Entropy: Schreiber’s Information Transfer
References
Index



