About the book Machine Learning with Python: Theory and Implementation
Title: Machine Learning with Python: Theory and Implementation
Title (Persian translation): یادگیری ماشین با پایتون: تئوری و پیاده سازی
Author: Amin Zollanvari
Publisher: Springer
Year of publication: 2023
Pages: 426
ISBN: 3031333411, 9783031333415
Language: English
Format: PDF
File size: 42 MB
Table of Contents:
Preface
About This Book
Primary Audience
Organization
Note for Instructors
Contents
Chapter 1 Introduction
1.1 General Concepts
1.1.1 Machine Learning
1.1.2 Supervised Learning
1.1.3 Unsupervised Learning
1.1.4 Semi-supervised Learning
1.1.5 Reinforcement Learning
1.1.6 Design Process
1.1.7 Artificial Intelligence
1.2 What Is “Learning” in Machine Learning?
1.3 An Illustrative Example
1.3.1 Data
1.3.2 Feature Selection
1.3.3 Feature Extraction
1.3.4 Segmentation
1.3.5 Training
1.3.6 Evaluation
1.4 Python in Machine Learning and Throughout This Book
Chapter 2 Getting Started with Python
2.1 First Things First: Installing What Is Needed
2.2 Jupyter Notebook
2.3 Variables
2.4 Strings
2.5 Some Important Operators
2.5.1 Arithmetic Operators
2.5.2 Relational and Logical Operators
2.5.3 Membership Operators
2.6 Built-in Data Structures
2.6.1 Lists
2.6.2 Tuples
2.6.3 Dictionaries
2.6.4 Sets
2.6.5 Some Remarks on Sequence Unpacking
2.7 Flow of Control and Some Python Idioms
2.7.1 for Loops
2.7.2 List Comprehension
2.7.3 if-elif-else
2.8 Function, Module, Package, and Alias
2.8.1 Functions
2.8.2 Modules and Packages
2.8.3 Aliases
2.9 Iterator, Generator Function, and Generator Expression
2.9.1 Iterator
2.9.2 Generator Function
2.9.3 Generator Expression
Chapter 3 Three Fundamental Python Packages
3.1 NumPy
3.1.1 Working with NumPy Package
3.1.2 NumPy Array Attributes
3.1.3 NumPy Built-in Functions for Array Creation
3.1.4 Array Indexing
3.1.5 Reshaping Arrays
3.1.6 Universal Functions (UFuncs)
3.1.7 Broadcasting
3.2 Pandas
3.2.1 Series
3.2.2 DataFrame
3.2.3 Pandas Read and Write Data
3.3 Matplotlib
3.3.1 Backend and Frontend
3.3.2 The Two matplotlib Interfaces: pyplot-style and OO-style
3.3.3 Two Instructive Examples
Chapter 4 Supervised Learning in Practice: The First Application Using Scikit-Learn
4.1 Supervised Learning
4.2 Scikit-Learn
4.3 The First Application: Iris Flower Classification
4.4 Test Set for Model Assessment
4.5 Data Visualization
4.6 Feature Scaling (Normalization)
4.7 Model Training
4.8 Prediction Using the Trained Model
4.9 Model Evaluation (Error Estimation)
Chapter 5 k-Nearest Neighbors
5.1 Classification
5.1.1 Standard kNN Classifier
5.1.2 Distance-Weighted kNN Classifier
5.1.3 The Choice of Distance
5.2 Regression
5.2.1 Standard kNN Regressor
5.2.2 A Regression Application Using kNN
5.2.3 Distance-Weighted kNN Regressor
Chapter 6 Linear Models
6.1 Optimal Classification
6.1.1 Discriminant Functions and Decision Boundaries
6.1.2 Bayes Classifier
6.2 Linear Models for Classification
6.2.1 Linear Discriminant Analysis
6.2.2 Logistic Regression
6.3 Linear Models for Regression
Chapter 7 Decision Trees
7.1 A Mental Model for House Price Classification
7.2 CART Development for Classification
7.2.1 Splits
7.2.2 Splitting Strategy
7.2.3 Classification at Leaf Nodes
7.2.4 Impurity Measures
7.2.5 Handling Weighted Samples
7.3 CART Development for Regression
7.3.1 Differences Between Classification and Regression
7.3.2 Impurity Measures
7.3.3 Regression at Leaf Nodes
7.4 Interpretability of Decision Trees
Chapter 8 Ensemble Learning
8.1 A General Perspective on the Efficacy of Ensemble Learning
8.1.1 Bias-Variance Decomposition
8.1.2 How Would Ensemble Learning Possibly Help?
8.2 Stacking
8.3 Bagging
8.4 Random Forest
8.5 Pasting
8.6 Boosting
8.6.1 AdaBoost
8.6.2 Gradient Boosting
8.6.3 Gradient Boosting Regression Tree
8.6.4 XGBoost
Chapter 9 Model Evaluation and Selection
9.1 Model Evaluation
9.1.1 Model Evaluation Rules
9.1.2 Evaluation Metrics for Classifiers
9.1.3 Evaluation Metrics for Regressors
9.2 Model Selection
9.2.1 Grid Search
9.2.2 Random Search
Chapter 10 Feature Selection
10.1 Dimensionality Reduction: Feature Selection and Extraction
10.2 Feature Selection Techniques
10.2.1 Filter Methods
10.2.2 Wrapper Methods
10.2.3 Embedded Methods
Chapter 11 Assembling Various Learning Steps
11.1 Using Cross-Validation Along with Other Steps in a Nutshell
11.2 A Common Mistake
11.3 Feature Selection and Model Evaluation Using Cross-Validation
11.4 Feature and Model Selection Using Cross-Validation
11.5 Nested Cross-Validation for Feature and Model Selection, and Evaluation
Chapter 13 Deep Learning with Keras-TensorFlow
13.1 Artificial Neural Network, Deep Learning, and Multilayer Perceptron
13.2 Backpropagation, Optimizer, Batch Size, and Epoch
13.3 Why Keras?
13.4 Google Colaboratory (Colab)
13.5 The First Application Using Keras
13.5.1 Classification of Handwritten Digits: MNIST Dataset
13.5.2 Building Model Structure in Keras
13.5.3 Compiling: optimizer, metrics, and loss
13.5.4 Fitting
13.5.5 Evaluating and Predicting
13.5.6 CPU vs. GPU Performance
13.5.7 Overfitting and Dropout
13.5.8 Hyperparameter Tuning
Chapter 14 Convolutional Neural Networks
14.1 CNN, Convolution, and Cross-Correlation
14.2 Working Mechanism of 2D Convolution
14.2.1 Convolution of a 2D Input Tensor with a 2D Kernel Tensor
14.2.2 Convolution of a 3D Input Tensor with a 4D Kernel Tensor
14.3 Implementation in Keras: Classification of Handwritten Digits
Chapter 15 Recurrent Neural Networks
15.1 Standard RNN and Stacked RNN
15.2 Vanishing and Exploding Gradient Problems
15.3 LSTM and GRU
15.4 Implementation in Keras: Sentiment Classification
References
Index