Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics)

Download the book Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics)

Price: 59,000 Toman (in stock)

Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics), original-language edition

The download link for Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics) will become available after payment.
The book's description appears in the Details section below, where you can review the particulars.


This book is the original English edition; it is not in Persian.




About the book Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics)

Title: Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics)
Edition: 1
Persian title: یادگیری آماری برای داده‌های وابسته بزرگ (سری‌های Wiley در احتمال و آمار)
Series: Wiley Series in Probability and Statistics
Authors: Daniel Peña, Ruey S. Tsay
Publisher: Wiley
Year: 2021
Pages: 556
ISBN: 1119417384, 9781119417385
Language: English
Format: PDF
File size: 20 MB



After payment is complete, the download link will be provided. If you register and sign in to your account, you can view the list of books you have purchased.


Table of Contents:


Cover
Title Page
Copyright
Contents
Preface
Chapter 1 Introduction to Big Dependent Data
1.1 Examples of Dependent Data
1.2 Stochastic Processes
1.2.1 Scalar Processes
1.2.1.1 Stationarity
1.2.1.2 White Noise Process
1.2.1.3 Conditional Distribution
1.2.2 Vector Processes
1.2.2.1 Vector White Noises
1.2.2.2 Invertibility
1.3 Sample Moments of Stationary Vector Process
1.3.1 Sample Mean
1.3.2 Sample Covariance and Correlation Matrices
1.4 Nonstationary Processes
1.5 Principal Component Analysis
1.5.1 Discussion
1.5.2 Properties of the PCs
1.6 Effects of Serial Dependence
Appendix 1.A: Some Matrix Theory
Exercises
References
Chapter 2 Linear Univariate Time Series
2.1 Visualizing a Large Set of Time Series
2.1.1 Dynamic Plots
2.1.2 Static Plots
2.2 Stationary ARMA Models
2.2.1 The Autoregressive Process
2.2.1.1 Autocorrelation Functions
2.2.2 The Moving Average Process
2.2.3 The ARMA Process
2.2.4 Linear Combinations of ARMA Processes
2.3 Spectral Analysis of Stationary Processes
2.3.1 Fitting Harmonic Functions to a Time Series
2.3.2 The Periodogram
2.3.3 The Spectral Density Function and Its Estimation
2.4 Integrated Processes
2.4.1 The Random Walk Process
2.4.2 ARIMA Models
2.4.3 Seasonal ARIMA Models
2.4.3.1 The Airline Model
2.5 Structural and State Space Models
2.5.1 Structural Time Series Models
2.5.2 State‐Space Models
2.5.3 The Kalman Filter
2.6 Forecasting with Linear Models
2.6.1 Computing Optimal Predictors
2.6.2 Variances of the Predictions
2.6.3 Measuring Predictability
2.7 Modeling a Set of Time Series
2.7.1 Data Transformation
2.7.2 Testing for White Noise
2.7.3 Determination of the Difference Order
2.7.4 Model Identification
2.8 Estimation and Information Criteria
2.8.1 Conditional Likelihood
2.8.2 On‐line Estimation
2.8.3 Maximum Likelihood (ML) Estimation
2.8.4 Model Selection
2.8.4.1 The Akaike Information Criterion (AIC)
2.8.4.2 The Bayesian Information Criterion (BIC)
2.8.4.3 Other Criteria
2.8.4.4 Cross‐Validation
2.9 Diagnostic Checking
2.9.1 Residual Plot
2.9.2 Portmanteau Test for Residual Serial Correlations
2.9.3 Homoscedastic Tests
2.9.4 Normality Tests
2.9.5 Checking for Deterministic Components
2.10 Forecasting
2.10.1 Out‐of‐Sample Forecasts
2.10.2 Forecasting with Model Averaging
2.10.3 Forecasting with Shrinkage Estimators
Appendix 2.A: Difference Equations
2.A.1 Validity of a Solution and Initial Values
Exercises
References
Chapter 3 Analysis of Multivariate Time Series
3.1 Transfer Function Models
3.1.1 Single Input and Single Output
3.1.2 Multiple Inputs and Multiple Outputs
3.2 Vector AR Models
3.2.1 Impulse Response Function
3.2.2 Some Special Cases
3.2.3 Estimation
3.2.4 Model Building
3.2.5 Prediction
3.2.6 Forecast Error Variance Decomposition
3.3 Vector Moving‐Average Models
3.3.1 Properties of VMA Models
3.3.2 VMA Modeling
3.4 Stationary VARMA Models
3.4.1 Are VAR Models Sufficient?
3.4.2 Properties of VARMA Models
3.4.3 Modeling VARMA Process
3.4.4 Use of VARMA Models
3.5 Unit Roots and Co‐integration
3.5.1 Spurious Regression
3.5.2 Linear Combinations of a Vector Process
3.5.3 Co‐integration
3.5.4 Over‐Differencing
3.6 Error‐Correction Models
3.6.1 Co‐integration Test
Exercises
References
Chapter 4 Handling Heterogeneity in Many Time Series
4.1 Intervention Analysis
4.1.1 Intervention with Indicator Variables
4.1.2 Intervention with Step Functions
4.1.3 Intervention with General Exogenous Variables
4.1.4 Building an Intervention Model
4.2 Estimation of Missing Values
4.2.1 Univariate Interpolation
4.2.2 Multivariate Interpolation
4.3 Outliers in Vector Time Series
4.3.1 Multivariate Additive Outliers
4.3.1.1 Effects on Residuals and Estimation
4.3.2 Multivariate Level Shift or Structural Break
4.3.2.1 Effects on Residuals and Estimation
4.3.3 Other Types of Outliers
4.3.3.1 Multivariate Innovative Outliers
4.3.3.2 Transitory Change
4.3.3.3 Ramp Shift
4.3.4 Masking and Swamping
4.4 Univariate Outlier Detection
4.4.1 Other Procedures for Univariate Outlier Detection
4.4.2 New Approaches to Outlier Detection
4.5 Multivariate Outlier Detection
4.5.1 VARMA Outlier Detection
4.5.2 Outlier Detection by Projections
4.5.3 A Projection Algorithm for Outlier Detection
4.5.4 The Nonstationary Case
4.6 Robust Estimation
4.7 Heterogeneity for Parameter Changes
4.7.1 Parameter Changes in Univariate Time Series
4.7.2 Covariance Changes in Multivariate Time Series
4.7.2.1 Detecting Multiple Covariance Changes
4.7.2.2 LR Test
Appendix 4.A: Cusum Algorithms
4.A.1 Detecting Univariate LS
4.A.2 Detecting Multivariate Level Shift
4.A.3 Detecting Multiple Covariance Changes
Exercises
References
Chapter 5 Clustering and Classification of Time Series
5.1 Distances and Dissimilarities
5.1.1 Distance Between Univariate Time Series
5.1.2 Dissimilarities Between Univariate Series
5.1.3 Dissimilarities Based on Cross‐Linear Dependency
5.2 Hierarchical Clustering of Time Series
5.2.1 Criteria for Defining Distances Between Groups
5.2.2 The Dendrogram
5.2.3 Selecting the Number of Groups
5.2.3.1 The Height and Step Plots
5.2.3.2 Silhouette Statistic
5.2.3.3 The Gap Statistic
5.3 Clustering by Variables
5.3.1 The k‐means Algorithm
5.3.1.1 Number of Groups
5.3.2 k‐Medoids
5.3.3 Model‐Based Clustering by Variables
5.3.3.1 Maximum Likelihood (ML) Estimation of the AR Mixture Model
5.3.3.2 The EM Algorithm
5.3.3.3 Estimation of Mixture of Multivariate Normals
5.3.3.4 Bayesian Estimation
5.3.3.5 Clustering with Structural Breaks
5.3.4 Clustering by Projections
5.4 Classification with Time Series
5.4.1 Classification Among a Set of Models
5.4.2 Checking the Classification Rule
5.5 Classification with Features
5.5.1 Linear Discriminant Function
5.5.2 Quadratic Classification and Admissible Functions
5.5.3 Logistic Regression
5.6 Nonparametric Classification
5.6.1 Nearest Neighbors
5.6.2 Support Vector Machines
5.6.2.1 Linearly Separable Problems
5.6.2.2 Nonlinearly Separable Problems
5.6.3 Density Estimation
5.7 Other Classification Problems and Methods
Exercises
References
Chapter 6 Dynamic Factor Models
6.1 The DFM for Stationary Series
6.1.1 Properties of the Covariance Matrices
6.1.1.1 The Exact DFM
6.1.1.2 The Approximate DFM
6.1.2 Dynamic Factor and VARMA Models
6.2 Fitting a Stationary DFM to Data
6.2.1 Principal Components (PC) Estimation
6.2.2 Pooled PC Estimator
6.2.3 Generalized PC Estimator
6.2.4 ML Estimation
6.2.5 Selecting the Number of Factors
6.2.5.1 Rank Testing via Canonical Correlation
6.2.5.2 Testing a Jump in Eigenvalues
6.2.5.3 Using Information Criteria
6.2.6 Forecasting with DFM
6.2.7 Alternative Formulations of the DFM
6.3 Generalized DFM (GDFM) for Stationary Series
6.3.1 Some Properties of the GDFM
6.3.2 GDFM and VARMA Models
6.4 Dynamic Principal Components
6.4.1 Dynamic Principal Components for Optimal Reconstruction
6.4.2 One‐Sided DPCs
6.4.3 Model Selection and Forecasting
6.4.4 One‐Sided DPC and GDFM Estimation
6.5 DFM for Nonstationary Series
6.5.1 Cointegration and DFM
6.6 GDFM for Nonstationary Series
6.6.1 Estimation by Generalized DPC
6.7 Outliers in DFMs
6.7.1 Factor and Idiosyncratic Outliers
6.7.2 A Procedure to Find Outliers in DFM
6.8 DFM with Cluster Structure
6.8.1 Fitting DFMCS
6.9 Some Extensions of DFM
6.10 High‐Dimensional Case
6.10.1 Sparse PCs
6.10.2 A Structural‐FM Approach
6.10.3 Estimation
6.10.4 Selecting the Number of Common Factors
6.10.5 Asymptotic Properties of Loading Estimates
Appendix 6.A: Some R Commands
Exercises
References
Chapter 7 Forecasting with Big Dependent Data
7.1 Regularized Linear Models
7.1.1 Properties of Lasso Estimator
7.1.2 Some Extensions of Lasso Regression
7.1.2.1 Adaptive Lasso
7.1.2.2 Group Lasso
7.1.2.3 Elastic Net
7.1.2.4 Fused Lasso
7.1.2.5 SCAD Penalty
7.2 Impacts of Dynamic Dependence on Lasso
7.3 Lasso for Dependent Data
7.4 Principal Component Regression and Diffusion Index
7.5 Partial Least Squares
7.6 Boosting
7.6.1 ℓ2 Boosting
7.6.2 Choices of Weak Learner
7.6.3 Boosting for Classification
7.7 Mixed‐Frequency Data and Nowcasting
7.7.1 MIDAS Regression
7.7.2 Nowcasting
7.8 Strong Serial Dependence
Exercises
References
Chapter 8 Machine Learning of Big Dependent Data
8.1 Regression Trees and Random Forests
8.1.1 Growing Tree
8.1.2 Pruning
8.1.3 Classification Trees
8.1.4 Random Forests
8.2 Neural Networks
8.2.1 Network Training
8.3 Deep Learning
8.3.1 Types of Deep Networks
8.3.2 Recurrent NN
8.3.3 Activation Functions for Deep Learning
8.3.4 Training Deep Networks
8.3.4.1 Long Short‐Term Memory Model
8.3.4.2 Training Algorithm
8.4 Some Applications
8.4.1 The Package: keras
8.4.2 Dropout Layer
8.4.3 Application of Convolution Networks
8.4.4 Application of LSTM
8.5 Deep Generative Models
8.6 Reinforcement Learning
Exercises
References
Chapter 9 Spatio‐Temporal Dependent Data
9.1 Examples and Visualization of Spatio‐Temporal Data
9.2 Spatial Processes and Data Analysis
9.3 Geostatistical Processes
9.3.1 Stationary Variogram
9.3.2 Examples of Semivariogram
9.3.3 Stationary Covariance Function
9.3.4 Estimation of Variogram
9.3.5 Testing Spatial Dependence
9.3.6 Kriging
9.3.6.1 Simple Kriging
9.3.6.2 Ordinary Kriging
9.3.6.3 Universal Kriging
9.4 Lattice Processes
9.4.1 Markov‐Type Models
9.5 Spatial Point Processes
9.5.1 Second‐Order Intensity
9.6 S‐T Processes and Analysis
9.6.1 Basic Properties
9.6.2 Some Nonseparable Covariance Functions
9.6.3 S‐T Variogram
9.6.4 S‐T Kriging
9.7 Descriptive S‐T Models
9.7.1 Random Effects with S‐T Basis Functions
9.7.2 Random Effects with Spatial Basis Functions
9.7.3 Fixed Rank Kriging
9.7.4 Spatial Principal Component Analysis
9.7.5 Random Effects with Temporal Basis Functions
9.8 Dynamic S‐T Models
9.8.1 Space‐Time Autoregressive Moving‐Average Models
9.8.2 S‐T Component Models
9.8.3 S‐T Factor Models
9.8.4 S‐T HMs
Appendix 9.A: Some R Packages and Commands
Exercises
References
Index



