About the book Deep Learning Interviews: Hundreds of fully solved job interview questions from a wide range of key topics in AI.
Book title: Deep Learning Interviews: Hundreds of fully solved job interview questions from a wide range of key topics in AI.
Title translated into Persian: مصاحبه های یادگیری عمیق: صدها سؤال مصاحبه شغلی کاملاً حل شده از طیف گسترده ای از موضوعات کلیدی در هوش مصنوعی.
Series:
Authors: Shlomo Kashani, Amir Ivry (editor)
Publisher: Interviews AI
Year of publication: 2020
Pages: 401
ISBN: 1916243568, 9781916243569
Language: English
Format: PDF
File size: 15 MB
Table of contents:
Copyright
I Rusty Nail
1 HOW-TO USE THIS BOOK
1.1 Introduction
1.1.1 What makes this book so valuable
1.1.2 What will I learn
1.1.3 How to Work Problems
1.1.4 Types of Problems
II Kindergarten
2 LOGISTIC REGRESSION
2.1 Introduction
2.2 Problems
2.2.1 General Concepts
2.2.2 Odds, Log-odds
2.2.3 The Sigmoid
2.2.4 Truly Understanding Logistic Regression
2.2.5 The Logit Function and Entropy
2.2.6 Python/PyTorch/CPP
2.3 Solutions
2.3.1 General Concepts
2.3.2 Odds, Log-odds
2.3.3 The Sigmoid
2.3.4 Truly Understanding Logistic Regression
2.3.5 The Logit Function and Entropy
2.3.6 Python/PyTorch/CPP
3 PROBABILISTIC PROGRAMMING & BAYESIAN DL
3.1 Introduction
3.2 Problems
3.2.1 Expectation and Variance
3.2.2 Conditional Probability
3.2.3 Bayes Rule
3.2.4 Maximum Likelihood Estimation
3.2.5 Fisher Information
3.2.6 Posterior & prior predictive distributions
3.2.7 Conjugate priors
3.2.8 Bayesian Deep Learning
3.3 Solutions
3.3.1 Expectation and Variance
3.3.2 Conditional Probability
3.3.3 Bayes Rule
3.3.4 Maximum Likelihood Estimation
3.3.5 Fisher Information
3.3.6 Posterior & prior predictive distributions
3.3.7 Conjugate priors
3.3.8 Bayesian Deep Learning
III High School
4 INFORMATION THEORY
4.1 Introduction
4.2 Problems
4.2.1 Logarithms in Information Theory
4.2.2 Shannon's Entropy
4.2.3 Kullback-Leibler Divergence (KLD)
4.2.4 Classification and Information Gain
4.2.5 Mutual Information
4.2.6 Mechanical Statistics
4.2.7 Jensen's inequality
4.3 Solutions
4.3.1 Logarithms in Information Theory
4.3.2 Shannon's Entropy
4.3.3 Kullback-Leibler Divergence
4.3.4 Classification and Information Gain
4.3.5 Mutual Information
4.3.6 Mechanical Statistics
4.3.7 Jensen's inequality
5 DEEP LEARNING: CALCULUS, ALGORITHMIC DIFFERENTIATION
5.1 Introduction
5.2 Problems
5.2.1 AD, Gradient descent & Backpropagation
5.2.2 Numerical differentiation
5.2.3 Directed Acyclic Graphs
5.2.4 The chain rule
5.2.5 Taylor series expansion
5.2.6 Limits and continuity
5.2.7 Partial derivatives
5.2.8 Optimization
5.2.9 The Gradient descent algorithm
5.2.10 The Backpropagation algorithm
5.2.11 Feed forward neural networks
5.2.12 Activation functions, Autograd/JAX
5.2.13 Dual numbers in AD
5.2.14 Forward mode AD
5.2.15 Forward mode AD table construction
5.2.16 Symbolic differentiation
5.2.17 Simple differentiation
5.2.18 The Beta-Binomial model
5.3 Solutions
5.3.1 Algorithmic differentiation, Gradient descent
5.3.2 Numerical differentiation
5.3.3 Directed Acyclic Graphs
5.3.4 The chain rule
5.3.5 Taylor series expansion
5.3.6 Limits and continuity
5.3.7 Partial derivatives
5.3.8 Optimization
5.3.9 The Gradient descent algorithm
5.3.10 The Backpropagation algorithm
5.3.11 Feed forward neural networks
5.3.12 Activation functions, Autograd/JAX
5.3.13 Dual numbers in AD
5.3.14 Forward mode AD
5.3.15 Forward mode AD table construction
5.3.16 Symbolic differentiation
5.3.17 Simple differentiation
5.3.18 The Beta-Binomial model
IV Bachelors
6 DEEP LEARNING: NN ENSEMBLES
6.1 Introduction
6.2 Problems
6.2.1 Bagging, Boosting and Stacking
6.2.2 Approaches for Combining Predictors
6.2.3 Monolithic and Heterogeneous Ensembling
6.2.4 Ensemble Learning
6.2.5 Snapshot Ensembling
6.2.6 Multi-model Ensembling
6.2.7 Learning-rate Schedules in Ensembling
6.3 Solutions
6.3.1 Bagging, Boosting and Stacking
6.3.2 Approaches for Combining Predictors
6.3.3 Monolithic and Heterogeneous Ensembling
6.3.4 Ensemble Learning
6.3.5 Snapshot Ensembling
6.3.6 Multi-model Ensembling
6.3.7 Learning-rate Schedules in Ensembling
7 DEEP LEARNING: CNN FEATURE EXTRACTION
7.1 Introduction
7.2 Problems
7.2.1 CNN as Fixed Feature Extractor
7.2.2 Fine-tuning CNNs
7.2.3 Neural style transfer, NST
7.3 Solutions
7.3.1 CNN as Fixed Feature Extractor
7.3.2 Fine-tuning CNNs
7.3.3 Neural style transfer
8 DEEP LEARNING
8.1 Introduction
8.2 Problems
8.2.1 Cross Validation
8.2.2 Convolution and correlation
8.2.3 Similarity measures
8.2.4 Perceptrons
8.2.5 Activation functions (rectification)
8.2.6 Performance Metrics
8.2.7 NN Layers, topologies, blocks
8.2.8 Training, hyperparameters
8.2.9 Optimization, Loss
8.3 Solutions
8.3.1 Cross Validation
8.3.2 Convolution and correlation
8.3.3 Similarity measures
8.3.4 Perceptrons
8.3.5 Activation functions (rectification)
8.3.6 Performance Metrics
8.3.7 NN Layers, topologies, blocks
8.3.8 Training, hyperparameters
8.3.9 Optimization, Loss
V Practice Exam
9 JOB INTERVIEW MOCK EXAM
9.0.1 Rules
9.1 Problems
9.1.1 Perceptrons
9.1.2 CNN layers
9.1.3 Classification, Logistic regression
9.1.4 Information theory
9.1.5 Feature extraction
9.1.6 Bayesian deep learning
VI Volume two
10 VOLUME TWO - PLAN
10.1 Introduction
10.2 AI system design
10.3 Advanced CNN topologies
10.4 1D CNNs
10.5 3D CNNs
10.6 Data augmentations
10.7 Object detection
10.8 Object segmentation
10.9 Semantic segmentation
10.10 Instance segmentation
10.11 Image classification
10.12 Image captioning
10.13 NLP
10.14 RNN
10.15 LSTM
10.16 GANs
10.17 Adversarial attacks and defences
10.18 Variational auto encoders
10.19 FCN
10.20 Seq2Seq
10.21 Monte Carlo, ELBO, Re-parametrization
10.22 Text to speech
10.23 Speech to text
10.24 CRF
10.25 Quantum computing
10.26 RL