Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series)

Download the book Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series)

Price: 52,000 Toman (in stock)

Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series), original-language edition

The download of Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series) becomes available after payment.
The book description is given in the details section below.


This is the original-language edition; it is not in Persian.




About the book Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series)

Book title: Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series)
Edition: 1st ed.
Title translated into Persian: یادگیری عمیق و محاسبات علمی با مشعل R (Chapman & Hall/CRC The R Series)
Series:
Authors:
Publisher: Chapman and Hall/CRC
Year of publication: 2023
Number of pages: 394 [414]
ISBN: 1032231386, 9781032231389
Language: English
Format: PDF
File size: 11 MB



The download link will be provided once the payment process is complete. If you register and sign in to your account, you will be able to view the list of books you have purchased.

Book description:


This book aims to be useful to (almost) everyone. Deep Learning and Scientific Computing with R torch provides a thorough introduction to the basics of torch, both by carefully explaining the underlying concepts and ideas, and by showing enough examples for the reader to become "fluent" in torch.

Table of contents:


Cover, Half Title, Series Page, Title Page, Copyright Page, Contents, List of Figures, Preface, Author Biography

I. Getting Familiar with Torch
1. Overview
2. On torch, and How to Get It 2.1. In torch World 2.2. Installing and Running torch
3. Tensors 3.1. What’s in a Tensor? 3.2. Creating Tensors 3.2.1. Tensors from values 3.2.2. Tensors from specifications 3.2.3. Tensors from datasets 3.3. Operations on Tensors 3.3.1. Summary operations 3.4. Accessing Parts of a Tensor 3.4.1. “Think R” 3.5. Reshaping Tensors 3.5.1. Zero-copy reshaping vs. reshaping with copy 3.6. Broadcasting 3.6.1. Broadcasting rules
4. Autograd 4.1. Why Compute Derivatives? 4.2. Automatic Differentiation Example 4.3. Automatic Differentiation with torch autograd
5. Function Minimization with autograd 5.1. An Optimization Classic 5.2. Minimization from Scratch
6. A Neural Network from Scratch 6.1. Idea 6.2. Layers 6.3. Activation Functions 6.4. Loss Functions 6.5. Implementation 6.5.1. Generate random data 6.5.2. Build the network 6.5.3. Train the network
7. Modules 7.1. Built-in nn_module()s 7.2. Building up a Model 7.2.1. Models as sequences of layers: nn_sequential() 7.2.2. Models with custom logic
8. Optimizers 8.1. Why Optimizers? 8.2. Using built-in torch Optimizers 8.3. Parameter Update Strategies 8.3.1. Gradient descent (a.k.a. steepest descent, a.k.a. stochastic gradient descent (SGD)) 8.3.2. Things that matter 8.3.3. Staying on track: Gradient descent with momentum 8.3.4. Adagrad 8.3.5. RMSProp 8.3.6. Adam
9. Loss Functions 9.1. torch Loss Functions 9.2. What Loss Function Should I Choose? 9.2.1. Maximum likelihood 9.2.2. Regression 9.2.3. Classification
10. Function Minimization with L-BFGS 10.1. Meet L-BFGS 10.1.1. Changing slopes 10.1.2. Exact Newton method 10.1.3. Approximate Newton: BFGS and L-BFGS 10.1.4. Line search 10.2. Minimizing the Rosenbrock Function with optim_lbfgs() 10.2.1. optim_lbfgs() default behavior 10.2.2. optim_lbfgs() with line search
11. Modularizing the Neural Network 11.1. Data 11.2. Network 11.3. Training 11.4. What’s to Come

II. Deep Learning with torch
12. Overview
13. Loading Data 13.1. Data vs. dataset() vs. dataloader() – What’s the Difference? 13.2. Using dataset()s 13.2.1. A self-built dataset() 13.2.2. tensor_dataset() 13.2.3. torchvision::mnist_dataset() 13.3. Using dataloader()s
14. Training with luz 14.1. Que haya luz – Que haja luz – Let there be Light 14.2. Porting the Toy Example 14.2.1. Data 14.2.2. Model 14.2.3. Training 14.3. A More Realistic Scenario 14.3.1. Integrating training, validation, and test 14.3.2. Using callbacks to “hook” into the training process 14.3.3. How luz helps with devices 14.4. Appendix: A Train-Validate-Test Workflow Implemented by Hand
15. A First Go at Image Classification 15.1. What does It Take to Classify an Image? 15.2. Neural Networks for Feature Detection and Feature Emergence 15.2.1. Detecting low-level features with cross-correlation 15.2.2. Build up feature hierarchies 15.3. Classification on Tiny Imagenet 15.3.1. Data pre-processing 15.3.2. Image classification from scratch
16. Making Models Generalize 16.1. The Royal Road: more – and More Representative! – Data 16.2. Pre-processing Stage: Data Augmentation 16.2.1. Classic data augmentation 16.2.2. Mixup 16.3. Modeling Stage: Dropout and Regularization 16.3.1. Dropout 16.3.2. Regularization 16.4. Training Stage: Early Stopping
17. Speeding up Training 17.1. Batch Normalization 17.2. Dynamic Learning Rates 17.2.1. Learning rate finder 17.2.2. Learning rate schedulers 17.3. Transfer Learning
18. Image Classification, Take Two: Improving Performance 18.1. Data Input (Common for all) 18.2. Run 1: Dropout 18.3. Run 2: Batch Normalization 18.4. Run 3: Transfer Learning
19. Image Segmentation 19.1. Segmentation vs. Classification 19.2. U-Net, a “classic” in image segmentation 19.3. U-Net – a torch implementation 19.3.1. Encoder 19.3.2. Decoder 19.3.3. The “U” 19.3.4. Top-level module 19.4. Dogs and Cats
20. Tabular Data 20.1. Types of Numerical Data, by Example 20.2. A torch dataset for Tabular Data 20.3. Embeddings in Deep Learning: The Idea 20.4. Embeddings in deep learning: Implementation 20.5. Model and Model Training 20.6. Embedding-generated Representations by Example
21. Time Series 21.1. Deep Learning for Sequences: The Idea 21.2. A Basic Recurrent Neural Network 21.2.1. Basic rnn_cell() 21.2.2. Basic rnn_module() 21.3. Recurrent Neural Networks in torch 21.4. RNNs in Practice: GRU and LSTM 21.5. Forecasting Electricity Demand 21.5.1. Data inspection 21.5.2. Forecasting the very next value 21.5.3. Forecasting multiple time steps ahead
22. Audio Classification 22.1. Classifying Speech Data 22.2. Two Equivalent Representations 22.3. Combining Representations: The Spectrogram 22.4. Training a Model for Audio Classification 22.4.1. Baseline setup: Training a convnet on spectrograms 22.4.2. Variation one: Use a Mel-scale spectrogram instead 22.4.3. Variation two: Complex-valued spectrograms

III. Other Things to do with torch: Matrices, Fourier Transform, and Wavelets
23. Overview
24. Matrix Computations: Least-squares Problems 24.1. Five Ways to do Least Squares 24.2. Regression for Weather Prediction 24.2.1. Least squares (I): Setting expectations with lm() 24.2.2. Least squares (II): Using linalg_lstsq() 24.2.3. Interlude: What if we hadn’t standardized the data? 24.2.4. Least squares (III): The normal equations 24.2.5. Least squares (IV): Cholesky decomposition 24.2.6. Least squares (V): LU factorization 24.2.7. Least squares (VI): QR factorization 24.2.8. Least squares (VII): Singular Value Decomposition (SVD) 24.2.9. Checking execution times 24.3. A Quick Look at Stability
25. Matrix Computations: Convolution 25.1. Why Convolution? 25.2. Convolution in One Dimension 25.2.1. Two ways to think about convolution 25.2.2. Implementation 25.3. Convolution in Two Dimensions 25.3.1. How it works (output view) 25.3.2. Implementation
26. Exploring the Discrete Fourier Transform (DFT) 26.1. Understanding the Output of torch_fft_fft() 26.1.1. Starting point: A cosine of frequency 1 26.1.2. Reconstructing the magic 26.1.3. Varying frequency 26.1.4. Varying amplitude 26.1.5. Adding phase 26.1.6. Superposition of sinusoids 26.2. Coding the DFT 26.3. Fun with sox
27. The Fast Fourier Transform (FFT) 27.1. Some Terminology 27.2. Radix-2 decimation-in-time (DIT) walkthrough 27.2.1. The main idea: Recursive split 27.2.2. One further simplification 27.3. FFT as Matrix Factorization 27.4. Implementing the FFT 27.4.1. DFT, the “loopy” way 27.4.2. DFT, vectorized 27.4.3. Radix-2 decimation in time FFT, recursive 27.4.4. Radix-2 decimation in time FFT by matrix factorization 27.4.5. Radix-2 decimation in time FFT, optimized for vectorization 27.4.6. Checking against torch_fft_fft() 27.4.7. Comparing performance 27.4.8. Making use of Just-in-Time (JIT) compilation
28. Wavelets 28.1. Introducing the Morlet Wavelet 28.2. The roles of
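As a small, illustrative taste of the torch basics covered in Part I (tensors in Chapter 3, autograd in Chapter 4), the following R sketch creates a tensor that tracks gradients and lets autograd compute a derivative. It is not an excerpt from the book, and it assumes the torch R package is already installed (install.packages("torch"), then torch::install_torch() on first use).

library(torch)

# a tensor that records operations for automatic differentiation
x <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)

# a scalar function of x: y = sum(x^2)
y <- (x^2)$sum()

# autograd computes dy/dx = 2 * x
y$backward()
x$grad   # expected values: 2, 4, 6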

Description in the original language:


This book aims to be useful to (almost) everyone. Deep Learning and Scientific Computing with R Torch provides a thorough introduction to torch basics - both by carefully explaining underlying concepts and ideas, and showing enough examples for the reader to become "fluent" in torch.


