Generative AI with Python and TensorFlow 2: Create images, text, and music with VAEs, GANs, LSTMs, Transformer models

Download the book Generative AI with Python and TensorFlow 2: Create images, text, and music with VAEs, GANs, LSTMs, Transformer models

37,000 Toman (in stock)

Generative AI with Python and TensorFlow 2: Create images, text, and music with VAEs, GANs, LSTMs, Transformer models (original-language edition)

The download of Generative AI with Python and TensorFlow 2: Create images, text, and music with VAEs, GANs, LSTMs, Transformer models will become available after payment.
The book description is provided in the details section, where you can review it.


If the author is Iranian, the download is not available and the payment will be refunded.

This book is the original edition and is not in Persian.



About the book Generative AI with Python and TensorFlow 2: Create images, text, and music with VAEs, GANs, LSTMs, Transformer models

Book title: Generative AI with Python and TensorFlow 2: Create images, text, and music with VAEs, GANs, LSTMs, Transformer models
Persian title: هوش مصنوعی مولد با پایتون و تنسورفلو 2: ایجاد تصاویر، متن و موسیقی با VAE، GAN، LSTM، مدل‌های ترانسفورماتور
Series:
Authors:
Publisher: Packt Publishing
Year of publication: 2021
Number of pages: 489
ISBN: 1800200889, 9781800200883
Language: English
File format: PDF
File size: 15 MB



The download link will be provided after the payment process is complete. If you register and sign in to your account, you will be able to view the list of books you have purchased.


Table of contents:


Copyright
Contributors
Table of Contents
Preface
Chapter 1: An Introduction to Generative AI: "Drawing" Data from Models
Applications of AI
Discriminative and generative models
Implementing generative models
The rules of probability
Discriminative and generative modeling and Bayes' theorem
Why use generative models?
The promise of deep learning
Building a better digit classifier
Generating images
Style transfer and image transformation
Fake news and chatbots
Sound composition
The rules of the game
Unique challenges of generative models
Summary
References
Chapter 2: Setting Up a TensorFlow Lab
Deep neural network development and TensorFlow
TensorFlow 2.0
VSCode
Docker: A lightweight virtualization solution
Important Docker commands and syntax
Connecting Docker containers with docker-compose
Kubernetes: Robust management of multi-container applications
Important Kubernetes commands
Kustomize for configuration management
Kubeflow: an end-to-end machine learning lab
Running Kubeflow locally with MiniKF
Installing Kubeflow in AWS
Installing Kubeflow in GCP
Installing Kubeflow on Azure
Installing Kubeflow using Terraform
A brief tour of Kubeflow's components
Kubeflow notebook servers
Kubeflow pipelines
Using Kubeflow Katib to optimize model hyperparameters
Summary
References
Chapter 3: Building Blocks of Deep Neural Networks
Perceptrons – a brain in a function
From tissues to TLUs
From TLUs to tuning perceptrons
Multi-layer perceptrons and backpropagation
Backpropagation in practice
The shortfalls of backpropagation
Varieties of networks: Convolution and recursive
Networks for seeing: Convolutional architectures
Early CNNs
AlexNet and other CNN innovations
AlexNet architecture
Networks for sequence data
RNNs and LSTMs
Building a better optimizer
Gradient descent to ADAM
Xavier initialization
Summary
References
Chapter 4: Teaching Networks to Generate Digits
The MNIST database
Retrieving and loading the MNIST dataset in TensorFlow
Restricted Boltzmann Machines: generating pixels with statistical mechanics
Hopfield networks and energy equations for neural networks
Modeling data with uncertainty with Restricted Boltzmann Machines
Contrastive divergence: Approximating a gradient
Stacking Restricted Boltzmann Machines to generate images: the Deep Belief Network
Creating an RBM using the TensorFlow Keras layers API
Creating a DBN with the Keras Model API
Summary
References
Chapter 5: Painting Pictures with Neural Networks Using VAEs
Creating separable encodings of images
The variational objective
The reparameterization trick
Inverse Autoregressive Flow
Importing CIFAR
Creating the network from TensorFlow 2
Summary
References
Chapter 6: Image Generation with GANs
The taxonomy of generative models
Generative adversarial networks
The generator model
Training GANs
Non-saturating generator cost
Maximum likelihood game
Vanilla GAN
Improved GANs
Deep Convolutional GAN
Vector arithmetic
Conditional GAN
Wasserstein GAN
Progressive GAN
The overall method
Progressive growth-smooth fade-in
Minibatch standard deviation
Equalized learning rate
Pixelwise normalization
TensorFlow Hub implementation
Challenges
Training instability
Mode collapse
Uninformative loss and evaluation metrics
Summary
References
Chapter 7: Style Transfer with GANs
Paired style transfer using pix2pix GAN
The U-Net generator
The Patch-GAN discriminator
Loss
Training pix2pix
Use cases
Unpaired style transfer using CycleGAN
Overall setup for CycleGAN
Adversarial loss
Cycle loss
Identity loss
Overall loss
Hands-on: Unpaired style transfer with CycleGAN
Generator setup
Discriminator setup
GAN setup
The training loop
Related works
DiscoGAN
DualGAN
Summary
References
Chapter 8: Deepfakes with GANs
Deepfakes overview
Modes of operation
Replacement
Re-enactment
Editing
Key feature set
Facial Action Coding System (FACS)
3D Morphable Model
Facial landmarks
Facial landmark detection using OpenCV
Facial landmark detection using dlib
Facial landmark detection using MTCNN
High-level workflow
Common architectures
Encoder-Decoder (ED)
Generative Adversarial Networks (GANs)
Replacement using autoencoders
Task definition
Dataset preparation
Autoencoder architecture
Training our own face swapper
Results and limitations
Re-enactment using pix2pix
Dataset preparation
Pix2pix GAN setup and training
Results and limitations
Challenges
Ethical issues
Technical challenges
Generalization
Occlusions
Temporal issues
Off-the-shelf implementations
Summary
References
Chapter 9: The Rise of Methods for Text Generation
Representing text
Bag of Words
Distributed representation
Word2vec
GloVe
FastText
Text generation and the magic of LSTMs
Language modeling
Hands-on: Character-level language model
Decoding strategies
Greedy decoding
Beam search
Sampling
Hands-on: Decoding strategies
LSTM variants and convolutions for text
Stacked LSTMs
Bidirectional LSTMs
Convolutions and text
Summary
References
Chapter 10: NLP 2.0: Using Transformers to Generate Text
Attention
Contextual embeddings
Self-attention
Transformers
Overall architecture
Multi-head self-attention
Positional encodings
BERT-ology
GPT 1, 2, 3…
Generative pre-training: GPT
GPT-2
Hands-on with GPT-2
Mammoth GPT-3
Summary
References
Chapter 11: Composing Music with Generative Models
Getting started with music generation
Representing music
Music generation using LSTMs
Dataset preparation
LSTM model for music generation
Music generation using GANs
Generator network
Discriminator network
Training and results
MuseGAN – polyphonic music generation
Jamming model
Composer model
Hybrid model
Temporal model
MuseGAN
Generators
Critic
Training and results
Summary
References
Chapter 12: Play Video Games with Generative AI: GAIL
Reinforcement learning: Actions, agents, spaces, policies, and rewards
Deep Q-learning
Inverse reinforcement learning: Learning from experts
Adversarial learning and imitation
Running GAIL on PyBullet Gym
The agent: Actor-Critic network
The discriminator
Training and results
Summary
References
Chapter 13: Emerging Applications in Generative AI
Introduction
Finding new drugs with generative models
Searching chemical space with generative molecular graph networks
Folding proteins with generative models
Solving partial differential equations with generative modeling
Few shot learning for creating videos from images
Generating recipes with deep learning
Summary
References
Other Books You May Enjoy
Index



