
O'Reilly – Inside Deep Learning, Video Edition 2022-6
Published on: 2024-10-12 19:36:17
Description
Inside Deep Learning, Video Edition takes you on a journey through the world of modern deep learning theory and practice, and helps you apply innovative techniques to solve everyday data problems. In this course you will learn how to use PyTorch to implement deep learning, choose the right deep learning components, train and evaluate a deep learning model, tune deep learning models for maximum performance, understand deep learning terminology, and adapt existing Python code to solve new problems. About the technology: Deep learning doesn't have to be a black box! Knowing how your models and algorithms work gives you more control over the results, and you don't need to be a mathematician or a senior data scientist to understand what's going on inside a deep learning system.
What you will learn:
- Choosing the right components for deep learning
- Training and evaluating a deep learning model
- Fine-tuning deep learning models for maximum performance
- Understanding deep learning terminology
This course is suitable for people who:
- Want to understand the complex concepts of deep learning in plain language
- Are looking for hands-on practice implementing deep learning with PyTorch
- Are interested in solving real-world problems with deep learning
Details of the Inside Deep Learning, Video Edition course
- Publisher: O'Reilly
- Lecturer: Edward Raff
- Training level: beginner to advanced
- Training duration: 15 hours and 53 minutes
Course headings
- Part 1. Foundational methods
  - Chapter 1. The mechanics of learning
    - The world as tensors
    - Automatic differentiation
    - Optimizing parameters
    - Loading dataset objects
    - Summary
  - Chapter 2. Fully connected networks
    - Building our first neural network
    - Classification problems
    - Better training code
    - Training in batches
    - Summary
  - Chapter 3. Convolutional neural networks
    - What are convolutions?
    - How convolutions benefit image processing
    - Putting it into practice: Our first CNN
    - Adding pooling to mitigate object movement
    - Data augmentation
    - Summary
  - Chapter 4. Recurrent neural networks
    - RNNs in PyTorch
    - Improving training time with packing
    - More complex RNNs
    - Summary
  - Chapter 5. Modern training techniques
    - Learning rate schedules
    - Making better use of gradients
    - Hyperparameter optimization with Optuna
    - Summary
  - Chapter 6. Common design building blocks
    - Normalization layers: Magically better convergence
    - Skip connections: A network design pattern
    - 1 × 1 convolutions: Sharing and reshaping information in channels
    - Residual connections
    - Long short-term memory RNNs
    - Summary
- Part 2. Building advanced networks
  - Chapter 7. Autoencoding and self-supervision
    - Designing autoencoding neural networks
    - Bigger autoencoders
    - Denoising autoencoders
    - Autoregressive models for time series and sequences
    - Summary
  - Chapter 8. Object detection
    - Transposed convolutions for expanding image size
    - U-Net: Looking at fine and coarse details
    - Object detection with bounding boxes
    - Using the pretrained Faster R-CNN
    - Summary
  - Chapter 9. Generative adversarial networks
    - Mode collapse
    - Wasserstein GAN: Mitigating mode collapse
    - Convolutional GAN
    - Conditional GAN
    - Walking the latent space of GANs
    - Ethics in deep learning
    - Summary
  - Chapter 10. Attention mechanisms
    - Adding some context
    - Putting it all together: A complete attention mechanism with context
    - Summary
  - Chapter 11. Sequence-to-sequence
    - Machine translation and the data loader
    - Inputs to Seq2Seq
    - Seq2Seq with attention
    - Summary
  - Chapter 12. Network design alternatives to RNNs
    - Averaging embeddings over time
    - Pooling over time and 1D CNNs
    - Positional embeddings add sequence information to any model
    - Transformers: Big models for big data
    - Summary
  - Chapter 13. Transfer learning
    - Transfer learning and training with CNNs
    - Learning with fewer labels
    - Pretraining with text
    - Summary
  - Chapter 14. Advanced building blocks
    - Improved residual blocks
    - MixUp training reduces overfitting
    - Summary
- Appendix. Setting up Colab
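To give a flavor of the Part 1 material (tensors, automatic differentiation, optimizing parameters, and a first network), here is a minimal PyTorch training-loop sketch. It is a hypothetical toy example on made-up data, not code from the course itself:

```python
import torch
from torch import nn

torch.manual_seed(0)  # make the toy run reproducible

# Hypothetical toy regression data: learn y = 2x + 1
X = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 2 * X + 1

# A small fully connected network, as built in Chapter 2
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()        # reset accumulated gradients
    loss = loss_fn(model(X), y)  # forward pass + loss
    loss.backward()              # automatic differentiation (Chapter 1)
    optimizer.step()             # parameter update

print(f"final loss: {loss.item():.4f}")
```

The same zero_grad / forward / backward / step pattern underlies nearly every model in the course; later chapters mainly swap in different architectures, losses, and data loaders.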
Installation guide
After extracting, play the files with your preferred media player.
Subtitle: None
Quality: 720p
Download links
Download part 1 – 1 GB
Download part 2 – 1 GB
Download part 3 – 172 MB
File(s) password: www.downloadly.ir
File size
2.1 GB