Tomadora
Research Papers That Shaped Modern AI
AI-generated course for Machine Learning & AI covering: Module 1: The Dawn of Connectionism & Backpropagation, Module 2: The ImageNet Moment - AlexNet & CNNs, Module 3: Generative Adversarial Networks (GANs), Module 4: Deep Reinforcement Learning & Human-Level Control, Module 5: 'Attention Is All You Need' - The Transformer Architecture, Module 6: BERT & The Power of Pre-training, Module 7: The Rise of Generative Pre-trained Transformers (GPT), Module 8: Diffusion Models for Photorealistic Imagery, Module 9: Scaling Laws and Emergent Abilities
Beginner
30 lessons
823 questions
Download Tomadora to start →
What you'll learn
This course is part of the Machine Learning & AI track on Tomadora. It covers 9 progressive modules with 30 bite-sized lessons, totalling 823 interactive questions including flashcards, multiple choice, true/false, typing, matching, and fill-in-the-blank.
Course syllabus
Module 1: The Dawn of Connectionism & Backpropagation
Explore the foundational papers that introduced the Perceptron and rediscovered the Backpropagation algorithm, setting the stage for modern neural networks by enabling them to learn from data.
- The Perceptron and its Limits (28 questions)
- The Breakthrough: The Backpropagation Algorithm (30 questions)
- Early Applications and Architectures (27 questions)
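As a taste of what this module builds up to, here is backpropagation in miniature: the chain rule applied layer by layer, with the analytic gradient verified against a numerical estimate. This is an illustrative NumPy sketch, not material from the course itself.

```python
import numpy as np

# Backpropagation in miniature: compute the loss gradient of a tiny
# two-layer network analytically via the chain rule, then confirm it
# against a numerical finite-difference estimate.
rng = np.random.default_rng(0)
x = rng.normal(size=3)                  # one input example
y = 1.0                                 # its regression target
W1 = rng.normal(size=(3, 4))            # input -> hidden weights
W2 = rng.normal(size=(4, 1))            # hidden -> output weights

def forward(W1, W2):
    h = np.tanh(x @ W1)                 # hidden activations
    out = (h @ W2)[0]                   # scalar output
    return 0.5 * (out - y) ** 2, h, out

# Backward pass: apply the chain rule layer by layer
loss, h, out = forward(W1, W2)
d_out = out - y                         # dL/d(out)
grad_W2 = np.outer(h, d_out)            # dL/dW2
d_h = (W2[:, 0] * d_out) * (1 - h**2)   # gradient flowing back through tanh
grad_W1 = np.outer(x, d_h)              # dL/dW1

# Check one entry of the analytic gradient numerically
eps = 1e-6
W1_bumped = W1.copy(); W1_bumped[0, 0] += eps
numeric = (forward(W1_bumped, W2)[0] - loss) / eps
print(abs(numeric - grad_W1[0, 0]) < 1e-4)  # True: the chain rule checks out
```

The numerical check is exactly the kind of sanity test the rediscoverers of backpropagation relied on: if the analytic gradient disagrees with a finite difference, the backward pass is wrong.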
Module 2: The ImageNet Moment - AlexNet & CNNs
Analyze 'ImageNet Classification with Deep Convolutional Neural Networks,' the 2012 paper that demonstrated the power of deep Convolutional Neural Networks (CNNs) and kickstarted the deep learning revolution.
- The Stage is Set: Computer Vision Before Deep Learning (26 questions)
- Anatomy of a Breakthrough: Deconstructing the AlexNet Architecture (28 questions)
- The Ripple Effect: AlexNet's Legacy and the CNN Revolution (20 questions)
Module 3: Generative Adversarial Networks (GANs)
Delve into the innovative 2014 paper by Ian Goodfellow et al. that introduced GANs, a novel framework in which two neural networks compete against each other to generate stunningly realistic data.
- The Birth of GANs: The Minimax Game (28 questions)
- Taming the Beast: DCGAN and WGAN (26 questions)
- Achieving Control and Photorealism: Conditional & Progressive GANs (28 questions)
- Mastering Disentanglement: The StyleGAN Revolution (27 questions)
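The "minimax game" this module opens with is Goodfellow et al.'s value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. A tiny NumPy sketch with hand-picked discriminator outputs (no actual networks):

```python
import numpy as np

# The GAN minimax value function, evaluated on illustrative numbers:
#   V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
def value(d_real, d_fake):
    """V given D's outputs on real samples and on generated samples."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A discriminator that separates real from fake well yields a high V ...
v_good_D = value(d_real=np.full(100, 0.9), d_fake=np.full(100, 0.1))
# ... while a generator that fools D (D outputs 0.5 everywhere at the
# game's equilibrium) pushes V down to its optimum of -log 4.
v_equilibrium = value(d_real=np.full(100, 0.5), d_fake=np.full(100, 0.5))
print(round(v_equilibrium, 3))   # -1.386, i.e. -log 4
```

The -log 4 equilibrium is the paper's theoretical result: at the optimum, the generator's distribution matches the data distribution and the discriminator can do no better than guessing.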
Module 4: Deep Reinforcement Learning & Human-Level Control
Study the 2013 DeepMind paper 'Playing Atari with Deep Reinforcement Learning,' which combined deep neural networks with reinforcement learning to create the Deep Q-Network (DQN), surpassing human expert performance on several classic video games.
- Playing Atari: The Deep Q-Network (DQN) Revolution (29 questions)
- Beyond Value Functions: Policy Gradients and Actor-Critic Methods (26 questions)
- AlphaGo: Mastering Go with Neural Networks and Tree Search (28 questions)
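The Q-learning update at the heart of DQN can be sketched in tabular form. This is an illustrative simplification: the paper's contribution was replacing the table with a deep network, stabilized by experience replay and a target network.

```python
import numpy as np

# Tabular Q-learning update, the rule DQN approximates with a deep net:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9        # learning rate and discount factor

def update(s, a, r, s_next):
    # Bellman target: immediate reward plus discounted best future value
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

update(s=0, a=1, r=1.0, s_next=1)   # Q[0, 1] moves halfway to the target
print(Q[0, 1])                       # 0.5 (= alpha * (1.0 + 0.9 * 0 - 0))
```

In DQN the state s is a stack of raw game frames, so the table above becomes a convolutional network mapping pixels to one Q-value per joystick action.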
Module 5: 'Attention Is All You Need' - The Transformer Architecture
A deep dive into the seminal 2017 paper from Google that introduced the Transformer. Understand how the self-attention mechanism revolutionized sequence modeling and became the foundation for nearly all modern large language models.
- Breaking the Sequence: Limitations of RNNs and the Dawn of Attention (32 questions)
- Anatomy of the Transformer: The Encoder-Decoder Stacks (27 questions)
- The Core Mechanism: Multi-Head Self-Attention (27 questions)
- Putting It All Together: Positional Encodings and Feed-Forward Networks (27 questions)
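The self-attention mechanism this module dissects is, at its core, one formula: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal single-head NumPy sketch (the paper runs eight such heads in parallel):

```python
import numpy as np

# Scaled dot-product attention, the core operation of the Transformer.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights                   # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))        # 4 tokens, model dimension 8
out, w = attention(x, x, x)        # self-attention: Q = K = V = x
print(out.shape, np.allclose(w.sum(axis=-1), 1.0))  # (4, 8) True
```

Each output row is a mixture of all value rows, weighted by how strongly that token's query matches every key; this is what lets every position attend to every other position in a single step, with no recurrence.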
Module 6: BERT & The Power of Pre-training
Examine 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.' Learn how the concept of masked language modeling and large-scale pre-training created powerful, reusable language representations.
- The Road to BERT: Contextual Embeddings & Bidirectional Transformers (28 questions)
- The Pre-training Masterclass: Masked Language Model & Next Sentence Prediction (27 questions)
- Unleashing the Power: Fine-Tuning BERT and Its Lasting Legacy (32 questions)
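Masked language modeling, the pre-training objective at the heart of BERT, can be sketched in a few lines. The 15% mask rate follows the paper; the token id and the ignore-label value are conventions from common BERT implementations, and the paper's 80/10/10 mask/random/keep split is omitted for brevity.

```python
import numpy as np

# Masked-language-model data preparation in miniature: hide a fraction of
# tokens so the model must recover them from left AND right context.
rng = np.random.default_rng(0)
MASK_ID = 103          # [MASK] id in the bert-base-uncased vocabulary

def mask_tokens(token_ids, mask_prob=0.15):
    ids = np.array(token_ids)
    labels = np.full_like(ids, -100)     # -100 = position ignored by the loss
    chosen = rng.random(ids.shape) < mask_prob
    labels[chosen] = ids[chosen]         # remember the original tokens
    ids[chosen] = MASK_ID                # hide them from the model
    return ids, labels

ids, labels = mask_tokens(list(range(1000, 1020)))
print((ids == MASK_ID).sum(), (labels != -100).sum())  # equal counts
```

Because the prediction targets sit in the middle of the sequence rather than at its end, the encoder is free to use bidirectional context, which is exactly what distinguishes BERT from left-to-right language models.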
Module 7: The Rise of Generative Pre-trained Transformers (GPT)
Trace the evolution of autoregressive models through the GPT-1, GPT-2, and GPT-3 papers. Understand the principles of decoder-only transformers and the impact of massive scale on model capabilities.
- Improving Language Understanding by Generative Pre-Training: The Birth of GPT (28 questions)
- Language Models are Unsupervised Multitask Learners: Scaling Up with GPT-2 (27 questions)
- Language Models are Few-Shot Learners: In-Context Learning with GPT-3 (27 questions)
Module 8: Diffusion Models for Photorealistic Imagery
Investigate the papers behind diffusion models, such as 'Denoising Diffusion Probabilistic Models.' Discover how these models learn to reverse a noise process to generate high-fidelity images, powering systems like DALL-E 2 and Stable Diffusion.
- Foundations: Denoising Diffusion Probabilistic Models (DDPMs) (27 questions)
- High-Resolution Synthesis with Latent Diffusion Models (LDMs) (19 questions)
- Enhancing Control with Classifier-Free Guidance (27 questions)
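The forward "noise process" these papers describe has a convenient closed form: x_t = √(ᾱ_t)·x_0 + √(1-ᾱ_t)·ε, with ᾱ_t the cumulative product of the noise schedule. A NumPy sketch using the linear beta schedule from the DDPM paper:

```python
import numpy as np

# DDPM forward (noising) process in closed form:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
# The generative model is trained to reverse this, one step at a time.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear schedule from the paper
alpha_bars = np.cumprod(1.0 - betas)    # cumulative signal-retention factor

def q_sample(x0, t):
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * noise

x0 = rng.normal(size=(32, 32))          # stand-in for an image
x_early, x_late = q_sample(x0, 10), q_sample(x0, 999)
# Early steps barely perturb x0; by t = 999 the signal is almost pure noise.
print(alpha_bars[10] > 0.99, alpha_bars[999] < 1e-4)  # True True
```

Because any x_t can be sampled directly from x_0 in one step, training reduces to picking a random t, noising the image, and asking the network to predict the noise that was added.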
Module 9: Scaling Laws and Emergent Abilities
Explore recent research papers on the predictable scaling laws of large models and the surprising 'emergent abilities' that appear at scale. Discuss the implications for the future of AI development.
- The Birth of Scaling Laws: Predicting Performance (28 questions)
- Refining the Laws: The Chinchilla Scaling Hypothesis (28 questions)
- Beyond Prediction: The Phenomenon of Emergent Abilities (24 questions)
- The Limits and Future of Scale: Inverse Scaling and Beyond (37 questions)
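The "predictable scaling laws" in question are power laws: loss falls as L(N) = (N_c / N)^α with parameter count N, which is a straight line on log-log axes whose slope reveals the exponent. A synthetic NumPy sketch (the constants are illustrative stand-ins, not real measurements):

```python
import numpy as np

# A power-law loss curve is linear in log-log space, so the scaling
# exponent can be read off with an ordinary linear fit.
Nc, a = 8.8e13, 0.076          # illustrative constants for this sketch
N = np.logspace(6, 11, 20)     # parameter counts from 1e6 to 1e11
L = (Nc / N) ** a              # idealized, noise-free loss curve

slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
print(round(-slope, 3))        # 0.076, recovering the exponent a
```

Fits like this are what make scaling "predictable": extrapolating the fitted line forecasts the loss of a model before anyone trains it. Emergent abilities are interesting precisely because some downstream capabilities do not follow such a smooth line.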
Frequently asked questions
- What is the Research Papers That Shaped Modern AI course?
- Research Papers That Shaped Modern AI is a beginner course on Tomadora covering 9 modules and 30 lessons. It is designed to be completed in 5-minute bursts during your work breaks, using a Pomodoro-style focus + learn cycle.
- How long does Research Papers That Shaped Modern AI take to finish?
- Each lesson takes about 5 minutes. With 30 lessons, you can finish the course in roughly 2.5 hours of total learning time, spread across as many breaks as you like.
- Is Research Papers That Shaped Modern AI free?
- Yes. Tomadora is free to download and the entire Machine Learning & AI track — including Research Papers That Shaped Modern AI — is free to learn.
- What level is Research Papers That Shaped Modern AI?
- Research Papers That Shaped Modern AI is rated Beginner. No prior knowledge is required.
- What language is Research Papers That Shaped Modern AI taught in?
- Research Papers That Shaped Modern AI is taught in English.
More courses in Machine Learning & AI