Tomadora
How LLMs Actually Work
AI-generated course covering: Module 1: The Building Blocks of Language Models; Module 2: Decoding the Transformer Architecture; Module 3: The Pre-training Odyssey: Data and Objectives; Module 4: Refining Intelligence: Fine-tuning and Instruction Alignment; Module 5: Generating Language: Inference and Prompt Engineering; Module 6: Beyond Text: Multimodality and Specialized LLMs
Intermediate
21 lessons
461 questions
Download Tomadora to start →
What you'll learn
This course is part of The AI Revolution track on Tomadora. It covers 6 progressive modules with 21 bite-sized lessons, totalling 461 interactive questions in formats including flashcards, multiple choice, true/false, typing, matching, and fill-in-the-blank.
Course syllabus
Module 1: The Building Blocks of Language Models
Demystify what a Large Language Model truly is. Explore fundamental concepts like tokens, embeddings, and the probabilistic nature of language generation, setting the stage for deeper dives into their internal workings. A short sketch after the lesson list previews the token-to-probability pipeline.
- Introduction to Language Models and Their Evolution (23 questions)
- Text Representation: Tokenization and Embeddings (27 questions)
- The Transformer Architecture: Attention Unveiled (10 questions)
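As a taste of Module 1, here is a toy Python sketch of the token → embedding → next-token-probability pipeline. The five-word vocabulary, the tiny embedding width, and the random weights are illustrative assumptions, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]            # toy vocabulary (assumption)
token_to_id = {tok: i for i, tok in enumerate(vocab)}
d_model = 8                                           # tiny embedding width
embedding_table = rng.normal(size=(len(vocab), d_model))

def next_token_probs(context):
    """Look up the context embeddings, pool them, score every
    vocabulary entry, and softmax the scores into probabilities."""
    ids = [token_to_id[t] for t in context]
    hidden = embedding_table[ids].mean(axis=0)        # crude stand-in for a real network
    logits = embedding_table @ hidden                 # one score per vocab token
    exp = np.exp(logits - logits.max())               # numerically stable softmax
    return exp / exp.sum()

for tok, p in zip(vocab, next_token_probs(["the", "cat"])):
    print(f"{tok}: {p:.3f}")
```

The key takeaway: an LLM never picks "the" word; it produces a probability distribution over its entire vocabulary, and a decoding strategy (covered in Module 5) turns that distribution into a choice.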
Module 2: Decoding the Transformer Architecture
Dive deep into the groundbreaking Transformer architecture. Understand the roles of self-attention mechanisms, multi-head attention, feed-forward networks, and positional encoding that enable LLMs to process and understand context. The sketch after the lesson list shows the attention computation at the module's core.
- Introduction to the Transformer and the Attention Mechanism (24 questions)
- Deconstructing Self-Attention and Multi-Head Attention (29 questions)
- Encoder-Decoder Stacks, Positional Encoding, and Normalization (18 questions)
- Feed-Forward Networks, Masking, and the Full Transformer Forward Pass (22 questions)
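Here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of this module. The random Q/K/V matrices and the sequence length are illustrative assumptions; a real Transformer computes Q, K, and V with learned projections and runs many such heads in parallel:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=False):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    if causal:                                       # mask future positions (decoder-style)
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights @ V                               # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                                  # illustrative sizes
Q = K = V = rng.normal(size=(seq_len, d_k))          # self-attention: one source for all three
print(scaled_dot_product_attention(Q, K, V, causal=True).shape)  # (4, 8)
```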
Module 3: The Pre-training Odyssey: Data and Objectives
Uncover the monumental task of pre-training LLMs. Explore the vast datasets, tokenization strategies, and the core self-supervised learning objectives (like masked language modeling or next-token prediction) that form the LLM's initial knowledge base. A sketch of the next-token objective follows the lesson list.
- The Digital Ocean: Sourcing Web-Scale Data for LLMs (25 questions)
- Forging the Foundation: Data Preprocessing and Corpus Construction (26 questions)
- The Oracle's Task: Self-Supervised Pre-training Objectives (21 questions)
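The next-token-prediction objective boils down to a cross-entropy loss over shifted text: position t must predict token t+1. A minimal sketch, with random logits standing in for a real model's output:

```python
import numpy as np

def next_token_loss(logits, token_ids):
    """Average cross-entropy of predicting token t+1 from position t.
    logits: (seq_len, vocab_size) model scores; token_ids: (seq_len,) ints."""
    pred = logits[:-1]                      # predictions at positions 0..n-2
    targets = token_ids[1:]                 # each position's label is the *next* token
    shifted = pred - pred.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)
vocab_size, seq_len = 100, 6                          # illustrative sizes
logits = rng.normal(size=(seq_len, vocab_size))       # stand-in for model output
token_ids = rng.integers(vocab_size, size=seq_len)
print(f"loss: {next_token_loss(logits, token_ids):.3f}")  # near log(100) ≈ 4.6 for random logits
```

No labels are needed beyond the text itself, which is what makes the objective self-supervised and web-scale data usable.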
Module 4: Refining Intelligence: Fine-tuning and Instruction Alignment
Learn how pre-trained behemoths are transformed into helpful assistants. This module covers supervised fine-tuning (SFT), instruction tuning, and the critical role of Reinforcement Learning from Human Feedback (RLHF) in aligning LLMs with human intentions and values. A sketch of one parameter-efficient technique follows the lesson list.
- The Need for Specialization: Introduction to Fine-tuning LLMs (18 questions)
- Architectures of Adaptation: Full vs. Parameter-Efficient Fine-tuning (PEFT) (28 questions)
- Guiding Behavior: Instruction Alignment and Reinforcement Learning from Human Feedback (RLHF) (26 questions)
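For a concrete taste of parameter-efficient fine-tuning, here is a sketch of one popular PEFT technique, LoRA: the large pre-trained weight stays frozen while two small low-rank matrices learn the update. All sizes and weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 4, 8     # illustrative sizes; rank << d

W = rng.normal(size=(d_in, d_out))          # frozen pre-trained weight (never updated)
A = rng.normal(size=(d_in, rank)) * 0.01    # trainable low-rank factor
B = np.zeros((rank, d_out))                 # starts at zero: the adapter is a no-op at init

def lora_forward(x):
    """Frozen path plus scaled low-rank update: x W + (alpha/rank) x A B."""
    return x @ W + (alpha / rank) * (x @ A @ B)

full = d_in * d_out                         # parameters a full fine-tune would touch
peft = rank * (d_in + d_out)                # parameters LoRA actually trains
print(f"full fine-tune: {full} params, LoRA: {peft} params")  # 4096 vs 512
```

Even in this toy layer, LoRA trains an eighth of the parameters; at billion-parameter scale the savings are what makes fine-tuning practical on modest hardware.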
Module 5: Generating Language: Inference and Prompt Engineering
Explore the process by which LLMs generate coherent and contextually relevant text. Understand decoding strategies (greedy, beam search, sampling), temperature, and the art and science of prompt engineering to elicit desired outputs efficiently. The sketch after the lesson list contrasts greedy decoding with sampling.
- Under the Hood: How LLMs Generate Text (12 questions)
- Crafting Effective Prompts: The Fundamentals (25 questions)
- Mastering Complex Interactions: Advanced Prompt Engineering (22 questions)
- Ensuring Quality: Evaluation and Ethical Prompting (25 questions)
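To illustrate the decoding knobs this module covers, here is a sketch of greedy decoding versus temperature and top-k sampling over a single logit vector; the logits themselves are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next(logits, temperature=1.0, top_k=None):
    """Pick a next-token id. temperature=0 means greedy (argmax);
    higher temperatures flatten the distribution; top_k keeps only
    the k highest-scoring tokens before sampling."""
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))
    if top_k is not None:
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # exp(-inf) = 0 drops masked tokens
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.5, 0.3, -1.0]              # made-up scores for a 4-token vocab
print(sample_next(logits, temperature=0))             # greedy: always token 0
print(sample_next(logits, temperature=0.7, top_k=2))  # sampled from the top two
```

Greedy decoding is deterministic; raising the temperature trades that predictability for diversity, which is why the same prompt can yield different completions.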
Module 6: Beyond Text: Multimodality and Specialized LLMs
Investigate the evolution of LLMs beyond pure text. Discover how multimodal models integrate vision and audio, and explore specialized architectures designed for specific tasks or constraints, pushing the boundaries of what LLMs can achieve. A sketch of one vision-language integration pattern follows the lesson list.
- Foundations of Multimodality: Why Go Beyond Text? (22 questions)
- Vision-Language Integration: Understanding VLMs (22 questions)
- Beyond Vision: Audio, Video, and Advanced Multimodal Fusion (15 questions)
- Tailoring LLMs: Specialized Models and Fine-tuning for Specific Domains (21 questions)
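One common integration pattern in vision-language models is to project vision-encoder outputs into the language model's embedding space so that image "tokens" and text tokens share a single sequence. A minimal sketch, with random arrays standing in for a real vision encoder and a learned projection:

```python
import numpy as np

rng = np.random.default_rng(0)
d_vision, d_text = 32, 16                   # illustrative widths
n_patches, n_words = 4, 5

patch_features = rng.normal(size=(n_patches, d_vision))  # stand-in vision-encoder output
W_proj = rng.normal(size=(d_vision, d_text)) * 0.1       # projection (learned in practice)

image_tokens = patch_features @ W_proj      # now the same width as text embeddings
text_tokens = rng.normal(size=(n_words, d_text))
sequence = np.concatenate([image_tokens, text_tokens])   # one sequence for the LLM
print(sequence.shape)                       # (9, 16): image and text, side by side
```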
Frequently asked questions
- What is the How LLMs Actually Work course?
- How LLMs Actually Work is an intermediate course on Tomadora covering 6 modules and 21 lessons. It is designed to be completed in 5-minute bursts during your work breaks, using a Pomodoro-style focus + learn cycle.
- How long does How LLMs Actually Work take to finish?
- Each lesson takes about 5 minutes. With 21 lessons, you can finish the course in just under 2 hours of total learning time, spread across as many breaks as you like.
- Is How LLMs Actually Work free?
- Yes. Tomadora is free to download, and every course in The AI Revolution track, including How LLMs Actually Work, is free to learn.
- What level is How LLMs Actually Work?
- How LLMs Actually Work is rated Intermediate. Some familiarity with the basics is helpful but not required.
- What language is How LLMs Actually Work taught in?
- How LLMs Actually Work is taught in English.
More courses in The AI Revolution