
LLM Mastery — Level 2 Practitioner
// Prerequisite: Level 1 Complete — Advancing to Practitioner Level

LEVEL 02
PRACTITIONER

Hands-on. Code-first. Production-grade. This curriculum takes you from conceptual understanding to building real AI systems — culminating in a live risk-modeling agent for payment processing.

90 Days
8 Modules
~120 Hours
4 Real Projects
1 Capstone System
6–7 Target Level
Phase 1 — Python for AI
Days 1–20 · ~24 hrs
01
Python Foundations
Python From Zero — The AI Developer's Toolkit
Variables, functions, loops, files, APIs — only the Python you actually need to build AI systems. No fluff.
⏱ ~14 hrs · Days 1–12
Weekly Breakdown
Days 1–4
Core Python
Variables, types, loops, functions
~5 hrs
Days 5–8
Working with Data
Lists, dicts, JSON, reading files
~5 hrs
Days 9–12
APIs & Libraries
pip, requests, calling REST APIs
~4 hrs
Topics & Exercises
Install Python, VS Code, and run your first script: "Hello, AI world"
45 min · Code
Variables, strings, numbers, booleans — build a simple prompt-builder script
60 min · Code
Lists, loops, and dictionaries — store and process a list of prompts
90 min · Code
Writing functions — wrap your prompt logic in reusable functions
60 min · Code
Working with JSON — read and write structured data (the language of APIs)
60 min · Code
The requests library — make your first real API call to a weather API
90 min · Code
Error handling (try/except) — write code that doesn't crash when APIs fail
60 min · Concept
Environment variables and .env files — keeping API keys secret and safe
45 min · Code
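The .env habit above is worth seeing in code. A minimal sketch of what a .env loader does (in real projects the python-dotenv package handles this — the function names here are ours, for illustration only):

```python
import os

def parse_env(text):
    """Parse KEY=VALUE lines; skip blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def load_env(path=".env"):
    """Load a .env file into os.environ without overwriting existing values."""
    with open(path) as f:
        for key, value in parse_env(f.read()).items():
            os.environ.setdefault(key, value)
```

The point is the pattern, not the parser: your code reads `os.environ["ANTHROPIC_API_KEY"]`, and the key itself never appears in a file you commit.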
Build Project
CLI Prompt Testing Tool
Build a command-line tool in Python: you type a prompt, it calls the Claude API, and it prints the response. A log file saves every prompt and response as JSON.
Working Python script that calls Claude API using the anthropic library
Saves every exchange to a local prompts_log.json file
Handles errors gracefully (API timeout, bad key, etc.)
Reads a system prompt from a separate config.txt file
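The logging deliverable is small enough to sketch here. One illustrative way to meet it (the function name and record fields are ours, not a fixed spec):

```python
import json
import time
from pathlib import Path

def log_exchange(prompt, response, path=Path("prompts_log.json")):
    """Append one prompt/response pair to the JSON log, creating it if needed."""
    entries = json.loads(path.read_text()) if path.exists() else []
    entries.append({"timestamp": time.time(), "prompt": prompt, "response": response})
    path.write_text(json.dumps(entries, indent=2))
```

Re-reading the whole file on every call is fine at CLI-tool scale; a production system would use append-only JSONL instead.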
Checkpoint Quiz
1. In Python, what does this code do? data = {"name": "Alice", "score": 95} then print(data["score"])
A. Prints the entire dictionary
B. Prints 95 — it accesses the value associated with the key "score"
C. Throws an error because dictionaries aren't printable
D. Prints "score"
2. Why should you store your API key in a .env file instead of directly in your code?
A. .env files load faster than hardcoded values
B. So the key isn't accidentally committed to GitHub or shared with others — it stays secret on your machine
C. API keys don't work when hardcoded
D. It's a Python requirement for all variables
02
Python + AI APIs
Calling LLM APIs Like a Pro
Claude API, OpenAI API, streaming responses, structured outputs, managing conversation history in code.
⏱ ~10 hrs · Days 13–20
Topics & Exercises
The Anthropic Python SDK — messages, roles, system prompts in code
90 min · Code
Managing multi-turn conversations — building a message history array
60 min · Code
Streaming API responses — print output token-by-token as it generates
60 min · Code
Forcing structured JSON output — prompt + response_format techniques
90 min · Exercise
Token counting and cost estimation — know what each call costs before you run it
45 min · Concept
Rate limits and retry logic — build an exponential backoff wrapper
60 min · Code
Batch processing — send 50 documents through Claude and collect all responses
90 min · Project
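The message-history idea above is the single most common stumbling block, so here it is as a sketch. `call_model` is a stand-in for a real SDK call (e.g. `client.messages.create(model=..., messages=history)`); the shape of the turn loop is what matters:

```python
def ask(history, user_message, call_model):
    """One conversational turn. The API is stateless, so the FULL history
    list is sent every time — the model only sees what you pass in."""
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Every turn grows the list by two entries (user + assistant); forget to append either one and the model "forgets" that part of the conversation on the next call.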
Build Project
AI-Powered Research Summarizer
Feed it a folder of 10 text files (articles, reports, transcripts). It processes each one, extracts key points, and outputs a structured JSON summary file plus a human-readable report. Your first production-grade batch AI pipeline.
Reads all .txt files from a /docs folder automatically
Calls Claude API for each, extracts: summary, key facts, action items
Outputs summaries.json and report.md
Tracks total tokens used and estimated cost
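Cost tracking is simple arithmetic once you know the token counts. A sketch — the default rates below are illustrative placeholders only, not current pricing; always read them from your provider's pricing page:

```python
def estimate_cost(input_tokens, output_tokens,
                  in_per_mtok=3.00, out_per_mtok=15.00):
    """Estimated USD cost of one call, given per-million-token rates.
    Default rates are PLACEHOLDERS — check your provider's pricing."""
    return (input_tokens / 1_000_000) * in_per_mtok \
         + (output_tokens / 1_000_000) * out_per_mtok
```

Sum this over every call in the batch and you have the "total tokens used and estimated cost" deliverable.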
Checkpoint Quiz
1. In a multi-turn conversation with an LLM API, how do you maintain conversation context?
A. The API automatically remembers all previous messages
B. You build and send the full message history array with every API call — the model has no memory between calls on its own
C. You use a special "memory" parameter in the API
D. You store messages in a database and the API reads from it
Phase 2 — RAG & Knowledge Systems
Days 21–40 · ~28 hrs
03
RAG Systems
Building a RAG Pipeline From Scratch
Chunk documents → embed them → store in a vector DB → retrieve by semantic similarity → answer questions grounded in real data.
⏱ ~16 hrs · Days 21–32
Weekly Breakdown
Days 21–24
Embeddings & Vector DBs
Chroma, similarity search
~6 hrs
Days 25–28
Chunking & Retrieval
Document ingestion pipeline
~5 hrs
Days 29–32
Full RAG Q&A Bot
End-to-end build & eval
~5 hrs
Topics & Exercises
How vector databases work — cosine similarity, nearest-neighbor search explained
60 min · Concept
Set up ChromaDB locally — create a collection, add documents, query it
90 min · Code
Generating embeddings with an embedding API (OpenAI, or Voyage for Claude-based stacks)
60 min · Code
Document chunking strategies — fixed-size, sentence, semantic chunking tradeoffs
90 min · Concept
Build an ingestion pipeline — read PDFs, chunk them, embed them, store in Chroma
120 min · Project
Write the retrieval + generation loop — query → fetch top-K chunks → prompt Claude
90 min · Code
Evaluating RAG quality — how do you know if it's retrieving the right chunks?
60 min · Exercise
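Before ChromaDB does it for you, it helps to see cosine similarity and top-K retrieval in plain Python. A toy sketch with hand-made 2-D "embeddings" (real embeddings have hundreds or thousands of dimensions, but the math is identical):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, store, k=2):
    """store: list of (chunk_text, embedding). Return the k most similar chunks."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

A vector DB is doing exactly this comparison — just over millions of vectors, with an index so it doesn't have to scan them all.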
Build Project
Internal Company Knowledge Bot
Your first real product: upload your startup's documents (policies, product docs, onboarding materials) and build a chatbot that answers questions based only on those documents. Zero hallucination. Every answer cited to a source.
Ingest PDFs and text files into a ChromaDB vector store
Answer questions with cited sources ("Based on Section 3.2 of your policy doc...")
Return "I don't know" when the answer isn't in your documents
Simple command-line or basic web interface using Streamlit
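The "I don't know" behavior in the deliverables is usually a similarity threshold on the best retrieved chunk. A minimal sketch — the 0.75 cutoff is an assumption you must calibrate on your own documents:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer_or_abstain(question_vec, store, threshold=0.75):
    """Return the best-matching chunk, or abstain if nothing is similar enough.
    store: list of (chunk_text, embedding). threshold is a tunable assumption."""
    best_text, best_score = None, -1.0
    for text, vec in store:
        score = cosine(question_vec, vec)
        if score > best_score:
            best_text, best_score = text, score
    if best_score < threshold:
        return "I don't know — that isn't covered in the provided documents."
    return best_text
```

In the full bot the retrieved chunk goes into Claude's prompt rather than being returned verbatim, but the abstain check sits in exactly this spot.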
Checkpoint Quiz
1. When building a RAG system, why do you "chunk" documents rather than storing entire documents as single entries?
A. Vector databases have a 500-word limit per entry
B. Smaller chunks allow more precise retrieval — a 10,000-word document embedding averages out meaning, making it harder to match a specific question to a specific passage
C. Chunking is required by the Claude API
D. It reduces API costs with no other benefit
2. In a RAG pipeline, what is the correct order of operations?
A. Ask LLM → Search documents → Return answer
B. Receive question → Embed question → Search vector DB for similar chunks → Pass chunks + question to LLM → Return grounded answer
C. Fine-tune model on documents → Ask questions → Done
D. Store documents → Ask LLM to read them all → Answer questions
04
Applied RAG
Customer Service & Sales Automation
Build production-ready bots: a customer service agent that handles FAQs + escalation, and a lead qualification bot that scores prospects.
⏱ ~12 hrs · Days 33–40
Topics & Exercises
Designing conversation flows — intent detection, fallback handling, escalation triggers
60 min · Concept
Build a customer service bot with a RAG knowledge base + escalation to human flag
180 min · Project
Lead qualification logic — structured output scoring (budget, authority, need, timeline)
90 min · Code
Integrating with external data — pull CRM data via API and pass to the model
90 min · Code
Deploying a simple chatbot UI with Streamlit — shareable via URL
90 min · Project
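Structured-output scoring only works if you validate what the model returns. A sketch of the validation step — the field names are an illustrative schema we chose for this example, not a fixed API:

```python
import json

REQUIRED_FIELDS = {"company_size", "budget", "use_case", "score", "reasoning"}

def parse_lead_report(raw_json):
    """Parse and validate the model's structured lead report.
    Raises ValueError on missing fields or an out-of-range score."""
    report = json.loads(raw_json)
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not 1 <= report["score"] <= 10:
        raise ValueError("score must be 1–10")
    return report
```

Rejecting malformed output here (and re-prompting) is what keeps bad JSON from silently corrupting your lead pipeline.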
Build Project
Lead Qualification + Customer Service Agent
A dual-purpose bot: greets visitors, qualifies them as leads (extracts company size, budget, use case), scores them 1–10, and answers product questions from your knowledge base. Outputs a structured lead report as JSON.
Conversational lead intake: extracts structured fields via Claude
Scores lead quality 1–10 with reasoning in JSON output
Answers product/service questions using RAG
Deployed as a Streamlit web app you can share with a URL
Phase 3 — Building Real Agents
Days 41–65 · ~36 hrs
05
Agent Architecture
Building Agents with Tool Use
Tool definitions, function calling, ReAct loops — give your AI the ability to take real actions in the world.
⏱ ~18 hrs · Days 41–54
Topics & Exercises
Tool/function calling in the Claude API — define a tool schema in JSON
90 min · Code
Build a ReAct agent loop: reason → call tool → observe result → continue
120 min · Project
Give your agent tools: web search, calculator, file reader, API caller
120 min · Code
Agent memory patterns: in-context summary, external key-value store
90 min · Concept
Guardrails and safety — preventing agents from taking unintended actions
60 min · Concept
LangChain basics — agents, chains, and when to use the framework vs. raw API
120 min · Code
Logging, tracing, and debugging agents — understanding what your agent actually did
60 min · Exercise
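The ReAct loop is the heart of this module, so here is its skeleton with the model stubbed out. The message format (`{"tool": ..., "input": ...}` vs `{"answer": ...}`) is a simplification we invented for the sketch — real SDK tool use has its own richer schema — but the control flow is the same: the model decides, your code executes:

```python
import json

# Toy tool registry. eval() is only acceptable for this sandboxed calculator
# demo — never eval untrusted model output in production.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def react_loop(call_model, max_steps=5):
    """Reason → call tool → observe → continue, until the model answers."""
    observations = []
    for _ in range(max_steps):
        reply = json.loads(call_model(observations))
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["input"])  # YOUR code runs the tool
        observations.append({"tool": reply["tool"], "result": result})
    raise RuntimeError("agent did not finish within max_steps")
```

Note where execution happens: the model never runs anything itself — it emits a request, your loop executes it and feeds the observation back. That is also the answer to quiz question 1 below.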
Build Project
Research & Summarization Agent
Give it a topic or company name. It autonomously: searches the web for recent news, pulls financial or public data, reads 3–5 sources, and produces a structured research brief. All without you lifting a finger after the initial prompt.
Tools: web_search, fetch_url, summarize_text, write_report
Full ReAct loop — reasons about which tool to call next
Produces a structured Markdown research brief
Full trace log showing every tool call and reasoning step
Checkpoint Quiz
1. When you define a tool for a Claude agent, what are you actually providing?
A. A pre-built plugin that Claude automatically knows how to use
B. A JSON schema describing the tool's name, what it does, and what parameters it takes — Claude decides when and how to call it, and your code actually executes it
C. A Python function that Claude can run directly inside its own runtime
D. A URL that Claude visits to get data
06
Production Agents
Multi-Agent Systems & Production Deployment
Orchestrators + sub-agents, async pipelines, monitoring, deploying agents to the cloud so they run 24/7.
⏱ ~18 hrs · Days 55–65
Topics & Exercises
Orchestrator + sub-agent pattern — one agent that routes to specialized agents
120 min · Code
Async Python (asyncio) — run multiple agent calls in parallel
90 min · Code
Webhooks and triggers — make agents run automatically on events
90 min · Code
Deploying to cloud — Railway, Render, or AWS Lambda for always-on agents
120 min · Project
Monitoring with LangSmith or Helicone — trace every agent run in production
60 min · Code
Cost control in production — caching, rate limiting, budget caps per user
60 min · Concept
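The asyncio pattern above, in miniature. `call_agent` is a stand-in for a real async SDK call (the async Anthropic client works the same way); `asyncio.gather` is what turns three sequential waits into one concurrent one:

```python
import asyncio

async def call_agent(name, delay):
    """Stand-in for an async LLM call — sleeps instead of hitting an API."""
    await asyncio.sleep(delay)
    return f"{name}: done"

async def run_all():
    # gather() schedules all three calls concurrently and preserves order
    return await asyncio.gather(
        call_agent("researcher", 0.05),
        call_agent("summarizer", 0.05),
        call_agent("writer", 0.05),
    )

results = asyncio.run(run_all())
```

Three 0.05 s "calls" complete in roughly 0.05 s total, not 0.15 s — the same speedup applies when the waiting is network I/O to an LLM API.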
Phase 4 — Capstone System
Days 66–90 · ~30 hrs
07
Pre-Capstone
Risk Modeling Foundations
The science behind risk scoring: data features, scoring models, regulatory considerations, and API security patterns for financial systems.
⏱ ~10 hrs · Days 66–72
Topics & Exercises
Payment processing risk fundamentals — fraud signals, chargeback rates, velocity checks
90 min · Concept
What data points constitute a risk profile? Business type, volume, history, geography
60 min · Concept
Calling secure APIs with auth tokens — OAuth 2.0 and API key patterns in Python
90 min · Code
Designing a risk score schema — what does 0–100 mean? What triggers each band?
60 min · Exercise
Prompt engineering for structured risk decisions — chain-of-thought reasoning with hard output schemas
90 min · Code
Compliance basics — what an AI risk decision audit trail needs to contain
60 min · Concept
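What an audit-trail entry needs to contain can be made concrete in a few lines. A sketch of the minimum record — the field set is an illustrative baseline we chose here, not a regulatory checklist:

```python
import json
from datetime import datetime, timezone

def audit_record(application_id, inputs, reasoning, score, decision):
    """One audit-trail entry per decision: what was scored, when,
    on which inputs, with what reasoning, and the outcome."""
    return {
        "application_id": application_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,            # the exact data the model saw
        "reasoning": reasoning,      # the model's stated chain of thought
        "score": score,
        "decision": decision,
    }
```

Because every field is plain JSON, the record can be appended to a log, stored in a database, and replayed later to explain exactly why a given application was approved or rejected.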
08
CAPSTONE BUILD
Payment Risk Modeling Agent — Full System
Your graduation project. A complete, production-grade AI agent that reviews applications, scores risk, makes approval decisions, and monitors ongoing activity.
⏱ ~20 hrs · Days 73–90
System Architecture
01
Intake Agent
Accepts merchant application via form or API, validates completeness
02
Data Enrichment
Pulls data from business registry, credit, and fraud signal APIs
03
Risk Scoring
Claude analyzes all signals, reasons step-by-step, outputs a score + decision
04
Auto Approval
Low-risk: auto-approved, account created, welcome email triggered
05
Monitoring Agent
Watches live transaction patterns, flags anomalies, alerts on risk threshold breach
Build Deliverables
Intake agent: accepts merchant application JSON, validates all required fields
3 hrs · Project
Data tools: Python functions that call mock/real APIs (business registry, credit check, sanctions list)
4 hrs · Code
Risk scoring agent: Claude with chain-of-thought reasoning, outputs structured JSON score (0–100) + approve/review/reject + reasoning
4 hrs · Project
Approval workflow: auto-approve ≤30 risk score, queue for human review 31–70, auto-reject ≥71
2 hrs · Code
Monitoring agent: polls transaction data on a schedule, detects velocity spikes, unusual geographies, chargeback patterns
4 hrs · Project
Audit trail: every agent decision logged with timestamp, inputs, reasoning, and output score
2 hrs · Code
Admin dashboard (Streamlit): view pending applications, scores, decision history, monitoring alerts
3 hrs · Project
Deploy the full system to cloud with a live URL and persistent database
2 hrs · Project
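The approval workflow's routing logic, using the capstone's own thresholds (≤30 auto-approve, 31–70 human review, ≥71 auto-reject). The function name is ours; the bands come straight from the deliverable:

```python
def route_application(score):
    """Map a 0–100 risk score to an action per the capstone thresholds."""
    if not 0 <= score <= 100:
        raise ValueError("score must be 0–100")
    if score <= 30:
        return "auto-approve"
    if score <= 70:
        return "human-review"
    return "auto-reject"
```

Keeping this rule in deterministic code — rather than asking the model to decide — means the thresholds are auditable, testable, and changeable without touching a prompt.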
Checkpoint Quiz — Final Module
1. Your risk agent scores an application 45/100 (medium risk). What should the system do, and why not just auto-approve or auto-reject?
A. Auto-approve it — 45 is below 50 so it's probably fine
B. Auto-reject it — any score above 30 should be rejected
C. Route to human review — medium-risk cases have ambiguous signals that an AI should flag but a human should confirm, balancing automation with appropriate oversight
D. Ask the applicant to reapply with more information
2. Why is an audit trail critical for an AI risk-scoring system in a financial context?
A. It helps the model learn from its mistakes over time
B. It's required by the Python language specification
C. Regulators and legal counsel may need to understand exactly why a decision was made — automated financial decisions require explainability, accountability, and the ability to detect and correct systematic bias
D. It makes the system run faster
// Graduation Assessment
THE FINAL
PRACTITIONER EXAM

16 questions spanning Python, RAG systems, agent architecture, and production AI design. Pass at 80% to earn your Practitioner certification and confirm your 6–7 skill level.

Python & APIs — Questions 1–4
RAG & Knowledge — Questions 5–8
Agent Systems — Questions 9–12
Production & Risk — Questions 13–16