No terms match your search.
A
Agent (AI Agent)
Cases 04, 08
An AI system that goes beyond answering questions — it takes actions in the world. An agent can search the web, call APIs, read and write files, send messages, and reason across multiple steps to accomplish a goal. The key distinction from a chatbot: an agent does things, not just says things.
Alignment
Case 05
The process of training an AI model to behave in ways that match human values and intentions. A capable but misaligned model might pursue goals in harmful or unexpected ways. Alignment is an active research field — current techniques substantially improve behavior but do not provide absolute guarantees. When different AI companies refuse different requests, alignment choices are why.
API (Application Programming Interface)
Cases 01, 02, 04
A set of rules that allows one piece of software to talk to another. When you use an AI model through code or a third-party app, you're making API calls — structured requests that the model receives, processes, and responds to. Think of it as a standardized electrical socket: any device built to the spec can plug in and draw power. API usage is typically billed per token — the more text in and out, the higher the cost.
B
Bias (AI)
Case 05
Systematic patterns in a model's outputs that reflect imbalances or prejudices in its training data. If a model was trained primarily on text from certain demographics, cultures, or perspectives, it may perform unequally across different groups or consistently favor certain framings. Bias is encoded, not intentional — it emerges from what the model learned, not from a programmer's deliberate choice.
C
Calibration
Case 03
The degree to which a model's expressed confidence matches its actual accuracy. A well-calibrated model is uncertain when it's likely to be wrong, and confident when it's likely to be right. Most current LLMs are poorly calibrated — they often sound equally confident whether stating a verified fact or generating a plausible-sounding fabrication. Calibration is measured across many outputs, not individual cases.
Chain-of-Thought Prompting
Case 02
A technique where you instruct the model to reason step by step before giving a final answer. Instead of jumping to a conclusion, the model works through its logic visibly. This dramatically improves performance on complex reasoning tasks because it forces the model to structure its thinking rather than pattern-matching to a surface-level response. Add "think step by step" or "reason through this before answering" to your prompt.
ClawHub
Case 08
A marketplace of installable skills and integrations for OpenClaw — functioning like an app store for your personal AI assistant. Browse available skills, install the ones relevant to your work, and they extend what your OpenClaw assistant can do. Community-built and commercial skills are available for specific workflows and integrations.
Constitutional AI
Case 05
An alignment technique developed by Anthropic where models are trained to self-critique and revise their responses against a set of internal principles — a "constitution." Rather than relying solely on human raters scoring outputs, the model learns to evaluate and improve its own responses according to defined guidelines.
Context Window
Cases 01, 03, 06
The maximum amount of text a model can consider at once in a single interaction — measured in tokens. Think of it as working memory. Everything the model can "see" — your prompt, the conversation history, any documents you've provided — must fit within this window. Content outside the window cannot be referenced. Claude's context window is approximately 200,000 tokens. Longer windows allow processing entire books, legal corpora, or long conversation histories in one pass.
Copilot (AI Workflow)
Case 08
A model of AI use where the tool runs alongside you throughout your day, handling small tasks as they arise rather than being consulted occasionally for big projects. The copilot model treats AI as always-available infrastructure — like having a capable assistant at your desk who never needs a break. Individually each time-saving is small; compounded across a workday, it recovers 1–3 hours daily.
E
Embedding
Case 03
A mathematical representation of text as a list of numbers — a "vector" — that captures semantic meaning. Similar meanings produce similar vectors. This allows AI systems to find conceptually related content even when the exact words differ. Embeddings are the mathematical foundation of how RAG systems locate relevant documents in response to a question.
Enterprise (AI context)
Cases 01, 05, 06
In AI platform terms, "enterprise" describes a specific deployment tier — not just a large company, but a set of contractual and technical guarantees: dedicated infrastructure where your data is isolated from other users, compliance certifications (HIPAA, SOC 2, GDPR), custom data retention policies, Service Level Agreements (SLAs) guaranteeing uptime, and contract-based pricing. Enterprise tiers exist specifically for regulated industries and organizations with data governance requirements.
F
Few-Shot Prompting
Case 02
Providing examples in your prompt to show the model the pattern you want. Instead of describing the task in words, you demonstrate it: "Here are 3 examples of what I want — now do it for this new input." Dramatically improves consistency for structured output tasks like data extraction, formatting, and classification.
Fine-Tuning
Cases 03, 05
The process of further training an existing pre-trained model on a specific dataset to specialize its behavior. Unlike prompting — which guides the model at inference time — fine-tuning changes the model's underlying weights, baking behaviors and knowledge directly in. Best suited for tasks that are specific, consistent, and have many labeled examples. Fine-tuned models are smaller, faster, and more reliable for their target task than prompted general models.
Foundation Model
Cases 05, 06
A large-scale AI model trained on broad data that serves as the base for many downstream applications. Claude, GPT-4, Gemini, Llama, and DeepSeek are all foundation models. They're called "foundation" because they can be adapted — through fine-tuning or prompting — for many specific tasks without being retrained from scratch. The training of a foundation model costs hundreds of millions of dollars; the adaptation is comparatively cheap.
G
Grounding
Case 03
The practice of connecting AI outputs to specific, verifiable sources of truth — rather than allowing the model to generate responses from training data alone. Grounded responses cite a retrieved document, a database record, or a verified source. RAG is the primary grounding technique. Grounded outputs are easier to fact-check because each claim traces back to a source you can inspect.
H
Hallucination
Case 03
When a model generates confident, fluent, plausible-sounding content that is factually incorrect or entirely fabricated. The term comes from psychology — the model "perceives" something that isn't there. Critical to understand: confidence and fluency are not indicators of accuracy. A hallucinating model sounds exactly like a model stating verified facts. This is why independent verification of factual claims, citations, and data is non-negotiable.
Human-in-the-Loop (HITL)
Case 04
An architectural design where humans review, approve, or correct AI outputs at defined checkpoints before the system proceeds. Essential for high-stakes applications where errors are costly or irreversible. HITL adds latency but provides accountability and catches failure modes that full automation misses — particularly important for agents taking actions in financial, legal, medical, or public-facing systems.
I
Inference
Case 01
The process of running a trained model to generate outputs — as distinct from training, which adjusts the model's weights. When you send a prompt to Claude and receive a response, that's inference. API costs are measured in inference compute: the number of tokens processed (input plus output). Inference is relatively cheap compared to training, which is why using AI is accessible even though building it isn't.
J
Jailbreak
Case 05
A technique for bypassing a model's safety guidelines through crafted prompts. Jailbreaks exploit gaps in alignment training to get models to produce content they're designed to refuse. Not a permanent hack — they typically work until the model provider identifies and patches the specific vulnerability. The existence of jailbreaks demonstrates that alignment is an ongoing engineering problem, not a solved one.
K
Knowledge Cutoff
Cases 01, 03
The date after which a model has no training data. Events, publications, and developments after this date are unknown to the model unless provided via RAG or other grounding mechanisms. Models may confabulate plausible-sounding but fabricated information about post-cutoff events when asked directly. Always check a model's knowledge cutoff before relying on it for recent information.
L
Llama
Case 06
Meta's family of open-source large language models. Unlike proprietary models (Claude, GPT-4), Llama's weights are publicly available — anyone can download, deploy, and modify them without paying per API call. This enables fully on-premise deployment where no data leaves your infrastructure, making it the primary choice for privacy-sensitive industries like healthcare and finance with strict data residency requirements.
M
Multi-Agent System
Case 04
An architecture where multiple specialized AI agents collaborate on a task — one researches, one writes, one edits, one deploys. Each agent handles what it's best at; an orchestrator manages handoffs and overall progress. More powerful than a single agent for complex, long-horizon tasks — but requires careful design of termination conditions and handoff rules to prevent infinite loops.
Multimodal
Case 06
A model or system that can process and generate multiple types of data — text, images, audio, video — in a single workflow. A multimodal model can analyze a screenshot, read the text in a document, and write a response that references both. GPT-4 with vision, Gemini, and Claude are all multimodal. The practical implication: you can show the model a bug screenshot and ask it to diagnose the problem.
O
Orchestrator
Case 04
In multi-agent systems, the component responsible for coordinating specialized agents, routing tasks, managing shared state, and determining when the overall goal has been achieved. Acts as a project manager for a team of AI workers. Without a clear orchestrator, multi-agent systems can deadlock, loop, or produce conflicting outputs.
P
Parameters
Case 01
The numerical weights inside a neural network that encode everything the model learned during training. More parameters generally means more capacity to learn complex patterns — a 70-billion parameter model can represent more nuance than a 7-billion parameter model. But parameter count alone does not predict performance: architecture quality, training data curation, and alignment work all matter equally or more.
Plugin / Connector
Cases 06, 08
An integration that extends an AI platform's capabilities by connecting it to external tools, services, or data sources. ChatGPT plugins, Claude's MCP connectors, and Gemini's Workspace integrations are all examples. Connectors allow AI to act on real-world systems — reading your email, querying your database, searching the web — rather than just generating text in isolation.
Pre-training
Case 05
The initial training phase where a model learns from enormous amounts of text by predicting the next token in a sequence, over and over, billions of times. This is where the model acquires its broad knowledge, language understanding, and reasoning capabilities. Pre-training requires massive compute and carefully curated data — it's the step that costs hundreds of millions of dollars and produces a foundation model.
Principle of Least Privilege
Case 04
A security design principle: give any system, agent, or user only the minimum permissions required to do their job — nothing more. Applied to AI agents, it means an agent that drafts emails should not also have delete access to your database. Limits the blast radius when something goes wrong.
Prompt
Case 02
The input you provide to an AI model. This includes your question or instruction, any context you supply, examples, formatting requirements, and constraints. The quality of the prompt is the primary lever for controlling output quality — a vague prompt produces a generic answer; a specific, well-structured prompt produces a precise, useful one. Prompt quality matters more than most people expect.
Prompt Engineering
Case 02
The practice of crafting inputs to AI models to reliably produce high-quality outputs. Not guesswork — a discipline with documented techniques: providing context, few-shot examples, chain-of-thought instructions, role assignments, output format specifications, and explicit constraints. Think of it as learning to communicate clearly in a new language — the better your fluency, the better your results.
Prompt Injection
Case 02
An attack where malicious content in user input (or in retrieved documents) is crafted to override or manipulate the AI system's instructions. "Ignore all previous instructions and..." is the classic form. A significant security concern for any AI system that processes untrusted user input or retrieves content from the web — the injected instruction can hijack the system's behavior for unintended purposes.
R
RAG (Retrieval-Augmented Generation)
Cases 03, 05
An architecture where the model's response is grounded in documents retrieved in real time from a knowledge base, rather than relying solely on training data. The system first retrieves relevant content, then passes it to the model as context. Dramatically reduces hallucination for domain-specific questions (company policy, medical protocols, legal documents) and keeps knowledge current without expensive retraining.
ReAct (Reason + Act)
Case 04
An agent architecture pattern — short for Reason and Act — where the model cycles through: Reason about what to do → Take an action → Observe the result → Reason again. This loop continues until the task is complete. More reliable than single-step approaches because each observation informs the next reasoning step, allowing the agent to adapt to unexpected results rather than blindly executing a fixed plan.
RLHF (Reinforcement Learning from Human Feedback)
Case 05
The technique that transformed raw language models into helpful assistants. Human raters compare model outputs and score them on helpfulness, harmlessness, and honesty. These scores train a reward model. The reward model then guides further fine-tuning. The result is a model shaped by human preferences — which is why ChatGPT in 2022 felt dramatically different from GPT-3 in 2020, despite being built on similar underlying architecture.
S
SQL (Structured Query Language)
Cases 02, 04
The standard language used to communicate with databases. SQL lets you ask a database a question — "find all customers who made a purchase last month" — or give it an instruction — "delete all orders older than five years." In AI context, SQL comes up when agents are given access to databases as tools, and when discussing AI's ability to generate database queries from plain-language descriptions.
System Prompt
Case 02
A privileged instruction block provided to the model before any user input — invisible to users in most applications. Sets persistent behavior, tone, constraints, and context for the entire session. It's where developers define who the AI is, what it can and can't do, and how it should respond. When a customer service bot always sounds polished and refuses to discuss competitors, the system prompt is why.
T
Temperature
Case 01
A parameter that controls the randomness of model outputs. At temperature 0, the model behaves deterministically, always picking the highest-probability next token — producing consistent, reproducible outputs. Higher values introduce randomness: more varied, more creative, but less reliable. General guidance: temperature 0 for legal, compliance, and factual tasks; temperature 0.7–1.0 for brainstorming, creative writing, and ideation.
Termination Condition
Case 04
In agentic systems, the explicitly defined criteria that tell an agent when a task is complete. Without clear termination conditions, agents can loop indefinitely — an editor agent sends feedback to a writer agent, the writer revises, the editor sends more feedback, forever. Well-designed agentic systems define explicit "done" states that trigger task closure or human handoff.
Token
Cases 01, 02
The basic unit of text that language models process — approximately a word or part of a word. "Tokenization" splits text into these units before the model processes it. The model never sees actual words; it sees streams of tokens. API pricing is based on token count (input tokens plus output tokens). A typical page of text is roughly 500–750 tokens. Understanding tokens explains why long conversations cost more and why context windows have limits.
Tool Call / Tool Use
Cases 04, 08
When an agent executes an action using an external capability — searching the web, reading a file, calling an API, running code, sending a message. Each such action is one "tool call." Models have a per-response limit on how many tool calls can occur before the model must pause and return control to the user. This is why long agentic tasks sometimes stop and ask you to press "Continue" — the model has hit its tool call limit for that response.
Tool Call Limit
Cases 04, 08
The maximum number of tool calls a model can execute within a single response turn. When this limit is reached, the model stops, returns its progress, and waits for the user to continue. Well-designed agentic workflows account for this limit by breaking tasks into steps that each fit within one response's tool call budget — preventing tasks from stalling unexpectedly mid-execution.
V
Vector Database
Case 03
A specialized database that stores embeddings — text represented as mathematical vectors — and enables fast similarity search. The retrieval backbone of RAG systems. When you ask a question, the system converts it to a vector, searches for the most similar vectors in the database, and retrieves the associated documents. The model then answers based on those retrieved documents rather than from training data alone. Chroma, Pinecone, and pgvector are common examples.
Z
Zero-Shot Prompting
Case 02
Asking the model to perform a task with no examples — just a clear instruction. Contrast with few-shot prompting, which provides demonstrations. Works well for straightforward tasks the model was trained on extensively. Requires more care for specialized formats or niche domains where the model benefits from seeing what "good" looks like before attempting the task.