The Pioneer's Rosetta Stone: Every AI Term Mapped to the World Before

Manjula Sridhar Apr 2, 2026 5 min read
← Back to Blog

Every era of computing invented its own vocabulary. The pioneers of the mainframe era had to decode assembly mnemonics. The object-oriented era gave us classes, inheritance, and polymorphism. Now AI has handed us a new dictionary — and if you're building with it today, you're a pioneer whether you signed up for that title or not.

This is your Rosetta Stone. For every AI term, we give you two anchors: the human situation it mirrors, and the pre-AI software concept it most closely replaced. Use whichever column clicks for you.

The Pioneer's Rosetta Stone — AI terms mapped to pre-AI equivalents
AI Term Human Equivalent Pre-AI Software Equivalent What Actually Changed
Models & Training
Foundation Model / LLM Trained domain expert / specialist knowledge worker Pre-compiled library / black-box SDK Before AI you hired an expert or imported a library with fixed functions. Now you licence a model with generalised reasoning and adapt it — the "library" can handle tasks it was never explicitly coded for.
Training Years of study and experience Writing & compiling the codebase Instead of writing explicit rules, you feed the model data and let it learn the rules itself. The developer no longer authors every branch — the data does.
Fine-tuning Specialised on-the-job training / apprenticeship Forking a library / patching a dependency A general model is adapted to a specific domain — like forking an open-source library and modifying it for your use case, except the "modification" is done with data, not code.
Inference Answering a question Runtime execution of a compiled binary Running a trained model on new input. The direct equivalent of executing your compiled program — except the "program" was written by gradient descent, not a developer.
Model weights A professional's accumulated knowledge Compiled binary / .so / .dll file The trained artefact you deploy. You ship weights like you used to ship executables — except weights are opaque even to their creators.
Prompting & Instructions
Prompt A job brief / instruction to a colleague Function call with arguments / API request body Natural language replaces code as the invocation mechanism. The interface is conversational, not programmatic — but the intent is the same: tell the system what to do.
System prompt Employee handbook / job description Config file / application.properties / .env Sets the model's persona, rules, and constraints before the user speaks. Like a config file, it defines the operating context — but it's written in prose, not key-value pairs.
Few-shot examples Showing a new hire worked examples Unit test fixtures / sample input-output pairs in a spec Instead of writing a rule, you show the model a few examples and it generalises. The pre-AI equivalent was documenting expected behaviour in a test or spec file.
Chain of thought Showing your working before giving an answer Verbose logging / step-by-step debug trace Asking the model to reason step-by-step before concluding. It improves accuracy for the same reason debug logs help — making the intermediate steps visible forces coherence.
Temperature A person's risk appetite / creativity level Random seed / jitter configuration Controls output randomness. Low temperature = deterministic and safe. High temperature = creative and unpredictable. The software equivalent was setting a random seed or adding jitter to a retry interval.
Memory & Storage
Context window Working memory / desk space In-memory buffer / stack frame / RAM limit The amount of information the model can hold and reason over in one call. Exceed it and earlier content gets dropped — exactly like a fixed-size buffer overrun, except the model silently forgets rather than crashing.
Embeddings Filing something under a meaningful label Hash function / inverted index key Text is converted to a numeric vector so it can be compared by meaning. The pre-AI equivalent was computing a hash or building an inverted index — but those matched by exact terms, not semantics.
Vector database A card catalogue organised by topic, not title Elasticsearch / Solr / full-text search index Stores embeddings and retrieves the most semantically similar records. Elasticsearch was the pre-AI equivalent — but it matched keywords. A vector DB matches meaning.
RAG (Retrieval-Augmented Generation) Checking the reference manual before answering DB lookup before processing / read-through cache The model fetches relevant documents before generating a response. The software pattern is identical to a read-through cache or a pre-query enrichment step — fetch context, then compute.
Persistent memory Taking notes between meetings Database / session store (Redis, Postgres) Information stored between sessions so the model remembers past interactions. The direct software equivalent: writing state to a database or a session cache.
Agents & Automation
AI Agent An autonomous employee who plans and acts Daemon / cron job / autonomous script A model that plans, acts, observes results, and loops until a goal is achieved. The pre-AI equivalent was a daemon or scheduled script — but those followed hardcoded rules; an agent reasons about what to do next.
Orchestrator Project manager / team coordinator Workflow engine (Airflow, Camunda) / message broker Coordinates multiple agents or steps toward a goal. Pre-AI this was Airflow DAGs or a BPM engine — except those required a developer to define every edge. An AI orchestrator can re-plan mid-execution.
Tool use / Function calling Picking up the right tool for the job API call / library import / subprocess invocation The model decides at runtime which external function to call. Pre-AI, the developer hardcoded every call. Now the model reads the situation and chooses — the call sequence is emergent, not scripted.
Skills / Plugins / MCP tools Professional capabilities / specialisations Microservices / REST APIs / npm packages Packaged capabilities the model can invoke on demand. The software equivalent: importing a library or calling a microservice — except the model selects which one to use based on context, not hardcoded logic.
Agentic loop Working iteratively until a task is done Event loop / poll-and-process while(true) loop The model repeatedly observes → reasons → acts → checks results until complete. The structure is identical to an event loop — the difference is the decision logic inside the loop is a model, not an if/else tree.
MCP (Model Context Protocol) A universal adapter / common language OpenAPI spec / REST standard / USB-C A standard protocol for connecting models to external tools and data sources — the OpenAPI spec of the AI agent world. Before MCP, every integration was bespoke.
Quality & Safety
Hallucination No clean equivalent No clean equivalent
closest: undefined behaviour / silent data corruption
The model produces confident, fluent, wrong output. Traditional software either crashed, threw an exception, or returned the correct answer. It didn't fabricate plausible-sounding facts. This failure mode is genuinely new — a product of the architecture itself.
Guardrails Company policy / compliance rules Input validation middleware / schema enforcement Rules that constrain what the model will say or do. Pre-AI this was form validation and business rules middleware — now it has to operate on open-ended natural language, which is fundamentally harder to constrain.
Grounding Citing your sources / fact-checking Foreign key constraint / referential integrity check Anchoring model output to verified data to reduce hallucination. The software equivalent is a foreign key constraint — the output must reference something that actually exists in the source of truth.
Evals (Evaluation suite) Performance review / quality assessment Unit test suite / CI pipeline Structured tests that measure model output quality. Harder than unit tests because output is probabilistic — you're measuring accuracy distributions and failure rates, not binary pass/fail.
Prompt injection Social engineering / manipulating instructions SQL injection / XSS / unsanitised input exploit Malicious input that hijacks the model's instructions. The direct AI equivalent of SQL injection — the attacker smuggles commands through the data channel to override the intended behaviour.
Performance & Infrastructure
Token A word or syllable Byte / CPU instruction / billing unit The atomic unit of model input/output. Models think, price, and rate-limit in tokens the way networks used to think in bytes and CPUs in clock cycles.
Latency / TTFT How long before you get a first response API response time / page load / p50 latency SLA Time to First Token is the AI equivalent of time-to-first-byte. In streaming interfaces TTFT matters more than total generation time — the same perceived-performance principle as progressive page rendering.
Semantic search Asking a librarian who understands context Full-text search (Lucene / Elasticsearch) Finds results by meaning rather than keyword match. "Car accident" finds "vehicle collision". Full-text search was the pre-AI best effort — it could match synonyms with configuration, but not true semantic similarity.
Multimodal A person who can read, watch, and listen simultaneously Multi-format parser / multimedia processing pipeline The model accepts text, images, audio, and video in a single request. Pre-AI this required separate pipelines per media type stitched together with glue code — now a single model handles all of them in unified context.
💡

The Pattern Behind Every Row

The pattern that runs through almost every row is the same shift: agency moves from the developer to the model. Before AI, a human had to anticipate every branch, write every rule, and call every function explicitly. Now the model reads context and decides — which makes it far more flexible, and also far harder to control or predict. That's why concepts like hallucination and guardrails have no clean pre-AI equivalent: the failure modes are new because the architecture is new.

← Back to Blog

Stay Ahead of the Curve

Get our weekly digest of AI security news, compliance updates, and insights delivered to your inbox.

No spam. Unsubscribe anytime.