AI in Construction Glossary
Forty terms you will hear from AI vendors in 2026, in plain English, written for builders who do not have time for jargon.
Every construction software vendor in 2026 will tell you they have AI. Some do. Some have a button that calls a foundation model API once. The difference matters because it shows up in what the product can actually do for you. This glossary is the cheat sheet for telling the difference. It is alphabetized; skim it at your own pace.
How to Use This Glossary
- Read top to bottom once. The terms reinforce each other.
- Bookmark this page; use it during demos when a vendor uses a term and you want to verify what they mean.
- If a term is missing, email us and we will add it.
Terms
- Agent
- An LLM combined with tools (web browsers, file readers, APIs) and a control loop, so it can take actions and react to the results. An agent can complete multi-step tasks. ConstructionBear uses agents to assemble submittal packages and merge PDFs.
- AI Copilot
- A non-autonomous AI assistant that lives inside an existing product. The user is in the driver's seat; the copilot suggests, completes, and summarizes. Procore Copilot is an example.
- AI-Native
- A product whose core architecture is designed around an AI model from the start, rather than retrofitted. AI-native products typically have chat as the primary interface and produce outputs (documents, packages) directly from natural language.
- AI-Powered
- A product that has AI features layered on top of a non-AI core. Often used as a marketing term. The honest version of "AI-powered" is "we added an AI summary button."
- Anthropic
- AI safety company that makes the Claude family of models. Founded in 2021 by former OpenAI researchers. Used by ConstructionBear and many other AI products.
- ASI (in AI context)
- In AI: Artificial Superintelligence, a hypothetical AI that exceeds human intelligence across all domains. In construction: Architect's Supplemental Instructions, a written clarification of the contract documents. Context matters.
- Benchmarks
- Standardized tests used to measure AI model capability (MMLU, HumanEval, GPQA, SWE-Bench). Benchmarks are useful but routinely gamed by vendors. Trust real-world performance more than benchmark scores.
- Chain of Thought
- A prompting technique where the model is asked to reason step by step before answering. Improves accuracy on complex tasks. Modern reasoning models (Claude, GPT) use chain of thought internally.
- Claude
- The AI model family made by Anthropic. As of 2026 the latest is Claude Opus 4.6. Used inside ConstructionBear for document drafting and reasoning.
- Context Window
- How much text an AI model can read at once. Modern frontier models handle 200K to 1M tokens (roughly a long novel to a small library). Larger context lets the AI reference more drawings, specs, and prior project documents.
- Copilot
- See AI Copilot.
- CSI MasterFormat
- The construction industry standard for organizing specifications by division and section (e.g., 09 21 16 for gypsum board). AI tools that know CSI MasterFormat can route submittals correctly without human intervention.
- Deterministic vs Probabilistic
- Deterministic systems give the same output for the same input. AI is probabilistic, meaning the same prompt can produce different (but usually similar) outputs. Document templates are deterministic. AI-drafted documents are not.
- Document Automation
- The category of AI tools that produce business documents (RFIs, submittals, change orders, pay apps) from minimal input. ConstructionBear is in this category.
- Embedding
- A numerical representation of a piece of text used for similarity search. Embeddings power retrieval (see RAG). When you search "GWB at corridor" and get back the relevant spec section, embeddings are doing the work.
- Fine-Tuning
- The process of further training a base AI model on a specific dataset to specialize it. Different from prompt engineering, which adjusts behavior without retraining. Fine-tuned construction models are rare; prompt engineering plus retrieval is more common.
- Foundation Model
- A large pre-trained AI model that can be adapted to many tasks. GPT-5, Claude Opus, Gemini Ultra are foundation models. They are what AI products like ConstructionBear are built on top of.
- Frontier Model
- The current generation of most-capable AI models. As of 2026 frontier models include Claude Opus 4.6, GPT-5, Gemini 2.5 Ultra. The frontier moves roughly every 6 to 12 months.
- GPT
- The model family made by OpenAI. As of 2026 the latest is GPT-5. The acronym stands for Generative Pre-trained Transformer, the underlying architecture the models are built on.
- Grounding
- The practice of attaching AI outputs to verifiable sources (drawings, specs, prior emails) so the user can check the work. Reduces hallucination. Strong AI products show their grounding.
- Hallucination
- A confidently wrong AI output. Caused by the model filling gaps in its knowledge with plausible-sounding text. Good products mitigate hallucinations through retrieval and grounding; they do not eliminate them. Always verify the substantive content of AI-drafted documents.
- Inference
- The act of running an AI model to generate an output. Each API call you make is an inference. Inference cost is what makes high-volume AI products expensive to operate.
- LLM
- Large Language Model. The category of AI that includes Claude, GPT, Gemini, Llama. Trained on massive amounts of text; capable of reading, writing, and reasoning over language.
- Model Card
- A vendor-published document describing an AI model, including capabilities, limitations, and known failure modes. Read these before deploying a model in production.
- Multimodal
- An AI model that handles multiple input types: text, images, audio, video. Modern frontier models are multimodal. Useful for construction because the model can read drawings as images, not just specifications as text.
- OpenAI
- AI company that makes the GPT family of models, ChatGPT, and the underlying API. Founded 2015. The default LLM provider for most enterprise software in 2026.
- Parameter
- A learnable weight inside a neural network. Frontier models have hundreds of billions to trillions of parameters. More parameters generally means more capability but also higher inference cost.
- Prompt
- The input text given to an AI model. The prompt shapes the output. Good prompts are specific, structured, and reference the format you want back.
- Prompt Engineering
- The practice of writing prompts that produce reliable AI outputs. A real skill in 2026. The difference between a $5 result and a $50 result is often prompt quality.
- Quantization
- A compression technique that shrinks a model, reducing its size and inference cost at a small loss in capability. Used to ship smaller models on phones or in low-cost serving environments.
- RAG
- Retrieval-Augmented Generation. The pattern of looking up relevant documents (drawings, specs, past RFIs) and feeding them into the AI model alongside the prompt. RAG is how AI products avoid making things up about your specific project.
- Reasoning Model
- An AI model trained or configured to think step by step before answering, often using a private chain of thought. Better at math, coding, and complex multi-document analysis. Slower and more expensive than non-reasoning models.
- Retrieval
- The lookup half of RAG. Given a query, retrieval finds the most relevant chunks of text from a corpus (your project files, past RFIs, the spec book) to feed to the model.
- SaaS
- Software as a Service. The default delivery model for construction software in 2026. Procore, Buildertrend, ACC, and ConstructionBear are all SaaS.
- Streaming
- When an AI model returns its output token by token rather than waiting for the whole response. Makes the UX feel instant. ConstructionBear streams responses by default.
- System Prompt
- A hidden prompt that sets the model's behavior, persona, and constraints. The user does not see it. Construction-specific products like ConstructionBear use system prompts to enforce CSI MasterFormat conventions and AIA contract language.
- Token
- A chunk of text the model processes (roughly 0.75 words). Pricing for AI models is per token. A typical RFI runs 300 to 800 tokens output; a submittal package can run 5,000 plus tokens with attachments.
- Tool Use
- When an AI model calls external functions (file readers, APIs, web browsers) instead of just generating text. Tool use is what turns an LLM into an agent.
- Vector Database
- A specialized database for storing and searching embeddings. Used in RAG pipelines. Pinecone, Weaviate, pgvector are common implementations.
- Vibe Coding
- A 2025-coined term for using AI to code by describing what you want in natural language and iterating. Mostly applies to software development, but the same pattern shows up in construction document workflows.
- Workflow
- The sequence of steps a team uses to complete a task (e.g., RFI submission and tracking). AI-native products redesign workflows around what AI can do; AI-powered products keep the old workflow and add an AI button.
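To make the Token entry concrete, here is the back-of-envelope arithmetic as a short sketch. The tokens-per-word ratio follows from the rough 0.75-words-per-token figure above; the per-million-token prices are hypothetical placeholders for illustration, not any vendor's actual rates.

```python
def estimate_tokens(word_count: int) -> int:
    """Rough token estimate: ~0.75 words per token, i.e. ~1.33 tokens per word."""
    return round(word_count / 0.75)

def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price_per_mtok: float, out_price_per_mtok: float) -> float:
    """Dollar cost given per-million-token prices (hypothetical rates)."""
    return (input_tokens * in_price_per_mtok
            + output_tokens * out_price_per_mtok) / 1_000_000

# A 500-word RFI is roughly 667 tokens of output.
rfi_tokens = estimate_tokens(500)
print(rfi_tokens)  # 667

# Hypothetical pricing: $3 per million input tokens, $15 per million output.
cost = estimate_cost(input_tokens=2_000, output_tokens=rfi_tokens,
                     in_price_per_mtok=3.0, out_price_per_mtok=15.0)
print(round(cost, 4))
```

This is why per-document costs are pennies but high-volume inference bills add up.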
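The Embedding and Retrieval entries can be sketched in a few lines. The three-number vectors below are made up for illustration (real embedding models produce vectors with hundreds or thousands of dimensions), and the spec-section corpus is a toy stand-in for a project's files.

```python
import math

def cosine_similarity(a, b):
    """How aligned two embedding vectors are: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy corpus: each chunk of project text paired with a made-up embedding.
corpus = {
    "09 21 16 - Gypsum Board": [0.9, 0.1, 0.0],
    "03 30 00 - Cast-in-Place Concrete": [0.1, 0.9, 0.1],
    "08 11 13 - Hollow Metal Doors": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, corpus, top_k=1):
    """The lookup half of RAG: return the top_k most similar chunks."""
    ranked = sorted(corpus,
                    key=lambda c: cosine_similarity(query_embedding, corpus[c]),
                    reverse=True)
    return ranked[:top_k]

# Pretend this vector is the embedding of the query "GWB at corridor".
query = [0.85, 0.15, 0.05]
print(retrieve(query, corpus))  # ['09 21 16 - Gypsum Board']
```

In a real RAG pipeline, the retrieved chunks are then pasted into the prompt so the model answers from your documents instead of its general training data.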
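The Agent and Tool Use entries describe a loop: the model picks a tool, the system runs it, and the result goes back to the model until the task is done. The sketch below fakes the model with a scripted decision function, since a real agent would make an LLM API call at that step; every name and file here is hypothetical.

```python
def fake_model(history):
    """Stand-in for an LLM call: decides the next action from prior results."""
    if not any(action == "read_file" for action, _ in history):
        return ("read_file", "spec_09_21_16.txt")
    return ("finish", "Submittal routed to 09 21 16 - Gypsum Board.")

def read_file(name):
    """Stub tool; a real agent would read the actual project file here."""
    return f"(contents of {name})"

TOOLS = {"read_file": read_file}

def run_agent(max_steps=5):
    """The control loop that turns an LLM into an agent: act, observe, repeat."""
    history = []
    for _ in range(max_steps):
        action, arg = fake_model(history)
        if action == "finish":
            return arg
        result = TOOLS[action](arg)   # run the tool the model asked for
        history.append((action, result))
    return "gave up"

print(run_agent())  # Submittal routed to 09 21 16 - Gypsum Board.
```

The `max_steps` cap is the part vendors rarely mention: without it, a confused agent loops forever, which is why production agents always have step or budget limits.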
The Three Vendor Tells to Watch For
If you are evaluating an AI construction product and want to know whether they actually built something or just bolted on a button, watch for these three things during a demo:
- Latency on a non-trivial task. Ask the product to draft a full submittal package, including PDF merging. If it takes more than 30 seconds, the workflow is form-driven with AI on top, not AI-native.
- Behavior on ambiguous input. Type a vague request. AI-native products ask one clarifying question and then produce. AI-bolted-on products either fail or produce generic output.
- Project memory. Reference a project you set up earlier in the demo. AI-native products carry the context. Bolted-on products lose it the moment you switch tabs.
Internal Links
- ConstructionBear home — AI-native by architecture, not by marketing.
- RFI Guide 2026
- Submittal templates
- ConstructionBear vs Procore — how AI-native compares to AI-on-top.
- ConstructionBear vs Autodesk Construction Cloud
External Reading
For longer-form analysis on the AI construction software market, see Builders Digest.
Frequently Asked Questions
- Why does this glossary matter for builders?
- AI vendors throw around terms (LLM, RAG, agent, copilot, AI-native) that mean very different things. If you cannot tell the difference, you cannot tell whether a product actually does what the brochure says. This glossary is the cheat sheet.
- Is "AI-native" different from "AI-powered"?
- Yes. AI-native means the product was architected around an AI model from the start. AI-powered usually means an AI feature was added to an existing product. The architecture difference is real and shows up in how the product feels.
- What is the difference between an LLM and an agent?
- An LLM is a language model that responds to a prompt. An agent is an LLM plus tools (file readers, APIs, browsers) plus a loop that lets it take actions and react to results. Agents do work; LLMs answer questions.
- Should I trust AI to write my construction documents?
- Trust it for the mechanical work (formatting, lookups, references) and verify the substantive content. The bar should be the same as trusting a junior engineer or admin: a useful first draft, not a final answer.
- What is a hallucination?
- A confidently stated AI output that is factually wrong. Hallucinations are why AI documents need human review. Good products reduce hallucinations through retrieval (RAG) and grounding, but they do not eliminate them.
