Glossary Index 

A

AI Agent

An AI system that autonomously performs tasks using reasoning, tools, and memory.

Agent Benchmarks

Standard tests that evaluate agent performance across tasks.

Agent Memory

Memory systems allowing agents to store and recall information across steps or sessions.

Agentic Workflow

A multi-step process executed autonomously by AI agents.

Autonomous Task Executors

Agents designed to autonomously complete tasks from start to finish with minimal user input.

AI Governance

Policies, processes, and controls that ensure AI systems are safe, compliant, and aligned.

Agent Handoffs

Mechanisms allowing one agent to pass tasks or context to another agent.

Agentic AI

AI systems that can autonomously plan, act, and use tools.

Alignment

Ensuring AI systems act according to human values, safety goals, and organizational rules.

Autonomous Workflow Orchestration

Fully automated end-to-end workflows where agents coordinate tasks without human intervention.

Adaptive Sampling

Dynamically adjusting sampling parameters to improve accuracy and reduce hallucinations.

Autonomous Evaluation Loops

Systems where agents evaluate and improve their own outputs without human intervention.

B-D

Cascading Models

Systems where small models handle easy tasks and escalate difficult ones to larger models.
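A minimal sketch of this routing pattern; both models are hypothetical stubs, and the 0.8 confidence threshold is an arbitrary assumption:

```python
def small_model(query: str) -> tuple[str, float]:
    """Toy small model: returns an answer plus a confidence score."""
    if query == "2+2":
        return "4", 0.99        # easy question, high confidence
    return "unsure", 0.30       # hard question, low confidence

def large_model(query: str) -> str:
    """Toy large model: slower, but assumed more capable."""
    return f"large-model answer to {query!r}"

def cascade(query: str, threshold: float = 0.8) -> str:
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return answer           # small model is confident enough
    return large_model(query)   # escalate to the larger model
```

The design goal is cost: most traffic never reaches the expensive model.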

Chunking

The process of breaking documents into smaller segments for retrieval.
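A character-level sketch with overlapping windows; real pipelines usually split on sentence or token boundaries, and the sizes below are toy values:

```python
def chunk_text(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into fixed-size chunks, each overlapping the previous."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```

The overlap prevents a fact from being cut in half at a chunk boundary.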

Context Injection

Automatically inserting retrieved or generated context into LLM prompts.

Differential Privacy

Techniques that ensure individual data points cannot be reverse-engineered from model outputs.

Dynamic Prompting

Automatically generating or modifying prompts based on real-time context.

Causal Reasoning Models

Models that identify cause–effect relationships instead of correlations.

Cognitive Load Balancing

Distributing complex reasoning tasks across multiple agents or models.

Context Window

The maximum number of tokens an LLM can process at once.

Directive Governance

A governance system where policies proactively shape and direct agent behavior.

Chain of Thought (CoT)

A reasoning technique where LLMs generate step-by-step logic before answering.
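A minimal way to elicit this behavior is a prompt wrapper; the exact wording below is illustrative, not a prescribed template:

```python
def cot_prompt(question: str) -> str:
    """Wrap a question so the model reasons before answering."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )
```

Asking for an explicit `Answer:` line also makes the output easy to parse.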

Confidence Scoring

Systems that estimate how confident an LLM is in its own answer.

Cross‑Model Consensus

Using multiple models to vote or agree on an answer.

Domain Adaptation

Adapting a general LLM to perform well in a specific industry or task domain.

E-G

Embeddings

Vector representations of text that capture semantic meaning.
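Embeddings are typically compared with cosine similarity; the 3-dimensional vectors below are hand-picked toys (real embeddings have hundreds or thousands of model-learned dimensions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical direction, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

vectors = {
    "cat":    [0.90, 0.80, 0.10],  # hand-picked so cat ~ kitten
    "kitten": [0.85, 0.90, 0.15],
    "car":    [0.10, 0.20, 0.95],
}
```

Semantically close texts map to nearby vectors, so "cat" scores higher against "kitten" than against "car".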

Evaluation (LLM/Agent Evaluation)

Assessing correctness, safety, robustness, and task success of LLMs and agent systems.

Event-driven Agents

Agents that trigger actions in response to system or data events.

Fine-Tuning vs RAG

Comparison of modifying model weights (fine-tuning) versus injecting external knowledge (RAG).

Guardrails

Controls that constrain AI behavior within safe and compliant boundaries.

Enterprise Agentic AI Architecture

A structured blueprint for deploying agentic AI safely and at scale within enterprises.

Evaluation Benchmarks

Standardized tests to measure the performance, safety, and reasoning of LLMs and agents.

Fact-Checking Models

Models designed to evaluate factual correctness of LLM outputs.

Grounding Models

Models designed to tie LLM outputs to real data sources.

Entropy-based Uncertainty Detection

Detecting uncertain or unstable LLM outputs using entropy measures.
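A sketch using Shannon entropy over a next-token probability distribution; the distributions below are illustrative, not from a real model:

```python
import math

def entropy(probs: list[float]) -> float:
    """Shannon entropy in bits: low when peaked, high when flat."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.97, 0.01, 0.01, 0.01]   # model strongly prefers one token
uncertain = [0.25, 0.25, 0.25, 0.25]   # model has no idea
```

A flat distribution (model unsure) yields high entropy, which can trigger escalation or a retrieval step.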

Evaluation Traces / Agent Traces

Detailed logs showing every reasoning step, tool call, and decision made by an agent.

Federated Learning

Training models across distributed data sources without moving the data.

H-J

Hallucination-Mitigation Techniques

Methods used to reduce incorrect or fabricated LLM outputs.

Instruction Tuning

Training LLMs on curated instruction–response datasets.

Hallucinations

Situations where the LLM produces incorrect or fabricated information.

Intent Classification

Detecting what the user wants so the system can route to the correct model or agent.

Hybrid Retrieval (Vector + Keyword)

Combining semantic search and keyword search for optimal retrieval accuracy.
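A sketch of blended scoring; the semantic scorer is a character-bigram stand-in for embedding similarity, and the 0.5 weight is an arbitrary assumption (production systems tune it or use reciprocal rank fusion):

```python
def keyword_score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def semantic_score(query: str, doc: str) -> float:
    """Stand-in for embedding similarity: shared character bigrams."""
    grams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    g1, g2 = grams(query.lower()), grams(doc.lower())
    return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    """Blend keyword and semantic signals with weight alpha."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * semantic_score(query, doc)
```

Keyword matching catches exact terms (IDs, names); the semantic signal catches paraphrases.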

Inverse Retrieval

A retrieval method where the model predicts what information it needs, then retrieves it.

K-M

Knowledge Consolidation

Combining multiple sources of information into coherent, stable long-term memory.

Knowledge Fusion

Merging multiple knowledge sources into a single cohesive representation.

LLM (Large Language Model)

Neural models trained on massive datasets to understand and generate language.

Long-context Models

Models capable of processing extremely long sequences of tokens.

Mixture-of-Experts (MoE)

Model architecture where only specialized subsets of parameters activate per task.

Model Guardrails

Rules and constraints that restrict what an LLM or agent is allowed to do.

Multi-Agent Systems

Multiple specialized agents collaborating to solve complex tasks.

Knowledge Distillation for Agents

Teaching smaller agents or models to mimic more advanced ones.

Knowledge Graphs

Graph structures encoding entities and their relationships.

LLM Fingerprinting

Identifying which model generated a piece of text using statistical or embedding patterns.

Memory Pruning

Removing outdated or irrelevant information from agent memory.

Model Compression Techniques

Methods to shrink LLMs while preserving accuracy.

Model Lifecycle Management

Managing models from deployment to updates, monitoring, retraining, and retirement.

Multi-hop Reasoning

Solving problems that require several reasoning jumps or intermediate steps.

Knowledge Drift Detection

Identifying when the knowledge used by models or RAG systems becomes outdated.

Latency & Performance

The speed and efficiency of LLM and agent workflow execution.

LLM Firewalls

Boundary layers that block unsafe inputs, outputs, or tool actions.

Memory Routing

Systems that decide what should be stored, recalled, or forgotten in agent memory.

Model Distillation

Compressing a large model into a smaller one while preserving performance.

Model Selection

Choosing the best LLM for a specific task based on capability, cost, and performance.

N-P

Neural Search Engines

Search engines powered by neural embeddings instead of keywords.

Orchestration

The coordination layer managing agent tasks, tools, memory, and workflows.

Policy Enforcement

AI governance mechanisms that enforce safety, compliance, and access policies.

Observability (LLM / Agent)

Monitoring, tracing, and understanding LLM and agent behavior in production.

Orchestration Patterns

Reusable workflow templates for building AI and agent systems.

Prompt Engineering

The craft of designing prompts to optimize LLM outputs.

On-device LLMs

Models running directly on edge devices like laptops, phones, or IoT hardware.

Planning & Execution (Agents)

The process of breaking down goals into steps and executing them using reasoning, retrieval, and tools.

Q-S

Query Rewriting

Transforming user queries to improve retrieval accuracy.

Reranking

Reordering retrieved results to optimize relevance before sending to an LLM.
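A minimal reranker in which the scorer is simple word overlap; real rerankers typically use a cross-encoder model for the scoring step:

```python
def score(query: str, doc: str) -> float:
    """Fraction of query words found in the document (toy relevance)."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Reorder retrieved candidates by relevance, keep the best top_k."""
    return sorted(candidates, key=lambda d: score(query, d), reverse=True)[:top_k]
```

Retrieval optimizes for recall over millions of documents; the reranker spends more compute per candidate to get precision on the short list.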

Retrieval-Augmented Generation 2.0 (RAG 2.0)

Next-generation RAG systems using re-ranking, multi-hop retrieval, and verification.

Safety RL (RLAIF/RLHF)

Reinforcement-learning methods used to align LLMs with human values and safety rules.

Self-Supervision

Training models on unlabeled data by extracting labels automatically.

Semantic Search

Search based on meaning rather than keywords.

Synthetic Benchmarking

Using AI-generated datasets to test models or agents.

Synthetic Data

Artificially generated data used to train, test, or evaluate AI systems.

RAG (Retrieval-Augmented Generation)

LLM enhanced with external retrieved knowledge for accuracy and grounding.

Retrieval Failures

Situations where RAG returns irrelevant, missing, or incomplete results.

Routing Models

Systems that route tasks to the most appropriate LLM.

Safety Sandboxing

Running agent actions and tool calls in isolated, controlled environments.

Semantic Indexing

Organizing documents using embeddings to support semantic search.

Stateful Agents

Agents that persist knowledge across interactions instead of starting from scratch each time.

Red-Teaming

Stress-testing AI systems by simulating adversarial or harmful user inputs.

Retrieval Pipelines

End-to-end workflow that transforms queries into embeddings, retrieves documents, reranks them, and injects them into the LLM.

Safety Classifiers

Models that detect harmful, unsafe, or non-compliant content before it reaches the user.

Self-Reflection / Self-Verification

Mechanisms for LLMs or agents to review and correct their own outputs.

Semantic Routing

Routing tasks to models or agents based on semantic features of the input.

Synapse Agents (Coordinated Multi-Agent Systems)

Systems where agents collaborate using a shared memory and communication fabric.

T-V

Task Decomposition

Breaking complex tasks into smaller steps or subtasks.

Tool Confidence Estimation

Assessing how likely it is that a tool call is needed or correct.

Tool-augmented Reasoning

Combining LLM reasoning with external tools such as search, databases, and code interpreters.

Verification Layers

Systems that verify the correctness of LLM or agent outputs before final delivery.

Transformer

The neural network architecture behind modern large language models.

Task-specific Adapters (LoRA / PEFT)

Small modular parameters that allow models to specialize without fully retraining.

Tool Latency Optimization

Reducing delays caused by tool calls in agent workflows.

Toolformer-style Models

Models trained specifically to call tools and APIs autonomously.

Token / Tokenization

The process of converting text into tokens that LLMs can process.
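A word-level sketch of the encode/decode round trip every tokenizer provides; real tokenizers (BPE, SentencePiece) use learned subword vocabularies rather than whole words:

```python
def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign each distinct word in the corpus an integer id."""
    words = sorted({w for line in corpus for w in line.split()})
    return {w: i for i, w in enumerate(words)}

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Text -> token ids (the form the model actually consumes)."""
    return [vocab[w] for w in text.split()]

def decode(ids: list[int], vocab: dict[str, int]) -> str:
    """Token ids -> text."""
    inv = {i: w for w, i in vocab.items()}
    return " ".join(inv[i] for i in ids)
```

Subword vocabularies exist precisely because a word-level scheme like this fails on any word not seen during vocabulary construction.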

Vector Database

A database optimized for storing and searching vector embeddings.
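A brute-force sketch of the core operation; real vector databases use approximate nearest-neighbor indexes to make this fast at scale:

```python
import math

class TinyVectorStore:
    """Toy vector store: exact nearest-neighbor search by cosine similarity."""

    def __init__(self) -> None:
        self._items: dict[str, list[float]] = {}

    def add(self, key: str, vector: list[float]) -> None:
        self._items[key] = vector

    def search(self, query: list[float], top_k: int = 1) -> list[str]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self._items, key=lambda k: cos(query, self._items[k]),
                        reverse=True)
        return ranked[:top_k]
```

Exact search scans every stored vector, which is fine for thousands of items but not for billions; that gap is what dedicated index structures close.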

Temporal Reasoning

The ability of models to understand sequences, timelines, and time‑dependent logic.

Tool Use / Tool Calling

LLM or agent invoking external tools, APIs, functions, or code to perform real actions.
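A minimal dispatch sketch in which the model's output is a JSON tool call; the call format and tools below are assumptions, since real tool-calling APIs define their own schemas:

```python
import json

def get_weather(city: str) -> str:
    """Stub tool standing in for a real weather API."""
    return f"sunny in {city}"

def add(a: float, b: float) -> float:
    """Stub calculator tool."""
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def dispatch(tool_call_json: str):
    """Parse a model-emitted tool call and execute the named tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]            # look up the tool by name
    return fn(**call["arguments"])      # invoke with the model's arguments
```

The runtime, not the model, executes the call; the result is usually fed back into the conversation for the model to use.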

W-Z

Watermarking for LLM Output

Embedding hidden signals in AI-generated text to trace its origin.

Weak Supervision

Training models with noisy, incomplete, or programmatically generated labels.
