# Core Concepts

## Access Management
Access Management in Purple Fabric provides fine-grained control over how users are onboarded, assigned roles, and granted permissions across tenants and workspaces. It ensures that access to agents, tools, knowledge bases, and platform functionalities is strictly governed based on user policies and workspace-specific configurations.
At the tenant level, Tenant Admins have full visibility and control over user provisioning and policy assignments. They can manage global access rules and monitor user transactions across agents. Meanwhile, Workspace Admins manage workspace-level user policies and access within their scope, enabling decentralized team autonomy under centralized oversight.
This layered approach to access management ensures secure, compliant, and scalable operations across diverse teams and use cases within Purple Fabric.
## User Policies
Each user is governed by a policy that defines what actions they can perform within a workspace.
The following user policies are available out of the box (a permission sketch follows this list):
- Tenant Admin - A Tenant Admin is the highest-level administrator within a Purple Fabric tenant. This role holds full visibility and control across all workspaces, users, models, and policies within the tenant. The Tenant Admin is responsible for:
  - Onboarding and managing users at the platform level
  - Creating workspaces within the platform
  - Creating and assigning users to workspaces
  - Monitoring activity and usage across the tenant

  Once a user is onboarded by the Tenant Admin, they can be assigned to one or more workspaces.
- Workspace Admin - A Workspace Admin is responsible for managing operations within a specific workspace on the Purple Fabric platform. Unlike a Tenant Admin, who has visibility and control across all users and workspaces within a tenant, the Workspace Admin operates within the boundaries of a single workspace. Their primary responsibilities include assigning user policies to members already onboarded by the Tenant Admin, configuring workspace-level resources, and overseeing agent-related activities.

  Workspace Admins cannot add new users to the platform, but they can organize and manage those already granted access by the Tenant Admin. The Workspace Admin is responsible for:
  - Managing user policies for workspace members
  - Managing resources specific to their workspace (cost, tokens)
  - Monitoring and managing agent-related operations and activities within their designated workspace
  - Adding or removing members from the workspace
- GenAI User - The GenAI User policy is intended for users who actively build, test, and manage generative AI agents within the Expert Agent Studio. This role is typically assigned to builders or solution developers who are responsible for creating intelligent agents that can later be used by broader teams or embedded into applications. Users assigned this policy have access to:
  - The Expert Agent Studio for building and configuring agents
  - The Knowledge Garden for building knowledge bases
  - The Tools Module for building custom or API tools
  - Deploying agents within allowed workspaces
  - Monitoring agents via the Governance Module

  They do not have access to platform-wide configurations, model governance, or tenant-level controls unless additional roles are granted.
- GenAI Consumer - This policy is designed for end users who interact with generative AI agents but do not participate in their creation or configuration. These users have read-only access to agents and can test agent functionality by creating transactions or submitting queries. This role is ideal for business users, stakeholders, or teams consuming AI-driven solutions built by GenAI Users. It ensures a safe, sandboxed environment where users can evaluate and utilize agents without altering underlying logic or configurations, promoting controlled usage and wider adoption of GenAI capabilities within the organization. Users assigned this policy can:
  - Use and test existing agents by initiating conversations
  - Consume agents, but not create, modify, or publish new agents

  Note: GenAI Consumers can create transactions only for Conversational Agents, to check agent behavior after publishing; they cannot create transactions for Automation Agents.
- GenAI Controller - The GenAI Controller policy is intended for users who not only build and interact with agents but also manage the underlying integrations that power them. In addition to having access to the Expert Agent Studio, these users can create and manage the connections and credentials required by agents to interface with external systems or data sources. This role is ideal for technically proficient users responsible for end-to-end agent setup, including configuring secure access to APIs, databases, or internal tools. Users under this policy can:
  - Build, interact with, and manage generative AI agents
  - Manage the underlying integrations that agents use
  - Create and manage connections and credentials for external systems
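To make the scope of each policy concrete, here is a minimal sketch that models the policies above as a permission map with a simple check function. The permission strings and function are illustrative assumptions, not Purple Fabric's actual API.

```python
# Illustrative sketch only: the policy names mirror the documentation above,
# but the permission strings and check function are hypothetical assumptions,
# not Purple Fabric's actual API.
POLICY_PERMISSIONS = {
    "tenant_admin": {
        "onboard_users", "create_workspaces", "assign_users_to_workspaces",
        "monitor_tenant",
    },
    "workspace_admin": {
        "assign_user_policies", "manage_workspace_resources",
        "monitor_workspace_agents", "manage_workspace_members",
    },
    "genai_user": {
        "build_agents", "build_knowledge_bases", "build_tools",
        "deploy_agents", "monitor_agents", "run_agents",
    },
    "genai_consumer": {
        "run_agents",  # read-only: no create, modify, or publish permissions
    },
    "genai_controller": {
        "build_agents", "run_agents", "manage_integrations",
        "manage_connections_and_credentials",
    },
}

def can(policy: str, permission: str) -> bool:
    """Return True if the given policy grants the permission."""
    return permission in POLICY_PERMISSIONS.get(policy, set())

assert can("genai_consumer", "run_agents")
assert not can("genai_consumer", "build_agents")
```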
## Workspace
A Workspace in Purple Fabric serves as an isolated environment where teams can collaborate on, manage, and deploy GenAI agents securely. Each workspace acts as a scoped unit of access, configuration, and experimentation, enabling users to organize agents, Knowledge Gardens, connections, datasets, and policies while keeping them isolated from other workspaces. Workspaces support role-based access control, allowing fine-grained permission assignments such as the GenAI User, GenAI Consumer, and Workspace Admin roles.
## Visibility Settings
Purple Fabric supports two visibility modes for entities within a workspace: Private and Public. When an entity (an agent, Knowledge Garden, or tool) is created, the user selects its visibility. Keeping an entity private allows users to iterate and test their agents, Knowledge Gardens, and tools without exposing them to others in the workspace. Once ready, the entity can be explicitly marked as public, making it accessible to all users within that workspace who have the appropriate permissions.
Regardless of visibility settings, Workspace Admins have visibility into all agents within their assigned workspace through the Governance Module, enabling them to oversee activity and manage resources effectively. At a broader level, Tenant Admins have full visibility into all agents across every workspace through the Governance Module. This tiered visibility ensures proper governance while allowing creators flexibility and control over when their work is shared.
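As a rough illustration of how visibility might be expressed when defining an entity, the snippet below uses a hypothetical payload shape; the field names and values are assumptions for illustration, not Purple Fabric's actual schema.

```python
# Hypothetical entity definition; field names are illustrative assumptions.
agent_definition = {
    "name": "policy-faq-agent",
    "type": "conversational",
    "visibility": "private",  # iterate and test without exposing the agent
}

# Once the agent is ready to share with the rest of the workspace:
agent_definition["visibility"] = "public"
```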
## LLM Guardrails
LLM Guardrails are a foundational safety mechanism designed to ensure responsible and controlled use of large language models in enterprise environments. They serve as configurable controls that help limit the generation or processing of inappropriate, sensitive, or harmful content. Guardrails act as a trust layer, protecting end users, safeguarding brand reputation, and enabling AI builders to align generated outputs with compliance and ethical standards.
### Toxicity Detection
Toxicity Detection is an LLM guardrail supported by Purple Fabric that lets users define thresholds for filtering harmful content based on categories such as:
- Hate
- Sexual Content
- Violence
- Misconduct
- Insults
For each category, platform users can configure sensitivity levels (Strict, Moderate, or Lenient) to reflect their organizational needs. Importantly, toxicity checks can be applied to both prompts (user inputs) and responses (LLM outputs), offering two layers of control.
Additionally, users can configure a custom fallback message, referred to as the Specific Guardrail Response, which is triggered when toxicity is detected, so end users see a clear and consistent message instead of the default one.
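A minimal sketch of what a toxicity guardrail configuration could look like follows, assuming a simple key-value structure. The category keys, level names, and field names track the prose above, but the exact schema is an assumption, not the platform's actual configuration format.

```python
# Hypothetical guardrail configuration; the schema is an illustrative
# assumption based on the options described above.
toxicity_guardrail = {
    # Per-category sensitivity: "strict", "moderate", or "lenient".
    "categories": {
        "hate": "strict",
        "sexual_content": "strict",
        "violence": "moderate",
        "misconduct": "moderate",
        "insults": "lenient",
    },
    # Apply checks to both user inputs and LLM outputs.
    "apply_to": ["prompt", "response"],
    # Specific Guardrail Response shown instead of the default message.
    "guardrail_response": (
        "Your request could not be processed because it may contain "
        "content that violates our usage guidelines."
    ),
}
```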
## Explainable AI
Explainable AI (XAI) in Purple Fabric provides transparency into how AI agents arrive at their conclusions by exposing the underlying reasoning process, decision traces, and supporting sources. In enterprise-grade deployments, where accountability, trust, and regulatory compliance are essential, explainability is not optional; it is foundational.
Rather than functioning as a black-box model, Purple Fabric agents are equipped to surface trace-level information, identify referenced data sources, and highlight the AI’s path to response generation. This enables users to validate behavior, debug outputs, and build trust with end users and stakeholders.
Explainability components in the platform:
- Traces: Visualize the step-by-step reasoning an agent followed to generate a response, including the tools or functions invoked, intermediate steps, and any decision points.
- Sources & Citations: Identify where the agent pulled information from.
- Input/Output Mapping: View token-level breakdowns of inputs and outputs.
- Trace-level Monitoring for Transactions: At the individual transaction level, trace logs can help identify failure points, misinterpretations, or hallucinations.
### Traces
**Definition:** Traces display a chronological execution path of the agent’s actions, from receiving the prompt to producing the final output. This includes every processing step, decision, and tool invocation performed by the agent.
What You’ll See in a Trace (a hypothetical example follows this list):
- Initial user input
- Tool selections made (e.g., calling an API, querying a database, invoking a retrieval action)
- Each intermediate result (e.g., raw data fetched from the knowledge base)
- Final generation stage using the LLM
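For illustration, a single transaction’s trace might be rendered as an ordered list of steps like the sketch below. The step types and field names are hypothetical assumptions, not the platform’s actual trace schema.

```python
# Hypothetical trace for one transaction; step types and fields are
# illustrative assumptions.
trace = [
    {"step": 1, "type": "input", "detail": "User asks: 'How many vacation days do I get?'"},
    {"step": 2, "type": "tool_call", "detail": "Retrieval against the HR knowledge base"},
    {"step": 3, "type": "intermediate_result", "detail": "3 chunks returned from the HR manual"},
    {"step": 4, "type": "llm_generation", "detail": "Final answer composed from retrieved chunks"},
]

# Print the chronological execution path.
for step in trace:
    print(f"{step['step']}. [{step['type']}] {step['detail']}")
```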
Use Cases:
- Debugging & Iteration: Builders can analyze bottlenecks, failures, or hallucinations.
- Auditability: Internal teams or regulators can see how decisions are made, step by step.
- Trust & Training: New users or reviewers can understand the reasoning process, boosting confidence in agent decisions.
### Sources and Citations
Citations showcase the specific data sources used by the agent to derive answers. This provides a clear linkage between the response and its origin, whether internal or external.
Source Types:
- Internal: Retrieved chunks from the Knowledge Base
- External: Web data, APIs, third-party tools, real-time systems
Example: If an agent answers a question about a company policy using an uploaded HR manual as a source, the user will see the chunks retrieved from the manual linked below the response, clearly indicating that the information was pulled from an authoritative internal document.
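Continuing the HR-manual example, a cited response could be represented roughly as the sketch below; the structure and field names are illustrative assumptions, not the platform’s actual citation format.

```python
# Hypothetical response-with-citations structure for the HR-manual example;
# field names are illustrative assumptions.
response = {
    "answer": "Full-time employees accrue 20 vacation days per year.",
    "citations": [
        {
            "source_type": "internal",  # retrieved chunk from the Knowledge Base
            "document": "HR_Manual_2024.pdf",
            "chunk": "Section 4.2: Vacation accrual for full-time staff...",
        },
    ],
}
```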
## Auditability
Auditability refers to the ability to track, trace, and verify every action taken by AI agents within the platform. In a GenAI-powered enterprise environment, where agents may handle sensitive data, generate business-critical responses, or automate workflows, auditability becomes a foundational requirement for trust, accountability, and regulatory compliance.
Through auditability, platform users gain comprehensive visibility into how agents are behaving over time, who is interacting with them, what responses they are generating, and how much cost and performance overhead they incur. This not only supports responsible AI practices but also ensures that organizations can meet internal governance standards and external compliance requirements.
Every transaction is logged with associated metadata (a sample record follows this list), such as:
- Agent name and type
- Prompt and response details
- Cost and token consumption
- End-user ratings
- Time taken for cognitive processing
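Putting those fields together, a single audit record could look something like the following sketch; the field names, units, and values are assumptions for illustration only.

```python
# Hypothetical audit record combining the metadata fields listed above;
# field names, units, and values are illustrative assumptions.
audit_record = {
    "agent_name": "policy-faq-agent",
    "agent_type": "conversational",
    "prompt": "How many vacation days do I get?",
    "response": "Full-time employees accrue 20 vacation days per year.",
    "cost_usd": 0.0042,
    "input_tokens": 310,
    "output_tokens": 58,
    "end_user_rating": 5,
    "cognitive_processing_ms": 1840,
}
```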
## Agent Monitoring
Agent Monitoring is a key governance capability that provides end-to-end visibility into the health, usage, and performance of all deployed AI agents within the platform. It enables users to proactively track issues, optimize LLM spend, and improve agent effectiveness. The monitoring experience is centered around a dynamic grid-based dashboard that offers both agent-level and transaction-level views.
At the agent level, stakeholders can assess overall usage metrics such as the number of transactions, input/output tokens, model cost, and cognitive processing times. Each agent is presented with relevant metadata like agent type, version, and current status (active/inactive).
Drilling down further, the transaction-level view provides granular insights into individual conversations or automation runs. This includes timestamps, user inputs, model completions, and associated costs, allowing users to audit behavior, identify failures, and assess how agents are being used in practice.
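As a sketch of how the two views relate, agent-level metrics can be derived by aggregating transaction-level records. The record shape below reuses the hypothetical audit-record fields from the Auditability section and is an assumption, not the platform’s actual data model.

```python
# Sketch: deriving agent-level metrics from transaction-level records.
# The record shape reuses the hypothetical audit_record fields above.
def summarize_agent(transactions: list[dict]) -> dict:
    """Aggregate per-transaction records into an agent-level view."""
    return {
        "transactions": len(transactions),
        "input_tokens": sum(t["input_tokens"] for t in transactions),
        "output_tokens": sum(t["output_tokens"] for t in transactions),
        "total_cost_usd": round(sum(t["cost_usd"] for t in transactions), 4),
        "avg_processing_ms": (
            sum(t["cognitive_processing_ms"] for t in transactions) / len(transactions)
            if transactions
            else 0
        ),
    }
```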