Test, verify, and harden your bot's AI safety guardrails.

Chat with your bot in a live session, then have an AI judge evaluate every response against your configured guardrail policies, catching what passed, what failed, and what's missing.

PassFailNot TestedNot Detected
How it works
01

Setup Connection

Enter your bot's webhook credentials to establish a live chat session with your deployed Kore.ai bot.

Capture Interaction
02

Review Guardrails

Configure guardrail settings by importing an App Definition file from your bot, or manually select which safety policies to test against your deployed configuration.

Define Guardrails
03

Test & Evaluate

Chat with your bot live, or deliberately attack it, then let an AI judge audit every response and score each guardrail as passed, failed, or missing.

Configure Evaluation + Results
Guardrails are evaluated strictly against their configured rules, configuration recommendations are provided separately.

Setup Connection

Connect your bot, chat to test responses, and generate adversarial attacks.

Bot Configuration

Web SDK (Chat) Credentials

Live Verification Console

Session ID: Waiting for chat...

Define guardrails

Set your safety policies and inspect real-time system logs.

Guardrail Configuration

Restrict Toxicity
Restrict Topics

Kore GenAI Logs
0 logs

Test & Evaluate

Use predefined guardrail prompts or create custom ones, run evaluations, and analyze results.

Evaluation Model

Keys are persisted securely in your browser's local storage for each provider.