Test, verify, and harden your bot's AI safety guardrails.
Chat with your bot in a live session, then have an AI judge evaluate every response against your configured guardrail policies, catching what passed, what failed, and what's missing.
Setup Connection
Enter your bot's webhook credentials to establish a live chat session with your deployed Kore.ai bot.
Review Guardrails
Configure guardrail settings by importing an App Definition file from your bot, or manually select which safety policies to test against your deployed configuration.
Test & Evaluate
Chat with your bot live, or deliberately attack it, then let an AI judge audit every response and score each guardrail as passed, failed, or missing.
Setup Connection
Connect your bot, chat to test responses, and generate adversarial attacks.
Bot Configuration
Web SDK (Chat) Credentials
Live Verification Console
Define guardrails
Set your safety policies and inspect real-time system logs.
Guardrail Configuration
Kore GenAI Logs0 logs
| Timestamp | Category | Activity | Model |
|---|
Send a message to the bot to start a session.
Logs will be filtered by your session automatically.
Test & Evaluate
Use predefined guardrail prompts or create custom ones, run evaluations, and analyze results.
Evaluation Model
Keys are persisted securely in your browser's local storage for each provider.
Batch Testing via CSV
Format: conversation_number, utterance
| ID | Messages | Status | Details |
|---|---|---|---|
| No results yet. Upload a CSV and run batch. | |||