Understanding the Sandbox

Test your AI agent before launch. Simulate conversations, play different prospect types, and optimize prompts and offerings safely.

The Sandbox allows you to test your AI agent's conversation skills before launching real campaigns. You act as a prospect while your AI agent tries to sell your offering, helping you identify and fix issues with your setup.

How Sandbox Works

Conversation Simulation:

  • Your AI agent attempts to sell your offering

  • You role-play as different types of prospects

  • You test various scenarios and responses

  • You evaluate AI behavior and conversation quality

What Gets Tested:

  • Prompt quality: How well your Context and First Message prompts work

  • Offering effectiveness: How well the AI understands and presents your solution

  • Conversation flow: Natural dialogue progression and responses

  • Objection handling: The AI's ability to address prospect concerns


Testing Strategies

Good Prospect vs Bad Prospect

Good Prospect Testing:

  • Act interested and engaged

  • Ask relevant questions about your solution

  • Show buying intent signals

  • Test if the AI can close effectively

Bad Prospect Testing:

  • Be skeptical and challenging

  • Raise objections (timing, relevance, etc.)

  • Act disinterested or distracted

  • Test the AI's persistence and objection handling

Challenge Your AI

  • Push boundaries: Test edge cases and difficult scenarios

  • Ask tough questions: See if the AI can handle complex inquiries

  • Be uncooperative: Test how the AI handles difficult prospects

  • Change topics: See if the AI can redirect conversations effectively (a planning sketch follows below)
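
If you want to cover these combinations systematically, plan your sessions up front. As a loose illustration, the short Python sketch below builds a persona-by-challenge checklist. It is hypothetical and not part of the product, and the personas and challenges are placeholder examples to swap for your own:

    # Hypothetical planning aid -- not a product feature. Swap in the
    # personas and challenges that matter for your own offering.
    from itertools import product

    personas = ["good-fit prospect", "skeptical prospect", "distracted prospect"]
    challenges = [
        "asks detailed questions about the solution",
        "raises a timing objection",
        "questions whether the offering is relevant",
        "changes the topic mid-conversation",
    ]

    # One Sandbox session per persona/challenge pair keeps coverage complete.
    for n, (persona, challenge) in enumerate(product(personas, challenges), start=1):
        print(f"Session {n}: play a {persona} who {challenge}")

Running every pairing takes only a dozen short sessions and makes it obvious which scenarios you haven't tested yet.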


Requirements

Campaign Dependency: You must create a campaign before using the Sandbox. The Sandbox tests a specific campaign configuration, so you need:

  • A configured campaign

  • Selected offering

  • Selected prompts

Important: Your LinkedIn account doesn't need to be connected to use the Sandbox.


What to Look For

Red Flags

  • Unnatural tone: AI sounds robotic or scripted

  • Generic responses: Lack of personalization or context

  • Poor objection handling: Can't address prospect concerns

  • Weak closing: Fails to secure meetings or next steps

Quality Indicators

  • Natural conversation flow: Responses feel human and contextual

  • Strong personalization: AI uses prospect information effectively

  • Confident objection handling: Addresses concerns professionally

  • Clear value communication: Effectively explains your solution's benefits

Optimization Process

  1. Test thoroughly: Run multiple scenarios and prospect types

  2. Identify weaknesses: Note where conversations break down (see the note-keeping sketch after this list)

  3. Refine components: Update prompts or offering based on findings

  4. Retest: Validate improvements in new Sandbox sessions

  5. Repeat: Continue until conversation quality meets standards
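
Your notes can live anywhere, but a consistent format makes the retest step easier. As a loose illustration (hypothetical plain Python, not part of the product), a minimal run log might look like this:

    # Hypothetical note-keeping sketch -- not a product feature. It simply
    # records each Sandbox run so failed scenarios feed the retest queue.
    from dataclasses import dataclass, field

    @dataclass
    class SandboxRun:
        scenario: str                # e.g. "skeptical prospect, timing objection"
        passed: bool                 # did the AI handle the scenario acceptably?
        issues: list[str] = field(default_factory=list)

    runs = [
        SandboxRun("good-fit prospect, buying signals", passed=True),
        SandboxRun("skeptical prospect, timing objection", passed=False,
                   issues=["generic response", "no clear next step proposed"]),
    ]

    # Anything that failed goes back through refine -> retest.
    retest = [run.scenario for run in runs if not run.passed]
    print("Retest after updating prompts or offering:", retest)

Whatever format you use, the point is the loop: every noted issue should map to a prompt or offering change, and every change should trigger a retest.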


Best Practices

Testing Approach

  • Be thorough: Don't rush the testing process

  • Vary scenarios: Test different prospect personalities and situations

  • Document issues: Keep notes on what needs improvement

  • Test after changes: Always retest after modifying prompts or offerings

Before Launch

  • Complete testing: Ensure AI handles all common scenarios

  • Validate improvements: Confirm all identified issues are resolved

  • Team review: If you work in a team, have others run their own test sessions


The Sandbox is your safety net. Invest the time to test thoroughly so your AI agent is performing at its best before it engages real prospects.
