
Test Data Scenarios

Scenarios are named variants of a dataset designed for different test cases. Instead of maintaining separate datasets for happy paths, edge cases, and error states, create scenarios within a single dataset.

What Are Scenarios?

A scenario is a test data variant with its own prompt and row count. Use scenarios to generate purpose-specific data from the same schema:

Scenario     | Purpose                         | Example Prompt
happy_path   | Valid, successful flows         | "Active customers with verified emails and recent purchases"
edge_cases   | Boundary conditions             | "Records with Unicode names, max-length strings, and zero values"
error_states | Invalid or problematic data     | "Customers with invalid emails, missing fields, and expired dates"
load_test    | High-volume realistic data      | "Diverse customer demographics across all regions"
demo         | Polished data for presentations | "Recognizable Fortune 500 companies with realistic metrics"

Creating a Scenario

  1. Open a dataset and click the Scenarios tab
  2. Click New Scenario
  3. Configure the scenario:
    • Name — an identifier for the test case (e.g., happy_path, edge_cases)
    • Prompt — generation context specific to this scenario
    • Row count — number of records to generate
  4. Click Create & Generate

The scenario generates its initial data immediately. A new entry appears in the scenario list with the scenario name, row count, and creation timestamp.
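The scenario entry created above can be pictured as a simple record. A minimal in-memory sketch (the `Scenario` class and its field names are illustrative assumptions, not the product's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical model of one entry in the scenario list: name, prompt,
# row count, and creation timestamp. Field names are illustrative only.
@dataclass
class Scenario:
    name: str          # identifier for the test case, e.g. "happy_path"
    prompt: str        # generation context specific to this scenario
    row_count: int     # number of records to generate
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

s = Scenario("edge_cases", "Records with Unicode names and zero values", 25)
```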

Scenario Naming Conventions

Use descriptive, snake_case names that identify the test case:

  • happy_path — valid, successful data
  • edge_cases — boundary conditions (empty strings, max values, unusual formats)
  • error_states — invalid data (missing fields, type mismatches, constraint violations)
  • load_test — high-volume data for performance testing
  • demo — polished, presentation-ready data
  • regression — data for regression test suites
  • expired_records — data with expired timestamps or statuses
  • pending_records — data in pending or unprocessed states
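The snake_case convention above can be checked mechanically. A small sketch of such a check (the regex and function are assumptions for illustration; the product may enforce its own naming rules):

```python
import re

# snake_case: lowercase words of letters/digits joined by single underscores,
# starting with a letter -- matches names like "happy_path" or "edge_cases".
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def is_valid_scenario_name(name: str) -> bool:
    """Return True if the name follows the snake_case convention."""
    return bool(SNAKE_CASE.fullmatch(name))
```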

Writing Scenario Prompts

Each scenario's prompt defines the characteristics of its generated data. Prompts are scoped to the scenario — the same schema produces different data based on the prompt.

Happy Path

"Generate active customers with verified emails, complete billing information, and recent purchases in the last 30 days"

Characteristics: All required fields populated, valid formats, recent timestamps, positive values.

Edge Cases

"Generate customers with unusual but valid data: very long names, international characters, edge-of-range values, minimal profile data"

Characteristics: Maximum-length strings, Unicode characters, boundary numeric values, optional fields left empty.

Error States

"Generate customers with problematic data: invalid email formats, missing required fields, negative values, expired dates, conflicting statuses"

Characteristics: Malformed emails, null required fields, invalid date ranges, constraint violations.
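Spot-checks like the following can confirm that a scenario's prompt produced data with the intended characteristics. This is an illustrative sketch, not a product feature; the field names and the simple email regex are assumptions:

```python
import re

# Loose email shape check, good enough for spot-checking generated rows.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_happy_path(row: dict) -> bool:
    """All required fields populated and the email is well formed."""
    required = ("user_id", "email")
    return all(row.get(k) for k in required) and bool(EMAIL_RE.match(row["email"]))

def looks_like_error_state(row: dict) -> bool:
    """At least one required field missing, or the email is malformed."""
    return not looks_like_happy_path(row)
```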

The Scenarios Tab

The Scenarios tab lists all scenarios for the dataset:

Column    | Description
Name      | Scenario identifier
Prompt    | The generation prompt used
Row count | Number of records in this scenario
Bindings  | Number of simulation connections
Created   | When the scenario was created

Click a scenario row to preview its data in a table view.

Binding Scenarios to Simulations

Bind a scenario to a mock simulation so the simulation injects data from that scenario dynamically.

  1. Open a scenario from the Scenarios tab
  2. Click Add Binding
  3. Configure the binding:
    • Simulation — select a simulation from the dropdown
    • Route pattern (optional) — limit data injection to a specific route (e.g., /api/v1/customers). Leave blank to apply to all routes.
    • Injection strategy — how data is served to incoming requests
  4. Click Save Binding

When the bound simulation receives a matching request, it pulls data from this scenario instead of using static response bodies.
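The route-pattern rule above can be sketched as follows. Exact and prefix matching is an assumption here; the product may support globs or other pattern syntax:

```python
# Binding resolution sketch: a blank route pattern applies to every route;
# otherwise the request path must equal the pattern or fall under it.
def binding_matches(route_pattern: str, request_path: str) -> bool:
    if not route_pattern:   # blank pattern: apply to all routes
        return True
    return (request_path == route_pattern
            or request_path.startswith(route_pattern.rstrip("/") + "/"))
```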

Injection Strategies

Select an injection strategy when creating a binding:

Sequential

Returns data rows in order, wrapping back to the first row after the last.

Request             | Returns
1st                 | Row 1
2nd                 | Row 2
3rd                 | Row 3
nth (past last row) | Row 1 (wraps)

Best for: Predictable test sequences, debugging, deterministic assertions.

Random

Returns a random row on each request.

Best for: Load testing, simulating realistic variation, exploratory testing.

Round Robin

Cycles through rows evenly, distributing requests across the dataset.

Best for: Balanced data distribution, multi-user simulation scenarios, fair load distribution.
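The three strategies can be sketched over a scenario's rows like this. Note that for a single consumer, sequential and round robin behave identically; round robin's even distribution matters when requests come from multiple concurrent clients. This is a simplified model, not the product's implementation:

```python
import random
from itertools import cycle

rows = ["row-1", "row-2", "row-3"]   # stand-in for a scenario's data rows

def sequential(n: int) -> str:
    """nth request (1-based) returns rows[(n - 1) % len(rows)], wrapping."""
    return rows[(n - 1) % len(rows)]

def random_row() -> str:
    """Each request returns an independently chosen random row."""
    return random.choice(rows)

# Round robin: cycle through rows evenly; next(round_robin) yields the
# next row in order, restarting from the first after the last.
round_robin = cycle(rows)
```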

Regenerating Scenario Data

To regenerate a scenario with a new prompt or row count:

  1. Open the scenario from the Scenarios tab
  2. Click Regenerate
  3. Update the prompt, row count, or both
  4. Click Generate

Regeneration replaces the scenario's data with the newly generated version. Any simulation bindings automatically reference the latest data.

Deleting a Scenario

  1. Open the Scenarios tab on the dataset
  2. Click the more options menu (three dots) on the scenario row
  3. Select Delete
  4. Confirm in the dialog
Caution: Deleting a scenario also removes all of its simulation bindings. Simulations that relied on this scenario's data will fall back to their static response bodies.

Example: Multi-Scenario Test Suite

Build a comprehensive test data strategy with multiple scenarios in a single dataset:

  1. Create a dataset with a User schema (fields: user_id, email, created_at, is_verified)
  2. Add a happy_path scenario — "Verified users with valid emails and recent creation dates" (100 rows)
  3. Add an error_states scenario — "Users with invalid emails, unverified status, and null creation dates" (50 rows)
  4. Bind happy_path to your integration test simulation with Random injection
  5. Bind error_states to your error-handling test simulation with Sequential injection

Now your integration test simulation returns valid user data, while your error-handling simulation returns problematic data — all from the same dataset schema.
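The walkthrough above can be sketched in-process. The two simulation functions below stand in for the bound mock simulations; they are hypothetical helpers, not the product's API, and the sequential path is simplified (no wrap-around):

```python
import random

# Scenario data matching the User schema from the walkthrough.
happy_rows = [{"user_id": i, "email": f"user{i}@example.com", "is_verified": True}
              for i in range(1, 4)]
error_rows = [{"user_id": 10, "email": "bad-email", "is_verified": False},
              {"user_id": 11, "email": None,        "is_verified": False}]

def integration_sim() -> dict:
    """happy_path bound with Random injection: any valid user per request."""
    return random.choice(happy_rows)

_seq = iter(error_rows)
def error_sim() -> dict:
    """error_states bound with Sequential injection: rows in order."""
    return next(_seq)
```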

Next Steps

  • Generation — understand AI data generation and prompts
  • Sharing — share scenarios with team members
  • Datasets — manage dataset metadata