Events, Test Cases, and Use Cases
When building a tool on Rival, you’ll encounter three terms that appear close together - event, test case, and use case. They’re related but not interchangeable. Understanding the difference helps you build tools that are well-structured, testable, and ready to publish.
Event
An event is the structured input payload your tool receives when it’s executed.
Every tool version has at least one event. When someone runs your tool - whether from the marketplace UI, the builder, or the API - they're sending an event. It contains the actual data your tool's logic operates on.
What an event looks like depends on your tool type:
- A Function event might be `{ "texts": ["Summarize this article..."] }`
- An MCP event might be `{ "jsonrpc": "2.0", "method": "tools/list", "params": {} }`
- A Storm event might be `{ "texts": ["classify this dataset"] }` alongside a topic file and asset references
In the builder, you create and save events through the Code & Test step. You give each event a name, define its data, and save it. That saved payload is what Rival uses when your tool is tested or executed.
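Saving an event amounts to naming a payload and keeping it with the tool version. A minimal sketch of that idea, using the Function event shape shown above - note that `validate_event` is a hypothetical helper for illustration, not part of Rival's API:

```python
import json

def validate_event(name: str, payload: dict) -> dict:
    """Sanity-check a Function-style event: a non-empty list of strings.

    Hypothetical helper; the "texts" field matches the Function event
    example in the text above.
    """
    texts = payload.get("texts")
    if not isinstance(texts, list) or not texts:
        raise ValueError(f"event '{name}' needs a non-empty 'texts' list")
    if not all(isinstance(t, str) for t in texts):
        raise ValueError(f"event '{name}' must contain only strings in 'texts'")
    # A saved event is just a named payload the platform can replay later.
    return {"name": name, "payload": payload}

event = validate_event("summarize-article", {"texts": ["Summarize this article..."]})
print(json.dumps(event))
```

The point of validating before saving is that this same payload becomes a test case later, so a malformed event would fail at the worst possible moment: publish time.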
Test Case
A test case is an event used as a runnable example to validate that your tool works before you publish it.
There’s no separate test framework to configure. On Rival, a test case is simply a saved event that gets executed in test mode. When the builder prompts you to add a test case before publishing, it’s asking you to have at least one saved event with valid, executable input - proof that the tool can run end-to-end.
This matters because test cases are part of publish readiness. Rival checks that your tool has runnable examples before it can go live. A tool without a test case can’t be published.
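The publish-readiness rule described above can be sketched in a few lines. The `Tool` and `run_in_test_mode` names here are illustrative stand-ins, not Rival's actual API - the only claim taken from the text is that at least one saved event must run successfully before a tool can go live:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    saved_events: list = field(default_factory=list)  # saved event payloads

def run_in_test_mode(tool: Tool, event: dict) -> bool:
    """Stand-in for executing a saved event end-to-end in test mode."""
    return bool(event.get("texts"))  # "runs" only if real input is present

def is_publish_ready(tool: Tool) -> bool:
    # Publish readiness: at least one saved event exists and runs.
    return any(run_in_test_mode(tool, e) for e in tool.saved_events)

print(is_publish_ready(Tool("summarizer")))                          # no events yet
print(is_publish_ready(Tool("summarizer", [{"texts": ["hello"]}])))  # one runnable event
```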
Use Case
A use case is the human-readable scenario your tool is designed to solve. Unlike events and test cases, it’s not a data structure - it’s the intent behind the tool.
Use cases answer the question: what problem does this tool solve, and for whom?
Examples:
- “Summarize a batch of support tickets into short one-line summaries”
- “Classify incoming articles against a predefined topic taxonomy”
- “Extract named entities from unstructured customer feedback”
On Rival, use cases appear in two places. In your tool’s Overview step, you describe use cases as product content - explaining to potential users what the tool is for. In your tool’s Documentation, use cases become the worked examples that show how to apply the tool to a real scenario.
How They Relate
Think of the three terms as different layers of the same idea:
| Term | What It Is | Where It Lives |
|---|---|---|
| Use Case | The scenario or problem the tool solves | Overview and documentation content |
| Event | The structured input payload for an execution | Saved in the builder, sent via API |
| Test Case | An event used to validate the tool before publish | Builder - Code & Test step |
A concrete example for a text summarization tool:
- Use case: Summarize a batch of support tickets into short, readable summaries for a customer service dashboard
- Event: `{ "texts": ["Ticket 1: My order hasn't arrived...", "Ticket 2: I was charged twice..."] }`
- Test case: That same event, run through the builder to confirm the tool returns valid summaries before publishing
The use case describes the why. The event encodes the what. The test case proves it works.
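The trio above can be sketched in code. Only the event shape comes from the example; `summarize_batch` is a stub standing in for the tool's real summarization logic:

```python
# The why: the human-readable scenario.
USE_CASE = "Summarize a batch of support tickets into short, readable summaries"

# The what: the structured input payload from the example above.
event = {"texts": ["Ticket 1: My order hasn't arrived...",
                   "Ticket 2: I was charged twice..."]}

def summarize_batch(payload: dict) -> list[str]:
    """Stub summarizer: produces one short summary per input text."""
    return [t.split(":")[0] + " summary" for t in payload["texts"]]

# The proof: running the saved event as a test case and checking that
# every ticket came back with a summary.
summaries = summarize_batch(event)
assert len(summaries) == len(event["texts"])
print(summaries)
```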
Why This Matters When Building
Rival’s marketplace is built around executable examples. Users on the tool detail page can run your tool directly - they’re not just reading docs, they’re executing real inputs. That means the quality of your events directly affects how users experience your tool.
Poorly defined events make your tool hard to test and hard to demonstrate. Well-defined events - ones that represent realistic inputs a user would actually send - become the examples your tool’s public page is built around.
When you’re in the builder, the practical workflow is:
- Define your use case first - what scenario is this tool for?
- Encode that scenario as an event - what does a real input look like?
- Run it as a test case - does the tool produce the expected output?
- Use the use case description in your overview and documentation
That loop - use case → event → test case → documentation - is what produces a tool that’s both functional and well-presented on the marketplace.
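The loop can be condensed into a single build step, which also shows why the pieces compose naturally: each stage's output feeds the next. Every name below is illustrative; nothing here is Rival's API:

```python
def build_tool_example(use_case: str, event: dict, run) -> dict:
    """Turn a scenario into a documentation-ready, proven example."""
    output = run(event)              # step 3: run the event as a test case
    return {
        "use_case": use_case,        # step 1: the scenario
        "event": event,              # step 2: the realistic input
        "expected_output": output,   # proof the tool runs end-to-end
    }

example = build_tool_example(
    "Summarize a batch of support tickets",
    {"texts": ["My order hasn't arrived..."]},
    run=lambda e: [t[:20] for t in e["texts"]],  # stand-in for the tool logic
)
print(example["expected_output"])
```

The returned dict is exactly the material step 4 needs: a use-case description for the overview plus a worked, verified input/output pair for the documentation.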