Testing & Executing

Overview

Testing and execution are central to building reliable tools on Rival. The platform is designed so that you can validate your tool’s behavior during development and then execute the exact same logic in production without any changes.

Testing is not a separate workflow - it is embedded directly into the tool builder and repeated just before publishing, ensuring that what you ship behaves exactly as expected.

Where testing happens

Testing is available at two stages in the tool lifecycle.

The primary testing environment is inside the Code & Test (Step 3) tab. This is where you build your logic, define inputs, and run multiple test cases while iterating on your tool.

The second opportunity comes at Publish (Step 6), which acts as a final validation checkpoint before release. At this stage, you are testing the exact version that will be published, ensuring there are no last-minute issues.

Events, test cases, and use cases

Every execution on Rival is driven by an event. An event defines how your tool is triggered and what kind of input it expects.

Within each event, you define:

  • The input structure (schema)
  • The parameters required for execution
  • The expected format of input data

To validate your tool, you create test cases (use cases). A test case is a concrete example of input data that can be executed against your tool.

You can define these using either:

  • A JSON editor for structured input
  • A form builder for a guided interface

Both approaches produce the same execution behavior. Test cases are not just for development - they also act as default examples when users execute your tool from the UI or API.
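
For illustration, a test case for a hypothetical summarization tool might look like the following. The field names are assumptions for this sketch; your event's input schema defines the actual structure.

```python
# A hypothetical test case for a tool that summarizes text.
# The field names are illustrative -- the event's input schema
# defines the structure Rival actually expects.
summarize_test_case = {
    "text": "Rival lets you build, test, and publish tools.",
    "max_sentences": 2,
    "language": "en",
}

# The same data entered through the form builder produces an
# identical payload, so both paths execute the same way.
```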

Running a test

Once your event and test cases are defined, testing becomes a simple execution flow.

You provide input values, trigger execution, and receive a response from the platform. The request is routed through the same CortexOne execution layer that will be used in production, so the results reflect real runtime behavior.

This allows you to validate both correctness and performance before publishing.

Understanding execution output

When a tool runs, the platform returns a structured output response.

This response includes:

  • The result produced by your function
  • Status codes indicating success or failure
  • Error messages (if execution fails)

Debugging is performed using this output.

Unlike traditional systems, execution logs do not store error traces. Instead, errors are returned directly in the output response. You can use the error message or status code to understand what went wrong and fix your logic accordingly.
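
As a minimal sketch, assuming the response carries a status, a result, and an optional error field (the exact field names depend on your tool's response shape), debugging from the output might look like this:

```python
# A minimal sketch of debugging from the output response.
# The field names ("status", "result", "error") are assumptions
# for illustration; check your tool's actual response shape.
response = {
    "status": 200,
    "result": {"summary": "Two-sentence summary of the input."},
    "error": None,
}

if response["status"] == 200:
    print("Success:", response["result"])
else:
    # Errors come back inline rather than in execution logs,
    # so this message is the primary debugging signal.
    print(f"Execution failed ({response['status']}): {response['error']}")
```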

Execution logs and usage tracking

Execution metadata is stored separately from output.

Rival tracks execution details under:

Top right navigation → Manage Billing → Usage & Consumption

This section stores:

  • Tool name
  • Version name
  • Tool type
  • Execution time
  • Duration
  • Cost (compute + function)

These logs are used for tracking usage and billing, not debugging.

Executing tools from the platform

Once a tool is published, it can be executed directly from its public page:

https://cortexone.rival.io/marketplace/{org}/{tool-name}

From this page, users can select a version, choose an event, provide inputs, and run the tool without writing any code.

This provides a quick way to test tools interactively or explore tools created by others.

Executing tools via API

Tools can also be executed programmatically using API tokens and endpoints. The API details can be found by opening the tool's details page and navigating to the Parameters & Usage tab.

See the API docs for more detail: https://docs.cortexone.rival.io/core-concepts/apis/

This allows you to integrate Rival tools into applications, workflows, or automation pipelines.
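
As a rough sketch, a programmatic execution could look like the following. The endpoint path, header name, and payload shape below are all assumptions for illustration; the actual contract is documented in the API docs and in your tool's Parameters & Usage tab.

```python
import requests

# All values below are illustrative placeholders -- the real endpoint,
# auth header, and payload shape come from the Parameters & Usage tab
# and https://docs.cortexone.rival.io/core-concepts/apis/
EXECUTE_URL = "https://cortexone.rival.io/api/execute/{org}/{tool-name}"  # hypothetical
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "event": "summarize",                          # hypothetical event name
    "input": {"text": "...", "max_sentences": 2},  # matches the event's schema
}

resp = requests.post(
    EXECUTE_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```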

Version selection during execution

When executing a tool, you can either use the latest version or explicitly select a specific version.

Using the latest version ensures you always get the most recent behavior. However, for production systems, it is recommended to use a fixed version so that your integration remains stable even when new versions are released.
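
As an example of the trade-off, assuming the execution payload accepts a version field (an assumed parameter name for this sketch, not the documented one):

```python
# Hypothetical payloads illustrating version selection.
# The "version" key is an assumed parameter name for this sketch.

# Floats to whatever version is newest -- convenient while iterating.
latest_payload = {"event": "summarize", "input": {"text": "..."}}

# Pinned to v3 -- a production integration keeps working even after
# new versions of the tool are published.
pinned_payload = {"event": "summarize", "version": "v3",
                  "input": {"text": "..."}}
```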

Credits and execution

Every execution consumes credits from your organization.

The cost depends on:

  • Tool pricing
  • Compute usage
  • Execution complexity

If your balance is zero or negative, execution will not proceed.

Best practices

Testing should be treated as part of development, not a final step.

Run multiple test cases to cover edge conditions, validate outputs carefully, and always perform a final test before publishing. Once your tool is live, monitor execution behavior and iterate through new versions instead of modifying existing ones.
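
One way to make this habitual is to keep a small suite of edge-case inputs and run all of them before each publish. A sketch, with run_tool standing in for whichever execution path you use (a test run in Code & Test, or the API call sketched above):

```python
# A sketch of a pre-publish check over several edge-case inputs.
edge_cases = [
    {"text": "", "max_sentences": 2},                # empty input
    {"text": "one short line", "max_sentences": 0},  # zero-length output
    {"text": "x" * 100_000, "max_sentences": 2},     # very large input
]

def run_tool(inputs: dict) -> dict:
    # Placeholder: substitute your real execution call here.
    return {"status": 200, "result": None, "error": None}

for case in edge_cases:
    out = run_tool(case)
    assert out["status"] == 200, f"Failed on {case}: {out['error']}"
```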

Next steps

After validating your tool:

  • Publish it to the marketplace
  • Execute it through API integrations
  • Monitor usage and costs
  • Improve it through version updates