Your model is only as impactful as your team’s ability to understand how it’s performing. Achieving that insight and trust often means running experiments such as scenario tests, ensuring that reports on critical KPIs are visible to all stakeholders, and being able to quickly answer questions about performance. While each of these pieces is crucial to successfully launching and maintaining an optimization project, it’s useful to have a clear starting point for understanding the model’s impact.
That journey can begin with a simple comparison of runs. Imagine: You’re on the decision science team responsible for routing, and you get a question over Slack from an operator, “Did the latest run of our routing model utilize fewer vehicles than the run we made yesterday?” That message is quickly followed by your modeler teammate asking, “Did we configure the same stop balancing constraint for those two runs?”
At first glance, answering these questions seems straightforward; we need to compare the configuration and output of the two runs. But where do you go to find those runs? Do you comb through logs or other messages? How tricky will it be to map the metadata between various tools? Will you need to recreate the runs to get the data you need to answer your teammates’ questions? Without a central platform for your decision work, finding all the data might be more time-consuming than you’d hoped.
With Nextmv, your runs (and all of their associated metrics and data) are accessible via the UI so you can make quick comparisons to answer simple questions, and then use that data to decide what you’d like to investigate further with experiments such as scenario testing or acceptance testing.
Selecting runs to compare
Whether you’re a developer, modeler, operator, or business user, you can easily compare the metrics and configuration of runs in the Nextmv UI. Navigate to Runs >> All, then check the box next to the runs you’d like to compare.

Note: You can also compare runs programmatically. Once you’ve created a comparison, you’ll see a URL like https://cloud.nextmv.io/acc/[your-acct-id]/app/[your-app-name]/runs/compare?ids=latest-a-q02TgENg,latest-BeH2TgPNg. Simply add run IDs to this URL to include them in the comparison.
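This makes it easy to build comparison links in a script, say, for posting to Slack or embedding in a dashboard. Here’s a minimal Python sketch, assuming the URL pattern shown above; the account ID, app name, and run IDs are placeholders you’d swap for your own:

```python
# Build a run comparison URL from a list of run IDs.
# Values below are placeholders; use your own account ID,
# app name, and run IDs.
ACCOUNT_ID = "your-acct-id"
APP_NAME = "your-app-name"
run_ids = ["latest-a-q02TgENg", "latest-BeH2TgPNg"]

compare_url = (
    f"https://cloud.nextmv.io/acc/{ACCOUNT_ID}/app/{APP_NAME}"
    f"/runs/compare?ids={','.join(run_ids)}"
)
print(compare_url)
```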
Comparing run metrics
Once you’ve selected the runs you’d like to compare, you can review use-case-specific metrics such as the maximum number of stops on a route, the maximum travel duration, or any custom statistics you’ve included. You can share the comparison with teammates via a direct link.
In the example below, we can see that the number of activated vehicles is the same, but the number of unplanned stops and the minimum number of stops differ between the two runs.

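If you prefer to inspect these numbers in code, you can diff the statistics of two run outputs saved locally as JSON (for example, downloaded from the UI). The sketch below assumes the Nextmv output convention of a top-level statistics object with custom metrics under statistics.result.custom; the file names and the specific metric keys in your outputs will depend on your model:

```python
import json

# Diff the custom statistics of two saved run outputs.
# File names are placeholders; the "statistics.result.custom"
# location follows Nextmv output conventions, but your model
# defines which metrics appear there.

def load_custom_stats(path: str) -> dict:
    with open(path) as f:
        output = json.load(f)
    return output["statistics"]["result"]["custom"]

stats_a = load_custom_stats("run_a_output.json")
stats_b = load_custom_stats("run_b_output.json")

# Print every metric side by side, flagging differences.
for key in sorted(set(stats_a) | set(stats_b)):
    a, b = stats_a.get(key), stats_b.get(key)
    marker = "==" if a == b else "!="
    print(f"{key}: {a} {marker} {b}")
```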
The compare runs feature serves as a stepping stone toward upcoming capabilities we previewed with Challenge Mode, which allows users to manually modify decision model output data and compare its statistics against a previous run. This is useful, for example, if you want to compare a solution produced by a decision model to a manually created solution. More to come here!
Comparing run configuration
To put performance metrics in context, the “Config” tab displays the configuration values of each run. Configuration can include penalties, constraints, run time, or anything you’ve defined as a configurable option.
In the example below, we see that the first run did not have clustering enabled while the second one did.

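As with metrics, you can diff configuration outside the UI as well. Here’s a short Python sketch, assuming each run’s output JSON records its configuration under a top-level options key (that key name, and option names like clustering, are illustrative; adjust to wherever your app stores its configuration):

```python
import json

# Compare the configuration of two saved run outputs, printing
# only the options that differ. The "options" key is an assumed
# location; your app may store configuration elsewhere.

def load_options(path: str) -> dict:
    with open(path) as f:
        return json.load(f).get("options", {})

opts_a = load_options("run_a_output.json")
opts_b = load_options("run_b_output.json")

for key in sorted(set(opts_a) | set(opts_b)):
    if opts_a.get(key) != opts_b.get(key):
        print(f"{key}: {opts_a.get(key)!r} -> {opts_b.get(key)!r}")
```

Printing only the differing options keeps the signal high when two runs share most of their configuration, which is the common case when answering questions like the stop balancing one above.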
Get started today
Create a free Nextmv account to see how your runs are stacking up against each other.
Try tracking your external runs to log the run history of a model running locally, and start using the compare runs feature today. Have questions? Reach out to our team, and we’ll be happy to help!