Your decision model for routing drivers is live in production, powering your real-time delivery operations. As you’re iterating on future model improvements, a chat notification pops up from a route manager on your team: “Hey, I noticed that the delivery routes in Brooklyn aren’t as balanced as they usually are. Some drivers are on routes that are nearly double the length of others’ routes.”
You stop everything and start your investigation, looking for a snapshot in time with all the details of the latest model runs. You respond, “Thanks for letting me know. I’ll look into it right now and try to reproduce the issue.”
And so begins a time-sensitive process: searching for and combining relevant data (potentially spread across several sources and in varying formats) to reproduce what the route manager observed. Let’s look at how these events unfold and how Nextmv simplifies the process, drastically shortening your investigation timeline.
Troubleshooting operational issues
When operational issues impact end users, it’s often all hands on deck to investigate and resolve them quickly. What does that process look like? At a high level, it usually starts with a concern noticed in operations by a route planner, schedule manager, finance analyst, or other operator.
Operational issues for a routing problem might sound like this:
- I’m putting the schedule together for tomorrow. Why are we using fewer drivers?
- Why aren't the routes as compact as before? The drivers won't be happy with these new routes.
- I'm hearing complaints from customers about longer delivery times. What's changed? How can we address it?
Depending on how you’re running your optimization model, the data required to reproduce results may be tricky to find and time-consuming to glue together. Some pieces (such as stop locations) may live as tabular data in an upstream database while others (such as optimization settings) might sit in a cloud provider’s logs. Whether it takes hours or minutes to locate the data you need, you’ll likely end up asking these types of questions (a quick scripted check, like the sketch after this list, can help answer the first):
- Is there something strange in the input?
- Were there model or solver updates?
- Did something change with the configuration?
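When your model’s input is JSON, as it is for Nextmv routing models, a short script can surface obvious input problems. This is a minimal sketch assuming a hypothetical input shape with top-level `stops` and `vehicles` arrays; the file name and field names are placeholders to adapt to your schema.

```python
import json

# Load the input captured for the suspect run (hypothetical file name).
with open("run_input.json") as f:
    data = json.load(f)

stops = data.get("stops", [])
vehicles = data.get("vehicles", [])
print(f"{len(stops)} stops, {len(vehicles)} vehicles")

# Flag obviously malformed records, e.g. stops missing coordinates.
for stop in stops:
    location = stop.get("location", {})
    if "lat" not in location or "lon" not in location:
        print(f"stop {stop.get('id', '?')} is missing coordinates")
```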
After you’ve identified the pieces you need to replay the model, you run it and continue your investigation:
- Did I get the same or a similar result (depending on the solving paradigm)?
- Did I get the expected results when updating/correcting the suspected issue?
The answers to those questions will lead you to your next step, whether that’s continuing to try to reproduce the issue or testing and implementing a fix.
Every investigation has nuances, but at the core of each one is a need for transparency and access to relevant data. Nextmv solves this with increased model observability and access to run details including the input and configuration used, the output produced, metadata, and logs. It’s all captured in one place with deep links for easy sharing and one-click replays to reproduce results. Let’s take a look at an example investigation.
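Because a run’s full context lives in one record, it’s retrievable programmatically as well as in the console. Here’s a minimal sketch of pulling a run’s details over HTTP; the endpoint path, IDs, and response field names are illustrative assumptions, not a documented API contract, so consult the Nextmv API docs for the real shape.

```python
import os

import requests

# Illustrative endpoint and response shape only; IDs are placeholders.
APP_ID = "farm-share-routing"
RUN_ID = "run-delivery-routes-feb-19"
url = f"https://api.cloud.nextmv.io/v1/applications/{APP_ID}/runs/{RUN_ID}"

response = requests.get(
    url,
    headers={"Authorization": f"Bearer {os.environ['NEXTMV_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()
run = response.json()

# One record holds everything needed to reproduce the run.
print(run["metadata"]["status"])  # e.g. "succeeded" (assumed field name)
print(run["options"])             # configuration applied to the run
```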
Example: Investigating a routing issue with Nextmv
Continuing the delivery routing narrative above, let’s imagine that we work together at a fictional farm share company that delivers produce from local farms to customers’ homes. The routing model runs once a day to plan routes for the following day’s deliveries. Our route managers then review the routes and coordinate with drivers to ensure every customer gets their order on time.
A route manager has noticed that there are a large number of unplanned stops for tomorrow. On occasion, there are one or two unplanned stops that the route manager manually assigns, but in the route plan for tomorrow, there are 12 unplanned stops.
The route manager pulls up their Nextmv account and opens the routing model’s run history to identify the problematic run.

It’s easy for the route manager to spot the run named “Delivery Routes February 19.” They share a direct link to the run with the decision science team.

A decision scientist on the team clicks on the run, which pulls up the run details showing that it succeeded and that the resulting plan has 12 unplanned stops. Before they dig in, they clone the run to see if they can reproduce the results. By default, the cloned run uses the same input, configuration, and model version as the original run.

And the results are in: 12 unplanned stops, just as the route manager observed.
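With the cloned run’s output downloaded as JSON, confirming that count takes a few lines. This sketch assumes an output shape like Nextmv’s routing output, where each solution carries an `unplanned` list; verify the field names against your own output.

```python
import json

# Hypothetical file name for the cloned run's downloaded output.
with open("cloned_run_output.json") as f:
    output = json.load(f)

# Routing output typically nests one or more solutions; take the last
# (often the best). Field names are assumptions to check against yours.
solution = output["solutions"][-1]
unplanned = solution.get("unplanned", [])

print(f"{len(unplanned)} unplanned stops")
for stop in unplanned:
    print(f"  - {stop['id']}")
```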

On the results page, they can see the unplanned stops marked on the map as concentric red circles.

Now they want to understand why the model made this plan. The decision scientist looks at the run details, including the input, output, metadata, and logs. First, they take a look at the input data used for the run. They don’t find anything strange there: no addresses are out of the normal radius, the number of available vehicles is up-to-date, and the default configuration for stops is correct.
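Those checks are easy to script against the captured input. Below is a sketch of the radius check using the haversine distance; the depot coordinates, radius threshold, and input field names are all assumptions for the example.

```python
import json
import math

DEPOT = (40.6782, -73.9442)  # hypothetical Brooklyn depot location
MAX_RADIUS_KM = 25.0         # hypothetical "normal" service radius

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

with open("run_input.json") as f:
    data = json.load(f)

for stop in data["stops"]:
    point = (stop["location"]["lat"], stop["location"]["lon"])
    if haversine_km(DEPOT, point) > MAX_RADIUS_KM:
        print(f"stop {stop['id']} is outside the normal radius")

print(f"{len(data['vehicles'])} vehicles available")
```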

Next, they inspect the run details more closely. They see that a clustering constraint was applied in the run’s configuration. This constraint is likely responsible for the unplanned stops.
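In the recorded configuration, the tell might look something like the snippet below. This is a hypothetical rendering of the run options as a Python dict; the actual option names in your model will differ.

```python
# Options recorded with the original run (hypothetical names and values).
original_options = {
    "solve.duration": "30s",
    "constraints.enable_cluster": True,  # <- the suspect setting
}

# The experiment: same input and model version, constraint disabled.
experiment_options = dict(original_options)
experiment_options["constraints.enable_cluster"] = False
```

Cloning a run in the console amounts to the same thing: the original options carry forward, and you edit only the one you suspect.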

Now that they’ve identified what might be causing the higher number of unplanned stops, they want to see if removing the constraint yields the expected result (fewer unplanned stops) for the route manager. They clone the “Delivery Routes February 19” run (using the same input and model instance), remove the clustering constraint, and run it again.
Aha! All the stops are now planned! They’ll give this plan to the route manager for tomorrow.

The route manager is relieved to have the routes ready for tomorrow, but they’re interested in using the clustering constraint in the future to make the routes more compact. The decision science team says they’ll perform a few scenario tests to understand the impact on KPIs such as the number of unplanned stops and the number of activated vehicles.
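A scenario test here is essentially a small parameter sweep: run the same input under each candidate configuration and compare the KPIs side by side. The sketch below outlines that loop; `run_model` is a hypothetical stand-in for submitting a run (to your model binary or the Nextmv API) and returning a solution shaped like the output above.

```python
import json

def run_model(input_data, options):
    """Hypothetical stand-in: submit a run with these options and return
    its solution dict. In practice this calls your model or the API."""
    raise NotImplementedError

with open("run_input.json") as f:
    input_data = json.load(f)

scenarios = {
    "clustering off": {"constraints.enable_cluster": False},
    "clustering on": {"constraints.enable_cluster": True},
}

for name, options in scenarios.items():
    solution = run_model(input_data, options)
    unplanned = len(solution.get("unplanned", []))
    vehicles_used = sum(1 for v in solution.get("vehicles", []) if v.get("route"))
    print(f"{name}: {unplanned} unplanned stops, {vehicles_used} vehicles used")
```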
How to get started with run reproducibility
When operational issues arise, streamline the triaging and troubleshooting process with Nextmv. Find the information you need in one place to reproduce issues and identify root causes faster. Sign up for a Nextmv account and reach out with questions.