Simulate “what if” questions for decision models with scenario testing and Nextmv

What if order volume increases 4x? What if I changed shift length? What’s the best model formulation? Efficiently play out different scenarios under realistic conditions before committing to a plan using Nextmv’s scenario testing capabilities.

Scenario testing for decision models is now a first-class experience in the Nextmv platform. Scenario testing is an effective way to play out different situations under realistic conditions using historical or synthetic input data before committing to a decision. In other words, you can simulate “what if” situations before they happen.

This new experience in the Nextmv platform streamlines all types of scenario tests — whether you’re varying one model parameter like fleet size or varying multiple model parameters like shift duration and recovery time. Scenario tests are configured through Nextmv’s user-friendly UI, runs are executed on scalable remote infrastructure, data instances are easily managed, and results are analyzed using a common, shareable format. 

Scenario testing in Nextmv replaces the effort spent writing and managing scripts and notebooks, figuring out testing infra, and manually summarizing results. See how to put scenario testing to the…test…by exploring some examples below 👇👇👇👇 that simulate “what-if” questions for vehicle routing, shift scheduling, and model formulation for price optimization.

Scenario testing with one parameter: A VRP fleet size example

Let’s imagine you’re an algorithm developer at The Farm Share Company, a hypothetical business that delivers farm fresh goods to customers’ homes. Your company operates in multiple cities in the United States. The business team anticipates 4x growth in order volume in New York City (NYC) in the coming year. You currently operate a fleet of 5 vans. How much should you increase your fleet size to meet demand?

You can run an input-driven scenario test in Nextmv to quickly figure out how much your fleet size would need to grow in order to satisfy projected orders (i.e., no unassigned stops). You simply create an input set with 11 data files, each containing the projected 400 stops and a fleet size ranging from 5 to 15 vehicles. From there, you configure and run your scenario test and see that you need between 11 and 12 vehicles to meet projected demand.
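As a rough illustration, here’s a minimal Python sketch of how you might generate such an input set, assuming a routing input with `vehicles` and `stops` lists. The field names and the `generate_stops` helper are hypothetical placeholders for your own data, not Nextmv’s exact schema:

```python
import json
import random

def generate_stops(count):
    # Hypothetical helper: produce `count` projected stops with coordinates
    # roughly within NYC. Replace with your own demand projection data.
    return [
        {
            "id": f"stop-{i}",
            "location": {
                "lat": 40.7 + random.uniform(-0.1, 0.1),
                "lon": -74.0 + random.uniform(-0.1, 0.1),
            },
        }
        for i in range(count)
    ]

stops = generate_stops(400)  # projected 4x order volume

# One input file per fleet size, from 5 to 15 vehicles (11 files total).
for fleet_size in range(5, 16):
    data = {
        "vehicles": [{"id": f"van-{v}"} for v in range(fleet_size)],
        "stops": stops,
    }
    with open(f"input_fleet_{fleet_size}.json", "w") as f:
        json.dump(data, f)
```

Each generated file becomes one entry in the input set, so the scenario test varies exactly one parameter: fleet size.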

The resulting pivot table allows us to view the average time on road per scenario (“Totals”), further split out by unassigned stops and the number of activated vehicles. It also prompts further exploration. In this example, we ran the scenarios just once. With Nextmv, you can configure your test to repeat scenarios multiple times, which is especially useful when the underlying solver produces non-deterministic results. Repeating scenarios could surface more insight into the required 11-12 vehicle count by exposing result variability. Perhaps the business only has the budget to operate 10 vans: what adjustments can you make? Or what if you’re looking to go all electric? In which scenarios do you still achieve the desired business KPIs? Simple input set creation that varies one parameter is one way to get started with scenario testing in Nextmv.

Scenario testing with multiple parameters: A shift scheduling example

At The Farm Share Company, drivers get to choose their shifts. Recently, operators have noticed that drivers aren’t picking up as many of the 8-hour shifts. Driver feedback indicates they'd prefer shifts in 4-hour blocks instead. What if you offered these shorter shifts?

You know that the existing shift scheduling algorithm minimizes both oversupply (too many drivers) and undersupply (not enough drivers). When exploring shorter shifts, you’re most concerned about undersupply, so you’ll look to vary the undersupply penalty in the model. You also care about minimum recovery times (i.e., breaks) between shifts and want to understand how all three of these factors interact.

You create a scenario test using historical data with the following options (a sketch of the resulting scenario grid follows the list):

  • Maximum shift duration: 4 hours and 8 hours
  • Minimum recovery time between shifts: 0 hours and 2 hours
  • Undersupply penalty: 0, 1000, and 2000
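These three options combine into 2 × 2 × 3 = 12 scenarios. The Python snippet below sketches that grid; the option names are illustrative, not Nextmv’s exact configuration keys:

```python
from itertools import product

shift_durations = [4, 8]                 # maximum shift duration, hours
recovery_times = [0, 2]                  # minimum recovery between shifts, hours
undersupply_penalties = [0, 1000, 2000]  # penalty weight for undersupply

# Enumerate every combination of the three options as one scenario each.
scenarios = [
    {
        "max_shift_duration_hours": duration,
        "min_recovery_hours": recovery,
        "undersupply_penalty": penalty,
    }
    for duration, recovery, penalty in product(
        shift_durations, recovery_times, undersupply_penalties
    )
]

print(len(scenarios))  # 12 scenario combinations to run against historical data
```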

Here are the results of the scenario test displayed in a heatmap table. The results are the average solution value (in this case, the solution value includes oversupply and undersupply penalties). We’ve kept the oversupply penalty constant and varied undersupply penalty, recovery time, and shift duration. Higher solution values (darker red cells) correspond to more undersupply, which is less desirable.

Here are some observations: 

  • As expected, 4-hour shifts and 8-hour shifts perform equally well with an undersupply penalty of 0.
  • Not surprisingly, 4-hour shifts generally perform worse than 8-hour shifts when penalizing undersupply.
  • More interestingly, while 2-hour breaks did not affect solutions with 8-hour shifts, they did have a negative effect when using 4-hour shifts.

So what’s next? If you want to move forward with 4-hour shifts, you’ll need to ensure more drivers are available to meet demand and avoid undersupply. To understand how many drivers you’ll need, you can run another test similar to the one in the first example of this post. You may also want to explore how tuning break times (e.g., using 30-minute increments) impacts undersupply. Nextmv provides a more systematic way of testing these different multi-parameter scenarios.

Scenario testing model formulation: A price optimization example

Now, to the Department of Avocados! You’re responsible for a decision model that sets the price of avocados while maximizing revenue for The Farm Share Company. The model accounts for factors such as expected sales, transportation costs, waste costs, etc. You’ve noticed the model can sometimes be overly optimistic in projected revenue. What if you wanted to account for more realism in the model?

You decide to handle this uncertainty by adding historical variance into the pricing optimization model. Essentially, you sample errors from your price-demand elasticity curve, and add them directly to your model formulation. This helps the solver understand the risks of different options.
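Conceptually, the sampling step might look like the sketch below, assuming you have historical residuals from a fitted price-demand elasticity curve. The variable names, the constant-elasticity form, and the specific residual values are illustrative assumptions, not the exact formulation from the tech talk referenced later:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical residuals between observed demand and the fitted
# price-demand elasticity curve, e.g. computed from historical sales.
residuals = np.array([-120.0, 35.0, 80.0, -15.0, 60.0, -40.0])

def demand_scenarios(price, elasticity, base_demand, reference_price, n_samples):
    """Sample demand outcomes for a candidate price by perturbing the fitted
    constant-elasticity curve with historical error samples."""
    expected = base_demand * (price / reference_price) ** elasticity
    errors = rng.choice(residuals, size=n_samples, replace=True)
    return expected + errors

# Each sampled scenario adds variables and constraints to the pricing model,
# so more samples means a larger, slower-to-solve model.
samples = demand_scenarios(
    price=1.50, elasticity=-1.2, base_demand=1000, reference_price=1.00, n_samples=10
)
print(samples)
```

Each sampled demand outcome becomes an additional term in the revenue objective, which is exactly what creates the size-versus-solve-time tradeoff discussed next.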

The tradeoff, however, is that the more error samples you add, the bigger the model gets and the harder it becomes to solve. Given a 1-minute maximum solve time requirement, how many samples should you inject into your model formulation? Setting up this model formulation scenario test uses configuration options much like the shift scheduling example above.

To see this full example play out, check out this tech talk from Ryan O’Neil using a Gurobipy price optimization example. While the video predates the availability of this scenario testing user experience, it demonstrates the underlying mechanics and flow to solving this model formulation problem. Spoiler: Ryan does find the right number of samples to inject, and it saves The Farm Share Company more than $250,000!

Getting started with simulating “what-if” scenarios and Nextmv

This scenario testing flow in Nextmv simplifies and accelerates what-if experimentation to avoid costly mistakes and generate more business revenue. This form of testing (along with other historical and online experiments) can be done on the same infrastructure you use to deploy and run models — no context switching, no need to manage a bunch of local resources, and no extra requests to engineering to support your testing efforts. It all just works. And when it comes time to communicate the results to other stakeholders, all you have to do is share a link to collaborate from the same reference point. Everyone also has access to experiment details to understand how it was set up and explore the results in an interactive way — accelerating buy-in for the best case scenario ahead. 😉

To get started with scenario testing in Nextmv, all you need to do is create a free Nextmv account and follow along in the scenario testing documentation. If you’d like a guided walkthrough, consider joining our next Getting Started workshop or request a meeting with our team.
