Balancing optimization project objectives like good, fast, and cost-effective often rests on a team’s ability to be agile. Carolyn Mooney, Nextmv co-founder and CEO, held a panel discussion with industry leaders at Grubhub, Ikea, and Aimpoint Digital to explore what agility looks like in the decision intelligence space and how to put it into practice.
The following interview is a companion to a longer, related conversation and has been edited for length and clarity.
Carolyn Mooney: We hear a lot in our space today that we have to think about what we run and how we run a solution. It might be common to talk about a modeling framework or a solver, and leave it at that, but the how part is where we make things successful. It has a lot to do with the teams we work with. I’m curious, where do you see decision science sitting within your organizations and what teams do you interface with today?
Signy Whitt: Right now decision science has found its home deep in engineering more than anywhere else. It's sitting within the org but partnering very closely with our operations team. We've had the strongest, most mature models in the food delivery space. It lives in engineering because of the work involved and the understanding around what's needed to get these things up and running. But the people we're working with day to day are often our operational partners, who are boots-on-the-ground and trying to make sure everything is running well.
Carolyn: You're typically interfacing with not only the software team, but your business organization itself. Do you feel like there have been challenges in the pacing for decision science or how quickly you can get models live and into the hands of your end users?
Signy: I think the startup cost to get to a real model that you can scale can be high. You're often competing a bit with what people on your analytics or finance teams might be able to do with Excel and quick analytical models. They do get there faster for that first stage, but they can't take it further. That's the challenge, to help people understand that. It's exciting to get some quick answers, and it was probably good enough for what you needed right then. I do work with my teams to understand that that’s also good; we need to appreciate it. But from a business investment perspective, we need to be thinking about where it's headed. At some point, you're going to need to take somebody out of it because they're not going to be able to maintain it. And what is that going to look like?
Alison Cozad: I'll build on Signy's point. At Exxon, Gurobi, and Aimpoint, we see cases where someone's built an Excel spreadsheet or intro Python code that got part of the way. They got part of the solution. Maybe they weren't able to add the complexity for this product (you can make it or you can buy it), or maybe they weren't able to add the complexity of, “How do I do this over time?” The moment that added complexity comes in, you need something more sophisticated. Bringing on an optimization modeler can really help take it to that next level. We love when there's this starting point because there's so many things we can eke out of that. We know how they look at the data, what variables are important to them, and what little extra analyses they might put on. So we tuck those away in the back of our minds, because we're going to want to add something similar on the back end of the optimizer so it has the same look and feel and conveys the same information we saw in the intro prototype. I love working with those users who say, "I got this so far, I understand the problem, but I need a little bit of help to take it to the next level."
Carolyn: It seems like prototyping is common in that success pattern. We frequently interface with software teams that talk about agile processes and an MVP. What does agility look like for our space as we're interfacing with those teams? What does that path to production look like? How do you avoid a waterfall-like pattern? I know a lot of executives get concerned about that when they start hearing about all the requirements gathering up front.
Victoria Guerrero: To be agile is to move without getting stuck, which is great, even if we move slowly. It's easy in decision science to get stuck on defining the problem, on not understanding each other, or a lack of requirements because it's not easy to get everything from stakeholders. Being agile means communicating properly not only with stakeholders, but with engineers, product, and business, so everyone has to be involved in the conversation. It's important to always have the right people in the room.
Prototyping is the best way of moving forward. You can do something, even if it's super hacky. We sometimes do things in the most hacky way you can imagine, but it's one week or a few days and then you show it. It doesn't matter what it looks like. Does it provide value: Yes or no? If not, it was just a few days lost; if yes, you move to the next stage. That's why it's important to bring in the stakeholders as soon as possible with MVPs and align on expectations. “Can we deploy something that is missing some features? Are all those features really relevant for the first version?” It's easy to get into a loop of wanting to do everything perfectly from the first day, but that takes forever. Then you go to a stakeholder after a few months and they tell you "No, this is not working because this is not how we do things." That's what it means to me to be agile.
Carolyn: You have to get information about the success of the project as early as possible. When you're talking about a project in abstract, it's hard to get feedback from stakeholders about the value. If you put something in their hands, even if it’s a prototype, you can start to understand what data or rules are missing.
Signy, what does agile mean to you?
Signy: I didn't come to this role as a software engineer, so I did have to spend a lot of time understanding agile principles, what we're doing, sprints, etc. I think about, what do software engineers mean when they say “agile” versus what do I mean when I say “agile”? It comes down to this: You will be iterating. That is very different from when I first started dabbling in OR in school. “Oh, you can use this to create a solution,” and then you hand people the solution and then you're done. Being prepared for that iteration is going to be important, especially when you sit in an engineering organization. That is how all DevOps thinking has been oriented. It’s not, “Plan it in advance and build it”; it’s, “Take some steps and then figure out what the next step might be.” It's a total mindset shift from what you learn when you're studying OR – at least it was 15, 20 years ago – that we have to be able to iterate. I've found that stakeholders don't quite understand that because they want you to give them a solution. Not only will you be unable to give them a perfect solution to begin with, but you’ll need to help them understand that you will not be giving them a perfect solution at all. You will be iterating together. Then you can have a good agreement on what that iteration is going to look like and what the deliverables are going to be.
Carolyn: That 80% solution doesn't cover all the use cases, but it should still speed up your organization somehow. It should still be live in the sense that it’s part of the business process. It can just be part of a business process that informs an operator doing the work manually, and that’s OK. An augmented decision support system is still valuable, especially in our space where we cover anything from strategic to operational decisions.
Signy: Victoria was talking about this with prototyping. Gathering requirements is helpful for a prototype, but it reminds me of someone I worked with who could not react to something until I put it in her hands. There are a lot of people who will work that way, so getting a prototype is an important step in getting feedback. Even if it's the wrong thing, you're pushing it forward a little bit because you're giving them something to react to. They can’t respond to an idea you have in your head.
Carolyn: Learning what it's not is just as important as learning what it is.
Alison: There's a quote I'm sure everyone has heard: "All models are wrong, but some are useful." That's just as true in optimization as it is in more traditional regression models. To start that iteration process, you need to know what to iterate on. For me, agility is how quickly you can detect when and where you're wrong. Going into it assuming you're wrong and using every tool you have to figure out where you're wrong. You've got a ton of tools in front of you. A big one is your end user. As Victoria said, build your prototype and get something in front of them fast, even if it's a simple model. They can poke around and ask, “What is that? This is not how my world works.”
Whether that end user is your biggest critic or your biggest champion, they're the person who can point out where the thing is wrong. Leverage the stakeholders. These are the people that the end user has to justify their decision to. This is never anyone's favorite part of the project, but get some early test cases. Catching those confidently wrong test cases is a cheap way to avoid painful outcomes. Going in knowing your first model is wrong is OK. Embrace that. Think of agility as finding out where you're wrong the fastest.
Victoria: This feedback from the stakeholders, saying something is wrong, is part of the process in gathering requirements; the requirements that everyone has in the movie in their head, but they don't explain the whole movie. There's a little bit of interaction at the end and nobody tells you what's in between. The in-between part comes from the interaction with the stakeholder and the users. When they tell you something is important you need to ask why. Go to the reason why it's wrong, not just in terms of a bug in the code on the engineering or decision side but also in the logic. Many times we only identify a problem in the logic because something triggered an alert somewhere in a dashboard nobody's even looking at every day, but it's there. So it's also important to have continuous monitoring to identify issues and to fix them as soon as possible.
Carolyn: What does agility look like with DecisionOps infrastructure? What pieces need to be in place? We've talked about prototyping and test cases and putting something in the hands of users. What else comes to mind?
Alison: One that is my personal favorite: The ability to answer questions in the same meeting using a scenario analysis tool. Nothing feels better than someone asking a really critical question and then saying, "Well, let me just test this in the optimizer, run it again, and look at that scenario." Your alternative is: “Let me think about that. I'll run some analysis. Let's schedule a meeting. Let's revisit in a week when we can all chat again, and we'll have your answer.” By then, you'll have lost momentum. You'll have to remind people where you started and where you are. Sometimes having that ability to run or compare scenarios is the difference between something being answered in a couple of minutes versus several weeks. With that extra time, instead of doing that analysis and trying to re-explain that, you're progressing the model, adding a new feature, or putting in a new automated scenario to be able to answer that question proactively. End users want that responsiveness. Developers don't want to have to build that every time. The pre-built structure is important.
Carolyn: This is similar to unit tests for catching edge cases in your decisions. The data and the testing process look a little different.
Alison: If you're being asked the same type of question two times, it should be its own automated scenario. That should be the threshold.
Signy: Alison, you mentioned when you start losing momentum. That's one of the things that can really derail a project and make it difficult, and it's also where acknowledging DecisionOps and these tools is important. The alternative is someone who's coding it up in a notebook. What they're not going to tell their stakeholders is that they had five bugs in their notebook. Now they have to resolve that before they can start answering the question. Then you really start to lose that momentum, and it happens a lot more often than you realize. It's easy to build these things, but maintaining them and keeping them ready is time consuming. You need that tool, you need something that stakeholders can interact with, and you need to do that efficiently so you don't lose momentum in the project.
Carolyn: If you hand something to an end user, you're requiring them to have all those installs: Python, modeling tools, solvers, etc. Those kinds of things can be problematic depending on technical fluency. I've made the mistake of pulling up a terminal versus a visualization. The delta there is drastic. Victoria, can you talk about how you utilize tooling in your day to day?
Victoria: When I joined my current team almost four years ago, I was surprised by the work they had done. Even though it was in the first stage, the first thing the team thought about was the infrastructure. And then the decision science part of the product is the small part, in between the other engineering parts. It’s like an opera: You need the musicians, the instruments, the opera house, and the director. The director is the product owner who allows everyone to do what they do best. When we're allowed to do that, then we can focus on one small part of the problem and take as much as possible from the tools we have.
When I joined this team as a data scientist, I entered into what we call the "solver part", the decision part. I didn't know anything about anything else, but I was able to contribute on day one because the infrastructure was there, and we had all the engineering parts and services we needed split into what's meaningful to the project. For me, the most important part (that makes our jobs easier) is benchmarking. If you have a benchmark, you have the best gift you can have in OR. Anything you do, you can compare against that benchmark. If you're replacing a previous algorithm, you can run in shadow mode and compare solutions 1:1 between the previous algorithm and the new one. And then you gather all of this. You have a nice dashboard and you show it to the stakeholders. This builds trust, because before they do anything in production, they can already see this is working. Collaboration is also important. You can have a digital twin where you compare different versions of your model, and then you can use ad-hoc analysis. There are many ways to see if what you're doing makes sense.
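As a rough, illustrative sketch of that shadow-mode idea, assuming a simple 1:1 KPI comparison: the solver functions, instance data, and KPI names below are hypothetical stand-ins, not the team's actual setup.

```python
# Illustrative sketch only: shadow-mode benchmarking that runs the incumbent and
# candidate algorithms on the same inputs and compares KPIs 1:1 before anything
# ships. The solver functions and KPI names are hypothetical stand-ins.

def solve_current(instance: dict) -> dict:
    # Stand-in for the algorithm already running in production.
    return {"cost": instance["base_cost"], "late_deliveries": instance["late"]}

def solve_candidate(instance: dict) -> dict:
    # Stand-in for the new algorithm being evaluated in shadow mode.
    return {"cost": instance["base_cost"] * 0.95,
            "late_deliveries": max(instance["late"] - 1, 0)}

def shadow_compare(instances: list) -> list:
    # One row per instance: the per-KPI deltas that end up on the comparison dashboard.
    rows = []
    for inst in instances:
        current, candidate = solve_current(inst), solve_candidate(inst)
        rows.append({
            "instance": inst["id"],
            "cost_delta": round(candidate["cost"] - current["cost"], 2),
            "late_delta": candidate["late_deliveries"] - current["late_deliveries"],
        })
    return rows

if __name__ == "__main__":
    instances = [
        {"id": "mon-morning", "base_cost": 1200.0, "late": 4},
        {"id": "fri-evening", "base_cost": 1850.0, "late": 7},
    ]
    for row in shadow_compare(instances):
        print(row)
```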
Carolyn: I definitely saw that when I was at Lockheed Martin, in a prior life, working on simulation where all the software stuff was done for me already. I got to just focus on the simulation tool itself and what was happening there. That was really helpful because you had to have deep knowledge of the system that was being simulated, and I had to learn all of that. Looking back, I have a lot of appreciation for the folks that built that tool, not knowing that it wasn't the same everywhere.
The other thing was that we had the same visualizations. Something that I key off of a lot is not making our stakeholders relearn the data every single time, like having a dashboard. You want those dashboards to be something that they recognize, and they know what they're looking at in terms of data exploration. That's something I always think about with decision models. Being able to compare visualizations and understand what the metrics are, but also codifying the metrics. Making sure the KPIs are the same. They have a definition and everyone understands that definition.
Signy: Right now I'm on a mission. A lot of people think they need predictive solutions, but what they really need is a decision. One of the things we need to do to keep that going is helping them benchmark their current state. If you don't have a benchmark of their current state, when you come in with a model, they're only going to see more of their problems and they're going to think all of those problems are because of your model. Suddenly you're on your back foot trying to defend the fact that those problems already existed; it's just that now you can see them. That's something I'm really pushing: We've benchmarked the current state, we're talking to stakeholders, they understand it, and we're speaking the same language. When you put in that first prototype, you're showing them how it's improving their current state, not showing them a prototype that gives them visibility into other problems.
Alison: Cannot agree more. There's the case where the current benchmark isn't making as much money as the optimization; that's an easy one to show. If they have a completely different basis for the benchmarking, or they allow a little more flexibility in how much of the inventory you can use, that's a good thing to understand and learn. Make sure you're handling it similarly, or making a conscious choice to do it differently, in the optimization model.
Carolyn: There are implicit decisions that come out of the woodwork. The operators say they're OK if a driver is 5 minutes late or if you reassigned in a different way, but the algorithm is the algorithm. It's code. It doesn't know it has that flexibility unless you tell it.
What other best practices do you see? What's critical to a solo developer? What’s critical to a large team?
Alison: Making sure from the get-go that when you're starting to show your prototype or future models to the end user, you've put some thought into how they're going to absorb that information, whether that's visuals or having a couple of pre-canned scenarios. They need to be able to look at it through their lens before they can even begin to give you feedback. Sometimes we treat adding that layer as the last step in the project. We hand off this giant Excel spreadsheet to the customer and say, “Does this look right?” But it's a huge matrix, and they don't know what to do with this information.
For a big project, even early on, it's not enough to just build a clever thing. Your users have to understand it. If they don't understand it, they don't trust it. And if they don't trust it, then they can't be sure of the solution because they're ultimately on the line to make the right decision. So what do they do? Blindly trust it even though they don't understand it? Do they ignore it completely and not provide you any feedback? That’s how models get dusty and set aside. I saw this at Exxon Mobil when I worked on a project where clever users would pinch in limits until the solution made sense to them. They would pinch in temperature limits to match their intuition; maybe there was a feasible region they couldn't get to, so all they had was a region to pinch in, and that leaves a lot of good space unused. And yes, money was left on the table, but the interesting thing is that that wasn't always the case. Sometimes their intuition was pointing out a flaw, something the model missed. You only discover those things once you give them enough tools and information to challenge the model. You've got to give the user enough information that they can disagree with the model. And if they do, great. You get to update it and iterate. You don't get stuck, you get to improve.
I mentioned earlier that agility is detecting when you're wrong early on. A lot of that is through explanation, visualization, and comparing to the benchmark. That's a really important thing that's not always obvious right out of school.
Signy: That stakeholder component that you're talking about, Alison, is crucial. One of the things I keep thinking about is understanding what your user or stakeholder is going to want to have control over, whether you think they need it or not. Don't try to take it away from them. Help them let go of it.
Carolyn: I talk to prospects who are hesitant to hand over more control, but we have to understand that most of these operators are on the hook for the profit, loss, cost, or on-time metric coming out of your model. They're the ones who are going to get a call: "Hey, this was messed up today." We might not always get that information. Having levers that give them control or agency in the decision making is huge. They may not know the code under the hood, but they will quickly pick up when something isn't quite right. They know exactly what those routes should look like.
Victoria: To follow up, whatever decisions our models make, they have impacts and implications in the real world. If we mess up today, stakeholders will have fires to put out. As a quick example, we had a final test going into production with a product. The moment we pressed the button, an operator from that country said, “That's not possible. We cannot make that delivery.” They know. They can look at those numbers and say, “This is not possible.” So it's very important that they are always in the loop and that the right people are always involved, because OR is a way of thinking, but the operators are the ones who can really explain to the stakeholders what the tradeoffs will be if we increase X.
It's also important to not get into solutioning very early because then we lose track of the real problem. This has happened to me. I just want to go into solution mode. And then I have to say, “Wait a second. Listen to the stakeholders and try to filter out what is really a requirement and what they are telling you is a requirement because they did a trick somewhere that's fixing an underlying problem that they have.” That's not a requirement. That's a problem that we need to address in a different way.
Signy: Victoria, how do you keep yourself from solutioning too early?
Victoria: I'm getting better at it. I still get into that mode, but I try to focus on understanding what we are discussing first. Understand that later on, you're going to be figuring out how to model things better. It's difficult, but it's something you need to think about when you are in a discussion. Just listen. Try to understand first.
Carolyn: It's hard not to pattern match. I try to talk to people about what their ideal world looks like, so you're not assuming the solution. You get an idea of what the value is too. You'll get, "I don't want to spend 20 hours a week doing this manually in a spreadsheet. I want there to be fewer errors." You get some of the information that's not the decision itself, and it's not the ROI of the decision being made. It's the external ROI, the org-level ROI.
Victoria: The more you listen, the easier it gets. You start seeing the connections of what you've done previously instead of trying to figure out how you would solve your next problem. It takes time.
Alison: Signy, can you walk me through how you approach someone who doesn't already have a strict algorithm? How would you go through a review of the current benchmarking process for that?
Signy: I can't say that we're good at this yet because it's something I'm realizing we need to do a lot more of. It gets back to listening and really seeing what is the problem we're trying to solve. It's so easy to come into this when you see so much potential in these solutions and tools and techniques. But stopping and asking, "What problem do you have? And how do you know when it's working or not working? Let's agree on how we're all going to understand that, and then we can measure.” Writing it down is key.
Carolyn: More often than not, I’ve seen some masterpiece Excel spreadsheets that I could not even begin to create myself. I can do a Python script, but I could never do some of the things I've seen in Excel to the level that they're done. It’s an interesting place to start because the vast majority of business stakeholders I've worked with have a very intricate way of looking at data. Sometimes it's just getting through that process with them even if you don't have a model basis. It’s a good way to get that baseline because you can at least get the KPIs and understand how to calculate them on the plan you're creating.
Alison: It's not like you're just putting in a tool where there was nothing before. They're already making these decisions somehow. Maybe it's a glorious Excel spreadsheet or maybe not, but that's the cool thing about optimization and decision science. They've already got their strategy and you can tease some information out of that.
Victoria: Alison, you said before that you went from being a solo modeler and developer to working in a proper team. Often people who have just finished their PhD are used to working in isolation for a long time, taking care of everything themselves. When someone in that situation joins your team, how do you ensure they start communicating properly, since they are used to working alone and not asking for help?
Alison: You hit the nail on the head. It's a tough transition. When folks are used to doing all of this in their head, they don't necessarily have to articulate the problem. It's a bit more amorphous when you have to actually articulate, write it down, or talk through it in a conversation. I think open-ended questions, giving space to think, and proposing something in chat before a meeting so they have some time to work through it, are all good ideas. Sometimes it's very uncomfortable to start answering off the cuff, particularly when they're used to the PhD program where you have to have the right answer ready, but here that's not necessarily expected. We just need to talk through the thought process and get to that point. Informal standups can be uncomfortable at first, but it’s really beneficial to chat and say, “This was hard, this was easy, how do I move forward, I have these questions from yesterday, etc.” Trying to make space for people to articulate what's going on in their head externally is important. You want folks to feel like they can think out loud and work through their thought process.
Carolyn: Just showing your work is a big thing. In the software space, it's all shown work. You make PRs into a repository that's shared. There's a record of everything. We need to feel more comfortable with that process. It's OK to not be right, to iterate. It’s OK if the first model is not perfect. That process is just different from the academic process where you're trying to go for perfection to prove something. It's a different mindset.
Check out the techtalk recording for the full conversation about what agility looks like in decision intelligence and how it's put into practice.