From business leaders to front-line operators, stakeholder trust can make or break the momentum behind any decision intelligence project. But establishing feedback loops and meeting folks where they are can help overcome this challenge. Carolyn Mooney, CEO of Nextmv, sat down with industry practitioners at Toyota, Curated for You, and Hertz to explore these topics through their observations and experience.
The following is a companion interview to a longer, related conversation that has been edited for length and clarity.
Carolyn Mooney: What does it look like when end users trust a decision algorithm or optimization model? What actually happens for the organization?
John Elam: There are two parts to that question: How do you get there, and what does the end result look like? Frankly, the end result looks like an easier experience. As a product owner, what comes to mind all the time for me is, who do I delight, and how do I delight them? You have users that want to use the tool. They recognize it makes life easier. Getting there has been very straightforward. My most successful projects have an end user that’s providing us with their pains and pleasures well before we talk about technology. They don’t care about the algorithm yet. You work with them to figure out how to envision a future that’s much more delightful.
Carolyn: Brad, what does it mean to you for end users to really trust decision models? What does that world look like?
Brad Klingenberg: It depends on what the application is, but a framework that I find helpful is an arc of progress where you can move from measurement and descriptive statistics to making useful recommendations. In some applications, you can push even further to various stages of automation. The phase transition in that progress is when you flip from a system that is principally run by humans, with some guidance from algorithms or expert systems, to a system that’s run by an algorithm monitored, managed, and supervised by humans. If you can make that flip, that’s the aspiration.
Carolyn: I’ve also seen people start to refer to the business process they’re solving as “the algorithm.” The phase switch is when you hear the word “the” or “our” in front of it. It shows that they’ve bought into the fact that it is going to provide a service, and they know it is going to continue to iterate and evolve over time.
Paxton, you’ve worked on a couple of different systems in the travel sector. How have you seen this transpire?
Paxton Leaf: Once users start referring to it as “our algorithm,” once they start to speak your language, you’re in business. Things get rolling. It’s great when you have end users who are unafraid to ask questions. They have a healthy method for giving feedback, because when you know that people are bought in, the questions shift away from skepticism. “Why is it doing this? Why is it doing that?” becomes “What is the model trying to tell us?” Those healthy feedback systems bring users along for the ride. They feel like they have a hand on the wheel, and they’re able to engage in the process. And you, as the person designing the systems, don’t lose sight of what’s actually happening out in the business. You can keep making sure that your results and outputs are still tailored to what the business needs. Working in lockstep like that makes everyone’s lives easier. Anything you want to do down the road is much simpler because you’ve already built the foundation of trust. It lets you expand on the initial work you’ve done to set it up.
Carolyn: When my co-founder Ryan and I were working together at a small meal delivery startup in Philadelphia, we got the opportunity to sit next to our dispatchers. It was amazing because we could ask, "What do you do when this happens?" and get that immediate feedback. As we’ve gone along with Nextmv, something that we’ve thought a lot about is how to make systems more collaborative. That’s a big key to this.
What kind of systems, processes, or software do you have to have in place to get that feedback for building trust?
Paxton: First, the process should be back and forth, and it should be iterative. You should be highly engaged with who you’re working with and what the end result needs to be. An incremental adoption plan also helps get you to where you need to go. It’s tough to implement sweeping changes, especially if the people you’re working with are skeptical of what you’re trying to do; making those changes out of nowhere can really set you back. Incremental steps are what it takes to get there.
Brad: One thing that has been impactful at Stitch Fix and other places is actually taking responsibility, in a partnered way, for the outcomes that you’re driving. At a lot of organizations, you may have a technical team siloed away and measured in some other way. If you can sign up for the KPIs that your partners are driving, and share the responsibility for making them better, that gives you skin in the game in a way that you wouldn’t have if you’re off in a silo somewhere.
Carolyn: That idea is great. I used to joke with people that we as modelers sometimes handcuff the organization. They're on the hook for a line item on the P&L, or they’re on the hook for a core experience KPI, but they have to rely on this algorithm to get them most of the way there. So how do you marry those two things? Aligning incentives is always a great way to move that forward.
Brad: There’s a selfish perspective there too: if you can take that responsibility, you can own that line item in partnership. That gives you a more powerful seat at the table when there are decisions to be made about how to move forward. It’s also important to ensure the feedback loops are there as you march up that arc of progress. For example, if you’re serving recommendations or suggested decisions in some context, make sure you understand what people accept or reject and why, so you can quickly close that loop and iterate.
Carolyn: That sounds like a core data process too. By putting telemetry into your decision processes to get good feedback, not just verbal and anecdotal feedback, you get a consistent observability pipeline for your decisions.
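As a sketch of what that kind of decision telemetry might look like in practice (the event fields and the `log_decision_event` helper below are illustrative assumptions, not any specific platform’s API):

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionEvent:
    """One record in a hypothetical decision-feedback pipeline."""
    run_id: str            # which model run produced the recommendation
    recommendation: dict   # what the model suggested
    action: str            # "accepted", "rejected", or "modified"
    reason: str = ""       # free-text reason from the end user, if given
    timestamp: float = field(default_factory=time.time)

def log_decision_event(event: DecisionEvent, sink: list) -> None:
    """Append a JSON-serializable record to a sink (a file or queue in practice)."""
    sink.append(json.loads(json.dumps(asdict(event))))

# Example: a dispatcher rejects a suggested route assignment.
events: list = []
log_decision_event(
    DecisionEvent(
        run_id="run-42",
        recommendation={"driver": "D7", "stops": 5},
        action="rejected",
        reason="driver D7 is on break",
    ),
    events,
)
acceptance_rate = sum(e["action"] == "accepted" for e in events) / len(events)
```

Aggregating events like these over time is what turns anecdotes ("the dispatchers keep overriding Tuesdays") into a measurable signal you can act on.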
John: At Toyota, there’s a core philosophy of ours throughout the entire organization called Genchi Genbutsu. It translates to "go and see." And specifically we’ll say, "go to the gemba," or, go to the place where the thing occurs. If we have a windshield problem, you go to where we make and install the windshields, and you see the work occur. You get such an appreciation for the actual stuff that you’re trying to automate and optimize. When you can see it, it changes everything.
I’m glad everyone’s talking about feedback loops, because you have to. I ask for that feedback. I want folks to tell me how it’s bad because I might be blind to that. Those users are going to tell you very quickly, so you can find champions that will be honest with you. When people look for champions, a lot of them look for cheerleaders. You want someone who’s a cheerleader because you want them to go tell their friends about it, but you also need someone who’s candid. Finding that true champion is paramount to success.
Carolyn: What kind of processes or structures have you had to collect that feedback, and how early in the process?
John: From day one, I would talk to people about their problem. For example, we made an accessory recommendation tool at Toyota. Every mudflap that’s being installed at Toyota goes through this decision/recommendation system that we built. We set up pilot and control groups, and I would interview them. We had regular bi-weekly calls to learn more about their work. We’d launch the pilot, they’d let us know what they saw in this MVP, and they’d tell us what was working and what wasn’t. Just meet and call, meet and call.
Another system I built was a tool for managing resources. On every single page there was a link to provide feedback. When you clicked on it, it would capture the page you’re on, some meta-information about what you’re seeing, and then it would send it to my inbox. It was back when I was a sole developer on a sole application. It was very effective and also easy for the user. Reducing the barrier to that feedback is super important.
Paxton: One of the things for a labor planning project that we missed at the start was how interactive that feedback needed to be and what kind of turnarounds that allowed for. When we thought about how to approach a scheduling problem, we had our standard OR-type thoughts. We know these things work well, we know what to expect, and we know how to model the variables. Then we dropped a prototype on the desk of a station manager who immediately said, "The union is not going to like this. We have to have X amount of people here, here, and here at these times." It became very clear that it had to be iterative. So we pulled everyone into the same room and we said, "We’re going to sit here until we figure it out, and we’re going to go back and forth. We’re going to go look at what’s happening in the real world out in the Hertz Atlanta station. Then we’re going to come back, and we’re going to talk about it again.” It really helped cut through a lot of the noise. For a lot of the harder constraints, like union negotiations versus what churn looks like for full- or part-time employees, you come to know what kinds of costs are there. The thing that was most effective was just putting everybody into the same room, discussing it face-to-face, getting everyone’s skin in the game, and figuring out what happened, over and over again.
Brad: I’d add too, and this is more for systematic feedback, that you should have multiple feedback loops. You often have one feedback loop from the decision-maker. At Stitch Fix, that might be the stylist picking a blouse for a customer. And then you have another feedback loop for the actual business outcome that you’re trying to optimize. At Stitch Fix that would be, "Did the customer buy the blouse?" The thing you’re trying to optimize is usually a more complicated, slower, noisier objective function because there’s a lot that goes into it. We mentioned the idea of having telemetry, and since you know the platform you’ve built, you can understand what people do. You want to build that outcome feedback loop and figure out how to put those together, because sometimes they don’t agree. That is an interesting challenge.
Carolyn: Something that I have always been a big believer in, especially with the advent of ChatGPT and all these different GenAI systems, is that people expect to be able to interact with algorithms, data, and systems in a natural language way that we haven’t really had before. How do you see that percolating into the decision science space? Do you find that helpful in the trust-building process? How much transparency do you actually want to give to users?
Brad: It’s a new world. I’m not sold that the optimal interface for every decision platform is a chatbot. There will be times when that’s the right way and that iterative back-and-forth is great, but I think there’s still places where you’re presenting static scenarios or a process that’s been honed. And that means you can do things very differently now with these new technologies. One thing I think is interesting is providing evidence or context for why you’re recommending something or why this price would be better than another. In the past, it would be very difficult to systematically provide that kind of evidence, but today you can summarize different features and write a nice little snippet for people to read. It provides more transparency with more natural language without necessarily always having to chat with an agent back and forth.
Carolyn: Do you get users hands-on early with a model instead of just looking at the output for a model? Can they actually run it? Is that part of your trust-building process? And if it is, how much power do you give them in terms of levers they can pull or data they can add or subtract?
Paxton: Point sensitivity analysis is big. Letting people not just understand what the right answer is, but being able to say, "Well if we change this dial, if we allow for more people to work at this point, if we increase throughput, how many cars do we wash? How does that toggle things?" One of the trade-offs is that for some of the models that we build, the runtime is so high that it’s a little difficult to provide sensitivity analysis with that immediate tactile feedback. There’s a balance in there too. We know that if we’ve got this set up, we know some semblance of what the optimal value is. Maybe we warm start it. We allow someone to go from that solution, we let the user toggle these inputs, and we let them change it from there so that we don’t have to go back in and wait however many minutes or hours it takes to finish running the model itself.
Carolyn: For more of a greenfield process, where you’re taking everything from scratch and making all the decisions, I think that strategy is a really nice one. Being able to say as a user, "Hey, I see the plan that you created, and I changed these two or three things. Use that as a starting point to either re-optimize or give you the KPIs for what you set up. Did you violate any of the constraints?" I think that’s a nice way to start to build that trust. You touched on exposing levers to users, knobs they can turn or play with. I worked on really large-scale simulation systems for a long time, and that was how I built intuition about the very complex system underneath.
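That "edit the plan, then check it" loop can be sketched with a toy shift-coverage example. The `evaluate_plan` function, the demand shape, and the headcount cap below are all hypothetical, meant only to show the idea of scoring a user-edited plan and flagging violated constraints without a full re-solve:

```python
# Evaluate a user-edited staffing plan: report KPIs and constraint
# violations, instead of re-running the whole optimization.
# plan: hour -> workers scheduled; demand: hour -> workers needed.

def evaluate_plan(plan: dict, demand: dict, max_headcount: int) -> dict:
    violations = []
    for hour, needed in demand.items():
        if plan.get(hour, 0) < needed:
            violations.append(
                f"hour {hour}: {plan.get(hour, 0)} scheduled, {needed} needed"
            )
    total = sum(plan.values())
    if total > max_headcount:
        violations.append(f"headcount {total} exceeds cap {max_headcount}")
    return {
        "total_worker_hours": total,
        "feasible": not violations,
        "violations": violations,
    }

# The optimizer's plan, then the user's tweak ("we need one more at 9am").
baseline = {9: 3, 10: 4, 11: 4}
edited = {**baseline, 9: 4}
report = evaluate_plan(edited, demand={9: 4, 10: 4, 11: 3}, max_headcount=12)
```

A report like this can answer the user immediately ("your change is feasible, here are the KPIs"), and the edited plan can then seed a warm-started re-optimization if a full solve is still wanted.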
John: Solution pools are a useful technique because you can support a bunch of different options, and the solutions are all feasible. They’re all reasonably close to optimal. Frankly, I have found that most folks don’t care how the math works, and that’s okay. I love finding the people who are interested, that’s when I get excited, but generally I have come around to the saying of, "I learn the math so they don’t have to”... most people just want an acceptable answer.
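The solution-pool idea can be illustrated with a toy job-selection problem: enumerate the feasible choices and keep every one within a small gap of the best objective. This brute force is only a stand-in for a solver’s built-in pool (for example, Gurobi’s solution pool parameters); the jobs, values, and hours here are made up:

```python
from itertools import combinations

# Toy problem: pick jobs to fill a shift without exceeding capacity.
jobs = {"wash": 4, "detail": 7, "shuttle": 3, "inspect": 5}   # value per job
hours = {"wash": 2, "detail": 4, "shuttle": 1, "inspect": 3}  # hours per job
capacity = 6

def solution_pool(gap: float) -> list:
    """Return all feasible job sets whose value is within `gap` of the best."""
    feasible = []
    names = list(jobs)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            if sum(hours[j] for j in combo) <= capacity:
                feasible.append((sum(jobs[j] for j in combo), set(combo)))
    best = max(value for value, _ in feasible)
    return [(v, s) for v, s in feasible if v >= (1 - gap) * best]

pool = solution_pool(gap=0.15)
# Every entry is feasible and near-optimal, so an end user can pick the
# option that fits constraints the model doesn't know about.
```

Offering a pool like this is often an easier conversation than defending a single "optimal" answer: the user chooses among good options rather than accepting or rejecting one.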
Carolyn: That’s way more common than we all talk about. So many systems start in Excel, and that’s how they’re built and operated for a long time. The industry is getting people to move away from a manual process, which makes sense because people better understand data they can interact with. How do you bridge the gap to get people to start trusting that end state?
John: The thing that I’ve seen to be really effective, especially when you start working with optimization tools, is you just run it through your favorite commercial optimizer and you get an answer in seconds, maybe minutes. If it’s a problem you were solving in Excel, it’s going to solve in FICO, Gurobi, etc. in a very short amount of time. Most of the time, people are measured on just getting it done and making it look as good as, or similar to, last year, last month, etc. If I’m able to produce that in seconds, then they can tweak it and say, "What if I change this constraint? What if I put more emphasis on this objective?"
Carolyn: If you think about Brad’s point earlier, being on the hook for that KPI, they’re on the hook for the operation running; the products being produced off of that production line, people showing up for their shifts and getting work done, or on-time percentage for deliveries. That comes along with a lot of messiness because life is uncertain and things happen. You know their day-to-day is not necessarily about being concerned that the algorithm has a cluster objective of blank. It’s important to close that gap for them and meet them where they are in terms of the inputs and outputs.
Paxton, you mentioned industrial systems engineering and I always think about our algorithms as where we like to play. That’s our little box, and we can work in that box, but that box has an interface with other parts of the system. That input and output is the level at which most business operators are working. I’ve definitely heard people say, "I don’t care about optimality at all." They don’t care about mathematical optimality. They care about a consistently good solution and the speed at which you can get it to them.
Have there been any times where you felt like even with the processes developed, your own mentality around getting that feedback, getting people in the room early, etc., that there’s still trust challenges? Why was it challenging? What slowed the process down in building that trust?
John: I’m struggling with it currently. We have all these regions and most of the regions have accepted this accessory tool that we’ve made, but we have one holdout region. They’ve had some turnover there where folks got promoted, new hires were brought in, and this tool got rolled out right as that was happening. So you have new folks that don’t know the tool, and their management had never used it because they were being promoted at that time. They think whatever they were doing before the tool obviously worked well enough. The way I’m going to overcome that is, I’m going to fly out to that region, and I’m going to meet them in person. Going out, having lunch or coffee, talking to somebody, etc. changes the whole relationship. It changes how open they’re going to be with you. It also allows me to understand what their apprehension is just by going to sit with them and finding out why.
Carolyn: I think it’s also fascinating when you sit there, you see the systems they’re working in, and you realize that your algorithm is not at the right point in the process, or they can’t figure out how to get the data they want in there at the right time. The screens don’t work in the sequence that you expect. You don’t see that unless they’re screen sharing or you’re sitting next to them. That happened with us at Grubhub. We walked in one time and there was this one region where the manager had six screens. I said, "Whoa, what’s happening here? How do you even process this information?" And he said, "Well, I know this and that and this and that." It was amazing. That’s when you start to realize that a lot of really large organizations run on decision models that are just in people’s brains every day. So how do you start to codify that information and get things into a state where you know they can level up and control that strategy a little bit more?
Brad: It’s a recurring theme, but you know you’ve got to meet people where they are to understand what their actual workflow is, not an abstract model of it.
Check out the techtalk recording for the full interview to learn about driving solution adoption, balancing industry experience with new analytics solutions, and how to build trust.