On January 5, 2019, I woke up early to go backcountry skiing. As always, I checked the avalanche forecast—a prediction of the day’s avalanche danger from our local experts. “Yes! The forecast is LOW danger,” I texted my friend. “Perfect! Let’s do God’s Lawnmower,” he replied.
God’s Lawnmower, a classic backcountry ski run, carries its name because a massive avalanche once ripped the pine trees off the mountainside, leaving the appearance that a giant lawnmower had descended from the heavens to carve a steep chute through the trees. As you can imagine, God’s Lawnmower is not a good place to be when even moderate avalanche danger exists.
On our way towards the top, my friend and I noticed some instabilities in the snowpack—general red flags for avalanches. On more than one occasion we discussed our unease but decided that because the forecast called for low danger, we’d be okay proceeding.
Just after crossing a small slope, my friend triggered a small avalanche below him that flushed into the trees. At this point we knew: the forecast was wrong, the avalanche danger was high, we had put ourselves in an extremely dangerous situation, and we needed to get off the mountain immediately.
That day turned out to be a relatively famous one amongst Utah backcountry skiers. The avalanche forecaster called his forecast “the most blown avalanche forecast of [his] 20-year career.” Relying on the inaccurate forecast and dismissing their own instincts, many skiers headed to high-consequence terrain. By the end of the day, at least seven people had been caught in separate avalanches. As it turns out, a bad forecast is worse than no forecast at all.
So what does this have to do with retail? Well, like a bad forecast, a bad A/B test—or a bad pilot—is worse than no test at all. A bad test can lead a retailer to confidently implement costly initiatives, ones it would avoid absent the misleading data.
We have seen this multiple times with new clients. Many of our clients, before learning of MarketDial’s platform, designed and analyzed A/B tests on their own or used other software. Having created inaccurate and unscientific tests, these retailers gathered and acted on misleading data.
As one example, a retailer conducted an A/B test of a price increase on candy bars. After designing its test, the retailer raised the price of candy bars in its test stores to seemingly remarkable success. Based on the test results, the retailer predicted that a fleetwide price increase would significantly boost revenue. That prediction was dead wrong. On implementing the price increase across its entire fleet of stores, the retailer watched its candy-bar revenue tank.
After converting to MarketDial, this retailer asked us to analyze its pre-MarketDial experiment to identify what went wrong. It turned out that, to select its test stores, the retailer had chosen stores within driving distance of its headquarters and those that were relatively urban. But these stores did not accurately represent its fleet. Instead, they skewed toward higher-income neighborhoods, where consumers were less sensitive to a marginal increase in candy-bar price.
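For the more technically inclined, the failure mode above is classic selection bias, and a few lines of code make it concrete. The sketch below is purely illustrative—the store counts, income mix, and price-sensitivity numbers are invented assumptions, not data from the retailer in question or from MarketDial’s platform. It compares the revenue-lift estimate you get from a biased test group (only higher-income stores) against a simple random sample of the fleet:

```python
# Illustrative sketch (hypothetical numbers): how a biased test-store
# sample can flip the sign of a predicted revenue change.
import random
import statistics

random.seed(42)

# Hypothetical fleet: 500 stores, 20% in higher-income neighborhoods.
# Assumed effect of the candy-bar price increase on store revenue:
#   higher-income stores: +2% on average (shoppers less price-sensitive)
#   all other stores:     -5% on average
fleet = []
for _ in range(500):
    high_income = random.random() < 0.2
    mean_lift = 0.02 if high_income else -0.05
    fleet.append({
        "high_income": high_income,
        "revenue_lift": random.gauss(mean_lift, 0.01),
    })

# Biased test group: only higher-income stores (e.g. near headquarters).
biased_test = [s for s in fleet if s["high_income"]][:30]

# Representative test group: a simple random sample of the whole fleet.
random_test = random.sample(fleet, 30)

biased_estimate = statistics.mean(s["revenue_lift"] for s in biased_test)
fair_estimate = statistics.mean(s["revenue_lift"] for s in random_test)
true_effect = statistics.mean(s["revenue_lift"] for s in fleet)

print(f"biased test estimate:  {biased_estimate:+.3f}")
print(f"random test estimate:  {fair_estimate:+.3f}")
print(f"true fleetwide effect: {true_effect:+.3f}")
```

Under these assumptions, the biased test group predicts a revenue gain while the fleet as a whole loses revenue—exactly the pattern the retailer experienced. A random (or better, stratified) test group tracks the true fleetwide effect far more closely.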
The truth is, accurate and predictive testing is difficult. Retailers rarely design accurate tests on their own—there is simply too much to consider: demographics, store characteristics, quantity of data, duration, and much more. Without the benefit of machine-learning capabilities, teams of data scientists, and an accurate and robust platform that can add a layer of consistency to testing, retailers who test on their own are likely creating bad forecasts, ones that can lead them down costly paths.
At MarketDial, we have teams of highly skilled data scientists and engineers who have dedicated their careers to making sure retailers can easily and accurately design and analyze in-store experiments, every time. With MarketDial, you can be sure the forecast is accurate, giving you the confidence needed to boldly enter profitable terrain and intelligently avoid dangerous terrain.