What It’s Like to Dine with Demand Planners
I recently had dinner with Stacy, director of demand planning at a large apparel company. The conversation turned to the role of data science in supply chain. She mentioned excitedly that they had hired some smart data scientists who were writing machine learning algorithms to improve forecast accuracy. I knew her company had invested millions of dollars over the years in implementing demand planning software, so I asked what they were doing with it. Her answer did not surprise me.
Since implementing the demand planning software four years ago, their business had gone through a radical transformation. A chief digital officer was brought in to beef up the direct-to-consumer experience, and he placed significant bets on front-office technology to drive customer engagement with personalized offers and more dynamic pricing. Rising consumer sentiment and historically low unemployment drove further demand. Through all of these changes, the demand planning software continued to rely heavily on a three-year history as the basis for forecasting. While it captured seasonality and trends reasonably well, it fell far short when it came to additional causals such as pricing and promotions, and to external macro factors such as consumer sentiment and unemployment rates, all of which have a significant impact on their business.
As a result, the forecasts became less reliable. The planners started exporting system-generated forecasts into Excel and applying judgment calls, which further aggravated the situation: human bias made the forecasts worse. Stacy then hired a few data scientists in Eastern Europe, for cost reasons, and had them apply machine learning techniques to improve forecast accuracy. The data scientists wrote programs in R to take the raw sales history and blend it with price and promotional causals and external macro factors. It was a long project, but the initiative did show promise. At the beginning of each week, the data scientists work their magic and generate a massive spreadsheet of updated forecasts, which is then loaded into the demand planning system for planner review. But how they do what they do remains a black box to the rest of the organization.
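To make the blending step concrete, here is a minimal sketch of what joining sales history with internal causals and external macro factors can look like. Stacy's team worked in R; this sketch uses Python and pandas, and every table, column name, and value is an illustrative assumption, not her company's actual data model.

```python
import pandas as pd

# Hypothetical weekly sales history for one SKU (all names and values are illustrative)
sales = pd.DataFrame({
    "week": ["2023-01", "2023-02", "2023-03"],
    "sku": ["A1", "A1", "A1"],
    "units_sold": [120, 95, 180],
})

# Internal causal data: average price and promotion flag per SKU-week
causals = pd.DataFrame({
    "week": ["2023-01", "2023-02", "2023-03"],
    "sku": ["A1", "A1", "A1"],
    "avg_price": [29.99, 34.99, 24.99],
    "on_promo": [0, 0, 1],
})

# External macro factors, keyed by week only
macro = pd.DataFrame({
    "week": ["2023-01", "2023-02", "2023-03"],
    "consumer_sentiment": [64.9, 66.4, 62.0],
    "unemployment_rate": [3.4, 3.6, 3.5],
})

# Blend the three sources into a single modeling table
blended = (
    sales.merge(causals, on=["week", "sku"], how="left")
         .merge(macro, on="week", how="left")
)
print(blended.columns.tolist())
```

The blended table, with demand alongside its candidate drivers, is the input any downstream forecasting model would train on.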
I grew contemplative as Stacy told me this story. It is one I have heard far too many times. Organizations make heavy investments in packaged software only to see planners and analysts doing most of their work in offline Excel sheets, passing the data through the planning system for execution and relegating it to a mere shell. The end users of these applications spend far more time at the edges and outside the packaged software than inside it. While Stacy's organization made progress overall, it did not get the anticipated return on investment. Nor is her approach sustainable: she is only a couple of data scientists' resignations away from losing all the competence they have built.
Most enterprise packaged applications are not very different from Stacy's demand planning system. They are built on older paradigms, with rigid data models hardwired to the ERP backbone, which by nature overemphasize data within the four walls of the company. The planning algorithms are explicitly programmed, with little to no self-learning capability. As for the user experience, these applications provide relatively static views; any changes require either vendor enhancements or IT support. In many instances, planners bypass the user interfaces altogether and rely on business intelligence (BI) tools to visualize and analyze data. These BI tools are disconnected from the live plans and thus introduce latency into decision making.
However, all of this is changing with the rise of a new generation of digital platforms. Take demand planning, where one needs to marry three years of internal sales history with external factors: new data blending tools make such work visually driven, without requiring advanced skills such as SQL scripting, placing the power of data blending in the hands of business users. Algorithms learn from the blended data and surface the causal factors that truly influence demand patterns, filtering out the noise. These algorithms expose only the necessary levers to planners, democratizing supply chain decision intelligence for users who do not need doctorates in data science. An intuitive, exception-driven user experience with highly configurable views lets planners work at the core of the applications and drive decisions in a fully connected manner, rather than being relegated to the edges and to Excel spreadsheets.
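The idea of "surfacing the causal factors that truly influence demand" can be illustrated with a toy model: fit demand against standardized candidate drivers and rank them by the size of their effect. This is a deliberately simplified sketch on synthetic data with made-up factor names, not how any particular platform implements it.

```python
import numpy as np

# Hypothetical standardized causal features for one product family
rng = np.random.default_rng(0)
feature_names = ["price_index", "promo_flag", "consumer_sentiment", "unemployment_rate"]
X = rng.standard_normal((200, 4))

# Synthetic demand: price and promotions matter a lot, unemployment barely at all
y = (-2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.05 * X[:, 3]
     + rng.normal(0, 0.1, 200))

# Fit a least-squares model and rank factors by absolute coefficient;
# small coefficients are the "noise" a planner can safely ignore
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ranked = sorted(zip(feature_names, coef), key=lambda kv: -abs(kv[1]))
for name, c in ranked:
    print(f"{name}: {c:+.2f}")
```

With standardized inputs, the coefficient magnitudes are comparable, so the ranking itself is the "lever" a planner sees: which factors drive demand, and which are noise.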
Best of all, the likes of Stacy do not need to rip out and replace the enterprise systems in which they have invested millions of dollars. Instead, these new technologies can layer on top of existing systems. Planners can design their own workflows and apps in a visually stimulating environment, filling in the white spaces around packaged software, without turning such exercises into major IT projects. This relieves planners of the drudgery and lets them focus on driving business decisions.
I smiled as these thoughts crossed my mind. "What's so amusing?" asked Stacy. "I will tell you. But let us order the main course first," I said, looking at the menu!