Many of my most successful data science projects have happened by accident. You know, the little skunkworks project that arises from a serendipitous hallway conversation where an important and urgent business problem meets a half-baked analytical idea. With a suitable dash of data and the right mix of office politics and corporate kung fu, a baby data-science solution that delivers concrete business value is born kicking and screaming, much to the annoyance of the people who now have to support this unplanned thing. How terrible.
Life is so much better when we only have to do carefully planned, expensive and board-approved projects that may deliver value after four years, or not. 😉
The truth is we all prefer planning and executing to a plan over being opportunistic in our work lives. As a colleague shrewdly observes, failure belongs only to the domain of doing; the corollary is that one can never fail in planning. (It’s quite possible to fail to plan, however.) Perhaps that’s why people spend so much time planning. It’s intellectual, low-risk, easy to justify to higher management and, for the right sort of people, a fun activity.
But planning has a dark side.
In particular, planning can — I said can, not will — reduce an organisation’s agility, or potential for agility, by prematurely committing the organisation to a fully allocated resource plan that is ultimately shown, in the fullness of time, to be painfully suboptimal.
Planning is essentially an exercise in reducing the uncertainty on the path to a set of business objectives. Working against the planners is a dynamic and uncertain world in which the exact same set of actions can lead to multiple possible outcomes, and in which risks and opportunities present themselves in unexpected ways, sometimes with only a short window in which to act. This is certainly the case in complex environments, where cause and effect are nonlinear (aka chaotic) and unpredictable, as opposed to merely complicated ones, where there are many different components but the system as a whole is highly structured and predictable.
There is nothing inherently wrong with planning. The problem is that most planners tend to overreach, in the misguided but all-too-human tendency to simultaneously underestimate the complexity of the real world and overestimate their own ability to predict and control the future. The result is often a plan that is highly optimised for the particular future the planner envisaged, but poorly suited to the range of scenarios that are actually likely to unfold. In other words, most planners don’t plan adequately for complexity, uncertainty, or their own mental limitations.
If you work in an organisation that says no to (almost) every new, clear but unplanned business opportunity because everyone is fully allocated and stretched and there is no frickin’ way the organisation can take on more work, that’s a clear sign that all the extensive planning done at the start of the financial year failed to take your reality, and the inherent complexity in it, into account.
So what’s the fix? We can’t not plan, right?
As usual, the solution is simple to state but somewhat hard to practise: yes, we must plan, but we must plan for uncertainty and build in significant slack or reserves that can be used to take advantage of uncertainty and the opportunities that may come our way in the course of a budget cycle. Instead of trying to build an all-seeing, all-knowing Predictive Enterprise — an impossible goal except in the most dour and predictable of industries — we should instead consider building Opportunistic Enterprises that embrace complexity and uncertainty and are always ready to capitalise on the next unplanned, serendipitous opportunity.
In an opportunistic enterprise, a significant percentage — from 20% to 40% — of resources, both human and infrastructure, must be set aside at all times for an opportunity that, with roughly equal odds, may be just around the corner or may never come. There are many difficulties in implementing this strategy. To begin with, a huge dose of patience and experience in the management team is required. Truth be told, most management teams are ill-equipped to deal with sporadic but highly successful projects, preferring instead fully planned-out work streams that deliver predictable but run-of-the-mill results. Ask yourself this: would you prefer a game where you win $100 by successfully repeating variations of a simple task 10 times, or a game where you get to win $1000 but which requires you to wait, patient and vigilant, so you can act quickly when an elusive opportunity — one that appears with 0.2 probability at any moment — presents itself for a short time?
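A quick back-of-the-envelope simulation makes the trade-off concrete. The exact rules here are my own illustrative assumptions — a one-shot $1000 payoff and a horizon of ten discrete moments — but the shape of the result doesn't depend much on them:

```python
import random

def steady_game():
    """Win $100 by diligently repeating a simple task -- a sure thing."""
    return 100

def opportunistic_game(moments=10, p=0.2, alert=True, rng=random):
    """Win $1000 if you are ready when the elusive opportunity appears.

    Each of `moments` discrete moments offers an opportunity with
    probability p; a fully allocated (not alert) player misses it.
    The horizon of 10 moments is an assumption for illustration.
    """
    for _ in range(moments):
        if rng.random() < p:
            return 1000 if alert else 0
    return 0  # the opportunity never came

# Estimate the expected payoff of waiting for the opportunity.
trials = 100_000
avg = sum(opportunistic_game() for _ in range(trials)) / trials

# Analytically: P(at least one opportunity) = 1 - 0.8**10, about 0.89,
# so the expected payoff is roughly $893 -- but only for a player with
# the slack to stay alert; a fully allocated player banks exactly $0.
print(f"steady: $100 guaranteed; opportunistic (if alert): ${avg:.0f} on average")
```

The sure $100 feels safer, yet under these assumptions the patient, vigilant player comes out far ahead on average — which is precisely the bet an opportunistic enterprise is making when it holds 20–40% of its capacity in reserve.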
Secondly, creating an office environment that encourages open dialogue, risk taking, agility, a data-driven culture, and serendipitous encounters between different business units is also no easy task. Despite the best intentions, most open-office designs end up distracting rather than encouraging interdisciplinary teamwork.
Finally, in many publicly funded organisations, not spending all allocated funding is considered unacceptable because it attracts a budget reduction in future financial years. Fixing that requires major changes at a higher systemic level, which is often beyond management’s control.
A simple concrete way to address the majority of those difficulties is to build an agile data science team in the organisation that is
- self-sufficient in that it is staffed with an appropriate mix of data scientists, data engineers, software engineers, and IT architects to enable the team to take a data science product from conception to development and finally deployment all within the team;
- charged with serving as the surge and rapid-response team to the rest of the organisation, both business and IT, to work on unplanned opportunities; and
- given the flexibility to pursue a portfolio of R&D projects, with a mix of both low-risk low-return operational analytics projects and high-risk high-return moonshot projects, to build up capabilities when not in surge and rescue mode.
Such an agile data science team can be the bedrock of an opportunistic enterprise, which by its nature is antifragile. It will allow the enterprise to respond to shifts and changes in the marketplace faster than competitors, and thrive where others can barely survive.
There’s a name for this opportunistic strategy in investing: it’s called value investing, and Berkshire Hathaway and the Superinvestors of Graham-and-Doddsville are shining examples of its success over long periods of time. I think it’s an equally valid strategy in the practice of data science.