Innovation is a messy endeavour. You never know whether you are heading in the right direction and you fail all the time. A lot, if not most, of your ideas are useless, impractical and just fluff. To an outsider, it looks like you are wasting a lot of resources and meandering around aimlessly. In an organization, the sense that innovation is just a giant waste of time and a drag on the rest of the organization can feel acute.
Innovation is also necessary, both in our personal lives and the organizations we work in. How else would one break new ground? How can an organization solve problems better, faster and cheaper in ways that they don't already know? How can we discover new green fields to play in if we don't lift our heads up and look around and beyond?
The question is, how do we manage the tension between the chaos of creative forces and the orderly march towards realizing real world benefits, both in our personal lives and our organizations? How do we meander effectively?
The Innovation Process
In my mind, innovation proceeds in four phases: Ideas, Experiments, Proof-of-Concept (PoC) and Production.
The basic principle underlying the process is minimizing the investment of resources, and hence the cost of failure, before the Production stage.
The first stage is simply about collecting ideas. There is no need for ideas to be practical. It is the messiest of all the phases: ideas are often vague and ill-formed, and their initial statements may not even point to the real problem. That's OK.
Ideas then move to the Experiments phase, where the work is aimed specifically at sharpening the problem statement, getting familiar with the background context (such as the existing business process), defining success metrics for subsequent work and surfacing as many unknown unknowns as possible. This is where the idea begins to crystallize, and a clearer vision of the innovation starts to emerge from the primordial soup of ideas. At this stage there is no complicated setup such as building data pipelines or creating pitch decks. Everything is quick and dirty and, most importantly of all, time-bound.
Experiments must always end in success or failure, and an experiment can fail for a multitude of reasons. The initial idea may be too ill-formed for a clear problem statement to be sculpted out of the experiment. The work involved may be too heavy, or too technologically out of reach, to justify the next phase of investment given the resources available. (This doesn't mean we give up.) The target metrics may be impossible to reach, for example requiring 100% accuracy and 0% false positives for a spam email detector. In short, experiments are expected to fail.
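Because the verdict is binary, it helps to agree the target metrics up front and check them mechanically. A minimal sketch in Python; the `experiment_passes` helper, its metric names and the thresholds are all hypothetical, not part of any particular framework:

```python
def experiment_passes(results, targets):
    """Return True only if every pre-agreed target metric is met.

    `targets` maps a metric name to a (comparison, threshold) pair,
    e.g. {"accuracy": (">=", 0.9)}. Names and thresholds here are
    illustrative, not prescribed.
    """
    for metric, (op, threshold) in targets.items():
        value = results.get(metric)
        if value is None:
            return False  # an unmeasured metric counts as a failure
        if op == ">=" and value < threshold:
            return False
        if op == "<=" and value > threshold:
            return False
    return True

# Demanding 100% accuracy and 0% false positives for a spam detector
# makes failure all but certain:
targets = {"accuracy": (">=", 1.0), "false_positive_rate": ("<=", 0.0)}
results = {"accuracy": 0.97, "false_positive_rate": 0.01}
print(experiment_passes(results, targets))  # False
```

Writing the gate down like this keeps the end-of-experiment conversation honest: either the numbers were met or they were not.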
When they do, we head back to the Ideas phase. No shame. In fact, it is very much encouraged, because ideas are usually crap at first but they get better. And since experiments are meant to be short and resource-lite, we haven't invested much in them and we can all walk away and try again. The last thing we want is to have invested so much in an idea that we have to make it at least look like a success.
Failed experiments do not mean that the idea is not valuable.
Experiments are expected to fail, but that doesn't mean the idea that originated them is not valuable. As mentioned before, the idea may be extremely valuable but simply out of reach for now due to technological or resourcing challenges. The failed experiment is just part of the journey towards a sharper problem definition.
It is important that we don't judge the value of an idea simply by the outcome of a single experiment. That would lead to a very myopic and short-term view of what innovation can do for us. Failed experiments sometimes tell us that our ideas are no good, but they can also tell us that there are things we failed to consider, or that other building blocks are needed. If so, then the next step is to break the problem down further and run more experiments!
An example: a data science project
Here I will give an example of how the innovation process applies in a data science project. Say there is a problem statement coming from the business teams. It is something that the data science team has not encountered before and the problem statement is still a little bit hazy.
Here, the data science team can engage the business team in a couple of conversations (1-3 hours of engagement) to understand the general scope of the problem and the current business processes impacted.
Both teams then agree to conduct an experiment lasting 2 weeks (30-45 man-hours of work). The experiment will involve frequent ad-hoc contact between the data scientist and the business point-of-contact, with the explicit aim of letting the data scientist understand the data characteristics and the business process. At the end of the 2 weeks, there should be a conclusion on the exact problem statement and whether the problem is likely to be solved in the PoC phase.
If the experiment fails, then no harm done. Only 2 weeks of effort has been invested. Both sides can walk away and try again.
If the experiment turns out to be a success (i.e. we have a clear problem statement that is likely to be solvable, and clearly stated success metrics), then the project moves on to the PoC phase, where the data scientist will work on the full and larger dataset and push the limits of modelling to meet the success metrics, say >90% accuracy and <5% false positive rate.
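As a rough illustration of how such PoC success metrics might be checked, here is a sketch using made-up confusion-matrix counts; none of these numbers come from a real project:

```python
# Hypothetical confusion-matrix counts for a binary classifier;
# the figures are purely illustrative.
tp, fp, tn, fn = 180, 8, 800, 12  # true/false positives and negatives

accuracy = (tp + tn) / (tp + fp + tn + fn)   # 980 / 1000 = 0.98
false_positive_rate = fp / (fp + tn)         # 8 / 808 ≈ 0.0099

# The agreed PoC success metrics: >90% accuracy, <5% false positives.
poc_success = accuracy > 0.90 and false_positive_rate < 0.05
print(poc_success)  # True
```

The point is that the PoC, like the experiment, ends with a yes-or-no answer against numbers that were agreed before the work started.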
A thing to note here is that the notion of a PoC is slightly different from how a software development team might understand the term "Proof-of-Concept". To a software development team, a PoC involves setting up a server to show an MVP of the final application. Here, a PoC is purely a demonstration that the problem can be solved via machine learning means. There is no application development or pipeline setup; that comes in the Production phase.
Although a PoC is less likely to fail than an experiment, there is still some chance that it might. And if it does, the maximum investment lost is 3.5 months.
The Production phase kicks off upon the success of the PoC. At this point, we already know that a machine learning approach can meet the required success metrics (at least in principle); the investment of effort to productionise is to realize the business impact. Here, the data science team will work closely with the software development team on activities such as setting up data pipelines, deploying models, delivering results to business teams and monitoring model performance.
Note how the investment of effort is always kept to a minimum, and every precaution is taken to prevent the situation where everyone has too much invested and the project has to look like a success. 1-3 hours (initial engagement) are invested to give experiments (30-45 hours) a greater chance of success, which in turn allows the data scientists to work effectively during the PoC phase (300 man-hours of effort).
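The escalation of commitment can be made concrete with a quick back-of-envelope check, using the rough hour figures above (these are the article's estimates, not measurements):

```python
# Rough effort per phase, taken from the estimates above.
phase_hours = {
    "initial engagement": 3,   # 1-3 hours of conversations
    "experiment": 45,          # 30-45 man-hours over 2 weeks
    "poc": 300,                # ~300 man-hours
}

# Each phase costs several times the one before it, so failing early
# is far cheaper than failing late.
hours = list(phase_hours.values())
ratios = [later / earlier for earlier, later in zip(hours, hours[1:])]
print(ratios)  # [15.0, 6.666666666666667]
```

Each gate buys information at a small fraction of the cost of the phase it protects, which is exactly the "minimize investment before Production" principle in numbers.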
Key difficulties in implementing the Innovation Process
In my mind, there are three key obstacles to the adoption of the innovation process in any organisation.
First is organisational buy-in. Modern-day companies are run on tangible outcomes; every endeavour has to have an immediate quantifiable result. However, the innovation process assumes that there will be failures and that the benefits might not be reaped until some time later. Hence there needs to be a commitment from the top management of the company to stay the course when projects don't show immediate value. A good way to do this would be to state the innovation budget as a percentage of the operating budget up front each year, much like how a country states its R&D budget as a percentage of GDP. Normally, the innovation budget would be around a couple of percentage points of the operating budget, definitely not a significant portion. This is equivalent to saying that the company will spend a small proportion of its expected expenditure on future-proofing itself (like buying insurance) and opening up opportunities for surprise uplifts from innovation.
Secondly, even if the organisation is committed to innovation, the people on the ground might not have the right mindset to carry out the innovation process. It is all well and good to say that the organisation allows for failure in the name of exploration, but if a person's bonus and promotion still depend on tangible outcomes, people will remain very risk-averse and will not participate. To overcome this, innovation needs to be considered explicitly in an individual's KPIs, and the individual has to be properly rewarded for taking the risk to innovate. There also needs to be an innovation team to help individuals or teams step out of their normal routines to develop something new.
Lastly, the benefits reaped from innovation projects need to be clearly tracked and accounted for. This is hard, as there is often no hard and fast way to calculate the benefits of innovation. How would you quantify the dollar benefit of fewer errors made? But that doesn't mean we should not try. If the outcome is an analysis report, the benefit could be measured by the fee an external party would have demanded to conduct the same study. If the outcome is a streamlined process or a machine learning model, the benefit could be measured in man-hours saved (in dollars), or the business owner could be asked how much they would have paid to resolve the problem. At the end of the day, the dollar value of the benefits reaped does not mean that company profits increase by that same amount, but it is a good way to track the progress of the innovation initiative. Also, the dollar benefit should be accrued not only in the year of the innovation but in subsequent years as well (suitably discounted, of course). For example, if $100 of benefit was accrued in year 1, then $80 should be accrued in year 2 due to the innovation in year 1, and $60 in year 3, and so on. This would encourage innovation with long-term benefits instead of shiny new things that get presented to upper management and then thrown away.
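The multi-year accrual can be written down as a simple schedule. A sketch; the `accrued_benefits` helper and the declining 100/80/60 percentages are just the article's worked example, not a formal discounting model:

```python
def accrued_benefits(year1_benefit, schedule_pct=(100, 80, 60)):
    """Benefit credited to an innovation in each year after delivery.

    The default schedule mirrors the example above: the full benefit
    in year 1, 80% in year 2, 60% in year 3, nothing afterwards.
    """
    return [year1_benefit * pct / 100 for pct in schedule_pct]

print(accrued_benefits(100))  # [100.0, 80.0, 60.0]
```

Fixing the schedule up front means the credit an innovation earns in later years is mechanical rather than negotiated, which is what makes long-term benefits visible in the accounting.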