Understanding what decisions to make about the future is no easy task. In an earlier article, we outlined the 6 best practices in pharma decision-making, activities crucial to making the decision process efficient, less time-consuming, and more accurate. As things stand today, the processes commonly used to manage data for forecasting purposes offer significant opportunities for improvement.
The Current Situation
For many pharma companies, projects are represented in their portfolio by spreadsheet models, which typically include a static P&L with yearly, quarterly, or monthly costs for every cost account. Different development options and market upsides are simply represented by adding alternative time series to the capture sheet. Often, data is static, with no possibility of including uncertainty in the estimation, except maybe project attrition, and the sales forecasts typically use planned launch dates as input. This means that the data in the spreadsheet represents a desired outcome, rather than the expected outcome.
The most common processes for collecting this data are time-consuming and error-prone, and the data has to be checked over and over again. The different types of data are scattered throughout various parts of the business: project timing is owned by project leads, probability of success by a central strategy group, cost data by Finance, and market and sales data by a commercial group. Each group has a forecasting model for its own data, and the group's contribution to the spreadsheet model is a set of static data calculated for discrete scenarios. There is no single, holistic end-to-end model, which means that any change, large or small, triggers a full update cycle.
Portfolio managers depend on extremely busy project teams for the latest version of the data, and since the project teams gain nothing from providing it, they usually do so either at the last minute or only when absolutely necessary. This means that data is assembled and assessed only at certain times throughout the year: for the annual report, for stock analyst briefings, and when revising the R&D strategy.
Tracking Data Over Time
As mentioned, portfolio data updates are done to serve specific internal or external events, for instance an analyst briefing or a portfolio strategy update. As the event approaches, the portfolio team's work intensifies. Stakeholders request new and updated scenarios, for which data is validated and approved. A slide deck is sent out as a pre-read to the stakeholders and presented in a series of meetings, forcing the portfolio team to work day and night to update the analyses between meetings. Eventually, these meetings boil down to a final decision, which can be to prioritize certain programs, to further invest in or terminate assets, or a strategic decision such as changing a disease area strategy.
For every new event that requires a portfolio analysis update, the portfolio team has to go back and compare the current state of the portfolio with earlier analyses. Done manually, this is very time-consuming, because all data has to be checked against what has been shown earlier. This can be difficult, because there may be several versions of the old input data, which do not always match the versions of the analyses. Last-minute changes are sometimes entered manually into the input data, and instead of running through the time-consuming process of updating all the slides based on the new data, only key slides are changed. This is sometimes necessary to keep up with continuous change, but it also means that data is occasionally left in an inconsistent state. Thus, when that data is used again in a later analysis, it has to be validated all over again.
In order to make good use of your data, you need to make sure that your project strategy is created first, using all the knowledge there is about the project, including all the project's risks and opportunities. Specifically, this includes uncertainties in timing, costs, outcomes, and revenues. Without the uncertainties, the forecast will be operational in nature: it will tell us what we want to happen rather than what we think will happen. As such, the project models are unfit for strategic forecasting.
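The gap between a desired and an expected outcome can be illustrated with a minimal Monte Carlo sketch. All figures below (peak sales, discount rate, probability of success, the triangular launch-delay distribution) are hypothetical, chosen purely for illustration; they do not come from any real project:

```python
import random

random.seed(42)

# Hypothetical illustrative inputs -- not from any real project.
PLANNED_LAUNCH_YEAR = 4
PEAK_SALES = 500.0      # sales per year after launch, $M (toy model)
PROB_SUCCESS = 0.6      # attrition: chance the asset reaches market
DISCOUNT = 0.10         # annual discount rate

def npv_of_launch(launch_year, success):
    """Discounted value of 10 years of sales from launch (toy model)."""
    if not success:
        return 0.0
    return sum(PEAK_SALES / (1 + DISCOUNT) ** (launch_year + t)
               for t in range(10))

# Operational forecast: planned launch date, success taken for granted.
operational = npv_of_launch(PLANNED_LAUNCH_YEAR, success=True)

# Strategic forecast: simulate timing uncertainty and attrition.
trials = 10_000
total = 0.0
for _ in range(trials):
    delay = random.triangular(0, 3, 1)        # launch slips 0-3 years
    success = random.random() < PROB_SUCCESS
    total += npv_of_launch(PLANNED_LAUNCH_YEAR + delay, success)
strategic = total / trials

print(f"operational (planned) NPV:  {operational:,.0f}")
print(f"strategic (expected) NPV:   {strategic:,.0f}")
```

Because the simulation accounts for possible delays and attrition, the expected value comes out well below the planned-case value, which is exactly why a static spreadsheet tends to overstate what the portfolio will deliver.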
To pursue your strategy effectively, this scattered data needs a common environment that feeds updated data into all levels of the drug development process, eliminating the back-and-forth of continually reevaluating the data and the risk of any department acting on the wrong set of data.
With Captario SUM®, the modeling, simulation, and analysis tool that Captario has developed, users can improve the forecasting process: it offers traceability of model versions across the development process and into commercialization, while ensuring that all parts of the organization evaluate the same version of the data.