Submitting to hard metrics
Summary: As regulatory workloads soar, the first step to better resource management is better measurement, says Adrian Leibert, life science and pharma programme manager for regulatory outsourcing at Kinapse
As budgets have come under pressure and more Regulatory work has been outsourced, firms have faced a growing need to monitor, manage and report on resource consumption and value for money. The default approach has been to measure by the raw number of documents processed: if 1,000 documents were authored one year and 900 the next, the expectation was that the second year should cost less. But simple formulae like this take no account of the increasing complexity of submissions.
In recent years Regulatory submissions have multiplied and become more involved, as authorities’ requirements have become more stringent, and as companies have looked to new international markets to derive more value from investments. As a result, the likelihood of firms underestimating the work and time involved is high.
It’s a challenge we’ve been working on with a range of life sciences firms tasked with delivering measurable increases in Regulatory productivity. Without a viable measure of output, there is no way to benchmark performance and so no way to chart any improvement.
Establishing a base measure
Over time, through work with numerous different types of submissions across a wide range of firms, we have established the ability to estimate resources accurately based on global requirements for each submission type, and to break submissions down into constituent tasks. This means we can say with considerable accuracy what makes one job more complex than another.
The resulting matrix of resource requirement per submission type has contributed towards the development of a repeatable framework, detailed enough to overcome the challenge of variability between submissions. Previously, the typical alternative was for teams to log every action in an onerous timesheet, an approach that is unwieldy and unreliable as a basis for measuring improvement because the quality of the data varies so much.
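To make the idea concrete, a resource matrix of this kind might be sketched as below. All submission types, task names and hour figures here are illustrative assumptions, not Kinapse's actual framework; the point is only the structure: each submission type decomposes into constituent tasks, each carrying an effort estimate, so that a year of complex submissions can be compared fairly with a year of simple ones.

```python
# Hypothetical per-submission-type resource matrix: each submission
# type maps to its constituent tasks, each with an estimated effort
# in hours. Figures and names are illustrative only.
RESOURCE_MATRIX = {
    "simple_variation": [("author", 2.0), ("qc_review", 1.0), ("publish", 0.5)],
    "complex_variation": [("author", 12.0), ("qc_review", 4.0),
                          ("agency_response", 6.0), ("publish", 1.0)],
}

def estimate_hours(submission_type, market_multiplier=1.0):
    """Sum task-level estimates for one submission, scaled by an
    (assumed) multiplier for extra national or market requirements."""
    tasks = RESOURCE_MATRIX[submission_type]
    return sum(hours for _, hours in tasks) * market_multiplier

def estimate_portfolio(submissions):
    """Total estimated effort for a list of (type, multiplier) pairs,
    giving a complexity-weighted volume measure rather than a raw
    document count."""
    return sum(estimate_hours(t, m) for t, m in submissions)
```

On this sketch, a portfolio of 900 complex submissions can legitimately cost more than one of 1,000 simple ones, because the weighted totals, not the document counts, are compared.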
We have employed a computational framework for the last three to four years now, and it has proved successful with life sciences clients because it removes grey areas and builds trust, for example that companies are not being overcharged for processing Regulatory changes. There is also the potential to allow for other factors, such as time lost to IT outages or spent in training, which can provide new operational insight for managers. If the head of R&D can see in black and white the impact of IT downtime on Regulatory productivity and cost, it soon focuses the mind on where action needs to be taken. It’s a level of granularity and insight that spreadsheets can’t deliver.
Comparative performance reporting at regular intervals, meanwhile, can boost morale, prompt competition between teams, and provide new levels of intelligence about success factors – i.e. what makes one set of submissions faster to deliver than another. Is it simply the numbers of people allocated, or something more innovative in the approach, for example? The longer the period of recording extends, the richer the scope for data mapping and the more persuasive and powerful the trend intelligence. This in turn could focus discussions about bringing batches of work forward to avoid busy periods, or times when staff resources are thinner on the ground.
Advances in technology and the rapidly falling price of data storage and cloud-based analytics are contributing to more companies’ ability to be smarter about resource management. Using cloud-based data collection tools can enable multiple people to add data at the same time, and to do this from different locations, for example.
And the easier companies make it for people to input measurement data, the more consistent, complete and reliable the resulting intelligence will be. Systems that display everything on one screen, or that can be accessed via mobile phones, for instance, can encourage compliance. More sophisticated systems can also remove the inaccuracies of gut feel: with a menu of fixed parameters to choose from, the potential for bias is removed. The result is a quantifiable, data-driven system for monitoring resources – the kind of supporting documentation budget-holders dream of.
Compared with other service disciplines, Regulatory outsourcing in life sciences has some way to go to catch up on measurement practice, and there is much to be learnt from best practice in other markets. With Regulatory demands set to keep increasing, and pressure mounting on life sciences firms to be smarter in how they manage resources and leverage product lines, demand for more precise measurement strategies and techniques can only grow. And without proper aids, there is only so much improvement service managers will be able to demonstrate.
Adrian Leibert is life science and pharmaceutical program manager for regulatory outsourcing at Kinapse. For more on the subject of managing Regulatory resources, download the paper, A meaningful measure of Regulatory volume.