The conclusions set out below are based on the analysis of the burden model in the previous pages. I would welcome any feedback on the model, the analysis and the conclusions that I have derived.
What is burden?
Data burden is the burden of data activities that are necessary to meet the legitimate requirements of funders and regulators and that go beyond what the institution would otherwise do for its own operations, monitoring and management.
Different institutions use data in different ways and take a range of approaches to data management, processing and analysis, so the amount of additional activity necessary to meet external reporting requirements is driven, to an extent, by the business-as-usual (BAU) activities of the institution.
Different institutions can experience a different level and profile of burden from the same data collection exercise.
What drives the level of burden for a data collection?
Many of the burden activities are fixed-cost, and many of them vary according to the fit between the data held within the institution and the external specifications (the data collection specification and the metrics algorithms). The quality of this fit is largely a matter of chance.
Burden is also driven by the quality and complexity of the materials and processes provided by the data collectors, funders and regulators.
Of all the drivers identified, the size of the institution makes very little difference to the burden experienced: because so much of the cost is fixed, smaller institutions experience a disproportionate level of data burden relative to their size.
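This fixed-cost effect can be illustrated with a simple sketch. The figures below, and the fixed-plus-variable split itself, are assumptions chosen for illustration, not measured values from the burden model:

```python
# Illustrative only: burden modelled as a fixed cost per collection
# plus a small component that scales with institution size.
# All figures are assumed, not taken from the burden model.
def burden_per_student(students, fixed_days=40, variable_days_per_1000=1.0):
    """Staff-days of burden per 1,000 students for one data collection."""
    total_days = fixed_days + variable_days_per_1000 * (students / 1000)
    return total_days / (students / 1000)

small = burden_per_student(2_000)   # a small institution
large = burden_per_student(30_000)  # a large institution
print(small, large)  # the smaller institution bears far more burden per student
```

Under these assumed figures the small institution carries roughly nine times the burden per student of the large one, which is the sense in which the burden is disproportionate.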
Burden reduction initiatives within a data collection generally have only a marginal effect on the overall data burden experienced by each institution.
What drives the overall data burden for an institution?
Significant amounts of data burden are built into the sector by different bodies collecting the same data from institutions, often using different data definitions and collection processes. This uncoordinated approach to data collections multiplies the burden on institutions and diminishes the value of the analysis derived from each dataset.
The value of different data collections could be increased through the alignment of data specifications.
The rationalisation of data collections would have a far greater impact on burden reduction than initiatives focused on reducing the data burden of individual collections.
How is data burden changing?
The nature of data burden is changing: it is no longer focused solely on making high-quality data returns but is expanding to include the analysis and interpretation of those returns by the data collectors.
As funders and regulators seek to drive more value from data, the complexity of their analysis algorithms increases significantly. This makes the ability to translate between different data views of institutional activity a new and critical capability for institutions.
The requirement to engage with metric-led funding and regulatory frameworks is driving an increase in overall data burden across the sector.