The Secretary of State is going to choose ten maternity units for intensive scrutiny. The Maternity Task Force will descend. The question nobody is asking: how will the terrible ten be chosen?
If history is any guide, the selection will be a mix of CQC ratings, horror stories in the media, and politics. Leeds is already in. Gloucester and Mid and South Essex are in the mix. Sussex ticks the media box. Stories drive decisions and policy. Data does not.
The problem: there is zero evidence that choosing ten units this way will improve care for everyone. The units selected will get an industry of investigation, statements, reports, and recommendations. The units not selected will carry on, some of them quietly deteriorating. The Thirlwall Inquiry review of reviews tells us everything we need to know: 29 reviews, 1,333 recommendations, 42% fully implemented.
Meanwhile, between CQC and MBRRACE, at least 18 organisations have a hand in the maternity pie. When something goes wrong, the world and your uncle come knocking. It is a maddening muddle.
The signal
Every maternity unit publishes dozens of metrics. Billions of data points lying fallow, waiting for someone to build the key that unlocks their value.
At STRASYS, the Decision Intelligence engine for healthcare, we built that key. The Strasys Maternity Index uses 58 measures from national maternity data to generate a signal that says something no individual metric can say on its own: how reliable is this unit's system of care, and is it getting better or worse?
The SMI accounts for three dimensions. Current performance: how a trust is doing now based on clinical, safety, and outcome indicators relative to all England trusts. Twelve-month trend: whether things are improving, static, or deteriorating. Birth volume: how many people are affected. The final score combines all three, giving the clearest picture of maternity risk by quality, trend, and scale.
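The way these three dimensions might combine can be sketched in code. This is a minimal illustrative sketch only: the names, weights, and formula below are assumptions for exposition, not the actual SMI methodology, whose weighting across its 58 measures is not public.

```python
from dataclasses import dataclass

@dataclass
class TrustSnapshot:
    """Illustrative inputs for one trust (field names are hypothetical, not the SMI's)."""
    performance_percentile: float  # 0-100 rank vs all England trusts; higher = better
    trend_12m: float               # 12-month change in performance; negative = deteriorating
    annual_births: int             # birth volume: how many people are affected

def composite_risk(t: TrustSnapshot, max_births: int = 10_000) -> float:
    """Toy composite: poor current performance and a deteriorating trend raise
    risk; birth volume scales the score by the number of people affected.
    Weights (0.6 / 0.4) are arbitrary placeholders."""
    performance_risk = (100 - t.performance_percentile) / 100   # 0 (best) to 1 (worst)
    trend_risk = max(0.0, -t.trend_12m) / 100                   # only deterioration adds risk
    scale = min(1.0, t.annual_births / max_births)              # volume weighting
    return round((0.6 * performance_risk + 0.4 * trend_risk) * scale, 3)

# A low-percentile, deteriorating unit outranks a mid-table, improving one
# of the same size on risk.
worse = TrustSnapshot(performance_percentile=20, trend_12m=-15, annual_births=6000)
better = TrustSnapshot(performance_percentile=55, trend_12m=+8, annual_births=6000)
print(composite_risk(worse) > composite_risk(better))  # True
```

The point of the volume term is the one the article makes: the same quality signal matters more when more mothers and babies pass through the unit.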
This matters because the signal is upstream. It is pre-emptive. It reveals the conditions that make harm more likely before harm happens. Before the serious incident. Before the CQC flag. Before the media story. Before the grieving family.
CQC does not correlate
The Strasys Maternity Index does not correlate with CQC maternity ratings. Not even close. A unit rated Good by CQC can sit deep in the red on the SMI. A unit rated Requires Improvement can be trending green.
This should not surprise anyone. CQC inspects governance processes and management arrangements. The SMI measures the reliability of the clinical system that produces outcomes. They are measuring different things. But only one of them tells you whether mothers and babies are getting safer or less safe right now.
Wes Streeting could choose the terrible ten based on CQC ratings and horror stories. Or he could use data that actually predicts risk. The SMI is objective, fair, and offers a better chance of the best outcomes.
From signal to action
The signal on its own changes nothing. What matters is what the board does with it. Through our work with NHS maternity services, using Clinical Service Review methodology, we help trusts interrogate the metrics in context: finance, workforce, performance, outcomes. The aim is to work out what is driving the signal to dip and flash red, and where to target resources to return the course to reliable, safe, high-quality outcomes.
Naeem Younis, STRASYS CEO, argues that every maternity dashboard and board report should include the SMI signal, tracked over time. Not as a replacement for CQC or MBRRACE, but as the upstream trigger for in-house inquiry that happens months before any external regulator arrives.
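A tracked-over-time signal implies a trigger rule for when the board opens an in-house inquiry. The sketch below is one hypothetical rule, not STRASYS's actual logic: flag when the risk score has worsened for k consecutive reporting periods.

```python
def should_trigger_inquiry(smi_history: list[float], k: int = 3) -> bool:
    """Hypothetical board-level trigger: flag when the tracked risk signal
    has risen (deteriorated) for k consecutive reporting periods.
    The threshold k and the rule itself are illustrative assumptions."""
    if len(smi_history) < k + 1:
        return False  # not enough history to judge a trend
    recent = smi_history[-(k + 1):]
    # deterioration here = risk score rising period on period
    return all(b > a for a, b in zip(recent, recent[1:]))

print(should_trigger_inquiry([0.20, 0.22, 0.27, 0.31]))  # True: three consecutive rises
print(should_trigger_inquiry([0.20, 0.22, 0.21, 0.31]))  # False: the run was broken
```

The design choice is the article's: the trigger fires on direction of travel months before an external regulator would arrive, rather than waiting for a rating or an incident.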
The data exists. The signal exists. The question is whether the system has the will to use it.
See the Maternity Index in action
How we use predictive analytics to support maternity governance.
Key Definitions
- Strasys Maternity Index (SMI)
- A STRASYS product that benchmarks maternity safety risk across all NHS trusts providing maternity services, using 58 measures from national data. Combines current performance, 12-month trend direction, and birth volume into a single signal of system reliability. Does not correlate with CQC ratings.
- Clinical Service Review
- A STRASYS product combining deep data analysis with lived experience to deliver objective, evidence-based insights. In maternity, this means interrogating SMI signals in the context of finance, workforce, and performance to identify what is driving risk and where to target resources.
- Decision Intelligence
- The discipline of converting complex healthcare data into structured, actionable decisions for NHS leaders. STRASYS coined and owns this category in UK healthcare, combining analytics, behavioural science, and systems thinking.
Frequently Asked Questions
What is the Strasys Maternity Index?
The SMI uses 58 measures from national maternity data to benchmark safety risk across every NHS trust providing maternity services in England. It combines current performance, 12-month trend direction, and birth volume into a single score. The result is a signal of system reliability that reveals whether a unit is getting safer or less safe, and how many people are affected.
Does the SMI correlate with CQC maternity ratings?
No. STRASYS analysis shows the SMI does not correlate with CQC maternity ratings. A unit rated Good by CQC can score poorly on the SMI, and vice versa. CQC inspects governance processes. The SMI measures clinical system reliability. They assess different things, but only the SMI provides a forward-looking signal of risk direction.
How should the ten units for intensive scrutiny be chosen?
STRASYS's position is that selection should be based on data that predicts risk, not on CQC ratings, media coverage, or political pressure. The SMI provides an objective, fair basis for identifying which units present the greatest risk to patient safety based on quality, trend, and scale. Choosing based on horror stories helps the units selected but does nothing for the rest.
What should boards do with the SMI signal?
The signal should trigger in-house inquiry and action well before external regulators arrive. Boards should track the SMI over time as part of their maternity dashboard, interrogating the underlying metrics in the context of finance, workforce, and performance to identify what is driving deterioration and where to focus resources.
Why have previous reviews not fixed maternity care?
The Thirlwall Inquiry review of reviews documented 1,333 recommendations across 29 reviews, with only 42% fully implemented. The problem is structural: reviews generate recommendations without accountability for implementation. STRASYS's approach focuses on building the analytical infrastructure that enables continuous monitoring and action rather than post-incident reviews that produce reports filed and forgotten.
This article is adapted from the Friday Fish and Chip Paper, Dr Nadeem Moghal's weekly newsletter on LinkedIn.
Dr Nadeem Moghal
Chief Medical and Innovation Officer