Report a relative proportion of patients ruled out or ruled in by a diagnostic test

One measure of the performance of a diagnostic test is the proportion of patients the test stratifies as likely not to have the condition (ruled out: RO) or likely to have the condition (ruled in: RI). This proportion depends on the prevalence of the disease in the cohort, which means it is not valid to compare tests performed in cohorts with differing prevalences. However, this is exactly what is often done in the discussion sections of academic papers. Usually there is a target sensitivity (t_sn) for the test, in which case the maximum proportion ruled out at that sensitivity is:

```
ROmax=(TN + TP(1/t_sn -1))/(TN + TP/t_sn)
```

Where TN=True Negative, TP=True Positive. Note that ROmax assumes zero False Positives (FP). Also, in the case where t_sn=1 (False Negatives, FN=0), ROmax=1-prevalence.
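As a sketch of the formula above (the counts and target sensitivity here are hypothetical, chosen to illustrate the t_sn=1 special case):

```python
def ro_max(tn, tp, t_sn):
    """Maximum proportion of patients ruled out at target sensitivity t_sn,
    assuming zero false positives. tn, tp are confusion-matrix counts."""
    return (tn + tp * (1 / t_sn - 1)) / (tn + tp / t_sn)

# Hypothetical cohort of 1000 patients, prevalence = 200/1000 = 0.2
tn, tp = 800, 200
print(ro_max(tn, tp, 1.0))   # 0.8, i.e. 1 - prevalence, as expected for t_sn = 1
print(ro_max(tn, tp, 0.90))  # slightly above 0.8: allowing FN lets more be ruled out
```

Relaxing the target sensitivity below 1 permits some false negatives among the ruled-out group, which is why ROmax rises above 1 − prevalence.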

Similarly, if there is a target specificity (t_sp) the maximum proportion ruled-in is:

```
RImax=(TP + TN(1/t_sp -1))/(TP + TN/t_sp)
```

Note, in the case where t_sp=1 (FP=0), RImax=prevalence.
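The rule-in side is symmetric; a sketch with the same hypothetical counts:

```python
def ri_max(tp, tn, t_sp):
    """Maximum proportion of patients ruled in at target specificity t_sp,
    assuming zero false negatives. tp, tn are confusion-matrix counts."""
    return (tp + tn * (1 / t_sp - 1)) / (tp + tn / t_sp)

# Same hypothetical cohort: prevalence = 200/1000 = 0.2
tp, tn = 200, 800
print(ri_max(tp, tn, 1.0))   # 0.2, i.e. prevalence, as expected for t_sp = 1
print(ri_max(tp, tn, 0.95))  # above 0.2: allowing FP lets more be ruled in
```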

Therefore, to allow fairer comparisons between studies, both the measured proportion ruled out and the measured proportion ruled in should be normalised to their maximum possible values:

```
Adjusted proportion ruled-out = measured proportion ruled-out / ROmax
Adjusted proportion ruled-in = measured proportion ruled-in / RImax
```
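Putting the pieces together as a sketch (the measured rule-out proportion of 60% and the target sensitivity of 0.95 are assumed values for illustration only):

```python
def ro_max(tn, tp, t_sn):
    """Maximum proportion ruled out at target sensitivity t_sn (zero FP)."""
    return (tn + tp * (1 / t_sn - 1)) / (tn + tp / t_sn)

# Hypothetical study: 1000 patients, prevalence 0.2, target sensitivity 0.95,
# and an assumed measured rule-out proportion of 60%.
measured_ro = 0.60
adjusted_ro = measured_ro / ro_max(800, 200, 0.95)
print(round(adjusted_ro, 3))  # ~0.748: the test achieved ~75% of its ceiling
```

The adjusted proportion expresses performance relative to the best achievable in that cohort, so it can be compared across cohorts with different prevalences.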

This is in line with a thought @chapdoc and I have tried to bring to a paper (but have been too distracted to do so thus far). See the NPV/sensitivity discussion. We would have thought you want a measurement which reflects underlying prevalence, which is in essence what you've argued here very nicely.

## Thomas Kaier · 26 Jun, 2018

The more I look at this, the more I think I got it wrong. An estimate (ignoring target sensitivity) for the Adjusted RO is measured RO/(1-prevalence), which, in all cases, approximates the specificity. There is a small error, which disappears if we add (1-1/(1-prev))*(1-Sn) to the Adjusted RO. The lesson: we shouldn't be comparing RO percentages at all - only specificity.

## John William Pickering · 2 Jul, 2018
