Sunday, 6 December 2020

False Negatives

This week, "false negatives" has become a trending phrase.

A negative test, whether or not a person has symptoms, does not guarantee that they are not infected by the virus. How we respond to, and interpret, a negative test matters because we place others at risk when we assume the test is perfect: a person who is infected despite a negative result can still spread the virus.

Using RT-PCR [reverse transcription polymerase chain reaction] test results, along with the reported time of exposure to the virus or the time of onset of measurable symptoms such as fever, cough and breathing problems, researchers have estimated the probability that someone infected with SARS-CoV-2 would nevertheless return a negative test result.
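To make the idea concrete, here is a minimal sketch in Python, with purely illustrative sensitivity values that are not the researchers' estimates: the false-negative probability is simply one minus the test's sensitivity, and for RT-PCR that sensitivity changes with the number of days since exposure.

```python
# A minimal sketch (hypothetical numbers, not the study's estimates) showing how
# the probability of a false negative follows from test sensitivity, which for
# RT-PCR varies with the number of days since exposure.

# Assumed RT-PCR sensitivity by day since exposure (illustrative values only).
assumed_sensitivity_by_day = {
    1: 0.05,   # very early infection: most infected people still test negative
    4: 0.60,
    8: 0.80,   # around peak detectability
    14: 0.70,
    21: 0.50,
}

for day, sensitivity in assumed_sensitivity_by_day.items():
    false_negative_probability = 1 - sensitivity
    print(f"Day {day:2d} after exposure: "
          f"P(negative test | infected) ≈ {false_negative_probability:.0%}")
```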

Scientists have warned that the government’s multi-million-pound plan to mass test everyone in Liverpool for Covid-19, in a bid to bring the virus under control, may be fundamentally flawed because the tests may not be accurate enough. The rapid lateral flow tests set to be used in the scheme are so unproven that they come with a manufacturer’s warning: “Negative results do not rule out infection”. Lateral flow tests are just one of three kinds of test being used in the pilot but, because they have the quickest turnaround time and are especially easy to use, they are expected to make up the large majority of those carried out.

Diagnostic tests [typically involving a nasopharyngeal swab] can be inaccurate in two ways. A false positive result erroneously labels a person infected, with consequences including unnecessary quarantine and contact tracing. False negative results are more consequential, because infected persons [who might be asymptomatic] may not be isolated and can infect others.
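The scale of the false-negative problem depends on how common infection is among the people being tested. The sketch below, in Python with assumed example figures of my own choosing, shows how the share of negative results that come from infected people follows from sensitivity, specificity and the pre-test probability of infection.

```python
# A minimal sketch, with assumed example values, of how the share of negative
# results that are false negatives depends on test sensitivity and on how
# common infection is in the tested population (the pre-test probability).

def false_negatives_among_negatives(sensitivity, specificity, prevalence):
    """Probability that a person who tests negative is actually infected."""
    false_neg = prevalence * (1 - sensitivity)    # infected but testing negative
    true_neg = (1 - prevalence) * specificity     # not infected and testing negative
    return false_neg / (false_neg + true_neg)

# Illustrative values only: a test with 70% sensitivity and 99.7% specificity,
# used in a population where 2% of those tested are actually infected.
p = false_negatives_among_negatives(sensitivity=0.70, specificity=0.997, prevalence=0.02)
print(f"About {p:.1%} of negative results would come from infected people.")
```

Even a small proportion matters at the scale of a city-wide testing programme, because every one of those negatives may be taken as licence to stop isolating.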

Interpretation of a test result depends not only on the characteristics of the test itself but also on the pre-test probability of disease. Clinicians use a heuristic [a learned mental short cut] called anchoring and adjusting to settle on a pre-test probability [called the anchor].

They then adjust this probability based on additional information. This heuristic is a useful short cut but comes with the potential for bias. When people fail to estimate the pre-test probability and only respond to a piece of new information, they commit a fallacy called base-rate neglect.

Another fallacy, called anchoring, is failing to adjust one’s probability estimate adequately given the strength of new information. Likelihood ratios can give a clinician an idea of how much to adjust their probability estimates. Clinicians use anchoring and adjusting intuitively in everyday clinical practice to estimate pre- and post-test probabilities. However, faced with a new and unfamiliar disease such as covid-19, these mental short cuts can be uncertain and unreliable, and public narrative about the definitive nature of testing can skew perceptions.
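As an illustration of the adjustment step, the sketch below (in Python, with example numbers that are mine rather than from any cited article) applies Bayes’ rule in odds form: the pre-test probability is the anchor, and the likelihood ratio of the test result determines how far to adjust it.

```python
# A minimal sketch, with assumed example numbers, of "anchor and adjust"
# arithmetic using a likelihood ratio: convert the pre-test probability to odds,
# multiply by the likelihood ratio for the observed result, convert back.

def post_test_probability(pre_test_probability, likelihood_ratio):
    """Update a pre-test probability with a likelihood ratio (Bayes in odds form)."""
    pre_test_odds = pre_test_probability / (1 - pre_test_probability)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

# Illustrative values: a symptomatic contact with a 40% pre-test probability
# (the anchor) returns a negative test whose negative likelihood ratio is
# assumed to be 0.3, i.e. (1 - sensitivity) / specificity for that test.
p = post_test_probability(pre_test_probability=0.40, likelihood_ratio=0.3)
print(f"Post-test probability of infection after a negative result: {p:.0%}")
```

With these assumed numbers the post-test probability is still around 17%, which is the point of the heuristic: when the anchor is high, a single negative result should not be adjusted all the way down to "not infected".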

We draw several conclusions. First, diagnostic testing will help in safely opening the country, but only if the tests are highly sensitive and validated under realistic conditions against a clinically meaningful reference standard. Second, the FDA should ensure that manufacturers provide details of tests’ clinical sensitivity and specificity at the time of market authorization; tests without such information will have less relevance to patient care.
