Detecting and Investigating UTIs
A patient’s recognition of his or her own symptoms is a critical first step in detecting a UTI; however, in a prospective case review by Linsenmeyer and Oakley (2003), only 61% (90/147) of patients correctly predicted the presence of a UTI based on their symptoms. Other methods of detection include urine chemical dipsticks, which indicate the presence of nitrites and leukocytes and offer a quick turnaround (Faarvang et al. 2000; Hoffman et al. 2004). However, the primary approach and gold standard is microbiological evaluation of a urine bacterial culture. As noted above, organizations such as NIDRR have defined UTIs at least in part on the results of laboratory investigations documenting the presence, amount and type of bacterial growth that occurs with an infection. Culture also identifies the antibiotic(s) to which the bacterial species may be susceptible (i.e., sensitivity). These practices are aligned with the recommendations for data capture developed by the International SCI UTI Basic Data Set working group, whose work has been endorsed by the International Spinal Cord Society (ISCoS) Scientific Committee and the American Spinal Injury Association (ASIA) Board. Specifically, the data elements include “date of data collection, length of time of sign(s)/symptom(s), results of urine dipstick test for nitrite and leukocyte esterase, urine culture results and resistance pattern” (Goetz et al. 2013). With respect to resistance patterns, it has been noted that 33% of SCI UTIs are polymicrobial (Dow et al. 2004). The clinician must then decide between a limited or full microbial investigation when selecting the appropriate treatment. The obvious benefit of a full microbial investigation (i.e., accuracy) is offset by potential adverse effects due to the time delay while awaiting bacterial sensitivity results and by the cost of a full investigation. The studies reviewed in the present section examine specific issues associated with the laboratory investigation of UTIs and how these might impact treatment.
Discussion
Understanding risk factors may be the simplest starting point for the recognition and management of UTI. Esclarin de Ruz et al. (2000) prospectively followed 128 SCI patients for 38 months. Logistic regression modeling was performed on demographic characteristics, associated factors, urinary drainage methods, type of bladder dysfunction, urological complications and predisposing factors of each infection episode. The results showed that individuals who were completely dependent (FIM score <74) and who had vesicoureteral reflux were at the highest risk for UTI.
Beyond functional characteristics (e.g., FIM scores and type of bladder dysfunction), Massa et al. (2009) found that UTI signs and symptoms were superior predictors of infection compared with patients’ own subjective impression of whether they had a UTI. The presence of “cloudy urine” had the highest accuracy (83.1%), and a positive dipstick test for leukocytes had the highest sensitivity (82.8%, i.e., the highest rate of true positive results). Although the presence of fever had the highest specificity (99.0%), its sensitivity for UTI was very low (6.9%). The authors concluded from this prospective cohort that basic objective measures such as cloudy urine and positive dipstick results were better at predicting UTI than the patients themselves.
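For reference, the diagnostic metrics cited here and below follow the standard contingency-table definitions (these are generic epidemiological formulas provided for clarity, not calculations reproduced from Massa et al. 2009):

$$\text{Sensitivity} = \frac{TP}{TP+FN}, \qquad \text{Specificity} = \frac{TN}{TN+FP}, \qquad \text{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}$$

where TP, FP, TN and FN are true positive, false positive, true negative and false negative results judged against the reference standard (urine culture). By these definitions, a sign such as fever, with very high specificity but very low sensitivity, rarely mislabels uninfected patients but misses most true UTIs.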
However, once a UTI is suspected, laboratory investigation using microbiological analysis of urine cultures is important both for confirming the UTI and for guiding treatment. For example, Shah et al. (2005), Hoffman et al. (2004) and Tantisiriwat et al. (2007), reporting centre-based results under a variety of study designs, noted Enterococcus species, Klebsiella pneumoniae, Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus and Proteus mirabilis as among the most common species of bacteria present in urine from those suspected of having a UTI. Antibiotic sensitivity tests are then conducted to determine whether these bacteria are susceptible to specific antibiotics. For example, Tantisiriwat et al. (2007) noted that of the antibiotics tested, E. coli was most susceptible to amikacin (96.1%), ceftazidime (88.9%) and ceftriaxone (75%). The efficacy of specific antibiotics investigated in the SCI literature will be summarized in subsequent sections.
Given the cost of bacterial culture and the time required to obtain results (e.g., 18-48 hours), simpler screening methods have been developed for assessing the presence of a UTI. One such method involves a urine “dipstick”, which signals the presence of nitrites or of leukocyte esterase as a potential indicator of UTI. Investigations of the sensitivity and specificity of dipstick tests in predicting UTI in patient populations other than SCI have yielded mixed results. Hoffman et al. (2004) compared dipstick results for nitrites and leukocyte esterase to urine culture results, with each test conducted monthly over a 5-year period in a community-based SCI sample (n=56). Using NIDRR criteria for UTI, 81% of the 695 samples collected over the study period met criteria for bacteriuria, and of these, 36% met criteria for a positive UTI. In general, sensitivity (i.e., the ability to correctly identify significant results) was relatively low at 63% even when either the leukocyte esterase or nitrite dipstick was positive; specificity (i.e., the ability to correctly identify samples without significant bacteria) was 89% or higher for any combination of test. For predicting UTI specifically (rather than bacteriuria), dipstick sensitivity remained relatively low at 63% and specificity was also low at 52% for any combination of dipstick test. Overall, these results suggest that using dipstick testing as a treatment guide could result in inappropriate or delayed treatment, and the study authors recommended that individuals with SCI with suspected UTI be evaluated with urine culture rather than dipstick testing (Hoffman et al. 2004). However, a separate investigation comparing the positive and negative predictive values of dipstick testing and of leukocyte microscopy relative to culture-confirmed bacteriuria determined that either method was equally effective, with reasonable prediction rates of approximately 80% for each method alone or in combination (Faarvang et al. 2000).
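The positive and negative predictive values reported by Faarvang et al. (2000) are likewise standard measures (the formulas below are generic and are given for clarity; only the approximate 80% figure comes from that study):

$$PPV = \frac{TP}{TP+FP}, \qquad NPV = \frac{TN}{TN+FN}$$

Unlike sensitivity and specificity, predictive values vary with the prevalence of bacteriuria or UTI in the sample tested, which should be kept in mind when comparing results across cohorts.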
Practicality and cost savings in UTI prevention and treatment may not have been the prime motive in an investigation by Darouiche et al. (1997), but the authors did find that the adequacy of the clinical response to treatment did not differ significantly between a limited and a full microbial investigation. Limited investigations consisted of examining colony morphology, appearance on Gram stain, and catalase and oxidase tests, without organism identification or antibiotic susceptibility testing; antibiotic selection was instead based on recognized hospital-based patterns of antibiotic susceptibility. The associated cost saving, averaging US$183 per patient, was not statistically significant but suggested a trend (p=0.18) favouring the limited over the full investigation. Although this provides good evidence in favour of deferring to a limited microbial investigation when selecting SCI UTI treatment, the sample size was small (N=15) and warrants further study. It is also unclear from this study whether the results are transferable to settings other than an inpatient hospital unit (i.e., to community-based patients) and to what extent treatment selection relied on the experience of the clinical team.
The results of clinical laboratory analysis are also vulnerable to a variety of practical issues. For example, the extent to which samples deteriorate between collection and processing is controversial. Horton et al. (1998) conducted a blinded RCT to investigate the effects of refrigeration on urinalysis and culture results. Samples were split and analyzed at 4 hours (“fresh”) and after 24 hours of refrigeration (“refrigerated”). The bacterial counts of “mixed” organisms (p=0.10) and Staphylococcus aureus (p=0.66) were altered with refrigeration, but none of the changes in colony counts would have altered the treatment regimen chosen based on urinalysis or culture results. This study provides some confidence that urine samples can be refrigerated for up to 24 hours prior to analysis.
In another investigation of a narrower issue involving potential contamination, Shah et al. (2005) demonstrated that the number of clinically significant organisms (≥10⁵ cfu/mL) detected by urine culture was reduced in SCI inpatients with indwelling or suprapubic catheters (SPC) suspected of having a UTI when the catheter was changed just prior to urine collection, as compared to when it was left unchanged (p=0.01). This practice also resulted in a saving of $15.64 per patient.
Conclusion
There is level 4 evidence (from one case series study: Esclarin de Ruz et al. 2000) that patients with SCI who are completely dependent (FIM<74) or who have vesicoureteral reflux are at the highest risk for UTI.
There is level 4 evidence (from one case series study: Massa et al. 2009) that the presence of cloudy urine or a positive urine dipstick test is a better predictor of UTI than the patient’s own subjective impression of his or her signs and symptoms.
There is conflicting level 4 evidence (from two pre-post studies: Hoffman et al. 2004; Faarvang et al. 2000) concerning whether dipstick testing for nitrites or leukocyte esterase can be recommended to guide treatment decision-making.
There is level 1b evidence (from one RCT: Darouiche et al. 1997) that both limited and full microbial investigations result in an adequate clinical response to UTI treatment with antibiotics. The cost savings attributed to a limited microbial investigation therefore favour this practice in the investigation of UTI, although more rigorous investigation of patient outcomes and attributed costs is needed.
There is level 1b evidence (from one RCT: Horton et al. 1998) that refrigeration (up to 24 hours) of urine samples prior to sample processing does not significantly alter urinalysis or urine culture results in SCI patients.
There is level 2 evidence (from one prospective controlled trial: Shah et al. 2005) that fewer false positive tests showing bacteriuria occur if indwelling or suprapubic catheters are changed prior to urine collection for culture analysis.