Bladder Management

Detecting and Investigating UTIs

Detection of a UTI begins with a patient's identification of symptoms; however, in a prospective case review undertaken by Linsenmeyer and Oakley (2003), only 61% (90/147) of patients were able to correctly predict the presence of a UTI based on their symptoms. Other methods of detection include urine chemical dipsticks, which indicate the presence of nitrites and leukocytes with the benefit of a quick turnaround (Faarvang et al. 2000; Hoffman et al. 2004). However, the primary approach and gold standard is microbiological evaluation of urine by bacterial culture. As noted above, organizations such as NIDRR have defined UTIs at least in part on the results of laboratory investigations documenting the presence, amount and type of bacterial growth that occurs with an infection. This also allows identification of the antibiotic(s) to which the bacterial species may be susceptible (i.e., sensitivity). These practices are aligned with the recommendations for data capture developed by the International SCI UTI Basic Data Set working group, which has been endorsed by the International Spinal Cord Society (ISCoS) Scientific Committee and the American Spinal Injury Association (ASIA) Board. Specifically, the data elements include “date of data collection, length of time of sign(s)/symptom(s), results of urine dipstick test for nitrite and leukocyte esterase, urine culture results and resistance pattern” (Goetz et al. 2013). On resistance patterns, it has been noted that 33% of SCI UTIs are polymicrobial (Dow et al. 2004). The clinician must then decide between a limited or full microbial investigation in selecting the appropriate treatment. The obvious benefit of a full microbial investigation (i.e., accuracy) is offset by the potential for adverse effects due to the time delay for bacterial sensitivity results and by the cost of a full investigation.
The studies reviewed in the present section examine specific issues associated with the laboratory investigation of UTIs and how these might impact treatment.

Author Year

Research Design
Total Sample Size

Methods

Outcome
Horton et al. 1998





Population: SCI inpatients.

Intervention: Urine samples were processed within 4 h of sampling (“fresh”) versus after 24 h of refrigeration (“refrigerated”).

Outcome Measures: Cultures, colony counts, urinalysis.

1.     No significant difference between fresh and refrigerated samples in:

·       White blood cells (p=0.724),

·       Number of bacteria (p=0.440),

·       Leukocytes (p=0.782),

·       Colony counts of E. faecalis & Pseudomonas (p=0.317), E. coli, Citrobacter, Streptococcus, yeast, or Acinetobacter (p=1.0).

2.     Significant difference between fresh & refrigerated samples with colony counts <50,000: “mixed” organisms (p=0.010).

3.     S. aureus showed a trend only (p=0.066).

4.     No culture/colony count changes with up to 24 h of refrigeration that would alter treatment or produce clinically significant changes in urinalysis/culture results.

Darouiche et al. 1997 USA

RCT (study 1)

Pre-Post (study 2)


Study 1 N=45/40

Study 2 N=12

Population: SCI with symptomatic polymicrobial urinary tract infection (UTI): Age range 23-84 yr; Gender: males=57, females=0.

Intervention: Limited versus full microbiological investigation for management of symptomatic polymicrobial UTI (limited=cultures for specific organisms not used to guide antibiotic selection).

Outcome Measures: Clinical improvement following symptomatic UTI (defined as presence of bacteria plus one symptom) by 4 d after antibiotic treatment; time to start antibiotic; cost of entire therapy and lab tests.

1.   No difference in therapy response between full versus limited approach (95% versus 85%, p=0.4);

2.   Limited approach antibiotic initiation was earlier, at 1.2±1.4 d versus 3.3±2.5 d for the full approach (p=0.01);

3.   A higher proportion of people in the limited group required no change in initial antibiotic than with the full approach, 85% versus 33% (p=0.006);

4.   Recurrence at 1 mo due to at least 1 of the originally infecting species was similar for both groups (p=1.0);

5.   Costs for limited investigation-directed therapy were lower at $157±$174 versus $252±$237 for the full approach, indicative of a trend but not significantly different (p=0.18).

Shah et al. 2005


Prospective Controlled Trial


Population: SCI inpatients with indwelling/suprapubic catheter suspected of having a urinary tract infection (UTI); Control group=41: Mean age: 55.6 yr; Treatment group=44: Mean age: 64.1 yr.

Intervention: Individuals were admitted to two spinal cord units: 1) one continued the routine practice of examining urine samples without replacing the catheter (Control group); versus 2) one in which nurses replaced the catheter before obtaining urine samples for urinary analysis (Treatment group).

Outcome Measures: Prevalence of organisms, types of organisms, laboratory costs.

1.   More clinically significant organisms (≥10⁵ cfu/mL) were found in those whose catheter was not changed versus changed (89/41 individuals versus 60/44 individuals, p=0.01).

2.   Fewer non-clinically significant organisms (<10⁵ cfu/mL) were found in those whose catheter was not changed versus changed (4/41 individuals versus 19/44 individuals, p=0.01).

3.   The changed-catheter group had significantly fewer multidrug-resistant organisms than the control group (p<0.001).

4.   The changed versus unchanged catheter approach resulted in a total cost reduction of $15.64 per individual.

Hoffman et al. 2004




Population: SCI >6 mo post-injury who reported recurrent urinary tract infections (UTIs); Mean age: 38.86 yr; Gender: males=42, females=14; Level of injury: tetraplegia=34.

Intervention: Comparison of dipstick results for nitrites and leukocyte esterase (LE) to urine culture results.

Outcome Measures: Urine samples (nitrite, leukocyte esterase (LE), and culture) and self-report of symptoms, urinary tract infection (UTI) presence and treatment, collected monthly over 3 yr and then for only 10 mo over the last 2 yr; sensitivity, specificity, undertreatment and overtreatment were assessed.

1.     In predicting significant bacteriuria, when either nitrites or LE separately or both were positive there was a low sensitivity (0.63) and overtreatment rate (0.04) with a high specificity (0.89) and undertreatment rate (0.96).

2.     In predicting National Institute on Disability Rehabilitation Research-based UTI, when either nitrites or LE separately or both were positive there was a low sensitivity (0.64) but also a low specificity (0.52), resulting in higher overtreatment (0.66) and lower undertreatment (0.22) rates than seen with bacteriuria prediction; overall, results suggest using dipstick testing as a treatment guide could result in overtreatment rates of 70% and low rates of undertreatment.

Faarvang et al. 2000




Population: Individuals with spinal cord lesion admitted to an inpatient SCI program.

Intervention: 256 morning urine samples were collected from individuals using a standardized ‘clean’ technique. Analysis was conducted with chemical dipstick and microscopy within 3 hr.

Outcome Measures: Prevalence of bacteria compared with nitrite/leukocyte dipstick tests (positive and negative predictive values), types of bacteria.

1.     The authors suggested that results comparing positive and negative predictive values indicated that the dipstick and microscopy tests are both equally valuable.

2.     Negative predictive value was ~0.7 and positive predictive value was ~0.9.

3.     128 out of 256 urine samples contained significant bacteriuria.

4.     Only 87 contained just one microorganism.

Massa et al. 2009


Case Series


Population: SCI: Gender: males=34, females=17; Catheterization type: standard catheter=26, hydrophilic catheter=25; Level of injury: cervical=19, thoracic=26, lumbar=5, sacral=1.

Intervention: Analysis of monthly urine culture and urinalysis data, as well as a self-reported questionnaire on urinary tract infection (UTI) signs and symptoms.

Outcome Measures: Presence of UTI, defined as bacteriuria with a colony count of at least 10⁵ colony-forming units/mL plus at least 1 sign or symptom of UTI.

1.     “Cloudy urine” had the highest accuracy (83.1%); “leukocytes in the urine” had the highest sensitivity (82.8%); “fever” had the highest specificity (99.0%), but had low sensitivity (6.9%).

2.     Subjects were able to predict their own UTI with an accuracy of 66.2% and the negative predictive value (82.8%) was substantially higher than the positive predictive value (32.6%).

Tantisiriwat et al. 2007


Case Series


Population: SCI hospitalized at Rehabilitation Center (Thai Red Cross Society); Mean age: 44.7 yr; Gender: males=50, females=26; Type of neurogenic bladder: detrusor overactivity=39, detrusor underactivity=9.

Intervention: Retrospective chart review to assess urinary tract infection (UTI) prevalence, causative bacteria and susceptibility patterns.

Outcome Measures: Prevalence of UTI, causative bacteria, susceptibility to antibiotic.

1.   Prevalence of UTI was higher in individuals with neurogenic detrusor overactivity (97.14%) than underactivity (66.67%).

2.   Of urine cultures performed in 41/68 episodes of UTI, 39 were positive, with E. coli (74.4%), K. pneumoniae (12.8%), Enterococcus faecalis (5%) and Proteus mirabilis (5%) most common.

3.   E. coli was most susceptible to amikacin (96.1%), ceftazidime (88.9%), and ceftriaxone (75%).

4.   K. pneumoniae was most susceptible to ceftazidime (80.0%), ceftriaxone (80.0%), amikacin (60.0%) and cotrimoxazole (60%).

Esclarin De Ruz et al. 2000


Case Series


Population: SCI: Mean age: 32.0±14.5 yr; Gender: males=100, females=28.

Intervention: Prospective examination of individuals for 38 mo to identify potential risk factors for UTIs.

Outcome Measures: Associated factors, methods of urinary drainage, bladder type, urological complications, Functional Independence Measure (FIM) and predisposing factors.

1.     Risk factors associated with urinary tract infection were invasive procedures without antibiotic prophylaxis, cervical injury and chronic catheterization.

2.     Risk factors associated with repeat infection were a FIM score less than 74 and vesicoureteral reflux.


Understanding risk factors may be the simplest method of initial recognition and management of UTI. Esclarin De Ruz et al. (2000) prospectively followed 128 SCI patients for 38 months. Logistic regression modeling was performed on demographic characteristics, associated factors, urinary drainage methods, type of bladder dysfunction, urological complications and predisposing factors of each infection episode. The results showed that individuals who were completely dependent (FIM score <74) and who had vesicoureteral reflux were at the highest risk for UTI.

Beyond functional characteristics (e.g., FIM scores and type of bladder dysfunction), Massa et al. (2009) found that objective UTI signs and symptoms were superior predictors compared to a patient’s own subjective impression of their signs and symptoms. The presence of “cloudy urine” had the highest accuracy (83.1%), and a positive dipstick test for the presence of leukocytes had the highest sensitivity (82.8%; the highest rate of true positive results). Although the presence of fever had the highest specificity (99.0%), its sensitivity for UTI was very low (6.9%). The authors concluded from this prospective cohort that basic objective measures such as cloudy urine and positive dipstick results were better at predicting UTI than the patients themselves.

However, once a UTI is detected, laboratory investigation using microbiological analysis of urine cultures is important both for confirming the UTI and for guiding treatment. For example, Shah et al. (2005), Hoffman et al. (2004) and Tantisiriwat et al. (2007), reporting centre-based results under a variety of study designs, noted Enterococcus species, Klebsiella pneumoniae, Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus and Proteus mirabilis as among the most common species of bacteria present in urine from those suspected of having a UTI. Antibiotic sensitivity tests are then conducted to determine whether these bacteria are susceptible to specific antibiotics. For example, Tantisiriwat et al. (2007) noted that of the antibiotics tested, E. coli was most susceptible to amikacin (96.1%), ceftazidime (88.9%), and ceftriaxone (75%). The efficacy of specific antibiotics investigated in the SCI literature is summarized in subsequent sections.

Given the cost and the time required before results can be obtained with bacterial culture (e.g., 18-48 hours), simpler screening methods have been developed for assessing the presence of a UTI. One of these methods involves a urine “dipstick” that indicates the presence of nitrites or leukocyte esterase as a potential indicator of UTI. The results of investigations into the sensitivity and specificity of dipstick tests in predicting UTI in patient populations other than SCI have been mixed. Hoffman et al. (2004) compared dipstick results for nitrites and leukocyte esterase to urine culture results, with each test conducted monthly over a 5 year period in a community-based SCI sample (n=56). Using NIDRR criteria for UTI, 81% of the 695 samples collected over the study period met criteria for bacteriuria, and of these, 36% met criteria for a positive UTI. In general, sensitivity (i.e., the ability to correctly identify significant results) was relatively low at 63% even when either the leukocyte esterase or nitrite dipstick was positive; specificity (i.e., the ability to correctly identify samples without significant bacteria) was 89% or higher for any combination of tests. For predicting UTIs, dipstick sensitivity remained relatively low at 63% and specificity was also low at 52% for any combination of dipstick tests. Overall, the results suggest that using dipstick testing as a treatment guide could result in inappropriate or delayed treatment, and the study authors suggested that individuals with SCI with suspected UTI should be evaluated with urine culture and not dipstick testing (Hoffman et al. 2004).
However, a separate investigation comparing positive and negative predictive values for dipstick testing versus leukocyte microscopy, relative to culture-derived bacteriuria, determined that either method was equally effective, with reasonable prediction rates of approximately 80% for each alone or in combination (Faarvang et al. 2000).
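The sensitivity, specificity, predictive values and accuracy figures discussed above all derive from the same 2×2 table of screening-test result against culture-confirmed UTI. As a minimal sketch of how these metrics are computed (the counts below are hypothetical, not data from any of the studies reviewed):

```python
def screening_metrics(tp, fp, fn, tn):
    """Compute standard screening-test metrics from 2x2 table counts:
    tp/fp = test-positive with/without confirmed UTI,
    fn/tn = test-negative with/without confirmed UTI."""
    return {
        "sensitivity": tp / (tp + fn),            # true positives among all with UTI
        "specificity": tn / (tn + fp),            # true negatives among all without UTI
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical example: 100 samples, 40 culture-positive, 35 test-positive.
m = screening_metrics(tp=25, fp=10, fn=15, tn=50)
print({k: round(v, 2) for k, v in m.items()})
# → {'sensitivity': 0.62, 'specificity': 0.83, 'ppv': 0.71, 'npv': 0.77, 'accuracy': 0.75}
```

Note that predictive values, unlike sensitivity and specificity, shift with the prevalence of UTI in the sample, which is one reason dipstick performance varies across the populations studied.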

Practicality and cost savings in UTI prevention and treatment may not have been the prime motive in an investigation by Darouiche et al. (1997), but they did find that an adequate clinical response to treatment did not differ significantly between limited and full microbial investigation. Limited investigations examined colony morphology, appearance on Gram stain, and catalase and oxidase tests, without organism identification or antibiotic susceptibilities; antibiotic selection was instead based on recognized hospital-based patterns of antibiotic susceptibility. As well, the cost savings of the limited versus full investigation, an average of $183 US per patient, was not statistically significant but indicated a trend (p=0.18). Although this provides good evidence in favour of deferring to a limited microbial investigation for SCI UTI treatment selection, the sample size was small (N=15) and warrants further study. It is also unclear from this study whether the results are transferable to settings other than an inpatient hospital unit (i.e., community-based patients) and whether treatment success depended in part on the experience of the clinical team.

The results of clinical laboratory analysis are also prone to contamination arising from a variety of practical issues. For example, the extent of sample deterioration between the time of sampling and processing is controversial. Horton et al. (1998) conducted a blinded RCT to investigate the effects of refrigeration on urinalysis and culture results. Samples were split and analyzed within 4 hours of collection (“fresh”) and after 24 hours of refrigeration (“refrigerated”). The bacterial counts of “mixed” organisms were altered with refrigeration (p=0.010), with a trend for Staphylococcus aureus (p=0.066), but no changes in colony counts would have altered the treatment regimen chosen based on urinalysis or culture results. This study provides a level of confidence for urine samples refrigerated (up to 24 hours) prior to analysis.

In another investigation of a narrower issue involving potential contamination, Shah et al. (2005) demonstrated that the number of clinically significant organisms (≥10⁵ cfu/mL) detected by urine culture was reduced in SCI inpatients with indwelling or suprapubic catheters suspected of having a UTI when the catheter was changed just prior to urine collection, as compared to those where it was left unchanged (p=0.01). This practice also resulted in a savings of $15.64 per patient.


There is level 4 evidence (from one case series study: Esclarin De Ruz et al. 2000) that patients with SCI who are completely dependent (FIM<74) or who have vesicoureteral reflux are at highest risk for UTI.

There is level 4 evidence (from one case series study: Massa et al. 2009) that the presence of cloudy urine or a positive urine dipstick test are better predictors of UTI compared with the patient’s own subjective impression of their own signs and symptoms.

There is conflicting level 4 evidence (from two pre-post studies: Hoffman et al. 2004; Faarvang et al. 2000) concerning whether dipstick testing for nitrites or leukocyte esterase is recommended to guide treatment decision-making.

There is level 1b evidence (from one RCT: Darouiche et al. 1997) that both limited and full microbial investigation result in adequate clinical response to UTI treatment with antibiotics. Therefore the cost savings attributed to a limited microbial investigation favours this practice in the investigation of UTI although more rigorous investigation of the patient outcomes and attributed costs is needed.

There is level 1b evidence (from one RCT: Horton et al. 1998) that refrigeration (up to 24 hours) of urine samples prior to sample processing does not significantly alter urinalysis or urine culture results in SCI patients.

There is level 2 evidence (from one prospective controlled trial study: Shah et al. 2005) that fewer false positive tests showing bacteriuria occur if indwelling or suprapubic catheters are changed prior to collection for urine culture analysis.
