9.16 Metoprolol Improves Survival in Severe Traumatic Brain Injury Independent of Rate Control

B. Zangbar1, P. Rhee1, B. Joseph1, N. Kulvatunyou1, I. Ibrahim-zada1, A. Tang1, G. Vercruysse1, R. S. Friese1, T. O’Keeffe1  1University Of Arizona,Trauma/Surgery/Medicine,Tucson, AZ, USA

Introduction:  Multiple prior studies have suggested an association between beta-blocker administration and survival in patients with severe traumatic brain injury (TBI). However, it is unknown whether this benefit of beta-blockers depends on heart rate control. The aim of this study was to assess whether rate control affects survival in patients with severe TBI receiving metoprolol.

Methods:  We performed a 7-year retrospective analysis of all blunt TBI patients at a level 1 trauma center. Patients >16 years of age with a head abbreviated injury scale (AIS) score of 4 or 5 who were admitted to the ICU from the operating room (OR) or emergency room (ER) were included. Patients were stratified into two groups: metoprolol (BB) and no beta-blockers (NBB). Using propensity score matching, we matched patients in the two groups in a 1:1 ratio, controlling for age, gender, race, admission vital signs, Glasgow coma scale (GCS), injury severity score (ISS), average heart rate monitored during ICU admission, and standard deviation of heart rate during ICU admission. Our primary outcome measure was survival.
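
For readers interested in the mechanics, the following is a minimal sketch of 1:1 propensity matching in Python. The abstract does not specify the matching algorithm or software, so this shows one common approach (greedy nearest-neighbor matching on the logit of the propensity score, without replacement); the DataFrame column names are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def match_1_to_1(df, treat_col="bb", covariates=("age", "gcs", "iss", "hr_mean", "hr_sd")):
    """Greedy 1:1 nearest-neighbor match on the logit of the propensity score."""
    X, t = df[list(covariates)].to_numpy(), df[treat_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    treated = np.flatnonzero(t == 1)
    controls = list(np.flatnonzero(t == 0))
    pairs = []
    for i in treated:  # assumes more controls than treated, as in this cohort
        j = min(controls, key=lambda c: abs(logit[c] - logit[i]))
        pairs.append((i, j))
        controls.remove(j)  # match without replacement
    return pairs  # row positions of (treated, matched control)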

Results: Overall, 914 patients met inclusion criteria, of whom 189 received beta-blockers. A propensity-matched cohort of 356 patients (178 BB and 178 NBB) was included, as shown below. Patients receiving metoprolol had higher survival than patients who did not receive beta-blockers, despite no difference in mean heart rate or heart rate variability.

Conclusion: Our study shows an association between metoprolol administration and improved survival in patients with severe TBI, and this effect appears to be independent of any reduction in heart rate. We suggest that metoprolol should be administered to all severe TBI patients regardless of any perceived beta-blockade effect on heart rate.
 

9.17 Damage Control Resuscitation is Associated with Increased Survival after Severe Splenic Injury

E. A. Taub1, B. Shrestha1, B. Tsang1, B. A. Cotton1, C. E. Wade1, J. B. Holcomb1  1University Of Texas Health Science Center At Houston,Houston, TX, USA

Introduction:  A growing body of data has shown that Damage Control Resuscitation (DCR), employing low-volume, balanced resuscitation, is associated with improved survival in severely injured patients. However, little attention has been paid to outcomes by specific organ injury. We wanted to determine whether implementation of DCR has improved survival among patients with severe blunt splenic injury.

Methods:  Following IRB approval, a retrospective study was performed on all adult trauma patients with severe, blunt splenic injury admitted to our center between 01/2005-12/2012. Severe splenic injury was defined as AAST grade IV or V. Our center adopted and employed DCR principles in 2009. Therefore, patients were stratified into two groups: pre-DCR (2005-2008) and DCR (2009-2012). Patients who died before leaving the emergency department (ED) were excluded. Outcomes (resuscitation products used and survival) were then compared by univariate analysis. A purposeful regression model was then constructed to identify independent predictors of mortality.

Results: Between 2005-2012 there were 29,801 trauma admissions, with 224 patients 18 years of age or older who sustained blunt AAST grade IV or V splenic injuries. Of these, 206 patients survived to leave the ED and made up the study group; 83 pre-DCR and 123 DCR patients. The groups were similar in demographics and prehospital and ED vital signs. However, DCR patients had higher abdominal AIS scores (median 4 vs. 4; p=0.050). While arrival physiology and base deficit were similar, DCR patients had a higher aPTT (median 28.2 vs. 26.4; p=0.017) and a lower initial platelet count (median 223 vs. 246; p=0.019). DCR patients received more plasma (median 2 vs. 0 U; p<0.001) and less crystalloid (median 0.1 vs. 1.0 L; p<0.001) while in the ED. Splenectomy rates were higher, but not statistically significantly so, in DCR patients (58 vs. 47%; p=0.103). DCR patients received less RBC (median 2 vs. 6 U), plasma (median 2 vs. 4 U), platelets (median 0 vs. 0 U), and crystalloid (median 1.0 vs. 3.1 L) in the operating room; all p<0.05. While there were no differences in ICU complications, mortality was lower, but not statistically significantly so, in the DCR group (10 vs. 19%; p=0.106). Multiple logistic regression demonstrated that DCR was an independent predictor of decreased mortality (odds ratio 0.05, 95% C.I. 0.006-0.341; p=0.003). In addition, this same model (controlling for age, abdominal AIS, and admission platelet count) found that DCR was not associated with an increased likelihood of splenectomy (odds ratio 1.29, 95% C.I. 0.693-2.431, p=0.419).

Conclusion: In patients with severe splenic injury, implementation of DCR was associated with a 95% reduction in the odds of mortality at our facility.

9.18 Rates of Pseudoaneurysm in Non-Operative Management of Splenic Injuries

C. Morrison1, J. C. Lee1, K. Rittenhouse1, M. Kauffman1, B. W. Gross1, F. Rogers1  1Lancaster General Hospital,Trauma,Lancaster, PA, USA

Introduction: The use of angiography has been associated with lower rates of failed non-operative splenic injury management. To date, the rate of splenic artery pseudoaneurysm in these patients remains unclear. We sought to determine the rate of asymptomatic vascular injury in patients with splenic injury managed without operation or angiographic embolization, and the outcomes of patients managed with and without re-imaging.

Methods: Patients undergoing splenic injury management with and without surgical intervention or angiographic embolization from 2011 to 2014 were queried from the trauma registry of a Pennsylvania-verified, level II trauma center. Patients were routinely re-imaged as part of our practice. Penetrating trauma and immediate operative intervention were excluded from our analysis. Splenic injuries were classified according to American Association for the Surgery of Trauma (AAST) guidelines by an attending radiologist or senior trauma surgeon. Rates of repeat imaging, subsequent embolization and re-bleeding, and diagnosis of pseudoaneurysm were determined.

Results: A total of 132 patients met the inclusion criteria, of whom 72.7% were managed non-operatively (N=96) and 27.3% operatively (N=36). Within the non-operative population, eight pseudoaneurysms were found: three on initial scans and five on repeat scans. Among patients managed non-operatively, the rate of re-imaging was 39.58% (N=38); of angioembolization, 22.92% (N=22); and of readmission, 10.41% (N=10). Three large (>3 cm) pseudoaneurysms were observed on the repeat CT scans.

Conclusion: Splenic injuries are typically managed non-operatively without serious complications. Patients with splenic injuries (> Grade 3) managed non-operatively should have repeat imaging within 48 hours to rule out the possibility of pseudoaneurysms, regardless of worsening symptoms or decreasing hemoglobin. Patients with pseudoaneurysms may be amenable to angioembolization.
 
 

9.19 Surgeon decision making is consistent in trauma patients despite fatigue and patient injury

D. D. Coffey1, C. Spalding1, M. S. O’Mara1  1Grant Medical Center / Ohio University,Trauma And Acute Care Surgery / Ohio University Heritage College Of Osteopathic Medicine,Columbus, OH, USA

Introduction:   Damage control laparotomy with temporary abdominal closure has become routine in trauma surgery with concern for abdominal compartment syndrome, or a planned second-look procedure.  This technique is associated with complications including fluid/protein loss, enterocutaneous fistula, and ventral hernia.  The increasing prevalence of this procedure has led to concern over too many abdomens being left open due to surgeon routine or surgeon fatigue.  Fatigue is a concern with long surgeon shifts, and after 16 hours decision making capabilities may be impaired.  We hypothesize that patient and physician factors other than physiologic parameters contribute to the decision to not close the initial trauma laparotomy.

Methods:   This was a retrospective chart review comprising a total of 527 patients over 5 years. Patients who underwent emergent damage control laparotomy with fascia not closed were included in the open abdomen group. No consistent criteria were defined for choosing this course.  Patients whose fascia was primarily closed after the first emergent laparotomy were included as closed abdomens and used as the control group. Patient demographics, injury factors, time of operation, and time to fascial closure were evaluated.

Results:  Demographic and injury factors were predictive of the decision to leave the abdomen open (table), in particular injury severity, patient mass, and blunt mechanism predicted an open abdomen.  Time of day was not predictive of the decision to leave a patient open.  In a logistic regression model of these factors, only patient age (p=0.002), ISS (p<0.0001), and the number of abdominal organs with an injury grade of 3 or more (p=0.0014) predicted the abdomen would be left open.  Of the patients with initially open abdomen, 84 (60%) survived and 67 of those achieved primary fascial closure.  Mean time to closure was 2.4 (±1.6) days.  None of the presenting demographic or injury factors predicted time to primary fascial closure by independent or model analysis (all p>0.1). 

Conclusion:  The decision to perform damage control surgery and leave an abdomen open appears to be consistent throughout the day and to be dependent upon patient factors as evaluated by the operating surgeon.  Fatigue does not seem to be a contributing factor.  This does not hold true for the fascial closure, which is done at approximately two days after the initial procedure, and does not vary based upon demographic or injury factors in the patients that survive.  An opportunity may exist to identify a subset of the open abdomen patients that could return to the operating room for earlier definitive closure, thereby lowering the risk of complications.
 

9.20 Morphomic Factors are Important for Assessing Risk of Cardiovascular Complications in Trauma Patients

J. Li1, N. Wang1, J. Friedman1, D. Cron1, O. Juntila1, M. Terjimanian1, E. Chang1, S. C. Wang1  1University Of Michigan,Ann Arbor, MI, USA

Introduction: Motor vehicle crashes (MVCs) are a major cause of traumatic injury in the US, and in-hospital cardiovascular complications are associated with increased morbidity and mortality in this population. The risk of cardiovascular complications is often difficult to predict using only injury severity and vital signs upon presentation to the trauma center. Previous studies have shown analytic morphomics to be a unique domain of perioperative risk assessment and this utility may provide improved clinical insight in the trauma setting. We hypothesized that individualized morphomic factors were associated with in-hospital cardiac complications for patients involved in MVCs.

Methods: Our study included 3,187 adult MVC patients admitted to the University of Michigan Health System who underwent an abdominal CT scan near the time of injury. Patients with an Injury Severity Score (ISS) ≤5 or head-and-neck AIS ≥5 were excluded. Morphomic factors were measured at the L4 vertebral level using established algorithms. We used univariate analysis to determine the relationship of patient demographics, comorbidities, morphomics, and vital signs upon hospital admission with the development of cardiovascular complications. Cardiovascular complications were defined as myocardial infarction (MI), cerebrovascular accident (CVA), and other cardiac-arrest events. Injury severity was stratified as mild (5 < ISS < 16), moderate (16 ≤ ISS < 25), or severe (ISS ≥ 25) trauma.

Results: Of the 3,187 eligible patients, 3.8% developed cardiovascular complications. CVA and MI history, bone mineral density, and BMI were significant predictors of cardiovascular events (p<0.05) in mild trauma. Decreased average psoas radiodensity and increased age were significant predictors of cardiovascular events (p<0.01) in moderate trauma. Glasgow Coma Scale and increased anterior body depth were the two most significant predictors of cardiovascular events in severe trauma (p<0.05). Table 1 shows the results of the univariate analysis between the complication and non-complication groups for all three levels of trauma. Non-significant predictors of cardiovascular complications across all trauma levels included gender (p=0.17, 1.0, 0.11), history of diabetes (p=0.37, 0.65, 0.19), and hypertension (p=0.50, 0.07, 0.35).

Conclusion: Psoas radiodensity, bone mineral density, and anterior body depth are significant factors associated with cardiovascular complications. Morphomic factors derived from cross-sectional imaging may aid clinical decision making by identifying high risk patients for in-hospital cardiovascular events following MVCs.

 

9.04 A Pilot Study of Compensatory Perioperative Nutrition in the SICU: Safety and Effectiveness

D. D. Yeh1, C. Cropano1, S. Quraishi1, E. Fuentes1, H. Kaafarani1, J. Lee1, Y. Chang1, G. D. Velmahos1  1Massachusetts General Hospital,Trauma, Emergency Surgery, Surgical Critical Care,Boston, MA, USA

Introduction: Enteral nutrition (EN) is an important component of surgical critical care, yet delivery of prescribed nutrition is often suboptimal.  In surgical patients, EN is commonly interrupted for procedures.  We hypothesized that continuing perioperative nutrition or providing compensatory nutrition would improve caloric delivery without increasing morbidity.

Methods: We enrolled 10 adult (age >18) surgical ICU patients receiving EN who were scheduled for elective tracheostomy and/or percutaneous endoscopic gastrostomy (PEG) between 07/2012-05/2014. In these patients, either perioperative EN was maintained or compensatory nutrition was used (particularly in the case of PEG tube placement). Perioperative EN was defined as continuing tube feeds up to (and sometimes during) operative procedures, whereas compensatory nutrition was defined as a temporary postoperative increase in the hourly EN rate to compensate for interrupted EN. We matched these patients to 40 other patients during the same time period who had tracheostomy and/or PEG placement while adhering to the traditional American Society of Anesthesiologists NPO guidelines. Outcomes in patients receiving perioperative and/or compensatory feedings (FED) were compared to those not receiving them (UNFED) using Pearson’s chi-squared test and the Mann-Whitney test for proportions and medians, respectively. All tests were two-sided and p<0.05 was considered significant.
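
As a minimal illustration of the caloric accounting reported below (our own formulation, not the authors’ code; the numbers in the example are hypothetical), the day-of-procedure metrics reduce to simple arithmetic on prescribed and delivered calories:

def caloric_metrics(prescribed_kcal, delivered_kcal):
    """Return (% of prescribed calories delivered, caloric deficit in kcal)."""
    pct_prescribed = 100.0 * delivered_kcal / prescribed_kcal
    deficit_kcal = max(prescribed_kcal - delivered_kcal, 0.0)
    return pct_prescribed, deficit_kcal

# Hypothetical example: a patient prescribed 2000 kcal who receives 500 kcal
# on the day of procedure has received 25% of prescribed calories and
# carries a 1500 kcal deficit: caloric_metrics(2000, 500) -> (25.0, 1500.0)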

Results: A total of 50 eligible subjects were enrolled. There was no difference in age, sex, BMI, APACHE II score, or prescribed calories (TABLE 1). However, patients in the UNFED group had higher rates of PEG placement than the FED group (40% vs. 0%, p=0.02). On the day of procedure, the FED group received more actual calories (median 1706 vs. 527 kcal, p<0.001) and a higher percentage of prescribed calories (92% vs. 25%, p<0.001). Median caloric deficit on the day of the procedure was also significantly lower in the FED group (175 vs. 1213 kcal, p<0.001). There were no differences in total complications or GI complications between groups.

Conclusion: In our pilot study of surgical ICU patients undergoing tracheostomy and/or PEG tube placement, perioperative and compensatory nutrition resulted in higher caloric delivery and was not associated with increased morbidity. Larger studies are needed to validate our findings and to determine whether aggressive ICU nutrition improves outcomes in critically ill surgical patients.

 

9.05 Risk Factors for Intestinal Infection After Burns: A Population-based Outcomes Study of 541 Patients

K. Mahendraraj1, R. S. Chamberlain1,2,3  1Saint Barnabas Medical Center,Department Of Surgery,Livingston, NJ, USA 2New Jersey Medical School,Department Of Surgery,Newark, NJ, USA 3St. George’s University School Of Medicine,Department Of Surgery,St. George’s, St. George’s, Grenada

Introduction:
Thermal burns are associated with intestinal barrier failure, bacterial translocation, intestinal infections (IF) and sepsis. While the mechanisms for increased gut permeability have been extensively studied, the risk factors for developing IF are poorly understood. This study examines a large cohort of burn patients to assess the demographics and clinical outcomes of patients who develop burn-related IF compared to those who do not.

Methods:
Data on 95,472 patients with third-degree flame burn injuries were abstracted from the Nationwide Inpatient Sample (NIS) database over a ten-year period (2001-2010). IF was defined as any intestinal infection due to Gram-negative Enterobacteriaceae sp., Enterococcus sp., Campylobacter sp., Yersinia sp., or Clostridium difficile. Standard statistical methodology was used.

Results:
541 (0.6%) of burn patients were diagnosed with IF post-burn. Patients who developed IF were significantly older than those without IF (54.7 vs. 40 years old, p<0.001). Males (57.1%) and Caucasians (48.6%) developed IF more often. More extensive third-degree burns were also more common among IF patients (p<0.005). Length of stay (27.6 vs. 7.9 days) and overall inpatient mortality (6.3% vs. 2.6%) were significantly higher in IF patients (p<0.001). The most common comorbidities associated with developing IF were hypertension (31.7%), chronic respiratory illness (17.3%), and diabetes (14.8%), p<0.001. IF patients more often had fluid and electrolyte disorders (44.8%), sepsis (13.3%), burn wound infection (7.4%), and skin graft failure (2%). Multivariate analysis identified age over 60 (OR 1.0), fluid and electrolyte disorders (OR 3.1), peripheral vascular disease (OR 1.7), and multiple burn sites (OR 1.8) as independently associated with IF development, p<0.005. Conversely, TBSA under 10% (OR 0.5) and an active smoking habit (OR 0.4) were associated with a lower risk of developing IF, p<0.005. Risk factors for mortality in IF patients included sepsis (OR 2.4), septic shock (OR 1.8), and acute DVT (OR 2.6), p<0.005.

Conclusion:
IF in burn patients is associated with longer hospitalization, increased mortality, graft failure, sepsis, and other adverse events. The strongest risk factors for IF are fluid and electrolyte disorders, peripheral vascular disease, and multiple severe burn sites. IF is more common in Caucasian males with third-degree burns >20% TBSA and in older patients with multiple comorbidities. Clinicians should be cognizant of these IF risk factors when assessing and monitoring high-risk burn patients in order to decrease morbidity and mortality. Additional research into IF prevention strategies in high-risk burn patients, such as the use of probiotics, is already underway.

9.06 Safety and Effectiveness of Pre-hospital Tourniquet Use in 110 Patients with Extremity Injury

M. Scerbo1, E. Taub1, J. P. Mumm1, K. S. Gates1, J. B. Holcomb1, B. A. Cotton1  1University Of Texas Health Science Center At Houston,Acute Care Surgery/Surgery,Houston, TX, USA

Introduction: Field use of tourniquets (TQ) in military medicine is regarded as an effective adjunct for preventing hemorrhage-related deaths from extremity trauma. Their use in the civilian setting, however, has not been widely adopted. The most recent edition of the Guidelines for Field Triage of Injured Patients (2011) gives no recommendation for pre-hospital TQ use, stating that there is limited evidence to support their use and potential for increased injury. The purpose of this study was to assess whether pre-hospital TQ use in the civilian setting can be (1) effective in hemorrhage control and (2) safely applied.

Methods: Following IRB approval, patients arriving at a level 1 trauma center between 01/2009 and 05/2013 were reviewed. All patients with prehospital TQ application were included in the analysis. Cases were adjudicated and assigned one of the following designations: absolute indication (underwent operation within 2 hours for extremity injury, arterial or venous injury requiring repair/ligation, or traumatic amputation), relative indication (major musculoskeletal or soft-tissue injury requiring operation >2 hours after arrival, or documented large blood loss at the scene), or non-indicated. Patients with absolute or relative indications were placed into the INDICATED group; all others were placed into the non-INDICATED cohort. An orthopedic, trauma, or hand surgeon then adjudicated iatrogenic injuries resulting from TQ application. Univariate analysis was performed to compare groups. Logistic regression was then conducted to assess independent predictors of requiring an additional or replacement pre-hospital TQ to control hemorrhage.

Results:  110 patients had pre-hospital TQ placement: 94 (85%) in the INDICATED group and 16 (15%) in the non-INDICATED group. With the exception of a higher rate of blunt mechanism (70 vs. 43%; p=0.048), there were no differences in demographics, transport, or scene vitals between groups. INDICATED patients were more likely to have a lower extremity TQ (46 vs. 6%; p=0.007) but were less likely to have pre-hospital bleeding controlled (71 vs. 100%; p=0.012). 28% of INDICATED patients had their TQ removed in the ED (vs. 100%; p<0.001). Only 16% of INDICATED patients had an additional or new TQ applied in the ED (vs. 7%; p=0.420). Venous thromboembolic events (4.3 vs. 0.0%; p=0.401) and peripheral nerve injuries (5.3 vs. 0.0%; p=0.345) were similar. The amputation rate was 31% for INDICATED patients (vs. 0%; p=0.009). There were no nerve palsies or tissue/muscle injuries leading to amputation/debridement attributable to TQ use in either group. After controlling for scene vital signs and mechanism of injury, the likelihood of requiring an additional or new TQ after arrival in the ED was independently associated with ground transport (odds ratio 6.3, 95% C.I. 1.47-29.08; p=0.014).

Conclusion: Our study suggests that pre-hospital personnel can safely and effectively use TQ in patients with severe extremity injuries. 

9.07 Risk Prediction Model for Mortality in the Moribund Surgical Patient

L. E. Kuo1, G. C. Karakousis1, K. D. Simmons1, D. N. Holena1, R. R. Kelz1  1Hospital Of The University Of Pennsylvania,Department Of Surgery,Philadelphia, PA, USA

Introduction:  Surgeons struggle to counsel families on the role of surgery and the likelihood of survival in the moribund patient. A recent study demonstrated a nearly 50% 30-day survival rate for the moribund surgical patient, but without information on what factors are associated with survival, it is difficult to provide patients and their family members with information on the best course of action for a specific patient. We sought to develop a risk prediction model for postoperative inpatient death for the moribund surgical candidate. 

Methods:  Using ACS NSQIP data from 2007-2012, we identified ASA class 5 (moribund) patients who underwent an operation by a general surgeon. The sample was randomly divided into development and validation cohorts. In the development cohort, patient characteristics readily discernible on preoperative evaluation were considered for inclusion in the predictive model. The primary outcome measure was in-hospital mortality; factors found to be significant in univariate logistic regression were entered into a multivariable model, and points were assigned to these factors based on their beta coefficients. This model was used to generate a simple scoring system to predict inpatient mortality. Models were developed separately for operations performed within 24 hours of admission and operations performed at least one day after admission, as a means of differentiating between patients who presented to the hospital in the moribund state and those whose condition reflected deterioration over their hospital course. Each model was tested on the validation cohort.
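
To illustrate the kind of scoring system described (the abstract does not give the exact scaling or rounding rule, so the sketch below assumes a common convention of dividing each beta by the smallest beta and rounding to an integer; the odds ratios are taken from the Results below, and age, which enters the model per year, is omitted for brevity):

import math

# Log-odds (beta) coefficients recovered from the reported odds ratios.
betas = {
    "dependent_functional_status": math.log(2.11),
    "dialysis_last_30d": math.log(1.63),
    "recent_mi": math.log(1.52),
    "ventilator_dependent": math.log(2.17),
}
smallest = min(betas.values())
points = {k: round(b / smallest) for k, b in betas.items()}  # 1 point = smallest beta

def risk_score(patient):
    """patient: dict of 0/1 indicators keyed like `betas`."""
    return sum(points[k] * patient.get(k, 0) for k in points)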

Results: 3,130 patients were included in the study. In-hospital mortality was 50.5% in the overall sample. In multivariable regression modeling, patient characteristics associated with in-hospital mortality were age, functional status (odds ratio 2.11, confidence interval 1.39-3.19), dialysis within the previous 30 days (1.63, 1.22-2.32), recent myocardial infarction (1.52, 1.04-2.22), and ventilator dependence (2.17, 1.43-3.30). For patients undergoing surgery within 24 hours of admission, body mass index was also associated with inpatient death. The scoring system generated from this model accurately predicted in-hospital mortality in both the development and validation cohorts for patients undergoing surgery within and after 24 hours (Table 1). 

Conclusion: A simple risk prediction model using readily available preoperative patient characteristics can be used to accurately predict postoperative mortality in the moribund patient undergoing surgery. This scoring system can easily be applied in the clinical setting to assist in counseling and decision-making.
 

9.08 The Impact of Ratio Based Blood Products Transfusion on Solid Organ Donations in Trauma Patients

T. Orouji Jokar1, B. Joseph1, M. Khalil1, N. Kulvatunyou1, B. Zangbar1, A. Tang1, T. O’Keeffe1, L. Gries1, R. Latifi1, R. S. Friese1, P. Rhee1  1University Of Arizona,Trauma/Surgery/Medicine,Tucson, AZ, USA

Introduction:  Aggressive management with blood products is known to improve organ donation in trauma patients. The aim of this study was to evaluate the impact of blood product transfusion ratios on solid-organ procurement rates. We hypothesized that 1:1 (PRBC:FFP) ratio transfusion (RT) increases solid organ donation in trauma patients.

Methods:  We performed an 8-year retrospective analysis of all brain-dead trauma patients at our level 1 trauma center. Patients who consented to organ donation and donated solid organs were included. Patients were stratified into two groups: patients with 1:1 transfusion (RT) and patients without 1:1 transfusion (No-RT). Outcome measures were the number and type of solid organs donated. Logistic regression analysis was performed.

Results: A total of 70 patients who donated a total of 318 solid organs were included. 57.1% (n=40) of donors received 1:1 ratio transfusion. There was no difference in age (p=0.16), mechanism of injury (p=0.3), or systolic blood pressure on admission (p=0.1) between the two groups. Donors in the RT group were more likely to donate livers (82.5% vs. 63.3%, p=0.041) and lungs (42.5% vs. 17.2%, p=0.024), with an overall higher rate of solid organ donation (5±2.1 vs. 4.1±2.4, p=0.03) compared to patients in the No-RT group. RT was independently associated with an increase in solid organ donation rates (OR [95% CI]: 1.13 [1.05-1.8], p=0.043).

Conclusion: Ratio-based blood product transfusion increases solid organ donation in trauma donors. Aggressive 1:1 resuscitation of trauma patients whose injuries are deemed non-survivable may improve conversion rates among eligible donors.

9.09 Complications Associated with Pelvic Fixation Methods in Combined Pelvic and Abdominal Trauma

R. J. Miskimins1, M. Decker2, T. R. Howdieshell1, S. W. Lu1, S. D. West1  1University Of New Mexico HSC,Department Of Surgery,Albuquerque, NM, USA 2University Of New Mexico HSC,Department Of Orthopedic Surgery,Albuquerque, NM, USA

Introduction: Approximately 50% of blunt trauma cases with pelvic fractures have associated intraabdominal trauma.  Fixation of the anterior pelvis may be performed by open reduction and internal fixation (ORIF) or external fixation (Ex-fix). The approach to ORIF in patients who have undergone laparotomy is often through extension of the laparotomy incision.  However, a review of the literature shows no recent articles pertaining to timing or method of anterior pelvic ring fixation with recent laparotomy.  The optimal method for fixation in these patients is not known.  We hypothesized that ORIF performed through extension of the midline laparotomy incision would result in a clinically relevant difference in rates of wound closure and wound complications versus external fixation.

Methods: We identified all patients admitted from 2004 to 2014 who underwent laparotomy and either ORIF of the anterior pelvic ring through extension of the laparotomy incision or Ex-fix of the anterior pelvic ring. A retrospective review was performed. Injury Severity Score (ISS); age; length of stay; rates of ventral hernia, abdominal wound infection, and pelvic abscess; number of units transfused; presence of bowel or bladder injury; and additional operative or interventional procedures performed related to any complication were collected. Continuous variables were analyzed using the Mann-Whitney U test, and Fisher’s exact test was used to determine the statistical significance of categorical data.

Results: A total of 34 patients were identified from January 2004 to April 2014 who underwent exploratory laparotomy and pelvic fixation; 21 underwent external fixation of the anterior pelvic ring, while 13 underwent open reduction internal fixation of the anterior pelvic ring by extension of the midline laparotomy incision. There was no difference in ISS, length of stay, age, units of blood products transfused, bowel injury, or bladder injury between the two groups. The two groups had a similar incidence of ventral hernia (38% vs. 19%, p=0.254); however, the ORIF group was significantly more likely to have a laparotomy incision infection (54% vs. 5%, p=0.002), pelvic abscess (46% vs. 10%, p=0.033), and need for additional procedures to address these complications (13 vs. 6, p=0.023). We did note a significantly higher BMI (32.5 vs. 27.2, p=0.023) in the ORIF group, which could be a confounding factor contributing to the increase in wound complications.

Conclusion: Individuals who have undergone laparotomy and fixation of the anterior pelvic ring are a complex group of patients. They have high ISS, long hospital stays, and multiple injuries. The ORIF group experienced significantly higher rates of laparotomy incision infection and pelvic abscess and required more procedures to manage these complications. These data suggest careful consideration of the method of anterior pelvic ring fixation in patients who also undergo laparotomy.

 

9.10 Effectiveness of Once a Day Enoxaparin for VTE Prophylaxis in the Critically Ill Trauma Patient

S. Morrissey1, N. Ingalls1, P. Chestovich1, D. Frisch2, F. Simon2, D. Fraser1, J. Fildes1  1University Of Nevada School Of Medicine,Las Vegas, NV, USA 2University Medical Center Of Southern Nevada,Las Vegas, NV, USA

Introduction:  Trauma patients are known to have higher incidences of VTE (venous thromboembolism) when compared to other patient populations.  The ideal dose of Enoxaparin for adequate VTE prophylaxis in the critically ill trauma patient has yet to be determined.  Our dosing regimen attempts to minimize missed Enoxaparin doses while still achieving adequate factor Xa levels (anti-factor Xa activity).  This study evaluates the efficacy of this regimen and examines the patient factors that may contribute to its inadequacy.  

Methods:  This is a prospective observational study performed in our trauma intensive care unit (TICU). We identified all critically ill trauma patients over the age of 18 admitted to the TICU requiring chemical VTE prophylaxis between December 2013 and January 2014. These patients were started on Enoxaparin 40 mg subcutaneously nightly. Peak factor Xa levels were drawn 4 hours after the third dose. Adequate prophylaxis was defined as a factor Xa level greater than 0.2. Patient injury patterns and demographics were collected for analysis.

Results:   A total of 25 critically ill trauma patients admitted to the TICU were started on chemical prophylaxis. Twenty-one patients (84%) had adequate peak factor Xa levels (Group 1), while 4 patients (16%) had inadequate levels (Group 2). When demographics and injury patterns were compared between the two groups using a t-test and Pearson’s chi-squared test, Group 2 had a statistically higher mean BMI and a higher incidence of lower extremity fractures and spine injuries. Two of the 4 patients (50%) in Group 2 developed superficial venous thrombosis. There were no missed doses in either group.

Conclusion:  Based on our data, Enoxaparin 40 mg given once nightly provides adequate VTE prophylaxis in the majority of critically ill trauma patients. Not only do our rates of adequate prophylaxis by factor Xa levels surpass those found in the current literature, but this regimen also minimizes missed doses, which could potentially lead to lower rates of VTE.
 

9.11 Defining Fever in the Critically Injured: Test Characteristics of Three Different Thresholds

V. Polcz1, L. Podolsky2, O. Sizar1, A. Farooq1, M. Bukur1, I. Puente1, F. Habib1,2  1Broward Health Medical Center,Trauma,Ft Lauderdale, FL, USA 2Florida International University,Surgery,Ft Lauderdale, FL, USA

Introduction:
Fever remains the most common sign that prompts the work-up for a possible infectious etiology in critically injured trauma patients admitted to the ICU. Yet, the very definition of fever is highly variable, and the test characteristics of the various cut-offs used have not been clearly defined. An accurate cut-off would allow for more precise and cost-effective management of the febrile trauma patient.

Methods:
Charts for 621 trauma patients at our urban Level I trauma center were retrospectively evaluated for fever and culture results. The maximum oral temperature during the 24-hour period prior to obtaining culture samples was used. Temperatures were correlated with positive or negative culture results to determine sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and area under the curve.
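
A minimal sketch of how each cut-off’s test characteristics can be computed, assuming arrays of maximum temperatures and binary culture results (this is our illustration, not the authors’ code):

import numpy as np

def test_characteristics(temps_f, culture_positive, cutoff_f):
    """2x2 test characteristics of a fever cut-off against culture results."""
    febrile = np.asarray(temps_f) >= cutoff_f
    positive = np.asarray(culture_positive, dtype=bool)
    tp = np.sum(febrile & positive)
    fp = np.sum(febrile & ~positive)
    fn = np.sum(~febrile & positive)
    tn = np.sum(~febrile & ~positive)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_pos": sens / (1 - spec),  # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,  # negative likelihood ratio
    }

# e.g. test_characteristics(temps, cultures, 100.4)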

Results:
Sensitivity and specificity were calculated using cut-off values of 100.4 F, 101 F, and 101.5 F. All data points are shown in Table 1. Receiver operating characteristic (ROC) analysis identified 99.75 F as the temperature with the best test characteristics. Sensitivity showed an inverse relationship with temperature: 99.75 F exhibited the maximum value of 75.30% (CI: 70.27-79.88), with 101.5 F exhibiting the minimum value of 25% (CI: 20.87-29.50). Specificity had a direct relationship with temperature, with 99.75 F having the minimum specificity of 59.46% (CI: 51.00-61.00) and 101.5 F the maximum specificity of 92.96% (CI: 88.65-96.00). The positive likelihood ratio (LR) was lowest, 1.86 (CI: 1.51-2.28), at the lowest temperature of 99.75 F, and highest, 3.35 (CI: 2.12-5.95), at 101.5 F. The negative LR was also lowest at 99.75 F, 0.42 (CI: 0.33-0.52), and highest at 101.5 F, 0.81 (CI: 0.75-0.86). Positive predictive value (PPV) was highest at 99.75 F, 80.46% (CI: 75.57-84.74), and lowest at 101.5 F, 39.29% (CI: 35.00-43.70). Negative predictive value (NPV) was highest at 99.75 F, 52.07% (CI: 44.27-59.80), and lowest at 101.5 F, 39.29% (CI: 35.00-43.70). AUC was inversely related to temperature, with a maximum value of 0.732 (CI: 0.690-0.774) at 99.75 F and a minimum value of 0.498 (CI: 0.450-0.546) at 101.5 F.

Conclusion:
These results suggest that none of the current cut-offs used to define fever accurately predicts an infectious etiology in febrile patients. While a temperature of 99.75 F demonstrated the best test characteristics, none of the commonly accepted definitions of fever showed a strong correlation with culture results. Further research is warranted to identify biomarkers that accurately detect infectious processes in trauma patients.

9.12 Prognostic Value of Cardiac Troponin I Following Severe Traumatic Brain Injury

S. S. Cai2, B. W. Bonds1, D. M. Stein1  1University Of Maryland,R Adams Cowley Shock Trauma Center,Baltimore, MD, USA 2University Of Maryland,School Of Medicine,Baltimore, MD, USA

Introduction:  Recent studies have reported that elevated cardiac troponin I (cTnI) is frequently observed following severe traumatic brain injury (TBI) and is associated with poor outcomes. The exact relationship is not well understood, and the clinical applicability of cTnI in severe TBI patients has yet to be determined. The present study investigated the relationship between cTnI levels and risk of mortality.

Methods:  Adult patients (≥ 18y) with severe TBI (brain Abbreviated Injury Scale [AIS] score ≥ 3) admitted to a level 1 trauma center from July 2007 to January 2014 were reviewed. Patients with non-isolated severe TBI (AIS ≥ 3 in other anatomic regions) and those without cTnI measurements within 24 hours of admission were excluded. Four cTnI strata were predefined: undetectable (< 0.06 ng/mL) and detectable tertiles (0.06-0.1 ng/mL, 0.1-0.25 ng/mL, and > 0.25 ng/mL). Kaplan-Meier survival analysis and Cox proportional hazards modeling were applied. Stratification analysis was performed by age (≤ 65y or > 65y) and admission Glasgow Coma Scale (GCS) score (mild 13-15, moderate 9-12, and severe 3-8).
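
For illustration, the survival analyses described could be run as follows with the lifelines package; the data file and column names (ctni, followup_days, died, age, iss) are hypothetical placeholders, not the authors’ actual pipeline:

import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("tbi_cohort.csv")  # hypothetical file
# Predefined strata: undetectable (<0.06 ng/mL) plus detectable tertiles.
df["ctni_stratum"] = pd.cut(df["ctni"],
                            bins=[0, 0.06, 0.1, 0.25, float("inf")],
                            include_lowest=True).cat.codes

# Kaplan-Meier 5-day survival, detectable vs. undetectable cTnI.
for detectable, grp in df.groupby(df["ctni_stratum"] > 0):
    km = KaplanMeierFitter().fit(grp["followup_days"], grp["died"])
    print("detectable" if detectable else "undetectable", km.predict(5.0))

# Cox model with the stratum entered as an ordinal score (0-3),
# adjusted for age and injury severity.
cph = CoxPHFitter()
cph.fit(df[["followup_days", "died", "ctni_stratum", "age", "iss"]],
        duration_col="followup_days", event_col="died")
cph.print_summary()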

Results: Of a total of 2711 patients, elevated cTnI was found in 502 (18.5%). The five-day survival rate was significantly lower in patients with detectable cTnI than in those with undetectable cTnI (72.26% vs. 88.77%, p < 0.0001). Risk of mortality increased with increasing cTnI levels in a dose-dependent manner (p-trend < 0.0001). Patients in the highest cTnI stratum had a 1.55-fold higher hazard ratio (HR) for mortality (95% CI: 1.18-2.04, p-trend = 0.0002) compared with patients with undetectable cTnI after adjustment for age, injury type, and injury severity. Further stratification underscored the positive association between cTnI levels and risk of mortality, particularly in patients ≤ 65y (HR: 3.10, 95% CI: 2.09-4.59, p-trend < 0.0001) or with severe admission GCS (HR: 1.57, 95% CI: 1.16-2.14, p-trend = 0.0006). A similar association was not observed in patients > 65y or with mild or moderate admission GCS (Table 1).

Conclusion: Elevated cTnI is an independent predictor of mortality following severe TBI and is significantly associated with a higher risk of mortality via a positive, non-linear dose-dependent relationship. This association is predominantly seen in patients ≤ 65y or with severe admission GCS. Elevated cTnI may not be a useful predictor of mortality in patients > 65y or with mild or moderate admission GCS.

9.13 A Case for Less Workup in Near Hangings

M. Subramanian1, L. Liu1, T. Sperry1, T. Hranjec1, C. Minshall1, J. Minei1  1University Of Texas Southwestern Medical Center,Burn, Trauma, Critical Care / General Surgery,Dallas, TX, USA

Introduction:
No guidelines have been established for evaluating and managing patients with near hangings. As a result, most patients receive a comprehensive workup, regardless of mental status or exam. We hypothesized that patients with a normal neurologic exam and subjective neck pain, but no other complaints or exam findings, require no additional workup.

Methods:
We reviewed the charts of adult trauma patients at a Level I trauma center who presented after an isolated near hanging episode between 1995 and 2013. One patient was excluded because he sustained an 80-foot fall after near hanging. Patients were stratified based on their initial GCS score into low (< 15) and normal (=15) groups and compared using univariate analysis.

Results:
In total, 127 patients presented after near hanging: 45 (35.4%) in the low group and 82 (64.6%) in the normal group. Seven (8.5%) patients in the normal group reported pain or tenderness on physical examination but also had at least one of the following signs or symptoms: dysphagia, dysphonia, stridor, or subcutaneous air. Patients in the normal group received 133 CT scans and 7 MRIs, which identified 2 neck injuries. Both injuries (a C5 facet fracture and a vertebral artery dissection) were identified in patients with additional signs/symptoms present on examination. Neither injury required intervention. The presence of at least one concerning sign or symptom in patients with GCS 15 had 100% sensitivity and 94% specificity for identifying an injury.

Conclusion:
Screening for dysphagia, dysphonia, stridor, or subcutaneous air on examination in patients with a normal neurologic examination after an attempted hanging would reduce the number of unnecessary studies and decrease cost without missing significant injuries. Despite the low incidence of injuries, patients with other traumatic injuries and those with a decreased GCS score should still be thoroughly evaluated.
 

9.14 Hospital Readmission after Traumatic Brain Injury: Results from the MarketScan Database

J. K. Canner1, F. Gani1, S. Selvarajah1, A. H. Haider1, E. B. Schneider1  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA

Introduction: Thirty-day readmission after discharge from inpatient care for traumatic brain injury (TBI) among patients under the age of 65 years in the United States has not been well reported. This study examined readmission to acute care in a population of patients under the age of 65, all of whom were covered by employer-provided private health insurance.

Methods: The MarketScan database from 2010 to 2012, which includes over 50 million patients under the age of 65 covered through an employer-sponsored insurance plan, was queried. Patients hospitalized with a primary diagnosis of TBI, and who had no other injury associated with an Abbreviated Injury Scale (AIS) score of 3 or greater to any non-head body region, were identified and included for study. Patients with fewer than 30 days of follow-up were excluded. Outcomes of interest included readmission to inpatient care within 30 days of index discharge and primary diagnosis at readmission. Multivariable logistic regression, controlling for demographic, injury, and hospital-level variables, examined factors associated with readmission.

Results: A total of 27,998 patients in the MarketScan database with at least one eligible TBI hospitalization met inclusion criteria.  Mean (SD) patient age was 33.9 (20.2) (Figure), 65.3% were male, and 8.8% had a Charlson Comorbidity Index of 2 or greater.  Mean (SD) Injury Severity Score (ISS) was 13.3 (6.7) and 73.1% had a head AIS ≥ 3. Mean (SD) length of stay was 4.4 (8.5) days. Patient disposition at discharge varied as follows: 79.3% were discharged home, 5.8% to inpatient rehabilitation, 5.1% to another facility, and 3.8% died in hospital. Of the 26,922 patients discharged alive, 1,709 (6.4%) were re-hospitalized within 30 days. Among readmitted patients, 27.8% carried a TBI-related primary diagnosis, more than half of which (56.5%) involved some form of intracranial hemorrhage.  Other common primary readmission diagnoses included infection (4.0% of all readmissions), alcohol dependence (2.7%), venous thromboembolism (2.3%), and post-concussion syndrome (2.1%).  Patients who were older (OR: 1.01 per additional year of age), had a head AIS of 3 or greater (OR: 1.10), had one (OR: 1.29) or more (OR: 2.04) comorbidities, or had a longer index length of stay (OR: 1.02 per additional day) demonstrated increased odds of being re-hospitalized within 30 days (all p<0.001).

Conclusion: Patients discharged from inpatient care for TBI are at risk of readmission.  Further research is warranted to better understand specific factors associated with readmission and how consideration of these factors at the time of discharge planning might reduce patient readmission.

8.07 Blunt pancreatic trauma in children: systematic review and meta-analysis of management and outcomes

A. C. Akinkuotu1,2, F. Sheikh1,2, A. Olsen1,2, B. J. Naik-Mathuria1,2  1Texas Children’s Hospital,Pediatric Surgery,Houston, TX, USA 2Baylor College Of Medicine,Michael E. DeBakey Department Of Surgery,Houston, TX, USA

Introduction:
Pancreatic injuries represent the fourth most common solid organ injury in children. Although non-operative management (NOM) is the standard of care for minor pancreatic injuries that do not involve the main pancreatic duct, management of major pancreatic injuries in children remains controversial and varies widely among institutions. Since the literature is limited to case reports and series, we sought to perform a systematic review and meta-analysis to determine which management strategy for blunt pancreatic injuries in children has better outcomes.

Methods:
A systematic review of all published literature (PUBMED, SCOPUS, and EMBASE) was performed according to the PRISMA guidelines. Case reports and series were excluded. Outcomes of interest for this review were rates of fistula and pseudocyst formation, days on total parenteral nutrition (TPN), days to full enteral feeds, and hospital length of stay (LOS).

Results:
Twenty-five studies were included in this review (age range 2 months to 17 years). There were a total of 1014 pancreatic injuries, of which 732 (72.2%) were managed with non-operative management (NOM), 267 (26.3%) by operative management (OM), and 15 with drain placement only. Of the studies in which pancreatic injury grades were reported (n=8), there were 190 injuries with AAST grade ≥ 3; 146 were managed with NOM compared to 44 with OM. In the studies in which fistula rates were evaluated (n=6), there was no difference in fistula formation rates between NOM (11/269, 4.09%) and OM (5/124, 4.03%) (pooled odds ratio 0.60, 95% CI 0.18 to 2.01; p=0.311). Incidence of pseudocyst formation was recorded in 24 studies, and meta-analysis demonstrated a higher risk of pseudocyst formation with NOM compared to OM (pooled odds ratio 2.05, 95% CI 1.04 to 4.07; p<0.001). Meta-analysis could not be performed on the other outcomes of interest due to heterogeneity of data; however, both duration of TPN and days to full enteral diet trended toward longer times in patients with NOM. Of the 10 studies analyzing LOS, 4 showed no difference in hospital LOS between NOM and OM, whereas 2 showed that NOM had a longer length of stay. Other complications noted in these studies included central line infection, pancreatic abscess, small bowel obstruction, and wound dehiscence. There were more complications with NOM (n=14) than with OM (n=8); however, these complications were not reported in sufficient numbers to analyze.
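
As a sketch of the pooling step (the abstract does not state whether a fixed- or random-effects model was used; shown here is a standard fixed-effect, inverse-variance pooled odds ratio with a 0.5 continuity correction applied uniformly to every table):

import math

def pooled_or(tables):
    """tables: list of (events_nom, n_nom, events_om, n_om) per study."""
    num = den = 0.0
    for e1, n1, e2, n2 in tables:
        a, b = e1 + 0.5, (n1 - e1) + 0.5  # NOM events / non-events
        c, d = e2 + 0.5, (n2 - e2) + 0.5  # OM events / non-events
        log_or = math.log((a * d) / (b * c))
        w = 1.0 / (1/a + 1/b + 1/c + 1/d)  # inverse-variance weight
        num += w * log_or
        den += w
    pooled, se = num / den, math.sqrt(1.0 / den)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci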

Conclusion:
Although non-operative management of major pancreatic injury in children is associated with higher rates of pseudocyst formation, current data suggest that there is no difference in fistula formation, hospital length of stay, or TPN use. Pseudocyst formation alone may not be reason enough to warrant operation for this injury. A multicenter, randomized prospective clinical trial is needed to establish guidelines for the ideal management of these patients.
 

8.08 A Simple Caliper Measurement Technique to Quantify Severity of Pectus Excavatum

C. W. Snyder1, P. D. Danielson1, S. Farach1, N. M. Chandler1  1All Children’s Hospital – Johns Hopkins Medicine,Department Of Surgery,St. Petersburg, FL, USA

Introduction: Pectus excavatum requires surgical correction when the chest wall deformity, measured by the Haller Index (HI), is severe. Currently, the HI is calculated by cross-sectional imaging with computed tomography, which involves ionizing radiation, or magnetic resonance imaging, which is time-consuming and costly. The purpose of this study was to determine if clinical measurements could accurately determine the severity of the chest wall deformity and the need for surgical repair.

Methods: Patients undergoing surgical repair of pectus excavatum between 2010 and 2014 were included. HI was obtained from radiologist reports. External anterior-posterior (AP), lateral, and right and left chest distances were measured directly from the images by surgeon reviewers. The AP distance was measured at the deepest point of the sternum, from the anterior midline skin surface to the posterior midline skin surface. The lateral distance (LD) was measured at the same level, from the left to the right lateral mid-axillary line skin surface. The estimated clinical pectus index (eCPI) was calculated as the LD divided by the AP. The right and left chest distances were measured from the posterior midline skin to the right and left anterior chest wall at the mid-clavicular line. The percent depth (%depth) was calculated as the difference between the right or left chest distance and the AP, divided by that chest distance. If the right and left %depth differed due to asymmetry, the larger %depth was used. In a subset of patients, physical measurements were obtained prospectively using chest calipers (clinical pectus index, CPI). Descriptive statistics were calculated for the HI, eCPI, and CPI. The HI and eCPI measurements were compared using Pearson’s correlation coefficient and linear regression.
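
The two indices reduce to simple ratios; a minimal sketch follows (variable names are ours, and the example values are chosen to reproduce the reported medians, not taken from any individual patient):

def pectus_indices(ap, lateral, right_chest, left_chest):
    """All distances in the same units, measured as described above."""
    ecpi = lateral / ap  # estimated clinical pectus index (LD / AP)
    depth_right = (right_chest - ap) / right_chest
    depth_left = (left_chest - ap) / left_chest
    pct_depth = max(depth_right, depth_left)  # use the larger side if asymmetric
    return ecpi, pct_depth

# e.g. pectus_indices(ap=18.0, lateral=34.2, right_chest=22.0, left_chest=21.5)
# -> eCPI of 1.9 and %depth of about 0.18, matching the reported medians.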

Results: A total of 41 patients were included, 31 with radiologic measurements and 10 with both radiologic and clinical measurements. The median HI was 4.4 (range, 3.0-8.7). The median (range) eCPI was 1.9 (1.5-2.2) and CPI was 1.9 (1.6-2.3). The median (range) %depth was 18% (17-39%) on caliper measurement. All patients had an eCPI and CPI greater than 1.5 and a %depth greater than 17%. The HI and eCPI measurements demonstrated excellent correlation (r=0.75, p<0.0001). On linear regression, the eCPI predicted the HI accurately (adjusted R2 = 0.54, p<0.0001) (Fig. 1).

Conclusion: Severity of pectus excavatum can be measured with a simple, inexpensive, non-invasive bedside caliper technique. The eCPI calculated from external measurements correlates well with the HI. A CPI greater than 1.5 and a percent depth greater than 17% correspond to a severe defect requiring surgical correction. Further study is needed to verify the accuracy and reproducibility of this technique.

 

8.09 Giant Omphalocele: Surgical Management and Perinatal Outcomes

A. C. Akinkuotu1,2, F. Sheikh1,2, O. Olutoye1,2,3, T. Lee1,2, C. J. Fernandes1,3, S. Welty1,3, N. Ayres1,3, D. Cass1,2,3  1Texas Children’s Hospital,Texas Children’s Fetal Center,Houston, TX, USA 2Baylor College Of Medicine,Michael E. DeBakey Department Of Surgery,Houston, TX, USA 3Baylor College Of Medicine,Pediatrics,Houston, TX, USA

Introduction:
Management of giant omphalocele (GO) presents the pediatric surgeon with a conundrum related to the ideal timing of abdominal closure.  Because the optimal surgical approach remains unknown, the decision to delay abdominal closure or operate early in the neonatal period is left to surgeon preference. The purpose of this study was to describe the current management and outcomes of infants with omphaloceles at our institution.

Methods:
All patients treated for omphalocele 1/03-2/14 were reviewed.  Patients were classified as either isolated omphalocele, or omphalocele with minor or major associated anomalies.  Major anomalies were defined as a cardiac defect requiring immediate medical or surgical treatment, Bochdalek-type CDH, alveolar-capillary dysplasia, and chromosomal aneuploidy or duplication. All other anomalies were classified as minor.  Prenatal data collected included fetal MRI-based observed-to-expected total fetal lung volumes (O/E-TFLV).  Giant omphalocele (GO) was defined as >50% of liver in the omphalocele sac.

Results:
Of 95 patients, 59 presented prenatally and had comprehensive fetal center evaluation, and 36 presented postnatally. Of the fetal patients, 3 had pregnancy termination, 7 (12%) had in-utero demise, and 3 were delivered elsewhere, treated with comfort measures, and suffered immediate neonatal demise. Of 82 live-born infants at our institution, 21 (26%) had chromosomal anomalies and 25 (30%) had major associated anomalies. No live-born baby with an isolated defect (n=19) died, whereas mortality was 15% and 33% for those with minor and major anomalies, respectively (p=0.006). Infants with major anomalies had a significantly longer median length of intubation (36 vs. 0 vs. 0 days; p=0.04) and hospital stay (157 vs. 8.5 vs. 18 days; p<0.001) compared to those with minor or no anomalies. Patients with major anomalies also had a significantly higher need for oxygen at 30 days of life (Table). Of 41 infants with GO (80% 6-month survival), the majority (85%) were managed surgically by delayed closure, with a median age at repair of 9 months (range, 3.4-23.6 months). None of the delayed repair patients required a later operative revision, while 2 of 6 with early repair did.

Conclusion:
The presence of associated anomalies is the strongest predictor of mortality in fetuses or neonates with omphalocele.  For those with GO, delayed closure is associated with good outcomes, but larger, prospective studies comparing delayed to early closure are needed to delineate the optimal treatment approach.
 

8.10 Clinical Predictors in the Development of Necrotizing Enterocolitis

S. Faisal1, A. G. Cuenca1, S. D. Larson1, D. W. Kays1, S. Islam1  1University Of Florida,Gainesville, FL, USA

Introduction:  The development of prognostic metrics is especially important in the identification of disease states that may rapidly worsen, such as necrotizing enterocolitis (NEC). While many such predictors have been reported and are thought to be associated with NEC, none have been validated. The purpose of this study was to attempt to create a model that could help predict NEC based on clinical, physiologic, and laboratory parameters.

Methods:  We retrospectively collected clinical data on 108 patients with NEC and 38 age-matched controls treated from 2000 to 2009. We performed multiple logistic regression and developed receiver operating characteristic (ROC) curves from the collected clinical data to determine whether previously reported metrics, as well as additional parameters including the presence of cardiac or hepatic dysfunction, could predict the development of NEC and generalize to the age-matched controls.

Results: Using univariate analysis, we found significant differences (p < 0.05) in birth weight, bandemia, sodium concentration, percent lymphocytes, hemoglobin, method of delivery, and mean arterial pressure (MAP); however, we did not note differences in pH, absolute neutrophil count, platelet count, or presence of cardiac dysfunction (see Table). Logistic regression was then performed on the significant variables. Surprisingly, only bandemia, MAP, and hemoglobin concentration at the time of clinical suspicion of NEC were found to be significant in our population, with ORs of 1.29, 1.13, and 0.7, respectively (see Table). Because missing data and selection bias may confound our model, ROC curves were generated for the variables collected. Bandemia, MAP, and hemoglobin concentration were again found to have the greatest areas under the curve (0.86, 0.79, and 0.72, respectively).
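
A minimal sketch of the ROC step, assuming a pandas DataFrame with hypothetical column names (nec as the 0/1 outcome); scikit-learn’s roc_auc_score suffices for univariate AUCs:

import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("nec_cohort.csv")  # hypothetical file
for col in ["bands_pct", "map_mmhg", "hemoglobin"]:
    auc = roc_auc_score(df["nec"], df[col])
    auc = max(auc, 1.0 - auc)  # orient so protective markers (e.g. hemoglobin) score >0.5
    print(f"{col}: AUC = {auc:.2f}")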

Conclusion: While bandemia is already considered an important clinical variable, these data suggest that we may be able to improve on already recognized clinical parameters by including decreased hemoglobin concentration and elevated MAP in the clinical algorithm currently used for the identification of NEC in at-risk patients. Surprisingly, pH, cardiac dysfunction, and ANC were not found to be predictive of NEC in our patient population. This model will be tested and validated prospectively.