9.15 Administration Of Target Blood Component Ratio During Super Massive Transfusion Protocol

C. J. Allen1, J. P. Meizoso1, J. J. Ray1, L. F. Teisch1, L. Zebib1, G. M. Moore1, N. Namias1, C. I. Schulman1, R. Dudaryk2, K. G. Proctor1  1University Of Miami,Trauma And Critical Care,Miami, FL, USA 2University Of Miami,Anaesthesiology,Miami, FL, USA

Introduction: Despite significant advances in trauma management, hemorrhagic shock remains the leading cause of death in trauma victims. Maintaining a PRBC:FFP:platelet ratio of 1:1:1 reduces mortality in trauma patients receiving a massive transfusion protocol (MTP). The ability to maintain this goal ratio in those requiring a super massive transfusion protocol (S-MTP) of >30 units of PRBCs may be diminished by logistical constraints and immediate supply limitations. We hypothesize that the timing and the total administered blood component ratio differ between those who receive MTP and those who receive S-MTP at a high-volume level 1 trauma center.

Methods: Between January 2009 and January 2012, we prospectively observed all trauma patients requiring blood component transfusion, obtaining demographics, injury patterns, procedures, total fluid and transfusion requirements, and outcomes. Of those transfused, we retrospectively reviewed those with an MTP activation and obtained the quantity of each component (PRBC, FFP, platelets) administered at serial time increments. Comparisons were made between MTP (≤30 units PRBC) and S-MTP (>30 units PRBC) patients. Parametric data are presented as mean ± standard deviation, non-parametric data as median (interquartile range). Student's t-test, Mann-Whitney U, and Fisher's exact tests were used as appropriate.
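For illustration, the three tests named above map directly onto standard SciPy calls; the sketch below uses hypothetical inputs (the mortality counts are back-calculated from the percentages reported in the Results) and is not the authors' analysis code.

    # Sketch of the group comparisons described above (hypothetical data).
    import numpy as np
    from scipy import stats

    mtp_iss = np.array([28, 35, 19, 41, 30])   # hypothetical ISS values, MTP group
    smtp_iss = np.array([44, 39, 52, 61])      # hypothetical ISS values, S-MTP group

    # Parametric data (reported as mean +/- SD): Student's t-test.
    t_stat, p_t = stats.ttest_ind(mtp_iss, smtp_iss)

    # Non-parametric data (reported as median/IQR): Mann-Whitney U.
    u_stat, p_u = stats.mannwhitneyu(mtp_iss, smtp_iss, alternative="two-sided")

    # Categorical outcomes: Fisher's exact test on a 2x2 table,
    # e.g., mortality (died, survived): ~30% of 71 MTP vs. 75% of 16 S-MTP patients.
    odds_ratio, p_f = stats.fisher_exact([[21, 50], [12, 4]])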

Results: 268 total patients required a transfusion, with 87 MTP activations; 16 of these met S-MTP criteria. Between the MTP and S-MTP cohorts, ISS was 31±14 vs 42±18 (p=0.09), initial BE -6±11 vs -9±6 (p=0.173), ED SBP 92±41 vs 66±37 (p=0.03), 24h EBL 1.6(4.1)L vs 8.0(11.4)L (p<0.001), and 24h PRBC 16±7u vs 49±16u (p<0.001). The average PRBC:FFP:platelet ratio at 24h was 1.6:1:2.0 in MTP and 1.5:1:1.2 in S-MTP. The PRBC:FFP ratio for each patient was calculated at serial time increments, averaged, and plotted in the figure. Within the first hour, S-MTP patients received 10.2±6.9u PRBC and 3.1±4.2u FFP, a difference of more than 7 units. Overall mortality was 30% in the MTP group and 75% in the S-MTP group (p<0.001).

Conclusion: Likely due to logistical constraints and supply limitations, patients receiving S-MTP receive lower-than-goal plasma transfusion immediately following MTP activation; this failure of early hemostatic resuscitation may contribute to their increased mortality. Our results show the need for >10 units of plasma to be immediately available for this critically hemorrhaging population. Future investigation should focus on improving early S-MTP resuscitation by making more plasma readily available (i.e., liquid plasma) for immediate transfusion.

9.16 Metoprolol Improves Survival in Severe Traumatic Brain Injury Independent of Rate Control

B. Zangbar1, P. Rhee1, B. Joseph1, N. Kulvatunyou1, I. Ibrahim-zada1, A. Tang1, G. Vercruysse1, R. S. Friese1, T. O’keeffe1  1University Of Arizona,Trauma/Surgery/Medicine,Tucson, AZ, USA

Introduction: Multiple prior studies have suggested an association between beta-blocker administration and survival in patients with severe traumatic brain injury (TBI). However, it is unknown whether this benefit of beta-blockers depends on heart rate control. The aim of this study was to assess whether rate control affects survival in patients with severe TBI receiving metoprolol.

Methods: We performed a 7-year retrospective analysis of all blunt TBI patients at a level 1 trauma center. Patients >16 years of age with a head abbreviated injury scale (AIS) score of 4 or 5 who were admitted to the ICU from the operating room (OR) or emergency room (ER) were included. Patients were stratified into two groups: metoprolol (BB) and no beta-blockers (NBB). Using propensity score matching, we matched patients in the two groups in a 1:1 ratio, controlling for age, gender, race, admission vital signs, Glasgow Coma Scale (GCS) score, injury severity score (ISS), average heart rate during ICU admission, and standard deviation of heart rate during ICU admission. Our primary outcome measure was survival.
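A minimal sketch of 1:1 propensity matching of this kind is shown below, assuming hypothetical column names (age, gcs, iss, hr_mean, hr_sd, bb); the abstract does not specify the matching algorithm, so greedy nearest-neighbor matching without replacement is used as one common choice. This is not the authors' code.

    # Sketch: 1:1 greedy nearest-neighbor propensity-score matching.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    COVARIATES = ["age", "gcs", "iss", "hr_mean", "hr_sd"]  # assumed column names

    def match_one_to_one(df: pd.DataFrame) -> pd.DataFrame:
        # Propensity score: probability of receiving metoprolol given covariates.
        model = LogisticRegression(max_iter=1000).fit(df[COVARIATES], df["bb"])
        df = df.assign(ps=model.predict_proba(df[COVARIATES])[:, 1])

        treated = df[df["bb"] == 1]
        controls = df[df["bb"] == 0].copy()

        matched = []
        for idx, row in treated.iterrows():
            if controls.empty:
                break
            nearest = (controls["ps"] - row["ps"]).abs().idxmin()
            matched.extend([idx, nearest])
            controls = controls.drop(nearest)   # match without replacement
        return df.loc[matched]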

Results: Overall, 914 patients met inclusion criteria, of whom 189 received beta-blockers. A propensity-matched cohort of 356 patients (178 BB and 178 NBB) was included, as shown below. Patients receiving metoprolol had higher survival than patients who did not receive beta-blockers, despite no difference in mean heart rate or heart rate variability.

Conclusion: Our study shows an association between metoprolol administration and improved survival in patients with severe TBI, and this effect appears to be independent of any reduction in heart rate. We suggest that metoprolol should be administered to all severe TBI patients regardless of any perceived beta-blockade effect on heart rate.
 

9.17 Damage Control Resuscitation is Associated with Increased Survival after Severe Splenic Injury

E. A. Taub1, B. Shrestha1, B. Tsang1, B. A. Cotton1, C. E. Wade1, J. B. Holcomb1  1University Of Texas Health Science Center At Houston,Houston, TX, USA

Introduction: Increasing data have shown that Damage Control Resuscitation (DCR), employing low-volume, balanced resuscitation, is associated with improved survival in severely injured patients. However, little attention has been paid to outcomes by specific organ injury. We sought to determine whether implementation of DCR has improved survival among patients with severe blunt splenic injury.

Methods:  Following IRB approval, a retrospective study was performed on all adult trauma patients with severe, blunt splenic injury admitted to our center between 01/2005-12/2012. Severe splenic injury was defined as AAST grade IV or V. Our center adopted and employed DCR principles in 2009. Therefore, patients were stratified into two groups: pre-DCR (2005-2008) and DCR (2009-2012). Patients who died before leaving the emergency department (ED) were excluded. Outcomes (resuscitation products used and survival) were then compared by univariate analysis. A purposeful regression model was then constructed to identify independent predictors of mortality.

Results: Between 2005-2012 there were 29,801 trauma admissions, including 224 patients 18 years of age or older who sustained blunt AAST grade IV or V splenic injuries. Of these, 206 patients survived to leave the ED and made up the study group: 83 pre-DCR and 123 DCR patients. The groups were similar in demographics and in prehospital and ED vital signs. However, DCR patients had higher abdominal AIS scores (median 4 vs. 4; p=0.050). While arrival physiology and base deficit were similar, DCR patients had a higher aPTT (median 28.2 vs. 26.4; p=0.017) and a lower initial platelet count (median 223 vs. 246; p=0.019). DCR patients received more plasma (median 2 vs. 0 U; p<0.001) and less crystalloid (median 0.1 vs. 1.0 L; p<0.001) while in the ED. Splenectomy rates were higher in DCR patients, though not statistically significantly (58 vs. 47%; p=0.103). DCR patients received less RBC (median 2 vs. 6 U), plasma (median 2 vs. 4 U), platelets (median 0 vs. 0 U), and crystalloid (median 1.0 vs. 3.1 L) in the operating room; all p<0.05. While there were no differences in ICU complications, mortality was lower in the DCR group, though not statistically significantly (10 vs. 19%; p=0.106). Multiple logistic regression demonstrated that DCR was an independent predictor of decreased mortality (odds ratio 0.05, 95% C.I. 0.006-0.341; p=0.003). In addition, this same model (controlling for age, abdominal AIS, and admission platelet count) found that DCR was not associated with an increased likelihood of splenectomy (odds ratio 1.29, 95% C.I. 0.693-2.431, p=0.419).

Conclusion: In patients with severe splenic injury, implementation of DCR was associated with a 95% reduction in the adjusted odds of mortality at our facility.

9.18 Rates of Pseudoaneurysm in Non-Operative Management of Splenic Injuries

C. Morrison1, J. C. Lee1, K. Rittenhouse1, M. Kauffman1, B. W. Gross1, F. Rogers1  1Lancaster General Hospital,Trauma,Lancaster, PA, USA

Introduction: The use of angiography has been associated with lower rates of failed non-operative management of splenic injuries. The rate of splenic artery pseudoaneurysm formation in these patients remains unclear. We sought to determine the rate of asymptomatic vascular injury in patients with splenic injury managed without operation or angiographic embolization, and the outcomes of patients managed with and without re-imaging.

Methods: Patients undergoing splenic injury management with and without surgical intervention or angiographic embolization from 2011 to 2014 were queried from the trauma registry of a Pennsylvania-verified, level II trauma center. Patients were routinely re-imaged as part of our practice. Penetrating trauma and immediate operative intervention were excluded from our analysis. Splenic injuries were classified according to American Association for the Surgery of Trauma (AAST) guidelines by an attending radiologist or senior trauma surgeon. Rates of repeat imaging, subsequent embolization and re-bleeding, and diagnosis of pseudoaneurysm were determined.

Results: A total of 132 patients met the inclusion criteria, of whom 72.7% were managed non-operatively (N=96) and 27.3% operatively (N=36). Within the non-operative population, eight pseudoaneurysms were found: three on initial scans and five on repeat scans. Among patients managed non-operatively, the rate of re-imaging was 39.58% (N=38); of angioembolization, 22.92% (N=22); and of readmission, 10.41% (N=10). Three large (>3 cm) pseudoaneurysms were observed on repeat CT scans.

Conclusion: Splenic injuries are typically managed non-operatively without serious complications. Patients with splenic injuries (> Grade 3) managed non-operatively should have repeat imaging within 48 hours to rule out pseudoaneurysm, even in the absence of worsening symptoms or a falling hemoglobin. Patients with pseudoaneurysms may be amenable to angioembolization.
 
 

9.19 Surgeon decision making is consistent in trauma patients despite fatigue and patient injury

D. D. Coffey1, C. Spalding1, M. S. O’Mara1  1Grant Medical Center / Ohio University,Trauma And Acute Care Surgery / Ohio University Heritage College Of Osteopathic Medicine,Columbus, OHIO, USA

Introduction: Damage control laparotomy with temporary abdominal closure has become routine in trauma surgery when there is concern for abdominal compartment syndrome or a planned second-look procedure. This technique is associated with complications including fluid/protein loss, enterocutaneous fistula, and ventral hernia. The increasing prevalence of this procedure has raised concern that too many abdomens are being left open because of surgeon routine or surgeon fatigue. Fatigue is a concern with long surgeon shifts, and after 16 hours decision-making capability may be impaired. We hypothesize that patient and physician factors other than physiologic parameters contribute to the decision not to close the initial trauma laparotomy.

Methods: This was a retrospective chart review comprising a total of 527 patients over 5 years. Patients who underwent emergent damage control laparotomy with the fascia left unclosed were included in the open abdomen group; no consistent criteria were defined for choosing this course. Patients whose fascia was primarily closed after the first emergent laparotomy were included as closed abdomens and used as the control group. Patient demographics, injury factors, time of operation, and time to fascial closure were evaluated.

Results: Demographic and injury factors were predictive of the decision to leave the abdomen open (table); in particular, injury severity, patient mass, and blunt mechanism predicted an open abdomen. Time of day was not predictive of the decision to leave a patient open. In a logistic regression model of these factors, only patient age (p=0.002), ISS (p<0.0001), and the number of abdominal organs with an injury grade of 3 or more (p=0.0014) predicted that the abdomen would be left open. Of the patients with an initially open abdomen, 84 (60%) survived, and 67 of those achieved primary fascial closure. Mean time to closure was 2.4 (±1.6) days. None of the presenting demographic or injury factors predicted time to primary fascial closure by independent or model analysis (all p>0.1).

Conclusion: The decision to perform damage control surgery and leave an abdomen open appears to be consistent throughout the day and to depend upon patient factors as evaluated by the operating surgeon. Fatigue does not seem to be a contributing factor. The same does not hold for fascial closure, which occurs approximately two days after the initial procedure and does not vary with demographic or injury factors among patients who survive. An opportunity may exist to identify a subset of open abdomen patients who could return to the operating room for earlier definitive closure, thereby lowering the risk of complications.
 

9.20 Morphomic Factors are Important for Assessing Risk of Cardiovascular Complications in Trauma Patients

J. Li1, N. Wang1, J. Friedman1, D. Cron1, O. Juntila1, M. Terjimanian1, E. Chang1, S. C. Wang1  1University Of Michigan,Ann Arbor, MI, USA

Introduction: Motor vehicle crashes (MVCs) are a major cause of traumatic injury in the US, and in-hospital cardiovascular complications are associated with increased morbidity and mortality in this population. The risk of cardiovascular complications is often difficult to predict using only injury severity and vital signs upon presentation to the trauma center. Previous studies have shown analytic morphomics to be a unique domain of perioperative risk assessment and this utility may provide improved clinical insight in the trauma setting. We hypothesized that individualized morphomic factors were associated with in-hospital cardiac complications for patients involved in MVCs.

Methods: Our study included 3,187 adult MVC patients admitted to the University of Michigan Health System who underwent an abdominal CT scan near the time of injury. Exclusion criteria were an Injury Severity Score (ISS) ≤5 or a head-and-neck AIS ≥5. Morphomic factors were measured at the L4 vertebral level using established algorithms. We used univariate analysis to determine the relationship of patient demographics, comorbidities, morphomics, and vital signs upon hospital admission with the development of cardiovascular complications. Cardiovascular complications were defined as myocardial infarction (MI), cerebrovascular accident (CVA), and other cardiac arrest events. Injury severity was stratified as mild (5 < ISS < 16), moderate (16 ≤ ISS ≤ 25), or severe (ISS > 25) trauma.

Results: Of the 3,187 eligible patients, 3.8% developed cardiovascular complications. CVA and MI history, bone mineral density, and BMI were significant predictors of cardiovascular events (p<0.05) in mild trauma. Decreased average psoas radiodensity and increased age were significant predictors of cardiovascular events (p<0.01) in moderate trauma. Glasgow Coma Scale score and increased anterior body depth were the two most significant predictors of cardiovascular events in severe trauma (p<0.05). Table 1 shows the results of the univariate analysis between the complication and non-complication groups for all three levels of trauma. Non-significant predictors of cardiovascular complications across all trauma levels included gender (p=0.17, 1.0, 0.11), history of diabetes (p=0.37, 0.65, 0.19), and hypertension (p=0.50, 0.07, 0.35) for mild, moderate, and severe trauma, respectively.

Conclusion: Psoas radiodensity, bone mineral density, and anterior body depth are significant factors associated with cardiovascular complications. Morphomic factors derived from cross-sectional imaging may aid clinical decision making by identifying high risk patients for in-hospital cardiovascular events following MVCs.

 

9.04 A Pilot Study of Compensatory Perioperative Nutrition in the SICU: Safety and Effectiveness

D. D. Yeh1, C. Cropano1, S. Quraishi1, E. Fuentes1, H. Kaafarani1, J. Lee1, Y. Chang1, G. D. Velmahos1  1Massachusetts General Hospital,Trauma, Emergency Surgery, Surgical Critical Care,Boston, MA, USA

Introduction: Enteral nutrition (EN) is an important component of surgical critical care, yet delivery of prescribed nutrition is often suboptimal.  In surgical patients, EN is commonly interrupted for procedures.  We hypothesized that continuing perioperative nutrition or providing compensatory nutrition would improve caloric delivery without increasing morbidity.

Methods: We enrolled 10 adult (age >18) surgical ICU patients receiving EN who were scheduled for elective tracheostomy and/or percutaneous endoscopic gastrostomy (PEG) between 07/2012-05/2014. In these patients, either perioperative EN was maintained or compensatory nutrition was used (particularly in the case of PEG tube placement). Perioperative EN was defined as continuing tube feeds up to (and sometimes during) operative procedures, whereas compensatory nutrition was defined as a temporary postoperative increase in the hourly EN rate to compensate for interrupted EN. We matched these patients to 40 other patients from the same time period who had tracheostomy and/or PEG placement while adhering to the traditional American Society of Anesthesiologists NPO guidelines. Outcomes in patients receiving perioperative and/or compensatory feeding (FED) were compared to those not receiving it (UNFED) using Pearson's chi-squared and Mann-Whitney tests for proportions and medians, respectively. All tests were two-sided and p<0.05 was considered significant.

Results: A total of 50 eligible subjects were enrolled. There was no difference in age, sex, BMI, APACHE II score, or prescribed calories (TABLE 1). However, patients in the UNFED group had higher rates of PEG placement than the FED group (40% vs. 0%, p=0.02). On the day of the procedure, the FED group received more actual calories (median 1706 vs. 527 kcal, p<0.001) and a higher percentage of prescribed calories (92% vs. 25%, p<0.001). Median caloric deficit on the day of the procedure was also significantly lower in the FED group (175 vs. 1213 kcal, p<0.001). There were no differences in total complications or GI complications between groups.

Conclusion: In our pilot study of surgical ICU patients undergoing tracheostomy and/or PEG tube placement, perioperative and compensatory nutrition resulted in higher caloric delivery and was not associated with increased morbidity. Larger studies are needed to validate our findings and to determine whether aggressive ICU nutrition improves outcomes in critically ill surgical patients.

 

9.05 Risk Factors for Intestinal Infection After Burns: A Population-based Outcomes Study of 541 Patients

K. Mahendraraj1, R. S. Chamberlain1,2,3  1Saint Barnabas Medical Center,Department Of Surgery,Livingston, NJ, USA 2New Jersey Medical School,Department Of Surgery,Newark, NJ, USA 3St. George’s University School Of Medicine,Department Of Surgery,St. George’s, St. George’s, Grenada

Introduction:
Thermal burns are associated with intestinal barrier failure, bacterial translocation, intestinal infections (IF) and sepsis. While the mechanisms for increased gut permeability have been extensively studied, the risk factors for developing IF are poorly understood. This study examines a large cohort of burn patients to assess the demographics and clinical outcomes of patients who develop burn-related IF compared to those who do not.

Methods:
Data on 95,472 patients with third-degree flame burn injuries were abstracted from the Nationwide Inpatient Sample (NIS) database over a ten-year period (2001-2010). IF was defined as any intestinal infection due to Gram-negative Enterobacteriaceae sp., Enterococcus sp., Campylobacter sp., Yersinia sp., or Clostridium difficile. Standard statistical methodology was used.

Results:
541 (0.6%) of burn patients were diagnosed with IF post-burn. Patients who developed IF were significantly older than those without IF (54.7 vs. 40 years old, p<0.001). Males (57.1%) and Caucasians (48.6%) developed IF more often. More extensive third-degree burns were also more common among IF patients (p<0.005). Length of stay (27.6 vs. 7.9 days) and overall inpatient mortality (6.3% vs. 2.6%) were significantly higher in IF patients (p<0.001). The most common comorbidities associated with developing IF were hypertension (31.7%), chronic respiratory illness (17.3%), and diabetes (14.8%), p<0.001. IF patients more often had fluid and electrolyte disorders (44.8%), sepsis (13.3%), burn wound infection (7.4%), and skin graft failure (2%). Multivariate analysis identified age over 60 (OR 1.0), fluid and electrolyte disorders (OR 3.1), peripheral vascular disease (OR 1.7), and multiple burn sites (OR 1.8) as independently associated with IF development, p<0.005. Conversely, TBSA under 10% (OR 0.5) and active smoking (OR 0.4) were associated with a lower risk of developing IF, p<0.005. Risk factors for mortality in IF patients included sepsis (OR 2.4), septic shock (OR 1.8), and acute DVT (OR 2.6), p<0.005.

Conclusion:
IF in burn patients is associated with longer hospitalization, increased mortality, graft failure, sepsis, and other adverse events. The strongest risk factors for IF are fluid and electrolyte disorders, peripheral vascular disease, and multiple severe burn sites. IF is more common in Caucasian males with third-degree burns >20% TBSA and in older patients with multiple comorbidities. Clinicians should be cognizant of these IF risk factors when assessing and monitoring high-risk burn patients in order to decrease morbidity and mortality. Additional research into IF prevention strategies in high-risk burn patients, such as the use of probiotics, is already underway.

9.06 Safety and Effectiveness of Pre-hospital Tourniquet Use in 110 Patients with Extremity Injury

M. Scerbo1, E. Taub1, J. P. Mumm1, K. S. Gates1, J. B. Holcomb1, B. A. Cotton1  1University Of Texas Health Science Center At Houston,Acute Care Surgery/Surgery,Houston, TX, USA

Introduction: Field use of tourniquets (TQ) in military medicine is regarded as an effective adjunct for preventing hemorrhage-related deaths from extremity trauma. Their use in the civilian setting, however, has not been widely adopted. The most recent publication of the Guidelines for Field Triage of Injured Patients (2011) gives no recommendation for pre-hospital TQ use, citing limited evidence to support their use and a potential for increased injury. The purpose of this study was to assess whether pre-hospital TQ use in the civilian setting can be (1) effective in hemorrhage control and (2) safely applied.

Methods: Following IRB approval, patients arriving to a level-1 trauma center between 01/2009 and 05/2013 were reviewed. All patients with pre-hospital TQ application were included in the analysis. Cases were adjudicated and assigned the following designations: absolute indication (underwent operation within 2 hours for extremity injury, arterial or venous injury requiring repair/ligation, or traumatic amputation), relative indication (major musculoskeletal or soft-tissue injury requiring operation >2 hours after arrival, or documented large blood loss at the scene), or non-indicated. Patients with absolute or relative indications were placed into the INDICATED group; all others were placed into the non-INDICATED cohort. An orthopedic, trauma, or hand surgeon then adjudicated iatrogenic injuries resulting from TQ application. Univariate analysis was performed to compare groups. Logistic regression was then conducted to assess independent predictors of requiring an additional or replacement pre-hospital TQ to control hemorrhage.

Results: 110 patients had pre-hospital TQ placement: 94 patients (85%) in the INDICATED group and 16 (15%) in the non-INDICATED group. With the exception of a higher rate of blunt mechanism (70 vs. 43%; p=0.048), there were no differences in demographics, transport, or scene vitals between groups. INDICATED patients were more likely to have a lower extremity TQ (46 vs. 6%; p=0.007) but were less likely to have pre-hospital bleeding controlled (71 vs. 100%; p=0.012). 28% of INDICATED patients had their TQ removed in the ED (vs. 100%; p<0.001). Only 16% of INDICATED patients had an additional or new TQ applied in the ED (vs. 7%; p=0.420). Venous thromboembolic events (4.3 vs. 0.0%; p=0.401) and peripheral nerve injuries (5.3 vs. 0.0%; p=0.345) were similar. The amputation rate was 31% for INDICATED patients (vs. 0%; p=0.009). There were no nerve palsies or tissue/muscle injuries leading to amputation/debridement attributable to TQ use in either group. After controlling for scene vital signs and mechanism of injury, the likelihood of requiring an additional or new TQ after arrival to the ED was independently associated with ground transport (odds ratio 6.3, 95% C.I. 1.47-29.08; p=0.014).

Conclusion: Our study suggests that pre-hospital personnel can safely and effectively use TQ in patients with severe extremity injuries. 

9.07 Risk Prediction Model for Mortality in the Moribund Surgical Patient

L. E. Kuo1, G. C. Karakousis1, K. D. Simmons1, D. N. Holena1, R. R. Kelz1  1Hospital Of The University Of Pennsylvania,Department Of Surgery,Philadelphia, PA, USA

Introduction: Surgeons struggle to counsel families on the role of surgery and the likelihood of survival in the moribund patient. A recent study demonstrated a nearly 50% 30-day survival rate for the moribund surgical patient, but without knowing which factors are associated with survival, it is difficult to advise patients and their family members on the best course of action for a specific patient. We sought to develop a risk prediction model for postoperative inpatient death in the moribund surgical candidate.

Methods: Using ACS NSQIP data from 2007-2012, we identified ASA class 5 (moribund) patients who underwent an operation by a general surgeon. The sample was randomly divided into development and validation cohorts. In the development cohort, patient characteristics that could be readily discerned on preoperative evaluation were considered for inclusion in the predictive model. The primary outcome measure was in-hospital mortality, and factors found to be significant in univariate logistic regression were entered into a multivariable model. Points were assigned to these factors based on their beta coefficients, and this model was used to generate a simple scoring system to predict inpatient mortality. Models were developed separately for operations performed within 24 hours of admission and operations performed at least one day after admission, as a means of differentiating between patients who presented to the hospital in the moribund state and those whose condition reflected deterioration over their hospital course. Each model was tested on the validation cohort.
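The coefficient-to-points step generalizes as in the sketch below, which uses the natural logs of the odds ratios reported in the Results as stand-in beta coefficients; it is an illustration of the general technique, not the published scoring system.

    # Sketch: convert logistic-regression betas into integer risk points.
    import math

    # Stand-in betas = ln(odds ratio) from the Results section.
    betas = {
        "functional_dependence": math.log(2.11),
        "dialysis_30d": math.log(1.63),
        "recent_mi": math.log(1.52),
        "ventilator_dependence": math.log(2.17),
    }

    # Scale by the smallest coefficient and round to the nearest integer.
    smallest = min(betas.values())
    points = {k: round(b / smallest) for k, b in betas.items()}

    def risk_score(patient_flags: dict) -> int:
        """Sum the points for each risk factor present."""
        return sum(points[k] for k, present in patient_flags.items() if present)

    # Example: ventilator-dependent dialysis patient -> 2 + 1 = 3 points.
    print(risk_score({"ventilator_dependence": True, "dialysis_30d": True,
                      "functional_dependence": False, "recent_mi": False}))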

Results: 3,130 patients were included in the study. In-hospital mortality was 50.5% in the overall sample. In multivariable regression modeling, patient characteristics associated with in-hospital mortality were age, functional status (odds ratio 2.11, confidence interval 1.39-3.19), dialysis within the previous 30 days (1.63, 1.22-2.32), recent myocardial infarction (1.52, 1.04-2.22), and ventilator dependence (2.17, 1.43-3.30). For patients undergoing surgery within 24 hours of admission, body mass index was also associated with inpatient death. The scoring system generated from this model accurately predicted in-hospital mortality in both the development and validation cohorts for patients undergoing surgery within and after 24 hours (Table 1). 

Conclusion: A simple risk prediction model using readily available preoperative patient characteristics can be used to accurately predict postoperative mortality in the moribund patient undergoing surgery. This scoring system can easily be applied in the clinical setting to assist in counseling and decision-making.
 

9.08 The Impact of Ratio Based Blood Products Transfusion on Solid Organ Donations in Trauma Patients

T. Orouji Jokar1, B. Joseph1, M. Khalil1, N. Kulvatunyou1, B. Zangbar1, A. Tang1, T. O’Keeffe1, L. Gries1, R. Latifi1, R. S. Friese1, P. Rhee1  1University Of Arizona,Trauma/Surgery/Medicine,Tucson, AZ, USA

Introduction: Aggressive management with blood products is known to improve organ donation in trauma patients. The aim of this study was to evaluate the impact of blood product transfusion ratios on solid-organ procurement rates. We hypothesized that 1:1 (PRBC:FFP) ratio transfusion (RT) increases solid organ donation in trauma patients.

Methods: We performed an 8-year retrospective analysis of all brain-dead trauma patients at our level 1 trauma center. Patients who consented for organ donation and donated solid organs were included. Patients were stratified into two groups: those with 1:1 transfusion (RT) and those without 1:1 transfusion (No-RT). Outcome measures were the number and type of solid organs donated. Logistic regression analysis was performed.

Results: A total of 70 patients who donated a total of 318 solid organs were included. 57.1% (n=40) of donors received 1:1 ratio transfusion. There was no difference in age (p=0.16), mechanism of injury (p=0.3), or systolic blood pressure on admission (p=0.1) between the two groups. Donors in the RT group were more likely to donate livers (82.5% vs. 63.3%, p=0.041) and lungs (42.5% vs. 17.2%, p=0.024), with an overall higher rate of solid organ donation (5±2.1 vs. 4.1±2.4 organs, p=0.03) compared to patients in the No-RT group. RT was independently associated with an increase in solid organ donation (OR [95%CI]: 1.13 [1.05-1.8], p=0.043).

Conclusion: Ratio-based blood product transfusion increases solid organ donation in trauma donors. Aggressive 1:1 resuscitation of trauma patients whose injuries are deemed non-survivable may improve conversion rates among eligible donors.

9.09 Complications Associated with Pelvic Fixation Methods in Combined Pelvic and Abdominal Trauma

R. J. Miskimins1, M. Decker2, T. R. Howdieshell1, S. W. Lu1, S. D. West1  1University Of New Mexico HSC,Department Of Surgery,Albuquerque, NM, USA 2University Of New Mexico HSC,Department Of Orthopedic Surgery,Albuquerque, NM, USA

Introduction: Approximately 50% of blunt trauma cases with pelvic fractures have associated intra-abdominal trauma. Fixation of the anterior pelvis may be performed by open reduction and internal fixation (ORIF) or external fixation (Ex-fix). The approach for ORIF in patients who have undergone laparotomy is often through extension of the laparotomy incision. However, a review of the literature shows no recent articles pertaining to the timing or method of anterior pelvic ring fixation after recent laparotomy, and the optimal method of fixation in these patients is not known. We hypothesized that ORIF performed through extension of the midline laparotomy incision would result in a clinically relevant difference in rates of wound closure and wound complications versus external fixation.

Methods: We identified all patients admitted from 2004 to 2014 who underwent laparotomy and either ORIF of the anterior pelvic ring through extension of the laparotomy incision or Ex-fix of the anterior pelvic ring. A retrospective review was performed, collecting Injury Severity Score (ISS); age; length of stay; rates of ventral hernia, abdominal wound infection, and pelvic abscess; number of units transfused; presence of bowel or bladder injury; and additional operative or interventional procedures performed related to any complication. Continuous variables were analyzed using the Mann-Whitney U test, and Fisher's exact test was used to determine statistical significance for categorical data.

Results: A total of 34 patients were identified from January 2004 to April 2014 who underwent exploratory laparotomy and pelvic fixation: 21 underwent external fixation of the anterior pelvic ring, while 13 underwent open reduction and internal fixation of the anterior pelvic ring by extension of the midline laparotomy incision. There was no difference in ISS, length of stay, age, units of blood products transfused, bowel injury, or bladder injury between the two groups. The two groups had a similar incidence of ventral hernia (38% vs. 19%, p=0.254); however, the ORIF group was significantly more likely to have a laparotomy incision infection (54% vs. 5%, p=0.002), a pelvic abscess (46% vs. 10%, p=0.033), and a need for additional procedures to address complications (13 vs. 6, p=0.023). We did note a significantly higher BMI (32.5 vs. 27.2, p=0.023) in the ORIF group, which could be a confounding factor contributing to the increase in wound complications.

Conclusion: Individuals who have undergone laparotomy and fixation of the anterior pelvic ring are a complex group of patients, with high ISS, long hospital stays, and multiple injuries. The ORIF group experienced significantly higher rates of laparotomy incision infections and pelvic abscesses and required more procedures to manage these complications. These data suggest careful consideration of the method of anterior pelvic ring fixation in patients who also undergo laparotomy.

 

9.10 Effectiveness of Once a Day Enoxaparin for VTE Prophylaxis in the Critically Ill Trauma Patient

S. Morrissey1, N. Ingalls1, P. Chestovich1, D. Frisch2, F. Simon2, D. Fraser1, J. Fildes1  1University Of Nevada School Of Medicine,Las Vegas, NV, USA 2University Medical Center Of Southern Nevada,Las Vegas, NV, USA

Introduction: Trauma patients are known to have a higher incidence of venous thromboembolism (VTE) than other patient populations. The ideal dose of enoxaparin for adequate VTE prophylaxis in the critically ill trauma patient has yet to be determined. Our dosing regimen attempts to minimize missed enoxaparin doses while still achieving adequate anti-factor Xa activity levels. This study evaluates the efficacy of this regimen and examines the patient factors that may contribute to its inadequacy.

Methods: This is a prospective observational study performed in our trauma intensive care unit (TICU). We identified all critically ill trauma patients over the age of 18 admitted to the TICU requiring chemical VTE prophylaxis between December 2013 and January 2014. These patients were started on enoxaparin 40 mg subcutaneously nightly. Peak anti-factor Xa levels were drawn 4 hours after the third dose. Adequate prophylaxis was defined as a peak anti-factor Xa level greater than 0.2. Patient injury patterns and demographics were collected for analysis.

Results: A total of 25 critically ill trauma patients admitted to the TICU were started on chemical prophylaxis. 21 patients (84%) had adequate peak anti-factor Xa levels (Group 1), versus 4 patients (16%) with inadequate levels (Group 2). When demographics and injury patterns were compared between the two groups using a t-test and Pearson's chi-squared test, Group 2 had a significantly higher mean BMI and a higher incidence of lower extremity fractures and spine injuries. 2 of 4 (50%) patients in Group 2 developed superficial venous thrombosis. There were no missed doses in either group.

Conclusion: Based on our data, enoxaparin 40 mg given once nightly provides adequate VTE prophylaxis in the majority of critically ill trauma patients. Not only do our rates of adequate prophylaxis, as measured by anti-factor Xa levels, surpass those reported in the current literature, but this regimen also minimizes missed doses, which could potentially lead to lower rates of VTE.
 

9.11 Defining Fever in the Critically Injured: Test Characteristics of Three Different Thresholds

V. Polcz1, L. Podolsky2, O. Sizar1, A. Farooq1, M. Bukur1, I. Puente1, F. Habib1,2  1Broward Health Medical Center,Trauma,Ft Lauderdale, FL, USA 2Florida International University,Surgery,Ft Lauderdale, FL, USA

Introduction:
Fever remains the most common sign that prompts the work-up for a possible infectious etiology in critically injured trauma patients admitted to the ICU. Yet, the very definition of fever is highly variable, and the test characteristics of the various cut-offs used have not been clearly defined. An accurate cut-off would allow for more precise and cost-effective management of the febrile trauma patient.

Methods:
Charts for 621 trauma patients at our urban Level I trauma center were retrospectively evaluated for fever and culture results. The maximum oral temperature during the 24-hour period prior to obtaining culture samples was used. Temperatures were correlated with positive or negative culture results to determine sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and area under the curve (AUC).
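Each of these test characteristics reduces to 2x2 arithmetic once a cut-off dichotomizes the temperatures. A minimal sketch follows, with hypothetical inputs and the assumption that "febrile" means a temperature at or above the cut-off; it is not the authors' code.

    # Sketch: test characteristics of a fever cut-off against culture results.
    import numpy as np

    def test_characteristics(temps, culture_positive, cutoff):
        temps = np.asarray(temps, dtype=float)
        pos = np.asarray(culture_positive, dtype=bool)
        test_pos = temps >= cutoff       # assumption: febrile = temp at/above cutoff
        tp = np.sum(test_pos & pos)
        fp = np.sum(test_pos & ~pos)
        fn = np.sum(~test_pos & pos)
        tn = np.sum(~test_pos & ~pos)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return {"sensitivity": sens, "specificity": spec,
                "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
                "lr_pos": sens / (1 - spec), "lr_neg": (1 - sens) / spec}

    # Evaluate the conventional definitions and the ROC-derived cut-off, e.g.:
    # for c in (99.75, 100.4, 101.0, 101.5):
    #     print(c, test_characteristics(max_temps_24h, cultures, c))
    # AUC over all thresholds: sklearn.metrics.roc_auc_score(cultures, max_temps_24h)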

Results:
Sensitivity and specificity were calculated using cut-off values of 100.4 F, 101 F, and 101.5 F. All data points are shown in Table 1. Receiver operating characteristic (ROC) analysis identified 99.75 F as the temperature with the best test characteristics. Sensitivity showed an inverse relationship with temperature: 99.75 F exhibited the maximum value of 75.30% (CI: 70.27-79.88), and 101.5 F the minimum value of 25% (CI: 20.87-29.50). Specificity had a direct relationship with temperature, with 99.75 F having the minimum specificity of 59.46% (CI: 51.00-61.00) and 101.5 F the maximum of 92.96% (CI: 88.65-96.00). Positive likelihood ratio (LR) was lowest, 1.86 (CI: 1.51-2.28), at 99.75 F and highest, 3.35 (CI: 2.12-5.95), at 101.5 F. Negative LR was likewise lowest at 99.75 F, 0.42 (CI: 0.33-0.52), and highest at 101.5 F, 0.81 (CI: 0.75-0.86). Positive predictive value (PPV) was highest at 99.75 F, 80.46% (CI: 75.57-84.74), and lowest at 101.5 F, 39.29% (CI: 35.00-43.70). Negative predictive value (NPV) was highest at 99.75 F, 52.07% (CI: 44.27-59.80), and lowest at 101.5 F, 39.29% (CI: 35.00-43.70). AUC was inversely related to temperature, with a maximum of 0.732 (CI: 0.690-0.774) at 99.75 F and a minimum of 0.498 (CI: 0.450-0.546) at 101.5 F.
Conclusion:
These results suggest that none of the current cut-offs used to define fever accurately predicts an infectious etiology in febrile patients. While a temperature of 99.75 F demonstrated the best test characteristics, none of the commonly accepted definitions of fever showed a strong correlation with culture results. Further research is warranted to identify biomarkers that accurately detect infectious processes in trauma patients.

9.12 Prognostic Value of Cardiac Troponin I Following Severe Traumatic Brain Injury

S. S. Cai2, B. W. Bonds1, D. M. Stein1  1University Of Maryland,R Adams Cowley Shock Trauma Center,Baltimore, MD, USA 2University Of Maryland,School Of Medicine,Baltimore, MD, USA

Introduction: Recent studies have reported that elevated cardiac troponin I (cTnI) is frequently observed following severe traumatic brain injury (TBI) and is associated with poor outcomes. The exact relationship is not well understood, and the clinical applicability of cTnI in severe TBI patients has yet to be determined. The present study investigated the relationship between cTnI levels and risk of mortality.

Methods: Adult patients (≥ 18y) with severe TBI (brain Abbreviated Injury Scale [AIS] score ≥ 3) admitted to a level 1 trauma center from July 2007 to January 2014 were reviewed. Patients with non-isolated severe TBI (AIS ≥ 3 in other anatomic regions) and those without cTnI measurements within 24 hours of admission were excluded. Four cTnI strata were predefined: undetectable (< 0.06 ng/mL) and detectable tertiles (0.06-0.1 ng/mL, 0.1-0.25 ng/mL, and > 0.25 ng/mL). Kaplan-Meier survival analysis and Cox proportional hazards modeling were applied. Stratified analysis was performed by age (≤ 65y or > 65y) and admission Glasgow Coma Scale (GCS) score (mild 13-15, moderate 9-12, and severe 3-8).
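A sketch of this style of analysis using the lifelines package is shown below; the file name and column names (days, died, ctni, age, iss) are hypothetical, and the covariate set is a simplification, so this is not the authors' code.

    # Sketch: Kaplan-Meier survival and a Cox model across predefined cTnI strata.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    df = pd.read_csv("tbi_cohort.csv")  # assumed: one row per patient

    # Kaplan-Meier: detectable (>= 0.06 ng/mL) vs undetectable cTnI.
    kmf = KaplanMeierFitter()
    for detectable, grp in df.groupby(df["ctni"] >= 0.06):
        kmf.fit(grp["days"], event_observed=grp["died"],
                label=f"detectable cTnI: {detectable}")
        kmf.plot_survival_function()

    # Cox proportional hazards: ordered cTnI stratum, adjusted for age and severity.
    df["ctni_stratum"] = pd.cut(df["ctni"], [0.0, 0.06, 0.1, 0.25, float("inf")],
                                labels=False, right=False)
    cph = CoxPHFitter()
    cph.fit(df[["days", "died", "ctni_stratum", "age", "iss"]],
            duration_col="days", event_col="died")
    cph.print_summary()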

Results: Of a total of 2711 patients, elevated cTnI was found in 502 (18.5%). Five-day survival was significantly lower in patients with detectable cTnI than in those with undetectable cTnI (72.26% vs. 88.77%, p < 0.0001). Risk of mortality increased with increasing cTnI levels in a dose-dependent manner (p-trend < 0.0001). Patients in the highest cTnI stratum had a 1.55-fold higher hazard of mortality (95% CI: 1.18-2.04, p-trend = 0.0002) compared with patients with undetectable cTnI after adjustment for age, injury type, and injury severity. Further stratification underscored the positive association between cTnI levels and risk of mortality, particularly in patients ≤ 65y (HR: 3.10, 95% CI: 2.09-4.59, p-trend < 0.0001) or with a severe admission GCS (HR: 1.57, 95% CI: 1.16-2.14, p-trend = 0.0006). A similar association was not observed in patients > 65y or with a mild or moderate admission GCS (Table 1).

Conclusion: Elevated cTnI is an independent predictor of mortality following severe TBI and is significantly associated with higher risk of mortality via a positive, non-linear, dose-dependent relationship. This association is predominantly seen in patients ≤ 65y or with a severe admission GCS. Elevated cTnI may not be a useful predictor of mortality in patients > 65y or with a mild or moderate admission GCS.

9.13 A Case for Less Workup in Near Hangings

M. Subramanian1, L. Liu1, T. Sperry1, T. Hranjec1, C. Minshall1, J. Minei1  1University Of Texas Southwestern Medical Center,Burn, Trauma, Critical Care / General Surgery,Dallas, TX, USA

Introduction:
No guidelines exist for evaluating and managing patients after near hanging. As a result, most patients receive a comprehensive workup, regardless of mental status or exam findings. We hypothesize that patients with a normal neurologic exam and subjective neck pain, but no other complaints or exam findings, require no additional workup.

Methods:
We reviewed the charts of adult trauma patients at a Level I trauma center who presented after an isolated near hanging episode between 1995 and 2013. One patient was excluded because he sustained an 80-foot fall after near hanging. Patients were stratified by initial GCS score into low (<15) and normal (=15) groups and compared using univariate analysis.

Results:
In total, 127 patients presented after near hanging: 45 (35.4%) in the low and 82 (64.6%) in the normal group. Seven (8.5%) patients in the normal group reported pain or tenderness on physical examination but also had at least one of the following signs or symptoms: dysphagia, dysphonia, stridor, or subcutaneous air. Patients in the normal group received 133 CT scans and 7 MRIs, with identification of 2 neck injuries. Both injuries, a C5 facet fracture and a vertebral artery dissection, were identified in patients with additional signs/symptoms present on examination; neither required intervention. The presence of at least one concerning sign or symptom in patients with GCS 15 had 100% sensitivity and 94% specificity for identifying an injury.

Conclusion:
Screening for dysphagia, dysphonia, stridor, and subcutaneous air on examination in patients with a normal neurologic examination after an attempted hanging would reduce the number of unnecessary imaging studies and decrease cost without missing significant injuries. Despite the low incidence of injuries, patients with concerning signs or symptoms and those with a decreased GCS score should still be thoroughly studied.
 

9.14 Hospital Readmission after Traumatic Brain Injury: Results from the MarketScan Database

J. K. Canner1, F. Gani1, S. Selvarajah1, A. H. Haider1, E. B. Schneider1  1Johns Hopkins University School Of Medicine,Department Of Surgery,Baltimore, MD, USA

Introduction: Thirty-day readmission after discharge from inpatient care for traumatic brain injury (TBI) among patients under the age of 65 years in the United States has not been well reported. This study examined readmission to acute care in a population of patients under the age of 65, all of whom were covered by employer-provided private health insurance.

Methods: The MarketScan database from 2010 to 2012, which includes over 50 million patients under the age of 65 covered through employer-sponsored insurance plans, was queried. Patients hospitalized with a primary diagnosis of TBI, and who had no other injury associated with an Abbreviated Injury Scale (AIS) score of 3 or greater to any non-head body region, were identified and included for study. Patients with fewer than 30 days of follow-up were excluded. Outcomes of interest included readmission to inpatient care within 30 days of index discharge and primary diagnosis at readmission. Multivariable logistic regression, controlling for demographic, injury, and hospital-level variables, examined factors associated with readmission.

Results: A total of 27,998 patients in the MarketScan database with at least one eligible TBI hospitalization met inclusion criteria.  Mean (SD) patient age was 33.9 (20.2) (Figure), 65.3% were male, and 8.8% had a Charlson Comorbidity Index of 2 or greater.  Mean (SD) Injury Severity Score (ISS) was 13.3 (6.7) and 73.1% had a head AIS ≥ 3. Mean (SD) length of stay was 4.4 (8.5) days. Patient disposition at discharge varied as follows: 79.3% were discharged home, 5.8% to inpatient rehabilitation, 5.1% to another facility, and 3.8% died in hospital. Of the 26,922 patients discharged alive, 1,709 (6.4%) were re-hospitalized within 30 days. Among readmitted patients, 27.8% carried a TBI-related primary diagnosis, more than half of which (56.5%) involved some form of intracranial hemorrhage.  Other common primary readmission diagnoses included infection (4.0% of all readmissions), alcohol dependence (2.7%), venous thromboembolism (2.3%), and post-concussion syndrome (2.1%).  Patients who were older (OR: 1.01 per additional year of age), had a head AIS of 3 or greater (OR: 1.10), had one (OR: 1.29) or more (OR: 2.04) comorbidities, or had a longer index length of stay (OR: 1.02 per additional day) demonstrated increased odds of being re-hospitalized within 30 days (all p<0.001).

Conclusion: Patients discharged from inpatient care for TBI are at risk of readmission.  Further research is warranted to better understand specific factors associated with readmission and how consideration of these factors at the time of discharge planning might reduce patient readmission.

9.01 Improving Predictive Value of Trauma Scoring Through Integration of ASA-PS with ISS

D. Stewart1, C. Janowak1, H. Jung1, A. Liepert1, A. O’Rourke1, S. Agarwal1  1University Of Wisconsin,Surgery,Madison, WI, USA

Introduction: Many methods exist for predicting mortality among adult trauma patients; however, most systems ignore patient comorbidity, a significant predictor of outcome, in their calculations. The American Society of Anesthesiologists Physical Status (ASA-PS), a well-validated and easy-to-use scale, is an assessment of preoperative status that has been shown to accurately predict postoperative mortality. Using the ASA-PS as a marker of cumulative comorbidity severity, we sought to test whether we could improve the predictive power of the Injury Severity Score (ISS), the most commonly utilized trauma grading system, with respect to mortality, major complication, and discharge disposition.

Methods: A retrospective review of a prospectively collected and internally validated database at an academic Level I trauma center was performed for consecutive adult admissions between 2009-2013. Abbreviated Injury Scale (AIS) scores were measured by region (head/neck, face, thorax, abdomen, extremities, general) and severity of injury (1 to 5). ISS was calculated by summing the squares of the three most injured regions [(AIS1)² + (AIS2)² + (AIS3)²]. ASA-PS scores were assigned based on patient comorbidities and then integrated with the traditional ISS in a variety of permutations, including adjustment of the ASA-PS for patient age >70 and use of individual AIS components of the ISS. We assessed these models for predictive ability, with a primary outcome of mortality and secondary outcomes of major complications (per National Trauma Data Bank (NTDB) definitions) and discharge disposition, using receiver operating characteristic (ROC) analysis. These were compared with the ISS.
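For concreteness, the conventional ISS and the best-performing hybrid reported in the Results can be written as two short functions. The abstract does not specify the form of the age modification beyond an adjustment for age >70, so the version below (one ASA-PS class added for age >70, capped at 5) is an assumption for illustration only.

    # Sketch: conventional ISS vs. the ISS/ASA-PS hybrid from the Results.
    def iss(region_ais):
        """Sum of squares of the three highest region AIS scores (1-5)."""
        a1, a2, a3 = sorted(region_ais, reverse=True)[:3]
        return a1 ** 2 + a2 ** 2 + a3 ** 2

    def hybrid_score(region_ais, asa_ps, age):
        """(AIS1)^2 + (AIS2)^2 + (age-modified ASA-PS)^2.
        Age modification is an illustrative assumption:
        add one ASA-PS class for age > 70, capped at class 5."""
        asa_mod = min(asa_ps + (1 if age > 70 else 0), 5)
        a1, a2 = sorted(region_ais, reverse=True)[:2]
        return a1 ** 2 + a2 ** 2 + asa_mod ** 2

    print(iss([3, 4, 2, 0, 5, 1]))                             # 25 + 16 + 9 = 50
    print(hybrid_score([3, 4, 2, 0, 5, 1], asa_ps=3, age=75))  # 25 + 16 + 16 = 57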

Results: All of the ISS/ASA-PS hybrid formulas outperformed ISS alone in predictive power for mortality, major complication, and discharge disposition. The best overall permutation, (AIS1)² + (AIS2)² + (Age-Modified ASA-PS)², yielded an area under the ROC curve (AUC) of 0.888 for mortality, compared with 0.853 for ISS (p<0.001). Similar differences were seen for discharge disposition (hybrid AUC=0.743; ISS AUC=0.639, p<0.001) and major complication (hybrid AUC=0.761; ISS AUC=0.719, p<0.001).

Conclusion: Incorporating ASA-PS into trauma scoring is simple and more predictive of mortality, major complication, and discharge disposition than the traditional ISS metric. Replacing ISS with this new method, which takes patient age and comorbid condition into account through adaptation of the ASA-PS, improves prognostication of outcomes and enables care providers to prioritize resources for injured patients.

 

9.02 Acute Ethanol Intoxication Inhibits Platelet Function in Healthy Individuals

A. Slaughter1,2, M. P. Chapman1,2, A. Banerjee1, E. Gonzalez1,2, H. B. Moore1,2, E. E. Moore1,2  1University Of Colorado Denver,Surgery,Aurora, CO, USA 2Denver Health Medical Center,Surgery,Aurora, CO, USA

Introduction: Despite the established effects of moderate, long-term ethanol consumption on platelet function, data on the impact of acute ethanol exposure, based on homotypic platelet aggregometry, remain contradictory. Thus, the role of acute ethanol intoxication in the pathogenesis of trauma-induced coagulopathy has not been elucidated. Recently developed whole blood viscoelastic assays, however, provide the opportunity to better evaluate the effect of acute ethanol exposure on the hemostatic capacity of platelets. We hypothesized that acute ethanol intoxication would impair platelet function in otherwise healthy individuals.

Methods: Healthy volunteers (n=7) participated in the study. Baseline venous blood samples for kaolin-activated whole blood thromboelastography (TEG) and platelet mapping (PM) were obtained at the beginning of the study. Additional blood samples were drawn and incubated with ethanol for 1 h (in vitro exposure). Participants then consumed ethanol to legal intoxication, a blood alcohol content of >0.1 g/dL, as monitored by repeated breathalyzer testing. Blood was drawn after 1 h of sustained intoxication (in vivo exposure). We then repeated TEG and PM on the post-incubation and post-intoxication samples. Percentage platelet inhibition at the adenosine diphosphate (ADP) and thromboxane A2 (TxA2) receptors following ethanol exposure was calculated.
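Percentage inhibition in TEG platelet mapping is conventionally derived from the maximum amplitudes (MA) of the fibrin-only, agonist-stimulated, and thrombin-activated tracings; the sketch below encodes that commonly described formula as an assumption, not the authors' exact computation, and the MA values are hypothetical.

    # Sketch: percentage platelet inhibition from TEG Platelet Mapping MA values.
    # Assumed formula (as commonly described for the assay):
    #   %aggregation = (MA_agonist - MA_fibrin) / (MA_thrombin - MA_fibrin) * 100
    #   %inhibition  = 100 - %aggregation
    def percent_inhibition(ma_agonist, ma_fibrin, ma_thrombin):
        aggregation = (ma_agonist - ma_fibrin) / (ma_thrombin - ma_fibrin) * 100.0
        return 100.0 - aggregation

    # Hypothetical MA values (mm) for an ADP-stimulated channel:
    print(percent_inhibition(ma_agonist=40.0, ma_fibrin=10.0, ma_thrombin=60.0))  # 40.0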

Results: The platelet TxA2 receptor, stimulated by arachidonic acid (AA), demonstrated a significant increase in median (IQR) percentage inhibition from baseline following in vivo ethanol intoxication, 50.4% (27.9) vs. 75.2% (25.3) (p=0.018). Likewise, the TxA2 receptor demonstrated a significant increase in percentage inhibition from baseline following in vitro ethanol incubation, 50.4% (27.9) vs. 75.6% (30.35) (p=0.046). ADP receptor percentage inhibition comparing baseline and post-intoxication was 40.2% (35.7) vs. 61.3% (35.1) (p=0.398), and comparing baseline and post-incubation was 40.2% (35.7) vs. 64.65% (42.6) (p=0.917); neither difference was significant.

Conclusion: Acute ethanol intoxication significantly impaired platelet function via the TxA2 receptor, and this inhibition followed both in vivo and in vitro exposure. Our results therefore suggest that platelet dysfunction could exacerbate coagulopathy in intoxicated trauma victims.

9.03 BMI is Inversely Proportional to Need for Therapeutic Operation after Abdominal Stab Wound

M. B. Bloom1, E. J. Ley1, D. Z. Liou1, T. Tran1, R. Chung1, N. Melo1, D. R. Margulies1  1Cedars-Sinai Medical Center,Los Angeles, CA, USA

Introduction: Several authors have examined the relationship between trauma patient body mass index (BMI) and outcomes in blunt trauma and polytrauma. Less attention has been paid to the need for therapeutic intervention in penetrating trauma. We sought to determine whether increasing BMI is protective in abdominal stab wounds and predictive of the need for intervention.

Methods: We conducted a review of all patients presenting with abdominal and flank stab wounds at an urban level I trauma center from January 1, 2000 to December 31, 2012. Patients were divided into four groups based on BMI. Abstracted data included baseline demographics, physiologic data, and characterization of whether the stab wound had violated the peritoneum, caused intra-abdominal injury, or required an operation that was ultimately therapeutic. Patients who were safely observed without an operation were considered as having no intra-abdominal injuries but were excluded from the peritoneal violation analysis. The one-sided Cochran-Armitage trend test was used for significance testing of the protective effect.
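The trend test has a closed form, so it can be computed directly from group counts; in the sketch below, the therapeutic-operation counts are back-calculated from the rates reported in the Results and are therefore approximate. This is an illustration of the technique, not the authors' analysis code.

    # Sketch: one-sided Cochran-Armitage test for trend across ordered BMI groups.
    import numpy as np
    from scipy.stats import norm

    def cochran_armitage(events, totals, scores=None):
        events = np.asarray(events, dtype=float)
        totals = np.asarray(totals, dtype=float)
        scores = np.arange(len(events)) if scores is None else np.asarray(scores, float)
        n, r = totals.sum(), events.sum()
        p_bar = r / n                                   # pooled event rate
        t = np.sum(scores * (events - totals * p_bar))  # trend statistic
        var = p_bar * (1 - p_bar) * (np.sum(totals * scores ** 2)
                                     - np.sum(totals * scores) ** 2 / n)
        z = t / np.sqrt(var)
        return z, norm.sf(abs(z))   # one-sided p in the direction of the trend

    # Therapeutic operations by BMI group (counts approximated from 67%, 44%, 39%, 20%):
    z, p = cochran_armitage(events=[4, 86, 15, 2], totals=[6, 195, 38, 10])
    print(z, p)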

Results: Of 281 patients with abdominal stab wounds, 249 had complete data for evaluation, grouped as BMI <18.5 (underweight, n=6), BMI 18.5-29.9 (normal to overweight, n=195), BMI 30-35 (obese, n=38), and BMI >35 (severely obese, n=10). There were no statistically significant differences between groups with respect to age, GCS score, SBP, ISS, AIS subtypes, or mortality, but greater BMI was more prevalent in females (p=0.015). All 6 patients with BMI <18.5 had peritoneal violation, and 5/6 (83%) had intra-abdominal injury. The rate of peritoneal violation trended downward as BMI increased (100%, 86%, 77%, 75%; p=0.147). Increasing BMI was associated with a significant decrease in actual visceral injury (83%, 56%, 50%, 30%; p=0.022). Of the 6 patients with BMI <18.5, 4 (67%) had intra-abdominal injury requiring an operation that was therapeutic, whereas among those with BMI >35, only 2/10 (20%) did. The rate progressively decreased as BMI rose (67%, 44%, 39%, 20%; p=0.041).

Conclusion: Increased BMI protects patients with abdominal stab wounds and is associated with both lower rates of visceral injury and a reduced need for operation. Heavier patients may be more suitable for observation and serial exams, while very thin patients are more likely to require an operation.