| id (string, length 1–5) | prompt (list, length 1) | completion (list, length 1) | Label (string, 3 classes) |
|---|---|---|---|
13812
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Hospital-Based Work Support Intervention for Female Cancer Patients – A Multi-Centre Randomised Controlled TrialEffect of Hospital-Based Work Support Intervention\n\n\n\nABSTRACT.OBJECTIVE:\nOne key aspect of cancer survivorship is return-to-work. Unfortunately, many cancer survivors face problems upon their return-to-work. For that reason, we developed a hospital-based work support intervention aimed at enhancing return-to-work. We studied effectiveness of the intervention compared to usual care for female cancer patients in a multi-centre randomised controlled trial.\n\nABSTRACT.METHODS:\nBreast and gynaecological cancer patients who were treated with curative intent and had paid work were randomised to the intervention group (n = 65) or control group (n = 68). The intervention involved patient education and support at the hospital and improvement of communication between treating and occupational physicians. In addition, we asked patient's occupational physician to organise a meeting with the patient and the supervisor to make a concrete gradual return-to-work plan. Outcomes at 12 months of follow-up included rate and time until return-to-work (full or partial), quality of life, work ability, work functioning, and lost productivity costs. Time until return-to-work was analyzed with Kaplan-Meier survival analysis.\n\nABSTRACT.RESULTS:\nReturn-to-work rates were 86% and 83% (p = 0.6) for the intervention group and control group when excluding 8 patients who died or with a life expectancy of months at follow-up. Median time from initial sick leave to partial return-to-work was 194 days (range 14–435) versus 192 days (range 82–465) (p = 0.90) with a hazard ratio of 1.03 (95% CI 0.64–1.6). Quality of life and work ability improved statistically over time but did not differ statistically between groups. Work functioning and costs did not differ statistically between groups.\n\nABSTRACT.CONCLUSION:\nThe intervention was easily implemented into usual psycho-oncological care and showed high return-to-work rates. We failed to show any differences between groups on return-to-work outcomes and quality of life scores. Further research is needed to study which aspects of the intervention are useful and which elements need improvement.\n\nABSTRACT.TRIAL REGISTRATION:\nNederlands Trial Register (NTR) 1658\n\n\nBODY.INTRODUCTION:\nIn recent years, advances in cancer screening and cancer treatment have improved the survival rates for patients with cancer. An increasing number of cancer patients are therefore able to live many years beyond the original cancer diagnosis and face new challenges upon cancer survivorship. For cancer patients of working age, returning to work is a key aspect of survivorship because it is often experienced as an important part of their recovery [1]. 
Furthermore, work contributes to personal, social, and economic well-being, and therefore return-to-work is associated with the quality of life of cancer patients [2]–[4].\nUnfortunately, not all cancer patients are able to return-to-work and many of these patients have more adverse work outcomes in comparison to the general population. For instance, the risk of unemployment is estimated to be 37% higher for cancer patients compared to non-cancer controls [5] and return-to-work rates are estimated to vary between 30–93% depending on for instance cancer type and treatment [6], [7]. Furthermore, a portion of cancer patients face a decrease in income [8] and suffer from impaired work functioning compared to the general population [9], [10]. Finally, the employer and the society at large are also affected due to the costs of absenteeism, disability pension, and loss of productivity [11].\nIntervention studies aimed at enhancing the return-to-work of cancer patients are rare, especially randomised controlled trials [12], [13]. However, we developed an intervention based on previous studies that demonstrated effective results for enhancing the return-to-work of cancer patients [12], and we developed this intervention together with various stakeholders involved in the return-to-work process of cancer patients [14]. An early intervention – meaning soon after diagnosis or early in treatment - is most appropriate because the longer the duration of sick leave, the more difficult return-to-work is to achieve [15]. For delivering an early intervention, a hospital-based intervention is most appropriate, as most cancer patients do not have contact with their employer or occupational physician during early phases of their cancer treatment and their advice seems to be influential [16], [17]. In addition, previous studies have shown that early interventions could be most effective [12]. Furthermore, return-to-work should be part of the complete psycho-oncological care package and should not be dealt with in isolation [18].\nOur hypothesis is that a hospital-based intervention will enhance the return-to-work of cancer patients, as work is not typically addressed at the hospital [19]. Furthermore, an important and modifiable prognostic factor for the return-to-work of cancer patients is self-assessed work ability which varies according to treatment type and cancer diagnosis [7]. Self-assessed work ability may readily be improved by providing patient education and support that addresses misconceptions concerning return-to-work [20]. The objective of this study was to determine the effect of a hospital-based work support intervention for cancer patients on return-to-work and quality of life, which was achieved.\n\nBODY.MATERIALS AND METHODS:\nThe medical ethics committee of the Academic Medical Center approved the study, and the medical ethics committees of the six participating hospital advised positively regarding feasibility of the study. Patients signed informed consent forms prior to participation in the study. This trial was registered at the Dutch National Registry:NTR1658 (http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1658).\nBoth the design of the study and the content of the hospital-based work support intervention have been described in detail elsewhere [14]. We used items from the CONSORT statement for improving the quality of reporting randomised trials [21]. 
The protocol for this trial and supporting CONSORT checklist is available as supporting information; see Checklist S1 and Protocol S1.\nBODY.PATIENTS:\nCancer patients between 18 and 60 years of age who had been treated with curative intent at one of the six participating hospital departments, had paid work, and who were on sick leave were eligible to participate. Treatment with curative intent was defined as an expected 1-year survival rate of approximately 80%. We excluded patients who were not sufficiently able to speak, read, or write Dutch, had a severe mental disorder or other severe co-morbidity, and for whom the primary diagnosis of cancer had been made more than two months previously. We monitored non-response by assessing the proportion of patients who participated in comparison to all eligible patients.\n\nBODY.HOSPITAL-BASED WORK SUPPORT INTERVENTION:\nThe hospital-based work support intervention began a few weeks after the onset of the study and was spread across a maximum of 14 months. The hospital-based work support intervention consisted of the following components: 1) delivering patient education and support at the hospital, as part of usual psycho-oncology care; 2) improving communication between the treating physician and the occupational physician; and 3) drawing-up a concrete and gradual return-to-work plan in collaboration with the cancer patient, the occupational physician, and the employer [14]. We integrated patient education and support regarding return-to-work into the usual psycho-oncological care in the form of 4 meetings that lasted 15 minutes each. This care was delivered by an oncology nurse or medical social worker (hereafter referred to as nurse). In addition, a least one letter was sent to the occupational physician to enhance communication. We also asked the occupational physicians to organise a meeting between the patient and the employer to draw-up a return-to-work plan. The key aspects of the hospital-based work support intervention included the patient education and support at the hospital and the provision of information to the occupational physician. In the Netherlands, patients must provide their consent to allow medical information to be sent from a treating physician to an occupational physician. Therefore, we were only able to inform the occupational physicians of patients who provided this form of consent.\n\nBODY.STUDY DESIGN:\nThis study was designed as a multi-centre randomised controlled trial with a follow-up period of two years. Here we report the results of the first follow-up year. Eight departments from six hospitals in the Netherlands participated in the study.\nThe treating physician or nurse informed the cancer patients of the study a few weeks after their diagnosis and determined patient eligibility by assessing the inclusion and exclusion criteria. The research team contacted patients who were eligible and willing to participate and enrolled these patients in the study. After the patients had filled in the baseline questionnaire, one of us [ST] allocated the eligible patients to the intervention or to the control group using the computerised randomisation programme ALEA [22]. The allocation ratio was set as equal in the programme. Stratified randomisation was applied for two important prognostic factors for return-to-work [23]; age (<50 or ≥50) and cancer diagnosis (i.e. hospital department). Minimisation was applied to equalise group sizes. 
The patient date of each consecutive patient were entered in the programme and according to the conditions mentioned above the programme randomly assigned the patients to the intervention or the control group. The allocation was irrevocable and was not changed during the study nor during the analysis. Patients and providers were immediately informed of the allocation as it was impossible to conceal allocation for this intervention.\nQuestionnaires were administrated to the patients at baseline and at 6 and 12 months of follow-up. The follow-up questionnaires were mailed to the patients' homes with a postage-paid envelope enclosed. Both the questionnaire data and the information from the nurses who delivered the intervention were gathered for the economic evaluation. Outcome measures and cancer treatment were assessed at all time points. Socio-demographic factors and prognostic factors for time until return-to-work were assessed at baseline only.\n\nBODY.MEASUREMENTS:\nThe primary outcomes were return-to-work and quality of life. The intervention was considered effective if patients in the intervention group had a significantly shorter time to return-to-work (in days) than did patients in the control group, provided that their quality of life had not significantly deteriorated.\nReturn-to-work was measured both as the rate of return-to-work at one year of follow-up and as the number of calendar days between the first day of sick leave and the first day at work (either part-time or full-time) that was sustained for at least 4 weeks. Quality of life was assessed with the Short Form-36 (SF-36) [24], which included all subscales and a Visual Analogue Scale (VAS). Secondary outcomes included work ability, work functioning, and costs. Work ability was assessed using the first question of the Work Ability Index (WAI) [25]. Impaired work functioning was assessed with the Work Limitation Questionnaire (WLQ) [26], which could only be filled in if a patient had (partly) returned to work.\nWe conducted the economic evaluation from a societal perspective. We included lost productivity costs and work adjustments costs for both groups and costs to deliver the intervention for the intervention group. Productivity loss was determined by multiplying the cumulative net number of hours on sick leave by the estimated price of productivity loss based on age and gender [27]. We assumed that when a patient partially returned to work, his/her productivity was 100% during the hours of partial work resumption. We calculated productivity losses using both the human capital approach and the friction costs approach [27]. For the human capital approach, all hours on sick leave were included for 100%. For the friction costs approach, all hours on sick leave with a maximum of 167 days were included for 80% [27]. Costs to deliver the intervention were determined by combining the training costs and the costs to deliver the intervention. Training costs consisted of trainer costs, study material costs, and attendance costs for the nurses. Costs to deliver the intervention consisted of the mean hour of investment multiplied by the average nurse wage and subsequently multiplied by 42% overhead costs [27], and the mean hour of investment of the secretary for sending of the letters to the occupational physician, as well as the printing costs for the information leaflet. 
As the letter from the treating physician to the occupational physician was a copy of the letter to the general practitioner, no additional costs for the treating physician to produce these letters were taken into account.\nThe socio-demographic factors measured at baseline included the number of days between the first day of sick leave and enrolment in the study, marital status, time since diagnosis, breadwinner status, position at work, shift work, years in current position, years of paid employment, income, importance of work (VAS), and company size.\nPrognostic factors for time to return-to-work of the cancer patients included [7], [23] age, gender, education, diagnosis, cancer treatment, number of working hours according to contract, physical workload (Questionnaire of Perception and Judgement of Work (VBBA)) [28], fatigue (Multidimensional Fatigue Inventory (MFI)) [29], depression (Centre for Epidemiologic Studies for Depression Scale (CES-D)) [30], co-morbidity, self-efficacy (general self-efficacy scale (ALCOS)) [31], and clinical characteristics (i.e. diagnosis and treatment).\n\nBODY.SAMPLE SIZE:\nThe calculation of the patient sample size was based on two earlier studies focused on return-to-work in cancer patients [23], [32]. Based on the return-to-work rates in these studies, we assumed a relative risk of not returning to work of 0.53 for individuals in the intervention group versus those receiving usual care [14]. With a power of 80% and two-sided significance level of p<0.05, the sample size required was 109 patients in each group [33]. Assuming that 20% of the initial patients would be lost to follow-up, 270 patients should have been recruited to gather 246 patients at 12 months of follow-up. To account for at least 10% missing data at baseline, 300 patients sought to be included in the study.\n\nBODY.STATISTICAL ANALYSIS:\nData entry was verified by means of a 20% double data entry and a 100% double data check regarding the rate and time of patients until return-to-work. Participants who did and did not whish to participate were analysed on age using Student's t-test. All analyses were performed according to the intention-to-treat principle, which meant that all patients were included in the analysis. We censored patients who dropped out of the study because of missing data. Therefore, differences between patients who dropped out or completed the study were analysed according to their baseline quality of life scores.\nAll data were analysed by means of descriptive statics using PASW version 18. The baseline data were assessed to evaluate whether there was an imbalance between the intervention group and the control group using Student's t-test for continuous variables and the χ2 test for categorical variables. We considered a p-value≤0.05 to be statistically significant.\nWe calculated relative risks and 95% confidence interval for returning to work (full and partial) at 12 months of follow-up for the intervention group versus the control group. The median time until return-to-work was analysed with a Kaplan-Meier survival analysis, and differences between groups were tested with the log rank test. 
In addition, the Cox proportional hazard model of survival analysis was applied to estimate hazard ratios and the corresponding 95% confidence intervals for the time until return-to-work (full and partial) with a hazard ratio <1 indicating a longer time to return-to-work.\nImprovements in the subsequent primary outcome of quality of life and the secondary outcomes of work ability and work functioning between groups were examined using a longitudinal multilevel analysis. Mean costs between the groups were analysed using Student's t-test.\n\n\nBODY.RESULTS:\nCancer patients who were diagnosed at one of the participating hospital departments between May 2009 and December 2010 and who were eligible and willing to participate were enrolled in the study. The enrolment of new patients ended in December 2010 to enable the inclusion of patient follow-up data within the time constraints of the study. A total of 755 of the 855 cancer patients were excluded; 611 did not meet the eligibility criteria primarily because they were too old, 119 declined participation, and 25 were excluded for other reasons, and this led to an overall response rate of 47% (Figure 1). In total, 133 cancer patients were included in the study; 65 were assigned to the intervention group and 68 were assigned to the control group. At baseline, all 133 patients provided complete data on the primary outcome, whereas 132 (99%) patients provided complete data on the secondary outcomes (Figure 1). The response rate at 12 months of follow-up was 128 (96%) for the outcome of return-to-work and was 108 (81%) for the outcome of quality of life and secondary outcomes. The reason why patients did not return the questionnaire included cancer recurrence (4 patients; 3%), decline (6;5%) or were unknown 11 (8%), while 4 (3%) patients died within the 12-months follow-up period (Figure 1). Patients were on average 47.5±7.9 years old. Breast cancer was the most common diagnosis (62%), followed by cancer diagnosis of the female reproductive system (34%) (Table 1). 
No statistically significant differences between the intervention group and the control group on any of the socio-demographic or prognostic characteristics measured at baseline or any medical characteristics measured at follow-up were identified (Table 1).\n10.1371/journal.pone.0063271.g001Figure 1BODY.PATIENT FLOW.:\n10.1371/journal.pone.0063271.t001Table 1BODY.PATIENT CHARACTERISTICS.:\nPatient characteristics\nIntervention group (N = 65)\n\nControl group (N = 68)\n\nP-value **\n\n\nSocio-demographic characteristics\n*\n\nAge (years) \n¥\n\n47.5±8.2\n47.6±7.8\n0.92\nGender (% female)\n\n99%\n100%\n0.31\nMarital status (% married or living with partner)\n\n79%\n69%\n0.20\nBreadwinner position (% sole or shared)\n\n65%\n56%\n0.36\nEducation level (%)\n\nLow\n11%\n16%\n0.53\n\nIntermediate\n59%\n51%\n\n\nHigh\n30%\n33%\n\n\nClinical characteristics\n*\n\nDiagnosis\nBreast cancer\n64%\n60%\n0.82\n(%)\n\nCervix cancer\n23%\n22%\n\n\nOvarian cancer\n5%\n10%\n\n\nVulva cancer\n3%\n3%\n\n\nOther\n5%\n5%\n\nNumber of co-\n0\n45%\n54%\n0.09\nmorbidities\n1\n22%\n31%\n\n(%)\n\n≥2\n33%\n15%\n\nSurgery (%)\n\n99%\n96%\n0.78\nChemotherapy (%)\n\n66%\n71%\n0.84\nRadiotherapy (%)\n\n60%\n58%\n0.67\n\nWork-related characteristics\n*\n\nType of occupation\nHealth care/education\n38%\n37%\n0.69\n(%)\n\nAdministrative\n9%\n9%\n\n\nSales\n5%\n12%\n\n\nOther\n48%\n42%\n\nType of work (% mainly physically work)\n\n32%\n40%\n0.38\nPhysical workload (0–28)\n**\n\n4.7±3.6\n5.7±4.4\n0.18\nTime since sick listed (days)\n\n26.5±35.1\n15.0±53.2\n0.15\nImportance of work (0–100)\n**\n\n58.7±23.1\n51.5±28.3\n0.11\nShift work (% shift work)\n\n26%\n19%\n0.36\nType of contract\nPermanent\n89%\n84%\n0.17\n\n(%)\n\nTemporary\n11%\n9%\n\n\nSelf-employed\n0%\n4%\n\n\nOther\n0%\n3%\n\n\nHealth-related characteristics\n*\n\nFatigue (MFI\n**\n\nGeneral fatigue (0–20)\n\n12.4±4.9\n13.1±4.3\n0.37\nDepression (CES-D)\n**\n\nSum score (0–60)\n\n14.1±9.3\n13.5±7.7\n0.67\nSelf-efficacy (ALCOS)\n**\n\nSum score (0–80)\n\n66.5±8.6\n66.2±7.6\n0.83\n*Continuous variables: mean ± standard deviation; nominal and ordinal variables percentages.\n**Higher scores represent higher level of physical workload, importance of work, fatigue, feelings of depression, and self-efficacy.\n¥Age at the time of randomisation.\nBODY.HOSPITAL-BASED WORK SUPPORT INTERVENTION:\nNo harm or unintended effects were reported by patients as a result of participating in the intervention.\nSeven patients (12%) assigned to the intervention group did not receive the patient education and support from the nurse [34]. For all patients who provided this type of consent (86%), at least one letter from the treating physician was sent to the occupational physician. In five cases (10%), the patients' occupational physician organized a meeting between the patient, supervisor, and himself to draw-up a return-to-work plan.\nThe median number of contacts made between the nurse and the patient was 4 (range 1–4) and the median duration of each meeting was 23 minutes (range 7–60). 
Eight (12%) patients assigned to the control group reported having received patient education or support regarding their return-to-work from their nurse.\n\nBODY.PRIMARY OUTCOME – RETURN-TO-WORK AND QUALITY OF LIFE:\nThe return-to-work rate (full or partial) of all 128 randomised patients with follow-up data at 12 months was 79% for the intervention group and 79% for the control group (p = 0.97), and these rates were 86% and 83%, respectively (p = 0.61), when patients who died within the follow-up period or those with a life expectancy of only a few months were excluded. The relative risk of returning to work (full or partial) for the intervention group versus the control group was 1.03 (95% CI 0.84–1.2). Of the patients who did not return-to-work (intervention versus control group); 2 versus 2 died, 3 versus 1 had a life expectancy of few months, 4 versus 5 lost their jobs, 2 versus 5 experienced adverse side-effects, and 2 versus 0 had other reasons.\nMedian time from the initial sick leave until partial return-to-work was 194 days (range 14–435) for the intervention group and 192 days (range 82–465) for the control group (log rank test; p = 0.90). Median time from initial sick leave until full return-to-work was 283 days (range 25–394) for the intervention group and 239 days (range 77–454) for the control group (log rank test; p = 0.52). Figure 2 summarizes Kaplan-Meier survival analyses for the two groups on partial and full return-to-work. The hazard ratio for partial return-to-work was 1.03 (95% CI 0.64–1.6) for the intervention group versus the control group and was 0.88 (95% CI 0.53–1.5) regarding full return-to-work.\n10.1371/journal.pone.0063271.g002Figure 2BODY.KAPLAN-MEIER SURVIVAL CURVES FOR TIME TO PARTIAL RETURN-TO-WORK (A) AND FULL RETURN-TO-WORK (B).:\nQuality of life scores showed statistically significant improvements over time (p ranged between 0.014 to ≤0.001) but did not differ statistically significant between groups (p ranged between 0.15 to 0.99) (Table 2).\n10.1371/journal.pone.0063271.t002Table 2BODY.QUALITY OF LIFE, WORK ABILITY, AND WORK FUNCTIONING.:\n\nGroup\nBaseline\n6 months follow-up\n12 months follow-up\nP-value**\n\nQuality of life*\n(SF-\n\nPhysical functioning\nIntervention\n76±28\n71±21\n81±16\n0.95\n\n36) (0–100)\n\n\nControl\n73±28\n70±22\n79±20\n\n\n(N = 133)\n\nRole-physical\nIntervention\n48±44\n29±40\n47±40\n0.46\n\n\nControl\n50±43\n31±37\n61±41\n\n\nVitality\nIntervention\n60±21\n51±20\n59±19\n0.60\n\n\nControl\n57±17\n51±16\n56±16\n\n\nGeneral health\nIntervention\n61±21\n54±18\n64±17\n0.15\n\n\nControl\n61±18\n59±18\n70±19\n\n\nSocial functioning\nIntervention\n70±23\n66±24\n75±20\n0.46\n\n\nControl\n68±22\n66±22\n78±20\n\n\nRole-emotional\nIntervention\n49±44\n53±45\n64±42\n0.71\n\n\nControl\n52±41\n64±44\n71±40\n\n\nMental health\nIntervention\n65±17\n71±16\n77±15\n0.32\n\n\nControl\n64±16\n70±16\n72±15\n\n\nPain\nIntervention\n69±30\n67±25\n75±21\n0.99\n\n\nControl\n70±23\n69±20\n76±17\n\nQuality of life*\n\nIntervention\n60±22\n62±23\n73±17\n0.26\nVAS (0–100) (N = 133)\n\nControl\n61±21\n67±18\n70±17\n\nOverall work ability*\n\nIntervention\n5±3\n4±3\n6±2\n0.59\n(WAI) (0–10) (N = 133)\n\nControl\n5±3\n5±3\n7±2\n\nOverall work productivity*\n\nIntervention\nNA\n34±19\n29±15\n0.68\n(WLQ) (0–100) (N = 100)\n\nControl\nNA\n30±14\n27±16\n\nMean ± sd;\n*Higher scores represent a higher level of functioning/well-being/quality of life, work ability, and work functioning.\n**P-value represents the interaction effect of time and 
group.\n\nBODY.SECONDARY OUTCOMES – WORK ABILITY, WORK PRODUCTIVITY, AND COSTS:\nWork ability improved statistically significant over time (p≤0.001) but did not differ statistically significant between groups (p = 0.58) (Table 2). Work functioning did not improve significantly over time (p = 0.3) and did not differ significantly between groups (p = 0.48) (Table 2).\n\nTable 3 shows that the intervention costs were 119 Euros per patient in the intervention group. The mean (± SD) lost productivity cost according to the human capital approach was 41.393 (±39.269) Euros in the intervention group and 38.968 (±38.399) Euros in the control group. The mean (± SD) lost productivity cost according to the friction costs approach was 14.030 (±3.614) Euros in the intervention group and 13.529 (±3.313) Euros in the control group. The mean work accommodations cost was 2.975 and 3.025 Euros in the intervention group and control group, respectively. These costs did not differ statistically between groups.\n10.1371/journal.pone.0063271.t003Table 3BODY.ECONOMIC EVALUATION.:\nCosts of the work-directed intervention in Euros\nDescription\nCosts (€)\nTraining costs\n1 trainer, time investment 24 hours, 50 Euros per hour\n1200\n\nStudy material, refreshments\n125\n\nAttendance costs nurses, 11 nurses, 30 Euros per hours, 4 hours\n1320\nTotal training costs per patient in the intervention group\n41\n\nWork-directed intervention\nMean hour of investment of nurse was 1.2 hour, 43 Euros per hour\n66\n\nMean hour of investment of secretary was 0.16 hour, 30 Euros per hour\n5\n\nInformational leaflet\n7\nTotal intervention costs per patient in the intervention group\n78\n\nTotal costs per patient in the intervention group\n119\n\nCosts differences between groups in Euros\n\nN\nIntervention group\nControl group\nMean difference\nP-value\nProductivity loss net HCA Mean (±SD) Euros\n\n128\n41393 (±39269)\n38968 (±38399)\n−2425\n0.72\nProductivity loss net FCA\n128\n14030 (±3614)\n13529 (±3313)\n−438\n0.48\n\nMean (±SD) Euros\n\n\n\n\n\n\nWork adjustments\n3*\n\n2975\n3025 (±71)\n50\n0.67\n\nMean (±SD) Euros\n\n\n\n\n\n\nWork-directed intervention Mean Euros\n\n128\n119\n0\n−119\nNA\nAbbreviations: HCA Human Capital Approach; FCA Friction Costs Approach; NA: not applicable; SD = standard deviation.\n*Only three patients had work adjustments that were not related to productivity.\n\n\nBODY.DISCUSSION:\nThe objective of this study was to determine the effect of a hospital-based work support intervention for female cancer patients on return-to-work and quality of life. In general, return-to-work rates were high. We failed to show any differences between groups on return-to-work outcomes and quality of life scores.\nBODY.STRENGTHS AND LIMITATIONS:\nOne strength of our study was the innovative approach that was used to address the adverse work outcomes of cancer patients. Few studies have addressed this important subject by developing an intervention that is primarily aimed at enhancing the return-to-work of cancer patients [12], [13], [34]. Furthermore, another strength of this study was the use of a low-cost intervention that could be implemented without substantially increasing the time required, which is important because of the burden on cancer care. In addition, this intervention was easily adapted to the existing variation in usual psycho-oncological care, which yields high external validity. 
One limitation of our study was the inability to include sufficient patients, according to our predetermined power analysis, which led to greater uncertainty in the results.\n\nBODY.INTERPRETATION OF FINDINGS:\nWe found that the intervention was easily accepted in usual psycho-oncological care and we found that patients were notably satisfied with the intervention [34]. For those reasons, addressing the return to work of cancer patients is highly relevant for usual psycho-oncological care. We found similar return-to-work outcomes and quality of life scores for both groups. There are several possible explanations for the lack of statistically significant difference between groups, which can be sought in the intervention content and the study design. The basic assumption behind the intervention was that return to work would increase by means of improved self-assessed work ability as a result of patient education and support that addressed misconceptions about cancer and work. We found that self-assessed work ability increased significantly over time but did not differ significantly between groups. It is possible that addressing these misconceptions could have required a more intense intervention or that the training we provided to the nurses was not sufficient. We do not know precisely which misconceptions impede return to work and which should be addressed. On the other hand, this later possibility was indicated as a number of nurses mentioned that they were not completely convinced of their competence to deliver the return-to-work advice. It may be that our half-day training course was too short to enable nurses to gain the knowledge required to adequately address patients' misconceptions about return to work adequately. For these reasons, it is possible that certain misconceptions regarding cancer and work could have persisted and may have resulted in the absence of an intervention effect.\nIn addition, we experienced difficulties in involving the occupational physician and the employer for the intervention while their involvement appeared to be important [35] and may have caused the absence of an intervention effect.\nThere are some observational studies that showed that the treating physician's advice about return-to-work influenced work resumption considerably either with a shorter or with a longer return-to-work [16], [17], [34]. However, our study shows that apparently this is an overestimation that is not reproduced in an experimental study.\n\nBODY.METHODOLOGICAL CONSIDERATIONS:\n\nIn accordance with the intention to treat analysis we included in the survival analysis patients who died within the follow-up period as censored. However, an assumption in survival analysis is that when a patient is censored, the change that a patient will be able to achieve the outcome is still 50% [36], [37], which is not the case in this situation. However, on a population of 133 patients, we do not expect that the 4 patients who were equally divided between the intervention group and control group, influenced the findings significantly.\n\nBODY.EXTERNAL VALIDITY:\nIt is generally acknowledged that the disability legalisation of a country influences the return-to-work outcomes of employees on sick leave and that disability legalisations varies widely among countries [38]. For that reason, the effect of interventions on return-to-work may also vary from one country to another. The results and conclusions of this study are relevant for the Netherlands due to its social security legislation. 
However, the early hospital-based work support intervention integrated into usual psycho-oncological care could be adapted and generalised to other countries because cancer patients in other countries experiencing a lack of support about their return-to-work from the hospital as often as patients in the Netherlands [39]. The exact content of the intervention should be adapted to the social security legislation of the country it is implemented in.\n\nBODY.RECOMMENDATIONS FOR FURTHER RESEARCH AND PRACTICE:\nIn terms of recommendations for clinical practice, this study revealed that psycho-oncological care can address the return-to-work of cancer patients early in their treatment, as well as follow-up, as the intervention was appreciated by patients and was perceived as useful and feasible by the nurses. Since, the occupational physician and employer involvement is pivotal for a successful return to work but appeared problematic in our intervention, it seems important to solve what impedes their involvement and to adapt the intervention accordingly. As we found similar work outcomes between the intervention group and the control group, a recommendation for further research is to study if an improved intervention leads to shorter time to return-to-work.\nDue to the large range in time to return to work, it seems important to identify patients who have a higher risk of getting adverse work outcomes based on a clinical prediction rule. Therefore, a recommendation for further research is, to develop such a clinical prediction rule for work outcomes and to evaluate it for the accuracy in identifying patients with a higher risk of adverse work outcomes. Furthermore, apart from identifying patients with a higher risk, it is also important to make the intervention more tailored as it appeared that some patients do not need an intervention to achieve a successful return to work while other patients might have benefited from a more intense intervention. A possibility to make an intervention more tailored is by using a stepped care model. This means that a low-intensity intervention can be offered to all patients while a high-intensity intervention is only offered to patients, for whom work resumption turns out to be problematic.\nWe found that the contrast between groups was reduced, due to the study design. Therefore, another recommendation for further research would be to consider alternative study designs, such as a cluster randomised controlled trial [40].\n\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1\nCONSORT Checklist.\n\n(DOC)\nClick here for additional data file.\nProtocol S1\nTrial Protocol.\n\n(PDF)\nClick here for additional data file.\n\n**Question:** Compared to usual care what was the result of work support intervention aimed at enhancing return-to-work on Return-to-work rates?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
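Each row pairs a single-turn `prompt` (a user message carrying the article context and the ICO question) with a single-turn `completion` (the assistant's answer) and a `Label` drawn from the three output classes named in the prompt. The snippet below is a minimal sketch of how a split with this schema could be loaded and inspected with the `datasets` library; the repository path `org/evidence-inference-rct` is a placeholder, not the dataset's actual name.

```python
# Minimal sketch: load a split with the schema shown above and inspect one row.
# The dataset path is a placeholder, not the real repository name.
from datasets import load_dataset

LABELS = {"significantly increase", "no significant difference", "significantly decrease"}

ds = load_dataset("org/evidence-inference-rct", split="train")  # hypothetical path

row = ds[0]
print(row["id"])                        # e.g. "13812"
print(row["prompt"][0]["role"])         # "user": article context + ICO question
print(row["completion"][0]["content"])  # e.g. "**Answer:** no significant difference"
assert row["Label"] in LABELS           # Label is one of the three classes
```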
13813
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Hospital-Based Work Support Intervention for Female Cancer Patients – A Multi-Centre Randomised Controlled TrialEffect of Hospital-Based Work Support Intervention\n\n\n\nABSTRACT.OBJECTIVE:\nOne key aspect of cancer survivorship is return-to-work. Unfortunately, many cancer survivors face problems upon their return-to-work. For that reason, we developed a hospital-based work support intervention aimed at enhancing return-to-work. We studied effectiveness of the intervention compared to usual care for female cancer patients in a multi-centre randomised controlled trial.\n\nABSTRACT.METHODS:\nBreast and gynaecological cancer patients who were treated with curative intent and had paid work were randomised to the intervention group (n = 65) or control group (n = 68). The intervention involved patient education and support at the hospital and improvement of communication between treating and occupational physicians. In addition, we asked patient's occupational physician to organise a meeting with the patient and the supervisor to make a concrete gradual return-to-work plan. Outcomes at 12 months of follow-up included rate and time until return-to-work (full or partial), quality of life, work ability, work functioning, and lost productivity costs. Time until return-to-work was analyzed with Kaplan-Meier survival analysis.\n\nABSTRACT.RESULTS:\nReturn-to-work rates were 86% and 83% (p = 0.6) for the intervention group and control group when excluding 8 patients who died or with a life expectancy of months at follow-up. Median time from initial sick leave to partial return-to-work was 194 days (range 14–435) versus 192 days (range 82–465) (p = 0.90) with a hazard ratio of 1.03 (95% CI 0.64–1.6). Quality of life and work ability improved statistically over time but did not differ statistically between groups. Work functioning and costs did not differ statistically between groups.\n\nABSTRACT.CONCLUSION:\nThe intervention was easily implemented into usual psycho-oncological care and showed high return-to-work rates. We failed to show any differences between groups on return-to-work outcomes and quality of life scores. Further research is needed to study which aspects of the intervention are useful and which elements need improvement.\n\nABSTRACT.TRIAL REGISTRATION:\nNederlands Trial Register (NTR) 1658\n\n\nBODY.INTRODUCTION:\nIn recent years, advances in cancer screening and cancer treatment have improved the survival rates for patients with cancer. An increasing number of cancer patients are therefore able to live many years beyond the original cancer diagnosis and face new challenges upon cancer survivorship. For cancer patients of working age, returning to work is a key aspect of survivorship because it is often experienced as an important part of their recovery [1]. 
Furthermore, work contributes to personal, social, and economic well-being, and therefore return-to-work is associated with the quality of life of cancer patients [2]–[4].\nUnfortunately, not all cancer patients are able to return-to-work and many of these patients have more adverse work outcomes in comparison to the general population. For instance, the risk of unemployment is estimated to be 37% higher for cancer patients compared to non-cancer controls [5] and return-to-work rates are estimated to vary between 30–93% depending on for instance cancer type and treatment [6], [7]. Furthermore, a portion of cancer patients face a decrease in income [8] and suffer from impaired work functioning compared to the general population [9], [10]. Finally, the employer and the society at large are also affected due to the costs of absenteeism, disability pension, and loss of productivity [11].\nIntervention studies aimed at enhancing the return-to-work of cancer patients are rare, especially randomised controlled trials [12], [13]. However, we developed an intervention based on previous studies that demonstrated effective results for enhancing the return-to-work of cancer patients [12], and we developed this intervention together with various stakeholders involved in the return-to-work process of cancer patients [14]. An early intervention – meaning soon after diagnosis or early in treatment - is most appropriate because the longer the duration of sick leave, the more difficult return-to-work is to achieve [15]. For delivering an early intervention, a hospital-based intervention is most appropriate, as most cancer patients do not have contact with their employer or occupational physician during early phases of their cancer treatment and their advice seems to be influential [16], [17]. In addition, previous studies have shown that early interventions could be most effective [12]. Furthermore, return-to-work should be part of the complete psycho-oncological care package and should not be dealt with in isolation [18].\nOur hypothesis is that a hospital-based intervention will enhance the return-to-work of cancer patients, as work is not typically addressed at the hospital [19]. Furthermore, an important and modifiable prognostic factor for the return-to-work of cancer patients is self-assessed work ability which varies according to treatment type and cancer diagnosis [7]. Self-assessed work ability may readily be improved by providing patient education and support that addresses misconceptions concerning return-to-work [20]. The objective of this study was to determine the effect of a hospital-based work support intervention for cancer patients on return-to-work and quality of life, which was achieved.\n\nBODY.MATERIALS AND METHODS:\nThe medical ethics committee of the Academic Medical Center approved the study, and the medical ethics committees of the six participating hospital advised positively regarding feasibility of the study. Patients signed informed consent forms prior to participation in the study. This trial was registered at the Dutch National Registry:NTR1658 (http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1658).\nBoth the design of the study and the content of the hospital-based work support intervention have been described in detail elsewhere [14]. We used items from the CONSORT statement for improving the quality of reporting randomised trials [21]. 
The protocol for this trial and supporting CONSORT checklist is available as supporting information; see Checklist S1 and Protocol S1.\nBODY.PATIENTS:\nCancer patients between 18 and 60 years of age who had been treated with curative intent at one of the six participating hospital departments, had paid work, and who were on sick leave were eligible to participate. Treatment with curative intent was defined as an expected 1-year survival rate of approximately 80%. We excluded patients who were not sufficiently able to speak, read, or write Dutch, had a severe mental disorder or other severe co-morbidity, and for whom the primary diagnosis of cancer had been made more than two months previously. We monitored non-response by assessing the proportion of patients who participated in comparison to all eligible patients.\n\nBODY.HOSPITAL-BASED WORK SUPPORT INTERVENTION:\nThe hospital-based work support intervention began a few weeks after the onset of the study and was spread across a maximum of 14 months. The hospital-based work support intervention consisted of the following components: 1) delivering patient education and support at the hospital, as part of usual psycho-oncology care; 2) improving communication between the treating physician and the occupational physician; and 3) drawing-up a concrete and gradual return-to-work plan in collaboration with the cancer patient, the occupational physician, and the employer [14]. We integrated patient education and support regarding return-to-work into the usual psycho-oncological care in the form of 4 meetings that lasted 15 minutes each. This care was delivered by an oncology nurse or medical social worker (hereafter referred to as nurse). In addition, a least one letter was sent to the occupational physician to enhance communication. We also asked the occupational physicians to organise a meeting between the patient and the employer to draw-up a return-to-work plan. The key aspects of the hospital-based work support intervention included the patient education and support at the hospital and the provision of information to the occupational physician. In the Netherlands, patients must provide their consent to allow medical information to be sent from a treating physician to an occupational physician. Therefore, we were only able to inform the occupational physicians of patients who provided this form of consent.\n\nBODY.STUDY DESIGN:\nThis study was designed as a multi-centre randomised controlled trial with a follow-up period of two years. Here we report the results of the first follow-up year. Eight departments from six hospitals in the Netherlands participated in the study.\nThe treating physician or nurse informed the cancer patients of the study a few weeks after their diagnosis and determined patient eligibility by assessing the inclusion and exclusion criteria. The research team contacted patients who were eligible and willing to participate and enrolled these patients in the study. After the patients had filled in the baseline questionnaire, one of us [ST] allocated the eligible patients to the intervention or to the control group using the computerised randomisation programme ALEA [22]. The allocation ratio was set as equal in the programme. Stratified randomisation was applied for two important prognostic factors for return-to-work [23]; age (<50 or ≥50) and cancer diagnosis (i.e. hospital department). Minimisation was applied to equalise group sizes. 
The patient date of each consecutive patient were entered in the programme and according to the conditions mentioned above the programme randomly assigned the patients to the intervention or the control group. The allocation was irrevocable and was not changed during the study nor during the analysis. Patients and providers were immediately informed of the allocation as it was impossible to conceal allocation for this intervention.\nQuestionnaires were administrated to the patients at baseline and at 6 and 12 months of follow-up. The follow-up questionnaires were mailed to the patients' homes with a postage-paid envelope enclosed. Both the questionnaire data and the information from the nurses who delivered the intervention were gathered for the economic evaluation. Outcome measures and cancer treatment were assessed at all time points. Socio-demographic factors and prognostic factors for time until return-to-work were assessed at baseline only.\n\nBODY.MEASUREMENTS:\nThe primary outcomes were return-to-work and quality of life. The intervention was considered effective if patients in the intervention group had a significantly shorter time to return-to-work (in days) than did patients in the control group, provided that their quality of life had not significantly deteriorated.\nReturn-to-work was measured both as the rate of return-to-work at one year of follow-up and as the number of calendar days between the first day of sick leave and the first day at work (either part-time or full-time) that was sustained for at least 4 weeks. Quality of life was assessed with the Short Form-36 (SF-36) [24], which included all subscales and a Visual Analogue Scale (VAS). Secondary outcomes included work ability, work functioning, and costs. Work ability was assessed using the first question of the Work Ability Index (WAI) [25]. Impaired work functioning was assessed with the Work Limitation Questionnaire (WLQ) [26], which could only be filled in if a patient had (partly) returned to work.\nWe conducted the economic evaluation from a societal perspective. We included lost productivity costs and work adjustments costs for both groups and costs to deliver the intervention for the intervention group. Productivity loss was determined by multiplying the cumulative net number of hours on sick leave by the estimated price of productivity loss based on age and gender [27]. We assumed that when a patient partially returned to work, his/her productivity was 100% during the hours of partial work resumption. We calculated productivity losses using both the human capital approach and the friction costs approach [27]. For the human capital approach, all hours on sick leave were included for 100%. For the friction costs approach, all hours on sick leave with a maximum of 167 days were included for 80% [27]. Costs to deliver the intervention were determined by combining the training costs and the costs to deliver the intervention. Training costs consisted of trainer costs, study material costs, and attendance costs for the nurses. Costs to deliver the intervention consisted of the mean hour of investment multiplied by the average nurse wage and subsequently multiplied by 42% overhead costs [27], and the mean hour of investment of the secretary for sending of the letters to the occupational physician, as well as the printing costs for the information leaflet. 
As the letter from the treating physician to the occupational physician was a copy of the letter to the general practitioner, no additional costs for the treating physician to produce these letters were taken into account.\nThe socio-demographic factors measured at baseline included the number of days between the first day of sick leave and enrolment in the study, marital status, time since diagnosis, breadwinner status, position at work, shift work, years in current position, years of paid employment, income, importance of work (VAS), and company size.\nPrognostic factors for time to return-to-work of the cancer patients included [7], [23] age, gender, education, diagnosis, cancer treatment, number of working hours according to contract, physical workload (Questionnaire of Perception and Judgement of Work (VBBA)) [28], fatigue (Multidimensional Fatigue Inventory (MFI)) [29], depression (Centre for Epidemiologic Studies for Depression Scale (CES-D)) [30], co-morbidity, self-efficacy (general self-efficacy scale (ALCOS)) [31], and clinical characteristics (i.e. diagnosis and treatment).\n\nBODY.SAMPLE SIZE:\nThe calculation of the patient sample size was based on two earlier studies focused on return-to-work in cancer patients [23], [32]. Based on the return-to-work rates in these studies, we assumed a relative risk of not returning to work of 0.53 for individuals in the intervention group versus those receiving usual care [14]. With a power of 80% and two-sided significance level of p<0.05, the sample size required was 109 patients in each group [33]. Assuming that 20% of the initial patients would be lost to follow-up, 270 patients should have been recruited to gather 246 patients at 12 months of follow-up. To account for at least 10% missing data at baseline, 300 patients sought to be included in the study.\n\nBODY.STATISTICAL ANALYSIS:\nData entry was verified by means of a 20% double data entry and a 100% double data check regarding the rate and time of patients until return-to-work. Participants who did and did not whish to participate were analysed on age using Student's t-test. All analyses were performed according to the intention-to-treat principle, which meant that all patients were included in the analysis. We censored patients who dropped out of the study because of missing data. Therefore, differences between patients who dropped out or completed the study were analysed according to their baseline quality of life scores.\nAll data were analysed by means of descriptive statics using PASW version 18. The baseline data were assessed to evaluate whether there was an imbalance between the intervention group and the control group using Student's t-test for continuous variables and the χ2 test for categorical variables. We considered a p-value≤0.05 to be statistically significant.\nWe calculated relative risks and 95% confidence interval for returning to work (full and partial) at 12 months of follow-up for the intervention group versus the control group. The median time until return-to-work was analysed with a Kaplan-Meier survival analysis, and differences between groups were tested with the log rank test. 
In addition, the Cox proportional hazard model of survival analysis was applied to estimate hazard ratios and the corresponding 95% confidence intervals for the time until return-to-work (full and partial) with a hazard ratio <1 indicating a longer time to return-to-work.\nImprovements in the subsequent primary outcome of quality of life and the secondary outcomes of work ability and work functioning between groups were examined using a longitudinal multilevel analysis. Mean costs between the groups were analysed using Student's t-test.\n\n\nBODY.RESULTS:\nCancer patients who were diagnosed at one of the participating hospital departments between May 2009 and December 2010 and who were eligible and willing to participate were enrolled in the study. The enrolment of new patients ended in December 2010 to enable the inclusion of patient follow-up data within the time constraints of the study. A total of 755 of the 855 cancer patients were excluded; 611 did not meet the eligibility criteria primarily because they were too old, 119 declined participation, and 25 were excluded for other reasons, and this led to an overall response rate of 47% (Figure 1). In total, 133 cancer patients were included in the study; 65 were assigned to the intervention group and 68 were assigned to the control group. At baseline, all 133 patients provided complete data on the primary outcome, whereas 132 (99%) patients provided complete data on the secondary outcomes (Figure 1). The response rate at 12 months of follow-up was 128 (96%) for the outcome of return-to-work and was 108 (81%) for the outcome of quality of life and secondary outcomes. The reason why patients did not return the questionnaire included cancer recurrence (4 patients; 3%), decline (6;5%) or were unknown 11 (8%), while 4 (3%) patients died within the 12-months follow-up period (Figure 1). Patients were on average 47.5±7.9 years old. Breast cancer was the most common diagnosis (62%), followed by cancer diagnosis of the female reproductive system (34%) (Table 1). 
No statistically significant differences between the intervention group and the control group on any of the socio-demographic or prognostic characteristics measured at baseline or any medical characteristics measured at follow-up were identified (Table 1).\n10.1371/journal.pone.0063271.g001Figure 1BODY.PATIENT FLOW.:\n10.1371/journal.pone.0063271.t001Table 1BODY.PATIENT CHARACTERISTICS.:\nPatient characteristics\nIntervention group (N = 65)\n\nControl group (N = 68)\n\nP-value **\n\n\nSocio-demographic characteristics\n*\n\nAge (years) \n¥\n\n47.5±8.2\n47.6±7.8\n0.92\nGender (% female)\n\n99%\n100%\n0.31\nMarital status (% married or living with partner)\n\n79%\n69%\n0.20\nBreadwinner position (% sole or shared)\n\n65%\n56%\n0.36\nEducation level (%)\n\nLow\n11%\n16%\n0.53\n\nIntermediate\n59%\n51%\n\n\nHigh\n30%\n33%\n\n\nClinical characteristics\n*\n\nDiagnosis\nBreast cancer\n64%\n60%\n0.82\n(%)\n\nCervix cancer\n23%\n22%\n\n\nOvarian cancer\n5%\n10%\n\n\nVulva cancer\n3%\n3%\n\n\nOther\n5%\n5%\n\nNumber of co-\n0\n45%\n54%\n0.09\nmorbidities\n1\n22%\n31%\n\n(%)\n\n≥2\n33%\n15%\n\nSurgery (%)\n\n99%\n96%\n0.78\nChemotherapy (%)\n\n66%\n71%\n0.84\nRadiotherapy (%)\n\n60%\n58%\n0.67\n\nWork-related characteristics\n*\n\nType of occupation\nHealth care/education\n38%\n37%\n0.69\n(%)\n\nAdministrative\n9%\n9%\n\n\nSales\n5%\n12%\n\n\nOther\n48%\n42%\n\nType of work (% mainly physically work)\n\n32%\n40%\n0.38\nPhysical workload (0–28)\n**\n\n4.7±3.6\n5.7±4.4\n0.18\nTime since sick listed (days)\n\n26.5±35.1\n15.0±53.2\n0.15\nImportance of work (0–100)\n**\n\n58.7±23.1\n51.5±28.3\n0.11\nShift work (% shift work)\n\n26%\n19%\n0.36\nType of contract\nPermanent\n89%\n84%\n0.17\n\n(%)\n\nTemporary\n11%\n9%\n\n\nSelf-employed\n0%\n4%\n\n\nOther\n0%\n3%\n\n\nHealth-related characteristics\n*\n\nFatigue (MFI\n**\n\nGeneral fatigue (0–20)\n\n12.4±4.9\n13.1±4.3\n0.37\nDepression (CES-D)\n**\n\nSum score (0–60)\n\n14.1±9.3\n13.5±7.7\n0.67\nSelf-efficacy (ALCOS)\n**\n\nSum score (0–80)\n\n66.5±8.6\n66.2±7.6\n0.83\n*Continuous variables: mean ± standard deviation; nominal and ordinal variables percentages.\n**Higher scores represent higher level of physical workload, importance of work, fatigue, feelings of depression, and self-efficacy.\n¥Age at the time of randomisation.\nBODY.HOSPITAL-BASED WORK SUPPORT INTERVENTION:\nNo harm or unintended effects were reported by patients as a result of participating in the intervention.\nSeven patients (12%) assigned to the intervention group did not receive the patient education and support from the nurse [34]. For all patients who provided this type of consent (86%), at least one letter from the treating physician was sent to the occupational physician. In five cases (10%), the patients' occupational physician organized a meeting between the patient, supervisor, and himself to draw-up a return-to-work plan.\nThe median number of contacts made between the nurse and the patient was 4 (range 1–4) and the median duration of each meeting was 23 minutes (range 7–60). 
Eight (12%) patients assigned to the control group reported having received patient education or support regarding their return-to-work from their nurse.\n\nBODY.PRIMARY OUTCOME – RETURN-TO-WORK AND QUALITY OF LIFE:\nThe return-to-work rate (full or partial) of all 128 randomised patients with follow-up data at 12 months was 79% for the intervention group and 79% for the control group (p = 0.97), and these rates were 86% and 83%, respectively (p = 0.61), when patients who died within the follow-up period or those with a life expectancy of only a few months were excluded. The relative risk of returning to work (full or partial) for the intervention group versus the control group was 1.03 (95% CI 0.84–1.2). Of the patients who did not return-to-work (intervention versus control group); 2 versus 2 died, 3 versus 1 had a life expectancy of few months, 4 versus 5 lost their jobs, 2 versus 5 experienced adverse side-effects, and 2 versus 0 had other reasons.\nMedian time from the initial sick leave until partial return-to-work was 194 days (range 14–435) for the intervention group and 192 days (range 82–465) for the control group (log rank test; p = 0.90). Median time from initial sick leave until full return-to-work was 283 days (range 25–394) for the intervention group and 239 days (range 77–454) for the control group (log rank test; p = 0.52). Figure 2 summarizes Kaplan-Meier survival analyses for the two groups on partial and full return-to-work. The hazard ratio for partial return-to-work was 1.03 (95% CI 0.64–1.6) for the intervention group versus the control group and was 0.88 (95% CI 0.53–1.5) regarding full return-to-work.\n10.1371/journal.pone.0063271.g002Figure 2BODY.KAPLAN-MEIER SURVIVAL CURVES FOR TIME TO PARTIAL RETURN-TO-WORK (A) AND FULL RETURN-TO-WORK (B).:\nQuality of life scores showed statistically significant improvements over time (p ranged between 0.014 to ≤0.001) but did not differ statistically significant between groups (p ranged between 0.15 to 0.99) (Table 2).\n10.1371/journal.pone.0063271.t002Table 2BODY.QUALITY OF LIFE, WORK ABILITY, AND WORK FUNCTIONING.:\n\nGroup\nBaseline\n6 months follow-up\n12 months follow-up\nP-value**\n\nQuality of life*\n(SF-\n\nPhysical functioning\nIntervention\n76±28\n71±21\n81±16\n0.95\n\n36) (0–100)\n\n\nControl\n73±28\n70±22\n79±20\n\n\n(N = 133)\n\nRole-physical\nIntervention\n48±44\n29±40\n47±40\n0.46\n\n\nControl\n50±43\n31±37\n61±41\n\n\nVitality\nIntervention\n60±21\n51±20\n59±19\n0.60\n\n\nControl\n57±17\n51±16\n56±16\n\n\nGeneral health\nIntervention\n61±21\n54±18\n64±17\n0.15\n\n\nControl\n61±18\n59±18\n70±19\n\n\nSocial functioning\nIntervention\n70±23\n66±24\n75±20\n0.46\n\n\nControl\n68±22\n66±22\n78±20\n\n\nRole-emotional\nIntervention\n49±44\n53±45\n64±42\n0.71\n\n\nControl\n52±41\n64±44\n71±40\n\n\nMental health\nIntervention\n65±17\n71±16\n77±15\n0.32\n\n\nControl\n64±16\n70±16\n72±15\n\n\nPain\nIntervention\n69±30\n67±25\n75±21\n0.99\n\n\nControl\n70±23\n69±20\n76±17\n\nQuality of life*\n\nIntervention\n60±22\n62±23\n73±17\n0.26\nVAS (0–100) (N = 133)\n\nControl\n61±21\n67±18\n70±17\n\nOverall work ability*\n\nIntervention\n5±3\n4±3\n6±2\n0.59\n(WAI) (0–10) (N = 133)\n\nControl\n5±3\n5±3\n7±2\n\nOverall work productivity*\n\nIntervention\nNA\n34±19\n29±15\n0.68\n(WLQ) (0–100) (N = 100)\n\nControl\nNA\n30±14\n27±16\n\nMean ± sd;\n*Higher scores represent a higher level of functioning/well-being/quality of life, work ability, and work functioning.\n**P-value represents the interaction effect of time and 
group.\n\nBODY.SECONDARY OUTCOMES – WORK ABILITY, WORK PRODUCTIVITY, AND COSTS:\nWork ability improved statistically significantly over time (p≤0.001) but did not differ statistically significantly between groups (p = 0.58) (Table 2). Work functioning did not improve significantly over time (p = 0.3) and did not differ significantly between groups (p = 0.48) (Table 2).\n\nTable 3 shows that the intervention costs were 119 Euros per patient in the intervention group. The mean (± SD) lost productivity cost according to the human capital approach was 41,393 (±39,269) Euros in the intervention group and 38,968 (±38,399) Euros in the control group. The mean (± SD) lost productivity cost according to the friction costs approach was 14,030 (±3,614) Euros in the intervention group and 13,529 (±3,313) Euros in the control group. The mean work accommodations cost was 2,975 and 3,025 Euros in the intervention group and control group, respectively. These costs did not differ statistically between groups.\n10.1371/journal.pone.0063271.t003Table 3BODY.ECONOMIC EVALUATION.:\nCosts of the work-directed intervention in Euros\nDescription\nCosts (€)\nTraining costs\n1 trainer, time investment 24 hours, 50 Euros per hour\n1200\n\nStudy material, refreshments\n125\n\nAttendance costs nurses, 11 nurses, 30 Euros per hour, 4 hours\n1320\nTotal training costs per patient in the intervention group\n41\n\nWork-directed intervention\nMean hour of investment of nurse was 1.2 hours, 43 Euros per hour\n66\n\nMean hour of investment of secretary was 0.16 hour, 30 Euros per hour\n5\n\nInformational leaflet\n7\nTotal intervention costs per patient in the intervention group\n78\n\nTotal costs per patient in the intervention group\n119\n\nCosts differences between groups in Euros\n\nN\nIntervention group\nControl group\nMean difference\nP-value\nProductivity loss net HCA Mean (±SD) Euros\n\n128\n41393 (±39269)\n38968 (±38399)\n−2425\n0.72\nProductivity loss net FCA\n128\n14030 (±3614)\n13529 (±3313)\n−438\n0.48\n\nMean (±SD) Euros\n\n\n\n\n\n\nWork adjustments\n3*\n\n2975\n3025 (±71)\n50\n0.67\n\nMean (±SD) Euros\n\n\n\n\n\n\nWork-directed intervention Mean Euros\n\n128\n119\n0\n−119\nNA\nAbbreviations: HCA Human Capital Approach; FCA Friction Costs Approach; NA: not applicable; SD = standard deviation.\n*Only three patients had work adjustments that were not related to productivity.\n\n\nBODY.DISCUSSION:\nThe objective of this study was to determine the effect of a hospital-based work support intervention for female cancer patients on return-to-work and quality of life. In general, return-to-work rates were high. We failed to show any differences between groups on return-to-work outcomes and quality of life scores.\nBODY.STRENGTHS AND LIMITATIONS:\nOne strength of our study was the innovative approach that was used to address the adverse work outcomes of cancer patients. Few studies have addressed this important subject by developing an intervention that is primarily aimed at enhancing the return-to-work of cancer patients [12], [13], [34]. Furthermore, another strength of this study was the use of a low-cost intervention that could be implemented without substantially increasing the time required, which is important because of the burden on cancer care. In addition, this intervention was easily adapted to the existing variation in usual psycho-oncological care, which yields high external validity. 
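The productivity-loss figures above follow the two costing approaches defined in the methods: the human capital approach counts every hour on sick leave in full, while the friction costs approach counts at most 167 days at 80%, and mean costs are then compared with Student's t-test. The sketch below is a toy version of that calculation; the hourly price, the 8-hour working day, and all sick-leave hours are invented, whereas the trial priced productivity loss by age and gender.

```python
# Toy illustration of the human capital vs friction cost productivity-loss
# costing and the between-group comparison; all inputs are made up.
from scipy import stats

PRICE_PER_HOUR = 35.0   # assumed price of one hour of lost productivity (Euros)
HOURS_PER_DAY = 8.0     # assumed working day

def human_capital_cost(sick_leave_hours: float) -> float:
    # Human capital approach: every hour on sick leave counts in full.
    return sick_leave_hours * PRICE_PER_HOUR

def friction_cost(sick_leave_hours: float) -> float:
    # Friction costs approach: only the first 167 days count, at 80%.
    capped = min(sick_leave_hours, 167 * HOURS_PER_DAY)
    return 0.8 * capped * PRICE_PER_HOUR

# Hypothetical cumulative net sick-leave hours per patient
intervention_hours = [1550, 900, 1720, 400, 1300]
control_hours      = [1500, 850, 1800, 500, 1250]

hca_i = [human_capital_cost(h) for h in intervention_hours]
hca_c = [human_capital_cost(h) for h in control_hours]
fca_i = [friction_cost(h) for h in intervention_hours]
fca_c = [friction_cost(h) for h in control_hours]

# Student's t-test on mean costs between groups, as in the economic evaluation
for label, (a, b) in {"HCA": (hca_i, hca_c), "FCA": (fca_i, fca_c)}.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{label}: mean diff = {sum(a)/len(a) - sum(b)/len(b):.0f} Euros, p = {p:.2f}")
```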
One limitation of our study was the inability to include sufficient patients, according to our predetermined power analysis, which led to greater uncertainty in the results.\n\nBODY.INTERPRETATION OF FINDINGS:\nWe found that the intervention was easily accepted in usual psycho-oncological care and we found that patients were notably satisfied with the intervention [34]. For those reasons, addressing the return to work of cancer patients is highly relevant for usual psycho-oncological care. We found similar return-to-work outcomes and quality of life scores for both groups. There are several possible explanations for the lack of statistically significant difference between groups, which can be sought in the intervention content and the study design. The basic assumption behind the intervention was that return to work would increase by means of improved self-assessed work ability as a result of patient education and support that addressed misconceptions about cancer and work. We found that self-assessed work ability increased significantly over time but did not differ significantly between groups. It is possible that addressing these misconceptions could have required a more intense intervention or that the training we provided to the nurses was not sufficient. We do not know precisely which misconceptions impede return to work and which should be addressed. On the other hand, the latter possibility was suggested by a number of nurses, who mentioned that they were not completely convinced of their competence to deliver the return-to-work advice. It may be that our half-day training course was too short to enable nurses to gain the knowledge required to adequately address patients' misconceptions about return to work. For these reasons, it is possible that certain misconceptions regarding cancer and work could have persisted and may have resulted in the absence of an intervention effect.\nIn addition, we experienced difficulties in involving the occupational physician and the employer in the intervention, while their involvement appeared to be important [35]; this may also have contributed to the absence of an intervention effect.\nThere are some observational studies that showed that the treating physician's advice about return-to-work influenced work resumption considerably, either with a shorter or with a longer time to return-to-work [16], [17], [34]. However, our study suggests that this influence may be an overestimation that is not reproduced in an experimental study.\n\nBODY.METHODOLOGICAL CONSIDERATIONS:\n\nIn accordance with the intention-to-treat analysis, we included patients who died within the follow-up period in the survival analysis as censored observations. However, an assumption in survival analysis is that when a patient is censored, the chance that the patient will still be able to achieve the outcome is 50% [36], [37], which is not the case in this situation. However, on a population of 133 patients, we do not expect that the 4 patients who died, who were equally divided between the intervention group and the control group, influenced the findings significantly.\n\nBODY.EXTERNAL VALIDITY:\nIt is generally acknowledged that the disability legislation of a country influences the return-to-work outcomes of employees on sick leave and that disability legislation varies widely among countries [38]. For that reason, the effect of interventions on return-to-work may also vary from one country to another. The results and conclusions of this study are relevant for the Netherlands due to its social security legislation. 
However, the early hospital-based work support intervention integrated into usual psycho-oncological care could be adapted and generalised to other countries, because cancer patients in other countries experience a lack of support regarding their return-to-work from the hospital as often as patients in the Netherlands do [39]. The exact content of the intervention should be adapted to the social security legislation of the country it is implemented in.\n\nBODY.RECOMMENDATIONS FOR FURTHER RESEARCH AND PRACTICE:\nIn terms of recommendations for clinical practice, this study revealed that psycho-oncological care can address the return-to-work of cancer patients early in their treatment, as well as during follow-up, as the intervention was appreciated by patients and was perceived as useful and feasible by the nurses. Since occupational physician and employer involvement is pivotal for a successful return to work but appeared problematic in our intervention, it seems important to resolve what impedes their involvement and to adapt the intervention accordingly. As we found similar work outcomes between the intervention group and the control group, a recommendation for further research is to study whether an improved intervention leads to a shorter time to return-to-work.\nDue to the large range in time to return to work, it seems important to identify patients who have a higher risk of adverse work outcomes based on a clinical prediction rule. Therefore, a recommendation for further research is to develop such a clinical prediction rule for work outcomes and to evaluate its accuracy in identifying patients with a higher risk of adverse work outcomes. Furthermore, apart from identifying patients with a higher risk, it is also important to make the intervention more tailored, as it appeared that some patients do not need an intervention to achieve a successful return to work while other patients might have benefited from a more intense intervention. One way to make an intervention more tailored is to use a stepped care model. This means that a low-intensity intervention can be offered to all patients, while a high-intensity intervention is only offered to patients for whom work resumption turns out to be problematic.\nWe found that the contrast between groups was reduced due to the study design. Therefore, another recommendation for further research would be to consider alternative study designs, such as a cluster randomised controlled trial [40].\n\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1\nCONSORT Checklist.\n(DOC)\nProtocol S1\nTrial Protocol.\n(PDF)\n\n**Question:** Compared to usual care what was the result of work support intervention aimed at enhancing return-to-work on Median time from initial sick leave to partial return-to-work?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
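The relative risk of returning to work reported in the article above, 1.03 (95% CI 0.84–1.2), can be checked approximately with the standard log relative-risk confidence interval. The cell counts in this sketch are guesses consistent with the reported 79% return-to-work rate in both groups among the 128 patients with follow-up data; the exact per-group denominators are not stated in the article.

```python
# Relative risk with a 95% CI via the log-RR standard error; the counts are
# illustrative (about 79% return-to-work in both groups), not the trial's
# exact cell counts.
import math

def relative_risk_ci(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    rr = (events_a / n_a) / (events_b / n_b)
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Assumed counts: 50/63 returned to work in the intervention group, 51/65 in control
rr, lower, upper = relative_risk_ci(50, 63, 51, 65)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```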
13814
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Hospital-Based Work Support Intervention for Female Cancer Patients – A Multi-Centre Randomised Controlled TrialEffect of Hospital-Based Work Support Intervention\n\n\n\nABSTRACT.OBJECTIVE:\nOne key aspect of cancer survivorship is return-to-work. Unfortunately, many cancer survivors face problems upon their return-to-work. For that reason, we developed a hospital-based work support intervention aimed at enhancing return-to-work. We studied effectiveness of the intervention compared to usual care for female cancer patients in a multi-centre randomised controlled trial.\n\nABSTRACT.METHODS:\nBreast and gynaecological cancer patients who were treated with curative intent and had paid work were randomised to the intervention group (n = 65) or control group (n = 68). The intervention involved patient education and support at the hospital and improvement of communication between treating and occupational physicians. In addition, we asked patient's occupational physician to organise a meeting with the patient and the supervisor to make a concrete gradual return-to-work plan. Outcomes at 12 months of follow-up included rate and time until return-to-work (full or partial), quality of life, work ability, work functioning, and lost productivity costs. Time until return-to-work was analyzed with Kaplan-Meier survival analysis.\n\nABSTRACT.RESULTS:\nReturn-to-work rates were 86% and 83% (p = 0.6) for the intervention group and control group when excluding 8 patients who died or with a life expectancy of months at follow-up. Median time from initial sick leave to partial return-to-work was 194 days (range 14–435) versus 192 days (range 82–465) (p = 0.90) with a hazard ratio of 1.03 (95% CI 0.64–1.6). Quality of life and work ability improved statistically over time but did not differ statistically between groups. Work functioning and costs did not differ statistically between groups.\n\nABSTRACT.CONCLUSION:\nThe intervention was easily implemented into usual psycho-oncological care and showed high return-to-work rates. We failed to show any differences between groups on return-to-work outcomes and quality of life scores. Further research is needed to study which aspects of the intervention are useful and which elements need improvement.\n\nABSTRACT.TRIAL REGISTRATION:\nNederlands Trial Register (NTR) 1658\n\n\nBODY.INTRODUCTION:\nIn recent years, advances in cancer screening and cancer treatment have improved the survival rates for patients with cancer. An increasing number of cancer patients are therefore able to live many years beyond the original cancer diagnosis and face new challenges upon cancer survivorship. For cancer patients of working age, returning to work is a key aspect of survivorship because it is often experienced as an important part of their recovery [1]. 
\n\n**Question:** Compared to usual care what was the result of work support intervention aimed at enhancing return-to-work on Quality of life and work ability?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13815
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Hospital-Based Work Support Intervention for Female Cancer Patients – A Multi-Centre Randomised Controlled TrialEffect of Hospital-Based Work Support Intervention\n\n\n\nABSTRACT.OBJECTIVE:\nOne key aspect of cancer survivorship is return-to-work. Unfortunately, many cancer survivors face problems upon their return-to-work. For that reason, we developed a hospital-based work support intervention aimed at enhancing return-to-work. We studied effectiveness of the intervention compared to usual care for female cancer patients in a multi-centre randomised controlled trial.\n\nABSTRACT.METHODS:\nBreast and gynaecological cancer patients who were treated with curative intent and had paid work were randomised to the intervention group (n = 65) or control group (n = 68). The intervention involved patient education and support at the hospital and improvement of communication between treating and occupational physicians. In addition, we asked patient's occupational physician to organise a meeting with the patient and the supervisor to make a concrete gradual return-to-work plan. Outcomes at 12 months of follow-up included rate and time until return-to-work (full or partial), quality of life, work ability, work functioning, and lost productivity costs. Time until return-to-work was analyzed with Kaplan-Meier survival analysis.\n\nABSTRACT.RESULTS:\nReturn-to-work rates were 86% and 83% (p = 0.6) for the intervention group and control group when excluding 8 patients who died or with a life expectancy of months at follow-up. Median time from initial sick leave to partial return-to-work was 194 days (range 14–435) versus 192 days (range 82–465) (p = 0.90) with a hazard ratio of 1.03 (95% CI 0.64–1.6). Quality of life and work ability improved statistically over time but did not differ statistically between groups. Work functioning and costs did not differ statistically between groups.\n\nABSTRACT.CONCLUSION:\nThe intervention was easily implemented into usual psycho-oncological care and showed high return-to-work rates. We failed to show any differences between groups on return-to-work outcomes and quality of life scores. Further research is needed to study which aspects of the intervention are useful and which elements need improvement.\n\nABSTRACT.TRIAL REGISTRATION:\nNederlands Trial Register (NTR) 1658\n\n\nBODY.INTRODUCTION:\nIn recent years, advances in cancer screening and cancer treatment have improved the survival rates for patients with cancer. An increasing number of cancer patients are therefore able to live many years beyond the original cancer diagnosis and face new challenges upon cancer survivorship. For cancer patients of working age, returning to work is a key aspect of survivorship because it is often experienced as an important part of their recovery [1]. 
Furthermore, work contributes to personal, social, and economic well-being, and therefore return-to-work is associated with the quality of life of cancer patients [2]–[4].\nUnfortunately, not all cancer patients are able to return-to-work and many of these patients have more adverse work outcomes in comparison to the general population. For instance, the risk of unemployment is estimated to be 37% higher for cancer patients compared to non-cancer controls [5] and return-to-work rates are estimated to vary between 30–93% depending on for instance cancer type and treatment [6], [7]. Furthermore, a portion of cancer patients face a decrease in income [8] and suffer from impaired work functioning compared to the general population [9], [10]. Finally, the employer and the society at large are also affected due to the costs of absenteeism, disability pension, and loss of productivity [11].\nIntervention studies aimed at enhancing the return-to-work of cancer patients are rare, especially randomised controlled trials [12], [13]. However, we developed an intervention based on previous studies that demonstrated effective results for enhancing the return-to-work of cancer patients [12], and we developed this intervention together with various stakeholders involved in the return-to-work process of cancer patients [14]. An early intervention – meaning soon after diagnosis or early in treatment - is most appropriate because the longer the duration of sick leave, the more difficult return-to-work is to achieve [15]. For delivering an early intervention, a hospital-based intervention is most appropriate, as most cancer patients do not have contact with their employer or occupational physician during early phases of their cancer treatment and their advice seems to be influential [16], [17]. In addition, previous studies have shown that early interventions could be most effective [12]. Furthermore, return-to-work should be part of the complete psycho-oncological care package and should not be dealt with in isolation [18].\nOur hypothesis is that a hospital-based intervention will enhance the return-to-work of cancer patients, as work is not typically addressed at the hospital [19]. Furthermore, an important and modifiable prognostic factor for the return-to-work of cancer patients is self-assessed work ability which varies according to treatment type and cancer diagnosis [7]. Self-assessed work ability may readily be improved by providing patient education and support that addresses misconceptions concerning return-to-work [20]. The objective of this study was to determine the effect of a hospital-based work support intervention for cancer patients on return-to-work and quality of life, which was achieved.\n\nBODY.MATERIALS AND METHODS:\nThe medical ethics committee of the Academic Medical Center approved the study, and the medical ethics committees of the six participating hospital advised positively regarding feasibility of the study. Patients signed informed consent forms prior to participation in the study. This trial was registered at the Dutch National Registry:NTR1658 (http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1658).\nBoth the design of the study and the content of the hospital-based work support intervention have been described in detail elsewhere [14]. We used items from the CONSORT statement for improving the quality of reporting randomised trials [21]. 
The protocol for this trial and supporting CONSORT checklist is available as supporting information; see Checklist S1 and Protocol S1.\nBODY.PATIENTS:\nCancer patients between 18 and 60 years of age who had been treated with curative intent at one of the six participating hospital departments, had paid work, and who were on sick leave were eligible to participate. Treatment with curative intent was defined as an expected 1-year survival rate of approximately 80%. We excluded patients who were not sufficiently able to speak, read, or write Dutch, had a severe mental disorder or other severe co-morbidity, and for whom the primary diagnosis of cancer had been made more than two months previously. We monitored non-response by assessing the proportion of patients who participated in comparison to all eligible patients.\n\nBODY.HOSPITAL-BASED WORK SUPPORT INTERVENTION:\nThe hospital-based work support intervention began a few weeks after the onset of the study and was spread across a maximum of 14 months. The hospital-based work support intervention consisted of the following components: 1) delivering patient education and support at the hospital, as part of usual psycho-oncology care; 2) improving communication between the treating physician and the occupational physician; and 3) drawing-up a concrete and gradual return-to-work plan in collaboration with the cancer patient, the occupational physician, and the employer [14]. We integrated patient education and support regarding return-to-work into the usual psycho-oncological care in the form of 4 meetings that lasted 15 minutes each. This care was delivered by an oncology nurse or medical social worker (hereafter referred to as nurse). In addition, a least one letter was sent to the occupational physician to enhance communication. We also asked the occupational physicians to organise a meeting between the patient and the employer to draw-up a return-to-work plan. The key aspects of the hospital-based work support intervention included the patient education and support at the hospital and the provision of information to the occupational physician. In the Netherlands, patients must provide their consent to allow medical information to be sent from a treating physician to an occupational physician. Therefore, we were only able to inform the occupational physicians of patients who provided this form of consent.\n\nBODY.STUDY DESIGN:\nThis study was designed as a multi-centre randomised controlled trial with a follow-up period of two years. Here we report the results of the first follow-up year. Eight departments from six hospitals in the Netherlands participated in the study.\nThe treating physician or nurse informed the cancer patients of the study a few weeks after their diagnosis and determined patient eligibility by assessing the inclusion and exclusion criteria. The research team contacted patients who were eligible and willing to participate and enrolled these patients in the study. After the patients had filled in the baseline questionnaire, one of us [ST] allocated the eligible patients to the intervention or to the control group using the computerised randomisation programme ALEA [22]. The allocation ratio was set as equal in the programme. Stratified randomisation was applied for two important prognostic factors for return-to-work [23]; age (<50 or ≥50) and cancer diagnosis (i.e. hospital department). Minimisation was applied to equalise group sizes. 
The patient data of each consecutive patient were entered into the programme and, according to the conditions mentioned above, the programme randomly assigned the patients to the intervention or the control group. The allocation was irrevocable and was not changed during the study or during the analysis. Patients and providers were immediately informed of the allocation, as it was impossible to conceal allocation for this intervention.\nQuestionnaires were administered to the patients at baseline and at 6 and 12 months of follow-up. The follow-up questionnaires were mailed to the patients' homes with a postage-paid envelope enclosed. Both the questionnaire data and the information from the nurses who delivered the intervention were gathered for the economic evaluation. Outcome measures and cancer treatment were assessed at all time points. Socio-demographic factors and prognostic factors for time until return-to-work were assessed at baseline only.\n\nBODY.MEASUREMENTS:\nThe primary outcomes were return-to-work and quality of life. The intervention was considered effective if patients in the intervention group had a significantly shorter time to return-to-work (in days) than did patients in the control group, provided that their quality of life had not significantly deteriorated.\nReturn-to-work was measured both as the rate of return-to-work at one year of follow-up and as the number of calendar days between the first day of sick leave and the first day at work (either part-time or full-time) that was sustained for at least 4 weeks. Quality of life was assessed with the Short Form-36 (SF-36) [24], which included all subscales and a Visual Analogue Scale (VAS). Secondary outcomes included work ability, work functioning, and costs. Work ability was assessed using the first question of the Work Ability Index (WAI) [25]. Impaired work functioning was assessed with the Work Limitation Questionnaire (WLQ) [26], which could only be filled in if a patient had (partly) returned to work.\nWe conducted the economic evaluation from a societal perspective. We included lost productivity costs and work adjustment costs for both groups and the costs of delivering the intervention for the intervention group. Productivity loss was determined by multiplying the cumulative net number of hours on sick leave by the estimated price of productivity loss based on age and gender [27]. We assumed that when a patient partially returned to work, his/her productivity was 100% during the hours of partial work resumption. We calculated productivity losses using both the human capital approach and the friction costs approach [27]. For the human capital approach, all hours on sick leave were included for 100%. For the friction costs approach, all hours on sick leave up to a maximum of 167 days were included for 80% [27]. Total intervention costs were determined by combining the training costs and the costs of delivering the intervention. Training costs consisted of trainer costs, study material costs, and attendance costs for the nurses. The costs of delivering the intervention consisted of the mean hours of investment of the nurse multiplied by the average nurse wage and subsequently multiplied by 42% overhead costs [27], plus the mean hours of investment of the secretary for sending the letters to the occupational physician, as well as the printing costs for the information leaflet.
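To make the two costing approaches just described concrete, the following minimal sketch (not part of the original study) shows how the human capital and friction costs calculations differ; the hourly price and the 8-hour working day are assumptions chosen for illustration, as the age- and gender-specific Dutch unit prices [27] are not reported here.

```python
# Illustrative sketch only: the unit price and 8-hour day are assumed values,
# not figures from the study (which used Dutch costing guidelines [27]).

def productivity_loss(sick_leave_hours_per_day, price_per_hour, approach="HCA",
                      friction_limit_days=167, friction_fraction=0.8):
    """Cumulative productivity loss in Euros for one patient.

    sick_leave_hours_per_day: net hours on sick leave, one entry per calendar day.
    HCA: every hour on sick leave is valued at 100%.
    FCA: only the first `friction_limit_days` days count, valued at 80%.
    """
    if approach == "HCA":
        return sum(sick_leave_hours_per_day) * price_per_hour
    if approach == "FCA":
        return sum(sick_leave_hours_per_day[:friction_limit_days]) * price_per_hour * friction_fraction
    raise ValueError("approach must be 'HCA' or 'FCA'")

# Example: 200 days fully off work (8 h/day) at an assumed 35 Euros per hour.
days_off = [8.0] * 200
print(productivity_loss(days_off, 35.0, "HCA"))  # 56000.0
print(productivity_loss(days_off, 35.0, "FCA"))  # 37408.0
```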
As the letter from the treating physician to the occupational physician was a copy of the letter to the general practitioner, no additional costs for the treating physician to produce these letters were taken into account.\nThe socio-demographic factors measured at baseline included the number of days between the first day of sick leave and enrolment in the study, marital status, time since diagnosis, breadwinner status, position at work, shift work, years in current position, years of paid employment, income, importance of work (VAS), and company size.\nPrognostic factors for time to return-to-work of the cancer patients included [7], [23]: age, gender, education, diagnosis, cancer treatment, number of working hours according to contract, physical workload (Questionnaire of Perception and Judgement of Work (VBBA)) [28], fatigue (Multidimensional Fatigue Inventory (MFI)) [29], depression (Centre for Epidemiologic Studies for Depression Scale (CES-D)) [30], co-morbidity, self-efficacy (general self-efficacy scale (ALCOS)) [31], and clinical characteristics (i.e. diagnosis and treatment).\n\nBODY.SAMPLE SIZE:\nThe calculation of the patient sample size was based on two earlier studies focused on return-to-work in cancer patients [23], [32]. Based on the return-to-work rates in these studies, we assumed a relative risk of not returning to work of 0.53 for individuals in the intervention group versus those receiving usual care [14]. With a power of 80% and a two-sided significance level of p<0.05, the sample size required was 109 patients in each group [33]. Assuming that 20% of the initial patients would be lost to follow-up, 270 patients should have been recruited to gather 246 patients at 12 months of follow-up. To account for at least 10% missing data at baseline, we sought to include 300 patients in the study.\n\nBODY.STATISTICAL ANALYSIS:\nData entry was verified by means of a 20% double data entry and a 100% double data check regarding the rate and time of patients until return-to-work. Participants who did and did not wish to participate were compared on age using Student's t-test. All analyses were performed according to the intention-to-treat principle, which meant that all patients were included in the analysis. We censored patients who dropped out of the study because of missing data. Therefore, differences between patients who dropped out and those who completed the study were analysed according to their baseline quality of life scores.\nAll data were analysed by means of descriptive statistics using PASW version 18. The baseline data were assessed to evaluate whether there was an imbalance between the intervention group and the control group using Student's t-test for continuous variables and the χ2 test for categorical variables. We considered a p-value≤0.05 to be statistically significant.\nWe calculated relative risks and 95% confidence intervals for returning to work (full and partial) at 12 months of follow-up for the intervention group versus the control group. The median time until return-to-work was analysed with a Kaplan-Meier survival analysis, and differences between groups were tested with the log rank test.
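For readers less familiar with time-to-event methods, the sketch below illustrates this kind of analysis on a few invented observations; it is not the study's own code (the trial used PASW version 18), and the lifelines library, the synthetic numbers, and the column names are assumptions made purely for illustration.

```python
# Minimal illustrative sketch with synthetic data; event = 0 marks censored patients
# (e.g. drop-outs), mirroring how censoring is handled in the analysis described above.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "days":  [150, 194, 230, 410, 90, 192, 260, 435, 310, 120],  # days until return-to-work
    "event": [1,   1,   1,   0,   1,  1,   1,   0,   1,   1],    # 1 = returned, 0 = censored
    "group": [1,   1,   1,   1,   1,  0,   0,   0,   0,   0],    # 1 = intervention, 0 = control
})

km = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    km.fit(sub["days"], event_observed=sub["event"], label=f"group {name}")
    print(name, km.median_survival_time_)   # median time until return-to-work per group

a, b = df[df.group == 1], df[df.group == 0]
print(logrank_test(a["days"], b["days"], a["event"], b["event"]).p_value)

# The same data can be fed to a Cox model (described in the next sentence) for hazard ratios.
print(CoxPHFitter().fit(df, duration_col="days", event_col="event").hazard_ratios_)
```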
In addition, the Cox proportional hazard model of survival analysis was applied to estimate hazard ratios and the corresponding 95% confidence intervals for the time until return-to-work (full and partial) with a hazard ratio <1 indicating a longer time to return-to-work.\nImprovements in the subsequent primary outcome of quality of life and the secondary outcomes of work ability and work functioning between groups were examined using a longitudinal multilevel analysis. Mean costs between the groups were analysed using Student's t-test.\n\n\nBODY.RESULTS:\nCancer patients who were diagnosed at one of the participating hospital departments between May 2009 and December 2010 and who were eligible and willing to participate were enrolled in the study. The enrolment of new patients ended in December 2010 to enable the inclusion of patient follow-up data within the time constraints of the study. A total of 755 of the 855 cancer patients were excluded; 611 did not meet the eligibility criteria primarily because they were too old, 119 declined participation, and 25 were excluded for other reasons, and this led to an overall response rate of 47% (Figure 1). In total, 133 cancer patients were included in the study; 65 were assigned to the intervention group and 68 were assigned to the control group. At baseline, all 133 patients provided complete data on the primary outcome, whereas 132 (99%) patients provided complete data on the secondary outcomes (Figure 1). The response rate at 12 months of follow-up was 128 (96%) for the outcome of return-to-work and was 108 (81%) for the outcome of quality of life and secondary outcomes. The reason why patients did not return the questionnaire included cancer recurrence (4 patients; 3%), decline (6;5%) or were unknown 11 (8%), while 4 (3%) patients died within the 12-months follow-up period (Figure 1). Patients were on average 47.5±7.9 years old. Breast cancer was the most common diagnosis (62%), followed by cancer diagnosis of the female reproductive system (34%) (Table 1). 
No statistically significant differences between the intervention group and the control group on any of the socio-demographic or prognostic characteristics measured at baseline or any medical characteristics measured at follow-up were identified (Table 1).\n10.1371/journal.pone.0063271.g001Figure 1BODY.PATIENT FLOW.:\n10.1371/journal.pone.0063271.t001Table 1BODY.PATIENT CHARACTERISTICS.:\nPatient characteristics\nIntervention group (N = 65)\n\nControl group (N = 68)\n\nP-value **\n\n\nSocio-demographic characteristics\n*\n\nAge (years) \n¥\n\n47.5±8.2\n47.6±7.8\n0.92\nGender (% female)\n\n99%\n100%\n0.31\nMarital status (% married or living with partner)\n\n79%\n69%\n0.20\nBreadwinner position (% sole or shared)\n\n65%\n56%\n0.36\nEducation level (%)\n\nLow\n11%\n16%\n0.53\n\nIntermediate\n59%\n51%\n\n\nHigh\n30%\n33%\n\n\nClinical characteristics\n*\n\nDiagnosis\nBreast cancer\n64%\n60%\n0.82\n(%)\n\nCervix cancer\n23%\n22%\n\n\nOvarian cancer\n5%\n10%\n\n\nVulva cancer\n3%\n3%\n\n\nOther\n5%\n5%\n\nNumber of co-\n0\n45%\n54%\n0.09\nmorbidities\n1\n22%\n31%\n\n(%)\n\n≥2\n33%\n15%\n\nSurgery (%)\n\n99%\n96%\n0.78\nChemotherapy (%)\n\n66%\n71%\n0.84\nRadiotherapy (%)\n\n60%\n58%\n0.67\n\nWork-related characteristics\n*\n\nType of occupation\nHealth care/education\n38%\n37%\n0.69\n(%)\n\nAdministrative\n9%\n9%\n\n\nSales\n5%\n12%\n\n\nOther\n48%\n42%\n\nType of work (% mainly physically work)\n\n32%\n40%\n0.38\nPhysical workload (0–28)\n**\n\n4.7±3.6\n5.7±4.4\n0.18\nTime since sick listed (days)\n\n26.5±35.1\n15.0±53.2\n0.15\nImportance of work (0–100)\n**\n\n58.7±23.1\n51.5±28.3\n0.11\nShift work (% shift work)\n\n26%\n19%\n0.36\nType of contract\nPermanent\n89%\n84%\n0.17\n\n(%)\n\nTemporary\n11%\n9%\n\n\nSelf-employed\n0%\n4%\n\n\nOther\n0%\n3%\n\n\nHealth-related characteristics\n*\n\nFatigue (MFI\n**\n\nGeneral fatigue (0–20)\n\n12.4±4.9\n13.1±4.3\n0.37\nDepression (CES-D)\n**\n\nSum score (0–60)\n\n14.1±9.3\n13.5±7.7\n0.67\nSelf-efficacy (ALCOS)\n**\n\nSum score (0–80)\n\n66.5±8.6\n66.2±7.6\n0.83\n*Continuous variables: mean ± standard deviation; nominal and ordinal variables percentages.\n**Higher scores represent higher level of physical workload, importance of work, fatigue, feelings of depression, and self-efficacy.\n¥Age at the time of randomisation.\nBODY.HOSPITAL-BASED WORK SUPPORT INTERVENTION:\nNo harm or unintended effects were reported by patients as a result of participating in the intervention.\nSeven patients (12%) assigned to the intervention group did not receive the patient education and support from the nurse [34]. For all patients who provided this type of consent (86%), at least one letter from the treating physician was sent to the occupational physician. In five cases (10%), the patients' occupational physician organized a meeting between the patient, supervisor, and himself to draw-up a return-to-work plan.\nThe median number of contacts made between the nurse and the patient was 4 (range 1–4) and the median duration of each meeting was 23 minutes (range 7–60). 
Eight (12%) patients assigned to the control group reported having received patient education or support regarding their return-to-work from their nurse.\n\nBODY.PRIMARY OUTCOME – RETURN-TO-WORK AND QUALITY OF LIFE:\nThe return-to-work rate (full or partial) of all 128 randomised patients with follow-up data at 12 months was 79% for the intervention group and 79% for the control group (p = 0.97), and these rates were 86% and 83%, respectively (p = 0.61), when patients who died within the follow-up period or those with a life expectancy of only a few months were excluded. The relative risk of returning to work (full or partial) for the intervention group versus the control group was 1.03 (95% CI 0.84–1.2). Of the patients who did not return-to-work (intervention versus control group); 2 versus 2 died, 3 versus 1 had a life expectancy of few months, 4 versus 5 lost their jobs, 2 versus 5 experienced adverse side-effects, and 2 versus 0 had other reasons.\nMedian time from the initial sick leave until partial return-to-work was 194 days (range 14–435) for the intervention group and 192 days (range 82–465) for the control group (log rank test; p = 0.90). Median time from initial sick leave until full return-to-work was 283 days (range 25–394) for the intervention group and 239 days (range 77–454) for the control group (log rank test; p = 0.52). Figure 2 summarizes Kaplan-Meier survival analyses for the two groups on partial and full return-to-work. The hazard ratio for partial return-to-work was 1.03 (95% CI 0.64–1.6) for the intervention group versus the control group and was 0.88 (95% CI 0.53–1.5) regarding full return-to-work.\n10.1371/journal.pone.0063271.g002Figure 2BODY.KAPLAN-MEIER SURVIVAL CURVES FOR TIME TO PARTIAL RETURN-TO-WORK (A) AND FULL RETURN-TO-WORK (B).:\nQuality of life scores showed statistically significant improvements over time (p ranged between 0.014 to ≤0.001) but did not differ statistically significant between groups (p ranged between 0.15 to 0.99) (Table 2).\n10.1371/journal.pone.0063271.t002Table 2BODY.QUALITY OF LIFE, WORK ABILITY, AND WORK FUNCTIONING.:\n\nGroup\nBaseline\n6 months follow-up\n12 months follow-up\nP-value**\n\nQuality of life*\n(SF-\n\nPhysical functioning\nIntervention\n76±28\n71±21\n81±16\n0.95\n\n36) (0–100)\n\n\nControl\n73±28\n70±22\n79±20\n\n\n(N = 133)\n\nRole-physical\nIntervention\n48±44\n29±40\n47±40\n0.46\n\n\nControl\n50±43\n31±37\n61±41\n\n\nVitality\nIntervention\n60±21\n51±20\n59±19\n0.60\n\n\nControl\n57±17\n51±16\n56±16\n\n\nGeneral health\nIntervention\n61±21\n54±18\n64±17\n0.15\n\n\nControl\n61±18\n59±18\n70±19\n\n\nSocial functioning\nIntervention\n70±23\n66±24\n75±20\n0.46\n\n\nControl\n68±22\n66±22\n78±20\n\n\nRole-emotional\nIntervention\n49±44\n53±45\n64±42\n0.71\n\n\nControl\n52±41\n64±44\n71±40\n\n\nMental health\nIntervention\n65±17\n71±16\n77±15\n0.32\n\n\nControl\n64±16\n70±16\n72±15\n\n\nPain\nIntervention\n69±30\n67±25\n75±21\n0.99\n\n\nControl\n70±23\n69±20\n76±17\n\nQuality of life*\n\nIntervention\n60±22\n62±23\n73±17\n0.26\nVAS (0–100) (N = 133)\n\nControl\n61±21\n67±18\n70±17\n\nOverall work ability*\n\nIntervention\n5±3\n4±3\n6±2\n0.59\n(WAI) (0–10) (N = 133)\n\nControl\n5±3\n5±3\n7±2\n\nOverall work productivity*\n\nIntervention\nNA\n34±19\n29±15\n0.68\n(WLQ) (0–100) (N = 100)\n\nControl\nNA\n30±14\n27±16\n\nMean ± sd;\n*Higher scores represent a higher level of functioning/well-being/quality of life, work ability, and work functioning.\n**P-value represents the interaction effect of time and 
group.\n\nBODY.SECONDARY OUTCOMES – WORK ABILITY, WORK PRODUCTIVITY, AND COSTS:\nWork ability improved statistically significant over time (p≤0.001) but did not differ statistically significant between groups (p = 0.58) (Table 2). Work functioning did not improve significantly over time (p = 0.3) and did not differ significantly between groups (p = 0.48) (Table 2).\n\nTable 3 shows that the intervention costs were 119 Euros per patient in the intervention group. The mean (± SD) lost productivity cost according to the human capital approach was 41.393 (±39.269) Euros in the intervention group and 38.968 (±38.399) Euros in the control group. The mean (± SD) lost productivity cost according to the friction costs approach was 14.030 (±3.614) Euros in the intervention group and 13.529 (±3.313) Euros in the control group. The mean work accommodations cost was 2.975 and 3.025 Euros in the intervention group and control group, respectively. These costs did not differ statistically between groups.\n10.1371/journal.pone.0063271.t003Table 3BODY.ECONOMIC EVALUATION.:\nCosts of the work-directed intervention in Euros\nDescription\nCosts (€)\nTraining costs\n1 trainer, time investment 24 hours, 50 Euros per hour\n1200\n\nStudy material, refreshments\n125\n\nAttendance costs nurses, 11 nurses, 30 Euros per hours, 4 hours\n1320\nTotal training costs per patient in the intervention group\n41\n\nWork-directed intervention\nMean hour of investment of nurse was 1.2 hour, 43 Euros per hour\n66\n\nMean hour of investment of secretary was 0.16 hour, 30 Euros per hour\n5\n\nInformational leaflet\n7\nTotal intervention costs per patient in the intervention group\n78\n\nTotal costs per patient in the intervention group\n119\n\nCosts differences between groups in Euros\n\nN\nIntervention group\nControl group\nMean difference\nP-value\nProductivity loss net HCA Mean (±SD) Euros\n\n128\n41393 (±39269)\n38968 (±38399)\n−2425\n0.72\nProductivity loss net FCA\n128\n14030 (±3614)\n13529 (±3313)\n−438\n0.48\n\nMean (±SD) Euros\n\n\n\n\n\n\nWork adjustments\n3*\n\n2975\n3025 (±71)\n50\n0.67\n\nMean (±SD) Euros\n\n\n\n\n\n\nWork-directed intervention Mean Euros\n\n128\n119\n0\n−119\nNA\nAbbreviations: HCA Human Capital Approach; FCA Friction Costs Approach; NA: not applicable; SD = standard deviation.\n*Only three patients had work adjustments that were not related to productivity.\n\n\nBODY.DISCUSSION:\nThe objective of this study was to determine the effect of a hospital-based work support intervention for female cancer patients on return-to-work and quality of life. In general, return-to-work rates were high. We failed to show any differences between groups on return-to-work outcomes and quality of life scores.\nBODY.STRENGTHS AND LIMITATIONS:\nOne strength of our study was the innovative approach that was used to address the adverse work outcomes of cancer patients. Few studies have addressed this important subject by developing an intervention that is primarily aimed at enhancing the return-to-work of cancer patients [12], [13], [34]. Furthermore, another strength of this study was the use of a low-cost intervention that could be implemented without substantially increasing the time required, which is important because of the burden on cancer care. In addition, this intervention was easily adapted to the existing variation in usual psycho-oncological care, which yields high external validity. 
One limitation of our study was the inability to include sufficient patients according to our predetermined power analysis, which led to greater uncertainty in the results.\n\nBODY.INTERPRETATION OF FINDINGS:\nWe found that the intervention was easily accepted in usual psycho-oncological care and that patients were notably satisfied with the intervention [34]. For those reasons, addressing the return to work of cancer patients is highly relevant for usual psycho-oncological care. We found similar return-to-work outcomes and quality of life scores for both groups. There are several possible explanations for the lack of a statistically significant difference between groups, which can be sought in the intervention content and the study design. The basic assumption behind the intervention was that return to work would increase by means of improved self-assessed work ability as a result of patient education and support that addressed misconceptions about cancer and work. We found that self-assessed work ability increased significantly over time but did not differ significantly between groups. It is possible that addressing these misconceptions could have required a more intense intervention or that the training we provided to the nurses was not sufficient. We do not know precisely which misconceptions impede return to work and which should be addressed. On the other hand, this latter possibility was indicated, as a number of nurses mentioned that they were not completely convinced of their competence to deliver the return-to-work advice. It may be that our half-day training course was too short to enable nurses to gain the knowledge required to adequately address patients' misconceptions about return to work. For these reasons, it is possible that certain misconceptions regarding cancer and work could have persisted and may have resulted in the absence of an intervention effect.\nIn addition, we experienced difficulties in involving the occupational physician and the employer in the intervention, while their involvement appeared to be important [35]; this may also have contributed to the absence of an intervention effect.\nSome observational studies have shown that the treating physician's advice about return-to-work influenced work resumption considerably, resulting in either a shorter or a longer time until return-to-work [16], [17], [34]. However, our study suggests that this influence may be an overestimation, as it was not reproduced in an experimental study.\n\nBODY.METHODOLOGICAL CONSIDERATIONS:\n\nIn accordance with the intention-to-treat analysis, we included patients who died within the follow-up period in the survival analysis as censored observations. However, an assumption in survival analysis is that when a patient is censored, the chance that the patient will still be able to achieve the outcome is 50% [36], [37], which is not the case in this situation. Nevertheless, in a population of 133 patients, we do not expect that these 4 patients, who were equally divided between the intervention group and the control group, influenced the findings significantly.\n\nBODY.EXTERNAL VALIDITY:\nIt is generally acknowledged that the disability legislation of a country influences the return-to-work outcomes of employees on sick leave and that disability legislation varies widely among countries [38]. For that reason, the effect of interventions on return-to-work may also vary from one country to another. The results and conclusions of this study are relevant for the Netherlands due to its social security legislation.
However, the early hospital-based work support intervention integrated into usual psycho-oncological care could be adapted and generalised to other countries, because cancer patients in other countries experience a lack of support regarding their return-to-work from the hospital as often as patients in the Netherlands [39]. The exact content of the intervention should be adapted to the social security legislation of the country in which it is implemented.\n\nBODY.RECOMMENDATIONS FOR FURTHER RESEARCH AND PRACTICE:\nIn terms of recommendations for clinical practice, this study revealed that psycho-oncological care can address the return-to-work of cancer patients early in their treatment, as well as during follow-up, as the intervention was appreciated by patients and was perceived as useful and feasible by the nurses. Since occupational physician and employer involvement is pivotal for a successful return to work but appeared problematic in our intervention, it seems important to identify what impedes their involvement and to adapt the intervention accordingly. As we found similar work outcomes in the intervention group and the control group, a recommendation for further research is to study whether an improved intervention leads to a shorter time until return-to-work.\nDue to the large range in time until return to work, it seems important to identify patients who have a higher risk of adverse work outcomes, for instance by means of a clinical prediction rule. Therefore, a recommendation for further research is to develop such a clinical prediction rule for work outcomes and to evaluate its accuracy in identifying patients with a higher risk of adverse work outcomes. Furthermore, apart from identifying patients with a higher risk, it is also important to make the intervention more tailored, as it appeared that some patients do not need an intervention to achieve a successful return to work while other patients might have benefited from a more intense intervention. One possibility for tailoring the intervention is to use a stepped care model, in which a low-intensity intervention is offered to all patients while a high-intensity intervention is offered only to patients for whom work resumption turns out to be problematic.\nWe found that the contrast between groups was reduced due to the study design. Therefore, another recommendation for further research would be to consider alternative study designs, such as a cluster randomised controlled trial [40].\n\n\nBODY.SUPPORTING INFORMATION:\nChecklist S1\nCONSORT Checklist.\n(DOC)\nProtocol S1\nTrial Protocol.\n(PDF)\n\n**Question:** Compared to usual care what was the result of work support intervention aimed at enhancing return-to-work on Work functioning and costs?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13816
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nBeta-agonist overuse and delay in obtaining medical review in high risk asthma: a secondary analysis of data from a randomised controlled trial\n\n\n\nAsthma mortality surveys report delays in seeking medical review and overuse of beta-agonist therapy as factors contributing to a fatal outcome. However, the strength of these associations is limited because many asthma deaths are unwitnessed. We undertook a secondary analysis of data from a 24-week randomised controlled trial of 303 patients with high-risk asthma, randomised to combination budesonide/formoterol inhaler according to a single maintenance and reliever therapy regimen or fixed dose budesonide/formoterol with salbutamol as reliever (Standard) regimen. Medication use was measured by electronic monitors. The thresholds for high, marked and extreme beta-agonist use days were defined in the single maintenance and reliever therapy arm as: >8, >12 and >16 actuations of budesonide/formoterol in excess of four maintenance doses, respectively; and in the Standard arm as: >16, >24 and >32 actuations of salbutamol, respectively. Whether a medical review was obtained within 48 h of an overuse episode was determined by review of data collected during the study by participant report. The mean (standard deviation) proportion of days in which high, marked and extreme beta-agonist overuse occurred without medical review within 48 h was 0·94(0·20), 0·94(0·15) and 0·94(0·17), and 0·92(0·19), 0·90(0·26) and 0·94(0·15) for single maintenance and reliever therapy and Standard regimens, respectively. In at least 90% of days, in which beta-agonist overuse occurred, patients did not obtain medical review within 48 h of beta-agonist overuse, regardless of the magnitude of overuse or the inhaled corticosteroid/long-acting beta-agonist regimen.\nABSTRACT.RELIEVER INHALER OVERUSE AND DELAY IN MEDICAL REVIEW IN ASTHMA:\nIn asthma, overuse of beta-agonist reliever medication and delay in seeking medical review in an exacerbation are linked to asthma deaths. Janine Pilcher at the Medical Research Institute of New Zealand, and co-workers, conducted a review of data from a study of 303 adult patients with severe asthma, followed over 24 weeks. The patients were allocated to either a budesonide/formoterol, or a salbutamol inhaler to take for symptom relief, in addition to their maintenance treatment. Inhalers were fitted with electronic monitors, to accurately document every use. In both groups, on 90% of days when an exacerbation requiring excess use of an inhaler occurred, patients did not follow-up with medical professionals within 48 h as advised. 
Further, in both groups, ‘extreme’ reliever inhaler use was recorded at least once in around one in four patients.\nABSTRACT.INTRODUCTION:\nAsthma mortality surveys report that delay in seeking medical review is a common and important factor contributing to a fatal outcome.1–5 Beta-agonist overuse immediately before asthma death has also been reported, but the strength of this association is limited because these surveys rely on reports from relatives and friends of patients who have died from asthma.1–5 Furthermore, many asthma deaths are unwitnessed. Prescription database surveys identify a strong association between increased dispensing of short-acting beta-agonist drugs and the risk of mortality.6–10 The risk of asthma death increases markedly when more than 1.4 inhaler canisters per month are dispensed.7 Although this is estimated to be equivalent to an average of ten actuations per day, this may not correspond to actual patterns of use during a severe exacerbation. Patients with asthma also often report taking very high doses of beta-agonists during severe exacerbations leading to hospital admission, but these estimates may be influenced by recall bias.11 In the situation of stable asthma, patients who overuse their medications are likely to under-report actual use and conversely those who underuse tend to over-report actual use.12\n\nThe best method to assess actual patient self-administration of beta-agonist is to use validated electronic monitors that record the exact date and time of each actuation.13, 14 This approach has been used in one 13-week observational study in children15 and also in one 24-week randomised controlled trial (RCT) of two regimens based on maintenance combination budesonide/formoterol inhaler therapy in high risk adult asthma.16 In this RCT, we reported long-term patterns of beta-agonist use16 as well as use during severe exacerbations leading to hospital attendance.17 We identified that in less than 10% of days when high beta-agonist use occurred, defined as > 16 actuations of salbutamol (or equivalent) in a 24-h period, did patients obtain medical review within 48 h, despite this advice being verbally explained and documented in the asthma action plans provided.16 About one in four patients self-administered >32 actuations per day of salbutamol (or equivalent) on at least one occasion during the study. These findings showed the extent of unsupervised beta-agonist overuse by patients at risk of severe asthma exacerbations. In a subsequent analysis, we identified that very high doses of beta-agonist were commonly self-administered by patients without seeking medical review within 2 weeks before hospital presentation with a severe exacerbation of asthma.17\n\nThe objectives of this current analysis were to investigate the incidence of delay in obtaining medical review at different levels of excessive beta-agonist use throughout the 24 weeks of the RCT, and to describe patterns of beta-agonist use before and after severe exacerbations requiring systemic corticosteroid therapy. Our hypothesis was that most episodes of beta-agonist use, in excess of the thresholds at which medical review was recommended in the implemented asthma action plans, would not result in patients obtaining medical care.\n\nABSTRACT.RESULTS:\nThe participants in the RCT are described in Table 1. 
There were 151 participants randomised to the single maintenance and reliever therapy (SMART) regimen and 152 participants to the Standard regimen.Table 1Participant characteristics\nCharacteristicSMART group (N = 151)Standard group (N = 152)Mean [(standard deviation (SD)]Age (years)\n41·3 (13·7)\n42·6 (14·5)\nACQ-5 Score\n1·78 (0·99)\n1·79 (2·4)\nDaily ICS dose (budesonide or equivalent), µg\n805 (353)\n813 (370)\nOn-treatment FEV1 (L)\n2·62 (0·91)\n2·50 (0·78)\nOn-treatment FEV1 % predicted\n81·6 (18·9)\n80·4 (20·5)\nSevere exacerbations in the prior 12 months\n1·55 (1·31)\n1·73 (1·22)\n\nMedian (IQR)\nBaseline self-reported reliever usea\n\n2 (1 to 3)\n2 (1 to 3)\nNumber of hospital admissions ever for asthma\n1 (0 to 4)\n1 (0 to 4)\n\nN/151 (%)\nN/152 (%)\nMale\n48 (31·8)\n46 (30·3)\n Ethnicity\n\n\n European\n113 (74·8)\n118 (77·6)\n Māori\n25 (16·6)\n19 (12·5)\n Pacific Islander\n5 (3·3)\n10 (6·6)\n Other\n8 (5·3)\n5 (3·3)\nNumber of severe exacerbations in previous 12 months\n 0\n14 (9·3)\n11 (7·2)\n 1\n86 (57·0)\n75 (49·3)\n 2\n29 (19·2)\n31 (20·4)\n 3\n10 (6·6)\n22 (14·5)\n 4\n5 (3·3)\n6 (3·9)\n ≥5\n7 (4·6)\n7 (4·6)\nLABA use\n92 (60·9)\n103 (67·8)\nCombination ICS plus LABA inhaler\n73 (48%)\n82 (54%)\nUse of written asthma self-management plan\n15 (10%)\n20 (13%)\nCurrent smokers\n30 (19·9)\n29 (19·1)\nEx-smokers\n49 (32·5)\n48 (31·6)\nNon-smokers\n72 (47·7)\n75 (49·3)\n\nACQ asthma control questionnaire, FEV\n1 forced expiratory volume in one second, ICS inhaled corticosteroids, LABA long acting beta-agonist\n\na As per the ACQ question 6. ACQ question 6 is a categorical score of beta-agonist use over the preceding 7 days in the following bands: score 0—none; score 1—1 to 2 salbutamol inhalations most days; score 2—3 to 4 salbutamol inhalations most days; score 3—5 to 8 salbutamol inhalations most days; score 4—9 to 12 salbutamol inhalations most days; score 5—13 to 16 salbutamol inhalations most days; score 6—more than 16 salbutamol inhalations most days.\n\n\nABSTRACT.HIGH, MARKED AND EXTREME BETA-AGONIST USE EPISODES WITHOUT MEDICAL REVIEW WITHIN 48 H:\nThe proportions of participants with at least one high, marked and extreme use episode were similar in each regimen (Tables 2a–c, respectively). 
The total number of days of high, marked and extreme use were greater in the Standard regimen (Tables 2a\n–c, respectively).Table 2aHigh beta-agonist use without medical review\nOutcomeSMART group (N = 151)Standard group (N = 152)Relative risk or rate SMART vs Standard (95% CI)\nP\n\nN (%)Mean (SD)Median (IQR)Range\nN (%)Mean (SD)Median (IQR)RangeRiskRateAt least one episode of high use\n84 (56)\n\n\n\n68 (45)\n\n\n\n1·24 (0·99–1·56)\n–\n0·058\nNumber of days of high use\n\n5·1 (14·3)\n1 (0–3)\n0–130\n\n8·9 (20·9)\n0 (0–5)\n0–149\n–\n0·58 (0·39–0·88)\n0·010\nNumber of days of high use in participants with at least one high use episode\n\n9·1 (18·2)\n2 (1–7·5)\n1–130\n\n19·9 (27·7)\n7 (2–26·5)\n1–149\n–\n–\n–\nNumber of days of high use without medical review within 48 h in participants with at least one high use episode\n\n8·5 (17·8)\n2 (1–6·5)\n0–130\n\n18·3 (24·8)\n5 (2–26)\n1–123\n–\n0·49 (0·31–0·75)\n0·001\nProportion of high overuse days without medical review within 48 h compared to total high overuse days in participants with at least one high use episode\n\n0·94 (0·20)\n1 (1–1)\n0–1\n\n0·94 (0·15)\n1 (1–1)\n0·25 –1\n–\n–\n–\n\nTable 2bMarked beta-agonist overuse without medical review\nOutcomeSMART group (N = 151)Standard group (N = 152)Relative risk or rate SMART vs Standard (95% CI)\nP\n\nN (%)Mean (SD)Median (IQR)RangeN (%)Mean (SD)Median (IQR)RangeRiskRateAt least one episode of marked overuse\n54 (36)\n\n\n\n56 (37)\n\n\n\n0·97 (0·72–1·31)\n–\n0·85\nNumber of days of marked overuse\n\n2·6 (10·2)\n0 (0–1)\n0–109\n\n4·8 (14·9)\n0 (0–2)\n0 to 144\n–\n0·56 (0·35–0·88)\n0·013\nNumber of days of marked overuse in participants with at least one marked overuse episode\n\n7·4 (16·0)\n2 (1–7)\n1–109\n\n13·1 (22·3)\n4·5 (2 to 15)\n1–144\n–\n–\n–\nNumber of days of marked overuse without medical review within 48 h in participants with at least one marked overuse episode\n\n6·7 (15·7)\n2 (1–6)\n1–109\n\n11·7 (19·0)\n4 (2–14)\n0–118\n–\n0·62 (0·37–1·06)\n0·079\nProportion of marked overuse days without medical review within 48 h compared to total marked overuse days in participants with at least one high use episode\n\n0·94 (0·17)\n1 (0·74–1)\n0·15–1\n\n0·92 (0·19)\n1 (0·96–1)\n0–1\n–\n–\n–\n\nTable 2cExtreme beta-agonist overuse without medical review\nOutcomeSMART group (N = 151)Standard group (N = 152)Relative risk or rate SMART vs Standard (95% CI)\nP\nN (%)Mean (SD)Median (IQR)RangeN (%)Mean (SD)Median (IQR)RangeRiskRateAt least one episode of extreme overuse\n41 (27)\n\n\n\n40 (26)\n\n\n\n1·03 (0·71–1·50)\n–\n0·87\nNumber of days of extreme overuse\n\n1·6 (6·7)\n0 (0–1)\n0–75\n\n2·9 (12·2)\n0 (0–1)\n0–137\n–\n0·56 (0·34–0·91)\n0·02\nNumber of days of extreme overuse in participants with at least one extreme overuse episode\n\n5·8 (11·9)\n2 (1–6)\n1–75\n\n11·0 (22·1)\n4·5 (2·5–11)\n1–137\n–\n–\n–\nNumber of days of extreme overuse without medical review within 48 h in participants with at least one extreme overuse episode\n\n5·2 (11·9)\n2 (1–4)\n0–75\n\n9·6 (18·3)\n4·5 (2–10)\n1–112\n–\n0·59 (0·31–1·10)\n0·096\nProportion of extreme overuse days without medical review within 48 h compared to total extreme overuse days in participants with at least one high use episode\n\n0·90 (0·26)\n1 (1–1)\n0–1\n\n0·94 (0·15)\n1 (1–1)\n0·29–1\n–\n–\n–\n\n\nIn the SMART regimen, the number of high use days without medical review within 48 h was approximately half of that in the Standard regimen; relative rate (95% CI) 0·49 (0·31 to 0·75), P = 0·001 (Fig. 1, Table 2a). 
The point estimate of association was consistent with a lower number of marked and extreme use days without medical review within 48 h with the SMART regimen (Fig. 1 and Tables 2b, 2c, respectively).Fig. 1\na Mean number of high use days in participants with at least one high use episode, and mean number of high use days without medical review within 48 h in participants with at least one high use episode. b Mean number of marked overuse days in participants with at least one marked overuse episode, and mean number of marked overuse days without medical review within 48 h in participants with at least one marked overuse episode. c Mean number of extreme overuse days in participants with at least one extreme overuse episode, and mean number of extreme overuse days without medical review within 48 h in participants with at least one extreme overuse episode\n\n\nIn both the SMART and Standard regimens, the proportion of high, marked and extreme use days without medical review within 48 h, compared to total days of high, marked and extreme use was at least 90% (Tables 2a, 2b and 2c, respectively).\n\nABSTRACT.SEVERE EXACERBATIONS OF ASTHMA:\nDuring the study period, at least one severe asthma exacerbation was experienced by 28/151 (18·5%) of SMART participants and 50/152 (32·9%) of Standard participants. There was a total of 35 and 66 severe exacerbations in the SMART and Standard groups respectively; relative rate (95% CI) 0·54 (0·36 to 0·82), P = 0·004.\n\nABSTRACT.DAILY MEDICATION USE PRIOR TO AND FOLLOWING SEVERE EXACERBATIONS:\nMedication use in the 14 days prior to and 14 days following a severe exacerbation for the SMART and Standard regimens is shown in Figs. 2 and 3, respectively. With the SMART regimen, there was increasing budesonide/formoterol use from about 5 days prior to initiation of oral corticosteroid therapy, with high use for a further 5 days before returning to baseline levels after 10 days. With the Standard regimen, there was variable salbutamol use from about 14 days prior to initiation of oral corticosteroid therapy, increasing markedly the day before, with high use for a further 3 days before returning to baseline levels after 10 days. In the severe exacerbation dataset, only one of the included days, 2 days prior to severe exacerbation for a patient in the Standard group, met dose dumping criteria, defined as greater than or equal to 100 actuations in a 3-h period.18\nFig. 2Median [interquartile range (IQR)] daily budesonide/formoterol actuations in the 14 days prior to, and the 14 days following a severe exacerbation in the SMART group. Day 0 represents the initiation of corticosteroid therapy. There were 35 severe exacerbations in the SMART group. For patients with repeat exacerbations, a minimal interval of 28 days between exacerbations was required, to avoid overlap of data within the same participant. One patient in the SMART group had a second severe exacerbation within 28 days following the prior exacerbation. Thus, electronic data for 34 severe exacerbations were included in the plots of medication use patterns for the SMART group\n\nFig. 3Median (IQR) daily salbutamol a and budesonide/formoterol b actuations in the 14 days prior to, and the 14 days following a severe exacerbation in the Standard group. Day 0 represents the initiation of corticosteroid therapy. There were 66 severe exacerbations in the Standard group. 
For patients with repeat exacerbations, a minimal interval of 28 days between exacerbations was required to avoid overlap of data within the same participant. Three patients in the Standard group had repeat severe exacerbations occurring within 28 days following the prior exacerbation. In another patient randomised to Standard treatment, there were no recorded data for two severe exacerbations due to use of non-study inhaled medication. Thus, electronic data for 61 severe exacerbations were included in the plots of medication use patterns in the Standard group. For these plots, only one of the included days (2 days prior to severe exacerbation for a patient in the Standard group) met dose dumping criteria (≥100 actuations in a 3 h period).\n\n\nThe number of days in which high, marked or extreme use occurred in the 14 days prior to a severe exacerbation, expressed as a percentage of all high, marked and extreme use days, was 56/766 (7%), 28/400 (7%) and 10/237 (4%) for SMART, and 166/1355 (12%), 111/735 (15%) and 80/438 (18%) for Standard, respectively.\nOf the high use episodes in the 14-day period prior to a severe exacerbation, 38/56 (68%) and 133/166 (80%) were not associated with medical review within 48 h, in the SMART and Standard regimens, respectively. Of the marked overuse episodes in the 14-day period prior to a severe exacerbation, 18/28 (64%) and 87/111 (78%) were not associated with medical review within 48 h, in the SMART and Standard regimens, respectively.\n\nABSTRACT.HIGHEST MEDICATION USE PRIOR TO AND FOLLOWING SEVERE EXACERBATIONS:\nIn the SMART group, the median (range) highest daily number of budesonide/formoterol actuations was 11 (4 to 63) over the 14 days before severe exacerbations. In the Standard group, the median (range) highest daily number of salbutamol actuations was 13 (0 to 150) over the 14 days before severe exacerbations; on the day of highest salbutamol use preceding severe exacerbations, the median (range) number of budesonide/formoterol actuations was 4 (0 to 84). On this day of maximal salbutamol use, there were 8/61 (13%) instances when >8 actuations of budesonide/formoterol were taken and 10/61 (16%) instances when zero actuations of budesonide/formoterol were taken.\nIn the SMART group, the median (range) highest daily number of budesonide/formoterol actuations was 10 (0 to 71) in the 14 days after oral corticosteroid use for severe exacerbations (Day 0). In the Standard group, the median (range) highest daily number of salbutamol actuations was 20 (0 to 152) in the 14 days after Day 0; on the day of highest salbutamol use following Day 0, the median (range) number of budesonide/formoterol actuations was 4 (0 to 33). On this day of maximal salbutamol use, there were 7/61 (11%) instances when >8 actuations of budesonide/formoterol were taken and 9/61 (15%) instances when zero actuations of budesonide/formoterol were taken.
It is likely that the symptomatic relief obtained from the high doses of beta-agonist may be one of the underlying causes for the delay in or lack of seeking medical help in the situation of such poorly controlled, worsening asthma.\nWe observed that in around 90% of marked and extreme overuse episodes, patients did not obtain medical review within 48 h despite this use being at least 1·5 and 2-fold higher than the level of use at which their asthma action plan advised that medical care must be sought. These findings complement our previous observation that 93% of high-use episodes, in which patients took at least 16 actuations of salbutamol or equivalent in a 24-h period, occurred without medical review within the next 48 h, despite this advice being documented in the provided asthma action plans.16 These findings were unexpected as they suggest that increasing beta-agonist use from at least 16 to at least 32 actuations of salbutamol or equivalent in a 24-h period did not lead to a greater likelihood of a patient seeking medical care.\nPatterns of beta-agonist use were also examined in the patients in whom a severe exacerbation was confirmed with the prescription of systemic corticosteroids. In the SMART regimen, the mean daily use of budesonide/formoterol progressively increased 5 days before the index corticosteroid use and remained high for a further 5-day period before returning close to baseline levels after 10 days. For the Standard regimen, the salbutamol use was variable from about 14 days prior to the index course of corticosteroids, increasing markedly the day before, with the high level of use gradually returning to baseline levels over 10 days. The majority of overuse episodes in the 14-day period prior to the initiation of oral corticosteroid therapy were not associated with medical review within 48 h, regardless of the magnitude of the overuse. These findings suggest that even in patients who did seek medical review, there is often delay during worsening asthma, resulting in the later prescription of oral corticosteroids in a severe exacerbation. This may have contributed to our finding that in the setting of a severe exacerbation it takes at least 5 days of oral corticosteroid treatment before there is sufficient improvement to result in a reduction in beta-agonist use, regardless of whether the SMART or Standard regimen was used.\nDifferent patterns of budesonide/formoterol use were observed in the Standard group during severe exacerbations. Some patients used no budesonide/formoterol on the day of highest salbutamol use preceding the severe exacerbation. This non-adherence may contribute to progressive worsening of unstable asthma, increasing the risk of a severe exacerbation. Another pattern was the use of budesonide/formoterol as an as required ‘reliever’ medication well in excess of the maintenance four actuations per day. These observations suggest that some patients may use their maintenance budesonide/ formoterol inhaler according to presumed need, similar to the SMART regimen, despite being prescribed fixed dose maintenance treatment and salbutamol for reliever.\nAnother important observation was that while the SMART regimen reduced the number of high, marked and extreme overuse episodes by at least 40%, it did not influence the likelihood of a patient obtaining medical review when an overuse episode occurred. 
As a result, the SMART regimen reduced the total number of overuse episodes in which patients did not obtain medical review within 48 h, to a similar extent to which it reduced the total number of overuse episodes. Together with the 46% reduction in severe exacerbations, these findings suggest that the SMART regimen has a better safety profile as it reduced the total number of overuse episodes in which delay in obtaining medical review occurred.\nThere are a number of methodological issues that require consideration. As an open-label study, there was the potential for bias, however the use of a single inhaler in the SMART regimen and separate inhalers in the Standard regimen without the requirement for dummy placebo inhalers enhanced generalisability of the findings to real world clinical practice. Metered dose inhaler (MDI)’s were used rather than dry powder inhalers (Turbuhaler) to deliver budesonide/formoterol, as it enabled us to use validated electronic monitors of inhaler use.19, 20 Beta-agonist overuse related to the measures of budesonide/formoterol and salbutamol for the SMART and Standard regimens, respectively. This is likely to have led to an underestimation of the number of beta-agonist overuse episodes for the Standard regimen, as it did not take into account the use of budesonide/formoterol as a reliever additional to the four maintenance actuations per day, a behaviour which was noted in some patients.17\n\nThere are several additional clinical implications of these findings. Firstly, the patient’s failure to recognise that the need for frequent and high doses of beta-agonist indicates severe asthma, likely contributes to the delay in or lack of seeking medical help. Secondly, this behaviour may reflect previous experience of these high risk patients with asthma, that most exacerbations of asthma requiring high doses of beta-agonist are self-limiting and eventually resolve without medical review, oral corticosteroids, or hospital attendance. This interpretation is supported by our observation that over 80% of all days of beta-agonist overuse, regardless of extent, did not lead to a course of oral steroids within the following 14 days. The higher proportions associated with the SMART regimen suggest that budesonide/formoterol overuse may be more likely to lead to resolution of the severe asthma episode without oral steroids than salbutamol overuse. Thirdly, even in the setting of an RCT with repeat clinic visits during which the asthma action plan was introduced and reinforced, most patients did not follow this written advice to obtain medical review when beta agonist use exceeded predefined levels of increased use.\nIn conclusion, in at least 90% of days in which beta-agonist overuse occurred, patients with high-risk asthma did not obtain medical review within 48 h, regardless of the magnitude of the beta-agonist overuse, or the inhaled corticosteroid/long-acting beta-agonist regimen.\n\nABSTRACT.METHODS:\nABSTRACT.DESIGN:\nThis is a secondary analysis of data from a 24-week multicentre, open-label RCT of Single combination budesonide/formoterol inhaler as Maintenance And Reliever Therapy (SMART regimen) vs. fixed-dose budesonide/formoterol with salbutamol for relief (Standard regimen) in 303 adult patients with a physician’s diagnosis of asthma, described previously.16 Patients were eligible if they had a current prescription for inhaled corticosteroids (ICS) and at least one asthma exacerbation in the preceding year. 
An exacerbation was defined as presentation to a General Practice or Emergency Department resulting in prescription of oral corticosteroids or treatment with spacer-delivered or nebulised bronchodilator, or self-administration of prednisone for asthma for at least 3 days. Exclusion criteria included a diagnosis of chronic obstructive pulmonary disease and current or ex-smokers with a > 10 pack-year smoking history with onset of respiratory symptoms after the age of 40.\nPatients were randomised to the SMART regimen: a MDI containing 200/6 µg budesonide/formoterol (Vannair, AstraZeneca Limited, Auckland, New Zealand, which is the MDI formulation of Symbicort turbuhaler), two actuations twice daily for maintenance with one extra dose as needed for relief; or the Standard regimen: 200/6 µg Vannair MDI, two actuations twice daily for maintenance with 100 µg salbutamol MDI (Ventolin, GlaxoSmithKline Limited, Auckland, New Zealand), one to two extra actuations for relief. All participants were given written asthma action plans and verbal instructions outlining when to consult their doctor. All participants provided written informed consent and the trial was approved by the New Zealand Multi Region Ethics Committee, and prospectively registered [ACTRN 12610000515099]. The study protocol is available at http://www.mrinz.ac.nz/uploads/mrinz/SMART_Protocol.pdf\n\nAll Vannair and Ventolin MDIs incorporated an electronic monitor (Smartinhaler Tracker, Nexus6 Limited, Auckland, New Zealand), which recorded the date and time (to the nearest second) each time an inhaler was actuated. These monitors were 99·7% accurate in recording actuations in bench testing19 and were used in accordance with strict trial quality control processes.20 Participants were not aware of the exact capabilities of the monitors.\n\nABSTRACT.DEFINITIONS OF HIGH, MARKED AND EXTREME BETA-AGONIST USE EPISODES:\nABSTRACT.HIGH USE:\nFor SMART, this was defined as >8 actuations of budesonide/formoterol in excess of the four maintenance doses per 24-h period i.e., equivalent to >12 actuations in total. For Standard, this was defined as >16 actuations of salbutamol per 24-h period. These thresholds were based on the dose limits of beta-agonist use requiring medical review, defined by the action plans implemented in this study,21, 22 and supported by the results of the short-term bronchodilator equivalence of formoterol 6 µg to salbutamol 200 µg, with repeat dosing in acute asthma.23, 24 In accordance with their action plans, participants were advised to seek medical review at these high use thresholds.\n\nABSTRACT.MARKED USE:\nFor SMART, this was defined as >12 actuations of budesonide/formoterol in excess of the four maintenance doses per 24-h period i.e., equivalent to >16 actuations in total. For Standard, this was defined as >24 actuations of salbutamol per 24-h period. These thresholds were based on 1.5 times the limits of beta-agonist use requiring medical review, defined by the action plans.21, 22\n\n\nABSTRACT.EXTREME USE:\nFor SMART, this was defined as >16 actuations of budesonide/formoterol in excess of the four maintenance doses per 24-h period i.e., equivalent to >20 actuations in total. For Standard, this was defined as >32 actuations of salbutamol per 24-h period. 
These thresholds were based on twice the limits of beta-agonist use requiring medical review, defined by the action plans.21, 22\n\n\n\nABSTRACT.HIGH, MARKED, AND EXTREME BETA-AGONIST USE WITHOUT MEDICAL REVIEW:\nThe number of days of high, marked, or extreme beta-agonist use without medical review within 48 h was determined using the following rules: for every high, marked, or extreme beta-agonist use day, the ‘index day’, the database was checked to determine if the patient attended for medical review, which could include primary care clinic, an after-hours clinic or hospital attendance, either on the day of overuse or the next day. This 48-h window was defined as per the Standard action plan, which specifies that the patient should attend for medical review ‘within 1 to 2 days’ in the setting of worsening asthma recognised by symptoms including ‘reliever only lasting 2–3 h’.22 The SMART plan advises patients to seek medical review on the same day if more than 12 actuations of budesonide/formoterol are taken.21\n\nIf the participant attended for medical review, then overuse occurring on the index day, the day of medical review and in the 7 days after medical review was not counted as overuse without medical review, as a ‘stand-down’ period. In effect, overuse occurring on these days was ‘permissible’ as the patient had attended for medical review in the setting of this exacerbation. This criterion was chosen as the american thoracic society (ATS)/european respiratory society (ERS) definition of severe exacerbations separate exacerbations by 7 days.25 If a participant attended for repeated medical reviews during the stand-down period, then the 7-day period where overuse without medical review is not counted, was restarted. This approach was used as the object of this analysis was to explore the relationship between the ‘index’ overuse episode and the first episode of medical review following this.\n\nABSTRACT.OUTCOMES:\nIn this exploratory analysis, no particular primary outcome was specified and there is no adjustment for multiple comparisons. The outcome variables were firstly the proportion of participants in each arm of the RCT with at least one episode of overuse for each type of beta-agonist overuse; high, marked and extreme. A second outcome was the number of days of overuse for each type of beta-agonist overuse. A third outcome was the number of days of overuse without medical review within 48 h for each type of beta-agonist overuse. The final outcome was the proportion of days of overuse without medical review within 48 h compared to the total number of days of overuse for each type of beta-agonist overuse.\nMedication use in the 14 days prior to and following a severe exacerbation was also examined. A severe exacerbation was defined according to ATS/ERS criteria25 as: (a) the use of systemic corticosteroids for at least 3 days; or (b) a hospitalisation or Emergency Department visit because of asthma, requiring systemic corticosteroids. Courses of corticosteroids separated by 7 days or more were treated as separate severe exacerbations. The date of the first day of corticosteroid therapy was Day 0. Medication use for the 14-day period before and up to 14 days after Day 0 was extracted from the data set for each participant with at least one severe exacerbation. 
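Returning briefly to the overuse definitions and counting rules described earlier in this section, here is a minimal sketch of how they could be applied (it is not the study's own code, which used SAS version 9·4 and the trial database); the integer day-numbering, the function names, and the example data are assumptions made for illustration only.

```python
# Illustrative sketch of the overuse classification and the 48-h / stand-down counting
# rules described above; all inputs are invented example data.

HIGH, MARKED, EXTREME = "high", "marked", "extreme"

def overuse_level(actuations, regimen):
    """Classify one day's reliever use. SMART counts budesonide/formoterol actuations
    in excess of the four maintenance doses; Standard counts salbutamol actuations."""
    if regimen == "SMART":
        excess = actuations - 4
        limits = [(16, EXTREME), (12, MARKED), (8, HIGH)]
    else:  # Standard
        excess = actuations
        limits = [(32, EXTREME), (24, MARKED), (16, HIGH)]
    for limit, label in limits:
        if excess > limit:
            return label
    return None  # not an overuse day

def overuse_days_without_review(overuse_days, review_days, stand_down=7):
    """Overuse days with no medical review within 48 h (same day or next day) and not
    falling on a review day or within the 7-day stand-down period after a review."""
    uncounted = []
    for d in overuse_days:
        reviewed_within_48h = any(r in (d, d + 1) for r in review_days)
        in_stand_down = any(r <= d <= r + stand_down for r in review_days)
        if not (reviewed_within_48h or in_stand_down):
            uncounted.append(d)
    return uncounted

print(overuse_level(14, "SMART"))                            # 'high' (14 - 4 = 10 > 8)
print(overuse_days_without_review([10, 11, 20, 30], [11]))   # [20, 30]
```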
For patients with repeat exacerbations, a minimum interval of 28 days between exacerbations was required to avoid overlap of data within the same participants.\n\nABSTRACT.STATISTICAL METHODS:\nThe relative risk of at least one episode of overuse by overuse type was by simple contingency table analysis and expressed as a relative risk. The relative rates of days of high, marked and extreme beta-agonist overuse was by Poisson regression with an offset for the adjusted days of treatment exposure, and a further adjustment for over-dispersion. SAS version 9·4 was used.\n\nABSTRACT.DATA AVAILABILITY STATEMENT:\nData is available on application to the corresponding author\n\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\n\nSupplementary Information\n\n**Question:** Compared to budesonide/formoterol with salbutamol what was the result of budesonide/formoterol inhaler on the number of high use days without medical review within 48 h?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
13817
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nBeta-agonist overuse and delay in obtaining medical review in high risk asthma: a secondary analysis of data from a randomised controlled trial\n\n\n\nAsthma mortality surveys report delays in seeking medical review and overuse of beta-agonist therapy as factors contributing to a fatal outcome. However, the strength of these associations is limited because many asthma deaths are unwitnessed. We undertook a secondary analysis of data from a 24-week randomised controlled trial of 303 patients with high-risk asthma, randomised to combination budesonide/formoterol inhaler according to a single maintenance and reliever therapy regimen or fixed dose budesonide/formoterol with salbutamol as reliever (Standard) regimen. Medication use was measured by electronic monitors. The thresholds for high, marked and extreme beta-agonist use days were defined in the single maintenance and reliever therapy arm as: >8, >12 and >16 actuations of budesonide/formoterol in excess of four maintenance doses, respectively; and in the Standard arm as: >16, >24 and >32 actuations of salbutamol, respectively. Whether a medical review was obtained within 48 h of an overuse episode was determined by review of data collected during the study by participant report. The mean (standard deviation) proportion of days in which high, marked and extreme beta-agonist overuse occurred without medical review within 48 h was 0·94(0·20), 0·94(0·15) and 0·94(0·17), and 0·92(0·19), 0·90(0·26) and 0·94(0·15) for single maintenance and reliever therapy and Standard regimens, respectively. In at least 90% of days, in which beta-agonist overuse occurred, patients did not obtain medical review within 48 h of beta-agonist overuse, regardless of the magnitude of overuse or the inhaled corticosteroid/long-acting beta-agonist regimen.\nABSTRACT.RELIEVER INHALER OVERUSE AND DELAY IN MEDICAL REVIEW IN ASTHMA:\nIn asthma, overuse of beta-agonist reliever medication and delay in seeking medical review in an exacerbation are linked to asthma deaths. Janine Pilcher at the Medical Research Institute of New Zealand, and co-workers, conducted a review of data from a study of 303 adult patients with severe asthma, followed over 24 weeks. The patients were allocated to either a budesonide/formoterol, or a salbutamol inhaler to take for symptom relief, in addition to their maintenance treatment. Inhalers were fitted with electronic monitors, to accurately document every use. In both groups, on 90% of days when an exacerbation requiring excess use of an inhaler occurred, patients did not follow-up with medical professionals within 48 h as advised. 
Further, in both groups, ‘extreme’ reliever inhaler use was recorded at least once in around one in four patients.\nABSTRACT.INTRODUCTION:\nAsthma mortality surveys report that delay in seeking medical review is a common and important factor contributing to a fatal outcome.1–5 Beta-agonist overuse immediately before asthma death has also been reported, but the strength of this association is limited because these surveys rely on reports from relatives and friends of patients who have died from asthma.1–5 Furthermore, many asthma deaths are unwitnessed. Prescription database surveys identify a strong association between increased dispensing of short-acting beta-agonist drugs and the risk of mortality.6–10 The risk of asthma death increases markedly when more than 1.4 inhaler canisters per month are dispensed.7 Although this is estimated to be equivalent to an average of ten actuations per day, this may not correspond to actual patterns of use during a severe exacerbation. Patients with asthma also often report taking very high doses of beta-agonists during severe exacerbations leading to hospital admission, but these estimates may be influenced by recall bias.11 In the situation of stable asthma, patients who overuse their medications are likely to under-report actual use and conversely those who underuse tend to over-report actual use.12\n\nThe best method to assess actual patient self-administration of beta-agonist is to use validated electronic monitors that record the exact date and time of each actuation.13, 14 This approach has been used in one 13-week observational study in children15 and also in one 24-week randomised controlled trial (RCT) of two regimens based on maintenance combination budesonide/formoterol inhaler therapy in high risk adult asthma.16 In this RCT, we reported long-term patterns of beta-agonist use16 as well as use during severe exacerbations leading to hospital attendance.17 We identified that in less than 10% of days when high beta-agonist use occurred, defined as > 16 actuations of salbutamol (or equivalent) in a 24-h period, did patients obtain medical review within 48 h, despite this advice being verbally explained and documented in the asthma action plans provided.16 About one in four patients self-administered >32 actuations per day of salbutamol (or equivalent) on at least one occasion during the study. These findings showed the extent of unsupervised beta-agonist overuse by patients at risk of severe asthma exacerbations. In a subsequent analysis, we identified that very high doses of beta-agonist were commonly self-administered by patients without seeking medical review within 2 weeks before hospital presentation with a severe exacerbation of asthma.17\n\nThe objectives of this current analysis were to investigate the incidence of delay in obtaining medical review at different levels of excessive beta-agonist use throughout the 24 weeks of the RCT, and to describe patterns of beta-agonist use before and after severe exacerbations requiring systemic corticosteroid therapy. Our hypothesis was that most episodes of beta-agonist use, in excess of the thresholds at which medical review was recommended in the implemented asthma action plans, would not result in patients obtaining medical care.\n\nABSTRACT.RESULTS:\nThe participants in the RCT are described in Table 1. 
There were 151 participants randomised to the single maintenance and reliever therapy (SMART) regimen and 152 participants to the Standard regimen.Table 1Participant characteristics\nCharacteristicSMART group (N = 151)Standard group (N = 152)Mean [(standard deviation (SD)]Age (years)\n41·3 (13·7)\n42·6 (14·5)\nACQ-5 Score\n1·78 (0·99)\n1·79 (2·4)\nDaily ICS dose (budesonide or equivalent), µg\n805 (353)\n813 (370)\nOn-treatment FEV1 (L)\n2·62 (0·91)\n2·50 (0·78)\nOn-treatment FEV1 % predicted\n81·6 (18·9)\n80·4 (20·5)\nSevere exacerbations in the prior 12 months\n1·55 (1·31)\n1·73 (1·22)\n\nMedian (IQR)\nBaseline self-reported reliever usea\n\n2 (1 to 3)\n2 (1 to 3)\nNumber of hospital admissions ever for asthma\n1 (0 to 4)\n1 (0 to 4)\n\nN/151 (%)\nN/152 (%)\nMale\n48 (31·8)\n46 (30·3)\n Ethnicity\n\n\n European\n113 (74·8)\n118 (77·6)\n Māori\n25 (16·6)\n19 (12·5)\n Pacific Islander\n5 (3·3)\n10 (6·6)\n Other\n8 (5·3)\n5 (3·3)\nNumber of severe exacerbations in previous 12 months\n 0\n14 (9·3)\n11 (7·2)\n 1\n86 (57·0)\n75 (49·3)\n 2\n29 (19·2)\n31 (20·4)\n 3\n10 (6·6)\n22 (14·5)\n 4\n5 (3·3)\n6 (3·9)\n ≥5\n7 (4·6)\n7 (4·6)\nLABA use\n92 (60·9)\n103 (67·8)\nCombination ICS plus LABA inhaler\n73 (48%)\n82 (54%)\nUse of written asthma self-management plan\n15 (10%)\n20 (13%)\nCurrent smokers\n30 (19·9)\n29 (19·1)\nEx-smokers\n49 (32·5)\n48 (31·6)\nNon-smokers\n72 (47·7)\n75 (49·3)\n\nACQ asthma control questionnaire, FEV\n1 forced expiratory volume in one second, ICS inhaled corticosteroids, LABA long acting beta-agonist\n\na As per the ACQ question 6. ACQ question 6 is a categorical score of beta-agonist use over the preceding 7 days in the following bands: score 0—none; score 1—1 to 2 salbutamol inhalations most days; score 2—3 to 4 salbutamol inhalations most days; score 3—5 to 8 salbutamol inhalations most days; score 4—9 to 12 salbutamol inhalations most days; score 5—13 to 16 salbutamol inhalations most days; score 6—more than 16 salbutamol inhalations most days.\n\n\nABSTRACT.HIGH, MARKED AND EXTREME BETA-AGONIST USE EPISODES WITHOUT MEDICAL REVIEW WITHIN 48 H:\nThe proportions of participants with at least one high, marked and extreme use episode were similar in each regimen (Tables 2a–c, respectively). 
The total number of days of high, marked and extreme use were greater in the Standard regimen (Tables 2a\n–c, respectively).Table 2aHigh beta-agonist use without medical review\nOutcomeSMART group (N = 151)Standard group (N = 152)Relative risk or rate SMART vs Standard (95% CI)\nP\n\nN (%)Mean (SD)Median (IQR)Range\nN (%)Mean (SD)Median (IQR)RangeRiskRateAt least one episode of high use\n84 (56)\n\n\n\n68 (45)\n\n\n\n1·24 (0·99–1·56)\n–\n0·058\nNumber of days of high use\n\n5·1 (14·3)\n1 (0–3)\n0–130\n\n8·9 (20·9)\n0 (0–5)\n0–149\n–\n0·58 (0·39–0·88)\n0·010\nNumber of days of high use in participants with at least one high use episode\n\n9·1 (18·2)\n2 (1–7·5)\n1–130\n\n19·9 (27·7)\n7 (2–26·5)\n1–149\n–\n–\n–\nNumber of days of high use without medical review within 48 h in participants with at least one high use episode\n\n8·5 (17·8)\n2 (1–6·5)\n0–130\n\n18·3 (24·8)\n5 (2–26)\n1–123\n–\n0·49 (0·31–0·75)\n0·001\nProportion of high overuse days without medical review within 48 h compared to total high overuse days in participants with at least one high use episode\n\n0·94 (0·20)\n1 (1–1)\n0–1\n\n0·94 (0·15)\n1 (1–1)\n0·25 –1\n–\n–\n–\n\nTable 2bMarked beta-agonist overuse without medical review\nOutcomeSMART group (N = 151)Standard group (N = 152)Relative risk or rate SMART vs Standard (95% CI)\nP\n\nN (%)Mean (SD)Median (IQR)RangeN (%)Mean (SD)Median (IQR)RangeRiskRateAt least one episode of marked overuse\n54 (36)\n\n\n\n56 (37)\n\n\n\n0·97 (0·72–1·31)\n–\n0·85\nNumber of days of marked overuse\n\n2·6 (10·2)\n0 (0–1)\n0–109\n\n4·8 (14·9)\n0 (0–2)\n0 to 144\n–\n0·56 (0·35–0·88)\n0·013\nNumber of days of marked overuse in participants with at least one marked overuse episode\n\n7·4 (16·0)\n2 (1–7)\n1–109\n\n13·1 (22·3)\n4·5 (2 to 15)\n1–144\n–\n–\n–\nNumber of days of marked overuse without medical review within 48 h in participants with at least one marked overuse episode\n\n6·7 (15·7)\n2 (1–6)\n1–109\n\n11·7 (19·0)\n4 (2–14)\n0–118\n–\n0·62 (0·37–1·06)\n0·079\nProportion of marked overuse days without medical review within 48 h compared to total marked overuse days in participants with at least one high use episode\n\n0·94 (0·17)\n1 (0·74–1)\n0·15–1\n\n0·92 (0·19)\n1 (0·96–1)\n0–1\n–\n–\n–\n\nTable 2cExtreme beta-agonist overuse without medical review\nOutcomeSMART group (N = 151)Standard group (N = 152)Relative risk or rate SMART vs Standard (95% CI)\nP\nN (%)Mean (SD)Median (IQR)RangeN (%)Mean (SD)Median (IQR)RangeRiskRateAt least one episode of extreme overuse\n41 (27)\n\n\n\n40 (26)\n\n\n\n1·03 (0·71–1·50)\n–\n0·87\nNumber of days of extreme overuse\n\n1·6 (6·7)\n0 (0–1)\n0–75\n\n2·9 (12·2)\n0 (0–1)\n0–137\n–\n0·56 (0·34–0·91)\n0·02\nNumber of days of extreme overuse in participants with at least one extreme overuse episode\n\n5·8 (11·9)\n2 (1–6)\n1–75\n\n11·0 (22·1)\n4·5 (2·5–11)\n1–137\n–\n–\n–\nNumber of days of extreme overuse without medical review within 48 h in participants with at least one extreme overuse episode\n\n5·2 (11·9)\n2 (1–4)\n0–75\n\n9·6 (18·3)\n4·5 (2–10)\n1–112\n–\n0·59 (0·31–1·10)\n0·096\nProportion of extreme overuse days without medical review within 48 h compared to total extreme overuse days in participants with at least one high use episode\n\n0·90 (0·26)\n1 (1–1)\n0–1\n\n0·94 (0·15)\n1 (1–1)\n0·29–1\n–\n–\n–\n\n\nIn the SMART regimen, the number of high use days without medical review within 48 h was approximately half of that in the Standard regimen; relative rate (95% CI) 0·49 (0·31 to 0·75), P = 0·001 (Fig. 1, Table 2a). 
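As a quick consistency check on Table 2a, the relative risk of having at least one high use episode, and its confidence interval, can be reproduced from the raw counts with the usual log relative-risk method. A minimal sketch in Python (the study itself used SAS contingency-table analysis, so this is illustrative only):

```python
from math import exp, log, sqrt

# Table 2a: at least one high use episode in 84/151 (SMART) vs 68/152 (Standard).
a, n1 = 84, 151   # SMART events, group size
b, n2 = 68, 152   # Standard events, group size

rr = (a / n1) / (b / n2)
se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)             # standard error of log(RR)
lo, hi = (exp(log(rr) + z * se) for z in (-1.96, 1.96))

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR = 1.24 (95% CI 0.99 to 1.56)
```

The day-count comparisons in the same tables (for example the 0·49 rate ratio quoted above) come from the Poisson regression with an exposure offset and over-dispersion adjustment described in the Methods, and cannot be recomputed from the summary counts alone.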
The point estimate of association was consistent with a lower number of marked and extreme use days without medical review within 48 h with the SMART regimen (Fig. 1 and Tables 2b, 2c, respectively).Fig. 1\na Mean number of high use days in participants with at least one high use episode, and mean number of high use days without medical review within 48 h in participants with at least one high use episode. b Mean number of marked overuse days in participants with at least one marked overuse episode, and mean number of marked overuse days without medical review within 48 h in participants with at least one marked overuse episode. c Mean number of extreme overuse days in participants with at least one extreme overuse episode, and mean number of extreme overuse days without medical review within 48 h in participants with at least one extreme overuse episode\n\n\nIn both the SMART and Standard regimens, the proportion of high, marked and extreme use days without medical review within 48 h, compared to total days of high, marked and extreme use was at least 90% (Tables 2a, 2b and 2c, respectively).\n\nABSTRACT.SEVERE EXACERBATIONS OF ASTHMA:\nDuring the study period, at least one severe asthma exacerbation was experienced by 28/151 (18·5%) of SMART participants and 50/152 (32·9%) of Standard participants. There was a total of 35 and 66 severe exacerbations in the SMART and Standard groups respectively; relative rate (95% CI) 0·54 (0·36 to 0·82), P = 0·004.\n\nABSTRACT.DAILY MEDICATION USE PRIOR TO AND FOLLOWING SEVERE EXACERBATIONS:\nMedication use in the 14 days prior to and 14 days following a severe exacerbation for the SMART and Standard regimens is shown in Figs. 2 and 3, respectively. With the SMART regimen, there was increasing budesonide/formoterol use from about 5 days prior to initiation of oral corticosteroid therapy, with high use for a further 5 days before returning to baseline levels after 10 days. With the Standard regimen, there was variable salbutamol use from about 14 days prior to initiation of oral corticosteroid therapy, increasing markedly the day before, with high use for a further 3 days before returning to baseline levels after 10 days. In the severe exacerbation dataset, only one of the included days, 2 days prior to severe exacerbation for a patient in the Standard group, met dose dumping criteria, defined as greater than or equal to 100 actuations in a 3-h period.18\nFig. 2Median [interquartile range (IQR)] daily budesonide/formoterol actuations in the 14 days prior to, and the 14 days following a severe exacerbation in the SMART group. Day 0 represents the initiation of corticosteroid therapy. There were 35 severe exacerbations in the SMART group. For patients with repeat exacerbations, a minimal interval of 28 days between exacerbations was required, to avoid overlap of data within the same participant. One patient in the SMART group had a second severe exacerbation within 28 days following the prior exacerbation. Thus, electronic data for 34 severe exacerbations were included in the plots of medication use patterns for the SMART group\n\nFig. 3Median (IQR) daily salbutamol a and budesonide/formoterol b actuations in the 14 days prior to, and the 14 days following a severe exacerbation in the Standard group. Day 0 represents the initiation of corticosteroid therapy. There were 66 severe exacerbations in the Standard group. 
For patients with repeat exacerbations, a minimal interval of 28 days between exacerbations was required, to avoid overlap of data within the same participant. Three patients in the Standard group had repeat severe exacerbations occurring within 28 days following the prior exacerbation. In another patient randomised to Standard treatment, there was no recorded data for two severe exacerbations due to use of non-study inhaled medication. Thus, electronic data for 61 severe exacerbations were included in the plots of medication use patterns in the Standard group. For these plots, only one of the included days (2 days prior to severe exacerbation for a patient in the Standard group) met dose dumping criteria (≥100 actuations in a 3 h period).\n\n\nThe number of days in which high, marked or extreme use occurred in the 14 days prior to a severe exacerbation, expressed as a percentage of all high, marked and extreme use days was 56/766 (7%), 28/400 (7%) and 10/237 (4%) for SMART, and 166/1355 (12%), 111/735 (15%) and 80/438 (18%) for Standard, respectively.\nOf the high use episodes in the 14-day period prior to a severe exacerbation, 38/56 (68%) and 133/166 (80%) were not associated with medical review within 48 h, in the SMART and Standard regimens, respectively. Of the marked overuse episodes in the 14-day period prior to a severe exacerbation, 18/28 (64%) and 87/111 (78%) were not associated with medical review within 48 h, in the SMART and Standard regimens, respectively.\n\nABSTRACT.HIGHEST MEDICATION USE PRIOR TO AND FOLLOWING SEVERE EXACERBATIONS:\nIn the SMART group, the median (range) highest daily number of budesonide/formoterol actuations was 11 (4 to 63) over the 14 days before severe exacerbations. In the Standard group, the median (range) highest daily number of salbutamol actuations was 13 (0 to 150) over the 14 days before severe exacerbations; on the day of highest salbutamol use preceding severe exacerbations, the median (range) number of budesonide/formoterol actuations was 4 (0 to 84). On this day of maximal salbutamol use, there were 8/61 (13%) instances when >8 actuations of budesonide/formoterol were taken and 10/61 (16%) instances when zero actuations of budesonide/formoterol were taken.\nIn the SMART group, the median (range) highest daily number of budesonide/formoterol actuations was 10 (0 to 71) in the 14 days after oral corticosteroid use for severe exacerbations (Day 0). In the Standard group, the median (range) highest daily number of salbutamol actuations was 20 (0 to 152) in the 14 days after Day 0; on the day of highest salbutamol use following Day 0, the median (range) number of budesonide/formoterol actuations was 4 (0 to 33). On this day of maximal salbutamol use, there were 7/61 (11%) instances when >8 actuations of budesonide/formoterol were taken and 9/61 (15%) instances when zero actuations of budesonide/formoterol were taken.\n\n\nABSTRACT.DISCUSSION:\nThis study shows that in a high-risk population of adult patients with asthma, around 90% of beta-agonist overuse episodes, well in excess of the asthma action plan thresholds recommending prompt medical review, do not lead to patients obtaining medical review within 48 h. 
It is likely that the symptomatic relief obtained from the high doses of beta-agonist may be one of the underlying causes for the delay in or lack of seeking medical help in the situation of such poorly controlled, worsening asthma.\nWe observed that in around 90% of marked and extreme overuse episodes, patients did not obtain medical review within 48 h despite this use being at least 1·5 and 2-fold higher than the level of use at which their asthma action plan advised that medical care must be sought. These findings complement our previous observation that 93% of high-use episodes, in which patients took at least 16 actuations of salbutamol or equivalent in a 24-h period, occurred without medical review within the next 48 h, despite this advice being documented in the provided asthma action plans.16 These findings were unexpected as they suggest that increasing beta-agonist use from at least 16 to at least 32 actuations of salbutamol or equivalent in a 24-h period did not lead to a greater likelihood of a patient seeking medical care.\nPatterns of beta-agonist use were also examined in the patients in whom a severe exacerbation was confirmed with the prescription of systemic corticosteroids. In the SMART regimen, the mean daily use of budesonide/formoterol progressively increased 5 days before the index corticosteroid use and remained high for a further 5-day period before returning close to baseline levels after 10 days. For the Standard regimen, the salbutamol use was variable from about 14 days prior to the index course of corticosteroids, increasing markedly the day before, with the high level of use gradually returning to baseline levels over 10 days. The majority of overuse episodes in the 14-day period prior to the initiation of oral corticosteroid therapy were not associated with medical review within 48 h, regardless of the magnitude of the overuse. These findings suggest that even in patients who did seek medical review, there is often delay during worsening asthma, resulting in the later prescription of oral corticosteroids in a severe exacerbation. This may have contributed to our finding that in the setting of a severe exacerbation it takes at least 5 days of oral corticosteroid treatment before there is sufficient improvement to result in a reduction in beta-agonist use, regardless of whether the SMART or Standard regimen was used.\nDifferent patterns of budesonide/formoterol use were observed in the Standard group during severe exacerbations. Some patients used no budesonide/formoterol on the day of highest salbutamol use preceding the severe exacerbation. This non-adherence may contribute to progressive worsening of unstable asthma, increasing the risk of a severe exacerbation. Another pattern was the use of budesonide/formoterol as an as required ‘reliever’ medication well in excess of the maintenance four actuations per day. These observations suggest that some patients may use their maintenance budesonide/ formoterol inhaler according to presumed need, similar to the SMART regimen, despite being prescribed fixed dose maintenance treatment and salbutamol for reliever.\nAnother important observation was that while the SMART regimen reduced the number of high, marked and extreme overuse episodes by at least 40%, it did not influence the likelihood of a patient obtaining medical review when an overuse episode occurred. 
As a result, the SMART regimen reduced the total number of overuse episodes in which patients did not obtain medical review within 48 h, to a similar extent to which it reduced the total number of overuse episodes. Together with the 46% reduction in severe exacerbations, these findings suggest that the SMART regimen has a better safety profile as it reduced the total number of overuse episodes in which delay in obtaining medical review occurred.\nThere are a number of methodological issues that require consideration. As an open-label study, there was the potential for bias, however the use of a single inhaler in the SMART regimen and separate inhalers in the Standard regimen without the requirement for dummy placebo inhalers enhanced generalisability of the findings to real world clinical practice. Metered dose inhaler (MDI)’s were used rather than dry powder inhalers (Turbuhaler) to deliver budesonide/formoterol, as it enabled us to use validated electronic monitors of inhaler use.19, 20 Beta-agonist overuse related to the measures of budesonide/formoterol and salbutamol for the SMART and Standard regimens, respectively. This is likely to have led to an underestimation of the number of beta-agonist overuse episodes for the Standard regimen, as it did not take into account the use of budesonide/formoterol as a reliever additional to the four maintenance actuations per day, a behaviour which was noted in some patients.17\n\nThere are several additional clinical implications of these findings. Firstly, the patient’s failure to recognise that the need for frequent and high doses of beta-agonist indicates severe asthma, likely contributes to the delay in or lack of seeking medical help. Secondly, this behaviour may reflect previous experience of these high risk patients with asthma, that most exacerbations of asthma requiring high doses of beta-agonist are self-limiting and eventually resolve without medical review, oral corticosteroids, or hospital attendance. This interpretation is supported by our observation that over 80% of all days of beta-agonist overuse, regardless of extent, did not lead to a course of oral steroids within the following 14 days. The higher proportions associated with the SMART regimen suggest that budesonide/formoterol overuse may be more likely to lead to resolution of the severe asthma episode without oral steroids than salbutamol overuse. Thirdly, even in the setting of an RCT with repeat clinic visits during which the asthma action plan was introduced and reinforced, most patients did not follow this written advice to obtain medical review when beta agonist use exceeded predefined levels of increased use.\nIn conclusion, in at least 90% of days in which beta-agonist overuse occurred, patients with high-risk asthma did not obtain medical review within 48 h, regardless of the magnitude of the beta-agonist overuse, or the inhaled corticosteroid/long-acting beta-agonist regimen.\n\nABSTRACT.METHODS:\nABSTRACT.DESIGN:\nThis is a secondary analysis of data from a 24-week multicentre, open-label RCT of Single combination budesonide/formoterol inhaler as Maintenance And Reliever Therapy (SMART regimen) vs. fixed-dose budesonide/formoterol with salbutamol for relief (Standard regimen) in 303 adult patients with a physician’s diagnosis of asthma, described previously.16 Patients were eligible if they had a current prescription for inhaled corticosteroids (ICS) and at least one asthma exacerbation in the preceding year. 
An exacerbation was defined as presentation to a General Practice or Emergency Department resulting in prescription of oral corticosteroids or treatment with spacer-delivered or nebulised bronchodilator, or self-administration of prednisone for asthma for at least 3 days. Exclusion criteria included a diagnosis of chronic obstructive pulmonary disease and current or ex-smokers with a > 10 pack-year smoking history with onset of respiratory symptoms after the age of 40.\nPatients were randomised to the SMART regimen: a MDI containing 200/6 µg budesonide/formoterol (Vannair, AstraZeneca Limited, Auckland, New Zealand, which is the MDI formulation of Symbicort turbuhaler), two actuations twice daily for maintenance with one extra dose as needed for relief; or the Standard regimen: 200/6 µg Vannair MDI, two actuations twice daily for maintenance with 100 µg salbutamol MDI (Ventolin, GlaxoSmithKline Limited, Auckland, New Zealand), one to two extra actuations for relief. All participants were given written asthma action plans and verbal instructions outlining when to consult their doctor. All participants provided written informed consent and the trial was approved by the New Zealand Multi Region Ethics Committee, and prospectively registered [ACTRN 12610000515099]. The study protocol is available at http://www.mrinz.ac.nz/uploads/mrinz/SMART_Protocol.pdf\n\nAll Vannair and Ventolin MDIs incorporated an electronic monitor (Smartinhaler Tracker, Nexus6 Limited, Auckland, New Zealand), which recorded the date and time (to the nearest second) each time an inhaler was actuated. These monitors were 99·7% accurate in recording actuations in bench testing19 and were used in accordance with strict trial quality control processes.20 Participants were not aware of the exact capabilities of the monitors.\n\nABSTRACT.DEFINITIONS OF HIGH, MARKED AND EXTREME BETA-AGONIST USE EPISODES:\nABSTRACT.HIGH USE:\nFor SMART, this was defined as >8 actuations of budesonide/formoterol in excess of the four maintenance doses per 24-h period i.e., equivalent to >12 actuations in total. For Standard, this was defined as >16 actuations of salbutamol per 24-h period. These thresholds were based on the dose limits of beta-agonist use requiring medical review, defined by the action plans implemented in this study,21, 22 and supported by the results of the short-term bronchodilator equivalence of formoterol 6 µg to salbutamol 200 µg, with repeat dosing in acute asthma.23, 24 In accordance with their action plans, participants were advised to seek medical review at these high use thresholds.\n\nABSTRACT.MARKED USE:\nFor SMART, this was defined as >12 actuations of budesonide/formoterol in excess of the four maintenance doses per 24-h period i.e., equivalent to >16 actuations in total. For Standard, this was defined as >24 actuations of salbutamol per 24-h period. These thresholds were based on 1.5 times the limits of beta-agonist use requiring medical review, defined by the action plans.21, 22\n\n\nABSTRACT.EXTREME USE:\nFor SMART, this was defined as >16 actuations of budesonide/formoterol in excess of the four maintenance doses per 24-h period i.e., equivalent to >20 actuations in total. For Standard, this was defined as >32 actuations of salbutamol per 24-h period. 
These thresholds were based on twice the limits of beta-agonist use requiring medical review, defined by the action plans.21, 22\n\n\n\nABSTRACT.HIGH, MARKED, AND EXTREME BETA-AGONIST USE WITHOUT MEDICAL REVIEW:\nThe number of days of high, marked, or extreme beta-agonist use without medical review within 48 h was determined using the following rules: for every high, marked, or extreme beta-agonist use day, the ‘index day’, the database was checked to determine if the patient attended for medical review, which could include primary care clinic, an after-hours clinic or hospital attendance, either on the day of overuse or the next day. This 48-h window was defined as per the Standard action plan, which specifies that the patient should attend for medical review ‘within 1 to 2 days’ in the setting of worsening asthma recognised by symptoms including ‘reliever only lasting 2–3 h’.22 The SMART plan advises patients to seek medical review on the same day if more than 12 actuations of budesonide/formoterol are taken.21\n\nIf the participant attended for medical review, then overuse occurring on the index day, the day of medical review and in the 7 days after medical review was not counted as overuse without medical review, as a ‘stand-down’ period. In effect, overuse occurring on these days was ‘permissible’ as the patient had attended for medical review in the setting of this exacerbation. This criterion was chosen as the american thoracic society (ATS)/european respiratory society (ERS) definition of severe exacerbations separate exacerbations by 7 days.25 If a participant attended for repeated medical reviews during the stand-down period, then the 7-day period where overuse without medical review is not counted, was restarted. This approach was used as the object of this analysis was to explore the relationship between the ‘index’ overuse episode and the first episode of medical review following this.\n\nABSTRACT.OUTCOMES:\nIn this exploratory analysis, no particular primary outcome was specified and there is no adjustment for multiple comparisons. The outcome variables were firstly the proportion of participants in each arm of the RCT with at least one episode of overuse for each type of beta-agonist overuse; high, marked and extreme. A second outcome was the number of days of overuse for each type of beta-agonist overuse. A third outcome was the number of days of overuse without medical review within 48 h for each type of beta-agonist overuse. The final outcome was the proportion of days of overuse without medical review within 48 h compared to the total number of days of overuse for each type of beta-agonist overuse.\nMedication use in the 14 days prior to and following a severe exacerbation was also examined. A severe exacerbation was defined according to ATS/ERS criteria25 as: (a) the use of systemic corticosteroids for at least 3 days; or (b) a hospitalisation or Emergency Department visit because of asthma, requiring systemic corticosteroids. Courses of corticosteroids separated by 7 days or more were treated as separate severe exacerbations. The date of the first day of corticosteroid therapy was Day 0. Medication use for the 14-day period before and up to 14 days after Day 0 was extracted from the data set for each participant with at least one severe exacerbation. 
For patients with repeat exacerbations, a minimum interval of 28 days between exacerbations was required to avoid overlap of data within the same participants.\n\nABSTRACT.STATISTICAL METHODS:\nThe relative risk of at least one episode of overuse by overuse type was by simple contingency table analysis and expressed as a relative risk. The relative rates of days of high, marked and extreme beta-agonist overuse was by Poisson regression with an offset for the adjusted days of treatment exposure, and a further adjustment for over-dispersion. SAS version 9·4 was used.\n\nABSTRACT.DATA AVAILABILITY STATEMENT:\nData is available on application to the corresponding author\n\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\n\nSupplementary Information\n\n**Question:** Compared to budesonide/formoterol with salbutamol what was the result of budesonide/formoterol inhaler on relative rate?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13818
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nUse of a self-rating scale to monitor depression severity in recurrent GP consultations in primary care – does it really make a difference? A randomised controlled study\n\n\n\nABSTRACT.BACKGROUND:\nLittle information is available about whether the use of self-assessment instruments in primary care affects depression course and outcome. The purpose was to evaluate whether using a depression self-rating scale in recurrent person-centred GP consultations affected depression severity, quality of life, medication use, and sick leave frequency.\n\nABSTRACT.METHODS:\nPatients in the intervention group met their GP regularly at least 4 times during the 3 months intervention. In addition to treatment as usual (TAU), patients completed a self-assessment instrument (Montgomery-Asberg Depression Rating Scale) on each occasion, and then GPs used the completed instrument as the basis for a person-centred discussion of changes in depression symptoms. The control group received TAU. Frequency of visits in the TAU arm was the result of the GPs’ and patients’ joint assessments of care need in each case.\nDepression severity was measured with Beck Depression Inventory-II (BDI-II), quality of life with EQ-5D, and psychological well-being with the General Health Questionnaire-12 (GHQ-12). Data on sick leave, antidepressant and sedatives use, and care contacts were collected from electronic patient records. All variables were measured at baseline and 3, 6, and 12 months. Mean intra-individual changes were compared between the intervention and TAU group.\n\nABSTRACT.RESULTS:\nThere were no significant differences between the intervention and control group in depression severity reduction or remission rate, change in quality of life, psychological well-being, sedative prescriptions, or sick leave during the whole 12-month follow-up. However, significantly more patients in the intervention group continued antidepressants until the 6 month follow-up (86/125 vs 78/133, p < 0.05).\n\nABSTRACT.CONCLUSIONS:\nWhen GPs used a depression self-rating scale in recurrent consultations, patients more often continued antidepressant medication according to guidelines, compared to TAU patients. However, reduction of depressive symptoms, remission rate, quality of life, psychological well-being, sedative use, sick leave, and health care use 4-12 months was not significantly different from the TAU group. These findings suggest that frequent use of depression rating scales in person-centred primary care consultations has no further additional effect on patients’ depression or well-being, sick leave, or health care use.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov Identifier: NCT01402206. 
Registered June 27 2011(retrospectively registered).\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/s12875-016-0578-9) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nDepression is a leading cause of disability and affects 10 to 15% of the population of most countries in the world [1, 2]. A majority of patients with depression are treated in primary care, and approximately 75% of antidepressants are prescribed by general practitioners (GPs) [3, 4]. The usual treatment for depression in Swedish primary care, as in primary care in other countries, depends upon symptom severity. Most people treated for depression in primary care receive one or more of the following: early reassessment (watchful waiting) and symptom monitoring by their GP, cognitive behavioural therapy or other psychotherapy, and antidepressant medication [5, 6].\nDepression guidelines recommend that GPs regularly use self-rating scales to evaluate and monitor symptoms in patients with depression [5, 6]. These guidelines, however, have been based largely on clinical expertise rather than on the results of randomised controlled trials [3]. Furthermore, most self-rating scales recommended for detecting and monitoring depression were designed in psychiatric care settings and have not been adapted for use in primary care [3]. Little information is thus available about whether the use of such instruments in primary care affects depression symptoms, treatment, rehabilitation, or recovery [7]. We know that using such scales for screening does not improve detection or management of depression in settings such as primary care [8]. However, we do not know whether using a self-rating scale to follow the course of depression in person-centred primary care consultations improves treatment and outcomes of depression.\nThe depression rating scale most commonly used in Swedish primary health care is the Montgomery-Asberg Depression Rating Scale [9]. However, its use has mainly been evaluated in psychiatric settings [9, 10]. MADRS-S is sensitive to changes in symptoms of depression and is therefore a valuable tool in person-centred consultations [10–12]. However, a qualitative study based on interviews with Swedish GPs showed that many GPs thought that MADRS-S was of limited utility in consultations [13]. It is thus still unclear whether or not future guidelines should recommend mandatory use of depression self-rating scales in primary health care.\nThe aim of the current study was to evaluate in a randomised control trial whether using a depression rating scale (MADRS-S) in recurrent person-centred GP consultations affected depression severity, quality of life, overall psychological well-being, antidepressant and sedative use, sick leave, and health care use in a long time perspective.\n\nBODY.METHODS:\nThe PRI-SMA study (PRImary care Self-assessment Montgomery-Asberg) was a multicentre, controlled trial that took place at primary health care centres (PHCCs) and was randomised at the GP level. The GPs randomised to the intervention used a patient self-rating scale at recurrent monthly patient visits in addition to providing usual care. The GPs randomised to the control condition provided treatment as usual (TAU). The intervention lasted for 3 months. 
Adult patients who presented with a recent depressive episode (mild/moderate depression) at the PHCC were recruited to the study by the GPs.\nThe trial took place at 22 Swedish PHCCs between March 2010 and December 2013. All 98 PHCCs in the region were invited to participate in the intervention; 22 agreed to participate.\nBODY.RANDOMISATION OF GPS AT PRIMARY HEALTH CARE CENTRES:\nAll GPs took part in an information meeting about the study. All GPs also met with the study leaders when the leaders visited each participating PHCC at the time of the intervention start-up at that PHCC. Before the intervention started, the GPs at each PHCC were randomised to either intervention treatment or TAU. All GP names were written on slips of paper and mixed in a container, and an administrative employee blinded to the aim of the trial drew names. The GP whose name was first drawn was assigned to the intervention group, the GP whose name was drawn second was assigned to the control group, and so on until all names were drawn.\n\nBODY.STUDY PROCEDURE:\nPatient recruitment took place at one PHCC at a time. During the 2 week recruitment period, a research nurse worked full-time at the PHCC to facilitate patient recruitment, collect patient data, and support PHCC staff.\n\nBODY.INCLUSION OF PATIENTS:\nStudy participants were patients aged 18 and up who visited the PHCCs and were identified and diagnosed by a GP with a new episode of mild/moderate depressive disorder [14].\nDuring the 2 week recruitment period, all randomised GPs who met patients they suspected of having depression used the PRIME-MD diagnostic instrument to confirm or reject the diagnosis of depression in accordance with DSM-IV criteria [14]. All patients who fulfilled the diagnostic criteria for depressive disorder, i.e. mild to moderate (BDI score ≤36), were asked if they would like to participate in the study.\nInclusion criteria were: written informed consent, age ≥18 years, diagnosed with mild to moderate depressive disorder and either not prescribed antidepressant medication or had no changes in antidepressant medication during the preceding 2 months.\nExclusion criteria were: lack of written informed consent, antidepressant medication introduced or changed during the 2 months prior to baseline, diagnosed with severe depressive disorder (BDI-II >36, confirmed by diagnostic procedure by GP), diagnosed with severe mental disorder (i.e., bipolar disorder, antisocial personality disorder, psychosis, substance use disorder, or other serious mental disorder), suicidal ideation or earlier suicide attempts, did not speak or understand Swedish, and/or had cognitive disabilities that made it difficult or impossible to complete the assessment instruments, including MADRS-S.\n\nBODY.INTERVENTION:\nThe intervention consisted of using a patient depression self-rating scale (MADRS-S) in recurrent monthly consultations during the 3-month intervention. Patients made 4 visits to their GPs, at which time they completed MADRS-S to monitor changes in their depressive symptoms that were then discussed in the person-centred consultation [15, 16]. MADRS-S was used as a supplement to, rather than as a substitute for, TAU.\nThe GPs randomised to the group that provided the intervention received four hours of guidance about how to include the results of MADRS-S in the person-centred consultation. The intervention GPs also received a video CD with the same pre-recorded information. 
The person-centred consultations involved patients and GPs collaborating to increase patients’ ability to manage their depression. The guidance therefore included a reminder to the GPs that MADRS-S was used for the sake of the patients rather than the GPs.\n\nBODY.TREATMENT AS USUAL:\nThe GPs randomised to the group that would provide TAU were instructed to manage patients with depression the same way they usually did (but with the addition of the diagnostic procedure in the initial consultation). In general Swedish GPs are very knowledgeable about and use of person-centred consultation methods in their daily practice of the kind described in Maguire 2002 [16].\nThree months after baseline, patients in the control group were followed up in an appointment with a nurse at the PHCC.\n\nBODY.DATA COLLECTION PROCEDURE:\nA study nurse collected data from participants in the intervention and control groups during the first visit (baseline), at a follow-up visit to the PHCC at the end of the intervention (3 months after baseline), and by postal questionnaires 6 and 12 months after baseline (see Additional files 1 and 2).\n\nBODY.MEASURES:\nBODY.BACKGROUND VARIABLES:\nBackground information was obtained by a questionnaire at the patients’ first visit to the PHCC. Age was measured as age in years, gender as female or male. Marital status was dichotomised as single or married/cohabiting. Children living at home was dichotomised as participants who had children <18 years living at home vs. participants not having children living at home. Educational level was identified as as lower educational level (primary school or vocational school), middle educational level (high school) and higher educational level (college or university). Employment status was dichotomised as working/studying vs. unemployed/retired.\nPlace of birth was dichotomised as born in one of the Nordic countries vs. born outside the Nordic countries. Smoking status was categorised as “non-smokers” (participants who had not smoked during the last year), “sometimes smokers” (those who only smoked a single cigarette per week or more seldom), and “smokers” (more frequent smoking). Leisure-time physical activity was categorised as i) no or almost no physical activity during leisure time, ii) activity ≥4 hrs. per week, and iii) intensive (several times a week and vigorously).\n\nBODY.OUTCOMES:\nDepression severity; Assessed before (at baseline) and after the intervention with the Beck Depression Inventory-II (BDI-II) and categorised as mild (BDI-II score >13-19), moderate (BDI-II score 20-28), and high moderate (BDI-II score, 29-36) [17, 18].\nRemission; Identified as a BDI-II score ≤ 13 at 3-month follow up.\nQuality of life; Measured with the standardised EQ-5D instrument [19, 20].\nOverall psychological well-being; Assessed with the 12-item General Health Questionnaire (GHQ-12) (response scale 0-3) [21].\nPrescriptions for antidepressants; Information on this measure was obtained from patients’ electronic patient records (EPRs) and from responses to the patient questionnaires and dichotomised as yes (used) or no (did not use). 
Guidelines indicate that treatment should not be stopped before 6-12 months after recovery [5].\nPrescriptions for sedatives; Information on prescriptions for sedatives was obtained from patients’ electronic patient records (EPRs) and from responses to the patient questionnaires and dichotomised as yes (used) or no (did not use).\nSick leave; Information on sick leave was obtained from EPRs and responses to the patient questionnaires. Sick leave was used both as a dichotomous variable (yes or no) and as number of days of sick leave.\nHealth care use; This measure comprised information on number of visits to GPs, number of visits to nurses, number of visits to psychologists/psychotherapists, and total number of visits to the PHCC. Information was obtained from EPRs by study personnel and divided into the number of visits between 0 and 3 months, and 4 and 12 months.\n\n\nBODY.STATISTICAL METHODS:\nThe student’s t-test (paired) was used to compare mean intra-individual changes in the intervention and TAU groups. Two-sided Chi-square or Mann Whitney U-tests were used to compare frequencies. Statistical significance was set at p < 0.05. Logistic regression models were used to examine whether changes in the BDI-II, EQ-5D, and GHQ-12 variables from baseline to 3, 6 and 12 months follow-ups were associated with sex, age, education, antidepressant medication, and group assignment (intervention/TAU).The results are presented as odds ratios (OR) with 95% confidence intervals (CI). All analyses were performed in SPSS. All analyses were conducted on an intention to treat basis.\n\nBODY.POWER CALCULATION:\nPower was calculated for the BDI-II-score outcome on the basis of empirical experience of the variation in this factor, taking into account that GPs were the unit of randomisation. The power estimate was based on the assumption of an alpha error = 0.05 and a beta error = 0.20 (power = 0.80). Given an expected effect difference of 2 units (BDI-II) over time between the intervention and TAU groups, we calculated that we would need 105 patients in each group to achieve sufficient statistical power. To safeguard for dropouts, we recruited 20% more patients than the power calculation indicated that we needed.\n\nBODY.ETHICS:\nThe Regional Ethical Review Board in Gothenburg, Sweden approved this study (Dnr 746-09; T612-10). Prior to inclusion and after receiving oral and written information about the study, participants provided written informed consent. The trial was registered at ClinicalTrials.gov (Identifier: NCT01402206) [22].\n\n\nBODY.RESULTS:\nOf the 91 participating GPs, 45 were randomised into a group that provided the intervention and 46 into a group that provided TAU. The participating GPs enrolled 258 patients, and the flow of the patients through the study is shown in Fig. 1 [23]. There was no significant difference in participation rate in the patients in the intervention and control groups at the 3-, 6- or 12-months follow-ups.Fig. 1PRI-SMA-Study (CONSORT Flowchart)\n\n\nTable 1 presents the background characteristics of the study population. 
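To retrace the power calculation reported in the Methods above: with a two-sided α of 0.05, 80% power, and an expected 2-unit BDI-II difference, the standard two-sample formula also needs an assumed standard deviation, which the paper does not state; an SD of roughly 5 BDI-II points gives a figure close to the reported 105 per group, before any adjustment for clustering by GP. A minimal sketch, with the SD as an explicit assumption:

```python
from math import ceil

Z_ALPHA = 1.96  # two-sided alpha = 0.05
Z_BETA = 0.84   # power = 0.80

def n_per_group(delta: float, sd: float) -> int:
    """Per-group sample size for a two-sided, two-sample comparison of means
    (simple formula; no adjustment for clustering by GP)."""
    return ceil(2 * (sd / delta) ** 2 * (Z_ALPHA + Z_BETA) ** 2)

n = n_per_group(delta=2.0, sd=5.2)  # sd = 5.2 is an assumption, not from the paper
print(n)                            # 106, close to the reported 105 per group
print(ceil(n * 1.2))                # 128, after the 20% safeguard for dropouts
```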
No significant differences were found between the participants in the intervention and TAU groups at baseline.Table 1Baseline characteristics of participants in the PRI-SMA trial (n = 258)\nCharacteristicsTotal\nn (%)Intervention\nn (%)TAU¤\n\nn (%)\np\nParticipants\n258\n125\n133\n\nAge (years, mean )\n43.48\n44.84\n42.19\n0.2\n men\n76 (29.5)\n31 (24.8)\n45 (33.8)\n\n women\n182 (70.5)\n94 (75.2)\n88 (66.2)\n0.1\nMarital status\n\n\n\n\n single\n118 (45.7)\n61 (48.8)\n57 (42.9)\n\n married/cohabiting\n140 (54.3)\n64 (51.2)\n76 (57.1)\n\n children <18 years at home\n82 (43.2)\n36 (40.4)\n46 (45.5)\n0.5\nLower educational level\n34 (13.2)\n17 (13.7)\n17 (12.8)\n\nMiddle educational level\n115 (44.7)\n61 (49.2)\n54 (40.6)\n\nHigh educational level\n108 (42.0)\n46 (37.1)\n62 (46.6)\n0.3\nEmployment\n\n\n\n\n working/studying\n181 (80.8)\n90 (81.1)\n91 (80.5)\n\n unemployed/retired\n43 (19.2)\n21 (18.9)\n22 (19.5)\n0.9\nBorn outside the Nordic countries\n41 (16)\n19 (15.3)\n22 (16.7)\n0.8\nSmoking (yes or sometimes)\n75 (29.4)\n36 (29.3)\n39 (29.5)\n0.9\nLeisure-time physical activity\n\n\n\n\n never\n109 (42.4)\n57 (46)\n52 (39.1)\n\n at least 4 hrs/week\n124 (48.2)\n55 (44.4)\n69 (51.9)\n\n intensive\n24 (9.3)\n12 (9.7)\n12 (9,0)\n0.5\nDepression\n\n\n\n\n mild (BDI-II 12-19)\n32 (13)\n16 (13.6)\n16 (12.4)\n\n moderate (BDI-II 20-28)\n82 (33.2)\n39 (33.1)\n43 (33.3)\n\n high moderate (BDI-II 29-36)\n68 (27.5)\n35 (29.7)\n33 (25.6)\n0.9\n\n¤\nTAU treatment as usual\n\n\nThere was a statistically significant difference between participants and drop-outs during the study concerning age (mean age 44.3 in participants, mean age 37.3 in drop-outs, p = 0.02), gender (male 14/62, 22.6%, female 16/166, 9.6%, p = 0.034), and ethnicity (born in Sweden 21/194, 10.8%, and born outside Sweden 9/32, 28.1%, p = 0.035)\nFigures 2, 3, and 4 shows depression symptoms (BDI-II), quality of life (EQ-5D), and overall psychological well-being (GHQ-12) in intervention and TAU groups at baseline and 3, 6, and 12 months (boxplots). All patients who participated in the study, both those in the intervention and those in the TAU group, improved in all three variables between baseline and 3 months but there were no significant differences in mean changes (mean of intra-individual ∆) between the groups. Improvements were substantial in both groups; at the 3-month follow-up, nearly half the patients no longer had depression (BDI <13) (49% in the intervention and 47.3% in the TAU group, p = 0.89). We also performed logistic regression analyses. The models showed that at the 12-month follow-up, older age was associated with improved BDI-II score (OR 1.05, CI 1.01-1.99), and a higher level of education was associated with improved EQ-5D score (OR 3.9, CI 1.77-8.73). The factors added to the model (sex, age, education, antidepressant medication, and participation in the intervention or TAU group) explained the same amount of variation in both the intervention and TAU groups. Thus, the difference between the groups remained non-significant (data not shown). No adverse events were reported from intervention or TAU group.Fig. 2Depression severity (BDI-II values) in the intervention and TAU groups. Boxplots for baseline, 3-months follow up, 6-months follow up, and 12-months follow up. The outcome variable is presented as box plot with medians, minimum and maximum values and lower and upper quartiles. Y-axis: BDI-II, X-axis: time 0, 3, 6, 12 months. 
Outliers (circle) are cases with values between 1.5 and 3 times the interquartile range (5 cases with BDI-II ≥50 in patients with difficulties in the Swedish language, where the complementary diagnostic procedure by the GP indicated moderate depressive disorder)\n\nFig. 3 Quality of life (EQ-5D values) in the intervention and TAU groups. Boxplots for baseline, 3-months follow up, 6-months follow up, and 12-months follow up. The outcome variable is presented as box plot with medians, minimum and maximum values and lower and upper quartiles. Y-axis: EQ-5D, X-axis: time 0, 3, 6, 12 months. Outliers (circle) are cases with values between 1.5 and 3 times the interquartile range, extremes (star) are cases with values more than 3 times the interquartile range\n\nFig. 4 Overall psychological well-being (GHQ-12 values) in the intervention and TAU groups. Boxplots for baseline, 3-months follow up, 6-months follow up, and 12-months follow up. Outliers (circle) are cases with values between 1.5 and 3 times the interquartile range\n\n\nTable 2 shows differences between the intervention and TAU groups in the percent of patients who had prescriptions for antidepressants and sedatives. At baseline, 22% of patients in the intervention group and 32% of patients in the TAU group were on maintenance antidepressant medication. After 3 months, the proportion of patients with prescriptions for antidepressant medication was 72% in both the intervention and the TAU group. At the 6-month follow-up, the proportion of patients taking antidepressants had dropped to 69% in the intervention group and 59% in the TAU group (p = 0.007). At the 12-month follow-up, there were no significant differences in the percent of patients who had prescriptions for antidepressants or sedatives in the intervention and TAU groups (Table 2). Table 2 Number and percent of patients who had prescriptions for antidepressants and sedatives and stated medication at baseline and 3-, 6-, and 12-months follow up, respectively, in the intervention and TAU groups\nBaseline3 months follow-up6 months follow-up12 months follow-up\nn\n(%)\nn\n(%)\nn\n(% )\nn\n(%)Antidepressants\n\n\n\n\n\n\n\n Intervention (n = 125)\n27\n(22)\n90\n(72)\n86\n(69)**\n74\n(59)\n TAU (n = 133)\n43\n(32)\n96\n(72)\n78\n(59)\n77\n(58)\nSedatives\n\n\n\n\n\n\n\n\n Intervention\n26\n(21)\n64\n(51)\n53\n(42)\n52\n(42)\n TAU¤\n\n32\n(24)\n73\n(55)\n56\n(42)\n51\n(38)\n\n¤\nTAU treatment as usual\n**Significantly higher proportion of patients in intervention group still on antidepressants after 6 months, p = 0.007\n\n\nTable 3 shows the mean number of days of sick leave by patient group. There were no significant differences in the percentage of patients on sick leave in the intervention and TAU groups between baseline and 3 months, or 4-12 months, or during the entire study period. The mean total duration of sick leave (days) was not significantly different between the intervention and TAU groups (Table 3). Table 3 Number and percent of individuals on sick leave, mean days of sick leave, and p-values for difference in mean days of sick leave between participants in intervention and TAU groups, based on information obtained from electronic patient records\nInterventionTAU\nn (%)Mean days of sick leaveSD\nn (%)Mean days of sick leaveSD\nP for difference mean days0-3 months\n31(25)\n63.1\n29.8\n48 (36)\n55.8\n27.7\n0.221\n4-12 months\n35(28)\n100.8\n87.3\n47(35)\n102.7\n85\n0.922\nTotal 0-12 months\n49(39)\n124.8\n102.5\n64 (48)\n123.3\n85.0\n0.942\n\n\nBODY.HEALTH CARE USE:\nMean number of visits at the PHCC, including visits to GP, nurse, psychologist/therapist and total visits to PHCC, was compared between intervention and TAU groups. 
All information was obtained from EPRs (Table 4). We also compared the total number of visits. There were no significant differences between the intervention and TAU groups regarding these outcomes, during the total 12-month follow-up period. However, during the 0-3 month period, the TAU group made significantly more visits to psychologists/psychotherapists than the intervention group and the intervention group made significantly more total visits to the PHCC than the TAU group.Table 4Number of patients’ contacts with GPs, nurses, and psychologists between baseline to 3 months, and 4 to12 months for intervention and TAU groups, based on information obtained from electronic patient records\nProfession\nType of contact\nIntervention 0-3 months m (SD)\nTAU 0-3 months m (SD)\n\np\n\n GP\n visit\n 3.44 (1.214)\n 2.59 (1.354)\n0.066\n Nurse\n visit\n 0.32 (0.829)\n 0.32 (0.875)\n0.878\n Psychologist/Therapist\n visit\n 0.40 (1.320)\n 0.89 (1.776)\n\n0. 0001\n\n Total visits to PHCC\n visit\n 4.16 (2.398)\n 3.81 (2.692)\n\n0.006\n\n\n\nIntervention 4-12 months m (SD)\nTAU 4-12 months m (SD)\n\np\n\n GP\n visit\n 2. 54 (2.337)\n 2.38 (2.595)\n0.301\n Nurse\n visit\n 0. 68 (1.484)\n 0.62 (1.551)\n0.581\n Psychologist/Therapist\n visit\n 0. 90 (2.504)\n 0.83 (1.908)\n0.401\n Total visits to PHCC\n visit\n 4. 11 (4.586)\n 3.83 (4.403)\n0.619\n\n\nIntervention total 0-12 months m (SD)\nTAU total 0-12 months m (SD)\n\np\n\n GP\n visit\n 8.26 (5.842)\n 7.64 (5.976)\n0.304\n Nurse\n visit\n 0.99 (1.978)\n 0.94 (2.088)\n0.967\n Psychologist/Therapist\n visit\n 1.30 (3.129)\n 1.73 (3.222)\n0.167\n Total visits to PHCC\n visit\n 8.26 (5.842)\n 7.64 (5.976)\n0.812\nBold numeral indicates statistically significant p-value\n\n\n\n\nBODY.DISCUSSION:\nIn this RCT, which evaluated use of a depression self-rating scale in recurrent person-centred GP consultations, more patients in the intervention than control group adhered to antidepressant treatment for the recommended 6 months. However, compared to TAU, there were no significant differences between the effects on depression symptom reduction, quality of life or overall psychological well-being, sedative use, sick leave, or health care use. These findings suggest that recurrent use of depression self-assessment scales in person-centred primary care consultations does have an effect on antidepressant medication adherence, but has no further additional effects, compared to the treatment usually provided in Swedish primary care.\nBODY.STRENGTHS AND WEAKNESSES OF THE STUDY:\nThe intervention was conducted in ordinary primary care rather than advertising for participants, which would have resulted in a sample that was not representative of patients in primary health care. The study compared the intervention with a TAU group rather than using a control group that consisted of patients on a waiting list. Studies of the effects of cognitive behavioural therapy show that being on a waiting list has a nocebo effect (i.e. results in worsening depression symptoms), which in turn makes intervention effects appear larger than they really are [24, 25]. We had access to all patients’ EPRs, a good source of information on prescriptions, sick leave, and health care use. In the current study, we had the opportunity to observe the course of depression symptoms, perceived quality of life, sick leave, and health care use for a full year after baseline. Another major strength is that a research nurse was stationed at each PHCC during the recruitment period. 
The nurse collected baseline data, gathered oral and written consent, and supported the staff at the site, functions that primary care personnel stress are important to make it easier to combine clinical work and research [26].We were well aware of the possible obstacles to conducting a RCT at a PHCC as described by Richter and Roy-Byrne [27, 28].\nWe randomised at the GP level. Randomisation at the patient level would have necessitated changing doctor for some patients, or GPs trained in the intervention would have provided the intervention to some patients but treatment as usual to others, increasing the possibility of contamination. We chose not to randomise at the PHCC level, as regional and socioeconomic differences between areas served by the PHCCs might have affected the results.\nAnother limitation was absence of a blinded control arm. The control GPs were aware that their colleagues at the same PHCC were providing the intervention, and this might have affected treatment as usual, even if they were requested to continue patient care as usual. This possible bias would support the notion of randomisation at PHCC level, but this would in turn increase the risk of e.g. selection effects. Another limitation is that the intervention of recurrent use of self-rating scales also included regular GP visits, and it is not possible to separate possible effects of the regular GP visits from effects of the patient’s frequent self-rating monitoring. However, during the 3-month intervention, patients in the TAU arm made only 0.85 fewer visits to their GPs than patients in the intervention arm. The frequency of visits in the TAU arm was the result of the GPs’ and patients’ joint assessments of care needs in each case. That frequency of patient visits did not differ markedly in the TAU arm is likely attributable to the current wide-spread emphasis on person-centred care in Swedish primary care.\nAll patients included had a new episode of mild/moderate depression, but the selection procedure did not differentiate between recurrent or first episode patients. There is some evidence that recurrent depression has slower remission [29], but according to the similar frequency of maintenance antidepressant medication in both intervention and TAU groups at inclusion, we can assume that the rate of recurrent depression disorder was the same in both arms. Further, the high number of outcomes, assessed at baseline and at 3, 6, and 12 months, necessitated multiple statistical comparisons, thus increasing the possibility of type I errors. Although the low p-values indicated that the few significant findings were not likely due to chance, the results need further confirmation in other studies. A limitation is also that we did not use more advanced statistical models to account for the correlation between patients treated by the same GP and to handle all observations of the patient instead of pairwise comparisons (e.g. mixed models).\n\nBODY.DROP OUTS:\nAmong the study drop outs, 37 individuals belonged to the intervention group and 48 to the TAU group. At the 3 and 6 month follow-up, participation rate was somewhat lower in the control group, but at the 12 month follow up, participation rate was almost the same in intervention and control groups; 70% and 69%, respectively (Fig. 1).\n\nBODY.GENERALISABILITY:\nThe study population represented patients treated by GPs for mild and moderate depression in primary health care. 
It included patients from health care centres in urban and rural areas and areas characterised by differing socioeconomic status. Health care is heavily subsidised in Sweden; patients pay a small per-visit fee and their yearly total co-payment is capped at approximately £100. The study results can be generalised to other countries with similar health care systems.\nThe RCT had a representative selection among the Swedish speaking population. One could argue that the exclusion of non-speaking Swedish patients may have limited the study’s generalisability. Unfortunately, it was not possible to translate and validate the instruments or to use an interpreter at every contact with study staff.\nIn this study, we chose to use MADR-S as the basis for patient education and collaborative care. These two factors can facilitate better depression outcomes [29–31]. Depression is complex, and the outcome of treatment often depends on the GP’s ability to connect with the patient during the consultation and her/his ability to choose proper treatment in agreement with the patient. Use of self-rating scales may function as one piece of the depression treatment puzzle, but as in many difficult jigsaw puzzles, more than one piece is needed to complete the picture. Thus, the Swedish National Board of Health and Welfare, i.e. the body that establishes the national guidelines for depression treatment, has expressed interest in whether it is meaningful to routinely use self-rating scales for patients with mild to moderate depression in primary care. We chose to use MADRS-S because it is the most commonly used depression self-rating scale in Sweden. About one third of all GPs state that they sometimes use MADRS-S, although not regularly [13, 32]. MADRS-S is designed to be sensitive to change and is often used in RCT studies that evaluate antidepressant treatment [12].\nThe study shows that use of self-rating scales during regular visits facilitates adherence to pharmacological treatment according to guidelines, in that once medication with antidepressants has been initiated, this treatment should be continued for at least 6 months [5]. On the other hand, the study also shows that treatment as usual, as it is practiced in the Swedish primary care of today, in most cases not including the use of self-rating scales, is just as efficient concerning reduction of depressive symptoms, perceived quality of life and general health, as well as sick leave frequency and duration, which is supported by the results from several international studies [33].\n\nBODY.STRENGTHS AND WEAKNESSES IN RELATION TO OTHER STUDIES:\nTo our knowledge there have been few or no studies that have evaluated the effect of recurrent use of self-rating scales regarding depression in relation to the above described outcomes.\nUse of antidepressants for the period mandated by current guidelines was associated with neither an improvement nor a worsening of patients’ symptoms during the one-year study period. However, using antidepressants for the length of time called for in guidelines has been shown to reduce risk of relapse [34].\n\nBODY.CLINICAL IMPORTANCE OF THE STUDY:\nMore discussion is needed as to how self-rating scales might be best implemented in primary care, how to manage depressed patients using these instruments, and how to integrate this into the consultation. 
The feasibility of meaningful and person-centred use of self-rating scales, continuity, and accessibility in the care of mild to moderate depression in the clinical setting is of great interest. Engaging the patient with a cognitive and person-centred approach and with regular consultations and follow-ups is a way to empower the patient, give the patient knowledge and tools to handle depression, follow his or her illness course, and create a basis for regaining of function [2, 6, 30]. Continuity and close regular follow-ups facilitate care that is adjusted to the individual's explicit needs and requirements and is an absolute condition for person-centred care [35]. Requirements for treatment are also reached in cooperation and collaboration with the patient, facilitating understanding and communication [36]. The continuous use of self-rating scales in the treatment of primary care patients with depression might support this process of care.\n\n\nBODY.CONCLUSIONS:\nWhen GPs used a depression self-rating scale in recurrent consultations, patients more often continued antidepressant medication according to guidelines, compared to TAU patients. However, concerning outcomes in terms of depression symptoms, quality of life, overall psychological well-being, consumption of care, and sick leave the use of self-rating scales did not seem to increase treatment effects compared to the usual treatment provided in primary care. Thus, the use of depression self-rating scales should perhaps not be mandatory in primary health care but rather left to the discretion of the GP and the patient.\n\n**Question:** Compared to treatment as usual (TAU) what was the result of person-centred general practitioners (GP) consultations on depression severity reduction, remission rate, change in quality of life, psychological well-being, sedative prescriptions, sick leave?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
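The two columns that close each row above (the assistant completion and the Label) are meant to carry one of the three answer classes that every prompt enumerates ("significantly increase", "no significant difference", "significantly decrease"). A small per-row consistency check is sketched below; it assumes each row is available as a dict with id, prompt, completion, and Label fields, which is inferred from the rendering of this export rather than from any documented loader, so treat the field names and shapes as assumptions.

```python
# Sketch of a per-row consistency check for records shaped like the rows above.
# Assumed row shape (inferred from this export, not a documented schema):
#   {"id": str, "prompt": [...], "completion": [{"role": "assistant", "content": str}], "Label": str}

ALLOWED_LABELS = {
    "significantly increase",
    "no significant difference",
    "significantly decrease",
}

def check_row(row: dict) -> list:
    """Return a list of problems found in one row; an empty list means the row looks consistent."""
    problems = []

    label = str(row.get("Label", "")).strip()
    if label not in ALLOWED_LABELS:
        problems.append(f"Label {label!r} is not one of the three allowed classes")

    try:
        answer = row["completion"][0]["content"]
    except (KeyError, IndexError, TypeError):
        problems.append("completion is not a one-element assistant message list")
        return problems

    # Completions are rendered as "**Answer:** <class>"; strip the prefix before comparing.
    answer_text = answer.replace("**Answer:**", "").strip()
    if answer_text not in ALLOWED_LABELS:
        # Near-miss strings such as "significantly increased" (trailing "d") are flagged here.
        problems.append(f"completion answer {answer_text!r} is not an allowed class")
    if answer_text != label:
        problems.append("completion answer and Label column disagree")

    return problems

# Example with a consistent, minimal row (id and prompt content are dummies):
row = {
    "id": "0",
    "prompt": [{"role": "user", "content": "..."}],
    "completion": [{"role": "assistant", "content": "**Answer:** no significant difference\n"}],
    "Label": "no significant difference",
}
print(check_row(row))  # -> []
```

Running such a check during curation catches formatting drift between the completion text and the Label column before the rows are used for exact-match evaluation.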
13819
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nUse of a self-rating scale to monitor depression severity in recurrent GP consultations in primary care – does it really make a difference? A randomised controlled study\n\n\n\nABSTRACT.BACKGROUND:\nLittle information is available about whether the use of self-assessment instruments in primary care affects depression course and outcome. The purpose was to evaluate whether using a depression self-rating scale in recurrent person-centred GP consultations affected depression severity, quality of life, medication use, and sick leave frequency.\n\nABSTRACT.METHODS:\nPatients in the intervention group met their GP regularly at least 4 times during the 3 months intervention. In addition to treatment as usual (TAU), patients completed a self-assessment instrument (Montgomery-Asberg Depression Rating Scale) on each occasion, and then GPs used the completed instrument as the basis for a person-centred discussion of changes in depression symptoms. The control group received TAU. Frequency of visits in the TAU arm was the result of the GPs’ and patients’ joint assessments of care need in each case.\nDepression severity was measured with Beck Depression Inventory-II (BDI-II), quality of life with EQ-5D, and psychological well-being with the General Health Questionnaire-12 (GHQ-12). Data on sick leave, antidepressant and sedatives use, and care contacts were collected from electronic patient records. All variables were measured at baseline and 3, 6, and 12 months. Mean intra-individual changes were compared between the intervention and TAU group.\n\nABSTRACT.RESULTS:\nThere were no significant differences between the intervention and control group in depression severity reduction or remission rate, change in quality of life, psychological well-being, sedative prescriptions, or sick leave during the whole 12-month follow-up. However, significantly more patients in the intervention group continued antidepressants until the 6 month follow-up (86/125 vs 78/133, p < 0.05).\n\nABSTRACT.CONCLUSIONS:\nWhen GPs used a depression self-rating scale in recurrent consultations, patients more often continued antidepressant medication according to guidelines, compared to TAU patients. However, reduction of depressive symptoms, remission rate, quality of life, psychological well-being, sedative use, sick leave, and health care use 4-12 months was not significantly different from the TAU group. These findings suggest that frequent use of depression rating scales in person-centred primary care consultations has no further additional effect on patients’ depression or well-being, sick leave, or health care use.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov Identifier: NCT01402206. 
Registered June 27 2011(retrospectively registered).\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi:10.1186/s12875-016-0578-9) contains supplementary material, which is available to authorized users.\n\nBODY.BACKGROUND:\nDepression is a leading cause of disability and affects 10 to 15% of the population of most countries in the world [1, 2]. A majority of patients with depression are treated in primary care, and approximately 75% of antidepressants are prescribed by general practitioners (GPs) [3, 4]. The usual treatment for depression in Swedish primary care, as in primary care in other countries, depends upon symptom severity. Most people treated for depression in primary care receive one or more of the following: early reassessment (watchful waiting) and symptom monitoring by their GP, cognitive behavioural therapy or other psychotherapy, and antidepressant medication [5, 6].\nDepression guidelines recommend that GPs regularly use self-rating scales to evaluate and monitor symptoms in patients with depression [5, 6]. These guidelines, however, have been based largely on clinical expertise rather than on the results of randomised controlled trials [3]. Furthermore, most self-rating scales recommended for detecting and monitoring depression were designed in psychiatric care settings and have not been adapted for use in primary care [3]. Little information is thus available about whether the use of such instruments in primary care affects depression symptoms, treatment, rehabilitation, or recovery [7]. We know that using such scales for screening does not improve detection or management of depression in settings such as primary care [8]. However, we do not know whether using a self-rating scale to follow the course of depression in person-centred primary care consultations improves treatment and outcomes of depression.\nThe depression rating scale most commonly used in Swedish primary health care is the Montgomery-Asberg Depression Rating Scale [9]. However, its use has mainly been evaluated in psychiatric settings [9, 10]. MADRS-S is sensitive to changes in symptoms of depression and is therefore a valuable tool in person-centred consultations [10–12]. However, a qualitative study based on interviews with Swedish GPs showed that many GPs thought that MADRS-S was of limited utility in consultations [13]. It is thus still unclear whether or not future guidelines should recommend mandatory use of depression self-rating scales in primary health care.\nThe aim of the current study was to evaluate in a randomised control trial whether using a depression rating scale (MADRS-S) in recurrent person-centred GP consultations affected depression severity, quality of life, overall psychological well-being, antidepressant and sedative use, sick leave, and health care use in a long time perspective.\n\nBODY.METHODS:\nThe PRI-SMA study (PRImary care Self-assessment Montgomery-Asberg) was a multicentre, controlled trial that took place at primary health care centres (PHCCs) and was randomised at the GP level. The GPs randomised to the intervention used a patient self-rating scale at recurrent monthly patient visits in addition to providing usual care. The GPs randomised to the control condition provided treatment as usual (TAU). The intervention lasted for 3 months. 
Adult patients who presented with a recent depressive episode (mild/moderate depression) at the PHCC were recruited to the study by the GPs.\nThe trial took place at 22 Swedish PHCCs between March 2010 and December 2013. All 98 PHCCs in the region were invited to participate in the intervention; 22 agreed to participate.\nBODY.RANDOMISATION OF GPS AT PRIMARY HEALTH CARE CENTRES:\nAll GPs took part in an information meeting about the study. All GPs also met with the study leaders when the leaders visited each participating PHCC at the time of the intervention start-up at that PHCC. Before the intervention started, the GPs at each PHCC were randomised to either intervention treatment or TAU. All GP names were written on slips of paper and mixed in a container, and an administrative employee blinded to the aim of the trial drew names. The GP whose name was first drawn was assigned to the intervention group, the GP whose name was drawn second was assigned to the control group, and so on until all names were drawn.\n\nBODY.STUDY PROCEDURE:\nPatient recruitment took place at one PHCC at a time. During the 2 week recruitment period, a research nurse worked full-time at the PHCC to facilitate patient recruitment, collect patient data, and support PHCC staff.\n\nBODY.INCLUSION OF PATIENTS:\nStudy participants were patients aged 18 and up who visited the PHCCs and were identified and diagnosed by a GP with a new episode of mild/moderate depressive disorder [14].\nDuring the 2 week recruitment period, all randomised GPs who met patients they suspected of having depression used the PRIME-MD diagnostic instrument to confirm or reject the diagnosis of depression in accordance with DSM-IV criteria [14]. All patients who fulfilled the diagnostic criteria for depressive disorder, i.e. mild to moderate (BDI score ≤36), were asked if they would like to participate in the study.\nInclusion criteria were: written informed consent, age ≥18 years, diagnosed with mild to moderate depressive disorder and either not prescribed antidepressant medication or had no changes in antidepressant medication during the preceding 2 months.\nExclusion criteria were: lack of written informed consent, antidepressant medication introduced or changed during the 2 months prior to baseline, diagnosed with severe depressive disorder (BDI-II >36, confirmed by diagnostic procedure by GP), diagnosed with severe mental disorder (i.e., bipolar disorder, antisocial personality disorder, psychosis, substance use disorder, or other serious mental disorder), suicidal ideation or earlier suicide attempts, did not speak or understand Swedish, and/or had cognitive disabilities that made it difficult or impossible to complete the assessment instruments, including MADRS-S.\n\nBODY.INTERVENTION:\nThe intervention consisted of using a patient depression self-rating scale (MADRS-S) in recurrent monthly consultations during the 3-month intervention. Patients made 4 visits to their GPs, at which time they completed MADRS-S to monitor changes in their depressive symptoms that were then discussed in the person-centred consultation [15, 16]. MADRS-S was used as a supplement to, rather than as a substitute for, TAU.\nThe GPs randomised to the group that provided the intervention received four hours of guidance about how to include the results of MADRS-S in the person-centred consultation. The intervention GPs also received a video CD with the same pre-recorded information. 
The person-centred consultations involved patients and GPs collaborating to increase patients’ ability to manage their depression. The guidance therefore included a reminder to the GPs that MADRS-S was used for the sake of the patients rather than the GPs.\n\nBODY.TREATMENT AS USUAL:\nThe GPs randomised to the group that would provide TAU were instructed to manage patients with depression the same way they usually did (but with the addition of the diagnostic procedure in the initial consultation). In general Swedish GPs are very knowledgeable about and use of person-centred consultation methods in their daily practice of the kind described in Maguire 2002 [16].\nThree months after baseline, patients in the control group were followed up in an appointment with a nurse at the PHCC.\n\nBODY.DATA COLLECTION PROCEDURE:\nA study nurse collected data from participants in the intervention and control groups during the first visit (baseline), at a follow-up visit to the PHCC at the end of the intervention (3 months after baseline), and by postal questionnaires 6 and 12 months after baseline (see Additional files 1 and 2).\n\nBODY.MEASURES:\nBODY.BACKGROUND VARIABLES:\nBackground information was obtained by a questionnaire at the patients’ first visit to the PHCC. Age was measured as age in years, gender as female or male. Marital status was dichotomised as single or married/cohabiting. Children living at home was dichotomised as participants who had children <18 years living at home vs. participants not having children living at home. Educational level was identified as as lower educational level (primary school or vocational school), middle educational level (high school) and higher educational level (college or university). Employment status was dichotomised as working/studying vs. unemployed/retired.\nPlace of birth was dichotomised as born in one of the Nordic countries vs. born outside the Nordic countries. Smoking status was categorised as “non-smokers” (participants who had not smoked during the last year), “sometimes smokers” (those who only smoked a single cigarette per week or more seldom), and “smokers” (more frequent smoking). Leisure-time physical activity was categorised as i) no or almost no physical activity during leisure time, ii) activity ≥4 hrs. per week, and iii) intensive (several times a week and vigorously).\n\nBODY.OUTCOMES:\nDepression severity; Assessed before (at baseline) and after the intervention with the Beck Depression Inventory-II (BDI-II) and categorised as mild (BDI-II score >13-19), moderate (BDI-II score 20-28), and high moderate (BDI-II score, 29-36) [17, 18].\nRemission; Identified as a BDI-II score ≤ 13 at 3-month follow up.\nQuality of life; Measured with the standardised EQ-5D instrument [19, 20].\nOverall psychological well-being; Assessed with the 12-item General Health Questionnaire (GHQ-12) (response scale 0-3) [21].\nPrescriptions for antidepressants; Information on this measure was obtained from patients’ electronic patient records (EPRs) and from responses to the patient questionnaires and dichotomised as yes (used) or no (did not use). 
Guidelines indicate that treatment should not be stopped before 6-12 months after recovery [5].\nPrescriptions for sedatives; Information on prescriptions for sedatives was obtained from patients’ electronic patient records (EPRs) and from responses to the patient questionnaires and dichotomised as yes (used) or no (did not use).\nSick leave; Information on sick leave was obtained from EPRs and responses to the patient questionnaires. Sick leave was used both as a dichotomous variable (yes or no) and as number of days of sick leave.\nHealth care use; This measure comprised information on number of visits to GPs, number of visits to nurses, number of visits to psychologists/psychotherapists, and total number of visits to the PHCC. Information was obtained from EPRs by study personnel and divided into the number of visits between 0 and 3 months, and 4 and 12 months.\n\n\nBODY.STATISTICAL METHODS:\nThe student’s t-test (paired) was used to compare mean intra-individual changes in the intervention and TAU groups. Two-sided Chi-square or Mann Whitney U-tests were used to compare frequencies. Statistical significance was set at p < 0.05. Logistic regression models were used to examine whether changes in the BDI-II, EQ-5D, and GHQ-12 variables from baseline to 3, 6 and 12 months follow-ups were associated with sex, age, education, antidepressant medication, and group assignment (intervention/TAU).The results are presented as odds ratios (OR) with 95% confidence intervals (CI). All analyses were performed in SPSS. All analyses were conducted on an intention to treat basis.\n\nBODY.POWER CALCULATION:\nPower was calculated for the BDI-II-score outcome on the basis of empirical experience of the variation in this factor, taking into account that GPs were the unit of randomisation. The power estimate was based on the assumption of an alpha error = 0.05 and a beta error = 0.20 (power = 0.80). Given an expected effect difference of 2 units (BDI-II) over time between the intervention and TAU groups, we calculated that we would need 105 patients in each group to achieve sufficient statistical power. To safeguard for dropouts, we recruited 20% more patients than the power calculation indicated that we needed.\n\nBODY.ETHICS:\nThe Regional Ethical Review Board in Gothenburg, Sweden approved this study (Dnr 746-09; T612-10). Prior to inclusion and after receiving oral and written information about the study, participants provided written informed consent. The trial was registered at ClinicalTrials.gov (Identifier: NCT01402206) [22].\n\n\nBODY.RESULTS:\nOf the 91 participating GPs, 45 were randomised into a group that provided the intervention and 46 into a group that provided TAU. The participating GPs enrolled 258 patients, and the flow of the patients through the study is shown in Fig. 1 [23]. There was no significant difference in participation rate in the patients in the intervention and control groups at the 3-, 6- or 12-months follow-ups.Fig. 1PRI-SMA-Study (CONSORT Flowchart)\n\n\nTable 1 presents the background characteristics of the study population. 
No significant differences were found between the participants in the intervention and TAU groups at baseline.Table 1Baseline characteristics of participants in the PRI-SMA trial (n = 258)\nCharacteristicsTotal\nn (%)Intervention\nn (%)TAU¤\n\nn (%)\np\nParticipants\n258\n125\n133\n\nAge (years, mean )\n43.48\n44.84\n42.19\n0.2\n men\n76 (29.5)\n31 (24.8)\n45 (33.8)\n\n women\n182 (70.5)\n94 (75.2)\n88 (66.2)\n0.1\nMarital status\n\n\n\n\n single\n118 (45.7)\n61 (48.8)\n57 (42.9)\n\n married/cohabiting\n140 (54.3)\n64 (51.2)\n76 (57.1)\n\n children <18 years at home\n82 (43.2)\n36 (40.4)\n46 (45.5)\n0.5\nLower educational level\n34 (13.2)\n17 (13.7)\n17 (12.8)\n\nMiddle educational level\n115 (44.7)\n61 (49.2)\n54 (40.6)\n\nHigh educational level\n108 (42.0)\n46 (37.1)\n62 (46.6)\n0.3\nEmployment\n\n\n\n\n working/studying\n181 (80.8)\n90 (81.1)\n91 (80.5)\n\n unemployed/retired\n43 (19.2)\n21 (18.9)\n22 (19.5)\n0.9\nBorn outside the Nordic countries\n41 (16)\n19 (15.3)\n22 (16.7)\n0.8\nSmoking (yes or sometimes)\n75 (29.4)\n36 (29.3)\n39 (29.5)\n0.9\nLeisure-time physical activity\n\n\n\n\n never\n109 (42.4)\n57 (46)\n52 (39.1)\n\n at least 4 hrs/week\n124 (48.2)\n55 (44.4)\n69 (51.9)\n\n intensive\n24 (9.3)\n12 (9.7)\n12 (9,0)\n0.5\nDepression\n\n\n\n\n mild (BDI-II 12-19)\n32 (13)\n16 (13.6)\n16 (12.4)\n\n moderate (BDI-II 20-28)\n82 (33.2)\n39 (33.1)\n43 (33.3)\n\n high moderate (BDI-II 29-36)\n68 (27.5)\n35 (29.7)\n33 (25.6)\n0.9\n\n¤\nTAU treatment as usual\n\n\nThere was a statistically significant difference between participants and drop-outs during the study concerning age (mean age 44.3 in participants, mean age 37.3 in drop-outs, p = 0.02), gender (male 14/62, 22.6%, female 16/166, 9.6%, p = 0.034), and ethnicity (born in Sweden 21/194, 10.8%, and born outside Sweden 9/32, 28.1%, p = 0.035)\nFigures 2, 3, and 4 shows depression symptoms (BDI-II), quality of life (EQ-5D), and overall psychological well-being (GHQ-12) in intervention and TAU groups at baseline and 3, 6, and 12 months (boxplots). All patients who participated in the study, both those in the intervention and those in the TAU group, improved in all three variables between baseline and 3 months but there were no significant differences in mean changes (mean of intra-individual ∆) between the groups. Improvements were substantial in both groups; at the 3-month follow-up, nearly half the patients no longer had depression (BDI <13) (49% in the intervention and 47.3% in the TAU group, p = 0.89). We also performed logistic regression analyses. The models showed that at the 12-month follow-up, older age was associated with improved BDI-II score (OR 1.05, CI 1.01-1.99), and a higher level of education was associated with improved EQ-5D score (OR 3.9, CI 1.77-8.73). The factors added to the model (sex, age, education, antidepressant medication, and participation in the intervention or TAU group) explained the same amount of variation in both the intervention and TAU groups. Thus, the difference between the groups remained non-significant (data not shown). No adverse events were reported from intervention or TAU group.Fig. 2Depression severity (BDI-II values) in the intervention and TAU groups. Boxplots for baseline, 3-months follow up, 6-months follow up, and 12-months follow up. The outcome variable is presented as box plot with medians, minimum and maximum values and lower and upper quartiles. Y-axis: BDI-II, X-axis: time 0, 3, 6, 12 months. 
Outliers (circle) are cases with values between 1.5 and 3 times the interquartile range (5 cases with BDI-II ≥50 in patients with difficulties in the Swedish language, in whom the complementary diagnostic procedure by the GP resulted in a diagnosis of moderate depressive disorder)\n\nFig. 3 Quality of life (EQ-5D values) in the intervention and TAU groups. Boxplots for baseline, 3-months follow up, 6-months follow up, and 12-months follow up. The outcome variable is presented as a box plot with medians, minimum and maximum values and lower and upper quartiles. Y-axis: EQ-5D, X-axis: time 0, 3, 6, 12 months. Outliers (circle) are cases with values between 1.5 and 3 times the interquartile range, extremes (star) are cases with values more than 3 times the interquartile range\n\nFig. 4 Overall psychological well-being (GHQ-12 values) in the intervention and TAU groups. Boxplots for baseline, 3-months follow up, 6-months follow up, and 12-months follow up. Outliers (circle) are cases with values between 1.5 and 3 times the interquartile range\n\n\nTable 2 shows differences between the intervention and TAU groups in the percent of patients who had prescriptions for antidepressants and sedatives. At baseline, 22% of patients in the intervention group and 32% of patients in the TAU group were on maintenance antidepressant medication. After 3 months, the proportion of patients with prescriptions for antidepressant medication was 72% in both the intervention and the TAU group. At the 6-month follow-up, the proportion of patients taking antidepressants had dropped to 69% in the intervention group and 59% in the TAU group (p = 0.007). At the 12-month follow-up, there were no significant differences in the percent of patients who had prescriptions for antidepressants or sedatives in the intervention and TAU groups (Table 2).\n\nTable 2 Number and percent of patients who had prescriptions for antidepressants and sedatives and stated medication at baseline and 3-, 6-, and 12-months follow up, respectively, in the intervention and TAU¤ groups\n Antidepressants, Intervention (n = 125): baseline 27 (22); 3 months 90 (72); 6 months 86 (69)**; 12 months 74 (59)\n Antidepressants, TAU (n = 133): baseline 43 (32); 3 months 96 (72); 6 months 78 (59); 12 months 77 (58)\n Sedatives, Intervention: baseline 26 (21); 3 months 64 (51); 6 months 53 (42); 12 months 52 (42)\n Sedatives, TAU: baseline 32 (24); 3 months 73 (55); 6 months 56 (42); 12 months 51 (38)\nValues are n (%). ¤ TAU treatment as usual. ** Significantly higher proportion of patients in intervention group still on antidepressants after 6 months, p = 0.007\n\n\nTable 3 shows the mean number of days of sick leave by patient group. There were no significant differences in the percentage of patients on sick leave in the intervention and TAU groups between baseline and 3 months, or 4-12 months, or during the entire study period. The mean total duration of sick leave (days) was not significantly different between the intervention and TAU groups (Table 3).\n\nTable 3 Number and percent of individuals on sick leave, mean days of sick leave, and p-values for difference in mean days of sick leave between participants in intervention and TAU groups, based on information obtained from electronic patient records\n 0-3 months: Intervention 31 (25%), mean 63.1 days (SD 29.8); TAU 48 (36%), mean 55.8 days (SD 27.7); p = 0.221\n 4-12 months: Intervention 35 (28%), mean 100.8 days (SD 87.3); TAU 47 (35%), mean 102.7 days (SD 85); p = 0.922\n Total 0-12 months: Intervention 49 (39%), mean 124.8 days (SD 102.5); TAU 64 (48%), mean 123.3 days (SD 85.0); p = 0.942\n\n\nBODY.HEALTH CARE USE:\nMean number of visits at the PHCC, including visits to GP, nurse, psychologist/therapist and total visits to PHCC, was compared between intervention and TAU groups. 
All information was obtained from EPRs (Table 4). We also compared the total number of visits. There were no significant differences between the intervention and TAU groups regarding these outcomes, during the total 12-month follow-up period. However, during the 0-3 month period, the TAU group made significantly more visits to psychologists/psychotherapists than the intervention group and the intervention group made significantly more total visits to the PHCC than the TAU group.Table 4Number of patients’ contacts with GPs, nurses, and psychologists between baseline to 3 months, and 4 to12 months for intervention and TAU groups, based on information obtained from electronic patient records\nProfession\nType of contact\nIntervention 0-3 months m (SD)\nTAU 0-3 months m (SD)\n\np\n\n GP\n visit\n 3.44 (1.214)\n 2.59 (1.354)\n0.066\n Nurse\n visit\n 0.32 (0.829)\n 0.32 (0.875)\n0.878\n Psychologist/Therapist\n visit\n 0.40 (1.320)\n 0.89 (1.776)\n\n0. 0001\n\n Total visits to PHCC\n visit\n 4.16 (2.398)\n 3.81 (2.692)\n\n0.006\n\n\n\nIntervention 4-12 months m (SD)\nTAU 4-12 months m (SD)\n\np\n\n GP\n visit\n 2. 54 (2.337)\n 2.38 (2.595)\n0.301\n Nurse\n visit\n 0. 68 (1.484)\n 0.62 (1.551)\n0.581\n Psychologist/Therapist\n visit\n 0. 90 (2.504)\n 0.83 (1.908)\n0.401\n Total visits to PHCC\n visit\n 4. 11 (4.586)\n 3.83 (4.403)\n0.619\n\n\nIntervention total 0-12 months m (SD)\nTAU total 0-12 months m (SD)\n\np\n\n GP\n visit\n 8.26 (5.842)\n 7.64 (5.976)\n0.304\n Nurse\n visit\n 0.99 (1.978)\n 0.94 (2.088)\n0.967\n Psychologist/Therapist\n visit\n 1.30 (3.129)\n 1.73 (3.222)\n0.167\n Total visits to PHCC\n visit\n 8.26 (5.842)\n 7.64 (5.976)\n0.812\nBold numeral indicates statistically significant p-value\n\n\n\n\nBODY.DISCUSSION:\nIn this RCT, which evaluated use of a depression self-rating scale in recurrent person-centred GP consultations, more patients in the intervention than control group adhered to antidepressant treatment for the recommended 6 months. However, compared to TAU, there were no significant differences between the effects on depression symptom reduction, quality of life or overall psychological well-being, sedative use, sick leave, or health care use. These findings suggest that recurrent use of depression self-assessment scales in person-centred primary care consultations does have an effect on antidepressant medication adherence, but has no further additional effects, compared to the treatment usually provided in Swedish primary care.\nBODY.STRENGTHS AND WEAKNESSES OF THE STUDY:\nThe intervention was conducted in ordinary primary care rather than advertising for participants, which would have resulted in a sample that was not representative of patients in primary health care. The study compared the intervention with a TAU group rather than using a control group that consisted of patients on a waiting list. Studies of the effects of cognitive behavioural therapy show that being on a waiting list has a nocebo effect (i.e. results in worsening depression symptoms), which in turn makes intervention effects appear larger than they really are [24, 25]. We had access to all patients’ EPRs, a good source of information on prescriptions, sick leave, and health care use. In the current study, we had the opportunity to observe the course of depression symptoms, perceived quality of life, sick leave, and health care use for a full year after baseline. Another major strength is that a research nurse was stationed at each PHCC during the recruitment period. 
The nurse collected baseline data, gathered oral and written consent, and supported the staff at the site, functions that primary care personnel stress are important to make it easier to combine clinical work and research [26].We were well aware of the possible obstacles to conducting a RCT at a PHCC as described by Richter and Roy-Byrne [27, 28].\nWe randomised at the GP level. Randomisation at the patient level would have necessitated changing doctor for some patients, or GPs trained in the intervention would have provided the intervention to some patients but treatment as usual to others, increasing the possibility of contamination. We chose not to randomise at the PHCC level, as regional and socioeconomic differences between areas served by the PHCCs might have affected the results.\nAnother limitation was absence of a blinded control arm. The control GPs were aware that their colleagues at the same PHCC were providing the intervention, and this might have affected treatment as usual, even if they were requested to continue patient care as usual. This possible bias would support the notion of randomisation at PHCC level, but this would in turn increase the risk of e.g. selection effects. Another limitation is that the intervention of recurrent use of self-rating scales also included regular GP visits, and it is not possible to separate possible effects of the regular GP visits from effects of the patient’s frequent self-rating monitoring. However, during the 3-month intervention, patients in the TAU arm made only 0.85 fewer visits to their GPs than patients in the intervention arm. The frequency of visits in the TAU arm was the result of the GPs’ and patients’ joint assessments of care needs in each case. That frequency of patient visits did not differ markedly in the TAU arm is likely attributable to the current wide-spread emphasis on person-centred care in Swedish primary care.\nAll patients included had a new episode of mild/moderate depression, but the selection procedure did not differentiate between recurrent or first episode patients. There is some evidence that recurrent depression has slower remission [29], but according to the similar frequency of maintenance antidepressant medication in both intervention and TAU groups at inclusion, we can assume that the rate of recurrent depression disorder was the same in both arms. Further, the high number of outcomes, assessed at baseline and at 3, 6, and 12 months, necessitated multiple statistical comparisons, thus increasing the possibility of type I errors. Although the low p-values indicated that the few significant findings were not likely due to chance, the results need further confirmation in other studies. A limitation is also that we did not use more advanced statistical models to account for the correlation between patients treated by the same GP and to handle all observations of the patient instead of pairwise comparisons (e.g. mixed models).\n\nBODY.DROP OUTS:\nAmong the study drop outs, 37 individuals belonged to the intervention group and 48 to the TAU group. At the 3 and 6 month follow-up, participation rate was somewhat lower in the control group, but at the 12 month follow up, participation rate was almost the same in intervention and control groups; 70% and 69%, respectively (Fig. 1).\n\nBODY.GENERALISABILITY:\nThe study population represented patients treated by GPs for mild and moderate depression in primary health care. 
It included patients from health care centres in urban and rural areas and areas characterised by differing socioeconomic status. Health care is heavily subsidised in Sweden; patients pay a small per-visit fee and their yearly total co-payment is capped at approximately £100. The study results can be generalised to other countries with similar health care systems.\nThe RCT had a representative selection among the Swedish speaking population. One could argue that the exclusion of non-speaking Swedish patients may have limited the study’s generalisability. Unfortunately, it was not possible to translate and validate the instruments or to use an interpreter at every contact with study staff.\nIn this study, we chose to use MADR-S as the basis for patient education and collaborative care. These two factors can facilitate better depression outcomes [29–31]. Depression is complex, and the outcome of treatment often depends on the GP’s ability to connect with the patient during the consultation and her/his ability to choose proper treatment in agreement with the patient. Use of self-rating scales may function as one piece of the depression treatment puzzle, but as in many difficult jigsaw puzzles, more than one piece is needed to complete the picture. Thus, the Swedish National Board of Health and Welfare, i.e. the body that establishes the national guidelines for depression treatment, has expressed interest in whether it is meaningful to routinely use self-rating scales for patients with mild to moderate depression in primary care. We chose to use MADRS-S because it is the most commonly used depression self-rating scale in Sweden. About one third of all GPs state that they sometimes use MADRS-S, although not regularly [13, 32]. MADRS-S is designed to be sensitive to change and is often used in RCT studies that evaluate antidepressant treatment [12].\nThe study shows that use of self-rating scales during regular visits facilitates adherence to pharmacological treatment according to guidelines, in that once medication with antidepressants has been initiated, this treatment should be continued for at least 6 months [5]. On the other hand, the study also shows that treatment as usual, as it is practiced in the Swedish primary care of today, in most cases not including the use of self-rating scales, is just as efficient concerning reduction of depressive symptoms, perceived quality of life and general health, as well as sick leave frequency and duration, which is supported by the results from several international studies [33].\n\nBODY.STRENGTHS AND WEAKNESSES IN RELATION TO OTHER STUDIES:\nTo our knowledge there have been few or no studies that have evaluated the effect of recurrent use of self-rating scales regarding depression in relation to the above described outcomes.\nUse of antidepressants for the period mandated by current guidelines was associated with neither an improvement nor a worsening of patients’ symptoms during the one-year study period. However, using antidepressants for the length of time called for in guidelines has been shown to reduce risk of relapse [34].\n\nBODY.CLINICAL IMPORTANCE OF THE STUDY:\nMore discussion is needed as to how self-rating scales might be best implemented in primary care, how to manage depressed patients using these instruments, and how to integrate this into the consultation. 
The feasibility of meaningful and person-centred use of self-rating scales, continuity, and accessibility in the care of mild to moderate depression in the clinical setting is of great interest. Engaging the patient with a cognitive and person-centred approach and with regular consultations and follow-ups is a way to empower the patient, give the patient knowledge and tools to handle depression, follow his or her illness course, and create a basis for regaining of function [2, 6, 30]. Continuity and close regular follow-ups facilitate care that is adjusted to the individual's explicit needs and requirements and is an absolute condition for person-centred care [35]. Requirements for treatment are also reached in cooperation and collaboration with the patient, facilitating understanding and communication [36]. The continuous use of self-rating scales in the treatment of primary care patients with depression might support this process of care.\n\n\nBODY.CONCLUSIONS:\nWhen GPs used a depression self-rating scale in recurrent consultations, patients more often continued antidepressant medication according to guidelines, compared to TAU patients. However, concerning outcomes in terms of depression symptoms, quality of life, overall psychological well-being, consumption of care, and sick leave the use of self-rating scales did not seem to increase treatment effects compared to the usual treatment provided in primary care. Thus, the use of depression self-rating scales should perhaps not be mandatory in primary health care but rather left to the discretion of the GP and the patient.\n\n**Question:** Compared to treatment as usual (TAU) what was the result of person-centred general practitioners (GP) consultations on continued antidepressants until the 6 month?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
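The statistical methods in the record above report a power calculation (two-sided alpha = 0.05, power = 0.80, an expected 2-unit BDI-II difference, 105 patients per group) without stating the standard deviation it assumed. The sketch below reproduces that figure with the standard large-sample approximation under an assumed SD of about 5 BDI-II points; the SD value and the use of scipy are illustrative assumptions, not details taken from the trial.

```python
# Approximate per-group sample size for a two-sample comparison of means,
# using the large-sample normal approximation:
#   n_per_group = 2 * (z_(1 - alpha/2) + z_power)^2 * (sd / delta)^2
from scipy.stats import norm

alpha = 0.05   # two-sided type I error, as stated in the article
power = 0.80   # 1 - beta, as stated in the article
delta = 2.0    # expected between-group BDI-II difference, as stated in the article
sd = 5.2       # assumed BDI-II standard deviation -- NOT reported in the article

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_power = norm.ppf(power)           # ~0.84

n_per_group = 2 * (z_alpha + z_power) ** 2 * (sd / delta) ** 2
print(round(n_per_group))  # ~106, in line with the 105 per group reported
```

With the 20% drop-out buffer mentioned in the article, roughly 2 × 105 × 1.2 ≈ 252 patients would be sought, which is consistent with the 258 patients actually enrolled.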
13823
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nComparison of Lactobacillus Sporogenes plus mineral oil and mineral oil alone in the treatment of childhood functional constipation\n\n\n\nABSTRACT.BACKGROUND:\nFunctional constipation is one of the most prevalent childhood gastrointestinal disorders. We evaluated effects of adding a probiotic to mineral oil in the treatment of functional constipation in children.\n\nABSTRACT.MATERIALS AND METHODS:\nThis controlled trial was conducted on 60 children (2 to 14 years old) with functional constipation (Rome III criteria). Children were allocated to receive the synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) or the mineral oil alone for two months. Symptoms of constipation including defecation frequency, stool form, strain and pain at defecation, incomplete evacuation and soiling were assessed and compared before and after the intervention. After the treatment period, the two groups were also compared with regards to subjective global assessment of improvement.\n\nABSTRACT.RESULTS:\nAfter the treatment, stool frequency increased in both groups (P < 0.001), with greater increase in synbiotic + mineral oil group (P = 0.001). Frequency of hard/very hard stool and frequency of painful defecation decreased similarly in both groups (P < 0.001). Straining at defecation, incomplete evacuation, and soiling decreased in both groups (P < 0.001), but more decrease was seen in the synbiotic + mineral oil group (P < 0.05). Finally, there was a better global improvement in the synbiotic + mineral oil group (P < 0.05). No severe side-effects were observed in any group.\n\nABSTRACT.CONCLUSION:\nAdding the synbiotic Lactol® (containing Lactobacillus Sporogenes) to mineral oil can increase the improvement in the constipation symptoms of children without specific side-effects.\n\nBODY.INTRODUCTION:\nConstipation is one of the most prevalent gastrointestinal disorders among children with the reported prevalence varying from about 1-8% in the pediatric population. It accounts for 3-5% of visits to pediatricians and up to 25% of visits to pediatric gastroenterologists.[1] Disease burden and estimated healthcare costs are considerable and it can significantly impair quality of life of patients in both mental and physical health aspects.[23] In most of the cases (90-95%), no organic cause is found for constipation leading to the diagnosis of functional constipation in the child.[4] Thus, treatment strategy mainly involves educating the family and dietary modification. Disimpaction and maintenance oral/rectal medications are administered when needed. 
Common current medications for maintenance therapy include mineral oil, lactulose, milk of magnesia, polyethylene glycol powder, and sorbitol.[56] Because the etiology of the disease is not clearly understood, there is no specific treatment for functional constipation, and despite various treatments, only about half of the children with functional constipation demonstrate long-term improvement.[5]\nThere is growing interest about the efficacy of probiotics in the treatment of functional as well as organic gastrointestinal disorders. Probiotics have been reported to be effective in the treatment of various disorders including inflammatory bowel disease, irritable bowel syndrome, travelers’ diarrhea, and constipation. Because colonic microflora can affect the motility of the colon, an imbalance in the colonic microflora is proposed to have a role in the pathophysiology of chronic constipation. Also, some kind of probiotics, such as bifidobacteria and lactobacilli, by producing acids (lactic, acetic), result in lowering the pH of the colon, which in turn can enhance peristalsis of the colon and decrease colonic transit time.[78]\nSeveral reports are available on the beneficial effects of probiotics on symptoms of constipation, mainly in adults.[8] Studies showed that probiotic strains, such as B. Infantis, increase defecation frequency and soften stools. However, limited information is available on the efficacy and safety of probiotics in the treatment of children with constipation and available studies have had controversial results.[8] We evaluated the effects of adding a synbiotic containing Lactobacillus Sporogenes strain to mineral oil in the treatment of functional constipation in children.\n\nBODY.MATERIALS AND METHODS:\nBODY.PATIENTS AND SETTING:\nThis controlled clinical trial was conducted on children referring with complaint of constipation to an outpatient clinic of pediatric gastroenterology, at the Alzahra University Hospital, Isfahan city (IRAN), from 2010 to 2011. Children 2 to 14 years of age diagnosed to have functional constipation based on the Rome III criteria were included consecutively.[9] Those with immunocompromised condition or other severe diseases, and those receiving antibiotic in the previous four weeks were not included.[10] Exclusion criteria included experiencing severe side-effects, gastroenteritis, or receiving antibiotic for any reason during the study period. Considering alpha = 5%, study power = 80%, and expecting at least one score change in stool consistency scale, the sample size was calculated as 25 subjects in each group. The study was approved by the ethics committee of the Azad University, Najaf Abad Branch, and informed consent was obtained from all children's parents. Also, the study was registered in the Iranian Registry of Clinical Trials (IRCT201207311579N1).\n\nBODY.INTERVENTION:\nChildren were alternately allocated to the synbiotic and control groups. In the synbiotic group, subjects received synbiotic (Lactol®, Bioplus Life Sciences Pvt. 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) for eight weeks. The synbiotic used in this study was composed of the following; Lactobacillus Sporogenes (15 × 107 Spores), Fructo-Oligosaccharides, Microcrystalline Cellulose, Sodium starch Glycolate, Povidone, Hypermellose Stearate, Sillicon Dioxide, and Propylene Glycol. Subjects in the control group received mineral oil (Paraffin 1 ml/kg/d) for eight weeks. 
Toilet training and dietary advice were equally administered for both groups.[11]\n\nBODY.ASSESSMENTS:\nDiagnosis of functional constipation was made according to the Rome III criteria by a pediatric gastroenterology specialist. The criteria were two or more of the following: 1) Two or fewer stools per week, 2) hard or very hard stools or painful stool, 3) Passage of very large stool, 4) Stool retention “once a week” or more often, 5) History of large fecal mass in rectum, and 6) Soiling “once a week” or more often. Those who fulfiled the criteria for irritable bowel syndrome were not included into the study.[9] A trained general physician interviewed with each subject, and a questionnaire including demographic data and symptoms of constipation, based on the Questionnaire on Pediatric Gastrointestinal Symptoms-Rome III Version, was completed.[12] Major symptoms included defecation frequency (1 = two times a week or less to 5 = more than three times a day), stool consistency (1 = very hard to 5 = watery), stool retention (yes/no), and painful defecation (yes/no). Other symptoms included urgency, straining, passing mucus, and feeling of incomplete evacuation which were scored from 0 (never) to 4 (always) and soiling which was scored from 0 (never) to 5 (every day). Subjects were interviewed at the end of the intervention (8th week) with the same questionnaire including compliance and side-effects as well, and a subjective global improvement (SGI) scale which was scored from 0 (significantly worsen) to 7 (completely improved).[13]\n\nBODY.STATISTICAL ANALYSES:\nData were analyzed using SPSS software for windows Version 16.0. Statistical tests included independent sample t-test for comparison of parametric data and Mann-Whitney test for comparison of non-parametric data between the two groups. Multivariate analysis using regression model was applied to find factors independently associated with global improvement. A P value of <0.05 was considered significant in all analyses.\n\n\nBODY.RESULTS:\nDuring the study period 30 children were included in each group (mean age = 5.0 ± 2.9 years, 55% female); all of them completed the study [Figure 1]. The two groups were similar in age, gender, and baseline symptoms except stool frequency which was slightly lower in the control group (P = 0.024) [Table 1]. After the study, there was a significant improvement in most symptoms of constipation in both the synbiotic and control groups. However, improvements were greater in the synbiotic group regarding defecation frequency (even after controlling for baseline difference by ordinal regression test), straining, sense of incomplete evacuation, and the frequency of fecal incontinence (P < 0.05 to <0.001) [Table 2]. 
According to the SGI scale, symptoms were moderately and significantly improved in 13.3% and 86.6% of the subjects respectively, in the symbiotic group, while in the control group, improvement of symptoms was moderate in 36.6% and significant in 40% of subjects, which showed a significantly better global improvement in the synbiotic group (P < 0.001) [Table 2].\nFigure 1Patients flow diagram\nTable 1Comparison of demographic data and baseline clinical characteristics between the two groups\nTable 2Comparison of symptoms at the end of intervention between the two groups\nBODY.MULTIVARIATE ANALYSIS:\nAccording to a difference between the two groups in baseline defecation frequency, we conducted a multivariate analysis to find factors associated with SGI results by controlling age, gender, kind of intervention, and baseline symptoms. Among the mentioned factors, only receiving synbiotic plus mineral oil compared with mineral oil alone was significantly associated with more global improvement (estimate coefficient = 2.359, CI95%: 0.862 to 3.857, P = 0.002) [Table 3].\nTable 3Regression ordinal model on possible predictors of global improvement\n\nBODY.SIDE-EFFECTS:\nNo severe side-effect was reported in any group. Reported side-effects were bloating (3.3%), diarrhea (3.3%), mild epigastric pain (3.3%), vomiting (3.3%), and abdominal pain plus itching and scar in the anal region in one case (3.3%). No side-effects were reported in the intervention group. There was no significant difference between the two groups in this regard.\n\n\nBODY.DISCUSSION:\nThe aim of the present study was to evaluate the effectiveness of adding probiotic to the conventional treatment of childhood functional constipation. According to the results, adding a synbiotic which contained Lactobacillus Sporogenes to mineral oil resulted in more improvement in several symptoms of constipation compared with mineral oil alone. These results were also observed by the global improvement scale. Multivariate analysis with control of baseline symptoms and age and gender also approved these results.\nLimited data are available on the efficacy of probiotics in the treatment of constipation in children. In one study conducted by Bekkali and colleagues, children with functional constipation received a mixture of probiotics composed of Bifidobacteria and Lactobacili for 4 weeks. Results showed that the average of defecation frequency increased from 2/week at the beginning to 3.8/week at the end of the therapy. In contrast to our results, in the study by Bekkali et al., no significant change was observed in stool form. However, similar to our results, fecal incontinence decreased and no important side-effect was observed.[14] In another study by Bu and colleagues, children with constipation were allocated to three groups of Lactobacillus casei rhamnosus, Magnesium Oxide, and placebo. The results of this study showed that the probiotic was as effective as the magnesium oxide, while it did not have its possible side-effects.[15] In a study conducted by Khodadad and colleagues, children with constipation received Paraffin (1.5 ml/kg/d), synbiotic composed of Lactobacillus and Bifidobacteria, or a combination of synbiotic and Paraffin. The results showed that defecation frequency increased significantly in the combination group compared with the paraffin alone. But no beneficial effect was observed for stool form, incontinence, or painful defecation. 
Similar to our study, no side-effect was reported in the combination group while leakage of paraffin was a reported side-effect in those who received paraffin alone.[16] In contrast to the above mentioned trials showing the beneficial effects of probiotics in the treatment of childhood constipation, Banaszkiewicz and colleagues did not find beneficial effects of adding Lactobacillus GG to Lactose 70% over Lactose alone.[17] Differences among the results of available reports could be due to differences in the probiotic that was used; some studies used mixture compound while others used a single-strain probiotic. Therefore, further trials are needed in this regard. Available reports, however, are in favor of using probiotics as an adjunctive to other conventional therapies of childhood constipation. The possible mechanisms by which probiotics are beneficial in the treatment of constipation are not well studied. Available reports are focused on colonic transit time; however, more studies are needed in this regard.[8]\nThere are some limitations of our study. There was a difference between the two groups in baseline characteristics possibly because of alternate allocation which is not qualified as randomization; however, we conducted a multivariate analysis to control the covariates and the results confirmed the beneficial effects of adding synbiotic to mineral oil. Also, it was ethically not possible to run a placebo-controlled group which received placebo alone; however, it was possible to add a placebo in the mineral oil group and our study has an important limitation in this regard. Therefore, our study was not blinded and the possibility of investigator bias exists. And finally, we had no follow-up to investigate the long-term effects of the intervention.\n\nBODY.CONCLUSIONS:\nThe results of the present study showed that adding a synbiotic containing Lactobacillus Sporogenes to the routine treatment (mineral oil) of constipation in children can significantly increase improvement in symptoms, without specific side-effects. Further studies evaluating and comparing other synbiotic agents, either alone or along with other standard treatments, single or mixture of probiotics, and studies with longer follow-ups are warranted. Also, studies are needed on the possible mechanisms of action of probiotics in this regard.\n\n**Question:** Compared to mineral oil alone what was the result of synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) on stool frequency?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13824
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nComparison of Lactobacillus Sporogenes plus mineral oil and mineral oil alone in the treatment of childhood functional constipation\n\n\n\nABSTRACT.BACKGROUND:\nFunctional constipation is one of the most prevalent childhood gastrointestinal disorders. We evaluated effects of adding a probiotic to mineral oil in the treatment of functional constipation in children.\n\nABSTRACT.MATERIALS AND METHODS:\nThis controlled trial was conducted on 60 children (2 to 14 years old) with functional constipation (Rome III criteria). Children were allocated to receive the synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) or the mineral oil alone for two months. Symptoms of constipation including defecation frequency, stool form, strain and pain at defecation, incomplete evacuation and soiling were assessed and compared before and after the intervention. After the treatment period, the two groups were also compared with regards to subjective global assessment of improvement.\n\nABSTRACT.RESULTS:\nAfter the treatment, stool frequency increased in both groups (P < 0.001), with greater increase in synbiotic + mineral oil group (P = 0.001). Frequency of hard/very hard stool and frequency of painful defecation decreased similarly in both groups (P < 0.001). Straining at defecation, incomplete evacuation, and soiling decreased in both groups (P < 0.001), but more decrease was seen in the synbiotic + mineral oil group (P < 0.05). Finally, there was a better global improvement in the synbiotic + mineral oil group (P < 0.05). No severe side-effects were observed in any group.\n\nABSTRACT.CONCLUSION:\nAdding the synbiotic Lactol® (containing Lactobacillus Sporogenes) to mineral oil can increase the improvement in the constipation symptoms of children without specific side-effects.\n\nBODY.INTRODUCTION:\nConstipation is one of the most prevalent gastrointestinal disorders among children with the reported prevalence varying from about 1-8% in the pediatric population. It accounts for 3-5% of visits to pediatricians and up to 25% of visits to pediatric gastroenterologists.[1] Disease burden and estimated healthcare costs are considerable and it can significantly impair quality of life of patients in both mental and physical health aspects.[23] In most of the cases (90-95%), no organic cause is found for constipation leading to the diagnosis of functional constipation in the child.[4] Thus, treatment strategy mainly involves educating the family and dietary modification. Disimpaction and maintenance oral/rectal medications are administered when needed. 
Common current medications for maintenance therapy include mineral oil, lactulose, milk of magnesia, polyethylene glycol powder, and sorbitol.[5,6] Because the etiology of the disease is not clearly understood, there is no specific treatment for functional constipation, and despite various treatments, only about half of the children with functional constipation demonstrate long-term improvement.[5]\nThere is growing interest about the efficacy of probiotics in the treatment of functional as well as organic gastrointestinal disorders. Probiotics have been reported to be effective in the treatment of various disorders including inflammatory bowel disease, irritable bowel syndrome, travelers’ diarrhea, and constipation. Because colonic microflora can affect the motility of the colon, an imbalance in the colonic microflora is proposed to have a role in the pathophysiology of chronic constipation. Also, some kinds of probiotics, such as bifidobacteria and lactobacilli, by producing acids (lactic, acetic), result in lowering the pH of the colon, which in turn can enhance peristalsis of the colon and decrease colonic transit time.[7,8]\nSeveral reports are available on the beneficial effects of probiotics on symptoms of constipation, mainly in adults.[8] Studies showed that probiotic strains, such as B. Infantis, increase defecation frequency and soften stools. However, limited information is available on the efficacy and safety of probiotics in the treatment of children with constipation and available studies have had controversial results.[8] We evaluated the effects of adding a synbiotic containing Lactobacillus Sporogenes strain to mineral oil in the treatment of functional constipation in children.\n\nBODY.MATERIALS AND METHODS:\nBODY.PATIENTS AND SETTING:\nThis controlled clinical trial was conducted on children referring with complaint of constipation to an outpatient clinic of pediatric gastroenterology, at the Alzahra University Hospital, Isfahan city (IRAN), from 2010 to 2011. Children 2 to 14 years of age diagnosed to have functional constipation based on the Rome III criteria were included consecutively.[9] Those with immunocompromised condition or other severe diseases, and those receiving antibiotic in the previous four weeks were not included.[10] Exclusion criteria included experiencing severe side-effects, gastroenteritis, or receiving antibiotic for any reason during the study period. Considering alpha = 5%, study power = 80%, and expecting at least one score change in stool consistency scale, the sample size was calculated as 25 subjects in each group. The study was approved by the ethics committee of the Azad University, Najaf Abad Branch, and informed consent was obtained from all children's parents. Also, the study was registered in the Iranian Registry of Clinical Trials (IRCT201207311579N1).\n\nBODY.INTERVENTION:\nChildren were alternately allocated to the synbiotic and control groups. In the synbiotic group, subjects received synbiotic (Lactol®, Bioplus Life Sciences Pvt. 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) for eight weeks. The synbiotic used in this study was composed of the following: Lactobacillus Sporogenes (15 × 10⁷ Spores), Fructo-Oligosaccharides, Microcrystalline Cellulose, Sodium starch Glycolate, Povidone, Hypermellose Stearate, Silicon Dioxide, and Propylene Glycol. Subjects in the control group received mineral oil (Paraffin 1 ml/kg/d) for eight weeks. 
Toilet training and dietary advice were equally administered for both groups.[11]\n\nBODY.ASSESSMENTS:\nDiagnosis of functional constipation was made according to the Rome III criteria by a pediatric gastroenterology specialist. The criteria were two or more of the following: 1) Two or fewer stools per week, 2) hard or very hard stools or painful stool, 3) Passage of very large stool, 4) Stool retention “once a week” or more often, 5) History of large fecal mass in rectum, and 6) Soiling “once a week” or more often. Those who fulfiled the criteria for irritable bowel syndrome were not included into the study.[9] A trained general physician interviewed with each subject, and a questionnaire including demographic data and symptoms of constipation, based on the Questionnaire on Pediatric Gastrointestinal Symptoms-Rome III Version, was completed.[12] Major symptoms included defecation frequency (1 = two times a week or less to 5 = more than three times a day), stool consistency (1 = very hard to 5 = watery), stool retention (yes/no), and painful defecation (yes/no). Other symptoms included urgency, straining, passing mucus, and feeling of incomplete evacuation which were scored from 0 (never) to 4 (always) and soiling which was scored from 0 (never) to 5 (every day). Subjects were interviewed at the end of the intervention (8th week) with the same questionnaire including compliance and side-effects as well, and a subjective global improvement (SGI) scale which was scored from 0 (significantly worsen) to 7 (completely improved).[13]\n\nBODY.STATISTICAL ANALYSES:\nData were analyzed using SPSS software for windows Version 16.0. Statistical tests included independent sample t-test for comparison of parametric data and Mann-Whitney test for comparison of non-parametric data between the two groups. Multivariate analysis using regression model was applied to find factors independently associated with global improvement. A P value of <0.05 was considered significant in all analyses.\n\n\nBODY.RESULTS:\nDuring the study period 30 children were included in each group (mean age = 5.0 ± 2.9 years, 55% female); all of them completed the study [Figure 1]. The two groups were similar in age, gender, and baseline symptoms except stool frequency which was slightly lower in the control group (P = 0.024) [Table 1]. After the study, there was a significant improvement in most symptoms of constipation in both the synbiotic and control groups. However, improvements were greater in the synbiotic group regarding defecation frequency (even after controlling for baseline difference by ordinal regression test), straining, sense of incomplete evacuation, and the frequency of fecal incontinence (P < 0.05 to <0.001) [Table 2]. 
According to the SGI scale, symptoms were moderately and significantly improved in 13.3% and 86.6% of the subjects respectively, in the symbiotic group, while in the control group, improvement of symptoms was moderate in 36.6% and significant in 40% of subjects, which showed a significantly better global improvement in the synbiotic group (P < 0.001) [Table 2].\nFigure 1Patients flow diagram\nTable 1Comparison of demographic data and baseline clinical characteristics between the two groups\nTable 2Comparison of symptoms at the end of intervention between the two groups\nBODY.MULTIVARIATE ANALYSIS:\nAccording to a difference between the two groups in baseline defecation frequency, we conducted a multivariate analysis to find factors associated with SGI results by controlling age, gender, kind of intervention, and baseline symptoms. Among the mentioned factors, only receiving synbiotic plus mineral oil compared with mineral oil alone was significantly associated with more global improvement (estimate coefficient = 2.359, CI95%: 0.862 to 3.857, P = 0.002) [Table 3].\nTable 3Regression ordinal model on possible predictors of global improvement\n\nBODY.SIDE-EFFECTS:\nNo severe side-effect was reported in any group. Reported side-effects were bloating (3.3%), diarrhea (3.3%), mild epigastric pain (3.3%), vomiting (3.3%), and abdominal pain plus itching and scar in the anal region in one case (3.3%). No side-effects were reported in the intervention group. There was no significant difference between the two groups in this regard.\n\n\nBODY.DISCUSSION:\nThe aim of the present study was to evaluate the effectiveness of adding probiotic to the conventional treatment of childhood functional constipation. According to the results, adding a synbiotic which contained Lactobacillus Sporogenes to mineral oil resulted in more improvement in several symptoms of constipation compared with mineral oil alone. These results were also observed by the global improvement scale. Multivariate analysis with control of baseline symptoms and age and gender also approved these results.\nLimited data are available on the efficacy of probiotics in the treatment of constipation in children. In one study conducted by Bekkali and colleagues, children with functional constipation received a mixture of probiotics composed of Bifidobacteria and Lactobacili for 4 weeks. Results showed that the average of defecation frequency increased from 2/week at the beginning to 3.8/week at the end of the therapy. In contrast to our results, in the study by Bekkali et al., no significant change was observed in stool form. However, similar to our results, fecal incontinence decreased and no important side-effect was observed.[14] In another study by Bu and colleagues, children with constipation were allocated to three groups of Lactobacillus casei rhamnosus, Magnesium Oxide, and placebo. The results of this study showed that the probiotic was as effective as the magnesium oxide, while it did not have its possible side-effects.[15] In a study conducted by Khodadad and colleagues, children with constipation received Paraffin (1.5 ml/kg/d), synbiotic composed of Lactobacillus and Bifidobacteria, or a combination of synbiotic and Paraffin. The results showed that defecation frequency increased significantly in the combination group compared with the paraffin alone. But no beneficial effect was observed for stool form, incontinence, or painful defecation. 
Similar to our study, no side-effect was reported in the combination group while leakage of paraffin was a reported side-effect in those who received paraffin alone.[16] In contrast to the above mentioned trials showing the beneficial effects of probiotics in the treatment of childhood constipation, Banaszkiewicz and colleagues did not find beneficial effects of adding Lactobacillus GG to Lactose 70% over Lactose alone.[17] Differences among the results of available reports could be due to differences in the probiotic that was used; some studies used mixture compound while others used a single-strain probiotic. Therefore, further trials are needed in this regard. Available reports, however, are in favor of using probiotics as an adjunctive to other conventional therapies of childhood constipation. The possible mechanisms by which probiotics are beneficial in the treatment of constipation are not well studied. Available reports are focused on colonic transit time; however, more studies are needed in this regard.[8]\nThere are some limitations of our study. There was a difference between the two groups in baseline characteristics possibly because of alternate allocation which is not qualified as randomization; however, we conducted a multivariate analysis to control the covariates and the results confirmed the beneficial effects of adding synbiotic to mineral oil. Also, it was ethically not possible to run a placebo-controlled group which received placebo alone; however, it was possible to add a placebo in the mineral oil group and our study has an important limitation in this regard. Therefore, our study was not blinded and the possibility of investigator bias exists. And finally, we had no follow-up to investigate the long-term effects of the intervention.\n\nBODY.CONCLUSIONS:\nThe results of the present study showed that adding a synbiotic containing Lactobacillus Sporogenes to the routine treatment (mineral oil) of constipation in children can significantly increase improvement in symptoms, without specific side-effects. Further studies evaluating and comparing other synbiotic agents, either alone or along with other standard treatments, single or mixture of probiotics, and studies with longer follow-ups are warranted. Also, studies are needed on the possible mechanisms of action of probiotics in this regard.\n\n**Question:** Compared to mineral oil alone what was the result of synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) on Straining at defecation, incomplete evacuation, and soiling?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
13825
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nComparison of Lactobacillus Sporogenes plus mineral oil and mineral oil alone in the treatment of childhood functional constipation\n\n\n\nABSTRACT.BACKGROUND:\nFunctional constipation is one of the most prevalent childhood gastrointestinal disorders. We evaluated effects of adding a probiotic to mineral oil in the treatment of functional constipation in children.\n\nABSTRACT.MATERIALS AND METHODS:\nThis controlled trial was conducted on 60 children (2 to 14 years old) with functional constipation (Rome III criteria). Children were allocated to receive the synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) or the mineral oil alone for two months. Symptoms of constipation including defecation frequency, stool form, strain and pain at defecation, incomplete evacuation and soiling were assessed and compared before and after the intervention. After the treatment period, the two groups were also compared with regards to subjective global assessment of improvement.\n\nABSTRACT.RESULTS:\nAfter the treatment, stool frequency increased in both groups (P < 0.001), with greater increase in synbiotic + mineral oil group (P = 0.001). Frequency of hard/very hard stool and frequency of painful defecation decreased similarly in both groups (P < 0.001). Straining at defecation, incomplete evacuation, and soiling decreased in both groups (P < 0.001), but more decrease was seen in the synbiotic + mineral oil group (P < 0.05). Finally, there was a better global improvement in the synbiotic + mineral oil group (P < 0.05). No severe side-effects were observed in any group.\n\nABSTRACT.CONCLUSION:\nAdding the synbiotic Lactol® (containing Lactobacillus Sporogenes) to mineral oil can increase the improvement in the constipation symptoms of children without specific side-effects.\n\nBODY.INTRODUCTION:\nConstipation is one of the most prevalent gastrointestinal disorders among children with the reported prevalence varying from about 1-8% in the pediatric population. It accounts for 3-5% of visits to pediatricians and up to 25% of visits to pediatric gastroenterologists.[1] Disease burden and estimated healthcare costs are considerable and it can significantly impair quality of life of patients in both mental and physical health aspects.[23] In most of the cases (90-95%), no organic cause is found for constipation leading to the diagnosis of functional constipation in the child.[4] Thus, treatment strategy mainly involves educating the family and dietary modification. Disimpaction and maintenance oral/rectal medications are administered when needed. 
Common current medications for maintenance therapy include mineral oil, lactulose, milk of magnesia, polyethylene glycol powder, and sorbitol.[5,6] Because the etiology of the disease is not clearly understood, there is no specific treatment for functional constipation, and despite various treatments, only about half of the children with functional constipation demonstrate long-term improvement.[5]\nThere is growing interest about the efficacy of probiotics in the treatment of functional as well as organic gastrointestinal disorders. Probiotics have been reported to be effective in the treatment of various disorders including inflammatory bowel disease, irritable bowel syndrome, travelers’ diarrhea, and constipation. Because colonic microflora can affect the motility of the colon, an imbalance in the colonic microflora is proposed to have a role in the pathophysiology of chronic constipation. Also, some kinds of probiotics, such as bifidobacteria and lactobacilli, by producing acids (lactic, acetic), result in lowering the pH of the colon, which in turn can enhance peristalsis of the colon and decrease colonic transit time.[7,8]\nSeveral reports are available on the beneficial effects of probiotics on symptoms of constipation, mainly in adults.[8] Studies showed that probiotic strains, such as B. Infantis, increase defecation frequency and soften stools. However, limited information is available on the efficacy and safety of probiotics in the treatment of children with constipation and available studies have had controversial results.[8] We evaluated the effects of adding a synbiotic containing Lactobacillus Sporogenes strain to mineral oil in the treatment of functional constipation in children.\n\nBODY.MATERIALS AND METHODS:\nBODY.PATIENTS AND SETTING:\nThis controlled clinical trial was conducted on children referring with complaint of constipation to an outpatient clinic of pediatric gastroenterology, at the Alzahra University Hospital, Isfahan city (IRAN), from 2010 to 2011. Children 2 to 14 years of age diagnosed to have functional constipation based on the Rome III criteria were included consecutively.[9] Those with immunocompromised condition or other severe diseases, and those receiving antibiotic in the previous four weeks were not included.[10] Exclusion criteria included experiencing severe side-effects, gastroenteritis, or receiving antibiotic for any reason during the study period. Considering alpha = 5%, study power = 80%, and expecting at least one score change in stool consistency scale, the sample size was calculated as 25 subjects in each group. The study was approved by the ethics committee of the Azad University, Najaf Abad Branch, and informed consent was obtained from all children's parents. Also, the study was registered in the Iranian Registry of Clinical Trials (IRCT201207311579N1).\n\nBODY.INTERVENTION:\nChildren were alternately allocated to the synbiotic and control groups. In the synbiotic group, subjects received synbiotic (Lactol®, Bioplus Life Sciences Pvt. 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) for eight weeks. The synbiotic used in this study was composed of the following: Lactobacillus Sporogenes (15 × 10⁷ Spores), Fructo-Oligosaccharides, Microcrystalline Cellulose, Sodium starch Glycolate, Povidone, Hypermellose Stearate, Silicon Dioxide, and Propylene Glycol. Subjects in the control group received mineral oil (Paraffin 1 ml/kg/d) for eight weeks. 
Toilet training and dietary advice were equally administered for both groups.[11]\n\nBODY.ASSESSMENTS:\nDiagnosis of functional constipation was made according to the Rome III criteria by a pediatric gastroenterology specialist. The criteria were two or more of the following: 1) Two or fewer stools per week, 2) hard or very hard stools or painful stool, 3) Passage of very large stool, 4) Stool retention “once a week” or more often, 5) History of large fecal mass in rectum, and 6) Soiling “once a week” or more often. Those who fulfiled the criteria for irritable bowel syndrome were not included into the study.[9] A trained general physician interviewed with each subject, and a questionnaire including demographic data and symptoms of constipation, based on the Questionnaire on Pediatric Gastrointestinal Symptoms-Rome III Version, was completed.[12] Major symptoms included defecation frequency (1 = two times a week or less to 5 = more than three times a day), stool consistency (1 = very hard to 5 = watery), stool retention (yes/no), and painful defecation (yes/no). Other symptoms included urgency, straining, passing mucus, and feeling of incomplete evacuation which were scored from 0 (never) to 4 (always) and soiling which was scored from 0 (never) to 5 (every day). Subjects were interviewed at the end of the intervention (8th week) with the same questionnaire including compliance and side-effects as well, and a subjective global improvement (SGI) scale which was scored from 0 (significantly worsen) to 7 (completely improved).[13]\n\nBODY.STATISTICAL ANALYSES:\nData were analyzed using SPSS software for windows Version 16.0. Statistical tests included independent sample t-test for comparison of parametric data and Mann-Whitney test for comparison of non-parametric data between the two groups. Multivariate analysis using regression model was applied to find factors independently associated with global improvement. A P value of <0.05 was considered significant in all analyses.\n\n\nBODY.RESULTS:\nDuring the study period 30 children were included in each group (mean age = 5.0 ± 2.9 years, 55% female); all of them completed the study [Figure 1]. The two groups were similar in age, gender, and baseline symptoms except stool frequency which was slightly lower in the control group (P = 0.024) [Table 1]. After the study, there was a significant improvement in most symptoms of constipation in both the synbiotic and control groups. However, improvements were greater in the synbiotic group regarding defecation frequency (even after controlling for baseline difference by ordinal regression test), straining, sense of incomplete evacuation, and the frequency of fecal incontinence (P < 0.05 to <0.001) [Table 2]. 
According to the SGI scale, symptoms were moderately and significantly improved in 13.3% and 86.6% of the subjects respectively, in the symbiotic group, while in the control group, improvement of symptoms was moderate in 36.6% and significant in 40% of subjects, which showed a significantly better global improvement in the synbiotic group (P < 0.001) [Table 2].\nFigure 1Patients flow diagram\nTable 1Comparison of demographic data and baseline clinical characteristics between the two groups\nTable 2Comparison of symptoms at the end of intervention between the two groups\nBODY.MULTIVARIATE ANALYSIS:\nAccording to a difference between the two groups in baseline defecation frequency, we conducted a multivariate analysis to find factors associated with SGI results by controlling age, gender, kind of intervention, and baseline symptoms. Among the mentioned factors, only receiving synbiotic plus mineral oil compared with mineral oil alone was significantly associated with more global improvement (estimate coefficient = 2.359, CI95%: 0.862 to 3.857, P = 0.002) [Table 3].\nTable 3Regression ordinal model on possible predictors of global improvement\n\nBODY.SIDE-EFFECTS:\nNo severe side-effect was reported in any group. Reported side-effects were bloating (3.3%), diarrhea (3.3%), mild epigastric pain (3.3%), vomiting (3.3%), and abdominal pain plus itching and scar in the anal region in one case (3.3%). No side-effects were reported in the intervention group. There was no significant difference between the two groups in this regard.\n\n\nBODY.DISCUSSION:\nThe aim of the present study was to evaluate the effectiveness of adding probiotic to the conventional treatment of childhood functional constipation. According to the results, adding a synbiotic which contained Lactobacillus Sporogenes to mineral oil resulted in more improvement in several symptoms of constipation compared with mineral oil alone. These results were also observed by the global improvement scale. Multivariate analysis with control of baseline symptoms and age and gender also approved these results.\nLimited data are available on the efficacy of probiotics in the treatment of constipation in children. In one study conducted by Bekkali and colleagues, children with functional constipation received a mixture of probiotics composed of Bifidobacteria and Lactobacili for 4 weeks. Results showed that the average of defecation frequency increased from 2/week at the beginning to 3.8/week at the end of the therapy. In contrast to our results, in the study by Bekkali et al., no significant change was observed in stool form. However, similar to our results, fecal incontinence decreased and no important side-effect was observed.[14] In another study by Bu and colleagues, children with constipation were allocated to three groups of Lactobacillus casei rhamnosus, Magnesium Oxide, and placebo. The results of this study showed that the probiotic was as effective as the magnesium oxide, while it did not have its possible side-effects.[15] In a study conducted by Khodadad and colleagues, children with constipation received Paraffin (1.5 ml/kg/d), synbiotic composed of Lactobacillus and Bifidobacteria, or a combination of synbiotic and Paraffin. The results showed that defecation frequency increased significantly in the combination group compared with the paraffin alone. But no beneficial effect was observed for stool form, incontinence, or painful defecation. 
Similar to our study, no side-effect was reported in the combination group while leakage of paraffin was a reported side-effect in those who received paraffin alone.[16] In contrast to the above mentioned trials showing the beneficial effects of probiotics in the treatment of childhood constipation, Banaszkiewicz and colleagues did not find beneficial effects of adding Lactobacillus GG to Lactose 70% over Lactose alone.[17] Differences among the results of available reports could be due to differences in the probiotic that was used; some studies used mixture compound while others used a single-strain probiotic. Therefore, further trials are needed in this regard. Available reports, however, are in favor of using probiotics as an adjunctive to other conventional therapies of childhood constipation. The possible mechanisms by which probiotics are beneficial in the treatment of constipation are not well studied. Available reports are focused on colonic transit time; however, more studies are needed in this regard.[8]\nThere are some limitations of our study. There was a difference between the two groups in baseline characteristics possibly because of alternate allocation which is not qualified as randomization; however, we conducted a multivariate analysis to control the covariates and the results confirmed the beneficial effects of adding synbiotic to mineral oil. Also, it was ethically not possible to run a placebo-controlled group which received placebo alone; however, it was possible to add a placebo in the mineral oil group and our study has an important limitation in this regard. Therefore, our study was not blinded and the possibility of investigator bias exists. And finally, we had no follow-up to investigate the long-term effects of the intervention.\n\nBODY.CONCLUSIONS:\nThe results of the present study showed that adding a synbiotic containing Lactobacillus Sporogenes to the routine treatment (mineral oil) of constipation in children can significantly increase improvement in symptoms, without specific side-effects. Further studies evaluating and comparing other synbiotic agents, either alone or along with other standard treatments, single or mixture of probiotics, and studies with longer follow-ups are warranted. Also, studies are needed on the possible mechanisms of action of probiotics in this regard.\n\n**Question:** Compared to mineral oil alone what was the result of synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) on better global improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13826
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nComparison of Lactobacillus Sporogenes plus mineral oil and mineral oil alone in the treatment of childhood functional constipation\n\n\n\nABSTRACT.BACKGROUND:\nFunctional constipation is one of the most prevalent childhood gastrointestinal disorders. We evaluated effects of adding a probiotic to mineral oil in the treatment of functional constipation in children.\n\nABSTRACT.MATERIALS AND METHODS:\nThis controlled trial was conducted on 60 children (2 to 14 years old) with functional constipation (Rome III criteria). Children were allocated to receive the synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) or the mineral oil alone for two months. Symptoms of constipation including defecation frequency, stool form, strain and pain at defecation, incomplete evacuation and soiling were assessed and compared before and after the intervention. After the treatment period, the two groups were also compared with regards to subjective global assessment of improvement.\n\nABSTRACT.RESULTS:\nAfter the treatment, stool frequency increased in both groups (P < 0.001), with greater increase in synbiotic + mineral oil group (P = 0.001). Frequency of hard/very hard stool and frequency of painful defecation decreased similarly in both groups (P < 0.001). Straining at defecation, incomplete evacuation, and soiling decreased in both groups (P < 0.001), but more decrease was seen in the synbiotic + mineral oil group (P < 0.05). Finally, there was a better global improvement in the synbiotic + mineral oil group (P < 0.05). No severe side-effects were observed in any group.\n\nABSTRACT.CONCLUSION:\nAdding the synbiotic Lactol® (containing Lactobacillus Sporogenes) to mineral oil can increase the improvement in the constipation symptoms of children without specific side-effects.\n\nBODY.INTRODUCTION:\nConstipation is one of the most prevalent gastrointestinal disorders among children with the reported prevalence varying from about 1-8% in the pediatric population. It accounts for 3-5% of visits to pediatricians and up to 25% of visits to pediatric gastroenterologists.[1] Disease burden and estimated healthcare costs are considerable and it can significantly impair quality of life of patients in both mental and physical health aspects.[23] In most of the cases (90-95%), no organic cause is found for constipation leading to the diagnosis of functional constipation in the child.[4] Thus, treatment strategy mainly involves educating the family and dietary modification. Disimpaction and maintenance oral/rectal medications are administered when needed. 
Common current medications for maintenance therapy include mineral oil, lactulose, milk of magnesia, polyethylene glycol powder, and sorbitol.[5,6] Because the etiology of the disease is not clearly understood, there is no specific treatment for functional constipation, and despite various treatments, only about half of the children with functional constipation demonstrate long-term improvement.[5]\nThere is growing interest about the efficacy of probiotics in the treatment of functional as well as organic gastrointestinal disorders. Probiotics have been reported to be effective in the treatment of various disorders including inflammatory bowel disease, irritable bowel syndrome, travelers’ diarrhea, and constipation. Because colonic microflora can affect the motility of the colon, an imbalance in the colonic microflora is proposed to have a role in the pathophysiology of chronic constipation. Also, some kinds of probiotics, such as bifidobacteria and lactobacilli, by producing acids (lactic, acetic), result in lowering the pH of the colon, which in turn can enhance peristalsis of the colon and decrease colonic transit time.[7,8]\nSeveral reports are available on the beneficial effects of probiotics on symptoms of constipation, mainly in adults.[8] Studies showed that probiotic strains, such as B. Infantis, increase defecation frequency and soften stools. However, limited information is available on the efficacy and safety of probiotics in the treatment of children with constipation and available studies have had controversial results.[8] We evaluated the effects of adding a synbiotic containing Lactobacillus Sporogenes strain to mineral oil in the treatment of functional constipation in children.\n\nBODY.MATERIALS AND METHODS:\nBODY.PATIENTS AND SETTING:\nThis controlled clinical trial was conducted on children referring with complaint of constipation to an outpatient clinic of pediatric gastroenterology, at the Alzahra University Hospital, Isfahan city (IRAN), from 2010 to 2011. Children 2 to 14 years of age diagnosed to have functional constipation based on the Rome III criteria were included consecutively.[9] Those with immunocompromised condition or other severe diseases, and those receiving antibiotic in the previous four weeks were not included.[10] Exclusion criteria included experiencing severe side-effects, gastroenteritis, or receiving antibiotic for any reason during the study period. Considering alpha = 5%, study power = 80%, and expecting at least one score change in stool consistency scale, the sample size was calculated as 25 subjects in each group. The study was approved by the ethics committee of the Azad University, Najaf Abad Branch, and informed consent was obtained from all children's parents. Also, the study was registered in the Iranian Registry of Clinical Trials (IRCT201207311579N1).\n\nBODY.INTERVENTION:\nChildren were alternately allocated to the synbiotic and control groups. In the synbiotic group, subjects received synbiotic (Lactol®, Bioplus Life Sciences Pvt. 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) for eight weeks. The synbiotic used in this study was composed of the following: Lactobacillus Sporogenes (15 × 10⁷ Spores), Fructo-Oligosaccharides, Microcrystalline Cellulose, Sodium starch Glycolate, Povidone, Hypermellose Stearate, Silicon Dioxide, and Propylene Glycol. Subjects in the control group received mineral oil (Paraffin 1 ml/kg/d) for eight weeks. 
Toilet training and dietary advice were equally administered for both groups.[11]\n\nBODY.ASSESSMENTS:\nDiagnosis of functional constipation was made according to the Rome III criteria by a pediatric gastroenterology specialist. The criteria were two or more of the following: 1) Two or fewer stools per week, 2) hard or very hard stools or painful stool, 3) Passage of very large stool, 4) Stool retention “once a week” or more often, 5) History of large fecal mass in rectum, and 6) Soiling “once a week” or more often. Those who fulfiled the criteria for irritable bowel syndrome were not included into the study.[9] A trained general physician interviewed with each subject, and a questionnaire including demographic data and symptoms of constipation, based on the Questionnaire on Pediatric Gastrointestinal Symptoms-Rome III Version, was completed.[12] Major symptoms included defecation frequency (1 = two times a week or less to 5 = more than three times a day), stool consistency (1 = very hard to 5 = watery), stool retention (yes/no), and painful defecation (yes/no). Other symptoms included urgency, straining, passing mucus, and feeling of incomplete evacuation which were scored from 0 (never) to 4 (always) and soiling which was scored from 0 (never) to 5 (every day). Subjects were interviewed at the end of the intervention (8th week) with the same questionnaire including compliance and side-effects as well, and a subjective global improvement (SGI) scale which was scored from 0 (significantly worsen) to 7 (completely improved).[13]\n\nBODY.STATISTICAL ANALYSES:\nData were analyzed using SPSS software for windows Version 16.0. Statistical tests included independent sample t-test for comparison of parametric data and Mann-Whitney test for comparison of non-parametric data between the two groups. Multivariate analysis using regression model was applied to find factors independently associated with global improvement. A P value of <0.05 was considered significant in all analyses.\n\n\nBODY.RESULTS:\nDuring the study period 30 children were included in each group (mean age = 5.0 ± 2.9 years, 55% female); all of them completed the study [Figure 1]. The two groups were similar in age, gender, and baseline symptoms except stool frequency which was slightly lower in the control group (P = 0.024) [Table 1]. After the study, there was a significant improvement in most symptoms of constipation in both the synbiotic and control groups. However, improvements were greater in the synbiotic group regarding defecation frequency (even after controlling for baseline difference by ordinal regression test), straining, sense of incomplete evacuation, and the frequency of fecal incontinence (P < 0.05 to <0.001) [Table 2]. 
According to the SGI scale, symptoms were moderately and significantly improved in 13.3% and 86.6% of the subjects respectively, in the symbiotic group, while in the control group, improvement of symptoms was moderate in 36.6% and significant in 40% of subjects, which showed a significantly better global improvement in the synbiotic group (P < 0.001) [Table 2].\nFigure 1Patients flow diagram\nTable 1Comparison of demographic data and baseline clinical characteristics between the two groups\nTable 2Comparison of symptoms at the end of intervention between the two groups\nBODY.MULTIVARIATE ANALYSIS:\nAccording to a difference between the two groups in baseline defecation frequency, we conducted a multivariate analysis to find factors associated with SGI results by controlling age, gender, kind of intervention, and baseline symptoms. Among the mentioned factors, only receiving synbiotic plus mineral oil compared with mineral oil alone was significantly associated with more global improvement (estimate coefficient = 2.359, CI95%: 0.862 to 3.857, P = 0.002) [Table 3].\nTable 3Regression ordinal model on possible predictors of global improvement\n\nBODY.SIDE-EFFECTS:\nNo severe side-effect was reported in any group. Reported side-effects were bloating (3.3%), diarrhea (3.3%), mild epigastric pain (3.3%), vomiting (3.3%), and abdominal pain plus itching and scar in the anal region in one case (3.3%). No side-effects were reported in the intervention group. There was no significant difference between the two groups in this regard.\n\n\nBODY.DISCUSSION:\nThe aim of the present study was to evaluate the effectiveness of adding probiotic to the conventional treatment of childhood functional constipation. According to the results, adding a synbiotic which contained Lactobacillus Sporogenes to mineral oil resulted in more improvement in several symptoms of constipation compared with mineral oil alone. These results were also observed by the global improvement scale. Multivariate analysis with control of baseline symptoms and age and gender also approved these results.\nLimited data are available on the efficacy of probiotics in the treatment of constipation in children. In one study conducted by Bekkali and colleagues, children with functional constipation received a mixture of probiotics composed of Bifidobacteria and Lactobacili for 4 weeks. Results showed that the average of defecation frequency increased from 2/week at the beginning to 3.8/week at the end of the therapy. In contrast to our results, in the study by Bekkali et al., no significant change was observed in stool form. However, similar to our results, fecal incontinence decreased and no important side-effect was observed.[14] In another study by Bu and colleagues, children with constipation were allocated to three groups of Lactobacillus casei rhamnosus, Magnesium Oxide, and placebo. The results of this study showed that the probiotic was as effective as the magnesium oxide, while it did not have its possible side-effects.[15] In a study conducted by Khodadad and colleagues, children with constipation received Paraffin (1.5 ml/kg/d), synbiotic composed of Lactobacillus and Bifidobacteria, or a combination of synbiotic and Paraffin. The results showed that defecation frequency increased significantly in the combination group compared with the paraffin alone. But no beneficial effect was observed for stool form, incontinence, or painful defecation. 
Similar to our study, no side-effect was reported in the combination group while leakage of paraffin was a reported side-effect in those who received paraffin alone.[16] In contrast to the above mentioned trials showing the beneficial effects of probiotics in the treatment of childhood constipation, Banaszkiewicz and colleagues did not find beneficial effects of adding Lactobacillus GG to Lactose 70% over Lactose alone.[17] Differences among the results of available reports could be due to differences in the probiotic that was used; some studies used mixture compound while others used a single-strain probiotic. Therefore, further trials are needed in this regard. Available reports, however, are in favor of using probiotics as an adjunctive to other conventional therapies of childhood constipation. The possible mechanisms by which probiotics are beneficial in the treatment of constipation are not well studied. Available reports are focused on colonic transit time; however, more studies are needed in this regard.[8]\nThere are some limitations of our study. There was a difference between the two groups in baseline characteristics possibly because of alternate allocation which is not qualified as randomization; however, we conducted a multivariate analysis to control the covariates and the results confirmed the beneficial effects of adding synbiotic to mineral oil. Also, it was ethically not possible to run a placebo-controlled group which received placebo alone; however, it was possible to add a placebo in the mineral oil group and our study has an important limitation in this regard. Therefore, our study was not blinded and the possibility of investigator bias exists. And finally, we had no follow-up to investigate the long-term effects of the intervention.\n\nBODY.CONCLUSIONS:\nThe results of the present study showed that adding a synbiotic containing Lactobacillus Sporogenes to the routine treatment (mineral oil) of constipation in children can significantly increase improvement in symptoms, without specific side-effects. Further studies evaluating and comparing other synbiotic agents, either alone or along with other standard treatments, single or mixture of probiotics, and studies with longer follow-ups are warranted. Also, studies are needed on the possible mechanisms of action of probiotics in this regard.\n\n**Question:** Compared to mineral oil alone what was the result of synbiotic (Lactol®, composed of Lactobacillus Sporogenes, 1 Tab/20 kg/d) plus mineral oil (Paraffin 1 ml/kg/d) on severe side-effects?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13827
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nContinuous Right Thoracic Paravertebral Block Following Bolus Initiation Reduced Postoperative Pain After Right-Lobe HepatectomyA Randomized, Double-Blind, Placebo-Controlled Trial\n\n\n\nABSTRACT.BACKGROUND AND OBJECTIVES:\nWe hypothesized that continuous right thoracic paravertebral block, following bolus initiation, decreases opioid consumption after right-lobe hepatectomy in patients receiving patient-controlled intravenous analgesia with sufentanil.\n\nABSTRACT.METHODS:\nPatients undergoing right-lobe hepatectomy with a right thoracic paravertebral catheter placed at T7 30 minutes before surgery were randomly assigned to receive through this catheter either a 10-mL bolus of 0.2% ropivacaine before emergence, followed by a continuous infusion of 6 mL/h for 24 hours (PVB group), or saline at the same scheme of administration (control group). All patients were started on patient-controlled intravenous analgesia with sufentanil in the postanesthesia care unit. The primary outcome measure was total sufentanil consumption during the first 24 postoperative hours. P = 0.05 was considered as significant. For the multiple comparisons of data at 5 different time points, the P value for the 0.05 level of significance was adjusted to 0.01.\n\nABSTRACT.RESULTS:\nSixty-six patients were assessed for eligibility, and a PVB catheter was successfully placed for 48 patients. Data were analyzed on 22 patients in group PVB and 22 patients in the control group. The cumulative sufentanil consumption in the PVB group (54.3 ± 12.1 μg) at 24 postoperative hours was more than 20% less than that of the control group (68.1 ± 9.9 μg) (P < 0.001). 
There was also a significant difference in pain scores (numerical rating scale) between groups, where the PVB group had lower scores than did the control group at rest and with coughing for the first 24 hours (P < 0.001).\n\nABSTRACT.CONCLUSIONS:\nContinuous right thoracic paravertebral block, following bolus initiation, has an opioid-sparing effect on sufentanil patient-controlled intravenous analgesia for right-lobe hepatectomy patients and reduces numerical rating scale pain scores at rest and with coughing in the first 24 postoperative hours.\n\nEpidural analgesia has commonly been recommended following major upper abdominal surgery in the past,1 but this recommendation is controversial.2 Hemostatic deficiencies after hepatectomy may increase the bleeding risk associated with the use of thoracic epidural analgesia and delay epidural catheter removal.3 Epidural analgesia has been independently associated with increased risk of blood transfusion and administration of significantly greater volumes of intravenous fluid following liver resection.3 Balancing the advantages of optimal analgesia against the risks of current techniques requires additional clinical evidence in patients undergoing hepatectomy.\nParavertebral nerve block (PVB) was first described by Sellheim of Leipzig in 1905 as an alternative to central neural blocks for obstetrics, particularly cesarean delivery. Paravertebral nerve blocks can provide high-quality analgesia for patients undergoing many types of surgery in the thoracic, abdominal, and pelvic regions and also for those suffering from trauma pain and chronic pain. The effect of continuous intraoperative and postoperative paravertebral block after right-lobe hepatectomy has been described in 2 patients, but the study did not include a control group.4 In this prospective, double-blind, randomized, placebo-controlled trial, we tested the hypothesis that continuous thoracic paravertebral block, following bolus initiation, reduces opioid consumption during the first 24 hours after right-lobe hepatectomy in patients receiving patient-controlled intravenous analgesia (PCIA) with sufentanil. The present study is reported according to the CONSORT (Consolidated Standards of Reporting Trials) statement.\nBODY.METHODS:\nBODY.PATIENT RECRUITMENT:\nThis is a prospective, randomized, subject- and assessor-blinded, parallel-group, and placebo-controlled trial comparing a continuous right thoracic paravertebral block at T7 versus control group (saline) on postoperative opioid consumption after right-lobe hepatectomy for hepatocellular carcinoma. According to our local standards, the indications for all liver resections include tumor size of 5 to 10 cm; no intrahepatic or distant metastasis; no tumor thrombus in the portal vein, hepatic vein, vena cava, or bile duct; no invasion of the diaphragm or surrounding tissues; no rupture or bleeding of the tumor; indocyanine green retention rate at 15 minutes less than 15%; a remnant liver volume/standard liver volume ratio of greater than 50% in patients with liver cirrhosis and greater than 35% in patients without liver cirrhosis; Child-Pugh class A or B liver function, normal bilirubin concentration; and no previous liver resection. After receiving approval from the local institute’s ethics committee, registration with Clinical Trials (NCT01691937), and written informed consent from the patients, the study was conducted from September 2012 to February 2013 at Tongji Hospital in Wuhan, China. 
Patients were excluded if any of the following applied: a known allergy to the drugs being used, coagulopathy, on anticoagulants, analgesic intake, history of substance abuse, participation in the investigation of another experimental agent, inability to properly describe postoperative pain to investigators (eg, language barrier, neuropsychiatric disorder), or body mass index of 30 kg/m2 or greater or 15 kg/m2 or less. All patients were informed about the postoperative analgesia routine on the day before surgery and were instructed on how to express pain intensity with use of the numerical rating scale (NRS), where 0 indicates no pain, and 10 indicates the most severe pain. The same surgical team performed all surgical procedures through a subcostal incision.\n\nBODY.RANDOMIZATION:\nSimple randomization was done by independent research staff using opaque sealed envelopes, 24 for each group, indicating group assignment and describing the anesthetic protocol. A right thoracic paravertebral catheter was placed at the T7 level 30 minutes before surgery by an experienced senior consultant (W.M.). After confirmation of a successful paravertebral block, according to predefined criteria, patients were randomly assigned to the continuous PVB group or control group by research staff, and the PVB catheter was connected to an infusion pump containing either ropivacaine or normal saline accordingly. The infusion pumps are indistinguishable in appearance, and therefore the patients, the intraoperative caregiver (B.N.), the research staff assessing postoperative outcomes (H.C.), the acute pain service team, and the surgical team were blinded to group assignment. Unmasking did not occur until statistical analysis was complete.\n\nBODY.ANESTHETIC TECHNIQUE:\nA preoperative pain score at rest was recorded during the preoperative visit. No preoperative medications were administered. Upon arrival in the operating room, standard monitoring, including electrocardiography, noninvasive blood pressure, and pulse oximetry, was established. Surface electrodes for the Narcotrend Monitor (version 4.0; MonitorTechnik, Bad Bramstedt, Germany) were applied to the patient’s forehead, and the Narcotrend index was monitored. After placement of a peripheral venous catheter, a lactated Ringer’s infusion was started.\nBefore anesthesia, all patients had continuous right thoracic paravertebral block at T7 performed under ultrasound guidance with a SonoSite M-Turbo transportable ultrasound device and a 38-mm linear 6- to 13-MHz ultrasound transducer (SonoSite M-Turbo; SonoSite Inc, Bothell, Washington) in the left lateral position via an in-plane needle insertion approach. Intravenous fentanyl and midazolam were titrated for patient comfort. After standard skin disinfection, the skin and periosteum of the rib were anesthetized with 1% lidocaine. The transducer was positioned in a transverse and partial oblique position to the vertebral column, parallel to the rib. At a point about 6 cm lateral to the midline, a prefilled 18-gauge, 10-cm Tuohy needle (PlexoLong Nanoline; Pajunk Inc, Geisingen, Germany) was inserted and oriented into the T7 paravertebral space between the internal intercostal membrane and the pleura under the real-time ultrasound guidance as described by Renes et al.5 After injection of 5 mL of 0.9% saline, an open-tip, single-orifice, styletted catheter (PlexoLong Nanoline; Pajunk Inc) was advanced 3 to 5 cm medially into the paravertebral space. 
After gentle and repeated aspiration, 1% lidocaine 15 mL was administered via the catheter over a period of at least 60 seconds. Fifteen minutes after block, sensory assessments were performed with a pinprick test, and the block level was recorded. The PVB was considered successful when subjects experienced a decreased sensation to pinprick in 3 or more adjacent dermatomes corresponding to the site of injection.6 Subjects with a successful catheter placement and nerve block onset per protocol were retained in the study. If the desired block was not achieved, we offered the patient the choice of reinsertion of the PVB catheter or to be excluded from the study. The properly located catheters were connected to a programmable portable, electronic infusion pump immediately before surgery. A nurse who was not otherwise involved in the study prepared and released the infusion pump containing blinded study medication with either 0.2% ropivacaine for the PVB group or 0.9% saline for the control group. In order to avoid inadvertent unmasking of allocation, the extent of sensory block was not tested during the postoperative period.\nIn both groups, the patient was then turned supine, and general anesthesia was induced with fentanyl 3 to 4 μg/kg and propofol 1.5 to 2 mg/kg. Tracheal intubation was facilitated with 0.6 to 1 mg/kg rocuronium. The patients were mechanically ventilated with oxygen-air mixture maintaining the end-tidal CO2 at 4.5 to 5.0 kPa. Anesthesia was maintained by remifentanil and sevoflurane to keep the Narcotrend index reading between 40 and 60, and the systolic arterial pressure and heart rate within ±20% of baseline values during the procedure. Atracurium was administered for muscle relaxation as needed. The patient’s nasopharyngeal temperature was maintained between 36°C and 37.5°C by WarmTouch.\nAll patients received a standard right-lobe hepatectomy for hepatocellular carcinoma by a J-shaped right subcostal incision. The J-shaped incision consisted of a right subcostal incision with a mediocranial extension to the xiphoid process. The right lateral extension was variable but comprised transection of the oblique abdominal musculature.7 Twenty minutes before the end of surgery, all patients received 1 to 2 μg/kg fentanyl titration and tropisetron 2 mg intravenously. At the same time, the PVB group had a 10-mL bolus of plain 0.2% ropivacaine via the paravertebral catheter by infusion pump, followed by an infusion of 0.2% ropivacaine at 6 ml/h. The same volume of 0.9% saline was injected and infused in the control group by infusion pump. At the end of surgery, the patient’s trachea was extubated when response to verbal commands, spontaneous respiratory rate exceeding 12 breaths/min, and end-tidal carbon dioxide partial pressure less than 45 mm Hg were observed. They were then transferred to the postanesthesia care unit (PACU).\n\nBODY.POSTOPERATIVE CARE AND PAIN ASSESSMENT:\nUpon PACU arrival, rest pain was evaluated by nurses using NRS (0–10) every 10 minutes. All patients received intravenous sufentanil (2–5 μg) titration at 10-minute intervals for NRS scores greater than 3 in PACU. Once an NRS score of 3 or less had been achieved, spontaneously or after a titration of sufentanil, patients were connected to a patient-controlled analgesia device (BCDB-200; BCM, Shanghai, China) set to deliver 1 μg sufentanil as an intravenous bolus with a 10-minute lockout interval and a continuous background infusion of 1 to 2 μg/h and a maximal permitted dosage of sufentanil 8 μg/h. 
An acute pain service team was responsible for maintenance of the PCIA pump according to local standard. Transition from PACU to surgical ward was considered safe when the patient had achieved a modified Aldrete score of greater than over equal to 9 for at least 10 minutes and physiological function to near preanesthetic levels.\nPain at rest and on coughing was recorded for each patient using NRS (0–10) at 1, 4, 8, 16, and 24 hours after PACU discharge by the research staff (H.C.). Rescue opioid with intravenous tramadol was provided by acute pain service team when needed. Sedation was recorded using Ramsay Sedation Scale. Excessive sedation was defined as a Ramsay Sedation Scale value of 5 or 6 requiring the administration of naloxone. In the presence of nausea or vomiting, tropisetron 2 mg (maximum dose, 5 mg/d) intravenously was given and repeated if nausea persisted. The data of cumulative dose of sufentanil given postoperatively (titration and patient-controlled analgesia during the 24-hour observation period) and PCIA pressing frequency were recorded. Postoperative nausea and vomiting events during the first 24 hours were also recorded. After 24 hours, patients were asked to rate their overall satisfaction using an NRS, where 0 represents totally unsatisfied, and 10 represents completely satisfied. The infusion of ropivacaine in the PVB group and saline in the control group was stopped 24 hours after the surgery, and the catheter was removed. The PCIA continued until 48 hours after the surgery in both groups.\n\nBODY.STATISTICAL ANALYSIS:\nThe primary outcome measure was total sufentanil consumption in the first 24 postoperative hours. Previously unpublished data including 132 patients from April 2012 to July 2012 provided by the acute pain service team in our institution showed a mean and SD of cumulative sufentanil consumption over the first 24 hours after right-lobe hepatectomy to be about 70 and 10 μg, respectively. In order to have 95% power to detect a 20% reduction of sufentanil consumption at an α level of 0.01, we needed 20 patients per study group. To account for any patient dropouts or missing data, we planned to enroll 24 patients per study group. Descriptive statistics were computed for all study variables. Normally distributed data were analyzed using 2-tailed Student t test. Repeated-measures analysis of variance (ANOVA) was used to compare measurements over time (cumulative sufentanil consumption and PCIA pressing frequency). If there was a statistical difference (P < 0.05) between the 2 groups by repeated-measures ANOVA, Student t test was used to compare the data at each time point. The non–normal distributed NRS data are displayed in box-and-whisker plots. The box extends from the upper quartile to the lower quartile and is marked by a quadrate at the median of the data. The whiskers are determined by the upper and lower data point values not including outliers and extremes. Outliers (>1.5 box lengths) and extremes (>3 box lengths) are represented by circles and asterisks, respectively. All NRS data including outliers and extremes were included in final analysis. Differences of NRS scores at each time point between the 2 groups were analyzed using the Mann-Whitney U test. Categorical data were analyzed using the χ2 or Fisher exact test, where appropriate. The data are reported as mean ± SD, mean (range), median (first third quartiles), or percent, as appropriate. P = 0.05 was considered as significant. 
The Bonferroni correction was performed on the raw P value where applicable. For the multiple comparisons of NRS, sufentanil consumption, and frequency of PCA data at 5 different time points, the P value for the 0.05 level of significance was adjusted to 0.01. All analyses were performed using SPSS (version 12; SPSS Inc, Chicago, Illinois).\n\n\nBODY.RESULTS:\nA total of 66 patients were assessed for eligibility, 9 patients did not meet inclusion criteria, and 6 patients refused to participate; PVB failed in 3 patients. A PVB catheter was placed successfully in 48 patients, 4 patients were excluded from final analysis because of catheter obstruction (n = 1), change of operative plan (n = 2), and unplanned postoperative mechanical ventilation (n = 1), respectively. Data were analyzed on 22 patients in each group, as shown in Figure 1. The characteristics of patients, duration of surgery, intraoperative variables, and liver function test indices 24 hours before and after surgery in the paravertebral and control groups are shown in Table 1. The baseline characteristics and intraoperative variables were similar in both groups.\nFIGURE 1CONSORT patient flow diagram.\nTABLE 1Patients’ Characteristics, Duration of Surgery, Intraoperative Variables, and Some Indexes of Liver Function 24 Hours Before and After Surgery in the PVB and Control Groups\nThe time-matched cumulative sufentanil consumption and PCIA pressing frequency after surgery are shown in Figures 2 and 3. The cumulative sufentanil consumption and PCIA pressing frequency at each time point for the control group was significantly higher than the paravertebral group as indicated in Figures 2 and 3. Cumulative sufentanil consumption in the PVB group (54.3 ± 12.1 μg) at 24 postoperative hours was reduced by 20.2% compared with the control group (68.1 ± 9.9 μg), P < 0.001.\nFIGURE 2Cumulative sufentanil consumption in the first 24 postoperative hours. Data are presented as mean and SD. Differences of cumulative sufentanil consumption at each time point between the 2 groups were compared with repeated-measures ANOVA followed by Student t test. The Bonferroni correction was performed on the raw P value. Patients in the paravertebral group consumed less sufentanil than did patients in the control group at each time point.\nFIGURE 3Cumulative frequency of PCIA in the first 24 postoperative hours. Data were presented as mean and SD. Differences of cumulative frequency of PCIA at each time point between the 2 groups were compared with repeated-measures ANOVA followed by Student t test. The Bonferroni correction was performed on the raw P value. The cumulative frequency of PCIA at each time point in the control group was more compared with the paravertebral group.\nPain scores within 24 postoperative hours at rest and with coughing are shown in Figures 4 and 5, respectively. The pain intensity at rest and with coughing reported with the NRS score was significantly lower in the paravertebral group as compared with the control group for each time point. The greatest difference in NRS score was found at 1 hour postoperatively in which the median (first third quartiles) NRS score on coughing was 1 (1–2) in the PVB group and 3 (3–4) in the control group (P < 0.001).\nFIGURE 4Pain scores at rest in the first 24 postoperative hours. The NRS data are displayed in box-and-whisker plots. Median (quadrate in the box), interquartile range (box), and range not including outliers (error bars) are shown. 
Outliers (>1.5 box lengths) and extremes (>3 box lengths) are represented by circles and asterisks, respectively. Differences of NRS scores at each time point between the 2 groups were analyzed using the Mann-Whitney U test. The Bonferroni correction was performed on the raw P value. The NRS scores at rest at each time point in the control group were higher compared with the paravertebral group. NRS (0–10), where 0 indicates no pain, and 10 indicates the most severe pain.\nFIGURE 5Pain scores on coughing in the first 24 postoperative hours. The NRS data are displayed in box-and-whisker plots. Median (quadrate in the box), interquartile range (box), and range not including outliers (error bars) are shown. Outliers (>1.5 box lengths) and extremes (>3 box lengths) are represented by circles and asterisks, respectively. Differences of NRS scores at each time point between the 2 groups were analyzed using the Mann-Whitney U test. The Bonferroni correction was performed on the raw P value. The NRS scores on coughing at each time point in the control group were higher compared with the paravertebral group. NRS (0–10), where 0 indicates no pain, and 10 indicates the most severe pain.\nThe rescue analgesia, complications, satisfaction scores, and length of postoperative hospital stay after surgery of patients in both groups are shown in Table 2. The incidence of vomiting, bloating, and excessive sedation and the length of hospital stay were comparable between the groups. The patient satisfaction score was 8.1 ± 1.0 in the paravertebral group and 6.7 ± 0.6 in the control group (P < 0.001). One patient in the control group was documented to have an episode of respiratory rate below 8 breaths/min. The PCIA background dose was reduced to 1 μg/h sufentanil, and the respiratory rate returned to an acceptable level.\nTABLE 2Rescue Analgesia, Complications, Satisfaction, and Length of Hospital Stay After Surgery of Patients in the PVB and Control Groups\nLoss of pinprick sensation involved a mean of 4.6 (range, 3–7) dermatomes with upper and lower limits of T4 and T11 in the PVB group. The sensory loss to pinprick was 4.8 (range, 3–8) dermatomes, with upper and lower limits of T4 and T11 in the control group. There was no significant difference in extent of somatic block between the 2 groups following the 15-mL lidocaine testing injection (P = 0.618).\n\nBODY.DISCUSSION:\nIn this prospective, randomized, double-blind, placebo-controlled study, a continuous unilateral thoracic PVB, following bolus initiation, produced an opioid-sparing effect within the first 24 hours after right-lobe hepatectomy. Patients receiving continuous thoracic PVB reported significantly lower pain scores than did those with sufentanil PCIA alone. Our results are consistent with previous findings showing that thoracic PVB reduced the severity of postoperative pain or opioid consumption following both minor surgeries such as thoracoscopic surgery,8 breast cancer surgery,9 and major surgeries such as thoracotomy.10\nParavertebral nerve block has been successfully used for pain management for cholecystectomy,11–13 radiofrequency ablation of a metastatic liver lesion,14 percutaneous transhepatic biliary drainage,15 and right-lobe hepatectomy.4 Richardson et al16 have demonstrated that thoracic PVB can abolish the intercostal somatosensory-evoked potentials. 
Liver capsule injury visceral pain was managed successfully with a continuous infusion of local anesthetic at the right T10 level through a paravertebral catheter,17 whereas common bile duct dilation–associated pain, mediated parasympathetically through the vagus, could not be alleviated by PVB.18 The liver and its capsule are innervated by sympathetic fibers arising from the thoracic sympathetic chain from the T7 to T11 level, and parasympathetic nerves derive from the vagus nerve.19 The right PVB-induced pain reduction following hepatectomy with right-sided subcostal incision may be due to blockade of somatic and sympathetic pain fibers originated from T5 to T11.19 The vagal nerve and contralateral sympathetic fibers are not targeted by unilateral PVB, so total visceral anesthesia may not be achievable. This may be considered a drawback of a unilateral PVB used for pain management of hepatobiliary surgeries. Bilateral PVB technique has been successfully used in thoracic, abdominal, and pelvic regions.20 But bilateral PVB using landmark techniques was found to double the likelihood of inadvertent vascular puncture and to cause an 8-fold increase in pleural puncture and pneumothorax.21 Recent modifications to PVB technique by ultrasound guidance may improve the safety of this technique.\nThe extent of dermatomal spread of local anesthetics in the paravertebral space is variable and characterized by inconsistent somatic and sympathetic blocks. The mean extent of the somatic block was 4.6 and 4.8 dermatomes in the PVB group and control group, respectively, at 15 minutes after injection of 1% lidocaine 15 mL in the present study. Injections in a multilevel fashion may increase the extent of somatic and sympathetic block but would expose patients to additional risks related to punctures even under the ultrasound guidance. Further studies may be needed to test the clinical safety and efficacy of PVB at multiple levels for pain management after major abdominal surgeries.\n22 Ultrasound-guided continuous thoracic PVB has been used for outpatient acute pain management of multilevel unilateral rib fractures.23 However, an unacceptably high misplacement rate of paravertebral catheters has been reported by using a landmark technique recently.24 Catheter displacement may be prevented by the introduction of coiled catheters into clinical practice.25\nBecause of ethical issues and safety consideration, we have not enrolled patients with impaired coagulation functions that contraindicated epidural analgesia. Common concerns with the PVB are the potential risk for pneumothorax and vascular or neural injury. Paravertebral nerve blocks are technically easy to learn with a high success rate of 89.9% to 93.9% and a relatively low rate of over all complications (5%) and serious complications such as pneumothorax (0.5%).21,26 A recent meta-analysis including 18 randomized controlled trials demonstrated that PVBs, compared with epidural analgesia, are associated with less urinary retention, postoperative nausea and vomiting, and hypotension.27 These results were consistent with previous meta-analysis performed by Davies et al28 in 2006. In addition, the rates of failed technique were also lower in the PVB group than those in the epidural analgesia group.27 Paravertebral nerve block may offer an attractive alternative to the epidural technique for patients with local sepsis, coagulation disorders, preexisting neurological disorders, and abnormal thoracic vertebral anatomy. 
A comparison between the standard epidural analgesia and ultrasound-guided PVB would be meaningful and warrants further studies.\nPatients in the PVB group had longer length of stay, and this may be due to differences in the American Society of Anesthesiologists–Physical Status (ASA-PS) class, age, and blood loss. Although these differences between groups were not statistically significant, together they may have accounted for longer hospitalization.\nThis study has several limitations. Although a difference in the primary end point was shown, we were unable to demonstrate a difference in secondary outcome measures because of the small sample size. According to previous studies, we arbitrarily defined block success as loss of pinprick sensation in 3 or more ipsilateral vertebral dermatomes.6 After liver resection, the possibility of extending the analgesic benefits of paravertebral analgesia beyond 24 hours with or without a more concentrated solution of local anesthetic warrants further investigation. Further larger-scale studies are required to evaluate the effect of paravertebral block for right-lobe hepatectomy on clinically important outcomes including ropivacaine plasma levels, the incidence of postoperative complications, and chronic pain syndrome. Because sufentanil and ropivacaine are metabolized hepatically, elevated plasma levels can be seen after liver surgery.29,30 For safety concerns and in the absence of measurement of plasma levels of these medications, we limited the PVB infusion to 24 hours. This makes it more difficult to distinguish the effect of the initial ropivacaine PVB bolus from the continuous infusion on reducing pain scores and opioid consumption. It has been reported that single-shot PVB with 0.5% ropivacaine or 0.5% bupivacaine can produce 6- to 8-hour analgesic effects after thoracoscopic procedures.31,32 At the end of surgery, the 10-mL bolus of 0.2% ropivacaine prior to active infusion may have produced a significant opioid-sparing effect for an indeterminate number of hours out of the first 24 postoperative hours. The pain alleviating and opioid-sparing effects of PVBs in the first 24 postoperative hours demonstrated by our study may be the results of both the initiation bolus injection and the following continuous infusion of 0.2% ropivacaine.33,34 The present study aimed to test the hypothesis that a continuous PVB protocol (bolus initiation followed by continuous infusion of 0.2% ropivacaine) can reduce opioid consumption during the first 24 hours after right-lobe hepatectomy. We are unable to separate the effect of the initial bolus from the continuous infusion without having avoided the bolus, or running the infusion long enough to be certain there is no residual effect from the initial bolus. Obviously, keeping PVB catheter after single-dose PVB allows feasible rescue bolus injection or continuous infusion of local anesthetics when needed. Comparing the analgesic effects of bolus PVB, continuous PVB without bolus initiation, and bolus initiation followed by continuous infusion is not the primary interests of the present work. This issue clearly warrants further studies. Compared with patients from Western countries, the patients in our study have smaller body size, younger age, and lower ASA-PS scores. 
This unique demographic profile may impair the generalizability of our results, and further studies are needed to confirm the reproducibility of our technique in more overweight and/or more debilitated patients.\nIn conclusion, this is the first prospective randomized, double-blind, placebo-controlled study evaluating the analgesic efficacy of continuous thoracic paravertebral analgesia in patients undergoing right-lobe hepatectomy. Although thoracic epidural analgesia and interpleural analgesia after hepatectomy have been reported, the use of paravertebral analgesia for hepatectomy patients has rarely been reported.4,35 Our data demonstrated that continuous thoracic paravertebral block, following bolus initiation, is an effective procedure to improve pain treatment after right-lobe hepatectomy and could be incorporated into multimodal analgesic regimens for right-lobe hepatectomy with a right-sided subcostal incision.\n\n**Question:** Compared to saline at the same scheme of administration what was the result of 10-mL bolus of 0.2% ropivacaine before emergence, followed by a continuous infusion of 6 mL/h for 24 hours (PVB group) on The cumulative sufentanil consumption?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
13828
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nContinuous Right Thoracic Paravertebral Block Following Bolus Initiation Reduced Postoperative Pain After Right-Lobe HepatectomyA Randomized, Double-Blind, Placebo-Controlled Trial\n\n\n\nABSTRACT.BACKGROUND AND OBJECTIVES:\nWe hypothesized that continuous right thoracic paravertebral block, following bolus initiation, decreases opioid consumption after right-lobe hepatectomy in patients receiving patient-controlled intravenous analgesia with sufentanil.\n\nABSTRACT.METHODS:\nPatients undergoing right-lobe hepatectomy with a right thoracic paravertebral catheter placed at T7 30 minutes before surgery were randomly assigned to receive through this catheter either a 10-mL bolus of 0.2% ropivacaine before emergence, followed by a continuous infusion of 6 mL/h for 24 hours (PVB group), or saline at the same scheme of administration (control group). All patients were started on patient-controlled intravenous analgesia with sufentanil in the postanesthesia care unit. The primary outcome measure was total sufentanil consumption during the first 24 postoperative hours. P = 0.05 was considered as significant. For the multiple comparisons of data at 5 different time points, the P value for the 0.05 level of significance was adjusted to 0.01.\n\nABSTRACT.RESULTS:\nSixty-six patients were assessed for eligibility, and a PVB catheter was successfully placed for 48 patients. Data were analyzed on 22 patients in group PVB and 22 patients in the control group. The cumulative sufentanil consumption in the PVB group (54.3 ± 12.1 μg) at 24 postoperative hours was more than 20% less than that of the control group (68.1 ± 9.9 μg) (P < 0.001). 
There was also a significant difference in pain scores (numerical rating scale) between groups, where the PVB group had lower scores than did the control group at rest and with coughing for the first 24 hours (P < 0.001).\n\nABSTRACT.CONCLUSIONS:\nContinuous right thoracic paravertebral block, following bolus initiation, has an opioid-sparing effect on sufentanil patient-controlled intravenous analgesia for right-lobe hepatectomy patients and reduces numerical rating scale pain scores at rest and with coughing in the first 24 postoperative hours.\n\nEpidural analgesia has commonly been recommended following major upper abdominal surgery in the past,1 but this recommendation is controversial.2 Hemostatic deficiencies after hepatectomy may increase the bleeding risk associated with the use of thoracic epidural analgesia and delay epidural catheter removal.3 Epidural analgesia has been independently associated with increased risk of blood transfusion and administration of significantly greater volumes of intravenous fluid following liver resection.3 Balancing the advantages of optimal analgesia against the risks of current techniques requires additional clinical evidence in patients undergoing hepatectomy.\nParavertebral nerve block (PVB) was first described by Sellheim of Leipzig in 1905 as an alternative to central neural blocks for obstetrics, particularly cesarean delivery. Paravertebral nerve blocks can provide high-quality analgesia for patients undergoing many types of surgery in the thoracic, abdominal, and pelvic regions and also for those suffering from trauma pain and chronic pain. The effect of continuous intraoperative and postoperative paravertebral block after right-lobe hepatectomy has been described in 2 patients, but the study did not include a control group.4 In this prospective, double-blind, randomized, placebo-controlled trial, we tested the hypothesis that continuous thoracic paravertebral block, following bolus initiation, reduces opioid consumption during the first 24 hours after right-lobe hepatectomy in patients receiving patient-controlled intravenous analgesia (PCIA) with sufentanil. The present study is reported according to the CONSORT (Consolidated Standards of Reporting Trials) statement.\nBODY.METHODS:\nBODY.PATIENT RECRUITMENT:\nThis is a prospective, randomized, subject- and assessor-blinded, parallel-group, and placebo-controlled trial comparing a continuous right thoracic paravertebral block at T7 versus control group (saline) on postoperative opioid consumption after right-lobe hepatectomy for hepatocellular carcinoma. According to our local standards, the indications for all liver resections include tumor size of 5 to 10 cm; no intrahepatic or distant metastasis; no tumor thrombus in the portal vein, hepatic vein, vena cava, or bile duct; no invasion of the diaphragm or surrounding tissues; no rupture or bleeding of the tumor; indocyanine green retention rate at 15 minutes less than 15%; a remnant liver volume/standard liver volume ratio of greater than 50% in patients with liver cirrhosis and greater than 35% in patients without liver cirrhosis; Child-Pugh class A or B liver function, normal bilirubin concentration; and no previous liver resection. After receiving approval from the local institute’s ethics committee, registration with Clinical Trials (NCT01691937), and written informed consent from the patients, the study was conducted from September 2012 to February 2013 at Tongji Hospital in Wuhan, China. 
Patients were excluded if any of the following applied: a known allergy to the drugs being used, coagulopathy, on anticoagulants, analgesic intake, history of substance abuse, participation in the investigation of another experimental agent, inability to properly describe postoperative pain to investigators (eg, language barrier, neuropsychiatric disorder), or body mass index of 30 kg/m2 or greater or 15 kg/m2 or less. All patients were informed about the postoperative analgesia routine on the day before surgery and were instructed on how to express pain intensity with use of the numerical rating scale (NRS), where 0 indicates no pain, and 10 indicates the most severe pain. The same surgical team performed all surgical procedures through a subcostal incision.\n\nBODY.RANDOMIZATION:\nSimple randomization was done by independent research staff using opaque sealed envelopes, 24 for each group, indicating group assignment and describing the anesthetic protocol. A right thoracic paravertebral catheter was placed at the T7 level 30 minutes before surgery by an experienced senior consultant (W.M.). After confirmation of a successful paravertebral block, according to predefined criteria, patients were randomly assigned to the continuous PVB group or control group by research staff, and the PVB catheter was connected to an infusion pump containing either ropivacaine or normal saline accordingly. The infusion pumps are indistinguishable in appearance, and therefore the patients, the intraoperative caregiver (B.N.), the research staff assessing postoperative outcomes (H.C.), the acute pain service team, and the surgical team were blinded to group assignment. Unmasking did not occur until statistical analysis was complete.\n\nBODY.ANESTHETIC TECHNIQUE:\nA preoperative pain score at rest was recorded during the preoperative visit. No preoperative medications were administered. Upon arrival in the operating room, standard monitoring, including electrocardiography, noninvasive blood pressure, and pulse oximetry, was established. Surface electrodes for the Narcotrend Monitor (version 4.0; MonitorTechnik, Bad Bramstedt, Germany) were applied to the patient’s forehead, and the Narcotrend index was monitored. After placement of a peripheral venous catheter, a lactated Ringer’s infusion was started.\nBefore anesthesia, all patients had continuous right thoracic paravertebral block at T7 performed under ultrasound guidance with a SonoSite M-Turbo transportable ultrasound device and a 38-mm linear 6- to 13-MHz ultrasound transducer (SonoSite M-Turbo; SonoSite Inc, Bothell, Washington) in the left lateral position via an in-plane needle insertion approach. Intravenous fentanyl and midazolam were titrated for patient comfort. After standard skin disinfection, the skin and periosteum of the rib were anesthetized with 1% lidocaine. The transducer was positioned in a transverse and partial oblique position to the vertebral column, parallel to the rib. At a point about 6 cm lateral to the midline, a prefilled 18-gauge, 10-cm Tuohy needle (PlexoLong Nanoline; Pajunk Inc, Geisingen, Germany) was inserted and oriented into the T7 paravertebral space between the internal intercostal membrane and the pleura under the real-time ultrasound guidance as described by Renes et al.5 After injection of 5 mL of 0.9% saline, an open-tip, single-orifice, styletted catheter (PlexoLong Nanoline; Pajunk Inc) was advanced 3 to 5 cm medially into the paravertebral space. 
After gentle and repeated aspiration, 1% lidocaine 15 mL was administered via the catheter over a period of at least 60 seconds. Fifteen minutes after block, sensory assessments were performed with a pinprick test, and the block level was recorded. The PVB was considered successful when subjects experienced a decreased sensation to pinprick in 3 or more adjacent dermatomes corresponding to the site of injection.6 Subjects with a successful catheter placement and nerve block onset per protocol were retained in the study. If the desired block was not achieved, we offered the patient the choice of reinsertion of the PVB catheter or to be excluded from the study. The properly located catheters were connected to a programmable portable, electronic infusion pump immediately before surgery. A nurse who was not otherwise involved in the study prepared and released the infusion pump containing blinded study medication with either 0.2% ropivacaine for the PVB group or 0.9% saline for the control group. In order to avoid inadvertent unmasking of allocation, the extent of sensory block was not tested during the postoperative period.\nIn both groups, the patient was then turned supine, and general anesthesia was induced with fentanyl 3 to 4 μg/kg and propofol 1.5 to 2 mg/kg. Tracheal intubation was facilitated with 0.6 to 1 mg/kg rocuronium. The patients were mechanically ventilated with oxygen-air mixture maintaining the end-tidal CO2 at 4.5 to 5.0 kPa. Anesthesia was maintained by remifentanil and sevoflurane to keep the Narcotrend index reading between 40 and 60, and the systolic arterial pressure and heart rate within ±20% of baseline values during the procedure. Atracurium was administered for muscle relaxation as needed. The patient’s nasopharyngeal temperature was maintained between 36°C and 37.5°C by WarmTouch.\nAll patients received a standard right-lobe hepatectomy for hepatocellular carcinoma by a J-shaped right subcostal incision. The J-shaped incision consisted of a right subcostal incision with a mediocranial extension to the xiphoid process. The right lateral extension was variable but comprised transection of the oblique abdominal musculature.7 Twenty minutes before the end of surgery, all patients received 1 to 2 μg/kg fentanyl titration and tropisetron 2 mg intravenously. At the same time, the PVB group had a 10-mL bolus of plain 0.2% ropivacaine via the paravertebral catheter by infusion pump, followed by an infusion of 0.2% ropivacaine at 6 ml/h. The same volume of 0.9% saline was injected and infused in the control group by infusion pump. At the end of surgery, the patient’s trachea was extubated when response to verbal commands, spontaneous respiratory rate exceeding 12 breaths/min, and end-tidal carbon dioxide partial pressure less than 45 mm Hg were observed. They were then transferred to the postanesthesia care unit (PACU).\n\nBODY.POSTOPERATIVE CARE AND PAIN ASSESSMENT:\nUpon PACU arrival, rest pain was evaluated by nurses using NRS (0–10) every 10 minutes. All patients received intravenous sufentanil (2–5 μg) titration at 10-minute intervals for NRS scores greater than 3 in PACU. Once an NRS score of 3 or less had been achieved, spontaneously or after a titration of sufentanil, patients were connected to a patient-controlled analgesia device (BCDB-200; BCM, Shanghai, China) set to deliver 1 μg sufentanil as an intravenous bolus with a 10-minute lockout interval and a continuous background infusion of 1 to 2 μg/h and a maximal permitted dosage of sufentanil 8 μg/h. 
An acute pain service team was responsible for maintenance of the PCIA pump according to local standard. Transition from PACU to surgical ward was considered safe when the patient had achieved a modified Aldrete score of greater than over equal to 9 for at least 10 minutes and physiological function to near preanesthetic levels.\nPain at rest and on coughing was recorded for each patient using NRS (0–10) at 1, 4, 8, 16, and 24 hours after PACU discharge by the research staff (H.C.). Rescue opioid with intravenous tramadol was provided by acute pain service team when needed. Sedation was recorded using Ramsay Sedation Scale. Excessive sedation was defined as a Ramsay Sedation Scale value of 5 or 6 requiring the administration of naloxone. In the presence of nausea or vomiting, tropisetron 2 mg (maximum dose, 5 mg/d) intravenously was given and repeated if nausea persisted. The data of cumulative dose of sufentanil given postoperatively (titration and patient-controlled analgesia during the 24-hour observation period) and PCIA pressing frequency were recorded. Postoperative nausea and vomiting events during the first 24 hours were also recorded. After 24 hours, patients were asked to rate their overall satisfaction using an NRS, where 0 represents totally unsatisfied, and 10 represents completely satisfied. The infusion of ropivacaine in the PVB group and saline in the control group was stopped 24 hours after the surgery, and the catheter was removed. The PCIA continued until 48 hours after the surgery in both groups.\n\nBODY.STATISTICAL ANALYSIS:\nThe primary outcome measure was total sufentanil consumption in the first 24 postoperative hours. Previously unpublished data including 132 patients from April 2012 to July 2012 provided by the acute pain service team in our institution showed a mean and SD of cumulative sufentanil consumption over the first 24 hours after right-lobe hepatectomy to be about 70 and 10 μg, respectively. In order to have 95% power to detect a 20% reduction of sufentanil consumption at an α level of 0.01, we needed 20 patients per study group. To account for any patient dropouts or missing data, we planned to enroll 24 patients per study group. Descriptive statistics were computed for all study variables. Normally distributed data were analyzed using 2-tailed Student t test. Repeated-measures analysis of variance (ANOVA) was used to compare measurements over time (cumulative sufentanil consumption and PCIA pressing frequency). If there was a statistical difference (P < 0.05) between the 2 groups by repeated-measures ANOVA, Student t test was used to compare the data at each time point. The non–normal distributed NRS data are displayed in box-and-whisker plots. The box extends from the upper quartile to the lower quartile and is marked by a quadrate at the median of the data. The whiskers are determined by the upper and lower data point values not including outliers and extremes. Outliers (>1.5 box lengths) and extremes (>3 box lengths) are represented by circles and asterisks, respectively. All NRS data including outliers and extremes were included in final analysis. Differences of NRS scores at each time point between the 2 groups were analyzed using the Mann-Whitney U test. Categorical data were analyzed using the χ2 or Fisher exact test, where appropriate. The data are reported as mean ± SD, mean (range), median (first third quartiles), or percent, as appropriate. P = 0.05 was considered as significant. 
The Bonferroni correction was performed on the raw P value where applicable. For the multiple comparisons of NRS, sufentanil consumption, and frequency of PCA data at 5 different time points, the P value for the 0.05 level of significance was adjusted to 0.01. All analyses were performed using SPSS (version 12; SPSS Inc, Chicago, Illinois).\n\n\nBODY.RESULTS:\nA total of 66 patients were assessed for eligibility, 9 patients did not meet inclusion criteria, and 6 patients refused to participate; PVB failed in 3 patients. A PVB catheter was placed successfully in 48 patients, 4 patients were excluded from final analysis because of catheter obstruction (n = 1), change of operative plan (n = 2), and unplanned postoperative mechanical ventilation (n = 1), respectively. Data were analyzed on 22 patients in each group, as shown in Figure 1. The characteristics of patients, duration of surgery, intraoperative variables, and liver function test indices 24 hours before and after surgery in the paravertebral and control groups are shown in Table 1. The baseline characteristics and intraoperative variables were similar in both groups.\nFIGURE 1CONSORT patient flow diagram.\nTABLE 1Patients’ Characteristics, Duration of Surgery, Intraoperative Variables, and Some Indexes of Liver Function 24 Hours Before and After Surgery in the PVB and Control Groups\nThe time-matched cumulative sufentanil consumption and PCIA pressing frequency after surgery are shown in Figures 2 and 3. The cumulative sufentanil consumption and PCIA pressing frequency at each time point for the control group was significantly higher than the paravertebral group as indicated in Figures 2 and 3. Cumulative sufentanil consumption in the PVB group (54.3 ± 12.1 μg) at 24 postoperative hours was reduced by 20.2% compared with the control group (68.1 ± 9.9 μg), P < 0.001.\nFIGURE 2Cumulative sufentanil consumption in the first 24 postoperative hours. Data are presented as mean and SD. Differences of cumulative sufentanil consumption at each time point between the 2 groups were compared with repeated-measures ANOVA followed by Student t test. The Bonferroni correction was performed on the raw P value. Patients in the paravertebral group consumed less sufentanil than did patients in the control group at each time point.\nFIGURE 3Cumulative frequency of PCIA in the first 24 postoperative hours. Data were presented as mean and SD. Differences of cumulative frequency of PCIA at each time point between the 2 groups were compared with repeated-measures ANOVA followed by Student t test. The Bonferroni correction was performed on the raw P value. The cumulative frequency of PCIA at each time point in the control group was more compared with the paravertebral group.\nPain scores within 24 postoperative hours at rest and with coughing are shown in Figures 4 and 5, respectively. The pain intensity at rest and with coughing reported with the NRS score was significantly lower in the paravertebral group as compared with the control group for each time point. The greatest difference in NRS score was found at 1 hour postoperatively in which the median (first third quartiles) NRS score on coughing was 1 (1–2) in the PVB group and 3 (3–4) in the control group (P < 0.001).\nFIGURE 4Pain scores at rest in the first 24 postoperative hours. The NRS data are displayed in box-and-whisker plots. Median (quadrate in the box), interquartile range (box), and range not including outliers (error bars) are shown. 
Outliers (>1.5 box lengths) and extremes (>3 box lengths) are represented by circles and asterisks, respectively. Differences of NRS scores at each time point between the 2 groups were analyzed using the Mann-Whitney U test. The Bonferroni correction was performed on the raw P value. The NRS scores at rest at each time point in the control group were higher compared with the paravertebral group. NRS (0–10), where 0 indicates no pain, and 10 indicates the most severe pain.\nFIGURE 5Pain scores on coughing in the first 24 postoperative hours. The NRS data are displayed in box-and-whisker plots. Median (quadrate in the box), interquartile range (box), and range not including outliers (error bars) are shown. Outliers (>1.5 box lengths) and extremes (>3 box lengths) are represented by circles and asterisks, respectively. Differences of NRS scores at each time point between the 2 groups were analyzed using the Mann-Whitney U test. The Bonferroni correction was performed on the raw P value. The NRS scores on coughing at each time point in the control group were higher compared with the paravertebral group. NRS (0–10), where 0 indicates no pain, and 10 indicates the most severe pain.\nThe rescue analgesia, complications, satisfaction scores, and length of postoperative hospital stay after surgery of patients in both groups are shown in Table 2. The incidence of vomiting, bloating, and excessive sedation and the length of hospital stay were comparable between the groups. The patient satisfaction score was 8.1 ± 1.0 in the paravertebral group and 6.7 ± 0.6 in the control group (P < 0.001). One patient in the control group was documented to have an episode of respiratory rate below 8 breaths/min. The PCIA background dose was reduced to 1 μg/h sufentanil, and the respiratory rate returned to an acceptable level.\nTABLE 2Rescue Analgesia, Complications, Satisfaction, and Length of Hospital Stay After Surgery of Patients in the PVB and Control Groups\nLoss of pinprick sensation involved a mean of 4.6 (range, 3–7) dermatomes with upper and lower limits of T4 and T11 in the PVB group. The sensory loss to pinprick was 4.8 (range, 3–8) dermatomes, with upper and lower limits of T4 and T11 in the control group. There was no significant difference in extent of somatic block between the 2 groups following the 15-mL lidocaine testing injection (P = 0.618).\n\nBODY.DISCUSSION:\nIn this prospective, randomized, double-blind, placebo-controlled study, a continuous unilateral thoracic PVB, following bolus initiation, produced an opioid-sparing effect within the first 24 hours after right-lobe hepatectomy. Patients receiving continuous thoracic PVB reported significantly lower pain scores than did those with sufentanil PCIA alone. Our results are consistent with previous findings showing that thoracic PVB reduced the severity of postoperative pain or opioid consumption following both minor surgeries such as thoracoscopic surgery,8 breast cancer surgery,9 and major surgeries such as thoracotomy.10\nParavertebral nerve block has been successfully used for pain management for cholecystectomy,11–13 radiofrequency ablation of a metastatic liver lesion,14 percutaneous transhepatic biliary drainage,15 and right-lobe hepatectomy.4 Richardson et al16 have demonstrated that thoracic PVB can abolish the intercostal somatosensory-evoked potentials. 
Liver capsule injury visceral pain was managed successfully with a continuous infusion of local anesthetic at the right T10 level through a paravertebral catheter,17 whereas common bile duct dilation–associated pain, mediated parasympathetically through the vagus, could not be alleviated by PVB.18 The liver and its capsule are innervated by sympathetic fibers arising from the thoracic sympathetic chain from the T7 to T11 level, and parasympathetic nerves derive from the vagus nerve.19 The right PVB-induced pain reduction following hepatectomy with right-sided subcostal incision may be due to blockade of somatic and sympathetic pain fibers originated from T5 to T11.19 The vagal nerve and contralateral sympathetic fibers are not targeted by unilateral PVB, so total visceral anesthesia may not be achievable. This may be considered a drawback of a unilateral PVB used for pain management of hepatobiliary surgeries. Bilateral PVB technique has been successfully used in thoracic, abdominal, and pelvic regions.20 But bilateral PVB using landmark techniques was found to double the likelihood of inadvertent vascular puncture and to cause an 8-fold increase in pleural puncture and pneumothorax.21 Recent modifications to PVB technique by ultrasound guidance may improve the safety of this technique.\nThe extent of dermatomal spread of local anesthetics in the paravertebral space is variable and characterized by inconsistent somatic and sympathetic blocks. The mean extent of the somatic block was 4.6 and 4.8 dermatomes in the PVB group and control group, respectively, at 15 minutes after injection of 1% lidocaine 15 mL in the present study. Injections in a multilevel fashion may increase the extent of somatic and sympathetic block but would expose patients to additional risks related to punctures even under the ultrasound guidance. Further studies may be needed to test the clinical safety and efficacy of PVB at multiple levels for pain management after major abdominal surgeries.\n22 Ultrasound-guided continuous thoracic PVB has been used for outpatient acute pain management of multilevel unilateral rib fractures.23 However, an unacceptably high misplacement rate of paravertebral catheters has been reported by using a landmark technique recently.24 Catheter displacement may be prevented by the introduction of coiled catheters into clinical practice.25\nBecause of ethical issues and safety consideration, we have not enrolled patients with impaired coagulation functions that contraindicated epidural analgesia. Common concerns with the PVB are the potential risk for pneumothorax and vascular or neural injury. Paravertebral nerve blocks are technically easy to learn with a high success rate of 89.9% to 93.9% and a relatively low rate of over all complications (5%) and serious complications such as pneumothorax (0.5%).21,26 A recent meta-analysis including 18 randomized controlled trials demonstrated that PVBs, compared with epidural analgesia, are associated with less urinary retention, postoperative nausea and vomiting, and hypotension.27 These results were consistent with previous meta-analysis performed by Davies et al28 in 2006. In addition, the rates of failed technique were also lower in the PVB group than those in the epidural analgesia group.27 Paravertebral nerve block may offer an attractive alternative to the epidural technique for patients with local sepsis, coagulation disorders, preexisting neurological disorders, and abnormal thoracic vertebral anatomy. 
A comparison between the standard epidural analgesia and ultrasound-guided PVB would be meaningful and warrants further studies.\nPatients in the PVB group had longer length of stay, and this may be due to differences in the American Society of Anesthesiologists–Physical Status (ASA-PS) class, age, and blood loss. Although these differences between groups were not statistically significant, together they may have accounted for longer hospitalization.\nThis study has several limitations. Although a difference in the primary end point was shown, we were unable to demonstrate a difference in secondary outcome measures because of the small sample size. According to previous studies, we arbitrarily defined block success as loss of pinprick sensation in 3 or more ipsilateral vertebral dermatomes.6 After liver resection, the possibility of extending the analgesic benefits of paravertebral analgesia beyond 24 hours with or without a more concentrated solution of local anesthetic warrants further investigation. Further larger-scale studies are required to evaluate the effect of paravertebral block for right-lobe hepatectomy on clinically important outcomes including ropivacaine plasma levels, the incidence of postoperative complications, and chronic pain syndrome. Because sufentanil and ropivacaine are metabolized hepatically, elevated plasma levels can be seen after liver surgery.29,30 For safety concerns and in the absence of measurement of plasma levels of these medications, we limited the PVB infusion to 24 hours. This makes it more difficult to distinguish the effect of the initial ropivacaine PVB bolus from the continuous infusion on reducing pain scores and opioid consumption. It has been reported that single-shot PVB with 0.5% ropivacaine or 0.5% bupivacaine can produce 6- to 8-hour analgesic effects after thoracoscopic procedures.31,32 At the end of surgery, the 10-mL bolus of 0.2% ropivacaine prior to active infusion may have produced a significant opioid-sparing effect for an indeterminate number of hours out of the first 24 postoperative hours. The pain alleviating and opioid-sparing effects of PVBs in the first 24 postoperative hours demonstrated by our study may be the results of both the initiation bolus injection and the following continuous infusion of 0.2% ropivacaine.33,34 The present study aimed to test the hypothesis that a continuous PVB protocol (bolus initiation followed by continuous infusion of 0.2% ropivacaine) can reduce opioid consumption during the first 24 hours after right-lobe hepatectomy. We are unable to separate the effect of the initial bolus from the continuous infusion without having avoided the bolus, or running the infusion long enough to be certain there is no residual effect from the initial bolus. Obviously, keeping PVB catheter after single-dose PVB allows feasible rescue bolus injection or continuous infusion of local anesthetics when needed. Comparing the analgesic effects of bolus PVB, continuous PVB without bolus initiation, and bolus initiation followed by continuous infusion is not the primary interests of the present work. This issue clearly warrants further studies. Compared with patients from Western countries, the patients in our study have smaller body size, younger age, and lower ASA-PS scores. 
This unique demographic profile may impair the generalizability of our results, and further studies are needed to confirm the reproducibility of our technique in more overweight and/or more debilitated patients.\nIn conclusion, this is the first prospective randomized, double-blind, placebo-controlled study evaluating the analgesic efficacy of continuous thoracic paravertebral analgesia in patients undergoing right-lobe hepatectomy. Although thoracic epidural analgesia and interpleural analgesia after hepatectomy have been reported, the use of paravertebral analgesia for hepatectomy patients has rarely been reported.4,35 Our data demonstrated that continuous thoracic paravertebral block, following bolus initiation, is an effective procedure to improve pain treatment after right-lobe hepatectomy and could be incorporated into multimodal analgesic regimens for right-lobe hepatectomy with a right-sided subcostal incision.\n\n**Question:** Compared to saline at the same scheme of administration what was the result of 10-mL bolus of 0.2% ropivacaine before emergence, followed by a continuous infusion of 6 mL/h for 24 hours (PVB group) on pain scores (numerical rating scale)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decrease
|
13829
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nMen perform comparably to women in a perspective taking task after administration of intranasal oxytocin but not after placebo\n\n\n\nOxytocin (OT) is thought to play an important role in human interpersonal information processing and behavior. By inference, OT should facilitate empathic responding, i.e., the ability to feel for others and to take their perspective. In two independent double-blind, placebo-controlled between-subjects studies, we assessed the effect of intranasally administered OT on affective empathy and perspective taking, whilst also examining potential sex differences (e.g., women being more empathic than men). In study 1, we provided 96 participants (48 men) with an empathy scenario and recorded self-reports of empathic reactions to the scenario, while in study 2, a sample of 120 individuals (60 men) performed a computerized implicit perspective taking task. Whilst results from Study 1 showed no influence of OT on affective empathy, we found in Study 2 that OT exerted an effect on perspective taking ability in men. More specifically, men responded faster than women in the placebo group but they responded as slowly as women in the OT group. We conjecture that men in the OT group adopted a social perspective taking strategy, such as did women in both groups, but not men in the placebo group. On the basis of results across both studies, we suggest that self-report measures (such as used in Study 1) might be less sensitive to OT effects than more implicit measures of empathy such as that used in Study 2. If these assumptions are confirmed, one could infer that OT effects on empathic responses are more pronounced in men than women, and that any such effect is best studied using more implicit measures of empathy rather than explicit self-report measures.\nBODY.INTRODUCTION:\nOxytocin (OT) is a highly conserved neuropeptide and an accumulation of its receptors are found in the amygdala (Loup et al., 1991). The amygdala is a structure that is part of the limbic system, associated with social behavior and emotion processing (Phelps and LeDoux, 2005), or more broadly with “relevance detection” (Sander et al., 2003). OT is involved in the regulation of the hypothalamic-pituitary-adrenal (HPA) axis, and thus affects processes such as birth and breast-feeding in females and sexual mating, attachment and bonding in both sexes (see Carmichael et al., 1987, 1994; Carter, 1992; Altemus et al., 1995; Meston et al., 2004; Vignozzi et al., 2008). OT is also proposed to have anxiolytic effects (Uvnäs-Moberg, 1997; Heinrichs et al., 2003). 
Thus, OT acts centrally and peripherally as a central neurotransmitter/neuromodulator and a peripheral hormone in both males and females (Carter, 1998; MacDonald and MacDonald, 2010).\nIn line with its anatomical and functional properties, OT is involved in human interpersonal information processing and behavior such as in enhancing prosocial judgments and behavior. For example, OT as compared to placebo administration (1) yielded higher trust in others (Kosfeld et al., 2005), (2) increased perceived attractiveness and trustworthiness of unfamiliar faces (Theodoridou et al., 2009), (3) increased charitable donations (Barraza et al., 2011), (4) promoted positive inferences about others' mental states (Domes et al., 2007), and (5) facilitated the identification of emotions regardless of valence (Lischke et al., 2012). Pointing to OT influences in clinical populations, individuals with autism benefited from OT administration by showing enhanced affective speech comprehension (Hollander et al., 2007), “mind-reading” (Guastella et al., 2010), and processing of social signals and social feedback (Andari et al., 2010). Likewise, in patients suffering from psychotic symptoms, OT administration improved performance in theory of mind tasks and perceptions of trustworthiness (Pedersen et al., 2011).\nGiven OT's role in interpersonal information processing and behavior, it can be assumed that it also plays a role in empathy, that is, the ability to understand another's emotional perspective and to be personally affected by it in a way that mirrors the feelings of the individual (Eisenberg and Miller, 1987). Empathy is crucial to successful interpersonal skills and relations (Miller and Eisenberg, 1988; Batson, 1991; Eisenberg et al., 2002). According to most models, empathy consists of at least two components (Gladstein, 1983; Mahrer et al., 1994; Kerem et al., 2001). The first component accounts for the cognitive effort involved in considering another's viewpoint (i.e., perspective taking), and the second concerns vicarious emotional affective empathy (Davis, 1983; Hoffman, 2000; Blair, 2005), herein referred to as “affective empathy.” Both are required for normal “empathic ability” (Davis, 1983; Duan and Hill, 1996; Cialdini et al., 1997) and are shown to be related (Thakkar et al., 2009; Mohr et al., 2010; Thakkar and Park, 2010; Gronholm et al., 2012) and modulated by individual difference variables and personal experiences (Mohr et al., 2010; Cooper and Mohr, 2012).\nTo date, examination of OT's role in empathy has been associated with two main pitfalls. Firstly, most studies have used self-report questionnaires, which lack accuracy and are prone to socially desirable responding Tierney and McCabe, 2001; Kämpfe et al., 2009; Gerdes et al., 2010; Taras et al., 2010. Secondly, research findings have been mixed, painting a rather unclear picture (Zak et al., 2007; Singer et al., 2008; Bartz et al., 2010; Hurlemann et al., 2010). On the one hand, studies show that OT administration enhances self-reported emotional (but not cognitive) empathy (Hurlemann et al., 2010) as measured by the Multifaceted Empathy Test (MET; Dziobek et al., 2008) as well as empathic accuracy (Bartz et al., 2010), that is, the ability to accurately rate others' feelings when they narrate emotional events, in particular in listeners that are not socially proficient. 
Also, OT as compared to placebo administration enhanced perspective taking ability and generosity toward others in an economic game; i.e., more money was transferred to partners after having imagined their perspective and considered their reaction to an offer (Zak et al., 2007). On the other hand, OT versus placebo administration exerted a null effect on emotional empathic responses to a romantic partner's pain, i.e., on self-reported unpleasantness ratings when considering the partner's experience of painful hand stimulation (Singer et al., 2008). These inconsistent findings may either be due to OT and empathy being unrelated, or problems with the self-report measurement of empathy. We here consider these possibilities.\nIn two independent studies, we used comparable double-blind placebo-controlled between-subject designs to assess healthy individuals' empathy as a function of nasal OT administration. In study 1, we provided participants with a vignette in which a person's unfortunate plight was described. Participants rated their empathic feelings toward the individual (see e.g., Coke et al., 1978; Batson et al., 1989; Mikulincer et al., 2001); thereby directly linking the self-reported empathic response to an individual's plight. In study 2, we used a more implicit strategy by assessing reaction times for perspective taking in a computerized task. In this task, participants see back-facing and front-facing human figures sequentially on the computer screen and have to match the own perspective with the one of the figure (e.g., Mohr et al., 2010; Thakkar and Park, 2010; Gardner et al., 2012). Matching the own body position with that of a front-facing figure is cognitively more challenging than matching it with a back-facing figure as reflected in enhanced reaction times (see Figure 1A). This task has been used to examine various questions on cognitive functioning such as those underpinning different forms of mental rotation (Ratcliff, 1979; Zacks et al., 1999), cognitive correlates of out-of-body experiences (Blanke et al., 2005; Easton et al., 2009; Braithwaite et al., 2011) and schizotypy (Mohr et al., 2006; Easton et al., 2009), to evaluate learning (Bailey et al., 2007), and spatial compatibility effects (Gardner and Potts, 2011). Empirical evidence showed that at least part of task performance variance is modulated by empathy. For instance, in this 3rd person perspective taking task (3PP-task) increasing self-reported empathy scores are negatively correlated with response speed (Thakkar et al., 2009) in women but also positively correlated with greater accuracy (Thakkar and Park, 2010) and reaction times (Mohr et al., 2010), in particular for individuals reportedly using a social rather than a spatial perspective taking strategy (Gronholm et al., 2012). Thus, if OT enhances empathy, irrespective of the assessment format chosen, we would expect that increased OT availability enhances individuals' empathy in both studies, leading to higher empathic concern ratings in study 1 and potentially faster reaction times in study 2. Yet, if explicit, self-report measures of empathy bias desirable responding, the effect of OT might not be observed in study 1, with the 3PP task in study 2 producing more pertinent results, at least statistically.\nFigure 1(A) Examples of figures used in the perspective taking task. From left to right: The first two frames illustrate front- and back- facing female figures, while the remaining two frames depict front- and back- facing male figures. 
(B) Demonstration of the perspective taking task procedure: The cross is presented first, followed by the coloured dot, followed by the figure.\nIn addition to these hypotheses, we considered the role of participant sex. Some studies of OT effects support an enhancing role of OT for interpersonal behavioral responses in both sexes (for evidence of absence of sex-dimorphic effects of OT see Ditzen et al., 2009; Theodoridou et al., 2009, 2011; Alvares et al., 2010; Rockliff et al., 2011), but others yield direct evidence for potential sex-specific effects of this neuropeptide Fischer-Shofty et al., 2013; Theodoridou et al., 2013; for a recent review on the role of sex in individual responses to OT see MacDonald, 2012). It is worth noting that studies that have reported effects of OT specifically on empathy and/or perspective taking either recruited only men (Zak et al., 2007; Bartz et al., 2010) or found that OT selectively affected men (Hurlemann et al., 2010). In light of the above inconsistent evidence (Ditzen et al., 2009; Hurlemann et al., 2010), we considered the possibility that OT might exert sex-specific effects on empathy and perspective taking and tested an equal number of women and men.\n\nBODY.MATERIALS AND METHODS:\nBODY.PROCEDURE COMMON TO BOTH STUDIES:\nIn both studies, we conducted two sessions: a baseline session performed by participants at home and a laboratory session for which participants came to the University. Before participation in either session we obtained written informed consent from each participant. We only tested participants who met our inclusion criteria: being a fluent English speaker, not having consumed any medication, or having any other medical reason why they should not receive OT. In the case of female participants, they could not be pregnant, or if post-birth, should not be breastfeeding. The research protocol in both studies was approved by the Faculty of Science Human Research Ethics Committee at the University of Bristol. We recruited participants through poster advertisement in and around university buildings and sent emails to various departments and posted on the university's jobs website. In the baseline session, participants provided demographic information (e.g., age, sex) and filled in self-report questionnaires such as the Major Depression Inventory (MDI; Bech, 1997); results from these measures can be found in Theodoridou et al. (2009) and Theodoridou et al. (2013). Participants sent the completed questionnaires to the experimenter.\nIn the laboratory session at the local university, approximately one week later, participants were tested individually. They were instructed to abstain from alcohol, caffeine, and nicotine for 24 h before testing and from food and drink (except water) for 2 h before testing. When they arrived in the laboratory, each participant signed an informed consent form. The session lasted up to 2 h, with the actual task battery being assessed in the first 60 min after the waiting period, and the remaining time being used to guard for potential side effects. At the very beginning, participants were told that they would first receive a small dose of OT or a near identical looking and smelling placebo before being tested in various tasks. Information was also given about possible side effects associated with OT administration and participants were informed that they had the right to withdraw from the study at any time. 
In a double-blind procedure, participants were randomly assigned to self-administer a small intranasal dose of either 24 IU OT (Syntocinon Spray, Novartis, 3 puffs per nostril, each puff containing 4 IU OT), or placebo (containing the same ingredients, but OT, to the OT nasal spray). After a waiting period of 25–30 min, participants completed the task battery including the empathy vignette task (Study 1) and the 3PP-task (Study 2). Tasks were presented in two blocks, randomized in order. In study 1, the empathy scenario task was administered either 35 or 55 min after drug administration. In study 2, the 3PP-task was completed either 35 or 60 min after drug administration (see Theodoridou et al., 2009, 2011, 2013 for additional results from these studies). This time window is likely sensitive to OT effects; Gossen et al. (2012) showed that OT reaches its peak plasma level at approximately 30 min after a dose of 26 IU intranasal OT. Also note that the present protocol has led to established methods and findings in our laboratory before (see Theodoridou et al., 2009, 2011, 2013).\nIn the laboratory sessions, we also assessed current mood, wakefulness, and calmness with the short form of the Multidimensional Mood State Questionnaire (Steyer et al., 1997). This 6-point scale consists of 15 items with answers ranging from “definitely not” to “extremely.” In Study 1, this measure was completed once, whereas in Study 2, it was completed twice, immediately before drug administration and immediately before testing. After testing, participants were instructed to guess the substance they had received1. Finally, participants were debriefed and offered either a monetary reward of £15 or experimental credit (or chocolate in a few instances).\n\nBODY.STUDY 1:\nBODY.PARTICIPANTS:\nOf the 96 participants (mainly students, mean age: 21.4 years, age range: 18–40 years), 51 (25 males) received OT, and 45 received placebo (23 males). As reported in our previous study (Theodoridou et al., 2009), any drug effect, sex effect, or interaction cannot be explained by participants' current affect, wakefulness or calmness. Likewise, independent samples t-tests showed no significant differences between drug groups in depression and trait anxiety (see Theodoridou et al., 2011 for details).\n\nBODY.EMPATHY TASK:\nEmotional reactions to another person's plight were assessed using a procedure similar to that employed previously (e.g., Coke et al., 1978; Batson et al., 1989; Mikulincer et al., 2001). More precisely, the experimenter read out a brief story about the plight of a university student who had recently lost her parents in a car accident. The full story read as follows: “Anna is 21 years old studying, on a full-time basis at the University of Bristol. A month ago her parents and older sister got killed in a car accident. At the moment she is desperately trying to take care of her surviving younger brother and sister while trying to finish her last year of BSc studies. If she does not complete her degree, she will not be able to earn enough money to support her brother and sister and will have to put them up for adoption. What is more, Anna has no relatives that can help her out.” Immediately afterwards, the experimenter read out 10 adjectives (taken from Batson et al., 1989; see below), each of which participants verbally rated on a 7-point visual analog scale according to how they felt while listening to the story (1—not at all felt, 7—very strongly felt). 
Participants were asked to bear in mind that the student was a mere acquaintance to them. We stressed this point because we wished to examine the effect of OT on prosocial behavior toward non-intimate others to avoid ceiling effects due to emotional closeness to the main character of the story. A digital voice recorder (Olympus; VN-2100PC) was used to record responses in this task.\nHalf of the adjectives tap on empathic concern (i.e., other-oriented emotional empathy at the plight of others): sympathetic, soft-hearted, compassionate, tender, moved, and the other half tap on feelings of personal distress (i.e., self-oriented emotional reactions at the plight of others): alarmed, grieved, distressed, upset, and disturbed. We calculated mean scores for empathic concern responses and personal distress responses, separately (range of scores 1–7 with higher scores reflecting greater empathic concern and personal distress, respectively) to account for the possibility that these two dimensions are differently influenced by OT and/or sex. For instance, OT might increase empathic concern for a person in need and/or attenuate feelings of personal distress (Batson et al., 1987). OT might act anxiolytically (Carter et al., 2001; Heinrichs et al., 2001, 2003; Cardoso et al., 2012; de Oliveira et al., 2012; for reviews see Meyer-Lindenberg et al., 2011; Striepens et al., 2011; MacDonald and Feifel, 2012) decreasing discomfort and concern for one's own self.\n\n\nBODY.STUDY 2:\nBODY.PARTICIPANTS:\nOf the 120 participants (mainly students, mean age: 22.4 years, age range: 18–44 years), 60 (30 males) received OT, and 60 received placebo (30 males). As in study 1 (Theodoridou et al., 2009), we observed no influence of drug and/or sex on participants' current affect, wakefulness, or calmness. In more detail, following Domes et al.'s (2007) procedure, we calculated difference scores on the different mood measures affect, wakefulness and calmness for time 1 (pre drug administration) and time 2 (30 min post drug administration) by subtracting Mood at time 2 from Mood at Time 1. Thus, positive values reflect better mood at time 1 and negative values reflect better mood at time 2. We performed a multivariate ANOVA (MANOVA) on these difference scores with drug (OT, placebo) and participant sex (male, female) as between-subjects variables. There was no effect of drug on change in affect, F(1, 116) = 0.02, p = 0.90, wakefulness, F(1, 116) = 0.04, p = 0.85, and calmness, F(1, 116) = 0.06, p = 0.80. Similarly, no effect of participant sex was found on change in affect, F(1, 116) = 0.10, p = 0.75, change in wakefulness, F(1, 116) = 0.02, p = 0.90, and change in calmness, F(1, 116) = 1.71, p = 0.19, and no significant drug × participant sex interactions (all ps > 0.1).\n\nBODY.PERSPECTIVE TAKING TASK:\nPicture preparation. To make figures more realistic, we here refrained from using schematic drawings used before (Mohr et al., 2010, in press; Cooper and Mohr, 2012) and took photographs of an adult man and an adult woman of approximately the same height instead, both dressed in jeans and black T-shirts (see also Thakkar et al., 2009; Thakkar and Park, 2010). The photographs showed these adult models standing upright with the arms slightly outstretched to the right and left (see Figure 1A). Each of them was photographed in this same position from the front and the back. For each picture taken, the models were always wearing a black glove on the one hand and a brown glove on the other hand. 
The side of the black glove was once on the right and once on the left for both front-facing and back-facing positions. This counterbalancing resulted in eight possible photographs that were all of the same size (237 × 239 pixels), cropped and set against a white background (see Figure 1A for representative examples). For single trials, a centrally placed fixation cross appeared first for 800 ms followed by a centrally presented dot (diameter = 1.5 cm) for 1000 ms that was black in half of the trials and light brown in the remaining trials. After the disappearance of the dot, one of the pictures was presented centrally (visual angle ≈ 5° width × 6.1° height) until a response was provided, that is, the task was self-paced.\nTask procedure. First, participants received the written instruction that the following task would assess their empathic ability. The exact instruction was as follows: “This is a test of your ability to see the world from another person's perspective. Performance on this test reflects empathetic ability. Empathy is a social skill that is defined as being able to identify with, and understand what another person is perceiving, and to respond appropriately.” This instruction was based on previously used instructions (Massa et al., 2005) and was included to ensure that the task was introduced as a test of empathic abilities. Participants then received both a written and a verbal instruction to imagine being in the other person's shoes. Specifically, the instruction was: “Imagine you are in the person's body position. If the coloured circle is black, indicate which hand the black glove is on. If the coloured circle is brown, indicate which hand the brown glove is on. If the glove would be on your own left hand, press key ‘1.’ If the glove would be on your own right hand, press key ‘5.’ Please respond as quickly and accurately as possible, but always aim to take the other person's perspective first.” Following task instructions, a slide with the demonstration of the task procedure was presented for 6000 ms (see Figure 1B). Each picture was presented 10 times resulting in 80 experimental trials. The task was preceded by eight practice trials. We assessed the number of correct responses and the reaction times for correct responses.\n\n\n\nBODY.DATA ANALYSIS:\nIn Study 1, data from one participant was excluded because this person responded especially slowly (average response time: 18 s). We also excluded scores (ratings) that were two standard deviations above or below the mean (1.51% of the data). A MANOVA test was carried out on mean empathic concern ratings and mean personal distress ratings, with drug (OT, Placebo) and participant sex (male, female) as between-subjects variables. One-sample tt-tests were carried out, on the empathic concern ratings and on the personal distress ratings separately for the sexes in each drug group.\nIn Study 2, reaction times shorter than 200 ms and longer than 5000 ms were considered to be outliers and were dropped (Harris et al., 2002; Mohr et al., 2010). Incorrect trials were also discarded. The data of five participants were discarded because their error rates (ranging from 27.5% to 48.75%) were more than two standard deviations above the mean (Mean error rate = 7.84%, SD = 8.24). A mixed model ANOVA was carried out on the mean RT data, with drug (OT, placebo) and participant sex (male, female) as between-subjects variables, and target sex (male, female), position (front, back) as repeated measures. 
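As a rough illustration of the trial-level exclusion rules just described for Study 2 (reaction times outside 200–5000 ms dropped, incorrect trials discarded, participants with error rates more than two standard deviations above the mean excluded), a minimal pandas sketch is given below. The column names (subject, rt_ms, correct) and the ordering of the steps are assumptions made for illustration and are not taken from the original analysis pipeline.

```python
import pandas as pd

def clean_rt_data(trials: pd.DataFrame) -> pd.DataFrame:
    """Apply the Study 2 exclusion rules sketched in the text:
    RT window 200-5000 ms, error-rate criterion (mean + 2 SD),
    and restriction to correct trials for the RT analysis."""
    # 1. Drop reaction-time outliers (shorter than 200 ms or longer than 5000 ms).
    trials = trials[(trials["rt_ms"] >= 200) & (trials["rt_ms"] <= 5000)]

    # 2. Exclude participants whose error rate is more than 2 SD above the sample mean.
    error_rate = 1.0 - trials.groupby("subject")["correct"].mean()
    cutoff = error_rate.mean() + 2 * error_rate.std()
    kept_subjects = error_rate[error_rate <= cutoff].index
    trials = trials[trials["subject"].isin(kept_subjects)]

    # 3. Keep only correct trials; per-condition mean RTs would then feed the mixed ANOVA.
    return trials[trials["correct"] == 1]
```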
No statistical analyses were performed on accuracy data as the average error rate (%) after removal of the five outlying cases was very low (Total Mean = 6.59%; OT Mean: 6.68%; Placebo Mean: 6.49%). Pairwise post-hoc comparisons were based on Newman-Keuls tests. In both studies, we performed univariate ANOVA tests to examine potential age differences between the two drug groups and the two sex groups. The alpha level for all the statistical test results reported henceforth was set to 5% (0.05).\n\nBODY.RESULTS:\nBODY.STUDY 1: OT AND RATINGS OF ADJECTIVES SUBSEQUENT TO AN EMPATHY SCENARIO:\nBODY.PARTICIPANTS:\nThe ANOVA on age showed no significant main effects [drug: F(1, 92) = 0.33, p = 0.57, participant sex: F(1, 92) = 0.003, p = 0.96] and no significant interaction, F(1, 92) = 0.01, p = 0.91.\n\nBODY.ADJECTIVE RATINGS:\nThe MANOVA showed that OT (vs. placebo) had no effect on empathic concern, F(1, 90) = 0.16, p = 0.69, or personal distress ratings, F(1, 90) = 0, p = 0.99. A significant main effect of participant sex was found for empathic concern ratings, F(1, 90) = 4.18, p = 0.04, with females self-reporting greater empathic concern (M = 5.30, SD = 0.99) than males (M = 4.93, SD = 0.77). No effect of sex was found on personal distress scores, F(1, 90) = 1.91, p = 0.17. We did not observe any significant interactions [largest interaction effect: F(1, 90) = 1.41, p = 0.24]2.\nWhen comparing the two adjective rating scores against chance level (3.5), the one-sample t-tests showed that the mean empathic concern rating (M = 5.12; SD = 0.90) was significantly higher than chance level, t(93) = 17.38, p < 0.001, as was the mean personal distress rating (M = 3.83; SD = 1.27), t(93) = 2.52, p = 0.01. The same conclusion could be drawn when performing the same comparisons for the two sexes in each drug group, separately (see Table 1 for detailed results). The opposite conclusion could be drawn when performing the same comparisons for mean personal distress ratings, i.e., the mean adjective ratings scores were not different from chance level, apart from higher scores in female participants in the placebo group (see Table 1).\nTable 1. Mean empathic concern ratings and their difference from chance level (3.5) for the two sexes in each drug group.\n| Drug | Sex | Empathic concern M (SD) | t (p-value) | Personal distress M (SD) | t (p-value) |\n| --- | --- | --- | --- | --- | --- |\n| OT | M | 4.93 (0.64) | 10.93 (<0.001) | 3.80 (1.01) | 1.47 (0.15) |\n| OT | F | 5.23 (0.89) | 9.82 (<0.001) | 3.86 (1.46) | 1.24 (0.23) |\n| P | M | 4.93 (0.91) | 7.39 (<0.001) | 3.49 (1.18) | −0.04 (0.97) |\n| P | F | 5.39 (1.10) | 8.03 (<0.001) | 4.16 (1.35) | 2.30 (0.03) |\nNotes: standard deviations are in parentheses; OT, oxytocin; P, placebo; M, Male; F, Female; OT Male df = 23; P Male df = 21; OT Female df = 25; P Female df = 21.\n\nBODY.STUDY 2: OT AND PERSPECTIVE TAKING:\nBODY.PARTICIPANTS:\nThe ANOVA on age showed no significant main effects [drug: F(1, 116) = 0.87, p = 0.35, participant sex: F(1, 116) = 1.11, p = 0.29] and no significant interaction, F(1, 116) = 0.02, p = 0.89.\n\nBODY.REACTION TIME ANALYSIS FOR THE 3PP-TASK:\nThe ANOVA on mean reaction times for correct decisions showed a significant main effect of figure position, F(1, 111) = 124.03, p < 0.001, with front-facing figures eliciting longer reaction times (M = 1122.53, SD = 346.86) than back-facing figures (M = 913.93, SD = 255.64). In addition, the main effect of figure's sex, F(1, 111) = 31.68, p < 0.001, indicated that participants responded faster to female figures (M = 994.69, SD = 271.36) than to male figures (M = 1040.09, SD = 304.05). 
The main effects of drug, F(1, 111) = 1.8, p = 0.18, and participant sex, F(1, 111) = 1.35, p = 0.25, were both not significant3.\nWe found significant 2-way interactions between figure position and participant sex, F(1, 111) = 9.03, p = 0.003, and figure position and target sex, F(1, 111) = 10.3, p = 0.004, and a significant 3-way interaction between drug, target sex, and participant sex, F(1, 111) = 5.7, p = 0.02. Post-hoc comparisons on these significant interactions showed for the figure position by participant sex interaction that male participants had faster reaction times than female participants for the front-facing condition (p = 0.04) (see Figure 2A). The same comparison for the back-facing condition was not significant (p = 0.92). Moreover, reaction times were significantly faster in the back- than front-facing condition for both male (p < 0.001) and female participants (p < 0.001). Post-hoc comparisons on the figure position by target sex interaction showed that female figures were responded to faster than male figures in the back-facing condition (p < 0.001) and front-facing condition (p = 0.04) (see Figure 2B). In addition, reaction times for back-facing figures were significantly faster than for front-facing ones for both male and female targets (all ps < 0.001).\nFigure 2Mean reaction times (A) of male and female participants to target figures in the back- and front-facing position, (B) to male and female target figures in the back- and front-facing position, (C) of male and female participants to target figures in the OT group and (D) the placebo group. Vertical bars indicate standard errors.\nFinally, to further elucidate the significant drug, target sex, and participant sex interaction, we performed 2-way ANOVAs, for the OT and the placebo group separately, with figure sex as a repeated measure and sex as a between subject-factor. The ANOVA for the OT group only showed the significant main effect on figure sex, F(1, 56) = 11.88, p = 0.001, i.e., that reaction times were faster for female than male figures. The main effect of sex, F(1, 56) = 0.14, p = 0.71, and the interaction, F(1, 56) = 1.96, p = 0.17, were both not significant (Figure 2C). The ANOVA for the placebo group, however, showed in addition to the main effect of figure sex, F(1, 54) = 24.58, p < 0.001 (male figures > female figures), a main effect of participant sex, F(1, 54) = 6.34, p = 0.01. Male participants responded faster than female participants (Figure 2D). Finally, the interaction was significant, F(1, 54) = 4.25, p = 0.04. Post-hoc comparisons showed that reaction times were faster for female than male figures in female (p < 0.001) and male (p = 0.045) participants. Moreover, male as compared to female participants showed a significant reaction time advantage for male figures (p = 0.02) that was only a statistical trend for female figures (p = 0.08).\n\n\n\nBODY.DISCUSSION:\nIn two independent double-blind, placebo-controlled between-subject design studies we investigated whether the consumption of a single dose of OT affected the ability to empathize with another individual's unfortunate plight (Study 1) and the ability to mentally take the perspective of another person (Study 2). In study 1, we provided participants with a scenario designed to elicit empathy [modified version used by Coke et al. (1978); Batson et al. (1989)] and assessed two levels of self-reported empathy (empathic concern and personal distress) by having participants rate empathy relevant adjectives. 
In study 2, we recorded reaction times in a computerized 3PP-task to assess empathy more implicitly (e.g., Thakkar et al., 2009; Mohr et al., 2010, in press) and to also avoid the response biases often associated with self-report measures (Kämpfe et al., 2009). Given that OT has been previously shown to enhance interpersonal processes and behavior (for reviews see MacDonald and MacDonald, 2010; Bartz et al., 2011; Guastella and MacLeod, 2012); including empathy (Hurlemann et al., 2010) and related phenomena such as “mind-reading” (Domes et al., 2007; Guastella et al., 2010), we hypothesized that OT as compared to placebo administration would result in enhanced empathic responses (study 1) and speeded perspective taking (study 2). We also considered that this enhancement might be more prominent in study 2, because response biases associated with self-report measures might override the OT effect in study 1. Our main findings were that (1) the adjective ratings in study 1 (empathy scenario) did not reveal higher empathic responses in the OT as compared to the placebo group, and (2) sex differences in the 3PP-task (male over female participant advantage, in particular for male figures) observed in the placebo group were absent in the OT group. Before discussing these results and additional task findings, we would like to mention that neither study 1 nor study 2 found effects of OT administration on affect, wakefulness and calmness, indicating that the above reported behavioral effects of OT could not be attributed to or mediated by changes in mood. These findings are in agreement with previous studies that have reported null effects of OT on mood (Kosfeld et al., 2005; Domes et al., 2007; Fischer-Shofty et al., 2010; Lischke et al., 2011).\nBODY.OT AND EMPATHY RATINGS WHEN EXPOSED TO THE PLIGHT OF AN UNKNOWN PERSON:\nIn study 1, the OT and placebo group provided comparable adjective ratings after hearing about an unknown person's plight. By inference the two groups reported comparable empathic reactions and personal distress when rating these adjectives. Contrary to previous findings supporting the role of OT in prosocial and affiliative behavior (Kosfeld et al., 2005; Theodoridou et al., 2009; Barraza et al., 2011), our results suggest that OT may not play a role in the experience of affective empathy. However, given that to be empathetic is a socially desirable trait and that participants spoke aloud their answers to the experimenter, we can assume that our methodology was prone to self-report biases (e.g., see Tierney and McCabe, 2001; Kämpfe et al., 2009; Gerdes et al., 2010; Taras et al., 2010). Our findings indicate that participants seemed inclined to provide high scores in these ratings: the average empathic concern rating was 5.12, thus, biased toward the highest possible score of seven. Our results show that this mean rating was significantly different to chance level 3.5 (i.e., mid-scale point), and that our female participants' self-reports of empathy deviated more strongly from this chance level than did male participants' self-reports. What is more, the rating of 5.12 appears high compared to values obtained in previous studies using similar (but not identical) designs, and in which seven was also the highest possible value [e.g., see Batson et al., 1988, studies 1 and 3 (3.81 and 3.88, respectively); Batson et al., 1989, study 2 (4.44 and 5.79)]. Consistent with sex differences observed by Batson et al. 
(1988, 1989), women scored higher in empathic concern than men (see also Baron-Cohen and Wheelwright, 2004; Hurlemann et al., 2010). As for personal distress ratings, the mean value here was lower than the value in a comparable study (e.g., in Batson et al., 1988, study 2). Given the overall caution with regard to self-report measures in this domain (Baron-Cohen and Wheelwright, 2004; Kämpfe et al., 2009), we suggest that empathic concern adjectives might tap onto sex stereotypes regarding emotions, i.e., women being more caring and higher in sympathy than men (Plant et al., 2000). By inference, such stereotypes may trigger stronger response biases and desirable responding than personal distress adjectives.\nImportantly, our results show no effect of drug administration on personal distress or empathic concern. Therefore, our findings fail to provide evidence for an anxiolytic and prosocial effect of OT, respectively, in the context of affective responses to an individual's plight. It is worth noting that in line with previous relevant studies (Batson et al., 1988, 1989; Mikulincer et al., 2001) we only tested affective responses to a woman's plight. Future studies should balance out for possible sex-specific effects and include a male target, although a faster reaction toward women is generally likely (see Lewin and Herlitz, 2002; Cellerino et al., 2004; Mohr et al., 2010) in line with our finding of speeded responses to female figures in study 2.\nTo further understand the influence of sex on the link between OT effects on empathic abilities we tested an equal number of women and men. Studies in which individuals were provided with audio tapes narrating the plight of a needy person showed that women reported higher levels of empathy than men for the needy person (Batson et al., 1988, studies 1 and 3), and that reports of empathy for the needy person were relatively high (Batson et al., 1988, studies 1 and 3; Batson et al., 1989, study 2). Such sex differences in self-reports of empathy have often been documented, but could reflect a female tendency toward more socially appropriate responding relative to men (Eisenberg and Miller, 1987). In our study we did indeed observe that females' self-reports of empathy deviated more strongly from chance level than did males self-reports. However, no differential effect of OT was found in men's and women's self-reported empathy.\n\nBODY.OT AND REACTION TIMES IN A COMPUTERIZED 3PP-TASK:\nIn study 2, we found that speed of response in the drug groups interacted with figure sex and participant sex. We observed that sex differences in the placebo group were absent in the OT group. More precisely, in both drug groups, we found that people responded faster to female than male figures. In the placebo group, we additionally observed that men responded faster than women, and that this sex difference was statistically significant for male figures and a statistical trend for female figures. Given that these sex differences are absent in the OT group, we infer that OT might have sex-dimorphic effects on this measure of perspective taking ability, an inference that is in line with a recent review by MacDonald (2012). The observation that male participants in the OT group responded as slowly as female participants in the OT group could further indicate that men adopted a comparable perspective taking strategy to the one used by women. 
This conjecture assumes that the women in our study, irrespective of whether they have received OT or placebo, adopt a relatively more time-consuming social perspective taking strategy when completing the 3PP-task as compared to the men in the placebo group. Evidence for this suggestion comes from research indicating that women experience greater rotational costs for front-facing figures (reflected in increased reaction times and decreased accuracy) than men when performing a 3PP-task (Mohr et al., 2010). In addition, further studies on computerized perspective taking ability indicate that social strategies might be more prevalent in women and object-based spatial strategies more prevalent in men, and that women find the 3PP-task more effortful, as reflected in lower accuracy rates (Kaiser et al., 2008). The link between faster reaction times and self-reported empathy seems to be most evident for women (Mohr et al., 2010), while no role of sex is reported in two further studies (Thakkar and Park, 2010; Gardner et al., 2012). Finally, studies using a similar (but not identical) perspective taking task to ours showed that women with high affective empathy scores are slower on perspective taking (Thakkar et al., 2009; but also see Thakkar and Park, 2010).\nIf OT fosters social perspective taking in men, we could expect in future studies that a higher than normal OT availability may facilitate men's tendency to step into another person's shoes, a fundamental component of empathy (e.g., see Kaiser et al., 2008; Thakkar et al., 2009; Gardner et al., 2012). The present conclusion is relevant to empathy and OT researchers as it suggests that OT's beneficial effects might not be general but nuanced (Bartz et al., 2011), i.e., affecting men more strongly than women, at least in the context of social perspective taking. Alternatively, such beneficial OT effects might be evident in those who are less socially proficient, which men are thought to be (see also Bartz et al., 2010).\nWe suggest that the above findings on OT effects in men are not an artifact of overall or aberrant performance in our study population, because we replicated previous behavioral findings using slightly modified versions of the current 3PP-task. Firstly, reaction times were faster for back-facing than front-facing pictures (see also e.g., Arzy et al., 2007; Mohr et al., 2010, in press; Thakkar and Park, 2010; Cooper and Mohr, 2012) indicating that participants performed mental transformations. This finding is in line with previous reports regarding the mental rotation of objects (Shepard and Metzler, 1971; Wohlschläger and Wohlschläger, 1998), body parts (Cooper and Shepard, 1975; Parsons, 1987; Bonda et al., 1995; Petit et al., 2003; Seurinck et al., 2004), and perspective taking tasks (e.g., Kaiser et al., 2008; Rilea, 2008) which report longer reaction times when the position of a stimulus (or own current body position) does not match the position of the target stimulus. Moreover, mental perspective transformations for female figures were faster than those for male figures, an advantage that was specific to back-facing figures, supporting previous observations (see Mohr et al., 2010).\n\n\nBODY.CONCLUSIONS AND POSSIBILITIES:\nTwo studies examined the differential effects of OT on empathic responses, once using self-report ratings after having heard the story of an unknown person's unfortunate plight (study 1) and once using reaction times in a computerized 3PP-task (study 2). 
The major findings were that while OT as compared to placebo administration did not enhance self-reported empathic concern toward others (study 1), it showed that a male over female advantage in the 3PP-task that was evident in the placebo group was absent in the OT group (study 2). This finding is suggestive of a potential strategy change (purportedly more social than spatial performance strategy, see rationale in more detail above) in men after consuming OT as compared to placebo. Such a facilitation of social perspective taking might already be present in women, regardless of which of the two drugs were consumed. Thus, additional OT availability might affect men but not women in the 3PP-task. This conjecture, if supported in the future, might be relevant to sub-populations low in appropriate social abilities, such as individuals with alexithymia, social anxiety disorder, and schizophrenia (Caldwell et al., 2009; Guastella et al., 2009b; Bartz et al., 2010; Feifel et al., 2010; Rubin et al., 2010; Luminet et al., 2011; Pedersen et al., 2011; Hall et al., 2012) as it has already been shown in autism spectrum disorders (Andari et al., 2010; Guastella et al., 2010).\nGiven the conjectural nature of our conclusions, future studies should verify the strategy participants employ in the 3PP-task, e.g., examine whether women are slower because they use a social perspective strategy while men (at least without pharmacologically enhanced OT availabilities) use a spatial perspective taking strategy (Gardner et al., 2012; but also see Gronholm et al., 2012). Indeed, one could also reason that OT administration hindered a spatial strategy that might have been favored by men, without having to necessarily facilitate a social strategy. While this possibility cannot be excluded, we consider our initial explanation to be more likely because men and women seemingly used different strategies in perspective taking including mental rotation (e.g., Weiss et al., 2003; Kaiser et al., 2008), and women with higher self-reported empathy were found to slow down in perspective taking, presumably because they are using a social strategy that is also more time-consuming (Thakkar et al., 2009). However, it should be noted here that other studies have shown better perspective taking to be linked to faster reaction times (e.g., Mohr et al., 2010; Thakkar and Park, 2010). Given that the above studies assessed empathy via self-report and that such approaches are problematic, we propose that future studies would benefit from using more objective measures of empathy (e.g., actual behavioral observations and/or facial mimicry) to examine whether higher empathy is associated with slower or faster reaction times.\nAnother potential explanation of the above finding is that OT administration generally slows men down. It should be noted that whilst OT administration has been shown to slow men down in contexts other than spatial processing, such as during approach-avoidance motor responses to emotional faces (Theodoridou et al., 2013) and identification of fearful faces (Di Simplicio et al., 2009), it has not been found to have a slowing effect on approach-avoidance motor responses to non-social stimuli (Theodoridou et al., 2013), early processing (i.e., detection speed) of angry, and happy faces (Guastella et al., 2009a) and recognition of emotional faces (Fischer-Shofty et al., 2010). 
Therefore, it seems unlikely that OT administration generally slows men's speed of response.\nImportantly, the sex differences and the interaction with drug group were observed in a task that was introduced as a task that assesses empathy. Future research could introduce the task as one that assesses mental rotation (see also Massa et al., 2005 for instruction effects), to test whether drug effects are influenced by such context effects. This would provide us with some further insight regarding task expectancies, and their interaction with drug treatment. In any case, we suggest that the 3PP-task was powerful at showing drug by sex interactions because such reaction time measures are difficult to manipulate/see through relative to a self-report task, thereby rendering our participants less able to guess our specific experimental hypotheses. This advantage might also explain why no drug effect or drug by sex interaction was observed in study 1, in which participants are actively asked to rate their feelings toward an unfortunate person in the presence of the experimenter. We suggest that response biases such as social desirability and stereotyping come strongly into play when using such paradigms, resulting in artificially elevated scores and commonly observed sex differences (women having higher scores than men) when using self-report measures (see for example Massa et al., 2005; Wraga et al., 2007). Furthermore, this is unlikely to be the case when using implicit measures such as the current perspective taking task (Mohr et al., 2010, in press; Thakkar and Park, 2010; Gardner et al., 2012). Thus, our findings indicate that more implicit measures such as the current 3PP-task might be better suited to assessing empathy, and that any effects of empathy-related processes whether cognitive or neurochemical (such as OT in the present case) might emerge more consistently when using measures that are less response bias prone.\nBODY.CONFLICT OF INTEREST STATEMENT:\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n**Question:** Compared to computerized implicit perspective taking task what was the result of empathy scenario and recorded self-reports of empathic reactions to the scenario on effect on empathic concern, personal distress ratings?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13831
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nCollection of vaccination data in the German National CohortFindings of a feasibility study in three study centersImpfdatenerhebung in der Nationalen KohorteErgebnisse einer Machbarkeitsstudie in drei Studienzentren\n\n\n\nABSTRACT.BACKGROUND:\nData about the vaccination status of participants are required in epidemiological cohort studies whenever infection or immunity is considered as potential exposure or outcome. Within Pretest 2 of the German National Cohort (GNC) we therefore investigated the acceptance and feasibility of extracting vaccination status from vaccination certificates provided by the participants of the study.\n\nABSTRACT.METHODS:\nThis study was conducted in three study centers (Bremen, n = 73; Hamburg, n = 200; Hannover, n = 193). In order to test if an additional reminder would prevent participants from forgetting their vaccination certificates at home persons willing to participate in Pretest 2 were randomly assigned to one of three invitation groups (IG). About one third of the participants received either no further reminder (IG1), a reminder card together with the appointment letter (IG2) or a separate reminder card 4 days before the appointment (IG3). At the study center, vaccination data were scanned or copied and entered into a database using a unique identification number. Participants were also asked to fill in a short questionnaire to assess the completeness of the provided vaccination data. Additionally, in one of the three participating study centers, general practitioners (GP) were asked to provide vaccination data from their records following respective participants’ consent. Finally, we compared the influenza data from the vaccination certificates with the influenza data obtained from participants in Pretest 2 by use of a self-administered questionnaire (ID-Screen).\n\nABSTRACT.RESULTS:\nDue to different starting dates of the study the intended reminder procedure was implemented only in Hamburg and Hannover. In Hamburg, significantly more vaccination certificates were submitted by the group which received the reminder card separately 4 days before the examination (IG3) compared to IG1 and IG2 (p = 0.04). In Hannover, in contrast, most vaccination certificates were brought by those who received the reminder card together with the appointment letter. Overall, the use of a reminder card had a positive but not significant effect as 89 % (185/209) of participants who received the reminder card submitted vaccination data versus 81 % (84/104) of participants who did not receive any reminder card (p = 0.06). Of all Pretest 2 participants in Hannover, 62 % (120/193) gave written consent for data collection by the GPs. In total, 114 practices were contacted of which 49 (43 %) sent vaccination data. 
All in all, 360 vaccination certificates with 5065 documented vaccinations were entered into a database, of which 4830 (95 %) were valid for analysis covering a period from 1946 to 2012. The comparison of influenza vaccination data from vaccination certificates to the remembered data from a self-completed questionnaire showed an agreement of data in 46 % (84/184) of cases (Kappa = 0.48). Influenza vaccinations were underreported in 4 % (7/170) of self-completed questionnaires.\n\nABSTRACT.CONCLUSION:\nThe reliable documentation of vaccinations within the context of the GNC proved to be feasible and thus recommendable at a large scale within the GNC, as participants showed high willingness and compliance in providing available vaccination documents. An additional validation by means of documents provided by physicians seems to be possible for more than a quarter of participants. In order to maximize the likelihood of participants bringing their vaccination certificates, it would be sufficient to send a reminder card together with the appointment letter.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nThe online version of this article (doi: 10.1007/s00103-014-2050-0) contains supplementary material, which is available to authorized users.\n\nFor research questions in the field of infectious diseases and immunology within epidemiologic studies, data about the vaccination status of the participants are essential. Nevertheless, evidence about immunity acquired by vaccinations at the population level is lacking [1]. On account of these insufficient data, research questions like the following have not been investigated so far: (a) vaccination coverage, specifically in adult age [2], (b) vaccine response and immunosenescence [3, 4] and (c) possible interactions between vaccinations, infections and chronic diseases [5, 6]. The German National Cohort (GNC) offers the opportunity to assess vaccination status in combination with the seroprevalences of a large population-based sample throughout Germany and to follow it up over many decades. Contrary to the original plan, the assessment of vaccinations is not part of the routine data collection of the GNC. Therefore, we conducted an add-on “Level 3” study in Pretest 2 to test the feasibility of vaccination data collection within the GNC but without interfering with the program of the GNC (for details see Ahrens et al. in this issue). Special focus was placed on (1) the acceptance and feasibility of vaccination certificate collection and means of response optimization, (2) the feasibility and benefit of additional vaccination data collection from GPs and (3) the comparison of vaccination data of different sources (vaccination certificates, data from GPs, self-reported influenza vaccinations) regarding completeness and validity.\nBODY.METHODS:\nBODY.RECRUITMENT:\nThe study was conducted at three sites (Bremen, Hamburg, Hannover). Recruitment followed the standard operating procedure (SOP) of the GNC (Ahrens et al. in this issue). Accordingly, all persons who agreed to participate in Pretest 2 received a standardized invitation letter to confirm the appointment at the study center and to provide further information on how to prepare for the assessment. In this appointment letter participants were also asked to bring their vaccination certificates on the day of the examination. To test whether an additional reminder would help participants remember to bring their vaccination certificates, the use of a colored reminder card was tested (Fig. 1). 
For this purpose all participants were randomly assigned to one of three different invitation groups (IG). Participants of IG1 received the appointment letter only and no further reminder. Participants of IG2 received a colored reminder card together with the appointment letter, and participants of IG3 were sent this separate, colored reminder card 4 days before the appointment.Fig. 1Reminder card\n\n\nBODY.PROCEDURE OF VACCINATION DATA COLLECTION:\nAt the study site, participants gave written informed consent. The vaccination data were scanned electronically (Hamburg, Hannover) or copied (Bremen) under a subject identification number. Personal data like name, address or date of birth were covered before scanning or copying the vaccination certificates. In case participants forgot to bring their vaccination certificate, they were instructed on how to send a copy of the vaccination certificate by mail. In order to assess the feasibility and benefit of additional vaccination data collection from GPs, participants in Hannover were also asked to consent to vaccination data collection from their GPs. If consent was obtained, the practices were contacted by mail. The letter included information about the Pretest 2 of the GNC, participants’ written consent and a documentation sheet for given vaccinations during the last 10 years which could be used in case that data could not be printed out. Finally, all retrieved vaccination data were entered into a database including date of vaccination, name of vaccine, batch number and (combination of) disease(s) to be prevented. If during the data entry questions arose which could not be solved, the entry was to be confirmed as “unfinished” and the reason put into a special field for notes. These unfinished cases were checked and completed as much as possible every other day. New vaccines not yet predefined in the database were added (dynamic list of vaccines). If the date of vaccination (day and/or month) was partly missing, illegible or cut off, the following approach was chosen for coding: the missing day was recorded as 15th of month, the missing month as “June” by keeping the information of recoding in order to be able to differentiate between the real vaccination date and a fictitious date. During the training phase of the personnel (first 50 vaccination certificates) every data entry was checked for quality of data by referring to the copies of vaccination certificates. Afterwards, additional quality checks were carried out for at least 40 % of entered vaccination certificates.\n\nBODY.QUESTIONNAIRE:\nA short questionnaire was filled in by participants during their stay at the study center to assess the completeness of the provided data from vaccination certificates. Questions were as follows: if the participant ever possessed a vaccination certificate, if at the time of examination at the study center a vaccination certificate was possessed, the number of vaccination certificates, if other vaccination documents exist, if all received vaccinations are recorded in the presented vaccination certificate and which vaccinations were performed but not documented in the vaccination certificates. One question was related to the attitude towards vaccinations as persons with a critical attitude may be reluctant to provide their vaccination certificate. Every mentioned missing vaccination was coded separately, so if influenza vaccinations of several years were indicated as missing, each vaccination was coded as one missing vaccination. 
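A minimal sketch of the partial-date coding rule described above, in which a missing day is recorded as the 15th of the month and a missing month as June, while the recoding itself is flagged so that real and fictitious date parts remain distinguishable. The function and field names are illustrative assumptions, not taken from the study database.

```python
from datetime import date
from typing import Optional

def code_vaccination_date(year: int, month: Optional[int], day: Optional[int]) -> dict:
    """Code a partly missing vaccination date as described in the text:
    missing day -> 15th, missing month -> June, with imputation flags kept."""
    month_imputed = month is None
    day_imputed = day is None
    coded = date(year, 6 if month_imputed else month, 15 if day_imputed else day)
    return {"coded_date": coded,
            "month_imputed": month_imputed,
            "day_imputed": day_imputed}

# Example: the day is illegible on the certificate, month and year are readable.
print(code_vaccination_date(1987, 11, None))
# -> {'coded_date': datetime.date(1987, 11, 15), 'month_imputed': False, 'day_imputed': True}
```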
Specific vaccinations mentioned as received during childhood were coded as reported (tetanus as tetanus) otherwise, if not further specified, summarized as “vaccinations received in childhood”.\n\nBODY.ETHICS APPROVAL:\nThe study was approved by the Ethics Committee of the State Board of Physicians of the German Federal State of Lower Saxony (Ethikkommission der Ärztekammer Niedersachsen) and the federal State of Hamburg.\n\n\nBODY.RESULTS:\nBODY.PARTICIPATION:\nThe recruitment process for this feasibility study differed among the participating study centers. On the one hand, the invitation procedure consisting of three different invitation groups was implemented in Hamburg and Hannover only, due to a later starting date in Bremen. Consequently, all participants in Bremen invited to Pretest 2 received by default the reminder card together with the appointment letter. On the other hand, participation proportions for this feasibility study could be calculated for Hannover only, as in Bremen and Hamburg the consent for the vaccination data collection was integrated into the consent form of the Pretest 2 baseline assessment whereas in Hannover participants were asked for a separate consent. Recruitment data and participation proportions per invitation group are shown in Fig. 2. The participation proportions of the random samples are of special interest because the study protocol for the GNC allows random samples only; therefore, participation proportions will be discussed mainly in this respect. Overall, 400 persons were invited to participate in this feasibility study. In Hannover, 92.2 % (153/166) of Pretest 2 participants gave a separate written consent for the vaccination data collection. Again 92.2 % (141/153) of these participants brought vaccination certificates to the appointment. In Hamburg and Bremen, where participants were not explicitly asked for their willingness to participate in this feasibility study, the proportion of provided vaccination certificates was slightly lower: 80.0 % (128/160, one missing) of invited persons in Hamburg and 75.3 % (55/73) in Bremen provided vaccination data.\nFig. 2Flow chart of the recruitment and submitted vaccination certificates. RS = random sample from local registry, CS = convenience sample, consisting of volunteers, e.g. students, friends, colleagues and family members. Invitation group (IG): Appointment letter + … IG1: no further reminder; IG2: a colored reminder card together with the appointment letter; IG3: 4 days before the appointment at the study center a separate, colored reminder card (Fig. 1). *Hamburg: 1 missing case with no information about assigned invitation group, **Participation: Participants were asked for a separate consent for the vaccination data collection in Hannover only, ***Bremen: No documentation sheet about the invitation process available. 75 % of invited persons provided vaccination data the same day of examination or later.\n\nConsidering the overall effect of the reminder card in Hamburg and Hannover, 89 % (185/209) of participants who received a reminder card (regardless if IG2 or IG3) brought their vaccination certificate compared to 81 % (84/104) of those who did not receive any reminder card (IG1; p = 0.06). The approach to send a separate reminder card 4 days before the appointment was the most successful; 92 % (94/102) participants of IG3 provided vaccination data compared to 85 % (91/107) in IG2 (p = 0.06). 
Stratifications by study center indicated that in Hannover participants in IG2 provided the most vaccination certificates (95 %), whereas in Hamburg, significantly more vaccination certificates (92 %) were provided in IG3, with no differences between IG1 and IG2 (chi-square = 6.546, df = 2, p = 0.04).\n\nBODY.DATA PROVIDED BY PHYSICIANS:\nOf 193 Pretest 2 participants in Hannover, 120 (62 %) provided written consent for the data collection by general practitioners. In one case the consent was given but the participant had not been in contact with the health care system for a long time so that he could not provide any GP address. In total, 114 practices were contacted, of which 43 % (49/114) sent vaccination data. Two practices did not send any data because no vaccination was given during the past 10 years. In most cases the medical staff filled in the paper documentation sheet, with a few providing computerized print-outs.\n\nBODY.QUESTIONNAIRE:\nThe short questionnaire was completed by 94.7 % (426/450) of participants including the convenience sample. There were considerable differences between the study centers. Whereas in Hamburg all participants completed the questionnaire (200/200), this was the case for 98.9 % (175/177) of the participants in Hannover, and for only 70 % (51/73) in Bremen (data shown in supplement). Thorough data cleaning was necessary because of inconsistent responses. For example, the question whether the participant ever possessed a vaccination certificate was denied by 16 persons, of whom 5 submitted their vaccination certificate to the study personnel the same day. A problem also seemed to exist if more than one vaccination certificate existed, so that participants were not sure which one to refer to. Moreover, the high percentage of missing answers for most questions, coded as “don’t know/no answer”, suggests that these questions were difficult to remember and thus not suitable for a self-administered questionnaire.\nOf all participants, 96 % had possessed a vaccination certificate once during their life-time, with 91 % possessing one at the moment. There were no differences between random and convenience samples. In all, 37 % of responders indicated that the present vaccination certificate was the first, whereas for 55 % of responders it was the second up to fourth vaccination certificate. In addition, 67 % answered that they had brought all available vaccination data to the study center, 54 % included all vaccinations ever received (random sample). Tetanus (18 %), influenza (17 %) and vaccinations received in childhood (13 %) were indicated most often as vaccinations not included in documentation.\nThe question regarding the attitude towards vaccinations was answered by 416/426 (98 %) participants. The attitude towards vaccinations was assessed by 74 % of responders as positive. Of the 18 % who were critical, 45 % indicated fear of the side effects of vaccinations and 30 % were concerned that vaccinations might overwhelm the immune system. Among other reasons, responders took the chance to explain in more detail their critical attitude towards specific vaccinations such as influenza vaccination because of bad experiences. Only one person indicated being opposed to vaccinations in principle; for 8 % of responders, vaccinations were not an issue of concern. 
No statistical difference of attitude towards vaccination could be detected between persons who submitted their vaccination certificate compared to those who did not submit any vaccination data.\n\nBODY.CAPTURING AND AVAILABILITY OF VACCINATION DATA:\nOverall 372 vaccination certificates were electronically captured or copied the same day of examination, of which 7 % (27/372) of vaccination certificates were lost primarily because of missing identification numbers (ID) on the copy/scan. The procedure of blackening the personal data before capturing the vaccination data electronically was not followed consequently so that in about 17 % of documents this needed to be worked over subsequently. The quality of vaccination certificates was partly poor, especially the older ones (before 1990) often lacked the names of given vaccines (1700 vaccines), or the date of vaccination. In 5 % (222) of cases, the vaccination date was missing, (partly) illegible or cut off during the process of scanning/copying the vaccination data. Some of these data (121 dates) could be recorded as described above. Further problems encountered were the lack of information about vaccines as single or combined vaccination or the illegibility of documented vaccinations. The median time for data entry was 14.3 min per vaccination certificate (minimum of 0.4 min; maximum of 124.3 min). Including the data of physicians, vaccination data could be obtained for a total of 82 % (367/450) of participants or 79 % (367/466) of all eligible Pretest 2 participants. An overview of submitted vaccination data per study center (number of valid vaccination documents and vaccinations) including the sample characteristics of persons submitting vaccination data is shown in Table 1. The documented vaccinations covered a period from 1946 to 2012.\nTable 1Submitted vaccination data and sample characteristics\nHamburgHannoverBremenTotalNo. of valid documents\n158\n155 (162a)\n47\n360 (367a)\nNo. of vaccinations\n2402\n1968 (2186b)\n510\n4880 (5098b)\nNo. 
of vaccinations/person\n15.2\n13.9\n10.9\n14.1\nMean (95 % CI)\n(13.8–16.6)\n(12.5–15.3)\n(8.7–13.0)\n(13.2–15.0)\n\nSample characteristics\n\n\nn\n\n\n%\n\n\nn\n\n\n%\n\n\nn\n\n\n%\n\n\nn\n\n\n%\n\nFemale\n90\n57.0\n87\n53.7\n25\n53.2\n202\n55.0\nMale\n68\n43.0\n75\n46.3\n22\n46.8\n165\n45.0\n20–29 years\n23\n14.6\n18\n11.1\n3\n6.4\n44\n12.0\n30–39 years\n15\n9.5\n17\n10.5\n-\n-\n32\n8.7\n40–49 years\n38\n24.1\n47\n29.0\n8\n17.0\n93\n25.3\n50–59 years\n36\n22.8\n36\n22.2\n18\n38.3\n90\n24.5\n60–69 years\n46\n29.1\n44\n27.2\n18\n38.3\n108\n29.4\nBorn abroad\n16\n10.1\n15\n9.8\n7\n14.9\n38\n10.6\nFirst language German\n148\n93.7\n142\n92.8\n43\n91.5\n333\n93.0\nMarried\n59\n37.3\n88\n57.5\n23\n48.9\n170\n47.5\nSingle\n66\n41.8\n44\n28.8\n14\n29.8\n124\n34.6\nFull time employed\n75\n47.5\n73\n47.7\n28\n59.6\n176\n49.2\nMonthly household net income (median): 2250 € up to 3000 €\n83/146\n56.8\n73/141\n51.8\n19/46\n41.3\n175/333\n52.6\n\nN\n\n\n158\n\n\n162\n\n\n47\n\n\n367\n\n\nGNC German National Cohort, GP general practitioners, DEGS Studie zur Gesundheit Erwachsener in Deutschland, GEDA Gesundheit in Deutschland aktuell, CI confidence interval\n\na(…): Including data of seven participants in Hannover (4 of random and 3 of convenience sample) who provided physicians’ data only (with 33 documented vaccinations) as no vaccination certificates were available\n\nb(…): Including all vaccinations documented by physicians (n = 218)\n\n\nBODY.COMPARISON OF VACCINATION DATA PROVIDED BY PARTICIPANTS AND PHYSICIANS:\nFor 40 participants in Hannover vaccination data from vaccination certificates and physicians’ documentation were available with a total of 630 documented vaccinations covering the period from 1956 to 2013. Most frequently documented vaccinations are shown in Table 2. Single vaccinations against poliomyelitis and tetanus were significantly more often, influenza vaccinations significantly less often documented in vaccination certificates than at the physicians’ offices. To assess the quality of vaccination documentation, the data captured by vaccination certificates were compared to data provided by physicians. Using the example of tetanus, 27 persons provided tetanus vaccination information documented in vaccination certificates (125 vaccinations) as well as documented by physicians (54 vaccinations). In all, 30 % (38/125) of tetanus vaccinations documented in the vaccination certificates corresponded exactly with tetanus vaccinations documented by physicians for 20/27 persons (74 %). In these cases the date of vaccination, the kind of vaccine and the producer of the vaccine were documented identically in each document. A total of 6 % (7/125) of documented tetanus vaccinations in the vaccination certificates did not correspond to documented tetanus vaccinations by physicians for 6/27 persons (22 %). Tetanus documentation differed because either the name of the vaccine was missing, not consistent or different producers of vaccines were indicated. 
In addition, 8/54 (15 %) tetanus vaccinations documented by physicians were not documented in the vaccination certificates which accounts for at least 6 % (8/125) of tetanus vaccinations missing in the vaccination certificates.\nTable 2Frequency of vaccinations documented in vaccination certificates compared to vaccinations documented by physicians\nDocument byParticipant(1956–2012)a\nPhysician(1964–2013)a\nSingle poliomyelitis\n88\n20 %\n13\n7 %\nSingle tetanus\n68\n15 %\n14\n8 %\nTetanus (combined vaccine)\n118\n27 %\n40\n22 %\nInfluenza\n33\n7 %\n66\n36 %\nSingle hepatitis B\n22\n5 %\n8\n4 %21\nHepatitis A + B\n20\n5 %\n14\n8 %\nNo. of vaccinations\n446\n184\nBase: 40 participants with vaccination certificates and vaccination documentation from physicians\n\naThe vaccination certificates covered a period from 1956–2012, the vaccination data sent by physicians from 1964–2013\n\n\nBODY.VACCINATION STATUS:\nThe vaccination coverage of responders (last dose no longer than 10 years ago) for some preventable diseases is shown in table 3.\nTable 3Vaccination coverage according to vaccination certificates\nHamburgHannoverBremenTotaln%n%n%n% (95 % CI)Tetanus\nFemale\n74\n82\n66\n76\n21\n84\n161\n80 (59–74)\nMale\n45\n66\n49\n65\n16\n73\n110\n66 (59–74)\nTotal\n119\n75\n115\n71\n37\n79\n271\n74 (69–78)\nDiphtheria\nFemale\n69\n77\n59\n68\n20\n80\n148\n74 (67–80)\nMale\n43\n63\n46\n61\n16\n73\n105\n63 (56–71)\nTotal\n112\n71\n105\n65\n36\n77\n253\n69 (64–74)\nPolio\nFemale\n50\n56\n39\n45\n13\n52\n102\n51 (44–58)\nMale\n26\n38\n30\n40\n12\n55\n68\n41 (33–49)\nTotal\n76\n48\n69\n43\n25\n53\n170\n46 (41–51)\nHepatitis B\nFemale\n32\n36\n23\n26\n3\n12\n58\n29 (23–35)\nMale\n17\n25\n16\n21\n2\n9\n35\n21 (15–27)\nTotal\n49\n31\n39\n24\n5\n11\n93\n25 (21–30)\nHepatitis A\nFemale\n28\n32\n23\n26\n5\n20\n56\n28 (22–34)\nMale\n23\n33\n16\n21\n1\n5\n40\n24 (18–31)\nTotal\n51\n32\n39\n24\n6\n13\n96\n26 (22–31)\n\nN\n\n\n158\n\n\n\n162\n\n\n\n47\n\n\n\n367\n\n\nMeaslesa\n\nFemale\n19\n79\n12\n63\n-\n-\n31\n72 (58–86)\nMale\n12\n86\n11\n69\n2\n67\n25\n76 (60–91)\nTotal\n31\n82\n23\n66\n2\n67\n56\n74 (64–84)\n\n\nN\n\n\n38\n\n\n\n35\n\n\n\n3\n\n\n\n76\n\n\nNumber of persons who received at least one respective vaccination within the last 10 years\n\naVaccination against measles in Germany recommended since 1974. Base: all participants of the age 20–39 years in the respective study centers who submitted their vaccination certificates\n\n\nBODY.COMPARISON OF DATA FROM A SELF-COMPLETED QUESTIONNAIRE (ID-SCREEN) VERSUS VACCINATION DATA FROM VACCINATION CERTIFICATES:\nIn Pretest 2, the ID-Screen questionnaire, a self-administered questionnaire about infectious diseases, was applied in Hamburg and Hannover (see Castell et al. in this issue). Three questions about influenza vaccination were included: whether an influenza vaccination has ever been received (V1), if yes, in which frequency (V2) and the year of the last influenza vaccination (V3). In total 365 persons answered the influenza questions. Vaccination documents were available for 78 % (284/365) of these participants, which allowed assessing the agreement of self-reported versus documented influenza vaccinations. Half of all responders (184/365) indicated having ever been vaccinated against the flu (Table 4). Agreement with documentations was found for 84 (46 %) of these responders [Kappa 0.48] with up to 20 influenza vaccinations per person. 
Vaccination was underreported in 4 % (7/170) of cases where the receipt of influenza vaccination was negated but documented in the submitted vaccination records with up to 3 vaccinations per person. The “unvaccinated” responders were mostly vaccinated in 2008–2010 (7/11 vaccinations). The self-reported year of last influenza vaccination was confirmed by vaccination records in 27 % of cases (41/150). In 23 % (35/150) of cases the self-reported year did not correspond to the vaccination records because the year of the last remembered influenza vaccination was before (19 %) or after (5 %) the last influenza vaccination documented in the vaccination records.\nTable 4Comparison of influenza data from a self-completed questionnaire (ID- Screen) versus influenza vaccinations documented in vaccination records\nV1: ever vaccinatedV1: confirmedV2: every yearV3: last year confirmedn%n%n%n%Female\n103\n52.8\n53\n51.5\n29\n28.2\n26/87\n29.9\nMale\n81\n47.6\n31\n38.3\n27\n33.3\n15/63\n23.8\nTotal\n184/365\n50.4\n84\n45.7\n56\n30.4\n41/150\n27.3\n60–69 years\n73/102\n71.6\n35\n47.9\n30\n41.1\n13/63\n20.6\nHamburg\n95/200\n47.5\n39\n41.1\n31\n32.6\n17/76\n22.4\nHannovera\n\n89/165\n53.9\n45\n50.6\n25\n28.1\n24/74\n32.4\n\nV1: ever vaccinated Have you ever been vaccinated against influenza? – yes (ID-Screen questionnaire)\n\nV1, confirmed Agreement between reported influenza vaccination (V1) and documented influenza vaccinations in vaccination certificates. V2 Self-reported regularity of influenza vaccinations in ID- Screen questionnaire (every year, almost every year, less often than every year, only once)V3, last year confirmed Agreement between self-reported year of last received influenza vaccination (n = 150) and vaccination date found in vaccination certificates\n\naHannover only: Including vaccination data provided by physicians\n\n\n\nBODY.DISCUSSION:\nVaccination data collection in Pretest 2 was shown to be feasible as participants were willing to provide vaccination documents. Documentation of vaccination for participants within the GNC would therefore be possible. Considering the completeness and quality of vaccination data it must be said that although the response to this feasibility study was good and much data could be retrieved, there are several limitations to consider. The documented vaccinations covered a period from 1946 to 2012 during which more and more vaccines became available and more vaccination recommendations were given. Not every administered vaccination is documented as vaccination certificates may be forgotten the day of vaccination. Also the fact that vaccinations are administered by different authorities (public health office especially before 1970, different physicians, company physicians, hospitals) accounts for missed vaccination documentation. Some data, like vaccinations given in childhood, are more likely to be missing especially if a person owns already the second or third vaccination certificate. As the comparison of vaccination certificates with data from physicians showed, which could be carried out for about a quarter of participants in Hannover, other received vaccinations may also be missing; however, it is difficult to assess to which extent. Some vaccinations could be found to be more often documented at the physicians’ but not in the vaccination certificate like influenza vaccinations.\nAlso the comparison of self-reported influenza vaccination (ID-Screen, see Castell et al. 
in this issue) with the documented influenza data from certificates and physicians was limited by missing vaccination documentation which made it impossible to calculate the extent of overreporting. Underreporting of influenza vaccination occurred in at least 4 % of cases for whom vaccination documents were available. These problems are well-known and discussed in literature. Several studies have shown that neither self-reported vaccinations nor vaccination data based on medical records can be regarded as the gold standard for vaccination documentation [7, 8]. Miles et al. [9] concluded that vaccination cards tend to under-, self-reported vaccinations to overestimate coverage whereas a combination of both overestimated coverage in some studies and underestimated coverage in others ([9], S1565). Rolnick, who described underlying demographic factors for over- and under-reporting, recommends to improve not only vaccination documentation in medical records but also to improve the awareness of patients in order to get appropriate vaccination information ([7], S3933).\nAs this feasibility study was an add-on within Pretest 2 and was not supposed to interfere with the assessment program of the GNC nor to use much of personnel or other resources there was no possibility for a personal-assisted interview to probe further for missing vaccination data. Despite the data gaps, we found the vaccination coverage of participants to be comparable to figures of other representative long-term studies such as DEGS, where the vaccination data were obtained from vaccination cards and/ or interviews or GEDA, a telephone survey. For example 74 % of Pretest 2 participants had sufficient tetanus vaccination coverage, compared to 71 % in DEGS1 [2] or 72 % in GEDA [10].\nThe incompleteness of vaccination data extracted from vaccination certificates should be put into perspective to the most important immunological questions addressed within the GNC (introduction), for which at least the vaccinations received during childhood and thus decades ago may be less important. Given the fact that for most vaccinations (except for Hepatitis B) no biomarkers are available to prove that vaccinations were carried out and at the same time the ability of participants to remember all ever received vaccinations is limited and probably also influenced by demographic factors [7], vaccination certificates seem to be at present the best possible population-based data source.\nThe vaccination data provided by physicians were despite the limitations a useful supplement to the vaccination data of participants. Considering the time and effort required, the data collection from GPs would be feasible and useful only for specific research questions (influenza, or persons without vaccination certificates).\nAnother aspect of the feasibility of our study refers to the training of the personnel. The electronic capturing and pseudonymization of vaccination data were defined in a standard operating procedure. Nevertheless at least 3 % of all submitted vaccination certificates got lost because of missing identification number. Other problems concerned poor quality of copies, incomplete data due to copying or scanning the data and failure to blacken the personal data dependably. All of these problems can be avoided by further training of study personnel. 
Also, well-trained staff is needed for the time-consuming data entry of vaccination data into the database.\nThe use of a reminder card in addition to the standardized appointment letter in which participants were reminded to bring their vaccination certificate to the appointment had a positive effect on the proportion of provided vaccination certificates. Whether the reminder card should be sent in combination with the appointment letter or separately shortly before the visit to the study center could not be ascertained clearly, as results for Hamburg and Hannover were inconsistent. The invitation procedure, especially the delay between appointment arrangements and sending of appointment letters and reminder card, was difficult to control and might have varied among the participating study centers. Appointments were mostly made by phone, sometimes at short notice, or cancelled and rearranged on short notice, which often made it difficult to follow the routine for each invitation group. However, as the reminder card seemed to help participants think of their vaccination certificates, we suggest sending the appointment letter together with the reminder card, which is not much additional effort.\nThe short questionnaire, which was intended to assess the completeness of provided vaccination data, proved not to be suitable as a self-administered questionnaire. As the study protocol for the GNC is already extensive, a staff-assisted interview for these questions will not be feasible. The short questionnaire is therefore dispensable. Nevertheless, the question referring to the attitude towards vaccinations was useful, as no statistical difference could be detected between those participants providing vaccination certificates and those who did not, meaning that participants who provided vaccination data were not less critical than persons who did not provide any vaccination data. No bias in this regard could be detected. Reasons why vaccination certificates were not provided may be the lack of documents but also the feeling that providing the vaccination certificate would be too personal.\n\nBODY.CONCLUSION:\nWe demonstrated the feasibility of collecting vaccination data from vaccination certificates in a population-based study and recommend implementing this approach at a large scale within the German National Cohort in order to allow long-term prospective research on infection and immunity.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\n(PDF 233 kb)\n\n**Question:** Compared to no further reminder (IG1); what was the result of a separate reminder card 4 days before the appointment (IG3); reminder card together with the appointment letter (IG2); on use of a reminder card?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13832
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nDoes L-carnitine improve endothelial function in hemodialysis patients?\n\n\n\nABSTRACT.BACKGROUND:\nAtherosclerosis is the leading cause of death in hemodialysis patients. These patients are also very prone to L-carnitine deficiency due to kidney disease. In this clinical trial, we investigated the effect of oral L-carnitine on endothelial function of these patients.\n\nABSTRACT.MATERIALS ANS METHODS:\nWe studied 31 adult chronic hemodialysis patients in our center and divided them into two groups. The first group (n = 20) received 1500 mg/dialysis interval (every other day) oral L-carnitine. The control group (n = 11) received placebo for one month. Ultrasonographic measurements of flow mediated dilation and carotid intima-media thickness were performed before and after one month of L-carnitine and placebo therapy.\n\nABSTRACT.RESULTS:\nThis study showed that after one month of L-carnitine or placebo therapy there was no significant improvement in flow mediated dilation (p = 0.80 and p = 0.59, respectively) or decrease in carotid intima-media thickness (p = 0.12 and p = 0.50, respectively).\n\nABSTRACT.CONCLUSIONS:\nOur study revealed that one month of oral L-carnitine therapy did not improve endothelial function in hemodialysis patients. Long-term studies with large sample size using intravenous form and higher doses of the drug are required to clarify the questionable role of L-carnitine in hemodialysis patients.\n\nBODY.INTRODUCTION:\nAtherosclerosis advances in renal failure and develops early in the course of renal dysfunction. Therefore, the consequences of atherosclerosis especially cardiovascular events represent a major clinical problem in these patients.[1] Cardiac death accounts for approximately 40-50% of all deaths and is the leading cause of death in these patients. Death from cardiovascular causes is up to 20 times more common in uremic patients than in the general population with the risk being even higher than in patients with diabetes mellitus.[1]\nEndothelial dysfunction (ED) seems to be the underlying cause of all vascular diseases. Accumulating evidence suggests that chronic renal failure (CRF) is associated with impaired endothelial cell function. ED is recognized as a key process in acute and chronic renal failure as well as end stage renal disease (ESRD) of all causes.[2]\nED and intima-media thickness (IMT) are predictors for the development and progression of atherosclerosis.[3] The central role of endothelium in the development of vascular disease has led to identification of new relevant methods to estimate endothelial function and injury. 
Flow-mediated dilatation (FMD) is the most widely used noninvasive ultrasound method to assess endothelial function.[4]\nMeasurement of ultrasound-based FMD in the brachial artery is noninvasive and relatively repeatable and reproducible, reflects important biology, and is useful in serial studies of disease reversibility. Brachial artery endothelium-dependent dilatation is also significantly correlated with findings in the coronary circulation in the same patients.[56]\nL-carnitine is a naturally occurring hydrophilic amino acid derivative, produced endogenously in the kidneys and liver and derived from meat and dairy products in the diet. It plays an essential role in the transfer of long-chain fatty acids into the mitochondria for beta-oxidation.[7]\nL-carnitine and short-chain acylcarnitines (esters of L-carnitine), such as acetyl-L-carnitine, are excreted by the kidneys. Renal reabsorption of L-carnitine is normally very efficient. In fact, an estimated 95% is thought to be reabsorbed by the kidneys.[7]\nPatients with renal disease who undergo hemodialysis are at risk for secondary carnitine deficiency.\nL-carnitine and many of its precursors are removed from the circulation during hemodialysis. Impaired L-carnitine synthesis by the kidneys may also contribute to the potential for carnitine deficiency in patients with end-stage renal failure undergoing hemodialysis.[7]\nAs Guarnieri et al. noted, carnitine supplementation has been approved by the US Food and Drug Administration not only for the treatment, but also for the prevention of carnitine depletion in dialysis patients.[8]\nPrevious studies have shown L-carnitine and its propionate to improve endothelial responses in animal models. L-carnitine has also been found to prevent the progression of atherosclerotic lesions. Endogenous carnitine depletion and/or carnitine deficiency should thus be viewed as an additional risk factor in atherogenesis.[910]\nHowever, many of the previous studies were performed on small numbers of patients. They were not well-controlled and did not include patients with CRF, although atherosclerosis and its related complications are the leading cause of mortality in CRF.\nTo the best of our knowledge, this is the first study to evaluate the role of oral L-carnitine supplementation in improving endothelial function. For this purpose, FMD and carotid IMT in CRF patients were measured and compared with the values in a control group of patients with CRF who received placebo.\n\nBODY.MATERIALS AND METHODS:\nIn a randomized double-blind, placebo-controlled trial, patients were recruited from the maintenance HD (Hemodialysis) population at Shafa Hospital, and Javadolaemeh and Samenolaemeh centers (Kerman, Iran) in 2011. This study was approved by research and ethics board at Kerman University of Medical Sciences, Kerman, Iran. Informed written consents were also obtained from all subjects.\nThe number of cases was computed by NCSS statistical software using paired t-test. From previous studies, mean FMD changes before and after the intervention were considered as 7.5 and 6, respectively. Standard deviation (SD) was estimated as 2.5. Type I error probability (α) of 0.05, β of 20%, and power of 80% were also considered. Number of subjects in each group was thus calculated as 24 patients.\nPatients had been on hemodialysis 3 times weekly and for a period of at least 3 months. 
We excluded patients with infection, inflammation, incomplete or irregular drug or placebo intake, or arteriovenous fistula (AV) fistula for dialysis in both upper extremities. Smokers and individuals who received any new vasodilator agent during the study were also excluded. During the study period, we followed the patients for effective dialysis as scheduled and for any changes in their prescribed medication. Predialysis FMD and carotid IMT measurement by ultrasound were performed by the same operator in the L-carnitine and placebo groups before starting therapy and after one month of therapy.\nThe patients were randomized to receive L-carnitine therapy or placebo. We administered 1500 mg oral L-carnitine or placebo between dialysis sessions in 3 divided doses in the mornings and nights except for the morning of days they underwent dialysis. All adverse effects during the course of the study in both case and control groups were recorded.\nBODY.MEASUREMENT OF FLOW-MEDIATED VASODILATATION:\nUsing the guidelines suggested by Corretti et al.,[11] ultrasound assessment of endothelial-dependent FMD of brachial artery was investigated one hour before starting scheduled dialysis. Briefly, the study was performed in a temperature-controlled room (22°C) with the subjects in a resting and supine position.\nBrachial artery diameter was imaged using a 7.5-Mhz linear array transducer ultrasound system (Siemens Company). The brachial artery was imaged at a location 3-7 cm above the antecubital crease. To create a flow stimulus in the brachial artery, a sphygmomanometer cuff was placed on the forearm and the cuff was inflated 50 mmHg above systolic pressure to occlude artery flow for 5 minutes. All measurements were made at the end of diastole. The FMD was expressed as the percentage of change in post-stimulus diameter of the artery compared to the baseline diameter.\n\nBODY.MEASUREMENT OF CAROTID IMT:\nLongitudinal ultrasonographic scans of the carotid artery were obtained on the day of brachial artery reactivity assessment by FMD. The scans included the evaluation of the right common carotid artery 1 cm proximal to the carotid bulb. In each examination, the same operator used different scanning angles to identify the greatest IMT, defined as the distance between the junction of the lumen and intima and that of the media and adventitia in the far wall of the carotid artery. Finally, 3 measurements of carotid IMT were obtained from the right carotid artery and were averaged to determine the mean carotid IMT. Carotid IMT was expressed in millimeters.\n\nBODY.RANDOMIZATION AND BLINDING:\nA person, not involved in the study, assigned codes to the study groups and randomly allocated the selected participants to receive oral placebo or L-carnitine (1500 mg/day). The randomization was carried out by a procedure based on a random numeric sequence. The authors were unaware of treatment allocation.\n\nBODY.STATISTICAL METHODS:\nAll analyses were carried out using SPSS13.0 (SPSS Inc., Chicago, IL, US). Categorical variables are reported as counts (percentage) and continuous variables as means ± SD unless otherwise indicated.\nTo investigate the parameters in each group before and after therapy, t-test was used and to compare the data between different groups, analysis of variance (ANOVA) was performed. P values less than 0.05 were considered statistically significant.\n\n\nBODY.RESULTS:\nThe calculated sample size in each group was 24 patients. 
However, since this trial was interventional and there was the possibility of interrupting the course of the treatment by patients or due to medication side effects, we started the study with 60 adult chronic hemodialysis patients and divided them randomly into two equal groups of L-carnitine and placebo. Only 20 patients in the L-carnitine and 11 in the placebo group continued the medication for one month and the rest of the cases were excluded from the study. The compliance in carnitine group was higher than that in the placebo group.\nThe most common reason for discontinuing the drugs was their side effects, such as nausea and vomiting that were interestingly much more common in the placebo group (25% vs. 8%). The placebo was made exactly with the same basic ingredients and in the shape of L-carnitine tablets without including the L-carnitine component.\nAlmost all patients were under multi-drug therapy for different complications of CRF. Therefore, the second common reason of not completing the course of therapy was that they were just tired of taking too many tablets.\nFinally, the data was obtained from 20 patients in the L-carnitine and 11 patients in the placebo groups. Sex and age distribution were comparable in the L-carnitine and control groups (male/female ratios of 16/4 and 7/4, respectively; p = 0.281 and 48.4 ± 13.8 and 51.2 ± 15.0 years old, respectively; p = 0.595). The duration of hemodialysis in the L-carnitine group was insignificantly longer than the placebo group (6.18 ± 6.73 vs. 2.95 ± 2.28 years; p = 0.394).\nAntihypertensive agents, mostly angiotensin converting enzyme inhibitors, were used regularly by 60% of the patients in the L-carnitine group and 80% of the patients in the placebo group.\nAt the beginning of the study, FMD values measured in the L-carnitine and placebo groups were 8.63 ± 13.12 and 4.97 ± 3.33, respectively (p = 0.634). As Table 1 shows, FMD and carotid IMT values at the beginning and one month after the treatment were comparable, i.e. there was no statistically significant improvement in FMD or carotid IMT after 1 month of therapy. The difference between the two groups in the measured carotid IMT values at the beginning of the study was borderline (p = 0.05). Mann-Whitney test on FMD changes after one month of L-carnitine or placebo therapy showed the changes in the L-carnitine and placebo groups to be -1.019 and -0.0195, respectively (p = 0.71).\nTable 1Flow mediated dilation (FMD) and carotid intima-media thickness (IMT) changes in L-carnitine and placebo groups, before and after treatment\nDiabetics constituted 25% and 63% of the L-carnitine and placebo groups, respectively. In the beginning of the trial, hemoglobin A1c (HbA1c) was measured in diabetic patients. Considering 7% or less as normal HbA1c levels, only 41% of the patients had well-controlled diabetes mellitus. The influences of L-carnitine therapy on FMD in diabetic patients and also in nondiabetic patients were evaluated separately. The changes were not statistically significant in either diabetic patients (p = 0.625) or in nondiabetics (p = 0.678). Although at the beginning of the study, the measured FMD in the diabetic patients was lower than nondiabetics, the difference was not statistically significant (4.15 ± 3.38 vs. 6.84 ± 4.29, p = 0.108). 
In addition, there was no statistically significant difference in response after the therapy.\nLikewise, the chronicity of hemodialysis did not affect FMD changes by L-carnitine, as it was studied in patients with L-carnitine treatment who were under dialysis for less than or equal to 2 years (9.93 ± 18.25 and 6.43 ± 4.48; p = 0.508), and those with more than two years of initiating dialysis (6.09 ± 3.79 and 6.77 ± 4.58; p = 0.815).\nWhile 25% of the patients in the L-carnitine group reported a sense of well being following taking the medication, no patient in the placebo group had such a feeling.\nLoffredo et al. measured FMD in 40 normal patients with a mean age of 64.2 ± 11.2 years. They indicated the mean percentage changes of FMD as 10.34 ± 2.14.[12] We compared it with the mean percentage changes of FMD measured in CRF patients (5.57 ± 4.58) at the beginning of our study (with removing just one value that seemed to be out of the normal range) and the difference was statistically significant (p < 0.005). In other words, FMD was impaired in CRF patients compared to the normal population.\n\nBODY.DISCUSSION:\nThis study demonstrated that one month of oral L-carnitine supplementation was not associated with improvement in endothelial function as measured by FMD and carotid IMT in CRF patients on chronic hemodialysis. In fact, our data indicated that the disease had a progressive course and L-carnitine could not influence on it significantly.\nA number of studies have been conducted over the past years to assess the efficacy of supplemental L-carnitine on improving some clinical complaints of CRF patients. However, there has been no trial regarding the role of L-carnitine on the endothelial function in these patients. Considering the high prevalence of atherosclerosis and related complications in CRF patients, we conducted our study to investigate this concept.\nStasi et al.[13] and McMackin et al.[14] demonstrated that propionyl-L-Carnitine accelerated blood flow recovery and the restoration of vascular function in animal models. Loffredo et al. showed that propionyl-L-carnitine infusion was associated with increased FMD in patients with peripheral vascular disease.[12] In another study, Volek et al. reported that L-carnitine supplementation improved FMD in healthy individuals after high fat meals (which has been shown to cause impairment of vascular health by reducing FMD).[15] Silvestro et al. concluded that in patients with intermittent claudication, propionyl-L-carnitine administration provided a protective effect against deterioration of FMD and increase of soluble vascular cell adhesion molecule-1 (sVCAM-1) induced by exercise.[16]\nPrevious studies in animal models have shown that L-carnitine and its propionate improved endothelial responses by decreasing O2 production and increasing nitric oxide (NO) availability. They also suggested that L-carnitine prevents the progression of atherosclerotic lesions and endogenous carnitine depletion and/or deficiency should thus be viewed as an additional risk factor in atherogenesis.[910]\nThere has been no previous study to investigate the role of L-carnitine therapy on improvement of endothelial function in CRF patients. In this clinical trial, we found that L-carnitine did not change FMD or carotid IMT in CRF patients after one month of therapy. Some explanations can justify this finding. 
Considering the risk of accumulation of toxic metabolites of L-carnitine such as trimethylamine (TMA) in CRF patients, we prescribed 1500 mg of oral L-carnitine supplementation between dialysis sessions and only for one month. However, the bioavailability of oral L-carnitine is about 15%.[1718] Though the recommended form of L-carnitine in ESRD is the intravenous form, unfortunately it does not exist in the country of study (Iran). Therefore, the oral form and the relatively short duration of prescription of L-carnitine might have influenced on our results. Further clinical trials with intravenous form of L-carnitine should be performed to increase not only the bioavailability of the drug, but also the compliance of the patients. There will be also no risk of toxic metabolites accumulation with prescribing the intravenous form.\nOn the other hand, most CRF patients receive antihypertensive agents, especially angiotensin converting enzyme inhibitors that mediate NO-related vasodilatation and increase baseline vessel diameter which in turn leads to smaller estimates of FMD including the brachial artery FMD. This might have effects on the percentage of changes of FMD after therapy.[19]\nMoreover, since most of the time a chronic insidious disease such as diabetes mellitus leads to CRF, well-established atherosclerosis is usually encountered at the time of starting dialysis and this might have a marked influence on patients’ response to L-carnitine.\nBased on Stephens et al., carbohydrate rich foods, which induce a state of hyperinsulinemia, appear to improve carnitine transport into the skeletal muscles.[20] Bloomer et al. suggested that this may also be true as related to uptake into endothelial cells.[21] If approved, considering the fact that the most common cause of CRF in adults is diabetes mellitus, abnormal insulin secretion and resistance to insulin, may minimize the effect of L-carnitine in the patients.\nWe conclude that the role of L-carnitine in chronic hemodialysis patients is still questionable. Our clinical trial demonstrated no significant improvement in endothelial function in these patients after one month of therapy. Additional randomized controlled clinical trials with larger sample sizes and sufficient power may lead to a better understanding of the possible beneficial effects of L-carnitine on hemodialysis patients.\n\n**Question:** Compared to placebo what was the result of 1500 mg/dialysis interval (every other day) oral L-carnitine on flow mediated dilation?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13833
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nDoes L-carnitine improve endothelial function in hemodialysis patients?\n\n\n\nABSTRACT.BACKGROUND:\nAtherosclerosis is the leading cause of death in hemodialysis patients. These patients are also very prone to L-carnitine deficiency due to kidney disease. In this clinical trial, we investigated the effect of oral L-carnitine on endothelial function of these patients.\n\nABSTRACT.MATERIALS ANS METHODS:\nWe studied 31 adult chronic hemodialysis patients in our center and divided them into two groups. The first group (n = 20) received 1500 mg/dialysis interval (every other day) oral L-carnitine. The control group (n = 11) received placebo for one month. Ultrasonographic measurements of flow mediated dilation and carotid intima-media thickness were performed before and after one month of L-carnitine and placebo therapy.\n\nABSTRACT.RESULTS:\nThis study showed that after one month of L-carnitine or placebo therapy there was no significant improvement in flow mediated dilation (p = 0.80 and p = 0.59, respectively) or decrease in carotid intima-media thickness (p = 0.12 and p = 0.50, respectively).\n\nABSTRACT.CONCLUSIONS:\nOur study revealed that one month of oral L-carnitine therapy did not improve endothelial function in hemodialysis patients. Long-term studies with large sample size using intravenous form and higher doses of the drug are required to clarify the questionable role of L-carnitine in hemodialysis patients.\n\nBODY.INTRODUCTION:\nAtherosclerosis advances in renal failure and develops early in the course of renal dysfunction. Therefore, the consequences of atherosclerosis especially cardiovascular events represent a major clinical problem in these patients.[1] Cardiac death accounts for approximately 40-50% of all deaths and is the leading cause of death in these patients. Death from cardiovascular causes is up to 20 times more common in uremic patients than in the general population with the risk being even higher than in patients with diabetes mellitus.[1]\nEndothelial dysfunction (ED) seems to be the underlying cause of all vascular diseases. Accumulating evidence suggests that chronic renal failure (CRF) is associated with impaired endothelial cell function. ED is recognized as a key process in acute and chronic renal failure as well as end stage renal disease (ESRD) of all causes.[2]\nED and intima-media thickness (IMT) are predictors for the development and progression of atherosclerosis.[3] The central role of endothelium in the development of vascular disease has led to identification of new relevant methods to estimate endothelial function and injury. 
Flow-mediated dilatation (FMD) is the most widely used noninvasive ultrasound method to assess endothelial function.[4]\nMeasurement of ultrasound-based FMD in the brachial artery is noninvasive and relatively repeatable and reproducible, reflects important biology, and is useful in serial studies of disease reversibility. Brachial artery endothelium-dependent dilatation is also significantly correlated with findings in the coronary circulation in the same patients.[56]\nL-carnitine is a naturally occurring hydrophilic amino acid derivative, produced endogenously in the kidneys and liver and derived from meat and dairy products in the diet. It plays an essential role in the transfer of long-chain fatty acids into the mitochondria for beta-oxidation.[7]\nL-carnitine and short-chain acylcarnitines (esters of L-carnitine), such as acetyl-L-carnitine, are excreted by the kidneys. Renal reabsorption of L-carnitine is normally very efficient. In fact, an estimated 95% is thought to be reabsorbed by the kidneys.[7]\nPatients with renal disease who undergo hemodialysis are at risk for secondary carnitine deficiency.\nL-carnitine and many of its precursors are removed from the circulation during hemodialysis. Impaired L-carnitine synthesis by the kidneys may also contribute to the potential for carnitine deficiency in patients with end-stage renal failure undergoing hemodialysis.[7]\nAs Guarnieri et al. noted, carnitine supplementation has been approved by the US Food and Drug Administration not only for the treatment, but also for the prevention of carnitine depletion in dialysis patients.[8]\nPrevious studies have shown L-carnitine and its propionate to improve endothelial responses in animal models. L-carnitine has also been found to prevent the progression of atherosclerotic lesions. Endogenous carnitine depletion and/or carnitine deficiency should thus be viewed as an additional risk factor in atherogenesis.[910]\nHowever, many of the previous studies were performed on small numbers of patients. They were not well-controlled and did not include patients with CRF, although atherosclerosis and its related complications are the leading cause of mortality in CRF.\nTo the best of our knowledge, this is the first study to evaluate the role of oral L-carnitine supplementation in improving endothelial function. For this purpose, FMD and carotid IMT in CRF patients were measured and compared with the values in a control group of patients with CRF who received placebo.\n\nBODY.MATERIALS AND METHODS:\nIn a randomized double-blind, placebo-controlled trial, patients were recruited from the maintenance HD (Hemodialysis) population at Shafa Hospital, and Javadolaemeh and Samenolaemeh centers (Kerman, Iran) in 2011. This study was approved by research and ethics board at Kerman University of Medical Sciences, Kerman, Iran. Informed written consents were also obtained from all subjects.\nThe number of cases was computed by NCSS statistical software using paired t-test. From previous studies, mean FMD changes before and after the intervention were considered as 7.5 and 6, respectively. Standard deviation (SD) was estimated as 2.5. Type I error probability (α) of 0.05, β of 20%, and power of 80% were also considered. Number of subjects in each group was thus calculated as 24 patients.\nPatients had been on hemodialysis 3 times weekly and for a period of at least 3 months. 
We excluded patients with infection, inflammation, incomplete or irregular drug or placebo intake, or arteriovenous fistula (AV) fistula for dialysis in both upper extremities. Smokers and individuals who received any new vasodilator agent during the study were also excluded. During the study period, we followed the patients for effective dialysis as scheduled and for any changes in their prescribed medication. Predialysis FMD and carotid IMT measurement by ultrasound were performed by the same operator in the L-carnitine and placebo groups before starting therapy and after one month of therapy.\nThe patients were randomized to receive L-carnitine therapy or placebo. We administered 1500 mg oral L-carnitine or placebo between dialysis sessions in 3 divided doses in the mornings and nights except for the morning of days they underwent dialysis. All adverse effects during the course of the study in both case and control groups were recorded.\nBODY.MEASUREMENT OF FLOW-MEDIATED VASODILATATION:\nUsing the guidelines suggested by Corretti et al.,[11] ultrasound assessment of endothelial-dependent FMD of brachial artery was investigated one hour before starting scheduled dialysis. Briefly, the study was performed in a temperature-controlled room (22°C) with the subjects in a resting and supine position.\nBrachial artery diameter was imaged using a 7.5-Mhz linear array transducer ultrasound system (Siemens Company). The brachial artery was imaged at a location 3-7 cm above the antecubital crease. To create a flow stimulus in the brachial artery, a sphygmomanometer cuff was placed on the forearm and the cuff was inflated 50 mmHg above systolic pressure to occlude artery flow for 5 minutes. All measurements were made at the end of diastole. The FMD was expressed as the percentage of change in post-stimulus diameter of the artery compared to the baseline diameter.\n\nBODY.MEASUREMENT OF CAROTID IMT:\nLongitudinal ultrasonographic scans of the carotid artery were obtained on the day of brachial artery reactivity assessment by FMD. The scans included the evaluation of the right common carotid artery 1 cm proximal to the carotid bulb. In each examination, the same operator used different scanning angles to identify the greatest IMT, defined as the distance between the junction of the lumen and intima and that of the media and adventitia in the far wall of the carotid artery. Finally, 3 measurements of carotid IMT were obtained from the right carotid artery and were averaged to determine the mean carotid IMT. Carotid IMT was expressed in millimeters.\n\nBODY.RANDOMIZATION AND BLINDING:\nA person, not involved in the study, assigned codes to the study groups and randomly allocated the selected participants to receive oral placebo or L-carnitine (1500 mg/day). The randomization was carried out by a procedure based on a random numeric sequence. The authors were unaware of treatment allocation.\n\nBODY.STATISTICAL METHODS:\nAll analyses were carried out using SPSS13.0 (SPSS Inc., Chicago, IL, US). Categorical variables are reported as counts (percentage) and continuous variables as means ± SD unless otherwise indicated.\nTo investigate the parameters in each group before and after therapy, t-test was used and to compare the data between different groups, analysis of variance (ANOVA) was performed. P values less than 0.05 were considered statistically significant.\n\n\nBODY.RESULTS:\nThe calculated sample size in each group was 24 patients. 
However, since this trial was interventional and there was the possibility of interrupting the course of the treatment by patients or due to medication side effects, we started the study with 60 adult chronic hemodialysis patients and divided them randomly into two equal groups of L-carnitine and placebo. Only 20 patients in the L-carnitine and 11 in the placebo group continued the medication for one month and the rest of the cases were excluded from the study. The compliance in carnitine group was higher than that in the placebo group.\nThe most common reason for discontinuing the drugs was their side effects, such as nausea and vomiting that were interestingly much more common in the placebo group (25% vs. 8%). The placebo was made exactly with the same basic ingredients and in the shape of L-carnitine tablets without including the L-carnitine component.\nAlmost all patients were under multi-drug therapy for different complications of CRF. Therefore, the second common reason of not completing the course of therapy was that they were just tired of taking too many tablets.\nFinally, the data was obtained from 20 patients in the L-carnitine and 11 patients in the placebo groups. Sex and age distribution were comparable in the L-carnitine and control groups (male/female ratios of 16/4 and 7/4, respectively; p = 0.281 and 48.4 ± 13.8 and 51.2 ± 15.0 years old, respectively; p = 0.595). The duration of hemodialysis in the L-carnitine group was insignificantly longer than the placebo group (6.18 ± 6.73 vs. 2.95 ± 2.28 years; p = 0.394).\nAntihypertensive agents, mostly angiotensin converting enzyme inhibitors, were used regularly by 60% of the patients in the L-carnitine group and 80% of the patients in the placebo group.\nAt the beginning of the study, FMD values measured in the L-carnitine and placebo groups were 8.63 ± 13.12 and 4.97 ± 3.33, respectively (p = 0.634). As Table 1 shows, FMD and carotid IMT values at the beginning and one month after the treatment were comparable, i.e. there was no statistically significant improvement in FMD or carotid IMT after 1 month of therapy. The difference between the two groups in the measured carotid IMT values at the beginning of the study was borderline (p = 0.05). Mann-Whitney test on FMD changes after one month of L-carnitine or placebo therapy showed the changes in the L-carnitine and placebo groups to be -1.019 and -0.0195, respectively (p = 0.71).\nTable 1Flow mediated dilation (FMD) and carotid intima-media thickness (IMT) changes in L-carnitine and placebo groups, before and after treatment\nDiabetics constituted 25% and 63% of the L-carnitine and placebo groups, respectively. In the beginning of the trial, hemoglobin A1c (HbA1c) was measured in diabetic patients. Considering 7% or less as normal HbA1c levels, only 41% of the patients had well-controlled diabetes mellitus. The influences of L-carnitine therapy on FMD in diabetic patients and also in nondiabetic patients were evaluated separately. The changes were not statistically significant in either diabetic patients (p = 0.625) or in nondiabetics (p = 0.678). Although at the beginning of the study, the measured FMD in the diabetic patients was lower than nondiabetics, the difference was not statistically significant (4.15 ± 3.38 vs. 6.84 ± 4.29, p = 0.108). 
In addition, there was no statistically significant difference in response after the therapy.\nLikewise, the chronicity of hemodialysis did not affect FMD changes by L-carnitine, as assessed in L-carnitine-treated patients who had been on dialysis for 2 years or less (9.93 ± 18.25 and 6.43 ± 4.48; p = 0.508) and in those who had been on dialysis for more than two years (6.09 ± 3.79 and 6.77 ± 4.58; p = 0.815).\nWhile 25% of the patients in the L-carnitine group reported a sense of well-being after taking the medication, no patient in the placebo group had such a feeling.\nLoffredo et al. measured FMD in 40 normal patients with a mean age of 64.2 ± 11.2 years. They reported the mean percentage change in FMD as 10.34 ± 2.14.[12] We compared this with the mean percentage change in FMD measured in CRF patients (5.57 ± 4.58) at the beginning of our study (after removing just one value that appeared to be out of the normal range) and the difference was statistically significant (p < 0.005). In other words, FMD was impaired in CRF patients compared to the normal population.\n\nBODY.DISCUSSION:\nThis study demonstrated that one month of oral L-carnitine supplementation was not associated with improvement in endothelial function as measured by FMD and carotid IMT in CRF patients on chronic hemodialysis. In fact, our data indicated that the disease had a progressive course and that L-carnitine could not influence it significantly.\nA number of studies have been conducted over the past years to assess the efficacy of supplemental L-carnitine in improving some clinical complaints of CRF patients. However, there has been no trial regarding the role of L-carnitine in endothelial function in these patients. Considering the high prevalence of atherosclerosis and related complications in CRF patients, we conducted our study to investigate this concept.\nStasi et al.[13] and McMackin et al.[14] demonstrated that propionyl-L-carnitine accelerated blood flow recovery and the restoration of vascular function in animal models. Loffredo et al. showed that propionyl-L-carnitine infusion was associated with increased FMD in patients with peripheral vascular disease.[12] In another study, Volek et al. reported that L-carnitine supplementation improved FMD in healthy individuals after high-fat meals (which have been shown to impair vascular health by reducing FMD).[15] Silvestro et al. concluded that in patients with intermittent claudication, propionyl-L-carnitine administration provided a protective effect against the deterioration of FMD and the increase in soluble vascular cell adhesion molecule-1 (sVCAM-1) induced by exercise.[16]\nPrevious studies in animal models have shown that L-carnitine and its propionate improved endothelial responses by decreasing O2 production and increasing nitric oxide (NO) availability. They also suggested that L-carnitine prevents the progression of atherosclerotic lesions, and that endogenous carnitine depletion and/or deficiency should thus be viewed as an additional risk factor in atherogenesis.[9,10]\nThere has been no previous study investigating the role of L-carnitine therapy in improving endothelial function in CRF patients. In this clinical trial, we found that L-carnitine did not change FMD or carotid IMT in CRF patients after one month of therapy. Some explanations can justify this finding. 
Considering the risk of accumulation of toxic metabolites of L-carnitine such as trimethylamine (TMA) in CRF patients, we prescribed 1500 mg of oral L-carnitine supplementation between dialysis sessions and only for one month. However, the bioavailability of oral L-carnitine is about 15%.[17,18] Though the recommended form of L-carnitine in ESRD is the intravenous form, it is unfortunately not available in the country of study (Iran). Therefore, the oral form and the relatively short duration of prescription of L-carnitine might have influenced our results. Further clinical trials with the intravenous form of L-carnitine should be performed to increase not only the bioavailability of the drug, but also the compliance of the patients. There would also be no risk of toxic metabolite accumulation with the intravenous form.\nOn the other hand, most CRF patients receive antihypertensive agents, especially angiotensin converting enzyme inhibitors, which mediate NO-related vasodilatation and increase baseline vessel diameter, which in turn leads to smaller estimates of FMD, including brachial artery FMD. This might affect the percentage change in FMD after therapy.[19]\nMoreover, since most of the time a chronic insidious disease such as diabetes mellitus leads to CRF, well-established atherosclerosis is usually encountered at the time of starting dialysis, and this might have a marked influence on patients’ response to L-carnitine.\nBased on Stephens et al., carbohydrate-rich foods, which induce a state of hyperinsulinemia, appear to improve carnitine transport into the skeletal muscles.[20] Bloomer et al. suggested that this may also be true for uptake into endothelial cells.[21] If confirmed, then given that the most common cause of CRF in adults is diabetes mellitus, abnormal insulin secretion and insulin resistance may minimize the effect of L-carnitine in these patients.\nWe conclude that the role of L-carnitine in chronic hemodialysis patients is still questionable. Our clinical trial demonstrated no significant improvement in endothelial function in these patients after one month of therapy. Additional randomized controlled clinical trials with larger sample sizes and sufficient power may lead to a better understanding of the possible beneficial effects of L-carnitine on hemodialysis patients.\n\n**Question:** Compared to placebo what was the result of 1500 mg/dialysis interval (every other day) oral L-carnitine on carotid intima-media thickness?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
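The record above describes a concrete calculation (FMD expressed as the percentage change from the baseline brachial artery diameter) together with the statistical comparisons quoted in the prompt (pre/post t-tests and a Mann-Whitney test on between-group FMD changes). A minimal sketch of that workflow is given below; it is illustrative only, the diameter and change values are hypothetical placeholders rather than trial data, and NumPy/SciPy are assumed to be available.

```python
# Minimal illustrative sketch of the analysis described in the preceding record.
# All numeric values below are hypothetical placeholders, not trial data.
import numpy as np
from scipy import stats

def fmd_percent(baseline_mm, post_occlusion_mm):
    """Flow-mediated dilatation: % change from baseline brachial artery diameter."""
    baseline_mm = np.asarray(baseline_mm, dtype=float)
    post_occlusion_mm = np.asarray(post_occlusion_mm, dtype=float)
    return (post_occlusion_mm - baseline_mm) / baseline_mm * 100.0

# Hypothetical pre- and post-treatment FMD values for the L-carnitine group.
fmd_before = fmd_percent([3.8, 4.1, 3.9, 4.0, 4.2], [4.1, 4.3, 4.1, 4.2, 4.4])
fmd_after = fmd_percent([3.9, 4.0, 3.9, 4.1, 4.2], [4.2, 4.2, 4.1, 4.3, 4.5])

# Within-group pre/post comparison, analogous to the t-test named in the methods.
t_stat, p_paired = stats.ttest_rel(fmd_before, fmd_after)

# Between-group comparison of FMD changes, analogous to the Mann-Whitney test
# reported in the results.
delta_carnitine = fmd_after - fmd_before
delta_placebo = np.array([-0.5, 0.2, -0.1, 0.3, -0.4])  # hypothetical placebo changes
u_stat, p_mw = stats.mannwhitneyu(delta_carnitine, delta_placebo, alternative="two-sided")

print(f"paired t-test p = {p_paired:.3f}; Mann-Whitney p = {p_mw:.3f}")
```

The rank-based test on the change scores makes no normality assumption, which suits the small group sizes reported in the record (20 and 11 completers).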
13834
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nHip arthroscopy versus best conservative care for the treatment of femoroacetabular impingement syndrome (UK FASHIoN): a multicentre randomised controlled trial\n\n\n\nABSTRACT.SUMMARY:\nABSTRACT.BACKGROUND:\nFemoroacetabular impingement syndrome is an important cause of hip pain in young adults. It can be treated by arthroscopic hip surgery, including reshaping the hip, or with physiotherapist-led conservative care. We aimed to compare the clinical effectiveness of hip arthroscopy with best conservative care.\n\nABSTRACT.METHODS:\nUK FASHIoN is a pragmatic, multicentre, assessor-blinded randomised controlled trial, done at 23 National Health Service hospitals in the UK. We enrolled patients with femoroacetabular impingement syndrome who presented at these hospitals. Eligible patients were at least 16 years old, had hip pain with radiographic features of cam or pincer morphology but no osteoarthritis, and were believed to be likely to benefit from hip arthroscopy. Patients with bilateral femoroacetabular impingement syndrome were eligible; only the most symptomatic hip was randomly assigned to treatment and followed-up. Participants were randomly allocated (1:1) to receive hip arthroscopy or personalised hip therapy (an individualised, supervised, and progressive physiotherapist-led programme of conservative care). Randomisation was stratified by impingement type and recruiting centre and was done by research staff at each hospital, using a central telephone randomisation service. Patients and treating clinicians were not masked to treatment allocation, but researchers who collected the outcome assessments and analysed the results were masked. The primary outcome was hip-related quality of life, as measured by the patient-reported International Hip Outcome Tool (iHOT-33) 12 months after randomisation, and analysed in all eligible participants who were allocated to treatment (the intention-to-treat population). This trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN64081839, and is closed to recruitment.\n\nABSTRACT.FINDINGS:\nBetween July 20, 2012, and July 15, 2016, we identified 648 eligible patients and recruited 348 participants: 171 participants were allocated to receive hip arthroscopy and 177 to receive personalised hip therapy. Three further patients were excluded from the trial after randomisation because they did not meet the eligibility criteria. Follow-up at the primary outcome assessment was 92% (319 of 348 participants). At 12 months after randomisation, mean iHOT-33 scores had improved from 39·2 (SD 20·9) to 58·8 (27·2) for participants in the hip arthroscopy group, and from 35·6 (18·2) to 49·7 (25·5) in the personalised hip therapy group. 
In the primary analysis, the mean difference in iHOT-33 scores, adjusted for impingement type, sex, baseline iHOT-33 score, and centre, was 6·8 (95% CI 1·7–12·0) in favour of hip arthroscopy (p=0·0093). This estimate of treatment effect exceeded the minimum clinically important difference (6·1 points). There were 147 patient-reported adverse events (in 100 [72%] of 138 patients) in the hip arthroscopy group) versus 102 events (in 88 [60%] of 146 patients) in the personalised hip therapy group, with muscle soreness being the most common of these (58 [42%] vs 69 [47%]). There were seven serious adverse events reported by participating hospitals. Five (83%) of six serious adverse events in the hip arthroscopy group were related to treatment, and the one in the personalised hip therapy group was not. There were no treatment-related deaths, but one patient in the hip arthroscopy group developed a hip joint infection after surgery.\n\nABSTRACT.INTERPRETATION:\nHip arthroscopy and personalised hip therapy both improved hip-related quality of life for patients with femoroacetabular impingement syndrome. Hip arthroscopy led to a greater improvement than did personalised hip therapy, and this difference was clinically significant. Further follow-up will reveal whether the clinical benefits of hip arthroscopy are maintained and whether it is cost effective in the long term.\n\nABSTRACT.FUNDING:\nThe Health Technology Assessment Programme of the National Institute of Health Research.\n\nBODY.INTRODUCTION:\nFemoroacetabular impingement syndrome is a painful disorder of the hip that is caused by a premature contact (impingement) between the femur and acetabulum during hip movements.1, 2 This premature contact typically occurs as a result of certain hip shapes, such as cam or pincer morphology.1 Cam morphology refers to a flattening or convexity at the femoral head neck junction, whereas pincer morphology refers to a focal or global over-coverage of the femoral head by the acetabulum.1 Femoroacetabular impingement syndrome leads to progressive damage within the joint, including the acetabular labrum and articular cartilage,1 and is associated with the development of osteoarthritis of the hip.1, 3\nBODY.RESEARCH IN CONTEXT:\nEvidence before this study\nFemoroacetabular impingement syndrome is a relatively common non-arthritic cause of hip pain in young adults. It is caused by symptomatic premature contact between the proximal femur and acetabular rim during hip motion, and this phenomenon is associated with certain hip shapes. Repeated impingement leads to damage to the articular cartilage or to acetabular labral tears, and it is these injuries that are thought to be painful. In the past decade, surgeons have developed arthroscopic (ie, keyhole) surgical techniques to treat femoroacetabular impingement syndrome. The rationale is that reshaping the hip and repairing cartilage and labral damage will prevent impingement and relieve symptoms. In 2013, we did a Cochrane systematic review of the effectiveness of hip arthroscopy for femoroacetabular impingement syndrome. We searched MEDLINE (from 1946 to Nov 19, 2013), Embase (1980 to Nov 19, 2013), and the Cochrane Central Register of Controlled Trials (issue 11, 2013), for randomised controlled trials of surgery compared with placebo treatment, non-operative treatment, or no treatment, in any language, in human adults with femoroacetabular impingement syndrome. 
We found no high-quality evidence examining the effectiveness of surgery for femoroacetabular impingement syndrome. Therefore, we chose to do a pragmatic, multicentre, randomised controlled trial to determine the effectiveness of hip arthroscopy compared with best conservative care.\nAdded value of this study\nTo our knowledge, this is the largest randomised controlled trial to show the clinical effectiveness of hip arthroscopy in treating femoroacetabular impingement syndrome. We have shown that patients with femoroacetabular impingement syndrome have better hip-related quality of life at 12 months since randomisation with either hip arthroscopy or best conservative care (personalised hip therapy), and that this quality of life improves significantly more in patients treated hip arthroscopy, exceeding the minimal clinically important difference. Hip arthroscopy was also more expensive than was personalised hip therapy.\nImplications of all the available evidence\nOver the past 15 years, increasing numbers of patients with femoroacetabular impingement syndrome have been treated with hip arthroscopy. This is the first study to show that hip arthroscopy is more clinically effective, at least in the short term, than best conservative care. Longer-term outcomes are required to establish whether this improvement is sustained and whether surgery is cost-effective. These results should be shared with patients when selecting an appropriate treatment strategy.\n\nSurgery has become an established treatment for femoroacetabular impingement syndrome. The aim of such surgery is to reshape the hip joint to prevent impingement.2 Intra-articular injury, such as a cartilage and labral damage, can be resected, repaired, or reconstructed.2 Initially, open surgery was used to treat femoroacetabular impingement syndrome, but the minimally invasive, so-called keyhole technique known as hip arthroscopy is now being used more frequently.1 This approach has become possible because of advances in technology and surgical techniques. Hip arthroscopy is safer and has a shorter recovery time than does open surgery.4, 5 In the UK in 2013, 1908 operations for femoroacetabular impingement syndrome were done by arthroscopic surgery, compared with only 491 by open surgery.6 Since then, there has been a rapid increase in the use of hip arthroscopy in most countries around the world.4, 5, 6 Non-operative treatments for femoroacetabular impingement syndrome are also available,2 including exercise-based packages of conservative care, delivered by a physiotherapist.7 Potential targets for physiotherapy include the abnormal movement patterns and weakness of hip muscles seen in patients with femoroacetabular impingement syndrome.8, 9\nNumerous case series report improvement in patients with femoroacetabular impingement syndrome after open or arthroscopic surgery, or physiotherapy.5, 10 However, a 2014 Cochrane review11 of surgery for treating femoroacetabular impingement syndrome showed that there was no evidence from randomised controlled trials to support these treatments.\nIn a feasibility study,6 we established that patients were prepared to be recruited, and that surgeons were in equipoise and willing to recruit patients to a randomised controlled trial of hip arthroscopy compared with best conservative care. 
Our aim was to measure the clinical effectiveness of hip arthroscopy compared with best conservative care in treating patients with femoroacetabular impingement syndrome.\n\nBODY.METHODS:\nBODY.STUDY DESIGN AND PARTICIPANTS:\nWe did this pragmatic, multicentre, assessor-blind randomised controlled trial in 23 National Health Service (NHS) hospitals in the UK. We treated the initial feasibility study6 as an internal pilot study so that participants who took part in the initial study were included in the main trial recruitment.\nParticipants were recruited from the specialist hip arthroscopy service at each hospital. Participating surgeons identified eligible patients during routine diagnostic consultations. Assessments were medical history, clinical examination, X-rays, and cross-sectional imaging (MRI, CT, or both). For patients with a diagnosis of femoroacetabular impingement syndrome, the surgeon classified them as having cam (alpha angle >55°), pincer (lateral centre-edge angle >40° or a positive crossover sign), or mixed-type (combination of cam and pincer) impingement.12\nPatients were eligible if they had hip pain, radiographic features of cam or pincer morphology, were at least 16 years old, were able to give informed consent, and if the treating surgeon believed that they were likely to benefit from hip arthroscopy. Patients were excluded if they had hip osteoarthritis (Tonnis grade >1 or less than 2 mm of superior joint space on an antero-posterior radiograph); a history of hip pathology such as Perthes' disease, slipped upper femoral epiphysis, or avascular necrosis, or previous hip injury such as acetabular fracture, hip dislocation, or femoral neck fracture; or if they had already had shape-changing surgery (open or arthroscopic) of the hip.13, 14 Patients with bilateral femoroacetabular impingement syndrome were eligible, and only the most symptomatic hip was randomly assigned to treatment and followed up. Trained research associates approached eligible patients to explain the trial and to invite them to participate. All participants gave written informed consent.\nQualitative research, to understand recruitment as it occurred, was integrated into the trial. The findings were used to design a recruiter training and centre support programme that was implemented during the trial to optimise recruitment. The research was based on the QuinteT Recruitment Intervention and continued the work done during the internal pilot trial.15, 16\nThe study was approved by NHS Research Ethics Service West Midlands (14/WM/0124). An independent trial steering committee and data monitoring committee provided oversight of the progression of the study. The study protocol has previously been published.17\n\nBODY.RANDOMISATION AND MASKING:\nParticipants were randomly assigned (1:1) with a computer-generated minimisation (adaptive stratified sampling) algorithm for centre and type of impingement, to receive either hip arthroscopy or best conservative care. All baseline data were collected before randomisation, which was done by the recruiting research associate. Allocation concealment was ensured by use of a secure telephone randomisation service hosted by Warwick Clinical Trials Unit. It was not possible to mask patients or the treating clinicians to their allocation. 
Researchers who collected outcome assessments and analysed the results were masked to allocation by concealment of treatment.\n\nBODY.PROCEDURES:\nSurgery for femoroacetabular impingement syndrome was done using arthroscopic techniques by a senior surgeon (consultant grade) in the NHS, who was trained and experienced in hip arthroscopy. Further details about surgeons' training and experience, and surgical procedures are shown in the appendix. 27 trial surgeons did the surgery. Hip arthroscopy was done under general anaesthesia in a lateral or supine position. Arthroscopic portals were established in the central and peripheral compartments of the hip under radiographic guidance according to the surgeon's usual practice. Shape abnormalities and consequent labral and cartilage pathology were treated. Adequacy of bony reshaping was assessed by intraoperative image intensifier views or by arthroscopic visualisation of a satisfactory impingement free range of movement of the hip, or both. Patients were allowed home when they could walk safely with crutches (typically within 24 h). Patients were referred to outpatient physiotherapy services for a course of rehabilitation as per usual care for that surgeon. These post-operative physiotherapists were distinct from those providing conservative care, to avoid contamination between groups. Patients had MRI of their hip at least 6 weeks after surgery.\nA panel of international experts assessed the fidelity of the surgery (appendix). They reviewed operation notes, intraoperative images, and postoperative MRI scans to subjectively assess whether adequate surgery had been undertaken, according to the protocol. The panel discussed each case and subjectively assessed whether shape abnormalities and intra-articular pathology were treated, and whether there was a sufficient resection to allow impingement-free range of motion. This approach included assessing the proximal femoral and acetabular rim resections and whether the resection edges were smooth.\nPersonalised hip therapy is a package of physiotherapist-led rehabilitation for femoroacetabular impingement syndrome. It was developed during our feasibility study and tested during the pilot study.6 Although the name for this intervention is new, the care offered was based on a consensus of what physiotherapists, physicians, and surgeons currently regard as best conservative care for femoroacetabular impingement syndrome.7 Personalised hip therapy has four core components: an assessment of pain, function, and range of hip motion; patient education; an exercise programme taught in the clinic and repeated at home, that has the key features of individualisation, progression, and supervision; and help with pain relief, which could include one X-ray or ultrasound-guided intra-articular steroid injection when pain prevents performance of the exercise programme.18\nPersonalised hip therapy was delivered by at least one physiotherapist at each centre. 47 physiotherapists were trained formally in this protocol through a 1-day workshop and supported to deliver personalised hip therapy through refresher workshops (appendix). At their initial assessment, participants received a personalised hip therapy information pack that described what to expect during the course of their treatment. They then had between six and ten face-to-face contacts with the physiotherapist over 12–24 weeks. Some contacts were allowed by either telephone or email when geographical distance prevented all contacts being carried out face-to-face. 
Patient-completed exercise diaries were also encouraged to help both patients and physiotherapists monitor progress and adherence with personalised hip therapy.\nPhysiotherapists recorded full details of their advice and treatments, number and type of treatment contacts, and any non-attendance, on case report forms. These case report forms were reviewed for accuracy in comparison to the usual physiotherapy records at each treatment centre and then assessed for fidelity to the personalised hip therapy protocol by a panel comprising members of the core group who developed the protocol for personalised hip therapy (appendix). The personalised hip therapy panel reviewed case report forms to ensure participants received an adequate number of sessions, all four core components, and that their exercise programme was individualised, supervised, and progressive.\n\nBODY.OUTCOMES:\nThe primary outcome was hip-related quality of life, as measured by the international Hip Outcome Tool (iHOT-33) at 12 months after randomisation.19 iHOT-33 is a patient-derived and patient-reported outcome instrument designed to measure hip-related quality of life in young adults with non-arthritic hip pain. The iHOT-33 consists of four domains: symptoms and functional limitations; sports and recreational physical activities; job-related concerns; and social, emotional, and lifestyle concerns.18 iHOT-33 provides a 100-point score, with 100 representing no pain and perfect function, and lower scores indicating pain and poor function. The instrument has been validated in a relevant population for this trial,19 and has a minimum clinically important difference of 6·1 points.19, 20 Secondary outcomes were health-related quality of life, measured with the EuroQol EQ-5D-5L, the 12-item Short Form Health Survey (SF-12 version 2), adverse events, and health-care resource use, all measured 6 and 12 months after treatment allocation.21, 22, 23 Patients reported adverse events 6 weeks after the start of their intervention (first session of personalised hip therapy, or date of hip arthroscopy). Any serious adverse events were reported by each centre to the Warwick Clinical Trials Unit. Forms for iHOT-33, EQ-5D-5L, SF-12, adverse events, and health-care resource use were collected by postal questionnaires, which were centrally administered. Scores for these measures were collected at the time of consent and again by postal questionnaire at 6 and 12 months after treatment allocation.\n\nBODY.STATISTICAL ANALYSIS:\nThe planned sample size was 172 participants in each group, based on a standard deviation of iHOT-33 of 16 points and a minimum clinically important difference of 6·1 points, giving a standardised effect size of 0·38. We designed the trial to have 90% power to detect an effect of this size at a two-sided 5% significance level, allowing for up to 15% loss to follow-up at the primary outcome timepoint.\nWe prepared a full statistical analysis plan before the final analysis. No interim analyses were planned. The plan was approved by the independent data monitoring committee.\nIn our primary analysis, we investigated differences in the primary outcome measure, (iHOT-33 score at 12 months after randomisation) between the two treatment groups on a intention-to-treat basis, which included all eligible patients randomly assigned to an intervention. We assessed the primary outcome 12 months from treatment allocation rather than from intervention because this was a pragmatic trial design of two different treatment strategies. 
We used a mixed-effects regression analysis to assess the effects of the interventions on 12-month iHOT-33 scores, after adjusting for the fixed-effects of impingement type, sex, and baseline iHOT-33 score, with recruiting centre included as a random effect to model any potential associations within the recruiting centres. This mixed-effects model was used for all outcome measures.\nWe drew our primary inferences from the intention-to-treat analysis, irrespective of compliance, without imputation for missing data. We also did several additional analyses. We did a per-protocol analysis, comparing participants who actually received surgery and those who actually received personalised hip therapy. Additionally, we compared participants randomly allocated to hip arthroscopy or personalised hip therapy, who received an allocated treatment that was deemed to be of high fidelity. We did prespecified subgroup analyses for different impingement types (cam, pincer, and mixed) and for patients younger and older than 40 years. We added pairwise interaction terms between treatment group and both impingement type and age group to the mixed-effects model to test for important subgroup effects. In addition to the primary adjusted analysis, we also reported unadjusted differences between groups, and we assessed significance using t tests for normally distributed outcomes. We present treatment effect estimates from the adjusted mixed-effects model (primary analysis) with 95% CIs. All hypothesis testing was at the 5% level, with no adjustments for multiple testing. Unless otherwise indicated, the analysis of secondary outcomes followed the same modelling approach as for the primary outcome. All analyses were on a complete-case basis and where follow-up data were missing, the reasons for missing data were obtained and patterns were investigated to judge the plausibility of missingness assumptions. We did a sensitivity analysis using multiple imputation techniques (imputation using chained equations) to assess the effect of missing data on the primary outcome. Finally, we did a post-hoc sensitivity analysis of the effect of variation in recovery time in the hip arthroscopy group. We compared post-intervention adverse events between groups using Fisher's exact test. We did all analyses using Stata (version 14).24\nWe did an economic evaluation from the perspective of the NHS and personal social services.25 We estimated economic costs associated with the delivery of the two interventions. Resource use questions completed by participants at each assessment point provided a profile of all hospital inpatient and outpatient service use, community health and social care encounters, prescribed medications, and NHS supplies, such as crutches or home adaptations. We obtained unit costs (2016 £) from primary and secondary sources in accordance with national guidelines and attached them to every item of resource use.24 We used health utilities generated from EQ-5D-5L responses at every timepoint of assessment to estimate quality-adjusted life-year (QALY) profiles for every participant; these QALYs were calculated as the area under the baseline-adjusted utility curve, assuming linear interpolation between utility measurements. We did a bivariate regression of costs and QALYs, with multiple imputation of missing data, with the view to estimating the incremental cost per QALY gained for hip arthroscopy compared with personalised hip therapy. 
Further details, including sensitivity analyses done to assess the effect of uncertainty surrounding aspects of the economic evaluation, and prespecified subgroup analyses exploring heterogeneity in the cost-effectiveness results, are provided in the appendix. The trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN64081839.\n\nBODY.ROLE OF THE FUNDING SOURCE:\nThe funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.\n\n\nBODY.RESULTS:\nBetween July 20, 2012, and July 15, 2016, 648 patients who attended the participating surgeons' hip clinics were deemed eligible (figure 1). Of these patients, 348 (54%) agreed to participate. The mean age of participants was 35·3 years (SD 9·6); the mean age of those who declined to participate was 35·4 years (10·3). The 348 participants were randomly allocated to receive hip arthroscopy (n=171) or personalised hip therapy (n=177). Three additional patients were randomly allocated in error, in each instance contrary to protocol procedures. One participant, who was not eligible, was recruited after miscommunication between the treating surgeon and recruiter; one patient was allocated without signing the consent form; and one patient was deemed eligible by a trainee surgeon, but within a few days the treating surgeon deemed they were not eligible. None of these patients participated in the trial (figure 1).Figure 1Trial profile\niHOT-33=International Hip Outcome Tool. *Three patients were randomly assigned in error but did not receive treatment and were not followed-up.\n\nParticipants in the two groups were well matched in terms of demographics and pre-randomisation hip-related quality of life, having had symptoms for approximately 3 years (table 1). 14 (8%) participants who were allocated to personalised hip therapy had all or part of this intervention, but then, at their request, went on to have hip arthroscopy within 12 months after randomisation. No patients allocated to hip arthroscopy had personalised hip therapy. For hip arthroscopy, the median time from random assignment to treatment was 122 days (IQR 80–185), and for personalised hip therapy, it was 37 days (22–60). Surgeons did a mean of 112 (SD 55) hip arthroscopies per year during the study. At 12 months after randomisation, 144 (84%) of the 171 participants allocated to receive hip arthroscopy had received it; 27 (16%) had not. 121 (84%) participants who received a hip arthroscopy procedure had postoperative MRI and their case was assessed by the surgical review panel; 105 (87%) of these 121 procedures were deemed to be of high fidelity, and 16 (13%) were deemed unsatisfactory. The reasons for unsatisfactory surgery were an inadequate bony resection of the proximal femur (n=7) and acetabular rim (n=2), a sharp transition from the femoral head to neck as a result of reshaping surgery (n=5), and no reshaping surgery being done because the hip was degenerate (n=2). Nine (5%) of the 177 participants allocated to personalised hip therapy did not receive any treatment by 12 months. Of the patients who received personalised hip therapy (n=154), 107 (70%) were judged to have received the intervention to a high fidelity (appendix). 
The most common reason for low fidelity of personalised hip therapy was participants not receiving the minimum of six therapy sessions (34 [72%] of 47). Other reasons for low-fidelity personalised hip therapy were no progression of exercises by the physiotherapist (11 [23%] of 47) and the patient not complying with the exercise programme (two [4%] of 47).Table 1Baseline characteristics of the study population\nHip arthroscopy (n=171)Personalised hip therapy (n=177)Age (years)\n35·4 (9·7)\n35·2 (9·4)\nSex\n\nWomen\n71 (42%)\n64 (36%)\n\nMen\n100 (58%)\n113 (64%)\nCurrent smoker\n\nYes\n31 (18%)\n25 (14%)\n\nNo\n136 (80%)\n151 (85%)\n\nMissing data\n4 (2%)\n1 (1%)\nHip side considered for treatment\n\nRight\n95 (56%)\n103 (58%)\n\nLeft\n75 (44%)\n74 (42%)\nParticipants with bilateral symptoms\n11 (6%)\n18 (10%)\nDuration of hip symptoms (months)\n37 (36·6)\n40 (40·8)\nImpingement type\n\nCam\n129 (75%)\n133 (75%)\n\nMixed\n29 (17%)\n30 (17%)\n\nPincer\n13 (8%)\n14 (8%)\nUnits of alcohol in an average week\n6·2 (8·6)\n6·0 (7·7)\nDiabetes\n\nYes\n2 (1%)\n4 (2%)\n\nNo\n165 (96%)\n171 (97%)\n\nMissing data\n4 (2%)\n2 (1%)\nChronic renal failure\n\nYes\n1 (1%)\n0\n\nNo\n166 (97%)\n176 (99%)\n\nMissing data\n4 (2%)\n1 (1%)\nPhysical activity (UCLA Activity Scale)\n4·3 (2·5)\n4·4 (2·5)\nHip-related quality of life (iHOT-33)\n39·2 (20·9)\n35·6 (18·2)\nSF-12 PCS\n44 (7·6)\n44 (5·9)\nSF-12 MCS\n42 (7·1)\n42 (7·3)\nEQ-5D 3L/5L Index Score\n0·576 (0·26)\n0·557 (0·25)\nEQ-5D 5L VAS\n67 (20·2)\n67 (18·7)\nMean lateral centre edge angle (°)\n31 (5)\n31 (5)\nNumber of participants with LCEA <25°\n7 (4%)\n6 (3%)\nNumber of participants with LCEA <20°\n0\n0\nMean alpha angle (°) measured on antero-posterior radiograph\n61 (17)\n64 (18)\nData are mean (SD) or n (%). UCLA=University of California Los Angeles. iHOT-33=International Hip Outcome Tool. SF-12=12-item Short Form Health Survey. PCS=physical component score. MCS=mental component score. VAS=visual analogue score. LCEA=lateral centre edge angle.\n\n319 (92%) of 348 participants completed the iHOT-33 questionnaires at 12 months after randomisation. Seven (2%) withdrew from follow-up, and 22 (6%) were lost to follow-up. The iHOT-33 score increased in both groups, indicating an improvement in hip-related quality of life (table 2, figure 2). 12 months after randomisation, mean iHOT-33 scores had improved from 39·2 (SD 21) to 58·8 (27) for participants in the hip arthroscopy group, and from 35·6 (18) to 49·7 (25) in the personalised hip therapy group. In the primary intention-to-treat analysis at 12 months, the adjusted estimate of treatment effect measured with iHOT-33 was 6·8 (95% CI 1·7 to 12·0, p=0·0093) in favour of hip arthroscopy, compared with personalised hip therapy. In the per-protocol analysis at 12 months, including participants who received personalised hip therapy (n=154) or hip arthroscopy (n=144), the adjusted estimate of the between-group difference on iHOT-33 was 8·2 (95% CI 2·8 to 13·6) in favour of hip arthroscopy. In the exploratory, pre-specified secondary analysis based on those participants whose treatment was deemed of a high fidelity (hip arthroscopy n=105, personalised hip therapy n=107), the adjusted estimate of between-group difference on the iHOT-33 was 5·8 (95% CI −0·7 to 12·2) in favour of hip arthroscopy. 
There were no significant between-group differences in SF-12 or EQ-5D-5L scores at 6 or 12 months after randomisation (table 2).Figure 2Changes in mean iHOT-33 score from baseline to 6 and 12 months after randomisation\nError bars are 95% CIs. iHOT-33=International Hip Outcome Tool.\nTable 2Patient-reported outcome measures\nHip arthroscopy (n=171)Personalised hip therapy (n=177)Unadjusted differenceAdjusted difference (95% CI)p valueMean (SD)nMean (SD)niHOT-33\n6 months\n46·6 (25)\n161\n45·6 (23)\n154\n1·0\n−0·7 (−5·2 to 3·7)\n0·743\n12 months*\n58·8 (27)\n158\n49·7 (25)\n163\n9·1\n6·8 (1·7 to 12·0)\n0·0093\nEQ-5D-5L (utility)\n6 months\n0·544 (0·26)\n144\n0·573 (0·23)\n147\n−0·029\n−0·042 (−0·088 to 0·005)\n0·081\n12 months\n0·615 (0·25)\n152\n0·578 (0·24)\n147\n0·037\n0·020 (−0·027 to 0·067)\n0·397\nEQ-5D VAS\n6 months\n67·8 (19·3)\n145\n70·3 (19·3)\n145\n−2·5\n−2·1 (−5·7 to 1·4)\n0·241\n12 months\n71·9 (20·7)\n150\n69·2 (19·4)\n145\n2·7\n2·6 (−1·2 to 6·4)\n0·180\nSF-12 PCS\n6 months\n43·4 (7·0)\n146\n44·2 (6·6)\n142\n−0·8\n−0·7 (−2·1 to 0·7)\n0·304\n12 months\n45·1 (6·3)\n145\n44·2 (6·4)\n132\n1·0\n1·1 (−0·2 to 2·5)\n0·099\nSF-12 MCS\n6 months\n42·1 (7·3)\n146\n42·1 (7·2)\n142\n−0·1\n−0·1 (−1·5 to 1·3)\n0·929\n12 months\n43·2 (7·1)\n145\n42·6 (6·9)\n132\n0·6\n0·4 (−1·2 to 2·0)\n0·589\niHOT-33=International Hip Outcome Tool. VAS=visual analogue score. PCS=physical component score. MCS=mental component score.\n*Primary outcome.\n\nIn the prespecified subgroup analyses, the between-group difference for iHOT-33 was 5·0 (95% CI −1·2 to 11·3) in participants younger than 40 years and 10·9 (1·7 to 20·1) in those older than 40 years (pinteraction=0·3023) in favour of hip arthroscopy; the difference was 8·3 (95% CI 2·5 to 14·2) in patients with cam morphology, 1·1 (–11·5 to 13·7) in those with mixed cam and pincer morphology, and 4·0 (–14·6 to 22·7) in those with pincer morphology (pinteraction=0·5672), in favour of hip arthroscopy.\nAmong patients who received their allocated intervention, there were 147 patient-reported adverse events (in 100 [73%] of 138 patients) in the hip arthroscopy group versus 102 (in 88 [60%]) of 146 patients in the personalised hip therapy group. The most frequently reported adverse event was muscle soreness, reported by 58 (42%) patients in the hip arthroscopy group and 69 (47%) patients in the personalised hip therapy group (table 3). At 12 months, seven serious adverse events had been reported by participating hospitals. Six of these were among the participants in the hip arthroscopy group: one patient was not discharged from the day surgery unit and required an overnight admission, one patient had scrotal haematoma requiring readmission, two patients had superficial wound infections that required oral antibiotics, one patient had a hip joint infection that required further surgery and ultimately a total hip replacement, and one patient had a fall that was unrelated to hip arthroscopy. One participant in the personalised hip therapy group developed biliary sepsis that was unrelated to treatment. 
There were no treatment-related deaths.Table 3Patient-reported adverse events\nHip arthroscopy (n=138)*Personalised hip therapy (n=146)†p valueMuscle soreness at 6 weeks after intervention\n58 (42%)\n69 (47%)\n0·40\nNumbness in groin, leg, or foot\n35 (25%)\nNA\nNA\nHip pain or stiffness at 6 weeks after intervention\n13 (9%)\n8 (6%)\n0·26\nUnscheduled hospital appointments\n13 (9%)\n6 (4%)\n0·096\nSuperficial wound problems\n9‡ (7%)\nNA\nNA\nHip joint infection\n1 (1%)\nNA\nNA\nFracture\n0\nNA\nNA\nDeep-vein thrombosis\n0\nNA\nNA\nOther adverse events potentially related to intervention\n8 (6%; 2 numbness proximal thigh, 1 scrotal infection, 1 scrotal bruising, 1 labial swelling, 1 ankle pain, 1 erratic International Normalised Ratio, 1 nausea secondary to analgesia, 1 numbness to tip of tongue for 2 weeks after operation)\n1 (1%; muscle spasms)\n0·017\nOther adverse events not related to intervention\n\n18 (13%; 7 lower back pain, 2 knee pain, 2 road traffic collisions, 2 abdominal pain under investigation, 1 viral illness, 1 endometriosis, 1 chronic pain referred to rheumatologist, 1 skin discoloration, 1 multiple sclerosis)\n0·17\nNA=not applicable.\n*Six of 144 patients who received hip arthroscopy within 12 months did not return an adverse events form.\n†Eight of 154 patients who received personalised hip therapy only did not return an adverse events form.\n‡Four patients required antibiotics.\n\nThere was a low level of missing item-level data (eg, iHOT-33 0·6%) in all patient-reported outcome measures at all timepoints; after imputation for missing data, the adjusted estimate of treatment effect was similar, at 6·6 (95% CI 1·7 to 11·4) points in favour of hip arthroscopy. In a post-hoc analysis there was no significant difference in iHOT-33 at 12 months for patients in the hip arthroscopy group who were treated within 6 months of randomisation versus those treated 6 months or more after randomisation (0·9 [95% CI −10·7 to 8·8]). We assessed model assumptions, including assessment of quantile-quantile plots, which were deemed adequate. The mean cost of hip arthroscopy was £3042 (35% staff time, 28% surgical devices and anaesthetic drugs, 19% theatre-running costs, and 18% bed-day costs). Participants in the personalised hip therapy group attended a mean of six physiotherapy sessions (average duration of 30 min), generating mean total treatment costs of £155 per participant. The adjusted incremental cost of hip arthroscopy compared with personalised hip therapy during the 12-month follow-up was £2372, with incremental QALYs of −0·015 (representing a net QALY loss). Personalised hip therapy was more cost-effective than hip arthroscopy at 12 months (appendix).\n\nBODY.DISCUSSION:\nThe UK FASHIoN study is the first randomised controlled trial to provide evidence that hip arthroscopy is effective in patients with femoroacetabular impingement syndrome. In this pragmatic trial, we found that iHOT-33 scores improved for patients in both groups; 12 months after randomisation, there was a mean adjusted difference of 6·8 points in the iHOT-33 score between patients allocated to receive hip arthroscopy and those allocated to receive personalised hip therapy, in favour of hip arthroscopy. This is a statistically significant difference that also exceeded the minimum clinically important difference for iHOT-33. 
These results are consistent with the hypothesis that hip arthroscopy is more clinically effective than best conservative care.\nThere have been many observational studies showing benefit from hip arthroscopy; however, these studies generally did not have control groups for comparison, and are at high risk of bias.6 Results from a Cochrane systematic review done by members of our group11 showed that there had been no previous relevant randomised controlled trials. Since then, one recent randomised controlled trial26 has reported no difference between hip arthroscopy and conservative care. This study was small, was done in a military setting, with a single surgeon in a single centre, and with a very high rate of crossover (70%) from conservative care to hip arthroscopy.26 When the authors did a per-protocol comparison of those who had hip arthroscopy (n=66) with those who had conservative care (n=14), they reported that “power was lost making type II errors possible”.26 They concluded that “large cohorts across multiple sites are needed to make definitive conclusions”.26 Our trial is larger and therefore has greater power to detect between-group differences, and was done in 23 centres with a more generalisable patient population.\nThere were no differences between groups in the secondary outcome measure of general health-related quality of life (EQ-5D-5L and SF-12). This finding could either be because treatment for femoroacetabular impingement syndrome does not have an effect on health-related quality of life or because the measures we used are not sufficiently sensitive to detect the changes that occur. A further possibility is that the trial was not sufficiently powered to detect changes in health-related quality of life.\nAdverse events in the hip arthroscopy group were more frequent than in the personalised hip therapy group. However, there was only one serious surgical complication in which a patient developed a hip joint infection. In one systematic review of 36 761 cases,27 hip arthroscopy had a reported complication rate of 3·3%, with a rate of major complications of 0·2%; our study findings are consistent with this.\nOur within-trial health economic evaluation suggests that hip arthroscopy is not cost-effective by comparison with personalised hip therapy. However, our economic models were only able to assess cost-effectiveness at 12 months from randomisation. This finding must also be set in the context of the high initial treatment costs of hip arthroscopy, the treatment timing (long delay in patients receiving hip arthroscopy, reducing the period of potential benefit during follow-up), and the period of economic inactivity during postoperative recovery (appendix). There could be long-term benefits from treatment that were not assessed in this economic analysis. We plan further follow-up points at 2, 3, 5, and 10 years, which will inform the lifetime cost-effectiveness of both surgery and personalised hip therapy, and whether treatment effects are maintained or if further treatments are required. Comparison of the rates of hip replacement in both groups will also help establish whether surgery affects the risk of osteoarthritis.1, 3\nStrengths of this trial include the consent to participate rate among eligible patients (54%) and the follow-up rate (92%). 
Both of these values are high compared with similar trials in orthopaedics, and especially with trials of surgery versus no surgery, contributing to external and internal validity.28 The integrated qualitative research optimised recruitment, as it has done in other trials.15 This trial was thoroughly pragmatic, exploring the effectiveness of a strategy of offering hip arthroscopy compared with conservative care in the everyday reality of a national health service, where patients do not always receive or comply with the treatment they are offered, where surgeons and physiotherapists have varying levels of training, skill, and expertise, where postoperative care is variable, and where there are waiting lists for treatment. The large number of centres (n=23), surgeons (n=27), and physiotherapists (n=43) involved is a strength, which contributes to the generalisability of our findings. The comparator for this trial was personalised hip therapy, which we consider to be the best conservative care that can realistically be provided in the NHS for these patients. Personalised hip therapy was designed through international consensus and developed, supported, and tested in similar ways to other physiotherapist-led conservative care protocols.29 It meets the standards expected of a complex intervention in a randomised controlled trial, and was delivered by musculoskeletal physiotherapists who attended additional training and support events.6, 7, 30\nLimitations of this trial include that participants and treating clinicians were not masked to treatment allocation. A blinded allocation trial, with a placebo control, would have been better suited to measuring the underlying effect of surgery. In our trial, the research question was whether hip arthroscopy or best conservative care was the most effective treatment strategy, leading to an inevitable absence of blinding. Data collection and analysis were done without revealing treatment allocation. An unexpected difficulty was the frequent delay in delivery of surgery for those patients allocated to hip arthroscopy. We had anticipated that this delay would be less than 3 months in most patients because when we designed the trial, there was a strongly enforced NHS target to treat patients within 18 weeks from referral to surgery. In fact, during the study, this target was a challenge in many hospitals. Patients allocated to hip arthroscopy therefore often had longer times to treatment, and because the primary outcome was measured 12 months after randomisation, these patients were often still within a few months (and in some cases a few weeks) of their operation when we measured the primary outcome. Because of these delays, patients in the hip arthroscopy group had, on average, less time to recover before the primary outcome measurement than did patients in the personalised hip therapy group. We compared the outcome of hip arthroscopy participants who had surgery in the first 6 months after randomisation with those who had surgery in the second 6 months. There was no significant difference between these groups, suggesting that the systematic difference in time to treatment between groups does not account for the treatment effect. 
Our inferences about the effectiveness of hip arthroscopy compared with personalised hip therapy are limited to data collected 12 months after randomisation; long-term follow-up is required to establish if this effect is maintained and if further treatments are required.\nThe fact that not all surgery or personalised hip therapy was deemed to be of a high fidelity is also a reflection of the real-world setting in which this trial was done. Some surgery was not satisfactory, and some participants allocated to personalised hip therapy did not engage with it or complete it: our fidelity assessment showed high-fidelity treatment in 87% of patients receiving hip arthroscopy and 70% of patients receiving personalised hip therapy. However, these proportions are comparable with other studies and reflect the pragmatism of our trial.31 We took great effort to minimise crossover in our trial, using techniques developed in our feasibility study. No participants allocated to receive hip arthroscopy received personalised hip therapy; 14 participants allocated to personalised hip therapy subsequently changed their mind and decided to have surgery within 12 months. We do not believe that these crossovers can account for the results of this trial; indeed, we would expect such crossovers to dilute and so reduce our estimate of the real underlying effect of hip arthroscopy.\nPersonalised hip therapy is believed to work by improving muscle control, strength around the hip, and movement patterns, leading to the avoidance of hip impingement. Surgery is thought to work by reshaping the bone to prevent impingement and by treating painful injuries to articular cartilage and labrum. In our trial, the observed effect of hip arthroscopy over conservative care might be attributable to the surgical procedure, the placebo effect of surgery (given the unblinded nature of this trial), post-surgical rehabilitation, or a combination of these factors. The results of our subgroup analysis of patients with only cam morphology are suggestive of an increased treatment effect of hip arthroscopy compared with other shapes. This finding would support the idea that the removal of a cam shape has a specific therapeutic effect. The low number of patients with pincer or mixed cam and pincer morphology in this study means we are less certain about the influence of reshaping the acetabular rim. Ultimately, we do not fully understand the mechanism of benefit from hip arthroscopy or personalised hip therapy. Future research should focus on investigating these mechanisms further, and which patients would benefit most from hip arthroscopy or personalised hip therapy.\nWe have shown that offering hip arthroscopy to patients with femoroacetabular impingement syndrome led to better patient-assessed function 12 months after randomisation, compared with best conservative care. This improvement comes at a cost; our study does not show cost-effectiveness of hip arthroscopy compared with conservative care within the first 12 months.\n\n**Question:** Compared to personalised hip therapy what was the result of hip arthroscopy on SF-12(12-item Short Form Health Survey), EQ-5D-5L scores at 6 or 12 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
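The statistical analysis section in the record above states the inputs of the trial's sample-size calculation (an iHOT-33 standard deviation of 16 points, a minimum clinically important difference of 6.1 points giving a standardised effect size of 0.38, 90% power, a two-sided 5% significance level, and up to 15% loss to follow-up). The short sketch below reproduces that arithmetic; it is an illustration under those stated inputs, not the trial team's actual code, and it assumes statsmodels is installed.

```python
# Illustrative power calculation using the inputs quoted in the preceding record;
# not the original trial statisticians' code.
import math
from statsmodels.stats.power import TTestIndPower

sd = 16.0                # assumed standard deviation of iHOT-33
mcid = 6.1               # minimum clinically important difference (iHOT-33 points)
effect_size = mcid / sd  # standardised effect size, roughly 0.38

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,               # two-sided 5% significance level
    power=0.90,
    alternative="two-sided",
)

# Inflate for up to 15% loss to follow-up at the primary outcome timepoint.
n_to_recruit = math.ceil(n_per_group / 0.85)

print(f"effect size {effect_size:.2f}: "
      f"{math.ceil(n_per_group)} evaluable per group, "
      f"recruit about {n_to_recruit} per group")
```

With these inputs the solver returns roughly 146 evaluable participants per group, and inflating for 15% attrition gives about 172 per group, which is consistent with the planned sample size quoted in the record.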
13835
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nHip arthroscopy versus best conservative care for the treatment of femoroacetabular impingement syndrome (UK FASHIoN): a multicentre randomised controlled trial\n\n\n\nABSTRACT.SUMMARY:\nABSTRACT.BACKGROUND:\nFemoroacetabular impingement syndrome is an important cause of hip pain in young adults. It can be treated by arthroscopic hip surgery, including reshaping the hip, or with physiotherapist-led conservative care. We aimed to compare the clinical effectiveness of hip arthroscopy with best conservative care.\n\nABSTRACT.METHODS:\nUK FASHIoN is a pragmatic, multicentre, assessor-blinded randomised controlled trial, done at 23 National Health Service hospitals in the UK. We enrolled patients with femoroacetabular impingement syndrome who presented at these hospitals. Eligible patients were at least 16 years old, had hip pain with radiographic features of cam or pincer morphology but no osteoarthritis, and were believed to be likely to benefit from hip arthroscopy. Patients with bilateral femoroacetabular impingement syndrome were eligible; only the most symptomatic hip was randomly assigned to treatment and followed-up. Participants were randomly allocated (1:1) to receive hip arthroscopy or personalised hip therapy (an individualised, supervised, and progressive physiotherapist-led programme of conservative care). Randomisation was stratified by impingement type and recruiting centre and was done by research staff at each hospital, using a central telephone randomisation service. Patients and treating clinicians were not masked to treatment allocation, but researchers who collected the outcome assessments and analysed the results were masked. The primary outcome was hip-related quality of life, as measured by the patient-reported International Hip Outcome Tool (iHOT-33) 12 months after randomisation, and analysed in all eligible participants who were allocated to treatment (the intention-to-treat population). This trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN64081839, and is closed to recruitment.\n\nABSTRACT.FINDINGS:\nBetween July 20, 2012, and July 15, 2016, we identified 648 eligible patients and recruited 348 participants: 171 participants were allocated to receive hip arthroscopy and 177 to receive personalised hip therapy. Three further patients were excluded from the trial after randomisation because they did not meet the eligibility criteria. Follow-up at the primary outcome assessment was 92% (319 of 348 participants). At 12 months after randomisation, mean iHOT-33 scores had improved from 39·2 (SD 20·9) to 58·8 (27·2) for participants in the hip arthroscopy group, and from 35·6 (18·2) to 49·7 (25·5) in the personalised hip therapy group. 
In the primary analysis, the mean difference in iHOT-33 scores, adjusted for impingement type, sex, baseline iHOT-33 score, and centre, was 6·8 (95% CI 1·7–12·0) in favour of hip arthroscopy (p=0·0093). This estimate of treatment effect exceeded the minimum clinically important difference (6·1 points). There were 147 patient-reported adverse events (in 100 [72%] of 138 patients) in the hip arthroscopy group) versus 102 events (in 88 [60%] of 146 patients) in the personalised hip therapy group, with muscle soreness being the most common of these (58 [42%] vs 69 [47%]). There were seven serious adverse events reported by participating hospitals. Five (83%) of six serious adverse events in the hip arthroscopy group were related to treatment, and the one in the personalised hip therapy group was not. There were no treatment-related deaths, but one patient in the hip arthroscopy group developed a hip joint infection after surgery.\n\nABSTRACT.INTERPRETATION:\nHip arthroscopy and personalised hip therapy both improved hip-related quality of life for patients with femoroacetabular impingement syndrome. Hip arthroscopy led to a greater improvement than did personalised hip therapy, and this difference was clinically significant. Further follow-up will reveal whether the clinical benefits of hip arthroscopy are maintained and whether it is cost effective in the long term.\n\nABSTRACT.FUNDING:\nThe Health Technology Assessment Programme of the National Institute of Health Research.\n\nBODY.INTRODUCTION:\nFemoroacetabular impingement syndrome is a painful disorder of the hip that is caused by a premature contact (impingement) between the femur and acetabulum during hip movements.1, 2 This premature contact typically occurs as a result of certain hip shapes, such as cam or pincer morphology.1 Cam morphology refers to a flattening or convexity at the femoral head neck junction, whereas pincer morphology refers to a focal or global over-coverage of the femoral head by the acetabulum.1 Femoroacetabular impingement syndrome leads to progressive damage within the joint, including the acetabular labrum and articular cartilage,1 and is associated with the development of osteoarthritis of the hip.1, 3\nBODY.RESEARCH IN CONTEXT:\nEvidence before this study\nFemoroacetabular impingement syndrome is a relatively common non-arthritic cause of hip pain in young adults. It is caused by symptomatic premature contact between the proximal femur and acetabular rim during hip motion, and this phenomenon is associated with certain hip shapes. Repeated impingement leads to damage to the articular cartilage or to acetabular labral tears, and it is these injuries that are thought to be painful. In the past decade, surgeons have developed arthroscopic (ie, keyhole) surgical techniques to treat femoroacetabular impingement syndrome. The rationale is that reshaping the hip and repairing cartilage and labral damage will prevent impingement and relieve symptoms. In 2013, we did a Cochrane systematic review of the effectiveness of hip arthroscopy for femoroacetabular impingement syndrome. We searched MEDLINE (from 1946 to Nov 19, 2013), Embase (1980 to Nov 19, 2013), and the Cochrane Central Register of Controlled Trials (issue 11, 2013), for randomised controlled trials of surgery compared with placebo treatment, non-operative treatment, or no treatment, in any language, in human adults with femoroacetabular impingement syndrome. 
We found no high-quality evidence examining the effectiveness of surgery for femoroacetabular impingement syndrome. Therefore, we chose to do a pragmatic, multicentre, randomised controlled trial to determine the effectiveness of hip arthroscopy compared with best conservative care.\nAdded value of this study\nTo our knowledge, this is the largest randomised controlled trial to show the clinical effectiveness of hip arthroscopy in treating femoroacetabular impingement syndrome. We have shown that patients with femoroacetabular impingement syndrome have better hip-related quality of life at 12 months since randomisation with either hip arthroscopy or best conservative care (personalised hip therapy), and that this quality of life improves significantly more in patients treated hip arthroscopy, exceeding the minimal clinically important difference. Hip arthroscopy was also more expensive than was personalised hip therapy.\nImplications of all the available evidence\nOver the past 15 years, increasing numbers of patients with femoroacetabular impingement syndrome have been treated with hip arthroscopy. This is the first study to show that hip arthroscopy is more clinically effective, at least in the short term, than best conservative care. Longer-term outcomes are required to establish whether this improvement is sustained and whether surgery is cost-effective. These results should be shared with patients when selecting an appropriate treatment strategy.\n\nSurgery has become an established treatment for femoroacetabular impingement syndrome. The aim of such surgery is to reshape the hip joint to prevent impingement.2 Intra-articular injury, such as a cartilage and labral damage, can be resected, repaired, or reconstructed.2 Initially, open surgery was used to treat femoroacetabular impingement syndrome, but the minimally invasive, so-called keyhole technique known as hip arthroscopy is now being used more frequently.1 This approach has become possible because of advances in technology and surgical techniques. Hip arthroscopy is safer and has a shorter recovery time than does open surgery.4, 5 In the UK in 2013, 1908 operations for femoroacetabular impingement syndrome were done by arthroscopic surgery, compared with only 491 by open surgery.6 Since then, there has been a rapid increase in the use of hip arthroscopy in most countries around the world.4, 5, 6 Non-operative treatments for femoroacetabular impingement syndrome are also available,2 including exercise-based packages of conservative care, delivered by a physiotherapist.7 Potential targets for physiotherapy include the abnormal movement patterns and weakness of hip muscles seen in patients with femoroacetabular impingement syndrome.8, 9\nNumerous case series report improvement in patients with femoroacetabular impingement syndrome after open or arthroscopic surgery, or physiotherapy.5, 10 However, a 2014 Cochrane review11 of surgery for treating femoroacetabular impingement syndrome showed that there was no evidence from randomised controlled trials to support these treatments.\nIn a feasibility study,6 we established that patients were prepared to be recruited, and that surgeons were in equipoise and willing to recruit patients to a randomised controlled trial of hip arthroscopy compared with best conservative care. 
Our aim was to measure the clinical effectiveness of hip arthroscopy compared with best conservative care in treating patients with femoroacetabular impingement syndrome.\n\nBODY.METHODS:\nBODY.STUDY DESIGN AND PARTICIPANTS:\nWe did this pragmatic, multicentre, assessor-blind randomised controlled trial in 23 National Health Service (NHS) hospitals in the UK. We treated the initial feasibility study6 as an internal pilot study so that participants who took part in the initial study were included in the main trial recruitment.\nParticipants were recruited from the specialist hip arthroscopy service at each hospital. Participating surgeons identified eligible patients during routine diagnostic consultations. Assessments were medical history, clinical examination, X-rays, and cross-sectional imaging (MRI, CT, or both). For patients with a diagnosis of femoroacetabular impingement syndrome, the surgeon classified them as having cam (alpha angle >55°), pincer (lateral centre-edge angle >40° or a positive crossover sign), or mixed-type (combination of cam and pincer) impingement.12\nPatients were eligible if they had hip pain, radiographic features of cam or pincer morphology, were at least 16 years old, were able to give informed consent, and if the treating surgeon believed that they were likely to benefit from hip arthroscopy. Patients were excluded if they had hip osteoarthritis (Tonnis grade >1 or less than 2 mm of superior joint space on an antero-posterior radiograph); a history of hip pathology such as Perthes' disease, slipped upper femoral epiphysis, or avascular necrosis, or previous hip injury such as acetabular fracture, hip dislocation, or femoral neck fracture; or if they had already had shape-changing surgery (open or arthroscopic) of the hip.13, 14 Patients with bilateral femoroacetabular impingement syndrome were eligible, and only the most symptomatic hip was randomly assigned to treatment and followed up. Trained research associates approached eligible patients to explain the trial and to invite them to participate. All participants gave written informed consent.\nQualitative research, to understand recruitment as it occurred, was integrated into the trial. The findings were used to design a recruiter training and centre support programme that was implemented during the trial to optimise recruitment. The research was based on the QuinteT Recruitment Intervention and continued the work done during the internal pilot trial.15, 16\nThe study was approved by NHS Research Ethics Service West Midlands (14/WM/0124). An independent trial steering committee and data monitoring committee provided oversight of the progression of the study. The study protocol has previously been published.17\n\nBODY.RANDOMISATION AND MASKING:\nParticipants were randomly assigned (1:1) with a computer-generated minimisation (adaptive stratified sampling) algorithm for centre and type of impingement, to receive either hip arthroscopy or best conservative care. All baseline data were collected before randomisation, which was done by the recruiting research associate. Allocation concealment was ensured by use of a secure telephone randomisation service hosted by Warwick Clinical Trials Unit. It was not possible to mask patients or the treating clinicians to their allocation. 
Researchers who collected outcome assessments and analysed the results were masked to allocation by concealment of treatment.\n\nBODY.PROCEDURES:\nSurgery for femoroacetabular impingement syndrome was done using arthroscopic techniques by a senior surgeon (consultant grade) in the NHS, who was trained and experienced in hip arthroscopy. Further details about surgeons' training and experience, and surgical procedures are shown in the appendix. 27 trial surgeons did the surgery. Hip arthroscopy was done under general anaesthesia in a lateral or supine position. Arthroscopic portals were established in the central and peripheral compartments of the hip under radiographic guidance according to the surgeon's usual practice. Shape abnormalities and consequent labral and cartilage pathology were treated. Adequacy of bony reshaping was assessed by intraoperative image intensifier views or by arthroscopic visualisation of a satisfactory impingement free range of movement of the hip, or both. Patients were allowed home when they could walk safely with crutches (typically within 24 h). Patients were referred to outpatient physiotherapy services for a course of rehabilitation as per usual care for that surgeon. These post-operative physiotherapists were distinct from those providing conservative care, to avoid contamination between groups. Patients had MRI of their hip at least 6 weeks after surgery.\nA panel of international experts assessed the fidelity of the surgery (appendix). They reviewed operation notes, intraoperative images, and postoperative MRI scans to subjectively assess whether adequate surgery had been undertaken, according to the protocol. The panel discussed each case and subjectively assessed whether shape abnormalities and intra-articular pathology were treated, and whether there was a sufficient resection to allow impingement-free range of motion. This approach included assessing the proximal femoral and acetabular rim resections and whether the resection edges were smooth.\nPersonalised hip therapy is a package of physiotherapist-led rehabilitation for femoroacetabular impingement syndrome. It was developed during our feasibility study and tested during the pilot study.6 Although the name for this intervention is new, the care offered was based on a consensus of what physiotherapists, physicians, and surgeons currently regard as best conservative care for femoroacetabular impingement syndrome.7 Personalised hip therapy has four core components: an assessment of pain, function, and range of hip motion; patient education; an exercise programme taught in the clinic and repeated at home, that has the key features of individualisation, progression, and supervision; and help with pain relief, which could include one X-ray or ultrasound-guided intra-articular steroid injection when pain prevents performance of the exercise programme.18\nPersonalised hip therapy was delivered by at least one physiotherapist at each centre. 47 physiotherapists were trained formally in this protocol through a 1-day workshop and supported to deliver personalised hip therapy through refresher workshops (appendix). At their initial assessment, participants received a personalised hip therapy information pack that described what to expect during the course of their treatment. They then had between six and ten face-to-face contacts with the physiotherapist over 12–24 weeks. Some contacts were allowed by either telephone or email when geographical distance prevented all contacts being carried out face-to-face. 
Patient-completed exercise diaries were also encouraged to help both patients and physiotherapists monitor progress and adherence with personalised hip therapy.\nPhysiotherapists recorded full details of their advice and treatments, number and type of treatment contacts, and any non-attendance, on case report forms. These case report forms were reviewed for accuracy in comparison to the usual physiotherapy records at each treatment centre and then assessed for fidelity to the personalised hip therapy protocol by a panel comprising members of the core group who developed the protocol for personalised hip therapy (appendix). The personalised hip therapy panel reviewed case report forms to ensure participants received an adequate number of sessions, all four core components, and that their exercise programme was individualised, supervised, and progressive.\n\nBODY.OUTCOMES:\nThe primary outcome was hip-related quality of life, as measured by the international Hip Outcome Tool (iHOT-33) at 12 months after randomisation.19 iHOT-33 is a patient-derived and patient-reported outcome instrument designed to measure hip-related quality of life in young adults with non-arthritic hip pain. The iHOT-33 consists of four domains: symptoms and functional limitations; sports and recreational physical activities; job-related concerns; and social, emotional, and lifestyle concerns.18 iHOT-33 provides a 100-point score, with 100 representing no pain and perfect function, and lower scores indicating pain and poor function. The instrument has been validated in a relevant population for this trial,19 and has a minimum clinically important difference of 6·1 points.19, 20 Secondary outcomes were health-related quality of life, measured with the EuroQol EQ-5D-5L, the 12-item Short Form Health Survey (SF-12 version 2), adverse events, and health-care resource use, all measured 6 and 12 months after treatment allocation.21, 22, 23 Patients reported adverse events 6 weeks after the start of their intervention (first session of personalised hip therapy, or date of hip arthroscopy). Any serious adverse events were reported by each centre to the Warwick Clinical Trials Unit. Forms for iHOT-33, EQ-5D-5L, SF-12, adverse events, and health-care resource use were collected by postal questionnaires, which were centrally administered. Scores for these measures were collected at the time of consent and again by postal questionnaire at 6 and 12 months after treatment allocation.\n\nBODY.STATISTICAL ANALYSIS:\nThe planned sample size was 172 participants in each group, based on a standard deviation of iHOT-33 of 16 points and a minimum clinically important difference of 6·1 points, giving a standardised effect size of 0·38. We designed the trial to have 90% power to detect an effect of this size at a two-sided 5% significance level, allowing for up to 15% loss to follow-up at the primary outcome timepoint.\nWe prepared a full statistical analysis plan before the final analysis. No interim analyses were planned. The plan was approved by the independent data monitoring committee.\nIn our primary analysis, we investigated differences in the primary outcome measure, (iHOT-33 score at 12 months after randomisation) between the two treatment groups on a intention-to-treat basis, which included all eligible patients randomly assigned to an intervention. We assessed the primary outcome 12 months from treatment allocation rather than from intervention because this was a pragmatic trial design of two different treatment strategies. 
We used a mixed-effects regression analysis to assess the effects of the interventions on 12-month iHOT-33 scores, after adjusting for the fixed-effects of impingement type, sex, and baseline iHOT-33 score, with recruiting centre included as a random effect to model any potential associations within the recruiting centres. This mixed-effects model was used for all outcome measures.\nWe drew our primary inferences from the intention-to-treat analysis, irrespective of compliance, without imputation for missing data. We also did several additional analyses. We did a per-protocol analysis, comparing participants who actually received surgery and those who actually received personalised hip therapy. Additionally, we compared participants randomly allocated to hip arthroscopy or personalised hip therapy, who received an allocated treatment that was deemed to be of high fidelity. We did prespecified subgroup analyses for different impingement types (cam, pincer, and mixed) and for patients younger and older than 40 years. We added pairwise interaction terms between treatment group and both impingement type and age group to the mixed-effects model to test for important subgroup effects. In addition to the primary adjusted analysis, we also reported unadjusted differences between groups, and we assessed significance using t tests for normally distributed outcomes. We present treatment effect estimates from the adjusted mixed-effects model (primary analysis) with 95% CIs. All hypothesis testing was at the 5% level, with no adjustments for multiple testing. Unless otherwise indicated, the analysis of secondary outcomes followed the same modelling approach as for the primary outcome. All analyses were on a complete-case basis and where follow-up data were missing, the reasons for missing data were obtained and patterns were investigated to judge the plausibility of missingness assumptions. We did a sensitivity analysis using multiple imputation techniques (imputation using chained equations) to assess the effect of missing data on the primary outcome. Finally, we did a post-hoc sensitivity analysis of the effect of variation in recovery time in the hip arthroscopy group. We compared post-intervention adverse events between groups using Fisher's exact test. We did all analyses using Stata (version 14).24\nWe did an economic evaluation from the perspective of the NHS and personal social services.25 We estimated economic costs associated with the delivery of the two interventions. Resource use questions completed by participants at each assessment point provided a profile of all hospital inpatient and outpatient service use, community health and social care encounters, prescribed medications, and NHS supplies, such as crutches or home adaptations. We obtained unit costs (2016 £) from primary and secondary sources in accordance with national guidelines and attached them to every item of resource use.24 We used health utilities generated from EQ-5D-5L responses at every timepoint of assessment to estimate quality-adjusted life-year (QALY) profiles for every participant; these QALYs were calculated as the area under the baseline-adjusted utility curve, assuming linear interpolation between utility measurements. We did a bivariate regression of costs and QALYs, with multiple imputation of missing data, with the view to estimating the incremental cost per QALY gained for hip arthroscopy compared with personalised hip therapy. 
Further details, including sensitivity analyses done to assess the effect of uncertainty surrounding aspects of the economic evaluation, and prespecified subgroup analyses exploring heterogeneity in the cost-effectiveness results, are provided in the appendix. The trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN64081839.\n\nBODY.ROLE OF THE FUNDING SOURCE:\nThe funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.\n\n\nBODY.RESULTS:\nBetween July 20, 2012, and July 15, 2016, 648 patients who attended the participating surgeons' hip clinics were deemed eligible (figure 1). Of these patients, 348 (54%) agreed to participate. The mean age of participants was 35·3 years (SD 9·6); the mean age of those who declined to participate was 35·4 years (10·3). The 348 participants were randomly allocated to receive hip arthroscopy (n=171) or personalised hip therapy (n=177). Three additional patients were randomly allocated in error, in each instance contrary to protocol procedures. One participant, who was not eligible, was recruited after miscommunication between the treating surgeon and recruiter; one patient was allocated without signing the consent form; and one patient was deemed eligible by a trainee surgeon, but within a few days the treating surgeon deemed they were not eligible. None of these patients participated in the trial (figure 1).Figure 1Trial profile\niHOT-33=International Hip Outcome Tool. *Three patients were randomly assigned in error but did not receive treatment and were not followed-up.\n\nParticipants in the two groups were well matched in terms of demographics and pre-randomisation hip-related quality of life, having had symptoms for approximately 3 years (table 1). 14 (8%) participants who were allocated to personalised hip therapy had all or part of this intervention, but then, at their request, went on to have hip arthroscopy within 12 months after randomisation. No patients allocated to hip arthroscopy had personalised hip therapy. For hip arthroscopy, the median time from random assignment to treatment was 122 days (IQR 80–185), and for personalised hip therapy, it was 37 days (22–60). Surgeons did a mean of 112 (SD 55) hip arthroscopies per year during the study. At 12 months after randomisation, 144 (84%) of the 171 participants allocated to receive hip arthroscopy had received it; 27 (16%) had not. 121 (84%) participants who received a hip arthroscopy procedure had postoperative MRI and their case was assessed by the surgical review panel; 105 (87%) of these 121 procedures were deemed to be of high fidelity, and 16 (13%) were deemed unsatisfactory. The reasons for unsatisfactory surgery were an inadequate bony resection of the proximal femur (n=7) and acetabular rim (n=2), a sharp transition from the femoral head to neck as a result of reshaping surgery (n=5), and no reshaping surgery being done because the hip was degenerate (n=2). Nine (5%) of the 177 participants allocated to personalised hip therapy did not receive any treatment by 12 months. Of the patients who received personalised hip therapy (n=154), 107 (70%) were judged to have received the intervention to a high fidelity (appendix). 
The most common reason for low fidelity of personalised hip therapy was participants not receiving the minimum of six therapy sessions (34 [72%] of 47). Other reasons for low-fidelity personalised hip therapy were no progression of exercises by the physiotherapist (11 [23%] of 47) and the patient not complying with the exercise programme (two [4%] of 47).Table 1Baseline characteristics of the study population\nHip arthroscopy (n=171)Personalised hip therapy (n=177)Age (years)\n35·4 (9·7)\n35·2 (9·4)\nSex\n\nWomen\n71 (42%)\n64 (36%)\n\nMen\n100 (58%)\n113 (64%)\nCurrent smoker\n\nYes\n31 (18%)\n25 (14%)\n\nNo\n136 (80%)\n151 (85%)\n\nMissing data\n4 (2%)\n1 (1%)\nHip side considered for treatment\n\nRight\n95 (56%)\n103 (58%)\n\nLeft\n75 (44%)\n74 (42%)\nParticipants with bilateral symptoms\n11 (6%)\n18 (10%)\nDuration of hip symptoms (months)\n37 (36·6)\n40 (40·8)\nImpingement type\n\nCam\n129 (75%)\n133 (75%)\n\nMixed\n29 (17%)\n30 (17%)\n\nPincer\n13 (8%)\n14 (8%)\nUnits of alcohol in an average week\n6·2 (8·6)\n6·0 (7·7)\nDiabetes\n\nYes\n2 (1%)\n4 (2%)\n\nNo\n165 (96%)\n171 (97%)\n\nMissing data\n4 (2%)\n2 (1%)\nChronic renal failure\n\nYes\n1 (1%)\n0\n\nNo\n166 (97%)\n176 (99%)\n\nMissing data\n4 (2%)\n1 (1%)\nPhysical activity (UCLA Activity Scale)\n4·3 (2·5)\n4·4 (2·5)\nHip-related quality of life (iHOT-33)\n39·2 (20·9)\n35·6 (18·2)\nSF-12 PCS\n44 (7·6)\n44 (5·9)\nSF-12 MCS\n42 (7·1)\n42 (7·3)\nEQ-5D 3L/5L Index Score\n0·576 (0·26)\n0·557 (0·25)\nEQ-5D 5L VAS\n67 (20·2)\n67 (18·7)\nMean lateral centre edge angle (°)\n31 (5)\n31 (5)\nNumber of participants with LCEA <25°\n7 (4%)\n6 (3%)\nNumber of participants with LCEA <20°\n0\n0\nMean alpha angle (°) measured on antero-posterior radiograph\n61 (17)\n64 (18)\nData are mean (SD) or n (%). UCLA=University of California Los Angeles. iHOT-33=International Hip Outcome Tool. SF-12=12-item Short Form Health Survey. PCS=physical component score. MCS=mental component score. VAS=visual analogue score. LCEA=lateral centre edge angle.\n\n319 (92%) of 348 participants completed the iHOT-33 questionnaires at 12 months after randomisation. Seven (2%) withdrew from follow-up, and 22 (6%) were lost to follow-up. The iHOT-33 score increased in both groups, indicating an improvement in hip-related quality of life (table 2, figure 2). 12 months after randomisation, mean iHOT-33 scores had improved from 39·2 (SD 21) to 58·8 (27) for participants in the hip arthroscopy group, and from 35·6 (18) to 49·7 (25) in the personalised hip therapy group. In the primary intention-to-treat analysis at 12 months, the adjusted estimate of treatment effect measured with iHOT-33 was 6·8 (95% CI 1·7 to 12·0, p=0·0093) in favour of hip arthroscopy, compared with personalised hip therapy. In the per-protocol analysis at 12 months, including participants who received personalised hip therapy (n=154) or hip arthroscopy (n=144), the adjusted estimate of the between-group difference on iHOT-33 was 8·2 (95% CI 2·8 to 13·6) in favour of hip arthroscopy. In the exploratory, pre-specified secondary analysis based on those participants whose treatment was deemed of a high fidelity (hip arthroscopy n=105, personalised hip therapy n=107), the adjusted estimate of between-group difference on the iHOT-33 was 5·8 (95% CI −0·7 to 12·2) in favour of hip arthroscopy. 
There were no significant between-group differences in SF-12 or EQ-5D-5L scores at 6 or 12 months after randomisation (table 2).Figure 2Changes in mean iHOT-33 score from baseline to 6 and 12 months after randomisation\nError bars are 95% CIs. iHOT-33=International Hip Outcome Tool.\nTable 2Patient-reported outcome measures\nHip arthroscopy (n=171)Personalised hip therapy (n=177)Unadjusted differenceAdjusted difference (95% CI)p valueMean (SD)nMean (SD)niHOT-33\n6 months\n46·6 (25)\n161\n45·6 (23)\n154\n1·0\n−0·7 (−5·2 to 3·7)\n0·743\n12 months*\n58·8 (27)\n158\n49·7 (25)\n163\n9·1\n6·8 (1·7 to 12·0)\n0·0093\nEQ-5D-5L (utility)\n6 months\n0·544 (0·26)\n144\n0·573 (0·23)\n147\n−0·029\n−0·042 (−0·088 to 0·005)\n0·081\n12 months\n0·615 (0·25)\n152\n0·578 (0·24)\n147\n0·037\n0·020 (−0·027 to 0·067)\n0·397\nEQ-5D VAS\n6 months\n67·8 (19·3)\n145\n70·3 (19·3)\n145\n−2·5\n−2·1 (−5·7 to 1·4)\n0·241\n12 months\n71·9 (20·7)\n150\n69·2 (19·4)\n145\n2·7\n2·6 (−1·2 to 6·4)\n0·180\nSF-12 PCS\n6 months\n43·4 (7·0)\n146\n44·2 (6·6)\n142\n−0·8\n−0·7 (−2·1 to 0·7)\n0·304\n12 months\n45·1 (6·3)\n145\n44·2 (6·4)\n132\n1·0\n1·1 (−0·2 to 2·5)\n0·099\nSF-12 MCS\n6 months\n42·1 (7·3)\n146\n42·1 (7·2)\n142\n−0·1\n−0·1 (−1·5 to 1·3)\n0·929\n12 months\n43·2 (7·1)\n145\n42·6 (6·9)\n132\n0·6\n0·4 (−1·2 to 2·0)\n0·589\niHOT-33=International Hip Outcome Tool. VAS=visual analogue score. PCS=physical component score. MCS=mental component score.\n*Primary outcome.\n\nIn the prespecified subgroup analyses, the between-group difference for iHOT-33 was 5·0 (95% CI −1·2 to 11·3) in participants younger than 40 years and 10·9 (1·7 to 20·1) in those older than 40 years (pinteraction=0·3023) in favour of hip arthroscopy; the difference was 8·3 (95% CI 2·5 to 14·2) in patients with cam morphology, 1·1 (–11·5 to 13·7) in those with mixed cam and pincer morphology, and 4·0 (–14·6 to 22·7) in those with pincer morphology (pinteraction=0·5672), in favour of hip arthroscopy.\nAmong patients who received their allocated intervention, there were 147 patient-reported adverse events (in 100 [73%] of 138 patients) in the hip arthroscopy group versus 102 (in 88 [60%]) of 146 patients in the personalised hip therapy group. The most frequently reported adverse event was muscle soreness, reported by 58 (42%) patients in the hip arthroscopy group and 69 (47%) patients in the personalised hip therapy group (table 3). At 12 months, seven serious adverse events had been reported by participating hospitals. Six of these were among the participants in the hip arthroscopy group: one patient was not discharged from the day surgery unit and required an overnight admission, one patient had scrotal haematoma requiring readmission, two patients had superficial wound infections that required oral antibiotics, one patient had a hip joint infection that required further surgery and ultimately a total hip replacement, and one patient had a fall that was unrelated to hip arthroscopy. One participant in the personalised hip therapy group developed biliary sepsis that was unrelated to treatment. 
There were no treatment-related deaths.Table 3Patient-reported adverse events\nHip arthroscopy (n=138)*Personalised hip therapy (n=146)†p valueMuscle soreness at 6 weeks after intervention\n58 (42%)\n69 (47%)\n0·40\nNumbness in groin, leg, or foot\n35 (25%)\nNA\nNA\nHip pain or stiffness at 6 weeks after intervention\n13 (9%)\n8 (6%)\n0·26\nUnscheduled hospital appointments\n13 (9%)\n6 (4%)\n0·096\nSuperficial wound problems\n9‡ (7%)\nNA\nNA\nHip joint infection\n1 (1%)\nNA\nNA\nFracture\n0\nNA\nNA\nDeep-vein thrombosis\n0\nNA\nNA\nOther adverse events potentially related to intervention\n8 (6%; 2 numbness proximal thigh, 1 scrotal infection, 1 scrotal bruising, 1 labial swelling, 1 ankle pain, 1 erratic International Normalised Ratio, 1 nausea secondary to analgesia, 1 numbness to tip of tongue for 2 weeks after operation)\n1 (1%; muscle spasms)\n0·017\nOther adverse events not related to intervention\n\n18 (13%; 7 lower back pain, 2 knee pain, 2 road traffic collisions, 2 abdominal pain under investigation, 1 viral illness, 1 endometriosis, 1 chronic pain referred to rheumatologist, 1 skin discoloration, 1 multiple sclerosis)\n0·17\nNA=not applicable.\n*Six of 144 patients who received hip arthroscopy within 12 months did not return an adverse events form.\n†Eight of 154 patients who received personalised hip therapy only did not return an adverse events form.\n‡Four patients required antibiotics.\n\nThere was a low level of missing item-level data (eg, iHOT-33 0·6%) in all patient-reported outcome measures at all timepoints; after imputation for missing data, the adjusted estimate of treatment effect was similar, at 6·6 (95% CI 1·7 to 11·4) points in favour of hip arthroscopy. In a post-hoc analysis there was no significant difference in iHOT-33 at 12 months for patients in the hip arthroscopy group who were treated within 6 months of randomisation versus those treated 6 months or more after randomisation (0·9 [95% CI −10·7 to 8·8]). We assessed model assumptions, including assessment of quantile-quantile plots, which were deemed adequate. The mean cost of hip arthroscopy was £3042 (35% staff time, 28% surgical devices and anaesthetic drugs, 19% theatre-running costs, and 18% bed-day costs). Participants in the personalised hip therapy group attended a mean of six physiotherapy sessions (average duration of 30 min), generating mean total treatment costs of £155 per participant. The adjusted incremental cost of hip arthroscopy compared with personalised hip therapy during the 12-month follow-up was £2372, with incremental QALYs of −0·015 (representing a net QALY loss). Personalised hip therapy was more cost-effective than hip arthroscopy at 12 months (appendix).\n\nBODY.DISCUSSION:\nThe UK FASHIoN study is the first randomised controlled trial to provide evidence that hip arthroscopy is effective in patients with femoroacetabular impingement syndrome. In this pragmatic trial, we found that iHOT-33 scores improved for patients in both groups; 12 months after randomisation, there was a mean adjusted difference of 6·8 points in the iHOT-33 score between patients allocated to receive hip arthroscopy and those allocated to receive personalised hip therapy, in favour of hip arthroscopy. This is a statistically significant difference that also exceeded the minimum clinically important difference for iHOT-33. 
These results are consistent with the hypothesis that hip arthroscopy is more clinically effective than best conservative care.\nThere have been many observational studies showing benefit from hip arthroscopy; however, these studies generally did not have control groups for comparison, and are at high risk of bias.6 Results from a Cochrane systematic review done by members of our group11 showed that there had been no previous relevant randomised controlled trials. Since then, one recent randomised controlled trial26 has reported no difference between hip arthroscopy and conservative care. This study was small, was done in a military setting, with a single surgeon in a single centre, and with a very high rate of crossover (70%) from conservative care to hip arthroscopy.26 When the authors did a per-protocol comparison of those who had hip arthroscopy (n=66) with those who had conservative care (n=14), they reported that “power was lost making type II errors possible”.26 They concluded that “large cohorts across multiple sites are needed to make definitive conclusions”.26 Our trial is larger and therefore has greater power to detect between-group differences, and was done in 23 centres with a more generalisable patient population.\nThere were no differences between groups in the secondary outcome measure of general health-related quality of life (EQ-5D-5L and SF-12). This finding could either be because treatment for femoroacetabular impingement syndrome does not have an effect on health-related quality of life or because the measures we used are not sufficiently sensitive to detect the changes that occur. A further possibility is that the trial was not sufficiently powered to detect changes in health-related quality of life.\nAdverse events in the hip arthroscopy group were more frequent than in the personalised hip therapy group. However, there was only one serious surgical complication in which a patient developed a hip joint infection. In one systematic review of 36 761 cases,27 hip arthroscopy had a reported complication rate of 3·3%, with a rate of major complications of 0·2%; our study findings are consistent with this.\nOur within-trial health economic evaluation suggests that hip arthroscopy is not cost-effective by comparison with personalised hip therapy. However, our economic models were only able to assess cost-effectiveness at 12 months from randomisation. This finding must also be set in the context of the high initial treatment costs of hip arthroscopy, the treatment timing (long delay in patients receiving hip arthroscopy, reducing the period of potential benefit during follow-up), and the period of economic inactivity during postoperative recovery (appendix). There could be long-term benefits from treatment that were not assessed in this economic analysis. We plan further follow-up points at 2, 3, 5, and 10 years, which will inform the lifetime cost-effectiveness of both surgery and personalised hip therapy, and whether treatment effects are maintained or if further treatments are required. Comparison of the rates of hip replacement in both groups will also help establish whether surgery affects the risk of osteoarthritis.1, 3\nStrengths of this trial include the consent to participate rate among eligible patients (54%) and the follow-up rate (92%). 
Both of these values are high compared with similar trials in orthopaedics, and especially with trials of surgery versus no surgery, contributing to external and internal validity.28 The integrated qualitative research optimised recruitment, as it has done in other trials.15 This trial was thoroughly pragmatic, exploring the effectiveness of a strategy of offering hip arthroscopy compared with conservative care in the everyday reality of a national health service, where patients do not always receive or comply with the treatment they are offered, where surgeons and physiotherapists have varying levels of training, skill, and expertise, where postoperative care is variable, and where there are waiting lists for treatment. The large number of centres (n=23), surgeons (n=27), and physiotherapists (n=43) involved is a strength, which contributes to the generalisability of our findings. The comparator for this trial was personalised hip therapy, which we consider to be the best conservative care that can realistically be provided in the NHS for these patients. Personalised hip therapy was designed through international consensus and developed, supported, and tested in similar ways to other physiotherapist-led conservative care protocols.29 It meets the standards expected of a complex intervention in a randomised controlled trial, and was delivered by musculoskeletal physiotherapists who attended additional training and support events.6, 7, 30\nLimitations of this trial include that participants and treating clinicians were not masked to treatment allocation. A blinded allocation trial, with a placebo control, would have been better suited to measuring the underlying effect of surgery. In our trial, the research question was whether hip arthroscopy or best conservative care was the most effective treatment strategy, leading to an inevitable absence of blinding. Data collection and analysis were done without revealing treatment allocation. An unexpected difficulty was the frequent delay in delivery of surgery for those patients allocated to hip arthroscopy. We had anticipated that this delay would be less than 3 months in most patients because when we designed the trial, there was a strongly enforced NHS target to treat patients within 18 weeks from referral to surgery. In fact, during the study, this target was a challenge in many hospitals. Patients allocated to hip arthroscopy therefore often had longer times to treatment, and because the primary outcome was measured 12 months after randomisation, these patients were often still within a few months (and in some cases a few weeks) of their operation when we measured the primary outcome. Because of these delays, patients in the hip arthroscopy group had, on average, less time to recover before the primary outcome measurement than did patients in the personalised hip therapy group. We compared the outcome of hip arthroscopy participants who had surgery in the first 6 months after randomisation with those who had surgery in the second 6 months. There was no significant difference between these groups, suggesting that the systematic difference in time to treatment between groups does not account for the treatment effect. 
Our inferences about the effectiveness of hip arthroscopy compared with personalised hip therapy are limited to data collected 12 months after randomisation; long-term follow-up is required to establish if this effect is maintained and if further treatments are required.\nThe fact that not all surgery or personalised hip therapy was deemed to be of a high fidelity is also a reflection of the real-world setting in which this trial was done. Some surgery was not satisfactory, and some participants allocated to personalised hip therapy did not engage with it or complete it: our fidelity assessment showed high-fidelity treatment in 87% of patients receiving hip arthroscopy and 70% of patients receiving personalised hip therapy. However, these proportions are comparable with other studies and reflect the pragmatism of our trial.31 We took great effort to minimise crossover in our trial, using techniques developed in our feasibility study. No participants allocated to receive hip arthroscopy received personalised hip therapy; 14 participants allocated to personalised hip therapy subsequently changed their mind and decided to have surgery within 12 months. We do not believe that these crossovers can account for the results of this trial; indeed, we would expect such crossovers to dilute and so reduce our estimate of the real underlying effect of hip arthroscopy.\nPersonalised hip therapy is believed to work by improving muscle control, strength around the hip, and movement patterns, leading to the avoidance of hip impingement. Surgery is thought to work by reshaping the bone to prevent impingement and by treating painful injuries to articular cartilage and labrum. In our trial, the observed effect of hip arthroscopy over conservative care might be attributable to the surgical procedure, the placebo effect of surgery (given the unblinded nature of this trial), post-surgical rehabilitation, or a combination of these factors. The results of our subgroup analysis of patients with only cam morphology are suggestive of an increased treatment effect of hip arthroscopy compared with other shapes. This finding would support the idea that the removal of a cam shape has a specific therapeutic effect. The low number of patients with pincer or mixed cam and pincer morphology in this study means we are less certain about the influence of reshaping the acetabular rim. Ultimately, we do not fully understand the mechanism of benefit from hip arthroscopy or personalised hip therapy. Future research should focus on investigating these mechanisms further, and which patients would benefit most from hip arthroscopy or personalised hip therapy.\nWe have shown that offering hip arthroscopy to patients with femoroacetabular impingement syndrome led to better patient-assessed function 12 months after randomisation, compared with best conservative care. This improvement comes at a cost; our study does not show cost-effectiveness of hip arthroscopy compared with conservative care within the first 12 months.\n\n**Question:** Compared to personalised hip therapy what was the result of hip arthroscopy on International Hip Outcome Tool (iHOT-33)?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
significantly increase
|
13841
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nPreemptive use of etodolac on tooth sensitivity after in-office bleaching: a randomized clinical trial\n\n\n\nABSTRACT.ABSTRACT:\nABSTRACT.PURPOSE:\nThis study determined the effectiveness of the preemptive administration of etodolac on risk and intensity of tooth sensitivity and the bleaching effect caused by in-office bleaching using 35% hydrogen peroxide.\n\nABSTRACT.MATERIAL AND METHODS:\nFifty patients were selected for this tripleblind, randomized, crossover, and placebo-controlled clinical trial. Etodolac (400 mg) or placebo was administrated in a single-dose 1 hour prior to the bleaching procedure. The whitening treatment with 35% hydrogen peroxide was carried out in two sessions with a 7-day interval. Tooth sensitivity was assessed before, during, and 24 hours after the procedure using the analog visual scale and the verbal rating scale. Color alteration was assessed by a bleach guide scale, 7 days after each session. Relative risk of sensitivity was calculated and adjusted by session, while overall risk was compared by the McNemar's test. Data on the sensitivity level of both scales and color shade were subjected to Friedman, Wilcoxon, and Mann-Whitney tests, respectively (α=0.05).\n\nABSTRACT.RESULTS:\nThe preemptive administration of etodolac did not affect the risk of tooth sensitivity and the level of sensitivity reported, regardless of the time of evaluation and scale used. The sequence of treatment allocation did not affect bleaching effectiveness, while the second session resulted in additional color modification. The preemptive administration of etodolac in a single dose 1 hour prior to in-office tooth bleaching did not alter tooth color, and the risk and intensity of tooth sensitivity reported by patients.\n\nABSTRACT.CONCLUSION:\nA single-dose preemptive administration of 400 mg of etodolac did not affect either risk of tooth sensitivity or level of sensitivity reported by patients, during or after the in-office tooth bleaching procedure.\n\nBODY.INTRODUCTION:\nTooth whitening is a simple and non-invasive treatment commonly carried out to reestablish smile aesthetics. High success rates have been demonstrated for bleaching techniques applying 35% hydrogen peroxide (HP35%)\n2\n. Hydrogen peroxide (H2O2)-based bleaching agents at high concentrations (typically 15-38%) are currently used for in-office techniques due to their high oxidizing ability\n12\n\n,\n\n25\n. However, the low molecular weight of H2O2 allows its penetration across the entire dentin tissue, reaching the pulp chamber and promoting damage of pulp stem cells, which is reported by patients as tooth sensitivity\n15\n\n,\n\n29\n.\nPrior clinical trials have reported absolute risk of tooth sensitivity as high as 95% when highly concentrated H2O2 is used for in-office tooth bleaching\n9\n\n,\n\n20\n\n,\n\n23\n\n,\n\n25\n. 
Thus, the preemptive use of desensitizer agents\n3\n\n,\n\n30\n or anti- inflammatories\n7\n\n,\n\n22\n\n,\n\n23\n\n,\n\n27\n has been proposed to reduce the risk of post-bleaching tooth sensitivity. Only the former significantly decreased tooth sensitivity; however, the application of desensitizers, when not incorporated into the bleaching gel, adds an extra step to the bleaching protocol, which is contrary to a clinician's need for simplification. On the other hand, the preemptive use of anti-inflammatories does not increase the number of steps in the bleaching protocol. Unfortunately, prior studies evaluating etoricoxib, ibuprofen, or dexamethasone were unable to demonstrate any beneficial effect on tooth sensitivity caused by tooth bleaching\n7\n\n,\n\n13\n\n,\n\n22\n\n,\n\n23\n\n,\n\n28\n.\nEtoricoxib and ibuprofen are grouped as class II drugs by the Biopharmaceutical Classification System (BCS), presenting low solubility and high permeability, which can hinder their absorption and create bioavailability mismatch during the bleaching procedure\n28\n. Moreover, anti-inflammatory drugs may have a specific action over inflammatory mediators (bradykinin) and the neurotransmitter (substance P) of tooth pain caused by dental bleaching\n6\n. Among non-steroid anti-inflammatory drugs (NSAIDs), etodolac demonstrated efficacy on the control of prostaglandins and bradykinins\n18\n. Nevertheless, there is no clinical evidence regarding the use of etodolac on the reduction of tooth sensitivity caused by in-office bleaching.\nThus, this study aimed to evaluate the effectiveness of the preemptive prescription of etodolac on risk of tooth sensitivity during and after in-office bleaching treatment. The first hypothesis evaluated was that etodolac would reduce both level and risk of tooth sensitivity when administrated in a single dose prior to in-office bleaching. The second hypothesis tested was whether the use of etodolac would reduce tooth sensitivity with no effect on tooth bleaching.\n\nBODY.MATERIAL AND METHODS:\nThis clinical trial followed the CONSORT statements and was approved by the Scientific Review Committee and by the Committee for the Protection of Human Subjects of the local university (CAAE 37578714.4.0000.5546), and registered at clinicaltrials.gov under the number NCT02881619.\nBODY.TRIAL DESIGN:\nThis study was a randomized, triple-blind, placebo- controlled clinical trial with a crossover design. Patients included signed an informed consent form and were submitted to two in-office bleaching sessions receiving placebo (control) or etodolac prior to the bleaching procedure, while different treatments were allocated for each session.\nA one-week interval (\"washout\") in-between sessions was established. The study was conducted at the clinic of the School of Dentistry of the local university from November 2014 to July 2015.\n\nBODY.PARTICIPANTS:\nPatients from 18 to 35 years old with good oral health were included in this clinical trial. From the patients who received placebo/etodolac, 6 were men and 19 were women, and for etodolac/placebo, 12 were men and 13 were women; the average age was 23 years; 64% were women.\nPatients with any of the six upper anterior teeth with caries, restoration, severe discoloration (e.g., stains caused by tetracycline), enamel hypoplasia, gingival recession, dentin exposure, pulpitis, or endodontics were excluded, as well as smokers. 
Participants submitted to previous bleaching procedures, with prior tooth sensitivity, known allergy to any component of the medication used in the study, and pregnant or breastfeeding women were also excluded. An air stream was applied to teeth to verify the presence or absence of sensitivity (none or zero). Only patients presenting all six upper anterior teeth with shade mismatch of 2.5 M2 (Vita Bleach guide 3D-Master scale, Vita-Zahnfabrik, Bad Sackingen, Germany) - A3.5 Vita Classical equivalence - or darker were included.\n\nBODY.SAMPLE SIZE CALCULATION:\nThe sample calculation was based on the primary binary outcome (sensitivity risk 24 hours after the procedure) for superiority trial. The power of the test was set at 80%, considering a type I error of 0.05 and risk of tooth sensitivity of 90%, based on a prior study using a similar bleaching agent\n23\n; moreover, a reduction around 30% was expected after treatment. The calculation resulted in fifty patients.\n\nBODY.RANDOMIZATION:\nA randomized list was computer-generated by a person not involved in intervention or evaluation. Participants were defined as blocks in the randomization process, in which the sequence of treatment (placebo or etodolac) was randomly set for each block by computer-generated tables (www.sealedenvelope.com). The sequence was inserted into sealed envelopes numbered from 1 to 50 that were opened by the operator only at the moment of intervention. Patients were numbered according to the sequence of enrollment. Neither the participant nor the operator knew the group allocation, determining a blinded protocol.\n\nBODY.BASELINE EVALUATION:\nPrior to the bleaching procedure, teeth were cleaned using rubber cups associated with pumice and water. The shades of upper incisors and canine teeth (13, 12, 11, 21, 22, and 23) were assessed on a baseline using the bleach guide scale. Color was analyzed by two previously calibrated evaluators. Both evaluators presented superior color matching competency according to the ISO (International Organization for Standardization)/TR 28642:2011. Shade tabs selected were converted to scores ranging from 1 (whiter shade – 0M1) to 15 (darker shade – 5M3).\nConsidering a possible effect of dental anxiety on the sensitivity reported by patients, the Corah's Dental Anxiety Scale was used to determine the level of anxiety of each patient related to the procedure\n17\n. Each answer to the survey instrument was scored on a scale from 1 to 5 (four questions) and the sum of scores was used to determine the level of anxiety: low was under 12, moderate was between 12 and 14, and high was over 14.\n\nBODY.INTERVENTION:\nOne hour before each bleaching session and right after the prophylaxis, patients received a capsule containing 400 mg of NAISE etodolac (Flancox™, Apsen Farmaceutica S/A) or 400 mg of placebo (inert content) according to randomization. Capsules had the same appearance and were manufactured by a person not involved in intervention or evaluation. They were placed into two bottles identified by letters according to the treatment. Neither the operators responsible for intervention and evaluation nor the patients knew the content of each capsule.\nThe color evaluation was verified, and the light- cured resin dam was applied (Top Dam, FGM, Joinville, SC, Brazil) and polymerized (Radii-cal, SDI, Bayswater, Australia) on the gingival tissue corresponding to the teeth to be bleached. 
A 35% hydrogen peroxide-based bleaching agent (Whiteness HP Blue, FGM, Joinville, SC, Brazil) was mixed and applied to the buccal surface of teeth for 40 minutes. After that, the bleaching agent was removed. A second session was carried out after a week following the same procedures. At this time, the patient received a single-dose capsule containing etodolac or placebo (different from the ones received at the first session), one hour before the procedure. During the bleaching treatment, patients were advised not to ingest colored food and beverages.\n\nBODY.EVALUATIONS:\nTooth sensitivity reported by patients was assessed using a visual analog scale (VAS) and a verbal rating scale (VRS). The VAS consisted of a 10-cm long scale ranging from green (absence of pain) to red (unbearable pain). Patients set their level of sensitivity by pointing to the color corresponding to the pain level, while the distance from this point to the green border was recorded. For the VRS, patients reported their level of sensitivity based on scores: 0= none; 1 = mild; 2= moderate; 3= considerable; and 4= severe. Tooth sensitivity was assessed during bleaching, immediately after removing the bleaching agent, and after 24 hours. For this last assessment, only the VRS was used due to the difficulty of patients to fill the VAS at home. One week after each session, tooth color was evaluated again using the same procedure described previously.\n\nBODY.STATISTICAL ANALYSIS:\nDemographic data from patients were analyzed to determine age, gender, and anxiety level for each allocation sequence. Comparisons among allocation sequences were performed by Mann-Whitney (age), Fisher's exact (gender), and chi-square (anxiety level) tests.\nBased on the presence of any tooth sensitivity (VRS scores different from 0), the absolute risk, odds ratio, and relative risk were calculated regarding the treatments for each moment of bleaching evaluation/ session, as well as the confidence intervals (95%). For each moment, differences on presence/absence ratios were analyzed by Fisher's exact test. For the overall risk related to each treatment, odds ratio was adjusted to the independent variable of \"session of bleaching\" using Mantel-Haenszel statistics. The homogeneity of odds ratios was analyzed by Breslow-Day and Tarone's tests. Next, the estimated odds ratio was converted to relative risk and the overall presence/absence ratios were analyzed by the McNemar's test, considering the study design (crossover).\nFor the VRS, data from scores observed at each moment of bleaching evaluation/session were submitted to the Mann-Whitney rank sum test. Despite the measurement of tooth sensitivity using the VAS, which provided a continuous outcome, data assessed with this scale did not show normal distribution (Shapiro-Wilk's test). Thus, data from the VAS were also analyzed by the Mann-Whitney rank sum test, performing one test per time of evaluation.\nFor color evaluation, comparisons among sequences of treatment were performed using the Mann-Whitney rank sum test. Friedman test followed by Dunn's post hoc test were used to analyze the difference between the moments of evaluation for each sequence of treatment. All statistical analyses were performed adjusting the initial significance level (α=0.05) by the Bonferroni correction.\n\n\nBODY.RESULTS:\n\nFigure 1 shows the flow chart of patients assessed for eligibility, who were included in the study and analyzed. 
Table 1 shows the demographic characteristics of patients allocated for each sequence of treatment. Regarding anxiety, 88% of patients had a low level and only 2% had a high level (p=0.236). There was no difference among the sequences of treatment for any demographic characteristic analyzed (age: p=0.089 and gender: p=0.140).\nFigure 1BODY.FLOW CHART OF PATIENTS:\nTable 1BODY.PROFILE OF PATIENTS INCLUDED IN THE STUDY ALLOCATED TO EACH SEQUENCE OF TREATMENT:\nAge (years)Median (1st / 3rd quartiles)p-value\nTotal\n\n\n\n23.0 (21.0/26.0)\n\n\nPlacebo/Edotolac\n\n23.8 (21.0/25.5)\n\np\n*\n= 0.089\n\nEdotolac/Placebo\n\n25.0 (22.0/28.0)\n\n\nGender Total\n\n\n\nn (%)\n\n\n\n\nMale\n18 (36.0%)\n\n\nFemale\n32 (64.0%)\n\nPlacebo/Edotolac\n\n\n\np\n**\n= 0.140\n\n\nMale\n6 (24.0%)\n\n\nFemale\n19 (76.0%)\n\nEdotolac/Placebo\n\n\n\n\nMale\n12 (52.0%)\n\n\nFemale\n13 (48.0 %)\n\n\nLevel of anxiety Total\n\n\n\nn (%)\n\n\n\n\nLow anxiety\n44 (88.0%)\n\n\nModerate anxiety\n5 (10.0%)\n\n\nHigh anxiety\n1 (2.0%)\n\nPlacebo/Edotolac\n\n\n\np\n***\n= 0.236\n\n\nLow anxiety\n21 (84.0%)\n\n\nModerate anxiety\n4 (16.0%)\n\n\nHigh anxiety\n0 (0.0%)\n\nEdotolac/Placebo\n\n\n\n\nLow anxiety\n23 (92.0%)\n\n\nModerate anxiety\n1 (4.0%)\n\n\nHigh anxiety\n1 (4.0%)\n\n*Mann-Whitney rank sum test;\n**Fisher Exact test;\n***Chi-square test.\n\nTable 2 shows the results of tooth sensitivity risk. The treatment that patients received prior to the bleaching procedure did not affect the risk to sensitivity at any of the moments of evaluation (during and immediately after: p=1.0; after 24 hours: p=0.683). Figure 2 shows the results for level of sensitivity assessed by the VRS. Treatment did not affect level of sensitivity, regardless of the moment of evaluation. Similar results were observed when the VAS was used (Figure 3).\nTable 2BODY.RESULTS OF RISK OF TOOTH SENSITIVITY OBSERVED FOR EACH TREATMENT:\nSessionMoment of evaluationDuringImmediately after24 h after\nTreatment\nEdotolac\nPlacebo\nEdotolac\nPlacebo\nEdotolac\nPlacebo\n1st session\nPresence of sensitivity (yes/no)\n(10/15)\n(11/14)\n(9/16)\n(10/15)\n(3/22)\n(2/23)\n\nOdds ratio (95% CI)\n0.52 (0.17 – 1.61)\n\n0.84 (0.27 – 2.65)\n\n1.57 (0.24 – 10.30)\n\n\nRelative risk (95% CI)\n0.71 (0.40 – 1.29)\n\n0.90 (0.44 – 1.83)\n\n1.50 (0.27 – 8.22)\n\n\np-value*\n\n0.396\n\n1.000\n\n1.000\n\n2nd session\nPresence of sensitivity (yes/no)\n(8/17)\n(5/20)\n(6/19)\n(4/21)\n(1/24)\n(1/24)\n\nOdds ratio (95% CI)\n1.88 (0.52 – 6.85)\n\n1.66 (0.41 – 6.79)\n\n1.00 (0.59 – 16.93)\n\n\nRelative risk (95% CI)\n1.60 (0.61 – 4.22)\n\n1.50 (0.48 – 4.68)\n\n1.00 (0.06 – 15.12)\n\n\np-value*\n\n0.520\n\n0.725\n\n1.000\n\nAverage\nOdds ratio (95% CI)**\n\n0.92 (0.40 – 2.09)\n\n1.11 (0.46 – 2.67)\n\n1.37 (0.29 – 6.51)\n\n\nRelative risk***\n\n0.95\n\n1.08\n\n1.34\n\n\np-value****\n\n1.000\n\n1.000\n\n0.683\n\n*Fisher exact test;\n**Mantel-Haenszel common odds ratio estimate;\n***Based on odds ratio estimated;\n****McNemar's test. The cut-off value of type I error (α=0.0056) was adjusted by Bonferroni correction.\nFigure 2BODY.SCORES OF TOOTH SENSITIVITY FROM THE VRS (0 TO 4):\nFigure 3BODY.LEVEL OF TOOTH SENSITIVITY ASSESSED WITH VAS (CM):\nIn regards to bleaching effectiveness, the bleaching procedure carried out in this study was able to significantly reduce the shade scores from the bleach guide, while the second session resulted in additional bleaching effect (Figure 4). 
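The unadjusted per-session estimates in Table 2 above can be reproduced directly from the reported counts; for example, the "1st session, immediately after" cell (etodolac 9 with / 16 without sensitivity, placebo 10 / 15) gives an odds ratio of about 0.84 (95% CI 0.27 to 2.65) and a relative risk of about 0.90 (95% CI 0.44 to 1.83). A minimal sketch:

```python
from statsmodels.stats.contingency_tables import Table2x2

# "1st session, immediately after" counts from Table 2:
# etodolac 9 with / 16 without sensitivity, placebo 10 / 15.
table = Table2x2([[9, 16],
                  [10, 15]])
print("Odds ratio:    %.2f" % table.oddsratio)        # ~0.84
print("OR 95% CI:    ", table.oddsratio_confint())    # ~(0.27, 2.65)
print("Relative risk: %.2f" % table.riskratio)        # ~0.90
print("RR 95% CI:    ", table.riskratio_confint())    # ~(0.44, 1.83)
```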
The sequence of treatment did not affect bleaching effectiveness.\nFigure 4BODY.SCORES FROM THE BLEACH GUIDE SCALE:\n\nBODY.DISCUSSION:\nTooth bleaching performed by patients at home using low-concentration peroxides has been reported as the first-choice technique for vital bleaching, and probably the most widely used one\n5\n\n,\n\n11\n. However, procedures using high-concentration bleaching agents applied in office by clinicians remain an important protocol used to bleach discolored teeth for specific indications of tooth bleaching. This protocol is separate and intended to aid patients who cannot adapt to the use of home-based bleaching trays and who have contraindications related to the gastric system, because it reduces the risk of gel intake\n14\n, and to control the risk factor for developing gingival irritation\n10\n. In-office bleaching is also indicated to patients requiring faster results, while this technique can be combined with at-home bleaching\n26\n.\nIn-office techniques have demonstrated high bleaching effectiveness using high-concentration hydrogen peroxides\n26\n. However, high-concentrated bleaching agents also result in increased tooth sensitivity reported by patients during and up to 24 hours after the bleaching procedure, which is the main adverse effect related to in-office tooth bleaching\n9\n\n,\n\n20\n\n,\n\n21\n. Even though tooth sensitivity is related to the inflammatory process of pulp tissue\n20\n, the findings of this study showed that the preemptive use of etodolac in a single dose did not affect the risk and level of tooth sensitivity caused by in-office bleaching. As expected, preemptive administration of etodolac also did not affect the bleaching results. Thus, the first hypothesis of the study was rejected and the second one was accepted.\nUnlike the tooth sensitivity typically reported by patients presenting teeth with dentin exposure, which relates mainly to thermal stimuli, bleached teeth can hurt even in the absence of any stimulus, showing that the pain mechanism related to peroxides is different from other types of tooth pain\n20\n. Moreover, the sensitivity caused by tooth bleaching tends to increase within a few hours following the bleaching procedure, when most patients described the pain as a \"twinge\" or \"shock-like.\" It has been demonstrated that the oxidizing agents used during the bleaching procedure cause a reduction on metabolism, viability, and cell proliferation\n20\n, allowing to increase the expression of inflammatory mediators, such as substance P and bradykinin, which is a vasoactive peptide released by nerves resulting in a neurogenic inflammation\n6\n. Thus, a preemptive administration of anti-inflammatories could be a reasonable approach to reduce tooth sensitivity associated with bleaching procedures. However, in this study, sensitivity was not lower during and 24 hours after the bleaching procedure. Unfortunately, the peak of tooth sensitivity was not measured in this study, even though this outcome could allow to assess a possible effect of etodolac on sensitivity following the end of the bleaching procedure.\nA prior study demonstrated that etodolac presents higher effectiveness on bradykinin inhibition than other drugs commonly used to control the inflammatory process, motivating its use in this study\n18\n. Regarding pharmacokinetics, etodolac reaches its maximal plasma concentration around 1 to 2 hours after its administration\n4\n. 
Thus, it was expected that maximal plasma concentration would be reached during the bleaching procedure. Unfortunately, no effect of the preemptive administration of etodolac on the reported tooth sensitivity was observed at this time of evaluation for both sessions. Despite its demonstrated efficacy on inflammation control, the analgesic effect of etodolac increases when its administration is repeated, whereas a single dose prior to the bleaching procedure seems insufficient to prevent tooth sensitivity\n19\n.\nAnother important observation regarding pharmacokinetics is that a crossover design was used in this study with a 1-week interval period. Considering that the half-life of etodolac after oral administration is around 13 hours\n16\n, it is expected to find no residual effect after 1-week. Moreover, the crossover design avoids bias related to pain thresholds of patients\n1\n.\nParticipants included in this study were predominantly young females presenting low level of anxiety. All these demographics characteristics of the studied population might be associated with differences in pain thresholds. Higher tooth sensory threshold has been demonstrated in males due to differences in crown diameters of teeth and underlying mechanisms such as neurological differences or behavior aspects\n8\n. Regarding the age of participants, a recent review did not find any relation between age and the risk to or level of tooth sensitivity\n27\n. However, it is important to emphasize that most participants from trials included in that review were under 30 years old\n27\n. Another important demographic aspect assessed in this study was the participants' level of dental anxiety prior to bleaching procedures. It has been demonstrated that dental anxiety is a strong predictor of pain and that anxious participants are prone to develop painful responses\n24\n. In this research, almost 90% of participants presented low anxiety prior to bleaching procedures, which can be justified by the low invasive aspect of intervention, despite the patients' concern about tooth sensitivity. In fact, despite 66% of participants reporting various level of tooth sensitivity (high risk), the actual level of sensitivity reported was low (medians below moderate at VRS, and means lower than 2 at VAS).\nIn addition to the evaluation of tooth sensitivity, we also assessed color alteration promoted by bleaching procedures. The data analysis of color evaluation used the sequence of allocation instead of the treatment (placebo or etodolac). If we had used treatment, different participants between the first and second sessions would be compared for the same treatment, which would have impaired correct color changes assessment. Moreover, the main aim of color evaluation was to show the effectiveness of the bleaching technique used.\nIn our study, the protocol that was carried out resulted in significant bleaching effect, while an additional color alteration was achieved at the second session. In addition, the last color evaluation was performed a week after the last bleaching session of this study, while longer times may be required for color stabilization\n21\n. However, a shorter time was used because the tooth sensitivity reported by patients was the main outcome of this trial. It has been demonstrated that two sessions of in-office tooth bleaching results in a mean change of 5.3 (± 2.8) units on shade guides\n26\n, which is similar to the average color change achieved in this research. 
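The adequacy of the one-week washout mentioned above can be checked with a back-of-the-envelope calculation: assuming simple mono-exponential (first-order) elimination with the roughly 13-hour half-life cited, about 13 half-lives elapse between sessions and the residual fraction of the dose is on the order of 0.01%.

```python
# Residual etodolac after the one-week washout, assuming simple first-order
# (mono-exponential) elimination with the ~13 h half-life cited above.
half_life_h = 13.0
washout_h = 7 * 24

half_lives = washout_h / half_life_h
residual_fraction = 0.5 ** half_lives
print(f"Half-lives elapsed: {half_lives:.1f}")          # ~12.9
print(f"Residual fraction:  {residual_fraction:.1e}")   # ~1e-4 of the dose
```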
Factors such as patient's age and color at baseline have been strongly associated with bleaching effectiveness, while young patients and darker teeth show more pronounced color changes\n27\n. We found no difference in baseline regarding these parameters between the sequence of allocation, while the inclusion of young participants presenting all teeth darker than 2.5 M2 (score 7) favored obtaining significant color bleaching.\nIn conclusion, the preemptive administration of etodolac in a single dose 1 hour prior to the bleaching procedure was unable to reduce both risk and level of sensitivity caused by in-office bleaching. A limitation of this study was that the preemptive treatment was administrated only for a young population (average age of 23 years), with prevalence of female patients, while different results can be observed for other demographic profiles\n27\n.\n\nBODY.CONCLUSIONS:\nThe preemptive administration of a single dose of etodolac previously to the two bleaching sessions with 35% hydrogen peroxide did not affect tooth color change, risk of sensitivity and level of pain reported by the patients (during the sessions, immediately after, and 24 h after sessions).\n\n**Question:** Compared to placebo what was the result of Etodolac (400 mg) on tooth sensitivity risk?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13842
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nPreemptive use of etodolac on tooth sensitivity after in-office bleaching: a randomized clinical trial\n\n\n\nABSTRACT.ABSTRACT:\nABSTRACT.PURPOSE:\nThis study determined the effectiveness of the preemptive administration of etodolac on risk and intensity of tooth sensitivity and the bleaching effect caused by in-office bleaching using 35% hydrogen peroxide.\n\nABSTRACT.MATERIAL AND METHODS:\nFifty patients were selected for this tripleblind, randomized, crossover, and placebo-controlled clinical trial. Etodolac (400 mg) or placebo was administrated in a single-dose 1 hour prior to the bleaching procedure. The whitening treatment with 35% hydrogen peroxide was carried out in two sessions with a 7-day interval. Tooth sensitivity was assessed before, during, and 24 hours after the procedure using the analog visual scale and the verbal rating scale. Color alteration was assessed by a bleach guide scale, 7 days after each session. Relative risk of sensitivity was calculated and adjusted by session, while overall risk was compared by the McNemar's test. Data on the sensitivity level of both scales and color shade were subjected to Friedman, Wilcoxon, and Mann-Whitney tests, respectively (α=0.05).\n\nABSTRACT.RESULTS:\nThe preemptive administration of etodolac did not affect the risk of tooth sensitivity and the level of sensitivity reported, regardless of the time of evaluation and scale used. The sequence of treatment allocation did not affect bleaching effectiveness, while the second session resulted in additional color modification. The preemptive administration of etodolac in a single dose 1 hour prior to in-office tooth bleaching did not alter tooth color, and the risk and intensity of tooth sensitivity reported by patients.\n\nABSTRACT.CONCLUSION:\nA single-dose preemptive administration of 400 mg of etodolac did not affect either risk of tooth sensitivity or level of sensitivity reported by patients, during or after the in-office tooth bleaching procedure.\n\nBODY.INTRODUCTION:\nTooth whitening is a simple and non-invasive treatment commonly carried out to reestablish smile aesthetics. High success rates have been demonstrated for bleaching techniques applying 35% hydrogen peroxide (HP35%)\n2\n. Hydrogen peroxide (H2O2)-based bleaching agents at high concentrations (typically 15-38%) are currently used for in-office techniques due to their high oxidizing ability\n12\n\n,\n\n25\n. However, the low molecular weight of H2O2 allows its penetration across the entire dentin tissue, reaching the pulp chamber and promoting damage of pulp stem cells, which is reported by patients as tooth sensitivity\n15\n\n,\n\n29\n.\nPrior clinical trials have reported absolute risk of tooth sensitivity as high as 95% when highly concentrated H2O2 is used for in-office tooth bleaching\n9\n\n,\n\n20\n\n,\n\n23\n\n,\n\n25\n. 
Thus, the preemptive use of desensitizer agents\n3\n\n,\n\n30\n or anti- inflammatories\n7\n\n,\n\n22\n\n,\n\n23\n\n,\n\n27\n has been proposed to reduce the risk of post-bleaching tooth sensitivity. Only the former significantly decreased tooth sensitivity; however, the application of desensitizers, when not incorporated into the bleaching gel, adds an extra step to the bleaching protocol, which is contrary to a clinician's need for simplification. On the other hand, the preemptive use of anti-inflammatories does not increase the number of steps in the bleaching protocol. Unfortunately, prior studies evaluating etoricoxib, ibuprofen, or dexamethasone were unable to demonstrate any beneficial effect on tooth sensitivity caused by tooth bleaching\n7\n\n,\n\n13\n\n,\n\n22\n\n,\n\n23\n\n,\n\n28\n.\nEtoricoxib and ibuprofen are grouped as class II drugs by the Biopharmaceutical Classification System (BCS), presenting low solubility and high permeability, which can hinder their absorption and create bioavailability mismatch during the bleaching procedure\n28\n. Moreover, anti-inflammatory drugs may have a specific action over inflammatory mediators (bradykinin) and the neurotransmitter (substance P) of tooth pain caused by dental bleaching\n6\n. Among non-steroid anti-inflammatory drugs (NSAIDs), etodolac demonstrated efficacy on the control of prostaglandins and bradykinins\n18\n. Nevertheless, there is no clinical evidence regarding the use of etodolac on the reduction of tooth sensitivity caused by in-office bleaching.\nThus, this study aimed to evaluate the effectiveness of the preemptive prescription of etodolac on risk of tooth sensitivity during and after in-office bleaching treatment. The first hypothesis evaluated was that etodolac would reduce both level and risk of tooth sensitivity when administrated in a single dose prior to in-office bleaching. The second hypothesis tested was whether the use of etodolac would reduce tooth sensitivity with no effect on tooth bleaching.\n\nBODY.MATERIAL AND METHODS:\nThis clinical trial followed the CONSORT statements and was approved by the Scientific Review Committee and by the Committee for the Protection of Human Subjects of the local university (CAAE 37578714.4.0000.5546), and registered at clinicaltrials.gov under the number NCT02881619.\nBODY.TRIAL DESIGN:\nThis study was a randomized, triple-blind, placebo- controlled clinical trial with a crossover design. Patients included signed an informed consent form and were submitted to two in-office bleaching sessions receiving placebo (control) or etodolac prior to the bleaching procedure, while different treatments were allocated for each session.\nA one-week interval (\"washout\") in-between sessions was established. The study was conducted at the clinic of the School of Dentistry of the local university from November 2014 to July 2015.\n\nBODY.PARTICIPANTS:\nPatients from 18 to 35 years old with good oral health were included in this clinical trial. From the patients who received placebo/etodolac, 6 were men and 19 were women, and for etodolac/placebo, 12 were men and 13 were women; the average age was 23 years; 64% were women.\nPatients with any of the six upper anterior teeth with caries, restoration, severe discoloration (e.g., stains caused by tetracycline), enamel hypoplasia, gingival recession, dentin exposure, pulpitis, or endodontics were excluded, as well as smokers. 
Participants who had undergone previous bleaching procedures, those with prior tooth sensitivity or known allergy to any component of the medication used in the study, and pregnant or breastfeeding women were also excluded. An air stream was applied to the teeth to verify the presence or absence of sensitivity (a score of none/zero was required). Only patients presenting all six upper anterior teeth with a shade match of 2.5 M2 (Vita Bleach guide 3D-Master scale, Vita-Zahnfabrik, Bad Sackingen, Germany) - A3.5 Vita Classical equivalence - or darker were included.\n\nBODY.SAMPLE SIZE CALCULATION:\nThe sample size calculation was based on the primary binary outcome (sensitivity risk 24 hours after the procedure) for a superiority trial. The power of the test was set at 80%, considering a type I error of 0.05 and a risk of tooth sensitivity of 90%, based on a prior study using a similar bleaching agent\n23\n; moreover, a reduction of around 30% was expected after treatment. The calculation resulted in fifty patients.\n\nBODY.RANDOMIZATION:\nA randomized list was computer-generated by a person not involved in intervention or evaluation. Each participant was treated as a block in the randomization process, and the sequence of treatment (placebo or etodolac) was randomly set for each block by computer-generated tables (www.sealedenvelope.com). The sequence was inserted into sealed envelopes numbered from 1 to 50 that were opened by the operator only at the moment of intervention. Patients were numbered according to the sequence of enrollment. Neither the participant nor the operator knew the group allocation, determining a blinded protocol.\n\nBODY.BASELINE EVALUATION:\nPrior to the bleaching procedure, teeth were cleaned using rubber cups with pumice and water. The shades of the upper incisors and canines (13, 12, 11, 21, 22, and 23) were assessed at baseline using the bleach guide scale. Color was analyzed by two previously calibrated evaluators. Both evaluators presented superior color matching competency according to the ISO (International Organization for Standardization)/TR 28642:2011. The selected shade tabs were converted to scores ranging from 1 (whiter shade – 0M1) to 15 (darker shade – 5M3).\nConsidering a possible effect of dental anxiety on the sensitivity reported by patients, the Corah's Dental Anxiety Scale was used to determine the level of anxiety of each patient related to the procedure\n17\n. Each answer to the survey instrument was scored on a scale from 1 to 5 (four questions) and the sum of scores was used to determine the level of anxiety: low was under 12, moderate was between 12 and 14, and high was over 14.\n\nBODY.INTERVENTION:\nOne hour before each bleaching session and right after the prophylaxis, patients received a capsule containing 400 mg of the NSAID etodolac (Flancox™, Apsen Farmaceutica S/A) or 400 mg of placebo (inert content) according to randomization. Capsules had the same appearance and were manufactured by a person not involved in intervention or evaluation. They were placed into two bottles identified by letters according to the treatment. Neither the operators responsible for intervention and evaluation nor the patients knew the content of each capsule.\nThe color evaluation was verified, and a light-cured resin dam was applied (Top Dam, FGM, Joinville, SC, Brazil) and polymerized (Radii-cal, SDI, Bayswater, Australia) on the gingival tissue corresponding to the teeth to be bleached.
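As a small illustration of the anxiety scoring described above, the helper below applies the stated cut-offs (sum of four items, each scored 1 to 5; low under 12, moderate 12 to 14, high over 14). The function name and the example scores are ours, not part of the instrument.

```python
def corah_anxiety_level(item_scores):
    """Classify a Corah's Dental Anxiety Scale result using the cut-offs
    described above (sum of four items, each scored 1 to 5)."""
    if len(item_scores) != 4 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("expected four item scores between 1 and 5")
    total = sum(item_scores)
    if total < 12:
        return "low"
    if total <= 14:
        return "moderate"
    return "high"

print(corah_anxiety_level([2, 3, 2, 3]))  # total 10 -> "low"
```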
A 35% hydrogen peroxide-based bleaching agent (Whiteness HP Blue, FGM, Joinville, SC, Brazil) was mixed and applied to the buccal surface of teeth for 40 minutes. After that, the bleaching agent was removed. A second session was carried out after a week following the same procedures. At this time, the patient received a single-dose capsule containing etodolac or placebo (different from the ones received at the first session), one hour before the procedure. During the bleaching treatment, patients were advised not to ingest colored food and beverages.\n\nBODY.EVALUATIONS:\nTooth sensitivity reported by patients was assessed using a visual analog scale (VAS) and a verbal rating scale (VRS). The VAS consisted of a 10-cm long scale ranging from green (absence of pain) to red (unbearable pain). Patients set their level of sensitivity by pointing to the color corresponding to the pain level, while the distance from this point to the green border was recorded. For the VRS, patients reported their level of sensitivity based on scores: 0= none; 1 = mild; 2= moderate; 3= considerable; and 4= severe. Tooth sensitivity was assessed during bleaching, immediately after removing the bleaching agent, and after 24 hours. For this last assessment, only the VRS was used due to the difficulty of patients to fill the VAS at home. One week after each session, tooth color was evaluated again using the same procedure described previously.\n\nBODY.STATISTICAL ANALYSIS:\nDemographic data from patients were analyzed to determine age, gender, and anxiety level for each allocation sequence. Comparisons among allocation sequences were performed by Mann-Whitney (age), Fisher's exact (gender), and chi-square (anxiety level) tests.\nBased on the presence of any tooth sensitivity (VRS scores different from 0), the absolute risk, odds ratio, and relative risk were calculated regarding the treatments for each moment of bleaching evaluation/ session, as well as the confidence intervals (95%). For each moment, differences on presence/absence ratios were analyzed by Fisher's exact test. For the overall risk related to each treatment, odds ratio was adjusted to the independent variable of \"session of bleaching\" using Mantel-Haenszel statistics. The homogeneity of odds ratios was analyzed by Breslow-Day and Tarone's tests. Next, the estimated odds ratio was converted to relative risk and the overall presence/absence ratios were analyzed by the McNemar's test, considering the study design (crossover).\nFor the VRS, data from scores observed at each moment of bleaching evaluation/session were submitted to the Mann-Whitney rank sum test. Despite the measurement of tooth sensitivity using the VAS, which provided a continuous outcome, data assessed with this scale did not show normal distribution (Shapiro-Wilk's test). Thus, data from the VAS were also analyzed by the Mann-Whitney rank sum test, performing one test per time of evaluation.\nFor color evaluation, comparisons among sequences of treatment were performed using the Mann-Whitney rank sum test. Friedman test followed by Dunn's post hoc test were used to analyze the difference between the moments of evaluation for each sequence of treatment. All statistical analyses were performed adjusting the initial significance level (α=0.05) by the Bonferroni correction.\n\n\nBODY.RESULTS:\n\nFigure 1 shows the flow chart of patients assessed for eligibility, who were included in the study and analyzed. 
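A minimal sketch of the per-time-point scale comparison described above: one Mann-Whitney test per moment of evaluation, with the significance level adjusted by Bonferroni correction. The VAS readings and the number of comparisons here are hypothetical placeholders rather than trial data.

```python
from scipy.stats import mannwhitneyu

# One Mann-Whitney test per moment of evaluation, Bonferroni-adjusted alpha.
# The VAS values below are hypothetical placeholders.
vas_etodolac = {"during": [0.0, 1.2, 0.5, 0.0, 2.1, 0.3],
                "immediately after": [0.0, 0.8, 0.3, 0.0, 1.5, 0.2]}
vas_placebo = {"during": [0.4, 0.9, 0.0, 1.8, 0.0, 0.6],
               "immediately after": [0.2, 0.5, 0.0, 1.1, 0.0, 0.4]}

alpha_adjusted = 0.05 / len(vas_etodolac)  # Bonferroni correction
for moment in vas_etodolac:
    stat, p = mannwhitneyu(vas_etodolac[moment], vas_placebo[moment],
                           alternative="two-sided")
    verdict = "significant" if p < alpha_adjusted else "not significant"
    print(f"{moment}: U={stat:.1f}, p={p:.3f} -> {verdict} at alpha={alpha_adjusted:.3f}")
```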
Table 1 shows the demographic characteristics of patients allocated for each sequence of treatment. Regarding anxiety, 88% of patients had a low level and only 2% had a high level (p=0.236). There was no difference among the sequences of treatment for any demographic characteristic analyzed (age: p=0.089 and gender: p=0.140).\nFigure 1BODY.FLOW CHART OF PATIENTS:\nTable 1BODY.PROFILE OF PATIENTS INCLUDED IN THE STUDY ALLOCATED TO EACH SEQUENCE OF TREATMENT:\nAge (years)Median (1st / 3rd quartiles)p-value\nTotal\n\n\n\n23.0 (21.0/26.0)\n\n\nPlacebo/Edotolac\n\n23.8 (21.0/25.5)\n\np\n*\n= 0.089\n\nEdotolac/Placebo\n\n25.0 (22.0/28.0)\n\n\nGender Total\n\n\n\nn (%)\n\n\n\n\nMale\n18 (36.0%)\n\n\nFemale\n32 (64.0%)\n\nPlacebo/Edotolac\n\n\n\np\n**\n= 0.140\n\n\nMale\n6 (24.0%)\n\n\nFemale\n19 (76.0%)\n\nEdotolac/Placebo\n\n\n\n\nMale\n12 (52.0%)\n\n\nFemale\n13 (48.0 %)\n\n\nLevel of anxiety Total\n\n\n\nn (%)\n\n\n\n\nLow anxiety\n44 (88.0%)\n\n\nModerate anxiety\n5 (10.0%)\n\n\nHigh anxiety\n1 (2.0%)\n\nPlacebo/Edotolac\n\n\n\np\n***\n= 0.236\n\n\nLow anxiety\n21 (84.0%)\n\n\nModerate anxiety\n4 (16.0%)\n\n\nHigh anxiety\n0 (0.0%)\n\nEdotolac/Placebo\n\n\n\n\nLow anxiety\n23 (92.0%)\n\n\nModerate anxiety\n1 (4.0%)\n\n\nHigh anxiety\n1 (4.0%)\n\n*Mann-Whitney rank sum test;\n**Fisher Exact test;\n***Chi-square test.\n\nTable 2 shows the results of tooth sensitivity risk. The treatment that patients received prior to the bleaching procedure did not affect the risk to sensitivity at any of the moments of evaluation (during and immediately after: p=1.0; after 24 hours: p=0.683). Figure 2 shows the results for level of sensitivity assessed by the VRS. Treatment did not affect level of sensitivity, regardless of the moment of evaluation. Similar results were observed when the VAS was used (Figure 3).\nTable 2BODY.RESULTS OF RISK OF TOOTH SENSITIVITY OBSERVED FOR EACH TREATMENT:\nSessionMoment of evaluationDuringImmediately after24 h after\nTreatment\nEdotolac\nPlacebo\nEdotolac\nPlacebo\nEdotolac\nPlacebo\n1st session\nPresence of sensitivity (yes/no)\n(10/15)\n(11/14)\n(9/16)\n(10/15)\n(3/22)\n(2/23)\n\nOdds ratio (95% CI)\n0.52 (0.17 – 1.61)\n\n0.84 (0.27 – 2.65)\n\n1.57 (0.24 – 10.30)\n\n\nRelative risk (95% CI)\n0.71 (0.40 – 1.29)\n\n0.90 (0.44 – 1.83)\n\n1.50 (0.27 – 8.22)\n\n\np-value*\n\n0.396\n\n1.000\n\n1.000\n\n2nd session\nPresence of sensitivity (yes/no)\n(8/17)\n(5/20)\n(6/19)\n(4/21)\n(1/24)\n(1/24)\n\nOdds ratio (95% CI)\n1.88 (0.52 – 6.85)\n\n1.66 (0.41 – 6.79)\n\n1.00 (0.59 – 16.93)\n\n\nRelative risk (95% CI)\n1.60 (0.61 – 4.22)\n\n1.50 (0.48 – 4.68)\n\n1.00 (0.06 – 15.12)\n\n\np-value*\n\n0.520\n\n0.725\n\n1.000\n\nAverage\nOdds ratio (95% CI)**\n\n0.92 (0.40 – 2.09)\n\n1.11 (0.46 – 2.67)\n\n1.37 (0.29 – 6.51)\n\n\nRelative risk***\n\n0.95\n\n1.08\n\n1.34\n\n\np-value****\n\n1.000\n\n1.000\n\n0.683\n\n*Fisher exact test;\n**Mantel-Haenszel common odds ratio estimate;\n***Based on odds ratio estimated;\n****McNemar's test. The cut-off value of type I error (α=0.0056) was adjusted by Bonferroni correction.\nFigure 2BODY.SCORES OF TOOTH SENSITIVITY FROM THE VRS (0 TO 4):\nFigure 3BODY.LEVEL OF TOOTH SENSITIVITY ASSESSED WITH VAS (CM):\nIn regards to bleaching effectiveness, the bleaching procedure carried out in this study was able to significantly reduce the shade scores from the bleach guide, while the second session resulted in additional bleaching effect (Figure 4). 
The sequence of treatment did not affect bleaching effectiveness.\nFigure 4BODY.SCORES FROM THE BLEACH GUIDE SCALE:\n\nBODY.DISCUSSION:\nTooth bleaching performed by patients at home using low-concentration peroxides has been reported as the first-choice technique for vital bleaching, and probably the most widely used one\n5\n\n,\n\n11\n. However, procedures using high-concentration bleaching agents applied in office by clinicians remain an important protocol used to bleach discolored teeth for specific indications of tooth bleaching. This protocol is separate and intended to aid patients who cannot adapt to the use of home-based bleaching trays and who have contraindications related to the gastric system, because it reduces the risk of gel intake\n14\n, and to control the risk factor for developing gingival irritation\n10\n. In-office bleaching is also indicated to patients requiring faster results, while this technique can be combined with at-home bleaching\n26\n.\nIn-office techniques have demonstrated high bleaching effectiveness using high-concentration hydrogen peroxides\n26\n. However, high-concentrated bleaching agents also result in increased tooth sensitivity reported by patients during and up to 24 hours after the bleaching procedure, which is the main adverse effect related to in-office tooth bleaching\n9\n\n,\n\n20\n\n,\n\n21\n. Even though tooth sensitivity is related to the inflammatory process of pulp tissue\n20\n, the findings of this study showed that the preemptive use of etodolac in a single dose did not affect the risk and level of tooth sensitivity caused by in-office bleaching. As expected, preemptive administration of etodolac also did not affect the bleaching results. Thus, the first hypothesis of the study was rejected and the second one was accepted.\nUnlike the tooth sensitivity typically reported by patients presenting teeth with dentin exposure, which relates mainly to thermal stimuli, bleached teeth can hurt even in the absence of any stimulus, showing that the pain mechanism related to peroxides is different from other types of tooth pain\n20\n. Moreover, the sensitivity caused by tooth bleaching tends to increase within a few hours following the bleaching procedure, when most patients described the pain as a \"twinge\" or \"shock-like.\" It has been demonstrated that the oxidizing agents used during the bleaching procedure cause a reduction on metabolism, viability, and cell proliferation\n20\n, allowing to increase the expression of inflammatory mediators, such as substance P and bradykinin, which is a vasoactive peptide released by nerves resulting in a neurogenic inflammation\n6\n. Thus, a preemptive administration of anti-inflammatories could be a reasonable approach to reduce tooth sensitivity associated with bleaching procedures. However, in this study, sensitivity was not lower during and 24 hours after the bleaching procedure. Unfortunately, the peak of tooth sensitivity was not measured in this study, even though this outcome could allow to assess a possible effect of etodolac on sensitivity following the end of the bleaching procedure.\nA prior study demonstrated that etodolac presents higher effectiveness on bradykinin inhibition than other drugs commonly used to control the inflammatory process, motivating its use in this study\n18\n. Regarding pharmacokinetics, etodolac reaches its maximal plasma concentration around 1 to 2 hours after its administration\n4\n. 
Thus, it was expected that maximal plasma concentration would be reached during the bleaching procedure. Unfortunately, no effect of the preemptive administration of etodolac on the reported tooth sensitivity was observed at this time of evaluation for both sessions. Despite its demonstrated efficacy on inflammation control, the analgesic effect of etodolac increases when its administration is repeated, whereas a single dose prior to the bleaching procedure seems insufficient to prevent tooth sensitivity\n19\n.\nAnother important observation regarding pharmacokinetics is that a crossover design was used in this study with a 1-week interval period. Considering that the half-life of etodolac after oral administration is around 13 hours\n16\n, it is expected to find no residual effect after 1-week. Moreover, the crossover design avoids bias related to pain thresholds of patients\n1\n.\nParticipants included in this study were predominantly young females presenting low level of anxiety. All these demographics characteristics of the studied population might be associated with differences in pain thresholds. Higher tooth sensory threshold has been demonstrated in males due to differences in crown diameters of teeth and underlying mechanisms such as neurological differences or behavior aspects\n8\n. Regarding the age of participants, a recent review did not find any relation between age and the risk to or level of tooth sensitivity\n27\n. However, it is important to emphasize that most participants from trials included in that review were under 30 years old\n27\n. Another important demographic aspect assessed in this study was the participants' level of dental anxiety prior to bleaching procedures. It has been demonstrated that dental anxiety is a strong predictor of pain and that anxious participants are prone to develop painful responses\n24\n. In this research, almost 90% of participants presented low anxiety prior to bleaching procedures, which can be justified by the low invasive aspect of intervention, despite the patients' concern about tooth sensitivity. In fact, despite 66% of participants reporting various level of tooth sensitivity (high risk), the actual level of sensitivity reported was low (medians below moderate at VRS, and means lower than 2 at VAS).\nIn addition to the evaluation of tooth sensitivity, we also assessed color alteration promoted by bleaching procedures. The data analysis of color evaluation used the sequence of allocation instead of the treatment (placebo or etodolac). If we had used treatment, different participants between the first and second sessions would be compared for the same treatment, which would have impaired correct color changes assessment. Moreover, the main aim of color evaluation was to show the effectiveness of the bleaching technique used.\nIn our study, the protocol that was carried out resulted in significant bleaching effect, while an additional color alteration was achieved at the second session. In addition, the last color evaluation was performed a week after the last bleaching session of this study, while longer times may be required for color stabilization\n21\n. However, a shorter time was used because the tooth sensitivity reported by patients was the main outcome of this trial. It has been demonstrated that two sessions of in-office tooth bleaching results in a mean change of 5.3 (± 2.8) units on shade guides\n26\n, which is similar to the average color change achieved in this research. 
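To give a sense of the uncertainty around the overall risk figure quoted above, the sketch below places a Wilson confidence interval around it, treating the roughly 66% of 50 participants (about 33 patients) as a simple proportion. This ignores the paired crossover structure and is for illustration only.

```python
from statsmodels.stats.proportion import proportion_confint

# About 66% of the 50 participants (roughly 33 patients) reported some level
# of sensitivity; the Wilson interval below ignores the crossover pairing.
events, n = 33, 50
low, high = proportion_confint(events, n, alpha=0.05, method="wilson")
print(f"Observed risk {events / n:.0%}, 95% Wilson CI {low:.0%} to {high:.0%}")
```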
Factors such as patient's age and color at baseline have been strongly associated with bleaching effectiveness, while young patients and darker teeth show more pronounced color changes\n27\n. We found no difference in baseline regarding these parameters between the sequence of allocation, while the inclusion of young participants presenting all teeth darker than 2.5 M2 (score 7) favored obtaining significant color bleaching.\nIn conclusion, the preemptive administration of etodolac in a single dose 1 hour prior to the bleaching procedure was unable to reduce both risk and level of sensitivity caused by in-office bleaching. A limitation of this study was that the preemptive treatment was administrated only for a young population (average age of 23 years), with prevalence of female patients, while different results can be observed for other demographic profiles\n27\n.\n\nBODY.CONCLUSIONS:\nThe preemptive administration of a single dose of etodolac previously to the two bleaching sessions with 35% hydrogen peroxide did not affect tooth color change, risk of sensitivity and level of pain reported by the patients (during the sessions, immediately after, and 24 h after sessions).\n\n**Question:** Compared to placebo what was the result of Etodolac (400 mg) on bleaching effectiveness?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13843
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nOptimal Mode of clearance in critically ill patients with Acute Kidney Injury (OMAKI) - a pilot randomized controlled trial of hemofiltration versus hemodialysis: a Canadian Critical Care Trials Group project\n\n\n\nABSTRACT.INTRODUCTION:\nAmong critically ill patients with acute kidney injury (AKI) needing continuous renal replacement therapy (CRRT), the effect of convective (via continuous venovenous hemofiltration [CVVH]) versus diffusive (via continuous venovenous hemodialysis [CVVHD]) solute clearance on clinical outcomes is unclear. Our objective was to evaluate the feasibility of comparing these two modes in a randomized trial.\n\nABSTRACT.METHODS:\nThis was a multicenter open-label parallel-group pilot randomized trial of CVVH versus CVVHD. Using concealed allocation, we randomized critically ill adults with AKI and hemodynamic instability to CVVH or CVVHD, with a prescribed small solute clearance of 35 mL/kg/hour in both arms. The primary outcome was trial feasibility, defined by randomization of >25% of eligible patients, delivery of >75% of the prescribed CRRT dose, and follow-up of >95% of patients to 60 days. A secondary analysis using a mixed-effects model examined the impact of therapy on illness severity, defined by sequential organ failure assessment (SOFA) score, over the first week.\n\nABSTRACT.RESULTS:\nWe randomized 78 patients (mean age 61.5 years; 39% women; 23% with chronic kidney disease; 82% with sepsis). Baseline SOFA scores (mean 15.9, SD 3.2) were similar between groups. We recruited 55% of eligible patients, delivered >80% of the prescribed dose in each arm, and achieved 100% follow-up. SOFA tended to decline more over the first week in CVVH recipients (-0.8, 95% CI -2.1, +0.5) driven by a reduction in vasopressor requirements. Mortality (54% CVVH; 55% CVVHD) and dialysis dependence in survivors (24% CVVH; 19% CVVHD) at 60 days were similar.\n\nABSTRACT.CONCLUSIONS:\nOur results suggest that a large trial comparing CVVH to CVVHD would be feasible. There is a trend toward improved vasopressor requirements among CVVH-treated patients over the first week of treatment.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT00675818\n\nBODY.INTRODUCTION:\nAcute kidney injury (AKI) is a common complication of critical illness, afflicting up to two-thirds of patients admitted to the ICU [1]. A significant minority of patients with AKI requires renal replacement therapy (RRT), and these individuals have high short-term mortality that ranges from 50 to 70% [2]. In an attempt to mitigate these poor outcomes, various components of the RRT prescription have been rigorously examined in large well-designed randomized controlled trials (RCTs) [3-5].\nThe optimal mode of clearance in patients with AKI who require renal support is an area of considerable controversy resulting in significant practice variation [6]. 
Hemofiltration, whereby solutes are removed by convection, facilitates the removal of both low and higher molecular weight solutes, depending on the pore size of the membrane [7]. Hemodialysis, in which solute removal occurs via diffusion out of the bloodstream into the dialysate down a concentration gradient, removes low molecular weight molecules but provides limited clearance of higher molecular weight substances. When filter characteristics are kept constant, hemofiltration, which more faithfully mimics glomerular filtration, should result in the clearance of larger-sized solutes as compared to hemodialysis [7]. The removal of such solutes, which may include toxic mediators of sepsis and inflammation, provides the theoretical underpinnings for the superiority of hemofiltration as a renal support mode for critically ill patients with AKI.\nWe conducted a multicenter pilot RCT of hemofiltration vs. hemodialysis in critically ill patients with AKI to determine whether a larger definitive trial based on clinically important endpoints would be feasible. In a secondary analysis, we evaluated whether hemofiltration improved global organ dysfunction.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN:\nWe conducted an unblinded RCT of continuous venovenous hemofiltration (CVVH) vs. continuous venovenous hemodialysis (CVVHD) with concealed allocation (http://Clinicaltrials.gov registration number NCT00675818). Our reporting follows the updated CONSORT statement [8].\n\nBODY.SETTING:\nParticipants were recruited from ICUs at six academic hospitals: Mt. Sinai Hospital (Medical-Surgical ICU), Sunnybrook Health Sciences Centre (Critical Care Unit and Cardiovascular ICU) and St. Michael's Hospital (Medical-Surgical and Cardiovascular ICUs), all in Toronto, Canada; Victoria Hospital (Critical Care Trauma Centre), and University Hospital (Medical/Surgical Intensive Care Unit and Cardiac Surgery Recovery Unit), both in London, Canada, and University of Alberta Hospital (General Systems Intensive Care Unit) in Edmonton, Canada. The Research Ethics Boards of Mt. Sinai Hospital, St. Michael's Hospital, Sunnybrook Health Sciences Centre, London Health Sciences Centre and the University of Alberta approved the protocol. The Applied Health Research Centre at St. Michael's Hospital (Toronto, Ontario, Canada) was the trial coordinating center.\n\nBODY.POPULATION:\nWe enrolled critically ill adults (≥ 16 years of age) with AKI, defined as a serum creatinine increase ≥ 50% from baseline (defined as the last known pre-morbid serum creatinine or earliest value available from the current admission). At the time of screening, at least one of the following indications for RRT initiation needed to be present: (i) oliguria (defined as urine output < 100 mL in the preceding 4 hours); (ii) metabolic acidosis (serum bicarbonate < 15 mmol/L and pH < 7.25); (iii) refractory hyperkalemia (serum potassium > 6 mmol/L despite medical efforts at potassium removal); (iv) serum urea > 50 mmol/L, or (v) suspected uremic organ involvement (pericarditis, encephalopathy, neuropathy or myopathy). Finally, participants needed to be hemodynamically unstable, defined as Sequential Organ Failure Assessment (SOFA)- Cardiovascular score ≥ 1 on the day of screening (see Additional file 1 for the modified SOFA score used in this study). This required the patient to have mean arterial pressure < 70 mmHg or receipt of at least one vasopressor or inotrope [9]. 
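A schematic of the inclusion criteria just described may help readers see how the screening logic fits together; the exclusion criteria listed next in the text are not modelled, the function and argument names are our own, and clinical judgment items such as "refractory" hyperkalemia are reduced to simple thresholds.

```python
def meets_inclusion_criteria(creatinine_umol_l, baseline_creatinine_umol_l,
                             urine_ml_last_4h, bicarbonate_mmol_l, ph,
                             potassium_mmol_l, urea_mmol_l,
                             uremic_organ_involvement, sofa_cardiovascular):
    """Schematic check of the stated inclusion criteria: AKI, at least one
    RRT indication, and hemodynamic instability (SOFA-Cardiovascular >= 1)."""
    aki = creatinine_umol_l >= 1.5 * baseline_creatinine_umol_l
    rrt_indication = (
        urine_ml_last_4h < 100                          # oliguria
        or (bicarbonate_mmol_l < 15 and ph < 7.25)      # metabolic acidosis
        or potassium_mmol_l > 6                         # hyperkalemia
        or urea_mmol_l > 50                             # marked azotemia
        or uremic_organ_involvement                     # e.g. pericarditis
    )
    hemodynamic_instability = sofa_cardiovascular >= 1
    return aki and rrt_indication and hemodynamic_instability

# Oliguric, vasopressor-dependent patient whose creatinine has more than doubled.
print(meets_inclusion_criteria(240, 100, 60, 22, 7.32, 4.8, 20, False, 3))  # True
```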
Patients were excluded if any one of the following was present: receipt of any RRT within the previous 2 months; presence of an obstructive etiology for AKI; receipt of a kidney transplant within the preceding year; diagnosis of rapidly progressive glomerulonephritis, vasculitis, or acute interstitial nephritis; a clinical indication for intermittent hemodialysis (for example, the presence of a dialyzable toxin); terminal illness with an associated life expectancy of less than 2 months; moribund status (life expectancy < 48 hours as per the judgment of physicians involved in the patient's care); prior enrolment in this study; enrolment in a competing ICU interventional study; non-availability of a CRRT machine; or administration of RRT for > 36 hours prior to eligibility assessment.\n\nBODY.PATIENT ALLOCATION:\nAfter eligibility was confirmed, we attempted to obtain consent from the patient or, if the patient lacked capacity to consent, from his/her substitute decision maker (SDM). A deferred consent option was approved at three sites, which allowed patient enrollment and randomization in the event of patient incapacity and the inability to locate an SDM. Using this mechanism, patients were randomized, and research personnel attempted to locate an SDM every 72 hours to affirm consent for participation. In all cases, when participants regained capacity, they were asked to provide consent if they were initially enrolled using a priori SDM consent or deferred consent; no participant withdrew consent once regaining capacity. Patients were allocated to a study group using sealed, opaque, sequentially numbered envelopes (prepared by the coordinating center) that were opened after consent was obtained [10]. Randomization was stratified by center in random blocks of four, six or eight. The Research Ethics Board at each center approved the study. An independent data and safety monitoring board tracked the trial's conduct.\n\nBODY.STUDY INTERVENTION:\nParticipants randomized to CVVH were prescribed ultrafiltration with isovolemic replacement solution (evenly split between pre- and post-filter) to achieve a target clearance of 35 mL/kg body weight/hr. The prescribed hourly ultrafiltration rate was increased above 35 mL/kg/hr to compensate for the reduced efficiency of clearance related to the pre-filter component of the replacement solution volume administered. This adjusted CVVH dose was calculated from the post-filter replacement fluid (RF) rate, the pre-filter replacement fluid rate and the blood flow rate as follows:\nDose = Postfilter RF rate + (Prefilter RF rate × (Blood flow / (Blood flow + Prefilter RF rate))).\nIn the CVVHD arm, the dialysate flow was set to achieve a clearance of 35 mL/kg/hr, which included a post-filter hemofiltration flow of 100 to 200 mL/hr. This obligate low-volume post-filter hemofiltration is utilized at participating centers to minimize the risk of blood clotting in the machine's deaeration chamber. Ultrafiltration performed for achievement of net fluid balance or to compensate for the volume of administered anticoagulants (namely, citrate or unfractionated heparin) was not considered in the total dose. We used the patient's most recent measured body weight, as recorded in the chart or estimated by the study coordinator if no value was documented, to calculate the prescribed RRT dose.
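The pre-dilution adjustment above amounts to discounting the pre-filter replacement volume by the fraction of blood actually presented to the filter. A minimal sketch, using the mean pre- and post-filter flows reported later in Table 2 together with an assumed blood flow of 150 mL/min (9,000 mL/hr) and the mean CVVH-arm weight from the later baseline table; the blood flow value is an assumption, not a figure from the text.

```python
def adjusted_cvvh_dose_ml_hr(postfilter_rf_ml_hr, prefilter_rf_ml_hr, blood_flow_ml_hr):
    """Effective CVVH clearance after discounting pre-filter (pre-dilution)
    replacement fluid, following the protocol formula above."""
    dilution = blood_flow_ml_hr / (blood_flow_ml_hr + prefilter_rf_ml_hr)
    return postfilter_rf_ml_hr + prefilter_rf_ml_hr * dilution

# Mean pre-/post-filter flows reported later in Table 2; the 9,000 mL/hr
# blood flow (150 mL/min) is an assumed, not reported, value.
dose = adjusted_cvvh_dose_ml_hr(postfilter_rf_ml_hr=1440,
                                prefilter_rf_ml_hr=1533,
                                blood_flow_ml_hr=9000)
print(f"Effective dose ~{dose:.0f} mL/hr "
      f"(~{dose / 86.6:.1f} mL/kg/hr at the mean CVVH-arm weight of 86.6 kg)")
```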
Due to difficulties in achieving the target RRT dose as a result of high transmembrane pressures in some patients with higher body mass (particularly those randomized to CVVH), the protocol was modified in May 2010 (21 months after randomization of the first patient) such that the total fluid dose was capped at 4,000 mL/hr for both treatment arms, irrespective of the patient's weight. After this protocol change, twenty patients were enrolled in the trial; the 4,000 mL/hr dosing cap was invoked in six participants (five allocated to CVVHD and one to CVVH) who would have otherwise needed higher flows to achieve an actual total dose of 35 mL/kg/hr.\nAll study therapies were delivered by the Gambro Prismaflex™ RRT machine using the ST100 (surface area 1.0 m2) or ST150 (surface area 1.5 m2) filter sets, which contain a polyacrylonitrile AN69 membrane (Gambro, Richmond Hill, ON, Canada). We permitted the use of any commercially available dialysate and replacement solutions. Decisions regarding circuit anticoagulation (heparin, regional citrate anticoagulation, or no anticoagulation) and volume control were at the discretion of the attending physicians. Patients remained on study therapy until death, withdrawal of CRRT as part of withdrawal of life support, hemodynamic stability (SOFA-cardiovascular score < 2 for > 24 hrs) permitting stepdown to intermittent hemodialysis, or recovery of kidney function (defined as urine output > 500 mL in the preceding 12 hrs, and most recent serum potassium < 5.5 mmol/L and serum bicarbonate > 18 mmol/L).\n\nBODY.OUTCOMES:\nThe primary feasibility outcome of this study was the ability to administer > 75% of the prescribed CRRT dose to participants in each treatment arm. Secondary feasibility outcomes included the ability to enroll > 25% of fully eligible patients and the ability to follow > 95% of patients to 60 days following randomization (the anticipated follow-up period for the future definitive principal study). Secondary outcomes included change in SOFA score from baseline to days 1, 2 and 7, respectively, following randomization. Serial changes in SOFA scores have been shown to be correlated with clinical outcomes in critically ill patients with AKI who require RRT [11].\n\nBODY.DATA COLLECTION:\nTrained research coordinators collected baseline clinical and demographic data, and information on pre-existing medical conditions. Specific risk factors for AKI were ascertained, including recent procedures, nephrotoxins, and sepsis (defined using consensus guidelines [12]). SOFA score was calculated at the time of randomization and on each day of study therapy. The SOFA-Cardiovascular score was modified to include the receipt of vasopressin. Patients receiving RRT on a given day were assigned a SOFA-Renal score of 4, regardless of urine output or serum creatinine. Participants were followed until death or a maximum of 60 days from randomization, at which time vital status and the ongoing need for RRT among survivors were recorded.\n\nBODY.STATISTICAL ANALYSES:\nAs this was a feasibility trial with the primary objective of informing the design of a large-scale RCT, we planned to enroll a convenience sample of 75 participants from six sites. Patients who were randomized but never received RRT are described, but these individuals were replaced to ensure that at least 75 patients received some form of RRT. 
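The three pre-specified feasibility targets just described reduce to simple threshold checks; in the sketch below the observed values are the rounded headline figures quoted in the abstract and results.

```python
# Pre-specified feasibility targets vs. the rounded headline figures
# quoted in the abstract and results sections.
targets = {
    "prescribed CRRT dose delivered": (0.80, 0.75),   # >75% required
    "eligible patients randomized":   (0.55, 0.25),   # >25% required
    "patients followed to 60 days":   (1.00, 0.95),   # >95% required
}
for name, (observed, threshold) in targets.items():
    verdict = "met" if observed > threshold else "not met"
    print(f"{name}: {observed:.0%} vs >{threshold:.0%} -> {verdict}")
```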
Since the primary feasibility outcome was based on the dose of CRRT received, patients for whom the dose could not be readily calculated (those who received no RRT or forms of RRT other than CRRT) were excluded from the analysis related to feasibility. However, clinical outcomes are reported for all randomized participants.\nDescriptive statistics were used to characterize participants in either arm. Continuous variables are presented as means (SD) or medians (interquartile range, IQR), and two-group comparisons were performed with the t-test or Wilcoxon test, as appropriate. Two-group comparisons involving categorical variables were carried out with the chi-square test. Analysis of covariance, adjusted for baseline SOFA score, was used to evaluate the change in SOFA score on days 1 and 2. Linear mixed models adjusted for baseline SOFA score and day of study therapy were used to evaluate the impact of RRT mode on SOFA score over the first week of therapy. For the fixed effect of treatment (that is, CVVH vs. CVVHD), 95% confidence intervals (CI) were obtained by profiling the log-likelihood function. All analyses were performed using R version 2.12.0 (R Development Core Team 2010, Vienna, Austria).\n\n\nBODY.RESULTS:\nWe screened 347 patients; 143 were eligible for participation and 79 individuals (55.2%) were enrolled over a 24-month time period. The inability to obtain consent from the patient or SDM was the reason for the non-enrollment of otherwise eligible patients. One patient was excluded shortly after enrollment after it was decided to pursue a non-continuous form of RRT. In total, 78 patients were randomized (39 to CVVH, and 39 to CVVHD). In one case, prior to the start of therapy it was recognized that a patient randomized to CVVHD had been inappropriately enrolled, as the indication for RRT was toxin removal rather than AKI per se. This patient was excluded from all further analyses. Clinical outcomes are reported in an intention-to-treat fashion for the remaining 77 patients (CVVH, 39; CVVHD, 38). Four patients randomized to CVVH were excluded from the feasibility analysis, two due to death prior to commencement of study RRT, and two due to receipt of continuous venovenous hemodiafiltration as the initial mode of therapy. The indications for RRT were oliguria, metabolic acidosis, hyperkalemia and uremia in 34, 17, 6 and 8 patients, respectively, in the CVVH arm. In the CVVHD arm, these indications guided the inclusion of 36, 15, 4 and 3 participants, respectively. In total, 73 participants commenced the therapy to which they were randomized (CVVH, 35; CVVHD, 38); these individuals contributed to the analysis relating to the feasibility of treatment delivery (Figure 1).\nFigure 1. Flow of patients through the trial. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis\nBODY.BASELINE CHARACTERISTICS (TABLE 1):\nThe mean age of participants was 61.5 (SD 14.2) years and 30/77 (39%) were women. The median time from ICU admission to randomization was 2 (IQR 2 to 8) days. Sepsis was present in 63/77 (82%) of participants, 73/77 (95%) were mechanically ventilated and 69/77 (90%) received vasopressors at the time of randomization. Median 24-hr urine output was 125 (IQR 50 to 250) mL.
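The trial's longitudinal analysis was run in R; the sketch below shows an analogous linear mixed model in Python (random intercept per patient, adjusted for baseline SOFA and study day) fitted to synthetic stand-in records, since the patient-level data are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in records: daily SOFA over several days for a few patients.
records = []
for patient, (mode, baseline, daily) in enumerate([
    ("CVVH", 16, [15, 14, 13, 12]),
    ("CVVH", 15, [15, 15, 13, 13]),
    ("CVVH", 17, [16, 15, 15, 14]),
    ("CVVHD", 16, [16, 15, 15, 15]),
    ("CVVHD", 15, [15, 15, 14, 14]),
    ("CVVHD", 14, [14, 14, 13, 13]),
], start=1):
    for day, sofa in enumerate(daily, start=1):
        records.append({"patient": patient, "mode": mode,
                        "baseline": baseline, "day": day, "sofa": sofa})
data = pd.DataFrame(records)

# Linear mixed model: fixed effects for day, mode and baseline SOFA,
# with a random intercept for each patient.
model = smf.mixedlm("sofa ~ day + mode + baseline", data, groups=data["patient"])
print(model.fit().summary())
```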
Approximately two-thirds of participants received some form of RRT prior to randomization, with 23/39 in the CVVH group (59%) for a mean of 10.6 (SD 12.0) hrs, and 30/38 in the CVVHD group (79%) for a mean of 13.3 (SD 11.7) hrs. (Table 1\nTable 1Baseline characteristics\n\nCVVH (n = 39)\nCVVHD (n = 38)\nAge, years\n58.8 ± 13.7\n64.3 ± 14.3\nFemale\n14 (36%)\n16 (42%)\nWeight, kg\n86.6 ± 31.5\n89.2 ± 26.4\nDays from ICU admission to randomization\n2 (1, 4)\n2 (1, 3)\nSOFA score\n16.3 ± 3.3\n15.5 ± 3.0\nPremorbid conditions\nHypertension\n19 (49%)\n26 (68%)\nDiabetes mellitus\n9 (23%)\n9 (24%)\nChronic kidney disease\n5 (13%)\n13 (34%)\nAcute kidney injury risk factors\nSepsis\n31 (79%)\n32 (84%)\nCardiopulmonary bypass in past 7 days\n1 (3%)\n4 (11%)\nIV contrast in past 7 days\n9 (23%)\n6 (16%)\nPhysiologic parameters\nUrine output, mL/24 hr\n125 (60, 245)\n135 (46, 251)\nMinimum systolic BP during 24 hr before enrolment, mmHg\n94.2 ± 16.5\n86.1 ± 13.5\nMinimum diastolic BP during 24 hr before enrolment, mmHg\n48.9 ± 8.6\n47.2 ± 9.4\nLaboratory parameters\nSerum creatinine, µmol/L\n276 (194, 352)\n246 (155, 325)\nUrea, mmol/L\n25.8 ± 38.1\n17.6 ± 12.7\nPotassium, mmol/L\n4.3 ± 0.7\n4.1 ± 0.6\nBicarbonate, mmol/L\n20.5 ± 5.9\n20.2 ± 5.3\nHemoglobin, g/L\n86.1 ± 16.3\n90.0 ± 15.2\nPlatelets, × 109/L\n109.6 ± 82.6\n127.2 ± 80.9\nWBC, × 109/L\n20.2 ± 43.3\n16.9 ± 10.6\nICU interventions\nDuration of RRT prior to randomization, hours\n8 (0, 19)\n13 (1.0, 20.5)\nMechanical ventilation\n37 (95%)\n36 (95%)\nVasopressors\n34 (87%)\n35 (92%)\nTotal parenteral nutrition\n1 (3%)\n2 (5%)\nContinuous data are displayed as mean ± SD or medians (interquartile range), as appropriate. Categorical variables are displayed as n (%). CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis; BP, blood pressure; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; WBC, White blood cell.\n\nBODY.FEATURES OF STUDY TREATMENTS (TABLE 2):\nAmong the 35 participants who started CVVH, RRT was prescribed for a median of 107 (IQR 55 to 146) hrs and delivered for 85 (IQR 43 to 128) hrs. Overall, 84.7% (95% CI 79.1, 90.3) of the prescribed duration was delivered. 
The mean weight-standardized dose was 33.6 (SD 7.4) mL/kg/hr while CRRT was ongoing.(Table 2\nTable 2Feasibility and safety data\nCVVH(n = 35)CVVHD(n = 38)P-valueDuration RRT prescribed, hrs\n146 ± 240\n145 ± 156\n0.88\nDuration RRT received, hrs\n130 ± 222\n128 ± 142\n0.87\nMean pre-filter replacement solution flow, mL/hr\n1533 ± 442\n0\nn/a\nMean post-filter replacement solution flow, mL/hr\n1440 ± 488\n180 ± 140\nn/a\nMean dialysate flow, mL/hr\n0\n2871 ± 872\nn/a\nMean RRT dose, mL/kg/hr\n33.6 ± 7.4\n34.7 ± 4.4\n0.50\nPrescribed dose delivered, %\n84.7 ± 16.3\n87.8 ± 13.7\n0.73\nNet ultrafiltration, L/day\n1.7 ± 2.2\n0.8 ± 4.1\n0.98\nDays on study therapy\n5 (3-7)\n4.50 (3.00-10.25)\n0.79\nPrimary reason for CRRT withdrawal\n\n\n0.75\nDeath while on CRRT\n12 (35%)\n10 (27%)\n\nKidney function recovery\n7 (21%)\n7 (19%)\n\nTransfer to intermittent hemodialysis\n12 (35%)\n14 (38%)\n\n\n3 (9%)\n6 (16%)\n\nCatheter changes/day of therapy\n0.09 ± 0.2\n0.1 ± 0.2\n0.92\nUnits RBCs transfused/day of therapy\n0.3 ± 0.3\n0.4 ± 0.7\n0.38\nUnscheduled circuit changes/day of therapy\n0.2 ± 0.3\n0.2 ± 0.2\n0.36\nStudy days with receipt of norepinephrine, %\n61.8 ± 36.2\n65.7 ± 7.5\n0.41\nStudy days with receipt of vasopressin, %\n35.4 ± 35.9\n43.0 ± 43.6\n0.51\nContinuous data are displayed as mean ± standard deviation or median (interquartile range), as appropriate. Categorical variables are displayed as n (%); n/a, not applicable. Data are restricted to patients who initiated therapy with the CRRT mode to which they had been allocated. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; RBCs, red blood cells; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration.\nCVVHD was prescribed in 38 participants for a median of 92 (IQR 57 to 145) hrs and delivered for 76 (IQR 44 to 148) hrs. Over the course of the trial, 87.8% (95% CI 83.3, 92.3) of the prescribed duration was delivered to patients randomized to receive CVVHD. The mean weight-standardized delivered RRT dose was 34.7 mL/kg/hr (SD 4.4).\n\nBODY.CLINICAL OUTCOMES:\nAll subjects were followed to 60 days, by which point 22/39 (56%) and 21/38 (55%) of participants assigned to CVVH and CVVHD, respectively, had died. Among surviving patients, 4/17 (24%) and 3/17 (19%) of those initially assigned to CVVH and CVVHD respectively, were still dependent on RRT.\nAfter adjustment for baseline SOFA score, we found a non-significant decline in SOFA score among participants treated with CVVH compared to CVVHD on the first day (-0.4, 95% CI -1.3, 0.6) and second day (-0.4, 95% CI -1.6, 0.8) following randomization. Over the first week of therapy, the adjusted change in the SOFA score among participants treated with CVVH compared to CVVHD was -0.8 (95% CI -2.1, 0.5). The observed reduction appeared to be driven by a reduction in the cardiovascular component of the SOFA score (Figure 2).\nFigure 2Total and Cardiovascular (CV) Sequential Organ Failure Assessment (SOFA) scores over the first seven days following randomization. Total SOFA scores are denoted with squares; the cardiovascular component of the SOFA score is denoted with triangles.CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\n\nBODY.PROTOCOL VIOLATIONS AND ADVERSE EVENTS:\nWe identified 15 protocol violations involving 15 participants. 
In one individual who started therapy with CVVH, achievement of the target dose was not feasible and RRT was supplemented with the addition of continuous dialysis; ongoing difficulties with maintaining adequate small molecule clearance using CRRT resulted in a change to sustained low efficiency dialysis. In five cases (one patient on CVVH, and four on CVVHD), stepdown from CRRT to IHD occurred before the participant met the study-defined criteria for hemodynamic stability. In five cases (two patients on CVVH, and three on CVVHD), RRT was discontinued altogether in participants who did not meet study criteria for renal recovery.\nNo adverse events were attributed to the study intervention in either treatment arm.\n\n\nBODY.DISCUSSION:\nWe completed a multi-center, concealed-allocation, randomized trial comparing hemofiltration and hemodialysis in critically ill patients with AKI. This pilot trial achieved its primary objective of confirming the feasibility of performing a large-scale trial evaluating RRT clearance mode in AKI. There was also a non-significant trend towards reduced organ dysfunction, driven by decreased vasopressor requirements, early after the initiation of RRT in patients who received hemofiltration. These feasibility data and the importance of the clinical question justify the conduct of a large trial with adequate power to evaluate the primary outcome of 60-day mortality. With a power of 0.80, type I error of 0.05, and a mortality reduction of 10% in patients treated with CVVH (estimated 60-day mortality in the CVVHD arm 55%), we estimate that such a trial would require the enrolment of nearly 400 patients per arm. A more conservative mortality reduction of 7.5% would require the enrolment of 700 patients per arm.\nLittle is known about the optimal mode of clearance in renal replacement for AKI. Although both convection and diffusion remove small molecules with equal efficiency, convection may remove larger molecules that are not cleared by diffusive mechanisms [13]; previous studies have examined effects on inflammatory markers and have been too small to reliably determine effects on clinical outcomes. In addition, the profile of larger molecules removed by convection is relatively non-specific and may also include molecules that dampen inflammation or crucial medications such as antibiotics [14]. A randomized crossover study of 13 patients with AKI and the systemic inflammatory response syndrome found that CVVH for 24 hrs reduced plasma concentrations of TNFα and cleared more IL-6, compared to CVVHD. However, there was no effect on plasma concentrations of IL-6, IL-10, SL-selectin, or endotoxin [15]. Morgera and coworkers randomized 24 patients with sepsis-associated AKI to treatment with CVVH or CVVHD using a high cutoff membrane permeable to molecules up to 60 kilodaltons in size. Plasma concentration and clearance for IL-6 did not differ, but clearance of IL-1 receptor antagonist, an anti-inflammatory mediator, was enhanced by CVVH [16]. Of note, protein losses were higher in patients who received CVVH. In a prospective crossover study involving 15 patients with AKI who sequentially received CVVH and CVVHD, β2-microglobulin clearance was non-significantly higher among CVVH recipients (P = 0.055) [17]. Among trials focusing on clinical outcomes, a single-centre RCT (n = 20) did not demonstrate a difference in survival, renal recovery or ICU stay in patients treated with CVVH vs. CVVHD administered at fixed doses of 1.7 to 2.0 L/hr [18]. 
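The per-arm sample sizes quoted in the Discussion above can be checked approximately with a two-proportion calculation. The sketch below uses Cohen's h and a z-test approximation via statsmodels; it is not the authors' own calculation, and small differences from the quoted figures are expected depending on the approximation used.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()
for reduction in (0.10, 0.075):
    # 60-day mortality of 55% under CVVHD vs. an absolute reduction under CVVH.
    effect = proportion_effectsize(0.55, 0.55 - reduction)   # Cohen's h
    n_per_arm = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.80,
                                     ratio=1.0, alternative="two-sided")
    print(f"absolute mortality reduction {reduction:.1%}: "
          f"about {n_per_arm:.0f} patients per arm")
# Prints roughly 390 and 690 per arm, in line with the "nearly 400" and "700"
# figures quoted above; the exact numbers depend on the approximation chosen.
```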
Similarly, a recent unpublished 65-patient RCT of CVVH vs continuous venovenous hemodiafiltration (both at 40 mL/kg/hr) did not demonstrate a survival difference at 28 days [19].\nWhile interventional trials involving devices and processes of care that are susceptible to large variations in practice are challenging, we achieved our feasibility objectives. Specifically, we were able to recruit the majority of eligible subjects, implement the protocolized therapy for > 85% of the prescribed time and ascertain vital status at 60 days for all participants. When accounting for actual time on therapy, the delivered dose exceeded 80% of that prescribed in both treatment arms, thereby surpassing our feasibility threshold of 75%. Accordingly, we believe our study strongly supports the feasibility of a large definitive randomized trial comparing hemofiltration and hemodialysis in critically ill patients with severe AKI.\nThis is the largest published trial to date to study the mode of solute clearance in AKI. Given the challenges of recruiting participants and implementing interventions in a population with a high burden of illness, the success of our pilot was a necessary precursor to a principal trial that examines patient-relevant clinical outcomes. Our eligibility criteria were pragmatic and assured the inclusion of individuals in North America who typically receive CRRT. A minority of individuals for whom no SDM could be found were enroled with deferred consent, thereby limiting exclusion of potentially eligible patients and mitigating selection bias. Of interest, no subject enroled by deferred consent had an SDM who subsequently withdrew consent or withdrew consent personally after regaining capacity. Finally, other than the clearance mode, all other aspects of RRT including stepdown to intermittent hemodialysis and withdrawal of RRT were performed in a manner consistent with usual practice.\nOur study has several limitations. As this was an unblinded trial, we cannot exclude the effect of co-interventions in either treatment arm. However, the nature of our intervention made blinding impractical, and we ensured that the two groups received equivalent RRT doses. There is also no definite intervention related to RRT prescription that has been shown to modify outcomes in critically ill patients with AKI. In addition, we cannot exclude the possibility that patients who were eligible but not randomized were systematically different than trial participants. Our protocol specified a target clearance of 35 mL/kg/hr, which was generally achieved in both arms. The decision to use this dose was guided by the fact that our trial was designed when higher dose CRRT was felt to be potentially beneficial based on data from two trials [20,21]. This pre-dated more recent trials and meta-analyses demonstrating no advantage of higher dose CRRT over doses of 20 to 25 mL/kg/hr, which would likely be the standard in a future trial comparing convection and diffusion [4,5,22,23]. However, since high dose therapy appears safe and may positively modify the effect of convective clearance [20], our higher dose target was reasonable. It should be noted that while 35 ml/kg/hr was prescribed to all participants, actual solute clearance may have been significantly lower due to changes in filter permeability [24,25]. On the other hand, when calculating the prescribed dose, we did not consider clearance associated with net ultrafiltration or removal of volume associated with the administration of anticoagulants. 
This approach would tend to underestimate the actual solute clearance that was delivered. Importantly, any deviation in dose from our 35 mL/kg/hr target would affect participants in both intervention arms to a similar extent. The Prismaflex CRRT system requires that all patients (even those predominantly getting hemodialysis) receive a small amount of post-filter hemofiltration (up to 200 ml/hr) to prevent clotting in the machine's deaeration chamber. In addition, CVVHD recipients would have had other unavoidable sources of convective clearance, specifically for the achievement of net fluid removal, and for the isovolemic removal of volume associated with the administration of the anticoagulant (citrate or heparin). Thus, CVVHD recipients did not receive purely diffusive solute clearance. We nonetheless estimate that the typical patient enrolled in the CVVHD arm still received > 80% of therapy in the form of diffusive clearance, which reassures us that this trial truly compared two different modes of solute clearance. Our trial was conducted using the AN69 polyacryonitrile filter, which has unique adsorptive characteristics [14]. While this is a widely used membrane in the administration of CRRT, the adsorptive characteristics of filters may differ and we cannot generalize our findings to settings in which other filters are utilized. Moreover, we did not collect data to evaluate the relative effect of CVVH or CVVHD on the removal of molecules of varying size. This trial was conducted using continuous RRT. Although the question of hemodialysis vs. hemofiltration is applicable to patients stable enough to receive intermittent RRT, machines to deliver intermittent hemofiltration were not widely available in North America when the trial was conducted. Among trial participants who became more hemodynamically stable, intermittent hemodialysis was utilized even when the initial CRRT mode was hemofiltration, thereby potentially diluting the benefits associated with CVVH. Finally, while we observed a trend towards improved organ failure scores, this finding must be interpreted with caution given the small sample size of our trial. Moreover, disease severity scores such as the SOFA score are surrogate markers that cannot supplant hard clinical endpoints.\n\nBODY.CONCLUSIONS:\nOur findings clearly support the feasibility of performing a definitive trial comparing CVVH and CVVHD in critically ill patients with AKI. The early non-significant trend towards reduced vasopressor requirements provides preliminary support to the concept that convective modes of clearance reduce inflammation and thus benefit critically ill patients. Given the high mortality associated with AKI, the lack of specific RRT interventions shown to reduce mortality, and current practice variation, the results of our pilot trial provide justification for a larger trial of hemofiltration vs. 
hemodialysis that will be adequately powered to evaluate meaningful clinical outcomes.\n\nBODY.KEY MESSAGES:\n• A randomized controlled trial of hemofiltration vs hemodialysis is feasible.\n• Hemofiltration may be associated with decreased vasopressor requirements over the first week of therapy.\n• A well-designed and adequately powered trial of hemofiltration vs hemodialysis would address an important area of uncertainty in the management of patients with AKI.\n\nBODY.ABBREVIATIONS:\nAKI: acute kidney injury; CRRT: continuous renal replacement therapy; CVVH: continuous venovenous hemofiltration; CVVHD: continuous venovenous hemodialysis; IL: interleukin; IQR: interquartile range; RF: replacement fluid; RCT: randomized controlled trial; RRT: renal replacement therapy; SDM: substitute decision maker; SOFA: Sequential Organ Failure Assessment; TNF α: tumor necrosis factor alpha.\n\nBODY.COMPETING INTERESTS:\nDr. Bagshaw has served on a Gambro Experts Panel. No other relevant competing interests are reported.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRW conceived and designed the study, obtained funding, supervised study conduct, interpreted the data, drafted the manuscript, and provided critical review of the manuscript. JOF conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. SMB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. KEB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AXG interpreted the study data, supervised study conduct, and provided critical review of the manuscript. MAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. SL interpreted the study data, supervised study conduct and provided critical review of the manuscript. DK interpreted the study data, supervised study conduct, and provided critical review of the manuscript. RMR interpreted the study data, supervised study conduct, and provided critical review of the manuscript. NIP interpreted the study data and provided critical review of the manuscript. KP participated in data analysis and interpretation, designed data collection tools, and participated in the critical review of the manuscript. KT participated in study design, conducted the data analysis, participated in data interpretation, and in the critical review of the manuscript. NKJA conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. All authors approved the final version.\n\nBODY.SUPPLEMENTARY MATERIAL:\nBODY.ADDITIONAL FILE 1:\nModified Sequential Organ Failure Assessment (SOFA) score. SOFA score modified from the original version (see reference [9]) for application in the OMAKI trial.\nClick here for file\n\n**Question:** Compared to continuous venovenous hemodialysis [CVVHD] what was the result of continuous venovenous hemofiltration [CVVH] on Duration RRT prescribed, hrs?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13844
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nOptimal Mode of clearance in critically ill patients with Acute Kidney Injury (OMAKI) - a pilot randomized controlled trial of hemofiltration versus hemodialysis: a Canadian Critical Care Trials Group project\n\n\n\nABSTRACT.INTRODUCTION:\nAmong critically ill patients with acute kidney injury (AKI) needing continuous renal replacement therapy (CRRT), the effect of convective (via continuous venovenous hemofiltration [CVVH]) versus diffusive (via continuous venovenous hemodialysis [CVVHD]) solute clearance on clinical outcomes is unclear. Our objective was to evaluate the feasibility of comparing these two modes in a randomized trial.\n\nABSTRACT.METHODS:\nThis was a multicenter open-label parallel-group pilot randomized trial of CVVH versus CVVHD. Using concealed allocation, we randomized critically ill adults with AKI and hemodynamic instability to CVVH or CVVHD, with a prescribed small solute clearance of 35 mL/kg/hour in both arms. The primary outcome was trial feasibility, defined by randomization of >25% of eligible patients, delivery of >75% of the prescribed CRRT dose, and follow-up of >95% of patients to 60 days. A secondary analysis using a mixed-effects model examined the impact of therapy on illness severity, defined by sequential organ failure assessment (SOFA) score, over the first week.\n\nABSTRACT.RESULTS:\nWe randomized 78 patients (mean age 61.5 years; 39% women; 23% with chronic kidney disease; 82% with sepsis). Baseline SOFA scores (mean 15.9, SD 3.2) were similar between groups. We recruited 55% of eligible patients, delivered >80% of the prescribed dose in each arm, and achieved 100% follow-up. SOFA tended to decline more over the first week in CVVH recipients (-0.8, 95% CI -2.1, +0.5) driven by a reduction in vasopressor requirements. Mortality (54% CVVH; 55% CVVHD) and dialysis dependence in survivors (24% CVVH; 19% CVVHD) at 60 days were similar.\n\nABSTRACT.CONCLUSIONS:\nOur results suggest that a large trial comparing CVVH to CVVHD would be feasible. There is a trend toward improved vasopressor requirements among CVVH-treated patients over the first week of treatment.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT00675818\n\nBODY.INTRODUCTION:\nAcute kidney injury (AKI) is a common complication of critical illness, afflicting up to two-thirds of patients admitted to the ICU [1]. A significant minority of patients with AKI requires renal replacement therapy (RRT), and these individuals have high short-term mortality that ranges from 50 to 70% [2]. In an attempt to mitigate these poor outcomes, various components of the RRT prescription have been rigorously examined in large well-designed randomized controlled trials (RCTs) [3-5].\nThe optimal mode of clearance in patients with AKI who require renal support is an area of considerable controversy resulting in significant practice variation [6]. 
Hemofiltration, whereby solutes are removed by convection, facilitates the removal of both low and higher molecular weight solutes, depending on the pore size of the membrane [7]. Hemodialysis, in which solute removal occurs via diffusion out of the bloodstream into the dialysate down a concentration gradient, removes low molecular weight molecules but provides limited clearance of higher molecular weight substances. When filter characteristics are kept constant, hemofiltration, which more faithfully mimics glomerular filtration, should result in the clearance of larger-sized solutes as compared to hemodialysis [7]. The removal of such solutes, which may include toxic mediators of sepsis and inflammation, provides the theoretical underpinnings for the superiority of hemofiltration as a renal support mode for critically ill patients with AKI.\nWe conducted a multicenter pilot RCT of hemofiltration vs. hemodialysis in critically ill patients with AKI to determine whether a larger definitive trial based on clinically important endpoints would be feasible. In a secondary analysis, we evaluated whether hemofiltration improved global organ dysfunction.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN:\nWe conducted an unblinded RCT of continuous venovenous hemofiltration (CVVH) vs. continuous venovenous hemodialysis (CVVHD) with concealed allocation (http://Clinicaltrials.gov registration number NCT00675818). Our reporting follows the updated CONSORT statement [8].\n\nBODY.SETTING:\nParticipants were recruited from ICUs at six academic hospitals: Mt. Sinai Hospital (Medical-Surgical ICU), Sunnybrook Health Sciences Centre (Critical Care Unit and Cardiovascular ICU) and St. Michael's Hospital (Medical-Surgical and Cardiovascular ICUs), all in Toronto, Canada; Victoria Hospital (Critical Care Trauma Centre), and University Hospital (Medical/Surgical Intensive Care Unit and Cardiac Surgery Recovery Unit), both in London, Canada, and University of Alberta Hospital (General Systems Intensive Care Unit) in Edmonton, Canada. The Research Ethics Boards of Mt. Sinai Hospital, St. Michael's Hospital, Sunnybrook Health Sciences Centre, London Health Sciences Centre and the University of Alberta approved the protocol. The Applied Health Research Centre at St. Michael's Hospital (Toronto, Ontario, Canada) was the trial coordinating center.\n\nBODY.POPULATION:\nWe enrolled critically ill adults (≥ 16 years of age) with AKI, defined as a serum creatinine increase ≥ 50% from baseline (defined as the last known pre-morbid serum creatinine or earliest value available from the current admission). At the time of screening, at least one of the following indications for RRT initiation needed to be present: (i) oliguria (defined as urine output < 100 mL in the preceding 4 hours); (ii) metabolic acidosis (serum bicarbonate < 15 mmol/L and pH < 7.25); (iii) refractory hyperkalemia (serum potassium > 6 mmol/L despite medical efforts at potassium removal); (iv) serum urea > 50 mmol/L, or (v) suspected uremic organ involvement (pericarditis, encephalopathy, neuropathy or myopathy). Finally, participants needed to be hemodynamically unstable, defined as Sequential Organ Failure Assessment (SOFA)- Cardiovascular score ≥ 1 on the day of screening (see Additional file 1 for the modified SOFA score used in this study). This required the patient to have mean arterial pressure < 70 mmHg or receipt of at least one vasopressor or inotrope [9]. 
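The inclusion criteria just listed lend themselves to a simple screening check. The sketch below encodes them with hypothetical field and function names; the exclusion criteria in the next paragraph are deliberately not modelled.

```python
from dataclasses import dataclass

@dataclass
class Screening:
    age: int
    creatinine_rise_pct: float        # % increase from baseline serum creatinine
    urine_ml_last_4h: float
    bicarbonate: float                # mmol/L
    ph: float
    potassium: float                  # mmol/L, despite efforts at potassium removal
    urea: float                       # mmol/L
    uremic_organ_involvement: bool
    map_mmhg: float
    on_vasopressor_or_inotrope: bool

def rrt_indication(s: Screening) -> bool:
    return (s.urine_ml_last_4h < 100
            or (s.bicarbonate < 15 and s.ph < 7.25)
            or s.potassium > 6
            or s.urea > 50
            or s.uremic_organ_involvement)

def hemodynamically_unstable(s: Screening) -> bool:
    # SOFA-cardiovascular score >= 1: MAP < 70 mmHg or any vasopressor/inotrope.
    return s.map_mmhg < 70 or s.on_vasopressor_or_inotrope

def meets_inclusion(s: Screening) -> bool:
    return (s.age >= 16
            and s.creatinine_rise_pct >= 50
            and rrt_indication(s)
            and hemodynamically_unstable(s))

print(meets_inclusion(Screening(age=58, creatinine_rise_pct=120, urine_ml_last_4h=40,
                                bicarbonate=19, ph=7.31, potassium=4.9, urea=22,
                                uremic_organ_involvement=False, map_mmhg=64,
                                on_vasopressor_or_inotrope=True)))   # True
```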
Patients were excluded if any one of the following was present: receipt of any RRT within the previous 2 months; presence of an obstructive etiology for AKI; receipt of a kidney transplant within the preceding year; diagnosis of rapidly progressive glomerulonephritis, vasculitis, or acute interstitial nephritis; a clinical indication for intermittent hemodialysis (for example, the presence of a dialyzable toxin); terminal illness with an associated life expectancy less than 2 months; moribund status (life expectancy < 48 hours as per judgment of physicians involved in the patient's care); prior enrolment in this study; enrolment in a competing ICU interventional study; non-availability of a CRRT machine, or administration of RRT for > 36 hours prior to eligibility assessment.\n\nBODY.PATIENT ALLOCATION:\nAfter eligibility was confirmed, we attempted to obtain consent from the patient or if the patient lacked capacity to consent, his/her substitute decision maker (SDM) was approached. A deferred consent option was approved at three sites, which allowed patient enrollment and randomization in the event of patient incapacity and the inability to locate an SDM. Using this mechanism, patients were randomized, and research personnel attempted to locate an SDM every 72 hours to affirm consent for participation. In all cases, when participants regained capacity, they were asked to provide consent if they were initially enrolled using a priori SDM consent or deferred consent; no participant withdrew consent once regaining capacity. Patients were allocated to a study group using sealed, opaque, sequentially numbered envelopes (prepared by the coordinating center) that were opened after consent was obtained [10]. Randomization was stratified by center in random blocks of four, six or eight. The Research Ethics Board at each center approved the study. An independent data and safety monitoring board tracked the trial's conduct.\n\nBODY.STUDY INTERVENTION:\nParticipants randomized to CVVH were prescribed ultrafiltration with isovolemic replacement solution (evenly split between pre- and post-filter) to achieve a target clearance of 35 mL/kg body weight/hr. The prescribed hourly ultrafiltration rate was increased above 35 mL/kg/hr to compensate for the reduced efficiency of clearance related to the pre-filter component of the replacement solution volume administered. This adjusted CVVH dose was calculated from the post-filter replacement fluid (RF), pre-filter replacement fluid and blood (blood) flow rates as follows:\nDose = Postfilter RF rate + ((Prefilter RF rate × (Blood flow/(Blood flow + Prefilter RF rate))).\nIn the CVVHD arm, the dialysate flow was set to achieve a clearance of 35 mL/kg/hr, which included a post-filter hemofiltration flow of 100 to 200 mL/hr. This obligate low-volume post-filter hemofiltration is utilized at participating centers to minimize the risk of blood clotting in the machine's deaeration chamber. Ultrafiltration performed for achievement of net fluid balance or to compensate for the volume of administered anticoagulants (namely, citrate or unfractionated heparin) was not considered in the total dose. We used the patient's most recent measured body weight, as recorded in the chart or estimated by the study coordinator if no value was documented, to calculate prescribed RRT dose. 
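The adjusted CVVH dose formula above discounts the pre-filter replacement fluid for the dilution of blood entering the filter. A minimal sketch, with all flows converted to mL/hr and purely illustrative example values:

```python
def cvvh_effective_dose(post_filter_rf, pre_filter_rf, blood_flow):
    """Effective hourly clearance (mL/hr) delivered by CVVH.

    Pre-filter replacement fluid is scaled by blood_flow / (blood_flow + pre_filter_rf)
    because it dilutes the blood before it reaches the membrane. All flows in mL/hr.
    """
    return post_filter_rf + pre_filter_rf * blood_flow / (blood_flow + pre_filter_rf)

def prescribed_target(weight_kg, dose_per_kg=35.0):
    """Target clearance in mL/hr for a given body weight."""
    return dose_per_kg * weight_kg

# Hypothetical patient: 85 kg, blood flow 200 mL/min (12000 mL/hr),
# replacement fluid split evenly pre/post filter at 1600 mL/hr each.
target = prescribed_target(85)                      # 2975 mL/hr
delivered = cvvh_effective_dose(1600, 1600, 12000)  # ~3012 mL/hr
print(round(target), round(delivered))
```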
Due to difficulties in achieving the target RRT dose as a result of high transmembrane pressures in some patients with higher body mass (particularly those randomized to CVVH), the protocol was modified in May 2010 (21 months after randomization of the first patient) such that the total fluid dose was capped at 4,000 mL/hr for both treatment arms, irrespective of the patient's weight. After this protocol change, twenty patients were enrolled in the trial; the 4,000 mL/hr dosing cap was invoked in six participants (five allocated to CVVHD and one to CVVH) who would have otherwise needed higher flows to achieve an actual total dose of 35 mL/kg/hr.\nAll study therapies were delivered by the Gambro Prismaflex™ RRT machine using the ST100 (surface area 1.0 m2) or ST150 (surface area 1.5 m2) filter sets, which contain a polyacrylonitrile AN69 membrane (Gambro, Richmond Hill, ON, Canada). We permitted the use of any commercially available dialysate and replacement solutions. Decisions regarding circuit anticoagulation (heparin, regional citrate anticoagulation, or no anticoagulation) and volume control were at the discretion of the attending physicians. Patients remained on study therapy until death, withdrawal of CRRT as part of withdrawal of life support, hemodynamic stability (SOFA-cardiovascular score < 2 for > 24 hrs) permitting stepdown to intermittent hemodialysis, or recovery of kidney function (defined as urine output > 500 mL in the preceding 12 hrs, and most recent serum potassium < 5.5 mmol/L and serum bicarbonate > 18 mmol/L).\n\nBODY.OUTCOMES:\nThe primary feasibility outcome of this study was the ability to administer > 75% of the prescribed CRRT dose to participants in each treatment arm. Secondary feasibility outcomes included the ability to enroll > 25% of fully eligible patients and the ability to follow > 95% of patients to 60 days following randomization (the anticipated follow-up period for the future definitive principal study). Secondary outcomes included change in SOFA score from baseline to days 1, 2 and 7, respectively, following randomization. Serial changes in SOFA scores have been shown to be correlated with clinical outcomes in critically ill patients with AKI who require RRT [11].\n\nBODY.DATA COLLECTION:\nTrained research coordinators collected baseline clinical and demographic data, and information on pre-existing medical conditions. Specific risk factors for AKI were ascertained, including recent procedures, nephrotoxins, and sepsis (defined using consensus guidelines [12]). SOFA score was calculated at the time of randomization and on each day of study therapy. The SOFA-Cardiovascular score was modified to include the receipt of vasopressin. Patients receiving RRT on a given day were assigned a SOFA-Renal score of 4, regardless of urine output or serum creatinine. Participants were followed until death or a maximum of 60 days from randomization, at which time vital status and the ongoing need for RRT among survivors were recorded.\n\nBODY.STATISTICAL ANALYSES:\nAs this was a feasibility trial with the primary objective of informing the design of a large-scale RCT, we planned to enroll a convenience sample of 75 participants from six sites. Patients who were randomized but never received RRT are described, but these individuals were replaced to ensure that at least 75 patients received some form of RRT. 
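Putting the three feasibility thresholds defined in the Outcomes section above together, a simple check against the summary figures reported later in the article (79 of 143 eligible patients enrolled, more than 80% of the prescribed dose delivered in each arm, and 60-day follow-up for all 77 analysed patients) might look like the following; the function and argument names are ours.

```python
def trial_feasible(pct_dose_delivered_by_arm, enrolled, eligible, followed, randomized):
    """Primary and secondary feasibility criteria as described in the Outcomes section."""
    dose_ok = all(pct > 75.0 for pct in pct_dose_delivered_by_arm)   # > 75% of dose
    recruitment_ok = enrolled / eligible > 0.25                      # > 25% enrolled
    follow_up_ok = followed / randomized > 0.95                      # > 95% followed
    return dose_ok and recruitment_ok and follow_up_ok

print(trial_feasible(pct_dose_delivered_by_arm=(84.7, 87.8),
                     enrolled=79, eligible=143,
                     followed=77, randomized=77))   # True
```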
Since the primary feasibility outcome was based on the dose of CRRT received, patients for whom dose could not be readily calculated (those who received no RRT or forms of RRT other than CRRT) were excluded from the analysis related to feasibility. However, clinical outcomes are reported for all randomized participants.\nDescriptive statistics were used to characterize participants in either arm. Continuous variables are presented as means (SD) or medians (interquartile range, IQR) and two-group comparisons were performed with the t-test or Wilcoxon test, as appropriate. Two-group comparisons involving categorical variables were carried out with the chi square test. Analysis of covariance, adjusted for baseline SOFA score, was used to evaluate the change in SOFA score on days 1 and 2. Linear mixed models adjusted for baseline SOFA score and day of study therapy were used to evaluate the impact of RRT mode on SOFA score over the first week of therapy. For the fixed effect of treatment (that is, CVVH vs. CVVHD) 95% confidence intervals (CI) were obtained by profiling the log-likelihood function. All analyses were performed using R version 2.12.0 (R Development Core Team 2010, Vienna, Austria).\n\n\nBODY.RESULTS:\nWe screened 347 patients; 143 were eligible for participation and 79 individuals (55.2%) were enrolled over a 24-month time period. The inability to obtain consent from the patient or SDM was the reason for the non-enrollment of otherwise eligible patients. One patient was excluded shortly after enrollment after it was decided to pursue a non-continuous form of RRT. In total, 78 patients were randomized (39 to CVVH, and 39 to CVVHD). In one case, prior to the start of therapy it was recognized that a patient randomized to CVVHD was inappropriately enrolled as the indication for RRT was toxin removal rather than AKI per se. This patient was excluded from all further analyses. Clinical outcomes are reported in an intention-to-treat fashion for the remaining 77 patients (CVVH, 39; CVVHD, 38). Four patients randomized to CVVH were excluded from the feasibility analysis, two due to death prior to commencement of study RRT, and two due to receipt of continuous venovenous hemodiafiltration as the initial mode of therapy. The indication(s) for RRT was (were) oliguria, metabolic acidosis, hyperkalemia and uremia in 34, 17, 6 and 8 patients, respectively, in the CVVH arm. In the CVVHD arm, these indications guided the inclusion of 36, 15, 4 and 3 participants, respectively. In total, 73 participants commenced the therapy to which they were randomized (CVVH, 35; CVVHD, 38); these individuals contributed to the analysis relating to the feasibility of treatment delivery (Figure 1).\nFigure 1Flow of patients through the trial. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\nBODY.BASELINE CHARACTERISTICS (TABLE 1):\nThe mean age of participants was 61.5 (SD 14.2) years and 30/77 (39%) were women. The median time from ICU admission to randomization was 2 (IQR 2 to 8) days. Sepsis was present in 63/77 (82%) of participants, 73/77 (95%) were mechanically ventilated and 69/77 (90%) received vasopressors at the time of randomization. Median 24-hr urine output was 125 (IQR 50 to 250) mL. 
Approximately two-thirds of participants received some form of RRT prior to randomization, with 23/39 in the CVVH group (59%) for a mean of 10.6 (SD 12.0) hrs, and 30/38 in the CVVHD group (79%) for a mean of 13.3 (SD 11.7) hrs. (Table 1\nTable 1Baseline characteristics\n\nCVVH (n = 39)\nCVVHD (n = 38)\nAge, years\n58.8 ± 13.7\n64.3 ± 14.3\nFemale\n14 (36%)\n16 (42%)\nWeight, kg\n86.6 ± 31.5\n89.2 ± 26.4\nDays from ICU admission to randomization\n2 (1, 4)\n2 (1, 3)\nSOFA score\n16.3 ± 3.3\n15.5 ± 3.0\nPremorbid conditions\nHypertension\n19 (49%)\n26 (68%)\nDiabetes mellitus\n9 (23%)\n9 (24%)\nChronic kidney disease\n5 (13%)\n13 (34%)\nAcute kidney injury risk factors\nSepsis\n31 (79%)\n32 (84%)\nCardiopulmonary bypass in past 7 days\n1 (3%)\n4 (11%)\nIV contrast in past 7 days\n9 (23%)\n6 (16%)\nPhysiologic parameters\nUrine output, mL/24 hr\n125 (60, 245)\n135 (46, 251)\nMinimum systolic BP during 24 hr before enrolment, mmHg\n94.2 ± 16.5\n86.1 ± 13.5\nMinimum diastolic BP during 24 hr before enrolment, mmHg\n48.9 ± 8.6\n47.2 ± 9.4\nLaboratory parameters\nSerum creatinine, µmol/L\n276 (194, 352)\n246 (155, 325)\nUrea, mmol/L\n25.8 ± 38.1\n17.6 ± 12.7\nPotassium, mmol/L\n4.3 ± 0.7\n4.1 ± 0.6\nBicarbonate, mmol/L\n20.5 ± 5.9\n20.2 ± 5.3\nHemoglobin, g/L\n86.1 ± 16.3\n90.0 ± 15.2\nPlatelets, × 109/L\n109.6 ± 82.6\n127.2 ± 80.9\nWBC, × 109/L\n20.2 ± 43.3\n16.9 ± 10.6\nICU interventions\nDuration of RRT prior to randomization, hours\n8 (0, 19)\n13 (1.0, 20.5)\nMechanical ventilation\n37 (95%)\n36 (95%)\nVasopressors\n34 (87%)\n35 (92%)\nTotal parenteral nutrition\n1 (3%)\n2 (5%)\nContinuous data are displayed as mean ± SD or medians (interquartile range), as appropriate. Categorical variables are displayed as n (%). CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis; BP, blood pressure; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; WBC, White blood cell.\n\nBODY.FEATURES OF STUDY TREATMENTS (TABLE 2):\nAmong the 35 participants who started CVVH, RRT was prescribed for a median of 107 (IQR 55 to 146) hrs and delivered for 85 (IQR 43 to 128) hrs. Overall, 84.7% (95% CI 79.1, 90.3) of the prescribed duration was delivered. 
The mean weight-standardized dose was 33.6 (SD 7.4) mL/kg/hr while CRRT was ongoing.(Table 2\nTable 2Feasibility and safety data\nCVVH(n = 35)CVVHD(n = 38)P-valueDuration RRT prescribed, hrs\n146 ± 240\n145 ± 156\n0.88\nDuration RRT received, hrs\n130 ± 222\n128 ± 142\n0.87\nMean pre-filter replacement solution flow, mL/hr\n1533 ± 442\n0\nn/a\nMean post-filter replacement solution flow, mL/hr\n1440 ± 488\n180 ± 140\nn/a\nMean dialysate flow, mL/hr\n0\n2871 ± 872\nn/a\nMean RRT dose, mL/kg/hr\n33.6 ± 7.4\n34.7 ± 4.4\n0.50\nPrescribed dose delivered, %\n84.7 ± 16.3\n87.8 ± 13.7\n0.73\nNet ultrafiltration, L/day\n1.7 ± 2.2\n0.8 ± 4.1\n0.98\nDays on study therapy\n5 (3-7)\n4.50 (3.00-10.25)\n0.79\nPrimary reason for CRRT withdrawal\n\n\n0.75\nDeath while on CRRT\n12 (35%)\n10 (27%)\n\nKidney function recovery\n7 (21%)\n7 (19%)\n\nTransfer to intermittent hemodialysis\n12 (35%)\n14 (38%)\n\n\n3 (9%)\n6 (16%)\n\nCatheter changes/day of therapy\n0.09 ± 0.2\n0.1 ± 0.2\n0.92\nUnits RBCs transfused/day of therapy\n0.3 ± 0.3\n0.4 ± 0.7\n0.38\nUnscheduled circuit changes/day of therapy\n0.2 ± 0.3\n0.2 ± 0.2\n0.36\nStudy days with receipt of norepinephrine, %\n61.8 ± 36.2\n65.7 ± 7.5\n0.41\nStudy days with receipt of vasopressin, %\n35.4 ± 35.9\n43.0 ± 43.6\n0.51\nContinuous data are displayed as mean ± standard deviation or median (interquartile range), as appropriate. Categorical variables are displayed as n (%); n/a, not applicable. Data are restricted to patients who initiated therapy with the CRRT mode to which they had been allocated. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; RBCs, red blood cells; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration.\nCVVHD was prescribed in 38 participants for a median of 92 (IQR 57 to 145) hrs and delivered for 76 (IQR 44 to 148) hrs. Over the course of the trial, 87.8% (95% CI 83.3, 92.3) of the prescribed duration was delivered to patients randomized to receive CVVHD. The mean weight-standardized delivered RRT dose was 34.7 mL/kg/hr (SD 4.4).\n\nBODY.CLINICAL OUTCOMES:\nAll subjects were followed to 60 days, by which point 22/39 (56%) and 21/38 (55%) of participants assigned to CVVH and CVVHD, respectively, had died. Among surviving patients, 4/17 (24%) and 3/17 (19%) of those initially assigned to CVVH and CVVHD respectively, were still dependent on RRT.\nAfter adjustment for baseline SOFA score, we found a non-significant decline in SOFA score among participants treated with CVVH compared to CVVHD on the first day (-0.4, 95% CI -1.3, 0.6) and second day (-0.4, 95% CI -1.6, 0.8) following randomization. Over the first week of therapy, the adjusted change in the SOFA score among participants treated with CVVH compared to CVVHD was -0.8 (95% CI -2.1, 0.5). The observed reduction appeared to be driven by a reduction in the cardiovascular component of the SOFA score (Figure 2).\nFigure 2Total and Cardiovascular (CV) Sequential Organ Failure Assessment (SOFA) scores over the first seven days following randomization. Total SOFA scores are denoted with squares; the cardiovascular component of the SOFA score is denoted with triangles.CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\n\nBODY.PROTOCOL VIOLATIONS AND ADVERSE EVENTS:\nWe identified 15 protocol violations involving 15 participants. 
In one individual who started therapy with CVVH, achievement of the target dose was not feasible and RRT was supplemented with the addition of continuous dialysis; ongoing difficulties with maintaining adequate small molecule clearance using CRRT resulted in a change to sustained low efficiency dialysis. In five cases (one patient on CVVH, and four on CVVHD), stepdown from CRRT to IHD occurred before the participant met the study-defined criteria for hemodynamic stability. In five cases (two patients on CVVH, and three on CVVHD), RRT was discontinued altogether in participants who did not meet study criteria for renal recovery.\nNo adverse events were attributed to the study intervention in either treatment arm.\n\n\nBODY.DISCUSSION:\nWe completed a multi-center, concealed-allocation, randomized trial comparing hemofiltration and hemodialysis in critically ill patients with AKI. This pilot trial achieved its primary objective of confirming the feasibility of performing a large-scale trial evaluating RRT clearance mode in AKI. There was also a non-significant trend towards reduced organ dysfunction, driven by decreased vasopressor requirements, early after the initiation of RRT in patients who received hemofiltration. These feasibility data and the importance of the clinical question justify the conduct of a large trial with adequate power to evaluate the primary outcome of 60-day mortality. With a power of 0.80, type I error of 0.05, and a mortality reduction of 10% in patients treated with CVVH (estimated 60-day mortality in the CVVHD arm 55%), we estimate that such a trial would require the enrolment of nearly 400 patients per arm. A more conservative mortality reduction of 7.5% would require the enrolment of 700 patients per arm.\nLittle is known about the optimal mode of clearance in renal replacement for AKI. Although both convection and diffusion remove small molecules with equal efficiency, convection may remove larger molecules that are not cleared by diffusive mechanisms [13]; previous studies have examined effects on inflammatory markers and have been too small to reliably determine effects on clinical outcomes. In addition, the profile of larger molecules removed by convection is relatively non-specific and may also include molecules that dampen inflammation or crucial medications such as antibiotics [14]. A randomized crossover study of 13 patients with AKI and the systemic inflammatory response syndrome found that CVVH for 24 hrs reduced plasma concentrations of TNFα and cleared more IL-6, compared to CVVHD. However, there was no effect on plasma concentrations of IL-6, IL-10, SL-selectin, or endotoxin [15]. Morgera and coworkers randomized 24 patients with sepsis-associated AKI to treatment with CVVH or CVVHD using a high cutoff membrane permeable to molecules up to 60 kilodaltons in size. Plasma concentration and clearance for IL-6 did not differ, but clearance of IL-1 receptor antagonist, an anti-inflammatory mediator, was enhanced by CVVH [16]. Of note, protein losses were higher in patients who received CVVH. In a prospective crossover study involving 15 patients with AKI who sequentially received CVVH and CVVHD, β2-microglobulin clearance was non-significantly higher among CVVH recipients (P = 0.055) [17]. Among trials focusing on clinical outcomes, a single-centre RCT (n = 20) did not demonstrate a difference in survival, renal recovery or ICU stay in patients treated with CVVH vs. CVVHD administered at fixed doses of 1.7 to 2.0 L/hr [18]. 
Similarly, a recent unpublished 65-patient RCT of CVVH vs continuous venovenous hemodiafiltration (both at 40 mL/kg/hr) did not demonstrate a survival difference at 28 days [19].\nWhile interventional trials involving devices and processes of care that are susceptible to large variations in practice are challenging, we achieved our feasibility objectives. Specifically, we were able to recruit the majority of eligible subjects, implement the protocolized therapy for > 85% of the prescribed time and ascertain vital status at 60 days for all participants. When accounting for actual time on therapy, the delivered dose exceeded 80% of that prescribed in both treatment arms, thereby surpassing our feasibility threshold of 75%. Accordingly, we believe our study strongly supports the feasibility of a large definitive randomized trial comparing hemofiltration and hemodialysis in critically ill patients with severe AKI.\nThis is the largest published trial to date to study the mode of solute clearance in AKI. Given the challenges of recruiting participants and implementing interventions in a population with a high burden of illness, the success of our pilot was a necessary precursor to a principal trial that examines patient-relevant clinical outcomes. Our eligibility criteria were pragmatic and assured the inclusion of individuals in North America who typically receive CRRT. A minority of individuals for whom no SDM could be found were enroled with deferred consent, thereby limiting exclusion of potentially eligible patients and mitigating selection bias. Of interest, no subject enroled by deferred consent had an SDM who subsequently withdrew consent or withdrew consent personally after regaining capacity. Finally, other than the clearance mode, all other aspects of RRT including stepdown to intermittent hemodialysis and withdrawal of RRT were performed in a manner consistent with usual practice.\nOur study has several limitations. As this was an unblinded trial, we cannot exclude the effect of co-interventions in either treatment arm. However, the nature of our intervention made blinding impractical, and we ensured that the two groups received equivalent RRT doses. There is also no definite intervention related to RRT prescription that has been shown to modify outcomes in critically ill patients with AKI. In addition, we cannot exclude the possibility that patients who were eligible but not randomized were systematically different than trial participants. Our protocol specified a target clearance of 35 mL/kg/hr, which was generally achieved in both arms. The decision to use this dose was guided by the fact that our trial was designed when higher dose CRRT was felt to be potentially beneficial based on data from two trials [20,21]. This pre-dated more recent trials and meta-analyses demonstrating no advantage of higher dose CRRT over doses of 20 to 25 mL/kg/hr, which would likely be the standard in a future trial comparing convection and diffusion [4,5,22,23]. However, since high dose therapy appears safe and may positively modify the effect of convective clearance [20], our higher dose target was reasonable. It should be noted that while 35 ml/kg/hr was prescribed to all participants, actual solute clearance may have been significantly lower due to changes in filter permeability [24,25]. On the other hand, when calculating the prescribed dose, we did not consider clearance associated with net ultrafiltration or removal of volume associated with the administration of anticoagulants. 
This approach would tend to underestimate the actual solute clearance that was delivered. Importantly, any deviation in dose from our 35 mL/kg/hr target would affect participants in both intervention arms to a similar extent. The Prismaflex CRRT system requires that all patients (even those predominantly getting hemodialysis) receive a small amount of post-filter hemofiltration (up to 200 ml/hr) to prevent clotting in the machine's deaeration chamber. In addition, CVVHD recipients would have had other unavoidable sources of convective clearance, specifically for the achievement of net fluid removal, and for the isovolemic removal of volume associated with the administration of the anticoagulant (citrate or heparin). Thus, CVVHD recipients did not receive purely diffusive solute clearance. We nonetheless estimate that the typical patient enrolled in the CVVHD arm still received > 80% of therapy in the form of diffusive clearance, which reassures us that this trial truly compared two different modes of solute clearance. Our trial was conducted using the AN69 polyacryonitrile filter, which has unique adsorptive characteristics [14]. While this is a widely used membrane in the administration of CRRT, the adsorptive characteristics of filters may differ and we cannot generalize our findings to settings in which other filters are utilized. Moreover, we did not collect data to evaluate the relative effect of CVVH or CVVHD on the removal of molecules of varying size. This trial was conducted using continuous RRT. Although the question of hemodialysis vs. hemofiltration is applicable to patients stable enough to receive intermittent RRT, machines to deliver intermittent hemofiltration were not widely available in North America when the trial was conducted. Among trial participants who became more hemodynamically stable, intermittent hemodialysis was utilized even when the initial CRRT mode was hemofiltration, thereby potentially diluting the benefits associated with CVVH. Finally, while we observed a trend towards improved organ failure scores, this finding must be interpreted with caution given the small sample size of our trial. Moreover, disease severity scores such as the SOFA score are surrogate markers that cannot supplant hard clinical endpoints.\n\nBODY.CONCLUSIONS:\nOur findings clearly support the feasibility of performing a definitive trial comparing CVVH and CVVHD in critically ill patients with AKI. The early non-significant trend towards reduced vasopressor requirements provides preliminary support to the concept that convective modes of clearance reduce inflammation and thus benefit critically ill patients. Given the high mortality associated with AKI, the lack of specific RRT interventions shown to reduce mortality, and current practice variation, the results of our pilot trial provide justification for a larger trial of hemofiltration vs. 
hemodialysis that will be adequately powered to evaluate meaningful clinical outcomes.\n\nBODY.KEY MESSAGES:\n• A randomized controlled trial of hemofiltration vs hemodialysis is feasible.\n• Hemofiltration may be associated with decreased vasopressor requirements over the first week of therapy.\n• A well-designed and adequately powered trial of hemofiltration vs hemodialysis would address an important area of uncertainty in the management of patients with AKI.\n\nBODY.ABBREVIATIONS:\nAKI: acute kidney injury; CRRT: continuous renal replacement therapy; CVVH: continuous venovenous hemofiltration; CVVHD: continuous venovenous hemodialysis; IL: interleukin; IQR: interquartile range; RF: replacement fluid; RCT: randomized controlled trial; RRT: renal replacement therapy; SDM: substitute decision maker; SOFA: Sequential Organ Failure Assessment; TNF α: tumor necrosis factor alpha.\n\nBODY.COMPETING INTERESTS:\nDr. Bagshaw has served on a Gambro Experts Panel. No other relevant competing interests are reported.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRW conceived and designed the study, obtained funding, supervised study conduct, interpreted the data, drafted the manuscript, and provided critical review of the manuscript. JOF conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. SMB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. KEB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AXG interpreted the study data, supervised study conduct, and provided critical review of the manuscript. MAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. SL interpreted the study data, supervised study conduct and provided critical review of the manuscript. DK interpreted the study data, supervised study conduct, and provided critical review of the manuscript. RMR interpreted the study data, supervised study conduct, and provided critical review of the manuscript. NIP interpreted the study data and provided critical review of the manuscript. KP participated in data analysis and interpretation, designed data collection tools, and participated in the critical review of the manuscript. KT participated in study design, conducted the data analysis, participated in data interpretation, and in the critical review of the manuscript. NKJA conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. All authors approved the final version.\n\nBODY.SUPPLEMENTARY MATERIAL:\nBODY.ADDITIONAL FILE 1:\nModified Sequential Organ Failure Assessment (SOFA) score. SOFA score modified from the original version (see reference [9]) for application in the OMAKI trial.\nClick here for file\n\n**Question:** Compared to continuous venovenous hemodialysis [CVVHD] what was the result of continuous venovenous hemofiltration [CVVH] on Duration RRT received?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13845
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nOptimal Mode of clearance in critically ill patients with Acute Kidney Injury (OMAKI) - a pilot randomized controlled trial of hemofiltration versus hemodialysis: a Canadian Critical Care Trials Group project\n\n\n\nABSTRACT.INTRODUCTION:\nAmong critically ill patients with acute kidney injury (AKI) needing continuous renal replacement therapy (CRRT), the effect of convective (via continuous venovenous hemofiltration [CVVH]) versus diffusive (via continuous venovenous hemodialysis [CVVHD]) solute clearance on clinical outcomes is unclear. Our objective was to evaluate the feasibility of comparing these two modes in a randomized trial.\n\nABSTRACT.METHODS:\nThis was a multicenter open-label parallel-group pilot randomized trial of CVVH versus CVVHD. Using concealed allocation, we randomized critically ill adults with AKI and hemodynamic instability to CVVH or CVVHD, with a prescribed small solute clearance of 35 mL/kg/hour in both arms. The primary outcome was trial feasibility, defined by randomization of >25% of eligible patients, delivery of >75% of the prescribed CRRT dose, and follow-up of >95% of patients to 60 days. A secondary analysis using a mixed-effects model examined the impact of therapy on illness severity, defined by sequential organ failure assessment (SOFA) score, over the first week.\n\nABSTRACT.RESULTS:\nWe randomized 78 patients (mean age 61.5 years; 39% women; 23% with chronic kidney disease; 82% with sepsis). Baseline SOFA scores (mean 15.9, SD 3.2) were similar between groups. We recruited 55% of eligible patients, delivered >80% of the prescribed dose in each arm, and achieved 100% follow-up. SOFA tended to decline more over the first week in CVVH recipients (-0.8, 95% CI -2.1, +0.5) driven by a reduction in vasopressor requirements. Mortality (54% CVVH; 55% CVVHD) and dialysis dependence in survivors (24% CVVH; 19% CVVHD) at 60 days were similar.\n\nABSTRACT.CONCLUSIONS:\nOur results suggest that a large trial comparing CVVH to CVVHD would be feasible. There is a trend toward improved vasopressor requirements among CVVH-treated patients over the first week of treatment.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT00675818\n\nBODY.INTRODUCTION:\nAcute kidney injury (AKI) is a common complication of critical illness, afflicting up to two-thirds of patients admitted to the ICU [1]. A significant minority of patients with AKI requires renal replacement therapy (RRT), and these individuals have high short-term mortality that ranges from 50 to 70% [2]. In an attempt to mitigate these poor outcomes, various components of the RRT prescription have been rigorously examined in large well-designed randomized controlled trials (RCTs) [3-5].\nThe optimal mode of clearance in patients with AKI who require renal support is an area of considerable controversy resulting in significant practice variation [6]. 
Hemofiltration, whereby solutes are removed by convection, facilitates the removal of both low and higher molecular weight solutes, depending on the pore size of the membrane [7]. Hemodialysis, in which solute removal occurs via diffusion out of the bloodstream into the dialysate down a concentration gradient, removes low molecular weight molecules but provides limited clearance of higher molecular weight substances. When filter characteristics are kept constant, hemofiltration, which more faithfully mimics glomerular filtration, should result in the clearance of larger-sized solutes as compared to hemodialysis [7]. The removal of such solutes, which may include toxic mediators of sepsis and inflammation, provides the theoretical underpinnings for the superiority of hemofiltration as a renal support mode for critically ill patients with AKI.\nWe conducted a multicenter pilot RCT of hemofiltration vs. hemodialysis in critically ill patients with AKI to determine whether a larger definitive trial based on clinically important endpoints would be feasible. In a secondary analysis, we evaluated whether hemofiltration improved global organ dysfunction.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN:\nWe conducted an unblinded RCT of continuous venovenous hemofiltration (CVVH) vs. continuous venovenous hemodialysis (CVVHD) with concealed allocation (http://Clinicaltrials.gov registration number NCT00675818). Our reporting follows the updated CONSORT statement [8].\n\nBODY.SETTING:\nParticipants were recruited from ICUs at six academic hospitals: Mt. Sinai Hospital (Medical-Surgical ICU), Sunnybrook Health Sciences Centre (Critical Care Unit and Cardiovascular ICU) and St. Michael's Hospital (Medical-Surgical and Cardiovascular ICUs), all in Toronto, Canada; Victoria Hospital (Critical Care Trauma Centre), and University Hospital (Medical/Surgical Intensive Care Unit and Cardiac Surgery Recovery Unit), both in London, Canada, and University of Alberta Hospital (General Systems Intensive Care Unit) in Edmonton, Canada. The Research Ethics Boards of Mt. Sinai Hospital, St. Michael's Hospital, Sunnybrook Health Sciences Centre, London Health Sciences Centre and the University of Alberta approved the protocol. The Applied Health Research Centre at St. Michael's Hospital (Toronto, Ontario, Canada) was the trial coordinating center.\n\nBODY.POPULATION:\nWe enrolled critically ill adults (≥ 16 years of age) with AKI, defined as a serum creatinine increase ≥ 50% from baseline (defined as the last known pre-morbid serum creatinine or earliest value available from the current admission). At the time of screening, at least one of the following indications for RRT initiation needed to be present: (i) oliguria (defined as urine output < 100 mL in the preceding 4 hours); (ii) metabolic acidosis (serum bicarbonate < 15 mmol/L and pH < 7.25); (iii) refractory hyperkalemia (serum potassium > 6 mmol/L despite medical efforts at potassium removal); (iv) serum urea > 50 mmol/L, or (v) suspected uremic organ involvement (pericarditis, encephalopathy, neuropathy or myopathy). Finally, participants needed to be hemodynamically unstable, defined as Sequential Organ Failure Assessment (SOFA)- Cardiovascular score ≥ 1 on the day of screening (see Additional file 1 for the modified SOFA score used in this study). This required the patient to have mean arterial pressure < 70 mmHg or receipt of at least one vasopressor or inotrope [9]. 
Patients were excluded if any one of the following was present: receipt of any RRT within the previous 2 months; presence of an obstructive etiology for AKI; receipt of a kidney transplant within the preceding year; diagnosis of rapidly progressive glomerulonephritis, vasculitis, or acute interstitial nephritis; a clinical indication for intermittent hemodialysis (for example, the presence of a dialyzable toxin); terminal illness with an associated life expectancy less than 2 months; moribund status (life expectancy < 48 hours as per judgment of physicians involved in the patient's care); prior enrolment in this study; enrolment in a competing ICU interventional study; non-availability of a CRRT machine, or administration of RRT for > 36 hours prior to eligibility assessment.\n\nBODY.PATIENT ALLOCATION:\nAfter eligibility was confirmed, we attempted to obtain consent from the patient or if the patient lacked capacity to consent, his/her substitute decision maker (SDM) was approached. A deferred consent option was approved at three sites, which allowed patient enrollment and randomization in the event of patient incapacity and the inability to locate an SDM. Using this mechanism, patients were randomized, and research personnel attempted to locate an SDM every 72 hours to affirm consent for participation. In all cases, when participants regained capacity, they were asked to provide consent if they were initially enrolled using a priori SDM consent or deferred consent; no participant withdrew consent once regaining capacity. Patients were allocated to a study group using sealed, opaque, sequentially numbered envelopes (prepared by the coordinating center) that were opened after consent was obtained [10]. Randomization was stratified by center in random blocks of four, six or eight. The Research Ethics Board at each center approved the study. An independent data and safety monitoring board tracked the trial's conduct.\n\nBODY.STUDY INTERVENTION:\nParticipants randomized to CVVH were prescribed ultrafiltration with isovolemic replacement solution (evenly split between pre- and post-filter) to achieve a target clearance of 35 mL/kg body weight/hr. The prescribed hourly ultrafiltration rate was increased above 35 mL/kg/hr to compensate for the reduced efficiency of clearance related to the pre-filter component of the replacement solution volume administered. This adjusted CVVH dose was calculated from the post-filter replacement fluid (RF), pre-filter replacement fluid and blood (blood) flow rates as follows:\nDose = Postfilter RF rate + ((Prefilter RF rate × (Blood flow/(Blood flow + Prefilter RF rate))).\nIn the CVVHD arm, the dialysate flow was set to achieve a clearance of 35 mL/kg/hr, which included a post-filter hemofiltration flow of 100 to 200 mL/hr. This obligate low-volume post-filter hemofiltration is utilized at participating centers to minimize the risk of blood clotting in the machine's deaeration chamber. Ultrafiltration performed for achievement of net fluid balance or to compensate for the volume of administered anticoagulants (namely, citrate or unfractionated heparin) was not considered in the total dose. We used the patient's most recent measured body weight, as recorded in the chart or estimated by the study coordinator if no value was documented, to calculate prescribed RRT dose. 
Due to difficulties in achieving the target RRT dose as a result of high transmembrane pressures in some patients with higher body mass (particularly those randomized to CVVH), the protocol was modified in May 2010 (21 months after randomization of the first patient) such that the total fluid dose was capped at 4,000 mL/hr for both treatment arms, irrespective of the patient's weight. After this protocol change, twenty patients were enrolled in the trial; the 4,000 mL/hr dosing cap was invoked in six participants (five allocated to CVVHD and one to CVVH) who would have otherwise needed higher flows to achieve an actual total dose of 35 mL/kg/hr.\nAll study therapies were delivered by the Gambro Prismaflex™ RRT machine using the ST100 (surface area 1.0 m2) or ST150 (surface area 1.5 m2) filter sets, which contain a polyacrylonitrile AN69 membrane (Gambro, Richmond Hill, ON, Canada). We permitted the use of any commercially available dialysate and replacement solutions. Decisions regarding circuit anticoagulation (heparin, regional citrate anticoagulation, or no anticoagulation) and volume control were at the discretion of the attending physicians. Patients remained on study therapy until death, withdrawal of CRRT as part of withdrawal of life support, hemodynamic stability (SOFA-cardiovascular score < 2 for > 24 hrs) permitting stepdown to intermittent hemodialysis, or recovery of kidney function (defined as urine output > 500 mL in the preceding 12 hrs, and most recent serum potassium < 5.5 mmol/L and serum bicarbonate > 18 mmol/L).\n\nBODY.OUTCOMES:\nThe primary feasibility outcome of this study was the ability to administer > 75% of the prescribed CRRT dose to participants in each treatment arm. Secondary feasibility outcomes included the ability to enroll > 25% of fully eligible patients and the ability to follow > 95% of patients to 60 days following randomization (the anticipated follow-up period for the future definitive principal study). Secondary outcomes included change in SOFA score from baseline to days 1, 2 and 7, respectively, following randomization. Serial changes in SOFA scores have been shown to be correlated with clinical outcomes in critically ill patients with AKI who require RRT [11].\n\nBODY.DATA COLLECTION:\nTrained research coordinators collected baseline clinical and demographic data, and information on pre-existing medical conditions. Specific risk factors for AKI were ascertained, including recent procedures, nephrotoxins, and sepsis (defined using consensus guidelines [12]). SOFA score was calculated at the time of randomization and on each day of study therapy. The SOFA-Cardiovascular score was modified to include the receipt of vasopressin. Patients receiving RRT on a given day were assigned a SOFA-Renal score of 4, regardless of urine output or serum creatinine. Participants were followed until death or a maximum of 60 days from randomization, at which time vital status and the ongoing need for RRT among survivors were recorded.\n\nBODY.STATISTICAL ANALYSES:\nAs this was a feasibility trial with the primary objective of informing the design of a large-scale RCT, we planned to enroll a convenience sample of 75 participants from six sites. Patients who were randomized but never received RRT are described, but these individuals were replaced to ensure that at least 75 patients received some form of RRT. 
Since the primary feasibility outcome was based on the dose of CRRT received, patients for whom dose could not be readily calculated (those who received no RRT or forms of RRT other than CRRT) were excluded from the analysis related to feasibility. However, clinical outcomes are reported for all randomized participants.\nDescriptive statistics were used to characterize participants in either arm. Continuous variables are presented as means (SD) or medians (interquartile range, IQR) and two-group comparisons were performed with the t-test or Wilcoxon test, as appropriate. Two-group comparisons involving categorical variables were carried out with the chi square test. Analysis of covariance, adjusted for baseline SOFA score, was used to evaluate the change in SOFA score on days 1 and 2. Linear mixed models adjusted for baseline SOFA score and day of study therapy were used to evaluate the impact of RRT mode on SOFA score over the first week of therapy. For the fixed effect of treatment (that is, CVVH vs. CVVHD) 95% confidence intervals (CI) were obtained by profiling the log-likelihood function. All analyses were performed using R version 2.12.0 (R Development Core Team 2010, Vienna, Austria).\n\n\nBODY.RESULTS:\nWe screened 347 patients; 143 were eligible for participation and 79 individuals (55.2%) were enrolled over a 24-month time period. The inability to obtain consent from the patient or SDM was the reason for the non-enrollment of otherwise eligible patients. One patient was excluded shortly after enrollment after it was decided to pursue a non-continuous form of RRT. In total, 78 patients were randomized (39 to CVVH, and 39 to CVVHD). In one case, prior to the start of therapy it was recognized that a patient randomized to CVVHD was inappropriately enrolled as the indication for RRT was toxin removal rather than AKI per se. This patient was excluded from all further analyses. Clinical outcomes are reported in an intention-to-treat fashion for the remaining 77 patients (CVVH, 39; CVVHD, 38). Four patients randomized to CVVH were excluded from the feasibility analysis, two due to death prior to commencement of study RRT, and two due to receipt of continuous venovenous hemodiafiltration as the initial mode of therapy. The indication(s) for RRT was (were) oliguria, metabolic acidosis, hyperkalemia and uremia in 34, 17, 6 and 8 patients, respectively, in the CVVH arm. In the CVVHD arm, these indications guided the inclusion of 36, 15, 4 and 3 participants, respectively. In total, 73 participants commenced the therapy to which they were randomized (CVVH, 35; CVVHD, 38); these individuals contributed to the analysis relating to the feasibility of treatment delivery (Figure 1).\nFigure 1Flow of patients through the trial. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\nBODY.BASELINE CHARACTERISTICS (TABLE 1):\nThe mean age of participants was 61.5 (SD 14.2) years and 30/77 (39%) were women. The median time from ICU admission to randomization was 2 (IQR 2 to 8) days. Sepsis was present in 63/77 (82%) of participants, 73/77 (95%) were mechanically ventilated and 69/77 (90%) received vasopressors at the time of randomization. Median 24-hr urine output was 125 (IQR 50 to 250) mL. 
Approximately two-thirds of participants received some form of RRT prior to randomization, with 23/39 in the CVVH group (59%) for a mean of 10.6 (SD 12.0) hrs, and 30/38 in the CVVHD group (79%) for a mean of 13.3 (SD 11.7) hrs. (Table 1\nTable 1Baseline characteristics\n\nCVVH (n = 39)\nCVVHD (n = 38)\nAge, years\n58.8 ± 13.7\n64.3 ± 14.3\nFemale\n14 (36%)\n16 (42%)\nWeight, kg\n86.6 ± 31.5\n89.2 ± 26.4\nDays from ICU admission to randomization\n2 (1, 4)\n2 (1, 3)\nSOFA score\n16.3 ± 3.3\n15.5 ± 3.0\nPremorbid conditions\nHypertension\n19 (49%)\n26 (68%)\nDiabetes mellitus\n9 (23%)\n9 (24%)\nChronic kidney disease\n5 (13%)\n13 (34%)\nAcute kidney injury risk factors\nSepsis\n31 (79%)\n32 (84%)\nCardiopulmonary bypass in past 7 days\n1 (3%)\n4 (11%)\nIV contrast in past 7 days\n9 (23%)\n6 (16%)\nPhysiologic parameters\nUrine output, mL/24 hr\n125 (60, 245)\n135 (46, 251)\nMinimum systolic BP during 24 hr before enrolment, mmHg\n94.2 ± 16.5\n86.1 ± 13.5\nMinimum diastolic BP during 24 hr before enrolment, mmHg\n48.9 ± 8.6\n47.2 ± 9.4\nLaboratory parameters\nSerum creatinine, µmol/L\n276 (194, 352)\n246 (155, 325)\nUrea, mmol/L\n25.8 ± 38.1\n17.6 ± 12.7\nPotassium, mmol/L\n4.3 ± 0.7\n4.1 ± 0.6\nBicarbonate, mmol/L\n20.5 ± 5.9\n20.2 ± 5.3\nHemoglobin, g/L\n86.1 ± 16.3\n90.0 ± 15.2\nPlatelets, × 109/L\n109.6 ± 82.6\n127.2 ± 80.9\nWBC, × 109/L\n20.2 ± 43.3\n16.9 ± 10.6\nICU interventions\nDuration of RRT prior to randomization, hours\n8 (0, 19)\n13 (1.0, 20.5)\nMechanical ventilation\n37 (95%)\n36 (95%)\nVasopressors\n34 (87%)\n35 (92%)\nTotal parenteral nutrition\n1 (3%)\n2 (5%)\nContinuous data are displayed as mean ± SD or medians (interquartile range), as appropriate. Categorical variables are displayed as n (%). CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis; BP, blood pressure; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; WBC, White blood cell.\n\nBODY.FEATURES OF STUDY TREATMENTS (TABLE 2):\nAmong the 35 participants who started CVVH, RRT was prescribed for a median of 107 (IQR 55 to 146) hrs and delivered for 85 (IQR 43 to 128) hrs. Overall, 84.7% (95% CI 79.1, 90.3) of the prescribed duration was delivered. 
The mean weight-standardized dose was 33.6 (SD 7.4) mL/kg/hr while CRRT was ongoing.(Table 2\nTable 2Feasibility and safety data\nCVVH(n = 35)CVVHD(n = 38)P-valueDuration RRT prescribed, hrs\n146 ± 240\n145 ± 156\n0.88\nDuration RRT received, hrs\n130 ± 222\n128 ± 142\n0.87\nMean pre-filter replacement solution flow, mL/hr\n1533 ± 442\n0\nn/a\nMean post-filter replacement solution flow, mL/hr\n1440 ± 488\n180 ± 140\nn/a\nMean dialysate flow, mL/hr\n0\n2871 ± 872\nn/a\nMean RRT dose, mL/kg/hr\n33.6 ± 7.4\n34.7 ± 4.4\n0.50\nPrescribed dose delivered, %\n84.7 ± 16.3\n87.8 ± 13.7\n0.73\nNet ultrafiltration, L/day\n1.7 ± 2.2\n0.8 ± 4.1\n0.98\nDays on study therapy\n5 (3-7)\n4.50 (3.00-10.25)\n0.79\nPrimary reason for CRRT withdrawal\n\n\n0.75\nDeath while on CRRT\n12 (35%)\n10 (27%)\n\nKidney function recovery\n7 (21%)\n7 (19%)\n\nTransfer to intermittent hemodialysis\n12 (35%)\n14 (38%)\n\n\n3 (9%)\n6 (16%)\n\nCatheter changes/day of therapy\n0.09 ± 0.2\n0.1 ± 0.2\n0.92\nUnits RBCs transfused/day of therapy\n0.3 ± 0.3\n0.4 ± 0.7\n0.38\nUnscheduled circuit changes/day of therapy\n0.2 ± 0.3\n0.2 ± 0.2\n0.36\nStudy days with receipt of norepinephrine, %\n61.8 ± 36.2\n65.7 ± 7.5\n0.41\nStudy days with receipt of vasopressin, %\n35.4 ± 35.9\n43.0 ± 43.6\n0.51\nContinuous data are displayed as mean ± standard deviation or median (interquartile range), as appropriate. Categorical variables are displayed as n (%); n/a, not applicable. Data are restricted to patients who initiated therapy with the CRRT mode to which they had been allocated. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; RBCs, red blood cells; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration.\nCVVHD was prescribed in 38 participants for a median of 92 (IQR 57 to 145) hrs and delivered for 76 (IQR 44 to 148) hrs. Over the course of the trial, 87.8% (95% CI 83.3, 92.3) of the prescribed duration was delivered to patients randomized to receive CVVHD. The mean weight-standardized delivered RRT dose was 34.7 mL/kg/hr (SD 4.4).\n\nBODY.CLINICAL OUTCOMES:\nAll subjects were followed to 60 days, by which point 22/39 (56%) and 21/38 (55%) of participants assigned to CVVH and CVVHD, respectively, had died. Among surviving patients, 4/17 (24%) and 3/17 (19%) of those initially assigned to CVVH and CVVHD respectively, were still dependent on RRT.\nAfter adjustment for baseline SOFA score, we found a non-significant decline in SOFA score among participants treated with CVVH compared to CVVHD on the first day (-0.4, 95% CI -1.3, 0.6) and second day (-0.4, 95% CI -1.6, 0.8) following randomization. Over the first week of therapy, the adjusted change in the SOFA score among participants treated with CVVH compared to CVVHD was -0.8 (95% CI -2.1, 0.5). The observed reduction appeared to be driven by a reduction in the cardiovascular component of the SOFA score (Figure 2).\nFigure 2Total and Cardiovascular (CV) Sequential Organ Failure Assessment (SOFA) scores over the first seven days following randomization. Total SOFA scores are denoted with squares; the cardiovascular component of the SOFA score is denoted with triangles.CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\n\nBODY.PROTOCOL VIOLATIONS AND ADVERSE EVENTS:\nWe identified 15 protocol violations involving 15 participants. 
In one individual who started therapy with CVVH, achievement of the target dose was not feasible and RRT was supplemented with the addition of continuous dialysis; ongoing difficulties with maintaining adequate small molecule clearance using CRRT resulted in a change to sustained low efficiency dialysis. In five cases (one patient on CVVH, and four on CVVHD), stepdown from CRRT to IHD occurred before the participant met the study-defined criteria for hemodynamic stability. In five cases (two patients on CVVH, and three on CVVHD), RRT was discontinued altogether in participants who did not meet study criteria for renal recovery.\nNo adverse events were attributed to the study intervention in either treatment arm.\n\n\nBODY.DISCUSSION:\nWe completed a multi-center, concealed-allocation, randomized trial comparing hemofiltration and hemodialysis in critically ill patients with AKI. This pilot trial achieved its primary objective of confirming the feasibility of performing a large-scale trial evaluating RRT clearance mode in AKI. There was also a non-significant trend towards reduced organ dysfunction, driven by decreased vasopressor requirements, early after the initiation of RRT in patients who received hemofiltration. These feasibility data and the importance of the clinical question justify the conduct of a large trial with adequate power to evaluate the primary outcome of 60-day mortality. With a power of 0.80, type I error of 0.05, and a mortality reduction of 10% in patients treated with CVVH (estimated 60-day mortality in the CVVHD arm 55%), we estimate that such a trial would require the enrolment of nearly 400 patients per arm. A more conservative mortality reduction of 7.5% would require the enrolment of 700 patients per arm.\nLittle is known about the optimal mode of clearance in renal replacement for AKI. Although both convection and diffusion remove small molecules with equal efficiency, convection may remove larger molecules that are not cleared by diffusive mechanisms [13]; previous studies have examined effects on inflammatory markers and have been too small to reliably determine effects on clinical outcomes. In addition, the profile of larger molecules removed by convection is relatively non-specific and may also include molecules that dampen inflammation or crucial medications such as antibiotics [14]. A randomized crossover study of 13 patients with AKI and the systemic inflammatory response syndrome found that CVVH for 24 hrs reduced plasma concentrations of TNFα and cleared more IL-6, compared to CVVHD. However, there was no effect on plasma concentrations of IL-6, IL-10, SL-selectin, or endotoxin [15]. Morgera and coworkers randomized 24 patients with sepsis-associated AKI to treatment with CVVH or CVVHD using a high cutoff membrane permeable to molecules up to 60 kilodaltons in size. Plasma concentration and clearance for IL-6 did not differ, but clearance of IL-1 receptor antagonist, an anti-inflammatory mediator, was enhanced by CVVH [16]. Of note, protein losses were higher in patients who received CVVH. In a prospective crossover study involving 15 patients with AKI who sequentially received CVVH and CVVHD, β2-microglobulin clearance was non-significantly higher among CVVH recipients (P = 0.055) [17]. Among trials focusing on clinical outcomes, a single-centre RCT (n = 20) did not demonstrate a difference in survival, renal recovery or ICU stay in patients treated with CVVH vs. CVVHD administered at fixed doses of 1.7 to 2.0 L/hr [18]. 
Similarly, a recent unpublished 65-patient RCT of CVVH vs continuous venovenous hemodiafiltration (both at 40 mL/kg/hr) did not demonstrate a survival difference at 28 days [19].\nWhile interventional trials involving devices and processes of care that are susceptible to large variations in practice are challenging, we achieved our feasibility objectives. Specifically, we were able to recruit the majority of eligible subjects, implement the protocolized therapy for > 85% of the prescribed time and ascertain vital status at 60 days for all participants. When accounting for actual time on therapy, the delivered dose exceeded 80% of that prescribed in both treatment arms, thereby surpassing our feasibility threshold of 75%. Accordingly, we believe our study strongly supports the feasibility of a large definitive randomized trial comparing hemofiltration and hemodialysis in critically ill patients with severe AKI.\nThis is the largest published trial to date to study the mode of solute clearance in AKI. Given the challenges of recruiting participants and implementing interventions in a population with a high burden of illness, the success of our pilot was a necessary precursor to a principal trial that examines patient-relevant clinical outcomes. Our eligibility criteria were pragmatic and assured the inclusion of individuals in North America who typically receive CRRT. A minority of individuals for whom no SDM could be found were enroled with deferred consent, thereby limiting exclusion of potentially eligible patients and mitigating selection bias. Of interest, no subject enroled by deferred consent had an SDM who subsequently withdrew consent or withdrew consent personally after regaining capacity. Finally, other than the clearance mode, all other aspects of RRT including stepdown to intermittent hemodialysis and withdrawal of RRT were performed in a manner consistent with usual practice.\nOur study has several limitations. As this was an unblinded trial, we cannot exclude the effect of co-interventions in either treatment arm. However, the nature of our intervention made blinding impractical, and we ensured that the two groups received equivalent RRT doses. There is also no definite intervention related to RRT prescription that has been shown to modify outcomes in critically ill patients with AKI. In addition, we cannot exclude the possibility that patients who were eligible but not randomized were systematically different than trial participants. Our protocol specified a target clearance of 35 mL/kg/hr, which was generally achieved in both arms. The decision to use this dose was guided by the fact that our trial was designed when higher dose CRRT was felt to be potentially beneficial based on data from two trials [20,21]. This pre-dated more recent trials and meta-analyses demonstrating no advantage of higher dose CRRT over doses of 20 to 25 mL/kg/hr, which would likely be the standard in a future trial comparing convection and diffusion [4,5,22,23]. However, since high dose therapy appears safe and may positively modify the effect of convective clearance [20], our higher dose target was reasonable. It should be noted that while 35 ml/kg/hr was prescribed to all participants, actual solute clearance may have been significantly lower due to changes in filter permeability [24,25]. On the other hand, when calculating the prescribed dose, we did not consider clearance associated with net ultrafiltration or removal of volume associated with the administration of anticoagulants. 
This approach would tend to underestimate the actual solute clearance that was delivered. Importantly, any deviation in dose from our 35 mL/kg/hr target would affect participants in both intervention arms to a similar extent. The Prismaflex CRRT system requires that all patients (even those predominantly getting hemodialysis) receive a small amount of post-filter hemofiltration (up to 200 ml/hr) to prevent clotting in the machine's deaeration chamber. In addition, CVVHD recipients would have had other unavoidable sources of convective clearance, specifically for the achievement of net fluid removal, and for the isovolemic removal of volume associated with the administration of the anticoagulant (citrate or heparin). Thus, CVVHD recipients did not receive purely diffusive solute clearance. We nonetheless estimate that the typical patient enrolled in the CVVHD arm still received > 80% of therapy in the form of diffusive clearance, which reassures us that this trial truly compared two different modes of solute clearance. Our trial was conducted using the AN69 polyacryonitrile filter, which has unique adsorptive characteristics [14]. While this is a widely used membrane in the administration of CRRT, the adsorptive characteristics of filters may differ and we cannot generalize our findings to settings in which other filters are utilized. Moreover, we did not collect data to evaluate the relative effect of CVVH or CVVHD on the removal of molecules of varying size. This trial was conducted using continuous RRT. Although the question of hemodialysis vs. hemofiltration is applicable to patients stable enough to receive intermittent RRT, machines to deliver intermittent hemofiltration were not widely available in North America when the trial was conducted. Among trial participants who became more hemodynamically stable, intermittent hemodialysis was utilized even when the initial CRRT mode was hemofiltration, thereby potentially diluting the benefits associated with CVVH. Finally, while we observed a trend towards improved organ failure scores, this finding must be interpreted with caution given the small sample size of our trial. Moreover, disease severity scores such as the SOFA score are surrogate markers that cannot supplant hard clinical endpoints.\n\nBODY.CONCLUSIONS:\nOur findings clearly support the feasibility of performing a definitive trial comparing CVVH and CVVHD in critically ill patients with AKI. The early non-significant trend towards reduced vasopressor requirements provides preliminary support to the concept that convective modes of clearance reduce inflammation and thus benefit critically ill patients. Given the high mortality associated with AKI, the lack of specific RRT interventions shown to reduce mortality, and current practice variation, the results of our pilot trial provide justification for a larger trial of hemofiltration vs. 
hemodialysis that will be adequately powered to evaluate meaningful clinical outcomes.\n\nBODY.KEY MESSAGES:\n• A randomized controlled trial of hemofiltration vs hemodialysis is feasible.\n• Hemofiltration may be associated with decreased vasopressor requirements over the first week of therapy.\n• A well-designed and adequately powered trial of hemofiltration vs hemodialysis would address an important area of uncertainty in the management of patients with AKI.\n\nBODY.ABBREVIATIONS:\nAKI: acute kidney injury; CRRT: continuous renal replacement therapy; CVVH: continuous venovenous hemofiltration; CVVHD: continuous venovenous hemodialysis; IL: interleukin; IQR: interquartile range; RF: replacement fluid; RCT: randomized controlled trial; RRT: renal replacement therapy; SDM: substitute decision maker; SOFA: Sequential Organ Failure Assessment; TNF α: tumor necrosis factor alpha.\n\nBODY.COMPETING INTERESTS:\nDr. Bagshaw has served on a Gambro Experts Panel. No other relevant competing interests are reported.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRW conceived and designed the study, obtained funding, supervised study conduct, interpreted the data, drafted the manuscript, and provided critical review of the manuscript. JOF conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. SMB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. KEB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AXG interpreted the study data, supervised study conduct, and provided critical review of the manuscript. MAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. SL interpreted the study data, supervised study conduct and provided critical review of the manuscript. DK interpreted the study data, supervised study conduct, and provided critical review of the manuscript. RMR interpreted the study data, supervised study conduct, and provided critical review of the manuscript. NIP interpreted the study data and provided critical review of the manuscript. KP participated in data analysis and interpretation, designed data collection tools, and participated in the critical review of the manuscript. KT participated in study design, conducted the data analysis, participated in data interpretation, and in the critical review of the manuscript. NKJA conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. All authors approved the final version.\n\nBODY.SUPPLEMENTARY MATERIAL:\nBODY.ADDITIONAL FILE 1:\nModified Sequential Organ Failure Assessment (SOFA) score. SOFA score modified from the original version (see reference [9]) for application in the OMAKI trial.\nClick here for file\n\n**Question:** Compared to continuous venovenous hemodialysis [CVVHD] what was the result of continuous venovenous hemofiltration [CVVH] on Mean RRT dose?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
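The "no significant difference" labels for the two Table 2 comparisons queried here (Mean RRT dose above, and Prescribed dose delivered, % below) can be sanity-checked from the summary statistics the article reports. The sketch below is a minimal illustration rather than the trial's own analysis: it assumes the Table 2 group sizes (CVVH n = 35, CVVHD n = 38) and applies Welch's t-test via SciPy's `ttest_ind_from_stats`; because the authors used a t-test or Wilcoxon test "as appropriate", the exact published P-values (0.50 and 0.73) are not expected to be reproduced, only the qualitative conclusion.

```python
# Sanity check of two Table 2 comparisons from summary statistics alone.
# Group sizes (CVVH n = 35, CVVHD n = 38) are taken from Table 2; Welch's
# t-test is an assumption here, so P-values differ from the published ones.
from scipy.stats import ttest_ind_from_stats

# Mean RRT dose, mL/kg/hr: CVVH 33.6 +/- 7.4 vs CVVHD 34.7 +/- 4.4 (paper: P = 0.50)
t_dose, p_dose = ttest_ind_from_stats(
    mean1=33.6, std1=7.4, nobs1=35,
    mean2=34.7, std2=4.4, nobs2=38,
    equal_var=False,  # Welch's t-test (unequal variances)
)

# Prescribed dose delivered, %: CVVH 84.7 +/- 16.3 vs CVVHD 87.8 +/- 13.7 (paper: P = 0.73)
t_del, p_del = ttest_ind_from_stats(
    mean1=84.7, std1=16.3, nobs1=35,
    mean2=87.8, std2=13.7, nobs2=38,
    equal_var=False,
)

# Both P-values come out comfortably above 0.05, consistent with the
# "no significant difference" labels for these two outcomes.
print(f"Mean RRT dose: t = {t_dose:.2f}, p = {p_dose:.2f}")
print(f"Prescribed dose delivered: t = {t_del:.2f}, p = {p_del:.2f}")
```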
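The adjusted CVVH dose formula quoted in the article's Methods (Dose = post-filter RF rate + pre-filter RF rate × blood flow / (blood flow + pre-filter RF rate)) is easier to follow with concrete numbers. The sketch below is illustrative only; the flow rates and the 80 kg body weight are hypothetical values chosen near the Table 2 means, not data from the trial.

```python
def adjusted_cvvh_dose(post_filter_rf, pre_filter_rf, blood_flow):
    """Effective CVVH clearance (mL/hr) per the formula quoted in the Methods.

    Pre-filter replacement fluid dilutes the blood before it reaches the
    membrane, so its contribution to clearance is scaled down by
    blood_flow / (blood_flow + pre_filter_rf). All arguments are in mL/hr.
    """
    return post_filter_rf + pre_filter_rf * blood_flow / (blood_flow + pre_filter_rf)

# Hypothetical prescription: 1,500 mL/hr pre- and post-filter replacement fluid,
# blood flow 150 mL/min (= 9,000 mL/hr), patient weight 80 kg.
dose_ml_hr = adjusted_cvvh_dose(post_filter_rf=1500, pre_filter_rf=1500, blood_flow=9000)
print(dose_ml_hr / 80)  # ~34.8 mL/kg/hr, i.e. close to the 35 mL/kg/hr target
```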
13846
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nOptimal Mode of clearance in critically ill patients with Acute Kidney Injury (OMAKI) - a pilot randomized controlled trial of hemofiltration versus hemodialysis: a Canadian Critical Care Trials Group project\n\n\n\nABSTRACT.INTRODUCTION:\nAmong critically ill patients with acute kidney injury (AKI) needing continuous renal replacement therapy (CRRT), the effect of convective (via continuous venovenous hemofiltration [CVVH]) versus diffusive (via continuous venovenous hemodialysis [CVVHD]) solute clearance on clinical outcomes is unclear. Our objective was to evaluate the feasibility of comparing these two modes in a randomized trial.\n\nABSTRACT.METHODS:\nThis was a multicenter open-label parallel-group pilot randomized trial of CVVH versus CVVHD. Using concealed allocation, we randomized critically ill adults with AKI and hemodynamic instability to CVVH or CVVHD, with a prescribed small solute clearance of 35 mL/kg/hour in both arms. The primary outcome was trial feasibility, defined by randomization of >25% of eligible patients, delivery of >75% of the prescribed CRRT dose, and follow-up of >95% of patients to 60 days. A secondary analysis using a mixed-effects model examined the impact of therapy on illness severity, defined by sequential organ failure assessment (SOFA) score, over the first week.\n\nABSTRACT.RESULTS:\nWe randomized 78 patients (mean age 61.5 years; 39% women; 23% with chronic kidney disease; 82% with sepsis). Baseline SOFA scores (mean 15.9, SD 3.2) were similar between groups. We recruited 55% of eligible patients, delivered >80% of the prescribed dose in each arm, and achieved 100% follow-up. SOFA tended to decline more over the first week in CVVH recipients (-0.8, 95% CI -2.1, +0.5) driven by a reduction in vasopressor requirements. Mortality (54% CVVH; 55% CVVHD) and dialysis dependence in survivors (24% CVVH; 19% CVVHD) at 60 days were similar.\n\nABSTRACT.CONCLUSIONS:\nOur results suggest that a large trial comparing CVVH to CVVHD would be feasible. There is a trend toward improved vasopressor requirements among CVVH-treated patients over the first week of treatment.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT00675818\n\nBODY.INTRODUCTION:\nAcute kidney injury (AKI) is a common complication of critical illness, afflicting up to two-thirds of patients admitted to the ICU [1]. A significant minority of patients with AKI requires renal replacement therapy (RRT), and these individuals have high short-term mortality that ranges from 50 to 70% [2]. In an attempt to mitigate these poor outcomes, various components of the RRT prescription have been rigorously examined in large well-designed randomized controlled trials (RCTs) [3-5].\nThe optimal mode of clearance in patients with AKI who require renal support is an area of considerable controversy resulting in significant practice variation [6]. 
Hemofiltration, whereby solutes are removed by convection, facilitates the removal of both low and higher molecular weight solutes, depending on the pore size of the membrane [7]. Hemodialysis, in which solute removal occurs via diffusion out of the bloodstream into the dialysate down a concentration gradient, removes low molecular weight molecules but provides limited clearance of higher molecular weight substances. When filter characteristics are kept constant, hemofiltration, which more faithfully mimics glomerular filtration, should result in the clearance of larger-sized solutes as compared to hemodialysis [7]. The removal of such solutes, which may include toxic mediators of sepsis and inflammation, provides the theoretical underpinnings for the superiority of hemofiltration as a renal support mode for critically ill patients with AKI.\nWe conducted a multicenter pilot RCT of hemofiltration vs. hemodialysis in critically ill patients with AKI to determine whether a larger definitive trial based on clinically important endpoints would be feasible. In a secondary analysis, we evaluated whether hemofiltration improved global organ dysfunction.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN:\nWe conducted an unblinded RCT of continuous venovenous hemofiltration (CVVH) vs. continuous venovenous hemodialysis (CVVHD) with concealed allocation (http://Clinicaltrials.gov registration number NCT00675818). Our reporting follows the updated CONSORT statement [8].\n\nBODY.SETTING:\nParticipants were recruited from ICUs at six academic hospitals: Mt. Sinai Hospital (Medical-Surgical ICU), Sunnybrook Health Sciences Centre (Critical Care Unit and Cardiovascular ICU) and St. Michael's Hospital (Medical-Surgical and Cardiovascular ICUs), all in Toronto, Canada; Victoria Hospital (Critical Care Trauma Centre), and University Hospital (Medical/Surgical Intensive Care Unit and Cardiac Surgery Recovery Unit), both in London, Canada, and University of Alberta Hospital (General Systems Intensive Care Unit) in Edmonton, Canada. The Research Ethics Boards of Mt. Sinai Hospital, St. Michael's Hospital, Sunnybrook Health Sciences Centre, London Health Sciences Centre and the University of Alberta approved the protocol. The Applied Health Research Centre at St. Michael's Hospital (Toronto, Ontario, Canada) was the trial coordinating center.\n\nBODY.POPULATION:\nWe enrolled critically ill adults (≥ 16 years of age) with AKI, defined as a serum creatinine increase ≥ 50% from baseline (defined as the last known pre-morbid serum creatinine or earliest value available from the current admission). At the time of screening, at least one of the following indications for RRT initiation needed to be present: (i) oliguria (defined as urine output < 100 mL in the preceding 4 hours); (ii) metabolic acidosis (serum bicarbonate < 15 mmol/L and pH < 7.25); (iii) refractory hyperkalemia (serum potassium > 6 mmol/L despite medical efforts at potassium removal); (iv) serum urea > 50 mmol/L, or (v) suspected uremic organ involvement (pericarditis, encephalopathy, neuropathy or myopathy). Finally, participants needed to be hemodynamically unstable, defined as Sequential Organ Failure Assessment (SOFA)- Cardiovascular score ≥ 1 on the day of screening (see Additional file 1 for the modified SOFA score used in this study). This required the patient to have mean arterial pressure < 70 mmHg or receipt of at least one vasopressor or inotrope [9]. 
Patients were excluded if any one of the following was present: receipt of any RRT within the previous 2 months; presence of an obstructive etiology for AKI; receipt of a kidney transplant within the preceding year; diagnosis of rapidly progressive glomerulonephritis, vasculitis, or acute interstitial nephritis; a clinical indication for intermittent hemodialysis (for example, the presence of a dialyzable toxin); terminal illness with an associated life expectancy less than 2 months; moribund status (life expectancy < 48 hours as per judgment of physicians involved in the patient's care); prior enrolment in this study; enrolment in a competing ICU interventional study; non-availability of a CRRT machine, or administration of RRT for > 36 hours prior to eligibility assessment.\n\nBODY.PATIENT ALLOCATION:\nAfter eligibility was confirmed, we attempted to obtain consent from the patient or if the patient lacked capacity to consent, his/her substitute decision maker (SDM) was approached. A deferred consent option was approved at three sites, which allowed patient enrollment and randomization in the event of patient incapacity and the inability to locate an SDM. Using this mechanism, patients were randomized, and research personnel attempted to locate an SDM every 72 hours to affirm consent for participation. In all cases, when participants regained capacity, they were asked to provide consent if they were initially enrolled using a priori SDM consent or deferred consent; no participant withdrew consent once regaining capacity. Patients were allocated to a study group using sealed, opaque, sequentially numbered envelopes (prepared by the coordinating center) that were opened after consent was obtained [10]. Randomization was stratified by center in random blocks of four, six or eight. The Research Ethics Board at each center approved the study. An independent data and safety monitoring board tracked the trial's conduct.\n\nBODY.STUDY INTERVENTION:\nParticipants randomized to CVVH were prescribed ultrafiltration with isovolemic replacement solution (evenly split between pre- and post-filter) to achieve a target clearance of 35 mL/kg body weight/hr. The prescribed hourly ultrafiltration rate was increased above 35 mL/kg/hr to compensate for the reduced efficiency of clearance related to the pre-filter component of the replacement solution volume administered. This adjusted CVVH dose was calculated from the post-filter replacement fluid (RF), pre-filter replacement fluid and blood (blood) flow rates as follows:\nDose = Postfilter RF rate + ((Prefilter RF rate × (Blood flow/(Blood flow + Prefilter RF rate))).\nIn the CVVHD arm, the dialysate flow was set to achieve a clearance of 35 mL/kg/hr, which included a post-filter hemofiltration flow of 100 to 200 mL/hr. This obligate low-volume post-filter hemofiltration is utilized at participating centers to minimize the risk of blood clotting in the machine's deaeration chamber. Ultrafiltration performed for achievement of net fluid balance or to compensate for the volume of administered anticoagulants (namely, citrate or unfractionated heparin) was not considered in the total dose. We used the patient's most recent measured body weight, as recorded in the chart or estimated by the study coordinator if no value was documented, to calculate prescribed RRT dose. 
Due to difficulties in achieving the target RRT dose as a result of high transmembrane pressures in some patients with higher body mass (particularly those randomized to CVVH), the protocol was modified in May 2010 (21 months after randomization of the first patient) such that the total fluid dose was capped at 4,000 mL/hr for both treatment arms, irrespective of the patient's weight. After this protocol change, twenty patients were enrolled in the trial; the 4,000 mL/hr dosing cap was invoked in six participants (five allocated to CVVHD and one to CVVH) who would have otherwise needed higher flows to achieve an actual total dose of 35 mL/kg/hr.\nAll study therapies were delivered by the Gambro Prismaflex™ RRT machine using the ST100 (surface area 1.0 m2) or ST150 (surface area 1.5 m2) filter sets, which contain a polyacrylonitrile AN69 membrane (Gambro, Richmond Hill, ON, Canada). We permitted the use of any commercially available dialysate and replacement solutions. Decisions regarding circuit anticoagulation (heparin, regional citrate anticoagulation, or no anticoagulation) and volume control were at the discretion of the attending physicians. Patients remained on study therapy until death, withdrawal of CRRT as part of withdrawal of life support, hemodynamic stability (SOFA-cardiovascular score < 2 for > 24 hrs) permitting stepdown to intermittent hemodialysis, or recovery of kidney function (defined as urine output > 500 mL in the preceding 12 hrs, and most recent serum potassium < 5.5 mmol/L and serum bicarbonate > 18 mmol/L).\n\nBODY.OUTCOMES:\nThe primary feasibility outcome of this study was the ability to administer > 75% of the prescribed CRRT dose to participants in each treatment arm. Secondary feasibility outcomes included the ability to enroll > 25% of fully eligible patients and the ability to follow > 95% of patients to 60 days following randomization (the anticipated follow-up period for the future definitive principal study). Secondary outcomes included change in SOFA score from baseline to days 1, 2 and 7, respectively, following randomization. Serial changes in SOFA scores have been shown to be correlated with clinical outcomes in critically ill patients with AKI who require RRT [11].\n\nBODY.DATA COLLECTION:\nTrained research coordinators collected baseline clinical and demographic data, and information on pre-existing medical conditions. Specific risk factors for AKI were ascertained, including recent procedures, nephrotoxins, and sepsis (defined using consensus guidelines [12]). SOFA score was calculated at the time of randomization and on each day of study therapy. The SOFA-Cardiovascular score was modified to include the receipt of vasopressin. Patients receiving RRT on a given day were assigned a SOFA-Renal score of 4, regardless of urine output or serum creatinine. Participants were followed until death or a maximum of 60 days from randomization, at which time vital status and the ongoing need for RRT among survivors were recorded.\n\nBODY.STATISTICAL ANALYSES:\nAs this was a feasibility trial with the primary objective of informing the design of a large-scale RCT, we planned to enroll a convenience sample of 75 participants from six sites. Patients who were randomized but never received RRT are described, but these individuals were replaced to ensure that at least 75 patients received some form of RRT. 
Since the primary feasibility outcome was based on the dose of CRRT received, patients for whom dose could not be readily calculated (those who received no RRT or forms of RRT other than CRRT) were excluded from the analysis related to feasibility. However, clinical outcomes are reported for all randomized participants.\nDescriptive statistics were used to characterize participants in either arm. Continuous variables are presented as means (SD) or medians (interquartile range, IQR) and two-group comparisons were performed with the t-test or Wilcoxon test, as appropriate. Two-group comparisons involving categorical variables were carried out with the chi square test. Analysis of covariance, adjusted for baseline SOFA score, was used to evaluate the change in SOFA score on days 1 and 2. Linear mixed models adjusted for baseline SOFA score and day of study therapy were used to evaluate the impact of RRT mode on SOFA score over the first week of therapy. For the fixed effect of treatment (that is, CVVH vs. CVVHD) 95% confidence intervals (CI) were obtained by profiling the log-likelihood function. All analyses were performed using R version 2.12.0 (R Development Core Team 2010, Vienna, Austria).\n\n\nBODY.RESULTS:\nWe screened 347 patients; 143 were eligible for participation and 79 individuals (55.2%) were enrolled over a 24-month time period. The inability to obtain consent from the patient or SDM was the reason for the non-enrollment of otherwise eligible patients. One patient was excluded shortly after enrollment after it was decided to pursue a non-continuous form of RRT. In total, 78 patients were randomized (39 to CVVH, and 39 to CVVHD). In one case, prior to the start of therapy it was recognized that a patient randomized to CVVHD was inappropriately enrolled as the indication for RRT was toxin removal rather than AKI per se. This patient was excluded from all further analyses. Clinical outcomes are reported in an intention-to-treat fashion for the remaining 77 patients (CVVH, 39; CVVHD, 38). Four patients randomized to CVVH were excluded from the feasibility analysis, two due to death prior to commencement of study RRT, and two due to receipt of continuous venovenous hemodiafiltration as the initial mode of therapy. The indication(s) for RRT was (were) oliguria, metabolic acidosis, hyperkalemia and uremia in 34, 17, 6 and 8 patients, respectively, in the CVVH arm. In the CVVHD arm, these indications guided the inclusion of 36, 15, 4 and 3 participants, respectively. In total, 73 participants commenced the therapy to which they were randomized (CVVH, 35; CVVHD, 38); these individuals contributed to the analysis relating to the feasibility of treatment delivery (Figure 1).\nFigure 1Flow of patients through the trial. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\nBODY.BASELINE CHARACTERISTICS (TABLE 1):\nThe mean age of participants was 61.5 (SD 14.2) years and 30/77 (39%) were women. The median time from ICU admission to randomization was 2 (IQR 2 to 8) days. Sepsis was present in 63/77 (82%) of participants, 73/77 (95%) were mechanically ventilated and 69/77 (90%) received vasopressors at the time of randomization. Median 24-hr urine output was 125 (IQR 50 to 250) mL. 
Approximately two-thirds of participants received some form of RRT prior to randomization, with 23/39 in the CVVH group (59%) for a mean of 10.6 (SD 12.0) hrs, and 30/38 in the CVVHD group (79%) for a mean of 13.3 (SD 11.7) hrs. (Table 1\nTable 1Baseline characteristics\n\nCVVH (n = 39)\nCVVHD (n = 38)\nAge, years\n58.8 ± 13.7\n64.3 ± 14.3\nFemale\n14 (36%)\n16 (42%)\nWeight, kg\n86.6 ± 31.5\n89.2 ± 26.4\nDays from ICU admission to randomization\n2 (1, 4)\n2 (1, 3)\nSOFA score\n16.3 ± 3.3\n15.5 ± 3.0\nPremorbid conditions\nHypertension\n19 (49%)\n26 (68%)\nDiabetes mellitus\n9 (23%)\n9 (24%)\nChronic kidney disease\n5 (13%)\n13 (34%)\nAcute kidney injury risk factors\nSepsis\n31 (79%)\n32 (84%)\nCardiopulmonary bypass in past 7 days\n1 (3%)\n4 (11%)\nIV contrast in past 7 days\n9 (23%)\n6 (16%)\nPhysiologic parameters\nUrine output, mL/24 hr\n125 (60, 245)\n135 (46, 251)\nMinimum systolic BP during 24 hr before enrolment, mmHg\n94.2 ± 16.5\n86.1 ± 13.5\nMinimum diastolic BP during 24 hr before enrolment, mmHg\n48.9 ± 8.6\n47.2 ± 9.4\nLaboratory parameters\nSerum creatinine, µmol/L\n276 (194, 352)\n246 (155, 325)\nUrea, mmol/L\n25.8 ± 38.1\n17.6 ± 12.7\nPotassium, mmol/L\n4.3 ± 0.7\n4.1 ± 0.6\nBicarbonate, mmol/L\n20.5 ± 5.9\n20.2 ± 5.3\nHemoglobin, g/L\n86.1 ± 16.3\n90.0 ± 15.2\nPlatelets, × 109/L\n109.6 ± 82.6\n127.2 ± 80.9\nWBC, × 109/L\n20.2 ± 43.3\n16.9 ± 10.6\nICU interventions\nDuration of RRT prior to randomization, hours\n8 (0, 19)\n13 (1.0, 20.5)\nMechanical ventilation\n37 (95%)\n36 (95%)\nVasopressors\n34 (87%)\n35 (92%)\nTotal parenteral nutrition\n1 (3%)\n2 (5%)\nContinuous data are displayed as mean ± SD or medians (interquartile range), as appropriate. Categorical variables are displayed as n (%). CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis; BP, blood pressure; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; WBC, White blood cell.\n\nBODY.FEATURES OF STUDY TREATMENTS (TABLE 2):\nAmong the 35 participants who started CVVH, RRT was prescribed for a median of 107 (IQR 55 to 146) hrs and delivered for 85 (IQR 43 to 128) hrs. Overall, 84.7% (95% CI 79.1, 90.3) of the prescribed duration was delivered. 
The mean weight-standardized dose was 33.6 (SD 7.4) mL/kg/hr while CRRT was ongoing.(Table 2\nTable 2Feasibility and safety data\nCVVH(n = 35)CVVHD(n = 38)P-valueDuration RRT prescribed, hrs\n146 ± 240\n145 ± 156\n0.88\nDuration RRT received, hrs\n130 ± 222\n128 ± 142\n0.87\nMean pre-filter replacement solution flow, mL/hr\n1533 ± 442\n0\nn/a\nMean post-filter replacement solution flow, mL/hr\n1440 ± 488\n180 ± 140\nn/a\nMean dialysate flow, mL/hr\n0\n2871 ± 872\nn/a\nMean RRT dose, mL/kg/hr\n33.6 ± 7.4\n34.7 ± 4.4\n0.50\nPrescribed dose delivered, %\n84.7 ± 16.3\n87.8 ± 13.7\n0.73\nNet ultrafiltration, L/day\n1.7 ± 2.2\n0.8 ± 4.1\n0.98\nDays on study therapy\n5 (3-7)\n4.50 (3.00-10.25)\n0.79\nPrimary reason for CRRT withdrawal\n\n\n0.75\nDeath while on CRRT\n12 (35%)\n10 (27%)\n\nKidney function recovery\n7 (21%)\n7 (19%)\n\nTransfer to intermittent hemodialysis\n12 (35%)\n14 (38%)\n\n\n3 (9%)\n6 (16%)\n\nCatheter changes/day of therapy\n0.09 ± 0.2\n0.1 ± 0.2\n0.92\nUnits RBCs transfused/day of therapy\n0.3 ± 0.3\n0.4 ± 0.7\n0.38\nUnscheduled circuit changes/day of therapy\n0.2 ± 0.3\n0.2 ± 0.2\n0.36\nStudy days with receipt of norepinephrine, %\n61.8 ± 36.2\n65.7 ± 7.5\n0.41\nStudy days with receipt of vasopressin, %\n35.4 ± 35.9\n43.0 ± 43.6\n0.51\nContinuous data are displayed as mean ± standard deviation or median (interquartile range), as appropriate. Categorical variables are displayed as n (%); n/a, not applicable. Data are restricted to patients who initiated therapy with the CRRT mode to which they had been allocated. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; RBCs, red blood cells; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration.\nCVVHD was prescribed in 38 participants for a median of 92 (IQR 57 to 145) hrs and delivered for 76 (IQR 44 to 148) hrs. Over the course of the trial, 87.8% (95% CI 83.3, 92.3) of the prescribed duration was delivered to patients randomized to receive CVVHD. The mean weight-standardized delivered RRT dose was 34.7 mL/kg/hr (SD 4.4).\n\nBODY.CLINICAL OUTCOMES:\nAll subjects were followed to 60 days, by which point 22/39 (56%) and 21/38 (55%) of participants assigned to CVVH and CVVHD, respectively, had died. Among surviving patients, 4/17 (24%) and 3/17 (19%) of those initially assigned to CVVH and CVVHD respectively, were still dependent on RRT.\nAfter adjustment for baseline SOFA score, we found a non-significant decline in SOFA score among participants treated with CVVH compared to CVVHD on the first day (-0.4, 95% CI -1.3, 0.6) and second day (-0.4, 95% CI -1.6, 0.8) following randomization. Over the first week of therapy, the adjusted change in the SOFA score among participants treated with CVVH compared to CVVHD was -0.8 (95% CI -2.1, 0.5). The observed reduction appeared to be driven by a reduction in the cardiovascular component of the SOFA score (Figure 2).\nFigure 2Total and Cardiovascular (CV) Sequential Organ Failure Assessment (SOFA) scores over the first seven days following randomization. Total SOFA scores are denoted with squares; the cardiovascular component of the SOFA score is denoted with triangles.CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\n\nBODY.PROTOCOL VIOLATIONS AND ADVERSE EVENTS:\nWe identified 15 protocol violations involving 15 participants. 
In one individual who started therapy with CVVH, achievement of the target dose was not feasible and RRT was supplemented with the addition of continuous dialysis; ongoing difficulties with maintaining adequate small molecule clearance using CRRT resulted in a change to sustained low efficiency dialysis. In five cases (one patient on CVVH, and four on CVVHD), stepdown from CRRT to IHD occurred before the participant met the study-defined criteria for hemodynamic stability. In five cases (two patients on CVVH, and three on CVVHD), RRT was discontinued altogether in participants who did not meet study criteria for renal recovery.\nNo adverse events were attributed to the study intervention in either treatment arm.\n\n\nBODY.DISCUSSION:\nWe completed a multi-center, concealed-allocation, randomized trial comparing hemofiltration and hemodialysis in critically ill patients with AKI. This pilot trial achieved its primary objective of confirming the feasibility of performing a large-scale trial evaluating RRT clearance mode in AKI. There was also a non-significant trend towards reduced organ dysfunction, driven by decreased vasopressor requirements, early after the initiation of RRT in patients who received hemofiltration. These feasibility data and the importance of the clinical question justify the conduct of a large trial with adequate power to evaluate the primary outcome of 60-day mortality. With a power of 0.80, type I error of 0.05, and a mortality reduction of 10% in patients treated with CVVH (estimated 60-day mortality in the CVVHD arm 55%), we estimate that such a trial would require the enrolment of nearly 400 patients per arm. A more conservative mortality reduction of 7.5% would require the enrolment of 700 patients per arm.\nLittle is known about the optimal mode of clearance in renal replacement for AKI. Although both convection and diffusion remove small molecules with equal efficiency, convection may remove larger molecules that are not cleared by diffusive mechanisms [13]; previous studies have examined effects on inflammatory markers and have been too small to reliably determine effects on clinical outcomes. In addition, the profile of larger molecules removed by convection is relatively non-specific and may also include molecules that dampen inflammation or crucial medications such as antibiotics [14]. A randomized crossover study of 13 patients with AKI and the systemic inflammatory response syndrome found that CVVH for 24 hrs reduced plasma concentrations of TNFα and cleared more IL-6, compared to CVVHD. However, there was no effect on plasma concentrations of IL-6, IL-10, SL-selectin, or endotoxin [15]. Morgera and coworkers randomized 24 patients with sepsis-associated AKI to treatment with CVVH or CVVHD using a high cutoff membrane permeable to molecules up to 60 kilodaltons in size. Plasma concentration and clearance for IL-6 did not differ, but clearance of IL-1 receptor antagonist, an anti-inflammatory mediator, was enhanced by CVVH [16]. Of note, protein losses were higher in patients who received CVVH. In a prospective crossover study involving 15 patients with AKI who sequentially received CVVH and CVVHD, β2-microglobulin clearance was non-significantly higher among CVVH recipients (P = 0.055) [17]. Among trials focusing on clinical outcomes, a single-centre RCT (n = 20) did not demonstrate a difference in survival, renal recovery or ICU stay in patients treated with CVVH vs. CVVHD administered at fixed doses of 1.7 to 2.0 L/hr [18]. 
Similarly, a recent unpublished 65-patient RCT of CVVH vs continuous venovenous hemodiafiltration (both at 40 mL/kg/hr) did not demonstrate a survival difference at 28 days [19].\nWhile interventional trials involving devices and processes of care that are susceptible to large variations in practice are challenging, we achieved our feasibility objectives. Specifically, we were able to recruit the majority of eligible subjects, implement the protocolized therapy for > 85% of the prescribed time and ascertain vital status at 60 days for all participants. When accounting for actual time on therapy, the delivered dose exceeded 80% of that prescribed in both treatment arms, thereby surpassing our feasibility threshold of 75%. Accordingly, we believe our study strongly supports the feasibility of a large definitive randomized trial comparing hemofiltration and hemodialysis in critically ill patients with severe AKI.\nThis is the largest published trial to date to study the mode of solute clearance in AKI. Given the challenges of recruiting participants and implementing interventions in a population with a high burden of illness, the success of our pilot was a necessary precursor to a principal trial that examines patient-relevant clinical outcomes. Our eligibility criteria were pragmatic and assured the inclusion of individuals in North America who typically receive CRRT. A minority of individuals for whom no SDM could be found were enroled with deferred consent, thereby limiting exclusion of potentially eligible patients and mitigating selection bias. Of interest, no subject enroled by deferred consent had an SDM who subsequently withdrew consent or withdrew consent personally after regaining capacity. Finally, other than the clearance mode, all other aspects of RRT including stepdown to intermittent hemodialysis and withdrawal of RRT were performed in a manner consistent with usual practice.\nOur study has several limitations. As this was an unblinded trial, we cannot exclude the effect of co-interventions in either treatment arm. However, the nature of our intervention made blinding impractical, and we ensured that the two groups received equivalent RRT doses. There is also no definite intervention related to RRT prescription that has been shown to modify outcomes in critically ill patients with AKI. In addition, we cannot exclude the possibility that patients who were eligible but not randomized were systematically different than trial participants. Our protocol specified a target clearance of 35 mL/kg/hr, which was generally achieved in both arms. The decision to use this dose was guided by the fact that our trial was designed when higher dose CRRT was felt to be potentially beneficial based on data from two trials [20,21]. This pre-dated more recent trials and meta-analyses demonstrating no advantage of higher dose CRRT over doses of 20 to 25 mL/kg/hr, which would likely be the standard in a future trial comparing convection and diffusion [4,5,22,23]. However, since high dose therapy appears safe and may positively modify the effect of convective clearance [20], our higher dose target was reasonable. It should be noted that while 35 ml/kg/hr was prescribed to all participants, actual solute clearance may have been significantly lower due to changes in filter permeability [24,25]. On the other hand, when calculating the prescribed dose, we did not consider clearance associated with net ultrafiltration or removal of volume associated with the administration of anticoagulants. 
This approach would tend to underestimate the actual solute clearance that was delivered. Importantly, any deviation in dose from our 35 mL/kg/hr target would affect participants in both intervention arms to a similar extent. The Prismaflex CRRT system requires that all patients (even those predominantly getting hemodialysis) receive a small amount of post-filter hemofiltration (up to 200 ml/hr) to prevent clotting in the machine's deaeration chamber. In addition, CVVHD recipients would have had other unavoidable sources of convective clearance, specifically for the achievement of net fluid removal, and for the isovolemic removal of volume associated with the administration of the anticoagulant (citrate or heparin). Thus, CVVHD recipients did not receive purely diffusive solute clearance. We nonetheless estimate that the typical patient enrolled in the CVVHD arm still received > 80% of therapy in the form of diffusive clearance, which reassures us that this trial truly compared two different modes of solute clearance. Our trial was conducted using the AN69 polyacryonitrile filter, which has unique adsorptive characteristics [14]. While this is a widely used membrane in the administration of CRRT, the adsorptive characteristics of filters may differ and we cannot generalize our findings to settings in which other filters are utilized. Moreover, we did not collect data to evaluate the relative effect of CVVH or CVVHD on the removal of molecules of varying size. This trial was conducted using continuous RRT. Although the question of hemodialysis vs. hemofiltration is applicable to patients stable enough to receive intermittent RRT, machines to deliver intermittent hemofiltration were not widely available in North America when the trial was conducted. Among trial participants who became more hemodynamically stable, intermittent hemodialysis was utilized even when the initial CRRT mode was hemofiltration, thereby potentially diluting the benefits associated with CVVH. Finally, while we observed a trend towards improved organ failure scores, this finding must be interpreted with caution given the small sample size of our trial. Moreover, disease severity scores such as the SOFA score are surrogate markers that cannot supplant hard clinical endpoints.\n\nBODY.CONCLUSIONS:\nOur findings clearly support the feasibility of performing a definitive trial comparing CVVH and CVVHD in critically ill patients with AKI. The early non-significant trend towards reduced vasopressor requirements provides preliminary support to the concept that convective modes of clearance reduce inflammation and thus benefit critically ill patients. Given the high mortality associated with AKI, the lack of specific RRT interventions shown to reduce mortality, and current practice variation, the results of our pilot trial provide justification for a larger trial of hemofiltration vs. 
hemodialysis that will be adequately powered to evaluate meaningful clinical outcomes.\n\nBODY.KEY MESSAGES:\n• A randomized controlled trial of hemofiltration vs hemodialysis is feasible.\n• Hemofiltration may be associated with decreased vasopressor requirements over the first week of therapy.\n• A well-designed and adequately powered trial of hemofiltration vs hemodialysis would address an important area of uncertainty in the management of patients with AKI.\n\nBODY.ABBREVIATIONS:\nAKI: acute kidney injury; CRRT: continuous renal replacement therapy; CVVH: continuous venovenous hemofiltration; CVVHD: continuous venovenous hemodialysis; IL: interleukin; IQR: interquartile range; RF: replacement fluid; RCT: randomized controlled trial; RRT: renal replacement therapy; SDM: substitute decision maker; SOFA: Sequential Organ Failure Assessment; TNF α: tumor necrosis factor alpha.\n\nBODY.COMPETING INTERESTS:\nDr. Bagshaw has served on a Gambro Experts Panel. No other relevant competing interests are reported.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRW conceived and designed the study, obtained funding, supervised study conduct, interpreted the data, drafted the manuscript, and provided critical review of the manuscript. JOF conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. SMB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. KEB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AXG interpreted the study data, supervised study conduct, and provided critical review of the manuscript. MAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. SL interpreted the study data, supervised study conduct and provided critical review of the manuscript. DK interpreted the study data, supervised study conduct, and provided critical review of the manuscript. RMR interpreted the study data, supervised study conduct, and provided critical review of the manuscript. NIP interpreted the study data and provided critical review of the manuscript. KP participated in data analysis and interpretation, designed data collection tools, and participated in the critical review of the manuscript. KT participated in study design, conducted the data analysis, participated in data interpretation, and in the critical review of the manuscript. NKJA conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. All authors approved the final version.\n\nBODY.SUPPLEMENTARY MATERIAL:\nBODY.ADDITIONAL FILE 1:\nModified Sequential Organ Failure Assessment (SOFA) score. SOFA score modified from the original version (see reference [9]) for application in the OMAKI trial.\nClick here for file\n\n**Question:** Compared to continuous venovenous hemodialysis [CVVHD] what was the result of continuous venovenous hemofiltration [CVVH] on Prescribed dose delivered, %?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13847
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nOptimal Mode of clearance in critically ill patients with Acute Kidney Injury (OMAKI) - a pilot randomized controlled trial of hemofiltration versus hemodialysis: a Canadian Critical Care Trials Group project\n\n\n\nABSTRACT.INTRODUCTION:\nAmong critically ill patients with acute kidney injury (AKI) needing continuous renal replacement therapy (CRRT), the effect of convective (via continuous venovenous hemofiltration [CVVH]) versus diffusive (via continuous venovenous hemodialysis [CVVHD]) solute clearance on clinical outcomes is unclear. Our objective was to evaluate the feasibility of comparing these two modes in a randomized trial.\n\nABSTRACT.METHODS:\nThis was a multicenter open-label parallel-group pilot randomized trial of CVVH versus CVVHD. Using concealed allocation, we randomized critically ill adults with AKI and hemodynamic instability to CVVH or CVVHD, with a prescribed small solute clearance of 35 mL/kg/hour in both arms. The primary outcome was trial feasibility, defined by randomization of >25% of eligible patients, delivery of >75% of the prescribed CRRT dose, and follow-up of >95% of patients to 60 days. A secondary analysis using a mixed-effects model examined the impact of therapy on illness severity, defined by sequential organ failure assessment (SOFA) score, over the first week.\n\nABSTRACT.RESULTS:\nWe randomized 78 patients (mean age 61.5 years; 39% women; 23% with chronic kidney disease; 82% with sepsis). Baseline SOFA scores (mean 15.9, SD 3.2) were similar between groups. We recruited 55% of eligible patients, delivered >80% of the prescribed dose in each arm, and achieved 100% follow-up. SOFA tended to decline more over the first week in CVVH recipients (-0.8, 95% CI -2.1, +0.5) driven by a reduction in vasopressor requirements. Mortality (54% CVVH; 55% CVVHD) and dialysis dependence in survivors (24% CVVH; 19% CVVHD) at 60 days were similar.\n\nABSTRACT.CONCLUSIONS:\nOur results suggest that a large trial comparing CVVH to CVVHD would be feasible. There is a trend toward improved vasopressor requirements among CVVH-treated patients over the first week of treatment.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov: NCT00675818\n\nBODY.INTRODUCTION:\nAcute kidney injury (AKI) is a common complication of critical illness, afflicting up to two-thirds of patients admitted to the ICU [1]. A significant minority of patients with AKI requires renal replacement therapy (RRT), and these individuals have high short-term mortality that ranges from 50 to 70% [2]. In an attempt to mitigate these poor outcomes, various components of the RRT prescription have been rigorously examined in large well-designed randomized controlled trials (RCTs) [3-5].\nThe optimal mode of clearance in patients with AKI who require renal support is an area of considerable controversy resulting in significant practice variation [6]. 
Hemofiltration, whereby solutes are removed by convection, facilitates the removal of both low and higher molecular weight solutes, depending on the pore size of the membrane [7]. Hemodialysis, in which solute removal occurs via diffusion out of the bloodstream into the dialysate down a concentration gradient, removes low molecular weight molecules but provides limited clearance of higher molecular weight substances. When filter characteristics are kept constant, hemofiltration, which more faithfully mimics glomerular filtration, should result in the clearance of larger-sized solutes as compared to hemodialysis [7]. The removal of such solutes, which may include toxic mediators of sepsis and inflammation, provides the theoretical underpinnings for the superiority of hemofiltration as a renal support mode for critically ill patients with AKI.\nWe conducted a multicenter pilot RCT of hemofiltration vs. hemodialysis in critically ill patients with AKI to determine whether a larger definitive trial based on clinically important endpoints would be feasible. In a secondary analysis, we evaluated whether hemofiltration improved global organ dysfunction.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN:\nWe conducted an unblinded RCT of continuous venovenous hemofiltration (CVVH) vs. continuous venovenous hemodialysis (CVVHD) with concealed allocation (http://Clinicaltrials.gov registration number NCT00675818). Our reporting follows the updated CONSORT statement [8].\n\nBODY.SETTING:\nParticipants were recruited from ICUs at six academic hospitals: Mt. Sinai Hospital (Medical-Surgical ICU), Sunnybrook Health Sciences Centre (Critical Care Unit and Cardiovascular ICU) and St. Michael's Hospital (Medical-Surgical and Cardiovascular ICUs), all in Toronto, Canada; Victoria Hospital (Critical Care Trauma Centre), and University Hospital (Medical/Surgical Intensive Care Unit and Cardiac Surgery Recovery Unit), both in London, Canada, and University of Alberta Hospital (General Systems Intensive Care Unit) in Edmonton, Canada. The Research Ethics Boards of Mt. Sinai Hospital, St. Michael's Hospital, Sunnybrook Health Sciences Centre, London Health Sciences Centre and the University of Alberta approved the protocol. The Applied Health Research Centre at St. Michael's Hospital (Toronto, Ontario, Canada) was the trial coordinating center.\n\nBODY.POPULATION:\nWe enrolled critically ill adults (≥ 16 years of age) with AKI, defined as a serum creatinine increase ≥ 50% from baseline (defined as the last known pre-morbid serum creatinine or earliest value available from the current admission). At the time of screening, at least one of the following indications for RRT initiation needed to be present: (i) oliguria (defined as urine output < 100 mL in the preceding 4 hours); (ii) metabolic acidosis (serum bicarbonate < 15 mmol/L and pH < 7.25); (iii) refractory hyperkalemia (serum potassium > 6 mmol/L despite medical efforts at potassium removal); (iv) serum urea > 50 mmol/L, or (v) suspected uremic organ involvement (pericarditis, encephalopathy, neuropathy or myopathy). Finally, participants needed to be hemodynamically unstable, defined as Sequential Organ Failure Assessment (SOFA)- Cardiovascular score ≥ 1 on the day of screening (see Additional file 1 for the modified SOFA score used in this study). This required the patient to have mean arterial pressure < 70 mmHg or receipt of at least one vasopressor or inotrope [9]. 
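The inclusion rules above amount to a short checklist; for illustration only, they can be written as a small screening function. This is a sketch — the field names are hypothetical, and screening in the trial was performed by research coordinators, not software.

```python
# Illustrative only: encodes the OMAKI inclusion rules described above.
# Field names are hypothetical; hyperkalemia is simplified (the protocol also
# required persistence despite medical efforts at potassium removal).
from dataclasses import dataclass

@dataclass
class ScreeningData:
    age_years: float
    creatinine_rise_pct: float      # % increase over baseline serum creatinine
    urine_ml_last_4h: float
    bicarbonate_mmol_l: float
    ph: float
    potassium_mmol_l: float
    urea_mmol_l: float
    suspected_uremic_organ_involvement: bool
    sofa_cardiovascular: int        # modified SOFA-Cardiovascular score

def meets_inclusion_criteria(p: ScreeningData) -> bool:
    aki = p.creatinine_rise_pct >= 50                      # AKI definition
    rrt_indication = (
        p.urine_ml_last_4h < 100                           # oliguria
        or (p.bicarbonate_mmol_l < 15 and p.ph < 7.25)     # metabolic acidosis
        or p.potassium_mmol_l > 6                          # hyperkalemia
        or p.urea_mmol_l > 50                              # marked azotemia
        or p.suspected_uremic_organ_involvement
    )
    hemodynamically_unstable = p.sofa_cardiovascular >= 1
    return (p.age_years >= 16 and aki and rrt_indication
            and hemodynamically_unstable)
```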
Patients were excluded if any one of the following was present: receipt of any RRT within the previous 2 months; presence of an obstructive etiology for AKI; receipt of a kidney transplant within the preceding year; diagnosis of rapidly progressive glomerulonephritis, vasculitis, or acute interstitial nephritis; a clinical indication for intermittent hemodialysis (for example, the presence of a dialyzable toxin); terminal illness with an associated life expectancy less than 2 months; moribund status (life expectancy < 48 hours as per judgment of physicians involved in the patient's care); prior enrolment in this study; enrolment in a competing ICU interventional study; non-availability of a CRRT machine, or administration of RRT for > 36 hours prior to eligibility assessment.\n\nBODY.PATIENT ALLOCATION:\nAfter eligibility was confirmed, we attempted to obtain consent from the patient or if the patient lacked capacity to consent, his/her substitute decision maker (SDM) was approached. A deferred consent option was approved at three sites, which allowed patient enrollment and randomization in the event of patient incapacity and the inability to locate an SDM. Using this mechanism, patients were randomized, and research personnel attempted to locate an SDM every 72 hours to affirm consent for participation. In all cases, when participants regained capacity, they were asked to provide consent if they were initially enrolled using a priori SDM consent or deferred consent; no participant withdrew consent once regaining capacity. Patients were allocated to a study group using sealed, opaque, sequentially numbered envelopes (prepared by the coordinating center) that were opened after consent was obtained [10]. Randomization was stratified by center in random blocks of four, six or eight. The Research Ethics Board at each center approved the study. An independent data and safety monitoring board tracked the trial's conduct.\n\nBODY.STUDY INTERVENTION:\nParticipants randomized to CVVH were prescribed ultrafiltration with isovolemic replacement solution (evenly split between pre- and post-filter) to achieve a target clearance of 35 mL/kg body weight/hr. The prescribed hourly ultrafiltration rate was increased above 35 mL/kg/hr to compensate for the reduced efficiency of clearance related to the pre-filter component of the replacement solution volume administered. This adjusted CVVH dose was calculated from the post-filter replacement fluid (RF), pre-filter replacement fluid and blood (blood) flow rates as follows:\nDose = Postfilter RF rate + (Prefilter RF rate × (Blood flow/(Blood flow + Prefilter RF rate))).\nIn the CVVHD arm, the dialysate flow was set to achieve a clearance of 35 mL/kg/hr, which included a post-filter hemofiltration flow of 100 to 200 mL/hr. This obligate low-volume post-filter hemofiltration is utilized at participating centers to minimize the risk of blood clotting in the machine's deaeration chamber. Ultrafiltration performed for achievement of net fluid balance or to compensate for the volume of administered anticoagulants (namely, citrate or unfractionated heparin) was not considered in the total dose. We used the patient's most recent measured body weight, as recorded in the chart or estimated by the study coordinator if no value was documented, to calculate prescribed RRT dose. 
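As a worked example of the pre-dilution correction above, the sketch below computes the adjusted CVVH dose for a hypothetical prescription. The function and variable names are ours; blood flow is assumed to be programmed in mL/min and is converted to mL/hr so that all terms in the ratio share units.

```python
# Minimal sketch (hypothetical values) of the adjusted CVVH dose described above:
#   dose = post-filter RF rate + pre-filter RF rate * Qb / (Qb + pre-filter RF rate)
# with every flow expressed in mL/hr, then normalized to body weight (mL/kg/hr).
def adjusted_cvvh_dose_ml_kg_hr(post_filter_rf_ml_hr: float,
                                pre_filter_rf_ml_hr: float,
                                blood_flow_ml_min: float,
                                weight_kg: float) -> float:
    qb_ml_hr = blood_flow_ml_min * 60.0  # convert blood flow to mL/hr
    # Pre-filter replacement fluid dilutes blood entering the filter, so its
    # contribution to clearance is scaled by Qb / (Qb + pre-filter rate).
    effective_ml_hr = post_filter_rf_ml_hr + (
        pre_filter_rf_ml_hr * qb_ml_hr / (qb_ml_hr + pre_filter_rf_ml_hr))
    return effective_ml_hr / weight_kg

# Hypothetical prescription: 1,500 mL/hr pre- and post-filter replacement fluid,
# blood flow 180 mL/min, 85 kg patient.
print(round(adjusted_cvvh_dose_ml_kg_hr(1500, 1500, 180, 85), 1))
# ≈ 33.1 mL/kg/hr, in the range of the delivered CVVH dose reported in Table 2.
# (The 2010 protocol amendment additionally capped total fluid at 4,000 mL/hr.)
```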
Due to difficulties in achieving the target RRT dose as a result of high transmembrane pressures in some patients with higher body mass (particularly those randomized to CVVH), the protocol was modified in May 2010 (21 months after randomization of the first patient) such that the total fluid dose was capped at 4,000 mL/hr for both treatment arms, irrespective of the patient's weight. After this protocol change, twenty patients were enrolled in the trial; the 4,000 mL/hr dosing cap was invoked in six participants (five allocated to CVVHD and one to CVVH) who would have otherwise needed higher flows to achieve an actual total dose of 35 mL/kg/hr.\nAll study therapies were delivered by the Gambro Prismaflex™ RRT machine using the ST100 (surface area 1.0 m2) or ST150 (surface area 1.5 m2) filter sets, which contain a polyacrylonitrile AN69 membrane (Gambro, Richmond Hill, ON, Canada). We permitted the use of any commercially available dialysate and replacement solutions. Decisions regarding circuit anticoagulation (heparin, regional citrate anticoagulation, or no anticoagulation) and volume control were at the discretion of the attending physicians. Patients remained on study therapy until death, withdrawal of CRRT as part of withdrawal of life support, hemodynamic stability (SOFA-cardiovascular score < 2 for > 24 hrs) permitting stepdown to intermittent hemodialysis, or recovery of kidney function (defined as urine output > 500 mL in the preceding 12 hrs, and most recent serum potassium < 5.5 mmol/L and serum bicarbonate > 18 mmol/L).\n\nBODY.OUTCOMES:\nThe primary feasibility outcome of this study was the ability to administer > 75% of the prescribed CRRT dose to participants in each treatment arm. Secondary feasibility outcomes included the ability to enroll > 25% of fully eligible patients and the ability to follow > 95% of patients to 60 days following randomization (the anticipated follow-up period for the future definitive principal study). Secondary outcomes included change in SOFA score from baseline to days 1, 2 and 7, respectively, following randomization. Serial changes in SOFA scores have been shown to be correlated with clinical outcomes in critically ill patients with AKI who require RRT [11].\n\nBODY.DATA COLLECTION:\nTrained research coordinators collected baseline clinical and demographic data, and information on pre-existing medical conditions. Specific risk factors for AKI were ascertained, including recent procedures, nephrotoxins, and sepsis (defined using consensus guidelines [12]). SOFA score was calculated at the time of randomization and on each day of study therapy. The SOFA-Cardiovascular score was modified to include the receipt of vasopressin. Patients receiving RRT on a given day were assigned a SOFA-Renal score of 4, regardless of urine output or serum creatinine. Participants were followed until death or a maximum of 60 days from randomization, at which time vital status and the ongoing need for RRT among survivors were recorded.\n\nBODY.STATISTICAL ANALYSES:\nAs this was a feasibility trial with the primary objective of informing the design of a large-scale RCT, we planned to enroll a convenience sample of 75 participants from six sites. Patients who were randomized but never received RRT are described, but these individuals were replaced to ensure that at least 75 patients received some form of RRT. 
Since the primary feasibility outcome was based on the dose of CRRT received, patients for whom dose could not be readily calculated (those who received no RRT or forms of RRT other than CRRT) were excluded from the analysis related to feasibility. However, clinical outcomes are reported for all randomized participants.\nDescriptive statistics were used to characterize participants in either arm. Continuous variables are presented as means (SD) or medians (interquartile range, IQR) and two-group comparisons were performed with the t-test or Wilcoxon test, as appropriate. Two-group comparisons involving categorical variables were carried out with the chi square test. Analysis of covariance, adjusted for baseline SOFA score, was used to evaluate the change in SOFA score on days 1 and 2. Linear mixed models adjusted for baseline SOFA score and day of study therapy were used to evaluate the impact of RRT mode on SOFA score over the first week of therapy. For the fixed effect of treatment (that is, CVVH vs. CVVHD) 95% confidence intervals (CI) were obtained by profiling the log-likelihood function. All analyses were performed using R version 2.12.0 (R Development Core Team 2010, Vienna, Austria).\n\n\nBODY.RESULTS:\nWe screened 347 patients; 143 were eligible for participation and 79 individuals (55.2%) were enrolled over a 24-month time period. The inability to obtain consent from the patient or SDM was the reason for the non-enrollment of otherwise eligible patients. One patient was excluded shortly after enrollment after it was decided to pursue a non-continuous form of RRT. In total, 78 patients were randomized (39 to CVVH, and 39 to CVVHD). In one case, prior to the start of therapy it was recognized that a patient randomized to CVVHD was inappropriately enrolled as the indication for RRT was toxin removal rather than AKI per se. This patient was excluded from all further analyses. Clinical outcomes are reported in an intention-to-treat fashion for the remaining 77 patients (CVVH, 39; CVVHD, 38). Four patients randomized to CVVH were excluded from the feasibility analysis, two due to death prior to commencement of study RRT, and two due to receipt of continuous venovenous hemodiafiltration as the initial mode of therapy. The indication(s) for RRT was (were) oliguria, metabolic acidosis, hyperkalemia and uremia in 34, 17, 6 and 8 patients, respectively, in the CVVH arm. In the CVVHD arm, these indications guided the inclusion of 36, 15, 4 and 3 participants, respectively. In total, 73 participants commenced the therapy to which they were randomized (CVVH, 35; CVVHD, 38); these individuals contributed to the analysis relating to the feasibility of treatment delivery (Figure 1).\nFigure 1Flow of patients through the trial. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\nBODY.BASELINE CHARACTERISTICS (TABLE 1):\nThe mean age of participants was 61.5 (SD 14.2) years and 30/77 (39%) were women. The median time from ICU admission to randomization was 2 (IQR 2 to 8) days. Sepsis was present in 63/77 (82%) of participants, 73/77 (95%) were mechanically ventilated and 69/77 (90%) received vasopressors at the time of randomization. Median 24-hr urine output was 125 (IQR 50 to 250) mL. 
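As an aside on methods, the mixed-effects analysis described in the Statistical Analyses section can be illustrated with the sketch below. The Python/statsmodels specification, the simulated data, and the column names are assumptions for illustration only; the trial's own analyses were performed in R 2.12.0.

```python
# Illustrative specification of the secondary analysis described above: daily SOFA
# over the first week of therapy, adjusted for baseline SOFA and study day, with a
# random intercept per patient. Data below are simulated, not trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_days = 70, 7
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_days),
    "day": np.tile(np.arange(1, n_days + 1), n_patients),
    "mode": np.repeat(rng.choice(["CVVH", "CVVHD"], size=n_patients), n_days),
    "baseline_sofa": np.repeat(rng.normal(16, 3, size=n_patients).round(), n_days),
})
df["sofa"] = (df["baseline_sofa"] - 0.3 * df["day"]
              + rng.normal(0, 2, size=len(df))).clip(0, 24)

result = smf.mixedlm("sofa ~ mode + baseline_sofa + day",
                     data=df, groups=df["patient_id"]).fit()
print(result.summary())
# The treatment-mode coefficient estimates the between-arm difference in SOFA,
# analogous to the -0.8 (95% CI -2.1 to 0.5) reported in the Results.
```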
Approximately two-thirds of participants received some form of RRT prior to randomization, with 23/39 in the CVVH group (59%) for a mean of 10.6 (SD 12.0) hrs, and 30/38 in the CVVHD group (79%) for a mean of 13.3 (SD 11.7) hrs. (Table 1\nTable 1Baseline characteristics\n\nCVVH (n = 39)\nCVVHD (n = 38)\nAge, years\n58.8 ± 13.7\n64.3 ± 14.3\nFemale\n14 (36%)\n16 (42%)\nWeight, kg\n86.6 ± 31.5\n89.2 ± 26.4\nDays from ICU admission to randomization\n2 (1, 4)\n2 (1, 3)\nSOFA score\n16.3 ± 3.3\n15.5 ± 3.0\nPremorbid conditions\nHypertension\n19 (49%)\n26 (68%)\nDiabetes mellitus\n9 (23%)\n9 (24%)\nChronic kidney disease\n5 (13%)\n13 (34%)\nAcute kidney injury risk factors\nSepsis\n31 (79%)\n32 (84%)\nCardiopulmonary bypass in past 7 days\n1 (3%)\n4 (11%)\nIV contrast in past 7 days\n9 (23%)\n6 (16%)\nPhysiologic parameters\nUrine output, mL/24 hr\n125 (60, 245)\n135 (46, 251)\nMinimum systolic BP during 24 hr before enrolment, mmHg\n94.2 ± 16.5\n86.1 ± 13.5\nMinimum diastolic BP during 24 hr before enrolment, mmHg\n48.9 ± 8.6\n47.2 ± 9.4\nLaboratory parameters\nSerum creatinine, µmol/L\n276 (194, 352)\n246 (155, 325)\nUrea, mmol/L\n25.8 ± 38.1\n17.6 ± 12.7\nPotassium, mmol/L\n4.3 ± 0.7\n4.1 ± 0.6\nBicarbonate, mmol/L\n20.5 ± 5.9\n20.2 ± 5.3\nHemoglobin, g/L\n86.1 ± 16.3\n90.0 ± 15.2\nPlatelets, × 109/L\n109.6 ± 82.6\n127.2 ± 80.9\nWBC, × 109/L\n20.2 ± 43.3\n16.9 ± 10.6\nICU interventions\nDuration of RRT prior to randomization, hours\n8 (0, 19)\n13 (1.0, 20.5)\nMechanical ventilation\n37 (95%)\n36 (95%)\nVasopressors\n34 (87%)\n35 (92%)\nTotal parenteral nutrition\n1 (3%)\n2 (5%)\nContinuous data are displayed as mean ± SD or medians (interquartile range), as appropriate. Categorical variables are displayed as n (%). CRRT, continuous renal replacement therapy; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis; BP, blood pressure; RRT, renal replacement therapy; SOFA, Sequential Organ Failure Assessment; WBC, White blood cell.\n\nBODY.FEATURES OF STUDY TREATMENTS (TABLE 2):\nAmong the 35 participants who started CVVH, RRT was prescribed for a median of 107 (IQR 55 to 146) hrs and delivered for 85 (IQR 43 to 128) hrs. Overall, 84.7% (95% CI 79.1, 90.3) of the prescribed duration was delivered. 
The mean weight-standardized dose was 33.6 (SD 7.4) mL/kg/hr while CRRT was ongoing.(Table 2\nTable 2Feasibility and safety data\nCVVH(n = 35)CVVHD(n = 38)P-valueDuration RRT prescribed, hrs\n146 ± 240\n145 ± 156\n0.88\nDuration RRT received, hrs\n130 ± 222\n128 ± 142\n0.87\nMean pre-filter replacement solution flow, mL/hr\n1533 ± 442\n0\nn/a\nMean post-filter replacement solution flow, mL/hr\n1440 ± 488\n180 ± 140\nn/a\nMean dialysate flow, mL/hr\n0\n2871 ± 872\nn/a\nMean RRT dose, mL/kg/hr\n33.6 ± 7.4\n34.7 ± 4.4\n0.50\nPrescribed dose delivered, %\n84.7 ± 16.3\n87.8 ± 13.7\n0.73\nNet ultrafiltration, L/day\n1.7 ± 2.2\n0.8 ± 4.1\n0.98\nDays on study therapy\n5 (3-7)\n4.50 (3.00-10.25)\n0.79\nPrimary reason for CRRT withdrawal\n\n\n0.75\nDeath while on CRRT\n12 (35%)\n10 (27%)\n\nKidney function recovery\n7 (21%)\n7 (19%)\n\nTransfer to intermittent hemodialysis\n12 (35%)\n14 (38%)\n\n\n3 (9%)\n6 (16%)\n\nCatheter changes/day of therapy\n0.09 ± 0.2\n0.1 ± 0.2\n0.92\nUnits RBCs transfused/day of therapy\n0.3 ± 0.3\n0.4 ± 0.7\n0.38\nUnscheduled circuit changes/day of therapy\n0.2 ± 0.3\n0.2 ± 0.2\n0.36\nStudy days with receipt of norepinephrine, %\n61.8 ± 36.2\n65.7 ± 7.5\n0.41\nStudy days with receipt of vasopressin, %\n35.4 ± 35.9\n43.0 ± 43.6\n0.51\nContinuous data are displayed as mean ± standard deviation or median (interquartile range), as appropriate. Categorical variables are displayed as n (%); n/a, not applicable. Data are restricted to patients who initiated therapy with the CRRT mode to which they had been allocated. RRT, renal replacement therapy; CRRT, continuous renal replacement therapy; RBCs, red blood cells; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration.\nCVVHD was prescribed in 38 participants for a median of 92 (IQR 57 to 145) hrs and delivered for 76 (IQR 44 to 148) hrs. Over the course of the trial, 87.8% (95% CI 83.3, 92.3) of the prescribed duration was delivered to patients randomized to receive CVVHD. The mean weight-standardized delivered RRT dose was 34.7 mL/kg/hr (SD 4.4).\n\nBODY.CLINICAL OUTCOMES:\nAll subjects were followed to 60 days, by which point 22/39 (56%) and 21/38 (55%) of participants assigned to CVVH and CVVHD, respectively, had died. Among surviving patients, 4/17 (24%) and 3/17 (19%) of those initially assigned to CVVH and CVVHD respectively, were still dependent on RRT.\nAfter adjustment for baseline SOFA score, we found a non-significant decline in SOFA score among participants treated with CVVH compared to CVVHD on the first day (-0.4, 95% CI -1.3, 0.6) and second day (-0.4, 95% CI -1.6, 0.8) following randomization. Over the first week of therapy, the adjusted change in the SOFA score among participants treated with CVVH compared to CVVHD was -0.8 (95% CI -2.1, 0.5). The observed reduction appeared to be driven by a reduction in the cardiovascular component of the SOFA score (Figure 2).\nFigure 2Total and Cardiovascular (CV) Sequential Organ Failure Assessment (SOFA) scores over the first seven days following randomization. Total SOFA scores are denoted with squares; the cardiovascular component of the SOFA score is denoted with triangles.CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemofiltration\n\nBODY.PROTOCOL VIOLATIONS AND ADVERSE EVENTS:\nWe identified 15 protocol violations involving 15 participants. 
In one individual who started therapy with CVVH, achievement of the target dose was not feasible and RRT was supplemented with the addition of continuous dialysis; ongoing difficulties with maintaining adequate small molecule clearance using CRRT resulted in a change to sustained low efficiency dialysis. In five cases (one patient on CVVH, and four on CVVHD), stepdown from CRRT to IHD occurred before the participant met the study-defined criteria for hemodynamic stability. In five cases (two patients on CVVH, and three on CVVHD), RRT was discontinued altogether in participants who did not meet study criteria for renal recovery.\nNo adverse events were attributed to the study intervention in either treatment arm.\n\n\nBODY.DISCUSSION:\nWe completed a multi-center, concealed-allocation, randomized trial comparing hemofiltration and hemodialysis in critically ill patients with AKI. This pilot trial achieved its primary objective of confirming the feasibility of performing a large-scale trial evaluating RRT clearance mode in AKI. There was also a non-significant trend towards reduced organ dysfunction, driven by decreased vasopressor requirements, early after the initiation of RRT in patients who received hemofiltration. These feasibility data and the importance of the clinical question justify the conduct of a large trial with adequate power to evaluate the primary outcome of 60-day mortality. With a power of 0.80, type I error of 0.05, and a mortality reduction of 10% in patients treated with CVVH (estimated 60-day mortality in the CVVHD arm 55%), we estimate that such a trial would require the enrolment of nearly 400 patients per arm. A more conservative mortality reduction of 7.5% would require the enrolment of 700 patients per arm.\nLittle is known about the optimal mode of clearance in renal replacement for AKI. Although both convection and diffusion remove small molecules with equal efficiency, convection may remove larger molecules that are not cleared by diffusive mechanisms [13]; previous studies have examined effects on inflammatory markers and have been too small to reliably determine effects on clinical outcomes. In addition, the profile of larger molecules removed by convection is relatively non-specific and may also include molecules that dampen inflammation or crucial medications such as antibiotics [14]. A randomized crossover study of 13 patients with AKI and the systemic inflammatory response syndrome found that CVVH for 24 hrs reduced plasma concentrations of TNFα and cleared more IL-6, compared to CVVHD. However, there was no effect on plasma concentrations of IL-6, IL-10, SL-selectin, or endotoxin [15]. Morgera and coworkers randomized 24 patients with sepsis-associated AKI to treatment with CVVH or CVVHD using a high cutoff membrane permeable to molecules up to 60 kilodaltons in size. Plasma concentration and clearance for IL-6 did not differ, but clearance of IL-1 receptor antagonist, an anti-inflammatory mediator, was enhanced by CVVH [16]. Of note, protein losses were higher in patients who received CVVH. In a prospective crossover study involving 15 patients with AKI who sequentially received CVVH and CVVHD, β2-microglobulin clearance was non-significantly higher among CVVH recipients (P = 0.055) [17]. Among trials focusing on clinical outcomes, a single-centre RCT (n = 20) did not demonstrate a difference in survival, renal recovery or ICU stay in patients treated with CVVH vs. CVVHD administered at fixed doses of 1.7 to 2.0 L/hr [18]. 
Similarly, a recent unpublished 65-patient RCT of CVVH vs continuous venovenous hemodiafiltration (both at 40 mL/kg/hr) did not demonstrate a survival difference at 28 days [19].\nWhile interventional trials involving devices and processes of care that are susceptible to large variations in practice are challenging, we achieved our feasibility objectives. Specifically, we were able to recruit the majority of eligible subjects, implement the protocolized therapy for > 85% of the prescribed time and ascertain vital status at 60 days for all participants. When accounting for actual time on therapy, the delivered dose exceeded 80% of that prescribed in both treatment arms, thereby surpassing our feasibility threshold of 75%. Accordingly, we believe our study strongly supports the feasibility of a large definitive randomized trial comparing hemofiltration and hemodialysis in critically ill patients with severe AKI.\nThis is the largest published trial to date to study the mode of solute clearance in AKI. Given the challenges of recruiting participants and implementing interventions in a population with a high burden of illness, the success of our pilot was a necessary precursor to a principal trial that examines patient-relevant clinical outcomes. Our eligibility criteria were pragmatic and assured the inclusion of individuals in North America who typically receive CRRT. A minority of individuals for whom no SDM could be found were enroled with deferred consent, thereby limiting exclusion of potentially eligible patients and mitigating selection bias. Of interest, no subject enroled by deferred consent had an SDM who subsequently withdrew consent or withdrew consent personally after regaining capacity. Finally, other than the clearance mode, all other aspects of RRT including stepdown to intermittent hemodialysis and withdrawal of RRT were performed in a manner consistent with usual practice.\nOur study has several limitations. As this was an unblinded trial, we cannot exclude the effect of co-interventions in either treatment arm. However, the nature of our intervention made blinding impractical, and we ensured that the two groups received equivalent RRT doses. There is also no definite intervention related to RRT prescription that has been shown to modify outcomes in critically ill patients with AKI. In addition, we cannot exclude the possibility that patients who were eligible but not randomized were systematically different than trial participants. Our protocol specified a target clearance of 35 mL/kg/hr, which was generally achieved in both arms. The decision to use this dose was guided by the fact that our trial was designed when higher dose CRRT was felt to be potentially beneficial based on data from two trials [20,21]. This pre-dated more recent trials and meta-analyses demonstrating no advantage of higher dose CRRT over doses of 20 to 25 mL/kg/hr, which would likely be the standard in a future trial comparing convection and diffusion [4,5,22,23]. However, since high dose therapy appears safe and may positively modify the effect of convective clearance [20], our higher dose target was reasonable. It should be noted that while 35 ml/kg/hr was prescribed to all participants, actual solute clearance may have been significantly lower due to changes in filter permeability [24,25]. On the other hand, when calculating the prescribed dose, we did not consider clearance associated with net ultrafiltration or removal of volume associated with the administration of anticoagulants. 
This approach would tend to underestimate the actual solute clearance that was delivered. Importantly, any deviation in dose from our 35 mL/kg/hr target would affect participants in both intervention arms to a similar extent. The Prismaflex CRRT system requires that all patients (even those predominantly getting hemodialysis) receive a small amount of post-filter hemofiltration (up to 200 ml/hr) to prevent clotting in the machine's deaeration chamber. In addition, CVVHD recipients would have had other unavoidable sources of convective clearance, specifically for the achievement of net fluid removal, and for the isovolemic removal of volume associated with the administration of the anticoagulant (citrate or heparin). Thus, CVVHD recipients did not receive purely diffusive solute clearance. We nonetheless estimate that the typical patient enrolled in the CVVHD arm still received > 80% of therapy in the form of diffusive clearance, which reassures us that this trial truly compared two different modes of solute clearance. Our trial was conducted using the AN69 polyacryonitrile filter, which has unique adsorptive characteristics [14]. While this is a widely used membrane in the administration of CRRT, the adsorptive characteristics of filters may differ and we cannot generalize our findings to settings in which other filters are utilized. Moreover, we did not collect data to evaluate the relative effect of CVVH or CVVHD on the removal of molecules of varying size. This trial was conducted using continuous RRT. Although the question of hemodialysis vs. hemofiltration is applicable to patients stable enough to receive intermittent RRT, machines to deliver intermittent hemofiltration were not widely available in North America when the trial was conducted. Among trial participants who became more hemodynamically stable, intermittent hemodialysis was utilized even when the initial CRRT mode was hemofiltration, thereby potentially diluting the benefits associated with CVVH. Finally, while we observed a trend towards improved organ failure scores, this finding must be interpreted with caution given the small sample size of our trial. Moreover, disease severity scores such as the SOFA score are surrogate markers that cannot supplant hard clinical endpoints.\n\nBODY.CONCLUSIONS:\nOur findings clearly support the feasibility of performing a definitive trial comparing CVVH and CVVHD in critically ill patients with AKI. The early non-significant trend towards reduced vasopressor requirements provides preliminary support to the concept that convective modes of clearance reduce inflammation and thus benefit critically ill patients. Given the high mortality associated with AKI, the lack of specific RRT interventions shown to reduce mortality, and current practice variation, the results of our pilot trial provide justification for a larger trial of hemofiltration vs. 
hemodialysis that will be adequately powered to evaluate meaningful clinical outcomes.\n\nBODY.KEY MESSAGES:\n• A randomized controlled trial of hemofiltration vs hemodialysis is feasible.\n• Hemofiltration may be associated with decreased vasopressor requirements over the first week of therapy.\n• A well-designed and adequately powered trial of hemofiltration vs hemodialysis would address an important area of uncertainty in the management of patients with AKI.\n\nBODY.ABBREVIATIONS:\nAKI: acute kidney injury; CRRT: continuous renal replacement therapy; CVVH: continuous venovenous hemofiltration; CVVHD: continuous venovenous hemodialysis; IL: interleukin; IQR: interquartile range; RF: replacement fluid; RCT: randomized controlled trial; RRT: renal replacement therapy; SDM: substitute decision maker; SOFA: Sequential Organ Failure Assessment; TNF α: tumor necrosis factor alpha.\n\nBODY.COMPETING INTERESTS:\nDr. Bagshaw has served on a Gambro Experts Panel. No other relevant competing interests are reported.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nRW conceived and designed the study, obtained funding, supervised study conduct, interpreted the data, drafted the manuscript, and provided critical review of the manuscript. JOF conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. SMB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. KEB interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AXG interpreted the study data, supervised study conduct, and provided critical review of the manuscript. MAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. AAH interpreted the study data, supervised study conduct, and provided critical review of the manuscript. SL interpreted the study data, supervised study conduct and provided critical review of the manuscript. DK interpreted the study data, supervised study conduct, and provided critical review of the manuscript. RMR interpreted the study data, supervised study conduct, and provided critical review of the manuscript. NIP interpreted the study data and provided critical review of the manuscript. KP participated in data analysis and interpretation, designed data collection tools, and participated in the critical review of the manuscript. KT participated in study design, conducted the data analysis, participated in data interpretation, and in the critical review of the manuscript. NKJA conceived and designed the study, supervised study conduct, interpreted the data, drafted the initial version of the manuscript, and provided critical review of the manuscript. All authors approved the final version.\n\nBODY.SUPPLEMENTARY MATERIAL:\nBODY.ADDITIONAL FILE 1:\nModified Sequential Organ Failure Assessment (SOFA) score. SOFA score modified from the original version (see reference [9]) for application in the OMAKI trial.\nClick here for file\n\n**Question:** Compared to continuous venovenous hemodialysis [CVVHD] what was the result of continuous venovenous hemofiltration [CVVH] on Net ultrafiltration, L/day?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13851
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nDental Amalgam Restorations and Children’s Neuropsychological Function: The New England Children’s Amalgam Trial\n\n\n\nABSTRACT.BACKGROUND:\nA concern persists that children’s exposure to mercury vapor from dental amalgams produces neurotoxicity.\n\nABSTRACT.OBJECTIVE:\nOur goal was to compare the neuropsychological function of children, without prior exposure to dental amalgam, whose caries were repaired using either dental amalgam or mercury-free composite materials.\n\nABSTRACT.METHODS:\nWe conducted a randomized controlled trial involving 534 6- to 10-year-old urban and rural children who were assessed yearly for 5 years using a battery of tests of intelligence, achievement, language, memory, learning, visual–spatial skills, verbal fluency, fine motor function, problem solving, attention, and executive function.\n\nABSTRACT.RESULTS:\nAlthough the mean urinary mercury concentration was greater among children in the amalgam group than the composite group (0.9 vs. 0.6 μg/g creatinine), few significant differences were found between the test scores of children in the two groups. The differences found were inconsistent in direction. Analyses using two cumulative exposure indices—surface years of amalgam and urinary mercury concentration—produced similar results.\n\nABSTRACT.CONCLUSIONS:\nExposure to elemental mercury in amalgam at the levels experienced by the children who participated in the trial did not result in significant effects on neuropsychological function within the 5-year follow-up period.\n\nWe previously reported on a randomized clinical trial, the New England Children’s Amalgam Trial (NECAT), in which no significant differences were found, over a 5-year follow-up interval, between the neuropsychological scores of children for whom dental amalgam was used to repair caries and the scores of children for whom mercury-free composite materials were used (Bellinger et al. 2006). The Full-Scale IQ score on the Wechsler Intelligence Scale for Children-Third Edition (WISC-III; Wechsler 1991) was the primary end point, and the General Memory Index (GMI) on the Wide Range Assessment of Memory and Learning (WRAML; Sheslow and Adams 1990), and the Visual Motor Composite (VMC) on the Wide Range Assessment of Visual–Motor Ability (WRAVMA; Adams and Sheslow 1995) were the two secondary end points. Each of these is a global score, derived by combining a child’s performance on tasks that assess somewhat different abilities.\nThe additional analyses reported in this article address three issues. First, if mercury vapor liberated from dental amalgams produces specific rather than general neuropsychological effects and are most likely to be evident on tests that assess specific cognitive domains, global scores such as Full-Scale IQ, GMI, and VMC might be relatively insensitive to important treatment-group differences. 
Even in the absence of treatment-group differences on global test scores, differences in specific cognitive domains could, depending on their nature and severity, represent morbidities with important consequences for children’s health and well-being. Therefore, here we report comparisons of the scores of the amalgam and composite groups on the sub-scales that contribute to Full-Scale IQ, the GMI, and the VMC, as well as scores on a battery of additional, domain-focused, neuropsychological and educational tests.\nSecond, the exposure index used in the primary analyses of the trial was treatment-group assignment. This could have introduced a form of exposure misclassification insofar as the variability in the treatment needs of the children in the amalgam group resulted in the receipt of variable amounts of amalgam and thus in their potential exposure to mercury. Therefore, we repeated the analyses replacing treatment-group assignment with two continuously distributed indices of exposure: surface-years of amalgam and urinary mercury concentration.\nThird, it is possible that only a subset of children experienced adverse neuropsychological effects as the result of exposure to amalgam, either because of behaviors, such as bruxism or frequent gum chewing, that cause enhanced release of mercury (Barregard 2005; Barregard et al. 1995) or because of enhanced sensitivity to mercury. Two recent studies of dental professionals suggest that polymorphisms for brain-derived neurotrophic factor (Echeverria et al. 2005; Heyer et al. 2004) and the coproporphyrinogen oxidase gene (Echeverria et al. 2006) modify the neurotoxicity of elemental mercury. If the prevalence of such enhanced vulnerability to elemental mercury is low or the associated increase in neuropsychological toxicity is modest in magnitude, its impact on the mean scores of children in the amalgam group might not have been sufficiently large to produce significant treatment-group differences. In an attempt to identify the presence of a subgroup of children who are particularly sensitive to amalgam, we compared the distributional characteristics of the scores within the treatment groups.\nBODY.METHODS:\nBODY.STUDY DESIGN AND PARTICIPANTS:\nChildren were eligible if they were 6–10 years of age, fluent in English, had no known prior or existing amalgam restorations, had two or more posterior teeth with dental caries, and did not have a physician-diagnosed psychological, behavioral, neurological, immunosuppressive, or renal disorder (Children’s Amalgam Trial Study Group 2003). Children were recruited from several community dental clinics in the Boston/Cambridge area of Massachusetts, an urban setting, and from a dental clinic in Farmington, Maine, a rural area.\nA total of 5,116 children were screened for eligibility. Eligibility was confirmed for 598 children. At baseline visits, children received a dental examination by a study dentist, X rays, and standard preventive dental care (e.g., cleaning, application of sealants). Other baseline visit activities included the collection of blood and urine samples, anthropometric measurements of height, weight, and body fat, neuropsychological testing of the child, a health interview with the child’s guardian, and neuropsychological testing of the guardian.\nAfter completion of baseline visits, children were randomized to a study treatment arm. Randomization was stratified by geographic location (Boston/Cambridge vs. Farmington) and number of teeth with caries (two to four vs. 
five or more), using randomly permuted blocks within each of the four strata.\nThe NECAT was conducted in accordance with all applicable requirements for the protection of human subjects. All children provided assent and parents provided informed consent. The study protocol was approved by the institutional review boards of the New England Research Institutes, the Forsyth Institute, and the clinics from which children were recruited.\n\nBODY.INTERVENTIONS AND FOLLOW-UP:\nAll children had semiannual dental examinations as well as additional visits required to meet treatment needs. For children in the amalgam arm, a dispersed-phase amalgam was used to restore all posterior teeth with caries at baseline and to repair incident caries during the 5-year trial period. For children in the composite arm, composite material (white filling) was used for all restorations. Following standard clinical practice, for both groups, composite material was used to repair caries in the front teeth, and stainless steel crowns were used to restore primary teeth with extensive lesions that could not otherwise be restored. The choices of dental materials and techniques were standardized across sites and dentists.\nUrine samples were collected annually and analyzed for elemental mercury using cold vapor atomic absorption. Values were expressed as micrograms per gram creatinine. The analyses reported use only urinary mercury concentrations in samples collected at 3, 4, and 5 years of follow-up. After 1 February 2000, the detection limit, which had been 1.5 ng/mL, was reduced to 0.45 ng/mL as a result of increasing the volume of sample analyzed for each child. Samples with a mercury concentration below the detection limit were assigned a value of 0.45/√2 (Bellinger et al. 2006).\nParticipants and dentists could not be blinded to treatment assignment, but all individuals who collected outcome data (e.g., neuropsychological tests) or analyzed specimens (e.g., for mercury) were blinded to children’s treatment assignments.\n\nBODY.NEUROPSYCHOLOGICAL ASSESSMENTS:\nAt baseline, before randomization and the receipt of any dental treatments, children participated in two 3-hr neuropsychological test sessions. At the first session, the WISC-III (Wechsler 1991) and the Wechsler Individual Achievement Test (WIAT) (Psychological Corporation 1992) were administered. These tests were again administered at 3 and 5 years after baseline. The second baseline session consisted of a battery of domain-focused tests. This test battery, which was again administered at 1, 2, and 4 years after baseline, included the WRAML (Sheslow and Adams 1990), the WRAVMA (Adams and Sheslow 1995), the Trail-Making Test (Spreen and Strauss 1991), finger tapping (the WPS Electronic Tapping Test; Western Psychological Services, Los Angeles, CA), ordered and unordered verbal cancellation (Mesulam 1985), category fluency (McCarthy 1972), the Controlled Oral Word Association Test (letter fluency) (Spreen and Strauss 1991), simple visual reaction time (the Standard Reaction Timer; Software Science, Cincinnati, OH), the Stroop Color–Word Interference Test (Trenerry et al. 1989), and the Wisconsin Card Sorting Test (Heaton et al. 1993).\nA total of 14 testers were used at the Boston/Cambridge site and five testers at the Farmington site. Quality control of the assessments was assured by having all examiners trained and certified by one supervising psychologist (D.C.B.) 
before conducting assessments of children enrolled in the trial, and monitored over the course of data collection. Each testing session was completely rescored by a second individual and errors were corrected. A variety of computerized algorithms were used to check the entered data for internal consistency.\n\nBODY.SAMPLE SIZE DETERMINATION:\nThe trial was designed to achieve 80% power to detect a 3-point difference between treatment arms in 5-year change in Full-Scale IQ score, adjusted for baseline IQ score and randomization stratum. Assuming a retention rate of 75% over the 5-year follow-up period, the recruitment goal was 250 children per treatment arm, for a total sample size of 500 children.\n\nBODY.STATISTICAL ANALYSIS:\nWe used intention-to-treat analyses, using analysis of covariance, to compare the amalgam and composite groups in terms of the changes, over 5 years, in scores on the WISC-III and WIAT and the changes, over 4 years, in scores on the domain-focused tests. These analyses thus indicated whether the central tendencies of the distributions of change scores differed in the amalgam and composite groups. Adjustments were made for baseline covariates: test score, randomization stratum, age, sex, socioeconomic status, hair mercury, and blood lead level. We calculated socioeconomic status using the method developed by Green (1970). Hair mercury was included to control for methylmercury, a form of mercury that is known to be a developmental neurotoxicant (Weiss 2006) but acquired primarily by consumption of contaminated seafood. Elevated blood lead level is a well-known developmental risk factor, with an increased prevalence among children who are socioeconomically disadvantaged (Bellinger 2006).\nWe evaluated children’s neuropsychological test scores in relation to two continuously distributed indices of exposure. The first index was surface-years of amalgam (number of amalgam surfaces weighted by number of years present), calculated from the information contained in dental clinic records regarding dates of amalgam placement, the number of tooth surfaces involved in the restoration, the timing of loss of primary teeth containing amalgam restorations, and the like. The second index was urinary mercury concentration. Scores on the WISC-III and WIAT, both of which were administered at year 5 of follow-up, were evaluated in relation to the mean of available urinary mercury concentrations at years 3, 4, and 5. Scores on the other tests, which were administered for the final time at year 4 of follow-up, were evaluated in relation to the mean of available urinary mercury concentrations at years 3 and 4 of follow-up. We evaluated the associations between these indices of exposure and children’s neuropsychological test scores using analysis of covariance, adjusting for the same set of baseline covariates used in the intention-to-treat analyses.\nWe conducted the analyses using the Kolmogorov-Smirnov test (Stuart and Ord 1991) to compare characteristics other than the central tendency of the distributions of the change scores in the treatment groups. When the results indicated that the change scores of children in the two groups did not come from the same distribution, we determined whether this was attributed to differences between treatment groups in the percentages of children with change scores indicating substantial deterioration of performance over the follow-up interval. 
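As an illustration of the distributional comparison described above, the snippet below applies a two-sample Kolmogorov-Smirnov test to treatment-arm change scores. The arrays are simulated placeholders, not NECAT measurements.

```python
# Sketch of the distributional check described above: compare the full
# distributions of change scores in the amalgam and composite arms, not just
# their means. The data below are simulated, not trial data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
change_amalgam = rng.normal(loc=2.0, scale=5.0, size=250)    # hypothetical 5-yr changes
change_composite = rng.normal(loc=2.0, scale=5.0, size=250)

statistic, p_value = ks_2samp(change_amalgam, change_composite)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
# A significant result would prompt a closer look at the tails, e.g. whether more
# children in one arm showed large score declines over the follow-up interval.
```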
Of particular interest was whether, in the absence of a treatment-group difference in mean change score, a greater percentage of children in the amalgam group than in the composite group demonstrated such deterioration.\nTo evaluate the impact of interexaminer variability on the results, we repeated the intention-to-treat analyses including adjustment for a set of indicator variables representing the neuropsychological examiners.\n\n\nBODY.RESULTS:\nTable 1 shows that the treatment groups did not differ significantly in terms of age, race, household income, education of primary caregiver, Full-Scale IQ, hair and urinary mercury concentrations, blood lead level, and number of decayed tooth surfaces. Females outnumbered males in the composite arm. Children were primarily non-Hispanic white (62%), with non-Hispanic blacks making up an additional 19% of the sample. The mean number of total decayed tooth surfaces at baseline was 9.5, with 1.7 of the surfaces being in permanent teeth. Slightly more than half of the children (54%) had five or more teeth with caries that required restoration, with the rest having two to four carious teeth. Children from the Boston/Cambridge area tended to have more caries than did children from Farmington (10.3 vs. 8.6 carious surfaces, respectively).\nChildren continued to have dental treatment needs over the course of the 5-year follow-up period, averaging approximately one new filled surface per year. The treatment needs were similar in the treatment groups (Table 2). At the end of the follow-up period, the number of restored surfaces in place ranged from 0 to 36 in both treatment groups, and the mean values did not differ significantly (p = 0.16). The amalgam and composite groups also did not differ significantly in the cumulative number of restored surfaces over the trial (p = 0.10). In the amalgam group, 79% of the surfaces had been restored with amalgam. At the end of the 5-year follow-up period, the mean urinary mercury concentration was significantly greater among children in the amalgam group (0.9; range, 0.1–5.7 μg/g creatinine) than among children in the composite group (0.6; range, 0.1–2.9 μg/g creatinine) (p < 0.001).\nTable 3 shows the change scores over the follow-up interval for each test score. Of all the change scores evaluated, only two differed significantly between the amalgam and composite treatment groups. On the Number–Letter Memory subtest of the WRAML, the 4-year change score of the amalgam group was significantly more positive than was the change score of the composite group, indicating greater improvement over time in the amalgam group. The 4-year change in the time required to complete Part B of the Trail-Making Test was significantly more negative in the composite group than in the amalgam group, indicating greater improvement over time in the composite group.\nThe results of analyses using surface-years of amalgam or urinary mercury concentration as the exposure metric were consistent with those of the intention-to-treat analyses, providing no evidence of a detrimental effect of amalgam on children’s test scores. The coefficient for surface-years of amalgam was significant for three scores (Picture Memory and Number–Letter Memory of the WRAML and letter fluency), but for all three scores, the sign was positive, indicating that the score improved with increasing exposure to amalgam (Table 4). 
Urinary mercury concentration was not significantly associated with any of the test scores (Table 5).\nThe results of the Kolmogorov-Smirnov tests indicated that the only two scores for which the distributions of changes scores in the amalgam and composite groups differed significantly were two subtests of the WRAML: Finger Windows and Number–Letter Memory (Table 3). On both tests, however, children in the amalgam group showed greater improvement over time than the children in the composite group, with the difference being significant for Number–Letter Memory. The distributions of change scores were not significantly different on the Trail-Making Test Part B (time to complete), the test on which the composite group showed significantly more improvement than the amalgam group.\nAdjustment for neuropsychological examiner did not produce results that were appreciably different (data not shown).\n\nBODY.DISCUSSION:\nThese analyses revealed an absence of consistent differences between the scores of children in the amalgam and composite treatment groups on a battery of neuropsychological tests that assessed a wide range of domains, including intelligence, achievement, language, memory, learning, visual–spatial skills, verbal fluency, fine motor function, problem solving, attention, and executive function. The findings were similar when the dichotomous variable treatment-group assignment was replaced by two continuously distributed indices of exposure, one that combined the amount and duration of amalgam a child received and one that was a biomarker, urinary mercury concentration. Furthermore, no evidence was found to support the hypothesis that a subset of children in the amalgam group suffered substantial harm. The number of significant differences observed was similar to that which might have been expected to occur by chance.\nAlthough neuropsychological deficits associated with amalgam exposure have been reported in several studies of dental professionals (Bittner et al. 1998; Echeverria et al. 1995, 1998; Ngim et al. 1992) and others exposed occupationally to mercury (Rohling and Demakis 2006), our findings are similar to those involving mercury exposure in cohorts drawn from the general population of adults and children (Brownawell et al. 2005). In a cross-sectional study of 550 30- to 49-year-old healthy employed adults, scores on tests of verbal memory, nonverbal memory, attention, psychomotor speed, and fine motor coordination were not significantly associated with any of several exposure indices considered (number of visible amalgam surfaces, number of visible occlusal amalgam surfaces, urinary mercury concentration) (Factor-Litvak et al. 2003). The mean urinary mercury concentration in that cohort of adults, 1.7 μg/g creatinine, was higher than the mean concentration of 0.9 μg/g creatinine among the children in the amalgam treatment group in our trial 5 years after placement of their first amalgam restorations. In a study of 1,663 Vietnam-era veterans, the total number of tooth surfaces with amalgam fillings was unrelated to clinical neurological signs (e.g., tremor, coordination, station, gait, strength, sensation, muscle stretch reflexes, or indices of peripheral neuropathy), although it was associated with vibrotacile sensation in non-diabetic participants (Kingman et al. 2005). 
In a study of 384 German 6-year-olds, 24-hr urinary excretion of mercury, which averaged 0.16 μg, was not significantly related to scores on a variety of tests, including the Vocabulary and Block Design subtests of the WISC and five tests of the computerized Neurobehavioral Evaluation System 2 (pattern comparison, pattern memory, tapping, simple reaction time, continuous performance test) (Walkowiak et al. 1998). In this cohort, some indices of visual contrast sensitivity did decline with increasing urinary mercury excretion, however (Altman et al. 1998).\nOver the course of the follow-up interval, the scores of children in both treatment groups tended to change in the direction of improved performance, even on tests for which scores are standardized for age. Several factors might have contributed to improved performance over time. First, this could represent a type of sampling bias, reflecting the characteristics of families who are motivated to enroll in such a trial and to participate for its full duration. Second, all tests except the WISC-III and WIAT were administered yearly, so the general improvement in scores might reflect the familiarity that children developed with the test materials and expectations. Particularly large improvements tended to be on performance-based tests, such as the WRAVMA pegboard and the Processing Speed composite of the WISC-III, one component of which is Symbol Search, a timed task that involves matching symbols and digits. A substantial improvement was also noted on the WRAML Learning Index, which reflects the rapidity with which a child learns new material, such as sound–symbol pairs, a word list, and the locations of hidden designs. Repeated administration of these tasks, even at yearly intervals, might be expected to result in an increased rate of acquisition of the material.\nAs noted, the dental treatment needs of the children enrolled in the trial were substantial and exceeded those typical of the general population of U.S. children. For example, among 6- to 11-year-old children who participated in the National Health and Nutrition Examination Survey (NHANES) 1999–2002, the prevalence of dental caries in primary teeth was 22%, and the mean numbers of decayed or filled primary teeth and surfaces were 1.7 and 3.7, respectively (Beltran-Aguilar et al. 2005). The prevalence of dental caries in permanent teeth was 20% (mean number of decayed, missing, and filled teeth and surfaces in permanent teeth were 0.1 and 0.4, respectively) (Beltran-Aguilar et al. 2005). Therefore, children assigned to the amalgam group in the NECAT are likely to have experienced greater exposures to mercury vapor from amalgams than do most children in the United States. Given our failure to detect significant differences between the amalgam and composite groups in neuropsychological function, these results provide reassurance that the use of dental amalgam to repair caries is not producing substantial neuropsychological morbidity in the general population of children in the United States.\nThe conclusions must be tempered, first, by a recognition that the follow-up interval of 4–5 years might have been too short to allow for the expression of such deficits. Second, the critical window of children’s greatest vulnerability to elemental mercury might already have passed by the time the children were enrolled in the trial (≥ 6 years of age). 
Given the heightened sensitivity of the fetus to methylmercury, prenatal exposure to mercury vapor, which is known to cross the placenta, warrants increased attention. In the NHANES 1999–2000 survey, among women of child-bearing age, an increase of 10 dental surfaces restored with amalgam was associated with an estimated increase of 1.8 μg/L in urinary mercury concentration (Dye et al. 2005). Mercury level in amniotic fluid is weakly associated with number of amalgam fillings (Luglie et al. 2005). A recent case–control study did not, however, find an increased risk of delivering a low-birth-weight infant among women who had up to 11 amalgam restorations placed during pregnancy (Hujoel et al. 2005). The results of studies of the reproductive outcomes of women with dental workplace exposures have been mixed (Dahl et al. 1999; Elghany et al. 1997; Ericson and Kallen 1989). In some of these studies, distinguishing the potential impact of mercury exposure from the impacts of other workplace exposures, such as to disinfectants containing ethanol and benzene, is difficult. Third, the prevalence of children with enhanced sensitivity to elemental mercury might be too low among the children enrolled in the NECAT for us to have\n\n**Question:** Compared to composite group (0.9 vs. 0.6 μg/g creatinine) what was the result of amalgam group on number of restored surfaces?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13852
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nDental Amalgam Restorations and Children’s Neuropsychological Function: The New England Children’s Amalgam Trial\n\n\n\nABSTRACT.BACKGROUND:\nA concern persists that children’s exposure to mercury vapor from dental amalgams produces neurotoxicity.\n\nABSTRACT.OBJECTIVE:\nOur goal was to compare the neuropsychological function of children, without prior exposure to dental amalgam, whose caries were repaired using either dental amalgam or mercury-free composite materials.\n\nABSTRACT.METHODS:\nWe conducted a randomized controlled trial involving 534 6- to 10-year-old urban and rural children who were assessed yearly for 5 years using a battery of tests of intelligence, achievement, language, memory, learning, visual–spatial skills, verbal fluency, fine motor function, problem solving, attention, and executive function.\n\nABSTRACT.RESULTS:\nAlthough the mean urinary mercury concentration was greater among children in the amalgam group than the composite group (0.9 vs. 0.6 μg/g creatinine), few significant differences were found between the test scores of children in the two groups. The differences found were inconsistent in direction. Analyses using two cumulative exposure indices—surface years of amalgam and urinary mercury concentration—produced similar results.\n\nABSTRACT.CONCLUSIONS:\nExposure to elemental mercury in amalgam at the levels experienced by the children who participated in the trial did not result in significant effects on neuropsychological function within the 5-year follow-up period.\n\nWe previously reported on a randomized clinical trial, the New England Children’s Amalgam Trial (NECAT), in which no significant differences were found, over a 5-year follow-up interval, between the neuropsychological scores of children for whom dental amalgam was used to repair caries and the scores of children for whom mercury-free composite materials were used (Bellinger et al. 2006). The Full-Scale IQ score on the Wechsler Intelligence Scale for Children-Third Edition (WISC-III; Wechsler 1991) was the primary end point, and the General Memory Index (GMI) on the Wide Range Assessment of Memory and Learning (WRAML; Sheslow and Adams 1990), and the Visual Motor Composite (VMC) on the Wide Range Assessment of Visual–Motor Ability (WRAVMA; Adams and Sheslow 1995) were the two secondary end points. Each of these is a global score, derived by combining a child’s performance on tasks that assess somewhat different abilities.\nThe additional analyses reported in this article address three issues. First, if mercury vapor liberated from dental amalgams produces specific rather than general neuropsychological effects and are most likely to be evident on tests that assess specific cognitive domains, global scores such as Full-Scale IQ, GMI, and VMC might be relatively insensitive to important treatment-group differences. 
Even in the absence of treatment-group differences on global test scores, differences in specific cognitive domains could, depending on their nature and severity, represent morbidities with important consequences for children’s health and well-being. Therefore, here we report comparisons of the scores of the amalgam and composite groups on the sub-scales that contribute to Full-Scale IQ, the GMI, and the VMC, as well as scores on a battery of additional, domain-focused, neuropsychological and educational tests.\nSecond, the exposure index used in the primary analyses of the trial was treatment-group assignment. This could have introduced a form of exposure misclassification insofar as the variability in the treatment needs of the children in the amalgam group resulted in the receipt of variable amounts of amalgam and thus in their potential exposure to mercury. Therefore, we repeated the analyses replacing treatment-group assignment with two continuously distributed indices of exposure: surface-years of amalgam and urinary mercury concentration.\nThird, it is possible that only a subset of children experienced adverse neuropsychological effects as the result of exposure to amalgam, either because of behaviors, such as bruxism or frequent gum chewing, that cause enhanced release of mercury (Barregard 2005; Barregard et al. 1995) or because of enhanced sensitivity to mercury. Two recent studies of dental professionals suggest that polymorphisms for brain-derived neurotrophic factor (Echeverria et al. 2005; Heyer et al. 2004) and the coproporphyrinogen oxidase gene (Echeverria et al. 2006) modify the neurotoxicity of elemental mercury. If the prevalence of such enhanced vulnerability to elemental mercury is low or the associated increase in neuropsychological toxicity is modest in magnitude, its impact on the mean scores of children in the amalgam group might not have been sufficiently large to produce significant treatment-group differences. In an attempt to identify the presence of a subgroup of children who are particularly sensitive to amalgam, we compared the distributional characteristics of the scores within the treatment groups.\nBODY.METHODS:\nBODY.STUDY DESIGN AND PARTICIPANTS:\nChildren were eligible if they were 6–10 years of age, fluent in English, had no known prior or existing amalgam restorations, had two or more posterior teeth with dental caries, and did not have a physician-diagnosed psychological, behavioral, neurological, immunosuppressive, or renal disorder (Children’s Amalgam Trial Study Group 2003). Children were recruited from several community dental clinics in the Boston/Cambridge area of Massachusetts, an urban setting, and from a dental clinic in Farmington, Maine, a rural area.\nA total of 5,116 children were screened for eligibility. Eligibility was confirmed for 598 children. At baseline visits, children received a dental examination by a study dentist, X rays, and standard preventive dental care (e.g., cleaning, application of sealants). Other baseline visit activities included the collection of blood and urine samples, anthropometric measurements of height, weight, and body fat, neuropsychological testing of the child, a health interview with the child’s guardian, and neuropsychological testing of the guardian.\nAfter completion of baseline visits, children were randomized to a study treatment arm. Randomization was stratified by geographic location (Boston/Cambridge vs. Farmington) and number of teeth with caries (two to four vs. 
five or more), using randomly permuted blocks within each of the four strata.\nThe NECAT was conducted in accordance with all applicable requirements for the protection of human subjects. All children provided assent and parents provided informed consent. The study protocol was approved by the institutional review boards of the New England Research Institutes, the Forsyth Institute, and the clinics from which children were recruited.\n\nBODY.INTERVENTIONS AND FOLLOW-UP:\nAll children had semiannual dental examinations as well as additional visits required to meet treatment needs. For children in the amalgam arm, a dispersed-phase amalgam was used to restore all posterior teeth with caries at baseline and to repair incident caries during the 5-year trial period. For children in the composite arm, composite material (white filling) was used for all restorations. Following standard clinical practice, for both groups, composite material was used to repair caries in the front teeth, and stainless steel crowns were used to restore primary teeth with extensive lesions that could not otherwise be restored. The choices of dental materials and techniques were standardized across sites and dentists.\nUrine samples were collected annually and analyzed for elemental mercury using cold vapor atomic absorption. Values were expressed as micrograms per gram creatinine. The analyses reported use only urinary mercury concentrations in samples collected at 3, 4, and 5 years of follow-up. After 1 February 2000, the detection limit, which had been 1.5 ng/mL, was reduced to 0.45 ng/mL as a result of increasing the volume of sample analyzed for each child. Samples with a mercury concentration below the detection limit were assigned a value of 0.45/√2 (Bellinger et al. 2006).\nParticipants and dentists could not be blinded to treatment assignment, but all individuals who collected outcome data (e.g., neuropsychological tests) or analyzed specimens (e.g., for mercury) were blinded to children’s treatment assignments.\n\nBODY.NEUROPSYCHOLOGICAL ASSESSMENTS:\nAt baseline, before randomization and the receipt of any dental treatments, children participated in two 3-hr neuropsychological test sessions. At the first session, the WISC-III (Wechsler 1991) and the Wechsler Individual Achievement Test (WIAT) (Psychological Corporation 1992) were administered. These tests were again administered at 3 and 5 years after baseline. The second baseline session consisted of a battery of domain-focused tests. This test battery, which was again administered at 1, 2, and 4 years after baseline, included the WRAML (Sheslow and Adams 1990), the WRAVMA (Adams and Sheslow 1995), the Trail-Making Test (Spreen and Strauss 1991), finger tapping (the WPS Electronic Tapping Test; Western Psychological Services, Los Angeles, CA), ordered and unordered verbal cancellation (Mesulam 1985), category fluency (McCarthy 1972), the Controlled Oral Word Association Test (letter fluency) (Spreen and Strauss 1991), simple visual reaction time (the Standard Reaction Timer; Software Science, Cincinnati, OH), the Stroop Color–Word Interference Test (Trenerry et al. 1989), and the Wisconsin Card Sorting Test (Heaton et al. 1993).\nA total of 14 testers were used at the Boston/Cambridge site and five testers at the Farmington site. Quality control of the assessments was assured by having all examiners trained and certified by one supervising psychologist (D.C.B.) 
before conducting assessments of children enrolled in the trial, and monitored over the course of data collection. Each testing session was completely rescored by a second individual and errors were corrected. A variety of computerized algorithms were used to check the entered data for internal consistency.\n\nBODY.SAMPLE SIZE DETERMINATION:\nThe trial was designed to achieve 80% power to detect a 3-point difference between treatment arms in 5-year change in Full-Scale IQ score, adjusted for baseline IQ score and randomization stratum. Assuming a retention rate of 75% over the 5-year follow-up period, the recruitment goal was 250 children per treatment arm, for a total sample size of 500 children.\n\nBODY.STATISTICAL ANALYSIS:\nWe used intention-to-treat analyses, using analysis of covariance, to compare the amalgam and composite groups in terms of the changes, over 5 years, in scores on the WISC-III and WIAT and the changes, over 4 years, in scores on the domain-focused tests. These analyses thus indicated whether the central tendencies of the distributions of change scores differed in the amalgam and composite groups. Adjustments were made for baseline covariates: test score, randomization stratum, age, sex, socioeconomic status, hair mercury, and blood lead level. We calculated socioeconomic status using the method developed by Green (1970). Hair mercury was included to control for methylmercury, a form of mercury that is known to be a developmental neurotoxicant (Weiss 2006) but acquired primarily by consumption of contaminated seafood. Elevated blood lead level is a well-known developmental risk factor, with an increased prevalence among children who are socioeconomically disadvantaged (Bellinger 2006).\nWe evaluated children’s neuropsychological test scores in relation to two continuously distributed indices of exposure. The first index was surface-years of amalgam (number of amalgam surfaces weighted by number of years present), calculated from the information contained in dental clinic records regarding dates of amalgam placement, the number of tooth surfaces involved in the restoration, the timing of loss of primary teeth containing amalgam restorations, and the like. The second index was urinary mercury concentration. Scores on the WISC-III and WIAT, both of which were administered at year 5 of follow-up, were evaluated in relation to the mean of available urinary mercury concentrations at years 3, 4, and 5. Scores on the other tests, which were administered for the final time at year 4 of follow-up, were evaluated in relation to the mean of available urinary mercury concentrations at years 3 and 4 of follow-up. We evaluated the associations between these indices of exposure and children’s neuropsychological test scores using analysis of covariance, adjusting for the same set of baseline covariates used in the intention-to-treat analyses.\nWe conducted the analyses using the Kolmogorov-Smirnov test (Stuart and Ord 1991) to compare characteristics other than the central tendency of the distributions of the change scores in the treatment groups. When the results indicated that the change scores of children in the two groups did not come from the same distribution, we determined whether this was attributed to differences between treatment groups in the percentages of children with change scores indicating substantial deterioration of performance over the follow-up interval. 
Of particular interest was whether, in the absence of a treatment-group difference in mean change score, a greater percentage of children in the amalgam group than in the composite group demonstrated such deterioration.\nTo evaluate the impact of interexaminer variability on the results, we repeated the intention-to-treat analyses including adjustment for a set of indicator variables representing the neuropsychological examiners.\n\n\nBODY.RESULTS:\nTable 1 shows that the treatment groups did not differ significantly in terms of age, race, household income, education of primary caregiver, Full-Scale IQ, hair and urinary mercury concentrations, blood lead level, and number of decayed tooth surfaces. Females outnumbered males in the composite arm. Children were primarily non-Hispanic white (62%), with non-Hispanic blacks making up an additional 19% of the sample. The mean number of total decayed tooth surfaces at baseline was 9.5, with 1.7 of the surfaces being in permanent teeth. Slightly more than half of the children (54%) had five or more teeth with caries that required restoration, with the rest having two to four carious teeth. Children from the Boston/Cambridge area tended to have more caries than did children from Farmington (10.3 vs. 8.6 carious surfaces, respectively).\nChildren continued to have dental treatment needs over the course of the 5-year follow-up period, averaging approximately one new filled surface per year. The treatment needs were similar in the treatment groups (Table 2). At the end of the follow-up period, the number of restored surfaces in place ranged from 0 to 36 in both treatment groups, and the mean values did not differ significantly (p = 0.16). The amalgam and composite groups also did not differ significantly in the cumulative number of restored surfaces over the trial (p = 0.10). In the amalgam group, 79% of the surfaces had been restored with amalgam. At the end of the 5-year follow-up period, the mean urinary mercury concentration was significantly greater among children in the amalgam group (0.9; range, 0.1–5.7 μg/g creatinine) than among children in the composite group (0.6; range, 0.1–2.9 μg/g creatinine) (p < 0.001).\nTable 3 shows the change scores over the follow-up interval for each test score. Of all the change scores evaluated, only two differed significantly between the amalgam and composite treatment groups. On the Number–Letter Memory subtest of the WRAML, the 4-year change score of the amalgam group was significantly more positive than was the change score of the composite group, indicating greater improvement over time in the amalgam group. The 4-year change in the time required to complete Part B of the Trail-Making Test was significantly more negative in the composite group than in the amalgam group, indicating greater improvement over time in the composite group.\nThe results of analyses using surface-years of amalgam or urinary mercury concentration as the exposure metric were consistent with those of the intention-to-treat analyses, providing no evidence of a detrimental effect of amalgam on children’s test scores. The coefficient for surface-years of amalgam was significant for three scores (Picture Memory and Number–Letter Memory of the WRAML and letter fluency), but for all three scores, the sign was positive, indicating that the score improved with increasing exposure to amalgam (Table 4). 
Urinary mercury concentration was not significantly associated with any of the test scores (Table 5).\nThe results of the Kolmogorov-Smirnov tests indicated that the only two scores for which the distributions of changes scores in the amalgam and composite groups differed significantly were two subtests of the WRAML: Finger Windows and Number–Letter Memory (Table 3). On both tests, however, children in the amalgam group showed greater improvement over time than the children in the composite group, with the difference being significant for Number–Letter Memory. The distributions of change scores were not significantly different on the Trail-Making Test Part B (time to complete), the test on which the composite group showed significantly more improvement than the amalgam group.\nAdjustment for neuropsychological examiner did not produce results that were appreciably different (data not shown).\n\nBODY.DISCUSSION:\nThese analyses revealed an absence of consistent differences between the scores of children in the amalgam and composite treatment groups on a battery of neuropsychological tests that assessed a wide range of domains, including intelligence, achievement, language, memory, learning, visual–spatial skills, verbal fluency, fine motor function, problem solving, attention, and executive function. The findings were similar when the dichotomous variable treatment-group assignment was replaced by two continuously distributed indices of exposure, one that combined the amount and duration of amalgam a child received and one that was a biomarker, urinary mercury concentration. Furthermore, no evidence was found to support the hypothesis that a subset of children in the amalgam group suffered substantial harm. The number of significant differences observed was similar to that which might have been expected to occur by chance.\nAlthough neuropsychological deficits associated with amalgam exposure have been reported in several studies of dental professionals (Bittner et al. 1998; Echeverria et al. 1995, 1998; Ngim et al. 1992) and others exposed occupationally to mercury (Rohling and Demakis 2006), our findings are similar to those involving mercury exposure in cohorts drawn from the general population of adults and children (Brownawell et al. 2005). In a cross-sectional study of 550 30- to 49-year-old healthy employed adults, scores on tests of verbal memory, nonverbal memory, attention, psychomotor speed, and fine motor coordination were not significantly associated with any of several exposure indices considered (number of visible amalgam surfaces, number of visible occlusal amalgam surfaces, urinary mercury concentration) (Factor-Litvak et al. 2003). The mean urinary mercury concentration in that cohort of adults, 1.7 μg/g creatinine, was higher than the mean concentration of 0.9 μg/g creatinine among the children in the amalgam treatment group in our trial 5 years after placement of their first amalgam restorations. In a study of 1,663 Vietnam-era veterans, the total number of tooth surfaces with amalgam fillings was unrelated to clinical neurological signs (e.g., tremor, coordination, station, gait, strength, sensation, muscle stretch reflexes, or indices of peripheral neuropathy), although it was associated with vibrotacile sensation in non-diabetic participants (Kingman et al. 2005). 
In a study of 384 German 6-year-olds, 24-hr urinary excretion of mercury, which averaged 0.16 μg, was not significantly related to scores on a variety of tests, including the Vocabulary and Block Design subtests of the WISC and five tests of the computerized Neurobehavioral Evaluation System 2 (pattern comparison, pattern memory, tapping, simple reaction time, continuous performance test) (Walkowiak et al. 1998). In this cohort, some indices of visual contrast sensitivity did decline with increasing urinary mercury excretion, however (Altman et al. 1998).\nOver the course of the follow-up interval, the scores of children in both treatment groups tended to change in the direction of improved performance, even on tests for which scores are standardized for age. Several factors might have contributed to improved performance over time. First, this could represent a type of sampling bias, reflecting the characteristics of families who are motivated to enroll in such a trial and to participate for its full duration. Second, all tests except the WISC-III and WIAT were administered yearly, so the general improvement in scores might reflect the familiarity that children developed with the test materials and expectations. Particularly large improvements tended to be on performance-based tests, such as the WRAVMA pegboard and the Processing Speed composite of the WISC-III, one component of which is Symbol Search, a timed task that involves matching symbols and digits. A substantial improvement was also noted on the WRAML Learning Index, which reflects the rapidity with which a child learns new material, such as sound–symbol pairs, a word list, and the locations of hidden designs. Repeated administration of these tasks, even at yearly intervals, might be expected to result in an increased rate of acquisition of the material.\nAs noted, the dental treatment needs of the children enrolled in the trial were substantial and exceeded those typical of the general population of U.S. children. For example, among 6- to 11-year-old children who participated in the National Health and Nutrition Examination Survey (NHANES) 1999–2002, the prevalence of dental caries in primary teeth was 22%, and the mean numbers of decayed or filled primary teeth and surfaces were 1.7 and 3.7, respectively (Beltran-Aguilar et al. 2005). The prevalence of dental caries in permanent teeth was 20% (mean number of decayed, missing, and filled teeth and surfaces in permanent teeth were 0.1 and 0.4, respectively) (Beltran-Aguilar et al. 2005). Therefore, children assigned to the amalgam group in the NECAT are likely to have experienced greater exposures to mercury vapor from amalgams than do most children in the United States. Given our failure to detect significant differences between the amalgam and composite groups in neuropsychological function, these results provide reassurance that the use of dental amalgam to repair caries is not producing substantial neuropsychological morbidity in the general population of children in the United States.\nThe conclusions must be tempered, first, by a recognition that the follow-up interval of 4–5 years might have been too short to allow for the expression of such deficits. Second, the critical window of children’s greatest vulnerability to elemental mercury might already have passed by the time the children were enrolled in the trial (≥ 6 years of age). 
Given the heightened sensitivity of the fetus to methylmercury, prenatal exposure to mercury vapor, which is known to cross the placenta, warrants increased attention. In the NHANES 1999–2000 survey, among women of child-bearing age, an increase of 10 dental surfaces restored with amalgam was associated with an estimated increase of 1.8 μg/L in urinary mercury concentration (Dye et al. 2005). Mercury level in amniotic fluid is weakly associated with number of amalgam fillings (Luglie et al. 2005). A recent case–control study did not, however, find an increased risk of delivering a low-birth-weight infant among women who had up to 11 amalgam restorations placed during pregnancy (Hujoel et al. 2005). The results of studies of the reproductive outcomes of women with dental workplace exposures have been mixed (Dahl et al. 1999; Elghany et al. 1997; Ericson and Kallen 1989). In some of these studies, distinguishing the potential impact of mercury exposure from the impacts of other workplace exposures, such as to disinfectants containing ethanol and benzene, is difficult. Third, the prevalence of children with enhanced sensitivity to elemental mercury might be too low among the children enrolled in the NECAT for us to have\n\n**Question:** Compared to composite group (0.9 vs. 0.6 μg/g creatinine) what was the result of amalgam group on cumulative number of restored surfaces?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13853
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nDental Amalgam Restorations and Children’s Neuropsychological Function: The New England Children’s Amalgam Trial\n\n\n\nABSTRACT.BACKGROUND:\nA concern persists that children’s exposure to mercury vapor from dental amalgams produces neurotoxicity.\n\nABSTRACT.OBJECTIVE:\nOur goal was to compare the neuropsychological function of children, without prior exposure to dental amalgam, whose caries were repaired using either dental amalgam or mercury-free composite materials.\n\nABSTRACT.METHODS:\nWe conducted a randomized controlled trial involving 534 6- to 10-year-old urban and rural children who were assessed yearly for 5 years using a battery of tests of intelligence, achievement, language, memory, learning, visual–spatial skills, verbal fluency, fine motor function, problem solving, attention, and executive function.\n\nABSTRACT.RESULTS:\nAlthough the mean urinary mercury concentration was greater among children in the amalgam group than the composite group (0.9 vs. 0.6 μg/g creatinine), few significant differences were found between the test scores of children in the two groups. The differences found were inconsistent in direction. Analyses using two cumulative exposure indices—surface years of amalgam and urinary mercury concentration—produced similar results.\n\nABSTRACT.CONCLUSIONS:\nExposure to elemental mercury in amalgam at the levels experienced by the children who participated in the trial did not result in significant effects on neuropsychological function within the 5-year follow-up period.\n\nWe previously reported on a randomized clinical trial, the New England Children’s Amalgam Trial (NECAT), in which no significant differences were found, over a 5-year follow-up interval, between the neuropsychological scores of children for whom dental amalgam was used to repair caries and the scores of children for whom mercury-free composite materials were used (Bellinger et al. 2006). The Full-Scale IQ score on the Wechsler Intelligence Scale for Children-Third Edition (WISC-III; Wechsler 1991) was the primary end point, and the General Memory Index (GMI) on the Wide Range Assessment of Memory and Learning (WRAML; Sheslow and Adams 1990), and the Visual Motor Composite (VMC) on the Wide Range Assessment of Visual–Motor Ability (WRAVMA; Adams and Sheslow 1995) were the two secondary end points. Each of these is a global score, derived by combining a child’s performance on tasks that assess somewhat different abilities.\nThe additional analyses reported in this article address three issues. First, if mercury vapor liberated from dental amalgams produces specific rather than general neuropsychological effects and are most likely to be evident on tests that assess specific cognitive domains, global scores such as Full-Scale IQ, GMI, and VMC might be relatively insensitive to important treatment-group differences. 
Even in the absence of treatment-group differences on global test scores, differences in specific cognitive domains could, depending on their nature and severity, represent morbidities with important consequences for children’s health and well-being. Therefore, here we report comparisons of the scores of the amalgam and composite groups on the sub-scales that contribute to Full-Scale IQ, the GMI, and the VMC, as well as scores on a battery of additional, domain-focused, neuropsychological and educational tests.\nSecond, the exposure index used in the primary analyses of the trial was treatment-group assignment. This could have introduced a form of exposure misclassification insofar as the variability in the treatment needs of the children in the amalgam group resulted in the receipt of variable amounts of amalgam and thus in their potential exposure to mercury. Therefore, we repeated the analyses replacing treatment-group assignment with two continuously distributed indices of exposure: surface-years of amalgam and urinary mercury concentration.\nThird, it is possible that only a subset of children experienced adverse neuropsychological effects as the result of exposure to amalgam, either because of behaviors, such as bruxism or frequent gum chewing, that cause enhanced release of mercury (Barregard 2005; Barregard et al. 1995) or because of enhanced sensitivity to mercury. Two recent studies of dental professionals suggest that polymorphisms for brain-derived neurotrophic factor (Echeverria et al. 2005; Heyer et al. 2004) and the coproporphyrinogen oxidase gene (Echeverria et al. 2006) modify the neurotoxicity of elemental mercury. If the prevalence of such enhanced vulnerability to elemental mercury is low or the associated increase in neuropsychological toxicity is modest in magnitude, its impact on the mean scores of children in the amalgam group might not have been sufficiently large to produce significant treatment-group differences. In an attempt to identify the presence of a subgroup of children who are particularly sensitive to amalgam, we compared the distributional characteristics of the scores within the treatment groups.\nBODY.METHODS:\nBODY.STUDY DESIGN AND PARTICIPANTS:\nChildren were eligible if they were 6–10 years of age, fluent in English, had no known prior or existing amalgam restorations, had two or more posterior teeth with dental caries, and did not have a physician-diagnosed psychological, behavioral, neurological, immunosuppressive, or renal disorder (Children’s Amalgam Trial Study Group 2003). Children were recruited from several community dental clinics in the Boston/Cambridge area of Massachusetts, an urban setting, and from a dental clinic in Farmington, Maine, a rural area.\nA total of 5,116 children were screened for eligibility. Eligibility was confirmed for 598 children. At baseline visits, children received a dental examination by a study dentist, X rays, and standard preventive dental care (e.g., cleaning, application of sealants). Other baseline visit activities included the collection of blood and urine samples, anthropometric measurements of height, weight, and body fat, neuropsychological testing of the child, a health interview with the child’s guardian, and neuropsychological testing of the guardian.\nAfter completion of baseline visits, children were randomized to a study treatment arm. Randomization was stratified by geographic location (Boston/Cambridge vs. Farmington) and number of teeth with caries (two to four vs. 
five or more), using randomly permuted blocks within each of the four strata.\nThe NECAT was conducted in accordance with all applicable requirements for the protection of human subjects. All children provided assent and parents provided informed consent. The study protocol was approved by the institutional review boards of the New England Research Institutes, the Forsyth Institute, and the clinics from which children were recruited.\n\nBODY.INTERVENTIONS AND FOLLOW-UP:\nAll children had semiannual dental examinations as well as additional visits required to meet treatment needs. For children in the amalgam arm, a dispersed-phase amalgam was used to restore all posterior teeth with caries at baseline and to repair incident caries during the 5-year trial period. For children in the composite arm, composite material (white filling) was used for all restorations. Following standard clinical practice, for both groups, composite material was used to repair caries in the front teeth, and stainless steel crowns were used to restore primary teeth with extensive lesions that could not otherwise be restored. The choices of dental materials and techniques were standardized across sites and dentists.\nUrine samples were collected annually and analyzed for elemental mercury using cold vapor atomic absorption. Values were expressed as micrograms per gram creatinine. The analyses reported use only urinary mercury concentrations in samples collected at 3, 4, and 5 years of follow-up. After 1 February 2000, the detection limit, which had been 1.5 ng/mL, was reduced to 0.45 ng/mL as a result of increasing the volume of sample analyzed for each child. Samples with a mercury concentration below the detection limit were assigned a value of 0.45/√2 (Bellinger et al. 2006).\nParticipants and dentists could not be blinded to treatment assignment, but all individuals who collected outcome data (e.g., neuropsychological tests) or analyzed specimens (e.g., for mercury) were blinded to children’s treatment assignments.\n\nBODY.NEUROPSYCHOLOGICAL ASSESSMENTS:\nAt baseline, before randomization and the receipt of any dental treatments, children participated in two 3-hr neuropsychological test sessions. At the first session, the WISC-III (Wechsler 1991) and the Wechsler Individual Achievement Test (WIAT) (Psychological Corporation 1992) were administered. These tests were again administered at 3 and 5 years after baseline. The second baseline session consisted of a battery of domain-focused tests. This test battery, which was again administered at 1, 2, and 4 years after baseline, included the WRAML (Sheslow and Adams 1990), the WRAVMA (Adams and Sheslow 1995), the Trail-Making Test (Spreen and Strauss 1991), finger tapping (the WPS Electronic Tapping Test; Western Psychological Services, Los Angeles, CA), ordered and unordered verbal cancellation (Mesulam 1985), category fluency (McCarthy 1972), the Controlled Oral Word Association Test (letter fluency) (Spreen and Strauss 1991), simple visual reaction time (the Standard Reaction Timer; Software Science, Cincinnati, OH), the Stroop Color–Word Interference Test (Trenerry et al. 1989), and the Wisconsin Card Sorting Test (Heaton et al. 1993).\nA total of 14 testers were used at the Boston/Cambridge site and five testers at the Farmington site. Quality control of the assessments was assured by having all examiners trained and certified by one supervising psychologist (D.C.B.) 
before conducting assessments of children enrolled in the trial, and monitored over the course of data collection. Each testing session was completely rescored by a second individual and errors were corrected. A variety of computerized algorithms were used to check the entered data for internal consistency.\n\nBODY.SAMPLE SIZE DETERMINATION:\nThe trial was designed to achieve 80% power to detect a 3-point difference between treatment arms in 5-year change in Full-Scale IQ score, adjusted for baseline IQ score and randomization stratum. Assuming a retention rate of 75% over the 5-year follow-up period, the recruitment goal was 250 children per treatment arm, for a total sample size of 500 children.\n\nBODY.STATISTICAL ANALYSIS:\nWe used intention-to-treat analyses, using analysis of covariance, to compare the amalgam and composite groups in terms of the changes, over 5 years, in scores on the WISC-III and WIAT and the changes, over 4 years, in scores on the domain-focused tests. These analyses thus indicated whether the central tendencies of the distributions of change scores differed in the amalgam and composite groups. Adjustments were made for baseline covariates: test score, randomization stratum, age, sex, socioeconomic status, hair mercury, and blood lead level. We calculated socioeconomic status using the method developed by Green (1970). Hair mercury was included to control for methylmercury, a form of mercury that is known to be a developmental neurotoxicant (Weiss 2006) but acquired primarily by consumption of contaminated seafood. Elevated blood lead level is a well-known developmental risk factor, with an increased prevalence among children who are socioeconomically disadvantaged (Bellinger 2006).\nWe evaluated children’s neuropsychological test scores in relation to two continuously distributed indices of exposure. The first index was surface-years of amalgam (number of amalgam surfaces weighted by number of years present), calculated from the information contained in dental clinic records regarding dates of amalgam placement, the number of tooth surfaces involved in the restoration, the timing of loss of primary teeth containing amalgam restorations, and the like. The second index was urinary mercury concentration. Scores on the WISC-III and WIAT, both of which were administered at year 5 of follow-up, were evaluated in relation to the mean of available urinary mercury concentrations at years 3, 4, and 5. Scores on the other tests, which were administered for the final time at year 4 of follow-up, were evaluated in relation to the mean of available urinary mercury concentrations at years 3 and 4 of follow-up. We evaluated the associations between these indices of exposure and children’s neuropsychological test scores using analysis of covariance, adjusting for the same set of baseline covariates used in the intention-to-treat analyses.\nWe conducted the analyses using the Kolmogorov-Smirnov test (Stuart and Ord 1991) to compare characteristics other than the central tendency of the distributions of the change scores in the treatment groups. When the results indicated that the change scores of children in the two groups did not come from the same distribution, we determined whether this was attributed to differences between treatment groups in the percentages of children with change scores indicating substantial deterioration of performance over the follow-up interval. 
Of particular interest was whether, in the absence of a treatment-group difference in mean change score, a greater percentage of children in the amalgam group than in the composite group demonstrated such deterioration.\nTo evaluate the impact of interexaminer variability on the results, we repeated the intention-to-treat analyses including adjustment for a set of indicator variables representing the neuropsychological examiners.\n\n\nBODY.RESULTS:\nTable 1 shows that the treatment groups did not differ significantly in terms of age, race, household income, education of primary caregiver, Full-Scale IQ, hair and urinary mercury concentrations, blood lead level, and number of decayed tooth surfaces. Females outnumbered males in the composite arm. Children were primarily non-Hispanic white (62%), with non-Hispanic blacks making up an additional 19% of the sample. The mean number of total decayed tooth surfaces at baseline was 9.5, with 1.7 of the surfaces being in permanent teeth. Slightly more than half of the children (54%) had five or more teeth with caries that required restoration, with the rest having two to four carious teeth. Children from the Boston/Cambridge area tended to have more caries than did children from Farmington (10.3 vs. 8.6 carious surfaces, respectively).\nChildren continued to have dental treatment needs over the course of the 5-year follow-up period, averaging approximately one new filled surface per year. The treatment needs were similar in the treatment groups (Table 2). At the end of the follow-up period, the number of restored surfaces in place ranged from 0 to 36 in both treatment groups, and the mean values did not differ significantly (p = 0.16). The amalgam and composite groups also did not differ significantly in the cumulative number of restored surfaces over the trial (p = 0.10). In the amalgam group, 79% of the surfaces had been restored with amalgam. At the end of the 5-year follow-up period, the mean urinary mercury concentration was significantly greater among children in the amalgam group (0.9; range, 0.1–5.7 μg/g creatinine) than among children in the composite group (0.6; range, 0.1–2.9 μg/g creatinine) (p < 0.001).\nTable 3 shows the change scores over the follow-up interval for each test score. Of all the change scores evaluated, only two differed significantly between the amalgam and composite treatment groups. On the Number–Letter Memory subtest of the WRAML, the 4-year change score of the amalgam group was significantly more positive than was the change score of the composite group, indicating greater improvement over time in the amalgam group. The 4-year change in the time required to complete Part B of the Trail-Making Test was significantly more negative in the composite group than in the amalgam group, indicating greater improvement over time in the composite group.\nThe results of analyses using surface-years of amalgam or urinary mercury concentration as the exposure metric were consistent with those of the intention-to-treat analyses, providing no evidence of a detrimental effect of amalgam on children’s test scores. The coefficient for surface-years of amalgam was significant for three scores (Picture Memory and Number–Letter Memory of the WRAML and letter fluency), but for all three scores, the sign was positive, indicating that the score improved with increasing exposure to amalgam (Table 4). 
Urinary mercury concentration was not significantly associated with any of the test scores (Table 5).\nThe results of the Kolmogorov-Smirnov tests indicated that the only two scores for which the distributions of changes scores in the amalgam and composite groups differed significantly were two subtests of the WRAML: Finger Windows and Number–Letter Memory (Table 3). On both tests, however, children in the amalgam group showed greater improvement over time than the children in the composite group, with the difference being significant for Number–Letter Memory. The distributions of change scores were not significantly different on the Trail-Making Test Part B (time to complete), the test on which the composite group showed significantly more improvement than the amalgam group.\nAdjustment for neuropsychological examiner did not produce results that were appreciably different (data not shown).\n\nBODY.DISCUSSION:\nThese analyses revealed an absence of consistent differences between the scores of children in the amalgam and composite treatment groups on a battery of neuropsychological tests that assessed a wide range of domains, including intelligence, achievement, language, memory, learning, visual–spatial skills, verbal fluency, fine motor function, problem solving, attention, and executive function. The findings were similar when the dichotomous variable treatment-group assignment was replaced by two continuously distributed indices of exposure, one that combined the amount and duration of amalgam a child received and one that was a biomarker, urinary mercury concentration. Furthermore, no evidence was found to support the hypothesis that a subset of children in the amalgam group suffered substantial harm. The number of significant differences observed was similar to that which might have been expected to occur by chance.\nAlthough neuropsychological deficits associated with amalgam exposure have been reported in several studies of dental professionals (Bittner et al. 1998; Echeverria et al. 1995, 1998; Ngim et al. 1992) and others exposed occupationally to mercury (Rohling and Demakis 2006), our findings are similar to those involving mercury exposure in cohorts drawn from the general population of adults and children (Brownawell et al. 2005). In a cross-sectional study of 550 30- to 49-year-old healthy employed adults, scores on tests of verbal memory, nonverbal memory, attention, psychomotor speed, and fine motor coordination were not significantly associated with any of several exposure indices considered (number of visible amalgam surfaces, number of visible occlusal amalgam surfaces, urinary mercury concentration) (Factor-Litvak et al. 2003). The mean urinary mercury concentration in that cohort of adults, 1.7 μg/g creatinine, was higher than the mean concentration of 0.9 μg/g creatinine among the children in the amalgam treatment group in our trial 5 years after placement of their first amalgam restorations. In a study of 1,663 Vietnam-era veterans, the total number of tooth surfaces with amalgam fillings was unrelated to clinical neurological signs (e.g., tremor, coordination, station, gait, strength, sensation, muscle stretch reflexes, or indices of peripheral neuropathy), although it was associated with vibrotacile sensation in non-diabetic participants (Kingman et al. 2005). 
In a study of 384 German 6-year-olds, 24-hr urinary excretion of mercury, which averaged 0.16 μg, was not significantly related to scores on a variety of tests, including the Vocabulary and Block Design subtests of the WISC and five tests of the computerized Neurobehavioral Evaluation System 2 (pattern comparison, pattern memory, tapping, simple reaction time, continuous performance test) (Walkowiak et al. 1998). In this cohort, some indices of visual contrast sensitivity did decline with increasing urinary mercury excretion, however (Altman et al. 1998).\nOver the course of the follow-up interval, the scores of children in both treatment groups tended to change in the direction of improved performance, even on tests for which scores are standardized for age. Several factors might have contributed to improved performance over time. First, this could represent a type of sampling bias, reflecting the characteristics of families who are motivated to enroll in such a trial and to participate for its full duration. Second, all tests except the WISC-III and WIAT were administered yearly, so the general improvement in scores might reflect the familiarity that children developed with the test materials and expectations. Particularly large improvements tended to be on performance-based tests, such as the WRAVMA pegboard and the Processing Speed composite of the WISC-III, one component of which is Symbol Search, a timed task that involves matching symbols and digits. A substantial improvement was also noted on the WRAML Learning Index, which reflects the rapidity with which a child learns new material, such as sound–symbol pairs, a word list, and the locations of hidden designs. Repeated administration of these tasks, even at yearly intervals, might be expected to result in an increased rate of acquisition of the material.\nAs noted, the dental treatment needs of the children enrolled in the trial were substantial and exceeded those typical of the general population of U.S. children. For example, among 6- to 11-year-old children who participated in the National Health and Nutrition Examination Survey (NHANES) 1999–2002, the prevalence of dental caries in primary teeth was 22%, and the mean numbers of decayed or filled primary teeth and surfaces were 1.7 and 3.7, respectively (Beltran-Aguilar et al. 2005). The prevalence of dental caries in permanent teeth was 20% (mean number of decayed, missing, and filled teeth and surfaces in permanent teeth were 0.1 and 0.4, respectively) (Beltran-Aguilar et al. 2005). Therefore, children assigned to the amalgam group in the NECAT are likely to have experienced greater exposures to mercury vapor from amalgams than do most children in the United States. Given our failure to detect significant differences between the amalgam and composite groups in neuropsychological function, these results provide reassurance that the use of dental amalgam to repair caries is not producing substantial neuropsychological morbidity in the general population of children in the United States.\nThe conclusions must be tempered, first, by a recognition that the follow-up interval of 4–5 years might have been too short to allow for the expression of such deficits. Second, the critical window of children’s greatest vulnerability to elemental mercury might already have passed by the time the children were enrolled in the trial (≥ 6 years of age). 
Given the heightened sensitivity of the fetus to methylmercury, prenatal exposure to mercury vapor, which is known to cross the placenta, warrants increased attention. In the NHANES 1999–2000 survey, among women of child-bearing age, an increase of 10 dental surfaces restored with amalgam was associated with an estimated increase of 1.8 μg/L in urinary mercury concentration (Dye et al. 2005). Mercury level in amniotic fluid is weakly associated with number of amalgam fillings (Luglie et al. 2005). A recent case–control study did not, however, find an increased risk of delivering a low-birth-weight infant among women who had up to 11 amalgam restorations placed during pregnancy (Hujoel et al. 2005). The results of studies of the reproductive outcomes of women with dental workplace exposures have been mixed (Dahl et al. 1999; Elghany et al. 1997; Ericson and Kallen 1989). In some of these studies, distinguishing the potential impact of mercury exposure from the impacts of other workplace exposures, such as to disinfectants containing ethanol and benzene, is difficult. Third, the prevalence of children with enhanced sensitivity to elemental mercury might be too low among the children enrolled in the NECAT for us to have\n\n**Question:** Compared to composite group 0.6 μg/g what was the result of amalgam group 0.9 on mean urinary mercury concentration?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13854
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffects of Guideline-Based Training on the Quality of Formal Ontologies: A Randomized Controlled TrialGuideline-Based Training on Quality of Ontologies\n\n\n\nABSTRACT.BACKGROUND:\nThe importance of ontologies in the biomedical domain is generally recognized. However, their quality is often too poor for large-scale use in critical applications, at least partially due to insufficient training of ontology developers.\n\nABSTRACT.OBJECTIVE:\nTo show the efficacy of guideline-based ontology development training on the performance of ontology developers. The hypothesis was that students who received training on top-level ontologies and design patterns perform better than those who only received training in the basic principles of formal ontology engineering.\n\nABSTRACT.METHODS:\nA curriculum was implemented based on a guideline for ontology design. A randomized controlled trial on the efficacy of this curriculum was performed with 24 students from bioinformatics and related fields. After joint training on the fundamentals of ontology development the students were randomly allocated to two groups. During the intervention, each group received training on different topics in ontology development. In the assessment phase, all students were asked to solve modeling problems on topics taught differentially in the intervention phase. Primary outcome was the similarity of the students’ ontology artefacts compared with gold standard ontologies developed by the authors before the experiment; secondary outcome was the intra-group similarity of group members’ ontologies.\n\nABSTRACT.RESULTS:\nThe experiment showed no significant effect of the guideline-based training on the performance of ontology developers (a) the ontologies developed after specific training were only slightly but not significantly closer to the gold standard ontologies than the ontologies developed without prior specific training; (b) although significant differences for certain ontologies were detected, the intra-group similarity was not consistently influenced in one direction by the differential training.\n\nABSTRACT.CONCLUSION:\nMethodologically limited, this study cannot be interpreted as a general failure of a guideline-based approach to ontology development. Further research is needed to increase insight into whether specific development guidelines and practices in ontology design are effective.\n\nBODY.INTRODUCTION:\nFormal ontologies are software artefacts that represent theories which attempt to give precise mathematical formulations of the properties and relations of certain entities [1]. In the biomedical domain, formal ontologies have been promoted as a key resource to enable knowledge management. Typical applications are semantic annotations of experimental data (e.g. 
high-throughput data from bioassays), information extraction applied to clinical documents, as well as the integration of heterogeneous databases and services [2]–[5]. A large number of biomedical ontologies have been created for a wide range of subject matters, and many of them have successfully been used, particularly in genetics and proteomics [6], [7]. In the meantime the support of formal ontology as an engineering discipline has been consolidated, and repositories for the standardized access to ontologies have been made available [8]–[11].\nHowever, the use and re-use of ontologies is often limited due to insufficient representational and formal quality. A variety of errors in ontologies and their consequences have been described [12]–[18]. Principal causes for these problems are the complexity of ontology engineering together with the limited availability of trained ontology developers.\nA skilled ontology developer should have practical knowledge in four key areas. Potentially, the most important knowledge source required for successful ontology engineering is the domain itself which is to be represented. Without a deep and sound understanding of the domain entities and the relations between them, ontology developers have nothing they can even start with. For instance, they will just fail when they have to face the task of building a correct taxonomic hierarchy of classes of things in an unknown domain, even before it comes to the design of more complex structures like partonomies, i.e. models that relate parts and wholes. Domain entities like objects, processes, qualities, functions etc., together with the relations that connect them can only be represented after acquiring a thorough understanding of the domain. Even though, there is a considerable risk to design formally correct ontologies which however miss the very point the ontology developer intended to be represented, thus ending up in a bad representation of reality.\nThe second requirement for successful ontology development is the mastery of the formalism used, i.e. the logical expression language, with detailed knowledge on how the language is composed (syntax) and what the expressions in this language mean (semantics). The formality and rigidity of logics is one of the largest obstacles for developers that come from their own domain, without having a mathematics or philosophy background. Experienced domain expert generally have no difficulty to express their knowledge in precise textual statements. However, human language allows for subtle distinction to express the whole range from universal truths to contingent facts. Formal ontologies, however, are restricted to formulate what is universally true in a domain. Many pieces of domain knowledge are outside the scope of formal ontology. To recognize this limitation requires training and experience. Furthermore, human language is misleading. Often, even basic syntactic constructs like head nouns with adjectival modifiers require diverging interpretations: a complicated pregnancy is a pregnancy, but a suspected pregnancy isn’t. Nevertheless we have seen that even experienced ontology developers are influenced by the idiosyncrasies of natural language and introduce inadequate meaning to formal constructs. Visual tools like the Protégé Ontology Editor allow to hide the bare formalism of logic-based description languages like the Web Ontology Language (OWL) from the developer. 
However, the ontology developer should be fluent in using the corresponding editor functions and know the meaning of the constructs of the formal language.\nOn top of the representational language, an ontology developer should be able to build on existing specifications and standards. A large body of consolidated frameworks on how domain entities should be categorized and represented is provided by various disciplines like philosophy, mathematics and information sciences, in particular top-level ontologies and ontology design patterns (ODPs). Top-level ontologies, like BFO, DOLCE or BioTop provide a foundation of foundational categories (e.g. process, material object, quality) and relations (e.g. part-of, participates-in), to be extended and specialized by domain ontologies. Ontology design patterns are inspired by so called software design patterns originating in software technology [19]–[20] forming complex elements of software code to be re-used in development scenarios with similar requirements or functionality. Both a top-level ontology and ontology design patterns provide re-usable building blocks to guide the development of new domain ontologies and to sustain ontology standardization and interoperability.\nThe development of an ontology is often driven by its later application. Here, the developer should understand the technical framework in which the ontology is to be embedded. This can range from the support of natural language processing to interoperability frameworks and visual navigation tools. For each application scenario, different features of an ontology might be important. While the segment of reality represented in an ontology remains constant across applications, the scope, the depth and the implementation of the representation is influenced by the use-case.\nWhile the core knowledge necessary to develop ontologies originates in the domain to be represented, domain knowledge has to be connected with knowledge about the semantics of the representational languages and the ontological framework in which the representation is grounded, i.e. basic categories, relations, and constraints. Therefore domain experts who develop ontologies should stand on a solid ground in computer science, logics, and philosophy, either by teaming up with experts from these disciplines or by acquiring related skills and knowledge by targeted training.\nGuided by this rationale, we have developed a guideline on good ontology design. It comprises what we consider to be the most essential knowledge to develop good quality ontologies (GoodOD guideline (http://purl.org/goodod/guideline)) using description logic (DL) for representation and reasoning. This guideline combines foundations from philosophical ontology and logics, with the representational language OWL DL, and is rooted in the domain top-level ontology BioTop [21]. The guideline itself is not intended as training material for beginners in the field of ontology engineering but as a concise report which can be used for consultation on the constituents of good ontology development. To actually provide domain experts with the knowledge from the GoodOD guideline, we implemented it in a curriculum [22], [23].\nThe aim of this study was to provide evidence that guideline-based training of ontology developers enhances their performance. Therefore, the effect of training of a certain problem solution based on our guideline was tested against the effect of training in ontology design that was not specific for this problem. 
This was done as a randomized controlled trial with students from the biomedical domain with a background in computer sciences. A set of ontology similarity (distance) metrics for ontologies was used as a measurement instrument. As primary outcome measure, similarity metrics were applied to compare the ontology artefacts created by students with a gold standard artefact created by experts. As a secondary outcome measure the intra-group ontology similarity was compared between groups. We assumed that an improved quality would manifest itself in two ways: (i) as a higher similarity of trained students’ artefacts to expert ontology artefacts, and (ii) in a larger similarity between products of trained students compared to products of untrained students.\nBODY.QUALITY OF ONTOLOGIES IN THE BIOMEDICAL DOMAIN:\nOntologies use logic to formulate statements about what experts consensually assume to hold true in a domain, with scope and granularity varying according to the use cases for which the ontologies are created.\nTo evaluate their quality is inherently difficult, as it depends on the definition of quality, as well as on the availability of measurement instruments (see below). The literature about quality problems in domain ontologies is rich, but difficult to operationalize. Design errors described in the literature can be categorized into different types of various complexities:\nThe violation of implicit or explicit naming conventions [24] results in lexical names of classes or relations which are likely to be misinterpreted or easily confused by humans.\nOther errors arise through the mix-up of the commonsense meaning of natural language expressions with the semantics of the representational DL language. E.g., when seen through the lens of natural language pragmatics, a value restriction like ‘OxygenMolecule hasPart only OxygenAtom’ can be wrongly understood to imply the existence of parts [18]. However, as a DL expression, a value restriction only limits the possible target classes of a relation without implying the existence of a relatum, i.e. something that is related. Hence, our sample statement says that the parts can only be oxygen atoms (which is wrong, as oxygen molecules also have protons as parts), without stating that every oxygen molecule consists of oxygen atoms (which is true but needs a different logical statement).\nOther failures are due to problems with the counterintuitive semantics of DL. We are used to understanding that the absence of a statement about a fact implies its negation. This is the commonsense interpretation throughout many applications of daily life and even in science. In description logic, the absence of a statement from an ontology, i.e. an axiom, does not imply its negation (the so-called open world assumption), with all its consequential implications.\nA very common error is the absence of explicit statements that things cannot be in the same class but are in disjoint classes: mutually disjoint partitions are not made explicit [3]. For instance, if an ontology developer creates the two sibling classes Animal and Plant, nothing is said about whether things can exist that are plants and animals at the same time.
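The two pitfalls just described (a value restriction that does not imply existence, and sibling classes that are not automatically disjoint) can be made concrete in a few lines of OWL. The article contains no code; the following is only a minimal illustrative sketch using the Python library owlready2, which is an assumption made for this example rather than a tool reported by the authors; the class and property names are the ones mentioned in the text above.

```python
# Minimal sketch (assumes the owlready2 package is available); it illustrates the
# two DL pitfalls described above and is not taken from the authors' material.
from owlready2 import Thing, ObjectProperty, AllDisjoint, get_ontology

onto = get_ontology("http://example.org/demo.owl")  # hypothetical IRI

with onto:
    class OxygenAtom(Thing): pass
    class hasPart(ObjectProperty): pass
    class OxygenMolecule(Thing): pass

    # Value restriction ("only"): IF the molecule has parts via hasPart, then they
    # are oxygen atoms. This does NOT assert that any such part exists.
    OxygenMolecule.is_a.append(hasPart.only(OxygenAtom))

    # To state that every oxygen molecule actually has at least one oxygen atom as
    # a part, an existential restriction ("some") is needed in addition.
    OxygenMolecule.is_a.append(hasPart.some(OxygenAtom))

    class Animal(Thing): pass
    class Plant(Thing): pass

    # Under the open world assumption, declaring the two sibling classes says
    # nothing about their disjointness; it has to be asserted explicitly.
    AllDisjoint([Animal, Plant])
```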
Despite, or precisely because of, the small number of language elements and the restricted expressivity that follows from it, ontologies based on DL can contain a multitude of logical inconsistencies [25].\nA large and important class of errors is related to inadequate representations of the underlying reality itself that do not surface as inconsistencies or contradictions in a logical sense. They are mostly due to an incomplete or wrong understanding of the domain which is to be represented in the ontology. Consequently, they result in underspecification and inaccuracy regarding the underlying ontological premises. These errors are difficult to detect because the ontology reveals no logical contradictions when checked by a DL classifier. Only domain experts with a sound ontological background can detect these errors. They are able to recognize that the critical representation rests on wrong underlying assumptions. In many cases, faulty ontologies were built without making the underlying assumptions explicit. However, when there are competing or incongruent ontological commitments in the background, they can be even more problematic [26], [27].\nNot only the erroneous axioms themselves, but (potentially even worse) also their computed entailments have an impact on basic tasks like equivalence detection and ontology alignment, and affect interoperability in general. Above all, ontologies are often not easily applicable in usage scenarios for which they have not been developed. An essential condition for the overall success of an ontology is that it can be used across a variety of use-cases and applications, and that it can seamlessly interact with other semantic resources.\nRecently, some activities have emerged that target assurance and improvement of ontology quality. Coordination efforts from policy and best practice providers [8] assure the quality of ontologies provided by repositories [9], [10], and tools support the detection and correction of typical errors [17], [24].\n\nBODY.TRAINING IN FORMAL ONTOLOGY:\nA variety of educational material is available on ontology, logics and knowledge representation. It is presented in different forms like textbooks, tutorials, online courses, and lectures and has been written to facilitate ontology engineering for a broad range of learners from different domains. This material is mostly technically oriented and focuses on certain skills, e.g. the usage of the Protégé editor or how to represent a specific domain for a specific usage scenario. A public repository of training material dedicated especially to OWL and bio-medical ontology is available at the website of the CO-ODE project (http://www.co-ode.org). Among the introductory and expert material on the development of ontologies, the famous Protégé-OWL tutorial, which uses a Pizza ontology as an example, is linked there [28].\nSources for training material on ontology theory and ontology engineering are the websites of individual researchers, most prominently the website of Barry Smith at the National Center for Ontology Research (NCOR, http://ontology.buffalo.edu/smith/). At the National Center for Biomedical Ontology (NCBO), some audio-visual material is available, especially on the usage of BioPortal [9].\nIt is not possible to mention and evaluate all available educational material on ontology and ontology development here. Most of the material is of high quality and is the result of long-term educational efforts, which cannot be valued highly enough. But many of these resources are focused on only one aspect of ontology, e.g.
describing the expressivity of representational languages in the context of editing and reasoning tools, typically description logics with the Protégé editor and plugin reasoners. Other materials tend to be agnostic of the principles we formulated above like the use of existing upper-level ontologies and design patterns.\nThe curriculum we evaluated in this study integrated everything we considered to be necessary for successful ontology development beyond the knowledge of the domain itself. It is targeted especially to those we consider the most important future ontology developers, viz. students with a domain background in the life sciences. Furthermore, the curriculum is based upon an explicit guideline on good ontology development which we believe provides a necessary, consistent and sufficient core definition of good ontology design. All modules of the curriculum are built up on each other and use consecutive exercises and examples from the same domain ontology.\n\nBODY.ONTOLOGY EVALUATION:\nThere are numerous accounts of the evaluation of ontology artefacts. However, objectives, outcome measures and evaluation methods differ on a large scale, as they need to be aligned to the variant semantic flavours and expressivity, which in turn depend on the ultimately envisioned use case scenarios. Some quantitative metrics and frameworks for quality measurement of DL ontologies are available [29]–[31]. All of them are highly synthetic and take into account features of the ontology that – prima facie – seem to be correlated with its quality (such as connectedness, depth of the hierarchy, annotation coverage, etc.).\nThese metrics are often fraught with the problem that they are not validated independently, a fact that is largely due to the unavailability of a generally agreed upon and easy to operationalize account of ‘ontology quality’. Hence no simple objectified quality assessment method has emerged so far. As a consequence, inter-rater reliability between human experts assessing the ontologies is often the method of choice for determining ontology quality [18], though it is infeasible for larger quantities of ontology content.\nAs mentioned, the perceived quality characteristics of an ontology are very often dependent on the use case they were designed for. Some authors conclude that “[r]esearch results in biomedical ontology should always be evaluated against a biomedical task for which the ontologies are intended” [32]. This places ontology engineering in a light quite similar to software engineering, and puts the focus of ontology quality assurance on areas similar to quality assurance for software artefacts, which is also reflected in the adaptation of the SQuaRE quality model [31], [33]. We agree that usability is a key criterion for the successful adoption of ontologies in real world applications and firmly believe such approaches to be generally laudable since they try to address the needs of ontology users. Nonetheless, an exclusive focus on usability and use-case oriented quality criteria may fall short of capturing the fact that – to some extent – ontologies are more like standards than like software artefacts.\nIn some areas of ontology research, these problems are less pressing. 
For example, the quality of ontologies generated automatically by machine learning or text mining procedures or of automatically derived ontology alignments [34], [35] can be readily assessed by their similarity to a manually curated gold standard which is normally done, e.g., in competitions for text mining algorithms. To our knowledge there are no prior studies that apply this approach to the assessment of ontology artefacts produced by students after differential training against expert models in order to quantify the effects of the training.\n\n\nBODY.METHODS:\nBODY.CURRICULAR DEVELOPMENT AND IMPLEMENTATION:\nThe curriculum was designed following a six-step approach, which has been designed for medical education [22], [36] but is not limited to this domain. These steps are general needs assessment, needs assessment of targeted learners, goals and measurable objectives, educational strategies, implementation, and evaluation and feedback.\nThe definition of educational objectives is crucial for every educational process. Educational objectives essentially depend, among other factors, on the general requirements, the specific requirements of targeted learners and subject matter. Regarding the former two requirements, we outlined the general need to improve the ontology development quality and we have elucidated which components are necessary to achieve this goal. The needs of targeted learners who are domain experts and students from the life sciences depend heavily on their knowledge of computer sciences and formal logics, as well as on the subject matter they intend to represent. Since we did not want to overburden the curriculum, we restricted the group of targeted learners to students in life sciences with a background in computer sciences for which we could take basic knowledge and skills for granted.\nThe subject matter addressed in the learning objectives had been operationally defined by the GoodOD guideline which comprises the basics of philosophical and formal ontology, descriptions logics, minimal metadata recommendations, top level ontologies and ontology design patterns, intended as a basic toolkit for ontology developers in any domain. Furthermore, learning objectives were formulated addressing the prevention of ontology development errors described in the introductory section of this paper.\nAs an activity, ontology development requires the learners to acquire and shape knowledge of the domain to be represented, master the representation formalisms, as well as the skills of how to actually perform the task of creating a formal model. Therefore, we chose an instructional format which actively engaged the learners in a sequence of brief hands-on exercises, to be done individually or in small groups. Classroom lectures were minimized to short introductory talks. Every step of the curriculum was accompanied by printed handouts. Each instructional unit had duration of 105 minutes, each day consisting of four units. The complete seminar was designed for five and a half instruction days, corresponding to 22 units.\nFigure 1 shows the module sequence of the curriculum.\n10.1371/journal.pone.0061425.g001Figure 1BODY.THE SEQUENCE OF MODULES IN THE CURRICULUM.:\nIt follows the stepwise layout of the GoodOD guideline and the increasing complexity of the contents. 
Modules 10–13 and 15–16 were used in the intervention (see Table 1).\n\nBODY.STUDY DESIGN, POWER ANALYSIS AND ALLOCATION:\nAn educational randomized controlled study was conducted in association with the curriculum as described above in September, 2011, at the Institute of Medical Biometry and Medical Informatics of the University of Freiburg, Germany. The complete curriculum included the interventional part (see below) and was completed in 5.5 days.\nStudy design and reporting follows the CONSORT statement for randomized controlled trials in parallel group design (see Figure 2) [37].\n10.1371/journal.pone.0061425.g002Figure 2BODY.MODIFIED CONSORT DIAGRAM.:\nTwelve students were allocated to each group and could be analyzed.\nPrior to recruitment, power analysis for trials in parallel group design was performed using the R statistical package. The estimation of standardized effect sizes (Cohens’ d = mean differences/pooled standard deviation) for similarity measures was difficult because no empirical data sets were available which allowed a calculation of the mean differences and standard deviations to be expected. A sample size of n = 12 participants per group was calculated for parallel group design under the estimation of a high effect size of d = 1.2, a power of 0.8 and a significance level of 0.05.\n24 students from four European countries (Austria, Germany, Slovenia and Switzerland) were included in the study. The students were recruited from universities which offer bachelor or master degrees in biology in combination with a minor subject in computer sciences or bachelor or master degree in computer sciences in combination with a minor subject in the life sciences. In every case inclusion criteria for participation were checked: applicants combined undergraduate knowledge in the life sciences with basic knowledge in computer sciences.\nBalanced randomization was performed with a pseudo random number generator from the R statistical package prior to the first interventional training sessions (see Figure 2).\n\nBODY.INTERVENTION:\nThe intervention proper was conducted after the phase of background teaching in which the introductory modules of the curriculum were taught jointly to both groups. The intervention consisted of the differential training students received for certain topics in ontology design. For the modules 10–13 and 15–16 the students were either instructed or not. The allocation of modules to each group is displayed in Table 1. Modules 10–13 are dedicated to the proper usage of top-level categories (taken from BioTop) and corresponded directly to sections in the guidelines. Group A was trained on “Process and Participation” (module 10) and on “Immaterial object” (module 12), whereas group B was trained on “Collective material entity” (module 11) and “Information object” (module 13). Modules 15 and 16 covered the application of ontology design patterns; they also corresponded directly to sections in the guideline. 
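As a brief aside to the sample-size calculation and balanced randomization reported in the study-design paragraph above (the description of the differential training continues below), the reported numbers can be reproduced approximately in a few lines. The authors used the R statistical package; the sketch below is only an analogous illustration in Python with the statsmodels package and the standard library, under the stated assumptions of d = 1.2, alpha = 0.05 and a power of 0.8.

```python
# Illustrative re-computation of the reported sample size and a balanced
# allocation of 24 participants; the authors used R, this is only an analogy.
import math
import random

from statsmodels.stats.power import TTestIndPower

# Two-sample t-test, two-sided, Cohen's d = 1.2, alpha = 0.05, power = 0.8
n_per_group = TTestIndPower().solve_power(
    effect_size=1.2, alpha=0.05, power=0.8, ratio=1.0, alternative="two-sided"
)
print(math.ceil(n_per_group))  # 12 participants per group, as reported

# Balanced (1:1) randomization of 24 participants into groups A and B
participants = [f"P{i:02d}" for i in range(1, 25)]  # hypothetical IDs
random.seed(2011)                                   # arbitrary seed
random.shuffle(participants)
group_a, group_b = sorted(participants[:12]), sorted(participants[12:])
```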
Here, group A received training on the Closure ODP (module 15) and group B on the Spatial disjointness ODP (module 16).\n10.1371/journal.pone.0061425.t001 Table 1. Intervention and data collection of the study. Columns: intervention/data collection; application of top-level ontology; application of ODPs.\nTraining group A: top-level ontology: Module 10 (PRO) Process and participation, Module 12 (IMM) Immaterial object; ODPs: Module 15 (CLO) Closure ODP.\nTraining group B: top-level ontology: Module 11 (CME) Collective material entity, Module 13 (INF) Information object; ODPs: Module 16 (SPA) Spatial disjointness ODP.\nTest exercises (both groups): top-level ontology: PRO: Photosynthesis, Medical diagnosing; CME: Proteinuria, Penicillin; IMM: Fetogenesis, Stomach anatomy; INF: Operation plan, Pneumonia diagnosis; ODPs: SPA: Cell membranes, Stomach wall; CLO: Circulatory system, Teeth.\nThe intervention of the study consisted of the differential training of the students in certain content areas: students in group A received training in modules 10, 12 and 15 and no training in modules 11, 13 and 16, and vice versa for students in group B. Data were collected in the form of ontology development exercises which were distributed evenly over all content areas, with two exercises per training module.\nTraining sessions were kept balanced with regard to instructor, length, difficulty and instructional format. The training sessions were held in parallel for both groups by two instructors for Modules 10–13. Instructors switched groups between these interventional training sessions to minimize trainer bias. Modules 15 and 16, the two training sessions on ontology design patterns, were given by the same instructor. Basically, sessions were as closely balanced as possible between the two groups for training duration, complexity and instructional structure.\nThe study design introduced above has a crossed interventional schema: both groups received the same interventions (guideline-based training or no training), but on modules with different topics. The crossed design was chosen in this educational study to control for the context dependency of the effect that is introduced by the topic of the module. It is possible that an effect depends only or partly on the topic of an educational intervention and not on the intervention proper (here: training/no training). To control for this effect, more than one module (topic) was selected for each group and the design was crossed so that each group received training on a different set of modules. For practical and ethical reasons, a pure parallel group design, in which one group would have received training on all modules and the other group no training at all, was not feasible, although it would have been theoretically possible.\nIn this study, it was not intended to compare the overall effect of training between different modules. Only the effects of guideline-based training vs. no training were compared between the groups on the same modules. However, this design has some limitations, discussed below.\n\nBODY.OUTCOME MEASURES AND INSTRUMENTS:\n\nThe primary outcome measure was the mean similarity of the ontology artefacts developed by the students in the assessment exercises compared with a set of gold standard artefacts provided by the authors. Prior to the experiment, the authors had prepared two test exercises for each of the six interventional training topics.
All exercises were given in the same format and provided with the following material: (a) an introductory text explaining the exercise; (b) an OWL file containing the necessary ontology primitives; and (c) the upper level ontology BioTopLite, a simplified version of the upper level ontology BioTop (to be imported by (b)), which corresponded to the students’ knowledge from the training sessions (individualized for each group). The learners had to arrange the ontology classes in a taxonomic order and add describing and defining axioms according to the given task. The creation of new object properties (relations) was not allowed. The contents of the tasks and their allocation to the training sessions are presented in Table 1.\nIn order to measure ontology similarities, different instruments were combined [39]. Prior to the calculation of metrics, a modified normalization approach was applied to minimize syntactic differences between ontologies which did not affect their intended semantics [40]. Precision, recall and f-measure were determined according to Dellschaft and Staab [34], for which we have developed a software library [39]. Another library, OntoSim, was used to calculate triple-based entity similarity in combination with average linkage or minimum weight maximum graph matching (MWMGM) similarity [35], [41], [42].\n\nSecondary outcome measures were the intra-group ontology similarity metrics between artefacts created by students who had either received the training or not (intra-group homogeneity between ontologies). The measuring instruments were the same as for the primary outcome.\n\nBODY.DATA COLLECTION:\nData were collected during assessment sessions held over two and a half days directly following the training sessions, following the structure presented in Table 1, in which the participants processed the exercises. After each session the individual ontology artefact produced by each student was collected as an OWL file and stored in its original state, so that each assessment session yielded 24 ontology artefacts. All 12 result OWL files from each of the 24 participating students were collected, totalling 288 files.\n\nBODY.STATISTICAL ANALYSIS:\nA Java program was written to pre-process the files and derive metrics as described above [39]. For each measure and each exercise a tabular output file was produced, so that each file contained 24 × 24 = 576 similarity/distance values corresponding to the mutual pairs of each of the 24 student results and the respective gold standard ontology.\nFor further aggregation, statistical procedures, and the production of graphs, the R statistical package (version 2.15.1) was used [38]. The individual distance measures were aggregated by calculating means and standard deviations for each student within that student’s own group, so that aggregated measures for the distance to the ontologies of the own group resulted. Based on the individual measures for the distances to the gold standard and the calculated distances to the collection of ontologies in the own group, further parallel group analyses of aggregated means at the level of single ontologies were performed.\nTo estimate differences between treatment groups at the level of each ontology artefact, the treatment effect was calculated as the difference between the group means of distances to the gold standard and between the intra-group mean distances.
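Before the significance testing described in the next paragraphs, it may help to see the general shape of such a comparison. The article gives no code, and the authors' own similarity library is only referenced above; the following is a heavily simplified, hypothetical sketch in plain Python of a precision/recall/f-measure comparison between a student ontology and a gold standard, both reduced to sets of axiom triples. The normalization step and the graph-based OntoSim measures are deliberately omitted, and plain set overlap is only a crude stand-in for the cited measures.

```python
# Much-simplified sketch of an f-measure style comparison between a student
# ontology and a gold standard, both reduced to sets of axiom triples.
# Hypothetical data; not the authors' implementation.

def f_measure(student: set, gold: set) -> float:
    """Harmonic mean of precision and recall of the student's axioms."""
    if not student or not gold:
        return 0.0
    true_positive = len(student & gold)
    precision = true_positive / len(student)
    recall = true_positive / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {
    ("Tooth", "subClassOf", "AnatomicalStructure"),
    ("Tooth", "partOf some", "Jaw"),
    ("Incisor", "subClassOf", "Tooth"),
}
student = {
    ("Tooth", "subClassOf", "AnatomicalStructure"),
    ("Incisor", "subClassOf", "Tooth"),
    ("Incisor", "partOf some", "Jaw"),   # axiom not in the gold standard
}
print(round(f_measure(student, gold), 2))  # 0.67
```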
The statistical significance was tested using the t-test for independent samples.\n\nBODY.ETHICAL APPROVAL:\nParticipants received an expense allowance of 500 for their participation in the training and the study. Before their agreement, students had been informed about all details of the curriculum and the following study. As part of the agreement, it was explicitly stated that the payment of the allowance was only dependent on the students’ complete attendance and full cooperation during the training sessions and the study, but not on their success in the assessments or their answers in the questionnaires.\nEthical approval was requested from the ethical authority of the University of Freiburg, Freiburg, Germany. The chair of the University of Freiburg ethics committee reviewed the project and concluded that a full formal ethics committee statement was not required due to the educational nature of the study. It was designed according to the general requirements for educational studies at the University Medical Center Freiburg, Freiburg, Germany, and was performed with written informed consent of the participants.\n\n\nBODY.RESULTS:\nTo our knowledge there are no prior studies which obtained similarity measures between ontologies developed by study participants and gold standard ontologies provided by experts after specific training on ontology design as metrics for the improvement of ontology development skills. In this randomized controlled study we sampled ontology artefacts from 24 students addressing 12 modelling tasks for which they had either been trained or not prior to the assessment. We calculated similarity measures of these sample ontologies with gold standard ontology artefacts and with the artefacts of other participants inside the same training group (intra-group homogeneity).\nThe obtained data did not provide support for our main hypothesis, viz. that guideline-based ontology training improves the developers’ skills to build ontology artefacts that are more similar to pre-existing gold standards. The artefacts from the trained group were not significantly more similar to the gold standards than the ones from the untrained group. As clearly shown in Table 2, Table 4 and Table 5, we could find only very small and not statistically significant treatment effects between the trained group and the untrained group. This result is consistent for all ontology similarity metrics and ontology assessment tasks (Table 2). Even with data aggregated on topic level, only small and non-significant effect sizes were observed (Table 5).\n10.1371/journal.pone.0061425.t002 Table 2. Similarity with the gold standard model. Columns: group, topic, ontology, fm, n-mtb, mtb, atb.\nA, CLO, tee, −0.4, −3.8, −0.1, −0.3\nA, CLO, bud, 6.6, 0.6, −2.5, 6.9\nA, IMM, sto, 11.1, 1.0, −1.5, −0.1\nA, IMM, fet, −5.4, 2.0, 0.3, 1.2\nA, PRO, pho, 1.6, 1.6, −0.4, −2.8\nA, PRO, dia, 1.7, 2.2, −1.0, 2.4\nB, CME, pru, −4.7, 0.1, 0.0, 0.0\nB, CME, pen, −3.6, −0.9, 1.4, 0.1\nB, INF, pne, −6.0, 2.0, −0.1, −1.1\nB, INF, ope, 1.0, 1.5, −0.1, 0.0\nB, SPA, cem, 0.5, −1.4, 0.0, 1.4\nB, SPA, sta, 7.0, 4.1, −0.8, 1.5\nOntology similarity metrics, displayed as absolute difference in percent between trained and untrained groups, ordered by trained group, training topic and individual assessment task (for details see Table 1).
The similarity/distance metrics shown are f-measure (fm), MWMGMS after normalization (n-mtb), MWMGMS without normalization (mtb) and average linkage without normalization (atb), the last three combined with triple-based entity similarity as the local measure. Significance levels of group comparisons correspond to thresholds of p < 0.15 (∼), p < 0.1, p < 0.01 and p < 0.001.\n10.1371/journal.pone.0061425.t003 Table 3. Effect sizes of similarity with the gold standard ontology and intra-group similarity. Columns: group, topic, ontology, GS-fm [%], GS-fm d, IH-fm [%], IH-fm d.\nA, PRO, pho, 1.6, 0.12, 9.7, 0.66\nA, PRO, dia, 1.7, 0.12, 6.5, 0.35\nA, IMM, sto, 11.1, 0.63, 11.3, 0.59\nA, IMM, fet, −5.4, −0.30, 2.3, 0.16\nA, CLO, tee, −0.4, −0.03, −3.0, −0.20\nA, CLO, bud, 6.6, 0.51, 4.1, 0.31\nB, CME, pru, −4.7, −0.38, 8.0, −0.51\nB, CME, pen, −3.6, −0.34, −5.2, −0.43\nB, INF, pne, −6.0, −0.46, −2.3, −0.23\nB, INF, ope, 1.0, 0.05, −13.0, −0.69\nB, SPA, cem, 0.5, 0.04, −9.0, −0.80\nB, SPA, sta, 7.0, 0.37, −0.2, −0.01\nF-measure ontology similarity metrics with the gold standard (GS) and f-measure intra-group homogeneity (IH), displayed as absolute differences in percent between trained and untrained groups and Cohen’s d effect sizes. Ordering is by trained group, training topic and individual assessment task. For details on abbreviations and symbols see Table 2.\n10.1371/journal.pone.0061425.t004 Table 4. Aggregation of effect sizes on topic level. Columns: group, topic, GS-fm [%], GS-fm d, IH-fm [%], IH-fm d.\nA, PRO, 1.6, 0.12, 8.1, 0.49\nA, IMM, 2.8, 0.16, 6.8, 0.40\nA, CLO, 3.1, 0.23, 0.5, 0.04\nB, CME, −4.1, −0.36, −6.6, −0.47\nB, INF, −2.5, −0.16, −7.7, −0.50\nB, SPA, 3.7, 0.24, −4.6, −0.23\nF-measure ontology similarity metrics with the gold standard (GS) and f-measure intra-group homogeneity (IH) aggregated on topic level, displayed as absolute differences in percent between trained and untrained groups and Cohen’s d effect sizes. Ordering is by trained group and training topic. For details on abbreviations and symbols see Table 2.\n10.1371/journal.pone.0061425.t005 Table 5. Intra-group similarity (homogeneity). Columns: group, topic, ontology, fm, n-mtb, mtb, atb.\nA, PRO, pho, 9.7, 0.1, −0.5, −0.3\nA, PRO, dia, 6.5, 0.1, −0.8, −0.9\nA, IMM, sto, 11.3, −0.3, 0.0, −1.3\nA, IMM, fet, 2.3, −1.2, 0.0, 0.4\nA, CLO, tee, −3.0, −0.7, 0.0, −0.4\nA, CLO, bud, 4.1, 0.4, 1.0, −1.0\nB, CME, pru, −8.0, 0.1, 0.0, 0.0\nB, CME, pen, −5.2, −1.4, 0.3, 1.0\nB, INF, pne, −2.3, −0.2, 0.0, 0.1\nB, INF, ope, −13.0, −0.7, 0.1, 0.3\nB, SPA, cem, −9.0, 0.3, 0.0, −0.2\nB, SPA, sta, −0.2, 1.1, −0.7, −0.2\nIntra-group ontology similarity metrics (homogeneity), displayed as absolute difference in percent between trained and untrained groups, ordered by trained group, training topic and individual assessment task. For details on abbreviations and symbols see Table 2.\nOur second hypothesis, that the homogeneity between ontologies from different developers would increase due to guideline-based training when compared with the untrained group, was not supported by our data either (Table 3, Table 4 and Table 5). Although we observed moderately significant effect sizes for the f-measure intra-group similarity, the overall comparison between the groups is not conclusive. For group A the f-measure increases significantly by about 5% (absolute) after training, but for group B it decreases significantly by about 6% (Table 5). (An illustrative sketch of such a group comparison follows below.)
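The sketch referenced above: the treatment effect as a difference of group means, a pooled-standard-deviation Cohen's d (the definition used in the power-analysis section), and an independent-samples t-test, computed on hypothetical similarity scores with Python and scipy rather than the authors' R code.

```python
# Minimal sketch of the reported group comparison on hypothetical per-student
# similarity scores (trained vs. untrained group for one assessment task).
from statistics import mean, stdev
from scipy.stats import ttest_ind

trained   = [0.62, 0.70, 0.66, 0.71, 0.64, 0.69, 0.73, 0.60, 0.68, 0.65, 0.72, 0.67]
untrained = [0.61, 0.66, 0.63, 0.70, 0.62, 0.68, 0.71, 0.59, 0.66, 0.64, 0.69, 0.65]

treatment_effect = mean(trained) - mean(untrained)   # difference of group means

# Cohen's d = mean difference / pooled standard deviation
n1, n2 = len(trained), len(untrained)
pooled_sd = (((n1 - 1) * stdev(trained) ** 2 + (n2 - 1) * stdev(untrained) ** 2)
             / (n1 + n2 - 2)) ** 0.5
cohens_d = treatment_effect / pooled_sd

# Independent-samples t-test, as used for the significance testing in the study
t_stat, p_value = ttest_ind(trained, untrained)
print(round(treatment_effect, 3), round(cohens_d, 2), round(p_value, 3))
```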
For the other tested similarity measures only small differences between the trained and untrained group of lower than 1.5% were found. Why the f-measure metrics are more sensitive for intra-group similarity differences than the other metrics remains unclear and should be subject of further investigation. Contrary to our prior assumptions, we did observe differences in intra-group similarity between trained and untrained students in both directions. However, the direction of the differences might depend on the topic of the training session and corresponding assessment tasks or the (random) allocation of students to the groups. The exact cause is however not determinable with our study design.\nAt this point, we are not able to confirm our assumption that skills of novice ontology developers are improved by guideline-based training. Moreover, our results indicate that the content topic of the training and the corresponding assessment task are factors that might dominate the training effects. Furthermore, despite our best efforts those content topics may have been distributed in an imbalanced way in the two groups and may have caused the different effects in the intra-group f-measure similarity metrics.\n\nBODY.DISCUSSION:\nOur research question for this randomized controlled trial was whether the skills of ontology developers can be improved by a guideline-based training when compared with developers who received unspecific training. To provide empirical evidence to support this hypothesis, we have developed a curriculum based on a guideline on good ontology design and evaluated its effect in a randomized controlled trial. The primary outcome parameter of the study was the similarity of ontologies developed by participants with a gold standard model provided by the authors of the guideline. The secondary outcome parameter was the intra-group homogeneity of ontologies.\nThe data of our study did not provide evidence for either hypothesis; neither did the similarity to a gold standard model increase significantly nor did the intra-group homogeneity of participants change consistently.\nThere are several possible causes for the difficulty to detect an unambiguous effect of our training on the quality of ontologies as the product of the development process. On the one hand, the explanatory power of the study was limited due to the study design. However, this argument does not account for the failure to even detect a difference in similarity to the gold standard model. On the other hand the deployed similarity metrics might be too insensitive to detect slight differences in features present in our ontologies.\nThe simplest explanation for the result of our study would be that students were overloaded with too much information in too short a time. The complete interventional training sessions were given in 2.5 days. Although they consisted of two 105 min units for each instructional topic and an additional free training session, they were overly ‘packed’ with information. Thus, students possibly had not enough time to consolidate their recently acquired knowledge in a longer training session. Moreover, the recall of fresh knowledge and skills might even be inhibited by the task to tackle new problems after only a short interval to a prior learning experience that was not yet fully consolidated (retroactive interference) [43].\nHowever, the results of our study should not be considered to indicate a complete failure of the guideline-based approach of ontology development. 
As an evaluation of a guideline, our study had objectives on two levels. First, it had the objective to provide evidence on the efficacy of a guideline-based training on the performance of ontology developers. On top of that, a secondary objective was to show that the guideline-based approach is superior to the ‘conventional’ more or less unguided development of ontologies. What we can state here, is that our training was not effective, but that does not imply that the guideline-based approach, in general, is not effective. As we outlined above, our interpretation is that the educational intervention and the measurement instrument were insufficient to show small differences in a complex environment.\nBODY.LIMITATIONS OF THE STUDY:\nAs an educational study in a complex environment, this study has some limitations which render the interpretation of its results very difficult.\nFrom a pure educational standpoint, a comparison of an educational intervention with an unspecific instruction in the control group was criticized to be low-stakes educational research [44]. Arguing from this standpoint alone, a positive result would have meant only that the provided educational intervention ‘had been better than nothing’ – which is really not much. However, as a study in guideline evaluation we designed it to show the effectiveness of the guideline-based approach. Seen from this more complex standpoint, a positive result could have been interpreted as: (1) the provided guideline-based intervention ‘had been better than nothing’, and (2) the guideline-based approach to ontology engineering is better than a non-guided approach.\nWe faced the problem to show a clear effect under the condition of a small sample size and a complex educational and technical setting. However, a larger sample size was not feasible due to practical limitations. It was already difficult to recruit the participants of the present study, and a larger group size would also have impeded the educational implementation. There would have been two alternative study designs with higher statistical power:\nWithin the parallel group design we could have chosen to assign all interventions to the same group. Most probably this would have decreased the spread of our results and thus increased the statistical power of the result. However this would have had two important disadvantages: It would have been a hindrance for recruitment, as half of the participants would have learned considerably less during the summer school (and learning is one of the key incentives for participation in such an event). Moreover, this line would have increased the ‘better than nothing’ problem of educational studies.\nStatistical significance would have also been higher, if we had evaluated our data by using the crossover method [45], [46]. Although the conditions were kept as closely similar as possible for the two groups, a major structural limitation of an educational crossover design lies in the training and assessment of different content areas. When sequential crossover studies are performed in pharmaceutical studies, a washout period can be inserted that is long enough to reach complete extinction of the treatment effect of the first treatment period. In educational crossover studies, however, a back-learning (forgetting) is neither possible nor desirable, so that the instructional methods that are to be compared must be applied to different content areas. 
Although we switched the roles of intervention group and control group for different treatments, we decided not to analyse the results as a crossover study because we considered the topical differences of the educational interventions as too large.\nEven this way, and however carefully chosen for similarity in difficulty, length and amount, the content itself introduces a confounding variable in our design in which training for different content was ‘switched’ between groups. Although we tried to match the training topics of ontology design to have comparable content distributions for each group, they remain markedly different due to the character of the topics (Table 1). We consider this heterogeneity in content a possible reason for the different results on intra-group homogeneity in the two groups. As further elaborated in the next section, no validated and tested evaluation methods for computerized measurement of quality indicators of a larger number of ontologies are available. The resources of this project limited us to employ existing methods without prior prototyping of the study. Thus we missed the possibility to detect the limitations of the ontology similarity/distance metrics which were originally introduced for the evaluation of ontology mapping experiments.\n\nBODY.EVALUATION OF AND METRICS FOR FORMAL ONTOLOGY:\nDue to the complex nature of the ontology artefacts, their evaluation is inherently difficult and manifold. Furthermore, there are often multiple correct ways to represent the same facts, and the objectives and methods of ontology evaluation are numerous [32], [47], [48]. What do we exactly mean when we talk about ‘good ontology’? This property might be dependent on the objectives of the ontology under scrutiny, its philosophical foundations and the intention of the investigator. It is possible for two serious investigators to rate the same ontology on opposite poles of a quality scale [49].\nIn our view, the quality criteria for ontologies should be defined in accordance with their real or hypothetic application scenarios, besides the mere formal correctness of an ontology, which is already a starting point for good quality. However, as our interest lies in biomedical ontology, we can define several parameters for good quality ontology which are independent of their primary intended application scenario:\nConsideration of a continuous and appropriate ontological commitment throughout the ontology development process. The integration of a top-level ontology [21], [50]–[52] in the domain ontology can guarantee for this adherence in a more robust way as the usage of certain ontology design patterns, as the top-level ontology already provides many patterns in an embedded way, ensuring all patterns harmonize with each other, i.e. fit a common scope and development philosophy.\nAdherence to coding standards and naming conventions as defined by policy and best practice providers [8], [53].\nCorrect and exhaustive representation of the domain that is intended to be represented.\nAlthough a variety of metrics and methods could be useful in ontology evaluation, it is not clear how to measure certain properties, which might be used as quality indicators as listed above. In the case of our experiment, we can judge the quality of the produced ontologies by observing selected features such as the structure of the taxonomy, the axiomatization, representational errors, completeness, and correctness. 
However, to quantitatively assess the structure of the taxonomy or axiomatization in an objective and automatically manner seems to be impossible with the given tools. Our measurement hypothesis to compare the learners’ ontology artefacts with a gold standard, and to apply ontology similarity metrics, seemed most adequate to capture the two latter points. Unfortunately, we cannot claim that this strategy was successful. In the end, these methods, which were originally developed for ontology alignment experiments, were too insensitive to detect relevant differences between the test and gold standard ontologies.\n\nBODY.FURTHER RESEARCH:\nFuture research should develop ontology evaluation methods that are able to quantify quality parameters of ontologies in accordance with a specific description logic expressivity regime (affected by the application use case and reasoning style). These quality parameters must be empirically validated for their explicit scopes. One direction of ontology evaluation can be the deployment of competency questions [54] which is a way to formalize requirements and assess their fulfilment. An automatic and objective evaluation of ontology may be possible with a strong formalization of competency questions as logical expressions.\nFurther research should also yield empirical evidence for the efficacy of certain beliefs, dogmas and practices in ontology development. Ontology development is not an ‘artistic craft’ but an engineering activity which should be led by a high level of evidence and not only by expert opinion in analogy to practice in evidence based medicine [55].\nHow should quality indicators for ontologies be operationally defined? The overall quality of an ontology is dependent on a variety of factors. Hence, quality indicators should at least be defined to provide useful measures for ontology development and deployment; furthermore they should be unambiguous and measurable in a reproducible way, ideally automatically to avoid any bias. How can the quality of ontologies be measured computationally? Although many methods for ontology evaluation have been described or proposed, many open questions remain on how to measure quality indicators of ontologies technically, reproducibly and automatically. To support good quality ontology design, there should be an easy-to-use toolbox of ontology metrics which can be applied from the ontology editor of choice. Using such a toolbox, the developers can evaluate their ontology artefacts at each step of the development process and judge on how to proceed.\nHow effective are specific ontology designs? It is not enough to claim that a certain type of development, top-level ontology, taxonomy or ontological commitment will result in better ontologies. At present, such claims are often brought forward without any empirical evidence. To overcome this situation, it is desirable to provide empirical evidence which methods of ontology design are more effective to reach specific objectives.\nHow can the behaviour of ontology developers be changed effectively? Evidence on good practice is not enough. Ultimately, good practices and new methods of development have to be implemented in the ‘community of developers’. To understand and guide those complex socio-technical processes it must be understood how to change thinking, behaviour, communication and tools of developers. To investigate this complex network more educational studies are necessary. 
Furthermore, this educational research must adhere to the premise of proceeding incrementally by strictly changing only one parameter at a time in the experimental setting of each study [56].\n\n\nBODY.CONCLUSIONS:\nInadequate knowledge and insufficient skills of ontology developers are among the causes for quality problems of ontologies in the biomedical domain. To improve this situation, we developed GoodOD, a guideline for good ontology design, optimized for the development of OWL ontologies using description logics for representation and reasoning. We implemented a training course based on this guideline. In a randomized controlled trial with 24 students we investigated the efficacy of the guideline based training on the quality of developed ontology modules and patterns.\nWe could not detect enhanced quality in terms of similarity to gold standard representations of the ontology artefacts produced after the selective training, when compared to artefacts built without prior training. Due to the study design outlined in this paper, the interpretation of such study results is difficult. Although we could not provide evidence for the effectiveness of a guideline-based approach to ontology design, the study neither provides evidence against the efficiency of this approach. More effort has to be invested in further research which can reliably provide empirical evidence whether specific development guidelines and practices in ontology design are effective.\n\n**Question:** Compared to untrained group what was the result of training on different topics in ontology development on similarity of the students’ ontology artefacts compared with gold standard ontologies developed by the authors before the experiment; secondary outcome was the intra-group similarity of group members’ ontologies?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13855
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA Comparative Study on the Effects of Vintage Nonpharmacological Techniques in Reducing Myopia (Bates eye exercise therapy vs. Trataka Yoga Kriya)\n\n\n\nABSTRACT.BACKGROUND:\nHuman eye captures light rays as they come and fall on the retina and convert them into an image. However, in myopia, light rays fall in front of retina, causing blurring of image. Correction of this is generally done using correcting devices such as corrective glasses and contact lenses. Existence of some alternative therapies is also noticed in literature.\n\nABSTRACT.AIM:\nTo compare the effects of Bates eye exercises and Trataka Yoga Kriya on myopia.\n\nABSTRACT.MATERIALS AND METHODOLOGY:\nEthical clearance was obtained from the institution, and informed consent was taken from participants. In this randomized comparative study, 24 participants (48 eyes) were taken based on inclusion and exclusion criteria and were randomly divided into two groups: Group A and Group B, where Bates eye exercise therapy and Trataka Yoga Kriya were given, respectively, for 8 weeks. Participants were assessed for their refractive errors and visual acuity pre- and post-intervention.\n\nABSTRACT.RESULTS:\nData were analyzed by SPSS version 20. Results obtained revealed that both Bates exercises and Trataka Yoga Kriya were not significantly effective in reducing refractive errors and in improving visual acuity (P value of refractive error in right eye: 0.4250; left eye: 0.4596; P value of visual acuity in right eye: 0.5691; left eye: 0.8952).\n\nABSTRACT.CONCLUSION:\nThis study concludes that nonpharmacological approaches such as eye exercises and Trataka Yoga Kriya are not significant on myopia.\n\nBODY.INTRODUCTION:\nIn myopia that is also called as short sightedness, there is a problem in the refraction of light, in which when the accommodation is at rest, the parallel rays of light from the space come to focus in front of retina.[1] For the visual disability throughout the world, it is the common cause.[2] About 69% of the general population in India has been reported to be affected with myopia.[3] The prevailing treatment of refractive errors which are commonly used these days are glasses, contact lenses and refractive surgeries.\nThe thought of substitution of the prevailing modalities of treatments leads the thinkers to the nonpharmacological therapies such as “Bates eye exercise therapy” and some ancient techniques such as “Trataka Yoga Kriya.” Some ocular strengthening therapies are involved in Bates method, by which the ocular health can be maintained and promoted. However, this kind of therapy is yet underevaluated by the modern sciences.[2] The mainstream ophthalmologists rejected the Bates method during his generation and remains as same even today.[45] Anyhow, it is seen that the eye exercises strengthen the eye muscles, and thus, diplopia and the point of convergence decrease. 
Meanwhile, Trataka Yoga Kriya of the ancient Indian tradition has been practiced throughout India and is strongly believed to increase the eye sight.[6]\nIn the present research scenario, due to the lack of evidential studies over nonpharmacological treatment approaches in treating myopia, there is a need to study for different alternative approaches. This will help in extending the scope of physical therapy in modern medicine. Hence, the purpose of this study is to compare Bates exercises and Yoga in patients with myopia.\n\nBODY.OBJECTIVE:\nTo evaluate the effects of Bates eye exercises and Trataka Yoga Kriya on myopia by comparing the changes in refractive errors and visual acuity.\n\nBODY.MATERIALS AND METHODS:\nA comparative study was conducted for 6 months on a sample size of 48 participants with myopia that was calculated using the G-power software. The inclusion criteria were an age group of 18–25 years, a refractive index ranging from −1.00 to −10.00 diopters, both genders, i.e., male and female, and myopia for the past 5 years, and the exclusion criteria were any lenticular or corneal opacity, other known ocular pathology, and infantile myopia. After receiving informed consent, participants were assessed by an ophthalmologist and the baseline values of outcomes were noted. The assessor was blinded and through simple randomization divided the participants into two groups with 24 participants and/or 48 eyes each. Participants were taken for a subjective and objective assessment before and after the treatment. The materials used in this study were Snellen's chart, lenses tray, autorefraction equipment, room temperature water, ice cold water, napkins, and candles. Intervention was given for 8 weeks; at the end of which, participants were assessed by the same ophthalmologist.\nBODY.PROCEDURE:\nIn the Group A, the participants performed eye exercises for 8 weeks. The exercises given to this group were based on Bates technique; they were near to far shifting, palming, solarization, eye wash, eye squeezing, eye circles, and eye massage.\nIn the Group B, Trataka Yoga Kriya was performed by the participants for 8 weeks. The following activity was done by the participants of this group for 8 weeks. Trataka means steady gazing. It can be done on any external objects such as a candle, a leaf, a crystal, and a Shiva Linga. This included two steps; they were Antaranga Trataka and Bahiranga Trataka.\n\n\nBODY.RESULTS:\nThe results of this study were analyzed using the SPSS software version 20, in which the means and standard deviation of the normally distributed data of the participants were calculated. The graphs with P values and t-values were plotted using the GraphPad InStat 3.10 by GraphPad Software.\nThere was no statistical significance in the postintervention P values of the data obtained for both the groups receiving Bates eye exercises and Trataka Yoga Kriya, in terms of refractive errors and visual equity of the participants with myopia. 
The results of the study are interpreted in the form of tables and graphs as follows.\nBODY.INTRAGROUP COMPARISON OF REFRACTORY ERRORS AND VISUAL ACUITY:\nThe Table 1 and 2 depicts the intra group comparison of refractive errors and visual acuity within the groups respectively.\nTable 1Refractive errors within the groups\nTable 2Visual acuity within the groups\n\nBODY.INTERGROUP COMPARISON OF REFRACTIVE ERRORS AND VISUAL ACUITY:\nThe two-tailed t-test has been incorporated to assess the p-values of post-intervention of refractive errors and visual acuity among both right and left eyes respectively.\nFigure 1 depicts the P value (0.425) of postintervention for refractive errors in the right eyes for the Group A and Group B. Whereas, Figure 2 depicts the P value (0.459) of postintervention for refractive errors in the left eyes for the Group A and Group B respectively.\nFigure 1P value of postintervention in the right eyes of Group A and Group B (refractive errors)\nFigure 2P value of postintervention in the left eyes of Group A and Group B (refractive errors)\nFigure 3 depicts the P value (0.656) of postintervention for visual acuity in the left eyes for the Group A and Group B. Whereas, Figure 4 depicts the P value (0.982) of postintervention for visual acuity in the right eyes for the Group A and Group B respectively. Although none of these P values are statistically significant.\nFigure 3P value of postintervention in the left eyes of Group A and Group B (visual acuity)\nFigure 4P value of postintervention in the right eyes of Group A and Group B (visual acuity)\n\n\nBODY.DISCUSSION:\nMyopia is a leading cause of loss of vision throughout the world, and its prevalence is increasing. The literature review shows that previous studies have stated that there is great effect of different eye exercises on refractive error.[67] The reason behind the effect these eye exercises show is thought to be as it strengthens accommodation, which is the power of changing focus of eyes for vision. However, the basic concept behind Trataka Yoga Kriya is relaxation of mind and eye that in turn improves vision.\nIn this study, it was seen that there was a slight reduction in the myopia by performing the eye exercises. Hence, it states that Bates eye exercise has significant effect over the refractive errors over the participants involved in this study. However, due to lack of evidential studies over this, Bates method and Trataka Yoga are not yet proved to have a significant effect in reducing myopia.\nThe participants in this study got a very minimal reduction of power. Therefore, it could be considered because of stress relief phenomena. Eventually, selection bias (the participants with range of refractive error were wide) and subjective assessment errors might also be some of the reasons. Certain variables were uncontrolled in this study, and their effect influenced the results was assumed to be null.\nWhether or not the participants in the eye exercises group were actually doing the eye exercises in proper repetition and accuracy and the subjects in the Trataka group, how far were able to imagine the image of the candle flame in their inner vision. The lifestyle and dietary habits of the participants were assumed to have no effect on the muscle strengthening and improvement in the accommodation.\nThe future directions would be that the results in this study could be applied over the population diagnosed with myopia. 
This study did not include a long-term intervention or follow-up period, even though the benefits of such exercises are expected to accrue over the long term. More research is needed to provide strong evidence on whether Bates eye exercises and Trataka Yoga are effective in reducing myopia.\nBased on the results obtained, the limitations of this study were the small sample size, the short duration of the intervention protocol, and the fact that Trataka was tiresome for the participants to perform, as they had to gaze constantly at the candle flame, which caused them inconvenience. In addition, because the participants were students, they were asked to perform the exercises and Trataka before and after college hours, respectively, which made it inconvenient for them to do the activities regularly.\n\nBODY.CONCLUSION:\nThis study concluded that neither Bates eye exercise therapy nor Trataka Yoga Kriya was effective in reducing refractive errors in myopia. No significant postintervention improvement in refractive errors or visual acuity was found in either group.\nBODY.FINANCIAL SUPPORT AND SPONSORSHIP:\nNil.\n\nBODY.CONFLICTS OF INTEREST:\nThere are no conflicts of interest.\n\n**Question:** Compared to Trataka Yoga Kriya, what was the result of Bates eye exercise therapy on refractive errors and visual acuity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13856
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect and Safety of Morphine Use in Acute Anterior ST‐Segment Elevation Myocardial InfarctionBonin et al\n\n\n\nABSTRACT.BACKGROUND:\nMorphine is commonly used to treat chest pain during myocardial infarction, but its effect on cardiovascular outcome has never been directly evaluated. The aim of this study was to examine the effect and safety of morphine in patients with acute anterior ST‐segment elevation myocardial infarction followed up for 1 year.\n\nABSTRACT.METHODS AND RESULTS:\nWe used the database of the CIRCUS (Does Cyclosporine Improve Outcome in ST Elevation Myocardial Infarction Patients) trial, which included 969 patients with anterior ST‐segment elevation myocardial infarction, admitted for primary percutaneous coronary intervention. Two groups were defined according to use of morphine preceding coronary angiography. The composite primary outcome was the combined incidence of major adverse cardiovascular events, including cardiovascular death, heart failure, cardiogenic shock, myocardial infarction, unstable angina, and stroke during 1 year. A total of 554 (57.1%) patients received morphine at first medical contact. Both groups, with and without morphine treatment, were comparable with respect to demographic and periprocedural characteristics. There was no significant difference in major adverse cardiovascular events between patients who received morphine compared with those who did not (26.2% versus 22.0%, respectively; P=0.15). The all‐cause mortality was 5.3% in the morphine group versus 5.8% in the no‐morphine group (P=0.89). 
There was no difference between groups in infarct size as assessed by the creatine kinase peak after primary percutaneous coronary intervention (4023±118 versus 3903±149 IU/L; P=0.52).\n\nABSTRACT.CONCLUSIONS:\nIn anterior ST‐segment elevation myocardial infarction patients treated by primary percutaneous coronary intervention, morphine was used in half of patients during initial management and was not associated with a significant increase in major adverse cardiovascular events at 1 year.\n\n\nBODY.CLINICAL PERSPECTIVE:\nBODY.WHAT IS NEW?:\n\nThis retrospective study evaluating outcomes associated with morphine use in an ST‐segment elevation myocardial infarction population provides additional data that use of morphine is not associated with adverse outcomes.\n\n\n\nBODY.WHAT ARE THE CLINICAL IMPLICATIONS?:\n\nUse of morphine to relieve chest pain is safe in patients with ST‐segment elevation myocardial infarction.\n\n\n\n\n\n\nBODY.INTRODUCTION:\nMorphine is currently used and recommended for the treatment of chest pain during myocardial infarction, but the level of evidence is low, attributed to the lack of supportive clinical studies.1, 2, 3, 4\n\nMoreover, the American Heart Association has relegated morphine use in patients with non‐ST‐segment elevation myocardial infarction from a Class I to a Class IIa recommendation.3 This modification was driven by the results from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the American College of Cardiology/American Heart Association Guidelines) registry showing that, in patients with non‐ST‐segment elevation myocardial infarction, morphine use increased the risk of death and adverse outcome.5 Morphine has also been associated with suboptimal reperfusion success and a low myocardial salvage index after primary percutaneous coronary intervention (PPCI) in patients presenting with ST‐segment elevation myocardial infarction (STEMI).6\n\nNevertheless, in 2 independent cohorts from the FAST‐MI (French Registry of Acute ST‐elevation and non‐ST‐elevation Myocardial Infarction) registry, prehospital morphine use in STEMI patients did not increase in‐hospital complications or 1‐year mortality.7 Because of this controversy, additional investigations are necessary.\nThe aim of this study was to explore the effect of morphine on clinical outcomes in a population of patients with anterior STEMI referred for PPCI.\n\nBODY.METHODS:\nThe data, analytical methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure.\nBODY.STUDY DESIGN:\nWe used the database of the recently published CIRCUS (Does Cyclosporine Improve Outcome in ST Elevation Myocardial Infarction Patients) trial8, 9 for a post hoc analysis assessing the effect of morphine use in the initial management of patients with anterior STEMI undergoing PPCI.\nCIRCUS was an international, multicenter, double‐blind, placebo‐controlled trial conducted between April 2011 and February 2014. The CIRCUS trial showed no significant effect of cyclosporine on the clinical outcomes of patients with anterior STEMI.9\n\nCIRCUS study was performed in accord with the principles of the Declaration of Helsinki and the European guidelines for Good Clinical Practice. 
Approval was obtained from the ethics committees in the relevant countries, and written consent for data use was obtained from all patients.\n\nBODY.STUDY POPULATION:\nWe analyzed data from the intent‐to‐treat population of the CIRCUS trial.\nEligible patients (male and female) were aged ≥18 years and presented within 12 hours of onset of symptoms evocative of acute coronary syndrome (ACS). They were required to have ST‐segment elevation ≥0.2 mV in 2 anterior contiguous leads and to be scheduled for PPCI. Written informed consent was obtained from all patients.\nExclusion criteria included cardiogenic shock or loss of consciousness during the prehospital period, known hypersensitivity to cyclosporine, known kidney or liver failure, pregnancy or absence of contraception in women of childbearing age, and any disorder associated with immunological dysfunction within the previous 6 months.\n\nBODY.STUDY TREATMENT:\nCIRCUS patients were divided into 2 groups with and without intravenous morphine before PPCI. In a post‐hoc analysis setting, we hypothesized that morphine use would be superior to the absence of morphine use in terms of primary outcomes.\n\nBODY.STUDY OUTCOMES:\nThe primary outcome was the rate of occurrence of major adverse cardiovascular event (MACE) including cardiovascular death, heart failure, cardiogenic shock, recurrent myocardial infarction, unstable angina, and stroke during 1 year of follow‐up. Cardiogenic shock was defined as systolic blood pressure <80 mm Hg for >30 minutes, unresponsive to fluid replacement, and associated with signs of peripheral and end‐organ hypoperfusion. Heart failure was defined as clinical symptoms requiring initiation or intensification of heart failure treatment with intravenous administration of diuretics. Unstable angina was defined as the presence of acute chest pain associated with ST depression, or new onset of negative T waves, and no elevation of troponins.\nSecondary outcomes included rate of occurrence of individual MACE, all‐cause death, adverse left ventricular remodeling during 1 year of follow‐up, and the creatine phosphokinase (CPK) peak blood concentration in the 24 hours post‐PPCI.\nMACE were adjudicated by an event validation committee composed of at least 3 cardiologists/physicians blinded to treatment. Total CPK blood concentration was measured in peripheral total blood at the time of PPCI and 4, 12, and 24 hours post‐PPCI. CPK measures were conducted locally, in each participating center. CPK peak concentration was defined as the highest value collected during the 24 hours post‐PPCI. It was used as an indirect evaluation of infarct size.10, 11 Adverse left ventricular remodeling was defined as an increase of 15% or more of end diastolic left ventricular volume indexed to body surface area. End diastolic volume was measured at discharge and at 1 year by blinded expert readers. The measures were performed on echocardiographic data, according to the Simpson biplane rule.\n\nBODY.SUBGROUP ANALYSIS:\nCyclosporine is an inhibitor of cerebral12 and intestinal13 P‐glycoprotein. P‐glycoproteins are involved in morphine intestinal absorption14 and neurological circulation.15 This P‐glycoprotein inhibition could lead to an increased effect of morphine.14\n\nTo evaluate the effect of this pharmacological interaction on our results, we compared the 4 subgroups: morphine with cyclosporine; morphine with placebo; no morphine with cyclosporine; and no morphine with placebo on MACE occurrence and on infarct size. 
We also explored morphine with cyclosporine interaction effect on MACE occurrence by 2‐way ANOVA with an interaction term.\n\nBODY.STATISTICAL ANALYSIS:\nAll statistical analyses were post hoc and were not prespecified in the original CIRCUS Statistical Analysis Plan.\nContinuous variables with normal distribution are expressed as means and SDs. Continuous variables with a non‐normal distribution are expressed as medians and interquartile ranges. Normality was tested with the Shapiro–Wilk test. Categorical variables are expressed as percentages. The study population was divided into 2 groups according to morphine use. Comparison of baseline characteristics or outcomes was performed using the chi‐squared test or Fischer's exact test, as appropriate, for categorical variables. The Student t\nFor all comparisons, a value of P<0.05 was considered statistically significant. When appropriate, 95% confidence intervals were calculated.\nAll statistical analyses were conducted using STATA software (version SE 14.2; (StataCorp LP, College Station, TX).\n\n\nBODY.RESULTS:\nThe intent‐to‐treat CIRCUS population included 969 patients.8, 9 Two patients were not included in our analysis because of lack of information on morphine use. Morphine was used before PPCI in 554 (57.3%) patients. Baseline characteristics (Table 1) and periprocedural characteristics (Table 2) were well balanced between groups with and without morphine use. There was a nonsignificant trend toward younger age and a shorter total ischemic time in patients receiving morphine. There was also a trend toward more heart failure and cardiogenic shock in the morphine group: 13.1% versus 10.2% of patients admitted in Killip 2 or 3 and 1.4% versus 0% of patients admitted with cardiogenic in morphine and no‐morphine group, respectively (Table 2).\nTable 1Characteristics of the Patients at Baseline\nAll Patients (n=967)Patients With Morphine (n=554)Patients Without Morphine (n=413)\nP Value (Wilcoxon or Fisher's Test)Age, y\n60±13\n59±13\n61±13\n0.07\nMale sex, %\n82\n82\n82\n0.73\nBody mass index, kg/m2\n\n27±4\n27±4\n27±4\n0.14\nCurrent smoker, %\n42\n43\n41\n0.39\nHypertension, %\n37\n38\n37\n0.74\nDiabetes mellitus, %\n13\n13\n13\n1.00\nDyslipidemia, %\n38\n40\n36\n0.18\nPrevious myocardial infarction, %\n6\n6\n5\n0.67\nPrevious PCI, %\n7\n7\n6\n0.61\nLVEF, %\n47±10\n47±10\n48±10\n0.16\nCyclosporine use before PPCI, %\n49\n51\n46\n0.12\nValues are expressed as means±SD. 
LVEF indicates left ventricular ejection fraction (measured by echocardiography); PCI, percutaneous coronary intervention; PPCI, primary percutaneous coronary intervention.\nTable 2Periprocedural Characteristics\nPatients With Morphine (n=554)Patients Without Morphine (n=413)Killip class at admission\n505\n353\nClass 1\n432 (85.5)\n317 (90.0)\nClass 2\n57 (11.3)\n29 (8.2)\nClass 3\n9 (1.8)\n7 (2.0)\nClass 4\n7 (1.4)\n0 (0.0)\nTotal ischemic time, mean±SD (h)\n3.99±2.39\n4.94±3.07\nRentrop Grade 2 or 3\n34/554 (6.1)\n31/413 (7.5)\nAngiographic thrombus burden ≥3\n358/533 (67.2)\n267/395 (67.9)\nArea at risk, mean±SD (%)a\n\n36.8±8.4\n35.6±8.6\nProximal localization\n252/552 (45.6)\n157/403 (38.9)\nMultivessel disease\n203/554 (36.6)\n154/413 (37.3)\nThrombolysis rate\n33/554 (5.9)\n27/413 (6.5)\nStenting\n492/554 (88.8)\n362/413 (87.6)\nNo reflow\n30/554 (5.4)\n25/413 (6.1)\nFinal TIMI\n548\n403\nTIMI=0\n9 (1.6)\n4 (1.0)\nTIMI=1\n2 (0.3)\n8 (2.0)\nTIMI=2\n29 (5.3)\n29 (7.2)\nTIMI=3\n508 (92.7)\n362 (89.8)\nTreatment at discharge\nDouble antiplatelet treatment\n511/544 (93.9)\n380/408 (93.1)\nBeta‐blockers\n507/544 (93.2)\n370/408 (90.7)\nStatins\n522/544 (95.9)\n389/408 (95.3)\nACEi\n482/544 (88.6)\n348/408 (85.3)\nARB\n10/544 (1.8)\n15/408 (3.7)\nCalcium‐channel blockers\n12/544 (2.2)\n13/408 (3.2)\nDiuretics\n138/554 (25.4)\n105/413 (25.4)\nAldosterone antagonists\n147/544 (27.0)\n82/408 (20.1)\nValues are expressed as numbers (percentages), unless otherwise specified. ACEi indicates angiotensin‐converting enzyme inhibitors; ARB, angiotensin receptor blockers; TIMI, Thrombolysis in Myocardial Infarction.\naUsing the APPROACH angiographic score.\nAt 1 year, 236 (24.4%) patients had experienced at least 1 MACE. There was no significant difference in occurrence of MACE between groups: 145 (26.2%) and 91 (22.0%) patients in the groups with and without morphine, respectively (P=0.15; Table 3). Cumulative Kaplan–Meier estimates for the first occurrence of MACE (Figure 1) were not significantly different between groups (P=0.10).\nTable 3Clinical Outcome After 1‐Year Follow‐up\nEventMorphine (n=554)No Morphine (n=413)\nP (Fischer's Test)Any MACEa\n\n145 (26.2)\n91 (22.0)\n0.15\nCardiovascular death\n29 (5.2)\n20 (4.8)\n0.88\nHeart failure\n110 (19.9)\n70 (16.9)\n0.28\nCardiogenic shock\n30 (5.4)\n19 (4.6)\n0.66\nRecurrent myocardial infarction\n21 (3.8)\n7 (1.7)\n0.08\nUnstable angina\n15 (2.7)\n8 (1.9)\n0.53\nStroke\n10 (1.8)\n9 (2.2)\n0.82\nValues are expressed as numbers (percentages). 
MACE indicates major adverse cardiovascular events.\naA patient with more than 1 clinical event was counted as having 1 MACE.\nFigure 1Kaplan–Meier curves for major adverse cardiovascular events (MACE).\nIn the Cox model, morphine use was not associated with the incidence of MACE (hazard ratio=1.25; 95% confidence interval [0.96; 1.62]; P=0.10), even after adjustment for age, ischemic time, infarct size (CPK peak), initial and final Thrombolysis in Myocardial Infarction flow, sex, smoking, hypertension, diabetes mellitus, previous myocardial infarction, and Killip class (hazard ratio=1.04; 95% confidence interval [0.75; 1.45]; P=0.82).\nIncidence of individual MACE during 1 year was not significantly different between groups (Table 3; Figure 2).\nFigure 2Occurrence of major adverse cardiovascular events (MACE) and individual events after 1‐year of follow‐up.\nThere was no statistically significant difference on MACE occurrence (P=0.56) and on infarct size (P=0.61) between the 4 different treatment subgroups (Table 4). Interaction term, even after adjustment, was not significant and so for each clinical outcome individually (Table S1).\nTable 4Subgroups Analysis, Evaluation of Morphine—Cyclosporine Interactions\nMorphine+Cyclosporine (n=266)Morphine+Placebo (n=246)No Morphine+Cyclosporine (n=178)No Morphine+Placebo (n=205)\nP (ANOVA)MACEsa\n\n70 (26.3)\n75 (30.5)\n44 (24.7)\n47 (22.9)\n0.56\nCPK peakb\n\n3945±2638\n4054±2677\n4061±3182\n3742±2571\n0.61\nCPK indicates creatine phosphokinase; MACE, major adverse cardiovascular events.\naValues are expressed as numbers (percent).\nbValues are expressed as mean±SD.\nThere was no significant difference in rates of all‐cause death at 1 year between groups (32 [5.3%] and 22 [5.8%] deaths in the groups with and without morphine, respectively; P=0.89).\nThe unadjusted Kaplan–Meier hazard curve for 1‐year all‐cause mortality is presented in Figure 3. No significant difference was observed between the groups with and without morphine (P=0.77).\nFigure 3Kaplan–Meier curves for 1‐year all‐cause mortality.\nCPK peak blood concentrations after PPCI were comparable in both groups (4023±118 and 3903±149 IU/L in the groups with and without morphine, respectively; P=0.52; Figure 4).\nFigure 4Creatine phosphokinase (CPK) blood concentration evolution during the first 24 hours following primary percutaneous coronary intervention (PPCI)—median, first and third quartiles, and first and ninth decile are presented for each measurement. Mixed‐model analysis for morphine effect on CPK kinetic curve: P=0.09.\nRates of adverse left ventricular remodeling were also similar in both groups (205 [37.0%] and 134 [32.4%] patients with adverse remodeling in the groups with and without morphine, respectively; P=0.21).\n\nBODY.DISCUSSION:\nIn a large cohort of patients with acute anterior STEMI, morphine was used in half of patients before PPCI and had no significant effect on the composite primary outcome including cardiovascular death, heart failure, cardiogenic shock, recurrent myocardial infarction, unstable angina, and stroke.\nFew other studies have evaluated the clinical outcomes of patients who received morphine to alleviate chest pain during an ACS.16 In line with our results, 2 retrospective studies concluded that morphine/intravenous narcotics did not adversely affect the outcomes in patients with ACS. 
In 1 study including 1758 patients (765 STEMI and 993 non‐ST‐segment elevation myocardial infarction), the rate of 30‐day death was not increased with the use of intravenous narcotics.17 In the second study, the analysis of 2 cohorts of STEMI patients (2438 patients from the FAST‐MI 2010 registry and 1726 from the FAST‐MI 2005)7 suggested that prehospital morphine use was not associated with increased in‐hospital complications and 1‐year mortality.\nOn the contrary, an American retrospective study based on the CRUSADE registry,5 which included 57 039 patients who presented with non‐ST‐segment elevation ACS, reported an increase in mortality and challenged the safety of morphine use in these patients. It is difficult to compare the CRUSADE study with ours because of differences in population, study dates, and design. The CRUSADE study retrospectively enrolled non‐ST‐segment elevation ACS patients, who are different from STEMI patients. It was conducted between 2002 and 2003, at a time when oral antiplatelet agent use was limited to clopidogrel, which has more pharmacological interactions with morphine than more recently approved agents such as ticagrelor and prasugrel.18, 19\n\nThus, it seems that there is currently limited evidence for adverse outcomes associated with morphine use in STEMI patients, whereas additional caution may be necessary in NSTEMI patients.\nIn our study, morphine use did not appear to be associated with any significant cardioprotective effect. Myocardial protection with opioid use has been inconsistently reported. Several studies suggest a cardioprotective effect especially with morphine in the surgical context of coronary artery bypass graft, but these studies were conducted in small surgical patient populations and assessed indirect outcomes.20, 21 In STEMI patients undergoing PPCI, a cardioprotective effect could be demonstrated for morphine in addition to the basal effect of remote ischemic conditioning,22 but no additional myocardial protection was observed with fentanyl in patients undergoing elective PPCI.23\n\nIn experimental models of ischemia reperfusion injury, morphine inhibits the mitochondrial permeability transition pore24 opening through μ and κ myocardial opioid receptors,25, 26 and induced a significant cardioprotective effect.27 Morphine also stimulates the reperfusion injury salvage kinase pathway.24 In animal studies, these mechanisms have been associated with a reduction of infarct size following intravenous or intrathecal morphine use before or immediately after reperfusion.28, 29, 30 However, the translation from animal models to humans is not straightforward owing to disease complexity and associated comorbidities, and our data do not suggest any cardioprotective effect of morphine on ischemia‐reperfusion injury. In the near future, 2 noncurrently published studies (Clinical Trials NCT01186445 and NCT01738100) may provide additional understanding of the cardioprotective effects of intracoronary morphine during ACS.\nIn recent years, there has been a growing concern about the use of morphine for pain relief in patients with ACS. Morphine delays and attenuates the release peak and efficacy of oral antiplatelet agents in ACS31, 32, 33 and healthy18, 19, 34 patients, by inhibiting gastrointestinal absorption35 and inducing to vomiting.16 However, there is no evidence for the clinical relevance of these effects. 
In our study, although morphine was not associated with any adverse outcome in STEMI patients, there was a nonsignificant trend toward an increase in individual MACE in patients treated with morphine. In particular, recurrent myocardial infarction and heart failure seemed to be more frequent when morphine was used before PPCI, although these results did not reach statistical significance (P=0.08 and 0.28, respectively; Table 3).\nRegardless of the potential outcome modifications associated with morphine use during ACS, pain release is essential to decrease patients' discomfort and to avoid proarrhythmic anxiety. In our study, 42.7% patients did not receive any analgesia, meaning that pain was probably undertreated. This is not acceptable, and pain release should be a priority of any medical care. Without ethical possibility of a placebo‐controlled, randomized study, it is difficult to conclude definitively about morphine safety in this indication. There is no analgesic alternatives to morphine in this indication, and the only new approach currently evaluated is equimolar oxygen/nitrous oxide mixture (MEOPA) associated with paracetamol, under investigation in the ongoing SCADOLII (Comparison of MEOPA + Paracetamol Versus Morphine Treatment in Acute Coronary Syndrome Analgesia) trial. In the absence of clear evidence of adverse effects, morphine should be used without restraint in STEMI as was concluded in a recent review.16\n\nBODY.STUDY LIMITATIONS:\nAlthough analyses were conducted on a prospective homogenous anterior STEMI population, they were post hoc analyses, not prespecified in the original CIRCUS protocol, leading to a loss of statistical relevance and limited statistical power.\nGiven that CIRCUS trial randomization was not stratified on morphine use, the effect of unmeasured confounders on the results cannot be ruled out. Anyway, patients in the morphine group seemed to be sicker at initial medical care than those in the no‐morphine group: 13.1% of patients were admitted with heart failure (Killip 2 or 3) in the morphine group versus 10.2% in the no‐morphine group; 1.4% of patients were admitted with cardiogenic shock in the morphine group versus 0% in the no‐morphine group. There was no randomization, nor precise recommendation, for morphine use in the CIRCUS protocol. So, morphine could have been used as a symptomatic treatment, in combination with continuous positive airway pressure, for patients suffering from dyspnea or acute heart failure at first medical care. Initial heart failure and/or cardiogenic shock are strongly associated with a poorest prognosis.36 Despite this initial difference, the morphine group did not experience significantly more adverse events than the no‐morphine group.\nThe study was not initially designed to evaluate the effect of morphine in patients with anterior STEMI, so the investigation was limited by the number of subjects. It is difficult to evaluate precisely the statistical power of our work. For example, if we previously calculated the number needed to treat with the hypothesis of the 4.2% absolute difference between groups observed in our study, with 80% power and 5% alpha risk, triple sample size would have been necessary. So, limited sample sizes may explain the lack of statistical significance of the observed difference in our work.\nThe use of cyclosporine, even though its known interactions with morphine intestinal absorption and neurological effect, had no significant effect on the results of our study. 
There was no statistically significant difference between the 4 subgroups defined by the use of cyclosporine and morphine, nor a significant interaction between morphine and cyclosporine.\nIn the morphine group, we did not record the dose of morphine administered or the corresponding pain scales. Further studies are needed to assess precisely pain management in STEMI patients and its potential impact on clinical outcomes.\n\n\nBODY.CONCLUSION:\n\n\nBODY.DISCLOSURES:\nNone.\n\nBODY.SUPPORTING INFORMATION:\n\nTable S1. Interaction of Morphine With Cyclosporine Effect on MACE Occurrence, Explored by 2‐Way ANOVA With Interaction Term\n\n**Question:** Compared to the no‐morphine group, what was the result of morphine on major adverse cardiovascular events?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13857
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect and Safety of Morphine Use in Acute Anterior ST‐Segment Elevation Myocardial InfarctionBonin et al\n\n\n\nABSTRACT.BACKGROUND:\nMorphine is commonly used to treat chest pain during myocardial infarction, but its effect on cardiovascular outcome has never been directly evaluated. The aim of this study was to examine the effect and safety of morphine in patients with acute anterior ST‐segment elevation myocardial infarction followed up for 1 year.\n\nABSTRACT.METHODS AND RESULTS:\nWe used the database of the CIRCUS (Does Cyclosporine Improve Outcome in ST Elevation Myocardial Infarction Patients) trial, which included 969 patients with anterior ST‐segment elevation myocardial infarction, admitted for primary percutaneous coronary intervention. Two groups were defined according to use of morphine preceding coronary angiography. The composite primary outcome was the combined incidence of major adverse cardiovascular events, including cardiovascular death, heart failure, cardiogenic shock, myocardial infarction, unstable angina, and stroke during 1 year. A total of 554 (57.1%) patients received morphine at first medical contact. Both groups, with and without morphine treatment, were comparable with respect to demographic and periprocedural characteristics. There was no significant difference in major adverse cardiovascular events between patients who received morphine compared with those who did not (26.2% versus 22.0%, respectively; P=0.15). The all‐cause mortality was 5.3% in the morphine group versus 5.8% in the no‐morphine group (P=0.89). 
There was no difference between groups in infarct size as assessed by the creatine kinase peak after primary percutaneous coronary intervention (4023±118 versus 3903±149 IU/L; P=0.52).\n\nABSTRACT.CONCLUSIONS:\nIn anterior ST‐segment elevation myocardial infarction patients treated by primary percutaneous coronary intervention, morphine was used in half of patients during initial management and was not associated with a significant increase in major adverse cardiovascular events at 1 year.\n\n\nBODY.CLINICAL PERSPECTIVE:\nBODY.WHAT IS NEW?:\n\nThis retrospective study evaluating outcomes associated with morphine use in an ST‐segment elevation myocardial infarction population provides additional data that use of morphine is not associated with adverse outcomes.\n\n\n\nBODY.WHAT ARE THE CLINICAL IMPLICATIONS?:\n\nUse of morphine to relieve chest pain is safe in patients with ST‐segment elevation myocardial infarction.\n\n\n\n\n\n\nBODY.INTRODUCTION:\nMorphine is currently used and recommended for the treatment of chest pain during myocardial infarction, but the level of evidence is low, attributed to the lack of supportive clinical studies.1, 2, 3, 4\n\nMoreover, the American Heart Association has relegated morphine use in patients with non‐ST‐segment elevation myocardial infarction from a Class I to a Class IIa recommendation.3 This modification was driven by the results from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the American College of Cardiology/American Heart Association Guidelines) registry showing that, in patients with non‐ST‐segment elevation myocardial infarction, morphine use increased the risk of death and adverse outcome.5 Morphine has also been associated with suboptimal reperfusion success and a low myocardial salvage index after primary percutaneous coronary intervention (PPCI) in patients presenting with ST‐segment elevation myocardial infarction (STEMI).6\n\nNevertheless, in 2 independent cohorts from the FAST‐MI (French Registry of Acute ST‐elevation and non‐ST‐elevation Myocardial Infarction) registry, prehospital morphine use in STEMI patients did not increase in‐hospital complications or 1‐year mortality.7 Because of this controversy, additional investigations are necessary.\nThe aim of this study was to explore the effect of morphine on clinical outcomes in a population of patients with anterior STEMI referred for PPCI.\n\nBODY.METHODS:\nThe data, analytical methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure.\nBODY.STUDY DESIGN:\nWe used the database of the recently published CIRCUS (Does Cyclosporine Improve Outcome in ST Elevation Myocardial Infarction Patients) trial8, 9 for a post hoc analysis assessing the effect of morphine use in the initial management of patients with anterior STEMI undergoing PPCI.\nCIRCUS was an international, multicenter, double‐blind, placebo‐controlled trial conducted between April 2011 and February 2014. The CIRCUS trial showed no significant effect of cyclosporine on the clinical outcomes of patients with anterior STEMI.9\n\nCIRCUS study was performed in accord with the principles of the Declaration of Helsinki and the European guidelines for Good Clinical Practice. 
Approval was obtained from the ethics committees in the relevant countries, and written consent for data use was obtained from all patients.\n\nBODY.STUDY POPULATION:\nWe analyzed data from the intent‐to‐treat population of the CIRCUS trial.\nEligible patients (male and female) were aged ≥18 years and presented within 12 hours of onset of symptoms evocative of acute coronary syndrome (ACS). They were required to have ST‐segment elevation ≥0.2 mV in 2 anterior contiguous leads and to be scheduled for PPCI. Written informed consent was obtained from all patients.\nExclusion criteria included cardiogenic shock or loss of consciousness during the prehospital period, known hypersensitivity to cyclosporine, known kidney or liver failure, pregnancy or absence of contraception in women of childbearing age, and any disorder associated with immunological dysfunction within the previous 6 months.\n\nBODY.STUDY TREATMENT:\nCIRCUS patients were divided into 2 groups with and without intravenous morphine before PPCI. In a post‐hoc analysis setting, we hypothesized that morphine use would be superior to the absence of morphine use in terms of primary outcomes.\n\nBODY.STUDY OUTCOMES:\nThe primary outcome was the rate of occurrence of major adverse cardiovascular event (MACE) including cardiovascular death, heart failure, cardiogenic shock, recurrent myocardial infarction, unstable angina, and stroke during 1 year of follow‐up. Cardiogenic shock was defined as systolic blood pressure <80 mm Hg for >30 minutes, unresponsive to fluid replacement, and associated with signs of peripheral and end‐organ hypoperfusion. Heart failure was defined as clinical symptoms requiring initiation or intensification of heart failure treatment with intravenous administration of diuretics. Unstable angina was defined as the presence of acute chest pain associated with ST depression, or new onset of negative T waves, and no elevation of troponins.\nSecondary outcomes included rate of occurrence of individual MACE, all‐cause death, adverse left ventricular remodeling during 1 year of follow‐up, and the creatine phosphokinase (CPK) peak blood concentration in the 24 hours post‐PPCI.\nMACE were adjudicated by an event validation committee composed of at least 3 cardiologists/physicians blinded to treatment. Total CPK blood concentration was measured in peripheral total blood at the time of PPCI and 4, 12, and 24 hours post‐PPCI. CPK measures were conducted locally, in each participating center. CPK peak concentration was defined as the highest value collected during the 24 hours post‐PPCI. It was used as an indirect evaluation of infarct size.10, 11 Adverse left ventricular remodeling was defined as an increase of 15% or more of end diastolic left ventricular volume indexed to body surface area. End diastolic volume was measured at discharge and at 1 year by blinded expert readers. The measures were performed on echocardiographic data, according to the Simpson biplane rule.\n\nBODY.SUBGROUP ANALYSIS:\nCyclosporine is an inhibitor of cerebral12 and intestinal13 P‐glycoprotein. P‐glycoproteins are involved in morphine intestinal absorption14 and neurological circulation.15 This P‐glycoprotein inhibition could lead to an increased effect of morphine.14\n\nTo evaluate the effect of this pharmacological interaction on our results, we compared the 4 subgroups: morphine with cyclosporine; morphine with placebo; no morphine with cyclosporine; and no morphine with placebo on MACE occurrence and on infarct size. 
We also explored morphine with cyclosporine interaction effect on MACE occurrence by 2‐way ANOVA with an interaction term.\n\nBODY.STATISTICAL ANALYSIS:\nAll statistical analyses were post hoc and were not prespecified in the original CIRCUS Statistical Analysis Plan.\nContinuous variables with normal distribution are expressed as means and SDs. Continuous variables with a non‐normal distribution are expressed as medians and interquartile ranges. Normality was tested with the Shapiro–Wilk test. Categorical variables are expressed as percentages. The study population was divided into 2 groups according to morphine use. Comparison of baseline characteristics or outcomes was performed using the chi‐squared test or Fischer's exact test, as appropriate, for categorical variables. The Student t\nFor all comparisons, a value of P<0.05 was considered statistically significant. When appropriate, 95% confidence intervals were calculated.\nAll statistical analyses were conducted using STATA software (version SE 14.2; (StataCorp LP, College Station, TX).\n\n\nBODY.RESULTS:\nThe intent‐to‐treat CIRCUS population included 969 patients.8, 9 Two patients were not included in our analysis because of lack of information on morphine use. Morphine was used before PPCI in 554 (57.3%) patients. Baseline characteristics (Table 1) and periprocedural characteristics (Table 2) were well balanced between groups with and without morphine use. There was a nonsignificant trend toward younger age and a shorter total ischemic time in patients receiving morphine. There was also a trend toward more heart failure and cardiogenic shock in the morphine group: 13.1% versus 10.2% of patients admitted in Killip 2 or 3 and 1.4% versus 0% of patients admitted with cardiogenic in morphine and no‐morphine group, respectively (Table 2).\nTable 1Characteristics of the Patients at Baseline\nAll Patients (n=967)Patients With Morphine (n=554)Patients Without Morphine (n=413)\nP Value (Wilcoxon or Fisher's Test)Age, y\n60±13\n59±13\n61±13\n0.07\nMale sex, %\n82\n82\n82\n0.73\nBody mass index, kg/m2\n\n27±4\n27±4\n27±4\n0.14\nCurrent smoker, %\n42\n43\n41\n0.39\nHypertension, %\n37\n38\n37\n0.74\nDiabetes mellitus, %\n13\n13\n13\n1.00\nDyslipidemia, %\n38\n40\n36\n0.18\nPrevious myocardial infarction, %\n6\n6\n5\n0.67\nPrevious PCI, %\n7\n7\n6\n0.61\nLVEF, %\n47±10\n47±10\n48±10\n0.16\nCyclosporine use before PPCI, %\n49\n51\n46\n0.12\nValues are expressed as means±SD. 
LVEF indicates left ventricular ejection fraction (measured by echocardiography); PCI, percutaneous coronary intervention; PPCI, primary percutaneous coronary intervention.\nTable 2Periprocedural Characteristics\nPatients With Morphine (n=554)Patients Without Morphine (n=413)Killip class at admission\n505\n353\nClass 1\n432 (85.5)\n317 (90.0)\nClass 2\n57 (11.3)\n29 (8.2)\nClass 3\n9 (1.8)\n7 (2.0)\nClass 4\n7 (1.4)\n0 (0.0)\nTotal ischemic time, mean±SD (h)\n3.99±2.39\n4.94±3.07\nRentrop Grade 2 or 3\n34/554 (6.1)\n31/413 (7.5)\nAngiographic thrombus burden ≥3\n358/533 (67.2)\n267/395 (67.9)\nArea at risk, mean±SD (%)a\n\n36.8±8.4\n35.6±8.6\nProximal localization\n252/552 (45.6)\n157/403 (38.9)\nMultivessel disease\n203/554 (36.6)\n154/413 (37.3)\nThrombolysis rate\n33/554 (5.9)\n27/413 (6.5)\nStenting\n492/554 (88.8)\n362/413 (87.6)\nNo reflow\n30/554 (5.4)\n25/413 (6.1)\nFinal TIMI\n548\n403\nTIMI=0\n9 (1.6)\n4 (1.0)\nTIMI=1\n2 (0.3)\n8 (2.0)\nTIMI=2\n29 (5.3)\n29 (7.2)\nTIMI=3\n508 (92.7)\n362 (89.8)\nTreatment at discharge\nDouble antiplatelet treatment\n511/544 (93.9)\n380/408 (93.1)\nBeta‐blockers\n507/544 (93.2)\n370/408 (90.7)\nStatins\n522/544 (95.9)\n389/408 (95.3)\nACEi\n482/544 (88.6)\n348/408 (85.3)\nARB\n10/544 (1.8)\n15/408 (3.7)\nCalcium‐channel blockers\n12/544 (2.2)\n13/408 (3.2)\nDiuretics\n138/554 (25.4)\n105/413 (25.4)\nAldosterone antagonists\n147/544 (27.0)\n82/408 (20.1)\nValues are expressed as numbers (percentages), unless otherwise specified. ACEi indicates angiotensin‐converting enzyme inhibitors; ARB, angiotensin receptor blockers; TIMI, Thrombolysis in Myocardial Infarction.\naUsing the APPROACH angiographic score.\nAt 1 year, 236 (24.4%) patients had experienced at least 1 MACE. There was no significant difference in occurrence of MACE between groups: 145 (26.2%) and 91 (22.0%) patients in the groups with and without morphine, respectively (P=0.15; Table 3). Cumulative Kaplan–Meier estimates for the first occurrence of MACE (Figure 1) were not significantly different between groups (P=0.10).\nTable 3Clinical Outcome After 1‐Year Follow‐up\nEventMorphine (n=554)No Morphine (n=413)\nP (Fischer's Test)Any MACEa\n\n145 (26.2)\n91 (22.0)\n0.15\nCardiovascular death\n29 (5.2)\n20 (4.8)\n0.88\nHeart failure\n110 (19.9)\n70 (16.9)\n0.28\nCardiogenic shock\n30 (5.4)\n19 (4.6)\n0.66\nRecurrent myocardial infarction\n21 (3.8)\n7 (1.7)\n0.08\nUnstable angina\n15 (2.7)\n8 (1.9)\n0.53\nStroke\n10 (1.8)\n9 (2.2)\n0.82\nValues are expressed as numbers (percentages). 
MACE indicates major adverse cardiovascular events.\naA patient with more than 1 clinical event was counted as having 1 MACE.\nFigure 1Kaplan–Meier curves for major adverse cardiovascular events (MACE).\nIn the Cox model, morphine use was not associated with the incidence of MACE (hazard ratio=1.25; 95% confidence interval [0.96; 1.62]; P=0.10), even after adjustment for age, ischemic time, infarct size (CPK peak), initial and final Thrombolysis in Myocardial Infarction flow, sex, smoking, hypertension, diabetes mellitus, previous myocardial infarction, and Killip class (hazard ratio=1.04; 95% confidence interval [0.75; 1.45]; P=0.82).\nIncidence of individual MACE during 1 year was not significantly different between groups (Table 3; Figure 2).\nFigure 2Occurrence of major adverse cardiovascular events (MACE) and individual events after 1‐year of follow‐up.\nThere was no statistically significant difference on MACE occurrence (P=0.56) and on infarct size (P=0.61) between the 4 different treatment subgroups (Table 4). Interaction term, even after adjustment, was not significant and so for each clinical outcome individually (Table S1).\nTable 4Subgroups Analysis, Evaluation of Morphine—Cyclosporine Interactions\nMorphine+Cyclosporine (n=266)Morphine+Placebo (n=246)No Morphine+Cyclosporine (n=178)No Morphine+Placebo (n=205)\nP (ANOVA)MACEsa\n\n70 (26.3)\n75 (30.5)\n44 (24.7)\n47 (22.9)\n0.56\nCPK peakb\n\n3945±2638\n4054±2677\n4061±3182\n3742±2571\n0.61\nCPK indicates creatine phosphokinase; MACE, major adverse cardiovascular events.\naValues are expressed as numbers (percent).\nbValues are expressed as mean±SD.\nThere was no significant difference in rates of all‐cause death at 1 year between groups (32 [5.3%] and 22 [5.8%] deaths in the groups with and without morphine, respectively; P=0.89).\nThe unadjusted Kaplan–Meier hazard curve for 1‐year all‐cause mortality is presented in Figure 3. No significant difference was observed between the groups with and without morphine (P=0.77).\nFigure 3Kaplan–Meier curves for 1‐year all‐cause mortality.\nCPK peak blood concentrations after PPCI were comparable in both groups (4023±118 and 3903±149 IU/L in the groups with and without morphine, respectively; P=0.52; Figure 4).\nFigure 4Creatine phosphokinase (CPK) blood concentration evolution during the first 24 hours following primary percutaneous coronary intervention (PPCI)—median, first and third quartiles, and first and ninth decile are presented for each measurement. Mixed‐model analysis for morphine effect on CPK kinetic curve: P=0.09.\nRates of adverse left ventricular remodeling were also similar in both groups (205 [37.0%] and 134 [32.4%] patients with adverse remodeling in the groups with and without morphine, respectively; P=0.21).\n\nBODY.DISCUSSION:\nIn a large cohort of patients with acute anterior STEMI, morphine was used in half of patients before PPCI and had no significant effect on the composite primary outcome including cardiovascular death, heart failure, cardiogenic shock, recurrent myocardial infarction, unstable angina, and stroke.\nFew other studies have evaluated the clinical outcomes of patients who received morphine to alleviate chest pain during an ACS.16 In line with our results, 2 retrospective studies concluded that morphine/intravenous narcotics did not adversely affect the outcomes in patients with ACS. 
In 1 study including 1758 patients (765 STEMI and 993 non‐ST‐segment elevation myocardial infarction), the rate of 30‐day death was not increased with the use of intravenous narcotics.17 In the second study, the analysis of 2 cohorts of STEMI patients (2438 patients from the FAST‐MI 2010 registry and 1726 from the FAST‐MI 2005)7 suggested that prehospital morphine use was not associated with increased in‐hospital complications and 1‐year mortality.\nOn the contrary, an American retrospective study based on the CRUSADE registry,5 which included 57 039 patients who presented with non‐ST‐segment elevation ACS, reported an increase in mortality and challenged the safety of morphine use in these patients. It is difficult to compare the CRUSADE study with ours because of differences in population, study dates, and design. The CRUSADE study retrospectively enrolled non‐ST‐segment elevation ACS patients, who are different from STEMI patients. It was conducted between 2002 and 2003, at a time when oral antiplatelet agent use was limited to clopidogrel, which has more pharmacological interactions with morphine than more recently approved agents such as ticagrelor and prasugrel.18, 19\n\nThus, it seems that there is currently limited evidence for adverse outcomes associated with morphine use in STEMI patients, whereas additional caution may be necessary in NSTEMI patients.\nIn our study, morphine use did not appear to be associated with any significant cardioprotective effect. Myocardial protection with opioid use has been inconsistently reported. Several studies suggest a cardioprotective effect especially with morphine in the surgical context of coronary artery bypass graft, but these studies were conducted in small surgical patient populations and assessed indirect outcomes.20, 21 In STEMI patients undergoing PPCI, a cardioprotective effect could be demonstrated for morphine in addition to the basal effect of remote ischemic conditioning,22 but no additional myocardial protection was observed with fentanyl in patients undergoing elective PPCI.23\n\nIn experimental models of ischemia reperfusion injury, morphine inhibits the mitochondrial permeability transition pore24 opening through μ and κ myocardial opioid receptors,25, 26 and induced a significant cardioprotective effect.27 Morphine also stimulates the reperfusion injury salvage kinase pathway.24 In animal studies, these mechanisms have been associated with a reduction of infarct size following intravenous or intrathecal morphine use before or immediately after reperfusion.28, 29, 30 However, the translation from animal models to humans is not straightforward owing to disease complexity and associated comorbidities, and our data do not suggest any cardioprotective effect of morphine on ischemia‐reperfusion injury. In the near future, 2 noncurrently published studies (Clinical Trials NCT01186445 and NCT01738100) may provide additional understanding of the cardioprotective effects of intracoronary morphine during ACS.\nIn recent years, there has been a growing concern about the use of morphine for pain relief in patients with ACS. Morphine delays and attenuates the release peak and efficacy of oral antiplatelet agents in ACS31, 32, 33 and healthy18, 19, 34 patients, by inhibiting gastrointestinal absorption35 and inducing to vomiting.16 However, there is no evidence for the clinical relevance of these effects. 
In our study, although morphine was not associated with any adverse outcome in STEMI patients, there was a nonsignificant trend toward an increase in individual MACE in patients treated with morphine. In particular, recurrent myocardial infarction and heart failure seemed to be more frequent when morphine was used before PPCI, although these results did not reach statistical significance (P=0.08 and 0.28, respectively; Table 3).\nRegardless of the potential outcome modifications associated with morphine use during ACS, pain relief is essential to decrease patients' discomfort and to avoid proarrhythmic anxiety. In our study, 42.7% of patients did not receive any analgesia, meaning that pain was probably undertreated. This is not acceptable, and pain relief should be a priority of any medical care. Without the ethical possibility of a placebo‐controlled, randomized study, it is difficult to conclude definitively about morphine safety in this indication. There are no analgesic alternatives to morphine in this indication, and the only new approach currently evaluated is an equimolar oxygen/nitrous oxide mixture (MEOPA) combined with paracetamol, under investigation in the ongoing SCADOLII (Comparison of MEOPA + Paracetamol Versus Morphine Treatment in Acute Coronary Syndrome Analgesia) trial. In the absence of clear evidence of adverse effects, morphine should be used without restraint in STEMI, as was concluded in a recent review.16\n\nBODY.STUDY LIMITATIONS:\nAlthough analyses were conducted on a prospective homogeneous anterior STEMI population, they were post hoc analyses, not prespecified in the original CIRCUS protocol, leading to a loss of statistical relevance and limited statistical power.\nGiven that CIRCUS trial randomization was not stratified on morphine use, the effect of unmeasured confounders on the results cannot be ruled out. In any case, patients in the morphine group seemed to be sicker at initial medical care than those in the no‐morphine group: 13.1% of patients were admitted with heart failure (Killip 2 or 3) in the morphine group versus 10.2% in the no‐morphine group; 1.4% of patients were admitted with cardiogenic shock in the morphine group versus 0% in the no‐morphine group. There was no randomization, nor precise recommendation, for morphine use in the CIRCUS protocol. So, morphine could have been used as a symptomatic treatment, in combination with continuous positive airway pressure, for patients suffering from dyspnea or acute heart failure at first medical care. Initial heart failure and/or cardiogenic shock are strongly associated with a poorer prognosis.36 Despite this initial difference, the morphine group did not experience significantly more adverse events than the no‐morphine group.\nThe study was not initially designed to evaluate the effect of morphine in patients with anterior STEMI, so the investigation was limited by the number of subjects. It is difficult to evaluate precisely the statistical power of our work. For example, if we had calculated the required sample size based on the 4.2% absolute difference between groups observed in our study, with 80% power and a 5% alpha risk, triple the sample size would have been necessary. Thus, the limited sample size may explain the lack of statistical significance of the observed difference in our work.\nThe use of cyclosporine, despite its known interactions with morphine intestinal absorption and neurological effects, had no significant effect on the results of our study. 
There was no statistically significant difference between the 4 different subgroups defined by the use of cyclosporine and of morphine, nor a significant interaction between morphine and cyclosporine.\nIn the morphine group, we did not record the dose of morphine that was administered, nor the corresponding pain scales. Further studies are needed to assess precisely pain management in STEMI patients and its potential impact on clinical outcomes.\n\n\nBODY.CONCLUSION:\n\n\nBODY.DISCLOSURES:\nNone.\n\nBODY.SUPPORTING INFORMATION:\n\nTable S1. Interaction of Morphine With Cyclosporine Effect on MACE Occurrence, Explored by 2‐Way ANOVA With Interaction Term\n\n**Question:** Compared to no‐morphine, what was the result of morphine on all‐cause mortality?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
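The row above describes Kaplan–Meier curves for 1‐year all‐cause mortality, a log‐rank style group comparison, and a covariate‐adjusted Cox proportional‐hazards model for MACE. Below is a minimal sketch of that kind of analysis, on entirely hypothetical data: the `lifelines` library, the column names (`morphine`, `time_days`, `event`, `age`, `killip`), and all values are assumptions for illustration, not the CIRCUS trial's actual code or variables.

```python
# Sketch only: hypothetical follow-up data, not the CIRCUS trial database.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# One row per (hypothetical) patient.
df = pd.DataFrame({
    "morphine":  [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],   # 1 = morphine before PPCI
    "time_days": [365, 120, 365, 365, 40, 365, 365, 200, 365, 365, 365, 90],
    "event":     [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1],   # 1 = death during follow-up
    "age":       [61, 55, 70, 58, 66, 49, 72, 60, 63, 57, 68, 52],
    "killip":    [1, 2, 1, 1, 3, 1, 1, 2, 1, 1, 1, 2],
})

# Unadjusted Kaplan-Meier estimate per group.
kmf = KaplanMeierFitter()
for label, grp in df.groupby("morphine"):
    kmf.fit(grp["time_days"], grp["event"], label=f"morphine={label}")
    print(kmf.survival_function_.tail(1))

# Log-rank comparison of the two survival curves.
lr = logrank_test(
    df.loc[df.morphine == 1, "time_days"],
    df.loc[df.morphine == 0, "time_days"],
    event_observed_A=df.loc[df.morphine == 1, "event"],
    event_observed_B=df.loc[df.morphine == 0, "event"],
)
print("log-rank p =", lr.p_value)

# Covariate-adjusted Cox model; exp(coef) for "morphine" is the adjusted hazard ratio.
cph = CoxPHFitter(penalizer=0.1)   # small penalty keeps the fit stable on tiny data
cph.fit(df, duration_col="time_days", event_col="event")
print(cph.summary[["exp(coef)", "p"]])
```

With real follow‐up data, the `exp(coef)` entry for the morphine indicator would play the role of the adjusted hazard ratio reported in the row above.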
13858
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect and Safety of Morphine Use in Acute Anterior ST‐Segment Elevation Myocardial InfarctionBonin et al\n\n\n\nABSTRACT.BACKGROUND:\nMorphine is commonly used to treat chest pain during myocardial infarction, but its effect on cardiovascular outcome has never been directly evaluated. The aim of this study was to examine the effect and safety of morphine in patients with acute anterior ST‐segment elevation myocardial infarction followed up for 1 year.\n\nABSTRACT.METHODS AND RESULTS:\nWe used the database of the CIRCUS (Does Cyclosporine Improve Outcome in ST Elevation Myocardial Infarction Patients) trial, which included 969 patients with anterior ST‐segment elevation myocardial infarction, admitted for primary percutaneous coronary intervention. Two groups were defined according to use of morphine preceding coronary angiography. The composite primary outcome was the combined incidence of major adverse cardiovascular events, including cardiovascular death, heart failure, cardiogenic shock, myocardial infarction, unstable angina, and stroke during 1 year. A total of 554 (57.1%) patients received morphine at first medical contact. Both groups, with and without morphine treatment, were comparable with respect to demographic and periprocedural characteristics. There was no significant difference in major adverse cardiovascular events between patients who received morphine compared with those who did not (26.2% versus 22.0%, respectively; P=0.15). The all‐cause mortality was 5.3% in the morphine group versus 5.8% in the no‐morphine group (P=0.89). 
There was no difference between groups in infarct size as assessed by the creatine kinase peak after primary percutaneous coronary intervention (4023±118 versus 3903±149 IU/L; P=0.52).\n\nABSTRACT.CONCLUSIONS:\nIn anterior ST‐segment elevation myocardial infarction patients treated by primary percutaneous coronary intervention, morphine was used in half of patients during initial management and was not associated with a significant increase in major adverse cardiovascular events at 1 year.\n\n\nBODY.CLINICAL PERSPECTIVE:\nBODY.WHAT IS NEW?:\n\nThis retrospective study evaluating outcomes associated with morphine use in an ST‐segment elevation myocardial infarction population provides additional data that use of morphine is not associated with adverse outcomes.\n\n\n\nBODY.WHAT ARE THE CLINICAL IMPLICATIONS?:\n\nUse of morphine to relieve chest pain is safe in patients with ST‐segment elevation myocardial infarction.\n\n\n\n\n\n\nBODY.INTRODUCTION:\nMorphine is currently used and recommended for the treatment of chest pain during myocardial infarction, but the level of evidence is low, attributed to the lack of supportive clinical studies.1, 2, 3, 4\n\nMoreover, the American Heart Association has relegated morphine use in patients with non‐ST‐segment elevation myocardial infarction from a Class I to a Class IIa recommendation.3 This modification was driven by the results from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the American College of Cardiology/American Heart Association Guidelines) registry showing that, in patients with non‐ST‐segment elevation myocardial infarction, morphine use increased the risk of death and adverse outcome.5 Morphine has also been associated with suboptimal reperfusion success and a low myocardial salvage index after primary percutaneous coronary intervention (PPCI) in patients presenting with ST‐segment elevation myocardial infarction (STEMI).6\n\nNevertheless, in 2 independent cohorts from the FAST‐MI (French Registry of Acute ST‐elevation and non‐ST‐elevation Myocardial Infarction) registry, prehospital morphine use in STEMI patients did not increase in‐hospital complications or 1‐year mortality.7 Because of this controversy, additional investigations are necessary.\nThe aim of this study was to explore the effect of morphine on clinical outcomes in a population of patients with anterior STEMI referred for PPCI.\n\nBODY.METHODS:\nThe data, analytical methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure.\nBODY.STUDY DESIGN:\nWe used the database of the recently published CIRCUS (Does Cyclosporine Improve Outcome in ST Elevation Myocardial Infarction Patients) trial8, 9 for a post hoc analysis assessing the effect of morphine use in the initial management of patients with anterior STEMI undergoing PPCI.\nCIRCUS was an international, multicenter, double‐blind, placebo‐controlled trial conducted between April 2011 and February 2014. The CIRCUS trial showed no significant effect of cyclosporine on the clinical outcomes of patients with anterior STEMI.9\n\nCIRCUS study was performed in accord with the principles of the Declaration of Helsinki and the European guidelines for Good Clinical Practice. 
Approval was obtained from the ethics committees in the relevant countries, and written consent for data use was obtained from all patients.\n\nBODY.STUDY POPULATION:\nWe analyzed data from the intent‐to‐treat population of the CIRCUS trial.\nEligible patients (male and female) were aged ≥18 years and presented within 12 hours of onset of symptoms evocative of acute coronary syndrome (ACS). They were required to have ST‐segment elevation ≥0.2 mV in 2 anterior contiguous leads and to be scheduled for PPCI. Written informed consent was obtained from all patients.\nExclusion criteria included cardiogenic shock or loss of consciousness during the prehospital period, known hypersensitivity to cyclosporine, known kidney or liver failure, pregnancy or absence of contraception in women of childbearing age, and any disorder associated with immunological dysfunction within the previous 6 months.\n\nBODY.STUDY TREATMENT:\nCIRCUS patients were divided into 2 groups with and without intravenous morphine before PPCI. In a post‐hoc analysis setting, we hypothesized that morphine use would be superior to the absence of morphine use in terms of primary outcomes.\n\nBODY.STUDY OUTCOMES:\nThe primary outcome was the rate of occurrence of major adverse cardiovascular event (MACE) including cardiovascular death, heart failure, cardiogenic shock, recurrent myocardial infarction, unstable angina, and stroke during 1 year of follow‐up. Cardiogenic shock was defined as systolic blood pressure <80 mm Hg for >30 minutes, unresponsive to fluid replacement, and associated with signs of peripheral and end‐organ hypoperfusion. Heart failure was defined as clinical symptoms requiring initiation or intensification of heart failure treatment with intravenous administration of diuretics. Unstable angina was defined as the presence of acute chest pain associated with ST depression, or new onset of negative T waves, and no elevation of troponins.\nSecondary outcomes included rate of occurrence of individual MACE, all‐cause death, adverse left ventricular remodeling during 1 year of follow‐up, and the creatine phosphokinase (CPK) peak blood concentration in the 24 hours post‐PPCI.\nMACE were adjudicated by an event validation committee composed of at least 3 cardiologists/physicians blinded to treatment. Total CPK blood concentration was measured in peripheral total blood at the time of PPCI and 4, 12, and 24 hours post‐PPCI. CPK measures were conducted locally, in each participating center. CPK peak concentration was defined as the highest value collected during the 24 hours post‐PPCI. It was used as an indirect evaluation of infarct size.10, 11 Adverse left ventricular remodeling was defined as an increase of 15% or more of end diastolic left ventricular volume indexed to body surface area. End diastolic volume was measured at discharge and at 1 year by blinded expert readers. The measures were performed on echocardiographic data, according to the Simpson biplane rule.\n\nBODY.SUBGROUP ANALYSIS:\nCyclosporine is an inhibitor of cerebral12 and intestinal13 P‐glycoprotein. P‐glycoproteins are involved in morphine intestinal absorption14 and neurological circulation.15 This P‐glycoprotein inhibition could lead to an increased effect of morphine.14\n\nTo evaluate the effect of this pharmacological interaction on our results, we compared the 4 subgroups: morphine with cyclosporine; morphine with placebo; no morphine with cyclosporine; and no morphine with placebo on MACE occurrence and on infarct size. 
We also explored morphine with cyclosporine interaction effect on MACE occurrence by 2‐way ANOVA with an interaction term.\n\nBODY.STATISTICAL ANALYSIS:\nAll statistical analyses were post hoc and were not prespecified in the original CIRCUS Statistical Analysis Plan.\nContinuous variables with normal distribution are expressed as means and SDs. Continuous variables with a non‐normal distribution are expressed as medians and interquartile ranges. Normality was tested with the Shapiro–Wilk test. Categorical variables are expressed as percentages. The study population was divided into 2 groups according to morphine use. Comparison of baseline characteristics or outcomes was performed using the chi‐squared test or Fischer's exact test, as appropriate, for categorical variables. The Student t\nFor all comparisons, a value of P<0.05 was considered statistically significant. When appropriate, 95% confidence intervals were calculated.\nAll statistical analyses were conducted using STATA software (version SE 14.2; (StataCorp LP, College Station, TX).\n\n\nBODY.RESULTS:\nThe intent‐to‐treat CIRCUS population included 969 patients.8, 9 Two patients were not included in our analysis because of lack of information on morphine use. Morphine was used before PPCI in 554 (57.3%) patients. Baseline characteristics (Table 1) and periprocedural characteristics (Table 2) were well balanced between groups with and without morphine use. There was a nonsignificant trend toward younger age and a shorter total ischemic time in patients receiving morphine. There was also a trend toward more heart failure and cardiogenic shock in the morphine group: 13.1% versus 10.2% of patients admitted in Killip 2 or 3 and 1.4% versus 0% of patients admitted with cardiogenic in morphine and no‐morphine group, respectively (Table 2).\nTable 1Characteristics of the Patients at Baseline\nAll Patients (n=967)Patients With Morphine (n=554)Patients Without Morphine (n=413)\nP Value (Wilcoxon or Fisher's Test)Age, y\n60±13\n59±13\n61±13\n0.07\nMale sex, %\n82\n82\n82\n0.73\nBody mass index, kg/m2\n\n27±4\n27±4\n27±4\n0.14\nCurrent smoker, %\n42\n43\n41\n0.39\nHypertension, %\n37\n38\n37\n0.74\nDiabetes mellitus, %\n13\n13\n13\n1.00\nDyslipidemia, %\n38\n40\n36\n0.18\nPrevious myocardial infarction, %\n6\n6\n5\n0.67\nPrevious PCI, %\n7\n7\n6\n0.61\nLVEF, %\n47±10\n47±10\n48±10\n0.16\nCyclosporine use before PPCI, %\n49\n51\n46\n0.12\nValues are expressed as means±SD. 
LVEF indicates left ventricular ejection fraction (measured by echocardiography); PCI, percutaneous coronary intervention; PPCI, primary percutaneous coronary intervention.\nTable 2Periprocedural Characteristics\nPatients With Morphine (n=554)Patients Without Morphine (n=413)Killip class at admission\n505\n353\nClass 1\n432 (85.5)\n317 (90.0)\nClass 2\n57 (11.3)\n29 (8.2)\nClass 3\n9 (1.8)\n7 (2.0)\nClass 4\n7 (1.4)\n0 (0.0)\nTotal ischemic time, mean±SD (h)\n3.99±2.39\n4.94±3.07\nRentrop Grade 2 or 3\n34/554 (6.1)\n31/413 (7.5)\nAngiographic thrombus burden ≥3\n358/533 (67.2)\n267/395 (67.9)\nArea at risk, mean±SD (%)a\n\n36.8±8.4\n35.6±8.6\nProximal localization\n252/552 (45.6)\n157/403 (38.9)\nMultivessel disease\n203/554 (36.6)\n154/413 (37.3)\nThrombolysis rate\n33/554 (5.9)\n27/413 (6.5)\nStenting\n492/554 (88.8)\n362/413 (87.6)\nNo reflow\n30/554 (5.4)\n25/413 (6.1)\nFinal TIMI\n548\n403\nTIMI=0\n9 (1.6)\n4 (1.0)\nTIMI=1\n2 (0.3)\n8 (2.0)\nTIMI=2\n29 (5.3)\n29 (7.2)\nTIMI=3\n508 (92.7)\n362 (89.8)\nTreatment at discharge\nDouble antiplatelet treatment\n511/544 (93.9)\n380/408 (93.1)\nBeta‐blockers\n507/544 (93.2)\n370/408 (90.7)\nStatins\n522/544 (95.9)\n389/408 (95.3)\nACEi\n482/544 (88.6)\n348/408 (85.3)\nARB\n10/544 (1.8)\n15/408 (3.7)\nCalcium‐channel blockers\n12/544 (2.2)\n13/408 (3.2)\nDiuretics\n138/554 (25.4)\n105/413 (25.4)\nAldosterone antagonists\n147/544 (27.0)\n82/408 (20.1)\nValues are expressed as numbers (percentages), unless otherwise specified. ACEi indicates angiotensin‐converting enzyme inhibitors; ARB, angiotensin receptor blockers; TIMI, Thrombolysis in Myocardial Infarction.\naUsing the APPROACH angiographic score.\nAt 1 year, 236 (24.4%) patients had experienced at least 1 MACE. There was no significant difference in occurrence of MACE between groups: 145 (26.2%) and 91 (22.0%) patients in the groups with and without morphine, respectively (P=0.15; Table 3). Cumulative Kaplan–Meier estimates for the first occurrence of MACE (Figure 1) were not significantly different between groups (P=0.10).\nTable 3Clinical Outcome After 1‐Year Follow‐up\nEventMorphine (n=554)No Morphine (n=413)\nP (Fischer's Test)Any MACEa\n\n145 (26.2)\n91 (22.0)\n0.15\nCardiovascular death\n29 (5.2)\n20 (4.8)\n0.88\nHeart failure\n110 (19.9)\n70 (16.9)\n0.28\nCardiogenic shock\n30 (5.4)\n19 (4.6)\n0.66\nRecurrent myocardial infarction\n21 (3.8)\n7 (1.7)\n0.08\nUnstable angina\n15 (2.7)\n8 (1.9)\n0.53\nStroke\n10 (1.8)\n9 (2.2)\n0.82\nValues are expressed as numbers (percentages). 
MACE indicates major adverse cardiovascular events.\naA patient with more than 1 clinical event was counted as having 1 MACE.\nFigure 1Kaplan–Meier curves for major adverse cardiovascular events (MACE).\nIn the Cox model, morphine use was not associated with the incidence of MACE (hazard ratio=1.25; 95% confidence interval [0.96; 1.62]; P=0.10), even after adjustment for age, ischemic time, infarct size (CPK peak), initial and final Thrombolysis in Myocardial Infarction flow, sex, smoking, hypertension, diabetes mellitus, previous myocardial infarction, and Killip class (hazard ratio=1.04; 95% confidence interval [0.75; 1.45]; P=0.82).\nIncidence of individual MACE during 1 year was not significantly different between groups (Table 3; Figure 2).\nFigure 2Occurrence of major adverse cardiovascular events (MACE) and individual events after 1‐year of follow‐up.\nThere was no statistically significant difference on MACE occurrence (P=0.56) and on infarct size (P=0.61) between the 4 different treatment subgroups (Table 4). Interaction term, even after adjustment, was not significant and so for each clinical outcome individually (Table S1).\nTable 4Subgroups Analysis, Evaluation of Morphine—Cyclosporine Interactions\nMorphine+Cyclosporine (n=266)Morphine+Placebo (n=246)No Morphine+Cyclosporine (n=178)No Morphine+Placebo (n=205)\nP (ANOVA)MACEsa\n\n70 (26.3)\n75 (30.5)\n44 (24.7)\n47 (22.9)\n0.56\nCPK peakb\n\n3945±2638\n4054±2677\n4061±3182\n3742±2571\n0.61\nCPK indicates creatine phosphokinase; MACE, major adverse cardiovascular events.\naValues are expressed as numbers (percent).\nbValues are expressed as mean±SD.\nThere was no significant difference in rates of all‐cause death at 1 year between groups (32 [5.3%] and 22 [5.8%] deaths in the groups with and without morphine, respectively; P=0.89).\nThe unadjusted Kaplan–Meier hazard curve for 1‐year all‐cause mortality is presented in Figure 3. No significant difference was observed between the groups with and without morphine (P=0.77).\nFigure 3Kaplan–Meier curves for 1‐year all‐cause mortality.\nCPK peak blood concentrations after PPCI were comparable in both groups (4023±118 and 3903±149 IU/L in the groups with and without morphine, respectively; P=0.52; Figure 4).\nFigure 4Creatine phosphokinase (CPK) blood concentration evolution during the first 24 hours following primary percutaneous coronary intervention (PPCI)—median, first and third quartiles, and first and ninth decile are presented for each measurement. Mixed‐model analysis for morphine effect on CPK kinetic curve: P=0.09.\nRates of adverse left ventricular remodeling were also similar in both groups (205 [37.0%] and 134 [32.4%] patients with adverse remodeling in the groups with and without morphine, respectively; P=0.21).\n\nBODY.DISCUSSION:\nIn a large cohort of patients with acute anterior STEMI, morphine was used in half of patients before PPCI and had no significant effect on the composite primary outcome including cardiovascular death, heart failure, cardiogenic shock, recurrent myocardial infarction, unstable angina, and stroke.\nFew other studies have evaluated the clinical outcomes of patients who received morphine to alleviate chest pain during an ACS.16 In line with our results, 2 retrospective studies concluded that morphine/intravenous narcotics did not adversely affect the outcomes in patients with ACS. 
In 1 study including 1758 patients (765 STEMI and 993 non‐ST‐segment elevation myocardial infarction), the rate of 30‐day death was not increased with the use of intravenous narcotics.17 In the second study, the analysis of 2 cohorts of STEMI patients (2438 patients from the FAST‐MI 2010 registry and 1726 from the FAST‐MI 2005)7 suggested that prehospital morphine use was not associated with increased in‐hospital complications and 1‐year mortality.\nOn the contrary, an American retrospective study based on the CRUSADE registry,5 which included 57 039 patients who presented with non‐ST‐segment elevation ACS, reported an increase in mortality and challenged the safety of morphine use in these patients. It is difficult to compare the CRUSADE study with ours because of differences in population, study dates, and design. The CRUSADE study retrospectively enrolled non‐ST‐segment elevation ACS patients, who are different from STEMI patients. It was conducted between 2002 and 2003, at a time when oral antiplatelet agent use was limited to clopidogrel, which has more pharmacological interactions with morphine than more recently approved agents such as ticagrelor and prasugrel.18, 19\n\nThus, it seems that there is currently limited evidence for adverse outcomes associated with morphine use in STEMI patients, whereas additional caution may be necessary in NSTEMI patients.\nIn our study, morphine use did not appear to be associated with any significant cardioprotective effect. Myocardial protection with opioid use has been inconsistently reported. Several studies suggest a cardioprotective effect especially with morphine in the surgical context of coronary artery bypass graft, but these studies were conducted in small surgical patient populations and assessed indirect outcomes.20, 21 In STEMI patients undergoing PPCI, a cardioprotective effect could be demonstrated for morphine in addition to the basal effect of remote ischemic conditioning,22 but no additional myocardial protection was observed with fentanyl in patients undergoing elective PPCI.23\n\nIn experimental models of ischemia reperfusion injury, morphine inhibits the mitochondrial permeability transition pore24 opening through μ and κ myocardial opioid receptors,25, 26 and induced a significant cardioprotective effect.27 Morphine also stimulates the reperfusion injury salvage kinase pathway.24 In animal studies, these mechanisms have been associated with a reduction of infarct size following intravenous or intrathecal morphine use before or immediately after reperfusion.28, 29, 30 However, the translation from animal models to humans is not straightforward owing to disease complexity and associated comorbidities, and our data do not suggest any cardioprotective effect of morphine on ischemia‐reperfusion injury. In the near future, 2 noncurrently published studies (Clinical Trials NCT01186445 and NCT01738100) may provide additional understanding of the cardioprotective effects of intracoronary morphine during ACS.\nIn recent years, there has been a growing concern about the use of morphine for pain relief in patients with ACS. Morphine delays and attenuates the release peak and efficacy of oral antiplatelet agents in ACS31, 32, 33 and healthy18, 19, 34 patients, by inhibiting gastrointestinal absorption35 and inducing to vomiting.16 However, there is no evidence for the clinical relevance of these effects. 
In our study, although morphine was not associated with any adverse outcome in STEMI patients, there was a nonsignificant trend toward an increase in individual MACE in patients treated with morphine. In particular, recurrent myocardial infarction and heart failure seemed to be more frequent when morphine was used before PPCI, although these results did not reach statistical significance (P=0.08 and 0.28, respectively; Table 3).\nRegardless of the potential outcome modifications associated with morphine use during ACS, pain relief is essential to decrease patients' discomfort and to avoid proarrhythmic anxiety. In our study, 42.7% of patients did not receive any analgesia, meaning that pain was probably undertreated. This is not acceptable, and pain relief should be a priority of any medical care. Without the ethical possibility of a placebo‐controlled, randomized study, it is difficult to conclude definitively about morphine safety in this indication. There are no analgesic alternatives to morphine in this indication, and the only new approach currently evaluated is an equimolar oxygen/nitrous oxide mixture (MEOPA) combined with paracetamol, under investigation in the ongoing SCADOLII (Comparison of MEOPA + Paracetamol Versus Morphine Treatment in Acute Coronary Syndrome Analgesia) trial. In the absence of clear evidence of adverse effects, morphine should be used without restraint in STEMI, as was concluded in a recent review.16\n\nBODY.STUDY LIMITATIONS:\nAlthough analyses were conducted on a prospective homogeneous anterior STEMI population, they were post hoc analyses, not prespecified in the original CIRCUS protocol, leading to a loss of statistical relevance and limited statistical power.\nGiven that CIRCUS trial randomization was not stratified on morphine use, the effect of unmeasured confounders on the results cannot be ruled out. In any case, patients in the morphine group seemed to be sicker at initial medical care than those in the no‐morphine group: 13.1% of patients were admitted with heart failure (Killip 2 or 3) in the morphine group versus 10.2% in the no‐morphine group; 1.4% of patients were admitted with cardiogenic shock in the morphine group versus 0% in the no‐morphine group. There was no randomization, nor precise recommendation, for morphine use in the CIRCUS protocol. So, morphine could have been used as a symptomatic treatment, in combination with continuous positive airway pressure, for patients suffering from dyspnea or acute heart failure at first medical care. Initial heart failure and/or cardiogenic shock are strongly associated with a poorer prognosis.36 Despite this initial difference, the morphine group did not experience significantly more adverse events than the no‐morphine group.\nThe study was not initially designed to evaluate the effect of morphine in patients with anterior STEMI, so the investigation was limited by the number of subjects. It is difficult to evaluate precisely the statistical power of our work. For example, if we had calculated the required sample size based on the 4.2% absolute difference between groups observed in our study, with 80% power and a 5% alpha risk, triple the sample size would have been necessary. Thus, the limited sample size may explain the lack of statistical significance of the observed difference in our work.\nThe use of cyclosporine, despite its known interactions with morphine intestinal absorption and neurological effects, had no significant effect on the results of our study. 
There was no statistically significant difference between the 4 different subgroups defined by the use of cyclosporine and of morphine, nor a significant interaction between morphine and cyclosporine.\nIn the morphine group, we did not record the dose of morphine that was administered, nor the corresponding pain scales. Further studies are needed to assess precisely pain management in STEMI patients and its potential impact on clinical outcomes.\n\n\nBODY.CONCLUSION:\n\n\nBODY.DISCLOSURES:\nNone.\n\nBODY.SUPPORTING INFORMATION:\n\nTable S1. Interaction of Morphine With Cyclosporine Effect on MACE Occurrence, Explored by 2‐Way ANOVA With Interaction Term\n\n**Question:** Compared to no‐morphine, what was the result of morphine on infarct size?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
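The row above compares infarct size (CPK peak) between the morphine and no‐morphine groups and describes a 2×2 morphine‐by‐cyclosporine subgroup analysis evaluated with a two‐way ANOVA including an interaction term. The sketch below illustrates that kind of comparison on simulated data rather than the trial database; the column names, the simulated null effect, and the use of `statsmodels`/`scipy` are assumptions for illustration only.

```python
# Sketch only: simulated data mimicking the reported null result, not trial data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "morphine":     rng.integers(0, 2, n),   # 1 = morphine before PPCI (hypothetical)
    "cyclosporine": rng.integers(0, 2, n),   # 1 = cyclosporine arm (hypothetical)
})
# Simulated CPK peak (IU/L) with no true group effect.
df["cpk_peak"] = rng.normal(4000, 2600, n).clip(min=200)

# Two-way ANOVA with an interaction term, as described for the 4 subgroups.
model = ols("cpk_peak ~ C(morphine) * C(cyclosporine)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Simple two-group comparison of CPK peak (Welch t test) for morphine vs no morphine;
# one possible way to obtain the kind of P value quoted for the abstract comparison.
with_m = df.loc[df["morphine"] == 1, "cpk_peak"]
without_m = df.loc[df["morphine"] == 0, "cpk_peak"]
print(stats.ttest_ind(with_m, without_m, equal_var=False))
```

A nonsignificant interaction term in the ANOVA table corresponds to the statement in the row above that morphine and cyclosporine did not interact with respect to infarct size.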
13860
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Group Support Lifestyle Modification (GSLiM) Programme among Obese Adults in Workplace: A Randomised Controlled TrialGSLiM and Obesity\n\n\n\nABSTRACT.BACKGROUND:\nThere was an increasing trend in the prevalence of obesity and its comorbidities over the past decades in Malaysia. Effective intervention for obesity remains limited. This study aimed to compare the effectiveness of a group based lifestyle modification programme amongst obese individuals with an existing dietary counseling programme.\n\nABSTRACT.METHODS:\nWe recruited one hundred and ninety four overweight and obese (BMI>27.5 kg/m2) employees from a local university. They were randomly allocated to either Group Support Lifestyle Modification (GSLiM) (intervention)(n = 97) or dietary counseling (comparison)(n = 97). The GSLIM activities included self monitoring, cognitive-behaviour sessions, exercise as well as dietary change advocacy, which were conducted through seminars and group sessions over 24 weeks. The comparison group was given dietary counselling once in 12 weeks. Both groups were followed up for additional 12 weeks to check for intervention effect sustenance. Anthropometric and biochemical parameters were measured at baseline, 12, 24 and 36 weeks; while dietary intake, physical activities, psychological measures and quality of life measured at baseline, 24 and 36 weeks. Data analysis was conducted using ANOVA repeated measures with intention to treat principle.\n\nABSTRACT.RESULTS:\nThe participants were predominantly women with mean (standard deviation) age of 40.5 (9.3) years. A total of 19.6% of the participants in GSLiM achieved 6% weight loss compared to 4.1% in the comparison group (Risk Ratio 4.75; 95% CI: 1.68, 13.45). At 24 weeks, the retention rate was 83.5% for GSLiM and 82.5% for comparison group. GSLiM participants also achieved significant improvement in total weight self-efficacy score, negative emotions and physical discomfort subscales, MDPSS friend subscale and all domains in quality of life. Participants in the comparison group experienced reduction in negative self-thoughts.\n\nABSTRACT.CONCLUSION:\nThe GSLiM programme proved to be more effective in achieving targeted weight loss, improving weight self-efficacy, friend social support, and quality of life compared to dietary counseling.\n\nABSTRACT.TRIAL REGISTRATION:\nIranian Registry of Clinical Trials IRCT201104056127N1\n\nBODY.INTRODUCTION:\nMalaysia has the highest combined overweight and obese prevalence among the Southeast Asian countries in 2013 [1]. From 1996 to 2009, the prevalence of obesity in Malaysia increased by 23.7% [2], concurrent with the obesity global pandemic. High risks groups for obesity were women, married adults, those with secondary education and unemployed or homemakers [3]. 
Arguably, working adults who engage in sedentary work as well as those experience stress at the workplace [4,5] are also at risks of obesity.\nObesity is associated with increase mortality [6] and reduction in overall quality of life [7]. Malaysians observed an increase of mortality due to heart disease and cancers from 67% in 2008 to 73% in 2012 [8,9]. Comprehensive lifestyle modification on diet, physical activity and psychology remains as the primary intervention for obesity [10,11].\nMulticomponent lifestyle modification proved successful in reducing diabetes risk [12] as well as improved cardiovascular risk factors [13]. The Diabetes Prevention Program (DPP) was a high intensity lifestyle modification programme which managed to reduce participant’s diabetic risks by 58% [12]. Yet, in real life, high intensity programme may not be applicable to all settings as it is resource intensive. This results in its translation into the Group Lifestyle Balance (GLB-DPP) implemented in clinical settings, primary care and workplace [14]. The GLB-DPP retained the core principles of the DPP such as 7% weight loss, 150 minutes of physical activity and self-monitoring using group approach with reduced sessions.\nAccording to the Social Cognitive Theory; personal behavior, thoughts and environment reciprocate to produce action [15,16]. Factors influencing the cognitive process favoring weight loss includes high self-efficacy [17,18] and social support [19]. Meanwhile, obese individuals may have higher inclination for negative thoughts related to dysfunctional eating [20] hampering lifestyle modification progress. However, these psychological factors were seldom assessed and reported together with clinical measures as part of the lifestyle modification programme outcomes [21,22].\nWorkplace has been shown to be a feasible and effective setting for lifestyle modification programme for obesity prevention [23–27]. Apart from preventing short term deterioration of quality of life, moderate improvement in weight induced employee productivity [28].\nIn view of urgent need for obesity intervention, we implemented an adaptation of the GLB-DPP programme named Group Support Lifestyle Modification (GSLiM) in the workplace. The objective was to compare the effectiveness of the GSLiM programme with an existing one to one dietary counseling programme in the workplace. The programme was designed to create social support and improve self-efficacy. It was hypothesized that Group Support Lifestyle Modification (GSLiM) would be more effective in producing weight loss, improvement of cardiovascular risk factors, self-efficacy, social support factors as well as their quality of life.\n\nBODY.METHODS:\nThis was a randomised controlled trial conducted in a public university in Kuala Lumpur, Malaysia. Ethical clearance obtained from the Medical Ethics Committee, University Malaya Medical Centre on 16 March 2011 (MEC No. 841.2) and was registered with the Iranian Registry of Clinical Trial (http://www.irct.ir/) (IRCTID: IRCT201104056127N1). There was a slight delayed in trial registration as the authors waited for funding approval obtained from the University Malaya post-graduate research fund.\nBODY.RECRUITMENT AND PARTICIPANTS:\nParticipants were employees from a public university in Kuala Lumpur. Recruitment started soon after trial registration from May 2011 to September 2012. The trial intervention and follow up commenced from September 2011 to July 2013. 
Respondents gave written informed consent during recruitment and informed of random allocation into either of the treatment arms. Recruitment, data collection and intervention programme were conducted in the Sport Centre of the Faculty of Sports Science within the university.\nEligible criteria included employees of the university, Bahasa Malaysia (national language of the country) literate, with BMI of 27.5kg/m2 or more and able to walk briskly for at least 10 minutes without assistance. Cut off point of 27.5kg/m2 was used as Asians experience higher risk for type 2 diabetes, metabolic syndrome as well as cardiovascular diseases at lower BMI [29–31]. Exclusion criteria included individuals with unstable angina, congestive cardiac failure, cancer, severe pulmonary disease, psychiatry disorders e.g. substance abuse, depression, weight loss of more than 5kg in the last 6 months, pregnant or breastfeeding. Self-administered questionnaire used to assess medical history and family history for disease risks. Further assessment to rule out undiagnosed medical conditions included full blood count (FBC), renal profile (RP), fasting blood sugar (FBS) and lipid profile at recruitment phase.\nOf 510 employees’ responded to invitations via emails and flyers, 275 consented for participation and assessed for eligibility. Nineteen participants withdrew after consented, eleven did not finish screening, fifty-one were excluded and finally, 194 participants were randomised (Fig 1).\n10.1371/journal.pone.0160343.g001Fig 1BODY.CONSORT PARTICIPANTS FLOWCHART OF INTERVENTION AND CONTROL ARM.:\nGSLiM: Group Support Lifestyle Modification; Comparison group: Dietary counselling.\nBODY.RANDOMISATION:\nA third person (TC), who was not involved with the study conducted random allocation of participants into either Group Support Lifestyle Modification (GSLiM) or dietary counseling (comparison). Allocation of participants to GSLiM or comparison arms were assigned using opaque envelopes. This was an open label trial as both treatment provider and participants knew which arm they belonged to since the programme differed between the two groups.\n\n\nBODY.INTERVENTIONS:\nBODY.GLB-DPP:\nThe Group Lifestyle Balance (GLB-DPP) developed based on the Diabetes Prevention Program (DPP) [32] that retained the core principles of DPP. These included self-monitoring for weight, dietary and physical activity, problem solving, staying motivated, and target 7% weight loss from baseline weight. However, the programme delivery of GLB-DPP used group-approach given over twenty-two hourly sessions in three phases (core, fade frequency and support).\n\nBODY.GSLIM:\nPermission to adopt and adapt the GLB-DPP was obtained from the author/s under the Creative Common Licence Share Alike. The GSLiM programme retained the core characteristics of GLB-DPP except a 6% target weight loss from baseline weight used instead of 7% and absence of support sessions after programme completion at 6th month. The 6% target was based on an achievement of a minimum of 1% weight loss per month in view of evidence that even 5% weight loss may produce clinical improvement among obese individuals [10]. The frequency of sessions conducted in GSLiM was less compared to the GLB-DPP with ten sessions in GSLiM intervention compared to the twenty-two of GLB-DPP. However, the original twenty-two topics of the GLB-DPP were retained in GSLiM with the topics delivered through three seminars and five 90-minute sessions in two phases. 
The core phase (Phase 1) was run once in two weeks for three months, beginning with a two-day seminar. A fade frequency (Phase 2) started with a one-day seminar, followed by two 90-minute sessions and concluded with a half-day seminar at the end of the active intervention. Comparison of sessions between GLB-DPP and Group Support Lifestyle Modification (GSLiM) is shown in Tables 1 and 2.\n10.1371/journal.pone.0160343.t001Table 1BODY.COMPARISON OF GROUP LIFESTYLE BALANCE (GLB) WITH GROUP SUPPORT LIFESTYLE MODIFICATION (GSLIM) PROGRAMME.:\nSame aspects of GLB and GSLiM\nSafe and appropriate intervention that incorporates nutrition, physical activity, and behaviour change.\nGradual increase of 150 mins per week of physical activity and minus 500 kcal per day of dietary intake.\nStrong focus on use of self-monitoring tools with feedback.\nInitial emphasis on fat intake and calories.\nPrimary focus on healthy food choices.\nUse of inexpensive food samples and incentives.\nUse of problem-solving techniques to address barriers to healthy eating and physical activity.\nGroup approach.\n\nSpecific Adaptation to GSLiM\nGLB\nModified GLB (GSLiM)\nGoal: 7% weight loss and increase physical activity to 150 minutes/week.\nGoal: 6% weight loss of baseline body weight and physical activity 150 per week.\n12 weekly 1-hour sessions delivered over 12–15 weeks\n0, 3 and 6 months by seminar sessions.\n5 once in 2 weeks session over 12 weeks\n2 monthly session over subsequent 12 weeks\n\nPedometer introduced during core sessions\nLogging for physical activity in log book\nTrainers: Dietician and exercise specialist\nTrainers: Dietician and exercise specialist and psychologist\nTotal sessions 22 given in spread of 48 weeks\nCompressed session to 10 given in 24 weeks\n10.1371/journal.pone.0160343.t002Table 2BODY.TOPIC COMPARISON BETWEEN GSLIM AND GLB-DPP.:\nGLB-DPP topic arrangementGSLiM topic arrangementsCore sessions: Weekly for 12 weeks\nCore sessions: Bi- weekly (2 per month) for 12 weeks\n1. Welcome to GLB.\n1. Welcome to GSLiM—Seminar overview of topics. (GLB-DPP 1)Physical activity component (overview of topics and specifics on)\nStretching: The truth about flexibility (GLB-DPP session 21)\nHeart health (GLB-DPP session 20)\nDiet component: overview of topics, Tip the calorie balance (GLB-DPP 5)\nPsychological—overview\n\n2. Be a fat and Calorie Detective\n2. GSLiM Session 22.1 Fat, calorie and you (GLB-DPP session 2)\n2.2 Healthy eating (GLB-DPP session 3)\n\n3. Healthy eating\n3. GSLiM Session 33.1 Move those muscles (GLB-DPP session 4)\n3.2 Monitoring your activities (GLB-DPP 10)\n\n4. Move those muscles\n4. GSliM Session 44.1Negative thoughts and Weight (GLB-DPP 9 and 14)\n4.2 Balance your thoughts (GLB-DPP 15)\n\n5. Tip the calorie balance\n5. GSLiM Session 5\n5.1 Behavior, Environment and Lifestyle Change (GLB-DPP 6 and 17)\n5.2 Make Social Cues Work for You (GLB-DPP 11)\n\n6. Take charge of whats around you\n6. GSLiM Session 6• 6.0 Strengthen Your Exercise Programme (GLB-DPP 16)\n7. Problem solving\n\n8. Four keys to healthy Eating out\n\n9. Slippery slope of Lifestyle change\n\n10. Jump start Your Activity Plan\n\n11. Make Social Cues Work for You\n\n12. Ways to Stay Motivated\n\nTransition session (fade frequency)\nTransition session (fade frequency)\n13. Prepare for Long Term Self Management\n7. GSLiM Session 7- One day seminar.\n\na. Stress management (GLB-DPP 18).\n\nb. Problem solving (GLB-DPP 7).\n\nc. Healthy eating out (GLB-DPP 8).\n\nd. More volume fewer calories (GLB-DPP 14).\n\ne. 
Group physical activity sessions.\n14. More Volume; fewer calories\n8. GSLiM Session 8.• 8.0 Standing Up for Your Health (GLB-DPP 19)\n15. Balance your thoughts\n9. GSLiM Session 9.• 9.0 Stay motivated (GLB-DPP 12)\n16. Strengthen Your Exercise Programme\n10. GSLiM session 10 (final).Looking back and looking forward (GLB-DPP 22)—Sharing achievements.\nPrepare long term self-management (GLB-DPP 13)\n\nSupport sessions (variable sequence)\nFollow up at 36 weeks\n17. Mindful eating\n\n18. Stress and Time Management\n\n19. Standing Up for Your Health\n\n20. Heart Health\n\n21. Stretching: The truth about flexibility\n\n22. Looking back and looking forwards\n\nTable 3.\n10.1371/journal.pone.0160343.t003Table 3BODY.SUMMARY OF PROGRAMME STRUCTURE, SESSIONS AND CONTENTS.:\nSessionApproachTitle and Contents1\nSeminar\nIntroduction to the programme\n\n\nForeword on the Group Support Lifestyle Modification (GSLiM) programme configuration and sessions. Highlight energy balance and the concept of self-monitoring. Develop allegiance to the programme.\n\n\nSet up goals for: 6% weight loss, and physical activity of 150 minutes per week. Daily calorie requirement (DCR) minus 500 for dietary intake to be worked out through the 2 step Harris Benedict equation. Foreword on components: psychological, diet and nutrition and physical activity.\n\n\nDescription of main food components, sources of fat, food pyramid and food plate theory. General outlook on physical activity and exercise. Introduction to cognitive and behaviour aspects related to lifestyle alteration. Getting-to-know- you session lasting half a day to help participants get acquainted with each other.\n2\nGroup session\nCalorie measurement and healthy food options.\n\n\nEducation and hands on session for self-monitoring through the measurement of food and calories by participants utilising scales and measurement tools. Reading food labels and using the food plate theory for practice. Sessions with active facilitator participation.\n3\nGroup session\nMoving Your Muscles.\n\n\nInformation on effective exercise methods (brisk walking). Raise awareness on the safety aspects of physical activity and exercise. Highlight the enhancement of general health through physical activity and exercise. Assess individual's present level of physical activity and exercise. Rise in activity levels to be spread out over a period of time. Set weekly goals for physical activity. Commence self-monitoring for physical activity and exercise. Group activity for aerobic session.\n4\nGroup session\nNegative thoughts.\n\n\nUncover negative thoughts. Increase knowledge on variety of negative thoughts, types of thought disorder and its relationship to behaviour. Suggest ways to offset negative thoughts. Highlight the significance of appreciating oneself for personal accomplishments. Review negative thoughts and set up action plan to counter them.\n5\nGroup session\nRelationship between behaviour, environment and healthy lifestyle change.\n\n\nDefine social and physical environment and relate its influence on physical activity and eating behaviour. Promote positive response to environmental and social drawbacks. Highlight the need for a healthy diet and physical activity /exercise. Emphasize on acting as role models for group members in order to promote physical activity and positive eating. 
Appraisal of present behaviour and setting up of action plan to achieve positive behaviour change\n6\nGroup session\nConsolidating Exercise Programme\n\n\nInitiate aerobic fitness sessions and define frequency, intensity, time and type of activity (F.I.T.T). Educate participants on the engagement of self-monitoring to set up an effective exercise schedule through the calculation of heart rate and rate of perceived exertion (RPE).\n\n\nDiscuss schedule and include a variety of activities and exercises to develop flexibility, strength, cardiovascular health and endurance.\n7\nSeminar\nConsolidate dietary, fitness, thought and behaviour alteration.\n\n\nReport on the progress of participants. Reassess weight loss goals, diet and physical activity based on the most recent measurements. Behaviour: Initiate stress management procedure (abdominal breathing) and problem solving techniques (I.D.E.A.L approach) to deal with issues that crop up due to alterations in diet and daily activities. Dialogue on problems encountered by participants. Encourage healthy eating through the promotion of high volume and low calorie foods.\n\n\nDietary: Educate on the choice of healthy food when eating out. Explore by group members of ways to be more active on busy work days.\n\n\nFitness: Suggest activities for work related sitting time. Stress on the need for intervals between extended periods of sitting especially at work. Discuss options on exercise location during unfavourable weather conditions. Teach ways to improve cardiovascular health through resistance and strength exercises. Set up a physical activity schedule for the following month.\n8\nGroup session\nChallenges in Lifestyle change.\n\n\nDiscourse among group members on problems encountered during attempts to change their lifestyles focussing on eating habits and physical activities. Exchange of information and problem-solving suggestions among participants. Re-assess obstacles faced and define a strategy to overcome these barriers to change.\n9\nGroup session\nMaintaining motivation.\n\n\nConcentrate on means to be constantly motivated for lifestyle change. Stress on awareness of internal and external sources of motivation. Emphasise on appreciating minor accomplishments related to lifestyle change. Initiate discussion among group members on strategy to maintain motivation.\n10\nSeminar\nExperience sharing.\n\n\nDisclosure of participants’ accomplishments during the course of the programme (6 months). Initiate panel discussion involving team and group members on the achievements of participants during the programme. Encourage participants to share their six-month experience on the road to better health through weight reduction. Participants were also encourage to continue healthy lifestyle.\nA starter kit containing handouts, examples of food energy intake and activity-based energy expenditure in the form of wheel chart was provided to participants during the first session. Handouts were also given to participants after subsequent sessions. Self-monitoring activities included weighing-in, submission of diet records and physical activity log was also conducted for each session. Feedback was provided to motivate the participants on each session.\n\n\nBODY.COMPARISON:\nParticipants in the comparison group underwent individual counseling with a dietician once every 12 weeks. Each dietary counseling session lasted for an hour. 
Recommendations on reduced calorie intake for weight loss, education on food pyramid, self-monitoring of calorie intake, lowering high-fat food and increase of fruits and vegetable intake were given. Participants were advised to increase their physical activity levels and exercise at least three times a week, but no practical session provided.\n\nBODY.MEASUREMENTS:\nBODY.SOCIODEMOGRAPHIC AND LIFESTYLE HISTORY:\nSociodemographic characteristics included age, gender, ethnicity, education, household income and marital status; family history of diseases such as hypertension, diabetes and hyperlipidemia; and socially related lifestyle behavior such as smoking and alcohol consumption assessed using self-administered questionnaire. Participants were required to fill a three-day dietary record using a log book. They were taught to estimate food intake with standard day-to-day food utensils. Food items according to meals were entered. A food list with energy (kcal) values was given to assist participants’ in self-monitoring for healthy food choices. Physical activity was measured using the International Physical Activity Questionnaire short form (IPAQ-SF). The IPAQ-SF comprised of seven items reported in terms of METS-min per week which can be summarised into three main categories (low, moderate and high). It is a valid measurement of physical activity and has moderate to high reliability [33].\n\nBODY.ANTHROPOMETRY, BIOCHEMICAL AND CLINICAL MEASURE:\nWeight was measured using SECA digital scale (Model HD 309) and height was measured using SECA body meter (Model 208) to the precision of 0.1 kg and 0.1 cm, respectively. Participants wore light clothing, and stood on scale without shoes with leg apart within the scale footprint. Waist and hip circumference were measured using non elastic SECA measuring tape to the nearest 0.1 cm following the WHO guideline [34]. Digital OMRON Model HEM 907 was used to measure blood pressure based on the Malaysian Clinical Practice Guidelines for management of Hypertension [35]. Fasting blood samples (after 8 hours) collected by trained staff and sent within four hours of collection in an icebox to the laboratory. Measurements were conducted by trained research assistants at baseline, 12, 24 and 36 weeks. Height measured only once at baseline. BMI was calculated as weight in kilograms divided by height in meters squared.\n\nBODY.PSYCHOLOGICAL MEASURES AND QUALITY OF LIFE:\nPsychological measures and quality of life were measured using self-administered questionnaires at baseline, 24 and 36 weeks. Weight efficacy lifestyle (WEL) questionnaire was used to measure eating self-efficacy in weight management with 20 items in five subscales. The subscales include food control in negative emotions, availability of food, social pressure, physical discomfort and physical activity with internal consistency ranged from 0.7 to 0.9 [36]. The Malay version of WEL questionnaire had internal consistency (Cronbach’s alpha) of 0.47 to 0.86 [37].\nSocial support was measured using the Multidimensional Perceived Social Support (MDPSS) which comprised of 12 items and three subscales. Internal reliability achieved was 0.82 for perceived social support from family, 0.88 for friends and 0.94 for significant others [38]. The Malay version of MDPSS has internal consistency of 0.89 with reliability coefficients for family, friends and significant others subscales ranging from 0.67 to 0.83 [39].\nAutomatic Thought Questionnaire (ATQ) used to assess negative thoughts [40]. 
Due to poor construct, the Malay version was reduced to 17 items and two subscales from 30 items [41]. Each subscale i.e. negative self concept (NSE) and personal maladjustment (PMA) had internal consistency (Cronbach alpha) of 0.91 and 0.83 respectively. Method of scoring of the Malay ATQ remained the same as the original 30 item with a cut-off score of 30 for depression.\nQuality of life was measured using the WHO Quality of Life short form questionnaire (WHOQoL-BREF), a shorter version of WHO Quality of Life (WHOQoL) questionnaire with the objective to measure a global perspective of quality of life across cultures [42]. The WHOQoL-BREF comprises of 26 questions in four domains i.e. physical activity, psychological, social support and environment. Validation of the translated Malay version revealed high reliability with internal consistency of 0.64–0.80, and test—retest-reliability of 0.49–0.88 [43,44].\n\n\nBODY.SAMPLE SIZE:\nSample size was calculated using OpenEpi software. Based on Renjilian et al.’s findings (17), 49% of group therapy participants achieved ten percent weight loss compared to 29% in individual counseling. A total of 286 participants was required after setting the study power at 80%, level of significance at p<0.05 and attrition rate of 25%. Due to low response rate, high number of excluded respondents and time constraint for further recruitment, the sample size was not achieved. Power calculation revealed the sample size of 194 achieved 88.8% power for targeted weight loss.\n\n\nBODY.STATISTICAL ANALYSIS:\nIBM SPSS Statistics, Version 20 for Windows (SPSS Inc, Chicago IL) used for data analysis. The dietary data was cross-checked and entered by a trained dietician into Nutritionist Pro where the Malaysian nutrient database was used. Between group mean differences at baseline for continuous variables were analyzed using t-tests and χ2 tests for categorical data. Mean differences between groups over time used analysis of variance for repeated measures. All analysis was carried out using the intention to treat principles. Missing data was imputed using the last observation carried forward. Continuous data at baseline was described as means± standard deviation. Repeated measure data for within group measures was described as mean ±standard error. Confidence interval (CI) was reported where appropriate. Statistical significance was preset at 0.05. Relative risk reported for proportion achieved for weight loss, while Cohen’s d used for the within and between group repeated measure. Adjustment conducted for measures where the baseline values showed significant between-group differences to control for bias i.e. carbohydrate percentage and multidimensional perceived of social support.\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nA total of 194 participants recruited and randomised. The participants’ mean age (standard deviation) was 40.5 (9.3) years. Majority of participants were women (72.7%) and non-academician (92.3%). More than half (64%) had at least secondary education. A total of 81 (83.5%) GSLiM participants and 80 (82.5%) of dietary counseling participants completed the study at 24 weeks (Fig 1). Participants from the GSLiM group had higher total MDPSS score, family support and friend subscales score as well as carbohydrate percentage intake. 
No other significant difference between groups was observed at baseline (Tables 4 and 5).\n10.1371/journal.pone.0160343.t004Table 4BODY.BASELINE MEASURES.:\nGSLiM, mean (sd), n = 97Diet counselling, mean (sd), n = 97p valueAge, years\n39.7 (9.2)\n40.4 (9.5)\n0.617\nGender\n\n\n0.629\nFemale, n (%)\n69 (71.1)\n72 (74.2)\n\nMale, n (%)\n28 (28.9)\n25 (25.8)\n\nEthnicity\n\n\n0.296\nMalay, n (%)\n87 (89.7)\n91 (93.8)\n\nNon Malays, n (%)\n10 (10.3)\n6 (6.2)\n\nEducation, tertiary, n (%)\n39 (40.2)\n29 (29.9)\n0.132\nHousehold income (RM)\n2680.34 (1511)\n2504.19 (1641.20)\n0.438\nMarried, n. (%)\n76 (78.4)\n74 (76.3)\n0.732\nOccupation, academician, n (%)\n9 (9.3)\n6 (6.2)\n0.420\nFamily history of:\n\n\n\nHypertensive, n (%)\n11 (11.3)\n13 (13.4)\n0.663\nDiabetes, n (%)\n3 (3.1)\n2 (2.1)\n0.683\nHyperlipidaemia, n (%)\n8 (8.2)\n13 (13.4)\n0.248\nBehaviour:\n\n\n\nNon-alcohol intake,\n96 (99)\n97 (100)\n0.316\nSmoker, n (%)\n9 (9.3)\n5 (5.2)\n0.267\nHeight, m\n1.59 (0.08)\n1.58 (0.09)\n0.863\nWeight, kg\n82.3 (16.3)\n82.0 (13.5)\n0.911\nBMI, kg/m2\n32.4 (4.8)\n32.4 (3.8)\n0.980\nWaist circumference, cm\n95.1 (11.2)\n94.9 (9.5)\n0.872\nHip circumference, cm\n112.2 (10.4)\n111.1 (8.2)\n0.429\nSystolic blood pressure, (mmHg)\n128.6 (16.1)\n129.1 (14.4)\n0.712\nDiastolic blood pressure, (mmHg)\n83.0 (12.2)\n83.4 (10.7)\n0.774\nTriglyceride, (mmol/l)\n1.41 (0.62)\n1.56 (0.82)\n0.182\nTotal Cholesterol, (mmol/l)\n5.2 (0.9)\n5.3 (1.1)\n0.913\nHDL Cholesterol, (mmol/l)\n1.23 (0.29)\n1.25 (0.31)\n0.592\nLDL Cholesterol, (mmol/l)\n3.45 (0.82)\n3.35 (0.85)\n0.426\nIPAQ SF total METmin-1 per week,\n1071 (3.24)\n1318.3 (3.06)\n0.238\nWeight efficacy lifestyle\n123.0 (25.6)\n121.6 (23.4)\n0.697\nNegative emotions\n25.6 (6.5)\n25.1 (6.3)\n0.560\nAvailability\n21.6 (6.6)\n21.4 (6.1)\n0.901\nSocial Pressure\n22.3 (6.1)\n22.1 (5.7)\n0.771\nPhysical Discomfort\n27.7 (5.3)\n26.9 (5.6)\n0.315\nPositive activities\n25.8 (5.7)\n26.1 (5.2)\n0.693\nMultidimensional social support a\n66.3 (9.9)\n62.8(8.7)\n0.010b\nFamily Support\n22.8 (3.7)\n21.7 (3.3)\n0.029c\nFriend Support\n20.5 (3.9)\n19.3 (3.4)\n0.021c\nSignificant Others\n22.9 (4.1)\n21.8 (4.5)\n0.061\nTotal automatic thoughts questionnaire score\n26.5 (7.4)\n26.7 (8.0)\n0.867\nNegative self-concept\n16.1 (4.6)\n16.2 (4.6)\n0.853\nPersonal mal-adjustment\n9.1 (2.8)\n9.2 (3.5)\n0.822\nWHOQoL BREF\n\n\n\nPhysical Health\n12.9 (1.6)\n13.1(1.4)\n0.373\nPsychological\n13.1 (1.7)\n13.1 (1.7)\n0.942\nSocial Relationship\n14.8 (2.7)\n14.6 (2.2)\n0.681\nEnvironment\n14.1 (1.8)\n14.0 (1.9)\n0.722\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\n10.1371/journal.pone.0160343.t005Table 5BODY.BASELINE DIETARY INTAKE.:\nGSLiM, mean (sd) N = 65Diet counselling, mean (sd), n = 56p valueDietary energy intake, mean kcal/day\n1421.2 (332.3)\n1351.3 (369.8)\n0.276\nProtein (%)\n15.8 (2.3)\n15.0 (2.8)\n0.109\nCarbohydrate (%)\n53.3 (6.5)\n55.8 (6.1)\n0.034a\nFat (%)\n30.8 (5.4)\n29.0 (5.1)\n0.064\nCholesterol (mg)\n168.0 (88.6)\n162.3 (107.2)\n0.748\nSaturated Fat (mg)\n8.1 (3.9)\n9.0 (4.5)\n0.201\nMono saturated fat (mg)\n7.3 (3.6)\n7.2 (3.0)\n0.772\nMeals frequency\n3.4 (0.6)\n3.6 (0.7)\n0.068\na Significant p<0.05.\n\nBODY.BETWEEN GROUP DIFFERENCES:\nAt week 24, nineteen participants (19.6%) achieved 6% targeted weight loss in the intervention compared to 4 (4.1%) in the comparison group, Risk Ratio (RR) of 4.75 (95% CI: 1.68, 13.45). 
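As a quick cross-check of the figure just quoted, the risk ratio and its 95% confidence interval can be recomputed from the raw counts (19 of 97 versus 4 of 97) using the standard log-normal approximation. The sketch below is only an illustration in plain Python, not the authors' OpenEpi or SPSS output, but it reproduces the published values.

```python
# Risk ratio for achieving the 6% weight-loss target at week 24:
# 19/97 (GSLiM) vs 4/97 (dietary counselling), with a log-normal 95% CI.
import math

events_gslim, n_gslim = 19, 97
events_control, n_control = 4, 97

rr = (events_gslim / n_gslim) / (events_control / n_control)
se_log_rr = math.sqrt(1 / events_gslim - 1 / n_gslim
                      + 1 / events_control - 1 / n_control)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lower:.2f}, {upper:.2f})")
# -> RR = 4.75 (95% CI 1.68, 13.45)
```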
Out of 23 participants achieved weight loss, 10 (57.5%) intervention participants and 3 (75%) from the comparison group retained the targeted weight loss (6%) at 36th week. However, when measured in terms of weight in kg, no significant difference observed for the mean difference in weight loss between groups (-0.82 ± 2.12 kg), Cohen’s d 0.001.\nTable 6 summarises obesity and cardiovascular measures from baseline to 12th week, 13th to 24th week, baseline to 24th week and from 25th week to 36th week. No other significant mean difference between groups observed at any of the time intervals. Physical activity, psychological and quality of life scores measure from baseline to 24th week and from 25th week to 36th week are shown in Table 7.\n10.1371/journal.pone.0160343.t006Table 6BODY.OBESITY AND CARDIOVASCULAR OUTCOMES BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin GroupBetween groupsGSLiM mean change (s.e.) N = 97Comparison mean change (s.e.) N = 97Mean difference (95% CI)Cohen’s dp value0 to 12 weeks\n(intensive phase)\n\n\n\n\nWeight (kg)\n-1.52 (0.36)a\n-0.09(0.19)\n- 0.47 (4.69, 3.74)\n0.000\n0.825\nBMI (kg/m2)\n- 0.60 (0.14)a\n-0.31(0.08)\n- 0.30 (-1.52,0.92)\n0.001\n0.627\nWaist circumference (cm)\n- 1.67 (0.48)a\n0.54 (0.39)\n- 0.87 (-3.79, 2.06)\n0.002\n0.561\nHip circumference (cm)\n-0.74(0.39)\n-0.16 (0.27)\n0.77 (-1.81, 3.35)\n0.002\n0.557\nSBP (mmHg)\n0.24 (1.19)\n-2.26 (1.01)\n0.43 (-3.59, 4.46)\n0.000\n0.832\nDBP (mmHg)\n-0.25(0.96)\n-1.38 (0.72)\n0.09 (-2.91, 3.09)\n0.000\n0.951\nFasting serum triglyceride (mmHg)\n0.02 (0.05)\n0.009 (0.047)\n-0.14 (-0.33, 0.07)\n0.009\n0.184\nFasting serum cholesterol (mmHg)\n-0.04 (0.06)\n-0.04 (0.05)\n-0.01 (-0.26, 0.23)\n0.000\n0.913\nHDL (mmHg)\n-0.031 (0.017)\n0.018 (0.016)\n-0.05 (-0.13, 0.04)\n0.007\n0.256\nLDL (mmHg)\n-0.058 (0.055)\n-0.049 (0.057)\n0.09 (-0.12, 0.31)\n0.004\n0.401\nFasting blood glucose (mmHg)\n-0.003 (0.086)\n0.121 (0.087)\n-0.11 (-0.39, 0.17)\n0.003\n0.442\n13 to 24 weeks\n(fade frequency)\n\n\n\n\nWeight (kg)\n-0.92 (0.26)b\n-0.61(0.22)b\n-1.35 (-5.54, 2.85)\n0.002\n0.527\nBMI (kg/m2)\n-0.36 (0.10)b\n-0.24 (0.08)\n-0.64 (-1.87, 0.58)\n0.006\n0.300\nWaist circumference (cm)\n0.16 (0.44)\n0.18 (0.37)\n-1.79 (-4.73, 1.14)\n0.008\n0.229\nHip circumference (cm)\n-0.44 (0.29)\n-0.27 (0.23)\n0.39 (-2.21, 3.00)\n0.000\n0.763\nSBP (mmHg)\n0.17 (1.08)\n0.27 (0.90)\n1.63 (-2.37, 5.65)\n0.003\n0.424\nDBP (mmHg)\n-1.07 (0.82)\n1.27 (0.85)\n-0.51 (-3.51, 2.49)\n0.001\n0.737\nFasting serum triglyceride (mmHg)\n-0.05 (0.05)\n-0.047 (0.042)\n0.01 (-0.24, 0.37)\n0.007\n0.911\nFasting serum cholesterol (mmHg)\n0.08 (0.05)\n-0.02 (0.04)\n-0.13 (-0.35, 0.09)\n0.000\n0.239\nHDL (mmHg)\n0.038 (0.018)\n0(0.013)\n-0.05 (-0.14, 0.03)\n0.008\n0.215\nLDL (mmHg)\n0.033(0.049)\n0.039(0.041)\n0.08 (-0.4, 0.302)\n0.003\n0.453\nFasting blood glucose (mmHg)\n-0.088(0.046)\n-0.070(0.117)\n-0.21 (-0.54, 0.12)\n0.008\n0.207\n0 to 24 weeks\n\n\n\n\n\nWeight (kg)\n-2.42 (0.49)a\n-0.69 (0.27)c\n-0.82 (-5.00, 3.37)\n0.001\n0.701\nBMI (kg/m2)\n-0.96 (0.19)a\n-0.27 (0.11)b\n-0.44 (-1.65, 0.78)\n0.003\n0.480\nWaist circumference (cm)\n-1.51(0.52)a\n-0.31 (0.12)\n-1.12 (-4.00, 1.77)\n0.003\n0.445\nHip circumference (cm)\n-1.18 (0.45)c\n-0.43 (0.34)\n0.62 (-1.96, 3.19)\n0.000\n0.850\nSBP (mmHg)\n0.41 (1.23)\n-1.99 (1.14)\n0.82 (-3.09, 4.72)\n0.000\n0.680\nDBP (mmHg)\n-1.32 (0.91)\n-0.11 (0.99)\n-0.49 (-3.4.1, 2.41)\n0.003\n0.736\nFasting serum triglyceride (mmHg)\n-0.03 (0.06)\n-0.04 (0.05)\n-0.13 (-0.34, 0.07)\n0.009\n0.199\nFasting serum cholesterol 
(mmHg)\n0.04 (0.07)\n-0.02 (0.06)\n0.005(-0.24, 0.25)\n0.000\n0.970\nHDL (mmHg)\n0.01 (0.02)\n0.02 (0.02)\n-0.04 (-0.13, 0.04)\n0.002\n0.302\nLDL (mmHg)\n-0.03 (0.07)\n-0.01 (0.06)\n0.09 (-0.13, 0.30)\n0.003\n0.418\nFasting blood glucose (mmHg)\n-0.09 (0.08)\n0.11 (0.09)\n-0.16 (-0.45, 0.13)\n0.005\n0.299\n25–36 weeks (follow-up)\n\n\n\n\n\nWeight (kg)\n0.19 (0.35)\n-0.11 (0.16)\n-1.42 (-5.65, 2.82)\n0.002\n0.510\nBMI (kg/m2)\n0.02(0.07)\n-0.04 (0.64)\n-0.67 (-1.92, 0.57)\n0.006\n0.286\nWaist circumference (cm)\n0.19 (0.35)\n-0.19 (0.37)\n-1.44 (-4.39, 1.52)\n0.005\n0.338\nHip circumference (cm)\n-0.45 (0.31)\n-0.33 (0.21)\n0.26 (-2.39, 2.91)\n0.001\n0.636\nSBP (mmHg)\n-1.70 (1.27)\n0.46 (0.90)\n0.51 (-3.57, 4.48)\n0.000\n0.802\nDBP (mmHg)\n-0.72(0.97)\n-1.38(0.78)\n-1.03 (-4.08, 2.03)\n0.002\n0.509\nFasting serum triglyceride (mmHg)\n0.008(0.04)\n-0.01(0.03)\n0.008 (-0.25, 0.26)\n0.007\n0.264\nFasting serum cholesterol (mmHg)\n0.01(0.06)\n0.07(0.04)\n-0.13 (-0.36, 0.09)\n0.000\n0.953\nHDL (mmHg)\n0.012 (0.015)\n0.003 (0.018)\n-0.02 (-0.07, 0.11)\n0.003\n0.489\nLDL (mmHg)\n-0.033 (0.063)\n0.067 (0.044)\n0.03 (-0.19, 0.26)\n0.001\n0.715\nFasting blood glucose (mmHg)\n-0.008 (0.048)\n-0.065 (0.039)\n-0.22 (-0.58, 0.14)\n0.006\n0.267\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n10.1371/journal.pone.0160343.t007Table 7BODY.PHYSICAL ACTIVITY, PSYCHOLOGICAL AND QUALITY OF LIFE SCORE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin groupBetween groupsIntervention group mean change (s.e.)(n = 97)Control group mean change (s.e.)(n = 97)Mean Difference at intervals (95% CI)Cohen dP0 to 24 weeks\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n-18.9 (236.5)\n90.3(98.2)\n-244.3(-790.1, 301.5)\n0.004\n0.378\nVigorous, mets min-1\n-75.5 (149.7)\n58.6 (63.9)\n-10.5(-296.4, 275.5)\n0.000\n0.943\nModerate, mets min-1\n-6.8 (95.9)\n50.9 (59.5)\n-238.6(-509.3, 32.2)\n0.015\n0.084\nWalking, mets min-1\n63.3 (79.6)\n-19.3(43.8)\n4.7(-212.0,221.5)\n0.000\n0.966\nTotal WEL score\n10.3 (2.46)a\n1.78(1.88)\n5.62 (-0.10, 11.35)\n0.019\n0.054\nNegative emotions, mean (sd)\n2.02 (0.57)b\n0.07 (0.47)\n1.51 (0.004, 3.02)c\n0.020\n0.049\nAvailability, mean (sd)\n2.63 (0.62)a\n0.93 (0.49)\n0.96 (-0.58, 2.50)\n0.008\n0.218\nSocial Pressure, mean (sd)\n1.71 (0.60)c\n0.52 (0.43)\n0.85 (-0.61, 2.29)\n0.007\n0.253\nPhysical Discomfort, mean (sd)\n1.41 (0.48)c\n0.36 (0.44)\n1.31 (0.05, 2.56)c\n0.022\n0.041\nPositive activities, mean (sd)\n2.52(0.54)a\n-0.09 (0.40)\n0.99 (-0.27, 2.26)\n0.012\n0.121\n#\nTotal MDPSS mean score\n1.46 (0.65)\n1.09(0.63)\n\n0.012\n0.127\n# Friend Support, mean (sd)\n0.86(0.25)b\n0.44 (0.22)\n0.41(0.08, 0.75)a\n0.030\n0.015\n#Family Support, mean (sd)\n0.10 (0.27)\n0.19 (0.25)\n0.09 (- 0.25, 0.44)\n0.002\n0.578\n#Significant Others, Mean (sd)\n0.50 (0.30)\n0.45 (0.32)\n0.19(-0.21, 0.59)\n0.005\n0.347\nTotal ATQ mean score\n-0.84 (0.68)\n-0.40 (0.43)\n-0.40 (-2.29, 1.49)\n0.001\n0.675\nNegative self-concept mean score\n-0.39 (0.45)\n-0.26 (0.25)\n-0.19 (-1.35, 0.97)\n0.001\n0.745\nPersonal maladjustment mean score\n-0.36 (0.26)\n-0.19 (0.23)\n-0.19(-0.94, 0.57)\n0.001\n0.630\nWHOQOL BREF\n\n\n\n\n\nPhysical 
health\n2.35(0.19)a\n1.82(0.19)\na\n0.07(-0.35, 0.48)\n0.001\n0.743\nPsychological\n1.62(0.15)\na\n1.52(0.16)\na\n0.06(-0.41, 0.52)\n0.000\n0.784\nSocial relationship\n0.63(0.22)c\n0.12 (0.17)\n0.39(-0.22, 1.01)\n0.008\n0.210\nEnvironment\n0.52(0.14)b\n0.25(0.11)b\n0.18(-0.31, 0.67)\n0.003\n0.471\n25 to 36 weeks (follow up)\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n1.1(143.6)\n303.1(138.4)\n-449.9 (-1015.1,115.3)\n0.013\n0.118\nVigorous, mets min-1\n-63.1 (73.2)\n59.4 (79.6)\n-138.8 (-437.5, 159.9)\n0.004\n0.361\nModerate, mets min-1\n62.5 (61.5)\n102.1 (82.0)\n-287.2 (-563.2, -11.2)c\n0.021\n0.041\nWalking, mets min-1\n1.7 (73.4)\n141.6 (63.6)\n- 23.9 (-240.8, 192.9)\n0.000\n0.828\nTotal WEL score\n-1.77 (1.45)\n0.45 (1.19)\n8.76 (3.05, 17.47)b\n0.045\n0.003\nNegative emotions, mean (sd)\n-0.42 (0.39)\n0.06 (0.27)\n2.24 (0.81, 3.67)b\n0.305\n0.002\nAvailability, mean (sd)\n-0.09 (0.35)\n0.19 (0.28)\n1.68 (0.10, 3.25)c\n0.022\n0.037\nSocial Pressure, mean (sd)\n-0.34 (0.34)\n0.09 (0.26)\n1.23 (-0.29, 2.75)\n0.013\n0.114\nPhysical Discomfort, mean (sd)\n-0.40 (0.31)\n0.31 (0.25)\n1.48 (0.28, 2.68)c\n0.030\n0.016\nPositive activities, mean (sd)\n-0.52 (0.36)\n-0.79 (0.33)\n2.14 (0.93, 3.35)a\n0.060\n0.001\n#Total MDPSS mean score\n0.35(0.62)\n0.46(0.41)\n1.46 (-0.26, 3.17)\n0.015\n0.095\n# Friend Support, mean (sd)\n-0.10(0.19)\n0.11(0.14)\n0.80 (0.15, 1.45)\nc\n0.030\n0.017\n#Family Support, mean (sd)\n0.27(0.22)\n0.16(0.18)\n0.29 (-0.35, 0.95)\n0.004\n0.364\n#Significant Others, Mean (sd)\n0.09(0.31)\n0.19(0.18)\n0.38 (-0.38, 1.13)\n0.005\n0.325\nTotal ATQ mean score\n-0.19(0.47)\n-0.73(0.21)b\n-0.35 (-2.17, 1.48)\n0.001\n0.709\nNegative self-concept mean score\n-0.10(0.30)\n-0.47(0.37)b\n-0.07 (-1.22, 1.07)\n0.000\n0.901\nPersonal maladjustment mean score\n-0.08(0.19)\n-0.19 (0.09)\n-0.21 (-0.94, 0.52)\n0.002\n0.568\nWHOQOL BREF\n\n\n\n\n\nPhysical health\n-0.92 (0.18)a\n-0.53(0.19)c\n0.13(-0.37, 0.64)\n0.001\n0.609\nPsychological\n-0.29 (0.19)\n-0.12 (0.20)\n0.03(-0.49, 0.54)\n0.000\n0.923\nSocial relationship\n-0.03 (0.17)\n0.14 (0.11)\n0.56(-0.08, 1.19)\n0.015\n0.084\nEnvironment\n-0.06 (0.11)\n0.06 (0.09)\n0.21(-0.28, 0.69)\n0.004\n0.405\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# MDPSS, adjusted for baseline MDPSS.\n* median values instead of mean for physical activity.\nThe intervention group achieved higher score for negative emotions and physical discomfort of WEL subscales compared to the comparison group from baseline to 24th week. The effects maintained during follow up from 25th to 36th week. Intervention participants achieved higher friend support mean score from baseline to 24th week, even after adjusting for baseline MDPSS score. During the follow up period, except for social pressure, total WEL and other WEL subscales mean score increased significantly compared to the comparison group. However, moderate physical activity in METSmin-1 per week reduced significantly in the intervention group compared to the comparison group during 25th to 36th week. 
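The between-group contrasts in Tables 6 and 7 rest on a repeated-measures, intention-to-treat analysis in which missing visits were imputed by last observation carried forward (LOCF). A minimal sketch of that imputation step follows; the column names and values are hypothetical, since the trial was analysed in SPSS rather than Python.

```python
# Illustrative last-observation-carried-forward (LOCF) imputation for an
# intention-to-treat dataset in long format (one row per participant-visit).
import pandas as pd

visits = pd.DataFrame({
    "id":        [1, 1, 1, 1, 2, 2, 2, 2],
    "week":      [0, 12, 24, 36, 0, 12, 24, 36],
    "weight_kg": [82.3, 80.9, 80.1, None, 81.5, None, None, None],
})

visits = visits.sort_values(["id", "week"])
# Carry each participant's last observed value forward to later visits.
visits["weight_locf"] = visits.groupby("id")["weight_kg"].ffill()
print(visits)
```

The imputed long-format table could then be fed to a repeated-measures ANOVA routine (for example statsmodels' AnovaRM) to obtain the group-by-time effects summarised in the tables.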
The intervention group experienced lower intake in carbohydrate percentage with higher percentage increase in fat intake over baseline to 12th week and 13th to 24th week. No other dietary change was observed (Table 8).\n10.1371/journal.pone.0160343.t008Table 8BODY.DIETARY ENERGY INTAKE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin group changeBetween groupsIntervention mean (s.e)Control mean (s.e)Mean difference at intervalsCohen dPN = 65N = 56(95% CI)0–12 weeks\n\n\n\n\n\nDietary intake, kcal\n0.41 (38.2)\n45.70 (22.76)\n47.25 (-79.8, 174.3)\n0.005\n0.463\nProtein (%)\n-0.05 (0.29)\n-0.19 (0.20)\n0.88 (-0.01, 1.77)\n0.09\n0.053\nCarbohydrate (%)#\n-4.12 (0.82)a\n-1.58 (0.67)\n-1.78 (-2.86, -0.69)b\n0.08\n0.002\nFat (%)\n3.54 (0.93)b\n1.77 (0.66)\n2.69 (0.91, 4.47)b\n0.07\n0.003\n13–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.06 (12.96)\n15.78 (35.53)\n19.24 (-113.3, 151.8)\n0.001\n0.744\nProtein (%)\n0.08 (0.16)\n0.22 (0.24)\n0.93 (-0.02, 1.87)\n0.03\n0.055\nCarbohydrate (%)#\n-0.19 (0.40)\n-1.91 (0.51)b\n-2.76 (-4.87, -0.64)c\n0.05\n0.011\nFat (%)\n0.11 (0.35)\n2.02 (0.53)b\n2.62 (0.36, 4.88)c\n0.04\n0.023\n25–36 weeks\n\n\n\n\n\nDietary intake, kcal\n-6.52 (9.08)\n39.15 (23.7)\n-8.96 (-139.7, 121.8)\n0.00\n0.892\nProtein (%)\n0.12 (0.09)\n0.44 (0.19)\n0.69 (-0.30, 1.69)\n0.02\n0.171\nCarbohydrate (%)#\n-1.18 (0.13)\n-1.48 (0.62)\n-1.33(-3.61, 0.95)\n0.11\n0.250\nFat (%)\n-0.07 (0.11)\n1.05 (0.55)\n1.17 (-1.16, 3.51)\n0.01\n0.322\n0–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.46 (38.28)\n61.48 (36.17)\n36.12 (-86.9, 159.19)\n0.003\n0.562\nProtein (%)\n0.13 (0.30)\n0.03 (0.20)\n0.81 (-0.08, 1.68)\n0.001\n0.444\nCarbohydrate (%)#\n-4.31(0.88)a\n-3.49 (0.64)a\n-1.84(-3.25, -0.43)c\n0.053\n0.011\nFat (%)\n3.65 (0.96)b\n3.79 (0.82)\n1.74 (0.003, 3.46)\n0.000\n0.050\nConfidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05. All participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# CHO percentage was adjusted for its baseline value for between group analyses in all interval.\n\nBODY.WITHIN INTERVENTION GROUP:\nWeight and BMI of the intervention group were reduced during the first 12 weeks (core phase), with smaller reduction in the next 12 weeks (fade frequency). During follow up, there was an insignificant increase in weight and BMI. Reduction in waist circumference occurred in the first 12 weeks but not sustained during fade frequency. An overall significant hip circumference reduction was observed within interval 0 to 24th week, although the reduction was not significant in the core phase or fade frequency.\nWEL and its subscales’ scores, friend support subscale score of MDPSS and WHOQOL-BREF domains score increased from 0 to 24th week. However, the improvement was not sustained during follow up, with significant reduction in WHOQOL-BREF physical health score observed during 25th to 36th week. The intervention group experienced reduction in carbohydrate percentage intake mainly during the core phase, resulting in significant reduction from baseline to 24th week. Concurrent fat percentage intake increased during the same time interval. 
There was no significant change in cardiovascular risks or physical activity measures throughout the study.\n\nBODY.WITHIN COMPARISON GROUP:\nThe comparison group's weight decreased only during the 13th to 24th week, yet this contributed to an overall significant reduction across treatment from 0 to the 24th week. Over the same 0 to 24th week interval, a significant decrease in carbohydrate and increase in fat percentage intake also occurred in the comparison group, arising from the 13th to 24th week. As in the intervention group, the WHOQOL-BREF physical health score decreased in the follow-up phase. Apart from these changes, significant reductions were also observed in negative thoughts and in the negative self-concept subscale of the ATQ score during the follow-up period (25th to 36th weeks).\n\nBODY.ADHERENCE TO INTERVENTION:\nBODY.ATTENDANCE:\nAttendance was assessed as part of the adherence measure in the intervention group. Of the six sessions within the first 12 weeks, 33 (17.0%) participants in the intervention group attended more than four sessions, 34 (17.5%) attended three to four sessions, while 30 (15.5%) attended two sessions or fewer. During the fade frequency phase with four sessions, 20.6 percent of participants attended three to four sessions. None of the participants who attended fewer than three sessions achieved 6% weight loss, while 8.9% of participants who attended four to six sessions achieved the targeted weight loss. About half (44.8%) of the participants who attended more than seven sessions achieved the 6% targeted weight loss (p<0.001). Median attendance was six sessions with an inter-quartile range of five sessions. There was a positive correlation between attendance and weight loss (r = 0.491, P<0.001).\n\nBODY.LOG BOOK SUBMISSION:\nOverall log book submission was unsatisfactory for both groups. Only 46.4% (n = 45) of intervention participants submitted their log books at the 12th week and 16.5% (n = 16) at the 24th week. Only 23.7% (23) of comparison group participants submitted their log books at 12 weeks and 19.6% (19) at the 24th week. Those who did not submit their log books were of younger age (p = 0.03).\n\nBODY.ADVERSE EFFECT OF THE INTERVENTION:\nTwo male participants with BMI above 40kg/m2 from the intervention group experienced mild soft tissue injury during self-conducted exercise and received treatment. No other serious event was reported.\n\nBODY.DISCUSSION:\nThe intervention proved more effective than the comparison (dietary counseling) group in achieving the targeted 6% weight loss, improving self-efficacy in dietary control, and attaining better friend support and quality of life. The psychological improvement in the intervention group was sustained post intervention. Although the comparison group also experienced reductions in weight and BMI and improvements in quality of life and ATQ scores, it showed no improvement in self-efficacy or social support.\nA total of 19% of GSLiM participants achieved 6% weight loss. For comparison purposes, we recalculated and found that 17.5% of GSLiM participants achieved 7% weight loss. Other GLB-based studies reported 23 to 37% achievement of targeted weight loss [14]. A recent GLB-based worksite trial reported that 29% of participants achieved the targeted 7% weight loss [26], while another study assessing the effectiveness of a coach-led GLB programme in a primary care setting found 37% of participants achieving 7% weight loss at 15 months [45]. Our study had a lower proportion of targeted weight loss due to fewer sessions in GSLiM compared to the original GLB-DPP. 
Earlier study reported for every other lifestyle session attended, weight loss could increase by 0.26% (16).\nOur findings concurred with recent review which showed multicomponent behaviour intervention was more effective than single component intervention [46]. The mean weight loss within intervention at 24th week was -2.43 kg (95% CI: -3.75, -1.21, p = 0.001), comparable to another trial which reported mean weight loss of -2.3 kg (95% CI: -2.92 to -1.72, p = 0.001) [47]. Most improvement occured during the first 12 weeks in the intervention group where sessions were more intensive. Earlier findings showed positive association between intensity and magnitude of weight loss [48]. Yet, lower intensity programme with similar effect is still being sought by other researchers [49], as intensive intervention are costly in terms of manpower and materials [50]. Apart from weight loss, significant improvement of the self-efficacy score and its subscales were seen among intervention participants compared to the comparison group, similarly found by other studies [51–53]. The intervention group experienced immediate improvement in social support score at baseline. However, after controlling for the baseline friend support score, significant difference between groups was observed for mean friend support score during the 24 weeks. Similar finding was also reported by a recent cluster randomised trial where healthy eating was found to be supported by friend [54].\nAny improved ATQ score would be expected in the intervention group due to the presence of cognitive elements incorporated in the programme, instead the improvement of ATQ score occurred within the comparison group. The use of ATQ to measure negative thought in obesity research remains limited in the presence of other tools used to assess dysfunctional eating behavior associated with obesity [55]. The average cut off value for depressogenic negative thoughts using the 17 item scale of ATQ—Malay in non-clinical population was 30.0 and clinical population was 47.0 [41]. The mean ATQ score for both the intervention and comparison groups was 25.6 and 26.3 respectively, below the cut off value; therefor the significant mean difference found between the intervention and comparison group was most likely due to chance. ATQ was used in our study to measure depressogenic features amongst obese participants and observed further change as treatment progressed. Moderate concurrent validity (r = 0.65) was found between the Malay ATQ and Malay-Beck Depression inventory (BDI), while discriminant analysis achieved 89.3%.\nOther than improvement in weight, psychological measures and quality of life, there was no improvement in cardiovascular risks among our participants. Other studies found at least 10% weight loss of baseline weight resulted in changes for the cardiovascular risk parameters [56]. Several other reasons may influence the absence of treatment effect on cardiovascular risks in this study. Our participants were within the younger age group and their clinical indicators were of normal range, therefor further improvement would be less likely. The focus of intervention would also influence the result as other studies targeting clinical indicators with health education in dietary approach managed to result in cardiovascular risks improvement [57,58]. 
Therefore, weight loss may not necessarily result in improvement of cardiovascular risk factors.\nParticipants also experienced a small reduction in carbohydrate percentage intake and an increase in fat percentage intake, as found in a previous study [59]. A low carbohydrate diet has been associated with greater weight loss compared to a low fat diet [60]. Nevertheless, there have been calls for the effectiveness of lifestyle modification not to be overshadowed by the continuous search for the best dietary approach [61], when moderate, balanced nutrient intake should be advocated.\nAlthough promising, the results of this study should be interpreted cautiously. Since participants were employees of the university, generalizability remains limited to populations in similar settings and may differ from the general population. Self-reported measures, in particular the self-administered surveys used, may result in reporting bias. Negative behaviours such as smoking and alcohol consumption may be underreported, while physical activity may be over-reported. Although the short form IPAQ was advocated for use in view of time-saving, a recent review [62] found that the IPAQ SF tended to overestimate the METs-minutes/week score. However, the categories of physical activity derived from the long form IPAQ [63] showed similarity with the categories found in our study and in another Malaysian based study [3]. Therefore, we consider the results reliable, in particular for assessing change over time. Objective measures such as pedometers or accelerometers are cited as more reliable [64]; however, their utilization is resource consuming and may not be feasible in a community setting. Our study only measured dietary self-efficacy; however, a recent finding showed that exercise self-efficacy may be a better predictor of weight loss than dietary self-efficacy [65]. Considering the continuous development in the cognitive-psychological aspect of obesity intervention, both physical and dietary self-efficacy need to be assessed in future studies.\nSelf-monitoring is another critical component for weight reduction (39); however, log book submission by both groups was poor although attempts had been made to improve submission. This may further explain the small effect sizes observed and the absence of improvement in cardiovascular risks although weight loss was achieved within each group. Poor log book submission for physical activity and dietary intake has also been reported in other studies (17). Attendance at group sessions was low compared to other studies [25,45,66], yet findings similar to ours were also reported in another Malaysian based lifestyle intervention trial [67]. We also found processes such as attendance and log book submission to be correlated with weight loss. Competing work time demands have also been cited as factors affecting attendance at worksite interventions [25]. Local culture within the society may influence attendance of the programme, which needs to be explored.\nFinally, we did not evaluate environmental support, such as the availability of healthy food choices and physical activity facilities, which is known to influence weight loss.\nNevertheless, this study is the first group based programme derived from the GLB-DPP conducted in Malaysia. It addressed the dynamics of the psychological aspect within the lifestyle modification programme based on a theoretical construct, which was rarely implemented and monitored in other translational weight loss programmes. 
A randomised controlled design, as opposed to a single arm trial, was used to evaluate its effectiveness against an existing dietary programme.\n\nBODY.CONCLUSION AND RECOMMENDATIONS:\nOur findings showed that the group based workplace intervention (GSLiM) achieved the targeted weight loss and improved self-efficacy and positive social support at a lower intensity (fewer sessions) than the original GLB-DPP. The incorporation of a group based approach and psychological sessions produced the psychological change needed for weight loss. The GSLiM programme is ready to be used and can be replicated in similar settings, with possible enhancements for exercise self-efficacy, attendance, and adherence to self-monitoring. The programme should be extended over a longer duration. Further research should explore the predictive value of psychological factors in assisting participants to achieve their targeted weight loss.\n\nBODY.SUPPORTING INFORMATION:\nS1 File: RESEARCH PROTOCOL (PDF).\nS2 File: DATA (XLS).\nS1 Table: CONSORT CHECKLIST FOR RANDOMISED CONTROL TRIAL (PDF).\n\n**Question:** Compared to dietary counseling, what was the result of Group Support Lifestyle Modification (GSLiM) on Systolic blood pressure, Diastolic blood pressure?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13861
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Group Support Lifestyle Modification (GSLiM) Programme among Obese Adults in Workplace: A Randomised Controlled TrialGSLiM and Obesity\n\n\n\nABSTRACT.BACKGROUND:\nThere was an increasing trend in the prevalence of obesity and its comorbidities over the past decades in Malaysia. Effective intervention for obesity remains limited. This study aimed to compare the effectiveness of a group based lifestyle modification programme amongst obese individuals with an existing dietary counseling programme.\n\nABSTRACT.METHODS:\nWe recruited one hundred and ninety four overweight and obese (BMI>27.5 kg/m2) employees from a local university. They were randomly allocated to either Group Support Lifestyle Modification (GSLiM) (intervention)(n = 97) or dietary counseling (comparison)(n = 97). The GSLIM activities included self monitoring, cognitive-behaviour sessions, exercise as well as dietary change advocacy, which were conducted through seminars and group sessions over 24 weeks. The comparison group was given dietary counselling once in 12 weeks. Both groups were followed up for additional 12 weeks to check for intervention effect sustenance. Anthropometric and biochemical parameters were measured at baseline, 12, 24 and 36 weeks; while dietary intake, physical activities, psychological measures and quality of life measured at baseline, 24 and 36 weeks. Data analysis was conducted using ANOVA repeated measures with intention to treat principle.\n\nABSTRACT.RESULTS:\nThe participants were predominantly women with mean (standard deviation) age of 40.5 (9.3) years. A total of 19.6% of the participants in GSLiM achieved 6% weight loss compared to 4.1% in the comparison group (Risk Ratio 4.75; 95% CI: 1.68, 13.45). At 24 weeks, the retention rate was 83.5% for GSLiM and 82.5% for comparison group. GSLiM participants also achieved significant improvement in total weight self-efficacy score, negative emotions and physical discomfort subscales, MDPSS friend subscale and all domains in quality of life. Participants in the comparison group experienced reduction in negative self-thoughts.\n\nABSTRACT.CONCLUSION:\nThe GSLiM programme proved to be more effective in achieving targeted weight loss, improving weight self-efficacy, friend social support, and quality of life compared to dietary counseling.\n\nABSTRACT.TRIAL REGISTRATION:\nIranian Registry of Clinical Trials IRCT201104056127N1\n\nBODY.INTRODUCTION:\nMalaysia has the highest combined overweight and obese prevalence among the Southeast Asian countries in 2013 [1]. From 1996 to 2009, the prevalence of obesity in Malaysia increased by 23.7% [2], concurrent with the obesity global pandemic. High risks groups for obesity were women, married adults, those with secondary education and unemployed or homemakers [3]. 
Arguably, working adults who engage in sedentary work as well as those experience stress at the workplace [4,5] are also at risks of obesity.\nObesity is associated with increase mortality [6] and reduction in overall quality of life [7]. Malaysians observed an increase of mortality due to heart disease and cancers from 67% in 2008 to 73% in 2012 [8,9]. Comprehensive lifestyle modification on diet, physical activity and psychology remains as the primary intervention for obesity [10,11].\nMulticomponent lifestyle modification proved successful in reducing diabetes risk [12] as well as improved cardiovascular risk factors [13]. The Diabetes Prevention Program (DPP) was a high intensity lifestyle modification programme which managed to reduce participant’s diabetic risks by 58% [12]. Yet, in real life, high intensity programme may not be applicable to all settings as it is resource intensive. This results in its translation into the Group Lifestyle Balance (GLB-DPP) implemented in clinical settings, primary care and workplace [14]. The GLB-DPP retained the core principles of the DPP such as 7% weight loss, 150 minutes of physical activity and self-monitoring using group approach with reduced sessions.\nAccording to the Social Cognitive Theory; personal behavior, thoughts and environment reciprocate to produce action [15,16]. Factors influencing the cognitive process favoring weight loss includes high self-efficacy [17,18] and social support [19]. Meanwhile, obese individuals may have higher inclination for negative thoughts related to dysfunctional eating [20] hampering lifestyle modification progress. However, these psychological factors were seldom assessed and reported together with clinical measures as part of the lifestyle modification programme outcomes [21,22].\nWorkplace has been shown to be a feasible and effective setting for lifestyle modification programme for obesity prevention [23–27]. Apart from preventing short term deterioration of quality of life, moderate improvement in weight induced employee productivity [28].\nIn view of urgent need for obesity intervention, we implemented an adaptation of the GLB-DPP programme named Group Support Lifestyle Modification (GSLiM) in the workplace. The objective was to compare the effectiveness of the GSLiM programme with an existing one to one dietary counseling programme in the workplace. The programme was designed to create social support and improve self-efficacy. It was hypothesized that Group Support Lifestyle Modification (GSLiM) would be more effective in producing weight loss, improvement of cardiovascular risk factors, self-efficacy, social support factors as well as their quality of life.\n\nBODY.METHODS:\nThis was a randomised controlled trial conducted in a public university in Kuala Lumpur, Malaysia. Ethical clearance obtained from the Medical Ethics Committee, University Malaya Medical Centre on 16 March 2011 (MEC No. 841.2) and was registered with the Iranian Registry of Clinical Trial (http://www.irct.ir/) (IRCTID: IRCT201104056127N1). There was a slight delayed in trial registration as the authors waited for funding approval obtained from the University Malaya post-graduate research fund.\nBODY.RECRUITMENT AND PARTICIPANTS:\nParticipants were employees from a public university in Kuala Lumpur. Recruitment started soon after trial registration from May 2011 to September 2012. The trial intervention and follow up commenced from September 2011 to July 2013. 
Respondents gave written informed consent during recruitment and informed of random allocation into either of the treatment arms. Recruitment, data collection and intervention programme were conducted in the Sport Centre of the Faculty of Sports Science within the university.\nEligible criteria included employees of the university, Bahasa Malaysia (national language of the country) literate, with BMI of 27.5kg/m2 or more and able to walk briskly for at least 10 minutes without assistance. Cut off point of 27.5kg/m2 was used as Asians experience higher risk for type 2 diabetes, metabolic syndrome as well as cardiovascular diseases at lower BMI [29–31]. Exclusion criteria included individuals with unstable angina, congestive cardiac failure, cancer, severe pulmonary disease, psychiatry disorders e.g. substance abuse, depression, weight loss of more than 5kg in the last 6 months, pregnant or breastfeeding. Self-administered questionnaire used to assess medical history and family history for disease risks. Further assessment to rule out undiagnosed medical conditions included full blood count (FBC), renal profile (RP), fasting blood sugar (FBS) and lipid profile at recruitment phase.\nOf 510 employees’ responded to invitations via emails and flyers, 275 consented for participation and assessed for eligibility. Nineteen participants withdrew after consented, eleven did not finish screening, fifty-one were excluded and finally, 194 participants were randomised (Fig 1).\n10.1371/journal.pone.0160343.g001Fig 1BODY.CONSORT PARTICIPANTS FLOWCHART OF INTERVENTION AND CONTROL ARM.:\nGSLiM: Group Support Lifestyle Modification; Comparison group: Dietary counselling.\nBODY.RANDOMISATION:\nA third person (TC), who was not involved with the study conducted random allocation of participants into either Group Support Lifestyle Modification (GSLiM) or dietary counseling (comparison). Allocation of participants to GSLiM or comparison arms were assigned using opaque envelopes. This was an open label trial as both treatment provider and participants knew which arm they belonged to since the programme differed between the two groups.\n\n\nBODY.INTERVENTIONS:\nBODY.GLB-DPP:\nThe Group Lifestyle Balance (GLB-DPP) developed based on the Diabetes Prevention Program (DPP) [32] that retained the core principles of DPP. These included self-monitoring for weight, dietary and physical activity, problem solving, staying motivated, and target 7% weight loss from baseline weight. However, the programme delivery of GLB-DPP used group-approach given over twenty-two hourly sessions in three phases (core, fade frequency and support).\n\nBODY.GSLIM:\nPermission to adopt and adapt the GLB-DPP was obtained from the author/s under the Creative Common Licence Share Alike. The GSLiM programme retained the core characteristics of GLB-DPP except a 6% target weight loss from baseline weight used instead of 7% and absence of support sessions after programme completion at 6th month. The 6% target was based on an achievement of a minimum of 1% weight loss per month in view of evidence that even 5% weight loss may produce clinical improvement among obese individuals [10]. The frequency of sessions conducted in GSLiM was less compared to the GLB-DPP with ten sessions in GSLiM intervention compared to the twenty-two of GLB-DPP. However, the original twenty-two topics of the GLB-DPP were retained in GSLiM with the topics delivered through three seminars and five 90-minute sessions in two phases. 
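Referring back to the eligibility threshold (BMI of 27.5 kg/m2 or more) and the 6% weight-loss goal described above, the underlying arithmetic is straightforward; the helper functions below are purely illustrative and are not taken from the paper.

```python
# Illustrative helpers for the screening threshold and weight-loss goal:
# BMI = weight (kg) / height (m) squared, eligibility at BMI >= 27.5 kg/m2,
# and a target weight 6% below baseline. Function names are ours.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def is_eligible(weight_kg: float, height_m: float, threshold: float = 27.5) -> bool:
    return bmi(weight_kg, height_m) >= threshold

def target_weight(baseline_kg: float, loss_fraction: float = 0.06) -> float:
    return baseline_kg * (1 - loss_fraction)

# Cohort-average example (82.3 kg, 1.59 m): BMI ~32.6, eligible, target ~77.4 kg.
print(round(bmi(82.3, 1.59), 1), is_eligible(82.3, 1.59), round(target_weight(82.3), 1))
```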
The core phase (Phase 1) was run once in two weeks for three months, beginning with a two-day seminar. A fade frequency (Phase 2) started with a one-day seminar, followed by two 90-minute sessions and concluded with a half-day seminar at the end of the active intervention. Comparison of sessions between GLB-DPP and Group Support Lifestyle Modification (GSLiM) is shown in Tables 1 and 2.\n10.1371/journal.pone.0160343.t001Table 1BODY.COMPARISON OF GROUP LIFESTYLE BALANCE (GLB) WITH GROUP SUPPORT LIFESTYLE MODIFICATION (GSLIM) PROGRAMME.:\nSame aspects of GLB and GSLiM\nSafe and appropriate intervention that incorporates nutrition, physical activity, and behaviour change.\nGradual increase of 150 mins per week of physical activity and minus 500 kcal per day of dietary intake.\nStrong focus on use of self-monitoring tools with feedback.\nInitial emphasis on fat intake and calories.\nPrimary focus on healthy food choices.\nUse of inexpensive food samples and incentives.\nUse of problem-solving techniques to address barriers to healthy eating and physical activity.\nGroup approach.\n\nSpecific Adaptation to GSLiM\nGLB\nModified GLB (GSLiM)\nGoal: 7% weight loss and increase physical activity to 150 minutes/week.\nGoal: 6% weight loss of baseline body weight and physical activity 150 per week.\n12 weekly 1-hour sessions delivered over 12–15 weeks\n0, 3 and 6 months by seminar sessions.\n5 once in 2 weeks session over 12 weeks\n2 monthly session over subsequent 12 weeks\n\nPedometer introduced during core sessions\nLogging for physical activity in log book\nTrainers: Dietician and exercise specialist\nTrainers: Dietician and exercise specialist and psychologist\nTotal sessions 22 given in spread of 48 weeks\nCompressed session to 10 given in 24 weeks\n10.1371/journal.pone.0160343.t002Table 2BODY.TOPIC COMPARISON BETWEEN GSLIM AND GLB-DPP.:\nGLB-DPP topic arrangementGSLiM topic arrangementsCore sessions: Weekly for 12 weeks\nCore sessions: Bi- weekly (2 per month) for 12 weeks\n1. Welcome to GLB.\n1. Welcome to GSLiM—Seminar overview of topics. (GLB-DPP 1)Physical activity component (overview of topics and specifics on)\nStretching: The truth about flexibility (GLB-DPP session 21)\nHeart health (GLB-DPP session 20)\nDiet component: overview of topics, Tip the calorie balance (GLB-DPP 5)\nPsychological—overview\n\n2. Be a fat and Calorie Detective\n2. GSLiM Session 22.1 Fat, calorie and you (GLB-DPP session 2)\n2.2 Healthy eating (GLB-DPP session 3)\n\n3. Healthy eating\n3. GSLiM Session 33.1 Move those muscles (GLB-DPP session 4)\n3.2 Monitoring your activities (GLB-DPP 10)\n\n4. Move those muscles\n4. GSliM Session 44.1Negative thoughts and Weight (GLB-DPP 9 and 14)\n4.2 Balance your thoughts (GLB-DPP 15)\n\n5. Tip the calorie balance\n5. GSLiM Session 5\n5.1 Behavior, Environment and Lifestyle Change (GLB-DPP 6 and 17)\n5.2 Make Social Cues Work for You (GLB-DPP 11)\n\n6. Take charge of whats around you\n6. GSLiM Session 6• 6.0 Strengthen Your Exercise Programme (GLB-DPP 16)\n7. Problem solving\n\n8. Four keys to healthy Eating out\n\n9. Slippery slope of Lifestyle change\n\n10. Jump start Your Activity Plan\n\n11. Make Social Cues Work for You\n\n12. Ways to Stay Motivated\n\nTransition session (fade frequency)\nTransition session (fade frequency)\n13. Prepare for Long Term Self Management\n7. GSLiM Session 7- One day seminar.\n\na. Stress management (GLB-DPP 18).\n\nb. Problem solving (GLB-DPP 7).\n\nc. Healthy eating out (GLB-DPP 8).\n\nd. More volume fewer calories (GLB-DPP 14).\n\ne. 
Group physical activity sessions.\n14. More Volume; fewer calories\n8. GSLiM Session 8.• 8.0 Standing Up for Your Health (GLB-DPP 19)\n15. Balance your thoughts\n9. GSLiM Session 9.• 9.0 Stay motivated (GLB-DPP 12)\n16. Strengthen Your Exercise Programme\n10. GSLiM session 10 (final).Looking back and looking forward (GLB-DPP 22)—Sharing achievements.\nPrepare long term self-management (GLB-DPP 13)\n\nSupport sessions (variable sequence)\nFollow up at 36 weeks\n17. Mindful eating\n\n18. Stress and Time Management\n\n19. Standing Up for Your Health\n\n20. Heart Health\n\n21. Stretching: The truth about flexibility\n\n22. Looking back and looking forwards\n\nTable 3.\n10.1371/journal.pone.0160343.t003Table 3BODY.SUMMARY OF PROGRAMME STRUCTURE, SESSIONS AND CONTENTS.:\nSessionApproachTitle and Contents1\nSeminar\nIntroduction to the programme\n\n\nForeword on the Group Support Lifestyle Modification (GSLiM) programme configuration and sessions. Highlight energy balance and the concept of self-monitoring. Develop allegiance to the programme.\n\n\nSet up goals for: 6% weight loss, and physical activity of 150 minutes per week. Daily calorie requirement (DCR) minus 500 for dietary intake to be worked out through the 2 step Harris Benedict equation. Foreword on components: psychological, diet and nutrition and physical activity.\n\n\nDescription of main food components, sources of fat, food pyramid and food plate theory. General outlook on physical activity and exercise. Introduction to cognitive and behaviour aspects related to lifestyle alteration. Getting-to-know- you session lasting half a day to help participants get acquainted with each other.\n2\nGroup session\nCalorie measurement and healthy food options.\n\n\nEducation and hands on session for self-monitoring through the measurement of food and calories by participants utilising scales and measurement tools. Reading food labels and using the food plate theory for practice. Sessions with active facilitator participation.\n3\nGroup session\nMoving Your Muscles.\n\n\nInformation on effective exercise methods (brisk walking). Raise awareness on the safety aspects of physical activity and exercise. Highlight the enhancement of general health through physical activity and exercise. Assess individual's present level of physical activity and exercise. Rise in activity levels to be spread out over a period of time. Set weekly goals for physical activity. Commence self-monitoring for physical activity and exercise. Group activity for aerobic session.\n4\nGroup session\nNegative thoughts.\n\n\nUncover negative thoughts. Increase knowledge on variety of negative thoughts, types of thought disorder and its relationship to behaviour. Suggest ways to offset negative thoughts. Highlight the significance of appreciating oneself for personal accomplishments. Review negative thoughts and set up action plan to counter them.\n5\nGroup session\nRelationship between behaviour, environment and healthy lifestyle change.\n\n\nDefine social and physical environment and relate its influence on physical activity and eating behaviour. Promote positive response to environmental and social drawbacks. Highlight the need for a healthy diet and physical activity /exercise. Emphasize on acting as role models for group members in order to promote physical activity and positive eating. 
Appraisal of present behaviour and setting up of action plan to achieve positive behaviour change\n6\nGroup session\nConsolidating Exercise Programme\n\n\nInitiate aerobic fitness sessions and define frequency, intensity, time and type of activity (F.I.T.T). Educate participants on the engagement of self-monitoring to set up an effective exercise schedule through the calculation of heart rate and rate of perceived exertion (RPE).\n\n\nDiscuss schedule and include a variety of activities and exercises to develop flexibility, strength, cardiovascular health and endurance.\n7\nSeminar\nConsolidate dietary, fitness, thought and behaviour alteration.\n\n\nReport on the progress of participants. Reassess weight loss goals, diet and physical activity based on the most recent measurements. Behaviour: Initiate stress management procedure (abdominal breathing) and problem solving techniques (I.D.E.A.L approach) to deal with issues that crop up due to alterations in diet and daily activities. Dialogue on problems encountered by participants. Encourage healthy eating through the promotion of high volume and low calorie foods.\n\n\nDietary: Educate on the choice of healthy food when eating out. Explore by group members of ways to be more active on busy work days.\n\n\nFitness: Suggest activities for work related sitting time. Stress on the need for intervals between extended periods of sitting especially at work. Discuss options on exercise location during unfavourable weather conditions. Teach ways to improve cardiovascular health through resistance and strength exercises. Set up a physical activity schedule for the following month.\n8\nGroup session\nChallenges in Lifestyle change.\n\n\nDiscourse among group members on problems encountered during attempts to change their lifestyles focussing on eating habits and physical activities. Exchange of information and problem-solving suggestions among participants. Re-assess obstacles faced and define a strategy to overcome these barriers to change.\n9\nGroup session\nMaintaining motivation.\n\n\nConcentrate on means to be constantly motivated for lifestyle change. Stress on awareness of internal and external sources of motivation. Emphasise on appreciating minor accomplishments related to lifestyle change. Initiate discussion among group members on strategy to maintain motivation.\n10\nSeminar\nExperience sharing.\n\n\nDisclosure of participants’ accomplishments during the course of the programme (6 months). Initiate panel discussion involving team and group members on the achievements of participants during the programme. Encourage participants to share their six-month experience on the road to better health through weight reduction. Participants were also encourage to continue healthy lifestyle.\nA starter kit containing handouts, examples of food energy intake and activity-based energy expenditure in the form of wheel chart was provided to participants during the first session. Handouts were also given to participants after subsequent sessions. Self-monitoring activities included weighing-in, submission of diet records and physical activity log was also conducted for each session. Feedback was provided to motivate the participants on each session.\n\n\nBODY.COMPARISON:\nParticipants in the comparison group underwent individual counseling with a dietician once every 12 weeks. Each dietary counseling session lasted for an hour. 
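For context on the dietary goal itself: the GSLiM session outline anchors intake to an estimated daily calorie requirement minus 500 kcal, worked out through the "two-step Harris Benedict equation", while the comparison arm received general reduced-calorie advice. The sketch below is one plausible reading of that two-step calculation; the paper does not state which Harris-Benedict revision or activity factors were used, so the 1919 coefficients and the 1.375 factor here are assumptions.

```python
# Illustrative "two-step" calorie prescription: Harris-Benedict basal
# metabolic rate, scaled by an assumed activity factor, minus 500 kcal/day.
def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def daily_calorie_target(sex: str, weight_kg: float, height_cm: float,
                         age_yr: float, activity_factor: float = 1.375) -> float:
    # Step 1: estimated daily requirement; Step 2: 500 kcal/day deficit.
    return harris_benedict_bmr(sex, weight_kg, height_cm, age_yr) * activity_factor - 500.0

# Example roughly matching the cohort averages (82 kg, 159 cm, 40 years).
print(round(daily_calorie_target("female", 82.0, 159.0, 40.0)), "kcal/day")
```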
Recommendations on reduced calorie intake for weight loss, education on food pyramid, self-monitoring of calorie intake, lowering high-fat food and increase of fruits and vegetable intake were given. Participants were advised to increase their physical activity levels and exercise at least three times a week, but no practical session provided.\n\nBODY.MEASUREMENTS:\nBODY.SOCIODEMOGRAPHIC AND LIFESTYLE HISTORY:\nSociodemographic characteristics included age, gender, ethnicity, education, household income and marital status; family history of diseases such as hypertension, diabetes and hyperlipidemia; and socially related lifestyle behavior such as smoking and alcohol consumption assessed using self-administered questionnaire. Participants were required to fill a three-day dietary record using a log book. They were taught to estimate food intake with standard day-to-day food utensils. Food items according to meals were entered. A food list with energy (kcal) values was given to assist participants’ in self-monitoring for healthy food choices. Physical activity was measured using the International Physical Activity Questionnaire short form (IPAQ-SF). The IPAQ-SF comprised of seven items reported in terms of METS-min per week which can be summarised into three main categories (low, moderate and high). It is a valid measurement of physical activity and has moderate to high reliability [33].\n\nBODY.ANTHROPOMETRY, BIOCHEMICAL AND CLINICAL MEASURE:\nWeight was measured using SECA digital scale (Model HD 309) and height was measured using SECA body meter (Model 208) to the precision of 0.1 kg and 0.1 cm, respectively. Participants wore light clothing, and stood on scale without shoes with leg apart within the scale footprint. Waist and hip circumference were measured using non elastic SECA measuring tape to the nearest 0.1 cm following the WHO guideline [34]. Digital OMRON Model HEM 907 was used to measure blood pressure based on the Malaysian Clinical Practice Guidelines for management of Hypertension [35]. Fasting blood samples (after 8 hours) collected by trained staff and sent within four hours of collection in an icebox to the laboratory. Measurements were conducted by trained research assistants at baseline, 12, 24 and 36 weeks. Height measured only once at baseline. BMI was calculated as weight in kilograms divided by height in meters squared.\n\nBODY.PSYCHOLOGICAL MEASURES AND QUALITY OF LIFE:\nPsychological measures and quality of life were measured using self-administered questionnaires at baseline, 24 and 36 weeks. Weight efficacy lifestyle (WEL) questionnaire was used to measure eating self-efficacy in weight management with 20 items in five subscales. The subscales include food control in negative emotions, availability of food, social pressure, physical discomfort and physical activity with internal consistency ranged from 0.7 to 0.9 [36]. The Malay version of WEL questionnaire had internal consistency (Cronbach’s alpha) of 0.47 to 0.86 [37].\nSocial support was measured using the Multidimensional Perceived Social Support (MDPSS) which comprised of 12 items and three subscales. Internal reliability achieved was 0.82 for perceived social support from family, 0.88 for friends and 0.94 for significant others [38]. The Malay version of MDPSS has internal consistency of 0.89 with reliability coefficients for family, friends and significant others subscales ranging from 0.67 to 0.83 [39].\nAutomatic Thought Questionnaire (ATQ) used to assess negative thoughts [40]. 
Due to poor construct, the Malay version was reduced to 17 items and two subscales from 30 items [41]. Each subscale i.e. negative self concept (NSE) and personal maladjustment (PMA) had internal consistency (Cronbach alpha) of 0.91 and 0.83 respectively. Method of scoring of the Malay ATQ remained the same as the original 30 item with a cut-off score of 30 for depression.\nQuality of life was measured using the WHO Quality of Life short form questionnaire (WHOQoL-BREF), a shorter version of WHO Quality of Life (WHOQoL) questionnaire with the objective to measure a global perspective of quality of life across cultures [42]. The WHOQoL-BREF comprises of 26 questions in four domains i.e. physical activity, psychological, social support and environment. Validation of the translated Malay version revealed high reliability with internal consistency of 0.64–0.80, and test—retest-reliability of 0.49–0.88 [43,44].\n\n\nBODY.SAMPLE SIZE:\nSample size was calculated using OpenEpi software. Based on Renjilian et al.’s findings (17), 49% of group therapy participants achieved ten percent weight loss compared to 29% in individual counseling. A total of 286 participants was required after setting the study power at 80%, level of significance at p<0.05 and attrition rate of 25%. Due to low response rate, high number of excluded respondents and time constraint for further recruitment, the sample size was not achieved. Power calculation revealed the sample size of 194 achieved 88.8% power for targeted weight loss.\n\n\nBODY.STATISTICAL ANALYSIS:\nIBM SPSS Statistics, Version 20 for Windows (SPSS Inc, Chicago IL) used for data analysis. The dietary data was cross-checked and entered by a trained dietician into Nutritionist Pro where the Malaysian nutrient database was used. Between group mean differences at baseline for continuous variables were analyzed using t-tests and χ2 tests for categorical data. Mean differences between groups over time used analysis of variance for repeated measures. All analysis was carried out using the intention to treat principles. Missing data was imputed using the last observation carried forward. Continuous data at baseline was described as means± standard deviation. Repeated measure data for within group measures was described as mean ±standard error. Confidence interval (CI) was reported where appropriate. Statistical significance was preset at 0.05. Relative risk reported for proportion achieved for weight loss, while Cohen’s d used for the within and between group repeated measure. Adjustment conducted for measures where the baseline values showed significant between-group differences to control for bias i.e. carbohydrate percentage and multidimensional perceived of social support.\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nA total of 194 participants recruited and randomised. The participants’ mean age (standard deviation) was 40.5 (9.3) years. Majority of participants were women (72.7%) and non-academician (92.3%). More than half (64%) had at least secondary education. A total of 81 (83.5%) GSLiM participants and 80 (82.5%) of dietary counseling participants completed the study at 24 weeks (Fig 1). Participants from the GSLiM group had higher total MDPSS score, family support and friend subscales score as well as carbohydrate percentage intake. 
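As stated in the statistical-analysis section, baseline differences between arms were tested with independent t-tests for continuous measures and chi-square tests for categorical ones, and Tables 4 and 5 below report those comparisons. The fragment below is a minimal illustration of the two tests: the weight arrays are made up, the gender counts are taken from Table 4, and scipy's default Yates continuity correction means the p-value need not match the published 0.629 exactly.

```python
# Minimal sketch of the baseline between-group tests described above.
import numpy as np
from scipy import stats

# Hypothetical continuous measure (weight, kg) for a handful of participants.
weight_gslim = np.array([82.1, 79.4, 88.0, 76.5, 90.2])
weight_control = np.array([81.7, 84.2, 77.9, 86.3, 80.0])
t_stat, p_weight = stats.ttest_ind(weight_gslim, weight_control)

# 2 x 2 gender-by-group table using the counts reported in Table 4.
gender_by_group = np.array([[69, 28],    # GSLiM: female, male
                            [72, 25]])   # comparison: female, male
chi2, p_gender, dof, expected = stats.chi2_contingency(gender_by_group)

print(f"weight: p = {p_weight:.3f}; gender: p = {p_gender:.3f}")
```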
No other significant difference between groups was observed at baseline (Tables 4 and 5).\n10.1371/journal.pone.0160343.t004Table 4BODY.BASELINE MEASURES.:\nGSLiM, mean (sd), n = 97Diet counselling, mean (sd), n = 97p valueAge, years\n39.7 (9.2)\n40.4 (9.5)\n0.617\nGender\n\n\n0.629\nFemale, n (%)\n69 (71.1)\n72 (74.2)\n\nMale, n (%)\n28 (28.9)\n25 (25.8)\n\nEthnicity\n\n\n0.296\nMalay, n (%)\n87 (89.7)\n91 (93.8)\n\nNon Malays, n (%)\n10 (10.3)\n6 (6.2)\n\nEducation, tertiary, n (%)\n39 (40.2)\n29 (29.9)\n0.132\nHousehold income (RM)\n2680.34 (1511)\n2504.19 (1641.20)\n0.438\nMarried, n. (%)\n76 (78.4)\n74 (76.3)\n0.732\nOccupation, academician, n (%)\n9 (9.3)\n6 (6.2)\n0.420\nFamily history of:\n\n\n\nHypertensive, n (%)\n11 (11.3)\n13 (13.4)\n0.663\nDiabetes, n (%)\n3 (3.1)\n2 (2.1)\n0.683\nHyperlipidaemia, n (%)\n8 (8.2)\n13 (13.4)\n0.248\nBehaviour:\n\n\n\nNon-alcohol intake,\n96 (99)\n97 (100)\n0.316\nSmoker, n (%)\n9 (9.3)\n5 (5.2)\n0.267\nHeight, m\n1.59 (0.08)\n1.58 (0.09)\n0.863\nWeight, kg\n82.3 (16.3)\n82.0 (13.5)\n0.911\nBMI, kg/m2\n32.4 (4.8)\n32.4 (3.8)\n0.980\nWaist circumference, cm\n95.1 (11.2)\n94.9 (9.5)\n0.872\nHip circumference, cm\n112.2 (10.4)\n111.1 (8.2)\n0.429\nSystolic blood pressure, (mmHg)\n128.6 (16.1)\n129.1 (14.4)\n0.712\nDiastolic blood pressure, (mmHg)\n83.0 (12.2)\n83.4 (10.7)\n0.774\nTriglyceride, (mmol/l)\n1.41 (0.62)\n1.56 (0.82)\n0.182\nTotal Cholesterol, (mmol/l)\n5.2 (0.9)\n5.3 (1.1)\n0.913\nHDL Cholesterol, (mmol/l)\n1.23 (0.29)\n1.25 (0.31)\n0.592\nLDL Cholesterol, (mmol/l)\n3.45 (0.82)\n3.35 (0.85)\n0.426\nIPAQ SF total METmin-1 per week,\n1071 (3.24)\n1318.3 (3.06)\n0.238\nWeight efficacy lifestyle\n123.0 (25.6)\n121.6 (23.4)\n0.697\nNegative emotions\n25.6 (6.5)\n25.1 (6.3)\n0.560\nAvailability\n21.6 (6.6)\n21.4 (6.1)\n0.901\nSocial Pressure\n22.3 (6.1)\n22.1 (5.7)\n0.771\nPhysical Discomfort\n27.7 (5.3)\n26.9 (5.6)\n0.315\nPositive activities\n25.8 (5.7)\n26.1 (5.2)\n0.693\nMultidimensional social support a\n66.3 (9.9)\n62.8(8.7)\n0.010b\nFamily Support\n22.8 (3.7)\n21.7 (3.3)\n0.029c\nFriend Support\n20.5 (3.9)\n19.3 (3.4)\n0.021c\nSignificant Others\n22.9 (4.1)\n21.8 (4.5)\n0.061\nTotal automatic thoughts questionnaire score\n26.5 (7.4)\n26.7 (8.0)\n0.867\nNegative self-concept\n16.1 (4.6)\n16.2 (4.6)\n0.853\nPersonal mal-adjustment\n9.1 (2.8)\n9.2 (3.5)\n0.822\nWHOQoL BREF\n\n\n\nPhysical Health\n12.9 (1.6)\n13.1(1.4)\n0.373\nPsychological\n13.1 (1.7)\n13.1 (1.7)\n0.942\nSocial Relationship\n14.8 (2.7)\n14.6 (2.2)\n0.681\nEnvironment\n14.1 (1.8)\n14.0 (1.9)\n0.722\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\n10.1371/journal.pone.0160343.t005Table 5BODY.BASELINE DIETARY INTAKE.:\nGSLiM, mean (sd) N = 65Diet counselling, mean (sd), n = 56p valueDietary energy intake, mean kcal/day\n1421.2 (332.3)\n1351.3 (369.8)\n0.276\nProtein (%)\n15.8 (2.3)\n15.0 (2.8)\n0.109\nCarbohydrate (%)\n53.3 (6.5)\n55.8 (6.1)\n0.034a\nFat (%)\n30.8 (5.4)\n29.0 (5.1)\n0.064\nCholesterol (mg)\n168.0 (88.6)\n162.3 (107.2)\n0.748\nSaturated Fat (mg)\n8.1 (3.9)\n9.0 (4.5)\n0.201\nMono saturated fat (mg)\n7.3 (3.6)\n7.2 (3.0)\n0.772\nMeals frequency\n3.4 (0.6)\n3.6 (0.7)\n0.068\na Significant p<0.05.\n\nBODY.BETWEEN GROUP DIFFERENCES:\nAt week 24, nineteen participants (19.6%) achieved 6% targeted weight loss in the intervention compared to 4 (4.1%) in the comparison group, Risk Ratio (RR) of 4.75 (95% CI: 1.68, 13.45). 
Out of 23 participants achieved weight loss, 10 (57.5%) intervention participants and 3 (75%) from the comparison group retained the targeted weight loss (6%) at 36th week. However, when measured in terms of weight in kg, no significant difference observed for the mean difference in weight loss between groups (-0.82 ± 2.12 kg), Cohen’s d 0.001.\nTable 6 summarises obesity and cardiovascular measures from baseline to 12th week, 13th to 24th week, baseline to 24th week and from 25th week to 36th week. No other significant mean difference between groups observed at any of the time intervals. Physical activity, psychological and quality of life scores measure from baseline to 24th week and from 25th week to 36th week are shown in Table 7.\n10.1371/journal.pone.0160343.t006Table 6BODY.OBESITY AND CARDIOVASCULAR OUTCOMES BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin GroupBetween groupsGSLiM mean change (s.e.) N = 97Comparison mean change (s.e.) N = 97Mean difference (95% CI)Cohen’s dp value0 to 12 weeks\n(intensive phase)\n\n\n\n\nWeight (kg)\n-1.52 (0.36)a\n-0.09(0.19)\n- 0.47 (4.69, 3.74)\n0.000\n0.825\nBMI (kg/m2)\n- 0.60 (0.14)a\n-0.31(0.08)\n- 0.30 (-1.52,0.92)\n0.001\n0.627\nWaist circumference (cm)\n- 1.67 (0.48)a\n0.54 (0.39)\n- 0.87 (-3.79, 2.06)\n0.002\n0.561\nHip circumference (cm)\n-0.74(0.39)\n-0.16 (0.27)\n0.77 (-1.81, 3.35)\n0.002\n0.557\nSBP (mmHg)\n0.24 (1.19)\n-2.26 (1.01)\n0.43 (-3.59, 4.46)\n0.000\n0.832\nDBP (mmHg)\n-0.25(0.96)\n-1.38 (0.72)\n0.09 (-2.91, 3.09)\n0.000\n0.951\nFasting serum triglyceride (mmHg)\n0.02 (0.05)\n0.009 (0.047)\n-0.14 (-0.33, 0.07)\n0.009\n0.184\nFasting serum cholesterol (mmHg)\n-0.04 (0.06)\n-0.04 (0.05)\n-0.01 (-0.26, 0.23)\n0.000\n0.913\nHDL (mmHg)\n-0.031 (0.017)\n0.018 (0.016)\n-0.05 (-0.13, 0.04)\n0.007\n0.256\nLDL (mmHg)\n-0.058 (0.055)\n-0.049 (0.057)\n0.09 (-0.12, 0.31)\n0.004\n0.401\nFasting blood glucose (mmHg)\n-0.003 (0.086)\n0.121 (0.087)\n-0.11 (-0.39, 0.17)\n0.003\n0.442\n13 to 24 weeks\n(fade frequency)\n\n\n\n\nWeight (kg)\n-0.92 (0.26)b\n-0.61(0.22)b\n-1.35 (-5.54, 2.85)\n0.002\n0.527\nBMI (kg/m2)\n-0.36 (0.10)b\n-0.24 (0.08)\n-0.64 (-1.87, 0.58)\n0.006\n0.300\nWaist circumference (cm)\n0.16 (0.44)\n0.18 (0.37)\n-1.79 (-4.73, 1.14)\n0.008\n0.229\nHip circumference (cm)\n-0.44 (0.29)\n-0.27 (0.23)\n0.39 (-2.21, 3.00)\n0.000\n0.763\nSBP (mmHg)\n0.17 (1.08)\n0.27 (0.90)\n1.63 (-2.37, 5.65)\n0.003\n0.424\nDBP (mmHg)\n-1.07 (0.82)\n1.27 (0.85)\n-0.51 (-3.51, 2.49)\n0.001\n0.737\nFasting serum triglyceride (mmHg)\n-0.05 (0.05)\n-0.047 (0.042)\n0.01 (-0.24, 0.37)\n0.007\n0.911\nFasting serum cholesterol (mmHg)\n0.08 (0.05)\n-0.02 (0.04)\n-0.13 (-0.35, 0.09)\n0.000\n0.239\nHDL (mmHg)\n0.038 (0.018)\n0(0.013)\n-0.05 (-0.14, 0.03)\n0.008\n0.215\nLDL (mmHg)\n0.033(0.049)\n0.039(0.041)\n0.08 (-0.4, 0.302)\n0.003\n0.453\nFasting blood glucose (mmHg)\n-0.088(0.046)\n-0.070(0.117)\n-0.21 (-0.54, 0.12)\n0.008\n0.207\n0 to 24 weeks\n\n\n\n\n\nWeight (kg)\n-2.42 (0.49)a\n-0.69 (0.27)c\n-0.82 (-5.00, 3.37)\n0.001\n0.701\nBMI (kg/m2)\n-0.96 (0.19)a\n-0.27 (0.11)b\n-0.44 (-1.65, 0.78)\n0.003\n0.480\nWaist circumference (cm)\n-1.51(0.52)a\n-0.31 (0.12)\n-1.12 (-4.00, 1.77)\n0.003\n0.445\nHip circumference (cm)\n-1.18 (0.45)c\n-0.43 (0.34)\n0.62 (-1.96, 3.19)\n0.000\n0.850\nSBP (mmHg)\n0.41 (1.23)\n-1.99 (1.14)\n0.82 (-3.09, 4.72)\n0.000\n0.680\nDBP (mmHg)\n-1.32 (0.91)\n-0.11 (0.99)\n-0.49 (-3.4.1, 2.41)\n0.003\n0.736\nFasting serum triglyceride (mmHg)\n-0.03 (0.06)\n-0.04 (0.05)\n-0.13 (-0.34, 0.07)\n0.009\n0.199\nFasting serum cholesterol 
(mmHg)\n0.04 (0.07)\n-0.02 (0.06)\n0.005(-0.24, 0.25)\n0.000\n0.970\nHDL (mmHg)\n0.01 (0.02)\n0.02 (0.02)\n-0.04 (-0.13, 0.04)\n0.002\n0.302\nLDL (mmHg)\n-0.03 (0.07)\n-0.01 (0.06)\n0.09 (-0.13, 0.30)\n0.003\n0.418\nFasting blood glucose (mmHg)\n-0.09 (0.08)\n0.11 (0.09)\n-0.16 (-0.45, 0.13)\n0.005\n0.299\n25–36 weeks (follow-up)\n\n\n\n\n\nWeight (kg)\n0.19 (0.35)\n-0.11 (0.16)\n-1.42 (-5.65, 2.82)\n0.002\n0.510\nBMI (kg/m2)\n0.02(0.07)\n-0.04 (0.64)\n-0.67 (-1.92, 0.57)\n0.006\n0.286\nWaist circumference (cm)\n0.19 (0.35)\n-0.19 (0.37)\n-1.44 (-4.39, 1.52)\n0.005\n0.338\nHip circumference (cm)\n-0.45 (0.31)\n-0.33 (0.21)\n0.26 (-2.39, 2.91)\n0.001\n0.636\nSBP (mmHg)\n-1.70 (1.27)\n0.46 (0.90)\n0.51 (-3.57, 4.48)\n0.000\n0.802\nDBP (mmHg)\n-0.72(0.97)\n-1.38(0.78)\n-1.03 (-4.08, 2.03)\n0.002\n0.509\nFasting serum triglyceride (mmHg)\n0.008(0.04)\n-0.01(0.03)\n0.008 (-0.25, 0.26)\n0.007\n0.264\nFasting serum cholesterol (mmHg)\n0.01(0.06)\n0.07(0.04)\n-0.13 (-0.36, 0.09)\n0.000\n0.953\nHDL (mmHg)\n0.012 (0.015)\n0.003 (0.018)\n-0.02 (-0.07, 0.11)\n0.003\n0.489\nLDL (mmHg)\n-0.033 (0.063)\n0.067 (0.044)\n0.03 (-0.19, 0.26)\n0.001\n0.715\nFasting blood glucose (mmHg)\n-0.008 (0.048)\n-0.065 (0.039)\n-0.22 (-0.58, 0.14)\n0.006\n0.267\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n10.1371/journal.pone.0160343.t007Table 7BODY.PHYSICAL ACTIVITY, PSYCHOLOGICAL AND QUALITY OF LIFE SCORE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin groupBetween groupsIntervention group mean change (s.e.)(n = 97)Control group mean change (s.e.)(n = 97)Mean Difference at intervals (95% CI)Cohen dP0 to 24 weeks\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n-18.9 (236.5)\n90.3(98.2)\n-244.3(-790.1, 301.5)\n0.004\n0.378\nVigorous, mets min-1\n-75.5 (149.7)\n58.6 (63.9)\n-10.5(-296.4, 275.5)\n0.000\n0.943\nModerate, mets min-1\n-6.8 (95.9)\n50.9 (59.5)\n-238.6(-509.3, 32.2)\n0.015\n0.084\nWalking, mets min-1\n63.3 (79.6)\n-19.3(43.8)\n4.7(-212.0,221.5)\n0.000\n0.966\nTotal WEL score\n10.3 (2.46)a\n1.78(1.88)\n5.62 (-0.10, 11.35)\n0.019\n0.054\nNegative emotions, mean (sd)\n2.02 (0.57)b\n0.07 (0.47)\n1.51 (0.004, 3.02)c\n0.020\n0.049\nAvailability, mean (sd)\n2.63 (0.62)a\n0.93 (0.49)\n0.96 (-0.58, 2.50)\n0.008\n0.218\nSocial Pressure, mean (sd)\n1.71 (0.60)c\n0.52 (0.43)\n0.85 (-0.61, 2.29)\n0.007\n0.253\nPhysical Discomfort, mean (sd)\n1.41 (0.48)c\n0.36 (0.44)\n1.31 (0.05, 2.56)c\n0.022\n0.041\nPositive activities, mean (sd)\n2.52(0.54)a\n-0.09 (0.40)\n0.99 (-0.27, 2.26)\n0.012\n0.121\n#\nTotal MDPSS mean score\n1.46 (0.65)\n1.09(0.63)\n\n0.012\n0.127\n# Friend Support, mean (sd)\n0.86(0.25)b\n0.44 (0.22)\n0.41(0.08, 0.75)a\n0.030\n0.015\n#Family Support, mean (sd)\n0.10 (0.27)\n0.19 (0.25)\n0.09 (- 0.25, 0.44)\n0.002\n0.578\n#Significant Others, Mean (sd)\n0.50 (0.30)\n0.45 (0.32)\n0.19(-0.21, 0.59)\n0.005\n0.347\nTotal ATQ mean score\n-0.84 (0.68)\n-0.40 (0.43)\n-0.40 (-2.29, 1.49)\n0.001\n0.675\nNegative self-concept mean score\n-0.39 (0.45)\n-0.26 (0.25)\n-0.19 (-1.35, 0.97)\n0.001\n0.745\nPersonal maladjustment mean score\n-0.36 (0.26)\n-0.19 (0.23)\n-0.19(-0.94, 0.57)\n0.001\n0.630\nWHOQOL BREF\n\n\n\n\n\nPhysical 
health\n2.35(0.19)a\n1.82(0.19)\na\n0.07(-0.35, 0.48)\n0.001\n0.743\nPsychological\n1.62(0.15)\na\n1.52(0.16)\na\n0.06(-0.41, 0.52)\n0.000\n0.784\nSocial relationship\n0.63(0.22)c\n0.12 (0.17)\n0.39(-0.22, 1.01)\n0.008\n0.210\nEnvironment\n0.52(0.14)b\n0.25(0.11)b\n0.18(-0.31, 0.67)\n0.003\n0.471\n25 to 36 weeks (follow up)\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n1.1(143.6)\n303.1(138.4)\n-449.9 (-1015.1,115.3)\n0.013\n0.118\nVigorous, mets min-1\n-63.1 (73.2)\n59.4 (79.6)\n-138.8 (-437.5, 159.9)\n0.004\n0.361\nModerate, mets min-1\n62.5 (61.5)\n102.1 (82.0)\n-287.2 (-563.2, -11.2)c\n0.021\n0.041\nWalking, mets min-1\n1.7 (73.4)\n141.6 (63.6)\n- 23.9 (-240.8, 192.9)\n0.000\n0.828\nTotal WEL score\n-1.77 (1.45)\n0.45 (1.19)\n8.76 (3.05, 17.47)b\n0.045\n0.003\nNegative emotions, mean (sd)\n-0.42 (0.39)\n0.06 (0.27)\n2.24 (0.81, 3.67)b\n0.305\n0.002\nAvailability, mean (sd)\n-0.09 (0.35)\n0.19 (0.28)\n1.68 (0.10, 3.25)c\n0.022\n0.037\nSocial Pressure, mean (sd)\n-0.34 (0.34)\n0.09 (0.26)\n1.23 (-0.29, 2.75)\n0.013\n0.114\nPhysical Discomfort, mean (sd)\n-0.40 (0.31)\n0.31 (0.25)\n1.48 (0.28, 2.68)c\n0.030\n0.016\nPositive activities, mean (sd)\n-0.52 (0.36)\n-0.79 (0.33)\n2.14 (0.93, 3.35)a\n0.060\n0.001\n#Total MDPSS mean score\n0.35(0.62)\n0.46(0.41)\n1.46 (-0.26, 3.17)\n0.015\n0.095\n# Friend Support, mean (sd)\n-0.10(0.19)\n0.11(0.14)\n0.80 (0.15, 1.45)\nc\n0.030\n0.017\n#Family Support, mean (sd)\n0.27(0.22)\n0.16(0.18)\n0.29 (-0.35, 0.95)\n0.004\n0.364\n#Significant Others, Mean (sd)\n0.09(0.31)\n0.19(0.18)\n0.38 (-0.38, 1.13)\n0.005\n0.325\nTotal ATQ mean score\n-0.19(0.47)\n-0.73(0.21)b\n-0.35 (-2.17, 1.48)\n0.001\n0.709\nNegative self-concept mean score\n-0.10(0.30)\n-0.47(0.37)b\n-0.07 (-1.22, 1.07)\n0.000\n0.901\nPersonal maladjustment mean score\n-0.08(0.19)\n-0.19 (0.09)\n-0.21 (-0.94, 0.52)\n0.002\n0.568\nWHOQOL BREF\n\n\n\n\n\nPhysical health\n-0.92 (0.18)a\n-0.53(0.19)c\n0.13(-0.37, 0.64)\n0.001\n0.609\nPsychological\n-0.29 (0.19)\n-0.12 (0.20)\n0.03(-0.49, 0.54)\n0.000\n0.923\nSocial relationship\n-0.03 (0.17)\n0.14 (0.11)\n0.56(-0.08, 1.19)\n0.015\n0.084\nEnvironment\n-0.06 (0.11)\n0.06 (0.09)\n0.21(-0.28, 0.69)\n0.004\n0.405\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# MDPSS, adjusted for baseline MDPSS.\n* median values instead of mean for physical activity.\nThe intervention group achieved higher score for negative emotions and physical discomfort of WEL subscales compared to the comparison group from baseline to 24th week. The effects maintained during follow up from 25th to 36th week. Intervention participants achieved higher friend support mean score from baseline to 24th week, even after adjusting for baseline MDPSS score. During the follow up period, except for social pressure, total WEL and other WEL subscales mean score increased significantly compared to the comparison group. However, moderate physical activity in METSmin-1 per week reduced significantly in the intervention group compared to the comparison group during 25th to 36th week. 
The intervention group experienced lower intake in carbohydrate percentage with higher percentage increase in fat intake over baseline to 12th week and 13th to 24th week. No other dietary change was observed (Table 8).\n10.1371/journal.pone.0160343.t008Table 8BODY.DIETARY ENERGY INTAKE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin group changeBetween groupsIntervention mean (s.e)Control mean (s.e)Mean difference at intervalsCohen dPN = 65N = 56(95% CI)0–12 weeks\n\n\n\n\n\nDietary intake, kcal\n0.41 (38.2)\n45.70 (22.76)\n47.25 (-79.8, 174.3)\n0.005\n0.463\nProtein (%)\n-0.05 (0.29)\n-0.19 (0.20)\n0.88 (-0.01, 1.77)\n0.09\n0.053\nCarbohydrate (%)#\n-4.12 (0.82)a\n-1.58 (0.67)\n-1.78 (-2.86, -0.69)b\n0.08\n0.002\nFat (%)\n3.54 (0.93)b\n1.77 (0.66)\n2.69 (0.91, 4.47)b\n0.07\n0.003\n13–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.06 (12.96)\n15.78 (35.53)\n19.24 (-113.3, 151.8)\n0.001\n0.744\nProtein (%)\n0.08 (0.16)\n0.22 (0.24)\n0.93 (-0.02, 1.87)\n0.03\n0.055\nCarbohydrate (%)#\n-0.19 (0.40)\n-1.91 (0.51)b\n-2.76 (-4.87, -0.64)c\n0.05\n0.011\nFat (%)\n0.11 (0.35)\n2.02 (0.53)b\n2.62 (0.36, 4.88)c\n0.04\n0.023\n25–36 weeks\n\n\n\n\n\nDietary intake, kcal\n-6.52 (9.08)\n39.15 (23.7)\n-8.96 (-139.7, 121.8)\n0.00\n0.892\nProtein (%)\n0.12 (0.09)\n0.44 (0.19)\n0.69 (-0.30, 1.69)\n0.02\n0.171\nCarbohydrate (%)#\n-1.18 (0.13)\n-1.48 (0.62)\n-1.33(-3.61, 0.95)\n0.11\n0.250\nFat (%)\n-0.07 (0.11)\n1.05 (0.55)\n1.17 (-1.16, 3.51)\n0.01\n0.322\n0–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.46 (38.28)\n61.48 (36.17)\n36.12 (-86.9, 159.19)\n0.003\n0.562\nProtein (%)\n0.13 (0.30)\n0.03 (0.20)\n0.81 (-0.08, 1.68)\n0.001\n0.444\nCarbohydrate (%)#\n-4.31(0.88)a\n-3.49 (0.64)a\n-1.84(-3.25, -0.43)c\n0.053\n0.011\nFat (%)\n3.65 (0.96)b\n3.79 (0.82)\n1.74 (0.003, 3.46)\n0.000\n0.050\nConfidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05. All participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# CHO percentage was adjusted for its baseline value for between group analyses in all interval.\n\nBODY.WITHIN INTERVENTION GROUP:\nWeight and BMI of the intervention group were reduced during the first 12 weeks (core phase), with smaller reduction in the next 12 weeks (fade frequency). During follow up, there was an insignificant increase in weight and BMI. Reduction in waist circumference occurred in the first 12 weeks but not sustained during fade frequency. An overall significant hip circumference reduction was observed within interval 0 to 24th week, although the reduction was not significant in the core phase or fade frequency.\nWEL and its subscales’ scores, friend support subscale score of MDPSS and WHOQOL-BREF domains score increased from 0 to 24th week. However, the improvement was not sustained during follow up, with significant reduction in WHOQOL-BREF physical health score observed during 25th to 36th week. The intervention group experienced reduction in carbohydrate percentage intake mainly during the core phase, resulting in significant reduction from baseline to 24th week. Concurrent fat percentage intake increased during the same time interval. 
There was no significant change in cardiovascular risk or physical activity measures throughout the study.\n\nBODY.WITHIN COMPARISON GROUP:\nWeight in the comparison group decreased only during the 13th to 24th week, yet this was enough to produce an overall significant reduction over the treatment period from week 0 to 24. Over the same 0 to 24 week interval, a significant decrease in carbohydrate percentage intake and increase in fat percentage intake also occurred in the comparison group, concentrated in the 13th to 24th week. As in the intervention group, the WHOQOL-BREF physical health score decreased during the follow-up phase. Apart from these changes, a significant reduction was also observed in negative thoughts and in the negative self-concept subscale of the ATQ during the follow-up period (25th to 36th week).\n\nBODY.ADHERENCE TO INTERVENTION:\nBODY.ATTENDANCE:\nAttendance was assessed as part of the adherence measures in the intervention group. Of the six sessions in the first 12 weeks, 33 (17.0%) intervention participants attended more than four sessions, 34 (17.5%) attended three to four sessions, and 30 (15.5%) attended two sessions or fewer. During the fade frequency phase, which had four sessions, 20.6 percent of participants attended three to four sessions. None of the participants who attended fewer than three sessions achieved 6% weight loss, 8.9% of those who attended four to six sessions achieved the target, and about half (44.8%) of those who attended more than seven sessions achieved the targeted 6% weight loss (p<0.001). Median attendance was six sessions with an inter-quartile range of five sessions. There was a positive correlation between attendance and weight loss (r = 0.491, P<0.001).\n\nBODY.LOG BOOK SUBMISSION:\nOverall log book submission was unsatisfactory in both groups. Only 46.4% (n = 45) of intervention participants submitted their log books at the 12th week and 16.5% (n = 16) at the 24th week; in the comparison group, only 23.7% (23) submitted at 12 weeks and 19.6% (19) at the 24th week. Those who did not submit their log books were of younger age (p = 0.03).\n\nBODY.ADVERSE EFFECT OF THE INTERVENTION:\nTwo male participants from the intervention group with BMI above 40 kg/m2 experienced mild soft tissue injury during self-conducted exercise and received treatment. No other serious event was reported.\n\nBODY.DISCUSSION:\nThe intervention was more effective than the comparison (dietary counseling) group in achieving the targeted 6% weight loss, improving self-efficacy in dietary control, and achieving better friend support and quality of life. The psychological improvement in the intervention group was sustained after the intervention ended. Although the comparison group also experienced reductions in weight and BMI and improvements in quality of life and ATQ score, it showed no improvement in self-efficacy or social support.\nA total of 19% of GSLiM participants achieved 6% weight loss. For comparison purposes, we recalculated and found that 17.5% of GSLiM participants achieved 7% weight loss. Other GLB-based studies have reported 23 to 37% achievement of targeted weight loss [14]. The most recent GLB-based worksite trial reported that 29% of participants achieved the targeted 7% weight loss [26], while another study assessing the effectiveness of a coach-led GLB programme in a primary care setting had 37% of participants achieving 7% weight loss at 15 months [45]. Our study had a lower proportion achieving targeted weight loss, most likely because GSLiM delivered fewer sessions than the original GLB-DPP. An earlier study reported that for each additional lifestyle session attended, weight loss could increase by 0.26% (16).
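As a quick check of the headline between-group result quoted earlier in the Results (19 of 97 GSLiM participants versus 4 of 97 dietary counseling participants reaching the 6% target), the risk ratio and its confidence interval can be reproduced. This is a minimal illustrative sketch: the paper reports RR 4.75 (95% CI 1.68, 13.45) but does not state which CI method was used, so the log-normal approximation below is an assumption.

```python
import math

# Counts reported in the Results: participants reaching the 6% weight-loss target at 24 weeks
a, n1 = 19, 97   # GSLiM (intervention)
b, n2 = 4, 97    # dietary counseling (comparison)

rr = (a / n1) / (b / n2)                          # risk ratio, ~4.75

# 95% CI via the log-normal approximation (assumed method, not stated in the paper)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)    # ~1.68
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)    # ~13.45

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The reproduced figures agree with the RR of 4.75 (95% CI: 1.68, 13.45) reported in the abstract and Results, which suggests a log-scale approximation of this kind was used.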
Our findings concur with a recent review showing that multicomponent behaviour interventions are more effective than single-component interventions [46]. The mean weight loss within the intervention group at the 24th week was -2.43 kg (95% CI: -3.75, -1.21, p = 0.001), comparable to another trial that reported a mean weight loss of -2.3 kg (95% CI: -2.92 to -1.72, p = 0.001) [47]. Most of the improvement occurred during the first 12 weeks in the intervention group, when sessions were more intensive; earlier findings have likewise shown a positive association between session intensity and magnitude of weight loss [48]. Yet lower-intensity programmes with a similar effect are still being sought [49], because intensive interventions are costly in terms of manpower and materials [50]. Apart from weight loss, significant improvement in the self-efficacy score and its subscales was seen among intervention participants compared with the comparison group, as also found in other studies [51–53]. The intervention group already had a higher social support score at baseline; however, after controlling for the baseline friend support score, a significant between-group difference in the mean friend support score was still observed over the 24 weeks. A similar finding was reported by a recent cluster randomised trial in which healthy eating was found to be supported by friends [54].\nAn improved ATQ score would have been expected in the intervention group, given the cognitive elements incorporated in the programme; instead, the improvement in ATQ score occurred within the comparison group. The use of the ATQ to measure negative thoughts in obesity research remains limited, given the availability of other tools for assessing the dysfunctional eating behaviour associated with obesity [55]. The cut-off value for depressogenic negative thoughts on the 17-item Malay ATQ was 30.0 in a non-clinical population and 47.0 in a clinical population [41]. The mean ATQ scores for the intervention and comparison groups were 25.6 and 26.3 respectively, both below the cut-off value; therefore, the significant mean difference found between the intervention and comparison groups was most likely due to chance. The ATQ was used in our study to measure depressogenic features among obese participants and to observe further change as treatment progressed. Moderate concurrent validity (r = 0.65) was found between the Malay ATQ and the Malay Beck Depression Inventory (BDI), while discriminant analysis achieved 89.3%.\nOther than improvements in weight, psychological measures and quality of life, there was no improvement in cardiovascular risks among our participants. Other studies found that weight loss of at least 10% of baseline weight was needed before cardiovascular risk parameters changed [56]. Several other reasons may explain the absence of a treatment effect on cardiovascular risks in this study. Our participants were in a younger age group and their clinical indicators were within the normal range, so further improvement was less likely. The focus of the intervention may also have influenced the result, as other studies that targeted clinical indicators through health education and a dietary approach did achieve improvement in cardiovascular risks [57,58].
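To put the group-mean weight changes in perspective against the thresholds discussed above, the figures already reported in the paper can be restated as simple arithmetic. This is a rough illustration only; the 82.3 kg figure is the GSLiM arm's mean baseline weight from Table 4, and the 2.43 kg figure is the within-group mean loss at 24 weeks quoted above.

```python
# Rough context for the group-mean changes, using values reported in the paper
baseline_weight_kg = 82.3    # mean baseline weight, GSLiM arm (Table 4)
mean_loss_kg = 2.43          # mean within-group loss at 24 weeks (Discussion)

pct_of_baseline = 100 * mean_loss_kg / baseline_weight_kg   # about 3%
kg_for_6pct_target = 0.06 * baseline_weight_kg              # about 4.9 kg
kg_for_10pct_change = 0.10 * baseline_weight_kg             # about 8.2 kg, the level cited for CV risk change [56]

print(f"mean loss = {pct_of_baseline:.1f}% of baseline; "
      f"6% target = {kg_for_6pct_target:.1f} kg; 10% level = {kg_for_10pct_change:.1f} kg")
```

On average, then, participants lost roughly 3% of baseline weight, well below the roughly 10% level cited for measurable cardiovascular risk change, which is consistent with the null findings on those parameters.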
Therefore, weight loss alone may not necessarily improve cardiovascular risk factors.\nParticipants also experienced a small reduction in carbohydrate percentage intake and an increase in fat percentage intake, as found in a previous study [59]. A low-carbohydrate diet has been associated with greater weight loss than a low-fat diet [60]. Nevertheless, there have been calls for the effectiveness of lifestyle modification not to be overshadowed by the continuing search for the best dietary approach [61], when a moderate, balanced nutrient intake should be advocated.\nAlthough promising, the results of this study should be interpreted cautiously. Since participants were employees of a single university, generalizability is limited to populations in similar settings and the findings may differ from the general population. Self-reported measures, in particular the self-administered surveys used, may introduce reporting bias: negative behaviours such as smoking and alcohol consumption may be underreported, while physical activity may be overreported. Although the short-form IPAQ was advocated because it saves time, a recent review [62] found that the IPAQ-SF tends to overestimate the METs-minutes per week score. However, the categories of physical activity derived from the long-form IPAQ [63] were similar to the categories found in our study and in another Malaysian study [3]; we therefore consider the results reliable, particularly for assessing change over time (a brief sketch of how the IPAQ-SF METs-minutes score is assembled is given at the end of this discussion). Objective measures such as pedometers or accelerometers are cited as more reliable [64], but their use is resource intensive and may not be feasible in a community setting. Our study measured only dietary self-efficacy; however, recent findings suggest that exercise self-efficacy may be a better predictor of weight loss than dietary self-efficacy [65]. Considering the continuing development of the cognitive-psychological aspects of obesity intervention, both physical and dietary self-efficacy need to be assessed in future studies.\nSelf-monitoring is another critical component of weight reduction (39); however, log book submission in both groups was poor, even though attempts were made to improve it. This may further explain the small effect sizes observed and the lack of improvement in cardiovascular risks, even though weight loss was achieved within each group. Poor log book submission for physical activity and dietary intake has also been reported in other studies (17). Attendance at group sessions was low compared with other studies [25,45,66], yet findings similar to ours were reported in another Malaysian lifestyle intervention trial [67]. We also found process measures such as attendance and log book submission to be correlated with weight loss. Competing work-time demands have been cited as a factor affecting attendance in worksite interventions [25]. Local culture may also influence attendance and needs to be explored.\nFinally, we did not evaluate environmental support, such as the availability of healthy food choices and physical activity facilities, which is known to influence weight loss.\nNevertheless, this study is the first group-based programme derived from the GLB-DPP to be conducted in Malaysia. It addressed the dynamics of the psychological aspects of a lifestyle modification programme based on a theoretical construct, something rarely implemented and monitored in other translational weight loss programmes. A randomised controlled design, as opposed to a single-arm trial, was used to evaluate its effectiveness compared to an existing dietary programme.
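For readers unfamiliar with the IPAQ-SF scoring mentioned in the limitations above, here is a small sketch of how the METs-minutes per week total is typically assembled from the walking, moderate and vigorous items. The MET weights (3.3, 4.0, 8.0) follow the standard IPAQ scoring protocol; the simplified 600/3000 METs-minutes cut-offs for the low/moderate/high categories are an approximation of that protocol, stated here as an assumption rather than as the exact rules used in this trial.

```python
# Illustrative IPAQ-SF scoring sketch (standard MET weights; simplified category cut-offs)
MET_WALK, MET_MODERATE, MET_VIGOROUS = 3.3, 4.0, 8.0

def ipaq_sf_met_minutes(walk_days, walk_min, mod_days, mod_min, vig_days, vig_min):
    """Total METs-minutes per week from days/week and minutes/day of each activity item."""
    return (MET_WALK * walk_min * walk_days
            + MET_MODERATE * mod_min * mod_days
            + MET_VIGOROUS * vig_min * vig_days)

def simplified_category(total_met_min):
    """Rough low/moderate/high banding (an approximation of the full IPAQ rules)."""
    if total_met_min >= 3000:
        return "high"
    if total_met_min >= 600:
        return "moderate"
    return "low"

# Example: 30 min walking on 5 days plus 20 min moderate activity on 2 days, no vigorous activity
total = ipaq_sf_met_minutes(5, 30, 2, 20, 0, 0)   # 3.3*30*5 + 4.0*20*2 = 655 METs-min/week
print(total, simplified_category(total))           # 655.0 moderate
```

Under these simplified cut-offs, the baseline totals of roughly 1071 and 1318 METs-minutes per week reported in Table 4 would both fall in the moderate band.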
\n\nBODY.CONCLUSION AND RECOMMENDATIONS:\nOur findings show that the group-based workplace intervention (GSLiM) achieved the targeted weight loss and improved self-efficacy and positive social support, at a lower intensity than the original GLB-DPP. Incorporating a group-based approach and psychological sessions produced the psychological change needed to support weight loss. The GSLiM programme is ready for use and can be replicated in similar settings, with possible enhancements for exercise self-efficacy, attendance, and adherence to self-monitoring. The programme should also be extended over a longer duration. Further research should explore the predictive value of psychological factors in helping participants achieve their targeted weight loss.\n\nBODY.SUPPORTING INFORMATION:\nS1 File BODY.RESEARCH PROTOCOL.:\n(PDF)\nS2 File BODY.DATA.:\n(XLS)\nS1 Table BODY.CONSORT CHECKLIST FOR RANDOMISED CONTROL TRIAL.:\n(PDF)\n\n**Question:** Compared to dietary counseling what was the result of Group Support Lifestyle Modification (GSLiM) on Negative emotions?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
13862
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Group Support Lifestyle Modification (GSLiM) Programme among Obese Adults in Workplace: A Randomised Controlled TrialGSLiM and Obesity\n\n\n\nABSTRACT.BACKGROUND:\nThere was an increasing trend in the prevalence of obesity and its comorbidities over the past decades in Malaysia. Effective intervention for obesity remains limited. This study aimed to compare the effectiveness of a group based lifestyle modification programme amongst obese individuals with an existing dietary counseling programme.\n\nABSTRACT.METHODS:\nWe recruited one hundred and ninety four overweight and obese (BMI>27.5 kg/m2) employees from a local university. They were randomly allocated to either Group Support Lifestyle Modification (GSLiM) (intervention)(n = 97) or dietary counseling (comparison)(n = 97). The GSLIM activities included self monitoring, cognitive-behaviour sessions, exercise as well as dietary change advocacy, which were conducted through seminars and group sessions over 24 weeks. The comparison group was given dietary counselling once in 12 weeks. Both groups were followed up for additional 12 weeks to check for intervention effect sustenance. Anthropometric and biochemical parameters were measured at baseline, 12, 24 and 36 weeks; while dietary intake, physical activities, psychological measures and quality of life measured at baseline, 24 and 36 weeks. Data analysis was conducted using ANOVA repeated measures with intention to treat principle.\n\nABSTRACT.RESULTS:\nThe participants were predominantly women with mean (standard deviation) age of 40.5 (9.3) years. A total of 19.6% of the participants in GSLiM achieved 6% weight loss compared to 4.1% in the comparison group (Risk Ratio 4.75; 95% CI: 1.68, 13.45). At 24 weeks, the retention rate was 83.5% for GSLiM and 82.5% for comparison group. GSLiM participants also achieved significant improvement in total weight self-efficacy score, negative emotions and physical discomfort subscales, MDPSS friend subscale and all domains in quality of life. Participants in the comparison group experienced reduction in negative self-thoughts.\n\nABSTRACT.CONCLUSION:\nThe GSLiM programme proved to be more effective in achieving targeted weight loss, improving weight self-efficacy, friend social support, and quality of life compared to dietary counseling.\n\nABSTRACT.TRIAL REGISTRATION:\nIranian Registry of Clinical Trials IRCT201104056127N1\n\nBODY.INTRODUCTION:\nMalaysia has the highest combined overweight and obese prevalence among the Southeast Asian countries in 2013 [1]. From 1996 to 2009, the prevalence of obesity in Malaysia increased by 23.7% [2], concurrent with the obesity global pandemic. High risks groups for obesity were women, married adults, those with secondary education and unemployed or homemakers [3]. 
Arguably, working adults who engage in sedentary work as well as those experience stress at the workplace [4,5] are also at risks of obesity.\nObesity is associated with increase mortality [6] and reduction in overall quality of life [7]. Malaysians observed an increase of mortality due to heart disease and cancers from 67% in 2008 to 73% in 2012 [8,9]. Comprehensive lifestyle modification on diet, physical activity and psychology remains as the primary intervention for obesity [10,11].\nMulticomponent lifestyle modification proved successful in reducing diabetes risk [12] as well as improved cardiovascular risk factors [13]. The Diabetes Prevention Program (DPP) was a high intensity lifestyle modification programme which managed to reduce participant’s diabetic risks by 58% [12]. Yet, in real life, high intensity programme may not be applicable to all settings as it is resource intensive. This results in its translation into the Group Lifestyle Balance (GLB-DPP) implemented in clinical settings, primary care and workplace [14]. The GLB-DPP retained the core principles of the DPP such as 7% weight loss, 150 minutes of physical activity and self-monitoring using group approach with reduced sessions.\nAccording to the Social Cognitive Theory; personal behavior, thoughts and environment reciprocate to produce action [15,16]. Factors influencing the cognitive process favoring weight loss includes high self-efficacy [17,18] and social support [19]. Meanwhile, obese individuals may have higher inclination for negative thoughts related to dysfunctional eating [20] hampering lifestyle modification progress. However, these psychological factors were seldom assessed and reported together with clinical measures as part of the lifestyle modification programme outcomes [21,22].\nWorkplace has been shown to be a feasible and effective setting for lifestyle modification programme for obesity prevention [23–27]. Apart from preventing short term deterioration of quality of life, moderate improvement in weight induced employee productivity [28].\nIn view of urgent need for obesity intervention, we implemented an adaptation of the GLB-DPP programme named Group Support Lifestyle Modification (GSLiM) in the workplace. The objective was to compare the effectiveness of the GSLiM programme with an existing one to one dietary counseling programme in the workplace. The programme was designed to create social support and improve self-efficacy. It was hypothesized that Group Support Lifestyle Modification (GSLiM) would be more effective in producing weight loss, improvement of cardiovascular risk factors, self-efficacy, social support factors as well as their quality of life.\n\nBODY.METHODS:\nThis was a randomised controlled trial conducted in a public university in Kuala Lumpur, Malaysia. Ethical clearance obtained from the Medical Ethics Committee, University Malaya Medical Centre on 16 March 2011 (MEC No. 841.2) and was registered with the Iranian Registry of Clinical Trial (http://www.irct.ir/) (IRCTID: IRCT201104056127N1). There was a slight delayed in trial registration as the authors waited for funding approval obtained from the University Malaya post-graduate research fund.\nBODY.RECRUITMENT AND PARTICIPANTS:\nParticipants were employees from a public university in Kuala Lumpur. Recruitment started soon after trial registration from May 2011 to September 2012. The trial intervention and follow up commenced from September 2011 to July 2013. 
Respondents gave written informed consent during recruitment and informed of random allocation into either of the treatment arms. Recruitment, data collection and intervention programme were conducted in the Sport Centre of the Faculty of Sports Science within the university.\nEligible criteria included employees of the university, Bahasa Malaysia (national language of the country) literate, with BMI of 27.5kg/m2 or more and able to walk briskly for at least 10 minutes without assistance. Cut off point of 27.5kg/m2 was used as Asians experience higher risk for type 2 diabetes, metabolic syndrome as well as cardiovascular diseases at lower BMI [29–31]. Exclusion criteria included individuals with unstable angina, congestive cardiac failure, cancer, severe pulmonary disease, psychiatry disorders e.g. substance abuse, depression, weight loss of more than 5kg in the last 6 months, pregnant or breastfeeding. Self-administered questionnaire used to assess medical history and family history for disease risks. Further assessment to rule out undiagnosed medical conditions included full blood count (FBC), renal profile (RP), fasting blood sugar (FBS) and lipid profile at recruitment phase.\nOf 510 employees’ responded to invitations via emails and flyers, 275 consented for participation and assessed for eligibility. Nineteen participants withdrew after consented, eleven did not finish screening, fifty-one were excluded and finally, 194 participants were randomised (Fig 1).\n10.1371/journal.pone.0160343.g001Fig 1BODY.CONSORT PARTICIPANTS FLOWCHART OF INTERVENTION AND CONTROL ARM.:\nGSLiM: Group Support Lifestyle Modification; Comparison group: Dietary counselling.\nBODY.RANDOMISATION:\nA third person (TC), who was not involved with the study conducted random allocation of participants into either Group Support Lifestyle Modification (GSLiM) or dietary counseling (comparison). Allocation of participants to GSLiM or comparison arms were assigned using opaque envelopes. This was an open label trial as both treatment provider and participants knew which arm they belonged to since the programme differed between the two groups.\n\n\nBODY.INTERVENTIONS:\nBODY.GLB-DPP:\nThe Group Lifestyle Balance (GLB-DPP) developed based on the Diabetes Prevention Program (DPP) [32] that retained the core principles of DPP. These included self-monitoring for weight, dietary and physical activity, problem solving, staying motivated, and target 7% weight loss from baseline weight. However, the programme delivery of GLB-DPP used group-approach given over twenty-two hourly sessions in three phases (core, fade frequency and support).\n\nBODY.GSLIM:\nPermission to adopt and adapt the GLB-DPP was obtained from the author/s under the Creative Common Licence Share Alike. The GSLiM programme retained the core characteristics of GLB-DPP except a 6% target weight loss from baseline weight used instead of 7% and absence of support sessions after programme completion at 6th month. The 6% target was based on an achievement of a minimum of 1% weight loss per month in view of evidence that even 5% weight loss may produce clinical improvement among obese individuals [10]. The frequency of sessions conducted in GSLiM was less compared to the GLB-DPP with ten sessions in GSLiM intervention compared to the twenty-two of GLB-DPP. However, the original twenty-two topics of the GLB-DPP were retained in GSLiM with the topics delivered through three seminars and five 90-minute sessions in two phases. 
The core phase (Phase 1) was run once in two weeks for three months, beginning with a two-day seminar. A fade frequency (Phase 2) started with a one-day seminar, followed by two 90-minute sessions and concluded with a half-day seminar at the end of the active intervention. Comparison of sessions between GLB-DPP and Group Support Lifestyle Modification (GSLiM) is shown in Tables 1 and 2.\n10.1371/journal.pone.0160343.t001Table 1BODY.COMPARISON OF GROUP LIFESTYLE BALANCE (GLB) WITH GROUP SUPPORT LIFESTYLE MODIFICATION (GSLIM) PROGRAMME.:\nSame aspects of GLB and GSLiM\nSafe and appropriate intervention that incorporates nutrition, physical activity, and behaviour change.\nGradual increase of 150 mins per week of physical activity and minus 500 kcal per day of dietary intake.\nStrong focus on use of self-monitoring tools with feedback.\nInitial emphasis on fat intake and calories.\nPrimary focus on healthy food choices.\nUse of inexpensive food samples and incentives.\nUse of problem-solving techniques to address barriers to healthy eating and physical activity.\nGroup approach.\n\nSpecific Adaptation to GSLiM\nGLB\nModified GLB (GSLiM)\nGoal: 7% weight loss and increase physical activity to 150 minutes/week.\nGoal: 6% weight loss of baseline body weight and physical activity 150 per week.\n12 weekly 1-hour sessions delivered over 12–15 weeks\n0, 3 and 6 months by seminar sessions.\n5 once in 2 weeks session over 12 weeks\n2 monthly session over subsequent 12 weeks\n\nPedometer introduced during core sessions\nLogging for physical activity in log book\nTrainers: Dietician and exercise specialist\nTrainers: Dietician and exercise specialist and psychologist\nTotal sessions 22 given in spread of 48 weeks\nCompressed session to 10 given in 24 weeks\n10.1371/journal.pone.0160343.t002Table 2BODY.TOPIC COMPARISON BETWEEN GSLIM AND GLB-DPP.:\nGLB-DPP topic arrangementGSLiM topic arrangementsCore sessions: Weekly for 12 weeks\nCore sessions: Bi- weekly (2 per month) for 12 weeks\n1. Welcome to GLB.\n1. Welcome to GSLiM—Seminar overview of topics. (GLB-DPP 1)Physical activity component (overview of topics and specifics on)\nStretching: The truth about flexibility (GLB-DPP session 21)\nHeart health (GLB-DPP session 20)\nDiet component: overview of topics, Tip the calorie balance (GLB-DPP 5)\nPsychological—overview\n\n2. Be a fat and Calorie Detective\n2. GSLiM Session 22.1 Fat, calorie and you (GLB-DPP session 2)\n2.2 Healthy eating (GLB-DPP session 3)\n\n3. Healthy eating\n3. GSLiM Session 33.1 Move those muscles (GLB-DPP session 4)\n3.2 Monitoring your activities (GLB-DPP 10)\n\n4. Move those muscles\n4. GSliM Session 44.1Negative thoughts and Weight (GLB-DPP 9 and 14)\n4.2 Balance your thoughts (GLB-DPP 15)\n\n5. Tip the calorie balance\n5. GSLiM Session 5\n5.1 Behavior, Environment and Lifestyle Change (GLB-DPP 6 and 17)\n5.2 Make Social Cues Work for You (GLB-DPP 11)\n\n6. Take charge of whats around you\n6. GSLiM Session 6• 6.0 Strengthen Your Exercise Programme (GLB-DPP 16)\n7. Problem solving\n\n8. Four keys to healthy Eating out\n\n9. Slippery slope of Lifestyle change\n\n10. Jump start Your Activity Plan\n\n11. Make Social Cues Work for You\n\n12. Ways to Stay Motivated\n\nTransition session (fade frequency)\nTransition session (fade frequency)\n13. Prepare for Long Term Self Management\n7. GSLiM Session 7- One day seminar.\n\na. Stress management (GLB-DPP 18).\n\nb. Problem solving (GLB-DPP 7).\n\nc. Healthy eating out (GLB-DPP 8).\n\nd. More volume fewer calories (GLB-DPP 14).\n\ne. 
Group physical activity sessions.\n14. More Volume; fewer calories\n8. GSLiM Session 8.• 8.0 Standing Up for Your Health (GLB-DPP 19)\n15. Balance your thoughts\n9. GSLiM Session 9.• 9.0 Stay motivated (GLB-DPP 12)\n16. Strengthen Your Exercise Programme\n10. GSLiM session 10 (final).Looking back and looking forward (GLB-DPP 22)—Sharing achievements.\nPrepare long term self-management (GLB-DPP 13)\n\nSupport sessions (variable sequence)\nFollow up at 36 weeks\n17. Mindful eating\n\n18. Stress and Time Management\n\n19. Standing Up for Your Health\n\n20. Heart Health\n\n21. Stretching: The truth about flexibility\n\n22. Looking back and looking forwards\n\nTable 3.\n10.1371/journal.pone.0160343.t003Table 3BODY.SUMMARY OF PROGRAMME STRUCTURE, SESSIONS AND CONTENTS.:\nSessionApproachTitle and Contents1\nSeminar\nIntroduction to the programme\n\n\nForeword on the Group Support Lifestyle Modification (GSLiM) programme configuration and sessions. Highlight energy balance and the concept of self-monitoring. Develop allegiance to the programme.\n\n\nSet up goals for: 6% weight loss, and physical activity of 150 minutes per week. Daily calorie requirement (DCR) minus 500 for dietary intake to be worked out through the 2 step Harris Benedict equation. Foreword on components: psychological, diet and nutrition and physical activity.\n\n\nDescription of main food components, sources of fat, food pyramid and food plate theory. General outlook on physical activity and exercise. Introduction to cognitive and behaviour aspects related to lifestyle alteration. Getting-to-know- you session lasting half a day to help participants get acquainted with each other.\n2\nGroup session\nCalorie measurement and healthy food options.\n\n\nEducation and hands on session for self-monitoring through the measurement of food and calories by participants utilising scales and measurement tools. Reading food labels and using the food plate theory for practice. Sessions with active facilitator participation.\n3\nGroup session\nMoving Your Muscles.\n\n\nInformation on effective exercise methods (brisk walking). Raise awareness on the safety aspects of physical activity and exercise. Highlight the enhancement of general health through physical activity and exercise. Assess individual's present level of physical activity and exercise. Rise in activity levels to be spread out over a period of time. Set weekly goals for physical activity. Commence self-monitoring for physical activity and exercise. Group activity for aerobic session.\n4\nGroup session\nNegative thoughts.\n\n\nUncover negative thoughts. Increase knowledge on variety of negative thoughts, types of thought disorder and its relationship to behaviour. Suggest ways to offset negative thoughts. Highlight the significance of appreciating oneself for personal accomplishments. Review negative thoughts and set up action plan to counter them.\n5\nGroup session\nRelationship between behaviour, environment and healthy lifestyle change.\n\n\nDefine social and physical environment and relate its influence on physical activity and eating behaviour. Promote positive response to environmental and social drawbacks. Highlight the need for a healthy diet and physical activity /exercise. Emphasize on acting as role models for group members in order to promote physical activity and positive eating. 
Appraisal of present behaviour and setting up of action plan to achieve positive behaviour change\n6\nGroup session\nConsolidating Exercise Programme\n\n\nInitiate aerobic fitness sessions and define frequency, intensity, time and type of activity (F.I.T.T). Educate participants on the engagement of self-monitoring to set up an effective exercise schedule through the calculation of heart rate and rate of perceived exertion (RPE).\n\n\nDiscuss schedule and include a variety of activities and exercises to develop flexibility, strength, cardiovascular health and endurance.\n7\nSeminar\nConsolidate dietary, fitness, thought and behaviour alteration.\n\n\nReport on the progress of participants. Reassess weight loss goals, diet and physical activity based on the most recent measurements. Behaviour: Initiate stress management procedure (abdominal breathing) and problem solving techniques (I.D.E.A.L approach) to deal with issues that crop up due to alterations in diet and daily activities. Dialogue on problems encountered by participants. Encourage healthy eating through the promotion of high volume and low calorie foods.\n\n\nDietary: Educate on the choice of healthy food when eating out. Explore by group members of ways to be more active on busy work days.\n\n\nFitness: Suggest activities for work related sitting time. Stress on the need for intervals between extended periods of sitting especially at work. Discuss options on exercise location during unfavourable weather conditions. Teach ways to improve cardiovascular health through resistance and strength exercises. Set up a physical activity schedule for the following month.\n8\nGroup session\nChallenges in Lifestyle change.\n\n\nDiscourse among group members on problems encountered during attempts to change their lifestyles focussing on eating habits and physical activities. Exchange of information and problem-solving suggestions among participants. Re-assess obstacles faced and define a strategy to overcome these barriers to change.\n9\nGroup session\nMaintaining motivation.\n\n\nConcentrate on means to be constantly motivated for lifestyle change. Stress on awareness of internal and external sources of motivation. Emphasise on appreciating minor accomplishments related to lifestyle change. Initiate discussion among group members on strategy to maintain motivation.\n10\nSeminar\nExperience sharing.\n\n\nDisclosure of participants’ accomplishments during the course of the programme (6 months). Initiate panel discussion involving team and group members on the achievements of participants during the programme. Encourage participants to share their six-month experience on the road to better health through weight reduction. Participants were also encourage to continue healthy lifestyle.\nA starter kit containing handouts, examples of food energy intake and activity-based energy expenditure in the form of wheel chart was provided to participants during the first session. Handouts were also given to participants after subsequent sessions. Self-monitoring activities included weighing-in, submission of diet records and physical activity log was also conducted for each session. Feedback was provided to motivate the participants on each session.\n\n\nBODY.COMPARISON:\nParticipants in the comparison group underwent individual counseling with a dietician once every 12 weeks. Each dietary counseling session lasted for an hour. 
Recommendations on reduced calorie intake for weight loss, education on food pyramid, self-monitoring of calorie intake, lowering high-fat food and increase of fruits and vegetable intake were given. Participants were advised to increase their physical activity levels and exercise at least three times a week, but no practical session provided.\n\nBODY.MEASUREMENTS:\nBODY.SOCIODEMOGRAPHIC AND LIFESTYLE HISTORY:\nSociodemographic characteristics included age, gender, ethnicity, education, household income and marital status; family history of diseases such as hypertension, diabetes and hyperlipidemia; and socially related lifestyle behavior such as smoking and alcohol consumption assessed using self-administered questionnaire. Participants were required to fill a three-day dietary record using a log book. They were taught to estimate food intake with standard day-to-day food utensils. Food items according to meals were entered. A food list with energy (kcal) values was given to assist participants’ in self-monitoring for healthy food choices. Physical activity was measured using the International Physical Activity Questionnaire short form (IPAQ-SF). The IPAQ-SF comprised of seven items reported in terms of METS-min per week which can be summarised into three main categories (low, moderate and high). It is a valid measurement of physical activity and has moderate to high reliability [33].\n\nBODY.ANTHROPOMETRY, BIOCHEMICAL AND CLINICAL MEASURE:\nWeight was measured using SECA digital scale (Model HD 309) and height was measured using SECA body meter (Model 208) to the precision of 0.1 kg and 0.1 cm, respectively. Participants wore light clothing, and stood on scale without shoes with leg apart within the scale footprint. Waist and hip circumference were measured using non elastic SECA measuring tape to the nearest 0.1 cm following the WHO guideline [34]. Digital OMRON Model HEM 907 was used to measure blood pressure based on the Malaysian Clinical Practice Guidelines for management of Hypertension [35]. Fasting blood samples (after 8 hours) collected by trained staff and sent within four hours of collection in an icebox to the laboratory. Measurements were conducted by trained research assistants at baseline, 12, 24 and 36 weeks. Height measured only once at baseline. BMI was calculated as weight in kilograms divided by height in meters squared.\n\nBODY.PSYCHOLOGICAL MEASURES AND QUALITY OF LIFE:\nPsychological measures and quality of life were measured using self-administered questionnaires at baseline, 24 and 36 weeks. Weight efficacy lifestyle (WEL) questionnaire was used to measure eating self-efficacy in weight management with 20 items in five subscales. The subscales include food control in negative emotions, availability of food, social pressure, physical discomfort and physical activity with internal consistency ranged from 0.7 to 0.9 [36]. The Malay version of WEL questionnaire had internal consistency (Cronbach’s alpha) of 0.47 to 0.86 [37].\nSocial support was measured using the Multidimensional Perceived Social Support (MDPSS) which comprised of 12 items and three subscales. Internal reliability achieved was 0.82 for perceived social support from family, 0.88 for friends and 0.94 for significant others [38]. The Malay version of MDPSS has internal consistency of 0.89 with reliability coefficients for family, friends and significant others subscales ranging from 0.67 to 0.83 [39].\nAutomatic Thought Questionnaire (ATQ) used to assess negative thoughts [40]. 
Due to poor construct, the Malay version was reduced to 17 items and two subscales from 30 items [41]. Each subscale i.e. negative self concept (NSE) and personal maladjustment (PMA) had internal consistency (Cronbach alpha) of 0.91 and 0.83 respectively. Method of scoring of the Malay ATQ remained the same as the original 30 item with a cut-off score of 30 for depression.\nQuality of life was measured using the WHO Quality of Life short form questionnaire (WHOQoL-BREF), a shorter version of WHO Quality of Life (WHOQoL) questionnaire with the objective to measure a global perspective of quality of life across cultures [42]. The WHOQoL-BREF comprises of 26 questions in four domains i.e. physical activity, psychological, social support and environment. Validation of the translated Malay version revealed high reliability with internal consistency of 0.64–0.80, and test—retest-reliability of 0.49–0.88 [43,44].\n\n\nBODY.SAMPLE SIZE:\nSample size was calculated using OpenEpi software. Based on Renjilian et al.’s findings (17), 49% of group therapy participants achieved ten percent weight loss compared to 29% in individual counseling. A total of 286 participants was required after setting the study power at 80%, level of significance at p<0.05 and attrition rate of 25%. Due to low response rate, high number of excluded respondents and time constraint for further recruitment, the sample size was not achieved. Power calculation revealed the sample size of 194 achieved 88.8% power for targeted weight loss.\n\n\nBODY.STATISTICAL ANALYSIS:\nIBM SPSS Statistics, Version 20 for Windows (SPSS Inc, Chicago IL) used for data analysis. The dietary data was cross-checked and entered by a trained dietician into Nutritionist Pro where the Malaysian nutrient database was used. Between group mean differences at baseline for continuous variables were analyzed using t-tests and χ2 tests for categorical data. Mean differences between groups over time used analysis of variance for repeated measures. All analysis was carried out using the intention to treat principles. Missing data was imputed using the last observation carried forward. Continuous data at baseline was described as means± standard deviation. Repeated measure data for within group measures was described as mean ±standard error. Confidence interval (CI) was reported where appropriate. Statistical significance was preset at 0.05. Relative risk reported for proportion achieved for weight loss, while Cohen’s d used for the within and between group repeated measure. Adjustment conducted for measures where the baseline values showed significant between-group differences to control for bias i.e. carbohydrate percentage and multidimensional perceived of social support.\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nA total of 194 participants recruited and randomised. The participants’ mean age (standard deviation) was 40.5 (9.3) years. Majority of participants were women (72.7%) and non-academician (92.3%). More than half (64%) had at least secondary education. A total of 81 (83.5%) GSLiM participants and 80 (82.5%) of dietary counseling participants completed the study at 24 weeks (Fig 1). Participants from the GSLiM group had higher total MDPSS score, family support and friend subscales score as well as carbohydrate percentage intake. 
No other significant difference between groups was observed at baseline (Tables 4 and 5).\n10.1371/journal.pone.0160343.t004Table 4BODY.BASELINE MEASURES.:\nGSLiM, mean (sd), n = 97Diet counselling, mean (sd), n = 97p valueAge, years\n39.7 (9.2)\n40.4 (9.5)\n0.617\nGender\n\n\n0.629\nFemale, n (%)\n69 (71.1)\n72 (74.2)\n\nMale, n (%)\n28 (28.9)\n25 (25.8)\n\nEthnicity\n\n\n0.296\nMalay, n (%)\n87 (89.7)\n91 (93.8)\n\nNon Malays, n (%)\n10 (10.3)\n6 (6.2)\n\nEducation, tertiary, n (%)\n39 (40.2)\n29 (29.9)\n0.132\nHousehold income (RM)\n2680.34 (1511)\n2504.19 (1641.20)\n0.438\nMarried, n. (%)\n76 (78.4)\n74 (76.3)\n0.732\nOccupation, academician, n (%)\n9 (9.3)\n6 (6.2)\n0.420\nFamily history of:\n\n\n\nHypertensive, n (%)\n11 (11.3)\n13 (13.4)\n0.663\nDiabetes, n (%)\n3 (3.1)\n2 (2.1)\n0.683\nHyperlipidaemia, n (%)\n8 (8.2)\n13 (13.4)\n0.248\nBehaviour:\n\n\n\nNon-alcohol intake,\n96 (99)\n97 (100)\n0.316\nSmoker, n (%)\n9 (9.3)\n5 (5.2)\n0.267\nHeight, m\n1.59 (0.08)\n1.58 (0.09)\n0.863\nWeight, kg\n82.3 (16.3)\n82.0 (13.5)\n0.911\nBMI, kg/m2\n32.4 (4.8)\n32.4 (3.8)\n0.980\nWaist circumference, cm\n95.1 (11.2)\n94.9 (9.5)\n0.872\nHip circumference, cm\n112.2 (10.4)\n111.1 (8.2)\n0.429\nSystolic blood pressure, (mmHg)\n128.6 (16.1)\n129.1 (14.4)\n0.712\nDiastolic blood pressure, (mmHg)\n83.0 (12.2)\n83.4 (10.7)\n0.774\nTriglyceride, (mmol/l)\n1.41 (0.62)\n1.56 (0.82)\n0.182\nTotal Cholesterol, (mmol/l)\n5.2 (0.9)\n5.3 (1.1)\n0.913\nHDL Cholesterol, (mmol/l)\n1.23 (0.29)\n1.25 (0.31)\n0.592\nLDL Cholesterol, (mmol/l)\n3.45 (0.82)\n3.35 (0.85)\n0.426\nIPAQ SF total METmin-1 per week,\n1071 (3.24)\n1318.3 (3.06)\n0.238\nWeight efficacy lifestyle\n123.0 (25.6)\n121.6 (23.4)\n0.697\nNegative emotions\n25.6 (6.5)\n25.1 (6.3)\n0.560\nAvailability\n21.6 (6.6)\n21.4 (6.1)\n0.901\nSocial Pressure\n22.3 (6.1)\n22.1 (5.7)\n0.771\nPhysical Discomfort\n27.7 (5.3)\n26.9 (5.6)\n0.315\nPositive activities\n25.8 (5.7)\n26.1 (5.2)\n0.693\nMultidimensional social support a\n66.3 (9.9)\n62.8(8.7)\n0.010b\nFamily Support\n22.8 (3.7)\n21.7 (3.3)\n0.029c\nFriend Support\n20.5 (3.9)\n19.3 (3.4)\n0.021c\nSignificant Others\n22.9 (4.1)\n21.8 (4.5)\n0.061\nTotal automatic thoughts questionnaire score\n26.5 (7.4)\n26.7 (8.0)\n0.867\nNegative self-concept\n16.1 (4.6)\n16.2 (4.6)\n0.853\nPersonal mal-adjustment\n9.1 (2.8)\n9.2 (3.5)\n0.822\nWHOQoL BREF\n\n\n\nPhysical Health\n12.9 (1.6)\n13.1(1.4)\n0.373\nPsychological\n13.1 (1.7)\n13.1 (1.7)\n0.942\nSocial Relationship\n14.8 (2.7)\n14.6 (2.2)\n0.681\nEnvironment\n14.1 (1.8)\n14.0 (1.9)\n0.722\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\n10.1371/journal.pone.0160343.t005Table 5BODY.BASELINE DIETARY INTAKE.:\nGSLiM, mean (sd) N = 65Diet counselling, mean (sd), n = 56p valueDietary energy intake, mean kcal/day\n1421.2 (332.3)\n1351.3 (369.8)\n0.276\nProtein (%)\n15.8 (2.3)\n15.0 (2.8)\n0.109\nCarbohydrate (%)\n53.3 (6.5)\n55.8 (6.1)\n0.034a\nFat (%)\n30.8 (5.4)\n29.0 (5.1)\n0.064\nCholesterol (mg)\n168.0 (88.6)\n162.3 (107.2)\n0.748\nSaturated Fat (mg)\n8.1 (3.9)\n9.0 (4.5)\n0.201\nMono saturated fat (mg)\n7.3 (3.6)\n7.2 (3.0)\n0.772\nMeals frequency\n3.4 (0.6)\n3.6 (0.7)\n0.068\na Significant p<0.05.\n\nBODY.BETWEEN GROUP DIFFERENCES:\nAt week 24, nineteen participants (19.6%) achieved 6% targeted weight loss in the intervention compared to 4 (4.1%) in the comparison group, Risk Ratio (RR) of 4.75 (95% CI: 1.68, 13.45). 
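As a quick cross-check, the risk ratio and confidence interval quoted just above can be reproduced from the counts reported in that sentence (19 of 97 GSLiM participants versus 4 of 97 dietary-counselling participants reaching the 6% target). The short sketch below is illustrative only; it assumes the standard Katz log risk-ratio interval, since the article does not state which formula its software used, but the result matches the quoted 4.75 (1.68, 13.45).

```python
import math

# Counts reported at week 24 in the text above.
a, n1 = 19, 97   # reached 6% weight loss / total, GSLiM (intervention)
b, n2 = 4, 97    # reached 6% weight loss / total, dietary counselling (comparison)

rr = (a / n1) / (b / n2)                     # risk ratio
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)      # standard error of log(RR), Katz method
low = math.exp(math.log(rr) - 1.96 * se)
high = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# -> RR = 4.75, 95% CI (1.68, 13.45)
```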
Out of 23 participants achieved weight loss, 10 (57.5%) intervention participants and 3 (75%) from the comparison group retained the targeted weight loss (6%) at 36th week. However, when measured in terms of weight in kg, no significant difference observed for the mean difference in weight loss between groups (-0.82 ± 2.12 kg), Cohen’s d 0.001.\nTable 6 summarises obesity and cardiovascular measures from baseline to 12th week, 13th to 24th week, baseline to 24th week and from 25th week to 36th week. No other significant mean difference between groups observed at any of the time intervals. Physical activity, psychological and quality of life scores measure from baseline to 24th week and from 25th week to 36th week are shown in Table 7.\n10.1371/journal.pone.0160343.t006Table 6BODY.OBESITY AND CARDIOVASCULAR OUTCOMES BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin GroupBetween groupsGSLiM mean change (s.e.) N = 97Comparison mean change (s.e.) N = 97Mean difference (95% CI)Cohen’s dp value0 to 12 weeks\n(intensive phase)\n\n\n\n\nWeight (kg)\n-1.52 (0.36)a\n-0.09(0.19)\n- 0.47 (4.69, 3.74)\n0.000\n0.825\nBMI (kg/m2)\n- 0.60 (0.14)a\n-0.31(0.08)\n- 0.30 (-1.52,0.92)\n0.001\n0.627\nWaist circumference (cm)\n- 1.67 (0.48)a\n0.54 (0.39)\n- 0.87 (-3.79, 2.06)\n0.002\n0.561\nHip circumference (cm)\n-0.74(0.39)\n-0.16 (0.27)\n0.77 (-1.81, 3.35)\n0.002\n0.557\nSBP (mmHg)\n0.24 (1.19)\n-2.26 (1.01)\n0.43 (-3.59, 4.46)\n0.000\n0.832\nDBP (mmHg)\n-0.25(0.96)\n-1.38 (0.72)\n0.09 (-2.91, 3.09)\n0.000\n0.951\nFasting serum triglyceride (mmHg)\n0.02 (0.05)\n0.009 (0.047)\n-0.14 (-0.33, 0.07)\n0.009\n0.184\nFasting serum cholesterol (mmHg)\n-0.04 (0.06)\n-0.04 (0.05)\n-0.01 (-0.26, 0.23)\n0.000\n0.913\nHDL (mmHg)\n-0.031 (0.017)\n0.018 (0.016)\n-0.05 (-0.13, 0.04)\n0.007\n0.256\nLDL (mmHg)\n-0.058 (0.055)\n-0.049 (0.057)\n0.09 (-0.12, 0.31)\n0.004\n0.401\nFasting blood glucose (mmHg)\n-0.003 (0.086)\n0.121 (0.087)\n-0.11 (-0.39, 0.17)\n0.003\n0.442\n13 to 24 weeks\n(fade frequency)\n\n\n\n\nWeight (kg)\n-0.92 (0.26)b\n-0.61(0.22)b\n-1.35 (-5.54, 2.85)\n0.002\n0.527\nBMI (kg/m2)\n-0.36 (0.10)b\n-0.24 (0.08)\n-0.64 (-1.87, 0.58)\n0.006\n0.300\nWaist circumference (cm)\n0.16 (0.44)\n0.18 (0.37)\n-1.79 (-4.73, 1.14)\n0.008\n0.229\nHip circumference (cm)\n-0.44 (0.29)\n-0.27 (0.23)\n0.39 (-2.21, 3.00)\n0.000\n0.763\nSBP (mmHg)\n0.17 (1.08)\n0.27 (0.90)\n1.63 (-2.37, 5.65)\n0.003\n0.424\nDBP (mmHg)\n-1.07 (0.82)\n1.27 (0.85)\n-0.51 (-3.51, 2.49)\n0.001\n0.737\nFasting serum triglyceride (mmHg)\n-0.05 (0.05)\n-0.047 (0.042)\n0.01 (-0.24, 0.37)\n0.007\n0.911\nFasting serum cholesterol (mmHg)\n0.08 (0.05)\n-0.02 (0.04)\n-0.13 (-0.35, 0.09)\n0.000\n0.239\nHDL (mmHg)\n0.038 (0.018)\n0(0.013)\n-0.05 (-0.14, 0.03)\n0.008\n0.215\nLDL (mmHg)\n0.033(0.049)\n0.039(0.041)\n0.08 (-0.4, 0.302)\n0.003\n0.453\nFasting blood glucose (mmHg)\n-0.088(0.046)\n-0.070(0.117)\n-0.21 (-0.54, 0.12)\n0.008\n0.207\n0 to 24 weeks\n\n\n\n\n\nWeight (kg)\n-2.42 (0.49)a\n-0.69 (0.27)c\n-0.82 (-5.00, 3.37)\n0.001\n0.701\nBMI (kg/m2)\n-0.96 (0.19)a\n-0.27 (0.11)b\n-0.44 (-1.65, 0.78)\n0.003\n0.480\nWaist circumference (cm)\n-1.51(0.52)a\n-0.31 (0.12)\n-1.12 (-4.00, 1.77)\n0.003\n0.445\nHip circumference (cm)\n-1.18 (0.45)c\n-0.43 (0.34)\n0.62 (-1.96, 3.19)\n0.000\n0.850\nSBP (mmHg)\n0.41 (1.23)\n-1.99 (1.14)\n0.82 (-3.09, 4.72)\n0.000\n0.680\nDBP (mmHg)\n-1.32 (0.91)\n-0.11 (0.99)\n-0.49 (-3.4.1, 2.41)\n0.003\n0.736\nFasting serum triglyceride (mmHg)\n-0.03 (0.06)\n-0.04 (0.05)\n-0.13 (-0.34, 0.07)\n0.009\n0.199\nFasting serum cholesterol 
(mmHg)\n0.04 (0.07)\n-0.02 (0.06)\n0.005(-0.24, 0.25)\n0.000\n0.970\nHDL (mmHg)\n0.01 (0.02)\n0.02 (0.02)\n-0.04 (-0.13, 0.04)\n0.002\n0.302\nLDL (mmHg)\n-0.03 (0.07)\n-0.01 (0.06)\n0.09 (-0.13, 0.30)\n0.003\n0.418\nFasting blood glucose (mmHg)\n-0.09 (0.08)\n0.11 (0.09)\n-0.16 (-0.45, 0.13)\n0.005\n0.299\n25–36 weeks (follow-up)\n\n\n\n\n\nWeight (kg)\n0.19 (0.35)\n-0.11 (0.16)\n-1.42 (-5.65, 2.82)\n0.002\n0.510\nBMI (kg/m2)\n0.02(0.07)\n-0.04 (0.64)\n-0.67 (-1.92, 0.57)\n0.006\n0.286\nWaist circumference (cm)\n0.19 (0.35)\n-0.19 (0.37)\n-1.44 (-4.39, 1.52)\n0.005\n0.338\nHip circumference (cm)\n-0.45 (0.31)\n-0.33 (0.21)\n0.26 (-2.39, 2.91)\n0.001\n0.636\nSBP (mmHg)\n-1.70 (1.27)\n0.46 (0.90)\n0.51 (-3.57, 4.48)\n0.000\n0.802\nDBP (mmHg)\n-0.72(0.97)\n-1.38(0.78)\n-1.03 (-4.08, 2.03)\n0.002\n0.509\nFasting serum triglyceride (mmHg)\n0.008(0.04)\n-0.01(0.03)\n0.008 (-0.25, 0.26)\n0.007\n0.264\nFasting serum cholesterol (mmHg)\n0.01(0.06)\n0.07(0.04)\n-0.13 (-0.36, 0.09)\n0.000\n0.953\nHDL (mmHg)\n0.012 (0.015)\n0.003 (0.018)\n-0.02 (-0.07, 0.11)\n0.003\n0.489\nLDL (mmHg)\n-0.033 (0.063)\n0.067 (0.044)\n0.03 (-0.19, 0.26)\n0.001\n0.715\nFasting blood glucose (mmHg)\n-0.008 (0.048)\n-0.065 (0.039)\n-0.22 (-0.58, 0.14)\n0.006\n0.267\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n10.1371/journal.pone.0160343.t007Table 7BODY.PHYSICAL ACTIVITY, PSYCHOLOGICAL AND QUALITY OF LIFE SCORE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin groupBetween groupsIntervention group mean change (s.e.)(n = 97)Control group mean change (s.e.)(n = 97)Mean Difference at intervals (95% CI)Cohen dP0 to 24 weeks\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n-18.9 (236.5)\n90.3(98.2)\n-244.3(-790.1, 301.5)\n0.004\n0.378\nVigorous, mets min-1\n-75.5 (149.7)\n58.6 (63.9)\n-10.5(-296.4, 275.5)\n0.000\n0.943\nModerate, mets min-1\n-6.8 (95.9)\n50.9 (59.5)\n-238.6(-509.3, 32.2)\n0.015\n0.084\nWalking, mets min-1\n63.3 (79.6)\n-19.3(43.8)\n4.7(-212.0,221.5)\n0.000\n0.966\nTotal WEL score\n10.3 (2.46)a\n1.78(1.88)\n5.62 (-0.10, 11.35)\n0.019\n0.054\nNegative emotions, mean (sd)\n2.02 (0.57)b\n0.07 (0.47)\n1.51 (0.004, 3.02)c\n0.020\n0.049\nAvailability, mean (sd)\n2.63 (0.62)a\n0.93 (0.49)\n0.96 (-0.58, 2.50)\n0.008\n0.218\nSocial Pressure, mean (sd)\n1.71 (0.60)c\n0.52 (0.43)\n0.85 (-0.61, 2.29)\n0.007\n0.253\nPhysical Discomfort, mean (sd)\n1.41 (0.48)c\n0.36 (0.44)\n1.31 (0.05, 2.56)c\n0.022\n0.041\nPositive activities, mean (sd)\n2.52(0.54)a\n-0.09 (0.40)\n0.99 (-0.27, 2.26)\n0.012\n0.121\n#\nTotal MDPSS mean score\n1.46 (0.65)\n1.09(0.63)\n\n0.012\n0.127\n# Friend Support, mean (sd)\n0.86(0.25)b\n0.44 (0.22)\n0.41(0.08, 0.75)a\n0.030\n0.015\n#Family Support, mean (sd)\n0.10 (0.27)\n0.19 (0.25)\n0.09 (- 0.25, 0.44)\n0.002\n0.578\n#Significant Others, Mean (sd)\n0.50 (0.30)\n0.45 (0.32)\n0.19(-0.21, 0.59)\n0.005\n0.347\nTotal ATQ mean score\n-0.84 (0.68)\n-0.40 (0.43)\n-0.40 (-2.29, 1.49)\n0.001\n0.675\nNegative self-concept mean score\n-0.39 (0.45)\n-0.26 (0.25)\n-0.19 (-1.35, 0.97)\n0.001\n0.745\nPersonal maladjustment mean score\n-0.36 (0.26)\n-0.19 (0.23)\n-0.19(-0.94, 0.57)\n0.001\n0.630\nWHOQOL BREF\n\n\n\n\n\nPhysical 
health\n2.35(0.19)a\n1.82(0.19)\na\n0.07(-0.35, 0.48)\n0.001\n0.743\nPsychological\n1.62(0.15)\na\n1.52(0.16)\na\n0.06(-0.41, 0.52)\n0.000\n0.784\nSocial relationship\n0.63(0.22)c\n0.12 (0.17)\n0.39(-0.22, 1.01)\n0.008\n0.210\nEnvironment\n0.52(0.14)b\n0.25(0.11)b\n0.18(-0.31, 0.67)\n0.003\n0.471\n25 to 36 weeks (follow up)\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n1.1(143.6)\n303.1(138.4)\n-449.9 (-1015.1,115.3)\n0.013\n0.118\nVigorous, mets min-1\n-63.1 (73.2)\n59.4 (79.6)\n-138.8 (-437.5, 159.9)\n0.004\n0.361\nModerate, mets min-1\n62.5 (61.5)\n102.1 (82.0)\n-287.2 (-563.2, -11.2)c\n0.021\n0.041\nWalking, mets min-1\n1.7 (73.4)\n141.6 (63.6)\n- 23.9 (-240.8, 192.9)\n0.000\n0.828\nTotal WEL score\n-1.77 (1.45)\n0.45 (1.19)\n8.76 (3.05, 17.47)b\n0.045\n0.003\nNegative emotions, mean (sd)\n-0.42 (0.39)\n0.06 (0.27)\n2.24 (0.81, 3.67)b\n0.305\n0.002\nAvailability, mean (sd)\n-0.09 (0.35)\n0.19 (0.28)\n1.68 (0.10, 3.25)c\n0.022\n0.037\nSocial Pressure, mean (sd)\n-0.34 (0.34)\n0.09 (0.26)\n1.23 (-0.29, 2.75)\n0.013\n0.114\nPhysical Discomfort, mean (sd)\n-0.40 (0.31)\n0.31 (0.25)\n1.48 (0.28, 2.68)c\n0.030\n0.016\nPositive activities, mean (sd)\n-0.52 (0.36)\n-0.79 (0.33)\n2.14 (0.93, 3.35)a\n0.060\n0.001\n#Total MDPSS mean score\n0.35(0.62)\n0.46(0.41)\n1.46 (-0.26, 3.17)\n0.015\n0.095\n# Friend Support, mean (sd)\n-0.10(0.19)\n0.11(0.14)\n0.80 (0.15, 1.45)\nc\n0.030\n0.017\n#Family Support, mean (sd)\n0.27(0.22)\n0.16(0.18)\n0.29 (-0.35, 0.95)\n0.004\n0.364\n#Significant Others, Mean (sd)\n0.09(0.31)\n0.19(0.18)\n0.38 (-0.38, 1.13)\n0.005\n0.325\nTotal ATQ mean score\n-0.19(0.47)\n-0.73(0.21)b\n-0.35 (-2.17, 1.48)\n0.001\n0.709\nNegative self-concept mean score\n-0.10(0.30)\n-0.47(0.37)b\n-0.07 (-1.22, 1.07)\n0.000\n0.901\nPersonal maladjustment mean score\n-0.08(0.19)\n-0.19 (0.09)\n-0.21 (-0.94, 0.52)\n0.002\n0.568\nWHOQOL BREF\n\n\n\n\n\nPhysical health\n-0.92 (0.18)a\n-0.53(0.19)c\n0.13(-0.37, 0.64)\n0.001\n0.609\nPsychological\n-0.29 (0.19)\n-0.12 (0.20)\n0.03(-0.49, 0.54)\n0.000\n0.923\nSocial relationship\n-0.03 (0.17)\n0.14 (0.11)\n0.56(-0.08, 1.19)\n0.015\n0.084\nEnvironment\n-0.06 (0.11)\n0.06 (0.09)\n0.21(-0.28, 0.69)\n0.004\n0.405\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# MDPSS, adjusted for baseline MDPSS.\n* median values instead of mean for physical activity.\nThe intervention group achieved higher score for negative emotions and physical discomfort of WEL subscales compared to the comparison group from baseline to 24th week. The effects maintained during follow up from 25th to 36th week. Intervention participants achieved higher friend support mean score from baseline to 24th week, even after adjusting for baseline MDPSS score. During the follow up period, except for social pressure, total WEL and other WEL subscales mean score increased significantly compared to the comparison group. However, moderate physical activity in METSmin-1 per week reduced significantly in the intervention group compared to the comparison group during 25th to 36th week. 
The intervention group experienced lower intake in carbohydrate percentage with higher percentage increase in fat intake over baseline to 12th week and 13th to 24th week. No other dietary change was observed (Table 8).\n10.1371/journal.pone.0160343.t008Table 8BODY.DIETARY ENERGY INTAKE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin group changeBetween groupsIntervention mean (s.e)Control mean (s.e)Mean difference at intervalsCohen dPN = 65N = 56(95% CI)0–12 weeks\n\n\n\n\n\nDietary intake, kcal\n0.41 (38.2)\n45.70 (22.76)\n47.25 (-79.8, 174.3)\n0.005\n0.463\nProtein (%)\n-0.05 (0.29)\n-0.19 (0.20)\n0.88 (-0.01, 1.77)\n0.09\n0.053\nCarbohydrate (%)#\n-4.12 (0.82)a\n-1.58 (0.67)\n-1.78 (-2.86, -0.69)b\n0.08\n0.002\nFat (%)\n3.54 (0.93)b\n1.77 (0.66)\n2.69 (0.91, 4.47)b\n0.07\n0.003\n13–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.06 (12.96)\n15.78 (35.53)\n19.24 (-113.3, 151.8)\n0.001\n0.744\nProtein (%)\n0.08 (0.16)\n0.22 (0.24)\n0.93 (-0.02, 1.87)\n0.03\n0.055\nCarbohydrate (%)#\n-0.19 (0.40)\n-1.91 (0.51)b\n-2.76 (-4.87, -0.64)c\n0.05\n0.011\nFat (%)\n0.11 (0.35)\n2.02 (0.53)b\n2.62 (0.36, 4.88)c\n0.04\n0.023\n25–36 weeks\n\n\n\n\n\nDietary intake, kcal\n-6.52 (9.08)\n39.15 (23.7)\n-8.96 (-139.7, 121.8)\n0.00\n0.892\nProtein (%)\n0.12 (0.09)\n0.44 (0.19)\n0.69 (-0.30, 1.69)\n0.02\n0.171\nCarbohydrate (%)#\n-1.18 (0.13)\n-1.48 (0.62)\n-1.33(-3.61, 0.95)\n0.11\n0.250\nFat (%)\n-0.07 (0.11)\n1.05 (0.55)\n1.17 (-1.16, 3.51)\n0.01\n0.322\n0–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.46 (38.28)\n61.48 (36.17)\n36.12 (-86.9, 159.19)\n0.003\n0.562\nProtein (%)\n0.13 (0.30)\n0.03 (0.20)\n0.81 (-0.08, 1.68)\n0.001\n0.444\nCarbohydrate (%)#\n-4.31(0.88)a\n-3.49 (0.64)a\n-1.84(-3.25, -0.43)c\n0.053\n0.011\nFat (%)\n3.65 (0.96)b\n3.79 (0.82)\n1.74 (0.003, 3.46)\n0.000\n0.050\nConfidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05. All participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# CHO percentage was adjusted for its baseline value for between group analyses in all interval.\n\nBODY.WITHIN INTERVENTION GROUP:\nWeight and BMI of the intervention group were reduced during the first 12 weeks (core phase), with smaller reduction in the next 12 weeks (fade frequency). During follow up, there was an insignificant increase in weight and BMI. Reduction in waist circumference occurred in the first 12 weeks but not sustained during fade frequency. An overall significant hip circumference reduction was observed within interval 0 to 24th week, although the reduction was not significant in the core phase or fade frequency.\nWEL and its subscales’ scores, friend support subscale score of MDPSS and WHOQOL-BREF domains score increased from 0 to 24th week. However, the improvement was not sustained during follow up, with significant reduction in WHOQOL-BREF physical health score observed during 25th to 36th week. The intervention group experienced reduction in carbohydrate percentage intake mainly during the core phase, resulting in significant reduction from baseline to 24th week. Concurrent fat percentage intake increased during the same time interval. 
There was no significant change in cardiovascular risks and physical activity measure throughout the study.\n\nBODY.WITHIN COMPARISON GROUP:\nThe comparison group weight decreased only during 13th to 24th week, yet it contributed to the overall significant reduction throughout treatment from 0 to 24th week. During the same 0 to 24th\nSignificant decreased in carbohydrate and increased in fat percentage intake, occurred from 13th to 24th week, in the comparison group. As per intervention group, the WHOQOL-BREF physical health score decreased in the follow up phase. Apart from these changes, significant reduction was also observed in negative thoughts and the negative self-concept subscale of the ATQ score during follow up period (25th to 36th weeks).\n\nBODY.ADHERENCE TO INTERVENTION:\nBODY.ATTENDANCE:\nAttendance was assessed as part of adherence measure in the intervention group. Of the total six sessions within the first 12 weeks, 33 (17.0%) participants in the intervention group attended more than four sessions, 34 (17.5%) attended three to four sessions while 30 (15.5%) attended two sessions and less. During the fade frequency with four sessions, 20.6 percent participants attended three to four sessions. None of the participants who attended less than three sessions achieved 6% weight loss, while 8.9% of participants attended four to six sessions achieved the targeted weight loss. About half (44.8%) of the participants attended more than seven sessions achieved 6% targeted weight loss (p<0.001). Median attendance was six sessions with inter-quartile range of five sessions. There was positive correlation between attendance and weight loss (r = 0.491, P<0.001).\n\nBODY.LOG BOOK SUBMISSION:\nThe overall log book submission was unsatisfactory for both groups. Only 46.4% (n = 45) intervention participants submitted their log books at 12th week and 16.5% (n = 16) at 24th week. Only 23.7% (23) comparison group participants submitted their log books at 12 weeks and 19.6% (19) at 24th week. Those who did not submit their log books were of younger age (p = 0.03).\n\n\nBODY.ADVERSE EFFECT OF THE INTERVENTION:\nTwo male participants with BMI above 40kg/m2 from the intervention group experienced mild soft tissue injury during self-conducted exercise and had undergone treatment. No other serious event reported.\n\n\nBODY.DISCUSSION:\nThe intervention proved to be more effective in achieving the targeted 6% weight loss, improved self-efficacy in dietary control, and achieved better friend support and quality of life than the comparison (dietary counseling) group. The psychological improvement in the intervention group was sustained post intervention. Although the comparison group also experienced reduction in weight, BMI, improvement in quality of life, and ATQ, there was no improvement in self-efficacy and social support.\nA total of 19% of GSLiM participants achieved 6% weight loss. For comparison purpose, we recalculated and found 17.5% of GSLiM participants achieved 7% weight loss. Other GLB based study reported 23 to 37% achievement in targeted weight loss [14]. A most recent GLB based trial on work site reported 29% of participants achieved targeted 7% weight loss [26]. While another study assessed the effectiveness of the coach led GLB programme in primary care setting had 37% participants achieving 7% weight loss at 15 months [45]. Our study had lower proportion of targeted weight loss due to fewer sessions in GSLiM compared to the original GLB-DPP. 
Earlier study reported for every other lifestyle session attended, weight loss could increase by 0.26% (16).\nOur findings concurred with recent review which showed multicomponent behaviour intervention was more effective than single component intervention [46]. The mean weight loss within intervention at 24th week was -2.43 kg (95% CI: -3.75, -1.21, p = 0.001), comparable to another trial which reported mean weight loss of -2.3 kg (95% CI: -2.92 to -1.72, p = 0.001) [47]. Most improvement occured during the first 12 weeks in the intervention group where sessions were more intensive. Earlier findings showed positive association between intensity and magnitude of weight loss [48]. Yet, lower intensity programme with similar effect is still being sought by other researchers [49], as intensive intervention are costly in terms of manpower and materials [50]. Apart from weight loss, significant improvement of the self-efficacy score and its subscales were seen among intervention participants compared to the comparison group, similarly found by other studies [51–53]. The intervention group experienced immediate improvement in social support score at baseline. However, after controlling for the baseline friend support score, significant difference between groups was observed for mean friend support score during the 24 weeks. Similar finding was also reported by a recent cluster randomised trial where healthy eating was found to be supported by friend [54].\nAny improved ATQ score would be expected in the intervention group due to the presence of cognitive elements incorporated in the programme, instead the improvement of ATQ score occurred within the comparison group. The use of ATQ to measure negative thought in obesity research remains limited in the presence of other tools used to assess dysfunctional eating behavior associated with obesity [55]. The average cut off value for depressogenic negative thoughts using the 17 item scale of ATQ—Malay in non-clinical population was 30.0 and clinical population was 47.0 [41]. The mean ATQ score for both the intervention and comparison groups was 25.6 and 26.3 respectively, below the cut off value; therefor the significant mean difference found between the intervention and comparison group was most likely due to chance. ATQ was used in our study to measure depressogenic features amongst obese participants and observed further change as treatment progressed. Moderate concurrent validity (r = 0.65) was found between the Malay ATQ and Malay-Beck Depression inventory (BDI), while discriminant analysis achieved 89.3%.\nOther than improvement in weight, psychological measures and quality of life, there was no improvement in cardiovascular risks among our participants. Other studies found at least 10% weight loss of baseline weight resulted in changes for the cardiovascular risk parameters [56]. Several other reasons may influence the absence of treatment effect on cardiovascular risks in this study. Our participants were within the younger age group and their clinical indicators were of normal range, therefor further improvement would be less likely. The focus of intervention would also influence the result as other studies targeting clinical indicators with health education in dietary approach managed to result in cardiovascular risks improvement [57,58]. 
Therefore, weight loss may not necessary resulting improvement of cardiovascular risks factors.\nParticipants also experienced small reduction in carbohydrate percentage intake and increase in fat percentage intake, as found by previous study [59]. Low carbohydrate diet has been associated with greater weight loss compared to low fat diet [60]. Nevertheless, there has been call that the effectiveness of lifestyle modification not to be overshadowed by the continuous search for the best dietary approach [61] when moderate balanced nutrients intake should be advocated.\nAlthough promising, the results of this study should be interpreted cautiously. Since participants were employees from the university, generalizability of the study remained limited within population of the same setting and may differ with the general population. Self—reported measures in particular self-administered surveys used may result in reporting bias. Negative behavior such as smoking and alcohol consumption may result in underreporting, while physical activity may be over reported. Although the short form IPAQ was advocated to be used in view of time-saving, a recent review [62] found that IPAQ SF tended to overestimate the METs-minutes/ week score. However, the categories of physical activity derived from the long form IPAQ [63] showed similarity with the categories found in our study and another Malaysian based study [3]. Therefore, we foresee the results to be reliable in particular in assessing change over time. Objective measures such as pedometer or accelerometer cited to be more reliable [64] however its utilization is resource consuming and may not be feasible in community setting. Our study only measured dietary self-efficacy, however, recent finding showed that exercise self-efficacy may be a better predictor for weight loss compared to dietary self-efficacy [65]. Considering the continuous development in the cognitive- psychological aspect of obesity intervention, both physical and dietary self-efficacy need to be assessed in future studies.\nSelf-monitoring is another critical component for weight reduction (39), however the log book submission by both groups was poor although attempts had been made to improve submission. This may further explain the small effect size observed and no improvement in cardiovascular risks although weight loss was achieved within each group. Poor log book submission for physical activity and dietary intake has also been reported in other studies (17). The attendance for group sessions was low compared to other studies [25,45,66]. Yet similar findings to ours was also found in another Malaysian based lifestyle intervention trial [67]. We also found processes such as attendance and log book submission to be correlated with weight loss. Competing time work demand has also been cited as factors affecting attendance for work site intervention [25]. Local culture within the society may influence the attendance towards programme which needs to be explored.\nFinally, we did not evaluate environmental support such as availability of healthy food choices and physical activity facilities which is known to influence weight loss.\nNevertheless, this study is the first group based programme derived from the GLB-DPP conducted in Malaysia. It addressed the dynamics of psychological aspect within the lifestyle modification programme based on a theoretical construct which was rarely implemented and monitored in other translational weight loss programme. 
The randomised controlled design as opposed to single arm trial was used to evaluate its effectiveness compared to an existing dietary programme.\n\nBODY.CONCLUSION AND RECOMMENDATIONS:\nOur findings showed the group based workplace intervention (GSLiM) programme managed to achieve targeted weight loss, improved self-efficacy and created positive support with lower intensity. The incorporation of group based approach and psychological sessions managed to achieve psychological change needed to achieve weight loss. The GSLiM programme is ready to be used and can be replicated in similar setting with possible enhancement for exercise self-efficacy, attendance, and adherence to self-monitoring. The programme should be extended for longer duration. Further research should explore the predictive value of the psychological factors in assisting participants to achieve their targeted weight loss.\n\nBODY.SUPPORTING INFORMATION:\nS1 FileBODY.RESEARCH PROTOCOL.:\n(PDF)\nS2 FileBODY.DATA.:\n(XLS)\nS1 TableBODY.CONSORT CHECKLIST FOR RANDOMISED CONTROL TRIAL.:\n(PDF)\n\n**Question:** Compared to dietary counseling what was the result of Group Support Lifestyle Modification (GSLiM) on Physical Discomfort?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
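The row above stores its gold label as "significantly decreased", while its prompt restricts answers to "significantly increase", "no significant difference" or "significantly decrease"; the completion and label differ from the allowed option only by inflection. Below is a minimal sketch of how a consumer of these rows might normalise completions and labels onto the allowed classes; the function name and matching rule are assumptions for illustration, not part of the dataset.

```python
ALLOWED_LABELS = (
    "significantly increase",
    "no significant difference",
    "significantly decrease",
)

def normalise_answer(text: str):
    """Map free-form answer text (e.g. '**Answer:** significantly decreased')
    onto one of the three allowed label classes, or return None if it fits none."""
    cleaned = text.lower().replace("**answer:**", "").strip()
    for label in ALLOWED_LABELS:
        # Tolerate minor inflection differences such as "decrease" vs "decreased".
        if cleaned == label or cleaned.startswith(label):
            return label
    return None

print(normalise_answer("**Answer:** significantly decreased"))  # -> significantly decrease
print(normalise_answer("no significant difference"))            # -> no significant difference
```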
13863
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffectiveness of a Group Support Lifestyle Modification (GSLiM) Programme among Obese Adults in Workplace: A Randomised Controlled TrialGSLiM and Obesity\n\n\n\nABSTRACT.BACKGROUND:\nThere was an increasing trend in the prevalence of obesity and its comorbidities over the past decades in Malaysia. Effective intervention for obesity remains limited. This study aimed to compare the effectiveness of a group based lifestyle modification programme amongst obese individuals with an existing dietary counseling programme.\n\nABSTRACT.METHODS:\nWe recruited one hundred and ninety four overweight and obese (BMI>27.5 kg/m2) employees from a local university. They were randomly allocated to either Group Support Lifestyle Modification (GSLiM) (intervention)(n = 97) or dietary counseling (comparison)(n = 97). The GSLIM activities included self monitoring, cognitive-behaviour sessions, exercise as well as dietary change advocacy, which were conducted through seminars and group sessions over 24 weeks. The comparison group was given dietary counselling once in 12 weeks. Both groups were followed up for additional 12 weeks to check for intervention effect sustenance. Anthropometric and biochemical parameters were measured at baseline, 12, 24 and 36 weeks; while dietary intake, physical activities, psychological measures and quality of life measured at baseline, 24 and 36 weeks. Data analysis was conducted using ANOVA repeated measures with intention to treat principle.\n\nABSTRACT.RESULTS:\nThe participants were predominantly women with mean (standard deviation) age of 40.5 (9.3) years. A total of 19.6% of the participants in GSLiM achieved 6% weight loss compared to 4.1% in the comparison group (Risk Ratio 4.75; 95% CI: 1.68, 13.45). At 24 weeks, the retention rate was 83.5% for GSLiM and 82.5% for comparison group. GSLiM participants also achieved significant improvement in total weight self-efficacy score, negative emotions and physical discomfort subscales, MDPSS friend subscale and all domains in quality of life. Participants in the comparison group experienced reduction in negative self-thoughts.\n\nABSTRACT.CONCLUSION:\nThe GSLiM programme proved to be more effective in achieving targeted weight loss, improving weight self-efficacy, friend social support, and quality of life compared to dietary counseling.\n\nABSTRACT.TRIAL REGISTRATION:\nIranian Registry of Clinical Trials IRCT201104056127N1\n\nBODY.INTRODUCTION:\nMalaysia has the highest combined overweight and obese prevalence among the Southeast Asian countries in 2013 [1]. From 1996 to 2009, the prevalence of obesity in Malaysia increased by 23.7% [2], concurrent with the obesity global pandemic. High risks groups for obesity were women, married adults, those with secondary education and unemployed or homemakers [3]. 
Arguably, working adults who engage in sedentary work as well as those experience stress at the workplace [4,5] are also at risks of obesity.\nObesity is associated with increase mortality [6] and reduction in overall quality of life [7]. Malaysians observed an increase of mortality due to heart disease and cancers from 67% in 2008 to 73% in 2012 [8,9]. Comprehensive lifestyle modification on diet, physical activity and psychology remains as the primary intervention for obesity [10,11].\nMulticomponent lifestyle modification proved successful in reducing diabetes risk [12] as well as improved cardiovascular risk factors [13]. The Diabetes Prevention Program (DPP) was a high intensity lifestyle modification programme which managed to reduce participant’s diabetic risks by 58% [12]. Yet, in real life, high intensity programme may not be applicable to all settings as it is resource intensive. This results in its translation into the Group Lifestyle Balance (GLB-DPP) implemented in clinical settings, primary care and workplace [14]. The GLB-DPP retained the core principles of the DPP such as 7% weight loss, 150 minutes of physical activity and self-monitoring using group approach with reduced sessions.\nAccording to the Social Cognitive Theory; personal behavior, thoughts and environment reciprocate to produce action [15,16]. Factors influencing the cognitive process favoring weight loss includes high self-efficacy [17,18] and social support [19]. Meanwhile, obese individuals may have higher inclination for negative thoughts related to dysfunctional eating [20] hampering lifestyle modification progress. However, these psychological factors were seldom assessed and reported together with clinical measures as part of the lifestyle modification programme outcomes [21,22].\nWorkplace has been shown to be a feasible and effective setting for lifestyle modification programme for obesity prevention [23–27]. Apart from preventing short term deterioration of quality of life, moderate improvement in weight induced employee productivity [28].\nIn view of urgent need for obesity intervention, we implemented an adaptation of the GLB-DPP programme named Group Support Lifestyle Modification (GSLiM) in the workplace. The objective was to compare the effectiveness of the GSLiM programme with an existing one to one dietary counseling programme in the workplace. The programme was designed to create social support and improve self-efficacy. It was hypothesized that Group Support Lifestyle Modification (GSLiM) would be more effective in producing weight loss, improvement of cardiovascular risk factors, self-efficacy, social support factors as well as their quality of life.\n\nBODY.METHODS:\nThis was a randomised controlled trial conducted in a public university in Kuala Lumpur, Malaysia. Ethical clearance obtained from the Medical Ethics Committee, University Malaya Medical Centre on 16 March 2011 (MEC No. 841.2) and was registered with the Iranian Registry of Clinical Trial (http://www.irct.ir/) (IRCTID: IRCT201104056127N1). There was a slight delayed in trial registration as the authors waited for funding approval obtained from the University Malaya post-graduate research fund.\nBODY.RECRUITMENT AND PARTICIPANTS:\nParticipants were employees from a public university in Kuala Lumpur. Recruitment started soon after trial registration from May 2011 to September 2012. The trial intervention and follow up commenced from September 2011 to July 2013. 
Respondents gave written informed consent during recruitment and informed of random allocation into either of the treatment arms. Recruitment, data collection and intervention programme were conducted in the Sport Centre of the Faculty of Sports Science within the university.\nEligible criteria included employees of the university, Bahasa Malaysia (national language of the country) literate, with BMI of 27.5kg/m2 or more and able to walk briskly for at least 10 minutes without assistance. Cut off point of 27.5kg/m2 was used as Asians experience higher risk for type 2 diabetes, metabolic syndrome as well as cardiovascular diseases at lower BMI [29–31]. Exclusion criteria included individuals with unstable angina, congestive cardiac failure, cancer, severe pulmonary disease, psychiatry disorders e.g. substance abuse, depression, weight loss of more than 5kg in the last 6 months, pregnant or breastfeeding. Self-administered questionnaire used to assess medical history and family history for disease risks. Further assessment to rule out undiagnosed medical conditions included full blood count (FBC), renal profile (RP), fasting blood sugar (FBS) and lipid profile at recruitment phase.\nOf 510 employees’ responded to invitations via emails and flyers, 275 consented for participation and assessed for eligibility. Nineteen participants withdrew after consented, eleven did not finish screening, fifty-one were excluded and finally, 194 participants were randomised (Fig 1).\n10.1371/journal.pone.0160343.g001Fig 1BODY.CONSORT PARTICIPANTS FLOWCHART OF INTERVENTION AND CONTROL ARM.:\nGSLiM: Group Support Lifestyle Modification; Comparison group: Dietary counselling.\nBODY.RANDOMISATION:\nA third person (TC), who was not involved with the study conducted random allocation of participants into either Group Support Lifestyle Modification (GSLiM) or dietary counseling (comparison). Allocation of participants to GSLiM or comparison arms were assigned using opaque envelopes. This was an open label trial as both treatment provider and participants knew which arm they belonged to since the programme differed between the two groups.\n\n\nBODY.INTERVENTIONS:\nBODY.GLB-DPP:\nThe Group Lifestyle Balance (GLB-DPP) developed based on the Diabetes Prevention Program (DPP) [32] that retained the core principles of DPP. These included self-monitoring for weight, dietary and physical activity, problem solving, staying motivated, and target 7% weight loss from baseline weight. However, the programme delivery of GLB-DPP used group-approach given over twenty-two hourly sessions in three phases (core, fade frequency and support).\n\nBODY.GSLIM:\nPermission to adopt and adapt the GLB-DPP was obtained from the author/s under the Creative Common Licence Share Alike. The GSLiM programme retained the core characteristics of GLB-DPP except a 6% target weight loss from baseline weight used instead of 7% and absence of support sessions after programme completion at 6th month. The 6% target was based on an achievement of a minimum of 1% weight loss per month in view of evidence that even 5% weight loss may produce clinical improvement among obese individuals [10]. The frequency of sessions conducted in GSLiM was less compared to the GLB-DPP with ten sessions in GSLiM intervention compared to the twenty-two of GLB-DPP. However, the original twenty-two topics of the GLB-DPP were retained in GSLiM with the topics delivered through three seminars and five 90-minute sessions in two phases. 
The core phase (Phase 1) was run once in two weeks for three months, beginning with a two-day seminar. A fade frequency (Phase 2) started with a one-day seminar, followed by two 90-minute sessions and concluded with a half-day seminar at the end of the active intervention. Comparison of sessions between GLB-DPP and Group Support Lifestyle Modification (GSLiM) is shown in Tables 1 and 2.\n10.1371/journal.pone.0160343.t001Table 1BODY.COMPARISON OF GROUP LIFESTYLE BALANCE (GLB) WITH GROUP SUPPORT LIFESTYLE MODIFICATION (GSLIM) PROGRAMME.:\nSame aspects of GLB and GSLiM\nSafe and appropriate intervention that incorporates nutrition, physical activity, and behaviour change.\nGradual increase of 150 mins per week of physical activity and minus 500 kcal per day of dietary intake.\nStrong focus on use of self-monitoring tools with feedback.\nInitial emphasis on fat intake and calories.\nPrimary focus on healthy food choices.\nUse of inexpensive food samples and incentives.\nUse of problem-solving techniques to address barriers to healthy eating and physical activity.\nGroup approach.\n\nSpecific Adaptation to GSLiM\nGLB\nModified GLB (GSLiM)\nGoal: 7% weight loss and increase physical activity to 150 minutes/week.\nGoal: 6% weight loss of baseline body weight and physical activity 150 per week.\n12 weekly 1-hour sessions delivered over 12–15 weeks\n0, 3 and 6 months by seminar sessions.\n5 once in 2 weeks session over 12 weeks\n2 monthly session over subsequent 12 weeks\n\nPedometer introduced during core sessions\nLogging for physical activity in log book\nTrainers: Dietician and exercise specialist\nTrainers: Dietician and exercise specialist and psychologist\nTotal sessions 22 given in spread of 48 weeks\nCompressed session to 10 given in 24 weeks\n10.1371/journal.pone.0160343.t002Table 2BODY.TOPIC COMPARISON BETWEEN GSLIM AND GLB-DPP.:\nGLB-DPP topic arrangementGSLiM topic arrangementsCore sessions: Weekly for 12 weeks\nCore sessions: Bi- weekly (2 per month) for 12 weeks\n1. Welcome to GLB.\n1. Welcome to GSLiM—Seminar overview of topics. (GLB-DPP 1)Physical activity component (overview of topics and specifics on)\nStretching: The truth about flexibility (GLB-DPP session 21)\nHeart health (GLB-DPP session 20)\nDiet component: overview of topics, Tip the calorie balance (GLB-DPP 5)\nPsychological—overview\n\n2. Be a fat and Calorie Detective\n2. GSLiM Session 22.1 Fat, calorie and you (GLB-DPP session 2)\n2.2 Healthy eating (GLB-DPP session 3)\n\n3. Healthy eating\n3. GSLiM Session 33.1 Move those muscles (GLB-DPP session 4)\n3.2 Monitoring your activities (GLB-DPP 10)\n\n4. Move those muscles\n4. GSliM Session 44.1Negative thoughts and Weight (GLB-DPP 9 and 14)\n4.2 Balance your thoughts (GLB-DPP 15)\n\n5. Tip the calorie balance\n5. GSLiM Session 5\n5.1 Behavior, Environment and Lifestyle Change (GLB-DPP 6 and 17)\n5.2 Make Social Cues Work for You (GLB-DPP 11)\n\n6. Take charge of whats around you\n6. GSLiM Session 6• 6.0 Strengthen Your Exercise Programme (GLB-DPP 16)\n7. Problem solving\n\n8. Four keys to healthy Eating out\n\n9. Slippery slope of Lifestyle change\n\n10. Jump start Your Activity Plan\n\n11. Make Social Cues Work for You\n\n12. Ways to Stay Motivated\n\nTransition session (fade frequency)\nTransition session (fade frequency)\n13. Prepare for Long Term Self Management\n7. GSLiM Session 7- One day seminar.\n\na. Stress management (GLB-DPP 18).\n\nb. Problem solving (GLB-DPP 7).\n\nc. Healthy eating out (GLB-DPP 8).\n\nd. More volume fewer calories (GLB-DPP 14).\n\ne. 
Group physical activity sessions.\n14. More Volume; fewer calories\n8. GSLiM Session 8.• 8.0 Standing Up for Your Health (GLB-DPP 19)\n15. Balance your thoughts\n9. GSLiM Session 9.• 9.0 Stay motivated (GLB-DPP 12)\n16. Strengthen Your Exercise Programme\n10. GSLiM session 10 (final).Looking back and looking forward (GLB-DPP 22)—Sharing achievements.\nPrepare long term self-management (GLB-DPP 13)\n\nSupport sessions (variable sequence)\nFollow up at 36 weeks\n17. Mindful eating\n\n18. Stress and Time Management\n\n19. Standing Up for Your Health\n\n20. Heart Health\n\n21. Stretching: The truth about flexibility\n\n22. Looking back and looking forwards\n\nTable 3.\n10.1371/journal.pone.0160343.t003Table 3BODY.SUMMARY OF PROGRAMME STRUCTURE, SESSIONS AND CONTENTS.:\nSessionApproachTitle and Contents1\nSeminar\nIntroduction to the programme\n\n\nForeword on the Group Support Lifestyle Modification (GSLiM) programme configuration and sessions. Highlight energy balance and the concept of self-monitoring. Develop allegiance to the programme.\n\n\nSet up goals for: 6% weight loss, and physical activity of 150 minutes per week. Daily calorie requirement (DCR) minus 500 for dietary intake to be worked out through the 2 step Harris Benedict equation. Foreword on components: psychological, diet and nutrition and physical activity.\n\n\nDescription of main food components, sources of fat, food pyramid and food plate theory. General outlook on physical activity and exercise. Introduction to cognitive and behaviour aspects related to lifestyle alteration. Getting-to-know- you session lasting half a day to help participants get acquainted with each other.\n2\nGroup session\nCalorie measurement and healthy food options.\n\n\nEducation and hands on session for self-monitoring through the measurement of food and calories by participants utilising scales and measurement tools. Reading food labels and using the food plate theory for practice. Sessions with active facilitator participation.\n3\nGroup session\nMoving Your Muscles.\n\n\nInformation on effective exercise methods (brisk walking). Raise awareness on the safety aspects of physical activity and exercise. Highlight the enhancement of general health through physical activity and exercise. Assess individual's present level of physical activity and exercise. Rise in activity levels to be spread out over a period of time. Set weekly goals for physical activity. Commence self-monitoring for physical activity and exercise. Group activity for aerobic session.\n4\nGroup session\nNegative thoughts.\n\n\nUncover negative thoughts. Increase knowledge on variety of negative thoughts, types of thought disorder and its relationship to behaviour. Suggest ways to offset negative thoughts. Highlight the significance of appreciating oneself for personal accomplishments. Review negative thoughts and set up action plan to counter them.\n5\nGroup session\nRelationship between behaviour, environment and healthy lifestyle change.\n\n\nDefine social and physical environment and relate its influence on physical activity and eating behaviour. Promote positive response to environmental and social drawbacks. Highlight the need for a healthy diet and physical activity /exercise. Emphasize on acting as role models for group members in order to promote physical activity and positive eating. 
Appraisal of present behaviour and setting up of action plan to achieve positive behaviour change\n6\nGroup session\nConsolidating Exercise Programme\n\n\nInitiate aerobic fitness sessions and define frequency, intensity, time and type of activity (F.I.T.T). Educate participants on the engagement of self-monitoring to set up an effective exercise schedule through the calculation of heart rate and rate of perceived exertion (RPE).\n\n\nDiscuss schedule and include a variety of activities and exercises to develop flexibility, strength, cardiovascular health and endurance.\n7\nSeminar\nConsolidate dietary, fitness, thought and behaviour alteration.\n\n\nReport on the progress of participants. Reassess weight loss goals, diet and physical activity based on the most recent measurements. Behaviour: Initiate stress management procedure (abdominal breathing) and problem solving techniques (I.D.E.A.L approach) to deal with issues that crop up due to alterations in diet and daily activities. Dialogue on problems encountered by participants. Encourage healthy eating through the promotion of high volume and low calorie foods.\n\n\nDietary: Educate on the choice of healthy food when eating out. Explore by group members of ways to be more active on busy work days.\n\n\nFitness: Suggest activities for work related sitting time. Stress on the need for intervals between extended periods of sitting especially at work. Discuss options on exercise location during unfavourable weather conditions. Teach ways to improve cardiovascular health through resistance and strength exercises. Set up a physical activity schedule for the following month.\n8\nGroup session\nChallenges in Lifestyle change.\n\n\nDiscourse among group members on problems encountered during attempts to change their lifestyles focussing on eating habits and physical activities. Exchange of information and problem-solving suggestions among participants. Re-assess obstacles faced and define a strategy to overcome these barriers to change.\n9\nGroup session\nMaintaining motivation.\n\n\nConcentrate on means to be constantly motivated for lifestyle change. Stress on awareness of internal and external sources of motivation. Emphasise on appreciating minor accomplishments related to lifestyle change. Initiate discussion among group members on strategy to maintain motivation.\n10\nSeminar\nExperience sharing.\n\n\nDisclosure of participants’ accomplishments during the course of the programme (6 months). Initiate panel discussion involving team and group members on the achievements of participants during the programme. Encourage participants to share their six-month experience on the road to better health through weight reduction. Participants were also encourage to continue healthy lifestyle.\nA starter kit containing handouts, examples of food energy intake and activity-based energy expenditure in the form of wheel chart was provided to participants during the first session. Handouts were also given to participants after subsequent sessions. Self-monitoring activities included weighing-in, submission of diet records and physical activity log was also conducted for each session. Feedback was provided to motivate the participants on each session.\n\n\nBODY.COMPARISON:\nParticipants in the comparison group underwent individual counseling with a dietician once every 12 weeks. Each dietary counseling session lasted for an hour. 
Recommendations on reduced calorie intake for weight loss, education on food pyramid, self-monitoring of calorie intake, lowering high-fat food and increase of fruits and vegetable intake were given. Participants were advised to increase their physical activity levels and exercise at least three times a week, but no practical session provided.\n\nBODY.MEASUREMENTS:\nBODY.SOCIODEMOGRAPHIC AND LIFESTYLE HISTORY:\nSociodemographic characteristics included age, gender, ethnicity, education, household income and marital status; family history of diseases such as hypertension, diabetes and hyperlipidemia; and socially related lifestyle behavior such as smoking and alcohol consumption assessed using self-administered questionnaire. Participants were required to fill a three-day dietary record using a log book. They were taught to estimate food intake with standard day-to-day food utensils. Food items according to meals were entered. A food list with energy (kcal) values was given to assist participants’ in self-monitoring for healthy food choices. Physical activity was measured using the International Physical Activity Questionnaire short form (IPAQ-SF). The IPAQ-SF comprised of seven items reported in terms of METS-min per week which can be summarised into three main categories (low, moderate and high). It is a valid measurement of physical activity and has moderate to high reliability [33].\n\nBODY.ANTHROPOMETRY, BIOCHEMICAL AND CLINICAL MEASURE:\nWeight was measured using SECA digital scale (Model HD 309) and height was measured using SECA body meter (Model 208) to the precision of 0.1 kg and 0.1 cm, respectively. Participants wore light clothing, and stood on scale without shoes with leg apart within the scale footprint. Waist and hip circumference were measured using non elastic SECA measuring tape to the nearest 0.1 cm following the WHO guideline [34]. Digital OMRON Model HEM 907 was used to measure blood pressure based on the Malaysian Clinical Practice Guidelines for management of Hypertension [35]. Fasting blood samples (after 8 hours) collected by trained staff and sent within four hours of collection in an icebox to the laboratory. Measurements were conducted by trained research assistants at baseline, 12, 24 and 36 weeks. Height measured only once at baseline. BMI was calculated as weight in kilograms divided by height in meters squared.\n\nBODY.PSYCHOLOGICAL MEASURES AND QUALITY OF LIFE:\nPsychological measures and quality of life were measured using self-administered questionnaires at baseline, 24 and 36 weeks. Weight efficacy lifestyle (WEL) questionnaire was used to measure eating self-efficacy in weight management with 20 items in five subscales. The subscales include food control in negative emotions, availability of food, social pressure, physical discomfort and physical activity with internal consistency ranged from 0.7 to 0.9 [36]. The Malay version of WEL questionnaire had internal consistency (Cronbach’s alpha) of 0.47 to 0.86 [37].\nSocial support was measured using the Multidimensional Perceived Social Support (MDPSS) which comprised of 12 items and three subscales. Internal reliability achieved was 0.82 for perceived social support from family, 0.88 for friends and 0.94 for significant others [38]. The Malay version of MDPSS has internal consistency of 0.89 with reliability coefficients for family, friends and significant others subscales ranging from 0.67 to 0.83 [39].\nAutomatic Thought Questionnaire (ATQ) used to assess negative thoughts [40]. 
Due to poor construct, the Malay version was reduced to 17 items and two subscales from 30 items [41]. Each subscale i.e. negative self concept (NSE) and personal maladjustment (PMA) had internal consistency (Cronbach alpha) of 0.91 and 0.83 respectively. Method of scoring of the Malay ATQ remained the same as the original 30 item with a cut-off score of 30 for depression.\nQuality of life was measured using the WHO Quality of Life short form questionnaire (WHOQoL-BREF), a shorter version of WHO Quality of Life (WHOQoL) questionnaire with the objective to measure a global perspective of quality of life across cultures [42]. The WHOQoL-BREF comprises of 26 questions in four domains i.e. physical activity, psychological, social support and environment. Validation of the translated Malay version revealed high reliability with internal consistency of 0.64–0.80, and test—retest-reliability of 0.49–0.88 [43,44].\n\n\nBODY.SAMPLE SIZE:\nSample size was calculated using OpenEpi software. Based on Renjilian et al.’s findings (17), 49% of group therapy participants achieved ten percent weight loss compared to 29% in individual counseling. A total of 286 participants was required after setting the study power at 80%, level of significance at p<0.05 and attrition rate of 25%. Due to low response rate, high number of excluded respondents and time constraint for further recruitment, the sample size was not achieved. Power calculation revealed the sample size of 194 achieved 88.8% power for targeted weight loss.\n\n\nBODY.STATISTICAL ANALYSIS:\nIBM SPSS Statistics, Version 20 for Windows (SPSS Inc, Chicago IL) used for data analysis. The dietary data was cross-checked and entered by a trained dietician into Nutritionist Pro where the Malaysian nutrient database was used. Between group mean differences at baseline for continuous variables were analyzed using t-tests and χ2 tests for categorical data. Mean differences between groups over time used analysis of variance for repeated measures. All analysis was carried out using the intention to treat principles. Missing data was imputed using the last observation carried forward. Continuous data at baseline was described as means± standard deviation. Repeated measure data for within group measures was described as mean ±standard error. Confidence interval (CI) was reported where appropriate. Statistical significance was preset at 0.05. Relative risk reported for proportion achieved for weight loss, while Cohen’s d used for the within and between group repeated measure. Adjustment conducted for measures where the baseline values showed significant between-group differences to control for bias i.e. carbohydrate percentage and multidimensional perceived of social support.\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nA total of 194 participants recruited and randomised. The participants’ mean age (standard deviation) was 40.5 (9.3) years. Majority of participants were women (72.7%) and non-academician (92.3%). More than half (64%) had at least secondary education. A total of 81 (83.5%) GSLiM participants and 80 (82.5%) of dietary counseling participants completed the study at 24 weeks (Fig 1). Participants from the GSLiM group had higher total MDPSS score, family support and friend subscales score as well as carbohydrate percentage intake. 
No other significant difference between groups was observed at baseline (Tables 4 and 5).\n10.1371/journal.pone.0160343.t004Table 4BODY.BASELINE MEASURES.:\nGSLiM, mean (sd), n = 97Diet counselling, mean (sd), n = 97p valueAge, years\n39.7 (9.2)\n40.4 (9.5)\n0.617\nGender\n\n\n0.629\nFemale, n (%)\n69 (71.1)\n72 (74.2)\n\nMale, n (%)\n28 (28.9)\n25 (25.8)\n\nEthnicity\n\n\n0.296\nMalay, n (%)\n87 (89.7)\n91 (93.8)\n\nNon Malays, n (%)\n10 (10.3)\n6 (6.2)\n\nEducation, tertiary, n (%)\n39 (40.2)\n29 (29.9)\n0.132\nHousehold income (RM)\n2680.34 (1511)\n2504.19 (1641.20)\n0.438\nMarried, n. (%)\n76 (78.4)\n74 (76.3)\n0.732\nOccupation, academician, n (%)\n9 (9.3)\n6 (6.2)\n0.420\nFamily history of:\n\n\n\nHypertensive, n (%)\n11 (11.3)\n13 (13.4)\n0.663\nDiabetes, n (%)\n3 (3.1)\n2 (2.1)\n0.683\nHyperlipidaemia, n (%)\n8 (8.2)\n13 (13.4)\n0.248\nBehaviour:\n\n\n\nNon-alcohol intake,\n96 (99)\n97 (100)\n0.316\nSmoker, n (%)\n9 (9.3)\n5 (5.2)\n0.267\nHeight, m\n1.59 (0.08)\n1.58 (0.09)\n0.863\nWeight, kg\n82.3 (16.3)\n82.0 (13.5)\n0.911\nBMI, kg/m2\n32.4 (4.8)\n32.4 (3.8)\n0.980\nWaist circumference, cm\n95.1 (11.2)\n94.9 (9.5)\n0.872\nHip circumference, cm\n112.2 (10.4)\n111.1 (8.2)\n0.429\nSystolic blood pressure, (mmHg)\n128.6 (16.1)\n129.1 (14.4)\n0.712\nDiastolic blood pressure, (mmHg)\n83.0 (12.2)\n83.4 (10.7)\n0.774\nTriglyceride, (mmol/l)\n1.41 (0.62)\n1.56 (0.82)\n0.182\nTotal Cholesterol, (mmol/l)\n5.2 (0.9)\n5.3 (1.1)\n0.913\nHDL Cholesterol, (mmol/l)\n1.23 (0.29)\n1.25 (0.31)\n0.592\nLDL Cholesterol, (mmol/l)\n3.45 (0.82)\n3.35 (0.85)\n0.426\nIPAQ SF total METmin-1 per week,\n1071 (3.24)\n1318.3 (3.06)\n0.238\nWeight efficacy lifestyle\n123.0 (25.6)\n121.6 (23.4)\n0.697\nNegative emotions\n25.6 (6.5)\n25.1 (6.3)\n0.560\nAvailability\n21.6 (6.6)\n21.4 (6.1)\n0.901\nSocial Pressure\n22.3 (6.1)\n22.1 (5.7)\n0.771\nPhysical Discomfort\n27.7 (5.3)\n26.9 (5.6)\n0.315\nPositive activities\n25.8 (5.7)\n26.1 (5.2)\n0.693\nMultidimensional social support a\n66.3 (9.9)\n62.8(8.7)\n0.010b\nFamily Support\n22.8 (3.7)\n21.7 (3.3)\n0.029c\nFriend Support\n20.5 (3.9)\n19.3 (3.4)\n0.021c\nSignificant Others\n22.9 (4.1)\n21.8 (4.5)\n0.061\nTotal automatic thoughts questionnaire score\n26.5 (7.4)\n26.7 (8.0)\n0.867\nNegative self-concept\n16.1 (4.6)\n16.2 (4.6)\n0.853\nPersonal mal-adjustment\n9.1 (2.8)\n9.2 (3.5)\n0.822\nWHOQoL BREF\n\n\n\nPhysical Health\n12.9 (1.6)\n13.1(1.4)\n0.373\nPsychological\n13.1 (1.7)\n13.1 (1.7)\n0.942\nSocial Relationship\n14.8 (2.7)\n14.6 (2.2)\n0.681\nEnvironment\n14.1 (1.8)\n14.0 (1.9)\n0.722\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\n10.1371/journal.pone.0160343.t005Table 5BODY.BASELINE DIETARY INTAKE.:\nGSLiM, mean (sd) N = 65Diet counselling, mean (sd), n = 56p valueDietary energy intake, mean kcal/day\n1421.2 (332.3)\n1351.3 (369.8)\n0.276\nProtein (%)\n15.8 (2.3)\n15.0 (2.8)\n0.109\nCarbohydrate (%)\n53.3 (6.5)\n55.8 (6.1)\n0.034a\nFat (%)\n30.8 (5.4)\n29.0 (5.1)\n0.064\nCholesterol (mg)\n168.0 (88.6)\n162.3 (107.2)\n0.748\nSaturated Fat (mg)\n8.1 (3.9)\n9.0 (4.5)\n0.201\nMono saturated fat (mg)\n7.3 (3.6)\n7.2 (3.0)\n0.772\nMeals frequency\n3.4 (0.6)\n3.6 (0.7)\n0.068\na Significant p<0.05.\n\nBODY.BETWEEN GROUP DIFFERENCES:\nAt week 24, nineteen participants (19.6%) achieved 6% targeted weight loss in the intervention compared to 4 (4.1%) in the comparison group, Risk Ratio (RR) of 4.75 (95% CI: 1.68, 13.45). 
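As a quick arithmetic check, the risk ratio just reported can be reproduced from the stated counts, 19 of 97 GSLiM participants versus 4 of 97 dietary counseling participants reaching the 6% target. The sketch below assumes those counts and the usual log-scale normal approximation for the confidence interval; it is a worked check, not the authors' actual computation.

```python
# Worked check of the reported risk ratio for achieving 6% weight loss at week 24,
# assuming the counts given in the text (19/97 vs 4/97) and a log-scale 95% CI.
import math

events_i, n_i = 19, 97   # GSLiM (intervention)
events_c, n_c = 4, 97    # dietary counseling (comparison)

rr = (events_i / n_i) / (events_c / n_c)
se_log_rr = math.sqrt(1/events_i - 1/n_i + 1/events_c - 1/n_c)  # SE of ln(RR)
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {low:.2f} to {high:.2f}")  # about 4.75 (1.68, 13.45)
```

The output matches the published figures, which suggests the interval was computed on the log scale in the standard way.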
Out of 23 participants achieved weight loss, 10 (57.5%) intervention participants and 3 (75%) from the comparison group retained the targeted weight loss (6%) at 36th week. However, when measured in terms of weight in kg, no significant difference observed for the mean difference in weight loss between groups (-0.82 ± 2.12 kg), Cohen’s d 0.001.\nTable 6 summarises obesity and cardiovascular measures from baseline to 12th week, 13th to 24th week, baseline to 24th week and from 25th week to 36th week. No other significant mean difference between groups observed at any of the time intervals. Physical activity, psychological and quality of life scores measure from baseline to 24th week and from 25th week to 36th week are shown in Table 7.\n10.1371/journal.pone.0160343.t006Table 6BODY.OBESITY AND CARDIOVASCULAR OUTCOMES BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin GroupBetween groupsGSLiM mean change (s.e.) N = 97Comparison mean change (s.e.) N = 97Mean difference (95% CI)Cohen’s dp value0 to 12 weeks\n(intensive phase)\n\n\n\n\nWeight (kg)\n-1.52 (0.36)a\n-0.09(0.19)\n- 0.47 (4.69, 3.74)\n0.000\n0.825\nBMI (kg/m2)\n- 0.60 (0.14)a\n-0.31(0.08)\n- 0.30 (-1.52,0.92)\n0.001\n0.627\nWaist circumference (cm)\n- 1.67 (0.48)a\n0.54 (0.39)\n- 0.87 (-3.79, 2.06)\n0.002\n0.561\nHip circumference (cm)\n-0.74(0.39)\n-0.16 (0.27)\n0.77 (-1.81, 3.35)\n0.002\n0.557\nSBP (mmHg)\n0.24 (1.19)\n-2.26 (1.01)\n0.43 (-3.59, 4.46)\n0.000\n0.832\nDBP (mmHg)\n-0.25(0.96)\n-1.38 (0.72)\n0.09 (-2.91, 3.09)\n0.000\n0.951\nFasting serum triglyceride (mmHg)\n0.02 (0.05)\n0.009 (0.047)\n-0.14 (-0.33, 0.07)\n0.009\n0.184\nFasting serum cholesterol (mmHg)\n-0.04 (0.06)\n-0.04 (0.05)\n-0.01 (-0.26, 0.23)\n0.000\n0.913\nHDL (mmHg)\n-0.031 (0.017)\n0.018 (0.016)\n-0.05 (-0.13, 0.04)\n0.007\n0.256\nLDL (mmHg)\n-0.058 (0.055)\n-0.049 (0.057)\n0.09 (-0.12, 0.31)\n0.004\n0.401\nFasting blood glucose (mmHg)\n-0.003 (0.086)\n0.121 (0.087)\n-0.11 (-0.39, 0.17)\n0.003\n0.442\n13 to 24 weeks\n(fade frequency)\n\n\n\n\nWeight (kg)\n-0.92 (0.26)b\n-0.61(0.22)b\n-1.35 (-5.54, 2.85)\n0.002\n0.527\nBMI (kg/m2)\n-0.36 (0.10)b\n-0.24 (0.08)\n-0.64 (-1.87, 0.58)\n0.006\n0.300\nWaist circumference (cm)\n0.16 (0.44)\n0.18 (0.37)\n-1.79 (-4.73, 1.14)\n0.008\n0.229\nHip circumference (cm)\n-0.44 (0.29)\n-0.27 (0.23)\n0.39 (-2.21, 3.00)\n0.000\n0.763\nSBP (mmHg)\n0.17 (1.08)\n0.27 (0.90)\n1.63 (-2.37, 5.65)\n0.003\n0.424\nDBP (mmHg)\n-1.07 (0.82)\n1.27 (0.85)\n-0.51 (-3.51, 2.49)\n0.001\n0.737\nFasting serum triglyceride (mmHg)\n-0.05 (0.05)\n-0.047 (0.042)\n0.01 (-0.24, 0.37)\n0.007\n0.911\nFasting serum cholesterol (mmHg)\n0.08 (0.05)\n-0.02 (0.04)\n-0.13 (-0.35, 0.09)\n0.000\n0.239\nHDL (mmHg)\n0.038 (0.018)\n0(0.013)\n-0.05 (-0.14, 0.03)\n0.008\n0.215\nLDL (mmHg)\n0.033(0.049)\n0.039(0.041)\n0.08 (-0.4, 0.302)\n0.003\n0.453\nFasting blood glucose (mmHg)\n-0.088(0.046)\n-0.070(0.117)\n-0.21 (-0.54, 0.12)\n0.008\n0.207\n0 to 24 weeks\n\n\n\n\n\nWeight (kg)\n-2.42 (0.49)a\n-0.69 (0.27)c\n-0.82 (-5.00, 3.37)\n0.001\n0.701\nBMI (kg/m2)\n-0.96 (0.19)a\n-0.27 (0.11)b\n-0.44 (-1.65, 0.78)\n0.003\n0.480\nWaist circumference (cm)\n-1.51(0.52)a\n-0.31 (0.12)\n-1.12 (-4.00, 1.77)\n0.003\n0.445\nHip circumference (cm)\n-1.18 (0.45)c\n-0.43 (0.34)\n0.62 (-1.96, 3.19)\n0.000\n0.850\nSBP (mmHg)\n0.41 (1.23)\n-1.99 (1.14)\n0.82 (-3.09, 4.72)\n0.000\n0.680\nDBP (mmHg)\n-1.32 (0.91)\n-0.11 (0.99)\n-0.49 (-3.4.1, 2.41)\n0.003\n0.736\nFasting serum triglyceride (mmHg)\n-0.03 (0.06)\n-0.04 (0.05)\n-0.13 (-0.34, 0.07)\n0.009\n0.199\nFasting serum cholesterol 
(mmHg)\n0.04 (0.07)\n-0.02 (0.06)\n0.005(-0.24, 0.25)\n0.000\n0.970\nHDL (mmHg)\n0.01 (0.02)\n0.02 (0.02)\n-0.04 (-0.13, 0.04)\n0.002\n0.302\nLDL (mmHg)\n-0.03 (0.07)\n-0.01 (0.06)\n0.09 (-0.13, 0.30)\n0.003\n0.418\nFasting blood glucose (mmHg)\n-0.09 (0.08)\n0.11 (0.09)\n-0.16 (-0.45, 0.13)\n0.005\n0.299\n25–36 weeks (follow-up)\n\n\n\n\n\nWeight (kg)\n0.19 (0.35)\n-0.11 (0.16)\n-1.42 (-5.65, 2.82)\n0.002\n0.510\nBMI (kg/m2)\n0.02(0.07)\n-0.04 (0.64)\n-0.67 (-1.92, 0.57)\n0.006\n0.286\nWaist circumference (cm)\n0.19 (0.35)\n-0.19 (0.37)\n-1.44 (-4.39, 1.52)\n0.005\n0.338\nHip circumference (cm)\n-0.45 (0.31)\n-0.33 (0.21)\n0.26 (-2.39, 2.91)\n0.001\n0.636\nSBP (mmHg)\n-1.70 (1.27)\n0.46 (0.90)\n0.51 (-3.57, 4.48)\n0.000\n0.802\nDBP (mmHg)\n-0.72(0.97)\n-1.38(0.78)\n-1.03 (-4.08, 2.03)\n0.002\n0.509\nFasting serum triglyceride (mmHg)\n0.008(0.04)\n-0.01(0.03)\n0.008 (-0.25, 0.26)\n0.007\n0.264\nFasting serum cholesterol (mmHg)\n0.01(0.06)\n0.07(0.04)\n-0.13 (-0.36, 0.09)\n0.000\n0.953\nHDL (mmHg)\n0.012 (0.015)\n0.003 (0.018)\n-0.02 (-0.07, 0.11)\n0.003\n0.489\nLDL (mmHg)\n-0.033 (0.063)\n0.067 (0.044)\n0.03 (-0.19, 0.26)\n0.001\n0.715\nFasting blood glucose (mmHg)\n-0.008 (0.048)\n-0.065 (0.039)\n-0.22 (-0.58, 0.14)\n0.006\n0.267\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n10.1371/journal.pone.0160343.t007Table 7BODY.PHYSICAL ACTIVITY, PSYCHOLOGICAL AND QUALITY OF LIFE SCORE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin groupBetween groupsIntervention group mean change (s.e.)(n = 97)Control group mean change (s.e.)(n = 97)Mean Difference at intervals (95% CI)Cohen dP0 to 24 weeks\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n-18.9 (236.5)\n90.3(98.2)\n-244.3(-790.1, 301.5)\n0.004\n0.378\nVigorous, mets min-1\n-75.5 (149.7)\n58.6 (63.9)\n-10.5(-296.4, 275.5)\n0.000\n0.943\nModerate, mets min-1\n-6.8 (95.9)\n50.9 (59.5)\n-238.6(-509.3, 32.2)\n0.015\n0.084\nWalking, mets min-1\n63.3 (79.6)\n-19.3(43.8)\n4.7(-212.0,221.5)\n0.000\n0.966\nTotal WEL score\n10.3 (2.46)a\n1.78(1.88)\n5.62 (-0.10, 11.35)\n0.019\n0.054\nNegative emotions, mean (sd)\n2.02 (0.57)b\n0.07 (0.47)\n1.51 (0.004, 3.02)c\n0.020\n0.049\nAvailability, mean (sd)\n2.63 (0.62)a\n0.93 (0.49)\n0.96 (-0.58, 2.50)\n0.008\n0.218\nSocial Pressure, mean (sd)\n1.71 (0.60)c\n0.52 (0.43)\n0.85 (-0.61, 2.29)\n0.007\n0.253\nPhysical Discomfort, mean (sd)\n1.41 (0.48)c\n0.36 (0.44)\n1.31 (0.05, 2.56)c\n0.022\n0.041\nPositive activities, mean (sd)\n2.52(0.54)a\n-0.09 (0.40)\n0.99 (-0.27, 2.26)\n0.012\n0.121\n#\nTotal MDPSS mean score\n1.46 (0.65)\n1.09(0.63)\n\n0.012\n0.127\n# Friend Support, mean (sd)\n0.86(0.25)b\n0.44 (0.22)\n0.41(0.08, 0.75)a\n0.030\n0.015\n#Family Support, mean (sd)\n0.10 (0.27)\n0.19 (0.25)\n0.09 (- 0.25, 0.44)\n0.002\n0.578\n#Significant Others, Mean (sd)\n0.50 (0.30)\n0.45 (0.32)\n0.19(-0.21, 0.59)\n0.005\n0.347\nTotal ATQ mean score\n-0.84 (0.68)\n-0.40 (0.43)\n-0.40 (-2.29, 1.49)\n0.001\n0.675\nNegative self-concept mean score\n-0.39 (0.45)\n-0.26 (0.25)\n-0.19 (-1.35, 0.97)\n0.001\n0.745\nPersonal maladjustment mean score\n-0.36 (0.26)\n-0.19 (0.23)\n-0.19(-0.94, 0.57)\n0.001\n0.630\nWHOQOL BREF\n\n\n\n\n\nPhysical 
health\n2.35(0.19)a\n1.82(0.19)\na\n0.07(-0.35, 0.48)\n0.001\n0.743\nPsychological\n1.62(0.15)\na\n1.52(0.16)\na\n0.06(-0.41, 0.52)\n0.000\n0.784\nSocial relationship\n0.63(0.22)c\n0.12 (0.17)\n0.39(-0.22, 1.01)\n0.008\n0.210\nEnvironment\n0.52(0.14)b\n0.25(0.11)b\n0.18(-0.31, 0.67)\n0.003\n0.471\n25 to 36 weeks (follow up)\n\n\n\n\n\nTotal IPAQ SF Score, mets min-1\n(sd)*\n1.1(143.6)\n303.1(138.4)\n-449.9 (-1015.1,115.3)\n0.013\n0.118\nVigorous, mets min-1\n-63.1 (73.2)\n59.4 (79.6)\n-138.8 (-437.5, 159.9)\n0.004\n0.361\nModerate, mets min-1\n62.5 (61.5)\n102.1 (82.0)\n-287.2 (-563.2, -11.2)c\n0.021\n0.041\nWalking, mets min-1\n1.7 (73.4)\n141.6 (63.6)\n- 23.9 (-240.8, 192.9)\n0.000\n0.828\nTotal WEL score\n-1.77 (1.45)\n0.45 (1.19)\n8.76 (3.05, 17.47)b\n0.045\n0.003\nNegative emotions, mean (sd)\n-0.42 (0.39)\n0.06 (0.27)\n2.24 (0.81, 3.67)b\n0.305\n0.002\nAvailability, mean (sd)\n-0.09 (0.35)\n0.19 (0.28)\n1.68 (0.10, 3.25)c\n0.022\n0.037\nSocial Pressure, mean (sd)\n-0.34 (0.34)\n0.09 (0.26)\n1.23 (-0.29, 2.75)\n0.013\n0.114\nPhysical Discomfort, mean (sd)\n-0.40 (0.31)\n0.31 (0.25)\n1.48 (0.28, 2.68)c\n0.030\n0.016\nPositive activities, mean (sd)\n-0.52 (0.36)\n-0.79 (0.33)\n2.14 (0.93, 3.35)a\n0.060\n0.001\n#Total MDPSS mean score\n0.35(0.62)\n0.46(0.41)\n1.46 (-0.26, 3.17)\n0.015\n0.095\n# Friend Support, mean (sd)\n-0.10(0.19)\n0.11(0.14)\n0.80 (0.15, 1.45)\nc\n0.030\n0.017\n#Family Support, mean (sd)\n0.27(0.22)\n0.16(0.18)\n0.29 (-0.35, 0.95)\n0.004\n0.364\n#Significant Others, Mean (sd)\n0.09(0.31)\n0.19(0.18)\n0.38 (-0.38, 1.13)\n0.005\n0.325\nTotal ATQ mean score\n-0.19(0.47)\n-0.73(0.21)b\n-0.35 (-2.17, 1.48)\n0.001\n0.709\nNegative self-concept mean score\n-0.10(0.30)\n-0.47(0.37)b\n-0.07 (-1.22, 1.07)\n0.000\n0.901\nPersonal maladjustment mean score\n-0.08(0.19)\n-0.19 (0.09)\n-0.21 (-0.94, 0.52)\n0.002\n0.568\nWHOQOL BREF\n\n\n\n\n\nPhysical health\n-0.92 (0.18)a\n-0.53(0.19)c\n0.13(-0.37, 0.64)\n0.001\n0.609\nPsychological\n-0.29 (0.19)\n-0.12 (0.20)\n0.03(-0.49, 0.54)\n0.000\n0.923\nSocial relationship\n-0.03 (0.17)\n0.14 (0.11)\n0.56(-0.08, 1.19)\n0.015\n0.084\nEnvironment\n-0.06 (0.11)\n0.06 (0.09)\n0.21(-0.28, 0.69)\n0.004\n0.405\nCI, confidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05.\nAll participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# MDPSS, adjusted for baseline MDPSS.\n* median values instead of mean for physical activity.\nThe intervention group achieved higher score for negative emotions and physical discomfort of WEL subscales compared to the comparison group from baseline to 24th week. The effects maintained during follow up from 25th to 36th week. Intervention participants achieved higher friend support mean score from baseline to 24th week, even after adjusting for baseline MDPSS score. During the follow up period, except for social pressure, total WEL and other WEL subscales mean score increased significantly compared to the comparison group. However, moderate physical activity in METSmin-1 per week reduced significantly in the intervention group compared to the comparison group during 25th to 36th week. 
The intervention group experienced lower intake in carbohydrate percentage with higher percentage increase in fat intake over baseline to 12th week and 13th to 24th week. No other dietary change was observed (Table 8).\n10.1371/journal.pone.0160343.t008Table 8BODY.DIETARY ENERGY INTAKE BETWEEN AND WITHIN GROUPS OVER TIME.:\nWithin group changeBetween groupsIntervention mean (s.e)Control mean (s.e)Mean difference at intervalsCohen dPN = 65N = 56(95% CI)0–12 weeks\n\n\n\n\n\nDietary intake, kcal\n0.41 (38.2)\n45.70 (22.76)\n47.25 (-79.8, 174.3)\n0.005\n0.463\nProtein (%)\n-0.05 (0.29)\n-0.19 (0.20)\n0.88 (-0.01, 1.77)\n0.09\n0.053\nCarbohydrate (%)#\n-4.12 (0.82)a\n-1.58 (0.67)\n-1.78 (-2.86, -0.69)b\n0.08\n0.002\nFat (%)\n3.54 (0.93)b\n1.77 (0.66)\n2.69 (0.91, 4.47)b\n0.07\n0.003\n13–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.06 (12.96)\n15.78 (35.53)\n19.24 (-113.3, 151.8)\n0.001\n0.744\nProtein (%)\n0.08 (0.16)\n0.22 (0.24)\n0.93 (-0.02, 1.87)\n0.03\n0.055\nCarbohydrate (%)#\n-0.19 (0.40)\n-1.91 (0.51)b\n-2.76 (-4.87, -0.64)c\n0.05\n0.011\nFat (%)\n0.11 (0.35)\n2.02 (0.53)b\n2.62 (0.36, 4.88)c\n0.04\n0.023\n25–36 weeks\n\n\n\n\n\nDietary intake, kcal\n-6.52 (9.08)\n39.15 (23.7)\n-8.96 (-139.7, 121.8)\n0.00\n0.892\nProtein (%)\n0.12 (0.09)\n0.44 (0.19)\n0.69 (-0.30, 1.69)\n0.02\n0.171\nCarbohydrate (%)#\n-1.18 (0.13)\n-1.48 (0.62)\n-1.33(-3.61, 0.95)\n0.11\n0.250\nFat (%)\n-0.07 (0.11)\n1.05 (0.55)\n1.17 (-1.16, 3.51)\n0.01\n0.322\n0–24 weeks\n\n\n\n\n\nDietary intake, kcal\n5.46 (38.28)\n61.48 (36.17)\n36.12 (-86.9, 159.19)\n0.003\n0.562\nProtein (%)\n0.13 (0.30)\n0.03 (0.20)\n0.81 (-0.08, 1.68)\n0.001\n0.444\nCarbohydrate (%)#\n-4.31(0.88)a\n-3.49 (0.64)a\n-1.84(-3.25, -0.43)c\n0.053\n0.011\nFat (%)\n3.65 (0.96)b\n3.79 (0.82)\n1.74 (0.003, 3.46)\n0.000\n0.050\nConfidence interval; GSLiM, Group Support Lifestyle Modification.\na Significant p<0.001.\nb Significant p<0.01.\nc Significant p<0.05. All participants with initial weight measurement were included in this intention to treat analysis. From analysis of repeated measures treatment against time with time as the dependent variable. Effect size stated using Cohen’s for between group mean differences.\n# CHO percentage was adjusted for its baseline value for between group analyses in all interval.\n\nBODY.WITHIN INTERVENTION GROUP:\nWeight and BMI of the intervention group were reduced during the first 12 weeks (core phase), with smaller reduction in the next 12 weeks (fade frequency). During follow up, there was an insignificant increase in weight and BMI. Reduction in waist circumference occurred in the first 12 weeks but not sustained during fade frequency. An overall significant hip circumference reduction was observed within interval 0 to 24th week, although the reduction was not significant in the core phase or fade frequency.\nWEL and its subscales’ scores, friend support subscale score of MDPSS and WHOQOL-BREF domains score increased from 0 to 24th week. However, the improvement was not sustained during follow up, with significant reduction in WHOQOL-BREF physical health score observed during 25th to 36th week. The intervention group experienced reduction in carbohydrate percentage intake mainly during the core phase, resulting in significant reduction from baseline to 24th week. Concurrent fat percentage intake increased during the same time interval. 
There were no significant changes in cardiovascular risk or physical activity measures throughout the study.\n\nBODY.WITHIN COMPARISON GROUP:\nThe comparison group's weight decreased only during the 13th to 24th week, yet this contributed to the overall significant reduction over the treatment period from 0 to the 24th week. During the same 0 to 24th\nA significant decrease in carbohydrate and increase in fat percentage intake occurred from the 13th to the 24th week in the comparison group. As in the intervention group, the WHOQOL-BREF physical health score decreased in the follow-up phase. Apart from these changes, significant reductions were also observed in negative thoughts and in the negative self-concept subscale of the ATQ during the follow-up period (25th to 36th weeks).\n\nBODY.ADHERENCE TO INTERVENTION:\nBODY.ATTENDANCE:\nAttendance was assessed as part of the adherence measures in the intervention group. Of the total six sessions within the first 12 weeks, 33 (17.0%) participants in the intervention group attended more than four sessions, 34 (17.5%) attended three to four sessions, while 30 (15.5%) attended two sessions or fewer. During the fade-frequency phase, with four sessions, 20.6 percent of participants attended three to four sessions. None of the participants who attended fewer than three sessions achieved 6% weight loss, while 8.9% of participants who attended four to six sessions achieved the targeted weight loss. About half (44.8%) of the participants who attended more than seven sessions achieved the 6% targeted weight loss (p<0.001). Median attendance was six sessions, with an inter-quartile range of five sessions. There was a positive correlation between attendance and weight loss (r = 0.491, P<0.001).\n\nBODY.LOG BOOK SUBMISSION:\nThe overall log book submission was unsatisfactory for both groups. Only 46.4% (n = 45) of intervention participants submitted their log books at the 12th week and 16.5% (n = 16) at the 24th week. Only 23.7% (n = 23) of comparison group participants submitted their log books at 12 weeks and 19.6% (n = 19) at the 24th week. Those who did not submit their log books were of younger age (p = 0.03).\n\n\nBODY.ADVERSE EFFECT OF THE INTERVENTION:\nTwo male participants from the intervention group with BMI above 40 kg/m2 experienced mild soft tissue injuries during self-conducted exercise and underwent treatment. No other serious event was reported.\n\n\nBODY.DISCUSSION:\nThe intervention was more effective than the comparison (dietary counseling) group in achieving the targeted 6% weight loss, improving self-efficacy in dietary control, and achieving better friend support and quality of life. The psychological improvement in the intervention group was sustained post-intervention. Although the comparison group also experienced reductions in weight and BMI and improvements in quality of life and ATQ, there was no improvement in self-efficacy or social support.\nA total of 19% of GSLiM participants achieved 6% weight loss. For comparison purposes, we recalculated and found that 17.5% of GSLiM participants achieved 7% weight loss. Other GLB-based studies reported 23 to 37% achievement of targeted weight loss [14]. A recent GLB-based worksite trial reported that 29% of participants achieved the targeted 7% weight loss [26], while another study, which assessed the effectiveness of a coach-led GLB programme in a primary care setting, had 37% of participants achieving 7% weight loss at 15 months [45]. Our study had a lower proportion achieving the targeted weight loss due to fewer sessions in GSLiM compared to the original GLB-DPP. 
An earlier study reported that for every other lifestyle session attended, weight loss could increase by 0.26% (16).\nOur findings concurred with a recent review which showed that multicomponent behaviour interventions were more effective than single-component interventions [46]. The mean weight loss within the intervention group at the 24th week was -2.43 kg (95% CI: -3.75, -1.21, p = 0.001), comparable to another trial which reported a mean weight loss of -2.3 kg (95% CI: -2.92 to -1.72, p = 0.001) [47]. Most improvement occurred during the first 12 weeks in the intervention group, when sessions were more intensive. Earlier findings showed a positive association between intensity and magnitude of weight loss [48]. Yet lower-intensity programmes with similar effects are still being sought by other researchers [49], as intensive interventions are costly in terms of manpower and materials [50]. Apart from weight loss, significant improvements in the self-efficacy score and its subscales were seen among intervention participants compared to the comparison group, as similarly found by other studies [51–53]. The intervention group already had higher social support scores at baseline. However, after controlling for the baseline friend support score, a significant difference between groups was still observed for the mean friend support score during the 24 weeks. A similar finding was also reported by a recent cluster randomised trial in which healthy eating was found to be supported by friends [54].\nAn improved ATQ score would have been expected in the intervention group due to the cognitive elements incorporated in the programme; instead, the improvement in ATQ score occurred within the comparison group. The use of the ATQ to measure negative thoughts in obesity research remains limited in the presence of other tools used to assess dysfunctional eating behavior associated with obesity [55]. The average cut-off value for depressogenic negative thoughts using the 17-item Malay ATQ was 30.0 in the non-clinical population and 47.0 in the clinical population [41]. The mean ATQ scores for the intervention and comparison groups were 25.6 and 26.3 respectively, below the cut-off value; therefore, the significant mean difference found between the intervention and comparison groups was most likely due to chance. The ATQ was used in our study to measure depressogenic features amongst obese participants and to observe further change as treatment progressed. Moderate concurrent validity (r = 0.65) was found between the Malay ATQ and the Malay Beck Depression Inventory (BDI), while discriminant analysis achieved 89.3%.\nOther than improvements in weight, psychological measures and quality of life, there was no improvement in cardiovascular risks among our participants. Other studies found that a weight loss of at least 10% of baseline weight resulted in changes in cardiovascular risk parameters [56]. Several other reasons may explain the absence of a treatment effect on cardiovascular risks in this study. Our participants were in a younger age group and their clinical indicators were within the normal range; therefore, further improvement would be less likely. The focus of the intervention would also influence the result, as other studies targeting clinical indicators with health education and a dietary approach achieved improvements in cardiovascular risks [57,58]. 
Therefore, weight loss may not necessarily result in improvement of cardiovascular risk factors.\nParticipants also experienced a small reduction in carbohydrate percentage intake and an increase in fat percentage intake, as found in a previous study [59]. A low-carbohydrate diet has been associated with greater weight loss compared to a low-fat diet [60]. Nevertheless, there have been calls for the effectiveness of lifestyle modification not to be overshadowed by the continuous search for the best dietary approach [61], when moderate, balanced nutrient intake should be advocated.\nAlthough promising, the results of this study should be interpreted cautiously. Since participants were employees of the university, the generalizability of the study remains limited to populations in the same setting and may differ from the general population. Self-reported measures, in particular the self-administered surveys used, may result in reporting bias. Negative behavior such as smoking and alcohol consumption may be underreported, while physical activity may be over-reported. Although the short-form IPAQ was advocated for use in view of time-saving, a recent review [62] found that the IPAQ-SF tended to overestimate the METs-minutes/week score. However, the categories of physical activity derived from the long-form IPAQ [63] showed similarity with the categories found in our study and in another Malaysian-based study [3]. Therefore, we consider the results to be reliable, in particular in assessing change over time. Objective measures such as pedometers or accelerometers are cited as more reliable [64]; however, their utilization is resource-consuming and may not be feasible in a community setting. Our study only measured dietary self-efficacy; however, a recent finding showed that exercise self-efficacy may be a better predictor of weight loss than dietary self-efficacy [65]. Considering the continuous development in the cognitive-psychological aspects of obesity intervention, both physical and dietary self-efficacy need to be assessed in future studies.\nSelf-monitoring is another critical component of weight reduction (39); however, log book submission by both groups was poor, although attempts had been made to improve submission. This may further explain the small effect sizes observed and the lack of improvement in cardiovascular risks, although weight loss was achieved within each group. Poor log book submission for physical activity and dietary intake has also been reported in other studies (17). Attendance at group sessions was low compared to other studies [25,45,66], yet findings similar to ours were also reported in another Malaysian-based lifestyle intervention trial [67]. We also found processes such as attendance and log book submission to be correlated with weight loss. Competing work-time demands have also been cited as factors affecting attendance in worksite interventions [25]. Local culture within the society may influence attendance at the programme, which needs to be explored.\nFinally, we did not evaluate environmental support, such as the availability of healthy food choices and physical activity facilities, which is known to influence weight loss.\nNevertheless, this study is the first group-based programme derived from the GLB-DPP to be conducted in Malaysia. It addressed the dynamics of the psychological aspects within a lifestyle modification programme based on a theoretical construct, which has rarely been implemented and monitored in other translational weight loss programmes. 
A randomised controlled design, as opposed to a single-arm trial, was used to evaluate its effectiveness compared to an existing dietary programme.\n\nBODY.CONCLUSION AND RECOMMENDATIONS:\nOur findings showed that the group-based workplace intervention (GSLiM) programme achieved the targeted weight loss, improved self-efficacy and created positive support at a lower intensity. The incorporation of a group-based approach and psychological sessions produced the psychological change needed to achieve weight loss. The GSLiM programme is ready to be used and can be replicated in similar settings, with possible enhancements for exercise self-efficacy, attendance, and adherence to self-monitoring. The programme should be extended for a longer duration. Further research should explore the predictive value of the psychological factors in assisting participants to achieve their targeted weight loss.\n\nBODY.SUPPORTING INFORMATION:\nS1 FileBODY.RESEARCH PROTOCOL.:\n(PDF)\nClick here for additional data file.\nS2 FileBODY.DATA.:\n(XLS)\nClick here for additional data file.\nS1 TableBODY.CONSORT CHECKLIST FOR RANDOMISED CONTROL TRIAL.:\n(PDF)\nClick here for additional data file.\n\n**Question:** Compared to dietary counseling what was the result of Group Support Lifestyle Modification (GSLiM) on Total WEL score?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13864
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect of risedronate on joint structure and symptoms of knee osteoarthritis: results of the BRISK randomized, controlled trial [ISRCTN01928173]\n\n\n\nTo determine the efficacy and safety of risedronate in patients with knee osteoarthritis (OA), the British study of risedronate in structure and symptoms of knee OA (BRISK), a 1-year prospective, double-blind, placebo-controlled study, enrolled patients (40–80 years of age) with mild to moderate OA of the medial compartment of the knee. The primary aims were to detect differences in symptoms and function. Patients were randomized to once-daily risedronate (5 mg or 15 mg) or placebo. Radiographs were taken at baseline and 1 year for assessment of joint-space width using a standardized radiographic method with fluoroscopic positioning of the joint. Pain, function, and stiffness were assessed using the Western Ontario and McMaster Universities (WOMAC) OA index. The patient global assessment and use of walking aids were measured and bone and cartilage markers were assessed. The intention-to-treat population consisted of 284 patients. Those receiving risedronate at 15 mg showed improvement of the WOMAC index, particularly of physical function, significant improvement of the patient global assessment (P < 0.001), and decreased use of walking aids relative to patients receiving the placebo (P = 0.009). A trend towards attenuation of joint-space narrowing was observed in the group receiving 15 mg risedronate. Eight percent (n = 7) of patients receiving placebo and 4% (n = 4) of patients receiving 5 mg risedronate exhibited detectable progression of disease (joint-space width ≥ 25% or ≥ 0.75 mm) versus 1% (n = 1) of patients receiving 15 mg risedronate (P = 0.067). Risedronate (15 mg) significantly reduced markers of cartilage degradation and bone resorption. Both doses of risedronate were well tolerated. In this study, clear trends towards improvement were observed in both joint structure and symptoms in patients with primary knee OA treated with risedronate.\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is a chronic, progressive disease that particularly affects weight-bearing joints such as hips and knees. The entire joint is affected by a complex combination of degradative and reparative processes, which alter the anatomy and function of articular cartilage, subchondral bone, and other joint tissues. Of the joints affected, knee OA in particular is a major cause of morbidity, often resulting in knee replacement [1-3]. Moreover, costs associated with OA are high – in the USA alone in 1991, the annual cost of knee replacements was estimated to be more than one billion dollars [4]. OA is normally the result of an interplay between systemic (e.g. age, obesity) and local (e.g. sports injury) factors that affect the joints of the body. 
Radiographic evidence has shown that joint-space narrowing (a surrogate marker for articular cartilage [5]), sclerosis of the subchondral bone, and the presence of osteophytes are typical structural features of OA.\nCurrent therapies for OA are largely aimed at providing symptom relief, and include a wide range of analgesics (e.g. nonsteroidal anti-inflammatory drugs and cyclooxygenase-2 agents). In contrast, only limited data are available about therapies that modify the course of the disease or affect joint structure. Historically, OA has been considered a disease of the cartilage, but more recent evidence suggests that subchrondral bone is also involved in the pathogenesis, in both disease initiation and progression. For example, increased local bone turnover, decreased bone mineral content and stiffness, and decreased trabecular numbers have been observed in OA subchondral bone structure compared with normal bone [6-8]. There is an increased level of interest in subchondral bone as a therapeutic OA target, and, in particular, the possibility that drugs affecting bone metabolism might alter the progression of knee and hip OA.\nThe Duncan-Hartley guinea pig model is a widely used spontaneous model of OA progression [9]. Several recent OA studies have evaluated this model for the effects of the antiresorptive agents bisphosphonates. For example, a comparative analysis of multiple bisphosphonates showed that only the group of nitrogen-containing bisphosphonates with pyridinyl sidechains demonstrated significant effects on the cartilage, although not all of these proved effective [10]. In a separate study using the guinea pig OA model, the pyridinyl bisphosphonate risedronate was shown to slow disease progression, as measured by the size and severity of cartilage lesions and the size of osteophytes, by up to 40% [11]. Based upon these preclinical studies, a clinical trial was performed in order to evaluate the effects of risedronate in patients with mild to moderate knee OA. The primary end points were changes in symptoms and function, with secondary end points of changes in joint structure or in markers of joint structure.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN AND SELECTION OF PATIENTS:\nThe British Study of Risedronate in Structure and Symptoms of Knee OA (BRISK) was a 1-year prospective, double-blind, placebo-controlled study conducted in 10 centres in the UK. Male and female subjects aged 40 to 80 years with mild to moderate medial-compartment knee OA, diagnosed according to the clinical and radiological criteria of the American College of Rheumatology [12], were recruited into the trial.\nOA in at least one knee, designated the signal knee, was required to meet the following clinical and radiographic inclusion criteria. Clinical inclusion criteria were the presence of daily knee pain for at least 1 month out of the 3 months preceding the study, with at least one of the following: age >50 years, morning knee stiffness of <30 minutes, or knee crepitus. The radiographic criteria for inclusion were a joint-space width (JSW) of 2 to 4 mm in the medial tibiofemoral compartment in the semiflexed anterior–posterior (AP) view of the signal knee and a narrower width than in the lateral compartment of the same knee. Patients were also required to have at least one osteophyte in either the medial or the lateral compartment of the tibiofemoral joint. 
Major exclusion criteria were the presence of rheumatic diseases that could be responsible for secondary OA; use of intra-articular hyaluronic acid in the signal knee; knee injury or diagnostic arthroscopy of the signal knee in the 6 months preceding enrollment; a history of knee surgery (including arthroscopy requiring an incision of internal joint components) in the signal knee at any time; intra-articular corticosteroids in the 3 months preceding enrollment; the presence of non-OA causes of knee pain in the signal knee (e.g. anserine bursitis, fibromyalgia, or osteonecrosis); and the use of bisphosphonates within the 12 months preceding enrollment.\nThe subjects gave their written, informed consent before entering the study, which was conducted in accordance with the International Conference on Harmonization (ICH) guidelines for Good Clinical Practice (GCP) and was approved by the UK Multicentre Research Ethical Committee (MREC).\n\nBODY.TREATMENT ASSIGNMENT:\nThe subjects were randomly assigned in a 1:1:1 ratio to one of three arms to receive risedronate at 5 mg or 15 mg or placebo once daily for 1 year. Before randomization, patients were stratified according to their current use of oestrogen or a selective oestrogen receptor modulator.\nThe subjects were instructed to take their study medication with at least 120 mL of water, 30 minutes before breakfast, or, if the medication was taken later in the day, at least 2 hours before or after food intake and at least 30 minutes before bedtime. They were instructed to take their study medication while they were upright and not to lie down for at least 30 minutes afterwards.\n\nBODY.SYMPTOM OUTCOME MEASURES:\nThe outcome instrument for evaluation of risedronate efficacy on symptoms of OA was the Western Ontario and McMaster Universities (WOMAC) OA index [13]. The visual analogue scale (VAS) of the index was used, in which patients assessed each question using a 100-mm scale, with a higher score representing greater symptom severity. The total index score for the signal knee corresponded to the weighted composite of the 24 question scores standardized to a 100-point scale. Scores were also determined for the subscales of pain (5 questions), stiffness (2 questions), and physical function (17 questions). Other symptom outcome measures included the patient global assessment (PGA) of disease, consumption of pain medication, and the use of walking aids. For the PGA, patients answered the following question using a VAS: \"Considering all the ways your OA affects you, how have you been in the last 48 hours?\" Patients marked values on a scale from 0 to 100 mm.\nA step-down reduction in the use of pain medication was effected 5 days before all symptom evaluations. Patients were provided with approximately 30 tablets each of paracetamol (500 mg) and diclofenac (50 mg) to be used as the only pain medication 3 to 5 days before the baseline assessment and at visits at months 3, 6, and 12 (the exit visit). No pain medications were to be used 2 days before the scheduled evaluation date or on the day itself, with the exception of low-dose acetylsalicylic acid (<350 mg/day) for cardiac protection. Rescue analgesia was permitted during the study except for the 2-day washout period before each visit.\n\nBODY.STRUCTURE OUTCOME MEASURES:\nThe outcome measure for assessment of joint structural changes was the mean change from baseline values in minimum JSW of the medial compartment of the knee. 
Radiographs of the knee were taken at baseline and at 1 year using a standardized radiographic method with fluoroscopic positioning of the joint in a semiflexed position [14,15]. By the use of this technique, the anterior and posterior rims of the tibia were aligned (to within 1 mm) for reproducible positioning. Radiographs were subjected to extensive quality control at the radiographic facility before dispatch to the Central Analysis Center [15]. Radiographs were read centrally and their quality control was rechecked before computer software was used to obtain the radiographic magnification. This was determined from measurement of a metal ball placed at the head of the fibula at the time of radiography and was used to adjust the computerized measurement obtained of the minimum medial compartment JSW [15]. The test–retest standard deviation of the difference between radiographs taken 2 days apart for this technique was approximately 0.2 mm, based upon repeat measurements in 199 subjects [15]. A retrospective analysis was performed taking into account the precision of the instrument. Retrospectively, clinically meaningful disease progression was defined as joint-space narrowing of ≥ 0.75 mm or a ≥ 25% loss from baseline values. The ≥ 0.75-mm value is almost four times the 0.2-mm standard deviation observed for the x-ray method.\n\nBODY.STRUCTURE–SYMPTOM RELATION:\nThe relation between knee OA symptoms and radiographic joint-space narrowing was assessed retrospectively; the mean change in symptom scores between baseline to month 12 of the total WOMAC score and pain and function subscales was compared with the magnitude of change in JSW over the study period.\n\nBODY.BONE AND CARTILAGE MARKERS:\nEarly-morning fasting urine and serum samples were collected at baseline and at months 3, 6, and 12 for assessment of markers of bone and cartilage turnover. Bone resorption was assessed by measurement of urinary levels of the N-terminal crosslinking telopeptide of type I collagen (NTX-I, Osteomark; OrthoClinical Diagnostics, High Wycombe, Bucks, UK) [16]. Bone formation was assessed by measurement of bone-specific serum alkaline phosphatase (Ostase, Beckman-Coulter, San Diego, CA, USA) [17] and cartilage degradation was assessed by measurement of urinary levels of C-terminal crosslinking telopeptide of type II collagen (CTX-II, Cartilaps, Nordic Bioscience, Herlev, Denmark) [18]. The intra-assay and interassay coefficients of variation were lower than 10%.\n\nBODY.EVALUATION OF SAFETY:\nPatient-reported adverse events (AEs) were recorded throughout the study. AEs were categorized using the Coding Symbols for Thesaurus of Adverse Reaction Terms (COSTART®) coding dictionary. Clinical laboratory measurements for safety monitoring were made throughout the study. 
Serious AEs were defined as any that resulted in death; were life threatening; resulted in hospitalization; resulted in persistent or significant disability or incapacity; or were judged to be medically significant.\nUpper-GI AEs included the following symptoms and conditions: substernal chest pain; duodenitis; dyspepsia; dysphagia; oesophagitis; gastritis; bleeding gastritis; gastro-oesophageal reflux; oesophageal bleeding; GI bleeding; haematemesis; melena; abdominal pain; ulcers (duodenal, oesophageal, peptic, gastric); bleeding ulcers (duodenal, peptic, gastric); perforated ulcers (duodenal, peptic, gastric); perforated and bleeding ulcers (duodenal, peptic, gastric); and reactivated ulcers (duodenal, peptic, gastric).\n\nBODY.STATISTICAL ANALYSIS:\nTo ensure 80% power to detect a 20% effect of risedronate treatment versus placebo with respect to pain modification (quantified according to the WOMAC pain subscale, assuming a standard deviation of 70 mm on a 0- to 500-mm scale), a 1-year dropout rate of 20%, and a type I error rate of 5% without adjustment for two comparisons with placebo control, the sample size requirement was 100 patients per treatment group.\nAnalyses were undertaken on the intention-to-treat (ITT) population. This was defined as all randomized patients who received at least one dose of study medication. All statistical analyses were performed using a two-sided statistical test with a type-I-error rate of 0.05. Baseline characteristics were compared using Fisher's exact test for categorical variables and the Kruskal–Wallis test for continuous variables. Extended Mantel–Haenszel tests with pooled centres as strata were used for end points with categorical responses. Analysis of variance (ANOVA) methods were used. Symptom analyses were adjusted for baseline value (PGA < WOMAC total or subscale value, as appropriate), pooled study centres, baseline use of oestrogen or selective oestrogen receptor modulators, gender, age, body mass index, and baseline JSW. Mean JSW analyses were adjusted for pooled study centres, baseline use of oestrogen or selective oestrogen receptor modulators, gender, age, body mass index, and baseline JSW as covariates. Each risedronate group was compared with the placebo group. For walking aids, the percentages were compared with placebo using the Cochran–Mantel–Haenszel test after adjusting for pooled centres. Individual AEs and the proportion of clinically meaningful JSW progressors were analysed using Fisher's exact test.\nThe WOMAC scores were calculated in accordance with the WOMAC User's Guide [19]. The total scores were composed of subscales weighted as follows: pain = 42%, stiffness = 21%, and function = 37% [19]. For each subscale, the reported response was the patient's average. If at least two pain items, both stiffness items, or more than four physical function items were omitted, or if the patient's response was unclear, the items were regarded as invalid and the relevant subscale was not included.\n\n\nBODY.RESULTS:\nBODY.PATIENTS:\nTwo hundred and eighty-five patients were considered eligible for the study and were randomized to treatment. Of these, 284 received at least one dose of study medication and were included in the ITT population, and 231 (81%) completed the study (placebo, n = 80; risedronate at 5 mg, n = 80; risedronate at 15 mg, n = 71) (Fig. 1). The number of patients who completed the study and the reasons for withdrawal were similar across treatment groups. 
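The WOMAC scoring rules given in the statistical analysis section above (subscale averages combined with weights of 42% pain, 21% stiffness and 37% function, and a subscale treated as invalid if too many items are missing) translate into a short calculation. The sketch below is a hypothetical illustration rather than the trial's actual code: the function names are invented, and renormalising the weights when a subscale is dropped is an assumption, since the paper only says the invalid subscale was not included.

```python
# Illustrative sketch of the weighted WOMAC VAS composite described above.
# Item scores are 0-100 mm; weights follow the text (pain 42%, stiffness 21%,
# function 37%). Renormalising after dropping an invalid subscale is an assumption.

WEIGHTS = {"pain": 0.42, "stiffness": 0.21, "function": 0.37}
MAX_MISSING = {"pain": 1, "stiffness": 1, "function": 4}  # validity rules from the text

def subscale_mean(items, name):
    """Average the answered items; None if too many were omitted."""
    answered = [x for x in items if x is not None]
    if len(items) - len(answered) > MAX_MISSING[name]:
        return None
    return sum(answered) / len(answered)

def womac_total(pain, stiffness, function):
    """Weighted composite of the valid subscales on the 0-100 scale."""
    means = {"pain": subscale_mean(pain, "pain"),
             "stiffness": subscale_mean(stiffness, "stiffness"),
             "function": subscale_mean(function, "function")}
    valid = {name: m for name, m in means.items() if m is not None}
    if not valid:
        return None
    total_weight = sum(WEIGHTS[name] for name in valid)
    return sum(WEIGHTS[name] * m for name, m in valid.items()) / total_weight

# Hypothetical patient: 5 pain, 2 stiffness and 17 function items (mm), one pain item skipped
print(womac_total([40, 35, 50, 45, None], [30, 25], [55] * 17))
```

A higher composite means greater symptom severity, consistent with the 100-mm VAS convention used throughout the paper.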
Table 1 shows the baseline characteristics for the ITT population. These were similar between treatment groups; the average age of the patients was 63.3 years. There were no significant differences in the use of concomitant analgesics between treatment groups. Patients' compliance during study treatment, based on pill counts, was ≥ 83% and was comparable in the three treatment groups.\n\nBODY.SYMPTOM OUTCOME MEASURES:\nThere was an improvement from baseline values in the symptom outcome measure of total WOMAC scores (weighted and unweighted) (unweighted not shown) and the subscales for all treatment groups (Fig. 2). The group given risedronate at 15 mg showed a trend towards improvement from baseline values, although the differences were not statistically significant (P values from 0.10 to 0.33).\nAssessment of PGAs revealed a statistically significant improvement with risedronate at 15 mg compared with placebo at 1 year (-19.4 for risedronate at 15 mg versus -5.7 for placebo, P < 0.001) (Fig. 3). Although all treatment groups showed a significant improvement from baseline values at 3 months, the improvement in the group receiving risedronate at 15 mg continued to increase with time, whereas the level of improvement with placebo or risedronate at 5 mg did not show any further improvement after 6 months.\nAnalysis of the use of walking aids during study treatment showed a statistically significant difference in the proportion of patients who used walking aids in patients treated with risedronate at 15 mg (7 patients, 4% reduction) compared with placebo (21 patients, 8% increase) (P = 0.009) at 12 months compared with the proportion of patients that had reported using a walking aid during the previous year.\n\nBODY.STRUCTURE OUTCOME MEASURES:\nAssessment of the mean change from baseline values in minimum JSW in the medial compartment of the tibiofemoral joint at 1 year showed that there was a trend for patients receiving risedronate at 5 mg (JSW -0.08 ± 0.44 mm) or 15 mg (JSW -0.06 ± 0.25 mm). The change was greater in patients receiving placebo (JSW -0.12 ± 0.42 mm) compared with baseline values. Overall, the difference between treatment groups in loss of JSW at 12 months was not statistically significant (P = 0.275). The loss in JSW from baseline values was statistically significant only in the placebo group (P < 0.05).\nIn terms of detectable progression (i.e. loss of JSW ≥ 25% or ≥ 0.75 mm), the analysis of the distribution of change from baseline values in JSW at 1 year showed a greater presence of detectable progression in the placebo (8%) and risedronate (5 mg) (4%) (P = 0.36) groups than in the risedronate (15 mg) group (1%) (P = 0.067). The patients with JSW loss of >0.75 mm included none of the patients treated with risedronate at 15 mg and 6% of the patients treated with placebo (P = 0.060). Similarly, only 1% of the patients treated with risedronate at 15 mg but 7% of the patients treated with placebo had >25% loss of JSW (P = 0.12).\n\nBODY.STRUCTURE–SYMPTOM RELATION:\nFigure 4 shows the relation between structure and symptoms for this population of patients. The mean WOMAC total score and the scores on the pain and function subscales increased (i.e. symptom severity increased) with increasing loss of JSW. In the group of patients with any loss of JSW, the mean changes of the WOMAC total score and the scores on the pain and function subscales at 1 year were -5.9 mm, -4.6 mm, and -6.3 mm, respectively, indicating that these symptoms were not increasing overall. 
In contrast, for the subset of patients with a loss in JSW of 40% or more, the corresponding mean changes were +1.4 mm, +6.0 mm, and +2.3 mm, indicating increased symptom severity in these patients concurrent with narrowing of their knee-joint space.\n\nBODY.MARKERS OF BIOCHEMICAL TURNOVER:\nRisedronate treatment significantly reduced markers of cartilage degradation (Fig. 5) and bone resorption compared with placebo. At 1 year, treatment with risedronate at 15 mg significantly decreased mean urinary CTX-II values, by -22.8% ± 5.35; urinary NTX-I was reduced by -32.9% ± 4.92 relative to baseline values (P < 0.05). Dose-dependent effects were also observed with the 5-mg dose compared with placebo, but to a lesser magnitude. This finding is consistent with the known pharmacologic effect of risedronate on bone turnover. At 1 year, CTX-II and NTX-I values in the placebo group were significantly higher than those in the risedronate 15-mg group (14.5% ± 5.4 and 17.2% ± 4.9 higher, respectively). Significant decreases in bone alkaline phosphatase were observed in the risedronate groups compared with placebo. At 1 year, the mean decreases in the groups receiving risedronate at 15 mg and 5 mg were 29.1% ± 2.6 and 19.5% ± 2.5, respectively, compared with a mean decrease of 2.7% ± 2.5 in the placebo group (P < 0.001).\n\nBODY.SAFETY:\nThe frequencies of AEs were similar in the two treatment groups (Table 2). There were no clinically meaningful differences between groups in the percentage of patients with AEs in any body system and there were no significant differences in routine clinical chemistry parameters between the risedronate groups and the placebo group. The numbers of patients who dropped out of the study because of AEs were similar. Overall, 34 patients reported a total of 53 serious AEs. Investigators considered four serious AEs as possibly related to study treatment; two of these (rash and diarrhea) were in patients treated with placebo and two (anaemia and increased general joint pain) were in patients treated with risedronate at 5 mg.\nTable 2 provides a summary of adverse events for the ITT population and the frequency of the overall GI AEs and the most common upper-GI AEs. Forty-seven upper-GI events were reported in 38 patients, of which abdominal pain and dyspepsia were the most frequently reported. The majority of the upper-GI AEs occurred in patients with a history of GI disease; there were no significant differences between the groups given risedronate and the group given placebo in the incidence of upper-GI AEs in these patients.\n\n\nBODY.DISCUSSION:\nIncreased evidence of the role of bone in both the initiation and progression of OA has resulted in an interest in drugs that affect bone metabolism and might slow or even halt the process of joint degeneration [6]. The early findings reported here suggest that the bisphosphonate risedronate may have disease-modifying effects in patients with knee OA. A recent cross-sectional study also suggests an association between antiresorptive treatments (oestrogen or bisphosphonate) and improved symptoms and/or decreased bone marrow abnormalities [20].\nPositive trends were observed with risedronate treatment with regard to symptomatic improvement, as assessed by the WOMAC index. Treatment with risedronate at 15 mg resulted in a consistent trend in improvement in WOMAC scores, whereas the group receiving placebo showed less improvement. 
The group receiving risedronate at 15 mg showed a significant improvement in the PGA of OA compared with placebo. Similarly, the percentage of patients who used a walking aid during the study decreased in the group treated with risedronate at 15 mg, contrasting with an increase in the placebo group. While the differences in JSW among the groups were not significant, there was a trend for less loss in the risedronate (15 mg) group than in the placebo group. When the data were analysed in a post hoc manner to identify patients with detectable progression (i.e. approximately four times the precision of the measurement), 8% of patients receiving placebo and 1% of patients receiving 15 mg risedronate were found to exhibit this degree of progression. Additionally, risedronate significantly reduced levels of bone resorption and cartilage degradation, as assessed by NTX-I and CTX-II markers, respectively.\nThese encouraging results were, however, not confirmed in multicentre studies of risedronate treatment for 2 years using a similar protocol [21]. The following provides perspectives comparing the two studies. In our study, there was a dose-dependent trend for improvement in the WOMAC score. In the 2-year North American study, which enrolled 1,232 patients with knee OA, the placebo effect was approximately twice that observed in our study, and was comparable to the magnitude of change observed in the group given 15 mg risedronate in our study. Both studies showed significant decreases in CTX-II, the marker of cartilage degradation, but these were not associated with an attenuation of joint-space loss. This lack of association may be related to the length of observation in these studies. Reijman and colleagues [22] observed 1,235 men and women with OA, followed up over an average period of 6.6 years. The subjects with baseline CTX-II levels in the highest quartile had a sixfold risk of progression of knee OA, defined as a decrease in JSW ≥ 2 mm, in comparison with subjects in the lowest quartile [22]. This suggests that an enriched population of subjects with an elevated rate of cartilage loss combined with a longer study period may be required in order to observe significant treatment effects.\nOur study is one of the first to suggest a correlation between symptoms and structure in OA, although preliminary results with doxycycline in the treatment of obese women with knee OA have reported a significant reduction in the proportion of follow-up visits in which a clinically significant increase in pain occurred, favouring treatment over placebo, and coinciding with a decrease in joint-space narrowing [23]. This finding is important, because it runs contrary to previous results, which have suggested a poor correlation between these disease features [24,25]. The limitations of our study include the small number of patients (n 26,27]. The results highlighted the importance of medial tibial plateau alignment with the central x-ray beam and showed that the standard clinical view of a standing extended knee is subject to considerable variability. In contrast, the fluoroscopic technique is well validated and is less variable in test-retest performance [15,28]. Further studies are required to explore the possible correlation between symptoms and structure observed in our study. 
If validated, this relation may allow physicians to use the assessment of pain, perhaps in combination with a biomarker such as CTX-II, as a surrogate for other measures of disease progression.\n\nBODY.CONCLUSION:\nThis study is one of the first to show a correlation between symptoms and joint structure changes in knee OA. While our findings were suggestive of a beneficial effect of risedronate treatment on preservation of bone and cartilage, the trends seen in this study have not been observed in larger, multicountry cohorts.\n\nBODY.ABBREVIATIONS:\nAE = adverse event; BRISK = British Study of Risedronate in Structure and Symptoms of Knee OA; CTX-II = C-terminal crosslinking telopeptide of type II collagen; GI = gastrointestinal; ITT = intention-to-treat; JSW = joint-space width; NTX-I = N-terminal crosslinking telopeptide of type I collagen; OA = osteoarthritis; PGA = patient global assessment; VAS = visual analogue scale; WOMAC = Western Ontario and McMaster Universities [OA index].\n\nBODY.COMPETING INTERESTS:\nThis manuscript was sponsored by a grant from Procter & Gamble.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nTDS, JFB, DJV, and JMM planned the study and prepared the manuscript. GAC performed the statistical analysis. JCB-W supervised radiological measurements. PG performed the marker assays. PGC assisted with the manuscript and recruited patients. All authors read and approved the final manuscript.\n\n**Question:** Compared to placebo what was the result of risedronate 15 mg on Western Ontario and McMaster Universities (WOMAC) OA index, particularly of physical function, patient global assessment ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13865
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect of risedronate on joint structure and symptoms of knee osteoarthritis: results of the BRISK randomized, controlled trial [ISRCTN01928173]\n\n\n\nTo determine the efficacy and safety of risedronate in patients with knee osteoarthritis (OA), the British study of risedronate in structure and symptoms of knee OA (BRISK), a 1-year prospective, double-blind, placebo-controlled study, enrolled patients (40–80 years of age) with mild to moderate OA of the medial compartment of the knee. The primary aims were to detect differences in symptoms and function. Patients were randomized to once-daily risedronate (5 mg or 15 mg) or placebo. Radiographs were taken at baseline and 1 year for assessment of joint-space width using a standardized radiographic method with fluoroscopic positioning of the joint. Pain, function, and stiffness were assessed using the Western Ontario and McMaster Universities (WOMAC) OA index. The patient global assessment and use of walking aids were measured and bone and cartilage markers were assessed. The intention-to-treat population consisted of 284 patients. Those receiving risedronate at 15 mg showed improvement of the WOMAC index, particularly of physical function, significant improvement of the patient global assessment (P < 0.001), and decreased use of walking aids relative to patients receiving the placebo (P = 0.009). A trend towards attenuation of joint-space narrowing was observed in the group receiving 15 mg risedronate. Eight percent (n = 7) of patients receiving placebo and 4% (n = 4) of patients receiving 5 mg risedronate exhibited detectable progression of disease (joint-space width ≥ 25% or ≥ 0.75 mm) versus 1% (n = 1) of patients receiving 15 mg risedronate (P = 0.067). Risedronate (15 mg) significantly reduced markers of cartilage degradation and bone resorption. Both doses of risedronate were well tolerated. In this study, clear trends towards improvement were observed in both joint structure and symptoms in patients with primary knee OA treated with risedronate.\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is a chronic, progressive disease that particularly affects weight-bearing joints such as hips and knees. The entire joint is affected by a complex combination of degradative and reparative processes, which alter the anatomy and function of articular cartilage, subchondral bone, and other joint tissues. Of the joints affected, knee OA in particular is a major cause of morbidity, often resulting in knee replacement [1-3]. Moreover, costs associated with OA are high – in the USA alone in 1991, the annual cost of knee replacements was estimated to be more than one billion dollars [4]. OA is normally the result of an interplay between systemic (e.g. age, obesity) and local (e.g. sports injury) factors that affect the joints of the body. 
Radiographic evidence has shown that joint-space narrowing (a surrogate marker for articular cartilage [5]), sclerosis of the subchondral bone, and the presence of osteophytes are typical structural features of OA.\nCurrent therapies for OA are largely aimed at providing symptom relief, and include a wide range of analgesics (e.g. nonsteroidal anti-inflammatory drugs and cyclooxygenase-2 agents). In contrast, only limited data are available about therapies that modify the course of the disease or affect joint structure. Historically, OA has been considered a disease of the cartilage, but more recent evidence suggests that subchondral bone is also involved in the pathogenesis, in both disease initiation and progression. For example, increased local bone turnover, decreased bone mineral content and stiffness, and decreased trabecular numbers have been observed in OA subchondral bone structure compared with normal bone [6-8]. There is an increased level of interest in subchondral bone as a therapeutic OA target, and, in particular, the possibility that drugs affecting bone metabolism might alter the progression of knee and hip OA.\nThe Duncan-Hartley guinea pig model is a widely used spontaneous model of OA progression [9]. Several recent OA studies have evaluated this model for the effects of the antiresorptive agents bisphosphonates. For example, a comparative analysis of multiple bisphosphonates showed that only the group of nitrogen-containing bisphosphonates with pyridinyl sidechains demonstrated significant effects on the cartilage, although not all of these proved effective [10]. In a separate study using the guinea pig OA model, the pyridinyl bisphosphonate risedronate was shown to slow disease progression, as measured by the size and severity of cartilage lesions and the size of osteophytes, by up to 40% [11]. Based upon these preclinical studies, a clinical trial was performed in order to evaluate the effects of risedronate in patients with mild to moderate knee OA. The primary end points were changes in symptoms and function, with secondary end points of changes in joint structure or in markers of joint structure.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN AND SELECTION OF PATIENTS:\nThe British Study of Risedronate in Structure and Symptoms of Knee OA (BRISK) was a 1-year prospective, double-blind, placebo-controlled study conducted in 10 centres in the UK. Male and female subjects aged 40 to 80 years with mild to moderate medial-compartment knee OA, diagnosed according to the clinical and radiological criteria of the American College of Rheumatology [12], were recruited into the trial.\nOA in at least one knee, designated the signal knee, was required to meet the following clinical and radiographic inclusion criteria. Clinical inclusion criteria were the presence of daily knee pain for at least 1 month out of the 3 months preceding the study, with at least one of the following: age >50 years, morning knee stiffness of <30 minutes, or knee crepitus. The radiographic criteria for inclusion were a joint-space width (JSW) of 2 to 4 mm in the medial tibiofemoral compartment in the semiflexed anterior–posterior (AP) view of the signal knee and a narrower width than in the lateral compartment of the same knee. Patients were also required to have at least one osteophyte in either the medial or the lateral compartment of the tibiofemoral joint. 
Major exclusion criteria were the presence of rheumatic diseases that could be responsible for secondary OA; use of intra-articular hyaluronic acid in the signal knee; knee injury or diagnostic arthroscopy of the signal knee in the 6 months preceding enrollment; a history of knee surgery (including arthroscopy requiring an incision of internal joint components) in the signal knee at any time; intra-articular corticosteroids in the 3 months preceding enrollment; the presence of non-OA causes of knee pain in the signal knee (e.g. anserine bursitis, fibromyalgia, or osteonecrosis); and the use of bisphosphonates within the 12 months preceding enrollment.\nThe subjects gave their written, informed consent before entering the study, which was conducted in accordance with the International Conference on Harmonization (ICH) guidelines for Good Clinical Practice (GCP) and was approved by the UK Multicentre Research Ethical Committee (MREC).\n\nBODY.TREATMENT ASSIGNMENT:\nThe subjects were randomly assigned in a 1:1:1 ratio to one of three arms to receive risedronate at 5 mg or 15 mg or placebo once daily for 1 year. Before randomization, patients were stratified according to their current use of oestrogen or a selective oestrogen receptor modulator.\nThe subjects were instructed to take their study medication with at least 120 mL of water, 30 minutes before breakfast, or, if the medication was taken later in the day, at least 2 hours before or after food intake and at least 30 minutes before bedtime. They were instructed to take their study medication while they were upright and not to lie down for at least 30 minutes afterwards.\n\nBODY.SYMPTOM OUTCOME MEASURES:\nThe outcome instrument for evaluation of risedronate efficacy on symptoms of OA was the Western Ontario and McMaster Universities (WOMAC) OA index [13]. The visual analogue scale (VAS) of the index was used, in which patients assessed each question using a 100-mm scale, with a higher score representing greater symptom severity. The total index score for the signal knee corresponded to the weighted composite of the 24 question scores standardized to a 100-point scale. Scores were also determined for the subscales of pain (5 questions), stiffness (2 questions), and physical function (17 questions). Other symptom outcome measures included the patient global assessment (PGA) of disease, consumption of pain medication, and the use of walking aids. For the PGA, patients answered the following question using a VAS: \"Considering all the ways your OA affects you, how have you been in the last 48 hours?\" Patients marked values on a scale from 0 to 100 mm.\nA step-down reduction in the use of pain medication was effected 5 days before all symptom evaluations. Patients were provided with approximately 30 tablets each of paracetamol (500 mg) and diclofenac (50 mg) to be used as the only pain medication 3 to 5 days before the baseline assessment and at visits at months 3, 6, and 12 (the exit visit). No pain medications were to be used 2 days before the scheduled evaluation date or on the day itself, with the exception of low-dose acetylsalicylic acid (<350 mg/day) for cardiac protection. Rescue analgesia was permitted during the study except for the 2-day washout period before each visit.\n\nBODY.STRUCTURE OUTCOME MEASURES:\nThe outcome measure for assessment of joint structural changes was the mean change from baseline values in minimum JSW of the medial compartment of the knee. 
Radiographs of the knee were taken at baseline and at 1 year using a standardized radiographic method with fluoroscopic positioning of the joint in a semiflexed position [14,15]. By the use of this technique, the anterior and posterior rims of the tibia were aligned (to within 1 mm) for reproducible positioning. Radiographs were subjected to extensive quality control at the radiographic facility before dispatch to the Central Analysis Center [15]. Radiographs were read centrally and their quality control was rechecked before computer software was used to obtain the radiographic magnification. This was determined from measurement of a metal ball placed at the head of the fibula at the time of radiography and was used to adjust the computerized measurement obtained of the minimum medial compartment JSW [15]. The test–retest standard deviation of the difference between radiographs taken 2 days apart for this technique was approximately 0.2 mm, based upon repeat measurements in 199 subjects [15]. A retrospective analysis was performed taking into account the precision of the instrument. Retrospectively, clinically meaningful disease progression was defined as joint-space narrowing of ≥ 0.75 mm or a ≥ 25% loss from baseline values. The ≥ 0.75-mm value is almost four times the 0.2-mm standard deviation observed for the x-ray method.\n\nBODY.STRUCTURE–SYMPTOM RELATION:\nThe relation between knee OA symptoms and radiographic joint-space narrowing was assessed retrospectively; the mean change in symptom scores between baseline to month 12 of the total WOMAC score and pain and function subscales was compared with the magnitude of change in JSW over the study period.\n\nBODY.BONE AND CARTILAGE MARKERS:\nEarly-morning fasting urine and serum samples were collected at baseline and at months 3, 6, and 12 for assessment of markers of bone and cartilage turnover. Bone resorption was assessed by measurement of urinary levels of the N-terminal crosslinking telopeptide of type I collagen (NTX-I, Osteomark; OrthoClinical Diagnostics, High Wycombe, Bucks, UK) [16]. Bone formation was assessed by measurement of bone-specific serum alkaline phosphatase (Ostase, Beckman-Coulter, San Diego, CA, USA) [17] and cartilage degradation was assessed by measurement of urinary levels of C-terminal crosslinking telopeptide of type II collagen (CTX-II, Cartilaps, Nordic Bioscience, Herlev, Denmark) [18]. The intra-assay and interassay coefficients of variation were lower than 10%.\n\nBODY.EVALUATION OF SAFETY:\nPatient-reported adverse events (AEs) were recorded throughout the study. AEs were categorized using the Coding Symbols for Thesaurus of Adverse Reaction Terms (COSTART®) coding dictionary. Clinical laboratory measurements for safety monitoring were made throughout the study. 
Serious AEs were defined as any that resulted in death; were life threatening; resulted in hospitalization; resulted in persistent or significant disability or incapacity; or were judged to be medically significant.\nUpper-GI AEs included the following symptoms and conditions: substernal chest pain; duodenitis; dyspepsia; dysphagia; oesophagitis; gastritis; bleeding gastritis; gastro-oesophageal reflux; oesophageal bleeding; GI bleeding; haematemesis; melena; abdominal pain; ulcers (duodenal, oesophageal, peptic, gastric); bleeding ulcers (duodenal, peptic, gastric); perforated ulcers (duodenal, peptic, gastric); perforated and bleeding ulcers (duodenal, peptic, gastric); and reactivated ulcers (duodenal, peptic, gastric).\n\nBODY.STATISTICAL ANALYSIS:\nTo ensure 80% power to detect a 20% effect of risedronate treatment versus placebo with respect to pain modification (quantified according to the WOMAC pain subscale, assuming a standard deviation of 70 mm on a 0- to 500-mm scale), a 1-year dropout rate of 20%, and a type I error rate of 5% without adjustment for two comparisons with placebo control, the sample size requirement was 100 patients per treatment group.\nAnalyses were undertaken on the intention-to-treat (ITT) population. This was defined as all randomized patients who received at least one dose of study medication. All statistical analyses were performed using a two-sided statistical test with a type-I-error rate of 0.05. Baseline characteristics were compared using Fisher's exact test for categorical variables and the Kruskal–Wallis test for continuous variables. Extended Mantel–Haenszel tests with pooled centres as strata were used for end points with categorical responses. Analysis of variance (ANOVA) methods were used. Symptom analyses were adjusted for baseline value (PGA < WOMAC total or subscale value, as appropriate), pooled study centres, baseline use of oestrogen or selective oestrogen receptor modulators, gender, age, body mass index, and baseline JSW. Mean JSW analyses were adjusted for pooled study centres, baseline use of oestrogen or selective oestrogen receptor modulators, gender, age, body mass index, and baseline JSW as covariates. Each risedronate group was compared with the placebo group. For walking aids, the percentages were compared with placebo using the Cochran–Mantel–Haenszel test after adjusting for pooled centres. Individual AEs and the proportion of clinically meaningful JSW progressors were analysed using Fisher's exact test.\nThe WOMAC scores were calculated in accordance with the WOMAC User's Guide [19]. The total scores were composed of subscales weighted as follows: pain = 42%, stiffness = 21%, and function = 37% [19]. For each subscale, the reported response was the patient's average. If at least two pain items, both stiffness items, or more than four physical function items were omitted, or if the patient's response was unclear, the items were regarded as invalid and the relevant subscale was not included.\n\n\nBODY.RESULTS:\nBODY.PATIENTS:\nTwo hundred and eighty-five patients were considered eligible for the study and were randomized to treatment. Of these, 284 received at least one dose of study medication and were included in the ITT population, and 231 (81%) completed the study (placebo, n = 80; risedronate at 5 mg, n = 80; risedronate at 15 mg, n = 71) (Fig. 1). The number of patients who completed the study and the reasons for withdrawal were similar across treatment groups. 
Table 1 shows the baseline characteristics for the ITT population. These were similar between treatment groups; the average age of the patients was 63.3 years. There were no significant differences in the use of concomitant analgesics between treatment groups. Patients' compliance during study treatment, based on pill counts, was ≥ 83% and was comparable in the three treatment groups.\n\nBODY.SYMPTOM OUTCOME MEASURES:\nThere was an improvement from baseline values in the symptom outcome measure of total WOMAC scores (weighted and unweighted) (unweighted not shown) and the subscales for all treatment groups (Fig. 2). The group given risedronate at 15 mg showed a trend towards improvement from baseline values, although the differences were not statistically significant (P values from 0.10 to 0.33).\nAssessment of PGAs revealed a statistically significant improvement with risedronate at 15 mg compared with placebo at 1 year (-19.4 for risedronate at 15 mg versus -5.7 for placebo, P < 0.001) (Fig. 3). Although all treatment groups showed a significant improvement from baseline values at 3 months, the improvement in the group receiving risedronate at 15 mg continued to increase with time, whereas the level of improvement with placebo or risedronate at 5 mg did not show any further improvement after 6 months.\nAnalysis of the use of walking aids during study treatment showed a statistically significant difference in the proportion of patients who used walking aids in patients treated with risedronate at 15 mg (7 patients, 4% reduction) compared with placebo (21 patients, 8% increase) (P = 0.009) at 12 months compared with the proportion of patients that had reported using a walking aid during the previous year.\n\nBODY.STRUCTURE OUTCOME MEASURES:\nAssessment of the mean change from baseline values in minimum JSW in the medial compartment of the tibiofemoral joint at 1 year showed that there was a trend for patients receiving risedronate at 5 mg (JSW -0.08 ± 0.44 mm) or 15 mg (JSW -0.06 ± 0.25 mm). The change was greater in patients receiving placebo (JSW -0.12 ± 0.42 mm) compared with baseline values. Overall, the difference between treatment groups in loss of JSW at 12 months was not statistically significant (P = 0.275). The loss in JSW from baseline values was statistically significant only in the placebo group (P < 0.05).\nIn terms of detectable progression (i.e. loss of JSW ≥ 25% or ≥ 0.75 mm), the analysis of the distribution of change from baseline values in JSW at 1 year showed a greater presence of detectable progression in the placebo (8%) and risedronate (5 mg) (4%) (P = 0.36) groups than in the risedronate (15 mg) group (1%) (P = 0.067). The patients with JSW loss of >0.75 mm included none of the patients treated with risedronate at 15 mg and 6% of the patients treated with placebo (P = 0.060). Similarly, only 1% of the patients treated with risedronate at 15 mg but 7% of the patients treated with placebo had >25% loss of JSW (P = 0.12).\n\nBODY.STRUCTURE–SYMPTOM RELATION:\nFigure 4 shows the relation between structure and symptoms for this population of patients. The mean WOMAC total score and the scores on the pain and function subscales increased (i.e. symptom severity increased) with increasing loss of JSW. In the group of patients with any loss of JSW, the mean changes of the WOMAC total score and the scores on the pain and function subscales at 1 year were -5.9 mm, -4.6 mm, and -6.3 mm, respectively, indicating that these symptoms were not increasing overall. 
In contrast, for the subset of patients with a loss in JSW of 40% or more, the corresponding mean changes were +1.4 mm, +6.0 mm, and +2.3 mm, indicating increased symptom severity in these patients concurrent with narrowing of their knee-joint space.\n\nBODY.MARKERS OF BIOCHEMICAL TURNOVER:\nRisedronate treatment significantly reduced markers of cartilage degradation (Fig. 5) and bone resorption compared with placebo. At 1 year, treatment with risedronate at 15 mg significantly decreased mean urinary CTX-II values, by -22.8% ± 5.35; urinary NTX-I was reduced by -32.9% ± 4.92 relative to baseline values (P < 0.05). Dose-dependent effects were also observed with the 5-mg dose compared with placebo, but to a lesser magnitude. This finding is consistent with the known pharmacologic effect of risedronate on bone turnover. At 1 year, CTX-II and NTX-I values in the placebo group were significantly higher than those in the risedronate 15-mg group (14.5% ± 5.4 and 17.2% ± 4.9 higher, respectively). Significant decreases in bone alkaline phosphatase were observed in the risedronate groups compared with placebo. At 1 year, the mean decreases in the groups receiving risedronate at 15 mg and 5 mg were 29.1% ± 2.6 and 19.5% ± 2.5, respectively, compared with a mean decrease of 2.7% ± 2.5 in the placebo group (P < 0.001).\n\nBODY.SAFETY:\nThe frequencies of AEs were similar in the two treatment groups (Table 2). There were no clinically meaningful differences between groups in the percentage of patients with AEs in any body system and there were no significant differences in routine clinical chemistry parameters between the risedronate groups and the placebo group. The numbers of patients who dropped out of the study because of AEs were similar. Overall, 34 patients reported a total of 53 serious AEs. Investigators considered four serious AEs as possibly related to study treatment; two of these (rash and diarrhea) were in patients treated with placebo and two (anaemia and increased general joint pain) were in patients treated with risedronate at 5 mg.\nTable 2 provides a summary of adverse events for the ITT population and the frequency of the overall GI AEs and the most common upper-GI AEs. Forty-seven upper-GI events were reported in 38 patients, of which abdominal pain and dyspepsia were the most frequently reported. The majority of the upper-GI AEs occurred in patients with a history of GI disease; there were no significant differences between the groups given risedronate and the group given placebo in the incidence of upper-GI AEs in these patients.\n\n\nBODY.DISCUSSION:\nIncreased evidence of the role of bone in both the initiation and progression of OA has resulted in an interest in drugs that affect bone metabolism and might slow or even halt the process of joint degeneration [6]. The early findings reported here suggest that the bisphosphonate risedronate may have disease-modifying effects in patients with knee OA. A recent cross-sectional study also suggests an association between antiresorptive treatments (oestrogen or bisphosphonate) and improved symptoms and/or decreased bone marrow abnormalities [20].\nPositive trends were observed with risedronate treatment with regard to symptomatic improvement, as assessed by the WOMAC index. Treatment with risedronate at 15 mg resulted in a consistent trend in improvement in WOMAC scores, whereas the group receiving placebo showed less improvement. 
The group receiving risedronate at 15 mg showed a significant improvement in the PGA of OA compared with placebo. Similarly, the percentage of patients who used a walking aid during the study decreased in the group treated with risedronate at 15 mg, contrasting with an increase in the placebo group. While the differences in JSW among the groups were not significant, there was a trend for less loss in the risedronate (15 mg) than in the placebo group. When the data were analysed in a post hoc manner to identify patients with detectable progression (i.e. approximately four times the precision of the measurement), 8% of patients receiving placebo and 1% of patients receiving 15 mg risedronate were found to exhibit this degree of progression. Additionally, risedronate significantly reduced levels of bone resorption and cartilage degradation, as assessed by NTX-I and CTX-II markers, respectively.\nDespite these encouraging results, they were not confirmed in multicentre studies of risedronate treatment for 2 years using a similar protocol [21]. The following provides perspectives comparing the two studies. In our study, there was a dose-dependent trend for improvement in the WOMAC score. In the 2-year North American study, which enrolled 1,232 patients with knee OA, the placebo effect was approximately twice that observed in our study, and was comparable to the magnitude of change observed in the group given 15 mg risedronate in our study. Both studies showed significant decreases in CTX-II, the marker of cartilage degradation, but these were not associated with an attenuation of joint-space loss. This lack of association may be related to the length of observation in these studies. Reijman and colleagues [22] observed 1,235 men and women with OA, followed up over an average period of 6.6 years. The subjects with baseline CTX-II levels in the highest quartile had a sixfold risk of progression of knee OA, defined as a decrease in JSW ≥ 2 mm, in comparison with subjects in the lowest quartile [22]. This suggests that an enriched population of subjects with an elevated rate of cartilage loss combined with a longer study period may be required in order to observe significant treatment effects.\nOur study is one of the first to suggest a correlation between symptoms and structure in OA, although preliminary results with doxycycline in the treatment of obese women with knee OA have reported a significant reduction in the proportion of follow-up visits in which a clinically significant increase in pain occurred, favouring treatment over placebo, and coinciding with a decrease in joint-space narrowing [23]. This finding is important, because it runs contrary to previous results, which have suggested a poor correlation between these disease features [24,25]. The limitations of our study include the small number of patients (n 26,27]. The results highlighted the importance of medial tibial plateau alignment with the central x-ray beam and showed that the standard clinical view of a standing extended knee is subject to considerable variability. In contrast, the fluoroscopic technique is well validated and is less variable in test-retest performance [15,28]. Further studies are required to further explore the possible correlation between symptoms and structure observed in our study. 
If validated, this relation may allow physicians to use the assessment of pain, perhaps in combination with a biomarker such as CTX-II, as a surrogate for other measures of disease progression\n\nBODY.CONCLUSION:\nThis study is one of the first to show a correlation between symptoms and joint structure changes in knee OA. While our findings were suggestive of a beneficial effect of risedronate treatment on preservation of bone and cartilage, these trends seen in this study have not been observed in larger, multicountry cohorts.\n\nBODY.ABBREVIATIONS:\nAE = adverse event; BRISK = British Study of Risedronate in Structure and Symptoms of Knee OA; CTX-II = C-terminal crosslinking telopeptide of type II collagen; GI = gastrointestinal; ITT = intention-to-treat; JSW = joint-space width; NTX-I = N-terminal crosslinking telopeptide of type I collagen; OA = osteoarthritis; PGA = patient global assessment; VAS = visual analogue scale; WOMAC = Western Ontario and McMaster Universities [OA index].\n\nBODY.COMPETING INTERESTS:\nThis manuscript was sponsored by a grant from Procter & Gamble.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nTDS, JFB, DJV, and JMM planned the study and prepared the manuscript. GAC performed the statistical analysis. JCB-W supervised radiological measurements. PG performed the marker assays. PGC assisted with the manuscript and recruited patients. All authors read and approved the final manuscript.\n\n**Question:** Compared to risedronate 15 mg what was the result of risedronate 5 mg on joint-space width?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13866
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect of risedronate on joint structure and symptoms of knee osteoarthritis: results of the BRISK randomized, controlled trial [ISRCTN01928173]\n\n\n\nTo determine the efficacy and safety of risedronate in patients with knee osteoarthritis (OA), the British study of risedronate in structure and symptoms of knee OA (BRISK), a 1-year prospective, double-blind, placebo-controlled study, enrolled patients (40–80 years of age) with mild to moderate OA of the medial compartment of the knee. The primary aims were to detect differences in symptoms and function. Patients were randomized to once-daily risedronate (5 mg or 15 mg) or placebo. Radiographs were taken at baseline and 1 year for assessment of joint-space width using a standardized radiographic method with fluoroscopic positioning of the joint. Pain, function, and stiffness were assessed using the Western Ontario and McMaster Universities (WOMAC) OA index. The patient global assessment and use of walking aids were measured and bone and cartilage markers were assessed. The intention-to-treat population consisted of 284 patients. Those receiving risedronate at 15 mg showed improvement of the WOMAC index, particularly of physical function, significant improvement of the patient global assessment (P < 0.001), and decreased use of walking aids relative to patients receiving the placebo (P = 0.009). A trend towards attenuation of joint-space narrowing was observed in the group receiving 15 mg risedronate. Eight percent (n = 7) of patients receiving placebo and 4% (n = 4) of patients receiving 5 mg risedronate exhibited detectable progression of disease (joint-space width ≥ 25% or ≥ 0.75 mm) versus 1% (n = 1) of patients receiving 15 mg risedronate (P = 0.067). Risedronate (15 mg) significantly reduced markers of cartilage degradation and bone resorption. Both doses of risedronate were well tolerated. In this study, clear trends towards improvement were observed in both joint structure and symptoms in patients with primary knee OA treated with risedronate.\nBODY.INTRODUCTION:\nOsteoarthritis (OA) is a chronic, progressive disease that particularly affects weight-bearing joints such as hips and knees. The entire joint is affected by a complex combination of degradative and reparative processes, which alter the anatomy and function of articular cartilage, subchondral bone, and other joint tissues. Of the joints affected, knee OA in particular is a major cause of morbidity, often resulting in knee replacement [1-3]. Moreover, costs associated with OA are high – in the USA alone in 1991, the annual cost of knee replacements was estimated to be more than one billion dollars [4]. OA is normally the result of an interplay between systemic (e.g. age, obesity) and local (e.g. sports injury) factors that affect the joints of the body. 
Radiographic evidence has shown that joint-space narrowing (a surrogate marker for articular cartilage [5]), sclerosis of the subchondral bone, and the presence of osteophytes are typical structural features of OA.\nCurrent therapies for OA are largely aimed at providing symptom relief, and include a wide range of analgesics (e.g. nonsteroidal anti-inflammatory drugs and cyclooxygenase-2 agents). In contrast, only limited data are available about therapies that modify the course of the disease or affect joint structure. Historically, OA has been considered a disease of the cartilage, but more recent evidence suggests that subchondral bone is also involved in the pathogenesis, in both disease initiation and progression. For example, increased local bone turnover, decreased bone mineral content and stiffness, and decreased trabecular numbers have been observed in OA subchondral bone structure compared with normal bone [6-8]. There is an increased level of interest in subchondral bone as a therapeutic OA target, and, in particular, the possibility that drugs affecting bone metabolism might alter the progression of knee and hip OA.\nThe Duncan-Hartley guinea pig model is a widely used spontaneous model of OA progression [9]. Several recent OA studies have evaluated this model for the effects of the antiresorptive agents bisphosphonates. For example, a comparative analysis of multiple bisphosphonates showed that only the group of nitrogen-containing bisphosphonates with pyridinyl sidechains demonstrated significant effects on the cartilage, although not all of these proved effective [10]. In a separate study using the guinea pig OA model, the pyridinyl bisphosphonate risedronate was shown to slow disease progression, as measured by the size and severity of cartilage lesions and the size of osteophytes, by up to 40% [11]. Based upon these preclinical studies, a clinical trial was performed in order to evaluate the effects of risedronate in patients with mild to moderate knee OA. The primary end points were changes in symptoms and function, with secondary end points of changes in joint structure or in markers of joint structure.\n\nBODY.MATERIALS AND METHODS:\nBODY.STUDY DESIGN AND SELECTION OF PATIENTS:\nThe British Study of Risedronate in Structure and Symptoms of Knee OA (BRISK) was a 1-year prospective, double-blind, placebo-controlled study conducted in 10 centres in the UK. Male and female subjects aged 40 to 80 years with mild to moderate medial-compartment knee OA, diagnosed according to the clinical and radiological criteria of the American College of Rheumatology [12], were recruited into the trial.\nOA in at least one knee, designated the signal knee, was required to meet the following clinical and radiographic inclusion criteria. Clinical inclusion criteria were the presence of daily knee pain for at least 1 month out of the 3 months preceding the study, with at least one of the following: age >50 years, morning knee stiffness of <30 minutes, or knee crepitus. The radiographic criteria for inclusion were a joint-space width (JSW) of 2 to 4 mm in the medial tibiofemoral compartment in the semiflexed anterior–posterior (AP) view of the signal knee and a narrower width than in the lateral compartment of the same knee. Patients were also required to have at least one osteophyte in either the medial or the lateral compartment of the tibiofemoral joint. 
Major exclusion criteria were the presence of rheumatic diseases that could be responsible for secondary OA; use of intra-articular hyaluronic acid in the signal knee; knee injury or diagnostic arthroscopy of the signal knee in the 6 months preceding enrollment; a history of knee surgery (including arthroscopy requiring an incision of internal joint components) in the signal knee at any time; intra-articular corticosteroids in the 3 months preceding enrollment; the presence of non-OA causes of knee pain in the signal knee (e.g. anserine bursitis, fibromyalgia, or osteonecrosis); and the use of bisphosphonates within the 12 months preceding enrollment.\nThe subjects gave their written, informed consent before entering the study, which was conducted in accordance with the International Conference on Harmonization (ICH) guidelines for Good Clinical Practice (GCP) and was approved by the UK Multicentre Research Ethical Committee (MREC).\n\nBODY.TREATMENT ASSIGNMENT:\nThe subjects were randomly assigned in a 1:1:1 ratio to one of three arms to receive risedronate at 5 mg or 15 mg or placebo once daily for 1 year. Before randomization, patients were stratified according to their current use of oestrogen or a selective oestrogen receptor modulator.\nThe subjects were instructed to take their study medication with at least 120 mL of water, 30 minutes before breakfast, or, if the medication was taken later in the day, at least 2 hours before or after food intake and at least 30 minutes before bedtime. They were instructed to take their study medication while they were upright and not to lie down for at least 30 minutes afterwards.\n\nBODY.SYMPTOM OUTCOME MEASURES:\nThe outcome instrument for evaluation of risedronate efficacy on symptoms of OA was the Western Ontario and McMaster Universities (WOMAC) OA index [13]. The visual analogue scale (VAS) of the index was used, in which patients assessed each question using a 100-mm scale, with a higher score representing greater symptom severity. The total index score for the signal knee corresponded to the weighted composite of the 24 question scores standardized to a 100-point scale. Scores were also determined for the subscales of pain (5 questions), stiffness (2 questions), and physical function (17 questions). Other symptom outcome measures included the patient global assessment (PGA) of disease, consumption of pain medication, and the use of walking aids. For the PGA, patients answered the following question using a VAS: \"Considering all the ways your OA affects you, how have you been in the last 48 hours?\" Patients marked values on a scale from 0 to 100 mm.\nA step-down reduction in the use of pain medication was effected 5 days before all symptom evaluations. Patients were provided with approximately 30 tablets each of paracetamol (500 mg) and diclofenac (50 mg) to be used as the only pain medication 3 to 5 days before the baseline assessment and at visits at months 3, 6, and 12 (the exit visit). No pain medications were to be used 2 days before the scheduled evaluation date or on the day itself, with the exception of low-dose acetylsalicylic acid (<350 mg/day) for cardiac protection. Rescue analgesia was permitted during the study except for the 2-day washout period before each visit.\n\nBODY.STRUCTURE OUTCOME MEASURES:\nThe outcome measure for assessment of joint structural changes was the mean change from baseline values in minimum JSW of the medial compartment of the knee. 
Radiographs of the knee were taken at baseline and at 1 year using a standardized radiographic method with fluoroscopic positioning of the joint in a semiflexed position [14,15]. By the use of this technique, the anterior and posterior rims of the tibia were aligned (to within 1 mm) for reproducible positioning. Radiographs were subjected to extensive quality control at the radiographic facility before dispatch to the Central Analysis Center [15]. Radiographs were read centrally and their quality control was rechecked before computer software was used to obtain the radiographic magnification. This was determined from measurement of a metal ball placed at the head of the fibula at the time of radiography and was used to adjust the computerized measurement obtained of the minimum medial compartment JSW [15]. The test–retest standard deviation of the difference between radiographs taken 2 days apart for this technique was approximately 0.2 mm, based upon repeat measurements in 199 subjects [15]. A retrospective analysis was performed taking into account the precision of the instrument. Retrospectively, clinically meaningful disease progression was defined as joint-space narrowing of ≥ 0.75 mm or a ≥ 25% loss from baseline values. The ≥ 0.75-mm value is almost four times the 0.2-mm standard deviation observed for the x-ray method.\n\nBODY.STRUCTURE–SYMPTOM RELATION:\nThe relation between knee OA symptoms and radiographic joint-space narrowing was assessed retrospectively; the mean change in symptom scores between baseline to month 12 of the total WOMAC score and pain and function subscales was compared with the magnitude of change in JSW over the study period.\n\nBODY.BONE AND CARTILAGE MARKERS:\nEarly-morning fasting urine and serum samples were collected at baseline and at months 3, 6, and 12 for assessment of markers of bone and cartilage turnover. Bone resorption was assessed by measurement of urinary levels of the N-terminal crosslinking telopeptide of type I collagen (NTX-I, Osteomark; OrthoClinical Diagnostics, High Wycombe, Bucks, UK) [16]. Bone formation was assessed by measurement of bone-specific serum alkaline phosphatase (Ostase, Beckman-Coulter, San Diego, CA, USA) [17] and cartilage degradation was assessed by measurement of urinary levels of C-terminal crosslinking telopeptide of type II collagen (CTX-II, Cartilaps, Nordic Bioscience, Herlev, Denmark) [18]. The intra-assay and interassay coefficients of variation were lower than 10%.\n\nBODY.EVALUATION OF SAFETY:\nPatient-reported adverse events (AEs) were recorded throughout the study. AEs were categorized using the Coding Symbols for Thesaurus of Adverse Reaction Terms (COSTART®) coding dictionary. Clinical laboratory measurements for safety monitoring were made throughout the study. 
Serious AEs were defined as any that resulted in death; were life threatening; resulted in hospitalization; resulted in persistent or significant disability or incapacity; or were judged to be medically significant.\nUpper-GI AEs included the following symptoms and conditions: substernal chest pain; duodenitis; dyspepsia; dysphagia; oesophagitis; gastritis; bleeding gastritis; gastro-oesophageal reflux; oesophageal bleeding; GI bleeding; haematemesis; melena; abdominal pain; ulcers (duodenal, oesophageal, peptic, gastric); bleeding ulcers (duodenal, peptic, gastric); perforated ulcers (duodenal, peptic, gastric); perforated and bleeding ulcers (duodenal, peptic, gastric); and reactivated ulcers (duodenal, peptic, gastric).\n\nBODY.STATISTICAL ANALYSIS:\nTo ensure 80% power to detect a 20% effect of risedronate treatment versus placebo with respect to pain modification (quantified according to the WOMAC pain subscale, assuming a standard deviation of 70 mm on a 0- to 500-mm scale), a 1-year dropout rate of 20%, and a type I error rate of 5% without adjustment for two comparisons with placebo control, the sample size requirement was 100 patients per treatment group.\nAnalyses were undertaken on the intention-to-treat (ITT) population. This was defined as all randomized patients who received at least one dose of study medication. All statistical analyses were performed using a two-sided statistical test with a type-I-error rate of 0.05. Baseline characteristics were compared using Fisher's exact test for categorical variables and the Kruskal–Wallis test for continuous variables. Extended Mantel–Haenszel tests with pooled centres as strata were used for end points with categorical responses. Analysis of variance (ANOVA) methods were used. Symptom analyses were adjusted for baseline value (PGA < WOMAC total or subscale value, as appropriate), pooled study centres, baseline use of oestrogen or selective oestrogen receptor modulators, gender, age, body mass index, and baseline JSW. Mean JSW analyses were adjusted for pooled study centres, baseline use of oestrogen or selective oestrogen receptor modulators, gender, age, body mass index, and baseline JSW as covariates. Each risedronate group was compared with the placebo group. For walking aids, the percentages were compared with placebo using the Cochran–Mantel–Haenszel test after adjusting for pooled centres. Individual AEs and the proportion of clinically meaningful JSW progressors were analysed using Fisher's exact test.\nThe WOMAC scores were calculated in accordance with the WOMAC User's Guide [19]. The total scores were composed of subscales weighted as follows: pain = 42%, stiffness = 21%, and function = 37% [19]. For each subscale, the reported response was the patient's average. If at least two pain items, both stiffness items, or more than four physical function items were omitted, or if the patient's response was unclear, the items were regarded as invalid and the relevant subscale was not included.\n\n\nBODY.RESULTS:\nBODY.PATIENTS:\nTwo hundred and eighty-five patients were considered eligible for the study and were randomized to treatment. Of these, 284 received at least one dose of study medication and were included in the ITT population, and 231 (81%) completed the study (placebo, n = 80; risedronate at 5 mg, n = 80; risedronate at 15 mg, n = 71) (Fig. 1). The number of patients who completed the study and the reasons for withdrawal were similar across treatment groups. 
Table 1 shows the baseline characteristics for the ITT population. These were similar between treatment groups; the average age of the patients was 63.3 years. There were no significant differences in the use of concomitant analgesics between treatment groups. Patients' compliance during study treatment, based on pill counts, was ≥ 83% and was comparable in the three treatment groups.\n\nBODY.SYMPTOM OUTCOME MEASURES:\nThere was an improvement from baseline values in the symptom outcome measure of total WOMAC scores (weighted and unweighted) (unweighted not shown) and the subscales for all treatment groups (Fig. 2). The group given risedronate at 15 mg showed a trend towards improvement from baseline values, although the differences were not statistically significant (P values from 0.10 to 0.33).\nAssessment of PGAs revealed a statistically significant improvement with risedronate at 15 mg compared with placebo at 1 year (-19.4 for risedronate at 15 mg versus -5.7 for placebo, P < 0.001) (Fig. 3). Although all treatment groups showed a significant improvement from baseline values at 3 months, the improvement in the group receiving risedronate at 15 mg continued to increase with time, whereas the level of improvement with placebo or risedronate at 5 mg did not show any further improvement after 6 months.\nAnalysis of the use of walking aids during study treatment showed a statistically significant difference in the proportion of patients who used walking aids in patients treated with risedronate at 15 mg (7 patients, 4% reduction) compared with placebo (21 patients, 8% increase) (P = 0.009) at 12 months compared with the proportion of patients that had reported using a walking aid during the previous year.\n\nBODY.STRUCTURE OUTCOME MEASURES:\nAssessment of the mean change from baseline values in minimum JSW in the medial compartment of the tibiofemoral joint at 1 year showed that there was a trend for patients receiving risedronate at 5 mg (JSW -0.08 ± 0.44 mm) or 15 mg (JSW -0.06 ± 0.25 mm). The change was greater in patients receiving placebo (JSW -0.12 ± 0.42 mm) compared with baseline values. Overall, the difference between treatment groups in loss of JSW at 12 months was not statistically significant (P = 0.275). The loss in JSW from baseline values was statistically significant only in the placebo group (P < 0.05).\nIn terms of detectable progression (i.e. loss of JSW ≥ 25% or ≥ 0.75 mm), the analysis of the distribution of change from baseline values in JSW at 1 year showed a greater presence of detectable progression in the placebo (8%) and risedronate (5 mg) (4%) (P = 0.36) groups than in the risedronate (15 mg) group (1%) (P = 0.067). The patients with JSW loss of >0.75 mm included none of the patients treated with risedronate at 15 mg and 6% of the patients treated with placebo (P = 0.060). Similarly, only 1% of the patients treated with risedronate at 15 mg but 7% of the patients treated with placebo had >25% loss of JSW (P = 0.12).\n\nBODY.STRUCTURE–SYMPTOM RELATION:\nFigure 4 shows the relation between structure and symptoms for this population of patients. The mean WOMAC total score and the scores on the pain and function subscales increased (i.e. symptom severity increased) with increasing loss of JSW. In the group of patients with any loss of JSW, the mean changes of the WOMAC total score and the scores on the pain and function subscales at 1 year were -5.9 mm, -4.6 mm, and -6.3 mm, respectively, indicating that these symptoms were not increasing overall. 
In contrast, for the subset of patients with a loss in JSW of 40% or more, the corresponding mean changes were +1.4 mm, +6.0 mm, and +2.3 mm, indicating increased symptom severity in these patients concurrent with narrowing of their knee-joint space.\n\nBODY.MARKERS OF BIOCHEMICAL TURNOVER:\nRisedronate treatment significantly reduced markers of cartilage degradation (Fig. 5) and bone resorption compared with placebo. At 1 year, treatment with risedronate at 15 mg significantly decreased mean urinary CTX-II values, by -22.8% ± 5.35; urinary NTX-I was reduced by -32.9% ± 4.92 relative to baseline values (P < 0.05). Dose-dependent effects were also observed with the 5-mg dose compared with placebo, but to a lesser magnitude. This finding is consistent with the known pharmacologic effect of risedronate on bone turnover. At 1 year, CTX-II and NTX-I values in the placebo group were significantly higher than those in the risedronate 15-mg group (14.5% ± 5.4 and 17.2% ± 4.9 higher, respectively). Significant decreases in bone alkaline phosphatase were observed in the risedronate groups compared with placebo. At 1 year, the mean decreases in the groups receiving risedronate at 15 mg and 5 mg were 29.1% ± 2.6 and 19.5% ± 2.5, respectively, compared with a mean decrease of 2.7% ± 2.5 in the placebo group (P < 0.001).\n\nBODY.SAFETY:\nThe frequencies of AEs were similar in the two treatment groups (Table 2). There were no clinically meaningful differences between groups in the percentage of patients with AEs in any body system and there were no significant differences in routine clinical chemistry parameters between the risedronate groups and the placebo group. The numbers of patients who dropped out of the study because of AEs were similar. Overall, 34 patients reported a total of 53 serious AEs. Investigators considered four serious AEs as possibly related to study treatment; two of these (rash and diarrhea) were in patients treated with placebo and two (anaemia and increased general joint pain) were in patients treated with risedronate at 5 mg.\nTable 2 provides a summary of adverse events for the ITT population and the frequency of the overall GI AEs and the most common upper-GI AEs. Forty-seven upper-GI events were reported in 38 patients, of which abdominal pain and dyspepsia were the most frequently reported. The majority of the upper-GI AEs occurred in patients with a history of GI disease; there were no significant differences between the groups given risedronate and the group given placebo in the incidence of upper-GI AEs in these patients.\n\n\nBODY.DISCUSSION:\nIncreased evidence of the role of bone in both the initiation and progression of OA has resulted in an interest in drugs that affect bone metabolism and might slow or even halt the process of joint degeneration [6]. The early findings reported here suggest that the bisphosphonate risedronate may have disease-modifying effects in patients with knee OA. A recent cross-sectional study also suggests an association between antiresorptive treatments (oestrogen or bisphosphonate) and improved symptoms and/or decreased bone marrow abnormalities [20].\nPositive trends were observed with risedronate treatment with regard to symptomatic improvement, as assessed by the WOMAC index. Treatment with risedronate at 15 mg resulted in a consistent trend in improvement in WOMAC scores, whereas the group receiving placebo showed less improvement. 
The group receiving risedronate at 15 mg showed a significant improvement in the PGA of OA compared with placebo. Similarly, the percentage of patients who used a walking aid during the study decreased in the group treated with risedronate at 15 mg, contrasting with an increase in the placebo group. While the differences in JSW among the groups were not significant, there was a trend for less loss in the risedronate (15 mg) than in the placebo group. When the data were analysed in a post hoc manner to identify patients with detectable progression (i.e. approximately four times the precision of the measurement), 8% of patients receiving placebo and 1% of patients receiving 15 mg risedronate were found to exhibit this degree of progression. Additionally, risedronate significantly reduced levels of bone resorption and cartilage degradation, as assessed by NTX-I and CTX-II markers, respectively.\nDespite these encouraging results, they were not confirmed in multicentre studies of risedronate treatment for 2 years using a similar protocol [21]. The following provides perspectives comparing the two studies. In our study, there was a dose-dependent trend for improvement in the WOMAC score. In the 2-year North American study, which enrolled 1,232 patients with knee OA, the placebo effect was approximately twice that observed in our study, and was comparable to the magnitude of change observed in the group given 15 mg risedronate in our study. Both studies showed significant decreases in CTX-II, the marker of cartilage degradation, but these were not associated with an attenuation of joint-space loss. This lack of association may be related to the length of observation in these studies. Reijman and colleagues [22] observed 1,235 men and women with OA, followed up over an average period of 6.6 years. The subjects with baseline CTX-II levels in the highest quartile had a sixfold risk of progression of knee OA, defined as a decrease in JSW ≥ 2 mm, in comparison with subjects in the lowest quartile [22]. This suggests that an enriched population of subjects with an elevated rate of cartilage loss combined with a longer study period may be required in order to observe significant treatment effects.\nOur study is one of the first to suggest a correlation between symptoms and structure in OA, although preliminary results with doxycycline in the treatment of obese women with knee OA have reported a significant reduction in the proportion of follow-up visits in which a clinically significant increase in pain occurred, favouring treatment over placebo, and coinciding with a decrease in joint-space narrowing [23]. This finding is important, because it runs contrary to previous results, which have suggested a poor correlation between these disease features [24,25]. The limitations of our study include the small number of patients (n 26,27]. The results highlighted the importance of medial tibial plateau alignment with the central x-ray beam and showed that the standard clinical view of a standing extended knee is subject to considerable variability. In contrast, the fluoroscopic technique is well validated and is less variable in test-retest performance [15,28]. Further studies are required to further explore the possible correlation between symptoms and structure observed in our study. 
If validated, this relation may allow physicians to use the assessment of pain, perhaps in combination with a biomarker such as CTX-II, as a surrogate for other measures of disease progression\n\nBODY.CONCLUSION:\nThis study is one of the first to show a correlation between symptoms and joint structure changes in knee OA. While our findings were suggestive of a beneficial effect of risedronate treatment on preservation of bone and cartilage, these trends seen in this study have not been observed in larger, multicountry cohorts.\n\nBODY.ABBREVIATIONS:\nAE = adverse event; BRISK = British Study of Risedronate in Structure and Symptoms of Knee OA; CTX-II = C-terminal crosslinking telopeptide of type II collagen; GI = gastrointestinal; ITT = intention-to-treat; JSW = joint-space width; NTX-I = N-terminal crosslinking telopeptide of type I collagen; OA = osteoarthritis; PGA = patient global assessment; VAS = visual analogue scale; WOMAC = Western Ontario and McMaster Universities [OA index].\n\nBODY.COMPETING INTERESTS:\nThis manuscript was sponsored by a grant from Procter & Gamble.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nTDS, JFB, DJV, and JMM planned the study and prepared the manuscript. GAC performed the statistical analysis. JCB-W supervised radiological measurements. PG performed the marker assays. PGC assisted with the manuscript and recruited patients. All authors read and approved the final manuscript.\n\n**Question:** Compared to placebo what was the result of risedronate 5 mg, risedronate 15 mg on cartilage degradation and bone resorption?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13867
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA double-blind, randomized, placebo-controlled pilot trial to determine the efficacy and safety of ibudilast, a potential glial attenuator, in chronic migraine\n\n\n\nABSTRACT.BACKGROUND:\nChronic migraine (CM) is problematic, and there are few effective treatments. Recently, it has been hypothesized that glial activation may be a contributor to migraine; therefore, this study investigated whether the potential glial inhibitor, ibudilast, could attenuate CM.\n\nABSTRACT.METHODS:\nThe study was of double-blind, randomized, placebo-controlled, two-period crossover design. Participants were randomized to receive either ibudilast (40 mg twice daily) or placebo treatment for 8 weeks. Subsequently, the participants underwent a 4-week washout period followed by a second 8-week treatment block with the alternative treatment. CM participants completed a headache diary 4 weeks before randomization throughout both treatment periods and 4 weeks after treatment. Questionnaires assessing quality of life and cutaneous allodynia were collected on eight occasions throughout the study.\n\nABSTRACT.RESULTS:\nA total of 33 participants were randomized, and 14 participants completed the study. Ibudilast was generally well tolerated with mild, transient adverse events, principally nausea. Eight weeks of ibudilast treatment did not reduce the frequency of moderate to severe headache or of secondary outcome measures such as headache index, intake of symptomatic medications, quality of life or change in cutaneous allodynia.\n\nABSTRACT.CONCLUSION:\nUsing the current regimen, ibudilast does not improve migraine with CM participants.\n\nBODY.INTRODUCTION:\nChronic migraine (CM) is a disabling and undertreated disorder1 which represents a clinically challenging problem that impacts significantly on the quality of life of affected individuals and imposes a large economic burden upon society due to lost productivity.2,3 Despite this burden, the pathophysiology behind migraine and especially the processes of migraine chronification remain poorly understood. A growing body of preclinical and clinical evidence suggests that acquired sensitization plays a significant role in the pathology of migraine and the process of chronification.4\nRecent evidence supports the importance of neuronal–glial interactions as a potential mechanism for chronic pain. 
Glia are located in the central nervous system, and under basal condition, they perform housekeeping functions and maintain neural homeostasis.5 However, when glia are activated, they become immunoresponsive and modulate pain by releasing neuroexcitatory signals that can increase neuronal excitability,6,7 activate neighboring glia, and produce nociceptive mediators (such as nitric oxide, excitatory amino acids, and proinflammatory cytokines).8\nNonclinical models support a role of neuron–glia interactions in the initiation of migraine attacks, as glia have been shown to release a range of inflammatory cytokines and chemokines in animal models evoking some features of migraine.9 In nonclinical models, blockade of glial activation and proinflammatory mediators released by glia caused reduction in hypersensitivity and allodynia, respectively.10,11 Additionally, a clinical study has reported elevated levels of proinflammatory cytokines in the cerebrospinal fluid of CM participants.12 Such tonically increased glial activity may underlie the progression from episodic to CM.13 Furthermore, in a separate study, levels of the chemokine interleukin 8 were found to be elevated in the jugular venous blood of patients during a migraine attack.14 Interleukin 8 can be synthesized by glia in response to other proinflammatory mediators including tumor necrosis factor α, lending additional support to the neurogenic inflammation model of migraine.15\nThus, it seems likely that nociceptive signaling during migraine attacks releases mediators that activate glia. Activated glia then release a range of proinflammatory mediators such as cytokines, chemokines, reactive oxygen species, and prostaglandins, which not only facilitate pain transmission but also activate further glia, creating a positive feedback loop. After a stimulus is resolved, experimental evidence suggests that microglia may remain “primed,” entering a state in which they do not actively produce proinflammatory substances, yet they over-respond to subsequent stimuli, resulting in increased proinflammatory cytokine release and an exaggerated pain response.16 This could lead to an increase in migraine frequency as triggers that were previously subthreshold may now initiate headache.\nAlthough glial involvement in the pathogenesis of migraine and in the transformation of episodic migraine to CM has been suggested by a number of authors,9,13,17–19 to our knowledge no treatments targeting this mechanism have been trialed specifically in high frequency migraineurs. 
However, encouraging results from an open-label retrospective study in participants with varying chronic daily headache phenotypes following treatment with the potential glial attenuator minocycline have been reported.20 This study, published only in abstract form, found a significant decrease in headache frequency following 2 months of minocycline administration when used as an adjunctive preventative medication.20 However, the efficacy of minocycline in animal studies, suggests that it is only effective before chronic pain is established and not subsequently,10 which may not be applicable to chronic pain patients.\nIbudilast is a nonselective phosphodiesterase inhibitor that has been used in Japan for 20 years to treat bronchial asthma,21 post-stroke dizziness,22 and ocular allergies.23 Ibudilast inhibits tracheal smooth muscle contractility24 and platelet aggregation,25 improves cerebral blood flow,22 and attenuates allergic reactions.26,27\nAs ibudilast is administered orally, is well tolerated, and has good blood–brain barrier permeability,28–30 it is a suitable candidate for the treatment of chronic pain. In nonclinical models, ibudilast attenuated microglial activation (reduced withdrawal behaviors), reduced production of proinflammatory cytokines, enhanced the production of anti-inflammatory cytokines, and improved the safety profile of opioids (reduced tolerance and attenuated reward and dependence).23,31–33 Moreover, ibudilast exhibited immunomodulatory actions in multiple sclerosis patients30 and also displayed inhibition of proinflammatory cytokines from peripheral leukocytes.34\nIn the present study, the efficacy of the putative glial attenuator ibudilast in participants with high frequency migraine was investigated. The study investigated the efficacy of ibudilast on the frequency of moderate to severe headache and other headache indices (such as headache index, headache-related questionnaires, and acute symptomatic medication intake) in CM patients over 8 weeks of ibudilast treatment compared with placebo treatment.\n\nBODY.METHODS:\nBODY.STUDY APPROVAL:\nEthical approval was obtained from the Human Research Ethics Committee of the Royal Adelaide Hospital, Adelaide, South Australia. All the participants gave their written informed consent before the commencement of the study. The study was conducted at the Pain and Anaesthesia Research Clinic, Royal Adelaide Hospital, Adelaide, Australia, and in accordance with the Principles of International Conference on Harmonization Good Clinical Practice, the Declaration of Helsinki, and the Australian National Statement on Ethical Conduct in Research Involving Humans.\nThe study was of double-blind, randomized, placebo-controlled, two-period crossover design. Participants were randomized to initially receive either ibudilast or placebo treatment for 8 weeks. Subsequently, the participants underwent a 4-week washout period followed by a second 8-week treatment block with the alternative treatment (Clinicaltrials.gov NCT01389193).\nBODY.PARTICIPANT SELECTION:\nParticipants were recruited from June 2013 to May 2015. 
Criteria for CM were defined by the revised second edition of the International Classification of Headache Disorders (ICHD-II).35 Key inclusion criteria were:\nMen and women aged between 18 and 65 years\nMigraine with or without aura, as diagnosed according to the ICHD-II36\nOnset of migraine before 50 years of age\nHeadache for ≥15 days per month\nMigraine-like headache for ≥8 days per month, as per the International Headache Society (IHS) guidelines37\n\nBODY.KEY EXCLUSION CRITERIA:\nParticipants were excluded from the study on the basis of the following criteria: change in type or dose of migraine prophylactic medication in last 3 months, medication overuse headache as diagnosed according to the ICHD-II35, posttraumatic headache as diagnosed according to the ICHD-II36, another dominant chronic pain condition, an active inflammatory disease such as rheumatoid arthritis, a history of recent cerebrovascular disorder, inability to provide written informed consent, inability to read and write in English, presence of severe psychological/psychiatric disorders, recent history of significant trauma, recent history of drug or alcohol abuse, clinically significant findings on screening blood sample results, current malignancy, known hypersensitivity to ibudilast or excipients in Ketas® formulation, renal or hepatic impairment, pregnancy, lack of adequate contraception, and breastfeeding.\n\n\nBODY.STUDY MEDICATION:\nThe study used a delayed-release ibudilast product, Ketas®, which was obtained from Kyorin Pharmaceuticals Industries, Ltd (Tokyo, Japan). To blind the treatment allocation, the Ketas® capsule was overencapsulated with the space filled with microcrystalline cellulose. The placebo capsule was of the same size and color and filled with microcrystalline cellulose, so that the active and placebo medications were identical in appearance and weight. Each active capsule contained 10 mg ibudilast. Four capsules were self-administered by participants orally, twice daily for each 8-week treatment block. No treatment was administered during the washout.\n\nBODY.STUDY SCHEDULE:\nBODY.4 WEEKS PRIOR TO BASELINE VISIT:\nAt screening, potential participants underwent,\nMedical history including medication history\nHeadache diagnosis based upon retrospective information from the participant confirmed using baseline headache diary data\nCompletion of the Hospital Anxiety and Depression Scale\nPhysical examination\nUrine pregnancy test for women of childbearing potential\nUrine drug screen to rule out the presence of nonprescribed drugs of abuse\nBlood samples to assess hematology (4 mL), biochemistry including renal function and hepatic function (8 mL).\n\nAfter successful completion of the screening visit and confirmation of study eligibility, participants were provided with a headache diary to complete at least 4 weeks prior to the baseline visit to confirm study eligibility. Standardized education and instructions on how to complete the diary was provided by the experimenter, and the participants were required to fill in the headache diary daily. The headache diary recorded a comprehensive assessment of the headache experienced by participants and consisted of head pain characteristics, headache frequency, average headache intensity (11-point numerical rating scale), duration of headache (number of hours), and intake of symptomatic headache treatments (timing, type, and amount of medication consumed). 
The headache index was calculated by the summation of headache duration (hours) with headache intensity (11-point numerical rating scale). The dose of opioid-containing analgesics used by each patient was converted to oral morphine equivalent dose based on the local Royal Adelaide Hospital Pain Management Unit opioid conversion table.\n\nBODY.BASELINE VISIT:\nDuring the baseline visit, the headache diary was collected, and participants who were eligible to continue in the study were randomized to either ibudilast or placebo treatment in the first period. Participants were asked to complete two brief self-administered questionnaires. The 6-point Headache Impact Test (HIT-6) evaluated the impact of headache on the quality of life, reflecting upon the previous month,38 and the 12-point Allodynia Symptom Checklist (ASC-12) indicated whether, and quantified how, increased pain sensitivity impacted upon activities of day-to-day life.39\nAbout 14 mL of blood was collected to assess safety blood profile (blood biochemistry [including the assessment of renal function], liver function tests and hematology). Participants were advised to take only the minimum amount of medications required to control their pain; however, they were not specifically informed to reduce their analgesic intake. Participants were asked to complete another headache diary for the 8-week treatment period. The study medication was dispensed to the participants prior to discharge.\nParticipants were required to complete the headache diary throughout the two treatment periods, during the washout phase and for 4 weeks after ceasing the study medication.\n\nBODY.IN STUDY VISITS:\nInspection of headache diary, assessment of adverse events, completion of questionnaires, and safety bloods were taken at weeks 2, 4, and 8 of each period. Figure 1 shows the flowchart of the study design.\n\n\n\nBODY.STATISTICAL ANALYSIS:\nAll the participants who received at least one dose of the study medication were included in the safety population. However, the efficacy data were only analyzed for completers, given the crossover nature of the trial.\nGraphPad Prism 6.0 (GraphPad Software; San Diego, CA, USA) was used for all statistical analysis. The Wilcoxon matched pairs signed rank test was used to calculate the study medication adherence between the CM participants while in different treatment groups. Two-way analysis of variance tests with repeated measures and Sidak adjustments for multiple comparisons were used to determine the differences between treatments in the frequency of moderate to severe headache experienced, headache index, frequency of symptomatic medication intake, average opioid intake, HIT-6 scores, and ASC-12 scores. Missing values were replaced by the last observed value for that variable (last observation carried forward). All results are presented as mean ± standard deviation (SD) unless stated otherwise. Significance was set at P<0.05.\n\n\nBODY.RESULTS:\nBODY.DEMOGRAPHICS OF CM PARTICIPANTS:\nA total of 33 patients were randomized to treatment. Fourteen CM participants completed both the treatments of the study and returned completed headache diaries suitable for analysis. Four CM participants completed both the treatments of the study but did not return adequate diaries for analysis and therefore were excluded from the analysis (Figure 2).\nFifteen participants (nine received ibudilast) did not complete the study and hence were also excluded from the analysis. 
Of these fifteen participants, two participants lost interest in the study; one was excluded due to work commitment; four were excluded because of loss to follow-up; one was excluded as that participant had changed his medication; one was excluded after receiving 1 week of treatment as she was found to be pregnant (she was on placebo); one was excluded after experiencing heart palpitation; four participants were withdrawn by the investigator because of worsening headaches; and one participant was excluded by the investigator as the patient experienced tingling sensation.\nDemographics of the participants are listed in Table 1. The mean age was 46 years (SD 13). The study consisted of 11 men and 22 women. The average duration of headache was 21 years (SD 15). The mean daily morphine equivalent dose was 4.2 mg (SD10).\n\nBODY.COMPLIANCE TO STUDY MEDICATION AND SAFETY:\nIn this study, all the participants had good adherence to study medications as demonstrated with the return of unused study capsules. There was no treatment group difference (P=0.6) with the number of capsules returned (placebo vs ibudilast [mean {SD}: 2.4 {2.4} vs 7.7 {15.6}]).\nAdverse events experienced in the study were of mild intensity, and all the reported adverse events are listed in Table 2. Seventeen of the twenty-three (60%) CM participants who were on ibudilast treatments reported adverse events, whereas eight of the twenty-three (35%) CM participants reported adverse events during placebo treatments. The most frequent adverse event in both the groups was worsening migraine, reported by 4/28 under ibudilast and 2/23 on placebo. The next most frequently reported adverse event was nausea, reported by 4/28 under ibudilast and by no participant on placebo.\n\nBODY.HEADACHES:\nWhen CM participants were on ibudilast treatment, they did not experience reduced frequency of moderate to severe headache compared with placebo treatment (P=0.9) in either treatment weeks 1–4 (mean difference −0.9, 95% confidence interval [CI] −9.1 to 7.3) or treatment weeks 5–8 (mean difference 0.1, 95% CI −8.1 to 8.3). However, irrespective of treatments, in treatment weeks 5–8 of each block, there was a significant reduction (P=0.02) in the frequency of moderate to severe headaches (Figure 3).\nLikewise, when CM participants were on ibudilast treatment, they did not experience a reduction in headache index when compared with placebo treatment (P=0.9) in either treatment weeks 1–4 (mean difference −5.7, 95% CI −56.6 to 45.2) or treatment weeks 5–8 (mean difference −0.4, 95% CI −51.3 to 50.5). However, in treatment weeks 5–8, there was a significant reduction in the headache index (P=0.003) (Figure 4).\n\nBODY.ACUTE SYMPTOMATIC MEDICATION INTAKE AND SELF-REPORTED QUESTIONNAIRES:\nNo treatment differences were detected in the frequency intake of symptomatic medication (P=0.6) in either treatment weeks 1–4 (mean difference −2.6, 95% CI −11.7 to 6.6) or treatment weeks 5–8 (mean difference −1.9, 95% CI −11 to 7.3) (Figure 5A). 
The oral morphine equivalent was also calculated for the medication that was taken by the CM participants; however, no treatment differences were detected (P=1) in either treatment weeks 1–4 (mean difference −0.5, 95% CI −11.2 to 10.2) or treatment weeks 5–8 (mean difference 0.5, 95% CI −10.2 to 11.2) (Figure 5B).\nScores on the HIT-6 and ASC-12 questionnaires did not differ between the treatment groups during any study time period (baseline, weeks 2, 4, and 8 of treatment); P=0.3 and P=0.9, respectively (data not shown).\n\n\nBODY.DISCUSSION:\nIbudilast caused no change in the frequency of moderate to severe headache when compared with placebo treatment. Furthermore, other headache indices such as headache index, frequency of acute symptomatic medication intake, HIT-6 scores or ASC-12 scores were not reduced when CM patients were on ibudilast treatment. Additionally, worsening migraine was reported at a higher frequency under ibudilast treatment than on placebo. However, given the small sample sizes used in this study, lack of statistical significance should be considered with caution, but there was no trend for a benefit with ibudilast. Interestingly, during treatment weeks 5–8 of each period, the frequency of moderate to severe headache and headache index for both the treatment groups were significantly reduced compared to treatment weeks 1–4. This observation could be an indicator of participants having a positive expectancy of the study drug.40\nEven though ibudilast may not be an effective treatment for CM, this study further provides a safety profile of ibudilast in a homogeneous migraine cohort. The adverse events experienced by participants were mild and transient in nature, and most of the adverse events were resolved once ibudilast was halted.\nOur results are in agreement with a recently published finding by our group, which found ibudilast to have no effect in reducing headache index or in reducing opioid intake in medication overuse headache patients.41 However, the study also measured toll-like receptor (TLR) reactivity from peripheral blood mononuclear cells and found ibudilast to significantly reduce the proinflammatory cytokine (interleukin 1β) levels from ex vivo stimulation with TLR-2 or TLR-4 agonists. This finding confirmed that ibudilast was likely to be inhibiting an inflammatory pathway, at least peripherally. However, the peripheral target may be different from the mechanism responsible for the headache, the treatment period may have been insufficient to reverse the long-term changes associated with CM, or ibudilast may not have been acting centrally.\nIt is unclear from this study whether glia are involved in CM. Ibudilast has demonstrated good central nervous system partitioning28 but as it was not found to have any effect on the clinical endpoints, it suggests that the mechanisms responsible for CM may be signaling through another pathway that is currently unidentified, or neurons may play a stronger role. Thus more research would be required to address whether glial activation is a likely contributor for CM. The recent demonstration of imaging of activated glia in patients with chronic low back pain provides a potential biomarker for future studies.42\nEven though there have been reports of pro- and anti-inflammatory cytokines being a predictor of headaches, the examination of peripheral targets may not be adequate. 
In a CM study, elevation of tumor necrosis factor α was found in the cerebrospinal fluid but not in the plasma,12 indicating that peripheral biomarkers might not be helpful.\n\nBODY.LIMITATIONS:\nThe sample size for this study was small as the required number of participants could not be recruited within the time frame, so the study ended prematurely (intended target was 40 CM participants). This was due to the difficulty in finding CM participants who fitted the criteria, and this is common in research involving chronic headache patients.43,44\n\nBODY.CONCLUSION:\nThe present study showed ibudilast dosed at 40 mg twice daily for 8 weeks is not a promising treatment for CM. However, this study adds to the literature of the safety and tolerability of ibudilast in a CM cohort, which to our knowledge has not been investigated previously.\n\n**Question:** Compared to placebo what was the result of ibudilast (40 mg twice daily) on the number of capsules returned?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13868
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA double-blind, randomized, placebo-controlled pilot trial to determine the efficacy and safety of ibudilast, a potential glial attenuator, in chronic migraine\n\n\n\nABSTRACT.BACKGROUND:\nChronic migraine (CM) is problematic, and there are few effective treatments. Recently, it has been hypothesized that glial activation may be a contributor to migraine; therefore, this study investigated whether the potential glial inhibitor, ibudilast, could attenuate CM.\n\nABSTRACT.METHODS:\nThe study was of double-blind, randomized, placebo-controlled, two-period crossover design. Participants were randomized to receive either ibudilast (40 mg twice daily) or placebo treatment for 8 weeks. Subsequently, the participants underwent a 4-week washout period followed by a second 8-week treatment block with the alternative treatment. CM participants completed a headache diary 4 weeks before randomization throughout both treatment periods and 4 weeks after treatment. Questionnaires assessing quality of life and cutaneous allodynia were collected on eight occasions throughout the study.\n\nABSTRACT.RESULTS:\nA total of 33 participants were randomized, and 14 participants completed the study. Ibudilast was generally well tolerated with mild, transient adverse events, principally nausea. Eight weeks of ibudilast treatment did not reduce the frequency of moderate to severe headache or of secondary outcome measures such as headache index, intake of symptomatic medications, quality of life or change in cutaneous allodynia.\n\nABSTRACT.CONCLUSION:\nUsing the current regimen, ibudilast does not improve migraine with CM participants.\n\nBODY.INTRODUCTION:\nChronic migraine (CM) is a disabling and undertreated disorder1 which represents a clinically challenging problem that impacts significantly on the quality of life of affected individuals and imposes a large economic burden upon society due to lost productivity.2,3 Despite this burden, the pathophysiology behind migraine and especially the processes of migraine chronification remain poorly understood. A growing body of preclinical and clinical evidence suggests that acquired sensitization plays a significant role in the pathology of migraine and the process of chronification.4\nRecent evidence supports the importance of neuronal–glial interactions as a potential mechanism for chronic pain. 
Glia are located in the central nervous system, and under basal condition, they perform housekeeping functions and maintain neural homeostasis.5 However, when glia are activated, they become immunoresponsive and modulate pain by releasing neuroexcitatory signals that can increase neuronal excitability,6,7 activate neighboring glia, and produce nociceptive mediators (such as nitric oxide, excitatory amino acids, and proinflammatory cytokines).8\nNonclinical models support a role of neuron–glia interactions in the initiation of migraine attacks, as glia have been shown to release a range of inflammatory cytokines and chemokines in animal models evoking some features of migraine.9 In nonclinical models, blockade of glial activation and proinflammatory mediators released by glia caused reduction in hypersensitivity and allodynia, respectively.10,11 Additionally, a clinical study has reported elevated levels of proinflammatory cytokines in the cerebrospinal fluid of CM participants.12 Such tonically increased glial activity may underlie the progression from episodic to CM.13 Furthermore, in a separate study, levels of the chemokine interleukin 8 were found to be elevated in the jugular venous blood of patients during a migraine attack.14 Interleukin 8 can be synthesized by glia in response to other proinflammatory mediators including tumor necrosis factor α, lending additional support to the neurogenic inflammation model of migraine.15\nThus, it seems likely that nociceptive signaling during migraine attacks releases mediators that activate glia. Activated glia then release a range of proinflammatory mediators such as cytokines, chemokines, reactive oxygen species, and prostaglandins, which not only facilitate pain transmission but also activate further glia, creating a positive feedback loop. After a stimulus is resolved, experimental evidence suggests that microglia may remain “primed,” entering a state in which they do not actively produce proinflammatory substances, yet they over-respond to subsequent stimuli, resulting in increased proinflammatory cytokine release and an exaggerated pain response.16 This could lead to an increase in migraine frequency as triggers that were previously subthreshold may now initiate headache.\nAlthough glial involvement in the pathogenesis of migraine and in the transformation of episodic migraine to CM has been suggested by a number of authors,9,13,17–19 to our knowledge no treatments targeting this mechanism have been trialed specifically in high frequency migraineurs. 
However, encouraging results from an open-label retrospective study in participants with varying chronic daily headache phenotypes following treatment with the potential glial attenuator minocycline have been reported.20 This study, published only in abstract form, found a significant decrease in headache frequency following 2 months of minocycline administration when used as an adjunctive preventative medication.20 However, the efficacy of minocycline in animal studies, suggests that it is only effective before chronic pain is established and not subsequently,10 which may not be applicable to chronic pain patients.\nIbudilast is a nonselective phosphodiesterase inhibitor that has been used in Japan for 20 years to treat bronchial asthma,21 post-stroke dizziness,22 and ocular allergies.23 Ibudilast inhibits tracheal smooth muscle contractility24 and platelet aggregation,25 improves cerebral blood flow,22 and attenuates allergic reactions.26,27\nAs ibudilast is administered orally, is well tolerated, and has good blood–brain barrier permeability,28–30 it is a suitable candidate for the treatment of chronic pain. In nonclinical models, ibudilast attenuated microglial activation (reduced withdrawal behaviors), reduced production of proinflammatory cytokines, enhanced the production of anti-inflammatory cytokines, and improved the safety profile of opioids (reduced tolerance and attenuated reward and dependence).23,31–33 Moreover, ibudilast exhibited immunomodulatory actions in multiple sclerosis patients30 and also displayed inhibition of proinflammatory cytokines from peripheral leukocytes.34\nIn the present study, the efficacy of the putative glial attenuator ibudilast in participants with high frequency migraine was investigated. The study investigated the efficacy of ibudilast on the frequency of moderate to severe headache and other headache indices (such as headache index, headache-related questionnaires, and acute symptomatic medication intake) in CM patients over 8 weeks of ibudilast treatment compared with placebo treatment.\n\nBODY.METHODS:\nBODY.STUDY APPROVAL:\nEthical approval was obtained from the Human Research Ethics Committee of the Royal Adelaide Hospital, Adelaide, South Australia. All the participants gave their written informed consent before the commencement of the study. The study was conducted at the Pain and Anaesthesia Research Clinic, Royal Adelaide Hospital, Adelaide, Australia, and in accordance with the Principles of International Conference on Harmonization Good Clinical Practice, the Declaration of Helsinki, and the Australian National Statement on Ethical Conduct in Research Involving Humans.\nThe study was of double-blind, randomized, placebo-controlled, two-period crossover design. Participants were randomized to initially receive either ibudilast or placebo treatment for 8 weeks. Subsequently, the participants underwent a 4-week washout period followed by a second 8-week treatment block with the alternative treatment (Clinicaltrials.gov NCT01389193).\nBODY.PARTICIPANT SELECTION:\nParticipants were recruited from June 2013 to May 2015. 
Criteria for CM were defined by the revised second edition of the International Classification of Headache Disorders (ICHD-II).35 Key inclusion criteria were:\nMen and women aged between 18 and 65 years\nMigraine with or without aura, as diagnosed according to the ICHD-II36\nOnset of migraine before 50 years of age\nHeadache for ≥15 days per month\nMigraine-like headache for ≥8 days per month, as per the International Headache Society (IHS) guidelines37\n\nBODY.KEY EXCLUSION CRITERIA:\nParticipants were excluded from the study on the basis of the following criteria: change in type or dose of migraine prophylactic medication in last 3 months, medication overuse headache as diagnosed according to the ICHD-II35, posttraumatic headache as diagnosed according to the ICHD-II36, another dominant chronic pain condition, an active inflammatory disease such as rheumatoid arthritis, a history of recent cerebrovascular disorder, inability to provide written informed consent, inability to read and write in English, presence of severe psychological/psychiatric disorders, recent history of significant trauma, recent history of drug or alcohol abuse, clinically significant findings on screening blood sample results, current malignancy, known hypersensitivity to ibudilast or excipients in Ketas® formulation, renal or hepatic impairment, pregnancy, lack of adequate contraception, and breastfeeding.\n\n\nBODY.STUDY MEDICATION:\nThe study used a delayed-release ibudilast product, Ketas®, which was obtained from Kyorin Pharmaceuticals Industries, Ltd (Tokyo, Japan). To blind the treatment allocation, the Ketas® capsule was overencapsulated with the space filled with microcrystalline cellulose. The placebo capsule was of the same size and color and filled with microcrystalline cellulose, so that the active and placebo medications were identical in appearance and weight. Each active capsule contained 10 mg ibudilast. Four capsules were self-administered by participants orally, twice daily for each 8-week treatment block. No treatment was administered during the washout.\n\nBODY.STUDY SCHEDULE:\nBODY.4 WEEKS PRIOR TO BASELINE VISIT:\nAt screening, potential participants underwent,\nMedical history including medication history\nHeadache diagnosis based upon retrospective information from the participant confirmed using baseline headache diary data\nCompletion of the Hospital Anxiety and Depression Scale\nPhysical examination\nUrine pregnancy test for women of childbearing potential\nUrine drug screen to rule out the presence of nonprescribed drugs of abuse\nBlood samples to assess hematology (4 mL), biochemistry including renal function and hepatic function (8 mL).\n\nAfter successful completion of the screening visit and confirmation of study eligibility, participants were provided with a headache diary to complete at least 4 weeks prior to the baseline visit to confirm study eligibility. Standardized education and instructions on how to complete the diary was provided by the experimenter, and the participants were required to fill in the headache diary daily. The headache diary recorded a comprehensive assessment of the headache experienced by participants and consisted of head pain characteristics, headache frequency, average headache intensity (11-point numerical rating scale), duration of headache (number of hours), and intake of symptomatic headache treatments (timing, type, and amount of medication consumed). 
The headache index was calculated by the summation of headache duration (hours) with headache intensity (11-point numerical rating scale). The dose of opioid-containing analgesics used by each patient was converted to oral morphine equivalent dose based on the local Royal Adelaide Hospital Pain Management Unit opioid conversion table.\n\nBODY.BASELINE VISIT:\nDuring the baseline visit, the headache diary was collected, and participants who were eligible to continue in the study were randomized to either ibudilast or placebo treatment in the first period. Participants were asked to complete two brief self-administered questionnaires. The 6-point Headache Impact Test (HIT-6) evaluated the impact of headache on the quality of life, reflecting upon the previous month,38 and the 12-point Allodynia Symptom Checklist (ASC-12) indicated whether, and quantified how, increased pain sensitivity impacted upon activities of day-to-day life.39\nAbout 14 mL of blood was collected to assess safety blood profile (blood biochemistry [including the assessment of renal function], liver function tests and hematology). Participants were advised to take only the minimum amount of medications required to control their pain; however, they were not specifically informed to reduce their analgesic intake. Participants were asked to complete another headache diary for the 8-week treatment period. The study medication was dispensed to the participants prior to discharge.\nParticipants were required to complete the headache diary throughout the two treatment periods, during the washout phase and for 4 weeks after ceasing the study medication.\n\nBODY.IN STUDY VISITS:\nInspection of headache diary, assessment of adverse events, completion of questionnaires, and safety bloods were taken at weeks 2, 4, and 8 of each period. Figure 1 shows the flowchart of the study design.\n\n\n\nBODY.STATISTICAL ANALYSIS:\nAll the participants who received at least one dose of the study medication were included in the safety population. However, the efficacy data were only analyzed for completers, given the crossover nature of the trial.\nGraphPad Prism 6.0 (GraphPad Software; San Diego, CA, USA) was used for all statistical analysis. The Wilcoxon matched pairs signed rank test was used to calculate the study medication adherence between the CM participants while in different treatment groups. Two-way analysis of variance tests with repeated measures and Sidak adjustments for multiple comparisons were used to determine the differences between treatments in the frequency of moderate to severe headache experienced, headache index, frequency of symptomatic medication intake, average opioid intake, HIT-6 scores, and ASC-12 scores. Missing values were replaced by the last observed value for that variable (last observation carried forward). All results are presented as mean ± standard deviation (SD) unless stated otherwise. Significance was set at P<0.05.\n\n\nBODY.RESULTS:\nBODY.DEMOGRAPHICS OF CM PARTICIPANTS:\nA total of 33 patients were randomized to treatment. Fourteen CM participants completed both the treatments of the study and returned completed headache diaries suitable for analysis. Four CM participants completed both the treatments of the study but did not return adequate diaries for analysis and therefore were excluded from the analysis (Figure 2).\nFifteen participants (nine received ibudilast) did not complete the study and hence were also excluded from the analysis. 
Of these fifteen participants, two participants lost interest in the study; one was excluded due to work commitment; four were excluded because of loss to follow-up; one was excluded as that participant had changed his medication; one was excluded after receiving 1 week of treatment as she was found to be pregnant (she was on placebo); one was excluded after experiencing heart palpitation; four participants were withdrawn by the investigator because of worsening headaches; and one participant was excluded by the investigator as the patient experienced tingling sensation.\nDemographics of the participants are listed in Table 1. The mean age was 46 years (SD 13). The study consisted of 11 men and 22 women. The average duration of headache was 21 years (SD 15). The mean daily morphine equivalent dose was 4.2 mg (SD10).\n\nBODY.COMPLIANCE TO STUDY MEDICATION AND SAFETY:\nIn this study, all the participants had good adherence to study medications as demonstrated with the return of unused study capsules. There was no treatment group difference (P=0.6) with the number of capsules returned (placebo vs ibudilast [mean {SD}: 2.4 {2.4} vs 7.7 {15.6}]).\nAdverse events experienced in the study were of mild intensity, and all the reported adverse events are listed in Table 2. Seventeen of the twenty-three (60%) CM participants who were on ibudilast treatments reported adverse events, whereas eight of the twenty-three (35%) CM participants reported adverse events during placebo treatments. The most frequent adverse event in both the groups was worsening migraine, reported by 4/28 under ibudilast and 2/23 on placebo. The next most frequently reported adverse event was nausea, reported by 4/28 under ibudilast and by no participant on placebo.\n\nBODY.HEADACHES:\nWhen CM participants were on ibudilast treatment, they did not experience reduced frequency of moderate to severe headache compared with placebo treatment (P=0.9) in either treatment weeks 1–4 (mean difference −0.9, 95% confidence interval [CI] −9.1 to 7.3) or treatment weeks 5–8 (mean difference 0.1, 95% CI −8.1 to 8.3). However, irrespective of treatments, in treatment weeks 5–8 of each block, there was a significant reduction (P=0.02) in the frequency of moderate to severe headaches (Figure 3).\nLikewise, when CM participants were on ibudilast treatment, they did not experience a reduction in headache index when compared with placebo treatment (P=0.9) in either treatment weeks 1–4 (mean difference −5.7, 95% CI −56.6 to 45.2) or treatment weeks 5–8 (mean difference −0.4, 95% CI −51.3 to 50.5). However, in treatment weeks 5–8, there was a significant reduction in the headache index (P=0.003) (Figure 4).\n\nBODY.ACUTE SYMPTOMATIC MEDICATION INTAKE AND SELF-REPORTED QUESTIONNAIRES:\nNo treatment differences were detected in the frequency intake of symptomatic medication (P=0.6) in either treatment weeks 1–4 (mean difference −2.6, 95% CI −11.7 to 6.6) or treatment weeks 5–8 (mean difference −1.9, 95% CI −11 to 7.3) (Figure 5A). 
The oral morphine equivalent was also calculated for the medication that was taken by the CM participants; however, no treatment differences were detected (P=1) in either treatment weeks 1–4 (mean difference −0.5, 95% CI −11.2 to 10.2) or treatment weeks 5–8 (mean difference 0.5, 95% CI −10.2 to 11.2) (Figure 5B).\nScores on the HIT-6 and ASC-12 questionnaires did not differ between the treatment groups during any study time period (baseline, weeks 2, 4, and 8 of treatment); P=0.3 and P=0.9, respectively (data not shown).\n\n\nBODY.DISCUSSION:\nIbudilast caused no change in the frequency of moderate to severe headache when compared with placebo treatment. Furthermore, other headache indices such as headache index, frequency of acute symptomatic medication intake, HIT-6 scores or ASC-12 scores were not reduced when CM patients were on ibudilast treatment. Additionally, worsening migraine was reported at a higher frequency under ibudilast treatment than on placebo. However, given the small sample sizes used in this study, lack of statistical significance should be considered with caution, but there was no trend for a benefit with ibudilast. Interestingly, during treatment weeks 5–8 of each period, the frequency of moderate to severe headache and headache index for both the treatment groups were significantly reduced compared to treatment weeks 1–4. This observation could be an indicator of participants having a positive expectancy of the study drug.40\nEven though ibudilast may not be an effective treatment for CM, this study further provides a safety profile of ibudilast in a homogeneous migraine cohort. The adverse events experienced by participants were mild and transient in nature, and most of the adverse events were resolved once ibudilast was halted.\nOur results are in agreement with a recently published finding by our group, which found ibudilast to have no effect in reducing headache index or in reducing opioid intake in medication overuse headache patients.41 However, the study also measured toll-like receptor (TLR) reactivity from peripheral blood mononuclear cells and found ibudilast to significantly reduce the proinflammatory cytokine (interleukin 1β) levels from ex vivo stimulation with TLR-2 or TLR-4 agonists. This finding confirmed that ibudilast was likely to be inhibiting an inflammatory pathway, at least peripherally. However, the peripheral target may be different from the mechanism responsible for the headache, the treatment period may have been insufficient to reverse the long-term changes associated with CM, or ibudilast may not have been acting centrally.\nIt is unclear from this study whether glia are involved in CM. Ibudilast has demonstrated good central nervous system partitioning28 but as it was not found to have any effect on the clinical endpoints, it suggests that the mechanisms responsible for CM may be signaling through another pathway that is currently unidentified, or neurons may play a stronger role. Thus more research would be required to address whether glial activation is a likely contributor for CM. The recent demonstration of imaging of activated glia in patients with chronic low back pain provides a potential biomarker for future studies.42\nEven though there have been reports of pro- and anti-inflammatory cytokines being a predictor of headaches, the examination of peripheral targets may not be adequate. 
In a CM study, elevation of tumor necrosis factor α was found in the cerebrospinal fluid but not in the plasma,12 indicating that peripheral biomarkers might not be helpful.\n\nBODY.LIMITATIONS:\nThe sample size for this study was small as the required number of participants could not be recruited within the time frame, so the study ended prematurely (intended target was 40 CM participants). This was due to the difficulty in finding CM participants who fitted the criteria, and this is common in research involving chronic headache patients.43,44\n\nBODY.CONCLUSION:\nThe present study showed ibudilast dosed at 40 mg twice daily for 8 weeks is not a promising treatment for CM. However, this study adds to the literature of the safety and tolerability of ibudilast in a CM cohort, which to our knowledge has not been investigated previously.\n\n**Question:** Compared to placebo what was the result of ibudilast (40 mg twice daily) on frequency of moderate to severe headaches 5-8 weeks?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
13870
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA double-blind, randomized, placebo-controlled pilot trial to determine the efficacy and safety of ibudilast, a potential glial attenuator, in chronic migraine\n\n\n\nABSTRACT.BACKGROUND:\nChronic migraine (CM) is problematic, and there are few effective treatments. Recently, it has been hypothesized that glial activation may be a contributor to migraine; therefore, this study investigated whether the potential glial inhibitor, ibudilast, could attenuate CM.\n\nABSTRACT.METHODS:\nThe study was of double-blind, randomized, placebo-controlled, two-period crossover design. Participants were randomized to receive either ibudilast (40 mg twice daily) or placebo treatment for 8 weeks. Subsequently, the participants underwent a 4-week washout period followed by a second 8-week treatment block with the alternative treatment. CM participants completed a headache diary 4 weeks before randomization throughout both treatment periods and 4 weeks after treatment. Questionnaires assessing quality of life and cutaneous allodynia were collected on eight occasions throughout the study.\n\nABSTRACT.RESULTS:\nA total of 33 participants were randomized, and 14 participants completed the study. Ibudilast was generally well tolerated with mild, transient adverse events, principally nausea. Eight weeks of ibudilast treatment did not reduce the frequency of moderate to severe headache or of secondary outcome measures such as headache index, intake of symptomatic medications, quality of life or change in cutaneous allodynia.\n\nABSTRACT.CONCLUSION:\nUsing the current regimen, ibudilast does not improve migraine with CM participants.\n\nBODY.INTRODUCTION:\nChronic migraine (CM) is a disabling and undertreated disorder1 which represents a clinically challenging problem that impacts significantly on the quality of life of affected individuals and imposes a large economic burden upon society due to lost productivity.2,3 Despite this burden, the pathophysiology behind migraine and especially the processes of migraine chronification remain poorly understood. A growing body of preclinical and clinical evidence suggests that acquired sensitization plays a significant role in the pathology of migraine and the process of chronification.4\nRecent evidence supports the importance of neuronal–glial interactions as a potential mechanism for chronic pain. 
Glia are located in the central nervous system, and under basal condition, they perform housekeeping functions and maintain neural homeostasis.5 However, when glia are activated, they become immunoresponsive and modulate pain by releasing neuroexcitatory signals that can increase neuronal excitability,6,7 activate neighboring glia, and produce nociceptive mediators (such as nitric oxide, excitatory amino acids, and proinflammatory cytokines).8\nNonclinical models support a role of neuron–glia interactions in the initiation of migraine attacks, as glia have been shown to release a range of inflammatory cytokines and chemokines in animal models evoking some features of migraine.9 In nonclinical models, blockade of glial activation and proinflammatory mediators released by glia caused reduction in hypersensitivity and allodynia, respectively.10,11 Additionally, a clinical study has reported elevated levels of proinflammatory cytokines in the cerebrospinal fluid of CM participants.12 Such tonically increased glial activity may underlie the progression from episodic to CM.13 Furthermore, in a separate study, levels of the chemokine interleukin 8 were found to be elevated in the jugular venous blood of patients during a migraine attack.14 Interleukin 8 can be synthesized by glia in response to other proinflammatory mediators including tumor necrosis factor α, lending additional support to the neurogenic inflammation model of migraine.15\nThus, it seems likely that nociceptive signaling during migraine attacks releases mediators that activate glia. Activated glia then release a range of proinflammatory mediators such as cytokines, chemokines, reactive oxygen species, and prostaglandins, which not only facilitate pain transmission but also activate further glia, creating a positive feedback loop. After a stimulus is resolved, experimental evidence suggests that microglia may remain “primed,” entering a state in which they do not actively produce proinflammatory substances, yet they over-respond to subsequent stimuli, resulting in increased proinflammatory cytokine release and an exaggerated pain response.16 This could lead to an increase in migraine frequency as triggers that were previously subthreshold may now initiate headache.\nAlthough glial involvement in the pathogenesis of migraine and in the transformation of episodic migraine to CM has been suggested by a number of authors,9,13,17–19 to our knowledge no treatments targeting this mechanism have been trialed specifically in high frequency migraineurs. 
However, encouraging results from an open-label retrospective study in participants with varying chronic daily headache phenotypes following treatment with the potential glial attenuator minocycline have been reported.20 This study, published only in abstract form, found a significant decrease in headache frequency following 2 months of minocycline administration when used as an adjunctive preventative medication.20 However, the efficacy of minocycline in animal studies, suggests that it is only effective before chronic pain is established and not subsequently,10 which may not be applicable to chronic pain patients.\nIbudilast is a nonselective phosphodiesterase inhibitor that has been used in Japan for 20 years to treat bronchial asthma,21 post-stroke dizziness,22 and ocular allergies.23 Ibudilast inhibits tracheal smooth muscle contractility24 and platelet aggregation,25 improves cerebral blood flow,22 and attenuates allergic reactions.26,27\nAs ibudilast is administered orally, is well tolerated, and has good blood–brain barrier permeability,28–30 it is a suitable candidate for the treatment of chronic pain. In nonclinical models, ibudilast attenuated microglial activation (reduced withdrawal behaviors), reduced production of proinflammatory cytokines, enhanced the production of anti-inflammatory cytokines, and improved the safety profile of opioids (reduced tolerance and attenuated reward and dependence).23,31–33 Moreover, ibudilast exhibited immunomodulatory actions in multiple sclerosis patients30 and also displayed inhibition of proinflammatory cytokines from peripheral leukocytes.34\nIn the present study, the efficacy of the putative glial attenuator ibudilast in participants with high frequency migraine was investigated. The study investigated the efficacy of ibudilast on the frequency of moderate to severe headache and other headache indices (such as headache index, headache-related questionnaires, and acute symptomatic medication intake) in CM patients over 8 weeks of ibudilast treatment compared with placebo treatment.\n\nBODY.METHODS:\nBODY.STUDY APPROVAL:\nEthical approval was obtained from the Human Research Ethics Committee of the Royal Adelaide Hospital, Adelaide, South Australia. All the participants gave their written informed consent before the commencement of the study. The study was conducted at the Pain and Anaesthesia Research Clinic, Royal Adelaide Hospital, Adelaide, Australia, and in accordance with the Principles of International Conference on Harmonization Good Clinical Practice, the Declaration of Helsinki, and the Australian National Statement on Ethical Conduct in Research Involving Humans.\nThe study was of double-blind, randomized, placebo-controlled, two-period crossover design. Participants were randomized to initially receive either ibudilast or placebo treatment for 8 weeks. Subsequently, the participants underwent a 4-week washout period followed by a second 8-week treatment block with the alternative treatment (Clinicaltrials.gov NCT01389193).\nBODY.PARTICIPANT SELECTION:\nParticipants were recruited from June 2013 to May 2015. 
Criteria for CM were defined by the revised second edition of the International Classification of Headache Disorders (ICHD-II).35 Key inclusion criteria were:\nMen and women aged between 18 and 65 years\nMigraine with or without aura, as diagnosed according to the ICHD-II36\nOnset of migraine before 50 years of age\nHeadache for ≥15 days per month\nMigraine-like headache for ≥8 days per month, as per the International Headache Society (IHS) guidelines37\n\nBODY.KEY EXCLUSION CRITERIA:\nParticipants were excluded from the study on the basis of the following criteria: change in type or dose of migraine prophylactic medication in last 3 months, medication overuse headache as diagnosed according to the ICHD-II35, posttraumatic headache as diagnosed according to the ICHD-II36, another dominant chronic pain condition, an active inflammatory disease such as rheumatoid arthritis, a history of recent cerebrovascular disorder, inability to provide written informed consent, inability to read and write in English, presence of severe psychological/psychiatric disorders, recent history of significant trauma, recent history of drug or alcohol abuse, clinically significant findings on screening blood sample results, current malignancy, known hypersensitivity to ibudilast or excipients in Ketas® formulation, renal or hepatic impairment, pregnancy, lack of adequate contraception, and breastfeeding.\n\n\nBODY.STUDY MEDICATION:\nThe study used a delayed-release ibudilast product, Ketas®, which was obtained from Kyorin Pharmaceuticals Industries, Ltd (Tokyo, Japan). To blind the treatment allocation, the Ketas® capsule was overencapsulated with the space filled with microcrystalline cellulose. The placebo capsule was of the same size and color and filled with microcrystalline cellulose, so that the active and placebo medications were identical in appearance and weight. Each active capsule contained 10 mg ibudilast. Four capsules were self-administered by participants orally, twice daily for each 8-week treatment block. No treatment was administered during the washout.\n\nBODY.STUDY SCHEDULE:\nBODY.4 WEEKS PRIOR TO BASELINE VISIT:\nAt screening, potential participants underwent,\nMedical history including medication history\nHeadache diagnosis based upon retrospective information from the participant confirmed using baseline headache diary data\nCompletion of the Hospital Anxiety and Depression Scale\nPhysical examination\nUrine pregnancy test for women of childbearing potential\nUrine drug screen to rule out the presence of nonprescribed drugs of abuse\nBlood samples to assess hematology (4 mL), biochemistry including renal function and hepatic function (8 mL).\n\nAfter successful completion of the screening visit and confirmation of study eligibility, participants were provided with a headache diary to complete at least 4 weeks prior to the baseline visit to confirm study eligibility. Standardized education and instructions on how to complete the diary was provided by the experimenter, and the participants were required to fill in the headache diary daily. The headache diary recorded a comprehensive assessment of the headache experienced by participants and consisted of head pain characteristics, headache frequency, average headache intensity (11-point numerical rating scale), duration of headache (number of hours), and intake of symptomatic headache treatments (timing, type, and amount of medication consumed). 
The headache index was calculated by the summation of headache duration (hours) with headache intensity (11-point numerical rating scale). The dose of opioid-containing analgesics used by each patient was converted to oral morphine equivalent dose based on the local Royal Adelaide Hospital Pain Management Unit opioid conversion table.\n\nBODY.BASELINE VISIT:\nDuring the baseline visit, the headache diary was collected, and participants who were eligible to continue in the study were randomized to either ibudilast or placebo treatment in the first period. Participants were asked to complete two brief self-administered questionnaires. The 6-point Headache Impact Test (HIT-6) evaluated the impact of headache on the quality of life, reflecting upon the previous month,38 and the 12-point Allodynia Symptom Checklist (ASC-12) indicated whether, and quantified how, increased pain sensitivity impacted upon activities of day-to-day life.39\nAbout 14 mL of blood was collected to assess safety blood profile (blood biochemistry [including the assessment of renal function], liver function tests and hematology). Participants were advised to take only the minimum amount of medications required to control their pain; however, they were not specifically informed to reduce their analgesic intake. Participants were asked to complete another headache diary for the 8-week treatment period. The study medication was dispensed to the participants prior to discharge.\nParticipants were required to complete the headache diary throughout the two treatment periods, during the washout phase and for 4 weeks after ceasing the study medication.\n\nBODY.IN STUDY VISITS:\nInspection of headache diary, assessment of adverse events, completion of questionnaires, and safety bloods were taken at weeks 2, 4, and 8 of each period. Figure 1 shows the flowchart of the study design.\n\n\n\nBODY.STATISTICAL ANALYSIS:\nAll the participants who received at least one dose of the study medication were included in the safety population. However, the efficacy data were only analyzed for completers, given the crossover nature of the trial.\nGraphpad Prism 6.0 (GraphPad Software; San Diego, CA, USA) was used for all statistical analysis. The Wilcoxon matched pairs signed rank test was used to calculate the study medication adherence between the CM participants while in different treatment groups. Two-way analysis of variance tests with repeated measures and with Sidak adjustments were used to determine the differences between treatments in multiple comparisons were used to calculate frequency of moderate to severe headache experienced, headache index, frequency of symptomatic medication intake, average opioid intake, HIT-6 scores, and ASC-12 scores. Missing values were replaced by the last observed value for that variable (last observation carried forward). All results are presented as mean ± standard deviation (SD) unless stated otherwise. Significance was set at P<0.05.\n\n\nBODY.RESULTS:\nBODY.DEMOGRAPHICS OF CM PARTICIPANTS:\nA total of 33 patients were randomized to treatment. Fourteen CM participants completed both the treatments of the study and returned completed headache diaries suitable for analysis. Four CM participants completed both the treatments of the study but did not return adequate diaries for analysis and therefore were excluded from the analysis (Figure 2).\nFifteen participants (nine received ibudilast) did not complete the study and hence were also excluded from the analysis. 
Of these fifteen participants, two participants lost interest in the study; one was excluded due to work commitment; four were excluded because of loss to follow-up; one was excluded as that participant had changed his medication; one was excluded after receiving 1 week of treatment as she was found to be pregnant (she was on placebo); one was excluded after experiencing heart palpitation; four participants were withdrawn by the investigator because of worsening headaches; and one participant was excluded by the investigator as the patient experienced tingling sensation.\nDemographics of the participants are listed in Table 1. The mean age was 46 years (SD 13). The study consisted of 11 men and 22 women. The average duration of headache was 21 years (SD 15). The mean daily morphine equivalent dose was 4.2 mg (SD10).\n\nBODY.COMPLIANCE TO STUDY MEDICATION AND SAFETY:\nIn this study, all the participants had good adherence to study medications as demonstrated with the return of unused study capsules. There was no treatment group difference (P=0.6) with the number of capsules returned (placebo vs ibudilast [mean {SD}: 2.4 {2.4} vs 7.7 {15.6}]).\nAdverse events experienced in the study were of mild intensity, and all the reported adverse events are listed in Table 2. Seventeen of the twenty-three (60%) CM participants who were on ibudilast treatments reported adverse events, whereas eight of the twenty-three (35%) CM participants reported adverse events during placebo treatments. The most frequent adverse event in both the groups was worsening migraine, reported by 4/28 under ibudilast and 2/23 on placebo. The next most frequently reported adverse event was nausea, reported by 4/28 under ibudilast and by no participant on placebo.\n\nBODY.HEADACHES:\nWhen CM participants were on ibudilast treatment, they did not experience reduced frequency of moderate to severe headache compared with placebo treatment (P=0.9) in either treatment weeks 1–4 (mean difference −0.9, 95% confidence interval [CI] −9.1 to 7.3) or treatment weeks 5–8 (mean difference 0.1, 95% CI −8.1 to 8.3). However, irrespective of treatments, in treatment weeks 5–8 of each block, there was a significant reduction (P=0.02) in the frequency of moderate to severe headaches (Figure 3).\nLikewise, when CM participants were on ibudilast treatment, they did not experience a reduction in headache index when compared with placebo treatment (P=0.9) in either treatment weeks 1–4 (mean difference −5.7, 95% CI −56.6 to 45.2) or treatment weeks 5–8 (mean difference −0.4, 95% CI −51.3 to 50.5). However, in treatment weeks 5–8, there was a significant reduction in the headache index (P=0.003) (Figure 4).\n\nBODY.ACUTE SYMPTOMATIC MEDICATION INTAKE AND SELF-REPORTED QUESTIONNAIRES:\nNo treatment differences were detected in the frequency intake of symptomatic medication (P=0.6) in either treatment weeks 1–4 (mean difference −2.6, 95% CI −11.7 to 6.6) or treatment weeks 5–8 (mean difference −1.9, 95% CI −11 to 7.3) (Figure 5A). 
The oral morphine equivalent was also calculated for the medication that was taken by the CM participants; however, no treatment differences were detected (P=1) in either treatment weeks 1–4 (mean difference −0.5, 95% CI −11.2 to 10.2) or treatment weeks 5–8 (mean difference 0.5, 95% CI −10.2 to 11.2) (Figure 5B).\nScores on the HIT-6 and ASC-12 questionnaires did not differ between the treatment groups during any study time period (baseline, weeks 2, 4, and 8 of treatment); P=0.3 and P=0.9, respectively (data not shown).\n\n\nBODY.DISCUSSION:\nIbudilast caused no change in the frequency of moderate to severe headache when compared with placebo treatment. Furthermore, other headache indices such as headache index, frequency of acute symptomatic medication intake, HIT-6 scores or ASC-12 scores were not reduced when CM patients were on ibudilast treatment. Additionally, worsening migraine was reported at a higher frequency under ibudilast treatment rather than on placebo. However, given the small sample sizes used in this study, lack of statistical significance should be considered with caution, but there was no trend for a benefit with ibudilast. Interestingly, during treatment weeks 5–8 of each period, the frequency of moderate to severe headache and headache index for both the treatment groups were significantly reduced compared to treatment weeks 1–4. This observation could be an indicator of participants having a positive expectancy of the study drug.40\nEven though ibudilast may not be an effective treatment for CM, this study further provides a safety profile of ibudilast in a homogeneous migraine cohort. The adverse events experienced by participants were mild and transient in nature, and most of the adverse events were resolved once ibudilast was halted.\nOur results are in agreement with a recently published finding by our group, which found ibudilast to have no effect in reducing headache index or in reducing opioid intake in medi cation overuse headache patients.41 However, the study also measured toll-like receptor (TLR) reactivity from peripheral blood mononuclear cells and found ibudilast to significantly reduce the proinflammatory cytokine (interleukin 1β) levels from ex vivo stimulation with TLR-2 or TLR-4 agonists. This finding confirmed that ibudilast was likely to be inhibiting an inflammatory pathway, at least peripherally. However, the peripheral target may be different to the mechanism that is responsible for the headache or the treatment period may have been insufficient to reverse the long-term changes associated with CM or may not have been acting centrally.\nIt is unclear from this study whether glia are involved in CM. Ibudilast has demonstrated good central nervous system partitioning28 but as it was not found to have any effect in the clinical endpoints, it suggests that the mechanisms responsible for CM may be signaling through another pathway that is currently unidentified, or neurons may play a stronger role. Thus more research would be required to address whether glial activation is a likely contributor for CM. The recent demonstration of imaging of activated glia in patients with chronic low back pain provides a potential biomarker for future studies.42\nEven though there have been reports of pro- and anti-inflammatory cytokines being a predictor of headaches, the examination of peripheral targets may not be adequate. 
In a CM study, elevation of tumor necrosis factor α was found in the cerebrospinal fluid but not in the plasma,12 indicating that peripheral biomarkers might not be helpful.\n\nBODY.LIMITATIONS:\nThe sample size for this study was small as the required number of participants could not be recruited within the time frame, so the study ended prematurely (intended target was 40 CM participants). This was due to the difficulty in finding CM participants who fitted the criteria, and this is common in research involving chronic headache patients.43,44\n\nBODY.CONCLUSION:\nThe present study showed ibudilast dosed at 40 mg twice daily for 8 weeks is not a promising treatment for CM. However, this study adds to the literature of the safety and tolerability of ibudilast in a CM cohort, which to our knowledge has not been investigated previously.\n\n**Question:** Compared to placebo what was the result of ibudilast (40 mg twice daily) on frequency intake of symptomatic medication?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] |
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ] |
no significant difference |
13871 |
[ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nAIO LQ-0110: a randomized phase II trial comparing oral doxycycline versus local administration of erythromycin as preemptive treatment strategies of panitumumab-mediated skin toxicity in patients with metastatic colorectal cancer\n\n\n\nABSTRACT.BACKGROUND:\nDermatologic toxicities, especially akne-like skin rash, are the most common side-effects associated with anti-epidermal growth factor receptor (EGFR) therapy. Preemptive treatment with oral tetracyclines is recommended as a standard. Topical prophylactic options have thus far not been compared to tetracyclines. In the current study, we sought to establish an alternative topical treatment.\n\nABSTRACT.PATIENTS AND METHODS:\nIn this multicentre, randomized, open-label phase II study patients with (K)Ras-wildtype colorectal cancer receiving panitumumab were randomized (1:1) to receive either doxycycline 100 mg b.i.d. (standard arm) or erythromycin ointment 2% followed by doxycycline in case of insufficient activity. The primary endpoint was the percentage of patients developing no skin toxicity ≥ grade 2 at any time during the first 8 weeks of panitumumab treatment. Skin toxicity was assessed using the NCI CTCAE v 4.0. Secondary endpoints comprised the assessment of skin toxicity using a more thorough grading system (WoMo score), evaluation of skin-related (DLQI) and global quality of life (EORTC QLQ C30).\n\nABSTRACT.RESULTS:\nIn total, 88 patients were included in this trial. 69% of the patients in the erythromycin arm suffered from skin toxicity of grade ≥ 2 versus 63% in the standard arm (P = n.s.). However, as per WoMo score significantly more patients in the erythromycin arm developed moderate or severe skin toxicity at earlier time points. Skin related and overall quality of life was comparable between both arms.\n\nABSTRACT.CONCLUSIONS:\nBased on this data erythromycin cannot be regarded as an alternative to doxycycline as prevention of EGFR-related skin toxicity.\n\nBODY.INTRODUCTION:\nDermatologic toxicities are the most common side-effects associated with anti-EGFR therapy and they are experienced by the majority of patients undergoing EGFR inhibitor therapy. While toxicities are mild (grade 0-1) to moderate (grade 2) in most patients, about 15-20% exhibit higher grade toxicities leading to medical and psychosocial effects that may result in poor patients’ compliance with lower adherence to cancer therapy, more dose delays, and interruption or discontinuation of antineoplastic therapy [1, 2]. Finally, dermatologic toxicities may contribute to a significantly reduced quality of life [2, 3, 4]. 
Acneiform rash, usually developing within the first 2 to 4 weeks of therapy, is by far the most frequent type of skin toxicity caused by EGFR antibody treatment.\nManagement guidelines for the prophylaxis and treatment of EGFR inhibitor-mediated dermatologic toxicities usually recommend prophylactic use of tetracyclines and special skin care. Indeed, these recommendations are based on few randomized trials:\nJatoi et al. randomized 62 patients treated with anti-EGFR antibodies or tyrosine kinase inhibitors (TKIs) between tetracycline and placebo. While the overall incidence of skin toxicity was not reduced, a significant decrease of severity was observed. Of note, patients treated with tetracycline reported improved skin-related quality of life as assessed by the SKINDEX-16 quality of life (QoL) questionnaire [5]. In a subsequent similar study including 65 patients Jatoi and coworkers could not confirm these findings [6]. Scope et al. randomized 48 colorectal cancer patients treated with cetuximab between minocycline and placebo. Minocycline significantly decreased moderate to severe itch. Likewise, a trend toward a decreased incidence of moderate to severe rash was noticed (20% vs. 42%, P =.13) [7]. Deplanque et al. randomized 147 patients with non-small-cell lung cancer (NSCLC) between treatment with erlotinib alone or in combination with doxycycline. Doxycycline did not reduce the incidence of folliculitis over untreated patients (71% vs. 81%, P =.117) but significantly reduced its severity (P =.001) [8]. Melosky et al. randomized 150 patients with NSCLC treated with erlotinib between prophylactic minocycline, reactive minocycline and no treatment. The incidence of skin toxicity was 84% regardless of treatment arm. Prophylactic treatment with minocycline, however, significantly lengthened the time to the most severe grade of rash. Grade 3 rash was significantly higher in the no-treatment arm [9]. Arrieta et al. randomized 90 patients with NSCLC treated with afatinib between preemptive tetracycline and only reactive treatment. Tetracycline reduced the rash incidence of any grade (44.5% vs. 75.6%, P =.046) and the severity of grade ≥ 2 (15.6% vs. 35.6%, P =.030) over reactive treatment [10]. Finally, Lacouture et al. randomized 114 patients with NSCLC treated with dacomitinib between preemptive doxycycline and placebo. Doxycycline significantly reduced the incidence of select dermatologic adverse events of interest (SDAEI) grade ≥ 2 by 50% (P =.016) [11].\nIn all, it appears to be justified to recommend a prophylaxis with tetracyclines such as doxycycline or minocycline. Two randomized studies confirmed that tetracyclines should be used as prophylaxis and not as an early intervention: Firstly, the so-called STEPP trial randomized 95 patients with metastatic colorectal cancer (mCRC) to be treated with panitumumab in combination with chemotherapy to either preemptive treatment for skin toxicity including doxycycline 100 mg b.i.d. or to receive the same regimen reactively. Incidence of skin toxicity grade ≥ 2 was significantly reduced by preemptive treatment compared with reactive treatment (29% vs. 62%). Moreover, patients randomized to preemptive treatment reported improved QoL according to Dermatology Life Quality Index (DLQI) especially between week 2 and 3 [3]. Secondly, the so-called J-STEPP trial confirmed these data using a comparable study design in a Japanese population with mCRC. 
Comparable to the STEPP trial, the incidence of ≥ grade 2 skin reactions was decreased with preemptive treatment compared with reactive treatment (21.3% vs. 62.5%, P <.001) [12].\nThus far, no data comparing oral antibiotics with a local treatment have been reported. In order to avoid a long term exposure to oral antibiotics a preemptive topical skin treatment could be useful. Moreover, topical agents are more convenient than oral antibiotics, and compliance might be better. In the present trial we sought to compare a local treatment using erythromycin followed by doxycycline in the case of insufficient activity with doxycycline. Erythromycin seemed to be very suitable as it belongs to standard treatment for acne vulgaris [13].\n\nBODY.RESULTS:\nBODY.PATIENTS:\nIn total, 88 patients were randomized by 11 sites between July 2011 and October 2014. 80 out of 88 patients were evaluated (41 patients in the doxycycline arm, 39 patients in the erythromycin arm). Reasons for exclusion were: withdrawal of consent, no therapy started (n = 4), no post-baseline skin toxicity assessment (n = 1); treatment not started due to RAS mutation (n = 2), investigator´s decision (n = 1).\nPatients´ and major tumor-related baseline characteristics are depicted in Table 1.\nTable 1BODY.MAJOR PATIENTS´ AND TUMOUR CHARACTERISTICS:\nParameterDoxycyclineErythromycinTotalAgeN\n40*\n39\n79\n Mean ± SD (years)\n68.1 ± 12.4\n69.1 ± 9.4\n68.6 ± 11\n Median (years)\n69.5\n70.3\n70.2\nGender\n\n\n\nN\n41\n39\n80\n Female\n13 (32%)\n18 (46%)\n31 (39%)\n Male\n28 (68%)\n21 (54%)\n49 (61%)\nCancer location\n\n\n\nN\n41\n38*\n79\n Colon\n23 (56%)\n18 (47%)\n41 (52%)\n Rectum\n18 (44%)\n20 (53%)\n38 (48%)\nPerformance status\n\n\n\nN\n39*\n38*\n77\n ECOG 0\n16 (41%)\n17 (45%)\n33 (43%)\n ECOG 1\n19 (49%)\n19 (50%)\n38 (49%)\n ECOG 2\n4 (10%)\n2 (5%)\n6 (8%)\n*Data missing.\n\nBODY.PRIMARY END POINT:\nThe percentage of patients developing skin toxicity according to NCI CTC criteria is shown in Table 2. In the erythromycin arm, 69% suffered from skin toxicity grade ≥ 2 (95% confidence interval [CI]: 52% to 83%; primary endpoint). In the standard arm with doxycycline the toxicity rate with grade ≥ 2 amounted to 63% (95% CI: 47 – 78%). Thus, the primary endpoint was not met.\nTable 2BODY.SKIN TOXICITY GRADED ACCORDING TO NCI CTCAE 4.0, WORST PER PATIENT:\nNCI CTCAE gradeDoxycyclineErythromycinTotalN\n41\n39\n80\n0 / no toxicity\n3 (7%)\n3 (8%)\n6 (8%)\n1\n12 (29%)\n9 (23%)\n21 (26%)\n2\n18 (44%)\n12 (31%)\n30 (38%)\n3\n8 (20%)\n15 (38%)\n23 (29%)\nP =.27 (Mantel-Haenszel test for trend, exact version).\nAlthough this randomized phase II study was not powered for formal statistical comparisons of the two treatment groups, results from additional statistical calculations are provided in a descriptive way: Using Fisher's exact test no significant differences between both arms were seen (P =.64). The odds ratio, with values > 1.0 indicating higher toxicity, was 1.29 (95% confidence interval: 0.46 – 3.67; one-sided 95% confidence limit for inferiority of the erythromycin arm: 3.15). In absolute values, the rate difference of 6% more toxicity in the erythromycin group has a 95% confidence interval (according to HAUCK-ANDERSON) ranging from −16% to + 28%.\nFigure 1 shows the time to the onset of skin toxicity of NCI CTCAE grade ≥ 2. 
In the doxycycline arm median time was 43 days (day one of cycle 4), in the erythromycin arm median time was 29 days (day one of cycle 3) without statistical significance in the corresponding logrank test (P =.68).\nFigure 1BODY.TIME TO SKIN TOXICITY NCI CTCAE GRADE ≥ 2 (FULL ANALYSIS SET [N = 80]):\n\nBODY.SKIN TOXICITY GRADED ACCORDING TO WOMO SCORE:\nThe total score result from the WoMo assessment over time (individual cycles) show a constantly higher score in the erythromycin group (data not shown). When the maximum WoMo score by patient over time is analyzed (Table 3), the difference amounts to almost 10 points which is about half of the standard deviation.\nTable 3BODY.SKIN TOXICITY ACCORDING TO WOMO SCORE:\nWoMo scoreDoxycyclineErythromycinTotalN\n41\n39\n80\nMean ± SD\n25.6 ± 16.7\n34.1 ± 19.8\n29.7 ± 18.7\nMedian\n23.1\n32.1\n29.5\nQuartiles\n14.5 - 34.8\n21.9 - 47.2\n16.1 - 40.9\nRange\n0 - 68.2\n0 - 88.1\n0 - 88.1\nA significantly higher number of patients treated in the erythromycin arm developed moderate or severe skin toxicity according to WoMo score, i.e. ≥ 20 points (30/39 versus 23/41 patients, P = .049); moreover, a higher number of patients had severe skin toxicity (WoMo score ≥ 40 points 14/39 versus 8/41 patients; P =.101).\nAnalyzes based on worsening of the WoMo score were performed. It took one cycle longer to appearance of a moderate to severe skin toxicity (WoMo score ≥ 20) with doxycycline compared to erythromycin. Figure 2 shows the time to increase of the WoMo score to ≥ 20. An obvious difference between both arms appeared on day 15 (day one of cycle 2) and this remained discernible thereafter indicating a clear trend towards superior efficacy of the standard arm using doxycycline (P =.069).\nFigure 2BODY.TIME TO WOMO SCORE ≥ 20 (FULL ANALYSIS SET [N = 80]):\n\nBODY.QUALITY OF LIFE:\nThe mean values of the EORTC QLQ C30 global health score are balanced at the beginning of the first treatment cycle with about 52 points for each group (albeit based on a lower number of valid questionnaires in the erythromycin group; Table 4). Over time, the mean values amount to 59 in the erythromycin arm and decline to 47 points in the doxycycline arm (at the beginning of cycle 4). The between-group differences range between 5 and 10 points per cycle.\nTable 4BODY.QUALITY OF LIFE GRADED ACCORDING TO EORTC QLQ C30: GLOBAL HEALTH SCORE:\nErythromycin arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n32\n36\n27\n28\n31\nMean ± SD\n52.3 ± 23.3\n53.2 ± 17.6\n52.8 ± 18.5\n59.2 ± 13.5\n55.4 ± 17.6\nMedian\n50\n50\n50\n54.2\n58.3\nQuartiles\n33.3 - 66.7\n41.7 - 66.7\n41.7 - 66.7\n50 - 66.7\n41.7 - 66.7\nRange\n0 - 91.7\n16.7 - 83.3\n0 - 83.3\n33.3 - 83.3\n16.7 - 91.7\nDoxycycline arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n40\n31\n31\n25\n25\nMean ± SD\n52.9 ± 18.7\n48.4 ± 17\n47 ± 19.5\n47.3 ± 21.2\n44.7 ± 20.1\nMedian\n50\n50\n50\n50\n50\nQuartiles\n33.3 - 66.7\n33.3 - 66.7\n33.3 - 58.3\n33.3 - 66.7\n33.3 - 58.3\nRange\n16.7 - 91.7\n16.7 - 83.3\n0 - 83.3\n0 - 83.3\n0 - 83.3\n*End of treatment.\nNo relevant differences between both two treatment strategies were noticed as per the DLQI. Throughout the observation period of 8 weeks, the median values of the DLQI total score in both arms are low with a highest medium value of 4 points for both arms at the beginning of cycle 3 (except for the rise of the median at the end-of treatment assessment in the doxycycline group with a very low questionnaire compliance in this arm at that time point) (Table 5). 
Considering the subscales, only “symptoms and feelings” indicated major deterioration with a median of 2 points on a maximum scale of 6 (data not shown).\nTable 5BODY.SKIN RELATED QUALITY OF LIFE GRADED ACCORDING TO DLQI (TOTAL SCORE):\nErythromycin arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n24\n30\n29\n24\n31\nMean ± SD\n1.2 ± 2.6\n5.3 ± 5.4\n5.4 ± 5.6\n4.4 ± 4.1\n5.4 ± 5.8\nMedian\n0\n3\n4\n3.5\n3\nQuartiles\n0 - 1\n1.2 – 8\n2 - 7\n1 – 6.2\n2 – 6\nRange\n0 - 10\n0 – 25\n0 - 27\n0 - 14\n0 – 26\nDoxycycline arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n24\n27\n23\n20\n15\nMean ± SD\n0.8 ± 1.2\n4.4 ± 4.8\n5.3 ± 5.2\n4.9 ± 5.4\n5.8 ± 4.4\nMedian\n0\n2\n4\n3\n7\nQuartiles\n0 −1\n1 – 5.5\n2 – 7.5\n1.8 – 5.2\n2 – 9\nRange\n0 - 5\n0 - 19\n0- 21\n1 - 22\n0- 13\n*End of treatment.\n\nBODY.TREATMENT ADMINISTRATION AND SAFETY:\nIn total, 140 panitumumab administrations were recorded in the doxycycline arm (mean 3.4 per patient), and 139 in the erythromycin arm (3.6 per patient). 74% of the patients received all 4 scheduled administrations without differences in both arms. In the erythromycin arm, 33% of the patients stopped erythromycin due to insufficient activity and switched to doxycycline. Panitumumab dose reductions or dose delays were infrequent (see Supplementary Materials).\nAs mentioned, detailed analysis of adverse events beyond skin toxicity was not within the scope of the current study. However, neither unexpected events nor frequencies were detected within the framework of the adverse events and SAE reporting in this trial. Eleven out of 41 patients (26.8%) treated within the doxycycline arm, and 7/39 patients (17.9%) in the erythromycin arm reported SAE (P =.34).\n\nBODY.ANTITUMOR EFFICACY:\nRegarding the overall study population, 28% of the patients experienced an objective response, while 46% had stabilization of the disease. This estimation is based on 54 patients (68%) with a valid RECIST evaluation after 8 weeks. There is no indication for an effect of the randomized assignment of the skin treatment (P =.40, Fisher's exact test; odds ratio: 1.74, 95% confidence interval: 0.49 – 6.67).\n\n\nBODY.DISCUSSION:\nThis is the first randomized trial comparing an oral antibiotic prophylaxis (doxycycline) with an alternative local application (erythromycin) as preemptive skin treatment strategy against anti-EGFR targeted drugs. The primary endpoint of the present trial was the percentage of patients developing no skin toxicity grade ≥ 2 at any time during the first 8 weeks of panitumumab treatment. In the erythromycin arm, a moderate to severe skin toxicity could be prevented in only 31%, i.e. 69% suffered from skin toxicity grade ≥ 2. Thus the primary endpoint was missed and the experimental strategy (local erythromycin) would be rated as insufficient. In the reference arm, however, using doxycycline, the toxicity rate grade ≥ 2 amounted to 63%, as well, i.e. it was not lower than in the erythromycin arm. A numerical trend to a lower grade 3 toxicity with oral doxycycline can be discerned but without reaching statistical significance. Although most of the studies investigating tetracycline-family antibiotics in a preemptive setting could not demonstrate a reduction of the overall incidence of skin toxicity, they often showed a decrease of the severity compared to placebo [5, 7, 11], no treatment [8, 9], or reactive treatment [3, 9, 10, 11]. 
This effect could not be seen in the current trial comparing two different preemptive treatment strategies.\nThe rates of skin toxicity grade ≥ 2 are rather high with 69% and 63%, respectively, compared to the literature. Even if considering only trials with similar settings (mCRC treated with EGFR inhibitors) the rates of grade ≥ 2 skin toxicity in the preemptive arms are considerably lower in previous studies with 20% [5], 21.3% [12] and 29% [3]) and could not be reproduced in the present trial.\nUntil recently, the NCI CTCAE v 3.0 grading scale (published in 2006) had been used in most of the studies for evaluation of EGFR inhibitor induced skin toxicity [3, 5, 6, 8, 9, 12]. This version 3.0 included only a crude scale for severity of skin toxicity and allowed for subjective grading. The updated CTCAE v 4.0 used in the present trial provide a more thorough grading of skin toxicity taking into account that acneiform skin rash may develop only in lower percentage of skin surface. Patients that would have been classified grade 1 according to the NCI CTCAE v 3.0, could easily be regarded as grade 2 or even 3 according to the NCI CTCAE v 4.0. Thus, especially the results of the STEPP trial [3] on which the present trial was based cannot be compared with our data concerning the evaluation of skin rash. It is conceivable that the high amount of grade ≥ 2 toxicities in the present trial is due to the updated and improved grading system.\nNevertheless, even the updated version 4.0 does not accurately reflect the clinical situation. One major problem of this version is that the grading is based on the affected body surface area (BSA) rather than on the (local) severity/symptoms of skin rash. An upgrading beyond the grade stipulated by the grading system is principally possible (for instance due to limited instrumental or self-care activities of daily living) but this decision is left up to the discretion of the investigator. This may of course pose problems with respect to the comparability of skin toxicity between individual investigators and between studies.\nIn view of this, we also used an extended alternative grading system for EGFR-related skin toxicity (so called WoMo score [20]) in the present trial which takes into account also (i) the percentage of the facial area affected with rash or other dermatological adverse events as well as (ii) the type and severity of the efflorescences. Thus, the WoMo score allows for a more thorough description of the skin toxicity and enables more precise description and comparison of efficacy of different prophylactic treatments.\nNotably, using this grading system, we detected obvious differences between the two treatment groups. Firstly, the mean and median WoMo scores were numerically higher in the erythromycin arm. Secondly, the percentage of patients developing moderate or severe skin rash (WoMo score ≥ 20 points) was significantly higher in the erythromycin group and the time to development of moderate or severe skin toxicity was shorter in the erythromycin group (at the beginning of cycle 2 versus cycle 3). This data indicates that the experimental stepwise approach (erythromycin followed by doxycycline) was not able to at least accomplish the same results as immediate doxycycline prophylaxis. 
In this regard it is important to stress that the treatment intensity and adherence to panitumumab as well as the therapeutic efficacy was comparable between both arms.\nThe analysis of the skin related quality of life using the DLQI total score shows a distinct deterioration of QoL at the beginning of cycle 2 in both arms, being rather stable thereafter in the patients remaining in the observation. The absolute amount of impact on QoL according to the DLQI seems to be rather limited with about 4 out of 30 points [14]. This is due to the finding, that only the subscale “symptoms and feelings” indicated major deterioration. All other scales were hardly affected. At the time the study was designed, the DLQI seemed to be one of the most promising questionnaires to assess skin-related QoL, not least due to the data of the STEPP trial [3]. Meanwhile, a questionnaire (FACT- EGFRI-18) is available designed to assess dermatologic symptoms associated with EGFR inhibitors [15]. It should be used for future clinical trials.\nIn all, we were not able to find significant differences between both prophylactic treatments investigated in this trial using the NCI CTC AE v 4.0 grading system. It can be pointed out that the erythromycin arm duplicated grade 3 skin toxicity. Contrarily, by using the WoMo score we demonstrated that more patients treated with erythromycin developed moderate or severe skin toxicity at earlier time points indicating that local erythromycin should not be used as a substitute of oral tetracyclines as prophylaxis against EGFR mediated skin toxicity. We propose to use the WoMo score in addition to NCI CTCAE grading in clinical trials dedicated to study the efficacy of prophylactic or interventional measures against EGFR-mediate skin toxicity.\nMoreover, rash rates are high despite prophylactic treatment. These high rates beg for further research to establish improved alternative treatment options. High expectations are placed on vitamin K for use of EGFR treatment associated skin toxicity. The prophylactic use of K1 cream was demonstrated by Ocvirk [16] and – most recently – by Schimanski et al. [17]. Regarding this, the results of the EVITA trial (NCI01345526) are eagerly awaited. EVITA is the first randomized study that evaluated the addition of vitamin K to doxycycline prophylaxis. Another promising trial (NCT03051880) investigates the preventive use of topical EGF cream. Until then the prophylactic use of tetracyclines may be regarded as a standard of care in patients undergoing anti-EGFR directed treatment.\n\nBODY.MATERIALS AND METHODS:\nBODY.PATIENTS:\nPatients with metastatic colorectal cancer were eligible to participate in the study provided they met all selection criteria listed in Table 6. 
No study treatment or any other procedure within the framework of the trial was performed in any patient prior to receipt of written informed consent.\nTable 6BODY.INCLUSION AND EXCLUSION CRITERIA:\nInclusion criteria:\nPatients with wild-type RAS (KRAS and NRAS) status of metastatic colorectal cancer treatment with panitumumab according to label\nRAS wild-type tested in\nKRAS exon 2 (codons 12/13)\nKRAS exon 3 (codons 59/61)\nKRAS exon 4 (codons 117/146)\nNRAS exon 2 (codons 12/13)\nNRAS exon 3 (codons 59/61)\nNRAS exon 4 (codons 117/146)\n\n\nTreatment with pre-emptive study medication shall begin the day before treatment start with panitumumab\nWillingness to cope with biweekly quality of life questionnaires\nWritten Informed consent\nAged at least 18 years\nECOG Performance Status 0-2\nLife expectancy of at least 12 weeks\nAdequate haematological, hepatic, renal and metabolic function parameters:\nLeukocytes > 3000/mm3\nANC ≥ 1500/mm3\nPlatelets ≥ 100,000/mm3\nHaemoglobin > 9 g/dl\nSerum creatinine ≤ 1.5 x ULN\nBilirubin ≤ 1.5 x ULN\nGOT-GPT ≤ 2.5 x ULN (in case of liver metastases GOT / GPT ≤ 5 x ULN)\nAP ≤ 5 x ULN\nMagnesium, Calcium and potassium within normal ranges (may be substituted before study entry)\n\n\nExclusion criteria:\nSubject pregnant or breast feeding, or planning to become pregnant within 6 months after the end of treatment\nSubject (male or female) is not willing to use highly effective methods of contraception (per institutional standard) during treatment and for 6 months (male or female) after the end of treatment (adequate: oral contraceptives, intrauterine device or barrier method in conjunction with spermicidal jelly)\nSerious concurrent diseases\nOn-treatment participation in a clinical study in the period 30 days prior to inclusion\nClinically significant cardiovascular disease in (incl. myocardial infarction, unstable angina, symptomatic congestive heart failure, serious uncontrolled cardiac arrhythmia) ≤ 1 year before enrolment\nHistory of interstitial lung disease, e.g. pneumonitis or pulmonary fibrosis or evidence of interstitial lung disease on baseline chest CT scan\nHistory of HIV infection\nOther previous or concurrent malignancy (≤ 5 years prior to enrolment in study) except non-melanoma skin cancer or cervical carcinoma FIGO stage 0-1 if the patient is continuously disease-free\nKnown allergic reactions on panitumumab, doxycycline or erythromycin\nPrevious treatment with anti-cancer agents directed against EGFR (e.g. cetuximab, panitumumab, erlotinib, gefitinib, lapatinib)\nSkin rash existing before or due to other reasons than panitumumab treatment\nOther dermatologic disease that may interfere with correct grading of panitumumab induced skin rash\nParallel treatment with anti-tumor agents other than panitumumab\n\nAt the time the study was initiated, patients with tumors harboring no KRAS exon 2 mutations were included to be treated according to panitumumab label. Data on the negative predictive effect of mutations beyond KRAS exon 2 (i.e. KRAS exon 3-4 and NRAS exons 2-4) were first presented during the American Society of Clinical Oncology (ASCO) Annual Meeting 2013 and published later on [18, 19]. The inclusion criteria of the current trial were changed accordingly. The amendment was approved by the ethical committee on 17th January 2014.\n\nBODY.STUDY DESIGN AND TREATMENT SCHEDULE:\nThis is a phase II, open-label, randomized, controlled, parallel-arm, multi-center study. 
Patients received standard tumor treatment using panitumumab in label (i.e. either as monotherapy or in combination with chemotherapy), and were randomized (1:1) to two preemptive strategies of skin toxicity prophylaxis. In order to base the trial on the best available evidence, the superior treatment arm of the phase II STEPP trial [3] was chosen as standard arm: (i) Standard arm: Skin moisturizer (in the morning on rising), sun screen (before going outdoor), doxycycline 100 mg b.i.d. starting the day before treatment start with panitumumab and for the following 8 weeks. (ii) Erythromycin arm (sequential approach): Skin moisturizer (in the morning on rising), sun screen (before going outdoor), erythromycin ointment 2% on face, hands, feet, neck, back, once daily at bedtime starting the day before treatment start with panitumumab and for the following 8 weeks; in case of skin rash ≥ grade 2 doxycycline 100 mg b.i.d. was started.\nThe study treatment period consisted of 8 weeks (i.e. 4 administrations of bi-weekly panitumumab).\n\nBODY.STUDY OBJECTIVES:\nThe primary objective of the study was to establish an alternative preemptive skin treatment avoiding or delaying the use of oral doxycycline in patients with wild-type (K)RAS metastatic colorectal cancer treated with panitumumab. For the primary endpoint (percentage of patients developing no skin toxicity ≥ grade 2 at any time during the first 8 weeks of treatment with panitumumab) skin toxicity was assessed using the NCI CTCAE v 4.0 criteria.\nSecondary objectives comprised: (i) Evaluation of skin-related (DLQI) and global quality of life (EORTC QLQ C30); (ii) assessment of skin toxicity with a different grading scale (i.e. WoMo score).\n\nBODY.STUDY ASSESSMENTS:\nPanitumumab was administered on a bi-weekly basis according to label (i.e. cycle length was 14 days). On day 1 of every panitumumab treatment cycle the following assessments were scheduled: (i) Physical examination including weight, ECOG status and vital signs; (ii) adverse effects according the NCI CTC Version 3.0 criteria; (iii) laboratory tests including clinical chemistry and hematology; (iv) quality of life including EORTC Quality-of-life C30, skin related quality of life with DLQI; (v) skin rash assessment including NCI CTC criteria version 4.0 and WoMo scoring.\nFor skin toxicity grading purposes the NCI CTCAE v 4.0 was used as a primary endpoint (see Supplementary Materials). Moreover, we used the so called WoMo (Wollenberg and Moosmann) score, a thorough grading system. Briefly, the WoMo score is based on a three-part system which takes into account body involvement according to the rule of nines, percentage of facial involvement, as well as a semiquantitative description of the skin lesions by five items (erythema intensity and distribution, papulation, postulation, and scaling/crusts). The final score ranges from 0 to 100 (0 to 20 mild, 21 to 40 moderate, 41 to 100 severe acneiform eruptions) [20] (see Supplementary Materials). The assessment of skin toxicity was done by physicians trained on the NCI CTC and WoMo scores.\nThe assessment of overall quality of life was done using the EORTC QLQ C30. Skin-related quality of life was graded according to the dermatology quality of life index (DLQI; cf: http://sites.cardiff.ac.uk/dermatology/quality-of-life/dermatology-quality-of-life-index-dlqi/). 
Briefly, the DLQI consists of 10 questions to assess QoL in patients with skin disorders and can be analyzed under six subscales (symptoms and feelings, daily activities, leisure, work and school, personal relationship, and treatment). It is scored on a scale of 0 to 30; higher scores indicate more QoL impairment [21].\nTumor assessments were performed according to local practice. As all antineoplastic drugs used in this trial have a well-documented adverse event profile, the assessment of adverse events beyond skin toxicity was outside the main scope of the trial. However, serious adverse events were recorded and analyzed.\n\nBODY.STATISTICAL CONSIDERATIONS:\nAccording to data derived from the STEPP trial the development of skin toxicity grade ≥ 2 can be avoided in about 70% of patients [2]. The primary hypothesis of the current trial was that a similar efficiency could be achieved by a sequential skin treatment strategy, starting with local erythromycin administration.\nDespite the shorter duration of the STEPP trial (6 weeks of treatment) its results had to be applicable to the 8 week situation in this trial because the primary endpoint in both trials was based on the worst grade skin toxicity which is reached after approximately 2 weeks of panitumumab treatment. The natural course of panitumumab induced skin toxicity after the climax at about 2 weeks is a steady recovery within the following weeks.\nThe statistical calculation was based on the following premises and assumptions:\nAccording to these parameters, and using a standard single-stage phase II design by Fleming (1982), n = 37 patients evaluable for prophylactic efficacy had to be recruited. As a similar number of patients had to be recruited to the reference STEPP arm, a total number of 74 evaluable patients was required. Calculating a drop-out rate of 15% a total of 88 patients had to be included in the study.\nAll randomized patients who received at least one application of study medication and who had at least one post-baseline assessment of the primary endpoint variable will be the full analysis set (FAS), even if they violate the selection criteria of the study.\nSkin toxicity (primary endpoint), clinical response and other rates were calculated, providing confidence intervals. In case of comparison between patient groups, these proportions were to be analyzed by Fisher´s exact test, X2 test or Mantel-Haenszel test (or trend test according to COCHRAN/ARMITAGE), respectively. For correlation analyses between the different quality of life and/or toxicity scores, the non-parametric test according to Spearman was preferably to be applied.\nThe onset of grade ≥ 2 skin toxicity was to be likewise estimated by the product limit method of Kaplan-Meier and eventually compared using the logrank test.\n\nBODY.ETHICS:\nThe study was approved by the Ethics Committee (EC), Medizinische Ethik-Kommission II of the Universitätsmedizin Mannheim, and all other participating institutions.\nThis study was conducted in agreement with the German Drug Law (AMG), ICH Harmonized Tripartite Guideline on Good Clinical Practice, the „Verordnung über die Anwendung der Guten Klinischen Praxis bei der Durchführung von klinischen Prüfungen mit Arzneimitteln zur Anwendung am Menschen“ [22] and the Declaration of Helsinki (Tokyo, Venice, Hong Kong, Somerset West and Edinburgh amendments) [23].\n\n\nBODY.SUPPLEMENTARY MATERIALS:\n\n**Question:** Compared to r doxycycline 100 mg b.i.d. 
what was the result of erythromycin ointment 2% followed by doxycycline on time to the onset of skin toxicity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] |
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ] |
no significant difference |
13872 |
[ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nAIO LQ-0110: a randomized phase II trial comparing oral doxycycline versus local administration of erythromycin as preemptive treatment strategies of panitumumab-mediated skin toxicity in patients with metastatic colorectal cancer\n\n\n\nABSTRACT.BACKGROUND:\nDermatologic toxicities, especially akne-like skin rash, are the most common side-effects associated with anti-epidermal growth factor receptor (EGFR) therapy. Preemptive treatment with oral tetracyclines is recommended as a standard. Topical prophylactic options have thus far not been compared to tetracyclines. In the current study, we sought to establish an alternative topical treatment.\n\nABSTRACT.PATIENTS AND METHODS:\nIn this multicentre, randomized, open-label phase II study patients with (K)Ras-wildtype colorectal cancer receiving panitumumab were randomized (1:1) to receive either doxycycline 100 mg b.i.d. (standard arm) or erythromycin ointment 2% followed by doxycycline in case of insufficient activity. The primary endpoint was the percentage of patients developing no skin toxicity ≥ grade 2 at any time during the first 8 weeks of panitumumab treatment. Skin toxicity was assessed using the NCI CTCAE v 4.0. Secondary endpoints comprised the assessment of skin toxicity using a more thorough grading system (WoMo score), evaluation of skin-related (DLQI) and global quality of life (EORTC QLQ C30).\n\nABSTRACT.RESULTS:\nIn total, 88 patients were included in this trial. 69% of the patients in the erythromycin arm suffered from skin toxicity of grade ≥ 2 versus 63% in the standard arm (P = n.s.). However, as per WoMo score significantly more patients in the erythromycin arm developed moderate or severe skin toxicity at earlier time points. Skin related and overall quality of life was comparable between both arms.\n\nABSTRACT.CONCLUSIONS:\nBased on this data erythromycin cannot be regarded as an alternative to doxycycline as prevention of EGFR-related skin toxicity.\n\nBODY.INTRODUCTION:\nDermatologic toxicities are the most common side-effects associated with anti-EGFR therapy and they are experienced by the majority of patients undergoing EGFR inhibitor therapy. While toxicities are mild (grade 0-1) to moderate (grade 2) in most patients, about 15-20% exhibit higher grade toxicities leading to medical and psychosocial effects that may result in poor patients’ compliance with lower adherence to cancer therapy, more dose delays, and interruption or discontinuation of antineoplastic therapy [1, 2]. Finally, dermatologic toxicities may contribute to a significantly reduced quality of life [2, 3, 4]. 
Acneiform rash, usually developing within the first 2 to 4 weeks of therapy, is by far the most frequent type of skin toxicity caused by EGFR antibody treatment.\nManagement guidelines for the prophylaxis and treatment of EGFR inhibitor-mediated dermatologic toxicities usually recommend prophylactic use of tetracyclines and special skin care. Indeed, these recommendations are based on few randomized trials:\nJatoi et al. randomized 62 patients treated with anti-EGFR antibodies or tyrosine kinase inhibitors (TKIs) between tetracycline and placebo. While the overall incidence of skin toxicity was not reduced, a significant decrease of severity was observed. Of note, patients treated with tetracycline reported improved skin-related quality of life as assessed by the SKINDEX-16 quality of life (QoL) questionnaire [5]. In a subsequent similar study including 65 patients Jatoi and coworkers could not confirm these findings [6]. Scope et al. randomized 48 colorectal cancer patients treated with cetuximab between minocycline and placebo. Minocycline significantly decreased moderate to severe itch. Likewise, a trend toward a decreased incidence of moderate to severe rash was noticed (20% vs. 42%, P =.13) [7]. Deplanque et al. randomized 147 patients with non-small-cell lung cancer (NSCLC) between treatment with erlotinib alone or in combination with doxycycline. Doxycycline did not reduce the incidence of folliculitis over untreated patients (71% vs. 81%, P =.117) but significantly reduced its severity (P =.001) [8]. Melosky et al. randomized 150 patients with NSCLC treated with erlotinib between prophylactic minocycline, reactive minocycline and no treatment. The incidence of skin toxicity was 84% regardless of treatment arm. Prophylactic treatment with minocycline, however, significantly lengthened the time to the most severe grade of rash. Grade 3 rash was significantly higher in the no-treatment arm [9]. Arrieta et al. randomized 90 patients with NSCLC treated with afatinib between preemptive tetracycline and only reactive treatment. Tetracycline reduced the rash incidence of any grade (44.5% vs. 75.6%, P =.046) and the severity of grade ≥ 2 (15.6% vs. 35.6%, P =.030) over reactive treatment [10]. Finally, Lacouture et al. randomized 114 patients with NSCLC treated with dacomitinib between preemptive doxycycline and placebo. Doxycycline significantly reduced the incidence of select dermatologic adverse events of interest (SDAEI) grade ≥ 2 by 50% (P =.016) [11].\nIn all, it appears to be justified to recommend a prophylaxis with tetracyclines such as doxycycline or minocycline. Two randomized studies confirmed that tetracyclines should be used as prophylaxis and not as an early intervention: Firstly, the so-called STEPP trial randomized 95 patients with metastatic colorectal cancer (mCRC) to be treated with panitumumab in combination with chemotherapy to either preemptive treatment for skin toxicity including doxycycline 100 mg b.i.d. or to receive the same regimen reactively. Incidence of skin toxicity grade ≥ 2 was significantly reduced by preemptive treatment compared with reactive treatment (29% vs. 62%). Moreover, patients randomized to preemptive treatment reported improved QoL according to Dermatology Life Quality Index (DLQI) especially between week 2 and 3 [3]. Secondly, the so-called J-STEPP trial confirmed these data using a comparable study design in a Japanese population with mCRC. 
Comparable to the STEPP trial, the incidence of ≥ grade 2 skin reactions was decreased with preemptive treatment compared with reactive treatment (21.3% vs. 62.5%, P <.001) [12].\nThus far, no data comparing oral antibiotics with a local treatment have been reported. In order to avoid a long term exposure to oral antibiotics a preemptive topical skin treatment could be useful. Moreover, topical agents are more convenient than oral antibiotics, and compliance might be better. In the present trial we sought to compare a local treatment using erythromycin followed by doxycycline in the case of insufficient activity with doxycycline. Erythromycin seemed to be very suitable as it belongs to standard treatment for acne vulgaris [13].\n\nBODY.RESULTS:\nBODY.PATIENTS:\nIn total, 88 patients were randomized by 11 sites between July 2011 and October 2014. 80 out of 88 patients were evaluated (41 patients in the doxycycline arm, 39 patients in the erythromycin arm). Reasons for exclusion were: withdrawal of consent, no therapy started (n = 4), no post-baseline skin toxicity assessment (n = 1); treatment not started due to RAS mutation (n = 2), investigator´s decision (n = 1).\nPatients´ and major tumor-related baseline characteristics are depicted in Table 1.\nTable 1BODY.MAJOR PATIENTS´ AND TUMOUR CHARACTERISTICS:\nParameterDoxycyclineErythromycinTotalAgeN\n40*\n39\n79\n Mean ± SD (years)\n68.1 ± 12.4\n69.1 ± 9.4\n68.6 ± 11\n Median (years)\n69.5\n70.3\n70.2\nGender\n\n\n\nN\n41\n39\n80\n Female\n13 (32%)\n18 (46%)\n31 (39%)\n Male\n28 (68%)\n21 (54%)\n49 (61%)\nCancer location\n\n\n\nN\n41\n38*\n79\n Colon\n23 (56%)\n18 (47%)\n41 (52%)\n Rectum\n18 (44%)\n20 (53%)\n38 (48%)\nPerformance status\n\n\n\nN\n39*\n38*\n77\n ECOG 0\n16 (41%)\n17 (45%)\n33 (43%)\n ECOG 1\n19 (49%)\n19 (50%)\n38 (49%)\n ECOG 2\n4 (10%)\n2 (5%)\n6 (8%)\n*Data missing.\n\nBODY.PRIMARY END POINT:\nThe percentage of patients developing skin toxicity according to NCI CTC criteria is shown in Table 2. In the erythromycin arm, 69% suffered from skin toxicity grade ≥ 2 (95% confidence interval [CI]: 52% to 83%; primary endpoint). In the standard arm with doxycycline the toxicity rate with grade ≥ 2 amounted to 63% (95% CI: 47 – 78%). Thus, the primary endpoint was not met.\nTable 2BODY.SKIN TOXICITY GRADED ACCORDING TO NCI CTCAE 4.0, WORST PER PATIENT:\nNCI CTCAE gradeDoxycyclineErythromycinTotalN\n41\n39\n80\n0 / no toxicity\n3 (7%)\n3 (8%)\n6 (8%)\n1\n12 (29%)\n9 (23%)\n21 (26%)\n2\n18 (44%)\n12 (31%)\n30 (38%)\n3\n8 (20%)\n15 (38%)\n23 (29%)\nP =.27 (Mantel-Haenszel test for trend, exact version).\nAlthough this randomized phase II study was not powered for formal statistical comparisons of the two treatment groups, results from additional statistical calculations are provided in a descriptive way: Using Fisher's exact test no significant differences between both arms were seen (P =.64). The odds ratio, with values > 1.0 indicating higher toxicity, was 1.29 (95% confidence interval: 0.46 – 3.67; one-sided 95% confidence limit for inferiority of the erythromycin arm: 3.15). In absolute values, the rate difference of 6% more toxicity in the erythromycin group has a 95% confidence interval (according to HAUCK-ANDERSON) ranging from −16% to + 28%.\nFigure 1 shows the time to the onset of skin toxicity of NCI CTCAE grade ≥ 2. 
In the doxycycline arm median time was 43 days (day one of cycle 4), in the erythromycin arm median time was 29 days (day one of cycle 3) without statistical significance in the corresponding logrank test (P =.68).\nFigure 1BODY.TIME TO SKIN TOXICITY NCI CTCAE GRADE ≥ 2 (FULL ANALYSIS SET [N = 80]):\n\nBODY.SKIN TOXICITY GRADED ACCORDING TO WOMO SCORE:\nThe total score result from the WoMo assessment over time (individual cycles) show a constantly higher score in the erythromycin group (data not shown). When the maximum WoMo score by patient over time is analyzed (Table 3), the difference amounts to almost 10 points which is about half of the standard deviation.\nTable 3BODY.SKIN TOXICITY ACCORDING TO WOMO SCORE:\nWoMo scoreDoxycyclineErythromycinTotalN\n41\n39\n80\nMean ± SD\n25.6 ± 16.7\n34.1 ± 19.8\n29.7 ± 18.7\nMedian\n23.1\n32.1\n29.5\nQuartiles\n14.5 - 34.8\n21.9 - 47.2\n16.1 - 40.9\nRange\n0 - 68.2\n0 - 88.1\n0 - 88.1\nA significantly higher number of patients treated in the erythromycin arm developed moderate or severe skin toxicity according to WoMo score, i.e. ≥ 20 points (30/39 versus 23/41 patients, P = .049); moreover, a higher number of patients had severe skin toxicity (WoMo score ≥ 40 points 14/39 versus 8/41 patients; P =.101).\nAnalyzes based on worsening of the WoMo score were performed. It took one cycle longer to appearance of a moderate to severe skin toxicity (WoMo score ≥ 20) with doxycycline compared to erythromycin. Figure 2 shows the time to increase of the WoMo score to ≥ 20. An obvious difference between both arms appeared on day 15 (day one of cycle 2) and this remained discernible thereafter indicating a clear trend towards superior efficacy of the standard arm using doxycycline (P =.069).\nFigure 2BODY.TIME TO WOMO SCORE ≥ 20 (FULL ANALYSIS SET [N = 80]):\n\nBODY.QUALITY OF LIFE:\nThe mean values of the EORTC QLQ C30 global health score are balanced at the beginning of the first treatment cycle with about 52 points for each group (albeit based on a lower number of valid questionnaires in the erythromycin group; Table 4). Over time, the mean values amount to 59 in the erythromycin arm and decline to 47 points in the doxycycline arm (at the beginning of cycle 4). The between-group differences range between 5 and 10 points per cycle.\nTable 4BODY.QUALITY OF LIFE GRADED ACCORDING TO EORTC QLQ C30: GLOBAL HEALTH SCORE:\nErythromycin arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n32\n36\n27\n28\n31\nMean ± SD\n52.3 ± 23.3\n53.2 ± 17.6\n52.8 ± 18.5\n59.2 ± 13.5\n55.4 ± 17.6\nMedian\n50\n50\n50\n54.2\n58.3\nQuartiles\n33.3 - 66.7\n41.7 - 66.7\n41.7 - 66.7\n50 - 66.7\n41.7 - 66.7\nRange\n0 - 91.7\n16.7 - 83.3\n0 - 83.3\n33.3 - 83.3\n16.7 - 91.7\nDoxycycline arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n40\n31\n31\n25\n25\nMean ± SD\n52.9 ± 18.7\n48.4 ± 17\n47 ± 19.5\n47.3 ± 21.2\n44.7 ± 20.1\nMedian\n50\n50\n50\n50\n50\nQuartiles\n33.3 - 66.7\n33.3 - 66.7\n33.3 - 58.3\n33.3 - 66.7\n33.3 - 58.3\nRange\n16.7 - 91.7\n16.7 - 83.3\n0 - 83.3\n0 - 83.3\n0 - 83.3\n*End of treatment.\nNo relevant differences between both two treatment strategies were noticed as per the DLQI. Throughout the observation period of 8 weeks, the median values of the DLQI total score in both arms are low with a highest medium value of 4 points for both arms at the beginning of cycle 3 (except for the rise of the median at the end-of treatment assessment in the doxycycline group with a very low questionnaire compliance in this arm at that time point) (Table 5). 
Considering the subscales, only “symptoms and feelings” indicated major deterioration with a median of 2 points on a maximum scale of 6 (data not shown).\nTable 5BODY.SKIN RELATED QUALITY OF LIFE GRADED ACCORDING TO DLQI (TOTAL SCORE):\nErythromycin arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n24\n30\n29\n24\n31\nMean ± SD\n1.2 ± 2.6\n5.3 ± 5.4\n5.4 ± 5.6\n4.4 ± 4.1\n5.4 ± 5.8\nMedian\n0\n3\n4\n3.5\n3\nQuartiles\n0 - 1\n1.2 – 8\n2 - 7\n1 – 6.2\n2 – 6\nRange\n0 - 10\n0 – 25\n0 - 27\n0 - 14\n0 – 26\nDoxycycline arm\nCycle\n1\n2\n3\n4\nEOT*\nN\n24\n27\n23\n20\n15\nMean ± SD\n0.8 ± 1.2\n4.4 ± 4.8\n5.3 ± 5.2\n4.9 ± 5.4\n5.8 ± 4.4\nMedian\n0\n2\n4\n3\n7\nQuartiles\n0 −1\n1 – 5.5\n2 – 7.5\n1.8 – 5.2\n2 – 9\nRange\n0 - 5\n0 - 19\n0- 21\n1 - 22\n0- 13\n*End of treatment.\n\nBODY.TREATMENT ADMINISTRATION AND SAFETY:\nIn total, 140 panitumumab administrations were recorded in the doxycycline arm (mean 3.4 per patient), and 139 in the erythromycin arm (3.6 per patient). 74% of the patients received all 4 scheduled administrations without differences in both arms. In the erythromycin arm, 33% of the patients stopped erythromycin due to insufficient activity and switched to doxycycline. Panitumumab dose reductions or dose delays were infrequent (see Supplementary Materials).\nAs mentioned, detailed analysis of adverse events beyond skin toxicity was not within the scope of the current study. However, neither unexpected events nor frequencies were detected within the framework of the adverse events and SAE reporting in this trial. Eleven out of 41 patients (26.8%) treated within the doxycycline arm, and 7/39 patients (17.9%) in the erythromycin arm reported SAE (P =.34).\n\nBODY.ANTITUMOR EFFICACY:\nRegarding the overall study population, 28% of the patients experienced an objective response, while 46% had stabilization of the disease. This estimation is based on 54 patients (68%) with a valid RECIST evaluation after 8 weeks. There is no indication for an effect of the randomized assignment of the skin treatment (P =.40, Fisher's exact test; odds ratio: 1.74, 95% confidence interval: 0.49 – 6.67).\n\n\nBODY.DISCUSSION:\nThis is the first randomized trial comparing an oral antibiotic prophylaxis (doxycycline) with an alternative local application (erythromycin) as preemptive skin treatment strategy against anti-EGFR targeted drugs. The primary endpoint of the present trial was the percentage of patients developing no skin toxicity grade ≥ 2 at any time during the first 8 weeks of panitumumab treatment. In the erythromycin arm, a moderate to severe skin toxicity could be prevented in only 31%, i.e. 69% suffered from skin toxicity grade ≥ 2. Thus the primary endpoint was missed and the experimental strategy (local erythromycin) would be rated as insufficient. In the reference arm, however, using doxycycline, the toxicity rate grade ≥ 2 amounted to 63%, as well, i.e. it was not lower than in the erythromycin arm. A numerical trend to a lower grade 3 toxicity with oral doxycycline can be discerned but without reaching statistical significance. Although most of the studies investigating tetracycline-family antibiotics in a preemptive setting could not demonstrate a reduction of the overall incidence of skin toxicity, they often showed a decrease of the severity compared to placebo [5, 7, 11], no treatment [8, 9], or reactive treatment [3, 9, 10, 11]. 
This effect could not be seen in the current trial comparing two different preemptive treatment strategies.\nThe rates of skin toxicity grade ≥ 2 are rather high with 69% and 63%, respectively, compared to the literature. Even if considering only trials with similar settings (mCRC treated with EGFR inhibitors) the rates of grade ≥ 2 skin toxicity in the preemptive arms are considerably lower in previous studies with 20% [5], 21.3% [12] and 29% [3]) and could not be reproduced in the present trial.\nUntil recently, the NCI CTCAE v 3.0 grading scale (published in 2006) had been used in most of the studies for evaluation of EGFR inhibitor induced skin toxicity [3, 5, 6, 8, 9, 12]. This version 3.0 included only a crude scale for severity of skin toxicity and allowed for subjective grading. The updated CTCAE v 4.0 used in the present trial provide a more thorough grading of skin toxicity taking into account that acneiform skin rash may develop only in lower percentage of skin surface. Patients that would have been classified grade 1 according to the NCI CTCAE v 3.0, could easily be regarded as grade 2 or even 3 according to the NCI CTCAE v 4.0. Thus, especially the results of the STEPP trial [3] on which the present trial was based cannot be compared with our data concerning the evaluation of skin rash. It is conceivable that the high amount of grade ≥ 2 toxicities in the present trial is due to the updated and improved grading system.\nNevertheless, even the updated version 4.0 does not accurately reflect the clinical situation. One major problem of this version is that the grading is based on the affected body surface area (BSA) rather than on the (local) severity/symptoms of skin rash. An upgrading beyond the grade stipulated by the grading system is principally possible (for instance due to limited instrumental or self-care activities of daily living) but this decision is left up to the discretion of the investigator. This may of course pose problems with respect to the comparability of skin toxicity between individual investigators and between studies.\nIn view of this, we also used an extended alternative grading system for EGFR-related skin toxicity (so called WoMo score [20]) in the present trial which takes into account also (i) the percentage of the facial area affected with rash or other dermatological adverse events as well as (ii) the type and severity of the efflorescences. Thus, the WoMo score allows for a more thorough description of the skin toxicity and enables more precise description and comparison of efficacy of different prophylactic treatments.\nNotably, using this grading system, we detected obvious differences between the two treatment groups. Firstly, the mean and median WoMo scores were numerically higher in the erythromycin arm. Secondly, the percentage of patients developing moderate or severe skin rash (WoMo score ≥ 20 points) was significantly higher in the erythromycin group and the time to development of moderate or severe skin toxicity was shorter in the erythromycin group (at the beginning of cycle 2 versus cycle 3). This data indicates that the experimental stepwise approach (erythromycin followed by doxycycline) was not able to at least accomplish the same results as immediate doxycycline prophylaxis. 
In this regard it is important to stress that the treatment intensity and adherence to panitumumab as well as the therapeutic efficacy was comparable between both arms.\nThe analysis of the skin related quality of life using the DLQI total score shows a distinct deterioration of QoL at the beginning of cycle 2 in both arms, being rather stable thereafter in the patients remaining in the observation. The absolute amount of impact on QoL according to the DLQI seems to be rather limited with about 4 out of 30 points [14]. This is due to the finding, that only the subscale “symptoms and feelings” indicated major deterioration. All other scales were hardly affected. At the time the study was designed, the DLQI seemed to be one of the most promising questionnaires to assess skin-related QoL, not least due to the data of the STEPP trial [3]. Meanwhile, a questionnaire (FACT- EGFRI-18) is available designed to assess dermatologic symptoms associated with EGFR inhibitors [15]. It should be used for future clinical trials.\nIn all, we were not able to find significant differences between both prophylactic treatments investigated in this trial using the NCI CTC AE v 4.0 grading system. It can be pointed out that the erythromycin arm duplicated grade 3 skin toxicity. Contrarily, by using the WoMo score we demonstrated that more patients treated with erythromycin developed moderate or severe skin toxicity at earlier time points indicating that local erythromycin should not be used as a substitute of oral tetracyclines as prophylaxis against EGFR mediated skin toxicity. We propose to use the WoMo score in addition to NCI CTCAE grading in clinical trials dedicated to study the efficacy of prophylactic or interventional measures against EGFR-mediate skin toxicity.\nMoreover, rash rates are high despite prophylactic treatment. These high rates beg for further research to establish improved alternative treatment options. High expectations are placed on vitamin K for use of EGFR treatment associated skin toxicity. The prophylactic use of K1 cream was demonstrated by Ocvirk [16] and – most recently – by Schimanski et al. [17]. Regarding this, the results of the EVITA trial (NCI01345526) are eagerly awaited. EVITA is the first randomized study that evaluated the addition of vitamin K to doxycycline prophylaxis. Another promising trial (NCT03051880) investigates the preventive use of topical EGF cream. Until then the prophylactic use of tetracyclines may be regarded as a standard of care in patients undergoing anti-EGFR directed treatment.\n\nBODY.MATERIALS AND METHODS:\nBODY.PATIENTS:\nPatients with metastatic colorectal cancer were eligible to participate in the study provided they met all selection criteria listed in Table 6. 
No study treatment or any other procedure within the framework of the trial was performed in any patient prior to receipt of written informed consent.\nTable 6BODY.INCLUSION AND EXCLUSION CRITERIA:\nInclusion criteria:\nPatients with wild-type RAS (KRAS and NRAS) status of metastatic colorectal cancer treatment with panitumumab according to label\nRAS wild-type tested in\nKRAS exon 2 (codons 12/13)\nKRAS exon 3 (codons 59/61)\nKRAS exon 4 (codons 117/146)\nNRAS exon 2 (codons 12/13)\nNRAS exon 3 (codons 59/61)\nNRAS exon 4 (codons 117/146)\n\n\nTreatment with pre-emptive study medication shall begin the day before treatment start with panitumumab\nWillingness to cope with biweekly quality of life questionnaires\nWritten Informed consent\nAged at least 18 years\nECOG Performance Status 0-2\nLife expectancy of at least 12 weeks\nAdequate haematological, hepatic, renal and metabolic function parameters:\nLeukocytes > 3000/mm3\nANC ≥ 1500/mm3\nPlatelets ≥ 100,000/mm3\nHaemoglobin > 9 g/dl\nSerum creatinine ≤ 1.5 x ULN\nBilirubin ≤ 1.5 x ULN\nGOT-GPT ≤ 2.5 x ULN (in case of liver metastases GOT / GPT ≤ 5 x ULN)\nAP ≤ 5 x ULN\nMagnesium, Calcium and potassium within normal ranges (may be substituted before study entry)\n\n\nExclusion criteria:\nSubject pregnant or breast feeding, or planning to become pregnant within 6 months after the end of treatment\nSubject (male or female) is not willing to use highly effective methods of contraception (per institutional standard) during treatment and for 6 months (male or female) after the end of treatment (adequate: oral contraceptives, intrauterine device or barrier method in conjunction with spermicidal jelly)\nSerious concurrent diseases\nOn-treatment participation in a clinical study in the period 30 days prior to inclusion\nClinically significant cardiovascular disease in (incl. myocardial infarction, unstable angina, symptomatic congestive heart failure, serious uncontrolled cardiac arrhythmia) ≤ 1 year before enrolment\nHistory of interstitial lung disease, e.g. pneumonitis or pulmonary fibrosis or evidence of interstitial lung disease on baseline chest CT scan\nHistory of HIV infection\nOther previous or concurrent malignancy (≤ 5 years prior to enrolment in study) except non-melanoma skin cancer or cervical carcinoma FIGO stage 0-1 if the patient is continuously disease-free\nKnown allergic reactions on panitumumab, doxycycline or erythromycin\nPrevious treatment with anti-cancer agents directed against EGFR (e.g. cetuximab, panitumumab, erlotinib, gefitinib, lapatinib)\nSkin rash existing before or due to other reasons than panitumumab treatment\nOther dermatologic disease that may interfere with correct grading of panitumumab induced skin rash\nParallel treatment with anti-tumor agents other than panitumumab\n\nAt the time the study was initiated, patients with tumors harboring no KRAS exon 2 mutations were included to be treated according to panitumumab label. Data on the negative predictive effect of mutations beyond KRAS exon 2 (i.e. KRAS exon 3-4 and NRAS exons 2-4) were first presented during the American Society of Clinical Oncology (ASCO) Annual Meeting 2013 and published later on [18, 19]. The inclusion criteria of the current trial were changed accordingly. The amendment was approved by the ethical committee on 17th January 2014.\n\nBODY.STUDY DESIGN AND TREATMENT SCHEDULE:\nThis is a phase II, open-label, randomized, controlled, parallel-arm, multi-center study. 
Patients received standard tumor treatment using panitumumab in label (i.e. either as monotherapy or in combination with chemotherapy), and were randomized (1:1) to two preemptive strategies of skin toxicity prophylaxis. In order to base the trial on the best available evidence, the superior treatment arm of the phase II STEPP trial [3] was chosen as standard arm: (i) Standard arm: Skin moisturizer (in the morning on rising), sun screen (before going outdoor), doxycycline 100 mg b.i.d. starting the day before treatment start with panitumumab and for the following 8 weeks. (ii) Erythromycin arm (sequential approach): Skin moisturizer (in the morning on rising), sun screen (before going outdoor), erythromycin ointment 2% on face, hands, feet, neck, back, once daily at bedtime starting the day before treatment start with panitumumab and for the following 8 weeks; in case of skin rash ≥ grade 2 doxycycline 100 mg b.i.d. was started.\nThe study treatment period consisted of 8 weeks (i.e. 4 administrations of bi-weekly panitumumab).\n\nBODY.STUDY OBJECTIVES:\nThe primary objective of the study was to establish an alternative preemptive skin treatment avoiding or delaying the use of oral doxycycline in patients with wild-type (K)RAS metastatic colorectal cancer treated with panitumumab. For the primary endpoint (percentage of patients developing no skin toxicity ≥ grade 2 at any time during the first 8 weeks of treatment with panitumumab) skin toxicity was assessed using the NCI CTCAE v 4.0 criteria.\nSecondary objectives comprised: (i) Evaluation of skin-related (DLQI) and global quality of life (EORTC QLQ C30); (ii) assessment of skin toxicity with a different grading scale (i.e. WoMo score).\n\nBODY.STUDY ASSESSMENTS:\nPanitumumab was administered on a bi-weekly basis according to label (i.e. cycle length was 14 days). On day 1 of every panitumumab treatment cycle the following assessments were scheduled: (i) Physical examination including weight, ECOG status and vital signs; (ii) adverse effects according the NCI CTC Version 3.0 criteria; (iii) laboratory tests including clinical chemistry and hematology; (iv) quality of life including EORTC Quality-of-life C30, skin related quality of life with DLQI; (v) skin rash assessment including NCI CTC criteria version 4.0 and WoMo scoring.\nFor skin toxicity grading purposes the NCI CTCAE v 4.0 was used as a primary endpoint (see Supplementary Materials). Moreover, we used the so called WoMo (Wollenberg and Moosmann) score, a thorough grading system. Briefly, the WoMo score is based on a three-part system which takes into account body involvement according to the rule of nines, percentage of facial involvement, as well as a semiquantitative description of the skin lesions by five items (erythema intensity and distribution, papulation, postulation, and scaling/crusts). The final score ranges from 0 to 100 (0 to 20 mild, 21 to 40 moderate, 41 to 100 severe acneiform eruptions) [20] (see Supplementary Materials). The assessment of skin toxicity was done by physicians trained on the NCI CTC and WoMo scores.\nThe assessment of overall quality of life was done using the EORTC QLQ C30. Skin-related quality of life was graded according to the dermatology quality of life index (DLQI; cf: http://sites.cardiff.ac.uk/dermatology/quality-of-life/dermatology-quality-of-life-index-dlqi/). 
Briefly, the DLQI consists of 10 questions to assess QoL in patients with skin disorders and can be analyzed under six subscales (symptoms and feelings, daily activities, leisure, work and school, personal relationship, and treatment). It is scored on a scale of 0 to 30; higher scores indicate more QoL impairment [21].\nTumor assessments were performed according to local practice. As all antineoplastic drugs used in this trial have a well-documented adverse event profile, the assessment of adverse events beyond skin toxicity was outside the main scope of the trial. However, serious adverse events were recorded and analyzed.\n\nBODY.STATISTICAL CONSIDERATIONS:\nAccording to data derived from the STEPP trial the development of skin toxicity grade ≥ 2 can be avoided in about 70% of patients [2]. The primary hypothesis of the current trial was that a similar efficiency could be achieved by a sequential skin treatment strategy, starting with local erythromycin administration.\nDespite the shorter duration of the STEPP trial (6 weeks of treatment) its results had to be applicable to the 8 week situation in this trial because the primary endpoint in both trials was based on the worst grade skin toxicity which is reached after approximately 2 weeks of panitumumab treatment. The natural course of panitumumab induced skin toxicity after the climax at about 2 weeks is a steady recovery within the following weeks.\nThe statistical calculation was based on the following premises and assumptions:\nAccording to these parameters, and using a standard single-stage phase II design by Fleming (1982), n = 37 patients evaluable for prophylactic efficacy had to be recruited. As a similar number of patients had to be recruited to the reference STEPP arm, a total number of 74 evaluable patients was required. Calculating a drop-out rate of 15% a total of 88 patients had to be included in the study.\nAll randomized patients who received at least one application of study medication and who had at least one post-baseline assessment of the primary endpoint variable will be the full analysis set (FAS), even if they violate the selection criteria of the study.\nSkin toxicity (primary endpoint), clinical response and other rates were calculated, providing confidence intervals. In case of comparison between patient groups, these proportions were to be analyzed by Fisher´s exact test, X2 test or Mantel-Haenszel test (or trend test according to COCHRAN/ARMITAGE), respectively. For correlation analyses between the different quality of life and/or toxicity scores, the non-parametric test according to Spearman was preferably to be applied.\nThe onset of grade ≥ 2 skin toxicity was to be likewise estimated by the product limit method of Kaplan-Meier and eventually compared using the logrank test.\n\nBODY.ETHICS:\nThe study was approved by the Ethics Committee (EC), Medizinische Ethik-Kommission II of the Universitätsmedizin Mannheim, and all other participating institutions.\nThis study was conducted in agreement with the German Drug Law (AMG), ICH Harmonized Tripartite Guideline on Good Clinical Practice, the „Verordnung über die Anwendung der Guten Klinischen Praxis bei der Durchführung von klinischen Prüfungen mit Arzneimitteln zur Anwendung am Menschen“ [22] and the Declaration of Helsinki (Tokyo, Venice, Hong Kong, Somerset West and Edinburgh amendments) [23].\n\n\nBODY.SUPPLEMENTARY MATERIALS:\n\n**Question:** Compared to r doxycycline 100 mg b.i.d. 
what was the result of erythromycin ointment 2% followed by doxycycline on the appearance of moderate to severe skin toxicity?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13881
|
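The trial quoted in the row above (id 13881) reports its between-arm comparison of grade ≥ 2 skin toxicity only descriptively: Fisher's exact test P = .64 and an odds ratio of 1.29 (95% CI 0.46-3.67), based on 27/39 erythromycin versus 26/41 doxycycline patients from Table 2. The sketch below illustrates that kind of 2 x 2 comparison, assuming Python with SciPy is available; the counts come from the article's Table 2, and the Woolf (log-odds) confidence interval is a normal approximation, so it is not expected to reproduce the exact interval quoted in the text.

```python
# Minimal sketch of the 2x2 comparison described in the quoted trial:
# grade >= 2 skin toxicity in the erythromycin arm (27 of 39) versus the
# doxycycline arm (26 of 41), taken from Table 2 of the article.
import math

from scipy.stats import fisher_exact

# Rows: arm (erythromycin, doxycycline); columns: (grade >= 2, grade 0-1).
table = [[27, 12],
         [26, 15]]

_, p_value = fisher_exact(table, alternative="two-sided")

a, b = table[0]
c, d = table[1]
sample_odds_ratio = (a * d) / (b * c)  # ~1.29, as reported in the article

# Woolf (log-odds) 95% CI -- a normal approximation, so it will not exactly
# reproduce the exact interval (0.46-3.67) given in the article.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
half_width = 1.96 * se_log_or
ci_low = math.exp(math.log(sample_odds_ratio) - half_width)
ci_high = math.exp(math.log(sample_odds_ratio) + half_width)

print(f"Fisher exact P = {p_value:.2f}")  # ~0.64 per the article
print(f"Odds ratio = {sample_odds_ratio:.2f} "
      f"(approx. 95% CI {ci_low:.2f}-{ci_high:.2f})")
```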
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nSaphenous vein stripping surgical technique and frequency of saphenous nerve injury\n\n\n\nABSTRACT.OBJECTIVES:\nSaphenous nerve injury is the most common complication after surgical treatment of varicose veins. The aim of this study was to establish its frequency at great saphenous vein long stripping when four methods of surgery were applied.\n\nABSTRACT.METHODS:\nEighty patients were divided into four groups depending on different stripping methods. Sensory transmission in saphenous nerve and sensory perception of shank were examined before surgery and two weeks, three and six months afterwards with clinical neurophysiology methods.\n\nABSTRACT.RESULTS:\nIn 36% of patients, surgeries caused the injury of saphenous nerve mainly by proximal stripping without invagination (65%, group I). Transmission disturbances ceased completely after three months in patients undergoing distal stripping with invagination (group IV), while in group I they persisted for six months in 35%. Group IV patients were the least injured and group I the most.\n\nABSTRACT.CONCLUSION:\nNeurophysiological findings may suggest that distal stripping with vein invagination gives the best saphenous nerve sparing.\n\nBODY.INTRODUCTION:\nVaricose veins surgery is one of the most frequent operations carried out in Europe as well as one of the most frequent causes of civil-law proceedings against general and vascular surgeons, especially in West European countries, which results mainly from incidental nerve injury.1,2 The increase in medical claims forces a search for the method of varicose veins treatment that can guarantee the fewest complications possible.\nThe fundamental approach in surgical treatment of varicose veins with great saphenous vein incompetence is the long stripping.3–5 There are several variants of the method: proximal stripping with a probe – the classical Babcock method, proximal stripping with invagination (inversion) of the vessel, distal stripping with and without invagination.6–10\nThe varicose veins extraction may be associated with many intra- and postsurgical complications. However, the most frequent is the saphenous nerve lesion, which is related to its anatomical path. Possible anomalies in the common anatomical passage of the saphenous nerve to the great saphenous vein should be considered during varicose veins extraction and what can foster iatrogenic injuries within higher risk zones.11\nFemoral and crural portions can be distinguished in the course of saphenous nerve. The nerve runs subfascially within the femoral portion, whereas within the crural portion it pierces the crural fascia and, running epifascially together with the great saphenous vein, reaches the medial margin of the foot. The nerve supplies sensory innervation to the medial side of the lower thigh, anteromedial side of the knee joint and the leg down to medial side of the foot and hallux. 
During stripping of the great saphenous vein, a lesion most frequently affects the nerve trunk in region of medial malleolus and its subpatellar branch together with the medial cutaneous branches of the leg, which manifests as inaccurate feeling perception (paresthesia) in the form of tingling, numbness, electric current sensation, hypoesthesia, hyperesthesia, or a burning sensation.\nThe aims of this study were to establish and objectivise the relation between saphenous nerve conduction disorders and varicose vein surgery as well as to compare the frequency of nerve injury in the four methods of great saphenous vein stripping.\n\nBODY.METHODS:\nA group of 80 patients with confirmed venous incompetence of Hach’s III and IV class by clinical (according to CEAP classification – C2 grade) and imaging (Doppler ultrasonography) examinations was enrolled into this neurophysiological prospectively gathered data.12 Patients did not report the neuropathic pain before surgery, the supplemental management with pregabalin and methylcobalamin was not applied. The patients were qualified for surgical treatment in Department of General and Vascular Surgery at Karol Marcinkowski Poznań University of Medical Sciences from 2004 to 2008 basing on the rule of consecutive cases. The neurophysiological examinations were conducted in Department of Pathophysiology of Locomotor Organs four times (before surgery-S1, two weeks after surgery-S2, three months after surgery-S3, and six months after surgery-S4).\nPatients aged from 20 to 50 years (42.2 years on average) were randomly divided into four subgroups (20 persons each) G1–G4 depending on the different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination). The length of great saphenous vein stripping depended on the type of surgery. In each group, the same female to male proportion was maintained (15 females and five males). Each subgroup was operated with different stripping method. The saphenous nerve was not identified along the saphenous vein at the knee and it was not separated from it before stripping. Patients suffering from conditions that possibly affect nervous system functions, which could be polyneuropathies, multiple peripheral mononeuropathies, demyelinating diseases, neuromuscular diseases, deficiency syndromes, nervous system poisoning, advanced discopathy, extremity or vertebral column trauma, arterial diseases, episodes of superficial vein thrombosis, post-thrombotic syndrome were excluded from this study.\nThe general methodology was based on neurophysiological examinations of saphenous nerve conduction.13–16 It was carried out four times. First time it was before the operation, in order to estimate preliminary values of neurophysiological variables (amplitude, latency, conduction velocity) in sensory conduction studies (SCV), then two weeks after the operation to establish a possible nerve injury due to surgical intervention, three months after the operation when the changes affecting conduction (swellings, hematomas, rubber bandage wearing) had disappeared and six months after the operation to assess the improvement of neurophysiological parameters in the patients with previously detected conduction disorders. 
The results were analyzed with reference to the control group of 50 healthy volunteers (40 females and 10 males aged 45.1 years on average) recruited from hospital workers whom the SCV reference values were acquired.\nThe neurophysiological testing was performed with Keypoint System (Medtronic A/S, Skovlunde, Denmark). Evoked potentials were recorded at different levels of saphenous nerve conduction with standard silver chloride electrodes, using two techniques of saphenous nerve examination (distal and proximal). In the distal method, recording took place anterior to medial malleolus and stimulation 14 cm, 18 cm, 22 cm, 26 cm above, along the medial margin of tibia. In the proximal method, potentials were recorded at the midpoint between medial malleolus and the slot of the knee joint along the medial margin of tibia while stimulation was performed 14 cm, 18 cm, and 22 cm above. The ground electrode was placed halfway between the recording and stimulating electrodes. Bilateral and bipolar saphenous nerve stimulation was performed, applying supramaximal, rectangular, single electric stimuli with a frequency lower than 3 Hz, 0.2 ms duration and intensity up to 30 mA. The recordings were carried out with time base usually 2 ms, amplification 5 µV, high-pass filter at 5 Hz and low-pass filter at 3 kHz. The parameters of amplitude, latency and corresponding afferent conduction velocities were evaluated. Amplitude greater than 1 µV was considered as normal. The conduction velocity depended on the distance at which the saphenous nerve was examined. The following minimal conduction velocities were accepted: 140 mm – 38.58 m/s; 180 mm – 39.13 m/s; 220 mm – 39.85 m/s; 260 mm – 40.21 m/s for distal and 140 mm – 40.53 m/s; 180 mm – 40.86 m/s; 220 mm – 41.38 m/s for proximal method.\nTwo methods complementary to SCV studies were used in order to study the saphenous nerve sensory function. First, it was the intensity of current versus stimulus duration curves (IC–SD) that was conducted after monopolar stimulation of skin anterior to the medial malleolus, then 14 cm and 26 cm. Second, it was the von Frey’s filaments examination (FvF) in three areas, the medial part of popliteal region, the middle 1/3 of the leg on the medial side and medial malleolus region. During each FvF examination, three measurements were performed with calibrated silicon filaments: 0.12 mm, 0.30 mm (corresponding to proper sensory perception), and 0.55 mm in diameter. Touch sensation reported in two out of three assessments was considered as positive. A positive trial with the thinnest filament indicated hyperesthesia and the completely negative trial – analgesia.\nThe research was approved by The Bioethics Committee of Medical University in Poznań and it was performed with the ethical principles of the Helsinki Declaration. Examined subjects gave their informed consent for the examinations. All subjects were informed about the aim of study and about the course of examination.\n\nBODY.STATISTICS:\nIn the statistical analysis of the obtained results, the Statistica programme 9.0 was used (StatSoft). The Fisher–Freeman–Halton test was used for evaluation of sample size during studies on normality distribution, similarly the McNemara when two measurements were compared (between groups) or the Cochran’s Q test when more than three measurements were considered (between stages of observations). 
The results were considered as statistically significant at p < 0.05.\n\nBODY.RESULTS:\nThere was no nerve transmission abnormalities detected prior to surgery in any group. Nerve transmission abnormalities diagnosed two weeks after the operation (in 30 out of 80 patients – 37.5%) were present in 17 cases (21.25%) after three months and in 11 patients (13.75%) after six months (Table 1).\nTable 1.Number (percentage) of parameters changes in sensory fibers transmission of saphenous nerve (SCV studies) at four stages of observation (S1–S4) during recordings in patients from four groups with different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination).\nBefore surgery (S1)Two weeks after surgery (S2)Three months after surgery (S3)Six months after surgery (S4)Differences between stages of observationTotal, N = 80\n 0/80 (0%)\n 30/80 (37.5%)\n 17/80 (21.25%)\n 11/80 (13.75%)\nS1: S2 < 0.0001\nS1: S3 < 0.0001\nS1: S4 = 0.0010\nS2: S3 = 0.0008\nS2: S4 < 0.0001\nS3: S4 = 0.0412\nG1, N = 20\n 0/20 (0%)\n 13/20 (65%)\n 10/20 (50%)\n 7/20 (35%)\nS1: S2 = 0.0002\nS1: S3 = 0.0020\nS1: S4 = 0.0156\nS2: S4 = 0.0412\nG2, N = 20\n0/20 (0%)\n7/20 (35%)\n4/20 (20%)\n2/20 (10%)\nS1: S2 = 0.0156\nG3, N = 20\n0/20 (0%)\n6/20 (30%)\n3/20 (15%)\n2/20 (10%)\nS1: S2 = 0.0312\nG4, N = 20\n0/20 (0%)\n4/20 (20%)\n0/20 (0%)\n0/20 (0%)\n\nDifferences between groups\nG1:G4 = 0.0095\nG1:G3 = 0.0407\nG1:G4 = 0.0083\n\nG1:G4 = 0.0004\n\n\nStatistically significant differences at p < 0.05 are marked bold.\n\nIn three groups of patients (G1–G3), a statistically significant (p < 0.05) saphenous nerve lesion was identified two weeks after the surgery. In group G1 (proximal stripping without invagination), it occurred in 65%, in group G2 (proximal stripping with vein invagination) in 35%, and in group G3 (distal stripping without vein invagination) – in 30%. Best results of treatment were observed in group G4 (distal stripping with vein invagination) where no statistically significant consequences of nerve injury were detected. Six months after the operation sensory conduction disturbances were identified in a remarkable number of patients (seven out of 20 that is 35%) but only in group G1 (Table 1).\nAmong the patients with disturbances of nerve transmission in the saphenous nerve fibers, the most frequent observation was slowing down of the conduction velocity with proper amplitude parameters (Figure 1). The results of SCV indicating deficiencies in sensory transmission are related with the results of IC–SD as well as with the von Frey filaments examination, which are presented in Tables 2 and 3.\nFigure 1.Examples of neurophysiological SCV recordings from saphenous nerve following electrical stimulation at distances shown on the right part of figure. Recordings were obtained in one of the patients 3 months after proximal stripping without invagination. Calibrations for amplification (µV) and time base (ms) as well as intensity of stimulation (mA) are shown in upper part of figure. 
Note proper values of potentials amplitudes (marked with crosses) but their prolonged latencies (marked with arrows) influenced the conduction velocities of nerve impulses in sensory fibers.\n\nTable 2.Number (percentage) of changes in sensory perception (IC–SD studies) of saphenous innervation at four stages of observation (S1–S4) during recordings in patients from four groups with different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination).\nBefore surgery (S1)Two weeks after surgery (S2)Three months after surgery (S3)Six months after surgery(S4)Differences between stages of observationTotal, N = 80\n 0/80 (0%)\n 49/80 (61.25%)\n 22/80 (27.5%)\n 12/80 (15%)\nS1: S2 < 0.0001\nS1: S3 < 0.0001\nS1: S4 < 0.0001\nS2: S3 < 0.0001\nS2: S4 = 0.0060\nS3: S4 = 0.0040\nG1, N = 20\n 0/20 (0%)\n 15/20 (75%)\n 11/20 (55%)\n 7/20 (35%)\nS1: S2 < 0.0001\nS1: S3 = 0.0009\nS1: S4 = 0.0156\nS2: S4 = 0.0133\nG2, N = 20\n 0/20 (0%)\n 14/20 (70%)\n 5/20 (25%)\n 2/20 (10%)\nS1: S2 < 0.0001\nS2: S3 = 0.0076\nS2: S4 = 0.0015\nG3, N = 20\n 0/20 (0%)\n 11/20 (55%)\n 4/20 (20%)\n 2/20 (10%)\nS1: S2 = 0.0010\nS2: S3 = 0.0233\nS2: S4 = 0.0076\nG4, N = 20\n 0/20 (0%)\n 9/20 (45%)\n 2/20 (10%)\n 1/20 (5%)\nS1:S2 = 0.0039\nS2:S3 = 0.0233\nS2:S4 = 0.0133\nDifferences between groups\n\nG1:G3 = 0.0483\nG1:G4 = 0.0436\n\n\nG1:G4 = 0.0054\n\n\nStatistically significant differences at p < 0.05 are marked bold.\n\nTable 3.Number (percentage) of sensory perception disturbances (von Frey filaments examination) for saphenous innervation at four stages of observation (S1–S4) during recordings in patients from four groups with different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination).\nBefore surgery (S1)Two weeks after surgery (S2)Three months after surgery (S3)Six months after surgery (S4)Differences between stages of observationTotal, N = 80\n 0/80 (0%)\n 38/80 (47.5%)\n 24/80 (30%)\n 15/80 (18.75%)\nS1: S2 < 0.0001\nS1: S3 < 0.0001\nS1: S4 < 0.0001\nS2: S3 = 0.0005\nS2: S4 < 0.0001\nS3: S4 = 0.0076\nG1, N = 20\n 0/20 (0%)\n 15/20 (75%)\n 12/20 (60%)\n 9/20 (45%)\nS1: S2 < 0.0001\nS1: S3 = 0.0005\nS1: S4 = 0.0039\nS2: S4 = 0.0412\nG2, N = 20\n 0/20 (0%)\n 10/20 (50%)\n 4/20 (20%)\n 2/20 (10%)\nS1: S2 = 0.0020\nS2: S3 = 0.0412\nS2: S4 = 0.0133\nG3, N = 20\n0/20 (0%)\n6/20 (30%)\n5/20 (25%)\n3/20 (15%)\nS1: S2 = 0.0316\nG4, N = 20\n 0/20 (0%)\n 7/20 (35%)\n 3/20 (15%)\n 1/20 (5%)\nS1:S2 = 0.0156\nS2:S4 = 0.0412\nDifferences between groups\nG1:G3 = 0.0103\nG1:G2 = 0.0224\nG1:G2 = 0.0309\n\nG1:G4 = 0.0248\nG1:G4 = 0.0007\nG1:G4 = 0.0084\n\nStatistically significant differences at p < 0.05 are marked bold.\n\nIn postoperative observations, the significant saphenous nerve sensory perception abnormalities were detected in each group of patients based on results of IC–SD and von Frey filaments studies. Three and six months after the varicose veins extractions there was detected similar, moderate improvement of sensory perception parameters in groups G2 and G3. 
The best results of saphenous nerve regeneration were observed in G4 (one out of 20 patients – 5% in both IC–SD and FvF studies) and the worst in G1 (seven out of 20 patients – 35% in IC–SD examinations and nine out of 20 patients – 45% in FvF studies) groups.\n\nBODY.DISCUSSION:\nSaphenous nerve injury is known for a long time to be a potential complication of the saphenous vein long stripping.6–10,13–24 The proximity of the vein and nerve, especially at the level of shank, results in injuries during the vein resection,25–29 especially in the patients with advanced varicose veins in this area caused by a large number of insufficient perforators. Additionally, in the patients suffering from long-lasting varicose veins, it can lead to an accretion of the widened-vein, providing the saphenous nerve neuropraxy. This pathology also facilitates in injury of the nerve fibers during the operation. The sensory disorders associated with an injury to the nerve trunk or its branches, occurring after the varicose veins surgical treatment, can be either temporary or constant.23,26\nThe results presented in this paper, referring to the saphenous nerve injury during the varicose veins surgical treatment using the saphenous vein long stripping, correlate with the results that are to be found in publications devoted to the subject.18,19,21 Many authors tried to determine the frequency of this complication. It varies from 5.9% to 72% and depends to a large extent on the applied stripping technique.17–19,21,23–26\nIn the study presented, the most numerous saphenous nerve injuries were identified after the proximal long stripping without vein invagination (the Babcock method). Such complications were reported in 20–50% of patients treated with classical stripping of the Babock type.27 It is joined to the anatomical relation of the saphenous vein and the saphenous nerve together with its divisions. Anatomically, the nerve branches form a reverted V shape, which facilitates their injury by the probe’s head, especially while proximal pulling of the vein.17,18,28\nInvagination of the vein as well as the change in the direction of its removal from proximal to distal caused the reduction in the number of neurological complications related to the saphenous nerve injury.8,19 The proximal stripping with invagination of the vessel and the distal stripping without invagination occurred to be comparable in terms of the number of postsurgical sensory disorders in the saphenous nerve innervation area.\nThe best effects were achieved with distal stripping with vein invagination.19 From the results of this study, it can be concluded that this method is the least invasive for the saphenous nerve. All the conduction abnormalities that were objectively identified in SCV studies two weeks after the surgery were temporary and normalized at three months after the operation. Part of the patients subjectively still suffered from some sensory disorders, which was confirmed in FvF and IC–SD examinations perhaps due to the injury in small branches of saphenous nerve. Based on the results of neurophysiological findings provided in this study, we suppose that compilation of distal stripping with vein invagination gives the best saphenous nerve sparing during varicose veins extraction. 
Transient changes in nerve transmission observed two weeks after surgery can be due to the temporary neuropraxia caused by edema, hematoma, inflammatory processes, or simply by the mechanical irritation during operation.\nA large number of neurological complications after the varicose veins surgery in lower extremities with the use of the saphenous vein stripping make alternative means of treatment of the saphenous vein incompetency worth consideration.30–34\nOne of these alternative methods is the short stripping to the level of knee joint with additional phlebectomy of the varicose veins below this level.24\nIt should be remembered, however, that in spite of more and more innovative methods of the varicose veins treatment, the long stripping is still the most common procedure and a first-choice therapy for the saphenous vein incompetence.3–5 Even if the sensory disorders developed after the surgical removal of the whole vein can persist in some patients, the advantages of varicose veins resection much exceed the inconveniences resulting from the potential neurological complications. Nerve injuries also might appear after thermal ablation associated with varicose veins surgeries and there is a need for a similar like presented study of patients having various endovenous treatments. The limited size of this study seems to be a weakness of the presented report, but its results may suggest that neurophysiological testing is valuable for evaluation of the treatment efficiency after varicose veins surgeries.\n\nBODY.CONCLUSIONS:\nVaricose veins surgery in the lower limbs using the technique of the saphenous vein long stripping significantly impacts the sensory conduction in the saphenous nerve. Distal stripping with vein invagination seems to be the least invasive for the saphenous nerve fibers, which may be a consequence of the anatomical relation of the saphenous nerve and vein. Proximal stripping without vein invagination leads to the most numerous neurological complications detected two weeks after the operation as well as three and six months after the procedure, when the changes possibly affecting the nerve transmission had ceased.\n\n**Question:** Compared to G3-group after distal stripping without invagination, G4-group after distal stripping with invagination what was the result of G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination on saphenous nerve lesion?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
13882
|
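The saphenous nerve study quoted in the row above (id 13882) compares the share of patients with abnormal findings between observation stages using McNemar's test (Table 1; for example group G1, S1 versus S2: 0/20 abnormal before surgery against 13/20 two weeks after, reported as P = 0.0002). Those P values are consistent with the exact form of the test, although the article does not state this explicitly, so the sketch below, assuming Python with statsmodels, should be read as an illustration rather than a reconstruction of the authors' exact procedure. The paired 2 x 2 layout is derived from the published marginals, which fully determine it here because every patient was normal at baseline.

```python
# Exact McNemar test for the within-group comparison S1 vs S2 in group G1
# of the saphenous nerve study (0/20 abnormal before surgery, 13/20 two
# weeks after). Because all 20 patients were normal at baseline, the paired
# table is fully determined by these marginal counts.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: status before surgery (S1); columns: status two weeks after (S2).
#                 S2 normal   S2 abnormal
# S1 normal           7            13
# S1 abnormal         0             0
paired_table = [[7, 13],
                [0, 0]]

result = mcnemar(paired_table, exact=True)
print(f"Exact McNemar P = {result.pvalue:.4f}")
# 2 * 0.5**13 = 0.00024, in line with the S1:S2 value of 0.0002 reported
# for group G1 in Table 1.
```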
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nSaphenous vein stripping surgical technique and frequency of saphenous nerve injury\n\n\n\nABSTRACT.OBJECTIVES:\nSaphenous nerve injury is the most common complication after surgical treatment of varicose veins. The aim of this study was to establish its frequency at great saphenous vein long stripping when four methods of surgery were applied.\n\nABSTRACT.METHODS:\nEighty patients were divided into four groups depending on different stripping methods. Sensory transmission in saphenous nerve and sensory perception of shank were examined before surgery and two weeks, three and six months afterwards with clinical neurophysiology methods.\n\nABSTRACT.RESULTS:\nIn 36% of patients, surgeries caused the injury of saphenous nerve mainly by proximal stripping without invagination (65%, group I). Transmission disturbances ceased completely after three months in patients undergoing distal stripping with invagination (group IV), while in group I they persisted for six months in 35%. Group IV patients were the least injured and group I the most.\n\nABSTRACT.CONCLUSION:\nNeurophysiological findings may suggest that distal stripping with vein invagination gives the best saphenous nerve sparing.\n\nBODY.INTRODUCTION:\nVaricose veins surgery is one of the most frequent operations carried out in Europe as well as one of the most frequent causes of civil-law proceedings against general and vascular surgeons, especially in West European countries, which results mainly from incidental nerve injury.1,2 The increase in medical claims forces a search for the method of varicose veins treatment that can guarantee the fewest complications possible.\nThe fundamental approach in surgical treatment of varicose veins with great saphenous vein incompetence is the long stripping.3–5 There are several variants of the method: proximal stripping with a probe – the classical Babcock method, proximal stripping with invagination (inversion) of the vessel, distal stripping with and without invagination.6–10\nThe varicose veins extraction may be associated with many intra- and postsurgical complications. However, the most frequent is the saphenous nerve lesion, which is related to its anatomical path. Possible anomalies in the common anatomical passage of the saphenous nerve to the great saphenous vein should be considered during varicose veins extraction and what can foster iatrogenic injuries within higher risk zones.11\nFemoral and crural portions can be distinguished in the course of saphenous nerve. The nerve runs subfascially within the femoral portion, whereas within the crural portion it pierces the crural fascia and, running epifascially together with the great saphenous vein, reaches the medial margin of the foot. The nerve supplies sensory innervation to the medial side of the lower thigh, anteromedial side of the knee joint and the leg down to medial side of the foot and hallux. 
During stripping of the great saphenous vein, a lesion most frequently affects the nerve trunk in region of medial malleolus and its subpatellar branch together with the medial cutaneous branches of the leg, which manifests as inaccurate feeling perception (paresthesia) in the form of tingling, numbness, electric current sensation, hypoesthesia, hyperesthesia, or a burning sensation.\nThe aims of this study were to establish and objectivise the relation between saphenous nerve conduction disorders and varicose vein surgery as well as to compare the frequency of nerve injury in the four methods of great saphenous vein stripping.\n\nBODY.METHODS:\nA group of 80 patients with confirmed venous incompetence of Hach’s III and IV class by clinical (according to CEAP classification – C2 grade) and imaging (Doppler ultrasonography) examinations was enrolled into this neurophysiological prospectively gathered data.12 Patients did not report the neuropathic pain before surgery, the supplemental management with pregabalin and methylcobalamin was not applied. The patients were qualified for surgical treatment in Department of General and Vascular Surgery at Karol Marcinkowski Poznań University of Medical Sciences from 2004 to 2008 basing on the rule of consecutive cases. The neurophysiological examinations were conducted in Department of Pathophysiology of Locomotor Organs four times (before surgery-S1, two weeks after surgery-S2, three months after surgery-S3, and six months after surgery-S4).\nPatients aged from 20 to 50 years (42.2 years on average) were randomly divided into four subgroups (20 persons each) G1–G4 depending on the different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination). The length of great saphenous vein stripping depended on the type of surgery. In each group, the same female to male proportion was maintained (15 females and five males). Each subgroup was operated with different stripping method. The saphenous nerve was not identified along the saphenous vein at the knee and it was not separated from it before stripping. Patients suffering from conditions that possibly affect nervous system functions, which could be polyneuropathies, multiple peripheral mononeuropathies, demyelinating diseases, neuromuscular diseases, deficiency syndromes, nervous system poisoning, advanced discopathy, extremity or vertebral column trauma, arterial diseases, episodes of superficial vein thrombosis, post-thrombotic syndrome were excluded from this study.\nThe general methodology was based on neurophysiological examinations of saphenous nerve conduction.13–16 It was carried out four times. First time it was before the operation, in order to estimate preliminary values of neurophysiological variables (amplitude, latency, conduction velocity) in sensory conduction studies (SCV), then two weeks after the operation to establish a possible nerve injury due to surgical intervention, three months after the operation when the changes affecting conduction (swellings, hematomas, rubber bandage wearing) had disappeared and six months after the operation to assess the improvement of neurophysiological parameters in the patients with previously detected conduction disorders. 
The results were analyzed with reference to the control group of 50 healthy volunteers (40 females and 10 males aged 45.1 years on average) recruited from hospital workers whom the SCV reference values were acquired.\nThe neurophysiological testing was performed with Keypoint System (Medtronic A/S, Skovlunde, Denmark). Evoked potentials were recorded at different levels of saphenous nerve conduction with standard silver chloride electrodes, using two techniques of saphenous nerve examination (distal and proximal). In the distal method, recording took place anterior to medial malleolus and stimulation 14 cm, 18 cm, 22 cm, 26 cm above, along the medial margin of tibia. In the proximal method, potentials were recorded at the midpoint between medial malleolus and the slot of the knee joint along the medial margin of tibia while stimulation was performed 14 cm, 18 cm, and 22 cm above. The ground electrode was placed halfway between the recording and stimulating electrodes. Bilateral and bipolar saphenous nerve stimulation was performed, applying supramaximal, rectangular, single electric stimuli with a frequency lower than 3 Hz, 0.2 ms duration and intensity up to 30 mA. The recordings were carried out with time base usually 2 ms, amplification 5 µV, high-pass filter at 5 Hz and low-pass filter at 3 kHz. The parameters of amplitude, latency and corresponding afferent conduction velocities were evaluated. Amplitude greater than 1 µV was considered as normal. The conduction velocity depended on the distance at which the saphenous nerve was examined. The following minimal conduction velocities were accepted: 140 mm – 38.58 m/s; 180 mm – 39.13 m/s; 220 mm – 39.85 m/s; 260 mm – 40.21 m/s for distal and 140 mm – 40.53 m/s; 180 mm – 40.86 m/s; 220 mm – 41.38 m/s for proximal method.\nTwo methods complementary to SCV studies were used in order to study the saphenous nerve sensory function. First, it was the intensity of current versus stimulus duration curves (IC–SD) that was conducted after monopolar stimulation of skin anterior to the medial malleolus, then 14 cm and 26 cm. Second, it was the von Frey’s filaments examination (FvF) in three areas, the medial part of popliteal region, the middle 1/3 of the leg on the medial side and medial malleolus region. During each FvF examination, three measurements were performed with calibrated silicon filaments: 0.12 mm, 0.30 mm (corresponding to proper sensory perception), and 0.55 mm in diameter. Touch sensation reported in two out of three assessments was considered as positive. A positive trial with the thinnest filament indicated hyperesthesia and the completely negative trial – analgesia.\nThe research was approved by The Bioethics Committee of Medical University in Poznań and it was performed with the ethical principles of the Helsinki Declaration. Examined subjects gave their informed consent for the examinations. All subjects were informed about the aim of study and about the course of examination.\n\nBODY.STATISTICS:\nIn the statistical analysis of the obtained results, the Statistica programme 9.0 was used (StatSoft). The Fisher–Freeman–Halton test was used for evaluation of sample size during studies on normality distribution, similarly the McNemara when two measurements were compared (between groups) or the Cochran’s Q test when more than three measurements were considered (between stages of observations). 
The results were considered as statistically significant at p < 0.05.\n\nBODY.RESULTS:\nThere was no nerve transmission abnormalities detected prior to surgery in any group. Nerve transmission abnormalities diagnosed two weeks after the operation (in 30 out of 80 patients – 37.5%) were present in 17 cases (21.25%) after three months and in 11 patients (13.75%) after six months (Table 1).\nTable 1.Number (percentage) of parameters changes in sensory fibers transmission of saphenous nerve (SCV studies) at four stages of observation (S1–S4) during recordings in patients from four groups with different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination).\nBefore surgery (S1)Two weeks after surgery (S2)Three months after surgery (S3)Six months after surgery (S4)Differences between stages of observationTotal, N = 80\n 0/80 (0%)\n 30/80 (37.5%)\n 17/80 (21.25%)\n 11/80 (13.75%)\nS1: S2 < 0.0001\nS1: S3 < 0.0001\nS1: S4 = 0.0010\nS2: S3 = 0.0008\nS2: S4 < 0.0001\nS3: S4 = 0.0412\nG1, N = 20\n 0/20 (0%)\n 13/20 (65%)\n 10/20 (50%)\n 7/20 (35%)\nS1: S2 = 0.0002\nS1: S3 = 0.0020\nS1: S4 = 0.0156\nS2: S4 = 0.0412\nG2, N = 20\n0/20 (0%)\n7/20 (35%)\n4/20 (20%)\n2/20 (10%)\nS1: S2 = 0.0156\nG3, N = 20\n0/20 (0%)\n6/20 (30%)\n3/20 (15%)\n2/20 (10%)\nS1: S2 = 0.0312\nG4, N = 20\n0/20 (0%)\n4/20 (20%)\n0/20 (0%)\n0/20 (0%)\n\nDifferences between groups\nG1:G4 = 0.0095\nG1:G3 = 0.0407\nG1:G4 = 0.0083\n\nG1:G4 = 0.0004\n\n\nStatistically significant differences at p < 0.05 are marked bold.\n\nIn three groups of patients (G1–G3), a statistically significant (p < 0.05) saphenous nerve lesion was identified two weeks after the surgery. In group G1 (proximal stripping without invagination), it occurred in 65%, in group G2 (proximal stripping with vein invagination) in 35%, and in group G3 (distal stripping without vein invagination) – in 30%. Best results of treatment were observed in group G4 (distal stripping with vein invagination) where no statistically significant consequences of nerve injury were detected. Six months after the operation sensory conduction disturbances were identified in a remarkable number of patients (seven out of 20 that is 35%) but only in group G1 (Table 1).\nAmong the patients with disturbances of nerve transmission in the saphenous nerve fibers, the most frequent observation was slowing down of the conduction velocity with proper amplitude parameters (Figure 1). The results of SCV indicating deficiencies in sensory transmission are related with the results of IC–SD as well as with the von Frey filaments examination, which are presented in Tables 2 and 3.\nFigure 1.Examples of neurophysiological SCV recordings from saphenous nerve following electrical stimulation at distances shown on the right part of figure. Recordings were obtained in one of the patients 3 months after proximal stripping without invagination. Calibrations for amplification (µV) and time base (ms) as well as intensity of stimulation (mA) are shown in upper part of figure. 
Note proper values of potentials amplitudes (marked with crosses) but their prolonged latencies (marked with arrows) influenced the conduction velocities of nerve impulses in sensory fibers.\n\nTable 2.Number (percentage) of changes in sensory perception (IC–SD studies) of saphenous innervation at four stages of observation (S1–S4) during recordings in patients from four groups with different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination).\nBefore surgery (S1)Two weeks after surgery (S2)Three months after surgery (S3)Six months after surgery(S4)Differences between stages of observationTotal, N = 80\n 0/80 (0%)\n 49/80 (61.25%)\n 22/80 (27.5%)\n 12/80 (15%)\nS1: S2 < 0.0001\nS1: S3 < 0.0001\nS1: S4 < 0.0001\nS2: S3 < 0.0001\nS2: S4 = 0.0060\nS3: S4 = 0.0040\nG1, N = 20\n 0/20 (0%)\n 15/20 (75%)\n 11/20 (55%)\n 7/20 (35%)\nS1: S2 < 0.0001\nS1: S3 = 0.0009\nS1: S4 = 0.0156\nS2: S4 = 0.0133\nG2, N = 20\n 0/20 (0%)\n 14/20 (70%)\n 5/20 (25%)\n 2/20 (10%)\nS1: S2 < 0.0001\nS2: S3 = 0.0076\nS2: S4 = 0.0015\nG3, N = 20\n 0/20 (0%)\n 11/20 (55%)\n 4/20 (20%)\n 2/20 (10%)\nS1: S2 = 0.0010\nS2: S3 = 0.0233\nS2: S4 = 0.0076\nG4, N = 20\n 0/20 (0%)\n 9/20 (45%)\n 2/20 (10%)\n 1/20 (5%)\nS1:S2 = 0.0039\nS2:S3 = 0.0233\nS2:S4 = 0.0133\nDifferences between groups\n\nG1:G3 = 0.0483\nG1:G4 = 0.0436\n\n\nG1:G4 = 0.0054\n\n\nStatistically significant differences at p < 0.05 are marked bold.\n\nTable 3.Number (percentage) of sensory perception disturbances (von Frey filaments examination) for saphenous innervation at four stages of observation (S1–S4) during recordings in patients from four groups with different stripping methods (G1-group after proximal stripping without invagination, G2-group after proximal stripping with invagination, G3-group after distal stripping without invagination, G4-group after distal stripping with invagination).\nBefore surgery (S1)Two weeks after surgery (S2)Three months after surgery (S3)Six months after surgery (S4)Differences between stages of observationTotal, N = 80\n 0/80 (0%)\n 38/80 (47.5%)\n 24/80 (30%)\n 15/80 (18.75%)\nS1: S2 < 0.0001\nS1: S3 < 0.0001\nS1: S4 < 0.0001\nS2: S3 = 0.0005\nS2: S4 < 0.0001\nS3: S4 = 0.0076\nG1, N = 20\n 0/20 (0%)\n 15/20 (75%)\n 12/20 (60%)\n 9/20 (45%)\nS1: S2 < 0.0001\nS1: S3 = 0.0005\nS1: S4 = 0.0039\nS2: S4 = 0.0412\nG2, N = 20\n 0/20 (0%)\n 10/20 (50%)\n 4/20 (20%)\n 2/20 (10%)\nS1: S2 = 0.0020\nS2: S3 = 0.0412\nS2: S4 = 0.0133\nG3, N = 20\n0/20 (0%)\n6/20 (30%)\n5/20 (25%)\n3/20 (15%)\nS1: S2 = 0.0316\nG4, N = 20\n 0/20 (0%)\n 7/20 (35%)\n 3/20 (15%)\n 1/20 (5%)\nS1:S2 = 0.0156\nS2:S4 = 0.0412\nDifferences between groups\nG1:G3 = 0.0103\nG1:G2 = 0.0224\nG1:G2 = 0.0309\n\nG1:G4 = 0.0248\nG1:G4 = 0.0007\nG1:G4 = 0.0084\n\nStatistically significant differences at p < 0.05 are marked bold.\n\nIn postoperative observations, the significant saphenous nerve sensory perception abnormalities were detected in each group of patients based on results of IC–SD and von Frey filaments studies. Three and six months after the varicose veins extractions there was detected similar, moderate improvement of sensory perception parameters in groups G2 and G3. 
The best results of saphenous nerve regeneration were observed in G4 (one out of 20 patients – 5% in both IC–SD and FvF studies) and the worst in G1 (seven out of 20 patients – 35% in IC–SD examinations and nine out of 20 patients – 45% in FvF studies) groups.\n\nBODY.DISCUSSION:\nSaphenous nerve injury is known for a long time to be a potential complication of the saphenous vein long stripping.6–10,13–24 The proximity of the vein and nerve, especially at the level of shank, results in injuries during the vein resection,25–29 especially in the patients with advanced varicose veins in this area caused by a large number of insufficient perforators. Additionally, in the patients suffering from long-lasting varicose veins, it can lead to an accretion of the widened-vein, providing the saphenous nerve neuropraxy. This pathology also facilitates in injury of the nerve fibers during the operation. The sensory disorders associated with an injury to the nerve trunk or its branches, occurring after the varicose veins surgical treatment, can be either temporary or constant.23,26\nThe results presented in this paper, referring to the saphenous nerve injury during the varicose veins surgical treatment using the saphenous vein long stripping, correlate with the results that are to be found in publications devoted to the subject.18,19,21 Many authors tried to determine the frequency of this complication. It varies from 5.9% to 72% and depends to a large extent on the applied stripping technique.17–19,21,23–26\nIn the study presented, the most numerous saphenous nerve injuries were identified after the proximal long stripping without vein invagination (the Babcock method). Such complications were reported in 20–50% of patients treated with classical stripping of the Babock type.27 It is joined to the anatomical relation of the saphenous vein and the saphenous nerve together with its divisions. Anatomically, the nerve branches form a reverted V shape, which facilitates their injury by the probe’s head, especially while proximal pulling of the vein.17,18,28\nInvagination of the vein as well as the change in the direction of its removal from proximal to distal caused the reduction in the number of neurological complications related to the saphenous nerve injury.8,19 The proximal stripping with invagination of the vessel and the distal stripping without invagination occurred to be comparable in terms of the number of postsurgical sensory disorders in the saphenous nerve innervation area.\nThe best effects were achieved with distal stripping with vein invagination.19 From the results of this study, it can be concluded that this method is the least invasive for the saphenous nerve. All the conduction abnormalities that were objectively identified in SCV studies two weeks after the surgery were temporary and normalized at three months after the operation. Part of the patients subjectively still suffered from some sensory disorders, which was confirmed in FvF and IC–SD examinations perhaps due to the injury in small branches of saphenous nerve. Based on the results of neurophysiological findings provided in this study, we suppose that compilation of distal stripping with vein invagination gives the best saphenous nerve sparing during varicose veins extraction. 
Transient changes in nerve transmission observed two weeks after surgery can be due to the temporary neuropraxia caused by edema, hematoma, inflammatory processes, or simply by the mechanical irritation during operation.\nA large number of neurological complications after the varicose veins surgery in lower extremities with the use of the saphenous vein stripping make alternative means of treatment of the saphenous vein incompetency worth consideration.30–34\nOne of these alternative methods is the short stripping to the level of knee joint with additional phlebectomy of the varicose veins below this level.24\nIt should be remembered, however, that in spite of more and more innovative methods of the varicose veins treatment, the long stripping is still the most common procedure and a first-choice therapy for the saphenous vein incompetence.3–5 Even if the sensory disorders developed after the surgical removal of the whole vein can persist in some patients, the advantages of varicose veins resection much exceed the inconveniences resulting from the potential neurological complications. Nerve injuries also might appear after thermal ablation associated with varicose veins surgeries and there is a need for a similar like presented study of patients having various endovenous treatments. The limited size of this study seems to be a weakness of the presented report, but its results may suggest that neurophysiological testing is valuable for evaluation of the treatment efficiency after varicose veins surgeries.\n\nBODY.CONCLUSIONS:\nVaricose veins surgery in the lower limbs using the technique of the saphenous vein long stripping significantly impacts the sensory conduction in the saphenous nerve. Distal stripping with vein invagination seems to be the least invasive for the saphenous nerve fibers, which may be a consequence of the anatomical relation of the saphenous nerve and vein. Proximal stripping without vein invagination leads to the most numerous neurological complications detected two weeks after the operation as well as three and six months after the procedure, when the changes possibly affecting the nerve transmission had ceased.\n\n**Question:** Compared to G4-group after distal stripping with invagination what was the result of G1-group after proximal stripping without invagination on saphenous nerve sensory perception abnormalities?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13883
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA randomized two way cross over study for comparison of absorption of vitamin D3 buccal spray and soft gelatin capsule formulation in healthy subjects and in patients with intestinal malabsorption\n\n\n\nABSTRACT.BACKGROUND:\nVitamin D deficiency has been proposed to contribute to the development of malabsorption diseases. Despite this, the vitamin D status of these patients is often neglected. The objective of the present work was to compare the absorption of vitamin D3 through the oral route by comparing a 1000 IU soft gelatin capsule and a 500 IU buccal spray (delivering 1000 IU in two spray shots) in healthy subjects and in patients with malabsorption disease.\n\nABSTRACT.METHODS:\nAn open label, randomized, two-periods, two-way cross over study was conducted, first in healthy subjects (n = 20) and then in patients with malabsorption syndrome (n = 20). The study participants were equally divided and received either of the treatments (buccal spray, n = 7; soft gelatin capsule, n = 7; control, n = 6) in Period I for 30 days. After washout of another 30 days, the treatments were changed in crossover fashion in Period II. Fasting blood samples were collected to measure baseline 25-hydroxyvitamin D [25(OH)D] levels in all participants at day 0 (Screening visit), day 30 (completion of period I), day 60 (end of wash out and initiation of period II) and day 90 (completion of period II). Safety was evaluated by hematology and biochemistry analyses. Statistical analyses was performed using differences of mean and percentage change from baseline of 25(OH)D levels between two formulation by two tailed Paired t-test with 95 % confidence interval.\n\nABSTRACT.RESULTS:\nIn healthy subjects, the mean increase in serum 25(OH)D concentration was 4.06 (95 % CI 3.41, 4.71) ng/ml in soft gelatin capsule group and 8.0 (95 % CI 6.86, 9.13) ng/ml in buccal spray group after 30 days treatment (p < 0.0001). In patients with malabsorption disease, the mean increase in serum 25(OH)D concentration was 3.96 (95 % CI 2.37, 5.56) ng/ml in soft gelatin capsule group and 10.46 (95 % CI 6.89, 14.03) ng/ml in buccal spray group (p < 0.0001).\n\nABSTRACT.CONCLUSION:\nIt can be concluded from the results that the buccal spray produced a significantly higher mean serum 25(OH)D concentration as compared to the soft gelatin capsule, in both healthy subjects as well as in patients with malabsorption syndrome over a period of 30 days administration in a two way cross over study. Treatments were well tolerated by both subject groups\n\nABSTRACT.TRIAL REGISTRATION:\nCTRI/2013/06/003770\n\nBODY.INTRODUCTION:\nVitamin D is essential for active intestinal calcium absorption and plays a central role in maintaining calcium homeostasis and skeletal integrity. It is derived mainly from cutaneous synthesis in the presence of ultraviolet sunlight while dietary intake constitutes a minor fraction [1]. 
Vitamin D deficiency is a common problem through the world [2, 3] and is assessed by low serum concentration of the major circulating metabolite 25-hydroxyvitamin D (25(OH)D) [4, 5]. The prevention of vitamin D deficiency and insufficiency remains a priority of international health services [6–8]. Vitamin D deficiency has been proposed to contribute to the development of intestinal bowel diseases like Crohn's disease, steatorrhea and ulcerative colitis [9]. Conversely, people who have such illnesses have a reduced absorption of vitamin D3 through the intestine [10, 11]. In addition, osteomalacia occurs in patients with a wide variety of disorders affecting the stomach and small bowel, especially when associated with steatorrhea. The pathogenesis of this osteomalacia has in part been explained by malabsorption in vitamin D [9]. Earlier reports show that orally administered tritiated vitamin D3 was malabsorbed in patients with celiac disease, biliary obstruction or pancreatic disease [12]. The pathogenesis of vitamin D deficiency in these patients remains unclear but it is thought to result primarily from fat soluble vitamin malabsorption due to the presence of intestinal disease conditions\nDespite vitamin D malabsorption in patients with gastrointestinal or liver disease, the vitamin status of these patients is often neglected. Although vitamin D supplements are often prescribed, adequate absorption of these formulations has not been documented [10]. Vitamin D is known to be liposoluble, and its relative bioavailability could result in unfavorable conditions when administered in solid form (capsule), since the process of its release is a factor limiting the rate of absorption, bearing in mind that bioavailability is related not only to the pharmaceutically active molecules, but also, to the formulation and excipients used.\nVitamin D3 taken by oral route (peroral delivery) is absorbed in the intestine, where the lining of the digestive tract is aqueous in nature. Therefore vitamin D3, a fat-soluble molecule, in order to be absorbed, must be made water soluble in the intestine. This is accomplished in two steps: emulsification of vitamin D3 in the intestinal lumen, through the action of bile salts, forming small droplets which are dispersed and incorporated into micelles-complex aggregates formed by the interaction of free fatty acids, monoglycerides, and bile salts. Micelles are sufficiently water-soluble to access the intestinal brush border where upon the vitamin D3 content is released and then absorbed [13].\nWhen sprayed inside the mouth, the fine micro sized droplets of vitamin D3 are believed to be quickly and completely absorbed through the buccal mucosa into the numerous capillaries and veins lying close to the tissue surface [13]. Considering the possibility of reduced vitamin D absorption in healthy subjects and even in patients with malabsorption syndrome, a buccal spray formulation was developed. Therefore, the objective of the present work was to compare the absorption of vitamin D3 through the oral route (soft gelatin capsule form) and buccal spray in healthy subjects and patients with intestinal malabsorption syndrome.\n\nBODY.METHODS:\nBODY.STUDY DESIGN AND PATIENTS:\nAn open label, randomized, two-periods, two-way cross over study was conducted, first in healthy subjects and then in patients with malabsorption syndrome, with a similar study design except the presence of disease status in patients with malabsorption syndrome. 
After approval from the Spandan-IEC ethics committee (registration no: ECR/67/Indt/GJ/2013), the informed consent of study participants were taken and the formulation was administered for 30 days in period I where half of the subjects and patients (collectively participants) received capsule formulation and half of the participants received buccal spray formulation. After completion of treatment in period I, all participants were given 30 days wash out before initiating period II where treatment has changed in cross over fashion, i.e. the participants who received capsule formulation in period I have received buccal spray in period II and vice versa. The clinical study was registered in a centralized clinical trial registry of India (CTRI) before initiating the enrollment of the first patient in the study (CTRI/2013/06/003770).\nThe inclusion criteria were as follows: subjects of either sex between 18 and 65 years of age, with a Body Mass Index (BMI) between 18.0 and 30.0 kg/m2, and the ability to comply with study procedures in the opinion of the investigators. For healthy subjects: no history of liver, kidney or cardiovascular disease, or of any other medical conditions or medications likely to affect vitamin D3 absorption or metabolism. For patients with malabsorption syndrome: confirmed diagnosis of any one of the following malabsorption disease conditions like ulcerative colitis, Crohn’s disease or steatorrhea. Patient with history of above diseases, who are on therapy, were selected for the screening. In these patients, malabsorption syndrome was diagnosed by clinical symptoms like abdominal pain, vomiting, diarrhea, and subcutaneous fat loss together with blood tests like haematology and biochemistry. Stool examination was also performed for all patients to confirm rectal bleeding, presence of occult blood, infectious organisms, or fat. Finally, colonoscopy was performed in all patients to objectively assess the extent of inflammation to confirm the diagnosis. The exclusion criteria were as follows: systemic inflammatory or malignant disease; hepatic or renal failure; uncontrolled hypo- or hyperthyroidism, or the use of drugs that are known to affect bone metabolism such as bisphosphonates, glucocorticoids and anti-convulsants. A pregnant or desired to be pregnant woman during study period was also excluded.\n\nBODY.DATA COLLECTED AT BASELINE:\nEach participant completed a self-administered questionnaire before enrollment. During completion of the questionnaire, they had the possibility to ask for assistance (i.e. clarification of question or any other issue) from one of the project leaders. The questionnaire included questions about usual intake of vitamin D containing foods, clothing and sun exposure habits, as well as height & weight, date of birth, and education. Body mass index was derived as wt/ht2 (kg/m2). Blood pressure was measured through sphygmomanometer and vitals (heart rate, body temperature) were also recorded for safety purposes.\n\nBODY.COLLECTION AND ANALYSES OF BLOOD SAMPLES:\nFasting blood samples were collected to measure baseline 25(OH)D levels in all the participants at day 0 (Screening visit), day 30 (completion of period I), day 60 (end of wash out and initiation of period II) and day 90 (completion of period II). Blood samples were centrifuged (15 min; 2000 g at 4 °C) within 30 min of blood collection and separated serum samples were immediately frozen. Serum samples were stored at −20 °C until analyzed. 
Serum 25(OH)D levels were measured by Electrochemiluminescence (ECLIA) assay method. This assay was carried out through quantitative determinations of total 25-hydroxyvitamin D in serum samples using a standard kit available from Roche diagnostics GmbH, Germany. All analyses were done in a central independent clinical analysis laboratory (APL Institute of Clinical Laboratory & Research Pvt. Ltd., Ahmedabad, India). The kit has a limit of detection of 3 ng/mL and has a linearity of 0.0 to 60.0 ng/mL. The intra-assay and inter-assay co-efficient of variation were 4 % and 6 %, respectively. Elecsys e-immunoassay analyzers were used for this assay.\nSafety parameters were evaluated including hematology analyses (complete blood counts), biochemical analyses (serum creatinine, total bilirubin, urea, SGOT, SGPT, alkaline phosphates, calcium) and urine was collected for urine routine and microbiological analyses at screening visit (day 0) and at the end of period II visit (day 90).\n\n\nBODY.INTERVENTION RANDOMIZATION AND COMPLIANCE:\nBODY.RANDOMIZATION AND GROUP ALLOCATION:\nThe participants were enrolled at two different hospital sites in India; one physician’s site where all healthy subjects were recruited and a gastroenterologist’s site where all patients with intestinal malabsorption were recruited. Out of the forty-eight participants who met the eligibility criteria, forty (twenty healthy and twenty patients) had agreed to participate and were found eligible, signed a written consent form, and completed a self administered questionnaire concerning usual diet and sun exposure. Subsequently, a venous blood sample was drawn, and a participant received a sealed, non-transparent envelope with the allocated treatment intervention. The randomization procedure was performed beforehand by a statistician by block randomization with blocks of two to one ratio, where the first two participants were randomly given each treatment and a third participant served as a control and didn’t receive any treatment. The participants were allocated to the interventions as they visited the clinic i.e. the first subject was allocated to buccal spray group, second to soft gelatin capsule group and third to the control group. This was done in order to distribute the participants equally in the two interventions (Group I and Group III) and half of the participants served as a control (Group II and Group IV).\nTwenty healthy subjects enrolled for the study were recruited at a physician’s site. They were randomized and enrolled into group I and II: fourteen subjects were enrolled for vitamin D3 treatment and labeled as group I (healthy subjects), while every third subject (total six subjects) were not given any treatment (labeled as group II) and acted as the control for group I.\nThe twenty patients with confirmed diagnosis of malabsorption syndromes (nine with ulcerative colitis, four with Crohn’s disease and seven with steatorrhea) were recruited at a gastroenterologist’s hospital site. Fourteen subjects were enrolled for vitamin D3 treatment and labeled as group III (patients with malabsorption syndrome), while every third subject (total six subjects) were not given any treatment (labeled as group IV) and acted as the control for group III. All patients continued to take their treatment for ulcerative colitis, Crohn’s disease and steatorrhea as prescribed by gastroenterologists.\nTreatment allocation was assigned to each participant according to their number of sequence of attendance at the blood sampling. 
Treatment allocation was concealed in envelopes numbered in ascending order that they met. Study personnel involved in recruitment and data collection were blinded to the participant’s treatment allocation. All participants were instructed to maintain their routine lifestyle including diet habits and sun exposure during the entire study period, to minimize interference with the daily routine.\n\nBODY.INTERVENTION:\nThe buccal spray and soft gelatin capsules containing vitamin D3 were supplied by Pharma Base, India (a subsidiary of Pharma Base SA, Switzerland). The soft gelatin capsule formulation was purchased from the Indian market. The analysis of both the formulations was done in triplicate according to the method described in European Pharmacopeia at an independent analytical laboratory (Oasis Testing House, Ahmedabad, India). The soft gelatin capsule formulation had a label content of 1000 IU per capsule and the buccal spray formulation had a label content of 500 IU/spray shot.\nAll participants in group I & III were randomized to receive either the vitamin D3 buccal spray (2 sprays each of 500 IU) or soft gelatin capsule containing vitamin D3 (1000 IU) for 30 days. Of the fourteen subjects in each group, half received buccal spray and half received soft gelatin capsule. This was considered as period I of the study. After the completion of the 30-day treatment, all participants were given a 30-day washout. The next treatment in period II was changed in a crossover fashion; those participants who had received the buccal spray formulation (vitamin D3) in period I received the soft gelatin capsule formulation in period II and vice versa. The treatment in period II continued in group I and group III participants for the next 30 days. The detailed study flow chart is described in Fig. 1.Fig. 1Study flow chart\n\n\nBODY.COMPLIANCE:\nAll participants were instructed to take the two buccal sprays and one soft gelatin capsule every day irrespective of the period of the study. Formulation for 7 days was handed over to each participants under group I & III at baseline, along with the compliance form. The participants were instructed to mark the intake and time of intake of number of spray shots or capsule of each day of the study period, as well as to note any extraordinary event that occurred during the period (forgetting to take a spray shot or capsule). If forgetting to take spray shot or capsule, the participants were instructed to take a double dose on the following day, in order to take altogether 60 spray shots or 30 capsules during the study periods. The participants were recommended to take spray shots or capsules after the main meal of the day. However, participants were not instructed to standardize meals or time between meals. All participants were also instructed not to change their daily routine, meal habits, as well as their sun exposure. This was done to minimize interference with daily routine and thus maximizing compliance with taking study medication. All participants in group I and III were instructed to visit the site every week to check compliance with the intake of study medication. 
Participants in control groups (Group II and Group IV) were instructed to visit their respective clinic after 30 days for their blood sample collection.\nAll participants who received treatment were also instructed to note any adverse events during the entire 90 days of the study which included period I, washout period and period II in a separately given adverse event recording form.\n\n\nBODY.STASTICAL ANALYSIS:\nBODY.SAMPLE SIZE:\nThe sample size of this crossover study was based on the changes in 25(OH)D levels where goal was to detect: 1) percent increase between pre-dose and post-dose levels within treatments, and 2) to have significant difference of percent change in 25(OH)D levels between the treatments. To achieve type-I error rate of less than 5 % (2 tailed), a sample size of 12 subjects in buccal spray group (healthy or patients) was sufficient to provide a statistical power of 80 % to detect a clinically significant mean difference of 5 % in 25(OH)D levels.\n\nBODY.DEMOGRAPHIC AND BASELINE CHARACTERISTIC:\nA descriptive statistics was applied for demographics; age (years), weight (kg), body mass index (kg/m2); and were presented as mean ± standard deviation.\n\nBODY.EFFICACY ANALYSIS:\nStatistical analyses were performed using GraphPad Prism 5, version 5.03 (GraphPad Software, Inc., CA, USA). Differences of mean and percentage change from baseline of 25(OH)D levels between two formulation groups, soft gelatin capsules and buccal spray, were evaluated using two tailed Paired t-test with 95 % confidence interval separately for both healthy subjects and patients with intestinal malabsorption. For comparison between control group and either of the treatments, unpaired t-test with 95 % confidence interval was used.\n\nBODY.SAFETY ANALYSIS:\nThe number and proportion of subjects with changes in laboratory value (change from baseline to end of study visit for complete blood count, serum creatinine, total bilirubin, urea, SGOT, SGPT, alkaline phosphates and calcium) was summarized and the difference was analyzed by chi –square test from normal to abnormal.\n\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nIn total, thirty eight individuals, thirteen healthy individuals with six control and thirteen patients with malabsorption syndrome with six control, fulfilled the eligibility criteria and completed the study (Fig. 1). Two participants (one healthy subject and one patient) reported vitamin D3 intake compliance of less than 90 % and therefore were not considered for final analyses.\nIn healthy subjects, at baseline, 85 % (n = 12) of the subjects had serum 25(OH)D concentration between 10 and 30 ng/ml, and one individual each (7.14 %) had serum 25(OH)D concentration >30 ng/ml and <10 ng/ml each in the oral soft gelatin capsule group. While in the buccal spray group, at baseline all subjects had serum 25(OH)D concentration between 10 and 30 ng/ml. Similarly, in patients with malabsorption disease who received oral soft gelatin capsules, at baseline 8 (57.14 %) patients had serum 25(OH)D concentration below 10 ng/ml, 5 (35.71 %) patients had serum 25(OH)D concentration between 10 and 20 ng/ml, and one individual (7.14 %) had serum 25(OH)D concentration above 20 ng/ml. However, in the buccal spray group, at baseline 9 (64.28 %) patients had serum 25(OH)D concentration below 10 ng/ml and 5 (35.71 %) patients had serum 25(OH)D concentration between 10 and 20 ng/ml. 
There were no striking differences in baseline characteristics between healthy individuals or patients with malabsorption disease with their corresponding control groups (Table 1).Table 1Demographic Data for Healthy Subjects and Patients with malabsorption syndrome\nParametersHealthy Subjects Group IHealthy Subjects Control Group IIPatients with malabsorption syndrome Group IIIPatients with malabsorption syndromeControlGroup IVN\n14\n6\n14\n6\nSex\nMale = 7, Female = 7\nMale = 3, Female = 3\nMale = 7, Female = 7\nMale = 3, Female = 3\nAge (Yrs)\n36.21 ± 9.97\n34.00 ± 6.42\n39.93 ± 11.65\n44.17 ± 5.56\n(Range)\n(25–60)\n(25–42)\n(26–63)\n(38–53)\nHeight (cms)\n159.86 ± 13.43\n161.33 ± 14.12\n162.29 ± 8.54\n164.33 ± 8.55\nBMI\n23.39 ± 3.88\n21.40 ± 2.39\n21.48 ± 2.82\n23.64 ± 3.02\nAll values are expressed in Mean ± SD; N-number subjects in each group\n\n\nBODY.EFFECT OF INTERVENTION:\nThe control groups in each subject population were compared with their corresponding treatment group. The control groups for healthy subjects as well as patients with malabsorption syndrome showed no change in 25(OH)D concentration over a period of 30 days. The mean baseline levels of 25(OH)D in healthy subjects was 18.25 ng/ml and in patients with malabsorption syndrome was 11.7 ng/ml. These mean levels remained at 18.06 ng/ml and 12.52 ng/ml after 30 days in healthy subjects and patients with malabsorption syndrome respectively (Fig. 2), which was statistically non-significant. When control group in healthy subjects was compared with their corresponding treatment groups, it was found that the difference of mean between control group and buccal spray group was 7.47 (95 % CI, 5.27, 9.67) which was significant (p < 0.05), and the same between control group and soft gelatin capsule group was 3.53 (95 % CI, 1.79, 5.28) which was also statistically significant (p < 0.05). Similarly in patients, the difference of mean between control group and buccal spray group was 8.53 (95 % CI, 2.74, 14.31) which was significant (p < 0.05), while the difference of mean between control group and soft gelatin capsule group was 2.03 (95 % CI, −1.44, 5.50) which was statistically not significant. This shows that buccal spray was more effective to increase mean 25(OH)D levels as compared to oral soft gelatin capsule. Table 2 describes these data.Fig. 2Mean 25(OH)D level in study subjects\nTable 2Comparison of 25(OH)D in control group and their corresponding treatment groups after 30 days\nComparisonDifference of mean95 % CIP ValueHealthy – Buccal Spray vs. Control\n7.47\n5.27 to 9.67\n<0.0001\nHealthy – Soft Gelatin Capsule vs. Control\n3.53\n1.79 to 5.28\n0.0005\nPatients – Buccal Spray vs. Control\n8.53\n2.74 to 14.31\n0.0064\nPatients – Soft Gelatin Capsule vs. Control\n2.03\n−1.44 to 5.50\n0.2338\n\nThe efficacy of buccal spray and soft gelatin capsule to increase the 25(OH)D levels after 30 days of administration was evaluated and compared with each other. After 30 days of administration, overall mean serum 25(OH)D concentration in healthy subjects was 22.75 (sd 6.75) ng/ml as compared to baseline value of 18.69 (sd 5.88) ng/ml in the soft gelatin capsule group, with the mean increase of 4.06 (95 % CI 3.41, 4.71) ng/ml (Table 3). On the other hand, in the buccal spray group, the mean serum 25(OH)D concentration was 26.91 (sd 5.72) ng/ml as compared to baseline value of 18.91 (sd 4.3) ng/ml, with the mean increase of 8.0 (95 % CI 6.86, 9.13) ng/ml (Fig. 2). 
The difference in mean increase between both the groups was 3.95 (95 % CI 3.19, 4.69) which was statistically significant (p < 0.0001).Table 3\nParametersMean baseline value in ng/mlMean value in ng/ml after 30 day of treatmentDifference of meanPercentage mean increase from baselineHealthy Subjects\nSoft gelatin Capsule Formulation\n Mean\n18.69 ± 5.88\n22.75 ± 6.75\n4.06\n21.72 %\n Range\n9.25, 30.05\n12.75, 35.52\n2.5, 5.6\n15.36, 37.79\n n\n13\n13\n13\n13\n 95 % CI\n15.14 to 22.24\n18.67 to 24.42\n3.41 to 4.71\n16.42 to 24.42\nBuccal Spray Formulation\n Mean\n18.91 ± 4.3\n26.91 ± 5.72\n7.995\n42.99 %\n Range\n13.36, 26.82\n19.6, 38.5\n4.5, 11.67\n29.21, 68.71\n n\n13\n13\n13\n13\n 95 % CI\n16.31 to 21.51\n23.45 to 30.36\n6.86 to 9.13\n37.19 to 48.79\nPatients with intestinal malabsorption syndrome\nSoft gelatin Capsule Formulation\n Mean\n11.01 ± 6.43\n14.97 ± 9.01\n3.965\n36.02 %\n Range\n2.9, 26.5\n4.6, 36.89\n1.06,10.39\n24.62,58.73\n n\n13\n13\n13\n13\n 95 % CI\n7.12 to 14.89\n9.52 to 20.42\n2.37 to 5.56\n30.42 to 41.62\nBuccal Spray Formulation\n Mean\n10.01 ± 4.29\n20.47 ± 7.89\n10.46\n117.8 %\n Range\n4.6, 18.85\n9.8, 34.64\n4.25, 27.44\n61.31,381.1\n n\n13\n13\n13\n13\n 95 % CI\n7.42 to 12.6\n15.7 to 25.24\n6.89 to 14.03\n64.71 to 170.8\n\nWhen calculated from baseline, the mean percentage change in serum 25(OH)D concentration in healthy subjects after 30 days treatment with soft gelatin capsule was 21.72 % (95 % CI 16.42, 24.42), while the same in buccal spray group was 42.99 % (95 % CI 37.19, 48.79) with a mean difference of 20.42 % (95 % CI 16.42, 24.42) between two groups (p < 0.0001) (Table 3). A total of 11 (85 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml, and two individuals (15.38 %) now had serum 25(OH)D concentration >30 ng/ml in the soft gelatin capsule group. However, in the buccal spray group, 10 (75.88 %) subjects had serum 25(OH)D concentration between 10 and 30 ng/ml and 3 (23.1 %) subjects now had serum 25(OH)D concentration >30 ng/ml.\nSimilarly in patients with malabsorption syndrome, overall mean serum 25(OH)D concentration after 30 days of administration of soft gelatin capsule was 14.97 (sd 9.01) ng/ml as compared to baseline value of 11.01 (sd 6.43) ng/ml, with the mean increase of 3.96 (95 % CI 2.37, 5.56) ng/ml. While in buccal spray group, mean serum 25(OH)D concentration was 20.47 (sd 7.89) ng/ml as compared to baseline value of 10.01 (sd 4.29) ng/ml, with the mean increase of 10.46(95 % CI 6.89, 14.03) ng/ml (Fig. 2). The difference in mean increase between both the groups was 6.50 (95 % CI 3.78, 9.22) which was statistically significant (p < 0.0001).\nThe mean percentage change in serum 25(OH)D concentration in patients with malabsorption syndrome after 30 days treatment with soft gelatin capsule was 36.02 % (95 % CI 30.42, 41.62), while the same value in buccal spray group was 117.8 % (95 % CI 64.71, 170.8) with a mean difference of 81.75 % (95 % CI 29.80, 133.7) between the two treatments (p < 0.005). A total of four (31 %) subjects now had serum 25(OH)D concentration below 10 ng/ml, eight (61 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml, and one individual (7.7 %) had serum 25(OH)D concentration above 30 ng/ml. 
However, in the buccal spray group, only 7.7 % (1 subject) now had serum 25(OH)D concentration below 10 ng/ml, ten (76.9 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml and two (15.4 %) subjects now had serum 25(OH)D concentration more than 30 ng/ml (Table 3).\n\nBODY.STATISTICAL CONSIDERATIONS WITH RESPECT TO PERIOD, SEQUENCE AND POWER:\nStatistical analyses were performed using SAS v9.2 (SAS Institute Inc, Cary, NC, USA) for the post-hoc evaluation of sequence and period effect. The statistical method adopted for this analysis was period + sequence + Subject (sequence) + treatment. It was observed that there is no significant difference in sequence effect in healthy subjects (p = 0.5251) and in patients with malabsorption syndrome (p = 0.0532). It was also revealed that there is no statistically significant period effect in healthy subjects (p = 0.6920) as well as in patients with malabsorption syndrome (p = 0.0715). The post-hoc power analyses with intra-subject variability were also derived for both the group of subjects. In healthy subjects, statistical power obtained was 99.42 with an 11.81 % intra-subject variability. While, in patients with malabsorption syndrome it was 81.62 with an intra-subject variability of 21.86 %.\n\nBODY.SAFETY EVALUATION:\nThere were no significant changes in any of the hematology and biochemistry parameters studied. There are also no notable changes in the vitals for any of the participants. No adverse event reported after administration of the buccal spray or soft gelatin capsule formulations of vitamin D3 during entire study period in healthy or patients with malabsorption disease and hence the product is considered safe.\n\n\nBODY.DISCUSSION:\nSupplemental fat-soluble vitamin D is usually made without determination of whether oral doses are adequately absorbed. The evidence of vitamin D malabsorption (Osteomalacia, rickets, hypocalcaemia, or reduced circulating concentration of 25(OH)D) persists despite routine vitamin D supplementation in cystic fibrosis [14], Crohn’s disease [15], Intestinal resection [16–19], ulcerative colitis, liver disease [20–22] and other malabsorption syndrome [23].\nMany factors are involved in the absorption of vitamin D, including gastric, pancreatic, and biliary secretions, micelle formation, and diffusion through the unstirred water layer, brush border membrane uptake, and transport out of the intestinal cell [24, 25]. As vitamin D is a relatively non-polar sterol, it must be solubilized by incorporation into a bile salt micelle solution in order to be absorbed in the aqueous phase [26]. This process is severely inhibited if there is any interruption of normal pancreatic or biliary secretion. As fat-soluble vitamins are fairly sensitive to disturbances in lipid absorption, vitamin malabsorption may occur in conditions like steatorrhea, ulcerative colitis, and Crohn’s disease. Serum concentrations of 25-hydroxyvitamin D are good indicators of long term vitamin D levels in the body but are insensitive to single doses of vitamin D and do not rise out of the normal range unless doses of vitamin D are chronically administered [27].\nConsidering the malabsorption in intestinal disease and a possible poor and complex absorption with oral formulations, a novel nanoemulsion formulation of buccal spray was developed where vitamin D3 is suspended in an aqueous base which can be easily absorbed through the mucosal layer of the mouth. 
We compared the serum concentration of vitamin D after 30 days administration with the soft gelatin capsule and the aqueous based buccal spray formulation. To the best of our knowledge, this is the first randomized two way cross over trial comparing the increase in 25(OH)D levels in healthy adults and patients with intestinal malabsorption receiving similar oral doses of two different formulations (per oral and buccal spray).\nThe analyses of baseline levels of study participants showed that vitamin D deficiency was prevalent in both healthy subjects and patients with intestinal malabsorption syndrome. However, patients were more vitamin D deficient as compared to healthy subjects. Four weeks administration of 1000 IU per day increased mean serum 25(OH)D in all treatment groups. In healthy subjects, soft gelatin capsule increased serum 25(OH)D level by 22.5 % (Range 15.4 to 37.8 %), while the buccal spray increased serum 25(OH)D level by 43 % (range 29.2 to 68.7. Similarly, in patients with intestinal malabsorption, soft gelatin capsule increased serum 25(OH)D level by 36 % (range 24.6 to 58.7 %) and the buccal spray increased serum 25(OH)D by 117.8 % (range 61.3 to 381.1 %).\nThe result implies that the buccal spray formulation had a significantly higher mean increase in both the subject groups, healthy subjects and patients with intestinal malabsorption syndrome. Interestingly, the mean increase was much higher in the patients group as compared to the healthy subjects group. This may be because increase in serum 25(OH)D after supplementation is known to be inversely related to baseline 25(OH)D concentration [28].\nIn the present study, the mean baseline levels were almost half in patients with intestinal malabsorption as compared to the baseline levels found in healthy subjects. This indicates the presence of vitamin D3 deficiency in the patient group as compared to healthy subjects. The primary and the most important source of vitamin D is sunlight. Although excessive exposure to sunlight and vitamin D have been positively associated with non-melanoma skin cancer [29], ecological studies suggest that sunlight may protect against female breast, ovarian, prostate, and colon cancer [30]. Solar UV-B exposure and the amount of exposure to sun are related inversely with cancer mortality and survival in detailed epidemiological studies [31]. Some analytical studies suggest a protective association between circulating vitamin D in blood, which is largely derived from sunlight or dietary vitamin D, and colorectal cancer and prostate cancer [30]. However, looking at the overall baseline levels of all participants in the present study, it is also advisable to increase the moderate daily sun exposure and to improve clothing apart from vitamin D3 supplementation.\n\nBODY.CONCLUSION:\nWe conclude that the buccal spray formulation was able to increase mean serum vitamin D3 concentration significantly higher as compared to the soft gelatin capsule, in both healthy subjects (1.9 times) as well as in patients with intestinal malabsorption syndrome (2.6 times).\n\nBODY.AVAILABILITY OF SUPPORTING DATA:\nThe data set(s) supporting the results of this article is (are) included within the article.\n\n**Question:** Compared to control what was the result of buccal spray; on 25(OH)D concentration?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13884
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA randomized two way cross over study for comparison of absorption of vitamin D3 buccal spray and soft gelatin capsule formulation in healthy subjects and in patients with intestinal malabsorption\n\n\n\nABSTRACT.BACKGROUND:\nVitamin D deficiency has been proposed to contribute to the development of malabsorption diseases. Despite this, the vitamin D status of these patients is often neglected. The objective of the present work was to compare the absorption of vitamin D3 through the oral route by comparing a 1000 IU soft gelatin capsule and a 500 IU buccal spray (delivering 1000 IU in two spray shots) in healthy subjects and in patients with malabsorption disease.\n\nABSTRACT.METHODS:\nAn open label, randomized, two-periods, two-way cross over study was conducted, first in healthy subjects (n = 20) and then in patients with malabsorption syndrome (n = 20). The study participants were equally divided and received either of the treatments (buccal spray, n = 7; soft gelatin capsule, n = 7; control, n = 6) in Period I for 30 days. After washout of another 30 days, the treatments were changed in crossover fashion in Period II. Fasting blood samples were collected to measure baseline 25-hydroxyvitamin D [25(OH)D] levels in all participants at day 0 (Screening visit), day 30 (completion of period I), day 60 (end of wash out and initiation of period II) and day 90 (completion of period II). Safety was evaluated by hematology and biochemistry analyses. Statistical analyses was performed using differences of mean and percentage change from baseline of 25(OH)D levels between two formulation by two tailed Paired t-test with 95 % confidence interval.\n\nABSTRACT.RESULTS:\nIn healthy subjects, the mean increase in serum 25(OH)D concentration was 4.06 (95 % CI 3.41, 4.71) ng/ml in soft gelatin capsule group and 8.0 (95 % CI 6.86, 9.13) ng/ml in buccal spray group after 30 days treatment (p < 0.0001). In patients with malabsorption disease, the mean increase in serum 25(OH)D concentration was 3.96 (95 % CI 2.37, 5.56) ng/ml in soft gelatin capsule group and 10.46 (95 % CI 6.89, 14.03) ng/ml in buccal spray group (p < 0.0001).\n\nABSTRACT.CONCLUSION:\nIt can be concluded from the results that the buccal spray produced a significantly higher mean serum 25(OH)D concentration as compared to the soft gelatin capsule, in both healthy subjects as well as in patients with malabsorption syndrome over a period of 30 days administration in a two way cross over study. Treatments were well tolerated by both subject groups\n\nABSTRACT.TRIAL REGISTRATION:\nCTRI/2013/06/003770\n\nBODY.INTRODUCTION:\nVitamin D is essential for active intestinal calcium absorption and plays a central role in maintaining calcium homeostasis and skeletal integrity. It is derived mainly from cutaneous synthesis in the presence of ultraviolet sunlight while dietary intake constitutes a minor fraction [1]. 
Vitamin D deficiency is a common problem through the world [2, 3] and is assessed by low serum concentration of the major circulating metabolite 25-hydroxyvitamin D (25(OH)D) [4, 5]. The prevention of vitamin D deficiency and insufficiency remains a priority of international health services [6–8]. Vitamin D deficiency has been proposed to contribute to the development of intestinal bowel diseases like Crohn's disease, steatorrhea and ulcerative colitis [9]. Conversely, people who have such illnesses have a reduced absorption of vitamin D3 through the intestine [10, 11]. In addition, osteomalacia occurs in patients with a wide variety of disorders affecting the stomach and small bowel, especially when associated with steatorrhea. The pathogenesis of this osteomalacia has in part been explained by malabsorption in vitamin D [9]. Earlier reports show that orally administered tritiated vitamin D3 was malabsorbed in patients with celiac disease, biliary obstruction or pancreatic disease [12]. The pathogenesis of vitamin D deficiency in these patients remains unclear but it is thought to result primarily from fat soluble vitamin malabsorption due to the presence of intestinal disease conditions\nDespite vitamin D malabsorption in patients with gastrointestinal or liver disease, the vitamin status of these patients is often neglected. Although vitamin D supplements are often prescribed, adequate absorption of these formulations has not been documented [10]. Vitamin D is known to be liposoluble, and its relative bioavailability could result in unfavorable conditions when administered in solid form (capsule), since the process of its release is a factor limiting the rate of absorption, bearing in mind that bioavailability is related not only to the pharmaceutically active molecules, but also, to the formulation and excipients used.\nVitamin D3 taken by oral route (peroral delivery) is absorbed in the intestine, where the lining of the digestive tract is aqueous in nature. Therefore vitamin D3, a fat-soluble molecule, in order to be absorbed, must be made water soluble in the intestine. This is accomplished in two steps: emulsification of vitamin D3 in the intestinal lumen, through the action of bile salts, forming small droplets which are dispersed and incorporated into micelles-complex aggregates formed by the interaction of free fatty acids, monoglycerides, and bile salts. Micelles are sufficiently water-soluble to access the intestinal brush border where upon the vitamin D3 content is released and then absorbed [13].\nWhen sprayed inside the mouth, the fine micro sized droplets of vitamin D3 are believed to be quickly and completely absorbed through the buccal mucosa into the numerous capillaries and veins lying close to the tissue surface [13]. Considering the possibility of reduced vitamin D absorption in healthy subjects and even in patients with malabsorption syndrome, a buccal spray formulation was developed. Therefore, the objective of the present work was to compare the absorption of vitamin D3 through the oral route (soft gelatin capsule form) and buccal spray in healthy subjects and patients with intestinal malabsorption syndrome.\n\nBODY.METHODS:\nBODY.STUDY DESIGN AND PATIENTS:\nAn open label, randomized, two-periods, two-way cross over study was conducted, first in healthy subjects and then in patients with malabsorption syndrome, with a similar study design except the presence of disease status in patients with malabsorption syndrome. 
After approval from the Spandan-IEC ethics committee (registration no: ECR/67/Indt/GJ/2013), the informed consent of study participants were taken and the formulation was administered for 30 days in period I where half of the subjects and patients (collectively participants) received capsule formulation and half of the participants received buccal spray formulation. After completion of treatment in period I, all participants were given 30 days wash out before initiating period II where treatment has changed in cross over fashion, i.e. the participants who received capsule formulation in period I have received buccal spray in period II and vice versa. The clinical study was registered in a centralized clinical trial registry of India (CTRI) before initiating the enrollment of the first patient in the study (CTRI/2013/06/003770).\nThe inclusion criteria were as follows: subjects of either sex between 18 and 65 years of age, with a Body Mass Index (BMI) between 18.0 and 30.0 kg/m2, and the ability to comply with study procedures in the opinion of the investigators. For healthy subjects: no history of liver, kidney or cardiovascular disease, or of any other medical conditions or medications likely to affect vitamin D3 absorption or metabolism. For patients with malabsorption syndrome: confirmed diagnosis of any one of the following malabsorption disease conditions like ulcerative colitis, Crohn’s disease or steatorrhea. Patient with history of above diseases, who are on therapy, were selected for the screening. In these patients, malabsorption syndrome was diagnosed by clinical symptoms like abdominal pain, vomiting, diarrhea, and subcutaneous fat loss together with blood tests like haematology and biochemistry. Stool examination was also performed for all patients to confirm rectal bleeding, presence of occult blood, infectious organisms, or fat. Finally, colonoscopy was performed in all patients to objectively assess the extent of inflammation to confirm the diagnosis. The exclusion criteria were as follows: systemic inflammatory or malignant disease; hepatic or renal failure; uncontrolled hypo- or hyperthyroidism, or the use of drugs that are known to affect bone metabolism such as bisphosphonates, glucocorticoids and anti-convulsants. A pregnant or desired to be pregnant woman during study period was also excluded.\n\nBODY.DATA COLLECTED AT BASELINE:\nEach participant completed a self-administered questionnaire before enrollment. During completion of the questionnaire, they had the possibility to ask for assistance (i.e. clarification of question or any other issue) from one of the project leaders. The questionnaire included questions about usual intake of vitamin D containing foods, clothing and sun exposure habits, as well as height & weight, date of birth, and education. Body mass index was derived as wt/ht2 (kg/m2). Blood pressure was measured through sphygmomanometer and vitals (heart rate, body temperature) were also recorded for safety purposes.\n\nBODY.COLLECTION AND ANALYSES OF BLOOD SAMPLES:\nFasting blood samples were collected to measure baseline 25(OH)D levels in all the participants at day 0 (Screening visit), day 30 (completion of period I), day 60 (end of wash out and initiation of period II) and day 90 (completion of period II). Blood samples were centrifuged (15 min; 2000 g at 4 °C) within 30 min of blood collection and separated serum samples were immediately frozen. Serum samples were stored at −20 °C until analyzed. 
Serum 25(OH)D levels were measured by Electrochemiluminescence (ECLIA) assay method. This assay was carried out through quantitative determinations of total 25-hydroxyvitamin D in serum samples using a standard kit available from Roche diagnostics GmbH, Germany. All analyses were done in a central independent clinical analysis laboratory (APL Institute of Clinical Laboratory & Research Pvt. Ltd., Ahmedabad, India). The kit has a limit of detection of 3 ng/mL and has a linearity of 0.0 to 60.0 ng/mL. The intra-assay and inter-assay co-efficient of variation were 4 % and 6 %, respectively. Elecsys e-immunoassay analyzers were used for this assay.\nSafety parameters were evaluated including hematology analyses (complete blood counts), biochemical analyses (serum creatinine, total bilirubin, urea, SGOT, SGPT, alkaline phosphates, calcium) and urine was collected for urine routine and microbiological analyses at screening visit (day 0) and at the end of period II visit (day 90).\n\n\nBODY.INTERVENTION RANDOMIZATION AND COMPLIANCE:\nBODY.RANDOMIZATION AND GROUP ALLOCATION:\nThe participants were enrolled at two different hospital sites in India; one physician’s site where all healthy subjects were recruited and a gastroenterologist’s site where all patients with intestinal malabsorption were recruited. Out of the forty-eight participants who met the eligibility criteria, forty (twenty healthy and twenty patients) had agreed to participate and were found eligible, signed a written consent form, and completed a self administered questionnaire concerning usual diet and sun exposure. Subsequently, a venous blood sample was drawn, and a participant received a sealed, non-transparent envelope with the allocated treatment intervention. The randomization procedure was performed beforehand by a statistician by block randomization with blocks of two to one ratio, where the first two participants were randomly given each treatment and a third participant served as a control and didn’t receive any treatment. The participants were allocated to the interventions as they visited the clinic i.e. the first subject was allocated to buccal spray group, second to soft gelatin capsule group and third to the control group. This was done in order to distribute the participants equally in the two interventions (Group I and Group III) and half of the participants served as a control (Group II and Group IV).\nTwenty healthy subjects enrolled for the study were recruited at a physician’s site. They were randomized and enrolled into group I and II: fourteen subjects were enrolled for vitamin D3 treatment and labeled as group I (healthy subjects), while every third subject (total six subjects) were not given any treatment (labeled as group II) and acted as the control for group I.\nThe twenty patients with confirmed diagnosis of malabsorption syndromes (nine with ulcerative colitis, four with Crohn’s disease and seven with steatorrhea) were recruited at a gastroenterologist’s hospital site. Fourteen subjects were enrolled for vitamin D3 treatment and labeled as group III (patients with malabsorption syndrome), while every third subject (total six subjects) were not given any treatment (labeled as group IV) and acted as the control for group III. All patients continued to take their treatment for ulcerative colitis, Crohn’s disease and steatorrhea as prescribed by gastroenterologists.\nTreatment allocation was assigned to each participant according to their number of sequence of attendance at the blood sampling. 
Treatment allocation was concealed in envelopes numbered in ascending order that they met. Study personnel involved in recruitment and data collection were blinded to the participant’s treatment allocation. All participants were instructed to maintain their routine lifestyle including diet habits and sun exposure during the entire study period, to minimize interference with the daily routine.\n\nBODY.INTERVENTION:\nThe buccal spray and soft gelatin capsules containing vitamin D3 were supplied by Pharma Base, India (a subsidiary of Pharma Base SA, Switzerland). The soft gelatin capsule formulation was purchased from the Indian market. The analysis of both the formulations was done in triplicate according to the method described in European Pharmacopeia at an independent analytical laboratory (Oasis Testing House, Ahmedabad, India). The soft gelatin capsule formulation had a label content of 1000 IU per capsule and the buccal spray formulation had a label content of 500 IU/spray shot.\nAll participants in group I & III were randomized to receive either the vitamin D3 buccal spray (2 sprays each of 500 IU) or soft gelatin capsule containing vitamin D3 (1000 IU) for 30 days. Of the fourteen subjects in each group, half received buccal spray and half received soft gelatin capsule. This was considered as period I of the study. After the completion of the 30-day treatment, all participants were given a 30-day washout. The next treatment in period II was changed in a crossover fashion; those participants who had received the buccal spray formulation (vitamin D3) in period I received the soft gelatin capsule formulation in period II and vice versa. The treatment in period II continued in group I and group III participants for the next 30 days. The detailed study flow chart is described in Fig. 1.Fig. 1Study flow chart\n\n\nBODY.COMPLIANCE:\nAll participants were instructed to take the two buccal sprays and one soft gelatin capsule every day irrespective of the period of the study. Formulation for 7 days was handed over to each participants under group I & III at baseline, along with the compliance form. The participants were instructed to mark the intake and time of intake of number of spray shots or capsule of each day of the study period, as well as to note any extraordinary event that occurred during the period (forgetting to take a spray shot or capsule). If forgetting to take spray shot or capsule, the participants were instructed to take a double dose on the following day, in order to take altogether 60 spray shots or 30 capsules during the study periods. The participants were recommended to take spray shots or capsules after the main meal of the day. However, participants were not instructed to standardize meals or time between meals. All participants were also instructed not to change their daily routine, meal habits, as well as their sun exposure. This was done to minimize interference with daily routine and thus maximizing compliance with taking study medication. All participants in group I and III were instructed to visit the site every week to check compliance with the intake of study medication. 
Participants in control groups (Group II and Group IV) were instructed to visit their respective clinic after 30 days for their blood sample collection.\nAll participants who received treatment were also instructed to note any adverse events during the entire 90 days of the study which included period I, washout period and period II in a separately given adverse event recording form.\n\n\nBODY.STASTICAL ANALYSIS:\nBODY.SAMPLE SIZE:\nThe sample size of this crossover study was based on the changes in 25(OH)D levels where goal was to detect: 1) percent increase between pre-dose and post-dose levels within treatments, and 2) to have significant difference of percent change in 25(OH)D levels between the treatments. To achieve type-I error rate of less than 5 % (2 tailed), a sample size of 12 subjects in buccal spray group (healthy or patients) was sufficient to provide a statistical power of 80 % to detect a clinically significant mean difference of 5 % in 25(OH)D levels.\n\nBODY.DEMOGRAPHIC AND BASELINE CHARACTERISTIC:\nA descriptive statistics was applied for demographics; age (years), weight (kg), body mass index (kg/m2); and were presented as mean ± standard deviation.\n\nBODY.EFFICACY ANALYSIS:\nStatistical analyses were performed using GraphPad Prism 5, version 5.03 (GraphPad Software, Inc., CA, USA). Differences of mean and percentage change from baseline of 25(OH)D levels between two formulation groups, soft gelatin capsules and buccal spray, were evaluated using two tailed Paired t-test with 95 % confidence interval separately for both healthy subjects and patients with intestinal malabsorption. For comparison between control group and either of the treatments, unpaired t-test with 95 % confidence interval was used.\n\nBODY.SAFETY ANALYSIS:\nThe number and proportion of subjects with changes in laboratory value (change from baseline to end of study visit for complete blood count, serum creatinine, total bilirubin, urea, SGOT, SGPT, alkaline phosphates and calcium) was summarized and the difference was analyzed by chi –square test from normal to abnormal.\n\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nIn total, thirty eight individuals, thirteen healthy individuals with six control and thirteen patients with malabsorption syndrome with six control, fulfilled the eligibility criteria and completed the study (Fig. 1). Two participants (one healthy subject and one patient) reported vitamin D3 intake compliance of less than 90 % and therefore were not considered for final analyses.\nIn healthy subjects, at baseline, 85 % (n = 12) of the subjects had serum 25(OH)D concentration between 10 and 30 ng/ml, and one individual each (7.14 %) had serum 25(OH)D concentration >30 ng/ml and <10 ng/ml each in the oral soft gelatin capsule group. While in the buccal spray group, at baseline all subjects had serum 25(OH)D concentration between 10 and 30 ng/ml. Similarly, in patients with malabsorption disease who received oral soft gelatin capsules, at baseline 8 (57.14 %) patients had serum 25(OH)D concentration below 10 ng/ml, 5 (35.71 %) patients had serum 25(OH)D concentration between 10 and 20 ng/ml, and one individual (7.14 %) had serum 25(OH)D concentration above 20 ng/ml. However, in the buccal spray group, at baseline 9 (64.28 %) patients had serum 25(OH)D concentration below 10 ng/ml and 5 (35.71 %) patients had serum 25(OH)D concentration between 10 and 20 ng/ml. 
There were no striking differences in baseline characteristics between healthy individuals or patients with malabsorption disease with their corresponding control groups (Table 1).Table 1Demographic Data for Healthy Subjects and Patients with malabsorption syndrome\nParametersHealthy Subjects Group IHealthy Subjects Control Group IIPatients with malabsorption syndrome Group IIIPatients with malabsorption syndromeControlGroup IVN\n14\n6\n14\n6\nSex\nMale = 7, Female = 7\nMale = 3, Female = 3\nMale = 7, Female = 7\nMale = 3, Female = 3\nAge (Yrs)\n36.21 ± 9.97\n34.00 ± 6.42\n39.93 ± 11.65\n44.17 ± 5.56\n(Range)\n(25–60)\n(25–42)\n(26–63)\n(38–53)\nHeight (cms)\n159.86 ± 13.43\n161.33 ± 14.12\n162.29 ± 8.54\n164.33 ± 8.55\nBMI\n23.39 ± 3.88\n21.40 ± 2.39\n21.48 ± 2.82\n23.64 ± 3.02\nAll values are expressed in Mean ± SD; N-number subjects in each group\n\n\nBODY.EFFECT OF INTERVENTION:\nThe control groups in each subject population were compared with their corresponding treatment group. The control groups for healthy subjects as well as patients with malabsorption syndrome showed no change in 25(OH)D concentration over a period of 30 days. The mean baseline levels of 25(OH)D in healthy subjects was 18.25 ng/ml and in patients with malabsorption syndrome was 11.7 ng/ml. These mean levels remained at 18.06 ng/ml and 12.52 ng/ml after 30 days in healthy subjects and patients with malabsorption syndrome respectively (Fig. 2), which was statistically non-significant. When control group in healthy subjects was compared with their corresponding treatment groups, it was found that the difference of mean between control group and buccal spray group was 7.47 (95 % CI, 5.27, 9.67) which was significant (p < 0.05), and the same between control group and soft gelatin capsule group was 3.53 (95 % CI, 1.79, 5.28) which was also statistically significant (p < 0.05). Similarly in patients, the difference of mean between control group and buccal spray group was 8.53 (95 % CI, 2.74, 14.31) which was significant (p < 0.05), while the difference of mean between control group and soft gelatin capsule group was 2.03 (95 % CI, −1.44, 5.50) which was statistically not significant. This shows that buccal spray was more effective to increase mean 25(OH)D levels as compared to oral soft gelatin capsule. Table 2 describes these data.Fig. 2Mean 25(OH)D level in study subjects\nTable 2Comparison of 25(OH)D in control group and their corresponding treatment groups after 30 days\nComparisonDifference of mean95 % CIP ValueHealthy – Buccal Spray vs. Control\n7.47\n5.27 to 9.67\n<0.0001\nHealthy – Soft Gelatin Capsule vs. Control\n3.53\n1.79 to 5.28\n0.0005\nPatients – Buccal Spray vs. Control\n8.53\n2.74 to 14.31\n0.0064\nPatients – Soft Gelatin Capsule vs. Control\n2.03\n−1.44 to 5.50\n0.2338\n\nThe efficacy of buccal spray and soft gelatin capsule to increase the 25(OH)D levels after 30 days of administration was evaluated and compared with each other. After 30 days of administration, overall mean serum 25(OH)D concentration in healthy subjects was 22.75 (sd 6.75) ng/ml as compared to baseline value of 18.69 (sd 5.88) ng/ml in the soft gelatin capsule group, with the mean increase of 4.06 (95 % CI 3.41, 4.71) ng/ml (Table 3). On the other hand, in the buccal spray group, the mean serum 25(OH)D concentration was 26.91 (sd 5.72) ng/ml as compared to baseline value of 18.91 (sd 4.3) ng/ml, with the mean increase of 8.0 (95 % CI 6.86, 9.13) ng/ml (Fig. 2). 
The difference in mean increase between both the groups was 3.95 (95 % CI 3.19, 4.69) which was statistically significant (p < 0.0001).Table 3\nParametersMean baseline value in ng/mlMean value in ng/ml after 30 day of treatmentDifference of meanPercentage mean increase from baselineHealthy Subjects\nSoft gelatin Capsule Formulation\n Mean\n18.69 ± 5.88\n22.75 ± 6.75\n4.06\n21.72 %\n Range\n9.25, 30.05\n12.75, 35.52\n2.5, 5.6\n15.36, 37.79\n n\n13\n13\n13\n13\n 95 % CI\n15.14 to 22.24\n18.67 to 24.42\n3.41 to 4.71\n16.42 to 24.42\nBuccal Spray Formulation\n Mean\n18.91 ± 4.3\n26.91 ± 5.72\n7.995\n42.99 %\n Range\n13.36, 26.82\n19.6, 38.5\n4.5, 11.67\n29.21, 68.71\n n\n13\n13\n13\n13\n 95 % CI\n16.31 to 21.51\n23.45 to 30.36\n6.86 to 9.13\n37.19 to 48.79\nPatients with intestinal malabsorption syndrome\nSoft gelatin Capsule Formulation\n Mean\n11.01 ± 6.43\n14.97 ± 9.01\n3.965\n36.02 %\n Range\n2.9, 26.5\n4.6, 36.89\n1.06,10.39\n24.62,58.73\n n\n13\n13\n13\n13\n 95 % CI\n7.12 to 14.89\n9.52 to 20.42\n2.37 to 5.56\n30.42 to 41.62\nBuccal Spray Formulation\n Mean\n10.01 ± 4.29\n20.47 ± 7.89\n10.46\n117.8 %\n Range\n4.6, 18.85\n9.8, 34.64\n4.25, 27.44\n61.31,381.1\n n\n13\n13\n13\n13\n 95 % CI\n7.42 to 12.6\n15.7 to 25.24\n6.89 to 14.03\n64.71 to 170.8\n\nWhen calculated from baseline, the mean percentage change in serum 25(OH)D concentration in healthy subjects after 30 days treatment with soft gelatin capsule was 21.72 % (95 % CI 16.42, 24.42), while the same in buccal spray group was 42.99 % (95 % CI 37.19, 48.79) with a mean difference of 20.42 % (95 % CI 16.42, 24.42) between two groups (p < 0.0001) (Table 3). A total of 11 (85 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml, and two individuals (15.38 %) now had serum 25(OH)D concentration >30 ng/ml in the soft gelatin capsule group. However, in the buccal spray group, 10 (75.88 %) subjects had serum 25(OH)D concentration between 10 and 30 ng/ml and 3 (23.1 %) subjects now had serum 25(OH)D concentration >30 ng/ml.\nSimilarly in patients with malabsorption syndrome, overall mean serum 25(OH)D concentration after 30 days of administration of soft gelatin capsule was 14.97 (sd 9.01) ng/ml as compared to baseline value of 11.01 (sd 6.43) ng/ml, with the mean increase of 3.96 (95 % CI 2.37, 5.56) ng/ml. While in buccal spray group, mean serum 25(OH)D concentration was 20.47 (sd 7.89) ng/ml as compared to baseline value of 10.01 (sd 4.29) ng/ml, with the mean increase of 10.46(95 % CI 6.89, 14.03) ng/ml (Fig. 2). The difference in mean increase between both the groups was 6.50 (95 % CI 3.78, 9.22) which was statistically significant (p < 0.0001).\nThe mean percentage change in serum 25(OH)D concentration in patients with malabsorption syndrome after 30 days treatment with soft gelatin capsule was 36.02 % (95 % CI 30.42, 41.62), while the same value in buccal spray group was 117.8 % (95 % CI 64.71, 170.8) with a mean difference of 81.75 % (95 % CI 29.80, 133.7) between the two treatments (p < 0.005). A total of four (31 %) subjects now had serum 25(OH)D concentration below 10 ng/ml, eight (61 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml, and one individual (7.7 %) had serum 25(OH)D concentration above 30 ng/ml. 
However, in the buccal spray group, only 7.7 % (1 subject) now had serum 25(OH)D concentration below 10 ng/ml, ten (76.9 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml and two (15.4 %) subjects now had serum 25(OH)D concentration more than 30 ng/ml (Table 3).\n\nBODY.STATISTICAL CONSIDERATIONS WITH RESPECT TO PERIOD, SEQUENCE AND POWER:\nStatistical analyses were performed using SAS v9.2 (SAS Institute Inc, Cary, NC, USA) for the post-hoc evaluation of sequence and period effect. The statistical method adopted for this analysis was period + sequence + Subject (sequence) + treatment. It was observed that there is no significant difference in sequence effect in healthy subjects (p = 0.5251) and in patients with malabsorption syndrome (p = 0.0532). It was also revealed that there is no statistically significant period effect in healthy subjects (p = 0.6920) as well as in patients with malabsorption syndrome (p = 0.0715). The post-hoc power analyses with intra-subject variability were also derived for both the group of subjects. In healthy subjects, statistical power obtained was 99.42 with an 11.81 % intra-subject variability. While, in patients with malabsorption syndrome it was 81.62 with an intra-subject variability of 21.86 %.\n\nBODY.SAFETY EVALUATION:\nThere were no significant changes in any of the hematology and biochemistry parameters studied. There are also no notable changes in the vitals for any of the participants. No adverse event reported after administration of the buccal spray or soft gelatin capsule formulations of vitamin D3 during entire study period in healthy or patients with malabsorption disease and hence the product is considered safe.\n\n\nBODY.DISCUSSION:\nSupplemental fat-soluble vitamin D is usually made without determination of whether oral doses are adequately absorbed. The evidence of vitamin D malabsorption (Osteomalacia, rickets, hypocalcaemia, or reduced circulating concentration of 25(OH)D) persists despite routine vitamin D supplementation in cystic fibrosis [14], Crohn’s disease [15], Intestinal resection [16–19], ulcerative colitis, liver disease [20–22] and other malabsorption syndrome [23].\nMany factors are involved in the absorption of vitamin D, including gastric, pancreatic, and biliary secretions, micelle formation, and diffusion through the unstirred water layer, brush border membrane uptake, and transport out of the intestinal cell [24, 25]. As vitamin D is a relatively non-polar sterol, it must be solubilized by incorporation into a bile salt micelle solution in order to be absorbed in the aqueous phase [26]. This process is severely inhibited if there is any interruption of normal pancreatic or biliary secretion. As fat-soluble vitamins are fairly sensitive to disturbances in lipid absorption, vitamin malabsorption may occur in conditions like steatorrhea, ulcerative colitis, and Crohn’s disease. Serum concentrations of 25-hydroxyvitamin D are good indicators of long term vitamin D levels in the body but are insensitive to single doses of vitamin D and do not rise out of the normal range unless doses of vitamin D are chronically administered [27].\nConsidering the malabsorption in intestinal disease and a possible poor and complex absorption with oral formulations, a novel nanoemulsion formulation of buccal spray was developed where vitamin D3 is suspended in an aqueous base which can be easily absorbed through the mucosal layer of the mouth. 
We compared the serum concentration of vitamin D after 30 days administration with the soft gelatin capsule and the aqueous based buccal spray formulation. To the best of our knowledge, this is the first randomized two way cross over trial comparing the increase in 25(OH)D levels in healthy adults and patients with intestinal malabsorption receiving similar oral doses of two different formulations (per oral and buccal spray).\nThe analyses of baseline levels of study participants showed that vitamin D deficiency was prevalent in both healthy subjects and patients with intestinal malabsorption syndrome. However, patients were more vitamin D deficient as compared to healthy subjects. Four weeks administration of 1000 IU per day increased mean serum 25(OH)D in all treatment groups. In healthy subjects, soft gelatin capsule increased serum 25(OH)D level by 22.5 % (Range 15.4 to 37.8 %), while the buccal spray increased serum 25(OH)D level by 43 % (range 29.2 to 68.7. Similarly, in patients with intestinal malabsorption, soft gelatin capsule increased serum 25(OH)D level by 36 % (range 24.6 to 58.7 %) and the buccal spray increased serum 25(OH)D by 117.8 % (range 61.3 to 381.1 %).\nThe result implies that the buccal spray formulation had a significantly higher mean increase in both the subject groups, healthy subjects and patients with intestinal malabsorption syndrome. Interestingly, the mean increase was much higher in the patients group as compared to the healthy subjects group. This may be because increase in serum 25(OH)D after supplementation is known to be inversely related to baseline 25(OH)D concentration [28].\nIn the present study, the mean baseline levels were almost half in patients with intestinal malabsorption as compared to the baseline levels found in healthy subjects. This indicates the presence of vitamin D3 deficiency in the patient group as compared to healthy subjects. The primary and the most important source of vitamin D is sunlight. Although excessive exposure to sunlight and vitamin D have been positively associated with non-melanoma skin cancer [29], ecological studies suggest that sunlight may protect against female breast, ovarian, prostate, and colon cancer [30]. Solar UV-B exposure and the amount of exposure to sun are related inversely with cancer mortality and survival in detailed epidemiological studies [31]. Some analytical studies suggest a protective association between circulating vitamin D in blood, which is largely derived from sunlight or dietary vitamin D, and colorectal cancer and prostate cancer [30]. However, looking at the overall baseline levels of all participants in the present study, it is also advisable to increase the moderate daily sun exposure and to improve clothing apart from vitamin D3 supplementation.\n\nBODY.CONCLUSION:\nWe conclude that the buccal spray formulation was able to increase mean serum vitamin D3 concentration significantly higher as compared to the soft gelatin capsule, in both healthy subjects (1.9 times) as well as in patients with intestinal malabsorption syndrome (2.6 times).\n\nBODY.AVAILABILITY OF SUPPORTING DATA:\nThe data set(s) supporting the results of this article is (are) included within the article.\n\n**Question:** Compared to control what was the result of soft gelatin capsule on 25(OH)D concentration?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
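The efficacy analysis described in the record above uses a two-tailed paired t-test to compare the two formulations within subjects and an unpaired t-test to compare each treatment against the untreated control. A minimal sketch of those comparisons is given below; the per-participant change values are hypothetical placeholders (the article reports only group means and confidence intervals), and `scipy.stats` merely stands in for the GraphPad Prism software named in the text.

```python
# Hedged sketch of the difference-of-means tests described in the record above.
# The per-participant values are HYPOTHETICAL: the article reports only group
# summaries, not individual data.
import numpy as np
from scipy import stats

# Hypothetical 30-day changes in 25(OH)D (ng/ml) for the same 13 crossover
# participants under each formulation, and for the 6 untreated controls.
capsule_change = np.array([2.5, 3.1, 4.0, 4.2, 3.8, 4.5, 5.0, 3.9, 4.1, 4.6, 5.6, 4.3, 3.2])
spray_change   = np.array([4.5, 6.8, 7.9, 8.2, 7.5, 8.8, 9.4, 7.7, 8.1, 9.0, 11.7, 8.4, 6.3])
control_change = np.array([-0.5, 0.2, 0.8, -0.3, 0.4, 0.1])

# Two-tailed paired t-test: buccal spray vs soft gelatin capsule (same subjects).
t_paired, p_paired = stats.ttest_rel(spray_change, capsule_change)

# Unpaired t-test: soft gelatin capsule vs untreated control group.
t_unpaired, p_unpaired = stats.ttest_ind(capsule_change, control_change)

print(f"spray vs capsule (paired):     t = {t_paired:.2f}, p = {p_paired:.4g}")
print(f"capsule vs control (unpaired): t = {t_unpaired:.2f}, p = {p_unpaired:.4g}")
```

With the trial's real per-participant data in place of the placeholder arrays, the same two calls would reproduce the reported difference-of-means comparisons.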
13885
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nA randomized two way cross over study for comparison of absorption of vitamin D3 buccal spray and soft gelatin capsule formulation in healthy subjects and in patients with intestinal malabsorption\n\n\n\nABSTRACT.BACKGROUND:\nVitamin D deficiency has been proposed to contribute to the development of malabsorption diseases. Despite this, the vitamin D status of these patients is often neglected. The objective of the present work was to compare the absorption of vitamin D3 through the oral route by comparing a 1000 IU soft gelatin capsule and a 500 IU buccal spray (delivering 1000 IU in two spray shots) in healthy subjects and in patients with malabsorption disease.\n\nABSTRACT.METHODS:\nAn open label, randomized, two-periods, two-way cross over study was conducted, first in healthy subjects (n = 20) and then in patients with malabsorption syndrome (n = 20). The study participants were equally divided and received either of the treatments (buccal spray, n = 7; soft gelatin capsule, n = 7; control, n = 6) in Period I for 30 days. After washout of another 30 days, the treatments were changed in crossover fashion in Period II. Fasting blood samples were collected to measure baseline 25-hydroxyvitamin D [25(OH)D] levels in all participants at day 0 (Screening visit), day 30 (completion of period I), day 60 (end of wash out and initiation of period II) and day 90 (completion of period II). Safety was evaluated by hematology and biochemistry analyses. Statistical analyses was performed using differences of mean and percentage change from baseline of 25(OH)D levels between two formulation by two tailed Paired t-test with 95 % confidence interval.\n\nABSTRACT.RESULTS:\nIn healthy subjects, the mean increase in serum 25(OH)D concentration was 4.06 (95 % CI 3.41, 4.71) ng/ml in soft gelatin capsule group and 8.0 (95 % CI 6.86, 9.13) ng/ml in buccal spray group after 30 days treatment (p < 0.0001). In patients with malabsorption disease, the mean increase in serum 25(OH)D concentration was 3.96 (95 % CI 2.37, 5.56) ng/ml in soft gelatin capsule group and 10.46 (95 % CI 6.89, 14.03) ng/ml in buccal spray group (p < 0.0001).\n\nABSTRACT.CONCLUSION:\nIt can be concluded from the results that the buccal spray produced a significantly higher mean serum 25(OH)D concentration as compared to the soft gelatin capsule, in both healthy subjects as well as in patients with malabsorption syndrome over a period of 30 days administration in a two way cross over study. Treatments were well tolerated by both subject groups\n\nABSTRACT.TRIAL REGISTRATION:\nCTRI/2013/06/003770\n\nBODY.INTRODUCTION:\nVitamin D is essential for active intestinal calcium absorption and plays a central role in maintaining calcium homeostasis and skeletal integrity. It is derived mainly from cutaneous synthesis in the presence of ultraviolet sunlight while dietary intake constitutes a minor fraction [1]. 
Vitamin D deficiency is a common problem through the world [2, 3] and is assessed by low serum concentration of the major circulating metabolite 25-hydroxyvitamin D (25(OH)D) [4, 5]. The prevention of vitamin D deficiency and insufficiency remains a priority of international health services [6–8]. Vitamin D deficiency has been proposed to contribute to the development of intestinal bowel diseases like Crohn's disease, steatorrhea and ulcerative colitis [9]. Conversely, people who have such illnesses have a reduced absorption of vitamin D3 through the intestine [10, 11]. In addition, osteomalacia occurs in patients with a wide variety of disorders affecting the stomach and small bowel, especially when associated with steatorrhea. The pathogenesis of this osteomalacia has in part been explained by malabsorption in vitamin D [9]. Earlier reports show that orally administered tritiated vitamin D3 was malabsorbed in patients with celiac disease, biliary obstruction or pancreatic disease [12]. The pathogenesis of vitamin D deficiency in these patients remains unclear but it is thought to result primarily from fat soluble vitamin malabsorption due to the presence of intestinal disease conditions\nDespite vitamin D malabsorption in patients with gastrointestinal or liver disease, the vitamin status of these patients is often neglected. Although vitamin D supplements are often prescribed, adequate absorption of these formulations has not been documented [10]. Vitamin D is known to be liposoluble, and its relative bioavailability could result in unfavorable conditions when administered in solid form (capsule), since the process of its release is a factor limiting the rate of absorption, bearing in mind that bioavailability is related not only to the pharmaceutically active molecules, but also, to the formulation and excipients used.\nVitamin D3 taken by oral route (peroral delivery) is absorbed in the intestine, where the lining of the digestive tract is aqueous in nature. Therefore vitamin D3, a fat-soluble molecule, in order to be absorbed, must be made water soluble in the intestine. This is accomplished in two steps: emulsification of vitamin D3 in the intestinal lumen, through the action of bile salts, forming small droplets which are dispersed and incorporated into micelles-complex aggregates formed by the interaction of free fatty acids, monoglycerides, and bile salts. Micelles are sufficiently water-soluble to access the intestinal brush border where upon the vitamin D3 content is released and then absorbed [13].\nWhen sprayed inside the mouth, the fine micro sized droplets of vitamin D3 are believed to be quickly and completely absorbed through the buccal mucosa into the numerous capillaries and veins lying close to the tissue surface [13]. Considering the possibility of reduced vitamin D absorption in healthy subjects and even in patients with malabsorption syndrome, a buccal spray formulation was developed. Therefore, the objective of the present work was to compare the absorption of vitamin D3 through the oral route (soft gelatin capsule form) and buccal spray in healthy subjects and patients with intestinal malabsorption syndrome.\n\nBODY.METHODS:\nBODY.STUDY DESIGN AND PATIENTS:\nAn open label, randomized, two-periods, two-way cross over study was conducted, first in healthy subjects and then in patients with malabsorption syndrome, with a similar study design except the presence of disease status in patients with malabsorption syndrome. 
After approval from the Spandan-IEC ethics committee (registration no: ECR/67/Indt/GJ/2013), the informed consent of study participants were taken and the formulation was administered for 30 days in period I where half of the subjects and patients (collectively participants) received capsule formulation and half of the participants received buccal spray formulation. After completion of treatment in period I, all participants were given 30 days wash out before initiating period II where treatment has changed in cross over fashion, i.e. the participants who received capsule formulation in period I have received buccal spray in period II and vice versa. The clinical study was registered in a centralized clinical trial registry of India (CTRI) before initiating the enrollment of the first patient in the study (CTRI/2013/06/003770).\nThe inclusion criteria were as follows: subjects of either sex between 18 and 65 years of age, with a Body Mass Index (BMI) between 18.0 and 30.0 kg/m2, and the ability to comply with study procedures in the opinion of the investigators. For healthy subjects: no history of liver, kidney or cardiovascular disease, or of any other medical conditions or medications likely to affect vitamin D3 absorption or metabolism. For patients with malabsorption syndrome: confirmed diagnosis of any one of the following malabsorption disease conditions like ulcerative colitis, Crohn’s disease or steatorrhea. Patient with history of above diseases, who are on therapy, were selected for the screening. In these patients, malabsorption syndrome was diagnosed by clinical symptoms like abdominal pain, vomiting, diarrhea, and subcutaneous fat loss together with blood tests like haematology and biochemistry. Stool examination was also performed for all patients to confirm rectal bleeding, presence of occult blood, infectious organisms, or fat. Finally, colonoscopy was performed in all patients to objectively assess the extent of inflammation to confirm the diagnosis. The exclusion criteria were as follows: systemic inflammatory or malignant disease; hepatic or renal failure; uncontrolled hypo- or hyperthyroidism, or the use of drugs that are known to affect bone metabolism such as bisphosphonates, glucocorticoids and anti-convulsants. A pregnant or desired to be pregnant woman during study period was also excluded.\n\nBODY.DATA COLLECTED AT BASELINE:\nEach participant completed a self-administered questionnaire before enrollment. During completion of the questionnaire, they had the possibility to ask for assistance (i.e. clarification of question or any other issue) from one of the project leaders. The questionnaire included questions about usual intake of vitamin D containing foods, clothing and sun exposure habits, as well as height & weight, date of birth, and education. Body mass index was derived as wt/ht2 (kg/m2). Blood pressure was measured through sphygmomanometer and vitals (heart rate, body temperature) were also recorded for safety purposes.\n\nBODY.COLLECTION AND ANALYSES OF BLOOD SAMPLES:\nFasting blood samples were collected to measure baseline 25(OH)D levels in all the participants at day 0 (Screening visit), day 30 (completion of period I), day 60 (end of wash out and initiation of period II) and day 90 (completion of period II). Blood samples were centrifuged (15 min; 2000 g at 4 °C) within 30 min of blood collection and separated serum samples were immediately frozen. Serum samples were stored at −20 °C until analyzed. 
Serum 25(OH)D levels were measured by Electrochemiluminescence (ECLIA) assay method. This assay was carried out through quantitative determinations of total 25-hydroxyvitamin D in serum samples using a standard kit available from Roche diagnostics GmbH, Germany. All analyses were done in a central independent clinical analysis laboratory (APL Institute of Clinical Laboratory & Research Pvt. Ltd., Ahmedabad, India). The kit has a limit of detection of 3 ng/mL and has a linearity of 0.0 to 60.0 ng/mL. The intra-assay and inter-assay co-efficient of variation were 4 % and 6 %, respectively. Elecsys e-immunoassay analyzers were used for this assay.\nSafety parameters were evaluated including hematology analyses (complete blood counts), biochemical analyses (serum creatinine, total bilirubin, urea, SGOT, SGPT, alkaline phosphates, calcium) and urine was collected for urine routine and microbiological analyses at screening visit (day 0) and at the end of period II visit (day 90).\n\n\nBODY.INTERVENTION RANDOMIZATION AND COMPLIANCE:\nBODY.RANDOMIZATION AND GROUP ALLOCATION:\nThe participants were enrolled at two different hospital sites in India; one physician’s site where all healthy subjects were recruited and a gastroenterologist’s site where all patients with intestinal malabsorption were recruited. Out of the forty-eight participants who met the eligibility criteria, forty (twenty healthy and twenty patients) had agreed to participate and were found eligible, signed a written consent form, and completed a self administered questionnaire concerning usual diet and sun exposure. Subsequently, a venous blood sample was drawn, and a participant received a sealed, non-transparent envelope with the allocated treatment intervention. The randomization procedure was performed beforehand by a statistician by block randomization with blocks of two to one ratio, where the first two participants were randomly given each treatment and a third participant served as a control and didn’t receive any treatment. The participants were allocated to the interventions as they visited the clinic i.e. the first subject was allocated to buccal spray group, second to soft gelatin capsule group and third to the control group. This was done in order to distribute the participants equally in the two interventions (Group I and Group III) and half of the participants served as a control (Group II and Group IV).\nTwenty healthy subjects enrolled for the study were recruited at a physician’s site. They were randomized and enrolled into group I and II: fourteen subjects were enrolled for vitamin D3 treatment and labeled as group I (healthy subjects), while every third subject (total six subjects) were not given any treatment (labeled as group II) and acted as the control for group I.\nThe twenty patients with confirmed diagnosis of malabsorption syndromes (nine with ulcerative colitis, four with Crohn’s disease and seven with steatorrhea) were recruited at a gastroenterologist’s hospital site. Fourteen subjects were enrolled for vitamin D3 treatment and labeled as group III (patients with malabsorption syndrome), while every third subject (total six subjects) were not given any treatment (labeled as group IV) and acted as the control for group III. All patients continued to take their treatment for ulcerative colitis, Crohn’s disease and steatorrhea as prescribed by gastroenterologists.\nTreatment allocation was assigned to each participant according to their number of sequence of attendance at the blood sampling. 
Treatment allocation was concealed in envelopes numbered in ascending order that they met. Study personnel involved in recruitment and data collection were blinded to the participant’s treatment allocation. All participants were instructed to maintain their routine lifestyle including diet habits and sun exposure during the entire study period, to minimize interference with the daily routine.\n\nBODY.INTERVENTION:\nThe buccal spray and soft gelatin capsules containing vitamin D3 were supplied by Pharma Base, India (a subsidiary of Pharma Base SA, Switzerland). The soft gelatin capsule formulation was purchased from the Indian market. The analysis of both the formulations was done in triplicate according to the method described in European Pharmacopeia at an independent analytical laboratory (Oasis Testing House, Ahmedabad, India). The soft gelatin capsule formulation had a label content of 1000 IU per capsule and the buccal spray formulation had a label content of 500 IU/spray shot.\nAll participants in group I & III were randomized to receive either the vitamin D3 buccal spray (2 sprays each of 500 IU) or soft gelatin capsule containing vitamin D3 (1000 IU) for 30 days. Of the fourteen subjects in each group, half received buccal spray and half received soft gelatin capsule. This was considered as period I of the study. After the completion of the 30-day treatment, all participants were given a 30-day washout. The next treatment in period II was changed in a crossover fashion; those participants who had received the buccal spray formulation (vitamin D3) in period I received the soft gelatin capsule formulation in period II and vice versa. The treatment in period II continued in group I and group III participants for the next 30 days. The detailed study flow chart is described in Fig. 1.Fig. 1Study flow chart\n\n\nBODY.COMPLIANCE:\nAll participants were instructed to take the two buccal sprays and one soft gelatin capsule every day irrespective of the period of the study. Formulation for 7 days was handed over to each participants under group I & III at baseline, along with the compliance form. The participants were instructed to mark the intake and time of intake of number of spray shots or capsule of each day of the study period, as well as to note any extraordinary event that occurred during the period (forgetting to take a spray shot or capsule). If forgetting to take spray shot or capsule, the participants were instructed to take a double dose on the following day, in order to take altogether 60 spray shots or 30 capsules during the study periods. The participants were recommended to take spray shots or capsules after the main meal of the day. However, participants were not instructed to standardize meals or time between meals. All participants were also instructed not to change their daily routine, meal habits, as well as their sun exposure. This was done to minimize interference with daily routine and thus maximizing compliance with taking study medication. All participants in group I and III were instructed to visit the site every week to check compliance with the intake of study medication. 
Participants in control groups (Group II and Group IV) were instructed to visit their respective clinic after 30 days for their blood sample collection.\nAll participants who received treatment were also instructed to note any adverse events during the entire 90 days of the study which included period I, washout period and period II in a separately given adverse event recording form.\n\n\nBODY.STASTICAL ANALYSIS:\nBODY.SAMPLE SIZE:\nThe sample size of this crossover study was based on the changes in 25(OH)D levels where goal was to detect: 1) percent increase between pre-dose and post-dose levels within treatments, and 2) to have significant difference of percent change in 25(OH)D levels between the treatments. To achieve type-I error rate of less than 5 % (2 tailed), a sample size of 12 subjects in buccal spray group (healthy or patients) was sufficient to provide a statistical power of 80 % to detect a clinically significant mean difference of 5 % in 25(OH)D levels.\n\nBODY.DEMOGRAPHIC AND BASELINE CHARACTERISTIC:\nA descriptive statistics was applied for demographics; age (years), weight (kg), body mass index (kg/m2); and were presented as mean ± standard deviation.\n\nBODY.EFFICACY ANALYSIS:\nStatistical analyses were performed using GraphPad Prism 5, version 5.03 (GraphPad Software, Inc., CA, USA). Differences of mean and percentage change from baseline of 25(OH)D levels between two formulation groups, soft gelatin capsules and buccal spray, were evaluated using two tailed Paired t-test with 95 % confidence interval separately for both healthy subjects and patients with intestinal malabsorption. For comparison between control group and either of the treatments, unpaired t-test with 95 % confidence interval was used.\n\nBODY.SAFETY ANALYSIS:\nThe number and proportion of subjects with changes in laboratory value (change from baseline to end of study visit for complete blood count, serum creatinine, total bilirubin, urea, SGOT, SGPT, alkaline phosphates and calcium) was summarized and the difference was analyzed by chi –square test from normal to abnormal.\n\n\nBODY.RESULTS:\nBODY.BASELINE CHARACTERISTICS:\nIn total, thirty eight individuals, thirteen healthy individuals with six control and thirteen patients with malabsorption syndrome with six control, fulfilled the eligibility criteria and completed the study (Fig. 1). Two participants (one healthy subject and one patient) reported vitamin D3 intake compliance of less than 90 % and therefore were not considered for final analyses.\nIn healthy subjects, at baseline, 85 % (n = 12) of the subjects had serum 25(OH)D concentration between 10 and 30 ng/ml, and one individual each (7.14 %) had serum 25(OH)D concentration >30 ng/ml and <10 ng/ml each in the oral soft gelatin capsule group. While in the buccal spray group, at baseline all subjects had serum 25(OH)D concentration between 10 and 30 ng/ml. Similarly, in patients with malabsorption disease who received oral soft gelatin capsules, at baseline 8 (57.14 %) patients had serum 25(OH)D concentration below 10 ng/ml, 5 (35.71 %) patients had serum 25(OH)D concentration between 10 and 20 ng/ml, and one individual (7.14 %) had serum 25(OH)D concentration above 20 ng/ml. However, in the buccal spray group, at baseline 9 (64.28 %) patients had serum 25(OH)D concentration below 10 ng/ml and 5 (35.71 %) patients had serum 25(OH)D concentration between 10 and 20 ng/ml. 
There were no striking differences in baseline characteristics between healthy individuals or patients with malabsorption disease with their corresponding control groups (Table 1).Table 1Demographic Data for Healthy Subjects and Patients with malabsorption syndrome\nParametersHealthy Subjects Group IHealthy Subjects Control Group IIPatients with malabsorption syndrome Group IIIPatients with malabsorption syndromeControlGroup IVN\n14\n6\n14\n6\nSex\nMale = 7, Female = 7\nMale = 3, Female = 3\nMale = 7, Female = 7\nMale = 3, Female = 3\nAge (Yrs)\n36.21 ± 9.97\n34.00 ± 6.42\n39.93 ± 11.65\n44.17 ± 5.56\n(Range)\n(25–60)\n(25–42)\n(26–63)\n(38–53)\nHeight (cms)\n159.86 ± 13.43\n161.33 ± 14.12\n162.29 ± 8.54\n164.33 ± 8.55\nBMI\n23.39 ± 3.88\n21.40 ± 2.39\n21.48 ± 2.82\n23.64 ± 3.02\nAll values are expressed in Mean ± SD; N-number subjects in each group\n\n\nBODY.EFFECT OF INTERVENTION:\nThe control groups in each subject population were compared with their corresponding treatment group. The control groups for healthy subjects as well as patients with malabsorption syndrome showed no change in 25(OH)D concentration over a period of 30 days. The mean baseline levels of 25(OH)D in healthy subjects was 18.25 ng/ml and in patients with malabsorption syndrome was 11.7 ng/ml. These mean levels remained at 18.06 ng/ml and 12.52 ng/ml after 30 days in healthy subjects and patients with malabsorption syndrome respectively (Fig. 2), which was statistically non-significant. When control group in healthy subjects was compared with their corresponding treatment groups, it was found that the difference of mean between control group and buccal spray group was 7.47 (95 % CI, 5.27, 9.67) which was significant (p < 0.05), and the same between control group and soft gelatin capsule group was 3.53 (95 % CI, 1.79, 5.28) which was also statistically significant (p < 0.05). Similarly in patients, the difference of mean between control group and buccal spray group was 8.53 (95 % CI, 2.74, 14.31) which was significant (p < 0.05), while the difference of mean between control group and soft gelatin capsule group was 2.03 (95 % CI, −1.44, 5.50) which was statistically not significant. This shows that buccal spray was more effective to increase mean 25(OH)D levels as compared to oral soft gelatin capsule. Table 2 describes these data.Fig. 2Mean 25(OH)D level in study subjects\nTable 2Comparison of 25(OH)D in control group and their corresponding treatment groups after 30 days\nComparisonDifference of mean95 % CIP ValueHealthy – Buccal Spray vs. Control\n7.47\n5.27 to 9.67\n<0.0001\nHealthy – Soft Gelatin Capsule vs. Control\n3.53\n1.79 to 5.28\n0.0005\nPatients – Buccal Spray vs. Control\n8.53\n2.74 to 14.31\n0.0064\nPatients – Soft Gelatin Capsule vs. Control\n2.03\n−1.44 to 5.50\n0.2338\n\nThe efficacy of buccal spray and soft gelatin capsule to increase the 25(OH)D levels after 30 days of administration was evaluated and compared with each other. After 30 days of administration, overall mean serum 25(OH)D concentration in healthy subjects was 22.75 (sd 6.75) ng/ml as compared to baseline value of 18.69 (sd 5.88) ng/ml in the soft gelatin capsule group, with the mean increase of 4.06 (95 % CI 3.41, 4.71) ng/ml (Table 3). On the other hand, in the buccal spray group, the mean serum 25(OH)D concentration was 26.91 (sd 5.72) ng/ml as compared to baseline value of 18.91 (sd 4.3) ng/ml, with the mean increase of 8.0 (95 % CI 6.86, 9.13) ng/ml (Fig. 2). 
The difference in mean increase between both the groups was 3.95 (95 % CI 3.19, 4.69) which was statistically significant (p < 0.0001).Table 3\nParametersMean baseline value in ng/mlMean value in ng/ml after 30 day of treatmentDifference of meanPercentage mean increase from baselineHealthy Subjects\nSoft gelatin Capsule Formulation\n Mean\n18.69 ± 5.88\n22.75 ± 6.75\n4.06\n21.72 %\n Range\n9.25, 30.05\n12.75, 35.52\n2.5, 5.6\n15.36, 37.79\n n\n13\n13\n13\n13\n 95 % CI\n15.14 to 22.24\n18.67 to 24.42\n3.41 to 4.71\n16.42 to 24.42\nBuccal Spray Formulation\n Mean\n18.91 ± 4.3\n26.91 ± 5.72\n7.995\n42.99 %\n Range\n13.36, 26.82\n19.6, 38.5\n4.5, 11.67\n29.21, 68.71\n n\n13\n13\n13\n13\n 95 % CI\n16.31 to 21.51\n23.45 to 30.36\n6.86 to 9.13\n37.19 to 48.79\nPatients with intestinal malabsorption syndrome\nSoft gelatin Capsule Formulation\n Mean\n11.01 ± 6.43\n14.97 ± 9.01\n3.965\n36.02 %\n Range\n2.9, 26.5\n4.6, 36.89\n1.06,10.39\n24.62,58.73\n n\n13\n13\n13\n13\n 95 % CI\n7.12 to 14.89\n9.52 to 20.42\n2.37 to 5.56\n30.42 to 41.62\nBuccal Spray Formulation\n Mean\n10.01 ± 4.29\n20.47 ± 7.89\n10.46\n117.8 %\n Range\n4.6, 18.85\n9.8, 34.64\n4.25, 27.44\n61.31,381.1\n n\n13\n13\n13\n13\n 95 % CI\n7.42 to 12.6\n15.7 to 25.24\n6.89 to 14.03\n64.71 to 170.8\n\nWhen calculated from baseline, the mean percentage change in serum 25(OH)D concentration in healthy subjects after 30 days treatment with soft gelatin capsule was 21.72 % (95 % CI 16.42, 24.42), while the same in buccal spray group was 42.99 % (95 % CI 37.19, 48.79) with a mean difference of 20.42 % (95 % CI 16.42, 24.42) between two groups (p < 0.0001) (Table 3). A total of 11 (85 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml, and two individuals (15.38 %) now had serum 25(OH)D concentration >30 ng/ml in the soft gelatin capsule group. However, in the buccal spray group, 10 (75.88 %) subjects had serum 25(OH)D concentration between 10 and 30 ng/ml and 3 (23.1 %) subjects now had serum 25(OH)D concentration >30 ng/ml.\nSimilarly in patients with malabsorption syndrome, overall mean serum 25(OH)D concentration after 30 days of administration of soft gelatin capsule was 14.97 (sd 9.01) ng/ml as compared to baseline value of 11.01 (sd 6.43) ng/ml, with the mean increase of 3.96 (95 % CI 2.37, 5.56) ng/ml. While in buccal spray group, mean serum 25(OH)D concentration was 20.47 (sd 7.89) ng/ml as compared to baseline value of 10.01 (sd 4.29) ng/ml, with the mean increase of 10.46(95 % CI 6.89, 14.03) ng/ml (Fig. 2). The difference in mean increase between both the groups was 6.50 (95 % CI 3.78, 9.22) which was statistically significant (p < 0.0001).\nThe mean percentage change in serum 25(OH)D concentration in patients with malabsorption syndrome after 30 days treatment with soft gelatin capsule was 36.02 % (95 % CI 30.42, 41.62), while the same value in buccal spray group was 117.8 % (95 % CI 64.71, 170.8) with a mean difference of 81.75 % (95 % CI 29.80, 133.7) between the two treatments (p < 0.005). A total of four (31 %) subjects now had serum 25(OH)D concentration below 10 ng/ml, eight (61 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml, and one individual (7.7 %) had serum 25(OH)D concentration above 30 ng/ml. 
However, in the buccal spray group, only 7.7 % (1 subject) now had serum 25(OH)D concentration below 10 ng/ml, ten (76.9 %) subjects now had serum 25(OH)D concentration between 10 and 30 ng/ml and two (15.4 %) subjects now had serum 25(OH)D concentration more than 30 ng/ml (Table 3).\n\nBODY.STATISTICAL CONSIDERATIONS WITH RESPECT TO PERIOD, SEQUENCE AND POWER:\nStatistical analyses were performed using SAS v9.2 (SAS Institute Inc, Cary, NC, USA) for the post-hoc evaluation of sequence and period effect. The statistical method adopted for this analysis was period + sequence + Subject (sequence) + treatment. It was observed that there is no significant difference in sequence effect in healthy subjects (p = 0.5251) and in patients with malabsorption syndrome (p = 0.0532). It was also revealed that there is no statistically significant period effect in healthy subjects (p = 0.6920) as well as in patients with malabsorption syndrome (p = 0.0715). The post-hoc power analyses with intra-subject variability were also derived for both the group of subjects. In healthy subjects, statistical power obtained was 99.42 with an 11.81 % intra-subject variability. While, in patients with malabsorption syndrome it was 81.62 with an intra-subject variability of 21.86 %.\n\nBODY.SAFETY EVALUATION:\nThere were no significant changes in any of the hematology and biochemistry parameters studied. There are also no notable changes in the vitals for any of the participants. No adverse event reported after administration of the buccal spray or soft gelatin capsule formulations of vitamin D3 during entire study period in healthy or patients with malabsorption disease and hence the product is considered safe.\n\n\nBODY.DISCUSSION:\nSupplemental fat-soluble vitamin D is usually made without determination of whether oral doses are adequately absorbed. The evidence of vitamin D malabsorption (Osteomalacia, rickets, hypocalcaemia, or reduced circulating concentration of 25(OH)D) persists despite routine vitamin D supplementation in cystic fibrosis [14], Crohn’s disease [15], Intestinal resection [16–19], ulcerative colitis, liver disease [20–22] and other malabsorption syndrome [23].\nMany factors are involved in the absorption of vitamin D, including gastric, pancreatic, and biliary secretions, micelle formation, and diffusion through the unstirred water layer, brush border membrane uptake, and transport out of the intestinal cell [24, 25]. As vitamin D is a relatively non-polar sterol, it must be solubilized by incorporation into a bile salt micelle solution in order to be absorbed in the aqueous phase [26]. This process is severely inhibited if there is any interruption of normal pancreatic or biliary secretion. As fat-soluble vitamins are fairly sensitive to disturbances in lipid absorption, vitamin malabsorption may occur in conditions like steatorrhea, ulcerative colitis, and Crohn’s disease. Serum concentrations of 25-hydroxyvitamin D are good indicators of long term vitamin D levels in the body but are insensitive to single doses of vitamin D and do not rise out of the normal range unless doses of vitamin D are chronically administered [27].\nConsidering the malabsorption in intestinal disease and a possible poor and complex absorption with oral formulations, a novel nanoemulsion formulation of buccal spray was developed where vitamin D3 is suspended in an aqueous base which can be easily absorbed through the mucosal layer of the mouth. 
We compared the serum concentration of vitamin D after 30 days administration with the soft gelatin capsule and the aqueous based buccal spray formulation. To the best of our knowledge, this is the first randomized two way cross over trial comparing the increase in 25(OH)D levels in healthy adults and patients with intestinal malabsorption receiving similar oral doses of two different formulations (per oral and buccal spray).\nThe analyses of baseline levels of study participants showed that vitamin D deficiency was prevalent in both healthy subjects and patients with intestinal malabsorption syndrome. However, patients were more vitamin D deficient as compared to healthy subjects. Four weeks administration of 1000 IU per day increased mean serum 25(OH)D in all treatment groups. In healthy subjects, soft gelatin capsule increased serum 25(OH)D level by 22.5 % (Range 15.4 to 37.8 %), while the buccal spray increased serum 25(OH)D level by 43 % (range 29.2 to 68.7. Similarly, in patients with intestinal malabsorption, soft gelatin capsule increased serum 25(OH)D level by 36 % (range 24.6 to 58.7 %) and the buccal spray increased serum 25(OH)D by 117.8 % (range 61.3 to 381.1 %).\nThe result implies that the buccal spray formulation had a significantly higher mean increase in both the subject groups, healthy subjects and patients with intestinal malabsorption syndrome. Interestingly, the mean increase was much higher in the patients group as compared to the healthy subjects group. This may be because increase in serum 25(OH)D after supplementation is known to be inversely related to baseline 25(OH)D concentration [28].\nIn the present study, the mean baseline levels were almost half in patients with intestinal malabsorption as compared to the baseline levels found in healthy subjects. This indicates the presence of vitamin D3 deficiency in the patient group as compared to healthy subjects. The primary and the most important source of vitamin D is sunlight. Although excessive exposure to sunlight and vitamin D have been positively associated with non-melanoma skin cancer [29], ecological studies suggest that sunlight may protect against female breast, ovarian, prostate, and colon cancer [30]. Solar UV-B exposure and the amount of exposure to sun are related inversely with cancer mortality and survival in detailed epidemiological studies [31]. Some analytical studies suggest a protective association between circulating vitamin D in blood, which is largely derived from sunlight or dietary vitamin D, and colorectal cancer and prostate cancer [30]. However, looking at the overall baseline levels of all participants in the present study, it is also advisable to increase the moderate daily sun exposure and to improve clothing apart from vitamin D3 supplementation.\n\nBODY.CONCLUSION:\nWe conclude that the buccal spray formulation was able to increase mean serum vitamin D3 concentration significantly higher as compared to the soft gelatin capsule, in both healthy subjects (1.9 times) as well as in patients with intestinal malabsorption syndrome (2.6 times).\n\nBODY.AVAILABILITY OF SUPPORTING DATA:\nThe data set(s) supporting the results of this article is (are) included within the article.\n\n**Question:** Compared to control what was the result of soft gelatin capsule on adverse event?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
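The preceding record quotes a post-hoc SAS model of the form period + sequence + subject(sequence) + treatment for the crossover analysis. The sketch below approximates that analysis rather than reproducing the authors' code: it simulates a small long-format dataset (all values, effect sizes, and column names are assumptions) and uses a random intercept per subject in statsmodels to stand in for the nested subject(sequence) term.

```python
# Rough sketch of a crossover model like the one quoted above
# (period + sequence + subject(sequence) + treatment), using simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for i in range(12):                        # 12 hypothetical crossover subjects
    sequence = "spray_first" if i < 6 else "capsule_first"
    subject_effect = rng.normal(0.0, 1.5)  # between-subject variability (assumed)
    for period in (1, 2):
        if sequence == "spray_first":
            treatment = "spray" if period == 1 else "capsule"
        else:
            treatment = "capsule" if period == 1 else "spray"
        mean_change = 8.0 if treatment == "spray" else 4.0   # assumed effects
        rows.append({
            "subject": f"S{i:02d}",
            "sequence": sequence,
            "period": period,
            "treatment": treatment,
            "change_25ohd": mean_change + subject_effect + rng.normal(0.0, 1.0),
        })
df = pd.DataFrame(rows)

# A random intercept per subject stands in for the nested subject(sequence) term.
fit = smf.mixedlm(
    "change_25ohd ~ C(period) + C(sequence) + C(treatment)",
    data=df,
    groups=df["subject"],
).fit()
print(fit.summary())
```

The C(treatment) coefficient estimates the formulation effect adjusted for period and sequence; with so few simulated subjects the mixed model may emit convergence warnings, which is expected for a toy example.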
13886
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffects of a home-based intervention on diet and physical activity behaviours for rural adults with or at risk of metabolic syndrome: a randomised controlled trial\n\n\n\nABSTRACT.BACKGROUND:\nThis study aimed to determine whether a home-based 6-month lifestyle intervention program complemented by motivational interviewing could improve diet and physical activity behaviours in 50–69 year olds with or at risk of metabolic syndrome, residing in a disadvantaged rural Western Australian community.\n\nABSTRACT.METHODS:\nParticipants from the City of Albany and surrounding towns (n = 401) were recruited into a 6 month randomised controlled trial. They were screened for metabolic syndrome and randomly allocated to intervention (n = 201) or control group (n = 200). Baseline and post-test data collection for both groups included a self-report questionnaire which incorporated the Fat and Fibre Barometer and the International Physical Activity Questionnaire Short Form. The intervention group received the program materials at baseline and the control group was waitlisted. Generalised estimating equation models assessed repeated outcome measures over time.\n\nABSTRACT.RESULTS:\nA total of 151 (75.1 %) intervention and 159 (79.5 %) control group participants completed post-test and were included in the analysis. After controlling for confounders, the intervention group achieved a marginally significant increase in their metabolic equivalent (MET) minutes of moderate intensity physical activity per week (p = 0.049), and significantly improved fibre intake (p < 0.001), fat intake (p = 0.003), and vegetable serves per day (p = 0.002) from baseline to post-test relative to the control group.\n\nABSTRACT.CONCLUSION:\nA home-based, low-cost intervention with motivational support can effectively improve the physical activity and dietary behaviours of adults aged 50–69 years with or at risk of metabolic syndrome residing in a disadvantaged rural area.\n\nABSTRACT.TRIAL REGISTRATION:\nAnzctr.org.au Identifier: ACTRN12614000512628\n\nBODY.BACKGROUND:\nMetabolic syndrome is characterised by several metabolic abnormalities including raised triglyceride levels, reduced high density lipoprotein (HDL) cholesterol, hypertension, hyperglycaemia, and abdominal obesity [1]. Individuals with metabolic syndrome are at a significantly increased risk of developing chronic diseases such as type 2 diabetes and cardiovascular disease [1, 2]. Prevalence of metabolic syndrome varies between populations, with estimates of approximately 13-30 % of adults in developing countries and 35 % in developed countries such as Australia and the USA [3]. Data from the United Kingdom suggests that prevalence increases with age [3].\nSeveral risk factors are responsible for the majority of metabolic syndrome cases and cardiovascular diseases in developed countries [4]. 
These include high cholesterol, overweight/obesity, physical inactivity, high blood pressure, and limited fruit and vegetable intake [4]. The risk factors contributing significantly to the burden of disease in Australia are poor diet (11 %) and overweight/obesity (9 %) [5]. In 2011–12, 92 % of Australian adults were not consuming enough serves of vegetables to meet the Australian Dietary Guidelines, and just 49 % achieved the recommended target for fruit consumption [5], while only 40 % met the recommendation of at least 150 min per week of moderate intensity physical activity [5].\nThe gap in health risk behaviours between the least and most disadvantaged groups of the Australian population is widening, particularly for fruit and vegetable intake [6]. People living in lower socioeconomic areas of Australia are more likely to be physically inactive, engage in sedentary behaviour, and have abnormal glucose metabolism compared to those living in higher socioeconomic areas [7]. Similar effects have been noted across seven comparable countries, with higher socioeconomic neighbourhoods associated with increased consumption of fruit and vegetables [8].\nResidents of rural and remote Australia experience higher rates of morbidity and mortality and have less access to health services than those in metropolitan areas [5]. These populations are more likely than city populations to be overweight or obese (70 versus 60 %), insufficiently active (60 versus 54 %), have high blood cholesterol (37 versus 31 %) and comprise more people aged over 65 years (16 versus 13 %) [5]. This results in an increased prevalence of overweight/obesity, metabolic syndrome, and in turn chronic diseases [9]. Targeted screening and interventions in these populations at high risk of developing cardiovascular disease and type 2 diabetes may lead to early identification of metabolic syndrome, early management and delayed onset of these chronic diseases [10, 11].\nInterventions to address metabolic syndrome and related chronic diseases in older adults should encourage reduced sitting time and increased physical activity [12] in combination with diet modifications. These include limiting saturated fat, sugar and salt, and increasing fibre, fruit and vegetable consumption [1, 13, 14]. The literature indicates that structured behavioural interventions focusing on counselling, education, and support strategies can also assist positive behaviour change in individuals at risk of chronic diseases [15].\nA recent systematic review of self-help interventions for adults at risk of chronic diseases suggests that strategies comprising goal setting and self-monitoring in combination with tailored feedback, contact via email, and online social support, may be more effective for achieving behaviour change [16]. However, it is difficult to determine the effectiveness of self-help interventions in disadvantaged target groups due to insufficient data [16]. In particular, there is a gap in the knowledge of the impact of home-based lifestyle interventions in the disadvantaged rural/remote Australian setting targeting older adults with metabolic syndrome [17].\nThe Albany Physical Activity and Nutrition (APAN) Program aimed to improve dietary and physical activity behaviours of 50–69 year old adults at risk or with metabolic syndrome. The APAN program was implemented in the City of Albany and surrounding towns in The Great Southern region of Western Australia, which provided access to a large number of older adults aged 50–69 years (8496) [18]. 
The Socio-Economic Indexes for Areas scores Albany at 987.4, indicating relative disadvantage (<1000) [19]. This population is representative of other Australian rural/regional areas. In addition, the Great Southern region’s health profile suggests a need for increased access to health programs and services for residents [20]. The present study aimed to determine whether the APAN program effectively improved the physical activity, diet, and sedentary behaviours of participants.\n\nBODY.METHODS:\nBODY.STUDY DESIGN:\nThe protocol of this trial has been described previously [21]. APAN was a two-arm randomised controlled trial of a 6-month physical activity, diet, and healthy weight management intervention. Data were collected from the intervention and control groups at baseline and post-test. The trial was registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12614000512628) and the study protocol was approved by the Curtin University Human Research Ethics Committee (approval number HR149_2013). All participants provided informed consent prior to entry into the study.\n\nBODY.PARTICIPANTS:\nPotential participants were required to be 50–69 years of age and classified as either at risk of, or with metabolic syndrome using the International Diabetes Federation (IDF) criteria [22] to be eligible for the study. To be considered at risk, participants were required to have a large waist circumference as the minimum requirement (waist circumference ≥94 cm for men or ≥80 cm for women [Europids, Sub-Saharan Africans, Eastern Mediterranean, Middle East]; ≥90 cm for men or ≥80 cm for women [South Asians, Chinese, Japanese]), plus one of the following parameters: raised triglyceride concentration (≥1.7 mM, or treatment for this); reduced HDL-cholesterol concentration (<1.03 mM in males and <1.29 mM in females, or treatment for this); raised blood pressure (systolic ≥130 mmHg or diastolic ≥85 mmHg, or treatment of previously diagnosed hypertension); raised fasting plasma glucose concentration (≥5.6 mM). Participants were classified as having metabolic syndrome if they had a large waist circumference plus two of the other criteria stated above [23].\nThe following exclusion criteria applied: on a weight loss diet or having weight fluctuations of >5 % within the previous 6 months; previous diagnosis of diabetes mellitus (other than gestational diabetes); of Aboriginal or Torres Strait Islander decent; receiving specific treatment to lower blood glucose; or involvement in another physical activity program.\n\nBODY.PROCEDURE:\nParticipants aged 50–69 years were recruited from towns within a 50 km radius of Albany, Western Australia. Recruitment of participants occurred in three stages. Figure 1 outlines the study procedure, participant flow, and samples sizes.Fig. 1Consort flow diagram\n\nBODY.SCREENING STAGE 1:\nIndividuals (n = 12,723) from the selected region were initially screened via the Computer Assisted Telephone Interview system using the Australian Type 2 Diabetes Risk Assessment Tool (AUSDRISK), which assesses the risk of developing type 2 diabetes mellitus based on anthropometric, demographic, and lifestyle characteristics [24]. Females with a score of ≥ 9 and males with a score of ≥ 12 were eligible for the next stage of screening (n = 1134), with 7342 individuals excluded and 4247 individuals who opted out.\n\nBODY.SCREENING STAGE 2:\nEligible individuals (n = 1134) were invited to attend a clinic in a central location in Albany. 
A total of 526 participants attended the clinic and 608 opted out. Anthropometry including waist circumference was measured during the appointment to confirm central obesity (waist circumference ≥94 cm for men or ≥80 cm for women, being the minimum IDF requirement) before progressing to the next stage. Based on this eligibility criterion, 40 participants were excluded, and a further 16 opted out after progressing to screening stage 3.\n\nBODY.SCREENING STAGE 3:\nThe remaining eligibility criteria were assessed at a local pathology laboratory via fasting blood sample and blood pressure measurements. Participants also completed a self-reported questionnaire to measure physical activity and dietary behaviours.\nParticipants meeting the full eligibility criteria (n = 401) were randomly allocated to either the intervention group (n = 201) or control group (n = 200), while adjusting for gender and metabolic syndrome status. Intervention group participants were provided the APAN program materials and allocated to one of two motivational support staff. The control group participants were waitlisted to receive their program after post-test data collection (6 months).\nBaseline and post-test data collection included a self-reported questionnaire to measure participant dietary and physical activity behaviours. Of the 201 intervention and 200 control group participants who completed the questionnaire at baseline, 151 (75.1 % response rate) and 159 (79.5 % response rate) respectively completed the post-test questionnaire and were available for analysis of the self-reported outcome measures.\n\n\nBODY.INTERVENTION:\nThis home-based program was low cost and tailorable to suit varying fitness levels and lifestyles. Its strategies included self-monitoring and goal setting, supported by printed and online material, and participants received email and telephone support using motivational interviewing techniques. The program built upon previous dietary and physical activity programs targeting older adults in a metropolitan area [25, 26], and was adapted for a rural/remote target group.\nBODY.THEORETICAL BASIS:\nThe Self-Determination Theory was applied as the theoretical framework for this study, complemented by Motivational Interviewing [27]. The theory explores human motivation in terms of behaviours that are autonomous (originating from self) and behaviours that are controlled (persuasion or coercion) [28]. Self-Determination Theory is the only motivational theory that identifies autonomy as a basic need for all humans that should be supported in health promotion interventions [29]. Individuals are more likely to engage in certain behaviours if they are valued and intrinsically motivated [30]. Advice from health professionals should be offered in an autonomous manner to allow for individual decision-making [27].\n\nBODY.PROGRAM MATERIALS:\nIntervention group participants were provided with the APAN program materials designed to assist with self-monitoring and goal setting to improve diet, physical activity, and healthy weight maintenance. Recommendations were based on Australia’s Physical Activity and Sedentary Behaviour Guidelines [31] and the Australian Dietary Guidelines [32], as well as previous successful interventions targeting older adults [25, 26]. Materials included a booklet, exercise charts, resistance band, nutrition panel wallet card, and a website with progress tracker and interactive blog. 
A detailed overview of the program materials has been described previously [21].\n\nBODY.MOTIVATIONAL SUPPORT:\nIntervention group participants were provided motivational support for the duration of the APAN intervention via telephone counselling and emails. Two research assistants trained in motivational interviewing techniques contacted participants at week 1, 3, 6, 12, 18, and 24. Motivational interviewing principles incorporate strategies to encourage an individual to initiate and sustain behaviour change, such as empathy, shared decision-making, and reflective listening [33]. These strategies complemented the principles of the Self-Determination Theory by ensuring participants set their own goals, became involved in decision-making, and were not coerced into a particular behaviour or action [27, 30].\nResearch assistants were provided with a dialogue guide for each of the six MI telephone calls scheduled over the intervention period. The first call focused on general health as the main topic, allowing the research assistants to introduce themselves to participants and gauge their readiness to change, barriers to change, and identify specific goals they wished to achieve. Subsequent calls focused on specific topics including physical activity, sedentary behaviour, nutrition, and the revisiting of goals. Participants were encouraged to utilise their APAN resources to monitor their progress and adjust goals and behaviours as required.\n\n\nBODY.INSTRUMENT:\nThe self-completed structured questionnaire included the Fat and Fibre Barometer [34], the International Physical Activity Questionnaire Short Form (IPAQ-SF) [35], and demographic and general health questions including sex, education, marital status, smoking status, diagnosed health conditions, medications, and alcohol consumption. The IPAQ-SF measured walking time, moderate and vigorous intensity physical activities, and time spent sitting across a usual week. A strength question was also added to determine resistance or weight training using large muscle groups [26].\nTo assess diet, the Fat and Fibre Barometer was utilised to provide information on habitual fat- and fibre-related behaviours of participants [34]. The valid and reliable instrument was appended with questions asking participants for the number of fruit and vegetable serves in a usual week [26]. Body mass index (BMI) was calculated based on weight and height data collected during screening stage 2.\n\nBODY.STATISTICAL ANALYSIS:\nThis paper focuses on dietary and physical activity behaviours whereas changes in clinical metabolic syndrome parameters and anthropometry will be reported elsewhere. Descriptive statistics summarised the baseline lifestyle and demographic characteristics of the intervention and control groups. Independent and paired samples t-tests were applied to the continuous outcome variables, whereas Mann-Whitney U test and Wilcoxon Signed Rank test were applied to those variables exhibiting skewed distributions. To account for the effects of potential confounders, generalised estimating equation (GEE) models with exchangeable correlation structure were used to assess the repeated outcome variables over time. 
Normal GEE with identity link was applied to normally distributed continuous outcome variables (sitting time [hours per day]; fibre intake score; fat intake score; fat avoidance score), while gamma GEE with log link was applied to skewed continuous variables (walking time [MET min/week]; moderate intensity activity [MET min/week]; vigorous intensity activity [MET min/week]; total activity [MET min/week]; strength training [min/week]; fruit intake [serves per day]; vegetable intake [serves per day]). All statistical analyses were performed using the SPSS Statistics Package 22.\n\n\nBODY.RESULTS:\nDemographic and lifestyle characteristics of participants are summarised in Table 1, which highlight no significant differences between the intervention and control groups at baseline. The mean age of the sample was 61 years with 66.5 % female participants. The majority (98.4 %) had completed either secondary school or tertiary education, and 81.6 % had a partner. The mean BMI was 30.8 indicating that on average, the sample was obese. Smoking status and alcohol consumption were not significantly different between the two groups.Table 1Baseline characteristics of intervention and control group participants\nVariableIntervention groupControl group\np valuea\n(n = 151)(n = 159)Age: mean (SD) years\n60.5 (5.64)\n61.3 (5.18)\n0.181\nMetabolic syndrome status:\n\n\n0.494\nWith\n79 (52.3 %)\n90 (56.6 %)\nAt risk\n72 (47.7 %)\n69 (43.4 %)\nBMI: mean (SD)\n31.0 (5.88)\n30.6 (5.17)\n0.440\nGender: female\n100 (66.2 %)\n106 (66.7 %)\n0.934\nEmployment status:\n\n\n0.296\nFull time\n78 (51.7 %)\n65 (40.9 %)\nPart time\n24 (15.9 %)\n29 (18.2 %)\nUnemployed\n5 (3.3 %)\n7 (4.4 %)\nRetired\n44 (29.1 %)\n58 (36.5 %)\nEducation:\n\n\n0.425\nPrimary school\n3 (2.0 %)\n2 (1.3 %)\nSecondary school\n55 (36.4 %)\n72 (45.0 %)\nTechnical/Diploma\n52 (34.4 %)\n46 (28.8 %)\nUniversity\n41 (27.2 %)\n39 (24.5 %)\nRelationship status: with partner\n124 (82.1 %)\n129 (81.1 %)\n0.810\nSmoking status:\n\n\n0.852\nNever\n84 (55.6 %)\n84 (52.8 %)\nEx-smoker\n52 (34.4 %)\n54 (33.8 %)\nOccasional smoker\n3 (2.0 %)\n4 (2.5 %)\nDaily smoker\n12 (7.9 %)\n17 (10.6 %)\nCo-morbidityb: yes\n92 (60.9 %)\n104 (65.4 %)\n0.413\nAlcohol drinking: yes\n99 (65.6 %)\n113 (71.1 %)\n0.957\n\na\nt-test or chi square test between intervention and control groups\n\nb Presence of at least one of 8 common health problems\n\nBODY.PHYSICAL ACTIVITY OUTCOMES:\nThe self-reported physical activity outcomes between intervention and control groups are displayed in Table 2. Significant improvements in mean walking time (MET min/week), moderate intensity activity (MET min/week), total activity (MET min/week), sitting time (hours per day), and strength training (minutes per week) from baseline to post-test were observed for the intervention group, while a significant reduction (17 min/week) in total MET min/week was observed for the control group from baseline to post-test. Table 3 provides the results of the GEE analyses for physical activity outcomes. After controlling for potential confounders, the intervention group achieved a marginally significant improvement in the self-reported moderate intensity MET minutes per week (p = 0.049) relative to the control group. 
No significant improvement was observed for the other physical activity variables through the group x time interaction term.Table 2Comparison of self-reported physical activity outcomes between intervention and control groups\nOutcomeIntervention group (n = 151)p valuea\nControl group (n = 159)\np valueb\n\np valuec\n\np valued\nBaselinePostBaselinePostMean (SD)Mean (SD)Mean (SD)Mean (SD)Walking MET min/weeke\n\n396.0\n577.5\n0.011\n330.0\n396.0\n0.047\n0.561\n0.050\n(561.0)\n(742.5)\n(594.0)\n(709.0)\nModerate MET min/weeke\n\n300.0\n480.0\n<0.001\n360.0\n360.0\n0.808\n0.394\n0.189\n(585.0)\n(850.0)\n(640.0)\n(636.0)\nVigorous MET min/weeke\n\n181.5\n217.5\n0.070\n203.0\n179.5\n0.715\n0.902\n0.176\n(479.4)\n(460.2)\n(462.1)\n(428.4)\nTotal MET min/weeke\n\n807.5\n1332.0\n<0.001\n990.0\n973.0\n0.013\n0.428\n0.051\n(1486.9)\n(1624.9)\n(1357.5)\n(1738.0)\nStrength min/weeke\n\n39.2\n53.9\n<0.001\n27.7\n25.0\n0.436\n0.880\n0.001\n(168.9)\n(124.2)\n(66.8)\n(70.4)\nSitting hours/day\n359.1\n319.7\n0.001\n356.0\n339.4\n0.187\n0.955\n0.299\n(187.4)\n(162.7)\n(171.3)\n(169.4)\n\na Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the intervention group\n\nb Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the control group\n\nc Independent t-test (Mann-Whitney U test) between intervention and control group at baseline\n\nd Independent t-test (Mann-Whitney U test) between intervention and control group at post-test\n\neNon-parametric tests applied\nTable 3Regression analysis of physical activity outcomes before and after intervention (n = 310)\nGroup: interventionTime: postGroup x timeCoefficient (SE)pc\nCoefficient (SE)pc\nCoefficient (SE)pc\nWalking MET min/wka\n\n0.09 (0.16)\n0.580\n0.23 (0.10)\n0.020\n−0.10 (0.16)\n0.524\nModerate MET min/wka\n\n−0.07 (0.16)\n0.650\n0.01 (0.11)\n0.991\n0.29 (0.15)\n0.049\nVigorous MET min/wka\n\n−0.21 (0.19)\n0.275\n−0.14 (0.17)\n0.393\n0.15 (0.24)\n0.537\nTotal MET min/wka\n\n−0.05 (0.13)\n0.710\n0.12 (0.08)\n0.164\n0.11 (0.12)\n0.335\nStrength min/wka\n\n0.01 (0.32)\n0.983\n−0.40 (0.22)\n0.066\n0.15 (0.34)\n0.653\nSitting hours/dayb\n\n−10.61 (19.71)\n0.590\n−18.10 (12.54)\n0.149\n−20.86 (17.16)\n0.224\n\naGamma generalised estimating equation model with log link\n\nbNormal generalised estimating equation model with identity link\n\ncAdjusted for age, gender, relationship status, education level, employment status, co-morbidity, alcohol drinking, and smoking status\n\n\nBODY.DIETARY OUTCOMES:\nThe dietary outcomes are compared between groups in Table 4. Significant improvements in all self-reported outcome measures from baseline to post-test were observed for the intervention group, whereas the control group demonstrated no significant changes. Significant differences were observed between groups at post-test for the fibre intake score (p = 0.004), fat intake score (p < 0.001), fruit intake (p = 0.001), and vegetable intake (p < 0.001). Table 5 provides the results of the GEE analyses for dietary outcome variables. 
After controlling for potential confounders, the intervention group demonstrated significant improvements in the self-reported fibre intake (p < 0.001), fat intake (p = 0.003), and serves of vegetables per day (p = 0.002) using the group x time interaction term.Table 4Comparison of self-reported dietary outcomes between intervention and control groups\nOutcomeIntervention group (n = 151)\np valuea\nControl group (n = 159)\np valueb\n\np valuec\n\np valued\nBaselinePostBaselinePostMean (SD)Mean (SD)Mean (SD)Mean (SD)Fibre intake score\n23.3 (4.2)\n24.9 (4.1)\n<0.001\n23.4 (4.0)\n23.6 (3.8)\n0.313\n0.852\n0.004\nFat avoidance score\n12.8 (3.8)\n13.3 (3.8)\n0.016\n12.8 (3.8)\n12.8 (4.0)\n0.743\n0.933\n0.234\nFat intake score\n30.9 (4.0)\n32.3 (4.0)\n<0.001\n30.4 (3.8)\n30.8 (3.4)\n0.082\n0.194\n<0.001\nFruit intake (serves/day)e\n\n1.5 (1.3)\n2.0 (1.4)\n<0.001\n1.5 (1.2)\n1.4 (1.1)\n0.378\n0.619\n0.001\nVegetable intake (serves/day)e\n\n3.0 (2.3)\n3.4 (1.9)\n<0.001\n2.5 (2.2)\n3.0 (2.3)\n0.570\n0.277\n<0.001\n\na Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the intervention group\n\nb Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the control group\n\nc Independent t-test (Mann-Whitney U test) between intervention and control group at baseline\n\nd Independent t-test (Mann-Whitney U test) between intervention and control group at post-test\n\neNon-parametric tests applied\nTable 5Regression analysis of dietary outcomes before and after intervention (n = 310)\nGroup: interventionTime: postGroup x timeCoefficient (SE)pc\nCoefficient (SE)pc\nCoefficient (SE)pc\nFibre intake scoreb\n\n−0.06 (0.45)\n0.897\n0.20 (0.20)\n0.329\n1.39 (0.32)\n<0.001\nFat intake scoreb\n\n0.57 (0.43)\n0.188\n0.41 (0.23)\n0.071\n0.97 (0.33)\n0.003\nFat avoidance scoreb\n\n0.03 (0.42)\n0.942\n0.05 (0.18)\n0.797\n0.48 (0.28)\n0.086\nFruit intakea\n\n0.07 (0.07)\n0.327\n0.04 (0.04)\n0.343\n0.42 (0.26)\n0.106\nVegetable intakea\n\n0.07 (0.06)\n0.292\n0.04 (0.04)\n0.321\n0.17 (0.06)\n0.002\n\naGamma generalised estimating equation model with log link\n\nbNormal generalised estimating equation model with identity link\n\ncAdjusted for age, gender, relationship status, education level, employment status, co-morbidity, alcohol drinking, and smoking status\n\n\n\nBODY.DISCUSSION:\nPoor diet, physical inactivity, sedentary behaviour, and overweight/obesity are some of the major risk factors contributing to Australia’s burden of disease [5]. These risk factors are more prevalent in disadvantaged and rural/remote communities in the older age groups [7], with the gap widening particularly for fruit and vegetable intake [6]. Consequently, interventions targeting this high risk group are essential to address the rising prevalence of obesity, metabolic syndrome, and chronic diseases in Australia [36, 37]. Assessment of changes to diet and physical activity behaviours of study participants provides an understanding of intervention compliance and the implication of changes in outcome measures [38].\nThis study examined the effectiveness of the APAN program for improving the dietary and physical activity behaviours of 50–69 year old adults at with, or at risk of metabolic syndrome in a disadvantaged rural area. The sample sizes provided sufficient statistical power to evaluate the repeated measures [21]. 
As expected, the attrition rate was higher for the intervention group (24.9 %) than the control group (20.5 %), and comparable to similar studies [39, 40].\nThe APAN program utilised the AUSDRISK [24] to initially screen individuals for risk of metabolic syndrome based on their risk of developing type 2 diabetes. To date no other studies report using this screening tool to identify metabolic syndrome in large populations. The screening process identified 1060 high risk individuals for developing type 2 diabetes, of which 215 (20 %) were confirmed to have metabolic syndrome and 186 (18 %) at risk of metabolic syndrome using the IDF criteria (central obesity plus one instead of two of the additional parameters). Targeted screening using AUSDRISK and the IDF metabolic syndrome criteria allows for evaluation of a timely lifestyle intervention to prevent the onset of cardiovascular disease and type 2 diabetes in rural Australian communities.\nIt is recommended that lifestyle interventions for disadvantaged groups incorporate self-help strategies due to the potentially high reach and low cost of implementation [16]. A systematic review of optimal methods and strategies for achieving lifestyle behaviour change in individuals with metabolic syndrome reported that interventions incorporating motivational feedback/interviewing in combination with internet monitoring and regular personal feedback are likely to achieve the best results [41]. Additionally, telephone-based services are able to reach geographically and socially disadvantaged areas, which commonly have higher risk of chronic diseases [42]. The APAN program incorporated a combination of these strategies to ensure the program was delivered in a cost-effective manner to the relatively disadvantaged participants in a rural community [19]. The observed changes to diet (fat, fibre and vegetable intake) and physical activity behaviours (moderate intensity activity) for the intervention group suggest that this combination of strategies is effective for the high-risk target group.\nThe APAN program was based on the Australian Dietary Guidelines [32]. Participants were encouraged to consume a diet high in fruit and vegetables with an emphasis on fibre intake, which is the recommendation for individuals with or at risk of metabolic syndrome [43]. The significant improvement in vegetable consumption and fat and fibre intake for the intervention group suggests that the APAN program demonstrates protective dietary factors to address metabolic syndrome in the target group.\nThe APAN program provided strategies for participants to identify and overcome barriers to improving physical activity and diet. Identified barriers to fruit and vegetable intake in the literature include a perception that enough serves of fruit and vegetables were consumed, as well as lack of variety, difficulty changing habits, lack of time to prepare, and low quality produce [44]. It has been suggested that interventions should provide practical solutions to address these barriers, and devise strategies to effectively communicate the recommended serves of fruit and vegetables to participants [44]. The APAN program supported participants by providing detailed information on food groups, recommended serves per day, sample meal plans, and tips for incorporating more fruit and vegetables into their daily meals, and the motivational interviewing component of the intervention encouraged participants to utilise these tools on a regular basis. 
Therefore, the significant increase in serves of vegetables per day for the intervention group is a successful achievement.\nBased on Australia’s Physical Activity and Sedentary Behaviour Guidelines [31], the APAN program recommended that participants aim for at least 150 min per week of moderate intensity physical activity in combination with a reduction in sitting time and strength training, as these activities all impact on metabolic syndrome outcomes. There was a marginal improvement in moderate intensity activity, which is encouraging as the risk of metabolic syndrome is almost doubled among adults who engage in no moderate intensity activity [43]. Although the intervention group demonstrated some improvement in sitting time, this change was not significantly different relative to the controls. Further research into strategies to encourage a reduction in sitting time in the home environment is recommended for the target group, including optimal motivational interviewing techniques to achieve this.\nLoss of muscle mass and strength, or sarcopenia, affects wellbeing, physical movement, glycaemic control and blood pressure [45, 46]. Strength training is therefore a crucial component of any physical activity intervention targeting older adults due to its physiological effects [46]. The prevalence of strength training in Australian adults remains low, particularly in regional settings (21.9 % for men and 17.5 % for women) and adults aged 55+ (7 %) [46]. The intervention group did not achieve a significant increase in strength training time compared to the control group, highlighting the need for more effort to be placed on this. The APAN exercise chart was adapted from a previous study targeting older adults (60+ years) which might have been too simplistic and not challenging enough for the younger target group in the current study. Further adaptations should be considered for future interventions.\nBODY.LIMITATIONS:\nA limitation of this study is the use of self-reported outcome measures which introduces reporting bias; however, any inaccuracies in reporting are anticipated to be the same across intervention and control groups. Additionally, allocation concealment ensured that the participants were unaware of their group allocation which reduced the effect of any differential over-reporting of desired behaviours and under-reporting of undesired behaviours. Another limitation is the short duration of the intervention (6 months). It is commonly accepted that exposing older adults to a sufficient dose of an intervention to ensure behaviour change is maintained in the long term can be challenging [47]. Risk factors for metabolic syndrome are effectively controlled by intensive, short-term programs for weight loss; however once the programs end there are high rates of recidivism and individuals have a tendency to regain weight [41]. A follow-up study is therefore recommended to determine long-term effectiveness and sustainability of the APAN program.\n\n\nBODY.CONCLUSIONS:\nRural and remote communities are often neglected in lifestyle intervention research. It is therefore imperative that future interventions focus on sustainable health outcomes in real-world settings, particularly in disadvantaged groups. Considering the increased prevalence of preventable disease risk factors, it is essential to address poor diet, physical inactivity, and sedentary behaviour to reduce the prevalence of metabolic syndrome and related chronic diseases. 
The APAN program improved the physical activity and diet of the intervention group compared to the control group, demonstrating that a home-based, low-cost intervention with motivational support can effectively influence certain behaviour change of 50–69 year old adults with or at risk of metabolic syndrome in a disadvantaged rural area. The study findings thus contribute to the prevention and control of chronic diseases.\n\n**Question:** Compared to control group what was the result of screened for metabolic syndrome on self-reported moderate intensity MET?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
13887
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffects of a home-based intervention on diet and physical activity behaviours for rural adults with or at risk of metabolic syndrome: a randomised controlled trial\n\n\n\nABSTRACT.BACKGROUND:\nThis study aimed to determine whether a home-based 6-month lifestyle intervention program complemented by motivational interviewing could improve diet and physical activity behaviours in 50–69 year olds with or at risk of metabolic syndrome, residing in a disadvantaged rural Western Australian community.\n\nABSTRACT.METHODS:\nParticipants from the City of Albany and surrounding towns (n = 401) were recruited into a 6 month randomised controlled trial. They were screened for metabolic syndrome and randomly allocated to intervention (n = 201) or control group (n = 200). Baseline and post-test data collection for both groups included a self-report questionnaire which incorporated the Fat and Fibre Barometer and the International Physical Activity Questionnaire Short Form. The intervention group received the program materials at baseline and the control group was waitlisted. Generalised estimating equation models assessed repeated outcome measures over time.\n\nABSTRACT.RESULTS:\nA total of 151 (75.1 %) intervention and 159 (79.5 %) control group participants completed post-test and were included in the analysis. After controlling for confounders, the intervention group achieved a marginally significant increase in their metabolic equivalent (MET) minutes of moderate intensity physical activity per week (p = 0.049), and significantly improved fibre intake (p < 0.001), fat intake (p = 0.003), and vegetable serves per day (p = 0.002) from baseline to post-test relative to the control group.\n\nABSTRACT.CONCLUSION:\nA home-based, low-cost intervention with motivational support can effectively improve the physical activity and dietary behaviours of adults aged 50–69 years with or at risk of metabolic syndrome residing in a disadvantaged rural area.\n\nABSTRACT.TRIAL REGISTRATION:\nAnzctr.org.au Identifier: ACTRN12614000512628\n\nBODY.BACKGROUND:\nMetabolic syndrome is characterised by several metabolic abnormalities including raised triglyceride levels, reduced high density lipoprotein (HDL) cholesterol, hypertension, hyperglycaemia, and abdominal obesity [1]. Individuals with metabolic syndrome are at a significantly increased risk of developing chronic diseases such as type 2 diabetes and cardiovascular disease [1, 2]. Prevalence of metabolic syndrome varies between populations, with estimates of approximately 13-30 % of adults in developing countries and 35 % in developed countries such as Australia and the USA [3]. Data from the United Kingdom suggests that prevalence increases with age [3].\nSeveral risk factors are responsible for the majority of metabolic syndrome cases and cardiovascular diseases in developed countries [4]. 
These include high cholesterol, overweight/obesity, physical inactivity, high blood pressure, and limited fruit and vegetable intake [4]. The risk factors contributing significantly to the burden of disease in Australia are poor diet (11 %) and overweight/obesity (9 %) [5]. In 2011–12, 92 % of Australian adults were not consuming enough serves of vegetables to meet the Australian Dietary Guidelines, and just 49 % achieved the recommended target for fruit consumption [5], while only 40 % met the recommendation of at least 150 min per week of moderate intensity physical activity [5].\nThe gap in health risk behaviours between the least and most disadvantaged groups of the Australian population is widening, particularly for fruit and vegetable intake [6]. People living in lower socioeconomic areas of Australia are more likely to be physically inactive, engage in sedentary behaviour, and have abnormal glucose metabolism compared to those living in higher socioeconomic areas [7]. Similar effects have been noted across seven comparable countries, with higher socioeconomic neighbourhoods associated with increased consumption of fruit and vegetables [8].\nResidents of rural and remote Australia experience higher rates of morbidity and mortality and have less access to health services than those in metropolitan areas [5]. These populations are more likely than city populations to be overweight or obese (70 versus 60 %), insufficiently active (60 versus 54 %), have high blood cholesterol (37 versus 31 %) and comprise more people aged over 65 years (16 versus 13 %) [5]. This results in an increased prevalence of overweight/obesity, metabolic syndrome, and in turn chronic diseases [9]. Targeted screening and interventions in these populations at high risk of developing cardiovascular disease and type 2 diabetes may lead to early identification of metabolic syndrome, early management and delayed onset of these chronic diseases [10, 11].\nInterventions to address metabolic syndrome and related chronic diseases in older adults should encourage reduced sitting time and increased physical activity [12] in combination with diet modifications. These include limiting saturated fat, sugar and salt, and increasing fibre, fruit and vegetable consumption [1, 13, 14]. The literature indicates that structured behavioural interventions focusing on counselling, education, and support strategies can also assist positive behaviour change in individuals at risk of chronic diseases [15].\nA recent systematic review of self-help interventions for adults at risk of chronic diseases suggests that strategies comprising goal setting and self-monitoring in combination with tailored feedback, contact via email, and online social support, may be more effective for achieving behaviour change [16]. However, it is difficult to determine the effectiveness of self-help interventions in disadvantaged target groups due to insufficient data [16]. In particular, there is a gap in the knowledge of the impact of home-based lifestyle interventions in the disadvantaged rural/remote Australian setting targeting older adults with metabolic syndrome [17].\nThe Albany Physical Activity and Nutrition (APAN) Program aimed to improve dietary and physical activity behaviours of 50–69 year old adults at risk or with metabolic syndrome. The APAN program was implemented in the City of Albany and surrounding towns in The Great Southern region of Western Australia, which provided access to a large number of older adults aged 50–69 years (8496) [18]. 
The Socio-Economic Indexes for Areas scores Albany at 987.4, indicating relative disadvantage (<1000) [19]. This population is representative of other Australian rural/regional areas. In addition, the Great Southern region’s health profile suggests a need for increased access to health programs and services for residents [20]. The present study aimed to determine whether the APAN program effectively improved the physical activity, diet, and sedentary behaviours of participants.\n\nBODY.METHODS:\nBODY.STUDY DESIGN:\nThe protocol of this trial has been described previously [21]. APAN was a two-arm randomised controlled trial of a 6-month physical activity, diet, and healthy weight management intervention. Data were collected from the intervention and control groups at baseline and post-test. The trial was registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12614000512628) and the study protocol was approved by the Curtin University Human Research Ethics Committee (approval number HR149_2013). All participants provided informed consent prior to entry into the study.\n\nBODY.PARTICIPANTS:\nPotential participants were required to be 50–69 years of age and classified as either at risk of, or with metabolic syndrome using the International Diabetes Federation (IDF) criteria [22] to be eligible for the study. To be considered at risk, participants were required to have a large waist circumference as the minimum requirement (waist circumference ≥94 cm for men or ≥80 cm for women [Europids, Sub-Saharan Africans, Eastern Mediterranean, Middle East]; ≥90 cm for men or ≥80 cm for women [South Asians, Chinese, Japanese]), plus one of the following parameters: raised triglyceride concentration (≥1.7 mM, or treatment for this); reduced HDL-cholesterol concentration (<1.03 mM in males and <1.29 mM in females, or treatment for this); raised blood pressure (systolic ≥130 mmHg or diastolic ≥85 mmHg, or treatment of previously diagnosed hypertension); raised fasting plasma glucose concentration (≥5.6 mM). Participants were classified as having metabolic syndrome if they had a large waist circumference plus two of the other criteria stated above [23].\nThe following exclusion criteria applied: on a weight loss diet or having weight fluctuations of >5 % within the previous 6 months; previous diagnosis of diabetes mellitus (other than gestational diabetes); of Aboriginal or Torres Strait Islander decent; receiving specific treatment to lower blood glucose; or involvement in another physical activity program.\n\nBODY.PROCEDURE:\nParticipants aged 50–69 years were recruited from towns within a 50 km radius of Albany, Western Australia. Recruitment of participants occurred in three stages. Figure 1 outlines the study procedure, participant flow, and samples sizes.Fig. 1Consort flow diagram\n\nBODY.SCREENING STAGE 1:\nIndividuals (n = 12,723) from the selected region were initially screened via the Computer Assisted Telephone Interview system using the Australian Type 2 Diabetes Risk Assessment Tool (AUSDRISK), which assesses the risk of developing type 2 diabetes mellitus based on anthropometric, demographic, and lifestyle characteristics [24]. Females with a score of ≥ 9 and males with a score of ≥ 12 were eligible for the next stage of screening (n = 1134), with 7342 individuals excluded and 4247 individuals who opted out.\n\nBODY.SCREENING STAGE 2:\nEligible individuals (n = 1134) were invited to attend a clinic in a central location in Albany. 
A total of 526 participants attended the clinic and 608 opted out. Anthropometry including waist circumference was measured during the appointment to confirm central obesity (waist circumference ≥94 cm for men or ≥80 cm for women, being the minimum IDF requirement) before progressing to the next stage. Based on this eligibility criterion, 40 participants were excluded, and a further 16 opted out after progressing to screening stage 3.\n\nBODY.SCREENING STAGE 3:\nThe remaining eligibility criteria were assessed at a local pathology laboratory via fasting blood sample and blood pressure measurements. Participants also completed a self-reported questionnaire to measure physical activity and dietary behaviours.\nParticipants meeting the full eligibility criteria (n = 401) were randomly allocated to either the intervention group (n = 201) or control group (n = 200), while adjusting for gender and metabolic syndrome status. Intervention group participants were provided the APAN program materials and allocated to one of two motivational support staff. The control group participants were waitlisted to receive their program after post-test data collection (6 months).\nBaseline and post-test data collection included a self-reported questionnaire to measure participant dietary and physical activity behaviours. Of the 201 intervention and 200 control group participants who completed the questionnaire at baseline, 151 (75.1 % response rate) and 159 (79.5 % response rate) respectively completed the post-test questionnaire and were available for analysis of the self-reported outcome measures.\n\n\nBODY.INTERVENTION:\nThis home-based program was low cost and tailorable to suit varying fitness levels and lifestyles. Its strategies included self-monitoring and goal setting, supported by printed and online material, and participants received email and telephone support using motivational interviewing techniques. The program built upon previous dietary and physical activity programs targeting older adults in a metropolitan area [25, 26], and was adapted for a rural/remote target group.\nBODY.THEORETICAL BASIS:\nThe Self-Determination Theory was applied as the theoretical framework for this study, complemented by Motivational Interviewing [27]. The theory explores human motivation in terms of behaviours that are autonomous (originating from self) and behaviours that are controlled (persuasion or coercion) [28]. Self-Determination Theory is the only motivational theory that identifies autonomy as a basic need for all humans that should be supported in health promotion interventions [29]. Individuals are more likely to engage in certain behaviours if they are valued and intrinsically motivated [30]. Advice from health professionals should be offered in an autonomous manner to allow for individual decision-making [27].\n\nBODY.PROGRAM MATERIALS:\nIntervention group participants were provided with the APAN program materials designed to assist with self-monitoring and goal setting to improve diet, physical activity, and healthy weight maintenance. Recommendations were based on Australia’s Physical Activity and Sedentary Behaviour Guidelines [31] and the Australian Dietary Guidelines [32], as well as previous successful interventions targeting older adults [25, 26]. Materials included a booklet, exercise charts, resistance band, nutrition panel wallet card, and a website with progress tracker and interactive blog. 
A detailed overview of the program materials has been described previously [21].\n\nBODY.MOTIVATIONAL SUPPORT:\nIntervention group participants were provided motivational support for the duration of the APAN intervention via telephone counselling and emails. Two research assistants trained in motivational interviewing techniques contacted participants at week 1, 3, 6, 12, 18, and 24. Motivational interviewing principles incorporate strategies to encourage an individual to initiate and sustain behaviour change, such as empathy, shared decision-making, and reflective listening [33]. These strategies complemented the principles of the Self-Determination Theory by ensuring participants set their own goals, became involved in decision-making, and were not coerced into a particular behaviour or action [27, 30].\nResearch assistants were provided with a dialogue guide for each of the six MI telephone calls scheduled over the intervention period. The first call focused on general health as the main topic, allowing the research assistants to introduce themselves to participants and gauge their readiness to change, barriers to change, and identify specific goals they wished to achieve. Subsequent calls focused on specific topics including physical activity, sedentary behaviour, nutrition, and the revisiting of goals. Participants were encouraged to utilise their APAN resources to monitor their progress and adjust goals and behaviours as required.\n\n\nBODY.INSTRUMENT:\nThe self-completed structured questionnaire included the Fat and Fibre Barometer [34], the International Physical Activity Questionnaire Short Form (IPAQ-SF) [35], and demographic and general health questions including sex, education, marital status, smoking status, diagnosed health conditions, medications, and alcohol consumption. The IPAQ-SF measured walking time, moderate and vigorous intensity physical activities, and time spent sitting across a usual week. A strength question was also added to determine resistance or weight training using large muscle groups [26].\nTo assess diet, the Fat and Fibre Barometer was utilised to provide information on habitual fat- and fibre-related behaviours of participants [34]. The valid and reliable instrument was appended with questions asking participants for the number of fruit and vegetable serves in a usual week [26]. Body mass index (BMI) was calculated based on weight and height data collected during screening stage 2.\n\nBODY.STATISTICAL ANALYSIS:\nThis paper focuses on dietary and physical activity behaviours whereas changes in clinical metabolic syndrome parameters and anthropometry will be reported elsewhere. Descriptive statistics summarised the baseline lifestyle and demographic characteristics of the intervention and control groups. Independent and paired samples t-tests were applied to the continuous outcome variables, whereas Mann-Whitney U test and Wilcoxon Signed Rank test were applied to those variables exhibiting skewed distributions. To account for the effects of potential confounders, generalised estimating equation (GEE) models with exchangeable correlation structure were used to assess the repeated outcome variables over time. 
Normal GEE with identity link was applied to normally distributed continuous outcome variables (sitting time [hours per day]; fibre intake score; fat intake score; fat avoidance score), while gamma GEE with log link was applied to skewed continuous variables (walking time [MET min/week]; moderate intensity activity [MET min/week]; vigorous intensity activity [MET min/week]; total activity [MET min/week]; strength training [min/week]; fruit intake [serves per day]; vegetable intake [serves per day]). All statistical analyses were performed using the SPSS Statistics Package 22.\n\n\nBODY.RESULTS:\nDemographic and lifestyle characteristics of participants are summarised in Table 1, which highlight no significant differences between the intervention and control groups at baseline. The mean age of the sample was 61 years with 66.5 % female participants. The majority (98.4 %) had completed either secondary school or tertiary education, and 81.6 % had a partner. The mean BMI was 30.8 indicating that on average, the sample was obese. Smoking status and alcohol consumption were not significantly different between the two groups.Table 1Baseline characteristics of intervention and control group participants\nVariableIntervention groupControl group\np valuea\n(n = 151)(n = 159)Age: mean (SD) years\n60.5 (5.64)\n61.3 (5.18)\n0.181\nMetabolic syndrome status:\n\n\n0.494\nWith\n79 (52.3 %)\n90 (56.6 %)\nAt risk\n72 (47.7 %)\n69 (43.4 %)\nBMI: mean (SD)\n31.0 (5.88)\n30.6 (5.17)\n0.440\nGender: female\n100 (66.2 %)\n106 (66.7 %)\n0.934\nEmployment status:\n\n\n0.296\nFull time\n78 (51.7 %)\n65 (40.9 %)\nPart time\n24 (15.9 %)\n29 (18.2 %)\nUnemployed\n5 (3.3 %)\n7 (4.4 %)\nRetired\n44 (29.1 %)\n58 (36.5 %)\nEducation:\n\n\n0.425\nPrimary school\n3 (2.0 %)\n2 (1.3 %)\nSecondary school\n55 (36.4 %)\n72 (45.0 %)\nTechnical/Diploma\n52 (34.4 %)\n46 (28.8 %)\nUniversity\n41 (27.2 %)\n39 (24.5 %)\nRelationship status: with partner\n124 (82.1 %)\n129 (81.1 %)\n0.810\nSmoking status:\n\n\n0.852\nNever\n84 (55.6 %)\n84 (52.8 %)\nEx-smoker\n52 (34.4 %)\n54 (33.8 %)\nOccasional smoker\n3 (2.0 %)\n4 (2.5 %)\nDaily smoker\n12 (7.9 %)\n17 (10.6 %)\nCo-morbidityb: yes\n92 (60.9 %)\n104 (65.4 %)\n0.413\nAlcohol drinking: yes\n99 (65.6 %)\n113 (71.1 %)\n0.957\n\na\nt-test or chi square test between intervention and control groups\n\nb Presence of at least one of 8 common health problems\n\nBODY.PHYSICAL ACTIVITY OUTCOMES:\nThe self-reported physical activity outcomes between intervention and control groups are displayed in Table 2. Significant improvements in mean walking time (MET min/week), moderate intensity activity (MET min/week), total activity (MET min/week), sitting time (hours per day), and strength training (minutes per week) from baseline to post-test were observed for the intervention group, while a significant reduction (17 min/week) in total MET min/week was observed for the control group from baseline to post-test. Table 3 provides the results of the GEE analyses for physical activity outcomes. After controlling for potential confounders, the intervention group achieved a marginally significant improvement in the self-reported moderate intensity MET minutes per week (p = 0.049) relative to the control group. 
No significant improvement was observed for the other physical activity variables through the group x time interaction term.Table 2Comparison of self-reported physical activity outcomes between intervention and control groups\nOutcomeIntervention group (n = 151)p valuea\nControl group (n = 159)\np valueb\n\np valuec\n\np valued\nBaselinePostBaselinePostMean (SD)Mean (SD)Mean (SD)Mean (SD)Walking MET min/weeke\n\n396.0\n577.5\n0.011\n330.0\n396.0\n0.047\n0.561\n0.050\n(561.0)\n(742.5)\n(594.0)\n(709.0)\nModerate MET min/weeke\n\n300.0\n480.0\n<0.001\n360.0\n360.0\n0.808\n0.394\n0.189\n(585.0)\n(850.0)\n(640.0)\n(636.0)\nVigorous MET min/weeke\n\n181.5\n217.5\n0.070\n203.0\n179.5\n0.715\n0.902\n0.176\n(479.4)\n(460.2)\n(462.1)\n(428.4)\nTotal MET min/weeke\n\n807.5\n1332.0\n<0.001\n990.0\n973.0\n0.013\n0.428\n0.051\n(1486.9)\n(1624.9)\n(1357.5)\n(1738.0)\nStrength min/weeke\n\n39.2\n53.9\n<0.001\n27.7\n25.0\n0.436\n0.880\n0.001\n(168.9)\n(124.2)\n(66.8)\n(70.4)\nSitting hours/day\n359.1\n319.7\n0.001\n356.0\n339.4\n0.187\n0.955\n0.299\n(187.4)\n(162.7)\n(171.3)\n(169.4)\n\na Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the intervention group\n\nb Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the control group\n\nc Independent t-test (Mann-Whitney U test) between intervention and control group at baseline\n\nd Independent t-test (Mann-Whitney U test) between intervention and control group at post-test\n\neNon-parametric tests applied\nTable 3Regression analysis of physical activity outcomes before and after intervention (n = 310)\nGroup: interventionTime: postGroup x timeCoefficient (SE)pc\nCoefficient (SE)pc\nCoefficient (SE)pc\nWalking MET min/wka\n\n0.09 (0.16)\n0.580\n0.23 (0.10)\n0.020\n−0.10 (0.16)\n0.524\nModerate MET min/wka\n\n−0.07 (0.16)\n0.650\n0.01 (0.11)\n0.991\n0.29 (0.15)\n0.049\nVigorous MET min/wka\n\n−0.21 (0.19)\n0.275\n−0.14 (0.17)\n0.393\n0.15 (0.24)\n0.537\nTotal MET min/wka\n\n−0.05 (0.13)\n0.710\n0.12 (0.08)\n0.164\n0.11 (0.12)\n0.335\nStrength min/wka\n\n0.01 (0.32)\n0.983\n−0.40 (0.22)\n0.066\n0.15 (0.34)\n0.653\nSitting hours/dayb\n\n−10.61 (19.71)\n0.590\n−18.10 (12.54)\n0.149\n−20.86 (17.16)\n0.224\n\naGamma generalised estimating equation model with log link\n\nbNormal generalised estimating equation model with identity link\n\ncAdjusted for age, gender, relationship status, education level, employment status, co-morbidity, alcohol drinking, and smoking status\n\n\nBODY.DIETARY OUTCOMES:\nThe dietary outcomes are compared between groups in Table 4. Significant improvements in all self-reported outcome measures from baseline to post-test were observed for the intervention group, whereas the control group demonstrated no significant changes. Significant differences were observed between groups at post-test for the fibre intake score (p = 0.004), fat intake score (p < 0.001), fruit intake (p = 0.001), and vegetable intake (p < 0.001). Table 5 provides the results of the GEE analyses for dietary outcome variables. 
After controlling for potential confounders, the intervention group demonstrated significant improvements in the self-reported fibre intake (p < 0.001), fat intake (p = 0.003), and serves of vegetables per day (p = 0.002) using the group x time interaction term.Table 4Comparison of self-reported dietary outcomes between intervention and control groups\nOutcomeIntervention group (n = 151)\np valuea\nControl group (n = 159)\np valueb\n\np valuec\n\np valued\nBaselinePostBaselinePostMean (SD)Mean (SD)Mean (SD)Mean (SD)Fibre intake score\n23.3 (4.2)\n24.9 (4.1)\n<0.001\n23.4 (4.0)\n23.6 (3.8)\n0.313\n0.852\n0.004\nFat avoidance score\n12.8 (3.8)\n13.3 (3.8)\n0.016\n12.8 (3.8)\n12.8 (4.0)\n0.743\n0.933\n0.234\nFat intake score\n30.9 (4.0)\n32.3 (4.0)\n<0.001\n30.4 (3.8)\n30.8 (3.4)\n0.082\n0.194\n<0.001\nFruit intake (serves/day)e\n\n1.5 (1.3)\n2.0 (1.4)\n<0.001\n1.5 (1.2)\n1.4 (1.1)\n0.378\n0.619\n0.001\nVegetable intake (serves/day)e\n\n3.0 (2.3)\n3.4 (1.9)\n<0.001\n2.5 (2.2)\n3.0 (2.3)\n0.570\n0.277\n<0.001\n\na Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the intervention group\n\nb Paired t-test (Wilcoxon signed-rank test) between baseline and post-test for the control group\n\nc Independent t-test (Mann-Whitney U test) between intervention and control group at baseline\n\nd Independent t-test (Mann-Whitney U test) between intervention and control group at post-test\n\neNon-parametric tests applied\nTable 5Regression analysis of dietary outcomes before and after intervention (n = 310)\nGroup: interventionTime: postGroup x timeCoefficient (SE)pc\nCoefficient (SE)pc\nCoefficient (SE)pc\nFibre intake scoreb\n\n−0.06 (0.45)\n0.897\n0.20 (0.20)\n0.329\n1.39 (0.32)\n<0.001\nFat intake scoreb\n\n0.57 (0.43)\n0.188\n0.41 (0.23)\n0.071\n0.97 (0.33)\n0.003\nFat avoidance scoreb\n\n0.03 (0.42)\n0.942\n0.05 (0.18)\n0.797\n0.48 (0.28)\n0.086\nFruit intakea\n\n0.07 (0.07)\n0.327\n0.04 (0.04)\n0.343\n0.42 (0.26)\n0.106\nVegetable intakea\n\n0.07 (0.06)\n0.292\n0.04 (0.04)\n0.321\n0.17 (0.06)\n0.002\n\naGamma generalised estimating equation model with log link\n\nbNormal generalised estimating equation model with identity link\n\ncAdjusted for age, gender, relationship status, education level, employment status, co-morbidity, alcohol drinking, and smoking status\n\n\n\nBODY.DISCUSSION:\nPoor diet, physical inactivity, sedentary behaviour, and overweight/obesity are some of the major risk factors contributing to Australia’s burden of disease [5]. These risk factors are more prevalent in disadvantaged and rural/remote communities in the older age groups [7], with the gap widening particularly for fruit and vegetable intake [6]. Consequently, interventions targeting this high risk group are essential to address the rising prevalence of obesity, metabolic syndrome, and chronic diseases in Australia [36, 37]. Assessment of changes to diet and physical activity behaviours of study participants provides an understanding of intervention compliance and the implication of changes in outcome measures [38].\nThis study examined the effectiveness of the APAN program for improving the dietary and physical activity behaviours of 50–69 year old adults at with, or at risk of metabolic syndrome in a disadvantaged rural area. The sample sizes provided sufficient statistical power to evaluate the repeated measures [21]. 
As expected, the attrition rate was higher for the intervention group (24.9 %) than the control group (20.5 %), and comparable to similar studies [39, 40].\nThe APAN program utilised the AUSDRISK [24] to initially screen individuals for risk of metabolic syndrome based on their risk of developing type 2 diabetes. To date no other studies report using this screening tool to identify metabolic syndrome in large populations. The screening process identified 1060 high risk individuals for developing type 2 diabetes, of which 215 (20 %) were confirmed to have metabolic syndrome and 186 (18 %) at risk of metabolic syndrome using the IDF criteria (central obesity plus one instead of two of the additional parameters). Targeted screening using AUSDRISK and the IDF metabolic syndrome criteria allows for evaluation of a timely lifestyle intervention to prevent the onset of cardiovascular disease and type 2 diabetes in rural Australian communities.\nIt is recommended that lifestyle interventions for disadvantaged groups incorporate self-help strategies due to the potentially high reach and low cost of implementation [16]. A systematic review of optimal methods and strategies for achieving lifestyle behaviour change in individuals with metabolic syndrome reported that interventions incorporating motivational feedback/interviewing in combination with internet monitoring and regular personal feedback are likely to achieve the best results [41]. Additionally, telephone-based services are able to reach geographically and socially disadvantaged areas, which commonly have higher risk of chronic diseases [42]. The APAN program incorporated a combination of these strategies to ensure the program was delivered in a cost-effective manner to the relatively disadvantaged participants in a rural community [19]. The observed changes to diet (fat, fibre and vegetable intake) and physical activity behaviours (moderate intensity activity) for the intervention group suggest that this combination of strategies is effective for the high-risk target group.\nThe APAN program was based on the Australian Dietary Guidelines [32]. Participants were encouraged to consume a diet high in fruit and vegetables with an emphasis on fibre intake, which is the recommendation for individuals with or at risk of metabolic syndrome [43]. The significant improvement in vegetable consumption and fat and fibre intake for the intervention group suggests that the APAN program demonstrates protective dietary factors to address metabolic syndrome in the target group.\nThe APAN program provided strategies for participants to identify and overcome barriers to improving physical activity and diet. Identified barriers to fruit and vegetable intake in the literature include a perception that enough serves of fruit and vegetables were consumed, as well as lack of variety, difficulty changing habits, lack of time to prepare, and low quality produce [44]. It has been suggested that interventions should provide practical solutions to address these barriers, and devise strategies to effectively communicate the recommended serves of fruit and vegetables to participants [44]. The APAN program supported participants by providing detailed information on food groups, recommended serves per day, sample meal plans, and tips for incorporating more fruit and vegetables into their daily meals, and the motivational interviewing component of the intervention encouraged participants to utilise these tools on a regular basis. 
Therefore, the significant increase in serves of vegetables per day for the intervention group is a successful achievement.\nBased on Australia’s Physical Activity and Sedentary Behaviour Guidelines [31], the APAN program recommended that participants aim for at least 150 min per week of moderate intensity physical activity in combination with a reduction in sitting time and strength training, as these activities all impact on metabolic syndrome outcomes. There was a marginal improvement in moderate intensity activity, which is encouraging as the risk of metabolic syndrome is almost doubled among adults who engage in no moderate intensity activity [43]. Although the intervention group demonstrated some improvement in sitting time, this change was not significantly different relative to the controls. Further research into strategies to encourage a reduction in sitting time in the home environment is recommended for the target group, including optimal motivational interviewing techniques to achieve this.\nLoss of muscle mass and strength, or sarcopenia, affects wellbeing, physical movement, glycaemic control and blood pressure [45, 46]. Strength training is therefore a crucial component of any physical activity intervention targeting older adults due to its physiological effects [46]. The prevalence of strength training in Australian adults remains low, particularly in regional settings (21.9 % for men and 17.5 % for women) and adults aged 55+ (7 %) [46]. The intervention group did not achieve a significant increase in strength training time compared to the control group, highlighting the need for more effort to be placed on this. The APAN exercise chart was adapted from a previous study targeting older adults (60+ years) which might have been too simplistic and not challenging enough for the younger target group in the current study. Further adaptations should be considered for future interventions.\nBODY.LIMITATIONS:\nA limitation of this study is the use of self-reported outcome measures which introduces reporting bias; however, any inaccuracies in reporting are anticipated to be the same across intervention and control groups. Additionally, allocation concealment ensured that the participants were unaware of their group allocation which reduced the effect of any differential over-reporting of desired behaviours and under-reporting of undesired behaviours. Another limitation is the short duration of the intervention (6 months). It is commonly accepted that exposing older adults to a sufficient dose of an intervention to ensure behaviour change is maintained in the long term can be challenging [47]. Risk factors for metabolic syndrome are effectively controlled by intensive, short-term programs for weight loss; however once the programs end there are high rates of recidivism and individuals have a tendency to regain weight [41]. A follow-up study is therefore recommended to determine long-term effectiveness and sustainability of the APAN program.\n\n\nBODY.CONCLUSIONS:\nRural and remote communities are often neglected in lifestyle intervention research. It is therefore imperative that future interventions focus on sustainable health outcomes in real-world settings, particularly in disadvantaged groups. Considering the increased prevalence of preventable disease risk factors, it is essential to address poor diet, physical inactivity, and sedentary behaviour to reduce the prevalence of metabolic syndrome and related chronic diseases. 
The APAN program improved the physical activity and diet of the intervention group compared to the control group, demonstrating that a home-based, low-cost intervention with motivational support can effectively influence certain behaviour change of 50–69 year old adults with or at risk of metabolic syndrome in a disadvantaged rural area. The study findings thus contribute to the prevention and control of chronic diseases.\n\n**Question:** Compared to control group what was the result of screened for metabolic syndrome on fibre intake score, fat intake score, fruit intake, and vegetable intake?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13888
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEnvironmental temperature and exercise modality independently impact central and muscle fatigue among people with multiple sclerosis\n\n\n\nABSTRACT.BACKGROUND:\nHeat sensitivity and fatigue limit the ability of multiple sclerosis patients to participate in exercise.\n\nABSTRACT.OBJECTIVE:\nThe purpose of this study was to determine the optimal aerobic exercise parameters (environmental temperature and exercise modality) to limit exercise-induced central and muscle fatigue among people with multiple sclerosis.\n\nABSTRACT.METHODS:\nFourteen people with multiple sclerosis with varying levels of disability completed four randomized exercise sessions at 65% of the maximal volume of oxygen: body-weight supported treadmill cool (16°C), body-weight supported treadmill room (21°C), total-body recumbent stepper cool and total-body recumbent stepper room. Maximum voluntary contraction, electromyography, and evoked contractile properties were collected from the more affected plantar flexors along with subjective levels of fatigue, body temperature and perceived level of exertion.\n\nABSTRACT.RESULTS:\nExercise in cooler room temperature increased maximum voluntary contraction force (p = 0.010) and stabilized body temperature (p = 0.011) compared to standard room temperature. People with multiple sclerosis experienced greater peak twitch torque (p = 0.047), shorter time to peak twitch (p = 0.035) and a longer half relaxation time (p = 0.046) after total-body recumbent stepper suggestive of less muscle fatigue.\n\nABSTRACT.CONCLUSION:\nCooling the exercise environment limits the negative effects of central fatigue during aerobic exercise and using total-body recumbent stepper (work distributed among four limbs) rather than body-weight supported treadmill lessens muscular fatigue. Therapists can titrate these two variables to help people with multiple sclerosis achieve sufficient exercise workloads.\n\nBODY.INTRODUCTION:\nExercise training improves muscular strength, aerobic capacity, walking ability and health-related quality of life among people with multiple sclerosis (PwMS).1 However, due to disability, fatigue and heat sensitivity, most PwMS do not exercise2,3 and so they do not obtain its potential benefits. 
Aerobic fitness, in particular, is associated with less brain atrophy and better cognitive function in multiple sclerosis (MS)4 but at the same time, aerobic exercise increases body temperature which many MS patients find intolerable.5 For example, after leg cycling, MS participants experienced greater increases in body temperature compared to controls which was correlated with prolonged perceived leg fatigue.6 There is a need to find innovative and tailored exercise protocols so that PwMS can achieve optimal exercise workloads.7\nExercise-induced fatigue could be due to impaired motor drive from the central nervous system (CNS) (central fatigue) or reduced capacity from within the muscle.8,9 Research supports the interplay of both mechanisms. Demyelination of motor pathways,10 impaired thermoregulation, and transient increase in symptoms with passive heat exposure5 point to a central origin. However, decreased muscle peak twitch (PT) force output and prolonged half-relaxation time (HRT) during fatiguing electrical stimulation of the tibialis anterior in PwMS11 is suggestive of peripherally induced fatigue within the muscle.11,12\nAerobic exercise is typically performed on a treadmill but can be adapted for PwMS who have balance difficulties by using seated or recumbent methods. For example, Pilutti et al.13 compared the safety and tolerability of the total body recumbent stepper (TBRS) to the body-weight supported treadmill (BWST) and found that 12 weeks of TBRS and BWST training reduced perceived fatigue, although PwMS reported a more positive experience with the TBRS.13 It is conceivable that distributing the workload between four limbs rather than two could lessen excessive lower limb fatigue. Choosing the appropriate exercise modality (seated or upright) and cooling the temperature of the exercise environment are simple methods that therapists and patients can employ in order to titrate aerobic exercise parameters to make training achievable.\nThe primary aim of this study was to determine the acute effects of combining temperature ('cool', 16°C or 'room', 21°C) and exercise modality (BWST or TBRS) on lower extremity central fatigue (maximum voluntary contraction (MVC)), muscle fatigue (evoked contractile properties), body temperature, and perceived levels of fatigue. We hypothesized that: (a) exercising in a cooler temperature would prevent post-exercise decreases in MVC and electromyography (EMG) and (b) due to distributed workload between all four limbs, PwMS would have less perceived fatigue following the TBRS compared with the BWST. In order to understand potential mechanisms, our exploratory aims were to evaluate the relationships between exercise-induced changes in body temperature, subjective fatigue, the extent of clinical disability, and changes in neuromuscular performance.\n\nBODY.METHODS:\nFollowing approval of the Health Research Ethics Board (ref. no. 14.102), participants were recruited from outpatient rehabilitation services and the local MS clinic. 
From these participants, 14 PwMS (10 females) were included; they all met the following inclusion criteria: (a) diagnosis using the McDonald criteria;14 (b) a negative Physical Activity Readiness Questionnaire (PAR-Q)15 screening; (c) being relapse-free during the past three months; (d) taking no medication that affects heart response to exercise; (e) having no musculoskeletal impediment to exercise; and (f) scoring greater than 24 on the Montreal Cognitive Assessment (MoCA).16\nBODY.EXPERIMENTAL DESIGN:\nDuring the first session, demographic and MS-related data were collected (age, years since diagnosis, Fatigue Impact Scale (FIS)).17 PwMS self-reported their degree of heat sensitivity using a visual analogue scale (VAS); zero being not at all sensitive and 100 being extremely sensitive. Lower limb strength was tested using manual muscle testing, spasticity using the Modified Ashworth Scale and self-selected walking velocity using an instrumented walkway (Protokinetics, Havertown, Pennsylvania, USA).\nBWST and TBRS graded maximal exercise tests (GXTs) were performed (Figure 1(a)) and maximal volume of oxygen (V̇O2max) was used to assign 60–65% of heart rate (HR) reserve for 30 min of aerobic exercise (including a five-minute warm-up and cool down). The participants completed four randomized exercise sessions, one week apart either on TBRS or BWST in a temperature controlled room at 16°C (cool) or 21°C (room); BWSTcool, BWSTroom, TBRScool, TBRSroom (Figure 1(a)). Rate of perceived exertion (RPE; Borg Scale (10-point)) and HR were recorded at intervals 1, 15, and 30 min during exercise. Prior to each exercise session, participants were prepared for EMG (Figure 1(b)) and performed isometric contractions of the plantar flexors at various low intensities as a warm-up. Before and after exercise, participants performed two MVCs, received posterior tibial nerve stimulation, rated their perceived fatigue on a 100 mm VAS, and had body temperature recorded using a tympanic thermometer (Thermoscan, Braun, Kronberg, Germany).\nFigure 1.(a) Experimental design. Graded maximal exercise tests (GXTs) were performed one week apart. Sessions 3–6 were performed in randomized order one week apart; body-weight support treadmill at room 21°C (BWSTroom); BWST at 16°C (BWSTcool); total-body recumbent stepper at 21°C (TBRSroom); TBRS at 16°C (TBRScool). (b) Intervention session protocol: before and after exercise, participants performed two maximal voluntary contractions (MVCs) and received posterior tibial nerve electrical stimulation to measure plantar flexor twitch contractile properties from lateral gastrocnemius (LG) and soleus (SOL) muscles. VO2max: maximal volume of oxygen.\n\nBODY.MAXIMAL EXERCISE TESTS:\nV̇O2max test was performed on both the BWST using 10% body weight support (Sport Art T625M/T52 MD-Rehabilitation Commercial Treadmill, USA) and the TBRS (NuSTEP T4r Recumbent stepper, Michigan, USA), according to guidelines adapted from stroke best practices.18 Height, weight, age, resting HR, and blood pressure were taken prior to the GXT. The metabolic cart (Moxus Metabolic Systems, AEI Technologies, Inc., Pittsburgh, Pennsylvania, USA) was calibrated and expired air was analyzed breath-by-breath using a mask (HR: Polar V800, Polar Electro Oy, Professorintie 5, FI-90440 Kempele, Finland).\n\nBODY.MEASUREMENT OF PLANTAR FLEXOR MUSCLE FORCE AND ELECTROMYOGRAPHY:\nParticipants sat with their weaker leg flexed (i.e. 
90° at the hip, knee, and ankle joints) and foot mounted in a modified boot apparatus equipped with a load cell, and performed two five-second MVCs. Participants were told to push into plantar flexion as hard as possible for five seconds. Force was sampled at 1 kHz, amplified (×1000) and averaged. EMG was recorded via surface 10 mm electrodes (MediTrace Pellet Ag/AgCl, Graphic Controls Ltd., Buffalo, New York, USA) placed longitudinally 2 cm apart over the lateral gastrocnemius (LG) and soleus (SOL) muscles of the weaker leg (determined by manual muscle test) and a ground electrode was secured on the lateral epicondyle of the femur. Signals were amplified (1000×), filtered using a Butterworth filter with a pass-band of 10–500 Hz and analog-digitally converted at a sampling rate of 1000 Hz (Biopac MP150WSW, Biopac Systems Inc., Holliston, Mississippi, USA). Data were recorded and analyzed (Acknowledge 4.1, Biopac Systems Inc.). MVC forces were measured as the peak force recorded during each contraction. For the LG and SOL muscles, root mean square (RMS) EMG was calculated over 500 ms about the peak force amplitude during the MVC for each muscle.\n\nBODY.ASSESSMENT OF EVOKED CONTRACTILE PROPERTIES:\nTo assess twitch contractile properties in LG and SOL muscles; electrical stimulation was applied to the posterior tibial nerve via Ag/AgCl electrodes placed in the popliteal fossa (cathode) and over the tibial tuberosity (anode). Current pulses (200 µs duration, 100–400 mA) were delivered via a constant current stimulator (DS7AH; Digitimer, Welwyn Garden City, Hertfordshire, UK). Stimulation intensity was increased until PT torque of LG and SOL plateaued. Evoked contractile properties included: (a) PT torque, (b) time to peak twitch (TPT), and (c) HRT – the time it took for the PT torque to reduce to half of its maximum amplitude. Lower PT and longer TPT are indicative of fatigue and lower efficiency of fiber cross-bridge cycling, respectively.19\n\nBODY.STATISTICAL ANALYSIS:\nTo determine whether the exercise conditions were at the equivalent aerobic intensity, repeated measures analysis of variance (ANOVA) for exercise condition (TBRS, BWST) and temperature (cool, room) were performed on HR, power output, and RPE averaged across three time points (1 min, 15 min, 30 min) during the 30 min of aerobic exercise. A repeated measures ANOVA for exercise condition (TBRS, BWST) by temperature (cool, room) was also performed on change scores (CSs) (post minus pre; MVCcs, RMS EMGcs, PTcs, TPTcs, HRTcs, fatiguecs, body tempcs,) and on perceived level of fatigue.\nBased on evidence that body temperature relates to changes in central drive in PwMS following exercise,20 correlations were performed between post-exercise MVC torque, degree of heat sensitivity and change in body temperature for each exercise modality (BWST; TBRS) in the room temperature condition. In exploratory analyses of MS biomarkers, the relationship between disability (Expanded Disability Status Scale (EDSS)) and variables at baseline (i.e. the average of pre-MVC, RMS EMG, PT, TPT, HRT, and perceived fatigue) was tested using Pearson correlations (Pearsons’s r: p < 0.05, uncorrected).\nIf significant main interactions were found (F-ratio: p < 0.05), a Bonferroni post-hoc test was performed. If the assumption of sphericity was violated, the corrected value for non-sphericity with Greenhouse-Geisser epsilon was reported. 
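Before the statistical description continues below, here is a minimal sketch of the 2 x 2 within-subject ANOVA on change scores described in this section (exercise modality x environmental temperature), using statsmodels' AnovaRM. It is illustrative only: the authors report using SPSS, the change scores below are synthetic, and the column names are assumptions; Bonferroni post-hoc tests and the Greenhouse-Geisser correction mentioned above would be applied separately.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = np.arange(11)  # the 11 participants with complete data
conditions = [("BWST", "cool"), ("BWST", "room"), ("TBRS", "cool"), ("TBRS", "room")]

# Illustrative change scores (post minus pre) for MVC torque, one row per
# participant per condition; real values would come from the study database.
rows = [{"pid": s, "modality": m, "temperature": t, "mvc_cs": rng.normal(0.0, 10.0)}
        for s in subjects for m, t in conditions]
df = pd.DataFrame(rows)

# 2 (modality) x 2 (temperature) repeated measures ANOVA on the change score.
res = AnovaRM(df, depvar="mvc_cs", subject="pid",
              within=["modality", "temperature"]).fit()
print(res.anova_table)  # F and p values for the two main effects and the interaction
```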
Effect sizes were reported as partial eta-squared (np2) where 0.02 is considered a small effect, 0.13 moderate and more than 0.26, a large effect.21 Data are reported as means and standard errors (SEs) (SPSS 19.0, IBM Corporation, Armonk, New York, USA).

BODY.RESULTS:
Three of the 14 individuals were unable to complete the BWST protocol, and therefore were excluded from repeated measures statistical analyses (n = 11). All participants reported heat sensitivity (Figure 2) and 11 had lower extremity weakness (Table 1). Participant characteristics are reported in Table 1 and raw data for voluntary and evoked contractile properties are presented in Table 2.
Figure 2. Ratings of heat sensitivity on the heat sensitivity visual analog scale (VAS; score out of 100). Each circle is a participant. Dashed line is the mean and horizontal error bar is the standard deviation (SD).

Table 1. Participants’ clinical characteristics.

| Participant | EDSS | Age | Sex | Type of MS | FIS | LE summed strength | LE spasticity | Use of gait aid | Walking speed (cm/s) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2.0 | 31 | Male | RRMS | 16 | 80 | – | – | 128.19 |
| 2 | 1.5 | 38 | Male | RRMS | 41 | 61 | – | + | 79.15 |
| 3 | 4 | 56 | Female | RRMS | 20 | 74 | – | – | 97.29 |
| 4 | 3.5 | 52 | Male | RRMS | 49 | 61 | – | – | 84.37 |
| 5 | 1.5 | 65 | Female | RRMS | 3 | 74 | – | + | 110.86 |
| 6 | 7 | 55 | Female | SPMS | 41 | 47 | + | + | 32.16 |
| 7 | 1.5 | 43 | Female | RRMS | 12 | 80 | – | – | 119.75 |
| 8 | 6.5 | 71 | Female | SPMS | 30 | 50 | + | + | 16.45 |
| 9 | 1 | 28 | Female | RRMS | 2 | 78 | – | – | 123.54 |
| 10 | 6.5 | 59 | Female | PPMS | 26 | 57 | – | + | 46.06 |
| 11 | 0 | 29 | Female | RRMS | 0 | 80 | – | – | 76.50 |
| 12 | 2 | 60 | Male | RRMS | 41 | 80 | – | – | 119.22 |
| 13 | 0 | 53 | Female | RRMS | 31 | 61 | – | – | 143.20 |
| 14 | 1 | 50 | Female | RRMS | 36 | 66 | – | – | 124.13 |

EDSS: Expanded Disability Status Scale; FIS: Fatigue Impact Scale; LE: lower extremity; MS: multiple sclerosis; PPMS: primary progressive MS; RRMS: relapsing–remitting MS; SPMS: secondary progressive MS.
Strength/80: summed manual muscle tests of major muscles in both lower limbs.
Present (+), absent (–); walking speed, self-selected pace (cm/s).

Table 2. Raw data for voluntary and evoked contractile properties.

| Measure | Exercise | Temp | Pre- (n = 11) | Post- (n = 11) |
|---|---|---|---|---|
| MVC (Nm) | BWST | Cool | 89.81 (± 62.41) | 100.3 (± 66.83) |
| MVC (Nm) | TBRS | Cool | 95.18 (± 53.19) | 108.9 (± 61.89) |
| MVC (Nm) | BWST | Room | 112.3 (± 68.66) | 101.1 (± 55.72) |
| MVC (Nm) | TBRS | Room | 98.90 (± 62.72) | 93.68 (± 64.79) |
| LG EMG (RMS) | BWST | Cool | 0.063 (± 0.0359) | 0.065 (± 0.0402) |
| LG EMG (RMS) | TBRS | Cool | 0.062 (± 0.0334) | 0.072 (± 0.0488) |
| LG EMG (RMS) | BWST | Room | 0.076 (± 0.0446) | 0.062 (± 0.0404) |
| LG EMG (RMS) | TBRS | Room | 0.082 (± 0.0582) | 0.072 (± 0.0593) |
| HRT (ms) | BWST | Cool | 121.2 (±25.96) | 110.3 (±21.22) |
| HRT (ms) | TBRS | Cool | 117.9 (±22.37) | 104.6 (±27.80) |
| HRT (ms) | BWST | Room | 132.8 (±34.50) | 108.1 (±28.23) |
| HRT (ms) | TBRS | Room | 107.4 (±26.94) | 108.9 (±24.20) |
| PT (Nm) | BWST | Cool | 15.09 (± 9.644) | 13.76 (± 7.101) |
| PT (Nm) | TBRS | Cool | 14.09 (± 6.245) | 16.76 (± 10.12) |
| PT (Nm) | BWST | Room | 15.07 (± 6.702) | 14.99 (± 8.750) |
| PT (Nm) | TBRS | Room | 14.46 (± 6.835) | 16.75 (± 9.719) |
| TPT (ms) | BWST | Cool | 0.15 (± 0.0213) | 0.14 (± 0.0196) |
| TPT (ms) | TBRS | Cool | 0.16 (± 0.0255) | 0.15 (± 0.0347) |
| TPT (ms) | BWST | Room | 0.13 (± 0.0504) | 0.14 (± 0.0255) |
| TPT (ms) | TBRS | Room | 0.17 (± 0.0229) | 0.14 (± 0.0274) |

Maximal Voluntary Contraction (MVC); Lateral Gastrocnemius (LG); Electromyography (EMG); Newton Meter (Nm); Root Mean Square (RMS); Temperature (Temp); Half Relaxation Time 
(HRT); Peak Twitch (PT); Time to Peak Twitch (TPT); milliseconds (ms); Total body recumbent stepper (TBRS); Body-weight supported treadmill (BWST).\nBODY.THE INTENSITY OF EXERCISE WAS EQUIVALENT BETWEEN EXERCISE MODALITIES:\nThere was no significant main effect of environmental temperature or exercise modality on HRavg or power outputavg, (data not shown). Although there was no significant main effect of environmental temperature on RPEavg (F(1,11) = 1.00, p = 0.34, np2 = 0.091), there was a significant main effect of exercise modality, (F(1,11) = 8.07, p = 0.018, np2 = 0.447) with a higher level of perceived exertion during TBRS (4.54 ± 0.278) compared to BWST (3.97 ± 0.293). There was no significant interaction of temperature and exercise modality (p = 0.17).\n\nBODY.MAXIMAL PLANTAR FLEXOR ACTIVATION WAS TEMPERATURE-DEPENDENT:\nThere was a significant main effect of environmental temperature on MVCcs (Figure 3; F(1,11) = 9.86, p = 0.010, np2 = 0.497). MVC torque increased post-exercise in cool temperature (12.13 ± 4.54 Nm) and decreased in room temperature (–8.20 ± 5.13 Nm). There was a near main effect of temperature (F(1,11) = 6.73, p = 0.057, np2 = 0.254) but no effect of exercise modality on LG EMGcs. LG EMG remained the same post-exercise in cool temperature (0.002 ± 0.003 mV) but decreased in room temperature (–0.01 ± 0.004 mV).\nFigure 3.Temperature effects on maximal voluntary contraction (MVC). Change in MVC torque in room (black) and cool (black/white) collapsed across exercise modalities. *Indicates a significant (p<0.05) main effect of temperature condition.\nThere was no effect of exercise modality on MVCcs, LG EMG or SOL EMG and no significant interaction between temperature and exercise modality. There was also no effect of temperature on SOL EMGCS (data not shown).\n\nBODY.EXERCISE MODALITY AFFECTED EVOKED CONTRACTILE PROPERTIES:\nFollowing BWST there was evidence of plantar flexor fatigue. There was a significant main effect of exercise modality on plantar flexor PTcs, (Figure 4; F(1,11) = 5.11, p = 0.047, np2 = 0.338) with decreased PT amplitude following BWST (–0.71 ± 0.982 Nm) and an increase following TBRS (2.48 ± 1.173 Nm). There was also a significant main effect of exercise modality on TPTcs (F(1,11) = 5.92, p = 0.035, np2 = 0.372) with prolonged TPT following BWST (0.004 ± 0.009 ms) and shortened TPT following TBRS (–0.019 ± 0.007 ms). Further, there was a significant main effect of exercise modality on HRTcs, (F(1,11) = 5.21, p = 0.046, np2 = 0.343) with shortened HRT following exercise on a BWST (–17.80 ± 2.845 ms) compared to exercise on TBRS (–5.89 ± 4.487 ms).\nFigure 4. Exercise modality effects on evoked contractile properties. Change in peak twitch (PT) in body-weight support treadmill (BWST) (black/white circle) and total-body recumbent stepper (TBRS) (black circle) collapsed across temperature conditions. *Indicates a significant (p<0.05) main effect of exercise modality.\nThere was no effect of temperature on PTcs, TPTcs or HRTcs, and no significant interaction between temperature and exercise modality (data not shown).\n\nBODY.EFFECTS OF EXERCISE MODALITY ON PERCEIVED LEVEL OF FATIGUE AND BODY TEMPERATURE:\nThere were significant main effects of exercise modality and environmental temperature on body tempcs (Table 3; F(1,9) = 7.61, p = 0.022, np2 = 0.458, F(1,9) = 10.08, p = 0.011, np2 = 0.528, respectively). 
Greater increases in body temperature were observed following exercise in room temperature (0.33 ± 0.104°C) compared to cool temperature (–0.15 ± 0.123°C). There was a greater increase in body temperature following TBRS (0.29 ± 0.094°C) compared to BWST (0.025 ± 0.126°C). There was no significant main effect of exercise modality or environmental temperature on perceived level of fatiguecs and no significant interaction of exercise modality and temperature (data not shown).

Table 3. Raw data for change in body temperature (data are reported as mean (M) ± standard deviation (SD)).

| Measure | Exercise | Temp | Pre- (n = 11) | Post- (n = 11) |
|---|---|---|---|---|
| Body temp (°C) | BWST | Cool | 36.65 (±0.454) | 36.45 (±0.780) |
| Body temp (°C) | TBRS | Cool | 36.65 (±0.418) | 36.74 (±0.559) |
| Body temp (°C) | BWST | Room | 36.49 (±0.287) | 36.73 (±0.402) |
| Body temp (°C) | TBRS | Room | 36.35 (±0.462) | 36.78 (±0.524) |

BWST: Body-weight supported treadmill; TBRS: Total body recumbent stepper; Temp: temperature.

BODY.RELATIONSHIP BETWEEN BODY TEMPERATURE, DEGREE OF HEAT SENSITIVITY AND NEUROMUSCULAR PERFORMANCE:
Since there was little change in body temperature in the cool condition, changes in body temperature in the room temperature conditions were compared to the corresponding post-exercise MVC torque. Higher body temperature was correlated with declines in post-exercise MVC torque in BWST (Figure 5; r = 0.65, p = 0.028), with a nearly significant negative relationship in TBRS (r = 0.58, p = 0.075). There was no relationship between body temperature and evoked contractile properties. Neither was there a relationship between perceived fatigue, degree of heat sensitivity, MVC, EMG, or evoked contractile properties.
Figure 5. Relationship between body temperature and post-exercise maximal voluntary contraction (MVC). Following body-weight support treadmill at room 21°C (BWSTroom), lower MVC torque was significantly correlated with increases in body temperature. *Indicates a significant (p < 0.05) correlation.

BODY.RELATIONSHIP BETWEEN MS-RELATED DISABILITY AND NEUROMUSCULAR PERFORMANCE:
As expected, higher EDSS scores were significantly correlated with lower MVC (r = –0.54, p = 0.045), lower LG (r = 0.62, p = 0.022) and SOL RMS EMG (r = –0.66, p = 0.013), and greater baseline perceived fatigue (r = 0.67, p = 0.012).

BODY.DISCUSSION:
The main findings of the current study were (a) with large effect sizes, a cooler environment (16°C) limited elevation of body temperature, increased plantar flexor MVC torque and protected against decrements in EMG after exercise, suggesting an effect of temperature on CNS drive and (b) exercising on a recumbent stepper using all four limbs enhanced plantar flexor contractile properties as compared to treadmill training, suggesting an effect of exercise modality on the excitation-contraction coupling of the muscle. These findings occurred despite the fact that the exercise interventions were matched for workload and participants perceived the exercise on the TBRS as more strenuous than the BWST. 
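As a brief aside before the discussion continues, the Pearson correlations reported in the two results subsections above (for example, change in body temperature versus post-exercise MVC torque in the BWSTroom condition) can be computed with scipy as sketched below; the eleven values used here are invented placeholders, not study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder vectors for the 11 participants: change in tympanic temperature (°C)
# and post-exercise MVC torque (Nm) after the room-temperature BWST session.
delta_temp = np.array([0.1, 0.2, 0.3, 0.4, 0.2, 0.5, 0.3, 0.1, 0.6, 0.4, 0.2])
post_mvc = np.array([110.0, 105.0, 98.0, 90.0, 104.0, 85.0, 95.0, 112.0, 80.0, 92.0, 103.0])

r, p = pearsonr(delta_temp, post_mvc)
print(f"r = {r:.2f}, p = {p:.3f}")
```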
Finally, the neuromuscular performance measures, specifically MVC and RMS EMG of the lower limb, correlated with clinical disability (EDSS), strengthening their usefulness as biomarkers of subtle MS-related sensorimotor change.\nBODY.ENVIRONMENTAL TEMPERATURE AND CENTRAL DRIVE:\nIn the present study, exercising in room temperature (raising body temperature) led to decreased MVC torque and LG EMG, with no concomitant changes in evoked contractile properties post-exercise, pointing to CNS mechanisms underlying these changes. Other studies support such a theory. When PwMS were exposed to passive body heating, walking, force generation capacity, and increased fatigue perception were affected.22 Furthermore, passively-heated PwMS had decreased corticospinal excitability as shown via increased resting motor threshold and decreased motor evoked amplitude.22 Increased body temperature may also alter action potential propagation in demyelinated or partially myelinated axons causing reduced conduction velocity and/or block.5 In the present study, exercising in cool temperature enhanced MVC torque with no change in LG EMG. The cool temperature may have alleviated heat-induced stress on the CNS, allowing for improved MVC performance.\nExercising in warm or cool environments, and rate of heat storage in the body, have been postulated to differentially activate an anticipatory response that regulates skeletal muscle power output and motor unit recruitment through afferent sensory inputs to the brain.23 Other studies have shown that, in PwMS, water immersion precooling before arm-leg ergometry exercise stabilized core temperature and improved 25-foot walk performance24 and that the use of cooling suits improved walking speed and lower-limb strength.25 Decreasing body heat storage, especially during exercise, may allow for improved CNS function and enhanced neuromuscular performance. Since comfort is likely important for exercise compliance, the longer-term effects (over hours and days) of temperature and fatigue on neuromuscular measures and functional tasks is an important area for future study.\n\nBODY.EXERCISE MODALITY:\nBased on our findings that BWST resulted in lower PT force and longer TPT, the BWST may create impaired excitation-contraction coupling of the plantar flexor muscles, indicating greater muscle fatigue, compared to the TBRS. Conversely, the TBRS facilitated the excitation-contraction coupling of the plantar flexor muscles demonstrated by increased PT force and shorter TPT. The enhancement of plantar flexor muscle contractile properties following the TBRS may be related to exercise-induced post-activation potentiation (PAP); enhanced calcium kinetics, myosin phosphorylation, and reduced muscle stiffness after non-fatiguing contractions.26 Biomechanically, the distributed workload between the upper and lower limb muscles required to perform the TBRS at a comparable physiological workload as submaximal BWST exercise, that predominately engages muscles of the lower limb, may help to explain our findings. For those with fatigue and heat sensitivity, choosing the TBRS in a cooler environment may allow patients to reach optimal aerobic exercise training levels.\n\nBODY.PERCEPTIONS OF FATIGUE:\nIn a clinical setting, the patient’s subjective report of heat sensitivity and fatigue during exercise may be used when titrating aerobic training intensity. 
However, in this study, perceived fatigue following exercise was not related to objective measures of central or muscular fatigue, illustrating a disconnect between psychological vs physiological aspects of fatigue in PwMS following exercise. This disconnect suggests that subjective reports may not be sensitive enough to gauge heat-related fatigue. Dawes et al.20 showed that PwMS reporting general fatigue could train at comparable levels to those without fatigue suggesting that clinicians, in addition to obtaining subjective reports, should focus on objective markers of exercise intensity such as HR to gauge exercise parameters. Our findings suggest that, at least within a training session, the TBRS may be more tolerable especially for those with higher levels of MS-related disability. However, due to the importance of the exercise-induced stress response that is incurred from challenging exercise,27 transition to the BWST may be warranted as training progresses.\n\nBODY.DETECTING SUBCLINICAL CHANGES IN NEUROMUSCULAR PERFORMANCE:\nWe found not only that there was an inverse relationship between EDSS score and MVC force and EMG of the plantar flexor muscles, but also the neuromuscular measures detected subtle deficits sometimes unbeknownst to the participant. Others have demonstrated that MVC, central activation,28 and evoked contractile properties10 are decreased in PwMS compared to controls and there is a negative correlation between EDSS and measures of corticospinal excitability.29 In a disease where most of the degeneration is subclinical and where lesions can outnumber clinical relapses 10 to one,30 there are opportunities to incorporate neuromuscular performance outcomes as potential biomarkers of MS disease progression.\n\nBODY.STUDY LIMITATIONS:\nThis study makes important strides in developing tailored aerobic exercise interventions for PwMS. However, there were some limitations. Even though effect sizes were moderate to large, the sample size was small and because of inability to complete the BWST, participants with EDSS >6 were underrepresented.\nBody temperature was measured using the tympanic method. The most sensitive methods to measure thermoregulation and changes in core temperature are invasive, and are recorded using rectal and esophageal techniques. However, the aim of the present study was to use methods that could be easily incorporated into clinical practice and inform best practice guidelines for clinical exercise prescription.\n\n\nBODY.CONCLUSIONS:\nCooling the temperature of the training environment lessens central fatigue while choosing a modality that distributes workload among four limbs (TBRS) rather than two (BWST) limits muscular fatigue. Therapists can modify these parameters in order to achieve optimal aerobic exercise training levels among PwMS.\n\n**Question:** Compared to body-weight supported treadmill room (21°C) what was the result of body-weight supported treadmill cool (16°C) on maximum voluntary contraction force; stabilized body temperature?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
13891
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: \nEffect of a Community-Based Nursing Intervention on Mortality in Chronically Ill Older Adults: A Randomized Controlled TrialCommunity Nursing and Mortality in Chronically Ill\n\n\n\nKenneth Coburn and colleagues report findings from a randomized trial evaluating the effects of a complex nursing intervention on mortality risk among older individuals diagnosed with chronic health conditions.\nABSTRACT.BACKGROUND:\nImproving the health of chronically ill older adults is a major challenge facing modern health care systems. A community-based nursing intervention developed by Health Quality Partners (HQP) was one of 15 different models of care coordination tested in randomized controlled trials within the Medicare Coordinated Care Demonstration (MCCD), a national US study. Evaluation of the HQP program began in 2002. The study reported here was designed to evaluate the survival impact of the HQP program versus usual care up to five years post-enrollment.\n\nABSTRACT.METHODS AND FINDINGS:\nHQP enrolled 1,736 adults aged 65 and over, with one or more eligible chronic conditions (coronary artery disease, heart failure, diabetes, asthma, hypertension, or hyperlipidemia) during the first six years of the study. The intervention group (n = 873) was offered a comprehensive, integrated, and tightly managed system of care coordination, disease management, and preventive services provided by community-based nurse care managers working collaboratively with primary care providers. The control group (n = 863) received usual care. Overall, a 25% lower relative risk of death (hazard ratio [HR] 0.75 [95% CI 0.57–1.00], p = 0.047) was observed among intervention participants with 86 (9.9%) deaths in the intervention group and 111 (12.9%) deaths in the control group during a mean follow-up of 4.2 years. When covariates for sex, age group, primary diagnosis, perceived health, number of medications taken, hospital stays in the past 6 months, and tobacco use were included, the adjusted HR was 0.73 (95% CI 0.55–0.98, p = 0.033). Subgroup analyses did not demonstrate statistically significant interaction effects for any subgroup. No suspected program-related adverse events were identified.\n\nABSTRACT.CONCLUSIONS:\nThe HQP model of community-based nurse care management appeared to reduce all-cause mortality in chronically ill older adults. Limitations of the study are that few low-income and non-white individuals were enrolled and implementation was in a single geographic region of the US. 
Additional research to confirm these findings and determine the model's scalability and generalizability is warranted.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicalTrials.gov NCT01071967\n\n\n\n\nPlease see later in the article for the Editors' Summary\n\n\nABSTRACT.EDITORS' SUMMARY:\nABSTRACT.BACKGROUND:\nIn almost every country in the world, the proportion of people aged over 60 years is growing faster than any other age group because of increased life expectancy. This demographic change has several implications for public health, especially as older age is a risk factor for many chronic diseases—diseases of long duration and generally slow progression. Chronic diseases, such as heart disease, stroke, cancer, chronic respiratory diseases, and diabetes, are by far the leading cause of death in the world, representing almost two-thirds of all deaths. Therefore in most countries, the challenge of managing increasingly ageing populations who have chronic illnesses demands an urgent response and countries such as the United States are actively researching possible solutions.\n\nABSTRACT.WHY WAS THIS STUDY DONE?:\nSome studies suggest that innovations in chronic disease management that are led by nurses may help address the epidemic of chronic diseases by increasing the quality and reducing the cost of care. However, to date, reports of the evaluation of such interventions lack rigor and do not provide evidence of improved long-term health outcomes or reduced health care costs. So in this study, the researchers used the gold standard of research, a randomized controlled trial, to examine the impact of a community-based nurse care management model for older adults with chronic illnesses in the United States as part of a series of studies supported by the Centers for Medicare and Medicaid Services.\n\nABSTRACT.WHAT DID THE RESEARCHERS DO AND FIND?:\nThe researchers recruited eligible patients aged 65 years and over with heart failure, coronary heart disease, asthma, diabetes, hypertension, and/or hyperlipidemia who received traditional Medicare—a fee for service insurance scheme in which beneficiaries can choose to receive their care from any Medicare provider—from participating primary care practices in Pennsylvania. The researchers then categorized patients according to their risk on the basis of several factors including the number of chronic diseases each individual had before randomizing patients to receive usual care or the nurse-led intervention. The intervention included an individualized plan comprising education, symptom monitoring, medication, counseling for adherence, help identifying, arranging, and monitoring community health and social service referrals in addition to group interventions such as weight loss maintenance and exercise classes. The researchers checked whether any participating patients had died by using the online Social Security Death Master File. Then the researchers used a statistical model to calculate the risk of death in both groups.\nOf the 1,736 patients the researchers recruited into the trial, 873 were randomized to receive the intervention and 863 were in the control group (usual care). The researchers found that 86 (9.9%) participants in the intervention group and 111 (12.9%) participants in the control group died during the study period, representing a 25% lower relative risk of death among the intervention group. 
However, when the researchers considered other factors, such as sex, age group, primary diagnosis, perceived health, number of medications taken, hospital stays in the past 6 months, and tobacco use in their statistical model, this risk was slightly altered, to 0.73, for the intervention group.

ABSTRACT.WHAT DO THESE FINDINGS MEAN?:
These findings suggest that community-based nurse care management is associated with a reduction in all-cause mortality among older adults with chronic illnesses who are beneficiaries of the fee for service Medicare scheme in the United States. These findings also support the important role of nurses in improving health outcomes in this group of patients and show the feasibility of implementing this program in collaboration with primary care practices. Future research is needed to test the adaptability, scalability, and generalizability of this model of care.

ABSTRACT.ADDITIONAL INFORMATION:
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001265.
This study is further discussed in a PLoS Medicine Perspective by Arlene Bierman
Information about the Centers for Medicare and Medicaid Services is available
The World Health Organization provides statistics on the prevalence of both chronic illness and ageing
Health Quality Partners provide information about the study

BODY.INTRODUCTION:
Chronically ill older adults have complex patterns of health care, frequent hospital readmissions, often receive poor or inconsistent quality of care, and account for the majority of health care expenditures in the United States [1]–[6]. Long appreciated as the dominant disease burden in upper-income countries, noncommunicable chronic disease is now recognized as a major global health problem [7],[8].
Several leading organizations and experts argue that care coordination emphasizing wellness, prevention, and chronic disease management is a promising means to increase the quality and perhaps reduce the costs of care for chronic illness in the elderly [9]–[11]. Broader testing and use of chronic disease management interventions in several countries have resulted in reports describing the challenges associated with such efforts, but very few have provided evidence of improved long-term health outcomes or reduced health care expenditures [12]–[22]. Positive reports that have been published have often come from non-experimental evaluations of clinic or practice-based interventions targeting a single chronic disease [21],[22]. Some researchers believe that innovations in nursing-led chronic disease management may help address chronic disease in areas of the world with less abundant health care resources, such as sub-Saharan Africa [23],[24].
Based on research published to date, there is uncertainty about whether nurse care management programs have the potential to improve the long-term health outcomes of chronically ill older adults. In this study we report outcomes of a longitudinal community-based nurse care management model on all-cause mortality, using a randomized controlled design.
Support for this research came from the Medicare Coordinated Care Demonstration (MCCD), a national study in the United States administered by the Centers for Medicare and Medicaid Services (CMS), which sponsored 15 unique longitudinal, prospective, randomized, controlled trials [25]. 
Since 2002, the MCCD has independently tested these different, competitively selected, care coordination programs in an attempt to identify specific models that lower health care costs and improve quality among US Medicare beneficiaries (fee for service coverage) with chronic conditions.\nDescriptions of the varied programs that participated in the MCCD have been published elsewhere by Mathematica Policy Research, Inc. (MPR), the contracted evaluator for the demonstration [26],[27]. The programs selected to participate in the MCCD varied in terms of the number and types of chronic conditions they targeted. Six programs targeted a single condition, three enrolled patients using criteria other than specific diagnoses, and six targeted multiple conditions with or without additional targeting criteria. The most common primary diagnoses of patients enrolled across all programs were heart failure, coronary heart disease, and diabetes. The 15 organizations that implemented programs in the MCCD were also diverse and included four commercial disease management vendors, three hospitals, three academic medical centers, one integrated delivery system, one hospice program, one long-term care facility, one retirement community, and one health care quality research and development organization.\nThe interventions offered by the programs in the MCCD varied, though all programs used care coordinators, which were typically registered nurses (only one program used licensed practical nurses). Nearly all programs educated patients in order to improve medication adherence, diet, exercise, and self-care. Fourteen programs sought to coordinate care for patients through a variety of mechanisms. Ten programs had timely data on hospitalizations and emergency room visits to support interventions related to the transitions of care. Fourteen programs relied on patients to provide care coordinators with a list of medications they were taking. Four programs focused on increasing physicians' adherence to evidence-based or guideline-based care. In all programs, both intervention and control participants continued to receive traditional Medicare coverage (US federal government supported fee for service payments), with an additional fixed negotiated fee per participant per month paid to the programs for each intervention participant. The impact of the various programs tested in the MCCD on medical expenditures, quality of care, and health service utilization has previously been reported [27]–[29].\nThe program described in the current study was designed and implemented by Health Quality Partners (HQP), (http://www.hqp.org), a not-for-profit health care quality research and development organization, and one of the programs participating in the MCCD. Some findings from the current study were previously included in a report to the US Congress, in which a 25% reduction in all-cause mortality among intervention participants compared to usual care participants was observed for the HQP program [29]. The current study was undertaken to more thoroughly evaluate the program's effect on mortality up to 5 y following enrollment.\n\nBODY.METHODS:\nThe study protocol (Text S1) and CONSORT checklist (Text S2) are provided as supporting information. Though nested within the larger MCCD, this study's design and execution were undertaken by the authors independently of CMS or MPR. 
The national and HQP program-specific evaluation plans designed by MPR for the MCCD have previously been reported [30],[31].\nBODY.PARTICIPANTS:\nAll participants randomized into the HQP program from the start of the MCCD in April 2002 through March 2008 are included in this study. Traditional, fee for service Medicare beneficiaries with Parts A (hospitals, skilled nursing facility, hospice, home health care) and B (physician services, outpatient care, home health services) insurance coverage, residing in eastern Pennsylvania, 65 y of age and older, with heart failure, coronary heart disease, asthma, diabetes, hypertension, or hyperlipidemia, and receiving care at a primary care practice agreeing to work with the HQP program, were eligible to participate in this study. No minimum prior health care utilization or hospitalization was required for eligibility. Exclusion criteria included dementia, end-stage renal disease, schizophrenia, active cancer (except skin) in the prior 5 y, life expectancy less than 6 mo, and current or imminent residence in a long-term care facility. Individuals at very low risk for future health complications based on a pre-enrollment assessment were also excluded from the study. In September 2006, a protocol change made a pre-enrollment assessment of low risk an additional exclusion criterion, because interim evaluations indicated that control group participants in this stratum were not utilizing enough health care services to allow for a sufficient realization of savings in the intervention group to offset program costs.\nIn the US, Medicare is provided in two basic forms: (1) fee for service coverage (traditional Medicare), funded and administered by the federal government, and (2) managed care coverage (Medicare Advantage), funded by the federal government, but sold and administered by private health plans. In areas of the country where insurers offer Medicare Advantage plans, Medicare beneficiaries can choose between these two types of coverage. Medicare Advantage plan members often have financial incentives to use providers within networks recognized by the health plan, and such plans may provide various forms of care coordination or chronic disease management services. By contrast, traditional Medicare beneficiaries can choose to receive their care from any participating Medicare provider and can switch providers at any time without financial penalties. Traditional Medicare, to date, lacks significant care coordination or chronic disease management benefits. All the participants in the current study were beneficiaries receiving traditional US Medicare.\nPotential study subjects were referred to the study from participating primary care practices. Practices were assisted to utilize administrative billing data to identify Medicare beneficiaries that might be eligible for the study based on ICD9 diagnosis codes, age, and insurance information. Primary care providers reviewed the patient list generated from billing data queries and selected patients to refer to the study. Outreach to potentially eligible patients was undertaken by HQP by way of a mailed letter of introduction and follow-up phone calls inviting referred patients to learn more about the study.\n\nBODY.DEVELOPING A NETWORK OF PARTICIPATING PRIMARY CARE PRACTICES:\nA network of primary care practices was developed by meeting with and describing the HQP program and the MCCD to hospitals, physician-hospital organizations, independent physicians associations, and individual practices. 
The basic requirements of practices agreeing to participate include: (1) responding to communications about their patients initiated by the nurse care managers on an as needed basis, (2) making the office medical records available to the nurse care managers and chart auditors, and (3) assisting in case-finding potentially eligible individuals on their patient panels, using billing system reports or extracts, or other mutually agreed to processes. The program was designed and promoted as easy to use and free of burdens related to: paperwork, recurring authorizations or pre-certifications, routine case reviews, or administrative tasks.\nPractices were encouraged to “test drive” the program by initially referring a small number or select set of patients meeting eligibility criteria. Offices were not required to sign a contract or commit to a minimum length of participation and there were no financial transactions involved. It was explained to offices that by virtue of the randomization process roughly half of their referred and randomized patients would be assigned to the control (usual care) group and half to the intervention group; underscoring that half of all patients from their practice that participated would not receive any extra services. Business Associate agreements committing HQP to safeguard the privacy and confidentiality of the personal health information provided by the practices were executed.\nDuring the time period of this study, 93 primary care practices in and around the 4,662-km2, four-county service area of eastern Pennsylvania (Bucks, Montgomery, Lehigh, and Northampton) agreed to participate. Patients of these practices received most of their acute care services from seven hospitals owned by six different health systems. Most practices solicited (greater than 80%) agreed to participate except for those affiliated with two hospital-owned, multi-practice networks (one operating as a Preferred Provider Organization) that declined to participate, citing their desire to: (1) implement and manage their own care coordination programs to enhance their ability to negotiate with health plans, and (2) maintain more direct control over such programs.\nParticipating practices varied widely in terms of size (most had four or fewer primary care providers), use of electronic records, and organizational affiliation (most were independent). In the past few years, an increasing number of practices have implemented some form of the patient-centered medical home (PCMH); designed to support primary care physicians to improve the proactive coordination and tracking of patient care, typically involving the use of information systems, disease registries, and care team models. There have been no observed barriers, operational difficulties, or decreased interest in collaborating with the HQP program as the result of offices adopting the PCMH.\n\nBODY.ETHICS:\nCMS administered the overall conduct of the MCCD. As previously reported, “The Secretary of Health and Human Services, acting through the CMS, determined that the overall demonstration and evaluation met all criteria in both the Common Rule and National Institutes of Health's Exemption Number 5 for exemption from institutional review board review for research and demonstration projects on public benefit and service programs.” [27] (page 604). All participants provided written informed consent prior to study enrollment. 
HQP separately sought and received approval of the Institutional Review Board of Doylestown Hospital (Doylestown, Pennsylvania, US) for the present study.\n\nBODY.CLASSIFICATION PRIOR TO RANDOMIZATION:\nAfter providing consent, but prior to study randomization, each participant was classified using two different schema: primary diagnosis and risk stratum. The nurse care management supervisor made the determination of the primary diagnosis. For participants with only one of the chronic health conditions required for study eligibility, that condition was considered the primary diagnosis. For participants having more than one qualifying diagnosis, the condition judged most likely to precipitate a future hospitalization, on the basis of the participant's clinical measures, self-management skills, disease-specific symptoms, and hospital utilization in the prior 6 mo, was chosen as the primary diagnosis.\nEligible participants were also classified into discrete categorical risk strata [31] (page 13). The first step in the algorithm HQP used to determine risk strata, is an assessment of geriatric-related risks using the Sutter Health Questionnaire (used with permission, Cheryl Phillips) [32],[33]. A number of domains are covered in this questionnaire including: self-rated health, number of medications taken, change in weight, falls, health care utilization in prior 6 mo, living arrangement, care giver status, activities of daily living, instrumental activities of daily living, ancillary health care services used, physical activity level/mobility, chronic illnesses, depression, and tobacco use. Individuals scoring at or above a level 3 on the Sutter instrument were defined as the high-risk stratum for this study. Individuals scoring below this breakpoint on the Sutter tool received a second, disease-specific risk assessment developed by HQP, which was used to classify participants into one of three additional risk strata: moderate, low, and very low. Individuals in the very low risk stratum were excluded from study participation from the outset, and those in the low-risk stratum were also excluded beginning in September 2006.\nIn the course of administering the pre-randomization Sutter Health Questionnaire a numeric risk score (total score) was also calculated. This score was used to augment the outcomes analysis in this study by creating risk subgroups according to total score tertiles: lower, middle, and upper, defined by total scores of <15, 15–35, and >35, respectively.\n\nBODY.INTERVENTION:\nParticipants randomized into the control group received the usual care afforded to traditional Medicare beneficiaries and following notification of their study group assignment, had no further contact with HQP. Participants randomized into the intervention group were provided the HQP model of community-based nurse care management. This model was previously described in a report by the MCCD contracted evaluators [31]. The HQP program was developed over several years in multiple care delivery settings and incorporated a broad portfolio of evidence-based preventive and care management interventions delivered longitudinally by nurse care managers in collaboration with local health care and social service providers. A detailed listing of the elements of this intervention is provided as a supplemental table (Text S3). Nurse care managers used a database developed by HQP to track their activities and participant contacts as well as key assessments and clinical data on participants. 
Additional paper-based documentation and assessment tools were organized and maintained in participant chart records. All intervention group participants received additional assessments to identify their physical, functional, cognitive, psychological, behavioral, social, and environmental needs. Participants determined to be in the high-risk stratum, on pre-randomization assessment, received a comprehensive, in-home geriatric assessment involving 15 specified elements, including: physical assessment (HQP), Index of Independence in Activities of Daily Living (Katz), Mini-Mental State Exam (Folstein), Clock Drawing Test (Heinik et al.), Geriatric Depression Screen-Short Form (Sheikh and Yesavage), Nutritional Risk Assessment – Nutrition Screening Initiative (NSI), violence screening (HQP), alcohol abuse screening using the CAGE Questionnaire (Ewing), behavioral and caregiver assessment, home environment safety checklist, Numeric Pain Scale (Jacox), sleep, incontinence, assessment of immunizations and preventive screenings, and psychosocial support needs (HQP).\n\nInterventions typically incorporated into an individualized plan included: education, symptom monitoring, medication reconciliation and counseling for adherence, and help identifying, arranging, and monitoring community health and social service referrals. Group interventions such as curriculum-based education; structured lifestyle and behavior change programs for weight loss; weight loss maintenance; exercise classes for improving strength and increasing physical activity; and a balance and mobility program for fall prevention were also provided directly to participants by the nurse care managers. Nurses collaborated with the participants' primary care physicians and specialists on an as needed basis to help participants achieve target clinical goals and receive appropriate and timely preventive care according to guidelines. Collaboration also allowed early identification of new or worsening conditions or symptoms, and facilitation of timely medical interventions in an effort to prevent disease exacerbation, hospital admissions, and unnecessary use of the emergency department.\nThe nurse care managers were community based and, depending on the size of a practice's patient panel, served patients from multiple primary care practices. Participant encounters consisted of in-person visits, group sessions, and telephone contacts. In-person encounters occurred in the participants' homes, physicians' offices, and other accessible community settings, such as HQP's offices, hospitals, community centers, libraries, and faith-based organizations. Contact frequency was determined by participant need with a minimum standard of a monthly contact. On average, participants received 17.4 total contacts per year during the period included in the current study. More than half of all contacts were made in-person either as one-to-one meetings or as group classes. Individualized intervention plans were continuously updated to match the dynamic needs of participants and their caregivers. Once enrolled into the program, intervention participants received services until they died, moved out of the area, requested disenrollment, had a change in insurance coverage making them ineligible for the demonstration, or were placed in a care environment in which the nurse care manager felt they were unable to significantly add to the effectiveness of care (e.g., hospice placement). 
Once fully trained, each care manager served 85 to 110 participants depending on caseload complexity, geographic distribution, experience, and phase of study recruitment.\nIn 2007, a protocol of intensified follow-up was added for participants transitioning home or to another level of care upon discharge from hospital. The protocol established guidelines by which nurses provided timely coordination and communication with hospital and post-hospital care providers. The goals were to ensure well informed, safe, and expeditious discharge plans, perform timely patient follow-through on discharge instructions, reconcile medications, and identify and address any errors, omissions, or contraindications in order to prevent readmissions and other serious adverse events.\nProgram implementation and reliability were supported by careful nurse selection and recruitment practices, pre-service training, ongoing coaching and supervision, structured protocols, explicit operating procedures, clearly articulated performance standards, and a system of data management and statistical process control analysis and reporting to support organizational decision making. A further description of the management elements of this model is provided as a supplemental table (Text S4). This set of management practices has been described as “core implementation components” [34]. Program improvement efforts were ongoing and continuous and resulted in numerous refinements to the program over the course of its implementation within the MCCD.\n\nBODY.OBJECTIVES:\nThe main objective of this study was to determine whether HQP's model of community-based care management, as implemented in the MCCD, is associated with a reduction in all-cause mortality overall and within subgroups of risk strata and primary diagnoses. Another objective was to determine whether there was an intervention-associated reduction in all-cause mortality within subgroups defined by tertiles of a numeric risk score obtained on intake assessment using the Sutter Health Questionnaire. The main reasons to explore treatment effect within these subgroups included: (1) refine future program eligibility criteria to direct resources to those that benefit most from the intervention, and (2) permit comparison of impacts on health outcomes to financial outcomes using similar or identical subgroups used by MPR and CMS in their separate and independent financial analyses. It was hypothesized that participants classified as belonging to one or more high-risk subgroups were more likely to demonstrate an intervention-associated reduction in mortality over the follow-up period of this study. 
The pre-specified and post hoc analyses of the study are summarized in Table 1.\n10.1371/journal.pmed.1001265.t001Table 1BODY.OUTCOMES AND SUBGROUP ANALYSES SPECIFIED IN THE STUDY PROTOCOL.:\nCharacteristic\nHQP Study Protocol\nPre-specified or Post Hoc\nOverall Mortality (all participants)\nPrimary outcome\nPre-specified\nRisk stratification level\nSubgroup analyses\nPre-specified\nRisk score\nSubgroup analyses\nDecision to analyze by tertile subgroups was post hoc\nPrimary enrollment diagnosis\nSubgroup analyses\nPre-specified\nClinical cardiovascular risk factors\nSecondary outcomes\nNot part of current study because data collection is still underway\n\nBODY.OUTCOMES: PRE-SPECIFIED:\nThe primary outcome of this study was the risk of death from any cause among intervention participants compared to control participants overall and within subgroups defined by risk strata and primary diagnosis. Vital status as of March 31, 2009 was assessed for all participants. The data source used to establish death was the online Social Security Death Master File (SSDMF) (http://www.ssdmf.com). Social security numbers obtained from participants following informed consent and prior to randomization were used to check vital status in the SSDMF.\n\nBODY.OUTCOMES: SPECIFIED POST HOC:\nAnalyzing deaths within subgroups defined by tertiles based on the numeric risk score obtained from the Sutter Health Questionnaire was not pre-specified in the study plan. After the study began, but before analysis commenced, this outcome was added. On the basis of random samples, we estimated an overall error rate of 3%–5% in the assignment of participant risk stratum. This rate was due to mistaken Sutter level determinations resulting from the manual tallying of risk scores and variation in the optional use of “flags” (specific question responses defined in the Sutter Questionnaire), which can, if four or more are present, result in increasing the Sutter level by one level. The numeric risk score of the Sutter Health Questionnaire when calculated retrospectively by computer using questionnaire data fields was more reliable.\nThe risk score derived from the Sutter Questionnaire is obtained in the first step of a multi-step process required for final risk stratum assignment. The risk score is a numeric variable (range in our data: 1–136, mean 29). The use of a computer calculated risk score alone, if predictive of outcomes, could offer a more streamlined, reliable, and efficient method of risk classification, potentially improving future program operations.\n\nBODY.SAMPLE SIZE:\nThe original minimum enrollment recommendation for MCCD study sites (686 in total; 343 participants each for treatment and control groups) made by MPR as part of their sample size estimation was based on the expected impact of effective interventions on hospitalization as described in MPR's study plan for the MCCD [30]. These original sample size calculations were not based on estimated impacts on mortality. Given the actual number of overall participants in this study (1,736), the observed probability of death in the control group (0.129), and the observed unadjusted hazard ratio (0.75), with alpha set at 0.05, this study is estimated to have a power of 58% for analysis of overall mortality risk using the Cox proportional hazard method. 
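The text does not state the formula behind the 58% power figure. As a rough, non-authoritative cross-check, the sketch below plugs the quoted inputs (about 868 participants per arm, a control-group death probability of 0.129, and a hazard ratio of 0.75 treated, for simplicity, as a risk ratio) into a two-proportion normal approximation. Under these simplifying assumptions it lands in the same neighbourhood as the reported estimate, but it should not be read as a reproduction of the authors' calculation.

```python
# Rough power cross-check using a two-proportion normal approximation.
# Treating the hazard ratio as an approximate risk ratio is a simplification;
# the study's own estimate referred to the Cox proportional hazard analysis.
from math import sqrt
from scipy.stats import norm

n_per_arm = 1736 // 2              # ~868 per group (873 intervention, 863 control in the study)
p_control = 0.129                  # observed probability of death in the control group
hr = 0.75                          # observed unadjusted hazard ratio
p_intervention = p_control * hr    # simplifying assumption: HR ~ risk ratio
alpha = 0.05

se = sqrt(p_control * (1 - p_control) / n_per_arm
          + p_intervention * (1 - p_intervention) / n_per_arm)
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(abs(p_control - p_intervention) / se - z_crit)
print(f"approximate power: {power:.2f}")   # ~0.57 under these simplifying assumptions
```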
Similarly calculated power estimates for subgroups were lower, with the exception of the upper risk tertile (power = 67%) and coronary heart disease (power = 77%) subgroups.\n\nBODY.RANDOMIZATION AND BLINDING:\nThe study was conducted as a parallel-group, randomized, controlled trial. Randomization took place at the individual participant level within each of the risk strata determined by HQP prior to enrollment (high, moderate, and low) using a secure website managed by MPR. Participants were randomized on a 1∶1 (intervention:control) basis. All randomization was done offsite by MPR, per a protocol established by MPR and approved by CMS, using randomly generated, concealed 4-digit “strings” of treatment-control assignments. By excluding strings made up entirely of treatment or entirely of control assignments, runs of more than six consecutive assignments to either group were prevented. The random assignment result was available to the program site via the website almost immediately. For practical reasons, study group assignment was not blinded.\n\nBODY.STATISTICAL METHODS:\nAll participants randomized into the trial from its start in April 2002 through March 2008 are included in the outcome analysis according to their original study group assignment. The primary outcome (vital status) on all randomized participants (regardless of early program discontinuation) was collected and analyzed through March 31, 2009. The observation period available for each individual participant ran from his or her date of randomization through March 31, 2009 or the completion of a full 5 y of observation (whichever occurred first). Discontinuation from study participation before an observation endpoint was reached, for any reason including loss to follow-up, was not a reason for exclusion from the outcome analysis.\nMortality over time was plotted using the Kaplan-Meier method, with p-values calculated using the log-rank test. The Cox proportional hazard method was used to calculate hazard ratios. Covariates were selected for inclusion in Cox regression models if they had both a significant association with the risk of death in univariate analysis and a recognized association with mortality (sex, age group, primary diagnosis, perceived health rating, number of medications taken, hospital stays in the past 6 mo), or if they failed to reach significance in univariate analysis but are widely acknowledged to have a strong association with death (tobacco use). The proportional hazard assumptions for Cox regression models were tested using Schoenfeld residuals, and no violations were identified (all p-values ≥0.05). Subgroup analyses included significance testing of interaction effects, using likelihood ratio tests to compare proportional hazards models with a subgroup-treatment interaction term to models without one.\nComparison of categorical data was performed using the Fisher exact test. Comparison of continuous data was performed using Student's t test when data were normally distributed or Wilcoxon's rank sum method when data departed significantly from a normal distribution. All values for p were calculated using two-sided tests. Statistical tests were performed using Stata/MP 10.1 for Macintosh (http://www.stata.com).\n\n\nBODY.RESULTS:\nBODY.PARTICIPANT RECRUITMENT AND FLOW:\nOf all patients referred (n = 9,362), sufficient data to assess study eligibility were available for 88% (n = 8,224). The CONSORT flow of participants through allocation, follow-up, and analysis is represented in Figure 1.
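The Statistical Methods paragraph above maps onto a fairly standard survival-analysis pipeline. Purely as an illustration, the sketch below reproduces that sequence of steps (Kaplan-Meier curves compared with a log-rank test, a covariate-adjusted Cox model, and a Schoenfeld-residual check of the proportional hazards assumption) in Python with the `lifelines` library; the study itself used Stata/MP 10.1, and the file name and column names here are hypothetical placeholders rather than part of the study's data management system.

```python
# Minimal sketch of the survival workflow described above, using the Python
# `lifelines` package instead of Stata. "time" is assumed to be days from
# randomization to death or censoring; all names are illustrative.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("participants.csv")                     # hypothetical analysis file
df["treatment"] = (df["group"] == "intervention").astype(int)
treated = df[df["treatment"] == 1]
control = df[df["treatment"] == 0]

# Kaplan-Meier estimates of mortality over time, compared with a log-rank test
km_treated = KaplanMeierFitter()
km_treated.fit(treated["time"], event_observed=treated["death"], label="intervention")
km_control = KaplanMeierFitter()
km_control.fit(control["time"], event_observed=control["death"], label="control")
lr = logrank_test(treated["time"], control["time"],
                  event_observed_A=treated["death"],
                  event_observed_B=control["death"])
print("log-rank p-value:", lr.p_value)

# Cox proportional hazards model with the covariates named in the text
covariates = ["treatment", "sex", "age_group", "primary_diagnosis",
              "perceived_health", "n_medications", "hospital_stays_6mo", "tobacco_use"]
model_df = pd.get_dummies(df[["time", "death"] + covariates],
                          drop_first=True, dtype=float)
cph = CoxPHFitter()
cph.fit(model_df, duration_col="time", event_col="death")
cph.print_summary()                                      # hazard ratios with 95% CIs

# Proportional hazards check based on (scaled) Schoenfeld residuals
cph.check_assumptions(model_df, p_value_threshold=0.05)
```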
Overall, 43% (1,736) of individuals confirmed to be eligible and living within the program service area agreed to participate in this study. Of the 2,265 (57%) eligible participants declining to participate, reasons for refusal were captured on 2,134 (94%). These are summarized in Table 2. The number of participants randomized into the study by year is presented in Table 3.\n10.1371/journal.pmed.1001265.g001Figure 1BODY.CONSORT FLOW DIAGRAM.:\n10.1371/journal.pmed.1001265.t002Table 2BODY.REASONS ELIGIBLE INDIVIDUALS DECLINED STUDY PARTICIPATION, :n = 2,265.\nReason\nNumber\nPercent\nSatisfied with current care\n1,054\n46.5\nToo busy\n307\n13.6\nToo old\n230\n10.2\nOverwhelmed with present number of providers\n195\n8.6\nGeneral mistrust of solicitations\n177\n7.8\nInitially accepted then changed their mind\n171\n7.5\nReason not captured\n131\n5.8\n10.1371/journal.pmed.1001265.t003Table 3BODY.STUDY ENROLLMENT BY YEAR.:\nGroup\nYear Enrolled\n\n2002\n2003\n2004\n2005\n2006\n2007\n2008a\n\nTotal\nControl\n134\n273\n231\n128\n23\n74\n0\n863\nIntervention\n136\n276\n236\n128\n24\n73\n0\n873\nTotal\n270\n549\n467\n256\n47\n147\n0\n1,736\naThrough March 2008.\nOf participants randomized to the intervention group, 151 (17%) prematurely ceased receiving the intervention before reaching an observation endpoint or experiencing an outcome event (death). The most frequent reason for early withdrawal from the intervention was a change in health care insurance coverage (n = 82). Eighty-five participants in the control group also had a change in health care insurance coverage. In most cases, these changes resulted from individuals opting to enroll in a private health insurance administered Medicare plan (Medicare Advantage). The median time from enrollment to program discontinuation for any reason for intervention and control participants was 557 and 560 d, respectively. All of these participants are included in the outcome analyses.\nThe mean follow-up for both control and intervention groups was 4.2 y. In this study, 815 (47%) participants reached the 5-y observation endpoint (alive) and 731 (42%) participants reached the March 31, 2009 endpoint (alive). Altogether, 197 (11%) died prior to reaching these endpoints.\n\nBODY.BASELINE DATA:\nBaseline characteristics for all participants and those belonging to the subgroups of high-risk stratum, upper risk tertile, and primary diagnosis of coronary heart disease are presented in Table 4. Overall, among the study population the mean age was 75 y, 61% were female, 31% lived alone, 17% rated their health as fair or poor, 14% said they were depressed in the prior 3 mo, and 22% reported a fall in the prior year. Participants in the high-risk stratum subgroup had an average age of 78 y, 73% were female, 67% lived alone, and 40% rated their health as fair or poor, 35% were depressed in the prior 3 mo, and 40% had fallen in the prior year. By contrast, participants in the coronary heart disease subgroup were less likely to be women (39%) and less likely to live alone (27%). 
Baseline characteristics of intervention and control participants are shown in Table 5\n\n10.1371/journal.pmed.1001265.t004Table 4BODY.BASELINE CHARACTERISTICS OF PARTICIPANTS OVERALL AND SELECTED SUBGROUPS.:\nCharacteristic\nClassification\n\nn (%) All\nn (%) High-risk Stratum\nUpper Risk Tertile n (%)\n\nn (%) Coronary Heart Disease\n\nParticipants \nn\n\n\n1,736\n505\n568\n300\n\nSex - Female\n\n\n1,057 (61)\n370 (73)\n416 (73)\n117 (39)\n\nAge in years (mean ± SD)\n\n74.8±6.5\n78.2±7.1\n78.1±6.9\n76.5±6.7\n\nAge group - years\n\n65–69\n502 (29)\n85 (17)\n87 (15)\n61 (20)\n\n70–74\n433 (25)\n78 (15)\n94 (17)\n66 (22)\n\n75–79\n418 (24)\n130 (26)\n156 (27)\n72 (24)\n\n80–84\n256 (15)\n124 (25)\n141 (25)\n72 (24)\n\n85+\n127 (7)\n88 (17)\n90 (16)\n29 (10)\n\nPerceived health\n\nExcellent\n304 (18)\n33 (7)\n43 (8)\n42 (14)\n\nGood\n1,124 (65)\n273 (54)\n327 (58)\n198 (66)\n\nFair\n266 (15)\n161 (32)\n160 (28)\n56 (19)\n\nPoor\n42 (2)\n38 (8)\n38 (7)\n4 (1)\n\nLiving alone\n\n\n546 (31)\n336 (67)\n401 (71)\n82 (27)\n\nDepressed in prior 3 mo\n\n\n244 (14)\n171 (34)\n176 (31)\n49 (16)\n\nFall in prior year\n\n\n374 (22)\n200 (40)\n210 (37)\n61 (20)\n\nLimited mobility\n\n\n162 (9)\n153 (30)\n155 (27)\n34 (11)\n\nUnintended 4.54-kg+weight loss\n\n\n72 (4)\n41 (8)\n42 (7)\n11 (4)\n\nADL score (mean ± SD)\n\n0.8±2.1\n2.2±3.4\n2.0±3.3\n1.0±2.5\n\nIADL score (mean ± SD)\n\n1.1±2.4\n3.0±3.5\n2.7±3.5\n1.4±2.8\n\nNeed help to complete risk survey\n\n\n155 (9)\n119 (24)\n123 (22)\n31 (10)\n\nTobacco use\n\n\n80 (5)\n17 (3)\n18 (3)\n18 (6)\n\nChronic conditions (mean ± SD)\n\n3.8±1.9\n5.2±2.2\n5.0±2.2\n4.5±1.8\n\nNursing home stay ever in past\n\n\n22 (1)\n20 (4)\n20 (4)\n4 (1)\n\nNumber of medications\n\n5 or more\n971 (56)\n383 (76)\n421 (74)\n217 (72)\n\n2 to 4\n624 (36)\n111 (22)\n132 (23)\n74 (25)\n\n1\n108 (6)\n10 (2)\n14 (2)\n6 (2)\n\nNone\n33 (2)\n1 (0)\n1 (0)\n3 (1)\n\nPhysician or clinic visits in past 6 mo\n\n4 or more\n468 (27)\n216 (43)\n233 (41)\n93 (31)\n\n2 or 3\n807 (46)\n218 (43)\n251 (44)\n136 (45)\n\n1\n401 (23)\n62 (12)\n75 (13)\n61 (20)\n\nNone\n60 (3)\n9 (2)\n9 (2)\n10 (3)\n\nER visits in past 6 mo\n\n3 or more\n19 (1)\n13 (3)\n14 (2)\n4 (1)\n\n2\n54 (3)\n39 (8)\n42 (7)\n15 (5)\n\n1\n240 (14)\n116 (23)\n126 (22)\n47 (16)\n\nNone\n1,423 (82)\n337 (67)\n386 (68)\n234 (78)\n\nHospital stays in past 6 mo\n\n4 or more\n18 (1)\n17 (3)\n17 (3)\n4 (1)\n\n2 or 3\n49 (3)\n30 (6)\n35 (6)\n12 (4)\n\n1\n177 (10)\n98 (19)\n101 (18)\n38 (13)\n\nNone\n1,492 (86)\n360 (71)\n415 (73)\n246 (82)\n\nPrimary diagnosis\n\nHeart failure\n98 (6)\n58 (11)\n60 (11)\n—\n\nCoronary heart disease\n300 (17)\n110 (22)\n116 (20)\n300 (100)\n\nDiabetes mellitus\n316 (18)\n119 (24)\n122 (21)\n—\n\nHypertension\n673 (39)\n150 (30)\n187 (33)\n—\n\nAsthma\n81 (5)\n36 (7)\n39 (7)\n—\n\nHyperlipidemia\n268 (15)\n32 (6)\n44 (8)\n—\n\nRisk stratum\n\nHigh\n505 (29)\n505 (100)\n490 (86)\n110 (37)\n\nModerate\n1047 (60)\n—\n76 (13)\n170 (57)\n\nLow\n184 (11)\n—\n2 (0)\n20 (7)\n\nRisk tertile\n\nUpper\n568 (33)\n490 (97)\n568 (100)\n116 (39)\n\nMiddle\n600 (35)\n15 (3)\n—\n111 (37)\n\nLower\n368 (33)\n0 (0)\n—\n73 (24)\nADL, activities of daily living; ER, emergency room; IADL, instrumental activities of daily living; SD, standard deviation.\n10.1371/journal.pmed.1001265.t005Table 5BODY.BASELINE CHARACTERISTICS OF PARTICIPANTS BY STUDY GROUP.:\nCharacteristic\nClassification\n\nn (%) Intervention\n\nn (%) Control\n\nParticipants\n\n\n873\n863\n\nSex - female\n\n\n537 (62)\n520 (60)\n\nAge in years (mean ± 
SD)\n\n74.7±6.5\n74.9±6.5\n\nAge group - years\n\n65–69\n260 (30)\n242 (28)\n\n70–74\n212 (24)\n221 (26)\n\n75–79\n207 (24)\n211 (24)\n\n80–84\n129 (15)\n127 (15)\n\n85+\n65 (7)\n62 (7)\n\nPerceived health\n\nExcellent\n151 (17)\n153 (18)\n\nGood\n566 (65)\n558 (65)\n\nFair\n136 (16)\n130 (15)\n\nPoor\n20 (2)\n22 (3)\n\nLiving alone\n\n\n276 (32)\n270 (31)\n\nDepressed in prior 3 mo\n\n\n125 (14)\n119 (14)\n\nFall in prior year\n\n\n181 (21)\n193 (22)\n\nLimited mobility\n\n\n82 (9)\n80 (9)\n\nUnintended 4.54-kg+weight loss\n\n\n38 (4)\n34 (4)\n\nADL score (mean ± SD)\n\n0.8±2.2\n0.8±2.1\n\nIADL score (mean ± SD)\n\n1.1±2.4\n1.1±2.4\n\nNeed help to complete risk survey\n\n\n80 (9)\n75 (9)\n\nTobacco use\n\n\n37 (4)\n43 (5)\n\nChronic conditions (mean ± SD)\n\n3.8±1.9\n3.8±2.0\n\nNursing home stay ever in past\n\n\n12 (1)\n10 (1)\n\nNumber of medications\n\n5 or more\n512 (59)\n459 (53)\n\n2 to 4\n301 (34)\n323 (37)\n\n1\n44 (5)\n64 (7)\n\nNone\n16 (2)\n17 (2)\n\nPhysician or clinic visits in past 6 mo\n\n4 or more\n232 (27)\n236 (27)\n\n2 or 3\n402 (46)\n405 (47)\n\n1\n206 (24)\n195 (23)\n\nNone\n33 (4)\n27 (3)\n\nER visits in past 6 mo\n\n3 or more\n10 (1)\n9 (1)\n\n2\n27 (3)\n27 (3)\n\n1\n109 (12)\n131 (15)\n\nNone\n727 (83)\n696 (81)\n\nHospital stays in past 6 mo\n\n4 or more\n7 (1)\n11 (1)\n\n2 or 3\n25 (3)\n24 (3)\n\n1\n90 (10)\n87 (10)\n\nNone\n751 (86)\n741 (86)\n\nPrimary diagnosis\n\nHeart failure\n50 (6)\n48 (6)\n\nCoronary heart disease\n138 (16)\n162 (19)\n\nDiabetes mellitus\n176 (20)\n140 (16)\n\nHypertension\n348 (40)\n325 (38)\n\nAsthma\n39 (4)\n42 (5)\n\nHyperlipidemia\n122 (14)\n146 (17)\n\nRisk stratum\n\nHigh\n252 (29)\n253 (29)\n\nModerate\n528 (60)\n519 (60)\n\nLow\n93 (11)\n91 (11)\n\nRisk tertile\n\nUpper\n289 (33)\n279 (32)\n\nMiddle\n302 (35)\n298 (35)\n\nLower\n282 (32)\n286 (33)\nADL, activities of daily living; ER, emergency room; IADL, instrumental activities of daily living.\n\nBODY.MORTALITY ANALYSES:\nOverall, 86 (9.9%) intervention participants and 111 (12.9%) control participants died during the study period, representing a 25% lower relative risk of death (unadjusted hazard ratio [HR] 0.75 [95% CI 0.57–1.00], p = 0.047) among the intervention group. When covariates for sex, age group, primary diagnosis, perceived health, number of medications taken, hospital stays in the past 6 mo, and tobacco use were included in the model, the adjusted HR was 0.73 (95% CI, 0.55–0.98), p = 0.033.\nThe number and percentages of deaths, graphical representation of the unadjusted and adjusted hazard ratios, and the p-values from tests of subgroup-treatment interaction overall, and for all subgroups are provided in Figure 2. Subgroup analyses did not demonstrate statistically significant interaction effects for any subgroup. A Kaplan-Meier plot and log rank test comparing intervention and control groups overall is shown in Figure 3.\n10.1371/journal.pmed.1001265.g002Figure 2BODY.SUBGROUP ANALYSES.:\nDeaths and tests for interaction by subgroup. HRs and adjusted HRs (aHR) along with 95% CIs are represented by forest plots with x-axis in log 2 scale. The regression model used for the aHR includes covariates for sex, age group, primary diagnosis, perceived health rating, number of medications taken, hospital stays in the past 6 mo, and tobacco use. 
CHD, coronary heart disease.\n10.1371/journal.pmed.1001265.g003Figure 3BODY.KAPLAN-MEIER ESTIMATE OF CUMULATIVE MORTALITY UP TO 5 Y FROM ENROLLMENT.:\nThe plot includes results for all participants randomized into the study, (unadjusted data), with p-value calculated using the log-rank test.\nThere was a 100% match between deaths known to nurse care managers in the intervention group and deaths identified in the Social Security Death Master File (SSDMF).\n\nBODY.ADVERSE EVENTS:\nNo known program-related adverse events were identified.\n\n\nBODY.DISCUSSION:\nBODY.MAIN FINDINGS:\nThis study provides evidence that the model of community-based nurse care management tested is associated with a reduction in all-cause mortality among chronically ill older adults participating in fee for service Medicare in the US. The strengths of the study include: a randomized controlled trial design (with randomization at the individual participant level), model implementation in collaboration with a broad array of primary care providers across a sizeable geographic region, a long follow-up period, and use of the intention-to-treat method of analysis.\nIn the setting of small sample sizes and low statistical power, the subgroup analyses are best viewed as exploratory. There is a suggestion that participants in the upper risk tertile and those with a diagnosis of coronary heart disease may experience a greater survival benefit from the program. The lack of statistically significant subgroup-treatment interaction, however, indicates the need for caution when interpreting apparent differences between subgroups. The study helped confirm the feasibility of collecting self-reported information for intake risk assessment to identify subgroups that may be more likely to benefit from this intervention.\nThe finding of an increased hazard ratio for intervention participants in the asthma subgroup was unexpected and corresponds to a total subgroup size of 81 (intervention and controls) with only three control and five intervention deaths. Retrospective reviews of HQP chart records for the five intervention participants who died revealed: one died of multiple myeloma while receiving hospice care, one of unknown causes during sleep, one of complications apparently arising from a hospital misadventure resulting in acute renal failure and sepsis, one of severe chronic obstructive lung disease while receiving hospice care, and one died more than a year after discontinuing participation in the HQP program following a change in health insurance coverage. Given the small subgroup size and event counts, Kaplan-Meier plot pattern (not shown), log rank test p-value of 0.472, and the findings noted on chart review, we believe there is little evidence of a program-related association or mechanism for an increased risk of death among intervention participants within the asthma subgroup.\n\nBODY.LIMITATIONS:\nCMS did not make any claims data available to HQP for program operations, performance improvement, or research purposes and did not permit HQP to have any contact with control participants following randomization. HQP's MCCD-related funding consisted of a per participant per month fee for care coordination services with no additional support for research activities associated with the demonstration. 
Given these limitations, the authors could not directly analyze differences in medical expenditures or health care service utilization between treatment and control groups, though these analyses have been previously reported by others [28],[29].\nEvaluating the impact of care coordination models on mortality was not the primary objective of the MCCD, and the sample size for this study overall and for most subgroups was smaller than optimal for this purpose. A small sample size increases the risk of failing to identify a true difference in survival between treatment and control groups when one actually exists (a type II error), but small subgroups also increase statistical volatility such that small numbers of events or small differences in regression covariates can have an exaggerated effect on results. A likely case in point was the unexpected finding of an increased HR among intervention participants in the asthma subgroup.\nThe study focused on one unique model of community-based care management in a single geographic region of the US. Participants in this study were predominantly white and only a small proportion was believed to be economically poor (though socioeconomic, racial, and ethnic data were not collected in this study). Testing the generalizability of this model among more racially, ethnically, culturally, and economically diverse populations and in other geographic regions is an important research imperative. Until such research is undertaken, it will be impossible to know to what extent the demographic profile of participants in the current study was a determinant of the effectiveness of the model.\n\nBODY.INTERPRETATION:\nThe current study provides the strongest evidence to date that a model of community-based nurse care management can reduce the mortality rate for chronically ill older adults. The study also supports the broader concept that, at least under some circumstances, nurses playing a more intensive role in the longitudinal care of chronically ill older adults can improve the long-term health outcome of this population. This point had not been well established in previous research. A few studies from Europe, of smaller size and shorter duration than the study reported here, are associated with a survival advantage among older adults receiving various types of home visits by nurses [35]–[38]. To date, most studies of care coordination models in the US, including those applying the chronic care model, focusing on primary care redesign, medical homes, or transitions of care, have either not reported mortality as a separate outcome or have demonstrated no impact on mortality [39]–[53]. Two studies from the US that did not randomize individual participants and had other methodological limitations have reported improved survival for recipients of nurse care management provided within primary care settings [54],[55].\nIn reports by the CMS-contracted evaluators of the MCCD, the impact of the HQP model of community-based care management on health expenditures and health service utilization varied by pre-randomization risk, defined either by the risk stratification method used in the current study or by a combination of diagnoses and health service utilization. With all enrollees included (low, moderate, and high risk), no statistical difference in medical expenditures or health service utilization between the intervention and usual care groups has been observed [28],[29]. 
When analysis is restricted to the subgroup of the high-risk stratum (as defined in the current study), intervention participants were reported to have 29% fewer hospitalizations and 20% lower expenditures than individuals assigned to usual care ([27], pages 614–615). Among a subgroup of participants with heart failure, coronary heart disease, or chronic obstructive pulmonary disease, and at least one hospitalization in the prior year, the intervention group had 39% fewer hospitalizations, 37% fewer emergency room visits, a 36% decrease in total Part A and Part B Medicare expenditures, and a net savings to Medicare (after HQP program fees) of US$397 per participant per month [29]. As a result of these findings, starting October 2010, with continued CMS support, eligibility for this study was changed and HQP began prospectively enrolling higher-risk beneficiaries: individuals with a history of heart failure, coronary heart disease, chronic obstructive pulmonary disease, or diabetes, and at least one hospitalization (for any reason) in the year prior to study randomization.\nFavorable impacts on health service utilization and expenditures among higher-risk participants, together with reduced overall mortality, suggest that this model of community-based nurse care management works by reducing avoidable complications that increase both the use of acute health care services and the risk of death. This may have been accomplished, in large part, by supporting participants to adhere more closely to physician-initiated treatment plans concordant with evidence-based guidelines. Nurse care managers also prompted primary care providers whenever “clinical inertia” [56] or deviations in treatment plans prevented participants from achieving guideline-defined goals.\nApart from being an important health outcome in its own right, improved survival driven, as in this case, by a preventive intervention is very likely accompanied by other important improvements in health, functional status, and quality of life, though these have not been measured over the long term in the MCCD. To assess the full value of this model, future research should include such measures.\nThe HQP model of community-based care management has been tested on a regional scale in a health care delivery environment typical of much of the US for over 9 y and found to be compatible with and complementary to the work that primary care practices are increasingly engaged in to develop a patient-centered medical home. The current study also provides evidence that it is feasible to implement this program in collaboration with small, independent primary care practices. Office practices with five or fewer physicians accounted for about 73% of primary care practices in the US in 2003–2004 (with 46% of practices consisting of only one or two physicians) [57]. Smaller practices have been reported to be less likely to use patient-centered medical home processes, indicating that effective chronic disease management interventions through office-based efforts alone may be especially challenging for such practices [58].\nForty-three percent of eligible individuals contacted agreed to participate in this study and 57% declined. Future experimental research must recognize and adequately accommodate the cost and time required to case-find and enroll sufficient numbers of participants into studies of this kind. Non-experimental evaluations of replication or scalability efforts, not requiring randomization or informed consent, would likely see greater rates of enrollment among those eligible.
Analysis of the variation in enrollment rates between sites could potentially offer insights into how best to optimize engagement and enrollment of eligible individuals. Use of aggregated health care data, if available, would greatly improve the efficiency and effectiveness of case-finding and participant recruitment.\nIt is not known whether the model used in the current study would be effective in other countries having different demographic profiles, socioeconomic conditions, health care insurance, or health care delivery systems. The model's attributes of being community-based, requiring relatively modest start-up capital, and relying on nurses may make it an approach worth testing in some global health settings. The program as implemented in this study relied on collaboration with primary care physicians; therefore, locations with reduced availability of or access to primary care services could see diminished effectiveness. In some areas, shortages of nurses with sufficient training or experience may preclude implementation of the program. Whether other types of health workers can be trained to substitute for nurses in this model is unclear, but the current study used highly experienced nurses as care managers, and the use of alternative providers may not yield similar results.\nIn light of the many care coordination and disease management models that have failed to demonstrate comparable improvements in health outcomes, it seems likely that a degree of fidelity to model design and implementation will be necessary for reproducible effectiveness. Efforts to maintain such program fidelity may conflict with the need for local adaptations to allow implementation in a new environment. The authors believe that core elements contributing to this program's effectiveness include: (1) delivering a broad set of services that match the preventive health needs of the targeted population, (2) frequent longitudinal in-person contacts with participants, (3) collaboration with primary care providers, and (4) training, management, and performance monitoring capabilities.\nThe program of community-based care management tested in the current study appears to be a valuable addition to the primary care of appropriately selected chronically ill older adults. Efforts to more broadly test the adaptability, scalability, and generalizability of this model seem warranted. Future progress in this area will, like many innovations in biomedical science and public health, probably require multiple, well-designed, longitudinal trials.\n\nBODY.SUPPORTING INFORMATION:\nText S1: Study protocol (PDF).\nText S2: CONSORT checklist (DOC).\nText S3: Elements of the intervention (DOC).\nText S4: Elements of program management (DOC).\n\n**Question:** Compared to the usual care afforded to traditional Medicare beneficiaries, what was the result of the HQP model of community-based nurse care management on Mortality?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|