Quality Improvement Studies

With the growing interest in generating and disseminating knowledge to improve health-care quality, CJNR is pleased to announce a new section titled Quality Improvement Studies. This section will focus on advancing practice-based nursing scholarship, and articles may cover any aspect of clinical or therapeutic care or service improvements in any practice setting.

 

Policy

CJNR has adapted the BMJ Quality and Safety policy on quality improvement reports that are considered to be exempt from ethics review1 and recommends the use of SQUIRE (see Appendix 1)2 for writing up a quality improvement study.

CJNR will accept quality improvement manuscripts that have received either ethics approval by a Research Ethics Board (REB)/Institutional Review Board (IRB) or approval by an institutional review committee.

REB/IRB-Approved Quality Improvement Projects

A statement that the project has been approved by the REB/IRB of the appropriate institution or organization must be included in the text of your manuscript. Also, a copy of the REB/IRB approval letter must accompany your submission.

Institutionally Approved Quality Improvement Projects

A statement to the effect that the study met criteria for exemption from review by an appropriate board according to institutional policy (e.g., because the work was deemed an important improvement activity and did not fall into the category of human subjects research) must be included in the text of your manuscript. Also, a copy of the exemption letter must accompany your submission.

 

Sample statements:

  • For main text of manuscript: “According to the policy on activities that constitute research at [name of institution/organization], this work met criteria for operational improvement activities that are exempt from ethics review.”
  • For study approval section of submission template: “[name of institution/organization] uses the following criteria for determining whether improvement activities require ethics review”:

Policy criterion: “The work is primarily intended to improve local care, not to produce generalizable knowledge in a field of inquiry.”

Example: “The work reported here meets this criterion because hand hygiene is a universally recommended practice. We sought only to evaluate the improvements in compliance with hand hygiene as a result of auditing and feedback of compliance rates to hospital staff.”

 

Differentiating Quality Improvement From Research

Two key criteria can be used to distinguish between quality improvement and research.2,3,4

  • The express purpose of the project and who will benefit from it: The purpose of quality improvement is to improve care for the population served by a specific health-care facility or to share lessons learned from implementation. The purpose of research, in contrast, is to generate knowledge about a new strategy or innovation that produces specific outcomes that may be applied generally.4
  • The risks and burdens borne by those participating in the project: If an intervention imposes risks or burdens beyond the standard of practice to make the results generalizable, it needs to be reviewed and regulated as research.

 

Format

The manuscript should be a maximum of 10 double-spaced pages, including references (up to 20) and tables/figures (up to 2), and should address the relevant sections of the rating form. The abstract should be a maximum of 200 words. The style guide to be followed is the Publication Manual of the American Psychological Association, 6th Edition.

The presentation of quality improvement articles should adhere to SQUIRE (see Appendix 1)2 and include five sections3:

 

Introduction

  • Provide a brief description of context: relevant details of staff and function of department, team, unit, and patient group.
  • Outline the quality improvement or patient safety issue and describe what you sought to achieve.

 

Methods

  • Organize your Methods section in logical or chronological order. For example, it is more logical to describe the improvement or safety effort before the outcome measures.
  • Delineate the course of action taken for the improvement: describe what changes were made and why, how they were implemented, and who was involved in the change process.
  • Delineate the key measures for improvement, including what would constitute improvement in the view of the patient and/or the provider.
  • Describe the process of gathering information, including the methods used to assess the improvement or the safety issue.
  • Describe your analysis, including the analytical methods and software used.

 

Results

  • Highlight your pertinent findings in an objective manner.
  • Report the demographics/characteristics of your population, followed by pertinent findings.
  • If you have a substantial amount of qualitative data, report in bullet form or in a table or box and discuss the main points in the text.

 

Discussion

  • Interpret and discuss your findings to illustrate what is important in your improvement or safety efforts and what the reader should do with this information.
  • Explain why your findings are important; even negative findings will offer useful information.
  • Put your findings in the context of relevant studies. Discuss how your study builds on prior studies and what knowledge it contributes.

 

Conclusion

  • Discuss the implications of your findings in terms of next steps. Describe what you have learned and/or achieved and how you will take this forward.

 

1 BMJ Journals. (2015). BMJ quality and safety: Policy on ethics review for quality improvement reports. Retrieved April 1, 2015, from http://qualitysafety.bmj.com

2 Davidoff, F., Batalden, P., Stevens, D., Ogrinc, G., & Mooney, S. (2008). Publication guidelines for quality improvement in health care: Evolution of the SQUIRE project. Annals of Internal Medicine, 149(9), 670–676.

3 Holzmueller, C. G., & Pronovost, P. (2013). Organising a manuscript reporting quality improvement or patient safety research. BMJ Quality and Safety, 22(9), 777–785.

4 Newhouse, R. P., Poe, S., Pettit, J. C., & Rocco, L. (2006). The slippery slope: Differentiating between quality improvement and research. Journal of Nursing Administration, 36(4), 211–219.


Appendix 1: Standards for Quality Improvement Reporting Excellence (SQUIRE)2

Each entry below names the text section, followed by a description of the section or item.

Title and abstract

Did you provide clear and accurate information for finding, indexing, and scanning your article?

1. Title

  1. Indicates that the article concerns quality improvement (broadly defined to include the safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity of care)
  2. States the specific aim of the intervention
  3. Specifies the study method used (e.g., “A qualitative study” or “A randomized cluster trial”)

2. Abstract

Summarizes all key information from various sections of the text using the abstract format of the intended publication

Introduction

Explains why you undertook this work

3. Background knowledge

Provides a brief, non-selective summary of current knowledge of the care problem being addressed and characteristics of organizations in which it occurs

4. Local problem

Describes the nature and severity of the specific local problem or system dysfunction that was addressed

5. Intended improvement

  1. Describes the specific aim (changes/improvements in care processes and patient outcomes) of the proposed intervention
  2. Specifies who (champions, supporters) and what (events, observations) triggered the decision to make changes, and why the improvements were carried out now (timing)

6. Study question

States the primary improvement-related question and any secondary questions that the study of the intervention was designed to answer

Methods

Describes what you did

7. Ethical issues


Describes ethical aspects of implementing and studying the improvement, such as privacy concerns, protection of participants' physical well-being, and potential author conflicts of interest, and how ethical concerns were addressed

8. Setting


Specifies how elements of the environment considered most likely to influence change/improvements at the particular site or sites were identified and characterized

9. Planning the intervention


  1. Describes the intervention and its components in sufficient detail that others will be able to reproduce it
  2. Indicates main factors that contributed to choice of the specific intervention (e.g., analysis of causes of dysfunction; matching relevant improvement experience of others with the local situation)
  3. Outlines initial plans for how the intervention was to be implemented: what was to be done (initial steps; what these steps were meant to achieve; how tests of change would be used to modify intervention), and by whom (intended roles, qualifications, and training of staff)

10. Planning the study of the intervention


  1. Outlines plans for assessing how well the intervention was implemented (dose or intensity of exposure)
  2. Describes mechanisms by which intervention components were expected to cause changes and plans for testing the effectiveness of those mechanisms
  3. Identifies the study design (e.g., observational, quasi-experimental, experimental) chosen for measuring the impact of the intervention on primary and secondary outcomes, if applicable
  4. Describes plans for implementing essential aspects of the chosen study design, as described in publication guidelines for specific designs, if applicable (see, e.g., http://www.equator-network.org)
  5. Describes aspects of the study design that specifically concerned internal validity (integrity of the data) and external validity (generalizability)

11. Methods of evaluation


  1. Describes instruments and procedures (qualitative, quantitative, or mixed) used to assess (i) the effectiveness of implementation, (ii) the contributions of intervention components and context to the effectiveness of the intervention, and (iii) primary and secondary outcomes
  2. Reports efforts to validate and test reliability of assessment instruments
  3. Describes methods used to ensure data quality and adequacy (e.g., blinding; repeating measurements and data extraction; training in data collection; collection of sufficient baseline measurements)

12. Analysis


  1. Provides details of qualitative and quantitative (statistical) methods used to draw inferences from the data
  2. Aligns unit of analysis with level at which the intervention was implemented, if applicable
  3. Specifies degree of variability expected in implementation, change expected in primary outcome (effect size), and ability of study design (including size) to detect such effects
  4. Describes analytic methods used to demonstrate effects of time as a variable (e.g., statistical process control)

Results

What you found

13. Outcomes

  a. Nature of setting and improvement intervention
    i. Characterizes elements of setting or settings (e.g., geography, physical resources, organizational culture, history of change efforts) and structures and patterns of care (e.g., staffing, leadership) that provided context for the intervention
    ii. Outlines the course of the intervention (e.g., sequence of steps, events or phases; type and number of participants at key points), preferably using a timeline diagram or flow chart
    iii. Documents degree of success in implementing intervention components
    iv. Describes how and why the initial plan evolved, and the most important lessons learned from that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)
  b. Changes in processes of care and patient outcomes associated with the intervention
    i. Presents data on changes observed in the care delivery process
    ii. Presents data on changes observed in measures of patient outcome (e.g., morbidity, mortality, function, patient/staff satisfaction, service utilization, cost, care disparities)
    iii. Considers benefits, harms, unexpected results, problems, failures
    iv. Presents evidence regarding the strength of association between observed changes/improvements and intervention components/contextual factors
    v. Includes summary of missing data for intervention and outcomes

Discussion

What the findings mean

14. Summary

  1. Summarizes the principal successes and difficulties in implementing intervention components, and main changes observed in care delivery and clinical outcomes
  2. Highlights the study’s particular strengths

15. Relation to other evidence


Compares and contrasts your results with those of others, drawing on a broad review of the literature; use of a summary table may be helpful in building on existing evidence

16. Limitations


  1. Considers possible sources of confounding, bias, or imprecision in design, measurement, and analysis that might have affected study outcomes (internal validity)
  2. Explores factors that could affect generalizability (external validity), e.g., representativeness of participants; effectiveness of implementation; dose-response effects; features of local care setting
  3. Addresses likelihood that observed gains will weaken over time and describes plans, if any, for monitoring and maintaining improvement; explicitly states whether such planning was done
  4. Reviews efforts made to minimize and adjust for study limitations
  5. Assesses the effect of study limitations on interpretation and application of results

17. Interpretation

  1. Explores possible reasons for differences between observed and expected outcomes
  2. Draws inferences consistent with the strength of the data about causal mechanisms and size of observed changes, paying particular attention to components of the intervention and contextual factors that helped determine the intervention’s effectiveness and settings where this intervention is most likely to be effective
  3. Suggests steps that might be modified to improve future performance
  4. Reviews issues of opportunity cost and actual financial cost of the intervention

18. Conclusions

  1. Considers overall practical usefulness of the intervention
  2. Suggests implications of this report for further studies of improvement interventions

Other information

Other factors relevant to conducting and interpreting the study

19. Funding

Describes funding sources, if any, and role of funding organization in design, implementation, interpretation, and publishing of study

