Assignment title: Information
Assessment 2 - Reading Log 2

Rationale
Reading logs help students interact with reference materials. They provide a record of what students have read and a starting point for discussing what they achieved by reading it. They also help students reflect on themselves as readers.

Reading log entry criteria
1. The correct citation of the reference;
2. The main thesis of the reference;
3. Any predictions or applicability to future states, or about what might happen next;
4. How you felt about what you have read in terms of accuracy, reliability, validity and generalizability;
5. Issues/assertions in the reading that you agree with, and why you agree with them;
6. Issues/assertions in the reading that you disagree with, and why you disagree with them;
7. Why the material read is useful (or not).

The Task
Pick one peer-reviewed article from Module 3 AND/OR one from Module 4 of the course reference resources (modules are listed under the course content tab with topics listed below them), or you may use any of the peer-reviewed journal articles listed in the bibliography. Using the above seven (7) Reading Log entry criteria as headings, write at least 200 words and no more than 300 words on each of the reference materials selected. You may choose to write these in a table (see below for an example) or in essay form, again using the criteria as headings. Please make sure that you use only peer-reviewed journal articles or research reports; web pages or documents from WHO or other non-peer-reviewed sources will not attract any marks. If you are unsure, contact your lecturer. You may use any format you wish as long as it contains the headings listed in the example below.

Example of Tabular Form for Reading Log

Criterion: Correct Citation
Discussion: Using APA6 format ONLY, cite the article correctly.

Criterion: Main Thesis
Discussion: The main thesis of a research paper or peer-reviewed journal article is a concise summary of the main point or claim of the paper. A thesis statement is usually one sentence that appears at the end of the first paragraph, though it may run to more than one.

Criterion: Predictions and/or Applicability
Discussion: Predictions refer to the quality of being regarded as likely to happen, as a behaviour or event. Applicability refers to the state of being relevant and pertinent to the area being researched.

Criterion: Accuracy/Reliability/Validity/Generalizability
Discussion: Accuracy refers to the quality or state of being correct or precise. Reliability refers to the extent to which an experiment, test, or measuring procedure yields the same results on repeated trials. Validity refers to the quality of being logically or factually sound or cogent. Generalizability, as applied by researchers in an academic setting, can be defined as the extension of research findings and conclusions from a study conducted on a sample population to the population at large. While the dependability of this extension is not absolute, it is statistically probable.

Criterion: Agreed Issues/Assertions and Rationale
Discussion: List the issues or assertions in the paper that you agree with and why you agree with them. Perhaps you agree because of the sample size, because other authors assert similar things, because of your own life experience, or because empirical evidence supports the assertion.

Criterion: Disagreed Issues and Rationale
Discussion: List the issues or assertions in the paper that you disagree with and why you disagree with them.
Perhaps you disagree because of the sample size, because other authors assert different things, because of your own life experience, or because empirical evidence does not support the assertion.

Criterion: Usefulness or Otherwise of the Article and Why
Discussion: Describe why the article is or is not useful. You should provide rationale from the literature to support your position on the usefulness of the article.

Assignment submission
• Students are to submit each of their Reading Log Entries via the SafeAssign submission points.
• Please be aware that e-mailed Word documents will not be accepted. Late mark penalties will apply as per University policy, which is re-stated in the assessment plan section of this course web site.
• The due dates and times given under Due Dates above are the latest that students may lodge their Reading Log Entries. Entries after that date will incur late penalties as per University policy.

Marking criteria
Criterion: Marks
Citation: 1
The main thesis: 1
Predictions and applicability: 18
Opinion on accuracy, reliability and validity: 20
Issues you agree with and rationale: 20
Issues you disagree with and rationale: 20
Why is this article useful/not useful: 20

I am mentioning two articles from the Module 3 and Module 4 topics. Please consider these two articles in writing the assignment and write it according to the criteria set by the course convenor.

Measuring healthcare quality: the challenges

Correct citation: van den Heuvel, J., Niemeijer, G. C., & Does, R. J. M. M. (2013). Measuring healthcare quality: The challenges. International Journal of Health Care Quality Assurance, 26(3), 269-278.

Abstract
Purpose - Current health care quality performance indicators appear to be inadequate to inform the public to make the right choices. The aim of this paper is to define a framework and an organizational setting in which valid and reliable healthcare information can be produced to inform the general public about healthcare quality. Design/methodology/approach - To improve health care quality information, the paper explores the analogy between financial accounting, which aims to produce valid and reliable information to support companies informing their shareholders and stakeholders, and healthcare, which aims to inform future patients about healthcare quality. Based on this analogy, the authors suggest a measurement framework and an organizational setting to produce healthcare information. Findings - The authors suggest a five-quality-element framework to structure quality reporting. The authors also indicate the best way to report each type of quality, comparing performance indicators with certification/accreditation. Health gain is the most relevant quality indicator to inform the public, but this information is the most difficult to obtain. Finally, the organizational setting, comparable to financial accounting, required to provide valid, reliable and objective information on healthcare quality is described. Practical implications - Framework elements should be tested in quantitative studies or case studies, such as a performance indicator's relative value compared to accreditation/certification.
There are, however, elements that can be implemented right away, such as third-party validation of healthcare information produced by healthcare institutions. Originality/value - Given the money spent on healthcare worldwide, valid and reliable healthcare quality information's value can never be overestimated. It can justify delivering "expensive" healthcare, but also points the way to savings by stopping useless healthcare. Valid and reliable information puts the patient in the driver's seat and enables him or her to make the right decision when choosing a healthcare provider.

Full Text

Introduction
We can observe several initiatives in The Netherlands that measure healthcare quality to provide transparency to the public. Unfortunately, this is done in a non-standardised way by multiple organisations. One initiative is a weekly magazine called Elsevier. Since 1997, it has published hospital rankings based on expert opinions from general practitioners, physicians, nurses, managers and board members ([10] de Hen et al., 1997). Also, the Dutch Healthcare Inspectorate developed an ever-expanding quality performance indicator (PI) list that hospital staff are obliged to measure and report to the inspectorate. Results reported are rarely verified, so reliability is dubious. Subsequently, a Dutch newspaper published hospital rankings based on selected Healthcare Inspectorate quality PIs multiplied by the newspaper's own weighting factors ([7] Geenen and Wessels, 2004). Patient organisations developed their own specific quality PIs related to explicit diseases, such as diabetes, breast cancer and colon carcinoma ([25] NPCF, 2010; [30] de Ronde and Smit-Winterink, 2003). More recently, healthcare insurance companies followed with their attempts to measure quality based on the quality PIs - specific indicators developed by patient organisations and the Consumer Quality Index ([31] Stubbe et al., 2007). In 2011, a yearly guide (Dr Yep) was published for the first time, which ranked hospitals based on information provided by staff, Healthcare Inspectorate PIs and mystery-guest experiences ([5] Dokter et al., 2011). The most recent attempt, also in 2010, is Elsevier's revised list, based on public data such as the Healthcare Inspectorate PIs and treatment access times. In this information labyrinth, the same hospital can get different scores in one specific survey compared to other surveys in the same year. All surveys invariably claim to measure healthcare quality, and patients remain confused by their favourite hospital's inconsistent and continually varying rankings. Despite this claim, [22] Lingsma (2010, pp. 240-2) concludes that the Dutch general public has access to different process and outcome measures, none of which represents care quality. In this article, therefore, we introduce a framework and an organisational setting for measuring healthcare quality that provides standardised, valid and reliable information to the public.

Lessons learned from financial accounting
[29] Pronovost et al. (2008) state that reporting quality measures is like the Wild West, because everyone is making their own rules (measures) and there is no external rule verification or enforcement, leading to unreliable measures. The rapidly growing set of measures that hospital managers voluntarily develop and publicly report carries little assurance that the measures are accurate, including whether there are unintentional biases or outright falsehoods.
The contrast between financial and healthcare-quality performance reporting is dramatic. They suggest that healthcare managers could learn from the generally accepted accounting principles (US GAAP) as a model for developing a public healthcare-quality reporting system ([28] Pronovost et al., 2007). [26], [27] Porter and Teisberg (2006, 2007) argue that only with unbiased and reliable public reporting can we expect value-based competition on results and, in turn, affordable high-quality healthcare. To better understand the analogy drawn by [28] Pronovost et al. (2007): the US GAAP's purpose is to assure the public that the stock represents the value as stated and that the public can trust the information provided by the company. In other words, this is an external role relative to the stakeholders and the public, and this information pertains to the company's economic performance reported in the income statement and the balance sheet. To assure that external financial reporting is trustworthy, the US Financial Accounting Standards Board (FASB) and the European International Accounting Standards Board (IASB) develop standards and rules independently ([15] International Accounting Standards Board, 2010). Furthermore, the company is required to hire an outside independent agent, a certified public accountant, to go over the books and verify that the numbers indeed represent reality and performance. This external reporting function is parallel to the quality assurance (QA) function in a quality management system (QMS). [17] Jayaraman and Rivenson (2008) argue that healthcare is more complex than financial services and that information conveyed in external reports may lack the details required by internal reports, and vice versa. No modern business management team, however, relies on the external financial statement for day-to-day operations. Thus, firms have a parallel internal management accounting system providing detailed information that does not follow GAAP and seldom, if ever, is shared with the public. As in financial management, a QMS incorporates an internal information method that does not necessarily follow any external reporting standards but helps managers control and improve quality. To obtain valid and reliable information, we explore the analogy between financial and quality management to organise, structure and provide external reporting on healthcare quality for the public. We provide a brief overview of quality management principles as the primary quality-information source. Then we take a closer look at the relationship between quality management and external reporting, known as QA. We provide a framework for measuring healthcare quality and suggest an organisation to provide this information to the public.

Quality management and measuring quality
According to [18] Juran (1986), quality management's three principles are quality planning, quality improvement and quality control (the Juran Trilogy). We discuss these three principles and look at them as quality information sources to support QA. Quality improvement: this is the most important function to establish in an ongoing healthcare organisation, and it needs to be done via special projects. Our team has ten years' experience implementing Lean Six Sigma in healthcare systems ([13], [14] van den Heuvel et al., 2006a, b; [4] Does et al., 2006). From this experience, we know that information required for quality improvement (QI) differs from project to project.
After closing a project, most information was useless, because other data were required to preserve the improvement and to control the process. Information to perform QI projects is highly specific, costly to gather and useful only for a short period. Therefore, this source is unsuitable for providing healthcare quality information to share with the public. Quality planning: to improve healthcare, it is not sufficient to eliminate deficiencies, reduce medication errors, eliminate delays, etc., by just doing projects. A key quality-planning objective is to design new processes that prevent mistakes from being repeated, without designing deficiencies into the new products, processes and services ([20] Juran, 1988). A fairly simple example is introducing a new hospital computer system to support medication prescription and distribution in order to reduce medication errors. Quality planning can be done in a structured manner, by systematically looking at healthcare markets, patient demands and present healthcare specifications. The specific path to be followed, and the information needed to get to a newly designed healthcare product, are unpredictable, which means that the information generated in the quality planning process is specific, time dependent and closely related to unique questions. Therefore, this information plays a minor role in public reporting. Quality control is the managerial process that provides stability to prevent adverse change and to maintain the status quo ([21] Juran and Blanton Godfrey, 1999, p. 4.2). All employees, from the hospital floor worker to the CEO, exercise control. The only difference is the subject of the control exercised by different groups. Healthcare professionals typically control products and processes related to the unit in which they work. Executives control budgets, revenues, costs, etc. The information needed to exercise control includes PIs that are well known in every hospital. Performance can be measured from financial, production, efficiency, personnel and healthcare quality perspectives. Complications, postoperative infection rates and pressure sore incidence are popular. It takes effort to design an information system for controlling a specific department; controlling a nursing department, for instance, is different from controlling a fully automated production line. Quality assurance is similar to quality control ([21] Juran and Blanton Godfrey, 1999, pp. 2.13-2.14); therefore, information related to control can interest external stakeholders. Special attention, however, is required when detailed control information from varying departments is aggregated and simplified to fit public reporting using a single indicator.

Quality assurance
Quality assurance activities provide evidence to establish confidence that quality requirements will be met ([9] Gryna et al., 2007). Juran pointed out that quality control and QA have much in common: each evaluates performance, and each compares performance to goals. Quality assurance's main purpose is to verify that control is being maintained. Performance is evaluated after operations, and the resulting information is provided to the operating forces and others needing to know, including senior managers, corporate staff, regulatory bodies and the general public ([21] Juran and Blanton Godfrey, 1999, p. 4.3). [19] Juran (1977) articulated the need for QA as an external function to complement the Juran Trilogy's internal management role.
He also suggested that the financial function provides a useful managerial model for the quality function to emulate in terms of job descriptions and organisation.

How to report quality information
There are several ways quality information can be presented. The first and most obvious is PIs. It is tempting to use PIs because they have a precise and concrete aura. These two supposed virtues will most likely lose their attraction after an aggregation process through different departments and several hierarchical layers. The natural response is to add more, and more detailed, indicators, which rarely provide more insight but instead are likely to produce more confusion. Additionally, based on Shewhart's work, we can demonstrate that hospitals with the same performance levels can produce different PI values owing to common-cause variation ([24] Mohammed et al., 2001). Comparing these hospitals in league-table format would, therefore, be meaningless, because random variation is the only explanation for the different scores.
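To make the common-cause point concrete, here is a minimal simulation. It is not taken from the article: the figures (ten hospitals, 400 patients each, an identical 5 per cent underlying complication rate) are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: ten hospitals with an IDENTICAL underlying
# complication rate of 5%, each treating 400 comparable patients.
n_hospitals, n_patients, true_rate = 10, 400, 0.05

# Observed rates differ purely through common-cause (binomial sampling)
# variation -- no hospital is genuinely better or worse than another.
observed = rng.binomial(n_patients, true_rate, size=n_hospitals) / n_patients

# A league table built from these scores still ranks the hospitals.
for rank, h in enumerate(np.argsort(observed), start=1):
    print(f"Rank {rank}: hospital {h} at {observed[h]:.1%}")
```

Rerunning with a different seed reshuffles the ranking, which is precisely the sense in which a league table of statistically indistinguishable hospitals is meaningless.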
The second way to present quality information is QMS certification. Compliance with the ISO-9000 standards, for example, provides confidence that hospital managers have a well-functioning QMS ([12] van den Heuvel et al., 2005; [23] Marquardt, 1999). Certification, however, does not guarantee healthcare quality. The third is accrediting the entire healthcare organisation or parts of it. Accrediting a healthcare institute by the Joint Commission or the NIAZ in The Netherlands, for instance, supports the QMS's existence and functioning and provides guarantees that professional standards are followed. A recent study demonstrated that implementing a surgical safety checklist containing various professional standards in six Dutch hospitals was associated with a significant reduction in surgical complications and mortality ([3] de Vries et al., 2011). So, following standards enhances quality, and demonstrating that standards are met is therefore a strong QA instrument. Certification and accreditation have in common that a third party verifies that an organisation meets standards. The conclusion is fairly simple and transparent to the public: the organisation does or does not comply with the standards.

Different healthcare QA information
Based on the input-process-output model and Garvin's and Juran's quality definitions, we identified five types of quality that can be measured to provide healthcare QA information ([1] Boulding, 1956; [6] Garvin, 1984):

Input quality has to do with the materials and professionals involved in healthcare processes. Well-trained personnel are expected to deliver better quality, and a better hip prosthesis is expected to last longer. Serious quality problems related to prostheses have been described, for instance, in cardiac surgery ([8] van der Graaf et al., 1992). Most QMSs pay attention to this type of quality, and it can best be made explicit by an ISO certification ([11] van den Heuvel et al., 1998).

Healthcare process quality has to do with well-designed healthcare delivery processes and flawless performance. This quality can also best be made explicit by certification or accreditation. Unlike in industry, the patient is an active participant in the healthcare production process; therefore, some process PIs can provide relevant information. Access and waiting times, rework and medication errors are process PIs that are relevant to future patients ([13], [14] van den Heuvel et al., 2006a, b). These indicators are not relevant to a person buying a product; s/he is not interested in the way the production process performs, provided that product quality is excellent.

Healthcare product quality has to do with the situation that exists at the moment healthcare delivery is completed. Has the treatment been performed according to professional standards? Were there adverse events, complications or treatment side effects? Because the patient is part of both the healthcare process and the healthcare product (e.g. owning a new hip), there is some overlap between healthcare process and healthcare product quality. The best way to establish healthcare product quality is to assess the patient's health status after treatment is completed. Reporting healthcare product quality is best done using PIs. When healthcare product quality items are closely related to the healthcare process (proper medical and nursing procedures have been followed), certification and especially accreditation, such as Joint Commission accreditation, are also appropriate.

Health gain is a quality type we define similarly to reliability as used in engineering: the probability that a machine performs as intended, for instance after repair or maintenance, under specified operating conditions for a specified time. Reliability, therefore, is quality over time ([2] Condra, 1993). Similarly, health gain could be defined as the therapy-related reduction of complaints and limitations over time. So, if a patient gets a hip arthroplasty, the maximum health gain would be how long, and with what limitations, the patient lives if the best possible prosthesis was implanted, the best possible operating procedures were followed, and after that the best care was given, until complaints return. The next question would be: what are the scores of the hospital and physician I intend to visit, and how do they relate to the best possible result? This would provide an excellent quality PI. Who wouldn't want to know this before going to a physician? Although highly relevant, this information is hard to collect. It requires ongoing, sometimes 20-year, measurement, which is costly. Aggregation is hardly possible: what would be the value of averaging one excellent and one poorly performing physician? Furthermore, the information is prone to becoming outdated after every innovation, such as a new prosthesis or a new surgical procedure. We consider this information the most relevant of all five quality types but, unfortunately, its availability gets the lowest score.

Patient/client satisfaction can be measured using questionnaires or interviews. This information can be obtained at reasonable cost and is especially relevant for improving patient and client services and for quality planning. Its relevance to QA is limited, except to provide service-quality information.

Reporting, relevancy and availability
We now provide a framework in which we suggest how the different quality information is best reported. The quality types mentioned previously are shown in the first column. In the second and third columns we show how healthcare quality can best be measured and made explicit, comparing certification/accreditation and PIs. In the fourth, we estimate the relevance to the public of the information that can be produced from each quality type. In the fifth column, we estimate healthcare information availability. The "Xs" in Table I [Figure omitted. See Article Image.] represent scores.
One "X" in the certification/accreditation and PI column means it is not suitable to measure in this type of quality and five "Xs" means it is suitable. In the relevance and availability columns, one "X" means low and five "Xs" mean very high. We identified five quality types that can provide healthcare quality information. Four are embedded in the QMS and information is available. Health gain is not or at least very rarely part of the QMS and this information is scarce. Unfortunately, health gain information is also the most relevant to the public and especially to patients. So, we have to realize when looking at quality that the most relevant information is least available. Information tapped from the QMS has to be processed or at least aggregated to become relevant to the public. Two physicians, one excellent and the other poorly performing, demonstrate that aggregation deteriorates the information. [22] Lingsma (2010, p. 49) found that apart from differences in care quality, the larger part of the observed differences between hospital quality PI scores can be attributed to random variation, patient characteristics that were not adjusted, residual confounding because of imperfect case-mix correction and registration bias. She concluded, therefore, that no outcome indicators currently used are suitable for ranking hospitals. Given these quality PI imperfections, one could imagine that QMS certification, like ISO-9001:2008 or healthcare system accreditation like the Joint Commission might provide better transparency and assurance to the public than current quality-PIs. Organizing quality assurance Developing valid, reliable and relevant information to really measure quality is only one QA aspect. The other, also suggested by [28] Pronovost et al. (2007) is to set up an organisation to produce this information. We recognise five activities to organize QA: Determining which quality PIs are required to provide the most reliable and valid healthcare quality picture. This is a challenge given the current PIs' poor validity and reliability. So, better PIs have to be developed. Furthermore it has to be an ongoing process - inventing new PIs and updating existing ones. Determining the rules regarding how each PI has to be measured. In pressure ulcer cases, one could for instance exclude the child department or measure and report only departments (like the ICU) that are prone to pressure ulcers. Also, schemes for measuring pressure ulcers have to be designed to reduce registration bias. Guidelines are needed for total patients to be included to reduce random variation. Finally, strict rules have to relate to case mix adjustments. Measuring PIs by healthcare organization staff. Preferably these measurements are performed and incorporated in the ordinary quality-management process. Given the right PI's and rules, registration bias has to be reduced in this step. Verifying results and measurements independently that can be compared with a certified public accountant's work. A management letter can be produced that gives an impression of the total-quality measuring process. This can be added to the final quality-information publication. Aggregating and transforming quality information into an overall hospital-score on one or more dimensions. This process also needs specific guidelines, for instance on weighting factors and external verification otherwise some quality information might look useful but in fact is worthless. The Netherlands Healthcare Inspectorate covers the first two activities. 
They recommend PIs and guidelines for measuring them. There is debate between the Inspectorate and medical specialists about their relevancy and validity, because the indicators were also used to judge physicians and hospitals. To prevent this counterproductive debate, service-quality PIs have to be developed and defined by boards of independent experts, as the FASB and the IASB do for accounting rules. Indicators used by the Healthcare Inspectorate to evaluate a hospital will most likely differ from indicators that are valuable for informing the public. So, the ultimate goal of producing and publishing an indicator has to be perfectly clear. To deal with the last two activities, verification and aggregation, independent organizations comparable to accountancy firms in the financial world are required. When we look at certification and accreditation, the situation is more mature. There are organizations engaged in developing QMS standards and safety management systems, and these standards have been customized to healthcare ([16] ISO, 2001). Also, there are independent organizations that can execute certification or accreditation and provide specific certificates. Perhaps this situation is an additional, and a strong, argument for stimulating certification and accreditation as healthcare QA instruments.

Conclusions
We can see in The Netherlands that PIs are not suitable for determining quality differences between hospitals. Healthcare quality reporting can improve in a similar way to financial reporting. Quality assurance information, therefore, has to be derived from the healthcare institution's QMS. We identify five healthcare-quality types and provide an overview of how they can be measured to produce quality assurance information, using PIs and accreditation or certification. Health gain PIs provide the most valuable healthcare-quality information. Unfortunately, they are the least available and, therefore, need further development. To provide valid and reliable QA information, independent boards, like the accountancy-based FASB and IASB, need to develop standardised healthcare quality PIs and rules to measure them. Also, preferably, other independent organisations, comparable to accountancy agencies, are required to verify and validate healthcare institutions' PI scores. Only by implementing a coherent system can reliable and valid healthcare information be produced and presented to the public. Certification and accreditation, separately or in addition to PIs, can also produce valuable healthcare information. Valid and reliable healthcare quality information must be available if patients are to be in a position to make the right decisions when choosing their healthcare providers. The authors appreciate Professor Soren Bisgaard's contribution to this article. Sadly, Professor Bisgaard died in December 2009.

References
1. Boulding, K.E. (1956), "General Systems Theory - the skeleton of science", Management Science, Vol. 2 No. 3, pp. 197-208.
2. Condra, L.W. (1993), Reliability Improvement with Design of Experiments, Marcel Dekker, New York, NY.
3. de Vries, E.N., Prins, H.A., Crolla, R.M., den Outer, A.J., van Andel, G., van Helden, S.H., Schlack, W.S., van Putten, M.A., Gouma, D.J., Dijkgraaf, M.G., Smorenburg, S.M., Boermeester, M.A. and SURPASS Collaborative Group (2011), "Effect of a comprehensive surgical safety system on patient outcomes", The New England Journal of Medicine, Vol. 363 No. 20, pp. 1928-37.
4. Does, R.J.M.M., Vermaat, M.B., de Koning, H., Bisgaard, S. and van den Heuvel, J.
(2006), "Standardizing healthcare projects", Six Sigma Forum Magazine, Vol. 6 No. 1, pp. 14-23. 5. Dokter, A., Hofstede, R. and Lebbink, J. (2011), Dr Yep Kies de beste zorg, De Zorggeverij BV, Borgum. 6. Garvin, D.A. (1984), "What does product quality really mean?", Sloan Management Review, Vol. 26 No. 1, pp. 25-43. 7. Geenen, R. and Wessels, K. (2004), "AD Ziekenhuis Top 100", Algemeen Dagblad, No. 13, October. 8. van der Graaf, Y., de Waard, F., van Herwaarden, L.A. and Defauw, J. (1992), "Risk of strut fracture of Bjork-Shiley valves", Lancet, Vol. 339, pp. 257-61. 9. Gryna, F.M., Chua, R.C.H. and DeFeo, J.A. (2007), Juran's Quality Planning and Analysis, 5th ed., McGraw-Hill, New York, NY. 10. de Hen, P., van Rossum, N. and Visser, M. (1997), De Beste Ziekenhuizen van Nederland, pp. 77-92, Elsevier, No. 39. 11. van den Heuvel, J., Hendriks, M.J. and van Waes, P.F.G.M. (1998), "An ISO-quality system in the radiology department; a benefit analysis", Academic Radiology, Vol. 5 No. 2, pp. S441-S445. 12. van den Heuvel, J., Koning, L., Bogers, A.J.J.C., Berg, M. and Deijen, M.E.M. (2005), "An ISO quality management system in a hospital: bureaucracy or just benefits", International Journal of Health Care Quality Assurance, Vol. 18 No. 5, pp. 361-9. 13. van den Heuvel, J., Does, R.J.M.M., Bogers, A.J.J.C. and Berg, M. (2006), "Six Sigma: the ultimate cure for health care?", The Joint Commission Journal on Quality and Patient Safety, Vol. 32 No. 7, pp. 393-9. 14. van den Heuvel, J., Does, R.J.M.M. and de Koning, H. (2006), "Lean Six Sigma in a hospital", International Journal of Six Sigma and Competitive Advance, Vol. 2 No. 4, pp. 377-88. 15. International Accounting Standards Board (2010), International Financial Reporting Standards, IASB, London. 16. ISO (2001), ISO 9000 Guidelines for Healthcare Sector, ISO, Geneva, Switzerland, December. 17. Jayaraman, D. and Rivenson, H. (2008), "Accounting principles and measuring healthcare quality reply", The Journal of the American Medical Association, Vol. 299 No. 7, p. 764. 18. Juran, J.M. (1986), "The quality trilogy: a universal approach to managing for quality", Quality Progress, Vol. 19 No. 8, pp. 19-24. 19. Juran, J.M. (1977), "Quality and its assurance - an overview", paper presented at 2nd NATO Symposium on Quality and Its Assurance, London. 20. Juran, J.M. (1988), Juran on Planning for Quality - An Executive Handbook, 5th ed., McGraw-Hill, New York, NY. 21. Juran, J.M. and Blanton Godfrey, A. (1999), Juran's Quality Handbook, 5th ed., McGraw-Hill, New York, NY. 22. Lingsma, H.F. (2010), Measuring Quality of Care: Methods and Applications to Acute Neurological Diseases, Erasmus University Rotterdam, Rotterdam. 23. Marquardt, D.W. (1999), "The ISO 9000 family of International Standards", in Juran, J.M. (Ed.), Quality Handbook, 5th ed., McGraw-Hill, New York, NY. 24. Mohammed, A.M., Cheng, K.K., Rouse, A. and Marshall, T. (2001), "Bristol, Shipman and clinical governance: Shewhart's forgotten lessons", Lancet, Vol. 357, pp. 463-7. 25. NPCF (2010), "Kwaliteit in zicht", available at: www.npcf.nl (accessed July 2011). 26. Porter, M.E. and Teisberg, E.O. (2006), Redefining Health Care: Creating Value-based Competition on Results, Harvard Business School Press, Boston, MA. 27. Porter, M.E. and Teisberg, E.O. (2007), "How physicians can change the future of health care", The Journal of the American Medical Association, Vol. 297 No. 10, pp. 1103-11. 28. Pronovost, P.J., Miller, M. and Wachter, R.M. 
(2007), "The GAAP in quality measurement and reporting", The Journal of the American Medical Association, Vol. 298 No. 15, pp. 1800-2. 29. Pronovost, J.P., Miller, M. and Wachter, R.M. (2008), "Accounting principles and measuring healthcare quality-reply", The Journal of the American Medical Association, Vol. 299 No. 7, pp. 764-5. 30. de Ronde, T. and Smit-Winterink, M. (2003), Kwaliteitscriteria vanuit patientenperspectief voor onderzoek en behandeling van vrouwen en mannen met borstkanker, BorstkankerVereniging Nederland, Utrecht. 31. Stubbe, J., Gelsema, T. and Delnoy, D.M.J. (2007), "The Consumer Quality Index Hip Knee Questionnaire measuring patients' experiences of care after a total hip or knee arthroplasty", BMC Health Services Research, Vol. 7 No. 60, available at: www.biomedcentral.com/1472-6963/7/60 Comparing Cancer Care, Outcomes, and Costs Across Health Systems: Charting the Course 1. Joseph Lipscomb, 2. K. Robin Yabroff, 3. Mark C. Hornbrook, 4. Anna Gigli, 5. Silvia Francisci, 6. Murray Krahn, 7. Gemma Gatta, 8. Annalisa Trama, 9. Debra P. Ritzwoller, 10. Isabelle Durand-Zaleski, 11. Ramzi Salloum, 12. Neetu Chawla, 13. Catia Angiolini, 14. Emanuele Crocetti, 15. Francesco Giusti, 16. Stefano Guzzinati, 17. Maura Mezzetti, 18. Guido Miccinesi and 19. Angela Mariotto +Author Affiliations 1. Affiliations of authors: Rollins School of Public Health and Winship Cancer Institute, Emory University, Atlanta, GA (JL); Health Services and Economics Branch, Applied Research Program (KRY, NC), and Data Modeling Branch, Surveillance Research Program (AM), Division of Cancer Control and Population Sciences, National Cancer Institute, Bethesda, MD; The Center for Health Research, Kaiser Permanente Northwest, Portland, OR (MCH); Institute of Research on Population and Social Policies, National Research Council, Rome, Italy (AG); National Center for Epidemiology, Surveillance and Health Promotion, Italian National Health Institute, Rome, Italy (SF); Toronto Health Economics and Technology Assessment Collaborative (THETA), Department of Medicine and Faculty of Pharmacy, University of Toronto, Toronto, ON (MK); Evaluative Epidemiology Unit (GG) and Department of Predictive and Preventive Medicine (AT), Fondazione IRCSS, Istituto Nazionale dei Tumori, Milan, Italy; Institute for Health Research, Kaiser Permanente Colorado, Denver, CO (DPR); AP-HP URCEco and Hộpital Henri Mondor, Paris, France (ID-Z); Department of Health Policy and Management, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC (RS); Medical Oncology Unit, Oncology Department, Azienda Sanitaria, Florence, Italy (CA); Clinical and Descriptive Epidemiology Unit, Institute for Cancer Study and Prevention, Florence, Italy (EC, FG, GM); Veneto Institute of Oncology - IOV IRCCS, Padua, Italy (SG); Department of Economics and Finance, University of Rome "Tor Vergata", Rome, Italy (MM). 1. Correspondence to: Joseph Lipscomb, PhD, Department of Health Policy and Management, Rollins School of Public Health, Rm 720, 1518 Clifton Road, NE, Atlanta, GA 30322 (e-mail:[email protected]). This monograph highlights the multiple payoffs from comparing patterns of cancer care, costs, and outcomes across health systems, both within a single country or across countries, and at a point in time or over time. 
The focus of comparative studies can be on the relative performance of systems in delivering quality cancer care, in controlling the cost of cancer care, or in improving outcomes, such as reducing mortality rates and improving survival. The focus can also be on comparing the effectiveness, cost, or cost-effectiveness of competing cancer prevention and control interventions within a given system or across systems, while taking into account variations in patient characteristics, disease incidence and severity, resource availability, unit costs, and other factors influencing system performance. Two recurring themes in this monograph are: 1) the opportunities for cross-system analysis, learning, and improvement are enormous and just beginning to be tapped; and 2) the empirical and methodological challenges in realizing this potential are likewise enormous, but real progress is being made. In this concluding article, we revisit and illustrate both themes, with the aim of suggesting a research agenda for enhancing capacity to conduct strong empirical cross-system analyses in cancer care delivery. To focus the inquiry, we limit consideration to those cancer care systems, whether within or across countries, sufficiently developed to have access to registries that not only can document cancer incidence and mortality but, through linkage to additional data sources, can serve as platforms for patterns-of-care, costing, or other in-depth studies. This necessarily puts the spotlight on developed nations; and among these, we concentrate on those in Europe and North America represented at the September 2010 workshop, "Combining Epidemiology and Economics for Measurement of Cancer Costs," in Frascati, Italy (1). We distinguish between population-level studies, designed to compare the performance of health systems across countries or within a single country along specified dimensions, and patient-level studies, designed to investigate the effectiveness, cost, or cost-effectiveness of specific interventions and programs for individual patients (or individuals at risk for cancer) either within a given health-care system or across systems. In population-level studies, the outcome of interest might be summary measures of cancer mortality, survival, or other prominent patient outcome–oriented indexes of performance that are feasible to measure across systems for defined populations. Patient-level studies will often investigate the determinants of variations in patterns of care, costs, or outcomes, or apply economic evaluation methods to examine whether specific interventions offer good value for money. Although most patient-level studies to date are within-country or within-system, we note important examples of cross-country or cross-system analyses. In the next section, we highlight some examples of population- and patient-level studies. This sets the stage for the subsequent sections discussing a range of options, including some already in progress, for strengthening the data, methods, and organizational infrastructure to support policy-relevant comparative research on cancer outcomes and costs.

Comparisons Across Health Systems: Informative but Difficult

Population-Level Studies
The methods for conducting empirically sound cross-national comparisons of cancer incidence, mortality, and survival are relatively well developed.
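One building block of such comparisons is direct age standardization, which removes differences in national age structures before rates are compared. The sketch below is illustrative only; the age bands, standard-population weights, and age-specific rates are invented, not taken from IARC or EUROCARE data.

```python
# Illustrative direct age standardization for cross-national comparison.
# All numbers below are hypothetical placeholders.

std_weights = [0.30, 0.40, 0.20, 0.10]   # standard population shares: ages 0-39, 40-59, 60-74, 75+

rates_country_a = [10.0, 80.0, 300.0, 600.0]   # incidence per 100,000 by age band
rates_country_b = [12.0, 70.0, 280.0, 650.0]

def age_standardized_rate(rates, weights):
    """Weight age-specific rates by a common standard population so that
    differing national age structures do not distort the comparison."""
    return sum(r * w for r, w in zip(rates, weights))

print(f"Country A ASR: {age_standardized_rate(rates_country_a, std_weights):.1f} per 100,000")
print(f"Country B ASR: {age_standardized_rate(rates_country_b, std_weights):.1f} per 100,000")
```

Comparing crude rates instead would confound true incidence differences with differences in population ageing, which is why standardized rates underpin the registry-based studies discussed next.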
In recent years, important and frequent collaborative contributions have been made by research teams organized by the International Agency for Research on Cancer (IARC) of the World Health Organization and the International Association of Cancer Registries (IACR) (2), as well as by the EUROCARE (European Cancer Registry–based Study on Survival and Care) study group (3,4). Growing out of EUROCARE-3 was the CONCORD study, which provided survival estimates for about 1.9 million adults diagnosed with female breast, colon, rectum, or prostate cancers during 1990–1994 and followed up to 1999 (5). Projects led by EUROCARE and EUROPREVAL have analyzed cancer prevalence within and across European countries (4). Although these and other prominent studies (6) have compared disease incidence, prevalence, mortality, and survival (singly or jointly), there are evidently no recent cross-national studies on cancer cost, whether overall or by disease site. Although the Organization for Economic Cooperation and Development (OECD) compiles and publishes country-specific data on health expenditures and their components, it does not produce cross-national cost estimates by disease class or specific cancer diagnoses (7). There are noteworthy examples of within-country efforts to monitor health system performance on cancer metrics over time. In Canada, Cancer Care Ontario (CCO) supports the Ontario Cancer System Quality Index (8). In the United States, the Agency for Healthcare Research and Quality publishes the National Health Care Quality Report each year (9), and several US cancer agencies and organizations collaborate to produce an annual "report to the nation" on incidence, mortality, survival, and selected special topics (10).

Patient-Level Comparative Studies
The substantial diversity of health-care delivery systems across countries, and indeed within any country, creates significant opportunities for policy-relevant research comparing alternative approaches to care delivery along the cancer continuum: prevention, detection, treatment, survivorship, and end-of-life care (11,12). By observing how seemingly similar individuals either at risk for cancer or with the disease are treated in different systems, we have the opportunity, in principle, of benefitting from what amounts to quasi-natural experiments in care delivery (13). This could allow for benchmarking of "high quality" or "high value" services and identifying best (and less than best) practices. One cross-national comparison is well illustrated in the study of colorectal cancer treatment patterns in Italy and the United States reported herein by Gigli and colleagues (14), who found clear between-country differences in use of adjuvant therapy, open abdominal surgery and endoscopic procedures, and hospitalization. Similarly, Warren and colleagues (15) compared end-of-life care for non–small cell lung cancer patients aged 65 and over in Ontario and the United States, finding significantly greater use of chemotherapy in the United States but higher rates of hospitalization in the last 30 days of life in Ontario. Each study was feasible because the participating countries could link high-quality cancer registry data with administrative files to identify similar cancer patients and then track receipt of services over time. In cross-national settings where insurance or other administrative data files are not available or accessible, alternative strategies for augmenting cancer registry data can be pursued.
An instructive case in point is the "high resolution" analyses reported by Gatta and colleagues (16), examining the impact of guideline-recommended care on survival in samples of patients diagnosed with breast, colorectal, or prostate cancer across a number of European countries. Building on earlier EUROCARE studies (17–20), these analyses brought together cancer registry data enhanced with additional clinical detail from multiple participating registries and countries (eg, for breast cancer, data from 26 registries in 12 countries). Included as determinants of cross-country survival differences were such macro-level variables as total spending on health care and the relative availability of such inputs as computed tomography, magnetic resonance imaging, and radiotherapy equipment. Several implications flow from these cross-system studies. For valid and reliable analyses of cancer care, outcomes, and costs across geographical boundaries, high-quality registry data (or their clinical equivalent) are necessary, but generally not sufficient. Such data must be augmented with either administrative files or additional clinical information to provide an accurate time profile of patient-level diagnoses, services and procedures received, and outcomes, as well as patient, provider, and health system variables. For any given health system comparison, all pertinent variables should be defined and measured in the same way, or at least measure the same construct. We are far from achieving widespread international "interoperability" in measurement and reporting of cancer care use and costs. The resulting challenges in drawing valid cross-country inferences from existing studies are well illustrated in our review here of economic studies in colorectal cancer, as conducted primarily in countries with well-developed networks of cancer registries (21). In the main, studies from different countries yielded estimates of direct medical costs in ways that precluded sound comparison across studies. Few studies estimated direct nonmedical costs (eg, patient or caregiver time) or the productivity costs associated with disease and treatments. Indeed, aggregate and patient-level cost estimates varied in so many ways across countries that meaningful comparisons are now almost impossible. A broadly similar conclusion emerges from the review of colorectal cancer patterns-of-care studies from across Europe, Australia, and New Zealand (22) and in comparisons between Canada and the United States (23). That challenges in conducting micro-level analyses can arise across health-care systems within a country is underscored by Fishman and colleagues (24). They describe the data-system hurdles in conducting comparative effectiveness research in samples of elderly US cancer patients when some are enrolled in Medicare fee-for-service (FFS) plans and others in Medicare managed care plans that include health maintenance organizations (HMOs). As one direct response to the issue of data comparability within Medicare, Rosetti and colleagues (25) developed a "Standardized Relative Resource Cost Algorithm" (SRRCA) to assign standardized (comparable) relative costs to cancer patients in HMOs and FFS plans. Such innovative fixes as the SRRCA represent important, yet incremental, steps toward addressing a more fundamental issue in conducting sound comparative effectiveness research within the United States.
With its strong cancer registry networks but vast array of administrative data systems and non-interoperable electronic health informatics systems, how does the country advance toward a "national cancer data system," as advocated by the Institute of Medicine in 1999 (26) and echoed by multiple cancer policy makers since then (27)?

Building Capacity for Comparative Studies Across Health Systems

Enhancing the Empirical Base
High-quality sources of data to support scientifically sound population-based studies of cancer care, outcomes, and costs have emerged most often from partnerships involving some combination of government agencies, professional and provider organizations, and researchers. The empirical infrastructure required for comparative analyses will not simply emerge on its own as the product, somehow, of "natural market forces" in the health-care arena. Little disagreement arises among payers, providers, and consumers of cancer care over the contention that decision making about competing interventions should be informed by solid evidence on effectiveness and costs. But only rarely does any one of these private stakeholders, or any combination of them, have the financial and organizational wherewithal, or indeed an adequate incentive, to take on the full task of building and sustaining a population-level database for cancer research. Moreover, if by some means the necessary empirical infrastructure does emerge, one would want to encourage its broad and rapid application, not only by the parties that paid for it but by qualified researchers everywhere, and to assure that its use by one set of researchers does not diminish its availability or utility to others. In this sense, the data infrastructure needed to support population-level cancer research could well be characterized as a type of public good, with the implication that it will be underproduced in the absence of collective action organized and supported by public agencies. This line of argument (or at least aspects of it) has been well recognized in both the North American and European arenas for population-level cancer research (28). As noted, the EUROCARE project, based in Milan and Rome, has developed the capacity to draw survival and other surveillance data from over 80 publicly supported cancer registries in 21 European nations covering about 36% of their combined populations (16). In Canada, the health services research program jointly sponsored by CCO and the Institute for Clinical Evaluative Sciences (ICES) has developed publicly available datasets linking clinical and administrative information on cancer care, outcomes, and resource utilization in the province of Ontario (29), and now most Canadian provinces have similar linked datasets. Most recently, Ontario and British Columbia researchers teamed up to examine pre- and post-diagnosis cancer-related costs for multiple tumor sites (30). In the United States, the SEER–Medicare linked database represents a partnership involving the National Cancer Institute (NCI), the Centers for Medicare and Medicaid Services (CMS), and the federally supported SEER registries covering roughly 28% of the US population (31,32). The Cancer Research Network has developed standardized tumor, clinical, utilization, and cost data for large HMOs in the United States, all of which have electronic medical record systems (33,34).
The Centers for Disease Control and Prevention (CDC), in collaboration with seven state cancer registries and multiple university-based researchers, has supported the Breast and Prostate Cancer Data Quality and Patterns of Care Study, creating large population-based samples to study quality-of-care and survival outcomes (35). Current collaborative efforts, however, fall short of providing cancer researchers and policy makers with the data platforms required for population-based studies encompassing all geographical regions, all population groups, and the full range of clinical, patient-reported, and cost-related outcomes that can inform decision making. Specific research initiatives such as the NCI-created Cancer Care Outcomes Research and Surveillance (CanCORS) Consortium (36) have rendered proof of concept that primary data collection and multiple datasets linked together can effectively support a range of important innovative studies (37,38). But such initiatives alone are not intended to address the larger matter of how to develop and sustain the empirical base for population-based cancer research over time. What are the prospects for building sustainable data platforms that are accessible and affordable to a broad swath of individual researchers and policy makers? A comprehensive pursuit of this mammoth topic would require its own monograph, but we highlight some notable examples.

European Partnership for Action Against Cancer and Other European Confederations
The European Partnership for Action Against Cancer (EPAAC) is a confederation of over 30 public and private sector organizations that seeks to work closely with the European Union, the IARC, the European Network of Cancer Registries (ENCR), the EUROCARE project, the OECD, and others to advance an ambitious agenda for cancer prevention and control research (39). Among EPAAC's objectives is a "European Cancer Information System" that would draw on multiple partnerships to develop harmonized population-based data on cancer incidence, survival, prevalence, and mortality, and also high-resolution studies to examine the impact of medical resource availability, patient-level variables including lifestyle factors, and specific interventions on outcomes. In a complementary development, IARC and ENCR announced in 2012 the creation of a European Cancer Observatory to provide easier access to basic surveillance data from over 40 European countries (40). Although not disease-focused, "EUnetHTA" is a network of government-appointed organizations, regional agencies, and nonprofit organizations established in 2008 to harmonize and improve the quality of health technology assessment across Europe (41). As such, its work could eventually inform evaluation efforts in specific domains, including cancer.

CCO–ICES and Other Provincial Partnerships in Canada
Potentially well positioned to create and sustain data platforms for cancer care, cost, and outcomes research is Canada, at least on a province-by-province basis, as the CCO–ICES health services research initiative in Ontario is beginning to demonstrate (29). A particularly strong feature of this system is the capability of linking cancer registry data with additional clinical information and service provision data from the province's publicly funded universal health-care system. As a result, it is possible to track medical services rendered, the corresponding resources consumed, and survival outcomes over time on a population basis.
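To make the linkage idea concrete, here is a minimal, hypothetical sketch of the registry–claims join that datasets such as SEER–Medicare and CCO–ICES institutionalize. The table layouts, identifiers, and dollar values are invented for illustration; real linkages involve record-matching protocols, consent and privacy controls, and far richer schemas.

```python
import pandas as pd

# Invented miniature registry and claims tables.
registry = pd.DataFrame({
    "patient_id": [1, 2],
    "diagnosis_date": pd.to_datetime(["2011-03-01", "2011-06-15"]),
    "cancer_site": ["colorectal", "breast"],
})
claims = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "service_date": pd.to_datetime(["2011-03-10", "2012-01-05", "2011-07-01"]),
    "cost": [12_000.0, 800.0, 9_500.0],
})

# Link each claim to the registry record, then express every service
# relative to diagnosis -- the longitudinal cost profile described above.
linked = claims.merge(registry, on="patient_id")
linked["months_since_dx"] = (linked["service_date"] - linked["diagnosis_date"]).dt.days // 30
print(linked.groupby(["cancer_site", "months_since_dx"])["cost"].sum())
```

The registry contributes the clinically validated diagnosis and tumor attributes; the claims contribute the time-stamped services and costs. Neither source alone supports the patterns-of-care and costing studies the text describes.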
American College of Surgeons and American Society of Clinical Oncology
In the United States, there are several parallel initiatives underway to strengthen the capability for monitoring and improving the quality of cancer care. These include the American College of Surgeons (ACoS) Commission on Cancer's (CoC) Rapid Quality Reporting System (42), already adopted in over 20% of the CoC's 1500 approved cancer programs, and the new "CancerLinQ" information system under development by the American Society of Clinical Oncology (ASCO) (43). Both of these far-reaching initiatives are aimed at providing near real-time feedback to care providers and, eventually, at strengthening the basis for comparative effectiveness research on cancer therapies. As currently configured, neither appears readily geared to support population-based cost or cost-effectiveness analyses of care across the cancer continuum.

SEER–Medicare: Building on the Concept
A key to making further progress on the economic analysis front is pursuit of a strategy that is simple in concept but complex in execution: expand the SEER–Medicare linked dataset "model" to cover virtually 100% of the US population—in partnership with the CDC's National Program of Cancer Registries—and include linkages with administrative data from Medicaid and as many major private insurance plans and managed care organizations as possible. If data elements were standardized and harmonized across payers, the result would be linked cancer registry–claims data yielding population-representative samples across all ages, geographical areas, and types of health plans. Clearly, a number of major organizational, financial, and perhaps even legal hurdles would have to be cleared for such an ambitious plan to take flight and become sustainable over time.

Extracting Maximal Value From the Empirical Base: The Essential Role of Modeling
At the core of any epidemiologically based analysis of health outcomes and cost is a model (44) and a number of associated tasks. The tasks can be viewed as falling under two headings: 1) using the available data to assign values (either point estimates or probability distributions) to all the variables deployed in the analysis and then investigating each of the hypothesized causal connections, for example, the impact of intervention A on health outcome X, or the impact of Y on cost outcome C, or both, after adjusting for confounding; and 2) combining these estimated variables, and their inferred causal connections, into some form of decision model to investigate the impact of alternative intervention strategies on the outcomes of interest (eg, health outcomes, cost, or cost-effectiveness) for some selected target population. The decision model becomes the analytical platform for posing compelling "what if" questions. For example, how are costs expected to shift if intervention X′ is selected rather than X? At the same time, the decision model is the vehicle for evaluating policy options (X versus X′) to optimize some designated criterion, for example, cost per quality-adjusted life year. The pivotal point is that in studying the impact of X versus X′ in the selected target population, the analyst is not necessarily constrained by data availability or data quality limitations within that population. Rather, the aim is to make the decision model appropriate to the question at hand by bringing to bear the best available data from all feasible sources.
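The "what if" logic can be sketched in a few lines. The toy cohort model below is purely illustrative: the costs, utilities, survival fractions, and discount rate are invented placeholders, not estimates drawn from any study cited here.

```python
# Toy deterministic cohort model comparing intervention X with X'.
# All inputs are hypothetical.

def discounted_totals(annual_cost, annual_utility, survival_by_year, rate=0.03):
    """Return discounted cost and QALYs for a cohort, given the fraction
    of the cohort alive at the start of each year."""
    cost = qalys = 0.0
    for year, alive in enumerate(survival_by_year):
        d = (1 + rate) ** -year
        cost += alive * annual_cost * d
        qalys += alive * annual_utility * d
    return cost, qalys

# X: cheaper with worse survival; X': dearer with better survival.
cost_x, qaly_x = discounted_totals(8_000, 0.70, [1.00, 0.80, 0.60, 0.45])
cost_xp, qaly_xp = discounted_totals(11_000, 0.72, [1.00, 0.85, 0.70, 0.55])

icer = (cost_xp - cost_x) / (qaly_xp - qaly_x)
print(f"Cost shift: {cost_xp - cost_x:,.0f}; QALY gain: {qaly_xp - qaly_x:.3f}")
print(f"Incremental cost-effectiveness ratio: {icer:,.0f} per QALY")
```

Each input can be drawn from whatever source is best for that parameter, which is exactly the flexibility the preceding paragraph attributes to decision modeling.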
Statistical Inference and Prediction

Whatever the outcome being investigated, the within-country or cross-country context, or the strengths and limitations of the corresponding empirical base, paying close attention to strategies for both statistical inference and decision modeling is foundational. We briefly call attention to three problems of statistical inference (among many) that are especially pertinent: (a) appropriately characterizing the distributional features of the outcome of interest (a particular concern when cost is the dependent variable); (b) adjusting for patient-related and other selection effects that can otherwise lead to biased inferences about the impact of factors on outcomes, costs, or both; and (c) recognizing that cancer care interventions may be complex, multilevel, and delivered in geographical and clinical environments characterized by the statistical phenomenon of "clustering." Over the past two decades, considerable progress has been made in coping with (a), especially in the area of cost, where robust generalized modeling approaches have been developed (45–47). Regarding (b), the threat of selection bias in the estimation of outcomes, including cost, has long been recognized in the econometrics literature. In recent years, two basic approaches to bias reduction have been pursued, with applications in the health-care arena accelerating over the past decade: propensity score matching or weighting (48) and instrumental variable (IV) methods (49–54), both of which seek to identify and remove biasing effects arising from observable or unobservable influences on the dependent variable of interest (a brief illustrative sketch of the weighting approach appears after the policy questions below). Likewise, developing cost estimation and prediction models that jointly handle problems (a), (b), and (c) by recognizing the frequently hierarchical nature of interventions is a prime area for further work (54–56).

Decision Modeling

Consider the following policy questions:
• What are the relative contributions of screening and adjuvant therapy to achieving reductions in mortality from breast cancer?
• What is the effect of rising chemotherapy costs on the possible cost savings from colorectal cancer screening?
• What is the cost-effectiveness of human papillomavirus vaccination and cervical cancer screening in women older than 30 years?
• How can the clinical benefits, harms, and cost implications of a particular cancer screening program be estimated before its widespread adoption, so as to inform decision making about optimal screening policy?

These seemingly diverse inquiries in cancer prevention and control have important features in common. They are complex, involving many clinical and economic considerations. The time horizon over which clinical benefits, harms, and costs flow at the patient level is measured not in months but in years and, indeed, may span the remainder of the individual's life from the point of intervention onward. It is highly unlikely that either experimental or observational data would be available for any one cohort in sufficient detail and duration to include direct observations on all the variables involved in such a multiperiod investigation. These questions share one more feature: each has already been investigated in impressive detail using some form of decision modeling (57–60), most typically a variant of micro-simulation.
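Returning to problems (a) and (b) above, the sketch promised earlier shows one common combination: inverse-probability-of-treatment weights from a propensity score model, followed by a gamma regression with a log link to respect the right-skewed distribution of cost. The data are entirely synthetic, and the coefficients are illustrative; in real work the standard errors would also need robust or bootstrap treatment, which is omitted here.

```python
# Illustrative sketch: propensity-score weighting (problem b) combined with a
# gamma GLM with log link for skewed cost data (problem a). Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
age = rng.normal(65, 8, n)
stage = rng.integers(1, 5, n)

# Treatment assignment depends on observables: confounding by indication.
p_treat = 1 / (1 + np.exp(-(-4 + 0.04 * age + 0.3 * stage)))
treated = rng.binomial(1, p_treat)

# Right-skewed costs: gamma-distributed around a log-linear mean.
mu = np.exp(8 + 0.4 * treated + 0.02 * age + 0.25 * stage)
cost = rng.gamma(shape=2.0, scale=mu / 2.0)

# Step 1: propensity model and stabilized inverse-probability weights.
X_ps = sm.add_constant(np.column_stack([age, stage]))
ps = sm.Logit(treated, X_ps).fit(disp=0).predict(X_ps)
w = np.where(treated == 1, treated.mean() / ps,
             (1 - treated.mean()) / (1 - ps))

# Step 2: weighted gamma GLM with log link for cost (point estimates only).
X = sm.add_constant(np.column_stack([treated, age, stage]))
glm = sm.GLM(cost, X, family=sm.families.Gamma(sm.families.links.Log()),
             freq_weights=w).fit()
print(glm.params)  # coefficient on `treated` is roughly a log cost ratio
```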
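A toy micro-simulation conveys, in miniature, how such models address the screening questions listed above: each simulated individual draws an age at preclinical disease onset and a detectable ("sojourn") window, biennial screening can catch disease within that window, and individual-level outcomes are aggregated into cost per life-year gained. Every parameter below is an invented placeholder, and discounting is omitted for brevity; the models cited (57–60) are vastly more detailed.

```python
# Toy individual-level micro-simulation of a screening program; all inputs
# are invented placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
onset = 40 + rng.gamma(shape=6.0, scale=5.0, size=N)  # age at disease onset
window = rng.uniform(2, 6, N)                         # detectable years pre-symptoms

def simulate(screen: bool):
    rounds = np.arange(50, 75, 2)                     # biennial screening, ages 50-74
    caught = np.zeros(N, dtype=bool)
    if screen:
        for r in rounds:                              # early detection if any round
            caught |= (onset <= r) & (r <= onset + window)  # falls in the window
    # Invented survival and treatment-cost assumptions for the two pathways.
    age_at_death = np.where(caught, onset + 15.0, onset + window + 5.0)
    cost = np.where(caught, 35_000.0, 60_000.0)
    if screen:
        cost = cost + 100.0 * len(rounds)             # everyone attends every round
    return age_at_death.mean(), cost.mean()

ly_s, c_s = simulate(True)
ly_n, c_n = simulate(False)
# A negative value means screening is cost-saving under these toy inputs.
print(f"Cost per life-year gained: {(c_s - c_n) / (ly_s - ly_n):,.0f}")
```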
However strong or deficient the empirical base for population-based cancer research within a health system or across health systems, adopting a decision modeling strategy provides the additional flexibility to bring the best available data, whatever the source, to bear on the problem at hand.

Conclusions

The central challenge in conducting technically sound comparative analyses of cancer care patterns, outcomes, or costs across health-care systems is marshaling the skill, the will, and the fiscal and administrative resources to develop and sustain the data infrastructure needed to support strong (and frequently team-based) research. Whether for cross-national or within-country studies, the task is made all the more difficult because most of the component building blocks for national, regional, or state cancer data systems (including insurance and other administrative data sources, medical records systems, and even cancer registries) were not originally designed to support research. Nonetheless, the empirical base needed for a given investigation can frequently be created through some combination of dataset cleaning and updating (eg, re-abstracted registry records); dataset linkage (eg, registry data with claims files, or registry data with medical records); and/or dataset creation (eg, surveys collecting individual-level data on cancer risk-increasing or risk-reducing behaviors, time costs, or patient-reported outcomes, in some cases using the cancer registry to establish the sampling frame). Indeed, some projects have linked both secondary and newly created sources to provide a rich longitudinal picture of the cancer patient experience over time, from diagnosis, through treatment, and into the survivorship period (36).

Population-based cancer registries, whether covering a city, state, province, region, or entire country, are the bedrock not only of epidemiological investigations of disease trends but also of investigations of trends in cancer patterns of care and economic cost. As a result of sustained work by tumor registries and their affiliated experts worldwide, a consensus is emerging on the international rules-of-the-road for the definition, collection, and analysis of cancer surveillance data (2) (pp. 67–71). Over time, disparate registry operations have developed operational definitions and criteria for appraising data completeness, for accurately identifying true-positive cancer cases, and for computing and reporting statistics on incidence, prevalence, mortality, and survival (61,62). This standardization supports current and future efforts to foster comparative analyses of cancer care, outcomes, and costs. Yet to da