
Review study makes recommendations to enhance Cancer Comparative Effectiveness Research

by Mary Ruth last modified May 03, 2012 03:41 PM
Which treatment for prostate cancer is most effective? Will a specific combination of cancer drugs increase patient survival for colon cancer? As the pace of scientific discovery continues to accelerate, patients and their providers face more choices and decisions about how to address their health care needs, yet the information that could inform those decisions is often hard to find.

Cancer comparative effectiveness research (CER) can help. This emerging field of research aims to help consumers, clinicians, purchasers, and policy makers make informed decisions that improve health care at both the individual and population levels. CER can efficiently and rapidly generate new scientific evidence, address knowledge gaps, reduce clinical uncertainty, and guide health care choices.

To better shape the next phase of CER, a team of scientists reviewed literature and conducted interviews with 41 comparative effectiveness researchers at the Agency for Healthcare Research and Quality’s Cancer Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) Comparative Effectiveness Research Consortium.

Based on these interviews and further analysis, these scientists, members of the AHRQ Cancer DEcIDE Data Committee, developed recommendations to improve, accelerate and implement the use of CER to inform cancer care and clinical practice. The team reports in the April 20, 2012 online issue of the journal Cancer.

Their review includes a discussion of data sets, with recommendations on how better to identify and evaluate those relevant to individual research questions. It characterizes the explosive growth in information technology, recent advances in research methods, and the large population-based data sets that are consequently now more available for research. In contrast to data collected for clinical trials, many of these data sets were not initially collected to support research, let alone to address specific research questions. For example, the National Cancer Institute’s SEER-Medicare data links cancer registry data to administrative and insurance claims data from Medicare. Together they have been used to analyze important questions such as the relative benefits of one type of radiation therapy for prostate cancer compared to another. These and similar data have been used to address a wide range of research questions for which prospective clinical trials are not feasible, practical, or sufficiently timely.

To aid CER researchers, the team conducted interviews and research and identified data sets relevant to cancer CER. They organized these sets into useful categories, characterized the data, and presented examples of their use. They then proposed recommendations both to maximize the sets’ immediate utility for CER and to inform the development of the next generation of data so that it will be more useful for research.

“We often find that similar concepts are measured in very different ways between data sets, making them difficult to compare in research. One recommendation we have for the CER community is to develop uniform definitions and consistent ways of collecting and coding data so that they can be more useful in research,” says Anne-Marie Meyer, PhD, the study’s lead author and the UNC Facility Director for the Integrated Cancer Information and Surveillance System (ICISS) at UNC Lineberger Comprehensive Cancer Center. “Longer-established systems also need to be enhanced to extend their ability to answer important questions in the context of contemporary cancer care.” Meyer is a research associate in epidemiology at UNC Lineberger.

The authors cite a need to establish systematic, standardized measures of collection and coding of data, pointing out that the lack of global standardization inhibits data pooling, comparability among multiple sources, and the ability to generalize findings in the context of the population’s diversity.

Bill Carpenter, PhD, assistant professor of health policy and management at the UNC Gillings School of Global Public Health and UNC PI for the AHRQ-funded Cancer DEcIDE CER Consortium, explains, “In addition, we need to develop a forum and a process for overcoming issues of data ownership, access, and governance. Many data sets are currently proprietary, and so the process of gaining access to them is bureaucratically cumbersome and time-consuming. As a result, the majority of time in a research project is spent gaining access to data rather than examining them and answering important questions that can improve patient care quality and cancer outcomes.” Carpenter is faculty director for ICISS.

The reviewers suggest strategies such as enhancing national cancer registries’ collection of data on genetic markers; increasing the transferability of data with open-source, open-access tools; and collecting more intermediate outcomes such as time to disease progression or recurrence, toxicities, and treatment side effects.

They recommend improving study design and population sampling and developing methods to leverage existing data.

Dr. Meyer notes, “It is important to recognize that new methods can extend the utility of existing data, but they cannot always correct for problems in the way data were collected. Only through improvements in study design and dataset development can some of these barriers be overcome. As such, new efforts to develop extensive new data would be well served to incorporate diverse research perspectives, including those of methodologists. Many limitations or data errors can be corrected up front, ensuring that these data will be useful in the future.”

Addressing these recommendations will not necessarily be easy and may involve changing longstanding paradigms in data ownership and governance. The team points out the need to develop codified relationships among federal agencies, their contractors, and the many data holders in order to facilitate timely data sharing for research that supports the public good. Standardized relationships between state and federal agencies would reassure data holders that their data will be used appropriately, make data available for research in a more timely fashion, and enable quick turnaround on important questions.

“With the internet, we have seen an astounding democratization of information. In this context, data liquidity should be our new standard: we should aspire to create new data that can be used broadly, by many audiences, to address many questions. As this happens, competitive advantage among researchers and research organizations will no longer be a function of ‘who owns the data,’ and will instead become, ‘Who can articulate the important questions, and appropriately and expeditiously analyze the data to correctly answer them?’” says Carpenter.

They underscore the need for improved access to data, but recognize the critical concern for data security, privacy and confidentiality.

Carpenter continues, “Data security and the privacy and confidentiality of the individuals whose data are collected must remain paramount. But we must be thoughtful about this. We cannot set up systems that so emphasize these needs as to be impractical and thus not useable. To be successful here, we must incorporate both technological and administrative systems to protect the data, but we must also emphasize the importance of building and protecting an essential trust fabric between the people whose data are collected, the data collectors, and the researchers. No successful relationship can get off the ground without trust, or be perpetuated without steadfast stewardship.”

Their final recommendation describes the advantages of a collaborative, multidisciplinary approach to represent and articulate the different approaches, cultures, terminology, measures, and priorities for cancer CER. Research must move beyond silos. They cite the recently established Patient-Centered Outcomes Research Institute (PCORI) as the obvious choice to lead such an effort.

Meyer concludes, “In recognition of the value and importance of CER, the Patient-Centered Outcomes Research Institute (PCORI) was created as a hub of CER and a center for leading and coordinating the national discussion on it. Though still in its formative stages, it has advanced very quickly under the leadership of Dr. Joe Selby. It has recently revised its initial research agenda, and is expected to begin engaging researchers in important CER in the very near future. UNC has been in consistent contact with the founding members of PCORI and, with Dr. Ethan Basch – a member of the PCORI Methodology Committee – joining the UNC faculty in the next few months, we look forward to continuing to engage in this discussion, contribute to this important research, and develop new knowledge to help patients make the best health care decisions possible given their specific needs and priorities.”

Additional study authors are Amy P. Abernethy, MD, Duke University; Til Sturmer, MD, PhD, UNC; and Michael Kosorok, PhD, UNC.
