
ROCHESTER, Minn. — The patients of Mayo Clinic, whether they know it or not, have seeded a burgeoning digital health industry with their personal data. Details about their care, from disease diagnoses to digital tracings of their heartbeats, have been provided to companies for training artificial intelligence systems to detect dangerous arrhythmias, pregnancy complications, and deterioration in the hospital.

In the past two years, 16 companies have gained access to de-identified patient data through licensing deals that have widened Mayo’s revenue stream and generated crucial insights for health tech firms eager to commercialize digital products and services. Ethics experts worry, though, that patients’ interests are falling by the wayside: They were not notified of the deals or asked to consent to the use of their data for the products under development.


Mayo, which operates medical centers across the country, has become one of the most active participants in this data trade as it embraces AI to transform the way it delivers care. Executives said its arrangements with AI companies are part of a cycle of innovation in which privacy and progress are not mutually exclusive. They said both goals are attainable — and essential — if Mayo is to develop more effective diagnostic tools and treatments.

“Why is Mayo Clinic allowing access to our de-identified data? Because it will advance medicine and make care for patients in the future better,” said Andy Danielsen, chairman of Mayo Clinic Ventures, the health system’s investment arm. “That’s the only reason we’re doing this.”

But in a marketplace where patient data is an increasingly valuable commodity, the line between innovation and exploitation has never been thinner, as patients confront new privacy risks and the possibility that their data could be re-identified, exposing them to potential job losses, discrimination, and social isolation.


STAT interviewed Mayo executives and outside ethics experts to examine the tension between developing AI tools and the fundamental privacy rights of patients, including questions at the heart of a broader push by U.S. hospitals to use patient data and AI technology to improve care. Should the details of data deals with outside companies be disclosed to patients? Should they be allowed to opt out? And what, if anything, is owed to patients if their data are used in products that generate a windfall for Mayo and its private partners?

“If your data and biospecimens are valuable, they are yours,” said Kayte Spector-Bagdady, a bioethicist and lawyer at the University of Michigan Medical School. “There is a harm of respect for people to use your stuff without your permission, or make money from your stuff without giving some back to you.”

But compensating patients for their data is a potential flashpoint for academic medical centers. Mayo executives said it could slow innovation and undermine the development of new treatments and digital services, a need whose urgency has been reinforced by the Covid-19 pandemic.

So far, the hospital’s 16 data deals with technology companies have generated less than $5 million, a tiny fraction of the more than $13 billion it collects annually. Mayo executives said the health system does not sell data to brokers or to anyone else who would seek to directly purchase it. Instead, it partners with private firms, and invests in them, to co-develop products that rely in part on the research of its physicians and the data they can supply on patients. The financial upside is spread over a longer time horizon, as these companies grow and begin to sell their products more widely. The health system also furthers its advantage against smaller competitors who do not have enough patient data or financial firepower to compete on medicine’s AI frontier.

Mayo is hardly the only health system sharing data with technology companies or wrestling with the proper balance between data rights and innovation. Many academic medical centers are participating in a vast marketplace in which data and biological specimens are shared with digital startups and large technology companies such as Google, Microsoft, Amazon, and IBM.

Some of the data arrangements struck in recent years have generated controversy, including a deal in which Memorial Sloan Kettering Cancer Center granted access to 25 million patient pathology slides to an artificial intelligence company called Paige.AI. The cancer hospital, and several of its clinicians and board members, held equity stakes in the company, raising an uproar over possible conflicts of interest and use of patient data without their knowledge. The Catholic hospital chain Ascension was also criticized for a deal in which it gave Google widespread access to identifiable patient data, without informing doctors or patients, leading to a federal inquiry.

Mayo has also struck up a data storage and research partnership with Google and has said it may allow a small number of the tech giant’s employees to access identifiable patient data in limited circumstances. Such a scenario might arise if Mayo wants help from a Google engineer to combine certain data sets for AI research where it is impractical to strip out identifiers.

“This would never be about us granting the keys to the data to Google,” said John Halamka, president of Mayo Clinic Platform, the health system’s initiative to use AI to help develop new treatments and digital services. “It would be us bringing them in, and we would control their access.”

The Mayo Clinic campus in Rochester, Minn. Jim Mone/AP

Innovation outpaces patient protections

The rapid advance of AI, and the dealmaking that surrounds it, is outpacing efforts to ensure that patients’ data rights are adequately protected. Mayo and other hospitals are pressing forward with data sharing arrangements as they seek to update the terms of engagement.

Mayo has launched a broad initiative to transform the way it delivers care by 2030, emphasizing the need to develop novel digital products that rely heavily on patient data. Its work began several years ago, as more sophisticated types of AI created opportunities to build diagnostic tools and help discover drugs.

One of Mayo’s earliest deals was struck in 2016, when it joined forces with the mobile electrocardiogram company AliveCor to build products to track heart function and flag abnormalities. The company wanted to build a product around Mayo research showing that AI could detect elevated blood potassium levels, a potentially fatal condition, by reading EKG data. But to train its AI to do so reliably, AliveCor needed huge amounts of data to test and refine its product, and Mayo had just what it was looking for.

The health system agreed to supply the company, which had hired several former Google engineers, with more than 2.8 million digital EKGs collected from patients over 20 years. The two also partnered on a related project to use AI to detect atrial fibrillation, an arrhythmia that increases a patient’s risk of stroke and other cardiovascular problems.

Their work, with patient data at its center, led to the development of the KardiaBand, the first algorithm cleared by the FDA to help consumers detect heart rhythm problems. The company beat Apple to market by nine months.

In 2017, Mayo Clinic made several investments in AliveCor, taking an equity stake in the company. It was one of many deals crafted with the help of Mayo Clinic Ventures, which helps to commercialize insights derived from patient data and its physicians’ research. Mayo Clinic Ventures started in 1986 as a small technology transfer office and has steadily grown into an investment fund with financial stakes in a wide array of companies.

In the last two years, it has struck 321 licensing deals, generating about $80 million per year for Mayo. Most of those deals involve commercialization of traditional medical devices and biopharmaceutical products developed in part by the health system’s physicians. But a growing number are focused on the development of digital devices and products that rely on artificial intelligence — and access to patient data.

Executives said they could not identify eight of the 16 tech companies involved in these deals because those companies had not agreed to publicly disclose their relationships with Mayo. Such secrecy is common among digital health firms seeking to build products without tipping their hand to competitors. The other eight include Odonata Health, which is developing a wearable that uses AI to track a woman’s health during pregnancy; Cadence Neuroscience, which is developing an epilepsy treatment that delivers targeted electrical stimulation to brain tissue; and Eko, the maker of a digital stethoscope embedded with Mayo-developed algorithms to detect early signs of heart failure.

Mayo declined to provide copies of the data-sharing agreements with those companies, citing confidentiality provisions in the contracts. Danielsen, the chairman of Mayo Clinic Ventures, said the health system works with a data expert at Vanderbilt University to ensure that patient data are fully de-identified and patient privacy is protected.

“We do what’s right for the patient, which means privacy first, and then we do the best business arrangement we can — not the other way around,” he said, adding that the use of patient data is crucial to improving care for future generations.

“We take the position that this is data we’ve been entrusted with clearly by our patients, and that we have a moral and ethical obligation to use it ourselves and with companies to advance medical care,” Danielsen said. “That said, we have a huge responsibility on us to make sure data is de-identified.”

Sharing data with outside companies has been part of health care innovation for decades. But in the case of AI products, that use of the data is more direct and integral to the product itself, which changes the nature of patients’ involvement and the risks they face in the process, ethics experts said.

Spector-Bagdady, the bioethicist from the University of Michigan, said her research has shown that patients have articulated two concerns regarding the commercialization of their data. The first is the risk that their data could be combined with other data sets, such as GPS traces from their smartphones or health-related social media posts, to re-identify them. A sub-industry has formed around using the data patients share in non-clinical settings to build shadow health records, which offer a more granular picture of patients’ behaviors and medical problems.
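To see why combining data sets is such a potent re-identification threat, consider a minimal sketch of a linkage attack. Everything below is invented: the field names, records, and outside source are hypothetical stand-ins, echoing the well-known finding that quasi-identifiers such as ZIP code, birth date, and sex can single out most individuals when joined against a public data set.

```python
# Hypothetical linkage attack: joining a "de-identified" clinical
# extract to an outside data set on shared quasi-identifiers.
import pandas as pd

# De-identified extract: names stripped, diagnoses retained.
deidentified = pd.DataFrame({
    "zip": ["55901", "55902"],
    "birth_date": ["1970-03-12", "1984-11-02"],
    "sex": ["F", "M"],
    "diagnosis": ["atrial fibrillation", "epilepsy"],
})

# Outside data with identities, e.g. a voter roll or a commercial
# profile built from smartphone location and social media data.
outside = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["55901", "55902"],
    "birth_date": ["1970-03-12", "1984-11-02"],
    "sex": ["F", "M"],
})

# The join re-attaches names to diagnoses, even though no identifier
# ever appeared in the "de-identified" file itself.
relinked = deidentified.merge(outside, on=["zip", "birth_date", "sex"])
print(relinked[["name", "diagnosis"]])
```

Defenses such as generalizing birth dates to years or suppressing rare combinations raise the cost of this join, which is why de-identification standards focus on quasi-identifiers, not just names.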

Patients are also concerned about the use of their data for the financial gain of others, including large hospital systems and private companies that are part of the $3.5 trillion health care industry in the U.S. This concern is broadly exemplified by the story of Henrietta Lacks, a Black woman whose cancer cells were used without her permission for decades of commercial research, resulting in numerous medical advances and products. Her genetic information and medical records were eventually published, which raised privacy concerns among family members.

That case eventually helped lead the Department of Health and Human Services to require researchers to get informed consent before commercializing biospecimens collected from human subjects. But those regulations are narrowly crafted, leaving it up to hospitals to determine how to handle much broader swaths of information collected on their patients.

In a recent paper published in the New England Journal of Medicine, Spector-Bagdady and colleagues described an ethical framework for the use of patient data by academic centers. They wrote that the University of Michigan has determined that a standard consent form typically signed by patients at the point of care is not sufficient to justify the use of their data for commercial purposes, even in de-identified form. These forms typically ask patients to consent to the re-use of their data to support medical research.

“Because of important privacy concerns that have been raised after recent revelations regarding such agreements, and because we know that most participants don’t want their data to be commercialized in this way, we currently prohibit the sharing of data under these circumstances,” they wrote.

That means many years’ worth of patient data cannot be shared with third parties, a stance that undermines the emerging business model that relies heavily on access to such information to build AI products and services.

“That’s an important kind of research, but we don’t feel comfortable sharing retrospective data with big industry like that,” Spector-Bagdady said. “When we can tell patients about the relationship we’re entering into, then it’s OK. But we have not accepted wholesale data-mining agreements.”

Laying new ground rules for data use

Mayo is working to address the ethical quandaries posed by data sharing with commercial partners on several fronts. Executives said they are reexamining the consent process and launching a patient advisory committee to vet data deals with outside parties.

The health system is also adopting technical solutions designed to protect patients’ privacy by preventing any of their data from leaving Mayo’s control. Instead of shipping patient information to outside parties, the health system is allowing them to view it in a virtual environment that the hospital controls.

That allows Mayo to keep tighter control over the data and to monitor the activities of third parties, preventing unauthorized access or inappropriate uses. It has used this type of framework in several recent deals, including a data-sharing partnership with Nference, a Cambridge, Mass.-based company that uses AI to comb biomedical data sets for insights.
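Mayo has not published the design of its virtual environment, but the general pattern, sometimes called a data enclave, can be sketched in a few lines. In this hypothetical example, raw records never leave the controlled environment, outside partners can run only pre-approved aggregate queries, and every request is written to an audit log; all names and records are invented.

```python
# Hypothetical sketch of a data enclave: raw records stay inside,
# partners get only the results of pre-approved aggregate queries,
# and every request is audited. All names and records are invented.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("enclave.audit")

# Raw records live only inside the controlled environment.
_PRIVATE_RECORDS = [
    {"age": 61, "dx": "afib"},
    {"age": 47, "dx": "heart failure"},
    {"age": 61, "dx": "afib"},
]

# Partners may run only queries vetted in advance by the data holder.
APPROVED_QUERIES = {
    "count_by_dx": lambda dx: sum(r["dx"] == dx for r in _PRIVATE_RECORDS),
}

def run_query(partner: str, query: str, *args):
    """Execute an approved query for a partner and log the access."""
    if query not in APPROVED_QUERIES:
        audit_log.warning("%s denied unapproved query %r", partner, query)
        raise PermissionError(f"query {query!r} is not approved")
    result = APPROVED_QUERIES[query](*args)
    audit_log.info("%s ran %r with args %r at %s", partner, query, args,
                   datetime.now(timezone.utc).isoformat())
    return result  # only the aggregate result leaves the enclave

print(run_query("partner_co", "count_by_dx", "afib"))  # -> 2
```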

Nference was the first partner for Mayo’s new data analytics platform, which seeks to compile and analyze data to accelerate efforts to discover new drugs and other treatments. The platform relies on a federated learning architecture, which allows multiple entities to build a common AI model without sharing their data with one another. It enables the training of an algorithm by drawing on data kept in multiple devices or local servers, so the data never need be combined in a centralized location.
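The best-known recipe for this is federated averaging (FedAvg): each site fits the current model to its own data, and a coordinator averages the resulting weights into the next global model. The toy example below, using an invented linear model and simulated data, shows the shape of the training loop; it is a sketch of the general technique, not of Mayo’s or Nference’s actual system.

```python
# Toy federated averaging (FedAvg): three sites fit a shared linear
# model to their own private data; only weights travel between sites
# and the coordinator, never raw records.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Simulate each site's private data set; in a real deployment these
# would live on separate servers behind each institution's firewall.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few steps of least-squares gradient descent on one site's data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * (2 / len(y)) * X.T @ (X @ w - y)
    return w

global_w = np.zeros(3)
for _ in range(20):                       # 20 communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)  # coordinator averages weights

print(np.round(global_w, 2))              # close to true_w: [ 1. -2.  0.5]
```

Each round, only the three weight vectors cross the wire; the 150 simulated patient records never leave their sites.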

“Our intent is to not share the data,” said Halamka, the president of Mayo Clinic Platform. “If we’re sharing the insight, but not the data, that seems more reasonable.”

But that doesn’t necessarily resolve the underlying need to directly inform patients about these partnerships and obtain consent for the use of their data. Danielsen said a group of executives, physicians, and lawyers is now evaluating how to update the hospital’s consent process to match current business practices, such as making more specific disclosures to patients or allowing them to opt out of data sharing in certain circumstances.

In a way, he said, the work requires hitting a constantly moving target, as the use of data evolves in research and in efforts to develop new digital products and AI tools.

“Today we’re working off consents that everybody felt good about several years ago,” he said. “But society’s notion of privacy and what should be accepted and not accepted is always changing. We’re constantly looking at our consents and saying, ‘Is this good? Is it adequate? Is it understandable?’ It’s a continual process that hopefully leads to improvement.”

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

