Microsoft Azure, UC San Francisco’s Center for Digital Health Innovation (CDHI), Fortanix, and Intel recently collaborated to establish a confidential computing platform to boost artificial intelligence (AI) in healthcare.
The platform will include privacy-preserving analytics to accelerate the development and validation of clinical algorithms within a “zero-trust” environment. The goal is to protect intellectual property and address data security concerns.
Additionally, the organizations expect the new technology to reduce overall time and cost.
“When researchers create innovative algorithms that can improve patient outcomes, we want them to be able to have cloud infrastructure they can count on to achieve this goal and protect the privacy of personal data,” Scott Woodgate, senior director, Azure security and management at Microsoft, said in the press release.
“Microsoft is proud to be associated with such an important project and provide the Azure confidential computing infrastructure to healthcare organizations globally.”
The collaboration will leverage the confidential computing capabilities of Fortanix’s Confidential Computing Enclave Manager, Intel’s Software Guard Extensions (SGX), and Microsoft Azure’s confidential computing infrastructure.
Additionally, CDHI’s BeeKeeperAI will enable more efficient data access, transformation, and orchestration across various data providers, the companies said.
Together, these capabilities will validate a proven clinical algorithm against a simulated data set.
Confidential computing technology protects patient privacy by enabling a specific algorithm to interact with a specifically curated data set. The healthcare system will be in control of the data set at all times through its Azure confidential computing cloud infrastructure.
The data will be placed into a secure enclave powered by Intel SGX, where Fortanix’s cryptographic functions, including validation of the signature of the algorithm’s image, protect it.
Although the data will be processed separately, multiple organizations can leverage the system without needing to trust one another.
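The admission check described above, where only an algorithm image whose signature the data owner has approved may run against the curated data set, can be sketched loosely in code. The function names and the digest-based stand-in for a real cryptographic signature below are assumptions for illustration; the press release does not describe Fortanix’s actual API, and real deployments rely on asymmetric signatures and SGX remote attestation rather than a simple hash comparison:

```python
import hashlib
import hmac

# Illustrative sketch only: not the Fortanix or Azure API.
# A hash digest stands in for a full cryptographic signature.

def approve_image(image: bytes) -> str:
    """Digest the data owner records when approving an algorithm build."""
    return hashlib.sha256(image).hexdigest()

def admit_algorithm(image: bytes, approved_digest: str) -> bool:
    """Admit an algorithm image into the enclave only if its digest
    matches the approved one (constant-time comparison)."""
    actual = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(actual, approved_digest)

# The data owner approves one specific algorithm build...
approved = approve_image(b"algorithm-image-v1")
# ...and the enclave refuses any image that does not match it.
print(admit_algorithm(b"algorithm-image-v1", approved))   # True
print(admit_algorithm(b"tampered-image", approved))       # False
```

The key design point is that the data owner never releases the data; it only releases a decision about which algorithm image is allowed to touch the data inside the enclave.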
“Trusted execution environments enabled by Intel SGX could be key to accelerating multi-party analysis and algorithm training while helping to keep data protected and private,” said Anil Rao, vice president of data center security and systems architecture platform hardware engineering division at Intel.
“This collaboration with UCSF, Fortanix and Microsoft Azure demonstrates the amazing potential of confidential computing with Intel’s hardware-rooted protection defending the data.”
A clinical-grade algorithm that quickly identifies individuals who require a blood transfusion in the emergency department will be used as a reference standard to compare validation results, the press release stated.
Additionally, the validation process will test whether the model or the data is vulnerable to intrusion at any point.
The overall collaborative goal, in addition to validation, is to support multi-site clinical trials that will boost the development of regulated AI solutions.
Algorithms that are used in the context of delivering healthcare must be able to consistently perform across all patient populations, socioeconomic groups, and geographic locations in order to gain regulatory approval, the press release highlighted.
Validating AI algorithms before they are implemented in clinical practice has been a challenge for multiple reasons.
“Oftentimes, this has been an insurmountable barrier to realizing the promise of scaling algorithms to maximize potential to detect disease, personalize treatment, and predict a patient’s response to their course of care,” explained Rachael Callcut, MD, director of data science at CDHI and co-developer of the BeeKeeperAI solution.
“Bringing together these technologies creates an unprecedented opportunity to accelerate AI deployment in real-world settings.”