
Coda Educate: Conversation 1

Deep Learning – Pushing the boundaries of health AI. How do we make it fair and the data safe?

Over the last five years there has been a confluence of a few historical threads. Health data has become increasingly digitised, and massive-scale computing has become widely accessible. Together these have unlocked a technique developed in the early 1980s called deep learning, which excels at pattern recognition over large data sets.

Key trends in the last year include the first randomised clinical trials of AI in health, and the potential of AI for clinical discovery, particularly using multimodal data (electronic medical records, imaging, genomics) combined to find patterns across very large data sets.

Evidently, this is the beginning of precision medicine.

At the same time, there are systemic challenges facing AI in health, including workflow integration, bias, equity, and access.

Moreover, how can we mitigate these biases and make them fair?

Finally, how do we keep this sensitive data safe? Is the answer federated machine learning, where we send the algorithms out to local networks and train them there, rather than centralising the data?
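The federated idea above can be sketched in a few lines. This is a minimal, illustrative simulation of federated averaging (FedAvg): each site trains a copy of the model on its own local data, and only the model weights, never the patient data, are returned and averaged. The toy one-parameter model, the two hypothetical "hospital" datasets, and all function names are assumptions for illustration, not any production API.

```python
# Illustrative sketch of federated averaging (FedAvg): the model travels
# to each site's data; the data never leaves the site.
# The toy linear model y = w * x and the site datasets are hypothetical.

def local_update(weights, data, lr=0.1, epochs=5):
    """One site trains the shared model on its own data and returns weights."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # gradient of squared error w.r.t. w
            w -= lr * grad
    return w

def federated_round(global_w, site_datasets):
    """Send the current model to every site, train locally, then average
    the returned weights, weighted by each site's dataset size."""
    updates = [local_update(global_w, d) for d in site_datasets]
    sizes = [len(d) for d in site_datasets]
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total

# Two hypothetical hospitals, each holding samples drawn from y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

w = 0.0
for _ in range(10):
    w = federated_round(w, [site_a, site_b])
```

After ten rounds the global weight converges close to the true value of 2, even though no raw data point ever left its site; only weight updates crossed the network. Real deployments add secure aggregation and differential privacy on top, since raw gradients can still leak information.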





This podcast is brought to you by Philips

Martin Seneviratne

Martin is an Australian doctor now working as a research scientist in the AI research team at Google Health (formerly DeepMind), based in London. His work focuses on applications of machine learning to electronic health record data. Prior to Google, Martin did a masters in biomedical informatics at Stanford, and before that was a junior doctor in Sydney.

