By Saurabh Vakil and Jim Davis
“Life is a tournament of multi-player games. Each game is unique. Mother nature and the environment are players. AI and bots are the pieces that help us extend the finish line and survive in the game of life.” – Craig Mundie, President, Mundie & Associates.
Mundie, the former Chief Research Officer for Microsoft, sounds as though he is reciting from a philosophical treatise, but the statement is actually a quote from his keynote speech at the recent National Institutes of Health (NIH) workshop on “Harnessing Artificial Intelligence and Machine Learning to Advance Biomedical Research.”
The NIH is making a concerted effort to assess the state of AI in biomedicine, zeroing in on focus areas and identifying challenges and obstacles. The goal: reinforce its leadership in discovering applications of the technology and fulfilling its mission to “Enhance health, lengthen life, and reduce illness and disability.”
Francis Collins, the director of the NIH, led the one-day workshop, which drew a cross-section of researchers, academics, specialist physicians, and technology professionals eager to hear from the best minds in the field.
THE IMPACT OF BRAIN 2.0 ON BIOMEDICAL RESEARCH
Mundie kicked off the workshop with a thought-provoking thesis that places machines ahead of humans in “learning ability.” Computers learn at a scale humans cannot and never will be able to match. The conventional wisdom that machines are “trained” by humans through big data and evolving algorithms will give way. Projects such as AlphaGo have already shown that systems can “learn” without being trained on datasets, and in some cases can deploy strategies that humans have not yet devised. OpenAI, meanwhile, has taken on multi-player cooperative strategy gaming and is beating top-quality opponents at Dota 2.
- Following from this, he proposes viewing research through the lens of an upgraded version of multi-player gaming and claims that machines can become superhuman without any help from humans.
- The players he sees in this game are Mother Nature, the environment, and ‘bots’ helping humans, where ‘bots’ covers not only computer-assisted scenarios but also human coaches in wellness and preventative health care.
- It is conceivable that humans will be trained by machines on high-dimensional problems, though he concedes we are still a long way from machines being able to explain their answers to us.
- Applying this game theory, he argues, could help us acquire prescience about our health.
Mundie calls his theory Brain 2.0 and a personal “Penicillin Discovery Moment.” He illustrates his point with examples from his work advising SomaLogic as well as his personal experience with his wife’s cancer diagnosis. The condensed version of his theory is that genomes and proteomes will provide much of the key dataset for medical advances. Leaps in compute power (he believes quantum computing will be widely available within the decade) will enable the rapid discovery of patterns of disease progression (pathways, as they are called in the medical world) and the subsequent reverse engineering of treatments from an individual’s “personal identifying data,” the sum of that person’s raw proteomic data. In other words, Mundie theorizes that the entire process of identifying disease treatments will be turned on its head, away from the current approach of extrapolating individual treatments from a population sample.
It remains to be seen how this provocative theory plays out in the future and whether it makes a real impact on biomedical research.
REMOTE MONITORING WITHOUT WEARABLE SENSORS: One development that promises imminent real-world use is the outcome of a fascinating project dubbed Emerald that seems like the stuff of science fiction. Prof. Dina Katabi at MIT heads the development of this modified Wi-Fi box, which has the ability to “see” through walls. Combining wireless technology with machine learning algorithms, the box can monitor the movement of one or more individuals from the other side of the wall of the room they are in. Dr. Katabi and her research team have created a system that can do many things, including:
- Monitor breathing, sleep, heart rate, and gait speed
- Measure sleep without the cumbersome mesh of wires and sensors, at 80% of the accuracy of sleep-lab measurement.
This tool can have an impact on the diagnosis of many disorders. For instance, gait speed is an important endpoint in Parkinson’s disease and a surrogate marker for cognitive impairment. There is also potential to use breathing as a predictor of pulmonary disease and Parkinson’s, as well as depression and Alzheimer’s.
A closer look at the architecture reveals that the Wi-Fi box is pushing the frontiers of both edge and fog computing, with its intelligent sensor residing locally and data being processed in the local environment. Combine this with chatbot technology and you could have a highly intelligent “artificial” home-based caretaker for the elderly with unprecedented capabilities.
MORE APPLICATIONS OF AI: Other areas of research and work in the field show how AI is slowly but surely moving forward to become a differentiator in clinical settings. The inflection point at which the technology makes a big and lasting impact on patient outcomes is around the corner. Consider the following examples:
- Radiology: AI in radiology continues to live up to its promise. With deep learning research having doubled in 2017, diagnosing cancers in areas of the body that were previously accessible only through surgical intervention has crossed over to detection through images, according to Dr. Ronald Summers, a Senior Investigator at the NIH Clinical Center. “Segmentation” and “normalization” are the key processes behind these breakthrough diagnostic capabilities. Applications include detection of colonic polyps, lymph nodes, spine disease, colitis, and cancers of the prostate and pancreas. Going forward, research combining imaging with genomics holds great promise.
- Pediatrics: With 20% of the US population being pediatric, a distinct research focus is highly warranted, and luckily it is provided by researchers like Dr. Judith Dexheimer of the University of Cincinnati. She explains that ML research in pediatrics is different because it must account for the health dynamics of this population across various stages of growth and their impact on genetic changes, vital signs, medication dosing requirements, and so on. ML has been applied to diagnosing sepsis and appendicitis, as well as to decisions involving Pediatric Intensive Care Unit transfer. Researchers are finding new applications in healthcare and provider decision support and in integrating natural language processing (NLP) with patient encounters. In a unique experiment that analyzed pediatric data from multiple sources and locations, researchers were able to predict, and prevent, a potential case of child suicide. With the number of suicide cases among children increasing, the results of this ground-breaking research can significantly enhance children’s and adolescents’ physical and mental well-being.
- Genomics: One of the most fascinating applications of machine learning is deciphering genome functions as causal determinants of a large number of diseases, as highlighted by Dr. Anshul Kundaje of Stanford University in his research on the subject. Using a landmark study of genetic variants, Dr. Kundaje has shown that a certain class of genes has a much stronger statistical association with Alzheimer’s disease than the rest, paving the way for early prediction of the disease’s onset.
- Life-Threatening Diseases: Sometimes the most one can do about life-threatening and debilitating diseases like congestive heart failure (CHF) or epilepsy is to manage them after they occur. That is often too late. But what if machine learning models could predict such conditions well in advance? IBM has partnered with leading pharmaceutical companies and medical institutions to develop deep learning models for predicting CHF, epileptic seizures, and Huntington’s and Parkinson’s diseases. Eileen Koski of IBM Research described the areas in which IBM has poured its research money. One example: a speech-based AI tool intended to normalize mental health diagnosis and evaluation, discover hidden cues, and multiply reach. The tool can distinguish among normal, manic, and schizophrenic individuals in a sample.
DATA SHARING AND OTHER CHALLENGES
One of the primary requirements for AI is the availability of data in large quantities; machines are “trained” to acquire artificial intelligence using big data. Before its significance is taken for granted, Dr. David Heckerman cautions that more needs to be done to motivate all stakeholders (doctors in private practice, researchers, and institutions) to share data without hesitation. According to Heckerman, who is also a former Microsoft employee, the stakeholders, especially doctors in private practice, equate data with dollars. His research focuses on finding ways to motivate all concerned to come forward and share data generously. For all the promise of AI in biomedical research, it is evident that more work remains before major breakthroughs are achieved. Every stakeholder recognizes the promise, especially the NIH under the direction of Francis Collins, who is determined to make a significant dent and gear the NIH to provide the necessary leadership for the initiative.
In his closing remarks, Collins noted that:
- NIH has gotten a head start on identifying the most significant projects for further research, including the thought-provoking Brain 2.0 hypothesized by Craig Mundie.
- Other projects include cancer genomics and therapeutics, Environmental influences on Childhood Health Outcomes (ECHO), Adolescent Brain Cognitive Development (ABCD), and more to be identified.
- Data, the fodder for machine learning, takes precedence over everything else. The first order of business is to prioritize the data sets of greatest interest and harmonize them to make them machine-learnable. Everyone “loves to hate” EHRs, but care should be taken to safeguard their significance and use.
- Everything necessary to build and strengthen the ecosystem is being worked on, including the development of hardware platforms, training, community-building, as well as human resources and building the NIH brain trust.
If the future of medicine is indeed a Brain 2.0 paradigm, where you start with an individual and find answers for the population, then the NIH is creating the best chance to get there with its vision and initiatives.