Life Is A Tournament Of Multiplayer Games: Can The NIH Referee?

By Saurabh Vakil and Jim Davis

“Life is a tournament of multi-player games. Each game is unique. Mother nature and the environment are players. AI and bots are the pieces that help us extend the finish line and survive in the game of life.” – Craig Mundie, President, Mundie & Associates.

Mundie, the former Chief Research Officer for Microsoft, sounds as if he is reciting from a philosophical treatise, but the statement above comes from his keynote at the recent National Institutes of Health (NIH) workshop on “Harnessing Artificial Intelligence and Machine Learning to Advance Biomedical Research.”

The NIH is making a concerted effort to assess the state of AI in biomedicine, zeroing in on focus areas and identifying challenges and obstacles. The goal: to reinforce its leadership in discovering applications of the technology and to fulfill its mission to “Enhance health, lengthen life, and reduce illness and disability.”

Francis Collins, the director of the NIH, led the one-day workshop, which drew a cross-section of researchers, academics, specialist physicians and technology professionals to hear from the best minds in the field.

The impact of Brain 2.0 on Biomedical research

Mundie kicked off the workshop with a thought-provoking thesis that places machines ahead of humans when it comes to “learning abilities.” Computers learn at a scale humans cannot and never will be able to match. The conventional wisdom that machines are “trained” by humans through big data and evolving algorithms will give way. Projects such as AlphaGo have already shown that systems can “learn” without being trained on human-generated datasets, and in some cases deploy new strategies that humans haven’t yet devised. OpenAI, meanwhile, has taken on multi-player cooperative strategy gaming and is beating top-quality opponents at Dota 2.

  • Following from this, he proposes viewing research through the lens of an upgraded version of multi-player gaming and claims machines can become superhuman without any help from humans.
  • The players he sees in this game are Mother Nature, the environment, and ‘bots’ helping humans, though in this case ‘bots’ can include human coaches in wellness and preventive health care as well as computer-assisted scenarios.
  • It is conceivable that humans will one day be trained by machines on high-dimensional problems, though he concedes we are still a long way from machines being able to explain their answers to us.
  • Applying this game theory, he argues, can help us acquire prescience about our health.

Mundie calls his theory Brain 2.0 and describes it as a personal “Penicillin Discovery Moment.” He illustrates his point with examples from his work advising SomaLogic as well as his personal experience with his wife’s cancer diagnosis. The condensed version of his theory is that genomes and proteomes will provide much of the key dataset for medical advances. Leaps in compute power (he believes quantum computing will be widely available within the decade) will enable the rapid discovery of patterns of disease progression (pathways, as they are called in the medical world) and the subsequent reverse engineering of treatments from an individual’s “personal identifying data,” the sum of raw proteomic data. In other words, Mundie theorizes that the entire process of identifying disease treatments will be turned on its head, moving away from the current approach of extrapolating individual treatments from a population sample.

It remains to be seen how this provocative theory plays out in the future and whether it makes a real impact on biomedical research.

Key takeaways

REMOTE MONITORING WITHOUT WEARABLE SENSORS: One of the developments with the promise of imminent real-world use is the outcome of a fascinating project dubbed Emerald that seems like the stuff of science fiction. Prof. Dina Katabi at MIT heads the development of this modified Wi-Fi box, which can “see” through walls. Combining wireless technology with machine learning algorithms, the box can monitor the movement of one or more individuals through the wall of the room they occupy. Dr. Katabi and her research team have created a system that can do many things, including:

  • Monitor breathing, sleep, heart rate and gait speed
  • Measure sleep without the cumbersome mesh of wires and sensors, at 80% of the accuracy of a sleep-lab measurement.

This tool can have an impact on the diagnosis of many disorders. For instance, gait speed serves as an important endpoint in Parkinson’s disease and a surrogate marker for cognitive impairment. There is also potential to use breathing as a predictor of pulmonary disease and Parkinson’s, as well as depression and Alzheimer’s.

A closer look at the architecture reveals that the Wi-Fi box is pushing the frontiers of both edge and fog computing, with its intelligent sensor residing locally and data being processed in the local environment. Combine this with chatbot technology and you could have a highly intelligent “artificial” home-based caretaker for the elderly with unprecedented capabilities.
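
The Emerald team has not published its internal implementation, but as a rough sketch of the kind of local signal processing such an edge device might perform, the snippet below estimates a breathing rate from a reflected-signal time series by picking the dominant spectral peak in a plausible respiration band. The sampling rate, band limits and synthetic signal are assumptions for illustration only:

    import numpy as np

    def estimate_breathing_rate(reflection, fs=20.0, band=(0.1, 0.5)):
        """Estimate breaths per minute from a reflected-signal amplitude series.

        reflection -- 1-D array of reflected-signal samples (hypothetical data)
        fs         -- sampling rate in Hz (assumed)
        band       -- assumed respiration band in Hz (~6 to 30 breaths/min)
        """
        x = reflection - np.mean(reflection)            # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))               # magnitude spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)     # frequency axis in Hz
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        peak = freqs[in_band][np.argmax(spectrum[in_band])]
        return peak * 60.0                              # convert Hz to breaths/min

    # Synthetic check: a 0.25 Hz oscillation (15 breaths/min) buried in noise
    t = np.arange(0, 60, 1 / 20.0)
    signal = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)
    print(round(estimate_breathing_rate(signal)))       # prints approximately 15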

MORE APPLICATIONS OF AI: Other areas of research and work in the field show how AI is slowly but surely becoming a differentiator in clinical settings. The inflection point at which the technology makes a big and lasting impact on patient outcomes is around the corner. Consider the following examples:

  • Radiology: AI in radiology continues to live up to its promise. With deep learning research having doubled in 2017, the ability to diagnose cancers in areas of the body that were previously accessible only through surgical intervention has crossed over to detection through imaging, according to Dr. Ronald Summers, a Senior Investigator at the NIH Clinical Center. “Segmentation” and “normalization” are the key processes behind these breakthrough diagnostic capabilities (a toy illustration of the two steps follows this list). Applications include detection of colonic polyps, lymph nodes, spine disease, colitis, and cancers of the prostate and pancreas. Going forward, research combining imaging with genomics holds great promise.
  • Pediatrics: With 20% of the US population being pediatric, a distinct research focus is highly warranted, and it is provided by researchers like Dr. Judith Dexheimer of the University of Cincinnati. She explains that ML research in pediatrics is different because it must account for the health dynamics of this population: the various stages of growth and their impact on genetic changes, vital signs, medication dosing requirements and so on. ML has been applied to diagnosing sepsis and appendicitis as well as to decisions involving Pediatric Intensive Care Unit transfers. Researchers are finding new applications in healthcare and provider decision support and in integrating natural language processing (NLP) with patient encounters. In a unique experiment that analyzed pediatric data from multiple sources and locations, researchers were able to predict a potential case of child suicide and prevent it. With the number of suicides among children increasing, the results of this ground-breaking research can have a significant impact on children’s and adolescents’ physical and mental well-being.
  • Genomics: One of the most fascinating applications of machine learning is deciphering genome functions as causal determinants of a large number of diseases, as highlighted by Dr. Anshul Kundaje of Stanford University in his research on the subject. Using a landmark study of genetic variants, Dr. Kundaje has shown that a certain class of genes has a much stronger statistical association with Alzheimer’s disease than the rest, paving the way for early prediction of the disease’s onset.
  • Life-Threatening Diseases: Sometimes the most one can do about life-threatening and debilitating diseases like congestive heart failure (CHF) or epilepsy is to manage them after they occur. That is often too late. But what if machine learning models could predict these conditions well in advance? IBM has partnered with leading pharmaceutical companies and medical institutions to develop deep learning models for predicting CHF, epileptic seizures, and Huntington’s and Parkinson’s diseases. Eileen Koski of IBM Research described the areas into which IBM has poured its research money. One example: an AI tool based on speech that is intended to normalize mental health diagnosis and evaluation, discover hidden cues and multiply reach. The tool can distinguish among normal, manic and schizophrenic individuals in a sample.
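
Dr. Summers did not walk through his pipeline, but as a hedged, toy-scale illustration of what “normalization” and “segmentation” mean in an imaging workflow, the sketch below clips a CT-like intensity array to a window, rescales it to [0, 1], and thresholds it into a binary mask. Real diagnostic systems rely on trained deep networks; the array, window and threshold here are illustrative assumptions:

    import numpy as np

    def normalize(slice_hu, window=(-100, 400)):
        """Clip a CT slice (Hounsfield units) to an assumed window and scale to [0, 1]."""
        lo, hi = window
        clipped = np.clip(slice_hu, lo, hi)
        return (clipped - lo) / float(hi - lo)

    def segment(normalized, threshold=0.6):
        """Return a binary mask of pixels brighter than the (assumed) threshold."""
        return normalized > threshold

    # Hypothetical 4x4 'slice' standing in for real CT data
    slice_hu = np.array([[-50,  20, 300, 350],
                         [ 10,  60, 320, 310],
                         [-80, -20,  40,  30],
                         [  0,  15,  25,  10]])
    mask = segment(normalize(slice_hu))
    print(mask.astype(int))   # 1s mark the bright structure in the upper right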

DATA SHARING AND OTHER CHALLENGES

One of the primary requirements of AI is the availability of data in large quantities; machines are “trained” to acquire artificial intelligence using big data. Before its significance is taken for granted, Dr. David Heckerman cautions that more needs to be done to motivate all stakeholders, from doctors in private practice to researchers and institutions, to share data without hesitation. According to Heckerman, who is also a former Microsoft employee, the stakeholders, especially doctors in private practice, equate data with dollars. His research focuses on finding ways to motivate everyone concerned to come forward and share data generously. For all the promise of AI in biomedical research, it is evident that more work remains before major breakthroughs are achieved. Every stakeholder recognizes the promise, especially the NIH under the direction of Francis Collins, who is determined to make a significant dent and to gear the NIH to provide the necessary leadership for the initiative.

In his closing remarks, Collins noted that:

  • NIH has a head start on identifying the most significant projects for further research, including the thought-provoking Brain 2.0 hypothesis from Craig Mundie.
  • Other projects include cancer genomics and therapeutics, Environmental influences on Childhood Health Outcomes (ECHO), Adolescent Brain Cognitive Development (ABCD), and more to be identified.
  • Data, the fodder for machine learning, takes precedence over everything else. The first order of business is to prioritize the data sets of greatest interest and harmonize them to make them machine-learnable. Everyone “loves to hate” the EHR, but care should be taken to safeguard its significance and use.
  • Everything necessary to build and strengthen the ecosystem is being worked on, including the development of hardware platforms, training, community-building, as well as human resources and building the NIH brain trust.

If the future of medicine is indeed a Brain 2.0 paradigm where you start with an individual and find answers for the population, then NIH is creating the best chance to get there with its vision and initiatives.

Datazoom builds “data delivery network” to support video analytics

Summary:

Video quality and service availability go hand in hand with improving financial performance for online video service providers. Datazoom is a startup that wants to make it easier for these companies to leverage data to improve operations and profitability. The underlying platform has significant potential that extends beyond video analytics into a variety of markets, including IoT.

Key Takeaways

  • Datazoom has funding for a service that aims to make integration of analytics tools easier for online video service providers.
  • Datazoom’s “data delivery network” has potential applications beyond OTT services. The company could move into gaming and IoT applications where latency impacts the ability to gather and analyze large amounts of data, for example.

Company background:

Datazoom was co-founded by CEO Diane Strutner and Jason Thiebeault. Strutner was previously the VP of Global Sales and Business Development at NicePeopleAtWork (NPAW), a provider of a video analytics platform. Thiebeault currently serves as executive director of the Streaming Video Alliance. Michael Skariah, formerly director of engineering at Ooyala, serves as the company’s CTO. The company closed a pre-seed round of $700,000, led by Brooklyn Bridge Ventures.

Details:

As it turns out, integrating and merging data from dozens of disparate sources is a hard problem for these companies to solve. To address it, Datazoom has built a data ingest and management platform. Datazoom’s Adaptive Video Logistics platform serves as an abstraction layer, pulling data from a customizable SDK into the data ingest platform and enabling service providers to aggregate and time-align data from multiple sources in real time.
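
Datazoom has not disclosed its internal design, but as a rough sketch of what aggregating and time-aligning data from multiple sources can look like, the snippet below groups events from hypothetical player and ad SDKs into shared one-second buckets keyed by session. The field names, bucket size and event schema are assumptions, not Datazoom’s actual API:

    from collections import defaultdict

    def time_align(streams, bucket_ms=1000):
        """Group events from different collectors into shared per-session time buckets.

        streams -- dict mapping a source name to a list of events; each event is a
                   dict with 'session' and 'ts' (epoch milliseconds). Hypothetical schema.
        """
        aligned = defaultdict(list)
        for source, events in streams.items():
            for event in events:
                key = (event["session"], event["ts"] // bucket_ms)   # session + 1-second bucket
                aligned[key].append({"source": source, **event})
        return aligned

    player_events = [{"session": "abc", "ts": 1_000_120, "event": "rebuffer"}]
    ad_events     = [{"session": "abc", "ts": 1_000_870, "event": "ad_start"}]
    for key, events in time_align({"player": player_events, "ads": ad_events}).items():
        print(key, [e["event"] for e in events])   # both events land in the same bucket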

Customers can choose which data needs to be signaled back to a data hub (meaning no waiting for a response to an HTTPS request), and they can change or update the data collected or the collection frequency at any time. Ordinarily, each SDK used for data collection adds significant latency (minutes); Datazoom says its method helps the analytics process because data is collected in a more uniform manner with sub-one-second latency (covered by an SLA).

Part of the cloud-agnostic infrastructure that enables fast data collection resembles a CDN, one that Datazoom calls a “data delivery network.” It is currently hosted on AWS and Google Cloud, with Azure POPs coming soon. On the other side of the equation, Datazoom has roughly a dozen integrations with data collectors (video and audience analytics tools, ad serving tools and the like), and says it is completing two to three more integrations each week as customer requests come in.

Value proposition:

By Datazoom’s count, major brands use an average of 14 tools to capture data from video players. Now multiply that by the number of device types you deliver to, because you need a different player for each mobile OS, smart TV client, and game console to deliver your service to consumers. That is a bear to manage when it comes to deploying code on each client, and it contributes to client bloat that can impact device performance, one of the issues you were originally trying to solve.

As an example, some analytics services leverage logic in an SDK to do failover; this means when a stream degrades for a consumer, the player will automatically seek another source for the stream. Datazoom enables customers to leverage data from the player to do more than CDN switching; they can use different ad delivery, authentication and other systems as the situation dictates.
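
Neither Datazoom nor the analytics vendors publish this logic, but here is a hedged, minimal sketch of what player-side failover can look like: when rebuffering on the current source crosses a threshold, the player switches to the next candidate source. The metric, threshold and source list are illustrative assumptions:

    def pick_source(sources, rebuffer_ratio, current=0, max_rebuffer=0.02):
        """Return the index of the source the player should use next.

        sources        -- ordered list of candidate stream URLs (hypothetical)
        rebuffer_ratio -- fraction of playback time spent rebuffering on the current source
        current        -- index of the source currently in use
        max_rebuffer   -- assumed threshold above which we fail over
        """
        if rebuffer_ratio > max_rebuffer:
            return (current + 1) % len(sources)   # fail over to the next candidate
        return current                            # stay on the current source

    cdns = ["https://cdn-a.example.com/live.m3u8", "https://cdn-b.example.com/live.m3u8"]
    print(pick_source(cdns, rebuffer_ratio=0.05))   # 1: degraded, switch to cdn-b
    print(pick_source(cdns, rebuffer_ratio=0.00))   # 0: healthy, stay put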

Datazoom aims to simplify the gathering of data not by bidding to be the one data stream to rule them all, but by moving the integration point away from the player.

Pricing for the SaaS offering is based on a combination of the volume of data processed and the desired SLA level for latency. Datazoom’s premise is that this will be a more predictable cost than pricing based on the number of video views or sessions.

Market context:

By some counts, there are more than 200 video service providers in the global market. Datazoom is targeting the biggest names in the market because they have the biggest number of tools to integrate (think NBC Universal, Sony and such). Others in the space include Conviva, NPAW, and Cedexis. CDN service providers like Akamai have their own analytics offerings centered around network and delivery performance, with Akamai also offering web application performance monitoring through its Soasta acquisition.

Customers: Datazoom already has letters of intent from major brands and is busy conducting trials with a number of potential clients.


Microsoft re-org focuses on edge, AI

Microsoft CEO Satya Nadella issued a memo to Microsoft employees recently – and we expect it is going to have as lasting an impact on the company as Bill Gates’ famous “Internet Tidal Wave” memo of 1995.

Entitled “Embracing our future: Intelligent Cloud and Intelligent Edge”, Nadella outlines a plan to reorganize the company around a vision of a ubiquitous distributed compute fabric that extends from cloud to ‘edge’ and infuses everything with AI.

Key Takeaways:

  • Edge computing and AI are going to be essential technologies for this new version of Microsoft.
  • Microsoft plans to invest $5bn in IoT products, services, and research over the next four years, further highlighting the shift of development focus to edge services.
  • Microsoft is showing leadership in AI by working to ensure that it develops tools to detect and address bias in AI systems.

Details

Nadella has directed a team to be organized around what he called Microsoft’s “Cloud + AI Platform,” with the goal being the creation of an integrated platform across all layers of the tech stack, from core cloud services to edge services.

Within this organization are several specific units focused on AI, including:

  • Business AI – focused on the internal application of AI
  • AI Perception & Mixed Reality – a new team taking speech, vision, MR and related technologies and building Microsoft products, as well as cloud services for third parties on Azure.
  • AI Cognitive Services & Platform (focusing on AI Platform, AI Fundamentals, Azure ML, AI Tools and Cognitive Services)

Perhaps less noticed, but of great significance, is Microsoft’s move to address issues around the ethics of applying AI. As Facebook is learning the hard way, AI and ML applied to personal information can be misused, causing significant damage to society as well as to the company’s own stock valuation.

Nadella is establishing the internal “AI and Ethics in Engineering and Research” (AETHER) Committee to ensure that Microsoft’s AI platform “benefit[s] the broader society.” Nadella promised to invest in strategies and tools for detecting and addressing bias in AI systems.
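
Microsoft has not said what those tools will look like. As a hedged illustration of one common starting point for bias detection, the sketch below computes a demographic-parity gap, i.e. the difference in positive-prediction rates between two groups, on a hypothetical set of model outputs:

    def demographic_parity_gap(predictions, groups, positive=1):
        """Difference in positive-prediction rates between two groups.

        predictions -- model outputs (e.g. 1 = approve, 0 = deny); hypothetical data
        groups      -- group label for each prediction, same length as predictions
        Assumes exactly two distinct group labels are present.
        """
        rates = {}
        for g in set(groups):
            outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
            rates[g] = sum(1 for p in outcomes if p == positive) / len(outcomes)
        low, high = sorted(rates.values())
        return high - low   # 0.0 means equal rates; larger values indicate disparity

    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_gap(preds, groups))   # 0.5 in this toy example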

Windows won’t go away, but definitely takes a back seat in the “Experiences and Devices” team. The former leader of the Windows and Devices Group, Terry Myerson, is leaving Microsoft, and there are a number of other executive moves detailed by longtime Microsoft observer and journalist Mary Jo Foley.

Company background

Nadella has issued impactful memos before – following his 2014 company-wide email describing his vision of Microsoft as a platform company in a mobile- and cloud-first world, he cut 18,000 jobs as part of the most massive layoffs in the company’s history.

How does it compare to Gates’ ‘Tidal Wave’ memo from 22 years ago? Gates’ memo was filled with more urgency, as the company was in danger of missing the internet wave; Nadella’s latest memo is forward-looking in a different way, written from the perspective of a company that’s not far behind the competition. Still, Nadella recognizes that the company’s structure and size could be a hindrance in adapting to the era of cloud and edge and has adjusted accordingly. It might not be as well remembered as Gates’ memo 20 years down the road, but it will be no less important as the industry fills out the role that edge services will play in the next two to five years.

Rafay Systems wants to ease app use on ‘Programmable Edge’

Rafay Systems is a startup that aims to ease the process of developing and deploying applications at the infrastructure edge, whether within a metro network, a remote data center, or at the radio access network edge. Rafay aims to stand out from the cloud providers and CDNs by allowing developers to bring their own custom applications, rather than limiting them to pre-defined applications or functions.

Key Takeaways:

  • Rafay Systems is positioning itself as a provider of a fully programmable edge compute service.
  • More than just an edge cloud provider, our understanding is that Rafay is also aiming to integrate a number of critical services developers need to manage the application deployment lifecycle across edges running in different geographies. In addition, Rafay also expects to provide network services that developers can leverage to scale workloads running across various ‘Rafay edges.’
  • As with other providers, educating developers on use cases will be key, as well as differentiating from the services that CDNs and cloud providers offer.

Opportunity/Value proposition:

Founder Haseeb Budhani talked to Edge Research about the opportunity that Rafay is chasing. Budhani believes that there is a clear market opportunity for giving developers the ability to run their application logic anywhere that they need to. Developers should be able to leverage compute and storage resources at the edge of the network as a service, just as they would with any cloud service. Rafay has termed (and trademarked) this concept the “Programmable Edge.” (For further explication, Budhani outlined what the Programmable Edge should be, and some of the possible use cases in this post on LinkedIn.)

Rafay hasn’t formally launched the service yet, but is presently engaged with partners and customers and expects to deliver a Beta version of the service in the summer of 2018.

Company background:

Rafay Systems was founded in 2017 by:

  • Haseeb Budhani – Co-founder and CEO; formerly co-founder and CEO of Soha Systems
  • Hanumantha Kavuluru – Co-founder & VP of Engineering; formerly co-founder and VP of engineering at Soha Systems

The co-founders’ previous company, Soha Systems, was acquired by Akamai Technologies in October 2016.

Funding

Rafay Systems has secured an undisclosed amount of seed funding from:

  • Costanoa Ventures
  • Floodgate
  • Menlo Ventures
  • Moment Ventures

Market background:

The heightened interest in and use of microservices and containers has changed how applications are developed; where they are deployed is the key question. Performance, cost, and availability of applications are all factors pointing to the need to move application logic closer to end users, to the ‘Edge.’ CDNs have the distributed resources needed to do this, and arguably already allow logic to run at the edge of their networks.

Competition:

Cloudflare Workers, Fastly Edge Cloud platform, Akamai’s Distributed Edge Compute and Cloudlets are among the examples of how CDNs are evolving into programmable platforms.

Rafay’s position is that CDNs aren’t programmable enough. Custom code can be deployed on CDN edges but only by the CDN’s internal engineering and professional services teams. To enable any developer to leverage the infrastructure edge, Budhani argues that a different approach is needed.

Another big difference between edge services from cloud providers and CDNs: these edge services offer functions that are invoked to handle individual HTTP requests and are bounded by time (e.g., 5 minutes in the case of AWS Lambda for a function to complete). Function-based environments may not be the best fit for applications that run continuously or generate logs or other types of output, among other factors.
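
To make that distinction concrete, here is a minimal sketch contrasting the two models: a per-request handler following AWS Lambda’s Python signature, which must finish within the platform’s time limit, versus a long-running process that keeps emitting output for as long as its container lives. The loop body is purely illustrative:

    import json
    import time

    # Function style: invoked once per HTTP request and bounded by the platform's
    # time limit (minutes in the case of AWS Lambda), after which the execution
    # environment may be reclaimed.
    def handler(event, context):
        body = {"message": "handled one request", "path": event.get("path")}
        return {"statusCode": 200, "body": json.dumps(body)}

    # Continuously running style: lives for the life of the container, doing work
    # and generating logs or other output on its own schedule.
    def run_forever(interval_s=5):
        while True:
            print("still running, emitting a log line...")   # stand-in for real work
            time.sleep(interval_s)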

Cloudflare partners with IBM Cloud to offer security and CDN

Cloudflare is moving up in the market for CDN and security services. There’s no surer sign of that than a new partnership that has Cloudflare security and CDN services being offered via IBM.

Key Takeaways:

  • Cloudflare’s deal with IBM shows the company is serious about continuing to expand in the enterprise market, and IBM is a good path to do that, even if IBM Cloud is smaller than the other players in the space.
  • Cloudflare hopes to integrate into other IBM services, including the QRadar security analytics offering, and later the development of applications that leverage Watson for Cyber Security.
  • The deal does not portend any significant change in IBM’s relationship with Akamai, but does represent Cloudflare’s increasing incursion on Akamai’s turf. 

Details:

Cloudflare’s portfolio of services will be offered through IBM Cloud, marketed as Cloud Internet Services. The services will be available via the IBM Cloud dashboard. The services go beyond the CDN services that IBM Cloud as well as other cloud providers have integrated into their IaaS offerings. Cloudflare’s portfolio includes:

Cloudflare services

Security                   Performance         Availability
DDoS mitigation            CDN                 Load balancing
WAF                        Smart Routing       Rate Limiting
Bot detection/mitigation   Web optimization

Cloudflare has other services, but the table focuses on those services referenced on Cloudflare’s web page promoting the integration with IBM Cloud.

IBM is officially a Cloudflare reseller, meaning the services can be sold and deployed by IBM in any existing customer environment, including on-premises, hybrid and public cloud.

Cloudflare said the deal grew out of having IBM as a customer. IBM had been using Cloudflare for DDoS and WAF security as well as load balancing for the X-Force Exchange business. X-Force Exchange is IBM’s threat intelligence sharing platform.

Cloudflare’s ability to take this initial deal and expand into other parts of IBM sets the stage for other developments, including future integration of Cloudflare’s data set into IBM’s QRadar security analytics offering. Longer term, Cloudflare says it is looking to develop applications that leverage Watson for Cyber Security.

Company background

Cloudflare has been growing rapidly for several years, owing largely to its focus on the SMB market and numerous reseller deals with cloud/hosting providers. In 2017, there was a concerted push upmarket into enterprise accounts, and the company has significantly tilted its revenue mix towards that market, reaching milestones such as a $150m run rate and topping the 500-employee mark last year.

Competition: 

Akamai is the largest CDN vendor and does a significant amount of business in security services like DDoS and WAF. Akamai is a long-time IBM partner, and just last year announced integration of its CDN offering with IBM Cloud. The deal does not, in our view, portend any significant change in the relationship.

Akamai said IBM is a significant reseller of its entire portfolio of services via the IBM Global Technology Services organization under the Edge Delivery Services branding. IBM’s Global Security Services organization also sells Akamai’s DDoS services and has integrated its QRadar SIEM with Akamai’s Kona security services. Additionally, Akamai was recently named IBM Watson Customer Engagement partner of the year for helping provide secure delivery of the Watson AI service and Watson API.

The Cloudflare deal is not exclusive; indeed, most cloud service providers (other than AWS) offer CDN via multiple vendors. Microsoft Azure offers CDN via both Akamai and Verizon Digital Media Services (EdgeCast). Google Cloud offers CDN Interconnect, which charges a fee based on egress traffic to CDN providers including Akamai, Cloudflare, CenturyLink/Level3, Fastly, Instart Logic, Limelight Networks, and Verizon.

Customers:

Cloudflare did say there are already joint customers who are using its services, including 23andMe, the genetics testing service.