If you are an investor in this space, how do you evaluate the defensibility of a prospective health AI company’s products, services, or business model? Patents? Copyrights? Data? Given that many entities have access to large datasets and the foundational algorithms are mostly open source, the barriers to entry are rather low. Sustainable businesses will therefore need to be built on barriers other than patents. Prototype AI models can be surprisingly easy to build, but there’s little precedent for preparing them for clinical use: they must be integrated into multiple disparate systems using all sorts of different data types. The know-how to commercialize them can also be a source of competitive advantage, since it requires the ability to demonstrate value in reducing costs and improving outcomes.
One possible approach is to build end-to-end systems that introduce intelligent automation across the entire process, rather than a single algorithm that sits in the cloud and solves one problem in the value chain. This is a more defensible model, because solving healthcare workflows end to end requires deep expertise and a combination of skills that are hard to come by. You can see this today with many of the companies that have built AI solutions: most were able to get hold of data and develop models that are point solutions, but the deep thinking about the workflow issues that must be solved to successfully operationalize their algorithm is often missing. As a result, most are surprised to learn that FDA approval isn’t a ticket to rapid adoption and successful commercialization.
Another approach to creating competitive differentiation and barriers to entry is to develop a brand and establish a reputation for building great models. A key concern about AI models is that they were built and validated on limited datasets, and their real-world performance often falls short of what they showed in their FDA trials. There’s already evidence of this with some of the models from Epic (the EHR vendor) and others.
Researchers at Mount Sinai’s Icahn School of Medicine found that deep learning algorithms that diagnosed pneumonia well on their own chest X-rays didn’t work as well when applied to images from the National Institutes of Health and the Indiana University Network for Patient Care. Likewise, when clinical algorithms were deployed at Mount Sinai Hospital, a number of issues emerged, including inconsistent data quality, the difficulty of managing different data sources, and a lack of standardization.
This all points to an opportunity to create a brand in this emerging field as a purveyor of top-notch models that are trained and validated on large, heterogeneous datasets and perform robustly in the real world across different settings. Also, given the potential for bias in healthcare AI models, being known for practicing responsible AI will be a differentiator. The Coalition for Health AI (CHAI) is developing recommendations for dependable health AI technologies that buyers can use when vetting these products. CHAI previously produced a paper on bias, equity, and fairness, which is informing those recommendations. The result will be a framework, the Guidelines for the Responsible Use of AI in Healthcare, intended to foster resilient AI assurance, safety, and security. This presents an opportunity for companies that take a rigorous approach to building responsible AI models to differentiate themselves and gain a competitive advantage.
One other powerful way to create barriers to entry and to differentiate is to undertake large-scale clinical studies that establish the benefits of your AI solutions and show stakeholders that the algorithms achieve their expected results, whether improving patient outcomes, reducing costs, improving institutions’ operations, or boosting clinical and administrative staff productivity. This isn’t the kind of effort that most companies have the appetite, know-how, or funds for, or can execute well.
Investors are well advised to study the defensibility of these companies’ products and business models, and not to assume that an FDA-approved AI solution guarantees market traction or that an initial land grab can be defended over the longer term.