Can You Take Your Model With You?

One of the key issues with AI is that algorithms developed at one institution or on one set of data may not perform as well when used at different institutions with different data. Researchers at Mount Sinai’s Icahn School of Medicine found that the same deep learning algorithms that diagnosed pneumonia on their own chest x-rays did […]
AI Model Transparency

Besides the challenges of getting hold of large and diverse datasets, annotating or labeling them, and applying sexy new methods to train models (synthetic data and federated learning!), transparency also relates to model interpretability: humans should be able to understand or interpret how a given technology reached a certain decision or prediction. AI technologies will […]
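
To make the interpretability idea concrete, here is a minimal sketch (assuming Python and scikit-learn, with purely illustrative data and features) of one common technique, permutation feature importance, which asks how much a trained model's accuracy drops when each input feature is scrambled:

```python
# Minimal sketch: permutation feature importance as one way to inspect
# what a trained model relies on (illustrative data, not patient records).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```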
Synthetic Data

We have been examining the issues of obtaining data (enough of it, and of high quality!) and preparing that data for training and validating models. One emerging way to deal with the challenge of building training datasets is to create synthetic data. This builds on the idea of using some of […]
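
As a rough illustration of the synthetic data idea, the sketch below (assuming Python with numpy and pandas, and a toy table rather than real patient records) samples new synthetic rows from simple per-column distributions fitted to the original data; production synthetic-data generators are far more sophisticated and preserve correlations that this toy version ignores:

```python
# Minimal sketch: generate synthetic tabular records by fitting simple
# per-column distributions to a (toy) real dataset and sampling from them.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy "real" data: illustrative columns, not actual patient data.
real = pd.DataFrame({
    "age": rng.normal(60, 12, size=200).round(),
    "systolic_bp": rng.normal(130, 15, size=200).round(),
    "diabetic": rng.choice([0, 1], size=200, p=[0.7, 0.3]),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample n synthetic rows column by column (ignores correlations)."""
    out = {}
    for col in df.columns:
        if df[col].nunique() <= 2:  # treat as categorical
            probs = df[col].value_counts(normalize=True)
            out[col] = rng.choice(probs.index, size=n, p=probs.values)
        else:  # treat as continuous: fit mean/std and sample
            out[col] = rng.normal(df[col].mean(), df[col].std(), size=n).round()
    return pd.DataFrame(out)

synthetic = synthesize(real, n=500)
print(synthetic.head())
```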
Data Labeling and Transparency

Transparency of data and AI algorithms is also a major concern, and it is relevant at multiple levels. First, in the case of supervised learning, the accuracy of predictions relies heavily on the accuracy of the underlying annotations fed into the algorithm. Poorly labeled data will yield poor results, so transparency of labeling such that […]
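
To show why annotation accuracy matters so much, here is a small, hypothetical experiment (assuming Python and scikit-learn, with simulated data) that flips a fraction of the training labels and watches test accuracy fall:

```python
# Minimal sketch: how noisy annotations degrade a supervised model.
# Flips a fraction of training labels and compares test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise   # randomly mislabel a fraction
    y_noisy[flip] = 1 - y_noisy[flip]
    acc = LogisticRegression(max_iter=1000).fit(X_train, y_noisy).score(X_test, y_test)
    print(f"label noise {noise:.0%}: test accuracy {acc:.3f}")
```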
Federated AI As A Solution? Part III

In the final installment of this series on federated AI, we wrap up the discussion of its potential benefits for developing good models in healthcare. Remember that this extensive discussion of federated AI was preceded by our examination of the many obstacles healthcare faces in getting enough good data to […]
Federated AI As A Solution? Part II

We have been examining the topic of data and its importance to developing and running AI algorithms for several weeks now. When it comes to AI, the topic of data cannot be discussed enough! If a model is good, it is because it was developed on good data. The reverse is also true. In […]
Federated AI As A Solution? Part I

In our last discussion, we tackled the issue of data as the lifeblood of AI algorithm development for various applications in healthcare. We also discussed how difficult it is to get that data and to make sure that the dataset you use is high quality, diverse, and representative of the target patient populations. There […]
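
For readers who want a feel for how federated learning keeps data local, below is a minimal sketch (plain numpy, a toy linear model, and made-up "hospital" datasets) of federated averaging, where each site fits its own model and only the model parameters are pooled centrally; real systems use many training rounds, secure aggregation, and proper deep-learning frameworks:

```python
# Minimal sketch of federated averaging (FedAvg): each site trains on its
# own data, and only model weights are shared and averaged centrally.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_site_data(n):
    """Toy local dataset for one 'hospital' (illustrative only)."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site_data(n) for n in (100, 250, 80)]  # three sites, unequal sizes

def local_fit(X, y):
    """Ordinary least squares on local data only (stands in for local training)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# The central server averages the site models, weighted by local sample
# counts; raw patient records never leave the sites.
counts = np.array([len(y) for _, y in sites], dtype=float)
local_models = np.stack([local_fit(X, y) for X, y in sites])
global_model = np.average(local_models, axis=0, weights=counts)
print("federated model:", global_model.round(3))
```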
How Do You Get the Data?

Data is to AI what oil has been to the world economy for the last century: unless you have plenty of it, you will not get far! It's not enough to get hold of sufficient data to train a model; you also need to have other data sets of the same […]
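
One common way to set aside such additional data sets (for example, for validation and final testing) is a train/validation/test split; the sketch below (assuming Python and scikit-learn, with synthetic data standing in for a real cohort) shows the usual pattern:

```python
# Minimal sketch: training data alone is not enough; hold out validation
# and test sets drawn from the same source. Illustrative split only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# First carve off a final test set, then split the rest into train/validation.
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # roughly 60% / 20% / 20%
```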
Data Standardization and Integration Into Existing Clinical Workflows II

There is an emerging solution for standardizing healthcare data: FHIR (Fast Healthcare Interoperability Resources). FHIR utilizes a set of modular components, known as ‘Resources,’ which can be assembled into working systems that facilitate data sharing within the EHR and mobile-based apps as well as cloud-based communications. Looking to the future, the FHIR framework will be critical for the implementation of AI-based […]
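
To make the Resource idea concrete, here is a minimal sketch (assuming Python's requests library and a placeholder server URL, not a real endpoint) of creating and then searching for a Patient resource over FHIR's standard REST interface:

```python
# Minimal sketch of FHIR's Resource idea: a Patient resource is just a
# structured JSON document that any FHIR-aware system can exchange.
import json
import requests

FHIR_BASE = "https://fhir.example.org/baseR4"  # placeholder, not a real server

patient = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1970-01-01",
}

# Standard FHIR REST interaction: POST [base]/Patient creates the resource.
response = requests.post(
    f"{FHIR_BASE}/Patient",
    headers={"Content-Type": "application/fhir+json"},
    data=json.dumps(patient),
)
print(response.status_code)

# And a simple search: GET [base]/Patient?family=Doe
results = requests.get(f"{FHIR_BASE}/Patient", params={"family": "Doe"})
print(results.json() if results.ok else results.status_code)
```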
Data Standardization and Integration Into Existing Clinical Workflows I

Data standardization is critical for aggregating data from different sources to train and use AI algorithms. It refers to the process of transforming data into a common format that can be understood across different tools and methodologies. This is a key concern because data are collected by different methods for different purposes and can […]
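
As a small illustration of what standardization involves in practice, the sketch below (assuming Python with pandas; the column names, units, and code mappings are invented for the example) harmonizes two differently formatted extracts into one common schema before pooling them:

```python
# Minimal sketch: normalize two differently formatted extracts into one
# common schema (column names, units, coded values) before pooling them.
import pandas as pd

site_a = pd.DataFrame({"pt_id": [1, 2], "glucose_mgdl": [95, 150], "sex": ["M", "F"]})
site_b = pd.DataFrame({"patient": [3, 4], "glucose_mmol": [5.2, 8.1], "gender": ["male", "female"]})

def standardize_a(df):
    return pd.DataFrame({
        "patient_id": df["pt_id"],
        "glucose_mg_dl": df["glucose_mgdl"],
        "sex": df["sex"].map({"M": "male", "F": "female"}),
    })

def standardize_b(df):
    return pd.DataFrame({
        "patient_id": df["patient"],
        # convert glucose from mmol/L to mg/dL (multiply by ~18.016)
        "glucose_mg_dl": (df["glucose_mmol"] * 18.016).round(1),
        "sex": df["gender"],
    })

combined = pd.concat([standardize_a(site_a), standardize_b(site_b)], ignore_index=True)
print(combined)
```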