Australian researchers have found that clinical registries may be an untapped font of information for artificial intelligence.
A team of Australian researchers has found that the use and potential of artificial intelligence (AI) in eyecare could be limited by poor access to high-quality clinical registries.
The researchers, from the Save Sight Institute, The University of Sydney and Sydney Eye Hospital, published their study, titled Artificial Intelligence and Ophthalmic Clinical Registries, in The American Journal of Ophthalmology.1
According to the study, the latest advances in AI offer a promising solution to increasing clinical demand and increasingly limited health resources. AI models have demonstrated predictive power, but they rely on large amounts of representative training data to output meaningful predictions in the clinical environment. The researchers pointed out that clinical registries represent a promising source of large-volume, real-world data that could be tapped to train more accurate and more widely applicable AI models.
In November 2023, the researchers conducted a systematic search of EMBASE, Medline, PubMed, Scopus and Web of Science for primary research articles that applied AI to ophthalmic clinical registry data.
According to the study, 23 primary research articles applying AI to ophthalmic clinical registries (n = 14) were found. The registries were mostly defined by the condition captured, and the most common conditions to which the technology was applied were neovascular age-related macular degeneration (n = 3) and glaucoma (n = 3). The researchers also found that tabular clinical data was the most common form of input into the AI algorithms, and that outputs were primarily classifiers (n = 8, 40%) and risk quantifier models (n = 7, 35%).1
Moreover, the researchers noted in the study that the AI algorithms applied were almost exclusively supervised conventional machine learning models (n = 39, 85%), such as decision tree classifiers and logistic regression, with only seven applications of deep learning or natural language processing algorithms. Significant heterogeneity was found with regard to model validation methodology and measures of performance.1
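For readers unfamiliar with these model types, the brief sketch below shows what a conventional supervised model of the kind the review describes might look like: a logistic regression trained on tabular data that can serve both as a classifier and as a risk quantifier. It is purely illustrative; the feature names, outcome and data are invented for this example and are not drawn from the study or any registry.

```python
# Illustrative sketch only: a supervised logistic regression on
# hypothetical tabular registry fields. All features, the outcome and
# the data are invented for this example, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical tabular registry features: age, intraocular pressure,
# and baseline visual acuity (logMAR).
X = np.column_stack([
    rng.normal(70, 10, n),    # age in years
    rng.normal(16, 4, n),     # intraocular pressure (mmHg)
    rng.normal(0.3, 0.2, n),  # baseline visual acuity (logMAR)
])

# Hypothetical binary outcome (e.g. disease progression), simulated so
# that it depends weakly on age and intraocular pressure.
logit = 0.05 * (X[:, 0] - 70) + 0.2 * (X[:, 1] - 16)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# As a classifier: predict a binary label for each patient.
labels = model.predict(X_test)

# As a risk quantifier: predict a probability of the outcome.
risks = model.predict_proba(X_test)[:, 1]

print(f"AUC on held-out data: {roc_auc_score(y_test, risks):.2f}")
```

The same fitted model yields both output types the review tallies: a hard label via predict (a classifier) and a probability via predict_proba (a risk quantifier).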
Overall, the researchers found that only limited applications of deep learning algorithms to clinical registry data have been reported.
“The lack of standardized validation methodology and heterogeneity of performance outcome reporting suggests that the application of AI to clinical registries is still in its infancy, constrained by the poor accessibility of registry data and reflecting the need for a standardization of methodology and greater involvement of domain experts in the future development of clinically deployable AI,” they concluded.