Elkin Co-Author on GenAI Environment Scan Study

By Dirk Hoffman

Published February 14, 2025


Peter L. Elkin, MD

Peter L. Elkin, MD, professor and chair of biomedical informatics at the Jacobs School of Medicine and Biomedical Sciences, is a co-author of a new study assessing the current state of the art in the use of generative artificial intelligence (GenAI), including large language models (LLMs) and multimodal AI, at academic institutions.


The study, titled “Environment Scan of Generative AI Infrastructure for Clinical and Translational Science,” was published Jan. 25 in Nature Partner Journals Health Systems.

It reports a comprehensive environmental scan of GenAI infrastructure across 36 institutions in the national network for clinical and translational science supported by the Clinical and Translational Science Awards (CTSA) Program, which is led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health.

Results Indicate Better Coordination Needed

Elkin was a member of the lead team for the Informatics Enterprise Committee for NCATS and was a sponsor of the study. The University at Buffalo was one of the respondents, and Elkin contributed to the discussion and reviewed the submission.

The study found broad differences in GenAI usage across the CTSA consortium. Most institutions are investing in health AI, and this trend is likely to expand in the future, according to Elkin. “The predicted impact of GenAI on health and health care is huge.”

Key findings indicated a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. 

The results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers. 

Elkin says that the shareware talks at the CTSA Biostatistics, Biomedical Informatics and Data Science Enterprise Committee meetings are one way of sharing information.

“We presented our SCAI (semantic clinical artificial intelligence) LLM in December to rave reviews,” he says. “And there is another face-to-face meeting in April around translational sciences.”

Data Security and AI Bias Among Concerns

The study’s analysis revealed that 53% of institutions identified data security as a primary concern, followed by lack of clinician trust (50%) and AI bias (44%), concerns that researchers say must be addressed to ensure the ethical and effective implementation of GenAI technologies.

Elkin says such concerns are large and complex topics to address.

“Here at UB, we have our LLM on a fully encrypted server, behind a separate firewall, and we turned off learning so that the LLM does not remember what you put into it or what answer it provided,” he says.

“This is done to increase confidence in our AI when handling sensitive data.”