Bring Your Own LLM: How to Empower Healthcare Organizations

The Team at Brim Analytics

December 4, 2024

Powerful LLMs Enable Force Multipliers in Healthcare

Artificial intelligence (AI) and large language models (LLMs) have immense potential to transform healthcare. Two notable examples include:

  • Chart abstraction to accelerate research: At Brim Analytics, we focus on tools that extract structured data from unstructured medical notes. This eliminates monotonous tasks during clinical trial recruitment, registry data collection, and medical research, empowering researchers to focus on impactful work.
  • AI notetakers and scribes: These tools document clinical visits automatically, allowing physicians to focus on patient care. They not only enhance efficiency but also combat physician burnout and improve the overall patient experience.

Until recently, achieving this level of AI sophistication required building custom solutions—an incredibly expensive and resource-intensive process. But the advent of powerful generalized LLMs has democratized access, enabling fast and affordable AI-powered applications for all but the most specialized use cases.

Security, Privacy, and Governance Present Obstacles to AI Integration

For healthcare organizations, security, privacy, and governance are paramount. Not only are they legal imperatives, but they also significantly impact patient trust and risk management.

However, integrating AI-powered solutions introduces complexity. Healthcare vendors must comply with strict data governance protocols, including signing Business Associate Agreements (BAAs) with healthcare organizations. More complicated still, they must also ensure downstream BAAs with LLM providers (e.g., Azure, AWS, OpenAI) and any additional vendors accessing protected health information (PHI).

This can create long chains of agreements and heightened risks. Healthcare organizations are rightfully cautious when considering AI solutions, given the potential pitfalls in managing data security and compliance.

The Bring Your Own LLM Model Provides a Way Forward

A Bring Your Own LLM (BYO LLM) model offers a compelling solution for reducing risk while maintaining access to powerful AI capabilities. In this architecture, healthcare organizations retain control by using their own LLMs, empowering vendors to integrate securely.

  • Secure endpoints: LLM providers like Azure offer endpoints customized for healthcare organizations. These endpoints comply with stringent security, privacy, and governance standards. For example, NYU Langone recently detailed their private and secure Azure LLM endpoint in the Journal of the American Medical Informatics Association.
  • Flexibility and scalability: Vendors like Brim can operate seamlessly using any approved LLM, whether OpenAI, Azure, or Amazon Bedrock. Healthcare organizations benefit from higher token quotas and volume-based pricing, separating LLM costs from vendor application costs.
  • Cloud tenant governance: BYO LLM allows vendors to run applications directly within the healthcare organization’s cloud tenant, ensuring compliance and operational control over PHI.
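
To make the architecture concrete, here is a minimal sketch of how a vendor application might accept an organization-supplied endpoint rather than its own LLM account. The `LLMConfig` helper and its field names are hypothetical illustrations, not part of any Brim product; the URL and header conventions reflect the publicly documented Azure OpenAI and OpenAI APIs.

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    """Hypothetical vendor-side config: the healthcare organization
    supplies its own approved endpoint and credentials, so PHI never
    routes through an LLM account owned by the vendor."""
    provider: str         # "azure" or "openai"
    endpoint: str         # org-owned base URL (Azure) or https://api.openai.com
    api_key: str          # credential issued by the organization, not the vendor
    deployment: str = ""  # Azure deployment name, if applicable

def chat_url(cfg: LLMConfig, api_version: str = "2024-02-01") -> str:
    """Build the chat-completions URL against the org's own endpoint."""
    if cfg.provider == "azure":
        # Azure OpenAI addresses a named deployment under the org's resource
        return (f"{cfg.endpoint}/openai/deployments/{cfg.deployment}"
                f"/chat/completions?api-version={api_version}")
    if cfg.provider == "openai":
        return f"{cfg.endpoint}/v1/chat/completions"
    raise ValueError(f"unsupported provider: {cfg.provider}")

def auth_headers(cfg: LLMConfig) -> dict:
    """Azure OpenAI expects an 'api-key' header; OpenAI uses a Bearer token."""
    if cfg.provider == "azure":
        return {"api-key": cfg.api_key}
    return {"Authorization": f"Bearer {cfg.api_key}"}
```

The vendor would then POST prompts to `chat_url(cfg)` with `auth_headers(cfg)`. Bedrock is omitted here because it authenticates with SigV4-signed requests (typically via boto3) rather than a simple API key, but the same pattern of swapping in an organization-owned credential applies.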

This approach assumes that in-context learning with foundational models is sufficient for most tasks. Several studies have shown comparable results between in-context learning and fine-tuning across a number of datasets. We expect any remaining gap to narrow as LLMs continue to advance.

Conclusion

The BYO LLM model bridges the gap between AI innovation and healthcare’s strict security needs. By leveraging private, secure LLM endpoints, healthcare organizations can confidently integrate powerful AI applications while maintaining governance and control.

At Brim Analytics, we’re excited to lead this charge by building solutions that adapt seamlessly to a BYO LLM framework, empowering researchers and clinicians to achieve more, with less risk.

Less time reading charts,
more time making breakthroughs.

Fill out the form to get a demo.
