My Open-Source LLM Copilot
My LLM Chatbot Demo
Demonstrate the essential features of an open-source LLM via a chatbot: predicting healthcare tasks and chatting with context.
Open-Source LLM: Llama-3.1-70B.
Chatbot: Gradio User Interface.
Free Trial of My LLM Copilot
1. Register an account and contact us to get your own open-source LLM copilot in the cloud.
2. Test the open-source LLM for 7 days and evaluate whether it can benefit your healthcare tasks.
To Deploy My Own LLM Copilot
1. Contact us to select an LLM; we validate multiple LLMs for your tasks free of charge.
2. We deploy and fine-tune the LLM as a technical service to meet your specific requirements.
Immediate Goal: Create my own open-source LLM now in three quick steps
- Select: Let the ELHS team validate the top open-source and commercial LLMs for my tasks, such as diagnostic prediction for Alzheimer's disease or most neurological diseases. Compare the prediction accuracy of different LLM versions, including the Llama 3.1 and Llama 3 families, Gemma 2, and Mistral, as well as commercial models such as ChatGPT, Gemini, Claude, and Ernie. Choose the right open-source LLM as my baseline LLM.
- Deploy: Deploy the selected open-source LLM with a chatbot, such as Gradio, under my full control, either in the cloud or on-premises.
- Fine-tune: Use the chatbot daily, by myself or with my team. Evaluate and understand the LLM's behaviors. Continuously train the LLM with my patient data, i.e., fine-tune it, to improve its performance.
Note: "My LLM" may refer to one or multiple large language models for me as an individual healthcare professional, my team, my department, or even my organization.
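The Select step above can be sketched as a simple accuracy comparison across candidate models. The model names, case labels, and predictions below are illustrative placeholders, not real validation results; in practice, the ELHS team runs this benchmarking for you.

```python
# Sketch: rank candidate LLMs by diagnostic prediction accuracy on a benchmark.
# All model names and case data here are hypothetical placeholders.

def accuracy(predictions, gold):
    """Fraction of cases where the predicted diagnosis matches the gold label."""
    assert len(predictions) == len(gold)
    hits = sum(p == g for p, g in zip(predictions, gold))
    return hits / len(gold)

# Gold-standard diagnoses for a small benchmark of patient cases (placeholder data).
gold_labels = ["alzheimers", "parkinsons", "alzheimers", "migraine"]

# Hypothetical predictions collected from each candidate LLM on the same cases.
model_predictions = {
    "llama-3.1-70b": ["alzheimers", "parkinsons", "alzheimers", "tension headache"],
    "gemma-2-27b":   ["alzheimers", "alzheimers", "vascular dementia", "migraine"],
    "mistral-large": ["alzheimers", "parkinsons", "vascular dementia", "migraine"],
}

# Rank candidates by accuracy to pick a baseline open-source LLM.
ranked = sorted(
    ((accuracy(preds, gold_labels), name) for name, preds in model_predictions.items()),
    reverse=True,
)
for acc, name in ranked:
    print(f"{name}: {acc:.2f}")
```

A real validation study would use a much larger, clinically curated case set and repeated runs per model, but the ranking logic is the same.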
Long-term Goal: Accelerate training my GenAI copilot in three big steps
- Learning GenAI: Test and compare different LLMs in predicting my healthcare tasks using the Learning module.
- Creating LLM: After the ELHS team validates the top open-source LLMs for my tasks, select the right open-source LLM and deploy it with a chatbot in my controlled environment.
- Researching Copilot: Use the chatbot daily and evaluate the LLM's behaviors. Fine-tune the LLM with my patient data in learning cycles to continuously improve its performance as an effective source of healthcare intelligence, gradually training the LLM chatbot to be my trusted GenAI copilot.
Note: "My copilot" may refer to a generative AI copilot for me as an individual healthcare professional, my team, my department, or even my organization to use for the validated specific healthcare tasks.
Why is it possible to have My Own LLM and Copilot so easily?
Because we provide this free copilot platform along with the necessary technical support, both cost and technical barriers are reduced for you.
- Reduced Cost Barrier:
  - You have two essential questions to answer before deploying your own open-source LLM. The first is: Can open-source LLMs predict my healthcare tasks accurately, or at least at an acceptable level? To answer this, you first need to test the latest versions of the best commercial LLMs (ChatGPT, Gemini, Claude, and Ernie) to understand the current limits of GenAI. You may not have access to these paid chatbots yourself; however, you can test and compare all of them in the Learning module on this platform for free. You then need an easy way to test several top open-source LLMs and evaluate their performance. The free AIChat in the Learning module includes the Llama 3.1, Gemma 2, and Mistral open models for you to test.
  - The second question is: Which version of the open model is best for my healthcare tasks? To answer this, you need to compare the performance of different versions of the top open models, which can be time-consuming. Our copilot tech team will conduct the validation study for you free of charge and advise you on selecting the most appropriate open LLM, considering multiple factors, including LLM performance, compute requirements, data safety, and fine-tuning potential.
  - Deploying and running your LLM along with a chatbot has operational costs. Since our mission is to democratize GenAI for all doctors and reduce healthcare disparities, we strive to minimize this cost for you by all possible means. For example, we will provide an open-source LLM and Gradio chatbot deployed in the cloud for you to try for free. After the free trial period, you may decide whether to proceed with your own deployment.
- Reduced Technical Barrier:
  - For most doctors, deploying and running their own LLM is technically out of reach without support. Therefore, our technical team eliminates this pain point by deploying and running LLMs for doctors as a service.
  - Doctors also require technical support to effectively use their LLM to solve healthcare problems. To remove this technical barrier, our team will help doctors develop the necessary GUI for their healthcare tasks and settings. We will deploy the Gradio chatbot as a starter so that doctors can experiment with their LLMs and gradually define user requirements for their specific applications, likely leading to their own unique GenAI copilot.
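As a rough illustration of what the starter chatbot does on each turn, here is a minimal sketch of context-carrying message assembly, the step a Gradio chat interface wraps around the LLM. The system prompt, helper name, and example turns are assumptions for illustration, not the platform's actual code.

```python
# Sketch: assemble the message list an LLM receives on each chatbot turn,
# so earlier turns provide context. Names and prompts are illustrative only.

SYSTEM_PROMPT = "You are a clinical assistant. Answer using the conversation context."

def build_messages(history, user_input, system_prompt=SYSTEM_PROMPT):
    """Build a chat-completions-style message list from prior turns.

    `history` is a list of (user, assistant) turn pairs, the pairwise format
    Gradio's chat components commonly use; a real deployment would send the
    returned list to the LLM backend.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_input})
    return messages

# One prior turn of context plus a follow-up question.
history = [("Patient reports memory loss.", "Consider cognitive screening.")]
msgs = build_messages(history, "Which screening test first?")
print(len(msgs))  # system prompt + 2 history messages + new question = 4
```

Because the full history is resent each turn, the LLM can resolve follow-up questions like "Which screening test first?" against the earlier context.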
  - AI technical expertise is even more critical for LLM fine-tuning. The first challenge is to prepare high-quality training datasets from your patient data using more standardized and consistent data collection approaches. The second challenge is to run effective fine-tuning algorithms with limited compute resources. The third challenge is to evaluate the performance improvement rapidly in a more automated fashion. With our technical support, doctors can take on such technically demanding projects with greater confidence.
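As a minimal sketch of the first challenge, dataset preparation, here is how de-identified case records might be serialized into a consistent JSONL instruction-tuning format. The record fields and prompt template are illustrative assumptions, not the platform's actual pipeline.

```python
# Sketch: turn de-identified patient records into one-JSON-object-per-line
# (JSONL) training examples, a common fine-tuning input format.
# Field names and the prompt template are hypothetical.
import json

def record_to_example(record):
    """Map one de-identified case record to a prompt/completion training pair."""
    prompt = (
        f"Patient presentation: {record['presentation']}\n"
        "What is the most likely diagnosis?"
    )
    return {"prompt": prompt, "completion": record["diagnosis"]}

# Placeholder records; real data must be de-identified and quality-checked.
records = [
    {"presentation": "Progressive memory loss over two years",
     "diagnosis": "Alzheimer's disease"},
    {"presentation": "Resting tremor and bradykinesia",
     "diagnosis": "Parkinson's disease"},
]

# One JSON object per line, ready to write to a .jsonl training file.
jsonl_lines = [json.dumps(record_to_example(r)) for r in records]
for line in jsonl_lines:
    print(line)
```

Keeping every example in one fixed template is what makes the resulting dataset consistent enough for effective fine-tuning and automated evaluation.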
Background
Thanks to open-source LLMs like Llama, it has become much more feasible for everyone to have their own LLM. This is especially important for doctors, as they are often required to use LLMs deployed under their full control or that of their organization for patient privacy and data security reasons. Open-source LLM platforms like Hugging Face also make accessing models easier. You can simply select an open-source model, download it for free, and deploy it in your own environment, whether on-premises or in the cloud.
To seamlessly integrate your LLM into your healthcare workflow, you will also need to install a graphical user interface, such as a chatbot, on top of your LLM. Once you test your LLM via a chatbot, you can assess whether the baseline open-source LLM meets your requirements. If it doesn't, you can further improve its performance by fine-tuning it with your patient data, gradually training your LLM and developing your chatbot into your trusted GenAI copilot.
Our mission at the ELHS Institute is to help reduce global health disparities by democratizing GenAI in healthcare. In a review paper invited by JHMHP, we explained why GenAI democratization is intrinsically driven and listed a range of healthcare applications for GenAI. Our initial concept of the GenAI copilot for medical training is outlined in our paper published by JAMA. We have created a more realistic benchmarking system to systematically evaluate top LLMs in diagnostic prediction across most diseases, as presented in our JAMIA paper. Nature published our pioneering study on how to effectively deploy and continuously train machine learning (ML) models in clinical settings using the concept of a ML-enabled learning health system (ML-LHS) unit. By establishing efficient data pipelines for a collaborating hospital, we have helped clinical teams publish ML model papers on several major diseases.
In order to help more doctors, medical students, and healthcare professionals learn and use GenAI in healthcare, we have created this ELHS Copilot platform. Our goal is to help you get started quickly and ultimately create your own GenAI copilot to help optimize your healthcare delivery.
Technical Support
If you need technical assistance, please let us know. We are happy to help you select and deploy the right open-source LLM, including a Gradio chatbot for easy testing. We can also assist with fine-tuning your LLM using your data.