| title | Fine-Tuning Large Language Models on Sensitive Data: Challenges, Solutions and Perspectives |
|---|
| start_date | 2025/02/28 |
|---|
| schedule | 14h-15h |
|---|
| online | no |
|---|
| location_info | auditoire Socrate |
|---|
| summary | Large Language Models (LLMs) have opened new perspectives in many domains, including the medical field. However, training these models on sensitive data such as Electronic Health Records (EHRs) presents unique challenges, particularly in safeguarding patient privacy and complying with strict data protection regulations.
In this talk, I will share insights into fine-tuning LLMs in a production environment while balancing performance optimization with ethical and regulatory demands. Topics will cover the use of synthetic data and the development of automated training pipelines.
I will conclude by exploring opportunities for future enhancement, such as the incorporation of user feedback and iterative dataset refinement techniques. These advancements aim to enable smaller, fine-tuned models to outperform their larger counterparts. |
|---|
| responsibles | Gao, Cui |
|---|
Workflow history
| from state | to state | comment | date |
| submitted | published | | 2025/02/17 14:56 UTC |