About Us

Since 2016, Weill Cornell Medicine (New York) and the Gustave Roussy Cancer Campus (Paris) have joined forces to organize an annual conference that provides a forum for education, discussion, and networking among investigators interested in developing safe and effective RT-IT combinations (ImmunoRad).

Contact Info

Email
christine.corinus

Phone
+33 (0) 1 42 11 53 22

Gustave Roussy Cancer Campus,
Research Department
Pièce 65 - B2M

 

Social Media News

[#Monday news]

LLM driven multimodal target volume contouring in radiation oncology


Yujin Oh, Sangjoon Park, Hwa Kyung Byun, Yeona Cho, Ik Jae Lee, Jin Sung Kim & Jong Chul Ye
Nature Communications, volume 15, Article number: 9186 (2024)


Excited to share our latest research, “LLM-driven multimodal target volume contouring in radiation oncology,” published in Nature Communications today (24-Oct-2024).

In this study, we introduce LLMSeg, a novel multimodal AI model driven by large language models (LLMs) to enhance the challenging task of target volume contouring in radiation therapy, specifically for breast cancer. This is the first model of its kind to incorporate clinical data into target volume delineation, marking a significant step forward in radiation oncology AI.

This work was made possible through collaboration with Dr. Sangjoon Park (co-first author), Prof. Jin Sung Kim (corresponding author), and Prof. Jong Chul Ye (corresponding author), as well as support from KAIST, Yonsei Severance Hospital, and ONCOSOFT. I would also like to express my sincere gratitude to the dedicated radiation oncologists for their commitment and hard work!


Abstract

Target volume contouring for radiation therapy is considerably more challenging than normal-organ segmentation, as it necessitates the use of both image- and text-based clinical information. Inspired by recent advances in large language models (LLMs), which facilitate the integration of textual information and images, we present an LLM-driven multimodal artificial intelligence (AI), LLMSeg, that utilizes clinical information and is applicable to the challenging task of 3-dimensional, context-aware target volume delineation for radiation oncology. We validate the proposed LLMSeg in the context of breast cancer radiotherapy using external validation and data-insufficient environments, attributes that are highly conducive to real-world applications. We demonstrate that the proposed multimodal LLMSeg markedly outperforms conventional unimodal AI models, in particular exhibiting robust generalization performance and data efficiency.
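To make the multimodal idea concrete, here is a minimal, self-contained sketch of the fusion step the abstract describes: a case-level clinical-text embedding is broadcast to every voxel of a 3-D image-feature volume, concatenated with the image features, and fed to a per-voxel segmentation head. This is an illustrative toy, not the LLMSeg architecture; all shapes, variable names, and the random "encoders" are hypothetical stand-ins for real image and LLM text encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy shapes: a tiny 3-D volume of per-voxel image features
# and one clinical-text embedding for the whole case.
D, H, W, C_IMG = 4, 8, 8, 16   # volume depth/height/width, image channels
C_TXT = 8                      # text-embedding size

img_feat = rng.standard_normal((D, H, W, C_IMG))  # stand-in for an image encoder
txt_feat = rng.standard_normal((C_TXT,))          # stand-in for an LLM text encoder

# Multimodal fusion: broadcast the case-level text embedding to every
# voxel and concatenate it with the image features along the channel axis.
txt_grid = np.broadcast_to(txt_feat, (D, H, W, C_TXT))
fused = np.concatenate([img_feat, txt_grid], axis=-1)  # (D, H, W, C_IMG + C_TXT)

# A linear per-voxel "segmentation head" maps fused features to a
# target-volume probability, thresholded into a boolean contour mask.
w = rng.standard_normal((C_IMG + C_TXT,))
logits = fused @ w                            # (D, H, W)
mask = 1.0 / (1.0 + np.exp(-logits)) > 0.5    # boolean target-volume mask

print(mask.shape)  # → (4, 8, 8)
```

In a real model the linear head would be a trained 3-D decoder and the fusion would typically use learned projections or cross-attention rather than plain concatenation, but the key point carried by the abstract survives: the same clinical-text signal conditions the prediction at every voxel of the volume.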