Fine-Tuning GPT-2 for Contextually Relevant Text Generation to Discover New Immune Checkpoint Research


Kalaycı M. E., Albayrak M., Turhan K.

15. TIP BİLİŞİMİ KONGRESİ (15th Medical Informatics Congress), Trabzon, Türkiye, 30-31 May 2024, pp. 147-157

  • Publication Type: Conference Paper / Full-Text Paper
  • City of Publication: Trabzon
  • Country of Publication: Türkiye
  • Pages: pp. 147-157
  • Affiliated with Karadeniz Teknik Üniversitesi: Yes

Abstract

Immune checkpoint inhibitors are promising agents in cancer treatment. In this study, we aimed to access new immune checkpoint-specific information by fine-tuning the text generation capabilities of the GPT-2 language model developed by OpenAI. Using a dataset of abstracts from 7,000 articles retrieved with the search query “new immune checkpoint” in the Scopus database, we fine-tuned the model to perform better on this specific type of text. Over 5 epochs, the training and validation losses decreased from 2.05 to 0.51 and from 1.92 to 1.28, respectively, indicating improved performance. By enhancing the model's ability to capture the structure and content of these texts, we increased its capacity to generate more accurate and consistent domain-specific text for our own research use. This study demonstrates the qualitative effectiveness of fine-tuning the GPT-2 model for the discovery of new immune checkpoints and highlights the value of fine-tuning language models to improve text generation capabilities.
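For readers who want to reproduce a setup like the one described above, the following is a minimal sketch of fine-tuning GPT-2 as a causal language model using the Hugging Face transformers and datasets libraries. The paper does not specify its training stack, so this is an assumed implementation: the file name scopus_abstracts.csv, the abstract column, the 90/10 train/validation split, the batch size, and the maximum sequence length are illustrative; only the 5 training epochs match the figure reported in the abstract.

```python
# Sketch of fine-tuning GPT-2 on a corpus of article abstracts.
# Assumed stack: Hugging Face transformers + datasets; hyperparameters
# other than the 5 epochs are illustrative, not taken from the paper.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

# Load the pretrained GPT-2 weights and tokenizer released by OpenAI.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical CSV of the ~7,000 Scopus abstracts, one abstract per row.
dataset = load_dataset("csv", data_files="scopus_abstracts.csv", split="train")
dataset = dataset.train_test_split(test_size=0.1)  # assumed 90/10 split

def tokenize(batch):
    return tokenizer(batch["abstract"], truncation=True, max_length=512)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Causal language modeling: labels are the input tokens shifted by one.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-immune-checkpoint",
    num_train_epochs=5,               # matches the 5 epochs reported above
    per_device_train_batch_size=4,    # assumed batch size
    evaluation_strategy="epoch",      # "eval_strategy" in newer transformers
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)
trainer.train()
```

After training, the fine-tuned model can be sampled with an immune-checkpoint-related prompt (the prompt below is illustrative):

```python
prompt = "A novel immune checkpoint"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```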