Open Collections
UBC Theses and Dissertations
Exploring the potential of LLMs for biomedical relation extraction
Kanwal, Swati
Abstract
Current state-of-the-art models for biomedical relation extraction rely on encoder-only LLMs that are pre-trained on domain-specific data and then fine-tuned on large amounts of annotated data. There is therefore a compelling case for exploring decoder-only LLMs and alternative task-formulation paradigms, to investigate whether similar or superior results can be achieved without extensive annotation and computational resources. Surprisingly, decoder-only LLMs have received limited attention for biomedical relation extraction. This study addresses that gap by presenting BioREPS, a novel method that enhances decoder-only LLMs through semantic similarity and chain-of-thought prompting, yielding promising outcomes. It highlights the effectiveness of instruction training and self-generated chain-of-thought prompts in improving the reasoning abilities of decoder-only LLMs for this task. Additionally, the research investigates various task-formulation paradigms and the empirical advantages of domain-specific training through a series of experiments. The results confirm that "general" decoder-only large language models (LLMs) hold immense potential for biomedical relation extraction, coming close to state-of-the-art performance with a fraction of the data and no additional training.
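The abstract describes enhancing decoder-only LLMs via semantic similarity and chain-of-thought prompting. A common way to combine these ideas (a plausible reading, not necessarily the exact BioREPS pipeline) is to retrieve the annotated examples most similar to the input sentence and assemble them, with their reasoning chains, into a few-shot prompt. The sketch below illustrates that pattern; the bag-of-words `embed` function is a deliberately simple stand-in for a real sentence encoder, and all example data, field names, and relation labels are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real sentence encoder; a production system would use
    # dense embeddings from a (biomedical) embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in a if t in b)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_demonstrations(query, pool, k=2):
    """Pick the k annotated examples most semantically similar to the query."""
    q = embed(query)
    return sorted(pool, key=lambda ex: cosine(q, embed(ex["text"])), reverse=True)[:k]

def build_prompt(query, demos):
    """Assemble a chain-of-thought prompt from the retrieved demonstrations."""
    parts = ["Extract the relation between the two entities in each sentence."]
    for d in demos:
        parts.append(f"Sentence: {d['text']}\nReasoning: {d['cot']}\nRelation: {d['relation']}")
    parts.append(f"Sentence: {query}\nReasoning:")  # the LLM completes from here
    return "\n\n".join(parts)

# Hypothetical annotated pool with self-generated reasoning chains.
pool = [
    {"text": "Aspirin inhibits COX-1 activity.",
     "cot": "The drug directly reduces the enzyme's activity.",
     "relation": "inhibitor"},
    {"text": "Metformin activates AMPK signalling.",
     "cot": "The drug increases the pathway's activity.",
     "relation": "activator"},
]

demos = select_demonstrations("Ibuprofen inhibits COX-2.", pool, k=1)
print(build_prompt("Ibuprofen inhibits COX-2.", demos))
```

Retrieving demonstrations per query, rather than using a fixed few-shot set, is what lets a frozen "general" decoder-only model adapt to each input with only a small annotated pool and no additional training.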
Item Metadata
Title: Exploring the potential of LLMs for biomedical relation extraction
Creator: Kanwal, Swati
Supervisor:
Publisher: University of British Columbia
Date Issued: 2024
Genre:
Type:
Language: eng
Date Available: 2024-04-04
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: 10.14288/1.0440989
URI:
Degree:
Program:
Affiliation:
Degree Grantor: University of British Columbia
Graduation Date: 2024-05
Campus:
Scholarly Level: Graduate
Rights URI:
Aggregated Source Repository: DSpace