UBC Theses and Dissertations
A comparative study of automatically generated and large language model-generated unit tests for piecewise function approximation algorithms
Kanabar, Riya Manoj
Abstract
Large language models have emerged as promising tools for automated test generation, yet their effectiveness compared to systematic testing approaches remains empirically unvalidated for mathematical algorithms. This thesis investigates whether LLMs can generate unit tests as effective as systematic enumeration for piecewise function approximation algorithms, a fundamental class of algorithms requiring both tolerance satisfaction and segment minimization. We develop a comparative testing framework evaluating seven state-of-the-art language models against systematic exhaustive testing. The framework employs provably optimal algorithms as ground-truth oracles, enabling definitive identification of algorithmic failures across both tolerance violations and suboptimal solutions. Systematic generation produces 75 billion test cases through bounded parameter space enumeration, while LLM-based generation uses varied prompting strategies across multiple contemporary models. GPU acceleration and JIT compilation make this scale computationally feasible, reducing the tens-of-billions-case evaluation from years to hours. We evaluate 14 candidate algorithms spanning diverse paradigms across piecewise constant and piecewise linear approximation problems. The comparative evaluation reveals substantial differences in failure-detection effectiveness between systematic and LLM-based test generation. These findings establish empirical evidence regarding LLM capabilities and limitations for unit test generation in mathematical algorithm domains, informing practical decisions about when AI-assisted testing methodologies are appropriate.
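The abstract describes the oracle-plus-enumeration framework only at a high level. As a concrete illustration of that idea (not the thesis's actual code), the following minimal Python sketch pairs a greedy oracle for piecewise-constant approximation under an L-infinity tolerance with a bounded enumeration over a small value grid; the function names, the value grid, and the restriction to the piecewise-constant case are all assumptions made for the example. For this toy formulation, greedy extend-while-feasible is optimal in segment count because any sub-span of a feasible segment is itself feasible.

```python
import itertools

def oracle_pwc_segments(values, eps):
    """Greedy oracle for piecewise-constant approximation under an
    L-infinity tolerance eps: extend the current segment while the
    midpoint of its running min/max keeps every point within eps,
    otherwise open a new segment. Returns the minimal segment count."""
    segments, lo, hi = 0, None, None
    for v in values:
        if segments and max(hi, v) - min(lo, v) <= 2 * eps:
            lo, hi = min(lo, v), max(hi, v)  # point joins the current segment
        else:
            lo, hi = v, v                    # start a new segment at this point
            segments += 1
    return segments

def enumerate_mismatches(candidate, max_len=4, grid=(0.0, 1.0, 2.0), eps=0.5):
    """Bounded parameter-space enumeration: run the candidate on every
    sequence over a small value grid and record segment counts that
    differ from the oracle's. Tolerance-violation checks on the
    candidate's actual segments would be added the same way."""
    mismatches = []
    for n in range(1, max_len + 1):
        for values in itertools.product(grid, repeat=n):
            expected = oracle_pwc_segments(values, eps)
            got = candidate(values, eps)
            if got != expected:
                mismatches.append((values, eps, expected, got))
    return mismatches

# Example: a naive candidate that splits whenever consecutive values
# differ by more than eps is quickly flagged as suboptimal.
naive = lambda vs, eps: 1 + sum(abs(b - a) > eps for a, b in zip(vs, vs[1:]))
print(enumerate_mismatches(naive)[:3])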
Item Metadata

| Field | Value |
| --- | --- |
| Title | A comparative study of automatically generated and large language model-generated unit tests for piecewise function approximation algorithms |
| Creator | Kanabar, Riya Manoj |
| Supervisor | |
| Publisher | University of British Columbia |
| Date Issued | 2026 |
| Genre | |
| Type | |
| Language | eng |
| Date Available | 2026-01-21 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0451338 |
| URI | |
| Degree (Theses) | |
| Program (Theses) | |
| Affiliation | |
| Degree Grantor | University of British Columbia |
| Graduation Date | 2026-02 |
| Campus | |
| Scholarly Level | Graduate |
| Rights URI | |
| Aggregated Source Repository | DSpace |