Few-Shot Question Answering in Low-Resource Languages using Model-Agnostic Meta-Learning (MAML)
Abstract
Question Answering (QA) systems have made tremendous strides in languages with abundant resources, such as English. In low-resource languages, however, model performance is severely constrained by the lack of annotated data. This study proposes the Model-Agnostic Meta-Learning (MAML) framework as a means of few-shot question answering for languages with limited resources. With only a small number of annotated question-answer pairs, the method enables rapid adaptation to a new domain or language. Focusing on low-resource Indian languages such as Telugu, Bengali, and Hindi, we evaluate the framework on the multilingual QA benchmarks TyDiQA and XQuAD. Experimental results show that our MAML-based approach achieves an 8.5% increase in F1 score and an 8.2% improvement in Exact Match (EM) over the strongest fine-tuned baselines, considerably outperforming standard fine-tuning and transfer learning approaches in few-shot settings. This study demonstrates how meta-learning can be used to build flexible, scalable question-answering systems for languages that are not widely supported.
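The adaptation scheme described in the abstract follows the standard MAML recipe: an inner loop takes a few gradient steps on a task's small support set, and an outer loop updates the shared initialization so that this adaptation works well across tasks. The paper itself does not publish its training code, so the following is only a minimal first-order MAML (FOMAML) sketch on toy regression tasks, with each random task standing in for a new language that has only a handful of annotated pairs; the function names (`fomaml`, `sample_task`) and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # Polynomial features stand in for a QA model's encoder output.
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

def mse_grad(w, X, y):
    # Gradient of mean-squared error for the linear model y_hat = X @ w.
    err = X @ w - y
    return 2 * X.T @ err / len(y), float(np.mean(err**2))

def sample_task():
    # Each "task" is a random sine curve, analogous to a new language
    # for which only a few annotated question-answer pairs exist.
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    def make_batch(k):
        x = rng.uniform(-2.0, 2.0, size=k)
        return features(x), amp * np.sin(x + phase)
    return make_batch

def fomaml(meta_steps=200, tasks_per_step=5, k_shot=5,
           alpha=0.005, beta=0.001):
    # alpha: inner-loop (per-task) learning rate; beta: outer-loop rate.
    w = np.zeros(4)
    for _ in range(meta_steps):
        meta_grad = np.zeros_like(w)
        for _ in range(tasks_per_step):
            batch = sample_task()
            Xs, ys = batch(k_shot)            # support set (few-shot pairs)
            Xq, yq = batch(k_shot)            # query set
            g, _ = mse_grad(w, Xs, ys)
            w_adapted = w - alpha * g         # inner-loop adaptation step
            g_q, _ = mse_grad(w_adapted, Xq, yq)
            meta_grad += g_q                  # first-order outer gradient
        w -= beta * meta_grad / tasks_per_step
    return w
```

The first-order variant drops the second-derivative term of full MAML, which is a common memory-saving simplification; at deployment time, the returned initialization `w` is adapted to a new language with the same inner-loop step used during meta-training.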
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.