Few-Shot Question Answering in Low-Resource Languages using Model-Agnostic Meta-Learning (MAML)

Md Tahseen Equbal
Md Wasim Nehal
Wasim Ahmad Sheikh
Aarif Rasul
Md Irshad Anwar
Asad Iqbal

Abstract

Question Answering (QA) systems have made tremendous strides in languages with abundant resources, such as English. In low-resource languages, however, model performance is severely constrained by the lack of annotated data. This study proposes the Model-Agnostic Meta-Learning (MAML) framework as a means of few-shot question answering for languages with limited resources. With only a small number of annotated question-answer pairs, the method enables rapid adaptation to a new domain or language. We evaluate the framework on the multilingual QA benchmarks TyDiQA and XQuAD, with an emphasis on low-resource Indian languages such as Telugu, Bengali, and Hindi. Experimental results show that our MAML-based technique achieves an 8.5% increase in F1 score and an 8.2% improvement in Exact Match (EM) over the best fine-tuned baselines, considerably outperforming standard fine-tuning and transfer learning approaches in few-shot settings. This study demonstrates how meta-learning can be used to build flexible, scalable QA systems for underrepresented languages.
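The MAML procedure the abstract describes — an inner loop that adapts to a task from a few examples and an outer loop that updates the shared initialisation — can be illustrated with a minimal sketch. This is a toy one-parameter regression example, not the paper's multilingual QA setup; the learning rates, task distribution, and loss are illustrative assumptions chosen so the meta-gradient (including the second-order term) can be computed exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.01, 0.005  # inner-loop / outer-loop learning rates (assumed)
w = 0.0                    # meta-learned initialisation of the single parameter

def grad(w, x, y):
    # dL/dw for the squared loss L(w) = mean((w*x - y)^2)
    return 2.0 * np.mean(x * (w * x - y))

for step in range(2000):
    # Sample a task: y = a*x with a task-specific slope a
    a = rng.uniform(1.0, 3.0)
    xs = rng.normal(size=10); ys = a * xs   # support set (few-shot examples)
    xq = rng.normal(size=10); yq = a * xq   # query set (held-out examples)

    # Inner loop: one gradient step on the support set
    w_adapt = w - alpha * grad(w, xs, ys)

    # Outer loop: exact meta-gradient through the inner step.
    # For this quadratic loss L''(w) = 2*mean(x^2), so
    # d(w_adapt)/dw = 1 - alpha * 2*mean(xs**2).
    meta_g = grad(w_adapt, xq, yq) * (1.0 - alpha * 2.0 * np.mean(xs**2))
    w -= beta * meta_g
```

After meta-training, `w` serves as an initialisation from which a single inner-loop step on a handful of examples from an unseen task already lowers the query loss — the same adaptation behaviour the paper exploits, at full scale, with a pretrained multilingual QA model in place of the scalar `w`.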

Article Details

Section

Research Paper

How to Cite

Few-Shot Question Answering in Low-Resource Languages using Model-Agnostic Meta-Learning (MAML). (2026). Journal of Global Research in Multidisciplinary Studies (JGRMS), 2(1), 86-89. https://doi.org/10.5281/
