Evaluating Deep Transfer Learning Models for Assessing Text Readability for ESL Learners
Abstract
Assessing the readability of texts is a fundamental task in educating English-as-a-second-language (ESL) learners. Because manually evaluating readability requires considerable human effort and is costly, methods for automatically assessing readability are needed. In natural language processing, automatic readability assessment is treated as a text classification task. Recently, the predictive performance of text classification methods has improved significantly owing to the development of deep transfer learning. In transfer-learning-based text classification, a large unlabeled corpus is used for pre-training, after which the model is fine-tuned on training data, i.e., pairs of texts and their manually annotated labels. The predictive performance of these methods depends on the pre-trained model and the fine-tuning parameters. In previous studies, however, experiments were typically conducted with a single pre-trained model and a few fixed fine-tuning parameters, because testing different models and parameters poses technical difficulties, such as insufficient GPU memory. In this study, we compared various pre-trained models under various settings using an NVIDIA A100 unit with 80 GiB of GPU memory. We found that training for many epochs, considering many tokens, and using large models are key to achieving excellent accuracy.
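The fine-tuning setup described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example using the Hugging Face Transformers library; the checkpoint name (bert-large-uncased), the three-level label scheme, and all hyperparameter values are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: fine-tuning a pre-trained transformer for readability
# classification. The checkpoint, labels, and hyperparameters below are
# illustrative assumptions, not the paper's exact setup.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

# Hypothetical training data: texts paired with manually annotated
# readability levels (0 = easy, 1 = intermediate, 2 = difficult).
train = Dataset.from_dict({
    "text": ["The cat sat on the mat.",
             "Notwithstanding the aforementioned stipulations, compliance is mandatory."],
    "label": [0, 2],
})

checkpoint = "bert-large-uncased"  # a large model; an assumption, not the paper's choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def tokenize(batch):
    # A longer max_length lets the model consider more tokens per text,
    # at the cost of additional GPU memory.
    return tokenizer(batch["text"], truncation=True, max_length=512)

train = train.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="readability-model",
    num_train_epochs=10,            # many epochs, per the study's finding
    per_device_train_batch_size=8,  # bounded by available GPU memory
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=train,
        tokenizer=tokenizer).train()
```

In line with the paper's conclusion, the quantities worth sweeping jointly are the number of epochs, the maximum sequence length, and the model size, with the batch size constrained by available GPU memory.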
Published
2022-11-28
How to Cite
Evaluating Deep Transfer Learning Models for Assessing Text Readability for ESL Learners. (2022). International Conference on Computers in Education, 681-683. https://library.apsce.net/index.php/ICCE/article/view/4666