An Evaluation of Generated Question Sequences Based on Competency Modelling

Authors

  • Onjira SITTHISAK
  • Lester GILBERT

DOI:

https://doi.org/10.58459/icce.2010.75

Abstract

In order to support lifelong learning, assessment systems have to focus on representing and updating a variety of knowledge domains, rules, assessments, and learners' competency profiles. Adaptive assessment provides efficient and personalised routes to establishing learners' proficiencies. Existing adaptive assessment systems face the challenge of inconsistent measurement and representation of students' knowledge. We can envisage a future in which learners are able to maintain and expose their competency profiles to multiple services throughout their lives, and these services will use the competency information in the model to personalise assessment. This paper presents an adaptive assessment system based on a competency model. The system automatically generates questions from a competency framework and sequences them according to taxonomies of subject matter or of capability, making it possible to guide learners in developing questions and testing their knowledge for themselves. The questions and their sequencing are constructed from a given set of learning outcomes and the subject matter recorded in an ontological database. The architecture of the system and the mechanism for sequencing the questions are discussed.
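
To illustrate the idea of generating and sequencing questions from competency records, here is a minimal sketch. It assumes a simplified competency representation (a capability verb applied to a subject-matter topic) and a fixed taxonomy ordering; the names Competency, generate_question and sequence_questions are hypothetical and are not taken from the paper's actual system.

    from dataclasses import dataclass

    # Assumed taxonomy of capability, ordered from lower to higher level
    CAPABILITY_LEVELS = ["recall", "explain", "apply", "analyse"]

    @dataclass
    class Competency:
        topic: str        # subject matter, e.g. drawn from an ontological database
        capability: str   # intended learning outcome verb

    def generate_question(c: Competency) -> str:
        """Turn one competency record into a simple question stem (illustrative only)."""
        templates = {
            "recall":  f"Define '{c.topic}'.",
            "explain": f"Explain how '{c.topic}' works.",
            "apply":   f"Apply '{c.topic}' to a new example.",
            "analyse": f"Analyse the components of '{c.topic}'.",
        }
        return templates[c.capability]

    def sequence_questions(competencies: list[Competency]) -> list[str]:
        """Order questions from lower to higher capability level,
        one possible taxonomy-based sequencing strategy."""
        ordered = sorted(competencies,
                         key=lambda c: CAPABILITY_LEVELS.index(c.capability))
        return [generate_question(c) for c in ordered]

    if __name__ == "__main__":
        framework = [
            Competency("normalisation", "apply"),
            Competency("normalisation", "recall"),
            Competency("normalisation", "explain"),
        ]
        for q in sequence_questions(framework):
            print(q)

Running the sketch prints the recall question first and the apply question last, mirroring the kind of capability-ordered sequence the abstract describes; the paper's own system would derive such sequences from a full competency framework rather than a hard-coded list.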

Published

2010-11-29

How to Cite

An Evaluation of Generated Question Sequences Based on Competency Modelling. (2010). International Conference on Computers in Education. https://doi.org/10.58459/icce.2010.75