Supporting Mediated Peer-Evaluation to Grade Answers to Open-Ended Questions
Maria De Marsico 1, Filippo Sciarrone 2, Andrea Sterbini 1, Marco Temperini 1 *
1 Sapienza University, Rome, ITALY
2 RomaTre University, Rome, ITALY
* Corresponding Author


We present an approach to the semi-automatic grading of answers given by students to open-ended questions (open answers), combining peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and by the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, are represented in a Bayesian Network, in which the grades of the answers and the elements of the learner models are variables with values in a probability distribution. The initial state of the network is determined by the peer-assessment data; then, each teacher grading of an answer triggers evidence propagation through the network. The framework is implemented in a web-based system. We also present an experimental activity, set up to verify the effectiveness of the approach in terms of correctness of system grading, amount of teacher work required, and correlation of the system's outputs with the teacher's grades and with the students' final exam grades.
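The evidence-propagation step described above can be illustrated with a minimal, self-contained sketch. This is not the paper's actual network (which links grades, Knowledge, and Judgment across many students and answers); it is a toy two-node fragment with hypothetical probability values, showing how a single teacher grade, entered as evidence, updates the posterior over a student's Knowledge via Bayes' rule:

```python
# Toy fragment of a Bayesian network: Knowledge (K) -> answer correctness (C).
# All numbers below are illustrative assumptions, not values from the paper.

# Prior over Knowledge, imagined as seeded by peer-assessment data.
p_k = {"low": 0.4, "high": 0.6}

# P(C = correct | K): a knowledgeable student answers well more often.
p_c_given_k = {"low": 0.30, "high": 0.85}

def posterior_knowledge(teacher_says_correct: bool) -> dict:
    """Bayes update of P(K) after the teacher grades one answer."""
    likelihood = {
        k: p_c_given_k[k] if teacher_says_correct else 1.0 - p_c_given_k[k]
        for k in p_k
    }
    unnormalized = {k: p_k[k] * likelihood[k] for k in p_k}
    z = sum(unnormalized.values())
    return {k: v / z for k, v in unnormalized.items()}

post = posterior_knowledge(teacher_says_correct=True)
print(post)  # probability mass shifts toward K = "high"
```

In the full framework this update does not stop at one node: each teacher grade propagates through the network to every answer and learner-model variable connected to it, which is what lets a few teacher gradings refine the grades of many ungraded open answers.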


This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Article Type: Research Article

EURASIA J Math Sci Tech Ed, 2017, Volume 13, Issue 4, pp. 1085-1106

Publication date: 18 Feb 2017
