RESEARCH PAPER
The Development of a Model and a Measuring Tool for Specialist Accreditation in Public Health Services
 
AFFILIATIONS
1. Sechenov First Moscow State Medical University, Moscow, Russia
2. Ministry of Health, Moscow, Russia
3. The State University of Management, Moscow, Russia
4. Kazan (Volga Region) Federal University, Kazan, Russia
5. Graduate School of Education, The University of Western Australia, Nedlands, WA, Australia
CORRESPONDING AUTHOR
Zhanna M. Sizova   

Sechenov First Moscow State Medical University, Moscow, Russia
Online publish date: 2017-09-16
Publish date: 2017-09-16
 
EURASIA J. Math. Sci. Tech. Ed. 2017;13(10):6779–6788
ABSTRACT
The main purpose of this paper is to present theoretical approaches and methods for optimizing assessment in the accreditation of specialists in public health services. The results reported here include a model of multistage adaptive measurement and two methods for reliability and validity analysis that support fair, defensible accreditation decisions and meet the requirements of high-stakes testing. The optimization aims to minimize assessment time while increasing the reliability and validity of the resulting data. To this end, a special measurement model based on multistage adaptive testing is proposed. Using this model in assessment design combines the advantages of traditional adaptive testing and linear testing while minimizing their drawbacks, so the model is recommended as the dominant one for assessment in accreditation. To increase validity, an approach based on Structural Equation Modeling (SEM) is proposed. SEM makes it possible to test the significance of relations between observed and latent variables that can be interpreted as causal effects, and to construct a model of these relations. An example model of causal relations among disciplines, latent variables (competencies), and factors is given; it helps to improve the construct and content validity of the measuring tool used in public health services accreditation. The method of reliability estimation in multistage measurements proposed in the paper is innovative: it has a branching structure, because the reliability of a multistage measurement depends not only on the reliability of the separate stages but also on the correlations between them. The presented approaches increase the validity and reliability of decisions in the assessment of public health services specialists, as well as in other spheres of assessment during accreditation.
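The claim that multistage reliability depends on both the reliability of the separate stages and the correlations between them can be illustrated with a stratified-alpha-style composite. The sketch below is not the paper's own method; `composite_reliability` is a hypothetical helper, and it assumes that raw stage scores for the same examinees and stage-level reliability estimates are available. The between-stage correlations enter through the variance of the total score.

```python
import numpy as np

def composite_reliability(stage_scores, stage_reliabilities):
    """Stratified-alpha-style reliability for a multistage total score.

    stage_scores: list of 1-D arrays (one per stage, same examinees).
    stage_reliabilities: reliability estimate of each stage on its own.
    The total-score variance includes the covariances between stages,
    so the composite reflects both stage reliabilities and the
    correlations between stages.
    """
    stages = np.vstack(stage_scores)               # (n_stages, n_examinees)
    total_var = np.var(stages.sum(axis=0), ddof=1)  # includes covariances
    # Error variance contributed by each stage: var_i * (1 - rel_i)
    error_var = sum(np.var(s, ddof=1) * (1.0 - r)
                    for s, r in zip(stage_scores, stage_reliabilities))
    return 1.0 - error_var / total_var

# Two strongly correlated stages, each with reliability 0.8: the
# composite exceeds either stage's own reliability.
r = composite_reliability(
    [np.array([1.0, 2.0, 3.0, 4.0]), np.array([2.0, 4.0, 6.0, 8.0])],
    [0.8, 0.8],
)
```

With weakly correlated stages the total-score variance shrinks relative to the summed error variances, and the composite drops accordingly, which is exactly the dependence on between-stage correlations noted in the abstract.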
 
 
eISSN: 1305-8223
ISSN: 1305-8215