Exploring Bimodality in Introductory Computer Science Performance Distributions
Colorado Mesa University, Grand Junction, Colorado, USA
McGill University, Montreal, Quebec, Canada
Publish date: 2018-07-11
EURASIA Journal of Mathematics, Science and Technology Education, 2018;14(10):em1591
This study examines whether student performance distributions evidence bimodality, that is, whether there are two distinct populations, in the grades of three introductory computer science courses at a four-year southwestern university in the United States for the period 2014-2017. Results suggest that computer science course grades are not bimodal. These findings counter the double-hump assertion and suggest that proper course sequencing can address the needs of students with varying levels of prior knowledge and obviate the double-hump phenomenon. Studying performance helps to improve the delivery of introductory computer science courses by ensuring that courses are aligned with student needs and address preconceptions, prior knowledge, and experience.
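The reference list points to Hartigan's dip test as one formal check for unimodality. As an illustrative sketch only (not the paper's own analysis, and with illustrative data, function names, and thresholds), Sarle's bimodality coefficient is a simpler heuristic for flagging candidate bimodal grade distributions: values above roughly 5/9 (about 0.555) suggest two modes or heavy skew.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient.

    Values above ~5/9 (0.555) suggest a bimodal (or heavily
    skewed) distribution; a normal sample scores ~1/3.
    """
    n = len(x)
    g1 = skew(x)       # sample skewness
    g2 = kurtosis(x)   # excess kurtosis (Fisher definition)
    return (g1**2 + 1) / (g2 + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

# Simulated course grades (illustrative, not the study's data):
rng = np.random.default_rng(0)
unimodal = rng.normal(75, 10, 500)                  # one population
bimodal = np.concatenate([rng.normal(55, 5, 250),   # "non-learners"
                          rng.normal(90, 5, 250)])  # "learners"

print(bimodality_coefficient(unimodal))  # below the 0.555 threshold
print(bimodality_coefficient(bimodal))   # above the 0.555 threshold
```

A low coefficient alone does not prove unimodality, which is why formal tests such as Hartigan's dip test are preferred in the literature this paper engages with.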
Ahadi, A., & Lister, R. (2013). Geek genes, prior knowledge, stumbling points and learning edge momentum: parts of the one elephant? In Proceedings of the ninth annual international ACM conference on International computing education research (pp. 123-128). ACM.
Alturki, R. (2016). Measuring and Improving Student Performance in an Introductory Programming Course. Informatics in Education, 15(2), 183-204.
Basnet, R. B., Doleck, T., Lemay, D. J., & Bazelais, P. (2018). Exploring Computer Science Students’ Continuance Intentions to Use Kattis. Education and Information Technologies, 23(3), 1145–1158.
Bornat, R. (2014). Camels and humps: a retraction. Retrieved November 2017 from
Brown, J. D. (2014). Differences in how norm-referenced and criterion-referenced tests are developed and validated? Shiken, 18(1), 29-33.
Burning Glass. (2016). Beyond Point and Click: The Expanding Demand for Coding Skills (pp. 1-12). Retrieved from
Caspersen, M. E., Larsen, K. D., & Bennedsen, J. (2007). Mental models and programming aptitude. In Proceedings of the 12th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (pp. 206-210). New York, NY: ACM.
Corney, M. W. (2009). Designing for engagement: Building IT systems. In ALTC First Year Experience Curriculum Design Symposium. Queensland University of Technology, Brisbane.
Dehnadi, S., & Bornat, R. (2006). The camel has two humps. Middlesex University Working Paper. Retrieved November 2017 from
Dishman, L. (2016). Why Coding Is Still The Most Important Job Skill Of The Future. Fast Company. Retrieved from
Dunn, L., Parry, S., & Morgan, C. (2002). Seeking quality in criterion referenced assessment. In Learning Communities and Assessment Cultures Conference, EARLI Special Interest Group on Assessment and Evaluation, University of Northumbria, UK. Retrieved from
França, A. C. C., da Cunha, P. R., & da Silva, F. Q. (2010). The Effect of Reasoning Strategies on Success in Early Learning of Programming: Lessons Learned from an External Experiment Replication. In 14th International Conference on Evaluation and Assessment in Software Engineering (EASE). Keele University, UK.
Hartigan, J., & Hartigan, P. (1985). The Dip Test of Unimodality. The Annals of Statistics, 13(1), 70-84.
Höök, L. J., & Eckerdal, A. (2015). On the bimodality in an introductory programming course: An analysis of student performance factors. In Learning and Teaching in Computing and Engineering (LaTiCE), 2015 International Conference on (pp. 79-86). IEEE.
Kafai, Y., & Burke, Q. (2013). Computer Programming Goes Back to School. Phi Delta Kappan, 95(1), 61-65.
Lister, R. (2010). Computing Education Research: Geek genes and bimodal grades. ACM Inroads, 1(3), 16.
Lung, J., Aranda, J., Easterbrook, S., & Wilson, G. (2008). On the difficulty of replicating human subjects studies in software engineering. In Proceedings of the 30th International Conference on Software Engineering (ICSE ‘08). New York, NY: ACM.
Lye, S., & Koh, J. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior, 41, 51-61.
Ma, L., Ferguson, J., Roper, M., & Wood, M. (2011). Investigating and improving the models of programming concepts held by novice programmers. Computer Science Education, 21(1), 57-80.
Ott, C., Robins, A., Haden, P., & Shephard, K. (2015). Illustrating performance indicators and course characteristics to support students’ self-regulated learning in CS1. Computer Science Education, 25(2), 174-198.
Patitsas, E., Berlin, J., Craig, M., & Easterbrook, S. (2016). Evidence that computer science grades are not bimodal. In Proceedings of the 2016 ACM Conference on International Computing Education Research (pp. 113-121). ACM.
Qian, Y., & Lehman, J. (2017). Students’ Misconceptions and Other Difficulties in Introductory Programming. ACM Transactions on Computing Education, 18(1), 1-24.
Robins, A. (2010). Learning edge momentum: a new account of outcomes in CS1. Computer Science Education, 20(1), 37-71.
Robins, A., Rountree, J., & Rountree, N. (2003). Learning and Teaching Programming: A Review and Discussion. Computer Science Education, 13(2), 137-172.
Sadler, R. D. (2005). Interpretations of Criteria-Based Assessment and Grading in Higher Education. Assessment and Evaluation in Higher Education, 30(2), 175-194.
Thompson, C. (2018). The Next Big Blue-Collar Job Is Coding. WIRED. Retrieved from
Watson, C., & Li, F. W. (2014). Failure rates in introductory programming revisited. In Proceedings of the 2014 conference on Innovation & technology in computer science education (pp. 39-44). ACM.
Wray, S. (2007). SQ minus EQ can predict programming aptitude. In Proceedings of the PPIG 19th Annual Workshop, Finland (Vol. 1, No. 3).
Zingaro, D. (2015). Examining Interest and Grades in Computer Science 1. ACM Transactions on Computing Education, 15(3), 1-18.