Rubric Authoring Tool for Supporting the Development and Assessment of Cognitive Skills in Higher Education

Natalie Simper


This paper explores a method of supporting instructors in assessing cognitive skills in their courses, designed to enable aggregation of data across an institution. A rubric authoring tool, ‘BASICS’ (Building Assessment Scaffolds for Intellectual Cognitive Skills), was built as part of the Queen’s University Learning Outcomes Assessment (LOA) Project. It provides a workflow for assessment choices and generates an assessment rubric that can be tailored to individual needs based on user input. The dimensions and criteria in BASICS were adapted from the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics and drew on annotations from over 900 work samples from the LOA project. This paper summarizes the development of the tool and presents initial reliability and validity data from a pilot study, which compared assessment data derived from course Teaching Assistants with that of trained Research Assistants. The pilot found that the BASICS-developed rubric was consistent for the assessment of critical thinking and problem solving. Analysis found moderate intraclass correlation coefficients between the BASICS rubric and corresponding VALUE rubric dimensions, suggesting that the BASICS rubric aligned with the VALUE criteria. Preliminary findings suggest that BASICS is an effective tool for instructors to author rubrics tailored to their own specifications for the assessment of cognitive skills in a course. It also shows promise as a method for aggregating data across the institution. Researchers are conducting further investigation to evaluate the reliability of BASICS rubrics over multiple work samples from a range of disciplinary contexts.


Keywords: Assessment; Rubric; Critical thinking; Creative thinking; Problem solving






This work is licensed under a Creative Commons Attribution 4.0 International License.


Teaching & Learning Inquiry is the official journal of the International Society for the Scholarship of Teaching and Learning (ISSOTL).