THE CERTIFICATION IN COLLEGE TEACHING OFFERED AT MICHIGAN STATE UNIVERSITY seeks to build teaching competency in five areas, including assessment of student learning. Personal reflections follow.
Formative assessment of student learning is vital to effective teaching – gauging student learning provides the instructor with the opportunity to adjust their instructional methods in real time and in future iterations of the course. In practice, constructing thorough and informative assessments is challenging. To ensure adequate alignment of learning objectives, assessments, and instructional activities, backward design can be used. Summative assessments also provide meaningful information about student learning in distinct modules.
Summer 2017 Certification in College Teaching Institute
Summer 2018 Mentored Teaching Project, mentored by Dr. Kevin D. Walker
Artifacts and Rationales
My notes and materials from the Workshop on Assessing Student Learning at the 2017 Certification in College Teaching Institute are included here. The practice of assessment serves two main purposes in the academic setting: to assess student learning and to assess the effectiveness of instructional techniques. Formative assessments, including quizzes, discussion, and clicker questions, inform the instructor of students’ prior knowledge and misconceptions and promote cognitive dissonance throughout the semester, allowing issues to be confronted before summative assessments such as exams and projects take place. Backward design is an incredibly useful practice whereby instruction and assessment are guided by the learning objectives set out for students. Alignment of these three components ensures that assessments accurately reflect student learning and that content and activities are informative and worthwhile. One comment from this workshop that really moved me was that, with backward design, “what is measured reflects what is valued,” which is something I’ve noticed in my career as both a student and an educator.
In constructing the grading rubrics for my US2018 CEM 355 teaching-as-research project, I sought to assess students on their discussion and conclusions in addition to their experimental results. Because the students were in their first semester of organic chemistry laboratory and had little experience, I did not want to evaluate their technical proficiency based on the purity and yield of their products.
I chose to assess my mentored teaching project via student self-evaluation at the end of two semesters. I did not evaluate student progress based on the rubrics, though doing so would have yielded a more thorough understanding of the relationship between progress and student self-efficacy. Additionally, as the semester in which the intervention took place progressed, I realized that, although my rubrics were improved compared to previous versions, a more clearly defined points scale would have facilitated both my evaluation of student progress and students’ own awareness of their understanding throughout the semester.
Project Assessment completed by Dr. Kevin D. Walker
Assessment is the cornerstone of educational research, and education itself. While my knowledge of assessment has certainly grown since before I embarked on the Certification in College Teaching, I look forward to learning and practicing more. The mentored teaching project was a valuable learning experience, putting me in the driver’s seat of establishing learning objectives, assessments, and tools for evaluation.
As mentioned in the discussion of my mentored teaching project results, more detailed rubrics outlining specific grade distributions for assignments would have provided additional data by which to analyze student progress. Objective data on actual student performance were not collected, which is one significant drawback of this study. I am eager to apply what I’ve learned about assessment from the mentored teaching experience, using backward design to create lesson plans and assessments in the future.
OMC Fall 2018