Assessment of Teaching and Learning (QAF 2.1.2.4 and 5.1.3.1.4)

Note:  The following guidance might also be helpful when considering how a self-study is to address the teaching and assessment evaluation criteria for cyclical program reviews (QAF 5.1.3.1.4 a)).

When developing a new program proposal, what information is reasonable and appropriate to meet the QAF evaluation criterion 2.1.2.4 a): “Appropriateness of the methods for assessing student achievement of the program-level learning outcomes and degree level expectations”?

External Reviewers and the Appraisal Committee/Quality Council members need to be able to discern the relation between the assessment methods that will be used in a program and the individual program learning outcomes and Degree Level Expectations (DLEs). To give an obvious example, if a learning outcome is focused on the development of oral communication skills, then a written test as the method of assessment would be questionable. If an outcome indicates the importance of applying specific knowledge in order to develop a set of cognitive and conceptual problem-solving skills, then written tests and assignments certainly can be appropriate. If an outcome concerning such application involves achieving a designated level of proficiency in a hands-on skill, then a practical assignment assessed through, among other methods, direct observation would have a more immediate relation to this outcome. Simply put, “hands-on application” and “written conceptualization” do not convey a clear and immediate relation.

Reviewers of a program proposal ask the same questions that students and instructors ask: “Is the assignment or assessment method well-suited for students to demonstrate the knowledge, skills, attributes, etc. they have acquired in the course?” and “Will the assessment allow the instructor to assess and evaluate the achievement of specific program learning outcomes?”

Examples of ways in which universities can provide information that will assist reviewers in assessing this criterion include:

  • Providing a list of the types of assessment methods that will be used by a program, indicating where in the curriculum these assessment methods will be used, and providing a table in which assessment methods are aligned with program learning outcomes and degree level expectations. Tracking assessment results by cohort may also assist in continuous program improvement.
  • Providing a list of the types of assessment methods that will be used by a program and specifying, in paragraph form, where and how each assessment method will be used to achieve specific program learning outcomes across the program. (Such an approach might be preferred if specific assessment methods will be used to assess several program learning outcomes at once.)
  • Explaining the process by which a program will track student progress as it relates to individual program learning outcomes across the degree by breaking down final course grades by the assessments completed and using a tracking tool across the program. In this approach, programs should demonstrate alignment between each assessment method and program learning outcome.
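As an illustration only, the tracking tool described in the last approach can be as simple as a script that disaggregates assessment results by the program learning outcome (PLO) each assessment is aligned with, then summarizes achievement at the cohort level. The QAF does not prescribe any particular tool; all course names, scores, and the achievement threshold below are invented for the sketch:

```python
from collections import defaultdict

# Hypothetical records: each links one student's result on one assessment
# to the program learning outcome (PLO) that assessment is aligned with.
records = [
    # (student, course, assessment, PLO, score out of 100)
    ("s001", "BIOL 1000", "lab report",   "PLO-1", 82),
    ("s001", "BIOL 1000", "final exam",   "PLO-2", 64),
    ("s002", "BIOL 1000", "lab report",   "PLO-1", 58),
    ("s002", "BIOL 1000", "final exam",   "PLO-2", 71),
    ("s001", "BIOL 2000", "presentation", "PLO-3", 77),
    ("s002", "BIOL 2000", "presentation", "PLO-3", 69),
]

THRESHOLD = 65  # assumed score at which an outcome counts as "achieved"

def cohort_outcome_summary(records, threshold=THRESHOLD):
    """Mean score and achievement rate per PLO across the cohort."""
    by_plo = defaultdict(list)
    for student, course, assessment, plo, score in records:
        by_plo[plo].append(score)
    summary = {}
    for plo, scores in sorted(by_plo.items()):
        achieved = sum(1 for s in scores if s >= threshold)
        summary[plo] = {
            "mean_score": sum(scores) / len(scores),
            "achievement_rate": achieved / len(scores),
        }
    return summary

for plo, stats in cohort_outcome_summary(records).items():
    print(plo, stats)
```

Tracking results this way, cohort by cohort, yields exactly the kind of outcome-level documentation that supports continuous program improvement and later cyclical reviews.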

NOTE: The templates linked to below have been identified by the Quality Council’s Appraisal Committee as representing best practices in addressing 2.1.2.4 a) in new program proposal submissions. Universities may want to consider adapting these templates into their own new program proposals.

McMaster University – 2.1.2.4 a) Best Practice Example (Graduate Program – PhD)

Queen’s University – 2.1.2.4 a) Best Practice Example (Graduate Program – PhD)

Trent University – 2.1.2.4 a) Best Practice Example (Undergraduate Program – BSc)

York University – 2.1.2.4 a) and b) Best Practice Example (Graduate Program – Master)

When developing a new program proposal, what information is reasonable and appropriate to meet the QAF evaluation criterion 2.1.2.4 b) “Appropriateness of the plans to monitor and assess:

  i. The overall quality of the program;
  ii. Whether the program is achieving in practice its proposed objectives;
  iii. Whether its students are achieving the program-level learning outcomes; and
  iv. How the resulting information will be documented and subsequently used to inform continuous program improvement”?

Note:  The following guidance might also be helpful when considering how a self-study is to address the teaching and assessment evaluation criteria for cyclical program reviews (QAF 5.1.3.1.4 b)).

External Reviewers and the Appraisal Committee/Quality Council members need to be able to discern how a program will document and be able to assess whether students, upon graduation, have achieved the intended program learning outcomes and degree level expectations. In particular, they need to understand how the university plans to document the level of performance of students in the program as a whole and how it will use this information towards the continuous improvement of the program moving forward. The university should consider: What information is being collected? Who will collect it? Will any student feedback be obtained after graduation? How will all of the information collected be used? How and when will the information be provided back to the program?

The type of documentation will be program-specific. Setting a course grade or GPA threshold that students must achieve for graduation, documenting the grade spread of a graduating cohort, calculating placement rates, and devising plans for surveying alumni one year post-graduation and again five years later are all methods that can be used by programs to satisfy this criterion. There is no one-size-fits-all approach. Each proposal is assessed in terms of whether program design and delivery, and student performance of knowledge, skills, and abilities, are achieved at the level of the degree (undergraduate Bachelor’s, graduate Diploma, Master’s, Doctoral). In addition to these expectations, each proposal is also assessed, given the program design and delivery, in terms of whether students are actually achieving the outcomes specified as central to the program. Criterion 2.1.2.4 b) asks programs to devise ways of documenting whether such outcomes are being achieved, primarily as a means of programs’ ongoing self-assessment as well as to provide information for continuous program improvement and future cyclical program reviews.

Simply put: “How do you plan to assess whether all the effort put into designing and, soon, delivering the program is working in the way and with the levels of success you expected? What sort of information do you need in order to be able to answer that question? How will you use the information for continuous program improvement?” Generally speaking, that information is drawn from performance during the program and after graduation.

Examples of ways in which universities can provide information that will assist reviewers in assessing parts i., ii., and iii. of criterion 2.1.2.4 b) include:

  • A proposal that shows how the plans for documenting the level of student performance have been designed specifically to be consistent with the degree level expectations. Here, program-level learning outcomes are based on the DLEs and provide the backbone for the program. Onto these are mapped appropriate courses and methods of assessment, culminating in a capstone experience required of all students and associated with most of the program learning outcomes and DLEs. Thus, upon successful completion of the capstone experience, students will have achieved the program’s objectives. In addition, more global methods of assessment, such as exit and alumni surveys, will provide a broader view of the program and student performance. Together, these assessment methods provide a complete picture of the program that is easily documented and can be used for continuous improvement and formal cyclical reviews.
  • A proposal in which achievement of the program learning outcomes is demonstrated using a set of rubrics specifically developed to measure success in achieving specific program learning outcomes. In such a case, each rubric would be aligned with a particular program learning outcome and used in the assessment of a required capstone assignment so that successful completion of the capstone assignment would demonstrate the achievement of an individual program learning outcome. Such an approach would be augmented by gathering additional data, for example, feedback from students, exit and alumni surveys, and career success in order to provide a complete picture of the program’s ability to satisfy criterion 2.1.2.4 b).
  • A proposal that describes the process by which a program is tracking student progress related to program learning outcomes across the curriculum using a tracking tool. To complement this direct and quantitative form of program assessment, more indirect forms of assessments are used; for example, students can be exposed to the program learning outcomes as they begin their degree and upon graduation. Students and alumni can also be asked to reflect on the program, including its content, modes of delivery and program learning outcomes. Finally, the proposal demonstrates how, together, these data are used by the program to assess its success related to the achievement of program learning outcomes by its graduates.
  • A proposal that describes the process by which a program will use accreditation requirements to ensure that its students are meeting the program learning outcomes. Such a proposal will provide some details on the criteria used in the accreditation process so that both External Reviewers and Appraisal Committee/Quality Council members can assess whether 2.1.2.4 b) is addressed by the accreditation review.

In order to then demonstrate how the resulting information will be documented and subsequently used to inform continuous program improvement (2.1.2.4 b) iv.), universities are strongly encouraged to ensure the proposal describes the way(s) in which the achievement of program learning outcomes, particularly at the cohort level, is assessed on a routine basis. Most typically, this monitoring and oversight responsibility will be assigned to an individual or committee. Indicators used by such a person or committee typically include student grades, awards data, and exit surveys. It is expected that the proposal will indicate that classes and assessment practices will be closely monitored on an ongoing basis by the individual or committee with responsibility for this oversight. Feedback from students, faculty, teaching assistants, community members, and others is obtained and assessed, as are the career success and satisfaction of graduates. To this end, every effort is made to maintain contact with graduates of the program (e.g., through a requirement for alumni surveys). The proposal should also signal that efforts to improve the program, whether in content or delivery, in response to these data and feedback will be routine and ongoing in order, for example, to better address contemporary issues that arise in relevant communities.

In addition, programs are encouraged to seek guidance from the university’s Centre for Teaching and Learning (or equivalent), as well as the QA Office, before finalizing the proposal for external review.

Finally, it is worth noting that the work put into getting these aspects of teaching and learning “right” at the time of the program’s creation will pay significant dividends when it comes time to launch the program, as well as at its first cyclical review.