Assessment of Teaching and Learning – 2.1.6 a) and b)

QAF 2.1.6 a): When developing a new program proposal, what information is reasonable and appropriate to meet the QAF evaluation criterion 2.1.6 a) “Appropriateness of the proposed methods for the assessment of student achievement of the intended program learning outcomes and Degree Level Expectations”?

External Reviewers and the Appraisal Committee/Quality Council members need to be able to discern the relation between the method of assessment and the particular learning outcome, as that outcome reflects the appropriate Degree Level Expectations (UUDLEs or GDLEs). To give an obvious example, if a learning outcome is focused on the development of oral communication skills, then a written test as the method of assessment would be questionable. If an outcome indicates the importance of applying specific knowledge in order to develop a set of cognitive and conceptual problem-solving skills, then written tests and assignments certainly can be appropriate. If an outcome concerning such application involves achieving a designated proficiency in hands-on skills, then a practical assignment assessed through (but not limited to) observation would have a more immediate relation to that outcome. Simply put, “hands-on application” and “written conceptualization” do not convey a clear and immediate relation.

Reviewers of a program proposal ask the same question that students and instructors ask: “Is the assignment or assessment method well suited for students to demonstrate their knowledge, skills, attributes, etc., and for instructors to assess and evaluate that demonstration?”

Some examples of how universities have provided information that assists reviewers in assessing this criterion are included below:

Example 1: Proposal for new undergraduate program

The proposal lists several types of formative and summative assessment methods. The program emphasizes hands-on methods of assessment, such as lab and field work and independent research projects. The proposal provides a table that shows the alignment between program learning outcomes and assessment methodologies.

Example 2: Proposal for a new Honours Bachelor's program

In addition to traditional tests and exams that measure recall of content and application of knowledge, the program will assess comprehensive knowledge through projects that require application. Assessment of deep learning, critical thinking, and creative problem-solving will include knowledge of alternative bases of wisdom and methodologies, considered in measured relation to those of the mainstream “Western” tradition. There will also be an emphasis on community engagement and experiential learning, wherever possible. Learning is also demonstrated through ethical research practices, as well as written and oral presentations, which are key skills that need to be practiced and developed to prepare graduates for career success.

Examples of such an approach to assessment include:

  • Evaluating student ability to merge diverse knowledges, including non-Eurocentric and community-based knowledges, with Western theoretical concepts in essays, presentations, research projects, and exams
  • Measuring student ability to identify and communicate the impacts of historical institutional and governmental policies on the health, wellbeing, self-determination, and organization of a specific community
  • Gauging student efficacy in identifying, supporting, and assisting in the research interests of self-defined communities
  • Evaluating student use of community-designed theoretical frameworks towards the development of practical, meaningful, and innovative applications that benefit community partners and the university.

The proposal indicated that the assessment practices in the program would include:

  • Tests & Examinations
  • Research Essay
  • Seminar Presentation
  • Oral Examination/Assessment
  • Research Project
  • Simulation
  • Critical Reflection
  • Debates
  • Presentations
  • Relevant Community Engagement

Example 3: Master’s Program with Capstone

Assessment of Learning: Student performance in the program will be assessed through a variety of methods, including reports, presentations, assignments, and project portfolios, which accumulate toward the calculation of course grades. A table very clearly presented the techniques to be used to assess the skills and competencies associated with designated program learning outcomes. These included: assignments, projects, the capstone, the portfolio, and presentations.

Assessing Experiential Learning: Through its modular and flexible structure, the program has been designed to be responsive and accommodating to student interests and creativity. Responsiveness necessitates an infrastructure for assessing learning that ensures rigor, consistency, and meaningful standards across the potentially diverse outcomes anticipated. Central to resolving this tension between flexibility and consistency is the capstone project, to be evaluated by the Program Committee.

Capstone requirements include a project proposal and the selection of a Project Advisory Committee (PAC), both subject to approval by the Program Committee. The project proposal must include a rubric and metric for individual and group evaluation, to be negotiated between students and their PAC and approved by the Program Committee, ensuring rigor across projects and providing the means for final capstone assessment and evaluation. This apparatus will provide project-specific flexibility to define deliverables and outcomes, and a program-wide system for assessing quality and student achievement. Students are required to provide project updates every three months to ensure ongoing feedback and review before the final deliverable(s) are evaluated. Finally, the PAC will be required to submit a grade and an assessment of the project, with clear demarcations of individual student contributions (in the case of group projects), to the Program Committee for review.

Responsibility for helping students navigate the program and for addressing any flagged issues lies with the program Director and departmental Graduate Coordinators. The Program Committee will also report to the Curriculum Committee and will forward any student issues or appeals that cannot be resolved at the program level to the departmental committees on standing. At the program level, the Director, Graduate Coordinators, and the Program Committee will be responsible for approving student admissions, capstone proposals, Project Advisory Committee composition, and final project evaluations. Student progress through the program as a whole will be closely monitored by program administrators to ensure appropriate progress and that all programmatic requirements are met.

Example 4: Master’s Program

Pedagogical and Evaluation Strategies: Faculty members use a multi-modal method for course instruction, and this method is also applied to evaluation. Students are evaluated on the core knowledge gained in each course; however, application of these concepts is a key component of the student's evaluation. For example, exams and midterms are designed with a combination of questions to ensure that students not only understand concepts but also know how to apply them, how to make appropriate decisions to determine potential solutions, and how to use their knowledge to develop unique or innovative solutions. Student performance is also measured in terms of written and oral communication skills in the majority of courses, which allows students to develop their communication skills continuously throughout the degree, producing graduates who have the required basic knowledge and skills and who can use this knowledge effectively and appropriately. Lastly, students will keep an e-portfolio throughout the entirety of the program. The portfolio includes the deliverables from the core courses, the research methods course, the specialization courses, and the Major Research Paper. The portfolio will help students integrate the knowledge acquired across their courses and the MRP, and will enable faculty to make an assessment based on the overall program learning outcomes.

Example 5: Master’s Program

The program will utilize a variety of methods to assess student achievement across the program learning objectives during and after the twelve-month program (presented in a table depicting the relations between assessment methods and learning outcomes). Assessment includes community partner feedback through a survey provided at the end of the placement; written and oral class work, such as term papers, drafts of proposed methods, critical reflections, presentations, and final grades; the MRP proposal (including the Research Ethics Board application); and the final MRP. To provide information about classroom work, instructors will be asked to provide a breakdown of the class final grade mean by type of work completed. We will also track presentations at conferences, teaching feedback for TAs, participation in research grants, and scholarships and awards. An exit survey will be administered upon completion of the MRP to assess student perspectives on their own achievement, their level of satisfaction with the program, and their future plans. With students' agreement, we will occasionally contact program alumni to find out how they have used their MA training.

Rationale: These methods of demonstrating and documenting student achievement are appropriate because they provide ongoing assessment of the student during the program and are easily recorded for longer-term uses. The community partner survey will assess satisfaction with 1) the work accomplished by the student, and 2) placement facilitation and oversight by university staff and faculty. Student reflections at the end of the placement will provide the student perspective on these same topics. Administering an exit survey will then provide a broader overall assessment of the program from the student's perspective. Following up with alumni over time provides information about the longer-term value of the program both for students and for society, and informs ensuing cyclical program reviews.

QAF 2.1.6 b): When developing a new program proposal, what information is reasonable and appropriate to meet the QAF evaluation criterion 2.1.6 b) “Completeness of plans for documenting and demonstrating the level of performance of students, consistent with the institution's statement of its Degree Level Expectations”?

“Documenting” and “demonstrating” depend on the nature of the program. “Level of performance” implies a continuum of success. For instance, many programs, whether traditional academic programs or those with an immediate practical application, document for their own records the grade spread of a graduating cohort (undergraduate: cumulative major GPA in the context of cumulative degree GPA; graduate: cumulative in-program GPA). Some programs set a course grade or GPA threshold which students must achieve for graduation. Many programs calculate completion versus placement rates, sometimes with the assistance of professional organizations for data collection, and devise plans for surveying alumni one year post-graduation, then five years. Again, the types of documentation, as well as the aim and need of demonstration, are program-specific. There is no one-size-fits-all approach. “Documentation” is a function of the different needs of programs for the information; however, all programs will undergo review every eight years.

Each proposal is assessed, in part, in terms of whether program design and delivery, and student performance of knowledge, skills, and abilities, are achieved at the level of the degree (undergraduate Bachelor's, graduate Diploma, Master's, Doctoral). In addition to these expectations, each proposal is also assessed, given the program design and delivery, in terms of whether students are actually achieving the outcomes specified as central to the program. Criterion 2.1.6 b) asks programs to devise ways of demonstrating and documenting whether such outcomes are being achieved, primarily as a means of programs' ongoing self-assessment, as well as to provide information for future cyclical program reviews.

Simply put: “How do you plan to assess (document and demonstrate) whether all the effort put into designing and, soon, delivering the program is working in the way you expected and with the levels of success you expected? What sort of information do you need in order to be able to answer that question?” Generally speaking, that information is drawn from students' actual results during the program, upon graduation, and after graduation.

For some programs, isolating the most appropriate information (the what, but also the how) is a challenge, including a challenge of human resources; however, meeting the challenge in a program-specific way is a necessity when program design, approval, and delivery rest on learning outcomes.

Some examples of how universities are providing this information in their new program proposals are shown below:

Example 1: Proposal for new undergraduate program

Projects or assignments for the capstone courses will be created with not only course learning outcomes but also program learning outcomes in mind, so that both individual faculty members and the program as a whole have a way of evaluating whether or not students have met the program learning outcomes and degree level expectations.

To document and demonstrate the level of performance of students with respect to program learning outcomes, the university is using the Desire2Learn Insights tool. With this tool, course-level assignments are linked to course and program outcomes and to degree level expectations, providing quantitative data on student achievement of program-level learning outcomes.

To complement these direct forms of program assessment with an indirect form, students are introduced to the program learning outcomes when they begin their degree; in an exit survey upon graduation, they are asked how well they feel the outcomes were reflected in the course curriculum and how well they feel the outcomes were achieved.

Example 2: Proposal for new Honours Bachelor’s Program

The program will be externally reviewed during cyclical reviews and assessed on an ongoing basis through indicators such as student grades, integrative assessment practices, and awards data. Classes and assessment practices will be closely monitored on an ongoing basis as we receive feedback from students, faculty, teaching assistants, community members, and others. Ultimately, we will judge success by assessing the career success and satisfaction of our graduates, and we will therefore make every effort to maintain contact with our graduates to this end. Efforts to improve the program, whether in content or delivery, in response to these data and this feedback will be routine and ongoing, in order to better address contemporary issues that arise in relevant communities.

Appendices provide curriculum-mapping and assessment charts (Program Learning Outcomes tied to specific courses, and a table indicating which assessment type was attributed to each course learning outcome).

Example 3: Proposal for Master’s program in a professional discipline

The proposal indicated that the program would use accreditation requirements to ensure that students are meeting the program learning outcomes.

Example 4: Proposal for new Master’s Program

The plans for documenting and demonstrating the level of student performance have been designed specifically to be consistent with the degree level expectations. The program-level learning outcomes are based on the DLEs and provide the backbone for the program; onto these were mapped appropriate courses and methods of assessment. Since the MRP is the capstone experience and is associated with most of the learning outcomes and DLEs, upon its successful completion students will have achieved the program's objectives. In addition, more global methods of assessment, such as the exit survey, provide a broader view of the program and of student performance. Together, these assessment methods provide a complete picture of the program that is easily documented and can be used for formal cyclical reviews or other purposes.