Monitoring

There are many ways to monitor the effects of teaching leadership. These include observing changes in survey scores (SCEQ, CEQ, USE), the outcomes of departmental reviews, participation by academic staff in the SoTL Performance Index, and assessment results.

Monitoring Learning and Teaching

Student Course Experience Questionnaire (SCEQ)
http://www.itl.usyd.edu.au/sceq/

Course Experience Questionnaire (CEQ)
http://www.itl.usyd.edu.au/ceq/

Unit of Study Evaluations (USE)
http://www.itl.usyd.edu.au/use/

Academic Board Reviews
http://www.usyd.edu.au/ab/faculty_review/index.shtml

Scholarship of Teaching Performance Index
http://www.itl.usyd.edu.au/awards/sotl.htm

Assessment results (see next section)


The Course Experience Questionnaire and Teaching Development

The Course Experience Questionnaire is now but a shadow of the version first developed. It includes only the Good Teaching Scale, the Generic Skills Scale, and one item asking students for their overall satisfaction with the quality of their course.

The six items that make up the Good Teaching Scale on the Course Experience Questionnaire are:

  • The staff put a lot of time into commenting on my work.
  • The teaching staff normally gave me helpful feedback on how I was going.
  • The teaching staff of this course motivated me to do my best work.
  • My lecturers were extremely good at explaining things.
  • The teaching staff worked hard to make their subjects interesting.
  • The staff made a real effort to understand difficulties I might be having with my work.

Students respond to these items on a 1-5 Likert scale from Strongly disagree, Disagree, Neutral, Agree, to Strongly agree.

Mean scores for the scale are reported on the 1-5 scale, or converted to a -100 to +100 range (-100 = 1, -50 = 2, 0 = 3, 50 = 4 and 100 = 5). A very rough rule of thumb for a significant difference in means is 15-20 points on the -100 to +100 scale, or 0.3-0.4 on the 1-5 scale.
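The conversion between the two reporting scales is a simple linear rescaling. A minimal sketch (the function name is illustrative, not an official tool):

```python
def to_ceq_scale(mean_1_to_5):
    """Linearly rescale a 1-5 Likert mean to the -100..+100 CEQ range."""
    return 50 * (mean_1_to_5 - 3)

print(to_ceq_scale(1.0))  # -100.0
print(to_ceq_scale(3.0))  # 0.0
print(to_ceq_scale(4.0))  # 50.0
```

The anchor points match the mapping above: 1 maps to -100, 3 (neutral) to 0, and 5 to +100.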

What can be done to improve students’ perceptions of teaching quality?

  • Are the CEQ perceptions still valid? Courses may have changed since the CEQ data were collected. Even the most recent results could have been influenced by students' perceptions of their first year five years ago. A more contemporary indicator containing the same scales is the SCEQ, administered to a stratified sample of current students every two years. Aggregating USE scores by program will also give a more recent overview, and enable possible identification of exemplary and troublesome UoS.
  • Choose to explore ways to improve perceptions of teaching quality as an Action Point in a strategic plan.
  • Adopt a "closing the loop" strategy. Many students are reluctant to participate in a system from which there appears to be no change. Helping them see that efforts are being made to adopt their suggestions is likely to lead to enhanced perceptions that teaching matters, and also to further change. The SEG Education Committee now expects faculties to describe to students what they intend to do with SCEQ results, thus closing the loop between feedback and action.
  • Improving students' perceptions of the quality of their teaching can be addressed holistically by developing more student-centred approaches to teaching, and/or by focusing on the items from which the scale score has been derived. Doing the former in the areas identified by the latter is one recommended approach. For example, the responses to Q3 below show that nearly 30% of students disagree that they normally get helpful feedback. A student-centred approach would be to find out what feedback students find helpful and to focus current feedback and commentary time on those aspects. This could be done using qualitative comments from students or by asking more direct questions in class.
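The first point above suggests aggregating USE scores by program to spot exemplary and troublesome UoS. A minimal sketch of that aggregation, using made-up data (program names, unit codes, and scores are all illustrative):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-UoS means on the 1-5 scale, one row per unit of study
use_scores = [
    ("BSc", "CHEM1001", 3.9),
    ("BSc", "PHYS1002", 2.6),
    ("BA",  "HSTY1023", 4.2),
    ("BA",  "PHIL1010", 3.7),
]

# Group units of study by program
by_program = defaultdict(list)
for program, uos, score in use_scores:
    by_program[program].append((uos, score))

for program, units in by_program.items():
    scores = [s for _, s in units]
    print(program, round(mean(scores), 2))
    # Flag possibly troublesome UoS falling below the 1-5 scale midpoint
    for uos, s in units:
        if s < 3.0:
            print("  review:", uos, s)
```

Real USE reporting would of course start from the institutional data extract rather than a hand-typed list; the point is only the group-by-program overview and the per-UoS flagging.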

Encouraging academic staff to make use of USE data

  • Give them access to USE data aggregated at the program level, so they can see where their UoS sits.
  • Look at their UoS. What do you expect?
  • Ask co-ordinators/teachers to complete the questionnaire themselves before they see the results. Where are there differences? What might be the reasons?
  • Establish what the context is – distribution of results (mean and sd); % disagree. What is acceptable?
  • Look at trends. What did students say last semester? What are the changes in the areas addressed last time? What could be done this time?
  • Compare means and % of disagreement with similar UoS.
  • Develop some thoughts on what might be done to improve things (holistic and by item).
  • Assess what the written comments add or confirm.
  • Encourage teachers to take some thoughts/conclusions to discuss with students.
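The context-establishing step above (distribution of results: mean, standard deviation, and % disagree) can be sketched as follows; the response data are hypothetical:

```python
from statistics import mean, stdev

def summarise(responses):
    """Summarise 1-5 Likert responses: mean, sample sd, and % disagreeing (1 or 2)."""
    pct_disagree = 100 * sum(r <= 2 for r in responses) / len(responses)
    return mean(responses), stdev(responses), pct_disagree

# Hypothetical responses to one USE item for one UoS
m, sd, pct = summarise([4, 5, 3, 2, 4, 4, 1, 5, 3, 4])
print(f"mean={m:.2f} sd={sd:.2f} %disagree={pct:.0f}")
```

Counting "% disagree" (responses of 1 or 2) alongside the mean helps with the "What is acceptable?" question, since a respectable mean can hide a sizeable disagreeing minority.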

Next: Assessment results