Faculty Policy Series #49
Course and Teacher Ratings (CTRs)
The Course and Teacher Ratings (CTRs) at Hofstra University provide a measure of student perceptions of a faculty member’s teaching effectiveness that complements peer and administrative observations (FPS #46). The form provides students with an opportunity to rate the faculty member on specified attributes of teaching performance, as well as to provide open-ended comments. CTR forms are made available to students in course sections in all units (except the Law School and Medical School) each fall and spring semester for administration in accordance with this FPS. Upon a faculty member’s request to the Provost’s office, CTRs may be administered to students in summer and January courses. The latest version of the form is available online at https://www.hofstra.edu/sites/default/files/2021-04/copy-blank-scantron-comment-forms.pdf. Summary numerical CTRs are distributed to the offices of the Department Chair, the Dean, and the Provost, as well as to the faculty member. These summary ratings are also available online via the portal. The open-ended comments will not be read by the administration and will be available online to the faculty member.
I. Administration of CTRs
CTRs will be available fourteen days before the last day of classes for faculty to administer to students and will remain open until 11:59 p.m. on the last day of classes. The opening and closing dates of CTRs and the interval for responding will be adjusted proportionally for part-of-term sections that span fewer weeks of the semester. CTR responses are to be collected in all on-load class sections, except those identified by the Department Chair as being inappropriate for this type of assessment. CTRs may be administered in courses taught on a per-capita basis, but those data will be listed separately rather than included in semester averages. [Separate modules or alternative forms may be developed for use in laboratory, performance, and/or studio classes, and those courses in which the faculty member provides per-capita instruction to individual students.] The CTRs are designed to allow a faculty member to add up to three questions that address issues not covered by the form. Given the low response rate of online CTRs, faculty members are encouraged to use class time for administration.
II. Interpretation and Use of CTR Results
The CTRs may be used for both formative (individual faculty development) and summative (evaluative) purposes, in the context of data from other sources. Although attention to individual items may be useful for formative and interpretative purposes, only scale scores should be used when analyzing the CTRs for summative purposes. Each scale score represents the average score on one of several distinct factors (groups of related items). For each course taught, the faculty member shall be provided with his or her own mean (arithmetic average) for each item and on each scale. Faculty shall also receive the mean and standard deviation (a measure of the degree of variability in the ratings) for each item for all courses with the same prefix. See FPS 49B (Fall 2008) for an annotated example. The prefix mean shall represent the unweighted mean for all courses within that prefix. The standard error of measurement (SEM), a measure of the degree of error to be expected in a score, shall be made available to the faculty member and the department chair in order to construct appropriate confidence intervals.
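As an illustration of the quantities described above, the following sketch computes hypothetical item means, a scale score (the mean of the item means on one factor), and an item standard deviation. The SEM formula shown (SD × √(1 − reliability)) is the conventional classical-test-theory estimate and is offered only as an assumption, since this FPS does not specify how the SEM is calculated; all ratings and the reliability value are invented for illustration.

```python
import math
import statistics

# Hypothetical ratings from one class on three related items that load
# on the same scale (factor). On this form, lower ratings are better.
item_ratings = {
    "item_1": [1, 2, 2, 3, 1, 2],
    "item_2": [2, 2, 3, 2, 1, 2],
    "item_3": [1, 1, 2, 2, 2, 3],
}

# Mean (arithmetic average) for each item.
item_means = {item: statistics.mean(r) for item, r in item_ratings.items()}

# Scale score: the average across the related items on one factor.
scale_score = statistics.mean(item_means.values())

# Standard deviation: a measure of the degree of variability in the ratings.
item_sd = statistics.stdev(item_ratings["item_1"])

# Conventional classical-test-theory SEM (an assumption, not the FPS's
# stated formula): SEM = SD * sqrt(1 - reliability).
reliability = 0.85  # hypothetical scale reliability
sem = item_sd * math.sqrt(1 - reliability)
```

These quantities are exactly those distributed with each report: per-item means, scale scores, and the variability and error measures used to construct confidence intervals.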
Departments are responsible for developing specific policies on the use of CTRs for summative purposes, and for sharing these policies with all instructors and the dean’s office. These policies may include the identification of specific CTR items that receive close attention for summative purposes because they cover course and instructor attributes that are highly valued by the department’s faculty. Department and DPC chairs are encouraged to review CTR feedback during promotion and tenure probationary periods with candidates to discuss resources and strategies for improvement. CTRs are to be used as one of several means of evaluating faculty teaching effectiveness.
Administrators and personnel committees shall evaluate performance across courses taught within a semester as follows:
1. In each semester, the mean for each scale shall be calculated by averaging the scale scores across classes taught by the faculty member during that semester. The mean scale scores are not weighted for class size, e.g., a class with 60 students does not receive more weight than a class with 35 students.
2. To account for error in the scores, each of the faculty member’s mean scale scores shall be assumed to extend two standard errors of measurement below and above the attained scale score. This score interval shall be referred to as the faculty member’s confidence interval.
3. For each scale, the faculty member’s confidence interval shall be compared with the department (or prefix) standard. The department (or prefix) standard shall be the mean of the department (or prefix) means for the previous four academic years.
4. If the department (or prefix) standard falls within the faculty member’s confidence interval, the scale score shall be considered acceptable.
5. On-load courses with between five and ten respondents should be included in the analysis of a faculty member’s CTRs, but should be interpreted cautiously.
6. Regardless of the department (or prefix) standard, a scale score of 2 or lower shall be deemed meritorious on all scales except the workload scale.
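The steps above can be sketched numerically. All figures below (class scale scores, SEM, and prefix means) are hypothetical; the sketch only traces the arithmetic the policy prescribes, under the convention that lower scores are better on this form.

```python
import statistics

# Hypothetical data for one scale.
class_scale_scores = [1.9, 2.4, 2.1]  # one scale score per class this semester
sem = 0.15                            # standard error of measurement (provided)
prefix_means = [2.2, 2.3, 2.1, 2.25]  # prefix means, previous four academic years

# Step 1: unweighted semester mean across the classes taught.
semester_mean = statistics.mean(class_scale_scores)

# Step 2: confidence interval extending two SEMs below and above the mean.
ci_low, ci_high = semester_mean - 2 * sem, semester_mean + 2 * sem

# Step 3: the department (or prefix) standard is the mean of the prefix
# means for the previous four academic years.
standard = statistics.mean(prefix_means)

# Step 4: the score is acceptable if the standard falls within the interval.
acceptable = ci_low <= standard <= ci_high

# A scale score of 2 or lower is deemed meritorious (workload scale excepted).
meritorious = semester_mean <= 2
```

In this invented example the semester mean is about 2.13, the interval is roughly [1.83, 2.43], and the standard of about 2.21 falls within it, so the scale score would be considered acceptable, though not meritorious.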
CTR scores must be considered within the context of the faculty member’s teaching assignments; for example, scores in courses with distinctive characteristics (e.g., introductory courses, courses for non-majors) should be compared, when feasible, with similarly structured courses. Although there are significant limitations associated with the analysis of CTR scores for a single class, individual course information can facilitate such contextual interpretation and should be submitted. For summative analyses, scores should be averaged over multiple sections and trends in scores should be analyzed over time to ameliorate the effects of idiosyncratic CTRs in a single course. To evaluate performance in a particular course, confidence intervals shall be constructed and interpreted as indicated above.
CTR scores provide the raw data that must be evaluated by the faculty committees and administrators making recommendations regarding personnel decisions. Not only must CTR scores be contextualized with respect to the courses being taught, but it must also be recognized that these data provide only one source of information. They must be evaluated alongside data from other sources, e.g., peer and administrative observations. Evaluations from any source that are negative in the aggregate must be viewed as a cause for concern, but no single source (including CTR scores) should be viewed as privileged or automatically warranting greater weight than other sources. Similarly, no source can be automatically dismissed or disregarded as providing less important or less relevant information. It is incumbent on those making recommendations on personnel matters to consider all sources of information in a serious and balanced manner.
Given that the interpretation of the CTRs is based on scales derived from factor analyses of the individual item scores and that changes in the pattern of the ratings may result in changes in the scales themselves, as well as in the standard error of measurement, new factor analyses of the CTRs shall be performed every five years. At such time, the reliability and the SEM shall be recalculated, and the scales shall be modified if warranted by the data.