Components of the Academic Prioritization Tool
The academic prioritization tool examined ten different program characteristics in its prioritization of degree-granting academic units. Each of these characteristics was itself a combination of various measures: for instance, expenditures relative to student credit hours taught, or research funding relative to space allocation. These ten program characteristics were then grouped into four clusters: resource efficiency, degree production, scholarly accomplishments, and undergraduate teaching effectiveness. The measures within each cluster were scaled to give a cluster score, and the four cluster scores, equally weighted, were averaged to reach the final prioritization score.
The ten program characteristics in the four clusters are described further below.
In some instances, certain measures had to be interpolated from other data; for instance, the Scholarly Accomplishments measure was not available for some creative work departments. Departments and programs not offering an undergraduate degree, such as Education and Law, were not graded on undergraduate teaching effectiveness. In such cases, the Academic Prioritization score was the average of the remaining three clusters.
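To make the aggregation concrete, the following is a minimal sketch of the final-score calculation described above: the equally weighted average of whichever cluster scores a unit has. The function name, signature, and example values are hypothetical illustrations, not part of the prioritization tool itself.

```python
# Minimal sketch (not the tool's actual implementation) of the final score:
# the equally weighted average of whichever cluster scores a unit has.
from statistics import mean
from typing import Optional

def prioritization_score(
    resource_efficiency: Optional[float],
    degree_production: Optional[float],
    scholarly_accomplishments: Optional[float],
    teaching_effectiveness: Optional[float],
) -> float:
    """Average the available cluster scores, skipping any the unit lacks
    (for example, units without an undergraduate degree have no teaching
    effectiveness score)."""
    clusters = [
        resource_efficiency,
        degree_production,
        scholarly_accomplishments,
        teaching_effectiveness,
    ]
    available = [score for score in clusters if score is not None]
    return mean(available)

# A graduate-only unit: only three clusters enter the average.
print(prioritization_score(0.62, 0.75, 0.88, None))  # 0.75
```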
Resource Efficiency examines program delivery in terms of cost, relative revenue production, and space-use efficiency. The metrics examined under resource efficiency are almost all measured at the departmental or college level, not at the level of the individual degree or major; therefore, aggregate departmental resource efficiency is applied to multiple degree programs. For instance, the cost and space utilization efficiency measures for the Department of Art and Art History would be used in examining its various BA, MA, BFA, and MFA degree programs in Art History and Studio Arts. This is because a single faculty and a single allocation of space are deployed to offer these multiple degrees.
The separate factors examined for the Resource Efficiency cluster are:
- Cost per student credit hour. Cost is measured by calculating all expenditures in a unit classified as “general fund,” including faculty and staff salaries (but not most benefits), graduate student expenditures in the unit, and operating costs. Cost is then divided by the number of student credit hours produced in a unit in all of its degree and non-degree offerings. For departments in the Colleges of Arts and Sciences and Engineering and Applied Science, college-level operating costs (for instance, for college-level advising) are prorated among individual departments. Indirect cost allocations to departments and colleges are not removed from the calculation, as the delivery of a unit’s research mission is considered integral to its teaching mission.
- Income and expenses. This measure is a profit/loss measure for each department that calculates gross revenue generated in terms of the tuition value of student credit hours and major production, as well as research and auxiliary revenue (sources), relative to various costs (uses). This measure is an important complement to cost efficiency in that it captures the relative scale of production. In other words, while Psychology and Classics are comparable in terms of cost per credit hour, the much larger scale of the Psychology program translates into a considerable excess of sources over uses for the campus.
- Efficiency of space utilization. Space utilization is an important criterion in the overall resource efficiency of program delivery, but the costs of space allocations are not captured by general fund expenditures. To measure space efficiency, all units are ranked by the dollar amount of research per departmental square foot of space and by the number of majors per square foot of space; the average of these two rankings is the basis of the space-utilization score. It is necessary to recognize that many programs, especially in the laboratory sciences and creative arts, require large amounts of space for program delivery relative to programs that depend primarily on centrally scheduled classrooms, faculty offices, and departmental office space rather than on laboratory or studio space. A sketch of the cost and space-utilization calculations follows this list.
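The sketch below illustrates two of the Resource Efficiency calculations under stated assumptions; the field names, data structures, and example values are invented for illustration and do not come from the campus's actual implementation.

```python
# Hypothetical sketch of two Resource Efficiency measures described above.
# Field names, data structures, and values are illustrative assumptions.

def cost_per_credit_hour(general_fund_expenditures: float,
                         prorated_college_costs: float,
                         student_credit_hours: float) -> float:
    """Departmental general-fund spending, plus the prorated share of
    college-level operating costs, divided by credit hours produced."""
    return (general_fund_expenditures + prorated_college_costs) / student_credit_hours

def space_utilization_rank(units: dict[str, dict[str, float]]) -> dict[str, float]:
    """Rank units by research dollars per square foot and by majors per
    square foot; the space score is the average of the two ranks (1 = best)."""
    by_research = sorted(units, key=lambda u: units[u]["research_per_sqft"], reverse=True)
    by_majors = sorted(units, key=lambda u: units[u]["majors_per_sqft"], reverse=True)
    return {u: (by_research.index(u) + by_majors.index(u) + 2) / 2 for u in units}

# Invented example data for three units.
example = {
    "Chemistry": {"research_per_sqft": 310.0, "majors_per_sqft": 0.010},
    "Psychology": {"research_per_sqft": 180.0, "majors_per_sqft": 0.045},
    "Classics": {"research_per_sqft": 25.0, "majors_per_sqft": 0.020},
}
print(space_utilization_rank(example))
# {'Chemistry': 2.0, 'Psychology': 1.5, 'Classics': 2.5}
```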
Degree Production measures the share of total degrees produced at CU Boulder over the past six years. Unit degree production is calculated for two separate factors: (a) all degrees and (b) graduate degrees specifically. Graduate degrees are included as a separate unit characteristic in recognition of CU Boulder’s statutory mission in graduate education.
When a student graduates with a double major, both degree-granting units receive credit. It is important to note that degree production will differ from the number of majors because many students graduate with a degree different from their initially declared major. A low degree-to-major ratio may indicate student retention issues within a department.
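As a purely hypothetical illustration (the unit names and counts below are invented), the degree-share and degree-to-major calculations might look like this:

```python
# Invented data illustrating the Degree Production measures: a unit's share of
# all degrees awarded over six years, and a simple degree-to-major ratio.
from collections import Counter

def degree_share(degrees_by_unit: Counter, unit: str) -> float:
    """Unit's share of total degrees; a double major is counted once for
    each degree-granting unit before this tally is built."""
    return degrees_by_unit[unit] / sum(degrees_by_unit.values())

def degree_to_major_ratio(degrees_awarded: int, declared_majors: int) -> float:
    """A low ratio can flag retention problems within a department."""
    return degrees_awarded / declared_majors

degrees = Counter({"Psychology": 1800, "Classics": 90, "Physics": 420})
print(round(degree_share(degrees, "Psychology"), 3))  # 0.779
print(round(degree_to_major_ratio(90, 140), 2))       # 0.64
```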
Scholarly Accomplishments summarizes the research and creative achievements of the faculty in the academic units across campus. For a research university like CU Boulder, it is critical to measure the scholarly accomplishments of our departments in some standardized way, not only within the campus but also in comparison to other institutions. Fortunately, recent advances in research data collection allow us to go beyond simple publication counts and rankings based on reputation. Based on the data made available through Academic Analytics, we can now measure the relative scholarly achievement of most units on campus. The elements of research and creative work examined by Academic Analytics include the number of publications, the amount of grant funding received, citations to previous work, and honors and award recognition to the faculty (Nobel Prizes, MacArthur Grants, membership in National Academies, and so forth). These factors are captured on both a per capita and a total basis to give an aggregate measure that can then be compared to other PhD-granting departments in the same field.
Between the last academic prioritization exercise (2014) and the current one (2018), the measurement of scholarly accomplishment was refined to better reflect the standing of CU Boulder units in relation to cognate units at peer institutions. In particular, Academic Analytics data were narrowed so that CU Boulder faculty are compared only to peer faculty at AAU institutions. This change makes the data more reflective of CU Boulder’s expectation that its faculty will achieve prominence in scholarship and creative work equal to that of faculty at the top institutions in the US. Specifically, CU Boulder faculty were compared in terms of scholarly accomplishment to faculty at AAU institutions, not to those at all PhD-granting institutions; faculty in CU Boulder units that do not grant the PhD were compared to faculty at AAU peer institutions that do grant the PhD (since we expect our faculty to be competitive with AAU peer faculty in scholarly accomplishment even if their unit does not offer the PhD); and faculty in departments that cover more than one field (for example, Mathematics and Applied Mathematics) were disaggregated so that they were compared to faculty in the same field at AAU peer institutions.
For units that offer the PhD, we then scale this ranking as a percentile among PhD-offering programs in that discipline. For instance, Geography’s ranking is 100% because it scores as the top program of 87 Geography PhD programs in the US and Canada. In this way we can compare the scholarly quality and accomplishments of units across campus relative to their standing in their respective disciplines. Programs not granting PhDs are scored at the median percentile. All programs are then ranked by quintile.
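A hedged sketch of this scaling follows, under the assumption that rank 1 of N maps to the 100th percentile (consistent with the Geography example) and that percentiles are binned into five equal quintiles; the exact tie-breaking and binning rules of the actual tool are not specified here.

```python
# Illustrative sketch of the Scholarly Accomplishments scaling: a disciplinary
# rank becomes a percentile, non-PhD-granting units receive the median
# percentile, and all units are then binned into quintiles. The formulas are
# assumptions consistent with the example in the text, not the tool's code.

def percentile_from_rank(rank: int, programs_in_discipline: int) -> float:
    """Rank 1 of 87 maps to the 100th percentile; last place approaches 0."""
    return 100.0 * (programs_in_discipline - rank + 1) / programs_in_discipline

def quintile(percentile: float) -> int:
    """Quintile 1 (top 20 percent) through quintile 5 (bottom 20 percent)."""
    return min(5, int((100.0 - percentile) // 20) + 1)

print(percentile_from_rank(1, 87))  # 100.0 -- e.g., the Geography example
print(quintile(100.0))              # 1 (top quintile)
print(quintile(50.0))               # 3 (median percentile, used for non-PhD units)
```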
Teaching Effectiveness measures the extent to which a unit serves its own majors and the curricular needs of other units on campus. Rather than relying on Faculty Course Questionnaire data, which would present aggregation issues when moving from the course level at which the data are collected to the department level at which prioritization must be accomplished, we employ two measures produced as part of the annual Senior Survey given to graduates: (a) overall satisfaction with the major; and (b) the extent to which the major prepared a student to meet their post-graduation plans. In this way we capture the effectiveness of a program in satisfying and preparing its students. In addition, we include a measure of (c) the amount of teaching done by each unit for students outside that unit’s major. Many departments fill a critical role in delivering important parts of the core curriculum or of the curriculum requirements of other majors. For instance, Physics and Math are a necessary part of the education of all engineering majors, but are taught outside the College of Engineering and Applied Science. The extent of this service teaching is an important part of the teaching mission of the campus and shows how critical the mission of some units is, even if those units do not produce a large number of degrees.
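As a purely hypothetical illustration (the 0-1 scaling, equal weighting, and values below are assumptions rather than the tool's documented formula), the three Teaching Effectiveness inputs might be combined like this:

```python
# Illustrative combination of the three Teaching Effectiveness inputs named
# above: senior-survey satisfaction with the major, senior-survey preparation
# for post-graduation plans, and the share of a unit's teaching delivered to
# students outside its own major.
from statistics import mean

def teaching_effectiveness(satisfaction: float,
                           preparation: float,
                           service_share: float) -> float:
    """Each input is assumed already scaled to the 0-1 range; the cluster
    score is their simple mean (equal weighting is an assumption)."""
    return mean([satisfaction, preparation, service_share])

# A department whose seniors are satisfied and well prepared, and which
# teaches a large share of its credit hours to non-majors.
print(round(teaching_effectiveness(0.84, 0.78, 0.66), 2))  # 0.76
```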
*Should you require assistance reading these documents, please contact us at (need contact info)