A rubric is a set of criteria used to evaluate performance on an assignment or assessment. Rubrics communicate expectations regarding the quality of work to students and give instructors a standardized framework for assessing that work. Rubrics can be used for both formative and summative assessment. They are also crucial for encouraging self-assessment of work and for structuring peer-assessments.

Why use rubrics?

Rubrics are an important tool to assess learning in an equitable and just manner. This is because they enable:

  1. A common set of standards and criteria to be uniformly applied, which can mitigate bias
  2. Transparency regarding the standards and criteria on which students are evaluated
  3. Efficient grading with timely and actionable feedback
  4. Identification of areas in which students need additional support and guidance
  5. The use of objective, criterion-referenced metrics for evaluation

Some instructors may be reluctant to provide a rubric for grading assessments under the perception that it stifles student creativity (Haugnes & Russell, 2016). However, sharing the purpose of an assessment and the criteria for success in the form of a rubric, along with relevant examples, has been shown to particularly improve the success of BIPOC, multiracial, and first-generation students (Jonsson, 2014; Winkelmes, 2016). Improved success in assessments is generally associated with an increased sense of belonging which, in turn, leads to higher student retention and more equitable outcomes in the classroom (Calkins & Winkelmes, 2018; Weisz et al., 2023). By not providing a rubric, faculty risk having students guess the criteria on which they will be evaluated. When students have to guess what the expectations are, students who are first-generation, BIPOC, international, or otherwise unfamiliar with the cultural norms that have dominated U.S. higher-education institutions may be unfairly disadvantaged (Shapiro et al., 2023). Moreover, in such cases, criteria may be applied inconsistently, leading to bias in the grades students are awarded.

Steps for Creating a Rubric

Clearly state the purpose of the assessment: which topic(s) learners are being tested on, the type of assessment (e.g., a presentation, essay, or group project), the skills being tested (e.g., writing, comprehension, presentation, collaboration), and the instructor's goal for the assessment (e.g., gauging understanding formatively or summatively).

Determine the specific criteria or dimensions the assessment will evaluate. These criteria should align with the learning objectives or outcomes being assessed. They typically form the rows of a rubric grid and describe the skills, knowledge, or behavior to be demonstrated. The set of criteria may include, for example, the idea/content, quality of arguments, organization, grammar, citations, and/or creativity in writing. These criteria may form separate rows or be compiled in a single row depending on the type of rubric.

(See the row headers of Figure 1.)

Create a scale of performance levels that describe the degree of proficiency attained for each criterion. The scale typically has 4 to 5 levels (although there may be fewer depending on the type of rubric used). The levels should also have meaningful labels (e.g., not meeting expectations, approaching expectations, meeting expectations, exceeding expectations). When assigning levels of performance, use inclusive language that can inculcate a growth mindset among students, especially when work may otherwise be deemed not to meet the mark. Some examples include “Does not yet meet expectations,” “Considerable room for improvement,” “Progressing,” “Approaching,” “Emerging,” or “Needs more work,” instead of terms like “Unacceptable,” “Fails,” “Poor,” or “Below Average.”

(See the column headers of Figure 1.)

Develop a clear and concise descriptor for each combination of criterion and performance level. These descriptors should provide examples or explanations of what constitutes each level of performance for each criterion. Typically, instructors should start by describing the highest and lowest levels of performance for a criterion and then describe the intermediate levels. It is important to keep the language uniform across all columns, e.g., use parallel syntax and wording in each column for a given criterion.

(See the cells of Figure 1.)

It is important to consider how each criterion is weighted so that the weights reflect the importance of the learning objectives being tested. For example, if the primary goal of a research proposal is to test mastery of content and application of knowledge, these criteria should be weighted more heavily than others (e.g., grammar, style of presentation). This can be done by using a different scoring scale for each criterion (e.g., a scale of 8-6-4-2 points per performance level for higher-weight criteria and 4-3-2-1 points per level for lower-weight criteria). Further, the number of points awarded across levels of performance should be evenly spaced (e.g., 10-8-6-4 instead of 10-6-3-1). Finally, if a letter grade is associated with a particular assessment, consider how it relates to scores. For example, instead of having students receive an A only if they reach the highest level of performance on every criterion, consider assigning an A to a range of scores (e.g., 28-30 total points) or to a combination of levels of performance (e.g., exceeds expectations on higher-weight criteria and meets expectations on the others).

(See the numerical values in the column headers of Figure 1.)


Figure 1: Graphic describing the five basic elements of a rubric
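To make the arithmetic of the weighting step above concrete, here is a minimal sketch in Python. The criterion names, point scales, and grade thresholds are hypothetical illustrations, not prescribed values; instructors would substitute their own.

```python
# Hypothetical sketch of weighted rubric scoring; all names and values are illustrative.

# Points per performance level: higher-weight criteria use a larger, evenly spaced
# scale (8-6-4-2) than lower-weight criteria (4-3-2-1).
HIGH_WEIGHT_SCALE = {"exceeds": 8, "meets": 6, "approaching": 4, "not_yet": 2}
LOW_WEIGHT_SCALE = {"exceeds": 4, "meets": 3, "approaching": 2, "not_yet": 1}

# Hypothetical criteria for a research proposal and the scale each one uses.
CRITERIA_SCALES = {
    "content_mastery": HIGH_WEIGHT_SCALE,
    "application_of_knowledge": HIGH_WEIGHT_SCALE,
    "organization": LOW_WEIGHT_SCALE,
    "grammar_and_style": LOW_WEIGHT_SCALE,
}

def total_score(levels_awarded):
    """Sum the points earned for the performance level awarded on each criterion."""
    return sum(CRITERIA_SCALES[c][level] for c, level in levels_awarded.items())

def letter_grade(score):
    """Map the total to a grade band rather than requiring a perfect rubric."""
    if score >= 22:   # e.g., exceeds on high-weight criteria, meets on the rest
        return "A"
    if score >= 17:
        return "B"
    if score >= 12:
        return "C"
    return "Needs more work"

student = {
    "content_mastery": "exceeds",
    "application_of_knowledge": "exceeds",
    "organization": "meets",
    "grammar_and_style": "meets",
}
score = total_score(student)
print(score, letter_grade(score))  # 22 A (out of a possible 24)
```

In this sketch, a student who exceeds expectations on both high-weight criteria and meets expectations on the rest still earns an A, illustrating the grade-band approach described above.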

Note: Consider using a template rubric that can be adapted to evaluate similar activities in the classroom, to avoid the fatigue of developing multiple rubrics. Some rubric-building tools provide suggested wording for each criterion depending on the type of assessment. Additionally, the above format can be incorporated into common grading tools such as Canvas or Gradescope. Alternatively, tables within a word processor or spreadsheet may be used to build a rubric. You may also adapt the example rubrics provided below to the specific learning goals of the assessment using the blank template rubrics provided with each type of rubric. Watch the linked video for a quick overview. Word document (docx) files linked below will download automatically to your device, whereas PDF files will open in a new tab.

Types of Rubrics

Analytic Rubrics

In these rubrics, one specifies at least two criteria and provides a separate score for each criterion. The steps outlined above for creating a rubric are typical of an analytic-style rubric. Analytic rubrics are used to provide detailed feedback to students and to help identify strengths as well as particular areas in need of improvement. They can be particularly useful when providing formative feedback to students, for student peer- and self-assessments, or for project-based summative assessments that evaluate student learning across multiple criteria. You may use a blank analytic rubric template (docx) or adapt an existing sample of an analytic rubric (pdf).


Figure 2: Graphic describing a sample analytic rubric (adapted from George Mason University, 2013)

Developmental Rubrics

These are a subset of analytic rubrics that are typically used to assess student performance and engagement during a learning period, rather than the end product. Such rubrics are typically used to assess soft skills and behaviors that are less tangible (e.g., intercultural maturity, empathy, collaboration skills). They are useful in assessing the extent to which students develop a particular skill, ability, or value in experiential learning programs, and they are grounded in developmental theory (King & Baxter Magolda, 2005).

Holistic Rubrics

These rubrics consider all criteria together on one scale, providing a single score that gives an overall impression of a student's performance on an assessment. These rubrics emphasize the overall quality of a student's work rather than delineating its shortfalls. However, a limitation of holistic rubrics is that they are not useful for providing specific, nuanced feedback or for identifying areas of improvement. Thus, they may be most useful when grading summative assessments for which students have previously received detailed feedback through analytic or single-point rubrics. They may also be used to provide quick formative feedback on smaller assignments where no more than 2-3 criteria are being tested at once. Try using our blank holistic rubric template (docx) or adapt an existing sample of a holistic rubric (pdf).


Figure 3: Graphic describing a sample holistic rubric (adapted from Teaching Commons, DePaul University)

Checklist Rubrics

These rubrics contain only two levels of performance (e.g., yes/no, present/absent) across a longer list of criteria (often more than five). Checklist rubrics allow for quick assessment because each criterion is simply marked as met or not met. Consequently, they are well suited for introducing self- or peer-assessment of learning: the binary judgments keep evaluations more objective and allow uniform, quick grading. For similar reasons, such rubrics are useful for faculty in providing quick formative feedback, since they immediately highlight the specific criteria to improve on. Checklist rubrics are also used to grade summative assessments in courses using alternative grading systems, such as specifications grading, contract grading, or credit/no-credit grading, in which a minimum threshold of performance has to be met on the assessment. That said, developing checklist rubrics from existing analytic rubrics may require considerable upfront investment, because criteria have to be phrased so that they elicit only binary responses. Here is a link to the checklist rubric template (docx).


Figure 4: Graphic describing a sample checklist rubric
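As a concrete illustration of the binary, threshold-based logic described above for checklist rubrics under specifications-style grading, here is a short hypothetical sketch. The criteria and the minimum threshold are invented for illustration; an instructor would define their own.

```python
# Hypothetical sketch of checklist rubric scoring with a minimum-criteria threshold.

checklist = {
    "thesis_stated_clearly": True,
    "claims_supported_with_evidence": True,
    "sources_cited_in_required_format": False,
    "within_word_limit": True,
    "draft_submitted_for_peer_review": True,
}

MINIMUM_CRITERIA_MET = 4  # hypothetical threshold for earning credit

met = sum(checklist.values())  # True counts as 1, False as 0
outcome = "Credit" if met >= MINIMUM_CRITERIA_MET else "No credit"
to_revise = [criterion for criterion, ok in checklist.items() if not ok]

print(f"{met}/{len(checklist)} criteria met: {outcome}")
print("Criteria to revise:", ", ".join(to_revise) if to_revise else "none")
```

Because each criterion is either met or not, the same checklist doubles as quick formative feedback: the unmet items are exactly the ones flagged for revision.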

Single-Point Rubrics

A single-point rubric is a modified version of a checklist-style rubric in that it specifies a single column of criteria. However, rather than only indicating whether expectations are met, as in a checklist rubric, a single-point rubric allows instructors to describe the ways in which work exceeds or falls short of expectations. The criteria being tested are laid out in a central column describing the expected level of performance for the assignment. Instructors note areas for improvement to the left of each criterion and areas of strength in the student's performance to the right. These rubrics provide flexibility in scoring and are typically used in courses with alternative grading systems such as ungrading or contract grading. However, they do require instructors to provide detailed feedback for each student, which can be infeasible for assessments in large classes. Here is a link to the single point rubric template (docx).


Figure 5: Graphic describing a sample single-point rubric (adapted from Teaching Commons, DePaul University)

Best Practices for Designing and Implementing Rubrics

When designing the rubric format, present descriptors and criteria in a way that is compatible with screen readers and other assistive reading technology. For example, avoid relying only on color, jargon, or complex terminology to convey information. If you do use color, pictures, or graphics, provide alternative formats for the rubric, such as a plain-text document. Explore resources from the Digital Accessibility Office to learn more.

Co-creating rubrics can help students engage in higher-order thinking skills such as analysis and evaluation. It also allows students to take ownership of their own learning by helping determine the criteria their work will aim to meet. For graduate classes or upper-level students, one way of doing this may be to provide the learning outcomes of the project and let students develop the rubric on their own. Students in introductory classes, however, may need more scaffolding, such as providing a draft rubric and leaving room for modification (Stevens & Levi, 2013). Watch the linked video to learn more. Further, involving teaching assistants in designing a rubric can surface feedback on expectations for an assessment before the rubric is implemented and normed.

When first designing a rubric, it is important to compare the grades awarded for the same assessment by multiple graders to make sure the criteria are applied uniformly and reliably for the same level of performance. Further, ensure that the levels of performance in student work can be adequately distinguished using the rubric. Such a norming protocol is particularly important at the start of any course in which multiple graders use the same rubric to grade an assessment (e.g., recitation sections, lab sections, teaching teams). Here, instructors may select a subset of assignments that all graders evaluate using the same rubric, followed by a discussion to identify any discrepancies in how criteria were applied and ways to address them. Such strategies make rubrics more reliable, effective, and clear.
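One simple way to run the comparison described above is to tabulate where graders' ratings diverge. The sketch below is a hypothetical illustration (the assignment names and ratings are invented) and is not a full inter-rater reliability analysis; it simply flags disagreements for the norming discussion.

```python
# Hypothetical norming check: two graders rate the same assignments with one rubric.

grader_a = {"essay_01": "meets", "essay_02": "exceeds", "essay_03": "approaching"}
grader_b = {"essay_01": "meets", "essay_02": "meets", "essay_03": "approaching"}

# Collect the assignments where the two graders awarded different levels.
disagreements = {
    essay: (grader_a[essay], grader_b[essay])
    for essay in grader_a
    if grader_a[essay] != grader_b[essay]
}
exact_agreement = 1 - len(disagreements) / len(grader_a)

print(f"Exact agreement: {exact_agreement:.0%}")   # 67% in this example
print("Discuss during norming:", disagreements)    # {'essay_02': ('exceeds', 'meets')}
```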

Sharing the rubric with students prior to an assessment can help familiarize students with the instructor's expectations. This can help students master the learning outcomes by guiding their work in the appropriate direction, and it can increase student motivation. Further, providing the rubric to students can encourage metacognition and the ability to self-assess their learning.

Sample Rubrics

Below are links to rubric templates designed by a team of experts assembled by the Association of American Colleges and Universities (AAC&U) to assess 16 major learning goals. These goals are a part of the Valid Assessment of Learning in Undergraduate Education (VALUE) program. All of these examples are analytic rubrics and have detailed criteria to test specific skills. However, since any given assessment typically tests multiple skills, instructors are encouraged to develop their own rubric by utilizing criteria picked from a combination of the rubrics linked below.

Note: Clicking on the above links will automatically download the rubrics to your device in Microsoft Word format. Additional information regarding the VALUE Rubrics may be found on the AAC&U website.

Below are links to sample rubrics that have been developed for different types of assessments. These rubrics follow the analytic rubric format unless mentioned otherwise. However, they can be modified into other types of rubrics (e.g., checklist, holistic, or single-point rubrics) based on the grading system and the goal of the assessment (e.g., formative or summative). As mentioned previously, these rubrics can be modified using the blank templates provided.

Additional information:

Office of Assessment and Curriculum Support. (n.d.). University of Hawai'i at Mānoa.

Calkins, C., & Winkelmes, M. A. (2018). UNLV Best Teaching Practices Expo, 3.

Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76.

Haugnes, N., & Russell, J. L. (2016). To Improve the Academy, 35(2), 249–283.

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840-852.

McCartin, L. (2022, February 1). University of Northern Colorado.

Shapiro, S., Farrelly, R., & Tomaš, Z. (2023). Chapter 4: Effective and equitable assignments and assessments (pp. 61-87, second edition). TESOL Press.

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (second edition). Sterling, VA: Stylus.

Teaching Commons. (n.d.). DePaul University.

Teaching Resources. (n.d.). NC State University.

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students' success. Peer Review, 18(1/2), 31-36.

Weisz, C., Richard, D., Oleson, K., Winkelmes, M. A., Powley, C., Sadik, A., & Stone, B. (in progress, 2023). Transparency, confidence, belonging and skill development among 400 community college students in the state of Washington.

Association of American Colleges and Universities. (2009). Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics.

Canvas Community. (2021, August 24). Canvas LMS Community.

Center for Teaching & Learning. (2021, March 3). University of Colorado Boulder.

Center for Teaching & Learning. (2021, March 18). University of Colorado Boulder.

Chase, D., Ferguson, J. L., & Hoey, J. J. (2014). Assessment in creative disciplines: Quantifying and qualifying the aesthetic. Common Ground Publishing.

Feldman, J. (2018). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin Press, CA.

Gradescope. (n.d.). Gradescope Help Center.

Henning, G., Baker, G., Jankowski, N., Lundquist, A., & Montenegro, E. (Eds.). (2022). Reframing assessment to center equity. Stylus Publishing.

King, P. M., & Baxter Magolda, M. B. (2005). A developmental model of intercultural maturity. Journal of College Student Development, 46(2), 571-592.

Selke, M. J. G. (2013). Rubric assessment goes to college: Objective, comprehensive evaluation of student work. Lanham, MD: Rowman & Littlefield.

The Institute for Habits of Mind. (2023, January 9).