📎 Get this presentation: drtlinks.org/ai-rubric-canvas.html
Canvas LMS Workshop

Using AI to Build
Better Rubrics, Faster

A five-step workflow for generating, refining, and importing rubrics into Canvas — leveraging LLMs to improve grading efficiency, consistency, and SpeedGrader integration. Customizable to any discipline.

Michael Thompson · Professor, Electrical & Computer Engineering · Baylor University

Why AI-Assisted Rubrics?

Well-structured rubrics are essential for consistent, transparent grading — and most disciplines have their own standards and accreditation frameworks that demand documented assessment evidence. Canvas rubrics unlock SpeedGrader's full potential while generating data for continuous improvement, yet the manual setup discourages adoption.

LLMs can accelerate rubric development from hours to minutes, while the human expert stays in the loop for quality and alignment with your discipline's learning outcomes and standards.

⏱

Time Investment

Building detailed rubrics with clear descriptors for every criterion and level is a significant time commitment per assignment.

A 5-criterion rubric with 5 performance levels means writing 25 unique descriptors — each needing to be specific and measurable. An LLM generates a solid first draft in under a minute, letting you invest your time in refinement instead of blank-page writing.
⚖

Grading Consistency

Without rubrics in SpeedGrader, grading relies on mental models that drift across students and sessions.

Research shows grading drift increases with fatigue and batch size. A rubric with clickable performance levels in SpeedGrader forces structured evaluation — the same standard applied to student 1 and student 85.

Feedback Quality

Rubric-linked feedback gives students specific, actionable information tied to clearly defined performance levels.

Students see exactly which level they hit on each criterion and what would move them up. This replaces vague margin comments with structured feedback tied to observable behaviors — far more useful for improvement.
📊

Accreditation & Assessment

Rubrics aligned to your discipline's standards produce structured assessment data for accreditation and continuous improvement.

In engineering, I map criteria to ABET Student Outcomes with a five-level expectations scale — every graded assignment becomes accreditation evidence. Nursing might align to the AACN Essentials, business to AACSB goals. Customize the scale and criteria to your framework.

The Five-Step Workflow

From prompt to published rubric in Canvas

1
Generate Rubric
LLM Prompt
AI-Powered

Prompt the LLM with your assignment description, your discipline's learning outcomes or accreditation standards, and your preferred performance scale. Ask for measurable, observable descriptors. For example, in engineering I use ABET Student Outcomes and a five-level expectations scale.

Key: The more specific your prompt, the less refinement needed.
Mike's Tip
Actually, attaching the assignment file with minimal prompting can work pretty well. You don't always need an elaborate prompt — the LLM can infer a lot from the assignment itself.
2
Refine & Edit
Human + AI
AI-Powered · Expert Review

Review each descriptor for alignment with your actual grading standards and disciplinary expectations. Edit manually or use targeted follow-up prompts. Verify adjacent levels are clearly distinguishable — especially around the pass/fail boundary in your scale.

Key: This is where domain expertise is irreplaceable.
Mike's Tip
Use your LLM's fine-grained editing features. Ask the LLM to evaluate and improve the rubric — it's great at spotting vague descriptors and suggesting more measurable language.
3
Enable & Template
Feature + CSV
⚠ Gotcha

First enable Enhanced Rubrics in Canvas: Settings → Feature Options → Enhanced Rubrics. Without this, import/export won't appear. Then navigate to Rubrics and download the blank CSV template.

Key: Canvas menus change — ask AI for help navigating if needed.
Mike's Tip
This is the first "gotcha" — it's easy to forget. If you don't see import/export options for rubrics, this is almost certainly why.
4
Populate Template
LLM + Cleanup
AI-Powered · Most Failure-Prone

Prompt the LLM to map your rubric into the CSV template's exact structure. Then remove non-printing characters (curly quotes, BOM), verify column alignment, and check point totals. Open in a plain text editor before importing.

Key: Non-printing characters are the #1 cause of import errors.
Mike's Tip
The second "gotcha" — if you forget this cleanup step and get an import error, take a screenshot of the error, put it back in the LLM, and try again. The AI is usually good at diagnosing CSV formatting issues from the error message.
5
Import & Finalize
Canvas LMS
Canvas

Upload the CSV via Canvas rubric import. You must edit and save the rubric — imported rubrics stay in "draft" and won't attach to assignments until edited. Then attach to your assignment and test in SpeedGrader.

Key: Once finalized, duplicate and reuse across courses and semesters.
Mike's Tip
The third "gotcha" — if you can't find your rubric when trying to attach it to an assignment, it's probably still marked "draft." Open it, make a trivial edit, and save. That takes it out of draft status.
Step 01 — Generate

Prompt the LLM to Draft Your Rubric

Start by giving the LLM the context it needs: the assignment description, your discipline's learning outcomes or accreditation standards, desired criteria, and your preferred performance scale. The key is to customize the prompt to your field — the LLM can work with any framework you give it.

Map criteria to your discipline's standards (e.g., ABET SOs for engineering, AACN for nursing, AACSB for business)
Define your performance scale — in ECE, I use: Well Below / Below / Meets / Above / Well Above Expectations
Request measurable, observable descriptors — avoid vague adjectives
Define your anchor level first (e.g., "Meets Expectations") — other levels are deviations from it
Include the assignment prompt and any relevant course learning objectives
💡 Ask the LLM to produce a table format and to define your anchor level first — it grounds the rubric and makes adjacent levels easier to differentiate.
prompt.txt
▸ EXAMPLE — Engineering (ABET-aligned)
Create a grading rubric for a digital logic lab report assignment following ABET assessment best practices. This assignment assesses ABET Student Outcome 6 (experimentation/analysis) and SO 1 (complex problem solving).

The rubric should have 5 criteria: Circuit Design, Simulation Results, Timing Analysis, Written Explanation, and Code Quality.

Use the five-level ABET-aligned scale: Well Below Expectations, Below Expectations, Meets Expectations, Above Expectations, Well Above Expectations.

Define "Meets Expectations" first as the baseline for each criterion — it should describe competent, acceptable performance. Then define the other levels as deviations. Each descriptor must be specific, measurable, and observable — no vague adjectives like "good" or "adequate." The total rubric should be worth 100 points.
↑ This is my prompt for ECE — yours would reference your discipline's standards, outcomes, and scale.
LLM generates discipline-aligned rubric table
Circuit Design (25 pts) — SO 1
Meets Expectations: Design is functionally correct for all specified inputs; uses appropriate components with no unnecessary logic…
Above Expectations: Meets all baseline criteria and demonstrates optimization such as reduced gate count or propagation delay…
Step 02 — Refine

Edit with Human Judgment & AI Assistance

The LLM draft is a starting point. Review each criterion and descriptor for alignment with your actual expectations. You can edit manually or use follow-up prompts to adjust specific cells.

Verify descriptors match your actual grading standards
Confirm each criterion maps to a specific learning outcome or accreditation standard
Ensure adjacent levels are clearly distinguishable — especially around the pass/fail boundary in your scale
Adjust point weightings to reflect priorities
Use AI follow-up prompts for targeted revisions
💡 Try: "Make the distinction between Meets Expectations and Below Expectations clearer for the Timing Analysis criterion" — targeted edits are faster than rewriting.
refinement
Manual Edit · AI Assist
Manual: Change "Code Quality" weight from 15 → 20 pts. Reduce "Written Explanation" from 20 → 15 pts to compensate.
AI Prompt: "The 'Below Expectations' level for Simulation Results is too vague. Rewrite it to specify that fewer than 60% of test cases produce correct waveforms, with no documentation of discrepancies."
Iterate until rubric matches your expectations
This is where your domain expertise matters most — the AI accelerates, you validate.
Step 03 — Template

Enable Enhanced Rubrics & Download the CSV Template

Canvas doesn't have rubric import/export enabled by default. You need to flip one switch first, then you can go straight to downloading the blank CSV template.

⚠️
Gotcha β€” Enable Enhanced Rubrics First
Go to Settings → Feature Options → Enhanced Rubrics and toggle it on. Without this, you won't see the import/export options. This is easy to miss.
Enable Enhanced Rubrics in Feature Options (one-time setup)
Navigate to Rubrics and download the blank CSV template directly
The template defines the exact column structure and formatting Canvas expects
💡 Canvas menus change frequently. If you can't find a setting where you expect it, ask an AI — describe what you're looking for and it can walk you through the current UI.
rubric_template.csv
πŸ“ STEP A β€” ENABLE THE FEATURE
Settings β†’ Feature Options β†’ Enhanced Rubrics βœ“
πŸ“ STEP B β€” DOWNLOAD THE TEMPLATE
Rubrics β†’ Download CSV Template ↓
Template defines the exact format Canvas expects
Rubric Name       | Criteria Name      | Criteria Desc | Rating Name             | Rating Desc  | Points
Lab Report Rubric | Circuit Design     | SO 1          | Well Above Expectations | (descriptor) | 25
                  |                    |               | Above Expectations      | (descriptor) | 22
                  |                    |               | Meets Expectations      | (descriptor) | 19
                  |                    |               | Below Expectations      | (descriptor) | 15
                  |                    |               | Well Below Expectations | (descriptor) | 10
                  | Simulation Results | SO 6          | Well Above Expectations | (descriptor) | 20
…                 | …                  | …             | …                       | …            | …
Canvas-ready structure with exact column format
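In code, that structure looks like the following. A minimal Python sketch that writes rubric rows in this layout; the rubric data is hypothetical, and the header names follow the table above, so verify them against the template you actually download, since Canvas may change the exact format.

```python
import csv

# Hypothetical rubric data, for illustration only. The column names below
# follow the template layout shown above; verify against the real template.
HEADER = ["Rubric Name", "Criteria Name", "Criteria Desc",
          "Rating Name", "Rating Desc", "Points"]

rubric_name = "Lab Report Rubric"
criteria = [
    ("Circuit Design", "SO 1", [
        ("Well Above Expectations", "placeholder descriptor", 25),
        ("Meets Expectations", "placeholder descriptor", 19),
    ]),
]

with open("rubric.csv", "w", newline="", encoding="utf-8") as f:
    # LF line endings; the csv module also quotes any field containing commas.
    writer = csv.writer(f, lineterminator="\n")
    writer.writerow(HEADER)
    for crit_name, crit_desc, ratings in criteria:
        first = True
        for rating_name, rating_desc, points in ratings:
            # Rubric and criteria cells are filled only on the first row of
            # each block, matching the blank cells in the table above.
            writer.writerow([
                rubric_name if first else "",
                crit_name if first else "",
                crit_desc if first else "",
                rating_name, rating_desc, points,
            ])
            first = False
```

Using the `csv` module rather than hand-concatenating strings sidesteps most quoting and escaping mistakes.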
Step 04 — Populate

Integrate Rubric into the CSV Template

Use the LLM to map your refined rubric content into the downloaded template's exact structure. This step requires careful attention to detail β€” Canvas is particular about formatting.

Prompt the LLM with both the rubric and the template
Remove non-printing characters — curly quotes, BOM markers, hidden whitespace
Carefully check all template fields — column alignment, escaping, row structure
Adjust rubric content if needed to fit template constraints
⚠️ This is the most failure-prone step. Non-printing characters and misaligned fields are the #1 cause of import errors. Open the CSV in a plain text editor to verify before importing.
cleanup checklist
Here is my rubric [paste rubric table] and here is the Canvas CSV template [paste template]. Please populate the template with my rubric content, preserving the exact column structure.
Critical Checks:
✓ Straight quotes only — no curly/smart quotes (" not “ or ”)
✓ No BOM (byte order mark) at file start
✓ UTF-8 encoding, LF line endings
✓ Fields with commas are properly quoted
✓ No trailing whitespace or blank rows
✓ Point values match rubric total
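The critical checks above can be scripted. A minimal Python sketch of a pre-import linter; it is illustrative and not exhaustive, and `check_rubric_csv` is a hypothetical helper, not part of Canvas.

```python
import csv

# Curly double and single quotes that break Canvas CSV imports.
SMART_CHARS = ("\u201c", "\u201d", "\u2018", "\u2019")

def check_rubric_csv(path):
    """Return a list of warnings for common import-breaking issues."""
    problems = []
    raw = open(path, "rb").read()
    if raw.startswith(b"\xef\xbb\xbf"):
        problems.append("file starts with a UTF-8 BOM")
    if b"\r\n" in raw:
        problems.append("CRLF line endings found (use LF)")
    text = raw.decode("utf-8", errors="replace")
    for ch in SMART_CHARS:
        if ch in text:
            problems.append(f"smart quote {ch!r} found; replace with a straight quote")
    rows = list(csv.reader(text.splitlines()))
    # Every non-blank row should have the same number of columns.
    widths = {len(r) for r in rows if any(cell.strip() for cell in r)}
    if len(widths) > 1:
        problems.append(f"inconsistent column counts: {sorted(widths)}")
    if rows and not any(cell.strip() for cell in rows[-1]):
        problems.append("trailing blank row")
    return problems
```

Run it on the populated template before uploading; an empty list means none of these particular checks fired, though a quick visual pass in a plain text editor is still worthwhile.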
Step 05 — Import & Finalize

Import to Canvas, Edit, and Attach

Upload the populated CSV to Canvas, review the imported rubric for any formatting issues, make final adjustments, and attach it to the target assignment for use in SpeedGrader.

Import the CSV via Canvas rubric import
Review every criterion and rating in the Canvas UI
Edit and save the rubric — imported rubrics stay in "draft" until edited and won't attach to assignments
Attach rubric to the assignment
Test in SpeedGrader to verify grading workflow
💡 Once imported, the rubric is a native Canvas object — you can duplicate it, modify it for future assignments, and share it across courses.
Canvas LMS
Import Canvas
Import → Review → Edit → Attach
Upload CSV
→
Verify Rubric
→
Edit Required
→
Attach to Assignment
⚠ Gotcha: Imported rubrics remain in "draft" status until you edit and save them in Canvas. Draft rubrics will not attach to assignments. You must open the rubric, make at least one edit, and save to finalize it.
Once attached, the rubric appears in SpeedGrader as a clickable scoring grid. Select the performance level for each criterion — scores auto-calculate and feedback is linked to specific descriptors.
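What auto-calculation means here can be sketched in a few lines. Illustrative only, with hypothetical rubric data; this is not a Canvas API.

```python
# Hypothetical rubric: criterion -> {performance level: points}.
# This only illustrates what SpeedGrader computes when you click levels.
rubric = {
    "Circuit Design": {"Meets Expectations": 19, "Above Expectations": 22},
    "Simulation Results": {"Meets Expectations": 15, "Above Expectations": 18},
}

# The levels the instructor clicked for one student:
selected = {
    "Circuit Design": "Above Expectations",
    "Simulation Results": "Meets Expectations",
}

# The score auto-calculates as the sum of the selected levels' points.
total = sum(rubric[criterion][level] for criterion, level in selected.items())
print(total)  # 22 + 15 = 37
```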

The Process β€” Review


1 ✓
Generate Rubric
Prompt the LLM with your assignment, discipline standards, and performance scale
AI
2 ✓
Refine & Edit
Human expertise validates — use targeted prompts or manual edits to sharpen descriptors
AI + Expert
3 ✓
Enable Enhanced Rubrics & Download Template
Settings → Feature Options → Enhanced Rubrics, then download the blank CSV
Gotcha #1
4 ✓
Populate Template & Clean Up
LLM fills the CSV — remove non-printing characters, verify in a plain text editor
Gotcha #2
5 ✓
Import, Edit, & Attach
Upload CSV, make a trivial edit to exit "draft" status, attach to assignment
Gotcha #3

Key Takeaways

Speed Without Sacrifice

LLMs handle the labor-intensive drafting. You invest your time where it matters — in aligning rubrics to your actual standards.

SpeedGrader Integration

Canvas rubrics transform grading from open-ended evaluation to structured, clickable scoring — faster and more consistent.

Human in the Loop

The AI accelerates; you validate. Steps 2 and 4 are where domain expertise is irreplaceable.

Customize to Your Discipline

The workflow adapts to any framework — ABET, AACN, AACSB, or your own. Ground your prompts in your discipline's standards and the rubrics become assessment evidence, not just grading tools.

Watch the Details

Non-printing characters and CSV formatting are the failure mode. A plain text editor check before import saves headaches.

The rubric is a living document — iterate, reuse, and refine across semesters.