
Template - Content design scorecard - minified

Document: Design scorecard
Status:
Evaluators: The team members who are evaluating the design
Experience: This scorecard measures the content quality of the [insert feature or experience link] experience.
User story or job to be done: The main user story or job to be done captured in this flow, or a link to the functional requirements specification.
Related links:

How to use this template

  1. Copy this template into your project space.
  2. Work your way through each step of your selected experience or flow.
  3. At each step, consider the checklist below and score your designs, adding notes explaining why you scored the way you did.
  4. Once you've considered these points, collate the key points in the good and bad sections below.
  5. Repeat steps 2-4 with a peer, setting the right amount of context around the journey and role of the scorecard.
  6. Collate and prioritize the next steps identified from this round of assessment. Complete this together with your team.
  7. Share any decisions and/or red flags you've identified at your next team catchup.

Summary

Complete this section last.

Overall score: A/B/C/D

The good: Summarize good practices
The bad: Summarize where improvements can be made
Next steps: Assign tasks and due dates for improvements

Content design bandwidth and engagement: 0-5 plus notes

Bandwidth: Was the content designer balancing competing priorities (0) or fully focused on this work in a sustainable way (5)?
Engagement: Was the content designer operating as a self-serve reviewer (0), as an internal service (3), or fully embedded with the development team (5)?
Notes: Include notes about the assigned content designer's bandwidth and engagement

How to score

A - Excellent: Represents the current best practice. Maximizes clarity and simplicity of the content.
B - Good: Generally represents good practice and contributes clear, simple content, but with exceptions. Capable of further improvement.
C - Fair: Of average quality; may have some good facets, but capable of significant further improvement. Not representative of current good practice.
D - Poor: Unclear, confusing.

Scorecard

Criterion name What is this criterion? How to measure Score and notes
Table stakes
Legibility Use of legible fonts and text layout
  • Typography follows design system guidance
  • Type is sized 12pt or larger
All applicable met = A
Missing some = B
Missing most = C
None that apply are met = D
Score:
Notes:
Structure Quality of the UI/UX flow or the long-form document's organization in relation to its function
  • Title/heading matches expectations set by entry paths
  • Content follows an inverted pyramid (the most important information is presented first, supplemental information presented after)
  • Content is arranged from general to specific
  • A general benefit/value statement is present
  • Warnings and errors are called out and visually distinct from body copy
  • Icons that serve as navigation have accompanying text that explains them
All applicable met = A
Missing some = B
Missing most = C
None that apply are met = D
Score:
Notes:
Action Clarity about what action is required of the user
  • A single, primary, clear call to action is present
  • Additional calls to action are downplayed in the visual field
  • Placeholders are present in input fields and they describe the type of input expected by the system
  • Error messages provide users with all they need to know to overcome the error
All applicable met = A
Missing some = B
Missing most = C
None that apply are met = D
Score:
Notes:
Other crucial criteria
Plain words Extent to which the vocabulary is easily understood
  • Free from idioms or unfamiliar metaphors
  • Technical terms are defined or links to glossaries are provided
  • Prefers Anglo-Saxon terms to Latin alternatives (for example "get" instead of "receive")
  • Provides concrete examples or explanations for abstract concepts (for example "your records may be deleted" instead of "your data's integrity may be at risk")
All applicable met = A
Missing some = B
Missing most = C
None that apply are met = D
Score:
Notes:
Readability Ease with which the reader can follow the argument of the text
  • Free from excessively long sentences
  • Free from relative clause structures ("which" and "who" nested inside a sentence)
  • Employs a logical flow of information (one topic moves logically to the next)
  • Presents a single thought or idea per sentence
  • Free from "so what?" moments from a reader's perspective
  • Anticipates and answers any questions the information may create
You can use a formula if needed, for example the Flesch-Kincaid grade level. Be warned that readability scores can be misleading and may not correlate with user performance.
In concept tests, try a single ease question: "How easy or difficult was it to read the information provided?"
Score:
Notes:
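If you'd rather compute the Flesch-Kincaid grade level yourself than use an online calculator, the formula is 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The sketch below uses a rough vowel-group syllable counter, so treat its output as approximate:

```python
import re


def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of vowels, then drop a silent
    # trailing "e" (but keep it for words ending in "-le").
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)


def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level of a passage of text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short, monosyllabic sentences score near (or below) grade 0; long sentences full of polysyllabic words score much higher, which is the signal the test is built around.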
Audience fit Appropriateness to the knowledge and skill level of the user
  • Conforms to accessibility guidelines
  • Conforms to inclusive language guidelines
  • Vocabulary matches target audience's industry terminology and phraseology
  • Vocabulary matches audience experience (for example new users may not know product-specific terms and concepts)
  • Text is allowed to wrap and is not truncated (this facilitates better internationalization)
  • Input is independent of surrounding content (i.e. form inputs don't use a complete-the-sentence approach)
All applicable met = A
Missing some = B
Missing most = C
None that apply are met = D
Score:
Notes:
Relevance How relevant the content is to the recipient
  • The intended audience is plain and appears at the beginning of the communication
  • Prerequisites appear as callouts at the beginning of the communication
  • Headings and subsections are clear and reflect the content that follows them
  • Diagrams and flowcharts are clearly labeled as to their purpose
For product designs:
All applicable met = A
Missing some = B
Missing most = C
None that apply are met = D
For documentation:
"Was this helpful?" scores and the rate of "not relevant" responses.
  • Less than 1/6 of "not helpful" responses are "not relevant" = A
  • Between 1/6 and 1/3 of "not helpful" responses are "not relevant" = B
  • Exactly 1/3 of "not helpful" responses are "not relevant" = C
  • More than 1/3 of "not helpful" responses are "not relevant" = D
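The documentation thresholds above can be expressed as a small helper. This is a sketch of the rubric as written (taking exactly 1/3 as C); the function names and signature are illustrative, not part of any survey tool:

```python
def relevance_grade(not_relevant: int, not_helpful: int) -> str:
    """Grade documentation relevance from "Was this helpful?" feedback.

    `not_relevant` is the count of "not helpful" responses that cite
    "not relevant" as the reason; thresholds (1/6 and 1/3) follow the
    scorecard rubric.
    """
    if not_helpful == 0:
        raise ValueError('no "not helpful" responses to grade')
    share = not_relevant / not_helpful
    if share < 1 / 6:
        return "A"
    if share < 1 / 3:
        return "B"
    if share == 1 / 3:
        return "C"
    return "D"
```

For instance, 1 "not relevant" reason out of 12 "not helpful" responses (1/12) grades an A, while 2 out of 3 (2/3) grades a D.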