The 10 SUS Questions
The System Usability Scale consists of 10 carefully designed questions that alternate between positive and negative statements. Each question targets a specific aspect of usability.
Quick Reference: All 10 Questions
Questions alternate between positive (odd numbers) and negative (even numbers) to prevent response bias.
1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
Detailed Question Analysis
Understanding what each question measures helps you interpret results and identify specific usability issues.
I think that I would like to use this system frequently.
Frequency of Use Intent
Assesses whether users find the product valuable enough to use regularly. High agreement indicates the product meets a genuine need.
Low scores here may indicate that the product doesn't solve a real problem or that better alternatives exist.
I found the system unnecessarily complex.
Complexity Perception
Evaluates whether users feel overwhelmed by the product. This is a negative statement, so disagreement is good.
High agreement suggests the interface has too many features, confusing navigation, or unclear workflows.
I thought the system was easy to use.
Ease of Use
Measures how intuitive users find the product. Easy-to-use products require minimal learning.
This directly correlates with onboarding success and user adoption rates.
I think that I would need the support of a technical person to use this system.
Need for Support
Assesses self-sufficiency. Users shouldn't need technical help for basic tasks. Disagreement is positive.
High agreement indicates poor documentation, confusing UI, or missing affordances.
I found the various functions in this system were well integrated.
Feature Integration
Evaluates whether features work together cohesively rather than feeling disjointed.
Low scores may indicate feature bloat, inconsistent design patterns, or poor information architecture.
I thought there was too much inconsistency in this system.
Consistency
Measures whether users encounter unexpected behaviors or inconsistent patterns. Disagreement is positive.
High agreement suggests UI inconsistencies, unpredictable interactions, or broken mental models.
I would imagine that most people would learn to use this system very quickly.
Learnability
Assesses how quickly new users can become proficient. Products should be learnable without extensive training.
Strong agreement indicates good onboarding, clear affordances, and intuitive design.
I found the system very cumbersome to use.
Awkwardness
Evaluates whether interactions feel natural or clunky. Disagreement is positive.
High agreement suggests poor interaction design, unnecessary steps, or frustrating workflows.
I felt very confident using the system.
Confidence
Measures whether users feel in control and capable when using the product.
Strong agreement indicates clear feedback, predictable behavior, and good error prevention.
I needed to learn a lot of things before I could get going with this system.
Learning Curve
Assesses prerequisite knowledge needed. Products shouldn't require extensive learning. Disagreement is positive.
High agreement suggests missing onboarding, poor defaults, or an overly technical interface.
Why Do Questions Alternate?
John Brooke intentionally alternated between positive and negative statements to prevent acquiescence bias—the tendency for respondents to agree with statements regardless of content.
Positive (Odd)
Questions 1, 3, 5, 7, 9
Agreement indicates good usability
Negative (Even)
Questions 2, 4, 6, 8, 10
Disagreement indicates good usability
This alternation forces respondents to read each question carefully and think about their actual experience, producing more reliable results.
How Questions Affect Scoring
The scoring formula accounts for the alternating nature of questions:
Positive Questions (1,3,5,7,9)
Contribution = Response - 1
If user answers 5 (Strongly Agree): 5 - 1 = 4 points
Negative Questions (2,4,6,8,10)
Contribution = 5 - Response
If user answers 1 (Strongly Disagree): 5 - 1 = 4 points
Each question contributes 0-4 points, summed and multiplied by 2.5 for a final score of 0-100. Learn more about how SUS scoring works.
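The scoring rule above can be sketched in a few lines of Python. This is a minimal illustration of the standard SUS formula, not any particular calculator's implementation; the function name and input format (a list of ten responses, each 1–5, in question order) are assumptions for the example.

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten Likert responses (each 1-5)."""
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:
            # Odd-numbered questions (1, 3, 5, 7, 9) are positive:
            # contribution = response - 1
            total += r - 1
        else:
            # Even-numbered questions (2, 4, 6, 8, 10) are negative:
            # contribution = 5 - response
            total += 5 - r
    return total * 2.5

# A "perfect" respondent: strongly agrees with every positive statement
# and strongly disagrees with every negative one.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0

# An all-neutral respondent lands exactly at the midpoint.
print(sus_score([3] * 10))  # 50.0
```

Note that neutral answers across the board yield 50, not 68 — a useful reminder that the scale's average benchmark (around 68) sits above its arithmetic midpoint.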
Frequently Asked Questions
Can I modify the SUS questions?
You can replace the word 'system' with your product name (e.g., 'I would like to use [AppName] frequently'). However, the wording of the 10 questions themselves should not be changed, as this would invalidate the scoring and benchmarks.
What if respondents don't understand a question?
If respondents are confused, they should select the middle option (3 - Neutral). However, frequent confusion suggests the 'system' terminology may need to be replaced with your specific product name for clarity.
Do all 10 questions need to be answered?
Yes. The SUS formula requires all 10 responses to calculate a valid score. If any questions are skipped, the results are unreliable. Our calculator validates that all questions are answered.
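A completeness check like the one described can be sketched as follows. The helper name and input format are hypothetical; it simply verifies that exactly ten responses are present and each falls in the 1–5 range before any score is computed.

```python
def is_valid_sus_submission(responses):
    """Return True only if all 10 questions were answered with a value 1-5."""
    if len(responses) != 10:
        return False  # skipped or extra questions make the score unreliable
    return all(isinstance(r, int) and 1 <= r <= 5 for r in responses)

print(is_valid_sus_submission([3] * 10))       # True
print(is_valid_sus_submission([3] * 9))        # False: one question skipped
print(is_valid_sus_submission([0] + [3] * 9))  # False: 0 is out of range
```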
Why are there only 10 questions?
John Brooke designed SUS to be 'quick and dirty'—fast enough to administer frequently without survey fatigue, while still being statistically reliable. Research has validated that 10 questions is sufficient.
Can I add additional questions?
You can add questions before or after the SUS questionnaire, but keep them separate. Additional questions should not be included in the SUS score calculation. Consider adding open-ended questions for qualitative insights.
Ready to run a SUS survey?
Use our free calculator or create a shareable survey for your team.