Writing User Stories and Acceptance Criteria with AI

The User Story Template

A user story follows this format:

As a [type of user], I want to [action], so that [benefit].

Example: As a sales manager, I want to auto-assign follow-up tasks to my team, so that I spend less time on scheduling.
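If you find yourself generating many stories, the template is simple enough to capture in a small helper. A minimal sketch (the function name is illustrative, not from any real tool):

```python
def user_story(user_type: str, action: str, benefit: str) -> str:
    """Format a user story in the 'As a..., I want..., so that...' shape."""
    return f"As a {user_type}, I want to {action}, so that {benefit}."

# The sales-manager example above, reproduced with the helper:
story = user_story(
    "sales manager",
    "auto-assign follow-up tasks to my team",
    "I spend less time on scheduling",
)
print(story)
```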

Good user stories are short, specific, and focused on user needs. They are not feature specs. They are not technical requirements.

Prompting for User Stories

AI can draft user stories from feature descriptions. Here is how.

Paste your feature description and ask AI to write 4-6 user stories from different user perspectives.

Example prompt: "We are building a calendar-blocking feature that lets users mark focus time and hide their availability during those blocks.

Feature overview: Users can mark 1-2 hour blocks as focus time. During focus blocks, their calendar shows as busy so colleagues do not book meetings. The user can still see their full calendar. Only the busy status is visible to others.

Please write 4 user stories from the perspective of: (1) a developer who needs deep work time, (2) a manager coordinating team meetings, (3) an IC worried about visibility, (4) a remote worker managing timezone overlaps."

AI will give you stories. Your job is to refine them and make sure each one reflects a real user need your research uncovered.

Acceptance Criteria That Are Actually Testable

Acceptance criteria define when a story is done. Good criteria are testable. Bad criteria are vague.

Bad: "The calendar block should be clear."

Good: "When a user creates a 2-hour focus block, their calendar shows as busy to other users for those 2 hours. Other users cannot book a meeting during that time. The focus user can still see and edit their own calendar during the block."
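Criteria this specific translate almost directly into automated tests. A minimal sketch, assuming a hypothetical calendar model (every class and method name here is illustrative, not a real API):

```python
from dataclasses import dataclass, field

@dataclass
class FocusBlock:
    start_hour: int  # 0-23
    end_hour: int


@dataclass
class Calendar:
    owner: str
    blocks: list = field(default_factory=list)

    def add_focus_block(self, start_hour: int, end_hour: int) -> None:
        self.blocks.append(FocusBlock(start_hour, end_hour))

    def is_busy(self, viewer: str, hour: int) -> bool:
        """Busy to others during focus blocks; the owner still sees everything."""
        if viewer == self.owner:
            return False
        return any(b.start_hour <= hour < b.end_hour for b in self.blocks)


cal = Calendar(owner="dana")
cal.add_focus_block(9, 11)  # a 2-hour focus block

assert cal.is_busy(viewer="colleague", hour=9)       # others see busy
assert not cal.is_busy(viewer="colleague", hour=11)  # block is over
assert not cal.is_busy(viewer="dana", hour=9)        # owner is unaffected
```

Each acceptance criterion becomes one assertion, so "done" is a passing test rather than a judgment call.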

The INVEST Checklist

Use INVEST to check if your user stories are good.

Independent: The story does not depend on other stories. You can work on it separately.

Negotiable: The story is not a fixed spec. There is room for the team to discuss how to build it.

Valuable: The story delivers value to users. It is not just a technical task.

Estimable: The team can estimate how long it will take. The story is not too big or too vague.

Small: The story should take a few days to a week. Not months.

Testable: You can test it. Success criteria are clear.

Prompting for Better Acceptance Criteria

AI often writes criteria that are too vague. You need to refine them.

Example prompt: "I wrote this user story: As a product marketer, I want to see which features are mentioned most in customer feedback, so that I know what to emphasize in messaging.

Acceptance criteria: (1) The dashboard shows feedback themes. (2) Themes are ranked by frequency. (3) The user can filter by feedback source.

Please rewrite these criteria to be more specific and testable. Include concrete examples of what the user would see."

AI will make the criteria more specific. You now have something the engineering team can actually test.
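Once the criteria are specific, the engineering team can encode them as tests. A sketch of the theme-ranking and source-filter criteria, assuming a simple list-of-dicts data shape (the function name and fields are hypothetical):

```python
from collections import Counter

def ranked_themes(feedback, source=None):
    """Return themes ranked by mention frequency, optionally filtered by source."""
    items = [f for f in feedback if source is None or f["source"] == source]
    counts = Counter(f["theme"] for f in items)
    return [theme for theme, _ in counts.most_common()]


feedback = [
    {"theme": "reporting", "source": "support"},
    {"theme": "reporting", "source": "survey"},
    {"theme": "integrations", "source": "support"},
]

assert ranked_themes(feedback) == ["reporting", "integrations"]
assert ranked_themes(feedback, source="survey") == ["reporting"]
```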

Common Mistakes

Mistake one: Writing stories that are too big. "As a user, I want to manage my entire calendar." This is a feature set, not a story.

Solution: Break big stories into smaller ones. One story should take a few days to a week, consistent with the "Small" check in INVEST.

Mistake two: Writing criteria that sound like feature specs instead of test cases.

Bad: "The system shall provide filtering by date range."

Good: "When a user selects a date range of January 1 to January 31, only feedback from January appears."
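The date-range criterion above is directly testable. A minimal sketch (the data shape and function name are assumptions for illustration):

```python
from datetime import date

def filter_by_date_range(feedback, start, end):
    """Keep only feedback whose date falls inside [start, end]."""
    return [f for f in feedback if start <= f["date"] <= end]


feedback = [
    {"text": "Love the new dashboard", "date": date(2024, 1, 15)},
    {"text": "Export is slow", "date": date(2024, 2, 3)},
]

january = filter_by_date_range(feedback, date(2024, 1, 1), date(2024, 1, 31))
assert [f["text"] for f in january] == ["Love the new dashboard"]
```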

Mistake three: Forgetting to add your user research. AI writes generic stories. Your job is to ground them in real user needs.

Solution: Always paste interview findings or user feedback into your prompt. Make sure stories match what users actually said.
