AI-Assisted Prioritization with RICE, ICE, and MoSCoW

RICE Scoring

RICE stands for Reach, Impact, Confidence, Effort.

Reach: How many users will this reach? (number of users in a quarter)

Impact: How much will it help those users? (1x for small, 3x for medium, 10x for large)

Confidence: How confident are you about reach and impact? (10% to 100%)

Effort: How much work? (in person-months)

Formula: (Reach × Impact × Confidence) / Effort

Higher scores win.
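The formula is trivial to compute yourself. A minimal sketch in Python, using the scales defined above (the function name is ours, not a standard API):

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach      -- users reached per quarter
    impact     -- 1 (small), 3 (medium), or 10 (large)
    confidence -- 0.1 to 1.0
    effort     -- person-months
    """
    return (reach * impact * confidence) / effort

# e.g., 10,000 users, small impact, 90% confidence, 3 person-months
print(rice_score(10_000, 1, 0.9, 3))  # 3000.0
```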

Using AI to Score RICE

AI can help you think through the scores. But you must provide the business context.

Example prompt: "We are considering three features:

  1. Dark mode for the web app
  2. Native mobile app for iOS
  3. Bulk CSV export for power users

Our product has 10,000 active users. Dark mode affects all of them, but it is mostly a nice-to-have (small impact). Mobile would open a new audience of 5,000 potential users (medium impact). CSV export affects 200 power users but would save them hours every month (very high impact).

Dark mode takes 3 person-months. Mobile takes 12 person-months. CSV export takes 0.5 person-months.

Please score each with RICE and explain the ranking."

AI will do the math, but the real work is the estimates you provide. AI cannot know your users or your effort better than your team does.
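Plugging the example's numbers into the formula shows how the ranking falls out. The confidence values below are illustrative assumptions; the prompt above leaves them for the AI (or you) to estimate:

```python
def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Reach and effort come from the example prompt above;
# the confidence values (0.9, 0.5, 0.9) are assumed for illustration.
features = {
    "Dark mode":  rice_score(10_000, 1, 0.9, 3),
    "Mobile app": rice_score(5_000, 3, 0.5, 12),
    "CSV export": rice_score(200, 10, 0.9, 0.5),
}

for name, score in sorted(features.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
# CSV export: 3600
# Dark mode: 3000
# Mobile app: 625
```

Note how the tiny CSV export beats the big mobile bet on raw score; that is exactly the kind of result you revisit once strategy is layered in.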

ICE Scoring

ICE stands for Impact, Confidence, Ease.

Impact: How much will it help? (1-10 scale)

Confidence: How sure are you? (1-10 scale)

Ease: How easy to build? (1-10 scale)

Formula: Impact × Confidence × Ease

Note that Ease multiplies rather than divides: a higher Ease score means less work, so easier features should score higher.

ICE is simpler than RICE. Use it when you want fast scoring without deep estimates.
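As with RICE, the arithmetic is a one-liner. A minimal sketch (all three factors reward higher values, which is why they multiply):

```python
def ice_score(impact, confidence, ease):
    # all three on a 1-10 scale; a higher Ease score
    # means easier to build, so it multiplies
    return impact * confidence * ease

print(ice_score(8, 7, 9))  # 504
```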

Sample ICE Prompt

"We have 8 feature requests from customers. I want to score them quickly with ICE.

Here are the requests: [list 8 requests]

For context, our team has 4 engineers. We release every two weeks. We have high confidence in requests from customers we talked to, lower confidence in requests from one-off support tickets.

Please score each request 1-10 for Impact, Confidence, and Ease. Then calculate ICE scores and rank them."

AI handles the math. You decide the estimates.

MoSCoW Method

MoSCoW is simpler than RICE or ICE. You just categorize:

Must have: Critical. Product breaks without it.

Should have: Important. Needed for good experience.

Could have: Nice to have. Low priority.

Will not have: Out of scope. Not this release.

Use MoSCoW when you need a quick yes or no. Use RICE when you need detailed comparison.

Sample MoSCoW Prompt

"We are planning the next quarter. We have 15 feature ideas. Categorize each as Must, Should, Could, or Will Not using MoSCoW. Remember: we only have capacity for 5-6 features this quarter.

Feature list: [paste 15 features]

Constraints: (1) We must ship something our most-valuable customer requested. (2) We should fix the top 3 reported bugs. (3) Performance is critical for new user onboarding.

For each feature, explain the category choice."

AI will categorize them. You can now say "These 6 are Must and Should. These 9 move to next quarter."
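Once categories are assigned, the capacity cut is mechanical. A minimal sketch of that last step (the feature names and category assignments here are made up; the capacity of 6 is from the example above):

```python
from collections import defaultdict

# hypothetical AI-assigned categories for a few of the 15 features
categorized = [
    ("Fix top login bug", "Must"),
    ("Onboarding performance", "Must"),
    ("Bulk CSV export", "Should"),
    ("Dark mode", "Could"),
    ("Gantt view", "Will Not"),
]

buckets = defaultdict(list)
for feature, category in categorized:
    buckets[category].append(feature)

# Must and Should make the quarter, up to capacity; the rest wait
CAPACITY = 6
planned = (buckets["Must"] + buckets["Should"])[:CAPACITY]
deferred = [f for f, _ in categorized if f not in planned]
print(planned)
```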

Why AI Scoring Is a Starting Point

Here is the key insight: AI scoring is fast, but it is not final.

AI does the math correctly. But it does not know your strategy. It does not know that you want to enter a new market (which makes mobile a Must). It does not know that one customer is 50% of revenue (which makes their request a Must).

After AI scores, layer in strategy.

Prompt: "Using the RICE scores above, now consider: (1) we want to grow our mobile presence. (2) Customer X is our biggest account. (3) Our engineering team is burned out on back-end work.

Based on these factors, would you change the ranking?"

AI can help you think through the strategic implications. But you own the final ranking.
