
Robots don’t have taste, but you do: How to define and measure AI content quality

Friday, October 24, 2025
4:35 PM – 5:20 PM CDT
AI Focus Day
Keynote
Spotlight
Watch party
Live broadcast
Zoom breakout

In the rush to add generative AI features to products, content quality often takes a back seat. Content designers can tell when something’s off, but explaining what’s wrong — and making the case for fixing it — is still new terrain.

In this session, you’ll learn a practical framework for evaluating AI-generated content using clear acceptance criteria, consistent test data, and adversarial content testing. Using real examples, we’ll break down how to create a repeatable benchmarking process that translates subjective assessments into actionable data. You’ll walk away with strategies to measure AI-generated content, identify risks, and build a strong case for quality in AI-driven projects.

In this session, you’ll learn how to:

  • Define acceptance criteria and use consistent “golden data sets” to more objectively measure any type of AI-generated content.
  • Stress test with adversarial content to uncover hidden risks in AI-generated output.
  • Communicate AI content quality issues effectively to engineers, product managers, and other stakeholders.
  • Make the business case for investing in AI content evaluation.
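To make the idea of acceptance criteria and golden data sets concrete, here is a minimal sketch of what such a benchmark could look like in Python. All names, criteria, and thresholds are illustrative assumptions for this page, not the speaker's actual framework.

```python
# Hypothetical sketch: scoring AI-generated outputs against a small
# "golden" data set using simple, explicit acceptance criteria.
# The data, phrase checks, and word limits are illustrative only.

GOLDEN_SET = [
    {"prompt": "Summarize the refund policy.",
     "output": "Refunds are available within 30 days of purchase.",
     "must_include": ["30 days"], "max_words": 25},
    {"prompt": "Greet a returning customer.",
     "output": "Welcome back! How can we help you today?",
     "must_include": ["Welcome"], "max_words": 15},
]

def meets_criteria(output: str, must_include: list[str], max_words: int) -> bool:
    """Acceptance criteria: required phrases present, length within budget."""
    has_phrases = all(p.lower() in output.lower() for p in must_include)
    within_length = len(output.split()) <= max_words
    return has_phrases and within_length

def benchmark(golden_set: list[dict]) -> float:
    """Return the pass rate across the golden data set."""
    passed = sum(
        meets_criteria(item["output"], item["must_include"], item["max_words"])
        for item in golden_set
    )
    return passed / len(golden_set)

print(f"Pass rate: {benchmark(GOLDEN_SET):.0%}")
```

Running the same fixed golden set against each model or prompt revision turns a subjective "this feels off" into a comparable pass rate over time.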