AI Copywriting Tool
HSE's first AI product. It generates product descriptions, SEO texts, and USPs for ~17,000 SKUs, eliminating an estimated €100k+ in annual agency costs. User Research answered the question that mattered before launch: is it good enough to replace external copywriters?
Overview
An internal tool that replaced an external copywriting agency for ~17,000 SKUs. User Research was brought in as a go/no-go gate before full production rollout.
Role & Scope
- Role: Team Lead Product Design (head-of-discipline scope)
- Scope: Coordinated the research effort with the User Researcher. Defined the research question and success criteria with the product team. Ensured the validation was rigorous enough to inform a real go/no-go decision, not just confirm a conclusion the team had already reached.
- Users: Internal teams (content, merchandising). Indirectly: customers reading the generated copy on product pages.
- Duration: 2024
Impact
- AI copy perceived as equivalent to human-written copy · qualitative user research, later confirmed by an A/B test (Customer Intelligence team)
- ~€100k+ estimated annual agency costs eliminated
- Copy generated in minutes, not days · external agency turnaround was ~5 days per SKU
The Problem
Writing product copy for ~17,000 SKUs is slow and expensive. HSE used an external copywriting agency for descriptions, SEO texts, and USPs. Every new product required a ~5-day turnaround cycle. With ~3,000 new SKUs added per year, that dependency translated to roughly €100k annually in agency costs, and a constant queue of products waiting for copy before they could go live.
The AI tool was built to remove that dependency. The business case was clear. But before rolling out fully, the team needed an answer to a harder question: does AI-generated copy actually work for customers, or does it just look plausible?
How We Validated
Our User Researcher ran a blind qualitative study. Participants were shown product pages with AI copy and human copy, without knowing which was which. The study was designed to detect differences in trust, accuracy, and purchase intent, not just preference.
- Blind comparison: participants rated quality, accuracy, and trustworthiness without knowing the source
- Category coverage: sessions covered fashion, jewellery, and home products. Each has different copy conventions.
- Task-based: participants made purchase decisions based on the copy. More honest than rating in the abstract.
- Finding: no significant difference in trust, accuracy perception, or purchase intent between AI and human copy
Why the Research Mattered
Bringing research in early was a deliberate choice: the question of whether AI-generated copy actually works for customers was defined and answered before rollout, not after.
The research became a go/no-go gate. Because the study found no significant difference in trust, accuracy, or purchase intent, the product team had the evidence to roll out with confidence. That is the difference between research as after-the-fact validation and research as a decision input.
Takeaway
AI can write product copy. Research proved it. When users cannot tell the difference, you have evidence worth shipping on.