Improving Search with AI
Increased search click-through rate by 4%, improved customer satisfaction by 8%, and reduced maintenance time by 42%.
Overview
A cross-team initiative to rebuild HSE's core search experience, one of the highest-intent journeys for 1.3M+ active customers across web and app.
Role & Scope
- Role: Team Lead Product Design (head-of-discipline scope)
- Scope: Led the designer embedded in the search domain, partnered with Product and Engineering, aligned goals across teams, and ensured design decisions were tied to measurable outcomes.
- Channels: Web shop, main app
- Duration: 2024
Impact
- +4% click-through rate on search result pages
- +8% customer satisfaction for search
- -42% maintenance time for engineering
The Problem
The previous search setup behaved like a black box: limited control over ranking and relevance, difficult to explain why results were shown, hard to tune for different user intents, and too much operational effort for engineering. This created friction for users and slowed down iteration.
The Goal
Make search more relevant for different customer behaviours, and make it easier to improve over time.
- Increase result relevance and clarity
- Improve CTR on search results
- Reduce dissatisfaction with search
How I Led
Search touched many teams: Product, Engineering, Design, Merchandising. My job was to keep the experience goals clear and ensure we could ship improvements without constant cross-team friction.
- Led the designer embedded in the search team and set the design direction
- Aligned PMs and engineers on shared success metrics
- Structured prioritization and clarified trade-offs
- Kept design involved in system decisions early, not at the end
What We Changed
We rebuilt the search platform using Elasticsearch. The design work focused on how users experience relevance.
- Intent-based scenarios (product number, brand, category, generic)
- Result page hierarchy and information clarity
- Filters and refinements that match user intent
- Better empty and zero-result states
- Consistent behavior across web and mobile
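The rebuild itself is not shown here, but a minimal sketch of the kind of relevance tuning Elasticsearch enables — a boosted multi-field query with a fuzzy fallback for zero-result recovery — might look like this in Python. The field names (`brand`, `category`, `title`) and boost values are illustrative assumptions, not HSE's actual schema:

```python
def search_with_fallback(es, index: str, query: str) -> dict:
    """Run a boosted multi_match query; retry with fuzziness on zero results.

    Sketch only: `es` is an elasticsearch-py style client, and the fields
    and boosts below are hypothetical.
    """
    body = {
        "query": {
            "multi_match": {
                "query": query,
                # Boost structured fields over free text so brand and
                # category intents rank above generic matches.
                "fields": ["brand^3", "category^2", "title"],
            }
        }
    }
    result = es.search(index=index, body=body)
    if result["hits"]["total"]["value"] == 0:
        # Zero-result recovery: tolerate typos before showing an empty state.
        body["query"]["multi_match"]["fuzziness"] = "AUTO"
        result = es.search(index=index, body=body)
    return result
```

Shaping the query per intent in application code, rather than inside an opaque ranking pipeline, is what makes results explainable and tunable — the core complaint about the previous black-box setup.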
Defining Relevance
Search quality is not one metric. "Relevance" had to be defined before it could be improved or measured.
HSE customers search in four distinct ways: by product number from TV, by brand, by category, and by open-ended queries. Each has a different definition of a good result.
We mapped those intent types and defined what success looks like for each: correct ranking, clear information scent, useful filters, and recovery from zero results. That definition became a shared rubric across Design, Research, Product, and Data.
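As an illustration of that mapping, the four intent types could be sketched as a simple classifier. The heuristics, brand names, and category names below are hypothetical stand-ins for whatever signals the real system used:

```python
import re

# The four intent types, each with its own definition of a good result.
INTENTS = ("product_number", "brand", "category", "generic")

KNOWN_BRANDS = {"judith williams", "tamaris"}          # illustrative only
KNOWN_CATEGORIES = {"shoes", "skincare", "jewellery"}  # illustrative only

def classify_intent(query: str) -> str:
    """Map a raw query string to one of the four intent types (sketch)."""
    q = query.strip().lower()
    if re.fullmatch(r"\d{5,9}", q):
        # Product numbers are read off the TV screen, so all-digit
        # queries signal a single exact item.
        return "product_number"
    if q in KNOWN_BRANDS:
        return "brand"
    if q in KNOWN_CATEGORIES:
        return "category"
    # Open-ended queries fall back to generic full-text relevance.
    return "generic"
```

Even a rough classifier like this makes the rubric operational: each branch can carry its own ranking rules, filters, and zero-result behavior.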
Qualitative Research
Before defining success metrics, we ran user interviews to understand how customers described search failures in their own words: results that felt off, queries that returned nothing useful, filters that did not match how they thought about products.
That language shaped the rubric. It told us what "relevant" actually meant to users, not just what the system was optimizing for.
How We Validated It
- Audited top queries before and after each change
- Tracked CTR, query refinement rate, and zero-result recovery
- Measured search exits and customer satisfaction
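The tracking above could be computed from a search query log roughly like this. The log schema (`results`, `clicked`, `refined` fields) is an assumption for illustration, not the actual instrumentation:

```python
def search_metrics(log: list[dict]) -> dict:
    """Compute CTR, query refinement rate, and zero-result rate from a log.

    Each entry is assumed (hypothetically) to look like:
    {"query": str, "results": int, "clicked": bool, "refined": bool}
    """
    total = len(log)
    if total == 0:
        return {"ctr": 0.0, "refinement_rate": 0.0, "zero_result_rate": 0.0}
    return {
        # Share of searches where the user clicked a result.
        "ctr": sum(e["clicked"] for e in log) / total,
        # Share of searches the user had to rephrase.
        "refinement_rate": sum(e["refined"] for e in log) / total,
        # Share of searches that returned nothing.
        "zero_result_rate": sum(e["results"] == 0 for e in log) / total,
    }
```

Auditing these rates per intent type, before and after each change, is what ties the design rubric back to measurable outcomes.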
Takeaway
Redesigning search for 1.3M+ customers showed that when design defines relevance and AI delivers it, the results are measurable: higher click-through, higher satisfaction, and less maintenance effort.