
Unified AI Search

Overview

The Problem

Our initial search experience worked much like any other property platform. Users entered a location and applied filters such as number of bedrooms or price. Although we offered far more options than our competitors, many of our filters went undiscovered or unused. We called filters like garden size, architectural style, parking type or tenure “Jitty Unique”, because no other platform had them. These unique tags were essential for users to unlock the full value of Jitty.

However, we noticed a gap between how people described their needs in research sessions and how they actually searched. They’d talk vividly about things like “a downstairs toilet for an elderly relative”, “being close enough to family and commutable”, “south-facing garden”, or “Victorian features”, yet defaulted to basic filters. At the time, fewer than 50% of searches were making use of Jitty Unique tags.

The Goal

We set a target: increase the share of searches using Jitty Unique filters from under 50% to 75%. We wanted the product to reflect how people really thought about finding a home: not just location and price, but all the nuance and lifestyle fit that mattered.

Design Process

Iteration 1 – Natural Language Search

We began by beta testing a free-text search bar. Users could type whatever they wanted, and we’d convert that input into structured filters behind the scenes. The first version worked technically but fell short in terms of UX. It wasn’t obvious to users what filters the system had applied, or whether it had interpreted their search correctly.

To improve this, we changed the placeholder text to a question, “What are you looking for?”, and added prompt suggestions (that I built 😊) to guide users into more expressive queries. We also introduced real-time conversion of their search query into visible filters. These small changes made a big difference. The experience became clearer, more responsive, and much easier to trust.
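Conceptually, the flow was simple: take the raw query, turn it into structured filters, and immediately show those filters back to the user. Here’s a rough sketch of that shape; the filter names and the toy rule-based parsing below are illustrative stand-ins for the AI model that did the real work:

```python
# Illustrative sketch only: in the real system an AI model did this parsing.
# The filter names and keyword rules below are simplified stand-ins.
from dataclasses import dataclass, field


@dataclass
class ParsedSearch:
    location: str | None = None
    min_bedrooms: int | None = None
    tags: list[str] = field(default_factory=list)  # "Jitty Unique" style tags


def parse_query(query: str) -> ParsedSearch:
    """Turn a free-text query into structured filters (toy rules, not the real model)."""
    q = query.lower()
    parsed = ParsedSearch()
    if "hackney" in q:
        parsed.location = "Hackney"
    if "3 bed" in q or "three bed" in q:
        parsed.min_bedrooms = 3
    for tag in ("victorian", "south-facing garden", "downstairs toilet"):
        if tag in q:
            parsed.tags.append(tag)
    return parsed


def visible_filters(parsed: ParsedSearch) -> list[str]:
    """The filter 'pills' a user sees, so they can check what the system understood."""
    pills = []
    if parsed.location:
        pills.append(f"Location: {parsed.location}")
    if parsed.min_bedrooms:
        pills.append(f"{parsed.min_bedrooms}+ beds")
    pills.extend(parsed.tags)
    return pills


print(visible_filters(parse_query("3 bed Victorian house in Hackney")))
# ['Location: Hackney', '3+ beds', 'victorian']
```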

When we launched this version, it performed strongly, hitting just under 10,000 searches, close to our previous biggest month (when we launched in London).

Richer Insights from Natural Language

This was the first time we had direct data on what users really wanted. Rather than just ticking boxes, people were expressing specific desires and constraints: “pastel kitchen”, “40 minutes from St Pancras”, “near good schools”. Analysing this data surfaced trends we might not have captured otherwise. One insight led directly to building travel-time search, which went on to become our most-used filter.
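As a toy illustration of the kind of analysis this enabled, even simple theme counting over the free-text query logs makes patterns like travel-time requests jump out (the queries, theme names and regexes below are invented for the example):

```python
# Hypothetical analysis sketch: count recurring themes in free-text query logs.
# The example queries, theme names and regexes are invented for illustration.
import re
from collections import Counter

queries = [
    "pastel kitchen",
    "40 minutes from St Pancras",
    "near good schools",
    "30 mins commute to London Bridge",
]

themes = {
    "travel time": re.compile(r"\b(minutes|mins|commute)\b", re.I),
    "schools": re.compile(r"\bschools?\b", re.I),
    "kitchen style": re.compile(r"\bkitchen\b", re.I),
}

# Count how many queries touch each theme; recurring themes hint at missing filters.
counts = Counter(
    name for q in queries for name, pattern in themes.items() if pattern.search(q)
)
print(counts.most_common())  # [('travel time', 2), ('schools', 1), ('kitchen style', 1)]
```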

Iteration 2 – Vision Search

Natural language search offered effectively unlimited search possibilities, but we soon hit scaling limitations. Each new filter needed to be integrated into our AI ingestion agent, creating a mounting technical challenge. To address this, we launched Inspiration Search, a vision-based search built on image embeddings. It allowed users to search for visual concepts across our database without requiring pre-defined filters. Faster models made the approach practical, and by launching it on a separate “inspiration” search page we could test it safely.
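Under the hood, embedding-based visual search boils down to ranking listing photos by similarity to the query. Here’s a minimal sketch, assuming photo embeddings are precomputed and using a stand-in for the text encoder (a CLIP-style text-image model would fill that role in a real system):

```python
# Minimal sketch of embedding-based visual search.
# Assumes each listing photo already has an embedding; embed_text() is a stand-in
# for a real text-image encoder (e.g. a CLIP-style model).
import numpy as np

rng = np.random.default_rng(0)

photo_embeddings = rng.normal(size=(1000, 512))  # one vector per listing photo
photo_embeddings /= np.linalg.norm(photo_embeddings, axis=1, keepdims=True)


def embed_text(query: str) -> np.ndarray:
    """Stand-in for the real text encoder; returns a unit vector."""
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)


def visual_search(query: str, top_k: int = 10) -> np.ndarray:
    """Rank photos by cosine similarity to the query embedding."""
    q = embed_text(query)
    scores = photo_embeddings @ q        # cosine similarity (all vectors are unit length)
    return np.argsort(-scores)[:top_k]   # indices of the best-matching photos


print(visual_search("house that looks like a castle"))
```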

The feature was a massive success. Monthly searches more than doubled, reaching over 20,000. Users delighted in discovering unique and aspirational features like “wine cellar”, “warehouse loft”, and “house that looks like a castle”; it became a cheat code for r/SpottedOnRightmove or r/ZillowGoneWild.

However, we observed users naturally combining visual and structured terms, searching for things like “two-bed warehouse conversion in Hackney”. This behaviour revealed that users didn’t want separate search modes. They expected one unified system.

Iteration 3 – Project Gollum: One Search to Rule Them All

We took this learning and built what we called Project Gollum, our unified AI search. It combined traditional filters, natural language parsing and visual embeddings into a single search experience.

This was the version of search we’d been building towards. Users could type expressive queries like “3 bed Victorian house in Hackney with wooden floors and traditional features” and our system would parse location, match structured filters, and use image search for the rest, all in real time. We retired the Inspiration Search page and rolled everything into one seamless experience.
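The core idea can be sketched as a two-stage pipeline: structured filters narrow the candidate set cheaply and exactly, then embeddings rank whatever the filters can’t express. The listing fields, parsed inputs and encoder below are illustrative stand-ins, not our production pipeline:

```python
# Sketch of the unified pipeline: structured filters narrow the candidates,
# then embedding similarity ranks whatever the filters couldn't express.
# Listing fields, parsed inputs and embed_text() are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

listings = [
    {
        "id": i,
        "location": "Hackney" if i % 2 else "Camden",
        "bedrooms": (i % 4) + 1,
        "embedding": rng.normal(size=64),
    }
    for i in range(100)
]
for listing in listings:
    listing["embedding"] /= np.linalg.norm(listing["embedding"])


def embed_text(text: str) -> np.ndarray:
    v = rng.normal(size=64)  # stand-in for a real text-image encoder
    return v / np.linalg.norm(v)


def unified_search(location: str, min_beds: int, visual_terms: str, top_k: int = 5):
    # 1. Structured filters: cheap, exact, applied first.
    candidates = [
        l for l in listings if l["location"] == location and l["bedrooms"] >= min_beds
    ]
    # 2. Visual ranking: whatever the filters couldn't capture goes to embeddings.
    q = embed_text(visual_terms)
    candidates.sort(key=lambda l: float(l["embedding"] @ q), reverse=True)
    return [l["id"] for l in candidates[:top_k]]


# "3 bed Victorian house in Hackney with wooden floors and traditional features"
print(unified_search("Hackney", 3, "victorian wooden floors traditional features"))
```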

This launch became our new baseline. Monthly search volumes climbed to almost 25,000, and we saw a huge shift in the type of queries being made. Most importantly, 83.9% of all searches now included Jitty Unique filters, comfortably past our goal of 75%.

Iteration 4 – Handling Partial Matches

The final major enhancement focused on transparency. Around 25% of searches resulted in partial matches: we might match two parts of a user’s request but not all of it. Previously, we didn’t show this clearly, which made the results feel ambiguous or broken.

We introduced a UI element that showed which parts of a query had been matched and which hadn’t. This was represented as a set of pills, giving users clear feedback about what the system had understood.
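The data behind this is straightforward: each part of the query carries a matched or unmatched flag, and the UI renders one pill per part. A small sketch with an illustrative schema (not our production one):

```python
# Sketch of how partial matches could be surfaced as pills.
# The data shape is illustrative, not our production schema.
from dataclasses import dataclass


@dataclass
class QueryPart:
    label: str     # the piece of the query, e.g. "south-facing garden"
    matched: bool  # whether we could actually filter or rank on it


def render_pills(parts: list[QueryPart]) -> str:
    """A text stand-in for the UI: matched parts get a tick, unmatched a cross."""
    return "  ".join(f"[{'✓' if p.matched else '✗'} {p.label}]" for p in parts)


parts = [
    QueryPart("3 beds", matched=True),
    QueryPart("Hackney", matched=True),
    QueryPart("south-facing garden", matched=False),
]
print(render_pills(parts))
# [✓ 3 beds]  [✓ Hackney]  [✗ south-facing garden]
```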

Outcome & Impact

This redesign reshaped how people used Jitty. Natural language and visual queries became mainstream. Monthly searches more than doubled compared to our previous high, and we achieved a Jitty Unique Search rate of 83.9%, surpassing our goal.

The fallback and match transparency features created a feedback loop that continues to fuel product development. Travel-time search, for example, came directly from recurring user input in free-text queries and became our most-used filter.

Feedback from users was overwhelmingly positive. Internally, it helped define how we positioned the product, not just as a property portal, but as the most powerful way to discover homes.