The Data & Insights Hub
180,000 independent hoteliers relied on Expedia to understand how their business was performing. Conflicting metrics and fragmented views left them unable to trust what they were seeing, let alone act on it. I led the redesign to restore that trust and point them toward their next best step.
Why it mattered
Unlike hotel chains, independent hoteliers had little to no account support or dedicated resources to fall back on. Expedia's platform was their only window into how their business was performing, how they compared with competitors, and where to focus next.
180,000+
hoteliers with little to no dedicated account support
65%
of Expedia's partners were independent hoteliers
$19M
the value of the performance space to Expedia
The problem
Fragmented views
Inconsistent data
No path to action
Hoteliers had to navigate multiple pages and reports to piece together a complete view of their performance. Most gave up before they got there, and made decisions based on whatever they happened to find first.

What made it hard
These weren't solely design oversights. They were symptoms of years of siloed work: different teams, each owning a separate part of the experience and shaping it around their immediate goals. Scattered metrics and inconsistent data were not accidents. They were the direct result of how the organization was structured.
Years of siloed work had produced a direct conflict
Two senior leaders from different organizations, reporting to different people, had incompatible visions: one prioritized commercial objectives, while the other prioritized features that other teams would depend on. Both approaches had their merits, but the tension had made progress impossible. I designed and facilitated a cereal-box workshop, an exercise in which stakeholders design the product's "box" together, forcing them to articulate a single coherent promise to the user rather than two competing internal ones. Working on the same artifact shifted the conversation from what each team wanted to what hoteliers actually needed.

A research gap that became a distribution strategy
Market managers, Expedia's internal team that coached hoteliers using the same tools, were a blind spot in our research. I identified the gap, partnered with our user researcher to close it, and saw a clear implication. If the tool worked well for market managers, they would become advocates and accelerate adoption among hoteliers. That reframing changed the design brief. Designing for both was not just broader coverage. It was a distribution strategy.
The solution
With a shared direction and a rewritten brief, I had a clear frame. Hoteliers needed to trust the data they were looking at before any guidance could carry weight, and that meant giving them a single, reliable place to find it.
One place for everything
I designed a single dashboard that brought all performance data together for the first time. A fixed layout would have forced a compromise: too bare for hoteliers who needed granular visibility, too dense for those who just wanted the basics. Instead, hoteliers controlled which metrics they saw and how they organized them. The same interface could serve a property manager and a seasoned revenue manager without asking either to compromise.

"It’s now much easier to see all the data I need."
— Noelia, Revenue Manager
Data hoteliers could trust
The metrics were owned by a separate team with different timelines and priorities. I ran a series of demos to VP-level leadership that made the gaps impossible to ignore, and I negotiated a phased approach: launching with a verified set of metrics and expanding as each one was resolved. I then partnered with our content designer to build a metric glossary defining how each metric works and how it is calculated. Those definitions were surfaced directly in the interface, so hoteliers could click on any metric to understand exactly what they were looking at.

"This is great. I have been waiting for this for a long time. Now we can speak the same language between Expedia and our hotel partners."
— Greg, Market Manager
An AI coaching layer built on trust
Defining quality
Dealing with latency
Closing the loop
Reliable data was the foundation, but the guidance had to earn users' trust as well. I designed a feature that generated personalized insights, meaning no two outputs would be the same. Unlike a conventional deterministic interface, an AI-powered one isn't fully controllable, and that fundamentally changed what design meant here. I co-led two workshops with our content designer, one with market managers and one with ML engineers, to produce a set of "golden examples". These became both the evaluation benchmark and the quality bar that made the output trustworthy enough to ship.


What to avoid
We shouldn't display direct pricing recommendations or adjustments
Golden examples
With the help of market managers, we came up with 16 golden examples
System prompt
Craft a brief, compelling headline for each actionable insight that gives a clear course of action…
"The AI overview is really helpful for showcasing things I may have missed."
— Richard, Property Manager
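The quality bar described above combined hard rules ("what to avoid") with golden examples used as a benchmark. As a minimal sketch of that idea, the snippet below shows a hypothetical guardrail check; the function names, pattern, and example insights are illustrative assumptions, not the actual pipeline, which also involved human and model-based evaluation.

```python
import re

# Hypothetical guardrail: flag generated insights that contain direct
# pricing recommendations, one of the agreed "what to avoid" rules.
PRICING_PATTERN = re.compile(
    r"\b(set|lower|raise|drop)\s+(your\s+)?(price|rate)s?\b",
    re.IGNORECASE,
)

def violates_pricing_rule(insight: str) -> bool:
    """Return True if the insight reads like a direct pricing instruction."""
    return bool(PRICING_PATTERN.search(insight))

# Illustrative stand-ins for the golden examples (the real set had 16,
# written with market managers).
golden_examples = [
    "Your weekend occupancy trails similar properties; "
    "consider adding a member-only promotion.",
    "Guests mention breakfast in 40% of reviews; "
    "highlighting it could lift conversion.",
]

def passes_quality_bar(insight: str) -> bool:
    # A fuller pipeline would also score each insight against the golden
    # examples; here only the hard rule is enforced.
    return not violates_pricing_rule(insight)

# Every golden example clears the guardrail; a direct pricing
# instruction does not.
assert all(passes_quality_bar(g) for g in golden_examples)
assert not passes_quality_bar("Lower your rates by 15% this weekend.")
```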
From MVP to global release
Our initial team included a senior product manager, a senior engineer, a researcher, a content designer, and me. We shipped an MVP in a single quarter. Launching in English only let us skip localization entirely and iterate on content until we were confident in it. Rather than releasing broadly, we started with a test group where users could roll back to the previous experience at any time. That opt-out was a more honest signal than any survey. Our threshold for global release was 85% of users choosing to stay.
We ran three rounds of qualitative research across the full timeline, each shaping what we built next. As research signals improved and rollback requests declined, we expanded access to more users and locations. Reaching global release required more features, which required more hands. The team grew: the PM changed, thirteen engineers were added, and two designers joined under my direction. My role shifted from hands-on craft to setting design direction, sequencing what we built, and keeping the experience coherent as more people built more of it.
The outcome
For the first time, hoteliers had a single place to look, data they could trust, and guidance grounded in their own numbers. Recommendation completion rate was the metric senior leaders tied directly to gross profit, and it contributed to a 7% increase on a $19M base. When we started, it sat at 2%. That was not just a low number; it was a measure of how completely hoteliers had learned to tune the platform out. It tripled to 6%. The retention rate and SUS score told the same story.
3X
recommendation completion rate (2% → 6%)
85.5%
of hoteliers in the test group stayed in the new version
A (83.1)
Top 10% of software products tested with the SUS method