B2C · E-Commerce · Personalization · In-House
Zalando Lounge: ML-Based Personalization
Restructured 4 siloed teams around shared metrics and led the design of an ML-powered personalized campaign, driving +5% GMV (~€100M) across 20+ markets.
Design Manager, Head of Design
Zalando Lounge
2021 to 2023
iOS · Android · Web

Executive Summary
Zalando Lounge is Europe's largest flash-sale fashion platform, operating across 20+ markets on iOS, Android, and web. Its model creates a discovery challenge at scale: time-sensitive campaigns, limited inventory, and strong competition for attention. On any given day, users faced 112+ campaigns with 800+ products each. Only 5.2% of campaigns were viewed, and just 9% of products within each campaign were seen. This was not only a UX issue; it was a discovery gap with direct impact on conversion and GMV.
The core mission
Deliver a personalized Lounge journey that feels intentional and easy to act on—transparent recommendations members can scan quickly and trust—while scaling curation, ML, and measurement across millions of shoppers, multiple devices, and 20+ markets.
Strategic Landscape
The flash-sale model creates a unique pressure: time-sensitive campaigns, limited inventory, and high competition for top items. On any given day, users faced 112+ campaigns with 800+ products each.
Only 5.2% of campaigns were viewed. Just 9% of products within each campaign were seen. This wasn't a UX inconvenience. It was a discovery gap that directly eroded conversion and GMV, the two metrics the leadership team was accountable for. At this scale, personalization wasn't a nice-to-have. It was existential.
Diagnosis
The challenge was not only about helping users navigate a large catalog. It was also a missed opportunity. Personalization was already an established customer expectation, a proven pattern in the market, and a capability Zalando itself had already validated in its parent product.
User problem
Users were drowning in choice with no clear way to surface what was most relevant to them. With 112+ campaigns and 800+ products per campaign, only 9% of products were ever seen. Qualitative research and onsite surveys confirmed the signal: the catalog felt generic, not personal.
Business problem
This was a missed opportunity. Personalization was already influencing purchase behavior, competitors were setting expectations, and Zalando had already proven the value of tailored recommendations in its core app. Yet in Lounge, relevant discovery was still underpowered, limiting conversion and GMV.
Organizational problem
Personal Relevance sat within CFA, but meaningful recommendations depended on close coordination across Lounge, especially with ADP, alongside Photo Studio, Markets Team, and Buyers. ADP was central to the work and had to operate almost like part of the squad while remaining organizationally outside it. Critical inputs were distributed, but no single team owned the recommendation experience end to end.
Zalando Lounge team structure
Personalization in Lounge did not depend on one team alone. Personal Relevance sat within CFA, while critical inputs came from ADP, Photo Studio, Markets Team, and Buyers. Understanding this setup is key to understanding why the problem persisted and why coordination mattered so much.
Product Development
Pillar 2: Browse Customer Facing Applications (CFA)
These are the cross-functional squads that build the digital product. Each squad has dedicated Product, Design, Engineering, and Analytics members co-owning goals.
Inspire Squad
Tailor Squad
Goal Based Shopping Squad
Personal Relevance Squad (this project)
Contributing Units
Separate organizational units within Zalando Lounge
Each unit reports through its own leadership chain and optimizes for its own metrics. They contribute to the customer experience but have no shared goals with CFA or each other.
Algorithmic Data Products (ADP)
Optimizing: Algorithm accuracy
Photo Studio
Optimizing: Visual quality
Markets
Optimizing: Campaign operations
Personal Relevance, ADP, Photo Studio, and Markets each chased their own metrics, with zero shared ownership of the user experience.
Personal Relevance Squad
Not a re-org. One squad formed through rituals: co-creation, alignment syncs, async communication, and a unified roadmap.
Personal Relevance Squad
Unified squad
Connected through rituals
Co-creation sessions
Alignment syncs
Async communication
Unified roadmap
Shared ownership of user-centric success metrics, with every discipline contributing to one product capability instead of optimizing in isolation.
Strategic Response
No algorithm could fix a fragmented experience owned by competing teams. The org problem had to be solved before the design problem.
Cross-functional squad
Shared observation
Two-week sprints
Single-market validation
Closed feedback loops
Trade-off: sacrificed rollout speed for learning depth. Securing alignment with the Directors of Product, Engineering, and Analytics took three weeks.
Measurement framework
HEART framework (Happiness, Engagement, Adoption, Retention, Task Success): first time at Lounge. Replaced siloed KPI optimization with a shared success language.
Evolution
From vision to capability
It started with a PRFAQ that defined what great could look like. From there, we turned that vision into milestones, then built the methods, rituals, tools, and collaborations needed to make it real.
Design & Delivery
What the squad uncovered
Three signals shaped the solution. Users saw only 9% of products per campaign, so discovery was fundamentally broken. Stylists in the Photo Studio had fashion intuition that no algorithm could replicate. And users needed to scan quickly but also go deep when something caught their eye.
The solution: Top Picks for You
A curated, personalized campaign limited to 80 products. Contextual grouping so users almost always see them all. Progressive disclosure: scannable overview first, expandable detail for depth. The "Showstopper" campaign cover adapts content, language, and layout per device and market, which is what made scaling to 17 markets possible without custom work per region. Explanation popups and a feedback tool so users could see why they were getting specific recommendations and tell us when we got it wrong.
The critical design leadership decision: blending human curation with ML. Rather than letting the algorithm run alone, we brought stylists into the loop to co-create personalized prototypes that reflected a 1 to 3 year vision. This was controversial because it meant slower iteration, but it produced recommendations that felt human, not computed.
Iteration and results
M1 improved attention, but not enough downstream behavior to justify scale. We treated that as a signal, refined both the recommendation logic and the experience, and only scaled once the full metric picture improved.
CTR rose to 5.20%, but GMV stayed negative and purchases slipped, so we held back from scaling. The diagnosis was weak conversion after the click: we weighted purchase history more strongly in the algorithm and tightened progressive disclosure before the next readout.
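A minimal sketch of what "weighting purchase history more strongly" can look like in a blended ranker. This is illustrative only, not the actual Lounge algorithm: the `Candidate` fields, the linear blend, and the weight values are all assumptions.

```python
# Hypothetical re-ranking sketch (not Lounge's real system). It illustrates
# shifting weight from click signals toward purchase-history affinity.
from dataclasses import dataclass

@dataclass
class Candidate:
    product_id: str
    click_affinity: float      # similarity to items the user clicked (0..1)
    purchase_affinity: float   # similarity to items the user bought (0..1)

def score(c: Candidate, w_purchase: float = 0.7) -> float:
    """Blend the two signals; raising w_purchase favors purchase history."""
    return (1 - w_purchase) * c.click_affinity + w_purchase * c.purchase_affinity

def top_picks(candidates, k=80, w_purchase=0.7):
    """Return the k highest-scoring products (Top Picks was capped at 80)."""
    return sorted(candidates, key=lambda c: score(c, w_purchase), reverse=True)[:k]
```

With a high `w_purchase`, a product resembling past purchases outranks one that merely resembles past clicks, which is the direction of the M1-to-M2 adjustment described above.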
| Metric | M1 | M2 |
|---|---|---|
| Average CTR | 5.20% | 8.11% |
| Articles purchased | −0.20% | +0.91% |
| User satisfaction | 31% | 51% |

Add-to-cart and purchases vs. control reached statistical significance.
By M2, the experience was not only attracting users; it was converting better, driving more purchases, and generating stronger satisfaction signals.
The key leadership decision was restraint: not scaling the first positive signal, but waiting until the system performed end to end.
Impact
+5% GMV
≈ €100M incremental revenue from higher engagement and session-to-purchase efficiency
+2%
Conversion rate uplift, well above what typical algorithmic tweaks deliver
+50%
Customer satisfaction increase as users valued transparency and context
These results were reached through three measured milestones, iterating from mixed signals in M1 to statistical significance by M2.
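The "statistical significance" readouts behind these milestones map to a standard two-proportion z-test comparing a rate (e.g. add-to-cart) in the treatment group against control. A self-contained sketch with illustrative numbers; the actual Lounge analysis setup is not documented here.

```python
# Two-proportion z-test: is the treatment rate different from control?
# Standard-library only; numbers in any usage are illustrative.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 1,200 add-to-carts out of 10,000 treatment sessions vs. 1,000 out of 10,000 control sessions yields z ≈ 4.5, comfortably past the usual p < 0.05 threshold.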
Transformation
Beyond the metrics, the lasting impact was organizational. Personalization moved from a siloed technical problem to a cross-functional product capability.
Permanent cross-functional model
Shared ownership
Innovation pipeline
Leadership Principles
Vision before execution
Design thinking unlocks data teams
Culture scales, code doesn't
Looking ahead
Top Picks secured resources for advanced ML, deeper style preference analysis, and context-aware recommendations: weather data, personal milestones, user-controlled personalization. Long-term: omnichannel personalization across web, mobile, in-store, and marketing.
Appendix: HEART Framework
The HEART framework combined attitudinal and behavioral metrics to measure whether personalization improved both user perception and real behavior. It created a shared lens for evaluating success across experience, engagement, and business impact.
HEART Framework | Top Picks for You
| Dimension | Goals | Signals | Metrics |
|---|---|---|---|
| Happiness | Users find what they are looking for faster and more easily, and shop happily | · Feature works as expected · Users share gratitude toward the feature · Users find what they need/like and buy faster than before | · Catalog page loading time · Positive explicit feedback on the feature (i.e., GetFeedback) · Time from session start to checkout completion |
| Engagement | Users view every product in TP4U | · Users find the majority of the 80-product assortment interesting, surprising, or appealing | · % of products photo-swiped or PDP-viewed · % scroll depth on the catalog page (available products) · % of unique users reaching the end of the page |
| Adoption | Users see value in the TP4U feature proposal | · Using the new feature | · % of users opening the TP4U catalog |
| Retention | Users enjoy checking TP4U and come back again | · TP4U becomes part of the Lounge app ritual · Increases retention of existing users | · % of LDAU that view TP4U · % increase of LDAU among TP4U visitors |
| Task Success | Users find interesting products that appeal to them and purchase | · Users view products in detail · Users add to cart · Users make a purchase | · Number of products viewed per user (image swiped or PDP viewed) · % of users adding to cart · % of users making a purchase in a single month |
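Most of the HEART metrics above reduce to simple ratios over an event log. A hypothetical sketch of the Adoption metric ("% of users opening the TP4U catalog"); the event name and log shape are assumptions, not Lounge's real pipeline.

```python
# Hypothetical event-log computation of the HEART Adoption metric.
# events: iterable of (user_id, event_name) tuples from an analytics log.
def adoption_rate(events) -> float:
    """Share of active users who opened the TP4U catalog at least once."""
    events = list(events)
    active = {user for user, _ in events}                       # any activity
    openers = {user for user, e in events if e == "tp4u_catalog_open"}
    return len(openers) / len(active) if active else 0.0
```

The same pattern covers the Retention and Task Success percentages: count distinct users matching the target event, divide by the relevant base population.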