Human-Centered Idea Validation via Rapid-Prototyping — Case Study: heyChauffeur

Role: Project Lead & Experiment Design Lead

Building a top-notch product or service takes time, we all know it. It takes a tribe of excellent features and services working in harmony, providing a unique journey that the consumer can distinguish from others with strong feeling and reasoning. Until reaching that distinguished service design, teams try a lot and fail until a set of successful features and services bond to one another, shaping the DNA of an enterprise.

During this long marathon, figuring out which ideas to work on can become a hefty project in itself. If your job function is product related, you’re probably already familiar with many ways to determine what the team should work on next. These methods range from the CEO, Head of Product, or Product Manager simply deciding what’s coming up next in the pipeline, to frameworks that transparently justify the decision by gathering the necessary information and then prioritizing.

Here’s a long list of Product Prioritization Frameworks (I highly recommend #17) that have proven themselves to be great methodologies.

Image Credit: “Solving Product Design Exercises”, Artiom Dashinsky

Let’s say you’ve used a simple matrix such as Effort vs. Impact and determined the position of your ideas by looking at them from two perspectives:

  • The impact it would make once successfully deployed;
  • The amount of work it would take to ship it.

This is a very practical way to lay out ideas and prioritize them, and I’ve personally used it a lot. However, it tends to make the team focus on ideas that are relatively easy to implement.


If your company is not only building a digital product but providing an end-to-end service via strong collaboration between business and technology, then your game-changing ideas could pile up in the high-effort corner of the matrix, because a service design that requires a lot of groundwork is quite different from a feature built on the tech stack.


No matter which methodology you use, there will be shortcomings, and it will not be easy for the leadership to make big and risky bets without truly justifying the Return on Investment (ROI).

“We were exactly in that boat when we had the 💡 of heyChauffeur…”

From “Online Sales of New Cars Study” by Roland Berger.

It all began when we noticed, in Roland Berger’s 2016 report on the German automotive industry, a consumer need for an alternative way to test-drive cars.


Test-driving a car is one of the last touch-points in the car-consumer journey and is highly likely to determine the buying decision. Booking a time slot at the dealership for the car you’re interested in, commuting there, and doing all this before the car is sold to another customer are tasks the consumer must complete no matter how busy their life might be. Perhaps one of the best alternatives to test-driving at the dealership is providing that experience at the consumer’s home or office.

At this point, it was pretty easy for our Idea Sprint team to come up with a hypothesis:


Here are some topics/challenges for providing a test-drive that could be done at the consumer’s preferred location:

  • Inventory management: Availability of the exact car (or an almost identical one) the consumer is interested in buying;
  • Insurance: Coverage for the time the consumer is test-driving the vehicle and for the delivery of the vehicle;
  • Logistics: Scheduling and the other logistics of getting the car to the consumer;
  • Depreciation: The loss of value in a product that is being test-driven;
  • Cost: Time spent on the job and compensation of the people involved.

Months could pass before the team even reached a Minimum Viable Product (MVP) delivering a unique test-driving experience. And if the project failed, it would have been an expensive test.

Would the leadership even greenlight such a costly experiment in the first place?

The beauty of a human-centered (or user-centered) design process is thinking through each business challenge while considering the consumer’s needs and their likely response to a given feature or product. Along the way, we get a chance to test ideas via rapid prototyping, which helps us validate the solution and justify building an expensive MVP.


At heycar, we dedicate one week each quarter to ideating and implementing Proofs of Concept, MVPs, validations via rapid prototypes, and pretty much anything else that could bring value to the consumer and our business. The Idea Sprint was a great opportunity for our heyChauffeur team to set a time constraint of one week and accomplish our goal of validating the need with our users.

Designing the Experiment

So, how can we validate a new service idea, allowing users to book cars to test-drive at home, without actually implementing the service?

And do all that in just one week…

🙀 🌈 🙀 🌈 🙀 🌈 🙀 🌈 🙀 🌈 🙀 🌈 🙀 🌈 🙀 🌈 🙀

Well… as mentioned before, human-centered design starts with the user, and so did we. We knew from the beginning that we had to validate this need with users. The fastest way to run an experiment would be to build something on the platform instead of testing the service in real life. So we looked at what we had in our hands and brainstormed.

User Persona:

The Strategic Design team had already defined the user persona, Prestige, that is most likely to be interested in such a service, and even to pay money for it.

One of the personas among heycar users observed by Strategic Design — Ron Gabay


The Data Science team could identify the Prestige segment from historical data and let us target these users via their userIDs once they returned to the site.



We already had a feature-flagging system in place; however, we had to build a userID-based A/B testing infrastructure that runs on the client side. This was yet another challenge for the team.
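A userID-based, client-side assignment like this can be sketched roughly as follows. This is a minimal TypeScript illustration of the general technique, not heycar’s actual implementation; the function names and variant labels are hypothetical. The key idea is to hash the userID deterministically, so a returning user always lands in the same variant without any server round-trip.

```typescript
// Minimal sketch of deterministic, client-side A/B variant assignment.
// Hashing the userID means a returning user always gets the same bucket.

type Variant = "A_free" | "B_paid";

// Simple FNV-1a string hash: deterministic and fast, good enough for bucketing.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Assign a user to one of two variants with a 50/50 split.
function assignVariant(userId: string): Variant {
  return fnv1a(userId) % 2 === 0 ? "A_free" : "B_paid";
}

// The same user keeps the same variant across sessions:
const first = assignVariant("user-1234");
const second = assignVariant("user-1234");
console.log(first === second); // always true: pure function of the userID
```

Because assignment is a pure function of the userID, no variant state needs to be persisted; the feature flag only has to gate who enters the experiment at all.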


Then, it was time to set our goals for the experiment:

Goals of the Experiment:

  • Validate whether users would be interested in test-driving at home;
  • Validate whether people would pay 100 EUR for the heyChauffeur service;
  • Validate whether the “Prestige” persona is the most interested segment within the user base.


Looking at our resources, we discussed and designed the experiment below to get the most out of it:

  • Implement a funnel that communicates the heyChauffeur value proposition to selected users via feature flags.
  • Observe users’ behavior (clicks & email submissions) to determine their level of interest and intent.
  • Design two variants (paid vs. free heyChauffeur) to determine whether there is a behavioral difference between user segments.

Here’s how we’ve differentiated the two offers from each other:

  • Variant A — free to book cars
  • Variant B — 100 EUR to book up to 3 cars

Below, you can find the user flow we’ve designed for this experiment.


As you can see from the user flow, we would first receive metrics on our users’ interest in the service, then gauge their level of intent with the help of the second step, email submission. This would also allow us to deliver the service to those users who were really interested in heyChauffeur.

Key Performance Indicators (KPIs):

To determine the level of interest and intent, we decided on the KPIs below:

– Interest: Interacting with the offer by submitting postal code

Number of Postal Code Submissions / Unique Users Who Viewed the Offer

– Intent: Providing an email address to receive updates

Number of Unique Users Submitted Email Addresses / Unique Users Who Viewed the Offer
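Both KPIs are simple ratios over the same funnel denominator. As a quick illustration (with made-up counts, not our actual results):

```typescript
// Funnel counts for one variant (illustrative numbers only).
const offerViews = 2000;        // unique users who viewed the offer
const postalSubmissions = 300;  // postal-code submissions
const emailSubmissions = 90;    // unique users who submitted an email address

// Interest: postal-code submissions per unique offer view.
const interestRate = postalSubmissions / offerViews; // 300 / 2000 = 0.15

// Intent: email submissions per unique offer view.
const intentRate = emailSubmissions / offerViews;    // 90 / 2000 = 0.045

console.log(`Interest: ${(interestRate * 100).toFixed(1)}%`); // 15.0%
console.log(`Intent: ${(intentRate * 100).toFixed(1)}%`);     // 4.5%
```

Using unique offer views as the denominator for both KPIs keeps them directly comparable: intent is always a subset of the same audience that expressed interest.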


The moment we implemented this experiment and started seeing numbers come in, we felt like heroes. Everyone on the team was really proud to be part of an experiment that came together during the hack week.

And the results were impressive; we really did validate the need with our users. There were no significant differences between the Prestige persona and the other returning users who participated in the experiment.

Metrics for each step of the funnel as well as variants.

The culture of experimentation and validation at heycar has been improving rapidly, and we’ve learned quite a lot as a team. Putting the user at the center of each experiment and validation process has been helping us see the future of our product much more clearly.

I would like to mention the team members who contributed to the project. It’s been a great pleasure to work with them during the Idea Sprint.

Experiment Lead: Kaan L Caglar

Full Stack Developers: Marcelo Boeira, Ilker Guller

Data Scientist: Ángel De Jaén

Visual Design: Ron Schmidt, Alice Revel

Strategic Design: Ron Gabay