Product Management & Design

Improving Product Reviews

Role: Manager, Product Management

In the world of eCommerce, review content is critical. In discussions with our brand stakeholders, we heard time and time again that they wanted to make reviews “better.” But how? What did that mean? With an official line item in our fiscal-year goals to tick off, our team set out to figure out exactly what we were trying to solve.

Part I: Discovery


PROBLEM DEFINITION

We’d heard from our stakeholders that “reviews were really important,” but no one had been able to tell us why. So that was our starting place: first answer “Are reviews important?” before going down a major rabbit hole. We’d found it was WAY TOO EASY to lose days in discovery land.

Luckily, a fairly basic review of existing analytics provided great insight: as it turned out, both star rating and number of reviews were statistically significant drivers of PDP-to-cart conversion rates. Each star adds approximately a 1% boost to the product’s overall conversion rate, whereas each review adds about a 0.02% boost. Pretty quickly we were identifying that “improving reviews” might mean “increasing reviews” and “maximizing the impact of positive reviews” on our conversion funnel.
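The analysis behind numbers like these doesn’t need to be fancy. Here’s a minimal sketch of the kind of regression that surfaces those drivers, assuming a per-product export with hypothetical column names (this is illustrative, not our actual pipeline):

```python
# Sketch of the driver analysis; file and column names are hypothetical.
# Assumes one row per product: average star rating, review count, and
# PDP-to-cart conversion rate.
import pandas as pd
import statsmodels.formula.api as smf

pdp = pd.read_csv("pdp_conversion.csv")  # hypothetical analytics export

model = smf.ols("conversion_rate ~ star_rating + review_count", data=pdp).fit()
print(model.summary())
# Coefficients of roughly 0.01 on star_rating and 0.0002 on review_count
# would line up with the ~1% per star and ~0.02% per review boosts above.
```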

RESEARCH

With some confidence that increasing reviews was a goal worth investigating, we utilized three primary inputs to learn more about possible drop-off points in the write-a-review process: (a) competitive analysis, (b) customer research, and (c) feature analytics. Our competitive analysis pointed us toward a key area where the URBN approach to reviews was notably different from many competitors’: we required users to be authenticated to write a review, while many retailers did not. And a deep dive into our reviews feature analytics showed a serious drop-off in review completion rates for customers forced to log in or create a new account, compared to customers who entered the review experience already authenticated (thereby bypassing that step entirely).

Example: Competitive Analysis

Primary Learning: Many competitors don’t require authentication to write a review.

Example: Data Studio Dashboard - Feature Analytics

Primary Learning: Customers who have to sign in to write a review have a review completion rate almost 30x lower than those who do not!
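Under the hood, the dashboard math reduces to a completion rate per entry cohort. A rough sketch of that computation, assuming an event-level export (column names here are hypothetical):

```python
import pandas as pd

# Hypothetical export: one row per "write a review" click, with the
# customer's auth state at entry and whether a review was ultimately submitted.
events = pd.read_csv("review_funnel_events.csv")

completion = (
    events.groupby("entered_authenticated")["review_submitted"]
    .mean()
    .rename("completion_rate")
)
print(completion)
# Rates of ~0.30 (already signed in) vs ~0.01 (forced to log in) give the
# roughly 30x gap called out above.
```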

An Interlude on the Product Team’s Role in Managing Stakeholders & Delivery Resources

During this process we had a few very loud, very prominent stakeholders pushing for a complex redesign of the reviews form to make it feel slicker. These stakeholders had assumed that “improving reviews” meant a “better reviews form,” and they weren’t shy about telling us. Simultaneously, we had serious compression in our delivery pipeline, with lots of competing initiatives and too few resources to deliver them all. As a result, we knew we needed to be scrappy: deliver value without risking the rest of the portfolio our teams were heads down on. This was going to be a test of delivering “good enough” on a slim budget, despite the stakeholder pressure facing us. And we would need some proof to get that group to buy in and agree on a solution that wasn’t a new reviews form.

 

Part II: Experiment

HYPOTHESIS FORMATION

With a scrappy mindset in place, we zeroed in on the drop-off between customers entering the write-a-review funnel pre-authenticated and those forced to log in or create an account to reach the write-review form. The pre-authenticated group completed the write-a-review process at a rate of almost 30%, while the group forced through authentication completed reviews at a rate of just over 1%! That felt like a gap worth going after, especially because we could easily test it, measure it, and launch it virtually for free (no engineering resources).

HYPOTHESIS VALIDATION


A/B Test Design

Test Structure:
Control: Authentication is required to leave a review.
Test: Authentication is NOT required to leave a review.

Results:
Across our various site properties, the test group saw a 12–40% increase in completed reviews per click-to-write action. That would amount to several hundred extra reviews per brand during the two-week test period alone.
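For results like these, a two-proportion z-test is a standard way to check that the lift isn’t noise. A sketch with illustrative counts (not our actual test data):

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts: completed reviews and "write a review" clicks
# for the test (no auth) and control (auth required) groups.
completed = [310, 245]
clicks = [9800, 9750]

z_stat, p_value = proportions_ztest(completed, clicks)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value supports rolling the change out beyond the test.
```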

 

Part III: Implement & Iterate

One of the concerns around removing the authentication gate was that it would result in a flood of unqualified, junk reviews. We worked with our brand stakeholders to pair the feature release with a plan for monitoring the rate of review approvals in the moderation platform across all three brands. At 15-, 30-, and 60-day intervals, we used our internal Business Intelligence tools to confirm that our increase in reviews was not artificially inflated by useless content.
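The monitoring itself was simple in spirit: compare approval rates against the pre-launch baseline at each checkpoint. A sketch of that check, with hypothetical table and column names (and a placeholder launch date):

```python
import pandas as pd

# Hypothetical moderation-platform export: one row per submitted review,
# with brand, submission timestamp, and whether moderation approved it.
reviews = pd.read_csv("moderation_log.csv", parse_dates=["submitted_at"])
launch = pd.Timestamp("2020-06-01")  # placeholder launch date

for days in (15, 30, 60):
    window = reviews[
        reviews["submitted_at"].between(launch, launch + pd.Timedelta(days=days))
    ]
    print(f"--- approval rate by brand, first {days} days post-launch ---")
    print(window.groupby("brand")["approved"].mean())
# Approval rates holding steady against the pre-launch baseline confirm the
# extra reviews aren't junk content.
```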