In partnership with Nike and R/GA, RAIN designed an experience on the Google Assistant platform around a first-of-its-kind, voice-activated sneaker drop for the Nike Adapt BB, a power-lacing shoe that Nike describes as its most advanced and custom ‘fit’ ever. Months of strategy, design, and development culminated on February 7 during a live TNT broadcast of the L.A. Lakers vs. Boston Celtics game. Although former Celtic and current Laker point guard Rajon Rondo iced the game with a buzzer beater in TNT’s third-highest-rated game of the season, the real action went down at halftime, when 2MM+ viewers were prompted to “Ask Nike” via Google Assistant for the opportunity to buy the Nike Adapt BB ahead of its release date, and, unbeknownst to the public, a chance at a free pair.
The unique and notable variable in this activation was building toward a surprise, climactic moment between brand and user, when limited, time-gated access to a much-hyped sneaker became known. All other elements of the experience were in service of telling the shoe’s story from the athlete’s perspective, providing an engaging multimedia brand experience, and building buzz for the seminal drop moment and the general release of the Nike Adapt BB.
Designing and building around this unique use case involved some things not often seen in voice design in these early days – a large, captive television audience and a leading global brand intent on converting through voice for a much-hyped product release. Because of this, the work required a different approach from many other voice projects we’ve led. And through the process, we were able to demonstrate a novel application of voice assistants that should excite brands and customers alike. As experience design and platform capabilities begin to meet and exceed customer expectations around utility, brand experience, and shopping via voice, the possibilities for future innovation through a conversational interface feel as real as ever.
Let’s break down a few notable aspects of our approach, and what brands and agencies looking to push the envelope in voice can take away from them.
Personalized, Goal-Based Pathing:
Defining a voice experience around the sneaker’s pre-drop, drop, and post-drop moments meant carefully considering the desired relationship between users and Nike on Google Assistant at all times. So we built a notification system for updates and calls to action, and ensured that content was timely and relevant across each stage.
But the most delicate business challenge we had to solve was randomly selecting users at the drop moment. This meant securely handling transactions for a limited number of users in order to deliver free Nike Adapt BBs, all while routing thousands of users a minute down different paths that connected to Nike’s points of purchase: Nike.com and the SNKRS App.
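As a rough illustration of what that routing could look like, here is a minimal sketch with hypothetical names and numbers (not Nike’s actual implementation): each incoming user is sent down one of two paths, with a thread-safe cap on the number of free pairs given away.

```python
import random
import threading

class DropSelector:
    """Hypothetical sketch: randomly grant a capped number of free pairs
    among a stream of concurrent entrants at the drop moment."""

    def __init__(self, total_winners, expected_entrants, seed=None):
        self.remaining = total_winners
        # Per-entrant win probability, assuming roughly uniform arrival.
        self.p_win = total_winners / expected_entrants
        self.lock = threading.Lock()
        self.rng = random.Random(seed)

    def route(self, user_id):
        """Return the path a user is sent down at the drop moment."""
        with self.lock:
            if self.remaining > 0 and self.rng.random() < self.p_win:
                self.remaining -= 1
                return "free-pair"        # surprise giveaway flow
        # Everyone else is handed off to Nike's purchase platforms.
        return "purchase-handoff"         # e.g., Nike.com / SNKRS App
```

The lock matters here: with thousands of requests a minute arriving in parallel, the winner count must be decremented atomically so the giveaway can never exceed its cap.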
So what? Our industry often cites simple voice experiences as the ‘best’ voice experiences, but this work showed that complexity can be managed elegantly, with highly personalized voice experiences taking place simultaneously, at scale.
Multimodal by Design:
One of the most persistent obstacles we have to overcome in evaluating voice experiences with our partners is the perception that Amazon Echo or Google Home speaker devices are ground zero for interaction. While these are primary entry points for many users, there are meaningful assistant-integrated touchpoints that don’t emanate from a smart speaker, most notably the mobile phone. With Nike, we didn’t prioritize one over the other; instead, we built an experience that capitalized on what each modality does best for the user.
On any device, a user could engage with the brand through exclusive content around the Nike Adapt BB from Jayson Tatum and Kyle Kuzma, two ‘Adapt’ athletes and rising stars in the NBA. On a screen device such as a Google Home Hub or the Google Assistant mobile app, users also had interactive, touch-enabled navigation. And when enabling elements like notifications or setting up profile information, there was a seamless transition from the speaker modality to a screen modality. Brands and agencies should always be in a voice-first mindset when designing in the space, but it’s this type of screen-augmented experience that is critical to advancing the discipline of voice experience design. We have to connect digital touchpoints – visual and audio – as a system, and give users all the right tools to make a deep brand connection or fulfill a purchase intent.
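One way to picture that screen-augmented approach: a fulfillment layer can branch on the surface’s reported capabilities before composing a response. The sketch below uses hypothetical capability names and fields, not the actual Actions on Google SDK; it simply shows speakers getting audio-first content while screens also receive video and touch navigation.

```python
# Hypothetical sketch of surface-aware response building: speakers get
# the spoken narrative only; screen surfaces additionally get video and
# tappable suggestion chips.

def build_response(capabilities, story):
    response = {
        # Every surface gets the spoken narrative.
        "speech": story["audio_script"],
    }
    if "SCREEN_OUTPUT" in capabilities:
        # Screen devices (e.g., a Google Home Hub or the Assistant
        # mobile app) also get video and touch-enabled navigation.
        response["video_url"] = story["video_url"]
        response["suggestions"] = ["Hear from Tatum", "Hear from Kuzma"]
    return response
```

The design choice is that content is authored once and adapted per surface, rather than building separate speaker and screen experiences that can drift apart.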
Voice purchasing – still an emerging behavior that has yet to reach scale – introduced an additional multimodal design consideration. The challenges of purchasing through Nike meant we had to decide where voice-only interaction could take the lead, and where to tap into Nike’s primary digital platforms and infrastructure. We leaned heavily on Google Assistant platform capabilities for the surprise giveaway of the Nike Adapt BB, and worked to provide a seamless handoff to Nike.com and the SNKRS App for the purchasing opportunities. “Voice-first” was a user’s first access point, but the platforms leveraged from within the Google Action helped deliver the best overall experience.
So what? It’s key to meet consumers where they are, on whatever devices they prefer. But in enabling features like commerce, it’s equally important to ensure systems talk to one another – across mobile apps, web, and voice – for a seamless experience. When building these sorts of experiences, designers should ask themselves: Where should content take priority over function? Where is the best place to transact or generate a “lead” for the business? And in making these decisions, always keep in mind how users would expect to have their needs met.
Building Infrastructure for Scale:
RAIN is battle-tested when it comes to database integrations, user authentication, and API and SDK integrations, and many digital projects leverage a bit of everything on that front. In the Nike build, the most interesting technology challenge we faced was scenario planning for an influx of users hitting the experience at the same time and potentially overloading the servers (remember, this was a captive television audience during a primetime NBA game). The average person won’t know what concurrency, Lambda functions, database capacity, or expanded subnets have to do with a voice experience. But our engineers and AWS partners constructed a hosting architecture able to support up to 30,000 requests per second (request = potential user) – a crucial safety net that let the media hum at full speed and the seminal ‘drop’ moment happen without fear of technical failure.
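To give a rough sense of what that kind of capacity planning involves, Little’s law relates a sustained request rate to the number of concurrent executions a serverless backend must support. The latency figure below is an assumed example for illustration, not a number from the project.

```python
import math

def required_concurrency(requests_per_second, avg_latency_seconds):
    """Little's law (L = lambda * W): the number of in-flight executions
    equals the arrival rate times the average time each request is held."""
    return math.ceil(requests_per_second * avg_latency_seconds)

# Assumed example: the stated 30,000 req/s peak at a hypothetical
# 200 ms average handler latency implies ~6,000 concurrent executions.
peak_concurrency = required_concurrency(30_000, 0.2)
```

Headroom on top of that baseline – for retries, cold starts, and the burst of arrivals right after the halftime prompt – is what raised concurrency limits and expanded subnets would buy.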
Handling this type of immense traffic in a very concentrated time period was new for our team – the use case was a first of its kind – but we learned on the fly with great support from our partners and are poised to deliver more ‘first-ever’ voice experiences in the future.