The Davids and The Goliaths
This is the first installment in our Owned Virtual Assistant series.
“Things have gotten a little out of control.” That’s a common theme we hear in our dealings with the world’s biggest brands as they navigate the modern tech-enabled business landscape and explore the world of voice technology.
For the last five years, RAIN has had a front row seat to the voice revolution. We’ve seen pipe dreams and Star Trek visions become realities of our daily lives, as the assistants and devices around us get smarter and more conversational by the day.
We’ve worked closely and directly with big tech, but most of our time has been spent helping companies like Starbucks, BlackRock, Nestle, Nike, and MasterCard to anticipate and capitalize on the rise of technology that big tech is pioneering. This has largely involved mainstream voice assistants (“MVAs”) like Amazon Alexa, Google Assistant, Apple’s Siri and Samsung Bixby – each of which is a multi-billion-dollar initiative that forms a long-term, strategic piece of each tech company’s ecosystem. But as voice tech has matured, another broader issue has been bubbling up in the background.
For brands, the scale of tech firms has become a vexing double-edged sword – and some would even say, a necessary evil. To-date, these tensions have been more about commerce, advertising, and competitiveness than they have been about voice and AI. Most brands simply have no choice but to tap into the big tech marketplace in order to target and sell to consumers where they spend their time, which, in the digital realm, tends to be controlled by companies you can count on one hand. Reach comes at a steep, ongoing cost, whether that means paying to get the right eyeballs or taking a hit to margins on any given sale. Moreover, the data that brands cede to technology companies is the 21st-century equivalent of oil, and those who stockpile it will continue to accrue significant advantage. There’s a reason the most valuable companies in the world are technology companies. And why brands are starting to feel like they’re cogs in a much bigger machine that they do not control.
Now back to voice.
Over the last few years, brands have jumped at the emergence of voice as a scaled, big-tech-enabled channel to reach consumers – in the comfort of their homes, in their cars, or even on their person. Amazon, Google, and Samsung have all courted brand investment in 3rd-party app presences on their platforms in the forms of “skills,” “actions,” and “capsules.” Thousands of leading brands have responded by building 3rd-party voice apps (many of which RAIN has conceived, designed and built). The result is a bloated marketplace – there are over 100,000 apps in the Alexa Skills Store – with a staggering variance in quality, from a strategic standpoint (e.g., is this voice app really going to drive a brand’s business objectives?) but also in terms of design and development polish (e.g., what does it do, and how well does it work?). It’s not easy to define and execute well on a great use case for voice. Even more difficult, as many brands have learned, is getting people to find it and to come back to it regularly.
And there’s yet further risk. Brands are beholden to MVAs’ always-evolving technical and design constraints for third parties (e.g., screen-optimized voice experiences), as well as to their first-party product roadmaps, where first-party functionalities (e.g., default music streaming services) could easily usurp 3rd-party apps whenever a platform sees fit.
While these may all seem like strong reasons for brands to second-guess the value of MVAs, brands simply can’t ignore the opportunities to reach customers through these scaled channels and device ecosystems. They must be considered as part of any solid voice strategy.
But the MVA landscape has obscured perhaps a bigger and more exciting prospect for brands: not simply building apps on someone else’s platform, but building their own assistants. This paper is about what happens when brands are put in the driver’s seat, building “Owned Virtual Assistants” (“OVAs”) in their own image, under their control.
What is an “Owned Virtual Assistant”?
In the simplest terms, RAIN defines Owned Virtual Assistants as digital agents that operate under the control of one company or brand, delivered primarily through touch points they control, with a specialized set of functions unique to their brand owner.
Technically speaking, RAIN conceives of OVAs as having a few key attributes. They’re conversationally-modeled, channel-agnostic, and their “brains” (business logic and knowledge domains) can live independently from their modes of expression (e.g., the touchpoints where a consumer will use it). There’s a lot to unpack there.
Conversationally-modeled means voice and/or chat interfaces, with multi-modal (audio + visual) affordances.
Channel-agnostic means having a disposition for scale and a technical approach that is flexible enough to support it.
An independent “brain” means a centralized hub for the code and logic of the assistant, enabling consistency across touchpoints. It also means non-reliance on the native development restrictions of MVA platforms. And while an OVA could extend its tendrils into MVAs and manifest as a 3rd-party app on those platforms (much as it might within a brand’s mobile app), the “brain” that powers those apps is independent.
When you think about these qualities in the context of big tech, you realize that MVAs are actually forms of OVAs. While MVAs may be quite generalist in their functions, complete with 3rd-party marketplaces, they are shaped principally around the specific assets and objectives of the companies who’ve created them, as part of the fabric of their products and services. Consumers want to use their voice to “Google,” to stock up on paper towels from Amazon with nothing more than a command, or manage their phones hands-free with Siri. But if consumers wanted to use voice to easily order lunch, manage booked travel, or access personal retail loyalty programs, they would probably want to talk to the brands directly.
While MVAs allow brands to build robust experiences as third parties, those experiences are not the platforms’ top priority. Data suggests that first-party voice usage outperforms 3rd-party usage by a wide margin, with skills and actions currently lagging behind use cases like search, smart home, music, and timers/reminders. Having MVAs as intermediaries to these brands can be suboptimal for the customer, and for the brand. But we can infer that consumers are ready to have these conversations directly, because they’re already having them actively with big tech MVAs.
To be clear, it’s not that we’re bearish on the future of skills and actions–they’re not going anywhere. Rather, we believe it’s time brands thought beyond the MVA, considering new approaches to assistance that are direct, independent, and more within their control.
The Bottom Line
There are myriad ways to achieve reach without a reliance on MVAs and their default interfaces. Mobile apps. Websites. Call centers / IVRs. Kiosks. Drive-throughs. Cars. Headphones. Custom speakers and other hardware. Rooms & buildings. Software programs. The list goes on.
As brands plan for this current and rapidly accelerating voice-integrated future, they must do so with the clarity that consumers want to engage brands directly, and that the limitations of today cannot persist tomorrow.
With this context and definition in mind, our next installment in the series will explore the value of building an OVA, and what it will mean for brands to reach a new level of control over their conversational footprints.