The Unseen Influence of 3D in the Age of AI

The Hidden Shift Reshaping 3D, AI, and the Future of Digital Commerce

While AI dominates headlines with breakthroughs in automation, search, and personalization, an equally transformative shift is happening behind the scenes. AI’s rapid advancements are fueling a new era for 3D product experiences—making them faster, more interactive, and more essential than ever in commerce, content management, and search.

 

THIS WHITEPAPER EXPLORES:

  • Why 3D is the “quiet beneficiary” of AI’s rapid development

  • How digital twins are shaping the future of immersive commerce

  • The emergence of “Situational Commerce™” and its impact on search and shopping

  • Key steps to future-proof your product data ecosystem for AI and XR-driven experiences



 

 

The Unseen Influence of 3D in the Age of AI

How 3D Will Drive Revenue through Digital Discovery and Situational Commerce

By: Greg O'Keeffe, Justin Scott, and Rory Dennis

On any given day, you’d be hard-pressed to find a professional in any corner of the global economy who is not feeling the impact (real or perceived) of Artificial Intelligence.  Public opinion often lands on either side of the ‘impending doom’ vs ‘giddy optimism’ scale.  The pundits do, however, all seem to align on one key theme: AI will change everything, for everyone, everywhere…sometime.

These points of view, however, are often limited to the direct, and more immediate, impacts:

  • informational content for less specialized markets will be generated in seconds in response to simple inputs and prompts;
  • agents will replace support humans to perform objective, multi-step tasks;
  • voice prompts will redeem themselves and find a practical use in our daily lives;
  • the production of short-form entertainment will be further democratized and delivered by those with the will and the vision— but without the specialized skill;
  • AI will start teaching itself, believing its own hallucinations, amplifying ‘new facts’ into its own echo chamber before distributing them to every channel it now manages after replacing (sorry, ‘augmenting’) the skills of writers, moderators, auditors, and editors;
  • social networks will auto-generate highly personalized ads for on-demand products you may have inadvertently selected, or even created, for yourself through your own algorithmic patterns of behavior;
  • voice-agents will provide a less frustrating tier-1 customer support experience, only to reveal their true role as gatekeepers to the more traditional and frustrating human-led customer support experience;
  • music will become so derivative at the hands of novices that critics from Brooklyn to Austin will reluctantly explore newer, snarkier ways to sharply criticize derivative music;
  • a cohort of robots will carry your bags, conspicuously, through malls, while you walk at an unnaturally slow pace, acting as if you didn’t just leave the store associates tied up in a supply closet (note: linked video is a convincing use of CGI, but the point is made).

Others see a centenarian utopia of 3.5 day workweeks, where we sit back in chairs like this and let a workforce of personal agents do the thankless jobs for us, push oversized “approve” and “reject” buttons, improve our efficiency scores to rise in the ranks, and find time to take up those hobbies we’ve neglected— like Civil War reenactments, Jai-alai, and sourdough starter.

While I, for one, welcome our AI overlords, the dust is settling, and dots are stealthily connecting between the secondary effects of AI:

  • devices are becoming more capable, and
  • browsers are benefitting, allowing native and web apps to further converge in support of more covert and portable devices, opening doors for agencies and entrepreneurs to offer new experiences, which by design,
  • is encouraging a new generation of consumers to demand more from the retail experiences provided by brands and manufacturers.

As a result, a massive opportunity is taking shape to build a more direct and emotional connection with customers.

As these independent innovations converge, overlap, augment, and cannibalize each other, those of us working in support of retailers and manufacturers can expect a few trends to emerge, and soon:

  1. Product experiences will break further away from the brand-owned storefront, and the browser—encouraging brands and agencies to experiment and engage in new ways;
  2. Today’s “Contextual Commerce”, in which products are viewed in modified formats to suit a handful of 2D devices and screens, will give way to “Situational Commerce™”, where customers summon variation-level products on demand to interact with, understand, and connect through highly personalized and interwoven experiences;
  3. AI-led disruption in search and one-click commerce will demand deeper insights into product specifications, attributes, and properties to build narrower and more personalized search returns to natural language prompts;
  4. Product configuration options will diversify, as digital twins will natively allow access to exploded views and component-level interaction with micro-variate product details;
  5. Brands will further adopt digital twins, as 3D capture systems evolve, production costs decrease, platforms democratize expertise, specializations emerge, display methods become more capable, and new devices proliferate;
  6. Product promotions, placements, and brand collaborations will transform gaming and immersive entertainment, opening new recurring revenue channels for brands and platforms;
  7. We’ll see an elevated profile for (and investment in) Master Data Management, Visualization and Strategy, as the availability of deep product data, innovative points of engagement, and untapped customer insights surface new opportunities for brands, salespeople, merchandisers, marketers, designers, and manufacturers to connect with, and understand, their customers.

The unifying principle underpinning all of these trends? 3D.  Or more appropriately, accurate product models, and the comprehensive product data made available by the systems that manage them.

It’s time for brands to revisit 3D as a strategic imperative.

Before we continue on this madcap journey together: the chilling realization of 34 pages on 3D, AI, and the future of commerce may have you asking “What’s in it for me?”, “What will I learn that I don’t already know?”, or “Why, by the beard of Odin, would he do this?”  My hope is that you, dear reader, will come away from this paper with new perspectives and insights into how and why:

  • …3D models can fill the gap in descriptive product data, enable new capability within natural language search, and reward prepared retailers with favorable search placement
  • …Digital Twins will regain relevance, this time leveraging the power of more capable devices to render products more convincingly, and through more engaging experiences
  • …converging advancements are laying the groundwork for new experiences to capitalize on rich product data and break free from the browser
  • …B2B retailers may already be primed to provide product insights, configuration, and compatibility information to New Search™, and why certain retailers (like apparel and grocery) may not see the same benefit
  • …Extended Reality experiences will push 3D and Digital Twins into a first-tier priority for (many) brands
  • …gaming platforms are forging partnerships to create opportunity for brands that can provide both 3D models and product data
  • …New Search™ will reward specificity with specificity, and force commerce platforms to rethink their role as data providers first, and experience providers second
  • …the Smart Glasses arms race will usher in the age of Situational Commerce™, and what that means for product engagement
  • …natural language search will depend on, and come to expect, access to deeper product data and insights
  • …3D services can play nicely with other platforms in your existing architecture
  • …to know whether this is the right time, and whether you have the right business, to invest in 3D

And given the rate of change, much of what you will read herein may be old news by the time you’ve finished reading it.  For reference, my first draft began in September 2024, before OpenAI’s Operators, DeepSeek’s AI disruption, and Perplexity’s “Buy with Pro” feature.  So while I’ve worked to keep pace with every revision, that was then, and this is now…but the themes, facts and trends remain, and have only reinforced my position since starting my research.

Back to where we left off, with 3D…

If we look beyond the models (we’ll get to those), it’s the 3D / rendering services themselves that are well positioned to become the new provider of record for deep, and undeniably useful, product data.  Data that AI-powered search, Extended Reality experiences, gaming platforms, and retail applications alike will demand— especially as the next set of innovative platforms and services look for faster paths to demonstrate their own emerging capability.

3D has the unique advantage of a long legacy of practical use in product design, development, and testing; it already plays a pivotal role in B2B sales, and it has spent a decade or more refining and improving web-based B2C experiences— finding pockets of relevance within the confines of a browser.

And these early efforts in web-based 3D have helped to pave the way for our multi-dimensional retail future.  The tools, workflows, applications, device support, file formats, and— importantly— the data are all in place, mature and predictable.  But retail’s awkward adoption of 3D in B2C commerce is the stuff of legend, and a significant chapter in that story is the industry’s on-again/off-again relationship with digital twins.

 


PART ONE: THE MODELS, 3D and XR

Brands adopt twins

During the formative years of web-based 3D, the industry commonly referred to product models as “digital twins”— applied, at least, to those models rendered to a convincing level of accuracy when compared to the real thing.  The term is usually attributed to NASA engineer John Vickers in 2010, who may or may not have been inspired by a Michael Grieves presentation on the “digital twin” concept at a Society of Manufacturing Engineers conference in 2002, who in turn may have first read about the concept in the book “Mirror Worlds” by David Gelernter.  The facts all check out, though the timelines have been debated.  Fortunately, none of this ever comes up in mixed company.

More recently, these same industry leaders have expanded the definition to include processes and machines.  And now, as 2024-25’s loudest voice in the room, players in the AI ecosystem have co-opted “digital twin” to apply to a digital re-creation of a person through data (which, I’ll concede, is a more literal use of the term), rather than a digital re-creation of a physical object.  As this article’s author, I reserve the right to narrow the focus of “digital twin”, and return it back to its rightful owner— that is, product catalogs and associated media.  You’re welcome, America!  We did this together.

Early experiments with Digital Twins were seen as a looming threat to traditional, in-studio product photography.  And if it was a threat then, it certainly is now.  In those early days, the realism of rendered products paled in comparison to an actual photograph, and as we saw elsewhere during that era, a lackluster approximation of reality presents a divisive challenge to emotional connection.  For years, brands and customers struggled to find common ground with 3D, and when combined with the limitations of browser-based real-time rendering, digital twins were relegated to ‘nice-to-have’ status, and fell away from budgeting priorities.  3D and AR supporters turned to native mobile applications for salvation, due to their improved performance, speedy rendering and direct access to device and OS capability.  But the ROI was tough to prove, and brand-specific native apps fell out of favor as yet another expendable endpoint to manage.  Progressive Web Apps emerged, with access to native capability, but 3D remained stuck in that ‘uncanny valley’— and practical use of 3D in B2C retail waned.

Still, these more utilitarian 3D renderings found their place in B2B and highly configurable consumer products.  While B2C brands, particularly those in apparel and home goods, waited for rendering quality and speed to improve, manufacturers, resellers, OEM and aftermarket retailers embraced and adopted the technology, seeing returns not only on the initial investment, but in customer confidence, loyalty and overall order value.

Practical 3D not only survived, but thrived outside of B2C commerce— iterating, evolving, and patiently waiting for a second chance in mass market retail. 

Thanks to more powerful processors, more capable browsers, and the emergence of standards, the gap between hyper-realism and functional utility has begun to close.

Form ≠ function

For the past few years, these digital twins have covertly replaced product and lifestyle photos— silhouettes, on-model, or in-situ— as a pre-rendered image, video, or 360° spin (stitched together from, for example, 36 pre-rendered images).  When presented mise-en-scène (or, ‘dropped into a room with other stuff in it’), most of us would never know the difference, or question the realism of the scene presented.  And those who would are probably exhausting to talk to.

While impressive, photo-realistic assets like these are generated by professionals with years of expertise, powerful platforms, expensive equipment, and the trained eye to produce convincing stand-ins for the real thing.  These can be pricey and time-consuming, and I’ve witnessed first-hand the moment a CFO and/or CDO challenged the benefit of rendering a static asset with no more value to search, catalog data, or product filters than a JPEG.  These assets lack real interaction or differentiation for the end consumer, even if they are convincing as a stand-in for product photos.  What you have here, friends…is a photograph.

Videos and animations offering “exploded views” of products can mimic simple interactions, but fall short as a replacement for how one might handle, configure or customize a product in the real-world.  These are effective as eye candy— and they do help explain configurability, or convey the mechanical aspects of a product, or remind me of where I went wrong when building a bookshelf.  But…little more than fixed, linear, product or feature demos.

Still, the case is made for hyper-realistic digital twins, especially for evergreen products, or those that start life as a CAD file straight from the product designer.  The long term cost benefit of digital twins vs studio photography is well documented.  Agencies have mastered efficient workflows, and reduced costs with ‘content factory’ models— and the ability to drop a product into any setting, shown from any angle or perspective, with complete control over every conceivable environmental factor has bestowed them with god-like powers.  But the end result still brings little more to the product experience than print.

Customers want more.  Your products deserve better.  And the models themselves are just the start.

Back in 2020, a Harris Poll (conducted on behalf of Threekit) found that “60 percent of 1,869 U.S. adults who ever shop online are more likely to buy a product if it is shown in 3D or AR.” 1  Putting aside the strangely selected cross-section of 1,800+ users who “ever” shop online, it showed a trend, quickly tested by COVID, and fast-tracked by a patchwork of technical responses to the sudden shuttering of millions of retail locations: customers have not shaken their innate need to interact with (most) products before committing to a purchase.  And the returns data further proves that customers often regret those purchases when more useful interaction and personal connection with a product is lacking.

Since then, customer sentiment for 3D and VR has only improved, as more recent stats further underscore the desire for a better product experience:

  • 94% higher conversion rate for products leveraging 3D 2
  • 95% of online shoppers prefer 3D to video 3
  • 44% more likely to add an item to their cart after interacting with it in 3D 4
  • 27% more likely to place an order after interacting with a product in 3D 4
  • 65% more likely to purchase a product after interacting with it in AR 4
  • 80% of customers will return to a brand after configuring a product in 3D 5
  • 40% decrease in returns after implementing 3D visualization 6
  • 23% of shoppers have returned a product due to inaccurate product visuals 7

Most of these stats were compiled from surveys taken between 2020 and 2024— during simpler times, when 3D allowed us to spin products in a browser, and Augmented Reality turned our phones into magical portals so we could try on shoes, or see what a car would look like in our living room.

These experiences have proven to be helpful, but strangely disconnected from how we like to shop in person— and still confined to a browser or device, requiring us (often) to scan a QR code, and move to a suitable location to find just the right space, angle and lighting to set the product in context.  To use them, we need to go out of our way.  Friction is part of the experience.

Meta, Snap, Samsung, Nvidia, and Google are giving clear signals that this is about to change.

That time when VR and 3D was supposed to happen…

Flashback sequence: 1983.  I was gifted a Vectrex 3D Imager — an early 3D gaming headset which, depending on your social status, either made me the kid whose house you wanted to visit after school, or the kid you ‘forgot to invite’ to your birthday party at the roller rink.  The movie Tron was just released, which blended the popularity of gaming, computer programming, science fiction and early 3D rendering into one nerdy cocktail.  Jaws 3D was a disappointment after Blade Runner left the theaters— but we saw it anyway.  Apple launched the Lisa— a precursor to the Mac.  Microsoft Word was just released.  The future was full of promise.  And to those of us who ran the projectors at school, that future would be experienced in virtual reality.

That future never came.  We got this instead.  It would be a long while before VR & 3D was exciting again— and we all endured some serious setbacks along the way.

For many of us, that moment came in 2014 when Google re-introduced and democratized VR with their Google Cardboard experiment— brilliantly dismissing any cost + nerd apprehensions by re-framing AR/VR as a free, foldable, family-focused toy.  Although a bit of a stealth release, VR and AR were instantly approachable, and accessible to everyone.  And, perhaps the most genius part of it: Google made it ok to share the experience with your kids— making it familiar to today’s core market, Gen Z.

This is how some of the best innovations take hold: a new technology is introduced and proves some value, often in the smallest of markets.  It then quietly persists, evolves, and influences where it can (see: 3D models in B2B commerce above).  Those with the patience, deep pockets, foresight and/or youthful ambition watch the market trends, customer sentiment and related innovations for the moment when preparation meets opportunity, then re-package and re-introduce the concept after the market has had time to digest and accept the idea.

Right now we’re sitting in the middle of a decade of re-introduction, re-positioning, specialization, shrugging acceptance and gradual adoption for these new experiences, and the devices that make them possible.  And to be clear, “shrugging acceptance” is where you want to be— familiar, unobtrusive, convenient, valuable…but ultimately ignorable.  And the innovators are doggedly chasing the holy grail of ignorability: Apple launched the Vision Pro, and is again reportedly working on a more practical wearable;  Meta announced Orion, a follow-up to their Ray-Ban Meta smart glasses — and maybe another headset codenamed “Puffin”;  Snap relaunched Spectacles; HTC, the Vive Focus Vision;  LG has partnered with Meta on their own MR headset, while Microsoft and Meta work together on an Xbox headset;  Google partnered with Samsung on Android XR, while teasing the low-profile Project Astra;  OpenAI may be working with Jony Ive on their own smart glasses option.  Nvidia has a patent of its own in the works.  Add to this a number of smaller players bringing their takes to market, plus the DIYers.  You can expect brand partnerships with everyone from Hermès to Minecraft to follow.

“Everyone and their mother…”, as the saying goes, is making smart glasses.  It’s clear that the next killer app is on your face.

Augmented Reality begat Mixed Reality, which begat Extended Reality, and now Spatial Computing.  And if the next interface is an extension of you, the next point of integration is through an AI personal assistant.

Already, the big players are looking to define how consumers might use the face-helmets (immersive entertainment and professional applications) vs the spy glasses (practical phone replacement with XR overlays).  Face-helmets are for home, and those professions where it’s helpful/required/socially acceptable.  Spy glasses are for everywhere.  The pricey face-helmets have struggled to catch on, with both Apple and Meta announcing a pause (or full-on halt) of production— while at the same time giving every indication that they are focusing teams, time, and money on smaller form factor glasses.

The kids have already found a way to freak us all out with Meta’s smart glasses.  Surgeons have been using AR for training and guidance with complicated surgeries.  Military training has evolved to keep pace with VR since the days of flight simulators.  We even have VR for mice, for some reason.  And we’ve convinced ourselves that projecting things directly onto our eyeballs is a fine idea.  Clearly, these devices and solutions have earned solid footing in markets with much higher stakes than retail.

And the units are selling.  AR Insider estimates that Meta sold 613,927 units across Quest and Ray-Ban smart glasses in Q2 of 2024 8.  Not too bad, considering the fact that I don’t know anyone who owns either.  Selling well, and still so much room for growth.

It seems that each month brings a new story, advancement, acquisition, or public commitment to these categories of devices.  In November of 2024, Meta reaffirmed their commitment to Ray Ban’s parent company as the smart glasses design/manufacturer of choice, and within two months that same parent company acquired a spatial audio company for its AI-based noise-reduction and speech enhancement tech (which has clear benefits to accessibility, but also a clear nod to personal agent interaction).  Even if all of this is a PR spike in the current hype cycle, power-partnerships and investments like these will result in adoption, and with any luck, the “shrugging acceptance” they’re all chasing.

So, what will drive adoption for the rest of us?

After watching demos offered by Google and Meta, I am fully onboard with the practical value of everyday smart glasses as an eventual replacement for both my iPhone and my Apple Watch.

You know innovations are at the precipice of mass adoption when the World Economic Forum comes out swinging— highlighting superstar Cathy Hackl's predictions regarding Spatial Computing “...expanding computing into everything you can see, touch, know and feel” making “billions of users” “active participants” in technology through the use of smart glasses and 6G connectivity 9.  This is a 3-5 year roadmap.  Until then, we need to allow for a few missteps and ‘teachable moments’ as these devices navigate their own adolescence.

As a more cautious adopter, I will wait until smart glasses exit their awkward phase (“Oh hello………..” [Google glasses scanning for 12 seconds as I quietly stare into the eyes of a possible stranger] “........Janice, from work”).  But when they are normalized (i.e. less creepy), I will abide, free to enjoy the lack of discretion that comes with a relationship between a man and his eyeglasses.

And commerce, as it does, will find a way in— as long as it can contextualize the shopping experience to these new devices and variable user surroundings. The 30-year-old search/browse › scroll › select › checkout methods just won’t stand (see: eCommerce on AppleTV).  There’s little reward in wearing headgear to shop as we always have…only now on unnecessarily wide screens (one for the UX devs: add 32:9 to your ‘responsive’ breakpoints).

Customers will decide how and when Extended Reality commerce happens.  Those of us whose livelihoods depend on their decision can take a few small steps to put the work in now:

  1. Ensure clean, standardized and comprehensive product data is always available to any trusted service that needs it;
  2. Build integrations and data flows to leverage the strengths of each system in your architecture;
  3. Experiment with engaging experiences to rethink what it means for a customer to discover, understand, and interact with your catalog— and build a deeper connection with your brand.

The first two are foundational.  The resulting experiences will be transformational.  But without the foundations in place, these experiences will be costly to maintain, challenging to adapt, and impossible to justify to your CFO.
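
To make the first step concrete, here is a minimal sketch, in TypeScript with an entirely hypothetical schema, of what “clean, standardized and comprehensive” can mean in practice: subjective copy and objective, unit-bearing attributes live in clearly separated, machine-readable fields that any downstream service (search, XR, agents) can consume without guesswork.

  // A minimal sketch of a normalized product record. The schema and field
  // names are hypothetical; the point is that objective facts are typed,
  // unit-bearing data, kept separate from marketing copy.

  type AttributeValue = string | number | boolean;

  interface ProductAttribute {
    name: string;                   // e.g. "inseam", "alloy", "lumens"
    value: AttributeValue;
    unit?: string;                  // e.g. "cm", "kg"; omitted for unitless facts
    source: "manufacturer" | "retailer" | "derived";
  }

  interface ProductRecord {
    sku: string;
    name: string;
    narrative?: string;             // subjective storytelling lives here, clearly labeled
    attributes: ProductAttribute[]; // objective, machine-readable facts
    compatibleWith?: string[];      // SKUs or model identifiers
    modelUrl?: string;              // e.g. a glTF asset for 3D/XR surfaces
  }

  const grill: ProductRecord = {
    sku: "GRILL-RED-22",
    name: "22-inch kettle grill",
    narrative: "Bring people together, one perfect sear at a time.",
    attributes: [
      { name: "bodyMaterial", value: "porcelain-enameled steel", source: "manufacturer" },
      { name: "weight", value: 14.9, unit: "kg", source: "manufacturer" },
      { name: "rustResistant", value: true, source: "manufacturer" },
    ],
    modelUrl: "https://example.com/models/grill-red-22.glb",
  };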

 


PART TWO: THE EXPERIENCE

80% of ecommerce consumers say the online buying experience is as important as, if not more important than, the product purchased 10.

McKinsey’s Business of Fashion white paper “The State of Fashion 2025” 11, places a few smart bets on the future of product discovery and customer experience:

  • “Smart e-commerce players are focusing on new paths for product discovery. Shoppers who were once dazzled by the seemingly endless selection available at many online retailers now bemoan the difficulty of finding what they want”
  • “Leaders who move quickly to identify the bright spots, whether they are geographic, demographic or technological, will be primed for success, but only if they’re able to evolve. The old playbook is now obsolete; the industry will need a new formula for differentiation and growth.”
  • “...a new era of brand and product discovery is on the horizon, underpinned by AI-powered curation across content and search.”

The McKinsey team adds:

  • “50 percent of fashion executives see product discovery as the key use case for generative AI in 2025.”
  • “82 percent of customers want AI to assist in reducing the time they spend researching what to buy.”
  • “74% of customers report walking away from online purchases due to the volume of choice”
  • “80% of customers say dissatisfaction with online search is a barrier to purchase”

…and goes on to reference executives expecting to:

  • “Build AI foundations, identifying relevant tech partners and infrastructure for AI deployment whilst ensuring product content is optimised for AI search.”

83% of Gen Zers say they view online shopping as an experience rather than a transaction 12.

So if we have all finally tired of doom-scrolling through influencers’ thinly veiled product endorsements, vaguely googling for some product we ‘saw on that show once’, dubiously scanning manufactured reviews, and ignoring ‘you may also like’ recommendations on brand sites…what’s the alternative?

Some brands, manufacturers, marketplaces and multi-category retailers (department stores) will offer bots as the answer— to ‘make your search more personalized and engaging’.  These bots will offer personal shopper-style assistance, in the early stages at least, based on trends, what others are buying, color matches, what brand stylists recommend, etc. But these are little more than interactive recommendations— still either cohort-based, generated from a list of what the merchandiser would prefer that you buy, reasoned from access to limited product data, or scripted and entirely subjective.

And these interactions are a stilted facsimile of a more deliberate exchange with a friend, a store associate, or a trusted account manager.  We’ve been trained to constrain our queries to meet the limitations of the technology.  And as a result, we’re conditioned to search, then browse, then refine until we find the closest match…then hope it’s in stock, compatible, the right color, pattern, or material, etc.

New Search™ is taking a much broader view of information relevance, and since it can contextualize what it finds, it will favor those data sources that allow it to produce more accurate results to natural language queries.

New Search™ ‘understands’ that a natural and productive discourse considers both subjective content and objective, universally verifiable facts.  And since we’re talking about products and not politics, it’s safe to say that facts are facts— and citing universal truths will always produce better outcomes for the buyer.  I think we can all agree that answers to certain questions are more helpful when grounded in fact.  For example: “Is this safe for my baby?”, “Is this material flammable?”, or “Do I take this orally?”

Similarly, when placing an order for 100,000 of something (as is often the case with B2B commerce), or confirming whether an accessory is compatible with another item (like an expensive lens for a camera body), or when the total weight of a final product built from distinct components is a critical consideration to manufacturing plans and shipping costs, you expect objective, fact-based and yes/no responses.

In B2B, as with functional 3D models, objective, undisputed and incontestable facts are expected.  Flowery storytelling gets in the way of a sale when you need to know whether a water pump is the right model for a fleet of tractors, or a rotor is the correct diameter for a jet engine. In B2C retail, particularly apparel and cosmetics, storytelling was a natural extension of advertising, and since eCommerce merchandisers fell under the purview of the CMO or CDO, the requirement to ‘build consumer confidence with more data and accurate specifications’ was deprioritized during sprint planning sessions.

It can sometimes be difficult to differentiate between subjective and objective.  Is ‘slim fit’ a subjective measurement?  When is color objective vs subjective?  Is a trend ever manipulated or interpreted, or is it always a point of fact?  When is an accessory a guaranteed product pairing vs a styling recommendation?

The differences are often subtle, but easy to compare when you think in terms of marketing copy vs product data:

Subjective
Storytelling, recommendations, cohorts & shared characteristics, merchandising, sets & bundles

Search query examples:
  • “What’s the most popular fit for…”
  • “I’m going to a party at a friend’s beach house, can you recommend…”
  • “What’s a good wine to pair with…”
  • “Do people like the sound…”
  • “I’d like something with a contemporary feel…”
  • “Will this match with…”

Content, data & feature examples:
  • Trending
  • “Works well with”
  • “You May Also Like”
  • Fit guides
  • Seasonal copy
  • Brand or product narrative

Objective
Product attributes & properties, configuration guides & compatibility rules, schematics, manufacturer specs

Search query examples:
  • “Is this compatible with my Trek EXe T-Type road bike?”
  • “What shoe brands have a good selection of wide sizes?”
  • “How much does it weigh?”
  • “Is this compatible with Sonos?”
  • “What’s it made of?”
  • “Can you show me at least 10 portrait lenses that are compatible with my Nikon Z8 Mirrorless camera?”

Content, data & feature examples:
  • Durability
  • Color/size
  • “Compatible with”
  • Accessories and add-ons
  • Configuration & customization
  • Measures of diameter, thickness, inseam, weight, lumens, ohms, etc.
  • Country of origin
  • Availability
  • Material, fabric
  • Coating & treatment processes

 

Why do we struggle with precise responses to natural language queries today?  Because the objective data…just…isn’t…there.

At least not in a standard or consumable format.  And ‘yesterday’s search’ (there, I said it) doesn’t really understand how to consume, contextualize, re-organize, and properly present disparate data sources to answer your question simply, or in any way that mimics how a real person might respond.  Agents and AI chat are another story: more capable than common search tools, but still too limited in reasoning and ‘experience’ to replace a productive human interaction.
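
For what it’s worth, a standard, consumable format does exist for a slice of this problem: schema.org structured data, which crawlers (old search and new) already read.  A hedged sketch, with an invented product, of objective attributes expressed as JSON-LD:

  // schema.org Product markup, expressed here as a TypeScript object literal.
  // "weight" and "additionalProperty" carry objective, unit-coded facts
  // (unitCode values are UN/CEFACT codes: KGM = kilogram).
  const productJsonLd = {
    "@context": "https://schema.org",
    "@type": "Product",
    name: "22-inch kettle grill",
    sku: "GRILL-RED-22",
    material: "porcelain-enameled steel",
    weight: { "@type": "QuantitativeValue", value: 14.9, unitCode: "KGM" },
    additionalProperty: [
      { "@type": "PropertyValue", name: "lidMaterial", value: "steel" },
      { "@type": "PropertyValue", name: "rustResistant", value: true },
    ],
  };

  // Serialized into a page's <script type="application/ld+json"> tag, this is
  // exactly the kind of machine-readable fact a natural language query like
  // "will it rust?" can be grounded in.
  console.log(JSON.stringify(productJsonLd, null, 2));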

Yesterday’s search falls short

When shopping in person, I think I can speak for most of us when I say we expect an uncomplicated transaction.  If we know what we want, we expect the thing we asked for, a close approximation of the thing, or a small subset of things that are very much like the thing.  And if I’m unsure what I want, I’d like to ask questions of an expert, receive some guidance, and then I’d like to be left alone to browse, free from distraction.

And when comparison shopping, I need as much relevant information as possible, in a format that allows me to understand the benefits of each on equal footing.  Knowing that product A is green, and product B is portable doesn’t help me.  Being told that product A has “sleek aesthetics” and product B is “sustainable” doesn’t get me any closer to a confident purchase.

The reason comparison shopping works online, especially within the confines of a branded storefront, is that comparable data is available in a consistent and reliable format.  The platforms force content owners, merchandisers and system integrators into patterns of data normalization and repeatable workflows.  And these patterns make it possible to compare all known attributes of product A against product B.

The broader internet follows no such patterns, and each platform and/or service that stores, manages, and facilitates the distribution and presentation of this data is its own special snowflake.  Sure, formats like JSON and XML might standardize the output, but the rules governing how source data is organized are up to either the platform or the person at the controls.

The paradox of choice, amplified by the ‘dark art’ of SEO, and Google’s over-indexing of brand storytelling, has made product discovery harder.

 

New Search™ will (soon) be different

New Search™ will make product discovery easier.  But it will be a step-change.  And it will compel retailers to re-think product data integrity and availability.

Perplexity and others have figured out how to return fewer and more valuable results to natural language prompts.  And New Search™ has contextualized its findings into a macro-PDP of well organized, contextual information, with links to brand sites, and various ‘buy now’ options.

And brands, at first, may not see this as a positive development.

Query/prompt:
“I'm looking for a top rated USB-C hub for my iMac, in stock near me only please”
[Screenshots: the same query run side by side in Google and in Perplexity]

Perplexity seems to give preferential treatment to those with the most complete and available product data (A/B test recommendation for brands: try removing lazy-load UI chicanery from some listing grid pages…does it help your standing with New Search™?), and/or those with partnerships (i.e. direct ‘Buy with Pro’ option with Walmart, or use ShopPay for some retailers).

Perplexity also gives meaning to my query, and responds ‘thoughtfully’.  I feel understood. Seen. Google, by contrast, is like that person at a party who just blathers at you until you say “...I’m going to get a drink, do you want anything?” while walking away.

Commerce platforms, too, should take note— for much the same reason.  This is not an existential threat to B2C platforms, but it may cause a crisis of identity— they may need to reset their priorities as feeder systems and data providers first, storefront hosts second (which is already happening…see composable commerce).  If Perplexity’s “Buy with Pro” button is any indication, we won’t need to go to brand.com, and we may come to see it as a Google+Amazon hybrid, where we search for and buy things with very little friction, while paying no mind to whoever sold those things to us.

TechCrunch agrees, highlighting very literally that Perplexity Pro “...offers shopping recommendations within Perplexity’s search results as well as the ability to place an order without going to a retailer’s website.” 13


Branded storefronts won’t disappear anytime soon, and of course brands will still record the sale, ship the order, accept the return, etc when orders are placed with Perplexity’s Buy with Pro.  But as a shopping destination, branded storefronts may find themselves relegated to a place you only visit around the holidays— like your great-aunt’s house, or church.

Smart Glasses are primed for conversational voice search.

If we’re onboard with future-glasses and/or personal assistants being a thing in the next 5 years, all search services need to sort this out.  These tools will encourage us to speak and act more naturally— it’s the next frontier for data collection.  Implicit profile information is already harvested (sorry, ‘collected’) 24/7.  These companies have learned about as much as they can learn, given the devices, services and interactions available to date.  But they’ve also learned that it’s tough to extract explicit profile information from searches like “sweater, red, men’s…large”.

Apple has already begun to retrain us all on how to speak like human beings, and Perplexity and ChatGPT have made human ‹ › machine interactions more conversational.  In doing so, New Search™ expects us to share personal preferences and feedback more freely— just as we would with a friend (“nope, too shiny…more options in a matte finish”; “that looks like it will break easily, can you show me more durable options?”; “this is for my 10 year old nephew— do kids his age like this stuff?”).  AI will listen, retain, refine, learn and improve, allowing us to spend less time with product discovery, and more time with product engagement.  But they need to match our ramblings to a deeper understanding of products in the wild.  And to do that, they need more data.

Google Lens now includes a feature which allows you to add spoken context to visual searches, which will soon allow a simple “what is that?” search to be as easy as looking at something and talking to yourself.  This will certainly change the game, but may still disappoint those looking for more detail— unless next-level product data is available.

Without that next-level product data, I’d imagine the conversation with a virtual personal assistant (who for the purposes of this fictitious script, and not at all a reflection of my personal life, I’ve named “Helpy”) going something like this:

INT. DEPARTMENT STORE - DAY
A man, GREG, stands in the outdoor aisle, staring intently at a Weber grill through his Smart Glasses.

GREG
(Speaking to his glasses)
Helpy, I like this, what is it?
HELPY
From what I can tell, it’s an outdoor grill.
GREG
Does it come in any other colors?
HELPY
It looks like it only comes in red.
GREG
Hrm…well, what’s it made of?
HELPY
……it’s made of red
GREG
But I mean, what kind of metal?  Will it rust?
HELPY
Searching online, I found these results: as a color, red does not rust any faster than other colors.
GREG
I understand that red isn’t more prone to rust, what I’m asking is…
HELPY
…uh-huh…
GREG
 …whether or not this is appropriate for the weather in my area.
HELPY
…checking the weather in your area…
GREG
Helpy, stop.
HELPY
I sense that you’re getting emotional.  Playing songs from your “Big Feelings” playlist…
GREG
Stop Helpy…please, just… just stop.

END SCENE

So how do we make these personal assistants more useful?  How do we give them an advantage when they try to bridge the inter-dimensional gap for us?  What questions would we ask a store associate about a product, and what service holds the answers?  How will these devices ‘see’ the deeper properties of a product in front of them?

How do we spend less time ‘discovering’— or at least, how can we make the act of product discovery much more informative and engaging?

Smarter people have made the case for Retrieval Augmented Generation and natural language search— though just skimming the surface on how the lack of deep product insights will limit “New Search™” (have I made that a thing yet?).  Still, those same smart people validate that relevant data availability and orchestration isn’t easy.

Some good news here for B2B, manufacturers, and any retailers already using digital twins and 3D models in sales and customer-facing experiences: congrats, you may have ‘trojan-horsed’ your way to AI search relevance.

 


PART THREE: ASSETS, DATA & ARCHITECTURE

A 3D object ≠ your product

This is the part where we need to recognize the distinction between “a product” and your product.

“Reverse 3D reconstruction”, or “inverse rendering”, is a recent approach that has earned a place in the conversation, due in large part to advancements in the native capability of mobile devices (e.g. LiDAR, multi-lens cameras, accurate gyroscopes) and the applications with access to those capabilities.

Just as New Search™ needs reliable data, these Extended Reality / Spatial Computing experiences will need 3D assets to take hold in the market. Meta— recognizing the scarcity of usable 3D product models available online— has gone so far as to seed the market with 3D assets in advance of their Orion smart glasses preview, through Project Aria.  They’ve used Covision Media to create the assets and have partnered closely with Shopify for access to its customer base and commerce capability.  Meta is further forecasting their physical ‹› virtual world focus by separately releasing a dataset for vision training and physical intelligence…but my ‘fully supportive views’ (should cyborg-Greg read this someday) of our robot overlords is another paper entirely.

While these 3D reconstruction tools are capable of creating convincing reproductions of real-world products (“accurate to a sub-millimeter level” 14, according to the guy doing the voiceover for Meta), they are still inaccurate, since they are not generated from, or in any way influenced by, specs provided by the retailer, distributor, or manufacturer.

Still, this creates a host of exciting possibilities for most B2C sellers, where the ‘product you see is the product you get’ (can we claim the acronym PYSIPYG???).  Anyone can now scan a thing and create a decent 3D model with their phone using photogrammetry.  Where it begins to fall short for both B2C and B2B, however, is with reliable product specifications, compatibility, and any level of configurability.  It falls especially short for B2B, where configurability and compatibility are critical to the product sales cycle (i.e. demonstration, configuration options, validation of compatibility, and purchase order generation).

CAD and 3D have also joined the fray of other text-to-image generators, now offering tools that allow novices to generate working CAD files from text prompts “based on your own proprietary data sets.” 15  When framed in a practical business context, at a minimum this will allow those without the expertise (clients and customer-facing leaders) to communicate ideas more easily to those who do (product designers).  More broadly, it will allow the rest of us to visualize, validate and refine ideas, create assets for virtual worlds and Extended Reality experiences, and experiment with new experiences and interfaces.  It’s just one more tool helping creators create.

Soon enough there will be 3D assets and objects galore— the internet will be lousy with them.  But they won’t be your assets, objects or products.  A boat made for your more successful avatar’s private island is not this boat.  The virtual bike your kid buys with your actual money (sorry, ‘Robux’) is not this bike.  And while companies are once again toying with virtual mall experiences, the result (while potentially lucrative given the vulnerability of the audience) is not much more than friction-full product placements…just with decent 3D renderings.  The next phase should allow for something resembling real product interaction.  But to do that, browsers, devices, services and supporting systems need to understand more than just color and size.

The soul of a product rests in the smallest details™.  Color, size and material may impact price books, tax rules, shipping costs, and inventory availability, but these three attributes alone do not make a rendering (that’s right…C-average English major).  To a high-end fashion brand, every stitch, drape, and cut through a pattern matters.  To a jewelry or watch maker, the material, brilliance, refraction, heft, and movement in space (physics) convey value, distinction, and a commitment to quality.  To an after-market parts manufacturer, attributes like threading, alloy, and compatibility rules are the difference between a confident buyer and satisfied customer, or returns of thousands of units and customer churn.  To a manufacturer of custom motorcycles, incompatible parts could lead to a cancelled order, chargebacks, eroded brand trust, and a frustrated high-value buyer.  These important details are available as data— they’re just buried.  Often in 3D software or the models themselves.
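
Buried, but not unrecoverable.  glTF, the most common web-friendly 3D format, permits application-specific metadata in “extras” fields on most objects, which means a model can carry its own facts alongside its geometry.  A minimal sketch (the field names are my own invention) of a pipeline reading those facts back out:

  // glTF 2.0 allows application-specific data in "extras" on most objects.
  // Here we read hypothetical product facts that an authoring pipeline might
  // have embedded next to the geometry.

  import { readFileSync } from "node:fs";

  interface GltfDoc {
    asset: { version: string; extras?: Record<string, unknown> };
    nodes?: { name?: string; extras?: Record<string, unknown> }[];
  }

  const doc = JSON.parse(readFileSync("bracelet.gltf", "utf8")) as GltfDoc;

  // e.g. extras: { sku: "BR-104", alloy: "925 sterling", threadPitchMm: 0.8 }
  const productFacts = doc.asset.extras ?? {};
  const componentFacts = (doc.nodes ?? []).flatMap((n) =>
    n.extras ? [{ component: n.name ?? "unnamed", ...n.extras }] : []
  );

  console.log(productFacts, componentFacts);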

So much data may become available that decisions will need to be made regarding how much data to share.  At what point is a manufacturer exposing too much about a product, and will brands begin to strategically add a set of restrictive guidelines within their data enrichment workflows?  For some, the line of demarcation may be easy to determine— don’t share schematics, for example.  But designers and merchandisers may need to experiment with exposing enough data to meet customers’ evolving expectations, while stopping short of sharing anything that might overexpose what they feel are the proprietary aspects of a product.

When product zoom became a standard online, a small contingent of high-end fashion brands had our team limit image quality and zoom levels— not for storefront performance, but out of an abundance of concern regarding overseas counterfeiters.  The theory was that the stitch on a handbag or the detailed pattern on a wrap dress would provide the missing pieces of a puzzle that these black market manufacturers had sought for years to help them create more convincing knockoffs.  For us, this was another moment of enlightenment that consultants can tuck away for reference on future projects— some retailers are locked in a constant arms race with shadow markets.

For some manufacturers the exclusive ownership of product data may provide another competitive advantage— as a means of verifying the authenticity of their products.

True Configuration ≠ “logos on hats”

A number of years ago I had the opportunity to run a digital transformation project for a mid-market, millennial-focused jewelry and lifestyle brand.  Great people, global team, talented developers— we were concepting, documenting, re-architecting, and delivering alongside a complete re-brand of the business.  To exhume and extend the cliché: we were designing the plane, manufacturing the parts, and assembling it while flying— with every seat filled by the Board of Directors.  The project was full of risk, but well organized, and the collective team was at the top of their game.

And the challenge that surprised us all turned out to be a 3-word line item in the early requirements called “the bracelet builder”.

What we dismissed as ‘a modified product detail page with a few tweaks’ turned into a complex matrix of compatibility restrictions, configuration options, license agreements, physical properties, and pricing rules involving members of product design, marketing, finance, creative, legal, partner management, systems, integrators and UX devs, and brand voice.

We spent weeks documenting and translating rules, restrictions, and often contradictory guidelines into flowcharts and MadLibs-style requirements, and things would often break down after “[this] goes with [that] unless [this] is the [base material] or [that] [charm] is from [this [collab/brand] partner] and [charm numbers] are [greater than/less than] [number] and [bracelet style] is [one of these options] unless catalog includes [these SKUs]......”

So we did what grown-ups do in the face of adversity, and licensed a purpose-built tool to help us manage the complex configurations.  The fact that the service we selected also handled the presentation was a bonus.  At that time, the renderings were convincing enough, and we were able to apply ‘physics’ representative of material, reflectivity, refraction, etc.  But the 18,264,230,364 possible permutations of bracelets and charms, combined with hyper-realistic renderings, proved to be too taxing on 2018 browsers, belying the seemingly simple set of options presented to the customer:

  • 4 bracelet styles
  • 3 clasp styles
  • 3 metal styles
  • A maximum of 9 charms per custom bracelet design (selected from 93 charm options)

So… you know, math.  And this had not yet considered brand license agreements, where brand collaboration charm A could not be seen alongside brand collaboration charm B.  For that, we’d need to rely on the UI itself.
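
For the curious: the shape of a compatibility engine like the one we licensed isn’t mysterious, even if the rule volume is.  A minimal sketch, loosely modeled on the bracelet builder constraints described above (all names, types and rules are hypothetical):

  // Declarative compatibility rules: each rule inspects a design and returns
  // null (pass) or a human-readable violation. Real systems run hundreds of
  // these, sourced from legal, merchandising, and product design.

  interface Charm { id: string; partner?: string; collection?: string }
  interface Design { base: "leather" | "silver" | "gold" | "rope"; charms: Charm[] }

  type Rule = (d: Design) => string | null;

  const rules: Rule[] = [
    (d) => (d.charms.length > 9 ? "Maximum of 9 charms per bracelet" : null),
    // License rule: charms from two different collab partners can't co-exist.
    (d) => {
      const partners = new Set(d.charms.map((c) => c.partner).filter(Boolean));
      return partners.size > 1 ? "Collab charms from different partners can't be combined" : null;
    },
    // Material rule: a hypothetical collection requires a metal base.
    (d) =>
      d.charms.some((c) => c.collection === "heirloom") && d.base === "rope"
        ? "Heirloom charms require a silver or gold base"
        : null,
  ];

  function validate(design: Design): string[] {
    return rules.map((r) => r(design)).filter((v): v is string => v !== null);
  }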

The lesson?

If a well-regarded service does the thing you need, you can validate that it works as advertised, you don’t see a clear path through the minefield of risk, and your team is needed elsewhere…find the money, and license a platform.

Sure, it’s more fun to build new things.  But more rewarding to deliver a successful year-long program.  And it’s always nice to not get fired.

Everything is configurable

When it comes to configurable products, complexity is often hidden in plain sight.  And there’s a broad range in B2C alone, running well into billions of possible configurations.

Product and possible configurations:

  • Faherty waffle beanie (one size; 2 color options): 2
  • DWR storage trolley (three configurations, each with 7 color options): 21
  • Office chair (multiple online options, mid-level configurability): 2,048
  • Bracelet Builder, decommissioned (multi-faceted charm bracelet configurations): 18,264,230,364
  • Hoyt bow builder (11 models, each with 14 configuration options): 25,721,744,400
  • Nike By You: Air Force 1 Mid (10 core customizations, each with 1 or 2-tier options): 61,288,980,480
  • Refrigeration system shelving unit, B2B (modular shelving components): 236,000,100,127
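
The arithmetic behind these counts is simple, so long as the option groups are independent: multiply the choice counts, then let compatibility rules carve the total down.  A short sketch with invented option groups (2,048, for what it’s worth, is exactly eleven independent two-way choices):

  // Back-of-envelope configuration counting. Real builders then subtract
  // whatever the compatibility and license rules forbid, which is where
  // astronomical totals start to come back down to earth.
  function countConfigurations(optionCounts: number[]): number {
    return optionCounts.reduce((total, n) => total * n, 1);
  }

  // A mid-level office chair: eleven independent two-way choices.
  console.log(countConfigurations(Array(11).fill(2))); // 2048

  // A hypothetical sneaker: ten customization zones, each with 8-12 options.
  console.log(countConfigurations([8, 12, 10, 9, 8, 12, 10, 11, 9, 10]));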

There’s even an index dedicated to tracking the breadth and depth of retail configurators online, which provides examples of (at the time of writing) 1,472 (primarily B2C) configurators across 438 brands.  Most, however, seem to focus on simple, embroidery-level ‘your design here’ customization (which I would not call configuration)— adorable, but nowhere near the complexity of configuration and compatibility rules that come with highly customizable products in the B2B space.

When should brands consider a 3rd party service for real time rendering of customizable and/or configurable products?  At what point do your requirements justify the integration based on complexity alone?

Brands should consider a dedicated 3D rendering service when:

  • …objective data exceeds the usual color/size/material product variation options found in most catalogs, and is important to explain, demonstrate or highlight key features
  • …visual confirmation of final design is required
  • …product is created by the customer, and not already available as designed
  • …license agreements impose limitations on configuration
  • …configurations extend beyond brand-standard
  • …product can be convincingly represented in 3D
  • …interaction with the product is important to feature demonstration
  • …the level of product configurability complicates pricing
  • …compatibility rules are restrictive (‘this goes with that, but can’t go with those’)
  • …photography costs are untenable due to complexity of configuration options
  • …configured product would benefit from in-situ, on model, or lifestyle placements
  • …there is a benefit to Augmented/Extended Reality presentation (e.g. see it in your garage)
  • …considering partnerships in gaming, Extended Reality
  • …precise measurements and fit are important to the sale
  • …returns are not accepted due to configuration or customization
  • …confidence is a psychological barrier to a high volume sale

And while 3D is not for everyone, if any of the bullets above are relatable, it would be worth exploring the cost/benefit.  When you compare to other transformational programs, you’ll find greater long term benefit, at a fraction of the cost you may be expecting.  And the cost of doing nothing may be much greater.

Deeper interactions = meaningful insights

When we limit the creative application of 3D models to ‘spin sets’ on brand sites, it reduces the impact to a parlor trick, and misses a much bigger opportunity to engage with a segment of customers who have already expressed intent to buy, or at least understand, your product. Exploded views, responsive interfaces, haptic feedback, and configurable components may soon provide retailers with access to a deeper level of engagement data than previously available.  When customers are given the opportunity to interact with a convincingly-rendered product in a 1:1 or selectively shared space, new and more meaningful insights will emerge.  Website engagement metrics will give way to product engagement metrics, with deeply traceable interactions with products in a virtual or Extended Reality space.

This opens a new frontier of product engagement, and a better understanding of how customers interact, attempt or expect to interact, with products in various settings and contexts.  For those retailers or manufacturers who own product R&D, this could provide a mainline of unvarnished insights back to product development teams.  Marketing teams may surface and capitalize on the unexpected ways customers use, configure, or customize their products.  Creative teams could use these same insights to re-position a struggling product with a new purpose.  Those who own the P&L will feel more confident investing in, re-factoring, or decommissioning, product lines.  Continuous improvement to physical products may become the norm.
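
What might those product engagement metrics look like in practice?  One plausible shape (the event names and fields here are my own invention, not any platform’s API) is an event stream emitted by the 3D/XR viewer itself:

  // A sketch of product engagement events from a 3D/XR viewer. Hypothetical.
  interface ProductEngagementEvent {
    sku: string;
    session: string;
    surface: "web-viewer" | "headset" | "smart-glasses";
    action: "rotate" | "zoom" | "explode" | "configure" | "place-in-room";
    target?: string;          // component or part interacted with, if any
    durationMs: number;
    timestamp: string;        // ISO 8601
  }

  const event: ProductEngagementEvent = {
    sku: "BR-104",
    session: "a1b2c3",
    surface: "headset",
    action: "explode",
    target: "clasp-assembly",
    durationMs: 5400,
    timestamp: new Date().toISOString(),
  };

  // Aggregated, events like these answer product-level questions that web
  // analytics can't: which component customers inspect first, where
  // configuration stalls, which parts get 'handled' longest before add-to-cart.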

Data, data everywhere— just not the right kind.

In retail, success is often a platform multiplier, and growth, like it or not, imposes change:

  • Moving into new product categories, locales, and customer profiles means that your product data belongs in a PIM;
  • As your marketing and merchandising teams create content for more channels, in more languages (and their variants), and with more immediacy to keep pace with trends, your content graduates to a dedicated CMS;
  • Complex customer segmentation, insights and targeting are limited by native commerce platform capability, which pushes further investment in CDP, CRM, and related marketing platforms;
  • A growing list of screen sizes, devices, contexts, experiences, outlets, file formats and standards have given Digital Asset Management a permanent space in your solution architecture.

Thanks to composable architectures, brands now recognize the value of discrete systems of record within their retail architectures.  Products belong in a PIM.  A CMS will provide a more intuitive and flexible experience for your content.  Customer records, for reasons of security, segmentation, marketing, and insights, are better served by systems outside of commerce.  It’s a natural progression that makes perfect sense.  It also created siloed data sources within specialized platforms.

And while inflation hangovers, residual uncertainty, budget cuts, and the frustration of swivel chair workflows will bring some consolidation in the platform ecosystem, the days of the heavily-customized, over-extended, “does-it-all” commerce platform are likely numbered.  The pendulum may swing back to the monolith, but specialization will continue, and front end experiences are likely to further decouple as more contexts emerge.  The sanity of developers, merchandisers and creatives the world over depends on it.

Even so, pent-up demand, limited change in leadership, and the sudden relevance of AI and agents threatening entire categories of experts have been holding brands back from taking big swings with their tech investments.  “If it ain’t broke, don’t risk losing your job,” as they say.

The changes imposed by AI and XR (now allow for “AI/XR”) will be different.  This is not on par with swapping an OMS, moving to a PWA, or allowing customers to pay for toothpaste in installments.  Every brand and service provider will need to re-think how to engage with customers when products can be conjured, interacted with, and purchased by natural language prompts, within an Extended Reality experience, with little attention paid to brand, verbose product descriptions, or the site the product came from.

Branded storefronts just might go the way of print catalogs.

McKinsey again agrees that retailers would be wise to “Establish a technology backbone (including tech stack and infrastructure) that provides flexibility to adopt and scale search and discovery use cases” and “Ensure product data is optimised for AI search, identifying relevant product features and attributes, for both organic search and content-led discovery.” 11

To translate the consultant-speak: ‘Start getting your house in order.  Invest in the boring stuff now, so that the new stuff will actually work when your boss and your customers ask for it.’

Same as it ever was.

If your data isn’t ready for New Search™— or the New Experiences that will be powered by New Search™— you’re in good company.  Most retailers have only recently begun to give it thought.

Mr. Salesforce himself has been vocal about data (self-servingly, of course; see: Data Cloud, MuleSoft, and so on), but he timed his thoughts well with this paper, writing in a LinkedIn post that “The future's fortune is in our data. Data is the new gold.” 16  Appreciate you, Marc.

As a delivery partner responsible for the integration, operational readiness, launch, and ongoing support of retail architectures for our customers, my colleagues and I learned to identify recognizable patterns of scope creep, risk identification/mitigation, and early wins.  One common pattern of risk was an artificial separation of traditional retail and eCommerce operations: one team wanted to focus only on operational processes leading up to the ERP; another would focus on customer-facing digital experiences served by eCommerce, Marketing, CRM, Payments, Personalization, etc; and often a third group would own OMS, WMS, POS, Clienteling, etc.  These self-imposed silos had a lasting impact on retail architectures.

For retailers, an ERP has historically acted as (among many other things) the company-wide holding pen for product data, and given their legacy, these systems rarely export this data in a format readily consumable by third-party platforms.  Naming conventions were holdovers from earlier times, and lacked the patterns that the current collection of retail platforms and system integrators have come to expect.  Product data not only needed to be ‘massaged’ and transformed to conform to new standards, but oftentimes entirely re-structured, making it difficult to match analytics and sales data back to the point of origin, let alone manage on an ongoing basis.  Teams became complacent in their acceptance that the answer to the ‘square peg and round hole’ dilemma was a saw and hammer.  Just make it work for now, and do it again later.
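To illustrate, and only to illustrate (every field name below is invented), the saw-and-hammer transform usually looks something like this:

```typescript
// Illustrative only: the shape of the one-off transforms teams write when a
// legacy ERP export doesn't match what a modern PIM expects.
const erpToPimAttribute: Record<string, string> = {
  ITM_DESC_40: 'name',   // 40-character truncated description
  CLR_CD: 'color',       // internal color code, not a display value
  SZ_CD: 'size',
  MATL_GRP: 'materialGroup',
};

function mapErpRow(row: Record<string, string>): Record<string, string> {
  const product: Record<string, string> = {};
  for (const [erpField, pimAttribute] of Object.entries(erpToPimAttribute)) {
    if (row[erpField] !== undefined) product[pimAttribute] = row[erpField];
  }
  // Anything not in the map is silently dropped: "make it work for now."
  return product;
}
```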

In retail architectures, product data flows to (or is called by) various systems on a ‘need to know’ basis.  If a product detail page only requires one or two variation attributes (size and perhaps color) to function, just pass those through with the catalog and dump the rest into a description or specifications paragraph.  And as long as those few attributes allow the customer to select › add to cart › purchase, and the order matches what the warehouse has in inventory, why spend the time to make more data available?

Apparel merchandisers have been trained to gather material codes, fabric thickness, stitching, detailed measurements, inseam, physical properties, designer details, care instructions, country of origin, finish, malleability, and warranty info into paragraphs or bulleted lists within description or specification text areas— perhaps reproducing some of these as search filters.
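The contrast is easy to sketch.  Below is the same hypothetical product twice: once with everything beyond size and color flattened into prose, and once with the buried attributes promoted to individually queryable fields (names are illustrative, not a standard):

```typescript
// The status quo: everything beyond size and color flattened into prose.
const flattened = {
  sku: 'PULLOVER-3741',
  size: 'M',
  color: 'Dark Gray',
  description:
    '3/4 zip pullover. Fleece-lined. 60% merino / 40% polyester, 320gsm. ' +
    'Machine wash cold. Made in Portugal. 2-year warranty.',
};

// The same product, with the buried attributes made structured and queryable:
const structured = {
  sku: 'PULLOVER-3741',
  size: 'M',
  color: 'Dark Gray',
  closure: '3/4 zip',
  lining: 'fleece',
  materials: [
    { fiber: 'merino', pct: 60 },
    { fiber: 'polyester', pct: 40 },
  ],
  fabricWeightGsm: 320,
  care: ['machine wash cold'],
  countryOfOrigin: 'PT',
  warrantyYears: 2,
};
```

A human can read both; only one of them can be filtered, compared, or reliably reasoned over by an agent.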

Often the additive workflows happen downstream: product data is enriched only within the platforms that use or display it.  These changes, edits, and corrections rarely make their way back upstream to the sources of truth, and therefore either never reach the platforms in the middle (OMS, POS, etc.), or are replicated by hand…further diverging from both the data source and the customer-facing endpoint.  It’s easy to see how, and where, these messes are created.

But the value of this deeper and more comprehensive data is already proven.  And while New Search™ does well with contextualizing what it finds in the pockets of brand sites and far corners of the internet, structured and comprehensive data will be favored by AI as more reliable for training, more convincing for customers, and more successful for sales.
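There are already recognized ways to expose that structure.  As one example, product data can be emitted as JSON-LD alongside the page; the schema.org vocabulary below is real and widely crawled today, while the values are invented for illustration:

```typescript
// schema.org Product markup, emitted as JSON-LD. The vocabulary is real;
// the values are illustrative.
const productJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: '3/4 Zip Fleece-Lined Pullover',
  sku: 'PULLOVER-3741',
  color: 'Dark Gray',
  material: 'Merino wool / polyester blend',
  offers: {
    '@type': 'Offer',
    price: '128.00',
    priceCurrency: 'USD',
    availability: 'https://schema.org/InStock',
  },
};

// Embedded in the page head, where any crawler or shopping agent can read it:
const tag = `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;
```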


Those spending time with OpenAI’s Operator have speculated that personalized agents will soon be able to complete tasks by creating APIs where none exist.  This brings up all kinds of questions for me regarding access, paid services, and security, but putting all of that aside for now: these dynamic and ephemeral APIs won’t find data where data doesn’t exist.

So where will all of the deep product data, 3D assets, and complex configuration and compatibility rules come from?

From 3D services, either directly, or indirectly through other platforms already available in your architecture.

Which of these platforms are best suited to own the files, granular data, compatibility rules, configuration options, and rendering experiences?  And where does 3D belong among the other boxes and lines of your architecture diagram?  There are plenty of possible options, and you can start simply by introducing 3D services in between existing platforms, data flows and service calls.

fig.a: a sample (i.e. “a likely option”) product data/asset flow, without 3D service


fig.b: a sample (i.e. “a likely option”) product data/asset flow, with 3D service

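To make fig.b concrete, here is a drastically simplified sketch of a thin layer that joins PIM data with assets and rules from a 3D service before anything reaches the front end.  The endpoints and shapes below are assumptions, not a real API:

```typescript
// Hypothetical PIM and 3D service shapes, joined by a thin orchestration layer.
interface PimProduct {
  sku: string;
  name: string;
  attributes: Record<string, string>;
}

interface ThreeDAsset {
  sku: string;
  gltfUrl: string;
  configurableParts: string[];
}

async function getRenderableProduct(sku: string) {
  const [pimRes, assetRes] = await Promise.all([
    fetch(`https://pim.example.com/products/${sku}`), // hypothetical PIM endpoint
    fetch(`https://3d.example.com/assets/${sku}`),    // hypothetical 3D service
  ]);
  const product = (await pimRes.json()) as PimProduct;
  const asset = (await assetRes.json()) as ThreeDAsset;

  // The 3D service becomes one more system of record in the existing flow:
  return { ...product, model: asset.gltfUrl, configurableParts: asset.configurableParts };
}
```

Nothing about the existing PIM or commerce flows has to change; the 3D service simply becomes one more system of record to join against.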

Which brings us to a notable point of convergence:

AI and XR are colluding to push for faster, more capable, and more portable devices; while New Search™ is forging close partnerships with retailers and platform providers to offer more confident, direct and frictionless transactional opportunities; at a time when the platform ecosystem has become more fractured, orchestrated and decentralized.

And this presents a unique opportunity to break from convention, experiment with less risk, define new categories, and build new service offerings in support of it all.

 


THE NEXT PART: THE FUTURE

So what happens when more data is made available following standards and predictable patterns, AI search bypasses branded storefronts to seamlessly sell products in context, digital wallets securely own more of our personal data, smart glasses become socially acceptable, and modern platform protocols enable interwoven transactional opportunities that can contextualize to our environment?

Commerce will be Situational

Anyone working in, around, or in support of retail knows that friction is the enemy of a sale.  And anyone working in technology knows that friction is the enemy of adoption (caveat: unless there is enough money to be made on the other side of that friction).

A key promise of these new devices is that they will help us move past the friction that has, ironically, been introduced into our daily lives by technology.  More stuff to manage, more things to charge, more moments to capture, more alerts to remind us, more services to connect to, more subscriptions to pay for, more things to find, more websites and apps to navigate, and more steps to complete a purchase.

Already, these early smart-glasses demos have shown that our new personal assistants will recommend, validate, translate, recall, define, guide, and connect us.

…but what’s preventing them from shopping for us?  One limiting factor is a lack of appropriate data.

Perplexity has recently proven that there’s a better way to search with the intent to purchase online, especially when you know what you’re looking for.  And if you play with various prompt techniques, you find that Perplexity Pro, like a real human person, rewards specificity with specificity.  The more you know about the product you’re looking for (even if you’re just describing what you might remember about it), the fewer, and more relevant, the results.  Perplexity’s responses to a decent product query instantly make Google feel outdated.

Perplexity knows that customers don’t need more choice.  They need less friction.

Perplexity is searching through more data sources, faster, “understanding” (quotes intentional) the context of the data it’s finding, weighing its relevance to my query, and presenting a functional and focused layout.

And for the moment, at least for Pro, no ads.  Subscription fees mean the users are no longer “the product”.  We can go back to just being paying customers.

Perplexity’s search results did, however, present an unusual mix of commerce sites, none of which I would have expected.  When searching for a specific brand product, using attributes and properties from the product description and color/size options, the brand site itself was omitted from the results.  When I asked Perplexity why my search for a “dark gray [brand] ¾ zip fleece-lined pullover” did not include results from [brand].com, this was its response:

“I apologize for the oversight in not searching the [brand].com site directly. Based on the search results provided, it appears that the [brand].com website content is not easily searchable or scannable for specific products like the zip-up pullovers with fleece interiors.

In such cases, it's often more effective to search for the products on authorized retailer websites or marketplaces that may have more easily accessible and indexed product information. This is likely why the previous search results included items from other retailers carrying  [brand] products.”

The coming ubiquity of AI search will put pressure on platform providers, retailers, and their developers, merchandisers, and content managers to comply.  As with Google, if retailers want priority placement in search results, they’ll need to ensure content is available and complete, and that it adheres to whatever standards AI expects.

And as with SEO, experts will emerge who will master the dark art of AI search— perhaps finding a balance between giving AI what it wants and gaming it to work around the rules.

Consider the structure of a natural-language query or prompt alone, and you can see why brand/color/size is not enough.  AI will want to understand compatibility (or, in terms of fashion, ‘pairs well with’), configuration, comparisons, active promotions, product materials, customer reviews, availability: all of the objective and subjective information it can scan, collate, and present.
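For illustration, here is a hypothetical sketch of the facets an agent might extract from my pullover prompt above, and the fields it will want answered that brand/color/size alone can’t cover:

```typescript
// Hypothetical: structured facets extracted from a natural-language shopping
// prompt. Names are invented for illustration.
interface ExtractedQuery {
  category: string;
  color?: string;
  features?: string[];
  pairsWellWith?: string[]; // the fashion flavor of "compatibility"
  maxPrice?: { amount: number; currency: string };
}

const query: ExtractedQuery = {
  category: 'pullover',
  color: 'dark gray',
  features: ['3/4 zip', 'fleece-lined'],
};
// Answering this well also requires reviews, promotions, materials, and
// live availability -- data most catalogs never expose in structured form.
```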

In order for digital commerce to appear in places where it is not, we need at least a subset of the following:

  • Reliable APIs for catalog, inventory, shipping information and payment processing
  • Commerce-enabled experiences that can contextualize to the device
  • Payment options stored securely on personal devices
  • Deep product data readily available, and following recognized standards
  • Ubiquity of 6G technology
  • Processors and personal devices that can handle embedded AI search capability
  • Accessible 3D models, digital twins, CAD and/or glTF-like files (see the sketch after this list)
  • Dependable voice recognition and speech processing
  • Displays of acceptable quality, packaged in a more portable form factor
  • Omnipresent security (an idea I fully subscribe to, but one I also put in there so the security experts don’t come at me)
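On the 3D asset item above: “accessible” can start as simply as publishing the model in an open format at a stable URL, where any experience (or agent) can fetch it.  A minimal sketch with three.js, assuming a hypothetical asset URL:

```typescript
// Loading a digital twin published as glTF/GLB at a stable URL.
// The asset URL below is hypothetical.
import { Scene } from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new Scene();

new GLTFLoader().load(
  'https://assets.example.com/products/CHAIR-1042.glb',
  (gltf) => scene.add(gltf.scene),   // add the loaded model to the scene
  undefined,                         // progress callback (unused here)
  (err) => console.error('Model failed to load', err),
);
```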

The good news is that all of these things are available, emerging, and/or under development now.  They’re just rarely well stitched together under one roof, and in most cases not available to the front end (and therefore to search engines) until selections are made.

And it will take some work for retailers to get there.

 


THE LAST PART: THE OPPORTUNITY

If this missive hasn’t already made it clear: I don’t claim to be an expert in AI, LLMs, RAG, prompt engineering, etc and so on.  But I do have some bona fides.  At best, I’d say I have ‘hands-on experience’ with CAD, web-based 3D renderings, and a ‘working knowledge’ of manufacturing processes.  I am neither a futurist, nor whatever this guy was.  I’m not the smartest in the room, and if I am, I hope the developers are just running late.  But I have spent a long career in commerce: advising customers up to the C-Suite, investors, and boards, partnering with retailers of all types and sizes, and with contributors at every level, to identify and navigate opportunity, market changes, architectural challenges, functional shortcomings, platform limitations, and data inconsistencies.

And I’ve never seen a convergent opportunity like this.

Or, at least, I’ve never seen such a rapid confluence of events pushing forward on parallel paths, where the ripple effects (spatial computing, personal assistants, Situational Commerce™, 3D) may be more significant than the wave (AI) itself.

But I am also skeptical of AI as a panacea, and I won’t be caught wearing one of these in public.  I am nothing if not infuriatingly pragmatic.  I am hating neither the player nor the game.  But I do think that we’re all rushing toward AI as ‘the solution’, when we need to see it as more of a catalyst.  If we think about the practical application and downstream impacts of AI, we’ll all start to see opportunities, incentives, and even the risks, more clearly.

I’m also not predicting that 3D is the only answer.  And I don’t think we’ll get all Minority Report when ordering a pizza.  But for those organizations where 3D models apply, 3D deserves a closer look, and a re-positioning as a key source of data, and certainly of product assets.  AI Search, Extended Reality Experiences, and Situational Commerce™ will come looking for it, and those who make it available, consumable, and standardized will find themselves in a very favorable position.  So start now.

It’s entirely valid to imagine a future where Intelligence is commoditized to the point of utility— like water or electricity.  AI will be an equalizer, but remain available in tiers, with premium services available to those with the means to bypass throttling and other imposed limits.  But commerce and retail experiences— which have been mired in a sea of sameness and friction— won’t get a pass.  Time to add new line items to your CapEx and OpEx plans— not just to be opportunistic about AI, but to remain relevant.

We’ve had 30 years, and the way we experience products online has changed very little— in large part because the devices and screens we use to experience these products have remained very much the same.  Faster, sharper, and wider, sure.  Or perhaps more foldy or bendy.  But these boxes we stare at for entertainment and personal gain have locked us in the same-old paradigm of: product › select › add to cart.

We’re witnessing a rare and well-timed convergence of events, compatible technologies, and creative solutions— something much broader than the immediacy of AI itself:

  • Power-hungry AI capabilities pushing a new arms race in chip development, forcing upgrades on those interested in using said capabilities (I’m looking at you, Apple)
  • Explosion of poorly-organized user-created content and assets— between the democratization of powerful tools, a lower barrier to entry for novices, generative AI, and embedded agents performing tasks at an inconceivable pace
  • Composable architecture re-balancing dependency toward specialized platforms, real-time transformative integrations, and as a result, elevating the importance of data completeness, availability and integrity
  • Internal teams at once shrinking and becoming more specialized, and possibly generalizing yet again as agentic AI takes hold
  • Real investment in Extended Reality and Spatial Computing by major players, prepping hardware, software and supporting services for mass adoption
  • Interest in immersive worlds and experiences has grown beyond experimentation— agencies are taking note, and building repeatable service offerings in response
  • Collaborations between retail players and gaming platforms like Roblox, Fortnite, and Zepeto, creating new outlets for real products in virtual worlds (as a dad, my feelings are for another post)
  • Perplexity, OpenAI and others are offering alternative shopping channels using natural-language search prompts to present new ways of shopping by traversing available data sources and disparate experiences— in some cases bypassing platforms altogether
  • Spatial 3D Collaborative Design applications predicted to “integrate seamlessly” into existing 3D workflows, building a virtual collaboration environment for 3D design
  • Virtual worlds will closely mimic physical worlds, and vice-versa— each training the other on the unique environments, physics, interactions, limitations and realities we’ll inhabit
  • Gen Z is re-affirming their preference to engage with products prior to purchase, at least above a certain cost point or for products where interaction helps to define the value
  • Manufacturers and B2B are extending the use of 3D and Extended Reality into field service applications, which opens a world of possibility for training, onsite repairs, DIY, Quality Assurance, Audit trails, and (say it with me, with the “™” for legal reasons…), Situational Commerce™
  • Teams at Google are advancing AI models that can generate 3D worlds in real time, which could set the stage for ‘choose your own adventure’ product interactions as well
  • Assistants, Agents and Operators are already being used by us common folk, though currently oversold as a force multiplier for everyone.

Commerce will always find a way in.  But unless the foundational work is done, these new shopping experiences will be expensive to build, ephemeral, and likely ‘rigged’ (as they often are) as a thin marketing layer built for a specific experience or campaign.  If we really do feel AI is causing a seismic shift, then we’d better assess whether we have the right foundations to thrive— let alone survive— the quake, aftershocks, and whatever other thing I could write here to extend this metaphor.

Seamless commerce experiences will be embedded in Extended Reality overlays, contextualized for smart glasses, available to our avatars in gameplay, voice enabled through wearables, vehicles, appliances, and personal assistants, and applied in support of field service applications.  Assistants and New Search™ will remain at your beck and call to find that thing you saw for a better price, personalize or configure it according to your stored preferences or current whims, and then return it on your behalf (when you didn’t like how it was personalized or configured).  And digital twins, 3D services— and the data made available by and through these services— will be a key enabler to make it all happen.

We will one day look back on our time hovered over tiny screens as we clumsily tapped away on tiny swatches and ‘add to cart’ buttons as callow and antediluvian (because in the future we will freely use these words), and we will laugh, as we proudly reclaim our place of honor as “shoppers”, effortlessly navigating between physical and digital, virtually interacting with products across retailers, categories, and experiences, and in all likelihood still paying $12 per month to some parasitic ad-blocker service to hide all of the popups.

And when that day comes, I will amble into a local retailer, just as my father did— and his father before him— and resoundingly demand of no one in particular:

“Find me a warm winter jacket from one of the brands I like, maybe in a dark green or gray…wool, but not too itchy— and only give me options with a zipper.  And to show that I mean business, if you don’t have it here, I’ll drive 10-15 miles right now if you find it in stock at another store.”

…except I will be talking to my smart glasses— and also not holding a glass of scotch.

 




Citations

1 "Survey: 60% of Online Shoppers Say They’re More Likely to Buy a Product If It’s Shown in 3D or Augmented Reality." GlobeNewswire, 1 Sept. 2020, www.globenewswire.com/news-release/2020/09/01/2087088/0/en/Survey-60-of-Online-Shoppers-Say-They-re-More-Likely-to-Buy-a-Product-If-It-s-Shown-in-3D-or-Augmented-Reality.      Accessed 14 Sept. 2024.

2 "Shopify Merchants Can Now Launch 3D and AR Visualizations for Their Products." Charged Retail, 31 Jan. 2022, www.chargedretail.co.uk/2022/01/31/shopify-merchants-are-now-able-to-launch-3d-and-ar-visualisation-for-their-products/. Accessed 06 Oct. 2024.

3 "3D Configurators Usage Statistics." Professional 3D Services, www.professional3dservices.com/blog/3d-configurators-usage-statistics.html. Accessed 11 Nov. 2024.

4 "AR Shopping: How Augmented Reality Is Transforming Ecommerce." Shopify, www.shopify.com/uk/blog/ar-shopping. Accessed 10 Jan. 2025.

5 "Epsilon Research Indicates 80% of Consumers Are More Likely to Make a Purchase When Brands Offer Personalized Experiences." Epsilon, www.epsilon.com/us/about-us/pressroom/new-epsilon-research-indicates-80-of-consumers-are-more-likely-to-make-a-purchase-when-brands-offer-personalized-experiences. Accessed 10 Jan. 2025.

6 "Guide to ROI in 3D and Augmented Reality." Threekit, www.threekit.com/guide-to-roi-in-3d-and-augmented-reality. Accessed 12 Oct. 2024.

7 "Increasing Ecommerce Product Return Rate Statistics." Invesp, www.invespcro.com/blog/ecommerce-product-return-rate-statistics/. Accessed 12 Oct. 2024.

8 Levy, Ari. "How Many Headsets Did Meta Sell in Q2?" AR Insider, 5 Aug. 2024, www.arinsider.co/2024/08/05/how-many-headsets-did-meta-sell-in-q2/. Accessed 06 Dec. 2024.

9 World Economic Forum. “Computing Technologies and Their Impact on Society.” LinkedIn, 21 Mar. 2024, www.linkedin.com/posts/world-economic-forum_computing-technologies-society-activity-7277262204301631488-cYWE/?utm_source=share&utm_medium=member_ios. Accessed 02 Jan. 2025.

10 "Dopple 3D Configurator." Dopple, www.dopple.io/3d-configurator. Accessed 12 Oct. 2024.

11 Amed, Imran, et al. "The State of Fashion 2025: Challenges at Every Turn." The Business of Fashion, 11 Nov. 2024, www.businessoffashion.com/reports/news-analysis/the-state-of-fashion-2025-bof-mckinsey-report/. Accessed 12 Nov. 2024.

12 "Gen Z Quickly Cancels Brands and Moves On: No Patience for Online Errors." Chain Store Age, 15 Nov. 2024, chainstoreage.com/gen-z-quickly-cancels-brands-and-moves-no-patience-online-errors. Accessed 02 Dec. 2024.

13 Lunden, Ingrid. "Perplexity Introduces a Shopping Feature for Pro Users." TechCrunch, 18 Nov. 2024, www.techcrunch.com/2024/11/18/perplexity-introduces-a-shopping-feature-for-pro-users/. Accessed 09 Dec. 2024.

14 "Digital Twin Catalog: 3D Reconstruction." Meta AI, 27 Jan. 2024, ai.meta.com/blog/digital-twin-catalog-3d-reconstruction-shopify-reality-labs-research/. Accessed 22 Jan. 2025.

15 "Text to CAD." Zoo, www.zoo.dev/text-to-cad. Accessed 10 Dec. 2024.

16 Benioff, Marc. “Salesforce Named a Leader in Latest IDC MarketScape.” LinkedIn, 28 Mar. 2024, www.linkedin.com/posts/marcbenioff_salesforce-named-a-leader-in-latest-idc-marketscape-activity-7290045433148489729-InJZ/. Accessed 24 Jan. 2025.