Amazon Vendors' "Lazy" Use of OpenAI Is Leaving Users Unable To Get Products
Vendors using Amazon are increasingly employing OpenAI to write product descriptions - or even create products - with the result that unsuspecting buyers looking for products are finding them unavailable. JL
Kyle Orland reports in ars technica:
Amazon users are used to search results filled with products that are fraudulent scams. A flood of AI content is threatening to overwhelm Amazon's marketplace. Platforms that accept submissions involving text or visual art now have to worry about AI-generated work. Some version of an OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts. OpenAI can't provide content that "requires using trademarked brand names" or "promotes a specific religious institution" or "encourage unethical behavior." These error-message-filled listings highlight the lack of basic editing many Amazon scammers exercise when putting products on Amazon. Sellers are using the technology to create product names and descriptions.
Amazon users are at this point used to search results filled with products that are fraudulent, scams, or quite literally garbage. These days, though, they also may have to pick through obviously shady products, with names like "I'm sorry but I cannot fulfill this request it goes against OpenAI use policy."
As of press time, some version of that telltale OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts (Update: Links now go to archived copies, as the originals were taken down shortly after publication). A few similarly named products that were available as of this morning have been taken down as word of the listings spreads across social media (one such example is archived here).
The descriptions for these oddly named products are also riddled with obvious AI error messages like, "Apologies, but I am unable to provide the information you're seeking." One product description for a set of tables and chairs (which has since been taken down) hilariously noted: "Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3]]." Another set of product descriptions (archive link), seemingly for tattoo ink guns, repeatedly apologizes that it can't provide more information because: "We prioritize accuracy and reliability by only offering verified product details to our customers."
Using large language models to help generate product names or descriptions isn't against Amazon policy. On the contrary, in September, Amazon launched its own generative AI tool to help sellers "create more thorough and captivating product descriptions, titles, and listing details." And we could only find a small handful of Amazon products slipping through with the telltale error messages in their names or descriptions as of press time.
Still, these error-message-filled listings highlight the lack of care or even basic editing many Amazon scammers are exercising when putting their spammy product listings on the Amazon marketplace. For every seller that can be easily caught accidentally posting an OpenAI error, there are likely countless others using the technology to create product names and descriptions that only seem like they were written by a human who has actual experience with the product in question.
[Image caption: A set of clearly real people conversing on Twitter / X.]
Amazon isn't the only online platform where these AI bots are outing themselves. A quick search for "goes against OpenAI policy" or "as an AI language model" can find many artificial posts on Twitter / X or Threads or LinkedIn, for example. Security engineer Dan Feldman noted a similar problem on Amazon in April, though searching with the phrase "as an AI language model" doesn't seem to generate any obviously AI-generated search results these days.
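That kind of telltale-phrase search is simple enough to sketch in a few lines of code. The snippet below is a minimal illustration in Python (not a tool used by Amazon, Ars Technica, or anyone quoted here), assuming a hypothetical list of listing titles, that simply flags text containing the error-message phrases quoted above.

    # Minimal sketch: flag text that contains telltale AI error-message phrases.
    # The phrase list and sample titles are illustrative assumptions, not real Amazon data.
    TELLTALE_PHRASES = [
        "as an ai language model",
        "goes against openai policy",
        "cannot fulfill this request",
        "unable to provide the information you're seeking",
    ]

    def looks_ai_generated(text: str) -> bool:
        """Return True if the text contains any known AI error-message phrase."""
        lowered = text.lower()
        return any(phrase in lowered for phrase in TELLTALE_PHRASES)

    if __name__ == "__main__":
        sample_titles = [
            "I'm sorry but I cannot fulfill this request it goes against OpenAI use policy",
            "Adjustable Folding Lawn Chair, Set of 2",  # hypothetical ordinary listing
        ]
        for title in sample_titles:
            status = "FLAGGED" if looks_ai_generated(title) else "ok"
            print(f"{status}: {title}")

A real moderation pipeline would need far more than substring matching, but as the article notes, even this trivial check would have caught the listings that made it onto Amazon with the error messages intact.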
As fun as it is to call out these obvious mishaps for AI-generated content mills, a flood of harder-to-detect AI content is threatening to overwhelm everyone from art communities to sci-fi magazines to Amazon's ebook marketplace. Pretty much any platform that accepts user submissions involving text or visual art now has to worry about being flooded with wave after wave of AI-generated work trying to crowd out the human community it was created for. It's a problem that's likely to get worse before it gets better.
As a Partner and Co-Founder of Predictiv and PredictivAsia, Jon specializes in management performance and organizational effectiveness for both domestic and international clients. He is an editor and author whose works include Invisible Advantage: How Intangibles are Driving Business Performance. Learn more...