On Monday, Snap unveiled “My AI,” an experimental chatbot for Snapchat powered by ChatGPT technology. According to a press release from Snap, Inc., My AI launches “this week” as a feature for Snapchat+ subscribers, who pay $3.99 a month.
Users can personalize the bot by giving it a custom name, and conversations with it take place in an interface similar to a regular chat with a human. According to Snap CEO Evan Spiegel, “the big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day.”
However, Snap warns that My AI, like its GPT-powered cousins ChatGPT and Bing Chat, is prone to “hallucinations,” unexpected falsehoods generated by an AI model. Snap’s post announcing My AI includes this lengthy disclaimer:
“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice.”
“Hallucination” is a term machine learning researchers use to describe an AI model making inaccurate inferences about a topic or situation not covered in its training data. Today’s large language models, such as ChatGPT, are notorious for readily fabricating convincing-sounding falsehoods, such as academic papers that don’t exist and inaccurate biographies.
The company says its new Snapchat bot can “recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a dinner recipe, or even write a haiku about cheese for your cheddar-obsessed pal,” and that it will be pinned above conversations with friends in the app’s Chat tab.
Snap does not explain how the same bot that cannot be “rel[ied] on for advice” can also plan an accurate and safe “hiking trip for a long weekend.” Critics of generative AI’s hurried rollout have seized on this dissonance to argue that such chatbots are not ready for widespread use, especially as a reference.
While some people have made a sport of finding ways around the guardrails in ChatGPT and Bing Chat, Snap has reportedly conditioned its GPT model to avoid discussing sex, profanity, violence, or political opinions. Those restrictions may be necessary to head off the unhinged behavior we observed from Bing Chat a few weeks ago.
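Snap has not said how those restrictions are enforced. One common pattern with OpenAI-hosted models pairs a restrictive system prompt with OpenAI’s moderation endpoint; the minimal sketch below illustrates that general pattern only and is not Snap’s implementation. It assumes the pre-1.0 `openai` Python SDK, and every name in it (`SYSTEM_PROMPT`, `blocked`, `reply`, the `gpt-3.5-turbo` model choice) is hypothetical.

```python
# Illustrative guardrail pattern only; not Snap's actual implementation.
# Assumes the pre-1.0 SDK: pip install "openai<1.0"
import openai

openai.api_key = "sk-..."  # placeholder; supply a real key

SYSTEM_PROMPT = (
    "You are a friendly in-app assistant. Refuse to discuss sex, "
    "profanity, violence, or political opinions; politely redirect instead."
)

def blocked(text: str) -> bool:
    """Screen text with OpenAI's moderation endpoint."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

def reply(user_message: str) -> str:
    # Filter the user's message before it ever reaches the model.
    if blocked(user_message):
        return "Sorry, I can't help with that."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption; Snap's exact model is not public
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    answer = response["choices"][0]["message"]["content"]
    # Re-check the model's output, since a system prompt alone can be bypassed.
    return "Sorry, I can't help with that." if blocked(answer) else answer

print(reply("Write a haiku about cheese."))
```

Checking both the user’s message and the model’s reply matters because, as the jailbreak games around ChatGPT and Bing Chat have shown, prompt-level instructions alone can be talked around.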
Such restrictions may be doubly necessary because My AI might be powered by OpenAI’s newest large language model. The Verge reports that Snap is using “Foundry,” a new OpenAI enterprise package quietly launched earlier this month, which gives businesses dedicated cloud access to OpenAI’s GPT-3.5 and “DV” models. According to several AI researchers, “DV” is rumored to be a high-powered successor to GPT-3, possibly equivalent to GPT-4.
In other words, the “hallucinations” Snap referenced in its press release may arrive even quicker and more convincingly than ChatGPT’s. And despite the warnings, many people may trust My AI anyway, given how persuasive the other GPT models have proven to be. It’s something to watch as more GPT-powered commercial services launch in the coming months.