To counteract the dissemination of false information by chatbots, it’s essential to guide them towards reliable data sources.
Many are captivated by A.I. chatbots such as ChatGPT and Bard for their ability to craft essays and recipes. However, these users soon encounter “hallucinations,” where the A.I. produces made-up details.
AI Chatbots in Modern Technology
AI chatbots simulate conversations with human users, typically over the internet. They use artificial intelligence to interpret user input and return relevant responses. Chatbots appear in applications such as customer support, e-commerce, and entertainment, where they can handle many inquiries at once, respond instantly, and operate around the clock.
Introduction to ChatGPT
OpenAI developed ChatGPT as a variant of its GPT (Generative Pre-trained Transformer) architecture. ChatGPT was trained on vast amounts of internet text, which lets it generate coherent, contextually relevant conversational responses to the input it receives. However, because it composes answers from patterns in that training data rather than from verified facts, it sometimes produces what people call “hallucinations,” or fabrications.
Bing AI: Microsoft’s Integration
Microsoft has integrated AI capabilities directly into its Bing search engine. Although “Bing AI” is not well documented as a standalone product, Microsoft regularly adds AI to its products, and in Bing the integration enhances search results, personalizes content, and improves the overall user experience.
Google Bard: A Dive into AI Technologies
Google Bard, one of Google’s AI technologies, can engage with information in various formats, from text and images to videos and audio. It builds on Google’s history of AI work in Search, with forerunners such as BERT and MUM. Bard synthesizes insights for complex questions and simplifies intricate information, but it has also faced challenges: it gave inaccurate information at its launch, underlining the need for thorough testing and ongoing refinement.
Baidu’s ERNIE: A Knowledge-Enhanced Model
Baidu introduced ERNIE (Enhanced Representation through kNowledge IntEgration) as a knowledge-enhanced foundation model, and the recent iteration, ERNIE 3.5, outperforms its predecessors. One standout feature is plugins, such as “Baidu Search,” which let ERNIE retrieve real-time, accurate information. Techniques like “Knowledge Snippet Enhancement” strengthen the model’s grasp and use of world knowledge. ERNIE has been adopted across sectors including smart offices, coding, marketing, media, education, and finance.
These chatbots, which formulate responses based on data from the internet, are prone to errors. Mistakes, like suggesting incorrect ingredients for a cake, can be disappointing.
As A.I. becomes more integrated into mainstream technology, understanding how to use it well is vital. After evaluating numerous A.I. tools recently, I’ve concluded that many people use them poorly, largely because tech firms provide inadequate guidance.
Relying solely on chatbots for answers can be misleading. However, when they are directed to trusted sources, such as reputable websites and academic studies, they can provide accurate and valuable information. Sam Heutmaker, founder of Context, an A.I. startup, mentioned, “Provided with the right data, they can produce fascinating results. Without guidance, about 70% of the output might be incorrect.”
Simply directing chatbots to specific data lets them produce coherent responses and valuable insights. This shift in approach transformed my perspective on A.I. from skepticism to enthusiasm. For instance, a travel plan ChatGPT created from my preferred travel sites proved successful.
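For readers comfortable with a little code, the same idea can be tried programmatically. The sketch below is a minimal example, assuming the openai Python package (version 1 or later) and an API key in the environment; the model name and the pasted source excerpt are placeholders, and this is only one way to steer a chatbot toward data you trust.

```python
# Minimal sketch of "pointing the chatbot at your own data."
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

trusted_source = """
Paste an excerpt from a site you trust here, e.g. a national-park page
or a published recipe, verbatim.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the source text provided by the user. "
                "If the answer is not in the source, say you don't know."
            ),
        },
        {
            "role": "user",
            "content": f"Source:\n{trusted_source}\n\nQuestion: What does the source recommend?",
        },
    ],
)

print(response.choices[0].message.content)
```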
Guiding chatbots to reputable sources, like established media and academic journals, can also combat the spread of false information. Here’s how I utilized this approach for cooking, research, and travel planning:
Meal Planning:
While chatbots like ChatGPT and Bard can write appealing-sounding recipes, those recipes often fail in practice. For instance, a New York Times experiment with an A.I. model resulted in unsatisfactory Thanksgiving dishes.
However, my experience improved when I used ChatGPT plug-ins. Through the Tasty Recipes plug-in, I obtained a meal plan with dishes like lemongrass pork banh mi and grilled tofu tacos, all sourced from BuzzFeed’s Tasty website. For other recipes, I employed the Link Reader plug-in, which extracts recipes from reputable sites, requiring some manual effort but ensuring quality.
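A plug-in isn’t strictly required, either: you can roughly approximate what a link-reading tool does by fetching a page yourself and handing its text to the model. The sketch below assumes the requests, beautifulsoup4, and openai packages; the URL and model name are placeholders, and this is not how the Link Reader plug-in itself works.

```python
# Rough approximation of a link-reading plug-in: fetch a recipe page,
# strip the HTML, and pass the remaining text to the model.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

url = "https://example.com/grilled-tofu-tacos"  # placeholder recipe URL
html = requests.get(url, timeout=10).text
page_text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Extract the ingredient list and steps only from the page text provided.",
        },
        {"role": "user", "content": page_text[:12000]},  # crude truncation to fit the context window
    ],
)
print(reply.choices[0].message.content)
```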
Research:
While researching a popular video game series, I found that ChatGPT and Bard often misinterpreted key plot details. For accurate research, it’s crucial to rely on trusted sources and verify the information. I discovered Humata.AI, a free tool favored by scholars and attorneys, which allows users to upload documents and ask chatbots questions about the content. In one instance, I uploaded a research paper from PubMed, and the tool provided a concise summary, saving me hours of reading.
Cyrus Khajvandi, a co-founder of Humata, noted that chatbots like ChatGPT sometimes rely on outdated models of the web, so the answers they return can lack relevant context.
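For those who prefer to stay in their own environment, the upload-a-paper-and-ask workflow can be approximated with a short script. The sketch below assumes the pypdf and openai packages; the file name, model name, and question are placeholders, and unlike a dedicated tool such as Humata it does no chunking or retrieval, so it only suits papers that fit in a single prompt.

```python
# Simplified "upload a paper, then ask questions about it" workflow.
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("paper.pdf")  # placeholder file name
paper_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Answer questions using only the paper text supplied by the user.",
        },
        {
            "role": "user",
            "content": f"Paper:\n{paper_text[:15000]}\n\n"
                       "Question: Summarize the key findings in three sentences.",
        },
    ],
)
print(answer.choices[0].message.content)
```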
Travel Planning:
A recent attempt by a Times travel writer to get a Milan itinerary from ChatGPT resulted in misguided suggestions. However, when I sought a vacation plan for Mendocino County, Calif., and directed ChatGPT to my favorite travel sources, it provided a comprehensive itinerary, saving me hours of preparation.
In Conclusion:
While companies like Google and OpenAI are striving to minimize chatbot hallucinations, users can already harness A.I.’s potential by controlling the data it uses for responses.
Nathan Benaich, an A.I. venture capitalist, stated that the primary advantage of training machines with vast data is their ability to mimic human thought processes. The key, he emphasized, is to combine this capability with trustworthy information.