The weak point of artificial intelligence
If you’ve used ChatGPT, Gemini, or other chatbots, you’ve probably noticed that these tools make things up a fair amount of the time. Although they are called “intelligence”, these systems cannot distinguish between reality and fiction. For this reason, they tend to state their fabrications as facts.
We learned all of this on 30 November 2022, when ChatGPT was released. Even though two years have passed, it continues to be true. Last year, companies still acknowledged in their presentations this tendency to “hallucinate”. In 2024, however, companies are integrating generative AI even into the most critical areas as if the fabrication problem had been solved, and they no longer mention hallucination unless asked.
Recently, the popular AI search engine Perplexity was found to have summarized Forbes and Wired content without permission. On closer examination, the summaries Perplexity provided turned out to contain errors. In other words, these systems can make mistakes even when summarizing a single source you hand them. We have covered Perplexity before; Wired described this search engine as a “bullshit machine”.
We saw similarly erroneous results in Google’s AI-powered search feature, AI Overview. It is not known whether anyone actually followed the advice, but Google’s AI recommended that people eat rocks and use glue to stick cheese to pizza.
Truth is just a probability
Since the launch of ChatGPT, a great deal of research has been done on the accuracy of these systems, and scientific studies reveal serious concerns. As we have said before, there is no intelligence in artificial intelligence. In the simplest terms, these are “probability machines”: they are not concerned with accuracy per se, and are designed to produce text that appears truthful without any real regard for whether it is.
The problems we mentioned, such as making things up and hallucinating, are in fact an answer to the question of how artificial intelligence works. These errors are not simple bugs; they are an indication of how the technology operates. For further reading, take a look at our earlier content on the subject.
To summarize briefly: the large language models (LLMs) that power Gemini or ChatGPT are systems built simply to predict the next token (a word fragment) after the tokens that came before. They are not built on knowledge; they are not even built on whole words. Since a guess, a probability, is involved, no company can guarantee that a model will always give the same answer to the same question.
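The core idea can be shown with a toy sketch. This is not how any real model is implemented; the tiny probability table below is a made-up illustration of the principle that generation means sampling the next token from a probability distribution, which is why the same prompt can yield different answers.

```python
import random

# Toy "next-token" table: each context maps to a probability
# distribution over possible next tokens. A real LLM learns billions
# of parameters to compute such distributions; this table is purely
# illustrative.
NEXT_TOKEN_PROBS = {
    "the sky is": {"blue": 0.7, "clear": 0.2, "green": 0.1},
}

def sample_next(context, rng):
    """Pick the next token by sampling the distribution for `context`."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# An unseeded generator: repeated runs can produce different tokens,
# and low-probability but wrong continuations ("green") are possible.
rng = random.Random()
print(sample_next("the sky is", rng))
```

Note that “green” is never ruled out, only made less likely; that is the hallucination problem in miniature. The model does not check facts, it only follows probabilities.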
On the other hand, for ordinary people who are not close to this technology, artificial intelligence continues to be the technology that will answer their questions, diagnose their diseases, and/or take away their jobs. AI companies are trying to address the accuracy problem, although they stop short of saying so openly. One of the simplest, though not definitive, mitigations is scale: increasing the size of the model and of the data library it is trained on, and ensuring that library consists of quality data. The shift from models with millions of parameters to models with billions is an indication of this. But ultimately, because the problem lies in what the technology is built on, reaching a solution will be a difficult journey.
Sources
https://www.forbes.com/sites/quickerbettertech/2023/06/23/on-technology-the-achilles-heel-of-ai-that-no-one-is-talking-about/
https://link.springer.com/article/10.1007/s10676-024-09775-5
https://www.aisnakeoil.com/p/chatgpt-is-a-bullshit-generator-but
Source link: https://www.donanimhaber.com/yapay-zeka-icin-asil-in-topugu-bir-seyler-uydurmak–178764