ChatGPT hallucinates, and it's no joke: worrying reports that it is increasingly unreliable

Artificial Intelligence – TheMagazinetech

When using AI you have to take hallucinations into account: they resemble the hallucinations humans experience, but they arise from mathematical patterns.

Artificial intelligence is gaining more and more space in our daily lives. It was declared infallible and extraordinarily clever; in fact, after the first comments from the engineers who worked on building OpenAI's ChatGPT, opinions changed.

Artificial intelligence has no consciousness and is "trained" on countless mathematical patterns. The training phase is crucial, because it is there that the system acquires its skills, or, on the contrary, becomes more "stupid" by confusing itself. When AI gets a topic wrong, the real-world damage can be massive, because people rely on it for help, including at work.

These errors are called "hallucinations": they can create altered realities that users perceive as true and then use, in turn, to construct new realities. The error is often noticed only later, when it is sometimes too late. In short, artificial intelligence cannot be completely trusted, which is why, for now, it is impossible to imagine it completely replacing human labour. Even when used merely as assistants, these systems deserve caution.

AI hallucinations: what they are and why they are dangerous

Just as a person with a cognitive deficit can lose the measure of what is real, AI systems like ChatGPT suffer from hallucinations that lead them astray. Complicating matters is how difficult these blunders are to address.

Specifically, by hallucinations we mean a conceptual distortion. It should be understood that AI has no knowledge of facts: these systems write by following statistical models, in the sense that they rely on mathematical patterns rather than conscious awareness, which is a distinctly human trait.
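
To make the "statistical patterns, not awareness" point concrete, here is a minimal sketch in Python, a toy bigram model of our own devising, not how ChatGPT is actually built (real systems use far larger neural networks, but the principle is the same): it strings words together purely from co-occurrence counts, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model picks each next word purely from
# co-occurrence statistics in its training text. Words are chosen because
# they are statistically likely, not because the model "knows" anything.
corpus = (
    "artificial intelligence learns patterns from text "
    "artificial intelligence has no awareness of truth"
).split()

# Count which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("artificial"))
# Possible output: "artificial intelligence has no awareness of truth"
# Fluent-looking and statistically plausible, yet produced with zero
# understanding -- the seed of a "hallucination".
```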

Artificial Intelligence, Limits, and Hallucinations – TheMagazinetech

Artificial intelligence is not as accurate as once thought: its answers can be completely wrong

Rather than give no answer at all, AI systems fall back on their learned patterns to process the information approximately, if not completely. This leads them to produce incomplete or invented answers on the spot, just to satisfy the request. The AI is trained before being put into the field, but errors can creep in during training; in some contexts, for example, the AI might mistake a watch for a bracelet. Examples like this may seem trivial, but translated into our daily lives they can lead to serious problems.
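
As a hedged illustration of how a training gap can produce the watch-for-a-bracelet mistake mentioned above, here is a toy nearest-neighbour classifier (the features and data are entirely hypothetical, chosen only for the example): with skewed training examples, it confidently labels a metal-link watch as a bracelet.

```python
import numpy as np

# Hypothetical features: [worn_on_wrist, has_dial, made_of_links]
# The training set is skewed: the only "watch" example has no link band.
train_X = np.array([
    [1, 0, 1],  # bracelet (link band, no dial)
    [1, 0, 0],  # bracelet
    [1, 1, 0],  # watch (dial, no link band)
])
train_y = ["bracelet", "bracelet", "watch"]

def predict(x):
    """1-nearest-neighbour: copy the label of the closest training example."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(dists))]

# A metal-link watch: worn on wrist, has a dial, made of links.
print(predict(np.array([1, 1, 1])))  # prints "bracelet" -- a confident error
```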

When an AI is trained on very specific data (overfitting), it becomes less accurate in situations it doesn't know. Conversely, if it is trained on very general data (overgeneralization), it tends to draw fuzzy connections between contexts that probably don't exist. All of these distortions are classed as hallucinations. One example is the case of a New York lawyer who used citations suggested by the chat that turned out to be court rulings that never existed. All this does nothing but raise serious doubts about the prospect of artificial intelligence replacing human labour.
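
For the overfitting half of this trade-off, a minimal sketch (a toy regression with NumPy, not a language model) shows the same failure mode in miniature: a model fitted too tightly to its training points reproduces them almost perfectly but can go badly wrong on a point it has never seen.

```python
import numpy as np

# Toy illustration of overfitting: a model fitted too tightly to specific
# training data becomes unreliable on inputs it has not seen.
rng = np.random.default_rng(0)

# A simple underlying trend (y = 2x) with a little noise.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(0, 0.05, size=x_train.size)

# Fit a sensible low-degree model and a wildly flexible high-degree one.
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)

x_new = 1.3  # a point outside the training range
print(f"true trend      : {2 * x_new:.2f}")
print(f"simple model    : {simple(x_new):.2f}")   # stays close to the trend
print(f"overfitted model: {overfit(x_new):.2f}")  # can be far off
```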
