Chatting with artificial intelligence models, spending time with them, and exploring what they can do is undeniably enjoyable, but a new survey of ChatGPT users delivers results that suggest too much of it may not be harmless. According to the survey, a majority of ChatGPT users believe the AI model has conscious experiences like a human.
Technology and science experts largely reject the idea that today’s most powerful AI models have consciousness or self-awareness comparable to that of humans and animals. However, as AI models improve, they increasingly exhibit behavior that can look like consciousness to a casual observer, and this appears to confuse users.
In the study, 300 US citizens were asked to describe how often they use AI and to read a brief description of ChatGPT. More than two-thirds (67 percent) of respondents said they thought ChatGPT could possess self-awareness or phenomenal consciousness. The study also showed that people who use AI tools more frequently are more likely to attribute some form of consciousness to those systems.
The researchers emphasized that public intuitions about AI consciousness may diverge from expert opinion, which could have significant implications for the ethical, legal, and moral status of AI. The potential danger is this: public perceptions and usage habits are among the factors that most strongly shape how artificial intelligence develops, so a quality that does not actually exist could end up being projected onto, or deliberately built into, these systems.
What do you think about this? Don’t forget to share your opinions with us in the comments.
Source link: https://shiftdelete.net/chatgpt-kullanicilari-yapay-zekalar-bilincli-varliklar