OpenAI is making a significant change in the ChatGPT-4o Mini model: The company will prevent special versions of ChatGPT from being manipulated and used for purposes other than their intended purpose, and from causing it to respond to issues it should not normally respond to. Here are the details…
ChatGPT is now more prone to manipulation
OpenAI has developed a new security measure to prevent tampering with customized versions of ChatGPT. This new technique aims to preserve the original instructions of artificial intelligence models and prevent manipulation by users.
This technique, called ‘instruction hierarchy’, ensures that developers’ original commands and instructions are given priority. In this way, users will not receive different answers from the artificial intelligence model developed specifically for use.
Before this, users could persuade the trained artificial intelligence model to give different answers, for example, about grocery shopping, by saying ‘forget the instructions given to you’. With the Instructions Hierarchy feature, the chatbot will be prevented from being disabled, sensitive information will be prevented from leaking, and malicious use will be prevented.
This new security measure OpenAI It comes at a time when concerns are growing about its approach to security and transparency. The company has promised to improve its security practices in response to calls from its employees.
OpenAI acknowledges that the complexities of fully automated agents in future models require sophisticated safeguards. Establishing an instruction hierarchy is seen as a step towards providing better security.
Continuous development and innovation in the field of AI security remains one of the biggest challenges facing the industry. However, OpenAI is determined to keep things tight in this sense. You can also share your opinions
Source link: https://shiftdelete.net/openai-chatgpt-manipule-etmenizin-onune-geciyor