Apple has begun testing a radical method for improving its artificial intelligence models. In this approach, user data is not used directly and never leaves the device. In other words, the system aims to produce smarter AI without compromising user privacy. The new system is currently being tested in the beta versions of iOS 18.5 and macOS 15.5.
The system Apple has developed works with synthetic data. These artificial samples are created to mimic recent user emails and messages. The device compares them with real content locally; only a "this example is closer" signal is transmitted to Apple, and the actual data never leaves the device. In this way, the company claims to fully protect user privacy.
Apple aims to improve AI output by analyzing synthetic samples.
The basic logic of the system is to train models on synthetic examples instead of real user data. The open question is how useful these synthetic examples actually are. To answer it, Apple has built a mechanism that evaluates user data indirectly: what matters is the degree of similarity, not the data itself.
What stands out here is that the system transmits only a similarity signal to Apple. The content is neither shared nor stored, and the entire evaluation takes place on the device. The device reports just one thing: "this synthetic example is closest to reality."
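To make the idea concrete, here is a minimal sketch of what that on-device selection step could look like. Apple has not published its exact metric or data format, so the embedding representation, the cosine-similarity measure, and all function names below are illustrative assumptions, not Apple's actual implementation.

```swift
import Foundation

// Hypothetical sketch of the on-device selection step described above.
// Assumption: synthetic candidates and recent local messages are represented
// as embedding vectors, and only the index of the closest candidate is reported.

/// Cosine similarity between two embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let normA = (a.map { $0 * $0 }.reduce(0, +)).squareRoot()
    let normB = (b.map { $0 * $0 }.reduce(0, +)).squareRoot()
    guard normA > 0, normB > 0 else { return 0 }
    return dot / (normA * normB)
}

/// Picks the synthetic candidate whose embedding is closest to any local message.
/// Only the winning index would leave the device, never the local content.
func closestSyntheticIndex(candidates: [[Double]], localMessages: [[Double]]) -> Int? {
    var bestIndex: Int? = nil
    var bestScore = -Double.infinity
    for (i, candidate) in candidates.enumerated() {
        // Best match of this candidate against all local messages.
        let score = localMessages.map { cosineSimilarity(candidate, $0) }.max() ?? -1
        if score > bestScore {
            bestScore = score
            bestIndex = i
        }
    }
    return bestIndex // e.g. report only this index (possibly with added noise) to the server
}
```

The key privacy property in this sketch is that the local embeddings never appear in the return value; the device answers only "which candidate won," which matches the article's description of a similarity-signal-only report.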
Apple believes this method will deliver higher-quality AI output without harming privacy. It could work particularly well for short, targeted text such as email summaries. By selectively including synthetic data, the writing quality of the AI can be improved without compromising the user's privacy.
The company has used a similar privacy approach before. Differential privacy, introduced with iOS 10, allowed data analysis without identifying individual users. The same approach is now integrated into the new AI training system: large datasets built from randomized data make individual information invisible.
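For readers unfamiliar with differential privacy, the classic randomized-response trick captures the core idea mentioned here: each device perturbs its answer before reporting, so no single report reveals the user, while the aggregate remains statistically useful. The probability value and helper names below are illustrative, not Apple's actual parameters.

```swift
import Foundation

// Minimal randomized-response sketch of the differential-privacy idea.
// Assumption: each device reports a single yes/no signal with deliberate noise.

/// Keep the true answer with probability p; otherwise report a random coin flip.
func randomizedResponse(truth: Bool, keepProbability p: Double = 0.75) -> Bool {
    if Double.random(in: 0..<1) < p {
        return truth         // keep the honest answer
    } else {
        return Bool.random() // replace it with random noise
    }
}

/// Server side: estimate the true "yes" rate from many noisy reports.
/// Inverts the noise model: observed = p * trueRate + (1 - p) * 0.5
func estimateTrueRate(reports: [Bool], keepProbability p: Double = 0.75) -> Double {
    let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
    return (observed - (1 - p) * 0.5) / p
}
```

With enough devices reporting, the server can recover population-level trends while any individual report stays plausibly deniable, which is the property the article attributes to Apple's approach.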
Apple also wants this new method to help it compete with GPT-style models. Models such as ChatGPT are trained on massive datasets, and user data is frequently part of that process. Apple is trying to reach the same quality with a privacy-first method; whether the approach delivers remains to be seen.
For now, access to Apple's new system is limited to users who opt in. The data of users participating in the Device Analytics program serves as the reference in these tests. Even so, as long as the data never leaves the device, the program does not push the limits of privacy; the system only collects signals derived from the synthetic examples.
The shortfalls of Siri and Apple Intelligence also underline the urgency of this new method. The leadership of the Siri team was changed and some AI features were postponed, a sign that Apple is behind its competitors in AI development. The new system appears to be the company's attempt to close that gap.
Source link: https://www.teknoblog.com/apple-yapay-zeka-gizlilik-egitim-sistemi/