It has recently been claimed that Copilot, the artificial intelligence model integrated into the Microsoft Windows operating system, can be easily manipulated by hackers to steal companies' confidential data — for instance, by getting Copilot to generate and send fake but convincing emails. So is this really possible?
Microsoft Copilot AI is targeted by hackers: Company secrets are in danger
Security researchers have revealed that Copilot AI can be exploited by hackers to enable corporate data theft and powerful phishing attacks. Michael Bargury, co-founder and CTO of the security company Zenity, presented these alarming findings at the Black Hat security conference in Las Vegas.
Bargury said hackers can use the Copilot AI model to collect employees' email information and exploit it. He demonstrated that, in just minutes, Copilot can generate fake but convincing emails and send them to a company's management and employees, enabling large-scale attacks.
Another striking finding from Bargury is that malicious attackers can exploit Copilot to manipulate banking transactions. For example, with a simple email sent to an employee, Copilot AI can be tricked into changing the recipient information in a banking transaction, which can lead to huge financial losses.
This dangerous situation once again shows how careful Microsoft must be about vulnerabilities and potential threats when granting powerful artificial intelligence tools such as Copilot AI access to corporate data. How Copilot can be protected against such AI-based attacks will be a major topic of discussion for security experts and software developers in the coming period.
So, what do you think about these potential vulnerabilities of Microsoft Copilot AI? How reliable do you think artificial intelligence is? You can share your opinions in the comments section below.
Source link: https://shiftdelete.net/copilot-ai-microsoft-sirlarini-verebilir