For AI-driven data centers in fiscal 2025 Microsoft allocated $80 billionis establishing a new engineering organization within the company focused on accelerating AI infrastructure and software development. Information shared by Bloomberg according to, Jay Parikh will direct this new episode. Reporting directly to Microsoft CEO Satya Nadella, Parikh will oversee different groups, including the company’s AI platform and developer teams.
By the way, let us note that Parikh, who was previously Vice President and head of global engineering at Meta, worked on technical infrastructure and data center projects at Meta. Additionally, before joining Microsoft in October, Jay Parikh was appointed CEO of cloud security startup Lacework.
Details of CoreAI
Shared by The Verge information According to , this new organization is called “CoreAI – Platform and Tools”. The organization in question is a combination of Microsoft’s existing Dev Div and AI platform teams, as well as some employees in the CTO office division. CoreAI is actively reorganizing Microsoft’s developer teams to ensure AI remains a top priority.
Microsoft’un on your blog shared information; It also shows us the direction of the company. According to Nadella; Microsoft’s focus for the coming year will be on “model-forward” apps that “reshape entire app categories.”
Security implications from Microsoft’s red team with a focus on artificial intelligence
While Microsoft is moving towards a new structure centered on artificial intelligence, the company’s red team continues its work on artificial intelligence without slowing down. The team has put out a new white paper titled “Lessons from the Red Team of 100 Productive AI Products.”
The report finds that generative AI increases existing security risks. In this sense, we can say that the team has uncovered new vulnerabilities that require a multifaceted approach to risk mitigation. Within the scope of the report, it is mentioned that generative artificial intelligence models increase existing security vulnerabilities and reveal new cyber attack vectors.
According to the report; Within generative AI, traditional security risks such as outdated software components or improper error handling remain critical concerns. But additionally, model-level weaknesses such as rapid injections also create unique challenges in AI systems.
Additionally, as stated in the report; While automation tools are useful for creating prompts, orchestrating cyberattacks, and scoring responses, creating a red team cannot be fully automated. The report notes that AI red teaming relies heavily on human expertise. This is because language models that can identify general risks such as hate speech or obscene content have difficulty assessing nuanced domain-specific issues. This is where subject matter experts can step in and evaluate content in specific areas such as medicine, cybersecurity, and chemical, biological, radiological, and nuclear contexts where automation is often inadequate.
Finally, the report notes that a layered approach combining continuous testing, robust defenses, and adaptive strategies is needed to mitigate risks in generative AI.
Source link: https://webrazzi.com/2025/01/14/microsoft-un-yeni-muhendislik-kurulusu-coreai/