
Step into the future of cybersecurity with this exclusive eBook, packed with expert insights and the most current threat intelligence from the 5th Edition of the Cybersecurity Forum.
At SoftwareOne (formerly Crayon), we understand that safeguarding your digital infrastructure requires a proactive and comprehensive approach. In this eBook, you will find insights from our esteemed speakers on the most important security challenges and trends in the upcoming years. We hope this eBook enhances your experience of the Cybersecurity Forum 2025 and provides valuable insights into the ever-evolving landscape of cybersecurity.
Traditional Security Operations Centers (SOCs) play a critical role as the last line of defense against cyber threats. However, the evolving threat landscape – combined with rapid digitalization, hybrid environments, and AI-driven advancements – has made it clear that a reactive SOC alone is not enough. Organizations often find themselves inundated with security alerts, leading to alert fatigue and a reactive stance toward security incidents. Recognizing these challenges, SoftwareOne has introduced its Managed Security Posture service, offering a proactive and preventive approach to cybersecurity.
The Security Operations Center (SOC) has long been the gold standard for enterprise security – monitoring, detecting, and responding to threats. However, with the rapid evolution of cloud environments and digital transformation, the traditional SOC model is no longer enough.
Organizations today are overwhelmed with security alerts, leading to alert fatigue and the stress that comes with being on the back foot with reactive security measures. SoftwareOne Managed Security Posture adds an additional element to the traditional way of security operations – one that is proactive, preventive, and continuously improving security posture to reduce risks before they escalate.
Businesses and managed security service providers that rely solely on traditional security monitoring often find themselves overwhelmed by the sheer volume of alerts. Traditional SOCs generate an excessive number of notifications, most of which are false positives. This overload can lead to employee dissatisfaction, inefficiencies, and delayed responses to critical incidents.
Additionally, as cloud environments evolve, security misconfigurations and lack of situational awareness become an increasing concern. Small changes in settings or overlooked policies can create weaknesses that attackers are quick to exploit. Similarly, a lack of visibility into newly deployed systems creates blind spots for SOCs, leaving threats in those environments undetected and unaddressed. Compounding these challenges, many organizations invest in Microsoft E5 security features but struggle to fully leverage them due to the complexity of configuration and implementation.
SoftwareOne Managed Security Posture addresses these issues by shifting the focus from pure detection and response toward a continuous and proactive security posture management approach. By identifying risks before they become threats and ensuring security configurations remain optimized, SoftwareOne Managed Security Posture helps organizations strengthen their defenses, reduce operational burdens, and maximize the value of their security investments, in addition to traditional detection and response capabilities.
In addition to monitoring threats as they occur, this service ensures that security configurations are optimized, risks are identified early, and weaknesses are addressed before they can be exploited. By integrating best practices in security hardening and baseline enforcement with detection and incident response readiness, SoftwareOne Managed Security Posture strengthens an organization's overall defense strategy.
One of the key benefits of SoftwareOne Managed Security Posture is its ability to reduce the burden on in-house security teams by providing ongoing advisory support, regular posture assessments, and security configuration drift detection. This allows businesses to stay ahead of evolving threats without requiring a dedicated security operations team. Additionally, SoftwareOne Managed Security Posture maximizes the value of Microsoft security investments by ensuring that advanced security features, such as those included in Microsoft E5, are correctly configured and utilized where they fit best.
By adopting SoftwareOne Managed Security Posture, organizations can achieve a stronger security posture, minimize risks, and improve operational efficiency. This proactive approach enhances security resilience while allowing businesses to focus on growth and innovation without being held back by security concerns.
SoftwareOne Managed Security Posture operates through a structured and proactive security framework that focuses on three key pillars: security hardening and baseline management, continuous monitoring and drift detection, and incident response readiness. By continuously strengthening security configurations, the service ensures that misconfigurations and vulnerabilities are proactively identified and supports their remediation before they can be exploited.
By integrating these three key pillars, organizations benefit from improved security hygiene, reduced risk, and a streamlined approach to threat prevention, detection, and response.
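To make the drift-detection pillar above concrete: a monitoring job can periodically compare the current state of security settings against an approved baseline and flag any deviation. The sketch below is a simplified illustration of the idea, not SoftwareOne's actual implementation; the setting names are hypothetical.

```python
def detect_drift(baseline: dict, current: dict) -> list:
    """Compare current security settings against an approved baseline.

    Returns drift findings: settings that changed, disappeared, or
    appeared since the baseline was captured.
    """
    findings = []
    for key, expected in baseline.items():
        if key not in current:
            findings.append(f"MISSING: '{key}' (expected {expected!r})")
        elif current[key] != expected:
            findings.append(f"DRIFT: '{key}' is {current[key]!r}, expected {expected!r}")
    for key in current:
        if key not in baseline:
            findings.append(f"UNMANAGED: '{key}' not in baseline")
    return findings

# Hypothetical example: an MFA policy was weakened and an unmanaged setting appeared.
baseline = {"mfa_required": True, "legacy_auth": False, "audit_logging": True}
current = {"mfa_required": False, "legacy_auth": False,
           "audit_logging": True, "guest_access": True}
for finding in detect_drift(baseline, current):
    print(finding)
```

In practice, each finding would feed a remediation workflow rather than a print statement; the point is that drift is detected continuously instead of discovered during an incident.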
Authors:
Alexander Värä, Technical Services Sales Director at SoftwareOne
Edgar Vela, Director, Global Product Marketing & GTM at SoftwareOne
Cybersecurity threats are growing in frequency and complexity [1], posing significant challenges for organizations striving to secure their digital assets. Traditional incident response (IR) strategies, often reactive and manual, are increasingly inadequate to combat the sophisticated tactics of modern cybercriminals. This is where Artificial Intelligence (AI) emerges as a useful tool, bringing speed, precision, and scalability to the incident response lifecycle, offering the potential to revolutionize how organizations detect, analyze, respond to, and recover from cyber incidents. This article delves into the role of AI in incident response, outlining its key priorities, benefits, and limitations, and illustrating how it can empower security teams to act decisively in the face of cyber threats.
According to the National Institute of Standards and Technology (NIST), the incident response lifecycle comprises four phases: preparation; detection and analysis; containment, eradication, and recovery; and post-incident review [2].
Integrating AI into the incident response lifecycle can dramatically enhance how organizations react to and recover from cyber threats. AI brings speed, accuracy, and scalability to each phase, helping security teams stay ahead of increasingly complex attacks.

In the preparation phase, AI helps organizations analyze historical data to identify patterns, supports threat modeling, and tests response plans through simulations.

During detection and analysis, AI's ability to process vast volumes of data is of great use. It can spot anomalies, correlate indicators of compromise, and identify threats in real time, often catching subtle signs of malicious activity that traditional tools might miss. This reduces alert fatigue [3] and ensures faster, more accurate detection.

In the containment, eradication, and recovery phase, AI-powered Security Orchestration, Automation and Response (SOAR) platforms can automate responses such as isolating endpoints, revoking credentials, or blocking malicious IPs [4]. These actions happen instantly, minimizing the damage from fast-moving threats without waiting for human input. AI also helps map the full scope of an incident: tracing its origin, identifying affected systems, and recommending targeted steps to remove threats and restore normal operations. This shortens recovery time and limits business impact.

Lastly, in the post-incident phase, AI supports learning and improvement. It reviews what happened, how the response unfolded, and what could be done better. These insights feed back into detection models and response playbooks, making the system more effective with every incident.
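At its core, a SOAR playbook of the kind described above is a rule that maps a class of alert to an automated containment action, with a human-in-the-loop fallback for anything unrecognized. The sketch below is purely illustrative; the alert fields and action names are hypothetical, not those of any specific SOAR product.

```python
# Minimal sketch of a SOAR-style containment playbook (hypothetical schema).
PLAYBOOK = {
    "malware_on_endpoint": "isolate_endpoint",
    "credential_theft": "revoke_credentials",
    "c2_traffic": "block_ip",
}

def run_playbook(alert: dict) -> str:
    """Choose an automated containment action for an alert, or escalate
    to a human analyst if no rule matches (human-in-the-loop fallback)."""
    action = PLAYBOOK.get(alert["type"])
    if action is None:
        return f"escalate_to_analyst({alert['id']})"
    return f"{action}({alert['target']})"

print(run_playbook({"id": "A-102", "type": "c2_traffic", "target": "203.0.113.7"}))
```

The escalation branch matters as much as the automation: alerts the playbook cannot classify go to an analyst rather than being silently dropped.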
To effectively leverage AI in incident response, organizations must align its deployment with several core priorities. The first is data quality and integration: AI's accuracy and effectiveness depend on clean, consistent, and comprehensive data from across the environment; without unified data, models are prone to error. Next is explainability and transparency [5] - security teams must be able to understand how and why AI reaches its conclusions. Proper human-AI collaboration is also essential. AI should enhance, not replace, human expertise: it excels at processing large volumes of data and handling repetitive tasks, while humans remain critical for strategic judgment and complex edge cases. Finally, organizations should prioritize continuous learning. Threats evolve rapidly, and AI models must adapt just as quickly through ongoing updates, feedback, and retraining to stay effective.
Although AI offers significant promise, its use also brings challenges and downsides. AI-driven detection tools can generate a high number of false positives, overwhelming security analysts and wasting time on benign activity. Conversely, sophisticated threats may evade detection due to limitations in training data or model bias, leading to false negatives. Moreover, if the data used to train AI models is biased, the outputs will reflect those biases, possibly leading to unequal prioritization of threats or overlooked attack vectors. Poor-quality or incomplete input data can compromise a model's accuracy and reliability during threat identification and analysis.

Additionally, many AI systems, especially deep learning models, lack transparency. Incident responders may find it hard to understand why an AI flagged an event as malicious. This lack of explainability can reduce trust in AI decisions and make forensic investigations or compliance reporting more difficult. On top of that, AI systems are vulnerable to manipulation through adversarial inputs that deceive models into making incorrect assessments, and attackers may attempt to corrupt training data or models, leading to long-term degradation in detection capability.

Integration complexity and maintenance are further challenges. Aligning AI tools with existing IR platforms, SOC workflows, and business processes can be resource-intensive, and AI models require frequent updates and tuning to stay effective against emerging threats, which demands ongoing expertise and effort. Lastly, over-reliance on AI tools may erode an incident response team's core skills, weakening its ability to respond manually when needed.
AI is rapidly transforming the incident response landscape, making it faster, smarter, and more scalable. By prioritizing data quality, transparency, collaboration, and continuous learning, organizations can harness AI to respond to cyber threats with agility and confidence. While AI can significantly enhance the speed and efficiency of incident response, its deployment must be carefully planned and continuously monitored. Effective incident response still requires a human-in-the-loop approach to interpret, validate, and act on AI-generated insights. While challenges remain, the future of AI-powered incident response is promising, paving the way for a more secure digital world. Balancing automation with human judgment is crucial for secure and ethical IR operations.
In today’s hyper-connected digital landscape, cyber threats evolve at unprecedented speed and scale. Traditional security measures, which are often reactive and limited in scope, struggle to keep up with the pace and sophistication of modern cyberattacks. The need for a proactive, intelligent approach to cyber defense is no longer optional—it's essential. Enter AI-driven threat intelligence: a game-changing solution that transforms how organizations detect, analyze, and respond to cyber crises in real time.
Artificial Intelligence (AI) brings automation, speed, and contextual awareness to cybersecurity operations. Unlike traditional systems that rely heavily on signature-based detection or manual analysis, AI can sift through massive datasets, identify anomalies, and recognize patterns that signal a potential threat—all within seconds. Key capabilities include:
AI-powered systems continuously monitor networks and endpoints, enabling real-time threat detection. These systems analyze billions of logs and transactions to detect zero-day threats, advanced persistent threats (APTs), and insider attacks. Benefits include:
Real-time visibility minimizes the window of exposure and reduces potential damage.
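As a toy illustration of the anomaly detection described above, a detector can flag values that deviate strongly from a historical baseline (a simple z-score test). Real AI-driven detection uses far richer models over many signals; this sketch, with hypothetical login-failure counts, only conveys the principle.

```python
import statistics

def flag_anomalies(history: list, new_points: list, threshold: float = 3.0) -> list:
    """Flag values whose z-score against historical data exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in new_points if abs(x - mean) / stdev > threshold]

# Hypothetical hourly login-failure counts: a sudden spike stands out.
history = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
print(flag_anomalies(history, [5, 7, 90]))  # only the spike of 90 is flagged
```

A value of 7 is slightly elevated but within normal variance, so only the extreme outlier is surfaced; this is the statistical intuition behind reducing false positives while still catching genuine anomalies.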
Once a threat is detected, AI-driven analysis rapidly classifies and contextualizes the threat. This involves:
AI dramatically reduces the time required to analyze threats, allowing analysts to focus on strategic mitigation rather than basic triage.
Perhaps the most transformative aspect of AI is its role in automated incident response. Through Security Orchestration, Automation, and Response (SOAR) platforms, AI can:
This automation accelerates response time and ensures consistency during crises, reducing the impact of cyberattacks and aiding faster recovery.
By leveraging AI, organizations shift from a reactive to a proactive security posture. This includes:
Ultimately, this leads to a more resilient and adaptive cybersecurity framework.
The integration of AI into threat intelligence is not just an enhancement – it's a necessity for modern cyber crisis management. By enabling real-time detection, intelligent analysis, and automated response, AI empowers organizations to stay ahead of adversaries and mitigate risk before damage occurs. As cyber threats grow in complexity, so too must our defenses. With AI at the forefront, the future of cybersecurity is smarter, faster, and more proactive than ever.
77% of senior business leaders surveyed in late 2024 reported gaining a competitive advantage from AI technologies. While AI tools allow developers to build and ship software more efficiently than ever, they also entail risk, as AI-generated code can contain vulnerabilities just like developer-written code. To enable speed and security, DevSecOps teams can adopt tools to integrate security tasks into developer workflows. Thanks to automation and real-time analysis capabilities, DevSecOps AI tools accelerate security processes and create safer development environments—minimizing the risk created by AI-generated code.
Using these tools, teams can promote a culture of security and ease the burden on developers. That means DevSecOps teams can think bigger picture, and developers can focus on delivering secure software excellence.
Developers today face pressure to deliver quickly while ensuring security — a tricky balancing act. With the introduction of AI-generated code, the output of code (and therefore the number of security risks) has increased, but security resources haven’t scaled with it.
AI-powered functionalities in DevSecOps tools add critical security support by enforcing guardrails, proactively detecting vulnerabilities, and automating security tasks. These tools have the potential to provide automatic, thorough verification of AI-generated code, even in real time.
AI tools can speed up or take on common DevSecOps workflows to reduce developers' workloads, even providing in-line feedback in the developer environment. Adding AI helps ensure code security while creating a better developer experience and reducing the errors introduced by manually handling critical tasks such as reviews. That means developers can deliver more secure software faster and with less stress.
AI is most effective in DevSecOps when applied in small doses throughout the software development lifecycle.
The continuous integration of DevSecOps AI tools enables organizations to anticipate threats better. AI’s ability to process vast amounts of data, detect patterns, and provide immediate recommendations enables these tools to enact code security guardrails through activities like:
For instance, instead of waiting until after code is written to generate tests, developers can use AI to create unit tests and merge requests earlier in the cycle, before coding even begins. This proactive approach aligns code — including AI-generated recommendations — with testing requirements upfront, leading to better test coverage and stronger security practices.
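The test-first flow just described can be pictured like this: a unit test (whether AI-generated from a requirement or hand-written) exists before the implementation, so any code, including AI-suggested code, must satisfy it. The function and test below are a hypothetical illustration, not the output of any specific tool.

```python
import re

# Test written first (e.g., AI-generated from the security requirement),
# before any implementation exists. Requirement: redact 16-digit card numbers.
def test_redacts_card_numbers():
    assert redact("card 4111111111111111 ok") == "card [REDACTED] ok"
    assert redact("no card here") == "no card here"

# Implementation (possibly AI-suggested) is then written to pass the test.
def redact(text: str) -> str:
    return re.sub(r"\b\d{16}\b", "[REDACTED]", text)

test_redacts_card_numbers()
print("tests passed")
```

Because the security expectation is encoded before any code is generated, an AI suggestion that mishandles sensitive data fails immediately instead of slipping into review.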
AI can be used to support DevSecOps workflows in several ways, including:
Every change comes with challenges, and AI is no exception. If new tools are not rolled out effectively, they can leave an undesirable first impression on users or disrupt business processes. That’s why it’s important for DevSecOps teams deploying AI practices to carefully plan their implementation.
In addition, AI systems are only as good as the data they are trained on, and even well-honed models require ongoing evaluation and refinement. AI models must be fine-tuned to detect and prioritize vulnerabilities to avoid overwhelming security teams with unnecessary alerts.
Leading DevSecOps teams follow these best practices when using AI:
In the end, while AI introduces some risk to software development, it also creates opportunities to minimize that risk. Security teams can use AI-driven root cause analysis to analyze pipeline errors and recommend fixes. For developers, AI can suggest fixes for security flaws directly within the IDE alongside generative coding tools, deterring risk and accelerating issue resolution.
Snyk integrates AI-driven security scanning into the developer workflow through a fine-tuned AI model, DeepCode AI Fix. Powered by a combination of symbolic and generative AI, several machine learning methods, and the expertise of Snyk security researchers, this tool enables teams to:
To see what Snyk can do, try out our free web-based code checker powered by AI via Snyk Code.
As AI technology grows and changes, its role in DevSecOps will continue to expand. In the years to come, we could see advances like self-healing security mechanisms that automatically patch code and mitigate vulnerabilities or AI-driven threat modeling to predict vulnerabilities in production before they even manifest. Greater collaboration between AI and humans will be critical in developing these proactive security measures.
To stay ahead of evolving risk, organizations that strategically implement AI into their DevSecOps environments can accelerate development cycles, automatically enact guardrails, and strengthen their overall security posture. Embracing AI-powered security tools today will help DevSecOps teams create a more resilient, efficient, and secure software development process — powering innovation without compromising on security.
Enefit wanted to enhance the integrity of its digital estate and improve its cybersecurity posture by adopting the Microsoft 365 E5 security platform. The company chose to partner with SoftwareOne because of its long experience in securing mission-critical domains, and its thorough plan to speed up the implementation. What made this project especially successful was the close one-on-one cooperation between Enefit and SoftwareOne and the flexibility to shape the project to the customer's exact needs. As a result, Enefit achieved a qualitative leap forward in endpoint and e-mail protection within just two months.
Challenges
Enefit is the largest energy company in Estonia and a pivotal power supplier to the Baltic states, as well as Poland and Finland. Operating across the entire energy value chain, Enefit enhances environmental sustainability by producing electricity from wind, water, biomass, solar energy, and municipal waste. Additionally, the company offers practical, convenient, and innovative energy solutions to improve energy consumption efficiency.
Project Summary
Estonia is the most digitalized nation in Europe, and the level of digital maturity across Enefit is probably unmatched in the energy sector which, like other public utilities, is a frequent target for cyberattackers.
As an energy provider, Enefit operates essential infrastructure such as power plants, grids, and distribution networks. Any breach or disruption in these systems can lead to significant outages, affecting millions of people and businesses.
With over 5,000 identities, 4,000 company-owned devices, and 1,500 BYOD devices, Enefit decided to take another step in improving its security posture. In mid-2023, the company chose to adopt the unified Microsoft 365 E5 Security platform, bringing it closer to its Zero Trust goal.
With SoftwareOne's expertise we have been able to implement Microsoft 365 E5 Security in just two months, safeguarding critical infrastructure and ensuring reliable energy supply for our clients.
Nikita Skitsko, Head of Digital Workplace at Enefit
One of the main drivers for Enefit to start the project was to implement additional technical barriers and better support its employees to avoid becoming a victim of commonly used attacks. “It all started with top management of Enefit understanding cybersecurity risks and ways how to minimize those,” said Head of Digital Workplace, Nikita Skitsko.
“In order to make a significant investment like this, you need good clarity of why you are doing this and how it helps us to achieve strategic goals”.
The project started with the implementation of certain entry-level security technologies and a benefits analysis of other capabilities. “While the comprehensive security license offers a suite of features crucial for a robust Zero Trust Architecture, starting with these foundational technologies lays a strong foundation for our security strategy. Additionally, different workshops, where our teams familiarized themselves with security features, have been an excellent starting point for this journey,” says Neeme Kaalep, Platform Security Engineer at Enefit.
Alexander Värä, Global Technical Services Sales Director at SoftwareOne, who spearheaded SoftwareOne’s successful pitch for the project, explains. “The value [of a Microsoft 365 E5 implementation] does not come from having one product that is likely better than the one you currently have.
Instead, having a platform that includes multiple technologies, integrated together and inherently rooted in your entire ecosystem, from operating systems to cloud services, allows significant configurability to achieve a much better level of protection.
“Another drawback of a heterogeneous security stack is that it forces the organization to build bigger teams who are proficient in those technologies, operated as black boxes. This is inefficient and can create silos between different security capabilities of an organization.”
Implementing E5 security should not be viewed as a one-time project. Instead, it's an ongoing journey requiring continuous learning, evolution, and improvement of your cybersecurity practices.
Nikita Skitsko, Head of Digital Workplace at Enefit
“It’s pivotal to achieve a good, solid security baseline and if you do that manually it takes a long time,” says Värä. “You must design, experiment and gradually roll-out your solution. With SoftwareOne, a lot of that was eliminated because of the pre-existing work we had done in this space.”
SoftwareOne won the project because of its long and deep background in securing mission-critical domains with Microsoft products. “The SoftwareOne offer stood out because it was well thought through, high-quality specialists were involved from across the SoftwareOne group, and SoftwareOne was proactive and quick to react to Enefit's needs,” says Skitsko.
The implementation of the Microsoft 365 E5 security platform was completed ahead of the agreed timeline by leveraging SoftwareOne's pre-existing security baseline components as code – and through exceptionally close cooperation with Enefit.
“This cooperation was successful,” says Kaalep. “A key part of that success was proper preparation and aligning the project scope to Enefit's needs.”
SoftwareOne’s overarching role was to design and advise in the deployment of a security baseline that aligned with Enefit’s risk profile, using Microsoft 365 E5 Advanced Security workloads.
The project can be broken down into several tasks. Initially, benefit analyses were conducted, including an attack simulation to explore next steps and demonstrate the capabilities of advanced security portals and identity security in a test environment. This involved showcasing how additional authentication measures can be enforced and triggered for users.
The implementation phase included defining objectives and goals for access, identifying risk factors, and establishing security policies. Subsequently, endpoint requirements were assessed, and a pilot was designed, configured, deployed, and monitored to ensure minimal disruption to end users and business operations. Another phase focused on hardening email and document security by integrating advanced security solutions with existing systems and services.
Finally, before project hand-off, training sessions were conducted for the Security and SOC teams, empowering them to effectively manage and optimize the deployed solutions.
The project achieved Enefit’s strategic objectives, Kaalep believes. He comments: “The deployment of Microsoft security solutions significantly enhanced Enefit’s ability to protect, detect, respond to, and mitigate cyber threats across the digital estate. With streamlined security operations and a trusted partner SoftwareOne, we ensure that security remains an integral part of our digital strategy.”
“Implementing E5 security should not be viewed as a one-time project. Instead, it's an ongoing journey requiring continuous learning, evolution, and improvement of your cybersecurity practices. After completing the initial phase, new projects and ideas will guide your next steps,” said Nikita Skitsko.
The implementation of the Microsoft 365 E5 Security platform not only hardens Enefit’s security baseline, but it also improves its security posture by putting in place the technologies and processes necessary to respond quickly to new cyber threats as they emerge.
Outcomes
As a data security global black belt, I help organizations secure AI solutions. They are concerned about data oversharing, data leaks, compliance, and other potential risks. Microsoft Purview is Microsoft’s solution for securing and governing data in generative AI.
I’m often asked how long it takes to deploy Microsoft Purview. The answer depends on the specifics of the organization and what it wants to achieve. Microsoft Purview should enable a comprehensive data governance program, but it can also provide risk mitigation for generative AI in the short term while that program is underway.
Organizations need AI solutions to add value for their customers and to stay competitive. They can’t wait for years to secure and govern these systems.
For the organizations deploying generative AI, “how long does it take to deploy Microsoft Purview?” isn’t the right question.
The risk mitigation Microsoft Purview provides for AI can begin on day one. This includes Microsoft AI, like Microsoft 365 Copilot, AI that an organization builds in-house, and AI from third parties like Google Gemini or ChatGPT.
This post will discuss ways we can secure and govern data used or generated by AI quickly, with minimal user impact, change management, and resources required.
These Microsoft Purview solutions are:
Here are short term steps you can take while the comprehensive data governance program is underway.
Microsoft Purview Data Security Posture Management for AI (DSPM for AI) provides visibility into data security risks. It reports on:
DSPM for AI reports on this for each AI application, and administrators can drill down from the reports to individual user activities. DSPM for AI collects and surfaces insights from the other Microsoft Purview solutions around generative AI risks in a single screen.
Custom sensitive information types, sensitivity labels, and information protection rules are reasoned over by DSPM for AI, but if these are not available, more than 300 out-of-the-box sensitive information types are available from day one.
DSPM for AI will use these to report on risk for the organization without additional configuration. The organization’s administrators can configure policy to mitigate these risks directly from the DSPM for AI tool.


A big concern organizations have in widely deploying generative AI is that it will return results containing sensitive information the user should not have access to. SharePoint sites created over the years may be unlabeled and accessible to the entire organization, and therefore to the AI. The “security by obscurity” that may have prevented the sensitive information from being inappropriately shared is now negated by an AI that reasons over and returns the data.
Data assessments, part of DSPM for AI and currently in preview, identify potential oversharing risks and allow the administrator to apply a sensitivity label to the SharePoint sites or the sensitive data, or to initiate a Microsoft Entra ID user access review to manage group memberships.
The administrator can engage the business stakeholder who has knowledge of the risk posed by the data and invite them to mitigate the risk or apply the policy at scale from the Microsoft Purview administration portal.

The document access controls of Microsoft Purview Information Protection, including sensitivity labels, are enforced when the data is reasoned over by AI. The user is given visibility in context that they are working with sensitive information. This awareness empowers users to protect the organization.
The sensitivity labels that enforce scoped encryption, watermarking, and other protections travel with the document as the user interacts with the AI. When the AI creates new content based on the document, the new content inherits the most restrictive label and policy.
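The inheritance rule just described (new content takes the most restrictive label among its sources) can be sketched as a simple ranking comparison. The label names and ordering below are hypothetical, not Purview's actual taxonomy or API.

```python
# Hypothetical label taxonomy, ordered from least to most restrictive.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def inherited_label(source_labels: list) -> str:
    """New AI-generated content inherits the most restrictive source label."""
    return max(source_labels, key=LABEL_RANK.__getitem__)

# Content derived from a General doc and a Highly Confidential doc:
print(inherited_label(["General", "Highly Confidential"]))  # Highly Confidential
```

Taking the maximum over a strict ordering guarantees that mixing a sensitive source into a generation can only tighten, never loosen, the protection applied to the output.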
Microsoft Purview can automatically apply sensitivity labels to AI interactions based on the organization’s existing policy for email, desktop applications, and Microsoft Teams, or new policy can be deployed for the AI.
These can be based on out-of-the-box sensitive information types for a quick start.
The Microsoft Purview Data Loss Prevention policies that the organization currently uses for email, desktop applications, and Teams can be extended to the AI or new policy for the AI can be created. Cut and paste of sensitive information or transfer of a labeled document into the AI can be prevented or only allowed with an auditable justification from the user.
A rule can be configured to prevent all documents bearing a specific label from being reasoned over by the AI. Out-of-the-box sensitive information types can be used for a quick start.
Microsoft Purview Communication Compliance provides the ability to detect regulatory compliance (for example, SEC or FINRA) and business conduct violations such as sensitive or confidential information, harassing or threatening language, and sharing of adult content.
Out-of-the-box policies can be used to monitor user prompts or AI-generated content. It provides policy enforcement in near real time and also audit logs and reporting.
Microsoft Purview Insider Risk Management correlates signals to identify potentially malicious or accidental behaviors from legitimate users. Pre-configured generative AI-specific risk detections and policy templates are now available in preview.
As the Insider Risk Management solution algorithms determine a user to be engaging in risky behavior, the data loss prevention (DLP) policies for that user can be made stricter using a feature called Adaptive Protection. It can be configured with out-of-the-box policies. This continuous monitoring and policy modulation mitigates risk while reducing administrator workload.
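The idea behind this kind of adaptive protection, where policy strictness scales with a user's assessed risk, can be sketched as a threshold mapping. The tiers, scores, and action names below are hypothetical illustrations, not Purview's actual configuration.

```python
def dlp_action(risk_score: float) -> str:
    """Map a user's assessed risk score (0-100) to a DLP enforcement level.

    Higher risk leads to stricter handling of sensitive data, with no
    manual policy changes needed as the score moves between tiers.
    """
    if risk_score >= 80:
        return "block"                # high risk: block transfers outright
    if risk_score >= 50:
        return "block_with_override"  # elevated: allow only with justification
    if risk_score >= 20:
        return "warn"                 # moderate: show a policy tip
    return "audit"                    # low risk: log only

print(dlp_action(15), dlp_action(55), dlp_action(92))
```

Because enforcement follows the score automatically, most users see the lightest policy while the riskiest activity is constrained, which is exactly the workload reduction described above.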
AI analytics can be activated from the Microsoft Purview portal to provide insights even before the Insider Risk Management solution is deployed to users. This quickly surfaces AI risks with minimal administrative workload.
Microsoft Purview can enforce AI Data Lifecycle Management, with retention of AI prompts, prompt returns, and the documents AI creates for a specified time period. This can be done globally for every interaction with an AI solution. It can be done with out-of-the-box or custom policies. This will keep these interactions available for future investigations, for regulatory compliance, or to tune policies and inform the governance program.
A policy for deletion of AI interactions can be enforced so information is not over-retained.
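The retention-then-deletion lifecycle described above can be sketched as a simple date calculation. The 365-day retention period is an example value, not a Purview default.

```python
from datetime import date, timedelta

# Sketch of the lifecycle rule: AI interactions are retained for a fixed
# period and then become eligible for deletion so information is not
# over-retained. The retention period below is an assumed example.
RETENTION_DAYS = 365

def retention_status(captured_on, today):
    """Return whether an AI interaction is still retained or may be deleted."""
    expires = captured_on + timedelta(days=RETENTION_DAYS)
    return "retained" if today < expires else "eligible_for_deletion"
```

An interaction captured within the last year stays available for investigations and compliance; older interactions become candidates for deletion under the policy.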
The organization will need to support internal investigations around the use of AI, and its legal team may need to produce AI interactions to support litigation. Microsoft Purview Audit logs and retains these interactions for both purposes.
Microsoft Purview eDiscovery can put a user's interactions with the AI, along with their other Microsoft 365 documents and communications, on hold so that they remain available to support investigations. It allows them to be searched by metadata to enhance relevancy, annotated, and produced.
Microsoft Purview Compliance Manager has pre-built assessments for AI regulations including:
These assessments are available to benchmark compliance over time, report on control status, and maintain and produce evidence for both Microsoft and the organization’s activities that support the regulatory compliance program.
If the security, governance, and compliance bases are not covered, an AI program puts the organization at risk. The program can be blocked before it deploys if the team can't demonstrate how these risks are being mitigated.
The actions suggested here can all be taken quickly, and with limited effort, to set up a generative AI deployment for success.
In this age of AI, securing AI and using it to boost security are crucial for every organization. At Microsoft, we are dedicated to helping organizations secure their future with our AI-first, end-to-end security platform.

One year ago, we launched Microsoft Security Copilot to empower defenders to detect, investigate, and respond to security incidents swiftly and accurately. Now, we are excited to announce the next evolution of Security Copilot with AI agents designed to autonomously assist with critical areas such as phishing, data security, and identity management. The relentless pace and complexity of cyberattacks have surpassed human capacity, making AI agents a necessity for modern security.
For example, phishing attacks remain one of the most common and damaging cyberthreats. Between January and December 2024, Microsoft detected more than 30 billion phishing emails targeting customers.1 The volume of these cyberattacks overwhelms security teams relying on manual processes and fragmented defenses, making it difficult to both triage malicious messages promptly and leverage data-driven insights for broader cyber risk management.
The phishing triage agent in Microsoft Security Copilot being unveiled today can handle routine phishing alerts and cyberattacks, freeing up human defenders to focus on more complex cyberthreats and proactive security measures. This is just one way agents can transform security.
Additionally, securing and governing AI continues to be the top priority for organizations, and we are excited to advance our purpose-built solutions with new innovations across Microsoft Defender, Microsoft Entra, and Microsoft Purview.
Read on to learn about other agents we are introducing to Security Copilot and important developments in securing AI.
Microsoft Threat Intelligence now processes 84 trillion signals per day, revealing the exponential growth in cyberattacks, including 7,000 password attacks per second.1 Scaling cyber defenses through AI agents is now an imperative to keep pace with this threat landscape. We are expanding Security Copilot with six security agents built by Microsoft and five security agents built by our partners—available for preview in April 2025.
Building on the transformative capabilities of Security Copilot, the six Microsoft Security Copilot agents enable teams to autonomously handle high-volume security and IT tasks while seamlessly integrating with Microsoft Security solutions. Purpose-built for security, agents learn from feedback, adapt to workflows, and operate securely—aligned to Microsoft’s Zero Trust framework. With security teams fully in control, agents accelerate responses, prioritize risks, and drive efficiency to enable proactive protection and strengthen an organization’s security posture.

Security Copilot agents will be available across the Microsoft end-to-end security platform, designed for the following:
Phishing Triage Agent in Microsoft Defender triages phishing alerts with accuracy to identify real cyberthreats and false alarms. It provides easy-to-understand explanations for its decisions and improves detection based on admin feedback.
Security Copilot’s agentic capabilities are an example of how we continue to deliver innovation leveraging our decades of AI research. See how agents work.
This is just the beginning; our security AI research is pushing the boundaries of innovation, and we are eager to continuously bring even greater value to our customers at the speed of AI.
Vice President of Microsoft Security AI Applied Research
Security is a team sport and Microsoft is committed to empowering our security ecosystem with an open platform upon which partners can build to deliver value to customers. In this spirit, the following five AI agents from our partners will be available in Security Copilot:
An agentic approach to privacy will be game-changing for the industry. Autonomous AI agents will help our customers scale, augment, and increase the effectiveness of their privacy operations. Built using Microsoft Security Copilot, the OneTrust Privacy Breach Response Agent demonstrates how privacy teams can analyze and meet increasingly complex regulatory requirements in a fraction of the time required historically.
Chief Product and Strategy Officer, OneTrust
Learn more about Security Copilot agents and get started with Security Copilot. Current Security Copilot customers can join our Customer Connection Program for the latest updates.
We are also announcing Microsoft Purview data security investigations to help data security teams quickly understand and mitigate risks associated with sensitive data exposure. Data security investigations introduce AI-powered deep content analysis, which identifies sensitive data and other risks linked to incidents. Incident investigators can use these insights to collaborate securely with partner teams and simplify complex and time-consuming tasks, thus improving mitigation. This solution links data security investigations to Defender incidents and Purview insider risk cases—available for preview starting April 2025.
Successful AI transformation requires a strong cybersecurity foundation. As organizations rapidly adopt generative AI, there is growing urgency to secure and govern the creation, adoption, and use of AI in the workplace. According to our new report, “Secure employee access in the age of AI,” 57% of organizations report an increase in security incidents from AI usage. And while most organizations recognize the need for AI controls, 60% have not yet started.
Securing AI is still a relatively new challenge, and leaders share some specific concerns: how to prevent data oversharing and leakage; how to minimize new AI threats and vulnerabilities; and how to comply with shifting regulatory compliance requirements. Microsoft Security solutions are purpose-built for AI to help every organization address these concerns. We’re announcing new advanced capabilities so that organizations can secure their AI investments—both Microsoft AI and other AI.
Organizations developing their own custom AI solutions will need to strengthen the security posture for AI that they source from multiple models, running in multiple AI platforms and clouds. To address this need, Microsoft Defender has extended AI security posture management beyond Microsoft Azure and Amazon Web Services to include Google Vertex AI and all models in the Azure AI Foundry model catalog. Available for preview in May 2025, this coverage includes Gemini, Gemma, Meta Llama, Mistral, and custom models. With new multicloud interoperability, organizations will gain broader code-to-runtime AI security posture visibility across Microsoft Azure, Amazon Web Services, and Google Cloud. Microsoft Defender can give organizations a jumpstart to securing AI posture across multimodel and multicloud environments.
With AI comes new risks, including new cyberattack surfaces and unknown vulnerabilities. The Open Worldwide Application Security Project (OWASP) identifies the highest priority risks and mitigations for generative AI apps. Starting in May 2025, new and enriched AI detections for several risks identified by OWASP such as indirect prompt injection attacks, sensitive data exposure, and wallet abuse will be generally available in Microsoft Defender. With these new detections, SOC analysts can better protect and defend custom-built AI apps with new safeguards for Azure OpenAI Service and models found in the Azure AI Foundry catalog.
With the rapid user adoption of generative AI, many organizations are uncovering widespread use of AI apps that have not yet been approved by IT or security teams. This unsanctioned, unprotected use of AI has created a “shadow AI” phenomenon, which has drastically increased the risk of sensitive data leakage. We are announcing general availability of AI web category filter in Microsoft Entra internet access to help enforce granular access controls that can curb the risk of shadow AI by enforcing policies governing which users and groups have access to different types of AI applications.
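The kind of access decision an AI web category filter makes can be sketched as a policy mapping user groups to the categories of AI apps they may reach. The group names and categories below are illustrative assumptions, not Microsoft Entra's actual taxonomy.

```python
# Sketch of category-based access control to curb shadow AI: each user
# group is allowed a set of AI app categories. Names are illustrative.
POLICY = {
    "engineering": {"sanctioned_ai"},                      # approved AI apps only
    "security":    {"sanctioned_ai", "unsanctioned_ai"},   # may investigate shadow AI
}

def allow_access(group, app_category):
    """Return True if users in the group may reach apps in this category."""
    return app_category in POLICY.get(group, set())
```

A request from an engineering user to an unsanctioned AI app would be denied, while the security team retains visibility into the same destinations.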
With policy enforcement in place to govern authorized access to AI apps, the next layer of defense is to prevent users from leaking sensitive data into AI apps. To address this, we are announcing the preview of Microsoft Purview browser data loss prevention (DLP) controls built into Microsoft Edge for Business. This helps security teams enforce DLP policies to prevent sensitive data from being typed into generative AI apps, starting with ChatGPT, Copilot Chat, DeepSeek, and Google Gemini.
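The screening step such a browser DLP control performs before text reaches a generative AI app can be sketched with a couple of patterns. The two detectors here (a card-number-like string and an email address) are examples only; the real classifier set is far richer.

```python
import re

# Sketch of a pre-submission DLP check on text typed into a gen AI app.
# The patterns are illustrative examples, not Purview's classifier catalog.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-number-like digit run
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def contains_sensitive(text):
    """Return True if the text matches any sensitive-data pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)
```

When the check fires, the policy can block the submission or warn the user before the text ever leaves the browser.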
Learn more about our new innovations in Security for AI.
While email continues to be the primary cyberthreat vector for phishing, collaboration software has become a common target. Generally available in April 2025, Microsoft Defender for Office 365 will protect users against phishing and other advanced cyberthreats within Teams. With inline protection, Teams will have better protection against malicious URLs, including real-time detonation of attachments and links. And to give SOC teams full visibility into related attempts and incidents, alerts and data will be available in Microsoft Defender.
We continue to innovate across the Microsoft Security portfolio, applying the principles of our Secure Future Initiative, to deliver powerful, end-to-end protection, to give defenders industry-leading AI, and to empower every organization with the tools to secure and govern AI. We are grateful for our customers and partners, and together we look forward to building a more secure world for all.
As AI use increases, security remains a top concern, and we often hear that organizations are worried about risks that can come with rapid adoption. Google Cloud is committed to helping our customers confidently build and deploy AI in a secure, compliant, and private manner.
Today, we’re introducing a new solution that can help you mitigate risk throughout the AI lifecycle. We are excited to announce AI Protection, a set of capabilities designed to safeguard AI workloads and data across clouds and models — irrespective of the platforms you choose to use.
AI Protection helps teams comprehensively manage AI risk by:
AI Protection is integrated with Security Command Center (SCC), our multicloud risk-management platform, so that security teams can get a centralized view of their AI posture and manage AI risks holistically in context with their other cloud risks.

Effective AI risk management begins with a comprehensive understanding of where and how AI is used within your environment. Our capabilities help you automatically discover and catalog AI assets, including the use of models, applications, and data — and their relationships.
Understanding what data supports AI applications and how it’s currently protected is paramount. Sensitive Data Protection (SDP) now extends automated data discovery to Vertex AI datasets to help you understand data sensitivity and data types that make up training and tuning data. It can also generate data profiles that provide deeper insight into the type and sensitivity of your training data.
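What such a data profile conveys can be sketched as a per-column sensitivity mapping rolled up into an overall rating. The infotype names and sensitivity levels below are illustrative; Sensitive Data Protection has its own detector catalog and scoring.

```python
# Sketch of a data profile for a training dataset: each column's detected
# infotype maps to a sensitivity level, and the dataset takes the maximum.
# Infotype names and levels are illustrative assumptions.
SENSITIVITY = {"EMAIL_ADDRESS": "high", "PERSON_NAME": "moderate", "ZIP_CODE": "low"}

def profile(columns):
    """Map each column to its sensitivity and report the dataset's maximum."""
    order = ["low", "moderate", "high"]
    levels = {col: SENSITIVITY.get(info, "low") for col, info in columns.items()}
    overall = max(levels.values(), key=order.index) if levels else "low"
    return levels, overall
```

A tuning dataset with one email column would be rated high overall, flagging it for attention before it is used for training.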
Once you know where sensitive data exists, AI Protection can use Security Command Center’s virtual red teaming to identify AI-related toxic combinations and potential paths that threat actors could take to compromise this critical data, and recommend steps to remediate vulnerabilities and make posture adjustments.
Model Armor, a core capability of AI Protection, is now generally available. It guards against prompt injection, jailbreak, data loss, malicious URLs, and offensive content. Model Armor can support a broad range of models across multiple clouds, so customers get consistent protection for the models and platforms they want to use — even if that changes in the future.

Today, developers can easily integrate Model Armor’s prompt and response screening into applications using a REST API or through an integration with Apigee. The ability to deploy Model Armor in-line without making any app changes is coming soon through integrations with Vertex AI and our Cloud Networking products.
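As a rough illustration of wiring prompt screening into an application, the sketch below builds a request for a Model Armor-style sanitize call. The endpoint host, resource path, method name, and payload shape are assumptions modeled on Google Cloud conventions; consult the Model Armor API reference before relying on any of them. The code only constructs the request and does not send it.

```python
# Hedged sketch: build (but don't send) a prompt-screening request for a
# Model Armor template. Endpoint, path, method name, and body shape are
# assumptions, not verified API details.
def build_screen_request(project, location, template, prompt):
    """Return the assumed URL and JSON body for screening a user prompt."""
    url = (
        f"https://modelarmor.{location}.rep.googleapis.com/v1/projects/"
        f"{project}/locations/{location}/templates/{template}:sanitizeUserPrompt"
    )
    body = {"user_prompt_data": {"text": prompt}}
    return url, body
```

In a real application, the returned screening verdict would decide whether the prompt is forwarded to the model or rejected.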
We are using Model Armor not only because it provides robust protection against prompt injections, jailbreaks, and sensitive data leaks, but also because we're getting a unified security posture from Security Command Center. We can quickly identify, prioritize, and respond to potential vulnerabilities — without impacting the experience of our development teams or the apps themselves. We view Model Armor as critical to safeguarding our AI applications and being able to centralize the monitoring of AI security threats alongside our other security findings within SCC is a game-changer.
Chief Cybersecurity and Technology Risk Officer, Dun & Bradstreet
Organizations can use AI Protection to strengthen the security of Vertex AI applications by applying postures in Security Command Center. These posture controls, designed with first-party knowledge of the Vertex AI architecture, define secure resource configurations and help organizations prevent drift or unauthorized changes.
AI Protection operationalizes security intelligence and research from Google and Mandiant to help defend your AI systems. Detectors in Security Command Center can be used to uncover initial access attempts, privilege escalation, and persistence attempts targeting AI workloads. New detectors for AI Protection, based on the latest frontline intelligence, are coming soon to help identify and manage runtime threats such as foundational model hijacking.
As AI-driven solutions become increasingly commonplace, securing AI systems is paramount and surpasses basic data protection. AI security — by its nature — necessitates a holistic strategy that includes model integrity, data provenance, compliance, and robust governance.
Research Director, IDC
“Piecemeal solutions can leave and have left critical vulnerabilities exposed, rendering organizations susceptible to threats like adversarial attacks or data poisoning, and added to the overwhelm experienced by security teams. A comprehensive, lifecycle-focused approach allows organizations to effectively mitigate the multi-faceted risks surfaced by generative AI, as well as manage increasingly expanding security workloads. By taking a holistic approach to AI protection, Google Cloud simplifies and thus improves the experience of securing AI for customers," she said.
The Mandiant AI Security Consulting Portfolio offers services to help organizations assess and implement robust security measures for AI systems across clouds and platforms. Consultants can evaluate the end-to-end security of AI implementations and recommend opportunities to harden AI systems. We also provide red teaming for AI, informed by the latest attacks on AI services seen in frontline engagements.
Customers can also benefit from using Google Cloud’s infrastructure for building and running AI workloads. Our secure-by-design, secure-by-default cloud platform is built with multiple layers of safeguards, encryption, and rigorous software supply chain controls.
For customers whose AI workloads are subject to regulation, we offer Assured Workloads to easily create controlled environments with strict policy guardrails that enforce controls such as data residency and customer-managed encryption. Audit Manager can produce evidence of regulatory and emerging AI standards compliance. Confidential Computing can help ensure data remains protected throughout the entire processing pipeline, reducing the risk of unauthorized access, even by privileged users or malicious actors within the system.
Additionally, for organizations looking to discover unsanctioned use of AI, or shadow AI, in their workforce, Chrome Enterprise Premium can provide visibility into end-user activity as well as prevent accidental and intentional exfiltration of sensitive data in gen AI applications.