Sabiki Security Blog

Non-Human Identity Security & AI Governance Insights

Research, guidance, and practical advice on securing AI agents and non-human identities in Microsoft 365.

What Are Non-Human Identities — And Why Are They Your Biggest Security Risk in 2026?

Service accounts, OAuth apps, AI agents, automation scripts — non-human identities now outnumber human users by as much as 100 to 1 in most enterprise environments. Most organisations have no idea how many they have, let alone what they can access.

When security teams talk about identity risk, they're usually thinking about people — phishing attacks, compromised passwords, account takeovers. But in 2026, the most dangerous identities in your Microsoft 365 environment aren't human at all.

Non-human identities are the service accounts, OAuth applications, AI agents, automation scripts, and API integrations that operate continuously in the background of your organisation's digital infrastructure. They authenticate to systems, access data, and perform actions — often with permissions that would make a senior administrator uncomfortable — and they do all of this without any of the controls that apply to human users.

No multi-factor authentication. No conditional access policies triggered by suspicious login locations. No one asking "should this account really have access to that?" Non-Human Identities are largely invisible, largely ungoverned, and increasingly, the entry point of choice for attackers.

The scale of the problem

In most enterprise Microsoft 365 environments, non-human identities outnumber human users by somewhere between 25 and 100 to 1. A mid-sized organisation with 500 employees might have 15,000 to 50,000 active service principals, OAuth consent grants, and registered applications — the vast majority of which were created without any formal security review and have never been audited since.

The numbers get more alarming when you look at what those identities can actually do. CyberArk's 2025 Identity Security Threat Landscape Report found over 50 million machine identity credentials on the dark web — API keys, service account tokens, OAuth client secrets — a 250% increase since 2021. These aren't theoretical risks. They're live credentials, actively traded and used.

"AI agents inherit the permissions of the users and admins who deploy them. If an admin creates an AI agent, it can do whatever they want, can access anything." — Ivan Fioravanti, CoreView

What makes Non-Human Identities different from human identities

Traditional identity security tools were designed for human users. They look for suspicious login patterns, unusual access times, unfamiliar locations — signals that make sense when you're watching for a compromised human account. Non-human identities break every one of these assumptions.

A non-human identity that suddenly starts accessing 10,000 mailboxes at 3am isn't suspicious by traditional metrics — it might just be running a scheduled report. An API integration that silently accumulates permissions over six months doesn't trigger an alert. A service principal that was created by an employee who left the company three years ago and has been running quietly ever since — nobody's looking at that.

Non-Human Identities also lack the lifecycle controls that apply to human accounts. When an employee leaves, IT offboards their account. When an OAuth application is granted permissions by an admin who later leaves, those permissions stay — indefinitely. The application keeps running. The access keeps working. Nobody notices.

The three categories of Non-Human Identity risk

1. Excessive permissions

Most Non-Human Identities accumulate permissions over time. An integration that started with Mail.Read gets upgraded to Mail.ReadWrite when someone needs to send automated emails. Then Application.ReadWrite.All gets added when a new feature needs to register apps. Each change feels small and justified in context. The cumulative result is a service principal with near-administrator-level access that nobody has reviewed in three years.
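This kind of drift can be caught mechanically by diffing an identity's current grants against the set it was originally approved with. A minimal sketch (the `APPROVED_SCOPES` baseline and the scope list are illustrative, not pulled from any real tenant):

```python
# Illustrative baseline: the scopes this integration was originally signed off with.
APPROVED_SCOPES = {"Mail.Read"}

def permission_creep(current_scopes):
    """Return any scopes granted beyond the approved baseline."""
    return sorted(set(current_scopes) - APPROVED_SCOPES)

# Scopes as they appear in the tenant today, after two "small" upgrades:
current = ["Mail.Read", "Mail.ReadWrite", "Application.ReadWrite.All"]
print(permission_creep(current))  # the drift that nobody reviewed
```

Anything the function returns is, by definition, access that was never part of the original approval and is due for review.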

2. Orphaned identities

When the person responsible for a non-human identity leaves the organisation — or when the project the identity was created for ends — the identity typically stays. It continues to authenticate, continues to hold permissions, and continues to represent an attack surface. In our experience monitoring Microsoft 365 environments, unowned service principals typically represent 20-40% of all Non-Human Identities in a given tenant.

3. Credential age

Unlike human passwords, which most organisations rotate on a policy cycle, Non-Human Identity credentials — client secrets, API keys, certificates — are often set and forgotten. We regularly see credentials in Microsoft 365 tenants that were created two, three, or even five years ago and have never been rotated. An attacker who obtains one of these credentials has persistent, long-term access.
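Auditing credential age is straightforward once the metadata is in hand. A sketch, assuming credential records shaped like the `passwordCredentials` objects Microsoft Graph returns (`keyId`, `startDateTime`); the 365-day policy is an example, not a Microsoft default:

```python
from datetime import datetime, timezone

MAX_AGE_DAYS = 365  # example rotation policy

def stale_credentials(creds, now=None):
    """Flag credentials older than the rotation policy, with their age in days."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for cred in creds:
        start = datetime.fromisoformat(cred["startDateTime"].replace("Z", "+00:00"))
        age_days = (now - start).days
        if age_days > MAX_AGE_DAYS:
            stale.append((cred["keyId"], age_days))
    return stale

creds = [
    {"keyId": "a1", "startDateTime": "2021-03-01T00:00:00Z"},  # set in 2021, never rotated
    {"keyId": "b2", "startDateTime": "2025-11-01T00:00:00Z"},  # recent
]
print(stale_credentials(creds, now=datetime(2026, 1, 1, tzinfo=timezone.utc)))
```

Anything this surfaces is exactly the persistent-access scenario described above: a secret that has outlived every assumption made when it was created.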

What you can do about it

The first step is visibility. You cannot govern what you cannot see. A thorough inventory of every service principal, OAuth application, and registered app in your Microsoft 365 tenant is the foundation of any Non-Human Identity security programme — and it almost always reveals far more identities than anyone expected.
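As a concrete starting point, the Microsoft Graph `servicePrincipals` collection can be walked page by page. The sketch below shows only the paging logic; authentication and the HTTP layer are abstracted behind a `get_json(url)` callable you would supply (for example via MSAL plus `requests`):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def list_service_principals(get_json):
    """Collect every service principal by following @odata.nextLink paging."""
    url = f"{GRAPH}/servicePrincipals?$select=id,appDisplayName,servicePrincipalType"
    items = []
    while url:
        page = get_json(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # absent on the final page
    return items

# Demonstration with canned pages standing in for real Graph responses:
pages = {
    f"{GRAPH}/servicePrincipals?$select=id,appDisplayName,servicePrincipalType":
        {"value": [{"id": "sp-1"}], "@odata.nextLink": "page-2"},
    "page-2": {"value": [{"id": "sp-2"}, {"id": "sp-3"}]},
}
print(len(list_service_principals(pages.__getitem__)))
```

Running this against a real tenant is typically the moment the scale problem described above becomes tangible: the count is rarely in the hundreds.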

From there, the priorities are clear: identify unowned identities and assign accountability, review permissions against the principle of least privilege, establish credential rotation policies, and implement continuous monitoring to catch changes as they happen rather than during an annual audit.

The organisations that establish these foundations now — before an incident forces the issue — will be the resilience leaders of tomorrow. Those that don't will find out about their Non-Human Identity exposure in the worst possible way.

How AIRM helps

AIRM automatically discovers and inventories every non-human identity in your Microsoft 365 tenant — service principals, OAuth apps, AI agents, and automation accounts. Every identity is scored for risk, checked for ownership, and monitored continuously for changes. No agents required, no complex setup — connect in minutes.

Your Approved AI Agents Could Already Be Compromised: The Case for Continuous Monitoring

Sanctioning an AI agent isn't a one-time security decision — it's the beginning of a monitoring obligation. Here's why point-in-time approval is no longer enough, and what continuous behavioural monitoring looks like in practice.

When an IT administrator approves an AI agent for use in the organisation, they make a reasonable decision based on the information available at that moment. The application is from a trusted vendor, the permissions look appropriate for the use case, and the business need is legitimate. The approval is granted. The agent is deployed. And then — in most organisations — the security team stops thinking about it.

This is the fundamental flaw in how organisations approach AI agent security in 2026. Approval is treated as a destination rather than a starting point. But the threat landscape for AI agents is dynamic, not static. An agent that was safe on day one can represent a critical risk on day 47.

How approved agents become threats

There are several mechanisms by which a legitimately approved AI agent can transition from safe to dangerous:

Vendor-side compromise

If the vendor whose AI agent you've deployed is itself compromised, an attacker can potentially push malicious updates or modify the agent's behaviour without any change to the permissions in your tenant. Your security tools see the same trusted service principal they've always seen. The behaviour, however, is now under the control of an attacker.

Permission creep

AI agents frequently require additional permissions as their capabilities expand. Each individual permission request might be reasonable and legitimately approved. The cumulative effect, however, can be a service principal with access to resources far beyond what the original use case required. A marketing automation tool that started with Calendar.Read and gradually accumulated Mail.ReadWrite, Directory.Read.All, and Files.ReadWrite is a very different risk profile from what was originally approved.

Scope drift

Even without any change to the agent's configuration, the resources it accesses can drift over time. An AI agent with access to "all SharePoint sites" that was deployed when the company had 20 sites now has access to 200 sites — including sensitive HR and legal repositories that didn't exist when the original approval was granted.

Rogue behaviour from within

AI agents can exhibit unexpected behaviour as a result of prompt injection attacks, model drift, or simply edge cases in their programming. An agent that has been reliably performing its intended function can start accessing data it was never designed to access, not because it was compromised externally, but because something in its operating environment changed.

"The organisations that establish agent inventories, privilege policies, and runtime controls today will be the resilience leaders of tomorrow. Those that delay will find themselves managing autonomous systems operating beyond their security team's line of sight." — State of Agentic AI Security Report, 2026

What continuous monitoring actually means

Continuous monitoring of AI agents is not the same as continuous scanning. Many organisations conflate the two. Running a daily inventory check tells you what exists — it doesn't tell you whether the behaviour of what exists has changed.

True continuous monitoring requires three things:

Baseline establishment. Before you can detect anomalous behaviour, you need to know what normal looks like. For every AI agent in your environment, you need a behavioural baseline: which resources it typically accesses, at what times, with what frequency, and using which permission scopes. This baseline must be built from observation, not assumption.

Deviation detection. Once a baseline is established, every subsequent scan should be compared against it. Any deviation — new permissions, new resource access, unusual activity patterns — should be flagged for review. The sensitivity of this detection matters enormously: too sensitive and you drown in false positives; not sensitive enough and meaningful changes slip through.

Contextualised alerting. A deviation is only useful if it reaches the right person with enough context to act. An alert that says "service principal X has new permissions" is much less useful than one that says "Claude-for-Work, approved on March 15, has acquired Directory.ReadWrite.All — this permission was not present in the previous scan and represents a significant blast radius increase."
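The baseline-and-compare loop at the heart of deviation detection can be reduced to a set difference per dimension. A simplified sketch in which a behavioural fingerprint is just two sets (a real baseline would also model timing, frequency, and volume):

```python
def detect_deviations(baseline, scan):
    """Compare one identity's latest scan against its behavioural baseline.

    Both arguments are dicts of {"permissions": set, "resources": set} —
    a deliberately simplified stand-in for a richer fingerprint."""
    findings = []
    for perm in scan["permissions"] - baseline["permissions"]:
        findings.append(f"new permission: {perm}")
    for res in scan["resources"] - baseline["resources"]:
        findings.append(f"new resource access: {res}")
    return sorted(findings)

baseline = {"permissions": {"Calendar.Read"}, "resources": {"site:Marketing"}}
scan = {"permissions": {"Calendar.Read", "Directory.ReadWrite.All"},
        "resources": {"site:Marketing", "site:Legal"}}
print(detect_deviations(baseline, scan))
```

Each finding is a candidate for the contextualised alerting described above: the detector supplies the what, and the alerting layer supplies the who and the why-it-matters.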

The accumulation advantage

One of the most underappreciated aspects of continuous monitoring is the compounding value of historical data. An anomaly detector that has been watching a specific AI agent for six months is dramatically more accurate than one seeing it for the first time. It knows the agent's normal access patterns down to the hour of day. It knows which permissions have been stable and which have changed. It can distinguish between a legitimate configuration update and an unexpected permission grant.

This is why point-in-time audits, even regular ones, cannot substitute for continuous monitoring. The intelligence that drives effective anomaly detection accumulates over time. Every scan adds to the picture. The longer your monitoring has been running, the more precisely it can identify what doesn't belong.

How AIRM helps

AIRM's Anomaly Intelligence Engine builds a unique behavioural fingerprint for every AI agent in your Microsoft 365 environment — then flags deviations as they happen. The engine accumulates intelligence with every scan, becoming more precise over time. When something changes that shouldn't have, you know about it within the next scan cycle.

EU AI Act and Microsoft 365: What Your AI Agents Must Comply With Before August 2026

The EU AI Act's high-risk system obligations phase in on August 2, 2026. If your organisation uses AI agents in Microsoft 365, you need to understand what's required — and whether you can demonstrate compliance.

The EU AI Act is the world's first comprehensive regulatory framework for artificial intelligence. It came into force in August 2024, with a phased implementation schedule that reaches a critical milestone on August 2, 2026 — the date by which organisations operating high-risk AI systems must demonstrate compliance with a substantial set of governance obligations.

For security teams managing Microsoft 365 environments, this creates a specific and urgent challenge. Many of the AI agents operating in M365 tenants — Copilot connectors, Power Automate agents, third-party AI integrations — may qualify as high-risk AI systems under the Act's classification criteria. Whether they do or not, the Act's broader requirements around transparency, human oversight, and documentation apply to organisations that deploy AI at scale.

What the EU AI Act requires

The Act takes a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited), high risk, limited risk, and minimal risk. For most enterprise AI deployments in Microsoft 365, the relevant obligations fall under Articles 6 through 14.

Article 6 — Classification

Organisations must assess whether their AI systems constitute high-risk systems under the Act's criteria. AI systems used in employment, essential services, critical infrastructure, or that make consequential decisions affecting people's rights are typically classified as high-risk. Security teams need to maintain a formal inventory of all AI systems and document their classification assessment.

Article 9 — Risk Management

High-risk AI systems must have a documented risk management system that is continuous — not a one-time assessment. This means ongoing monitoring, regular risk evaluation, and documented mitigation measures. For AI agents in Microsoft 365, this creates a direct requirement for the kind of continuous behavioural monitoring that most organisations don't currently have in place.

Article 13 — Transparency and Logging

High-risk AI systems must automatically log their operation throughout their lifecycle. This creates an audit trail requirement that is meaningful and retrievable — not just a theoretical capability. Organisations must be able to demonstrate, to a regulator who asks, what their AI systems did and when.
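An audit trail that is "meaningful and retrievable" in practice means structured, append-only records. A sketch of one possible event shape (the field names are illustrative, not mandated by the Act):

```python
import io
import json
from datetime import datetime, timezone

def log_agent_event(sink, agent_id, action, target):
    """Append one auditable AI-agent event as a JSON line."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "target": target,
    }
    sink.write(json.dumps(event) + "\n")

# In production the sink would be an append-only log store; a buffer works here.
sink = io.StringIO()
log_agent_event(sink, "copilot-hr-bot", "read", "sharepoint:HR-Policies")
print(sink.getvalue().strip())
```

JSON lines like these are trivially retrievable and filterable, which is exactly what a regulator asking "what did this system do, and when?" needs to see.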

Article 14 — Human Oversight

Perhaps the most operationally significant provision. Organisations must implement effective human oversight of their AI systems — meaning real mechanisms to monitor, intervene in, and halt AI operations when needed. An AI agent running without any oversight mechanism is a direct compliance gap under Article 14.

Non-compliance with the EU AI Act can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. For high-risk AI obligations specifically, the fines reach €15 million or 3% of global turnover.

What this means for Microsoft 365 environments

The practical implications for organisations running AI agents in Microsoft 365 are significant. Most organisations currently have no formal inventory of their AI agents. They have no documented classification assessment. They have no continuous risk management system. And they have no meaningful human oversight mechanism beyond "an admin can revoke the application's permissions if they notice something wrong."

Regulators are not going to accept the defence that AI governance is complex and the technology moves fast. The Act has been in force since 2024. The high-risk obligations have been known since 2023. August 2026 is not a surprise.

Building a compliant programme before the deadline

A compliant AI governance programme for Microsoft 365 requires, at minimum: a complete inventory of all AI agents operating in the tenant; a documented risk classification for each; ongoing monitoring with evidence of that monitoring; human review and approval processes; and the ability to generate compliance evidence on demand.

This is not a theoretical or distant requirement. For organisations with European operations or European customers, the clock is running.

How AIRM helps

AIRM maps every AI agent finding to the EU AI Act's control framework — Articles 6, 9, 13, 14, 28, and 53. Every scan produces evidence of continuous monitoring. The compliance report for EU AI Act can be generated in one click, providing auditor-ready documentation of your AI governance programme.

How MSPs Can Turn Non-Human Identity Security Into a High-Margin Recurring Revenue Stream

Non-human identity monitoring is one of the fastest-growing security categories — and one that most MSPs aren't offering yet. Here's the commercial case, the delivery model, and what clients will actually pay.

The MSP security market is crowded. Endpoint protection, email security, SIEM, SOC-as-a-service — every MSP has these conversations with their clients. Differentiation is hard when your competitors are selling the same tools from the same vendors.

Non-human identity security is different. Most MSPs aren't offering it yet. Most of your clients don't have it and don't know they need it. The category is growing rapidly, the regulatory tailwinds are strong, and the commercial structure — per-tenant monthly recurring revenue — fits the MSP model perfectly.

Why clients will pay for this

The conversation with a client about Non-Human Identity security almost always starts the same way: "How many AI agents and application integrations do you think you have in your Microsoft 365 tenant?" The client guesses. Then you show them the real number. That moment — when a client realises they have 400 service principals they didn't know about, several of them with near-admin permissions and credentials that haven't been rotated in three years — is a very effective sales conversation.

The conversation gets more urgent when you add the compliance dimension. If the client is in a regulated industry — financial services, healthcare, legal, professional services — the EU AI Act, DORA, or ISO 42001 obligations give you a concrete regulatory hook. This isn't "nice to have" security. This is "you need to be able to demonstrate AI governance to your regulators."

The delivery model

Non-Human Identity security as a managed service is operationally lightweight. Unlike endpoint security, which requires agents on every device, or SOC services, which require 24/7 analyst coverage, Non-Human Identity monitoring is largely automated. The platform scans the tenant, surfaces findings, generates alerts, and creates PSA tickets. The MSP technician reviews the tickets, takes guided response actions, and reviews the weekly reports with the client.

A skilled technician can manage 10-15 client tenants for Non-Human Identity security in roughly the same time they'd spend managing 2-3 clients for more labour-intensive security services. The margin profile is significantly better.

Pricing and margin structure

At the MSP tier, Non-Human Identity security tools are typically priced per tenant per month rather than per user — which aligns with how MSPs think about their client base. A platform cost of $49-69 per tenant per month, billed to clients at $120-150 per tenant per month, generates $50-100 per tenant per month in gross margin.

For an MSP with 25 clients, that's $1,250 to $2,500 per month in additional gross margin from a service that requires minimal ongoing labour. Over 12 months, that's $15,000 to $30,000 in additional gross profit — all recurring, all from existing clients, none requiring new sales effort once the service is established.
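The arithmetic above is simple to parameterise for your own client base. A sketch using the article's own figures:

```python
def gross_margin(tenants, platform_cost, client_price, months=1):
    """Gross margin for a per-tenant-per-month NHI monitoring service."""
    return tenants * (client_price - platform_cost) * months

# 25 client tenants, $49/tenant platform cost, billed to clients at $120/tenant:
print(gross_margin(25, 49, 120))             # per month
print(gross_margin(25, 49, 120, months=12))  # per year
```

Swapping in your actual tenant count and price points gives the margin picture for your own book of business.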

The client reporting story

One of the most powerful elements of a non-human identity security service is the client deliverable: a branded, professional PDF report that explains their AI agent and Non-Human Identity risk posture in plain English. This isn't a technical log file. It's a board-ready document that the client can hand to their leadership or their auditors and say "here is evidence of our AI governance programme."

MSPs who provide this kind of high-quality, readable security reporting build significantly stronger client relationships than those who produce raw data exports. The report becomes the tangible evidence of the service's value — and a powerful retention tool at renewal.

How AIRM helps

AIRM is built channel-first. Multi-tenant dashboard, PSA integrations with ConnectWise, HaloPSA and Autotask, and fully branded client reports. Every report carries your logo and company name. MSP pricing starts at $69 per tenant per month with volume discounts from 10 tenants.

Blast Radius: The Security Metric That Changes How You Think About Risk

Risk scoring tells you how suspicious an identity is. Blast radius tells you how catastrophic its compromise would be. Understanding the difference — and why you need both — is fundamental to modern identity security.

For years, identity security has been focused on behaviour. Is this account doing something unusual? Has it accessed resources it doesn't normally access? Is it logging in from an unexpected location? These are valid and important signals. But they address only one dimension of identity risk — the likelihood of compromise — while ignoring the other dimension entirely: the impact of compromise.

Blast radius fills that gap. It answers a different and equally important question: if this identity were compromised right now, how much damage could an attacker do?

Why blast radius matters more than most people realise

Consider two service principals in the same Microsoft 365 tenant. The first has been showing unusual behaviour — accessing resources outside its normal pattern, making API calls at unusual hours. Traditional risk scoring flags it as medium risk, with a score of 45. The second is completely quiet. It runs exactly when it's supposed to, accesses exactly what it's supposed to, and has never triggered a single anomaly alert. Its risk score is 12. Low risk by every traditional measure.

Now consider their permissions. The first service principal has Mail.Read access to a single shared mailbox. Even if it's completely compromised, an attacker gains read access to that mailbox and nothing else. The blast radius is small. The second service principal has Application.ReadWrite.All and AppRoleAssignment.ReadWrite.All — two of the most powerful permissions available in Microsoft 365. With these permissions, an attacker can create new applications, grant them any permissions they want, and effectively escalate themselves to any privilege level in the tenant. The blast radius is catastrophic.

Which one represents the greater risk to the organisation? Most traditional security tools would point to the first. The correct answer is emphatically the second.

A quiet, well-behaved service principal with catastrophic permissions is arguably more dangerous than a noisy, suspicious one with limited access — because it won't trigger behavioural alerts while still representing a devastating potential compromise.

How blast radius is calculated

Blast radius is derived from the permissions an identity holds — specifically, from mapping those permissions to the resources they control and the actions they enable. A permission like Mail.Read maps to read access to mailboxes. Mail.ReadWrite.All maps to read, write, and delete access to all mailboxes in the tenant. Directory.ReadWrite.All maps to the ability to modify any user, group, or organisation object in the directory.

Each permission carries a resource impact score based on the sensitivity of the data it controls, the breadth of access it grants (single user vs entire tenant), and the write/delete capabilities it enables. These scores are combined into an overall blast radius score for the identity.
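A toy version of such a scoring model looks like this (the weights and the permission-to-resource mapping are invented for illustration; they are not AIRM's actual model):

```python
# (weight, access) per Graph permission — illustrative values only.
IMPACT = {
    "Mail.Read":                 (1, "read"),   # one mailbox scope
    "Mail.ReadWrite.All":        (7, "write"),  # every mailbox in the tenant
    "Directory.ReadWrite.All":   (9, "write"),  # the entire directory
    "Application.ReadWrite.All": (9, "write"),  # app registrations -> escalation path
}

def blast_radius_score(permissions):
    """Sum per-permission impact; unknown permissions score zero in this sketch."""
    return sum(IMPACT.get(p, (0, None))[0] for p in permissions)

print(blast_radius_score(["Mail.Read"]))                      # small
print(blast_radius_score(["Application.ReadWrite.All",
                          "Directory.ReadWrite.All"]))        # catastrophic
```

The key property is that the score is driven entirely by what the identity could do, independent of anything it has done.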

The visual representation of blast radius is a node graph: the identity at the centre, connected by edges to the resource types it can reach. Red edges indicate write access. Blue edges indicate read-only access. An identity with red edges to Mail, Files, and Directory simultaneously is visually — and intuitively — a critical blast radius case.

The dual band insight

The most sophisticated approach to identity risk scoring uses two independent dimensions: behaviour risk (what the identity is doing) and blast radius (what it could do if compromised). These dimensions are independent and must be assessed separately.

The combination of these two signals produces four risk archetypes that each require different responses:

High behaviour risk / High blast radius: Immediate priority. This identity is behaving suspiciously and has dangerous permissions. Investigate and contain as a matter of urgency.

Low behaviour risk / High blast radius: The sleeping threat. This identity is quiet but devastating if compromised. Review and reduce its permissions. This is often the most overlooked risk category.

High behaviour risk / Low blast radius: Worth investigating but not urgent. The identity is behaving oddly but can't do much damage even if compromised.

Low behaviour risk / Low blast radius: Routine monitoring. No immediate action required.
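The four archetypes fall out of a simple two-axis classification. A sketch with a single illustrative threshold on each axis:

```python
def archetype(behaviour_risk, blast_radius, threshold=40):
    """Classify an identity by its two independent risk dimensions."""
    high_behaviour = behaviour_risk >= threshold
    high_blast = blast_radius >= threshold
    if high_behaviour and high_blast:
        return "immediate priority"
    if high_blast:
        return "sleeping threat"          # quiet, but devastating if compromised
    if high_behaviour:
        return "investigate, not urgent"
    return "routine monitoring"

# The two service principals from the earlier example:
print(archetype(45, 5))    # noisy but low impact
print(archetype(12, 95))   # the sleeping threat
```

In practice the thresholds would be banded rather than binary, but the quadrant logic is the same: neither axis alone determines the response.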

How AIRM helps

AIRM calculates both Behaviour Risk Band and Blast Radius Band for every identity in your Microsoft 365 tenant. The Blast Radius Map visualises exactly which resources are reachable and whether access is read or write. The contextual insight panel explains what a compromise would mean in plain English — for every identity, every scan.

DORA and Non-Human Identity Risk: What Financial Services Organisations Need to Know

DORA has been in force since January 2025. Financial services organisations and their ICT providers must now demonstrate continuous risk management of all ICT assets — including third-party AI agents and service principals.

The Digital Operational Resilience Act became fully applicable across the European Union on January 17, 2025. Unlike many regulatory frameworks with long phase-in periods, DORA was not gradual — organisations in scope had a hard deadline and are now expected to demonstrate compliance.

For financial services organisations and their ICT providers, the implications for non-human identity security are direct and substantial. DORA's ICT risk management requirements treat application integrations, API connections, and automated processes as ICT assets that must be formally managed — and that management must be continuous, not periodic.

Who is in scope

DORA applies to a broad range of financial entities: credit institutions, payment institutions, investment firms, insurance companies, and their critical ICT third-party service providers. If your organisation provides IT services to financial services firms — as many MSPs do — you may be directly in scope as an ICT third-party service provider, or indirectly in scope through your clients' compliance obligations.

The key DORA requirements for Non-Human Identity security

Article 5 — ICT Risk Management Framework

DORA requires a comprehensive ICT risk management framework that identifies all sources of ICT risk. Service principals and AI agents that have been granted access to financial data, customer records, or operational systems are ICT assets that must be identified, classified, and risk-assessed within this framework.

Article 8 — ICT Asset Management

All ICT assets must be documented. An unreviewed service principal — one that exists in your Microsoft 365 tenant without a documented owner, purpose, or risk assessment — is a compliance gap under Article 8. This article alone creates a powerful case for maintaining a complete, up-to-date inventory of all non-human identities.
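Detecting this gap is mechanical once ownership data is in hand. A sketch assuming each record carries the `owners` relationship that Microsoft Graph exposes as a navigation property on service principals:

```python
def unowned_service_principals(service_principals):
    """Flag service principals with no registered owner."""
    return [sp["appDisplayName"] for sp in service_principals
            if not sp.get("owners")]

sps = [
    {"appDisplayName": "payroll-sync", "owners": []},              # creator left years ago
    {"appDisplayName": "crm-connector", "owners": [{"id": "u1"}]},
]
print(unowned_service_principals(sps))
```

Every name this returns is, in DORA terms, an undocumented ICT asset: no owner means no one accountable for its purpose, permissions, or risk assessment.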

Article 10 — Detection

Mechanisms must be in place to promptly detect anomalous activity, including unusual credential or access behaviour. This is not a theoretical requirement — regulators will ask to see evidence of your detection capabilities. Anomaly alerts, incident logs, and response documentation are all evidence of compliance.

Article 28 — Third-Party ICT Risk

Financial entities must manage risks posed by third-party ICT service providers, including monitoring their access rights and permissions. Every third-party AI agent or application integration in your Microsoft 365 tenant is a third-party ICT relationship that falls under Article 28's oversight requirements.

DORA's third-party risk requirements are particularly relevant for MSPs. If you provide IT services to financial services firms, your clients may require you to demonstrate that your own access to their tenants — and the tools you use within those tenants — is formally governed.

Building a DORA-compliant Non-Human Identity programme

A DORA-compliant approach to Non-Human Identity security requires, at minimum: a complete and current inventory of all service principals, AI agents, and application integrations; documented risk classifications for each; evidence of continuous monitoring with anomaly detection; and documented procedures for reviewing and responding to identified risks.

The documentation requirement is critical. DORA compliance is not just about having good processes — it's about being able to demonstrate those processes to a regulator. Security teams need to be able to produce, on demand, evidence of their Non-Human Identity monitoring programme, including what they monitor, how they monitor it, and what they do when something is detected.

How AIRM helps

AIRM maps every finding to DORA's control framework across Articles 5, 8, 9, 10, 28, and 30. Per-framework compliance reports provide auditor-ready documentation. Every scan generates evidence of your continuous monitoring programme. Unowned service principals are flagged automatically as unmanaged ICT assets under Article 8.

Microsoft 365 Copilot Is Not Automatically Secure: What Every Admin Needs to Know

Microsoft 365 Copilot is a powerful productivity tool with a significant security surface that most deployments don't adequately address. Here's what the risks actually are and how to manage them.

Microsoft 365 Copilot is being deployed at an extraordinary pace. With paid seats growing more than 160% year over year according to Microsoft's own metrics, organisations across every industry are rolling out AI-assisted productivity to their workforces. The productivity benefits are real and significant. The security implications are less well understood.

The most important thing security teams need to understand about Copilot is that it operates within the permissions of the user it's assisting. When Copilot helps a user draft an email, search SharePoint, or summarise a document, it does so using that user's identity and that user's access rights. This means Copilot's potential blast radius is effectively the sum of every user it assists — which, in a full deployment, is the entire organisation.

The three key Copilot security risks

1. Oversharing amplification

Copilot's effectiveness depends on its ability to find and surface relevant information from across the Microsoft 365 estate. This means it queries SharePoint, OneDrive, Exchange, and Teams — everywhere a user has access. In organisations where permissions haven't been tightly managed (which is most organisations), Copilot can surface information that users technically have permission to see but have never actually seen before. Documents in SharePoint sites they forgot they had access to. Emails in shared mailboxes. Files in team sites that should have had tighter access controls.

2. Prompt injection risk

As AI agents become more capable of taking actions on behalf of users — sending emails, creating files, scheduling meetings — they become targets for prompt injection attacks. A malicious document that contains hidden instructions designed to manipulate Copilot's behaviour is a real and documented attack vector. Security teams need to understand this risk and monitor for unusual actions taken by Copilot agents.

3. The Copilot agent surface

Beyond the core Copilot assistant, organisations are increasingly deploying custom Copilot agents built with Copilot Studio. These agents can have their own permissions, their own credentials, and their own action capabilities. A Copilot Studio agent that has been granted broad permissions to query and modify data across the M365 estate is, from a security perspective, identical to any other service principal with those permissions — and needs to be governed the same way.

"Minor configuration changes that have historically not caused problems are now exploited at superhuman speed." — Simon Hughes, CoreView

What a secure Copilot deployment looks like

Securing Copilot is fundamentally an exercise in identity and access management. The steps are straightforward: audit what Copilot can access through the permissions of its users; tighten overly broad sharing and access controls before rollout; implement governance for Copilot Studio agents as first-class identities with risk scores and monitoring; and establish ongoing monitoring for unusual Copilot-driven activity.
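The inventory step above can be sketched against Microsoft Graph. The `/servicePrincipals` endpoint and its `$select`/`$top` query parameters are real Graph features; the display-name heuristic for spotting Copilot Studio agents, and the helper names, are illustrative assumptions rather than how any particular product identifies them.

```python
from urllib.parse import urlencode

GRAPH = "https://graph.microsoft.com/v1.0"

def inventory_request(token: str) -> tuple[str, dict]:
    """Build (but do not send) a Graph request for the service principal inventory."""
    params = urlencode(
        {"$select": "id,displayName,appId,createdDateTime", "$top": "100"},
        safe="$,",  # keep the OData tokens readable; Graph accepts either form
    )
    return f"{GRAPH}/servicePrincipals?{params}", {"Authorization": f"Bearer {token}"}

def looks_like_copilot_agent(display_name: str) -> bool:
    """Heuristic (assumption for illustration): names that suggest a Copilot Studio agent."""
    name = display_name.lower()
    return "copilot" in name or name.startswith("agent-")
```

Real classification would rely on richer signals than a display name; the point is that building the inventory, not detection cleverness, is the first step of a secure rollout.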

The organisations that have invested in cleaning up their Microsoft 365 permissions before Copilot deployment will have a significantly better experience — both in terms of security posture and in terms of Copilot's output quality, since a well-governed environment produces more accurate and appropriate AI responses.

How AIRM helps

AIRM identifies Copilot Studio agents as a distinct identity type, monitors their permissions, and scores their blast radius. Any unusual permission changes or anomalous behaviour triggers an alert. The Copilot governance picture is part of your overall AI agent inventory — no separate tool required.

The Orphaned Service Principal Problem: Why Unowned Identities Are Your Biggest Hidden Risk

When the person responsible for a service principal leaves the organisation, the identity typically stays — indefinitely. Across most Microsoft 365 tenants, unowned service principals represent 20-40% of all non-human identities. Here's why that matters and what to do about it.

Every service principal in a Microsoft 365 tenant was created by someone. A developer building an integration. An admin setting up an automation. A vendor deploying their product. At the moment of creation, there was a clear purpose, a clear owner, and — hopefully — a clear understanding of what permissions were being granted.

Time is not kind to that clarity. The developer moves to another team. The admin leaves the company. The vendor relationship ends but nobody thinks to revoke the permissions. The integration is replaced by a newer version, but the old service principal keeps running. Three years later, there is a service principal with Application.ReadWrite.All permissions, credentials that haven't been rotated since 2022, and no one in the organisation who knows what it does, who's responsible for it, or whether it's still needed.

This is the orphaned service principal problem. And it is far more common than most security teams realise.

The scale of the problem

In our experience monitoring Microsoft 365 environments across a range of organisation sizes and sectors, unowned service principals typically represent between 20% and 40% of all non-human identities. In organisations that have been using Microsoft 365 for five years or more, the proportion can be higher — the longer the environment has been running, the more accumulated technical debt in the form of forgotten, unowned identities.

The permissions these identities hold are often significant. They were created during a period when "grant what's needed, review later" was the prevailing approach — and the "review later" step never happened. We regularly find orphaned service principals with Directory.ReadWrite.All, Application.ReadWrite.All, or application-level Mail.ReadWrite permissions that have been sitting unreviewed for years.

Why attackers love orphaned identities

From an attacker's perspective, orphaned service principals are ideal targets. The credentials are often old and may have been exposed in historical breaches. The permissions are often excessive — granted generously because "we might need them" and never trimmed. Nobody is watching them, because nobody knows they exist. And they have no owner to notice if their credentials are being used in unexpected ways.

An attacker who identifies and compromises an orphaned service principal with Application.ReadWrite.All permissions can use that foothold to escalate their access to virtually any level within the tenant — all without triggering the alerts that would fire for a compromised human account, because service principals don't have conditional access policies or MFA requirements.

The ownership assignment challenge

Assigning ownership to orphaned service principals is conceptually simple but practically difficult. For recent creations, there's usually a creator identity in the audit logs. For older identities, the creator may have left the organisation, the logs may have rolled over, or the original purpose may no longer be clear.

The practical approach is to use the identity's characteristics to infer probable ownership: the publisher domain suggests the vendor; the permission scopes suggest the use case; the application name — if it follows a naming convention — may indicate the team. From there, an investigation workflow can be triggered to confirm whether the identity is still needed and, if so, who should own it going forward.
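The inference approach above can be sketched as a small heuristic function. Everything here is illustrative: the field names mirror a typical Entra export, and the `TEAM-purpose-env` naming convention and vendor domain are invented for the example.

```python
import re

# Hypothetical naming convention: "TEAM-purpose-env", e.g. "FIN-invoices-prod"
NAME_CONVENTION = re.compile(r"^(?P<team>[A-Z]{2,5})-\w+-(prod|dev|test)$")

# Hypothetical vendor domains the organisation has contracts with
KNOWN_VENDOR_DOMAINS = {"contoso-backup.com": "Vendor: Contoso Backup"}

def infer_probable_owner(sp: dict) -> list[str]:
    """Return ranked ownership hypotheses for one orphaned service principal."""
    hypotheses = []
    # 1. Publisher domain suggests a vendor relationship
    vendor = KNOWN_VENDOR_DOMAINS.get(sp.get("publisher_domain", ""))
    if vendor:
        hypotheses.append(vendor)
    # 2. A naming convention may encode the owning team
    m = NAME_CONVENTION.match(sp.get("display_name", ""))
    if m:
        hypotheses.append(f"Team: {m.group('team')}")
    # 3. Permission scopes hint at the use case, and hence the likely owner
    if set(sp.get("scopes", [])) & {"Mail.Read", "Mail.ReadWrite"}:
        hypotheses.append("Use case: mail integration")
    if not hypotheses:
        hypotheses.append("Unattributable: candidate for decommissioning review")
    return hypotheses
```

Each hypothesis is a starting point for the investigation workflow, not a conclusion; the unattributable branch feeds the decommissioning queue discussed below.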

Identities that cannot be attributed to a current owner or active use case should be candidates for decommissioning — after careful testing to ensure that removing them doesn't break a dependency that nobody knew existed.

How AIRM helps

AIRM flags every unowned service principal and Non-Human Identity in your Microsoft 365 tenant. The ownership assignment workflow allows you to investigate each one, assign accountability, and track the resolution. Unowned identities with high blast radius scores are prioritised automatically. The Non-Human Identity Risk Report includes a dedicated section on unowned identities for client reporting.

Known Rogue Service Principals: The Threat Intelligence Gap in Most M365 Environments

Threat intelligence for human-facing threats is well established. For non-human identity threats — known malicious OAuth apps, compromised vendor service principals, rogue automation accounts — most organisations are flying blind.

The threat intelligence industry has spent decades cataloguing the indicators of compromise associated with human-facing attacks: malicious IP addresses, phishing domains, malware hashes, command-and-control infrastructure. This intelligence is integrated into endpoint protection, email security, and SIEM platforms — and it works well for the threats it was designed to address.

For non-human identity threats, the picture is very different. The concept of a "rogue service principal" — a Microsoft 365 application integration that is itself a threat actor or has been used in known attacks — is relatively new. The threat intelligence infrastructure to detect and respond to it is underdeveloped in most organisations.

What rogue service principals look like

Rogue service principals fall into several categories. Malicious OAuth applications are apps that present themselves as legitimate productivity tools but are designed to harvest credentials or maintain persistence in compromised tenants. Compromised vendor applications are legitimate applications whose underlying vendor has been breached, turning a trusted service principal into an attacker-controlled one. And there are applications associated with known threat actor campaigns — specific publisher domains or application names that have been linked to documented attack activity.

The challenge with all of these is that they look, from a basic permissions perspective, like any other service principal. The malicious OAuth app has been granted Mail.Read by a user who thought it was a legitimate email productivity tool. The compromised vendor application has all the permissions it was legitimately granted when it was first deployed. Standard identity security tools evaluate permissions rather than reputation, so they cannot tell a trusted application apart from one that appears on known rogue lists.

The OAuth consent phishing vector

One of the most common attack patterns involving rogue service principals is OAuth consent phishing — also known as "illicit consent grant attacks." In these attacks, a user receives a phishing email that directs them to grant permissions to a malicious application. The application looks legitimate — it might impersonate a Microsoft service, a productivity tool, or an internal application. The user clicks through the OAuth consent flow and grants the application access to their mailbox, files, or directory.

The result is a service principal in the tenant with user-granted permissions — and because the permissions were granted through a legitimate OAuth flow, they don't look suspicious from a technical perspective. The application can now read the user's email, access their files, and potentially use those permissions to pivot to other users or elevate its access.

Building detection capability

Detection of rogue service principals requires a combination of behavioural monitoring and intelligence matching. Behavioural indicators — applications that access many mailboxes rapidly after first consent, applications that immediately enumerate directory objects, applications that request unusual permission combinations — can flag suspicious activity even without specific threat intelligence matches.

Intelligence matching — checking new and existing service principals against known lists of malicious application identifiers, publisher domains, and attack campaign indicators — provides a second layer of detection that catches known threats even when their behaviour appears normal.
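That intelligence-matching layer reduces to a set membership check. The indicator values below are invented for illustration; in practice they would come from a threat intelligence feed of application IDs and publisher domains tied to documented campaigns.

```python
# Invented indicators for the example; a real feed supplies these values.
KNOWN_BAD_APP_IDS = {"11111111-2222-3333-4444-555555555555"}
KNOWN_BAD_PUBLISHER_DOMAINS = {"evil-productivity-tools.example"}

def match_rogue_indicators(service_principals: list[dict]) -> list[dict]:
    """Return the subset of identities that hit a known-bad indicator, with reasons."""
    hits = []
    for sp in service_principals:
        reasons = []
        if sp.get("app_id") in KNOWN_BAD_APP_IDS:
            reasons.append("app_id on known-rogue list")
        if sp.get("publisher_domain") in KNOWN_BAD_PUBLISHER_DOMAINS:
            reasons.append("publisher domain linked to attack campaign")
        if reasons:
            hits.append({"display_name": sp.get("display_name"), "reasons": reasons})
    return hits
```

Run against the full inventory on every scan, this catches known threats even when their behaviour looks entirely normal.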

How AIRM helps

AIRM monitors for known rogue service principals and flags them immediately when detected in your tenant. The combination of behavioural anomaly detection and known-bad indicators creates two complementary detection layers. A known rogue service principal in your tenant generates a P1 alert — the highest priority tier.

ISO 42001 Explained: The World's First AI Management System Standard and What It Means for Your Business

ISO/IEC 42001 is the world's first certifiable AI management system standard. Unlike NIST AI RMF, which is voluntary guidance, ISO 42001 is designed for certification — and your enterprise clients are starting to ask for it.

In December 2023, the International Organization for Standardization published ISO/IEC 42001 — the world's first AI management system (AIMS) standard. For organisations navigating the rapidly growing landscape of AI governance requirements, ISO 42001 represents something important: a certifiable, internationally recognised framework for demonstrating responsible AI management.

Unlike the NIST AI Risk Management Framework, which provides guidance but not certification, ISO 42001 is structured for formal compliance assessment. Organisations can be audited against its requirements and achieve certification — making it the AI governance equivalent of ISO 27001 for information security. And just as enterprise clients began requiring ISO 27001 certification from their suppliers, they will begin requiring ISO 42001.

What ISO 42001 requires

ISO 42001 follows the familiar "Plan-Do-Check-Act" methodology that underpins other ISO management system standards. It is structured around ten clauses, seven of which are mandatory requirements that must be addressed for certification.

Clause 4 — Organisational Context

Organisations must identify and understand the context in which their AI systems operate, including internal and external factors that affect AI management. For organisations using AI agents in Microsoft 365, this means understanding the risk and stakeholder implications of those agents — which requires, first, knowing what AI agents exist.

Clause 6.1 — Risk Assessment

ISO 42001 requires a systematic process for identifying and assessing risks and opportunities related to AI systems. This is not a one-time assessment — it is an ongoing process that must be documented and reviewed. Any AI agent operating in your environment must be included in this risk assessment.

Clause 8.4 — AI System Impact Assessment

Before deploying AI systems (and periodically thereafter), organisations must conduct impact assessments examining the potential effects on individuals, groups, and society. For AI agents with broad permissions in Microsoft 365 — access to all email, all files, all directory data — this impact assessment needs to address the significant potential consequences of those agents operating incorrectly or being compromised.

Clause 9.1 — Monitoring and Measurement

Organisations must continuously monitor, measure, analyse, and evaluate AI system performance and risk. This clause is the operational heart of ISO 42001 compliance — it's not enough to set up AI systems responsibly; you must demonstrate ongoing oversight.

ISO 42001 and Microsoft 365 AI agents

For organisations using AI agents in Microsoft 365 — Copilot Studio agents, Power Automate flows, third-party AI integrations — ISO 42001 creates specific operational requirements. Every AI agent must be inventoried (Clause 4), risk-assessed (Clause 6.1), impact-assessed (Clause 8.4), and continuously monitored (Clause 9.1).

Most organisations currently meet none of these requirements in a documented, evidential way. The gap between "we have AI agents in our environment" and "we have a certified AI management system" is significant — but it is closeable with the right approach.

How AIRM helps

AIRM maps every finding in your Microsoft 365 environment to ISO 42001's control framework — Clauses 4, 6.1, 8.1, 8.4, 9.1, and 10. The ISO 42001 compliance report provides structured evidence of your AI management programme. Every scan contributes to the evidence base for Clause 9.1 continuous monitoring.

The AI Governance Conversation: How MSPs Can Lead the Discussion With Every Client

Every client you have is being asked about AI governance by their leadership, their auditors, or their customers. Most of them don't know how to answer. This is your opportunity to lead.

Something has shifted in enterprise conversations about AI. Six months ago, the discussion was about what AI could do — productivity gains, automation potential, competitive advantage. Today, the questions are different. "Do we have governance around our AI tools?" "Can we demonstrate to our auditors that we're managing AI risk?" "What happens if one of our AI agents is compromised?"

These are CISO conversations. But they're happening in boardrooms, not just security meetings. And because most organisations don't have good answers yet, there is a significant advisory opportunity for MSPs who do.

The questions your clients are being asked

The conversation typically starts with pressure from above. A client's board has been reading about AI governance obligations. Their cyber insurance provider is asking about AI inventory and monitoring. Their largest enterprise customer has sent a vendor security questionnaire that includes questions about AI agent governance. Their auditors are asking about EU AI Act readiness.

The client turns to their MSP and asks: "Are we managing this?" In most cases, the honest answer from the MSP is: "We're not sure." This is the moment.

Leading the conversation

The MSP who gets ahead of this conversation — who comes to the client with a clear assessment, clear evidence, and a clear solution — is the MSP who deepens the client relationship, expands their scope of work, and demonstrates the advisory value that distinguishes a strategic partner from a break-fix vendor.

The starting point is a simple question in your next QBR: "We'd like to run an AI agent and non-human identity assessment on your Microsoft 365 environment. It'll take 24 hours and we'll show you exactly what AI and automation tools have access to your data, what that access looks like in terms of risk, and whether there are any compliance concerns we should be addressing."

The answer will almost always be yes. And the results of that assessment will almost always reveal something significant — an unowned service principal with broad permissions, an orphaned AI integration nobody knew was still running, a compliance gap under DORA or the EU AI Act that nobody had thought about.

Turning the conversation into a service

The assessment conversation opens the door to a recurring managed service. Monthly monitoring, quarterly reporting, annual compliance documentation — packaged as a fixed-price service that sits alongside the client's existing security stack without replacing anything they already have.

The value proposition is clear: "We monitor every AI agent and service principal in your environment, score their risk, alert you when something changes that shouldn't, and produce compliance-ready documentation for your auditors. You get the evidence of AI governance that your board, your insurers, and your regulators are asking for."

How AIRM helps

AIRM's MSP platform gives you everything you need to have this conversation: a 14-day trial that produces a real assessment of any client tenant, branded executive summary reports you can present at QBRs, and a recurring monitoring service you can price confidently. The first conversation writes itself once you have real data in hand.

Microsoft 365 E7 vs AIRM: What the Frontier Suite Covers and Where the Gaps Are

Microsoft 365 E7 is a significant step forward for AI governance. At $99 per user per month, it brings Agent 365 and the Entra Suite together in one package. Here's an honest assessment of what it covers — and where purpose-built Non-Human Identity security fills the remaining gaps.

Microsoft 365 E7 — The Frontier Suite — became generally available on May 1, 2026. At $99 per user per month, it combines Microsoft 365 E5, Microsoft 365 Copilot, Agent 365, and the full Microsoft Entra Suite into a single offering. For enterprise organisations looking to govern AI agents at scale, it represents a meaningful advancement.

We think it's important to give an honest assessment of what E7 actually provides and where the gaps remain — because the decision to deploy E7, AIRM, or both should be based on accurate information, not marketing claims from either side.

What Microsoft E7 and Agent 365 do well

Agent 365 provides a registry of Microsoft-native agents — those built with Copilot Studio, Microsoft Agent Builder, and partner ecosystem tools that have been registered through Microsoft's infrastructure. For organisations whose AI agent footprint is primarily Microsoft-native, this is genuinely useful. You get an inventory, usage metrics, and the ability to apply Conditional Access and Entra governance policies to those agents.

The Entra Suite component adds meaningful identity governance capabilities — Verified ID, Identity Governance, and network-level access controls through Global Secure Access. For organisations that have been on E3 or E5 without the Entra Suite, these are significant upgrades.

And the Defender integration provides agent risk signals that are correlated with Microsoft's broader threat intelligence — giving security teams a view of agent activity within the Microsoft security ecosystem.

Where the gaps are

Third-party AI agents

Agent 365's registry covers Microsoft-native agents. It does not cover third-party AI agents — Claude for Work, OpenAI enterprise integrations, Perplexity, Gemini Workspace connectors, and the hundreds of other AI products that connect to Microsoft 365 via OAuth. These represent a significant and growing proportion of the AI agent surface in most organisations — and they are largely invisible to Agent 365.

Non-human identity security broadly

E7 and Agent 365 are focused on AI agents. The broader category of non-human identities — service principals, automation accounts, legacy application registrations, orphaned integrations — is not Agent 365's primary concern. Entra provides some visibility into service principals, but the risk scoring, blast radius analysis, behavioural monitoring, and compliance mapping that define a dedicated Non-Human Identity security programme are not part of the E7 value proposition.

MSP multi-tenant architecture

E7 is designed for single-tenant enterprise deployment. MSPs managing multiple client tenants have Microsoft 365 Lighthouse, which provides some cross-tenant visibility, but does not offer the Non-Human Identity-focused multi-tenant dashboard, PSA integrations, or branded client reporting that a managed Non-Human Identity security service requires.

Per-framework compliance reporting

E7 does not produce per-framework compliance reports for EU AI Act, DORA, ISO 42001, or the other 8 frameworks in AIRM's compliance engine. The compliance story for E7 is built around Microsoft's own security frameworks and Secure Score — not the third-party regulatory requirements that enterprise clients and their auditors are increasingly focused on.

The right frame is not E7 versus AIRM — it's E7 plus AIRM. E7 governs Microsoft-native agents within Microsoft's security stack. AIRM monitors the full Non-Human Identity attack surface — including third-party agents, legacy service principals, and orphaned identities — and maps findings to the regulatory frameworks your clients are accountable to.

The cost comparison

E7 is priced at $99 per user per month — meaning a 500-person organisation pays $49,500 per month, or $594,000 per year. AIRM is priced per tenant, not per user — meaning the cost scales with the number of organisations you're managing rather than the size of those organisations. For MSPs and for direct customers with multiple tenants, this is a fundamentally different and more favourable cost structure.

For many organisations, the decision will be to deploy E7 for its broader productivity and governance capabilities, and AIRM for the specific Non-Human Identity security and compliance monitoring capabilities that E7 doesn't address. The two tools are complementary, not competitive.

How AIRM helps

AIRM covers what E7 doesn't — third-party AI agents, legacy service principals, orphaned Non-Human Identities, blast radius analysis, behavioural anomaly detection, and per-framework compliance reporting for EU AI Act, DORA, ISO 42001 and more. Connect any Microsoft 365 tenant in minutes, no matter what Microsoft licensing tier they're on.

The 7 Questions to Ask Before Approving an AI Agent in Microsoft 365

Most organisations approve AI agents based on vendor trust and surface-level permission review. Here's the security checklist every admin and CISO should run through before granting any AI agent access to your Microsoft 365 environment.

The approval process for AI agents in most organisations is dangerously casual. A team lead discovers a useful AI productivity tool. They forward the link to IT. IT checks whether it's from a known vendor. The answer is yes. The tool gets approved. Done.

This process would be inadequate for a new human employee. For an AI agent that may have persistent access to thousands of mailboxes, the organisation's entire file store, and directory-level permissions to create and modify accounts, it is genuinely reckless.

Here are the seven questions every organisation should answer before approving an AI agent for use in Microsoft 365.

1. What permissions is this agent actually requesting — and why?

The OAuth consent screen that most users click through in seconds is the most important security document in the approval process. Read it carefully. Every permission listed is a capability the agent will have — not just when it needs it, but continuously, indefinitely.

For each permission, ask: is this genuinely necessary for the agent's stated function? Mail.Read for an email summarisation tool makes sense. Application.ReadWrite.All for the same tool does not. If a vendor cannot explain why they need each permission they've requested, that is itself a red flag.

2. What is the blast radius if this agent is compromised?

Map the permissions to their worst-case consequences. An agent with the application-level Mail.ReadWrite permission gives an attacker read, write, and delete access to every mailbox in the tenant. An agent with Directory.ReadWrite.All gives an attacker the ability to modify any user, group, or organisational object. Understand the blast radius before you approve, not after a breach forces you to.
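A sketch of that mapping, for illustration: the permission names are real Microsoft Graph scopes (tenant-wide mail access comes via the application-level `Mail.ReadWrite` permission), but the tiers and consequence text are our own simplification, not a published scoring methodology.

```python
# Illustrative worst-case mapping from Graph application permissions to a tier.
PERMISSION_BLAST_RADIUS = {
    "Application.ReadWrite.All": ("critical", "modify any app registration; path to tenant takeover"),
    "Directory.ReadWrite.All":   ("critical", "modify any user, group, or organisational object"),
    "Mail.ReadWrite":            ("high", "read, write, and delete mail in every mailbox"),
    "Mail.Read":                 ("high", "read every mailbox in the tenant"),
    "Files.Read.All":            ("high", "read all files in SharePoint and OneDrive"),
    "User.Read":                 ("low", "read the signed-in user's profile"),
}

def worst_case(scopes: list[str]) -> tuple[str, list[str]]:
    """Return the highest blast radius tier among granted scopes, plus consequences."""
    order = {"critical": 3, "high": 2, "medium": 1, "low": 0}
    tier, consequences = "low", []
    for scope in scopes:
        t, why = PERMISSION_BLAST_RADIUS.get(
            scope, ("medium", f"unmapped scope {scope}: review manually")
        )
        consequences.append(why)
        if order[t] > order[tier]:
            tier = t
    return tier, consequences
```

Unmapped scopes default to medium rather than low on the assumption that anything unrecognised deserves a manual look before approval.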

3. Who owns this agent after approval?

Every approved AI agent must have a named owner — a person who is accountable for reviewing its access, monitoring its behaviour, and decommissioning it when it's no longer needed. "The IT team" is not an owner. If there's no specific person willing to accept accountability for this agent, it should not be approved.

4. What is the vendor's security posture?

The security of an AI agent is only as good as the security of the vendor operating it. Has this vendor had a breach? Do they have SOC 2 Type II or ISO 27001 certification? What is their incident response and disclosure policy? What happens to the data processed by their agent? These questions are particularly important for AI tools that process sensitive business data as part of their core function.

5. How will this agent be monitored after approval?

Approval should not be the end of the security conversation — it should be the beginning of an ongoing monitoring obligation. Before approving, define what monitoring looks like: who reviews the agent's activity, how often, and what would trigger a review or revocation of access. An agent with no post-approval monitoring plan should not be approved at the broad-access tier.

6. What is the decommissioning plan?

Every AI agent approval should have an expiry — either a specific date or a defined trigger event (project completion, contract end, product replacement). Define it before approval, not after the agent has been running for three years with no one knowing what it does or whether it's still needed.

7. Is this agent necessary at the requested permission level?

The principle of least privilege applies as much to AI agents as to human users. Can the agent's intended function be achieved with narrower permissions? Many AI productivity tools request broad permissions because it simplifies their integration architecture — not because they genuinely need everything they're asking for. Push back. Request a more limited permission set. If the vendor refuses or cannot accommodate a least-privilege approach, weight that heavily in your approval decision.

The organisations that build rigorous AI agent approval processes now — before an incident makes them mandatory — will have dramatically smaller blast radius exposure when (not if) an agent is compromised.

How AIRM helps

AIRM's post-approval monitoring answers questions 5 and 6 automatically — every scan checks that approved agents haven't acquired new permissions, changed behaviour, or had their credentials age beyond acceptable limits. The blast radius score answers question 2 visually, from day one.

Non-Human Identity Security Tools Compared: What to Look for When Evaluating Platforms in 2026

The Non-Human Identity security category is growing fast and the tools vary widely. Here's what actually matters when evaluating platforms — and the questions that separate genuine capabilities from marketing claims.

Two years ago, "non-human identity security" wasn't a recognised product category. Today it's one of the fastest-growing segments in enterprise security, with a growing number of vendors offering varying combinations of discovery, risk scoring, monitoring, and response capabilities.

As with any fast-moving category, the marketing tends to outrun the reality. "Continuous monitoring," "AI-powered detection," and "comprehensive Non-Human Identity visibility" are phrases that appear in virtually every vendor's materials — and mean very different things in practice. Here's how to evaluate what you're actually buying.

Discovery: what can it actually see?

The first question is scope. What types of non-human identities does the platform discover? Service principals and OAuth applications are table stakes. But does it identify AI agents specifically? Does it classify automation accounts, legacy application registrations, and first-party Microsoft applications differently? Does it work across all Microsoft 365 workloads — Exchange, SharePoint, Teams, Entra ID — or only a subset?

Ask vendors to run a discovery scan on a representative tenant and show you the complete inventory. The number is usually surprising. If a vendor's discovery returns a surprisingly tidy, manageable list, probe how they define the boundaries of what they capture.

Risk scoring: how is it calculated?

Every Non-Human Identity security platform has some form of risk scoring. The important questions are about methodology: what inputs go into the score, how are they weighted, and is the methodology transparent and explainable?

Watch out for single-dimension scoring that combines behaviour and permissions into one number. These scores obscure the critical distinction between an identity that is behaving suspiciously versus one that would be catastrophic if compromised but is currently quiet. The best platforms score these dimensions independently and surface both.
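To make the distinction concrete, here is a minimal sketch of independent dual-dimension scoring. The thresholds, field names, and response text are placeholders, not any vendor's methodology.

```python
from dataclasses import dataclass

@dataclass
class IdentityRisk:
    behaviour: float     # 0-100: how suspicious is current activity?
    blast_radius: float  # 0-100: how bad would a compromise be?

def classify(risk: IdentityRisk) -> str:
    """Route on both dimensions independently instead of averaging them."""
    if risk.behaviour >= 70 and risk.blast_radius >= 70:
        return "active threat on a critical identity: respond now"
    if risk.blast_radius >= 70:
        return "quiet but catastrophic if compromised: reduce permissions"
    if risk.behaviour >= 70:
        return "suspicious activity on a low-impact identity: investigate"
    return "routine monitoring"
```

An averaged single score would rate a noisy low-impact identity (90, 10) and a quiet critical one (10, 90) identically at 50, which is exactly the information loss to avoid.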

Also ask: does the risk score update continuously, or is it recalculated periodically? A score that's updated weekly is very different from one that reflects the identity's current state.

Anomaly detection: is it genuinely behavioural?

This is where the most significant variation between platforms exists. Rule-based anomaly detection — flagging identities that match predefined patterns like "credential older than 90 days" or "permission scope includes Directory.ReadWrite.All" — is valuable but limited. It catches known problems, not unknown ones.

Genuine behavioural anomaly detection requires a baseline. Ask vendors: how is the baseline established? How long does it take? Is the baseline per-identity, or is it based on population averages? A platform that compares an identity's current behaviour to its own historical behaviour is fundamentally more powerful than one that compares it to an industry average or a fixed rule set.
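A per-identity baseline can be as simple as comparing today's activity to the identity's own history. The 3-sigma rule and the seven-day minimum below are placeholder choices for illustration, standing in for whatever model a real platform uses.

```python
import statistics

def is_anomalous(history: list[int], today: int, sigmas: float = 3.0) -> bool:
    """Flag if today's call volume exceeds this identity's own baseline."""
    if len(history) < 7:
        return False  # not enough history to baseline; don't alert yet
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid zero for perfectly flat baselines
    return today > mean + sigmas * stdev
```

Because the baseline belongs to the identity, a backup service principal that legitimately reads thousands of files nightly never trips the alert that would fire for an email tool doing the same thing for the first time.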

Blast radius analysis: permission mapping or assumption?

Blast radius is an increasingly common feature claim. The important question is whether it's based on actual permission mapping or on generalised assumptions. A platform that says "this identity has high blast radius because it has Directory permissions" is making a different — and less precise — claim than one that maps the specific permissions to the specific resources they control and visualises the result.

MSP and multi-tenant support

For MSPs or organisations managing multiple tenants, native multi-tenancy is a critical requirement that is far from universal. Many enterprise Non-Human Identity security tools are designed for single-tenant deployment and have no concept of an MSP managing 50 client environments. Check whether multi-tenant management is a first-class feature or an afterthought, and whether it includes PSA integration, per-tenant reporting, and MSP-appropriate pricing models.

Compliance reporting: frameworks or export?

Many platforms offer "compliance reporting" that amounts to a data export you then need to map to your regulatory framework manually. True compliance reporting maps findings to specific control requirements within named frameworks — EU AI Act Article 9, DORA Article 10 — and generates evidence documentation you can give directly to auditors. Ask to see a sample report before you buy.

How AIRM compares

AIRM scores behaviour risk and blast radius independently, builds per-identity behavioural baselines from scan history, maps findings to 11 named compliance frameworks with per-control evidence, and is built MSP-first with native PSA connectors. A 14-day trial on any M365 tenant takes minutes to set up.

How to Audit Your Microsoft 365 Service Principals in Under an Hour

Most organisations have never done a proper service principal audit. Here's a practical, step-by-step guide to understanding what's running in your tenant, what it can access, and which identities represent the greatest risk.

A Microsoft 365 service principal audit sounds daunting. Most organisations have never done one, and many assume it requires specialist tooling or days of analyst time. In practice, you can get a clear picture of your service principal landscape in under an hour — if you know what you're looking for and where to look.

This guide walks through the process step by step. We'll use native Microsoft tooling where possible and flag where dedicated Non-Human Identity security tooling dramatically accelerates the process.

Step 1: Get the inventory (15 minutes)

Start in the Microsoft Entra admin centre. Navigate to Applications → Enterprise Applications and set the filter to "All Applications." This gives you the list of service principals in your tenant — both those you've created and those that were created when users consented to third-party applications.

Export this list to CSV. The columns you need for the initial analysis are: Display Name, Application ID, Created Date, Publisher, and Sign-in Activity (last sign-in date). Sort by Created Date descending to see your most recent additions, and by Last Sign-in Date ascending to surface identities that haven't been used recently.
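If you prefer to do this triage outside the portal, a few lines of Python over the exported CSV will handle the sorting and filtering. The column names, dates, and 90-day staleness cutoff below are illustrative; match them to the headers in your actual export:

```python
import csv, io
from datetime import datetime, timedelta

# Sample rows in the shape of an Enterprise Applications export (headers
# are assumptions; align them with your real CSV).
EXPORT = """displayName,appId,createdDateTime,lastSignInDateTime
Legacy Sync Tool,1111-aaaa,2021-03-01,2023-01-15
HR Connector,2222-bbbb,2024-06-10,2026-01-20
Unknown Widget,3333-cccc,2026-01-05,2026-01-28
"""

rows = list(csv.DictReader(io.StringIO(EXPORT)))
parse = lambda s: datetime.strptime(s, "%Y-%m-%d")

# Most recent additions first: review anything you don't recognise.
newest = sorted(rows, key=lambda r: parse(r["createdDateTime"]), reverse=True)

# Identities with no sign-in for 90+ days are decommissioning candidates.
cutoff = datetime(2026, 2, 1) - timedelta(days=90)
stale = [r["displayName"] for r in rows if parse(r["lastSignInDateTime"]) < cutoff]
```

The same approach scales to the full export; the point is that recency and staleness triage is mechanical once the CSV exists.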

The total number will almost certainly be larger than anyone in your organisation expected. This is normal. Note it down — it will be useful context for the conversation you'll need to have with leadership about Non-Human Identity governance.

Step 2: Identify the high-permission identities (20 minutes)

Navigate to each application in the list and check its API permissions. You're looking specifically for application permissions (not delegated permissions) that operate at tenant scope. The highest-risk permissions to flag immediately are: Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All, Directory.ReadWrite.All, Mail.ReadWrite (without "Selected"), and RoleManagement.ReadWrite.Directory.

Any service principal with one or more of these permissions warrants a detailed review. Document the application name, the permissions held, the date created, and — if you can determine it — who created it and why.
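This screening is easy to script once you have assembled each identity's application permissions. A minimal sketch (the inventory shape is an assumption, not an export format):

```python
# The tenant-scope application permissions flagged in Step 2.
HIGH_RISK = {
    "Application.ReadWrite.All",
    "AppRoleAssignment.ReadWrite.All",
    "Directory.ReadWrite.All",
    "Mail.ReadWrite",
    "RoleManagement.ReadWrite.Directory",
}

def flag_for_review(service_principals):
    """Return (name, matched_permissions) for identities holding any
    permission from the high-risk list."""
    flagged = []
    for sp in service_principals:
        matched = sorted(HIGH_RISK & set(sp["app_permissions"]))
        if matched:
            flagged.append((sp["name"], matched))
    return flagged
```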

Step 3: Check for orphaned identities (10 minutes)

Filter your enterprise application list by Owner: None. Any application with no assigned owner is an orphaned identity — it has no accountable person responsible for reviewing its access or decommissioning it when no longer needed. Flag all of these for ownership assignment.

Cross-reference the orphaned list with the high-permission list from Step 2. Orphaned identities with high-risk permissions are your immediate priority.
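The cross-referencing is a straightforward set operation. A sketch, assuming you have the two lists of identity names from Steps 2 and 3:

```python
def prioritise(orphaned, high_permission):
    """Cross-reference the two audit lists. Orphaned identities that also
    hold high-risk permissions are the immediate (P1) priority."""
    orphaned, high_permission = set(orphaned), set(high_permission)
    return {
        "p1_orphaned_and_high_permission": sorted(orphaned & high_permission),
        "p2_high_permission_with_owner": sorted(high_permission - orphaned),
        "p3_orphaned_low_permission": sorted(orphaned - high_permission),
    }
```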

Step 4: Check credential ages (10 minutes)

For application registrations (as opposed to enterprise applications created by consent), check the App Registrations section and review the Certificates & Secrets for each. Note any client secrets or certificates that are expired or approaching expiry — these represent both operational risk (the integration will break) and security risk (aged credentials are more likely to have been exposed).

Any credential older than 12 months that hasn't been rotated should be flagged for rotation.
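The classification logic can be sketched in a few lines, assuming you have exported each credential's created and expiry dates (the thresholds mirror the guidance above):

```python
from datetime import date

def credential_findings(credentials, today=date(2026, 2, 1)):
    """Classify each client secret or certificate by expiry and age.

    credentials: dicts with 'name', 'created', and 'expires' as dates.
    """
    findings = []
    for c in credentials:
        if c["expires"] < today:
            findings.append((c["name"], "EXPIRED - remove or replace"))
        elif (c["expires"] - today).days <= 30:
            findings.append((c["name"], "EXPIRING - rotate now"))
        elif (today - c["created"]).days > 365:
            findings.append((c["name"], "AGED - older than 12 months, rotate"))
    return findings
```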

Step 5: Document and prioritise

By the end of this process, you should have: a complete inventory count, a list of high-permission identities requiring review, a list of orphaned identities requiring ownership assignment, and a list of aged credentials requiring rotation. This is your Non-Human Identity risk register — and it's the foundation of an ongoing governance programme.

The manual process above gives you a snapshot. It cannot tell you whether behaviour has changed since the last time you looked, whether a service principal has acquired new permissions in the last 30 days, or whether any of your identities match known rogue application indicators. That's where continuous monitoring tools provide the capability that manual audits cannot.

How AIRM helps

AIRM completes this entire audit automatically in minutes — with risk scoring, blast radius analysis, ownership tracking, and credential age monitoring built in. Rather than a one-time snapshot, every scan updates the picture and flags changes since the last run. The first scan is free on a 14-day trial.

NIS2 and Non-Human Identities: What EU Organisations Need to Do Now

NIS2 has been in force since October 2024. Its supply chain security and incident reporting obligations have direct implications for how organisations govern AI agents and service principals in Microsoft 365.

The Network and Information Security Directive 2 (NIS2) came into force across European Union member states in October 2024, replacing the original NIS Directive with significantly expanded scope, stricter requirements, and substantially higher penalties for non-compliance. For security teams managing Microsoft 365 environments, NIS2 creates specific obligations around supply chain security and incident detection that directly implicate how AI agents and service principals are governed.

Who is in scope

NIS2 significantly expands the scope of the original directive. Essential entities — energy, transport, banking, healthcare, water, digital infrastructure — face the strictest obligations. Important entities — postal services, waste management, manufacturing, food, digital providers — face slightly lighter requirements but are still bound by the core framework. Crucially, NIS2 also imposes obligations on the supply chain: if you provide digital services to an in-scope entity, your security posture may be subject to scrutiny as part of their supply chain risk management obligations.

The supply chain security requirement

Article 21 of NIS2 requires in-scope organisations to implement security measures addressing supply chain security, including the security relationships between the organisation and its direct suppliers or service providers. In the context of Microsoft 365, every third-party AI agent and service principal represents a supply chain relationship. The organisation has granted that application permissions to access its data and systems. If that application is compromised, the organisation's environment is compromised through a trusted channel.

NIS2's supply chain requirements mean organisations must assess and manage the security of these relationships — not just at the point of onboarding, but continuously. An AI agent that was vetted twelve months ago and has since had a change of ownership, a vendor breach, or a permissions expansion is a supply chain risk that has not been reassessed.

Incident detection and reporting

NIS2 introduces strict incident reporting timelines: a 24-hour early warning notification for significant incidents, followed by a detailed incident notification within 72 hours. This creates a dependency on detection capability. If an AI agent in your Microsoft 365 environment is compromised and you don't detect it for three weeks, you've already missed your reporting window by the time you know you have an incident.

The implication is clear: continuous monitoring with real-time alerting is not optional under NIS2 — it's a prerequisite for meeting the incident reporting obligations.

Management accountability

NIS2 introduces a notable provision making senior management personally liable for approving and overseeing cybersecurity measures. This shifts AI governance from a technical IT concern to a matter of board-level accountability. CISOs and security teams who previously struggled to get AI governance on the boardroom agenda now have a regulatory lever to work with.

How AIRM helps

AIRM maps findings to NIS2's Article 21 control requirements — supply chain security, access control, incident detection, and risk management. Continuous monitoring provides the detection capability NIS2's 24-hour reporting window demands. The compliance report documents your programme for management sign-off and regulatory evidence.

Cyber Insurance and AI Agents: Why Insurers Are Asking Questions You Can't Answer Yet

Cyber insurance questionnaires are changing. Insurers are starting to ask specific questions about AI agent governance, non-human identity monitoring, and service principal management. Most organisations cannot answer them — yet.

The cyber insurance market has always been a leading indicator of where enterprise security practice is heading. Insurers price risk for a living. When they start asking specific questions about a new category of risk, it means their actuarial data is telling them something — and what it's telling them right now about AI agents and non-human identities is not encouraging.

In the past 12 months, cyber insurance renewal questionnaires at multiple carriers have begun including questions about AI governance and non-human identity management. These questions are not yet universal, but their presence is growing — and the answers organisations give (or cannot give) are beginning to affect both premium pricing and policy terms.

The questions insurers are starting to ask

The specific questions vary by carrier, but the themes are consistent. Do you maintain an inventory of AI agents and automated systems that have access to your corporate data? Do you have a formal process for approving new AI agent integrations? Do you monitor non-human identities for anomalous behaviour? Do you have documented ownership and accountability for application integrations in your Microsoft 365 environment? When did you last audit the permissions held by third-party AI tools connected to your systems?

For most organisations, the honest answer to most of these questions is some version of "not formally" or "not recently." This creates a problem — not just for policy renewal, but because insurers are increasingly tightening coverage exclusions for incidents that arise from ungoverned AI agent access.

The coverage exclusion risk

Cyber insurance policies have always contained exclusions for incidents arising from failures of reasonable security practice. As AI agent governance becomes an established security expectation, insurers will increasingly argue that incidents resulting from unmonitored, unreviewed AI agent access fall within these exclusions.

An organisation that is breached via a compromised AI agent with unreviewed permissions and no monitoring programme will face a significantly harder claims process than one that can demonstrate a documented governance programme — even if that programme didn't prevent the breach. The difference between "we had monitoring but it was bypassed" and "we had no monitoring" is the difference between a covered claim and an excluded one.

What good looks like to an insurer

From an insurance perspective, the evidence of good AI agent governance is straightforward: a current inventory of all AI agents and service principals, documented ownership and approval processes, evidence of continuous monitoring with anomaly detection, and a documented incident response process for AI agent compromise. These aren't theoretical requirements — they're the answers that move organisations into better risk tiers and reduce the likelihood of coverage disputes at claim time.

The organisations building AI governance programmes today are the ones who will have clean answers to the questions their insurer asks in 12 months. The organisations waiting will answer the same questions with silence — and pay for it in premiums and exclusions.

How AIRM helps

AIRM provides the documented evidence of AI agent governance that cyber insurers are beginning to require — a current Non-Human Identity inventory, continuous monitoring records, anomaly detection history, and per-framework compliance documentation. The Executive Summary report is specifically designed to be accessible to non-technical stakeholders including insurance assessors.

Essential Eight and AI Governance: How Australia's Cyber Framework Applies to AI Agents in Microsoft 365

The Australian Signals Directorate's Essential Eight is the baseline security framework for Australian government agencies and increasingly adopted by the private sector. Here's how its controls apply to AI agents and non-human identities in Microsoft 365.

The Essential Eight Maturity Model is the Australian Signals Directorate's prioritised set of cybersecurity strategies designed to help organisations mitigate the most common cyber threats. Originally focused on endpoint and network security, the framework's controls have significant implications for how AI agents and non-human identities are governed in Microsoft 365 — even though many Australian organisations haven't yet made this connection explicitly.

The most relevant Essential Eight controls for Non-Human Identity security

Restrict Administrative Privileges

This control requires that administrative privileges be restricted to those who genuinely need them and reviewed regularly. In the context of non-human identities, it applies directly to service principals with application-level permissions. A service principal with AppRoleAssignment.ReadWrite.All or Application.ReadWrite.All has privileges equivalent to a global administrator in many attack scenarios. These identities should be subject to the same scrutiny as human admin accounts — formal justification, regular review, and least-privilege enforcement.

At Maturity Level 2, organisations must have a process for reviewing administrative privilege use. At Maturity Level 3, privileged access must be time-limited where possible. These requirements apply to Non-Human Identity administrative privileges as much as to human ones.

Application Control

The application control strategy is designed to prevent malicious code from executing. In the context of AI agents, it maps to the concept of an approved application register — only AI agents and service principals that have been formally reviewed and approved should be permitted to operate in the environment. OAuth consent grants from unknown publishers, applications with no documented approval, and legacy integrations that have never been reviewed all represent application control failures.

Patch Applications

While this control is typically discussed in terms of software patching, it has a direct analogue for Non-Human Identity credentials. Aged API keys and client secrets are the non-human identity equivalent of unpatched software — known weaknesses that have not been remediated. Rotating them on a defined schedule is, in effect, patch management for non-human identities, which is exactly what AIRM's credential age monitoring enforces.

Multi-Factor Authentication

Essential Eight's MFA requirements explicitly acknowledge that non-human accounts typically cannot use MFA — which makes the other compensating controls (monitoring, least privilege, credential rotation) more important, not less. Organisations at Maturity Level 3 should have specific documented compensating controls for Non-Human Identity accounts that cannot use MFA.

The Essential Eight maturity gap for Non-Human Identity

Most Australian organisations that have invested in Essential Eight compliance have focused on the controls as they apply to human users and endpoint devices. The application of these controls to non-human identities is less mature — and represents a genuine gap that assessors are increasingly likely to probe as AI agent adoption accelerates.

Organisations with significant AI agent footprints in Microsoft 365 should explicitly map their Non-Human Identity governance programme to the Essential Eight controls and document how each control is addressed for non-human identities specifically.

How AIRM helps

AIRM includes Essential Eight in its compliance framework engine, mapping every finding to the relevant controls. The Essential Eight compliance report provides documented evidence of your Non-Human Identity governance programme against the framework — useful for both internal assurance and external assessments by IRAP assessors.

The CISO's Guide to Presenting AI Governance to the Board

Boards are asking about AI governance — and most CISOs aren't prepared to answer in terms the board will understand. Here's how to frame the risk, communicate the programme, and make the case for investment.

In 2024, "AI governance" was a topic that lived in security team meetings. In 2026, it's on the agenda in boardrooms — because CEOs and non-executive directors are reading the same headlines about AI agent compromises, EU AI Act enforcement, and cyber insurance exclusions that CISOs are. They're asking questions. They want reassurance. And they want to understand what the organisation is doing about it.

The challenge for CISOs is translating a technically complex risk domain into language that resonates with a board whose background is predominantly finance, strategy, and operations — not identity security architecture. Here's how to do it.

Lead with business risk, not technical detail

A board presentation that opens with "we have 847 service principals in our Microsoft 365 tenant, of which 23% have application-level permissions" will lose most boards in the first sentence. The same information, framed as business risk, lands very differently: "Our AI tools and business automations have access to our entire email estate, financial systems, and customer records — and until recently, we had no formal oversight of that access."

The board's job is to manage risk at the organisational level. Give them the risk in terms they can work with: reputational exposure, regulatory liability, operational disruption, and financial impact. The technical details are the evidence, not the message.

Use the regulatory hook

NIS2, DORA, EU AI Act, and ISO 42001 give CISOs a regulatory framework to work with that boards take seriously. Regulatory non-compliance is a board-level concern — it carries direct financial penalties, personal liability for senior management under NIS2, and reputational consequences that boards understand viscerally. Use the regulatory landscape as the context for why the investment is necessary now rather than later.

Show them the evidence gap

One of the most effective board moments is showing them what evidence you currently cannot produce. "If our auditors or regulators asked us today to demonstrate continuous oversight of our AI agents, what would we show them?" For most organisations, the honest answer is nothing — no inventory, no monitoring records, no access review history. This evidence gap is both a compliance risk and a governance failure that boards are equipped to understand and act on.

Present a programme, not a problem

Boards don't want to hear about problems without solutions. Frame the AI governance conversation as a programme with clear phases, clear costs, and clear outcomes: Phase 1 — inventory and risk assessment; Phase 2 — ongoing monitoring and alerting; Phase 3 — compliance documentation and reporting. Give them a timeline, a budget, and a set of measurable outcomes they can track at subsequent board meetings.

The one slide that works

If you only have one slide, make it this: three columns labelled Before, Now, and Target. Before: no inventory, no monitoring, no compliance evidence. Now: [current state of your programme]. Target: continuous monitoring, documented governance, auditor-ready compliance reports. This simple frame communicates where you've been, where you are, and where you're going — in terms any board member can engage with.

How AIRM helps

AIRM's Executive Summary report is designed to be exactly the board-ready deliverable described above — written in plain English, with an executive narrative, risk posture summary, and compliance snapshot. It's the document you hand the board to close the gap between "we have a programme" and "here's the evidence."

AI Agents Are the New Shadow IT — and Yesterday's Governance Playbook Doesn't Work

Shadow IT was the defining unsanctioned technology risk of the 2010s. AI agents are the equivalent for the 2020s — and they're more dangerous, more invisible, and more tightly integrated into core business systems than shadow IT ever was.

Security teams who lived through the shadow IT era of the 2010s have a well-rehearsed governance playbook: discover what's running, assess the risk, block the high-risk items, and implement an approved alternatives programme. The playbook worked reasonably well for SaaS applications because the threat model was containable — a user signing into an unapproved cloud storage service could exfiltrate data, but the blast radius was limited.

AI agents break this playbook. They're not just unapproved applications sitting alongside your approved stack — they're entities with identity, persistent permissions, and the ability to take autonomous actions across your core business systems. The threat model is categorically different, and yesterday's governance approaches cannot address it.

Why AI agents are more dangerous than traditional shadow IT

Traditional shadow IT was typically passive. A user uploads files to an unapproved cloud storage service. The data leaves the organisation, which is bad — but the application itself doesn't then go on to enumerate your directory, send emails on behalf of users, or modify SharePoint permissions.

An AI agent that a business user deploys without IT oversight can do all of these things. It has been granted OAuth permissions — typically by the user themselves through a consent flow that appears harmless — that give it programmatic access to email, files, calendar, directory, and potentially much more. It runs continuously, not just when someone logs in. And it operates with machine speed and scale, meaning that if it's compromised or behaving incorrectly, the damage can be catastrophic within minutes rather than hours.

The consent flow problem

Shadow IT governance traditionally relied on blocking unapproved applications at the network layer. AI agent governance has no equivalent control. The OAuth consent flow is a legitimate Microsoft authentication mechanism — you cannot block it without breaking legitimate integrations. The governance must happen at the identity layer: monitoring what has been granted, to whom, with what permissions, and whether those permissions are being used appropriately.

Three in four CISOs report having discovered unsanctioned AI tools already running in their environments. In most cases, those tools were installed not by rogue actors but by well-intentioned employees trying to be more productive. The consent was granted legitimately. The permissions are real. The security team had no visibility into any of it.

The speed problem

Traditional shadow IT governance could afford to be reactive — you discovered the unapproved application, assessed it, and decided whether to block or approve it over days or weeks. AI agent governance requires a different tempo. A compromised AI agent can exfiltrate data, escalate privileges, and establish persistence in the gap between two nightly scans. The discovery-to-response timeline must be hours, not days.

What the new governance playbook looks like

Effective AI agent governance requires three capabilities that the shadow IT playbook lacked: continuous discovery (not periodic), per-identity risk scoring with blast radius analysis (not binary approved/blocked), and real-time alerting with guided response (not manual review queues). These capabilities collectively enable a governance posture that matches the speed and scale of the threat.

The organisations that understand AI agents as a fundamentally new governance challenge — rather than shadow IT with a new name — will build the right capabilities. Those that apply yesterday's playbook will discover its limitations in the worst possible way.

How AIRM helps

AIRM was designed specifically for the AI agent governance challenge — not adapted from an endpoint or network security tool. Continuous discovery, dual-band risk scoring, blast radius analysis, and real-time alerting with PSA integration are all built for the tempo and threat model that AI agent governance requires.

How OAuth Consent Attacks Work — and How to Detect Them in Microsoft 365

OAuth consent attacks — also known as illicit consent grant attacks — are one of the most effective and underdetected attack vectors in Microsoft 365. Here's how they work, why they're so hard to catch, and what detection looks like in practice.

In a traditional phishing attack, the attacker wants the user's credentials. They send a convincing email, the user follows a link, and enters their username and password on a fake login page. Modern security controls — MFA, conditional access, phishing-resistant authentication — have made this attack progressively harder to execute successfully.

OAuth consent attacks are a response to this hardening. Instead of stealing credentials, the attacker registers a malicious application and convinces the user to grant it permissions through Microsoft's own OAuth consent flow. There are no stolen credentials. The authentication is completely legitimate. And the attacker ends up with persistent, MFA-bypassing access to the victim's Microsoft 365 data — often including their entire email history, all their files, and their address book.

How the attack works step by step

Step 1 — Application registration. The attacker registers a Microsoft 365 application, either through a legitimate Azure AD tenant they control or through a compromised tenant. The application is given a convincing name — "Microsoft Teams Backup," "Office 365 Security Scanner," "Productivity Analytics" — and is configured to request specific OAuth permissions.

Step 2 — Phishing delivery. A phishing email is sent to the target, directing them to a consent URL. The email might claim that a security tool needs to be authorised, that a new IT policy requires app installation, or that the user needs to connect their account to a collaboration tool. The link leads directly to Microsoft's genuine OAuth consent screen — there is no fake login page.

Step 3 — Consent grant. The user arrives at Microsoft's real consent screen. They see the application name, the requested permissions, and a legitimate Microsoft UI. They click Accept. The application is now registered as an enterprise application in their organisation's tenant with the permissions they consented to.

Step 4 — Data access. The attacker uses the OAuth token issued at consent to access the victim's data. If the permissions granted include Mail.Read or Files.Read.All, the attacker can download the victim's entire email history and file store without ever needing their password again. The access persists indefinitely unless the consent is explicitly revoked.

Why these attacks are so hard to detect

The core difficulty is that every step in this attack chain looks legitimate from Microsoft's perspective. The consent was granted by the real user. The OAuth token was issued by Microsoft's real authentication infrastructure. The API calls being made are using valid permissions. No credentials were stolen. No suspicious login attempts occurred. Standard security monitoring tools that look for credential-based attack patterns will see nothing.

The detection signal is entirely at the application layer: a new enterprise application appeared in the tenant, with a set of permissions, at a time correlated with a user interaction. Finding this signal requires monitoring the enterprise application inventory continuously — not just when someone remembers to check.
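Finding that signal programmatically amounts to an inventory diff. A sketch, assuming you retain each scan's application inventory as a mapping of appId to display name:

```python
def new_applications(previous_scan, current_scan):
    """Diff two enterprise-application inventories (appId -> displayName).
    Anything present now but not before is the signal to investigate."""
    return {app_id: name for app_id, name in current_scan.items()
            if app_id not in previous_scan}
```

The diff is only as good as its cadence: an inventory retained from last quarter gives the attacker a quarter's head start.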

Detection indicators to watch for

Several characteristics make OAuth consent attack applications identifiable in retrospect and — with the right monitoring — detectable in near-real-time. New application registrations from unknown publishers, particularly those with high-value permission scopes. Applications that access large volumes of data immediately after consent — an application that reads 50,000 emails in the first hour is not a productivity tool. Applications whose consent was granted by a single user but that have tenant-wide permissions (application permissions rather than delegated ones). And applications whose publisher domain was registered recently — malicious app infrastructure tends to be newly created.
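These indicators lend themselves to a simple additive score. A sketch in which the field names, weights, and thresholds are all illustrative assumptions rather than calibrated detection logic:

```python
def consent_risk_score(app):
    """Score a newly observed enterprise application against the
    consent-attack indicators. Weights are illustrative, not calibrated."""
    score = 0
    if not app.get("verified_publisher"):
        score += 2  # unknown publisher
    if app.get("mail_reads_first_hour", 0) > 10_000:
        score += 3  # immediate high-volume access after consent
    if app.get("permission_type") == "application" and app.get("consented_by_single_user"):
        score += 3  # one user consented, but reach is tenant-wide
    if app.get("publisher_domain_age_days", 9999) < 30:
        score += 2  # newly registered publisher domain
    return score

# Anything scoring 5 or more warrants immediate investigation.
```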

How AIRM helps

AIRM detects new application registrations in every scan and flags characteristics consistent with OAuth consent attacks — unknown publishers, immediate high-volume access, mismatch between user-granted and application-level permissions. Known malicious application identifiers are checked against the tenant inventory automatically, generating P1 alerts when matches are found.

Microsoft Graph Permissions Explained: Which Scopes Create Real Risk and Why

Microsoft Graph is the API surface through which AI agents and service principals access Microsoft 365 data. Understanding which permission scopes create genuine security risk — and why — is fundamental to Non-Human Identity governance.

Microsoft Graph is the unified API that provides programmatic access to virtually every capability and data type in Microsoft 365 — email, files, calendar, contacts, users, groups, Teams, SharePoint, and much more. Every AI agent and service principal that integrates with Microsoft 365 does so through Graph, using a set of permission scopes that define what it can access and what actions it can take.

Understanding these permission scopes is essential for anyone responsible for Non-Human Identity security. Not all permissions are equal. Some scopes provide read access to a single user's calendar. Others effectively grant administrator-level control over the entire tenant. The gap between the most benign and most dangerous Microsoft Graph permissions is enormous — and many organisations grant high-risk scopes without fully understanding what they're authorising.

Application permissions vs delegated permissions

Before examining specific scopes, it's important to understand the distinction between the two types of Microsoft Graph permission. Delegated permissions operate on behalf of a signed-in user — the application can only access what that specific user can access, and the user must be present for the access to work. Application permissions operate independently of any user — the application can access data across the entire tenant without any user being logged in.

Application permissions are dramatically more powerful and dangerous than delegated permissions. An application with delegated Mail.Read can read the emails of the user who consented. An application with application-level Mail.Read can read the emails of every user in the tenant — all day, every day, whether anyone is logged in or not. Non-Human Identity security assessment should always flag application permissions as higher risk than delegated equivalents.

The highest-risk permission scopes

Application.ReadWrite.All

This permission allows the service principal to read and write all applications and service principals in the tenant — including creating new applications and granting them permissions. An attacker who controls a service principal with this permission can create a new privileged application, grant it any permissions they want, and use it as a persistent backdoor. This is one of the most dangerous permissions in the Microsoft Graph permission model and should essentially never be granted without explicit security review and ongoing monitoring.

AppRoleAssignment.ReadWrite.All

This permission allows the service principal to grant any application permission to any application. Combined with Application.ReadWrite.All, it provides complete control over the permission model of the entire tenant. Even without Application.ReadWrite.All, it allows privilege escalation by granting elevated permissions to existing applications.

Directory.ReadWrite.All

Read and write access to all objects in the directory — users, groups, devices, organisational settings. This permission allows creating, modifying, and deleting users and groups, which is the foundation of most privilege escalation and persistence techniques used by attackers who have established a foothold in a tenant.

RoleManagement.ReadWrite.Directory

The ability to read and write Microsoft Entra ID (formerly Azure AD) directory role assignments — including assigning Global Administrator to any account. A service principal with this permission can promote an account the attacker already controls to Global Administrator with a single API call, handing over complete control of the tenant.

Mail.ReadWrite (Application)

Read, write, and delete all mail in all mailboxes. The write and delete capabilities make this significantly more dangerous than Mail.Read. An attacker can use this permission not just to exfiltrate email but to delete evidence of an attack, impersonate users in ongoing email conversations, and plant false information in mailbox history.

The permission combinations that matter most

Individual permissions are dangerous. Combinations can be catastrophic. Application.ReadWrite.All plus AppRoleAssignment.ReadWrite.All is the tenant takeover combination — an attacker with both can grant themselves any permission they want in a short sequence of API calls. Mail.ReadWrite plus Calendars.ReadWrite plus Files.ReadWrite represents complete access to a user's digital life. Directory.ReadWrite.All plus RoleManagement.ReadWrite.Directory is a direct path to Global Administrator.

When reviewing service principal permissions, look not just at individual scopes but at the combinations held by each identity. The blast radius of a service principal is determined by what an attacker could do with all of its permissions simultaneously — not by any single permission in isolation.
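A combination check like this is straightforward to automate. The sketch below uses the real Graph scope names discussed above, but the labels and the `flag_combinations` helper are illustrative assumptions, not a definitive scoring scheme.

```python
# Known-dangerous Microsoft Graph permission combinations (from the discussion
# above). The labels are illustrative shorthand for the resulting capability.
DANGEROUS_COMBOS = {
    ("Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"):
        "tenant takeover",
    ("Directory.ReadWrite.All", "RoleManagement.ReadWrite.Directory"):
        "direct path to Global Administrator",
    ("Mail.ReadWrite", "Calendars.ReadWrite", "Files.ReadWrite"):
        "complete access to a user's digital life",
}

def flag_combinations(scopes: set[str]) -> list[str]:
    """Return the label of every dangerous combination fully contained in the
    set of scopes held by a single service principal."""
    return [label for combo, label in DANGEROUS_COMBOS.items()
            if set(combo) <= scopes]
```

Running this per service principal — rather than per permission — is what surfaces the blast-radius view: `flag_combinations({"Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All", "User.Read"})` flags the tenant takeover combination even though each scope might pass an individual review.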

How AIRM helps

AIRM maps every Microsoft Graph permission held by every service principal in your tenant to a resource impact score, combining them into a Blast Radius score and band. The Blast Radius Map visualises which resource categories are reachable and whether access is read or write. High-risk permission combinations are flagged automatically in the risk assessment.

Service Principal Credential Rotation: Why Most Organisations Get It Wrong and What Best Practice Looks Like

Credential rotation for service principals is one of the most neglected areas of Microsoft 365 security. Most organisations have no rotation policy, no monitoring for aged credentials, and no process for responding when a credential is compromised. Here's how to fix all three.

Password rotation for human accounts is a well-established security practice — even if the specific policies are contested (length vs frequency, periodic vs risk-based), the principle that credentials should be rotated is universally accepted. For service principal credentials — client secrets and certificates — this principle is applied inconsistently at best and ignored entirely at worst.

The result is Microsoft 365 environments where client secrets created in 2020 are still active in 2026, where no one knows which applications depend on which credentials, and where the discovery of a credential in a breach dump triggers a scramble to figure out what it was connected to and what the blast radius of its exposure might be.

Why credential rotation is harder for service principals

The practical challenges of service principal credential rotation are real and worth acknowledging. Unlike human passwords, service principal credentials are embedded in application configurations — sometimes in environment variables, sometimes in configuration files, sometimes in secrets managers, sometimes in all three. Rotating a credential requires identifying every place it's used, updating all of those locations simultaneously, and testing that the application continues to function correctly. For long-running integrations with multiple deployment environments, this is a genuine operational burden.

These challenges explain why rotation is neglected. They don't justify it. The operational cost of unplanned, emergency rotation after a compromise is orders of magnitude higher than the cost of rotation on a planned schedule.

What best practice looks like

Maximum secret lifetime: 12 months

Client secrets should have a maximum lifetime of 12 months. This is shorter than Microsoft's default maximum (24 months) and much shorter than what most organisations actually implement (no expiry). A 12-month maximum limits the window of exposure if a credential is compromised and forces a regular review of which applications are still active and still require their credentials.
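Enforcing the 12-month ceiling is a one-line check once you have each secret's issue and expiry timestamps in your inventory. This is a minimal sketch — `exceeds_max_lifetime` is a hypothetical helper, and the 365-day constant encodes the policy suggested above rather than any Microsoft default.

```python
from datetime import datetime, timedelta, timezone

# Policy ceiling from the recommendation above: client secrets live <= 12 months.
MAX_SECRET_LIFETIME = timedelta(days=365)

def exceeds_max_lifetime(created: datetime, expires: datetime) -> bool:
    """True if a client secret was issued with a lifetime over the 12-month cap,
    i.e. it should be flagged regardless of how soon it actually expires."""
    return (expires - created) > MAX_SECRET_LIFETIME
```

Note that this flags secrets at issue time, not at expiry: a 24-month secret created yesterday is already a policy violation, even though it won't expire for two years.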

Certificate-based authentication preferred

Where possible, service principals should use certificate-based authentication rather than client secrets. Certificates provide stronger cryptographic security, have clearer expiry management, and are harder to accidentally expose in logs or configuration exports. The operational overhead of certificate management is higher than secret management, but the security benefit is significant.

Inventory before rotation

You cannot rotate credentials you don't know about. The prerequisite for any rotation programme is a complete inventory of all service principal credentials — client secrets and certificates — with their expiry dates and (crucially) the applications they're associated with. Without this inventory, rotation efforts are partial and the gaps are invisible.

Monitoring and alerting for approaching expiry

Credential rotation should be planned and scheduled, not reactive. Monitoring for credentials approaching expiry — with alerts at 60, 30, and 7 days — allows rotation to be planned, tested, and executed without operational disruption. Expired credentials create service outages. Expiry monitoring prevents both the security risk of over-aged credentials and the operational risk of unexpected service interruptions.

Responding to a compromised credential

When a service principal credential is confirmed or suspected to have been compromised, the response must be immediate. Revoke the affected credential first — before investigation, before root cause analysis, before stakeholder communication. Every minute the credential remains active after compromise is a minute the attacker retains access.

Then investigate: what data could have been accessed with the compromised credential's permissions? What actions could have been taken? Was there evidence of actual access in the audit logs? This investigation is significantly easier if you have a record of the credential's normal usage pattern — which is why continuous monitoring has value that extends far beyond detection.

How AIRM helps

AIRM monitors credential ages across all service principals in your Microsoft 365 tenant — flagging secrets and certificates approaching expiry before they become either a security risk or an operational one. Aged credentials are incorporated into the overall risk score and surfaced in the Non-Human Identity Risk Report. Alerts fire at configurable thresholds.

Why the Consolidation of MSP Security Is Creating a Once-in-a-Decade Differentiation Opportunity

As the MSP security market consolidates around a small number of dominant platforms, the MSPs who win will be those who've added genuinely differentiated capabilities their clients can't get from a bundle. Non-Human Identity security is one of those capabilities.

The MSP security market is undergoing significant consolidation. Large platform vendors are acquiring specialist tools and bundling them into comprehensive suites. Clients are being encouraged to standardise on fewer, larger vendors. The pressure to consolidate is real — from both the vendor side (simpler contracts, better margins) and the client side (fewer tools to manage, cleaner integrations).

For MSPs, this consolidation creates both a threat and an opportunity. The threat is margin compression — as the tools you resell become commodities within larger bundles, your ability to differentiate on tool selection diminishes. The opportunity is differentiation through specialist capabilities that aren't in the bundles — services that require expertise to deliver and that clients genuinely need but can't get from a platform vendor alone.

What's getting bundled

Endpoint protection, email security, basic SIEM, vulnerability scanning, and identity protection for human users are all heading towards commoditisation within major platform suites. Microsoft 365 E7 now bundles Copilot, Agent 365, the full Entra Suite, and E5 security capabilities in a single SKU. Clients on E7 have less immediate need for standalone endpoint or email security tools — which means MSPs who've built their security practice exclusively around these categories face a structural margin challenge.

What isn't being bundled

The capabilities that remain genuinely specialist — and that create genuine MSP differentiation — are those that require domain expertise, platform-specific depth, and the kind of ongoing operational attention that doesn't fit neatly into a platform bundle. Non-Human Identity security for Microsoft 365 is a prime example. Microsoft's E7 includes Agent 365 for Microsoft-native agents. It does not include continuous monitoring of third-party AI agents, legacy service principals, blast radius analysis, or the per-framework compliance reporting that regulated clients need.

An MSP who can offer this capability — delivered as a monthly recurring service, integrated into their PSA, with branded client reports — is offering something that the platform vendors aren't, and that clients genuinely need right now. That's differentiation that survives consolidation.

Building the practice

The MSPs who will win in the next five years are those building security practices around capabilities with strong regulatory tailwinds, that require genuine expertise to deliver, and that address risks clients don't yet know they have but will soon be required to manage. Non-Human Identity security sits squarely in this category: EU AI Act, DORA, and NIS2 are all driving demand, the expertise barrier is real, and most clients have a significant undiscovered risk posture that becomes visible the moment you run the first scan.

How AIRM helps

AIRM gives MSPs a specialist Non-Human Identity security capability that complements — rather than competes with — the platform suites their clients are standardising on. Per-tenant pricing, native PSA integration, and branded reporting mean the service is immediately deployable and commercially viable from the first client.