Business Email Compromise in Microsoft 365 Environment

11/18/2025

Disclaimer

Details have been sanitized to protect client confidentiality.
Technical elements remain accurate for educational purposes.


Background

This started the way a lot of these things do.

A client’s contacts began reporting weird emails that looked like they were coming from the company’s finance side. The messages were close enough to real that people were asking, “Is this actually you?”

I work in managed services and security, so my job here was pretty simple on paper:

  • Figure out if an account was actually compromised
  • See how far the damage went
  • Lock the attacker out
  • Then harden the environment so the same thing is harder next time

At this point I did not have admin rights to their Microsoft 365 tenant. So the first phase was mostly OSINT, external analysis, and talking directly with executives to get a sense of how bad the exposure might be before I ever touched their tenant.

I also had one big question in the back of my mind the whole time:

Are attackers just guessing or spraying possible addresses,
or do they already know the company’s internal naming pattern?

That question ended up being important for how severe we considered the incident.


The Investigation

The pre-tenant phase: before I had Global Admin

The first thing I did was look at the bigger picture around the people involved in the suspicious email threads.

Using public breach data and standard OSINT sources, I found that:

  • Several external contacts tied to the client had their addresses show up in past data breaches
  • Some of those breaches included password hashes, phone numbers, addresses and other personal details
  • Two executives at the client had personal email accounts that had also appeared in recent breaches

This told me a few things:

  • The attacker did not need to discover these people from scratch
  • Names, addresses and relationships between the client and their partners were very likely already out in the wild
  • It was realistic that someone could guess or derive the client’s email naming pattern

Around the same time, I found a newly registered lookalike domain that mimicked the client’s brand. It was:

  • Registered very recently
  • Hosted on a third party mail platform
  • Clearly not controlled by the client
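Lookalike domains like this one can often be caught early by generating common typosquat variants of your own brand domain and checking DNS or WHOIS for them on a schedule. A minimal sketch of the generation step, assuming a placeholder domain (`example-co.com` stands in for the client's real brand):

```python
# Generate common typosquat variants of a brand domain so they can be
# checked periodically against DNS/WHOIS. "example-co.com" is a placeholder.

def typosquat_variants(domain: str) -> set[str]:
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Character omission: "example" -> "exmple"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent character swap: "example" -> "examlpe"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)
    # Common lookalike substitutions
    for ch, rep in {"o": "0", "l": "1", "i": "1", "e": "3"}.items():
        if ch in name:
            variants.add(name.replace(ch, rep) + "." + tld)
    # Hyphenated "secure" suffix and alternate TLDs
    variants.add(name + "-secure." + tld)
    for alt in ("net", "org", "co"):
        variants.add(name + "." + alt)
    variants.discard(domain)
    return variants

candidates = typosquat_variants("example-co.com")
```

This only covers the cheap, obvious permutations; dedicated tooling goes much further, but even this short list is enough to alert on a fresh registration like the one we found.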

So even before touching Microsoft 365, I had a working theory:

  1. Third party breaches expose external and executive details
  2. Attacker stands up a lookalike domain
  3. Attacker targets the client and their partners using that infrastructure
  4. Either they are just guessing addresses, or they have the real internal naming scheme

While working through this, I also sat with two executives and walked through their personal email security:

  • Reviewed recent activity and devices on their accounts
  • Checked third party app access
  • Reset passwords that had not been changed in years
  • Confirmed that multifactor authentication was turned on
  • Confirmed that business email was actually flowing through Microsoft 365, not their personal accounts

This did not prove that Microsoft 365 was compromised yet, but it made one thing clear:
there was enough exposed data around these people that a targeted BEC attempt was very believable.

The next step was the one I wanted from the start: real access to the tenant.


Inside the tenant: confirming the BEC

Once I was granted Global Admin in their Microsoft 365 environment, the real picture came into focus.

What I found:

  • One user account had sign-ins from a datacenter IP that had nothing to do with their normal work pattern
  • These sign-ins were happening on a very consistent four-hour schedule, which fits automated credential replay or token usage
  • The mailbox for that user was full of invoice and payment conversations with multiple external partners
  • That same user was a member of several high-impact distribution groups that represented things like general info, admin and permitting-style addresses
  • The tenant had Security Defaults enabled, but some users (including this one) could still effectively sign in without MFA

At that point it was not just “someone is spoofing us from the outside.” It was a confirmed Business Email Compromise on a single high value mailbox inside Microsoft 365.
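That four-hour cadence is worth quantifying, because automated replay shows up as near-fixed intervals between sign-ins while human activity does not. Given sign-in timestamps exported from the Entra sign-in logs, a quick regularity check might look like this sketch (the ten-minute tolerance is an illustrative threshold, not a standard):

```python
from datetime import datetime, timedelta
from statistics import median

def cadence_score(timestamps, tolerance=timedelta(minutes=10)):
    """Return the median inter-arrival gap and the fraction of gaps
    within `tolerance` of that median. A fraction near 1.0 over many
    events suggests an automated schedule rather than a human."""
    times = sorted(timestamps)
    gaps = [b - a for a, b in zip(times, times[1:])]
    mid = median(gaps)
    regular = sum(1 for g in gaps if abs(g - mid) <= tolerance)
    return mid, regular / len(gaps)

# Illustrative input: twelve sign-ins exactly four hours apart
logins = [datetime(2025, 11, 1) + timedelta(hours=4 * i) for i in range(12)]
gap, fraction = cadence_score(logins)
```

A fraction close to 1.0 across a multi-day window, combined with a datacenter source IP, is what moved this from "odd" to "almost certainly scripted."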


Guessing vs knowing the naming scheme

Remember that earlier question about whether this was random spraying or someone who actually knew the internal naming pattern?

Once I had tenant access, I tried to answer that directly.

I pulled:

  • The list of real users and aliases inside Microsoft 365
  • The addresses that had been targeted or impersonated on the typosquatted domain

If this had been guessing or broad spraying, I would expect to see noise. Things like:

  • Addresses that do not exist internally
  • Variants that are close but slightly off
  • A wide pattern that tries lots of combinations

That is not what we saw.

Every account the attacker tried to mimic lined up cleanly with real people and real naming rules inside the company. There were no random extras. No junk addresses. No “maybe this is a guess.”
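The comparison itself reduces to set arithmetic over local-parts: how many of the targeted aliases exist in the directory, and how many are junk. A minimal sketch with placeholder addresses (the real lists came from the tenant and the typosquatted domain):

```python
def compare_targets(real_users: set[str], targeted: set[str]) -> dict:
    """Split targeted local-parts into hits (real internal accounts)
    and misses (addresses that do not exist internally). A spray attack
    produces many misses; an informed attacker produces almost none."""
    return {
        "hits": targeted & real_users,
        "misses": targeted - real_users,
    }

# Placeholder data for illustration
real = {"jsmith", "adoe", "billing", "info"}
seen_on_lookalike = {"jsmith", "billing", "info"}
result = compare_targets(real, seen_on_lookalike)
```

In our case the `misses` set was empty, which is exactly the signal described above.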

That told us:

  • This was not just someone throwing names at the wall
  • The attacker either had access to internal information at some point
    or
  • They had enough leaked data across different breaches to reconstruct the naming scheme with high confidence

Either way, it moved the incident from “spray against a brand” to “very targeted and informed BEC against specific people.”


Technical Deep-Dive

Attack flow

Putting the pieces together, the likely flow looked something like this:

  1. Exposure over time

    • Client contacts and executives show up in multiple public breaches
    • Some of those breaches include passwords or password hashes and detailed personal data
  2. Brand and relationship mapping

    • From those breaches, an attacker can see who talks to whom
    • The client, their partners and their naming patterns become easier to predict
  3. Domain setup

    • A lookalike domain that matches the client’s brand style gets registered
    • Mail for that domain is hosted on a third party service and ready to send and receive
  4. Internal compromise

    • At some point, the attacker successfully authenticates to a real Microsoft 365 account inside the client’s tenant
    • Sign-in logs show access from a datacenter IP on a regular four-hour schedule
    • The pattern is not tied to any known VPN or remote work path
  5. Position inside finance related mail

    • The compromised user is involved in invoice and payment threads with multiple organizations
    • The mailbox contains live discussions about amounts, payment methods and schedules
    • Distribution group membership lets the attacker send as broader business functions, not just the individual user
  6. Detection and response

    • External partners report suspicious emails
    • Investigation starts, first from the outside, then inside Microsoft 365
    • Once we confirm the datacenter logins, mailbox content and the precise naming pattern alignment, the focus shifts to containment

Indicators of Compromise

The most important IoCs in this case:

  • Datacenter-origin sign-ins for a user who never works from that kind of environment
  • Sign-ins happening on a tight and consistent schedule over days
  • A mailbox that touches money related conversations across multiple partners
  • Ongoing failed login attempts from the same infrastructure after the password was reset
  • A typosquatted domain that mirrors real internal naming, with no random or unused aliases

That last one is a big signal that this is not just random scanning. Someone did their homework.

Containment and clean-up

Once the compromise was confirmed, we did the usual but important work:

  • Disabled the compromised account
  • Performed a full password reset
  • Revoked all active sessions across all devices
  • Re-enabled the account only after confirming the user could sign in securely
  • Reviewed:
    • Mailbox rules
    • Forwarding settings
    • Delegations and send-as rights
    • OAuth application grants

We also mapped out the distribution groups the user was tied to, so we understood who could have been targeted from those addresses if the attacker had started sending at scale.

On top of that, we onboarded the tenant into an identity-focused detection platform to improve:

  • Sign-in anomaly visibility
  • Mailbox rule change detection
  • OAuth and app-based abuse detection

Lessons Learned

This was not some massive nation-state story. It was a very normal small business. That is exactly why it matters.

Here is what I took away from it.

1. “Security Defaults are on” is not the same as “MFA is enforced”

On the surface, the tenant looked like it had Microsoft’s built-in protections turned on. In reality:

  • Some users could still sign in without MFA
  • One of those users ended up being the compromised one

If you run Microsoft 365, it is worth taking the time to actually audit:

  • Which accounts have MFA registered and enforced
  • Whether high risk users are treated any differently
  • Whether you are relying on “on by default” settings that you have never validated in practice
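One way to run that audit is the Entra authentication-methods registration report in Microsoft Graph. Fair warning: that report endpoint itself requires an Entra ID P1/P2 license plus the appropriate reporting permissions, which loops back to the licensing point below. A sketch of the filtering step, with fetching left to an authenticated Graph client and a trimmed illustrative response shape:

```python
# Sketch: flag accounts with no MFA registered using the Entra
# authentication-methods registration report. The endpoint requires an
# Entra ID P1/P2 license and reporting permissions; fetching the paged
# responses is left to an authenticated Graph client.

REPORT_URL = ("https://graph.microsoft.com/v1.0/reports/"
              "authenticationMethods/userRegistrationDetails")

def users_without_mfa(pages: list) -> list:
    """Walk paged report responses; return UPNs where isMfaRegistered is false."""
    flagged = []
    for page in pages:
        for user in page.get("value", []):
            if not user.get("isMfaRegistered", False):
                flagged.append(user["userPrincipalName"])
    return flagged

# Illustrative response shape, trimmed to the fields used here
sample = [{"value": [
    {"userPrincipalName": "jsmith@example-co.com", "isMfaRegistered": True},
    {"userPrincipalName": "adoe@example-co.com", "isMfaRegistered": False},
]}]
```

Even without premium licensing you can approximate this manually per user, but the report makes "who actually has MFA" a one-query question instead of a guess.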

If this client had had Entra ID P1, Microsoft 365 Business Premium, or similar licensing, I would have:

  • Built Conditional Access Policies to:
    • Require MFA for all cloud app sign-ins
    • Block legacy protocols
    • Treat admin and finance-related accounts as higher risk

Because they did not have that licensing, we were limited to Security Defaults, basic settings and third party detection tools. The lack of CAP was a direct limitation in the response and in long term hardening.
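For reference, the policies I would have built are short. These are sketches of Conditional Access policy bodies in the shape the Graph `conditionalAccess/policies` endpoint accepts; the display names and report-only state are my choices, not requirements:

```python
# Sketch: Conditional Access policy bodies (Graph v1.0 conditionalAccess
# schema). Requires Entra ID P1. Starting in report-only mode
# ("enabledForReportingButNotEnforced") is a deliberate, cautious choice.

require_mfa_policy = {
    "displayName": "Require MFA for all cloud app sign-ins",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

block_legacy_policy = {
    "displayName": "Block legacy authentication",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        # Legacy clients that cannot satisfy MFA
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```

A third policy layering extra controls on admin and finance accounts would follow the same shape with a narrower `includeUsers` scope.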

2. Licensing is a security decision, not just a cost choice

It is easy to treat licensing like a budgeting exercise, but in this case it changed what we could do:

  • No CAP meant no native fine-grained access policies
  • Logging and telemetry were good enough to confirm a compromise, but not as deep as they could have been
  • Some of the protections that would make this harder in the first place simply were not available

For smaller environments that handle real money or sensitive contracts, I am more comfortable pushing for:

  • At least Business Premium as a baseline
  • Clear documentation that says, in plain language, “Here is what we cannot see or control with your current license”

3. Personal accounts bleed into business risk

In this case, executives had used personal email for work-related conversations in the past. Those personal accounts later showed up in breaches. That does not automatically mean those accounts were the entry point, but it adds noise and risk.

The practical takeaways:

  • Personal accounts will always exist, so at minimum:
    • Turn on strong MFA
    • Reset passwords that have been reused for a decade
    • Review devices and app access
  • Try to keep all real business communication inside Microsoft 365 or whatever your primary platform is
  • If personal accounts are involved at all, treat them as part of the threat surface and review them during an investigation

4. Distribution groups are easy to overlook and very easy to abuse

The compromised user could send as several generic addresses. That matters because:

  • Customers and vendors are more likely to trust those addresses
  • It makes reply-chain hijacking inside financial conversations much easier
  • A single mailbox can quietly become a megaphone

What I recommend now:

  • Regularly audit who can send as which addresses
  • Trim distribution membership down to what is actually needed
  • Treat widely used addresses as sensitive, even if they are not “admin”

5. Naming patterns are data too

This incident was a good reminder that something as simple as “first initial plus last name” can turn into fuel for an attacker once it is exposed.
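Reconstructing addresses from a known pattern is trivial. Given names scraped from breach data or public profiles, a "first initial plus last name" scheme yields candidates directly; the names and domain below are placeholders:

```python
def derive_addresses(names: list, domain: str) -> list:
    """Apply a first-initial-plus-last-name pattern to (first, last) pairs."""
    return [f"{first[0].lower()}{last.lower()}@{domain}" for first, last in names]

# Placeholder names and domain for illustration
leaked = [("Jane", "Smith"), ("Arturo", "Doe")]
candidates = derive_addresses(leaked, "example-co.com")
```

Three lines of code is all it takes, which is exactly why an exposed naming pattern should be treated as meaningful reconnaissance data rather than trivia.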

Because we took the time to ask “are they guessing, or do they know,” and then actually proved it by comparing internal users to the typosquatted aliases, we could say:

  • This was not random
  • Someone had a pretty clean map of who exists in this company

That changes how you think about risk going forward. It is not just “turn on MFA and move on.” It is also:

  • Where else could that kind of internal pattern be leaking?
  • How do we avoid using personal accounts in ways that reveal more than they should?
  • How do we track and respond when naming patterns and contact maps are clearly in hostile hands?

6. Tools help, but basic hygiene still does most of the work

The identity threat detection platform made it easier to watch for weird sign-ins and rule changes, and I am glad we deployed it. But none of that replaces the basics:

  • Strong, enforced MFA
  • Getting off legacy protocols where possible
  • Having clear boundaries around guest access and personal accounts
  • Knowing who can influence money and how their accounts are protected

In the end, this case was one compromised mailbox that had just enough access and just enough financial visibility to be dangerous. That is all it takes for a BEC actor to try and redirect real money.

If you run a Microsoft 365 environment and any of this sounds familiar, the best time to tighten it up is before your own four-hour sign-in pattern shows up in the logs.