2026 Sagiss Managed Security Report: AI Phishing in the Workplace
Published: April 1, 2026 | Updated: April 1, 2026
72% of Workers Say AI Is Making Phishing Attacks Harder to Detect
Introduction
Workplace phishing is no longer limited to suspicious messages with obvious errors or easy-to-spot warning signs. As AI improves the tone, grammar, and realism of fraudulent messages, employees may be confronting phishing attempts that look far more like ordinary workplace communication. Email, chat, and collaboration tools are built for speed, and that speed can leave little room for pause when a message appears polished, familiar, or urgent.
That matters because risky message behavior remains common. Employees may understand the general advice to slow down, verify requests, and watch for red flags. Yet in practice, many still click links, reply to requests, or confirm legitimacy only after taking action. In that environment, AI may be making an existing problem harder to manage by helping phishing messages blend more easily into the flow of everyday work.
To understand how workers are navigating suspicious messages in this environment, Sagiss, a Dallas-based provider of IT and managed security services, partnered with Pollfish on the 2026 Sagiss Managed Security Report: AI Phishing in the Workplace, surveying 500 U.S. desk-based workers. The study includes responses from 100 Dallas-Fort Worth workers, offering a regional lens on how local businesses are navigating these risks.
Key Findings
- 72% say phishing attempts are more convincing than a year ago because of AI-written language.
- 64% say an AI-generated message could likely impersonate someone they work with, and 57% say AI makes phishing harder to spot because it feels more professional.
- 63% clicked a work-related link in the past year and later felt they should have double-checked it first.
- 57% have verified a message’s request only after taking action first.
- 45% have replied to a work message and later questioned whether it was legitimate.
- 68% check work email or chat outside normal business hours at least sometimes, and 56% feel pressure to respond after hours at least sometimes.
Together, these findings suggest that workplace phishing risk is being shaped by more than awareness alone. Employees are making fast decisions in high-pressure environments, while AI is making fraudulent messages harder to spot. That combination is forcing security leaders to rethink how phishing risk shows up in day-to-day work.
“AI is changing the way phishing looks and feels, but the deeper issue is that employees are making decisions under constant pressure,” said Sagiss President Travis Springer. “Cybersecurity leaders need to account for the reality of modern work, where people are moving quickly, responding across channels, and often making judgment calls before they have fully verified what is in front of them.”
Workers Still Act First and Reconsider Later
The clearest finding in the survey is that risky message behavior remains common. In the past 12 months, 63% of respondents say they clicked a work-related link and later felt they should have double-checked it first. That includes 42% who say this happened multiple times. Another 45% say they have replied to a work email or chat message and later questioned whether it was legitimate. A similar 58% say they have verified a request only after taking action first, including 36% who say they have done so multiple times.
These results suggest that awareness alone has not solved the problem. Workers have spent years hearing the same guidance about suspicious messages, but many still fall into patterns of quick response and delayed verification. The challenge is not simply whether employees know phishing attacks exist. The challenge is whether they can consistently apply good judgment in the middle of a busy workday.
That same pattern shows up in how people respond to urgency. About 41% say they have ignored their initial suspicion about a message because it seemed urgent at least once. Even when workers recognize some level of risk, speed can still win the first decision.

42% say they have clicked a work-related link multiple times in the past year and later felt they should have double-checked it first.

52% say they have never replied to a work email or chat message and later questioned whether it was legitimate.

36% say they have verified a request only after taking action first multiple times.

55% say they have never ignored their initial suspicion about a message because it seemed urgent.
Workplace Pressure Helps Drive Those Mistakes
The survey points to a practical explanation for why these behaviors persist. People are often making decisions in conditions that favor fast action over careful review. Asked which situations make mistakes most likely, 55% point to rushing between tasks or meetings, while 48% point to multitasking. These are not fringe scenarios. They are normal features of many workdays.
When respondents were asked what makes suspicious messages hardest to verify before responding, the top answer was that the message looks legitimate or well written at 37%. After that came too many messages or notifications at 28% and time pressure at 27%. Only 7% say the problem is that they are unsure how to verify a message. That distinction matters. The problem is less about basic knowledge and more about conditions that make caution harder to practice consistently.
Inbox overload appears to make that problem worse. When email or chat has a high number of unread messages, 22% say they skim more quickly, and 15% say they prioritize urgency over verification. Even among workers trying to keep up, the pressure to move faster can weaken the pause that phishing defense depends on.

55% say rushing between tasks or meetings is the situation most likely to lead to mistakes.

37% say suspicious messages are hardest to verify when they look legitimate or well written.

38% say having a high number of unread messages does not change their behavior.
After-Hours Communication Extends the Risk Window
The workday does not seem to be the boundary for message risk. A combined 69% of respondents say they check work email or chat outside normal business hours at least sometimes. Another 56% say they feel pressure to respond after hours at least sometimes. In other words, many workers stay connected well beyond standard working time, and that creates more moments when judgment may be rushed or fragmented.
That exposure shows up in behavior. About 34% say they have responded to a work message after hours and later felt they should have verified it more carefully. The most common reason for responding after hours is not panic, but workload management. Around 31% say they do it to stay caught up and reduce workload later. Another 21% say the message feels urgent.
This is an important part of the story because it shifts the discussion from cybersecurity training alone to workplace habits. When employees are expected to stay responsive at all hours, they are also exposed to more decision points when attention is split and verification may feel secondary.

29% say they sometimes check work email or chat outside normal business hours.

29% say they sometimes feel pressure to respond to work messages outside normal business hours.

61% say they have never responded to a work message after hours and later felt they should have verified it more carefully.

31% say they respond to work messages after hours to stay caught up and reduce workload later.
AI Is Making Suspicious Messages Look More Credible
The survey’s strongest forward-looking result is the role respondents believe AI is already playing in phishing. Seventy-two percent say phishing attempts are more convincing than a year ago because of AI-written language. About 64% say it is somewhat or very likely that an AI-generated message could successfully impersonate someone they work with. More than half, 57%, say AI makes phishing harder to spot because it feels more professional.
The concern about imitation is not only whether AI can mimic someone they know; many workers see it as a real possibility. About 59% say they are moderately, very, or extremely concerned about AI being used to imitate a coworker’s writing style or tone. That level of concern aligns with a broader shift in how suspicious messages are perceived. The issue is not merely that more phishing attacks exist. It is that the content may now look more polished and more believable inside ordinary workplace communication.
These AI-specific questions point in a clearer direction: many workers believe the quality of phishing language has improved, and that improvement is making deception more effective.

39% say it is somewhat likely that an AI-generated message could successfully impersonate someone they work with.

57% say AI makes phishing harder to spot because it feels more professional.

28% say they are slightly concerned about AI being used to imitate a coworker’s writing style or tone.
Familiarity, Tone, and Realism Make AI Phishing Messages More Persuasive
One of the most useful takeaways in the data is that people may now be persuaded by cues they usually associate with legitimate work. About 42% say they have trusted a message because it sounded like a coworker or someone they regularly work with at least once. That helps explain why AI-assisted phishing may feel especially difficult to spot. Messages do not have to be flashy to be effective. They only have to feel normal enough to pass the first test of credibility.
Respondents also describe suspicious messages as becoming more polished in specific ways. About 33% say they have noticed better grammar and writing in suspicious messages over the past year. Another 27% point to more personalized messages, while 27% say suspicious messages now reference real workplace details more often. A similar 26% say the tone feels more natural or human. Together, these responses suggest that phishing messages are increasingly shaped to blend into the texture of everyday work communication rather than stand apart from it.
That impression is reinforced by the cues that drive quick responses. Familiar sender name is one of the strongest triggers, and message references to real workplace details also contribute to responsiveness. In a work environment where people are trained to value efficiency, collaboration, and speed, the most dangerous messages may be the ones that borrow those same signals of trust.

53% say they have never trusted a message because it sounded like a coworker.

33% say they have noticed better grammar and writing in suspicious messages over the past year.
Dallas/Fort Worth Workers Recognize What AI Can Do, but Many Do Not Yet See Its Broader Impact at Work
Dallas/Fort Worth workers appear to recognize AI’s capability to strengthen phishing attacks, but they are less likely to say that impact is already showing up broadly in the messages they encounter at work. Nearly two-thirds, 64%, say phishing attempts are more convincing than a year ago because of AI-written language, compared with 72% overall. At the same time, 66% say an AI-generated message could likely impersonate someone they work with, and 56% say AI makes phishing harder to spot because messages feel more professional. Together, those results suggest workers in the region understand how AI can make suspicious messages more polished and more believable.
Even so, half of Dallas/Fort Worth workers say suspicious work messages seem no more convincing today than they did a year ago. They are also less likely than respondents overall to say suspicious messages have become more personalized. That suggests Dallas/Fort Worth workers recognize AI’s potential for harm, but are more cautious about saying it is already reshaping the messages they see at work.
What Dallas/Fort Worth workers do appear to be noticing is a shift in how suspicious messages are written. About 29% say those messages now show better grammar and writing, and 28% say they have a more natural or human tone. By comparison, fewer point to more personalized messages at 19%. That suggests the local change may be showing up less through obvious customization and more through polish, tone, and writing quality that make suspicious messages feel more professional at first glance.

41% of Dallas respondents say phishing attempts are somewhat more convincing than a year ago because of AI-written language.

50% of Dallas respondents say suspicious work messages seem no more convincing today than a year ago.

30% of Dallas respondents say they have noticed no change in suspicious messages over the past year.
Conclusion
The survey suggests that workplace phishing risk is rooted in behavior patterns that have not gone away. Employees still click, reply, and verify later with surprising frequency. Those decisions appear to be shaped less by ignorance than by the realities of work itself: high message volume, constant urgency, and pressure to stay responsive well beyond the standard workday.
“As AI makes suspicious messages more credible, employers need to think beyond awareness alone,” Springer said. “They also need to consider the pace, pressure, and communication habits shaping employee decisions every day. Taking these factors into consideration has become a cybersecurity essential. As a managed security services provider, we emphasize this to all our clients; security truly is a team effort.”
AI raises the stakes because it can make phishing messages more polished, more believable, and more similar to the communication workers already trust. For employers, that points to a broader challenge. Reducing phishing risk may require more than repeating familiar awareness advice. It may also require looking closely at the environments in which employees make fast decisions, the communication norms that reward immediate response, and the trust signals that attackers can now reproduce more convincingly than before.
Methodology
The 2026 Sagiss Managed Security Report: AI Phishing in the Workplace is based on a Pollfish survey conducted on February 23, 2026, among 500 desk-based workers who use email or chat as part of their jobs, including 100 workers in the Dallas-Fort Worth metroplex.
Results are presented as percentages of respondents. Some questions allowed respondents to select more than one answer, so totals may exceed 100%.
The sample includes workers from companies of varying sizes, including 41% who say their employer has more than 1,000 employees.
Sagiss, LLC