Phishing, Pretexting, and Trust: How Attackers Exploit People

On a typical business morning, finance teams around the world open email expecting invoices, vendor updates, and executive requests. But for Toyota Boshoku Corporation, one such email led to a catastrophic financial loss.

In August 2019, employees at a subsidiary of Toyota Boshoku were targeted in a Business Email Compromise (BEC) attack, a highly convincing form of phishing in which attackers impersonated trusted partners and altered messages to request urgent wire transfers. The fraudsters sent emails that appeared to come from known contacts, including familiar vendor details and legitimate-looking accounting requests. Acting on these messages, the finance team executed fund transfers totaling $37 million into accounts controlled by the attackers. The funds were ultimately lost, highlighting how social engineering, not malware or system exploits, was the pivot point of the breach.

This incident illustrates a sobering reality: even global enterprises with advanced cybersecurity tooling can be compromised when attackers leverage psychological tactics to bypass human judgment.

Despite decades of investment in cybersecurity tools, one uncomfortable truth remains: most security incidents still begin with a person. Not a zero-day vulnerability. Not a sophisticated nation-state exploit. A human being who clicked, trusted, shared, or complied.

From phishing emails that look convincingly legitimate to phone calls impersonating executives or vendors, social engineering attacks continue to dominate the modern threat landscape. Attackers understand something many organizations still underestimate: people are easier to manipulate than systems.

This is not a failure of intelligence or intent. It is a predictable outcome of human behaviour under pressure, urgency, and trust. And until organizations address the human factor in cybersecurity with the same seriousness as technical controls, attackers will continue to exploit it.

The Human Factor in Cybersecurity: Why It Still Matters

Cybersecurity conversations often centre on technology such as firewalls, endpoint protection, SIEM platforms, and automated detection. While these controls are essential, they are designed to defend against known technical threats. Social engineering attacks bypass them entirely by targeting decision-making instead of code.

According to Verizon’s 2023 Data Breach Investigations Report, 74% of breaches involved the human element, including phishing, social engineering, credential misuse, or error.

This statistic has remained stubbornly consistent year over year. Despite better tools and greater awareness, attackers continue to succeed because human behaviour has not fundamentally changed.

People are busy and well-intentioned. Naturally, we trust internal emails, familiar vendors, and authority figures, and social engineers weaponize these traits with precision.

What Are Social Engineering Attacks?

Social engineering attacks are manipulation-based techniques that exploit human psychology rather than technical vulnerabilities. The goal is to trick individuals into performing actions that compromise security: clicking a link, sharing credentials, transferring funds, or disclosing sensitive information.

Unlike brute-force or malware-heavy attacks, social engineering relies on:

  • Trust
  • Urgency
  • Authority
  • Fear
  • Familiarity

These attacks are often low-cost, scalable, and highly effective, making them a preferred method for cybercriminals and organized threat actors alike.

The most common forms include phishing, pretexting, baiting, and impersonation.

Phishing Attacks: The Most Persistent Threat Vector

Phishing attacks remain the most prevalent form of social engineering and consistently rank among the most common initial access vectors across industries.

Phishing typically involves fraudulent emails, messages, or websites designed to appear legitimate. Common tactics include:

  • Fake password reset requests
  • Invoices or payment notifications
  • HR or payroll updates
  • Cloud service alerts (e.g., file-sharing notifications)
  • Executive or IT support impersonation

The Anti-Phishing Working Group (APWG) reported over 1 million unique phishing attacks per quarter in 2023, marking historic highs.

Modern phishing is no longer riddled with spelling errors or suspicious formatting. Many campaigns are professionally written, branded, and timed to coincide with real-world events: tax season, mergers, travel, or system upgrades.

Even highly technical employees can be deceived when emails align with their role or current workload.
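One technical guardrail that complements user vigilance is checking whether a message's friendly display name claims a trusted identity while the actual sending domain does not match. Below is a minimal sketch of that heuristic; the `TRUSTED_DOMAINS` list and function names are illustrative assumptions, not any specific mail gateway's API, and a real deployment would rely on the organization's own domain inventory plus standards such as DMARC.

```python
# Illustrative allow-list; a real deployment would use the
# organization's own verified domains and DMARC alignment checks.
TRUSTED_DOMAINS = {"example.com", "vendor-example.com"}

def sender_domain(address: str) -> str:
    """Extract the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_suspicious(display_name: str, address: str) -> bool:
    """Flag messages whose display name implies a trusted party
    but whose actual sending domain is not on the trusted list."""
    domain = sender_domain(address)
    claims_trusted = any(
        d.split(".")[0] in display_name.lower() for d in TRUSTED_DOMAINS
    )
    return claims_trusted and domain not in TRUSTED_DOMAINS
```

A lookalike sender such as "Example Corp Billing" mailing from `billing@examp1e-pay.net` would be flagged, while the same display name from `example.com` would pass. Heuristics like this reduce noise but cannot catch context-based deception, which is why the human layer still matters.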

Pretexting: When Attackers Tell a Convincing Story

While phishing casts a wide net, pretexting is more targeted and often more dangerous.

Pretexting involves creating a fabricated scenario to persuade someone to divulge information or take a specific action. The attacker may pose as:

  • A senior executive
  • A trusted vendor
  • An IT administrator
  • A regulator or auditor
  • A new employee needing help

These attacks frequently occur over phone calls, video conferences, or email chains and are often informed by reconnaissance gathered from LinkedIn, company websites, or past data breaches.

The FBI has repeatedly warned about pretexting-based fraud, particularly Business Email Compromise (BEC), which resulted in more than $2.9 billion in reported losses in 2023 alone.

Pretexting succeeds because it exploits trust relationships and organizational hierarchies – areas where people are conditioned not to question.

Why Technology Alone Is Not Enough

Many organizations assume that advanced security tooling will compensate for human error. This is a dangerous misconception.

While email filtering, endpoint detection, and identity controls reduce risk, they cannot:

  • Prevent a user from willingly sharing credentials
  • Stop a phone-based impersonation
  • Detect context-based deception
  • Override authority-based compliance

The UK National Cyber Security Centre explicitly states that user behaviour and organizational culture are critical components of cybersecurity resilience. Without strong behavioural controls, even the best technology can be undermined by a single moment of misplaced trust.

The Psychology Behind Social Engineering

To effectively reduce social engineering risk, organizations must understand why these attacks work.

Attackers deliberately exploit cognitive biases, including:

  • Authority bias: People comply with perceived leaders or experts
  • Urgency bias: People act quickly when time pressure is implied
  • Reciprocity: People want to be helpful when asked
  • Familiarity bias: People trust known brands or colleagues
  • Fear and loss aversion: People act to avoid negative consequences

Social engineers design messages that trigger emotional responses, bypassing rational evaluation. This is why awareness alone is insufficient. People often recognize threats only in hindsight.

Real-World Consequences of Human-Driven Breaches

The impact of social engineering attacks extends far beyond IT inconvenience.

Consequences often include:

  • Financial losses and fraud
  • Operational disruption
  • Regulatory penalties
  • Loss of sensitive or personal data
  • Reputational damage
  • Erosion of customer trust

For small and mid-sized organizations, a single successful phishing incident can be existential. According to the U.S. Small Business Administration, many SMBs fail within months of a major cyber incident due to financial and operational strain. The human factor in cybersecurity is not just a technical issue; it is a business risk issue.

Security Awareness Training: Necessary but Not Sufficient

Cybersecurity awareness training is a critical foundation, but its effectiveness depends on how it is implemented. Annual, checkbox-style training sessions do little to change behaviour. Effective programs are:

  • Ongoing and adaptive
  • Role-specific
  • Reinforced through simulations
  • Supported by leadership
  • Integrated into daily workflows

The SANS Institute emphasizes that awareness must evolve into behavioural change, not just knowledge transfer. Training should focus on decision-making under pressure, not just identifying obvious red flags.

Building Behavioural Controls Into Security Programs

Organizations that successfully reduce social engineering risk go beyond awareness and implement behavioural and procedural safeguards, including:

Clear Policies and Escalation Paths

Employees must know how to verify unusual requests without fear of reprimand or delay.

Segregation of Duties

No single individual should have the authority to approve high-risk actions without secondary validation.
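In software terms, this control is a dual-approval (four-eyes) check before a high-risk action executes. The sketch below is a hedged illustration of the idea, not a reference to any real payment system; the threshold value and function names are assumptions for the example.

```python
from typing import Optional

# Illustrative threshold; a real policy would set this per risk appetite.
APPROVAL_THRESHOLD = 10_000.0

def transfer_allowed(amount: float, requester: str,
                     approver: Optional[str] = None) -> bool:
    """Dual-control check: transfers at or above the threshold require
    a second approver who is not the original requester."""
    if amount < APPROVAL_THRESHOLD:
        return True  # low-risk: single approval suffices
    return approver is not None and approver != requester
```

Under this rule, the requester approving their own large transfer is rejected, which is precisely the gap a BEC attacker relies on when pressuring one individual to act alone.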

Executive Participation

When leadership models verification and caution, it legitimizes secure behaviour across the organization.

Regular Testing

Phishing simulations and tabletop exercises help normalize scrutiny and reporting.

Psychological Safety

Employees should feel safe reporting mistakes quickly; early disclosure can dramatically reduce impact.

These controls acknowledge that mistakes will happen and focus on limiting blast radius rather than assigning blame.

Trust Is the Real Target

At its core, social engineering is not about technology; it is about trust.

Attackers exploit:

  • Trust in email
  • Trust in authority
  • Trust in routine
  • Trust in familiarity

Defending against social engineering attacks requires recalibrating trust without destroying collaboration. This balance is difficult but achievable with the right mix of training, policy, and leadership reinforcement.

The goal is not paranoia; it is healthy skepticism supported by process.

A People-Centric Security Mindset

As threat actors continue to refine their techniques, organizations must accept a fundamental reality: cybersecurity is as much about people as it is about systems.

Technology will continue to evolve. Attackers will adapt. But human behaviour, with its desire to help, to comply, and to trust, will remain constant.

Organizations that invest in people-centric safeguards, behavioural controls, and meaningful cybersecurity awareness training are far better positioned to reduce risk than those relying solely on technical defences.

Key Takeaway

Social engineering attacks persist because they exploit human nature, not technical gaps. Reducing risk requires aligning technology, policy, training, and culture around the human factor in cybersecurity, which is where the real battle is still being fought.
