Psychological Profiling of Phish-Ready Users: Ethical Boundaries & Practical Use
- Shilpi Mondal

- December 09, 2025

Phishing, deceptively crafted messages or communications that trick individuals into revealing sensitive data, remains one of the most persistent and effective forms of cyberattack. Phishing exploits not software vulnerabilities, but human psychology.
In recent years, researchers have begun investigating an approach that goes beyond “phishing detection” alone: profiling individuals’ psychological traits to identify who might be more susceptible to phishing; what we might call “phish-ready users.” This raises both promising possibilities and serious ethical concerns. This article explores the psychological foundations of phishing susceptibility, what recent research reveals about profiling, the potential practical uses and the ethical boundaries that should guide any such effort.
Why phishing works: Psychological levers at play
Social engineering as a psychological attack: Phishing is not (just) a technical exploit. It is, fundamentally, a psychological exploit. Attackers rely on human factors, not code vulnerabilities.
Broadly, phishing belongs to the social engineering family of cyber threats: techniques that trick people into weakening security themselves.
Phishing emails are designed to exploit cognitive, emotional, and social weak spots. Because they target how people make decisions, especially under time pressure, they can steer otherwise careful users into irrational actions.
Common psychological tactics used by phishers: Empirical and theoretical research repeatedly shows a set of psychological tactics or “influence levers” that phishers exploit.

The most common include:
Authority & Trust: Impersonating a trusted figure or entity (e.g., a CEO, bank, or government agency) triggers deference. People tend to comply when a communication appears to come from a perceived authority.
Urgency & Fear: Phrases like "your account will be locked" or "respond immediately" exploit the instinct to act quickly under stress, bypassing careful thought.
Social Proof & Familiarity: People trust a request more when colleagues appear to be complying; when others like them go along, doubts fade.
Emotional/Cognitive Overload: Too much information, confusing language, or strong emotion pushes people toward instinctive reactions rather than reflective evaluation.
Deception & Illusion of Truth: Spoofed websites, copied logos, and familiar layouts create a false sense of legitimacy, so users often believe what looks familiar instead of verifying whether it is real.
The power of these methods comes from tapping into intuitive, gut-level reactions rather than deliberate reasoning. As noted in a foundational review, social engineering attacks “exploit weaknesses in human cognitive functions.”
Because these vulnerabilities are rooted in basic human psychology, not in technological flaws, they are inherently difficult to “patch” with software alone.
Psychological Profiling: What does research say about “phish-ready users”?

In recent years, behavioural researchers have shifted their focus from the attacks themselves to the targets. Rather than asking only how phishing works, they are asking what distinguishes the people who fall for it from those who do not, and whether those differences can be measured.
Key findings: Personality traits, behaviours, and demographics:
· A 2025 study found that certain personality profiles, particularly impulsive and anxious individuals, are more likely to fall for phishing.
· Another 2025 review examined a wide range of personality and demographic factors; it linked openness, agreeableness, and anxiety to higher susceptibility to scams, while conscientiousness lowered it.
· Research using behavioural metrics (e.g., reaction times, click rates in phishing simulations) confirms that human error, rather than technological failure, remains a primary vector for phishing success.
· A recent modelling approach, applying frameworks such as the Heuristic‑Systematic Model (HSM) and Cyber‑Routine Activity Theory (Cyber-RAT) to phishing susceptibility among younger users (e.g. Gen Z), suggests that both habitual online behaviour and reliance on heuristic decision-making, rather than systematic evaluation, increase risk.
Moreover, recent work has attempted to formalise the “psychological profile” of phish-ready individuals, transitioning from verbal characterisation to data-driven modelling.
These studies demonstrate that phishing susceptibility is not random. Social engineering interacts with individual differences — personality, cognitive style, emotional state, and even routine behaviours in predictable ways.
From findings to targeted prevention: These results matter in practice. When higher-risk individuals can be identified, or the situations and traits that heighten vulnerability are understood, organisations can intervene early, adapt their defences to behavioural cues, and prevent incidents before they grow. In particular, they can:
Design targeted training or interventions: Tailor training to the individual, focusing on those prone to impulsive decisions or excessive worry rather than putting everyone through the same generic phishing briefing.
Use adaptive defences: Apply extra security layers selectively, for example when someone handles risky information or when the threat level is high, while keeping processes smooth otherwise.
Deploy “cyber-psychological hygiene” programs: Run "digital mental habits" training that combines online safety tips with mindset techniques, helping people build confidence, recognise their own vulnerabilities, and remain calm under stress to reduce impulsive mistakes.
Thus, psychological profiling could, in principle, help turn the human weakest link into a manageable part of the defence strategy.
Ethical Boundaries and Risks: Why profiling “phish-ready users” is a slippery slope
Despite its potential benefits, psychological profiling in cybersecurity raises serious concerns about ethics, privacy, and discrimination.
Privacy, consent and autonomy:
Informed consent: Using personal data to predict someone’s behaviour requires explicit permission; without it, profiling undermines people’s autonomy over their own data. Inferring personality traits, such as how anxious or impulsive someone is, is ethically delicate ground. People should know what is being gathered and why.
Potential misuse: Using such data without caution can lead to unfair treatment, like biased hiring or excessive employee monitoring.
Transparency and control: People should know where their data goes and who can access it, and retain the ability to correct it, delete it, or withdraw consent later.
Risk of unfair treatment or prejudice: Some personality traits correlate with demographics such as age, origin, or income. Profiles built on them can inadvertently reinforce unfair patterns: groups may be flagged or excluded not because of genuine risk, but because of social circumstance.
Ethical limits on predictive profiling: Even where statistical correlations exist, they do not determine individual behaviour. Labelling someone “phish-ready” does not guarantee that person will fall for phishing; it only indicates elevated risk. Treating such labels as fixed facts about a person can produce unfair outcomes.
There is also a slippery slope: targeted profiling could expand into ever deeper psychological scrutiny, gradually eroding trust, privacy, and self-worth.
Security vs. human dignity: balancing interests: Organisations have legitimate security concerns. But enforcing profiling-based controls or mandatory “psych tests” may erode trust, harm workplace culture, or violate rights. There needs to be a balance between security and respect for individuals as autonomous, private persons.
Practical Use Cases: How and Where Profiling Could Be Applied (Responsibly)
Given both the promise and the perils, where might psychological profiling of phishing vulnerability be applied responsibly and ethically?
Voluntary training & awareness programs: Organisations (companies, educational institutions) can offer optional psychology-based training modules. For example:

· Personality-aware security education: for participants who opt in, trainers can highlight common vulnerabilities tied to impulsivity, stress, or cognitive overload.
· Gamified simulations that help participants learn their own behaviours under stress or urgency to build self-awareness and “phishing self-efficacy.” (Such gamified approaches have been proposed and studied.)
Because participation is voluntary and data is not used for punitive measures, this respects autonomy while boosting resilience.
Risk-based adaptive security controls (with consent): In high-risk roles (finance, admin, IT, HR), organisations might, with informed consent, apply stronger security controls for individuals with higher phishing susceptibility scores. For example:
· Mandatory multi-factor authentication (MFA)
· Additional verification/training before executing critical transactions (e.g., money transfers)
· Periodic refresher training aligned with psychology-based risk profiles
This approach treats psychological profiling as part of a defence-in-depth strategy, not a judgment, but a risk management tool.
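A minimal sketch of such risk-based adaptive controls, assuming a consented 0-to-1 susceptibility score; the thresholds and control names are invented for the example, and the output only ever adds protections, never penalties:

```python
def controls_for(score: float, high_risk_role: bool) -> list[str]:
    """Map a consented 0..1 susceptibility score to protective measures.

    Thresholds and control names are hypothetical; the mapping is
    additive, so a higher score only adds safeguards.
    """
    controls = ["baseline-mfa"]  # everyone gets MFA
    if high_risk_role:
        controls.append("out-of-band-verification-for-transfers")
    if score >= 0.7:
        controls += ["link-sandboxing", "quarterly-refresher-training"]
    elif score >= 0.4:
        controls.append("annual-refresher-training")
    return controls

print(controls_for(0.75, high_risk_role=True))
```

Keeping the mapping this explicit also makes it auditable, which matters for the fairness reviews discussed earlier.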
Population-level cybersecurity research and public-awareness campaigns: Researchers and policymakers can analyse aggregated, anonymised data to learn which habits and mindsets correlate with higher phishing risk, and use those insights to design better awareness campaigns, school curricula, and community programs without tracking individuals.
Incorporating psychological signals into detection software: Some recent work incorporates “psychological traits” or “persuasion features” into machine learning models that detect phishing content. For example, a 2022 study showed that scoring emails based on psychological persuasion tactics (fear, urgency, desire) improved phishing detection performance.
In the future, security tools could combine technical detection with “psychological-risk heuristics,” especially in environments where human judgment remains central (e.g., approving financial transactions).
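As a toy illustration of the persuasion-feature idea, the snippet below counts cues for four tactics (urgency, fear, authority, desire) in a message using hand-built keyword lists; the counts could then feed a classifier as features. The lexicons here are invented for the example and are not the cited study's method, which used learned trait scoring.

```python
import re

# Toy keyword lexicons, purely illustrative and far from exhaustive.
PERSUASION_CUES = {
    "urgency":   [r"\bimmediately\b", r"\bwithin 24 hours\b", r"\bact now\b"],
    "fear":      [r"\bsuspended\b", r"\blocked\b", r"\bunauthori[sz]ed\b"],
    "authority": [r"\bceo\b", r"\bit department\b", r"\byour bank\b"],
    "desire":    [r"\bprize\b", r"\brefund\b", r"\bfree\b"],
}

def persuasion_features(text: str) -> dict[str, int]:
    """Count persuasion-tactic cues in a message; usable as ML features."""
    lowered = text.lower()
    return {tactic: sum(len(re.findall(p, lowered)) for p in patterns)
            for tactic, patterns in PERSUASION_CUES.items()}

email = ("Your account has been locked due to unauthorised access. "
         "Act now and verify within 24 hours to claim your refund.")
print(persuasion_features(email))
# → {'urgency': 2, 'fear': 2, 'authority': 0, 'desire': 1}
```

A real system would learn such features from labelled data rather than rely on fixed word lists, but the output format, a per-tactic score, is the same.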
Why Profiling Is Challenging — And What We Don’t (Yet) Know
Even as early research shows promise, there remain important limitations and uncertainties.
Limited scope and generalisation: Much of the research relies on simulated phishing tests; behaviour during real attacks, under genuine stress and real stakes, may differ.
· Traits such as neuroticism and impulsivity are correlated with risk, but they are probabilistic indicators, not guarantees; a high score does not mean a person will be victimised.
· Cultural, educational, and social context matters. What holds in one population (e.g., a university cohort, a corporate office) may not replicate elsewhere. Indeed, a recent review of profiling victims of cyberattacks points out that “digital context amplifies certain vulnerabilities and introduces new forms of risk.”
Ethical, legal and privacy hurdles: As discussed earlier, profiling raises issues of consent, fairness, data protection, and transparency. Many organisations lack the robust governance frameworks needed to ensure appropriate use.
Risk of over-relying on profiling and neglecting broader defences: Overemphasis on profiling can lead teams to neglect general security hygiene. Phishing protection requires several layers, including technical tools, education, and policy, and should never rest on behavioural profiling alone.
Guiding Principles — Ethical Use of Psychological Profiling for Phishing

If organisations, researchers, or policymakers choose to adopt psychological profiling, they should commit to a set of ethical guardrails:
Informed consent: Users must be fully informed about what is being measured, how data will be used and stored, and who can access it. Participation should ideally be voluntary.
Transparency: Explain what the profile is for, how it will be used, its limitations, and the potential downsides for the individual.
Non-punitive use: Profiling scores should not be used for punishment, discrimination, or negative judgments; only for protective or supportive security measures.
Data minimisation & anonymisation: Only collect what is necessary; aggregate/desensitise data where possible; avoid storing sensitive psychological data longer than needed.
Equity & fairness: Ensure profiling does not unfairly target specific demographic groups; monitor for bias.
Complementarity, not replacement: Use profiling as one component alongside technical defences, education, and organisational policy, never as a sole security strategy.
Ongoing evaluation: Regularly audit the effectiveness, fairness, and unintended consequences of profiling initiatives.
The Future: Toward a Cyber-Psychology-Informed Security Paradigm
The convergence of behavioural science, cognitive psychology, and cybersecurity is producing something new, labelled by one study as Cybersecurity Cognitive Psychology. As experts from different fields contribute, their ideas are merging into a distinct discipline shaped by real-world behaviour and digital risk. Though still in its infancy, early results are promising because the field focuses on how people actually behave, not just on theory.
As the science matures, we may see:
Standardised psychological-risk assessment tools: validated instruments to assess phishing susceptibility.
Adaptive organisational frameworks: where security controls dynamically adapt to user risk profiles (with consent).
Public-awareness campaigns informed by behavioural data: tailored interventions not just teaching “don’t click suspicious links,” but educating people about when and why they are psychologically vulnerable.
Integration of “human factor” models into detection systems: technical tools that flag not only suspicious content, but also risky contexts (e.g. urgent request + high-stress time + user traits).
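The last item above might be sketched as follows; the signals, weights, and threshold are all assumptions for illustration, and the user score would come only from consented profiling:

```python
from datetime import datetime

def flag_for_review(contains_urgent_request: bool,
                    sent_at: datetime,
                    user_risk: float,
                    threshold: float = 0.6) -> bool:
    """Combine a content signal, a situational signal, and a consented
    user risk score into a single flag. Weights are invented for the
    example; a real system would calibrate them against incident data."""
    off_hours = sent_at.hour < 8 or sent_at.hour >= 18
    context_risk = (0.4 * contains_urgent_request
                    + 0.2 * off_hours
                    + 0.4 * user_risk)
    return context_risk >= threshold

# An urgent request, late at night, to a high-risk user gets flagged.
print(flag_for_review(True, datetime(2025, 12, 9, 22, 15), user_risk=0.8))
```

The design choice worth noting is that the flag triggers a review step, not a block: the human stays in the loop, consistent with the non-punitive principle above.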
This future holds promise but only if approached with care, respect for individual rights, and a commitment to fairness.
Conclusion
Phishing is no longer just a matter of technology and firewalls. It is increasingly a matter of psychology: of social engineering, cognitive biases, emotional pressure, and human vulnerability. Research now suggests that some individuals may be more susceptible to phishing than others based on personality traits, behavioural tendencies, and habitual online behaviour.
Psychological profiling of “phish-ready users” offers a powerful but delicate tool. If leveraged responsibly, with informed consent, transparency, and fairness, it can enhance security awareness, reinforce defences, and tailor interventions for those most at risk. But used carelessly, it could become a tool of surveillance, discrimination, or unjust control.
In the end, organisations and society must tread carefully. Profiling should support security; it should never replace the human dignity, privacy, and autonomy of individuals.
With thoughtful frameworks and ethical guardrails, psychological profiling may well become a valuable but always responsibly applied pillar of next-generation cybersecurity defence.
Citations:
Wikipedia contributors. (2025, November 20). Phishing. Wikipedia. https://en.wikipedia.org/wiki/Phishing
Washo, A. H. (2021). An interdisciplinary view of social engineering: A call to action for research. Computers in Human Behaviour Reports, 4, 100126. https://doi.org/10.1016/j.chbr.2021.100126
Masas, R. (2023, December 20). What is Social Engineering | Attack Techniques & Prevention Methods | Imperva. Learning Centre. https://www.imperva.com/learn/application-security/social-engineering-attack/
https://iacis.org/iis/2023/2_iis_2023_71-83.pdf
Marshall, M. (2025, October 18). The Psychology of Phishing: Why humans fall for social engineering and how identity management can protect your enterprise. Avatier. https://www.avatier.com/blog/the-psychology-phishing-identity
Rodriguez, R. M., Golob, E., & Xu, S. (2020, July 9). Human Cognition through the Lens of Social Engineering Cyberattacks. arXiv.org. https://arxiv.org/abs/2007.04932
Islam, A., Rashid, M. M., Othman, F., Kaosar, M. G., & Islam, L. (2025). Identifying personality traits associated with phishing susceptibility. Security Journal, 38(1). https://doi.org/10.1057/s41284-025-00466-4
Identifying personality traits associated with phishing susceptibility. (n.d.). Psychologie Légale. https://psychologie-legale.fr/identifying-personality-traits-associated-with-phishing-susceptibility
Tjondro, E., Ester, C., Sardjono, Y. G., & Kusumawardhani, A. (2025). Investment scam vulnerability among university students: the role of personality traits and risk tolerance. Cogent Education, 12(1). https://doi.org/10.1080/2331186x.2025.2464309
López-Aguilar, P., Urruela, C., Batista, E., Machin, J., & Solanas, A. (2025). Phishing vulnerability and personality traits: Insights from a systematic review. Computers in Human Behaviour Reports, 20, 100784. https://doi.org/10.1016/j.chbr.2025.100784
Pasupuleti, M. K. (2025). Human-Centric Cybersecurity: Evaluating phishing susceptibility using behavioural metrics. International Journal of Academic and Industrial Research Innovations(IJAIRI), 05(06), 412–424. https://doi.org/10.62311/nesx/rphcrcscrcp4
Gan, C. L., Lee, Y. Y., & Liew, T. W. (2024). Fishing for phishy messages: predicting phishing susceptibility through the lens of cyber-routine activities theory and heuristic-systematic model. Humanities and Social Sciences Communications, 11(1). https://doi.org/10.1057/s41599-024-04083-1
Baral, G., & Arachchilage, N. a. G. (2018, November 22). Building Confidence not to be Phished through a Gamified Approach: Conceptualising Users’ Self-Efficacy in Phishing Threat Avoidance Behaviour. arXiv.org. https://arxiv.org/abs/1811.09024
Shahriar, S., Mukherjee, A., & Gnawali, O. (2022). Improving phishing detection via psychological trait scoring. arXiv. https://arxiv.org/pdf/2208.06792
Image Citations:
Blog — Centre for Internet and Society. (n.d.). https://cis-india.org/internet-governance/blog?b_start:int=160&subject=rti
Watson, K. (2025, February 11). AI Phishing: How AI is Making Attacks More Sophisticated? Second Cyber. https://seconcyber.com/ai-phishing-how-ai-is-making-attacks-more-sophisticated/



