Summary
This module introduces concepts to consider when changing the security behaviors of individuals. Every organization is made up of people, so security practitioners must understand how people use mental models and metaphors to understand and react to threats. Using these concepts, students can weigh the trade-offs of various approaches to behavioral security.
Learning Objectives
- Understand the use of mental models and metaphors in cybersecurity.
- Describe concepts such as the attention filter, System 1 vs. System 2 thinking, and FOMO vs. JOMO, and how these are reflected in user interface design.
- Identify the trade-offs between approaches to improving security behaviors.
Pre-Readings
- See Course Readings for “Changing Security Behaviors”
Discussion
Students will be asked to volunteer examples from their own experiences.
- Describe a risky or dangerous security practice (or inaction) that you have engaged in or observed at an organization you have worked with or for (or with the Clinic). Example: “Holding doors open for coworkers between biometrically secured entrances.”
- Discuss why this practice took place. Example: “Social norms make it impolite to close doors on other people.”
- Was there anything meant to prevent this practice from taking place? Why didn’t it work?
Input
Discuss the use of mental models and metaphors in cybersecurity.
Mental models take time to develop. Compare gunpowder and TCP/IP.
Gunpowder over hundreds of years
- China: 9th Century
- Europe: 17th Century
- Outmodes personal armor, traditional fortresses, and military doctrine
- Creates new ideals of leadership, new industrial establishments, new relationships between government and governed, etc.
TCP/IP over tens of years
- DARPA: mid-1970s
- Apple Macintosh: 1984
- WWW: early 90s
- iPhone: 2007
- ‘The Cloud’: 2010
We use many metaphors to describe cybersecurity challenges, yet none comes close to capturing the nature of the challenge we face. Each metaphor leads the people who use it to think and act in ways that make no sense to the users of other metaphors.
Building from mental models and metaphors, there are simple design principles for behavioral security.
- Reward pro-security behaviors immediately and visibly (should Comcast pay people or increase their Internet speeds in return for taking security measures?)
- Enhance the awareness of risk (could you make the security messages and alerts look very different than other messages and alerts?)
- ‘Naming and Shaming’ of security policy violators?
- Default settings: obvious, but key (see the sketch after this list)
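The default-settings principle is the easiest of these to make concrete in code. Below is a minimal sketch (all type and setting names are hypothetical illustrations, not from the readings) of secure-by-default configuration: the protective behavior requires zero decisions from users, and merging partial overrides on top of secure defaults means an unset preference never silently becomes an insecure one.

```typescript
// A minimal sketch of secure-by-default configuration.
// All names here are hypothetical illustrations, not from the readings.

interface SecuritySettings {
  autoUpdate: boolean;
  mfaRequired: boolean;
  linkWarnings: boolean;
}

// The defaults carry the security policy: users who never open a settings
// screen (i.e., make zero decisions) still get the protective behavior.
const DEFAULT_SETTINGS: SecuritySettings = {
  autoUpdate: true,
  mfaRequired: true,
  linkWarnings: true,
};

// Merging partial user overrides on top of secure defaults means an absent
// preference never silently becomes an insecure one.
function resolveSettings(overrides: Partial<SecuritySettings>): SecuritySettings {
  return { ...DEFAULT_SETTINGS, ...overrides };
}

// A user who changed only one setting keeps every other protection:
console.log(resolveSettings({ linkWarnings: false }));
// -> { autoUpdate: true, mfaRequired: true, linkWarnings: false }
```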
Attention filter.
Every time you make a decision, you run down your neural gas tank for the day.
The conscious mind can process about 120 bits/second (its total bandwidth), while following one person speaking takes about 60 bits/second, which is why attending to more than two simultaneous speakers is effectively impossible. The brain’s attentional filter tells the rest of the brain what to focus on. You can train it, and so can your users… but it’s not easy.
Your attention filter evolved to respond to two categories of stimuli
- Change: Carla Shatz and the cat visual cortex
- Importance: what happens when you are in a crowded room and someone across the room says “fire,” “sex,” or your name?
System 1 and System 2 thinking.
- Simple and specific is good (‘open a window’ gets better adherence than ‘use in a well-ventilated room’), but presumably at the cost of comprehension.
- Large quantities of text look like they will take a lot of effort to read, so people often read none of it. But it is hard to explain security trade-offs in very few words.
- Illustrations convey lots of emotion but less information.
What Chrome did (see APF paper; a sketch of this pattern follows the list):
- Nearly gave up on the comprehension objective
- Was able to almost double the adherence objective (from roughly 30% to roughly 60%) by using colors, defaults, and demoting the unsafe choice to an ‘advanced’ button
- But they don’t know how many people might have become frustrated and switched browsers
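To make those design moves concrete, here is a minimal sketch of the interstitial pattern described above (this is not Chrome’s actual implementation; the element structure, copy, and handler names are hypothetical). The safe action is the prominent, focused default, while the unsafe path is demoted behind an ‘Advanced’ disclosure, trading some comprehension for adherence.

```typescript
// A hedged sketch of a warning interstitial that demotes the unsafe choice.
// Not Chrome's actual code; names and copy are hypothetical.

function buildWarningInterstitial(onGoBack: () => void, onProceed: () => void): HTMLElement {
  const panel = document.createElement("div");
  panel.style.background = "#b00020"; // alarming color: signal risk rather than explain it
  panel.style.color = "#fff";
  panel.style.padding = "2em";

  const message = document.createElement("p");
  message.textContent = "Your connection is not private."; // short and specific: low effort to read
  panel.append(message);

  const backButton = document.createElement("button");
  backButton.textContent = "Back to safety"; // the prominent, default action
  backButton.autofocus = true;              // pressing Enter takes the safe path
  backButton.onclick = onGoBack;
  panel.append(backButton);

  // The unsafe path is hidden until the user explicitly asks for it.
  const advanced = document.createElement("details");
  const summary = document.createElement("summary");
  summary.textContent = "Advanced";
  advanced.append(summary);

  const proceedLink = document.createElement("a");
  proceedLink.href = "#";
  proceedLink.textContent = "Proceed anyway (unsafe)";
  proceedLink.onclick = (e) => {
    e.preventDefault();
    onProceed();
  };
  advanced.append(proceedLink);
  panel.append(advanced);

  return panel;
}
```

Note that the pattern deliberately optimizes adherence over comprehension: color, brevity, and the demoted disclosure steer behavior without explaining the underlying risk.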
Fear of Missing Out (FOMO) vs. Joy of Missing Out (JOMO)
- Precisely the same engineering (technical and social) that leads to engagement leads to many security problems.
- Provocation: Texting while driving is a behavioral semi-equivalent to clicking on a link.
- Provocation 2: Distraction is the attacker’s best friend.
Deepening
- Should practitioners make users aware of how the underlying technology works? Or make their choices as simple as possible?
- Should they use peer learning and social influence? Or rules?
- Should they use risk-assessment mindsets? Or simple heuristics?
Use this list of trade-offs to guide the conversation:
- Concrete incentives vs. peer pressure vs. knowledge of why
- Specificity and customization (guides) vs. generalizability and transferability
- Assuming a level of pre-existing knowledge vs. idiot-proofing
- Showing a payoff (“you’re more secure”) vs. the Fredkin paradox
- Risk mindset vs. worst-case thinking
- Feedback and dialogue vs. time and efficacy
Synthesis
Discuss the following examples and strategies for changing security behavior, and connect them to the trade-offs above:
- 1-on-1 Training
- Workshop
- Online training
- Guides
- Reports
- Technical Audits
- Workplace Incentive Programs
- “Name and Shame” Programs