While the burgeoning field of usable security has made security mechanisms more usable for people in general, prior research has consistently fallen short because not all people respond to stimuli in the same way or share the same preferences. Our goal is to examine how security mitigations can be tailored to individuals, and whether such tailoring yields even greater security compliance than user-centric design alone has achieved. While previous work shows that individual differences are predictive of privacy and security attitudes, further research is needed to explore how these findings can be applied in practice. Our research agenda centers on reframing security mitigation designs so that they target the decision-making dimensions we previously found to be predictive of computer security attitudes. This will include iterative human-subjects experimentation to evaluate whether targeted mitigations result in greater compliance.