
The human factors of improving cybersecurity

By Amy Sprague
September 25, 2020

ISE Assistant Professor Prashanth Rajivan studies human factors challenges in cybersecurity in his Laboratory for Behavioral Research in Computer Security (BRICS Lab). We caught up with Rajivan over Zoom to learn more about his research rationale, methodology, future directions and more.

You put cybersecurity in the context of human factors. Can you help draw the connection between the two?

[Photo: ISE Assistant Professor Prashanth Rajivan. Credit: William Yang]

Most people think of cybersecurity as a technical problem. But it's more than that: it's a socio-technical problem, because it is driven by humans. At the most basic individual level, we make decisions about whether to update software, whether to reply to an email, and whether we can identify potential "phishing" attacks. At an organizational level, there is a whole group of people monitoring computer networks and trying to defend them from attacks. On the adversarial end, we have humans actively finding ways to break into these systems.

So a human is always present in the loop: as an end user making these active, personal decisions, as a defender trying to protect our computer networks from being exploited, or as an attacker. I study decision-making in this system with the goal of helping people take better actions to guard against attacks.

This sounds a bit like the behavioral economics concept of “nudges.”

Yes, absolutely. We're looking at how to build these behavioral influencers, or nudges, into our cybersecurity systems so that people make better decisions easily and regularly. This is crucial with security updates, for example. We all get those alerts telling us to update our security software or operating systems, but many of us just push them off. And there are valid reasons for doing so: we may be in the middle of something, and it is unclear how long the interruption will take; we don't have a good understanding of what the update will even do for us. In the meantime, we could be delaying long enough for an attacker to exploit our vulnerability. So we're asking how these updates should be structured so that we act more quickly.
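
To make that concrete, here is a minimal, purely hypothetical Python sketch (not drawn from Rajivan's lab) of how an update prompt might be structured to reduce the uncertainty that drives deferral: it states how long the update should take and bounds how long it can be put off. All names and thresholds are invented for illustration.

    from datetime import datetime, timedelta

    MAX_DEFERRAL = timedelta(days=7)  # hypothetical cap on snoozing the update

    def update_prompt(est_minutes, first_prompted, now):
        """Build a nudge that reduces uncertainty (how long the update takes)
        and escalates gently as the deferral window closes."""
        remaining = MAX_DEFERRAL - (now - first_prompted)
        if remaining <= timedelta(0):
            return (f"Update required now (about {est_minutes} min). "
                    f"It will install at the next restart.")
        return (f"A security update is ready (about {est_minutes} min). "
                f"You can defer it for up to {remaining.days} more day(s).")

    print(update_prompt(5, datetime(2020, 9, 1), datetime(2020, 9, 6)))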

What is the direction of your research?

Rather than just imagining a solution, we are studying what kinds of biases people face. We're looking at why people delay or ignore the call to update their security, with the goal of reducing that bias or cost for them. There is no way we can totally fix human behavior, but if we could, say, get people to update their security within one week instead of two, we would have made a great advance.

What is your research approach?

[Image: a credit card with a fish hook through it]

One methodology we use is "human in the loop" simulation. As an example, we are running an experiment on a type of phishing scam dubbed "spear phishing." This is a targeted attack on a small group of people, like a small department in an organization. The attacker has access to enough personal information to craft a message that sounds like it comes from a trusted source. Maybe they mix in some facts they find on social media. They can see who your co-workers are and who your supervisor is, and they can masquerade as these contacts. So to understand this dynamic, we are looking into the relationship between privacy and spear phishing vulnerabilities.

Studying this is quite challenging: we can't just send subjects spear phishing emails to see how they would react, and we're never going to get access to the actual attackers sending these emails. So instead we create simulation environments called "synthetic task environments." We randomly assign participants, in teams of four, to the roles of users, defender, and attacker. The assigned attacker gets training in crafting a spear phishing attack, and the others decide whether to respond.
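
For readers who want the mechanics, a minimal Python sketch of that random assignment might look like the following. The one-attacker, one-defender, two-user split per team is our reading of the design described above, not a confirmed detail, and all names are hypothetical.

    import random

    def assign_teams(participants, seed=42):
        """Randomly split participants into teams of four and assign roles.
        The one-attacker / one-defender / two-user split is an assumed
        reading of the design; the actual split may differ."""
        rng = random.Random(seed)  # fixed seed for a reproducible assignment
        pool = list(participants)
        rng.shuffle(pool)
        teams = []
        for i in range(0, len(pool) - len(pool) % 4, 4):
            attacker, defender, user1, user2 = pool[i:i + 4]
            teams.append({"attacker": attacker,
                          "defender": defender,
                          "users": [user1, user2]})
        return teams

    for team in assign_teams([f"P{n:02d}" for n in range(8)]):
        print(team)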

Do any cyber attacks surprise you anymore?

Oh, that's a good question. I have to say the recent Twitter attack that hacked the accounts of Bill Gates, Elon Musk, Barack Obama, Jeff Bezos, and several other high-profile people really surprised me. These accounts all asked for bitcoin deposits for COVID-19 relief and promised double the return. It was pulled off by a teenager through a highly targeted spear phishing attack that solicited account information from a Twitter employee. I was surprised Twitter employees would fall victim, especially when the scam involved so many high-profile accounts.

How do you see the cybersecurity field evolving?

I strongly believe Artificial Intelligence (AI) is indispensable. We are looking at developing new methods for people to team up with AI to solve security challenges. We can train an AI system to catch a high number of malicious attacks, but we need better methods for the AI to help humans make better decisions under uncertainty. In the case of spotting phishing emails, you can never completely remove the user from the system: if you're expecting an email that gets flagged as spam, you need to be able to override the flag. AI-infused security systems will work better by teaming with their human counterparts.
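
A hedged sketch of what that teaming could look like in code: the classifier automates only the confident cases and surfaces uncertain ones, with an explicit override path for an email the user was expecting. The thresholds, names, and triage outcomes below are hypothetical, not any real system's API.

    from dataclasses import dataclass

    @dataclass
    class Email:
        sender: str
        subject: str
        score: float  # phishing probability from some upstream classifier

    # Hypothetical thresholds: automate only the confident cases and
    # route uncertain ones to the human instead of deciding silently.
    QUARANTINE_ABOVE = 0.95
    FLAG_ABOVE = 0.60

    def triage(email, user_expects_sender):
        """Team the classifier with the user: uncertain emails are surfaced
        with a warning, and an expected email can override the flag."""
        if email.score >= QUARANTINE_ABOVE and not user_expects_sender:
            return "quarantine"
        if email.score >= FLAG_ABOVE:
            return "deliver_with_warning" if user_expects_sender else "hold_for_review"
        return "deliver"

    print(triage(Email("boss@corp.example", "Q3 invoice", 0.7),
                 user_expects_sender=True))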

Knowing what you do, how is the way you live online different from the average person's?

I am definitely more careful and a lot more paranoid! I will say that after going through many of my own experiments, I have noticed a marked improvement in my ability to detect malicious emails, but I realize this could easily be overconfidence bias on my part!