Event Recap / September 2023

Jameeka Green Aaron: “Cybersecurity and Representation: Diversity, Equity, and Innovation”

“The idea of CyberMētis,” explained Ann Cleaveland, Executive Director of CLTC, in her opening remarks, “is to really deep dive into the practice of cyber and hear from incredibly distinguished people from industry about, what is a day in their life like? What kind of cybersecurity problems are coming across their desk every day? And what kind of different careers are out there?”

The Fall 2023 CyberMētis Speaker Series kicked off on September 11 with a presentation by Jameeka Green Aaron, Chief Information Security Officer (CISO) at Okta Customer Identity, who presented a talk entitled, “Cybersecurity and Representation: Diversity, Equity, and Innovation.”

Green Aaron was an ideal speaker for the series, as she is responsible for the holistic security and compliance of Okta’s Customer Identity Cloud, platform, products, and cloud infrastructure. She is a recognized industry leader and brings 25 years of experience to the role, with a career that has spanned a wide variety of industries, including aerospace and defense, retail, and manufacturing, at both Fortune 100 and privately held companies — including Nike, Hurley, Lockheed Martin, and the U.S. Navy.

“You guys are my people,” Green Aaron told the audience. “It’s never been more important that we understand the importance of diverse perspectives and representation in technology…. AI is built on representation. That could be really great for us, or it could be a really awful future for us…. It is my job to make sure that people understand the risks that are at hand.”

Green Aaron began the talk with an overview of her career, explaining that she grew up in Stockton, California before joining the United States Navy. She had just returned from a deployment when the attacks of Sept. 11, 2001 led her to continue her service — and enter the fields of IT and cybersecurity. After leaving the Navy, she gained a range of experience at firms like Lockheed Martin (where she helped write code for satellites), and after 18 years in aerospace and defense, she shifted industries by taking a job at Nike. “It was my first introduction to Okta and identity because I was deploying identity for 74,000 users across the globe,” she said.

The global nature of her job at Nike taught Green Aaron the importance of not simply following a US-focused perspective when thinking about digital security. “I learned a lot about the implementation of technology in the US, and the way that we are sometimes very US-centric,” she explained. “We had built all these things in for people to authenticate, but it was very much based on how people in the US authenticated. When I worked in China, they were like, none of your stuff works here. We had to really think about the implementation in Greater China and throughout the world, in spaces where people are not necessarily using laptops, and where the mobile phone is the primary version of their access to the internet.”

A Job Protecting People

In her current job at Okta, Green Aaron explained that she works on customer identity, which describes how people log into the apps they use for banks, restaurants, retail stores, and other businesses. “Sometimes identity is verified by CAPTCHA, in other cases by a one-time PIN, an SMS text message, or an email sent to confirm a person is logging in,” she explained. “Oftentimes, you don’t see customer identity because we are behind the branding or logos of the companies we serve.”
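
To make the one-time-PIN pattern she describes concrete, here is a minimal, illustrative sketch in Python of issuing and verifying a short-lived code. It is not Okta’s implementation: the in-memory store and the delivery step are placeholders, and a real customer identity service would add SMS or email delivery, rate limiting, and attempt caps.

```python
# A toy one-time-PIN flow: generate a short code, store only a hash with an
# expiry, and verify it at login. Illustrative only; not a production design.
import hashlib
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # codes expire after five minutes
_pending: dict[str, tuple[str, float]] = {}  # user -> (hashed code, expiry time)


def issue_code(user: str) -> str:
    """Create a 6-digit PIN for the user and remember its hash until it expires."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[user] = (hashlib.sha256(code.encode()).hexdigest(),
                      time.time() + CODE_TTL_SECONDS)
    return code  # in practice this would be sent by SMS or email, not returned


def verify_code(user: str, submitted: str) -> bool:
    """Check a submitted PIN: single use, must match, must not be expired."""
    entry = _pending.pop(user, None)
    if entry is None:
        return False
    digest, expires_at = entry
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(digest, hashlib.sha256(submitted.encode()).hexdigest())


if __name__ == "__main__":
    pin = issue_code("alice@example.com")
    print("verified:", verify_code("alice@example.com", pin))
```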

Green Aaron stressed the importance of thinking about the users of the technology, rather than the tech itself. “My job is to protect people,” she said. “My job is not to protect databases. It’s not to protect technical resources. It’s not to protect nameless, faceless things.” She explained that stolen credentials (e.g., passwords) are the number one threat to employees and customers, and that “80 percent of breaches involving attacks in web applications are attributed to stolen credentials…. There’s a reason for that. Credentials are moneymakers. If your username and password are stolen, they can become money as they can be sold on the dark web.”

“We are always using different technologies to help us understand when credentials are stolen, notifying the companies that we support that those credentials have been stolen,” she said. “But we also have the ability to automatically reset passwords. And so when we realize that, hey, this particular set of users’ credentials have been stolen, we can go in and actually reset passwords.”
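
To illustrate the kind of breached-credential check she describes, the sketch below queries the public Pwned Passwords range API from haveibeenpwned.com, which accepts only the first five characters of a password’s SHA-1 hash, so the password itself never leaves the caller’s machine. This is not Okta’s tooling, the `requests` library is a third-party dependency, and `force_reset` is a hypothetical placeholder for whatever reset flow an identity platform would actually trigger.

```python
# Check a password against the public Pwned Passwords corpus (k-anonymity API)
# and, if it appears in known breaches, flag the account for a forced reset.
import hashlib

import requests  # third-party: pip install requests


def breach_count(password: str) -> int:
    """Return how many times the password appears in known breaches (0 if none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent over the network.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


def force_reset(user_id: str) -> None:
    """Hypothetical hook: a real identity platform would expire the credential
    and push the user through its password-reset flow."""
    print(f"Would force a password reset for {user_id}")


if __name__ == "__main__":
    if breach_count("correct horse battery staple") > 0:
        force_reset("demo-user")
```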

She said that much of her work currently is focused on artificial intelligence (AI), which has long been used in security, but which is becoming increasingly sophisticated. “AI right now is one of the biggest potential threats that we face in the identity space,” she said. “A lot of our systems are using machine learning constantly to help us better defend against bots. But bots are also using machine learning to learn about and create ways around our defenses. When we talk about AI in cyberspace, it’s not new to the cybersecurity community. But the way that it’s being used now is different.”
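
The machine-learning bot defense she mentions can be illustrated with a small, self-contained example: train an unsupervised anomaly detector on features of normal login behavior and flag traffic that looks automated. The data and features below are synthetic and invented for illustration; this is not a description of any vendor’s actual model, and real platforms use far richer signals.

```python
# Illustrative bot detection: fit IsolationForest on "human" login telemetry,
# then flag outliers that behave like scripted, credential-stuffing traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per login attempt: [attempts in the last hour,
#                              seconds between keystrokes,
#                              distinct IPs used in the last day]
humans = np.column_stack([
    rng.poisson(2, 500),           # a few attempts per hour
    rng.normal(0.25, 0.08, 500),   # human typing cadence
    rng.poisson(1, 500),           # usually a single IP
])
bots = np.column_stack([
    rng.poisson(60, 20),           # rapid-fire attempts
    rng.normal(0.01, 0.005, 20),   # near-instant scripted input
    rng.poisson(15, 20),           # rotating through many IPs
])

model = IsolationForest(contamination=0.05, random_state=0).fit(humans)

# predict() returns -1 for outliers; flagged attempts might get a CAPTCHA or MFA step.
print("bot-like attempts flagged:", int((model.predict(bots) == -1).sum()), "of", len(bots))
print("human attempts flagged:   ", int((model.predict(humans) == -1).sum()), "of", len(humans))
```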

Part of the challenge is that generative AI has the potential to replicate the distinct characteristics that give someone their identity, she explained. “For someone who works in identity security, a lot of what we build is based on the uniqueness of your humanity,” she said. “You have a unique walk, you have a unique set of features. We’re using that to help us better create and protect identity. We’re using biometrics, we’re using passkeys. But AI can recreate Jameeka. This is a critical problem in the identity space that we’re thinking about. What’s even more terrifying is that I actually don’t have the answer.”

She exhorted the students in attendance to work toward solutions to managing the challenge of AI. “You as students are thought leaders in this space, and so it’s critically important that when we think about AI from a cybersecurity perspective, we need you guys to help us understand, what are we missing here? And how do we actually defend against AI?” she said. “What AI can do for humanity probably outweighs the risks, but also with every new technology, new attackers come along, so we should always be ready for that.”

The Importance of Diversity

Green Aaron explained that one of the significant issues that concerns her about AI is racial bias. Many of the AI systems used for facial recognition and other purposes were not trained on a diverse pool of faces. “I do not ever use AI facial recognition at the airport because it doesn’t work, and one of the reasons that it doesn’t work for me is because I’m a Black woman,” she said. “I also can open my sister’s phone through facial recognition. We are not twins, we just happen to look a lot alike.”

The consequences can be dire, she noted, citing the example of Porcha Woodruff, who was eight months pregnant when she was arrested by the Detroit Police Department for carjacking following a false identification by an AI system. “The AI identified 75 potential suspects for a carjacking, and then the person who was carjacked was given a lineup of six people, and she picked out the person,” Green Aaron said. “She was eight months pregnant, she was taken to jail, she was fingerprinted. She is effectively the first person who has now come out and said she was arrested for false AI identification.”

“Every single time in this country there has been a false arrest for AI identification, the person has been Black. Every single time,” Green Aaron said. “The models that we’ve used to train these large language models have not been inclusive.”

Green Aaron pointed out that commonly used applications have been found to have concerning problems. For example, in 2015, Google Photos drew heat for identifying Black people as gorillas. “One of the reasons why I accepted this invitation to come to this school to talk is because I wanted to be able to talk to people who understood what I was saying, not just from a technical perspective, but from a humanity perspective,” Green Aaron said. “Imagine opening up your phone, and it classifies all of your friends and family members as gorillas. It would just make you angry, it would be hurtful. But that’s the world that we live in. And that’s why, when we think about generative AI and how we, as cyber professionals and future cyber professionals, are impacted, it is important that we have diversity in these large language models.”

The risk of AI systems being “poisoned” should also be a concern, Green Aaron said, as models trained on public data from the internet can be manipulated or deceived into misinterpreting information. “Being able to poison the model is something that absolutely can be done,” she said.
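
As a toy illustration of that poisoning risk, the sketch below (scikit-learn on synthetic data, not any real system) relabels a fraction of one class in the training set and compares the resulting model’s test accuracy with a model trained on clean labels. The exact numbers depend on the random seed; the point is simply that tampering with training data changes what the model learns.

```python
# Label-flipping poisoning demo: corrupt part of the training labels and
# compare test accuracy against a model trained on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class dataset standing in for any training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the training set: relabel 40% of class-1 examples as class 0.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
class_one = np.where(poisoned_y == 1)[0]
flipped = rng.choice(class_one, size=int(0.4 * len(class_one)), replace=False)
poisoned_y[flipped] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean test accuracy:   ", round(clean_model.score(X_test, y_test), 3))
print("poisoned test accuracy:", round(poisoned_model.score(X_test, y_test), 3))
```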

Overall, Green Aaron is optimistic about AI, but stresses that checks need to be in place. “One of the great things about AI is that it can create a society where our biases can effectively be controlled or erased,” she said. “My hope is not that AI goes away. What I hope happens is that it gives us the ability to say, hey, this was my experience, it matters, it’s valid, but I also want to make sure this doesn’t happen again. It has the ability to help us be better than we really are. But it also has the ability to help us be worse than we really are.”

“So what are we going to do about this?” she asked. “As a leader in identity, as the voice of an underrepresented group, and as a veteran, I have learned from all of my experiences to improve identity…. I know from experience that every time a good technology comes along, threat actors come along with it. As a collective community, we all need to be prepared to tackle them.”


Register to join us on Monday, October 2 for the second seminar in the Fall 2023 CyberMētis Speaker Series, featuring John deCraen, Associate Managing Director of Cyber Risk at Kroll.