Cybersecurity in the AI era: Busting myths and practical advice | Guest Alex Sharpe

Get your FREE 2024 Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/

Alex Sharpe, a cybersecurity expert with over 30 years of experience, joins the Cyber Work Podcast to discuss the realistic promises and limitations of AI and machine learning in cybersecurity — and pragmatic advice on their responsible use. From debunking myths to sharing insights from his excellent presentation at ISACA Digital Trust World 2024, Alex covers how AI can be integrated into cybersecurity practices and its impact on the workforce. Plus, explore how to stay ahead in the evolving cybersecurity job market. Don't miss out on this illuminating conversation!

View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast/

00:00 Introduction to today's episode
00:33 Free Cybersecurity Salary Guide
01:27 Guest introduction: Alex Sharpe
01:45 Alex Sharpe's background and experience
02:39 AI in cybersecurity: opportunities and limitations
04:41 The evolution of technology and human productivity
13:13 AI misconceptions and realities
29:42 AI's real-world impact
30:27 Challenges in autonomous vehicles
31:39 Data poisoning and steganography
33:04 AI in security and data science
34:36 AI proficiency and career advice
35:23 AI's integration in daily life
36:08 Innovation and guardrails
47:05 Future of AI and career skills
50:57 Guardrails and public-private partnerships
54:38 Career advice and final thoughts

About Infosec
Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.

[00:00:00] Chris Sienko: Today on Cyber Work, I spoke with Alex Sharpe, a 30-plus-year veteran in cybersecurity, governance, and digital transformation. I was blown away by Alex's presentation at this year's ISACA conference, breaking down the opportunities but also the inbuilt limitations of modern AI- and machine-learning-enhanced cybersecurity.

Alex punctures the myths inherent in both the utopian and the dystopian views on AI, giving you cogent and thoughtful advice on how to utilize AI in your work, your studies, and your life in an exciting but responsible way. We get into some really interesting conversations on this one, and you absolutely don't want to miss today's episode of Cyber Work.

The IT and cybersecurity job market is thriving. The Bureau of Labor Statistics predicts 377,500 new IT jobs annually. You need skill and hustle to obtain these jobs, of course, but the good news is that cybersecurity professionals can look forward to extremely competitive salaries. That's why Infosec has leveraged 20 years of industry experience, drawing from multiple sources, to give you, Cyber Work listeners, an analysis of the most popular and top-paying industry certifications.

You can use it to navigate your way to a good-paying cybersecurity career.

So to get your free copy of our cybersecurity salary guide ebook, just click the link in the description below. It's right there near the top, just below me. You can't miss it. Click the link in the description and download our free cybersecurity salary guide ebook.

Your cybersecurity journey starts here.

Now let's get the show started.

 

[00:01:27] Chris Sienko: Welcome to this week's episode of the Cyber Work Podcast. My guests are a cross section of cybersecurity industry thought leaders, and our goal is to help you learn about cybersecurity trends and how those trends affect the work of infosec professionals, and we'll leave you with some tips and advice for breaking in or moving up the ladder in the cybersecurity industry.

My guest today, Alex Sharpe, is a longtime (30-year) cybersecurity, governance, and digital transformation expert with real-world operational experience. Mr. Sharpe has run business units and has influenced national policy. He has spent much of his career helping corporations and government agencies create value while mitigating cyber risk.

This gives him a pragmatic understanding of the delicate balance between business realities, cybersecurity, and operational effectiveness. He began his career at NSA, moving into the management consulting ranks, building practices at Booz Allen and KPMG (those are two clients of ours as well). He subsequently co-founded two firms with successful exits.

He has participated in almost 30 M&A transactions, and he has delivered to clients in over 25 countries on six continents. And so I wanted to just sort of let people know, I met Alex at this year's ISACA conference, and we had a great time talking and have had some good conversations since then.

And so, uh, today Alex is going to be talking about some of the stuff he talked about in his presentation, which, uh, was some of the benefits and some of the limits of AI. I thought it was a really level-headed approach to the subject, and I'm looking forward to having y'all hear his insights as well. So Alex, thank you so much for joining me today, and welcome to Cyber Work.

[00:02:57] Alex Sharpe: Thank you for having me, Chris. I'm, I'm pretty excited. You know, I love this topic, and I'll take any opportunity to talk about it and compare notes.

[00:03:05] Chris Sienko: Yeah, it was, it was definitely apparent. It was one of the most fun, uh, presentations at ISACA this year. And I was like, okay, we definitely need to, uh, exchange business cards here. So, uh, so yeah, I wanted to, you know, let our listeners know about how I, you know, learned about you.

Uh, you know, like I said, you, uh, did the presentation called "The CISO's Role in Driving Trust and the Safety and Security of AI" at this year's ISACA conference, In Pursuit of Digital Trust.

Uh, so along with its stated objective, which was discussing the CISO's role in the adoption of AI and ensuring the safety of the same, uh, Alex's presentation was a pretty even-handed look, I thought, at what AI realistically can and cannot do, and what it will and will not be able to change and transform over the coming years. Uh, so as you said on one of your opening slides, and as I said in the intro, you've been in the cybersecurity business for 30-plus years and in digital transformation for 25 of those years, so.

Uh, what, what got you first excited about computers and security and tech? And how far back does your excitement for tech and securing things go, and where did it originate?

[00:04:10] Alex Sharpe: Oh, it's in my DNA. I don't know. It might've been, uh... my mother used to say I was born asking for a puppy. And I think with that came a love of technology. I don't know what it is, but I've always gravitated towards what we now call STEM. You know, watched all the sci-fi movies. Um, you know, the really good ones and the really wacky ones.

I, I love them all. And 

[00:04:40] Chris Sienko: agree. 

[00:04:41] Alex Sharpe: over time, uh, you know, kind of evolving into just the way the world works. Um, you learn that human productivity is actually driven by technology, right? Um, and how we define technology changes and evolves. You know, at one point, probably the most valuable technologist in the cave was the person who could create fire.

[00:05:09] Chris Sienko: Yeah, yeah, yeah, for sure.

[00:05:11] Alex Sharpe: Now, you know, we've evolved past that, right? And now it's supercomputers, software. And, you know, it's quantum. Throw in all the things you see in the media. So the general trends don't change. It's what the specific technology is.

[00:05:31] Chris Sienko: Yeah. It's always too, it's always tools and human extension, I guess, when you think of it like that. Like, you know, from fire onward, it's always, uh, what can we do to sort of augment ourselves in a way that, uh, pushes things forward toward whatever sort of stated goal we have.

[00:05:48] Alex Sharpe: And we'll get into that a little more in a second, but you know, all human productivity is driven by changes in technology. It turns out, over the past couple hundred years, human productivity increases by roughly 3 percent a year. One percent of that is just sheer human growth. The rest is from technology. But for the most part, technology just shifts what work is done, and it also changes the flow of work, because it gives us new ways to operate and all. And, you know, I'm sure we'll get into it a little bit more. The way AI is disrupting is different than what we usually see, and that's causing, you know, some different type of...

[00:06:31] Chris Sienko: not quite the same thing as, uh, you know, buggy salesmen feeling the hurt when, uh, when cars came out, kind of thing, right?

[00:06:38] Alex Sharpe: Or, uh, yes, right? And there's always unintended consequences, good and bad, you know. When we started moving to automobiles, we were putting carbon into the atmosphere, but our water supplies are healthier because we don't have to deal with the residuals that the horses leave in the street every day.

[00:06:58] Chris Sienko: Yeah, absolutely. I mean, I still remember what a, um, a crisis it was in the '80s, like, you know, the environmental issues of paper, and "save all the trees" and stuff. And it's like, yeah, a lot less paper now all of a sudden.

[00:07:15] Alex Sharpe: Oh yeah. And you also go back to fluorocarbons. Um, I actually do some talks on this. If you look at, like, fluorocarbons, right? When fluorocarbons came out: air conditioning, aerosols, everything. And then we found out that they're actually hurting the environment. Right. So we had this major leap forward.

We go, oh, right. But we responded.

[00:07:37] Chris Sienko: Yeah, that 

[00:07:38] Alex Sharpe: that's another thing that happens with, with humans. We tend to respond. If you look at, say, the Second World War with, you know, nuclear weapons, it really is incredible, knock on wood,

[00:07:53] Chris Sienko: Mm hmm

[00:07:54] Alex Sharpe: that we haven't killed ourselves.

[00:07:55] Chris Sienko: Boy howdy. Yeah, yeah, yeah, for real. Yeah, well, let's, uh, let's get into that. Well, first, let's get into you a little bit first here, Alex. So your security bona fides do indeed go back to the mid-80s, when you were a cybersecurity professional, program manager and systems engineer for the NSA. You worked for several organizations with full P&L responsibility, as well as teaching and mentorship roles. Uh, but primarily you've worked as a consultant under the title Sharpe Management Consulting.

So, you know, people always ask about this, but security in the mid-eighties had a lot of different things to, you know, do from security now. So I usually ask that question: how has it changed? But I'll ask, like, I'm curious if there's anything about how security was done back then that's surprisingly similar now.

[00:08:37] Alex Sharpe: Well, I'm glad you went there. Right. Cause it's, it's both. And I was actually talking over the weekend to, uh, um, a client who's, you know, turned into a good friend. A lot of things that were, like, nascent and talked about and had a lot of interest when I first started have now arced over and they're popular again.

I guess it's kind of like vintage clothing. I don't know. Um, but the similarities are striking, right? The fundamental principles have not changed. The domains in it haven't changed. Believe it or not, many of the players haven't changed. What we are really dealing with is a major complexity of scale. We have lost the hard outer perimeter, because a lot of the security architectures that we're still trying to work with are based on the castle-and-moat strategy, right? You take all of your valuable stuff, you put it in one place, you build these tight walls around it, you protect it with guards and guns and access control systems.

And, you know, then we have these strange people down at the end of the hall, these IT folks. We don't really know what they do, but we know if we give them some pizza and ask them questions, things happen, right?

[00:10:02] Chris Sienko: Sometimes they come in and yell at us, but other than that, yeah.

[00:10:05] Alex Sharpe: Yeah, and you know, one of the jokes from the business folks is quite often when an IT person is yelling at them, they don't know what they're being yelled at for. They don't understand it. So it's kind of okay. You just kind of expect it. But because of the changes, and the changes of scale, now it's a business problem, which I think is the number one difference that's there: it's a business problem.

Um, we need to move from a thought, and this is probably one of the top three changes from the eighties: it's no longer a technology problem with a technology solution. It's a business reality, right? So the question is no longer, what technical defenses do we put up to protect these well-defined assets?

It's how do we protect the business? And that involves technology, people, process, organization.

[00:11:08] Chris Sienko: Mm-Hmm.

[00:11:10] Alex Sharpe: So even within that domain, the fundamental principles have not changed. They really haven't. But the other thing that's changed: we lost the outer perimeter. We've got a business focus now, which is great, and most traditional cyber people are struggling with that, whether they realize it or not. And then lastly, and I think this is something that we don't talk about enough: cyber has become a business.

[00:11:40] Chris Sienko: Mm-Hmm.

[00:11:41] Alex Sharpe: Product vendors, consultants, advisors, right?

We can go right down the list. It's become a business for them. And it's also a business for the adversary.

[00:11:51] Chris Sienko: Oh, yeah, right for sure.

[00:11:53] Alex Sharpe: Look at the folks that, you know, on the, if you ever have the ability to, um, look at some of the dark web reports and all, they run their ransomware campaigns and all this like a business.

[00:12:07] Chris Sienko: Yeah. Yeah, it's like a bizarro-world Amazon over there.

[00:12:12] Alex Sharpe: Yeah, yeah.

[00:12:15] Chris Sienko: Yeah, so, I had Lili Infante on, who was, uh, you know, one of the people who helped take down the Lazarus Group, and, uh, yeah, she said basically they have, like, 24-hour customer support and, you know, one-click shopping.

It's ruthlessly efficient. Yeah.

[00:12:32] Alex Sharpe: It really is. It really is. It's kind of scary to look at some of that stuff. Um, one of my buddies does a lot of deep web, you know, and dark web research, and he was showing me some of the stuff they found about their, um, like, their innovations and discussion groups on how do we use AI to reduce our costs, right?

Right? How do we improve our hit rate? How do we reduce our costs? It's, it's funny.

[00:13:03] Chris Sienko: It's wild. Yeah, no, for sure. Uh, so yeah, I mean, I want to drill deep into AI and how we're talking about it and how they were talking about it at ISACA this year. Cause it was, I think we can agree, over 50 percent of the presentations had AI in the title, you know. Um, so I get the sense, you know, based on reactions to AI, and they're all over the place, from utopian glee to dystopian panic, that not everyone who's thinking about what AI is, and what it will mean for the future and what it means for the present, necessarily has all the facts about what AI realistically is and what it is not, and possibly what it can't likely ever be.

So can you start by talking about some of the most common misconceptions about AI? And I don't mean just in terms of Skynet or HAL 9000 or Colossus versus Guardian (an oldie but a goodie). Uh, but in terms of, uh, full automation, human obsolescence, full mechanical reproduction of human cognitive function, et cetera, which are all things that people seem to think are on the table.

What are people mostly getting right? And what do you think they're getting really wrong?

[00:13:59] Alex Sharpe: Okay. So a little bit of a story there about the AI stuff. I think something we can't forget, and I think it's very, very important, right? What we call AI today, really, the basic concepts predate the U.S. Civil War. And that'll take you some time to get your head around.

[00:14:17] Chris Sienko: Oh.

[00:14:38] Alex Sharpe: One of the pioneers in that, that space is actually a woman by the name of Ada Lovelace, and when the Department of Defense in the nineties created an object-oriented language, they were trying to name it after her, and, you know, they quickly learned that creating their own language is not a good idea. So they funded research on a lot of others, which became the basis of today's. But it goes back a long time, and the core concepts of artificial intelligence as we know them today were actually Alan Turing's. So if you ever watched The Imitation Game,

[00:14:52] Chris Sienko: Oh, yeah.

[00:14:52] Alex Sharpe: the movie's named after his paper about what we now call AI, and the name came about seven years later at Stanford University.

And the reason I bring that up is because what people often forget is artificial intelligence, by definition, is designed to mimic, and that's a very important word, mimic human behavior,

[00:15:19] Chris Sienko: Mm-Hmm.

[00:15:20] Alex Sharpe: right? And we fall under the impression that this stuff is actually thinking and reasoning. And I can give you some examples where some, um, educated people did some silly things.

Like, um, a person I train with at the gym, she actually uploaded, um, a paragraph on what she wanted from a car, and said, what car should I buy? And then went out and test drove all the cars that, that, um, ChatGPT recommended and wanted to know why none of them made sense for her.

[00:16:00] Chris Sienko: Mm-Hmm.

[00:16:01] Alex Sharpe: Okay, because it's not reasoning, right?

I know somebody who did something like that with her medical records. I then proceeded to inform her that the moment she uploaded those medical records, they were out of her control and anybody could get to them. And she proceeded to argue with me. It's like, no, no, no, no, no, right? So, number one thing we can't forget is these tools are designed to mimic human behavior.

[00:16:29] Chris Sienko: Mm-Hmm. Oh,

[00:16:30] Alex Sharpe: Mimic, and that's very, very important. We also can't forget that, uh, even though we've seen a lot of advances, we are still in the very early, early stages of all this, even though we've been using the tools for quite some time, right? Um, I had a client, um, ask, what are the three safest AI-based applications that they should put in their enterprise?

And I said, Grammarly, the weather app on your phone, and Waze.

[00:17:09] Chris Sienko: Yeah.

[00:17:10] Alex Sharpe: And they're like, no, what AI apps? Those are all AI-based apps, right? So we forget that in a lot of ways we're using this all the time. We also forget that when we use things like Siri and, you know, all the talking stuff, that is some of the oldest AI technology we have today.

That was the original funding that came out of ARPA for Alan Turing back in the fifties. It was all driven by, guess who: defense and the intelligence community, right? And now it's made mainstream, right? It's gone tactical or practical, right? We're going to see more of that. So these things are mimicking human behavior. They're not doing human behavior. Second thing to remember is a paraphrased quote from the head of DARPA:

When things with AI go bad, they go bad in ways that humans never would. And a lot of that is because we forget it's mimicking human behavior. It's not actually performing human behavior. And we see that constantly, and we don't have enough time now, but some of the stuff is very funny. Some of the stuff is very...

[00:18:19] Chris Sienko: Is this kind of what they refer to as hallucinations? Or is that a little different?

[00:18:24] Alex Sharpe: Hallucinations is part of it. Let me give you an example. There was a thing out there that used to be called the rubber ducky study, and I can't find it anymore, probably because it's been Googled many times. Before AI got popular, it was out there all the time. They built a model, filled it full of data.

We said, hey, we're going to give you images of rubber duckies; you figure out what a rubber ducky is. So you could give this thing a picture of a rubber ducky day in and day out, and it would pull out the rubber duckies no matter how obscure it was. One day a scientist said, oh, let's take this picture and turn it on an angle.

Fed it to the machine. The machine said, It's an ostrich, or whatever it is.

[00:19:06] Chris Sienko: Yeah.

[00:19:07] Alex Sharpe: Because they had never trained it on a picture of a rubber ducky on an angle. It had never seen it before. It didn't know what to do. Um, you know, sometimes the joke is it lost its little mind or it ran home to mama.

[00:19:21] Chris Sienko: Yeah.

[00:19:23] Alex Sharpe: And that's it.

You don't know what these things are going to do,

[00:19:25] Chris Sienko: Mmhmm.

[00:19:26] Alex Sharpe: So we have to remember that we as humans go, oh, this is a rubber ducky. We look at images; the machines look at digits.

[00:19:35] Chris Sienko: Mmhmm.

[00:19:35] Alex Sharpe: Another thing that people forget is these things tend to be single-use, right? Kind of like my example before: just because it's good at one thing doesn't mean it's good at everything, right? And anytime you use this stuff, understand and presume something's going to go wrong. It's just gonna go wrong.

It's a learning opportunity. It's great, but be smart about it. So when it goes wrong, it's not gonna hurt you too bad.

[00:20:01] Chris Sienko: Mmhmm. Mmhmm.

[00:20:04] Alex Sharpe: Don't upload your personal sensitive information, like your Social Security number. You know, things like...

[00:20:10] Chris Sienko: that is the world's biggest "you are the product" piece of machinery going right now, as far as I'm concerned. I don't think people really realize that. I think they think it's a scratch pad that's gonna, like, talk back to them or something. I don't know.

[00:20:22] Alex Sharpe: I've seen it time and time again. Um, and I saw this when, like, uh, some social media first came out. There was a tendency of believing, oh, this was a really bad idea in the analog world, but it must be fine if I put it someplace that 9 billion people can get to it,

[00:20:39] Chris Sienko: Sure. 

[00:20:40] Alex Sharpe: right? For some reason, the change in tools... maybe because it's a lack of understanding, um, or whatever.

But, um, it's a bad...

[00:20:48] Chris Sienko: Or just not thinking things through to the endpoints or whatever, in favor of, you know, what the next cool thing is going to be about it. But, uh, I mean, I guess I want to jump from that to, uh, you know, sort of the focus of your consulting. Like, can you tell me about, uh, some of the things that CISOs need to understand about these issues around AI, and how to manage the risk inherent in these types of AI implementations?

I mean, this is, you know, we want to tie it back to kind of, um, job roles. And, you know, we've been talking the last couple of weeks about how CISOs are, you know, pretty stressed right now. And when things go wrong, there's a lot of finger-pointing; there's a lot of, uh, you know, shadow firing if, uh, you know, if you're on call when, uh, when the breach event, you know, inevitably happens or whatever.

But, uh, you know, this is just one more thing. It's like, how do I add this really volatile thing to our existing stack? Do you have any sort of general suggestions on that?

[00:21:39] Alex Sharpe: Oh, man. Um, there's a, there's a couple of things. So I spend a fair amount of time on this and I, I can honestly say that every time I have a conversation, the thinking evolves. But with that said, the absolute number one thing that we have to remember is we still need to do the, well, all right, the top two things, right?

It's a technology problem without a technology solution. Um,

[00:22:09] Chris Sienko: Yep.

[00:22:11] Alex Sharpe: It requires people, process, technology, and organization. It also requires the basic blocking and tackling. And if any of your audience members reach out, I've done a couple of articles on this, and the presentation you attended was specifically on it.

I'd be more than willing to share with them. But it turns out that these models (so, good news, bad news), when we buy these products and these models, out of the box they are not coming with the basic blocking and tackling we're all used to. Now, I sit on, um, some security working groups specifically around AI, and also some functional groups.

And I also talk to a lot of CISOs and product developers. And the first thing that they will tell you they know... well, that's not fair. When this subject comes up, one of the first things they'll tell you is they know there's a problem. And the reason is, they know that these features and functions need to be integrated, but the models themselves are at the point where they're not stable enough, and the performance isn't there.

Uh, so when they turn on the security features, everything goes to crap, right? So they know it's a problem, which I consider to be a major leap forward, as opposed to what we saw with the internet and the different versions of the web and social media. These folks know that the stuff's not in there and that they need to do it.

I think one of the responsibilities of a CISO, um, for the sake of the organization and themselves, is to do a fair amount of myth-busting.

[00:24:06] Chris Sienko: Yes.

[00:24:08] Alex Sharpe: Now, the myth-busting needs to be educating folks on how AI works, how it changes the risk equation, and how there are unique risks. So one of the things I often recommend is, with the chief risk officer, whoever that is in your organization,

we should spend some time looking at the risk register and asking ourselves, how does AI change the equation? Two sides. Um, how does it change the risk itself? But then, how does it alter our ability to mitigate that risk? Right? And, like, there's a lot of efforts right now; there have been for quite some time.

As a matter of fact, the first AI use case I worked on, circa 1990, was how do we use AI to analyze logs to predict malicious incidents, or how to better find malicious incidents. We're still working on that use case, but there's a lot under the term Defender's Dilemma. Phil Venables at Google, he and Royal Hanson, have an excellent paper on it, which I highly recommend.

The other thing is we need to create guardrails. Right. So Europe, Europe is Europe. They dove in with regulation.

[00:25:39] Chris Sienko: Yeah.

[00:25:41] Alex Sharpe: That's what they do. They're very good at that. The US is taking a different tack. It's like, well, you know, we don't want to stifle innovation. So let's, let's look at frameworks and things to consider and guardrails and all this, and then penalize people who go too far.

Okay. I, I, in principle, I'm okay with that. I mean, there's a lot of judgment calls that have to be made along the way, and there has to be a lot of sensible stuff along the way. But I highly believe in guardrails. I also believe in partnerships inside and outside, up and down the chain, left and right on the org chart, right?

AI is a tremendous technology, but it's also like electricity. Not one person in the organization owns it, but everybody uses it.

[00:26:30] Chris Sienko: Mhm.

Yeah. 

[00:26:32] Alex Sharpe: everybody needs to engage in a different way.

[00:26:35] Chris Sienko: Yeah.

[00:26:36] Alex Sharpe: I highly endorse, um, having a steering committee or a task force, you know, that's cross-functional, preferably chartered by the board of directors, because it's a business imperative and you also want the authority to be able to navigate and do what you need to do.

I highly recommend that. I also highly recommend, um, integrating AI into your GRC practices. Kind of like we talked about before: let's go down the list of our risk register and say, how does this change things, right? Does it introduce new risks? Does it inflame others? Does it provide us a different vehicle for mitigation?

What does it do? So I think there's a lot there. But again, if, um, your viewers reach out, I'm more than happy to share presentations and papers on the subject. And I would love all the feedback they have.
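Here is a minimal sketch of that risk-register walkthrough in code. It is an illustration only, not something from the episode; every risk entry, field name, and note below is invented, just to show the shape of the "how does AI change each risk?" exercise Alex describes.

```python
# Hypothetical sketch of the risk-register walkthrough: for each existing
# risk, record whether AI introduces it, inflames it, or offers a new
# mitigation. All entries here are invented examples.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    ai_new_risk: bool         # e.g., prompt injection did not exist before
    ai_inflames: bool         # e.g., phishing gets cheaper and more convincing
    ai_aids_mitigation: bool  # e.g., model-assisted log triage
    notes: str

register = [
    RiskEntry("Phishing / social engineering", False, True, True,
              "Generative AI lowers attacker cost; detection tooling improves too."),
    RiskEntry("Data exfiltration", True, True, False,
              "Staff pasting sensitive records into public chatbots is a new channel."),
    RiskEntry("SOC alert fatigue", False, False, True,
              "AI triage can rank the five or ten places an analyst should look."),
]

for r in register:
    flags = [label for flag, label in [
        (r.ai_new_risk, "NEW RISK"),
        (r.ai_inflames, "INFLAMED"),
        (r.ai_aids_mitigation, "MITIGATION AID"),
    ] if flag]
    print(f"{r.name:32} {', '.join(flags) or 'unchanged'} | {r.notes}")
```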

[00:27:37] Chris Sienko: Absolutely. Now, yeah, I mean, uh, again, I'm not the expert here, so feel free to tell me, uh, if I'm off here. But yeah, I mean, even if you know a little bit about what all this stuff is, it's still, at its core, a very, very high-level set of calculating devices. So, like, the things that you want it to do have to have sort of a calculation aspect.

So I think, I think you're right. Governance, risk and compliance is, like, a really good... like, these are all of the different potential things you can do, and I am going to calculate all of the different possibilities and so forth. And, yeah, I think that's interesting. Cause I, I hear, um, you know, log analysis as being one of the things people always say, like, oh, it's going to revolutionize that.

You know, of course you'll need a human on the back end of it to sort of, like, interpret what it said or whatever, but it's still a calculation device, you know? Yeah. Okay. Tell...

[00:28:31] Alex Sharpe: 30 years later, we're still working on it. Uh,

[00:28:34] Chris Sienko: What's, what's happening in that right now, then? Okay.

[00:28:38] Alex Sharpe: I'll tell you two things.

[00:28:40] Chris Sienko: Mm hmm.

[00:28:42] Alex Sharpe: First of all, you know, over the years I've still seen large vendors working on it, right? And because we compute differently, we can compute more data. We have more data. We know more about what's going on, right? It's more prevalent. There's a lot of reasons why it's more viable today than it was three decades ago.

Right? The biggest issue that I see is now we no longer have logs all in one place. We have them dispersed amongst an enterprise,

[00:29:15] Chris Sienko: Sure. Sure, sure.

[00:29:16] Alex Sharpe: and we're getting better at how to correlate that. Just simple things like having everybody on the same same date. Timestamp makes a huge difference in correlating log file.

Um, it's really, again, one of those little knucklehead things where you go, oh yeah, we should

[00:29:32] Chris Sienko: Oh! 

[00:29:33] Alex Sharpe: that. Oh, 

[00:29:34] Chris Sienko: This does change everything! Ha ha ha

[00:29:36] Alex Sharpe: Ooh, I should have had a V8. Um, you know, it happens. Um, but what we're seeing is that AI is doing exactly what we thought it would do. It's not popping up with a magic answer.

It's saying, out of all of this, here's the five or ten places you need to look.

[00:30:01] Chris Sienko: Exactly. Yes.

[00:30:02] Alex Sharpe: here's why. And that's a tremendous advantage. That's huge.
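A minimal sketch of the timestamp point above: once every log source is normalized onto one clock, correlation becomes a simple sort. This is our illustration, not from the episode; the hostnames, log formats, and events are all invented.

```python
# Why a shared clock matters when correlating logs from different systems:
# normalize everything to UTC, then "what happened around 14:03?" is a sort,
# not a forensic project. Formats and hosts below are invented.
from datetime import datetime, timezone

raw_events = [
    ("web01", "2024-06-01T14:03:22Z",      "failed admin login"),        # ISO 8601, Zulu
    ("db01",  "2024-06-01 10:03:25 -0400", "unusual SELECT volume"),     # local time + offset
    ("vpn01", "1717250601",                "session from new country"),  # epoch seconds
]

def to_utc(ts: str) -> datetime:
    """Normalize the three invented formats above into timezone-aware UTC."""
    if ts.isdigit():                                   # epoch seconds
        return datetime.fromtimestamp(int(ts), tz=timezone.utc)
    if ts.endswith("Z"):                               # ISO 8601 "Zulu" time
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z").astimezone(timezone.utc)

# One shared timeline: the three events turn out to be seconds apart.
for host, ts, msg in sorted(raw_events, key=lambda e: to_utc(e[1])):
    print(to_utc(ts).isoformat(), host, msg)
```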

[00:30:07] Chris Sienko: Yeah, that's a time-saver supreme, you know? I mean, it really is the thing... you know, people say, oh, it's going to take away this or that or whatever, but, like, yeah, that is drudge work that nobody needs, man. Just going through it.

[00:30:19] Alex Sharpe: Oh my god. You know, just simple stuff. And, you know, you talk about math; let's go there for a second. Huge amounts of advances in autonomous vehicles, right? But when you look at these models and how they're operating, they are incredibly intricate. We don't know how to inspect them. We know they're constantly changing. And again, when they do silly things, we go, what happened, right? So there was an incident that was shared with me. This autonomous vehicle, all the time, came along and stopped at the stop sign, right? We as humans, we see a stop sign. We see, you know, an octagon painted in red with white letters. We see that. Machines see ones and zeros. Well, one day the autonomous vehicle comes up and just goes right by it.

So there was an incident that was shared with me Autonomous vehicle this autonomous vehicle All the time came along and stopped at the stop sign, right? We as humans, we see a stop sign. We see a innate figure, you know, octagon painted in red, white letters. We see that. Machines see ones and zeros. Well, one day the autonomous vehicle comes up and just goes right by it.

And what was relayed to me was kids were putting stickers on this,

[00:31:19] Chris Sienko: Yeah,

[00:31:20] Alex Sharpe: this, right. But what happened is we thought, you know, the humans are looking at this going, well, that's a stop sign with stickers. And the machine says, no, I'm looking at this little spot over 

[00:31:30] Chris Sienko: this is either a stop sign or it's not a stop sign, and there's just not enough evidence that it's a stop sign. Yeah.

[00:31:35] Alex Sharpe: Exactly. And we forget that a lot of times.

[00:31:38] Chris Sienko: hmm.

[00:31:39] Alex Sharpe: Um, we also forget that, especially with images... I don't know if I showed it; yeah, I probably did in my, um, my presentation. It's one of my favorite things, where you can actually, like, put filtering on top of the image

[00:31:53] Chris Sienko: Yeah, data poisoning, right? Mm hmm.

[00:31:56] Alex Sharpe: Um, and it fits in an overall area called steganography, which is how you can hide messages, right?

So the machines themselves will see something that you don't, and it's doing exactly what it thinks it was told,

[00:32:10] Chris Sienko: Mm hmm, mm

[00:32:11] Alex Sharpe: right? You know, so it

[00:32:11] Chris Sienko: Yeah, I mean, just as an example, you showed the same picture of, like, a dog three times, and the computer had seen it as, like, a dog, an ostrich, and a lamppost or something like that. Yeah, yeah. And it was because, like you said, there were those sort of hidden message or hidden image elements that the human eye doesn't see and that the computer can only see.

[00:32:33] Alex Sharpe: Yeah, so it fails in ways that, um, humans never would. And also people can screw with it, intentionally or, you know, benignly. Um,
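A toy sketch of the mechanism behind the stop-sign and dog-versus-ostrich stories: a model scores pixel values, not "stop signs," so a small targeted nudge to those values can flip its answer while a human sees the same picture. This is our illustration, not from Alex's presentation; the classifier and image are random stand-ins, and the perturbation step borrows the idea behind FGSM-style attacks.

```python
# Toy adversarial example against a made-up linear classifier: a tiny,
# targeted per-pixel change flips the decision, because the model only
# sees numbers. Everything here is a random stand-in for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # hypothetical learned weights (8x8 image, flattened)

def score(img):
    return float(img.flatten() @ w)

def label(img):
    return "stop sign" if score(img) > 0 else "not a stop sign"

img = rng.uniform(0.0, 1.0, size=(8, 8))   # a bland stand-in "image"
if score(img) <= 0:
    w = -w                          # flip the toy classifier so we start on the "stop sign" side

# Move every pixel a tiny amount against the weights: just enough to cross
# the decision boundary. Per-pixel change is small; the score change is not.
eps = 1.1 * score(img) / np.abs(w).sum()
adv = img - eps * np.sign(w).reshape(8, 8)

print(label(img), round(score(img), 3))    # "stop sign", positive score
print(label(adv), round(score(adv), 3))    # "not a stop sign", negative score
print("max per-pixel change:", round(eps, 4))
```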

[00:32:44] Chris Sienko: Which I think is another thing that no one really talks about: that, you know, a lot of this stuff is being tested with the idea that everyone is going to sort of have the same sort of, uh, you know, positive intentions for things like that. You know, that, uh, well, clearly everyone's going to give it the best data it could possibly have, you know,

[00:33:01] Alex Sharpe: Well, yeah, so let's go there for a second. Um, one of the things I've learned working on this stuff over the years: data scientists, and I'm not picking on data scientists, I'm just going to talk about realities, okay? They tend to be some of the most highly privileged users in an organization,

[00:33:25] Chris Sienko: Yes.

[00:33:26] Alex Sharpe: right?

So they blow away the concept of least privilege, role-based, this-and-that access control; you know, that tends not to apply to them, right? When it comes to awareness and training, they also tend to be the ones that are least likely to believe somebody wants to screw with their data, right? They have spent 6, 12, 18 months getting this data right. Why would anybody want my data to not be right? So that's part of what we were talking about before, about the myth-busting and the raising of awareness.

Why would anybody not put my data, right? So, that's part of what we were talking about before about the myth, myth busting and the raising awareness. Now, in all fairness, if you look at what we've done to our data scientists, right? We've, they spend most of their time cleaning up data and garbage collecting and stuff like that because the rise in data, the volume of data and the expectations on data have gone through the roof.

But for us as, you know, security professionals, we still need to work with these folks and, you know, get them to be on our side and frankly, you know, think a bit like a crook.

[00:34:28] Chris Sienko: Yeah. Oh yeah, completely. Um, so, okay, well, I want to go from that to, uh, a couple of things, but let's start here. Uh, so one of the themes that came up a lot at the ISACA conference was, uh, someone said, quote, "AI will not replace humans, but AI-proficient humans will replace non-AI-proficient humans," which I guess is a little less scary, but it's still kind of a curious statement.

And so for, for listeners who are studying (I mean, and again, you know, a lot of our listeners are students or are becoming security professionals and so forth; people are trying to sort of make their way in from other industries), like, um, you know, what additional learning should they be taking on, with whatever they're learning, to ensure that their skill sets are also making them, you know, AI-proficient humans, regardless of, uh, their sort of specializations?

Hmm.

[00:35:18] Alex Sharpe: Most of what we're going to see from AI, we're not going to consciously see from AI. Because, you think about it, when we look at artificial intelligence, it's already integrated in our lives in ways that we don't even understand and ways we don't realize it, right? You pick up your smartphone.

Guess what? I'm, I'm 98 percent sure every day you use at least one AI-enabled app.

[00:35:42] Chris Sienko: Oh yeah. 

[00:35:43] Alex Sharpe: If you have a home security system or use any video, they're all AI-enabled right now, right? It's embedded in so much. Oh, your car, you know, cars. Um, not necessarily autonomous driving, but it's all over the place, right?

So most of what we deal with with AI is we're already using it and it's totally transparent to us.

[00:36:07] Chris Sienko: Mm hmm.

[00:36:08] Alex Sharpe: With that said, understand that this is an evolving field, and probably the number one thing to understand with an evolving field is good ideas are usually behind a few bad ones.

[00:36:26] Chris Sienko: Okay.

[00:36:26] Alex Sharpe: Basic principle of innovation, uh, pioneered ironically by, uh, an economist called Joseph Schumpeter 150 years ago.

It turns out to be true today. But Thomas Edison is probably the most, uh, referenced person in this area. Um, you know, he talked about innovation being 1 percent inspiration, 99 percent perspiration.

[00:36:51] Chris Sienko: Mm hmm.

[00:36:52] Alex Sharpe: It took him something like 1500 tries to get the filament right on a light bulb.

[00:36:56] Chris Sienko: Yeah.

[00:36:57] Alex Sharpe: Right.

So dive in, play with it, explore it. Things aren't going to go right. Learn, plow it forward. Wash, rinse, repeat. That's probably the number one thing to go with the...

[00:37:13] Chris Sienko: And, and the guardrails are sort of there to protect against the colossally bad ideas that have to happen first to, uh, allow the good ideas to seep through in the back.

[00:37:22] Alex Sharpe: Yeah. You know, if you want something to, um, you know, do some auto sensing, um, you know, probably your oxygen supply would not be a good place to start, right? You know,

[00:37:37] Chris Sienko: Yeah.

[00:37:39] Alex Sharpe: it's not. Or, you know, you don't know if something's gonna work.

[00:37:41] Chris Sienko: supply. Yeah. Right,

[00:37:42] Alex Sharpe: Yeah, you know, if you got a new device of some sort, trying it on the family pets is probably not a good first stop, right? Not good ideas, right? So you watch the guardrails. Um, if you're a security professional, I highly, highly, highly recommend watching, um, a couple of things. One is the Defender's Dilemma, which I mentioned a minute ago. Um, the CISO of Google, a guy by the name of Phil Venables; uh, he and Royal Hanson, a VP of architecture there,

Uh, they have a really nice paper. You should be able to Google it and find it. Lots of great ideas. Um, watch the product vendors, what they're talking about. Whether it's real or not is another story, right? But, for lack of a better term, fiction always leads the way, right? We saw this with Star Trek and Star Wars. And, you know, you pick a movie; by the way, you know, WarGames, right?

And, you know, you pick a movie, by the way, you, you know, war games. Right?

[00:38:43] Chris Sienko: yeah,

[00:38:46] Alex Sharpe: 2001: A Space Odyssey. There's tons and tons of stuff out there, right? These are usually good ideas that are inspirational. They're good down the road. So: the paper; watch what the product vendors are doing. Whether it's real or not, it's fantastic ideas, wonderful ideas.

There's excellent, there's tons and tons of webinars, and

[00:39:12] Chris Sienko: okay,

[00:39:12] Alex Sharpe: I would spend time on the ones that come from, um, let's say, reputable organizations. Universities generally have a data science organization, right? I'm a member of the one from Columbia, have been for years. And it's amazing, some of the stuff that they're looking at.

If you're interested in a specific field, the research centers or the hospitals, the large hospitals. Like, for example, um, prior to COVID ('cause AI is so new, right? It's been around for decades), a handful of the major cancer hospitals got together and talked about how they were using AI for cancer research.

Those are all excellent ways of keeping abreast. Even if some of it's a sales pitch, it's, you know, generating really good ideas

[00:40:17] Chris Sienko: Yes. 

[00:40:17] Alex Sharpe: and it's, it's educational and you get to decide what you like and you can toss away the rest.

[00:40:23] Chris Sienko: Yeah. All right. Well, let's talk about that from, uh, maybe a less, um, uh, tech-heavy angle, uh, you know, because I think you're right: a CISO has to act as a myth-buster, but I think we all have to kind of act as myth-busters as we go ahead with this kind of thing. So, you know, I think a lot of us have the anxiety of, you know, maybe a work culture where AI is being treated as: this is a thing that just happened yesterday, and everyone needs to,

you know, ready or not, here it comes, kind of thing. And so I think, for people who, uh, feel it sort of, like, coming in, and not encroaching necessarily, but becoming more ubiquitous: do you have any thoughts on any guiding principles that we should be holding on to in our day-to-day lives when we sort of decide how we're going to integrate or not integrate, uh, AI in deliberate ways?

I know you said that obviously our phones and our cars make it kind of, uh, inevitable anyway, but, you know, I think there's still some consumer choices that can be made here, and I'm wondering if you have any principles around that.

[00:41:21] Alex Sharpe: Oh, absolutely. So there's a couple of things. I think one of the things we need to... let me give you two things to keep in mind. If you look at how technology is historically adopted, AI is following that adoption cycle like every other technology before it, just a little bit accelerated all of a sudden.

Uh, you know, prior to the fall of 2022, nobody was interested in a talk about AI or having AI on a resume. Now the phone doesn't stop ringing,

[00:41:59] Chris Sienko: for sure. Yeah.

[00:42:12] Alex Sharpe: It changes. So it follows that adoption. So understand: what we're living through now, we've been through before with other major changes. Electricity, the Internet, the World Wide Web. You know, fire, right? But too bad nobody was

[00:42:17] Chris Sienko: Before my time.

[00:42:18] Alex Sharpe: Yeah, nobody was recording that.

[00:42:20] Chris Sienko: Yeah.

[00:42:21] Alex Sharpe: There is one major exception. Who it's disrupting is changing.

[00:42:27] Chris Sienko: Okay.

[00:42:28] Alex Sharpe: So this is the first time a technology has affected white collar workers, knowledge workers, in a major way.

[00:42:39] Chris Sienko: Yeah.

[00:42:40] Alex Sharpe: So historically, we're taking work and we're repurposing it to a machine.

The work itself doesn't go away; it tends to go to the machine. But if you look at what AI is doing, in the way it mimics human behavior, it's reducing the burden placed on the knowledge worker, the human who needs data and applications. If you look at, like, the Oxford studies, or Cornell University has one (there's a few), they all talk about what's being disrupted.

And that is all knowledge workers, right? Accountants, lawyers, blah, blah, blah. You can go down the list. On the other hand, we see it causing relief in some areas, like cybersecurity. Cybersecurity is plagued with two workforce problems right now. One is we have a shortage of workers, which means we have to increase productivity.

The second is we have a skills mismatch, which is something we don't talk about. AI can be used to address both of them. And that's where we're seeing major, major investments, right? So that's there. Um, daily lives: remember, it's disrupting in ways we've seen before. So it might be new to us, but it's not news to economists and historians and technology people, tech people who study this stuff.

It is disrupting white collar workers. So when you look at career choices and all, you would be well advised to see what's being disrupted and what's not.

[00:44:18] Chris Sienko: Mm-Hmm?

[00:44:20] Alex Sharpe: And there's a lot of studies out there on it. And one thing that smacks me in the face every time I look at them is how consistent they are.

It's almost like they're using the same data and writing different reports.

[00:44:31] Chris Sienko: right?

[00:44:32] Alex Sharpe: Um, but remember, with AI, it mimics human behavior. It's not really a sentient being. Now, some people will argue that we might get there.

[00:44:42] Chris Sienko: Yeah.

[00:44:43] Alex Sharpe: Clearly not there yet. Right. That's kind of dystopic. I would much rather think those movies are cautionary tales.

So we don't do something silly, but we shall see. So it mimics human behavior, but it's not human behavior. And never forget, it's actually working with ones and zeros, right? And then lastly, it's a technology problem without a technology solution: awareness and training. Check your outputs. Major problem, right?

It hallucinates, makes up stuff. Check your outputs.

[00:45:17] Chris Sienko: Eager to please, uh, wants to give you an answer, even if it doesn't have enough data points. I mean, do you, can you speak to, I mean, there's people who are saying things like AI can, can lie, but I, I think it seems like that still sort of falls under the purview of it's sort of eager to please in a way that it's giving you, uh, you know, what it thinks you want.

And it ends up sometimes looking weirdly deceptive. Do you mean, am I wrong on that? Or

[00:45:42] Alex Sharpe: So what's... I mean, this is gonna sound stupid, but it becomes a question of what's a lie. And that question tends to beg the question: are these things thinking? No, it's not thinking. So, um, these LLMs: two factoids I heard recently from a very reputable source. One is that the average LLM, when you put a query in, right, you put a prompt in, it hits, on average, a thousand pages.

[00:46:16] Chris Sienko: Mm hmm.

[00:46:17] Alex Sharpe: So, it distills the information from a thousand pages to reply to your prompt. Find a place in the world where you can get a thousand pages that agree and there's not a bad fact in there, right? It's going to be there. The second fact I heard, which was also interesting, Is that every prompt consumes about a half a liter of water in the data center,

[00:46:40] Chris Sienko: Mm hmm.

[00:46:42] Alex Sharpe: which is another problem, outside the scope of the show.

So does this stuff lie? No, it, it makes a determination based on the data it's given and based on the model that interprets that data.

[00:46:58] Chris Sienko: yes

[00:46:59] Alex Sharpe: So if you want to assume it lies, you'd also have to assume it's a sentient being.

[00:47:05] Chris Sienko: Yeah, yeah, and that it otherwise has good intentions or bad intentions or any intentions. Okay, so we've talked sort of CISO stuff and myth-busting; we've talked the average public and how to sort of interact with this stuff. For listeners who are planning to go into working more directly and more thoroughly with AI, whether programming the algorithms or refining the machine learning process or creating the tools that we can bolt the AI to, or the AI we can bolt the tools to: what are some skills and degrees and certifications and experiences that they should be actively seeking now to help them be qualified for this type of career a couple of years down the road?

[00:47:43] Alex Sharpe: Oh, that's not an easy question. So, going back to your thing about the... well, it's an easy question; the answer hasn't figured itself out yet. It's like... in the 80s, my bachelor's is in electrical engineering. The computer science program I took was part of the engineering program.

Most computer science programs these days are standalone departments. So things evolve as they grow and all that. Um, a lot of it comes down to: right now, the number one thing to do is become a sponge and understand how this stuff works, recognizing that when we say AI, there's actually a family underneath that.

[00:48:39] Chris Sienko: Okay.

[00:48:41] Alex Sharpe: A large language model is different, right? We could start going through it, but there's all these different pieces, and they all work differently. Um, anything you could do to just absorb as much as possible, but from credible sources. The universities, as a general rule, do a really good job at, um, talking about how these models work.

Google has a lot of good classes.

[00:49:18] Chris Sienko: Mm hmm.

[00:49:19] Alex Sharpe: They really do.

[00:49:20] Chris Sienko: Mm hmm.

[00:49:21] Alex Sharpe: Um, one thing you have to look at when you deal with the universities is, if you get a class from a math department, it'll be much more math-heavy, as opposed to if you go to, you know, the data analytics group. So be a little cautious of that. I think for anybody in the security business, taking some of the classes from ISACA on how to audit these models

is a great idea.

[00:49:49] Chris Sienko: Mm hmm.

[00:49:50] Alex Sharpe: Um, follow what NIST is doing, and what we're seeing out of the European Union. That'll help you understand what the critical areas are and what dials are going to need to be turned.

[00:50:06] Chris Sienko: Right.

[00:50:07] Alex Sharpe: Unfortunately, there is not one good source for ideas. We don't have a consistent taxonomy,

[00:50:15] Chris Sienko: Mm hmm.

[00:50:16] Alex Sharpe: so we have different terms for the same thing.

Um, I actually sit on a very senior level AI group and we joke that we all agree in principle, but we can't figure out what words to use.

[00:50:29] Chris Sienko: So, usually around this time I ask, uh, you know, uh, guests, if they had a magic gavel, what kind of legislation would they put in place, or this, that, and the other thing. But, um, we've already bonded a lot on, on classic sci-fi and speculative fiction and so forth.

If you had a magic wand that allowed you to sway the direction that AI is going to be used and adopted over the, say, next 20 to 50 years, what are some principles that you would like to introduce into the thinking that would make its long-term evolution less fraught?

[00:50:57] Alex Sharpe: All right. The number one thing is we need guardrails, not regulation, at this point.

[00:51:06] Chris Sienko: Mm hmm,

[00:51:07] Alex Sharpe: That's the absolute number one thing, because we need to evolve, which means we have to fail. We have to learn. We have to integrate our learnings. Wash, rinse, repeat. We also need heavy investment in private-public partnerships. So private companies have a different set of priorities.

They can only take on projects of a certain size. They only have an influence of a certain size. But let's say all the, you know, the U.S. and its allies got together globally and said, we're going to make major investments in AI. It will accelerate a lot faster than if we rely on a pile of individual organizations.

Don't get me wrong, private investment is incredibly important,

[00:52:05] Chris Sienko: mm hmm,

[00:52:05] Alex Sharpe: but the larger, longer-term, higher-risk stuff really has to be invested in by the government. So let's talk about the internet.

[00:52:14] Chris Sienko: please,

[00:52:15] Alex Sharpe: We never would have had the internet without public private partnerships

[00:52:19] Chris Sienko: sure,

[00:52:20] Alex Sharpe: and the internet funded everything after that; well, it became the foundation for everything afterward, including what we're talking about.

So, private-public partnerships, creating the guardrails. I think that's incredibly important. Um, I also think that we need to encourage the use of all these horrible movies and the bad things that happen as cautionary tales, but not forget there's a lot of good stuff. Look at Star Trek, look at Star Wars, right?

You talk about coexisting with AI; that is all over the place in there, right? How many times did R2-D2, um, as an autonomous vehicle, or even C-3PO... how many people did they save as autonomous vehicles? Right. But we don't talk about that. Right. So, you know, use it as a cautionary tale, which is a good thing, because we as humans, when we know what to be concerned about, we watch out for it.

Sometimes the pendulum swings too far, but it's better than not having it. So I,

[00:53:31] Chris Sienko: Yeah, and also realize that it's, uh, you know, a lot of times not written by people with, you know, immense scientific knowledge, you know. They're not predicting; you know, it's the old line, "I'm not trying to predict the future. I'm trying to prevent the future," you know. And it's, uh... yeah. Yeah,

[00:53:53] Alex Sharpe: ...the uncertainty principle and all this. So, oh crap, what does he do? He comes in the next day, he says, oh, we're going to create a Heisenberg compensator. What does a Heisenberg compensator do? Oh, it compensates for the uncertainty principle.

How does it work? I don't know. We have 500 years to figure it out, right? So,

[00:54:13] Chris Sienko: yeah, right. Fabulous. That's how you start thinking about it. Yeah, absolutely. Yeah, you start, start unreasonable and then work your way towards reasonability, I suppose. Why not?

[00:54:24] Alex Sharpe: Verne! He had us on the moon long before we did.

[00:54:28] Chris Sienko: Yeah, that's right. Absolutely. Uh, all right. Well then, uh, going back to sort of practical things, uh, because we're coming up on the hour here, Alex. I could talk to you for three more hours, but this has been a blast.

But, uh, before I go, um, I got to ask, what's the best piece of career advice you ever received?

[00:54:43] Alex Sharpe: Ha ha! Alright, the best piece of career advice I received was from a very senior executive, very early in my career, and I think it applies to what we're talking about. Um, they told me to understand that sometimes the only way you know you're doing a good job is by who you're pissing off.

[00:55:02] Chris Sienko: Mm hmm. Yeah, absolutely. Mm hmm. Mm

[00:55:07] Alex Sharpe: Change is unsettling.

It turns out only one out of 20 people will grab onto it and find it exciting, right? I happen to be one of the strange ones. So that means 19 people out of 20 are going to be a little unsettled about what you're doing, right? So accept the fact that that's going to happen. But that also goes back to why you want to build these relationships, right? Because if people know you're not trying to hurt them, and they can understand where they fit and how they can leverage it to their advantage, and that you're a nice person not out to get them, things go a whole lot better, and they start becoming part of the solution instead of your roadblock,

[00:55:49] Chris Sienko: Yeah.

[00:55:50] Alex Sharpe: So that's the best piece of advice I ever received, and some of the funniest.

[00:55:55] Chris Sienko: Yeah. Yeah, exactly. Yeah, uh, that's fantastic. And, uh, yeah, I think, you know, it also speaks to just the need for... you know, there's certainly enough evidence that there's a lot of, um, you know, clinical psychopaths in upper-level echelons, you know, and so forth.

But, uh, you know, I think a lot of them sort of glom on to that first half and don't glom on to the second half. Like, you know, "you're no one till you have haters." But also, uh, hey, maybe, you know, trying to explain what you're doing so that you're not the bad guy might be good as well, so you don't end up, you know, having the Imperial March playing behind you every time you make a decision, you know?

[00:56:34] Alex Sharpe: Well, the reality is you're going to have a certain percentage of the people that are against whatever you're doing for whatever reason. Most of the reasons have nothing to do with you or the actual activity. You might as well just narrow them down. So, you know, cause your job at the end of the day is still to move the ball forward,

[00:56:53] Chris Sienko: Yeah.

[00:56:54] Alex Sharpe: right?

So the more obstacles you can move out of your way, and the more people you can get helping you push the ball, the better off you're going to be, just recognizing

[00:57:06] Chris Sienko: Yeah,

[00:57:06] Alex Sharpe: it's not perfect.

[00:57:07] Chris Sienko: Yeah, yeah, and we got to keep moving. Uh, so, all right, well, like I say, just about time to wrap up, but, uh, feel free, if you want to tell our listeners more about Sharpe Management Consulting and the work you do there; here's, here's a chance.

[00:57:18] Alex Sharpe: All right, cool. So, really simple. I'm easy to find. It's Sharpe with an E: S-H-A-R-P-E. The website is Sharpe LLC, for limited liability corporation. You can easily find me on LinkedIn. Feel free to reach out. Basically, the work these days: I've pulled it back to just being me. Um, and I'm really loving advising and consulting to organizations and policymakers, uh, largely around, um, cybersecurity, resilience, AI, the stuff we're talking about.

So feel free to connect and reach out. My tagline is "value creation while mitigating cyber risk," because the idea is the world has gone digital, and the reason cyber exists is to enable the world to continue on its digital path, its AI path, but doing it without killing ourselves or presenting too much risk.

[00:58:23] Chris Sienko: Yeah, beautiful. All right, well, uh, you answered all my questions. Thank you so much for your time and insights, Alex. This has been so much fun. I really appreciate it.

[00:58:31] Alex Sharpe: Thank you for having me. I really enjoyed it.
