AI is rapidly changing the nature of cyber risk, and most executives aren’t ready for the fallout. In this episode, cybersecurity strategist Christophe Foulon joins John Riley to unpack how generative AI is not only fueling smarter attacks but also exposing critical gaps in governance, incident response, and data protection. From deepfake threats and shadow AI use to the limits of cloud reliance and the realities of zero trust frameworks, this conversation lays out what business leaders need to know now. Drawing from experience at Capital One and Microsoft, Christophe offers clear strategies for improving visibility, setting enforceable policies, and preparing for the next generation of cyber threats.
—
Facing The New Frontier Of Cyber Risk In The Age Of Generative AI With Christophe Foulon
We’ve got another great episode, and we’ve got this amazing guest. He worked for the likes of Capital One as an ISO implementer and Senior Manager. He holds generative AI certifications from Microsoft and LinkedIn. He’s an author and a coach for people trying to break into the cybersecurity field. We’re introducing our special guest, Christophe Foulon. Welcome, Christophe.
Thank you so much for having me.
Cyber Risk Explained: What’s The Difference For Executives?
We’re going to jump right in here with our main question. How would you explain, especially to an executive, the difference between what cybersecurity is and what cyber risk is?
Cybersecurity is the act of understanding and securing the digital assets within your environment. Cybersecurity risk is the upside or downside of doing business within a given threat landscape. If you want to do Bitcoin mining, there’s a certain level of risk in securing your Bitcoin assets in a wallet, as well as the hardware needed to do the mining. Understanding that risk is one set of equations.
If we pull that back to something like pharmaceuticals, which others may understand better, the risk is more around securing the patents, the research, the trade secrets, the clients that you’re working with, the grants that you might be getting from the government, and how you’re doing business. You don’t want your competitors to see that. You might not want the market to see that because that might reveal too much or too little about how you are attacking the problem.

It is about securing your information: ensuring the right level of confidentiality, the right level of availability to those who need to see it when they need to see it, and the right level of integrity, meaning the information is correct and no one can alter it unnoticed. If it has been altered, can you detect it?
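That last integrity property, detecting whether a record has been altered, can be sketched in a few lines. This is a minimal illustration rather than anything described in the episode; the key handling and record format are assumptions:

```python
# Minimal sketch of integrity detection: tag a record with an HMAC so any
# alteration becomes detectable. The hard-coded key is for illustration only.
import hmac
import hashlib

SECRET_KEY = b"example-key-not-for-production"  # assumption: managed in a vault in real use

def sign(record: bytes) -> str:
    """Produce a keyed integrity tag for the record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def is_untampered(record: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(record), tag)

record = b"quarterly-results:confidential"
tag = sign(record)
print(is_untampered(record, tag))               # True
print(is_untampered(b"altered" + record, tag))  # False
```

In practice the tag would be stored alongside the record and the key kept in a secrets manager, but the detection principle is the same.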
What I heard is that the technology part is the cybersecurity piece. Then there are the broader cyber risks, things like patents and other assets, that people may not be considering.
The security part is the use of the technology to conduct business. The risk part is conducting business because you can avoid the risk by not conducting the business. That’s risk avoidance. You can transfer it by using a Cloud provider to transfer some of that risk because they’re the ones that are deploying the technology stack and helping to secure the technology stack for you.
They could even go as far as using a Cloud service provider to host software for you as Software as a Service. They’re doing even more for you, and you’re transferring even more risk to that service provider. It’s all about how much risk your organization is comfortable taking on in operating the technology and how they want to deal with it.
If you’re using a SaaS provider, check and make sure that they’re certified and that they’re doing the things that they need to be doing to protect your data.
That’s a good point. Ultimately, the client is always responsible for access control and protecting the data. As much as your SaaS provider promises reliability and availability, you still need to be responsible for data backups. Even if they provide backup services, if this is a critical business service, you might still want your own backup somewhere else or on another medium.
The Gen AI Threat: Navigating AI’s Impact On Cybersecurity
What is the most significant cybersecurity threat that you see that companies are facing now?
What a lot of companies are grappling with is the rapid deployment of generative AI solutions, both for internal use and by threat actors. Threat actors can use it to scale their operations, whether that’s phishing campaigns, deepfakes, faked phone calls from your CEO, or faked videos imitating your CEO to scare the market. Having a good incident response plan for how you would deal with that, and how you would react from a PR perspective, is something a lot of companies have not considered.
From the internal perspective, a lot of companies have not realized that they are already using generative AI. Even when they have, few have put data governance controls around it: “If we’re going to use it, how are we going to put controls around it? What data will we allow to be processed by generative AI? Will it be processed inside our organization, or will we allow it to be processed outside? What level of trust do we let stakeholders assign to any of the output?”
Too often, I’ve seen stakeholders assume that all responses from generative AI are 100% truthful. They trust it. We’ve seen lawyers go to court with briefs citing made-up case law. If your stakeholders are not verifying the citations or the numbers these generative AIs produce, you’re going to run into trouble.
You have to teach your stakeholders that if they’re using generative AI, they have to verify the outputs. When you’re training or refining your models, ensure there’s a level of QA before you release them to the larger organization: validate that the output error rate, and the overall risk level, is acceptable for your organization.
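That QA gate, releasing a model only when its reviewed error rate is within tolerance, can be sketched roughly as follows. The function name and the 5% threshold are illustrative assumptions, not anything prescribed in the conversation:

```python
# Hypothetical QA gate: hold generated outputs to an acceptable error rate
# before rolling a model out to the wider organization.

def passes_qa_gate(evaluations, max_error_rate=0.05):
    """evaluations: list of (output, is_correct) pairs from human review.
    Returns True only if the observed error rate is within tolerance."""
    if not evaluations:
        return False  # no evidence at all: do not ship
    errors = sum(1 for _, ok in evaluations if not ok)
    observed = errors / len(evaluations)
    return observed <= max_error_rate

# Example: 2 errors in 100 reviewed outputs is a 2% error rate
reviewed = [("...", True)] * 98 + [("...", False)] * 2
print(passes_qa_gate(reviewed))  # True under a 5% threshold
```

The threshold itself is a business decision; the point is that it is measured against human-reviewed samples rather than assumed.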
When you’re dealing with AI, there are a number of AI policies and other things to look at. You mentioned the content and other data being shared with these large language models, sometimes internally and sometimes externally. It’s about understanding that difference and then checking where the results come from. That makes a lot of sense.
The other thing is that if your AI is making decisions with the potential to cause public harm, it is your duty to ensure there is no bias in the AI, or at least that you’re aware of any bias. That way, you can remove the bias from the decision-making, either before the decision is made or afterward, by correcting the output in your final product.
CEO’s Cyber Priority: Integrating Risk Into Business Strategy
We’re going into an interesting time. When you’re talking to CEOs, how do you think they should prioritize that cyber risk threat? What priorities should they put on that compared to running their business, making sales, and bringing in new things? There are all these things. In the priority list, where do you think that generally falls?
I saw a study from McKinsey that looked at how much emphasis large companies have placed on AI over the past several years. Generative AI adoption has trended upward, from around 65% to 70% toward roughly 80%. Yet the impact on overall stock price or overall output has been fairly minimal. That’s because organizations are still very green in understanding what they can use AI for, and AI itself is still green; in many ways it’s barely more than a sophisticated if-then model.

We need practitioners to ask, “What do we want the AI to do? Do we want it to compute these things and make a decision?” We need to understand how the decision should be made first so that we can create the right model to make it. Oftentimes, organizations don’t have the thinkers in place to work that out, which is why the net impact from AI has still been so minimal. While so many people are talking about it, and there’s a lot of potential for automation and improvement, we’re not seeing a lot of impact yet.
I was reading an article about a company that had let go of a number of its staff members with the idea of replacing them with AI. Maybe a month later, they ended up hiring many of those people back. The hype around AI is high, much like when the internet was supposedly going to take all of our jobs back in the day.
There’s a portion that AI will help with or bring to fruition as it matures. The AI that’s out there, although it looks great on the outside, is still being trained. There’s a lot to be fixed and learned in the AI models. They’ll be replaced over and over again until things settle, similar to what we saw with Web 3.0 and the like.
It’s going to be an interesting journey to see how AI changes that and changes that landscape. I do think that we are at the cusp of that, like we were back in 1995 when I was using Mosaic as my web browser, and there were 50 websites that you could visit. That’s the same situation that we’ve got with AI. There are a lot of people who are jumping the gun a bit and opening themselves up to that risk. Ultimately, there will be a lot of significant changes that will happen over the next ten, fifteen, or twenty years, and it’s happening much faster than the internet happened.
It will happen a lot faster. We have to be a little more programmatic in understanding what we want it to accomplish so that we can accomplish it. Realize that it’s not a replacement for humans, but an augmentation tool for humans. We still need humans in the loop to ensure that it’s not doing the wrong thing, and use it to amplify the skills of the human doing the role.
The AI Arms Race: Cybersecurity Positions In An AI-Driven World
I’ve seen the video where a person scans a huge image and spots the differences between four panels. That amazes me. You need people who can find those kinds of problems in AI outputs, and those skills will probably be extremely valuable in the future. We’re talking about AI and how that’s working. How will that impact cybersecurity positions? What happens if AI is attacking you and you’re using AI to stop the attacks? Are we back in an arms race with AI on both sides?
If we take a step back and look at what the internet is and what Cloud services are, they’re everything as code. They’re a big data problem. What is AI good at? Tackling big data problems. AI will eventually get good at finding the patterns that vulnerabilities follow, identifying and chaining those patterns together, and then either exploiting them if you’re a threat actor or recommending mitigations if you’re a blue teamer.
There’s already an LLM trained to do things like this: WhiteRabbitNeo is an example, and it’s already close to two years old. In building it, its creators amassed a huge number of vulnerabilities and fixes for those vulnerabilities. It can scan websites, scan outputs from error codes and repos, and make recommendations on how to fix them. I’ve sat around a table with CISOs who have used it with their teams to augment their skills. I’ve also heard from CISOs that letting this loose on the world will let threat actors augment their skills too. It’s going to be an arms race on both sides.
For defenders, we have a little more knowledge about our environment, so we have a leg up. There’s the old saying that a threat actor only has to get it right once, but they get it wrong many times. Those many misses should be red flags telling us to harden our defenses and prevent those attacks: “They’re coming in this way, so start looking at this type of attack.” AI could be a tool to help us harden defenses in that way.
Even with AI, there’s bias, and there’s also people’s bias. If you’re a network engineer looking at network traffic, you may not be paying attention to what’s happening at the desktop level, for instance. Sometimes, what I’ve seen is that the network engineer puts four firewalls in place to make sure the network is secure, but leaves no antivirus on the desktops, because the network is where their focus is.

PowerShell goes wild.
That’s where some of these frameworks can help a customer understand, “Maybe I don’t need to have the four firewalls. Maybe I should put the antivirus on the machines or spread it out a little bit more so that I’m a little more well-rounded and more protected.”
Speaking of frameworks, there’s the zero-trust framework, which the industry has been promoting as a least-trust approach. A lot of companies use it as a marketing term, claiming their solution is zero-trust compliant or is a zero-trust solution. If you treat it as a framework instead of a one-click solution, you have a better approach and can go about it the right way. There’s a great book, told in the style of The Unicorn Project, that does the same for zero trust. It walks through how a project team would tackle a zero-trust problem during an incident and how they would address least privilege. It’s an interesting read.
Cyber Disaster Journey: Preparing For The Unthinkable
We touched a little bit earlier on incident response. Tell me. From your perspective, what does that cyber disaster journey look like? If you’re an executive and you’ve been hacked, and somebody’s stealing your sensitive data, what does that look like?
Identify, detect, contain, remediate, and recover: the NIST framework. Let’s start with ensuring you have logging before anything happens. Turning on logging in the midst of, or after, an incident won’t provide you with as many benefits. I’ve done IR with companies that had an account takeover, with phishing sent from an account targeting their CEO, and things like that. You can’t do much backward tracking of how it happened if you didn’t have logging to begin with. Ensuring you have visibility into your environment is one of the first things you should do.

Having visibility and not looking at it doesn’t help either. Either have someone on your team trained to review your logs, someone who at least knows which logs to examine for these kinds of sensitive events, or, if you don’t think your organization is mature enough for that, work with an MSSP that can augment your team and provide security monitoring for you. They can then escalate events to you and help with the incident response and remediation.
From there, you want to ensure you have good backups and that you test them. If your backups have never been tested, and you attempt a restore and it fails, you’ve backed up a bunch of dead data that doesn’t help you in an incident. I’ve walked through several situations asking leaders, “When’s the last time you exercised your BCDR plan, took a backup, and restored it to full functionality?” Most of the time, I hear crickets.
Ensure that this is within your business tolerance, and that you’re backing up critical business information as well as systems, so you can restore them in the event of ransomware, a Cloud provider outage, or a server dying. You can restore onto another server and bring it up quickly. This is something your team should be doing on a regular basis, as well as documenting the steps to do so.
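A restore test like the one described can be reduced to its essence: back up, restore, and verify byte-for-byte rather than assuming success. This is a toy sketch with illustrative paths, not a full BCDR exercise:

```python
# Sketch of a restore test: back up a file, "restore" it, and verify the
# restored copy against the original with a checksum instead of trusting it.
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    source = os.path.join(tmp, "critical.db")
    backup = os.path.join(tmp, "critical.db.bak")
    restored = os.path.join(tmp, "critical.restored")
    with open(source, "wb") as f:
        f.write(b"business-critical records")
    shutil.copy(source, backup)       # the backup step
    shutil.copy(backup, restored)     # the restore step
    # Verify, don't assume: a failed restore should fail loudly here.
    assert sha256_of(source) == sha256_of(restored)
    print("restore verified")
```

A real exercise would restore a full system to working functionality, but the habit is the same: the test isn’t done until the restored copy is verified.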
Oftentimes, this documentation is tribal knowledge. It is undocumented. It might have lived only in the head of the person who set it up, and they might have left the organization two years ago. Everyone else has just been keeping the application running; if they needed to rebuild it from scratch, no one would know how. If that’s the case, you might want to go back and rebuild this documentation so that if you did have an incident, you could rebuild your application from scratch, and quickly.
I completely agree with that. The recovery plan also needs to include contacts, maybe PR, from an executive standpoint. You hit a lot of the technical pieces. Understand that if you’ve lost customer data, you’re probably going to have to say something. You’re going to have to go out there, talk to people, and let them know. Preparing for that ahead of time gives you a better chance of not making mistakes.
Legal, PR, and regulatory contacts, if you’re under any regulatory governance, as well as ensuring you’re connected with your regional FBI or CISA contacts. I’m also the President of the InfraGard NCR chapter, a nonprofit that facilitates collaboration between the FBI and the private sector so that critical infrastructure organizations can partner with the FBI. They know who their contact is, they can get information to prepare for and respond to an event, and they know who their private sector coordinator is, so they can coordinate quickly if there’s an incident at their critical infrastructure organization.
Prep, prep, and more prep, right?
Exactly.
Christophe’s Cyber Coaching: From Caribbean Hacking To Executive Empowerment
Tell us a little bit about yourself. Who are you? How did you get here? Tell us about your company and how you’re doing things.
I started out loving computers. I grew up in the Caribbean and fell in love with computers at an internet cafe, hacking things together. I earned my internet time by being the sysadmin and break-fix person for the cafe. Eventually, I decided to make this my full-time career. I loved helping organizations make things more secure as well as functional, so I transitioned to the dark side of security.
I ended up mostly on the consulting side. I focus on coaching cyber executives because I’ve found that consulting is only one approach. I love coaching because it engages the executive in the solution; rather than just recommending what the company thinks industry best practice might be, it truly involves them in what the solution should be for their organization. I work for Quisitive, but I also have my own LLC, CPF Coaching LLC. I do a podcast, Breaking Into Cybersecurity, and I’ve written a couple of books on how to develop your cybersecurity career. I’m a little bit of everything.
What are you working on that you’re most excited about?
What I’m excited about is my journey into helping more executives advance their AI initiatives by developing data governance frameworks. I know it doesn’t sound flashy, but helping organizations get back to basics means they can then use AI automation and see real advancement. It’s exciting because you can see that growth and maturity over time. I see the same growth and maturity when I coach individuals, and I love that. That’s why coaching is my approach.
Christophe’s Pro Tip: Uncovering AI Use And Setting Governance
If you could go back in time and give your younger self some advice, what would that advice be?
I would’ve started my own company a lot sooner to do coaching on the side, and started my security career a lot sooner too. I always felt that security was out of reach, that it was too hard. Yet I was doing security from the beginning: I was focused on making things safer and enabling users to do things in a safe manner, but I didn’t think of that as security. I thought it was general system operations. That was a blocker in my head. If I could go back, I’d tell my younger self to start that security career a lot sooner.
I probably would’ve done the same thing. It was a change, moving from sysadmin to security and everything else. I agree with that. We like to give our audience one good action item. What’s one piece of advice, or a tip for reducing cyber risk, that you would give to an executive?
I would say to go out and ensure you have visibility into what AI your organization is using. If you think your organization isn’t using AI, you’re probably wrong; they’re using it in some way, whether it’s Grammarly or ChatGPT. Be proactive and understand what use cases they’re using it for. Decide as an organization how to put governance around the appropriate use cases, and where to put controls around organizational data versus casual generative AI use, so that you can set acceptable use policies for your stakeholders.
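As a first pass at that visibility, one common approach is to scan outbound proxy or DNS logs for well-known generative AI domains. The domain list and the simple "user domain" log format below are assumptions for illustration, not a complete inventory:

```python
# Illustrative sketch: flag outbound requests to well-known generative AI
# services in a proxy/DNS log to get first-pass visibility into shadow AI use.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "gemini.google.com",
    "claude.ai", "app.grammarly.com",
}

def flag_ai_traffic(log_lines):
    """log_lines: iterable of 'user domain' entries.
    Returns {user: {ai_domains_they_contacted}}."""
    usage = {}
    for line in log_lines:
        user, domain = line.split()
        if domain in AI_DOMAINS:
            usage.setdefault(user, set()).add(domain)
    return usage

sample = [
    "alice chat.openai.com",
    "bob intranet.example.com",
    "alice app.grammarly.com",
]
print(flag_ai_traffic(sample))  # alice shows up; bob does not
```

The output is a starting point for conversations about use cases and acceptable use policies, not a verdict; a real deployment would pull from your proxy, DNS, or CASB telemetry.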
Also, be able to enforce those policies. We always used to say it’s one thing to have a policy and another thing to enforce it.
Having visibility is the first step. If you don’t have visibility, you can’t even have the data to make a risk-based decision.
Where can people find you?
You can find me on LinkedIn, or you can find me at ChristopheFoulon.com.
That’s pretty much it, everybody. I appreciate Christophe being on the show here with us. Audience, thank you for tuning in. I hope you’ve learned something and laughed. Tell somebody about the show. It has been another great episode of the show. We’ll see you next time.
Thank you so much.
About Christophe Foulon
As a seasoned Cybersecurity Executive Advisor, IT, and GRC leader with over 17 years of progressive experience, he brings extensive expertise in IT, cybersecurity, cloud technologies, and business transformation.
His background includes a Master’s degree in Information Security and Assurance and a Bachelor’s in Business Administration focusing on Information Systems.
He is well-versed in navigating complex technological landscapes and adept at guiding organizations through their digital evolution. All of his certifications are actively maintained.