Navigating the wild west of artificial intelligence, and how data privacy plays into the game, is a hot topic for everyone from solopreneurs to large enterprises. In this episode, John Riley and Valerie Cobb dive deep into the world of AI policies and data security, exploring the nuances of modern AI usage, its risks, and its exciting potential. They discuss everything from ChatGPT’s data retention to how your company’s information might be unwittingly shared, plus hilarious AI jokes and the importance of cybersecurity policies. Join us as they unpack practical steps for protecting your data, especially as the tech landscape rapidly evolves, and get ready for some Gen X perspective on the digital age! Valerie and John also touch on the fascinating intersection of AI and cybersecurity, including the concept of “hiring hackers,” making this a must-listen for anyone keen on staying ahead of the curve in tech, business, and yes, even corny jokes.
—
AI Policies And Data Security: Navigating The New Digital Landscape
We’re back.
Here we are.
There have been a lot of pitfalls going on, as well as good stuff. We don’t want to be Negative Nellies. All you Nellies out there, I didn’t mean that. Sorry. My apology.
Is that like being a Karen or something?
I have friends who are Karen who have said, “Please don’t call me a Karen.”
We’ll come up with some new names.
That’s what we’d have to use. In all of our joking and all those kinds of things, we’re back on another great episode of the show. We’re going to laugh a little bit on this episode.
Do we start off with some AI jokes or something since we’re going to be talking about some AI?
We’re going to talk about some AI, and we’re going to talk about your last short. Your last short that I saw was amazing about the hackers. That’s going to be fun, too. Let’s have an AI joke. You’re better at jokes than I am. We did popcorn jokes last time.
That was between you and me, that popcorn joke. That wasn’t in the episode.
I know, but we could do popcorn jokes again.
That one must have impressed you.
It did. I used one and I was like, “This is a bit corny.”
That’s the good one.
What’s an AI joke? We’ve got to have an AI joke.
How intelligent is artificial? Is that like military intelligence? Not to offend my military friends out there or anything else, but we all know what we’re talking about.
We all respect. That was funny. What’s another one?
I didn’t know this was Joke Time with John. Let’s see. What’s a good opener at a bar? AI. See ya. You’re going to make me go to ChatGPT and try to find pick-up lines or something. I don’t know. We’ll figure something out. There you go.
The AI Buzz: Beyond The Hype And Defining Modern AI
You segued right into ChatGPT. The reason that I picked this topic is that there’s a plethora of advice on how to prompt on AI, AI agents, agentic AI, or whatever we want to start to coin all these wonderful phrases. Maybe we should define a little bit what we mean, because artificial intelligence has been around since the Furby, or even before then. That dates me.
You could tell.
For those who are new, it wasn’t just the release of LLMs like ChatGPT from OpenAI. Maybe we should qualify this a little bit.
What we’re going to talk about is more modern AI. It is the buzzword, but beyond being the buzzword, it’s being used for things that are great, but at the same time, beyond its capability, and being trusted as if it were a trusted advisor. Depending on your use case and how you’re using it, it may still work. A broken clock is right twice a day anyway. It’s noon and midnight. That’s twice a day.
I’m a little slow on the uptake. In my mind, I’m going, “Huh?”
A broken clock is right twice a day, no matter what time it is.
It doesn’t matter. You’re right. Keep going.
It’s a matter of that commitment. Some people are using ChatGPT to create business plans or have it give them a daily plan on how to market their business or something like that. Anybody in marketing will tell you that if you’re being consistent with it, you’re doing way better than the rest of your competition, and you’re going to have gains.
How it does it is up to you. If you’re doing something on it for half an hour a day and making those things happen, and that’s what the motivation is behind it, that’s great. Are there better things that you could be doing? Probably, but that’s okay. If you’re getting results, then it is what it is, especially if it’s something that’s free that’s getting you started. When you’re getting started and you’re a small company, it’s way easier to make those mistakes and get there than when you have payroll to make and everything else for other people.
Amazon’s Impact: Data Tracking And Emerging Privacy Concerns
If you’re a small company, maybe you’re trying to pay yourself. This topic came up because, in those business plans and some of those things, I started to think about Amazon. I am going to throw Amazon under the bus because I can. Think about Amazon. I remember when Amazon came out, and I remember thinking, “This is great.” We’d put stuff in the shopping cart, and we would do all of these things, and then it kept getting smarter.
I remember I was in a tax and accounting pre-SaaS organization because SaaS was nothing. That’s sad. It was coming out, but it wasn’t where people would think of it as a physical product. As Amazon was starting to do its thing, I remember that people would say, “How do you know that I clicked on something?” We had to instruct people with MailChimp. You generally have to have a conversation. You can’t say, “They clicked on something,” because that would feel like an invasion of privacy.
You get this salesperson, and they call up and say, “You clicked on our X.” Websites weren’t good enough to have Hotjar and all that kind of stuff to monitor who’s going where, so it wasn’t a very commonly known thing that you could track what people were doing. One of the big challenges was that we did have that data. We did know we could use that data to figure out how to talk to somebody.
When Amazon started to have the cart, you could start to notice that the cart made better decisions for you over time. Amazon didn’t have the same rules and regulations that we have now at the time that it started. It’s on this big behemoth mound of data that could clone you from shopping on Amazon. You could probably have already done that.
With all these AI posts on LinkedIn, I started thinking we’re paying attention to all these new how-to prompts and which agents to use. Is it an agent or is it not an agent? What is it? I then started to think about the mounds of data that organizations are sitting on. From there, I started to think a little bit about how I use ChatGPT. To me, chatting with ChatGPT is no different than growing up with Star Trek in the ‘70s and saying, “Computer.” People are starting to use it like that. Think about all the employees at your company. They’re doing that on their phone aside from all that. They’re saying, “Computer,” and they’re using information, like, “How do I write this email to so-and-so?”
Even beyond that, though, the bigger issue is taking that mound of data and saying, “ChatGPT or computer, take this data, which is all the shopping data from this person, and tell me more about them.” It’s not necessarily the writing of the email that’s the issue.
I kept that very light. That’s an everyday occurrence, though. It’s like, “Can you revise this email? It’s for so-and-so.” They’ll even give dates. They’ll even copy and paste the headers of those emails into ChatGPT and say, “Please rewrite this for me.”
Need For AI Policies: Safeguarding Data And Training Employees
That’s the real piece of the puzzle that is coming, especially if you’re in an industry where you have regulated data. For instance, if you’ve got PII data, some sort of CUI data, or any type of data that needs to be regulated, you have to have policies. You have to train your employees on what you can use those for and what you can’t use these tools for.
One of the problems that you get is you’ve got an employee who maybe makes a mistake once, twice, or thrice. At some point, it’s got to be a fireable offense, or it’s got to be something where this is not the way that work gets done because we can’t afford to lose that data. That’s going to be an interesting topic as we move over the next few years, especially with younger generations growing up with these tools. It’s like, “I’ve always uploaded my data there. I don’t have to work with this stuff because it produces the reports that I need, and it does it more quickly than I can.” Those are the things that we’re going to have to learn about.
My father was worried about what the computers were doing. He did punch card programming, so he had a pretty decent idea about computers. At the same time, he avoided a lot of those things. We’re looking at the next generation, going, “They’re utilizing AI, and they’re utilizing it in these ways.” There’s going to be some learning. We’re all going to have to expand our ability for that and understand what can and can’t be.
You don’t publish your Social Security number. You don’t publish your information on the web even if you’re the CEO of LifeLock. It was always funny. He would publish his Social Security number. He got hacked so many times that he didn’t care anymore. It was a marketing ploy for his company, and people would sign up. People were like, “If you’re willing to do that with LifeLock, then why shouldn’t I sign up?”
Ultimately, they got bought by Symantec because it wasn’t working as well as they had mentioned. That’s where we’re going to be with AI. There’s a lot of marketing and a lot of buzz about it. As things change, you still won’t post your Social Security number on the web. There’s going to be some learning that we’re going to have to do when it comes to AI.
AI Memory And Data Retention: What Happens To Your Data?
In this little exercise that I did, I asked ChatGPT itself how it protects the data. There are all these spreadsheets. We’re going to have to train, and we’re going to have to do all these things, but what do you train on without a policy? This is where I’m going with this. I started to think about Amazon’s roots. I wasn’t throwing them under the bus. You didn’t know what you didn’t know. You’re sitting on a mound of data, and you’re like, “I can use this data. This makes sense.”
When I was thinking about the various modeling I worked through, I had to find out more about ChatGPT’s memory. Remember that these LLMs, whether Gemini or ChatGPT, are what all these agents run on. When you get into it, it’s like, “It doesn’t stay in my memory,” or you can turn off what? My mind went blank. Help me.
You can turn off history.
You can turn off trainable. Trainable is what I started to get into. I said, “ChatGPT, where does the data reside?” Let’s say you turn off trainable. Where does the data reside? It was like, “You’re right. It resides on our servers.” I said, “That means that you still have access, even if you say don’t train on it.” It was like, “Yes, we do.”
It is supposed to be emulating you after a while. It’s almost spooky that it can talk back to you like a human being, understand you, and all these things. I said, “If it’s in your memory, where is it sitting?” It was like, “I have to not have it be in my memory after a certain amount of time.” I said, “Where is it sitting?” After a while, it answered me and said, “We have the data.” It’s not like it gets wiped. It’s not like it goes anywhere. It doesn’t get recalled unless you remind it.
You’re saying we should train them for the future. The challenge is that the damage is already done if you’re not training them now. What that resulted in is I went, “I need an AI policy so that anybody I bring on board has to make sure that they never use an organization’s name unless it’s publicly available.” I went down the litany of what this was to then say, “I’m onboarding.” Let’s say you onboard a VA or a new employee. It isn’t enough to say that in the future, we’ve got to train the generations not to use that kind of thing. You’ve got to have something in place now to start training them to realize that this is happening, and it’s only going to get worse in the future. That’s me anyway.
That’s our paranoid Gen X part of it, right? To a degree. My father would have had a completely different answer about AI versus what it is now. There are going to be changes. There’s going to be security updates. There are going to be ways of having it clear, that memory that might work if you’re paying for it versus not paying for it, or if you’re donating your ideas to the public domain. Those are going to be the choices that are going to have to be made or learned to help build those policies and understand what can and can’t be shared securely.
Never share Social Security numbers with them, for instance, even if you upload your Excel data file with all that information, which you shouldn’t have in there anyway. The point is your security policy should have kicked that in the butt right at the beginning. If you did that and you uploaded it, you’re asking for a lot of trouble. You have to understand what the data is, where it is, and who has access to it, and make sure that the people who have access aren’t sharing that data with these models.
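That “don’t let it reach the model in the first place” policy can be backed up with a technical control, not just a memo. As a minimal sketch only, here is a hypothetical Python pre-screening step that scrubs obvious PII patterns from text before it is ever sent to an external LLM. The patterns, labels, and `redact` helper are illustrative assumptions, not a real product’s API, and a few regexes are nowhere near exhaustive; a real organization would use a vetted data-loss-prevention tool behind the same idea.

```python
import re

# Illustrative patterns only -- real PII detection needs a proper DLP tool.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder like [REDACTED-SSN]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

# Anything headed for an LLM prompt passes through redact() first.
row = "Jane Doe, 123-45-6789, jane@example.com, 555-867-5309"
print(redact(row))
```

The design point matches the conversation: the gate sits on your side of the wall, so even an employee who pastes a whole spreadsheet row into a chat window only leaks placeholders, not the regulated data itself.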
Revenue generators are always looking for agents who will upload data to make proven repeatable revenue processes easier, faster, and better, including writing emails for you and all of those things. Revenue generators, marketing, sales, CRMs, Salesforce.com, all of those guys who are using insights to do those things are uploading databases to agents.
Contracts & Data Risk: Trading Convenience For Freedom?
The other part of that is from a purchasing standpoint and understanding contracts. That’s where that’s going to be more identifiable. If I have a contract with OpenAI or Google and they’re reading my data, as long as that data stays within my walls, so to speak, is not being used to train the models, and truly isn’t being used that way, then great. You can utilize that data. If that is going outside of your data center and being shared, that’s where the issue is going to be. That’s going to come down to the contracts.
Can contracts be broken? Yes. Talk to any lawyer. If you need one, I know a million. There’s a reason that there are so many lawyers out there from a contract perspective. That’s because somebody can sign a document and still not follow through with their part of it. They’re like, “Somehow, that data got back into our main system when we logged in or whatever.”
A lot of this is going to come down to contracts of where the data is. Is the risk worth it? That’s the other thing. Is being able to give that data to a machine learning system worth that risk for the upside? If you’re building revenue from it, maybe or maybe not. That’s a question that you have to take a look at from within your organization and answer.
I’m not going to say which CRM I asked. I go, “I’ve got a plethora of attorneys. If I get sued, it won’t cost me. It’s much better to have the data and use it than the other side of the fence.” I laughed, and I didn’t laugh. This is an awareness thing. You get to a point where sometimes, you forget that giving up your data is trading freedom, if that makes any sense.
We all want to live in that connected world because it does save us time. We want grocery delivery. We want pizza delivery. There’s that meme about Google buying Pizza Hut, and some of those things. At the beginning, when we started this episode, I said it’s going to be some of the negative and some of the positive. The reality is that there are ways to keep this positive. First of all, start with a policy of some sort. You’re the guru more than I am the guru. What are some of the positive ways that small organizations, or whoever, can fix this thing?
It’s not about fixing. It’s awareness of what data is being shared. The example I gave was of a small business using it for marketing, a plan for doing things, or revenue generation. If you’re a single entrepreneur and you’re trying to build something out, those little changes, even if they’re not correct 100% of the time, are going to be way better. When you have people counting on you for payroll or when you’ve got some of these other bigger items going on, you need to question it a little bit more. The larger the company gets, the more you need to question that.
I can tell you that I would not want to be standing in a courtroom saying, “I followed ChatGPT’s idea of how to do this. They wrote my contract, and it didn’t work out great.” They’re like, “You’ve got 1,000 employees. Why did you do that?” I couldn’t stand there and take that from the judiciary. The good news is that you can utilize it for your own personal things. Use it if you want. It will help you. If you haven’t read Atomic Habits, it can help with something like that.
I love Atomic Habits.
Take a look at it. There are ways of building new habits that it can help with, like taking vitamins or anything else. Learn to use it for those things that help you build those habits and those good things that can come along with it, but be aware that what you’re sharing with it may, at some point, become public. That has always been the advice for anything you publish to the internet. We know this now as adults, whereas maybe we didn’t know it with MySpace. Anything you share on the internet is there forever, except maybe MySpace.
Hopefully, the people who are tuning in don’t even know what MySpace is.
I was talking to somebody, and they were talking about MySpace. They said, “That was the first time I learned how to do HTML coding.” If you think about that, that was a lot of people’s first introduction to it because they wanted to make pretty pages. It’s the same thing with AI. People want to make pretty emails and all these things, so they’re utilizing these systems that are learning about how to create those things. It’s somewhat similar. We’re wading our way in, and people are learning to do that AI thing. What AI is now is not what it’s going to be in 10 years or 5 years.
It’s probably more like one year. I’m telling you, it’s faster than that. This is based on your short about the hackers. You’re a large company. Hacking the mothership is exciting, but threat actors typically go to places that are a little easier to hack. You hear of the sensational stuff. Being a ten-person shop or a startup does not mean you’re immune to that. Corporate espionage is worth a lot, and it doesn’t take holding your data hostage for things to get a lot worse. You could end up losing your company because a competitor beats you out, all because somebody in a ten-person shop decided to use ChatGPT for something.
It’s the same as an employee taking your data, going and starting their own company, going after your clients and everything else, and being on the defense.
It’s very similar. Be careful. That’s all I’m saying.
The positive is that it will change over the next few years. As those changes come, it’s going to become more powerful, and it’s going to become easier to move things along. I’m still waiting for the AI that reads my email and responds to it without me looking at it.
That is John. If you want to have a conversation with John, make sure you pick up the phone.
There is that, too.
Don’t you want a deepfake then, like, “Are you John on this camera right now? Is this John? How do I tell that you’re John?”
When it comes to deepfakes, all you have to do is grow a beard, and it messes it all up.
I’m in trouble with that one. I have to figure out how to make that one work.
One of those Christmas beards.
The Business of Hacking: Defending Against AI-Driven Threats
That’s true. You had a short that was talking about hiring hackers. What are two pieces of advice you would give people when thinking about AI and hiring hackers? I found that fascinating.
If you haven’t watched the short, I’ll do a quick synopsis of it. Hacking is a business. They are hiring people to come up with ideas and targets. When they recognize a target, they’re splitting the revenue from whatever they collect, and they’re offering hardware as a service and other items. That being the case, it is a business.
How do you deal with that? One is you’ve got to try not to be a target. The best way to do that is to help build policies and not share the data that’s out there. Understand the policies from a cybersecurity standpoint, but also from an AI standpoint. You don’t want to give it to them either. Somebody knocking on your door doesn’t mean that you hand them the keys to the castle. It’s the same with any of these AI platforms. It’s not because you’re using them that it means you give them everything, because they will learn from them unless you look at those contracts.
The second thing that I would say is awareness and training for your people. If you haven’t watched Catch Me If You Can, there are a lot of movies like it that show how even basic confidence gets people past a lot of these controls. Those are things that do happen. We need to be aware of them and understand that that’s a standard way of getting past the gatekeepers.
If they’re not trained and they don’t know what it is, you’re going to have to get there, especially when it comes to AI. If I say I’m an AI agent and you’re going to feed me the information, I’d be like, “That was the wrong answer. Try again. Here’s more data. Try again.” It is the awareness of who has access to what data. Making sure that they know what to do with that data and what not to do with it is an important part.
I agree 100%. Knowing what you’re training for, which means having some kind of AI policy so that they all know where that is front and center, is very important. A lot of people have forgotten that that’s something that’s necessary. We’re so grounded in different types of frameworks, depending on the industry, whether it’s SOC 2 Type 2, CMMC ML2, or those kinds of frameworks. Get your AI policy. Get it so that you can calculate the rest, so that you don’t end up in a court of law. That way, you can produce your evidence and proof of compliance at the time. I hope you never end up in a court of law.
Also, understand that you need the cybersecurity policies before that. That’s one of the things that we ignore. We still run into companies that don’t have any, and they don’t understand why they need them. Talking to them about AI policies is a goal line that’s too far. Start with cybersecurity overall, and then get your AI policies. There are steps to get there. Understand that. Like CMMC with the maturity model, it takes maturity. You can’t start running right out of the womb. We’ll put it that way.
I did, according to my mom.
I did, too. My father always said that I was like an adult at the age of two.
You probably were. It’s those old souls. That’s how it is. Bring us home from this episode. What’s your last zinger line for this episode?
Since we started this, I did go ahead and ask ChatGPT for a joke. We’ll come back to that. Why did the AI go to therapy? It’s because it had deep learning issues. There’s your ChatGPT joke for AI for you.
That’s your dad joke, practically. Like us and love us on Omnistruct. It has been a great episode. Thank you for tuning in. Have a very blessed day.
Thank you, everybody. Have a great day.
About Valerie Cobb
Revealing why people buy to drive revenue. Valerie Cobb is an award-winning leader with over 25 years’ experience, and is passionate about growing revenue.
She has mastered getting to the root of the buying-and-selling dysfunction that is often common in organizations on the path to consistently producing high-performing sales.
As Chief Revenue Officer of Omnistruct, she is instrumental in aligning sales, marketing, and the client experience.