Protecting customer privacy in the AI era

Generative AI unlocks enormous value for brands and retailers, but it also raises very real privacy and data security risks.

To unlock hyper-personalization, accelerate content production, and unleash smarter customer service, businesses are increasingly relying on AI tools to access and analyze vast amounts of data.

Yet recent incidents show how quickly well-intentioned AI experiments can turn into breaches.

Earlier this year, a large retailer accidentally exposed the personal information of 1.8 million customers. It had used an AI tool to create personalized offers, but the tool lacked proper access controls, and hackers were able to reach the underlying customer data.

In March, a global tech company left a cloud storage area containing sensitive company data and customer information publicly accessible. Hackers used automated tools to quickly harvest the data, forcing the company to halt AI projects and conduct an emergency security review. 

In October, the government of New South Wales, Australia, accidentally exposed the personal information of up to 3,000 flood victims to ChatGPT. Data from the breach could potentially be used to train the AI platform, making sensitive information publicly searchable.  

Organizations hold more data than ever, and with that power comes greater responsibility. And when a breach occurs, the consequences are costlier than ever.

The cost of breaches

A new IBM study found that companies are failing to protect their AI tools from compromise, with breaches involving AI adding an average of $670,000 to the cost of a data breach.

The report also found that while only 13% of organizations reported breaches involving AI, 97% of those that did lacked proper AI access controls.

One of the key findings was that the most common entry point for these attacks was compromised apps, plug-ins, and APIs connected to AI tools.

In this environment, privacy is an imperative that protects customers and the brand’s reputation.

Practical steps for protecting brand privacy in AI

Protecting brand privacy is about more than avoiding breaches: it builds trust with customers while letting teams use AI with confidence. Brands can take several concrete steps:

1. Control what data is uploaded

Only provide AI systems with the minimum data required. Avoid uploading sensitive personal information. And consider anonymizing or pseudonymizing datasets to reduce risk. 

For example, instead of uploading full customer profiles with names, email addresses, and purchase histories into an AI system, replace names with random ID numbers and mask email addresses.  

This way, the AI can still analyze purchasing patterns without exposing personally identifiable information.
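
As a minimal sketch of that idea in Python (the field names and salt handling here are illustrative assumptions, not a prescribed schema):

```python
import hashlib

# Illustrative only: in practice, keep the salt in a secrets manager.
SALT = "replace-with-a-secret-salt"

def pseudonymize(record: dict) -> dict:
    """Swap direct identifiers for a stable pseudonym and drop the rest."""
    # A salted hash yields a repeatable, non-reversible ID, so purchase
    # patterns for the same customer still link up across records.
    digest = hashlib.sha256((SALT + record["email"]).encode("utf-8"))
    return {
        "customer_id": digest.hexdigest()[:12],  # random-looking ID, no name
        "purchases": record["purchases"],        # behavioral data AI can analyze
        # name and email are deliberately omitted
    }

profile = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "purchases": ["sneakers", "rain jacket"],
}
print(pseudonymize(profile))
```

Because the hash is salted and one-way, the AI system never sees who the customer is, yet repeat behavior from the same person remains analyzable.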

2. Choose AI providers carefully

Compare the privacy policies of different LLMs or AI platforms (more on this below). 

Favor providers that offer strict access controls and no long-term retention of uploaded content. Understand whether your data might be used for model training or shared with third parties.

3. Implement strict access controls

Limit AI system access to authorized personnel only. Use role-based permissions and multi-factor authentication. And monitor usage logs to detect unusual activity quickly.
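
As a hedged sketch, a thin gateway in front of an AI tool could enforce deny-by-default role checks and write an audit line for every attempt (the role names and user shape are assumptions for illustration):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Only these roles may call the AI tool; everything else is denied.
ALLOWED_ROLES = {"marketing_analyst", "cx_lead"}

def authorize_ai_request(user: dict) -> bool:
    """Deny-by-default role check plus an audit trail for every attempt."""
    allowed = user.get("role") in ALLOWED_ROLES and user.get("mfa_verified", False)
    log.info(
        "ai_access user=%s role=%s allowed=%s at=%s",
        user.get("id"), user.get("role"), allowed,
        datetime.now(timezone.utc).isoformat(),
    )
    return allowed

user = {"id": "u-42", "role": "marketing_analyst", "mfa_verified": True}
if authorize_ai_request(user):
    ...  # forward the prompt to the AI provider here
```

The audit log doubles as raw material for the usage monitoring mentioned above: a spike of allowed=False entries is exactly the kind of unusual activity worth investigating.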

4. Secure integrations and APIs

Audit every plug-in, API, or third-party connection linked to AI tools. Ensure encryption of data in transit and at rest. Regularly test endpoints for vulnerabilities.
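
One small, automatable slice of such an audit is checking that every registered endpoint enforces encryption in transit; the endpoint list below is hypothetical, and a full audit would also cover authentication, scopes, and data at rest:

```python
import requests

# Hypothetical inventory of plug-in/API endpoints linked to AI tools.
ENDPOINTS = [
    "https://plugins.example.com/crm-connector",
    "http://legacy.example.com/export",  # no TLS: should be flagged
]

def audit_endpoint(url: str) -> list[str]:
    """Flag endpoints that fail basic transport-security checks."""
    findings = []
    if not url.startswith("https://"):
        findings.append("no TLS: data would travel unencrypted")
        return findings
    try:
        # verify=True (the requests default) rejects invalid certificates.
        requests.head(url, timeout=5, verify=True)
    except requests.exceptions.SSLError:
        findings.append("invalid or expired certificate")
    except requests.exceptions.RequestException as exc:
        findings.append(f"unreachable during audit: {exc}")
    return findings

for url in ENDPOINTS:
    print(url, "->", audit_endpoint(url) or "ok")
```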

5. Establish internal governance and audits

Define clear policies for how AI can be used across teams. Conduct regular compliance audits to ensure AI practices meet privacy standards. Train employees on risks and responsibilities when using AI tools.
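
Parts of those audits can be automated. As a rough sketch, a script could scan prompt logs for personal data that policy says should never reach an AI tool (the patterns here are simplistic assumptions and would need tuning per jurisdiction):

```python
import re

# Simplistic illustrative patterns; real audits need locale-aware rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\s-]?){9,14}\d\b"),
}

def scan_prompt_log(lines: list[str]) -> list[tuple[int, str]]:
    """Return (line number, PII type) for each suspected policy violation."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for kind, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, kind))
    return hits

log_lines = [
    "Draft a loyalty email for segment A",
    "Customer jane@example.com complained about order 1183",
]
print(scan_prompt_log(log_lines))  # [(2, 'email')]
```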

6. Plan for breach response

Have a clear incident response plan specifically for AI-related breaches. Ensure fast detection, containment, and communication to minimize impact on customers and reputation.

Comparing privacy policies of gen AI platforms

As businesses increasingly integrate generative AI into their operations, understanding the privacy policies of these platforms is crucial. 

Here’s how the major platforms handle user data:

ChatGPT 

OpenAI, the creator of ChatGPT, typically retains user data for 30 days. However, due to a court order stemming from a lawsuit brought by The New York Times, it is currently required to store all ChatGPT conversations indefinitely, including those users ask to delete. 

In terms of data usage, OpenAI uses interactions to improve its models; users can manage this through the platform’s data controls. 

Following backlash after shared conversations began surfacing in search engines, OpenAI has disabled the option to make chats publicly discoverable.

Claude 

Claude, developed by Anthropic, has extended its data retention period to five years for users who allow their chats to be used for model training. This applies to new or resumed chats and coding sessions, and users can change their data-sharing preference at any time in their privacy settings. 

Notably, the training setting is enabled by default for consumer accounts, so users must actively opt out if they wish to prevent their data from being used.

Gemini 

Google retains user data for different periods, depending on the user’s settings. Users can choose to have their data retained for 3, 18, or 36 months, or opt out of data retention entirely. 

Google uses user data to improve its services and for personalization. Users can manage their data settings through their Google Account’s activity controls.

What this means for brands

These differences define how much control a brand has over its data. Choosing an AI platform isn’t only a question of capability or cost but of data sovereignty. 

Retailers and brands should ask: 

  • Who ultimately owns the data once it’s uploaded? 
  • Can that data be used to train someone else’s model? 
  • How easily can it be deleted or audited later? 

Platforms that prioritize enterprise-grade privacy, with features such as isolated environments, encryption by default, and zero data retention, will increasingly become the gold standard for brands handling sensitive customer data. 

As privacy regulations tighten and consumer awareness grows, brands that lead on responsible AI use will differentiate themselves as trustworthy.

Building a culture of responsible AI

Data protection isn’t just about the platform you use but about the people who use it. 

And that comes down to company culture. 

Every team using AI, from marketing to customer service, needs to understand the privacy implications of the tools they use. That means: 

  • Embedding privacy-by-design principles in every AI project. 
  • Vetting third-party tools before deployment. 
  • Appointing cross-functional privacy leads to ensure alignment between legal, tech, and commercial teams. 

When privacy is treated as a shared responsibility, AI can become a force for innovation rather than a source of risk.

Looking ahead

AI’s potential is only beginning to unfold, but so are the privacy challenges that come with it. 

Brands that act now by tightening data controls and building privacy into their AI strategy will be the ones that earn customer trust and future-proof their digital operations.

If you’re looking for ways to integrate AI into your tech stack, contact us for guidance.
