Generative AI has made its way into nearly every industry, promising to change the way we do business. But is your customer data AI-ready? Here are four questions marketing leaders should ask to create an AI-ready data strategy.
Generative AI opens a world of possibilities. From creating personalized content to enhancing productivity and much more, marketers are eager to realize the potential of generative AI to transform their organization’s marketing operations. Though Generative AI can be a catalyst for creativity and innovation, it does not replace human oversight and input.
As leaders invest in implementing generative AI to enhance their marketing operations, it’s critical that they also invest in the right foundations to help ensure success. One such foundation? An AI-ready data strategy.
Generative AI projects rely on large amounts of data to work effectively. To be useful, that data must be trustworthy, secure, accessible, and organized so that your Generative AI tool can produce meaningful insights and outputs. Because Generative AI models build outputs based on the data used to train them, an effective data strategy is imperative.
Is your data ready for Generative AI implementation?
Here are four questions to consider first:
Question 1
Is your customer data reliable, indexed, and usable by a Large Language Model?
Before you launch your Generative AI project, ensure that you have an effective data quality strategy in place. This includes doing a thorough analysis of your data infrastructure, understanding how your data is collected and where it's collected from, and creating a plan for data cleanup.
To make your data easier for both your AI and your human collaborators to consume, you will want to develop a metadata strategy and a data dictionary. Your metadata strategy will enable you to tag your data and, as a result, improve searchability. Your data dictionary provides a business-friendly collection of descriptions of the data objects or items in your model. The dictionary is crucial for consistency and accuracy in data management.
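As a lightweight illustration, a data dictionary entry can be as simple as a structured record that pairs each field with a business-friendly description, an owner, a sensitivity flag, and searchable tags. The Python sketch below is hypothetical; the field names, attributes, and tags are illustrative assumptions, not a prescribed schema.

```python
# A minimal, illustrative data dictionary sketch (field names, attributes,
# and tags are hypothetical examples, not a prescribed schema).
data_dictionary = {
    "customer_id": {
        "description": "Unique identifier assigned to each customer record.",
        "type": "string",
        "source_system": "CRM",
        "owner": "Marketing Operations",
        "contains_pii": False,
        "tags": ["identity", "join-key"],
    },
    "last_purchase_date": {
        "description": "Date of the customer's most recent completed order.",
        "type": "date",
        "source_system": "Commerce platform",
        "owner": "Sales Operations",
        "contains_pii": False,
        "tags": ["engagement", "recency"],
    },
    "email_address": {
        "description": "Primary email used for marketing communications.",
        "type": "string",
        "source_system": "CDP",
        "owner": "Marketing Operations",
        "contains_pii": True,
        "tags": ["contact", "consent-required"],
    },
}

def find_fields_by_tag(dictionary, tag):
    """Return the names of fields whose metadata includes the given tag."""
    return [name for name, meta in dictionary.items() if tag in meta["tags"]]

# Example: surface every field that requires consent checks before use.
print(find_fields_by_tag(data_dictionary, "consent-required"))
```

Even a simple structure like this gives both your AI tooling and your human collaborators a shared, searchable reference for what each field means and how it may be used.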
With a metadata strategy and data dictionary in place, you can then assess your data infrastructure for integration and organization. Often, the data relevant to different use cases sits across various functions. Map out where the data needed for your project is located and eliminate silos or manual processes that may slow or prevent access to the data required.
Question 2
How will you maintain customer trust and ensure regulatory compliance?
Prioritizing customer trust and regulatory compliance starts from within organizations. Internally, establish a clear set of guidelines and usage rules around data. Ask yourself: “What framework or governance do I have in place to ensure that our AI projects will comply with relevant regulations? What type of information can I feed into Generative AI models, both legally and ethically, and what boundaries do I need to draw to ensure that inappropriate data is not collected, stored, or used in my AI project?” Establishing these guidelines helps ensure AI deployments are ethical and can be trusted.
Your customers expect to be able to trust that the data they share with you will be used ethically and without bias. Prioritize transparency by sharing how you are using their data, and to what ends. Implement privacy and security protections on your data systems to mitigate breaches of customer trust and help ensure regulatory compliance.
Handling customer data in a responsible way is a key to maintaining trust and ensuring privacy. This includes prioritizing consent and ensuring data is used only for its intended purpose. AI models often require access to sensitive data. Consider implementing anonymization techniques to help protect individual privacy while still allowing the AI to learn from the data. Additionally, put processes in place to monitor who has access to the data. Regularly review these processes to ensure you stay aligned with the latest regulatory changes.
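One common approach is to pseudonymize direct identifiers and generalize quasi-identifiers before records reach an AI workflow. The Python sketch below is a minimal illustration under assumed field names and masking rules; it is not a complete anonymization solution, and real deployments should follow your organization's privacy and security standards.

```python
import hashlib

# Illustrative sketch: pseudonymize direct identifiers and coarsen
# quasi-identifiers before records are shared with an AI workflow.
# The salt, field names, and masking rules are hypothetical assumptions.
SALT = "rotate-and-store-this-secret-outside-source-code"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Strip or transform fields that could identify an individual."""
    return {
        "customer_token": pseudonymize(record["email_address"]),
        # Generalize precise values to reduce re-identification risk.
        "age_band": f"{(record['age'] // 10) * 10}s",
        "region": record["postal_code"][:3],  # keep only a coarse area prefix
        "last_purchase_date": record["last_purchase_date"],
    }

sample = {
    "email_address": "jane.doe@example.com",
    "age": 34,
    "postal_code": "60614",
    "last_purchase_date": "2024-03-02",
}
print(anonymize_record(sample))
```

The design choice here is that the model still learns from behavioral signals (recency, region, age band) while direct identifiers never leave your governed systems in readable form.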
Our Trustworthy AI™ framework aims to reduce bias in AI models by developing ethical safeguards across seven dimensions related to fairness, transparency, accountability, robustness, privacy, safety, and security. The framework analyzes these dimensions throughout an AI system’s lifecycle, from the initial design and training to the deployment and ongoing monitoring, to ensure that bias is minimized at every stage. Use our Trustworthy AI™ framework to help you develop safeguards that manage AI risks and build trust with internal and external stakeholders.
Question 3
Is your customer data compiled into a single platform?
Organizations that already compile their data from marketing, sales, and service into a single unified view of the customer (for example, with a data cloud or CDP), tend to have an easier time implementing AI-enabled personalization journeys than businesses whose data is siloed in separate platforms and systems across the enterprise.
Even in the absence of a data cloud or CDP (customer data platform), you can still explore and experiment with Generative AI as you continue to develop and refine your data strategy. Doing so can enable you to identify gaps in your data and build the necessary connection points between your generative AI project and existing business processes.
Question 4
How will you monitor and validate your Generative AI outputs for accuracy and relevance?
Because Generative AI models can potentially generate inaccurate or false insights, monitoring outputs regularly is key. Monitoring involves real-time tracking and verifying the Generative AI model’s results for accuracy, relevance, and potential risks like false statements, bias, or regulatory violations.
Use surveys, A/B testing, and test-and-learn strategies, and include a human in the loop to monitor and validate outputs. Running surveys internally before A/B testing or test-and-learn pilots will also help you assess outputs before they reach consumers.
Having a human in the loop means including human judgment in the AI decision-making process, whether during the model-building, training, testing, or validation stage. By incorporating human oversight and AI governance teams, you can monitor and evaluate the performance of the AI system, mitigate reputational and brand risk, provide feedback, and make corrections as needed. This helps ensure that the AI system's outputs are reliable and aligned with the intended goal. It also helps identify and reduce potential biases in the AI system, maintain transparency, and ensure compliance with relevant regulations.
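To make this concrete, here is a minimal, hypothetical sketch of how automated checks and an explicit human decision might be combined before any AI-generated draft is released. The check names, thresholds, and reviewer callback are illustrative assumptions, not a specific product's workflow.

```python
# Minimal human-in-the-loop review sketch. The checks, thresholds, and
# reviewer callback are illustrative assumptions.
BANNED_CLAIMS = ["guaranteed results", "100% accurate", "risk-free"]

def automated_checks(draft: str) -> list[str]:
    """Flag obvious issues before a human ever sees the draft."""
    issues = []
    if len(draft) > 1200:
        issues.append("Draft exceeds the channel's length guideline.")
    for phrase in BANNED_CLAIMS:
        if phrase in draft.lower():
            issues.append(f"Contains a prohibited claim: '{phrase}'")
    return issues

def review_output(draft: str, reviewer_approves) -> dict:
    """Combine automated checks with an explicit human decision."""
    issues = automated_checks(draft)
    approved = not issues and reviewer_approves(draft)
    return {"approved": approved, "issues": issues}

# Example: the reviewer callback could sit behind a UI or an approval queue.
result = review_output(
    "Spring offer: members save 20% on their next order.",
    reviewer_approves=lambda text: True,  # stand-in for a real human decision
)
print(result)
```

In practice, the logged issues and human decisions also become an audit trail that your AI governance team can review against regulatory requirements.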
Remember, AI is not a magic wand. It requires a clear strategy, a solid foundation of good data practices, the right operational model and talent to review and assess the outputs, and an overarching culture that understands and values data. As you start to build your Generative AI strategy, don’t forget the importance of your data foundation.
How ready is your marketing organization for Generative AI?
Get started on your AI-ready data strategy
Here are a few considerations for what to invest in now, experiment with soon, and aspire to in the future.
Democratize access to your data. Create dashboards that use Generative AI to allow teams across your organization to summarize information quickly and gather insights that will inform key changes and developments in the customer experience. Doing so reduces the burden of data analysis, interpretation, and visualization on your technical team.
Use Generative AI to help you draft copy for an upcoming email campaign, segmenting by known user personas using the data you can already access in your existing platforms. Test, learn, and repeat to discover which versions of the AI-generated copy perform best with each segment. Don’t forget to keep a human in the loop to check for accuracy and brand consistency across your different email segments (a minimal sketch of this segmented drafting workflow follows these considerations).
Deliver automated, personalized nurture emails encouraging the right next best action to each customer, based on all their known interactions with your organization. Use Generative AI to iteratively personalize the content, images, and channel selections that are optimized based on user intent signals in the data you’ve collected.
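Returning to the segmented email drafting idea above, the hypothetical Python sketch below shows one way persona-segmented prompts might be assembled. The personas, prompt template, and call_llm() stub are assumptions; in practice you would replace call_llm() with your organization's approved generative AI service and route every draft through human review and A/B testing.

```python
# Illustrative sketch of assembling persona-segmented prompts for email copy.
# The personas, template, and call_llm() stub are hypothetical placeholders.
PERSONAS = {
    "budget_shopper": "values discounts and clear pricing",
    "loyal_member": "responds to exclusivity and early access",
}

PROMPT_TEMPLATE = (
    "Write a two-sentence promotional email for a customer who {traits}. "
    "Keep the tone consistent with our brand voice guide."
)

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; echoes the prompt for demonstration."""
    return f"[draft copy generated from prompt: {prompt}]"

def draft_segment_copy() -> dict:
    """Generate one draft per persona so each can be tested separately."""
    return {
        persona: call_llm(PROMPT_TEMPLATE.format(traits=traits))
        for persona, traits in PERSONAS.items()
    }

# Each draft should still pass human review before it reaches customers.
for persona, copy in draft_segment_copy().items():
    print(persona, "->", copy)
```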
About the authors
Trinadha Kandi is a managing director and the leader of the Advertising, Marketing & Commerce practice at Deloitte Digital, a global leader in digital transformation and innovation. He has over 20 years of experience delivering data-driven marketing solutions for clients across various industries, leveraging his expertise in marketing technology, data, and AI.
Sai Medi is a data science and ML specialist at Deloitte, working at the intersection of customer data and marketing. He supports clients in the domains of RCP, health care, and tech.
Jenny Kelly is the head of content for Deloitte Digital where she brings over 15 years of experience creating compelling content that connects audiences with brands. Jenny also leads our GenAI efforts for Advertising and Marketing, helping brands understand how GenAI will transform their business.
David Chan is the Customer Data Platform (CDP) practice lead for Deloitte Digital where he helps clients deliver real-time personalization strategies leveraging customer data signals, insights and analytics, and CX technologies.
As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.
This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.
Copyright © 2024 Deloitte Development LLC. All rights reserved.