
Council Post: What’s On Advertisers’ Minds Regarding Generative AI?
Part One

The Dilemma of Advertisers: Implementing Ethical AI Governance 

Agencies are rolling out refreshed AI governance policies amid the generative AI boom. But while the C-suite focuses on grand pronouncements, employees grapple with real-world ethical dilemmas that remain unaddressed.

In the storm surge of generative AI, advertising's flood walls stand in the form of revamped AI governance that addresses concerns around transparency, consumer privacy, and the unauthorized use of data. Advertising leaders are directing employees toward these governance measures, yet falling short of providing effective methods for incorporating them smoothly into current workflows. Without practical guidance, employees face difficult yet fundamental questions: Am I violating consumer privacy by using AI platforms in this work? Is this AI platform referencing biased data? How can I effectively QA this work? Being expected to use industry-transforming AI platforms without frameworks to mitigate their consequences is an anxiety-inducing dilemma.

The disconnect between ethical governance and practical implementation is raising doubts about advertising's ability to navigate generative AI effectively. And while advertisers are adept at operating in uncertainty, the ethical considerations they currently face leave no room for risk. Brands, consumers, and even governments are demanding more from advertisers, exposing the industry's greatest vulnerability: AI implementation that precedes applicable ethical governance.

Responsible for altering behaviors and shifting viewpoints, advertisers are held to a high ethical standard. The gravity of this responsibility means that AI governance too far removed from existing workflows is no longer ethical. Because of its complexity, ethical AI governance requires a multi-perspective approach, one that addresses platform-level considerations (bias mitigation and the ethical design of specific platforms), consumer-level considerations (privacy and transparency), and societal-level considerations (building toward better AI standards). An example of a multi-perspective framework for developing AI governance follows:

AI Transparency (Specific Policy Within the Governance)

Platform-Level Guideline
Agencies should ensure that the platform's decision-making process is understandable to stakeholders, and they should be able to explain the platform's decisions.
Workflow integration: Ask legal to review platform terms before integrating them into workflows. Internally disclose AI-influenced or AI-created work to line management. Line management escalates QA of input parameters and outputs as needed.

Consumer-Level Guideline
Consumers must have visibility into why they are being shown specific advertisements that use AI in the workflow, with explanations of how the AI makes those decisions.
Workflow integration: Ads that employ AI should be watermarked and should link to transparency microsites that inform consumers about AI practices and policies.

Societal-Level Guideline
Agencies should participate in developing and adhering to industry-wide transparency standards.
Workflow integration: Senior leaders meet across agency networks to discuss collaboration opportunities in AI leadership.

Crafting extensive AI governance is insufficient if it fails to address pressing ethical concerns. Agencies should review their existing governance and highlight opportunities for multi-perspective approaches.
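To make the review concrete, the multi-perspective framework above can be encoded as a simple checklist that flags which perspective levels a given policy has not yet addressed. This is a minimal illustrative sketch, not a real standard: the policy name, workflow steps, and `missing_levels` helper are all hypothetical examples drawn from the transparency policy described above.

```python
# Illustrative sketch: a governance policy mapped to the three perspective
# levels, with workflow-integration steps per level. Names are hypothetical.
GOVERNANCE = {
    "AI Transparency": {
        "platform": [
            "Legal review of platform terms before workflow integration",
            "Internal disclosure of AI-created work to line management",
        ],
        "consumer": [
            "Watermark AI-generated ads",
            "Link ads to a transparency microsite",
        ],
        "societal": [
            "Participate in industry-wide transparency standards",
        ],
    },
}

REQUIRED_LEVELS = {"platform", "consumer", "societal"}

def missing_levels(policy: str) -> set:
    """Return the perspective levels a policy has not yet addressed."""
    covered = {
        level
        for level, steps in GOVERNANCE.get(policy, {}).items()
        if steps  # a level counts only if it has concrete workflow steps
    }
    return REQUIRED_LEVELS - covered
```

Running `missing_levels` over each policy in an agency's existing governance would surface exactly the gaps the paragraph above asks reviewers to highlight.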

Part Two

AI Ads: Blurred Lines of Creativity and Ownership in the Age of Automation

The advertising industry is abuzz with the possibilities of generative AI. From crafting personalized ad experiences to composing catchy slogans and generating visuals, AI promises significant evolution in how brands connect with consumers. However, amidst the excitement lies a critical question: who owns the creative rights to AI-generated advertising content, and how can we ensure it’s free from bias?

Traditionally, advertising agencies held the copyright for their creative output. But with AI writing ad copy and scripts and even composing music, the lines of authorship become blurred. Copyright law, designed for a human-centric creative landscape, struggles to define the true author: Is it the individual setting the parameters, the AI that generates the content, or the agency that uses the AI tool?

The issue becomes even more complex with the rise of generative AI as a service (AIaaS):

  • Standardization Issues: Different AIaaS providers might have varying approaches to ownership, leading to inconsistencies. One agency’s AI-generated campaign might be lauded for its creativity, while another using a different platform might face copyright ambiguities.
  • Bias in Training Data: AI models inherit the biases present in the information they are trained on. If the training data contains hidden prejudices, the AI can perpetuate these biases in its generated content. An ad campaign crafted by AI could unintentionally reinforce gender stereotypes or perpetuate cultural misrepresentations.
  • Unethical Actors: Malicious actors could exploit AIaaS to generate discriminatory or offensive ad content, potentially causing social harm and reputational damage to brands.

To ensure the ethical and sustainable use of AI in advertising, a multi-pronged approach is necessary:

Developing Clear Ownership Frameworks

Establishing well-defined frameworks that consider the level of human input, the specific contribution of the AI, and the role of the AIaaS provider is crucial. This fosters transparency in attributing creations and prevents disputes surrounding ownership rights.

Collaboration Among Stakeholders

Open dialogue and collaboration between policymakers, AI developers, legal experts, and AIaaS providers are essential. This collaborative effort can lead to:

  • Standardized Practices Within AIaaS: Ensuring consistent application of ownership principles across different platforms.
  • Mitigating Bias in Training Data: Implementing robust data auditing procedures to identify and eliminate biases within AI training datasets.

Promoting Ethical Development and User Education

Encouraging the development of AI algorithms with embedded ethical considerations and fostering user education regarding the responsible use of AIaaS are vital steps. Educating advertising agencies about proper attribution practices, the importance of respecting intellectual property rights, and the potential for bias in AI-generated content is crucial.

This multi-pronged approach could pave the way for the ethical and sustainable growth of AI-powered advertising, fostering a future where creativity thrives alongside responsible innovation.

Rachael Chudoba and Giorgio Suighi
Rachael is a Senior Strategist at a top creative technology agency within Interpublic Group. Specializing in Digital Humanities, she merges literature, history, and philosophy with computational tools. Her research focuses on algorithmic updates and AI integrations to enhance citizenship behaviors on social media platforms, earning her recognition for AI thought leadership.

Giorgio is a seasoned Global Executive and Marketing Leader with 14+ years of experience, excelling in innovative solutions and strategic insights. His expertise spans statistical modeling, data science, and machine learning. Known as an industry trailblazer, Giorgio's hands-on leadership style inspires and mentors organizational managers, strategists, and analysts, setting high standards for success.