Ethical Challenges and Best Practices for Generative AI in Multimedia

Generative AI is rapidly transforming how we create and consume multimedia content, from AI-generated images and video to synthetic audio and deepfake technologies. While these tools offer powerful new forms of expression and efficiency, they raise a host of ethical questions. What follows is a discussion of the major areas of concern, including data sourcing, copyright, algorithmic bias, and consumer misuse, concluding with practical recommendations for ethical engagement.

Overview of Ethical Categories

Ethical problems associated with AI multimedia generally fall into four categories:
(i) Ethical questions of data sourcing
(ii) Legal questions involving copyright and licensing
(iii) Technical questions that surround the algorithms and their frequent reinforcement of biases (Francis 2022)
(iv) Consumer use of the models

Current, ongoing legal cases (ii) center primarily on what constitutes “public domain” and “fair use.” Getty Images is suing Stability AI for using millions of images from its library without permission to train Stable Diffusion (Chan and O’Brien 2025).

Consumer use (iv) of algorithmic models includes deepfakes aimed at misinformation, deepfakes for positive social awareness, and deepfakes (or non-deepfakes) as political satire and free speech. For example, satirical images targeting President Donald Trump labeled him the “TACO President” in response to his tariff policy. Because these AI-generated images are obviously fake, they are generally regarded as legitimate political satire and protected free speech.

Bias in Training Data and Output

Human history is biased, and the training data of AI models reflects that biased history, reinforcing stereotypes. AI-generated images of a “poor person” favor depicting a person of color. Prompting for “a person flipping burgers” favors an African-American female. AI is incapable of generating an image of “a visibly disabled person leading a meeting” but can generate an image of a visibly disabled person listening to a lecture. Prompting for a “group of wealthy individuals” overwhelmingly produces images of all-white groups. Prompting for software engineers and doctors produces only males, while prompting for nurses produces images of females. AI-generated images even reinforce cultural bias around inanimate objects: prompting for an image of the front door of a house almost always produces doors typical of North American houses (Hulick 2024; Baum and Villasenor 2024).

Deepfakes and the Erosion of Trust

Scientific evidence suggests celebrity endorsements greatly affect people’s behavior, from purchasing merchandise to making health decisions (Hoffman and Tan 2015). Generative AI is often used to create deepfakes of celebrities for a variety of purposes, from pornography to political satire. In February 2025, actress Scarlett Johansson issued a warning about the use of generative AI to create deepfakes. Her likeness, along with those of several other prominent Jewish actors, was used in an AI-generated deepfake video denouncing Ye’s (formerly Kanye West) antisemitic posts on X (formerly Twitter) and his sale of pro-Nazi clothing (a white T-shirt with a black swastika) on Shopify. Data from the United Kingdom’s Advertising Standards Authority suggest celebrity deepfakes account for a large percentage of online scams (Glynn 2025).

Deepfakes were created for decades before generative AI technology, primarily using tools such as Adobe Photoshop. Deepfake videos, images, and even phone calls were used during the 2024 election cycle with the goal of altering election results. AI-generated phone calls in two states targeted senior citizens, telling them that polling stations were closed due to militia threats. In Miami, multiple deepfake photos and videos went viral depicting ballots being dumped in the garbage (De Luce and Collier 2024). Other deepfake public-service videos released on election day depicted Rosario Dawson and other celebrities warning people not to be duped by AI-generated media (De Luce 2024).

The ability of generative AI to mimic reality is improving at a rapid pace. Google’s Veo 3 model is a current leader, capable of generating lifelike video and synchronized audio simultaneously.

Extreme cases of deepfakes involve middle school children using generative AI to create deepfake nudes of their classmates. These deepfakes are sometimes used for “sextortion.” The victims experience severe psychological trauma, in some cases resulting in suicide. A lack of digital literacy, unclear policy guidelines, and outdated legal statutes make preventing and handling these cases difficult. Such deepfakes clearly violate rights to privacy, dignity, and protection, but current law does not address whether the images constitute child pornography. Technology is expanding faster than digital literacy and training across multiple sectors (Baird 2025).

Impact on Artists

AI-generated multimedia (images, videos, music) is devastating artists’ careers. AI is trained on artists’ work largely without their knowledge or consent. Digital fantasy artist Greg Rutkowski’s name is used up to 150,000 times per month on Midjourney to generate images mimicking his characteristic style. Artists feel that seeing their name on the training-data list of a generative AI model means their career is essentially over (Miller 2025). Stable Diffusion was trained on the work of at least 2,000 artists and 12 million unaltered, copyrighted images without the artists’ knowledge or consent. Such pervasive appropriation of artists’ work to train AI has been called “artistic colonization” (Dhar 2023).

University of Chicago computer scientist Ben Zhao created the software tools Glaze and Nightshade, which allow artists to “poison” AI’s ability to read their images. Glaze adds an extra image layer, imperceptible to humans, that alters the style the AI perceives. Nightshade adds a layer that changes the content of what the AI sees. Glaze was downloaded 5 million times in its first month of release. Both apps are free (Miller 2025).
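
Glaze and Nightshade rely on carefully optimized adversarial perturbations aimed at the feature extractors of image models. The toy Python sketch below (assuming the Pillow and NumPy libraries) illustrates only the general concept they share: a pixel-level layer too faint for humans to notice. Plain random noise like this would not actually defeat a model, so treat it as an illustration of the idea, not a working cloak.

```python
# Toy illustration only: Glaze and Nightshade compute carefully optimized
# adversarial perturbations; this sketch merely adds low-amplitude noise
# to show the general idea of an "invisible" pixel-level layer.
import numpy as np
from PIL import Image

def add_imperceptible_layer(path_in: str, path_out: str, strength: float = 2.0) -> None:
    """Add a faint random perturbation (a hypothetical stand-in for a cloak)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.uniform(-strength, strength, img.shape).astype(np.float32)
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

add_imperceptible_layer("artwork.png", "artwork_cloaked.png")
```

At a strength of a few intensity levels out of 255, the altered pixels fall below the threshold of human perception, which is why artists can cloak their work without visibly degrading it.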

As generative AI becomes more sophisticated and accessible, the ethical stakes intensify. From reinforcing historical biases to undermining trust in media through deepfakes, the potential for harm must be matched with a commitment to transparency, accountability, and inclusivity. Developers, users, and institutions alike must adopt best practices that preserve both creative integrity and public trust in an increasingly algorithmic information ecosystem.

Best Practices and Guidelines

1. Citations
• Cite AI-generated media as you would any other figure, table, or data visual.

2. Transparency
• Do not remove the watermark from the visual media. This may be viewed as “unprofessional,” but leaving the watermark clearly designates the visual media as AI-generated.
• Disclose the tool(s) used to create the media.

3. DEI Prompting
• Hulick (2024) noted that using diversity terms in prompts reduces bias in AI outputs; a minimal prompt-augmentation sketch follows.
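
The sketch below shows one simple way to apply this advice in code. It is a minimal illustration: the descriptor list and the diversify_prompt helper are hypothetical, and any real deployment would tune the terms to the context and the model.

```python
import random

# Hypothetical descriptor list; tune to your context and model.
DIVERSITY_TERMS = [
    "of diverse ethnicities",
    "of varying ages",
    "of different genders",
    "including visibly disabled people",
]

def diversify_prompt(prompt: str, k: int = 2) -> str:
    """Append randomly chosen diversity descriptors to a base prompt."""
    return prompt + ", " + ", ".join(random.sample(DIVERSITY_TERMS, k))

# e.g. "a group of software engineers, of varying ages, of diverse ethnicities"
print(diversify_prompt("a group of software engineers"))
```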

4. Consent and Attribution
• Always seek informed consent from individuals whose likeness, voice, or work is used in training or output, especially in commercial or public-facing contexts.
• Attribute source inspiration (especially for artists or style mimicking) where feasible.

5. AI and Media Literacy for Audiences
• Encourage labeling or contextual framing of AI content in educational or public spaces (e.g., disclaimers in videos, alt-text for accessibility); see the labeling sketch below.
• Work with educators to develop awareness and AI literacy curricula.
• Train audiences to critically evaluate media authenticity.
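
One lightweight way to label AI-generated images is to embed a disclosure in the file’s metadata. The sketch below (Python with the Pillow library) writes illustrative, non-standard key names into a PNG’s text chunks; emerging standards such as C2PA content credentials offer more robust, tamper-evident labeling.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(path_in: str, path_out: str, tool: str) -> None:
    """Embed an AI-disclosure label in a PNG's text metadata."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # illustrative key, not a standard
    meta.add_text("ai_tool", tool)         # disclose the generating tool
    Image.open(path_in).save(path_out, pnginfo=meta)

label_ai_image("render.png", "render_labeled.png", tool="Stable Diffusion")
```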

6. Audit and Feedback Loops
• Establish regular bias audits of AI-generated outputs; a minimal audit sketch follows.
• Invite affected communities (particularly artists, marginalized users, and those in vulnerable professions) to provide ongoing feedback.
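
A bias audit can be as simple as generating many samples for a fixed prompt and tallying the demographic attributes that appear. In the sketch below, generate_image and classify_attributes are hypothetical stand-ins for your image model and an annotation step (automated or human); the tallying loop is the point.

```python
from collections import Counter
from typing import List

def generate_image(prompt: str):
    """Hypothetical stand-in for a text-to-image model call."""
    raise NotImplementedError("plug in your image-generation client")

def classify_attributes(image) -> List[str]:
    """Hypothetical stand-in for automated or human annotation."""
    raise NotImplementedError("plug in your annotation step")

def audit_prompt(prompt: str, n_samples: int = 100) -> Counter:
    """Tally how often each attribute appears across generated samples."""
    counts: Counter = Counter()
    for _ in range(n_samples):
        counts.update(classify_attributes(generate_image(prompt)))
    return counts

# A skewed result, e.g. Counter({"male": 93, "female": 7}) for the prompt
# "a doctor", signals a bias worth flagging to the model's developers.
```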

7. Policies and Guidelines
• Create policies and guidelines, tailored to your industry, that address all stakeholders.
▪ In education, for example, stakeholders include administration, faculty and staff, teachers, parents, and students throughout the entire school district.