In a recent incident, popular YouTuber MrBeast, also known as Jimmy Donaldson, called out TikTok for its role in allowing a deepfake version of him to be featured in an advertisement promoting a fake iPhone giveaway. The ad, which circulated on the app, showed an AI-generated likeness of MrBeast offering iPhone 15s for $2 as part of a supposed 10,000-phone giveaway. Although the ad looked official, complete with MrBeast's logo and a verified checkmark, keen-eyed users noticed signs of AI manipulation, including a distorted voice and unnatural mouth movements.
Expressing his concern, MrBeast took to social media to confirm that the ad was fake and to question whether social media platforms are prepared to handle the rise of AI deepfakes, calling the situation a serious problem.
TikTok responded to the issue, stating that the ad was removed within a few hours of being posted and that the associated account was taken down for policy violations. The platform's ad policy explicitly prohibits synthetic media containing the likeness of a real person without their consent, and advertisers are responsible for obtaining that consent when using synthetic media featuring public figures.
AI-generated content, particularly deepfake versions of celebrities, has become a prevalent issue in online advertising. Recently, Tom Hanks and other notable figures have had their likenesses used without authorization in AI-generated promotional materials. TikTok acknowledges the widespread use of AI-generated content on its platform and has introduced tools for creators to label such content transparently. The company is also testing ways to automatically label AI-generated content to reduce the risk of misleading viewers.
As AI technology advances and becomes more accessible, the challenges posed by deepfakes are expected to escalate. The incident involving MrBeast highlights the need for greater transparency and disclosure when AI-generated content is used in advertising, so that users can distinguish between genuine and manipulated content.