YouTube is the world’s largest video-sharing platform, home to millions of content creators. And with creators uploading millions of videos every day, it’s natural that some of them are using AI-generated content in 2023. However, the YouTube team has recently made a significant decision about how AI-generated videos will exist on the platform.
As announced in its latest blog post, the company will be amending its rules for publishing content on YouTube. Starting soon, YouTube creators will be required to disclose whether their content is AI-generated. The company will then use this disclosure to create and show a label to viewers, so if a user is watching a video with AI-generated parts, the label will state that the video contains “altered or synthetic content.” A similar notice will appear in YouTube Shorts, too.
The company further notes that ‘sensitive topics‘ like politics, health, elections, and more will display this label more prominently.
However, if a creator fails to provide the disclosure, their videos could still be flagged as AI-generated when detected by YouTube’s tools. YouTube will deploy AI technology for better content moderation, and it has said that newer threats will also be tackled with the help of generative AI. So, in essence, the company is using AI to catch AI!
When exactly are these rules coming into effect? YouTube says it will start rolling out the updates in the coming months. Viewers will then be informed if the content they are watching (Shorts or long-form videos) involves AI-generated material, and new tools will be added for creators to set disclosures where required.
Why Is AI-Generated Content a Big Problem for YouTube?
Many of the new creators on the rise rely on AI tools for content generation. Much of the storytelling in today’s Shorts and long-form YouTube videos is facilitated by AI content, and its misuse and misleading nature can cause many problems.
AI-generated content poses a big problem not only for original artists and content creators but also in real-life situations. According to YouTube, disclosure requirements and new content labels are ‘especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts, and public health crises, or public officials‘.
YouTube is also implementing a new privacy complaint process for users in the coming months. This is a welcome change: if someone’s face or voice has been used in an AI-generated video, they can now request that the content be removed from the platform. YouTube has also confirmed that it is “introducing the ability for our music partners to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice.”
Do you think these new rules from YouTube will help curb the ever-growing threat of AI misuse? At the very least, individuals and creators can now submit complaints about AI content that misuses their face or voice. Let us know your thoughts in the comments below!