Google announces a new AI content policy to moderate Android apps


Google has announced a new AI content policy to better regulate Android apps with built-in generative AI. The policy update will require AI apps to comply with strict guidelines aimed at protecting users from offensive AI-generated content. Google is also cracking down on spam notifications from apps.

Flagging offensive AI content

Starting in 2024, developers will have to include a way for users to report offensive content generated by their AI apps. “In line with Google’s commitment to responsible AI practices, we want to help ensure AI-generated content is safe for people and that their feedback is incorporated,” Google explains in a blog post. Developers will then use that feedback to improve their content filters.

The new AI content policy works on a similar principle to Google’s reporting system guidelines for user-generated content. Users should be able to report or flag content without leaving the app.
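The policy doesn't prescribe a specific API for this, so the following is a minimal Kotlin sketch of what an in-app reporting flow might look like. The endpoint URL, the ReportReason values, and the JSON field names are all hypothetical placeholders for a developer's own moderation backend.

```kotlin
// Hedged sketch of an in-app "report this output" flow. The endpoint, reasons,
// and field names are illustrative; the policy only requires that users can
// flag offensive AI output without leaving the app.
import java.net.HttpURLConnection
import java.net.URL

enum class ReportReason { SEXUAL_CONTENT, HARASSMENT, SCAM, ELECTION_MISINFO, OTHER }

data class ContentReport(
    val generationId: String,   // id of the AI output being flagged
    val reason: ReportReason,
    val comment: String? = null
)

// Sends the report to the developer's own moderation backend (hypothetical URL)
// so the feedback can later be used to tune content filters.
fun submitReport(report: ContentReport): Boolean {
    // Naive JSON built by hand to keep the sketch dependency-free; a real app
    // would use a proper serializer.
    val body = """{"generation_id":"${report.generationId}","reason":"${report.reason}","comment":"${report.comment ?: ""}"}"""

    val conn = URL("https://api.example.com/v1/content-reports").openConnection() as HttpURLConnection
    return try {
        conn.requestMethod = "POST"
        conn.doOutput = true
        conn.setRequestProperty("Content-Type", "application/json")
        conn.outputStream.use { it.write(body.toByteArray()) }
        conn.responseCode in 200..299
    } finally {
        conn.disconnect()
    }
}
```

In practice this would be triggered by a report button attached to each piece of generated output, keeping the whole flow inside the app as the policy requires.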

The policy applies to three types of generative AI apps: AI chatbots that primarily produce text; apps that generate images from text, voice, or other image prompts; and apps that use AI to fake the voices or videos of real people. In short, it covers every possible combination of text, voice, and image generation.

Google also outlined which apps are exempt from these rules. Apps that merely feature or showcase AI-generated content but don't have built-in generative features aren't subject to the policy, and neither are apps that summarize human-created content. Productivity apps that use AI to improve the user experience don't need dedicated reporting or filtering features either.

The upcoming update should help developers keep restricted material out of their apps, giving them more control over this evolving space. Google published examples of AI-generated content that violates its guidelines, including sexual deepfakes, faked voices used to run scams, deceptive election-related content, content that promotes bullying or sexual gratification, and malware code.

Fewer spam notifications from apps

The same policy update includes a section about full-screen intent notifications. These notifications are supposed to be high-priority alerts that need your urgent attention. Think about notifications that wake the screen for phone calls or alarms.

But many apps use these full-screen notifications for promotions and gentle reminders. So Google is introducing a special permission for full-screen intent notifications. By default, only apps that genuinely need the feature, such as phone and alarm apps, will have it enabled; everyone else will have to request the permission from the user.
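On Android 14, this corresponds to the USE_FULL_SCREEN_INTENT special permission, which apps can check before posting such alerts. Below is a minimal Kotlin sketch of how an app might handle the check and fall back gracefully; the channel id, notification id, and strings are illustrative.

```kotlin
// Hedged sketch: deciding whether to post a full-screen intent notification on
// Android 14+, where the USE_FULL_SCREEN_INTENT permission can be revoked.
import android.app.Activity
import android.app.NotificationManager
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings
import androidx.core.app.NotificationCompat

fun postUrgentNotification(context: Context, contentIntent: PendingIntent) {
    val nm = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager

    // On Android 14+ the special permission may be denied; check before using it.
    val canUseFullScreen = Build.VERSION.SDK_INT < Build.VERSION_CODES.UPSIDE_DOWN_CAKE ||
        nm.canUseFullScreenIntent()

    // "urgent_channel" is an illustrative channel id that must already exist.
    val builder = NotificationCompat.Builder(context, "urgent_channel")
        .setSmallIcon(android.R.drawable.ic_dialog_alert)
        .setContentTitle("Incoming call")
        .setContentText("Tap to answer")
        .setPriority(NotificationCompat.PRIORITY_HIGH)
        .setCategory(NotificationCompat.CATEGORY_CALL)
        .setContentIntent(contentIntent)

    if (canUseFullScreen) {
        // Only genuinely urgent events should launch the full-screen UI.
        builder.setFullScreenIntent(contentIntent, /* highPriority = */ true)
    }

    nm.notify(1001, builder.build())
}

// If the permission is missing and the app genuinely needs it, it can send the
// user to the system settings screen where the access can be granted.
fun requestFullScreenIntentAccess(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        val intent = Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT)
            .setData(Uri.parse("package:${activity.packageName}"))
        activity.startActivity(intent)
    }
}
```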

The policy update should make generative AI on Android safer and limit spam notifications that hurt your digital well-being and user experience.
