Microsoft launches AI censorship tool to detect harmful speech and images

The technology behind Azure AI Content Safety is believed to have significantly improved since its tumultuous early days with Bing Chat, which was allegedly spreading politically incorrect content, such as vaccine 'misinformation.'

AP Photo/Ted S. Warren, File

Microsoft is bringing a new AI-powered moderation service, Azure AI Content Safety, to the digital stage, aiming to bolster safety across online spaces and communities by cracking down on speech it deems harmful.

This addition to the Azure AI product platform comes with an array of AI models capable of spotting "inappropriate" text and image content, with support for a wide range of languages including English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese.

These intelligent models will assign severity scores to flagged content, streamlining the moderation process by highlighting content that needs immediate attention.
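To illustrate how severity scores can streamline a moderation queue, here is a minimal sketch of the triage step. The 0–6 scale, threshold values, and function names are assumptions for illustration, not the service's documented schema:

```python
# Illustrative triage of moderation results by severity score.
# The 0-6 severity scale and the thresholds below are assumptions,
# not Azure AI Content Safety's documented behavior.

def triage(results, review_at=2, block_at=4):
    """Map each flagged item to an action based on its severity score."""
    actions = {}
    for item_id, severity in results.items():
        if severity >= block_at:
            actions[item_id] = "block"         # needs immediate attention
        elif severity >= review_at:
            actions[item_id] = "human_review"  # queue for a moderator
        else:
            actions[item_id] = "allow"         # below the action thresholds
    return actions

flagged = {"post-1": 0, "post-2": 3, "post-3": 6}
print(triage(flagged))
# {'post-1': 'allow', 'post-2': 'human_review', 'post-3': 'block'}
```

The point of scoring rather than binary flagging is exactly this kind of tiering: high-severity items can be blocked automatically while borderline ones go to a human reviewer.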

According to TechCrunch, Microsoft said: “We’ve spent over two years crafting solutions to combat harmful content in online communities. Recognizing the shortcomings of existing systems that overlooked cultural context and multilingual capacity, our new AI models are now multilingual, culturally aware, and provide lucid explanations for flagged or removed content.”

During Microsoft’s annual Build conference, Sarah Bird, the company's responsible AI lead, showcased Azure AI Content Safety as a commercial adaptation of the safety system underpinning Microsoft’s Bing chatbot and GitHub’s AI-assisted code-generating service, Copilot. She announced, “We’re launching it as a product available for third-party customers.”


Despite a recent setback with the dissolution of the ethics and society team within its larger AI organization, Microsoft has pressed on. Azure AI Content Safety, designed to shield against biased, sexist, racist, hateful, violent, and self-harm content, is now integrated with Azure OpenAI Service.

This service is intended to provide businesses with access to OpenAI’s technologies, coupled with enhanced governance and compliance features. However, Azure AI Content Safety is also flexible enough to integrate with non-AI systems like online communities and gaming platforms.

With pricing starting at $1.50 for every 1,000 images and $0.75 for every 1,000 text records, Azure AI Content Safety is shaping up to compete with other AI-driven censorship services like Perspective, built by Google's Jigsaw unit. It's a significant step forward from Microsoft's own previous tool, Content Moderator.
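At the quoted rates, a back-of-the-envelope cost estimate is straightforward. The usage numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Cost estimate at the quoted rates:
# $1.50 per 1,000 images and $0.75 per 1,000 text records.

IMAGE_RATE = 1.50 / 1000  # dollars per image
TEXT_RATE = 0.75 / 1000   # dollars per text record

def monthly_cost(images: int, text_records: int) -> float:
    """Return the estimated monthly bill in dollars."""
    return round(images * IMAGE_RATE + text_records * TEXT_RATE, 2)

# e.g. a community moderating 200,000 images and 1,000,000 posts a month:
print(monthly_cost(200_000, 1_000_000))  # 1050.0
```

So a mid-sized community at that hypothetical volume would pay on the order of $1,050 a month, with images dominating the per-item cost.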

