
Ways for schools and universities to manage inappropriate web content


For six years, the Microsoft Digital Crimes Unit has been working on PhotoDNA technology – a way of detecting illegal online child sexual abuse photos. It is used by a wide range of social media and photo sharing companies, like Facebook, Twitter and Flipboard, to scan user-generated images as they are uploaded onto their web services. Any organisation that hosts user-generated content – video, images, text – carries a risk of users uploading offensive or illegal materials. PhotoDNA provides a way to deal with the most extreme examples, and there are associated services that provide ways for schools, TAFEs and universities to manage inappropriate web content being posted on their own services.

Often there’s a hard choice between allowing users to post content freely and requiring every piece of content to be approved before posting. Depending on your users, that can mean an ugly trade-off between risk and overwhelming moderation workload!

Examples of where this challenge exists are:

  • Services that provide facilities for students to comment on each other’s work
  • Enquiry forms that allow users to send requests or messages through to teachers/staff
  • Web portals built for parents or students to interact with teachers
  • Competition websites where people upload photos or videos

If you’re developing a web service or app that includes user-generated content or provides discussion capabilities – especially where users are anonymous or not easily traceable – here are two services worth a look: Content Moderator and PhotoDNA.



Content Moderator

First, let’s look at a service for managing inappropriate or offensive (rather than just illegal) content. Microsoft Content Moderator is a suite of intelligent screening tools that provides automated content moderation in the cloud, enhancing the safety of your user engagement and communication. Image, text, and video moderation can be configured to support your policy requirements by alerting you to potential issues such as pornography, racism, profanity, violence, and more. This is a cloud service running in Microsoft Azure, and it can be used by any organisation, including education customers and independent software developers.

It provides three core services:

 

  • Image Services: Fuzzy image matching against custom and shared blacklists even when file types are changed or images are otherwise altered. Also includes optical character recognition (OCR), face detection, and adult image detection.
  • Text Services: Detect profanity in more than 100 languages and match against custom and shared blacklists. The text service will also integrate with Azure Machine Learning Text Analytics for sentiment analysis.
  • Video Services: Video hashing technology matches video clips against both custom and shared blacklists. The video service will also soon integrate with Azure Media Services for closed-caption text generation.
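As a rough sketch of what a call to the text service looks like from code: the endpoint path, header name, and response shape below follow Microsoft’s public REST documentation, but treat them as assumptions to verify against the current docs, and the key and region are placeholders.

```python
import json
import urllib.request

# Placeholder values -- substitute your own Azure subscription key and region.
SUBSCRIPTION_KEY = "your-content-moderator-key"
ENDPOINT = "https://westus.api.cognitive.microsoft.com/contentmoderator"


def screen_text(text: str) -> dict:
    """POST a text snippet to the Content Moderator text screening API.

    Returns the parsed JSON response which, per the documentation,
    lists matched profanity under a "Terms" key.
    """
    req = urllib.request.Request(
        ENDPOINT + "/moderate/v1.0/ProcessText/Screen",
        data=text.encode("utf-8"),
        headers={
            "Content-Type": "text/plain",
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def flagged_terms(screen_response: dict) -> list:
    """Pull the matched terms out of a screening response."""
    return [t["Term"] for t in (screen_response.get("Terms") or [])]
```

Given a response like `{"Terms": [{"Index": 7, "Term": "badword"}], "Language": "eng"}`, `flagged_terms` returns `["badword"]`; when nothing matches, `"Terms"` is empty or null and the list is empty.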

Because this is a cloud service, it is much simpler to implement:

  1. Sign up and start playing with the sample code and live API on the portal
  2. Create the custom match lists you want alerts on, or use shared lists
  3. Call an API method with your content to invoke a check
  4. The Content Moderator service processes your content and generates labels to describe it (without ever storing your data)
  5. Your service receives API-based alerts for each content item matched
  6. You then use the alerts as signals to make content decisions – e.g. remove the content, send it to a human checker, or put it on hold for moderation

And it does all of this in real-time – as your user hits ‘submit’ or ‘send’.
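Step 6 above – turning alerts into decisions – can start out as a simple label-to-action lookup. A minimal sketch; the label names here are hypothetical stand-ins for whatever your own match lists and policies produce:

```python
# Map moderation labels to actions. These labels ("adult", "profanity", etc.)
# are illustrative examples, not fixed Content Moderator output -- substitute
# the labels your own configuration generates.
ACTIONS = {
    "adult": "remove",
    "custom-blacklist": "remove",
    "pii": "send-to-human-checker",
    "profanity": "hold-for-moderation",
}


def decide(labels):
    """Return the most severe action implied by a set of alert labels.

    Severity order: remove > send-to-human-checker >
    hold-for-moderation > publish.
    """
    severity = ["remove", "send-to-human-checker", "hold-for-moderation", "publish"]
    actions = {ACTIONS.get(label, "publish") for label in labels}
    for action in severity:
        if action in actions:
            return action
    return "publish"
```

Because `decide` picks the most severe matching action, `decide(["adult", "pii"])` returns `"remove"` – one serious signal is never outvoted by milder ones – while an empty alert list falls through to `"publish"`.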

Learn More

Read more about Content Moderator, and find out how you can use it

 



PhotoDNA

Although PhotoDNA has been in use for over six years, and is now used by more than 80 significant organisations, like Facebook, it has historically been time-consuming to implement, as organisations needed time, money and technical expertise to get it up and running in their own systems. Recently, we have built a new cloud service for PhotoDNA, using Microsoft Azure, that allows you to use the service through simple API calls. Here’s an example of how it’s used:

Kik, a chat network that’s popular among teens and young adults around the world, recently became the first company in Canada to deploy the PhotoDNA Cloud Service. Kik uses it to detect exploitive profile photos as they’re being uploaded, so the company can immediately remove them, report them to law enforcement and remove the user’s account.

“It is allowing us to identify and remove illegal content, so it’s been a huge plus from our perspective in helping keep our users safe,” says Heather Galt, Kik’s head of privacy.

The company does manually review some images, but with more than 200 million users globally, automation is a must. PhotoDNA allows Kik to identify known illegal images among a much greater number of photos, while in many cases sparing human moderators the disturbing task of reviewing them.

Another crucial advantage for Kik is that it doesn’t cause any delay for users sharing content.

It’s “so fast and does its work so efficiently that it’s been implemented with no negative impact whatsoever on the experience for users,” Galt says.
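In code, an upload-time gate like the one Kik describes might look roughly as follows. One big assumption up front: the `photodna_match` callable below stands in for the real PhotoDNA Cloud Service match call, whose actual endpoint and payload are provided to approved organisations, so nothing here is the real API.

```python
def handle_profile_photo(user_id, image_bytes, photodna_match):
    """Gate a profile-photo upload on a PhotoDNA-style match check.

    `photodna_match` is a callable standing in for the PhotoDNA Cloud
    Service (an assumption, not the real API): it takes image bytes and
    returns True when the image matches a known illegal image. On a
    match, the photo is rejected, the account is disabled, and a report
    record is produced for forwarding to law enforcement.
    """
    if photodna_match(image_bytes):
        return {
            "accepted": False,
            "account_disabled": True,
            "report_to_law_enforcement": {"user_id": user_id},
        }
    return {
        "accepted": True,
        "account_disabled": False,
        "report_to_law_enforcement": None,
    }
```

The point of the structure is the one Galt makes above: the check sits inline in the upload path, so clean photos pass straight through with no extra step for the user, and only a match triggers the removal-and-report branch.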

Learn More

Read the Microsoft News story about how PhotoDNA is protecting children and businesses

Find out who can use the PhotoDNA Cloud Service, how it works, and how to apply to use the service (PhotoDNA Cloud Services are free to qualifying organisations that are approved through an independent vetting service)
