Inappropriate Image Detection

Inappropriate data in scientific papers can result from honest error or from intentional falsification. One study estimated the prevalence of a specific type of problematic data, inappropriate image duplication, in Western blot figures published in biomedical journals.

Many of the problems are likely sloppy mistakes, but half or more appear deliberate, for example the use of a tool variously known as the rubber stamp or clone stamp to clean up background noise.

Detection

Detecting inappropriate images is a complex task that depends on many factors. It is not possible to identify every image that someone might deem offensive, but it is feasible to filter out images that are obviously sexually explicit or that depict abusive or menacing content.

Several approaches have been proposed for automatically detecting inappropriate images. Birhane and Prabhu [2020] hand-surveyed misogynistic and pornographic content in common computer-vision datasets, while Yang et al. combined an ML model for identifying obscene product imagery with a human review process to validate the results.

Using prompt-tuning to steer CLIP, the Q16 classifier exploits the implicit knowledge about inappropriate image content that CLIP acquired from its large, unfiltered training data. This information is documented by reporting the proportion of the image subset flagged as potentially inappropriate and by providing image annotations and automatically generated descriptions. Word clouds summarize the most prevalent concepts in the flagged set, e.g. gun-related images, posters, and naked body parts.
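As a rough illustration, the same idea can be approximated with off-the-shelf CLIP in a zero-shot setup. The sketch below assumes the Hugging Face transformers checkpoint openai/clip-vit-base-patch32, two hand-written prompts, and a local file example.jpg; Q16 itself learns continuous prompt embeddings rather than relying on hand-written text.

```python
# Zero-shot CLIP screening sketch (Q16 uses learned prompts; these are hand-written stand-ins).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a photo of benign, unobjectionable content",
    "a photo of inappropriate, offensive content",
]

image = Image.open("example.jpg")  # placeholder path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

# probs[1] is the share of probability mass assigned to the "inappropriate" prompt.
if probs[1] > 0.5:
    print(f"Flag for human review (score {probs[1]:.2f})")
```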

Moderation

Image moderation is a feature that lets you automatically detect and moderate images containing explicit nudity, violence, or visually disturbing content without relying solely on manual human review. Implemented with Amazon Rekognition, it saves your team time and effort, helps keep your application free of offensive content, and supports a safer online community for your users.

Using the image moderation API, you receive a JSON response listing the moderation labels detected in an image, each with a confidence score expressed as a percentage. Labels are organized in a hierarchical taxonomy: a specific label such as gore, drug use, or weapons is returned together with its top-level parent category (for example, Violence), so you can apply moderation rules at whichever level of granularity your policy requires.
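As a hedged sketch, a call to Rekognition's DetectModerationLabels operation via boto3 might look like the following; the region, bucket, and object key are placeholder assumptions.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Analyze an image stored in S3 (bucket and key are placeholders).
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "uploads/photo.jpg"}}
)

for label in response["ModerationLabels"]:
    # Each entry carries the label name, a 0-100 confidence score, and the name
    # of its parent category (empty for top-level labels such as Violence).
    print(f"{label['Name']} ({label['Confidence']:.1f}%) parent={label['ParentName']!r}")
```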

You can also define a minimum confidence level that a label must meet before it is returned. For example, raising the threshold above the default of 50 percent reduces the number of false positives, while lowering it surfaces more borderline content at the cost of additional false positives.
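Continuing the sketch above, MinConfidence applies to every label in the request, so any per-label policy has to be enforced client-side; the threshold values and label names below are illustrative assumptions.

```python
# Raise the threshold above the default of 50 to cut false positives, then apply
# a stricter client-side rule for a hypothetical set of high-risk labels.
HIGH_RISK = {"Explicit Nudity", "Graphic Violence Or Gore"}  # assumed policy

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "uploads/photo.jpg"}},
    MinConfidence=60,
)

flagged = [
    label for label in response["ModerationLabels"]
    if label["Name"] in HIGH_RISK or label["Confidence"] >= 80
]
print(f"{len(flagged)} label(s) require review")
```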

Remediation

Detection of inappropriate images is a complex task because different people have different views of what counts as inappropriate. As a result, image-analysis algorithms have difficulty classifying borderline images accurately, and their predictions often include false positives.

For example, a benign scientific figure such as a duplicated Western blot image can be misclassified as explicit content. Genuinely inappropriate images, by contrast, include National Socialist symbols (especially the swastika), persons wearing Ku Klux Klan uniforms, and offensive text or gestures on objects, such as profanity or a raised middle finger.

Amazon Rekognition provides a wide range of image and video analysis endpoints that can be used to identify inappropriate content. Its nudity and offensive sign detection is useful for businesses that need to moderate images for safety or compliance reasons.
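For video, the analysis is asynchronous. A minimal polling sketch of the StartContentModeration and GetContentModeration operations is shown below, with placeholder bucket and key names; a production setup would typically rely on the SNS completion notification instead of polling.

```python
import time
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Start an asynchronous moderation job for a video stored in S3 (placeholders).
job = rekognition.start_content_moderation(
    Video={"S3Object": {"Bucket": "my-example-bucket", "Name": "clips/upload.mp4"}}
)

# Poll until the job finishes.
while True:
    result = rekognition.get_content_moderation(JobId=job["JobId"])
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

# Each result pairs a timestamp (milliseconds into the video) with a moderation label.
for item in result.get("ModerationLabels", []):
    label = item["ModerationLabel"]
    print(f"{item['Timestamp']} ms: {label['Name']} ({label['Confidence']:.1f}%)")
```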

Reporting

While Google’s image search has become very good at filtering out results that violate its terms of service, inappropriate images and offensive pictures occasionally slip through. If you find an image that you feel goes against Google’s guidelines, you can report it.

You can also enable the SafeSearch option on Google’s image search to hide explicit images. This filters out most images that are sexually explicit, gory, or violent.

Fig. 6 shows the most common inappropriate concepts identified in the dataset, including misogynistic and pornographic content. Other flagged content includes images depicting National Socialist symbols (especially the swastika), persons wearing Ku Klux Klan uniforms, and various insults. The most severe category, however, is content involving children. In such cases, it is recommended that you contact the National Center for Missing and Exploited Children or the equivalent organization for your geographic area. This helps ensure that the victim is protected and cared for, and it also shields other users from encountering the content.
