safe-content-ai: Fast API for NSFW Image Detection

GitHub Stats

  • Stars: 54
  • Forks: 8
  • Language: Python
  • Created: 2024-04-22
  • License: MIT License

The ‘safe-content-ai’ project is a robust, efficient API for detecting Not Safe For Work (NSFW) images, making it a useful tool for content moderation on digital platforms. Built with Python, the FastAPI framework, the Transformers library, and TensorFlow, the API uses the Falconsai/nsfw-image-detection model to classify images. It keeps latency low by caching results keyed on the SHA-256 hash of the image data and by automatically using a GPU when one is available. The project is worth exploring for its ease of use, accuracy, and scalability, making it a valuable asset for maintaining a safe and compliant online environment.

The Safe Content AI project is designed to be fast and accurate while staying simple to operate. Here are its key features:

  • AI Model: Uses the Falconsai/nsfw-image-detection model for classification.
  • Caching: Results are cached by the SHA-256 hash of the image data, so re-submitting an identical image skips inference (see the sketch after this list).
  • Technologies: Built with Python, the FastAPI framework, the Transformers library, and TensorFlow; a GPU is used automatically when one is available.
  • Endpoints:
    • POST /v1/detect: Analyzes uploaded image files for NSFW content.
    • POST /v1/detect/urls: Analyzes images from provided URLs for NSFW content.
  • Deployment: Can be run using Docker or set up locally with Python 3.7+ and required libraries.
  • License: Licensed under the MIT License.
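
To make the model loading and caching behavior concrete, here is a minimal sketch of how such an endpoint could be wired together. It is not the repository's actual code: the Hugging Face pipeline call, the label names, and the in-memory cache structure are assumptions based on the feature list above.

```python
import hashlib
from io import BytesIO

from fastapi import FastAPI, UploadFile
from PIL import Image
from transformers import pipeline

app = FastAPI()

# Illustrative model load; the exact model id is an assumption.
# Transformers picks up a GPU automatically when one is available.
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

# In-memory cache keyed by the SHA-256 hash of the raw image bytes.
_cache = {}

@app.post("/v1/detect")
async def detect(file: UploadFile):
    image_bytes = await file.read()
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _cache:
        # Run the classifier only on a cache miss; identical bytes
        # hash to the same key and are answered from the cache.
        top = classifier(Image.open(BytesIO(image_bytes)))[0]
        _cache[key] = {
            "is_nsfw": top["label"] == "nsfw",  # label names assumed
            "confidence_percentage": round(top["score"] * 100, 2),
        }
    return _cache[key]
```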

The API provides responses in JSON format, including whether the image is NSFW and the confidence level of the prediction.
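
As an illustration, a successful response might look like the following; the exact field names are an assumption rather than something taken from the repository's documentation:

```json
{
  "is_nsfw": false,
  "confidence_percentage": 99.82
}
```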

In practice, the API fits a range of content moderation workflows. Here are some examples of how users can benefit from the repository:

  • Social Media Platforms: Integrate the API to automatically detect and flag NSFW content uploaded by users, ensuring a safer environment (a Python client sketch follows this list).
  • Online Marketplaces: Use the API to screen product images for inappropriate content before they are listed.
  • Docker Deployment: Deploy the API with a single Docker command, making it easy to get started.

    ```bash
    docker run -p 8000:8000 steelcityamir/safe-content-ai:latest
    ```
  • Image Upload: Users can upload image files to the /v1/detect endpoint to determine if the content is NSFW.

    ```bash
    curl -X POST "http://127.0.0.1:8000/v1/detect" \
         -H "Content-Type: multipart/form-data" \
         -F "file=@/path/to/your/image.jpeg"
    ```
  • URL Detection: Provide image URLs to the /v1/detect/urls endpoint for batch processing of multiple images.

    ```bash
    curl -X POST "http://127.0.0.1:8000/v1/detect/urls" \
         -H "Content-Type: application/json" \
         -d '{
               "urls": [
                 "https://example.com/image1.jpg",
                 "https://example.com/image2.jpg"
               ]
             }'
    ```
  • Local Setup: Clone the repository, set up a virtual environment, and install dependencies to run the API locally.

    ```bash
    git clone https://github.com/steelcityamir/safe-content-ai.git
    cd safe-content-ai
    python -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
    uvicorn main:app --reload
    ```
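
For programmatic integration, as in the social media and marketplace scenarios above, the same endpoints can be called from application code instead of curl. Below is a minimal client sketch using the requests library; the base URL matches the local examples above, and the exact response shape is an assumption:

```python
import requests

BASE_URL = "http://127.0.0.1:8000"  # local server from the examples above

# Screen an uploaded file, mirroring the curl /v1/detect example.
with open("/path/to/your/image.jpeg", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/v1/detect",
        files={"file": ("image.jpeg", f, "image/jpeg")},
    )
resp.raise_for_status()
print(resp.json())  # assumed shape: NSFW flag plus confidence

# Screen a batch of remote images, mirroring the /v1/detect/urls example.
resp = requests.post(
    f"{BASE_URL}/v1/detect/urls",
    json={"urls": [
        "https://example.com/image1.jpg",
        "https://example.com/image2.jpg",
    ]},
)
resp.raise_for_status()
print(resp.json())  # assumed shape: one result per submitted URL
```

A moderation workflow would typically call one of these endpoints at upload time and reject, blur, or queue the content for human review when the NSFW flag comes back true.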

By leveraging these features, users can effectively integrate NSFW image detection into their applications, enhancing content moderation and user safety.

In summary, the ‘safe-content-ai’ project provides a fast and accurate NSFW image detection API, well suited to content moderation on digital platforms. Key points:

  • Utilizes the Falconsai/nsfw-image-detection AI model and TensorFlow, leveraging GPU if available.
  • Caches results based on SHA-256 hash of image data.
  • Supports image uploads and URL detection via API endpoints.
  • Easy deployment using Docker or local installation with Python 3.7+.
  • Licensed under MIT License.

Its straightforward endpoints and caching design make it a practical building block for improving the efficiency and accuracy of content moderation across digital platforms.

To explore the project further, check out the original steelcityamir/safe-content-ai repository.

Content derived from the steelcityamir/safe-content-ai repository on GitHub. Original materials are licensed under their respective terms.