safe-content-ai: Fast API for NSFW Image Detection
Project Overview
| GitHub Stats | Value |
|---|---|
| Stars | 54 |
| Forks | 8 |
| Language | Python |
| Created | 2024-04-22 |
| License | MIT License |
Introduction
The "safe-content-ai" project is a robust and efficient API for detecting Not Safe For Work (NSFW) images, making it a useful tool for content moderation on digital platforms. Built with Python, the FastAPI framework, the Transformers library, and TensorFlow, the API uses the Falconsai/nsfw-image-detection model to classify images. It also optimizes performance by caching results keyed on the SHA-256 hash of the image data, and it automatically uses a GPU when one is available. The project is worth exploring for its ease of use, accuracy, and scalability, making it a valuable asset for maintaining a safe and compliant online environment.
Key Features
The Safe Content AI project is a fast and accurate API designed for detecting Not Safe For Work (NSFW) images, ideal for content moderation on digital platforms. Here are its key features:
- AI Model: Uses the Falconsai/nsfw-image-detection AI model.
- Caching: Caches results based on the SHA-256 hash of image data.
- Technologies: Built with Python, FastAPI framework, Transformers library, and TensorFlow, which automatically utilizes GPU if available.
- Endpoints:
  - POST /v1/detect: Analyzes uploaded image files for NSFW content.
  - POST /v1/detect/urls: Analyzes images from provided URLs for NSFW content.
- Deployment: Can be run using Docker or set up locally with Python 3.7+ and required libraries.
- License: Licensed under the MIT License.
The API provides responses in JSON format, including whether the image is NSFW and the confidence level of the prediction.
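The SHA-256 caching mentioned above can be sketched roughly like this. This is an illustrative reconstruction of the idea, not the repository's actual code:

```python
import hashlib

# Illustrative in-memory cache, keyed by the SHA-256 hash of the image bytes.
_cache = {}

def cache_key(image_bytes):
    # Identical image data always produces the same digest, so a repeated
    # upload of the same file can skip model inference entirely.
    return hashlib.sha256(image_bytes).hexdigest()

def classify_with_cache(image_bytes, classify):
    # classify is the (expensive) model call; it runs at most once per
    # distinct image.
    key = cache_key(image_bytes)
    if key not in _cache:
        _cache[key] = classify(image_bytes)
    return _cache[key]
```

Hashing the raw bytes rather than the filename means renamed or re-uploaded copies of the same image still hit the cache.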
Real-World Applications
The "safe-content-ai" API lends itself to a range of content-moderation scenarios. Here are some practical examples of how users can benefit from this repository:
- Social Media Platforms: Integrate the API to automatically detect and flag NSFW content uploaded by users, ensuring a safer environment.
- Online Marketplaces: Use the API to screen product images for inappropriate content before they are listed.
Implementation
- Docker Deployment: Users can deploy the API with a single Docker command, making it easy to get started.

```shell
docker run -p 8000:8000 steelcityamir/safe-content-ai:latest
```
API Usage
- Image Upload: Users can upload image files to the /v1/detect endpoint to determine if the content is NSFW.

```shell
curl -X POST "http://127.0.0.1:8000/v1/detect" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@/path/to/your/image.jpeg"
```
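A successful call returns a JSON body with the verdict and the model's confidence. Assuming a response shape like {"is_nsfw": true, "confidence": 0.93} (the field names here are illustrative; check the repository for the exact schema), a caller might gate uploads with a simple threshold:

```python
import json

def should_flag(response_body, threshold=0.85):
    # Parse the API's JSON response and flag the image only when the model
    # reports NSFW with sufficient confidence. Field names are illustrative.
    data = json.loads(response_body)
    return bool(data["is_nsfw"]) and data["confidence"] >= threshold
```

A platform might quarantine flagged images for human review rather than rejecting them outright, since any fixed threshold trades false positives against false negatives.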
- URL Detection: Provide image URLs to the /v1/detect/urls endpoint for batch processing of multiple images.

```shell
curl -X POST "http://127.0.0.1:8000/v1/detect/urls" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": [
      "https://example.com/image1.jpg",
      "https://example.com/image2.jpg"
    ]
  }'
```
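For batch checks, the request body is a JSON object with a "urls" array. Assuming results come back in submission order (an assumption; check the repository for the exact response schema), a caller could build the payload and map each URL to its verdict like this:

```python
import json

def build_urls_payload(urls):
    # Request body for /v1/detect/urls: a JSON object with a "urls" array.
    return json.dumps({"urls": list(urls)})

def pair_results(urls, results):
    # Hypothetical helper: map each submitted URL to its returned verdict,
    # assuming the API preserves submission order in its response.
    return dict(zip(urls, results))
```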
Development
- Local Setup: Clone the repository, set up a virtual environment, and install dependencies to run the API locally.

```shell
git clone https://github.com/steelcityamir/safe-content-ai.git
cd safe-content-ai
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --reload
```
By leveraging these features, users can effectively integrate NSFW image detection into their applications, enhancing content moderation and user safety.
Conclusion
The "safe-content-ai" project offers a fast and accurate API for detecting NSFW images, ideal for content moderation on digital platforms. Key points include:
- Utilizes the Falconsai/nsfw-image-detection AI model and TensorFlow, leveraging GPU if available.
- Caches results based on SHA-256 hash of image data.
- Supports image uploads and URL detection via API endpoints.
- Easy deployment using Docker or local installation with Python 3.7+.
- Licensed under MIT License.
The project’s future potential lies in enhancing content moderation efficiency and accuracy across various digital platforms.
To explore further, check out the original steelcityamir/safe-content-ai repository.
Attributions
Content derived from the steelcityamir/safe-content-ai repository on GitHub. Original materials are licensed under their respective terms.