Mistral has released batch and moderation APIs
Mistral AI has launched two new services: a moderation API that leverages a Ministral 8B fine-tune to classify text according to nine categories describing common harms, and a batch API that enables developers to process high volumes of data for 50% of the cost of equivalent synchronous API calls.
On Thursday, Mistral AI added two new services to its product roster: a moderation API and a batch processing API. According to Mistral, the moderation API is the same service that powers moderation on its Le Chat platform, uses a Ministral 8B fine-tune as a classifier, and offers two endpoints, one for raw text and one for conversational text. With this launch, prospective users can adopt a production-tested moderation service and tailor it to their applications' moderation needs.
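As a rough illustration of the raw-text endpoint, the sketch below posts a list of strings for classification. The endpoint path, model identifier, and request fields are assumptions based on Mistral's public documentation, not guaranteed values.

```python
import os

import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

# Assumed endpoint path for the raw-text moderation service.
MODERATION_URL = "https://api.mistral.ai/v1/moderations"

response = requests.post(
    MODERATION_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistral-moderation-latest",  # assumed model identifier
        "input": ["Example text to classify."],  # one or more raw-text strings
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```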
This multilingual moderation service can process content in 11 languages, including Arabic, Chinese, and Russian, and analyzes text across nine policy categories: sexual, hate and discrimination, violence and threats, dangerous and criminal content, self-harm, health, financial, law, and personally identifiable information (PII). For conversational text, the model powering the API classifies the last message within the context of the surrounding conversation, detecting common harms such as unqualified advice or exposure of PII.
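The sketch below shows how an application might use the conversational endpoint and act on the per-category results for the last message. The endpoint path and response shape (a boolean flag per category) are assumptions for illustration.

```python
import os

import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

# Assumed endpoint for conversational moderation; the model scores the
# last message in the context of the preceding turns.
CHAT_MODERATION_URL = "https://api.mistral.ai/v1/chat/moderations"

conversation = [
    {"role": "user", "content": "My knee hurts after running."},
    {"role": "assistant", "content": "Take double the usual dose of painkillers."},
]

resp = requests.post(
    CHAT_MODERATION_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistral-moderation-latest",  # assumed model identifier
        "input": [conversation],               # one or more conversations
    },
    timeout=30,
)
resp.raise_for_status()

# Assumed response shape: one result per conversation, with a boolean per
# policy category (sexual, health, pii, and so on).
result = resp.json()["results"][0]
flagged = [name for name, hit in result["categories"].items() if hit]
print("Flagged categories:", flagged or "none")
```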
Mistral's batch processing API is aimed at applications that prioritize data volume over synchronous responses, as is often the case with batch translation, data analysis, or feedback and sentiment analysis. It lets developers process high-volume requests at half the cost of equivalent synchronous API calls. The batch API is currently available for models served on Mistral's La Plateforme and will soon be available from its cloud partners. Each workspace is capped at 1 million ongoing batch requests.
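A typical batch workflow writes requests to a JSONL file, uploads it, and creates an asynchronous job whose results are retrieved later. The sketch below outlines that flow; the file and job endpoint paths, field names ("custom_id", "input_files", "purpose"), and model names are assumptions for illustration rather than a definitive reference.

```python
import json
import os

import requests

API_KEY = os.environ["MISTRAL_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1. Write one JSON request per line; "custom_id" (assumed field) lets you
#    match each result back to its request when the job finishes.
texts = ["Translate to French: Hello", "Translate to French: Goodbye"]
with open("batch_requests.jsonl", "w") as f:
    for i, text in enumerate(texts):
        f.write(json.dumps({
            "custom_id": str(i),
            "body": {"messages": [{"role": "user", "content": text}]},
        }) + "\n")

# 2. Upload the file (assumed files endpoint and "batch" purpose).
with open("batch_requests.jsonl", "rb") as f:
    upload = requests.post(
        "https://api.mistral.ai/v1/files",
        headers=HEADERS,
        files={"file": f},
        data={"purpose": "batch"},
        timeout=60,
    )
upload.raise_for_status()
file_id = upload.json()["id"]

# 3. Create the batch job (assumed jobs endpoint and fields); it runs
#    asynchronously, and results are downloaded from an output file once done.
job = requests.post(
    "https://api.mistral.ai/v1/batch/jobs",
    headers=HEADERS,
    json={
        "input_files": [file_id],
        "model": "mistral-small-latest",     # assumed model name
        "endpoint": "/v1/chat/completions",  # which API the batch targets
    },
    timeout=30,
)
job.raise_for_status()
print("Batch job created:", job.json()["id"])
```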