Anthropic updated its Usage Policy before launching Claude in Europe
Anthropic recently announced that it would expand the availability of Claude, its AI assistant and strong ChatGPT competitor, across the European continent. Starting May 14, the web experience at claude.ai, the Claude iOS app, and the Claude Team Plan became available to individuals and businesses. The launch follows the rollout of Anthropic's API in Europe earlier this year. The assistant is strongest in English but can also handle major European languages, including French, German, Spanish, and Italian. Individuals can access the web experience and the iOS app free of charge or subscribe to Claude Pro (€18 + VAT per month) to unlock Claude 3 Opus and other exclusive benefits. Teams and organizations with at least five users can subscribe to a Team Plan for €28 + VAT per user per month.
Before Claude became available across Europe, Anthropic announced a series of minor but impactful changes to its usage policy. Formerly called the Acceptable Use Policy, the document is now known simply as the Usage Policy, emphasizing that it covers all of Anthropic's customers, as relevant to each customer's use cases. Among the most notable changes is a clearer definition of what the company means when it prohibits the use of its products for political lobbying and campaigning. The language around what counts as interfering with an election is also more precise, naming actions such as targeting voting machines or obstructing the counting or certification of votes.
While the most general principles in the Usage Policy, now known as Universal Usage Standards, apply to all users, Anthropic has delineated additional requirements for use cases that involve sensitive domains, including healthcare, legal counsel, employment, housing, and professional journalism. Anyone using Anthropic's services to advise on these high-risk use cases must keep a human in the loop: any outputs must be reviewed by a qualified professional before they are finalized or made public. Moreover, high-risk applications must disclose to end users that their outputs are partially informed by Anthropic's services.
Anthropic requires users to be at least 18 years old. However, the company will allow organizations to integrate its API into products for minors if they implement additional safeguards, including disclosing that the product relies on AI to perform its function. Additionally, although the company has always restricted the use of its products for surveillance, persecution, and censorship, even on behalf of governments or law enforcement agencies, Anthropic has decided to expand the number of countries where law enforcement organizations can leverage its products for a narrow selection of use cases. In parallel, the company has detailed and updated its prohibited uses to include emotion-recognition systems and the analysis of biometric data to attempt to predict characteristics such as race or religion.
The updated Usage Policy was announced on May 10 and takes effect worldwide on June 6, 2024.