Amazon Bedrock offers Anthropic’s latest model, Claude 2.1
AWS is pleased to announce that Anthropic’s Claude 2.1 foundation model (FM) is now available in Amazon Bedrock. Anthropic unveiled Claude 2.1, its newest model, last week. It delivers key capabilities for enterprises, including an industry-leading 200,000-token context window (twice the context of Claude 2.0), lower rates of hallucination, improved accuracy over long documents, system prompts, and a beta tool use feature for function calling and workflow orchestration.
Now that Claude 2.1 is available in Amazon Bedrock, you can build enterprise-ready generative AI applications using Anthropic’s more reliable and responsible artificial intelligence (AI) systems. The Anthropic Claude 2.1 model is now available in the Amazon Bedrock console.
Here are some key capabilities of the new Claude 2.1 model in Amazon Bedrock:
200,000-token context window: Enterprise applications demand larger context windows and more accurate outputs when working with long documents such as product guides, technical documentation, financial statements, or legal contracts. Claude 2.1 supports 200,000 tokens, equivalent to more than 500 pages of document content. When you submit large amounts of data to Claude, you can summarize, run Q&A, forecast trends, and compare and contrast multiple documents for tasks such as drafting business plans and analyzing contracts.
Strong accuracy upgrades: Claude 2.1 also delivers significant gains in accuracy compared to Claude 2.0: a 2x reduction in hallucination rates, 50 percent fewer hallucinations in open-ended conversation and document Q&A, a 30 percent reduction in incorrect answers, and a 3x to 4x lower rate of mistakenly concluding that a document supports a particular claim. Claude is increasingly able to recognize the limits of its knowledge and is more likely to demur than to hallucinate. With this improved accuracy, you can build more dependable, mission-critical applications for your customers and employees.
System prompts: Claude 2.1 now supports system prompts, a new feature that can improve Claude’s performance in a variety of ways, including stricter adherence to guidelines, rules, and instructions, as well as greater character depth and role consistency in role-play scenarios, particularly over longer conversations. This changes how you structure prompts to Claude, but not what you can include in them.
Tool use for function calling and workflow orchestration: Available as a beta feature, tool use lets you integrate Claude 2.1 with your existing internal processes, products, and APIs to build generative AI applications. Claude 2.1 intelligently selects which functions to call for a task and retrieves and processes data from other knowledge sources. It can translate natural language requests into structured API calls, search databases using web search and private APIs, and connect to product datasets to make recommendations and help customers complete purchases. This feature is currently limited to a small set of early-access partners, with public access coming soon.
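To illustrate the general function-calling pattern described above (this is not the actual beta tool-use interface, whose request and response formats have not been published; the `search_products` tool and the JSON shape here are hypothetical), an application typically parses a structured call emitted by the model and dispatches it to a local function:

```python
import json

# Hypothetical tool implementation: look up products in a local catalog.
def search_products(query, max_results=3):
    catalog = ["claude mug", "claude t-shirt", "bedrock sticker"]
    return [item for item in catalog if query in item][:max_results]

# Registry mapping tool names the model may emit to local functions.
TOOLS = {"search_products": search_products}

def dispatch(model_output):
    """Parse a structured call emitted by the model and run the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["function"]]
    return fn(**call["arguments"])

# Example: the model has translated "show me claude merchandise"
# into a structured call.
result = dispatch('{"function": "search_products", "arguments": {"query": "claude"}}')
print(result)
```

In a complete loop, the tool’s return value would be passed back to the model so it can compose a natural language answer.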
Claude 2.1 at work
To get started with Claude 2.1 in Amazon Bedrock, visit the Amazon Bedrock console. Choose Model access in the bottom-left pane, choose Manage model access in the top right, submit your use case, and request access to the Anthropic Claude model. Gaining access to the model can take several minutes. If you already have access to the Claude model, you don’t need to request access to Claude 2.1 separately.
To test Claude 2.1 in chat mode, choose Text or Chat under Playgrounds in the left menu pane. Then choose Anthropic, followed by Claude v2.1.
By choosing View API request, you can also access the model using code examples in the AWS SDKs and AWS Command Line Interface (AWS CLI). An example of an AWS CLI command is as follows:
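The following sketch invokes Claude 2.1 through the Bedrock runtime with the model ID anthropic.claude-v2:1; the prompt text and output file name are placeholders you would replace with your own:

```shell
aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-v2:1 \
  --body '{"prompt": "\n\nHuman: Summarize the key features of Claude 2.1.\n\nAssistant:", "max_tokens_to_sample": 300}' \
  --cli-binary-format raw-in-base64-out \
  invoke-model-output.txt
```

The response body, including the generated completion, is written to invoke-model-output.txt.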
You can use system prompt engineering techniques with the Claude 2.1 model, placing your inputs and documents before any questions that reference or make use of that content. Inputs can be natural language text, structured documents, or code snippets wrapped in <document>, <papers>, <books>, or <code> tags. You can also use conversational text, such as chat history, and Retrieval Augmented Generation (RAG) results, such as chunked documents.
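As a sketch of that structure, the helper below places a document before the question inside <document> tags, following Claude’s Human/Assistant prompt format; the system prompt wording is illustrative:

```python
def build_prompt(system_prompt, document, question):
    """Place the document ahead of the question, wrapped in <document> tags,
    following Claude's Human/Assistant prompt format."""
    return (
        f"{system_prompt}\n\n"
        f"Human: <document>\n{document}\n</document>\n\n"
        f"{question}\n\n"
        f"Assistant:"
    )

prompt = build_prompt(
    "You are an assistant that answers strictly from the supplied document.",
    "Claude 2.1 supports a 200,000 token context window.",
    "What is the maximum context window?",
)
print(prompt)
```

The resulting string can be passed as the "prompt" field of an InvokeModel request body.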
Claude 2.1 is now available in the US East (N. Virginia) and US West (Oregon) AWS Regions.
With on-demand mode, there are no time-based term commitments; you pay only for what you use. For text generation models, you are charged for every input token processed and every output token generated. Alternatively, you can choose the Provisioned Throughput option to meet your application’s performance requirements in exchange for a time-based term commitment.