AWS is pleased to announce that Meta's Llama 3 models are now generally available on Amazon Bedrock. With Meta Llama 3, you can build, experiment with, and responsibly scale your generative artificial intelligence (AI) applications. The latest Llama 3 models offer improved reasoning, code generation, and instruction following, making them well suited to a wide variety of use cases.
Get to know Meta Llama 3
Llama 3 8B
Llama 3 8B is ideal for edge devices and environments with limited computational power and resources, as well as for faster training times. The model excels at text summarization, text classification, sentiment analysis, and language translation.
Llama 3 70B
Llama 3 70B is ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications. The model excels at text summarization and accuracy, text classification and nuance, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and instruction following.
Advantages
- More than 1 million human annotations: Llama Chat, the fine-tuned model, uses more than 1 million human annotations as well as publicly available instruction datasets.
- Pretrained on trillions of tokens: Llama models are pretrained on trillions of tokens from publicly available online data sources to improve their understanding of linguistic nuance.
- More than 1,000 hours of red teaming: The fine-tuned model went through more than 1,000 hours of red teaming and annotation to help ensure model performance while maintaining safety.
- No infrastructure management: Amazon Bedrock is the first public cloud service to offer a fully managed Llama API. Organizations of all sizes can use Llama models in Amazon Bedrock without having to manage the underlying infrastructure.
Get to know Llama
Amazon Bedrock is the first public cloud service to offer a fully managed API for Llama, Meta's next-generation large language model (LLM). Organizations of all sizes can now access Llama models in Amazon Bedrock without having to manage the underlying infrastructure, so you can focus on what you do best: building your AI applications. The collaboration between Meta and Amazon is an example of collective innovation in generative AI, and the two companies are working together to push the boundaries of what is possible.
Use cases
Meta's Llama models are accessible, open large language models designed for developers, researchers, and businesses to build, experiment with, and responsibly scale generative AI ideas. Llama is a foundational part of an ecosystem intended to foster innovation in the global community.
Model versions
Llama 3 8B
Ideal for edge devices and environments with limited computational power and resources, as well as for faster training times.
Maximum tokens: 8,000
Languages: English
Supported use cases: text summarization, text classification, sentiment analysis, and language translation.
Llama 3 70B
Ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications.
Maximum tokens: 8,000
Languages: English
Supported use cases: language modeling, dialogue systems, text classification and nuance, text summarization and accuracy, sentiment analysis and nuanced reasoning, and instruction following.
Llama 2 13B
Fine-tuned model in the 13B parameter size. Ideal for smaller-scale tasks such as text classification, sentiment analysis, and language translation.
Maximum tokens: 4,000
Languages: English
Supported use cases: assistant-like chat
Llama 2 70B
Fine-tuned model in the 70B parameter size. Ideal for larger-scale tasks such as language modeling, text generation, and dialogue systems.
Maximum tokens: 4,000
Languages: English
Supported use cases: assistant-like chat
According to Meta's Llama 3 announcement, the Llama 3 model family is a collection of pre-trained and instruction-tuned large language models (LLMs) in 8B and 70B parameter sizes. These models have been trained on over 15 trillion tokens of data, a training dataset seven times larger than that used for the Llama 2 models and including four times more code, and they support an 8K context length that doubles the capacity of Llama 2.
Amazon Bedrock now offers these two new Llama 3 models, further expanding your model choice. With them, you can quickly experiment with and evaluate even more top foundation models (FMs) for your use case:
- Llama 3 8B is ideal for edge devices and systems with limited computational capacity. The model excels at text summarization, text classification, sentiment analysis, and language translation.
- Llama 3 70B is ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications. The model excels at text summarization and accuracy, text classification and nuance, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and instruction following.
Meta is also currently training additional Llama 3 models with more than 400B parameters. These 400B models will have new capabilities, including multimodality, support for multiple languages, and a much longer context window. Once released, they will be ideal for content creation, conversational AI, language understanding, research and development (R&D), and enterprise applications.
Llama 3 models in action
To get started with Meta's models, go to the Amazon Bedrock console and choose Model access in the bottom left pane. To access the latest Llama 3 models from Meta, request access separately for Llama 3 8B Instruct or Llama 3 70B Instruct.
To test the Meta Llama 3 models in the Amazon Bedrock console, choose Text or Chat under Playgrounds in the left menu pane. Then choose Select model, select Meta as the category, and choose Llama 3 8B Instruct or Llama 3 70B Instruct as the model.
By choosing View API request, you can also access the models using code examples for the AWS Command Line Interface (AWS CLI) and AWS SDKs. You can use model IDs such as meta.llama3-8b-instruct-v1:0 and meta.llama3-70b-instruct-v1:0.
By using the code examples for Amazon Bedrock provided with the AWS SDKs, you can build applications in a variety of programming languages.
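As a minimal sketch of what such an application can look like, the following Python example invokes Llama 3 8B Instruct through the Amazon Bedrock Runtime using the AWS SDK for Python (boto3). The request body fields (prompt, max_gen_len, temperature, top_p) and the generation field in the response reflect the Llama inference format documented for Bedrock at the time of writing; verify the exact parameter names, model ID, and Region against the current Bedrock documentation before using this in an application.

```python
import json

import boto3

# Create a Bedrock Runtime client in a Region where the Llama 3 models are offered.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body for a Llama 3 model on Amazon Bedrock. The field names follow the
# Llama inference parameters documented for Bedrock; confirm them in the current docs.
body = {
    "prompt": "Explain in two sentences what Amazon Bedrock is.",
    "max_gen_len": 256,
    "temperature": 0.5,
    "top_p": 0.9,
}

response = bedrock_runtime.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",
    body=json.dumps(body),
)

# The response body is a JSON document; for Llama models the generated text is
# returned in the "generation" field.
result = json.loads(response["body"].read())
print(result["generation"])
```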
These Llama 3 models can be applied to a range of applications, including sentiment analysis, language translation, and question answering.
You can also use the Llama 3 instruct models, which are optimized for dialogue use cases. The input to the instruct model endpoints is the prior history between the user and the chat assistant, so you can ask questions that build on the conversation so far and provide a system configuration, such as a persona, that defines the chat assistant's behavior.
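As an illustrative sketch, the snippet below shows one way to turn a system persona and a prior chat history into a single prompt string for a Llama 3 instruct model. The special tokens follow Meta's published Llama 3 instruct prompt template; check the Llama 3 model card and the Bedrock documentation for the authoritative format. The helper name build_llama3_prompt and the example persona are illustrative only, not part of any SDK.

```python
def build_llama3_prompt(system: str, history: list[tuple[str, str]], user_message: str) -> str:
    """Assemble a Llama 3 instruct prompt from a system persona and chat history.

    `history` is a list of (user_turn, assistant_turn) pairs from the prior
    conversation. The special tokens follow Meta's Llama 3 instruct template.
    """
    prompt = "<|begin_of_text|>"
    prompt += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    for user_turn, assistant_turn in history:
        prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{user_turn}<|eot_id|>"
        prompt += f"<|start_header_id|>assistant<|end_header_id|>\n\n{assistant_turn}<|eot_id|>"
    # Add the new user question and cue the model to answer as the assistant.
    prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


# Example: a persona plus a follow-up question that depends on the earlier exchange.
prompt = build_llama3_prompt(
    system="You are a concise travel assistant.",
    history=[("What is the capital of France?", "The capital of France is Paris.")],
    user_message="What is a good time of year to visit it?",
)
```

The resulting string can then be passed as the prompt field in a request body like the one shown earlier.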
Now available
Meta Llama 3 models are available today in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) AWS Regions. Check the full Region list for future updates. To learn more, visit the Llama in Amazon Bedrock product page and the pricing page.
Meta Llama 3 in Vertex AI Model Garden
Google Cloud has also announced that Meta Llama 3 is now available in Vertex AI Model Garden. Like its predecessors, Llama 3 is freely licensed for many research and business uses. Llama 3 is offered as both pre-trained and instruction-tuned models and comes in two sizes, 8B and 70B.