Tuesday, December 3, 2024

NinjaTech AI & AWS: Next-Gen AI Agents with Amazon Chips


AWS and NinjaTech AI Collaborate to Release the Next Generation of Trained AI Agents Using Amazon's AI Chips.

NinjaTech AI, a Silicon Valley-based generative AI startup, aims to boost productivity for everyone by taking tedious tasks off their hands. Today, the firm announced the release of Ninja, a new personal AI that moves beyond co-pilots and AI assistants to autonomous agents.


AWS Trainium

NinjaTech AI builds, trains, and scales custom AI agents that handle complex tasks autonomously, such as research and meeting scheduling, using Amazon Web Services' (AWS) purpose-built machine learning (ML) chips, Trainium and Inferentia2, together with Amazon SageMaker, AWS's cloud-based machine learning service.

These AI agents bring the power of generative AI to everyday activities, saving users time and money. Using AWS's cloud capabilities, Ninja can handle multiple tasks at once, letting users assign new tasks without waiting for ongoing ones to finish.

Inferentia2

NinjaTech AI's founder and CEO, Babak Pahlavan, said, "Collaborating with AWS's Annapurna Labs has truly been a game-changer for NinjaTech AI. The flexibility and power of the Trainium and Inferentia2 chips for our reinforcement-learning AI agents far exceeded expectations: they integrate easily and can elastically scale to thousands of nodes via Amazon SageMaker."

"With up to 80% cost savings and 60% greater energy efficiency over comparable GPUs, these next-generation AWS-designed chips natively support the larger 70B variants of the most recent popular open-source models, such as Llama 3. Beyond the technology itself, the hands-on technical support from the AWS team has contributed greatly to the development of our deep technologies."


AI agents are built on large language models (LLMs) that are heavily customized and fine-tuned using a range of methods, including reinforcement learning, which gives the agents their accuracy and speed. Given the scarcity, high cost, and inelasticity of today's GPUs, successfully developing AI agents requires elastic, inexpensive chips designed specifically for reinforcement learning.

AWS has overcome this obstacle for the AI agent ecosystem with its purpose-built chips, which allow quick training bursts that scale to thousands of nodes as needed in each training cycle. Combined with Amazon SageMaker, which offers the option to use open-source models, AI agent training is now fast, versatile, and reasonably priced.

AWS Trainium chip

Artificial intelligence (AI) agents are quickly becoming the next wave of productivity technology, poised to revolutionize how people collaborate, learn, and work. According to Gadi Hutt, senior director at AWS's Annapurna Labs, "NinjaTech AI has made it possible for customers to swiftly scale fast, accurate, and affordable agents using AWS Inferentia2 and Trainium AI chips. We're excited to help the NinjaTech AI team bring autonomous agents to market, while also advancing AWS's commitment to empowering open-source ML and popular frameworks like PyTorch and JAX."

EC2 Trainium

NinjaTech AI trained its models on Trainium-powered Amazon EC2 Trn1 instances and serves them on Inferentia2-powered Amazon EC2 Inf2 instances. Trainium drives high-performance compute clusters on AWS that train LLMs faster, more affordably, and with less energy. Inferentia2 delivers up to 40% better price performance, making inference significantly faster and cheaper.
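As a rough illustration of the split described above (Trn1 for training, Inf2 for serving), the configuration sketch below shows how such a deployment might be described. The instance-type strings are real AWS SageMaker identifiers; the counts, labels, and purposes are illustrative assumptions, not NinjaTech AI's actual setup.

```python
# Hypothetical sketch of a Trainium/Inferentia2 split, assuming SageMaker-style
# instance types. The instance-type strings are real AWS names; counts and
# labels are illustrative assumptions only.
training_config = {
    "purpose": "LLM fine-tuning / reinforcement learning",
    "instance_type": "ml.trn1.32xlarge",  # Trainium-powered Trn1 instance
    "instance_count": 16,                 # elastic burst; scaled per training cycle
}

serving_config = {
    "purpose": "low-latency agent inference",
    "instance_type": "ml.inf2.8xlarge",   # Inferentia2-powered Inf2 instance
    "instance_count": 4,                  # scaled with request volume
}

for cfg in (training_config, serving_config):
    print(f'{cfg["purpose"]} -> {cfg["instance_type"]} x{cfg["instance_count"]}')
```

The point of the split is that training runs as short, elastic bursts on large Trn1 clusters, while serving stays on smaller Inf2 fleets sized to request traffic.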

AWS Trainium and AWS Inferentia2

To create truly innovative generative AI-based planners and action engines, which are essential for building cutting-edge AI agents, NinjaTech AI worked closely with AWS to speed up the process. Pahlavan continued, "Our decision to train and deploy Ninja on Trainium and Inferentia2 chips made perfect sense because we needed the most elastic and highest-performing chips with incredible accuracy and speed. Every generative AI company that wants access to on-demand AI chips with amazing flexibility and speed should be thinking about AWS."

Users can access Ninja at myninja.ai. Four conversational AI agents are now available through Ninja; they can assist with coding tasks, schedule meetings via email, conduct multi-step real-time online research, write emails, and offer advice. Ninja also makes it simple to compare outcomes side by side across leading models from companies like Google, Anthropic, and OpenAI. Finally, Ninja gives users nearly limitless asynchronous infrastructure, enabling them to work on multiple projects at once. As Ninja improves with each use, customers will become more effective in their daily lives.

About NinjaTech AI

NinjaTech AI is a Silicon Valley-based company that builds next-generation custom AI agents capable of handling complex tasks on their own. The company recently released Ninja, a personal AI product containing four AI agents that can schedule meetings, carry out research, give advice, and help developers with code debugging and ideas. NinjaTech AI spun out of SRI in late 2022; its founding team, which includes executives from Google, AWS, and Meta, has over thirty years of collective experience in AI.

Drakshi
Since June 2023, Drakshi has been writing articles on Artificial Intelligence for Govindhtech. She holds a postgraduate degree in business administration and is an Artificial Intelligence enthusiast.