Apriel Nemotron 15B LLM Developed by ServiceNow & NVIDIA

A 15B-Parameter Super Genius Developed by ServiceNow and NVIDIA Has Joined Your Service Teams. The open-source Apriel Nemotron 15B LLM was built with NVIDIA NeMo, the NVIDIA Llama Nemotron dataset, and ServiceNow domain data, and was trained on NVIDIA DGX Cloud.

ServiceNow and NVIDIA announced today, at ServiceNow’s annual Knowledge 2025 customer and partner event, that they are expanding their partnership to support a new class of intelligent AI agents throughout the workplace. This includes the release of Apriel Nemotron 15B, a new high-performance ServiceNow reasoning model created in collaboration with NVIDIA that weighs objectives, applies rules, and assesses relationships in order to draw conclusions or make decisions.

The open-source LLM is post-trained on data from NVIDIA and ServiceNow, which contributes to faster agentic AI, lower latency, and lower inference costs. Through the integration of select NVIDIA NeMo microservices, the companies also announced plans to accelerate data processing in ServiceNow Workflow Data Fabric. This will drive a closed-loop data flywheel process that improves model accuracy and delivers more individualised user experiences.

The Apriel Nemotron 15B reasoning model is an important advancement in the creation of small, enterprise-grade LLMs designed for real-time workflow execution. The model was trained with NVIDIA DGX Cloud on Amazon Web Services (AWS), utilising ServiceNow domain-specific data, NVIDIA NeMo, and the NVIDIA Llama Nemotron Post-Training Dataset. Packaged as an NVIDIA NIM microservice, it offers sophisticated reasoning capabilities in a more compact footprint, making it faster, more efficient, and more economical to run on NVIDIA GPU infrastructure.
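Because the model is delivered as an NVIDIA NIM microservice, it is typically reached through an OpenAI-compatible HTTP endpoint. The minimal sketch below shows what a call from application code could look like, assuming a local NIM deployment; the base URL and model identifier are illustrative assumptions, not values from this announcement.

```python
# Minimal sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible endpoint. The base URL and model name are assumptions
# made for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local NIM deployments often don't require a real key
)

response = client.chat.completions.create(
    model="servicenow/apriel-nemotron-15b",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": "You are an IT service-desk reasoning assistant."},
        {"role": "user", "content": "A VPN outage affects 200 users. Prioritise remediation steps and explain why."},
    ],
    temperature=0.2,
    max_tokens=512,
)

print(response.choices[0].message.content)
```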

Benchmarks show encouraging results for its size category, further supporting the model’s promise for agentic AI operations at scale. Its release comes as enterprise AI continues its ascent as a revolutionary tool that helps companies handle increasing complexity, manage macroeconomic unpredictability, and promote more intelligent, robust operations.

Additionally, ServiceNow and NVIDIA announced a new collaboration on a shared data flywheel architecture that merges ServiceNow Workflow Data Fabric with select NVIDIA NeMo microservices to enable continued model improvement and stronger AI agent performance. This integrated approach contextualises and curates enterprise workflow data to refine and optimise reasoning models, with safeguards in place to help ensure customers have choice over how their data is used and that it is processed in a secure and compliant manner. The result is a closed-loop learning process that increases model accuracy and adaptability, speeding the creation and deployment of highly customised, context-aware AI agents intended to boost business efficiency.

The announcement follows the April release of the NVIDIA Llama Nemotron Ultra model, which uses the same NVIDIA open dataset that ServiceNow utilised to create its Apriel Nemotron 15B model. Ultra is among the top open-source models for advanced math, coding, scientific reasoning, and other agentic AI tasks.

Greater Impact with a Smaller Model

The Apriel Nemotron 15B is designed to reason, draw conclusions, weigh objectives, and follow rules in real time. While still delivering enterprise-grade intelligence, it is smaller than some of the most recent general-purpose LLMs, which can run to over a trillion parameters. This results in faster responses and reduced inference costs.

The model was post-trained on NVIDIA DGX Cloud hosted on AWS, utilising high-performance infrastructure to speed up development. AI agents that can support thousands of concurrent enterprise activities require an AI model that is optimised for speed, efficiency, and scalability in addition to accuracy.

A Closed Loop for Continuous Learning

In addition to the model, ServiceNow and NVIDIA are launching a new data flywheel architecture that combines NVIDIA NeMo microservices, such as NeMo Customizer and NeMo Evaluator, with ServiceNow’s Workflow Data Fabric.

This configuration enables a closed-loop process that uses workflow data to tailor responses and increase accuracy over time. Guardrails ensure that customers retain control over how their data is used, keeping that usage secure and compliant.
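To make the loop concrete, the sketch below outlines one flywheel iteration: curate workflow feedback, customise the model, evaluate the candidate, and deploy it only if it clears a quality bar. Every function here is a hypothetical stand-in for the steps described above; none of the names correspond to documented NeMo Customizer, NeMo Evaluator, or Workflow Data Fabric APIs.

```python
# Illustrative closed-loop data flywheel. All helpers are hypothetical stand-ins
# for the curate -> customise -> evaluate -> deploy steps described in the article.
from typing import Dict, List


def collect_workflow_feedback() -> List[Dict]:
    # Stand-in: curated, consented interaction records from the workflow layer.
    return [{"prompt": "Reset a locked account", "accepted_resolution": "Guided unlock flow"}]


def fine_tune(base_model: str, records: List[Dict]) -> str:
    # Stand-in: a customisation job that returns a new model version identifier.
    return f"{base_model}+ft-{len(records)}rec"


def evaluate(model_id: str) -> float:
    # Stand-in: score the candidate on a held-out suite of workflow tasks.
    return 0.91


def deploy(model_id: str) -> None:
    # Stand-in: promote the candidate to serve production AI agents.
    print(f"Deployed {model_id}")


def flywheel_iteration(current_model: str, quality_bar: float = 0.90) -> str:
    """One loop pass: curate data, customise, evaluate, and deploy only if quality holds."""
    records = collect_workflow_feedback()
    candidate = fine_tune(current_model, records)
    if evaluate(candidate) >= quality_bar:
        deploy(candidate)
        return candidate
    return current_model  # keep the current model if the candidate misses the bar


flywheel_iteration("apriel-nemotron-15b")
```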

Scaling the Era of AI Agents

The partnership reflects a shift in enterprise AI strategy: businesses are moving from static models to dynamic, intelligent systems. It also marks a new milestone in the collaboration between ServiceNow and NVIDIA, which is advancing agentic AI across industries.

For companies, this translates into more responsive digital experiences, increased productivity, and quicker resolution times. For tech leaders, it’s a model that can scale with their needs while meeting today’s performance and cost requirements.

Availability

Apriel Nemotron 15B-powered ServiceNow AI Agents are expected to launch after Knowledge 2025. The model will serve as a foundation for ServiceNow’s agentic AI offerings and support its Now LLM services.
