The Intel Liftoff team and a group of AI startups recently completed the second development sprint on the Intel Developer Cloud (see the report on the first sprint here). During this sprint, the Intel Liftoff program team and participating companies concentrated on enabling specialized, LLM-powered features in their products. By leveraging Intel’s AI stack and support, hackathon participants are redefining standards for AI application development across industries.
Opening Doors for Members of Intel Liftoff
Through this virtual event, AI startups gained access to the Intel Developer Cloud, including Intel Data Center GPU Max Series 1550 GPUs and 4th Gen Intel Xeon Scalable processors. The goal of the hackathon was to explore the potential of emerging LLM-based applications.
The Four Most Innovative Ideas
Four creative teams were chosen from among the participants for the final showcase.
For the first time in Intel Liftoff, all hackathon participants received access to multiple GPUs and could distribute (data-parallel) training runs across them simultaneously. Thanks to the availability of the larger Data Center GPU Max 1550 with 128 GB of VRAM, participants were able to fine-tune LLMs of 3 to 13 billion parameters in a couple of hours. All four applications optimized their models on Intel GPUs using the Intel oneAPI software stack: the Intel oneAPI Base Toolkit, the Intel oneAPI Deep Neural Network Library (oneDNN), the Intel oneAPI DPC++/C++ Compiler with the SYCL* runtime, and Intel Extension for PyTorch.
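As a rough illustration of what this setup looks like in code, the sketch below shows a single data-parallel training step with PyTorch, Intel Extension for PyTorch, and the oneCCL bindings on XPU devices. The model choice, hyperparameters, and training text are illustrative, not any team’s actual configuration.

```python
# Minimal sketch of data-parallel LLM fine-tuning on Intel GPUs with PyTorch,
# Intel Extension for PyTorch (IPEX), and the oneCCL bindings. Launch one
# process per GPU (e.g. via mpirun or torchrun, which set the rank env vars).
import torch
import torch.distributed as dist
import intel_extension_for_pytorch as ipex        # enables the "xpu" device
import oneccl_bindings_for_pytorch  # noqa: F401  # registers the "ccl" backend
from transformers import AutoModelForCausalLM, AutoTokenizer

dist.init_process_group(backend="ccl")            # oneCCL collectives
device = torch.device(f"xpu:{dist.get_rank() % torch.xpu.device_count()}")

# The 3B model is used here only to keep the sketch small.
name = "openlm-research/open_llama_3b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# IPEX applies XPU-friendly optimizations (operator fusion, bf16 support).
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
model = torch.nn.parallel.DistributedDataParallel(model)

batch = tok(["Example fine-tuning text."], return_tensors="pt").to(device)
optimizer.zero_grad()
with torch.xpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
loss.backward()
optimizer.step()
```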
LLM Chatbots from Dowork Technologies
Dowork Technologies synthesized a fine-tuning dataset and used it to fine-tune OpenLLaMA-13B with LoRA on four PVC 1550 GPUs. Their technology enables businesses to safely build LLM chatbots and other applications on top of corporate data, acting as dynamic, conversational institutional memories for staff members—basically, a Private ChatGPT!
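For context, LoRA (Low-Rank Adaptation) freezes the base model’s weights and trains only small low-rank adapter matrices, which is what keeps fine-tuning a 13B model tractable on four GPUs. A minimal sketch using the Hugging Face peft library might look like the following; the hyperparameters and target modules are illustrative, not Dowork’s actual configuration.

```python
# Illustrative LoRA setup with Hugging Face peft (not Dowork's actual code).
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_13b", torch_dtype=torch.bfloat16
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative hyperparameters
    target_modules=["q_proj", "v_proj"],     # attention projections in LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the small adapter matrices train
```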
“We have been optimizing our 13-billion-parameter model on Intel hardware, with good results: it has given us the computing capacity required for such a comprehensive model. However, we saw a minor delay in text generation during inference. We are eager to work with Intel to overcome this obstacle and unlock even better performance in future solutions as we push our AI models to new heights,” said Mushegh Gevorgyan, founder and CEO of Dowork Technologies.
SQL Queries for Business Analytics: The Mango Jelly
To automate business analytics for marketers, Mango Jelly’s application needed new functionality for generating SQL queries. During this Intel Liftoff development sprint, this feature, crucial to their business plan, was built from the ground up, with impressive results. The team fine-tuned OpenLLaMA-3B on real customer data using Intel GPUs so that it could produce well-structured queries in response to marketing requests written in everyday language.
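The underlying pattern is text-to-SQL: the fine-tuned model maps a plain-language question, plus a schema description, to a query. A hypothetical inference call illustrates the task; the prompt format and schema are invented for illustration, and a real deployment would load Mango Jelly’s fine-tuned checkpoint rather than the base OpenLLaMA-3B shown here.

```python
# Hypothetical text-to-SQL inference (prompt format and schema invented).
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "openlm-research/open_llama_3b"  # stand-in for the fine-tuned model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = (
    "Schema: campaigns(id, channel, spend, signups, started_at)\n"
    "Question: Which channel had the lowest cost per signup last month?\n"
    "SQL:"
)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=80)
# Decode only the newly generated tokens, i.e. the SQL continuation.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```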
“As part of Intel Liftoff, we were able to optimize an open-source LLM on incredibly powerful hardware. It was astounding to see the Intel XPU perform at such high speeds. With the help of open-source models and cutting-edge hardware, this partnership gives us more flexibility over customization, tuning, and usage restrictions. It also highlights the suitability and readiness of our solution for enterprise use cases,” said Divya Upadhyay, co-founder and CEO of The Mango Jelly.
Enhancing Staffing with Terrain Analytics
Terrain Analytics provides a platform that helps businesses make better hiring decisions. Before the sprint, Terrain had built functionality to parse job postings using OpenAI’s Ada API, but ran into issues with cost and throughput. During the Intel Liftoff sprint, they fine-tuned an LLM for this specific use case, using Intel Data Center GPU Max (Ponte Vecchio) for training and a 4th Gen Intel Xeon Scalable processor (Sapphire Rapids) for inference. The resulting model outperformed the default Ada API, with noticeably improved throughput and significant cost savings.
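Splitting the pipeline this way, GPU for training and Xeon for serving, is a common pattern. A minimal sketch of bf16 CPU inference with Intel Extension for PyTorch might look like this; the checkpoint name and input text are placeholders, not Terrain’s actual model or data.

```python
# Sketch of bf16 CPU inference with Intel Extension for PyTorch on a
# 4th Gen Xeon; the checkpoint name is a hypothetical placeholder.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "path/to/finetuned-job-posting-parser"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

# Sapphire Rapids includes AMX units that accelerate bfloat16 matmuls.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    inputs = tok("Parse this job posting: Senior Data Engineer, remote, ...",
                 return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```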
Thanks to the incorporation of Intel technology, Terrain can now scale deep learning and large language models without running into computational limits. According to Nathan Berkley, a software engineer at Terrain Analytics, and Riley Kinser, co-founder and head of product, both of the models they developed showed better success metrics than those produced using OpenAI’s Ada model, and they processed data 15 times faster.
Making LLMs More Business-Friendly: Prediction Guard
Prediction Guard specializes in helping companies integrate LLMs into their operations, with a focus on security and viability. LLM outputs are often unstructured and can create compliance and reputational risks; Prediction Guard’s platform addresses these problems. During the sprint, they fine-tuned the Camel-5B and Dolly-3B models using data from two paying customers, demonstrating their ability to improve LLM outputs for business use.
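One common way to tame unstructured LLM output (not necessarily Prediction Guard’s implementation) is to validate each generation against a schema and retry or reject on failure. A minimal sketch, with a hypothetical schema:

```python
# Illustrative output validation for LLM responses (not Prediction Guard's
# actual implementation): parse the generation as JSON and check required
# fields before passing it downstream.
import json

REQUIRED_FIELDS = {"invoice_id", "total", "currency"}  # hypothetical schema

def validate(generation: str) -> dict:
    """Return the parsed record, or raise ValueError for non-compliant output."""
    try:
        record = json.loads(generation)
    except json.JSONDecodeError as err:
        raise ValueError(f"model output is not valid JSON: {err}") from err
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

# A caller might retry generation (or fall back) when validation fails.
print(validate('{"invoice_id": "A-17", "total": 99.5, "currency": "USD"}'))
```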
“After evaluating LLM data extraction on Intel Data Center GPU Max, Prediction Guard was able to show the client how to cut the current OpenAI-based transcription processing time from 20 minutes to under one minute,” said Daniel Whitenack, founder and CEO of Prediction Guard. For the potential client, this initiative could cut operational costs by $2 million per year.
Looking Ahead to the Intel Innovation Event
As we approach the Intel Innovation event in September, these accomplishments demonstrate the potential that AI startups can unleash through the Intel Liftoff program. By utilizing Intel’s AI stack and support, our program participants are setting new standards for AI application development.