During the Intel Liftoff program’s AI hackathon, startups leveraged Intel Data Center GPU Max Series GPUs and 4th Gen Intel Xeon Scalable processors to explore the potential of large language models (LLMs) and build innovative applications. The startups had access to the Intel Developer Cloud, which provided the hardware and software stack they needed for their projects.
Three startups, Moonshot AI, SiteMana, and Selecton Technologies, were selected for the final showcase based on their results. They fine-tuned and deployed LLMs ranging from 3 to 7 billion parameters using Intel GPUs and the oneAPI software stack.
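The exact training and serving code differed from team to team, but the common starting point on this stack is Intel Extension for PyTorch, which exposes GPU Max hardware to PyTorch as the "xpu" device on top of oneAPI. A minimal environment check might look like the sketch below; the calls shown are standard PyTorch/IPEX APIs and an illustrative smoke test, not any team's actual code.

```python
import torch
import intel_extension_for_pytorch as ipex  # registers Intel GPUs as PyTorch's "xpu" device via oneAPI

# Confirm the Data Center GPU Max card is visible before fine-tuning or serving a model
print(torch.xpu.is_available())       # True once the GPU driver and oneAPI runtime are installed
print(torch.xpu.device_count())       # number of Intel GPUs on the node
print(torch.xpu.get_device_name(0))   # device string for the first GPU

# Quick smoke test: allocate a tensor on the Intel GPU and run a matmul there
x = torch.randn(2, 2, device="xpu")
print(x @ x)

# From here, any PyTorch model can be moved to the GPU with .to("xpu") and optionally
# passed through ipex.optimize() for Intel-specific operator optimizations.
```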
Moonshot AI focused on using LLMs to make predictions and reported strong results on Intel GPUs, with Intel’s software optimizations noticeably accelerating model training.
SiteMana used LLMs to automate e-commerce marketing. They built a model inspired by state-of-the-art chatbots, fine-tuned it to write personalized emails, and deployed and tested it smoothly on Intel GPUs.
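As a rough illustration of that use case, the sketch below generates a personalized follow-up email with an open 3B-parameter model running on an Intel GPU. The model checkpoint, prompt, and generation settings are stand-in assumptions; SiteMana’s actual model and pipeline are not public.

```python
import torch
import intel_extension_for_pytorch as ipex  # enables the "xpu" device for Intel GPUs
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dolly-v2-3b"  # assumed open checkpoint, used here only as an example
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("xpu").eval()

prompt = (
    "Write a short, friendly follow-up email to a shopper who viewed hiking boots "
    "but did not complete their purchase."
)
inputs = tokenizer(prompt, return_tensors="pt").to("xpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```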
Selecton Technologies developed an AI personal assistant for gamers built on LLMs. They fine-tuned the Dolly model with a LoRA training script on an Intel Data Center GPU Max GPU with 48 GB of VRAM, which gave them the GPU resources needed to validate their solution.
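A minimal sketch of that kind of LoRA fine-tuning setup is shown below, assuming the open databricks/dolly-v2-3b checkpoint and the Hugging Face PEFT library; the exact script and hyperparameters Selecton used are not public.

```python
import torch
import intel_extension_for_pytorch as ipex  # makes the Intel GPU available as "xpu"
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "databricks/dolly-v2-3b"  # assumed open base model, not Selecton's confirmed checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# LoRA keeps the multi-billion-parameter base frozen and trains only small low-rank adapter
# matrices, which fits comfortably within the GPU Max card's 48 GB of VRAM.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Dolly v2 is GPT-NeoX based; this is its attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters end up trainable

model = model.to("xpu")  # ready for a standard Hugging Face Trainer or custom loop on the Intel GPU
```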
The startups’ achievements demonstrate the potential of Intel’s hardware and software for building LLM-powered applications. The flexibility of the oneAPI software stack and the availability of Intel silicon at major cloud providers mean these models can be deployed seamlessly almost anywhere.
Intel Liftoff aims to partner with promising AI startups and actively shape the future of AI by encouraging innovation on Intel platforms. A technical blog detailing the achievements and insights from the program will be released soon, with further information on how others can replicate these successes.