NeuroPace Inc
About 50 million people worldwide suffer from epilepsy. NeuroPace is dedicated to improving the quality of life of people with epilepsy by minimising or eliminating their seizures. The company's RNS System, a responsive neurostimulation device, monitors brain activity to identify seizure precursors and delivers focused electrical stimulation to stop seizures.
The device also captures intracranial electroencephalogram (iEEG) data. To date, over 15 million recordings from over 5,000 patients have been gathered, making this the largest collection of ambulatory iEEG records available.
Using clinical trial data from the RNS System, NeuroPace's AI team created electrographic seizure classifier models, which were then refined through transfer learning to determine seizure onset times. The limited number of graphics processing units (GPUs) available in on-premises virtual machines (VMs) previously hindered machine learning (ML) training and slowed model optimisation.
To overcome this limitation, NeuroPace moved off on-premises VMs, scaled its ML workloads on Google Cloud, and adopted Vertex AI to improve training and hyperparameter tuning.
Vertex AI
Making use of the AI infrastructure on Google Cloud
Google Cloud's technology has greatly enhanced and accelerated NeuroPace's ML training capabilities. AlloyDB AI, a component of the AlloyDB for PostgreSQL database, can now search through more than a million iEEG records in milliseconds to find similar recordings, an operation that previously took minutes or hours. In addition, the combination of Vertex AI, GPUs, Compute Engine, and Google Cloud Storage has transformed NeuroPace's ML training procedures, improving scalability, automation, and efficiency.
Vertex AI, Google Cloud's AI development platform, supports the entire machine learning process, from data engineering through model training, deployment, and monitoring. With it, NeuroPace's AI team can now train models on a variety of GPUs, with L4 GPUs providing a more cost-effective option than on-premises resources. The team used the platform to build a cloud-native ML training system that combines GPUs and Vertex AI to achieve the scalability and efficiency it needs.
AlloyDB AI
AlloyDB AI patient similarity search
Finding electrophysiological characteristics that epilepsy patients have in common may help in the search for effective therapies. Using the built-in vector search capabilities in AlloyDB AI, NeuroPace has conducted research studies to find similar iEEG patterns within a large dataset of over 1 million time-series iEEG records. Using the IVFFlat and HNSW indexing techniques, it is now possible to search this dataset for comparable iEEG recordings in about 10 milliseconds.
Compared to standard PostgreSQL, AlloyDB AI makes it possible to store data embeddings in vector form directly in the database, making similarity searches simpler and faster. As a result, complex external processing pipelines are no longer necessary.
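Conceptually, each iEEG record is reduced to an embedding vector and a similarity query ranks stored records by a distance metric such as cosine distance. A minimal pure-Python sketch of exact k-nearest-neighbour (KNN) cosine search illustrates the idea; the dimensions and dataset here are illustrative stand-ins, not NeuroPace's actual data or schema:

```python
import numpy as np

def cosine_knn(query: np.ndarray, records: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k records most similar to `query` by cosine similarity."""
    # Normalise everything to unit length so a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    r = records / np.linalg.norm(records, axis=1, keepdims=True)
    sims = r @ q
    # Sort descending by similarity and keep the top k indices.
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
records = rng.standard_normal((1000, 128))              # stand-in for 1M+ iEEG embeddings
query = records[42] + 0.01 * rng.standard_normal(128)   # slightly perturbed copy of record 42
top = cosine_knn(query, records, k=3)
print(top[0])  # record 42 is the nearest neighbour
```

This brute-force scan is O(n) per query; index structures such as IVFFlat and HNSW trade a small amount of recall for sub-linear search time, which is how a million-record dataset can be queried in about 10 milliseconds.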
The disease management system of the future
Data from the NeuroPace RNS System may be used to better understand seizure patterns and triggers, which could help with the customisation and optimisation of epilepsy treatments.
By integrating Google Cloud's infrastructure with data from NeuroPace's RNS System, the project aims to create a comprehensive epilepsy disease management system that emphasises customised treatments and improved patient well-being.
BENEFITS
Generative AI apps built with PostgreSQL
Utilise open, standard technologies like LangChain and pgvector, along with the familiar PostgreSQL interface, to create generative AI applications.
Vector operations with high performance
Create vector embeddings from within your database and execute vector queries up to 10 times faster than standard PostgreSQL.
Cutting-edge generative AI models
Use conventional SQL queries to call models that run in Vertex AI from within your application, whether bespoke models you've created or Google models like Gemini.
Key features
Fast, pgvector-compatible vector search
When running vector queries, AlloyDB AI can outperform standard PostgreSQL by up to ten times. When enabled, it supports vectors with four times more dimensions and 8-bit quantization, which reduces index size threefold.
Your apps can use ANN (approximate nearest neighbour) or KNN (exact k-nearest neighbour) algorithms to quickly run similarity searches on complex data types such as text and images.
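To see why 8-bit quantization shrinks storage, note that replacing 4-byte float32 components with 1-byte int8 codes cuts the raw vector payload fourfold (index overhead makes the end-to-end index saving smaller). A hedged sketch of simple per-vector scalar quantization, not AlloyDB's internal scheme:

```python
import numpy as np

def quantize_int8(v: np.ndarray):
    """Scalar-quantize a float32 vector to int8 plus a per-vector scale factor."""
    scale = float(np.abs(v).max()) / 127.0
    q = np.round(v / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original vector."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
v = rng.standard_normal(1536).astype(np.float32)
q, scale = quantize_int8(v)

ratio = v.nbytes // q.nbytes          # raw storage shrinks 4x
err = float(np.max(np.abs(dequantize(q, scale) - v)))
print(ratio)   # 4
print(err < scale)  # rounding error is bounded by one quantization step
```

Distances computed on the reconstructed vectors are close to, but not identical to, the exact ones, which is the usual accuracy-for-space trade-off of quantized indexes.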
Vector representations in the database
With automated embeddings generation, you can quickly convert operational data such as text and images into vector embeddings using the AI model of your choice. Embeddings can be stored in AlloyDB and queried through a SQL interface.
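The pattern of keeping embeddings next to operational rows and ranking them with SQL can be sketched portably. The example below uses SQLite with a user-defined distance function as a stand-in for AlloyDB's native vector type and pgvector's distance operators, and the table name `ieeg_records` is illustrative, not a real schema:

```python
import sqlite3, json, math

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ieeg_records (id INTEGER PRIMARY KEY, embedding TEXT)")

def l2(a: str, b: str) -> float:
    """Euclidean distance between two JSON-encoded vectors."""
    va, vb = json.loads(a), json.loads(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(va, vb)))

# Register the distance function so ORDER BY can rank rows by similarity
# directly in SQL, standing in for a native vector distance operator.
conn.create_function("l2", 2, l2)

rows = [
    (1, json.dumps([0.0, 0.0])),
    (2, json.dumps([1.0, 1.0])),
    (3, json.dumps([0.1, 0.0])),
]
conn.executemany("INSERT INTO ieeg_records VALUES (?, ?)", rows)

query = json.dumps([0.08, 0.0])
nearest = conn.execute(
    "SELECT id FROM ieeg_records ORDER BY l2(embedding, ?) LIMIT 1", (query,)
).fetchone()[0]
print(nearest)  # record 3 is closest to the query vector
```

Because the embedding lives in the same database as the rest of the row, the similarity ranking is just another SQL clause, which is what removes the need for an external processing pipeline.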
Access to local or remote models and data
Access bespoke and pretrained models, including local models stored in AlloyDB and remote models hosted in Vertex AI. Using the data in AlloyDB, you can develop and optimise models before deploying them as Vertex AI endpoints.
Integration of LangChain
With LangChain integration, building AI apps that are more dependable, transparent, and accurate is simple. Use Vector Stores to enable semantic search, Document Loaders to load and store information from documents, and Chat Messages History to allow chains to remember past conversations.
Scalability, availability, and security at the enterprise level
As part of AlloyDB, AlloyDB AI gets access to the best features offered by both PostgreSQL and Google. Give your application greater scalability and performance, a 99.99% high-availability SLA inclusive of maintenance, automatic database failure detection and recovery, and extensive security and compliance.