Monday, April 15, 2024

Spanner Powers Large-Scale Generative AI and Similarity Search

Spanner Features

Spanner, Google Cloud's fully managed, highly available distributed database service, offers relational semantics, a 99.999% availability SLA, and near-unlimited horizontal scalability for both relational and non-relational workloads. As data volumes grow and applications place heavier demands on their operational databases, customers need that scale. Google recently introduced preview support for exact nearest neighbor (KNN) search over vector embeddings, enabling enterprises to build generative AI applications at near-unlimited scale. Because Spanner combines all of these capabilities, you can run vector search directly on your transactional data without copying it to a separate database, keeping operations simple.

In this blog post, Google explains how vector search can improve generative AI applications and how Spanner's underlying architecture supports very large-scale vector search deployments. It also covers the operational advantages of using Spanner rather than a specialized vector database.

Vector embeddings and generative AI

Generative AI is enabling a wave of new applications, from personalized conversational virtual assistants to generating original content from a simple text prompt. Its foundation is pre-trained large language models (LLMs), which let developers with little machine-learning experience build gen AI apps with ease. However, because LLMs can hallucinate and produce false information, integrating them with operational databases and vector search helps ground gen AI applications in real-time, contextual, and domain-specific data, resulting in high-quality AI-assisted user experiences.
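The grounding pattern described above can be sketched in a few lines of plain Python: retrieve the stored items nearest to the user's question, then splice them into the LLM prompt. The tiny 2-D vectors, the `history` data, and the `build_prompt` helper below are all illustrative stand-ins, not Spanner or LLM APIs.

```python
# Minimal retrieval-augmented prompt assembly, assuming embeddings are
# already computed (toy 2-D vectors stand in for real embedding output).

def euclidean(a, b):
    """Euclidean distance between two same-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Chat history with a precomputed embedding per message (illustrative data).
history = [
    ("I'm saving for a house down payment.", [0.9, 0.1]),
    ("What's the weather like today?",       [0.1, 0.9]),
    ("Can you suggest a savings account?",   [0.8, 0.2]),
]

def build_prompt(question, question_embedding, k=2):
    # Exact KNN: rank every stored message by distance to the question,
    # then keep the k closest as grounding context for the LLM.
    ranked = sorted(history, key=lambda item: euclidean(item[1], question_embedding))
    context = "\n".join(text for text, _ in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Which product fits my savings goal?", [0.85, 0.15])
print(prompt)
```

The off-topic weather message ranks far from the query embedding and is excluded, so only savings-related context reaches the model.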

Suppose a financial institution offers a virtual assistant that helps customers with account-related inquiries, manages accounts, and recommends financial products suited to each customer's particular needs. In complex scenarios, a customer's decision-making may span many chat sessions with the virtual assistant. By running vector search over the conversation history, the assistant can retrieve the most relevant material, producing a high-quality, highly relevant, and informative chat experience.

Vector search uses vector embeddings, numerical representations of text, images, or video produced by embedding models, to help the gen AI application determine the most relevant information to include in LLM prompts, customizing and improving the LLM's responses. Vector search ranks embeddings by the distance between them: the closer two embeddings are in the vector space, the more similar their content.
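The distance intuition above can be made concrete with cosine distance, one common metric for comparing embeddings. This is a minimal pure-Python sketch using toy 3-dimensional vectors; real embedding models emit hundreds or thousands of dimensions.

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

query = [1.0, 0.0, 0.0]
doc_close = [0.9, 0.1, 0.0]   # points in nearly the same direction: similar content
doc_far = [0.0, 1.0, 0.0]     # orthogonal: unrelated content

print(cosine_distance(query, doc_close))  # small, close to 0
print(cosine_distance(query, doc_far))    # large, exactly 1.0 here
```

An exact KNN search simply computes such a distance from the query embedding to every candidate and keeps the k smallest.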

Virtually unlimited vector search scale with Spanner

Vector workloads such as the financial virtual assistant example above can grow to extreme scale when they serve a large number of customers. Large-scale vector search workloads can involve both a vast number of vectors (more than 10 billion, for example) and very high query rates (millions of queries per second). Unsurprisingly, many database systems struggle at this scale.

However, many of these searches are highly partitionable: each search is restricted to the data associated with a particular user. These workloads are well suited to Spanner KNN search because Spanner efficiently narrows the search space to deliver precise, fresh results at low latency. Thanks to its horizontally scalable design, Spanner supports vector search over trillions of vectors for highly partitionable workloads.

To keep applications simple, Spanner also lets you query and filter vector embeddings using SQL. With SQL, it is easy to combine conventional queries with vector search and to join vector embeddings with operational data. For instance, you can use secondary indexes to efficiently filter the rows of interest before running a vector search. Like any other query on your operational data, Spanner's vector search queries return fresh, real-time results as soon as transactions are committed.
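The "filter first, then search" pattern described above can be sketched in plain Python: restrict the candidate set with an indexable predicate (the role a secondary index plays in Spanner), then run exact KNN over only the survivors. The `rows` data and `knn_for_customer` helper are illustrative, not Spanner APIs.

```python
# Pure-Python sketch of partitioned exact KNN: filter by customer, then rank
# the remaining rows by similarity to the query embedding.

def dot(a, b):
    """Dot product; with normalized embeddings, higher means more similar."""
    return sum(x * y for x, y in zip(a, b))

# Illustrative table of chat messages with precomputed toy 2-D embeddings.
rows = [
    {"customer_id": 1, "text": "mortgage question",  "embedding": [0.9, 0.1]},
    {"customer_id": 1, "text": "card limit inquiry", "embedding": [0.2, 0.8]},
    {"customer_id": 2, "text": "mortgage question",  "embedding": [0.9, 0.1]},
]

def knn_for_customer(customer_id, query_embedding, k=1):
    # Step 1: the indexable filter shrinks the search space to one customer.
    candidates = [r for r in rows if r["customer_id"] == customer_id]
    # Step 2: exact KNN over the remaining rows only.
    candidates.sort(key=lambda r: dot(r["embedding"], query_embedding), reverse=True)
    return candidates[:k]

top = knn_for_customer(1, [1.0, 0.0])
print(top[0]["text"])
```

Because the filter runs before the distance computation, the cost of each search scales with one customer's data rather than the whole table, which is why highly partitionable workloads scale so well.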

Spanner offers resource efficiency and operational simplicity

Spanner's in-database vector search features also streamline your operational workflow by removing the cost and complexity of maintaining a separate vector database. Because vector embeddings are stored and managed like any other operational data in Spanner, they benefit from all of Spanner's features: 99.999% availability, managed backups, point-in-time recovery (PITR), security and access control, and change streams. Sharing compute resources between operational and vector queries improves resource utilization and reduces cost. Spanner's PostgreSQL interface supports these features as well, giving customers transitioning from PostgreSQL a familiar and portable interface.

Spanner also integrates with well-known AI development tools such as the LangChain Vector Store, Document Loader, and Memory integrations, making it simple for developers to build AI applications with the tools of their choice.


Vector search has drawn renewed attention with the rise of gen AI. With its near-unlimited scalability and support for KNN vector search, Spanner is well suited to handle your large-scale vector search requirements on the same platform you already rely on for demanding, distributed workloads.

Drakshi has been writing articles on Artificial Intelligence for govindhtech since June 2023. She holds a postgraduate degree in business administration and is an Artificial Intelligence enthusiast.

