Sunday, October 13, 2024

Vector Search in Memorystore for Redis Cluster and Valkey


With the arrival of vector search earlier this year, Memorystore for Redis became a compelling platform for gen AI use cases such as retrieval-augmented generation (RAG), recommendation systems, and semantic search. Why? Its exceptionally low-latency vector search: a single Memorystore for Redis instance can search tens of millions of vectors at single-digit-millisecond latency. But what happens if you need to store more vectors than a single virtual machine can hold?

Google is introducing vector search on the new Memorystore for Redis Cluster and Memorystore for Valkey, which combine three exciting features:


1) Zero-downtime scalability (in or out);

2) Ultra-low-latency in-memory vector search;

3) Robust, high-performance vector search over millions or billions of vectors.

Vector support for these Memorystore products, now in preview, lets you scale your cluster to as many as 250 shards and store billions of vectors in a single instance. Indeed, a single Memorystore for Redis Cluster instance can perform vector search over more than a billion vectors at better than 99% recall with single-digit-millisecond latency. This scale enables demanding enterprise applications, such as semantic search over a worldwide corpus of data.
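To make this concrete, here is a minimal sketch of creating a vector index against a cluster endpoint, assuming redis-py and the RediSearch-style FT.CREATE syntax these commands follow; the endpoint, index name, dimensionality, and field names are all illustrative:

```python
from redis.cluster import RedisCluster

# Hypothetical discovery endpoint of a Memorystore for Redis Cluster instance.
r = RedisCluster(host="10.0.0.2", port=6379)

# Create an HNSW index over hashes whose keys start with "doc:".
# The 768-dim FLOAT32 "embedding" field is searched under cosine distance;
# "category" and "price" are declared for the hybrid filters shown later.
r.execute_command(
    "FT.CREATE", "doc_idx", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA",
    "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "768", "DISTANCE_METRIC", "COSINE",
    "category", "TAG",
    "price", "NUMERIC",
)
```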


Scalable in-memory vector search

Partitioning the vector index among the cluster’s nodes is essential for both performance and scalability. Memorystore uses local index partitioning: each node holds an index partition covering the fraction of the keyspace stored locally. Because the OSS cluster protocol already shards the keyspace uniformly, the index partitions end up roughly equal in size.

With this architecture, index build times for all vector indexes improve linearly as nodes are added. Adding nodes also improves search performance, linearly for brute-force search and logarithmically for Hierarchical Navigable Small World (HNSW) search, for a fixed number of vectors. Altogether, a single cluster can keep billions of vectors indexed and searchable while maintaining fast index build times and low search latencies at high recall.
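Continuing the sketch above, ingest and search might look like the following. Each write lands on the shard that owns the key’s hash slot, so each node indexes only its local share of the data, while a KNN query fans out across all index partitions (values here are random placeholders):

```python
import numpy as np

# Each HSET is routed to the node owning the key's hash slot; that node
# adds the vector to its local index partition.
for i in range(1000):
    vec = np.random.rand(768).astype(np.float32)
    r.hset(f"doc:{i}", mapping={
        "embedding": vec.tobytes(),
        "category": "news" if i % 2 else "blog",
        "price": i % 500,
    })

# A KNN query is fanned out to every index partition and the per-node
# candidates are merged into a single result set.
query_vec = np.random.rand(768).astype(np.float32)
results = r.execute_command(
    "FT.SEARCH", "doc_idx", "*=>[KNN 10 @embedding $q]",
    "PARAMS", "2", "q", query_vec.tobytes(),
    "DIALECT", "2",
)
```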

Hybrid queries

In addition to better scalability, Google is announcing support for hybrid queries on Memorystore for Valkey and Memorystore for Redis Cluster. Hybrid queries let you combine vector search with filters on tag and numeric fields, so Memorystore can answer complex queries that mix vector, tag, and numeric conditions.

These filter expressions also support boolean logic, so you can combine multiple fields to restrict search results to only the relevant matches. With this new functionality, applications can tailor vector search queries to their requirements, yielding considerably richer results than before.
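Building on the sketch above, a hybrid query could look like this; the filter syntax shown is the RediSearch-style form these commands follow, and the field values are illustrative:

```python
# The tag and numeric predicates pre-filter candidates, and only the
# survivors are ranked by the KNN vector stage. Predicates combine with
# boolean logic: juxtaposition is AND, and | expresses OR.
results = r.execute_command(
    "FT.SEARCH", "doc_idx",
    "(@category:{news} @price:[0 100])=>[KNN 10 @embedding $q]",
    "PARAMS", "2", "q", query_vec.tobytes(),
    "DIALECT", "2",
)
```

Filtering before the vector stage means the KNN search ranks only documents that already satisfy the tag and numeric conditions, rather than discarding nearest neighbors after the fact.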

Contributing to OSS Valkey

The Valkey key-value datastore has generated a lot of interest in the open-source community. As part of its commitment to making Valkey great, Google has coauthored a Request for Comments (RFC) and is collaborating with the open-source community to contribute its vector search capabilities to Valkey. An RFC is the first step in the community alignment process, and Google encourages feedback on its proposal and implementation. The main objective is to enable Valkey developers worldwide to build incredible next-generation AI applications with Valkey vector search.

The quest for fast, scalable vector search is over

With the addition of fast and scalable vector search on Memorystore for Redis Cluster and Memorystore for Valkey, alongside the capabilities already available on Memorystore for Redis, Memorystore now offers ultra-low-latency vector search across its most widely used engines. That makes Memorystore hard to beat for building generative AI applications that need reliable, consistently low-latency vector search. To experience the speed of in-memory search, get started today by creating a Memorystore for Valkey or Memorystore for Redis Cluster instance.
