With its latest update, Redis aims to solidify its position as a vector database for AI use cases
Redis, the company behind the popular in-memory data store used for everything from caching and vector storage to streaming engines and message brokers, recently announced several new products and updates, including the upcoming availability of Redis 8, the latest version of its Community Edition in-memory data store stack. Notably, the launch emphasizes Redis' ambition to become an essential component of the modern AI stack. The company is also leveraging generative AI itself with the introduction of an AI Copilot intended to help developers find useful information in Redis' documentation faster and to write code snippets for them.
To deliver on this potential for building generative AI applications, Redis has launched Redis for AI, a package bundling Redis' AI capabilities to unlock fast RAG, with Redis positioned as the "world’s fastest vector database", along with semantic caching, LLM and agentic memory, and feature stores. Redis for AI includes recipes, reference architectures, integrations, and packages such as RedisVL, langchain-redis, and llama-index-vector-stores-redis. It supports multiple data types and a range of environments, whether cloud, on-prem, or hybrid.
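Semantic caching, one of the capabilities bundled into Redis for AI, reuses a stored LLM response when a new prompt is semantically close to one already answered, saving a model call. The following is a minimal in-process sketch of the idea only; the class name, threshold, and vectors are invented for illustration, and a real deployment would use Redis' vector search with model-generated embeddings rather than a Python list.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class SemanticCacheSketch:
    """Toy semantic cache: return a stored response when a new prompt's
    embedding is close enough to a previously cached prompt's embedding."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def set(self, embedding, response):
        self.entries.append((embedding, response))

    def get(self, embedding):
        # Find the most similar cached prompt; miss if below the threshold.
        best, best_score = None, 0.0
        for cached_vec, response in self.entries:
            score = cosine_similarity(embedding, cached_vec)
            if score > best_score:
                best, best_score = response, score
        return best if best_score >= self.threshold else None

cache = SemanticCacheSketch(threshold=0.9)
cache.set([1.0, 0.0, 0.2], "Paris is the capital of France.")
# A near-identical query vector hits the cache; an unrelated one misses.
print(cache.get([0.98, 0.05, 0.21]))  # cached answer
print(cache.get([0.0, 1.0, 0.0]))     # None -> fall through to the LLM
```

The trade-off driving this pattern is that an embedding lookup is far cheaper than an LLM call, which is why Redis markets vector-backed semantic caching as a cost and latency optimization.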
In addition to Redis for AI, Redis 8, and the Redis Copilot, the company announced the upcoming public preview of Redis Flex, following Redis' acquisition of Speedb. Redis Flex can run on both DRAM and SSDs, making it more affordable than competing solutions or traditional Redis. According to the company, developers and organizations using Redis Flex will cut their caching costs by as much as 80%. Finally, Redis is also launching Redis Data Integration (RDI) as a private preview on Redis Cloud and making it generally available on Redis Software. By connecting to sources through a single API, RDI lets developers synchronize data from existing databases without requiring significant changes to their data pipelines.