Update: Pinecone on Azure is now generally available. Sign in and start building today!
Microsoft Azure regions are coming to Pinecone, starting with early access at the end of July.
The Pinecone vector database is a key component of the AI tech stack. It helps companies solve one of the biggest challenges in deploying Generative AI solutions (hallucinations) by allowing them to store, search, and retrieve the most relevant information from company data, then send that context to Large Language Models (LLMs) with every query. Users with search and generative AI solutions already on Azure, or leveraging the Azure OpenAI Service, can now easily add Pinecone to their AI stack to ensure relevant, accurate, and fast responses from their applications.
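As a rough sketch of what that retrieval flow can look like, here is a minimal example using the `pinecone` and `openai` Python clients; the resource URL, API keys, index name, and deployment names are illustrative placeholders, not real values:

```python
import openai
import pinecone

# Configure the Azure OpenAI Service client (placeholder values).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AZURE-OPENAI-KEY"

# Connect to a Pinecone index (index name is a placeholder).
pinecone.init(api_key="YOUR-PINECONE-KEY", environment="eastus-azure")
index = pinecone.Index("company-docs")

query = "What is our parental leave policy?"

# 1. Embed the user query with an Azure OpenAI embedding deployment.
embedding = openai.Embedding.create(
    engine="text-embedding-ada-002",  # your embedding deployment name
    input=query,
)["data"][0]["embedding"]

# 2. Retrieve the most relevant company data from Pinecone.
results = index.query(vector=embedding, top_k=3, include_metadata=True)
context = "\n".join(match["metadata"]["text"] for match in results["matches"])

# 3. Send the retrieved context to the LLM alongside the question.
answer = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # your chat deployment name
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": query},
    ],
)
print(answer["choices"][0]["message"]["content"])
```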
Running Pinecone on Azure also enables our customers to achieve:
- Performance at scale: Having Pinecone closer to the data, applications, and models means lower end-to-end latencies for AI applications.
- Faster, simpler procurement: Skip the approvals needed to integrate a new solution, and start building right away with a simplified architecture.
- Enterprise readiness: Comply with internal security and infrastructure requirements by running on your preferred cloud provider.
Early access for Pinecone on Azure will begin in late July in the “eastus-azure” region for Standard and Enterprise users. The preview will support both our p1 pods for performance-optimized indexes and our s1 pods for lower-cost, storage-optimized indexes.
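Once enrolled, creating an index in the new region works the same as in any other Pinecone environment. Here is a minimal sketch with the Python client; the API key, index name, and dimension are placeholders:

```python
import pinecone

# Point the client at the Azure early-access region (API key is a placeholder).
pinecone.init(api_key="YOUR-PINECONE-KEY", environment="eastus-azure")

# A performance-optimized index on p1 pods; swap in pod_type="s1.x1"
# for a lower-cost, storage-optimized index.
pinecone.create_index(
    name="my-azure-index",  # placeholder index name
    dimension=1536,         # matches text-embedding-ada-002 vectors
    metric="cosine",
    pod_type="p1.x1",
)
```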
Pinecone is already available on other major cloud providers and marketplaces, and will become available on the Azure Marketplace later this year. Sign up for the Azure private preview, and check out our example of using Pinecone with Azure OpenAI Service.