
Build knowledgeable AI

With its vector database at the core, Pinecone is the leading knowledge platform for building accurate, secure, and scalable AI applications.

Pinecone helps power AI for the world’s best companies

Handshake
Frontier Meds
Cisco
Course Hero

“At Cisco, we’re not only integrating generative AI capabilities throughout products for our customers, we’re also enabling our employees with the most cutting-edge technologies like Pinecone. By leveraging Pinecone’s industry-leading vector database on Google Cloud, our enterprise platform team built an AI assistant that accurately and securely searches through millions of our documents to support our multiple orgs across Cisco.”

Sujith Joseph

Principal Engineer, Enterprise AI & Search at Cisco

Customers

Start and scale seamlessly

Sign up and start building in seconds. Easily embed, upsert, and index your data with a single API.

Quickstart Guide
Docs
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a serverless index
# "dimension" must match the dimension of your embedding model
pc.create_index(
    name="example-index",
    dimension=1024,
    spec=ServerlessSpec(cloud="aws", region="us-east-1")
)

# Target the index
index = pc.Index("example-index")

# Embed your data with a hosted model
vector = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=["The quick brown fox jumps over the lazy dog."],
    parameters={
        "input_type": "passage",
        "truncate": "END"
    }
)

# Upsert your vector embedding(s)
upsert_response = index.upsert(
    vectors=[
        {
            "id": "some_id",
            "values": vector[0].values,
            "metadata": {"description": "English pangram"}
        }
    ]
)
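
Once data is upserted, the same index can serve similarity search. Below is a minimal query sketch that reuses the client, index, and model from the snippet above; the query text is illustrative, and note that search queries are embedded with input_type set to "query" rather than "passage".

# Embed the search query ("query" input type, vs. "passage" for stored data)
query_embedding = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=["Which animal jumps over the dog?"],
    parameters={"input_type": "query"}
)

# Retrieve the closest matches, including their metadata
results = index.query(
    vector=query_embedding[0].values,
    top_k=3,
    include_metadata=True
)

for match in results.matches:
    print(match.id, match.score, match.metadata)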

Choose a hosted model or bring your own

Pinecone supports embeddings from any model or provider. Choose your preferred model or use the Inference API for fully managed embedding and reranking.

Model Gallery
Model | Company
pinecone-sparse-english-v0 (Inference API) | Pinecone
cohere-rerank-3.5 (Inference API) | Cohere
multilingual-e5-large (Inference API) | Microsoft
jina-embeddings-v3 | Jina AI
text-embedding-3-large | OpenAI
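
To sketch the reranking side of the Inference API, the snippet below reuses the pc client from the quickstart and the cohere-rerank-3.5 model listed above; the query and candidate documents are illustrative assumptions.

# Rerank candidate documents against a query with a hosted reranking model
reranked = pc.inference.rerank(
    model="cohere-rerank-3.5",
    query="Which animal jumps over the dog?",
    documents=[
        {"id": "doc1", "text": "The quick brown fox jumps over the lazy dog."},
        {"id": "doc2", "text": "Pinecone is a vector database for AI applications."}
    ],
    top_n=1,
    return_documents=True
)

# Each result row carries the document's original position and a relevance score
for row in reranked.data:
    print(row.index, row.score)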

Easily integrate with your favorite tools

Pinecone is part of the developer-favorite AI stack. Easily integrate and build with your favorite cloud provider, data sources, models, frameworks, and more.

Integrations
Amazon Web Services
GCP
Databricks
Microsoft Azure
Cohere
Pulumi
OpenAI
LangChain
Snowflake
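
As one example of these integrations, here is a rough LangChain sketch. It assumes the langchain-pinecone and langchain-openai packages, an existing index whose dimension matches the embeddings (1024 here), and API keys exposed via the PINECONE_API_KEY and OPENAI_API_KEY environment variables; the index name and texts are illustrative.

from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# text-embedding-3-large trimmed to 1024 dimensions to match the index
embeddings = OpenAIEmbeddings(model="text-embedding-3-large", dimensions=1024)

# Wrap an existing Pinecone index as a LangChain vector store
vectorstore = PineconeVectorStore(index_name="example-index", embedding=embeddings)

# Add documents and run a similarity search through LangChain
vectorstore.add_texts(["The quick brown fox jumps over the lazy dog."])
docs = vectorstore.similarity_search("Which animal jumps over the dog?", k=1)
print(docs[0].page_content)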

Bring knowledge to any AI workload

Prototype and ship AI assistants in minutes

Our Assistant API streamlines RAG development. Simply upload your text data and start chatting with your documents.
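
For a feel of the flow, here is a rough sketch assuming the pinecone-plugin-assistant package; the assistant name, file path, and question are hypothetical, and method names may vary by SDK version, so check the Assistant docs before relying on it.

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

# Create an assistant (name and instructions are hypothetical)
assistant = pc.assistant.create_assistant(
    assistant_name="example-assistant",
    instructions="Answer using only the uploaded documents."
)

# Upload a document for the assistant to ground its answers on (hypothetical path)
assistant.upload_file(file_path="employee_handbook.pdf")

# Chat with your documents
response = assistant.chat(
    messages=[Message(role="user", content="How many PTO days do we get?")]
)
print(response)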

Join the movement

Join a growing community of 500,000+ ambitious developers building the next generation of applications with Pinecone.


Events

Learn and connect with your peers, in person and online.

Attend an event >

Docs

Take advantage of our developer-friendly docs to get going in minutes.

Quickstart >

Forum

Share your questions and answers in the support forum.

Ask the community >

Secure and enterprise-ready

Meet security and operational requirements to bring AI products to market faster.

Secure

Data is encrypted at rest and in transit. Control access to your data with SSO, RBAC, CMEK, and more.

Reliable

Powering mission-critical applications of all sizes, with support SLAs and observability.

Cloud-native

Fully managed in the cloud of your choice. Also available via marketplaces: AWS, Azure, GCP.

Compliant

SOC 2 Type II certified, HIPAA compliant (request a BAA), and GDPR-ready.

Explore security