Use Vector Search for AI Applications

    Use Couchbase Capella’s Vector Search features to add fast and accurate semantic search to your applications.

    Vector Search builds on Couchbase Capella’s Search Service to provide vector index support. You can use these new Vector Search indexes for Retrieval Augmented Generation (RAG) with an existing Large Language Model (LLM).

    Using Capella’s Vector Search, an embedding model, and your chosen LLM, you can develop AI applications that ground their responses in context and up-to-date information from your own data.

    You can develop applications that include:

    • Similarity search: Search for documents, products, images, and more that are similar to a given query. Because vector embeddings capture meaning, you can search based on descriptions rather than specific keywords, and get intuitive, relevant results across data types.

    • Semantic search: Use natural language processing to deliver more accurate results based on an understanding of the intent and context behind a given search query, rather than simple keyword matches.

    • Generative AI: Create new, original content, such as text and images, by passing a prompt or the results of a Vector Search to an LLM. Use generative AI to deliver tailored, dynamic responses across your applications.

    Vector Search integrates with frameworks like LangChain to accelerate AI application development. For more information about all frameworks and integrations supported by Vector Search and Capella, see Integrations, Connectors, and Tools.
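    For example, the following is a minimal sketch of the LangChain integration. It assumes the langchain-couchbase and langchain-openai packages; the class and parameter names reflect those packages and may differ across versions, and the bucket, scope, collection, and index names are placeholders.

    ```python
    # Minimal sketch: use a Capella Vector Search index as a LangChain vector store.
    # Assumes the langchain-couchbase and langchain-openai packages; names may
    # differ across package versions.
    from langchain_couchbase.vectorstores import CouchbaseVectorStore
    from langchain_openai import OpenAIEmbeddings

    vector_store = CouchbaseVectorStore(
        cluster=cluster,                # an already-authenticated couchbase Cluster object
        bucket_name="my-bucket",        # placeholder names
        scope_name="my-scope",
        collection_name="my-collection",
        embedding=OpenAIEmbeddings(),
        index_name="my-vector-index",   # an existing Vector Search index
    )

    # Embed and store documents, then retrieve the most similar ones.
    vector_store.add_texts(["A cozy two-bedroom apartment near the waterfront."])
    results = vector_store.similarity_search("apartments near the water", k=3)
    ```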

    Using Vector Search Indexes

    To get started using Vector Search in Capella:

    1. Store data: Store the data you want to use for your search or AI project in a Capella database.

    2. Generate embeddings: Generate vector embeddings from your data with your preferred embedding model.

    3. Store your embeddings: Store your vector embeddings in an array inside the documents in your Capella database, as shown in the sketch after this list.

    4. Create a Vector Search index: Create an index to use your embeddings and identify similar documents with vector similarity.
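    The following is a minimal sketch of steps 1 and 3 using the Couchbase Python SDK. The connection string, credentials, bucket, scope, and collection names, and the document shape are placeholders, and the embedding values are truncated for readability.

    ```python
    # Minimal sketch of steps 1 and 3: connect to a Capella database and store
    # a document whose "embedding" field holds a vector. All names and
    # credentials below are placeholders.
    from datetime import timedelta

    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions

    cluster = Cluster(
        "couchbases://cb.example.cloud.couchbase.com",  # your Capella connection string
        ClusterOptions(PasswordAuthenticator("username", "password")),
    )
    cluster.wait_until_ready(timedelta(seconds=10))

    collection = cluster.bucket("my-bucket").scope("my-scope").collection("my-collection")

    # Store the source text together with its vector embedding (step 3).
    doc = {
        "text": "A cozy two-bedroom apartment near the waterfront.",
        "embedding": [0.012, -0.034, 0.101],  # truncated; real embeddings have hundreds of dimensions
    }
    collection.upsert("listing::123", doc)
    ```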

    In addition to using integrations with frameworks like LangChain and LlamaIndex, you can use an LLM provider’s API and one of its embedding models to generate vector embeddings for your data. For example, the OpenAI embeddings endpoint can generate an embedding for a text string using a specified embedding model. You can then store that embedding as a new field in your documents. For more information about how to generate embeddings for text strings with the OpenAI API, see the Embeddings documentation.
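    As an illustration, the following sketch generates an embedding with the OpenAI Python package (v1+ API) and stores it as a new field on a document. The model name and document key are illustrative choices, and collection is the connected collection from the earlier sketch.

    ```python
    # Hedged sketch: generate an embedding with the OpenAI embeddings endpoint
    # and store it on a document. Model name and document key are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    text = "A cozy two-bedroom apartment near the waterfront."
    response = client.embeddings.create(
        model="text-embedding-3-small",  # any embedding model your provider offers
        input=text,
    )
    embedding = response.data[0].embedding  # a list of floats, e.g. 1536 dimensions

    # Store the embedding as a new field in the document.
    collection.upsert("listing::123", {"text": text, "embedding": embedding})
    ```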

    When you create a Vector Search index, the dimension of the vector embeddings stored in your documents must match the dimension of any search query vectors. If the dimensions do not match, a Vector Search query fails to return any results.
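    As an illustration, the following guard catches a mismatch before you run a query. The INDEX_DIMENSION constant is hypothetical and stands in for whatever dimension you set in your index definition; client is the OpenAI client from the earlier sketch.

    ```python
    # Guard against a dimension mismatch before querying. INDEX_DIMENSION is a
    # hypothetical constant matching the dimension set in your index definition.
    INDEX_DIMENSION = 1536

    query_embedding = client.embeddings.create(
        model="text-embedding-3-small",  # must be the same model used to embed your documents
        input="apartments near the water",
    ).data[0].embedding

    assert len(query_embedding) == INDEX_DIMENSION, (
        f"query vector has {len(query_embedding)} dimensions, index expects "
        f"{INDEX_DIMENSION}; the search would return no results"
    )
    ```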

    For more information about how to create a Vector Search index, see Create a Vector Search Index in Quick Mode.

    For information about how to run a Vector Search query, see Run a Vector Search with the Capella UI.