Customize a Search Index with the Web Console

    Configure additional options for a Search index to improve performance and fine-tune your search results.

    Some Search index options are only available when you use the standard editor.

    You can add the following components and configure the following options for a Search index:


    Type Identifier

    Set a type identifier to filter the documents that are added to your Search index:

    • JSON Type Field: Selects only documents that contain a specific field with a specified string value.

    • Doc ID up to Separator: Selects only documents whose ID or key starts with a specific value, taken as the portion of the ID before a specified separator.

    • Doc ID with Regex: Selects only documents with an ID or key that matches a regular expression.

    For more information about how to configure a type identifier, see Set the Type Identifier for a Search Index.
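    The Web Console stores the type identifier in the doc_config section of the index definition JSON. The fragment below is a minimal sketch, assuming a JSON Type Field identifier that reads a field named type; treat the values as placeholders rather than a template:

      {
        "doc_config": {
          "mode": "type_field",
          "type_field": "type",
          "docid_prefix_delim": "",
          "docid_regexp": ""
        }
      }

    Choosing Doc ID up to Separator or Doc ID with Regex instead sets the mode to docid_prefix or docid_regexp and fills in docid_prefix_delim or docid_regexp.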

    Mappings

    Use a type mapping to include or exclude specific documents in a collection from an index.

    Type mappings can also set a field’s data type and other settings.

    You can create two types of type mappings with the Search Service:

    • Dynamic type mappings: When you do not know the structure of your data fields ahead of time, use a dynamic type mapping to add all available fields from a matching document type to an index. For example, you could create a dynamic type mapping to include all documents from the hotel collection in your Search index, or include all fields under a JSON object from your document schema.

      Configure this type of mapping by selecting a collection in the Quick Editor or by clearing Only index specified fields when you Create a Type Mapping.

    • Static type mappings: When your data fields are stable and unlikely to change, use a static type mapping to add and define only specific fields from a matching document type to an index. For example, you could create a static type mapping to only include the contents of the city field from the hotel collection in your Search index, as a text field with an en analyzer.

      Configure this type of mapping by selecting a field in your document schema in the Quick Editor.

      In the standard editor, create a type mapping and add a child field mapping to create a static type mapping.

    Type mappings start at the collection level. Create additional mappings for child fields or JSON objects under a collection’s type mapping to restrict the data added to your index. This can improve Search index performance compared to indexing entire collections.

    For more information about how to configure settings for mappings and type mappings in the Quick Editor, see Quick Index Field Options.

    For more information about how to configure a type mapping in the standard editor, see Create a Type Mapping.
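    As a rough sketch of how these choices appear in the index definition JSON, the fragment below defines a static type mapping for the hotel collection that indexes only the city field as text with the en analyzer. The inventory scope name is assumed for illustration:

      {
        "mapping": {
          "types": {
            "inventory.hotel": {
              "enabled": true,
              "dynamic": false,
              "properties": {
                "city": {
                  "enabled": true,
                  "fields": [
                    {
                      "name": "city",
                      "type": "text",
                      "analyzer": "en",
                      "index": true
                    }
                  ]
                }
              }
            }
          }
        }
      }

    A dynamic type mapping over the same collection would instead set "dynamic": true and omit the properties block, so every field in matching hotel documents is indexed.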

    Analyzers

    Use analyzers to improve and customize the search results in your index.

    Analyzers transform input text into tokens, which give you greater control over your index’s text matching.

    You can use one of Couchbase’s built-in analyzers or create your own. For more information about how to create a custom analyzer, see Create a Custom Analyzer.

    Analyzers have different components that control how text is transformed for search. When you create a custom analyzer, you can choose these components. For more information, see Custom Filters.
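    For illustration, a custom analyzer in the index definition JSON names its character filters, a tokenizer, and token filters. The sketch below uses only built-in components; the analyzer name my_analyzer is a placeholder:

      {
        "analysis": {
          "analyzers": {
            "my_analyzer": {
              "type": "custom",
              "char_filters": ["html"],
              "tokenizer": "unicode",
              "token_filters": ["to_lower", "stop_en"]
            }
          }
        }
      }

    This example strips HTML tags, splits text on Unicode word boundaries, lowercases each token, and removes common English stop words.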

    Custom Filters

    Use custom filters to add more customization to a custom analyzer.

    For more information about these filters, see the Custom Filters section.

    Date/Time Parsers

    If the documents in your index contain date and time data in a format other than RFC-3339 (ISO-8601), then you need to create a date/time parser.

    A custom date/time parser tells the Search index how to interpret date data from your documents.

    For more information about how to add a custom date/time parser, see Create a Custom Date/Time Parser.
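    As an example, a custom date/time parser in the index definition JSON lists the layouts it should try. The sketch below assumes Go-style reference-time layouts and a placeholder parser name:

      {
        "analysis": {
          "date_time_parsers": {
            "my_datetime_parser": {
              "type": "flexiblego",
              "layouts": [
                "2006-01-02 15:04:05",
                "02/01/2006"
              ]
            }
          }
        }
      }

    Each layout describes a date format using Go’s reference time (January 2, 2006 at 15:04:05); documents whose date fields match one of the listed layouts can then be indexed and queried as date/time values.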

    Advanced

    Set advanced settings to change your index’s default analyzer, replication, and more.

    For more information about how to change advanced settings, see Set Search Index Advanced Settings.
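    As a sketch only, two of these settings correspond to the mapping’s default analyzer and the plan’s replica count in the index definition JSON; the nesting and values below are illustrative:

      {
        "planParams": {
          "numReplicas": 1
        },
        "params": {
          "mapping": {
            "default_analyzer": "standard"
          }
        }
      }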

    Custom Filters

    Custom filters are components of a Search index analyzer.

    Use the standard editor to create these components and add them to a custom analyzer to improve search results and performance for an index.

    You can create the following custom filters:

    Character Filters

    Character filters remove unwanted characters from the input for a search. For example, the default html character filter removes HTML tags from your search content.

    You can use a default character filter in an analyzer or create your own.

    For more information about the available default character filters, see Default Character Filters.

    For more information about how to create your own custom character filter, see Create a Custom Character Filter.
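    As an illustrative sketch, a custom character filter in the index definition JSON is defined by a regular expression and a replacement string. The filter name strip_dashes and its pattern are placeholders:

      {
        "analysis": {
          "char_filters": {
            "strip_dashes": {
              "type": "regexp",
              "regexp": "-",
              "replace": " "
            }
          }
        }
      }

    Added to a custom analyzer, this filter would replace hyphens with spaces before tokenization, so a value like full-text can match a search for full text.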

    Tokenizers

    Tokenizers separate input strings into individual tokens, which are combined into token streams. The Search Service compares the token stream from a search query against the token streams in an index to find matches.

    You can use a default tokenizer in an analyzer or create your own.

    For more information about the available default tokenizers, see Default Tokenizers.

    For more information about how to create your own tokenizer, see Create a Custom Tokenizer.
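    For illustration, one way to define a custom tokenizer in the index definition JSON is with a regular expression that describes what counts as a token. The name and pattern below are placeholders:

      {
        "analysis": {
          "tokenizers": {
            "alphanumeric_only": {
              "type": "regexp",
              "regexp": "[a-zA-Z0-9]+"
            }
          }
        }
      }

    With this pattern, each run of letters and digits becomes a token and punctuation is discarded.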

    Token Filters

    Token filters take the token stream from a tokenizer and modify the tokens.

    A token filter can create stems from tokens to increase the matches for a search term. For example, if a token filter creates the stem play, a search can return matches for player, playing, and playable.

    The Search Service has default token filters available. For a list of all available token filters, see Default Token Filters.

    You can also create your own token filters. Custom token filters can use Wordlists to modify their tokens. For more information about how to create your own token filter, see Create a Custom Token Filter.
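    As a sketch, a custom token filter that removes the words in a wordlist from the token stream might look like the following in the index definition JSON. The names my_stop_filter and my_wordlist are placeholders, and stop_tokens is one of several available filter types:

      {
        "analysis": {
          "token_filters": {
            "my_stop_filter": {
              "type": "stop_tokens",
              "stop_token_map": "my_wordlist"
            }
          }
        }
      }

    Adding my_stop_filter to a custom analyzer’s token_filters list drops every token that appears in my_wordlist.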

    Wordlists

    Wordlists define a list of words that you can use with a token filter to create tokens.

    You can use a wordlist to find words and create tokens, or remove words from a tokenizer’s token stream.

    The Search Service provides a set of default wordlists that you can use when you create a custom token filter. For more information about the available default wordlists, see Default Wordlists.

    For more information about how to create your own wordlist, see Create a Custom Wordlist.
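    For illustration, a custom wordlist in the index definition JSON is a named set of tokens that a token filter, such as the stop filter sketched earlier, can reference. The name and words below are placeholders:

      {
        "analysis": {
          "token_maps": {
            "my_wordlist": {
              "type": "custom",
              "tokens": ["inc", "ltd", "llc"]
            }
          }
        }
      }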