Query Workbench

The Query Workbench provides a rich graphical user interface to perform query development.

    Using the Query Workbench, you can conveniently explore data, create, edit, run, and save N1QL queries, view and save query results, and explore the document structures in a bucket - all in a single window.

    Features of the Query Workbench include:

    • A single, integrated visual interface to perform query development and testing.

• Easy viewing and editing of complex queries by providing features such as multi-line formatting, copy-and-paste, syntax coloring, auto-completion of N1QL keywords, bucket names, and field names, and easy cursor movement.

    • View the structure of the documents in a bucket by using the N1QL INFER command. You no longer have to select the documents at random and guess the structure of the document.

    • Display query results in multiple formats: JSON, table, and tree. You can also save the query results to a file on disk.

    The Query Workbench is only available on clusters that are running the Query Service.

    Accessing the Query Workbench

    Permissions Required

To access the Query Workbench and execute queries that read documents, you must have the Cluster Data Reader project role. To execute queries that read and modify documents, you must have the Project Owner or Cluster Data Reader/Writer role.

    If a cluster is running the Query Service, the Query Workbench can be accessed under the cluster’s Tools > Query Workbench tab.

The Query Workbench consists of three working areas: the Query Editor, the Query Results area, and the Data Insights panel.

    Using the Query Editor

    The Query Editor is where you build and run queries, and view query execution plans. Enter a query into the Query Editor, and then run the query by clicking Execute.

You can also execute a query by typing a semicolon (;) at the end of the query and pressing Enter.

    The Query Editor provides the following additional features:

    • Syntax coloring — For easy viewing, N1QL keywords, numbers and string literals are differently colored.

• Support for N1QL INFER statements — You can run the N1QL INFER statement directly in the editor to explore the structure of documents in a bucket.

    Specify Bucket and Scope Context


    You can specify the bucket and scope query context using the drop-down menus near the top left of the Query Editor. When the query context is set to a specified bucket and scope, you can write queries using just the collection name and do not need to specify the keyspace path.

    For example, here’s how a query would look without specifying a bucket and scope in the Query Editor:

    SELECT * FROM `travel-sample`.`inventory`.`airline`;

    If you then specify the bucket as travel-sample and the scope as inventory, the query can be simplified to:

    SELECT * FROM `airline`;

    To specify the query context:

    1. Near the top left of the Query Editor, open the bucket drop-down menu and choose a bucket.

    2. With a bucket now selected, the scope drop-down menu is enabled with the _default scope selected. Using this drop-down, you can choose a different scope within the current bucket.

    To remove the query context, open the bucket drop-down and choose Select a bucket.

    For more information on scopes and collections, refer to Scopes and Collections.

    If you are using Couchbase Server 6.6 in Capella, you are unable to specify scope and bucket context. This feature relies on scopes and collections introduced in Couchbase Server 7.0.

    Run a Query

    After entering a query, you can execute the query either by typing a semicolon (;) and pressing Enter, or by clicking the Execute button. When the query is running, the Execute button changes to Cancel, which allows you to cancel the running query. When you cancel a running query, it stops the activity on the cluster side as well.

    The Cancel button does not cancel index creation statements. The index creation continues on the server side even though it appears to have been canceled from the Query Workbench.

    Run a Transaction


The Query Editor can be used to run a Couchbase transaction. A transaction is a group of operations that are either committed to the database together, or are all undone from the database if there's a failure. For a complete overview of Couchbase transactions, see Transactions.

    You can use the Query Editor to run a single statement as a transaction. To do this, enter the statement into the Query Editor and click Transaction.
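For example, a single mutation statement such as the following (the document and values shown are illustrative) can be entered into the Query Editor and run with the Transaction button instead of Execute:

```sql
UPDATE `travel-sample`.`inventory`.`airline`
SET country = "United States"
WHERE name = "40-Mile Air";
```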

    If you are using Couchbase Server 6.6 in Capella, you are unable to run a Couchbase transaction using the Query Editor.

    Index Advisor


    The ADVISE command generates index advice that optimizes response time for a query. You can use this command with SELECT, MERGE, UPDATE, or DELETE queries. To display index advice for a query entered into the Query Editor, click Advise.

The Query Workbench then shows any suggested indexes in the Query Results area in JSON format. The ADVISE keyword is not prepended to the current query in the Query Editor.

    The Advise button is not available with clusters using Couchbase Server 6.6 in Capella.

    You can still manually prepend the ADVISE command to a query in the Query Editor and run it to get results.
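For example, prepending ADVISE to a query looks like this (the query shown is illustrative):

```sql
ADVISE SELECT * FROM `travel-sample` WHERE city = "Paris";
```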

    Viewing the Query Results

    When you execute a query, the results are displayed in the Query Results area. Since large result sets can take a long time to display, it’s recommended that you use the LIMIT clause as part of your query when appropriate.

    The figures in this section display the result of the following query:

    SELECT * FROM `travel-sample` WHERE type="airport" LIMIT 1;

    When a query finishes, metrics for that query are displayed between the Query Editor and the Query Results areas.

    The executed query with its metrics highlighted.

    The following query metrics are displayed:

• Status — Shows the status of the query. The value can be success, failed, or an HTTP status code.

    • Last Run — The time at which the query was last executed.

    • Round-trip time (RTT) — The total time it took to send the request and receive the response from the server.

    • Elapsed — The time taken by the server to process the request.

    • Execution — The time taken by the server to execute the query.

    • Docs — The number of documents returned.

• Mutation Count — The number of documents deleted or changed by the query. For UPDATE and DELETE queries, this metric is displayed instead of the document count.

    • Size — The total size, in bytes, of the documents returned.

    JSON Format

    The JSON tab formats the results to make the data easy to read. You can also expand and collapse objects and array values using the small arrow icons next to the line numbers.

If you clicked Execute, the results of the query are shown. If you clicked Explain, the results are the same as those shown in the Plan Text tab.
    The Query Results area with the 'JSON' tab selected.

    Table Format

The Table tab presents the results in a tabular format. The tool converts the JSON documents to HTML tables, and presents sub-objects or sub-arrays as sub-tables. This format works well for queries that return an array of objects, like SELECT `beer-sample`.* FROM `beer-sample`;. You can hover the mouse pointer over a data value to see the path to that value in a tooltip. You can sort a column by clicking the column header.

    The Query Results area with the 'Table' tab selected.

    Plans

    Each time a query is executed, an EXPLAIN command is automatically run in the background to retrieve the query plan for that query. You may also generate the query plan by clicking Explain.
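You can also prepend the EXPLAIN keyword to a query manually to return the plan rather than the query results; for example:

```sql
EXPLAIN SELECT * FROM `travel-sample` WHERE type = "airport" LIMIT 1;
```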

    There are two ways to display the query plan: Plan (graphical) and Plan Text.

    Plan

    The Plan tab presents the query in a graphical format.

At the top, a summary shows the buckets, indexes, and fields used by the query.

    At the bottom is a data-flow diagram of query operators, with the initial scans at the right, and the final output on the left.

    Potentially expensive operators are highlighted.

    The data flow generally follows these steps:

    1. Scan

    2. Fetch

    3. Filter

    4. Projection (part 1)

    5. Order

    6. Projection (part 2)

    Projection is split into two parts (one before Order and one after Order), but Query Workbench shows only the first part.
    The Query Results area with the 'Plan' tab selected.

Hovering over any unit of the plan shows more details for it. For example, for a query that filters on type = 'landmark' and city = 'San Francisco':

Unit name | Information shown when hovered over

Order

{'#operator':'Order','sort_terms':
[{'expr':'(`travel-sample`.`name`)'}]}

Project

{'#operator':'InitialProject','result_terms':
[{'expr':'self','star':true}]}

Filter

{'#operator':'Filter','condition':'(((`travel-sample`.`type`) = \'landmark\') and
((`travel-sample`.`city`) = \'San Francisco\'))'}

Fetch

{'#operator':'Fetch','keyspace':'travel-sample','namespace':'default'}

IntersectScan

(none)

IndexScan2 (above)

{'#operator':'IndexScan2','index':'def_city','index_id':'d51323973a9c8458','index_projection':
{'primary_key':true},'keyspace':'travel-sample','namespace':'default','spans':
[{'exact':true,'range':[{'high':'\'San Francisco\'','inclusion':3,'low':'\'San Francisco\''}]}],'using':'gsi'}

IndexScan2 (below)

{'#operator':'IndexScan2','index':'def_type','index_id':'a11b1af8651888cf','index_projection':
{'primary_key':true},'keyspace':'travel-sample','namespace':'default','spans':
[{'exact':true,'range':[{'high':'\'landmark\'','inclusion':3,'low':'\'landmark\''}]}],'using':'gsi'}

In general, the order of preference for scans, from most to least efficient, is:

    1. Covering Index

    2. Index Scan

    3. Intersect Scan

    4. Union Scan, and finally

    5. Fetch

    Plan Text

    The Plan Text tab shows the EXPLAIN query execution plan in JSON format.

    If you clicked Execute, a detailed query execution plan is shown, which includes information about how long each step in the plan took to execute. If you clicked Explain, the intended query execution plan is shown (minus the details that would be included if you actually executed the query).

    The Query Results area with the 'Plan Text' tab selected.

    View Query History

    The Query Workbench maintains a history of all the queries executed.

    If you edit a previous query and execute it, the new query is stored at the end of the history. The history is persistent across browser sessions. The query history only saves queries; due to limited browser storage it does not save query results. Thus, when you restart the browser or reload the page, you can see your old queries, but you must re-execute the queries if you want to see their results.

    Clearing the browser history clears the history maintained by the Query Editor as well.

    Clicking History opens the Query History fly-out menu:

    The 'Query History' fly-out menu.

    You can scroll through the entire query history, and click on an individual query to be taken to that particular point in the history.

    • Search history — You can search the query history by entering text in the Filter Queries search box. All matching queries are displayed.

    • Delete a specific entry — Click the Trash icon next to a particular query to delete it from the history.

  This can be useful if you want a more curated history when you export the history for future use.
    • Delete all entries — Click Clear to delete the entire query history.

    Import Queries

    You can load a new query history into the Query Workbench from a JSON file. This can be the exported query history from a different cluster.

    Importing query history will overwrite your current query history.
    1. From the cluster’s Tools > Query Workbench tab, click History.

      This opens the Query History fly-out menu.

    2. Click Import.

      This opens the Import Query History fly-out menu.

    3. Click Import JSON and select a local .json file that you wish to import.

    4. Click Save.

      The preexisting query history is overwritten with the query history of the imported file.

    Export Query History

    You can export the current query history to a JSON file. This file can be imported onto other clusters.

    1. From the cluster’s Tools > Query Workbench tab, click History.

      This opens the Query History fly-out menu.

    2. Click Export.

      This opens the Export Query History fly-out menu.

    3. (Optional) Use the File Name field to specify a name for the exported file.

    4. Click Export.

      The fly-out menu closes and the file is downloaded onto your computer.

    Modify Query Settings

    You can specify various settings for the Query Workbench by clicking Settings near the top-left corner of the Query Editor. This opens the Query Settings fly-out menu.

    The 'Query Settings' fly-out menu, showing the workbench settings.

    Configure the following settings and click Submit to save the configuration.

    Option Description

    Timeout

The timeout parameter (specified in seconds) limits the running time of a query, up to a maximum of 1800 seconds.

    Scan Consistency

    This is a cbq-engine option. Select one of the following options:

    • Not Bounded

    • Request Plus

    • Statement Plus

    For more information, refer to Settings and Parameters in the Couchbase Server documentation.

    Named Parameters

For prepared queries, this option allows you to specify any number of named parameters. Click the + button to add a named parameter, and the - button to remove one. Named parameters must start with a dollar sign ($) for use in prepared queries; otherwise, they are interpreted as parameters to the Query REST API.
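For example, assuming a named parameter $city with the value "Paris" has been defined in the settings, a query can reference it directly:

```sql
SELECT * FROM `travel-sample` WHERE city = $city;
```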

    Positional Parameters

For prepared queries, this option allows you to specify values for $1, $2, and so on, for as many positional parameters as you have. Click the + button to add a positional parameter, and the - button to remove one. The parameters are automatically labelled "$1", "$2", and so on.
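For example, assuming two positional parameters with the values "Paris" and "airport" have been defined in the settings, the following query substitutes them for $1 and $2 in order:

```sql
SELECT * FROM `travel-sample` WHERE city = $1 AND type = $2;
```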

    Data / Bucket Insights


    The Data Insights area displays the buckets, scopes, and collections in the current cluster. These keyspaces are shown in a hierarchy that you can expand or collapse.

    • To expand a heading within the hierarchy, click the heading, or click the rightward-pointing arrowhead before the heading.

    • To collapse a heading within the hierarchy, click the heading again, or click the downward-pointing arrowhead before the heading.

    Buckets appear at the top level of the hierarchy. When you expand a bucket, the scopes within that bucket appear below it. Similarly, when you expand a scope, the collections within that scope appear below it. The number of collections within the bucket is displayed to the right of the bucket heading.

    The 'Data Insights' panel.

    The Bucket Insights area displays all installed buckets in the cluster. By default, when the Query Workbench is first loaded, it retrieves a list of available buckets from the cluster. The Bucket Insights area is automatically refreshed when buckets or indexes are added or removed, but you can manually refresh it using the refresh button.

    The 'Bucket Insights' panel.

    The buckets are grouped into the following categories based on the indexes created for the bucket:

    • Queryable Buckets — Contain a primary index or a primary index and secondary indexes.

    • Queryable on Indexed Fields — Do not contain a primary index, but have one or more secondary indexes.

    • Non-Indexed Buckets — Do not contain any indexes. These buckets do not support queries. You must first define an index before querying these buckets.

  You can expand any bucket to view the schema for that bucket: field names, types, and example values (shown when you hover the mouse pointer over a field name). Bucket analysis is based on the N1QL INFER statement, which you can run manually to get more detailed results. This command infers a schema for a bucket by examining a random sample of documents; because the sample is random, the results may vary slightly from run to run. The default sample size is 1000 documents. The syntax of the command is:

      INFER bucket-name [ WITH options ];

      where options is a JSON object, specifying values for one or more of sample_size, similarity_metric, num_sample_values, or dictionary_threshold.

      For example, to increase the sample size to 3000, you could use the following query:

      INFER `travel-sample` WITH {"sample_size":3000};