Cluster object

class couchbase.cluster.Cluster(connection_string='couchbase://localhost', bucket_class=<class 'couchbase.bucket.Bucket'>)[source]

Creates a new Cluster object

Parameters:
  • connection_string – Base connection string. It is an error to specify a bucket in the string.
  • bucket_class – The couchbase.bucket.Bucket implementation to use.
authenticate(self, authenticator=None, username=None, password=None)[source]

Set the type of authenticator to use when opening buckets or performing cluster management operations

Parameters:
  • authenticator – The new authenticator to use
  • username – The username to authenticate with
  • password – The password to authenticate with
cluster_manager(self)[source]

Returns an object which may be used to create and manage buckets in the cluster.

Returns:the cluster manager
Return type:couchbase.admin.Admin
n1ql_query(self, query, *args, **kwargs)[source]

Issue a “cluster-level” query. This requires that at least one connection to a bucket is active.

Parameters:
  • query – The query string or object
  • args – Additional arguments to n1ql_query()

See also

n1ql_query()

open_bucket(self, bucket_name, **kwargs)[source]

Open a new connection to a Couchbase bucket

Parameters:
  • bucket_name – The name of the bucket to open
  • kwargs – Additional arguments to provide to the constructor
Returns:

An instance of the bucket_class object provided to __init__()

Bucket object

class couchbase.bucket.Bucket[source]
__init__(self, *args, **kwargs)[source]

Connect to a bucket.

Parameters:
  • connection_string (string) –

    The connection string to use for connecting to the bucket. This is a URI-like string allowing specifying multiple hosts and a bucket name.

    The format of the connection string is the scheme (couchbase for normal connections, couchbases for SSL-enabled connections), followed by a list of one or more hostnames delimited by commas, a bucket name, and a set of options, like so:

    couchbase://host1,host2,host3/bucketname?option1=value1&option2=value2
    

    If using the SSL scheme (couchbases), be sure to specify the certpath option pointing to the location of the certificate on the client’s filesystem; otherwise the connection may fail with an error code indicating the server’s certificate could not be trusted.

    See Additional Connection Options for additional connection options.

  • username (string) – username to connect to bucket with
  • password (string) – the password of the bucket
  • quiet (boolean) – controls whether to raise an exception when the client performs operations on non-existent keys. If False, such operations raise NotFoundError exceptions; if True, they silently return None.
  • unlock_gil (boolean) –

    If set (the default), the bucket object will release the Python GIL when possible, allowing other Python threads to run in the background. Leave this enabled if your application uses threads, as otherwise all threads will be blocked while couchbase functions execute.

    You may turn this off for a small performance boost if you are certain your application does not use threads.

  • transcoder (Transcoder) – Set the transcoder object to use. This should conform to the interface in the documentation (it need not actually be a subclass). This can be either a class type to instantiate, or an initialized instance.
  • lockmode – The lockmode for threaded access. See Using a Bucket from multiple threads for more information.
  • tracer – An OpenTracing tracer into which to propagate any tracing information. Requires tracing to be enabled.
Raise:

BucketNotFoundError or AuthError if there is no such bucket to connect to, or if invalid credentials were supplied.

Raise:

CouchbaseNetworkError if the socket wasn’t accessible (doesn’t accept connections or doesn’t respond in time).

Raise:

InvalidError if the connection string was malformed.

Returns:

instance of Bucket

Initialize bucket using default options:

from couchbase.bucket import Bucket
cb = Bucket('couchbase:///mybucket')

Connect to protected bucket:

cb = Bucket('couchbase:///protected', password='secret')

Connect using a list of servers:

cb = Bucket('couchbase://host1,host2,host3/mybucket')

Connect using SSL:

cb = Bucket('couchbases://securehost/bucketname?certpath=/var/cb-cert.pem')
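
Since the connection string is URI-like, its pieces can be illustrated with Python's standard urlsplit. This is a sketch for illustration only: the SDK performs its own parsing, and operation_timeout is just an example option.

```python
from urllib.parse import urlsplit, parse_qs

# Illustration of the connection-string anatomy; the SDK parses this
# itself, urlsplit is used here only because the format is URI-like.
cs = 'couchbase://host1,host2,host3/mybucket?operation_timeout=5.0'
parts = urlsplit(cs)

scheme = parts.scheme            # 'couchbase', or 'couchbases' for SSL
hosts = parts.netloc.split(',')  # ['host1', 'host2', 'host3']
bucket = parts.path.lstrip('/')  # 'mybucket'
options = parse_qs(parts.query)  # {'operation_timeout': ['5.0']}
```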

Passing Arguments

All optional arguments passed to methods should be specified as keyword arguments; do not rely on their position within the signature, as this is subject to change.

Thus, if a function is prototyped as:

def foo(self, key, foo=None, bar=1, baz=False):

then arguments passed to foo() should always be in the form of

obj.foo(key, foo=fooval, bar=barval, baz=bazval)

and never like

obj.foo(key, fooval, barval, bazval)

Arguments To *_multi Methods

The *_multi methods accept an iterable of keys. The iterable must have __len__ and __iter__ implemented.

For operations which require values (i.e. the upsert_multi() family), a dict must be passed, mapping each key to the value to be stored for it.

Some of the multi methods accept keyword arguments; these arguments apply to all the keys within the iterable passed.

You can also pass an ItemCollection as the keys or kv parameter. The Item interface allows in-place modifications to an object across multiple operations, avoiding the need to copy the result into your own data structure.

See the documentation for Item for more information.
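
The container requirement above can be checked without touching the SDK. As a sketch, note that a generator exposes __iter__ but not __len__, so it must be materialized first:

```python
# The keys argument to *_multi methods must implement both __len__ and
# __iter__. A generator only has __iter__, so turn it into a list first.
gen = (k for k in ('foo', 'bar', 'baz'))
has_len = hasattr(gen, '__len__')        # False for generators

keys = list(gen)                         # satisfies both requirements
assert hasattr(keys, '__len__') and hasattr(keys, '__iter__')

# For value-storing operations (the upsert_multi() family), pass a dict
# mapping each key to the value to store for it:
kv = {'foo': {'n': 1}, 'bar': {'n': 2}}
```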

Key and Value Format

By default, keys are encoded as UTF-8 and values are encoded as JSON; these defaults were selected for compatibility and ease of use with views.

Format Options

The following constants may be used as values to the format option in methods where this is supported. This is also the value returned in the flags attribute of the ValueResult object from a get() operation.

Each format specifier has specific rules about what data types it accepts.

couchbase.FMT_JSON

Indicates the value is to be converted to JSON. This accepts any plain Python object and internally calls json.dumps(value). See the Python json documentation for more information. It is recommended you use this format if you intend to examine the value in a MapReduce view function.

couchbase.FMT_PICKLE

Convert the value to Pickle. This is the most flexible format as it accepts just about any Python object. This should not be used if operating in environments where other Couchbase clients in other languages might be operating (as Pickle is a Python-specific format)

couchbase.FMT_BYTES

Pass the value as a byte string. No conversion is performed, but the value must already be of a bytes type. In Python 2.x bytes is a synonym for str. In Python 3.x, bytes and str are distinct types. Use this option to store “binary” data. An exception will be thrown if a unicode object is passed, as unicode objects do not have any specific encoding. You must first encode the object to your preferred encoding and pass it along as the value.

Note that values with FMT_BYTES are retrieved as byte objects.

FMT_BYTES is the quickest conversion method.
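
As a small illustration of the encoding requirement (plain Python, no SDK involved), text must be explicitly encoded before it can serve as a FMT_BYTES value:

```python
# A unicode object has no single byte representation, so it must be
# encoded explicitly before being stored with FMT_BYTES.
text = u'h\u00e9llo'
data = text.encode('utf-8')          # choose an explicit encoding

assert isinstance(data, bytes)       # now acceptable as a FMT_BYTES value
assert data.decode('utf-8') == text  # and it round-trips losslessly
```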

couchbase.FMT_UTF8

Pass the value as a UTF-8 encoded string. This accepts unicode objects. It may also accept str objects if their content is encodable as UTF-8 (otherwise a ValueFormatError is thrown).

Values with FMT_UTF8 are retrieved as unicode objects (for Python 3 unicode objects are plain str objects).

couchbase.FMT_AUTO

Automatically determine the format of the input type. The value of this constant is an opaque object.

The rules are as follows:

  • If the value is a str, FMT_UTF8 is used.
  • If it is a bytes object, FMT_BYTES is used.
  • If it is a list, tuple, dict, bool, or None, FMT_JSON is used.
  • For anything else, FMT_PICKLE is used.
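
These rules can be sketched as a plain Python function; the string results below stand in for the couchbase FMT_* constants, and this is an illustration of the documented rules rather than the SDK's actual implementation:

```python
def auto_format(value):
    """Sketch of the FMT_AUTO selection rules (Python 3 types)."""
    if isinstance(value, bytes):
        return 'FMT_BYTES'
    if isinstance(value, str):
        return 'FMT_UTF8'
    if value is None or isinstance(value, (list, tuple, dict, bool)):
        return 'FMT_JSON'
    return 'FMT_PICKLE'

assert auto_format('text') == 'FMT_UTF8'
assert auto_format(b'raw') == 'FMT_BYTES'
assert auto_format([1, 2]) == 'FMT_JSON'
```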

Key Format

The above format options are only valid for values being passed to one of the storage methods (see couchbase.bucket.Bucket.upsert()).

For keys, the acceptable inputs are those for FMT_UTF8.

Single-Key Data Methods

These methods all return a Result object containing information about the operation (such as status and value).

Storing Data

class couchbase.bucket.Bucket[source]

These methods set the contents of a key in Couchbase. If successful, they replace the existing contents (if any) of the key.

upsert(self, key, value, cas=0, ttl=0, format=None, persist_to=0, replicate_to=0)[source]

Unconditionally store the object in Couchbase.

Parameters:
  • key (string or bytes) – The key to set the value with. By default, the key must be either a bytes or str object encodable as UTF-8. If a custom transcoder class is used (see __init__()), then the key object is passed directly to the transcoder, which may serialize it how it wishes.
  • value

    The value to set for the key. This should be a native Python value which will be transparently serialized to JSON by the library. Do not pass already-serialized JSON as the value or it will be serialized again.

    If you are using a different format setting (see format parameter), and/or a custom transcoder then value for this argument may need to conform to different criteria.

  • cas (int) – The _CAS_ value to use. If supplied, the value will only be stored if it already exists with the supplied CAS
  • ttl (int) – If specified, the key will expire after this many seconds
  • format (int) – If specified, indicates the format to use when encoding the value. If none is specified, the default_format attribute is used. For more info see default_format
  • persist_to (int) – Perform durability checking on this many nodes for persistence to disk. See endure() for more information
  • replicate_to (int) – Perform durability checking on this many replicas for presence in memory. See endure() for more information.
Raise:

ArgumentError if an argument is supplied that is not applicable in this context. For example setting the CAS as a string.

Raise:

CouchbaseNetworkError

Raise:

KeyExistsError if the key already exists on the server with a different CAS value.

Raise:

ValueFormatError if the value cannot be serialized with the chosen encoder, e.g. if you try to store a dictionary in plain mode.

Returns:

Result.

Simple set:

cb.upsert('key', 'value')

Force JSON document format for value:

cb.upsert('foo', {'bar': 'baz'}, format=couchbase.FMT_JSON)

Insert JSON from a string:

JSONstr = '{"key1": "value1", "key2": 123}'
JSONobj = json.loads(JSONstr)
cb.upsert("documentID", JSONobj, format=couchbase.FMT_JSON)

Force UTF8 document format for value:

cb.upsert('foo', "<xml></xml>", format=couchbase.FMT_UTF8)

Perform optimistic locking by specifying last known CAS version:

cb.upsert('foo', 'bar', cas=8835713818674332672)

Several sets at the same time (multi-set):

cb.upsert_multi({'foo': 'bar', 'baz': 'value'})

See also

upsert_multi()
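
The compare-and-swap behaviour shown in the CAS example can be modeled with a toy in-memory store (plain Python, purely illustrative; ToyStore is not part of the SDK): a mutation carrying a cas succeeds only when it matches the item's current CAS, and every successful mutation assigns a new one.

```python
import itertools

class ToyStore:
    """Toy model of CAS semantics; illustrative only, not the SDK."""
    _cas_seq = itertools.count(1)

    def __init__(self):
        self._data = {}  # key -> (value, cas)

    def upsert(self, key, value, cas=0):
        current = self._data.get(key)
        if cas and (current is None or current[1] != cas):
            # The real SDK raises KeyExistsError here
            raise ValueError('CAS mismatch')
        new_cas = next(self._cas_seq)
        self._data[key] = (value, new_cas)
        return new_cas

store = ToyStore()
cas1 = store.upsert('foo', 'bar')            # unconditional store
cas2 = store.upsert('foo', 'baz', cas=cas1)  # CAS matches: succeeds
```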

insert(self, key, value, ttl=0, format=None, persist_to=0, replicate_to=0)[source]

Store an object in Couchbase unless it already exists.

Follows the same conventions as upsert() but the value is stored only if it does not exist already. Conversely, the value is not stored if the key already exists.

Notably missing from this method is the cas parameter; this is because insert will only succeed if a key does not already exist on the server (and thus can have no CAS)

Raise:KeyExistsError if the key already exists
replace(self, key, value, cas=0, ttl=0, format=None, persist_to=0, replicate_to=0)[source]

Store an object in Couchbase only if it already exists.

Follows the same conventions as upsert(), but the value is stored only if a previous value already exists.

Raise:NotFoundError if the key does not exist

Retrieving Data

class couchbase.bucket.Bucket[source]
get(self, key, ttl=0, quiet=None, replica=False, no_format=False)[source]

Obtain an object stored in Couchbase by given key.

Parameters:
  • key (string) – The key to fetch. The type of key is the same as mentioned in upsert()
  • ttl (int) – If specified, indicates that the key’s expiration time should be modified when retrieving the value.
  • quiet (boolean) –

    causes get to return None instead of raising an exception when the key is not found. It defaults to the value set by quiet on the instance. In quiet mode, the error may still be obtained by inspecting the rc attribute of the Result object, or checking Result.success.

    Note that the default value is None, which means to use the bucket-level quiet setting. If it is a boolean (i.e. True or False) it will override the couchbase.bucket.Bucket-level quiet attribute.

  • replica (bool) –

    Whether to fetch this key from a replica rather than querying the master server. This is primarily useful when operations with the master fail (possibly due to a configuration change). It should normally be used in an exception handler like so

    Using the replica option:

    try:
        res = c.get("key", quiet=True) # suppress not-found errors
    except CouchbaseError:
        res = c.get("key", replica=True, quiet=True)
    
  • no_format (bool) – If set to True, then the value will always be delivered in the Result object as being of FMT_BYTES. This is an item-local equivalent of using the data_passthrough option
Raise:

NotFoundError if the key does not exist

Raise:

CouchbaseNetworkError

Raise:

ValueFormatError if the value cannot be deserialized with the chosen decoder, e.g. if you try to retrieve an object stored with an unrecognized format

Returns:

A Result object

Simple get:

value = cb.get('key').value

Get multiple values:

cb.get_multi(['foo', 'bar'])
# { 'foo' : <Result(...)>, 'bar' : <Result(...)> }

Inspect the flags:

rv = cb.get("key")
value, flags, cas = rv.value, rv.flags, rv.cas

Update the expiration time:

rv = cb.get("key", ttl=10)
# Expires in ten seconds

See also

get_multi()

Modifying Data

These methods modify existing values in Couchbase

class couchbase.bucket.Bucket[source]
append(self, key, value, cas=0, format=None, persist_to=0, replicate_to=0)[source]

Append a string to an existing value in Couchbase.

Parameters:value (string) – The data to append to the existing value.

Other parameters follow the same conventions as upsert().

The format argument must be one of FMT_UTF8 or FMT_BYTES. If not specified, it will be FMT_UTF8 (overriding the default_format attribute). This is because JSON or Pickle formats will be nonsensical when random data is appended to them. If you wish to modify a JSON or Pickle encoded object, you will need to retrieve it (via get()), modify it, and then store it again (using upsert()).

Additionally, you must ensure the format (and flags) of the current value is compatible with the data to be appended. For example, you may append a FMT_BYTES value to an existing FMT_JSON value, but an error will be thrown when retrieving the value using get() (you may still use the data_passthrough option to overcome this).

Raise:NotStoredError if the key does not exist
prepend(self, key, value, cas=0, format=None, persist_to=0, replicate_to=0)[source]

Prepend a string to an existing value in Couchbase.

Entry Operations

These methods affect an entry in Couchbase. They do not directly modify the value, but may affect the entry’s accessibility or duration.

class couchbase.bucket.Bucket[source]
remove(self, key, cas=0, quiet=None, persist_to=0, replicate_to=0)[source]

Remove the key-value entry for a given key in Couchbase.

Parameters:
  • key (string, dict, or tuple/list) – A string which is the key to remove. The format and type of the key follows the same conventions as in upsert()
  • cas (int) – The CAS to use for the removal operation. If specified, the key will only be removed from the server if it has the same CAS as specified. This is useful to remove a key only if its value has not been changed from the version currently visible to the client. If the CAS on the server does not match the one specified, an exception is thrown.
  • quiet (boolean) – Follows the same semantics as quiet in get()
  • persist_to (int) – If set, wait for the item to be removed from the storage of at least this many nodes
  • replicate_to (int) – If set, wait for the item to be removed from the cache of at least this many nodes (excluding the master)
Raise:

NotFoundError if the key does not exist.

Raise:

KeyExistsError if a CAS was specified, but the CAS on the server had changed

Returns:

A Result object.

Simple remove:

ok = cb.remove("key").success

Don’t complain if key does not exist:

ok = cb.remove("key", quiet=True)

Only remove if CAS matches our version:

rv = cb.get("key")
cb.remove("key", cas=rv.cas)

Remove multiple keys:

oks = cb.remove_multi(["key1", "key2", "key3"])

Remove multiple keys with CAS:

oks = cb.remove_multi({
    "key1" : cas1,
    "key2" : cas2,
    "key3" : cas3
})

See also

remove_multi(), endure() for more information on the persist_to and replicate_to options.

lock(self, key, ttl=0)[source]

Lock and retrieve a key-value entry in Couchbase.

Parameters:
  • key – A string which is the key to lock.
  • ttl – a TTL for which the lock should be valid. While the lock is active, attempts to access the key (via other lock(), upsert() or other mutation calls) will fail with a KeyExistsError. Note that the value for this option is limited by the maximum allowable lock time determined by the server (currently, this is 30 seconds). If passed a higher value, the server will silently lower this to its maximum limit.

This function otherwise functions similarly to get(); specifically, it will return the value upon success. Note the cas value from the Result object. This will be needed to unlock() the key.

Note that the lock will also be implicitly released if the item is modified by one of the upsert() family of functions when the valid CAS is supplied

Raise:TemporaryFailError if the key is already locked.
Raise:See get() for possible exceptions

Lock a key

rv = cb.lock("locked_key", ttl=5)
# This key is now locked for the next 5 seconds.
# attempts to access this key will fail until the lock
# is released.

# do important stuff...

cb.unlock("locked_key", rv.cas)

Lock a key, implicitly unlocking with upsert() with CAS

rv = self.cb.lock("locked_key", ttl=5)
new_value = rv.value.upper()
cb.upsert("locked_key", new_value, rv.cas)

Poll and Lock

rv = None
begin_time = time.time()
while time.time() - begin_time < 15:
    try:
        rv = cb.lock("key", ttl=10)
        break
    except TemporaryFailError:
        print("Key is currently locked.. waiting")
        time.sleep(1)

if not rv:
    raise Exception("Waited too long..")

# Do stuff..

cb.unlock("key", rv.cas)
unlock(self, key, cas)[source]

Unlock a Locked Key in Couchbase.

This unlocks an item previously locked by lock()

Parameters:
  • key – The key to unlock
  • cas – The cas returned from lock()’s Result object.

See lock() for an example.

Raise:TemporaryFailError if the CAS supplied does not match the CAS on the server (possibly because it was unlocked by a previous call).
touch(self, key, ttl=0)[source]

Update a key’s expiration time

Parameters:
  • key (string) – The key whose expiration time should be modified
  • ttl (int) – The new expiration time. If the expiration time is 0 then the key never expires (and any existing expiration is removed)
Returns:

OperationResult

Update the expiration time of a key

cb.upsert("key", "value", ttl=100)
# expires in 100 seconds
cb.touch("key", ttl=0)
# key should never expire now
Raise:The same exceptions that get() raises

See also

get() - which can be used to get and update the expiration, touch_multi()

Sub-Document Operations

These methods provide entry points to modify parts of a document in Couchbase.

Note

Sub-Document API methods are available in Couchbase Server 4.5 (currently in Developer Preview).

The server and SDK implementations and APIs are subject to change

class couchbase.bucket.Bucket[source]
lookup_in(self, key, *specs, **kwargs)[source]

Atomically retrieve one or more paths from a document.

Parameters:
  • key – The key of the document to retrieve
  • specs – A list of specs (See couchbase.subdocument)
Returns:

A couchbase.result.SubdocResult object. This object contains the results and any errors of the operation.

Example:

import couchbase.subdocument as SD
rv = cb.lookup_in('user',
                  SD.get('email'),
                  SD.get('name'),
                  SD.exists('friends.therock'))

email = rv[0]
name = rv[1]
friend_exists = rv.exists(2)

See also

retrieve_in() which acts as a convenience wrapper

mutate_in(self, key, *specs, **kwargs)[source]

Perform multiple atomic modifications within a document.

Parameters:
  • key – The key of the document to modify
  • specs – A list of specs (See couchbase.subdocument)
  • create_doc (bool) – Whether the document should be created if it doesn’t exist
  • insert_doc (bool) – If the document should be created anew, and the operations performed only if it does not exist.
  • upsert_doc (bool) – If the document should be created anew if it does not exist. If it does exist the commands are still executed.
  • kwargs – CAS, etc.
Returns:

A SubdocResult object.

Here’s an example of adding a new tag to a “user” document and incrementing a modification counter:

import couchbase.subdocument as SD
# ....
cb.mutate_in('user',
             SD.array_addunique('tags', 'dog'),
             SD.counter('updates', 1))

Note

The insert_doc and upsert_doc options are mutually exclusive. Use insert_doc when you wish to create a new document with extended attributes (xattrs).

retrieve_in(self, key, *paths, **kwargs)[source]

Atomically fetch one or more paths from a document.

Convenience method for retrieval operations. This functions identically to lookup_in(). As such, the following two forms are equivalent:

import couchbase.subdocument as SD
rv = cb.lookup_in(key,
                  SD.get('email'),
                  SD.get('name'),
                  SD.get('friends.therock'))

email, name, friend = rv
rv = cb.retrieve_in(key, 'email', 'name', 'friends.therock')
email, name, friend = rv

See also

lookup_in()

Counter Operations

These are atomic counter operations for Couchbase. They increment or decrement a counter. A counter is a key whose value can be parsed as an integer. Counter values may be retrieved (without modification) using the Bucket.get() method

class couchbase.bucket.Bucket[source]
counter(self, key, delta=1, initial=None, ttl=0)[source]

Increment or decrement the numeric value of an item.

This method instructs the server to treat the item stored under the given key as a numeric counter.

Counter operations require that the stored value exists as a string representation of a number (e.g. 123). If storing items using the upsert() family of methods, and using the default couchbase.FMT_JSON then the value will conform to this constraint.
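
The value constraint can be seen with the standard json module (illustration only, no server involved): FMT_JSON encodes an integer as exactly the decimal string a counter needs.

```python
import json

# FMT_JSON stores an int as its plain decimal string, which is exactly
# the representation the server's counter operations can parse.
seed = json.dumps(100)
assert seed == '100'

# counter(key, delta=1) on such a value yields 101 server-side;
# simulated here with plain arithmetic on the stored string.
new_value = int(seed) + 1
```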

Parameters:
  • key (string) – A key whose counter value is to be modified
  • delta (int) – an amount by which the key should be modified. If the number is negative then this number will be subtracted from the current value.
  • initial (int or None) – The initial value for the key, if it does not exist. If the key does not exist, this value is used, and delta is ignored. If this parameter is None then no initial value is used
  • ttl (int) – The lifetime for the key, after which it will expire
Raise:

NotFoundError if the key does not exist on the bucket (and initial was None)

Raise:

DeltaBadvalError if the key exists, but the existing value is not numeric

Returns:

A Result object. The current value of the counter may be obtained by inspecting the return value’s value attribute.

Simple increment:

rv = cb.counter("key")
rv.value
# 42

Increment by 10:

rv = cb.counter("key", delta=10)

Decrement by 5:

rv = cb.counter("key", delta=-5)

Increment by 20, set initial value to 5 if it does not exist:

rv = cb.counter("key", delta=20, initial=5)

Increment three keys:

kv = cb.counter_multi(["foo", "bar", "baz"])
for key, result in kv.items():
    print("Key %s has value %d now" % (key, result.value))

See also

counter_multi()

Multi-Key Data Methods

These methods tend to be more efficient than their single-key equivalents. They return a couchbase.result.MultiResult object (which is a dict subclass) which contains couchbase.result.Result objects as the values for its keys.

class couchbase.bucket.Bucket[source]
upsert_multi(self, keys, ttl=0, format=None, persist_to=0, replicate_to=0)[source]

Write multiple items to the cluster. Multi version of upsert()

Parameters:
  • keys (dict) –

    A dictionary of items to set. The dictionary keys are the keys as they should be on the server, and the values are the values to be stored for those keys.

    keys may also be an ItemCollection. If using a dictionary variant for item collections, an additional ignore_cas parameter may be supplied with a boolean value. If not specified, the operation will fail if the CAS value on the server does not match the one specified in the Item’s cas field.

  • ttl (int) – If specified, sets the expiration value for all keys
  • format (int) – If specified, this is the conversion format which will be used for _all_ the keys.
  • persist_to (int) – Durability constraint for persistence. Note that it is more efficient to use endure_multi() on the returned MultiResult than using these parameters for a high volume of keys. Using these parameters however does save on latency as the constraint checking for each item is performed as soon as it is successfully stored.
  • replicate_to (int) – Durability constraints for replication. See notes on the persist_to parameter for usage.
Returns:

A MultiResult object, which is a dict-like object

The multi methods are more than just a convenience, they also save on network performance by batch-scheduling operations, reducing latencies. This is especially noticeable on smaller value sizes.

See also

upsert()

get_multi(self, keys, ttl=0, quiet=None, replica=False, no_format=False)[source]

Get multiple keys. Multi variant of get()

Parameters:
  • keys (iterable) – the keys to fetch
  • ttl (int) – Set the expiration for all keys when retrieving
  • replica (boolean) – Whether the results should be obtained from a replica instead of the master. See get() for more information about this parameter.
Returns:

A MultiResult object. This is a dict-like object and contains the passed keys as the dictionary keys, and Result objects as values

insert_multi(self, keys, ttl=0, format=None, persist_to=0, replicate_to=0)[source]

Add multiple keys. Multi variant of insert()

replace_multi(self, keys, ttl=0, format=None, persist_to=0, replicate_to=0)[source]

Replace multiple keys. Multi variant of replace()

append_multi(self, keys, format=None, persist_to=0, replicate_to=0)[source]

Append to multiple keys. Multi variant of append().

Warning

If using the Item interface, use the append_items() and prepend_items() instead, as those will automatically update the Item.value property upon successful completion.

prepend_multi(self, keys, format=None, persist_to=0, replicate_to=0)[source]

Prepend to multiple keys. Multi variant of prepend()

remove_multi(self, kvs, quiet=None)[source]

Remove multiple items from the cluster

Parameters:
  • kvs – Iterable of keys to delete from the cluster. If you wish to specify a CAS for each item, then you may pass a dictionary of keys mapping to cas, like remove_multi({k1:cas1, k2:cas2})
  • quiet – Whether an exception should be raised if one or more items were not found
Returns:

A MultiResult containing OperationResult values.

counter_multi(self, kvs, initial=None, delta=1, ttl=0)[source]

Perform counter operations on multiple items

Parameters:
  • kvs – Keys to operate on. See below for more options
  • initial – Initial value to use for all keys.
  • delta – Delta value for all keys.
  • ttl – Expiration value to use for all keys
Returns:

A MultiResult containing ValueResult values

The kvs can be a:

  • Iterable of keys
    cb.counter_multi((k1, k2))
    
  • A dictionary mapping a key to its delta
    cb.counter_multi({
        k1: 42,
        k2: 99
    })
    
  • A dictionary mapping a key to its additional options
    cb.counter_multi({
        k1: {'delta': 42, 'initial': 9, 'ttl': 300},
        k2: {'delta': 99, 'initial': 4, 'ttl': 700}
    })
    

When using a dictionary, you can override settings for each key on a per-key basis (for example, the initial value). Global settings (global here means something passed as a parameter to the method) will take effect for those values which do not have a given option specified.

lock_multi(self, keys, ttl=0)[source]

Lock multiple keys. Multi variant of lock()

Parameters:
  • keys (iterable) – the keys to lock
  • ttl (int) – The lock timeout for all keys
Returns:

a MultiResult object

See also

lock()

unlock_multi(self, keys)[source]

Unlock multiple keys. Multi variant of unlock()

Parameters:keys (dict) – the keys to unlock
Returns:a MultiResult object

The value of the keys argument should be either the CAS, or a previously returned Result object from a lock() call. Effectively, this means you may pass a MultiResult as the keys argument.

Thus, you can do something like

keys = (....)
rvs = cb.lock_multi(keys, ttl=5)
# do something with rvs
cb.unlock_multi(rvs)

See also

unlock()

touch_multi(self, keys, ttl=0)[source]

Touch multiple keys. Multi variant of touch()

Parameters:
  • keys (iterable or dict) – the keys to touch. keys may also be a dictionary with integer values, in which case the value for each key is used as its TTL instead of the global one (i.e. the ttl passed to this function)
  • ttl (int) – The new expiration time
Returns:

A MultiResult object

Update three keys to expire in 10 seconds

cb.touch_multi(("key1", "key2", "key3"), ttl=10)

Update three keys with different expiration times

cb.touch_multi({"foo" : 1, "bar" : 5, "baz" : 10})

See also

touch()

Batch Operation Pipeline

In addition to the multi methods, you may also use the Pipeline context manager to schedule multiple operations of different types

class couchbase.bucket.Bucket[source]
pipeline(self)[source]

Returns a new Pipeline context manager. When the context manager is active, operations performed will return None, and will be sent on the network when the context leaves (in its __exit__ method). To get the results of the pipelined operations, inspect the Pipeline.results property.

Operational errors (i.e. negative replies from the server, or network errors) are delivered when the pipeline exits, but argument errors are thrown immediately.

Returns:a Pipeline object
Raise:PipelineError if a pipeline is already created
Raise:Other operation-specific errors.

Scheduling multiple operations, without checking results:

with cb.pipeline():
  cb.upsert("key1", "value1")
  cb.counter("counter")
  cb.upsert_multi({
    "new_key1" : "new_value_1",
    "new_key2" : "new_value_2"
  })

Retrieve the results for several operations:

pipeline = cb.pipeline()
with pipeline:
  cb.upsert("foo", "bar")
  cb.replace("something", "value")

for result in pipeline.results:
  print("Pipeline result: CAS {0}".format(result.cas))

Note

When in pipeline mode, you cannot execute view queries. Additionally, pipeline mode is not supported on async handles

Warning

Pipeline mode should not be used if you are using the same object concurrently from multiple threads. This only refers to the internal lock within the object itself. It is safe to use if you employ your own locking mechanism (for example a connection pool)

New in version 1.2.0.

class couchbase.bucket.Pipeline[source]
results

Contains a list of results for each pipelined operation executed within the context. The list remains until this context is reused.

The elements in the list are either Result objects (for single operations) or MultiResult objects (for multi operations)

MapReduce/View Methods

class couchbase.bucket.Bucket[source]
query(self, design, view, use_devmode=False, **kwargs)[source]

Query a pre-defined MapReduce view, passing parameters.

This method executes a view on the cluster. It accepts various parameters for the view and returns an iterable object (specifically, a View).

Parameters:
  • design (string) – The design document
  • view (string) – The view function contained within the design document
  • use_devmode (boolean) – Whether the view name should be transformed into a development-mode view. See documentation on design_create() for more explanation.
  • kwargs – Additional parameters passed to the View constructor. See that class’ documentation for accepted parameters.

See also

View
contains more extensive documentation and examples
couchbase.views.params.Query
contains documentation on the available query options
SpatialQuery
contains documentation on the available query options for Geospatial views.

Note

To query a spatial view, you must explicitly use the SpatialQuery. Passing key-value view parameters in kwargs is not supported for spatial views.
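As a minimal sketch of the call above (assuming a connected Bucket cb and a design document named beer containing a brewery_beers view; both names are illustrative):

```python
def first_view_keys(cb, limit=5):
    # Each row yielded by the View iterator exposes .key, .value
    # and .docid; collect just the keys of the first few rows.
    keys = []
    for row in cb.query('beer', 'brewery_beers', limit=limit):
        keys.append(row.key)
    return keys
```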

N1QL Query Methods

class couchbase.bucket.Bucket[source]
n1ql_query(self, query, *args, **kwargs)[source]

Execute a N1QL query.

This method is mainly a wrapper around the N1QLQuery and N1QLRequest objects, which contain the inputs and outputs of the query.

Using an explicit N1QLQuery:

query = N1QLQuery(
    'SELECT airportname FROM `travel-sample` WHERE city=$1', "Reno")
# Use this option for often-repeated queries
query.adhoc = False
for row in cb.n1ql_query(query):
    print('Name: {0}'.format(row['airportname']))

Using an implicit N1QLQuery:

for row in cb.n1ql_query(
    'SELECT airportname FROM `travel-sample` WHERE city="Reno"'):
    print('Name: {0}'.format(row['airportname']))

With the latter form, *args and **kwargs are forwarded to the N1QLRequest constructor; the request class may be overridden via kwargs['iterclass'], otherwise it defaults to N1QLRequest.

Parameters:
  • query – The query to execute. This may either be a N1QLQuery object, or a string (which will be implicitly converted to one).
  • kwargs – Arguments for N1QLRequest.
Returns:

An iterator which yields rows. Each row is a dictionary representing a single result

Analytics Query Methods

class couchbase.bucket.Bucket[source]
analytics_query(self, query, host, *args, **kwargs)[source]

Execute an Analytics query.

This method is mainly a wrapper around the AnalyticsQuery and AnalyticsRequest objects, which contain the inputs and outputs of the query.

Using an explicit AnalyticsQuery:

query = AnalyticsQuery(
    "SELECT VALUE bw FROM breweries bw WHERE bw.name = ?", "Kona Brewing")
for row in cb.analytics_query(query, "127.0.0.1"):
    print('Entry: {0}'.format(row))

Using an implicit AnalyticsQuery:

for row in cb.analytics_query(
    "SELECT VALUE bw FROM breweries bw WHERE bw.name = ?", "127.0.0.1", "Kona Brewing"):
    print('Entry: {0}'.format(row))

Parameters:
  • query – The query to execute. This may either be a AnalyticsQuery object, or a string (which will be implicitly converted to one).
  • host – The host to send the request to.
  • args – Positional arguments for AnalyticsQuery.
  • kwargs – Named arguments for AnalyticsQuery.
Returns:

An iterator which yields rows. Each row is a dictionary representing a single result

Full-Text Search Methods

class couchbase.bucket.Bucket[source]
search(self, index, query, **kwargs)[source]

Perform full-text searches

New in version 2.0.9.

Warning

The full-text search API is experimental and subject to change

Parameters:
  • index (str) – Name of the index to query
  • query (couchbase.fulltext.SearchQuery) – Query to issue
  • params (couchbase.fulltext.Params) – Additional query options
Returns:

An iterator over query hits

Note

You can avoid instantiating an explicit Params object and instead pass the parameters directly to the search method.

it = cb.search('name', ft.MatchQuery('nosql'), limit=10)
for hit in it:
    print(hit)

Design Document Management

To perform design document management operations, you must first get an instance of the BucketManager. You can do this by invoking the bucket_manager() method on the Bucket object.

Note

Design document management functions are async. This means that a successful return value simply indicates that the operation was scheduled successfully on the server. It is possible that the view or design document will not yet exist after creating, deleting, or publishing it. It is therefore recommended to verify that the view exists by polling until queries against it no longer fail. This may be accomplished by specifying the syncwait parameter to the various design methods which accept it.

Note

The normal process for dealing with views and design docs is to first create a development design document. Such design documents are prefixed with the string dev_. They operate on a small subset of cluster data and as such are ideal for testing as they do not impact load very much.

Once you are satisfied with the behavior of the development design doc, you can publish it into a production mode design doc. Such design documents always operate on the full cluster dataset.

The view and design functions accept a use_devmode parameter which prefixes the design name with dev_ if not already prefixed.
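The development-to-production workflow described above can be sketched as follows (the design name, map function, and 5-second syncwait values here are illustrative):

```python
# Hypothetical map function for illustration.
MAP_FN = 'function (doc, meta) { emit(meta.id, null); }'

def create_and_publish(mgr):
    # Create the design in development mode (the stored name becomes
    # dev_beer), polling up to 5 seconds until the view is usable.
    mgr.design_create('beer', {'views': {'all': {'map': MAP_FN}}},
                      use_devmode=True, syncwait=5)
    # Promote it to production; it now indexes the full dataset.
    mgr.design_publish('beer', syncwait=5)
    # Subsequent calls must disable use_devmode.
    return mgr.design_get('beer', use_devmode=False)
```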

class couchbase.bucketmanager.BucketManager[source]
design_create(self, name, ddoc, use_devmode=True, syncwait=0)[source]

Store a design document

Parameters:
  • name (string) – The name of the design
  • ddoc (string or dict) – The actual contents of the design document. If ddoc is a string, it is passed as-is to the server. Otherwise it is serialized as JSON, and its _id field is set to _design/{name}.
  • use_devmode (bool) – Whether a development mode view should be used. Development-mode views are less resource demanding with the caveat that by default they only operate on a subset of the data. Normally a view will initially be created in ‘development mode’, and then published using design_publish()
  • syncwait (float) – How long to poll for the action to complete. Server side design operations are scheduled and thus this function may return before the operation is actually completed. Specifying the timeout here ensures the client polls during this interval to ensure the operation has completed.
Raise:

couchbase.exceptions.TimeoutError if syncwait was specified and the operation could not be verified within the interval specified.

Returns:

An HttpResult object.

design_get(self, name, use_devmode=True)[source]

Retrieve a design document

Parameters:
  • name (string) – The name of the design document
  • use_devmode (bool) – Whether this design document is still in “development” mode
Returns:

A HttpResult containing a dict representing the format of the design document

Raise:

couchbase.exceptions.HTTPError if the design does not exist.

design_publish(self, name, syncwait=0)[source]

Convert a development mode design document into a production mode design document. Production mode views, as opposed to development views, operate on the entire cluster data (rather than a restricted subset thereof).

Parameters:name (string) – The name of the view to convert.

Once the view has been converted, ensure that all functions (such as design_get()) have the use_devmode parameter disabled, otherwise an error will be raised when those functions are used.

Note that the use_devmode option is missing. This is intentional as the design document must currently be a development view.

Returns:An HttpResult object.
Raise:couchbase.exceptions.HTTPError if the design does not exist
design_delete(self, name, use_devmode=True, syncwait=0)[source]

Delete a design document

Parameters:
  • name (string) – The name of the design document to delete
  • use_devmode (bool) – Whether the design to delete is a development mode design doc.
  • syncwait (float) – Timeout for operation verification. See design_create() for more information on this parameter.
Returns:

An HttpResult object.

Raise:

couchbase.exceptions.HTTPError if the design does not exist

Raise:

couchbase.exceptions.TimeoutError if syncwait was specified and the operation could not be verified within the specified interval.

design_list(self)[source]

List all design documents for the current bucket.

Returns:A HttpResult containing a dict, with keys being the ID of the design document.

Note

This information is derived using the pools/default/buckets/<bucket>/ddocs endpoint, but the return value has been modified to match that of design_get().

Note

This function returns both ‘production’ and ‘development’ mode views. These two can be distinguished by the name of the design document being prefixed with the dev_ identifier.

The keys of the dict in value will be of the form _design/<VIEWNAME> where VIEWNAME may either be e.g. foo or dev_foo depending on whether foo is a production or development mode view.

for name, ddoc in mgr.design_list().value.items():
    if name.startswith('_design/dev_'):
        print("Development view!")
    else:
        print("Production view!")

Example:

for name, ddoc in mgr.design_list().value.items():
    print('Design name {0}. Contents {1}'.format(name, ddoc))

See also

design_get()

N1QL Index Management

Before issuing any N1QL query using n1ql_query(), the bucket being queried must have an index defined for the query. The simplest index is the primary index.

To create a primary index, use:

mgr.n1ql_index_create_primary(ignore_exists=True)

You can create additional indexes using:

mgr.n1ql_index_create('idx_foo', fields=['foo'])

class couchbase.bucketmanager.BucketManager[source]
n1ql_index_create(self, ix, **kwargs)[source]

Create an index for use with N1QL.

Parameters:
  • ix (str) – The name of the index to create
  • defer (bool) – Whether the building of indexes should be deferred. If creating multiple indexes on an existing dataset, using the defer option in conjunction with n1ql_index_build_deferred() and n1ql_index_watch() may result in substantially reduced build times.
  • ignore_exists (bool) – Do not throw an exception if the index already exists.
  • fields (list) – A list of fields that should be supplied as keys for the index. For non-primary indexes, this must be specified and must contain at least one field name.
  • primary (bool) – Whether this is a primary index. If creating a primary index, the name may be an empty string and fields must be empty.
  • condition (str) – Specify a condition for indexing. Using a condition reduces the index size.
Raise:

KeyExistsError if the index already exists

n1ql_index_create_primary(self, defer=False, ignore_exists=False)[source]

Create the primary index on the bucket.

Equivalent to:

n1ql_index_create('', primary=True, **kwargs)

Parameters:
  • defer (bool) –
  • ignore_exists (bool) –

See also

n1ql_index_create()

n1ql_index_drop(self, ix, primary=False, **kwargs)[source]

Delete an index from the cluster.

Parameters:
  • ix (str) – the name of the index
  • primary (bool) – if this index is a primary index
  • ignore_missing (bool) – Do not raise an exception if the index does not exist
Raise:

NotFoundError if the index does not exist and ignore_missing was not specified

n1ql_index_drop_primary(self, **kwargs)[source]

Remove the primary index

Equivalent to n1ql_index_drop('', primary=True, **kwargs)

n1ql_index_build_deferred(self, other_buckets=False)[source]

Instruct the server to begin building any previously deferred index definitions.

This method will gather a list of all pending indexes in the cluster (including those created using the defer option with n1ql_index_create()) and start building them in an efficient manner.

Parameters:other_buckets (bool) – Whether to also build indexes found in other buckets, if possible
Returns:list[couchbase._ixmgmt.Index] objects. This list contains the indexes which are being built and may be passed to n1ql_index_watch() to poll their build statuses.

You can use the n1ql_index_watch() method to wait until all indexes have been built:

mgr.n1ql_index_create('ix_fld1', fields=['field1'], defer=True)
mgr.n1ql_index_create('ix_fld2', fields=['field2'], defer=True)
mgr.n1ql_index_create('ix_fld3', fields=['field3'], defer=True)

indexes = mgr.n1ql_index_build_deferred()
# [IndexInfo('field1'), IndexInfo('field2'), IndexInfo('field3')]
mgr.n1ql_index_watch(indexes, timeout=30, interval=1)

n1ql_index_watch(self, indexes, timeout=30, interval=0.2, watch_primary=False)[source]

Await completion of index building

This method will wait up to timeout seconds for every index in indexes to have been built. It will poll the cluster every interval seconds.

Parameters:
  • indexes (list) – A list of indexes to check. This is returned by n1ql_index_build_deferred()
  • timeout (float) – How long to wait for the indexes to become ready.
  • interval (float) – How often to poll the cluster.
  • watch_primary (bool) – Whether to also watch the primary index. This parameter should only be used when manually constructing a list of string indexes
Raise:

TimeoutError if the timeout was reached before all indexes were built

Raise:

NotFoundError if one of the indexes passed no longer exists.

n1ql_index_list(self, other_buckets=False)[source]

List indexes in the cluster.

Parameters:other_buckets (bool) – Whether to also include indexes belonging to other buckets (i.e. buckets other than the current Bucket object)
Returns:list[couchbase._ixmgmt.Index] objects
class couchbase.bucket.Bucket[source]
bucket_manager(self)[source]

Returns a BucketManager object which may be used to perform management operations on the current bucket. These operations may create/modify design documents and flush the bucket

Flushing (clearing) the Bucket

For some stages of development and/or deployment, it might be useful to be able to clear the bucket of its contents.

class couchbase.bucket.Bucket[source]
flush(self)[source]

Clears the bucket’s contents.

Note

This functionality requires that the flush option be enabled for the bucket by the cluster administrator. You can enable flush on the bucket using the administrative console (See http://docs.couchbase.com/admin/admin/UI/ui-data-buckets.html)

Note

This is a destructive operation, as it will clear all the data from the bucket.

Note

A successful execution of this method means that the bucket will have started the flush process. This does not necessarily mean that the bucket is actually empty.

Informational Methods

These methods do not operate on keys directly, but offer various information about things

class couchbase.bucket.Bucket[source]
stats(self, keys=None, keystats=False)[source]

Request server statistics.

Fetches stats from each node in the cluster. Without a key specified, the server responds with a default set of statistical information. The return value is a dict whose keys are stat names and whose values are dicts mapping each node to that stat’s value.

Parameters:keys (string or list of string) – One or several stats to query
Raise:CouchbaseNetworkError
Returns:dict where keys are stat keys and values are host-value pairs

Find out how many items are in the bucket:

total = 0
for key, value in cb.stats()['total_items'].items():
    total += value

Get memory stats (works on couchbase buckets):

cb.stats('memory')
# {'mem_used': {...}, ...}

static lcb_version()[source]

Get the version of the underlying libcouchbase library.

observe(self, key, master_only=False)[source]

Return storage information for a key.

It returns a ValueResult object with the value field set to a list of ObserveInfo objects. Each element in the list corresponds to the storage status of the key on a given node. The length of the list (and thus the number of ObserveInfo objects) is equal to the number of online replicas plus the master for the given key.

Parameters:
  • key (string) – The key to inspect
  • master_only (bool) – Whether to only retrieve information from the master node.

See also

Observe Results
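A small sketch of inspecting observe() results (assuming a connected bucket; per the description above, rv.value holds one ObserveInfo entry per node):

```python
def observed_node_count(cb, key):
    # rv.value is a list of ObserveInfo objects: one for the master
    # plus one for each online replica holding the key.
    rv = cb.observe(key)
    return len(rv.value)
```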

observe_multi(self, keys, master_only=False)[source]

Multi-variant of observe()

Item API Methods

These methods are specifically for the Item API. Most of the multi methods will accept Item objects as well, however there are some special methods for this interface

class couchbase.bucket.Bucket[source]
append_items(self, items, **kwargs)[source]

Method to append data to multiple Item objects.

This method differs from the normal append_multi() in that each Item’s value field is updated with the appended data upon successful completion of the operation.

Parameters:items (ItemOptionDict.) – The item dictionary. The value for each key should contain a fragment field containing the object to append to the value on the server.

The rest of the options are passed verbatim to append_multi()

prepend_items(self, items, **kwargs)[source]

Method to prepend data to multiple Item objects.

See also

append_items()

Durability Constraints

Durability constraints ensure safer protection against data loss.

class couchbase.bucket.Bucket[source]
endure(self, key, persist_to=-1, replicate_to=-1, cas=0, check_removed=False, timeout=5.0, interval=0.01)[source]

Wait until a key has been distributed to one or more nodes

By default, when items are stored to Couchbase, the operation is considered successful if the vBucket master (i.e. the “primary” node) for the key has successfully stored the item in its memory.

In most situations, this is sufficient to assume that the item has successfully been stored. However the possibility remains that the “master” server will go offline as soon as it sends back the successful response and the data is lost.

The endure function allows you to provide stricter criteria for success. The criteria may be expressed in terms of number of nodes for which the item must exist in that node’s RAM and/or on that node’s disk. Ensuring that an item exists in more than one place is a safer way to guarantee against possible data loss.

We call these requirements Durability Constraints, and thus the method is called endure.

Parameters:
  • key (string) – The key to endure.
  • persist_to (int) –

    The minimum number of nodes which must contain this item on their disk before this function returns. Ensure that you do not specify too many nodes; otherwise this function will fail. Use the server_nodes attribute to determine how many nodes exist in the cluster.

    The maximum number of nodes an item can reside on is currently fixed to 4 (i.e. the “master” node, and up to three “replica” nodes). This limitation is current as of Couchbase Server version 2.1.0.

    If this parameter is set to a negative value, the maximum number of possible nodes the key can reside on will be used.

  • replicate_to (int) – The minimum number of replicas which must contain this item in their memory for this method to succeed. As with persist_to, you may specify a negative value in which case the requirement will be set to the maximum number possible.
  • timeout (float) – A timeout value in seconds before this function fails with an exception. Typically it should take no longer than several milliseconds on a functioning cluster for durability requirements to be satisfied (unless something has gone wrong).
  • interval (float) – The polling interval in seconds to use for checking the key status on the respective nodes. Internally, endure is implemented by polling each server individually to see if the key exists on that server’s disk and memory. Once the status request is sent to all servers, the client will check if their replies are satisfactory; if they are then this function succeeds, otherwise the client will wait a short amount of time and try again. This parameter sets this “wait time”.
  • check_removed (bool) – This flag inverts the check. Instead of checking that a given key exists on the nodes, this changes the behavior to check that the key is removed from the nodes.
  • cas (long) – The CAS value to check against. It is possible for an item to exist on a node but have a CAS value from a prior operation. Passing the CAS ensures that only replies from servers with a CAS matching this parameter are accepted.
Returns:

An OperationResult

Raise:

see upsert() and get() for possible errors
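For example, a store-then-endure sequence might look like this (a sketch; the persist_to/replicate_to values assume a cluster with at least one replica configured):

```python
def upsert_durably(cb, key, value):
    rv = cb.upsert(key, value)
    # Block until the item is persisted to disk on the master and
    # replicated to the memory of at least one replica; passing the
    # CAS guards against counting a stale copy from a prior operation.
    cb.endure(key, persist_to=1, replicate_to=1, cas=rv.cas)
    return rv
</parameter>```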

endure_multi(self, keys, persist_to=-1, replicate_to=-1, timeout=5.0, interval=0.01, check_removed=False)[source]

Check durability requirements for multiple keys

Parameters:keys – The keys to check
The type of keys may be one of the following:
  • Sequence of keys
  • A MultiResult object
  • A dict with CAS values as the dictionary value
  • A sequence of Result objects
Returns:A MultiResult object of OperationResult items.

See also

endure()

durability(self, persist_to=-1, replicate_to=-1, timeout=0.0)[source]

Returns a context manager which will apply the given persistence/replication settings to all mutation operations when active

Parameters:
  • persist_to (int) –
  • replicate_to (int) –

See endure() for the meaning of these two values

Thus, something like:

with cb.durability(persist_to=3):
  cb.upsert("foo", "foo_value")
  cb.upsert("bar", "bar_value")
  cb.upsert("baz", "baz_value")

is equivalent to:

cb.upsert("foo", "foo_value", persist_to=3)
cb.upsert("bar", "bar_value", persist_to=3)
cb.upsert("baz", "baz_value", persist_to=3)

New in version 1.2.0.

See also

endure()

Attributes

class couchbase.bucket.Bucket[source]
quiet

Whether to suppress errors when keys are not found (in get() and delete() operations).

An error is still returned within the Result object

transcoder

The Transcoder object being used.

This is normally None unless a custom couchbase.transcoder.Transcoder is being used

data_passthrough

When this flag is set, values are always returned as raw bytes

unlock_gil

Whether GIL manipulation is enabled for this connection object.

This attribute can only be set from the constructor.

timeout

The timeout for key-value operations, in fractions of a second. This timeout affects the get() and upsert() family of methods.

# Set timeout to 3.75 seconds
cb.timeout = 3.75

views_timeout

The timeout for view query operations. This affects the query() method.

Timeout may be specified in fractions of a second.

See also

timeout

n1ql_timeout

The timeout for N1QL query operations. This affects the n1ql_query() method.

Timeouts may also be adjusted on a per-query basis by setting the couchbase.n1ql.N1QLQuery.timeout property. The effective timeout is either the per-query timeout or the global timeout, whichever is lower.
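The "whichever is lower" rule can be expressed as a tiny helper (illustrative only, not part of the SDK):

```python
def effective_n1ql_timeout(global_timeout, query_timeout=None):
    # If a per-query timeout is set, the lower of the two wins;
    # with no per-query timeout the global timeout applies.
    if query_timeout is None:
        return global_timeout
    return min(global_timeout, query_timeout)
```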

bucket

Name of the bucket this object is connected to

server_nodes

Get a list of the current nodes in the cluster

is_ssl

Read-only boolean property indicating whether SSL is used for this connection.

If this property is true, then all communication between this object and the Couchbase cluster is encrypted using SSL.

See __init__() for more information on connection options.

compression

The compression mode to be used when talking to the server.

This can be any of the values in the couchbase._libcouchbase module prefixed with COMPRESS_:

COMPRESS_NONE

Do not perform compression in any direction.

COMPRESS_IN

Decompress incoming data, if the data has been compressed at the server.

COMPRESS_OUT

Compress outgoing data.

COMPRESS_INOUT

Both COMPRESS_IN and COMPRESS_OUT.

COMPRESS_FORCE

Setting this flag will force the client to assume that all servers support compression despite a HELLO not having been initially negotiated.

compression_min_size

Minimum size (in bytes) of the document payload to be compressed when compression is enabled.

Type:int
compression_min_ratio

Minimum compression ratio (compressed / original) that the compressed payload must achieve in order to be sent to the cluster.

Type:float
default_format

Specify the default format (default: FMT_JSON) to encode your data before storing in Couchbase. It uses the flags field to store the format.

See Key and Value Format for the possible values

On a get() the original value will be returned. This means JSON-encoded values are decoded and pickled objects are unpickled.

quiet

Controls whether an exception is raised when the client performs operations on non-existent keys (default: False). When False, couchbase.exceptions.NotFoundError is raised; when True, operations do not raise an exception, but an error is still set inside the Result object.
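quiet can also be passed per-operation to get(); a sketch of a get-with-default helper built on that behavior:

```python
def get_or_default(cb, key, default=None):
    # With quiet=True a missing key yields a Result whose .success is
    # False, instead of raising NotFoundError.
    rv = cb.get(key, quiet=True)
    return rv.value if rv.success else default
```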

lockmode

How access from multiple threads is handled. See Using a Bucket from multiple threads for more information

Private APIs

class couchbase.bucket.Bucket[source]

The following APIs are not supported because using them is inherently dangerous. They are provided as workarounds for specific problems which may be encountered by users, and for potential testing of certain states and/or modifications which are not attainable with the public API.

_close()

Close the instance’s underlying socket resources

Note that operations pending on the connection may fail.

_cntl(self, *args, **kwargs)[source]

Low-level interface to the underlying C library’s settings, via lcb_cntl().

This method accepts an opcode and an optional value. Constants are intentionally not defined for the various opcodes, to allow saner error handling when an unknown opcode is used.

Warning

If you pass the wrong parameters to this API call, your application may crash. For this reason, this is not a public API call. Nevertheless it may be used sparingly as a workaround for settings which may have not yet been exposed directly via a supported API

Parameters:
  • op (int) – Type of cntl to access. These are defined in libcouchbase’s cntl.h header file
  • value – An optional value to supply for the operation. If no value is passed, the operation returns the current value of the cntl without doing anything else; otherwise, it interprets the cntl in a manner that makes sense. If the value is a float, it is treated as a timeout value and multiplied by 1000000 to yield the microsecond equivalent for the library. If the value is a boolean, it is treated as a C int.
  • value_type

    String indicating the type of C-level value to be passed to lcb_cntl(). The possible values are:

    • "string" - NUL-terminated const char.
      Pass a Python string
    • "int" - C int type. Pass a Python int
    • "uint32_t" - C lcb_uint32_t type.
      Pass a Python int
    • "unsigned" - C unsigned int type.
      Pass a Python int
    • "float" - C float type. Pass a Python float
    • "timeout" - The number of seconds as a float. This is
      converted into microseconds within the extension library.
Returns:

If no value argument is provided, retrieves the current setting (per the value_type specification). Otherwise this function returns None.

_cntlstr(self, key, value)[source]

Low-level interface to the underlying C library’s settings, via lcb_cntl_string().

This method accepts a key and a value. It can modify the same sort of settings as the _cntl() method, but may be a bit more convenient to follow in code.

Warning

See _cntl() for warnings.

Parameters:
  • key (string) – The setting key
  • value (string) – The setting value

See the API documentation for libcouchbase for a list of acceptable setting keys.

_vbmap()

Returns a tuple of (vbucket, server index) for a key

Additional Connection Options

This section is intended to document some of the less common connection options and formats of the connection string (see couchbase.bucket.Bucket.__init__()).

Using Custom Ports

If you need to connect to an alternate port for bootstrapping the client (either because your administrator has configured the cluster to listen on alternate ports, or because you are using the built-in cluster_run script provided with the server source code), you may do so in the host list itself.

Simply provide each host in the format of host:port.

Note that the meaning of the port depends on the scheme used: the scheme dictates which specific service the port points to.

Scheme       Protocol
couchbase    memcached port (default is 11210)
couchbases   SSL-encrypted memcached port (default is 11207)
http         REST API/Administrative port (default is 8091)
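As an illustration, a connection string targeting non-default ports could be assembled like this (hostnames, ports, and the helper itself are hypothetical, not part of the SDK):

```python
def make_connection_string(scheme, hosts, bucket, **options):
    # Produces e.g. couchbase://host1:12000,host2/bucket?opt=val
    hostlist = ','.join(hosts)
    query = '&'.join('{0}={1}'.format(k, v)
                     for k, v in sorted(options.items()))
    base = '{0}://{1}/{2}'.format(scheme, hostlist, bucket)
    return base + ('?' + query if query else '')

make_connection_string('couchbase', ['node1:12000', 'node2'], 'default')
# 'couchbase://node1:12000,node2/default'
```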

Options in Connection String

Additional client options may be specified within the connection string itself. These options are derived from the underlying libcouchbase library and thus will accept any input accepted by the library itself. The following are some influential options:

  • config_total_timeout. Number of seconds to wait for the client bootstrap to complete.

  • config_node_timeout. Maximum amount of time (in seconds) to wait while attempting to bootstrap from the current node. If the bootstrap times out (and the config_total_timeout setting is not reached), the bootstrap is then attempted from the next node (or an exception is raised if no more nodes remain).

  • config_cache. If set, this will refer to a file on the filesystem where cached “bootstrap” information may be stored. This path may be shared among multiple instances of the Couchbase client. Using this option may reduce overhead when using many short-lived instances of the client.

    If the file does not exist, it will be created.
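Put together, a connection string using these options might look like the following (the hosts, timeout values, and cache path are illustrative):

```python
# A connection string with bootstrap options in its query section.
cs = ('couchbase://host1,host2/default'
      '?config_total_timeout=30'
      '&config_node_timeout=10'
      '&config_cache=/tmp/couchbase_config_cache')
```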