Messaging library for Python

kombu.enable_insecure_serializers(choices=['pickle', 'yaml', 'msgpack'])[source]

Enable serializers that are considered to be unsafe.

Will enable pickle, yaml and msgpack by default, but you can also specify a list of serializers (by name or content type) to enable.

kombu.disable_insecure_serializers(allowed=['json'])[source]

Disable untrusted serializers.

Will disable all serializers except json, or you can specify a list of deserializers to allow.

Note

Producers will still be able to serialize data in these formats, but consumers will not accept incoming data using the untrusted content types.

Connection

class kombu.Connection(hostname='localhost', userid=None, password=None, virtual_host=None, port=None, insist=False, ssl=False, transport=None, connect_timeout=5, transport_options=None, login_method=None, uri_prefix=None, heartbeat=0, failover_strategy='round-robin', alternates=None, **kwargs)[source]

A connection to the broker.

Parameters:URL – Broker URL, or a list of URLs, e.g.
Connection('amqp://guest:guest@localhost:5672//')
Connection('amqp://foo;amqp://bar', failover_strategy='round-robin')
Connection('redis://', transport_options={
    'visibility_timeout': 3000,
})

import ssl
Connection('amqp://', login_method='EXTERNAL', ssl={
    'ca_certs': '/etc/pki/tls/certs/something.crt',
    'keyfile': '/etc/something/system.key',
    'certfile': '/etc/something/system.cert',
    'cert_reqs': ssl.CERT_REQUIRED,
})

SSL compatibility

SSL currently only works with the py-amqp, amqplib, and qpid transports. For other transports you can use stunnel.

Parameters:
  • ssl – Use SSL to connect to the server. Default is False. May not be supported by the specified transport.
  • transport – Default transport if not specified in the URL.
  • connect_timeout – Timeout in seconds for connecting to the server. May not be supported by the specified transport.
  • transport_options – A dict of additional connection arguments to pass to alternate kombu channel implementations. Consult the transport documentation for available options.
  • heartbeat – Heartbeat interval in int/float seconds. Note that if heartbeats are enabled then the heartbeat_check() method must be called regularly, around once per second.

Note

The connection is established lazily when needed. If you need the connection to be established, then force it by calling connect():

>>> conn = Connection('amqp://')
>>> conn.connect()

and always remember to close the connection:

>>> conn.release()

Legacy options

These options have been replaced by the URL argument, but are still supported for backwards compatibility:

Parameters:
  • hostname – Host name/address. NOTE: You cannot specify both the URL argument and use the hostname keyword argument at the same time.
  • userid – Default user name if not provided in the URL.
  • password – Default password if not provided in the URL.
  • virtual_host – Default virtual host if not provided in the URL.
  • port – Default port if not provided in the URL.

Attributes

hostname = None
port = None
userid = None
password = None
virtual_host = '/'
ssl = None
login_method = None
failover_strategy = 'round-robin'

Strategy used to select new hosts when reconnecting after connection failure. One of “round-robin”, “shuffle” or any custom iterator constantly yielding new URLs to try.

connect_timeout = 5
heartbeat = None

Heartbeat value, currently only supported by the py-amqp transport.

default_channel

Default channel, created upon access and closed when the connection is closed.

Can be used for automatic channel handling when you only need one channel. It is also the channel implicitly used when a connection, rather than a channel, is passed to functions that require a channel.

connected

Return true if the connection has been established.

recoverable_connection_errors[source]

List of connection related exceptions that can be recovered from, but where the connection must be closed and re-established first.

recoverable_channel_errors[source]

List of channel related exceptions that can be automatically recovered from without re-establishing the connection.

connection_errors[source]

List of exceptions that may be raised by the connection.

channel_errors[source]

List of exceptions that may be raised by the channel.

transport
connection

The underlying connection object.

Warning

This instance is transport specific, so do not depend on the interface of this object.

uri_prefix = None
declared_entities = None

The cache of declared entities is per connection, in case the server loses data.

cycle = None

Iterator returning the next broker URL to try in the event of connection failure (initialized by failover_strategy).

host

The host as a host name/port pair separated by colon.

manager[source]

Experimental manager that can be used to manage/monitor the broker instance. Not available for all transports.

supports_heartbeats
is_evented

Methods

as_uri(include_password=False, mask='**', getfields=<operator.itemgetter object>)[source]

Convert connection parameters to URL form.

connect()[source]

Establish connection to server immediately.

channel()[source]

Create and return a new channel.

drain_events(**kwargs)[source]

Wait for a single event from the server.

Parameters:timeout – Timeout in seconds before we give up.

Raises socket.timeout if the timeout is exceeded.

release()[source]

Close the connection (if open).

autoretry(fun, channel=None, **ensure_options)[source]

Decorator for functions supporting a channel keyword argument.

The resulting callable will retry calling the function if it raises connection or channel related errors. The return value will be a tuple of (retval, last_created_channel).

If a channel is not provided, then one will be automatically acquired (remember to close it afterwards).

See ensure() for the full list of supported keyword arguments.

Example usage:

channel = connection.channel()
try:
    ret, channel = connection.autoretry(publish_messages, channel)
finally:
    channel.close()
ensure_connection(errback=None, max_retries=None, interval_start=2, interval_step=2, interval_max=30, callback=None)[source]

Ensure we have a connection to the server.

If not, retry establishing the connection with the specified settings.

Parameters:
  • errback – Optional callback called each time the connection can’t be established. Arguments provided are the exception raised and the interval that will be slept (exc, interval).
  • max_retries – Maximum number of times to retry. If this limit is exceeded the connection error will be re-raised.
  • interval_start – The number of seconds we start sleeping for.
  • interval_step – How many seconds added to the interval for each retry.
  • interval_max – Maximum number of seconds to sleep between each retry.
  • callback – Optional callback that is called for every internal iteration (1 s)
ensure(obj, fun, errback=None, max_retries=None, interval_start=1, interval_step=1, interval_max=1, on_revive=None)[source]

Ensure operation completes, regardless of any channel/connection errors occurring.

Will retry by establishing the connection, and reapplying the function.

Parameters:
  • fun – Method to apply.
  • errback – Optional callback called each time the connection can’t be established. Arguments provided are the exception raised and the interval that will be slept (exc, interval).
  • max_retries – Maximum number of times to retry. If this limit is exceeded the connection error will be re-raised.
  • interval_start – The number of seconds we start sleeping for.
  • interval_step – How many seconds added to the interval for each retry.
  • interval_max – Maximum number of seconds to sleep between each retry.

Example

This is an example ensuring a publish operation:

>>> from kombu import Connection, Producer
>>> conn = Connection('amqp://')
>>> producer = Producer(conn)

>>> def errback(exc, interval):
...     logger.error('Error: %r', exc, exc_info=1)
...     logger.info('Retry in %s seconds.', interval)

>>> publish = conn.ensure(producer, producer.publish,
...                       errback=errback, max_retries=3)
>>> publish({'hello': 'world'}, routing_key='dest')
revive(new_channel)[source]

Revive connection after connection re-established.

create_transport()[source]
get_transport_cls()[source]

Get the currently used transport class.

clone(**kwargs)[source]

Create a copy of the connection with the same connection settings.

info()[source]

Get connection info.

switch(url)[source]

Switch connection parameters to use a new URL (does not reconnect).

maybe_switch_next()[source]

Switch to next URL given by the current failover strategy (if any).

heartbeat_check(rate=2)[source]

Allow the transport to perform any periodic tasks required to make heartbeats work. This should be called approximately every second.

If the current transport does not support heartbeats then this is a noop operation.

Parameters:rate – Rate is how often the tick is called compared to the actual heartbeat value. E.g. if the heartbeat is set to 3 seconds, and the tick is called every 3 / 2 seconds, then the rate is 2. This value is currently unused by any transports.
maybe_close_channel(channel)[source]

Close given channel, but ignore connection and channel errors.

register_with_event_loop(loop)[source]
close()

Close the connection (if open).

_close()[source]

Really close connection, even if part of a connection pool.

completes_cycle(retries)[source]

Return true if the cycle is complete after number of retries.

get_manager(*args, **kwargs)[source]
Producer(channel=None, *args, **kwargs)[source]

Create new kombu.Producer instance using this connection.

Consumer(queues=None, channel=None, *args, **kwargs)[source]

Create new kombu.Consumer instance using this connection.

Pool(limit=None, preload=None)[source]

Pool of connections.

See ConnectionPool.

Parameters:
  • limit – Maximum number of active connections. Default is no limit.
  • preload – Number of connections to preload when the pool is created. Default is 0.

Example usage:

>>> connection = Connection('amqp://')
>>> pool = connection.Pool(2)
>>> c1 = pool.acquire()
>>> c2 = pool.acquire()
>>> c3 = pool.acquire()
>>> c1.release()
>>> c3 = pool.acquire()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "kombu/connection.py", line 354, in acquire
  raise ConnectionLimitExceeded(self.limit)
    kombu.exceptions.ConnectionLimitExceeded: 2
ChannelPool(limit=None, preload=None)[source]

Pool of channels.

See ChannelPool.

Parameters:
  • limit – Maximum number of active channels. Default is no limit.
  • preload – Number of channels to preload when the pool is created. Default is 0.

Example usage:

>>> connection = Connection('amqp://')
>>> pool = connection.ChannelPool(2)
>>> c1 = pool.acquire()
>>> c2 = pool.acquire()
>>> c3 = pool.acquire()
>>> c1.release()
>>> c3 = pool.acquire()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "kombu/connection.py", line 354, in acquire
  raise ChannelLimitExceeded(self.limit)
    kombu.connection.ChannelLimitExceeded: 2
SimpleQueue(name, no_ack=None, queue_opts=None, exchange_opts=None, channel=None, **kwargs)[source]

Create new SimpleQueue, using a channel from this connection.

If name is a string, a queue and exchange will be automatically created using that name as the name of the queue and exchange. The name will also be used as the default routing key.

Parameters:
  • name – Name of the queue/or a Queue.
  • no_ack – Disable acknowledgements. Default is false.
  • queue_opts – Additional keyword arguments passed to the constructor of the automatically created Queue.
  • exchange_opts – Additional keyword arguments passed to the constructor of the automatically created Exchange.
  • channel – Custom channel to use. If not specified the connection default channel is used.
SimpleBuffer(name, no_ack=None, queue_opts=None, exchange_opts=None, channel=None, **kwargs)[source]

Create new SimpleQueue using a channel from this connection.

Same as SimpleQueue(), but configured with buffering semantics. The resulting queue and exchange will not be durable, and auto_delete is enabled. Messages will be transient (not persistent), and acknowledgements are disabled (no_ack).

Exchange

Example creating an exchange declaration:

>>> news_exchange = Exchange('news', type='topic')

For now news_exchange is just a declaration; you can’t perform actions on it. It only describes the name and options for the exchange.

The exchange can be bound or unbound. Bound means the exchange is associated with a channel and operations can be performed on it. To bind the exchange you call the exchange with the channel as argument:

>>> bound_exchange = news_exchange(channel)

Now you can perform operations like declare() or delete():

>>> bound_exchange.declare()
>>> message = bound_exchange.Message('Cure for cancer found!')
>>> bound_exchange.publish(message, routing_key='news.science')
>>> bound_exchange.delete()
class kombu.Exchange(name='', type='', channel=None, **kwargs)[source]

An Exchange declaration.

Parameters:
name

Name of the exchange. Default is no name (the default exchange).

type

This description of AMQP exchange types was shamelessly stolen from the blog post AMQP in 10 minutes: Part 4 by Rajith Attapattu. Reading this article is recommended if you’re new to AMQP.

“AMQP defines four default exchange types (routing algorithms) that covers most of the common messaging use cases. An AMQP broker can also define additional exchange types, so see your broker manual for more information about available exchange types.

  • direct (default)

    Direct match between the routing key in the message, and the routing criteria used when a queue is bound to this exchange.

  • topic

    Wildcard match between the routing key and the routing pattern specified in the exchange/queue binding. The routing key is treated as zero or more words delimited by ”.” and supports special wildcard characters. “*” matches a single word and “#” matches zero or more words.

  • fanout

    Queues are bound to this exchange with no arguments. Hence any message sent to this exchange will be forwarded to all queues bound to this exchange.

  • headers

    Queues are bound to this exchange with a table of arguments containing headers and values (optional). A special argument named “x-match” determines the matching algorithm, where “all” implies an AND (all pairs must match) and “any” implies OR (at least one pair must match).

    arguments is used to specify the arguments.

channel

The channel the exchange is bound to (if bound).

durable

Durable exchanges remain active when a server restarts. Non-durable exchanges (transient exchanges) are purged when a server restarts. Default is True.

auto_delete

If set, the exchange is deleted when all queues have finished using it. Default is False.

delivery_mode

The default delivery mode used for messages. The value is an integer, or alias string.

  • 1 or “transient”

    The message is transient, meaning it is stored in memory only and is lost if the server dies or restarts.

  • 2 or “persistent” (default)

    The message is persistent, meaning it is stored both in memory and on disk, and is therefore preserved if the server dies or restarts.

The default value is 2 (persistent).

arguments

Additional arguments to specify when the exchange is declared.

maybe_bind(channel)

Bind instance to channel if not already bound.

Message(body, delivery_mode=None, priority=None, content_type=None, content_encoding=None, properties=None, headers=None)[source]

Create message instance to be sent with publish().

Parameters:
  • body – Message body.
  • delivery_mode – Set custom delivery mode. Defaults to delivery_mode.
  • priority – Message priority, 0 to 9. (currently not supported by RabbitMQ).
  • content_type – The message’s content_type. If content_type is set, no serialization occurs as it is assumed this is either a binary object, or you’ve done your own serialization. Leave blank if using built-in serialization, as our library properly sets content_type.
  • content_encoding – The character set in which this object is encoded. Use “binary” if sending in raw binary objects. Leave blank if using built-in serialization as our library properly sets content_encoding.
  • properties – Message properties.
  • headers – Message headers.
PERSISTENT_DELIVERY_MODE = 2
TRANSIENT_DELIVERY_MODE = 1
attrs = (('name', None), ('type', None), ('arguments', None), ('durable', <type 'bool'>), ('passive', <type 'bool'>), ('auto_delete', <type 'bool'>), ('delivery_mode', <function <lambda> at 0x7fbd002c8758>))
auto_delete = False
bind_to(exchange='', routing_key='', arguments=None, nowait=False, **kwargs)[source]

Binds the exchange to another exchange.

Parameters:nowait – If set the server will not respond, and the call will not block waiting for a response. Default is False.
binding(routing_key='', arguments=None, unbind_arguments=None)[source]
can_cache_declaration
declare(nowait=False, passive=None)[source]

Declare the exchange.

Creates the exchange on the broker.

Parameters:nowait – If set the server will not respond, and a response will not be waited for. Default is False.
delete(if_unused=False, nowait=False)[source]

Delete the exchange declaration on server.

Parameters:
  • if_unused – Delete only if the exchange has no bindings. Default is False.
  • nowait – If set the server will not respond, and a response will not be waited for. Default is False.
delivery_mode = 2
durable = True
name = ''
passive = False
publish(message, routing_key=None, mandatory=False, immediate=False, exchange=None)[source]

Publish message.

Parameters:
  • message – Message() instance to publish.
  • routing_key – Routing key.
  • mandatory – Currently not supported.
  • immediate – Currently not supported.
type = 'direct'
unbind_from(source='', routing_key='', nowait=False, arguments=None)[source]

Delete previously created exchange binding from the server.

Queue

Example creating a queue using our exchange in the Exchange example:

>>> science_news = Queue('science_news',
...                      exchange=news_exchange,
...                      routing_key='news.science')

For now science_news is just a declaration; you can’t perform actions on it. It only describes the name and options for the queue.

The queue can be bound or unbound. Bound means the queue is associated with a channel and operations can be performed on it. To bind the queue you call the queue instance with the channel as an argument:

>>> bound_science_news = science_news(channel)

Now you can perform operations like declare() or purge():

>>> bound_science_news.declare()
>>> bound_science_news.purge()
>>> bound_science_news.delete()
class kombu.Queue(name='', exchange=None, routing_key='', channel=None, bindings=None, on_declared=None, **kwargs)[source]

A Queue declaration.

Parameters:
name

Name of the queue. Default is no name (default queue destination).

exchange

The Exchange the queue binds to.

routing_key

The routing key (if any), also called binding key.

The interpretation of the routing key depends on the Exchange.type.

  • direct exchange

    Matches if the routing key property of the message and the routing_key attribute are identical.

  • fanout exchange

    Always matches, even if the binding does not have a key.

  • topic exchange

    Matches the routing key property of the message by a primitive pattern matching scheme. The message routing key then consists of words separated by dots (”.”, like domain names), and two special characters are available; star (“*”) and hash (“#”). The star matches any word, and the hash matches zero or more words. For example “*.stock.#” matches the routing keys “usd.stock” and “eur.stock.db” but not “stock.nasdaq”.

channel

The channel the Queue is bound to (if bound).

durable

Durable queues remain active when a server restarts. Non-durable queues (transient queues) are purged if/when a server restarts. Note that durable queues do not necessarily hold persistent messages, although it does not make sense to send persistent messages to a transient queue.

Default is True.

exclusive

Exclusive queues may only be consumed from by the current connection. Setting the ‘exclusive’ flag always implies ‘auto-delete’.

Default is False.

auto_delete

If set, the queue is deleted when all consumers have finished using it. The last consumer can be cancelled either explicitly or because its channel is closed. If there was never a consumer on the queue, it won’t be deleted.

queue_arguments

Additional arguments used when declaring the queue.

binding_arguments

Additional arguments used when binding the queue.

alias

Unused in Kombu, but applications can take advantage of this, for example to give alternate names to queues with automatically generated queue names.

on_declared

Optional callback to be applied when the queue has been declared (the queue_declare operation is complete). This must be a function with a signature that accepts at least 3 positional arguments: (name, messages, consumers).

maybe_bind(channel)

Bind instance to channel if not already bound.

exception ContentDisallowed

Consumer does not allow this content-type.

Queue.as_dict(recurse=False)[source]
Queue.attrs = (('name', None), ('exchange', None), ('routing_key', None), ('queue_arguments', None), ('binding_arguments', None), ('durable', <type 'bool'>), ('exclusive', <type 'bool'>), ('auto_delete', <type 'bool'>), ('no_ack', None), ('alias', None), ('bindings', <type 'list'>))
Queue.auto_delete = False
Queue.bind(channel)[source]
Queue.bind_to(exchange='', routing_key='', arguments=None, nowait=False)[source]
Queue.can_cache_declaration
Queue.cancel(consumer_tag)[source]

Cancel a consumer by consumer tag.

Queue.consume(consumer_tag='', callback=None, no_ack=None, nowait=False)[source]

Start a queue consumer.

Consumers last as long as the channel they were created on, or until the client cancels them.

Parameters:
  • consumer_tag – Unique identifier for the consumer. The consumer tag is local to a connection, so two clients can use the same consumer tags. If this field is empty the server will generate a unique tag.
  • no_ack – If enabled the broker will automatically ack messages.
  • nowait – Do not wait for a reply.
  • callback – callback called for each delivered message
Queue.declare(nowait=False)[source]

Declares the queue and the exchange, and binds the queue to the exchange.

Queue.delete(if_unused=False, if_empty=False, nowait=False)[source]

Delete the queue.

Parameters:
  • if_unused – If set, the server will only delete the queue if it has no consumers. A channel error will be raised if the queue has consumers.
  • if_empty – If set, the server will only delete the queue if it is empty. If it is not empty a channel error will be raised.
  • nowait – Do not wait for a reply.
Queue.durable = True
Queue.exchange = <unbound Exchange ''(direct)>
Queue.exclusive = False
classmethod Queue.from_dict(queue, **options)[source]
Queue.get(no_ack=None, accept=None)[source]

Poll the server for a new message.

Must return the message if a message was available, or None otherwise.

Parameters:
  • no_ack – If enabled the broker will automatically ack messages.
  • accept – Custom list of accepted content types.

This method provides direct access to the messages in a queue using a synchronous dialogue, designed for specific types of applications where synchronous functionality is more important than performance.

Queue.name = ''
Queue.no_ack = False
Queue.purge(nowait=False)[source]

Remove all ready messages from the queue.

Queue.queue_bind(nowait=False)[source]

Create the queue binding on the server.

Queue.queue_declare(nowait=False, passive=False)[source]

Declare queue on the server.

Parameters:
  • nowait – Do not wait for a reply.
  • passive – If set, the server will not create the queue. The client can use this to check whether a queue exists without modifying the server state.
Queue.queue_unbind(arguments=None, nowait=False)[source]
Queue.routing_key = ''
Queue.unbind_from(exchange='', routing_key='', arguments=None, nowait=False)[source]

Unbind queue by deleting the binding from the server.

Queue.when_bound()[source]

Message Producer

class kombu.Producer(channel, exchange=None, routing_key=None, serializer=None, auto_declare=None, compression=None, on_return=None)[source]

Message Producer.

Parameters:
  • channel – Connection or channel.
  • exchange – Optional default exchange.
  • routing_key – Optional default routing key.
  • serializer – Default serializer. Default is “json”.
  • compression – Default compression method. Default is no compression.
  • auto_declare – Automatically declare the default exchange at instantiation. Default is True.
  • on_return – Callback to call for undeliverable messages, when the mandatory or immediate arguments to publish() are used. This callback needs the following signature: (exception, exchange, routing_key, message). Note that the producer needs to drain events to use this feature.
channel
exchange = None

Default exchange

routing_key = ''

Default routing key.

serializer = None

Default serializer to use. Default is JSON.

compression = None

Default compression method. Disabled by default.

auto_declare = True

By default the exchange is declared at instantiation. If you want to declare manually then you can set this to False.

on_return = None

Basic return callback.

connection
declare()[source]

Declare the exchange.

This happens automatically at instantiation if auto_declare is enabled.

maybe_declare(entity, retry=False, **retry_policy)[source]

Declare the exchange if it hasn’t already been declared during this session.

publish(body, routing_key=None, delivery_mode=None, mandatory=False, immediate=False, priority=0, content_type=None, content_encoding=None, serializer=None, headers=None, compression=None, exchange=None, retry=False, retry_policy=None, declare=[], expiration=None, **properties)[source]

Publish message to the specified exchange.

Parameters:
  • body – Message body.
  • routing_key – Message routing key.
  • delivery_mode – See delivery_mode.
  • mandatory – Currently not supported.
  • immediate – Currently not supported.
  • priority – Message priority. A number between 0 and 9.
  • content_type – Content type. Default is auto-detect.
  • content_encoding – Content encoding. Default is auto-detect.
  • serializer – Serializer to use. Default is auto-detect.
  • compression – Compression method to use. Default is none.
  • headers – Mapping of arbitrary headers to pass along with the message body.
  • exchange – Override the exchange. Note that this exchange must have been declared.
  • declare – Optional list of required entities that must have been declared before publishing the message. The entities will be declared using maybe_declare().
  • retry – Retry publishing, or declaring entities if the connection is lost.
  • retry_policy – Retry configuration, this is the keywords supported by ensure().
  • expiration – A TTL in seconds can be specified per message. Default is no expiration.
  • **properties – Additional message properties, see AMQP spec.
revive(channel)[source]

Revive the producer after connection loss.

Message Consumer

class kombu.Consumer(channel, queues=None, no_ack=None, auto_declare=None, callbacks=None, on_decode_error=None, on_message=None, accept=None, tag_prefix=None)[source]

Message consumer.

Parameters:
channel = None

The connection/channel to use for this consumer.

queues = None

A single Queue, or a list of queues to consume from.

no_ack = None

Flag for automatic message acknowledgment. If enabled the messages are automatically acknowledged by the broker. This can increase performance but means that you have no control of when the message is removed.

Disabled by default.

auto_declare = True

By default all entities will be declared at instantiation, if you want to handle this manually you can set this to False.

callbacks = None

List of callbacks called in order when a message is received.

The signature of the callbacks must take two arguments: (body, message), which is the decoded message body and the Message instance (a subclass of Message).

on_message = None

Optional function called whenever a message is received.

When defined this function will be called instead of the receive() method, and callbacks will be disabled.

So this can be used as an alternative to callbacks when you don’t want the body to be automatically decoded. Note that the message will still be decompressed if the message has the compression header set.

The signature of the callback must take a single argument, which is the raw message object (a subclass of Message).

Also note that the message.body attribute, which is the raw contents of the message body, may in some cases be a read-only buffer object.

on_decode_error = None

Callback called when a message can’t be decoded.

The signature of the callback must take two arguments: (message, exc), which is the message that can’t be decoded and the exception that occurred while trying to decode it.

connection
declare()[source]

Declare queues, exchanges and bindings.

This is done automatically at instantiation if auto_declare is set.

register_callback(callback)[source]

Register a new callback to be called when a message is received.

The signature of the callback needs to accept two arguments: (body, message), which is the decoded message body and the Message instance (a subclass of Message).

add_queue(queue)[source]

Add a queue to the list of queues to consume from.

This will not start consuming from the queue, for that you will have to call consume() after.

add_queue_from_dict(queue, **options)[source]

This method is deprecated.

Instead please use:

consumer.add_queue(Queue.from_dict(d))
consume(no_ack=None)[source]

Start consuming messages.

Can be called multiple times, but note that while it will consume from new queues added since the last call, it will not cancel consuming from removed queues (use cancel_by_queue() for that).

Parameters:no_ack – See no_ack.
cancel()[source]

End all active queue consumers.

This does not affect already delivered messages, but it does mean the server will not send any more messages for this consumer.

cancel_by_queue(queue)[source]

Cancel consumer by queue name.

consuming_from(queue)[source]

Return True if the consumer is currently consuming from queue.

purge()[source]

Purge messages from all queues.

Warning

This will delete all ready messages, there is no undo operation.

flow(active)[source]

Enable/disable flow from peer.

This is a simple flow-control mechanism that a peer can use to avoid overflowing its queues or otherwise finding itself receiving more messages than it can process.

The peer that receives a request to stop sending content will finish sending the current content (if any), and then wait until flow is reactivated.

qos(prefetch_size=0, prefetch_count=0, apply_global=False)[source]

Specify quality of service.

The client can request that messages should be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement.

The prefetch window is ignored if the no_ack option is set.

Parameters:
  • prefetch_size – Specify the prefetch window in octets. The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls within other prefetch limits). May be set to zero, meaning “no specific limit”, although other prefetch limits may still apply.
  • prefetch_count – Specify the prefetch window in terms of whole messages.
  • apply_global – Apply new settings globally on all channels.
recover(requeue=False)[source]

Redeliver unacknowledged messages.

Asks the broker to redeliver all unacknowledged messages on the specified channel.

Parameters:requeue – By default the messages will be redelivered to the original recipient. With requeue set to true, the server will attempt to requeue the message, potentially then delivering it to an alternative subscriber.
receive(body, message)[source]

Method called when a message is received.

This dispatches to the registered callbacks.

Parameters:
  • body – The decoded message body.
  • message – The Message instance.
Raises:

NotImplementedError – If no consumer callbacks have been registered.

revive(channel)[source]

Revive consumer after connection loss.