# system.dead_letter_queue
Contains information about messages received via a streaming engine and parsed with errors. Currently implemented for Kafka and RabbitMQ.

Logging is enabled by specifying `dead_letter_queue` in the engine-specific `handle_error_mode` setting.
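As a minimal sketch of enabling this mode for a Kafka table (the table, broker, topic, and consumer-group names below are illustrative, and for Kafka the setting is spelled `kafka_handle_error_mode`; RabbitMQ tables use the analogous engine-specific setting):

```sql
CREATE TABLE queue
(
    id UInt64,
    value String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'localhost:9092',
         kafka_topic_name = 'events',
         kafka_group_name = 'clickhouse-consumer',
         kafka_format = 'JSONEachRow',
         kafka_handle_error_mode = 'dead_letter_queue';
```

With this setting, rows that fail to parse are not lost: they are recorded in `system.dead_letter_queue` together with the error text and the raw message body.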
The data flushing period is set in the `flush_interval_milliseconds` parameter of the `dead_letter_queue` server settings section. To force flushing, use the `SYSTEM FLUSH LOGS` query.
ClickHouse does not delete data from the table automatically. See Introduction for more details.
Columns:
- `table_engine` (Enum8) - Stream type. Possible values: `Kafka` and `RabbitMQ`.
- `event_date` (Date) - Message consuming date.
- `event_time` (DateTime) - Message consuming date and time.
- `event_time_microseconds` (DateTime64) - Message consuming time with microseconds precision.
- `database` (LowCardinality(String)) - ClickHouse database the streaming table belongs to.
- `table` (LowCardinality(String)) - ClickHouse table name.
- `error` (String) - Error text.
- `raw_message` (String) - Message body.
- `kafka_topic_name` (String) - Kafka topic name.
- `kafka_partition` (UInt64) - Kafka partition of the topic.
- `kafka_offset` (UInt64) - Kafka offset of the message.
- `kafka_key` (String) - Kafka key of the message.
- `rabbitmq_exchange_name` (String) - RabbitMQ exchange name.
- `rabbitmq_message_id` (String) - RabbitMQ message id.
- `rabbitmq_message_timestamp` (DateTime) - RabbitMQ message timestamp.
- `rabbitmq_message_redelivered` (UInt8) - RabbitMQ redelivered flag.
- `rabbitmq_message_delivery_tag` (UInt64) - RabbitMQ delivery tag.
- `rabbitmq_channel_id` (String) - RabbitMQ channel id.
## Example
Query:
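The original example query was not preserved here; a minimal illustrative query (the filter and vertical output format are assumptions) might look like:

```sql
SELECT *
FROM system.dead_letter_queue
WHERE table_engine = 'Kafka'
ORDER BY event_time DESC
LIMIT 1
FORMAT Vertical;
```

Each returned row describes one message that failed to parse, with the error text in `error` and the original payload in `raw_message`.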
Result:
## See Also
- Kafka - the Kafka table engine.
- system.kafka_consumers - Description of the `kafka_consumers` system table, which contains information such as statistics and errors about Kafka consumers.