Docker Console v2
The document details a series of Docker commands executed to manage a local development environment for a project named 'hive-participation-service'. It covers creating and removing Docker networks and launching containers for Elasticsearch, DynamoDB Local, ZooKeeper, Kafka, a schema service, and Kafka UI with Docker Compose, and it logs the initialization and configuration details of these services, including user permissions and environment settings.


Last login: Fri Dec 9 07:37:53 on console

haiho@ip-192-168-20-101 ~ % cd works/projects/hive-participation-service
haiho@ip-192-168-20-101 hive-participation-service % docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
20e0451571ce   bridge                 bridge    local
fc180dd78fae   happymoney-hps-local   bridge    local
a60302418dd6   host                   host      local
c371fb580572   none                   null      local
haiho@ip-192-168-20-101 hive-participation-service % docker network remove fc18
fc18
haiho@ip-192-168-20-101 hive-participation-service % docker network create happymoney-hps-local
ef1cf735bab942e48621f05621f10249473f7d9921e003767e58ed560582a3ef
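
The happymoney-hps-local network is dropped and recreated above to give the project a clean slate. Before starting the stack, the network can be verified with standard Docker CLI commands; a minimal sketch:

    # Confirm the network exists and note its driver and scope.
    docker network ls --filter name=happymoney-hps-local

    # Show subnet details and, once containers start, which ones are attached.
    docker network inspect happymoney-hps-local
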
haiho@ip-192-168-20-101 hive-participation-service % docker-compose -f ./app/docker-compose.yml up &
[1] 2008
[+] Running 7/7
 ⠿ Network app_default      Created    0.0s
 ⠿ Container elasticsearch  Created    0.0s
 ⠿ Container dynamodb       Created    0.0s
 ⠿ Container zookeeper      Created    0.0s
 ⠿ Container kafka          Created    0.0s
 ⠿ Container schema         Created    0.0s
 ⠿ Container kafka-ui       Created    0.0s
Attaching to dynamodb, elasticsearch, kafka, kafka-ui, schema, zookeeper
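
Because docker-compose was started with a trailing `&`, it runs as background job [1] of this shell, which is why its log output interleaves with the prompt throughout the rest of this transcript. A sketch of how to check on the stack without foregrounding the job:

    # Show the state of every service defined in the compose file.
    docker-compose -f ./app/docker-compose.yml ps

    # Stream logs for a single service, e.g. kafka.
    docker-compose -f ./app/docker-compose.yml logs -f kafka

    # Or bring job [1] back to the foreground.
    fg %1
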
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper | ===> Configuring ...
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
schema | ===> User
schema | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
schema | ===> Configuring ...
dynamodb | Initializing DynamoDB Local with the following configuration:
dynamodb | Port: 8000
dynamodb | InMemory: false
dynamodb | DbPath: /home/dynamodblocal
dynamodb | SharedDb: true
dynamodb | shouldDelayTransientStatuses: false
dynamodb | CorsParams: null
dynamodb |
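
DynamoDB Local reports that it is persisting data (InMemory: false) to /home/dynamodblocal on port 8000, with SharedDb enabled so every credential pair sees the same database file. Assuming the compose file publishes port 8000 to the host (not shown in this transcript), the AWS CLI can exercise it with dummy credentials; a sketch:

    # DynamoDB Local accepts any access key; SharedDb means the same
    # tables are visible regardless of the credentials used.
    AWS_ACCESS_KEY_ID=local AWS_SECRET_ACCESS_KEY=local \
    aws dynamodb list-tables --endpoint-url http://localhost:8000 --region us-east-1
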
haiho@ip-192-168-20-101 hive-participation-service %
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
schema | ===> Running preflight checks ...
schema | ===> Check if Kafka is healthy ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
schema | SLF4J: Class path contains multiple SLF4J
bindings.
schema | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/
slf4j/impl/StaticLoggerBinder.class]
schema | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org
/slf4j/impl/StaticLoggerBinder.class]
schema | SLF4J: See
http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
kafka | ===> Check if Zookeeper is healthy ...
schema | SLF4J: Actual binding is of type
[org.slf4j.impl.SimpleLoggerFactory]
kafka-ui | [ASCII art banner: "UI for Apache Kafka"]
schema | [main] INFO
org.apache.kafka.clients.admin.AdminClientConfig -
AdminClientConfig values:
schema | bootstrap.servers = [kafka-local:9095]
schema | client.dns.lookup = use_all_dns_ips
schema | client.id =
schema | connections.max.idle.ms = 300000
schema | default.api.timeout.ms = 60000
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 2147483647
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema |
kafka-ui | 2022-12-09 00:47:41,252 INFO [background-
preinit] o.h.v.i.u.Version: HV000001: Hibernate Validator
6.2.0.Final
kafka-ui | 2022-12-09 00:47:41,832 INFO [main]
c.p.k.u.KafkaUiApplication: Starting KafkaUiApplication using
Java 13.0.9 on 28c12c062f11 with PID 1 (/kafka-ui-api.jar
started by kafkaui in /)
kafka-ui | 2022-12-09 00:47:41,837 DEBUG [main]
c.p.k.u.KafkaUiApplication: Running with Spring Boot v2.6.3,
Spring v5.3.15
kafka-ui | 2022-12-09 00:47:41,844 INFO [main]
c.p.k.u.KafkaUiApplication: No active profile set, falling back to
default profiles: default
kafka | SLF4J: Class path contains multiple SLF4J
bindings.
kafka | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/
slf4j/impl/StaticLoggerBinder.class]
kafka | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org
/slf4j/impl/StaticLoggerBinder.class]
kafka | SLF4J: See
http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
kafka | SLF4J: Actual binding is of type
[org.slf4j.impl.SimpleLoggerFactory]
schema | [main] INFO
org.apache.kafka.common.utils.AppInfoParser - Kafka version:
7.1.1-ccs
schema | [main] INFO
org.apache.kafka.common.utils.AppInfoParser - Kafka commitId:
947fac5beb61836d
schema | [main] INFO
org.apache.kafka.common.utils.AppInfoParser - Kafka
startTimeMs: 1670546862986
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:zookeeper.version=3.6.3--
6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on
04/08/2021 16:35 GMT
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:host.name=kafka
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.version=11.0.14.1
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.vendor=Azul Systems, Inc.
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.home=/usr/lib/jvm/zulu11-ca
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.class.path=/usr/share/java/cp-base-
new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/metrics-
core-4.1.12.1.jar:/usr/share/java/cp-base-new/minimal-json-
0.9.5.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-
2.12.3.jar:/usr/share/java/cp-base-new/kafka_2.13-7.1.1-
ccs.jar:/usr/share/java/cp-base-new/jackson-databind-
2.12.3.jar:/usr/share/java/cp-base-new/snappy-java-
1.1.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/
usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar:/usr/share/
java/cp-base-new/audience-annotations-0.5.0.jar:/usr/share/
java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/
jackson-module-scala_2.13-2.12.3.jar:/usr/share/java/cp-base-
new/scala-logging_2.13-3.9.3.jar:/usr/share/java/cp-base-new/
zstd-jni-1.5.0-4.jar:/usr/share/java/cp-base-new/logredactor-
metrics-1.0.10.jar:/usr/share/java/cp-base-new/kafka-raft-7.1.1-
ccs.jar:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar:/
usr/share/java/cp-base-new/kafka-storage-7.1.1-ccs.jar:/usr/
share/java/cp-base-new/slf4j-api-1.7.30.jar:/usr/share/java/cp-
base-new/scala-collection-compat_2.13-2.4.4.jar:/usr/share/
java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-
new/jmx_prometheus_javaagent-0.14.0.jar:/usr/share/java/cp-
base-new/kafka-clients-7.1.1-ccs.jar:/usr/share/java/cp-base-
new/jose4j-0.7.8.jar:/usr/share/java/cp-base-new/zookeeper-
3.6.3.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.5.jar:/
usr/share/java/cp-base-new/kafka-metadata-7.1.1-ccs.jar:/usr/
share/java/cp-base-new/gson-2.8.6.jar:/usr/share/java/cp-base-
new/common-utils-7.1.1.jar:/usr/share/java/cp-base-new/kafka-
server-common-7.1.1-ccs.jar:/usr/share/java/cp-base-new/
jolokia-jvm-1.6.2-agent.jar:/usr/share/java/cp-base-new/json-
simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-
dataformat-yaml-2.12.3.jar:/usr/share/java/cp-base-new/scala-
java8-compat_2.13-1.0.0.jar:/usr/share/java/cp-base-new/disk-
usage-agent-7.1.1.jar:/usr/share/java/cp-base-new/paranamer-
2.8.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/
share/java/cp-base-new/logredactor-1.0.10.jar:/usr/share/java/
cp-base-new/snakeyaml-1.27.jar:/usr/share/java/cp-base-new/
zookeeper-jute-3.6.3.jar:/usr/share/java/cp-base-new/jackson-
annotations-2.12.3.jar:/usr/share/java/cp-base-new/argparse4j-
0.7.0.jar:/usr/share/java/cp-base-new/confluent-log4j-1.2.17-
cp10.jar:/usr/share/java/cp-base-new/scala-library-2.13.5.jar:/
usr/share/java/cp-base-new/utility-belt-7.1.1.jar:/usr/share/
java/cp-base-new/kafka-storage-api-7.1.1-ccs.jar:/usr/share/
java/cp-base-new/jolokia-core-1.6.2.jar:/usr/share/java/cp-base-
new/jackson-datatype-jdk8-2.12.3.jar:/usr/share/java/cp-base-
new/jackson-core-2.12.3.jar
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:
/lib64:/lib:/usr/lib
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.io.tmpdir=/tmp
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.compiler=<NA>
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.name=Linux
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.arch=amd64
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.version=5.15.49-linuxkit
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:user.name=appuser
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:user.home=/home/appuser
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:user.dir=/home/appuser
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.memory.free=117MB
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.memory.max=1964MB
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.memory.total=124MB
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Initiating client connection, connectString=zookeeper:2191
sessionTimeout=40000
watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher
@289d1c02
kafka | [main] INFO
org.apache.zookeeper.common.X509Util - Setting -D
jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-
initiated TLS renegotiation
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [main] INFO
org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value
is 1048575 Bytes
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [main] INFO org.apache.zookeeper.ClientCnxn -
zookeeper.request.timeout value is 0. feature enabled=false
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
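
These repeating "Node -1 disconnected" / "Broker may not be available" messages come from the schema container's preflight check ("Check if Kafka is healthy" above): its AdminClient polls kafka-local:9095 until the broker finishes starting, so during startup they indicate ordering, not failure. A sketch of an equivalent manual probe, assuming the Confluent image's bundled tooling and that the kafka-local alias resolves from inside the kafka container:

    # Prints the broker's supported API versions once it accepts connections.
    docker exec kafka kafka-broker-api-versions --bootstrap-server kafka-local:9095
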
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.19.0.4:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] WARN
org.apache.zookeeper.ClientCnxn - Session 0x0 for sever
zookeeper/172.19.0.4:2191, Closing socket connection.
Attempting reconnect except it is a SessionExpiredException.
kafka | java.net.ConnectException: Connection refused
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native
Method)
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketC
hannelImpl.java:777)
kafka | at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientC
nxnSocketNIO.java:344)
kafka | at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.j
ava:1290)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
zookeeper | [2022-12-09 00:47:43,969] INFO Reading
configuration from: /etc/kafka/zookeeper.properties
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,070] INFO
clientPortAddress is 0.0.0.0:2191
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,070] INFO
secureClientPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,071] INFO
observerMasterPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,071] INFO
metricsProvider.className is
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,095] INFO
autopurge.snapRetainCount set to 3
(org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2022-12-09 00:47:44,095] INFO
autopurge.purgeInterval set to 0
(org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2022-12-09 00:47:44,095] INFO Purge task
is not scheduled.
(org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2022-12-09 00:47:44,095] WARN Either no
config or no quorum defined in config, running in standalone
mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper | [2022-12-09 00:47:44,121] INFO Log4j 1.2
jmx support found and enabled.
(org.apache.zookeeper.jmx.ManagedUtil)
zookeeper | [2022-12-09 00:47:44,198] INFO Reading
configuration from: /etc/kafka/zookeeper.properties
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,203] INFO
clientPortAddress is 0.0.0.0:2191
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,203] INFO
secureClientPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,203] INFO
observerMasterPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,203] INFO
metricsProvider.className is
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 00:47:44,204] INFO Starting
server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper | [2022-12-09 00:47:44,294] INFO
ServerMetrics initialized with provider
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@2ab
f4075 (org.apache.zookeeper.server.ServerMetrics)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
zookeeper | [2022-12-09 00:47:44,332] INFO
zookeeper.snapshot.trust.empty : false
(org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2022-12-09 00:47:44,416] INFO [ZooKeeper ASCII art banner] (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,425] INFO Server
environment:zookeeper.version=3.6.3--
6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on
04/08/2021 16:35 GMT
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,425] INFO Server
environment:host.name=zookeeper
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,425] INFO Server
environment:java.version=11.0.14.1
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,425] INFO Server
environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,426] INFO Server
environment:java.home=/usr/lib/jvm/zulu11-ca
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,426] INFO Server
environment:java.class.path=/usr/bin/../share/java/kafka/metric
s-core-2.2.0.jar:/usr/bin/../share/java/kafka/jersey-server-
2.34.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/
usr/bin/../share/java/kafka/rocksdbjni-6.22.1.1.jar:/usr/bin/../
share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/
java/kafka/minimal-json-0.9.5.jar:/usr/bin/../share/java/kafka/
hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-
dataformat-csv-2.12.3.jar:/usr/bin/../share/java/kafka/kafka-
log4j-appender-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
kafka_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/
connect-mirror-client-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jackson-databind-2.12.3.jar:/usr/bin/../share/java/kafka/snappy-
java-1.1.8.4.jar:/usr/bin/../share/java/kafka/jopt-simple-
5.0.4.jar:/usr/bin/../share/java/kafka/jetty-util-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-
streams-scala_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/
jersey-hk2-2.34.jar:/usr/bin/../share/java/kafka/audience-
annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka-streams-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/logredactor-metrics-
1.0.8.jar:/usr/bin/../share/java/kafka/connect-runtime-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/re2j-1.6.jar:/usr/bin/../share/
java/kafka/jackson-module-scala_2.13-2.12.3.jar:/usr/bin/../
share/java/kafka/scala-logging_2.13-3.9.3.jar:/usr/bin/../share/
java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/
kafka/zstd-jni-1.5.0-4.jar:/usr/bin/../share/java/kafka/logredactor-
1.0.8.jar:/usr/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/usr/
bin/../share/java/kafka/connect-json-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/kafka-raft-7.1.1-ccs.jar:/usr/bin/../share/java/
kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jline-
3.12.1.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/
bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/usr/bin/../share/
java/kafka/maven-artifact-3.8.1.jar:/usr/bin/../share/java/kafka/
netty-transport-4.1.73.Final.jar:/usr/bin/../share/java/kafka/
javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/commons-
lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/usr/
bin/../share/java/kafka/connect-api-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/scala-collection-compat_2.13-2.4.4.jar:/usr/
bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/
java/kafka/kafka-streams-examples-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/javassist-3.27.0-GA.jar:/usr/bin/../share/java/
kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-
annotations-2.12.3.jar:/usr/bin/../share/java/kafka/connect-
basic-auth-extension-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/reflections-
0.9.12.jar:/usr/bin/../share/java/kafka/kafka-clients-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-
1.2.1.jar:/usr/bin/../share/java/kafka/jose4j-0.7.8.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.6.jar:/usr/bin/../share/java/
kafka/zookeeper-3.6.3.jar:/usr/bin/../share/java/kafka/jersey-
container-servlet-core-2.34.jar:/usr/bin/../share/java/kafka/
jersey-client-2.34.jar:/usr/bin/../share/java/kafka/kafka-
metadata-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/
connect-transforms-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-util-ajax-9.4.44.v20210927.jar:/usr/bin/../share/java/
kafka/kafka-tools-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-
server-common-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-
servlet-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-
transport-native-unix-common-4.1.73.Final.jar:/usr/bin/../share/
java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/jackson-
jaxrs-json-provider-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-
io-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jersey-
common-2.34.jar:/usr/bin/../share/java/kafka/scala-library-
2.13.6.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-
1.0.3.jar:/usr/bin/../share/java/kafka/netty-tcnative-classes-
2.0.46.Final.jar:/usr/bin/../share/java/kafka/jersey-container-
servlet-2.34.jar:/usr/bin/../share/java/kafka/scala-java8-
compat_2.13-1.0.0.jar:/usr/bin/../share/java/kafka/trogdor-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-
2.0.2.jar:/usr/bin/../share/java/kafka/confluent-log4j-1.2.17-
cp8.jar:/usr/bin/../share/java/kafka/netty-handler-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/
usr/bin/../share/java/kafka/netty-codec-4.1.73.Final.jar:/usr/
bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/
java/kafka/kafka-streams-test-utils-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/jetty-server-9.4.44.v20210927.jar:/usr/bin/../
share/java/kafka/zookeeper-jute-3.6.3.jar:/usr/bin/../share/java/
kafka/connect-mirror-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-client-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jackson-annotations-2.12.3.jar:/usr/bin/../share/java/kafka/
jackson-jaxrs-base-2.12.3.jar:/usr/bin/../share/java/kafka/
argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/netty-resolver-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-security-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-shell-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-native-
epoll-4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-common-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-servlets-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-storage-
api-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-http-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jackson-
datatype-jdk8-2.12.3.jar:/usr/bin/../share/java/kafka/jackson-
core-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/netty-
transport-classes-epoll-4.1.73.Final.jar:/usr/bin/../share/java/
confluent-telemetry/*
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,426] INFO Server
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:
/lib64:/lib:/usr/lib
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:java.compiler=<NA>
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:os.name=Linux
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:os.arch=amd64
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:os.version=5.15.49-linuxkit
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:user.name=appuser
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:user.home=/home/appuser
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,427] INFO Server
environment:user.dir=/home/appuser
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,428] INFO Server
environment:os.memory.free=493MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,428] INFO Server
environment:os.memory.max=512MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,428] INFO Server
environment:os.memory.total=512MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,428] INFO
zookeeper.enableEagerACLCheck = false
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,428] INFO
zookeeper.digest.enabled = true
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,428] INFO
zookeeper.closeSessionTxn.enabled = true
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,429] INFO
zookeeper.flushDelay=0
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,429] INFO
zookeeper.maxWriteQueuePollTime=0
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,429] INFO
zookeeper.maxBatchSize=1000
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,431] INFO
zookeeper.intBufferStartingSizeBytes = 1024
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,437] INFO Weighed
connection throttling is disabled
(org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2022-12-09 00:47:44,442] INFO
minSessionTimeout set to 4000
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,442] INFO
maxSessionTimeout set to 40000
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,445] INFO Response
cache size is initialized with value 400.
(org.apache.zookeeper.server.ResponseCache)
zookeeper | [2022-12-09 00:47:44,445] INFO Response
cache size is initialized with value 400.
(org.apache.zookeeper.server.ResponseCache)
zookeeper | [2022-12-09 00:47:44,450] INFO
zookeeper.pathStats.slotCapacity = 60
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 00:47:44,450] INFO
zookeeper.pathStats.slotDuration = 15
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 00:47:44,450] INFO
zookeeper.pathStats.maxDepth = 6
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 00:47:44,451] INFO
zookeeper.pathStats.initialDelay = 5
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 00:47:44,451] INFO
zookeeper.pathStats.delay = 5
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 00:47:44,451] INFO
zookeeper.pathStats.enabled = false
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 00:47:44,466] INFO The max
bytes for all large requests are set to 104857600
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,467] INFO The large
request threshold is set to -1
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,467] INFO Created
server with tickTime 2000 minSessionTimeout 4000
maxSessionTimeout 40000 clientPortListenBacklog -1 datadir
/var/lib/zookeeper/log/version-2 snapdir
/var/lib/zookeeper/data/version-2
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:44,574] INFO Logging
initialized @6849ms to org.eclipse.jetty.util.log.Slf4jLog
(org.eclipse.jetty.util.log)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.19.0.4:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] WARN
org.apache.zookeeper.ClientCnxn - Session 0x0 for sever
zookeeper/172.19.0.4:2191, Closing socket connection.
Attempting reconnect except it is a SessionExpiredException.
kafka | java.net.ConnectException: Connection refused
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native
Method)
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketC
hannelImpl.java:777)
kafka | at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientC
nxnSocketNIO.java:344)
kafka | at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.j
ava:1290)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
zookeeper | [2022-12-09 00:47:45,329] WARN
o.e.j.s.ServletContextHandler@12591ac8{/,null,STOPPED}
contextPath ends with /*
(org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2022-12-09 00:47:45,330] WARN Empty
contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2022-12-09 00:47:45,461] INFO jetty-
9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git:
8da83308eeca865e495e53ef315a249d63ba9332; jvm
11.0.14.1+1-LTS (org.eclipse.jetty.server.Server)
zookeeper | [2022-12-09 00:47:45,788] INFO
DefaultSessionIdManager workerName=node0
(org.eclipse.jetty.server.session)
zookeeper | [2022-12-09 00:47:45,788] INFO No
SessionScavenger set, using defaults
(org.eclipse.jetty.server.session)
zookeeper | [2022-12-09 00:47:45,799] INFO node0
Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2022-12-09 00:47:45,841] WARN o.e.j.s.ServletContextHandler@12591ac8{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2022-12-09 00:47:45,900] INFO Started
o.e.j.s.ServletContextHandler@12591ac8{/,null,AVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.19.0.4:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] WARN
org.apache.zookeeper.ClientCnxn - Session 0x0 for sever
zookeeper/172.19.0.4:2191, Closing socket connection.
Attempting reconnect except it is a SessionExpiredException.
zookeeper | [2022-12-09 00:47:46,006] INFO Started
ServerConnector@61c4eee0{HTTP/1.1, (http/1.1)}
{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
kafka | java.net.ConnectException: Connection refused
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native
Method)
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketC
hannelImpl.java:777)
kafka | at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientC
nxnSocketNIO.java:344)
kafka | at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.j
ava:1290)
zookeeper | [2022-12-09 00:47:46,013] INFO Started
@8289ms (org.eclipse.jetty.server.Server)
zookeeper | [2022-12-09 00:47:46,018] INFO Started
AdminServer on address 0.0.0.0, port 8080 and command
URL /commands
(org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2022-12-09 00:47:46,037] INFO Using
org.apache.zookeeper.server.NIOServerCnxnFactory as server
connection factory
(org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2022-12-09 00:47:46,041] WARN maxCnxns
is not configured, using default value 0.
(org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2022-12-09 00:47:46,049] INFO Configuring
NIO connection handler with 10s sessionless connection
timeout, 1 selector thread(s), 8 worker threads, and 64 kB
direct buffers.
(org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2022-12-09 00:47:46,055] INFO binding to
port 0.0.0.0/0.0.0.0:2191
(org.apache.zookeeper.server.NIOServerCnxnFactory)
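
At this point ZooKeeper is listening for clients on 2191 and serving its AdminServer on 8080. Assuming those container ports are published to the host, two quick liveness checks are possible; note that ZooKeeper 3.5+ whitelists only the srvr four-letter command by default:

    # Four-letter-word status check against the client port.
    echo srvr | nc localhost 2191

    # The AdminServer exposes the same information over HTTP.
    curl -s http://localhost:8080/commands/stat
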
zookeeper | [2022-12-09 00:47:46,137] INFO Using
org.apache.zookeeper.server.watch.WatchManager as watch
manager
(org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2022-12-09 00:47:46,137] INFO Using
org.apache.zookeeper.server.watch.WatchManager as watch
manager
(org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2022-12-09 00:47:46,143] INFO
zookeeper.snapshotSizeFactor = 0.33
(org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2022-12-09 00:47:46,146] INFO
zookeeper.commitLogCount=500
(org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2022-12-09 00:47:46,189] INFO
zookeeper.snapshot.compression.method = CHECKED
(org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2022-12-09 00:47:46,189] INFO
Snapshotting: 0x0 to
/var/lib/zookeeper/data/version-2/snapshot.0
(org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2022-12-09 00:47:46,199] INFO Snapshot
loaded in 52 ms, highest zxid is 0x0, digest is 1371985504
(org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2022-12-09 00:47:46,199] INFO
Snapshotting: 0x0 to
/var/lib/zookeeper/data/version-2/snapshot.0
(org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2022-12-09 00:47:46,200] INFO Snapshot
taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 00:47:46,235] INFO
PrepRequestProcessor (sid:0) started, reconfigEnabled=false
(org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2022-12-09 00:47:46,237] INFO
zookeeper.request_throttler.shutdownTimeout = 10000
(org.apache.zookeeper.server.RequestThrottler)
zookeeper | [2022-12-09 00:47:46,337] INFO Using
checkIntervalMs=60000 maxPerMinute=10000
maxNeverUsedIntervalMs=0
(org.apache.zookeeper.server.ContainerManager)
zookeeper | [2022-12-09 00:47:46,340] INFO ZooKeeper
audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.19.0.4:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Socket connection
established, initiating session, client: /172.19.0.5:53444, server:
zookeeper/172.19.0.4:2191
zookeeper | [2022-12-09 00:47:47,156] INFO Creating
new log file: log.1
(org.apache.zookeeper.server.persistence.FileTxnLog)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Session establishment
complete on server zookeeper/172.19.0.4:2191, session id =
0x100000154ac0000, negotiated timeout = 40000
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Session: 0x100000154ac0000 closed
kafka | [main-EventThread] INFO
org.apache.zookeeper.ClientCnxn - EventThread shut down for
session: 0x100000154ac0000
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:47:50,792Z", "level": "WARN", "component":
"o.e.b.JNANatives", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "unable to install
syscall filter: ",
elasticsearch | "stacktrace":
["java.lang.UnsupportedOperationException: seccomp
unavailable: CONFIG_SECCOMP not compiled into kernel,
CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed",
elasticsearch | "at
org.elasticsearch.bootstrap.SystemCallFilter.linuxImpl(SystemC
allFilter.java:342) ~[elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.SystemCallFilter.init(SystemCallFilte
r.java:617) ~[elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.JNANatives.tryInstallSystemCallFilte
r(JNANatives.java:260) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Natives.tryInstallSystemCallFilter(N
atives.java:113) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstra
p.java:116) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178
) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393)
[elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:
170) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.
java:161) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.cli.EnvironmentAwareCommand.execute(Envir
onmentAwareCommand.java:86) [elasticsearch-
7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.cli.Command.mainWithoutErrorHandling(Com
mand.java:127) [elasticsearch-cli-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.cli.Command.main(Command.java:90)
[elasticsearch-cli-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.jav
a:126) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.jav
a:92) [elasticsearch-7.10.2.jar:7.10.2]"] }
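
The "unable to install syscall filter" warning appears because the container's kernel (5.15.49-linuxkit, i.e. Docker Desktop) does not expose the seccomp support Elasticsearch probes for; Elasticsearch continues without the filter, which is generally acceptable for a local development cluster. Once the node is up, a health check is possible, assuming port 9200 is published to the host:

    curl -s 'http://localhost:9200/_cluster/health?pretty'
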
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:47:52,855Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "version[7.10.2], pid[7],
build[default/docker/747e1cc71def077253878a59143c1f785afa
92b9/2021-01-13T00:42:12.435326Z], OS[Linux/5.15.49-
linuxkit/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server
VM/15.0.1/15.0.1+9]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:47:52,866Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "JVM home
[/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:47:52,870Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "JVM arguments [-Xshare:auto, -
Des.networkaddress.cache.ttl=60, -
Des.networkaddress.cache.negative.ttl=10, -XX:
+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -
Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-
OmitStackTraceInFastThrow, -XX:
+ShowCodeDetailsInExceptionMessages, -
Dio.netty.noUnsafe=true, -
Dio.netty.noKeySetOptimization=true, -
Dio.netty.recycler.maxCapacityPerThread=0, -
Dio.netty.allocator.numDirectArenas=0, -
Dlog4j.shutdownHookEnabled=false, -
Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT,
-Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -
XX:InitiatingHeapOccupancyPercent=30,
-Djava.io.tmpdir=/tmp/elasticsearch-7615847219944640451, -
XX:+HeapDumpOnOutOfMemoryError, -
XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -
Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,t
ags:filecount=32,filesize=64m, -
Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx1092m, -
XX:MaxDirectMemorySize=572522496,
-Des.path.home=/usr/share/elasticsearch,
-Des.path.conf=/usr/share/elasticsearch/config, -
Des.distribution.flavor=default, -Des.distribution.type=docker, -
Des.bundled_jdk=true]" }
kafka | [2022-12-09 00:47:53,073] INFO Registered
kafka:type=kafka.Log4jController MBean
(kafka.utils.Log4jControllerRegistration$)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 00:47:55,020] INFO Setting -D
jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-
initiated TLS renegotiation
(org.apache.zookeeper.common.X509Util)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 00:47:55,748] INFO Registered
signal handlers for TERM, INT, HUP
(org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2022-12-09 00:47:55,774] INFO starting
(kafka.server.KafkaServer)
kafka | [2022-12-09 00:47:55,782] INFO Connecting to
zookeeper on zookeeper:2191 (kafka.server.KafkaServer)
kafka | [2022-12-09 00:47:55,859] INFO
[ZooKeeperClient Kafka server] Initializing a new session to
zookeeper:2191. (kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 00:47:55,902] INFO Client
environment:zookeeper.version=3.6.3--
6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on
04/08/2021 16:35 GMT (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,911] INFO Client
environment:host.name=kafka
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,913] INFO Client
environment:java.version=11.0.14.1
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,914] INFO Client
environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,914] INFO Client
environment:java.home=/usr/lib/jvm/zulu11-ca
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,914] INFO Client
environment:java.class.path=/usr/bin/../share/java/kafka/metric
s-core-2.2.0.jar:/usr/bin/../share/java/kafka/jersey-server-
2.34.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/
usr/bin/../share/java/kafka/rocksdbjni-6.22.1.1.jar:/usr/bin/../
share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/
java/kafka/minimal-json-0.9.5.jar:/usr/bin/../share/java/kafka/
hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-
dataformat-csv-2.12.3.jar:/usr/bin/../share/java/kafka/kafka-
log4j-appender-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
kafka_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/
connect-mirror-client-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jackson-databind-2.12.3.jar:/usr/bin/../share/java/kafka/snappy-
java-1.1.8.4.jar:/usr/bin/../share/java/kafka/jopt-simple-
5.0.4.jar:/usr/bin/../share/java/kafka/jetty-util-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-
streams-scala_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/
jersey-hk2-2.34.jar:/usr/bin/../share/java/kafka/audience-
annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka-streams-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/logredactor-metrics-
1.0.8.jar:/usr/bin/../share/java/kafka/connect-runtime-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/re2j-1.6.jar:/usr/bin/../share/
java/kafka/jackson-module-scala_2.13-2.12.3.jar:/usr/bin/../
share/java/kafka/scala-logging_2.13-3.9.3.jar:/usr/bin/../share/
java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/
kafka/zstd-jni-1.5.0-4.jar:/usr/bin/../share/java/kafka/logredactor-
1.0.8.jar:/usr/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/usr/
bin/../share/java/kafka/connect-json-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/kafka-raft-7.1.1-ccs.jar:/usr/bin/../share/java/
kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jline-
3.12.1.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/
bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/usr/bin/../share/
java/kafka/maven-artifact-3.8.1.jar:/usr/bin/../share/java/kafka/
netty-transport-4.1.73.Final.jar:/usr/bin/../share/java/kafka/
javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/commons-
lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/usr/
bin/../share/java/kafka/connect-api-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/scala-collection-compat_2.13-2.4.4.jar:/usr/
bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/
java/kafka/kafka-streams-examples-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/javassist-3.27.0-GA.jar:/usr/bin/../share/java/
kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-
annotations-2.12.3.jar:/usr/bin/../share/java/kafka/connect-
basic-auth-extension-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/reflections-
0.9.12.jar:/usr/bin/../share/java/kafka/kafka-clients-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-
1.2.1.jar:/usr/bin/../share/java/kafka/jose4j-0.7.8.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.6.jar:/usr/bin/../share/java/
kafka/zookeeper-3.6.3.jar:/usr/bin/../share/java/kafka/jersey-
container-servlet-core-2.34.jar:/usr/bin/../share/java/kafka/
jersey-client-2.34.jar:/usr/bin/../share/java/kafka/kafka-
metadata-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/
connect-transforms-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-util-ajax-9.4.44.v20210927.jar:/usr/bin/../share/java/
kafka/kafka-tools-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-
server-common-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-
servlet-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-
transport-native-unix-common-4.1.73.Final.jar:/usr/bin/../share/
java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/jackson-
jaxrs-json-provider-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-
io-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jersey-
common-2.34.jar:/usr/bin/../share/java/kafka/scala-library-
2.13.6.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-
1.0.3.jar:/usr/bin/../share/java/kafka/netty-tcnative-classes-
2.0.46.Final.jar:/usr/bin/../share/java/kafka/jersey-container-
servlet-2.34.jar:/usr/bin/../share/java/kafka/scala-java8-
compat_2.13-1.0.0.jar:/usr/bin/../share/java/kafka/trogdor-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-
2.0.2.jar:/usr/bin/../share/java/kafka/confluent-log4j-1.2.17-
cp8.jar:/usr/bin/../share/java/kafka/netty-handler-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/
usr/bin/../share/java/kafka/netty-codec-4.1.73.Final.jar:/usr/
bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/
java/kafka/kafka-streams-test-utils-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/jetty-server-9.4.44.v20210927.jar:/usr/bin/../
share/java/kafka/zookeeper-jute-3.6.3.jar:/usr/bin/../share/java/
kafka/connect-mirror-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-client-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jackson-annotations-2.12.3.jar:/usr/bin/../share/java/kafka/
jackson-jaxrs-base-2.12.3.jar:/usr/bin/../share/java/kafka/
argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/netty-resolver-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-security-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-shell-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-native-
epoll-4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-common-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-servlets-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-storage-
api-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-http-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jackson-
datatype-jdk8-2.12.3.jar:/usr/bin/../share/java/kafka/jackson-
core-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/netty-
transport-classes-epoll-4.1.73.Final.jar:/usr/bin/../share/java/
confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,915] INFO Client
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:
/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,915] INFO Client
environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:java.compiler=<NA>
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:os.name=Linux
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:os.arch=amd64
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:os.version=5.15.49-linuxkit
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:user.name=appuser
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:user.home=/home/appuser
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:user.dir=/home/appuser
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:os.memory.free=1010MB
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,916] INFO Client
environment:os.memory.max=1024MB
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,917] INFO Client
environment:os.memory.total=1024MB
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,926] INFO Initiating
client connection, connectString=zookeeper:2191
sessionTimeout=18000
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWa
tcher$@22ffa91a (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 00:47:55,975] INFO
jute.maxbuffer value is 4194304 Bytes
(org.apache.zookeeper.ClientCnxnSocket)
kafka | [2022-12-09 00:47:55,997] INFO
zookeeper.request.timeout value is 0. feature enabled=false
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 00:47:56,023] INFO
[ZooKeeperClient Kafka server] Waiting until connected.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 00:47:56,100] INFO Opening
socket connection to server zookeeper/172.19.0.4:2191.
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 00:47:56,101] INFO SASL config
status: Will not attempt to authenticate using SASL (unknown
error) (org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 00:47:56,127] INFO Socket
connection established, initiating session, client:
/172.19.0.5:37410, server: zookeeper/172.19.0.4:2191
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 00:47:56,181] INFO Session
establishment complete on server zookeeper/172.19.0.4:2191,
session id = 0x100000154ac0001, negotiated timeout = 18000
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 00:47:56,200] INFO
[ZooKeeperClient Kafka server] Connected.
(kafka.zookeeper.ZooKeeperClient)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 00:47:56,661] INFO [feature-zk-
node-event-process-thread]: Starting
(kafka.server.FinalizedFeatureChangeListener$ChangeNotificati
onProcessorThread)
kafka | [2022-12-09 00:47:56,741] INFO Feature ZK
node at path: /feature does not exist
(kafka.server.FinalizedFeatureChangeListener)
kafka | [2022-12-09 00:47:56,743] INFO Cleared cache
(kafka.server.FinalizedFeatureCache)
kafka-ui | 2022-12-09 00:47:57,041 INFO [main]
o.s.d.r.c.RepositoryConfigurationDelegate: Bootstrapping Spring
Data LDAP repositories in DEFAULT mode.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka-ui | 2022-12-09 00:47:57,520 INFO [main]
o.s.d.r.c.RepositoryConfigurationDelegate: Finished Spring Data
repository scanning in 437 ms. Found 0 LDAP repository
interfaces.
haiho@ip-192-168-20-101 hive-participation-service %
kafka | [2022-12-09 00:47:57,572] INFO Cluster ID =
u7FTV21fSNCC5BbN_1oteQ (kafka.server.KafkaServer)
haiho@ip-192-168-20-101 hive-participation-service %
kafka | [2022-12-09 00:47:57,612] WARN No meta.properties file
under dir /var/lib/kafka/data/meta.properties
(kafka.server.BrokerMetadataCheckpoint)
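That WARN is expected on a first start against an empty data volume: the broker only writes meta.properties (broker id plus the cluster ID logged just above) once it finishes coming up. Assuming the compose service is named kafka as in the container list earlier, the file can be inspected afterwards with:

docker exec kafka cat /var/lib/kafka/data/meta.properties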
haiho@ip-192-168-20-101 hive-participation-service %
kafka | [2022-12-09 00:47:57,904] INFO KafkaConfig values:
kafka | advertised.listeners =
LISTENER://localhost:9092, LISTENER_HOST://kafka-local:9095
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num
= 11
kafka |
alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms =
100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms =
5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms
= 1000
kafka | controller.quorum.election.timeout.ms =
1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms =
2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms =
3600000
kafka | delegation.token.expiry.time.ms =
86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms =
604800000
kafka | delegation.token.secret.key = null
kafka |
delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests =
1000
kafka | group.initial.rebalance.delay.ms = 0
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms =
60000
kafka | inter.broker.listener.name = LISTENER
kafka | inter.broker.protocol.version = 3.1-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds =
300
kafka | leader.imbalance.per.broker.percentage =
10
kafka | listener.security.protocol.map =
LISTENER:PLAINTEXT, LISTENER_HOST:PLAINTEXT
kafka | listeners = LISTENER://0.0.0.0:9092,
LISTENER_HOST://0.0.0.0:9095
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second =
1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms =
9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages =
9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms =
60000
kafka | log.flush.scheduler.interval.ms =
9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms
= 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms
= 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate =
2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots
= 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka |
metadata.log.max.record.bytes.between.snapshots =
20971520
kafka | metadata.log.segment.bytes =
1073741824
kafka | metadata.log.segment.min.bytes =
8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.retention.bytes = -1
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms =
600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm =
AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm =
null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class
org.apache.kafka.common.security.authenticator.DefaultKafkaPr
incipalBuilder
kafka | process.roles = []
kafka | producer.purgatory.purge.interval.requests
= 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes
= 1073741824
kafka | remote.log.manager.task.interval.ms =
30000
kafka |
remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms =
500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name
= null
kafka | remote.log.metadata.manager.class.path =
null
kafka | remote.log.metadata.manager.impl.prefix =
null
kafka |
remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name =
null
kafka | remote.log.storage.manager.class.path =
null
kafka | remote.log.storage.manager.impl.prefix =
null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes =
10485760
kafka | replica.fetch.wait.max.ms = 500
kafka |
replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin =
60000
kafka | sasl.kerberos.principal.to.local.rules =
[DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor =
0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol =
GSSAPI
kafka | sasl.mechanism.inter.broker.protocol =
GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
kafka |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
kafka |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms =
30000
kafka | socket.connection.setup.timeout.ms =
10000
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka |
transaction.abort.timed.out.transaction.cleanup.interval.ms
= 10000
kafka | transaction.max.timeout.ms = 900000
kafka |
transaction.remove.expired.transaction.cleanup.interval.ms
= 3600000
kafka | transaction.state.log.load.buffer.size =
5242880
kafka | transaction.state.log.min.isr = 1
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 1
kafka | transaction.state.log.segment.bytes =
104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2191
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka |
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | zookeeper.sync.time.ms = 2000
kafka | (kafka.server.KafkaConfig)
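The dump above shows the usual two-listener layout for a local compose setup: LISTENER is advertised as localhost:9092 for clients on the host, while LISTENER_HOST is advertised as kafka-local:9095 for other containers on the compose network. The app/docker-compose.yml itself is not reproduced in this log, but a minimal sketch of the cp-kafka environment block that would produce these values looks like the following (service and alias names are taken from the log; the variable names are the standard Confluent image mappings):

    environment:
      # mirrors the KafkaConfig values printed above
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2191
      KAFKA_LISTENERS: LISTENER://0.0.0.0:9092,LISTENER_HOST://0.0.0.0:9095
      KAFKA_ADVERTISED_LISTENERS: LISTENER://localhost:9092,LISTENER_HOST://kafka-local:9095
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER:PLAINTEXT,LISTENER_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1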
haiho@ip-192-168-20-101 hive-participation-service %
kafka | [2022-12-09 00:47:58,146] INFO [ThrottledChannelReaper-
Fetch]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 00:47:58,163] INFO
[ThrottledChannelReaper-Produce]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 00:47:58,194] INFO
[ThrottledChannelReaper-Request]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 00:47:58,201] INFO
[ThrottledChannelReaper-ControllerMutation]: Starting
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 00:47:58,489] INFO Loading logs
from log dirs ArraySeq(/var/lib/kafka/data)
(kafka.log.LogManager)
kafka | [2022-12-09 00:47:58,517] INFO Attempting
recovery for all logs in /var/lib/kafka/data since no clean
shutdown file was found (kafka.log.LogManager)
kafka | [2022-12-09 00:47:58,543] INFO Loaded 0 logs
in 52ms. (kafka.log.LogManager)
kafka | [2022-12-09 00:47:58,545] INFO Starting log
cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2022-12-09 00:47:58,558] INFO Starting log
flusher with a default period of 9223372036854775807 ms.
(kafka.log.LogManager)
kafka | [2022-12-09 00:47:58,639] INFO Starting the
log cleaner (kafka.log.LogCleaner)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 00:47:58,975] INFO [kafka-log-
cleaner-thread-0]: Starting (kafka.log.LogCleaner)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 00:48:00,435] INFO
[BrokerToControllerChannelManager broker=1
name=forwarding]: Starting
(kafka.server.BrokerToControllerRequestThread)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
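The schema registry's repeated "Broker may not be available" warnings are just startup ordering: it keeps retrying kafka-local:9095 until the broker opens its listeners, which the kafka log shows happening at 00:48:03. Once the broker reports started, connectivity over the container network can be confirmed from inside the kafka container (this assumes kafka-local is a network alias for the kafka service, as the resolved address 172.19.0.5 suggests):

docker exec kafka kafka-broker-api-versions --bootstrap-server kafka-local:9095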
kafka | [2022-12-09 00:48:02,722] INFO Updated
connection-accept-rate max connection creation rate to
2147483647 (kafka.network.ConnectionQuotas)
kafka | [2022-12-09 00:48:02,770] INFO Awaiting
socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.19.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 00:48:03,108] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Created data-plane
acceptor and processors for endpoint :
ListenerName(LISTENER) (kafka.network.SocketServer)
kafka | [2022-12-09 00:48:03,126] INFO Updated
connection-accept-rate max connection creation rate to
2147483647 (kafka.network.ConnectionQuotas)
kafka | [2022-12-09 00:48:03,131] INFO Awaiting
socket connections on 0.0.0.0:9095. (kafka.network.Acceptor)
kafka | [2022-12-09 00:48:03,270] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Created data-plane
acceptor and processors for endpoint :
ListenerName(LISTENER_HOST) (kafka.network.SocketServer)
kafka | [2022-12-09 00:48:03,346] INFO
[BrokerToControllerChannelManager broker=1 name=alterIsr]:
Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 00:48:03,460] INFO
[ExpirationReaper-1-Produce]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:03,495] INFO
[ExpirationReaper-1-Fetch]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:03,496] INFO
[ExpirationReaper-1-ElectLeader]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:03,487] INFO
[ExpirationReaper-1-DeleteRecords]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:03,635] INFO
[LogDirFailureHandler]: Starting
(kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2022-12-09 00:48:03,824] INFO Creating
/brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2022-12-09 00:48:03,942] INFO Stat of the
created znode at /brokers/ids/1 is:
27,27,1670546883906,1670546883906,1,0,0,72057599753453
569,263,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2022-12-09 00:48:03,957] INFO Registered
broker 1 at path /brokers/ids/1 with addresses:
LISTENER://localhost:9092,LISTENER_HOST://kafka-local:9095,
czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
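The registration written above can be read back from ZooKeeper to verify the advertised endpoints, using the zookeeper-shell tool bundled in the Confluent images (container name and port taken from the log):

docker exec kafka zookeeper-shell zookeeper:2191 get /brokers/ids/1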
kafka | [2022-12-09 00:48:04,379] INFO
[ControllerEventThread controllerId=1] Starting
(kafka.controller.ControllerEventManager$ControllerEventThrea
d)
kafka-ui | 2022-12-09 00:48:04,456 INFO [main]
c.p.k.u.s.DeserializationService: Using SimpleRecordSerDe for
cluster 'hiveLocal'
kafka | [2022-12-09 00:48:04,497] INFO
[ExpirationReaper-1-topic]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:04,545] INFO Successfully
created /controller_epoch with initial epoch 0
(kafka.zk.KafkaZkClient)
kafka | [2022-12-09 00:48:04,594] INFO
[ExpirationReaper-1-Rebalance]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:04,595] INFO
[ExpirationReaper-1-Heartbeat]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:04,635] INFO [Controller
id=1] 1 successfully elected as the controller. Epoch
incremented to 1 and epoch zk version is now 1
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:04,671] INFO [Controller
id=1] Creating FeatureZNode at path: /feature with contents:
FeatureZNode(Enabled,Features{})
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:04,699] INFO Feature ZK
node created at path: /feature
(kafka.server.FinalizedFeatureChangeListener)
kafka | [2022-12-09 00:48:04,832] INFO
[GroupCoordinator 1]: Starting up.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:04,889] INFO
[GroupCoordinator 1]: Startup complete.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:05,074] INFO [Controller
id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,092] INFO Updated
cache from existing <empty> to latest
FinalizedFeaturesAndEpoch(features=Features{}, epoch=0).
(kafka.server.FinalizedFeatureCache)
kafka | [2022-12-09 00:48:05,096] INFO [Controller
id=1] Deleting log dir event notifications
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,116] INFO [Controller
id=1] Deleting isr change notifications
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,150] INFO [Controller
id=1] Initializing controller context
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,152] INFO
[TransactionCoordinator id=1] Starting up.
(kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2022-12-09 00:48:05,227] INFO
[TransactionCoordinator id=1] Startup complete.
(kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2022-12-09 00:48:05,252] INFO [Transaction
Marker Channel Manager 1]: Starting
(kafka.coordinator.transaction.TransactionMarkerChannelManag
er)
kafka | [2022-12-09 00:48:05,387] INFO [Controller
id=1] Initialized broker epochs cache: HashMap(1 -> 27)
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,471] DEBUG [Controller
id=1] Register BrokerModifications handler for Set(1)
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,533] DEBUG [Channel
manager on controller 1]: Controller 1 trying to connect to
broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2022-12-09 00:48:05,653] INFO
[ExpirationReaper-1-AlterAcls]: Starting
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 00:48:05,713] INFO [Controller
id=1] Currently active brokers in the cluster: Set(1)
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,722] INFO
[RequestSendThread controllerId=1] Starting
(kafka.controller.RequestSendThread)
kafka | [2022-12-09 00:48:05,732] INFO [Controller
id=1] Currently shutting brokers in the cluster: HashSet()
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,734] INFO [Controller
id=1] Current list of topics in the cluster: HashSet()
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,736] INFO [Controller
id=1] Fetching topic deletions in progress
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,749] INFO [Controller
id=1] List of topics to be deleted:
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,751] INFO [Controller
id=1] List of topics ineligible for deletion:
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,760] INFO [Controller
id=1] Initializing topic deletion manager
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,762] INFO [Topic
Deletion Manager 1] Initializing manager with initial deletions:
Set(), initial ineligible deletions: HashSet()
(kafka.controller.TopicDeletionManager)
kafka | [2022-12-09 00:48:05,767] INFO [Controller
id=1] Sending update metadata request
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:05,823] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet(1) for 0 partitions (state.change.logger)
kafka | [2022-12-09 00:48:05,834] INFO
[/config/changes-event-process-thread]: Starting
(kafka.common.ZkNodeChangeNotificationListener$ChangeEve
ntProcessThread)
kafka | [2022-12-09 00:48:05,893] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Starting socket server
acceptors and processors (kafka.network.SocketServer)
kafka | [2022-12-09 00:48:05,899] INFO
[ReplicaStateMachine controllerId=1] Initializing replica state
(kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 00:48:05,902] INFO
[ReplicaStateMachine controllerId=1] Triggering online replica
state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 00:48:05,953] INFO
[ReplicaStateMachine controllerId=1] Triggering offline replica
state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 00:48:05,958] DEBUG
[ReplicaStateMachine controllerId=1] Started replica state
machine with initial state -> HashMap()
(kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 00:48:05,959] INFO
[PartitionStateMachine controllerId=1] Initializing partition state
(kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 00:48:05,970] INFO
[PartitionStateMachine controllerId=1] Triggering online
partition state changes
(kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 00:48:05,975] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Started data-plane
acceptor and processor(s) for endpoint :
ListenerName(LISTENER) (kafka.network.SocketServer)
kafka | [2022-12-09 00:48:06,023] DEBUG
[PartitionStateMachine controllerId=1] Started partition state
machine with initial state -> HashMap()
(kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 00:48:06,043] INFO
[RequestSendThread controllerId=1] Controller 1 connected to
localhost:9092 (id: 1 rack: null) for sending state change
requests (kafka.controller.RequestSendThread)
kafka | [2022-12-09 00:48:06,051] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Started data-plane
acceptor and processor(s) for endpoint :
ListenerName(LISTENER_HOST) (kafka.network.SocketServer)
kafka | [2022-12-09 00:48:06,066] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Started socket server
acceptors and processors (kafka.network.SocketServer)
kafka | [2022-12-09 00:48:06,070] INFO [Controller
id=1] Ready to serve as the new controller with epoch 1
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,214] INFO Kafka version:
7.1.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 00:48:06,225] INFO Kafka
commitId: 947fac5beb61836d
(org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 00:48:06,228] INFO Kafka
startTimeMs: 1670546886066
(org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 00:48:06,259] INFO [Controller
id=1] Partitions undergoing preferred replica election:
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,263] INFO [Controller
id=1] Partitions that completed preferred replica election:
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,264] INFO [Controller
id=1] Skipping preferred replica election for partitions due to
topic deletion: (kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,265] INFO [Controller
id=1] Resuming preferred replica election for partitions:
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,301] INFO [Controller
id=1] Starting replica leader election (PREFERRED) for
partitions triggered by ZkTriggered
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,314] INFO [KafkaServer
id=1] started (kafka.server.KafkaServer)
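With the broker started, a minimal smoke test from the host is to talk to the localhost:9092 listener; the topic name below is only an example (and since the config above has auto.create.topics.enable = true, simply producing to a new name would also create it):

docker exec kafka kafka-topics --bootstrap-server localhost:9092 --list
docker exec kafka kafka-topics --bootstrap-server localhost:9092 --create --topic smoke-test --partitions 1 --replication-factor 1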
kafka | [2022-12-09 00:48:06,434] INFO [Controller
id=1] Starting the controller scheduler
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:06,669] INFO
[BrokerToControllerChannelManager broker=1
name=forwarding]: Recorded new controller, from now on will
use broker localhost:9092 (id: 1 rack: null)
(kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 00:48:06,711] TRACE [Controller
id=1 epoch=1] Received response
UpdateMetadataResponseData(errorCode=0) for request
UPDATE_METADATA with correlation id 0 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 00:48:06,740] INFO
[BrokerToControllerChannelManager broker=1 name=alterIsr]:
Recorded new controller, from now on will use broker
localhost:9092 (id: 1 rack: null)
(kafka.server.BrokerToControllerRequestThread)
schema | ===> Launching ...
schema | ===> Launching schema-registry ...
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,315Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[aggs-matrix-stats]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,320Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[analysis-common]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,321Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[constant-keyword]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,324Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[flattened]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,325Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[frozen-indices]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,325Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[ingest-common]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,325Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[ingest-geoip]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,326Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[ingest-user-agent]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,326Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[kibana]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,330Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[lang-expression]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,331Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[lang-mustache]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,336Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[lang-painless]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,337Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[mapper-extras]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,338Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[mapper-version]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,339Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[parent-join]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,339Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[percolator]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,340Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[rank-eval]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,341Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[reindex]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,342Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[repositories-metering-api]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,342Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[repository-url]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,350Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[search-business-rules]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,354Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[searchable-snapshots]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,356Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[spatial]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,357Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[transform]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,357Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[transport-netty4]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,358Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[unsigned-long]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,358Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[vectors]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,359Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module
[wildcard]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,361Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-analytics]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,362Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-async]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,364Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-async-search]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,365Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-autoscaling]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,366Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-ccr]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,366Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-core]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,367Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-data-streams]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,367Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-deprecation]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,369Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-enrich]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,369Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-eql]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,370Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-graph]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,370Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-identity-provider]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,371Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-ilm]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,371Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-logstash]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,371Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-ml]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,371Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-monitoring]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,372Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-ql]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,373Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-rollup]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,374Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-security]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,374Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-sql]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,375Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-stack]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,375Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-voting-only-node]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,375Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "loaded module [x-
pack-watcher]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,378Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "no plugins
loaded" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,614Z", "level": "INFO", "component":
"o.e.e.NodeEnvironment", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "using [1] data
paths, mounts [[/ (overlay)]], net usable_space [50.1gb], net
total_space [58.3gb], types [overlay]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,623Z", "level": "INFO", "component":
"o.e.e.NodeEnvironment", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "heap size [1gb],
compressed ordinary object pointers [true]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:08,884Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "node name [8c9f05d4bd02],
node ID [H18iHmWlRFK5x1zuu-6mFQ], cluster name [docker-
cluster], roles [transform, master, remote_cluster_client, data,
ml, data_content, data_hot, data_warm, data_cold, ingest]" }
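At this point the Elasticsearch node has loaded its modules and data paths as a single docker-cluster node; once it reports started, cluster health can be checked from the host, assuming the compose file publishes the usual port 9200:

curl -s 'http://localhost:9200/_cluster/health?pretty'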
haiho@ip-192-168-20-101 hive-participation-service %
kafka | [2022-12-09 00:48:11,474] INFO [Controller id=1] Processing
automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:11,476] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 00:48:13,569 INFO [main] o.s.b.a.e.w.EndpointLinksResolver: Exposing 2 endpoint(s) beneath base path '/actuator'
kafka-ui | 2022-12-09 00:48:14,687 INFO [main]
o.s.b.a.s.r.ReactiveUserDetailsServiceAutoConfiguration:
kafka-ui |
kafka-ui | Using generated security password: 14443419-
94d6-4daf-ad62-7c2bedab772e
kafka-ui |
schema | [2022-12-09 00:48:14,745] INFO
SchemaRegistryConfig values:
schema | access.control.allow.headers =
schema | access.control.allow.methods =
schema | access.control.allow.origin =
schema | access.control.skip.options = true
schema | authentication.method = NONE
schema | authentication.realm =
schema | authentication.roles = [*]
schema | authentication.skip.paths = []
schema | avro.compatibility.level =
schema | compression.enable = true
schema | csrf.prevention.enable = false
schema | csrf.prevention.token.endpoint = /csrf
schema | csrf.prevention.token.expiration.minutes =
30
schema | csrf.prevention.token.max.entries = 10000
schema | debug = false
schema | dos.filter.delay.ms = 100
schema | dos.filter.enabled = false
schema | dos.filter.insert.headers = true
schema | dos.filter.ip.whitelist = []
schema | dos.filter.managed.attr = false
schema | dos.filter.max.idle.tracker.ms = 30000
schema | dos.filter.max.requests.ms = 30000
schema | dos.filter.max.requests.per.sec = 25
schema | dos.filter.max.wait.ms = 50
schema | dos.filter.remote.port = false
schema | dos.filter.throttle.ms = 30000
schema | dos.filter.throttled.requests = 5
schema | dos.filter.track.global = false
schema | host.name = schema
schema | http2.enabled = true
schema | idle.timeout.ms = 30000
schema | inter.instance.headers.whitelist = []
schema | inter.instance.protocol = http
schema | kafkastore.bootstrap.servers = [kafka-
local:9095]
schema | kafkastore.checkpoint.dir = /tmp
schema | kafkastore.checkpoint.version = 0
schema | kafkastore.connection.url =
schema | kafkastore.group.id =
schema | kafkastore.init.timeout.ms = 60000
schema | kafkastore.sasl.kerberos.kinit.cmd =
/usr/bin/kinit
schema |
kafkastore.sasl.kerberos.min.time.before.relogin = 60000
schema | kafkastore.sasl.kerberos.service.name =
schema | kafkastore.sasl.kerberos.ticket.renew.jitter
= 0.05
schema |
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
schema | kafkastore.sasl.mechanism = GSSAPI
schema | kafkastore.security.protocol = PLAINTEXT
schema | kafkastore.ssl.cipher.suites =
schema | kafkastore.ssl.enabled.protocols =
TLSv1.2,TLSv1.1,TLSv1
schema |
kafkastore.ssl.endpoint.identification.algorithm =
schema | kafkastore.ssl.key.password = [hidden]
schema | kafkastore.ssl.keymanager.algorithm =
SunX509
schema | kafkastore.ssl.keystore.location =
schema | kafkastore.ssl.keystore.password = [hidden]
schema | kafkastore.ssl.keystore.type = JKS
schema | kafkastore.ssl.protocol = TLS
schema | kafkastore.ssl.provider =
schema | kafkastore.ssl.trustmanager.algorithm =
PKIX
schema | kafkastore.ssl.truststore.location =
schema | kafkastore.ssl.truststore.password =
[hidden]
schema | kafkastore.ssl.truststore.type = JKS
schema | kafkastore.timeout.ms = 500
schema | kafkastore.topic = _schemas
schema | kafkastore.topic.replication.factor = 3
schema | kafkastore.topic.skip.validation = false
schema | kafkastore.update.handlers = []
schema | kafkastore.write.max.retries = 5
schema | leader.eligibility = true
schema | listener.protocol.map = []
schema | listeners = [http://schema:9091]
schema | master.eligibility = null
schema | metric.reporters = []
schema | metrics.jmx.prefix = kafka.schema.registry
schema | metrics.num.samples = 2
schema | metrics.sample.window.ms = 30000
schema | metrics.tag.map = []
schema | mode.mutability = true
schema | nosniff.prevention.enable = false
schema | port = 8081
schema | proxy.protocol.enabled = false
schema | reject.options.request = false
schema | request.logger.name = io.confluent.rest-
utils.requests
schema | request.queue.capacity = 2147483647
schema | request.queue.capacity.growby = 64
schema | request.queue.capacity.init = 128
schema | resource.extension.class = []
schema | resource.extension.classes = []
schema | resource.static.locations = []
schema | response.http.headers.config =
schema | response.mediatype.default =
application/vnd.schemaregistry.v1+json
schema | response.mediatype.preferred =
[application/vnd.schemaregistry.v1+json,
application/vnd.schemaregistry+json, application/json]
schema | rest.servlet.initializor.classes = []
schema | schema.cache.expiry.secs = 300
schema | schema.cache.size = 1000
schema | schema.canonicalize.on.consume = []
schema | schema.compatibility.level = backward
schema | schema.providers = []
schema | schema.registry.group.id = schema-registry
schema | schema.registry.inter.instance.protocol =
schema | schema.registry.resource.extension.class =
[]
schema | shutdown.graceful.ms = 1000
schema | ssl.cipher.suites = []
schema | ssl.client.auth = false
schema | ssl.client.authentication = NONE
schema | ssl.enabled.protocols = []
schema | ssl.endpoint.identification.algorithm = null
schema | ssl.key.password = [hidden]
schema | ssl.keymanager.algorithm =
schema | ssl.keystore.location =
schema | ssl.keystore.password = [hidden]
schema | ssl.keystore.reload = false
schema | ssl.keystore.type = JKS
schema | ssl.keystore.watch.location =
schema | ssl.protocol = TLS
schema | ssl.provider =
schema | ssl.trustmanager.algorithm =
schema | ssl.truststore.location =
schema | ssl.truststore.password = [hidden]
schema | ssl.truststore.type = JKS
schema | thread.pool.max = 200
schema | thread.pool.min = 8
schema | websocket.path.prefix = /ws
schema | websocket.servlet.initializor.classes = []
schema |
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
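The dump above shows the registry listening on http://schema:9091, backing its state with the _schemas topic, and defaulting to backward compatibility. A quick sanity check against its REST API, assuming port 9091 is reachable from the host (for example, published in the compose file):

  curl -s http://localhost:9091/subjects   # [] on a fresh registry
  curl -s http://localhost:9091/config     # {"compatibilityLevel":"BACKWARD"}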
schema | [2022-12-09 00:48:15,150] INFO Logging
initialized @6972ms to org.eclipse.jetty.util.log.Slf4jLog
(org.eclipse.jetty.util.log)
schema | [2022-12-09 00:48:15,410] INFO Initial
capacity 128, increased by 64, maximum capacity
2147483647. (io.confluent.rest.ApplicationServer)
kafka-ui | 2022-12-09 00:48:15,857 WARN [main]
c.p.k.u.c.a.DisabledAuthSecurityConfig: Authentication is
disabled. Access will be unrestricted.
schema | [2022-12-09 00:48:16,148] INFO Adding
listener with HTTP/2: http://schema:9091
(io.confluent.rest.ApplicationServer)
kafka-ui | 2022-12-09 00:48:17,689 INFO [main]
o.s.l.c.s.AbstractContextSource: Property 'userDn' not set -
anonymous context will be used for read-write operations
schema | [2022-12-09 00:48:19,655] INFO
AdminClientConfig values:
schema | bootstrap.servers = [PLAINTEXT://kafka-
local:9095]
schema | client.dns.lookup = use_all_dns_ips
schema | client.id =
schema | connections.max.idle.ms = 300000
schema | default.api.timeout.ms = 60000
schema | host.resolver.class = class
org.apache.kafka.clients.DefaultHostResolver
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 2147483647
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema |
(org.apache.kafka.clients.admin.AdminClientConfig)
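This admin client is what the registry uses to inspect and create its storage topic. Note that bootstrap.servers points at the broker's listener inside the compose network (kafka-local:9095), not at a host-published port, so the easiest way to hit the same listener is from inside the broker container. A sketch, assuming the Confluent image's bundled CLI tools and the container name kafka from the attach list:

  docker exec kafka kafka-topics --bootstrap-server kafka-local:9095 --list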
schema | [2022-12-09 00:48:20,666] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:20,667] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:20,667] INFO Kafka
startTimeMs: 1670546900644
(org.apache.kafka.common.utils.AppInfoParser)
kafka-ui | 2022-12-09 00:48:20,815 INFO [main]
o.s.b.w.e.n.NettyWebServer: Netty started on port 8080
kafka-ui | 2022-12-09 00:48:21,059 INFO [main]
c.p.k.u.KafkaUiApplication: Started KafkaUiApplication in 48.602
seconds (JVM running for 62.653)
kafka-ui | 2022-12-09 00:48:21,299 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
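kafka-ui is now serving on port 8080 with authentication disabled (per the DisabledAuthSecurityConfig warning above), which also means the generated security password printed earlier goes unused. Assuming the compose file publishes 8080 to the host, the UI and its actuator endpoints should respond:

  curl -s http://localhost:8080/actuator/health   # {"status":"UP"} once ready
  # or browse http://localhost:8080 to inspect the hiveLocal cluster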
kafka-ui | 2022-12-09 00:48:21,369 INFO [parallel-1]
o.a.k.c.a.AdminClientConfig: AdminClientConfig values:
kafka-ui | bootstrap.servers = [kafka-local:9095]
kafka-ui | client.dns.lookup = use_all_dns_ips
kafka-ui | client.id =
kafka-ui | connections.max.idle.ms = 300000
kafka-ui | default.api.timeout.ms = 60000
kafka-ui | metadata.max.age.ms = 300000
kafka-ui | metric.reporters = []
kafka-ui | metrics.num.samples = 2
kafka-ui | metrics.recording.level = INFO
kafka-ui | metrics.sample.window.ms = 30000
kafka-ui | receive.buffer.bytes = 65536
kafka-ui | reconnect.backoff.max.ms = 1000
kafka-ui | reconnect.backoff.ms = 50
kafka-ui | request.timeout.ms = 30000
kafka-ui | retries = 2147483647
kafka-ui | retry.backoff.ms = 100
kafka-ui | sasl.client.callback.handler.class = null
kafka-ui | sasl.jaas.config = null
kafka-ui | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka-ui | sasl.kerberos.min.time.before.relogin =
60000
kafka-ui | sasl.kerberos.service.name = null
kafka-ui | sasl.kerberos.ticket.renew.jitter = 0.05
kafka-ui | sasl.kerberos.ticket.renew.window.factor =
0.8
kafka-ui | sasl.login.callback.handler.class = null
kafka-ui | sasl.login.class = null
kafka-ui | sasl.login.refresh.buffer.seconds = 300
kafka-ui | sasl.login.refresh.min.period.seconds = 60
kafka-ui | sasl.login.refresh.window.factor = 0.8
kafka-ui | sasl.login.refresh.window.jitter = 0.05
kafka-ui | sasl.mechanism = GSSAPI
kafka-ui | security.protocol = PLAINTEXT
kafka-ui | security.providers = null
kafka-ui | send.buffer.bytes = 131072
kafka-ui | socket.connection.setup.timeout.max.ms =
30000
kafka-ui | socket.connection.setup.timeout.ms =
10000
kafka-ui | ssl.cipher.suites = null
kafka-ui | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka-ui | ssl.endpoint.identification.algorithm = https
kafka-ui | ssl.engine.factory.class = null
kafka-ui | ssl.key.password = null
kafka-ui | ssl.keymanager.algorithm = SunX509
kafka-ui | ssl.keystore.certificate.chain = null
kafka-ui | ssl.keystore.key = null
kafka-ui | ssl.keystore.location = null
kafka-ui | ssl.keystore.password = null
kafka-ui | ssl.keystore.type = JKS
kafka-ui | ssl.protocol = TLSv1.3
kafka-ui | ssl.provider = null
kafka-ui | ssl.secure.random.implementation = null
kafka-ui | ssl.trustmanager.algorithm = PKIX
kafka-ui | ssl.truststore.certificates = null
kafka-ui | ssl.truststore.location = null
kafka-ui | ssl.truststore.password = null
kafka-ui | ssl.truststore.type = JKS
kafka-ui |
kafka-ui | 2022-12-09 00:48:21,733 INFO [parallel-1]
o.a.k.c.u.AppInfoParser: Kafka version: 2.8.0
kafka-ui | 2022-12-09 00:48:21,734 INFO [parallel-1]
o.a.k.c.u.AppInfoParser: Kafka commitId: ebb1d6e21cc92130
kafka-ui | 2022-12-09 00:48:21,734 INFO [parallel-1]
o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1670546901708
schema | [2022-12-09 00:48:22,777] INFO App info
kafka.admin.client for adminclient-1 unregistered
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:22,833] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 00:48:22,834] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 00:48:22,834] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 00:48:22,963] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 00:48:22,964] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 00:48:22,964] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 00:48:23,129] INFO Initializing
KafkaStore with broker endpoints: PLAINTEXT://kafka-local:9095
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 00:48:23,155] INFO
AdminClientConfig values:
schema | bootstrap.servers = [PLAINTEXT://kafka-
local:9095]
schema | client.dns.lookup = use_all_dns_ips
schema | client.id =
schema | connections.max.idle.ms = 300000
schema | default.api.timeout.ms = 60000
schema | host.resolver.class = class
org.apache.kafka.clients.DefaultHostResolver
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 2147483647
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema |
(org.apache.kafka.clients.admin.AdminClientConfig)
schema | [2022-12-09 00:48:23,208] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:23,209] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:23,209] INFO Kafka
startTimeMs: 1670546903208
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:23,285] INFO Creating
schemas topic _schemas
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 00:48:23,305] WARN Creating
the schema topic _schemas using a replication factor of 1,
which is less than the desired one of 3. If this is a production
environment, it's crucial to add more brokers and increase the
replication factor of the topic.
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
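This WARN is expected on a single-broker dev stack: the store wants a replication factor of 3, but only broker 1 exists, so _schemas falls back to a factor of 1. To make that explicit (and silence the warning), the schema service's environment could set SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR to 1; Confluent images translate environment variables of that shape into the matching config key. A hypothetical way to see which overrides the container currently carries:

  docker exec schema env | grep SCHEMA_REGISTRY_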
kafka | [2022-12-09 00:48:23,738] INFO Creating topic
_schemas with configuration {cleanup.policy=compact} and
initial partition assignment HashMap(0 -> ArrayBuffer(1))
(kafka.zk.AdminZkClient)
kafka | [2022-12-09 00:48:23,865] INFO [Controller id=1] New topics: [Set(_schemas)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(_schemas,Some(iZzXfq3mTJWqBDSup0ZbYA),Map(_schemas-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:23,868] INFO [Controller
id=1] New partition creation callback for _schemas-0
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:23,877] INFO [Controller
id=1 epoch=1] Changed partition _schemas-0 state from
NonExistentPartition to NewPartition with assigned replicas 1
(state.change.logger)
kafka | [2022-12-09 00:48:23,881] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 00:48:23,893] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
_schemas-0 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:23,893] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 00:48:23,997] INFO [Controller
id=1 epoch=1] Changed partition _schemas-0 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:24,002] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='_schemas',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition _schemas-0 (state.change.logger)
kafka | [2022-12-09 00:48:24,006] INFO [Controller
id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with
1 become-leader and 0 become-follower partitions
(state.change.logger)
kafka | [2022-12-09 00:48:24,018] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet(1) for 1 partitions (state.change.logger)
kafka | [2022-12-09 00:48:24,027] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
_schemas-0 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:24,027] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 00:48:24,051] INFO [Broker id=1]
Handling LeaderAndIsr request correlationId 1 from controller 1
for 1 partitions (state.change.logger)
kafka | [2022-12-09 00:48:24,054] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='_schemas',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:24,226] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 1 from
controller 1 epoch 1 starting the become-leader transition for
partition _schemas-0 (state.change.logger)
kafka | [2022-12-09 00:48:24,231] INFO
[ReplicaFetcherManager on broker 1] Removed fetcher for
partitions Set(_schemas-0)
(kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 00:48:24,232] INFO [Broker id=1]
Stopped fetchers as part of LeaderAndIsr request correlationId
1 from controller 1 epoch 1 as part of the become-leader
transition for 1 partitions (state.change.logger)
kafka | [2022-12-09 00:48:24,751] INFO [LogLoader
partition=_schemas-0, dir=/var/lib/kafka/data] Loading
producer state till offset 0 with message format version 2
(kafka.log.UnifiedLog$)
kafka-ui | 2022-12-09 00:48:24,803 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 00:48:24,822] INFO Created log
for partition _schemas-0 in /var/lib/kafka/data/_schemas-0 with
properties {cleanup.policy=compact} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:24,833] INFO [Partition
_schemas-0 broker=1] No checkpointed highwatermark is
found for partition _schemas-0 (kafka.cluster.Partition)
kafka | [2022-12-09 00:48:24,835] INFO [Partition
_schemas-0 broker=1] Log loaded for partition _schemas-0 with
initial high watermark 0 (kafka.cluster.Partition)
kafka | [2022-12-09 00:48:24,840] INFO [Broker id=1]
Leader _schemas-0 starts at leader epoch 0 from offset 0 with
high watermark 0 ISR [1] addingReplicas [] removingReplicas [].
Previous leader epoch was -1. (state.change.logger)
kafka | [2022-12-09 00:48:24,866] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 1 from
controller 1 epoch 1 for the become-leader transition for
partition _schemas-0 (state.change.logger)
kafka | [2022-12-09 00:48:24,889] INFO [Broker id=1]
Finished LeaderAndIsr request in 856ms correlationId 1 from
controller 1 for 1 partitions (state.change.logger)
kafka | [2022-12-09 00:48:24,903] TRACE [Controller
id=1 epoch=1] Received response
LeaderAndIsrResponseData(errorCode=0, partitionErrors=[],
topics=[LeaderAndIsrTopicError(topicId=iZzXfq3mTJWqBDSup0ZbYA,
partitionErrors=[LeaderAndIsrPartitionError(topicName='',
partitionIndex=0, errorCode=0)])]) for request
LEADER_AND_ISR with correlation id 1 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 00:48:24,930] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='_schemas',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition _schemas-0 in response to
UpdateMetadata request sent by controller 1 epoch 1 with
correlation id 2 (state.change.logger)
kafka | [2022-12-09 00:48:24,933] INFO [Broker id=1]
Add 1 partitions and deleted 0 partitions from metadata cache
in response to UpdateMetadata request sent by controller 1
epoch 1 with correlation id 2 (state.change.logger)
kafka | [2022-12-09 00:48:24,951] TRACE [Controller
id=1 epoch=1] Received response
UpdateMetadataResponseData(errorCode=0) for request
UPDATE_METADATA with correlation id 2 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
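The controller round trip is now complete: _schemas-0 has broker 1 as its leader, the metadata cache is updated, and the topic is usable. A hedged verification with the broker's CLI tools (same container and listener assumptions as above):

  docker exec kafka kafka-topics --bootstrap-server kafka-local:9095 \
    --describe --topic _schemas
  # expected shape: PartitionCount: 1, ReplicationFactor: 1, Leader: 1, Isr: 1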
schema | [2022-12-09 00:48:25,012] INFO App info
kafka.admin.client for adminclient-2 unregistered
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,059] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 00:48:25,060] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 00:48:25,060] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 00:48:25,097] INFO
ProducerConfig values:
schema | acks = -1
schema | batch.size = 16384
schema | bootstrap.servers = [PLAINTEXT://kafka-
local:9095]
schema | buffer.memory = 33554432
schema | client.dns.lookup = use_all_dns_ips
schema | client.id = producer-1
schema | compression.type = none
schema | connections.max.idle.ms = 540000
schema | delivery.timeout.ms = 120000
schema | enable.idempotence = false
schema | interceptor.classes = []
schema | key.serializer = class
org.apache.kafka.common.serialization.ByteArraySerializer
schema | linger.ms = 0
schema | max.block.ms = 60000
schema | max.in.flight.requests.per.connection = 5
schema | max.request.size = 1048576
schema | metadata.max.age.ms = 300000
schema | metadata.max.idle.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | partitioner.class = class
org.apache.kafka.clients.producer.internals.DefaultPartitioner
schema | receive.buffer.bytes = 32768
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 0
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema | transaction.timeout.ms = 60000
schema | transactional.id = null
schema | value.serializer = class
org.apache.kafka.common.serialization.ByteArraySerializer
schema |
(org.apache.kafka.clients.producer.ProducerConfig)
schema | [2022-12-09 00:48:25,234] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,234] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,234] INFO Kafka
startTimeMs: 1670546905233
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,297] INFO [Producer
clientId=producer-1] Cluster ID: u7FTV21fSNCC5BbN_1oteQ
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:25,390] INFO Registered
kafka:type=kafka.Log4jController MBean
(kafka.utils.Log4jControllerRegistration$)
schema | [2022-12-09 00:48:25,392] INFO Kafka store reader thread starting consumer (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 00:48:25,420] INFO
ConsumerConfig values:
schema | allow.auto.create.topics = true
schema | auto.commit.interval.ms = 5000
schema | auto.offset.reset = earliest
schema | bootstrap.servers = [PLAINTEXT://kafka-
local:9095]
schema | check.crcs = true
schema | client.dns.lookup = use_all_dns_ips
schema | client.id = KafkaStore-reader-_schemas
schema | client.rack =
schema | connections.max.idle.ms = 540000
schema | default.api.timeout.ms = 60000
schema | enable.auto.commit = false
schema | exclude.internal.topics = true
schema | fetch.max.bytes = 52428800
schema | fetch.max.wait.ms = 500
schema | fetch.min.bytes = 1
schema | group.id = schema-registry-schema-9091
schema | group.instance.id = null
schema | heartbeat.interval.ms = 3000
schema | interceptor.classes = []
schema | internal.leave.group.on.close = true
schema |
internal.throw.on.fetch.stable.offset.unsupported = false
schema | isolation.level = read_uncommitted
schema | key.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
schema | max.partition.fetch.bytes = 1048576
schema | max.poll.interval.ms = 300000
schema | max.poll.records = 500
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | partition.assignment.strategy = [class
org.apache.kafka.clients.consumer.RangeAssignor, class
org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | session.timeout.ms = 45000
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema | value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
schema |
(org.apache.kafka.clients.consumer.ConsumerConfig)
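A few values in this consumer dump are worth calling out: group.id is derived from the registry's host and port (schema-registry-schema-9091), auto.offset.reset = earliest makes the reader replay the whole _schemas log on startup, and enable.auto.commit = false because the store reader tracks its own position. Since the reader assigns its partition directly rather than subscribing through the coordinator, describing that group may return little or nothing; a hedged check, under the same CLI assumptions:

  docker exec kafka kafka-consumer-groups --bootstrap-server kafka-local:9095 \
    --describe --group schema-registry-schema-9091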
schema | [2022-12-09 00:48:25,630] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,630] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,630] INFO Kafka
startTimeMs: 1670546905630
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:25,665] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Cluster ID: u7FTV21fSNCC5BbN_1oteQ
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:25,713] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Subscribed to partition(s): _schemas-0
(org.apache.kafka.clients.consumer.KafkaConsumer)
schema | [2022-12-09 00:48:25,722] INFO Seeking to
beginning for all partitions
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 00:48:25,724] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Seeking to EARLIEST offset of partition
_schemas-0
(org.apache.kafka.clients.consumer.internals.SubscriptionState)
schema | [2022-12-09 00:48:25,725] INFO Initialized
last consumed offset to -1
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 00:48:25,748] INFO [kafka-store-
reader-thread-_schemas]: Starting
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 00:48:26,080] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Resetting the last seen epoch of
partition _schemas-0 to 0 since the associated topicId changed
from null to iZzXfq3mTJWqBDSup0ZbYA
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:26,332] INFO [Producer
clientId=producer-1] Resetting the last seen epoch of partition
_schemas-0 to 0 since the associated topicId changed from null
to iZzXfq3mTJWqBDSup0ZbYA
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:26,957] INFO Wait to catch
up until the offset at 0
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 00:48:27,359] INFO Reached
offset at 0
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 00:48:27,360] INFO Joining
schema registry with Kafka-based coordination
(io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
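With the store caught up to offset 0 and the registry joining its coordination group, it is close to serving requests. Once the REST listener is up, registering a first schema exercises the whole write path through _schemas. A sketch, again assuming port 9091 is published to the host; the subject name test-value is made up for illustration:

  curl -s -X POST -H 'Content-Type: application/vnd.schemaregistry.v1+json' \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:9091/subjects/test-value/versions   # likely response: {"id":1}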
schema | [2022-12-09 00:48:27,442] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:27,442] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:27,443] INFO Kafka
startTimeMs: 1670546907442
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 00:48:27,513] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition _schemas-0 to 0 since the
associated topicId changed from null to
iZzXfq3mTJWqBDSup0ZbYA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:27,527] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Cluster ID:
u7FTV21fSNCC5BbN_1oteQ (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 00:48:27,552] INFO Creating topic
__consumer_offsets with configuration
{compression.type=producer, cleanup.policy=compact,
segment.bytes=104857600} and initial partition assignment
HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 ->
ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 ->
ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 ->
ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 ->
ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -
> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17
-> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1),
20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 ->
ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -
> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28
-> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1),
31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 ->
ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -
> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39
-> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1),
42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 ->
ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -
> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1))
(kafka.zk.AdminZkClient)
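The first group coordination request triggers creation of the internal __consumer_offsets topic; the HashMap above shows its default 50 partitions, all assigned to the lone broker 1. Afterwards, consumer groups can be listed, hypothetically:

  docker exec kafka kafka-consumer-groups --bootstrap-server kafka-local:9095 --list
  # e.g. schema-registry (the registry's coordination group joined above)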
kafka | [2022-12-09 00:48:27,628] INFO [Controller
id=1] New topics: [Set(__consumer_offsets)], deleted topics:
[HashSet()], new partition replica assignment
[Set(TopicIdReplicaAssignment(__consumer_offsets,Some(MXjtiLKSTpuWmttd37LXhA),HashMap(__consumer_offsets-22 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-30 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-25 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-35 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-37 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-38 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-13 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-8 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-21 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-4 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-27 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-7 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-9 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-46 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-41 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-33 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-23 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-49 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-47 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-16 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-28 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-31 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-36 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-42 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-3 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-18 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-15 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-24 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-17 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-48 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-19 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-11 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-2 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-43 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-6 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-14 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-20 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-0 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-44 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-39 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-12 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-45 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-1 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-5 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-26 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-29 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-34 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-10 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-32 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-40 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:27,659] INFO [Controller
id=1] New partition creation callback for __consumer_offsets-
22,__consumer_offsets-30,__consumer_offsets-
25,__consumer_offsets-35,__consumer_offsets-
37,__consumer_offsets-38,__consumer_offsets-
13,__consumer_offsets-8,__consumer_offsets-
21,__consumer_offsets-4,__consumer_offsets-
27,__consumer_offsets-7,__consumer_offsets-
9,__consumer_offsets-46,__consumer_offsets-
41,__consumer_offsets-33,__consumer_offsets-
23,__consumer_offsets-49,__consumer_offsets-
47,__consumer_offsets-16,__consumer_offsets-
28,__consumer_offsets-31,__consumer_offsets-
36,__consumer_offsets-42,__consumer_offsets-
3,__consumer_offsets-18,__consumer_offsets-
15,__consumer_offsets-24,__consumer_offsets-
17,__consumer_offsets-48,__consumer_offsets-
19,__consumer_offsets-11,__consumer_offsets-
2,__consumer_offsets-43,__consumer_offsets-
6,__consumer_offsets-14,__consumer_offsets-
20,__consumer_offsets-0,__consumer_offsets-
44,__consumer_offsets-39,__consumer_offsets-
12,__consumer_offsets-45,__consumer_offsets-
1,__consumer_offsets-5,__consumer_offsets-
26,__consumer_offsets-29,__consumer_offsets-
34,__consumer_offsets-10,__consumer_offsets-
32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2022-12-09 00:48:27,663] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-22 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,667] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-30 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,667] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-25 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,667] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-35 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-37 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-38 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-13 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-8 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-21 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-4 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,668] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-27 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-7 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-9 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-46 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-41 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-33 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-23 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,669] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-49 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-47 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-16 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-28 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-31 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-36 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-42 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-3 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,670] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-18 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,671] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-15 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,671] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-24 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,671] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-17 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,677] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-48 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-19 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-11 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-2 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-43 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-6 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-14 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,678] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-20 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,679] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-0 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,679] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-44 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,679] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-39 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,679] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-12 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,679] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-45 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,679] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-1 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-5 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-26 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-29 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-34 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-10 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-32 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,680] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-40 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 00:48:27,681] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 00:48:27,692] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-32 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,702] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-5 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,702] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-44 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,702] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-48 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-46 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-20 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-43 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-24 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-6 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-18 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-21 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-1 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-14 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,703] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-34 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-16 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-29 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-11 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-0 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-22 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-47 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-36 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,704] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-28 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-42 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-9 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-37 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-13 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-30 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-35 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-39 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-12 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-27 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,705] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-45 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-19 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-49 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-40 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-41 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-38 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-8 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,706] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-7 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-33 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-25 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-31 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-23 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-10 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-2 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-17 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,707] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-4 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,708] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-15 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,708] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-26 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,708] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-3 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 00:48:27,708] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
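The TRACE entries above are the replica state machine mirroring the partition state machine: for every partition, replica 1 moves from NonExistentReplica to NewReplica before any leader election happens. The entries arrive in hash order rather than partition order, which is why the partition numbers look shuffled. One way to confirm all 50 transitions made it through, assuming the stack was started with the compose file used above:

  docker-compose -f ./app/docker-compose.yml logs kafka \
    | grep -c 'NonExistentReplica to NewReplica'
  # expected: at least 50 (one per __consumer_offsets partition); topics created
  # later in the bootstrap, e.g. Schema Registry's _schemas, add their own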
kafka | [2022-12-09 00:48:28,241] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-22 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,241] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-30 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,249] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-25 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,250] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-35 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,250] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-37 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,250] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-38 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,250] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-13 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-8 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-21 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-4 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-27 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-7 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-9 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-46 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-41 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-33 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-23 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-49 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-47 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,251] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-16 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-28 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-31 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-36 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-42 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-3 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-18 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-15 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,252] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-24 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,255] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-17 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,255] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-48 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,255] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-19 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-11 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-2 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-43 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-6 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-14 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-20 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-0 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-44 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-39 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,256] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-12 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-45 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-1 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-5 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-26 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-29 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-34 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-10 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-32 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 00:48:28,257] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-40 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
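With the replicas registered, each partition now moves from NewPartition to OnlinePartition. The LeaderAndIsr state spells out the result of the first election: leader=1 (the only broker), leaderEpoch=0 (first epoch), isr=List(1) (the in-sync replica set is just the leader itself), and zkVersion=0 because this state is persisted in the zookeeper container started earlier in this log. With a replication factor of 1 the ISR can never shrink below the full replica set, which a quick check can confirm (same kafka:9092 listener assumption as above):

  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-topics --bootstrap-server kafka:9092 \
    --describe --under-replicated-partitions
  # expected: no output; a single-replica partition is never under-replicated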
kafka | [2022-12-09 00:48:28,258] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=13, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2022-12-09 00:48:28,258] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=46, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2022-12-09 00:48:28,258] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=9, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2022-12-09 00:48:28,258] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=42, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2022-12-09 00:48:28,258] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=21, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2022-12-09 00:48:28,258] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=17, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=30, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=26, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=5, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=38, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=1, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=34, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2022-12-09 00:48:28,262] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=16, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2022-12-09 00:48:28,263] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=45, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2022-12-09 00:48:28,263] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=12, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2022-12-09 00:48:28,263] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2022-12-09 00:48:28,263] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=24, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2022-12-09 00:48:28,263] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=20, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2022-12-09 00:48:28,263] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=49, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2022-12-09 00:48:28,267] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2022-12-09 00:48:28,274] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=29, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=25, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=8, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=37, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=4, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=33, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=15, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=48, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=11, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=44, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=23, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2022-12-09 00:48:28,275] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=19, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=32, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=28, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=7, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=40, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=3, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=36, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=47, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=14, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=43, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2022-12-09 00:48:28,276] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=10, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=22, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=18, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=31, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=27, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=39, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=6, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=35, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=2, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2022-12-09 00:48:28,277] INFO [Controller
id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with
50 become-leader and 0 become-follower partitions
(state.change.logger)
kafka | [2022-12-09 00:48:28,280] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet(1) for 50 partitions (state.change.logger)
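The two INFO lines above summarize the batch: broker 1 becomes leader for all 50 partitions and follower for none, the only possible outcome on a single-broker cluster, and the UpdateMetadata request now targets HashSet(1), the one live broker, so clients can discover the new topic through it. Once the group coordinator is serving these partitions, each consumer group's committed offsets land in partition abs(hash(group.id)) % 50. Listing the groups registered with the coordinator, under the same listener assumption:

  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-consumer-groups --bootstrap-server kafka:9092 --list
  # empty until something starts consuming (kafka-ui or the service under development)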
kafka | [2022-12-09 00:48:28,296] INFO [Broker id=1]
Handling LeaderAndIsr request correlationId 3 from controller 1
for 50 partitions (state.change.logger)
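Here the output switches from the controller role to the broker role of the same JVM: broker 1 acknowledges the LeaderAndIsr request (correlationId 3) for all 50 partitions and will create an on-disk log directory for each one. Assuming the Confluent image's default data directory /var/lib/kafka/data (overridable via KAFKA_LOG_DIRS in the compose file), those directories can be listed directly:

  docker-compose -f ./app/docker-compose.yml exec kafka \
    sh -c "ls /var/lib/kafka/data | grep __consumer_offsets | sort -t- -k2 -n"
  # expected: __consumer_offsets-0 through __consumer_offsets-49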
kafka | [2022-12-09 00:48:28,308] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=13, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,309] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=46, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,309] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=9, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,309] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=42, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,309] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=21, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,309] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=17, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,309] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=30, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,310] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=26, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,310] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=5, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,313] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=38, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,313] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=1, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,314] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-32 from NewReplica to OnlineReplica
(state.change.logger)
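From this point on the [Broker] and [Controller] entries interleave: the broker acknowledges LeaderAndIsr requests partition by partition while the controller thread concurrently records each replica's move from NewReplica to OnlineReplica, and because both threads share one stdout the timestamps occasionally run a few milliseconds out of order. To read one thread at a time, the merged stream can be split after the fact:

  # controller-side state transitions only
  docker-compose -f ./app/docker-compose.yml logs kafka | grep -F 'TRACE [Controller'
  # broker-side LeaderAndIsr handling only
  docker-compose -f ./app/docker-compose.yml logs kafka | grep -F 'TRACE [Broker'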
kafka | [2022-12-09 00:48:28,316] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=34, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,317] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=16, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,317] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=45, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,317] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=12, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,318] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,317] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-5 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,318] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=24, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,318] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=20, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,318] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=49, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,318] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,318] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=29, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=25, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=8, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=37, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-44 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=4, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=33, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=15, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-48 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=48, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-46 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-20 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-43 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,319] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=11, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-24 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=44, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-6 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=23, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-18 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-21 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=19, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-1 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-14 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=32, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,320] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-34 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-16 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-29 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-11 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-0 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-22 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=28, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-47 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-36 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-28 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-42 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,322] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-9 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,322] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-37 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,322] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-13 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,322] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-30 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,322] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-35 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,321] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=7, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=40, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=3, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
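
Each LeaderAndIsrPartitionState above is the authoritative leadership
view the controller persisted to ZooKeeper: leader and isr both name
broker 1 because this is a single-broker cluster, and zkVersion=0 marks
the first write of the partition-state znode. As a rough cross-check,
the same state can be read back from ZooKeeper; the container name and
port below are taken from this compose setup:

docker exec zookeeper zookeeper-shell localhost:2181 \
  get /brokers/topics/__consumer_offsets/partitions/3/state
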
kafka | [2022-12-09 00:48:28,323] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-39 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=36, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-12 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=47, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-27 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=14, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,323] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-45 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,324] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=43, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,324] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-19 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,324] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-49 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,324] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-40 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,325] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-41 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,325] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-38 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,325] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-8 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,324] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=10, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,325] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-7 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-33 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=22, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=18, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-25 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=31, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=27, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-31 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,327] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-23 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,327] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-10 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,327] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-2 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,327] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-17 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,327] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-4 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,326] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=39, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,330] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-15 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,329] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=6, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,330] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-26 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,330] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=35, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,330] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-3 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 00:48:28,330] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=2, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 00:48:28,330] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
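
The "Sending UpdateMetadata request to brokers HashSet() for 0
partitions" line is harmless: with a single broker there are no peers
left to notify, so the target set is empty. Which broker currently holds
the controller role can be checked from the /controller znode, again
assuming the zookeeper container from this setup:

docker exec zookeeper zookeeper-shell localhost:2181 get /controller
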
kafka | [2022-12-09 00:48:29,045] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-3 (state.change.logger)
kafka | [2022-12-09 00:48:29,053] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-18 (state.change.logger)
kafka | [2022-12-09 00:48:29,060] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-41 (state.change.logger)
kafka | [2022-12-09 00:48:29,060] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-10 (state.change.logger)
kafka | [2022-12-09 00:48:29,061] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-33 (state.change.logger)
kafka | [2022-12-09 00:48:29,061] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-48 (state.change.logger)
kafka | [2022-12-09 00:48:29,061] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-19 (state.change.logger)
kafka | [2022-12-09 00:48:29,061] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-34 (state.change.logger)
kafka | [2022-12-09 00:48:29,062] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-4 (state.change.logger)
kafka | [2022-12-09 00:48:29,062] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-11 (state.change.logger)
kafka | [2022-12-09 00:48:29,062] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-26 (state.change.logger)
kafka | [2022-12-09 00:48:29,062] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-49 (state.change.logger)
kafka | [2022-12-09 00:48:29,062] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-39 (state.change.logger)
kafka | [2022-12-09 00:48:29,062] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-9 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-24 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-31 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-46 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-1 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-16 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-2 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-25 (state.change.logger)
kafka | [2022-12-09 00:48:29,063] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-40 (state.change.logger)
kafka | [2022-12-09 00:48:29,064] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-47 (state.change.logger)
kafka | [2022-12-09 00:48:29,064] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-17 (state.change.logger)
kafka | [2022-12-09 00:48:29,064] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-32 (state.change.logger)
kafka | [2022-12-09 00:48:29,065] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-37 (state.change.logger)
kafka | [2022-12-09 00:48:29,065] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-7 (state.change.logger)
kafka | [2022-12-09 00:48:29,065] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-22 (state.change.logger)
kafka | [2022-12-09 00:48:29,066] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-29 (state.change.logger)
kafka | [2022-12-09 00:48:29,066] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-44 (state.change.logger)
kafka | [2022-12-09 00:48:29,066] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-14 (state.change.logger)
kafka | [2022-12-09 00:48:29,066] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-23 (state.change.logger)
kafka | [2022-12-09 00:48:29,067] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-38 (state.change.logger)
kafka | [2022-12-09 00:48:29,067] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-8 (state.change.logger)
kafka | [2022-12-09 00:48:29,067] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-45 (state.change.logger)
kafka | [2022-12-09 00:48:29,068] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-15 (state.change.logger)
kafka | [2022-12-09 00:48:29,071] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-30 (state.change.logger)
kafka | [2022-12-09 00:48:29,073] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-0 (state.change.logger)
kafka | [2022-12-09 00:48:29,073] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-35 (state.change.logger)
kafka | [2022-12-09 00:48:29,074] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-5 (state.change.logger)
kafka | [2022-12-09 00:48:29,074] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-20 (state.change.logger)
kafka | [2022-12-09 00:48:29,074] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-27 (state.change.logger)
kafka | [2022-12-09 00:48:29,074] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-42 (state.change.logger)
kafka | [2022-12-09 00:48:29,074] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-12 (state.change.logger)
kafka | [2022-12-09 00:48:29,074] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-21 (state.change.logger)
kafka | [2022-12-09 00:48:29,077] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-36 (state.change.logger)
kafka | [2022-12-09 00:48:29,078] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-6 (state.change.logger)
kafka | [2022-12-09 00:48:29,078] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-43 (state.change.logger)
kafka | [2022-12-09 00:48:29,078] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-13 (state.change.logger)
kafka | [2022-12-09 00:48:29,078] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-28 (state.change.logger)
kafka | [2022-12-09 00:48:29,094] INFO
[ReplicaFetcherManager on broker 1] Removed fetcher for
partitions HashSet(__consumer_offsets-22, __consumer_offsets-
30, __consumer_offsets-25, __consumer_offsets-35,
__consumer_offsets-37, __consumer_offsets-38,
__consumer_offsets-13, __consumer_offsets-8,
__consumer_offsets-21, __consumer_offsets-4,
__consumer_offsets-27, __consumer_offsets-7,
__consumer_offsets-9, __consumer_offsets-46,
__consumer_offsets-41, __consumer_offsets-33,
__consumer_offsets-23, __consumer_offsets-49,
__consumer_offsets-47, __consumer_offsets-16,
__consumer_offsets-28, __consumer_offsets-31,
__consumer_offsets-36, __consumer_offsets-42,
__consumer_offsets-3, __consumer_offsets-18,
__consumer_offsets-15, __consumer_offsets-24,
__consumer_offsets-17, __consumer_offsets-48,
__consumer_offsets-19, __consumer_offsets-11,
__consumer_offsets-2, __consumer_offsets-43,
__consumer_offsets-6, __consumer_offsets-14,
__consumer_offsets-20, __consumer_offsets-0,
__consumer_offsets-44, __consumer_offsets-39,
__consumer_offsets-12, __consumer_offsets-45,
__consumer_offsets-1, __consumer_offsets-5,
__consumer_offsets-26, __consumer_offsets-29,
__consumer_offsets-34, __consumer_offsets-10,
__consumer_offsets-32, __consumer_offsets-40)
(kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 00:48:29,104] INFO [Broker id=1]
Stopped fetchers as part of LeaderAndIsr request correlationId
3 from controller 1 epoch 1 as part of the become-leader
transition for 50 partitions (state.change.logger)
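
The 50 partitions here are the broker default
offsets.topic.num.partitions=50; and since only one broker exists, the
offsets topic must have been created with replication factor 1
(offsets.topic.replication.factor presumably set to 1 in the compose
environment, otherwise its creation would fail). A quick sanity check,
assuming a listener on localhost:9092 inside the kafka container:

docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --describe --topic __consumer_offsets
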
kafka | [2022-12-09 00:48:29,213] INFO [LogLoader
partition=__consumer_offsets-3, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,258] INFO Created log
for partition __consumer_offsets-3 in
/var/lib/kafka/data/__consumer_offsets-3 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,268] INFO [Partition
__consumer_offsets-3 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-3
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,273] INFO [Partition
__consumer_offsets-3 broker=1] Log loaded for partition
__consumer_offsets-3 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,275] INFO [Broker id=1]
Leader __consumer_offsets-3 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
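
Every offsets partition repeats the lifecycle shown above for
__consumer_offsets-3: LogLoader replays producer state (empty at offset
0 on a fresh volume), LogManager creates the on-disk log with the
compacted-topic settings (cleanup.policy=compact retains only the latest
committed offset per group/topic/partition key; segment.bytes=104857600
is 100 MiB), and the broker then becomes leader at epoch 0. The
effective topic configuration can be confirmed with kafka-configs, under
the same localhost:9092 assumption as above:

docker exec kafka kafka-configs --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name __consumer_offsets --describe
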
kafka | [2022-12-09 00:48:29,301] INFO [LogLoader
partition=__consumer_offsets-18, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,311] INFO Created log
for partition __consumer_offsets-18 in
/var/lib/kafka/data/__consumer_offsets-18 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,311] INFO [Partition
__consumer_offsets-18 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-18
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,311] INFO [Partition
__consumer_offsets-18 broker=1] Log loaded for partition
__consumer_offsets-18 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,311] INFO [Broker id=1]
Leader __consumer_offsets-18 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,322] INFO [LogLoader
partition=__consumer_offsets-41, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,325] INFO Created log
for partition __consumer_offsets-41 in
/var/lib/kafka/data/__consumer_offsets-41 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,325] INFO [Partition
__consumer_offsets-41 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-41
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,325] INFO [Partition
__consumer_offsets-41 broker=1] Log loaded for partition
__consumer_offsets-41 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,326] INFO [Broker id=1]
Leader __consumer_offsets-41 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,345] INFO [LogLoader
partition=__consumer_offsets-10, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,354] INFO Created log
for partition __consumer_offsets-10 in
/var/lib/kafka/data/__consumer_offsets-10 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,374] INFO [Partition
__consumer_offsets-10 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-10
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,376] INFO [Partition
__consumer_offsets-10 broker=1] Log loaded for partition
__consumer_offsets-10 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,377] INFO [Broker id=1]
Leader __consumer_offsets-10 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,426] INFO [LogLoader
partition=__consumer_offsets-33, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,433] INFO Created log
for partition __consumer_offsets-33 in
/var/lib/kafka/data/__consumer_offsets-33 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,433] INFO [Partition
__consumer_offsets-33 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-33
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,435] INFO [Partition
__consumer_offsets-33 broker=1] Log loaded for partition
__consumer_offsets-33 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,435] INFO [Broker id=1]
Leader __consumer_offsets-33 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,459] INFO [LogLoader
partition=__consumer_offsets-48, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,480] INFO Created log
for partition __consumer_offsets-48 in
/var/lib/kafka/data/__consumer_offsets-48 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,480] INFO [Partition
__consumer_offsets-48 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-48
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,481] INFO [Partition
__consumer_offsets-48 broker=1] Log loaded for partition
__consumer_offsets-48 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,481] INFO [Broker id=1]
Leader __consumer_offsets-48 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,518] INFO [LogLoader
partition=__consumer_offsets-19, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,538] INFO Created log
for partition __consumer_offsets-19 in
/var/lib/kafka/data/__consumer_offsets-19 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,538] INFO [Partition
__consumer_offsets-19 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-19
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,538] INFO [Partition
__consumer_offsets-19 broker=1] Log loaded for partition
__consumer_offsets-19 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,539] INFO [Broker id=1]
Leader __consumer_offsets-19 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,566] INFO [LogLoader
partition=__consumer_offsets-34, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,571] INFO Created log
for partition __consumer_offsets-34 in
/var/lib/kafka/data/__consumer_offsets-34 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,572] INFO [Partition
__consumer_offsets-34 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-34
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,572] INFO [Partition
__consumer_offsets-34 broker=1] Log loaded for partition
__consumer_offsets-34 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,572] INFO [Broker id=1]
Leader __consumer_offsets-34 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,587] INFO [LogLoader
partition=__consumer_offsets-4, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,591] INFO Created log
for partition __consumer_offsets-4 in
/var/lib/kafka/data/__consumer_offsets-4 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,592] INFO [Partition
__consumer_offsets-4 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-4
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,592] INFO [Partition
__consumer_offsets-4 broker=1] Log loaded for partition
__consumer_offsets-4 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,592] INFO [Broker id=1]
Leader __consumer_offsets-4 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,608] INFO [LogLoader
partition=__consumer_offsets-11, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,614] INFO Created log
for partition __consumer_offsets-11 in
/var/lib/kafka/data/__consumer_offsets-11 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,614] INFO [Partition
__consumer_offsets-11 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-11
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,614] INFO [Partition
__consumer_offsets-11 broker=1] Log loaded for partition
__consumer_offsets-11 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,614] INFO [Broker id=1]
Leader __consumer_offsets-11 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,630] INFO [LogLoader
partition=__consumer_offsets-26, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,648] INFO Created log
for partition __consumer_offsets-26 in
/var/lib/kafka/data/__consumer_offsets-26 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,649] INFO [Partition
__consumer_offsets-26 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-26
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,649] INFO [Partition
__consumer_offsets-26 broker=1] Log loaded for partition
__consumer_offsets-26 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,649] INFO [Broker id=1]
Leader __consumer_offsets-26 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,674] INFO [LogLoader
partition=__consumer_offsets-49, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,678] INFO Created log
for partition __consumer_offsets-49 in
/var/lib/kafka/data/__consumer_offsets-49 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,681] INFO [Partition
__consumer_offsets-49 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-49
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,681] INFO [Partition
__consumer_offsets-49 broker=1] Log loaded for partition
__consumer_offsets-49 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,681] INFO [Broker id=1]
Leader __consumer_offsets-49 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,693] INFO [LogLoader
partition=__consumer_offsets-39, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,697] INFO Created log
for partition __consumer_offsets-39 in
/var/lib/kafka/data/__consumer_offsets-39 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,697] INFO [Partition
__consumer_offsets-39 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-39
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,697] INFO [Partition
__consumer_offsets-39 broker=1] Log loaded for partition
__consumer_offsets-39 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,697] INFO [Broker id=1]
Leader __consumer_offsets-39 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,727] INFO [LogLoader
partition=__consumer_offsets-9, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,734] INFO Created log
for partition __consumer_offsets-9 in
/var/lib/kafka/data/__consumer_offsets-9 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,736] INFO [Partition
__consumer_offsets-9 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-9
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,737] INFO [Partition
__consumer_offsets-9 broker=1] Log loaded for partition
__consumer_offsets-9 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,737] INFO [Broker id=1]
Leader __consumer_offsets-9 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,780] INFO [LogLoader
partition=__consumer_offsets-24, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,793] INFO Created log
for partition __consumer_offsets-24 in
/var/lib/kafka/data/__consumer_offsets-24 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,793] INFO [Partition
__consumer_offsets-24 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-24
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,793] INFO [Partition
__consumer_offsets-24 broker=1] Log loaded for partition
__consumer_offsets-24 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,794] INFO [Broker id=1]
Leader __consumer_offsets-24 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,818] INFO [LogLoader
partition=__consumer_offsets-31, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,825] INFO Created log
for partition __consumer_offsets-31 in
/var/lib/kafka/data/__consumer_offsets-31 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,825] INFO [Partition
__consumer_offsets-31 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-31
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,826] INFO [Partition
__consumer_offsets-31 broker=1] Log loaded for partition
__consumer_offsets-31 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,840] INFO [Broker id=1]
Leader __consumer_offsets-31 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,855] INFO [LogLoader
partition=__consumer_offsets-46, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,858] INFO Created log
for partition __consumer_offsets-46 in
/var/lib/kafka/data/__consumer_offsets-46 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,862] INFO [Partition
__consumer_offsets-46 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-46
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,862] INFO [Partition
__consumer_offsets-46 broker=1] Log loaded for partition
__consumer_offsets-46 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,863] INFO [Broker id=1]
Leader __consumer_offsets-46 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,881] INFO [LogLoader
partition=__consumer_offsets-1, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,886] INFO Created log
for partition __consumer_offsets-1 in
/var/lib/kafka/data/__consumer_offsets-1 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,886] INFO [Partition
__consumer_offsets-1 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-1
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,886] INFO [Partition
__consumer_offsets-1 broker=1] Log loaded for partition
__consumer_offsets-1 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,886] INFO [Broker id=1]
Leader __consumer_offsets-1 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,908] INFO [LogLoader
partition=__consumer_offsets-16, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,919] INFO Created log
for partition __consumer_offsets-16 in
/var/lib/kafka/data/__consumer_offsets-16 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,919] INFO [Partition
__consumer_offsets-16 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-16
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,919] INFO [Partition
__consumer_offsets-16 broker=1] Log loaded for partition
__consumer_offsets-16 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,924] INFO [Broker id=1]
Leader __consumer_offsets-16 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,939] INFO [LogLoader
partition=__consumer_offsets-2, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,942] INFO Created log
for partition __consumer_offsets-2 in
/var/lib/kafka/data/__consumer_offsets-2 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,942] INFO [Partition
__consumer_offsets-2 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-2
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,943] INFO [Partition
__consumer_offsets-2 broker=1] Log loaded for partition
__consumer_offsets-2 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,943] INFO [Broker id=1]
Leader __consumer_offsets-2 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:29,976] INFO [LogLoader
partition=__consumer_offsets-25, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:29,983] INFO Created log
for partition __consumer_offsets-25 in
/var/lib/kafka/data/__consumer_offsets-25 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:29,983] INFO [Partition
__consumer_offsets-25 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-25
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,983] INFO [Partition
__consumer_offsets-25 broker=1] Log loaded for partition
__consumer_offsets-25 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:29,983] INFO [Broker id=1]
Leader __consumer_offsets-25 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,018] INFO [LogLoader
partition=__consumer_offsets-40, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,021] INFO Created log
for partition __consumer_offsets-40 in
/var/lib/kafka/data/__consumer_offsets-40 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,024] INFO [Partition
__consumer_offsets-40 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-40
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,024] INFO [Partition
__consumer_offsets-40 broker=1] Log loaded for partition
__consumer_offsets-40 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,025] INFO [Broker id=1]
Leader __consumer_offsets-40 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,041] INFO [LogLoader
partition=__consumer_offsets-47, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,054] INFO Created log
for partition __consumer_offsets-47 in
/var/lib/kafka/data/__consumer_offsets-47 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,054] INFO [Partition
__consumer_offsets-47 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-47
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,054] INFO [Partition
__consumer_offsets-47 broker=1] Log loaded for partition
__consumer_offsets-47 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,054] INFO [Broker id=1]
Leader __consumer_offsets-47 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,127] INFO [LogLoader
partition=__consumer_offsets-17, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,134] INFO Created log
for partition __consumer_offsets-17 in
/var/lib/kafka/data/__consumer_offsets-17 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,134] INFO [Partition
__consumer_offsets-17 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-17
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,135] INFO [Partition
__consumer_offsets-17 broker=1] Log loaded for partition
__consumer_offsets-17 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,135] INFO [Broker id=1]
Leader __consumer_offsets-17 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,171] INFO [LogLoader
partition=__consumer_offsets-32, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,177] INFO Created log
for partition __consumer_offsets-32 in
/var/lib/kafka/data/__consumer_offsets-32 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,178] INFO [Partition
__consumer_offsets-32 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-32
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,179] INFO [Partition
__consumer_offsets-32 broker=1] Log loaded for partition
__consumer_offsets-32 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,179] INFO [Broker id=1]
Leader __consumer_offsets-32 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,210] INFO [LogLoader
partition=__consumer_offsets-37, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,220] INFO Created log
for partition __consumer_offsets-37 in
/var/lib/kafka/data/__consumer_offsets-37 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,233] INFO [Partition
__consumer_offsets-37 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-37
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,233] INFO [Partition
__consumer_offsets-37 broker=1] Log loaded for partition
__consumer_offsets-37 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,234] INFO [Broker id=1]
Leader __consumer_offsets-37 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,253] INFO [LogLoader
partition=__consumer_offsets-7, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,262] INFO Created log
for partition __consumer_offsets-7 in
/var/lib/kafka/data/__consumer_offsets-7 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,262] INFO [Partition
__consumer_offsets-7 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-7
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,262] INFO [Partition
__consumer_offsets-7 broker=1] Log loaded for partition
__consumer_offsets-7 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,262] INFO [Broker id=1]
Leader __consumer_offsets-7 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,279] INFO [LogLoader
partition=__consumer_offsets-22, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,284] INFO Created log
for partition __consumer_offsets-22 in
/var/lib/kafka/data/__consumer_offsets-22 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,284] INFO [Partition
__consumer_offsets-22 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-22
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,284] INFO [Partition
__consumer_offsets-22 broker=1] Log loaded for partition
__consumer_offsets-22 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,284] INFO [Broker id=1]
Leader __consumer_offsets-22 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,296] INFO [LogLoader
partition=__consumer_offsets-29, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,325] INFO Created log
for partition __consumer_offsets-29 in
/var/lib/kafka/data/__consumer_offsets-29 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,325] INFO [Partition
__consumer_offsets-29 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-29
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,325] INFO [Partition
__consumer_offsets-29 broker=1] Log loaded for partition
__consumer_offsets-29 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,330] INFO [Broker id=1]
Leader __consumer_offsets-29 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,365] INFO [LogLoader
partition=__consumer_offsets-44, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,369] INFO Created log
for partition __consumer_offsets-44 in
/var/lib/kafka/data/__consumer_offsets-44 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,370] INFO [Partition
__consumer_offsets-44 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-44
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,370] INFO [Partition
__consumer_offsets-44 broker=1] Log loaded for partition
__consumer_offsets-44 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,371] INFO [Broker id=1]
Leader __consumer_offsets-44 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,384] INFO [LogLoader
partition=__consumer_offsets-14, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,386] INFO Created log
for partition __consumer_offsets-14 in
/var/lib/kafka/data/__consumer_offsets-14 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,386] INFO [Partition
__consumer_offsets-14 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-14
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,386] INFO [Partition
__consumer_offsets-14 broker=1] Log loaded for partition
__consumer_offsets-14 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,387] INFO [Broker id=1]
Leader __consumer_offsets-14 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,397] INFO [LogLoader
partition=__consumer_offsets-23, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,401] INFO Created log
for partition __consumer_offsets-23 in
/var/lib/kafka/data/__consumer_offsets-23 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,401] INFO [Partition
__consumer_offsets-23 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-23
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,401] INFO [Partition
__consumer_offsets-23 broker=1] Log loaded for partition
__consumer_offsets-23 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,401] INFO [Broker id=1]
Leader __consumer_offsets-23 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,424] INFO [LogLoader
partition=__consumer_offsets-38, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,426] INFO Created log
for partition __consumer_offsets-38 in
/var/lib/kafka/data/__consumer_offsets-38 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,426] INFO [Partition
__consumer_offsets-38 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-38
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,431] INFO [Partition
__consumer_offsets-38 broker=1] Log loaded for partition
__consumer_offsets-38 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,431] INFO [Broker id=1]
Leader __consumer_offsets-38 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,472] INFO [LogLoader
partition=__consumer_offsets-8, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,486] INFO Created log
for partition __consumer_offsets-8 in
/var/lib/kafka/data/__consumer_offsets-8 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,486] INFO [Partition
__consumer_offsets-8 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-8
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,486] INFO [Partition
__consumer_offsets-8 broker=1] Log loaded for partition
__consumer_offsets-8 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,487] INFO [Broker id=1]
Leader __consumer_offsets-8 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,499] INFO [LogLoader
partition=__consumer_offsets-45, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,501] INFO Created log
for partition __consumer_offsets-45 in
/var/lib/kafka/data/__consumer_offsets-45 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,502] INFO [Partition
__consumer_offsets-45 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-45
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,502] INFO [Partition
__consumer_offsets-45 broker=1] Log loaded for partition
__consumer_offsets-45 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,516] INFO [Broker id=1]
Leader __consumer_offsets-45 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,537] INFO [LogLoader
partition=__consumer_offsets-15, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,542] INFO Created log
for partition __consumer_offsets-15 in
/var/lib/kafka/data/__consumer_offsets-15 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,542] INFO [Partition
__consumer_offsets-15 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-15
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,542] INFO [Partition
__consumer_offsets-15 broker=1] Log loaded for partition
__consumer_offsets-15 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,542] INFO [Broker id=1]
Leader __consumer_offsets-15 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,584] INFO [LogLoader
partition=__consumer_offsets-30, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,587] INFO Created log
for partition __consumer_offsets-30 in
/var/lib/kafka/data/__consumer_offsets-30 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,588] INFO [Partition
__consumer_offsets-30 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-30
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,588] INFO [Partition
__consumer_offsets-30 broker=1] Log loaded for partition
__consumer_offsets-30 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,588] INFO [Broker id=1]
Leader __consumer_offsets-30 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,602] INFO [LogLoader
partition=__consumer_offsets-0, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,618] INFO Created log
for partition __consumer_offsets-0 in
/var/lib/kafka/data/__consumer_offsets-0 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,619] INFO [Partition
__consumer_offsets-0 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,619] INFO [Partition
__consumer_offsets-0 broker=1] Log loaded for partition
__consumer_offsets-0 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,619] INFO [Broker id=1]
Leader __consumer_offsets-0 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,646] INFO [LogLoader
partition=__consumer_offsets-35, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,648] INFO Created log
for partition __consumer_offsets-35 in
/var/lib/kafka/data/__consumer_offsets-35 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,649] INFO [Partition
__consumer_offsets-35 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-35
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,649] INFO [Partition
__consumer_offsets-35 broker=1] Log loaded for partition
__consumer_offsets-35 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,649] INFO [Broker id=1]
Leader __consumer_offsets-35 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,672] INFO [LogLoader
partition=__consumer_offsets-5, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,682] INFO Created log
for partition __consumer_offsets-5 in
/var/lib/kafka/data/__consumer_offsets-5 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,683] INFO [Partition
__consumer_offsets-5 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-5
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,683] INFO [Partition
__consumer_offsets-5 broker=1] Log loaded for partition
__consumer_offsets-5 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,683] INFO [Broker id=1]
Leader __consumer_offsets-5 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,701] INFO [LogLoader
partition=__consumer_offsets-20, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,704] INFO Created log
for partition __consumer_offsets-20 in
/var/lib/kafka/data/__consumer_offsets-20 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,704] INFO [Partition
__consumer_offsets-20 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-20
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,704] INFO [Partition
__consumer_offsets-20 broker=1] Log loaded for partition
__consumer_offsets-20 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,704] INFO [Broker id=1]
Leader __consumer_offsets-20 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,717] INFO [LogLoader
partition=__consumer_offsets-27, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,719] INFO Created log
for partition __consumer_offsets-27 in
/var/lib/kafka/data/__consumer_offsets-27 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,719] INFO [Partition
__consumer_offsets-27 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-27
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,720] INFO [Partition
__consumer_offsets-27 broker=1] Log loaded for partition
__consumer_offsets-27 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,720] INFO [Broker id=1]
Leader __consumer_offsets-27 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,730] INFO [LogLoader
partition=__consumer_offsets-42, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,733] INFO Created log
for partition __consumer_offsets-42 in
/var/lib/kafka/data/__consumer_offsets-42 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,734] INFO [Partition
__consumer_offsets-42 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-42
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,734] INFO [Partition
__consumer_offsets-42 broker=1] Log loaded for partition
__consumer_offsets-42 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,734] INFO [Broker id=1]
Leader __consumer_offsets-42 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,758] INFO [LogLoader
partition=__consumer_offsets-12, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,761] INFO Created log
for partition __consumer_offsets-12 in
/var/lib/kafka/data/__consumer_offsets-12 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,761] INFO [Partition
__consumer_offsets-12 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-12
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,761] INFO [Partition
__consumer_offsets-12 broker=1] Log loaded for partition
__consumer_offsets-12 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,761] INFO [Broker id=1]
Leader __consumer_offsets-12 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,785] INFO [LogLoader
partition=__consumer_offsets-21, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,789] INFO Created log
for partition __consumer_offsets-21 in
/var/lib/kafka/data/__consumer_offsets-21 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,789] INFO [Partition
__consumer_offsets-21 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-21
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,790] INFO [Partition
__consumer_offsets-21 broker=1] Log loaded for partition
__consumer_offsets-21 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,790] INFO [Broker id=1]
Leader __consumer_offsets-21 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,807] INFO [LogLoader
partition=__consumer_offsets-36, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,811] INFO Created log
for partition __consumer_offsets-36 in
/var/lib/kafka/data/__consumer_offsets-36 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,811] INFO [Partition
__consumer_offsets-36 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-36
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,811] INFO [Partition
__consumer_offsets-36 broker=1] Log loaded for partition
__consumer_offsets-36 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,812] INFO [Broker id=1]
Leader __consumer_offsets-36 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,825] INFO [LogLoader
partition=__consumer_offsets-6, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,830] INFO Created log
for partition __consumer_offsets-6 in
/var/lib/kafka/data/__consumer_offsets-6 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,830] INFO [Partition
__consumer_offsets-6 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-6
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,830] INFO [Partition
__consumer_offsets-6 broker=1] Log loaded for partition
__consumer_offsets-6 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,830] INFO [Broker id=1]
Leader __consumer_offsets-6 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,844] INFO [LogLoader
partition=__consumer_offsets-43, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,848] INFO Created log
for partition __consumer_offsets-43 in
/var/lib/kafka/data/__consumer_offsets-43 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,848] INFO [Partition
__consumer_offsets-43 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-43
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,848] INFO [Partition
__consumer_offsets-43 broker=1] Log loaded for partition
__consumer_offsets-43 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,849] INFO [Broker id=1]
Leader __consumer_offsets-43 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,859] INFO [LogLoader
partition=__consumer_offsets-13, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,863] INFO Created log
for partition __consumer_offsets-13 in
/var/lib/kafka/data/__consumer_offsets-13 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,863] INFO [Partition
__consumer_offsets-13 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-13
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,863] INFO [Partition
__consumer_offsets-13 broker=1] Log loaded for partition
__consumer_offsets-13 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,863] INFO [Broker id=1]
Leader __consumer_offsets-13 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 00:48:30,879] INFO [LogLoader
partition=__consumer_offsets-28, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 00:48:30,884] INFO Created log
for partition __consumer_offsets-28 in
/var/lib/kafka/data/__consumer_offsets-28 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 00:48:30,884] INFO [Partition
__consumer_offsets-28 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-28
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,884] INFO [Partition
__consumer_offsets-28 broker=1] Log loaded for partition
__consumer_offsets-28 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 00:48:30,885] INFO [Broker id=1]
Leader __consumer_offsets-28 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
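
The properties repeated for every partition above ({cleanup.policy=compact, compression.type=producer, segment.bytes=104857600}) are the broker defaults for the internal offsets topic: compaction keeps only the newest committed offset per (group, topic, partition) key, and segments roll at 100 MiB. A sketch to read the live topic config, assuming the Confluent image puts the Kafka CLIs on the PATH and that the broker answers on localhost:9092 inside the container (as the controller logs below suggest):

  # Sketch: describe the internal offsets topic's configuration
  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-configs --bootstrap-server localhost:9092 \
    --entity-type topics --entity-name __consumer_offsets --describe
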
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-3 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-18 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-41 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-10 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-33 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-48 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-19 (state.change.logger)
kafka | [2022-12-09 00:48:30,887] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-34 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-4 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-11 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-26 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-49 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-39 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-9 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-24 (state.change.logger)
kafka | [2022-12-09 00:48:30,888] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-31 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-46 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-1 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-16 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-2 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-25 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-40 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-47 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-17 (state.change.logger)
kafka | [2022-12-09 00:48:30,889] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-32 (state.change.logger)
kafka | [2022-12-09 00:48:30,890] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-37 (state.change.logger)
kafka | [2022-12-09 00:48:30,890] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-7 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-22 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-29 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-44 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-14 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-23 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-38 (state.change.logger)
kafka | [2022-12-09 00:48:30,891] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-8 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-45 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-15 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-30 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-0 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-35 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-5 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-20 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-27 (state.change.logger)
kafka | [2022-12-09 00:48:30,893] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-42 (state.change.logger)
kafka | [2022-12-09 00:48:30,894] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-12 (state.change.logger)
kafka | [2022-12-09 00:48:30,894] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-21 (state.change.logger)
kafka | [2022-12-09 00:48:30,894] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-36 (state.change.logger)
kafka | [2022-12-09 00:48:30,894] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-6 (state.change.logger)
kafka | [2022-12-09 00:48:30,894] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-43 (state.change.logger)
kafka | [2022-12-09 00:48:30,895] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-13 (state.change.logger)
kafka | [2022-12-09 00:48:30,898] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-28 (state.change.logger)
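
The TRACE run above confirms that the become-leader transition completed for all fifty offset partitions under correlationId 3. Under the same assumptions as the sketch above, the resulting leadership can be inspected with the stock topic tool; on this single-broker stack every partition should report Leader: 1, Replicas: 1, Isr: 1:

  # Sketch: show per-partition leader/ISR for the offsets topic
  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-topics --bootstrap-server localhost:9092 \
    --describe --topic __consumer_offsets
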
kafka | [2022-12-09 00:48:30,907] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 3 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,915] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-3 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 18 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-18 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 41 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-41 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 10 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-10 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 33 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-33 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 48 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-48 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 19 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-19 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 34 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,927] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-34 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 4 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-4 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 11 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-11 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 26 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-26 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 49 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-49 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 39 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-39 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 9 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-9 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 24 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-24 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 31 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-31 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 46 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-46 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 1 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-1 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 16 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-16 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 2 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,928] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-2 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 25 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-25 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 40 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-40 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 47 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-47 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 17 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-17 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 32 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-32 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 37 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-37 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 7 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-7 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 22 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-22 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 29 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-29 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 44 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-44 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 14 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-14 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 23 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-23 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 38 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-38 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 8 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-8 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 45 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,929] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-45 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,930] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 15 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,930] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-15 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,930] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 30 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,930] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-30 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,930] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 0 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,930] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-0 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,935] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 35 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,935] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-35 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,935] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 5 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-5 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 20 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-20 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 27 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-27 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 42 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-42 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 12 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-12 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 21 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-21 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 36 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,936] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-36 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 6 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-6 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 43 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-43 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 13 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-13 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 28 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:30,937] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-28 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
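
Because broker 1 leads all fifty __consumer_offsets partitions, it elects itself group coordinator for each one and schedules the corresponding offset and group-metadata load. A consumer group's coordinator partition is Math.abs(group.id.hashCode()) % offsets.topic.num.partitions (50 here), so with a single broker it coordinates every group. Once consumers attach, a sketch to inspect them (same CLI assumptions as above; any group id is whatever the application registers):

  # Sketch: list consumer groups; --describe --group <id> --state also names the coordinator
  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-consumer-groups --bootstrap-server localhost:9092 --list
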
kafka | [2022-12-09 00:48:30,938] INFO [Broker id=1]
Finished LeaderAndIsr request in 2641ms correlationId 3 from
controller 1 for 50 partitions (state.change.logger)
kafka | [2022-12-09 00:48:30,947] TRACE [Controller
id=1 epoch=1] Received response
LeaderAndIsrResponseData(errorCode=0, partitionErrors=[],
topics=[LeaderAndIsrTopicError(topicId=MXjtiLKSTpuWmttd37L
XhA, partitionErrors=[LeaderAndIsrPartitionError(topicName='',
partitionIndex=13, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=46,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=9, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=42,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=21, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=17,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=30, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=26,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=5, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=38,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=1, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=34,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=16, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=45,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=12, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=41,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=24, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=20,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=49, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=0,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=29, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=25,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=8, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=37,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=4, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=33,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=15, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=48,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=11, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=44,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=23, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=19,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=32, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=28,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=7, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=40,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=3, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=36,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=47, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=14,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=43, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=10,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=22, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=18,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=31, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=27,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=39, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=6,
errorCode=0), LeaderAndIsrPartitionError(topicName='',
partitionIndex=35, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=2,
errorCode=0)])]) for request LEADER_AND_ISR with correlation
id 3 sent to broker localhost:9092 (id: 1 rack: null)
(state.change.logger)
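
The controller's response reports errorCode=0 for every partition, matching the "Finished LeaderAndIsr request in 2641ms" line above. A sketch health check under the same assumptions; on a healthy single-broker stack both commands should print nothing:

  # Sketch: look for under-replicated or leaderless partitions
  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-topics --bootstrap-server localhost:9092 --describe --under-replicated-partitions
  docker-compose -f ./app/docker-compose.yml exec kafka \
    kafka-topics --bootstrap-server localhost:9092 --describe --unavailable-partitions
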
kafka | [2022-12-09 00:48:30,972] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=13, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-13 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,973] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=46, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-46 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,973] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=9, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-9 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=42, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-42 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=21, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-21 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=17, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-17 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=30, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-30 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=26, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-26 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=5, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-5 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,974] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=38, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-38 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,975] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=1, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-1 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,975] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=34, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-34 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,975] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=16, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-16 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,978] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=45, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-45 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,978] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=12, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-12 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,978] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-41 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,978] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=24, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-24 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,979] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=20, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-20 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,979] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=49, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-49 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,982] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-0 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=29, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-29 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=25, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-25 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=8, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-8 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=37, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-37 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=4, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-4 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=33, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-33 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=15, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-15 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=48, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-48 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=11, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-11 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,986] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=44, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-44 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=23, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-23 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=19, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-19 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=32, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-32 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=28, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-28 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=7, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-7 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=40, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-40 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=3, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-3 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=36, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-36 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=47, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-47 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=14, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-14 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=43, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-43 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=10, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-10 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=22, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-22 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=18, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-18 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=31, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-31 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=27, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-27 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=39, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-39 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=6, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-6 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=35, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-35 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,987] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=2, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-2 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,988] INFO [Broker id=1]
Add 50 partitions and deleted 0 partitions from metadata cache
in response to UpdateMetadata request sent by controller 1
epoch 1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 00:48:30,991] TRACE [Controller
id=1 epoch=1] Received response
UpdateMetadataResponseData(errorCode=0) for request
UPDATE_METADATA with correlation id 4 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
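At this point the broker has cached leader metadata for all 50 __consumer_offsets partitions (50 is Kafka's default offsets.topic.num.partitions), each with replicas=[1] and isr=[1], which is what a single-broker setup with offsets.topic.replication.factor=1 produces. A quick way to confirm the layout is to describe the topic from inside the broker container; this is a sketch, assuming the container name kafka from the compose attach list and the localhost:9092 address that appears in the controller log above:

  docker exec kafka kafka-topics --bootstrap-server localhost:9092 --describe --topic __consumer_offsets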
kafka | [2022-12-09 00:48:31,018] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-3 in 75
milliseconds for epoch 0, of which 19 milliseconds was spent in
the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,033] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-18 in 105
milliseconds for epoch 0, of which 101 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,034] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-41 in 107
milliseconds for epoch 0, of which 107 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,034] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-10 in 107
milliseconds for epoch 0, of which 107 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,035] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-33 in 108
milliseconds for epoch 0, of which 108 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,035] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-48 in 108
milliseconds for epoch 0, of which 108 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,035] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-19 in 108
milliseconds for epoch 0, of which 108 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,036] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-34 in 108
milliseconds for epoch 0, of which 108 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,036] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-4 in 108
milliseconds for epoch 0, of which 108 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,036] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-11 in 108
milliseconds for epoch 0, of which 108 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,037] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-26 in 109
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,037] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-49 in 109
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,037] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-39 in 109
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,038] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-9 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,038] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-24 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,038] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-31 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,039] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-46 in 111
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,039] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-1 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,039] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-16 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,039] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-2 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,040] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-25 in 111
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,040] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-40 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,040] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-47 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,040] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-17 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,041] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-32 in 112
milliseconds for epoch 0, of which 112 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,041] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-37 in 112
milliseconds for epoch 0, of which 112 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,041] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-7 in 112
milliseconds for epoch 0, of which 112 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,042] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-22 in 113
milliseconds for epoch 0, of which 112 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,042] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-29 in 113
milliseconds for epoch 0, of which 113 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,042] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-44 in 113
milliseconds for epoch 0, of which 113 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,043] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-14 in 113
milliseconds for epoch 0, of which 113 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,043] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-23 in 114
milliseconds for epoch 0, of which 114 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,043] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-38 in 114
milliseconds for epoch 0, of which 114 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,043] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-8 in 114
milliseconds for epoch 0, of which 114 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,044] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-45 in 115
milliseconds for epoch 0, of which 114 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,044] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-15 in 114
milliseconds for epoch 0, of which 114 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,044] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-30 in 114
milliseconds for epoch 0, of which 114 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,044] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-0 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,045] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-35 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,045] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-5 in 109
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,045] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-20 in 109
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,045] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-27 in 109
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,046] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-42 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,046] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-12 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,046] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-21 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,047] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-36 in 110
milliseconds for epoch 0, of which 109 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,047] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-6 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,047] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-43 in 110
milliseconds for epoch 0, of which 110 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,048] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-13 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 00:48:31,048] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-28 in 111
milliseconds for epoch 0, of which 111 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
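The coordinator loads offsets and group metadata for each of the 50 partitions in roughly 75-115 ms, almost all of it time queued in the scheduler rather than reading the (still empty) logs. Once this pass completes the broker can serve consumer groups; listing them is a reasonable smoke test, under the same container-name and address assumptions as above:

  docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --list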
schema | [2022-12-09 00:48:31,089] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-0 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,089] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-10 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,089] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-20 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-40 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-30 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-9 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-11 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-31 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-39 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-13 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-18 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,090] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-22 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-8 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-32 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-43 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-29 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-34 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-1 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-6 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,091] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-41 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-27 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-48 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-5 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-15 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-35 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-25 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-46 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-26 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,092] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-36 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,093] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-44 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,093] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-16 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,093] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-37 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,093] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-17 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,094] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-45 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,094] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-3 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,094] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-24 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,094] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-38 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,095] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-33 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,095] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-23 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,095] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-28 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,095] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-2 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,095] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-12 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,095] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-19 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-14 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-4 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-47 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-49 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-42 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-7 to 0 since the
associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,096] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-21 to 0 since
the associated topicId changed from null to
MXjtiLKSTpuWmttd37LXhA (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 00:48:31,122] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Discovered group coordinator kafka-local:9095 (id: 2147483646 rack: null) (io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator)
schema | [2022-12-09 00:48:31,136] INFO [Schema registry clientId=sr-1, groupId=schema-registry] (Re-)joining group (io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator)
kafka | [2022-12-09 00:48:31,262] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group schema-registry in Empty state. Created a new member id sr-1-ab0cd419-79c9-4235-8b6a-80fa3660511e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 00:48:31,276] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Request joining group due to: need to re-join with the given member-id (io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator)
schema | [2022-12-09 00:48:31,277] INFO [Schema registry clientId=sr-1, groupId=schema-registry] (Re-)joining group (io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator)
kafka | [2022-12-09 00:48:31,312] INFO [GroupCoordinator 1]: Preparing to rebalance group schema-registry in state PreparingRebalance with old generation 0 (__consumer_offsets-29) (reason: Adding new member sr-1-ab0cd419-79c9-4235-8b6a-80fa3660511e with group instance id None) (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 00:48:31,354] INFO [GroupCoordinator 1]: Stabilized group schema-registry generation 1 (__consumer_offsets-29) with 1 members (kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 00:48:31,382] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Successfully joined group with generation Generation{generationId=1, memberId='sr-1-ab0cd419-79c9-4235-8b6a-80fa3660511e', protocol='v0'} (io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator)
kafka | [2022-12-09 00:48:31,457] INFO [GroupCoordinator 1]: Assignment received from leader sr-1-ab0cd419-79c9-4235-8b6a-80fa3660511e for group schema-registry for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 00:48:31,550] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Successfully synced group in generation Generation{generationId=1, memberId='sr-1-ab0cd419-79c9-4235-8b6a-80fa3660511e', protocol='v0'} (io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator)
schema | [2022-12-09 00:48:31,562] INFO Finished rebalance with leader election result: Assignment{version=1, error=0, leader='sr-1-ab0cd419-79c9-4235-8b6a-80fa3660511e', leaderIdentity=version=1,host=schema,port=9091,scheme=http,leaderEligibility=true} (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
schema | [2022-12-09 00:48:31,636] INFO Wait to catch up until the offset at 1 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 00:48:31,666] INFO Reached offset at 1 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
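The rebalance settles with the lone schema registry instance as leader of group schema-registry at generation 1 (the group is used only for leader election, not for offset consumption), and its KafkaStore then replays the schemas topic up to offset 1. To see the group from Kafka's side, a sketch under the same assumptions as the earlier commands:

  docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group schema-registry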
schema | [2022-12-09 00:48:31,963] INFO Binding SchemaRegistryRestApplication to all listeners. (io.confluent.rest.Application)
schema | [2022-12-09 00:48:32,430] INFO jetty-9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git: 8da83308eeca865e495e53ef315a249d63ba9332; jvm 11.0.14.1+1-LTS (org.eclipse.jetty.server.Server)
elasticsearch | {"type": "server", "timestamp": "2022-12-09T00:48:32,700Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-cluster", "node.name": "8c9f05d4bd02", "message": "[controller/290] [Main.cc@114] controller (64 bit): Version 7.10.2 (Build 40a3af639d4698) Copyright (c) 2021 Elasticsearch BV" }
schema | [2022-12-09 00:48:32,737] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
schema | [2022-12-09 00:48:32,737] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
schema | [2022-12-09 00:48:32,744] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ContextsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ContextsResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource will be ignored.
schema | Dec 09, 2022 12:48:34 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
schema | WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource will be ignored.
schema | [2022-12-09 00:48:35,931] INFO HV000001: Hibernate Validator 6.1.7.Final (org.hibernate.validator.internal.util.Version)
elasticsearch | {"type": "server", "timestamp": "2022-12-09T00:48:35,942Z", "level": "INFO", "component": "o.e.t.NettyAllocator", "cluster.name": "docker-cluster", "node.name": "8c9f05d4bd02", "message": "creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=256kb, suggested_max_allocation_size=256kb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=1mb}]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-09T00:48:36,181Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name": "8c9f05d4bd02", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
schema | [2022-12-09 00:48:37,196] INFO Started o.e.j.s.ServletContextHandler@6de0f580{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema | [2022-12-09 00:48:37,309] INFO Started o.e.j.s.ServletContextHandler@28348c6{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema | [2022-12-09 00:48:37,416] INFO Started NetworkTrafficServerConnector@3c7c886c{HTTP/1.1, (http/1.1, h2c)}{schema:9091} (org.eclipse.jetty.server.AbstractConnector)
schema | [2022-12-09 00:48:37,420] INFO Started @29257ms (org.eclipse.jetty.server.Server)
schema | [2022-12-09 00:48:37,424] INFO Schema Registry version: 7.1.1 commitId: 5ed926f555f75683a1d34946ef6bc855bfbd1bbe (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
schema | [2022-12-09 00:48:37,424] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
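Jetty is bound to schema:9091 inside the compose network, so the registry's REST API is now up. Whether it is reachable from the host depends on the port mapping in app/docker-compose.yml, which is not shown here; assuming 9091 is published to the host, a minimal smoke test (a fresh registry returns an empty subject list, []):

  curl -s http://localhost:9091/subjects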
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:37,586Z", "level": "WARN", "component":
"o.e.g.DanglingIndicesState", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message":
"gateway.auto_import_dangling_indices is disabled, dangling
indices will not be automatically detected or imported and must
be managed manually" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:38,562Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "initialized" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:38,563Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "starting ..." }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:39,729Z", "level": "INFO", "component":
"o.e.t.TransportService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "publish_address
{elasticsearch/172.19.0.3:9300}, bound_addresses
{172.19.0.3:9300}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,278Z", "level": "WARN", "component":
"o.e.b.BootstrapChecks", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "initial heap size
[536870912] not equal to maximum heap size [1145044992];
this can cause resize pauses" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,279Z", "level": "WARN", "component":
"o.e.b.BootstrapChecks", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "system call filters
failed to install; check the logs and fix your configuration or
disable system call filters at your own risk" }
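Both bootstrap-check WARNs are benign for local development: the first says -Xms (512 MiB) differs from -Xmx (about 1.07 GiB), which can cause heap-resize pauses, and the second (syscall filters) is common when running under Docker Desktop. Pinning the heap, for example by setting ES_JAVA_OPTS="-Xms1g -Xmx1g" in the service's environment, silences the first. To check what the container is currently running with, a sketch assuming the container name elasticsearch from the compose attach list:

  docker exec elasticsearch env | grep ES_JAVA_OPTS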
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,299Z", "level": "INFO", "component":
"o.e.c.c.Coordinator", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "setting initial
configuration to VotingConfiguration{H18iHmWlRFK5x1zuu-
6mFQ}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,583Z", "level": "INFO", "component":
"o.e.c.s.MasterService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "elected-as-master
([1] nodes joined)[{8c9f05d4bd02}{H18iHmWlRFK5x1zuu-
6mFQ}{LMUj-fYEQ0iGKzdkEgAugQ}{elasticsearch}
{172.19.0.3:9300}{cdhilmrstw}
{ml.machine_memory=8233017344, xpack.installed=true,
transform.node=true, ml.max_open_jobs=20} elect leader,
_BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1,
version: 1, delta: master node changed {previous [], current
[{8c9f05d4bd02}{H18iHmWlRFK5x1zuu-6mFQ}{LMUj-
fYEQ0iGKzdkEgAugQ}{elasticsearch}{172.19.0.3:9300}
{cdhilmrstw}{ml.machine_memory=8233017344,
xpack.installed=true, transform.node=true,
ml.max_open_jobs=20}]}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,665Z", "level": "INFO", "component":
"o.e.c.c.CoordinationState", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "cluster UUID set to
[jeCIhYCERKmqRIfciS5i-A]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,717Z", "level": "INFO", "component":
"o.e.c.s.ClusterApplierService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "master node
changed {previous [], current [{8c9f05d4bd02}
{H18iHmWlRFK5x1zuu-6mFQ}{LMUj-fYEQ0iGKzdkEgAugQ}
{elasticsearch}{172.19.0.3:9300}{cdhilmrstw}
{ml.machine_memory=8233017344, xpack.installed=true,
transform.node=true, ml.max_open_jobs=20}]}, term: 1,
version: 1, reason: Publication{term=1, version=1}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,804Z", "level": "INFO", "component":
"o.e.h.AbstractHttpServerTransport", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message":
"publish_address {elasticsearch/172.19.0.3:9200},
bound_addresses {172.19.0.3:9200}", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,807Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "started", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
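The node is now serving HTTP on 172.19.0.3:9200 inside the compose network. Assuming the compose file publishes port 9200 to the host (a typical local-dev mapping, not shown in this log), a quick liveness check from the host is:

  curl -s 'http://localhost:9200/_cluster/health?pretty'

A single-node cluster should report status green until indices with replicas are created, after which yellow is normal.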
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,834Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-anomalies-] for [ml], because it doesn't
exist", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,836Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-state] for [ml], because it doesn't exist",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,837Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-config] for [ml], because it doesn't exist",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,839Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-inference-000003] for [ml], because it
doesn't exist", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,844Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-meta] for [ml], because it doesn't exist",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,845Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-notifications-000001] for [ml], because it
doesn't exist", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:40,848Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
legacy template [.ml-stats] for [ml], because it doesn't exist",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:41,216Z", "level": "INFO", "component":
"o.e.g.GatewayService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "recovered [0]
indices into cluster_state", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,132Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-inference-000003] for index patterns
[.ml-inference-000003]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-
A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,259Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-notifications-000001] for index patterns
[.ml-notifications-000001]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,343Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-state] for index patterns [.ml-state*]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,432Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-meta] for index patterns [.ml-meta]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,545Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-stats] for index patterns [.ml-stats-*]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,690Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-anomalies-] for index patterns [.ml-
anomalies-*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,783Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.ml-config] for index patterns [.ml-config]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,889Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding component template [synthetics-mappings]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:42,983Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding component template [metrics-mappings]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,106Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding component template [logs-mappings]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,188Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding component template [synthetics-settings]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,259Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding component template [metrics-settings]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,343Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding component template [logs-settings]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,482Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [.triggered_watches] for index patterns
[.triggered_watches*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-
A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,620Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [.watch-history-12] for index patterns
[.watcher-history-12*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-
A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,721Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [.watches] for index patterns
[.watches*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,809Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [ilm-history] for index patterns [ilm-
history-3*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,894Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [.slm-history] for index patterns [.slm-
history-3*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:43,967Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.monitoring-alerts-7] for index patterns
[.monitoring-alerts-7]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-
A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:44,099Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.monitoring-es] for index patterns
[.monitoring-es-7-*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,105Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.monitoring-kibana] for index patterns
[.monitoring-kibana-7-*]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,268Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.monitoring-logstash] for index patterns
[.monitoring-logstash-7-*]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,384Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding template [.monitoring-beats] for index patterns
[.monitoring-beats-7-*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-
A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,568Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [synthetics] for index patterns
[synthetics-*-*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A",
"node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,684Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [metrics] for index patterns [metrics-*-
*]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,773Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "8c9f05d4bd02", "message":
"adding index template [logs] for index patterns [logs-*-*]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:45,900Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [ml-size-based-ilm-policy]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:46,018Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [metrics]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:46,106Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [synthetics]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:46,313Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [logs]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:46,435Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [watch-history-ilm-policy]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:46,562Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [ilm-history-ilm-policy]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:46,768Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "adding
index lifecycle policy [slm-history-ilm-policy]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
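The block above is x-pack registering its built-in index templates and ILM policies (logs, metrics, synthetics, watcher history, and so on) on first boot. Under the same localhost:9200 port-mapping assumption as earlier, they can be inspected over the REST API:

  curl -s 'http://localhost:9200/_ilm/policy/logs?pretty'
  curl -s 'http://localhost:9200/_index_template/logs?pretty'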
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:48:47,156Z", "level": "INFO", "component":
"o.e.l.LicenseService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "license [93abce8e-
920f-475b-9be0-fc77b4df05a9] mode [basic] - valid",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
kafka-ui | 2022-12-09 00:48:51,003 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:48:51,808 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:49:21,011 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:49:21,156 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:49:51,009 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:49:51,164 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:50:21,008 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:50:21,157 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:50:51,011 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:50:51,126 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:51:21,009 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:51:21,134 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:51:51,018 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:51:51,150 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:52:21,020 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:52:21,124 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:52:51,007 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:52:51,258 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 00:53:11,491] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:53:11,508] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:53:11,582] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:53:11,598] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 00:53:21,020 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:53:21,174 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:53:51,028 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:53:51,150 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:54:05,740Z", "level": "INFO", "component":
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "[order]
creating index, cause [api], templates [], shards [1]/[1]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:54:06,565Z", "level": "INFO", "component":
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "[security]
creating index, cause [api], templates [], shards [1]/[1]",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T00:54:06,847Z", "level": "INFO", "component":
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message":
"[saleprogram] creating index, cause [api], templates [], shards
[1]/[1]", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
kafka-ui | 2022-12-09 00:54:21,025 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:54:21,124 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
elasticsearch | {"type": "deprecation", "timestamp": "2022-12-
09T00:54:30,980Z", "level": "DEPRECATION", "component":
"o.e.d.t.TransportInfo", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message":
"transport.publish_address was printed as [ip:port] instead of
[hostname/ip:port]. This format is deprecated and will change
to [hostname/ip:port] in a future version. Use -
Des.transport.cname_in_publish_address=true to enforce non-
deprecated formatting.", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-
A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
kafka-ui | 2022-12-09 00:54:51,009 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:54:51,187 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:55:21,012 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:55:21,159 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:55:51,042 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:55:51,124 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:56:21,014 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:56:21,112 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:56:51,000 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:56:51,075 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:57:21,003 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:57:21,122 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
schema | [2022-12-09 00:57:25,522] INFO [Producer
clientId=producer-1] Node -1 disconnected.
(org.apache.kafka.clients.NetworkClient)
schema | [2022-12-09 00:57:26,065] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Node -1 disconnected.
(org.apache.kafka.clients.NetworkClient)
schema | [2022-12-09 00:57:27,668] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Node -1
disconnected. (org.apache.kafka.clients.NetworkClient)
kafka-ui | 2022-12-09 00:57:51,034 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:57:51,188 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 00:58:11,614] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:58:11,624] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:58:11,643] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 00:58:11,644] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 00:58:21,010 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:58:21,120 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:58:51,017 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:58:51,212 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:59:21,013 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:59:21,117 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 00:59:51,013 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 00:59:51,183 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:00:21,015 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:00:21,150 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:00:51,034 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:00:51,211 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:01:21,030 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:01:21,128 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:01:51,057 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:01:51,161 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:02:21,042 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:02:21,157 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:02:51,012 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:02:51,115 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:03:11,661] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:03:11,672] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:03:11,695] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:03:11,696] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:03:21,046 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:03:21,152 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:03:51,013 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:03:51,097 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:04:21,036 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:04:21,325 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:04:51,006 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:04:51,338 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:05:21,059 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:05:21,188 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:05:51,016 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:05:51,172 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:06:21,041 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:06:21,272 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:06:51,036 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:06:51,242 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:07:21,039 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:07:21,220 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:07:51,032 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:07:51,210 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:08:11,665] INFO [Admin
Manager on Broker 1]: Error processing create topic request
CreatableTopic(name='store.hive-participation-service.security',
numPartitions=3, replicationFactor=3, assignments=[],
configs=[CreateableTopicConfig(name='cleanup.policy',
value='compact')]) (kafka.server.ZkAdminManager)
kafka | org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
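This failure is expected on a single-broker local stack: the service requests store.hive-participation-service.security with replicationFactor=3, but only broker 1 exists, so creation is rejected and retried (the same error recurs below). One workaround, assuming the Confluent cp-kafka image (where kafka-topics is on the PATH) and the localhost:9092 listener the controller logs below show, is to pre-create the topic with a replication factor the cluster can satisfy:

  docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
    --create --topic store.hive-participation-service.security \
    --partitions 3 --replication-factor 1 \
    --config cleanup.policy=compact

Alternatively, the service's local profile could request replicationFactor=1 for its store topics.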
kafka | [2022-12-09 01:08:11,703] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:08:11,704] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:08:11,762] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:08:11,764] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:08:12,039] INFO
[GroupCoordinator 1]: Dynamic member with unknown member
id joins group hive-participation-streams-app in Empty state.
Created a new member id hive-participation-streams-app-
a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-
consumer-7ac74ee6-93ab-42c2-9d76-19c7a4fe14fd and
request the member to rejoin with this id.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,058] INFO Creating topic
response.data-warehouse-svc.warehousedata-event with
configuration {} and initial partition assignment HashMap(0 ->
ArrayBuffer(1)) (kafka.zk.AdminZkClient)
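Unlike the compacted store topic above, this topic is created successfully because it is requested with a single replica (the assignment HashMap(0 -> ArrayBuffer(1)) places partition 0 on broker 1). Its layout can be verified under the same assumptions as above:

  docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
    --describe --topic response.data-warehouse-svc.warehousedata-event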
kafka | [2022-12-09 01:08:12,073] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-streams-app in state PreparingRebalance with old
generation 0 (__consumer_offsets-10) (reason: Adding new
member hive-participation-streams-app-a6155938-52d5-47d9-
bfc6-15aec5c4b305-StreamThread-1-consumer-7ac74ee6-93ab-
42c2-9d76-19c7a4fe14fd with group instance id None)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,246] INFO
[GroupCoordinator 1]: Stabilized group hive-participation-
streams-app generation 1 (__consumer_offsets-10) with 1
members (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,292] INFO
[GroupCoordinator 1]: Dynamic member with unknown member
id joins group hive-participation-local in Empty state. Created a
new member id consumer-hive-participation-local-1-de3f6e39-
7edf-46fc-837e-ad56f88c7226 and request the member to
rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,314] INFO [Controller
id=1] New topics: [Set(response.data-warehouse-
svc.warehousedata-event)], deleted topics: [HashSet()], new
partition replica assignment
[Set(TopicIdReplicaAssignment(response.data-warehouse-
svc.warehousedata-
event,Some(XqrXfj1iTJ6RVqbTnuD1sA),Map(response.data-
warehouse-svc.warehousedata-event-0 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2022-12-09 01:08:12,314] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-local in state PreparingRebalance with old
generation 0 (__consumer_offsets-26) (reason: Adding new
member consumer-hive-participation-local-1-de3f6e39-7edf-
46fc-837e-ad56f88c7226 with group instance id None)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,315] INFO [Controller
id=1] New partition creation callback for response.data-
warehouse-svc.warehousedata-event-0
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:08:12,321] INFO [Controller
id=1 epoch=1] Changed partition response.data-warehouse-
svc.warehousedata-event-0 state from NonExistentPartition to
NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2022-12-09 01:08:12,324] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,330] INFO
[GroupCoordinator 1]: Stabilized group hive-participation-local
generation 1 (__consumer_offsets-26) with 1 members
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,344] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
response.data-warehouse-svc.warehousedata-event-0 from
NonExistentReplica to NewReplica (state.change.logger)
kafka | [2022-12-09 01:08:12,345] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,392] INFO
[GroupCoordinator 1]: Assignment received from leader hive-
participation-streams-app-a6155938-52d5-47d9-bfc6-
15aec5c4b305-StreamThread-1-consumer-7ac74ee6-93ab-
42c2-9d76-19c7a4fe14fd for group hive-participation-streams-
app for generation 1. The group has 1 members, 0 of which are
static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,412] INFO
[GroupCoordinator 1]: Assignment received from leader
consumer-hive-participation-local-1-de3f6e39-7edf-46fc-837e-
ad56f88c7226 for group hive-participation-local for generation
1. The group has 1 members, 0 of which are static.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,459] INFO [Controller
id=1 epoch=1] Changed partition response.data-warehouse-
svc.warehousedata-event-0 from NewPartition to
OnlinePartition with state LeaderAndIsr(leader=1,
leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
kafka | [2022-12-09 01:08:12,469] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='response.data-
warehouse-svc.warehousedata-event', partitionIndex=0,
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
zkVersion=0, replicas=[1], addingReplicas=[],
removingReplicas=[], isNew=true) to broker 1 for partition
response.data-warehouse-svc.warehousedata-event-0
(state.change.logger)
kafka | [2022-12-09 01:08:12,469] INFO [Controller
id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with
1 become-leader and 0 become-follower partitions
(state.change.logger)
kafka | [2022-12-09 01:08:12,519] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet(1) for 1 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,530] INFO [Controller
id=1, targetBrokerId=1] Node 1 disconnected.
(org.apache.kafka.clients.NetworkClient)
kafka | [2022-12-09 01:08:12,533] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
response.data-warehouse-svc.warehousedata-event-0 from
NewReplica to OnlineReplica (state.change.logger)
kafka | [2022-12-09 01:08:12,534] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,610] INFO
[RequestSendThread controllerId=1] Controller 1 connected to
localhost:9092 (id: 1 rack: null) for sending state change
requests (kafka.controller.RequestSendThread)
kafka | [2022-12-09 01:08:12,629] INFO [Broker id=1]
Handling LeaderAndIsr request correlationId 5 from controller 1
for 1 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,630] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='response.data-
warehouse-svc.warehousedata-event', partitionIndex=0,
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
zkVersion=0, replicas=[1], addingReplicas=[],
removingReplicas=[], isNew=true) correlation id 5 from
controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 01:08:12,652] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 5 from
controller 1 epoch 1 starting the become-leader transition for
partition response.data-warehouse-svc.warehousedata-event-0
(state.change.logger)
kafka | [2022-12-09 01:08:12,652] INFO
[ReplicaFetcherManager on broker 1] Removed fetcher for
partitions Set(response.data-warehouse-svc.warehousedata-
event-0) (kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 01:08:12,653] INFO [Broker id=1]
Stopped fetchers as part of LeaderAndIsr request correlationId
5 from controller 1 epoch 1 as part of the become-leader
transition for 1 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,727] INFO [LogLoader
partition=response.data-warehouse-svc.warehousedata-event-
0, dir=/var/lib/kafka/data] Loading producer state till offset 0
with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 01:08:12,740] INFO Created log
for partition response.data-warehouse-svc.warehousedata-
event-0 in /var/lib/kafka/data/response.data-warehouse-
svc.warehousedata-event-0 with properties {}
(kafka.log.LogManager)
kafka | [2022-12-09 01:08:12,753] INFO [Partition
response.data-warehouse-svc.warehousedata-event-0
broker=1] No checkpointed highwatermark is found for
partition response.data-warehouse-svc.warehousedata-event-0
(kafka.cluster.Partition)
kafka | [2022-12-09 01:08:12,754] INFO [Partition
response.data-warehouse-svc.warehousedata-event-0
broker=1] Log loaded for partition response.data-warehouse-
svc.warehousedata-event-0 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 01:08:12,756] INFO [Broker id=1]
Leader response.data-warehouse-svc.warehousedata-event-0
starts at leader epoch 0 from offset 0 with high watermark 0
ISR [1] addingReplicas [] removingReplicas []. Previous leader
epoch was -1. (state.change.logger)
kafka | [2022-12-09 01:08:12,775] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 5 from
controller 1 epoch 1 for the become-leader transition for
partition response.data-warehouse-svc.warehousedata-event-0
(state.change.logger)
kafka | [2022-12-09 01:08:12,784] INFO [Broker id=1]
Finished LeaderAndIsr request in 155ms correlationId 5 from
controller 1 for 1 partitions (state.change.logger)
kafka | [2022-12-09 01:08:12,793] TRACE [Controller
id=1 epoch=1] Received response
LeaderAndIsrResponseData(errorCode=0, partitionErrors=[],
topics=[LeaderAndIsrTopicError(topicId=XqrXfj1iTJ6RVqbTnuD1
sA, partitionErrors=[LeaderAndIsrPartitionError(topicName='',
partitionIndex=0, errorCode=0)])]) for request
LEADER_AND_ISR with correlation id 5 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 01:08:12,809] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='response.data-
warehouse-svc.warehousedata-event', partitionIndex=0,
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1],
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition
response.data-warehouse-svc.warehousedata-event-0 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 6 (state.change.logger)
kafka | [2022-12-09 01:08:12,810] INFO [Broker id=1]
Add 1 partitions and deleted 0 partitions from metadata cache
in response to UpdateMetadata request sent by controller 1
epoch 1 with correlation id 6 (state.change.logger)
kafka | [2022-12-09 01:08:12,813] TRACE [Controller
id=1 epoch=1] Received response
UpdateMetadataResponseData(errorCode=0) for request
UPDATE_METADATA with correlation id 6 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 01:08:12,918] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-local in state PreparingRebalance with old
generation 1 (__consumer_offsets-26) (reason: Leader
consumer-hive-participation-local-1-de3f6e39-7edf-46fc-837e-
ad56f88c7226 re-joining group during Stable)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,926] INFO
[GroupCoordinator 1]: Stabilized group hive-participation-local
generation 2 (__consumer_offsets-26) with 1 members
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:12,945] INFO
[GroupCoordinator 1]: Assignment received from leader
consumer-hive-participation-local-1-de3f6e39-7edf-46fc-837e-
ad56f88c7226 for group hive-participation-local for generation
2. The group has 1 members, 0 of which are static.
(kafka.coordinator.group.GroupCoordinator)
kafka-ui | 2022-12-09 01:08:21,037 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:08:21,236 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:08:51,038 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:08:51,212 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:08:57,574] INFO
[GroupCoordinator 1]: Member hive-participation-streams-app-
a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-
consumer-7ac74ee6-93ab-42c2-9d76-19c7a4fe14fd in group
hive-participation-streams-app has failed, removing it from the
group (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:57,580] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-streams-app in state PreparingRebalance with old
generation 1 (__consumer_offsets-10) (reason: removing
member hive-participation-streams-app-a6155938-52d5-47d9-
bfc6-15aec5c4b305-StreamThread-1-consumer-7ac74ee6-93ab-
42c2-9d76-19c7a4fe14fd on heartbeat expiration)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:08:57,582] INFO
[GroupCoordinator 1]: Group hive-participation-streams-app
with generation 2 is now empty (__consumer_offsets-10)
(kafka.coordinator.group.GroupCoordinator)
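The cycle above (the streams consumer joins, stabilizes, then is removed on heartbeat expiration about 45 seconds later, matching the 45000 ms session timeout logged further down) typically indicates the client application was stopped or restarted rather than a broker-side problem; the same cycle repeats later in this log. While it happens, group state can be watched with (same container assumptions as above):

  docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 \
    --describe --group hive-participation-streams-app --state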
kafka-ui | 2022-12-09 01:09:21,028 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:09:21,128 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:09:51,019 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:09:51,206 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:10:21,038 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:10:21,184 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:10:51,032 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:10:51,195 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:11:21,038 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:11:21,157 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:11:51,049 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:11:51,177 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:12:21,010 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:12:21,146 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
schema | [2022-12-09 01:12:27,747] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition response.data-warehouse-
svc.warehousedata-event-0 to 0 since the associated topicId
changed from null to XqrXfj1iTJ6RVqbTnuD1sA
(org.apache.kafka.clients.Metadata)
kafka-ui | 2022-12-09 01:12:51,034 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:12:51,411 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:13:11,780] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:13:11,789] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:13:11,836] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:13:11,836] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:13:21,038 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:13:21,125 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:13:51,037 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:13:51,283 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:14:02,752] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-local in state PreparingRebalance with old
generation 2 (__consumer_offsets-26) (reason: Removing
member consumer-hive-participation-local-1-de3f6e39-7edf-
46fc-837e-ad56f88c7226 on LeaveGroup)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:14:02,753] INFO
[GroupCoordinator 1]: Group hive-participation-local with
generation 3 is now empty (__consumer_offsets-26)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:14:02,771] INFO
[GroupCoordinator 1]: Member
MemberMetadata(memberId=consumer-hive-participation-
local-1-de3f6e39-7edf-46fc-837e-ad56f88c7226,
groupInstanceId=None, clientId=consumer-hive-participation-
local-1, clientHost=/172.19.0.1, sessionTimeoutMs=45000,
rebalanceTimeoutMs=300000, supportedProtocols=List(range,
cooperative-sticky)) has left group hive-participation-local
through explicit `LeaveGroup` request
(kafka.coordinator.group.GroupCoordinator)
kafka-ui | 2022-12-09 01:14:21,048 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:14:21,180 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:14:51,006 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:14:51,157 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:15:21,029 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:15:21,116 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:15:50,998 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:15:51,083 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:16:06,770] INFO [Admin
Manager on Broker 1]: Error processing create topic request
CreatableTopic(name='store.hive-participation-service.security',
numPartitions=3, replicationFactor=3, assignments=[],
configs=[CreateableTopicConfig(name='cleanup.policy',
value='compact')]) (kafka.server.ZkAdminManager)
kafka | org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
kafka | [2022-12-09 01:16:06,911] INFO
[GroupCoordinator 1]: Dynamic member with unknown member
id joins group hive-participation-streams-app in Empty state.
Created a new member id hive-participation-streams-app-
a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-
consumer-d837c939-7895-4cb8-8580-24d351fb1dee and
request the member to rejoin with this id.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:06,917] INFO
[GroupCoordinator 1]: Dynamic member with unknown member
id joins group hive-participation-local in Empty state. Created a
new member id consumer-hive-participation-local-1-3439ae6b-
abe6-461f-a88c-52737f3f9fc4 and request the member to rejoin
with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:06,921] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-streams-app in state PreparingRebalance with old
generation 2 (__consumer_offsets-10) (reason: Adding new
member hive-participation-streams-app-a6155938-52d5-47d9-
bfc6-15aec5c4b305-StreamThread-1-consumer-d837c939-
7895-4cb8-8580-24d351fb1dee with group instance id None)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:06,930] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-local in state PreparingRebalance with old
generation 3 (__consumer_offsets-26) (reason: Adding new
member consumer-hive-participation-local-1-3439ae6b-abe6-
461f-a88c-52737f3f9fc4 with group instance id None)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:06,938] INFO
[GroupCoordinator 1]: Stabilized group hive-participation-
streams-app generation 3 (__consumer_offsets-10) with 1
members (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:06,945] INFO
[GroupCoordinator 1]: Stabilized group hive-participation-local
generation 4 (__consumer_offsets-26) with 1 members
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:06,954] INFO
[GroupCoordinator 1]: Assignment received from leader
consumer-hive-participation-local-1-3439ae6b-abe6-461f-a88c-
52737f3f9fc4 for group hive-participation-local for generation 4.
The group has 1 members, 0 of which are static.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:07,017] INFO
[GroupCoordinator 1]: Assignment received from leader hive-
participation-streams-app-a6155938-52d5-47d9-bfc6-
15aec5c4b305-StreamThread-1-consumer-d837c939-7895-
4cb8-8580-24d351fb1dee for group hive-participation-streams-
app for generation 3. The group has 1 members, 0 of which are
static. (kafka.coordinator.group.GroupCoordinator)
kafka-ui | 2022-12-09 01:16:21,040 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:16:21,275 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:16:51,005 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:16:51,193 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:16:52,048] INFO
[GroupCoordinator 1]: Member hive-participation-streams-app-
a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-
consumer-d837c939-7895-4cb8-8580-24d351fb1dee in group
hive-participation-streams-app has failed, removing it from the
group (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:52,065] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-streams-app in state PreparingRebalance with old
generation 3 (__consumer_offsets-10) (reason: removing
member hive-participation-streams-app-a6155938-52d5-47d9-
bfc6-15aec5c4b305-StreamThread-1-consumer-d837c939-
7895-4cb8-8580-24d351fb1dee on heartbeat expiration)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 01:16:52,067] INFO
[GroupCoordinator 1]: Group hive-participation-streams-app
with generation 4 is now empty (__consumer_offsets-10)
(kafka.coordinator.group.GroupCoordinator)
kafka-ui | 2022-12-09 01:17:21,020 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:17:21,145 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:17:50,999 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:17:51,121 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:18:04,952] INFO
[GroupMetadataManager brokerId=1] Group hive-participation-
streams-app transitioned to Dead in generation 4
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 01:18:11,836] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:18:11,837] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:18:11,865] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:18:11,866] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:18:21,025 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:18:21,206 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:18:51,018 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:18:51,148 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
haiho@ip-192-168-20-101 hive-participation-service %
kafka-ui | 2022-12-09 01:19:21,014 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:19:21,178 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:19:51,037 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:19:51,247 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:20:21,018 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:20:21,194 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:20:51,017 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:20:51,089 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:21:21,007 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:21:21,182 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:21:51,004 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:21:51,297 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:22:21,011 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:22:21,097 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:22:51,008 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:22:51,087 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:23:11,883] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:23:11,893] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:23:11,914] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:23:11,915] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:23:21,012 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:23:21,186 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui      | 2022-12-09 01:23:28,789 WARN [parallel-1] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for TopicColumnsToSortDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui      | 2022-12-09 01:23:28,936 WARN [parallel-1] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for SortOrderDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui      | 2022-12-09 01:23:35,440 WARN [parallel-2] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for ConsumerGroupOrderingDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
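These HV000254 warnings come from Hibernate Validator inside the kafka-ui image itself: its DTO classes were compiled without javac's -parameters flag, so parameter names are unavailable at runtime. They are benign and cannot be fixed from this repository. If your own Java code produced the same warning, the fix is to retain parameter metadata at compile time; a sketch with a hypothetical source path:

# Direct javac invocation with parameter metadata retained:
javac -parameters -d build/classes src/main/java/example/SortOrderDTO.java
# Gradle equivalent, in build.gradle:
#   tasks.withType(JavaCompile) { options.compilerArgs += ['-parameters'] }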
kafka-ui | 2022-12-09 01:23:51,032 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:23:51,118 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka         | [2022-12-09 01:23:56,481] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-local in state PreparingRebalance with old generation 4 (__consumer_offsets-26) (reason: Removing member consumer-hive-participation-local-1-3439ae6b-abe6-461f-a88c-52737f3f9fc4 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:23:56,488] INFO [GroupCoordinator 1]: Group hive-participation-local with generation 5 is now empty (__consumer_offsets-26) (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:23:56,516] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=consumer-hive-participation-local-1-3439ae6b-abe6-461f-a88c-52737f3f9fc4, groupInstanceId=None, clientId=consumer-hive-participation-local-1, clientHost=/172.19.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group hive-participation-local through explicit `LeaveGroup` request (kafka.coordinator.group.GroupCoordinator)
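Unlike the heartbeat expirations above, this member left via an explicit LeaveGroup request: the hive-participation-local consumer was closed cleanly, and KafkaConsumer.close() sends LeaveGroup for dynamic members by default. Kafka Streams deliberately disables that behavior on close, which is consistent with the streams-app member above expiring by heartbeat instead. The group's state can be verified with the bundled tooling; a sketch, same image and listener assumptions as above:

docker exec -it kafka kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --describe --group hive-participation-local --state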
kafka-ui | 2022-12-09 01:24:21,004 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:24:21,134 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka         | [2022-12-09 01:24:22,518] INFO [Admin Manager on Broker 1]: Error processing create topic request CreatableTopic(name='store.hive-participation-service.security', numPartitions=3, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='cleanup.policy', value='compact')]) (kafka.server.ZkAdminManager)
kafka         | org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
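This is the one real failure in the capture: the service asked for topic store.hive-participation-service.security with replicationFactor=3, but this compose stack runs a single broker, so the create request is rejected and the compacted topic never exists. On a one-broker local cluster the replication factor must be 1; either lower it in whatever local configuration drives the service's topic creation (not visible in this log), or pre-create the topic by hand. A sketch, assuming the cp-kafka image's kafka-topics CLI and an in-container listener on localhost:9092:

# Pre-create the compacted topic with a replication factor one broker can satisfy.
docker exec -it kafka kafka-topics \
  --bootstrap-server localhost:9092 \
  --create --topic store.hive-participation-service.security \
  --partitions 3 --replication-factor 1 \
  --config cleanup.policy=compact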
kafka         | [2022-12-09 01:24:22,628] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group hive-participation-streams-app in Empty state. Created a new member id hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-c1e01430-8f5b-4ec3-8679-2f1e63b7772c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,637] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group hive-participation-local in Empty state. Created a new member id consumer-hive-participation-local-1-a7a23c9a-4cdc-43fe-8350-919a59354042 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,643] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-streams-app in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-c1e01430-8f5b-4ec3-8679-2f1e63b7772c with group instance id None) (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,659] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-local in state PreparingRebalance with old generation 5 (__consumer_offsets-26) (reason: Adding new member consumer-hive-participation-local-1-a7a23c9a-4cdc-43fe-8350-919a59354042 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,664] INFO [GroupCoordinator 1]: Stabilized group hive-participation-streams-app generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,673] INFO [GroupCoordinator 1]: Stabilized group hive-participation-local generation 6 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,681] INFO [GroupCoordinator 1]: Assignment received from leader consumer-hive-participation-local-1-a7a23c9a-4cdc-43fe-8350-919a59354042 for group hive-participation-local for generation 6. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:24:22,727] INFO [GroupCoordinator 1]: Assignment received from leader hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-c1e01430-8f5b-4ec3-8679-2f1e63b7772c for group hive-participation-streams-app for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
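This burst is the service restarting: both groups run the full join protocol, in which the coordinator hands each unknown member a generated member id, tells it to rejoin, bumps the generation while rebalancing, stabilizes with one member, and then accepts the leader's partition assignment. Live membership and assignments can be inspected with the bundled tooling; a sketch, same assumptions as above:

docker exec -it kafka kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --describe --group hive-participation-streams-app --members --verbose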
kafka-ui | 2022-12-09 01:24:51,032 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:24:51,211 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka         | [2022-12-09 01:25:07,746] INFO [GroupCoordinator 1]: Member hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-c1e01430-8f5b-4ec3-8679-2f1e63b7772c in group hive-participation-streams-app has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:25:07,749] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-streams-app in state PreparingRebalance with old generation 1 (__consumer_offsets-10) (reason: removing member hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-c1e01430-8f5b-4ec3-8679-2f1e63b7772c on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
kafka         | [2022-12-09 01:25:07,751] INFO [GroupCoordinator 1]: Group hive-participation-streams-app with generation 2 is now empty (__consumer_offsets-10) (kafka.coordinator.group.GroupCoordinator)
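The streams member dropping off 45 seconds after joining, exactly its 45 s session timeout, again on heartbeat expiration, is consistent with the StreamThread having died after the failed topic creation above rather than with any network issue; that reading is an inference from the timing, not something this broker log states. Whether the compacted store topic ever materialized is easy to check (same assumptions as above):

docker exec -it kafka kafka-topics \
  --bootstrap-server localhost:9092 \
  --list | grep hive-participation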
kafka-ui | 2022-12-09 01:25:21,014 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:25:21,090 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:25:51,013 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:25:51,155 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:26:21,029 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:26:21,181 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:26:51,042 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:26:51,208 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:27:21,029 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:27:21,260 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:27:51,047 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:27:51,507 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka         | [2022-12-09 01:28:04,875] INFO [GroupMetadataManager brokerId=1] Group hive-participation-streams-app transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 01:28:11,926] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:28:11,930] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:28:11,959] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:28:11,960] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:28:21,018 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:28:21,097 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:28:51,044 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:28:51,138 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:29:21,021 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:29:21,192 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:29:50,986 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:29:51,041 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
elasticsearch | {"type": "server", "timestamp": "2022-12-09T01:30:00,035Z", "level": "INFO", "component": "o.e.x.s.SnapshotRetentionTask", "cluster.name": "docker-cluster", "node.name": "8c9f05d4bd02", "message": "starting SLM retention snapshot cleanup task", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-09T01:30:00,053Z", "level": "INFO", "component": "o.e.x.s.SnapshotRetentionTask", "cluster.name": "docker-cluster", "node.name": "8c9f05d4bd02", "message": "there are no repositories to fetch, SLM retention snapshot cleanup task complete", "cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-6mFQ" }
kafka-ui | 2022-12-09 01:30:20,986 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:30:21,155 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:30:51,010 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:30:51,173 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:31:21,001 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:31:21,214 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:31:51,001 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:31:51,113 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:32:21,034 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:32:21,209 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:32:51,011 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:32:51,083 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:33:11,961] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:33:11,974] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:33:12,000] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:33:12,001] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:33:21,024 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:33:21,191 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:33:51,006 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:33:51,077 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:34:20,984 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:34:21,155 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:34:51,004 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:34:51,155 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:35:20,986 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:35:21,165 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:35:51,012 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:35:51,201 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:36:21,017 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:36:21,078 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:36:51,025 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:36:51,194 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:37:21,001 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:37:21,191 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:37:51,010 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:37:51,159 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:38:12,032] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:38:12,042] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:38:12,071] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:38:12,071] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:38:21,012 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:38:21,188 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:38:51,015 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:38:51,171 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:39:21,071 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:39:21,561 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:39:51,003 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:39:51,119 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:40:21,022 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:40:21,249 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:40:51,016 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:40:51,087 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:41:20,984 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:41:21,067 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:41:51,044 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:41:51,230 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:42:20,988 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:42:21,157 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:42:51,014 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:42:51,146 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:43:12,099] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:43:12,116] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:43:12,162] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:43:12,164] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:43:21,017 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:43:21,297 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:43:50,998 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:43:51,160 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:44:20,974 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:44:21,178 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:44:50,966 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:44:51,051 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:45:20,942 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:45:21,014 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:45:50,941 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:45:51,122 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:46:20,971 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:46:21,174 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:46:50,963 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:46:51,138 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:47:20,968 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:47:21,729 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:47:50,975 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:47:51,172 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:48:12,133] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:48:12,144] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:48:12,178] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:48:12,178] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:48:20,967 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:48:21,128 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:48:50,938 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:48:51,012 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:49:20,975 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:49:21,200 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:49:50,969 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:49:51,138 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:50:20,953 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:50:21,029 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:50:50,982 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:50:51,269 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:51:20,943 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:51:21,095 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:51:50,973 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:51:51,136 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:52:20,954 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:52:21,035 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:52:50,983 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:52:51,160 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:53:12,194] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:53:12,204] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:53:12,229] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:53:12,230] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:53:20,972 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:53:21,177 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:53:50,973 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:53:51,125 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:54:20,948 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:54:21,189 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:54:50,980 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:54:51,130 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:55:21,045 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:55:21,233 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:55:50,971 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:55:51,092 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:56:20,967 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:56:21,084 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:56:50,933 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:56:51,002 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:57:20,960 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:57:21,108 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:57:50,972 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:57:51,084 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 01:58:12,242] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:58:12,251] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:58:12,276] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 01:58:12,276] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 01:58:20,927 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:58:21,071 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:58:50,929 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:58:51,009 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:59:20,964 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:59:21,083 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 01:59:50,932 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 01:59:51,005 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:00:20,906 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:00:20,972 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:00:50,894 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:00:50,982 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:01:20,941 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:01:21,060 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:01:50,905 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:01:50,973 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:02:20,900 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:02:20,969 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:02:50,896 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:02:51,131 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:03:12,261] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:03:12,272] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:03:12,304] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:03:12,305] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:03:20,897 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:03:21,062 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:03:50,926 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:03:50,993 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:04:20,903 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:04:20,977 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:04:50,937 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:04:51,047 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:05:20,898 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:05:20,958 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:05:50,891 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:05:50,961 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:06:20,893 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:06:21,146 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:06:50,917 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:06:51,092 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:07:20,925 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:07:21,057 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:07:50,919 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:07:51,270 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:08:12,319] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:08:12,330] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:08:12,419] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:08:12,419] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:08:20,930 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:08:21,075 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:08:50,936 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:08:51,148 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:09:20,914 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:09:20,985 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:09:50,894 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:09:51,055 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:10:20,926 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:10:21,171 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:10:50,915 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:10:51,002 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:11:20,927 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:11:21,114 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:11:50,911 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:11:51,098 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:12:20,910 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:12:20,992 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:12:50,883 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:12:50,963 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:13:12,432] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:13:12,443] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:13:12,470] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:13:12,471] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:13:20,929 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:13:21,052 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:13:50,904 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:13:51,126 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:14:20,906 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:14:20,983 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:14:50,895 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:14:50,959 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:15:20,895 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:15:21,038 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:15:50,909 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:15:51,071 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:16:20,884 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:16:20,948 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:16:50,884 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:16:51,041 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:17:20,900 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:17:21,206 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:17:50,926 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:17:50,999 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:18:12,572] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:18:12,589] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:18:12,617] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:18:12,619] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:18:20,885 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:18:21,223 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:18:50,913 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:18:51,189 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:19:20,902 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:19:21,085 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:19:50,909 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:19:50,977 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:20:20,881 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:20:20,952 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:20:50,884 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:20:50,962 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:21:20,918 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:21:21,043 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:21:50,931 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:21:51,071 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:22:20,897 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:22:20,973 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:22:50,887 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:22:51,166 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:23:12,630] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:23:12,641] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:23:12,686] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:23:12,688] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:23:20,908 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:23:21,136 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:23:50,904 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:23:51,067 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:24:20,908 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:24:20,975 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:24:50,893 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:24:50,955 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:25:20,879 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:25:20,949 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:25:50,885 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:25:50,942 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:26:20,876 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:26:20,944 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:26:50,873 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:26:51,090 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:27:20,896 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:27:21,154 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:27:50,924 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:27:51,101 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:28:12,699] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:28:12,709] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:28:12,734] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:28:12,735] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:28:20,894 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:28:20,968 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:28:50,879 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:28:50,950 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:29:20,879 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:29:20,959 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:29:50,872 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:29:50,954 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:30:20,871 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:30:20,946 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:30:50,881 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:30:50,945 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:31:20,878 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:31:21,025 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:31:50,877 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:31:50,949 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:32:20,922 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:32:21,034 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:32:50,875 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:32:50,947 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:33:12,753] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:33:12,765] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:33:12,791] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:33:12,791] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:33:20,882 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:33:20,945 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:33:50,918 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:33:51,046 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:34:20,873 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:34:20,941 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:34:50,870 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:34:50,949 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:35:20,871 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:35:21,031 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:35:50,869 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:35:50,933 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:36:20,870 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:36:20,930 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:36:50,869 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:36:50,979 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:37:20,908 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:37:21,013 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:37:50,866 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:37:50,919 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:38:12,799] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:38:12,810] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:38:12,847] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:38:12,849] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:38:20,859 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:38:20,898 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:38:50,863 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:38:50,919 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:39:20,857 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:39:20,927 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:39:50,872 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:39:50,939 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:40:20,869 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:40:21,024 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:40:50,865 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:40:50,945 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:41:20,866 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:41:20,933 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:41:50,864 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:41:50,927 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:42:20,867 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:42:20,939 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:42:50,864 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:42:50,933 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:43:12,863] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:43:12,875] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:43:12,912] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:43:12,914] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:43:20,866 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:43:20,934 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:43:50,876 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:43:50,940 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:44:20,863 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:44:21,093 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:44:50,861 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:44:50,930 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:45:20,867 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:45:20,925 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
haiho@ip-192-168-20-101 hive-participation-service %
kafka-ui | 2022-12-09 02:45:50,884 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:45:50,999 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:46:15,554] INFO
[GroupCoordinator 1]: Member consumer-hive-participation-
local-1-a7a23c9a-4cdc-43fe-8350-919a59354042 in group hive-
participation-local has failed, removing it from the group
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:46:15,569] INFO
[GroupCoordinator 1]: Preparing to rebalance group hive-
participation-local in state PreparingRebalance with old
generation 6 (__consumer_offsets-26) (reason: removing
member consumer-hive-participation-local-1-a7a23c9a-4cdc-
43fe-8350-919a59354042 on heartbeat expiration)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:46:15,572] INFO
[GroupCoordinator 1]: Group hive-participation-local with
generation 7 is now empty (__consumer_offsets-26)
(kafka.coordinator.group.GroupCoordinator)
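
Note: this is the broker-side view of the service's consumer (group
hive-participation-local) going away: no heartbeat arrived within the group's
session timeout, so the coordinator evicts the member, bumps the generation, and
the group ends up empty. Group state like this can be inspected from inside the
broker container (a sketch, with the same container-name and listener
assumptions as above):

    docker exec kafka kafka-consumer-groups --bootstrap-server kafka-local:9095 \
      --describe --group hive-participation-local --state
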
kafka-ui | 2022-12-09 02:46:20,847 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:46:20,908 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:25,771Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "stopping ...", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:25,853Z", "level": "INFO", "component":
"o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message":
"[controller/290] [Main.cc@154] ML controller exiting",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:25,857Z", "level": "INFO", "component":
"o.e.x.m.p.NativeController", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "Native controller
process has stopped - no new native processes can be started",
"cluster.uuid": "jeCIhYCERKmqRIfciS5i-A", "node.id":
"H18iHmWlRFK5x1zuu-6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:25,874Z", "level": "INFO", "component":
"o.e.x.w.WatcherService", "cluster.name": "docker-cluster",
"node.name": "8c9f05d4bd02", "message": "stopping watch
service, reason [shutdown initiated]", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:25,890Z", "level": "INFO", "component":
"o.e.x.w.WatcherLifeCycleService", "cluster.name": "docker-
cluster", "node.name": "8c9f05d4bd02", "message": "watcher
has stopped and shutdown", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
schema | [2022-12-09 02:46:25,899] INFO Stopped
NetworkTrafficServerConnector@3c7c886c{HTTP/1.1, (http/1.1,
h2c)}{schema:9091}
(org.eclipse.jetty.server.AbstractConnector)
schema | [2022-12-09 02:46:25,902] INFO node0
Stopped scavenging (org.eclipse.jetty.server.session)
schema | [2022-12-09 02:46:25,920] INFO Stopped
o.e.j.s.ServletContextHandler@28348c6{/ws,null,STOPPED}
(org.eclipse.jetty.server.handler.ContextHandler)
kafka-ui | 2022-12-09 02:46:26,003 INFO [kafka-admin-
client-thread | adminclient-1] o.a.k.c.u.AppInfoParser: App info
kafka.admin.client for adminclient-1 unregistered
kafka-ui | 2022-12-09 02:46:26,020 INFO [kafka-admin-
client-thread | adminclient-1] o.a.k.c.m.Metrics: Metrics
scheduler closed
kafka-ui | 2022-12-09 02:46:26,027 INFO [kafka-admin-
client-thread | adminclient-1] o.a.k.c.m.Metrics: Closing reporter
org.apache.kafka.common.metrics.JmxReporter
kafka-ui | 2022-12-09 02:46:26,028 INFO [kafka-admin-
client-thread | adminclient-1] o.a.k.c.m.Metrics: Metrics
reporters closed
schema | [2022-12-09 02:46:26,042] INFO Stopped
o.e.j.s.ServletContextHandler@6de0f580{/,null,STOPPED}
(org.eclipse.jetty.server.handler.ContextHandler)
schema | [2022-12-09 02:46:26,080] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,080] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,083] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,085] INFO Shutting
down schema registry
(io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistr
y)
schema | [2022-12-09 02:46:26,091] INFO [kafka-store-
reader-thread-_schemas]: Shutting down
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderTh
read)
schema | [2022-12-09 02:46:26,097] INFO [kafka-store-
reader-thread-_schemas]: Shutdown completed
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderTh
read)
schema | [2022-12-09 02:46:26,097] INFO [kafka-store-
reader-thread-_schemas]: Stopped
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderTh
read)
schema | [2022-12-09 02:46:26,099] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Resetting generation due to: consumer
pro-actively leaving the group
(org.apache.kafka.clients.consumer.internals.ConsumerCoordin
ator)
schema | [2022-12-09 02:46:26,099] INFO [Consumer
clientId=KafkaStore-reader-_schemas, groupId=schema-
registry-schema-9091] Request joining group due to: consumer
pro-actively leaving the group
(org.apache.kafka.clients.consumer.internals.ConsumerCoordin
ator)
schema | [2022-12-09 02:46:26,106] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,106] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,107] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,122] INFO App info
kafka.consumer for KafkaStore-reader-_schemas unregistered
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:46:26,122] INFO
KafkaStoreReaderThread shutdown complete.
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderTh
read)
schema | [2022-12-09 02:46:26,123] INFO [Producer
clientId=producer-1] Closing the Kafka producer with
timeoutMillis = 9223372036854775807 ms.
(org.apache.kafka.clients.producer.KafkaProducer)
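
Note: timeoutMillis = 9223372036854775807 is Long.MAX_VALUE. KafkaProducer.close()
was called without an explicit timeout, so the producer is allowed to block
indefinitely while flushing buffered records before shutting down.
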
schema | [2022-12-09 02:46:26,137] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,137] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,137] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,138] INFO App info
kafka.producer for producer-1 unregistered
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:46:26,138] INFO Kafka store
producer shut down
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:46:26,138] INFO Kafka store
shut down complete
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:46:26,145] ERROR
Unexpected exception in schema registry group processing
thread
(io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGro
upLeaderElector)
schema |
org.apache.kafka.common.errors.WakeupException
schema | at
org.apache.kafka.clients.consumer.internals.ConsumerNetwork
Client.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
schema | at
org.apache.kafka.clients.consumer.internals.ConsumerNetwork
Client.poll(ConsumerNetworkClient.java:278)
schema | at
org.apache.kafka.clients.consumer.internals.ConsumerNetwork
Client.poll(ConsumerNetworkClient.java:236)
schema | at
org.apache.kafka.clients.consumer.internals.ConsumerNetwork
Client.poll(ConsumerNetworkClient.java:227)
schema | at
io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator.poll(SchemaRegistryCoordinator.java:124)
schema | at
io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGrou
pLeaderElector$1.run(KafkaGroupLeaderElector.java:198)
schema | at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(
Executors.java:515)
schema | at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:2
64)
schema | at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Th
readPoolExecutor.java:1128)
schema | at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(T
hreadPoolExecutor.java:628)
schema | at
java.base/java.lang.Thread.run(Thread.java:829)
schema | [2022-12-09 02:46:26,182] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Member sr-1-
ab0cd419-79c9-4235-8b6a-80fa3660511e sending LeaveGroup
request to coordinator kafka-local:9095 (id: 2147483646 rack:
null) due to the consumer is being closed
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:46:26,195] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting
generation due to: consumer pro-actively leaving the group
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:46:26,197] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Request
joining group due to: consumer pro-actively leaving the group
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:46:26,198] WARN [Schema
registry clientId=sr-1, groupId=schema-registry] Close timed
out with 1 pending requests to coordinator, terminating client
connections
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:46:26,198] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,198] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:46:26,199] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
kafka | [2022-12-09 02:46:26,204] INFO
[GroupCoordinator 1]: Preparing to rebalance group schema-
registry in state PreparingRebalance with old generation 1
(__consumer_offsets-29) (reason: Removing member sr-1-
ab0cd419-79c9-4235-8b6a-80fa3660511e on LeaveGroup)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:46:26,204] INFO
[GroupCoordinator 1]: Group schema-registry with generation 2
is now empty (__consumer_offsets-29)
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:46:26,212] INFO
[GroupCoordinator 1]: Member MemberMetadata(memberId=sr-
1-ab0cd419-79c9-4235-8b6a-80fa3660511e,
groupInstanceId=None, clientId=sr-1, clientHost=/172.19.0.6,
sessionTimeoutMs=10000, rebalanceTimeoutMs=300000,
supportedProtocols=List(v0)) has left group schema-registry
through explicit `LeaveGroup` request
(kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 02:46:26,213] INFO App info
kafka.schema.registry for sr-1 unregistered
(org.apache.kafka.common.utils.AppInfoParser)
dynamodb exited with code 143
dynamodb exited with code 0
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:26,414Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "stopped", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:26,417Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "closing ...", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:46:26,485Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"8c9f05d4bd02", "message": "closed", "cluster.uuid":
"jeCIhYCERKmqRIfciS5i-A", "node.id": "H18iHmWlRFK5x1zuu-
6mFQ" }
schema exited with code 143
schema exited with code 0
elasticsearch exited with code 143
elasticsearch exited with code 0
kafka-ui exited with code 143
kafka-ui exited with code 0
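
Note: exit code 143 is 128 + 15, i.e. the process ended on SIGTERM, the signal
docker-compose sends at shutdown, so these are clean stops rather than crashes
(the paired "exited with code 0" lines appear to be an artifact of how this
compose version reports the stop). The recorded code of a stopped container can
be checked afterwards with, for example:

    docker inspect -f '{{.State.ExitCode}}' kafka-ui
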
kafka | [2022-12-09 02:46:28,425] INFO Terminating
process due to signal SIGTERM
(org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2022-12-09 02:46:28,545] INFO [KafkaServer
id=1] shutting down (kafka.server.KafkaServer)
kafka | [2022-12-09 02:46:28,555] INFO [KafkaServer
id=1] Starting controlled shutdown (kafka.server.KafkaServer)
kafka | [2022-12-09 02:46:28,641] INFO [Controller
id=1] Shutting down broker 1 (kafka.controller.KafkaController)
kafka | [2022-12-09 02:46:28,642] DEBUG [Controller
id=1] All shutting down brokers: 1
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:46:28,644] DEBUG [Controller
id=1] Live brokers: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:46:28,656] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:46:28,661] TRACE [Controller
id=1] All leaders = __consumer_offsets-13 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-46 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-9 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-42 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-21 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-17 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-30 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-26 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-5 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-38 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-1 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-34 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-16 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
schemas-0 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-45 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-12 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-41 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-24 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-20 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-49 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-0 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-29 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-25 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-8 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-37 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-4 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-33 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-15 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-48 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-11 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-44 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-23 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-19 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-32 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-28 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-7 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-40 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-3 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-36 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-47 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-14 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-43 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),r
esponse.data-warehouse-svc.warehousedata-event-0 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-10 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-22 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-18 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-31 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-27 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-39 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-6 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-35 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),_
_consumer_offsets-2 ->
(Leader:1,ISR:1,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1)
(kafka.controller.KafkaController)
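
Note: this dump is the leadership map at shutdown: all 50 __consumer_offsets
partitions, _schemas-0 (the schema registry's backing topic) and
response.data-warehouse-svc.warehousedata-event-0, each showing Leader:1,ISR:1 —
the expected picture for a single-broker cluster where every topic has
replication factor 1. The same information is available per topic, for example
(sketch, same container and listener assumptions as above):

    docker exec kafka kafka-topics --bootstrap-server kafka-local:9095 \
      --describe --topic _schemas
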
kafka | [2022-12-09 02:46:28,698] INFO [KafkaServer
id=1] Controlled shutdown request returned successfully after
84ms (kafka.server.KafkaServer)
kafka | [2022-12-09 02:46:28,727] INFO
[/config/changes-event-process-thread]: Shutting down
(kafka.common.ZkNodeChangeNotificationListener$ChangeEve
ntProcessThread)
kafka | [2022-12-09 02:46:28,730] INFO
[/config/changes-event-process-thread]: Shutdown completed
(kafka.common.ZkNodeChangeNotificationListener$ChangeEve
ntProcessThread)
kafka | [2022-12-09 02:46:28,730] INFO
[/config/changes-event-process-thread]: Stopped
(kafka.common.ZkNodeChangeNotificationListener$ChangeEve
ntProcessThread)
kafka | [2022-12-09 02:46:28,733] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Stopping socket server
request processors (kafka.network.SocketServer)
kafka | [2022-12-09 02:46:28,790] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Stopped socket server
request processors (kafka.network.SocketServer)
kafka | [2022-12-09 02:46:28,794] INFO [data-plane
Kafka Request Handler on Broker 1], shutting down
(kafka.server.KafkaRequestHandlerPool)
kafka | [2022-12-09 02:46:28,807] INFO [data-plane
Kafka Request Handler on Broker 1], shut down completely
(kafka.server.KafkaRequestHandlerPool)
kafka | [2022-12-09 02:46:28,827] INFO
[ExpirationReaper-1-AlterAcls]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,017] INFO
[ExpirationReaper-1-AlterAcls]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,017] INFO
[ExpirationReaper-1-AlterAcls]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,028] INFO [KafkaApi-1]
Shutdown complete. (kafka.server.KafkaApis)
kafka | [2022-12-09 02:46:29,096] INFO
[ExpirationReaper-1-topic]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,218] INFO
[ExpirationReaper-1-topic]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,218] INFO
[ExpirationReaper-1-topic]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,229] INFO
[TransactionCoordinator id=1] Shutting down.
(kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2022-12-09 02:46:29,233] INFO [Transaction
State Manager 1]: Shutdown complete
(kafka.coordinator.transaction.TransactionStateManager)
kafka | [2022-12-09 02:46:29,233] INFO [Transaction
Marker Channel Manager 1]: Shutting down
(kafka.coordinator.transaction.TransactionMarkerChannelManag
er)
kafka | [2022-12-09 02:46:29,236] INFO [Transaction
Marker Channel Manager 1]: Shutdown completed
(kafka.coordinator.transaction.TransactionMarkerChannelManag
er)
kafka | [2022-12-09 02:46:29,236] INFO [Transaction
Marker Channel Manager 1]: Stopped
(kafka.coordinator.transaction.TransactionMarkerChannelManag
er)
kafka | [2022-12-09 02:46:29,243] INFO
[TransactionCoordinator id=1] Shutdown complete.
(kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2022-12-09 02:46:29,245] INFO
[GroupCoordinator 1]: Shutting down.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:46:29,248] INFO
[ExpirationReaper-1-Heartbeat]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,423] INFO
[ExpirationReaper-1-Heartbeat]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,423] INFO
[ExpirationReaper-1-Heartbeat]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,426] INFO
[ExpirationReaper-1-Rebalance]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,622] INFO
[ExpirationReaper-1-Rebalance]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,622] INFO
[ExpirationReaper-1-Rebalance]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,636] INFO
[GroupCoordinator 1]: Shutdown complete.
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:46:29,650] INFO
[ReplicaManager broker=1] Shutting down
(kafka.server.ReplicaManager)
kafka | [2022-12-09 02:46:29,652] INFO
[LogDirFailureHandler]: Shutting down
(kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2022-12-09 02:46:29,653] INFO
[LogDirFailureHandler]: Shutdown completed
(kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2022-12-09 02:46:29,652] INFO
[LogDirFailureHandler]: Stopped
(kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2022-12-09 02:46:29,660] INFO
[ReplicaFetcherManager on broker 1] shutting down
(kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 02:46:29,669] INFO
[ReplicaFetcherManager on broker 1] shutdown completed
(kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 02:46:29,670] INFO
[ReplicaAlterLogDirsManager on broker 1] shutting down
(kafka.server.ReplicaAlterLogDirsManager)
kafka | [2022-12-09 02:46:29,671] INFO
[ReplicaAlterLogDirsManager on broker 1] shutdown completed
(kafka.server.ReplicaAlterLogDirsManager)
kafka | [2022-12-09 02:46:29,671] INFO
[ExpirationReaper-1-Fetch]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,822] INFO
[ExpirationReaper-1-Fetch]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,823] INFO
[ExpirationReaper-1-Fetch]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:29,827] INFO
[ExpirationReaper-1-Produce]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,024] INFO
[ExpirationReaper-1-Produce]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,024] INFO
[ExpirationReaper-1-Produce]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,029] INFO
[ExpirationReaper-1-DeleteRecords]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,228] INFO
[ExpirationReaper-1-DeleteRecords]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,229] INFO
[ExpirationReaper-1-DeleteRecords]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,232] INFO
[ExpirationReaper-1-ElectLeader]: Shutting down
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,430] INFO
[ExpirationReaper-1-ElectLeader]: Stopped
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,430] INFO
[ExpirationReaper-1-ElectLeader]: Shutdown completed
(kafka.server.DelayedOperationPurgatory$ExpiredOperationRea
per)
kafka | [2022-12-09 02:46:30,452] INFO
[ReplicaManager broker=1] Shut down completely
(kafka.server.ReplicaManager)
kafka | [2022-12-09 02:46:30,455] INFO
[BrokerToControllerChannelManager broker=1 name=alterIsr]:
Shutting down (kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:46:30,456] INFO
[BrokerToControllerChannelManager broker=1 name=alterIsr]:
Stopped (kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:46:30,456] INFO
[BrokerToControllerChannelManager broker=1 name=alterIsr]:
Shutdown completed
(kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:46:30,469] INFO Broker to
controller channel manager for alterIsr shutdown
(kafka.server.BrokerToControllerChannelManagerImpl)
kafka | [2022-12-09 02:46:30,471] INFO
[BrokerToControllerChannelManager broker=1
name=forwarding]: Shutting down
(kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:46:30,471] INFO
[BrokerToControllerChannelManager broker=1
name=forwarding]: Stopped
(kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:46:30,471] INFO
[BrokerToControllerChannelManager broker=1
name=forwarding]: Shutdown completed
(kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:46:30,473] INFO Broker to
controller channel manager for forwarding shutdown
(kafka.server.BrokerToControllerChannelManagerImpl)
kafka | [2022-12-09 02:46:30,476] INFO Shutting
down. (kafka.log.LogManager)
kafka | [2022-12-09 02:46:30,483] INFO Shutting down
the log cleaner. (kafka.log.LogCleaner)
kafka | [2022-12-09 02:46:30,485] INFO [kafka-log-
cleaner-thread-0]: Shutting down (kafka.log.LogCleaner)
kafka | [2022-12-09 02:46:30,487] INFO [kafka-log-
cleaner-thread-0]: Stopped (kafka.log.LogCleaner)
kafka | [2022-12-09 02:46:30,487] INFO [kafka-log-
cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner)
kafka | [2022-12-09 02:46:30,531] INFO
[ProducerStateManager partition=__consumer_offsets-29]
Wrote producer snapshot at offset 2 with 0 producer ids in 6
ms. (kafka.log.ProducerStateManager)
kafka | [2022-12-09 02:46:30,588] INFO
[ProducerStateManager partition=__consumer_offsets-26]
Wrote producer snapshot at offset 7 with 0 producer ids in 2
ms. (kafka.log.ProducerStateManager)
kafka | [2022-12-09 02:46:30,619] INFO
[ProducerStateManager partition=__consumer_offsets-10]
Wrote producer snapshot at offset 8 with 0 producer ids in 1
ms. (kafka.log.ProducerStateManager)
kafka | [2022-12-09 02:46:30,622] INFO
[ProducerStateManager partition=_schemas-0] Wrote producer
snapshot at offset 2 with 0 producer ids in 0 ms.
(kafka.log.ProducerStateManager)
kafka | [2022-12-09 02:46:30,645] INFO Shutdown
complete. (kafka.log.LogManager)
kafka | [2022-12-09 02:46:30,646] INFO
[ControllerEventThread controllerId=1] Shutting down
(kafka.controller.ControllerEventManager$ControllerEventThrea
d)
kafka | [2022-12-09 02:46:30,648] INFO
[ControllerEventThread controllerId=1] Shutdown completed
(kafka.controller.ControllerEventManager$ControllerEventThrea
d)
kafka | [2022-12-09 02:46:30,648] INFO
[ControllerEventThread controllerId=1] Stopped
(kafka.controller.ControllerEventManager$ControllerEventThrea
d)
kafka | [2022-12-09 02:46:30,652] DEBUG [Controller
id=1] Resigning (kafka.controller.KafkaController)
kafka | [2022-12-09 02:46:30,654] DEBUG [Controller
id=1] Unregister BrokerModifications handler for Set(1)
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:46:30,658] INFO
[PartitionStateMachine controllerId=1] Stopped partition state
machine (kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 02:46:30,660] INFO
[ReplicaStateMachine controllerId=1] Stopped replica state
machine (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 02:46:30,663] INFO
[RequestSendThread controllerId=1] Shutting down
(kafka.controller.RequestSendThread)
kafka | [2022-12-09 02:46:30,664] INFO
[RequestSendThread controllerId=1] Shutdown completed
(kafka.controller.RequestSendThread)
kafka | [2022-12-09 02:46:30,664] INFO
[RequestSendThread controllerId=1] Stopped
(kafka.controller.RequestSendThread)
kafka | [2022-12-09 02:46:30,678] INFO [Controller
id=1] Resigned (kafka.controller.KafkaController)
kafka | [2022-12-09 02:46:30,680] INFO [feature-zk-
node-event-process-thread]: Shutting down
(kafka.server.FinalizedFeatureChangeListener$ChangeNotificati
onProcessorThread)
kafka | [2022-12-09 02:46:30,681] INFO [feature-zk-
node-event-process-thread]: Shutdown completed
(kafka.server.FinalizedFeatureChangeListener$ChangeNotificati
onProcessorThread)
kafka | [2022-12-09 02:46:30,681] INFO [feature-zk-
node-event-process-thread]: Stopped
(kafka.server.FinalizedFeatureChangeListener$ChangeNotificati
onProcessorThread)
kafka | [2022-12-09 02:46:30,684] INFO
[ZooKeeperClient Kafka server] Closing.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 02:46:30,827] INFO Session:
0x100000154ac0001 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:46:30,827] INFO EventThread
shut down for session: 0x100000154ac0001
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 02:46:30,833] INFO
[ZooKeeperClient Kafka server] Closed.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 02:46:30,835] INFO
[ThrottledChannelReaper-Fetch]: Shutting down
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:31,707] INFO
[ThrottledChannelReaper-Fetch]: Shutdown completed
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:31,707] INFO
[ThrottledChannelReaper-Fetch]: Stopped
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:31,709] INFO
[ThrottledChannelReaper-Produce]: Shutting down
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:31,910] INFO
[ThrottledChannelReaper-Produce]: Shutdown completed
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:31,912] INFO
[ThrottledChannelReaper-Request]: Shutting down
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:31,914] INFO
[ThrottledChannelReaper-Produce]: Stopped
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:32,908] INFO
[ThrottledChannelReaper-Request]: Shutdown completed
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:32,910] INFO
[ThrottledChannelReaper-ControllerMutation]: Shutting down
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:32,909] INFO
[ThrottledChannelReaper-Request]: Stopped
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:33,910] INFO
[ThrottledChannelReaper-ControllerMutation]: Stopped
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:33,910] INFO
[ThrottledChannelReaper-ControllerMutation]: Shutdown
completed
(kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:46:33,940] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Shutting down socket
server (kafka.network.SocketServer)
kafka | [2022-12-09 02:46:34,043] INFO [SocketServer
listenerType=ZK_BROKER, nodeId=1] Shutdown completed
(kafka.network.SocketServer)
kafka | [2022-12-09 02:46:34,045] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
kafka | [2022-12-09 02:46:34,045] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
kafka | [2022-12-09 02:46:34,046] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
kafka | [2022-12-09 02:46:34,053] INFO Broker and
topic stats closed (kafka.server.BrokerTopicStats)
kafka | [2022-12-09 02:46:34,058] INFO App info
kafka.server for 1 unregistered
(org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 02:46:34,059] INFO [KafkaServer
id=1] shut down completed (kafka.server.KafkaServer)
kafka exited with code 143
kafka exited with code 0
zookeeper exited with code 143

[1] + done docker-compose -f ./app/docker-compose.yml up
haiho@ip-192-168-20-101 hive-participation-service % clear
haiho@ip-192-168-20-101 hive-participation-service % docker-compose -f ./app/docker-compose.yml up &
[1] 9931
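
Note: the trailing & runs the compose client as a background shell job (here job
[1] with PID 9931), which is why container logs keep interleaving with the
prompt above. Detached mode is the quieter equivalent, with logs pulled on
demand:

    docker-compose -f ./app/docker-compose.yml up -d
    docker-compose -f ./app/docker-compose.yml logs -f kafka
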
[+] Running 7/7
⠿ Network app_default Created
0.0s
⠿ Container elasticsearch Created
0.0s
⠿ Container zookeeper Created
0.0s
⠿ Container dynamodb Created
0.0s
⠿ Container kafka Created
0.0s
⠿ Container kafka-ui Created
0.0s
⠿ Container schema Created
0.0s
Attaching to dynamodb, elasticsearch, kafka, kafka-ui, schema,
zookeeper
zookeeper | ===> User
zookeeper | uid=1000(appuser) gid=1000(appuser)
groups=1000(appuser)
zookeeper | ===> Configuring ...
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser)
groups=1000(appuser)
kafka | ===> Configuring ...
schema | ===> User
schema | uid=1000(appuser) gid=1000(appuser)
groups=1000(appuser)
schema | ===> Configuring ...
dynamodb | Initializing DynamoDB Local with the
following configuration:
dynamodb | Port: 8000
dynamodb | InMemory: false
dynamodb | DbPath: /home/dynamodblocal
dynamodb | SharedDb: true
dynamodb | shouldDelayTransientStatuses: false
dynamodb | CorsParams: null
dynamodb |
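
Note: these settings mirror DynamoDB Local's startup flags (-port, -dbPath,
-sharedDb, and so on); with SharedDb: true every client sees the same database
file regardless of the credentials or region it presents. A quick smoke test
against it, assuming port 8000 is published to the host and any dummy AWS
credentials are configured:

    aws dynamodb list-tables --endpoint-url http://localhost:8000
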
zookeeper | ===> Running preflight checks ...
zookeeper | ===> Check if /var/lib/zookeeper/data is
writable ...
zookeeper | ===> Check if /var/lib/zookeeper/log is
writable ...
schema | ===> Running preflight checks ...
schema | ===> Check if Kafka is healthy ...
zookeeper | ===> Launching ...
zookeeper | ===> Launching zookeeper ...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
schema | SLF4J: Class path contains multiple SLF4J
bindings.
schema | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/
slf4j/impl/StaticLoggerBinder.class]
schema | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org
/slf4j/impl/StaticLoggerBinder.class]
schema | SLF4J: See
http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
schema | SLF4J: Actual binding is of type
[org.slf4j.impl.SimpleLoggerFactory]
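
Note: the SLF4J warning is benign. The cp-base-new classpath ships both
slf4j-simple and slf4j-log4j12; SLF4J binds to whichever it finds first
(slf4j-simple here), and the preflight tooling simply logs through that binding.
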
kafka | ===> Check if Zookeeper is healthy ...
schema | [main] INFO
org.apache.kafka.clients.admin.AdminClientConfig -
AdminClientConfig values:
schema | bootstrap.servers = [kafka-local:9095]
schema | client.dns.lookup = use_all_dns_ips
schema | client.id =
schema | connections.max.idle.ms = 300000
schema | default.api.timeout.ms = 60000
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 2147483647
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema |
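
Note: this block is the configuration of the AdminClient that the schema
registry's preflight check builds while waiting for Kafka. Its
bootstrap.servers = [kafka-local:9095] comes from the registry's kafkastore
bootstrap setting, which on the Confluent image is conventionally injected via
the SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS environment variable (an
assumption here — the compose file itself is not part of this capture).
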
kafka-ui | (ASCII art banner: "UI for Apache Kafka")
schema | [main] INFO
org.apache.kafka.common.utils.AppInfoParser - Kafka version:
7.1.1-ccs
schema | [main] INFO
org.apache.kafka.common.utils.AppInfoParser - Kafka commitId:
947fac5beb61836d
schema | [main] INFO
org.apache.kafka.common.utils.AppInfoParser - Kafka
startTimeMs: 1670554074228
kafka | SLF4J: Class path contains multiple SLF4J
bindings.
kafka | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/
slf4j/impl/StaticLoggerBinder.class]
kafka | SLF4J: Found binding in
[jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org
/slf4j/impl/StaticLoggerBinder.class]
kafka | SLF4J: See
http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
kafka | SLF4J: Actual binding is of type
[org.slf4j.impl.SimpleLoggerFactory]
kafka-ui | 2022-12-09 02:47:54,247 INFO [background-
preinit] o.h.v.i.u.Version: HV000001: Hibernate Validator
6.2.0.Final
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
kafka-ui | 2022-12-09 02:47:54,836 INFO [main]
c.p.k.u.KafkaUiApplication: Starting KafkaUiApplication using
Java 13.0.9 on 9da7eef27bb1 with PID 1 (/kafka-ui-api.jar
started by kafkaui in /)
kafka-ui | 2022-12-09 02:47:54,862 DEBUG [main]
c.p.k.u.KafkaUiApplication: Running with Spring Boot v2.6.3,
Spring v5.3.15
kafka-ui | 2022-12-09 02:47:54,867 INFO [main]
c.p.k.u.KafkaUiApplication: No active profile set, falling back to
default profiles: default
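
Note: kafka-ui is a Spring Boot application; the cluster it watches ("hiveLocal"
in the scheduler entries) is supplied through environment configuration. A
standalone equivalent of what the compose service is presumably doing (the
compose file is not shown, so the image name and port mapping here are
illustrative):

    docker run -d -p 8080:8080 --network app_default \
      -e KAFKA_CLUSTERS_0_NAME=hiveLocal \
      -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka-local:9095 \
      provectuslabs/kafka-ui
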
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
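
Note: these warnings are the normal startup race. The registry's preflight
AdminClient polls kafka-local:9095 in a retry loop and keeps logging "Broker may
not be available" until the broker finishes booting and binds its listener; they
stop on their own. Reachability can also be probed from a throwaway container on
the same network (illustrative; the image tag matches the 7.1.1-ccs version
logged above):

    docker run --rm --network app_default confluentinc/cp-kafka:7.1.1 \
      kafka-broker-api-versions --bootstrap-server kafka-local:9095
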
zookeeper | [2022-12-09 02:47:55,292] INFO Reading
configuration from: /etc/kafka/zookeeper.properties
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,406] INFO
clientPortAddress is 0.0.0.0:2191
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,407] INFO
secureClientPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,412] INFO
observerMasterPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,412] INFO
metricsProvider.className is
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,430] INFO
autopurge.snapRetainCount set to 3
(org.apache.zookeeper.server.DatadirCleanupManager)
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:zookeeper.version=3.6.3--
6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on
04/08/2021 16:35 GMT
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:host.name=kafka
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.version=11.0.14.1
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.vendor=Azul Systems, Inc.
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.home=/usr/lib/jvm/zulu11-ca
zookeeper | [2022-12-09 02:47:55,443] INFO
autopurge.purgeInterval set to 0
(org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2022-12-09 02:47:55,446] INFO Purge task
is not scheduled.
(org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper | [2022-12-09 02:47:55,447] WARN Either no
config or no quorum defined in config, running in standalone
mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.class.path=/usr/share/java/cp-base-
new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/metrics-
core-4.1.12.1.jar:/usr/share/java/cp-base-new/minimal-json-
0.9.5.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-
2.12.3.jar:/usr/share/java/cp-base-new/kafka_2.13-7.1.1-
ccs.jar:/usr/share/java/cp-base-new/jackson-databind-
2.12.3.jar:/usr/share/java/cp-base-new/snappy-java-
1.1.8.4.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/
usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar:/usr/share/
java/cp-base-new/audience-annotations-0.5.0.jar:/usr/share/
java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/
jackson-module-scala_2.13-2.12.3.jar:/usr/share/java/cp-base-
new/scala-logging_2.13-3.9.3.jar:/usr/share/java/cp-base-new/
zstd-jni-1.5.0-4.jar:/usr/share/java/cp-base-new/logredactor-
metrics-1.0.10.jar:/usr/share/java/cp-base-new/kafka-raft-7.1.1-
ccs.jar:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar:/
usr/share/java/cp-base-new/kafka-storage-7.1.1-ccs.jar:/usr/
share/java/cp-base-new/slf4j-api-1.7.30.jar:/usr/share/java/cp-
base-new/scala-collection-compat_2.13-2.4.4.jar:/usr/share/
java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-
new/jmx_prometheus_javaagent-0.14.0.jar:/usr/share/java/cp-
base-new/kafka-clients-7.1.1-ccs.jar:/usr/share/java/cp-base-
new/jose4j-0.7.8.jar:/usr/share/java/cp-base-new/zookeeper-
3.6.3.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.5.jar:/
usr/share/java/cp-base-new/kafka-metadata-7.1.1-ccs.jar:/usr/
share/java/cp-base-new/gson-2.8.6.jar:/usr/share/java/cp-base-
new/common-utils-7.1.1.jar:/usr/share/java/cp-base-new/kafka-
server-common-7.1.1-ccs.jar:/usr/share/java/cp-base-new/
jolokia-jvm-1.6.2-agent.jar:/usr/share/java/cp-base-new/json-
simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-
dataformat-yaml-2.12.3.jar:/usr/share/java/cp-base-new/scala-
java8-compat_2.13-1.0.0.jar:/usr/share/java/cp-base-new/disk-
usage-agent-7.1.1.jar:/usr/share/java/cp-base-new/paranamer-
2.8.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/
share/java/cp-base-new/logredactor-1.0.10.jar:/usr/share/java/
cp-base-new/snakeyaml-1.27.jar:/usr/share/java/cp-base-new/
zookeeper-jute-3.6.3.jar:/usr/share/java/cp-base-new/jackson-
annotations-2.12.3.jar:/usr/share/java/cp-base-new/argparse4j-
0.7.0.jar:/usr/share/java/cp-base-new/confluent-log4j-1.2.17-
cp10.jar:/usr/share/java/cp-base-new/scala-library-2.13.5.jar:/
usr/share/java/cp-base-new/utility-belt-7.1.1.jar:/usr/share/
java/cp-base-new/kafka-storage-api-7.1.1-ccs.jar:/usr/share/
java/cp-base-new/jolokia-core-1.6.2.jar:/usr/share/java/cp-base-
new/jackson-datatype-jdk8-2.12.3.jar:/usr/share/java/cp-base-
new/jackson-core-2.12.3.jar
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:
/lib64:/lib:/usr/lib
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.io.tmpdir=/tmp
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:java.compiler=<NA>
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.name=Linux
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.arch=amd64
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.version=5.15.49-linuxkit
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:user.name=appuser
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:user.home=/home/appuser
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:user.dir=/home/appuser
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.memory.free=117MB
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.memory.max=1964MB
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Client environment:os.memory.total=124MB
zookeeper | [2022-12-09 02:47:55,477] INFO Log4j 1.2
jmx support found and enabled.
(org.apache.zookeeper.jmx.ManagedUtil)
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Initiating client connection, connectString=zookeeper:2191
sessionTimeout=40000
watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher
@289d1c02
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
kafka | [main] INFO
org.apache.zookeeper.common.X509Util - Setting -D
jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-
initiated TLS renegotiation
kafka | [main] INFO
org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value
is 1048575 Bytes
zookeeper | [2022-12-09 02:47:55,637] INFO Reading
configuration from: /etc/kafka/zookeeper.properties
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka | [main] INFO org.apache.zookeeper.ClientCnxn -
zookeeper.request.timeout value is 0. feature enabled=false
zookeeper | [2022-12-09 02:47:55,652] INFO
clientPortAddress is 0.0.0.0:2191
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,660] INFO
secureClientPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,663] INFO
observerMasterPort is not set
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,663] INFO
metricsProvider.className is
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper | [2022-12-09 02:47:55,664] INFO Starting
server (org.apache.zookeeper.server.ZooKeeperServerMain)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.20.0.3:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] WARN
org.apache.zookeeper.ClientCnxn - Session 0x0 for sever
zookeeper/172.20.0.3:2191, Closing socket connection.
Attempting reconnect except it is a SessionExpiredException.
kafka | java.net.ConnectException: Connection refused
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native
Method)
kafka | at
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketC
hannelImpl.java:777)
kafka | at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientC
nxnSocketNIO.java:344)
kafka | at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.j
ava:1290)
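
Note: the same pattern on the Kafka side — its preflight ZooKeeper client gets
Connection refused because zookeeper:2191 isn't listening yet, closes the
socket, and reconnects; the exception is startup noise, not a failure. Once
ZooKeeper is up it can be poked with the bundled shell (a sketch, assuming the
cp-zookeeper image keeps zookeeper-shell on the PATH):

    docker exec zookeeper zookeeper-shell localhost:2191 ls /
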
zookeeper | [2022-12-09 02:47:55,846] INFO
ServerMetrics initialized with provider
org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@770
d3326 (org.apache.zookeeper.server.ServerMetrics)
zookeeper | [2022-12-09 02:47:55,867] INFO
zookeeper.snapshot.trust.empty : false
(org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2022-12-09 02:47:55,931] INFO (ASCII art banner: "ZooKeeper")
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,936] INFO Server
environment:zookeeper.version=3.6.3--
6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on
04/08/2021 16:35 GMT
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,936] INFO Server
environment:host.name=zookeeper
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,936] INFO Server
environment:java.version=11.0.14.1
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,936] INFO Server
environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,936] INFO Server
environment:java.home=/usr/lib/jvm/zulu11-ca
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:java.class.path=/usr/bin/../share/java/kafka/metric
s-core-2.2.0.jar:/usr/bin/../share/java/kafka/jersey-server-
2.34.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/
usr/bin/../share/java/kafka/rocksdbjni-6.22.1.1.jar:/usr/bin/../
share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/
java/kafka/minimal-json-0.9.5.jar:/usr/bin/../share/java/kafka/
hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-
dataformat-csv-2.12.3.jar:/usr/bin/../share/java/kafka/kafka-
log4j-appender-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
kafka_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/
connect-mirror-client-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jackson-databind-2.12.3.jar:/usr/bin/../share/java/kafka/snappy-
java-1.1.8.4.jar:/usr/bin/../share/java/kafka/jopt-simple-
5.0.4.jar:/usr/bin/../share/java/kafka/jetty-util-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-
streams-scala_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/
jersey-hk2-2.34.jar:/usr/bin/../share/java/kafka/audience-
annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka-streams-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/logredactor-metrics-
1.0.8.jar:/usr/bin/../share/java/kafka/connect-runtime-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/re2j-1.6.jar:/usr/bin/../share/
java/kafka/jackson-module-scala_2.13-2.12.3.jar:/usr/bin/../
share/java/kafka/scala-logging_2.13-3.9.3.jar:/usr/bin/../share/
java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/
kafka/zstd-jni-1.5.0-4.jar:/usr/bin/../share/java/kafka/logredactor-
1.0.8.jar:/usr/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/usr/
bin/../share/java/kafka/connect-json-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/kafka-raft-7.1.1-ccs.jar:/usr/bin/../share/java/
kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jline-
3.12.1.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/
bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/usr/bin/../share/
java/kafka/maven-artifact-3.8.1.jar:/usr/bin/../share/java/kafka/
netty-transport-4.1.73.Final.jar:/usr/bin/../share/java/kafka/
javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/commons-
lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/usr/
bin/../share/java/kafka/connect-api-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/scala-collection-compat_2.13-2.4.4.jar:/usr/
bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/
java/kafka/kafka-streams-examples-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/javassist-3.27.0-GA.jar:/usr/bin/../share/java/
kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-
annotations-2.12.3.jar:/usr/bin/../share/java/kafka/connect-
basic-auth-extension-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/reflections-
0.9.12.jar:/usr/bin/../share/java/kafka/kafka-clients-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-
1.2.1.jar:/usr/bin/../share/java/kafka/jose4j-0.7.8.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.6.jar:/usr/bin/../share/java/
kafka/zookeeper-3.6.3.jar:/usr/bin/../share/java/kafka/jersey-
container-servlet-core-2.34.jar:/usr/bin/../share/java/kafka/
jersey-client-2.34.jar:/usr/bin/../share/java/kafka/kafka-
metadata-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/
connect-transforms-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-util-ajax-9.4.44.v20210927.jar:/usr/bin/../share/java/
kafka/kafka-tools-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-
server-common-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-
servlet-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-
transport-native-unix-common-4.1.73.Final.jar:/usr/bin/../share/
java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/jackson-
jaxrs-json-provider-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-
io-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jersey-
common-2.34.jar:/usr/bin/../share/java/kafka/scala-library-
2.13.6.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-
1.0.3.jar:/usr/bin/../share/java/kafka/netty-tcnative-classes-
2.0.46.Final.jar:/usr/bin/../share/java/kafka/jersey-container-
servlet-2.34.jar:/usr/bin/../share/java/kafka/scala-java8-
compat_2.13-1.0.0.jar:/usr/bin/../share/java/kafka/trogdor-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-
2.0.2.jar:/usr/bin/../share/java/kafka/confluent-log4j-1.2.17-
cp8.jar:/usr/bin/../share/java/kafka/netty-handler-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/
usr/bin/../share/java/kafka/netty-codec-4.1.73.Final.jar:/usr/
bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/
java/kafka/kafka-streams-test-utils-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/jetty-server-9.4.44.v20210927.jar:/usr/bin/../
share/java/kafka/zookeeper-jute-3.6.3.jar:/usr/bin/../share/java/
kafka/connect-mirror-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-client-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jackson-annotations-2.12.3.jar:/usr/bin/../share/java/kafka/
jackson-jaxrs-base-2.12.3.jar:/usr/bin/../share/java/kafka/
argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/netty-resolver-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-security-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-shell-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-native-
epoll-4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-common-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-servlets-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-storage-
api-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-http-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jackson-
datatype-jdk8-2.12.3.jar:/usr/bin/../share/java/kafka/jackson-
core-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/netty-
transport-classes-epoll-4.1.73.Final.jar:/usr/bin/../share/java/
confluent-telemetry/*
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:
/lib64:/lib:/usr/lib
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:java.compiler=<NA>
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:os.name=Linux
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:os.arch=amd64
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,937] INFO Server
environment:os.version=5.15.49-linuxkit
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO Server
environment:user.name=appuser
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO Server
environment:user.home=/home/appuser
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO Server
environment:user.dir=/home/appuser
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO Server
environment:os.memory.free=493MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO Server
environment:os.memory.max=512MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO Server
environment:os.memory.total=512MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,938] INFO
zookeeper.enableEagerACLCheck = false
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,939] INFO
zookeeper.digest.enabled = true
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,939] INFO
zookeeper.closeSessionTxn.enabled = true
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,939] INFO
zookeeper.flushDelay=0
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,939] INFO
zookeeper.maxWriteQueuePollTime=0
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,939] INFO
zookeeper.maxBatchSize=1000
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,940] INFO
zookeeper.intBufferStartingSizeBytes = 1024
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,945] INFO Weighed
connection throttling is disabled
(org.apache.zookeeper.server.BlueThrottle)
zookeeper | [2022-12-09 02:47:55,950] INFO
minSessionTimeout set to 4000
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,950] INFO
maxSessionTimeout set to 40000
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,953] INFO Response
cache size is initialized with value 400.
(org.apache.zookeeper.server.ResponseCache)
zookeeper | [2022-12-09 02:47:55,954] INFO Response
cache size is initialized with value 400.
(org.apache.zookeeper.server.ResponseCache)
zookeeper | [2022-12-09 02:47:55,957] INFO
zookeeper.pathStats.slotCapacity = 60
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 02:47:55,958] INFO
zookeeper.pathStats.slotDuration = 15
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 02:47:55,958] INFO
zookeeper.pathStats.maxDepth = 6
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 02:47:55,958] INFO
zookeeper.pathStats.initialDelay = 5
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 02:47:55,958] INFO
zookeeper.pathStats.delay = 5
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 02:47:55,958] INFO
zookeeper.pathStats.enabled = false
(org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper | [2022-12-09 02:47:55,968] INFO The max
bytes for all large requests are set to 104857600
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,969] INFO The large
request threshold is set to -1
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:55,969] INFO Created
server with tickTime 2000 minSessionTimeout 4000
maxSessionTimeout 40000 clientPortListenBacklog -1 datadir
/var/lib/zookeeper/log/version-2 snapdir
/var/lib/zookeeper/data/version-2
(org.apache.zookeeper.server.ZooKeeperServer)
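
[Editor's note] The derived timeouts above follow ZooKeeper's defaults: when minSessionTimeout and maxSessionTimeout are not set explicitly, they fall back to 2x and 20x tickTime. With tickTime 2000 that is exactly the 4000/40000 reported:

# tickTime-derived session-timeout bounds (ZooKeeper defaults):
tick=2000
echo "min=$((2 * tick)) max=$((20 * tick))"   # -> min=4000 max=40000
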
zookeeper | [2022-12-09 02:47:56,058] INFO Logging
initialized @6726ms to org.eclipse.jetty.util.log.Slf4jLog
(org.eclipse.jetty.util.log)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
zookeeper | [2022-12-09 02:47:56,707] WARN
o.e.j.s.ServletContextHandler@5a7fe64f{/,null,STOPPED}
contextPath ends with /*
(org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2022-12-09 02:47:56,708] WARN Empty
contextPath (org.eclipse.jetty.server.handler.ContextHandler)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.20.0.3:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] WARN
org.apache.zookeeper.ClientCnxn - Session 0x0 for server
zookeeper/172.20.0.3:2191, Closing socket connection.
Attempting reconnect except it is a SessionExpiredException.
kafka | java.net.ConnectException: Connection refused
kafka |   at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
kafka |   at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
kafka |   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
kafka |   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
zookeeper | [2022-12-09 02:47:57,042] INFO jetty-
9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git:
8da83308eeca865e495e53ef315a249d63ba9332; jvm
11.0.14.1+1-LTS (org.eclipse.jetty.server.Server)
zookeeper | [2022-12-09 02:47:57,329] INFO
DefaultSessionIdManager workerName=node0
(org.eclipse.jetty.server.session)
zookeeper | [2022-12-09 02:47:57,329] INFO No
SessionScavenger set, using defaults
(org.eclipse.jetty.server.session)
zookeeper | [2022-12-09 02:47:57,335] INFO node0
Scavenging every 600000ms (org.eclipse.jetty.server.session)
zookeeper | [2022-12-09 02:47:57,355] WARN
o.e.j.s.ServletContextHandler@5a7fe64f{/,null,
STARTING} has uncovered http methods for path: /*
(org.eclipse.jetty.security.SecurityHandler)
zookeeper | [2022-12-09 02:47:57,401] INFO Started
o.e.j.s.ServletContextHandler@5a7fe64f{/,null,AVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler)
zookeeper | [2022-12-09 02:47:57,499] INFO Started
ServerConnector@6f10d5b6{HTTP/1.1, (http/1.1)}
{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper | [2022-12-09 02:47:57,500] INFO Started
@8167ms (org.eclipse.jetty.server.Server)
zookeeper | [2022-12-09 02:47:57,500] INFO Started
AdminServer on address 0.0.0.0, port 8080 and command
URL /commands
(org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper | [2022-12-09 02:47:57,519] INFO Using
org.apache.zookeeper.server.NIOServerCnxnFactory as server
connection factory
(org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2022-12-09 02:47:57,523] WARN maxCnxns
is not configured, using default value 0.
(org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper | [2022-12-09 02:47:57,528] INFO Configuring
NIO connection handler with 10s sessionless connection
timeout, 1 selector thread(s), 8 worker threads, and 64 kB
direct buffers.
(org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper | [2022-12-09 02:47:57,532] INFO binding to
port 0.0.0.0/0.0.0.0:2191
(org.apache.zookeeper.server.NIOServerCnxnFactory)
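
[Editor's note] At this point the single-node ensemble is reachable on 2191, and its AdminServer was started on 8080 just above. Two quick probes (a sketch; assumes nc and curl are available, that srvr is on ZooKeeper's default four-letter-word whitelist, and that these ports are published to the host by the compose file, which is not shown here):

# Inside the container, via the client port:
docker exec zookeeper bash -c 'echo srvr | nc localhost 2191'
# Via the AdminServer's HTTP command interface:
curl -s http://localhost:8080/commands/stats
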
zookeeper | [2022-12-09 02:47:57,616] INFO Using
org.apache.zookeeper.server.watch.WatchManager as watch
manager
(org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2022-12-09 02:47:57,616] INFO Using
org.apache.zookeeper.server.watch.WatchManager as watch
manager
(org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper | [2022-12-09 02:47:57,622] INFO
zookeeper.snapshotSizeFactor = 0.33
(org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2022-12-09 02:47:57,622] INFO
zookeeper.commitLogCount=500
(org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2022-12-09 02:47:57,667] INFO
zookeeper.snapshot.compression.method = CHECKED
(org.apache.zookeeper.server.persistence.SnapStream)
zookeeper | [2022-12-09 02:47:57,668] INFO
Snapshotting: 0x0 to
/var/lib/zookeeper/data/version-2/snapshot.0
(org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2022-12-09 02:47:57,680] INFO Snapshot
loaded in 58 ms, highest zxid is 0x0, digest is 1371985504
(org.apache.zookeeper.server.ZKDatabase)
zookeeper | [2022-12-09 02:47:57,681] INFO
Snapshotting: 0x0 to
/var/lib/zookeeper/data/version-2/snapshot.0
(org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper | [2022-12-09 02:47:57,687] INFO Snapshot
taken in 5 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper | [2022-12-09 02:47:57,738] INFO
zookeeper.request_throttler.shutdownTimeout = 10000
(org.apache.zookeeper.server.RequestThrottler)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
zookeeper | [2022-12-09 02:47:57,741] INFO
PrepRequestProcessor (sid:0) started, reconfigEnabled=false
(org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper | [2022-12-09 02:47:57,876] INFO Using
checkIntervalMs=60000 maxPerMinute=10000
maxNeverUsedIntervalMs=0
(org.apache.zookeeper.server.ContainerManager)
zookeeper | [2022-12-09 02:47:57,879] INFO ZooKeeper
audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Opening socket connection
to server zookeeper/172.20.0.3:2191.
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - SASL config status: Will not
attempt to authenticate using SASL (unknown error)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Socket connection
established, initiating session, client: /172.20.0.5:54290, server:
zookeeper/172.20.0.3:2191
zookeeper | [2022-12-09 02:47:58,129] INFO Creating
new log file: log.1
(org.apache.zookeeper.server.persistence.FileTxnLog)
kafka | [main-SendThread(zookeeper:2191)] INFO
org.apache.zookeeper.ClientCnxn - Session establishment
complete on server zookeeper/172.20.0.3:2191, session id =
0x100006f5f460000, negotiated timeout = 40000
kafka | [main-SendThread(zookeeper:2191)] WARN
org.apache.zookeeper.ClientCnxn - An exception was thrown
while closing send thread for session 0x100006f5f460000.
kafka | EndOfStreamException: Unable to read additional data from server sessionid 0x100006f5f460000, likely server has closed socket
kafka |   at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
kafka |   at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
kafka |   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
kafka | [main] INFO org.apache.zookeeper.ZooKeeper -
Session: 0x100006f5f460000 closed
kafka | [main-EventThread] INFO
org.apache.zookeeper.ClientCnxn - EventThread shut down for
session: 0x100006f5f460000
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:01,259Z", "level": "WARN", "component":
"o.e.b.JNANatives", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "unable to install
syscall filter: ",
elasticsearch | "stacktrace":
["java.lang.UnsupportedOperationException: seccomp
unavailable: CONFIG_SECCOMP not compiled into kernel,
CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed",
elasticsearch | "at
org.elasticsearch.bootstrap.SystemCallFilter.linuxImpl(SystemC
allFilter.java:342) ~[elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.SystemCallFilter.init(SystemCallFilte
r.java:617) ~[elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.JNANatives.tryInstallSystemCallFilte
r(JNANatives.java:260) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Natives.tryInstallSystemCallFilter(N
atives.java:113) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstra
p.java:116) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178
) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393)
[elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:
170) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.
java:161) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.cli.EnvironmentAwareCommand.execute(Envir
onmentAwareCommand.java:86) [elasticsearch-
7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.cli.Command.mainWithoutErrorHandling(Com
mand.java:127) [elasticsearch-cli-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.cli.Command.main(Command.java:90)
[elasticsearch-cli-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.jav
a:126) [elasticsearch-7.10.2.jar:7.10.2]",
elasticsearch | "at
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.jav
a:92) [elasticsearch-7.10.2.jar:7.10.2]"] }
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:03,683Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "version[7.10.2], pid[7],
build[default/docker/747e1cc71def077253878a59143c1f785afa
92b9/2021-01-13T00:42:12.435326Z], OS[Linux/5.15.49-
linuxkit/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server
VM/15.0.1/15.0.1+9]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:03,708Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "JVM home
[/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:03,709Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "JVM arguments [-Xshare:auto, -
Des.networkaddress.cache.ttl=60, -
Des.networkaddress.cache.negative.ttl=10, -XX:
+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -
Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-
OmitStackTraceInFastThrow, -XX:
+ShowCodeDetailsInExceptionMessages, -
Dio.netty.noUnsafe=true, -
Dio.netty.noKeySetOptimization=true, -
Dio.netty.recycler.maxCapacityPerThread=0, -
Dio.netty.allocator.numDirectArenas=0, -
Dlog4j.shutdownHookEnabled=false, -
Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT,
-Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -
XX:InitiatingHeapOccupancyPercent=30,
-Djava.io.tmpdir=/tmp/elasticsearch-16581964568158089264, -
XX:+HeapDumpOnOutOfMemoryError, -
XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -
Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,t
ags:filecount=32,filesize=64m, -
Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx1092m, -
XX:MaxDirectMemorySize=572522496,
-Des.path.home=/usr/share/elasticsearch,
-Des.path.conf=/usr/share/elasticsearch/config, -
Des.distribution.flavor=default, -Des.distribution.type=docker, -
Des.bundled_jdk=true]" }
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 02:48:04,337] INFO Registered
kafka:type=kafka.Log4jController MBean
(kafka.utils.Log4jControllerRegistration$)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 02:48:06,251] INFO Setting -Djdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2022-12-09 02:48:06,928] INFO Registered
signal handlers for TERM, INT, HUP
(org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2022-12-09 02:48:06,942] INFO starting
(kafka.server.KafkaServer)
kafka | [2022-12-09 02:48:06,948] INFO Connecting to
zookeeper on zookeeper:2191 (kafka.server.KafkaServer)
kafka | [2022-12-09 02:48:07,027] INFO
[ZooKeeperClient Kafka server] Initializing a new session to
zookeeper:2191. (kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 02:48:07,065] INFO Client
environment:zookeeper.version=3.6.3--
6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on
04/08/2021 16:35 GMT (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,065] INFO Client
environment:host.name=kafka
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,067] INFO Client
environment:java.version=11.0.14.1
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,067] INFO Client
environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,067] INFO Client
environment:java.home=/usr/lib/jvm/zulu11-ca
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,067] INFO Client
environment:java.class.path=/usr/bin/../share/java/kafka/metric
s-core-2.2.0.jar:/usr/bin/../share/java/kafka/jersey-server-
2.34.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/
usr/bin/../share/java/kafka/rocksdbjni-6.22.1.1.jar:/usr/bin/../
share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/
java/kafka/minimal-json-0.9.5.jar:/usr/bin/../share/java/kafka/
hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-
dataformat-csv-2.12.3.jar:/usr/bin/../share/java/kafka/kafka-
log4j-appender-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
kafka_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/
connect-mirror-client-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jackson-databind-2.12.3.jar:/usr/bin/../share/java/kafka/snappy-
java-1.1.8.4.jar:/usr/bin/../share/java/kafka/jopt-simple-
5.0.4.jar:/usr/bin/../share/java/kafka/jetty-util-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-
streams-scala_2.13-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/
jersey-hk2-2.34.jar:/usr/bin/../share/java/kafka/audience-
annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka-streams-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/logredactor-metrics-
1.0.8.jar:/usr/bin/../share/java/kafka/connect-runtime-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/re2j-1.6.jar:/usr/bin/../share/
java/kafka/jackson-module-scala_2.13-2.12.3.jar:/usr/bin/../
share/java/kafka/scala-logging_2.13-3.9.3.jar:/usr/bin/../share/
java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/
kafka/zstd-jni-1.5.0-4.jar:/usr/bin/../share/java/kafka/logredactor-
1.0.8.jar:/usr/bin/../share/java/kafka/plexus-utils-3.2.1.jar:/usr/
bin/../share/java/kafka/connect-json-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/kafka-raft-7.1.1-ccs.jar:/usr/bin/../share/java/
kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jline-
3.12.1.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/
bin/../share/java/kafka/slf4j-log4j12-1.7.30.jar:/usr/bin/../share/
java/kafka/maven-artifact-3.8.1.jar:/usr/bin/../share/java/kafka/
netty-transport-4.1.73.Final.jar:/usr/bin/../share/java/kafka/
javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/commons-
lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.30.jar:/usr/
bin/../share/java/kafka/connect-api-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/scala-collection-compat_2.13-2.4.4.jar:/usr/
bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/
java/kafka/kafka-streams-examples-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/javassist-3.27.0-GA.jar:/usr/bin/../share/java/
kafka/kafka.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-
annotations-2.12.3.jar:/usr/bin/../share/java/kafka/connect-
basic-auth-extension-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/reflections-
0.9.12.jar:/usr/bin/../share/java/kafka/kafka-clients-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-
1.2.1.jar:/usr/bin/../share/java/kafka/jose4j-0.7.8.jar:/usr/bin/../
share/java/kafka/scala-reflect-2.13.6.jar:/usr/bin/../share/java/
kafka/zookeeper-3.6.3.jar:/usr/bin/../share/java/kafka/jersey-
container-servlet-core-2.34.jar:/usr/bin/../share/java/kafka/
jersey-client-2.34.jar:/usr/bin/../share/java/kafka/kafka-
metadata-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jakarta.xml.bind-api-2.3.2.jar:/usr/bin/../share/java/kafka/
connect-transforms-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-util-ajax-9.4.44.v20210927.jar:/usr/bin/../share/java/
kafka/kafka-tools-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-
server-common-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-
servlet-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/netty-
transport-native-unix-common-4.1.73.Final.jar:/usr/bin/../share/
java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/jackson-
jaxrs-json-provider-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-
io-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jersey-
common-2.34.jar:/usr/bin/../share/java/kafka/scala-library-
2.13.6.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-
1.0.3.jar:/usr/bin/../share/java/kafka/netty-tcnative-classes-
2.0.46.Final.jar:/usr/bin/../share/java/kafka/jersey-container-
servlet-2.34.jar:/usr/bin/../share/java/kafka/scala-java8-
compat_2.13-1.0.0.jar:/usr/bin/../share/java/kafka/trogdor-7.1.1-
ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-
2.0.2.jar:/usr/bin/../share/java/kafka/confluent-log4j-1.2.17-
cp8.jar:/usr/bin/../share/java/kafka/netty-handler-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/
usr/bin/../share/java/kafka/netty-codec-4.1.73.Final.jar:/usr/
bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/
java/kafka/kafka-streams-test-utils-7.1.1-ccs.jar:/usr/bin/../
share/java/kafka/jetty-server-9.4.44.v20210927.jar:/usr/bin/../
share/java/kafka/zookeeper-jute-3.6.3.jar:/usr/bin/../share/java/
kafka/connect-mirror-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/
jetty-client-9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/
jackson-annotations-2.12.3.jar:/usr/bin/../share/java/kafka/
jackson-jaxrs-base-2.12.3.jar:/usr/bin/../share/java/kafka/
argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/netty-resolver-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-buffer-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-security-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-shell-
7.1.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-native-
epoll-4.1.73.Final.jar:/usr/bin/../share/java/kafka/netty-common-
4.1.73.Final.jar:/usr/bin/../share/java/kafka/jetty-servlets-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/kafka-storage-
api-7.1.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-http-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/jackson-
datatype-jdk8-2.12.3.jar:/usr/bin/../share/java/kafka/jackson-
core-2.12.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-
9.4.44.v20210927.jar:/usr/bin/../share/java/kafka/netty-
transport-classes-epoll-4.1.73.Final.jar:/usr/bin/../share/java/
confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,067] INFO Client
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:
/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,067] INFO Client
environment:java.io.tmpdir=/tmp
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,068] INFO Client
environment:java.compiler=<NA>
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,068] INFO Client
environment:os.name=Linux
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,068] INFO Client
environment:os.arch=amd64
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,069] INFO Client
environment:os.version=5.15.49-linuxkit
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,069] INFO Client
environment:user.name=appuser
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,069] INFO Client
environment:user.home=/home/appuser
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,069] INFO Client
environment:user.dir=/home/appuser
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,069] INFO Client
environment:os.memory.free=1010MB
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,069] INFO Client
environment:os.memory.max=1024MB
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,070] INFO Client
environment:os.memory.total=1024MB
(org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,078] INFO Initiating client connection, connectString=zookeeper:2191 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@4b41e4dd (org.apache.zookeeper.ZooKeeper)
kafka | [2022-12-09 02:48:07,127] INFO
jute.maxbuffer value is 4194304 Bytes
(org.apache.zookeeper.ClientCnxnSocket)
kafka | [2022-12-09 02:48:07,155] INFO
zookeeper.request.timeout value is 0. feature enabled=false
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 02:48:07,176] INFO
[ZooKeeperClient Kafka server] Waiting until connected.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 02:48:07,225] INFO Opening
socket connection to server zookeeper/172.20.0.3:2191.
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 02:48:07,226] INFO SASL config
status: Will not attempt to authenticate using SASL (unknown
error) (org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 02:48:07,258] INFO Socket
connection established, initiating session, client:
/172.20.0.5:50074, server: zookeeper/172.20.0.3:2191
(org.apache.zookeeper.ClientCnxn)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 02:48:07,294] INFO Session
establishment complete on server zookeeper/172.20.0.3:2191,
session id = 0x100006f5f460001, negotiated timeout = 18000
(org.apache.zookeeper.ClientCnxn)
kafka | [2022-12-09 02:48:07,314] INFO
[ZooKeeperClient Kafka server] Connected.
(kafka.zookeeper.ZooKeeperClient)
kafka | [2022-12-09 02:48:07,698] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2022-12-09 02:48:07,763] INFO Feature ZK
node at path: /feature does not exist
(kafka.server.FinalizedFeatureChangeListener)
kafka | [2022-12-09 02:48:07,770] INFO Cleared cache
(kafka.server.FinalizedFeatureCache)
schema | [kafka-admin-client-thread | adminclient-1]
INFO org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1]
WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=adminclient-1] Connection to node -1
(kafka-local/172.20.0.5:9095) could not be established. Broker
may not be available.
kafka | [2022-12-09 02:48:08,536] INFO Cluster ID =
1i0gWgdkSlq2grYfKIOGfw (kafka.server.KafkaServer)
kafka | [2022-12-09 02:48:08,565] WARN No
meta.properties file under dir
/var/lib/kafka/data/meta.properties
(kafka.server.BrokerMetadataCheckpoint)
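
[Editor's note] The missing meta.properties is expected on a first boot with a fresh data volume: the broker obtains the cluster ID (1i0gWgdkSlq2grYfKIOGfw above) and writes the checkpoint file during startup. After startup completes it can be inspected (a sketch, using the log.dirs path from the config dump below):

docker exec kafka cat /var/lib/kafka/data/meta.properties
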
kafka | [2022-12-09 02:48:08,835] INFO KafkaConfig
values:
kafka | advertised.listeners =
LISTENER://localhost:9092, LISTENER_HOST://kafka-local:9095
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num
= 11
kafka |
alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms =
100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms =
5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms
= 1000
kafka | controller.quorum.election.timeout.ms =
1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms =
2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms =
3600000
kafka | delegation.token.expiry.time.ms =
86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms =
604800000
kafka | delegation.token.secret.key = null
kafka |
delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests =
1000
kafka | group.initial.rebalance.delay.ms = 0
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms =
60000
kafka | inter.broker.listener.name = LISTENER
kafka | inter.broker.protocol.version = 3.1-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds =
300
kafka | leader.imbalance.per.broker.percentage =
10
kafka | listener.security.protocol.map =
LISTENER:PLAINTEXT, LISTENER_HOST:PLAINTEXT
kafka | listeners = LISTENER://0.0.0.0:9092,
LISTENER_HOST://0.0.0.0:9095
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second =
1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms =
9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages =
9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms =
60000
kafka | log.flush.scheduler.interval.ms =
9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms
= 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms
= 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate =
2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots
= 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka |
metadata.log.max.record.bytes.between.snapshots =
20971520
kafka | metadata.log.segment.bytes =
1073741824
kafka | metadata.log.segment.min.bytes =
8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.retention.bytes = -1
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms =
600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm =
AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm =
null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.purgatory.purge.interval.requests
= 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes
= 1073741824
kafka | remote.log.manager.task.interval.ms =
30000
kafka |
remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms =
500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name
= null
kafka | remote.log.metadata.manager.class.path =
null
kafka | remote.log.metadata.manager.impl.prefix =
null
kafka |
remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name =
null
kafka | remote.log.storage.manager.class.path =
null
kafka | remote.log.storage.manager.impl.prefix =
null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes =
10485760
kafka | replica.fetch.wait.max.ms = 500
kafka |
replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin =
60000
kafka | sasl.kerberos.principal.to.local.rules =
[DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor =
0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol =
GSSAPI
kafka | sasl.mechanism.inter.broker.protocol =
GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
kafka |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
kafka |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms =
30000
kafka | socket.connection.setup.timeout.ms =
10000
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka |
transaction.abort.timed.out.transaction.cleanup.interval.ms
= 10000
kafka | transaction.max.timeout.ms = 900000
kafka |
transaction.remove.expired.transaction.cleanup.interval.ms
= 3600000
kafka | transaction.state.log.load.buffer.size =
5242880
kafka | transaction.state.log.min.isr = 1
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 1
kafka | transaction.state.log.segment.bytes =
104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2191
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka |
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | zookeeper.sync.time.ms = 2000
kafka | (kafka.server.KafkaConfig)
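
[Editor's note] The dump is worth decoding once: the broker exposes two PLAINTEXT listeners, LISTENER on 9092 advertised as localhost:9092 (for clients on the host) and LISTENER_HOST on 9095 advertised as kafka-local:9095 (for sibling containers on the compose network, which is why schema dials kafka-local:9095 above); inter-broker traffic uses LISTENER. In the Confluent cp-kafka image this is normally driven by KAFKA_* environment variables; a reconstructed sketch, since the actual compose file is not shown here, so treat names and placement as illustrative:

cat <<'YAML'
# Hypothetical cp-kafka environment reproducing the values above:
  kafka:
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2191
      KAFKA_LISTENERS: LISTENER://0.0.0.0:9092,LISTENER_HOST://0.0.0.0:9095
      KAFKA_ADVERTISED_LISTENERS: LISTENER://localhost:9092,LISTENER_HOST://kafka-local:9095
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER:PLAINTEXT,LISTENER_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER
YAML

With this shape, host applications would bootstrap against localhost:9092 and containers against kafka-local:9095. The dump repeats immediately below because Kafka logs its effective KafkaConfig more than once during startup.
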
kafka | [2022-12-09 02:48:08,872] INFO KafkaConfig
values:
kafka | advertised.listeners =
LISTENER://localhost:9092, LISTENER_HOST://kafka-local:9095
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num
= 11
kafka |
alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms =
100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms =
5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms
= 1000
kafka | controller.quorum.election.timeout.ms =
1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms =
2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms =
3600000
kafka | delegation.token.expiry.time.ms =
86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms =
604800000
kafka | delegation.token.secret.key = null
kafka |
delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests =
1000
kafka | group.initial.rebalance.delay.ms = 0
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = LISTENER
kafka | inter.broker.protocol.version = 3.1-IV0
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = LISTENER:PLAINTEXT, LISTENER_HOST:PLAINTEXT
kafka | listeners = LISTENER://0.0.0.0:9092,LISTENER_HOST://0.0.0.0:9095
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.retention.bytes = -1
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 1
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 1
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2191
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | zookeeper.sync.time.ms = 2000
kafka | (kafka.server.KafkaConfig)
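
Note: the dump above is the broker's effective configuration, and the two named listeners are the key to this local setup: LISTENER on 9092 is for clients on the host, while LISTENER_HOST on 9095 is what the other containers (schema, kafka-ui) reach over the happymoney-hps-local network as kafka-local:9095. As a rough sketch, the same settings could be produced with a plain docker run against the confluentinc/cp-kafka image (image tag inferred from the 7.1.1-ccs version string logged further down; the actual service definition lives in ./app/docker-compose.yml):

# Sketch only; the real service is defined in ./app/docker-compose.yml.
# KAFKA_BROKER_ID is assumed from node.id = 1 in the dump above.
docker run -d --name kafka --network happymoney-hps-local -p 9092:9092 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2191 \
  -e KAFKA_LISTENERS=LISTENER://0.0.0.0:9092,LISTENER_HOST://0.0.0.0:9095 \
  -e KAFKA_ADVERTISED_LISTENERS=LISTENER://localhost:9092,LISTENER_HOST://kafka-local:9095 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER:PLAINTEXT,LISTENER_HOST:PLAINTEXT \
  -e KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:7.1.1
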
kafka | [2022-12-09 02:48:09,133] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:48:09,148] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:48:09,159] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2022-12-09 02:48:09,192] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
schema | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-local/172.20.0.5:9095) could not be established. Broker may not be available.
kafka | [2022-12-09 02:48:09,415] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2022-12-09 02:48:09,437] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2022-12-09 02:48:09,457] INFO Loaded 0 logs in 41ms. (kafka.log.LogManager)
kafka | [2022-12-09 02:48:09,467] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2022-12-09 02:48:09,478] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2022-12-09 02:48:09,535] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2022-12-09 02:48:09,756] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
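
Note: partition data lives in /var/lib/kafka/data (log.dirs above); "no clean shutdown file was found" just means the previous container exit was not graceful, so every log is rescanned at startup (here: 0 logs in 41ms). To see what accumulates on that volume once topics exist, a quick look from the host (container name taken from the compose output):

docker exec kafka ls -la /var/lib/kafka/data
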
schema | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-local/172.20.0.5:9095) could not be established. Broker may not be available.
kafka-ui | 2022-12-09 02:48:10,905 INFO [main] o.s.d.r.c.RepositoryConfigurationDelegate: Bootstrapping Spring Data LDAP repositories in DEFAULT mode.
kafka | [2022-12-09 02:48:11,085] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka-ui | 2022-12-09 02:48:11,244 INFO [main] o.s.d.r.c.RepositoryConfigurationDelegate: Finished Spring Data repository scanning in 303 ms. Found 0 LDAP repository interfaces.
schema | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-local/172.20.0.5:9095) could not be established. Broker may not be available.
schema | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-local/172.20.0.5:9095) could not be established. Broker may not be available.
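
Note: the repeating NetworkClient warnings from the schema container are expected during startup. Schema Registry comes up faster than the broker and keeps retrying kafka-local:9095 until Kafka binds its sockets a few lines further down, at which point the warnings stop. Once things settle, broker reachability can be checked by hand with one of the CLIs bundled in the broker image, for example:

docker exec kafka kafka-broker-api-versions --bootstrap-server localhost:9092
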
kafka | [2022-12-09 02:48:13,051] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2022-12-09 02:48:13,075] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
schema | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
schema | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-local/172.20.0.5:9095) could not be established. Broker may not be available.
kafka | [2022-12-09 02:48:13,266] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(LISTENER) (kafka.network.SocketServer)
kafka | [2022-12-09 02:48:13,285] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2022-12-09 02:48:13,286] INFO Awaiting socket connections on 0.0.0.0:9095. (kafka.network.Acceptor)
kafka | [2022-12-09 02:48:13,430] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(LISTENER_HOST) (kafka.network.SocketServer)
kafka | [2022-12-09 02:48:13,539] INFO [BrokerToControllerChannelManager broker=1 name=alterIsr]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:48:13,667] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:13,685] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:13,721] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:13,726] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:13,868] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2022-12-09 02:48:14,204] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2022-12-09 02:48:14,340] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1670554094277,1670554094277,1,0,0,72058072377720833,263,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2022-12-09 02:48:14,367] INFO Registered broker 1 at path /brokers/ids/1 with addresses: LISTENER://localhost:9092,LISTENER_HOST://kafka-local:9095, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
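
Note: this znode is how the rest of the stack discovers the broker, and each address it carries matches one advertised listener. It can be read back with the zookeeper-shell utility shipped in the Confluent ZooKeeper image; the 2191 client port comes from zookeeper.connect in the config dump above, and the container name zookeeper is taken from the compose output:

docker exec zookeeper zookeeper-shell localhost:2191 get /brokers/ids/1
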
kafka | [2022-12-09 02:48:14,796] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2022-12-09 02:48:14,852] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:14,924] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:14,925] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2022-12-09 02:48:14,981] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:14,981] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:15,074] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(Enabled,Features{}) (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,089] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2022-12-09 02:48:15,224] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:15,284] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:15,554] INFO Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
kafka | [2022-12-09 02:48:15,554] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,604] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,628] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2022-12-09 02:48:15,639] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,658] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2022-12-09 02:48:15,668] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,697] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2022-12-09 02:48:15,876] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,934] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:15,972] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2022-12-09 02:48:16,064] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2022-12-09 02:48:16,132] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,146] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,149] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,154] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,167] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2022-12-09 02:48:16,186] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,190] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,198] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,200] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2022-12-09 02:48:16,219] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,275] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:48:16,355] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2022-12-09 02:48:16,398] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 02:48:16,417] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 02:48:16,445] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Starting socket server acceptors and processors (kafka.network.SocketServer)
kafka | [2022-12-09 02:48:16,471] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 02:48:16,480] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2022-12-09 02:48:16,482] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 02:48:16,488] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 02:48:16,494] INFO [RequestSendThread controllerId=1] Controller 1 connected to localhost:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2022-12-09 02:48:16,516] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2022-12-09 02:48:16,517] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,568] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER) (kafka.network.SocketServer)
kafka | [2022-12-09 02:48:16,616] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_HOST) (kafka.network.SocketServer)
kafka | [2022-12-09 02:48:16,626] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,661] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,633] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka | [2022-12-09 02:48:16,672] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,736] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,761] INFO Kafka version: 7.1.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 02:48:16,761] INFO Kafka commitId: 947fac5beb61836d (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 02:48:16,761] INFO Kafka startTimeMs: 1670554096661 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2022-12-09 02:48:16,824] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:16,878] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
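
Note: "KafkaServer id=1 started" marks the broker as fully up, with the single-node controller election and both listeners done. From a second terminal, the state of all the containers can be confirmed against the same compose file the stack was launched with:

docker-compose -f ./app/docker-compose.yml ps
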
kafka | [2022-12-09 02:48:17,086] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:17,268] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:48:17,269] INFO [BrokerToControllerChannelManager broker=1 name=alterIsr]: Recorded new controller, from now on will use broker localhost:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2022-12-09 02:48:17,324] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker localhost:9092 (id: 1 rack: null) (state.change.logger)
schema | ===> Launching ...
kafka-ui | 2022-12-09 02:48:18,602 INFO [main] c.p.k.u.s.DeserializationService: Using SimpleRecordSerDe for cluster 'hiveLocal'
schema | ===> Launching schema-registry ...
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,265Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[aggs-matrix-stats]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,350Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[analysis-common]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,370Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[constant-keyword]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,372Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[flattened]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,373Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[frozen-indices]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,373Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[ingest-common]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,373Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[ingest-geoip]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,379Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[ingest-user-agent]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,379Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[kibana]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,379Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[lang-expression]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,380Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[lang-mustache]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,380Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[lang-painless]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,380Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[mapper-extras]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,381Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[mapper-version]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,381Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[parent-join]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,381Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[percolator]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,382Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[rank-eval]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,395Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[reindex]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,412Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[repositories-metering-api]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,412Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[repository-url]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,413Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[search-business-rules]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,413Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[searchable-snapshots]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,414Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[spatial]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,414Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[transform]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,414Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[transport-netty4]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,415Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[unsigned-long]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,415Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[vectors]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,415Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module
[wildcard]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,425Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-analytics]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,446Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-async]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,449Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-async-search]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,454Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-autoscaling]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,458Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-ccr]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,458Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-core]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,467Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-data-streams]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,467Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-deprecation]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,468Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-enrich]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,469Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-eql]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,470Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-graph]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,471Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-identity-provider]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,471Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-ilm]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,473Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-logstash]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,475Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-ml]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,475Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-monitoring]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,475Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-ql]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,476Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-rollup]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,477Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-security]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,479Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-sql]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,480Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-stack]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,480Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-voting-only-node]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,480Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "loaded module [x-
pack-watcher]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:21,508Z", "level": "INFO", "component":
"o.e.p.PluginsService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "no plugins
loaded" }
kafka | [2022-12-09 02:48:22,100] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:22,107] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:22,125Z", "level": "INFO", "component":
"o.e.e.NodeEnvironment", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "using [1] data
paths, mounts [[/ (overlay)]], net usable_space [50.1gb], net
total_space [58.3gb], types [overlay]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:22,142Z", "level": "INFO", "component":
"o.e.e.NodeEnvironment", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "heap size [1gb],
compressed ordinary object pointers [true]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:22,608Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "node name [4921ed443d90],
node ID [p6XVXI47QGCi1EGg95j87Q], cluster name [docker-
cluster], roles [transform, master, remote_cluster_client, data,
ml, data_content, data_hot, data_warm, data_cold, ingest]" }
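
Note: Elasticsearch has loaded only its standard modules (no extra plugins), sees a single overlay data path, and runs with a 1gb heap. Assuming the compose file publishes the usual REST port 9200 to the host (the mapping is not visible in this log), a one-line health probe would be:

curl -s 'http://localhost:9200/_cat/health?v'
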
schema | [2022-12-09 02:48:25,004] INFO SchemaRegistryConfig values:
schema | access.control.allow.headers =
schema | access.control.allow.methods =
schema | access.control.allow.origin =
schema | access.control.skip.options = true
schema | authentication.method = NONE
schema | authentication.realm =
schema | authentication.roles = [*]
schema | authentication.skip.paths = []
schema | avro.compatibility.level =
schema | compression.enable = true
schema | csrf.prevention.enable = false
schema | csrf.prevention.token.endpoint = /csrf
schema | csrf.prevention.token.expiration.minutes = 30
schema | csrf.prevention.token.max.entries = 10000
schema | debug = false
schema | dos.filter.delay.ms = 100
schema | dos.filter.enabled = false
schema | dos.filter.insert.headers = true
schema | dos.filter.ip.whitelist = []
schema | dos.filter.managed.attr = false
schema | dos.filter.max.idle.tracker.ms = 30000
schema | dos.filter.max.requests.ms = 30000
schema | dos.filter.max.requests.per.sec = 25
schema | dos.filter.max.wait.ms = 50
schema | dos.filter.remote.port = false
schema | dos.filter.throttle.ms = 30000
schema | dos.filter.throttled.requests = 5
schema | dos.filter.track.global = false
schema | host.name = schema
schema | http2.enabled = true
schema | idle.timeout.ms = 30000
schema | inter.instance.headers.whitelist = []
schema | inter.instance.protocol = http
schema | kafkastore.bootstrap.servers = [kafka-local:9095]
schema | kafkastore.checkpoint.dir = /tmp
schema | kafkastore.checkpoint.version = 0
schema | kafkastore.connection.url =
schema | kafkastore.group.id =
schema | kafkastore.init.timeout.ms = 60000
schema | kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | kafkastore.sasl.kerberos.min.time.before.relogin = 60000
schema | kafkastore.sasl.kerberos.service.name =
schema | kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
schema | kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
schema | kafkastore.sasl.mechanism = GSSAPI
schema | kafkastore.security.protocol = PLAINTEXT
schema | kafkastore.ssl.cipher.suites =
schema | kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
schema | kafkastore.ssl.endpoint.identification.algorithm =
schema | kafkastore.ssl.key.password = [hidden]
schema | kafkastore.ssl.keymanager.algorithm = SunX509
schema | kafkastore.ssl.keystore.location =
schema | kafkastore.ssl.keystore.password = [hidden]
schema | kafkastore.ssl.keystore.type = JKS
schema | kafkastore.ssl.protocol = TLS
schema | kafkastore.ssl.provider =
schema | kafkastore.ssl.trustmanager.algorithm = PKIX
schema | kafkastore.ssl.truststore.location =
schema | kafkastore.ssl.truststore.password = [hidden]
schema | kafkastore.ssl.truststore.type = JKS
schema | kafkastore.timeout.ms = 500
schema | kafkastore.topic = _schemas
schema | kafkastore.topic.replication.factor = 3
schema | kafkastore.topic.skip.validation = false
schema | kafkastore.update.handlers = []
schema | kafkastore.write.max.retries = 5
schema | leader.eligibility = true
schema | listener.protocol.map = []
schema | listeners = [http://schema:9091]
schema | master.eligibility = null
schema | metric.reporters = []
schema | metrics.jmx.prefix = kafka.schema.registry
schema | metrics.num.samples = 2
schema | metrics.sample.window.ms = 30000
schema | metrics.tag.map = []
schema | mode.mutability = true
schema | nosniff.prevention.enable = false
schema | port = 8081
schema | proxy.protocol.enabled = false
schema | reject.options.request = false
schema | request.logger.name = io.confluent.rest-utils.requests
schema | request.queue.capacity = 2147483647
schema | request.queue.capacity.growby = 64
schema | request.queue.capacity.init = 128
schema | resource.extension.class = []
schema | resource.extension.classes = []
schema | resource.static.locations = []
schema | response.http.headers.config =
schema | response.mediatype.default = application/vnd.schemaregistry.v1+json
schema | response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
schema | rest.servlet.initializor.classes = []
schema | schema.cache.expiry.secs = 300
schema | schema.cache.size = 1000
schema | schema.canonicalize.on.consume = []
schema | schema.compatibility.level = backward
schema | schema.providers = []
schema | schema.registry.group.id = schema-registry
schema | schema.registry.inter.instance.protocol =
schema | schema.registry.resource.extension.class = []
schema | shutdown.graceful.ms = 1000
schema | ssl.cipher.suites = []
schema | ssl.client.auth = false
schema | ssl.client.authentication = NONE
schema | ssl.enabled.protocols = []
schema | ssl.endpoint.identification.algorithm = null
schema | ssl.key.password = [hidden]
schema | ssl.keymanager.algorithm =
schema | ssl.keystore.location =
schema | ssl.keystore.password = [hidden]
schema | ssl.keystore.reload = false
schema | ssl.keystore.type = JKS
schema | ssl.keystore.watch.location =
schema | ssl.protocol = TLS
schema | ssl.provider =
schema | ssl.trustmanager.algorithm =
schema | ssl.truststore.location =
schema | ssl.truststore.password = [hidden]
schema | ssl.truststore.type = JKS
schema | thread.pool.max = 200
schema | thread.pool.min = 8
schema | websocket.path.prefix = /ws
schema | websocket.servlet.initializor.classes = []
schema | (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
schema | [2022-12-09 02:48:25,457] INFO Logging initialized @6386ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
schema | [2022-12-09 02:48:25,970] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
schema | [2022-12-09 02:48:26,546] INFO Adding listener with HTTP/2: http://schema:9091 (io.confluent.rest.ApplicationServer)
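
Note: the registry's REST API is now bound to http://schema:9091 inside the compose network; the listeners setting overrides the port = 8081 default shown in the config dump. Assuming 9091 is also published to the host, the standard Schema Registry endpoints can be exercised directly:

curl -s http://localhost:9091/subjects   # registered subjects (empty on a fresh stack)
curl -s http://localhost:9091/config     # global compatibility level (backward, per the dump above)
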
kafka-ui | 2022-12-09 02:48:27,649 INFO [main] o.s.b.a.e.w.EndpointLinksResolver: Exposing 2 endpoint(s) beneath base path '/actuator'
kafka-ui | 2022-12-09 02:48:29,467 INFO [main] o.s.b.a.s.r.ReactiveUserDetailsServiceAutoConfiguration:
kafka-ui |
kafka-ui | Using generated security password: a71296fd-7dfd-43d0-8a65-a9cb4647d5c0
kafka-ui |
schema | [2022-12-09 02:48:29,773] INFO AdminClientConfig values:
schema | bootstrap.servers = [PLAINTEXT://kafka-local:9095]
schema | client.dns.lookup = use_all_dns_ips
schema | client.id =
schema | connections.max.idle.ms = 300000
schema | default.api.timeout.ms = 60000
schema | host.resolver.class = class org.apache.kafka.clients.DefaultHostResolver
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 2147483647
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin = 60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor = 0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
schema | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
schema | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms = 30000
schema | socket.connection.setup.timeout.ms = 10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema | (org.apache.kafka.clients.admin.AdminClientConfig)
schema | [2022-12-09 02:48:30,267] INFO Kafka version: 7.1.1-ce (org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:30,268] INFO Kafka commitId: 87f529fc90d374d4 (org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:30,268] INFO Kafka startTimeMs: 1670554110256 (org.apache.kafka.common.utils.AppInfoParser)
kafka-ui | 2022-12-09 02:48:30,529 WARN [main] c.p.k.u.c.a.DisabledAuthSecurityConfig: Authentication is disabled. Access will be unrestricted.
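
Note: the generated Spring security password logged above is a red herring; as the DisabledAuthSecurityConfig warning says, this kafka-ui instance serves everything unauthenticated, which is acceptable only for a throwaway local stack. The kafka-ui image listens on 8080 by default; assuming that port is mapped to the host and that health is among the two actuator endpoints exposed earlier, the UI can be probed with:

curl -s http://localhost:8080/actuator/health
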
kafka-ui | 2022-12-09 02:48:32,607 INFO [main] o.s.l.c.s.AbstractContextSource: Property 'userDn' not set - anonymous context will be used for read-write operations
schema | [2022-12-09 02:48:33,112] INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:33,153] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:48:33,153] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:48:33,163] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:48:33,290] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 02:48:33,290] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 02:48:33,290] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 02:48:33,394] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://kafka-local:9095 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:48:33,406] INFO AdminClientConfig values:
schema | bootstrap.servers = [PLAINTEXT://kafka-local:9095]
schema | client.dns.lookup = use_all_dns_ips
schema | client.id =
schema | connections.max.idle.ms = 300000
schema | default.api.timeout.ms = 60000
schema | host.resolver.class = class org.apache.kafka.clients.DefaultHostResolver
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 2147483647
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin = 60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor = 0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
schema | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
schema | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms = 30000
schema | socket.connection.setup.timeout.ms = 10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema | (org.apache.kafka.clients.admin.AdminClientConfig)
schema | [2022-12-09 02:48:33,438] INFO Kafka version: 7.1.1-ce (org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:33,438] INFO Kafka commitId: 87f529fc90d374d4 (org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:33,439] INFO Kafka startTimeMs: 1670554113438 (org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:33,606] INFO Creating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:48:33,613] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore)
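
Note: this WARN is the registry noticing that its kafkastore.topic.replication.factor = 3 default cannot be honored on a one-broker cluster, so it falls back to 1, which is harmless locally. To silence it at the source, the cp-schema-registry image maps SCHEMA_REGISTRY_* environment variables onto these settings, so a sketch of the fix would be one extra entry in the schema service's environment:

# sketch: add to the schema service's environment in ./app/docker-compose.yml
SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR=1
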
kafka | [2022-12-09 02:48:34,074] INFO Creating topic
_schemas with configuration {cleanup.policy=compact} and
initial partition assignment HashMap(0 -> ArrayBuffer(1))
(kafka.zk.AdminZkClient)
kafka | [2022-12-09 02:48:34,237] INFO [Controller
id=1] New topics: [Set(_schemas)], deleted topics: [HashSet()],
new partition replica assignment
[Set(TopicIdReplicaAssignment(_schemas,Some(jPX-71-
SRvWdrc9QRb_h-w),Map(_schemas-0 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:34,240] INFO [Controller
id=1] New partition creation callback for _schemas-0
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:34,249] INFO [Controller
id=1 epoch=1] Changed partition _schemas-0 state from
NonExistentPartition to NewPartition with assigned replicas 1
(state.change.logger)
kafka | [2022-12-09 02:48:34,261] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:48:34,290] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
_schemas-0 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:34,291] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:48:34,437] INFO [Controller
id=1 epoch=1] Changed partition _schemas-0 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:34,449] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='_schemas',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition _schemas-0 (state.change.logger)
kafka | [2022-12-09 02:48:34,457] INFO [Controller
id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with
1 become-leader and 0 become-follower partitions
(state.change.logger)
kafka | [2022-12-09 02:48:34,481] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet(1) for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:48:34,488] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
_schemas-0 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:34,503] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:48:34,528] INFO [Broker id=1]
Handling LeaderAndIsr request correlationId 1 from controller 1
for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:48:34,533] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='_schemas',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:34,685] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 1 from
controller 1 epoch 1 starting the become-leader transition for
partition _schemas-0 (state.change.logger)
kafka | [2022-12-09 02:48:34,689] INFO
[ReplicaFetcherManager on broker 1] Removed fetcher for
partitions Set(_schemas-0)
(kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 02:48:34,692] INFO [Broker id=1]
Stopped fetchers as part of LeaderAndIsr request correlationId
1 from controller 1 epoch 1 as part of the become-leader
transition for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:48:35,068] INFO [LogLoader
partition=_schemas-0, dir=/var/lib/kafka/data] Loading
producer state till offset 0 with message format version 2
(kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:35,168] INFO Created log
for partition _schemas-0 in /var/lib/kafka/data/_schemas-0 with
properties {cleanup.policy=compact} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:35,183] INFO [Partition
_schemas-0 broker=1] No checkpointed highwatermark is
found for partition _schemas-0 (kafka.cluster.Partition)
kafka | [2022-12-09 02:48:35,201] INFO [Partition
_schemas-0 broker=1] Log loaded for partition _schemas-0 with
initial high watermark 0 (kafka.cluster.Partition)
kafka | [2022-12-09 02:48:35,211] INFO [Broker id=1]
Leader _schemas-0 starts at leader epoch 0 from offset 0 with
high watermark 0 ISR [1] addingReplicas [] removingReplicas [].
Previous leader epoch was -1. (state.change.logger)
kafka | [2022-12-09 02:48:35,290] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 1 from
controller 1 epoch 1 for the become-leader transition for
partition _schemas-0 (state.change.logger)
kafka | [2022-12-09 02:48:35,332] INFO [Broker id=1]
Finished LeaderAndIsr request in 829ms correlationId 1 from
controller 1 for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:48:35,366] TRACE [Controller
id=1 epoch=1] Received response
LeaderAndIsrResponseData(errorCode=0, partitionErrors=[],
topics=[LeaderAndIsrTopicError(topicId=jPX-71-
SRvWdrc9QRb_h-w,
partitionErrors=[LeaderAndIsrPartitionError(topicName='',
partitionIndex=0, errorCode=0)])]) for request
LEADER_AND_ISR with correlation id 1 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 02:48:35,436] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='_schemas',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition _schemas-0 in response to
UpdateMetadata request sent by controller 1 epoch 1 with
correlation id 2 (state.change.logger)
kafka | [2022-12-09 02:48:35,441] INFO [Broker id=1]
Add 1 partitions and deleted 0 partitions from metadata cache
in response to UpdateMetadata request sent by controller 1
epoch 1 with correlation id 2 (state.change.logger)
kafka | [2022-12-09 02:48:35,492] TRACE [Controller
id=1 epoch=1] Received response
UpdateMetadataResponseData(errorCode=0) for request
UPDATE_METADATA with correlation id 2 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
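At this point broker 1 has become leader for _schemas-0, the
single-partition compacted topic that backs the schema registry.
A quick sanity check from the host, assuming the compose file
publishes the broker listener on localhost:9092 (the exact
mapping lives in app/docker-compose.yml), could be:

    # Describe the registry's backing topic; expect PartitionCount: 1,
    # ReplicationFactor: 1, Leader: 1 and cleanup.policy=compact,
    # matching the log lines above.
    docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
      --describe --topic _schemas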
schema | [2022-12-09 02:48:35,490] INFO App info
kafka.admin.client for adminclient-2 unregistered
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:35,521] INFO Metrics
scheduler closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:48:35,528] INFO Closing
reporter org.apache.kafka.common.metrics.JmxReporter
(org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:48:35,529] INFO Metrics
reporters closed (org.apache.kafka.common.metrics.Metrics)
schema | [2022-12-09 02:48:35,571] INFO
ProducerConfig values:
schema | acks = -1
schema | batch.size = 16384
schema | bootstrap.servers = [PLAINTEXT://kafka-local:9095]
schema | buffer.memory = 33554432
schema | client.dns.lookup = use_all_dns_ips
schema | client.id = producer-1
schema | compression.type = none
schema | connections.max.idle.ms = 540000
schema | delivery.timeout.ms = 120000
schema | enable.idempotence = false
schema | interceptor.classes = []
schema | key.serializer = class
org.apache.kafka.common.serialization.ByteArraySerializer
schema | linger.ms = 0
schema | max.block.ms = 60000
schema | max.in.flight.requests.per.connection = 5
schema | max.request.size = 1048576
schema | metadata.max.age.ms = 300000
schema | metadata.max.idle.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | partitioner.class = class
org.apache.kafka.clients.producer.internals.DefaultPartitioner
schema | receive.buffer.bytes = 32768
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retries = 0
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema | transaction.timeout.ms = 60000
schema | transactional.id = null
schema | value.serializer = class
org.apache.kafka.common.serialization.ByteArraySerializer
schema |
(org.apache.kafka.clients.producer.ProducerConfig)
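The ProducerConfig above belongs to the schema registry's internal
producer, which appends every registered schema to _schemas;
bootstrap.servers = PLAINTEXT://kafka-local:9095 comes from the
registry's kafkastore settings. On the Confluent images those
settings arrive as environment variables (dots upper-cased to
underscores), which can be checked against the running container
(assuming the service is named schema, as the log prefixes suggest):

    # List the kafkastore-related settings the schema container was
    # started with; variable names follow the Confluent image convention.
    docker exec schema env | grep SCHEMA_REGISTRY_KAFKASTORE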
schema | [2022-12-09 02:48:35,908] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:35,909] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:35,909] INFO Kafka
startTimeMs: 1670554115908
(org.apache.kafka.common.utils.AppInfoParser)
kafka-ui | 2022-12-09 02:48:35,970 INFO [main]
o.s.b.w.e.n.NettyWebServer: Netty started on port 8080
schema | [2022-12-09 02:48:36,003] INFO [Producer
clientId=producer-1] Cluster ID: 1i0gWgdkSlq2grYfKIOGfw
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:36,120] INFO Registered
kafka:type=kafka.Log4jController MBean
(kafka.utils.Log4jControllerRegistration$)
schema | [2022-12-09 02:48:36,122] INFO Kafka store
reader thread starting consumer
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 02:48:36,153] INFO
ConsumerConfig values:
schema | allow.auto.create.topics = true
schema | auto.commit.interval.ms = 5000
schema | auto.offset.reset = earliest
schema | bootstrap.servers = [PLAINTEXT://kafka-local:9095]
schema | check.crcs = true
schema | client.dns.lookup = use_all_dns_ips
schema | client.id = KafkaStore-reader-_schemas
schema | client.rack =
schema | connections.max.idle.ms = 540000
schema | default.api.timeout.ms = 60000
schema | enable.auto.commit = false
schema | exclude.internal.topics = true
schema | fetch.max.bytes = 52428800
schema | fetch.max.wait.ms = 500
schema | fetch.min.bytes = 1
schema | group.id = schema-registry-schema-9091
schema | group.instance.id = null
schema | heartbeat.interval.ms = 3000
schema | interceptor.classes = []
schema | internal.leave.group.on.close = true
schema |
internal.throw.on.fetch.stable.offset.unsupported = false
schema | isolation.level = read_uncommitted
schema | key.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
schema | max.partition.fetch.bytes = 1048576
schema | max.poll.interval.ms = 300000
schema | max.poll.records = 500
schema | metadata.max.age.ms = 300000
schema | metric.reporters = []
schema | metrics.num.samples = 2
schema | metrics.recording.level = INFO
schema | metrics.sample.window.ms = 30000
schema | partition.assignment.strategy = [class
org.apache.kafka.clients.consumer.RangeAssignor, class
org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
schema | receive.buffer.bytes = 65536
schema | reconnect.backoff.max.ms = 1000
schema | reconnect.backoff.ms = 50
schema | request.timeout.ms = 30000
schema | retry.backoff.ms = 100
schema | sasl.client.callback.handler.class = null
schema | sasl.jaas.config = null
schema | sasl.kerberos.kinit.cmd = /usr/bin/kinit
schema | sasl.kerberos.min.time.before.relogin =
60000
schema | sasl.kerberos.service.name = null
schema | sasl.kerberos.ticket.renew.jitter = 0.05
schema | sasl.kerberos.ticket.renew.window.factor =
0.8
schema | sasl.login.callback.handler.class = null
schema | sasl.login.class = null
schema | sasl.login.connect.timeout.ms = null
schema | sasl.login.read.timeout.ms = null
schema | sasl.login.refresh.buffer.seconds = 300
schema | sasl.login.refresh.min.period.seconds = 60
schema | sasl.login.refresh.window.factor = 0.8
schema | sasl.login.refresh.window.jitter = 0.05
schema | sasl.login.retry.backoff.max.ms = 10000
schema | sasl.login.retry.backoff.ms = 100
schema | sasl.mechanism = GSSAPI
schema | sasl.oauthbearer.clock.skew.seconds = 30
schema | sasl.oauthbearer.expected.audience = null
schema | sasl.oauthbearer.expected.issuer = null
schema | sasl.oauthbearer.jwks.endpoint.refresh.ms
= 3600000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms =
10000
schema |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
schema | sasl.oauthbearer.jwks.endpoint.url = null
schema | sasl.oauthbearer.scope.claim.name = scope
schema | sasl.oauthbearer.sub.claim.name = sub
schema | sasl.oauthbearer.token.endpoint.url = null
schema | security.protocol = PLAINTEXT
schema | security.providers = null
schema | send.buffer.bytes = 131072
schema | session.timeout.ms = 45000
schema | socket.connection.setup.timeout.max.ms =
30000
schema | socket.connection.setup.timeout.ms =
10000
schema | ssl.cipher.suites = null
schema | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
schema | ssl.endpoint.identification.algorithm = https
schema | ssl.engine.factory.class = null
schema | ssl.key.password = null
schema | ssl.keymanager.algorithm = SunX509
schema | ssl.keystore.certificate.chain = null
schema | ssl.keystore.key = null
schema | ssl.keystore.location = null
schema | ssl.keystore.password = null
schema | ssl.keystore.type = JKS
schema | ssl.protocol = TLSv1.3
schema | ssl.provider = null
schema | ssl.secure.random.implementation = null
schema | ssl.trustmanager.algorithm = PKIX
schema | ssl.truststore.certificates = null
schema | ssl.truststore.location = null
schema | ssl.truststore.password = null
schema | ssl.truststore.type = JKS
schema | value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
schema |
(org.apache.kafka.clients.consumer.ConsumerConfig)
kafka-ui | 2022-12-09 02:48:36,220 INFO [main]
c.p.k.u.KafkaUiApplication: Started KafkaUiApplication in 52.669
seconds (JVM running for 65.948)
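kafka-ui is now serving on port 8080 inside its container (a
52-second cold start is typical for a JVM app under Docker).
Assuming the compose file publishes that port to the host, a
liveness check could be:

    # 200 here means the UI is reachable; the host port is an assumption,
    # check the ports mapping for kafka-ui in app/docker-compose.yml.
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080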
schema | [2022-12-09 02:48:36,388] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:36,403] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:36,403] INFO Kafka
startTimeMs: 1670554116387
(org.apache.kafka.common.utils.AppInfoParser)
kafka-ui | 2022-12-09 02:48:36,460 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
schema | [2022-12-09 02:48:36,472] INFO [Consumer
clientId=KafkaStore-reader-_schemas,
groupId=schema-registry-schema-9091] Cluster ID: 1i0gWgdkSlq2grYfKIOGfw
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:36,531] INFO [Consumer
clientId=KafkaStore-reader-_schemas,
groupId=schema-registry-schema-9091] Subscribed to partition(s): _schemas-0
(org.apache.kafka.clients.consumer.KafkaConsumer)
schema | [2022-12-09 02:48:36,544] INFO Seeking to
beginning for all partitions
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 02:48:36,546] INFO [Consumer
clientId=KafkaStore-reader-_schemas,
groupId=schema-registry-schema-9091] Seeking to EARLIEST offset of partition
_schemas-0
(org.apache.kafka.clients.consumer.internals.SubscriptionState)
schema | [2022-12-09 02:48:36,547] INFO Initialized
last consumed offset to -1
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
schema | [2022-12-09 02:48:36,560] INFO
[kafka-store-reader-thread-_schemas]: Starting
(io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
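The reader thread rebuilds the registry's in-memory store by
replaying _schemas from offset 0 (hence the seek to EARLIEST and
the initial consumed offset of -1 above). The same stream the
registry sees can be tailed by hand:

    # Replay the registry's backing topic; kafka-local:9095 is the
    # advertised listener from the logs above and should resolve
    # inside the kafka container (an assumption about the compose network).
    docker exec kafka kafka-console-consumer \
      --bootstrap-server kafka-local:9095 --topic _schemas --from-beginning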
kafka-ui | 2022-12-09 02:48:36,602 INFO [parallel-1]
o.a.k.c.a.AdminClientConfig: AdminClientConfig values:
kafka-ui | bootstrap.servers = [kafka-local:9095]
kafka-ui | client.dns.lookup = use_all_dns_ips
kafka-ui | client.id =
kafka-ui | connections.max.idle.ms = 300000
kafka-ui | default.api.timeout.ms = 60000
kafka-ui | metadata.max.age.ms = 300000
kafka-ui | metric.reporters = []
kafka-ui | metrics.num.samples = 2
kafka-ui | metrics.recording.level = INFO
kafka-ui | metrics.sample.window.ms = 30000
kafka-ui | receive.buffer.bytes = 65536
kafka-ui | reconnect.backoff.max.ms = 1000
kafka-ui | reconnect.backoff.ms = 50
kafka-ui | request.timeout.ms = 30000
kafka-ui | retries = 2147483647
kafka-ui | retry.backoff.ms = 100
kafka-ui | sasl.client.callback.handler.class = null
kafka-ui | sasl.jaas.config = null
kafka-ui | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka-ui | sasl.kerberos.min.time.before.relogin =
60000
kafka-ui | sasl.kerberos.service.name = null
kafka-ui | sasl.kerberos.ticket.renew.jitter = 0.05
kafka-ui | sasl.kerberos.ticket.renew.window.factor =
0.8
kafka-ui | sasl.login.callback.handler.class = null
kafka-ui | sasl.login.class = null
kafka-ui | sasl.login.refresh.buffer.seconds = 300
kafka-ui | sasl.login.refresh.min.period.seconds = 60
kafka-ui | sasl.login.refresh.window.factor = 0.8
kafka-ui | sasl.login.refresh.window.jitter = 0.05
kafka-ui | sasl.mechanism = GSSAPI
kafka-ui | security.protocol = PLAINTEXT
kafka-ui | security.providers = null
kafka-ui | send.buffer.bytes = 131072
kafka-ui | socket.connection.setup.timeout.max.ms =
30000
kafka-ui | socket.connection.setup.timeout.ms =
10000
kafka-ui | ssl.cipher.suites = null
kafka-ui | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka-ui | ssl.endpoint.identification.algorithm = https
kafka-ui | ssl.engine.factory.class = null
kafka-ui | ssl.key.password = null
kafka-ui | ssl.keymanager.algorithm = SunX509
kafka-ui | ssl.keystore.certificate.chain = null
kafka-ui | ssl.keystore.key = null
kafka-ui | ssl.keystore.location = null
kafka-ui | ssl.keystore.password = null
kafka-ui | ssl.keystore.type = JKS
kafka-ui | ssl.protocol = TLSv1.3
kafka-ui | ssl.provider = null
kafka-ui | ssl.secure.random.implementation = null
kafka-ui | ssl.trustmanager.algorithm = PKIX
kafka-ui | ssl.truststore.certificates = null
kafka-ui | ssl.truststore.location = null
kafka-ui | ssl.truststore.password = null
kafka-ui | ssl.truststore.type = JKS
kafka-ui |
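These AdminClientConfig values belong to the AdminClient that
kafka-ui's ClustersMetricsScheduler opens against the hiveLocal
cluster; note it points at the same kafka-local:9095 listener as
the registry. Once polling succeeds, the UI's REST layer should
report the cluster (the endpoint path is an assumption based on
the provectus kafka-ui API, and the host port mapping as before):

    # Expect a JSON array containing the hiveLocal cluster definition.
    curl -s http://localhost:8080/api/clusters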
schema | [2022-12-09 02:48:36,874] INFO [Consumer
clientId=KafkaStore-reader-_schemas,
groupId=schema-registry-schema-9091] Resetting the last seen epoch of
partition _schemas-0 to 0 since the associated topicId changed
from null to jPX-71-SRvWdrc9QRb_h-w
(org.apache.kafka.clients.Metadata)
kafka-ui | 2022-12-09 02:48:37,023 INFO [parallel-1]
o.a.k.c.u.AppInfoParser: Kafka version: 2.8.0
kafka-ui | 2022-12-09 02:48:37,024 INFO [parallel-1]
o.a.k.c.u.AppInfoParser: Kafka commitId: ebb1d6e21cc92130
kafka-ui | 2022-12-09 02:48:37,024 INFO [parallel-1]
o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1670554117007
schema | [2022-12-09 02:48:37,634] INFO [Producer
clientId=producer-1] Resetting the last seen epoch of partition
_schemas-0 to 0 since the associated topicId changed from null
to jPX-71-SRvWdrc9QRb_h-w
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:38,226] INFO Wait to catch
up until the offset at 0
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:48:38,544] INFO Reached
offset at 0
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:48:38,553] INFO Joining
schema registry with Kafka-based coordination
(io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
schema | [2022-12-09 02:48:38,644] INFO Kafka
version: 7.1.1-ce
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:38,644] INFO Kafka
commitId: 87f529fc90d374d4
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:38,644] INFO Kafka
startTimeMs: 1670554118643
(org.apache.kafka.common.utils.AppInfoParser)
schema | [2022-12-09 02:48:38,713] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition _schemas-0 to 0 since the
associated topicId changed from null to jPX-71-SRvWdrc9QRb_h-w
(org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:38,714] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Cluster ID:
1i0gWgdkSlq2grYfKIOGfw (org.apache.kafka.clients.Metadata)
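With the local store caught up, the registry joins its Kafka-based
coordination group to elect a leader instance. The group id
schema-registry-schema-9091 is derived from the instance's
advertised host and port, which suggests (but the log does not
state directly) a REST listener on port 9091. If that port is
published to the host, a fresh store answers with an empty
subject list:

    # Port 9091 is inferred from the coordination group id, not
    # confirmed anywhere in this log.
    curl -s http://localhost:9091/subjects
    # => []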
kafka | [2022-12-09 02:48:38,785] INFO Creating topic
__consumer_offsets with configuration
{compression.type=producer, cleanup.policy=compact,
segment.bytes=104857600} and initial partition assignment
HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1),
3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1),
6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1),
9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1),
12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1),
15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1),
18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1),
21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1),
24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1),
27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1),
30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1),
33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1),
36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1),
39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1),
42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1),
45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1),
48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1))
(kafka.zk.AdminZkClient)
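__consumer_offsets is created lazily here, triggered by the
registry's first group-coordination request; 50 partitions is the
broker default (offsets.topic.num.partitions), and every partition
lands on broker 1 because it is the only broker. To confirm after
startup (same localhost:9092 port-mapping assumption as above):

    # All 50 partitions should show Leader: 1, Replicas: 1, Isr: 1.
    docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
      --describe --topic __consumer_offsets | head -n 5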
kafka | [2022-12-09 02:48:38,879] INFO [Controller
id=1] New topics: [Set(__consumer_offsets)], deleted topics:
[HashSet()], new partition replica assignment
[Set(TopicIdReplicaAssignment(__consumer_offsets,
Some(7c8kJ5UBR5yIaaALUBePYg),HashMap(__consumer_offsets-22 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-30 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-25 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-35 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-37 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-38 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-13 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-8 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-21 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-4 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-27 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-7 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-9 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-46 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-41 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-33 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-23 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-49 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-47 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-16 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-28 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-31 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-36 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-42 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-3 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-18 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-15 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-24 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-17 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-48 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-19 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-11 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-2 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-43 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-6 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-14 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-20 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-0 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-44 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-39 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-12 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-45 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-1 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-5 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-26 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-29 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-34 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-10 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-32 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=), __consumer_offsets-40 ->
ReplicaAssignment(replicas=1, addingReplicas=,
removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:38,880] INFO [Controller
id=1] New partition creation callback for
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,
__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,
__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,
__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,
__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,
__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,
__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,
__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,
__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,
__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,
__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,
__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,
__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,
__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,
__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,
__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,
__consumer_offsets-32,__consumer_offsets-40
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:48:38,897] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-22 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,898] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-30 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,898] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-25 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,898] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-35 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,898] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-37 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,898] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-38 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,898] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-13 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-8 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-21 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-4 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-27 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-7 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-9 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,899] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-46 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-41 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-33 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-23 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-49 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-47 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-16 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,900] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-28 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,901] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-31 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,901] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-36 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,901] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-42 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,901] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-3 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,901] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-18 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,901] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-15 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,902] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-24 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,902] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-17 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,902] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-48 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,903] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-19 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,903] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-11 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,903] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-2 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,903] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-43 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,903] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-6 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,903] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-14 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,907] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-20 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,908] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-0 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,908] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-44 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,908] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-39 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,909] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-12 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,909] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-45 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,909] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-1 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,916] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-5 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,916] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-26 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,916] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-29 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,916] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-34 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,917] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-10 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,917] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-32 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,917] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-40 state
from NonExistentPartition to NewPartition with assigned
replicas 1 (state.change.logger)
kafka | [2022-12-09 02:48:38,917] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
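The controller is walking each new partition through its state
machine (NonExistentPartition -> NewPartition -> OnlinePartition)
and, in the TRACE lines that follow, each replica through
NonExistentReplica -> NewReplica. The assignment it is acting on
was written to ZooKeeper first, where it can be inspected directly:

    # zookeeper-shell ships in the cp-zookeeper image; the znode holds
    # the partition-to-replica assignment these controller logs replay.
    docker exec zookeeper zookeeper-shell localhost:2181 \
      get /brokers/topics/__consumer_offsets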
kafka | [2022-12-09 02:48:38,927] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-32 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,927] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-5 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,927] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-44 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-48 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-46 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-20 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-43 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-24 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-6 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-18 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-21 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-1 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-14 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-34 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-16 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-29 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-11 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,928] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-0 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-22 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-47 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-36 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-28 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-42 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-9 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-37 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-13 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-30 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-35 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-39 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-12 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-27 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,929] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-45 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,930] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-19 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,930] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-49 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,930] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-40 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,930] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-41 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,930] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-38 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,930] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-8 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,939] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-7 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,939] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-33 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,939] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-25 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,939] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-31 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,939] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-23 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,939] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-10 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,940] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-2 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,940] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-17 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,940] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-4 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,943] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-15 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,943] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-26 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,944] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-3 from NonExistentReplica to NewReplica
(state.change.logger)
kafka | [2022-12-09 02:48:38,944] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:48:39,530] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-22 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,530] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-30 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,530] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-25 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,530] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-35 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,530] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-37 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,530] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-38 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-13 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-8 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-21 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-4 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-27 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-7 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-9 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-46 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-41 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-33 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-23 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-49 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-47 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-16 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,531] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-28 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,532] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-31 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,532] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-36 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,532] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-42 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,532] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-3 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,532] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-18 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,548] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-15 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,548] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-24 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,548] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-17 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,548] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-48 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-19 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-11 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-2 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-43 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-6 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-14 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-20 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-0 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-44 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-39 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-12 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-45 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,549] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-1 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-5 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-26 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-29 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-34 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-10 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-32 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:48:39,550] INFO [Controller
id=1 epoch=1] Changed partition __consumer_offsets-40 from
NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1),
zkVersion=0) (state.change.logger)
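Every __consumer_offsets partition is now OnlinePartition with
LeaderAndIsr(leader=1, isr=List(1)); the become-leader requests
that follow push that state down to the broker itself. From here
on, consumer groups can commit offsets, and active groups can be
listed once clients connect:

    # Same localhost:9092 port-mapping assumption as in the earlier checks.
    docker exec kafka kafka-consumer-groups \
      --bootstrap-server localhost:9092 --list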
kafka | [2022-12-09 02:48:39,550] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=13, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-13 (state.change.logger)
kafka | [2022-12-09 02:48:39,550] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=46, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-46 (state.change.logger)
kafka | [2022-12-09 02:48:39,550] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=9, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2022-12-09 02:48:39,550] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=42, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2022-12-09 02:48:39,550] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=21, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2022-12-09 02:48:39,551] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=17, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2022-12-09 02:48:39,551] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=30, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2022-12-09 02:48:39,551] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=26, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2022-12-09 02:48:39,551] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=5, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2022-12-09 02:48:39,551] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=38, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=1, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=34, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=16, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=45, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=12, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=24, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=20, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=49, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=29, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2022-12-09 02:48:39,557] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=25, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=8, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=37, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=4, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=33, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=15, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=48, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=11, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=44, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=23, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=19, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=32, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=28, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2022-12-09 02:48:39,558] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=7, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2022-12-09 02:48:39,559] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=40, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2022-12-09 02:48:39,559] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=3, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2022-12-09 02:48:39,559] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=36, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2022-12-09 02:48:39,559] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=47, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2022-12-09 02:48:39,559] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=14, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2022-12-09 02:48:39,566] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=43, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2022-12-09 02:48:39,573] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=10, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2022-12-09 02:48:39,573] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=22, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2022-12-09 02:48:39,573] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=18, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2022-12-09 02:48:39,573] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=31, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2022-12-09 02:48:39,574] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=27, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2022-12-09 02:48:39,574] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=39, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2022-12-09 02:48:39,574] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=6, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2022-12-09 02:48:39,574] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=35, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2022-12-09 02:48:39,574] TRACE [Controller
id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=2, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true) to broker
1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2022-12-09 02:48:39,574] INFO [Controller
id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with
50 become-leader and 0 become-follower partitions
(state.change.logger)
kafka | [2022-12-09 02:48:39,576] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet(1) for 50 partitions (state.change.logger)
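
[Editor's note] After the per-partition TRACE entries, the controller batches the result: one LeaderAndIsr request to broker 1 carrying all 50 become-leader partitions, plus an UpdateMetadata request so every broker in the cluster (here, only broker 1) can serve client metadata for those partitions. In this ZooKeeper-based setup the authoritative leader/ISR state is also written to ZooKeeper; a hedged way to inspect it, assuming the cp-zookeeper image ships Kafka's zookeeper-shell script and the service is named zookeeper as in this transcript:

    # Read the controller-written leader/ISR state for partition 0.
    # Assumes: container name "zookeeper", ZooKeeper on localhost:2181.
    docker exec zookeeper zookeeper-shell localhost:2181 \
      get /brokers/topics/__consumer_offsets/partitions/0/state
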
kafka | [2022-12-09 02:48:39,595] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-32 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,595] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-5 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,597] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-44 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,607] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-48 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,608] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-46 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,615] INFO [Broker id=1]
Handling LeaderAndIsr request correlationId 3 from controller 1
for 50 partitions (state.change.logger)
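
[Editor's note] From here the broker-side log interleaves with the controller's: "correlation id 3" ties every "Received LeaderAndIsr request" TRACE below to the single batched request sent above, and "from controller 1 epoch 1" lets the broker reject requests from a stale controller. A hedged way to pull just this exchange out of the container log, assuming the compose service is named kafka as in this transcript:

    # Count the per-partition entries carrying this correlation id.
    # Assumes: compose service/container named "kafka".
    docker logs kafka 2>&1 | grep -c 'correlation id 3'
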
kafka | [2022-12-09 02:48:39,631] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=13, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,631] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=46, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,631] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=9, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,631] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=42, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,631] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=21, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,631] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=17, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=30, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=26, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=5, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=38, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=1, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=34, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=16, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=45, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-20 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=12, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-43 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,633] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-24 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,632] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-6 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=24, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-18 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=20, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-21 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=49, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-1 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,637] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=29, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-14 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=25, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-34 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=8, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-16 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=37, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-29 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=4, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=33, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-11 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=15, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-0 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=48, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=11, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-22 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,638] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=44, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=23, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-47 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=19, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-36 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=32, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-28 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-42 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=28, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-9 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=7, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=40, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-37 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=3, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-13 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=36, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-30 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=47, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,639] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-35 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=14, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-39 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=43, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-12 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=10, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=22, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=18, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=31, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=27, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-27 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=39, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-45 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=6, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-19 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=35, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,640] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-49 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Broker
id=1] Received LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets',
partitionIndex=2, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
addingReplicas=[], removingReplicas=[], isNew=true)
correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-40 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-41 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-38 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-8 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-7 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-33 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-25 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-31 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-23 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-10 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,641] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-2 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,642] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-17 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,642] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-4 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,642] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-15 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,642] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-26 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,642] TRACE [Controller
id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-3 from NewReplica to OnlineReplica
(state.change.logger)
kafka | [2022-12-09 02:48:39,642] INFO [Controller
id=1 epoch=1] Sending UpdateMetadata request to brokers
HashSet() for 0 partitions (state.change.logger)
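
[Editor's note] With every replica now OnlineReplica, the controller's state machine for __consumer_offsets is complete; the final UpdateMetadata to HashSet() for 0 partitions is a no-op because there are no other brokers to notify. The topic itself is created lazily: the first consumer-group operation against a fresh broker triggers creation of __consumer_offsets with offsets.topic.num.partitions partitions (50 by default, matching the counts logged above). A hedged check that group metadata is actually being stored, assuming the kafka-consumer-groups CLI inside the container:

    # List consumer groups once a client has connected.
    # Assumes: container "kafka", listener on localhost:9092.
    docker exec kafka kafka-consumer-groups \
      --bootstrap-server localhost:9092 --list
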
kafka-ui | 2022-12-09 02:48:39,822 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:48:40,329] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-3 (state.change.logger)
kafka | [2022-12-09 02:48:40,350] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-18 (state.change.logger)
kafka | [2022-12-09 02:48:40,355] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-41 (state.change.logger)
kafka | [2022-12-09 02:48:40,356] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-10 (state.change.logger)
kafka | [2022-12-09 02:48:40,356] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-33 (state.change.logger)
kafka | [2022-12-09 02:48:40,357] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-48 (state.change.logger)
kafka | [2022-12-09 02:48:40,357] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-19 (state.change.logger)
kafka | [2022-12-09 02:48:40,359] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-34 (state.change.logger)
kafka | [2022-12-09 02:48:40,364] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-4 (state.change.logger)
kafka | [2022-12-09 02:48:40,364] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-11 (state.change.logger)
kafka | [2022-12-09 02:48:40,364] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-26 (state.change.logger)
kafka | [2022-12-09 02:48:40,364] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-49 (state.change.logger)
kafka | [2022-12-09 02:48:40,374] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-39 (state.change.logger)
kafka | [2022-12-09 02:48:40,374] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-9 (state.change.logger)
kafka | [2022-12-09 02:48:40,374] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-24 (state.change.logger)
kafka | [2022-12-09 02:48:40,374] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-31 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-46 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-1 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-16 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-2 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-25 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-40 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-47 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-17 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-32 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-37 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-7 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-22 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-29 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-44 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-14 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-23 (state.change.logger)
kafka | [2022-12-09 02:48:40,375] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-38 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-8 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-45 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-15 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-30 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-0 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-35 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-5 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-20 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-27 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-42 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-12 (state.change.logger)
kafka | [2022-12-09 02:48:40,376] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-21 (state.change.logger)
kafka | [2022-12-09 02:48:40,388] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-36 (state.change.logger)
kafka | [2022-12-09 02:48:40,388] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-6 (state.change.logger)
kafka | [2022-12-09 02:48:40,388] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-43 (state.change.logger)
kafka | [2022-12-09 02:48:40,388] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-13 (state.change.logger)
kafka | [2022-12-09 02:48:40,388] TRACE [Broker
id=1] Handling LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 starting the become-leader transition for
partition __consumer_offsets-28 (state.change.logger)
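
[Editor's note] The 50 "starting the become-leader transition" TRACE entries above are the broker acting on the batched LeaderAndIsr request, one partition at a time. A quick hedged sanity check that all 50 transitions fired, again assuming the service name used throughout this transcript:

    # Expect 50 matches, one per __consumer_offsets partition.
    docker logs kafka 2>&1 | grep -c 'starting the become-leader transition'
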
kafka | [2022-12-09 02:48:40,425] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 02:48:40,426] INFO [Broker id=1]
Stopped fetchers as part of LeaderAndIsr request correlationId
3 from controller 1 epoch 1 as part of the become-leader
transition for 50 partitions (state.change.logger)
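(A broker that becomes leader for a partition stops replica-fetching it, since leaders are fetched from rather than fetching, which is why the ReplicaFetcherManager removes fetchers for all 50 partitions in one HashSet before any logs are created. Once the broker is up, the resulting layout can be checked from inside the container; a sketch assuming the container name kafka shown in this output and the conventional localhost:9092 listener, which depends on the compose file's listener config:

# describe the internal offsets topic: expect 50 partitions, leader 1, ISR [1]
docker exec kafka kafka-topics --bootstrap-server localhost:9092 --describe --topic __consumer_offsets
)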
kafka | [2022-12-09 02:48:40,552] INFO [LogLoader
partition=__consumer_offsets-3, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,584] INFO Created log
for partition __consumer_offsets-3 in
/var/lib/kafka/data/__consumer_offsets-3 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,603] INFO [Partition
__consumer_offsets-3 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-3
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,616] INFO [Partition
__consumer_offsets-3 broker=1] Log loaded for partition
__consumer_offsets-3 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,623] INFO [Broker id=1]
Leader __consumer_offsets-3 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
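(Each partition now runs through the same five messages: the LogLoader loads producer state up to offset 0 because the log is brand new; the LogManager creates the on-disk log under /var/lib/kafka/data with the topic's overrides (cleanup.policy=compact, so superseded offset commits are compacted away, and segment.bytes=104857600, i.e. 100 MiB segments); no checkpointed high watermark exists yet; the log loads with an initial high watermark of 0; and the broker takes leadership at epoch 0, the previous epoch of -1 meaning the partition never had a leader. This block repeats, in no particular partition order, for all 50 partitions below. The same overrides can be read back afterwards; a sketch under the same container and listener assumptions as above:

# show the per-topic config overrides that appear in the Created-log lines
docker exec kafka kafka-configs --bootstrap-server localhost:9092 --entity-type topics --entity-name __consumer_offsets --describe
)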
kafka | [2022-12-09 02:48:40,662] INFO [LogLoader
partition=__consumer_offsets-18, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,679] INFO Created log
for partition __consumer_offsets-18 in
/var/lib/kafka/data/__consumer_offsets-18 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,680] INFO [Partition
__consumer_offsets-18 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-18
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,680] INFO [Partition
__consumer_offsets-18 broker=1] Log loaded for partition
__consumer_offsets-18 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,681] INFO [Broker id=1]
Leader __consumer_offsets-18 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:40,706] INFO [LogLoader
partition=__consumer_offsets-41, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,722] INFO Created log
for partition __consumer_offsets-41 in
/var/lib/kafka/data/__consumer_offsets-41 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,722] INFO [Partition
__consumer_offsets-41 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-41
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,723] INFO [Partition
__consumer_offsets-41 broker=1] Log loaded for partition
__consumer_offsets-41 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,723] INFO [Broker id=1]
Leader __consumer_offsets-41 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:40,750] INFO [LogLoader
partition=__consumer_offsets-10, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,769] INFO Created log
for partition __consumer_offsets-10 in
/var/lib/kafka/data/__consumer_offsets-10 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,769] INFO [Partition
__consumer_offsets-10 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-10
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,770] INFO [Partition
__consumer_offsets-10 broker=1] Log loaded for partition
__consumer_offsets-10 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,770] INFO [Broker id=1]
Leader __consumer_offsets-10 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:40,789] INFO [LogLoader
partition=__consumer_offsets-33, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,802] INFO Created log
for partition __consumer_offsets-33 in
/var/lib/kafka/data/__consumer_offsets-33 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,803] INFO [Partition
__consumer_offsets-33 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-33
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,803] INFO [Partition
__consumer_offsets-33 broker=1] Log loaded for partition
__consumer_offsets-33 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,803] INFO [Broker id=1]
Leader __consumer_offsets-33 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:40,851] INFO [LogLoader
partition=__consumer_offsets-48, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,872] INFO Created log
for partition __consumer_offsets-48 in
/var/lib/kafka/data/__consumer_offsets-48 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,874] INFO [Partition
__consumer_offsets-48 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-48
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,874] INFO [Partition
__consumer_offsets-48 broker=1] Log loaded for partition
__consumer_offsets-48 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,874] INFO [Broker id=1]
Leader __consumer_offsets-48 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:40,944] INFO [LogLoader
partition=__consumer_offsets-19, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:40,950] INFO Created log
for partition __consumer_offsets-19 in
/var/lib/kafka/data/__consumer_offsets-19 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:40,950] INFO [Partition
__consumer_offsets-19 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-19
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,951] INFO [Partition
__consumer_offsets-19 broker=1] Log loaded for partition
__consumer_offsets-19 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:40,952] INFO [Broker id=1]
Leader __consumer_offsets-19 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:40,991] INFO [LogLoader
partition=__consumer_offsets-34, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,011] INFO Created log
for partition __consumer_offsets-34 in
/var/lib/kafka/data/__consumer_offsets-34 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,011] INFO [Partition
__consumer_offsets-34 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-34
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,012] INFO [Partition
__consumer_offsets-34 broker=1] Log loaded for partition
__consumer_offsets-34 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,012] INFO [Broker id=1]
Leader __consumer_offsets-34 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,042] INFO [LogLoader
partition=__consumer_offsets-4, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,045] INFO Created log
for partition __consumer_offsets-4 in
/var/lib/kafka/data/__consumer_offsets-4 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,046] INFO [Partition
__consumer_offsets-4 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-4
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,046] INFO [Partition
__consumer_offsets-4 broker=1] Log loaded for partition
__consumer_offsets-4 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,046] INFO [Broker id=1]
Leader __consumer_offsets-4 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,056] INFO [LogLoader
partition=__consumer_offsets-11, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,059] INFO Created log
for partition __consumer_offsets-11 in
/var/lib/kafka/data/__consumer_offsets-11 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,060] INFO [Partition
__consumer_offsets-11 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-11
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,060] INFO [Partition
__consumer_offsets-11 broker=1] Log loaded for partition
__consumer_offsets-11 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,060] INFO [Broker id=1]
Leader __consumer_offsets-11 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,073] INFO [LogLoader
partition=__consumer_offsets-26, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,103] INFO Created log
for partition __consumer_offsets-26 in
/var/lib/kafka/data/__consumer_offsets-26 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,108] INFO [Partition
__consumer_offsets-26 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-26
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,119] INFO [Partition
__consumer_offsets-26 broker=1] Log loaded for partition
__consumer_offsets-26 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,120] INFO [Broker id=1]
Leader __consumer_offsets-26 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,147] INFO [LogLoader
partition=__consumer_offsets-49, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,155] INFO Created log
for partition __consumer_offsets-49 in
/var/lib/kafka/data/__consumer_offsets-49 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,157] INFO [Partition
__consumer_offsets-49 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-49
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,164] INFO [Partition
__consumer_offsets-49 broker=1] Log loaded for partition
__consumer_offsets-49 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,165] INFO [Broker id=1]
Leader __consumer_offsets-49 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,181] INFO [LogLoader
partition=__consumer_offsets-39, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,189] INFO Created log
for partition __consumer_offsets-39 in
/var/lib/kafka/data/__consumer_offsets-39 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,199] INFO [Partition
__consumer_offsets-39 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-39
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,200] INFO [Partition
__consumer_offsets-39 broker=1] Log loaded for partition
__consumer_offsets-39 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,200] INFO [Broker id=1]
Leader __consumer_offsets-39 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,223] INFO [LogLoader
partition=__consumer_offsets-9, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,233] INFO Created log
for partition __consumer_offsets-9 in
/var/lib/kafka/data/__consumer_offsets-9 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,233] INFO [Partition
__consumer_offsets-9 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-9
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,234] INFO [Partition
__consumer_offsets-9 broker=1] Log loaded for partition
__consumer_offsets-9 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,234] INFO [Broker id=1]
Leader __consumer_offsets-9 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,270] INFO [LogLoader
partition=__consumer_offsets-24, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,279] INFO Created log
for partition __consumer_offsets-24 in
/var/lib/kafka/data/__consumer_offsets-24 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,279] INFO [Partition
__consumer_offsets-24 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-24
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,280] INFO [Partition
__consumer_offsets-24 broker=1] Log loaded for partition
__consumer_offsets-24 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,280] INFO [Broker id=1]
Leader __consumer_offsets-24 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,320] INFO [LogLoader
partition=__consumer_offsets-31, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,334] INFO Created log
for partition __consumer_offsets-31 in
/var/lib/kafka/data/__consumer_offsets-31 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,335] INFO [Partition
__consumer_offsets-31 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-31
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,335] INFO [Partition
__consumer_offsets-31 broker=1] Log loaded for partition
__consumer_offsets-31 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,335] INFO [Broker id=1]
Leader __consumer_offsets-31 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,386] INFO [LogLoader
partition=__consumer_offsets-46, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,398] INFO Created log
for partition __consumer_offsets-46 in
/var/lib/kafka/data/__consumer_offsets-46 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,398] INFO [Partition
__consumer_offsets-46 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-46
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,398] INFO [Partition
__consumer_offsets-46 broker=1] Log loaded for partition
__consumer_offsets-46 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,398] INFO [Broker id=1]
Leader __consumer_offsets-46 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,414] INFO [LogLoader
partition=__consumer_offsets-1, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,416] INFO Created log
for partition __consumer_offsets-1 in
/var/lib/kafka/data/__consumer_offsets-1 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,416] INFO [Partition
__consumer_offsets-1 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-1
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,416] INFO [Partition
__consumer_offsets-1 broker=1] Log loaded for partition
__consumer_offsets-1 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,417] INFO [Broker id=1]
Leader __consumer_offsets-1 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,455] INFO [LogLoader
partition=__consumer_offsets-16, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,459] INFO Created log
for partition __consumer_offsets-16 in
/var/lib/kafka/data/__consumer_offsets-16 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,459] INFO [Partition
__consumer_offsets-16 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-16
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,459] INFO [Partition
__consumer_offsets-16 broker=1] Log loaded for partition
__consumer_offsets-16 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,460] INFO [Broker id=1]
Leader __consumer_offsets-16 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,493] INFO [LogLoader
partition=__consumer_offsets-2, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,511] INFO Created log
for partition __consumer_offsets-2 in
/var/lib/kafka/data/__consumer_offsets-2 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,511] INFO [Partition
__consumer_offsets-2 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-2
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,511] INFO [Partition
__consumer_offsets-2 broker=1] Log loaded for partition
__consumer_offsets-2 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,511] INFO [Broker id=1]
Leader __consumer_offsets-2 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,557] INFO [LogLoader
partition=__consumer_offsets-25, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,569] INFO Created log
for partition __consumer_offsets-25 in
/var/lib/kafka/data/__consumer_offsets-25 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,570] INFO [Partition
__consumer_offsets-25 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-25
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,570] INFO [Partition
__consumer_offsets-25 broker=1] Log loaded for partition
__consumer_offsets-25 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,570] INFO [Broker id=1]
Leader __consumer_offsets-25 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,595] INFO [LogLoader
partition=__consumer_offsets-40, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,605] INFO Created log
for partition __consumer_offsets-40 in
/var/lib/kafka/data/__consumer_offsets-40 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,606] INFO [Partition
__consumer_offsets-40 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-40
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,611] INFO [Partition
__consumer_offsets-40 broker=1] Log loaded for partition
__consumer_offsets-40 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,612] INFO [Broker id=1]
Leader __consumer_offsets-40 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,635] INFO [LogLoader
partition=__consumer_offsets-47, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,639] INFO Created log
for partition __consumer_offsets-47 in
/var/lib/kafka/data/__consumer_offsets-47 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,640] INFO [Partition
__consumer_offsets-47 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-47
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,640] INFO [Partition
__consumer_offsets-47 broker=1] Log loaded for partition
__consumer_offsets-47 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,640] INFO [Broker id=1]
Leader __consumer_offsets-47 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,654] INFO [LogLoader
partition=__consumer_offsets-17, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,665] INFO Created log
for partition __consumer_offsets-17 in
/var/lib/kafka/data/__consumer_offsets-17 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,665] INFO [Partition
__consumer_offsets-17 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-17
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,665] INFO [Partition
__consumer_offsets-17 broker=1] Log loaded for partition
__consumer_offsets-17 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,666] INFO [Broker id=1]
Leader __consumer_offsets-17 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,678] INFO [LogLoader
partition=__consumer_offsets-32, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,682] INFO Created log
for partition __consumer_offsets-32 in
/var/lib/kafka/data/__consumer_offsets-32 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,689] INFO [Partition
__consumer_offsets-32 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-32
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,690] INFO [Partition
__consumer_offsets-32 broker=1] Log loaded for partition
__consumer_offsets-32 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,690] INFO [Broker id=1]
Leader __consumer_offsets-32 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,708] INFO [LogLoader
partition=__consumer_offsets-37, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,714] INFO Created log
for partition __consumer_offsets-37 in
/var/lib/kafka/data/__consumer_offsets-37 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,717] INFO [Partition
__consumer_offsets-37 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-37
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,717] INFO [Partition
__consumer_offsets-37 broker=1] Log loaded for partition
__consumer_offsets-37 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,718] INFO [Broker id=1]
Leader __consumer_offsets-37 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,744] INFO [LogLoader
partition=__consumer_offsets-7, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,748] INFO Created log
for partition __consumer_offsets-7 in
/var/lib/kafka/data/__consumer_offsets-7 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,748] INFO [Partition
__consumer_offsets-7 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-7
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,750] INFO [Partition
__consumer_offsets-7 broker=1] Log loaded for partition
__consumer_offsets-7 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,750] INFO [Broker id=1]
Leader __consumer_offsets-7 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,777] INFO [LogLoader
partition=__consumer_offsets-22, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,787] INFO Created log
for partition __consumer_offsets-22 in
/var/lib/kafka/data/__consumer_offsets-22 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,787] INFO [Partition
__consumer_offsets-22 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-22
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,788] INFO [Partition
__consumer_offsets-22 broker=1] Log loaded for partition
__consumer_offsets-22 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,790] INFO [Broker id=1]
Leader __consumer_offsets-22 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,807] INFO [LogLoader
partition=__consumer_offsets-29, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,810] INFO Created log
for partition __consumer_offsets-29 in
/var/lib/kafka/data/__consumer_offsets-29 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,812] INFO [Partition
__consumer_offsets-29 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-29
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,812] INFO [Partition
__consumer_offsets-29 broker=1] Log loaded for partition
__consumer_offsets-29 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,812] INFO [Broker id=1]
Leader __consumer_offsets-29 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,832] INFO [LogLoader
partition=__consumer_offsets-44, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,852] INFO Created log
for partition __consumer_offsets-44 in
/var/lib/kafka/data/__consumer_offsets-44 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,852] INFO [Partition
__consumer_offsets-44 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-44
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,853] INFO [Partition
__consumer_offsets-44 broker=1] Log loaded for partition
__consumer_offsets-44 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,853] INFO [Broker id=1]
Leader __consumer_offsets-44 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,873] INFO [LogLoader
partition=__consumer_offsets-14, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,885] INFO Created log
for partition __consumer_offsets-14 in
/var/lib/kafka/data/__consumer_offsets-14 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,885] INFO [Partition
__consumer_offsets-14 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-14
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,885] INFO [Partition
__consumer_offsets-14 broker=1] Log loaded for partition
__consumer_offsets-14 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,885] INFO [Broker id=1]
Leader __consumer_offsets-14 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,913] INFO [LogLoader
partition=__consumer_offsets-23, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,916] INFO Created log
for partition __consumer_offsets-23 in
/var/lib/kafka/data/__consumer_offsets-23 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,916] INFO [Partition
__consumer_offsets-23 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-23
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,916] INFO [Partition
__consumer_offsets-23 broker=1] Log loaded for partition
__consumer_offsets-23 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,916] INFO [Broker id=1]
Leader __consumer_offsets-23 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,949] INFO [LogLoader
partition=__consumer_offsets-38, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:41,956] INFO Created log
for partition __consumer_offsets-38 in
/var/lib/kafka/data/__consumer_offsets-38 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:41,959] INFO [Partition
__consumer_offsets-38 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-38
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,962] INFO [Partition
__consumer_offsets-38 broker=1] Log loaded for partition
__consumer_offsets-38 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:41,962] INFO [Broker id=1]
Leader __consumer_offsets-38 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:41,993] INFO [LogLoader
partition=__consumer_offsets-8, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,000] INFO Created log
for partition __consumer_offsets-8 in
/var/lib/kafka/data/__consumer_offsets-8 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,000] INFO [Partition
__consumer_offsets-8 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-8
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,001] INFO [Partition
__consumer_offsets-8 broker=1] Log loaded for partition
__consumer_offsets-8 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,001] INFO [Broker id=1]
Leader __consumer_offsets-8 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,030] INFO [LogLoader
partition=__consumer_offsets-45, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,042] INFO Created log
for partition __consumer_offsets-45 in
/var/lib/kafka/data/__consumer_offsets-45 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,042] INFO [Partition
__consumer_offsets-45 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-45
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,043] INFO [Partition
__consumer_offsets-45 broker=1] Log loaded for partition
__consumer_offsets-45 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,047] INFO [Broker id=1]
Leader __consumer_offsets-45 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,071] INFO [LogLoader
partition=__consumer_offsets-15, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,078] INFO Created log
for partition __consumer_offsets-15 in
/var/lib/kafka/data/__consumer_offsets-15 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,083] INFO [Partition
__consumer_offsets-15 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-15
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,083] INFO [Partition
__consumer_offsets-15 broker=1] Log loaded for partition
__consumer_offsets-15 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,083] INFO [Broker id=1]
Leader __consumer_offsets-15 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,110] INFO [LogLoader
partition=__consumer_offsets-30, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,112] INFO Created log
for partition __consumer_offsets-30 in
/var/lib/kafka/data/__consumer_offsets-30 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,112] INFO [Partition
__consumer_offsets-30 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-30
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,112] INFO [Partition
__consumer_offsets-30 broker=1] Log loaded for partition
__consumer_offsets-30 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,113] INFO [Broker id=1]
Leader __consumer_offsets-30 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,130] INFO [LogLoader
partition=__consumer_offsets-0, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,143] INFO Created log
for partition __consumer_offsets-0 in
/var/lib/kafka/data/__consumer_offsets-0 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,143] INFO [Partition
__consumer_offsets-0 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,143] INFO [Partition
__consumer_offsets-0 broker=1] Log loaded for partition
__consumer_offsets-0 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,143] INFO [Broker id=1]
Leader __consumer_offsets-0 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,156] INFO [LogLoader
partition=__consumer_offsets-35, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,159] INFO Created log
for partition __consumer_offsets-35 in
/var/lib/kafka/data/__consumer_offsets-35 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,159] INFO [Partition
__consumer_offsets-35 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-35
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,159] INFO [Partition
__consumer_offsets-35 broker=1] Log loaded for partition
__consumer_offsets-35 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,159] INFO [Broker id=1]
Leader __consumer_offsets-35 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,177] INFO [LogLoader
partition=__consumer_offsets-5, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,188] INFO Created log
for partition __consumer_offsets-5 in
/var/lib/kafka/data/__consumer_offsets-5 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,189] INFO [Partition
__consumer_offsets-5 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-5
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,189] INFO [Partition
__consumer_offsets-5 broker=1] Log loaded for partition
__consumer_offsets-5 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,189] INFO [Broker id=1]
Leader __consumer_offsets-5 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,210] INFO [LogLoader
partition=__consumer_offsets-20, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,223] INFO Created log
for partition __consumer_offsets-20 in
/var/lib/kafka/data/__consumer_offsets-20 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,223] INFO [Partition
__consumer_offsets-20 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-20
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,223] INFO [Partition
__consumer_offsets-20 broker=1] Log loaded for partition
__consumer_offsets-20 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,224] INFO [Broker id=1]
Leader __consumer_offsets-20 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,238] INFO [LogLoader
partition=__consumer_offsets-27, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,242] INFO Created log
for partition __consumer_offsets-27 in
/var/lib/kafka/data/__consumer_offsets-27 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,243] INFO [Partition
__consumer_offsets-27 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-27
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,243] INFO [Partition
__consumer_offsets-27 broker=1] Log loaded for partition
__consumer_offsets-27 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,243] INFO [Broker id=1]
Leader __consumer_offsets-27 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,258] INFO [LogLoader
partition=__consumer_offsets-42, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,261] INFO Created log
for partition __consumer_offsets-42 in
/var/lib/kafka/data/__consumer_offsets-42 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,261] INFO [Partition
__consumer_offsets-42 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-42
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,261] INFO [Partition
__consumer_offsets-42 broker=1] Log loaded for partition
__consumer_offsets-42 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,261] INFO [Broker id=1]
Leader __consumer_offsets-42 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,273] INFO [LogLoader
partition=__consumer_offsets-12, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,276] INFO Created log
for partition __consumer_offsets-12 in
/var/lib/kafka/data/__consumer_offsets-12 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,277] INFO [Partition
__consumer_offsets-12 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-12
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,277] INFO [Partition
__consumer_offsets-12 broker=1] Log loaded for partition
__consumer_offsets-12 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,277] INFO [Broker id=1]
Leader __consumer_offsets-12 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,307] INFO [LogLoader
partition=__consumer_offsets-21, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,315] INFO Created log
for partition __consumer_offsets-21 in
/var/lib/kafka/data/__consumer_offsets-21 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,318] INFO [Partition
__consumer_offsets-21 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-21
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,318] INFO [Partition
__consumer_offsets-21 broker=1] Log loaded for partition
__consumer_offsets-21 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,318] INFO [Broker id=1]
Leader __consumer_offsets-21 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,336] INFO [LogLoader
partition=__consumer_offsets-36, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,342] INFO Created log
for partition __consumer_offsets-36 in
/var/lib/kafka/data/__consumer_offsets-36 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,342] INFO [Partition
__consumer_offsets-36 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-36
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,343] INFO [Partition
__consumer_offsets-36 broker=1] Log loaded for partition
__consumer_offsets-36 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,343] INFO [Broker id=1]
Leader __consumer_offsets-36 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,362] INFO [LogLoader
partition=__consumer_offsets-6, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,381] INFO Created log
for partition __consumer_offsets-6 in
/var/lib/kafka/data/__consumer_offsets-6 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,390] INFO [Partition
__consumer_offsets-6 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-6
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,390] INFO [Partition
__consumer_offsets-6 broker=1] Log loaded for partition
__consumer_offsets-6 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,391] INFO [Broker id=1]
Leader __consumer_offsets-6 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,451] INFO [LogLoader
partition=__consumer_offsets-43, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,488] INFO Created log
for partition __consumer_offsets-43 in
/var/lib/kafka/data/__consumer_offsets-43 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,489] INFO [Partition
__consumer_offsets-43 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-43
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,492] INFO [Partition
__consumer_offsets-43 broker=1] Log loaded for partition
__consumer_offsets-43 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,492] INFO [Broker id=1]
Leader __consumer_offsets-43 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,529] INFO [LogLoader
partition=__consumer_offsets-13, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,533] INFO Created log
for partition __consumer_offsets-13 in
/var/lib/kafka/data/__consumer_offsets-13 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,533] INFO [Partition
__consumer_offsets-13 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-13
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,533] INFO [Partition
__consumer_offsets-13 broker=1] Log loaded for partition
__consumer_offsets-13 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,533] INFO [Broker id=1]
Leader __consumer_offsets-13 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
kafka | [2022-12-09 02:48:42,554] INFO [LogLoader
partition=__consumer_offsets-28, dir=/var/lib/kafka/data]
Loading producer state till offset 0 with message format version
2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:48:42,556] INFO Created log
for partition __consumer_offsets-28 in
/var/lib/kafka/data/__consumer_offsets-28 with properties
{cleanup.policy=compact, compression.type="producer",
segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2022-12-09 02:48:42,565] INFO [Partition
__consumer_offsets-28 broker=1] No checkpointed
highwatermark is found for partition __consumer_offsets-28
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,569] INFO [Partition
__consumer_offsets-28 broker=1] Log loaded for partition
__consumer_offsets-28 with initial high watermark 0
(kafka.cluster.Partition)
kafka | [2022-12-09 02:48:42,570] INFO [Broker id=1]
Leader __consumer_offsets-28 starts at leader epoch 0 from
offset 0 with high watermark 0 ISR [1] addingReplicas []
removingReplicas []. Previous leader epoch was -1.
(state.change.logger)
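(With all 50 logs created and leadership taken, the broker now acknowledges each transition back to the controller as complete, again through the state-change logger. From this point the group coordinator backed by __consumer_offsets can answer requests; a sketch under the same assumptions as above, where an empty result simply means no consumer group has joined yet:

# the coordinator answers once __consumer_offsets is live; the list stays empty until a consumer joins
docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --list
)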
kafka | [2022-12-09 02:48:42,586] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-3 (state.change.logger)
kafka | [2022-12-09 02:48:42,586] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-18 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-41 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-10 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-33 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-48 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-19 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-34 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-4 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-11 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-26 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-49 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-39 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-9 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-24 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-31 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-46 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-1 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-16 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-2 (state.change.logger)
kafka | [2022-12-09 02:48:42,587] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-25 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-40 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-47 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-17 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-32 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-37 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-7 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-22 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-29 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-44 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-14 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-23 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-38 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-8 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-45 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-15 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-30 (state.change.logger)
kafka | [2022-12-09 02:48:42,588] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-0 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-35 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-5 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-20 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-27 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-42 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-12 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-21 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-36 (state.change.logger)
kafka | [2022-12-09 02:48:42,589] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-6 (state.change.logger)
kafka | [2022-12-09 02:48:42,590] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-43 (state.change.logger)
kafka | [2022-12-09 02:48:42,590] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-13 (state.change.logger)
kafka | [2022-12-09 02:48:42,590] TRACE [Broker
id=1] Completed LeaderAndIsr request correlationId 3 from
controller 1 epoch 1 for the become-leader transition for
partition __consumer_offsets-28 (state.change.logger)
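
At this point broker 1 has completed the become-leader transition for all 50 partitions of __consumer_offsets. The leadership layout can be confirmed from another terminal (a minimal sketch, assuming the Confluent image puts the Kafka CLI tools on the PATH and that the listener is reachable at localhost:9092 inside the container, as these log lines suggest):

docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
    --describe --topic __consumer_offsets
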
kafka | [2022-12-09 02:48:42,622] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 3 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,666] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-3 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,730] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 18 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,733] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-18 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 41 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-41 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 10 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-10 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 33 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-33 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 48 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-48 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,734] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 19 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-19 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 34 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-34 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 4 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-4 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 11 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-11 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 26 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-26 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,736] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 49 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,738] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-49 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,739] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 39 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,739] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-39 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,739] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 9 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,739] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-9 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 24 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-24 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 31 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-31 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 46 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-46 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 1 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-1 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,740] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 16 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-16 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 2 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-2 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 25 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-25 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 40 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-40 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 47 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-47 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 17 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,741] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-17 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 32 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-32 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 37 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-37 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 7 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-7 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 22 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-22 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 29 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,742] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-29 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,743] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 44 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,743] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-44 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,743] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 14 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,743] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-14 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,743] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 23 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,743] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-23 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 38 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-38 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 8 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-8 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 45 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-45 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,744] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 15 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,745] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-15 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,745] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 30 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,745] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-30 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,745] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 0 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,745] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-0 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,745] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 35 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,748] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-35 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,752] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 5 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-5 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 20 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-20 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 27 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-27 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 42 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-42 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 12 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,753] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-12 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 21 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-21 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 36 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-36 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 6 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-6 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 43 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-43 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,754] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 13 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,755] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-13 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,755] INFO
[GroupCoordinator 1]: Elected as the group coordinator for
partition 28 in epoch 0
(kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:42,755] INFO
[GroupMetadataManager brokerId=1] Scheduling loading of
offsets and group metadata from __consumer_offsets-28 for
epoch 0 (kafka.coordinator.group.GroupMetadataManager)
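
Each "Elected as the group coordinator for partition N" line pairs with the __consumer_offsets partition that broker 1 just became leader for; Kafka maps a consumer group to its coordinator partition as abs(groupId.hashCode()) % offsets.topic.num.partitions (50 here, matching the 50 elections above). Once an application group exists, its coordinator and state can be checked with the stock CLI (a sketch, assuming kafka-consumer-groups is on the container PATH; my-group is a placeholder group id, not one from this log):

docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --list
docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 \
    --describe --group my-group --state
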
kafka | [2022-12-09 02:48:42,756] INFO [Broker id=1]
Finished LeaderAndIsr request in 3141ms correlationId 3 from
controller 1 for 50 partitions (state.change.logger)
kafka | [2022-12-09 02:48:42,765] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=7c8kJ5UBR5yIaaALUBePYg, partitionErrors=[
    LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0),
    LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)
])]) for request LEADER_AND_ISR with correlation id 3 sent to broker localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 02:48:42,786] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=13, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-13 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,787] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=46, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-46 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,787] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=9, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-9 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,787] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=42, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-42 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,787] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=21, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-21 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=17, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-17 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=30, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-30 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=26, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-26 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=5, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-5 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=38, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-38 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=1, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-1 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=34, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-34 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=16, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-16 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=45, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-45 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=12, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-12 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-41 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=24, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-24 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=20, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-20 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=49, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-49 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=0, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-0 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,788] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=29, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-29 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=25, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-25 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=8, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-8 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=37, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-37 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=4, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-4 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=33, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-33 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=15, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-15 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=48, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-48 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=11, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-11 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=44, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-44 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=23, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-23 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=19, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-19 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=32, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-32 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=28, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-28 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=7, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-7 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=40, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-40 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=3, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-3 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=36, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-36 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=47, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-47 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=14, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-14 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=43, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-43 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,789] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=10, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-10 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=22, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-22 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=18, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-18 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=31, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-31 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=27, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-27 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=39, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-39 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=6, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-6 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=35, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-35 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] TRACE [Broker
id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets'
, partitionIndex=2, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1],
offlineReplicas=[]) for partition __consumer_offsets-2 in
response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,790] INFO [Broker id=1]
Add 50 partitions and deleted 0 partitions from metadata cache
in response to UpdateMetadata request sent by controller 1
epoch 1 with correlation id 4 (state.change.logger)
kafka | [2022-12-09 02:48:42,792] TRACE [Controller
id=1 epoch=1] Received response
UpdateMetadataResponseData(errorCode=0) for request
UPDATE_METADATA with correlation id 4 sent to broker
localhost:9092 (id: 1 rack: null) (state.change.logger)
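
With the UpdateMetadata round trip acknowledged, broker 1's metadata cache and the controller agree on the leader for every __consumer_offsets partition. Because this stack runs ZooKeeper-based Kafka, the same leadership state can be read straight out of ZooKeeper (a sketch, assuming zookeeper-shell ships in the zookeeper container and the client port is the default 2181):

docker exec zookeeper zookeeper-shell localhost:2181 \
    get /brokers/topics/__consumer_offsets/partitions/0/state
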
kafka | [2022-12-09 02:48:42,838] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-3 in 138
milliseconds for epoch 0, of which 58 milliseconds was spent in
the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,854] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-18 in 121
milliseconds for epoch 0, of which 120 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,858] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-41 in 124
milliseconds for epoch 0, of which 124 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,859] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-10 in 125
milliseconds for epoch 0, of which 124 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,859] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-33 in 125
milliseconds for epoch 0, of which 125 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,859] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-48 in 125
milliseconds for epoch 0, of which 125 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,860] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-19 in 123
milliseconds for epoch 0, of which 123 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,863] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-34 in 124
milliseconds for epoch 0, of which 124 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,867] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-4 in 131
milliseconds for epoch 0, of which 127 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,868] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-11 in 132
milliseconds for epoch 0, of which 131 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,868] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-26 in 132
milliseconds for epoch 0, of which 132 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,873] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-49 in 135
milliseconds for epoch 0, of which 130 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,873] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-0 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,874] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-10 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,874] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-39 in 135
milliseconds for epoch 0, of which 134 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,875] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-20 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,876] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-40 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,876] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-30 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,877] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-9 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,877] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-11 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,877] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-31 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,877] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-39 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,877] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-9 in 135
milliseconds for epoch 0, of which 135 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,877] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-13 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,878] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-24 in 138
milliseconds for epoch 0, of which 137 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,879] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-18 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,879] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-31 in 139
milliseconds for epoch 0, of which 139 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,879] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-22 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,879] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-8 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,879] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-32 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,879] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-43 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,880] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-29 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,880] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-34 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,880] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-1 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,881] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-46 in 141
milliseconds for epoch 0, of which 139 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,882] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-1 in 142
milliseconds for epoch 0, of which 141 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,882] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-16 in 141
milliseconds for epoch 0, of which 141 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,883] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-2 in 142
milliseconds for epoch 0, of which 141 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,883] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-25 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,883] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-40 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,883] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-47 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,884] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-17 in 142
milliseconds for epoch 0, of which 141 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,884] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-32 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,884] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-37 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,884] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-7 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,885] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-22 in 143
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,885] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-29 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,885] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-44 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,885] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-14 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,880] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-6 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,885] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-23 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,886] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-38 in 142
milliseconds for epoch 0, of which 141 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,886] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-8 in 142
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,885] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-41 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,887] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-45 in 143
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,889] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-27 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,889] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-48 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,889] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-15 in 144
milliseconds for epoch 0, of which 142 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,890] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-5 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,890] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-15 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,890] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-35 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,890] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-30 in 145
milliseconds for epoch 0, of which 144 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,890] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-25 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,890] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-46 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,890] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-26 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,890] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-0 in 145
milliseconds for epoch 0, of which 145 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-36 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-44 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-16 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,891] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-35 in 139
milliseconds for epoch 0, of which 138 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,891] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-5 in 138
milliseconds for epoch 0, of which 138 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,891] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-20 in 138
milliseconds for epoch 0, of which 138 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-37 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-17 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-45 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,891] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-3 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,892] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-24 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,892] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-38 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,892] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-33 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,892] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-23 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,891] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-27 in 138
milliseconds for epoch 0, of which 138 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
schema | [2022-12-09 02:48:42,892] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-28 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,892] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-2 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-12 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-19 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-14 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-4 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-47 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-49 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,893] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-42 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,894] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-7 to 0 since the
associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
schema | [2022-12-09 02:48:42,894] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Resetting the
last seen epoch of partition __consumer_offsets-21 to 0 since
the associated topicId changed from null to
7c8kJ5UBR5yIaaALUBePYg (org.apache.kafka.clients.Metadata)
kafka | [2022-12-09 02:48:42,894] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-42 in 141
milliseconds for epoch 0, of which 138 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,895] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-12 in 141
milliseconds for epoch 0, of which 140 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,897] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-21 in 143
milliseconds for epoch 0, of which 141 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,903] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-36 in 149
milliseconds for epoch 0, of which 143 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,913] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-6 in 159
milliseconds for epoch 0, of which 151 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,914] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-43 in 160
milliseconds for epoch 0, of which 160 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,915] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-13 in 160
milliseconds for epoch 0, of which 160 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:48:42,915] INFO
[GroupMetadataManager brokerId=1] Finished loading offsets
and group metadata from __consumer_offsets-28 in 160
milliseconds for epoch 0, of which 160 milliseconds was spent
in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
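
At this point the broker has finished loading offsets and group
metadata for all fifty __consumer_offsets partitions, so the group
coordinator is ready to serve clients. As a sketch, the internal
topic can be inspected with the kafka-topics CLI bundled in the
image; the container name matches the log prefix, while the
kafka-local:9095 bootstrap address is taken from the
schema-registry log below and may differ in your compose file:

  # describe the internal offsets topic (50 partitions by default);
  # the bootstrap address is an assumption taken from these logs
  docker exec kafka kafka-topics --bootstrap-server kafka-local:9095 \
    --describe --topic __consumer_offsets
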
schema | [2022-12-09 02:48:42,928] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Discovered
group coordinator kafka-local:9095 (id: 2147483646 rack: null)
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:48:42,936] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] (Re-)joining
group
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
kafka | [2022-12-09 02:48:43,089] INFO
[GroupCoordinator 1]: Dynamic member with unknown member
id joins group schema-registry in Empty state. Created a new
member id sr-1-ea6891bc-b10a-4953-818b-02c90814a890 and
request the member to rejoin with this id.
(kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 02:48:43,118] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Request
joining group due to: need to re-join with the given member-id
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:48:43,118] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] (Re-)joining
group
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
kafka | [2022-12-09 02:48:43,164] INFO
[GroupCoordinator 1]: Preparing to rebalance group schema-
registry in state PreparingRebalance with old generation 0
(__consumer_offsets-29) (reason: Adding new member sr-1-
ea6891bc-b10a-4953-818b-02c90814a890 with group instance
id None) (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:48:43,215] INFO
[GroupCoordinator 1]: Stabilized group schema-registry
generation 1 (__consumer_offsets-29) with 1 members
(kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 02:48:43,247] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Successfully
joined group with generation Generation{generationId=1,
memberId='sr-1-ea6891bc-b10a-4953-818b-02c90814a890',
protocol='v0'}
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
kafka | [2022-12-09 02:48:43,357] INFO
[GroupCoordinator 1]: Assignment received from leader sr-1-
ea6891bc-b10a-4953-818b-02c90814a890 for group schema-
registry for generation 1. The group has 1 members, 0 of which
are static. (kafka.coordinator.group.GroupCoordinator)
schema | [2022-12-09 02:48:43,478] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Successfully
synced group in generation Generation{generationId=1,
memberId='sr-1-ea6891bc-b10a-4953-818b-02c90814a890',
protocol='v0'}
(io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaR
egistryCoordinator)
schema | [2022-12-09 02:48:43,485] INFO Finished
rebalance with leader election result: Assignment{version=1,
error=0, leader='sr-1-ea6891bc-b10a-4953-818b-
02c90814a890',
leaderIdentity=version=1,host=schema,port=9091,scheme=http,
leaderEligibility=true}
(io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGro
upLeaderElector)
schema | [2022-12-09 02:48:43,592] INFO Wait to catch
up until the offset at 1
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
schema | [2022-12-09 02:48:43,604] INFO Reached
offset at 1
(io.confluent.kafka.schemaregistry.storage.KafkaStore)
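
The exchange above is Schema Registry's Kafka-based leader
election: sr-1 joins the schema-registry group, the broker's
GroupCoordinator stabilizes generation 1, and the single member
elects itself leader (host=schema, port=9091), after which the
KafkaStore replays its backing topic up to offset 1. A hedged way
to confirm the group exists from the broker side, reusing the same
assumed bootstrap address:

  # the election group should show up alongside regular consumer groups
  docker exec kafka kafka-consumer-groups --bootstrap-server kafka-local:9095 --list
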
schema | [2022-12-09 02:48:43,940] INFO Binding
SchemaRegistryRestApplication to all listeners.
(io.confluent.rest.Application)
schema | [2022-12-09 02:48:44,408] INFO jetty-
9.4.44.v20210927; built: 2021-09-27T23:02:44.612Z; git:
8da83308eeca865e495e53ef315a249d63ba9332; jvm
11.0.14.1+1-LTS (org.eclipse.jetty.server.Server)
schema | [2022-12-09 02:48:44,770] INFO
DefaultSessionIdManager workerName=node0
(org.eclipse.jetty.server.session)
schema | [2022-12-09 02:48:44,770] INFO No
SessionScavenger set, using defaults
(org.eclipse.jetty.server.session)
schema | [2022-12-09 02:48:44,778] INFO node0
Scavenging every 600000ms (org.eclipse.jetty.server.session)
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.ConfigResourc
e registered in SERVER runtime does not implement any
provider interfaces applicable in the SERVER runtime. Due to
constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.ConfigResourc
e will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.ContextsResou
rce registered in SERVER runtime does not implement any
provider interfaces applicable in the SERVER runtime. Due to
constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.ContextsResou
rce will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.SubjectsResour
ce registered in SERVER runtime does not implement any
provider interfaces applicable in the SERVER runtime. Due to
constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.SubjectsResour
ce will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.SchemasResou
rce registered in SERVER runtime does not implement any
provider interfaces applicable in the SERVER runtime. Due to
constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.SchemasResou
rce will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.SubjectVersion
sResource registered in SERVER runtime does not implement
any provider interfaces applicable in the SERVER runtime. Due
to constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.SubjectVersion
sResource will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.CompatibilityR
esource registered in SERVER runtime does not implement any
provider interfaces applicable in the SERVER runtime. Due to
constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.CompatibilityR
esource will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.ModeResource
registered in SERVER runtime does not implement any provider
interfaces applicable in the SERVER runtime. Due to constraint
configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.ModeResource
will be ignored.
schema | Dec 09, 2022 2:48:47 AM
org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
schema | WARNING: A provider
io.confluent.kafka.schemaregistry.rest.resources.ServerMetadat
aResource registered in SERVER runtime does not implement
any provider interfaces applicable in the SERVER runtime. Due
to constraint configuration problems the provider
io.confluent.kafka.schemaregistry.rest.resources.ServerMetadat
aResource will be ignored.
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:47,280Z", "level": "INFO", "component":
"o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message":
"[controller/292] [Main.cc@114] controller (64 bit): Version
7.10.2 (Build 40a3af639d4698) Copyright (c) 2021
Elasticsearch BV" }
schema | [2022-12-09 02:48:48,176] INFO HV000001:
Hibernate Validator 6.1.7.Final
(org.hibernate.validator.internal.util.Version)
schema | [2022-12-09 02:48:49,472] INFO Started
o.e.j.s.ServletContextHandler@3157e4c0{/,null,AVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler)
schema | [2022-12-09 02:48:49,572] INFO Started
o.e.j.s.ServletContextHandler@4e31c3ec{/ws,null,AVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler)
schema | [2022-12-09 02:48:49,677] INFO Started
NetworkTrafficServerConnector@4426bff1{HTTP/1.1, (http/1.1,
h2c)}{schema:9091}
(org.eclipse.jetty.server.AbstractConnector)
schema | [2022-12-09 02:48:49,680] INFO Started
@30640ms (org.eclipse.jetty.server.Server)
schema | [2022-12-09 02:48:49,685] INFO Schema
Registry version: 7.1.1 commitId:
5ed926f555f75683a1d34946ef6bc855bfbd1bbe
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
schema | [2022-12-09 02:48:49,685] INFO Server
started, listening for requests...
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
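
Schema Registry is now serving its REST API on schema:9091 (the
listener from the Jetty connector line above). A minimal smoke
test, assuming curl is available inside the container; otherwise
run it from the host against whatever port the compose file maps:

  # a fresh registry should return an empty subject list: []
  docker exec schema curl -s http://schema:9091/subjects
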
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:50,923Z", "level": "INFO", "component":
"o.e.t.NettyAllocator", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "creating
NettyAllocator with the following configs:
[name=elasticsearch_configured, chunk_size=256kb,
suggested_max_allocation_size=256kb,
factors={es.unsafe.use_netty_default_chunk_and_page_size=false,
g1gc_enabled=true, g1gc_region_size=1mb}]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:51,102Z", "level": "INFO", "component":
"o.e.d.DiscoveryModule", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "using discovery
type [single-node] and seed hosts providers [settings]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:52,364Z", "level": "WARN", "component":
"o.e.g.DanglingIndicesState", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message":
"gateway.auto_import_dangling_indices is disabled, dangling
indices will not be automatically detected or imported and must
be managed manually" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:53,351Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "initialized" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:53,352Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "starting ..." }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:53,658Z", "level": "INFO", "component":
"o.e.t.TransportService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "publish_address
{elasticsearch/172.20.0.4:9300}, bound_addresses
{172.20.0.4:9300}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,089Z", "level": "WARN", "component":
"o.e.b.BootstrapChecks", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "initial heap size
[536870912] not equal to maximum heap size [1145044992];
this can cause resize pauses" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,090Z", "level": "WARN", "component":
"o.e.b.BootstrapChecks", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "system call filters
failed to install; check the logs and fix your configuration or
disable system call filters at your own risk" }
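
Both bootstrap warnings are common in local Docker runs. The
system-call-filter one can usually be ignored on a dev machine;
the heap one notes that -Xms (512 MB here) differs from -Xmx
(about 1 GB), which can cause resize pauses. A sketch of pinning
them equal via the ES_JAVA_OPTS variable the elasticsearch image
honors (heap values are illustrative; normally this belongs in the
compose file's environment section rather than docker run):

  # equal initial/max heap avoids the resize-pause warning
  docker run -e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
    -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch:7.10.2
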
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,110Z", "level": "INFO", "component":
"o.e.c.c.Coordinator", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "setting initial
configuration to
VotingConfiguration{p6XVXI47QGCi1EGg95j87Q}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,351Z", "level": "INFO", "component":
"o.e.c.s.MasterService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "elected-as-master
([1] nodes joined)[{4921ed443d90}
{p6XVXI47QGCi1EGg95j87Q}{zqpUx0-USSiPMPJ3zuhLAA}
{elasticsearch}{172.20.0.4:9300}{cdhilmrstw}
{ml.machine_memory=8233017344, xpack.installed=true,
transform.node=true, ml.max_open_jobs=20} elect leader,
_BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1,
version: 1, delta: master node changed {previous [], current
[{4921ed443d90}{p6XVXI47QGCi1EGg95j87Q}{zqpUx0-
USSiPMPJ3zuhLAA}{elasticsearch}{172.20.0.4:9300}
{cdhilmrstw}{ml.machine_memory=8233017344,
xpack.installed=true, transform.node=true,
ml.max_open_jobs=20}]}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,437Z", "level": "INFO", "component":
"o.e.c.c.CoordinationState", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "cluster UUID set
to [GGWgtNIOQnypQKH2vGtr1A]" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,487Z", "level": "INFO", "component":
"o.e.c.s.ClusterApplierService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "master node
changed {previous [], current [{4921ed443d90}
{p6XVXI47QGCi1EGg95j87Q}{zqpUx0-USSiPMPJ3zuhLAA}
{elasticsearch}{172.20.0.4:9300}{cdhilmrstw}
{ml.machine_memory=8233017344, xpack.installed=true,
transform.node=true, ml.max_open_jobs=20}]}, term: 1,
version: 1, reason: Publication{term=1, version=1}" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,578Z", "level": "INFO", "component":
"o.e.h.AbstractHttpServerTransport", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message":
"publish_address {elasticsearch/172.20.0.4:9200},
bound_addresses {172.20.0.4:9200}", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,580Z", "level": "INFO", "component": "o.e.n.Node",
"cluster.name": "docker-cluster", "node.name":
"4921ed443d90", "message": "started", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,600Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-anomalies-] for [ml], because it doesn't
exist", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,602Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-state] for [ml], because it doesn't exist",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,603Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-config] for [ml], because it doesn't exist",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,604Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-inference-000003] for [ml], because it
doesn't exist", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,615Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-meta] for [ml], because it doesn't exist",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,618Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-notifications-000001] for [ml], because it
doesn't exist", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,625Z", "level": "INFO", "component":
"o.e.x.c.t.IndexTemplateRegistry", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
legacy template [.ml-stats] for [ml], because it doesn't exist",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:54,939Z", "level": "INFO", "component":
"o.e.g.GatewayService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "recovered [0]
indices into cluster_state", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:56,332Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-state] for index patterns [.ml-state*]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:56,673Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-anomalies-] for index patterns [.ml-
anomalies-*]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:56,822Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-config] for index patterns [.ml-config]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:56,939Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-inference-000003] for index patterns
[.ml-inference-000003]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,075Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-stats] for index patterns [.ml-stats-*]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,168Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-meta] for index patterns [.ml-meta]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,255Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.ml-notifications-000001] for index patterns
[.ml-notifications-000001]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,338Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding component template [logs-settings]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,411Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding component template [synthetics-settings]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,485Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding component template [metrics-settings]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,585Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding component template [synthetics-mappings]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,686Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding component template [metrics-mappings]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,762Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding component template [logs-mappings]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,891Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [.watch-history-12] for index patterns
[.watcher-history-12*]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:57,968Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [.triggered_watches] for index patterns
[.triggered_watches*]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,056Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [.watches] for index patterns
[.watches*]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,131Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [ilm-history] for index patterns [ilm-
history-3*]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,202Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [.slm-history] for index patterns [.slm-
history-3*]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,278Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.monitoring-alerts-7] for index patterns
[.monitoring-alerts-7]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,409Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.monitoring-es] for index patterns
[.monitoring-es-7-*]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,531Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.monitoring-kibana] for index patterns
[.monitoring-kibana-7-*]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,637Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.monitoring-logstash] for index patterns
[.monitoring-logstash-7-*]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,767Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding template [.monitoring-beats] for index patterns
[.monitoring-beats-7-*]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,876Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [synthetics] for index patterns
[synthetics-*-*]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:58,965Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [metrics] for index patterns [metrics-*-
*]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:59,173Z", "level": "INFO", "component":
"o.e.c.m.MetadataIndexTemplateService", "cluster.name":
"docker-cluster", "node.name": "4921ed443d90", "message":
"adding index template [logs] for index patterns [logs-*-*]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:48:59,461Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [ml-size-based-ilm-policy]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:00,649Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [logs]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:00,794Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [metrics]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:00,886Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [synthetics]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:00,951Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [watch-history-ilm-policy]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:01,018Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [ilm-history-ilm-policy]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:01,250Z", "level": "INFO", "component":
"o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "adding
index lifecycle policy [slm-history-ilm-policy]", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:49:01,926Z", "level": "INFO", "component":
"o.e.l.LicenseService", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message": "license
[89d1dda8-140f-4e51-8e3c-42ef4bfec8a6] mode [basic] -
valid", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
kafka-ui | 2022-12-09 02:49:06,165 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:49:06,473 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
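
From here kafka-ui settles into its steady state, polling the
hiveLocal cluster for metrics every thirty seconds; the DEBUG
pairs that follow are just that scheduler. The UI's host port is
not visible in this log, but it can be looked up directly:

  # show which host port the kafka-ui container publishes
  docker port kafka-ui
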
kafka-ui | 2022-12-09 02:49:36,166 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:49:36,347 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:50:06,177 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:50:06,437 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:50:36,160 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:50:36,311 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:51:06,163 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:51:06,751 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:51:36,168 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:51:36,366 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:52:06,163 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:52:06,371 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:52:36,166 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:52:36,308 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:53:06,161 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:53:06,376 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:53:22,139] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:53:22,151] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:53:22,546] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:53:22,729] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
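
This is the controller's periodic auto leader balancing pass
(every five minutes by default). With a single broker, every
partition already sits on its preferred leader, hence the empty
HashMap and the 0.0 imbalance ratio. The same election can also be
requested by hand; a sketch using the bundled CLI and the same
assumed bootstrap address as above:

  # trigger a preferred-replica election across all partitions
  docker exec kafka kafka-leader-election --bootstrap-server kafka-local:9095 \
    --election-type PREFERRED --all-topic-partitions
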
kafka-ui | 2022-12-09 02:53:36,160 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:53:36,405 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:54:06,147 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:54:06,307 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:54:36,165 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:54:36,281 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:55:06,165 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:55:06,250 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:55:23,817Z", "level": "INFO", "component":
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "[order]
creating index, cause [api], templates [], shards [1]/[1]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:55:24,825Z", "level": "INFO", "component":
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message": "[security]
creating index, cause [api], templates [], shards [1]/[1]",
"cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
elasticsearch | {"type": "server", "timestamp": "2022-12-
09T02:55:25,936Z", "level": "INFO", "component":
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-
cluster", "node.name": "4921ed443d90", "message":
"[saleprogram] creating index, cause [api], templates [], shards
[1]/[1]", "cluster.uuid": "GGWgtNIOQnypQKH2vGtr1A",
"node.id": "p6XVXI47QGCi1EGg95j87Q" }
kafka-ui | 2022-12-09 02:55:36,176 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:55:36,386 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:56:06,183 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:56:06,310 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:56:33,100] INFO [Admin Manager on Broker 1]: Error processing create topic request CreatableTopic(name='store.hive-participation-service.security', numPartitions=3, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='cleanup.policy', value='compact')]) (kafka.server.ZkAdminManager)
kafka | org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
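The InvalidReplicationFactorException above is expected on this single-broker stack: the application asks for store.hive-participation-service.security with replicationFactor=3, but only broker 1 exists. One workaround is to pre-create the compacted topic with a replication factor the local broker can satisfy — a sketch, assuming the standard kafka-topics tool shipped inside the Confluent broker image:

# Pre-create the topic with replication factor 1 so the
# application's own create request finds it already present
docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --create --topic store.hive-participation-service.security \
  --partitions 3 --replication-factor 1 \
  --config cleanup.policy=compact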
kafka | [2022-12-09 02:56:33,255] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group hive-participation-streams-app in Empty state. Created a new member id hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-8965249c-1744-401d-89ef-0945aa0563cf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,270] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-streams-app in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-8965249c-1744-401d-89ef-0945aa0563cf with group instance id None) (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,294] INFO Creating topic response.data-warehouse-svc.warehousedata-event with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2022-12-09 02:56:33,319] INFO [GroupCoordinator 1]: Stabilized group hive-participation-streams-app generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,388] INFO [GroupCoordinator 1]: Assignment received from leader hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-8965249c-1744-401d-89ef-0945aa0563cf for group hive-participation-streams-app for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,405] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group hive-participation-local in Empty state. Created a new member id consumer-hive-participation-local-1-d29e10b6-e8fb-4e53-a4b1-c062e054cb2e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,407] INFO [Controller id=1] New topics: [Set(response.data-warehouse-svc.warehousedata-event)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(response.data-warehouse-svc.warehousedata-event,Some(yk4UWuIuQDqgxcBKdJHQNQ),Map(response.data-warehouse-svc.warehousedata-event-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2022-12-09 02:56:33,409] INFO [Controller id=1] New partition creation callback for response.data-warehouse-svc.warehousedata-event-0 (kafka.controller.KafkaController)
kafka | [2022-12-09 02:56:33,418] INFO [Controller id=1 epoch=1] Changed partition response.data-warehouse-svc.warehousedata-event-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2022-12-09 02:56:33,422] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:56:33,431] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-local in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-hive-participation-local-1-d29e10b6-e8fb-4e53-a4b1-c062e054cb2e with group instance id None) (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,446] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition response.data-warehouse-svc.warehousedata-event-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2022-12-09 02:56:33,447] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:56:33,450] INFO [GroupCoordinator 1]: Stabilized group hive-participation-local generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,739] INFO [Controller id=1 epoch=1] Changed partition response.data-warehouse-svc.warehousedata-event-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
kafka | [2022-12-09 02:56:33,795] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='response.data-warehouse-svc.warehousedata-event', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition response.data-warehouse-svc.warehousedata-event-0 (state.change.logger)
kafka | [2022-12-09 02:56:33,796] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2022-12-09 02:56:33,844] INFO [GroupCoordinator 1]: Assignment received from leader consumer-hive-participation-local-1-d29e10b6-e8fb-4e53-a4b1-c062e054cb2e for group hive-participation-local for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:33,919] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:56:33,975] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:56:33,980] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='response.data-warehouse-svc.warehousedata-event', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 5 from controller 1 epoch 1 (state.change.logger)
kafka | [2022-12-09 02:56:33,984] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition response.data-warehouse-svc.warehousedata-event-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2022-12-09 02:56:33,986] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2022-12-09 02:56:34,093] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition response.data-warehouse-svc.warehousedata-event-0 (state.change.logger)
kafka | [2022-12-09 02:56:34,096] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(response.data-warehouse-svc.warehousedata-event-0) (kafka.server.ReplicaFetcherManager)
kafka | [2022-12-09 02:56:34,097] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:56:34,161] INFO [LogLoader partition=response.data-warehouse-svc.warehousedata-event-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2022-12-09 02:56:34,170] INFO Created log for partition response.data-warehouse-svc.warehousedata-event-0 in /var/lib/kafka/data/response.data-warehouse-svc.warehousedata-event-0 with properties {} (kafka.log.LogManager)
kafka | [2022-12-09 02:56:34,183] INFO [Partition response.data-warehouse-svc.warehousedata-event-0 broker=1] No checkpointed highwatermark is found for partition response.data-warehouse-svc.warehousedata-event-0 (kafka.cluster.Partition)
kafka | [2022-12-09 02:56:34,184] INFO [Partition response.data-warehouse-svc.warehousedata-event-0 broker=1] Log loaded for partition response.data-warehouse-svc.warehousedata-event-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2022-12-09 02:56:34,185] INFO [Broker id=1] Leader response.data-warehouse-svc.warehousedata-event-0 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [1] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2022-12-09 02:56:34,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition response.data-warehouse-svc.warehousedata-event-0 (state.change.logger)
kafka | [2022-12-09 02:56:34,217] INFO [Broker id=1] Finished LeaderAndIsr request in 239ms correlationId 5 from controller 1 for 1 partitions (state.change.logger)
kafka | [2022-12-09 02:56:34,257] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=yk4UWuIuQDqgxcBKdJHQNQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 02:56:34,283] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='response.data-warehouse-svc.warehousedata-event', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition response.data-warehouse-svc.warehousedata-event-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
kafka | [2022-12-09 02:56:34,284] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger)
kafka | [2022-12-09 02:56:34,286] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker localhost:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2022-12-09 02:56:34,426] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-local in state PreparingRebalance with old generation 1 (__consumer_offsets-26) (reason: Leader consumer-hive-participation-local-1-d29e10b6-e8fb-4e53-a4b1-c062e054cb2e re-joining group during Stable) (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:34,432] INFO [GroupCoordinator 1]: Stabilized group hive-participation-local generation 2 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:56:34,440] INFO [GroupCoordinator 1]: Assignment received from leader consumer-hive-participation-local-1-d29e10b6-e8fb-4e53-a4b1-c062e054cb2e for group hive-participation-local for generation 2. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka-ui | 2022-12-09 02:56:36,168 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:56:36,392 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:57:06,170 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:57:06,513 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:57:18,543] INFO [GroupCoordinator 1]: Member hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-8965249c-1744-401d-89ef-0945aa0563cf in group hive-participation-streams-app has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:57:18,547] INFO [GroupCoordinator 1]: Preparing to rebalance group hive-participation-streams-app in state PreparingRebalance with old generation 1 (__consumer_offsets-10) (reason: removing member hive-participation-streams-app-a6155938-52d5-47d9-bfc6-15aec5c4b305-StreamThread-1-consumer-8965249c-1744-401d-89ef-0945aa0563cf on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
kafka | [2022-12-09 02:57:18,549] INFO [GroupCoordinator 1]: Group hive-participation-streams-app with generation 2 is now empty (__consumer_offsets-10) (kafka.coordinator.group.GroupCoordinator)
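The sequence above — member failure on heartbeat expiration, rebalance, then an empty group — indicates the streams application's consumer went away, likely after the failed topic creation logged earlier. Group state can be inspected with the standard consumer-groups tool; a sketch, assuming the CLI bundled in the Confluent broker image:

# Show members, assigned partitions, and lag for the streams group
docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 \
  --describe --group hive-participation-streams-app
# List every group known to the broker
docker exec kafka kafka-consumer-groups --bootstrap-server localhost:9092 --list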
kafka-ui | 2022-12-09 02:57:36,165 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
schema | [2022-12-09 02:57:36,253] INFO [Producer
clientId=producer-1] Node -1 disconnected.
(org.apache.kafka.clients.NetworkClient)
kafka-ui | 2022-12-09 02:57:36,365 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
schema | [2022-12-09 02:57:36,695] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=schema-registry-schema-9091] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
schema | [2022-12-09 02:57:38,926] INFO [Schema
registry clientId=sr-1, groupId=schema-registry] Node -1
disconnected. (org.apache.kafka.clients.NetworkClient)
schema | [2022-12-09 02:57:38,957] INFO [Schema registry clientId=sr-1, groupId=schema-registry] Resetting the last seen epoch of partition response.data-warehouse-svc.warehousedata-event-0 to 0 since the associated topicId changed from null to yk4UWuIuQDqgxcBKdJHQNQ (org.apache.kafka.clients.Metadata)
kafka-ui | 2022-12-09 02:58:06,150 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:58:06,290 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 02:58:15,312] INFO [GroupMetadataManager brokerId=1] Group hive-participation-streams-app transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2022-12-09 02:58:22,728] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:58:22,731] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:58:22,753] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 02:58:22,754] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 02:58:27,791 WARN [parallel-4] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for TopicColumnsToSortDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui | 2022-12-09 02:58:27,833 WARN [parallel-4] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for SortOrderDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui | 2022-12-09 02:58:34,986 WARN [parallel-4] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for ConsumerGroupOrderingDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
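These HV000254 warnings come from Hibernate Validator inside the kafka-ui container and are harmless here: the UI's DTO classes were compiled without parameter-name metadata, so nothing on the host side needs fixing. For code you do control, the warning itself names the remedy — compile with -parameters so parameter names survive into the class file. A sketch, with MyDto.java standing in as a hypothetical source file:

# Keep parameter names in the class file for reflection
javac -parameters MyDto.java
# Build-tool equivalents:
#   Gradle: tasks.withType(JavaCompile) { options.compilerArgs << '-parameters' }
#   Maven:  <compilerArgs><arg>-parameters</arg></compilerArgs> in maven-compiler-plugin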
kafka-ui | 2022-12-09 02:58:36,150 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:58:36,439 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 02:59:06,160 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:59:06,269 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
elasticsearch | {"type": "deprecation", "timestamp": "2022-12-
09T02:59:16,345Z", "level": "DEPRECATION", "component":
"o.e.d.t.TransportInfo", "cluster.name": "docker-cluster",
"node.name": "4921ed443d90", "message":
"transport.publish_address was printed as [ip:port] instead of
[hostname/ip:port]. This format is deprecated and will change
to [hostname/ip:port] in a future version. Use -
Des.transport.cname_in_publish_address=true to enforce non-
deprecated formatting.", "cluster.uuid":
"GGWgtNIOQnypQKH2vGtr1A", "node.id":
"p6XVXI47QGCi1EGg95j87Q" }
kafka-ui | 2022-12-09 02:59:36,149 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 02:59:36,255 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:00:06,150 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:00:06,272 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:00:36,145 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:00:36,257 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:01:06,122 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:01:06,368 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:01:36,135 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:01:36,302 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:02:06,151 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:02:06,247 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:02:36,115 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:02:36,220 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:03:06,115 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:03:06,198 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 03:03:22,737] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:03:22,752] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:03:22,772] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:03:22,773] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 03:03:36,123 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:03:36,253 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:04:06,127 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:04:06,694 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:04:36,159 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:04:36,388 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:05:06,135 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:05:06,293 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:05:36,143 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:05:36,305 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:06:06,125 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:06:06,309 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:06:36,118 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:06:36,211 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:07:06,117 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:07:06,305 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:07:36,138 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:07:36,338 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:08:06,139 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:08:06,438 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 03:08:22,816] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:08:22,822] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:08:22,848] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:08:22,849] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 03:08:36,143 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:08:36,329 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:09:06,127 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:09:06,305 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:09:36,131 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:09:36,308 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:10:06,127 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:10:06,292 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:10:36,130 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:10:36,293 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:11:06,120 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:11:06,276 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:11:36,127 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:11:36,331 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:12:06,132 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:12:06,309 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:12:36,140 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:12:36,487 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:13:06,130 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:13:06,189 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 03:13:22,862] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:13:22,874] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:13:22,898] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:13:22,898] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 03:13:36,107 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:13:36,266 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:14:06,130 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:14:06,288 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:14:36,105 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:14:36,566 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:15:06,135 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:15:06,309 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:15:36,100 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:15:36,232 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:16:06,143 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:16:06,237 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:16:36,165 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:16:36,347 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:17:06,145 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:17:06,317 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:17:36,138 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:17:36,260 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:18:06,149 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:18:06,337 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 03:18:22,944] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:18:22,959] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:18:23,075] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:18:23,078] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 03:18:36,147 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:18:36,347 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:19:06,147 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:19:06,303 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:19:36,158 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:19:36,313 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:20:06,143 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:20:06,619 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:20:36,138 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:20:36,286 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:21:06,135 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:21:06,219 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:21:16,472 WARN [parallel-3] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for SeekTypeDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui | 2022-12-09 03:21:16,758 WARN [parallel-3] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for MessageFilterTypeDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui | 2022-12-09 03:21:16,780 WARN [parallel-3] o.h.v.i.p.j.JavaBeanExecutable: HV000254: Missing parameter metadata for SeekDirectionDTO(String, int, String), which declares implicit or synthetic parameters. Automatic resolution of generic type information for method parameters may yield incorrect results if multiple parameters have the same erasure. To solve this, compile your code with the '-parameters' flag.
kafka-ui | 2022-12-09 03:21:17,042 INFO [boundedElastic-4] o.a.k.c.c.ConsumerConfig: ConsumerConfig values:
kafka-ui | allow.auto.create.topics = false
kafka-ui | auto.commit.interval.ms = 5000
kafka-ui | auto.offset.reset = earliest
kafka-ui | bootstrap.servers = [kafka-local:9095]
kafka-ui | check.crcs = true
kafka-ui | client.dns.lookup = use_all_dns_ips
kafka-ui | client.id = kafka-ui-4d1a87dc-7219-41a5-b39d-8b5f7fbfa55b
kafka-ui | client.rack =
kafka-ui | connections.max.idle.ms = 540000
kafka-ui | default.api.timeout.ms = 60000
kafka-ui | enable.auto.commit = false
kafka-ui | exclude.internal.topics = true
kafka-ui | fetch.max.bytes = 52428800
kafka-ui | fetch.max.wait.ms = 500
kafka-ui | fetch.min.bytes = 1
kafka-ui | group.id = null
kafka-ui | group.instance.id = null
kafka-ui | heartbeat.interval.ms = 3000
kafka-ui | interceptor.classes = []
kafka-ui | internal.leave.group.on.close = true
kafka-ui | internal.throw.on.fetch.stable.offset.unsupported = false
kafka-ui | isolation.level = read_uncommitted
kafka-ui | key.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer
kafka-ui | max.partition.fetch.bytes = 1048576
kafka-ui | max.poll.interval.ms = 300000
kafka-ui | max.poll.records = 500
kafka-ui | metadata.max.age.ms = 300000
kafka-ui | metric.reporters = []
kafka-ui | metrics.num.samples = 2
kafka-ui | metrics.recording.level = INFO
kafka-ui | metrics.sample.window.ms = 30000
kafka-ui | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
kafka-ui | receive.buffer.bytes = 65536
kafka-ui | reconnect.backoff.max.ms = 1000
kafka-ui | reconnect.backoff.ms = 50
kafka-ui | request.timeout.ms = 30000
kafka-ui | retry.backoff.ms = 100
kafka-ui | sasl.client.callback.handler.class = null
kafka-ui | sasl.jaas.config = null
kafka-ui | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka-ui | sasl.kerberos.min.time.before.relogin = 60000
kafka-ui | sasl.kerberos.service.name = null
kafka-ui | sasl.kerberos.ticket.renew.jitter = 0.05
kafka-ui | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka-ui | sasl.login.callback.handler.class = null
kafka-ui | sasl.login.class = null
kafka-ui | sasl.login.refresh.buffer.seconds = 300
kafka-ui | sasl.login.refresh.min.period.seconds = 60
kafka-ui | sasl.login.refresh.window.factor = 0.8
kafka-ui | sasl.login.refresh.window.jitter = 0.05
kafka-ui | sasl.mechanism = GSSAPI
kafka-ui | security.protocol = PLAINTEXT
kafka-ui | security.providers = null
kafka-ui | send.buffer.bytes = 131072
kafka-ui | session.timeout.ms = 10000
kafka-ui | socket.connection.setup.timeout.max.ms = 30000
kafka-ui | socket.connection.setup.timeout.ms = 10000
kafka-ui | ssl.cipher.suites = null
kafka-ui | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka-ui | ssl.endpoint.identification.algorithm = https
kafka-ui | ssl.engine.factory.class = null
kafka-ui | ssl.key.password = null
kafka-ui | ssl.keymanager.algorithm = SunX509
kafka-ui | ssl.keystore.certificate.chain = null
kafka-ui | ssl.keystore.key = null
kafka-ui | ssl.keystore.location = null
kafka-ui | ssl.keystore.password = null
kafka-ui | ssl.keystore.type = JKS
kafka-ui | ssl.protocol = TLSv1.3
kafka-ui | ssl.provider = null
kafka-ui | ssl.secure.random.implementation = null
kafka-ui | ssl.trustmanager.algorithm = PKIX
kafka-ui | ssl.truststore.certificates = null
kafka-ui | ssl.truststore.location = null
kafka-ui | ssl.truststore.password = null
kafka-ui | ssl.truststore.type = JKS
kafka-ui | value.deserializer = class org.apache.kafka.common.serialization.BytesDeserializer
kafka-ui |
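This ConsumerConfig dump is the short-lived consumer kafka-ui spins up to browse a topic; note bootstrap.servers = [kafka-local:9095] and the randomly generated client.id. In the Provectus kafka-ui image that wiring is normally supplied through KAFKA_CLUSTERS_0_* environment variables — a sketch, assuming the provectuslabs/kafka-ui image, with the cluster name matching the hiveLocal seen throughout these logs (the actual compose file is not shown here):

# Equivalent standalone run of the UI against the local broker;
# must share a Docker network with the broker so kafka-local resolves
docker run -d -p 8080:8080 \
  -e KAFKA_CLUSTERS_0_NAME=hiveLocal \
  -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka-local:9095 \
  provectuslabs/kafka-ui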
kafka-ui | 2022-12-09 03:21:17,153 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: Kafka version: 2.8.0
kafka-ui | 2022-12-09 03:21:17,153 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: Kafka commitId: ebb1d6e21cc92130
kafka-ui | 2022-12-09 03:21:17,153 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1670556077152
kafka-ui | 2022-12-09 03:21:17,685 INFO [boundedElastic-4] o.a.k.c.Metadata: [Consumer clientId=kafka-ui-4d1a87dc-7219-41a5-b39d-8b5f7fbfa55b, groupId=null] Cluster ID: 1i0gWgdkSlq2grYfKIOGfw
kafka-ui | 2022-12-09 03:21:17,698 INFO [boundedElastic-4] c.p.k.u.u.OffsetsSeek: Positioning consumer for topic response.data-warehouse-svc.warehousedata-event with ConsumerPosition(seekType=OFFSET, seekTo={response.data-warehouse-svc.warehousedata-event-0=0}, seekDirection=FORWARD)
kafka-ui | 2022-12-09 03:21:17,864 INFO [boundedElastic-4] o.a.k.c.c.KafkaConsumer: [Consumer clientId=kafka-ui-4d1a87dc-7219-41a5-b39d-8b5f7fbfa55b, groupId=null] Unsubscribed all topics or patterns and assigned partitions
kafka-ui | 2022-12-09 03:21:17,882 INFO [boundedElastic-4] c.p.k.u.u.OffsetsSeek: Assignment: []
kafka-ui | 2022-12-09 03:21:17,909 INFO [boundedElastic-4] c.p.k.u.e.ForwardRecordEmitter: Polling finished
kafka-ui | 2022-12-09 03:21:17,915 INFO [boundedElastic-4] o.a.k.c.m.Metrics: Metrics scheduler closed
kafka-ui | 2022-12-09 03:21:17,916 INFO [boundedElastic-4] o.a.k.c.m.Metrics: Closing reporter org.apache.kafka.common.metrics.JmxReporter
kafka-ui | 2022-12-09 03:21:17,916 INFO [boundedElastic-4] o.a.k.c.m.Metrics: Metrics reporters closed
kafka-ui | 2022-12-09 03:21:17,962 INFO [boundedElastic-4] o.a.k.c.u.AppInfoParser: App info kafka.consumer for kafka-ui-4d1a87dc-7219-41a5-b39d-8b5f7fbfa55b unregistered
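The burst above is a single kafka-ui "browse topic" request: a throwaway consumer is created, seeks to offset 0 on response.data-warehouse-svc.warehousedata-event, polls, and is torn down (Assignment: [] appears to mean no partitions were assigned before polling finished, i.e. nothing to show). The same check can be made from the CLI — a sketch, assuming the console consumer bundled in the broker image:

# Read the topic from the beginning; exits after ~5 s with no traffic
docker exec kafka kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic response.data-warehouse-svc.warehousedata-event \
  --from-beginning --timeout-ms 5000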
kafka-ui | 2022-12-09 03:21:36,142 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:21:36,294 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:22:06,151 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:22:06,489 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:22:36,147 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:22:36,334 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka-ui | 2022-12-09 03:23:06,138 DEBUG [parallel-2]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:23:06,262 DEBUG [parallel-3]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
kafka | [2022-12-09 03:23:23,105] INFO [Controller
id=1] Processing automatic preferred replica leader election
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:23:23,110] TRACE [Controller
id=1] Checking need to trigger auto leader balancing
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:23:23,130] DEBUG [Controller
id=1] Topics not in preferred replica for broker 1 HashMap()
(kafka.controller.KafkaController)
kafka | [2022-12-09 03:23:23,131] TRACE [Controller
id=1] Leader imbalance ratio for broker 1 is 0.0
(kafka.controller.KafkaController)
kafka-ui | 2022-12-09 03:23:36,138 DEBUG [parallel-4]
c.p.k.u.s.ClustersMetricsScheduler: Start getting metrics for
kafkaCluster: hiveLocal
kafka-ui | 2022-12-09 03:23:36,317 DEBUG [parallel-1]
c.p.k.u.s.ClustersMetricsScheduler: Metrics updated for cluster:
hiveLocal
