Kafka version: 2.1.0
KafkaOffsetMonitor version: 0.4.1-SNAPSHOT

Logs:
```
2018-12-05 01:57:21 ERROR KafkaOffsetGetter$:103 - The message was malformed and does not conform to a type of (BaseKey, OffsetAndMetadata). Ignoring this message.
kafka.common.KafkaException: Unknown offset schema version 3
    at kafka.coordinator.GroupMetadataManager$.schemaForOffset(GroupMetadataManager.scala:739)
    at kafka.coordinator.GroupMetadataManager$.readOffsetMessageValue(GroupMetadataManager.scala:884)
    at com.quantifind.kafka.core.KafkaOffsetGetter$.tryParseOffsetMessage(KafkaOffsetGetter.scala:194)
    at com.quantifind.kafka.core.KafkaOffsetGetter$.startCommittedOffsetListener(KafkaOffsetGetter.scala:268)
    at com.quantifind.kafka.OffsetGetter$$anon$3.run(OffsetGetter.scala:239)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
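
The KafkaException above looks like a client/broker version mismatch rather than corrupt data: the monitor parses `__consumer_offsets` with the bundled 0.9-era `GroupMetadataManager`, which only knows the v0/v1 offset value schemas, while a 2.1.0 broker commits offsets with schema v3 (leader epoch added by KIP-320). Below is a minimal Scala sketch of a version-aware decoder; the per-version field layout is an assumption based on the broker's schema definitions, and none of these names (`OffsetValueDecoder`, `parse`, `readShortString`) exist in the KafkaOffsetMonitor codebase.

```scala
import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets

object OffsetValueDecoder {
  // Assumed field layout per schema version (from the broker's
  // GroupMetadataManager schemas):
  //   v0/v1: offset int64, metadata string, commitTimestamp int64 [, expireTimestamp int64]
  //   v2:    offset int64, metadata string, commitTimestamp int64
  //   v3:    offset int64, leaderEpoch int32, metadata string, commitTimestamp int64
  case class OffsetValue(offset: Long, leaderEpoch: Option[Int],
                         metadata: String, commitTimestamp: Long)

  // Kafka strings are int16-length-prefixed UTF-8.
  private def readShortString(buf: ByteBuffer): String = {
    val bytes = new Array[Byte](buf.getShort)
    buf.get(bytes)
    new String(bytes, StandardCharsets.UTF_8)
  }

  def parse(value: Array[Byte]): OffsetValue = {
    val buf = ByteBuffer.wrap(value)
    buf.getShort match {
      case 3 => // Kafka 2.1+: leader epoch added by KIP-320
        OffsetValue(buf.getLong, Some(buf.getInt), readShortString(buf), buf.getLong)
      case 0 | 1 | 2 => // v1 also trails an expire timestamp, ignored here
        OffsetValue(buf.getLong, None, readShortString(buf), buf.getLong)
      case v =>
        throw new IllegalStateException(s"Unknown offset schema version $v")
    }
  }
}
```

A real fix would be bumping the monitor's Kafka dependency so that `GroupMetadataManager.readOffsetMessageValue` itself understands the newer schemas, rather than decoding by hand.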
```
2018-12-05 01:57:21 ERROR KafkaOffsetGetter$:103 - An unhandled exception was thrown while reading messages from the committed offsets topic.
org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {__consumer_offsets-16=13467746} whose size is larger than the fetch size 1048576 and hence cannot be ever returned. Increase the fetch size, or decrease the maximum message size the broker will allow.
```
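
The RecordTooLargeException is a separate problem: the listener's per-partition fetch cap (max.partition.fetch.bytes = 1048576, visible in the config dump below) is smaller than at least one record batch in `__consumer_offsets-16`, and a 0.9 consumer gives up rather than growing the fetch. A hedged sketch of the workaround, assuming the listener's consumer could be built with a higher cap (the monitor itself may not expose such a setting); the bootstrap server is taken from the config dump and the 5 MiB figure is illustrative:

```scala
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}

object ListenerConsumerSketch extends App {
  // Illustrative settings mirroring the config dump below.
  val props = new Properties()
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "ap-1001-kafka-prod-ali-hk001:9092")
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "kafka-monitor-committedOffsetListener")
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.ByteArrayDeserializer")
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.ByteArrayDeserializer")
  // Raise the per-partition cap above the largest batch in __consumer_offsets.
  // 5 MiB is an assumed value; the safe floor is the broker's message.max.bytes.
  props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "5242880")
  val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
  consumer.subscribe(java.util.Collections.singletonList("__consumer_offsets"))
}
```

Note that consumers from 0.10.1 onward (KIP-74) return an oversized first batch instead of throwing here, so upgrading the bundled kafka-clients would also remove this failure mode.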
```
2018-12-05 01:57:21 INFO KafkaOffsetGetter$:236 - Creating new Kafka Client to get consumer group committed offsets
2018-12-05 01:57:21 INFO ConsumerConfig:165 - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = kafka-monitor-committedOffsetListener
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [ap-1001-kafka-prod-ali-hk001:9092, ap-1001-kafka-prod-ali-hk002:9092, ap-1001-kafka-prod-ali-hk003:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = false
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
session.timeout.ms = 30000
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1
send.buffer.bytes = 131072
auto.offset.reset = latest
2018-12-05 01:57:21 INFO AppInfoParser:82 - Kafka version : 0.9.0.1
2018-12-05 01:57:21 INFO AppInfoParser:83 - Kafka commitId : 23c69d62a0cabf06
^C2018-12-05 01:57:21 INFO ContextHandler:843 - stopped o.e.j.s.ServletContextHandler{/,jar:file:/opt/kafka-offset-monitor/KafkaOffsetMonitor-assembly-0.4.1-SNAPSHOT.jar!/offsetapp}
```
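
The two AppInfoParser lines pin down the root cause: the monitor ships the 0.9.0.1 kafka-clients and runs it against a 2.1.0 cluster. A trivial sketch for confirming which client version a deployment actually has on its classpath (AppInfoParser is the same class that produced those INFO lines):

```scala
import org.apache.kafka.common.utils.AppInfoParser

// Print the kafka-clients version on the classpath. Anything older than the
// broker release that introduced offset schema v3 (2.1.0) will hit the
// parse error shown above.
object ClientVersionCheck extends App {
  println(s"kafka-clients version: ${AppInfoParser.getVersion}")
}
```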
As a result, no consumer group chart is shown in the UI.