Post to Monasca Log API can exceed Kafka max size

Bug #1888861 reported by Doug Szumski
This bug affects 1 person
Affects: kolla-ansible
Status: Incomplete
Importance: Low
Assigned to: Doug Szumski

Bug Description

We have a 10 MB limit on the Monasca API, set to accept Fluentd's maximum chunk size of 8 MB. When we make a post larger than 1 MB, we see:

Error{code=MSG_SIZE_TOO_LARGE,val=10,str="Unable to produce message: Broker: Message size too large"}', 60)
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor [req-5bd9553b-5edc-4d7c-8a1e-d3dd91b77a5b cd93ddd29c2c4f76947f0700f26a18ad 79607c696f7b4668bd3f4b4389e07abb - default default] ('Service unavailable', 'KafkaError{code=MSG_SIZE_TOO_LARGE,val=10,str="Unable to produce message: Broker: Message size too large"}', 60): falcon.errors.HTTPServiceUnavailable: ('Service unavailable', 'KafkaError{code=MSG_SIZE_TOO_LARGE,val=10,str="Unable to produce message: Broker: Message size too large"}', 60)
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor Traceback (most recent call last):
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor   File "/var/lib/kolla/venv/lib/python3.6/site-packages/monasca_api/api/core/log/log_publisher.py", line 181, in _publish
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor     messages
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor   File "/var/lib/kolla/venv/lib/python3.6/site-packages/monasca_common/confluent_kafka/producer.py", line 77, in publish
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor     callback=KafkaProducer.delivery_report)
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor cimpl.KafkaException: KafkaError{code=MSG_SIZE_TOO_LARGE,val=10,str="Unable to produce message: Broker: Message size too large"}
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor During handling of the above exception, another exception occurred:
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor Traceback (most recent call last):
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor   File "/var/lib/kolla/venv/lib/python3.6/site-packages/monasca_api/v2/common/bulk_processor.py", line 66, in send_message
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor     self._publish(to_send_msgs)
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor   File "/var/lib/kolla/venv/lib/python3.6/site-packages/monasca_api/api/core/log/log_publisher.py", line 186, in _publish
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor     str(ex), 60)
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor falcon.errors.HTTPServiceUnavailable: ('Service unavailable', 'KafkaError{code=MSG_SIZE_TOO_LARGE,val=10,str="Unable to produce message: Broker: Message size too large"}', 60)
2020-07-24 14:12:56.649 26 ERROR monasca_api.v2.common.bulk_processor

This is with the Confluent Kafka client and the unified Monasca API. Possibly something has changed.
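For context, the failure is easy to reproduce directly with the Confluent Python client, outside the Monasca API. A minimal sketch, assuming a broker at localhost:9092 with the default ~1 MB message size limits and an existing 'logs' topic (broker address and topic name are illustrative, not taken from this deployment):

from confluent_kafka import Producer, KafkaException

# Broker address and topic name are assumptions for illustration only.
producer = Producer({'bootstrap.servers': 'localhost:9092'})

def delivery_report(err, msg):
    # Broker-side rejections are reported asynchronously through this callback.
    if err is not None:
        print('Delivery failed: {}'.format(err))

try:
    # A 2 MB payload exceeds the default 1 MB limit and is rejected with
    # MSG_SIZE_TOO_LARGE, either locally by the client or by the broker.
    producer.produce('logs', value=b'x' * (2 * 1024 * 1024),
                     callback=delivery_report)
    producer.flush()
except KafkaException as exc:
    print('Produce failed: {}'.format(exc))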

This can be resolved with the following Kafka configuration in server.properties (these values are on the large side):

message.max.bytes = 104857600
replica.fetch.max.bytes = 104857600
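
In a kolla-ansible deployment these would normally be carried as a custom config override rather than edited in place, assuming the Kafka role follows the usual node_custom_config merge pattern (the path below is an assumption and should be checked against the role):

# /etc/kolla/config/kafka.server.properties
message.max.bytes = 104857600
replica.fetch.max.bytes = 104857600

and then rolled out with kolla-ansible reconfigure.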

The Tempest Log API tests assume anything larger than 1 MB is too big, and this limit is not configurable.

Doug Szumski (dszumski)
Changed in kolla-ansible:
assignee: nobody → Doug Szumski (dszumski)
Mark Goddard (mgoddard)
Changed in kolla-ansible:
importance: Undecided → Medium
Michal Nasiadka (mnasiadka) wrote:

Is this still something you'd like to pursue, Doug? I'm putting the ticket into Incomplete status; it will expire in 60 days if no updates are written here.

Changed in kolla-ansible:
status: New → Incomplete
importance: Medium → Low