Yoga monasca_log_persister elasticsearch version container crash problem

Bug #1980554 reported by Vince Mulhollon
This bug affects 1 person
Affects: kolla-ansible | Status: New | Importance: Undecided | Assigned to: Unassigned

Bug Description

Freshly installed Yoga (installed this afternoon) with monasca and elasticsearch enabled using source type and Ubuntu.

The monasca_log_persister docker container restarts every half minute and no logs arrive in Elasticsearch, as verified in Kibana. The logs themselves do seem to be accessible to monasca_log_persister.

docker logs monasca_log_persister shows the same repeated error message every time the container restarts:

{"level":"ERROR","loggerName":"logstash.javapipeline","timeMillis":1656709544365,"thread":"[main]-pipeline-manager","logEvent":{"message":"Pipeline error","pipeline_id":"main","exception":{"metaClass":{"metaClass":{"metaClass":{"exception":"Could not connect to a compatible version of Elasticsearch",

There is also a line in the docker container logs showing the startup command:
 echo 'Running command: '\''/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ --log.format json --path.logs /var/log/kolla/logstash/monasca-log-persister -f /etc/logstash/conf.d/log-persister.conf'\'''

Based on about two days of experience so far with Kolla-ansible, I think the config being passed to the container is located in /etc/kolla/monasca-log-persister/log-persister.conf and it contains:

output {
    elasticsearch {
        index => "monasca-%{[meta][tenantId]}-%{+YYYY.MM.dd}"
        hosts => ['http://10.10.20.56:9200']
        document_type => "log"
        template_name => "monasca"
        template => "/etc/logstash/elasticsearch-template.json"
    }
}
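Since this config is the first suspect, it may help to confirm exactly which endpoint the persister will target. A minimal sketch (the sed pattern is mine, and the file content is inlined here so the command is self-contained; on a real node you would point sed at /etc/kolla/monasca-log-persister/log-persister.conf instead):

```shell
# Hedged sketch: extract the hosts value from a log-persister.conf-style
# snippet to confirm where the persister will try to connect.
cat > /tmp/log-persister.conf <<'EOF'
output {
    elasticsearch {
        hosts => ['http://10.10.20.56:9200']
    }
}
EOF
# Prints the bracketed hosts list, e.g. 'http://10.10.20.56:9200'
sed -n "s/.*hosts => \[\(.*\)\].*/\1/p" /tmp/log-persister.conf
```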

If I connect to the docker container:

docker exec -i monasca_log_persister bash

and look at the file inside the container, /etc/logstash/conf.d/log-persister.conf exactly matches the kolla-ansible file mentioned above. So I think the container is trying to connect to 10.10.20.56:9200.

I believe that is correct. From a desktop I can connect to 10.10.20.56:9200 and see

{
  "name" : "10.10.20.56",
  "cluster_name" : "kolla_logging",
  "cluster_uuid" : "kCOHJolvT8yijatWyw1e9Q",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
    "build_date" : "2021-01-13T00:42:12.435326Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
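Two fields in that response may matter to a newer Logstash client: the version number and the build_flavor ("oss" here). A small sketch to pull both out; the inlined response is a trimmed copy of the one above, and against the live node you would pipe "curl -s http://10.10.20.56:9200" instead:

```shell
# Hedged sketch: extract the two fields a client-side compatibility check is
# likely to care about from the ES root-endpoint JSON. jq would be cleaner,
# but sed avoids assuming jq is installed in the containers.
resp='{"name":"10.10.20.56","cluster_name":"kolla_logging","version":{"number":"7.10.2","build_flavor":"oss"}}'
echo "$resp" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p'        # prints 7.10.2
echo "$resp" | sed -n 's/.*"build_flavor":"\([^"]*\)".*/\1/p'  # prints oss
```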

I can connect to the monasca_log_persister in docker by running:

docker exec -i monasca_log_persister bash

From that docker bash shell I can run "curl 10.10.20.56:9200" and curl responds with exactly the same Elasticsearch message I see above from the desktop, i.e. version 7.10.2 and so forth.

Also the kibana installation for central logging on port 5601 connects to elasticsearch just fine although there is no data to look at.

So, I am pretty confident elasticsearch is running and accessible. There is network connectivity between the monasca_log_persister container and the elasticsearch container. Cool.

I found a working (well, I assume so, anyway) dockerfile for monasca at

https://github.com/monasca/monasca-docker/blob/master/elasticsearch/Dockerfile

and that Dockerfile specifies using a REALLY old version of elasticsearch 7.3.0

As I understand https://www.elastic.co/downloads/past-releases#elasticsearch-oss, that would seem to imply that the Elasticsearch released July 30 2019 (7.3.0) worked with monasca, but the release from January 14 2021 (7.10.2) is too new for monasca_log_persister, so it crashes the container every 30 seconds.

At this point, if I knew how, I could pursue this bug two different ways:

First, I would reconfigure kolla-ansible to install the ancient Elasticsearch 7.3.0 and see whether that old version makes monasca_log_persister stop crashing.

Or alternatively, if I could find the source for monasca_log_persister (which I cannot), I could try to figure out why it refuses to connect to Elasticsearch 7.10.2.

Revision history for this message
Vince Mulhollon (vincemulhollon) wrote :

I went to https://www.elastic.co/support/matrix and clicked on Product Compatibility.

The Kolla-Ansible-installed Elasticsearch 7.10.2 is claimed to be compatible with Logstash 6.8.x-7.17.x.

The git repo example using Elasticsearch 7.3.0 is claimed to be compatible with the same Logstash range, 6.8.x-7.17.x.

I ran "docker exec -it monasca_log_persister /bin/bash" and then inside the persister I ran /usr/share/logstash/bin/logstash --version; the container has Logstash 7.17.5 installed, which is technically right at the edge of the supported range for that old Elasticsearch.

I looked through the release notes and did not immediately find anything that would explain why 7.17.5 cannot connect to 7.10.2.

I see the version-checking behavior changed in Logstash 7.16.0, but I don't know if it's relevant.
https://www.elastic.co/guide/en/logstash/7.17/logstash-7-16-0.html
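My current working theory is that the newer check rejects on distribution flavor rather than version number. The sketch below is a speculative model of the behavior described in that release note, not the actual plugin code; the function name and messages are mine:

```shell
# Speculative model, NOT the real plugin logic: mimic a compatibility check
# that accepts only the "default" (licensed) Elasticsearch distribution and
# rejects OSS builds such as the 7.10.2-oss that kolla-ansible deploys.
check_es() {  # usage: check_es <version> <build_flavor>
  case "$2" in
    default) echo "ok: Elasticsearch $1 accepted" ;;
    *)       echo "error: Could not connect to a compatible version of Elasticsearch" ;;
  esac
}
check_es 7.10.2 oss      # prints the rejection seen in the container logs
check_es 7.10.2 default  # accepted, despite the identical version number
```

If this model is right, no 7.x OSS Elasticsearch would satisfy a Logstash 7.16+ client, which would explain why the version numbers look compatible on paper yet the pipeline still fails.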

Revision history for this message
Vince Mulhollon (vincemulhollon) wrote :

Full crash message, repeats every 30 seconds.

{"level":"ERROR","loggerName":"logstash.javapipeline","timeMillis":1656889574271,"thread":"[main]-pipeline-manager","logEvent":{"message":"Pipeline error","pipeline_id":"main","exception":{"metaClass":{"metaClass":{"metaClass":{"exception":"Could not connect to a compatible version of Elasticsearch","backtrace":[
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:245:in `block in healthcheck!'",
"org/jruby/RubyHash.java:1415:in `each'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:238:in `healthcheck!'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:370:in `update_urls'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:87:in `update_initial_urls'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:81:in `start'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:358:in `build_pool'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:63:in `initialize'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:106:in `create_http_client'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:102:in `build'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:34:in `build_client'",
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch.rb:279:in `register'",
"org/logstash/config/ir/compiler/OutputStrategyExt.java:131:in `register'",
"org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:68:in `register'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:233:in `block in register_plugins'",
"org/jruby/RubyArray.java:1821:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:232:in `register_plugins'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:598:in `maybe_setup_out_plugins'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:245:in `start_workers'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:190:in `run'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:142:in `block in start'"
],"pipeline.sources":["/etc/logstash/conf.d/log-persister.conf"],"thread":"#<Thread:0x109ff155 run>"}}}}}}
{"level":"ERROR","loggerName":"logstash.agent","timeMillis":1656...


Revision history for this message
Vince Mulhollon (vincemulhollon) wrote :

Note that if you then remove Monasca from your cluster and run kolla-ansible reconfigure, the centralized logging system no longer tries to funnel logs through Monasca, after which centralized logging works beautifully. So aside from the Elasticsearch connection error reported above, everything else works well as long as Monasca is not installed.
