Today we deployed a batch of virtual servers in our private cloud from a VM template. A while after the batch deployment, the data shown in Kibana Discover stopped updating. We checked the cluster and the Elasticsearch nodes: everything was running normally and disk usage was healthy. We then checked the Logstash instances at each tier and found their logs full of error messages like the following.
[2025-07-11T15:18:08,589][INFO ][logstash.outputs.elasticsearch][koevn_logs][7c39dc6190bbe761f71b0f0b463552b818d13eb5b5b0b9e16dfa801f1463654c] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>78}

[2025-07-11T15:18:08,635][INFO ][logstash.outputs.elasticsearch][koevn_logs][7c39dc6190bbe761f71b0f0b463552b818d13eb5b5b0b9e16dfa801f1463654c] Retrying failed action {:status=>403, :action=>["index", {:_id=>nil, :_index=>"koevn_logs-2025.02.18", :routing=>nil}, {"@version"=>"1", "kafka_topic"=>"koevn_logs", "auditd.process.id"=>"11", "event.type"=>"koevn", "@timestamp"=>2025-02-18T08:49:18.820Z, "auditd.event.id"=>"161", "fields"=>{"env"=>"test", "host_ip"=>"10.68.11.136"}, "tags"=>["koevn_logs"], "auditd.event.time"=>"1739868558.820", "temp_timestamp"=>"2025-02-18T08:49:18.820Z", "message"=>"type=koevn msg=audit(1739868558.820:161): prog-id=11 op=UNLOAD", "log"=>{"offset"=>180412, "file"=>{"path"=>"/var/log/audit/audit.log"}}}], :error=>{"type"=>"cluster_block_exception", "reason"=>"index [koevn_logs-2025.02.18] blocked by: [FORBIDDEN/8/index write (api)];"}}

The cause is ILM: the indices have a lifecycle policy under which data from the last 30 days stays read-write, while indices older than 30 days are marked read-only. The cloned VMs had apparently re-shipped old log files baked into the template image, so Logstash was trying to write a large volume of months-old events (note the February @timestamp inside a July log line) into a read-only index, and Elasticsearch rejected every one of those writes with a 403. At this point it depends on your needs: either remove the read-only restriction and write the data, or simply discard it.
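For reference, an ILM policy that produces this kind of block might look like the sketch below. The policy name and the exact phase layout here are assumptions; the key point is the readonly action in a phase that begins at 30 days, which sets index.blocks.write and yields exactly the FORBIDDEN/8/index write (api) block seen in the error above.

# Hypothetical policy name; phase layout assumed from the 30-day behavior described above
curl -XPUT http://<es-host>:9200/_ilm/policy/koevn_logs_policy -H 'Content-Type: application/json' -d '{ "policy": { "phases": { "hot": { "actions": {} }, "warm": { "min_age": "30d", "actions": { "readonly": {} } } } } }'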
There are two ways to remove the block, depending on your needs. The first is to unset the write block on just the affected index:

curl -XPUT http://<es-host>:9200/koevn_logs-2025.02.18/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.write": false }'
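To confirm the block is gone, you can read the block settings back; a quick check like the following works, where filter_path just trims the response and is optional:

curl -XGET 'http://<es-host>:9200/koevn_logs-2025.02.18/_settings?filter_path=*.settings.index.blocks'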
The second is to remove the read-only state from all indices at once:

curl -XPUT http://<es-host>:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": null, "index.blocks.write": false }'

Another approach is to filter out events older than 30 days in Logstash, so they never reach the read-only Elasticsearch indices and trigger rejections in the first place, as in the following configuration.
filter {
  # ------- Other configurations are omitted -------
  ruby {
    code => "
      require 'time'
      now = Time.now
      # 29 days leaves a one-day safety margin inside the 30-day ILM window
      cutoff = now - 29 * 24 * 60 * 60
      if event.get('@timestamp')
        ts = Time.parse(event.get('@timestamp').to_s)
        # Tag events whose timestamp falls before the cutoff
        if ts < cutoff
          event.tag('drop_old_event')
        end
      end
    "
  }
  # Discard anything the ruby filter tagged as too old
  if "drop_old_event" in [tags] {
    drop { }
  }
  # ------- Other configurations are omitted -------
}

Then reload the Logstash service, and Kibana Discover will display the log data normally again.
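How you reload depends on how Logstash is run; the following is a sketch assuming a standard systemd-managed package install:

# Assuming a systemd-managed package install
sudo systemctl restart logstash

# Alternatively, if Logstash was started with --config.reload.automatic
# (or config.reload.automatic: true in logstash.yml), the pipeline
# picks up the changed config file on its own, with no restart needed.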