Elasticsearch 7 fails to put mappings on the logstash index for type events and rejects the mapping update


We have a new cluster running Elasticsearch 7.3.2. We use rsyslog to ship client logs to the Elasticsearch nodes.

I updated the logstash index template from version 6 to version 7 by removing the events type, since mapping types are deprecated in version 7.
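In 7.x the template's mappings section no longer nests fields under a type name. As a rough illustration only (the field names mirror the plain-syslog template further down; this is not my exact template, which is linked at the end of the question):

PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "@timestamp":  { "type": "date" },
      "containerId": { "type": "keyword" },
      "host":        { "type": "keyword" },
      "severity":    { "type": "keyword" },
      "facility":    { "type": "keyword" },
      "tag":         { "type": "keyword" }
    }
  }
}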

Below is the error I am running into.

[2020-03-17T09:13:08,861][DEBUG][o.e.c.s.MasterService    ] [prod-apm-elasticsearch103.example.com] processing [put-mapping[events]]: took [2ms] no change in cluster state
[2020-03-17T09:13:08,964][DEBUG][o.e.c.s.MasterService    ] [prod-apm-elasticsearch103.example.com] processing [put-mapping[events]]: execute
[2020-03-17T09:13:08,967][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [prod-apm-elasticsearch103.example.com] failed to put mappings on indices [[[logstash-2020.03.17/7_uGNP-iSCOxGczC2_xvfA]]], type [events]
java.lang.IllegalArgumentException: Rejecting mapping update to [logstash-2020.03.17] as the final mapping would have more than 1 type: [_doc, events]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:272) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:238) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.3.2.jar:7.3.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:835) [?:?]
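
The exception means the index already holds one mapping type and rsyslog is trying to add a second: in 7.x an index can have only a single type, and the message shows that logstash-2020.03.17 already has _doc while the incoming requests use type events. The existing mapping can be inspected like this (illustrative command; the host is a placeholder and the index name is taken from the log above):

curl -s 'http://localhost:9200/logstash-2020.03.17/_mapping?include_type_name=true&pretty'
# with include_type_name=true the response nests the mapping under its type name, here _doc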

Below is the rsyslog forwarder configuration.

#This is the same as the default template, but allows for tags longer than 32 characters.
#See http://www.rsyslog.com/sende-messages-with-tags-larger-than-32-characters/ for an explanation
template (name="LongTagForwardFormat" type="string" string="<%PRI%>%TIMEGENERATED:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%")

# this is for index names to be like: logstash-YYYY.MM.DD
template(name="logstash-index" type="list") {
  constant(value="logstash-")
  property(name="timereported" dateFormat="rfc3339" position.from="1" position.to="4")
  constant(value=".")
  property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
  constant(value=".")
  property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="10")
}


action(type="mmjsonparse")

template(name="plain-syslog" type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"containerId\":\"") property(name="hostname")
      constant(value="\",\"host\":\"")        property(name="$myhostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"tag\":\"")         property(name="programname" format="json") #name of process
      constant(value="\",")
      property(name="$!all-json" position.from="2")
}


# ship logs to Elasticsearch, contingent on having an applog_es_server defined
local0.* action(type="omelasticsearch"
  template="plain-syslog"
  #template="logstash"
  searchIndex="logstash-index"
  dynSearchIndex="on"
  #asyncrepl="off"
  bulkmode="on"
  queue.dequeuebatchsize="250"
  queue.type="linkedlist"
  queue.filename="syslog-elastic"
  queue.maxdiskspace="1G"
  queue.highwatermark="10000"
  queue.lowwatermark="5000"
  queue.size="2000000"
  queue.timeoutEnqueue="0"
  queue.timeoutshutdown="5000"
  queue.saveonshutdown="on"
  action.resumeretrycount="1"
  server=["prod-routing101.example.com:9200"]

rsyslogd 8.1908.0 (aka 2019.08) compiled with:
    PLATFORM:                                x86_64-pc-linux-gnu
    PLATFORM (lsb_release -d):
    FEATURE_REGEXP:                          Yes
    GSSAPI Kerberos 5 support:               No
    FEATURE_DEBUG (debug build, slow code):  No
    32bit Atomic operations supported:       Yes
    64bit Atomic operations supported:       Yes
    memory allocator:                        system default
    Runtime Instrumentation (slow code):     No
    uuid support:                            Yes
    systemd support:                         No
    Config file:                             /etc/rsyslog.conf
    PID file:                                /var/run/rsyslogd.pid
    Number of Bits in RainerScript integers: 64

Please help me resolve this issue.

For details on the template I am using, please refer to "elasticsearch mapper_parsing_exception Root mapping definition has unsupported parameters".

Thanks in advance.

elasticsearch rsyslog
1 Answer

It started working after adding searchType="_doc" to the rsyslog conf (the applog forwarder).

# ship logs to Elasticsearch, contingent on having an applog_es_server defined
local0.* action(type="omelasticsearch"
  template="plain-syslog"
  #template="logstash"
  searchIndex="logstash-index"
  dynSearchIndex="on"
  searchType="_doc"
  #asyncrepl="off"
  bulkmode="on"
  queue.dequeuebatchsize="250"
  queue.type="linkedlist"
  ...
  ...
  ...
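
With bulkmode="on", omelasticsearch writes the type into every bulk action line, and without searchType it was sending type events (that is the [events] visible in the exception). Since the index already carries _doc, Elasticsearch 7 refuses to add a second type. Roughly, the fix changes each bulk metadata line from something like

{ "index": { "_index": "logstash-2020.03.17", "_type": "events" } }

to

{ "index": { "_index": "logstash-2020.03.17", "_type": "_doc" } }

which matches the existing mapping, so the update is accepted.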
