Elasticsearch alpine docker with JDK 8: java.time.Instant causes an epochSecond error

Problem description (votes: 0, answers: 2)

I recently tried 2.4.6-alpine, after switching from java.util.Date to the JDK 8 java.time.Instant.

The Log document is injected automatically by spring-boot.

import java.time.Instant;

@Document(indexName = "log")
public class Log {

    @Id
    private String id;

    @Field(type = FieldType.Date, store = true)
    private Instant timestamp = null;
...

The previous Log document looked like this.

import java.util.Date;
@Document(indexName = "log")
public class Log {

    @Id
    private String id;

    @Field(type = FieldType.Date, store = true)
    private Date timestamp = null;

With ES 2.4.6-alpine and java.util.Date, and with ES 2.4.6 and java.time.Instant, I had no problems.

However, with ES 2.4.6-alpine and java.time.Instant, I see the following error. It appears to be a problem between alpine linux and the java.time format.

   SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [/v1] threw exception [Request processing failed; nested exception is MapperParsingException[failed to parse [timestamp]]; nested: IllegalArgumentException[unknown property [epochSecond]];] with root cause
 java.lang.IllegalArgumentException: unknown property [epochSecond]
      at org.elasticsearch.index.mapper.core.DateFieldMapper.innerParseCreateField(DateFieldMapper.java:520)
      at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:241)
      at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:321)
      at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:311)
      at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:328)
      at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)
      at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)
      at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
      at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:533)
      at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:510)
      at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)
      at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
      at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
      at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:66)
      at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
      at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
      at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
      at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
      at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
      at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
      at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
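For what it's worth, the [epochSecond] property named in the error matches java.time.Instant's bean-style accessors: if the JSON mapper serializes the Instant as a plain bean rather than as a date string, the indexed document ends up with epochSecond/nano sub-fields that the date mapping cannot parse. A small stdlib sketch (InstantProperties is a made-up name for illustration) that lists those accessor-derived property names:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.time.Instant;
import java.util.Set;
import java.util.TreeSet;

public class InstantProperties {

    // Collect the bean properties of Instant, i.e. the names a bean-style
    // JSON serializer would derive from its public no-arg getters
    // (getEpochSecond() -> "epochSecond", getNano() -> "nano").
    static Set<String> beanProperties() {
        try {
            BeanInfo info = Introspector.getBeanInfo(Instant.class);
            Set<String> names = new TreeSet<>();
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                names.add(pd.getName());
            }
            return names;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(beanProperties());
    }
}
```

The printed set contains "epochSecond" and "nano", which is exactly the shape Elasticsearch rejects with unknown property [epochSecond] when the mapping expects a date.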

Any suggestions for using java.time.* with alpine elasticsearch?

After the docker-compose up -d command, when I do curl -XGET localhost:9200/*, I see some initial data. This data comes back even after curl -XDELETE, docker-compose down, and docker-compose up -d.

The initial data is identical for the elasticsearch:2.4.6 and elasticsearch:2.4.6-alpine docker images.

{
  "log":{
    "aliases":{},
    "mappings":{
      "log":{
        "properties":{
          "timestamp":{
            "type":"date",
            "store":true,
            "format":"strict_date_optional_time||epoch_millis"
           }
         }
       }
     },
     "settings":{
       "index":{
         "refresh_interval":"1s",
         "number_of_shards":"5",
         "creation_date":"1513716676662",
         "store":{
           "type":"fs"
         },
         "number_of_replicas":"1",
         "uuid":"qlj9xxxxxxxxxxxxxxoisA",
         "version":{
           "created":"2040699"
         }
       }
     },
     "warmers":{}
   }
 }

Ah. The initial data is the Log document class used in my Elasticsearch implementation, injected automatically when the spring-boot service starts.

Found a good reference for the date-time formats in the javadoc of the org.springframework.data.elasticsearch.annotations.DateFormat class. SO many time format names, and none matched my output :(

https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html

java docker elasticsearch spring-boot
2 Answers
0 votes

This error is usually the result of submitting a document whose format conflicts with previously indexed documents (such as changing the date format from java Date to java Instant).

When you change the document format, you need to clear the corresponding index in ElasticSearch.

You can clear an index with the DELETE API (you can use * to clear everything, e.g. curl -XDELETE localhost:9200/*) and verify a clean index with the GET API (curl -XGET localhost:9200/*, or just visit http://localhost:9200/* in a browser; {} means you have an empty index).

(This assumes you haven't already tried creating a fresh ES 2.4.6-alpine to test with. I've seen other people use docker setups and the like to revert to a 'clean' setup without actually getting rid of all the old data.)
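A rough Java equivalent of the curl commands above, as a sketch (it assumes Elasticsearch is reachable at localhost:9200; IndexCleaner and its method names are made up for illustration):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class IndexCleaner {

    // Hypothetical ES endpoint; adjust to your docker-compose setup.
    static final String ES = "http://localhost:9200";

    // Builds the URL for an index, e.g. http://localhost:9200/log
    static String indexUrl(String index) {
        return ES + "/" + index;
    }

    // Sends DELETE <ES>/<index> (the equivalent of curl -XDELETE)
    // and returns the HTTP status code.
    static int deleteIndex(String index) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(indexUrl(index)).openConnection();
        conn.setRequestMethod("DELETE");
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("DELETE " + indexUrl("log") + " -> " + deleteIndex("log"));
    }
}
```

After deleting, a GET on the same URL (curl -XGET or a browser) should return an index_not_found style response until the document is re-indexed with the new format.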


0 votes

To make the elasticsearch:2.4.6-alpine docker work with my spring-boot 1.5.9.RELEASE auto-injection, I had to add format = DateFormat.custom, pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ" to the @Field annotation. Apparently the default org.springframework.data.elasticsearch.annotations.DateFormat.none does not work with elasticsearch running on alpine.

The alpine 3.7 OS must have a date-time format that is incompatible with the CentOS OS.
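The custom pattern above can be checked with a plain-JDK formatter, to see what string actually gets indexed for the timestamp field (a minimal sketch; TimestampFormatDemo is a made-up name, and the epoch-millis value is just the index creation_date from the mapping dump above, used as a sample):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimestampFormatDemo {

    // Same pattern as in the @Field annotation; the pattern letter 'Z'
    // prints the zone offset without a colon, i.e. "+0000" for UTC.
    static final DateTimeFormatter FORMAT = DateTimeFormatter
            .ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ")
            .withZone(ZoneOffset.UTC);

    static String format(Instant ts) {
        return FORMAT.format(ts);
    }

    public static void main(String[] args) {
        // Sample value: the creation_date (epoch millis) from the mapping above.
        System.out.println(format(Instant.ofEpochMilli(1513716676662L)));
        // prints 2017-12-19T20:51:16.662+0000
    }
}
```

A string in this shape is parseable by the index's strict_date_optional_time||epoch_millis date format, whereas the default serialization of the Instant produced the epochSecond/nano object that triggered the MapperParsingException.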
