I am trying to come up with a configuration that can parse a feed file coming from a mainframe (fixed-length).
Given that the first position of each line of that feed represents a record type, I am trying to build a minimal skeleton of a BeanIO (2.1) configuration (just the first field of each record for now) to represent the format of that file.
Sample file to parse (for now just the first field of each line/record):
1
5
6
6
6
8
5
6
8
9
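For orientation, the leading digit of each line selects the record type used throughout the skeleton below. A tiny stdlib-only lookup sketch (the record names are taken from the domain model further down; the class and map are purely illustrative):

```java
import java.util.Map;

// Illustrative mapping from the leading digit of each line to the
// record type it denotes in this skeleton (names from the domain model below).
public class RecordTypeDemo {

    public static final Map<String, String> RECORD_TYPES = Map.of(
            "1", "fileHeader",
            "5", "batchHeader",
            "6", "batchEntry",
            "8", "batchControl",
            "9", "fileControl");

    public static void main(String[] args) {
        for (String line : new String[] {"1", "5", "6", "8", "9"}) {
            // Only the first position of the line is significant here
            System.out.println(line + " -> " + RECORD_TYPES.get(line.substring(0, 1)));
        }
    }
}
```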
Running the BeanIO unmarshaller to parse the minimal mainframe file above throws the following exception:
org.beanio.UnexpectedRecordException: End of stream reached, expected record 'batchControl'
at org.beanio.internal.parser.UnmarshallingContext.newUnsatisfiedRecordException(UnmarshallingContext.java:367)
at org.beanio.internal.parser.Group.unmarshal(Group.java:127)
at org.beanio.internal.parser.DelegatingParser.unmarshal(DelegatingParser.java:39)
at org.beanio.internal.parser.RecordCollection.unmarshal(RecordCollection.java:42)
at org.beanio.internal.parser.Group.unmarshal(Group.java:140)
at org.beanio.internal.parser.BeanReaderImpl.internalRead(BeanReaderImpl.java:106)
at org.beanio.internal.parser.BeanReaderImpl.read(BeanReaderImpl.java:67)
at com.pru.globalpayments.feeds.downstream.dailycashreport.acquire.provider.sftp.unmarshal.AchFileUnmarshallerService.unmarshalAchReturnFile(AchFileUnmarshallerService.java:76)
at com.pru.globalpayments.feeds.downstream.dailycashreport.acquire.provider.sftp.unmarshal.AchFileUnmarshallerServiceTest.testSuccessfulUnmarshallingOfMinimalFileToSkeletonAchObjectGraph(AchFileUnmarshallerServiceTest.java:175)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Here are the relevant files:
AchFileUnmarshallerService.java
@Service
@Slf4j
public class AchFileUnmarshallerService {

    public static final String BEANIO_PARSER_FORMAT = "fixedlength";
    public static final String UNMARSHALLER_STREAM_NAME = "sftp-ach";

    public AchFileCollection unmarshalAchReturnFile(File filetoParse, StreamBuilder streamBuilder) {
        Assert.notNull(filetoParse, "A valid file must be provided for parsing.");
        Assert.state(Files.exists(filetoParse.toPath()), "A valid ACH file must exist at the provided path to attempt parsing.");
        String fileName = filetoParse.getAbsolutePath();
        log.info("Starting unmarshaling '{}' to a corresponding ACH structure.", fileName);
        // StreamBuilder streamBuilder = new StreamBuilder(UNMARSHALLER_STREAM_NAME)
        //         .format(BEANIO_PARSER_FORMAT)
        //         .parser(new FixedLengthParserBuilder())
        //         .addGroup(AchFileCollection.class)
        //         .occurs(1, -1)
        //         //.strict()
        //         ;
        StreamFactory factory = StreamFactory.newInstance();
        factory.define(streamBuilder);
        BeanReader beanioReader = factory.createReader(UNMARSHALLER_STREAM_NAME, fileName);
        /*
        beanioReader.setErrorHandler(new BeanReaderErrorHandler() {
            @Override
            public void handleError(BeanReaderException ex) throws Exception {
                for (int i = 0; i < ex.getRecordCount(); i++) {
                    RecordContext context = ex.getRecordContext(i);
                    log.info(context.toString());
                }
                throw new BeanReaderException(ex.toString(), ex);
            }
        });
        */
        return (AchFileCollection) beanioReader.read();
    }
}
AchFileUnmarshallerServiceTest.java
@SpringJUnitConfig(classes = {
        DcrDataFactoryApplication.class,
        /* FileAcquisitionConfig.class, */
        /* FileDistributionConfig.class */
})
@EnableConfigurationProperties
@PropertySource(value = "application.yml", factory = YamlPropertySourceFactory.class)
@Slf4j
class AchFileUnmarshallerServiceTest {

    @Autowired
    private AchFileUnmarshallerService achFileUnmarshallerService;

    private StreamBuilder streamBuilder() {
        StreamBuilder streamBuilder = new StreamBuilder(AchFileUnmarshallerService.UNMARSHALLER_STREAM_NAME)
                .format(AchFileUnmarshallerService.BEANIO_PARSER_FORMAT).minOccurs(1)
                .addGroup(new GroupBuilder("achFile").type(AchFile.class).order(1).occurs(1, -1)
                        .addRecord(new RecordBuilder("fileHeader", AchFileHeader.class).order(1).occurs(1, 1)
                                .addField(new FieldBuilder("recordTypeCode").at(0).length(1).required().literal("1")))
                        .addGroup(new GroupBuilder("batchRecords").type(AchBatch.class).order(2).occurs(0, -1)
                                .collection(List.class)
                                .addRecord(new RecordBuilder("batchHeader", AchBatchHeader.class).order(1).occurs(1, 1)
                                        .addField(new FieldBuilder("recordTypeCode").at(0).length(1).required()
                                                .literal("5")))
                                .addRecord(new RecordBuilder("batchEntries", AchBatchEntry.class).order(2).occurs(0, -1)
                                        .collection(List.class)
                                        .addField(new FieldBuilder("recordTypeCode").at(0).length(1).required()
                                                .literal("6")))
                                .addRecord(new RecordBuilder("batchControl", AchBatchFooter.class).order(3).occurs(1, 1)
                                        .addField(new FieldBuilder("recordTypeCode").at(0).length(1).required()
                                                .literal("8"))))
                        .addRecord(new RecordBuilder("fileControl", AchFileFooter.class).order(3).occurs(1, 1)
                                .addField(new FieldBuilder("recordTypeCode").at(0).length(1).required().literal("9"))));
        return streamBuilder;
    }

    @Test
    @SneakyThrows
    void testSuccessfulUnmarshallingOfMinimalFileToSkeletonAchObjectGraph() {
        log.info("Ready to unmarshall.");
        File fileToUnmarshall = new ClassPathResource("skeleton.ach.file.txt").getFile();
        AchFileCollection root = achFileUnmarshallerService.unmarshalAchReturnFile(fileToUnmarshall, streamBuilder());
        log.info("Finished unmarshalling {} file into the following structure {}.", fileToUnmarshall.getName(), root);
    }
}
Domain model classes (each record holds only a single field, as mentioned earlier):
AchFile.java:
@Data
//@Group
public class AchFile {
    //@Record
    private AchFileHeader fileHeader;
    //@Group
    private List<AchBatch> batchRecords = new ArrayList<>();
    //@Record
    private AchFileFooter fileControl;
}
AchFileFooter.java:
@Data
public class AchFileFooter {
    //@Field(name = "recordTypeCode", ordinal = 1, at = 0, length = 1, required = true, literal = "9")
    private String recordTypeCode;
}
AchFileHeader.java:
@Data
public class AchFileHeader {
    //@Field(name = "recordTypeCode", ordinal = 1, at = 0, length = 1, required = true, literal = "1")
    private String recordTypeCode;
}
AchFileCollection.java (currently unused):
@Data
//@Group(minOccurs = 1, maxOccurs = -1)
public class AchFileCollection {
    //@Group
    private List<AchFile> achFiles = new ArrayList<>();
}
AchBatch.java:
@Data
//@Group
public class AchBatch {
    //@Record
    private AchBatchHeader batchHeader;
    //@Record
    private List<AchBatchEntry> batchEntries = new ArrayList<>();
    //@Record
    private AchBatchFooter batchControl;
}
AchBatchHeader.java:
@Data
//@Record(order = 1, minOccurs = 0, maxOccurs = 1)
public class AchBatchHeader {
    //@Field(name = "recordTypeCode", ordinal = 1, at = 0, length = 1, required = true, literal = "5")
    private String recordTypeCode;
}
AchBatchFooter.java:
@Data
//@Record(order = 3, minOccurs = 0, maxOccurs = 1)
public class AchBatchFooter {
    //@Field(name = "recordTypeCode", ordinal = 1, at = 0, length = 1, required = true, literal = "8")
    private String recordTypeCode;
}
AchBatchEntry.java:
@Data
//@Record(order = 2, minOccurs = 0, maxOccurs = -1)
public class AchBatchEntry {
    //@Field(name = "recordTypeCode", ordinal = 1, at = 0, length = 1, required = true, literal = "6")
    private String recordTypeCode;
}
Note: eventually, as more fields are added to each record type, the configuration will migrate into the domain model (hence the annotations there, commented out for now). The hope was to get it working first via BeanIO's Java builder API, all in one place, for testing.
My assumption was that once a BeanReader has been configured with the input topology via a StreamFactory and given input data to process (in the form of a file), it would match the two together: it would know how to iterate over the lines of the input file, recognize the record types they represent, build the resulting object representation of that input file, and return a properly hydrated result. Or is that too optimistic and not how a BeanIO reader works, and do I need to do something extra manually?
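That assumption is broadly right, with one caveat visible in the self-answer further down: `BeanReader.read()` returns one top-level object per call and `null` at end of stream, so a single call only yields the first top-level group. A minimal sketch of that contract, using a stub in place of BeanIO's actual reader (StubReader is a stand-in, not BeanIO API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Stand-in for org.beanio.BeanReader, illustrating its read() contract:
// each call returns the next fully-hydrated top-level object, or null at end of stream.
public class ReadLoopDemo {

    static class StubReader {
        private final Iterator<Object> parsedObjects;

        StubReader(List<Object> parsedObjects) {
            this.parsedObjects = parsedObjects.iterator();
        }

        // Mirrors BeanReader.read(): one top-level group/record per call
        Object read() {
            return parsedObjects.hasNext() ? parsedObjects.next() : null;
        }
    }

    // The canonical read-until-null loop; a single read() would return only the first object
    static List<Object> readAll(StubReader reader) {
        List<Object> result = new ArrayList<>();
        Object parsed;
        while ((parsed = reader.read()) != null) {
            result.add(parsed);
        }
        return result;
    }

    public static void main(String[] args) {
        StubReader reader = new StubReader(Arrays.asList("achFile-1", "achFile-2", "achFile-3"));
        System.out.println(readAll(reader).size()); // prints 3
    }
}
```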
What am I doing wrong to get the above exception, and how can it be fixed?
TIA。
I gradually moved the relevant mapping from the Java builder style shown above to the annotations shown below (curiously, some of it does not translate one-to-one between the two styles):
AchFile.java:
@Data
@Group(name = "achFile", /* order = 1, */ minOccurs = 1, maxOccurs = -1) //.order(2).occurs(1, -1)
public class AchFile {
    @Record(minOccurs = 1, maxOccurs = 1)
    private AchFileHeader fileHeader;
    @Group(minOccurs = 0, maxOccurs = -1)
    private Set<AchBatch> batchRecords = new LinkedHashSet<>();
    @Record(minOccurs = 1, maxOccurs = 1)
    private AchFileFooter fileControl;
}
AchFileHeader.java:
@Data
@Record(order = 1, minOccurs = 1, maxOccurs = 1)
public class AchFileHeader {
    @Field(ordinal = 1, rid = true, at = 0, length = 1, required = true, literal = "1")
    private String type;
}
AchFileFooter.java:
@Data
public class AchFileFooter {
    @Field(rid = true, at = 0, length = 1, required = true, literal = "9")
    private String type;
}
AchBatch.java:
@Data
@Group(minOccurs = 0, maxOccurs = -1)
public class AchBatch {
    @Record(minOccurs = 1, maxOccurs = 1)
    private AchBatchHeader batchHeader;
    @Record(minOccurs = 0, maxOccurs = -1)
    private Set<AchBatchEntry> batchEntries = new LinkedHashSet<>();
    @Record(minOccurs = 1, maxOccurs = 1)
    private AchBatchFooter batchControl;
}
AchBatchHeader.java:
@Data
@Record(minOccurs = 1, maxOccurs = 1)
public class AchBatchHeader {
    @Field(name = "type", rid = true, at = 0, length = 1, required = true, literal = "5")
    private String type;
}
AchBatchFooter.java:
@Data
@Record(minOccurs = 1, maxOccurs = 1)
public class AchBatchFooter {
    @Field(rid = true, at = 0, length = 1, required = true, literal = "8")
    private String type;
}
AchBatchEntry.java:
@Data
@Record(minOccurs = 0, maxOccurs = -1)
public class AchBatchEntry {
    @Field(rid = true, at = 0, length = 1, required = true, literal = "6")
    private String type;
}
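For completeness, BeanIO also supports a third configuration style, an external XML mapping file. An untested sketch of roughly the same topology (class names would need to be fully qualified; attribute names per the BeanIO 2.1 reference guide, so treat this as an approximation rather than a verified mapping):

```xml
<!-- Sketch only: class attributes need fully-qualified names in a real mapping. -->
<beanio xmlns="http://www.beanio.org/2012/03">
  <stream name="sftp-ach" format="fixedlength">
    <group name="achFile" class="AchFile" minOccurs="1" maxOccurs="unbounded">
      <record name="fileHeader" class="AchFileHeader" order="1" minOccurs="1" maxOccurs="1">
        <field name="type" rid="true" at="0" length="1" required="true" literal="1"/>
      </record>
      <group name="batchRecords" class="AchBatch" collection="set" order="2" minOccurs="0" maxOccurs="unbounded">
        <record name="batchHeader" class="AchBatchHeader" order="1" minOccurs="1" maxOccurs="1">
          <field name="type" rid="true" at="0" length="1" required="true" literal="5"/>
        </record>
        <record name="batchEntries" class="AchBatchEntry" collection="set" order="2" minOccurs="0" maxOccurs="unbounded">
          <field name="type" rid="true" at="0" length="1" required="true" literal="6"/>
        </record>
        <record name="batchControl" class="AchBatchFooter" order="3" minOccurs="1" maxOccurs="1">
          <field name="type" rid="true" at="0" length="1" required="true" literal="8"/>
        </record>
      </group>
      <record name="fileControl" class="AchFileFooter" order="3" minOccurs="1" maxOccurs="1">
        <field name="type" rid="true" at="0" length="1" required="true" literal="9"/>
      </record>
    </group>
  </stream>
</beanio>
```

Such a file would be loaded via StreamFactory's load(...) method instead of factory.define(streamBuilder).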
along with the following runner:
Assert.notNull(filetoParse, "A valid file must be provided for parsing.");
Assert.state(Files.exists(filetoParse.toPath()), "A valid ACH file must exist at the provided path to attempt parsing.");
String fileName = filetoParse.getAbsolutePath();
log.info("Started unmarshalling '{}' according to the provided topology.", fileName);
StreamFactory factory = StreamFactory.newInstance();
factory.define(streamBuilder);
BeanReader beanioReader = factory.createReader(AchTopologyFactory.UNMARSHALLER_STREAM_NAME, fileName);
Set<T> result = new LinkedHashSet<>();
T parsed;
while ((parsed = (T) beanioReader.read()) != null) {
    result.add(parsed);
}
beanioReader.close();
log.info("Finished unmarshaling '{}' to a corresponding ACH structure: {}", fileName, result);
return result;
It started producing the desired behavior:
Parsed struct: [AchFile(fileHeader=AchFileHeader(type=1), batchRecords=[AchBatch(batchHeader=AchBatchHeader(type=5), batchEntries=[AchBatchEntry(type=6)], batchControl=AchBatchFooter(type=8))], fileControl=AchFileFooter(type=9)), AchFile(fileHeader=AchFileHeader(type=1), batchRecords=[AchBatch(batchHeader=AchBatchHeader(type=5), batchEntries=[AchBatchEntry(type=6)], batchControl=AchBatchFooter(type=8)), AchBatch(batchHeader=AchBatchHeader(type=5), batchEntries=[], batchControl=AchBatchFooter(type=8))], fileControl=AchFileFooter(type=9)), AchFile(fileHeader=AchFileHeader(type=1), batchRecords=[], fileControl=AchFileFooter(type=9))]
given the following input:
1
5
6
6
6
8
5
6
8
9
1
5
8
5
6
8
5
8
9
1
9
1
5
8
5
6
8
5
8
9
As mentioned, it remains unclear why certain combinations of attributes work in the Java builder style but not in the annotation style, nor why mixing the two styles does not always seem to work either. Expert voices on the BeanIO framework are certainly welcome to comment on or challenge the working setup above, or to open a corresponding ticket against the framework if this turns out to be a bug.