GridFS cannot read anything from GridFSDownloadStream

Question · Votes: -1 · Answers: 2

I have successfully fetched a specific file from MongoDB, but when I try to read data from it, it doesn't work. Why?

    @Autowired
    private MongoTemplate mongoTemplate;
    public InputStream getDownloadStream() {
        GridFSBucket gridfs = GridFSBuckets.create(mongoTemplate.getDb());
        GridFSDownloadStream st = gridfs.openDownloadStream(new ObjectId("5c891a34f2d7831638ba8fce"));
        System.out.println(st.getGridFSFile().getLength()); // output: 4422653
        System.out.println(st.available()); // output: 0
        return st;
    }
java mongodb gridfs
2 Answers
0 votes

You can try the following:

@Autowired
private GridFsTemplate gridFsTemplate;

public InputStream getDownloadStream() throws IOException {
    // Match against an ObjectId explicitly rather than the raw hex string
    Query query = new Query(Criteria.where("_id").is(new ObjectId("5c891a34f2d7831638ba8fce")));
    GridFSFile gridFSFile = gridFsTemplate.findOne(query);
    GridFsResource gridFsResource = gridFsTemplate.getResource(gridFSFile);
    return gridFsResource.getInputStream();
}

The gridFsTemplate can be injected with the help of the Spring Boot MongoDB starter:

implementation "org.springframework.boot:spring-boot-starter-data-mongodb"

0 votes

Actually, the data can be read from the GridFSDownloadStream. But when I did this:

GridFSDownloadStream st = gridFSBucket.openDownloadStream(filename);
System.out.println(st.available());

I always got a "0".

GridFSDownloadStream does not support available() in the way you might expect: per the InputStream contract, available() only reports bytes that can be read without blocking, so at best it reflects what is currently buffered, and it will never give you the correct length of a file stored in multiple chunks. If you want the total length of a GridFSDownloadStream, use getGridFSFile().getLength() instead.

GridFSDownloadStream st = gridFSBucket.openDownloadStream(filename);
st.getGridFSFile().getLength();

When reading data from a GridFSDownloadStream into a buffer, it is worth noting that the buffer size should not be larger than the chunk size, because a single read() call only returns data from one chunk at a time.

GridFSDownloadStream st = gridFSBucket.openDownloadStream(filename);
byte[] buffer = new byte[(int) st.getGridFSFile().getLength()];
st.read(buffer);

This may not work if the data is saved in multiple chunks: only the data from the first chunk will be read into the buffer!

So you can try the following to avoid this:

GridFSDownloadStream st = gridFSBucket.openDownloadStream(filename);
int bufferSize = 1024;
int chunkSize = st.getGridFSFile().getChunkSize();
if (bufferSize > chunkSize)
    bufferSize = chunkSize;
byte[] buffer = new byte[bufferSize];
ByteArrayOutputStream out = new ByteArrayOutputStream();
int bytesRead;
// Loop: each read() may return fewer bytes than requested, so keep reading until -1
while ((bytesRead = st.read(buffer)) != -1) {
    out.write(buffer, 0, bytesRead);
}
st.close();
byte[] data = out.toByteArray();
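
Alternatively, if you just need the whole file, the sync driver's GridFSBucket.downloadToStream does the chunk-by-chunk looping for you. A minimal sketch, assuming the same gridFSBucket and filename as above:

import java.io.ByteArrayOutputStream;

// downloadToStream writes every chunk of the file to the given OutputStream
ByteArrayOutputStream out = new ByteArrayOutputStream();
gridFSBucket.downloadToStream(filename, out);
byte[] data = out.toByteArray();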