I want to use Apache Ignite as a caching layer, accessed through the Ignite thick client. I have inserted up to 400,000 records into the server for caching, but retrieving data takes 2-3 seconds. How can I reduce this latency? Also, when I try to load more than 400,000 records, they are not cached at all — my goal is to load more than 10 million rows onto the server at startup. What could be the problem? Please suggest a concrete solution; I am new to Ignite.
Server configuration for reference:
import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MemoryAndCacheMonitoring {
    public static void main(String[] args) throws Exception {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName("Instance");
        cfg.setConsistentId("Node");

        // 6 GB persistent data region, 4 GB initial size
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
        regionCfg.setMaxSize(6L * 1024 * 1024 * 1024);
        regionCfg.setPersistenceEnabled(true);
        regionCfg.setInitialSize(4L * 1024 * 1024 * 1024);
        regionCfg.setMetricsEnabled(true);
        storageCfg.setDefaultDataRegionConfiguration(regionCfg);
        cfg.setDataStorageConfiguration(storageCfg);

        // Replicated transactional cache with full-sync writes
        CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>();
        cacheCfg.setName("myCache");
        cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        cacheCfg.setCacheMode(CacheMode.REPLICATED);
        cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
        cfg.setCacheConfiguration(cacheCfg);
        cfg.setPeerClassLoadingEnabled(true);

        Ignite igniteServer = Ignition.start(cfg);
        igniteServer.cluster().state(ClusterState.ACTIVE); // persistence requires explicit activation
        IgniteCache<String, String> myCache = igniteServer.getOrCreateCache("myCache");
        igniteServer.resetLostPartitions(Arrays.asList("myCache"));
        igniteServer.cluster().baselineAutoAdjustEnabled(true);
    }
}
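For context on the client side, a thick client normally joins the same discovery topology as the server. A minimal sketch of how such a client could be started (the discovery addresses here are assumptions for a local single-machine setup, not taken from the question):

```java
import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ThickClientStart {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // thick client: joins the topology but stores no data

        // Point discovery at the server node(s); 127.0.0.1 is an assumption for a local setup
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        Ignite client = Ignition.start(cfg);
    }
}
```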
Bulk data streaming function:
@Async
public CompletableFuture<Void> processAllRecords() {
    long startTime = System.currentTimeMillis();
    List<ProductLines> records = productLinesRepo.findRecordsWithPanNotNull();
    if (!records.isEmpty()) {
        igniteCacheService.streamBulkData("myCache", records);
        totalProcessedRecords += records.size();
        logger.info("Processed {} records", totalProcessedRecords);
    }
    long totalTime = System.currentTimeMillis() - startTime;
    logger.info("Total time taken for processing all records: {} milliseconds", totalTime);
    return CompletableFuture.completedFuture(null);
}
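The `streamBulkData` helper is not shown in the question. For loading millions of entries, `IgniteDataStreamer` is the usual Ignite tool, since it batches updates per node instead of issuing individual puts. A hypothetical sketch of such a helper, assuming the injected `Ignite` instance and a `getPanNumber()` accessor on `ProductLines` (both are assumptions, not from the question):

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class IgniteCacheService {
    private final Ignite ignite;

    public IgniteCacheService(Ignite ignite) {
        this.ignite = ignite;
    }

    /** Streams records into the named cache in large, per-node batched chunks. */
    public void streamBulkData(String cacheName, List<ProductLines> records) {
        // try-with-resources closes the streamer, which flushes remaining buffered entries
        try (IgniteDataStreamer<String, ProductLines> streamer = ignite.dataStreamer(cacheName)) {
            streamer.allowOverwrite(true);      // overwrite existing keys on reload
            streamer.perNodeBufferSize(10_000); // larger batches per node
            for (ProductLines record : records) {
                streamer.addData(record.getPanNumber(), record);
            }
        }
    }
}
```

Note also that the server configures the cache as `CacheConfiguration<String, String>`, while the streaming and retrieval code uses `ProductLines` values; the value types should be made consistent.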
Cache retrieval controller:
@GetMapping("/getCache/{cacheName}/{panNumber}")
public Map<String, ProductLines> getCachebyId(@PathVariable String cacheName, @PathVariable String panNumber) {
    IgniteCache<String, ProductLines> cache = cacheService.getCache(cacheName);
    if (cache != null) {
        Map<String, ProductLines> thatoneCache = new HashMap<>();
        // Iterate over all entries in the cache
        for (Cache.Entry<String, ProductLines> entry : cache) {
            if (entry.getKey().toString().equals(panNumber))
                thatoneCache.put(entry.getKey(), entry.getValue());
        }
        return thatoneCache;
    } else {
        throw new RuntimeException("Requested Cache '" + cacheName + "' cannot be found"); // Handle error appropriately
    }
}
Iterating over the entire cache is never a good idea.
You have the key; use it to retrieve the entry:
ProductLines res = cache.get(panNumber);
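Applied to the controller in the question, the whole scan loop collapses to a single key lookup. A sketch, assuming the same service and types as above:

```java
@GetMapping("/getCache/{cacheName}/{panNumber}")
public Map<String, ProductLines> getCachebyId(@PathVariable String cacheName,
                                              @PathVariable String panNumber) {
    IgniteCache<String, ProductLines> cache = cacheService.getCache(cacheName);
    if (cache == null) {
        throw new RuntimeException("Requested Cache '" + cacheName + "' cannot be found");
    }
    Map<String, ProductLines> result = new HashMap<>();
    ProductLines value = cache.get(panNumber); // single key lookup, no iteration
    if (value != null) {
        result.put(panNumber, value);
    }
    return result;
}
```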
In addition to Pavel Tupitsyn's answer, note that by iterating over the entire cache you are also loading data that is currently distributed across N server nodes into the thick client's memory! A cache is a key/value store: it is very good and very fast at retrieving the value for any known key, because in the worst case any single key is one level of indirection away from the client issuing the request. Hope that helps.