What are the limits of calling ExecuteQuery()? For example, is there a limit on the number of entities returned or on the download size?
In other words, when will the method below hit its limits?
private static void ExecuteSimpleQuery(CloudTable table, string partitionKey, string startRowKey, string endRowKey)
{
    try
    {
        // Create the range query using the fluent API
        TableQuery<CustomerEntity> rangeQuery = new TableQuery<CustomerEntity>().Where(
            TableQuery.CombineFilters(
                TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey),
                TableOperators.And,
                TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, startRowKey),
                    TableOperators.And,
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThanOrEqual, endRowKey))));

        foreach (CustomerEntity entity in table.ExecuteQuery(rangeQuery))
        {
            Console.WriteLine("Customer: {0},{1}\t{2}\t{3}", entity.PartitionKey, entity.RowKey, entity.Email, entity.PhoneNumber);
        }
    }
    catch (StorageException e)
    {
        Console.WriteLine(e.Message);
        Console.ReadLine();
        throw;
    }
}
The method below uses ExecuteQuerySegmentedAsync with a TakeCount of 50, but how should 50 be chosen? That comes back to my question above.
private static async Task PartitionRangeQueryAsync(CloudTable table, string partitionKey, string startRowKey, string endRowKey)
{
    try
    {
        // Create the range query using the fluent API
        TableQuery<CustomerEntity> rangeQuery = new TableQuery<CustomerEntity>().Where(
            TableQuery.CombineFilters(
                TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey),
                TableOperators.And,
                TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, startRowKey),
                    TableOperators.And,
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThanOrEqual, endRowKey))));

        // Request 50 results at a time from the server.
        TableContinuationToken token = null;
        rangeQuery.TakeCount = 50;
        int segmentNumber = 0;
        do
        {
            // Execute the query, passing in the continuation token.
            // The first time this method is called, the continuation token is null. If there are more results, the call
            // populates the continuation token for use in the next call.
            TableQuerySegment<CustomerEntity> segment = await table.ExecuteQuerySegmentedAsync(rangeQuery, token);

            // Indicate which segment is being displayed
            if (segment.Results.Count > 0)
            {
                segmentNumber++;
                Console.WriteLine();
                Console.WriteLine("Segment {0}", segmentNumber);
            }

            // Save the continuation token for the next call to ExecuteQuerySegmentedAsync
            token = segment.ContinuationToken;

            // Write out the properties for each entity returned.
            foreach (CustomerEntity entity in segment)
            {
                Console.WriteLine("\t Customer: {0},{1}\t{2}\t{3}", entity.PartitionKey, entity.RowKey, entity.Email, entity.PhoneNumber);
            }

            Console.WriteLine();
        }
        while (token != null);
    }
    catch (StorageException e)
    {
        Console.WriteLine(e.Message);
        Console.ReadLine();
        throw;
    }
}
The samples come from the following link: https://github.com/Azure-Samples/storage-table-dotnet-getting-started
For ExecuteQuerySegmentedAsync, the limit is 1000. This comes from a limitation of the REST API, where a single request to the Table service can return at most 1000 entities (reference: https://docs.microsoft.com/en-us/rest/api/storageservices/query-timeout-and-pagination).

ExecuteQuery will attempt to return all entities matching the query. Internally, it fetches at most 1000 entities per iteration, and if the Table service's response includes a continuation token, it fetches the next set of entities.
UPDATE

If ExecuteQuery performs pagination automatically, it seems easier to use than ExecuteQuerySegmentedAsync. Why would I use ExecuteQuerySegmentedAsync? What about the download size? Is it 1000 entities regardless of their size?
With ExecuteQuery, you have no way to break out of the loop. This becomes a problem when your table contains a large number of entities. You get that flexibility with ExecuteQuerySegmentedAsync. For example, suppose you want to download all entities from a very large table and save them locally. Using ExecuteQuerySegmentedAsync, you could save each set of entities in a separate file.

Regarding your comment about 1000 entities regardless of their size: yes, that's correct. Keep in mind that the maximum size of a single entity is 1 MB.
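A minimal sketch of the save-each-segment-to-a-file idea described above, using the same CustomerEntity type as the samples. The method name, the outputDirectory parameter, and the file naming scheme are my own illustration, not part of the Azure SDK or the original sample:

```csharp
// Requires: using System.IO; using System.Linq; using System.Threading.Tasks;
// plus the Microsoft.Azure.Cosmos.Table (or legacy WindowsAzure.Storage) namespaces.
private static async Task DownloadTableToFilesAsync(CloudTable table, string outputDirectory)
{
    // Query the whole table; a filter could be added with .Where(...) as in the queries above.
    TableQuery<CustomerEntity> query = new TableQuery<CustomerEntity>();

    TableContinuationToken token = null;
    int segmentNumber = 0;
    do
    {
        // Each call returns at most 1000 entities (or TakeCount, if set lower).
        TableQuerySegment<CustomerEntity> segment = await table.ExecuteQuerySegmentedAsync(query, token);
        token = segment.ContinuationToken;

        if (segment.Results.Count > 0)
        {
            segmentNumber++;
            // One file per segment, so a huge table never has to be
            // held in memory all at once.
            string path = Path.Combine(outputDirectory, $"segment-{segmentNumber:D5}.txt");
            File.WriteAllLines(path, segment.Results.Select(e =>
                $"{e.PartitionKey},{e.RowKey}\t{e.Email}\t{e.PhoneNumber}"));
        }
    }
    while (token != null);
}
```

The same loop shape also lets you stop early (break when you have enough data) or throttle between segments, neither of which is possible with ExecuteQuery's internal pagination.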