How can I upload large files (2 GB+) from a form to a .NET Core API controller?

Question · Votes: 2 · Answers: 1

When uploading a large file via Postman (I get the same problem from a frontend form written in PHP), I receive a 502 Bad Gateway error from the Azure Web App:

502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.

The error I see in Azure Application Insights:

Microsoft.AspNetCore.Connections.ConnectionResetException: The client has disconnected

This happens when I try to upload a 2 GB test file. With a 1 GB file it works fine, but I need to handle files up to ~5 GB.

I have already optimized the part that writes the file stream to Azure Blob Storage by using a block-write approach (based on: https://www.red-gate.com/simple-talk/cloud/platform-as-a-service/azure-blob-storage-part-4-uploading-large-blobs/), but to me it looks like the connection to the client (Postman in this case) is being closed: since this is a single HTTP POST request, the underlying Azure network stack (e.g. the load balancer) appears to close the connection because it takes too long before my API returns HTTP 200 OK for the POST request.

Is my assumption correct? If so, how can I make the upload from my frontend (or Postman) happen in chunks (e.g. 15 MB) that the API can acknowledge much faster than the whole 2 GB? Even creating a SAS URL for uploading to the Azure blob and returning that URL to the browser would be fine, but I'm not sure how to integrate that easily. Also, AFAIK there is a maximum block size, so for 2 GB I would probably need to create multiple blocks. If that is the recommendation, a good sample would be great here, but other ideas are welcome too!

Here is the relevant part of my API controller endpoint in C# .NET Core 2.2:

        [AllowAnonymous]
        [HttpPost("DoPost")]
        public async Task<IActionResult> InsertFile([FromForm]List<IFormFile> files, [FromForm]string msgTxt)
        {
            ...

                        // use generated container name
                        CloudBlobContainer container = blobClient.GetContainerReference(SqlInsertId);

                        // create container within blob
                        if (await container.CreateIfNotExistsAsync())
                        {
                            await container.SetPermissionsAsync(
                                new BlobContainerPermissions
                                {
                                    // PublicAccess = BlobContainerPublicAccessType.Blob
                                    PublicAccess = BlobContainerPublicAccessType.Off
                                }
                                );
                        }

                        // loop through all files for upload
                        foreach (var asset in files)
                        {
                            if (asset.Length > 0)
                            {

                                // replace invalid chars in filename
                                CleanFileName = String.Empty;
                                CleanFileName = Utils.ReplaceInvalidChars(asset.FileName);

                                // get name and upload file
                                CloudBlockBlob blockBlob = container.GetBlockBlobReference(CleanFileName);


                                // START of block write approach

                                //int blockSize = 256 * 1024; //256 kb
                                //int blockSize = 4096 * 1024; //4MB
                                int blockSize = 15360 * 1024; //15MB

                                using (Stream inputStream = asset.OpenReadStream())
                                {
                                    long fileSize = inputStream.Length;

                                    //block count is the number of blocks + 1 for the last one
                                    int blockCount = (int)((float)fileSize / (float)blockSize) + 1;

                                    //List of block ids; the blocks will be committed in the order of this list 
                                    List<string> blockIDs = new List<string>();

                                    //starting block number - 1
                                    int blockNumber = 0;

                                    try
                                    {
                                        int bytesRead = 0; //number of bytes read so far
                                        long bytesLeft = fileSize; //number of bytes left to read and upload

                                        //do until all of the bytes are uploaded
                                        while (bytesLeft > 0)
                                        {
                                            blockNumber++;
                                            int bytesToRead;
                                            if (bytesLeft >= blockSize)
                                            {
                                                //more than one block left, so put up another whole block
                                                bytesToRead = blockSize;
                                            }
                                            else
                                            {
                                                //less than one block left, read the rest of it
                                                bytesToRead = (int)bytesLeft;
                                            }

                                            //create a blockID from the block number, add it to the block ID list
                                            //the block ID is a base64 string
                                            string blockId =
                                              Convert.ToBase64String(ASCIIEncoding.ASCII.GetBytes(string.Format("BlockId{0}",
                                                blockNumber.ToString("0000000"))));
                                            blockIDs.Add(blockId);
                                            //set up new buffer with the right size, and fill it completely
                                            //(Stream.Read may return fewer bytes than requested)
                                            byte[] bytes = new byte[bytesToRead];
                                            int offset = 0;
                                            while (offset < bytesToRead)
                                            {
                                                int read = inputStream.Read(bytes, offset, bytesToRead - offset);
                                                if (read == 0) break; //end of stream
                                                offset += read;
                                            }

                                            //calculate the MD5 hash of the byte array
                                            string blockHash = Utils.GetMD5HashFromStream(bytes);

                                            //upload the block, provide the hash so Azure can verify it
                                            //(use the async overload so the request thread is not blocked)
                                            await blockBlob.PutBlockAsync(blockId, new MemoryStream(bytes), blockHash);

                                            //increment/decrement counters
                                            bytesRead += bytesToRead;
                                            bytesLeft -= bytesToRead;
                                        }

                                        //commit the blocks
                                        await blockBlob.PutBlockListAsync(blockIDs);

                                    }
                                    catch (Exception ex)
                                    {
                                        System.Diagnostics.Debug.Print("Exception thrown = {0}", ex);
                                        // return BadRequest(ex.StackTrace);
                                    }
                                }

                                // END of block write approach
...

Here is an example of the HTTP POST via Postman:

(Postman screenshot of the multipart/form-data POST request)

I have set maxAllowedContentLength and requestTimeout in web.config for testing:

    <requestLimits maxAllowedContentLength="4294967295" />

    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" requestTimeout="00:59:59" hostingModel="InProcess" />
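Note that web.config only covers the IIS/ANCM layer; ASP.NET Core enforces its own request-body limits that would also need raising for multi-GB uploads. A sketch of the per-action attributes (the ~5 GB values are illustrative assumptions matching the stated target size, not taken from the question):

```csharp
// Illustrative: raise ASP.NET Core's own limits for this single action.
[AllowAnonymous]
[HttpPost("DoPost")]
[RequestSizeLimit(5_368_709_120)]                              // server body-size limit (~5 GB)
[RequestFormLimits(MultipartBodyLengthLimit = 5_368_709_120)]  // multipart form reader limit
public async Task<IActionResult> InsertFile([FromForm]List<IFormFile> files, [FromForm]string msgTxt)
{
    ...
}
```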

c# .net azure azure-storage-blobs core
1 Answer

0 votes

If you want to upload large blob files to Azure Storage, a better choice is to get a SAS token from your backend and upload the file directly from the client, since that does not add workload to your backend. You can use code like the following to get a SAS token that gives your client write permission for only 2 hours:
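The answer's code snippet did not survive this capture. A minimal sketch of such an endpoint, using the same WindowsAzure.Storage SDK as the question's code (the endpoint name, `_connectionString` field, and parameters are hypothetical):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Hypothetical endpoint: returns a write-only SAS URL for one blob, valid ~2 hours.
[HttpGet("GetSasUrl")]
public IActionResult GetSasUrl(string containerName, string blobName)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(_connectionString); // assumed config value
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlockBlob blob = client.GetContainerReference(containerName)
                                .GetBlockBlobReference(blobName);

    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5), // margin for clock skew
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(2)
    };

    // SAS token is appended as a query string to the blob URI
    string sasToken = blob.GetSharedAccessSignature(policy);
    return Ok(blob.Uri + sasToken);
}
```

The client can then PUT blocks against the returned URL directly, so the long-running transfer never passes through the API and the load balancer's idle timeout no longer applies to the backend.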
