Two ways to upload files in .NET Core

.NET Core offers two approaches to file upload: "buffering" and "streaming". Let me briefly explain the difference between the two.

1. Buffering: the entire file is first saved into memory through model binding, and we then obtain its stream via IFormFile. The advantage is efficiency; the drawback is the large memory requirement, so the file should not be too big.

2. Streaming: once the request body arrives, the stream of each multipart Section is read and operated on directly, so there is no need to read the entire request body into memory.

The following is how the official Microsoft documentation describes the two approaches.

Buffering

The entire file is read into an IFormFile, which is a C# representation of the file used to process or save it. The resources (disk, memory) used for file uploads depend on the number and size of concurrent uploads. If an app attempts to buffer too many uploads, the site crashes when it runs out of memory or disk space. If the size or frequency of file uploads exhausts app resources, use streaming.

Streaming

The file is received from a multipart request and processed or saved directly by the app. Streaming does not improve performance significantly; it reduces the demand for memory or disk space when uploading files.

File size limit

When it comes to size limits, there are two levels to consider: 1. the application server (Kestrel), and 2. the application itself (our .NET Core program).

1. Application server (Kestrel) settings

Kestrel limits the size of the entire request body. It can be configured as shown below (Program -> CreateHostBuilder). If a request exceeds this limit, a BadHttpRequestException: Request body too large exception is reported.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.ConfigureKestrel((context, options) =>
            {
                // Limit the Kestrel request body to 50 MB (52,428,800 bytes)
                options.Limits.MaxRequestBodySize = 52428800;
            });
            webBuilder.UseStartup<Startup>();
        });
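
If you only want to relax the Kestrel limit for a particular action rather than for the whole application, attributes can override it per action. The following is only a sketch of that option (the controller name, routes and the 100 MB value are placeholders, not from the original post).

using Microsoft.AspNetCore.Mvc;

public class LargeUploadController : Controller
{
    // Overrides MaxRequestBodySize for this action only (100 MB here, an illustrative value)
    [HttpPost("upload-large")]
    [RequestSizeLimit(104857600)]
    public IActionResult UploadLarge() => Ok();

    // Removes the request body size limit entirely for this action
    [HttpPost("upload-unlimited")]
    [DisableRequestSizeLimit]
    public IActionResult UploadUnlimited() => Ok();
}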

2. Application settings

The application-level limit is configured in Startup -> ConfigureServices. If an upload exceeds this limit, an InvalidDataException is reported.

services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = long.MaxValue;
});

Setting this value overrides the default multipart body length limit (which defaults to 128 MB). A per-action alternative is sketched below.
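
Besides the global configuration in Startup, the same limit can be overridden per controller or per action with the RequestFormLimits attribute. The sketch below is only an illustration on my part (the controller name, route and the 256 MB value are placeholder assumptions, not from the original post).

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class FileLimitsController : Controller
{
    // Raises MultipartBodyLengthLimit to 256 MB for this action only (illustrative value)
    [HttpPost("upload")]
    [RequestFormLimits(MultipartBodyLengthLimit = 268435456)]
    public IActionResult Upload(IFormFile file) => Ok(file?.Length);
}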

Source code analysis

Here I mainly want to talk about the MultipartBodyLengthLimit parameter. It limits the length of each file body when we upload in the "buffered" form. Why the buffered form? Because buffered uploads read the file through the helper class MultipartReaderStream, specifically its Read method. After every read, this method updates the total number of bytes read so far, and once that total exceeds the limit it throws new InvalidDataException($"Multipart body length limit {LengthLimit.GetValueOrDefault()} exceeded."). The check is mainly reflected in the UpdatePosition method's comparison against _observedLength.

Below is the source code of these two methods from the MultipartReaderStream class. For readability, I have trimmed some of the code.

Read


public override int Read(byte[] buffer, int offset, int count)
{
    var bufferedData = _innerStream.BufferedData;
    int read = _innerStream.Read(buffer, offset, Math.Min(count, bufferedData.Count));
    return UpdatePosition(read);
}


UpdatePosition


private int UpdatePosition(int read)
{
    _position += read;
    if (_observedLength < _position)
    {
        _observedLength = _position;
        if (LengthLimit.HasValue && _observedLength > LengthLimit.GetValueOrDefault())
        {
            throw new InvalidDataException($"Multipart body length limit {LengthLimit.GetValueOrDefault()} exceeded.");
        }
    }
    return read;
}


From the code we can see how the MultipartBodyLengthLimit restriction works: the total number of bytes read is accumulated after every read, and once that total exceeds the MultipartBodyLengthLimit setting, an InvalidDataException is thrown.

Finally, my file upload controller is as follows.

It should be noted that we did not set BodyLengthLimit when creating the MultipartReader (this value is passed through to MultipartReaderStream.LengthLimit), and it is that value which ultimately limits the upload. Here I leave it unset, so there is effectively no limit, as you can see from the UpdatePosition method above: the check only runs when LengthLimit has a value. A sketch of how to set it is shown below.
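
For reference, here is a minimal sketch of how the limit could be set when constructing the reader; the 200 MB figure is just an illustrative assumption, not a value from the original post.

// Illustrative only: cap the multipart body read through this reader at ~200 MB
var reader = new MultipartReader(boundary, HttpContext.Request.Body)
{
    BodyLengthLimit = 209715200
};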


using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Net.Http.Headers;
using System.IO;
using System.Threading.Tasks;

namespace BigFilesUpload.Controllers
{
    [Route("api/[controller]")]
    public class FileController : Controller
    {
        private readonly string _targetFilePath = "C:\\files\\TempDir";

        /// <summary>
        /// Streamed file upload
        /// </summary>
        /// <returns></returns>
        [HttpPost("UploadingStream")]
        public async Task<IActionResult> UploadingStream()
        {
            // Get the multipart boundary
            var boundary = HeaderUtilities.RemoveQuotes(MediaTypeHeaderValue.Parse(Request.ContentType).Boundary).Value;
            // Get the reader; BodyLengthLimit is intentionally not set (see the note above)
            var reader = new MultipartReader(boundary, HttpContext.Request.Body);
            // { BodyLengthLimit = 2000 };
            var section = await reader.ReadNextSectionAsync();

            // Read each section
            while (section != null)
            {
                var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out var contentDisposition);
                if (hasContentDispositionHeader)
                {
                    var trustedFileNameForFileStorage = Path.GetRandomFileName();
                    await WriteFileAsync(section.Body, Path.Combine(_targetFilePath, trustedFileNameForFileStorage));
                }
                section = await reader.ReadNextSectionAsync();
            }
            return Created(nameof(FileController), null);
        }

        /// <summary>
        /// Buffered file upload
        /// </summary>
        /// <param name="file">The uploaded file bound by the model binder</param>
        /// <returns></returns>
        [HttpPost("UploadingFormFile")]
        public async Task<IActionResult> UploadingFormFile(IFormFile file)
        {
            using (var stream = file.OpenReadStream())
            {
                var trustedFileNameForFileStorage = Path.GetRandomFileName();
                await WriteFileAsync(stream, Path.Combine(_targetFilePath, trustedFileNameForFileStorage));
            }
            return Created(nameof(FileController), null);
        }


        /// <summary>
        /// Write a stream to disk
        /// </summary>
        /// <param name="stream">Source stream</param>
        /// <param name="path">File save path</param>
        /// <returns></returns>
        public static async Task<int> WriteFileAsync(System.IO.Stream stream, string path)
        {
            const int FILE_WRITE_SIZE = 84975; // Write buffer size, kept below the 85,000-byte large object heap threshold
            int writeCount = 0;
            using (FileStream fileStream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Write, FILE_WRITE_SIZE, true))
            {
                byte[] byteArr = new byte[FILE_WRITE_SIZE];
                int readCount = 0;
                while ((readCount = await stream.ReadAsync(byteArr, 0, byteArr.Length)) > 0)
                {
                    await fileStream.WriteAsync(byteArr, 0, readCount);
                    writeCount += readCount;
                }
            }
            return writeCount;
        }

    }
}
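
To try out the two endpoints, a client can send a multipart/form-data request. The console program below is only my own illustration for testing; the base address and file path are placeholder assumptions, not part of the original post.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class UploadClient
{
    static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://localhost:5001/") };
        using var content = new MultipartFormDataContent();
        using var fileStream = File.OpenRead("C:\\files\\big-file.bin");

        // The form field name must be "file" so the buffered action's IFormFile parameter binds
        content.Add(new StreamContent(fileStream), "file", "big-file.bin");

        // The same request body also works against "api/File/UploadingStream"
        var response = await client.PostAsync("api/File/UploadingFormFile", content);
        Console.WriteLine(response.StatusCode);
    }
}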


Summary:

If you deploy behind IIS or another server such as Nginx, pay attention there as well, because they impose their own request body limits. It is also worth noting that when we create the file stream object, the write buffer size should not exceed .NET's large object heap threshold (85,000 bytes); otherwise each buffer lands on the large object heap, which easily triggers Gen 2 garbage collections under high concurrency.
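
If high concurrency makes even those per-request buffer allocations a concern, one option worth considering (not covered in the original post) is renting the write buffer from ArrayPool<byte>, so uploads reuse pooled arrays instead of allocating a new one per request. PooledFileWriter below is a hypothetical helper name, a sketch rather than the post's implementation.

using System.Buffers;
using System.IO;
using System.Threading.Tasks;

public static class PooledFileWriter
{
    // Requested buffer size; the shared pool may hand back a larger array, but it is reused across calls
    private const int BufferSize = 81920;

    public static async Task<int> WriteFileAsync(Stream stream, string path)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(BufferSize);
        int writeCount = 0;
        try
        {
            using (var fileStream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None, BufferSize, useAsync: true))
            {
                int readCount;
                while ((readCount = await stream.ReadAsync(buffer, 0, BufferSize)) > 0)
                {
                    await fileStream.WriteAsync(buffer, 0, readCount);
                    writeCount += readCount;
                }
            }
        }
        finally
        {
            // Return the buffer so other requests can reuse it
            ArrayPool<byte>.Shared.Return(buffer);
        }
        return writeCount;
    }
}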

Original link: https://www.cnblogs.com/hts92/p/11909626.html