Inspired by Stephen Toub's blog post about .NET performance, we are writing a similar article to highlight the performance improvements in ASP.NET Core 6.0.
Benchmarking setup
We will use BenchmarkDotNet for most of the examples throughout this post. A repository at https://github.com/BrennanCon... is provided, which includes most of the benchmarks used in this article.
Most of the benchmark results in this article are generated from the following command line:
dotnet run -c Release -f net48 --runtimes net48 netcoreapp3.1 net5.0 net6.0
Then select the specific benchmark to run from the list.
This command line instructs BenchmarkDotNet to:
• Build everything in the Release configuration.
• Target .NET Framework 4.8 for the build.
• Run the benchmarks on .NET Framework 4.8, .NET Core 3.1, .NET 5, and .NET 6.
Some benchmarks only run on .NET 6 (for example, when comparing two ways of doing the same thing on the same version):
dotnet run -c Release -f net6.0 --runtimes net6.0
For others, only a subset of the versions is run, for example:
dotnet run -c Release -f net5.0 --runtimes net5.0 net6.0
I will include commands for running each benchmark.
Most of the results in this article were generated by running the benchmarks on Windows, so that .NET Framework 4.8 could be included in the result set. However, unless otherwise noted, all of these benchmarks show comparable improvements when run on Linux or macOS. Just make sure you have installed each runtime you want to measure. These benchmarks use a .NET 6 RC1 build, along with the latest released downloads of .NET 5 and .NET Core 3.1.
Span<T>
Ever since Span<T> was introduced in .NET Core 2.1, we have been converting more code to use spans in each subsequent release, both internally and as part of the public API, to improve performance. This release is no exception.
PR dotnet/aspnetcore#28855 avoids allocating a temporary substring when adding two PathString instances, using a Span<char> as the temporary buffer instead. In the following benchmark, we use a short string and a longer string to show the performance difference from avoiding the temporary string.
dotnet run -c Release -f net48 --runtimes net48 net5.0 net6.0 --filter *PathStringBenchmark*

```csharp
private PathString _first = new PathString("/first/");
private PathString _second = new PathString("/second/");
private PathString _long = new PathString("/longerpathstringtoshowsubstring/");

[Benchmark]
public PathString AddShortString()
{
    return _first.Add(_second);
}

[Benchmark]
public PathString AddLongString()
{
    return _first.Add(_long);
}
```
dotnet/aspnetcore#34001 introduced a new Span-based API for enumerating a query string. It is allocation-free when the query string contains no encoded characters, and lower-allocation when it does.
dotnet run -c Release -f net6.0 --runtimes net6.0 --filter QueryEnumerableBenchmark
```csharp
#if NET6_0_OR_GREATER
public enum QueryEnum
{
    Simple = 1,
    Encoded,
}

[ParamsAllValues]
public QueryEnum QueryParam { get; set; }

private string SimpleQueryString = "?key1=value1&key2=value2";
private string QueryStringWithEncoding = "?key1=valu%20&key2=value%20";

[Benchmark(Baseline = true)]
public void QueryHelper()
{
    var queryString = QueryParam == QueryEnum.Simple ? SimpleQueryString : QueryStringWithEncoding;
    foreach (var queryParam in QueryHelpers.ParseQuery(queryString))
    {
        _ = queryParam.Key;
        _ = queryParam.Value;
    }
}

[Benchmark]
public void QueryEnumerable()
{
    var queryString = QueryParam == QueryEnum.Simple ? SimpleQueryString : QueryStringWithEncoding;
    foreach (var queryParam in new QueryStringEnumerable(queryString))
    {
        _ = queryParam.DecodeName();
        _ = queryParam.DecodeValue();
    }
}
#endif
```
It should be noted that there is no such thing as a free lunch. In the case of the new QueryStringEnumerable API, if you plan to enumerate the query string values multiple times, it can actually be more expensive than using QueryHelpers.ParseQuery and storing the parsed query string values in a dictionary.
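To make that trade-off concrete, here is a rough JavaScript analogue (not the .NET API; the names are invented for illustration) of lazy, decode-on-demand enumeration versus eager parsing:

```javascript
// Rough analogue of QueryStringEnumerable: a generator yields each
// name/value pair lazily and defers percent-decoding until the caller
// actually asks for it, instead of eagerly building a parsed dictionary.
function* enumerateQuery(queryString) {
    const s = queryString.startsWith("?") ? queryString.slice(1) : queryString;
    for (const pair of s.split("&")) {
        if (pair.length === 0) continue;
        const eq = pair.indexOf("=");
        const name = eq < 0 ? pair : pair.slice(0, eq);
        const value = eq < 0 ? "" : pair.slice(eq + 1);
        // Decoding only happens on demand, so a consumer that never
        // decodes a given pair never pays for it.
        yield {
            decodeName: () => decodeURIComponent(name),
            decodeValue: () => decodeURIComponent(value),
        };
    }
}
```

Each pass over the generator re-parses the string, which mirrors the caveat above: if you need the values repeatedly, parse once into a dictionary instead.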
dotnet/aspnetcore#29448 from @paulomorgado uses the string.Create method, which allows you to initialize a string after it is created if you know its final size. This removed some temporary string allocations in UriHelper.BuildAbsolute.
dotnet run -c Release -f netcoreapp3.1 --runtimes netcoreapp3.1 net6.0 --filter UriHelperBenchmark
```csharp
#if NETCOREAPP
[Benchmark]
public void BuildAbsolute()
{
    _ = UriHelper.BuildAbsolute("https", new HostString("localhost"));
}
#endif
```
PR dotnet/aspnetcore#31267 converted some parsing logic in ContentDispositionHeaderValue to use Span<T>-based APIs to avoid temporary strings and temporary byte[] allocations.
dotnet run -c Release -f net48 --runtimes net48 netcoreapp3.1 net5.0 net6.0 --filter ContentDispositionBenchmark
```csharp
[Benchmark]
public void ParseContentDispositionHeader()
{
    var contentDisposition = new ContentDispositionHeaderValue("inline");
    contentDisposition.FileName = "FileÃName.bat";
}
```
Idle connections
One of the main components of ASP.NET Core is the hosted server, which brings many different problems to optimize. We will focus on the improvements to idle connections in 6.0, where we made many changes to reduce the amount of memory used by a connection that is waiting for data.
We made three different types of changes. One was to reduce the size of the objects used by a connection, including System.IO.Pipelines, SocketConnections, and SocketSenders. The second type of change was to pool commonly accessed objects so that we can reuse old instances and save allocations. The third type of change was to make use of so-called "zero-byte reads". Here, we try to read from the connection with a zero-byte buffer; when that read completes, we know data is now available, and we can then supply a buffer to read that data immediately. This avoids pre-allocating a buffer for a read that might not complete for a long time, so we avoid a large number of allocations until we know data is available.
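The zero-byte read idea can be sketched as follows. This is a simplified simulation, not Kestrel's implementation: the socket object and its read method are stand-ins invented for illustration.

```javascript
// Simulated socket: data "arrives" after a delay, and any read (including a
// zero-byte read) only completes once data is available.
function makeSocket(payload, delayMs) {
    const ready = new Promise(resolve => setTimeout(resolve, delayMs));
    return {
        async read(buffer) {
            await ready;
            if (buffer.length === 0) return 0; // zero-byte probe: nothing copied
            const n = Math.min(buffer.length, payload.length);
            payload.copy(buffer, 0, 0, n);
            return n;
        }
    };
}

async function readWithZeroByteProbe(socket) {
    // Wait for readiness without holding a buffer while the connection idles.
    await socket.read(Buffer.alloc(0));
    // Only now, when data is known to exist, allocate the real buffer.
    const buffer = Buffer.alloc(4096);
    const n = await socket.read(buffer);
    return buffer.subarray(0, n);
}
```

With 10,000 idle connections, the saving is 10,000 read buffers that are never allocated until their connections actually have data.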
dotnet/runtime#49270 reduced the size of a System.IO.Pipelines Pipe from ~560 bytes to ~368 bytes, a 34% reduction. There are at least two pipes per connection, so this is a big win.
dotnet/aspnetcore#31308 refactored the Socket layer of Kestrel to avoid some async state machines and shrink the remaining ones, saving 33% of allocations per connection.
dotnet/aspnetcore#30769 reuses a single PipeOptions instance for every connection to the server, rather than allocating one per connection.

dotnet/aspnetcore#31311 from @benaadams replaces well-known header values in WebSocket requests with interned strings, which allows the strings allocated during header parsing to be garbage collected, reducing the memory usage of long-lived WebSocket connections.

dotnet/aspnetcore#30771 refactored the Sockets layer in Kestrel. First, it avoids allocating both a SocketReceiver object and a SocketAwaitableEventArgs by combining them into a single object, which saves a few bytes and results in fewer objects allocated per connection. The PR also pools the SocketSender class, so instead of creating one per connection you now have, on average, one per core. In the benchmark below with 10,000 connections, only 16 were allocated on my machine instead of 10,000, saving ~46 MB!
Another change with a similar effect is dotnet/runtime#49123, which adds support for zero-byte reads in SslStream, taking our 10,000 idle connections from ~46 MB allocated by SslStream down to ~2.3 MB. dotnet/runtime#49117 added support for zero-byte reads on StreamPipeReader, which Kestrel then used in dotnet/aspnetcore#30863 to enable zero-byte reads in SslStream.
The end result of all these changes is a significant reduction in memory usage for idle connections.
The following numbers are not from a BenchmarkDotNet app, because they measure idle connections, which is easier to set up with separate client and server applications.
The console client and WebApplication server code are available in the following gist:
https://gist.github.com/Brenn...
Here is the server memory usage of 10,000 idle secure WebSocket connections (WSS) on different frameworks.

This is nearly four times less memory than on .NET 5.
Entity Framework Core
EF Core made many improvements in 6.0: query execution is 31% faster, and the TechEmpower Fortunes benchmark improved by 70% through runtime updates, optimized benchmarks, and the EF improvements combined.
These improvements come from better object pooling, intelligently checking whether telemetry is enabled, and adding an option to opt out of thread-safety checks when you know your application uses its DbContext safely.
Please refer to the blog post Announcing Entity Framework Core 6.0 Preview 4: Performance Edition, which highlights many of the improvements in detail.
Blazor native byte[] interop
Blazor now has efficient support for byte arrays when performing JavaScript interop. Previously, byte arrays sent to and from JavaScript were Base64-encoded so they could be serialized as JSON, which increased the transfer size and the CPU load. This encoding has been optimized away in .NET 6, allowing users to transparently work with byte[] in .NET and Uint8Array in JavaScript. The documentation describes how to use this feature for JavaScript-to-.NET and .NET-to-JavaScript interop.
Let's look at a quick benchmark to see the difference in byte[] interop between .NET 5 and .NET 6. The following Razor code creates a 22 kB byte[] and sends it to the JavaScript receiveAndReturnBytes function, which immediately returns it. This is repeated 10,000 times, and the timing data is printed to the screen. This code is the same for .NET 5 and .NET 6.
```razor
<button @onclick="@RoundtripData">Roundtrip Data</button>
<hr />
@Message

@code {
    public string Message { get; set; } = "Press button to benchmark";

    private async Task RoundtripData()
    {
        var bytes = new byte[1024 * 22];
        List<double> timeForInterop = new List<double>();
        var testTime = DateTime.Now;

        for (var i = 0; i < 10_000; i++)
        {
            var interopTime = DateTime.Now;

            var result = await JSRuntime.InvokeAsync<byte[]>("receiveAndReturnBytes", bytes);

            timeForInterop.Add(DateTime.Now.Subtract(interopTime).TotalMilliseconds);
        }

        Message = $"Round-tripped: {bytes.Length / 1024d} kB 10,000 times and it took on average {timeForInterop.Average():F3}ms, and in total {DateTime.Now.Subtract(testTime).TotalMilliseconds:F1}ms";
    }
}
```
Next, let's look at the receiveAndReturnBytes JavaScript function. On .NET 5, we must first decode the Base64-encoded byte array into a Uint8Array so it can be used in application code. Then we must re-encode it to Base64 before returning the data to the server.
```javascript
function receiveAndReturnBytes(bytesReceivedBase64Encoded) {
    const bytesReceived = base64ToArrayBuffer(bytesReceivedBase64Encoded);

    // Use Uint8Array data in application

    const bytesToSendBase64Encoded = base64EncodeByteArray(bytesReceived);

    if (bytesReceivedBase64Encoded != bytesToSendBase64Encoded) {
        throw new Error("Expected input/output to match.")
    }

    return bytesToSendBase64Encoded;
}

// https://stackoverflow.com/a/21797381
function base64ToArrayBuffer(base64) {
    const binaryString = atob(base64);
    const length = binaryString.length;
    const result = new Uint8Array(length);
    for (let i = 0; i < length; i++) {
        result[i] = binaryString.charCodeAt(i);
    }
    return result;
}

function base64EncodeByteArray(data) {
    const charBytes = new Array(data.length);
    for (var i = 0; i < data.length; i++) {
        charBytes[i] = String.fromCharCode(data[i]);
    }
    const dataBase64Encoded = btoa(charBytes.join(''));
    return dataBase64Encoded;
}
```
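For a sense of the transfer overhead, Base64 emits 4 output characters for every 3 input bytes, so the 22 kB payload from the benchmark grows by roughly a third before it even reaches the application code. A quick Node.js check:

```javascript
// Base64 overhead on the 22 kB benchmark payload: 4 characters per 3 bytes.
const raw = Buffer.alloc(1024 * 22);        // 22,528 raw bytes
const encoded = raw.toString("base64");     // what the Base64 path must send
const overhead = encoded.length / raw.length - 1;
console.log(raw.length, encoded.length, (overhead * 100).toFixed(1) + "%");
// → 22528 30040 33.3%
```

And that is before counting the per-byte encode/decode loops shown above, which run on both the client and the server.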
The encoding/decoding adds significant overhead on both the client and the server, and requires a lot of boilerplate code. So how does this look on .NET 6? Well, it's quite a bit simpler:
```javascript
function receiveAndReturnBytes(bytesReceived) {
    // bytesReceived comes as a Uint8Array ready for use
    // and can be used by the application or immediately returned.
    return bytesReceived;
}
```
So it's certainly easier to write, but how does it perform? Running these snippets in a blazorserver template on .NET 5 and .NET 6, under the Release configuration, we see a 78% improvement in byte[] interop performance on .NET 6!
In addition, this byte array interop support is used by the framework to enable bidirectional streaming interop between JavaScript and .NET. Users can now transfer arbitrary binary data. Documentation for streaming from .NET to JavaScript is available, as is documentation for streaming from JavaScript to .NET.
Input files
Using the Blazor streaming interop mentioned above, we now support uploading large files via the InputFile component (previously, uploads were limited to about 2 GB). Thanks to using native byte[] streaming instead of Base64 encoding, this component's upload speed is also significantly improved. For example, a 100 MB file uploads 77% faster compared to .NET 5.
Note that the streaming interop support can also be used to efficiently download (large) files. See the documentation for more details.
The InputFile component has been upgraded to use streaming via dotnet/aspnetcore#33900.
Miscellaneous
dotnet/aspnetcore#30320 from @benaadams modernized and optimized our TypeScript libraries, so websites load faster. The signalr.min.js file went from 36.8 kB compressed / 132 kB uncompressed to 16.1 kB compressed / 42.2 kB uncompressed. The blazor.server.js file went from 86.7 kB compressed / 276 kB uncompressed to 43.9 kB compressed / 130 kB uncompressed.
dotnet/aspnetcore#31322 from @benaadams removed some unnecessary casts when getting common features from the connection's feature collection. This gives about a 50% improvement when accessing common features in the collection. Unfortunately, it isn't possible to show the performance improvement in a benchmark here because it requires a bunch of internal types, so I'm including the numbers from the PR; if you're interested in running them, the PR includes benchmarks that run against the internal code.
dotnet/aspnetcore#31519, also from @benaadams, adds default interface methods to the IHeaderDictionary type for accessing common headers via properties named after the header name. No more mistyping common header names when accessing the header dictionary! More interestingly for this blog post, this change allows server implementations to return a custom header dictionary that implements these new interface methods more optimally. For example, the server can store a header value directly in a field and return that field, instead of looking the value up in an internal dictionary, which requires hashing the key and finding the entry. In some cases this change yields up to a 480% improvement when getting or setting header values. Once again, benchmarking this change properly requires internal types, so I'm including the numbers from the PR; for those interested in trying it, the PR includes benchmarks that run against the internal code.
dotnet/aspnetcore#31466 uses .NET 6's new CancellationTokenSource.TryReset() method to reuse a CancellationTokenSource if the connection closes without being canceled. The following numbers were collected by running bombardier against Kestrel with 125 connections, for ~100,000 requests.
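The reuse pattern can be sketched like this. The names below are invented for illustration and this is JavaScript, not the ASP.NET Core code; only the semantics of CancellationTokenSource.TryReset() are being mirrored:

```javascript
// A per-connection token source is rented for each request and reset
// between requests; only a canceled source forces a fresh allocation.
class ResettableTokenSource {
    constructor() { this.canceled = false; }
    cancel() { this.canceled = true; }
    // Mirrors TryReset(): succeeds only if the source was never canceled,
    // meaning it is safe to hand out again.
    tryReset() {
        return !this.canceled;
    }
}

let allocations = 0;
function rentTokenSource(connection) {
    if (connection.cts && connection.cts.tryReset()) {
        return connection.cts; // reuse: the common, uncanceled path
    }
    allocations++;
    connection.cts = new ResettableTokenSource();
    return connection.cts;
}
```

Across many requests on a connection that is never canceled, only the first request pays for an allocation.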
dotnet/aspnetcore#31528 and dotnet/aspnetcore#34075 made similar changes to reuse CancellationTokenSources for HTTPS handshakes and HTTP/3 streams, respectively.
dotnet/aspnetcore#31660 improved the performance of server-to-client streaming in SignalR by reusing a single allocated StreamItem object for the whole stream instead of allocating one per stream item. And dotnet/aspnetcore#31661 stores the HubCallerClients object on the SignalR connection instead of allocating one per Hub method call.
dotnet/aspnetcore#31506 from @ShreyasJejurkar refactored the internals of the WebSocket handshake to avoid a temporary List allocation. dotnet/aspnetcore#32829 from @gfoidl refactored QueryCollection to reduce allocations and vectorize some of the code. dotnet/aspnetcore#32234 from @benaadams removed an unused field from the HttpRequestHeaders enumeration, which improves performance by no longer assigning the field for every header enumerated.
dotnet/aspnetcore#31333 from @martincostello converted Http.Sys to use LoggerMessage.Define, a high-performance logging API. This avoids unnecessary boxing of value types, parsing of log format strings, and, in some cases, allocating strings or objects when the log level is not enabled.
dotnet/aspnetcore#31784 added a new IApplicationBuilder.Use overload for registering middleware that avoids unnecessary per-request allocations when running the middleware. The old code looks like this:
```csharp
app.Use(async (context, next) =>
{
    await next();
});
```

The new code looks like this:

```csharp
app.Use(async (context, next) =>
{
    await next(context);
});
```
The following benchmark simulates the middleware pipeline without setting up a server, to demonstrate the improvement. An int is used instead of HttpContext for the request, and each middleware returns a completed task.
dotnet run -c Release -f net6.0 --runtimes net6.0 --filter *UseMiddlewareBenchmark*

```csharp
private static Func<Func<int, Task>, Func<int, Task>> UseOld(Func<int, Func<Task>, Task> middleware)
{
    return next =>
    {
        return context =>
        {
            Func<Task> simpleNext = () => next(context);
            return middleware(context, simpleNext);
        };
    };
}

private static Func<Func<int, Task>, Func<int, Task>> UseNew(Func<int, Func<int, Task>, Task> middleware)
{
    return next => context => middleware(context, next);
}

Func<int, Task> Middleware = UseOld((c, n) => n())(i => Task.CompletedTask);
Func<int, Task> NewMiddleware = UseNew((c, n) => n(c))(i => Task.CompletedTask);

[Benchmark(Baseline = true)]
public Task Use()
{
    return Middleware(10);
}

[Benchmark]
public Task UseNew()
{
    return NewMiddleware(10);
}
```
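The same shape can be mirrored in a few lines of JavaScript to show where the saved allocation comes from: the old overload has to create a fresh closure over `context` on every request, while the new overload passes the context through explicitly with nothing captured.

```javascript
// Old style: wrapping `next` so it captures `context` allocates a new
// closure on every single invocation of the pipeline.
function useOld(middleware) {
    return next => context => middleware(context, () => next(context));
}

// New style: `next` takes the context as an argument, so the same function
// reference is passed through unchanged on every request.
function useNew(middleware) {
    return next => context => middleware(context, next);
}

// A trivial terminal "handler" standing in for the rest of the pipeline.
const terminal = context => context + 1;
const oldPipeline = useOld((context, next) => next())(terminal);
const newPipeline = useNew((context, next) => next(context))(terminal);
```

Both pipelines compute the same result; the difference is purely the per-request closure the old form has to allocate.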
Summary
I hope you enjoyed reading about some of the improvements in ASP.NET Core 6.0! I encourage you to take a look at the .NET 6 blog post about performance improvements in the runtime.