<h1 style="text-align: left;">Log volume problem</h1>
<p style="text-align: justify;">When a problem occurs in our production system, we want our logs to contain all the information necessary to find the cause of the error. In rather complex systems, this leads to the collection of a large amount of data: which processing stages were completed, what the arguments of certain function calls were, what results were returned by calls to external services, etc. The problem is that we have to collect all this information even when there is no error. And this leads to an increase in the volume of our log storage, for which we have to pay.</p>
<p style="text-align: justify;">Log levels (error, warning, information, ...) don't help much here. Usually the application has some target log level (for example, information). This means that all records with the level equal to or higher than this target level are logged, and all other records are discarded. But at the moment when an error occurs, it is these debug level entries that we are interested in, which are usually discarded. If the problem repeats frequently, we can temporarily lower the target level, collect all the necessary information, and then return the target level back. In this case, the rate of increase in the volume of the log storage increases only temporarily. But if the error is rare, this approach (although possible) is not very convenient because it leads to the collection of a large amount of data.</p>
<p style="text-align: justify;">Can we improve the situation? I think we can. But here I have to say that in this article I will not offer a ready-made solution. This is just some idea that should be implemented in existing logging systems, as it requires changes to their source code.</p>
<p style="text-align: justify;">Ok. Let's begin.</p><span><a name='more'></a></span>
<h2 style="text-align: left;">Main idea</h2>
<p style="text-align: justify;">The basic idea to get the best of both worlds is as follows. Let's introduce the concept of a block of log entries:</p>
<pre> <code lang="cs">
using (_logBlockFactory.CreateLogBlock())
{
    ...
    SomeOperations();
    ...
}
</code>
</pre>
<p style="text-align: justify;">All records inside such a block will not be sent to the storage immediately. Instead, they wait for the block's <i>Dispose</i> method to be called. Inside this method, we analyze all the log entries of this block. If there is at least one entry of the error log level or higher, we send all entries to the storage. Otherwise, we just delete all these records.</p>
<p style="text-align: justify;">Naturally, the timestamp for each log entry is taken at the time the record is created, and not at the time the <i>Dispose</i> method is called.</p>
<h2 style="text-align: left;">Possible improvements</h2>
<p style="text-align: justify;">This basic idea can be improved.</p>
<p style="text-align: justify;">First of all, we may need to have some log entries written to the storage regardless of whether there is an error. For example, you may need such records for auditing. Or you want to save information about the performance of operations (although it may be better to do this using metrics). However, we must have this ability:</p>
<pre> <code lang="cs">
_logger.Mandatory.WriteInformation("Something");
</code>
</pre>
<p style="text-align: justify;">Secondly, we may need to be able to decide whether we want to upload records to the storage or not. For this purpose, we can use either a simple property:</p>
<pre> <code lang="cs">
_logBlockFactory.CurrentBlock.WriteBlock = true;
</code>
</pre>
<p style="text-align: justify;">or invent something more complex, such as a function that will look through all the entries in the block and make a decision.</p>
<p style="text-align: justify;">We can also play with the log levels. For example, if there are errors, we can write records of all levels of the log to the storage. And without errors, we will save only records of the information level and above. I do not know if it is possible (and necessary) to implement such an approach. Today, we can set a separate target log level for each storage. Thus, this new approach to logging will require too many changes to the existing logging concept.</p>
<h2 style="text-align: left;">Problems</h2>
<p style="text-align: justify;">Naturally, there are negative sides to this idea.</p>
<p style="text-align: justify;">If you want to store log entries almost in real time, this approach will not work for you. Here, records are saved only when the <i>Dispose</i> method of the block is called.</p>
<p style="text-align: justify;">Since we are currently storing log entries in batches, it may happen that they will arrive in the storage in the wrong order in time. For example, the first block stores records for 11:31:01, 11:31:02, 11:31:03. After that, the second block is completed, and it also stores records for 11:31:01, 11:31:02, 11:31:03. This means that the records are shuffled by their timestamps. If your storage does not support timestamp ordering (console, file, ...), this may be a problem. On the other hand, I think that all modern production log storages (for example, Azure Application Insights) support such ordering. Also, records from different blocks will usually have different correlation IDs. This will simplify the separation of such records.</p>
<p style="text-align: justify;">In addition, since we do not write log entries immediately, in the event of a serious failure, we may lose some of the entries (if the <i>Dispose</i> method of the block was not called at all).</p>
<p style="text-align: justify;">There is also a slight problem with the semantic of nested blocks. If the inner block encounters an error, how should the outer block behave? What if an error happened in the outer block? Should we write records from the inner blocks? All these questions should be resolved.</p>
<h2 style="text-align: left;">Conclusion</h2>
<p style="text-align: justify;">That was my idea how to solve the problem of growing log volume. I hope it will be useful and somebody will try to implement it. Good luck!</p>
<h1 style="text-align: left;">Comparison of HTTP libraries</h1>
<p style="text-align: justify;">In .NET applications, we often need to make HTTP calls. In these cases, we can use the standard <a href="https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpclient?view=net-7.0" rel="nofollow" target="_blank">HttpClient</a> class or some other library. For example, I have already used <a href="https://github.com/reactiveui/refit" rel="nofollow" target="_blank">Refit</a> and <a href="https://restsharp.dev" rel="nofollow" target="_blank">RestSharp</a>. But I have never decided which one to use: the library was always already in use in the project I was working on. Therefore, I decided to compare these libraries to form my own informed opinion about which one is better and why. This is what I will do in this article.</p>
<p style="text-align: justify;">But how should I compare these libraries? I have no doubt that they all can send HTTP requests and receive responses. After all, these libraries wouldn't have become so popular if they couldn't do that. Therefore, I'm more interested in additional features that are in demand in large corporative applications.</p>
<p style="text-align: justify;">Ok, let's start.</p><span><a name='more'></a></span><h2 style="text-align: justify;"><span style="text-align: left;">Initial setup</span></h2>
<p style="text-align: justify;">As a service to communicate with we'll use a simple Web API:</p>
<pre> <code lang="cs">
[ApiController]
[Route("[controller]")]
public class DataController : ControllerBase
{
    [HttpGet("hello")]
    public IActionResult GetHello()
    {
        return Ok("Hello");
    }
}
</code>
</pre>
<p style="text-align: justify;">Now let's create clients for this service using our 3 libraries.</p>
<p style="text-align: justify;">We'll create an interface:</p>
<pre> <code lang="cs">
public interface IServiceClient
{
    Task<string> GetHello();
}
</code>
</pre>
<p style="text-align: justify;">Its implementation using HttpClient looks like this:</p>
<pre> <code lang="cs">
public class ServiceClient : IServiceClient
{
    private readonly HttpClient _httpClient;

    public ServiceClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string> GetHello()
    {
        var response = await _httpClient.GetAsync("http://localhost:5001/data/hello");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
</code>
</pre>
<p style="text-align: justify;">Now we must prepare the dependency container:</p>
<pre> <code lang="cs">
var services = new ServiceCollection();
services.AddHttpClient<IServiceClient, ServiceClient>();
</code>
</pre>
<p style="text-align: justify;">In the case of RestSharp, the implementation has the following form:</p>
<pre> <code lang="cs">
public class ServiceClient : IServiceClient
{
    public async Task<string?> GetHello()
    {
        var client = new RestClient();
        var request = new RestRequest("http://localhost:5001/data/hello");
        return await client.GetAsync<string>(request);
    }
}
</code>
</pre>
<p style="text-align: justify;">The dependency container should be prepared as follows:</p>
<pre> <code lang="cs">
var services = new ServiceCollection();
services.AddTransient<IServiceClient, ServiceClient>();
</code>
</pre>
<p style="text-align: justify;">And for Refit we have to define a separate interface:</p>
<pre> <code lang="cs">
public interface IServiceClient
{
    [Get("/data/hello")]
    Task<string> GetHello();
}
</code>
</pre>
<p style="text-align: justify;">Its registration is as follows:</p>
<pre> <code lang="cs">
var services = new ServiceCollection();
services
    .AddRefitClient<IServiceClient>()
    .ConfigureHttpClient(c =>
    {
        c.BaseAddress = new Uri("http://localhost:5001");
    });
</code>
</pre>
<p style="text-align: justify;">After that, there are no problems with using of these clients.</p>
<h2 style="text-align: left;">Performance comparison</h2>
<p style="text-align: justify;">First of all, let's compare performance of these libraries. We'll measure simple GET-request using Benchmark.Net. Here are the results:</p>
<table border="1">
<tbody><tr>
<th>
Method</th>
<th>
Mean</th>
<th>
Error</th>
<th>
StdDev</th>
<th>
Min</th>
<th>
Max</th>
</tr>
<tr>
<td>
HttpClient</td>
<td>
187.1 us</td>
<td>
4.31 us</td>
<td>
12.72 us</td>
<td>
127.0 us</td>
<td>
211.8 us</td>
</tr>
<tr>
<td>
Refit</td>
<td>
207.3 us</td>
<td>
4.47 us</td>
<td>
13.12 us</td>
<td>
138.4 us</td>
<td>
226.7 us</td>
</tr>
<tr>
<td>
RestSharp</td>
<td>
724.5 us</td>
<td>
14.36 us</td>
<td>
36.03 us</td>
<td>
657.6 us</td>
<td>
902.7 us</td>
</tr>
</tbody></table>
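<p style="text-align: justify;">The full harness is not shown here, but for each library it was a simple benchmark along these lines (a sketch; the registration matches the examples above):</p>
<pre> <code lang="cs">
[MinColumn, MaxColumn]
public class HttpClientBenchmark
{
    private IServiceProvider _provider = null!;

    [GlobalSetup]
    public void Setup()
    {
        var services = new ServiceCollection();
        services.AddHttpClient<IServiceClient, ServiceClient>();
        _provider = services.BuildServiceProvider();
    }

    [Benchmark]
    public async Task<string> GetHello()
    {
        return await _provider.GetRequiredService<IServiceClient>().GetHello();
    }
}
</code>
</pre>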
<p style="text-align: justify;">It is obvious, that RestSharp takes much longer to execute the request. Let's understand why.</p>
<p style="text-align: justify;">Here is our code for the RestSharp client:</p>
<pre> <code lang="cs">
public async Task<string?> GetHello()
{
    var client = new RestClient();
    var request = new RestRequest("http://localhost:5001/data/hello");
    return await client.GetAsync<string>(request);
}
</code>
</pre>
<p style="text-align: justify;">As you see, we create a new <i>RestClient</i> object for each request. Inside, it creates and initializes a new <i>HttpClient</i> instance. That's what time is spent on. But RestSharp allows us to use a ready-made instance of <i>HttpClient</i>. Let's slightly change the code of our client:</p>
<pre> <code lang="cs">
public class ServiceClient : IServiceClient
{
    private readonly HttpClient _httpClient;

    public ServiceClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string?> GetHello()
    {
        var client = new RestClient(_httpClient);
        var request = new RestRequest("http://localhost:5001/data/hello");
        return await client.GetAsync<string>(request);
    }
}
</code>
</pre>
<p style="text-align: justify;">And initialization should also be changed:</p>
<pre> <code lang="cs">
var services = new ServiceCollection();
services.AddHttpClient<IServiceClient, ServiceClient>();
</code>
</pre>
<p style="text-align: justify;">Now the performance comparison results look more uniform:</p>
<table border="1">
<tbody><tr>
<th>
Method</th>
<th>
Mean</th>
<th>
Error</th>
<th>
StdDev</th>
<th>
Median</th>
<th>
Min</th>
<th>
Max</th>
</tr>
<tr>
<td>
HttpClient</td>
<td>
190.2 us</td>
<td>
3.79 us</td>
<td>
10.61 us</td>
<td>
190.8 us</td>
<td>
163.1 us</td>
<td>
214.5 us</td>
</tr>
<tr>
<td>
Refit</td>
<td>
180.8 us</td>
<td>
12.20 us</td>
<td>
35.96 us</td>
<td>
205.2 us</td>
<td>
122.5 us</td>
<td>
229.3 us</td>
</tr>
<tr>
<td>
RestSharp</td>
<td>
242.8 us</td>
<td>
7.45 us</td>
<td>
21.73 us</td>
<td>
248.5 us</td>
<td>
160.4 us</td>
<td>
278.5 us</td>
</tr>
</tbody></table>
<h2 style="text-align: left;">Base address</h2>
<p style="text-align: justify;">Sometimes we need to change the base address for requests during the execution of our application. For example, our system works with several MT4 trading servers. During the operation of our application, you can connect and disconnect trading servers. Since all these trading servers have the same API, we can use one client to communicate with them. But they have different base addresses. And these addresses are unknown at the start of our system.</p>
<p style="text-align: justify;">For HttpClient and RestSharp, this is not a problem. Here is the code for HttpClient:</p>
<pre> <code lang="cs">
public async Task<string> GetHelloFrom(string baseAddress)
{
    var response = await _httpClient.GetAsync($"{baseAddress.TrimEnd('/')}/data/hello");
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
}
</code>
</pre>
<p style="text-align: justify;">and here is one for RestSharp:</p>
<pre> <code lang="cs">
public async Task<string?> GetHelloFrom(string baseAddress)
{
    var client = new RestClient(_httpClient);
    var request = new RestRequest($"{baseAddress.TrimEnd('/')}/data/hello");
    return await client.GetAsync<string>(request);
}
</code>
</pre>
<p style="text-align: justify;">But for Refit, it is slightly more complicated. We specified the base address at the configuration stage:</p>
<pre> <code lang="cs">
services
    .AddRefitClient<IServiceClient>()
    .ConfigureHttpClient(c =>
    {
        c.BaseAddress = new Uri("http://localhost:5001");
    });
</code>
</pre>
<p style="text-align: justify;">But now we can't do that. We only have an interface, but not its implementation. Fortunately, Refit allows us to create an instances of this interface manually by specifying base address. To do this, we'll create a factory for our interfaces:</p>
<pre> <code lang="cs">
internal class RefitClientFactory
{
    public T GetClientFor<T>(string baseUrl)
    {
        RefitSettings settings = new RefitSettings();
        return RestService.For<T>(baseUrl, settings);
    }
}
</code>
</pre>
<p style="text-align: justify;">Let's register it in our dependency container:</p>
<pre> <code lang="cs">
services.AddScoped<RefitClientFactory>();
</code>
</pre>
<p style="text-align: justify;">We'll use this factory every time we want to explicitly set the base address:</p>
<pre> <code lang="cs">
var factory = provider.GetRequiredService<RefitClientFactory>();
var client = factory.GetClientFor<IServiceClient>("http://localhost:5001");
var response = await client.GetHello();
</code>
</pre>
<h2 style="text-align: left;">Common processing of requests</h2>
<p style="text-align: justify;">We can divide into two groups all the actions we perform during HTTP requests. The first group contains actions that depend on a specific endpoint. Fjr example, during calls to the ServiceA we need to apply one actions, and other actions during calls to ServiceB. In this case, we simply perform these actions inside the implementation of the client interfaces for these services: <i>IServiceAClient</i> and <i>IServiceBClient</i>. There are no problems with this approach in case of using HttpClient and RestSharp. But in case of Refit, we do not actually have a client interface implementation. In this situation, we can use an ordinary decorator (for example, from <a href="https://github.com/khellang/Scrutor" rel="nofollow" target="_blank">Scrutor</a> library).</p>
<p style="text-align: justify;">The second group contains actions that must be performed for each HTTP request regardless of the endpoint. These are actions such as error logging, request time measurement, etc. Although we can also implement this logic inside the implementations of our client interfaces, I don't like this approach. There are too many things to do, too many places to change, and it is easy to forget something if a new client is created. Can we define some code that will be executed on every request?</p>
<p style="text-align: justify;">Yes, we can. We can add our own handler to the chain of standard request handlers. Consider the following example. Let's say we want to log information about requests. In this case, we may create a class inheriting <i>DelegatingHandler</i>:</p>
<pre> <code lang="cs">
public class LoggingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        try
        {
            AnsiConsole.MarkupLine($"[yellow]Sending {request.Method} request to {request.RequestUri}[/]");
            return await base.SendAsync(request, cancellationToken);
        }
        catch (Exception ex)
        {
            AnsiConsole.MarkupLine($"[yellow]{request.Method} request to {request.RequestUri} failed: {ex.Message}[/]");
            throw;
        }
        finally
        {
            AnsiConsole.MarkupLine($"[yellow]{request.Method} request to {request.RequestUri} is finished[/]");
        }
    }

    protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        try
        {
            AnsiConsole.MarkupLine($"[yellow]Sending {request.Method} request to {request.RequestUri}[/]");
            return base.Send(request, cancellationToken);
        }
        catch (Exception ex)
        {
            AnsiConsole.MarkupLine($"[yellow]{request.Method} request to {request.RequestUri} failed: {ex.Message}[/]");
            throw;
        }
        finally
        {
            AnsiConsole.MarkupLine($"[yellow]{request.Method} request to {request.RequestUri} is finished[/]");
        }
    }
}
</code>
</pre>
<p style="text-align: justify;">It is easy to add this class into the chain of request handlers:</p>
<pre> <code lang="cs">
services.AddTransient<LoggingHandler>();
services.ConfigureAll<HttpClientFactoryOptions>(options =>
{
    options.HttpMessageHandlerBuilderActions.Add(builder =>
    {
        builder.AdditionalHandlers.Add(builder.Services.GetRequiredService<LoggingHandler>());
    });
});
</code>
</pre>
<p style="text-align: justify;">After that, our logging will be performed for each request via <i>HttpClient</i>. The same approach works fine with RestSharp, since we use it as a wrapper around <i>HttpClient</i>.</p>
<p style="text-align: justify;">With Refit everything is a little more complicated. This approach works fine with Refit until we try to use our factory to replace the base address. It looks like the call of <i>RestService.For</i> does not use the settings of <i>HttpClient</i>. That's why we'll have to manually add our request handler:</p>
<pre> <code lang="cs">
internal class RefitClientFactory
{
    public T GetClientFor<T>(string baseUrl)
    {
        RefitSettings settings = new RefitSettings();
        settings.HttpMessageHandlerFactory = () => new LoggingHandler
        {
            InnerHandler = new HttpClientHandler()
        };
        return RestService.For<T>(baseUrl, settings);
    }
}
</code>
</pre>
<h2 style="text-align: left;">Request cancellation</h2>
<p style="text-align: justify;">Sometimes we need to cancel the request. For example, a user got tired of waiting for a response from the server and left some UI page. Now results of the request are no longer needed and we should cancel the request. How can we do it?</p>
<p style="text-align: justify;">ASP.NET Core allows us to understand that client has cancelled the request with the help of <i>CancellationToken</i> class. Naturally, it would be useful if our libraries supported this class.</p>
<p style="text-align: justify;">With HttpClient it works fine:</p>
<pre> <code lang="cs">
public async Task<string> GetLong(CancellationToken cancellationToken)
{
    var response = await _httpClient.GetAsync("http://localhost:5001/data/long", cancellationToken);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
}
</code>
</pre>
<p style="text-align: justify;">Here we have <i>CancellationToken</i> support out of the box. The same situation is with RestSharp:</p>
<pre> <code lang="cs">
public async Task<string?> GetLong(CancellationToken cancellationToken)
{
    var client = new RestClient(_httpClient);
    var request = new RestRequest("http://localhost:5001/data/long");
    return await client.GetAsync<string>(request, cancellationToken);
}
</code>
</pre>
<p style="text-align: justify;">Refit also supports <i>CancellationToken</i>:</p>
<pre> <code lang="cs">
public interface IServiceClient
{
    [Get("/data/long")]
    Task<string> GetLong(CancellationToken cancellationToken);

    ...
}
</code>
</pre>
<p style="text-align: justify;">As you can see, there are no problems with request cancellation.</p>
<h2 style="text-align: left;">Request timeout</h2>
<p style="text-align: justify;">In addition to being able to cancel requests, it would be nice to be able to limit the duration of the request. Here situation is opposite to the case of the common processing logic. It is easy to set up common request timeout for any request in the configuration. But it is useful to be able to specify this timeout for each specific request. Indeed, even on the same server, different endpoints process different amount of information. And this leads to different request processing times. That's why it is better to be able to set different timeouts for different endpoints.</p>
<p style="text-align: justify;">RestSharp has no problem with that:</p>
<pre> <code lang="cs">
public async Task<string?> GetLongWithTimeout(TimeSpan timeout, CancellationToken cancellationToken = default)
{
    try
    {
        var client = new RestClient(_httpClient, new RestClientOptions { MaxTimeout = (int)timeout.TotalMilliseconds });
        var request = new RestRequest("http://localhost:5001/data/long");
        return await client.GetAsync<string>(request, cancellationToken);
    }
    catch (TimeoutException)
    {
        return "Timeout";
    }
}
</code>
</pre>
<p style="text-align: justify;">With HttpClient we already have some problems. On the one hand, <i>HttpClient</i> has the <i>Timeout</i> property that can be used. But here I have some doubts. First of all, the same instance of <i>HttpClient</i> is used in different methods of the class implementing our HTTP client interface. In each method, the timeout expectations can be different. It is easy to forget something, and the timeout from one method will leak to another method. This problem can be overcome with the help of a wrapper that will set timeout at the beginning of each method and return it back to its original value at the end. If the client is not used in multithreading mode, this approach will work.</p>
<p style="text-align: justify;">But, in addition, I have some uncertainty about using different instances of <i>HttpClient</i> class from dependency container. According to <a href="https://learn.microsoft.com/en-us/dotnet/fundamentals/networking/http/httpclient-guidelines" rel="nofollow" target="_blank">the documentation</a>, it is a bad idea to create new instance of <i>HttpClient</i> class every time we need to send an HTTP request. The system internally supports a reusable pool of connections, checks various conditions, etc. In other words, there is a lot of magic. That's why I'm afraid it is possible that the same instance of <i>HttpClient</i> class can be used by different services. And the timeout set in one of them can leak into another one. I must say that I have not been able to reproduce this situation, but maybe I just don't understand something.</p>
<p style="text-align: justify;">In short, I want to be sure that my request timeout will be used only for one specific request and nowhere else. And this can be done using the same <i>CancellationToken</i>:</p>
<pre> <code lang="cs">
public async Task<string> GetLongWithTimeout(TimeSpan timeout, CancellationToken cancellationToken = default)
{
    try
    {
        using var tokenSource = new CancellationTokenSource(timeout);
        using var registration = cancellationToken.Register(tokenSource.Cancel);
        var response = await _httpClient.GetAsync("http://localhost:5001/data/long", tokenSource.Token);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
    catch (TaskCanceledException)
    {
        return "Timeout";
    }
}
</code>
</pre>
<p style="text-align: justify;">The same method can be applied to Refit:</p>
<pre> <code lang="cs">
var client = provider.GetRequiredService<IServiceClient>();
using var cancellationTokenSource = new CancellationTokenSource();
try
{
    var response = await Helper.WithTimeout(
        TimeSpan.FromSeconds(5),
        cancellationTokenSource.Token,
        client.GetLong);
    Console.WriteLine(response);
}
catch (TaskCanceledException)
{
    Console.WriteLine("Timeout");
}
</code>
</pre>
<p style="text-align: justify;">Here the <i>Helper</i> class has the following code:</p>
<pre> <code lang="cs">
internal class Helper
{
    public static async Task<T> WithTimeout<T>(TimeSpan timeout, CancellationToken cancellationToken, Func<CancellationToken, Task<T>> action)
    {
        using var cancellationTokenSource = new CancellationTokenSource(timeout);
        using var registration = cancellationToken.Register(cancellationTokenSource.Cancel);
        return await action(cancellationTokenSource.Token);
    }
}
</code>
</pre>
<p style="text-align: justify;">Bit in this case, the problem is that the Refit interface is not enough anymore. We have to write some wrapper to call our methods with the desired timeout.</p>
<h2 style="text-align: left;">Polly support</h2>
<p style="text-align: justify;">Today Polly is the de-facto standard add-on for enterprise-level HTTP requests. Let's see how the library works with HttpClient, RestSharp and Refit.</p>
<p style="text-align: justify;">Here, as in the case of common processing logic, there may be several variants. First of all, the Polly policy may differ for different methods of our client interface. In this case, we can implement it inside our implementation class, and for Refit - through decorator.</p>
<p style="text-align: justify;">Secondly, we may want to set some policy for all methods of one client interface. How can we do this?</p>
<p style="text-align: justify;">For HttpClient it is pretty easy. You create a new policy:</p>
<pre> <code lang="cs">
var policy = HttpPolicyExtensions
    .HandleTransientHttpError()
    .OrResult(response => (int)response.StatusCode == 418)
    .RetryAsync(3, (_, retry) =>
    {
        AnsiConsole.MarkupLine($"[fuchsia]Retry number {retry}[/]");
    });
</code>
</pre>
<p style="text-align: justify;">and assign it to a specific interface:</p>
<pre> <code lang="cs">
services.AddHttpClient<IServiceClient, ServiceClient>()
    .AddPolicyHandler(policy);
</code>
</pre>
<p style="text-align: justify;">For RestSharp, which uses <i>HttpClient</i> from the dependency container, there is no difference.</p>
<p style="text-align: justify;">Refit also supports this scenario quite easily:</p>
<pre> <code lang="cs">
services
    .AddRefitClient<IServiceClient>()
    .ConfigureHttpClient(c =>
    {
        c.BaseAddress = new Uri("http://localhost:5001");
    })
    .AddPolicyHandler(policy);
</code>
</pre>
<p style="text-align: justify;">It is interesting to consider the following question. What if we have an interface whose almost all methods want one Polly policy, but one method wants a completely different policy? Here, I think we should look at the policy registry and the policy selector. In <a href="https://nodogmablog.bryanhogan.net/2018/07/polly-httpclientfactory-and-the-policy-registry-choosing-the-right-policy-based-on-the-http-request/" rel="nofollow" target="_blank">this article</a> it is described how to select a policy based on specific request.</p>
<h2 style="text-align: left;">Request resending</h2>
<p style="text-align: justify;">There is one more topic related to Polly. Sometimes we need more complex request preparation. For example, we may need to generate certain headers. In order to do this, the <i>HttpClient</i> class has the <i>Send</i> method that accepts <i>HttpRequestMessage</i> parameter.</p>
<p style="text-align: justify;">However, various problems may occur during the sending of the request. Some of them can be solved by resending the message using, for example, the same Polly policies. But may we pass the same instance of <i>HttpRequestMessage</i> to the <i>Send</i> method again?</p>
<p style="text-align: justify;">To test this possibility, I'll create another endpoint that returns a random result:</p>
<pre> <code lang="cs">
[HttpGet("rnd")]
public IActionResult GetRandom()
{
if (Random.Shared.Next(0, 2) == 0)
{
return StatusCode(500);
}
return Ok();
}
</code>
</pre>
<p style="text-align: justify;">Let's take a look at the method of a client communicating with this endpoint. I will not use Polly here, but just make several requests:</p>
<pre> <code lang="cs">
public async Task<IReadOnlyList<int>> GetRandom()
{
    var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:5001/data/rnd");
    var returnCodes = new LinkedList<int>();
    for (int i = 0; i < 10; i++)
    {
        var response = await _httpClient.SendAsync(request);
        returnCodes.AddLast((int)response.StatusCode);
    }
    return returnCodes.ToArray();
}
</code>
</pre>
<p style="text-align: justify;">As you can see, I'm trying to send the same instance of <i>HttpRequestMessage</i> multiple times. An what do I have?</p>
<pre> <code lang="text">
Unhandled exception. System.InvalidOperationException: The request message was already sent. Cannot send the same request message multiple times.
</code>
</pre>
<p style="text-align: justify;">It means that if I need retries, I have to create a new <i>HttpRequestMessage</i> every time.</p>
<p style="text-align: justify;">Now let's test RestSharp. Here is the same repeating request:</p>
<pre> <code lang="cs">
public async Task<IReadOnlyList<int>> GetRandom()
{
    var client = new RestClient(_httpClient);
    var request = new RestRequest("http://localhost:5001/data/rnd");
    var returnCodes = new LinkedList<int>();
    for (int i = 0; i < 10; i++)
    {
        var response = await client.ExecuteAsync(request);
        returnCodes.AddLast((int)response.StatusCode);
    }
    return returnCodes.ToArray();
}
</code>
</pre>
<p style="text-align: justify;">Here instead of <i>HttpRequestMessage</i> we use <i>RestRequest</i>. And this time everything is fine. RestSharp does not mind sending the same <i>RestRequest</i> object multiple times.</p>
<p style="text-align: justify;">For Refit this problem is not applicable. As far as I know, it does not have any analogue of a "request object". All parameters are passed through the arguments of the Refit interface method each time.</p>
<h2 style="text-align: left;">Conclusion</h2>
<p style="text-align: justify;">It is time to draw some conclusion. Personally, I think that RestSharp is the best option, although its difference from pure HttpClient is minimal. RestSharp uses <i>HttpClient</i> objects and has access to all their configuration options. Only a slightly improved ability to set operation timeout and resend the same request object makes RestSharp the best. Although it should be said that RestSharp requests are slightly slower. For some people, this can be very important.</p>
<p style="text-align: justify;">In my opinion, Refit is somewhat behind. On the one hand, it looks attractive because it does not require writing client code. On the other hand, some scenarios require too much effort to implement.</p>
<p style="text-align: justify;">I hope this comparison was helpful for you. Please write in the comments your experience with these libraries. Or maybe you use something else for HTTP requests?</p>
<p style="text-align: justify;">Good luck!</p>
<p style="text-align: justify;">P.S. The code for this article can be found at <a href="https://github.com/yakimovim/http-libraries-comparison" rel="nofollow" target="_blank">GitHub</a>.</p>
<h1 style="text-align: left;">Testing of dependency tree</h1>
<p style="text-align: justify;">Today, the use of dependency containers is widespread. The constructors of your classes accept instances of other classes, which in turn depend on other classes, and so on. And the dependency container manages the construction of the entire instance tree.</p>
<p style="text-align: justify;">This system has its price. For example, during testing, you must create instances of all the dependencies of a class in order to test this class. You can use something like <a href="https://github.com/moq/moq4" rel="nofollow" target="_blank">Moq</a> for this task. But in this case, there is a problem with class changes. If you want to add or remove any constructor parameter, you will also have to change the tests, even if this parameter does not affect them.</p>
<p style="text-align: justify;">There is another task that we want to solve during testing. Let's say we want to test the work of not one isolated class, but the joint work of several classes in some part of our system. Our dependency container creates a whole tree of instances of various classes. And you want to test the whole tree. Let's see how we can do this, what obstacles we will face and how we can overcome them.</p><span><a name='more'></a></span><h2 style="text-align: justify;"><span style="text-align: left;">Resilience to constructor changes</span></h2>
<p style="text-align: justify;">Let's say we have a class we want to test:</p>
<pre> <code lang="cs">
public class System
{
    public System(
        IService1 service1,
        IService2 service2
    )
    {
        ...
    }

    ...
}
</code>
</pre>
<p style="text-align: justify;">Usually the tests for such cases look like this:</p>
<pre> <code lang="cs">
[TestMethod]
public void SystemTest()
{
    var service1Mock = new Mock<IService1>();
    var service2Mock = new Mock<IService2>();

    var system = new System(
        service1Mock.Object,
        service2Mock.Object
    );
    ...
}
</code>
</pre>
<p style="text-align: justify;">But today I want to add logging to my <i>System</i> class:</p>
<pre> <code lang="cs">
public class System
{
public System(
IService1 service1,
IService2 service2,
ILogger logger
)
{
...
}
...
}
</code>
</pre>
<p style="text-align: justify;">Now the tests for this class are not compiled. I have to go to all the places where I create an instance of the <i>System</i> class and change the code there:</p>
<pre> <code lang="cs">
[TestMethod]
public void SystemTest()
{
    var service1Mock = new Mock<IService1>();
    var service2Mock = new Mock<IService2>();
    var loggerMock = new Mock<ILogger>();

    var system = new System(
        service1Mock.Object,
        service2Mock.Object,
        loggerMock.Object
    );
    ...
}
</code>
</pre>
<p style="text-align: justify;">Of course, to reduce the amount of work, I can move the code that creates an instance of the class to a separate method. Then I won't need to make changes in many places.:</p>
<pre> <code lang="cs">
private Mock<IService1> service1Mock = new();
private Mock<IService2> service2Mock = new();
private Mock<ILogger> loggerMock = new();

private System CreateSystem()
{
    return new System(
        service1Mock.Object,
        service2Mock.Object,
        loggerMock.Object
    );
}

[TestMethod]
public void SystemTest()
{
    var system = CreateSystem();
    ...
}
</code>
</pre>
<p style="text-align: justify;">But this approach also has its drawbacks. I still had to create the <i>ILogger</i> mock, even though I don't need it. I only use it to pass to the constructor of my class.</p>
<p style="text-align: justify;">Fortunately, there is <a href="https://github.com/moq/Moq.AutoMocker" rel="nofollow" target="_blank">AutoMocker</a>. You just create an instance of your class using <i>CreateInstance</i>:</p>
<pre> <code lang="cs">
private AutoMocker _autoMocker = new();

[TestMethod]
public void SystemTest()
{
    var system = _autoMocker.CreateInstance<System>();
    ...
}
</code>
</pre>
<p style="text-align: justify;">This method can create instances of any class, even <i>sealed</i> one. It works like a dependency container, analyzing the constructor and creating mocks for its parameters.</p>
<p style="text-align: justify;">In any moment you can get any mock you want to set its behavior or verify calls of its methods:</p>
<pre> <code lang="cs">
var service1Mock = _autoMocker.GetMock<IService1>();
</code>
</pre>
<p style="text-align: justify;">Also, if you don't want to use Moq mock, but you have your own implementation of some interface, you can do it before calling <i>CreateInstance</i>:</p>
<pre> <code lang="cs">
var testService1 = new TestService1();
_autoMocker.Use<IService1>(testService1);
</code>
</pre>
<p style="text-align: justify;">Cool! Now you can freely change the signature of the constructor without fear that you'll have to change tests in thousand of places.</p>
<p style="text-align: justify;">However, the fact that tests continue to compile does not mean that they will continue to pass after changes in the class. On the other hand, it has been said many times that tests should check the class contract, not its implementation. If the contract is not changed, the tests must still pass. If the contract is changed, there is no way to avoid changing the tests.</p>
<p style="text-align: justify;">However, before we started using <i>AutoMocker</i>, we immediately saw which tests were affected by our changes in the constructor, and could only run these tests. Now we will probably have to run all the tests unless we have some kind of agreement on where we store all the tests for one class. But here everyone has to choose for himself.</p>
<p style="text-align: justify;">And we continue.</p>
<h2 style="text-align: left;">Testing with dependencies</h2>
<p style="text-align: justify;">One of my colleagues suggested taking another step forward. In fact, in our application we are still creating a dependency container. There we register all our classes and interfaces. So why don't we take instances of classes for testing from this container? In this case, we will test the tree of objects that we actually use in production. This would be very useful for integration tests.</p>
<p style="text-align: justify;">For example, our dependency registration code looks like this:</p>
<pre> <code lang="cs">
services.AddLogging();
services.AddDomainClasses();
services.AddRepositories();
...
</code>
</pre>
<p style="text-align: justify;">We move it to a separate method:</p>
<pre> <code lang="cs">
public static class ServicesConfiguration
{
    public static void RegisterEverything(IServiceCollection services)
    {
        services.AddLogging();
        services.AddDomainClasses();
        services.AddRepositories();
        ...
    }
}
</code>
</pre>
<p style="text-align: justify;">and use this method to register our services:</p>
<pre> <code lang="cs">
ServicesConfiguration.RegisterEverything(services);
</code>
</pre>
<p style="text-align: justify;">Now we can use this method in tests as well:</p>
<pre> <code lang="cs">
[TestMethod]
public void SystemTest()
{
    IServiceCollection services = new ServiceCollection();
    ServicesConfiguration.RegisterEverything(services);

    var provider = services.BuildServiceProvider();
    using var scope = provider.CreateScope();
    var system = scope.ServiceProvider.GetRequiredService<System>();
    ...
}
</code>
</pre>
<p style="text-align: justify;">And even if your class is not registered in the dependency container, but you just want to take the parameters for its constructor from there, you can do it as follows:</p>
<pre> <code lang="cs">
var system = ActivatorUtilities.CreateInstance<System>(_scope.ServiceProvider);
</code>
</pre>
<p style="text-align: justify;">Naturally, it may be necessary to make some changes to the registered services. For example, you may want to change the database connection strings if you don't use <i>IConfiguration</i> to get them:</p>
<pre> <code lang="cs">
IServiceCollection services = new ServiceCollection();
ServicesConfiguration.RegisterEverything(services);
services.RemoveAll<IConnectionStringsProvider>();
services.AddSingleton<IConnectionStringsProvider>(new TestConnectionStringsProvider());
</code>
</pre>
<p style="text-align: justify;">And if you use <i>IConfiguration</i>, you can create your own configuration using in-memory storage:</p>
<pre> <code lang="cs">
// The "settings" dictionary was not defined in the original snippet;
// these keys are hypothetical test overrides.
var settings = new Dictionary<string, string?>
{
    ["ConnectionStrings:Main"] = "DataSource=:memory:"
};
var builder = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appSettings.json", optional: true, reloadOnChange: true)
    .AddInMemoryCollection(settings);
var configuration = builder.Build();
</code>
</pre>
<p style="text-align: justify;">But even in this case, we may still want to use mocks in some situations. You see, even in integration tests there are external dependencies that you don't control. If you can still recreate your own database, there are external services that you use via HTTP requests. You still want to imitate these external requests using mocks.</p>
<p style="text-align: justify;">To support mocks, we need to write a certain amount of code. I moved all the logic related to getting service instances and managing mocks into one class:</p>
<pre> <code lang="cs">
public class SimpleConfigurator : IServiceProvider, IDisposable
{
    private readonly IDictionary<Type, Mock> _registeredMocks = new Dictionary<Type, Mock>();
    private readonly IServiceCollection _services;
    private IServiceProvider? _serviceProvider;
    private IServiceScope? _scope;
    private bool _configurationIsFinished = false;

    public SimpleConfigurator(IServiceCollection services)
    {
        _services = services;
    }

    public void Dispose()
    {
        _scope?.Dispose();
    }

    /// <summary>
    /// Creates instance of <typeparamref name="T"/> type using dependency container
    /// to resolve constructor parameters.
    /// </summary>
    /// <typeparam name="T">Type of instance.</typeparam>
    /// <returns>Instance of <typeparamref name="T"/> type.</returns>
    public T CreateInstance<T>()
    {
        PrepareScope();
        return ActivatorUtilities.CreateInstance<T>(_scope!.ServiceProvider);
    }

    /// <summary>
    /// Returns service registered in the container.
    /// </summary>
    /// <param name="serviceType">Service type.</param>
    /// <returns>Instance of a service from the container.</returns>
    public object? GetService(Type serviceType)
    {
        PrepareScope();
        return _scope!.ServiceProvider.GetService(serviceType);
    }

    /// <summary>
    /// Replaces in the dependency container records of <typeparamref name="T"/> type
    /// with a singleton mock and returns the mock.
    /// </summary>
    /// <typeparam name="T">Type of service.</typeparam>
    /// <returns>Mock for the <typeparamref name="T"/> type.</returns>
    /// <exception cref="InvalidOperationException">This method can't be called after
    /// any service is resolved from the container.</exception>
    public Mock<T> GetMock<T>()
        where T : class
    {
        if (_registeredMocks.ContainsKey(typeof(T)))
        {
            return (Mock<T>)_registeredMocks[typeof(T)];
        }
        if (!_configurationIsFinished)
        {
            var mock = new Mock<T>();
            _registeredMocks.Add(typeof(T), mock);
            _services.RemoveAll<T>();
            _services.AddSingleton(mock.Object);
            return mock;
        }
        else
        {
            throw new InvalidOperationException($"You can not create new mock after any service is already resolved (after call of {nameof(CreateInstance)} or {nameof(GetService)})");
        }
    }

    private void PrepareScope()
    {
        if (!_configurationIsFinished)
        {
            _configurationIsFinished = true;
            _serviceProvider = _services.BuildServiceProvider();
            _scope = _serviceProvider.CreateScope();
        }
    }
}
</code>
</pre>
<p style="text-align: justify;">Let's examine this class in more detail.</p>
<p style="text-align: justify;">This class implements the standard <i>IServiceProvider</i> interface, so you can use all the features of this interface to get instances of your services. In addition, the <i>CreateInstance</i> method allows you to create instances of classes that are not registered in the container, but whose constructor parameters can be resolved from the container.</p>
<p style="text-align: justify;">This class creates a new scope (the <i>_scope</i> field) before resolving any service. This allows you to use even services registered for a scope (for example, using the <i>AddScope</i> method). The scope will be destroyed by the <i>Dispose</i> method. That's why the class implements the <i>IDisposable</i> interface.</p>
<p style="text-align: justify;">Now about getting mocks (the <i>GetMock</i> method). Here we implement the following idea. You can create any mock, but only before the first service is resolved from the container. After that, you will not be able to create new mocks. The reason is that the container creates a service using some specific dependency instances. This means that the service object can store references to instances of these classes. And now it is impossible to replace these references. That's why the mocks created after the first service is resolved are actually useless. And that's why we don't allow them to be created.</p>
<p style="text-align: justify;">All already created mocks are stored in the dictionary <i>_registeredMocks</i>. The <i>_configurationIsFinished</i> field contains information about whether any service has already been resolved or not.</p>
<p style="text-align: justify;">Note that when we create a mock, we remove all entries for this type from the container and replace when with just this one mock. If you need to test code that gets not only one instance, but a collection of objects of this type, this approach may not be enough. In this case, you will have to extend the functionality of this class in a way that suits your needs.</p>
<h2 style="text-align: left;">Testing at the project level</h2>
<p style="text-align: justify;">Up to this point, we used the dependency container to test the entire application. But there is another option. In our company, the solution contains several sections corresponding to domain areas. Each section can contain several projects (assemblies) - for domain classes, for infrastructure, ... For example:</p>
<ul>
<li>Users.Domain</li>
<li>Users.Repository</li>
<li>Users.Api</li>
</ul>
<p style="text-align: justify;">or</p>
<ul>
<li>Orders.Domain</li>
<li>Orders.Repository</li>
<li>Orders.Api</li>
</ul>
<p style="text-align: justify;">And each such project provides an extension method for <i>IServiceCollection</i> that registers classes from this project:</p>
<pre> <code lang="cs">
public static class ContainerConfig
{
    public static void RegisterDomainServices(this IServiceCollection services)
    {
        services.AddScoped<ISystem, System>();
        services.AddScoped<IService1, Service1>();
        ...
    }
}
</code>
</pre>
<p style="text-align: justify;">In the end, our main project just uses all these extension methods.</p>
<p style="text-align: justify;">Suppose we want to create tests at the project level. This means that we only want to test the interaction of classes defined in one particular project. It may seem simple. We create an instance of <i>ServiceCollection</i>, execute our extension method for this instance, and now we are in the same situation as before when we tested the entire application.</p>
<p style="text-align: justify;">But there is a serious difference here. When we tested the entire application, absolutely all classes were registered in our instance of the <i>ServiceCollection</i> class. In a situation with a separate project, this is not the case. This extension method registers only the classes defined in this project. But these classes may depend on interfaces that are not implemented in this project, but are implemented elsewhere.</p>
<p style="text-align: justify;">For example, our class <i>System</i> depends on the interfaces <i>IService1</i> and <i>IService2</i>. Both of these interfaces are defined in the same project, which contains the <i>System</i> class. But the interface <i>IService1</i> has its own implementation there, and the interface <i>IService2</i> does not. It is expected that it will be implemented in some other project, and our application will take it from there.</p>
<p style="text-align: justify;">So how can we test the <i>System</i> class only with classes from the same project? The idea is to force our dependency container to use mocks for interfaces that are not registered. In order to do this, we need a container that can handle the situation of absent dependencies. I used <a href="https://github.com/dadhi/DryIoc" rel="nofollow" target="_blank">DryIoc</a>. Let's see how we can create the necessary functionality:</p>
<pre> <code lang="cs">
public class Configurator : IServiceProvider, IDisposable
{
    private readonly AutoMocker _autoMocker = new AutoMocker();
    private readonly IDictionary<Type, Mock> _registeredMocks = new Dictionary<Type, Mock>();
    private readonly IServiceCollection _services;
    private IContainer? _container;
    private IServiceScope? _scope;
    private bool _configurationIsFinished = false;

    public Configurator(IServiceCollection? services = null)
        : this(FillServices(services))
    {
    }

    public Configurator(Action<IServiceCollection> configuration)
    {
        _services = new ServiceCollection();
        configuration?.Invoke(_services);
    }

    private static Action<IServiceCollection> FillServices(IServiceCollection? services)
    {
        return internalServices =>
        {
            if (services != null)
            {
                foreach (var description in services)
                {
                    internalServices.Add(description);
                }
            }
        };
    }

    public void Dispose()
    {
        _scope?.Dispose();
        _container?.Dispose();
    }

    /// <summary>
    /// Creates instance of <typeparamref name="T"/> type using dependency container
    /// to resolve constructor parameters.
    /// </summary>
    /// <typeparam name="T">Type of instance.</typeparam>
    /// <returns>Instance of <typeparamref name="T"/> type.</returns>
    public T CreateInstance<T>()
    {
        PrepareScope();
        return ActivatorUtilities.CreateInstance<T>(_scope!.ServiceProvider);
    }

    /// <summary>
    /// Returns service registered in the container.
    /// </summary>
    /// <param name="serviceType">Service type.</param>
    /// <returns>Instance of a service from the container.</returns>
    public object? GetService(Type serviceType)
    {
        PrepareScope();
        return _scope!.ServiceProvider.GetService(serviceType);
    }

    /// <summary>
    /// Replaces in the dependency container records of <typeparamref name="T"/> type
    /// with a singleton mock and returns the mock.
    /// </summary>
    /// <typeparam name="T">Type of service.</typeparam>
    /// <returns>Mock for the <typeparamref name="T"/> type.</returns>
    /// <exception cref="InvalidOperationException">This method can't be called after
    /// any service is resolved from the container.</exception>
    public Mock<T> GetMock<T>()
        where T : class
    {
        if (_registeredMocks.ContainsKey(typeof(T)))
        {
            return (Mock<T>)_registeredMocks[typeof(T)];
        }
        if (!_configurationIsFinished)
        {
            var mock = new Mock<T>();
            _registeredMocks.Add(typeof(T), mock);
            _services.RemoveAll<T>();
            _services.AddSingleton(mock.Object);
            return mock;
        }
        else
        {
            throw new InvalidOperationException($"You can not create new mock after any service is already resolved (after call of {nameof(CreateInstance)} or {nameof(GetService)})");
        }
    }

    private void PrepareScope()
    {
        if (!_configurationIsFinished)
        {
            _configurationIsFinished = true;
            _container = CreateContainer();
            _scope = _container.BuildServiceProvider().CreateScope();
        }
    }

    private IContainer CreateContainer()
    {
        Rules.DynamicRegistrationProvider dynamicRegistration = (serviceType, serviceKey) =>
            new[]
            {
                new DynamicRegistration(DelegateFactory.Of(_ =>
                {
                    if (_registeredMocks.ContainsKey(serviceType))
                    {
                        return _registeredMocks[serviceType].Object;
                    }
                    var mock = _autoMocker.GetMock(serviceType);
                    _registeredMocks[serviceType] = mock;
                    return mock.Object;
                }))
            };
        var rules = Rules.Default.WithDynamicRegistration(
            dynamicRegistration,
            DynamicRegistrationFlags.Service | DynamicRegistrationFlags.AsFallback);
        var container = new Container(rules);
        container.Populate(_services);
        return DryIocAdapter.WithDependencyInjectionAdapter(container);
    }
}
</code>
</pre>
<p style="text-align: justify;">The <i>Configurator</i> class is very similar to the <i>SimpleConfigurator</i> class shown earlier, but it has several important differences. First of all, instead of the Microsoft dependency container, we use DryIoc. For this container, we set the behavior for situations where we need an unregistered dependency:</p>
<pre> <code lang="cs">
Rules.DynamicRegistrationProvider dynamicRegistration = (serviceType, serviceKey) =>
    new[]
    {
        new DynamicRegistration(DelegateFactory.Of(_ =>
        {
            if (_registeredMocks.ContainsKey(serviceType))
            {
                return _registeredMocks[serviceType].Object;
            }
            var mock = _autoMocker.GetMock(serviceType);
            _registeredMocks[serviceType] = mock;
            return mock.Object;
        }))
    };
var rules = Rules.Default.WithDynamicRegistration(
    dynamicRegistration,
    DynamicRegistrationFlags.Service | DynamicRegistrationFlags.AsFallback);
var container = new Container(rules);
</code>
</pre>
<p style="text-align: justify;">In this case, we create a Moq mock and save a link to it. This allows us to get it later for configuration and verification.</p>
<p style="text-align: justify;">Now we can test our system only with classes from the same project:</p>
<pre> <code lang="cs">
[TestMethod]
public void TestSystem()
{
    using var configurator = new Configurator(services => services.RegisterDomainServices());

    var system = configurator.GetRequiredService<ISystem>();
    var service2Mock = configurator.GetMock<IService2>();
    ...
}
</code>
</pre>
<p style="text-align: justify;">Of course, we don't have to limit ourselves to just one project. For example, we can test classes from several projects related to the same domain area in this way.</p>
<h2 style="text-align: left;">Conclusion</h2>
<p style="text-align: justify;">In this article, we discussed how we can use the existing dependency container infrastructure for our tests. Undoubtedly, there are many ways to improve the proposed system. But I hope I have given you a framework that can be useful to you.</p>
<p style="text-align: justify;">P. S. The source code of the examples can be found on <a href="https://github.com/yakimovim/testing-with-dependency-injection" rel="nofollow" target="_blank">GitHub</a>.</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-10903977051583995692022-11-21T08:44:00.002+03:002022-11-21T08:44:26.294+03:00My experience with OData<p style="text-align: justify;">OData is very interesting technology. Using several lines of code you can support filtering, paging, partial selection, ... for your data. Today GraphQL is replacing it, but OData is still very attractive.</p><p style="text-align: justify;">Nevertheless, there are several pitfalls I had to deal with. Here I want to share my experience with OData.</p><span><a name='more'></a></span><h2 style="text-align: left;">The simplest use</h2><p style="text-align: justify;">To begin with, we need a Web service. I'll create it using ASP.NET Core. To use OData, we need to install the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.OData/" rel="nofollow" target="_blank">Microsoft.AspNetCore.OData</a> NuGet package. Now we must configure it. Here is the content of the <i>Program.cs</i> file:</p><pre><code lang="cs">var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services
    .AddControllers()
    .AddOData(opts =>
    {
        opts
            .Select()
            .Expand()
            .Filter()
            .Count()
            .OrderBy()
            .SetMaxTop(1000);
    });

var app = builder.Build();

// Configure the HTTP request pipeline.
app.UseAuthorization();
app.MapControllers();
app.Run();</code></pre><p style="text-align: justify;">In the <i>AddOData</i> method we specify which of all the possible OData operations we allow.</p><p style="text-align: justify;">Of course, OData is designed to work with data. Let's add some data to our application. The data definition is very simple:</p><pre><code lang="cs">public class Author
{
    [Key]
    public int Id { get; set; }

    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    public string? ImageUrl { get; set; }

    public string? HomePageUrl { get; set; }

    public ICollection<Article> Articles { get; set; }
}

public class Article
{
    [Key]
    public int Id { get; set; }

    public int AuthorId { get; set; }

    [Required]
    public string Title { get; set; }
}
</code></pre><p>I'll use Entity Framework to work with it. The test data is created using <a href="https://github.com/bchavez/Bogus" rel="nofollow" target="_blank">Bogus</a>:</p><pre><code lang="cs">public class AuthorsContext : DbContext
{
public DbSet<Author> Authors { get; set; } = null!;
public AuthorsContext(DbContextOptions<AuthorsContext> options)
: base(options)
{ }
public async Task Initialize()
{
await Database.EnsureDeletedAsync();
await Database.EnsureCreatedAsync();
var rnd = Random.Shared;
Authors.AddRange(
Enumerable
.Range(0, 10)
.Select(_ =>
{
var faker = new Faker();
var person = faker.Person;
return new Author
{
FirstName = person.FirstName,
LastName = person.LastName,
ImageUrl = person.Avatar,
HomePageUrl = person.Website,
Articles = new List<Article>(
Enumerable
.Range(0, rnd.Next(1, 5))
.Select(_ => new Article
{
Title = faker.Lorem.Slug(rnd.Next(3, 5))
})
)
};
})
);
await SaveChangesAsync();
}
}
</code></pre><p style="text-align: justify;">As data storage, I will use the in-memory <a href="https://www.nuget.org/packages/Microsoft.EntityFrameworkCore.Sqlite/" rel="nofollow" target="_blank">Sqlite</a> database. Here is the configuration in <i>Program.cs</i>:</p><pre><code lang="cs">...
var inMemoryDatabaseConnection = new SqliteConnection("DataSource=:memory:");
inMemoryDatabaseConnection.Open();
builder.Services.AddDbContext<AuthorsContext>(optionsBuilder =>
{
optionsBuilder.UseSqlite(inMemoryDatabaseConnection);
}
);
...
using (var scope = app.Services.CreateScope())
{
await scope.ServiceProvider.GetRequiredService<AuthorsContext>().Initialize();
}
...</code></pre><p>Now the storage is ready. Let's create a simple controller that returns data to the client:</p><pre><code lang="cs">[ApiController]
[Route("/api/v1/authors")]
public class AuthorsController : ControllerBase
{
private readonly AuthorsContext _db;
public AuthorsController(
AuthorsContext db
)
{
_db = db ?? throw new ArgumentNullException(nameof(db));
}
[HttpGet("no-odata")]
public ActionResult GetWithoutOData()
{
return Ok(_db.Authors);
}
}</code></pre><p>Now at <i>/api/v1/authors/no-odata</i> we get the following result:</p><pre><code lang="json">[
{
"id": 1,
"firstName": "Fred",
"lastName": "Kuhlman",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/54.jpg",
"homePageUrl": "donald.com"
},
{
"id": 2,
"firstName": "Darrel",
"lastName": "Armstrong",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/796.jpg",
"homePageUrl": "angus.org"
},
...
]
</code></pre><p>Naturally, there is no OData support yet. But how difficult is it to add it?</p><h2 style="text-align: left;">Basic support of OData</h2><p>It is easy. Let's create one more endpoint:</p><pre><code lang="cs">[HttpGet("odata")]
[EnableQuery]
public IQueryable<Author> GetWithOData()
{
return _db.Authors;
}</code></pre><p style="text-align: justify;">As you can see, the differences are minimal. But now you can use OData in your queries. For example, the query <i>/api/v1/authors/odata?$filter=id lt 3&$orderby=firstName</i> gives the following result:</p><pre><code lang="json">[
{
"id": 2,
"firstName": "Darrel",
"lastName": "Armstrong",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/796.jpg",
"homePageUrl": "angus.org"
},
{
"id": 1,
"firstName": "Fred",
"lastName": "Kuhlman",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/54.jpg",
"homePageUrl": "donald.com"
}
]
</code></pre><p style="text-align: justify;">Great! But there is a small drawback. Our controller method returns an <i>IQueryable<></i> object. In practice, we usually want to return several variants of responses (e. g. <i>NotFound</i>, <i>BadRequest</i>, ...). What can we do?</p><p style="text-align: justify;">It turns out that the OData implementation works fine when the <i>IQueryable<></i> object is wrapped into <i>Ok</i>:</p><pre><code lang="cs">[HttpGet("odata")]
[EnableQuery]
public IActionResult GetWithOData()
{
return Ok(_db.Authors);
}</code></pre><p style="text-align: justify;">This means that you can add any validation logic to your controller actions.</p><h2 style="text-align: left;">Paging</h2><p style="text-align: justify;">As you probably know, OData allows you to get only a particular page of the full result. It can be done using the <i>skip</i> and <i>top</i> operators (e. g. <i>/api/v1/authors/odata?$skip=3&$top=2</i>). Do not forget to call the <i>SetMaxTop</i> method while configuring OData in <i>Program.cs</i>. Otherwise, using the <i>top</i> operator may result in the following error:</p><pre><code lang="text">The query specified in the URI is not valid. The limit of '0' for Top query has been exceeded.</code></pre><p style="text-align: justify;">But to make full use of the paging mechanism, it is very useful to know how many pages you have in total. We need our endpoint to additionally return the total number of items corresponding to the given filter. OData supports the <i>count</i> operator for this purpose (<i>/api/v1/authors/odata?$skip=3&$top=2&$count=true</i>). But simply adding <i>$count=true</i> to our query does nothing. To get the desired result, we need to configure an EDM (entity data model). But first, we must know the address of our endpoint.</p><p style="text-align: justify;">Let's say that we want our data to be accessible at <i>/api/v1/authors/edm</i>. This endpoint will return objects of type <i>Author</i>. In this case, the OData configuration in the <i>Program.cs</i> file will look like this:</p><pre><code lang="cs">builder.Services
.AddControllers()
.AddOData(opts =>
{
opts.AddRouteComponents("api/v1/authors", GetAuthorsEdm());
IEdmModel GetAuthorsEdm()
{
ODataConventionModelBuilder edmBuilder = new();
edmBuilder.EntitySet<Author>("edm");
return edmBuilder.GetEdmModel();
}
opts
.Select()
.Expand()
.Filter()
.Count()
.OrderBy()
.SetMaxTop(1000);
});</code></pre><p style="text-align: justify;">Please note that the route for our components (<i>api/v1/authors</i>) matches the prefix of our endpoint's address, and the name of the entity set matches the rest of this address (<i>edm</i>).</p><p style="text-align: justify;">The final touch is adding the <i>ODataAttributeRouting</i> attribute to the corresponding method of the controller:</p><pre><code lang="cs">[HttpGet("edm")]
[ODataAttributeRouting]
[EnableQuery]
public IQueryable<Author> GetWithEdm()
{
return _db.Authors;
}</code></pre><p style="text-align: justify;">Now this endpoint for the request <i>/api/v1/authors/edm?$top=2&$count=true</i> will return the following data:</p><pre><code lang="json">{
"@odata.context": "http://localhost:5293/api/v1/authors/$metadata#edm",
"@odata.count": 10,
"value": [
{
"Id": 1,
"FirstName": "Steve",
"LastName": "Schaefer",
"ImageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/670.jpg",
"HomePageUrl": "kylie.info"
},
{
"Id": 2,
"FirstName": "Stella",
"LastName": "Ankunding",
"ImageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/884.jpg",
"HomePageUrl": "allen.name"
}
]
}
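</code></pre><p style="text-align: justify;">The <i>@odata.count</i> value above, combined with the <i>skip</i> and <i>top</i> operators, lets a client implement full paging. Here is a hedged sketch of such client-side code (the helper is hypothetical and not part of this article's project):</p><pre><code lang="cs">// Hypothetical client-side helper: builds an OData query for a 1-based page.
static string BuildPageQuery(string baseUrl, int page, int pageSize)
    => $"{baseUrl}?$skip={(page - 1) * pageSize}&$top={pageSize}&$count=true";
// BuildPageQuery("/api/v1/authors/edm", 2, 3) gives
// "/api/v1/authors/edm?$skip=3&$top=3&$count=true", and the total number of
// pages is Math.Ceiling((double)count / pageSize), where count is taken
// from the "@odata.count" field of the response.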
</code></pre><p style="text-align: justify;">As shown in the response above, the <i>@odata.count</i> field contains the number of data items corresponding to the query filter. That is what we wanted.</p><p style="text-align: justify;">In general, the question of correspondence between an EDM and a specific endpoint turned out to be rather complex for me. If you wish, you may try to figure it out yourself using the <a href="https://devblogs.microsoft.com/odata/routing-in-asp-net-core-8-0-preview/" rel="nofollow" target="_blank">documentation</a> or the <a href="https://github.com/OData/AspNetCoreOData/tree/main/sample/ODataRoutingSample" rel="nofollow" target="_blank">examples</a>.</p><p style="text-align: justify;">You may get some help from the debugging page, which can be enabled as follows:</p><pre><code lang="cs">if (app.Environment.IsDevelopment())
{
app.UseODataRouteDebug();
}</code></pre><p style="text-align: justify;">Now at <i>/$odata</i> you can see which endpoints you have and which models are associated with them.</p><h2 style="text-align: justify;">JSON serialization</h2><p style="text-align: justify;">Have you noticed what kind of change happened to the data we returned after we added the EDM? All property names now start with a capital letter (before it was <i>firstName</i>, and now it is <i>FirstName</i>). This can be a big problem for JavaScript clients, which are case-sensitive. We must somehow control the names of our properties. OData uses the classes of the <i>System.Text.Json</i> namespace for data serialization. Unfortunately, using the attributes of this namespace has no effect:</p><pre><code lang="cs">[JsonPropertyName("firstName")]
public string FirstName { get; set; }</code></pre><p style="text-align: justify;">It looks like OData takes the names of the properties from the EDM, not from the class definition.</p><p style="text-align: justify;">The OData implementation suggests two approaches to solving this problem when an EDM is used. The first one turns on "lower camel case" for the whole model by calling the <i>EnableLowerCamelCase</i> method:</p><pre><code lang="cs">IEdmModel GetAuthorsEdm()
{
ODataConventionModelBuilder edmBuilder = new();
edmBuilder.EnableLowerCamelCase();
edmBuilder.EntitySet<Author>("edm");
return edmBuilder.GetEdmModel();
}
</code></pre><p style="text-align: justify;">Now we have the following data:</p><pre><code lang="json">{
"@odata.context": "http://localhost:5293/api/v1/authors/$metadata#edm",
"@odata.count": 10,
"value": [
{
"id": 1,
"firstName": "Troy",
"lastName": "Gottlieb",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/228.jpg",
"homePageUrl": "avery.net"
},
{
"id": 2,
"firstName": "Mathew",
"lastName": "Schiller",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/401.jpg",
"homePageUrl": "marion.biz"
}
]
}
</code></pre><p style="text-align: justify;">It is good. But what if we need more granular control over the JSON property names? What if we need some property in JSON to have a name that is not allowed for property names in C# (e. g. <i>@odata.count</i>)?</p><p style="text-align: justify;">It can be done through the EDM. Let's rename <i>homePageUrl</i> to <i>@url.home</i>:</p><pre><code lang="cs">IEdmModel GetAuthorsEdm()
{
ODataConventionModelBuilder edmBuilder = new();
edmBuilder.EnableLowerCamelCase();
edmBuilder.EntitySet<Author>("edm");
edmBuilder.EntityType<Author>()
.Property(a => a.HomePageUrl).Name = "@url.home";
return edmBuilder.GetEdmModel();
}</code></pre><p style="text-align: justify;">Here we'll face an unpleasant surprise:</p><pre><code lang="text">Microsoft.OData.ODataException: The property name '@url.home' is invalid; property names must not contain any of the reserved characters ':', '.', '@'.
</code></pre><p style="text-align: justify;">Let's try something simpler:</p><pre><code lang="cs">edmBuilder.EntityType<Author>()
.Property(a => a.HomePageUrl).Name = "url_home";
</code></pre><p style="text-align: justify;">Now it works:</p><pre><code lang="json">{
"url_home": "danielle.info",
"id": 1,
"firstName": "Armando",
"lastName": "Hammes",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/956.jpg"
},
</code></pre><p style="text-align: justify;">Unpleasant, of course, but what can you do.</p><h2 style="text-align: justify;">Data transformation</h2><p style="text-align: justify;">Until now, we have provided the user with data directly from the database. But in large applications it is usually customary to separate the classes responsible for storing information from the classes responsible for providing data to the user. At the very least, this allows changing these classes relatively independently. Let's see how this mechanism works with OData.</p><p style="text-align: justify;">I'll create simple wrappers for our classes:</p><pre><code lang="cs">public class AuthorDto
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string? ImageUrl { get; set; }
public string? HomePageUrl { get; set; }
public ICollection<ArticleDto> Articles { get; set; }
}
public class ArticleDto
{
public string Title { get; set; }
}
</code></pre><p style="text-align: justify;">I'll use <a href="https://automapper.org/" rel="nofollow" target="_blank">AutoMapper</a> for transformations. I'm not very familiar with <a href="https://github.com/MapsterMapper/Mapster" rel="nofollow" target="_blank">Mapster</a>, but I know that it can work with Entity Framework too.</p><p style="text-align: justify;">For AutoMapper, we must configure the corresponding transformations:</p><pre><code lang="cs">public class DefaultProfile : Profile
{
public DefaultProfile()
{
CreateMap<Article, ArticleDto>();
CreateMap<Author, AuthorDto>();
}
}</code></pre><p style="text-align: justify;">and register it at the start of our application (here I use the <a href="https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection/" rel="nofollow" target="_blank">AutoMapper.Extensions.Microsoft.DependencyInjection</a> NuGet package):</p><pre><code lang="cs">builder.Services.AddAutoMapper(typeof(Program).Assembly);</code></pre><p style="text-align: justify;">Now I can add one more endpoint to my controller:</p><pre><code lang="cs">...
private readonly IMapper _mapper;
private readonly AuthorsContext _db;
public AuthorsController(
IMapper mapper,
AuthorsContext db
)
{
_mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
_db = db ?? throw new ArgumentNullException(nameof(db));
}
...
[HttpGet("mapping")]
[EnableQuery]
public IQueryable<AuthorDto> GetWithMapping()
{
return _db.Authors.ProjectTo<AuthorDto>(_mapper.ConfigurationProvider);
}
</code></pre><p style="text-align: justify;">As you can see, it is easy to apply the transformation. Unfortunately, the result contains the expanded list of articles:</p><pre><code lang="json">[
{
"id": 1,
"firstName": "Edward",
"lastName": "O'Kon",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/1162.jpg",
"homePageUrl": "zachariah.info",
"articles": [
{
"title": "animi-sint-atque"
},
{
"title": "aut-eum-iure"
}
]
},
...
]
</code></pre><p style="text-align: justify;">This means that we are unable to apply the <i>expand</i> OData operation. But it is easy to fix. Let's change the AutoMapper configuration for <i>AuthorDto</i>:</p><pre><code lang="cs">CreateMap<Author, AuthorDto>()
.ForMember(a => a.Articles, o => o.ExplicitExpansion());</code></pre><p style="text-align: justify;">Now for <i>/api/v1/authors/mapping</i> we get the correct result:</p><pre><code lang="json">[
{
"id": 1,
"firstName": "Spencer",
"lastName": "Cummerata",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/286.jpg",
"homePageUrl": "woodrow.info"
},
...
]</code></pre><p style="text-align: justify;">And for <i>/api/v1/authors/mapping?$expand=articles</i>:</p><pre><code lang="text">InvalidOperationException: The LINQ expression '$it => new SelectAll<ArticleDto>{
Model = __TypedProperty_1,
Instance = $it,
UseInstanceForProperties = True
}
' could not be translated.</code></pre><p style="text-align: justify;">Yes, a problem. But AutoMapper gives us another way to work with OData. There is the <a href="https://www.nuget.org/packages/AutoMapper.AspNetCore.OData.EFCore/" rel="nofollow" target="_blank">AutoMapper.AspNetCore.OData.EFCore</a> NuGet package. With it, I can implement my endpoint like this:</p><pre><code lang="cs">[HttpGet("automapper")]
public IQueryable<AuthorDto> GetWithAutoMapper(ODataQueryOptions<AuthorDto> query)
{
return _db.Authors.GetQuery(_mapper, query);
}</code></pre><p style="text-align: justify;">Note that we don't augment our method with the <i>EnableQuery</i> attribute. Instead, we collect all OData query parameters in the <i>ODataQueryOptions</i> object and apply all required transformations "manually".</p><p style="text-align: justify;">This time everything works fine: the request without expansion:</p><pre><code lang="json">[
{
"id": 1,
"firstName": "Nathan",
"lastName": "Heller",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/764.jpg",
"homePageUrl": "jamarcus.biz",
"articles": null
},
...
]</code></pre><p style="text-align: justify;">and the request with expansion:</p><pre><code lang="json">[
{
"id": 1,
"firstName": "Nathan",
"lastName": "Heller",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/764.jpg",
"homePageUrl": "jamarcus.biz",
"articles": [
{
"title": "quidem-nulla-et"
}
]
},
...
]</code></pre><p style="text-align: justify;">Additionally, there is one more advantage of this approach. It allows using standard JSON tools to control the serialization of our objects. For example, we can remove <i>null</i> values from our results like this:</p><pre><code lang="cs">builder.Services
.AddJsonOptions(configure =>
{
configure.JsonSerializerOptions.DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull;
configure.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
});</code></pre><p style="text-align: justify;">Furthermore, we can set JSON property names through the usual attributes:</p><pre><code lang="cs">[JsonPropertyName("@url.home")]
public string? HomePageUrl { get; set; }</code></pre><p style="text-align: justify;">Now we can use such a name:</p><pre><code lang="json">[
{
"id": 1,
"firstName": "Edward",
"lastName": "Schmidt",
"imageUrl": "https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/1046.jpg",
"@url.home": "justen.com"
},
...
]</code></pre><h2 style="text-align: justify;">Additional data</h2><p style="text-align: justify;">If our database fields exactly match the fields of our resulting data, we have no problems. But this is not always the case. Frequently we would like to return transformed and processed data from the storage. In this case, we may face several situations.</p><p style="text-align: justify;">First of all, the transformation may be simple. For example, I want to return not the first and last name separately, but the full name of the author:</p><pre><code lang="cs">public class ComplexAuthor
{
[Key]
public int Id { get; set; }
public string FullName { get; set; }
}</code></pre><p style="text-align: justify;">We can configure AutoMapper for this class like this:</p><pre><code lang="cs">CreateMap<Author, ComplexAuthor>()
.ForMember(d => d.FullName,
opt => opt.MapFrom(s => s.FirstName + " " + s.LastName));
</code></pre><p style="text-align: justify;">In this case, we get the desired result:</p><pre><code lang="json">[
{
"id": 1,
"fullName": "Lance Rice"
},
...
]</code></pre><p style="text-align: justify;">Furthermore, we can still filter and sort data by our new field (<i>/api/v1/authors/nonsql?$filter=startswith(fullName,'A')</i>):</p><pre><code lang="json">[
{
"id": 4,
"fullName": "Andre Medhurst"
},
{
"id": 6,
"fullName": "Amber Terry"
}
]</code></pre><p style="text-align: justify;">The reason we can still do it is that our simple expression (<i>s.FirstName + " " + s.LastName</i>) can be easily converted into SQL. Here is the query that Entity Framework generated for me in this case:</p><pre><code lang="sql">SELECT "a"."Id", ("a"."FirstName" || ' ') || "a"."LastName"
FROM "Authors" AS "a"
WHERE (@__TypedProperty_0 = '') OR (((("a"."FirstName" || ' ') || "a"."LastName" LIKE @__TypedProperty_0 || '%') AND (substr(("a"."FirstName" || ' ') || "a"."LastName", 1, length(@__TypedProperty_0)) = @__TypedProperty_0)) OR (@__TypedProperty_0 = ''))</code></pre><p style="text-align: justify;">That is why filtering and sorting still work.</p><p style="text-align: justify;">But obviously not every transformation can be translated into SQL. Let's say for some reason we want to calculate the hash of the full name:</p><pre><code lang="cs">public class ComplexAuthor
{
[Key]
public int Id { get; set; }
public string FullName { get; set; }
public string NameHash { get; set; }
}</code></pre><p style="text-align: justify;">Now our AutoMapper configuration looks like this:</p><pre><code lang="cs">CreateMap<Author, ComplexAuthor>()
.ForMember(d => d.FullName,
opt => opt.MapFrom(s => s.FirstName + " " + s.LastName))
.ForMember(
d => d.NameHash,
opt => opt.MapFrom(a => string.Join(",", SHA256.HashData(Encoding.UTF32.GetBytes(a.FirstName + " " + a.LastName))))
);</code></pre><p style="text-align: justify;">Let's try to get our data:</p><pre><code lang="json">[
{
"id": 1,
"fullName": "Julius Haag",
"nameHash": "66,19,82,19,233,224,181,226,111,125,241,228,81,6,200,47,5,112,248,30,186,26,173,91,83,73,9,137,6,158,138,115"
},
{
"id": 2,
"fullName": "Anita Wilderman",
"nameHash": "196,131,191,35,182,3,174,193,196,91,70,199,22,173,72,54,123,73,110,83,254,178,19,129,219,24,137,197,83,158,76,209"
},
...
]</code></pre><p style="text-align: justify;">Interesting. Despite the fact that the resulting expression cannot be expressed in SQL terms, the system still continues to work. It looks like Entity Framework knows which parts of the query can be evaluated in the database and which must be evaluated in the application.</p><p style="text-align: justify;">Now let's try to filter our data by this new field (<i>nameHash</i>): <i>/api/v1/authors/nonsql?$filter=nameHash eq '1'</i></p><pre><code lang="text">InvalidOperationException: The LINQ expression 'DbSet<Author>()
.Where(a => (string)string.Join<byte>(
separator: ",",
values: SHA256.HashData(__UTF32_0.GetBytes(a.FirstName + " " + a.LastName))) == __TypedProperty_1)' could not be translated.</code></pre><p style="text-align: justify;">Here we can no longer avoid converting our expression to SQL. And, since it can't be done, we get the error message.</p><p style="text-align: justify;">In this case, we can't rewrite the expression in such a way that it can be converted into SQL. But we can prohibit filtering and sorting by this field. There are several attributes to do it: <i>NonFilterable</i> and <i>NotFilterable</i>, <i>NotSortable</i> and <i>Unsortable</i>. You can use any of them:</p><pre><code lang="cs">public class ComplexAuthor
{
[Key]
public int Id { get; set; }
public string FullName { get; set; }
[NonFilterable]
[Unsortable]
public string NameHash { get; set; }
}</code></pre><p style="text-align: justify;">I'd prefer to return <i>Bad Request</i> if the user tries to filter by this field. But merely adding these attributes does nothing. Filtering by <i>nameHash</i> leads to the same error. We have to validate our request manually:</p><pre><code lang="cs">[HttpGet("nonsql")]
public IActionResult GetNonSqlConvertible(ODataQueryOptions<ComplexAuthor> options)
{
try
{
options.Validator.Validate(options, new ODataValidationSettings());
}
catch (ODataException e)
{
return BadRequest(e.Message);
}
return Ok(_db.Authors.GetQuery(_mapper, options));
}</code></pre><p style="text-align: justify;">Now when we try to filter, we get the following message:</p><pre><code lang="text">The property 'NameHash' cannot be used in the $filter query option.</code></pre><p style="text-align: justify;">This is better, although the property name in the message starts with a capital letter (<i>NameHash</i>), while the name returned to the user starts with a lowercase one (<i>nameHash</i>).</p><p style="text-align: justify;">I wonder how things stand with changing property names using the <i>JsonPropertyName</i> attribute in general. For example, I want my property to have the name <i>name</i>:</p><pre><code lang="cs">[JsonPropertyName("name")]
public string FullName { get; set; }</code></pre><p style="text-align: justify;">Can I filter by <i>name</i> now (<i>/api/v1/authors/nonsql?$filter=startswith(name,'A')</i>)? It turns out that I can't:</p><pre><code lang="text">Could not find a property named 'name' on type 'ODataJourney.Models.ComplexAuthor'.</code></pre><p style="text-align: justify;">What if we return to EDM? To do this, it is enough to add the <i>ODataAttributeRouting</i> attribute to the controller method:</p><pre><code lang="cs">[HttpGet("nonsql")]
[ODataAttributeRouting]
public IActionResult GetNonSqlConvertible(ODataQueryOptions<ComplexAuthor> options)</code></pre><p style="text-align: justify;">And update our model:</p><pre><code lang="cs">...
edmBuilder.EntitySet<ComplexAuthor>("nonsql");
edmBuilder.EntityType<ComplexAuthor>()
.Property(a => a.FullName).Name = "name";
...</code></pre><p style="text-align: justify;">Now we can filter by <i>name</i>:</p><pre><code lang="json">{
"@odata.context": "http://localhost:5293/api/v1/authors/$metadata#nonsql",
"value": [
{
"name": "Leona Bauch",
"id": 3,
"nameHash": "56,114,131,251,22,63,188,105,37,55,74,232,36,181,152,24,9,111,131,55,229,89,164,181,230,158,109,163,206,137,147,173"
},
{
"name": "Leo Schimmel",
"id": 7,
"nameHash": "78,48,88,216,170,3,241,99,96,251,10,176,45,187,250,58,240,215,104,159,26,158,217,244,93,219,183,119,206,40,130,102"
}
]
}</code></pre><p style="text-align: justify;">But as you can see, the data structure has changed. We get the OData wrapper. In addition, we have returned to the restriction on property names described above.</p><p style="text-align: justify;">In the end, let's look at one more type of data transformation. So far we have transformed data using AutoMapper. But in this case, we can't use the request context. AutoMapper transformations are described in a separate file with no access to the information from a request. But sometimes this information can be very important. For example, we may want to make another Web request based on the data received in the request and change our resulting data using the response. In the following example, I use a simple <i>foreach</i> loop to represent some server-side data processing:</p><pre><code lang="cs">[HttpGet("add")]
public IActionResult ApplyAdditionalData(ODataQueryOptions<ComplexAuthor> options)
{
try
{
options.Validator.Validate(options, new ODataValidationSettings());
}
catch (ODataException e)
{
return BadRequest(e.Message);
}
var query = _db.Authors.ProjectTo<ComplexAuthor>(_mapper.ConfigurationProvider);
var authors = query.ToArray();
foreach (var author in authors)
{
author.FullName += " (Mr)";
}
return Ok(authors);
}</code></pre><p style="text-align: justify;">Naturally, there is no OData support here. But how can we add it? We don't want to lose the ability to filter, sort and paginate.</p><p style="text-align: justify;">Here is one possible approach. We can apply all OData operations except <i>select</i>. In this case, we still work with full <i>ComplexAuthor</i> objects. After that, we transform these objects, and then we apply the <i>select</i> operation if it was requested. This allows us to get from the database only the small number of records corresponding to our filter and page:</p><pre><code lang="cs">[HttpGet("add")]
public IActionResult ApplyAdditionalData(ODataQueryOptions<ComplexAuthor> options)
{
try
{
options.Validator.Validate(options, new ODataValidationSettings());
}
catch (ODataException e)
{
return BadRequest(e.Message);
}
var query = _db.Authors.ProjectTo<ComplexAuthor>(
_mapper.ConfigurationProvider);
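// Note: the second argument of ApplyTo lists the options to IGNORE,
// so this call applies everything except $select to the database query.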
var authors = options
.ApplyTo(query, AllowedQueryOptions.Select)
.Cast<ComplexAuthor>()
.ToArray();
foreach (var author in authors)
{
author.FullName += " (Mr)";
}
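// Now apply only $select (all other options are ignored), this time in memory.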
var result = options.ApplyTo(
authors.AsQueryable(),
AllowedQueryOptions.All & ~AllowedQueryOptions.Select
);
return Ok(result);
}</code></pre><p style="text-align: justify;">The <i>ODataQueryOptions</i> object allows us to specify which OData operations should be applied (the second parameter of <i>ApplyTo</i> lists the operations to ignore). Using this opportunity, we divide the application of OData operations into two stages, between which we insert our processing.</p><p style="text-align: justify;">This approach has its drawbacks. First of all, we lose the ability to change property names using JSON attributes. It can be fixed with an EDM, but in that case, we'll change the data shape and get the OData wrapper.</p><p style="text-align: justify;">In addition, the problem with the <i>expand</i> operation returns. Our <i>ComplexAuthor</i> class is quite simple, but we can add a property to it that returns articles:</p><pre><code lang="cs">public ICollection<ArticleDto> Articles { get; set; }</code></pre><p style="text-align: justify;">The <i>GetQuery</i> method we used earlier from the <a href="https://www.nuget.org/packages/AutoMapper.AspNetCore.OData.EFCore/" rel="nofollow" target="_blank">AutoMapper.AspNetCore.OData.EFCore</a> NuGet package does not allow applying OData operations partially. And without it, I could not make the system expand the <i>Articles</i> property correctly. In the end, I got this incomprehensible error:</p><pre><code lang="text">ODataException: Property 'articles' on type 'ODataJourney.Models.ComplexAuthor' is not a navigation property or complex property. Only navigation properties can be expanded.</code></pre><p style="text-align: justify;">Maybe someone will be able to overcome it.</p><h2 style="text-align: justify;">Conclusion</h2><p style="text-align: justify;">Despite the fact that OData provides a fairly simple way to add powerful data filtering operations to your Web API, it turns out to be very difficult to get everything you want from the current Microsoft implementation. It seems that when you implement one thing, something else falls off.</p><p style="text-align: justify;">Let's hope I just don't understand something here, and there is a reliable way to overcome all these difficulties. Good luck!</p><p style="text-align: justify;">P.S. You can find the source code for this article on <a href="https://github.com/yakimovim/ODataJourney" rel="nofollow" target="_blank">GitHub</a>.</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-59342639982631780322022-10-27T11:51:00.001+03:002022-10-28T10:52:25.643+03:00Web request sequence visualization<p style="text-align: justify;">Modern requests to web services are very complex. The service you are calling can call other services, which call other services in turn, etc. All these requests can be executed in parallel. Of course, the logging system stores information from all participants in the request. But the clocks on different services can be slightly out of sync, so it is not easy to recreate the correct picture. And if we add message queues here (Azure EventHub, RabbitMQ, ...), the task becomes even more difficult.</p><p style="text-align: justify;">Here I'll try to create a system that will allow us to quickly plot a sequence diagram of the events during a request.</p><p style="text-align: justify;">Ok, let's start.</p><span><a name='more'></a></span><h2 style="text-align: justify;">System to analyze</h2><p style="text-align: justify;">Let's build a system whose requests we want to analyze.
You can get its full code from <a href="https://github.com/yakimovim/request-sequence-visualization" rel="nofollow" target="_blank">GitHub</a>.</p><p style="text-align: justify;">My system will contain several services (<i>Service1</i>, <i>Service2</i>, <i>Service3</i>, <i>ExternalService</i>):</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPwt-sy7kueQ2jz8uW7FT2tELQWy_QuLRYoZv1zZlOu1TijYsNI3BVtKmaWWeSmhrdU_g2c38sW_Pc4sVsR6MLBrKVX5A1y3IzxipWzvbYGsgud6SsMR9BtTf66HkPsNsWD6nA7pNxVpzIfcxeJhltKbjC1IC2dKU2GlIXb0vnVdJW0xCqgDdDDgV_xQ/s305/Services.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Services" border="0" data-original-height="214" data-original-width="305" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPwt-sy7kueQ2jz8uW7FT2tELQWy_QuLRYoZv1zZlOu1TijYsNI3BVtKmaWWeSmhrdU_g2c38sW_Pc4sVsR6MLBrKVX5A1y3IzxipWzvbYGsgud6SsMR9BtTf66HkPsNsWD6nA7pNxVpzIfcxeJhltKbjC1IC2dKU2GlIXb0vnVdJW0xCqgDdDDgV_xQ/s16000/Services.png" /></a></div><br /><p style="text-align: justify;">The <i>ServiceN</i> services are components of my system. They are under my control. They do some work, write something to logs, and make requests to other services. It doesn't matter to us here what they actually do. Here is a typical example of such a service:</p><pre><code lang="cs">[HttpGet]
public async Task<IEnumerable<WeatherForecast>> Get()
{
_logger.LogInformation("Get weather forecast");
Enumerable.Range(1, 4).ToList()
.ForEach(_ => _logger.LogInformation("Some random message"));
await Task.WhenAll(Enumerable.Range(1, 3).Select(_ => _service2Client.Get()));
await _service3Client.Get();
return Enumerable.Range(1, 5).Select(index => new WeatherForecast
{
Date = DateTime.Now.AddDays(index),
TemperatureC = Random.Shared.Next(-20, 55),
Summary = Summaries[Random.Shared.Next(Summaries.Length)]
})
.ToArray();
}</code></pre><p style="text-align: justify;">But in addition to my services, there are also external services. Simply put, these are services that do not write anything to our log system. It can be anything: mail, database, authorization, client webhooks, ... This type of service is represented here by <i>ExternalService</i>.</p><p style="text-align: justify;">Now our system is ready. Let's configure it.</p><h2 style="text-align: justify;">System configuration</h2><p style="text-align: justify;">First of all, I want to collect all my logs in one place. I will use <a href="https://datalust.co/" rel="nofollow" target="_blank">Seq</a> just because it is so easy to use with Docker. Here is the corresponding Docker Compose file:</p><pre><code lang="yaml">version: "3"
services:
seq:
image: datalust/seq
container_name: seq
environment:
- ACCEPT_EULA=Y
ports:
- "5341:5341"
- "9090:80"</code></pre><p style="text-align: justify;">Now at the address <i>http://localhost:9090</i> I have access to the Seq UI. And I can write logs in Seq using <a href="https://serilog.net/" rel="nofollow" target="_blank">Serilog</a>:</p><pre><code lang="cs">Log.Logger = new LoggerConfiguration()
.MinimumLevel.Override("Microsoft", LogEventLevel.Error)
.MinimumLevel.Override("System", LogEventLevel.Error)
.Enrich.FromLogContext()
.WriteTo.Console(new CompactJsonFormatter())
.WriteTo.Seq("http://localhost:5341")
.CreateLogger();
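</code></pre><p style="text-align: justify;">One wiring detail is assumed here: with the <i>Serilog.AspNetCore</i> package, the configured static logger is plugged into the ASP.NET Core host like this:</p><pre><code lang="cs">// Assumption: the Serilog.AspNetCore package is installed. This makes
// ASP.NET Core write its logs through the static Log.Logger configured above.
builder.Host.UseSerilog();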
</code></pre><p style="text-align: justify;">But there is one more requirement for my logging system. It must give me access to the log entries via some API. In the case of Seq, there is the NuGet package <a href="https://www.nuget.org/packages/Seq.Api" rel="nofollow" target="_blank">Seq.Api</a>. That's quite enough for me.</p><p style="text-align: justify;">Now I want to add some information to the logs, which I'll use later to build the request sequence diagram. To do this, I'll create an ASP.NET Core middleware and add it to the request processing pipeline. Here is the main code:</p><pre><code lang="cs">public async Task Invoke(HttpContext context)
{
GetCorrelationId(context);
GetInitialsService(context);
GetPreviousService(context);
GetPreviousClock(context);
using (LogContext.PushProperty(Names.CurrentServiceName, ServiceNameProvider.ServiceName))
using (LogContext.PushProperty(Names.CorrelationIdHeaderName, _correlationIdProvider.GetCorrelationId()))
using (LogContext.PushProperty(Names.InitialServiceHeaderName, _initialServiceProvider.GetInitialService()))
using (LogContext.PushProperty(Names.PreviousServiceHeaderName, _previousServiceProvider.GetPreviousService()))
using (LogContext.PushProperty(Names.RequestClockHeaderName, _requestClockProvider.GetPreviousServiceClock()))
{
await _next(context);
}
}
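</code></pre><p style="text-align: justify;">For completeness, here is a hedged sketch of how this middleware and its value providers might be wired up. Only <i>CorrelationIdProvider</i> and <i>RequestClockProvider</i> are shown in this article; the other two provider names and the middleware class name are hypothetical:</p><pre><code lang="cs">// Hypothetical wiring (names are assumptions, not taken from the project):
builder.Services.AddSingleton<CorrelationIdProvider>();
builder.Services.AddSingleton<InitialServiceProvider>();
builder.Services.AddSingleton<PreviousServiceProvider>();
builder.Services.AddSingleton<RequestClockProvider>();
var app = builder.Build();
app.UseMiddleware<RequestInfoMiddleware>(); // the middleware shown above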
</code></pre><p style="text-align: justify;">What can we see in the <i>Invoke</i> method above? We add the following information to all log entries:</p><p style="text-align: justify;"></p><ul><li>The name of the current service. There is no magic here. It can be just the assembly name or whatever you want.</li><li>Correlation id. I hope it does not require a long introduction. It connects all the log entries associated with a single external request.</li><li>The name of the service to which the external request is sent. This is the point where the request enters our system. This information is for convenience only and will not be used in this article.</li><li>The name of the previous service in the chain of requests. It is useful to know where the request came from.</li><li>Some <i>timestamp</i> that does not depend on the physical clocks of different services. We'll talk about this in more detail later.</li></ul><p></p><p style="text-align: justify;">At the beginning of request processing, we need to get all these values from the request object. This is what all the <i>GetNNN</i> methods do at the start of <i>Invoke</i>. Let's take a look at the <i>GetCorrelationId</i> method. The other methods are generally identical.</p><pre><code lang="cs">private void GetCorrelationId(HttpContext context)
{
if (context.Request.Headers.ContainsKey(Names.CorrelationIdHeaderName)
&& context.Request.Headers[Names.CorrelationIdHeaderName].Any())
{
_correlationIdProvider.SetCorrelationId(context.Request.Headers[Names.CorrelationIdHeaderName][0]);
}
else
{
_correlationIdProvider.SetCorrelationId(Guid.NewGuid().ToString("N"));
}
}
</code></pre><p style="text-align: justify;">The providers of these values are also generally identical. They store values during a request in a field of type <i>AsyncLocal<T></i>:</p><pre><code lang="cs">public class CorrelationIdProvider
{
private static readonly AsyncLocal<string> Value = new();
public string GetCorrelationId()
{
var value = Value.Value;
if (string.IsNullOrWhiteSpace(value))
{
value = Guid.NewGuid().ToString("N");
SetCorrelationId(value);
}
return value;
}
public void SetCorrelationId(string value)
{
if (string.IsNullOrWhiteSpace(value))
throw new ArgumentException("Value cannot be null or whitespace.", nameof(value));
Value.Value = value;
}
}
}</code></pre><p style="text-align: justify;">But there is an exception to this simplicity. I'm talking about the monotonic clock. Now it is time to discuss it.</p><h2 style="text-align: justify;">Monotonic sequence of requests</h2><p style="text-align: justify;">Technically, each log entry has its own timestamp. What prevents me from sorting by this timestamp and considering such a sequence of records? There are several obstacles.</p><p style="text-align: justify;">First of all, as I have already said, the clocks of different services can be out of sync. Even a small shift of a couple of tens of milliseconds can lead to shuffling of log entries.</p><p style="text-align: justify;">But even if the clocks are perfectly synchronized, this does not solve our problem completely. Imagine that my service calls the same endpoint several times, each time with slightly different parameters. And I'm doing it in parallel to improve performance. Log entries from these calls will inevitably be shuffled.</p><p style="text-align: justify;">What can we do about it? We need some kind of magic <i>monotonic clock</i> that is identical for all services, or a monotonically increasing sequence of numbers. We can use it as follows. When my system receives a request, I set this clock to 0. When I need to call another service, I increase the clock value by one and send this new value to the other service. It gets the value and makes its own calls, increasing this value each time. Eventually it returns the last value to me. With this value, I continue my work.</p><p style="text-align: justify;">This approach has a number of disadvantages. First of all, external (not my) systems will not update the value of my clock. But this is not that important. The worse part is that I cannot make parallel calls. I have to wait for the end of the next call to get the updated clock value.</p><p style="text-align: justify;">To avoid this problem, I will use a different method. I use the clock value from the previous service as a prefix for the clock value in my service. The suffix will be a number that I monotonically increase with each request to another service. Here is how it is implemented:</p><pre><code lang="cs">public class RequestClockProvider
{
private class ClockHolder
{
public string PreviousServiceClock { get; init; }
public int CurrentClock { get; set; }
}
private static readonly AsyncLocal<ClockHolder> Clock = new();
public void SetPreviousServiceClock(string? value)
{
Clock.Value = new ClockHolder
{
PreviousServiceClock = value ?? string.Empty
};
}
public string GetPreviousServiceClock() => Clock.Value?.PreviousServiceClock ?? string.Empty;
public string GetNextCurrentServiceClock()
{
lock (this)
{
var clock = Clock.Value!;
return $"{clock.PreviousServiceClock}.{clock.CurrentClock++}";
}
}
}
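</code></pre><p style="text-align: justify;">To make the behavior concrete, here is a small usage sketch (the values are illustrative):</p><pre><code lang="cs">// Illustrative usage of RequestClockProvider:
var clock = new RequestClockProvider();
clock.SetPreviousServiceClock("2.1"); // the value received from the caller
Console.WriteLine(clock.GetNextCurrentServiceClock()); // prints "2.1.0"
Console.WriteLine(clock.GetNextCurrentServiceClock()); // prints "2.1.1"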
</code></pre><p style="text-align: justify;">The <i>SetPreviousServiceClock</i> method is used by my middleware to initialize the clock for this request. The <i>GetNextCurrentServiceClock</i> method is used every time I send a request to another service.</p><p style="text-align: justify;">So, if my service receives a request in which the clock is set to <i>2</i>, it will generate requests to other services with clock values <i>2.0</i>, <i>2.1</i>, <i>2.2</i>, ... And if the service receives a request with the clock value <i>2.1</i>, it will generate requests with the values <i>2.1.0</i>, <i>2.1.1</i>, <i>2.1.2</i>, ...</p><p style="text-align: justify;">If I have such a value for each log entry, I can easily group and order them. Entries within the same group can be safely ordered by timestamp, since they are created when processing a single request by a single service. This means that their timestamps were created by one physical clock.</p><p style="text-align: justify;">One more remark. One could say that I'm implementing the <a href="https://en.wikipedia.org/wiki/Lamport_timestamp" rel="nofollow" target="_blank">Lamport timestamp</a> here. It may be so, but I won't venture to assert it. I'm sure that there are more efficient algorithms that solve this problem. In practice, you should use them. But here my implementation is enough.</p><h2 style="text-align: justify;">Request sending</h2><p style="text-align: justify;">Now we have the information from the request to our service. We need to send it further with each of our own requests. How can we do it? For simplicity, I'll use instances of <i>HttpClient</i>. Here is my client for one of the services:</p><pre><code lang="cs">public interface IService2Client
{
Task Get();
}
public class Service2Client : IService2Client
{
private readonly HttpClient _client;
public Service2Client(HttpClient client)
{
_client = client ?? throw new ArgumentNullException(nameof(client));
}
public async Task Get()
{
_ = await _client.GetAsync("http://localhost:5106/weatherforecast");
}
}
</code></pre><p style="text-align: justify;">I'll register it in the dependency container as follows:</p><pre><code lang="cs">builder.Services.AddHttpClientWithHeaders<IService2Client, Service2Client>();</code></pre><p style="text-align: justify;">Here the <i>AddHttpClientWithHeaders</i> method looks like this:</p><pre><code lang="cs">public static IHttpClientBuilder AddHttpClientWithHeaders<TInterface, TClass>(this IServiceCollection services)
where TInterface : class
where TClass : class, TInterface
{
return services.AddHttpClient<TInterface, TClass>()
.AddHttpMessageHandler<RequestHandler>();
}
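</code></pre><p style="text-align: justify;">One detail to keep in mind: handlers added via <i>AddHttpMessageHandler</i> are resolved from the dependency container, so <i>RequestHandler</i> itself must be registered there:</p><pre><code lang="cs">// Message handlers are resolved from DI, so the handler needs a registration
// (transient is the usual choice for a DelegatingHandler).
builder.Services.AddTransient<RequestHandler>();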
</code></pre><p style="text-align: justify;">As you can see, I'm just adding my own request handler. Here is its code:</p><pre><code lang="cs">protected override Task<HttpResponseMessage> SendAsync(
HttpRequestMessage request,
CancellationToken cancellationToken)
{
var requestClockValue = _requestClockProvider.GetNextCurrentServiceClock();
request.Headers.Add(Names.CorrelationIdHeaderName, _correlationIdProvider.GetCorrelationId());
request.Headers.Add(Names.InitialServiceHeaderName, _initialServiceProvider.GetInitialService());
request.Headers.Add(Names.PreviousServiceHeaderName, ServiceNameProvider.ServiceName);
request.Headers.Add(Names.RequestClockHeaderName, requestClockValue);
using (LogContext.PushProperty(Names.RequestBoundaryForName, requestClockValue))
using (LogContext.PushProperty(Names.RequestURLName, $"{request.Method} {request.RequestUri}"))
{
_logger.LogInformation("Sending request...");
return base.SendAsync(request, cancellationToken);
}
}
</code></pre><p style="text-align: justify;">First of all, I add several headers to the request, the values of which you already know. Here I pass these values to the next service.</p><p style="text-align: justify;">Then I create an additional log entry with two special fields. One of them is the URL of the request. I keep it for information purposes only. In the sequence diagram, I'll show this address. The second field (<i>RequestBoundaryFor</i>) will be used to determine where to place the log entries from the target service. We'll discuss this topic later when we talk about creating the request sequence diagram.</p><h2 style="text-align: justify;">System launch</h2><p style="text-align: justify;">It is time to make a request. First, I'll start Seq using Docker Compose:</p><pre><code lang="bash">> docker compose -f "docker-compose.yml" up -d</code></pre><p style="text-align: justify;">Then I'll start all my services. Here is my startup configuration in Visual Studio:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-wfYI58H3mhp4WovH07VC9ltI_BxaUQQlGgbaWtT_pPc0FnwYLr_jSrcY30PCiNkZdDLDaStqwNE0yTE3c0gYbOR2FzmeESejiHI34sj_H3Qz85HpQTBC1W32-0LgkrbkJHjHXLQv5Zn8MxXCVHeg7rhO_QLTRcoWHY30b7TiCWjGyF2SFkYTLFJPKw/s530/Startup.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Launch" border="0" data-original-height="293" data-original-width="530" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-wfYI58H3mhp4WovH07VC9ltI_BxaUQQlGgbaWtT_pPc0FnwYLr_jSrcY30PCiNkZdDLDaStqwNE0yTE3c0gYbOR2FzmeESejiHI34sj_H3Qz85HpQTBC1W32-0LgkrbkJHjHXLQv5Zn8MxXCVHeg7rhO_QLTRcoWHY30b7TiCWjGyF2SFkYTLFJPKw/s16000/Startup.png" /></a></div><br /><p style="text-align: justify;">And now we can make a request to one of our services (e. g. to <i>http://localhost:5222/weatherforecast</i>).</p><p style="text-align: justify;">After that, we'll have some entries in Seq:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii1Fy6mFOLMzzEYeoz_mOOrJSYEsdC95fqicWIQGxu-xsnaxKrojHHTKsRQBoEJRIGc7vRoJAKCx3jUHFhc8n_uR5rJl6gfQkJ9KoqxVg454kUyiqR7mfxvXFuF3VFE1y6GeM04uXBV5tGoBNXD41U-dKjQaFEJsQ6xwxfEWekBHVC_qhSauj_oUoqXQ/s912/Seq.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Entries in Seq" border="0" data-original-height="499" data-original-width="912" height="350" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii1Fy6mFOLMzzEYeoz_mOOrJSYEsdC95fqicWIQGxu-xsnaxKrojHHTKsRQBoEJRIGc7vRoJAKCx3jUHFhc8n_uR5rJl6gfQkJ9KoqxVg454kUyiqR7mfxvXFuF3VFE1y6GeM04uXBV5tGoBNXD41U-dKjQaFEJsQ6xwxfEWekBHVC_qhSauj_oUoqXQ/w640-h350/Seq.png" width="640" /></a></div><br /><p style="text-align: justify;">I only need the correlation id from them.</p><p style="text-align: justify;">Let's see how we can build a request sequence diagram based on these log entries.</p><h2 style="text-align: justify;">Building the request sequence diagram</h2><p style="text-align: justify;">There is a free <a href="https://www.websequencediagrams.com/" rel="nofollow" target="_blank">www.websequencediagrams.com</a> service on the Internet. It has its own language for describing sequence diagrams. We'll use this language to describe our request.
For this purpose, I created the <a href="https://github.com/yakimovim/request-sequence-visualization/tree/main/EventsReader" rel="nofollow" target="_blank">EventsReader</a> application.</p><p style="text-align: justify;">But first we have to get the log entries from Seq. We'll use the <a href="https://www.nuget.org/packages/Seq.Api" rel="nofollow" target="_blank">Seq.Api</a> NuGet package:</p><pre><code lang="cs">using EventsReader;
using Seq.Api;
var connection = new SeqConnection("http://localhost:9090");
var result = connection.Events.EnumerateAsync(
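// Put here the correlation id of the request you want to visualize.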
filter: "CorrelationId = '4395cd986c9e4b548404a2aa2aca6016'",
render: true,
count: int.MaxValue);
var logs = new ServicesRequestLogs();
await foreach (var evt in result)
{
logs.Add(evt);
}
logs.PrintSequenceDiagram();
</code></pre><p style="text-align: justify;">In the <i>ServicesRequestLogs</i> class, we group all log entries by the value of our monotonic clock:</p><pre><code lang="cs">public void Add(EventEntity evt)
{
var clock = evt.GetPropertyValue(Names.RequestClockHeaderName);
if(clock == null) return;
var singleServiceLogs = GetSingleServiceLogs(clock, evt);
singleServiceLogs.Add(evt);
}
private SingleServiceRequestLogs GetSingleServiceLogs(string clock, EventEntity evt)
{
if (_logRecords.ContainsKey(clock))
{
return _logRecords[clock];
}
var serviceName = evt.GetPropertyValue(Names.CurrentServiceName)!;
var serviceAlias = GetServiceAlias(serviceName);
var logs = new SingleServiceRequestLogs
{
ServiceName = serviceName,
ServiceAlias = serviceAlias,
Clock = clock
};
_logRecords.Add(clock, logs);
return logs;
}
private string GetServiceAlias(string serviceName)
{
if(_serviceAliases.ContainsKey(serviceName))
return _serviceAliases[serviceName];
var serviceAlias = $"s{_serviceAliases.Count}";
_serviceAliases[serviceName] = serviceAlias;
return serviceAlias;
}
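</code></pre><p style="text-align: justify;">The <i>GetPropertyValue</i> helper used above is not shown in the article. A possible implementation might look like this; it is a sketch assuming that the <i>Seq.Api</i> event model exposes a <i>Properties</i> collection with name/value pairs:</p><pre><code lang="cs">// A hedged sketch of the GetPropertyValue extension used in this article.
public static class EventEntityExtensions
{
    public static string? GetPropertyValue(this EventEntity evt, string name)
        => evt.Properties?
            .FirstOrDefault(p => p.Name == name)?
            .Value?.ToString();
}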
</code></pre><p style="text-align: justify;">All entries with the same value correspond to the processing of one request by one service. They are stored in a simple class:</p><pre><code lang="cs">public class SingleServiceRequestLogs
{
public string ServiceName { get; set; }
public string ServiceAlias { get; set; }
public string Clock { get; set; }
public List<EventEntity> LogEntities { get; } = new List<EventEntity>();
public void Add(EventEntity evt)
{
LogEntities.Add(evt);
}
}
</code></pre><p style="text-align: justify;">Now we'll construct our description of the sequence diagram:</p><pre><code lang="cs">public void PrintSequenceDiagram()
{
Console.WriteLine();
PrintParticipants();
PrintServiceLogs("");
}
</code></pre><p style="text-align: justify;">The <i>PrintParticipants</i> method describes all participants in the communication. The names of services may contain characters that websequencediagrams does not accept, so we use aliases:</p><pre><code lang="cs">private void PrintParticipants()
{
Console.WriteLine("participant \"User\" as User");
foreach (var record in _serviceAliases)
{
Console.WriteLine($"participant \"{record.Key}\" as {record.Value}");
}
}
</code></pre><p style="text-align: justify;">The <i>PrintServiceLogs</i> method prints the sequence of request processing in one service. This method gets the value of the monotonic clock as a parameter:</p><pre><code lang="cs">private void PrintServiceLogs(string clock)
{
var logs = _logRecords[clock];
if (clock == string.Empty)
{
Console.WriteLine($"User->{logs.ServiceAlias}: ");
Console.WriteLine($"activate {logs.ServiceAlias}");
}
foreach (var entity in logs.LogEntities.OrderBy(e => DateTime.Parse(e.Timestamp, null, System.Globalization.DateTimeStyles.RoundtripKind)))
{
var boundaryClock = entity.GetPropertyValue(Names.RequestBoundaryForName);
if (boundaryClock == null)
{
Console.WriteLine($"note right of {logs.ServiceAlias}: {entity.RenderedMessage}");
}
else
{
if (_logRecords.TryGetValue(boundaryClock, out var anotherLogs))
{
Console.WriteLine($"{logs.ServiceAlias}->{anotherLogs.ServiceAlias}: {entity.GetPropertyValue(Names.RequestURLName)}");
Console.WriteLine($"activate {anotherLogs.ServiceAlias}");
PrintServiceLogs(boundaryClock);
Console.WriteLine($"{anotherLogs.ServiceAlias}->{logs.ServiceAlias}: ");
Console.WriteLine($"deactivate {anotherLogs.ServiceAlias}");
}
else
{
// Call to external system
Console.WriteLine($"{logs.ServiceAlias}->External: {entity.GetPropertyValue(Names.RequestURLName)}");
Console.WriteLine($"activate External");
Console.WriteLine($"External->{logs.ServiceAlias}: ");
Console.WriteLine($"deactivate External");
}
}
}
if (clock == string.Empty)
{
Console.WriteLine($"{logs.ServiceAlias}->User: ");
Console.WriteLine($"deactivate {logs.ServiceAlias}");
}
}
</code></pre><p style="text-align: justify;">Here we get all the log entries for this particular clock value (the <i>logs</i> variable). Then at the beginning and at the end of the method there is some code that should make our diagram more beautiful. There is nothing important here:</p><pre><code lang="cs">if (clock == string.Empty) ...
</code></pre><p style="text-align: justify;">All the main work is done inside the <i>foreach</i> loop. As you can see, we sort the log entries by timestamp. We can safely do it here, because all these entries are obtained as a result of processing one request by one service. It means that all these timestamps came from one physical clock:</p><pre><code lang="cs">foreach (var entity in logs.LogEntities.OrderBy(e => DateTime.Parse(e.Timestamp, null, System.Globalization.DateTimeStyles.RoundtripKind))) ...
</code></pre><p style="text-align: justify;">Then we check if the current entry is a service entry representing the beginning of some request to another service. As I have already said, such an entry must contain the <i>RequestBoundaryFor</i> field. If there is no such field, then this is a normal entry. In this case, we just print its message as a note:</p><pre><code lang="cs">Console.WriteLine($"note right of {logs.ServiceAlias}: {entity.RenderedMessage}");
</code></pre><p style="text-align: justify;">If the entry is a service entry, two variants are possible. First of all, it can be the start of a request to another of our services. In this case, the logs will contain information from the target service. We extract this information and add it to the diagram:</p><pre><code lang="cs">Console.WriteLine($"{logs.ServiceAlias}->{anotherLogs.ServiceAlias}: {entity.GetPropertyValue(Names.RequestURLName)}");
Console.WriteLine($"activate {anotherLogs.ServiceAlias}");
PrintServiceLogs(boundaryClock);
Console.WriteLine($"{anotherLogs.ServiceAlias}->{logs.ServiceAlias}: ");
Console.WriteLine($"deactivate {anotherLogs.ServiceAlias}");
</code></pre><p style="text-align: justify;">Secondly, it can be a request to an external service. In this case, we have no log entries about its work:</p><pre><code lang="cs">Console.WriteLine($"{logs.ServiceAlias}->External: {entity.GetPropertyValue(Names.RequestURLName)}");
Console.WriteLine($"activate External");
Console.WriteLine($"External->{logs.ServiceAlias}: ");
Console.WriteLine($"deactivate External");
</code></pre><p style="text-align: justify;">That's all. We can insert our correlation id into the appropriate place of <a href="https://github.com/yakimovim/request-sequence-visualization/blob/496beb7126ddcd224471195bad4d3f19a9365fe9/EventsReader/Program.cs#L9" rel="nofollow" target="_blank">Program.cs</a> and run the program. It will give us the description of the sequence diagram that can be inserted into <a href="https://www.websequencediagrams.com/" rel="nofollow" target="_blank">www.websequencediagrams.com</a>:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZ6b_U_1C1QDKULYPi2nZn1tVHlTSRFHaVacTOsMWdWZLJiyMGsmjdmY0EIfEmTdnB0UNA7KMkPbtWt5hWAlm2kxoEZn9L0JeV7CTNqzV4bchOXezw6WihuFZhjXyk1raqcnSuivKL8GfW-2TnU69oVhgOYQ8MPWy9op30nvj8GQJq02rkPfctuc4yzw/s989/Diagram.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Request sequence diagram" border="0" data-original-height="845" data-original-width="989" height="546" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZ6b_U_1C1QDKULYPi2nZn1tVHlTSRFHaVacTOsMWdWZLJiyMGsmjdmY0EIfEmTdnB0UNA7KMkPbtWt5hWAlm2kxoEZn9L0JeV7CTNqzV4bchOXezw6WihuFZhjXyk1raqcnSuivKL8GfW-2TnU69oVhgOYQ8MPWy9op30nvj8GQJq02rkPfctuc4yzw/w640-h546/Diagram.png" width="640" /></a></div><br /><h2 style="text-align: justify;">Improvements</h2><p style="text-align: justify;">Our system is ready. At the end, I'd like to say a few words about possible improvements.</p><p style="text-align: justify;">Firstly, I have not talked about messages to message queues (RabbitMQ, Azure EventHub, ...) here. They usually allow you to send some metadata along with messages, so you can transfer our special data (correlation id, monotonic clock value, ...). Support of message queues is a natural extension of our mechanism.</p><p style="text-align: justify;">Secondly, the capabilities of www.websequencediagrams.com (at least in the free version) are not very large. For example, I'd like to visually separate log entries of different types (Info, Warning, Error, ...). Perhaps we can use another, more powerful tool to create sequence diagrams.</p><p style="text-align: justify;">Thirdly, some requests are sent as "fire and forget". This means that no one is waiting for their completion. We need to represent them differently on the diagram.</p><h2 style="text-align: justify;">Conclusion</h2><p style="text-align: justify;">That's all I wanted to say. I hope that the article will be useful to you and help you understand what's happening in a complex system. Good luck!</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-71395591846560219392022-09-23T14:30:00.000+03:002022-09-23T14:30:09.281+03:00Drag and Drop in WPF TreeView<p style="text-align: justify;">Today I want to describe the implementation of drag and drop functionality inside the WPF <a href="https://docs.microsoft.com/en-us/dotnet/api/system.windows.controls.treeview?view=netframework-4.8" rel="nofollow" target="_blank">TreeView</a> control. It sounds like a simple task, but it took me a surprisingly long time. So let's start.</p><span><a name='more'></a></span><h2 style="text-align: left;">Setting the stage</h2><p style="text-align: justify;">We'll create a simple WPF application showing a tree view. This tree view will show items from a binding. I'll use the following view model for these items:</p><pre><code lang="cs">public class ItemViewModel
{
public string Title { get; }
public ItemViewModel? Parent { get; set; }
public ItemViewModel(string title)
{
Title = title;
SubItems = new ObservableCollection<ItemViewModel>();
SubItems.CollectionChanged += OnCollectionChanged;
}
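// Keep the Parent reference up to date when items are added to SubItems.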
private void OnCollectionChanged(object? sender, NotifyCollectionChangedEventArgs e)
{
if (e.Action == NotifyCollectionChangedAction.Add)
{
foreach (ItemViewModel item in e.NewItems)
{
item.Parent = this;
}
}
}
public ObservableCollection<ItemViewModel> SubItems { get; }
}</code></pre><p>As you can see, this is a simple class with the following properties:</p><p></p><ul style="text-align: left;"><li><i>Title</i>. We'll see it in the tree view.</li><li><i>SubItems</i>. This is a collection of nested items.</li><li style="text-align: justify;"><i>Parent</i>. This is a reference to the parent item. We'll need it later. This property is automatically set when we add an item into the <i>SubItems</i> collection (see the <i>OnCollectionChanged</i> method).</li></ul><div>The code of the TreeView control displaying these items looks like this:</div><pre><code lang="xml"><TreeView ItemsSource="{Binding SubItems}">
<TreeView.Resources>
<HierarchicalDataTemplate DataType="{x:Type local:ItemViewModel}"
ItemsSource="{Binding SubItems}">
<TextBlock Text="{Binding Title}"/>
</HierarchicalDataTemplate>
</TreeView.Resources>
</TreeView></code></pre><div>The only thing left to do is to set the <i>DataContext</i> of our window to a correct object:</div><pre><code lang="cs">public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
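// Build a small sample tree: A (D, E (G, H, I)), B, C (F).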
SubItems = new ObservableCollection<ItemViewModel>
{
new ItemViewModel("A")
{
SubItems =
{
new ItemViewModel("D"),
new ItemViewModel("E")
{
SubItems =
{
new ItemViewModel("G"),
new ItemViewModel("H"),
new ItemViewModel("I"),
}
},
}
},
new ItemViewModel("B"),
new ItemViewModel("C")
{
SubItems =
{
new ItemViewModel("F"),
}
},
};
DataContext = this;
}
public ObservableCollection<ItemViewModel> SubItems { get; }
}</code></pre><h2 style="text-align: left;">Implementing Drag and Drop</h2><div style="text-align: justify;">Now it is time to implement <i>Drag and Drop</i> functionality. First of all, we have to allow dropping on our <i>TreeView</i>.</div><pre><code lang="xml"><TreeView ItemsSource="{Binding SubItems}"
AllowDrop="True"</code></pre><div>Then we have to initialize dragging. It is done in two event handlers:</div><pre><code lang="xml"><TreeView ItemsSource="{Binding SubItems}"
AllowDrop="True"
PreviewMouseLeftButtonDown="OnPreviewMouseLeftButtonDown"
PreviewMouseMove="OnPreviewMouseMove"</code></pre><div>Here is the code of these handlers:</div><pre><code lang="cs">private Point _startLocation;
private ItemViewModel? _selectedItem;
private void OnPreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
_startLocation = e.GetPosition(null);
_selectedItem = GetItemViewModel(e.OriginalSource);
}
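// Start the drag operation only after the mouse has moved beyond the system drag threshold.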
private void OnPreviewMouseMove(object sender, MouseEventArgs e)
{
if (e.LeftButton == MouseButtonState.Pressed && _selectedItem != null)
{
var mousePos = e.GetPosition(null);
var diff = _startLocation - mousePos;
if (Math.Abs(diff.X) > SystemParameters.MinimumHorizontalDragDistance
|| Math.Abs(diff.Y) > SystemParameters.MinimumVerticalDragDistance)
{
var treeView = (TreeView) sender;
var dragData = new DataObject(_selectedItem);
DragDrop.DoDragDrop(treeView, dragData, DragDropEffects.Move);
}
}
}</code></pre><div style="text-align: justify;">In the <i>OnPreviewMouseLeftButtonDown</i> method we remember the mouse position and the item on which the mouse was clicked. This is the most interesting part of all. The <i>OriginalSource</i> property of <i>MouseButtonEventArgs</i> gives us the WPF control which initiated the event. But we don't need this control. We need the corresponding instance of <i>ItemViewModel</i>. The <i>GetItemViewModel</i> method implements this extraction:</div><pre><code lang="cs">private ItemViewModel? GetItemViewModel(object uiElement)
{
var depElement = uiElement as DependencyObject;
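// Walk up the visual tree until we find an element whose DataContext is an ItemViewModel.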
while (true)
{
var frElement = depElement as FrameworkElement;
if(frElement == null) break;
var item = frElement.DataContext as ItemViewModel;
if (item != null)
{
return item;
}
depElement = VisualTreeHelper.GetParent(frElement);
}
return null;
}</code></pre><div style="text-align: justify;">In the <i>OnPreviewMouseMove</i> method we actually start the Drag and Drop operation by calling <i>DragDrop.DoDragDrop</i>. The guard logic prevents us from starting a drag without an instance of <i>ItemViewModel</i> or on a simple mouse click.</div><div style="text-align: justify;">Now our operation is initiated. When we drag our item over other items, we need to tell whether it is allowed to drop it there or not. In order to do this, we must specify handlers for three more events:</div><pre><code lang="xml"><TreeView ItemsSource="{Binding SubItems}"
AllowDrop="True"
PreviewMouseLeftButtonDown="OnPreviewMouseLeftButtonDown"
PreviewMouseMove="OnPreviewMouseMove"
DragEnter="OnCheckDrag"
DragOver="OnCheckDrag"
DragLeave="OnCheckDrag"
</code></pre><div style="text-align: justify;">As you can see, I use the same event handler <i>OnCheckDrag</i> for all three events. Here is its code:</div><pre><code lang="cs">private void OnCheckDrag(object sender, DragEventArgs e)
{
e.Handled = true;
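// Find out which ItemViewModel is currently under the mouse pointer.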
var uiElement = (UIElement)sender;
var element = uiElement.InputHitTest(e.GetPosition(uiElement));
var itemUnderMouse = GetItemViewModel(element);
if (itemUnderMouse == null)
{
e.Effects = DragDropEffects.None;
return;
}
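// Forbid dropping an item onto itself or onto one of its descendants.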
if(IsChild(itemUnderMouse))
{
e.Effects = DragDropEffects.None;
return;
}
e.Effects = DragDropEffects.Move;
}
private bool IsChild(ItemViewModel? item)
{
while (true)
{
if (item == null) return false;
if (ReferenceEquals(item, _selectedItem)) return true;
item = item.Parent;
}
}</code></pre><div style="text-align: justify;">Here we again get the <i>ItemViewModel</i> under our mouse. We do it using the standard WPF <i>InputHitTest</i> method and our <i>GetItemViewModel</i> method, which we have already described. Now we have both the item we drag (the source item) and the item we drag over (the target item). At this moment we can execute any checks we want. Here I only check that the target item is not a child of the source item. But you can add any logic you want. If we allow dropping the source item here, we set the <i>Effects</i> property of <i>DragEventArgs</i> to <i>Move</i>. Otherwise, we set it to <i>None</i>.</div><div style="text-align: justify;">And here is the final touch. The logic of dropping is implemented in the <i>Drop</i> event handler:</div><pre><code lang="xml"><TreeView ItemsSource="{Binding SubItems}"
AllowDrop="True"
PreviewMouseLeftButtonDown="OnPreviewMouseLeftButtonDown"
PreviewMouseMove="OnPreviewMouseMove"
DragEnter="OnCheckDrag"
DragOver="OnCheckDrag"
DragLeave="OnCheckDrag"
Drop="OnDrop">
</code></pre><div style="text-align: justify;"><span style="text-align: left;">Here is the code:</span></div><pre><code lang="cs">private void OnDrop(object sender, DragEventArgs e)
{
e.Handled = true;
var uiElement = (UIElement)sender;
var element = uiElement.InputHitTest(e.GetPosition(uiElement));
var newParentItem = GetItemViewModel(element);
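// Note: top-level items have a null Parent; handling them would require
// removing the item from the window's root collection instead.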
_selectedItem!.Parent!.SubItems.Remove(_selectedItem);
newParentItem!.SubItems.Add(_selectedItem);
}
</code></pre><div style="text-align: justify;">Here we again get the item under the mouse pointer. Then we apply any business logic we want.</div><h2 style="text-align: justify;">Conclusion</h2><div style="text-align: justify;">It takes a surprising amount of code to implement such a standard operation. I hope this small article will be helpful to you if you have the same task.</div><p></p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-50138770193657345052022-07-12T18:53:00.000+03:002022-07-12T18:53:21.960+03:00My work with LiteDB<p style="text-align: justify;">Recently I was looking for a storage system for my program. This is a desktop application that creates many objects and searches for text in them. So I thought: "Why don't I try something new?" Instead of an SQL database, I could use some kind of document database. But I didn't want to have a separate server; I wanted this database to work with a simple file. Searching the Internet for this kind of database for .NET applications quickly led me to <a href="https://www.litedb.org" rel="nofollow" target="_blank">LiteDB</a>. And here I want to share my experience with this database.</p><span><a name='more'></a></span><h3 style="text-align: left;">Inheritance</h3><p>My program works as follows. I want to store objects like this:</p><pre><code lang="cs">internal class Item
{
public string Title { get; set; }
public string Description { get; set; }
public List<Field> Fields { get; set; } = new List<Field>();
}</code></pre><p style="text-align: justify;">But the <i>Field</i> class is abstract. And it has many descendants:</p><pre><code lang="cs">internal abstract class Field
{
}
internal sealed class TextField : Field
{
public string Text { get; set; }
}
internal sealed class PasswordField : Field
{
public string Password { get; set; }
}
internal sealed class DescriptionField : Field
{
public string Description { get; set; }
}
...</code></pre><p style="text-align: justify;">When working with SQL databases, I had to configure the storage of various descendants of the <i>Field</i> class. I thought that with LiteDB I would have to write my own BSON serialization mechanism; LiteDB provides <a href="http://www.litedb.org/docs/object-mapping/" rel="nofollow" target="_blank">such an opportunity</a>. But I was pleasantly surprised: nothing is required of me. Serialization and deserialization of various types are already implemented. You just create the necessary objects:</p><pre><code lang="cs">var items = new Item[]
{
new Item
{
Title = "item1",
Description = "description1",
Fields =
{
new TextField
{
Text = "text1"
},
new PasswordField
{
Password = "123"
}
}
},
new Item
{
Title = "item2",
Description = "description2",
Fields =
{
new TextField
{
Text = "text2"
},
new DescriptionField
{
Description = "description2"
}
}
}
};</code></pre><p style="text-align: justify;">... and insert them into the database:</p><pre><code lang="cs">using (var db = new LiteDatabase(connectionString))
{
var collection = db.GetCollection<Item>();
collection.InsertBulk(items);
}</code></pre><p style="text-align: justify;">That's all. LiteDB has the <a href="https://github.com/mbdavid/LiteDB.Studio" rel="nofollow" target="_blank">LiteDB.Studio</a> utility that allows you to view the contents of your database. Let's see how our objects are stored:</p><pre><code lang="json">{
"_id": {"$oid": "62bf12ce12a00b0f966e9afa"},
"Title": "item1",
"Description": "description1",
"Fields":
[
{
"_type": "LiteDBSearching.TextField, LiteDBSearching",
"Text": "text1"
},
{
"_type": "LiteDBSearching.PasswordField, LiteDBSearching",
"Password": "123"
}
]
}</code></pre><p style="text-align: justify;">It looks like each object has a <i>_type</i> property that allows correct deserialization from the database.</p><p>Well, we have saved our objects. Let's move on to reading.</p><h3 style="text-align: left;">Searching for text</h3><p style="text-align: justify;">As I said before, I need to search for <i>Item</i> objects in which the <i>Title</i> and <i>Description</i> properties and the properties of their fields (the <i>Fields</i> property) contain some text.</p><p style="text-align: justify;">There is nothing complicated in searching inside the <i>Title</i> and <i>Description</i> properties. The documentation is pretty clear:</p><pre><code lang="cs">var items = collection.Query()
.Where(i => i.Title.Contains("1") || i.Description.Contains("1"))
.ToArray();</code></pre><p style="text-align: justify;">But there is a problem with searching by fields. You see, the abstract class <i>Field</i> does not contain any properties. That's why I can't refer to them. Fortunately, LiteDB allows you to use string query syntax:</p><pre><code lang="cs">var items = collection.Query()
.Where("$.Title LIKE '%1%' OR $.Description LIKE '%1%'")
.ToArray();</code></pre><p style="text-align: justify;">So, how can we search inside fields using this syntax? The documentation gives a hint that the query should look something like this:</p><pre><code>$.Title LIKE '%1%' OR $.Description LIKE '%1%' OR $.Fields[@.Text] LIKE '%1%' OR $.Fields[@.Description] LIKE '%1%' OR $.Fields[@.Password] LIKE '%1%'</code></pre><p>But this leads to an error:</p><pre><code>Left expression `$.Fields[@.Text]` returns more than one result. Try use ANY or ALL before operant.</code></pre><p>And yes, using the <i>ANY</i> function solves the problem:</p><pre><code>$.Title LIKE '%1%' OR $.Description LIKE '%1%' OR ANY($.Fields[@.Text LIKE '%1%']) OR ANY($.Fields[@.Description LIKE '%1%']) OR ANY($.Fields[@.Password LIKE '%1%'])</code></pre><p style="text-align: justify;">But I want to make a couple of comments about this expression. First of all, it may seem that we can use expressions like this:</p>
<pre><code>ANY($.Fields[@.Text LIKE '%1%'])</code></pre>
<p style="text-align: justify;">But this is not the case. If you try to query elements using this expression, you will get the following error:</p>
<pre><code>Expression 'ANY($.Fields[@.Text LIKE "%1%"])' are not supported as predicate expression.</code></pre>
<p style="text-align: justify;">Strange, isn't it? It turns out that you should write like this:</p>
<pre><code>ANY($.Fields[@.Text LIKE '%1%']) = true</code></pre>
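<p style="text-align: justify;">For example, here is a minimal sketch of using this predicate from C#, with the same collection as above:</p>
<pre><code lang="cs">var items = collection.Query()
    .Where("ANY($.Fields[@.Text LIKE '%1%']) = true")
    .ToArray();</code></pre>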
<p style="text-align: justify;">I immediately recall 1 and 0 in SQL Server predicates. I don't know why they implemented it this way.</p><p style="text-align: justify;">Secondly, I was confused by the phrase <i>Try use ANY or ALL before operant</i>. For me, this does not correspond to a function call. It turns out that LiteDB supports the following syntax:</p>
<pre><code>$.Fields[*].Text ANY LIKE '%1%'</code></pre>
<p style="text-align: justify;">Unfortunately, this is not described in the documentation. I came across this in <a href="https://github.com/mbdavid/LiteDB" rel="nofollow" target="_blank">the source code</a> of tests for LiteDB on Github. This syntax works fine as a predicate without any comparison with <i>true</i>.</p><p>Finally, we can rewrite your query expression as follows:</p>
<pre><code>$.Title LIKE '%1%' OR $.Description LIKE '%1%' OR ($.Fields[*].Text ANY LIKE '%1%') OR ($.Fields[*].Description ANY LIKE '%1%') OR ($.Fields[*].Password ANY LIKE '%1%')</code></pre>
<p style="text-align: justify;">There are a couple more things that bother me here. Firstly, for each new field type, I will have to rewrite this expression if I use a new property name. Is there anything we can do about it? Well, we can.</p><p style="text-align: justify;">LiteDB supports the <i>BsonField</i> attribute, which specifies the name of the database field in which this property is stored. It is used as follows:</p><pre><code lang="cs">internal sealed class TextField : Field
{
[BsonField("TextField")]
public string Text { get; set; }
}
internal sealed class PasswordField : Field
{
[BsonField("TextField")]
public string Password { get; set; }
}
internal sealed class DescriptionField : Field
{
[BsonField("TextField")]
public string Description { get; set; }
}</code></pre><p>Now we can write one query expression for any <i>Field</i> objects:</p>
<pre><code>$.Title LIKE '%1%' OR $.Description LIKE '%1%' OR $.Fields[*].TextField ANY LIKE '%1%'</code></pre>
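<p style="text-align: justify;">For example, a hypothetical new field type (<i>UrlField</i> is not part of the original code) only needs to map its searchable property to the same database field:</p>
<pre><code lang="cs">internal sealed class UrlField : Field
{
    // Stored in the same "TextField" database field as the other descendants.
    [BsonField("TextField")]
    public string Url { get; set; }
}</code></pre>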
<p style="text-align: justify;">When I add a new descendant of the <i>Field</i> class, I can simply mark its property with the <i>[BsonField("TextField")]</i> attribute. Then I won't need to change the expression of my query.</p><p style="text-align: justify;">Unfortunately, this method doesn't quite solve all our problems. The fact is that the descendant of the <i>Field</i> can have an arbitrary number of properties in which I need to search for text. This means that I may not be able to save them all in the existing database fields.</p><p>That's why I will still use the following form of the expression:</p>
<pre><code>$.Title LIKE '%1%' OR $.Description LIKE '%1%' OR ($.Fields[*].Text ANY LIKE '%1%') OR ($.Fields[*].Description ANY LIKE '%1%') OR ($.Fields[*].Password ANY LIKE '%1%')</code></pre>
<p style="text-align: justify;">We have another problem. I have used my search string <i>%1%</i> several times in the expression. There is also an SQL injection attack (although I'm not sure I can use the word <i>SQL</i> here). In short, I'm talking about using parameters in my queries. And the LiteDB API allows us to use them:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQLEGRaOaEriFYJUF8phHtfAfQldq3Cnjjd6Xi8sxK7NtsLwigzBf0bY09lz1nsBnDW5RfBj_J1i57DI4CO_z2F6bayEuoLd6Tvrsxp_RcYDEuZgMMV6FPJvySYovWgUl0EKCjbke9C1semF70GldNUvrpz59iqH-zXK3c4PYFtfGK5-t2cxdgaaTQew/s956/Parameters.png" style="margin-left: 1em; margin-right: 1em; outline-width: 0px; user-select: auto;"><img alt="Parameters in a query" border="0" data-original-height="96" data-original-width="956" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQLEGRaOaEriFYJUF8phHtfAfQldq3Cnjjd6Xi8sxK7NtsLwigzBf0bY09lz1nsBnDW5RfBj_J1i57DI4CO_z2F6bayEuoLd6Tvrsxp_RcYDEuZgMMV6FPJvySYovWgUl0EKCjbke9C1semF70GldNUvrpz59iqH-zXK3c4PYFtfGK5-t2cxdgaaTQew/w320-h32/Parameters.png" width="320" /></a></div><br /><p style="text-align: justify;">But what exactly should we do? Unfortunately, the documentation failed me again. I had to go to the source code of the LiteDB tests and look <a href="https://github.com/mbdavid/LiteDB/blob/6d9ac6237ff8cae104a3c57a8de4ec55b4506e87/LiteDB.Tests/Expressions/Expressions_Exec_Tests.cs#L144 " rel="nofollow" target="_blank">there</a> how I should use the parameters:</p><pre><code lang="cs">var items = collection.Query()
.Where("$.Title LIKE @0 OR $.Description LIKE @0 OR ($.Fields[*].Text ANY LIKE @0) OR ($.Fields[*].Description ANY LIKE @0) OR ($.Fields[*].Password ANY LIKE @0)", "%1%")
.ToArray();</code></pre><p>Well, the search is done. But how fast is it?</p><h3 style="text-align: left;">Indexes</h3><p style="text-align: justify;">LiteDB supports indexes. Of course, my application doesn't store a really large amount of data, so it's not critically important. However, it would be great to use indexes and execute queries as fast as possible.</p><p>First of all, we need to understand whether this query uses some kind of index or not. For this purpose, LiteDB has the <i>EXPLAIN</i> command. In LiteDB.Studio, I execute my query this way:</p><pre><code lang="sql">EXPLAIN
SELECT $ FROM Item
WHERE $.Title LIKE '%1%'
OR $.Description LIKE '%1%'
OR ($.Fields[*].Text ANY LIKE '%1%')
OR ($.Fields[*].Description ANY LIKE '%1%')
OR ($.Fields[*].Password ANY LIKE '%1%')</code></pre><p>The result contains information about the index that was used:</p><pre><code lang="json">"index":
{
"name": "_id",
"expr": "$._id",
"order": 1,
"mode": "FULL INDEX SCAN(_id)",
"cost": 100
},</code></pre><p style="text-align: justify;">As you can see, we have to go through all the data now. I would like to achieve a better result.</p><p style="text-align: justify;">The documentation <a href="http://www.litedb.org/docs/indexes/" rel="nofollow" target="_blank">explicitly says</a> that it is possible to create an index based on an array type property. In this case, I can search for any elements in this array. For example, I can create an index to search inside the <i>Text</i> properties of my fields:</p><pre><code lang="cs">collection.EnsureIndex("TextIndex", "$.Fields[*].Text");</code></pre><p>Now we can use this index in our queries:</p><pre><code lang="cs">var items = collection.Query()
.Where("$.Fields[*].Text ANY LIKE @0", "%1%")
.ToArray();</code></pre><p>The <i>EXPLAIN</i> command in LiteDB.Studio shows that this query really uses the index we created:</p><pre><code lang="json">"index":
{
"name": "TextIndex",
"expr": "MAP($.Fields[*]=>@.Text)",
"order": 1,
"mode": "FULL INDEX SCAN(TextIndex LIKE \"%1%\")",
"cost": 100
},</code></pre><p style="text-align: justify;">But how can we combine all our properties in one index? Here we can use the <i>CONCAT</i> command. It combines several values into one array. Here's what creating a full index looks like:</p><pre><code lang="cs">collection.EnsureIndex("ItemsIndex", @"CONCAT($.Title,
CONCAT($.Description,
CONCAT($.Fields[*].Text,
CONCAT($.Fields[*].Password,
$.Fields[*].Description
)
)
)
)");</code></pre><p>To use it, we have to rewrite the expression of our query:</p><pre><code lang="cs">var items = collection.Query()
.Where(
@"CONCAT($.Title,
CONCAT($.Description,
CONCAT($.Fields[*].Text,
CONCAT($.Fields[*].Password,
$.Fields[*].Description
)
)
)
) ANY LIKE @0",
"%1%")
.ToArray();</code></pre><p>Now our search really uses the index:</p><pre><code lang="json">"index":
{
"name": "ItemsIndex",
"expr": "CONCAT($.Title,CONCAT($.Description,CONCAT(MAP($.Fields[*]=>@.Text),CONCAT(MAP($.Fields[*]=>@.Password),MAP($.Fields[*]=>@.Description)))))",
"order": 1,
"mode": "FULL INDEX SCAN(ItemsIndex LIKE \"%3%\")",
"cost": 100
},</code></pre><p style="text-align: justify;">Unfortunately, the <i>LIKE</i> operator still results in a FULL INDEX SCAN. We can only hope that the index gives some advantage. But wait. Why should we only hope when we can measure it? After all, we have <a href="https://github.com/dotnet/BenchmarkDotNet" rel="nofollow" target="_blank">BenchmarkDotNet</a>.</p><p>I wrote the following code for performance testing:</p><pre><code lang="cs">[SimpleJob(RuntimeMoniker.Net60)]
public class LiteDBSearchComparison
{
private LiteDatabase _database;
private ILiteCollection<Item> _collection;
[GlobalSetup]
public void Setup()
{
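// Recreate the database file so both benchmarks run against identical data.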
if (File.Exists("compare.dat"))
File.Delete("compare.dat");
_database = new LiteDatabase("Filename=compare.dat");
_collection = _database.GetCollection<Item>();
_collection.EnsureIndex("ItemIndex", @"CONCAT($.Title,
CONCAT($.Description,
CONCAT($.Fields[*].Text,
CONCAT($.Fields[*].Password,
$.Fields[*].Description
)
)
)
)");
for (int i = 0; i < 100; i++)
{
var item = new Item
{
Title = "t",
Description = "d",
Fields =
{
new TextField { Text = "te" },
new PasswordField { Password = "p" },
new DescriptionField { Description = "de" }
}
};
_collection.Insert(item);
}
}
[GlobalCleanup]
public void Cleanup()
{
_database.Dispose();
}
[Benchmark(Baseline = true)]
public void WithoutIndex()
{
_ = _collection.Query()
.Where("$.Title LIKE @0 OR $.Description LIKE @0 OR ($.Fields[*].Text ANY LIKE @0) OR ($.Fields[*].Description ANY LIKE @0) OR ($.Fields[*].Password ANY LIKE @0)",
"%1%")
.ToArray();
}
[Benchmark]
public void WithIndex()
{
_ = _collection.Query()
.Where(@"CONCAT($.Title,
CONCAT($.Description,
CONCAT($.Fields[*].Text,
CONCAT($.Fields[*].Password,
$.Fields[*].Description
)
)
)
) ANY LIKE @0",
"%1%")
.ToArray();
}
}</code></pre><p>Here are the results:</p>
<table border="1">
<thead>
<tr>
<th>Method</th><th>Mean</th><th>Error</th><th>StdDev</th><th>Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<th>WithoutIndex</th><td>752.7 us</td><td>14.71 us</td><td>21.56 us</td><td>1.00</td>
</tr>
<tr>
<th>WithIndex</th><td>277.5 us</td><td>4.30 us</td><td>4.02 us</td><td>0.37</td>
</tr>
</tbody>
</table>
<p>As you can see, the index does provide a significant performance advantage.</p><h3 style="text-align: left;">Conclusion</h3><p style="text-align: justify;">That's all I wanted to say. Overall, I have a pretty good impression of LiteDB. I am ready to use it as document storage for small projects. Unfortunately, in my opinion, the documentation is not at the same level.</p><p>I hope this information will be useful to you. Good luck!</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-26167377442022368322022-06-03T17:51:00.000+03:002022-06-03T17:51:38.283+03:00Single database for multiple microservices with FluentMigrator<p style="text-align: justify;">If you have multiple microservices, it is common to use a separate database for each of them. But recently we faced the following problem. Our pricing plan with our database hosting provider includes only a limited number of databases. We can't create a new database for each microservice, as it is too expensive. How can we solve this problem?</p><span><a name='more'></a></span><h2 style="text-align: justify;">High-level approach</h2><p style="text-align: justify;">In this article I'll use SQL Server as my database. In general, the solution is very simple. All microservices will use the same database. But how can we be sure that there will be no conflicts? We will use schemas. Each microservice will create database objects (tables, views, stored procedures, ...) only in a particular database schema which is unique across all microservices. To avoid problems with access to the data of another microservice, we'll create a separate login and user and give them rights to only one schema.<br /></p><p style="text-align: justify;">For example, for a microservice working with orders, we can do it like this:</p><pre><code lang="sql">CREATE LOGIN [orders_login] WITH PASSWORD='p@ssw0rd'
execute('CREATE SCHEMA [orders]')
CREATE USER [orders_user] FOR LOGIN [orders_login] WITH DEFAULT_SCHEMA=[orders]
GRANT CREATE TABLE to [orders_user]
GRANT ALTER,DELETE,SELECT,UPDATE,INSERT,REFERENCES ON SCHEMA :: [orders] to [orders_user]
</code></pre><p style="text-align: justify;">Now we are ready to create database objects.</p><h2 style="text-align: justify;">FluentMigrator</h2><p style="text-align: justify;">I will use <a href="https://fluentmigrator.github.io" rel="nofollow" target="_blank">FluentMigrator</a> NuGet package to modify structure of my database. It is very simple to use. First configure it:</p><pre><code lang="cs">var serviceProvider = new ServiceCollection()
.AddFluentMigratorCore()
.ConfigureRunner(
builder =>
{
builder
.AddSqlServer2016()
.WithGlobalConnectionString(connectionString)
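// Look for migration classes in the assembly that contains the Database type.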
.ScanIn(typeof(Database).Assembly).For.Migrations();
})
.BuildServiceProvider();</code></pre><p style="text-align: justify;">Here we use SQL Server 2016 or later. The <i>connectionString</i> variable contains the connection string to our database. The <i>Database</i> type can be any type inside the assembly with your migrations. Wait! But what are migrations?</p><p style="text-align: justify;">This is how we describe the changes we want to make to our database. Each migration is a simple class that inherits <i>Migration</i>:</p><pre><code lang="cs">[Migration(1)]
public class FirstMigration : Migration
{
public const string TableName = "orders";
public override void Up()
{
Create.Table(TableName)
.WithColumn("id").AsInt32().PrimaryKey().Identity()
.WithColumn("code").AsString(100).NotNullable();
}
public override void Down()
{
Delete.Table(TableName);
}
}</code></pre><p style="text-align: justify;">Inside the <i>Up</i> and <i>Down</i> methods you describe what you want to do on applying and rolling back the migration. The <i>Migration</i> attribute contains a number which specifies the order in which your migrations will be applied.</p><p style="text-align: justify;">Now it is very simple to apply your migrations to a database:</p><pre><code lang="cs">var runner = serviceProvider.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();</code></pre><p style="text-align: justify;">That's all. All your migrations will now be applied to the database. FluentMigrator will also create a <i>VersionInfo</i> table that contains information about all currently applied migrations. With the help of this table, FluentMigrator will know next time which migrations still need to be applied to the database.</p><p style="text-align: justify;">Unfortunately, it does not work that way for our use case. There are two problems.</p><p style="text-align: justify;">First of all, the <i>VersionInfo</i> table is created in the <i>dbo</i> schema by default. But this is unacceptable for us. Each microservice must have its own <i>VersionInfo</i> table inside its own schema.</p><p style="text-align: justify;">The second problem is the following. Consider this migration code:</p><pre><code lang="cs">Create.Table("orders")</code></pre><p style="text-align: justify;">Unfortunately, this code also creates the <i>orders</i> table inside the <i>dbo</i> schema. Of course, we can specify the schema explicitly:</p><pre><code lang="cs">Create.Table("orders").InSchema("orders")</code></pre><p style="text-align: justify;">But I'd prefer to avoid this. Somebody will forget to specify this schema, and we may get an error. I'd like to replace the default schema for an entire microservice.</p><h2 style="text-align: justify;">Schema for VersionInfo table</h2><p style="text-align: justify;">It is very easy to set a custom schema for the <i>VersionInfo</i> table:</p><pre><code lang="cs">var serviceProvider = new ServiceCollection()
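// A convention set with a schema name makes "orders" the default schema,
// so the VersionInfo table is created in that schema.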
.AddSingleton<IConventionSet>(new DefaultConventionSet("orders", null))
.AddFluentMigratorCore()
.ConfigureRunner(
builder =>
{
builder
.AddSqlServer2016()
.WithGlobalConnectionString(connectionString)
.ScanIn(typeof(Database).Assembly).For.Migrations();
})
.BuildServiceProvider();
</code></pre><p style="text-align: justify;">Here we just register a new instance of the <i>DefaultConventionSet</i> class for the <i>IConventionSet</i> interface with the corresponding schema. Now our <i>VersionInfo</i> table will be created inside the <i>orders</i> schema.</p><h2 style="text-align: justify;">Default schema for database objects</h2><p style="text-align: justify;">Unfortunately, it is not so easy to understand how we can replace the default schema for other database objects. It took me some time. Let's start from the <a href="https://github.com/fluentmigrator/fluentmigrator/blob/c4babcfed93e4c490d7a12d21be27e8c378937f4/src/FluentMigrator.Runner.SqlServer/SqlServerRunnerBuilderExtensions.cs#L141" rel="nofollow" target="_blank">code</a> of the <i>AddSqlServer2016</i> method. It registers an instance of the <i>SqlServer2008Quoter</i> class. This class inherits the <i>QuoteSchemaName</i> <a href="https://github.com/fluentmigrator/fluentmigrator/blob/c4babcfed93e4c490d7a12d21be27e8c378937f4/src/FluentMigrator.Runner.SqlServer/Generators/SqlServer/SqlServer2005Quoter.cs#L23" rel="nofollow" target="_blank">method</a> from the <i>SqlServer2005Quoter</i> class. Here you can see where the default schema comes from.</p><p style="text-align: justify;">We'll replace this quoter class with our own:</p><pre><code lang="cs">sealed class Quoter : SqlServer2008Quoter
{
private readonly string _defaultSchemaName;
public Quoter(string defaultSchemaName)
{
if (string.IsNullOrWhiteSpace(defaultSchemaName))
throw new ArgumentException("Value cannot be null or whitespace.", nameof(defaultSchemaName));
_defaultSchemaName = defaultSchemaName;
}
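// An empty schema name means the migration did not specify one,
// so we substitute our default schema instead of dbo.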
public override string QuoteSchemaName(string schemaName)
{
if (string.IsNullOrEmpty(schemaName))
return $"[{_defaultSchemaName}]";
return base.QuoteSchemaName(schemaName);
}
}</code></pre><p style="text-align: justify;">As you can see, it is very simple. The implementation is almost the same as in the <i>SqlServer2005Quoter</i> class, but instead of <i>dbo</i> we use our custom schema.</p><p style="text-align: justify;">Now we just need to register this class:</p><pre><code lang="cs">var serviceProvider = new ServiceCollection()
.AddSingleton<IConventionSet>(new DefaultConventionSet("orders", null))
.AddFluentMigratorCore()
.ConfigureRunner(
builder =>
{
builder
.AddSqlServer2016()
.WithGlobalConnectionString(connectionString)
.ScanIn(typeof(Database).Assembly).For.Migrations();
builder.Services.RemoveAll<SqlServer2008Quoter>()
.AddSingleton<SqlServer2008Quoter>(new Quoter("orders"));
})
.BuildServiceProvider();
</code></pre><p style="text-align: justify;">And everything works as we expected.</p><h2 style="text-align: justify;">Conclusion</h2><p style="text-align: justify;">I hope this article is useful for you. It was surprisingly hard to understand how to change the default schema for database objects. I hope I saved you some time. Good luck!</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-38723124850481259452022-06-02T18:29:00.002+03:002022-06-02T18:29:36.922+03:00Calling Thread.Abort and Thread.ResetAbort several times<p style="text-align: justify;">In this short article I want to analyze a situation where we want to call <i>Thread.Abort</i> several times for a thread which uses <i>Thread.ResetAbort</i> to control the cancellation process.</p><span><a name='more'></a></span><p>Let's start with some simple code to understand how <i>Thread.Abort</i> works. I'll create a thread with an infinite loop and then I'll abort it:</p><pre><code lang="cs">using System;
using System.Threading;
namespace ThreadAbort
{
internal class Program
{
static void Main(string[] args)
{
Thread thread = new Thread(Run);
thread.Start();
Console.WriteLine("Press any key to abort the thread.");
Console.ReadKey();
thread.Abort();
}
static void Run()
{
while(true) { }
}
}
}
</code></pre><p style="text-align: justify;">The <i>Run</i> method is executed in a separate thread. When <i>Abort</i> is called for this thread, the runtime throws a special <i>ThreadAbortException</i> inside the thread. The thread code can catch this exception and gracefully handle thread abortion.</p><p style="text-align: justify;">But does it mean that I can swallow this exception and continue the thread execution? Let's see.</p><pre><code lang="cs">using System;
using System.Threading;
namespace ThreadAbort
{
internal class Program
{
static void Main(string[] args)
{
Thread thread = new Thread(Run);
thread.Start();
Console.WriteLine("Press any key to abort the thread.");
Console.ReadKey();
thread.Abort();
}
static void Run()
{
try
{
while (true) { }
}
catch (ThreadAbortException)
{
Console.WriteLine("Thread is aborted");
}
Console.WriteLine("Continue execution");
}
}
}
</code></pre><p style="text-align: justify;">Here we catch a <i>ThreadAbortException</i> and continue execution. But the line printing <i>Continue execution</i> is never reached. You see, when all <i>catch</i> and <i>finally</i> blocks are finished, the runtime rethrows the <i>ThreadAbortException</i>. This is why the code after the <i>try-catch</i> is never executed.</p><p style="text-align: justify;">But what if I do want to continue the execution of the thread code? In this case, I must use <i>Thread.ResetAbort</i>:</p><pre><code lang="cs">using System;
using System.Threading;
namespace ThreadAbort
{
internal class Program
{
static void Main(string[] args)
{
Thread thread = new Thread(Run);
thread.Start();
Console.WriteLine("Press any key to abort the thread.");
Console.ReadKey();
thread.Abort();
}
static void Run()
{
try
{
while (true) { }
}
catch (ThreadAbortException)
{
Console.WriteLine("Thread is aborted");
Thread.ResetAbort();
}
Console.WriteLine("Continue execution");
}
}
}
</code></pre><p style="text-align: justify;">Here I call <i>Thread.ResetAbort</i> inside my <i>catch</i> block. It means that I don't want the <i>ThreadAbortException</i> to be rethrown. This is why this time the line <i>Continue execution</i> is printed.</p><p style="text-align: justify;">But recently I came across the following situation. I have a thread method that constructs a complicated page for rendering. If this process takes too long, I abort the process and construct another page with an error message using the same method but with a different state. I can illustrate this situation with the following code:</p><pre><code lang="cs">static void Run()
{
try
{
while (true) { }
}
catch (ThreadAbortException)
{
Thread.ResetAbort();
State = "error";
Run();
}
}</code></pre><p style="text-align: justify;">But it appeared that even the creation of the error page can take too long. In this case, I want to abort the thread again and construct a really simple page instead. So, what is the problem? Let's see. Here is a sketch of my code:</p><pre><code lang="cs">using System;
using System.Threading;
namespace ThreadAbort
{
internal class Program
{
static void Main(string[] args)
{
Thread thread = new Thread(Run);
thread.Start();
while (true)
{
Console.WriteLine("Press any key to abort the thread.");
Console.ReadKey();
thread.Abort();
}
}
static void Run()
{
Console.WriteLine("Thread execution started");
try
{
while (true) { }
}
catch (ThreadAbortException)
{
Console.WriteLine("Thread is aborted");
Thread.ResetAbort();
Run();
}
}
}
}
</code></pre><p style="text-align: justify;">The <i>Run</i> method recursively calls itself on every abort. The <i>while</i> loop in the <i>Main</i> method allows me to abort the thread an arbitrary number of times. But when you run this program, it appears that you can actually abort the thread only once. Why?</p><p style="text-align: justify;">The reason is in the <i>try-catch</i> block. You see, while the code is still inside the <i>catch</i> block, the system thinks that the process of aborting is not finished yet. This is why it ignores all subsequent calls to <i>Thread.Abort</i>. And we invoke the <i>Run</i> method from inside the <i>catch</i> block. This is why we never leave it.</p><p style="text-align: justify;">So, what should we do to be able to call <i>Thread.Abort</i> several times? We should move the recursive invocation of the <i>Run</i> method outside of the <i>catch</i> block:</p><pre><code lang="cs">using System;
using System.Threading;
namespace ThreadAbort
{
internal class Program
{
static void Main(string[] args)
{
Thread thread = new Thread(Run);
thread.Start();
while (true)
{
Console.WriteLine("Press any key to abort the thread.");
Console.ReadKey();
thread.Abort();
}
}
static void Run()
{
Console.WriteLine("Thread execution started");
try
{
while (true) { }
}
catch (ThreadAbortException)
{
Console.WriteLine("Thread is aborted");
Thread.ResetAbort();
}
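// The recursive call is now outside the catch block, so the thread can be aborted again.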
Run();
}
}
}
</code></pre><p style="text-align: justify;">Now we can abort the thread as many times as we want.</p><p style="text-align: justify;">I hope this little piece of advice is useful to you. Good luck!</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-40097709820318775272022-03-28T16:16:00.002+03:002022-03-28T16:16:45.981+03:00About trust in software systems<p style="text-align: justify;">I recently heard about the following situation. The Ukrainian side created some kind of video message. They claimed that the message was created on a certain date. But the Russian side stated that it was recorded in advance. So I started thinking about whether it was possible for ordinary people to check when the video was created.</p><span><a name='more'></a></span><p style="text-align: justify;">Of course, there is a very simple method that has been known for a long time. You buy a fresh newspaper and shoot your video showing the first page of the newspaper. In this case, you can be sure that the author of the message created it no earlier than the newspaper was published.</p><p style="text-align: justify;">But in the case where the video was created by some government, there are still doubts. The government can have sufficient influence on the press and force it to create the necessary publications. Of course, this problem can be solved quite easily. You can ask your supporters in the USA or another big country to take a photo of some famous newspaper and send it to you. Then you can show your audience this photo.</p><p style="text-align: justify;">But I, as a software developer, am interested to know if it is possible to solve this problem with software. Perhaps, in the process of thinking, I could find some ideas on how to increase confidence in software systems.</p><h2 style="text-align: justify;">Description of our system</h2><p style="text-align: justify;">Let's imagine what a system might look like to solve our problem. Instead of the first page of the Times, our author shows a computer screen. There's a browser open showing some kind of website (for example, clock.org). This site shows the current time in UTC (for example, March 4, 2022 16:47) and some sequence of numbers and letters (for example, 1G34HF4JH3). For brevity, let's call this sequence a time code. This code changes once a minute. Now we turn to the viewer's side. Let's say he sees the video in a week. He sees the time and code in the video. If he wants to know whether the video was actually made at that time, he goes to another page at clock.org. There he enters the time and code, and the site tells him whether this code was actually shown at that time.</p><p style="text-align: justify;">Theoretically, for this purpose, you can use the website of some news agency. But, firstly, the content of this site may not change often enough for our purposes. And, secondly, they usually do not provide an opportunity to show the content of the site at some previous point in time. We could try to extract this content from some Google cache, but it is not designed for such tasks. It may not contain the required content, or the content may be deleted to free up disk space. And also, today it is difficult to believe in the independence of various news agencies. I would like to avoid all this.</p><p style="text-align: justify;">You may ask, "Can we trust your site? Maybe you made a deal with the author of the video, and your site shows what they want..."
Well, let's figure it out.</p><h2 style="text-align: justify;">Trust in the source code</h2><p style="text-align: justify;">My site has some source code. If the user doesn't know this code, they have no reason to trust it. No one can know what is actually running on the server side. Maybe I created a complex algorithm for generating codes for each moment in time and passed this algorithm to the author of the video. And now they can generate these codes at any time.</p><p style="text-align: justify;">But what if the source code of the site is available to anyone? Suppose I posted it on GitHub. It's hard to suspect that I bribed GitHub's authors. And we don't really need to trust GitHub. Anyone can download the source code from there to their own computer. "So what?" you may say. "How can you prove that your site executes exactly this code?" Here's how it's done.</p><p style="text-align: justify;">First, I will need a compiler that deterministically converts the source code into an executable file. This means that if you take some source code and compile it 1000 times, you will get an absolutely identical executable file all 1000 times. This means no compilation timestamps, no built-in GUIDs for debugging, ... (this is known as a reproducible build). But technically I don't see any problems here.</p><p style="text-align: justify;">Also, there are some programming languages (like PHP) where you don't even need to compile anything. It is enough to have the source code that will be interpreted on the hosting provider's side.</p><p style="text-align: justify;">Now let's move on to the hosting provider. We're going to need some help from them. I want to ask them to calculate the hash for several of my files: those files that the provider actually executes for my site. These can be compiled assemblies or source code files. The hosting provider will show this hash to anyone who wants to know it. It looks like this. You visit the hosting provider's website (not clock.org itself, but the website of the provider that hosts clock.org). There you type "clock.org" in some input field, and the provider will show you the hash and the full paths to all the files for which the hash was calculated. I'll explain later why we need these full paths.</p><p style="text-align: justify;">We need one more piece of information. My website (clock.org) will provide information about the GitHub repository and the commit ID in it.</p><p style="text-align: justify;">Now let's put it all together. Let's say I want to be sure that the site clock.org executes the same code that its author posted on GitHub. I go to clock.org and get the repository name and commit ID there. Then I go to GitHub and download this particular version of the source code to my computer. Then I compile it. Now I go to the hosting provider's website and get the hash and the list of files for which this hash was calculated. Then on my machine I calculate the hash for the same files. If the hashes are equal, then everything is fine. If not... then there is no trust.</p><p style="text-align: justify;">As you can see, we have replaced trust in the site owner with trust in the hosting provider. But we can further reduce the need for this trust. If I really need the trust of my users, I can host instances of my site all over the world with different hosting providers in different countries. It will be really hard to imagine that I colluded with all of them.</p>
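<p style="text-align: justify;">For illustration, here is a minimal sketch of the hash check described above. The file names are hypothetical; the provider and the user must hash exactly the same files in exactly the same order:</p>
<pre><code lang="cs">// A minimal sketch of the hash check described above. The file names below
// are hypothetical; both sides must hash the same files in the same order.
using System;
using System.IO;
using System.Security.Cryptography;

internal static class SiteHash
{
    public static string Compute(params string[] filePaths)
    {
        using (var sha = SHA256.Create())
        {
            foreach (var path in filePaths)
            {
                var bytes = File.ReadAllBytes(path);
                // Feed every file into a single hash computation.
                sha.TransformBlock(bytes, 0, bytes.Length, null, 0);
            }
            sha.TransformFinalBlock(Array.Empty<byte>(), 0, 0);
            return BitConverter.ToString(sha.Hash).Replace("-", "");
        }
    }
}

// Usage: Console.WriteLine(SiteHash.Compute("index.php", "web.config"));</code></pre>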
<p style="text-align: justify;">There are more questions. Why should we choose the files for which the provider will calculate the hash? Why can't we just calculate it for all files in the root folder and all subfolders? Usually the site writes something to disk (for example, logs). Any change to the files also changes the hash, so in this case the system will not work. It is better to specify several immutable files and calculate a hash for them. Of course, this list of files should include everything that actually runs on the provider's website: index.php, web.config, ... This may lead to some restrictions on what these files can contain, since they must be hosted on GitHub and visible to everyone. But I think it's not a very big problem. All confidential information can be passed through environment variables.</p><p style="text-align: justify;">I promised to explain why the hosting provider should show the full paths to all files involved in the hash calculation. Otherwise, I could do the following trick. I will upload some arbitrary code to the provider. But I will also create a separate folder where I will put the compiled code from GitHub. After that, I will ask the provider to calculate the hash for the files in this folder. This means that one piece of code will be executed, and the hash will be calculated for a completely different one. Knowing the full file paths protects against such a problem.</p><p style="text-align: justify;">Now we have several copies of my site all over the world from different hosting providers. The author of the video can open several of them to demonstrate that they show the same UTC time and the same time code. The audience can check any of them, depending on which hosting provider they trust more.</p><p style="text-align: justify;">Wait a minute! Did I say "same time code"?!</p><h2 style="text-align: left;">Data storage system</h2><p style="text-align: justify;">"So you got caught!" you may say. "Obviously, your site instances must have some kind of storage system, some kind of database in which you keep the correspondence between UTC time and time codes. And who has access to this database? You have it! This means that if you wish, you can insert any information you like there and change the data at your discretion. How can I trust this?" Yes, this is a very serious question.</p><p style="text-align: justify;">First of all, each instance of my site can have a separate independent storage. For example, an SQLite file in the root folder. But in this case, I do not know how to establish trust. It's better to replace trust in my storage with trust in something else: the database provider.</p><p style="text-align: justify;">All my site instances will interact with the same database. They will get access to it via the Internet from some database provider. The access address (e.g. a URL), the database name, and the names of all tables/collections (and possibly the username) will be hardcoded in my source code. I'll let anyone make sure that I always work with the same database, with the same tables, and that I haven't replaced them on the sly. I will only get the password through an environment variable.</p><p style="text-align: justify;">"Big deal! You can still make any changes to this database." you may say. Yes. Unless...</p><p style="text-align: justify;">Unless the database itself restricts what I can do. Imagine a database that does not allow you to modify existing data and does not allow you to delete data. I can only add new entries. It is also not possible to create multiple records with the same key. In our case, the key will be UTC time to the minute.</p>
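<p style="text-align: justify;">To make this constraint concrete, here is a sketch. The <i>ITimeCodeStore</i> interface is purely hypothetical and only illustrates the rules such a storage must enforce:</p>
<pre><code lang="cs">using System;

// Hypothetical storage contract: no updates, no deletes, unique keys only.
public interface ITimeCodeStore
{
    DateTime LastMinuteUtc { get; }
    // Throws if a record with this key already exists.
    void Append(DateTime minuteUtc, string timeCode);
}

public static class TimeCodeWriter
{
    public static void AppendNext(ITimeCodeStore store, string timeCode)
    {
        // Only the minute following the last stored minute is accepted,
        // which makes it impossible to write codes far into the future.
        var next = store.LastMinuteUtc.AddMinutes(1);
        store.Append(next, timeCode);
    }
}</code></pre>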
<p style="text-align: justify;">But that's not enough. Let's say I made a deal with the author of the video. He wants to record a video today, but it should look as if it was created next week. To do this, I add some entries to the database for times next week. This does not violate the database restrictions. I don't change anything or delete anything, I just add new entries. I also don't create key conflicts. They may happen later, when my code tries to insert new entries for the same time. But such conflicts can be easily resolved by the program. So my database needs to guarantee a sequence of keys. At any moment, I can only add an entry for the next key (for the minute following the last minute existing in the database).</p><p style="text-align: justify;">There is another important issue that we need to discuss. Who inserts data into my database? Imagine that each instance of my site has code that inserts new records into a shared database. Each instance generates a new time code for the next minute once a minute and tries to insert it into the database. Only one instance will do this successfully (due to the unique key constraint); other instances will receive an error message due to a key conflict and will have to re-read the time code for the next minute from the database. This way they will all be able to show the same time code at the same time. Is this enough to be sure that our time codes are generated honestly? Unfortunately, no.</p><p style="text-align: justify;">At home, I may have an Excel file in which I save time codes for many years to come. Since I have the password for the database, I can run a small program that will write these codes to the database before the instances of my site do it. This will allow me to know all the codes for many years, and I will be able to use this knowledge.</p><p style="text-align: justify;">What can we do to overcome this problem? First, on the database provider's website, I can restrict the list of IP addresses from which I can connect to the database. Of course, the database provider should publish this list so that everyone can see that any changes to my database come from the same IP addresses as the instances of my site. But now I have to somehow guarantee that I won't be able to run another process on the same computer that my site is running on. I think this is much harder to do.</p><p style="text-align: justify;">There is another approach. My sites can provide a kind of log in which we record whether this instance of the site was able to insert a new record into the database, or whether it encountered a key conflict and had to re-read the time code. People could (at least theoretically) visit all instances of my site and check whether one of them was able to write the time code at a given time. If they all encountered a key conflict, it would mean that someone else created the time code. But how can I keep this log? If it's a file on disk, no one can be sure that I haven't made some changes there. Such a file will be constantly changing, and the hash will not help us. I could save such a log in the server's memory. In this case, we could be sure that the data stored there is correct, since we trust the source code. But the size of such storage will be limited. So we'll have to clean it out from time to time. This would mean that we would not be able to check the time code after a certain period of time.
And also, the log will disappear every time the site instance is restarted.</p><p style="text-align: justify;">So, what can we do?</p><h2 style="text-align: justify;">Blockchain</h2><p style="text-align: justify;">In fact, there is already a technology that can be used for our storage purposes. I'm talking about blockchain. It provides immutable storage (recorded data cannot be changed) and verification of new blocks by the entire community of participants.</p><p style="text-align: justify;">For our purposes, the blockchain can be built as follows. Firstly, any participant can generate a block with a time code for the next minute. Secondly, the verification process checks that (see the code sketch after the list):</p><p></p><ul style="text-align: left;"><li style="text-align: justify;">The block is generated for the next minute after the last block in the chain, and not for a distant time in the future.</li><li>The block contains a time code that has not been used before (or has not been used for some considerable time).</li><li>The block is generated by a participant who has not generated any of the last N blocks in the chain. This check helps to exclude situations when one person generates several consecutive blocks.</li></ul><p></p>
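<p style="text-align: justify;">Here is the promised sketch of these three checks. The <i>Block</i> type and its fields are hypothetical:</p>
<pre><code lang="cs">using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical block type for the sketch below.
public sealed class Block
{
    public DateTime UtcMinute { get; set; }
    public string TimeCode { get; set; }
    public string Author { get; set; }
}

public static class BlockValidator
{
    public static bool IsValid(Block candidate, IReadOnlyList<Block> chain, int n)
    {
        var last = chain[chain.Count - 1];
        // 1. Only the very next minute is allowed, not a distant time in the future.
        if (candidate.UtcMinute != last.UtcMinute.AddMinutes(1)) return false;
        // 2. The time code must not have been used before.
        if (chain.Any(b => b.TimeCode == candidate.TimeCode)) return false;
        // 3. The author must not have generated any of the last N blocks.
        if (chain.Skip(Math.Max(0, chain.Count - n)).Any(b => b.Author == candidate.Author)) return false;
        return true;
    }
}</code></pre>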
But these systems show data to everyone, which is not always acceptable.</p><p style="text-align: justify;">I hope it was at least fun for you to travel around the world of trust in software systems. I will be glad if it gives you food for thought. Good luck!</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-73624155703658063362022-02-28T17:59:00.002+03:002022-02-28T17:59:44.199+03:00How to use Telepresence in Windows<p>Recently I came across <a href="https://www.telepresence.io" rel="nofollow" style="text-align: left;" target="_blank">Telepresence</a><span style="text-align: left;">. It allows you to quickly replace a deployment in your Kubernetes cluster with an application running on your machine. This means that all requests inside the cluster to the pods of this deployment will actually go to your developer machine. It allows you, for example, to debug your application in a real environment. For other use cases, please consult the </span><a href="https://www.telepresence.io/docs/latest/quick-start/" rel="nofollow" style="text-align: left;" target="_blank">documentation</a>.</p><p>Here I want to show how you can install and use it on your Windows machine.</p><span><a name='more'></a></span><h2 style="text-align: left;">Prepare Kubernetes cluster</h2><p>First of all, you'll need a Kubernetes cluster. I use <a href="https://www.docker.com/products/docker-desktop" rel="nofollow" target="_blank">Docker Desktop</a>, but you can use whatever you want. In my cluster there will be only a single deployment of Nginx and a single service for it. Here is my YAML file for the deployment:</p><pre><code lang="yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    service: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      service: nginx
  template:
    metadata:
      labels:
        service: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
</code></pre><p>And this is the YAML file of my service:</p><pre><code lang="yaml">apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    service: nginx
spec:
  type: NodePort
  selector:
    service: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 31000</code></pre><p>I install them into the cluster from one folder with a simple command:</p><pre><code lang="bash">kubectl apply -f .</code></pre><p>Now I can contact Nginx at <i>http://localhost:31000</i>:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjSwbLLu7hDEo_CTt4yobtOapYBE1L7yPhLHDoDhGRdXQ6CDBqoJTFoe9bATnDskQhpg-V_4rZ45CvOFwjuyA4PgtEojVw2KdC_IWYCfpNE9QXj6I7BFOWnHsEzYYH5s1T-nzGXDxQikQj24IDoOyZpeCE9W6NsXcraY5ZJHakLq0hrCm9jwqpbwcIMqg=s829" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="342" data-original-width="829" height="132" src="https://blogger.googleusercontent.com/img/a/AVvXsEjSwbLLu7hDEo_CTt4yobtOapYBE1L7yPhLHDoDhGRdXQ6CDBqoJTFoe9bATnDskQhpg-V_4rZ45CvOFwjuyA4PgtEojVw2KdC_IWYCfpNE9QXj6I7BFOWnHsEzYYH5s1T-nzGXDxQikQj24IDoOyZpeCE9W6NsXcraY5ZJHakLq0hrCm9jwqpbwcIMqg=s320" width="320" /></a></div><h2 style="text-align: left;">Install Telepresence</h2><p>There is a <a href="https://www.telepresence.io/docs/latest/install/" rel="nofollow" target="_blank">documentation page</a> describing how to install Telepresence. In short, it says the following. Download the archive with the latest version of Telepresence. Extract it somewhere. Execute <i>install-telepresence.ps1</i> in PowerShell with administrator rights and an elevated execution policy. I wanted to specify a folder to install Telepresence to, so I executed</p><pre><code lang="bash">Set-ExecutionPolicy Bypass -Scope Process
.\install-telepresence.ps1 -Path c:\tools\telepresence</code></pre><p>Now Telepresence is installed on our machine. Let's connect it to the Kubernetes cluster.</p><h2 style="text-align: left;">Connecting to Kubernetes</h2><p>The documentation says that after installation I must close PowerShell and open a new PowerShell session (with administrator rights, of course). Now we must execute:</p><pre><code lang="bash">telepresence connect</code></pre><p>This command installs something into your Kubernetes cluster (you can take a look inside the <i>ambassador</i> namespace) and makes everything ready. I think that it somehow uses the usual <i>kubectl</i> command. So make sure you have sufficient rights to install different things into the Kubernetes cluster.</p><p>And here I got a strange error. It said something like <i>Configured to use 'C:\tools\telepresence\telepresence.exe' but actually using 'c:\tools\telepresence\telepresence.exe'</i>. A really strange thing. Nevertheless, what should we do? Use Cmd instead of PowerShell, of course. In Cmd with administrator rights everything works like a charm.</p><h2 style="text-align: left;">Replacing a deployment</h2><p>Now it is time to replace something inside the Kubernetes cluster with something on our developer machine. There is a page in the documentation entitled '<a href="https://www.telepresence.io/docs/latest/howtos/intercepts/" rel="nofollow" target="_blank">Intercept a service in your own environment</a>'. It recommends using a command like this:</p><pre><code lang="bash">telepresence intercept nginx-service --port 5500:http --env-file ./nginx-service-intercept.env</code></pre><p>I thought that we would replace a Kubernetes service and it would start routing all traffic to our developer machine. But this is not the case:</p><p></p><blockquote><i>telepresence: error: No interceptable deployment, replicaset, or statefulset matching nginx-service found</i></blockquote><p></p><p>We must replace a deployment:</p><pre><code lang="bash">telepresence intercept nginx-deployment --port 5500:http --env-file ./nginx-service-intercept.env</code></pre><p>Now everything works fine. All requests to the deployment in the cluster will be rerouted to port 5500 on my developer machine. This command also created a <i>nginx-service-intercept.env</i> file. This file contains the values of all environment variables available to the deployment pods in the cluster.
You can use them to configure your application on your developer machine.</p><p>Now when you browse <i>http://localhost:31000</i> you'll see whatever you provide on port 5500:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgAYDTWMQmjtpzhftmAwRLaEHOecKu21yjgr9N9iR2kP7JbsE8npWjVjYAVYi9QNFooqiaGvu7dmMzB9WJGkjTBj331d2ujz9Vgwy1e35aKxFxxVZfxSUxzjr7CX1feafZOdcDCmNpOHFQRH7lPDgyVFknVYb5VzhUtNWP8s2pFMjEAmNb1idWSCVRrmA=s218" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="106" data-original-width="218" height="106" src="https://blogger.googleusercontent.com/img/a/AVvXsEgAYDTWMQmjtpzhftmAwRLaEHOecKu21yjgr9N9iR2kP7JbsE8npWjVjYAVYi9QNFooqiaGvu7dmMzB9WJGkjTBj331d2ujz9Vgwy1e35aKxFxxVZfxSUxzjr7CX1feafZOdcDCmNpOHFQRH7lPDgyVFknVYb5VzhUtNWP8s2pFMjEAmNb1idWSCVRrmA" width="218" /></a></div><br /><h2 style="text-align: left;">Cleaning up</h2><p>If you want to stop intercepting traffic, execute the following command:</p><pre><code lang="bash">telepresence leave nginx-deployment</code></pre><p>To disconnect Telepresence from your cluster use:</p><pre><code lang="bash">telepresence quit</code></pre><p>And finally, to uninstall Telepresence items from the cluster use:</p><pre><code lang="bash">telepresence uninstall --everything</code></pre><h2 style="text-align: left;">Conclusion</h2><p>Telepresence is a very interesting and useful technology. Unfortunately, it is very unstable on Windows at the moment. I had to uninstall and reinstall the Telepresence objects in the cluster several times before it started to work for me. But we must remember that it is still in Developer Preview for Windows now. I'm sure they'll make it work fine.<br /></p><p>I hope this short article is useful for you. Good luck!</p><p><br /></p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-36020175536747621432021-11-16T14:28:00.000+03:002021-11-16T14:28:24.211+03:00How to use BenchmarkDotNet<p>If you want to measure the performance of your .NET code, you can use the <a href="https://benchmarkdotnet.org/" rel="nofollow" target="_blank">BenchmarkDotNet</a> NuGet package. Let's see what you can do with it.</p><span><a name='more'></a></span><p>The description of your performance tests looks like an ordinary class:</p><pre><code lang="cs">[SimpleJob(RuntimeMoniker.Net50)]
[SimpleJob(RuntimeMoniker.NetCoreApp31)]
[MinColumn, MaxColumn]
public class HashAlgorithmsComparison
{
    private readonly SHA256 sha256 = SHA256.Create();
    private readonly SHA512 sha512 = SHA512.Create();
    private readonly MD5 md5 = MD5.Create();
    private byte[] data;
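
    // Each value listed in [Params] below produces a separate run of every
    // benchmark, so all methods are measured for 1000- and 10000-byte inputs.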
    [Params(1000, 10000)]
    public int N;

    [GlobalSetup]
    public void Setup()
    {
        data = new byte[N];
        new Random(42).NextBytes(data);
    }

    [GlobalCleanup]
    public void Cleanup()
    {
        data = new byte[0];
    }

    [IterationSetup]
    public void IterationSetup()
    {
        Console.WriteLine("Iteration setup");
    }

    [IterationCleanup]
    public void IterationCleanup()
    {
        Console.WriteLine("Iteration cleanup");
    }

    [Benchmark(Baseline = true)]
    public byte[] Md5() => md5.ComputeHash(data);

    [Benchmark]
    public byte[] Sha256() => sha256.ComputeHash(data);

    [Benchmark]
    public byte[] Sha512() => sha512.ComputeHash(data);
}
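
// Tip: decorating the class above with the [MemoryDiagnoser] attribute
// also reports GC collections and allocated bytes for every benchmark,
// which is often as interesting as the raw timings.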
</code></pre><p>Each method whose performance must be measured should be marked with the <i>Benchmark</i> attribute. For one of these attributes you can set the <i>Baseline</i> property to <i>true</i>. In this case, the corresponding method will be used as a baseline, and the performance of the other methods will be compared with its performance.</p><p>Methods marked with the <i>GlobalSetup</i> and <i>GlobalCleanup</i> attributes are used for initialization and cleanup for each benchmark method. They'll be executed once for all iterations of this method. Methods marked with the <i>IterationSetup</i> and <i>IterationCleanup</i> attributes will be executed before and after each iteration.</p><p>By default, BenchmarkDotNet calculates several standard statistics, but you can specify additional statistics to calculate. This is done with attributes like <i>MinColumn</i> and <i>MaxColumn</i>.</p><p>You can also execute your tests for different target frameworks. Use attributes like <i>SimpleJob(RuntimeMoniker.NetCoreApp31)</i> and <i>SimpleJob(RuntimeMoniker.Net50)</i> to do it. But remember that you have to have the SDKs for these frameworks installed on the machines where you want to run your tests. BenchmarkDotNet will compile the code for these frameworks.</p><p>BenchmarkDotNet also allows you to parameterize your tests. You can do it with the <i>Params</i> attribute, which specifies the values that should be assigned to the member it marks. There are <a href="https://benchmarkdotnet.org/articles/features/parameterization.html" rel="nofollow" target="_blank">other attributes</a> that allow you to take these values from the results of method executions.</p><p>As a result, BenchmarkDotNet generates several report files (CSV, MD, HTML). You can customize which reports you want to see using attributes like <i>MarkdownExporterAttribute.GitHub</i>.</p><p>In order to execute your tests, use the following code:</p><pre><code lang="cs">class Program
{
    static void Main()
    {
        _ = BenchmarkRunner.Run<HashAlgorithmsComparison>();
    }
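
    // BenchmarkSwitcher is an alternative entry point that lets you pick
    // a benchmark class from the command line when the assembly contains
    // several of them:
    //   static void Main(string[] args) =>
    //       BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);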
}</code></pre><p>That's all. For additional information, please refer to <a href="https://benchmarkdotnet.org/articles/overview.html" rel="nofollow" target="_blank">the documentation</a>.</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-48724137267328948512021-11-12T17:48:00.001+03:002021-11-12T18:00:44.910+03:00Why do we need MediatR?<p>Recently I came across using the <a href="https://github.com/jbogard/MediatR/wiki" rel="nofollow" target="_blank">MediatR</a> package in our source code. This interested me. Why should I use MediatR? What advantages can it give me? Here we'll discuss these topics.</p><span><a name='more'></a></span><h2 style="text-align: left;">How to use MediatR</h2><p>Basic usage of MediatR is very simple. First, you install the <a href="https://www.nuget.org/packages/mediatr" rel="nofollow" target="_blank">MediatR</a> NuGet package. In your application you'll have different descriptions of work to be done (e.g. create a ToDo item, change a user name, etc.). These descriptions are called <i>requests</i> in MediatR. They are simple classes implementing the <i>IRequest<T></i> interface. This is a marker interface without any members.</p><pre><code lang="cs">class CreateToDoItem : IRequest<int>
{
    public string ToDoItemText { get; set; }
}</code></pre><p>These classes do not contain any logic; they are just data containers for your operations.</p><p>But what is the type parameter <i>T</i> in the <i>IRequest<T></i> interface? You see, your operations may return some results. For example, if you are creating a new ToDo item, you may need to get the ID of that item. This is exactly what the <i>T</i> is for. In our case, we want to get the integer ID of the new ToDo item.</p><p>Now we need some code that will perform this operation. MediatR calls this code a <i>request handler</i>. Request handlers must implement the <i>IRequestHandler<TRequest, TResponse></i> interface, where <i>TRequest</i> must be <i>IRequest<TResponse></i>:</p><pre><code lang="cs">class CreateToDoItemHandler : IRequestHandler<CreateToDoItem, int>
{
    public Task<int> Handle(CreateToDoItem request, CancellationToken cancellationToken)
    {
        ...
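        // A hypothetical body (the post elides it): persist the item through
        // some injected data-access service and return the new item's ID, e.g.
        //   var id = await _repository.CreateAsync(request.ToDoItemText, cancellationToken);
        //   return id;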
    }
}</code></pre><p>As you can see, this interface requires the implementation of a single <i>Handle</i> method that asynchronously performs the requested operation and returns the desired result.</p><p>The only thing left to do is to connect the request and the corresponding handler. MediatR does this using a dependency container. If you develop an ASP.NET Core application, you can use MediatR's <a href="https://www.nuget.org/packages/MediatR.Extensions.Microsoft.DependencyInjection" rel="nofollow" target="_blank">MediatR.Extensions.Microsoft.DependencyInjection</a> package. But MediatR supports <a href="https://github.com/jbogard/MediatR/wiki/Container-Feature-Support" rel="nofollow" target="_blank">many different containers</a>.</p><pre><code lang="cs">services.AddMediatR(typeof(Startup));</code></pre><p>Here <i>services</i> is an instance of the <i>IServiceCollection</i> interface, which is usually accessible in the <i>ConfigureServices</i> method of the <i>Startup</i> class. This command will scan the assembly where the <i>Startup</i> class lives and find all request handlers.</p><p>Now you can execute your requests. You just need to get a reference to an <i>IMediator</i> instance. It is registered in your container by the same <i>AddMediatR</i> method.</p><pre><code lang="cs">var toDoItemId = await mediator.Send(createToDoItemRequest);</code></pre><p>That's all. MediatR will find the appropriate request handler, execute it, and return the result to you.</p><p>And now we come to the main question.</p><h2 style="text-align: left;">Why do we need MediatR?</h2><p>Let's say we have an ASP.NET Core controller which supports operations with ToDo items. We'll compare how we can implement ToDo item creation using MediatR and without it. Here is the code without MediatR:</p><pre><code lang="cs">[ApiController]
public class ToDoController : ControllerBase
{
    private readonly IToDoService _service;

    public ToDoController(IToDoService service)
    {
        _service = service;
    }

    [HttpPost]
    public async Task<IActionResult> CreateToDoItem([FromBody] CreateToDoItem createToDoItemRequest)
    {
        var toDoItemId = await _service.CreateToDoItem(createToDoItemRequest);
        return Ok(toDoItemId);
    }
}</code></pre><p>And now the same implementation with MediatR:</p><pre><code lang="cs">[ApiController]
public class ToDoController : ControllerBase
{
    private readonly IMediator _mediator;

    public ToDoController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> CreateToDoItem([FromBody] CreateToDoItem createToDoItemRequest)
    {
        var toDoItemId = await _mediator.Send(createToDoItemRequest);
        return Ok(toDoItemId);
    }
}
</code></pre><p>Do you see any serious advantages of MediatR here? I don't. In fact, I think the implementation with MediatR is a little less readable. It uses the generic <i>Send</i> method instead of the meaningful <i>CreateToDoItem</i>.</p><p>So why should I use MediatR?</p><h3 style="text-align: left;">References</h3><p>First of all, MediatR separates request handlers from requests. In our controller code, we do not reference the <i>CreateToDoItemHandler</i> class. This means that we can move this class to any place inside the same assembly without needing to modify the code of our controller.</p><p>But personally, I don't see this as a big advantage. Yes, it will be easier for you to make some changes to your project. But at the same time, we will face some difficulties here. From the code of our controller, we don't actually see who is processing our request. To find a handler for an instance of <i>CreateToDoItem</i>, we need to know what MediatR is and how it works. There is nothing particularly complicated here. After all, <i>IToDoService</i> is also not a handler implementation; we would still have to look for classes implementing this interface. But it will still take more time for new developers to figure out what's going on.</p><h3 style="text-align: left;">Single responsibility</h3><p>The next difference is more important. You see, the request handler is a class. And this whole class is responsible for performing a single operation. In the case of a service (for example, <i>IToDoService</i>), one method is responsible for performing one operation. This means that the service can contain many different methods, possibly related to different operations. This makes it difficult to understand the service code. On the other hand, the entire request handler class is responsible for a single operation. This makes this class smaller and easier to understand.</p><p>It all looks nice, but the reality is slightly messier. Usually you have to support a lot of related operations (e.g. create ToDo item, update ToDo item, change status of ToDo item, ...) All these operations may require the same pieces of code. In the case of a service, we can use private methods to do the common job. But request handlers are separate classes. Of course, we can use inheritance and extract everything we need into a base class. But this brings us to the same situation, if not worse. In the case of the service, we had many methods in one class. Now we have many methods distributed across multiple classes. I'm not sure which is better.</p><p>In other words, if you want to shoot yourself in the foot, you still have plenty of options.</p><h3 style="text-align: left;">Decorators</h3><p>But there is one more serious advantage of MediatR. You see, all your request handlers implement the same interface <i>IRequestHandler</i>. It means that you can write decorators applicable to all of them. In ASP.NET Core you can use the <a href="https://www.nuget.org/packages/Scrutor/" rel="nofollow" target="_blank">Scrutor</a> NuGet package for support of decorators. For example, you can write a logging decorator:</p><pre><code lang="cs">class LoggingDecorator<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _handler;
    private readonly Logger _logger;

    public LoggingDecorator(IRequestHandler<TRequest, TResponse> handler,
        Logger logger)
    {
        _handler = handler;
        _logger = logger;
    }

    public Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken)
    {
        _logger.Log("Log something here.");
        return _handler.Handle(request, cancellationToken);
    }
}
</code></pre><p>Now register it:</p><pre><code lang="cs">services.AddMediatR(typeof(Startup));
services.Decorate(typeof(IRequestHandler<,>), typeof(LoggingDecorator<,>));</code></pre><p>And that's all. Now you have applied logging to all your request handlers. You don't need to create a separate decorator for each of your services. All you need is to decorate a single interface.</p><p>But why bother with Scrutor? MediatR provides the same functionality with pipeline behaviors. Write a class implementing <i>IPipelineBehavior</i>:</p><pre><code lang="cs">class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly Logger _logger;

    public LoggingBehavior(Logger logger)
    {
        _logger = logger;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        try
        {
            _logger.Log($"Before execution for {typeof(TRequest).Name}");
            return await next();
        }
        finally
        {
            _logger.Log($"After execution for {typeof(TRequest).Name}");
        }
    }
}
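
// Note: behaviors are resolved from the DI container, so they can receive
// any registered dependency (like the Logger above) through the constructor.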
</code></pre><p>Register it:</p><pre><code lang="cs">services.AddMediatR(typeof(Startup));
services.AddScoped(typeof(IPipelineBehavior<,>), typeof(LoggingBehavior<,>));
</code></pre><p>And everything works the same way. You don't need decorators anymore. All registered pipeline behaviors will be executed with each request handler in the order they are registered.</p><p>The approach with behaviors is even better than the one with decorators. Consider the following example. You may want to execute some requests inside a transaction. In order to mark such requests you use the <i>ITransactional</i> marker interface:</p><pre><code lang="cs">interface ITransactional { }
class CreateToDoItem : IRequest<int>, ITransactional
...
</code></pre><p>How can you apply your behavior only to requests marked with the <i>ITransactional</i> interface? You can use generic class constraints:</p><pre><code lang="cs">class TransactionalBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : ITransactional
...
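// A minimal sketch of the elided body (my assumption, not from the post),
// using System.Transactions.TransactionScope:
//
//   public async Task<TResponse> Handle(TRequest request,
//       CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
//   {
//       using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);
//       var response = await next();
//       scope.Complete();
//       return response;
//   }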
</code></pre><p>But you can't do the same with Scrutor decorators. If you implement a decorator like this:</p><pre><code lang="cs">class TransactionalDecorator<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>, ITransactional
...
</code></pre><p>you will not be able to use it if you have any request that does not implement <i>ITransactional</i>.</p><p>When implementing pipeline behaviors, remember that they are executed on every call of the <i>Send</i> method. This may be important if you are sending requests from inside handlers:</p><pre><code lang="cs">class CommandHandler : IRequestHandler<Command, string>
{
    private readonly IMediator _mediator;

    public CommandHandler(IMediator mediator)
    {
        _mediator = mediator;
    }

    public async Task<string> Handle(Command request, CancellationToken cancellationToken)
    {
        ...
        var result = await _mediator.Send(new AnotherCommand(), cancellationToken);
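        // Note: any matching pipeline behaviors run again for this inner Send
        // call, as discussed right after this example.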
        ...
    }
}
</code></pre><p>If you marked both <i>Command</i> and <i>AnotherCommand</i> with the <i>ITransactional</i> interface, the corresponding <i>TransactionalBehavior</i> will be executed twice. So make sure that you don't create two separate transactions.</p><h2 style="text-align: left;">Other functionality</h2><p>MediatR provides you with other functionality as well. It supports a notification mechanism. It may be very useful if you use domain events in your architecture. All classes of your events must implement the <i>INotification</i> marker interface. And you can create any number of handlers for this event type with the <i>INotificationHandler</i> interface. The difference between requests and notifications is as follows. A request will be passed to only one single handler. A notification will be passed to all registered handlers for this notification type. Also, for a request, you can get the result of its processing. Notifications do not allow you to get any results. Use the <i>Publish</i> method to <a href="https://github.com/jbogard/MediatR/wiki#notifications" rel="nofollow" target="_blank">send notifications</a>.</p><p>MediatR also provides an exception handling mechanism. It is rather sophisticated and you can read about it <a href="https://github.com/jbogard/MediatR/wiki#exceptions-handling" rel="nofollow" target="_blank">here</a>.</p><h2 style="text-align: left;">Conclusion</h2><p>In conclusion, I have to say that MediatR is an interesting NuGet package. The ability to express all operations using a single interface, together with the behavior mechanism, makes it attractive for use in my projects. I can't say it's a silver bullet, but it has certain advantages. Good luck using it.</p><p><br /></p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-59614987935173497802021-10-01T16:34:00.000+03:002021-10-01T16:34:46.126+03:00How to use certificates in ASP.NET CoreRecently, the use of the HTTPS protocol for your Web resources has become a mandatory requirement for all relatively large Web projects. This technology is based on so-called <i>certificates</i>. Previously, you had to pay to get a certificate for your Web server. But now we have services like <a href="https://letsencrypt.org/" rel="nofollow" target="_blank">Let's Encrypt</a> where you can get your certificate for free. This is why price is no longer a reason not to use HTTPS.<br /><br />In the simplest case, a certificate allows you to establish a protected connection between client and server. But this is not all it is capable of. For example, I saw an online course on <a href="https://www.pluralsight.com/" rel="nofollow" target="_blank">Pluralsight</a> called <a href="https://www.pluralsight.com/courses/microservices-security-fundamentals" rel="nofollow" target="_blank">Microservices Security</a>. And there was one thing mentioned there, which is called <i>Mutual Transport Layer Security</i>. It not only allows the client to make sure that it is interacting with the correct server, but also allows the server to authenticate the client.<br /><br />This is why developers must know how to work with certificates. And it is for this reason that I decided to write this article. I want it to be a place where one can find basic knowledge about certificates.
I don't think that experts can find something interesting here, but I hope that it will be useful for beginners and those who want to refresh their knowledge.<span><a name='more'></a></span><div><br />This article will contain the following sections:<br /><br /><ul style="text-align: left;"><li><a href="#why">What is a <i>certificate</i> and why do we need them?</a></li><li><a href="#create">How to create a <i>self-signed</i> certificate for testing on your computer?</a></li><li><a href="#usage">How to use certificates with ASP.NET Core on the server side and on the client side?</a></li></ul><div style="text-align: left;"><br /></div><h3 style="text-align: left;">Why do we need certificates?</h3><a name="why"></a><div><br /></div>Before we start working with certificates, we need to understand why we need them. Let's look at a couple of people. Traditionally, we call them Alice and Bob. They need to communicate with each other. But the only way to do this is to exchange messages over a public communication channel:<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhr9DNHEoU9EYSXklNpfBgl6isxFG7O6a2FnbOep20mMY1mzuD78FykLd274QpPUTwHoqq_d3T7uB9mhcfKonAO89Hk8uCS_7nPZ_uTSlTyeU1Hk244Hc4WekyggR2IU0-omjqM3D2E0nTO/s742/Bob+and+Alice.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="192" data-original-width="742" height="83" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhr9DNHEoU9EYSXklNpfBgl6isxFG7O6a2FnbOep20mMY1mzuD78FykLd274QpPUTwHoqq_d3T7uB9mhcfKonAO89Hk8uCS_7nPZ_uTSlTyeU1Hk244Hc4WekyggR2IU0-omjqM3D2E0nTO/s320/Bob+and+Alice.png" width="320" /></a></div><br /><i><span style="font-size: x-small;">All icons were created by Vitaly Gorbachev at <a href="http://www.flaticon.com" rel="nofollow" target="_blank">Flaticon</a></span></i><br /><br />Unfortunately, since the channel is public, anyone can read and even change the messages that Alice and Bob send to each other:<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitvcWGjlicMapeqtFizaqAotEfklLUJHJo_enAUtl3CtdEnsR9xH1OqN5S0QLzmEpuOYnH8_eKZhLv4a2nwjcdGPSJdldjR2Tp5IMZDGZh2_a2Xowur_pykkgNWG8G1lXpF87KC33KJAF1/s738/Man+in+the+Middle.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="175" data-original-width="738" height="76" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitvcWGjlicMapeqtFizaqAotEfklLUJHJo_enAUtl3CtdEnsR9xH1OqN5S0QLzmEpuOYnH8_eKZhLv4a2nwjcdGPSJdldjR2Tp5IMZDGZh2_a2Xowur_pykkgNWG8G1lXpF87KC33KJAF1/s320/Man+in+the+Middle.png" width="320" /></a></div><br />This situation is called "Man in the Middle".<br /><br />How can Alice and Bob protect themselves from this danger? Encryption comes to the rescue. The most ancient and widespread encryption systems are <i>systems with a symmetric key</i>. In this case, Alice and Bob must have exactly the same keys (which is why they are called symmetric), which are not known to anyone else. Then, using any symmetric encryption system, they can exchange messages over a public communication channel without fear that a hacker will be able to read the messages or change them.<br /><br />But a hacker can still repeat one or more messages that he saw earlier. In some cases, this can pose a serious danger (imagine that a hacker can repeat a request to transfer money from one account to another). 
But this problem is effectively solved in all modern communication systems. (For example, you can add a sequence number to each message. If the number in the message on the receiving side is not equal to the expected number, such a message is discarded).</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSMSq2QK7nTtR9sRY4wy9Yf0fupsDHxaWPgfIypkW3BzJm0HiOpNGMgPjFgB3cySvSnVJw6sX58o3IxTn99_H45qlO70cPYqvSM2ECfOigAmZ683l_VKua9NLKWD5wqRWGoeQ4GzMcCb-b/s748/Symmetric+encryption.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="232" data-original-width="748" height="99" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSMSq2QK7nTtR9sRY4wy9Yf0fupsDHxaWPgfIypkW3BzJm0HiOpNGMgPjFgB3cySvSnVJw6sX58o3IxTn99_H45qlO70cPYqvSM2ECfOigAmZ683l_VKua9NLKWD5wqRWGoeQ4GzMcCb-b/s320/Symmetric+encryption.png" width="320" /></a></div><div><br /></div>But let's go back to our Alice and Bob. It looks like their problem has been solved. But this is not the case. The question is how can they get identical encryption keys so that no one else gets them. After all, they can only communicate via a public channel. Passing the key through this channel will also simply pass it to the hacker. In this case, he will be able to decrypt and change the messages of Alice and Bob.<br /><br />What should we do? This is where <i>asymmetric encryption</i> or <i>public key encryption</i> comes to the rescue. Its main idea is as follows. Let's say Alice wants to send a message to Bob. Now Bob generates not one, but two keys - <i>public</i> and <i>private</i>. The public key is not a secret. Bob can give it to anyone who wants to talk to him. But he keeps the private key secret and does not show it to anyone, even Alice. The trick is that if a message is encrypted with a public key, it can only be decrypted using the private key. Conversely, a message encrypted with a private key can only be decrypted using the public key.<br /><br />Now it is clear how Alice and Bob should act. Each of them generates its own public and private keys. Then they exchange their public keys over the communication channel. Since public keys are not a secret, they can be transmitted over public channels. But Alice and Bob keep their private keys secret. Let's say Bob wants to send his message to Alice. He encrypts it with her public key and sends an encrypted message over the channel. Only the person who has the private key can decrypt this message (this means that only Alice can do this). The hacker can't decrypt it.</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtMgxwSASySlubNtKZv9rZq5_Ed2NRQxiSUBy3YKwE_PEZR1qg09fOsYE_pzOix2LfoeCmxQ9ERsIC9N5ef6tOHmVybm7l_iwWMJRaKiqL1pw5u2ui0DwkJsumHeHT8zjRuabJqm4Zv1ik/s795/Asymmetric+encryption.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="206" data-original-width="795" height="83" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtMgxwSASySlubNtKZv9rZq5_Ed2NRQxiSUBy3YKwE_PEZR1qg09fOsYE_pzOix2LfoeCmxQ9ERsIC9N5ef6tOHmVybm7l_iwWMJRaKiqL1pw5u2ui0DwkJsumHeHT8zjRuabJqm4Zv1ik/s320/Asymmetric+encryption.png" width="320" /></a></div><div><br /></div>In fact, everything is a little more complicated. You see, public key encryption is much slower than symmetric encryption. 
Therefore, it is inconvenient to encrypt large amounts of data in this way. That's why when Bob wants to talk to Alice, he does the following. He generates a new key for a symmetric encryption system (usually called a <i>session key</i>). He then encrypts this session key with Alice's public key and sends it to her. Now Alice and Bob have a symmetric key that is not known to anyone else. From now on, they can use fast symmetric encryption algorithms.<br /><br />It looks like our problem has been solved. But this is not so simple. The hacker who controls the communication channel has something to tell us. The problem is again in the key distribution mechanism, but now these are public keys. Let's see what can happen.<br /><br />Suppose that Alice has generated a pair of public and private keys. Now she wants to give her public key to Bob. She sends this key over the communication channel. At this point, the hacker intercepts this key and does not allow Bob to get it. Instead, the hacker generates his own pair of public and private keys. He then sends his public key to Bob, saying that it is Alice's public key. The hacker keeps Alice's real public key for himself:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwYX6nzxTKBT5HTbWZKJEHB9SeudP3eyHscWYEiIgOWF0MeEaePUbPOuEtRWiVST7qfxt97Wz5wgO-YyOUmOt9VipOaBgNUjJWWcIEAMXL6f45dRK6HRvtmHzI8FnnBJ_mdWb4XaFXN7sr/s876/Public+key+distribution+attack.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="258" data-original-width="876" height="94" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwYX6nzxTKBT5HTbWZKJEHB9SeudP3eyHscWYEiIgOWF0MeEaePUbPOuEtRWiVST7qfxt97Wz5wgO-YyOUmOt9VipOaBgNUjJWWcIEAMXL6f45dRK6HRvtmHzI8FnnBJ_mdWb4XaFXN7sr/s320/Public+key+distribution+attack.png" width="320" /></a></div><div><br /></div>Yes, now we have many different keys. Let's see how it all works. Let's say Bob wants to send a message to Alice. He encrypts it with a public key, which, in his opinion, belongs to Alice. But in fact, this is the hacker's key. The hacker intercepts this message and does not allow Alice to receive it. Since the message was encrypted with the hacker's public key, he can decrypt it with his private key, read it and change it as he sees fit. After that, he encrypts it with Alice's real public key (remember that the hacker keeps her public key with him) and sends it to her. Alice decrypts it with her private key without any problems. So Alice receives Bob's message and has no idea that it has been read and possibly modified by a hacker.<br /><br />What can we do to avoid such a situation? And here we come close to certificates. Imagine that Alice distributes through a public channel not just her public key, but a key with a label where it is written that the key belongs to Alice. 
This label also contains the signature of some respected person whom Alice and Bob trust:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibWUbGUZMyO5aYaaS6fQ2n6Gc5Xy_0LdhyiyNDPOKhHFTFQ_Q17z0rprf35lZPIxNDb8rzJAjXu3WdC_64N4EY7mcssp2bRl-9w6mUBW4DuAnz1SLS2-ZhyphenhyphenD6ZZUWeMDPI5KGEWqfVYMDz/s830/Signed+key.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="239" data-original-width="830" height="92" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibWUbGUZMyO5aYaaS6fQ2n6Gc5Xy_0LdhyiyNDPOKhHFTFQ_Q17z0rprf35lZPIxNDb8rzJAjXu3WdC_64N4EY7mcssp2bRl-9w6mUBW4DuAnz1SLS2-ZhyphenhyphenD6ZZUWeMDPI5KGEWqfVYMDz/s320/Signed+key.png" width="320" /></a></div><div><br /></div>It is assumed that the key and the label are one. The label cannot be removed from one key and placed on another. In this case, if the hacker cannot forge the signature, he also cannot forge the key. If Bob receives a key with a label where it says that this is Alice's key and where there is a signature of a trusted person, he can be sure that this is Alice's key, and not someone else's.<br /><br />You can assume that the certificate is a key with such a label. But how does it work in the digital world?<br /><br />In the digital world, everything can be represented as a sequence of bits (zeros and ones). The same applies to keys. What should we do to create a <i>digital signature</i> for such a sequence of bits? This signature must have the following properties:<br /><br /><ul style="text-align: left;"><li>It should be short. Imagine that you want to create a digital signature for a movie file. Such a file can take up tens of gigabytes on the disk. If our signature is of the same size, it will be difficult to transfer it along with the file.</li><li>It should be impossible (or very difficult in practice) to fake it. Otherwise, the hacker could still force Bob to accept his own key instead of Alice's key.</li></ul><br />How do we create such a signature? We can do this as follows. First, we will calculate the so-called <i>hash</i> for our sequence of bits. You send your sequence of bits to the input of some function (it is called a <i>hash function</i>), and this function returns you another sequence of bits, but already very short. This output sequence is called a hash. All modern hash functions have the following properties:<br /><br /><ul style="text-align: left;"><li>For an input sequence of any length, they generate a hash of the same length. Usually this length does not exceed several tens of bytes. Remember that our signature must be short. This property of the hash makes it convenient to use in the signature.</li><li>If you only know the hash, you will not be able to get the input sequence for which this hash was created. This means that you cannot recover the input sequence from the hash.</li><li>If you have a hash for some sequence of bits, you cannot specify another sequence of bits with the same hash. Indeed, there are a lot of different files with a length of 1 GB. But for any of them, you can calculate a hash of, say, 32 bytes. There are far fewer different sequences of 32 bytes in length than there are different files of 1 GB in length. This means that there must be two different files with a length of 1 GB with the same hash. 
And yet, if you know one of these files and its hash, you will not be able to specify another file that gives the same hash.</li></ul><br />But enough about hashes. Unfortunately, the hash itself is not suitable for the role of a signature. Yes, it is short. But anyone can calculate it. A hacker can calculate a hash for his public key, nothing prevents him from doing this. How can we make the hash resistant to forgery? And here again, public-key encryption comes to the rescue.<br /><br />Remember, I said that Alice and Bob should trust the signature on the key label. Let's say Alice and Bob trust the signature of <i>Very Important Person</i>. How can Very Important Person sign a key? To do this, he generates his own pair of public and private keys. He passes his public key to Alice and Bob, and keeps the private key secret. When he needs to sign Alice's public key, he does it as follows. First, he calculates the hash of Alice's key, and then encrypts it with his private key. A hash encrypted with the private key of Very Important Person (it is usually called a <i>certificate authority</i>) is a signature. Since no one knows the private key of Very Important Person, no one can forge his signature.<br /><br />Now we understand how to create a signature. But we also need to know how we can verify it, how to make sure that the signature was not forged. Let's say Bob has some key. The label says that this is Alice's public key. In addition, there is a signature of Very Important Person. But how to check it? First of all, Bob calculates the hash of the received public key. Remember that everyone can do it. Bob then decrypts the signature using the public key of Very Important Person. As I said before, a signature is just an encrypted hash. After that, Bob compares two hashes: the one that he calculated, and the one that he received from the decrypted signature. If they are equal, then everything is fine, and Bob can be sure that this is Alice's key. But if the hashes are different, then the key cannot be trusted. Since the hacker can't create the correct signature, he can't force Bob to trust the wrong key.<br /><br />So, a certificate is just a key and a label for it. However, in practice, a lot of additional information is added to the certificate:<br /><br /><ul style="text-align: left;"><li>Who owns the key. In our case, this is Alice.</li><li>From what date and until what date the key is valid.</li><li>Who signed the key. In our case, this is Very Important Person. This information is necessary, because in reality, different certificate authorities can sign the key.</li><li>What algorithm is used to calculate the hash and create the signature.</li><li>... and any additional information.</li></ul><br />A hash and signature are created for all this data, so a hacker can't fake any of it.<br /><br />But there is still a gap in our strict scheme. I hope you have already understood what I mean. How do Alice and Bob get the public key of Very Important person? If a hacker can replace this key with his own key, our entire system will be destroyed.<br /><br />Well, of course, the public key of Very Important Person is distributed with a certificate, but now signed by Very-Very Important Person. Hmm... But how is the public key of Very-Very Important Person distributed? With a certificate, of course. Well, you know... <a href="https://en.wikipedia.org/wiki/Turtles_all_the_way_down" rel="nofollow" target="_blank">there are certificates all the way down</a>.<br /><br />But jokes aside. 
Indeed, Alice's certificate can be signed with the certificate of Very Important Person. And his certificate can be signed with the certificate of Very-Very Important Person. This is called a <i>chain of trust</i>. But this chain is not endless. It usually ends with a <i>root certificate</i>. This certificate is not signed by anyone; to be more precise, it is signed by itself (a <i>self-signed certificate</i>). Usually, root certificates belong to very reliable companies, whose job is to sign other certificates with their root certificates.<br /><br />Previously, companies took money for signing certificates. But now we have services like <a href="https://letsencrypt.org/" rel="nofollow" target="_blank">Let's Encrypt</a>, which do it for free. I think that many large companies have realized that it is better to provide certificates for free and make the Internet a more secure space than to have a lot of poorly protected sites, each of which can be used as a platform for attacks on these large companies. Something like this happened with antiviruses. Twenty years ago, we had to pay for them. Now a person can easily find a free high-quality antivirus for installation on a personal computer.<br /><br />But let's go back to our certificates. We still have one last question. Why do we trust root certificates? What prevents a hacker from replacing them? The reason is how they get to Alice and Bob's computers. You see, they are not delivered via the open communication channel, but are delivered together with the operating system. Recently, some browsers have started to be installed with their own set of trusted certificates.<br /><br />That's all. That's all I wanted to say about certificates. There are many interesting things connected with them, such as the mechanisms for expiration and revocation of certificates, but we will not talk about this here. Let's move on to practical things.<br /><br /><h3 style="text-align: left;">Creation of certificates</h3><a name="create"></a><div><br /></div>I hope I managed to convince you that certificates are an important and necessary thing. And you, as a developer, have decided that it's time for you to learn how to use them. If you create an ASP.NET Core project in Visual Studio, you can simply select the <i>Configure for HTTPS</i> checkbox, and all the necessary infrastructure will be prepared for you:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2PtseNH58Vk774jKwnlIHRWZmSRxkMlFJWY7FHb6fvxtk0j_hWzYmI3VAggGlKfnjzJ34H9NBJ-kO-gCufhJWlA5tiZXwtk0oOwHSGvMq5Fi5xa4kzPa2ytasfeNYEYYwMp3E9W7R1iJf/s955/Configure+for+HTTPS.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="328" data-original-width="955" height="138" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2PtseNH58Vk774jKwnlIHRWZmSRxkMlFJWY7FHb6fvxtk0j_hWzYmI3VAggGlKfnjzJ34H9NBJ-kO-gCufhJWlA5tiZXwtk0oOwHSGvMq5Fi5xa4kzPa2ytasfeNYEYYwMp3E9W7R1iJf/w400-h138/Configure+for+HTTPS.png" width="400" /></a></div><div><br /></div>But I want to show you how you can create your own certificate for testing your applications. First, I will create a self-signed certificate, a certificate that is signed by itself. Next, I will show you how you can install this certificate in your system so that the system starts trusting it.<br /><br />Let's get started. Everything we need is already in .NET Core.
Let's create a console application and use some useful namespaces:<br /><pre><code lang="cs">using System.Security;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;</code>
</pre>Now we need to create a pair of public and private keys. Securely distributing the public key is the job of the certificate:<pre><code lang="cs">// Generate private-public key pair
var rsaKey = RSA.Create(2048);</code></pre>Then we need to create a certificate request:<pre><code lang="cs">// Describe certificate
string subject = "CN=localhost";
// Create certificate request
var certificateRequest = new CertificateRequest(
    subject,
    rsaKey,
    HashAlgorithmName.SHA256,
    RSASignaturePadding.Pkcs1
);</code>
</pre>The certificate request contains information about who this certificate was issued for (the <i>subject</i> variable). If we want the certificate to be used by a web server available at <i>www.example.com</i>, then the variable <i>subject</i> should be equal to <i>CN=www.example.com</i>. In our case, we want to test our web server on <i>localhost</i>. This is why the value of the <i>subject</i> variable is equal to <i>CN=localhost</i>.<br /><br />Next, we pass our key pair to the certificate request and specify the algorithms that should be used to calculate the hash and signature.<div><br />Now we need to provide some additional information about which certificate we need. Let's indicate that we don't want to sign other certificates with this one:<pre><code lang="cs">certificateRequest.CertificateExtensions.Add(
    new X509BasicConstraintsExtension(
        certificateAuthority: false,
        hasPathLengthConstraint: false,
        pathLengthConstraint: 0,
        critical: true
    )
);</code>
</pre>Then there is something interesting. You see, a certificate is just a store of encryption keys. These keys can be used for various purposes. We have already seen that they can be used for digital signatures and session key encryption. But there are other uses as well. Now we must specify how our certificate can be used:<pre><code lang="cs">certificateRequest.CertificateExtensions.Add(
    new X509KeyUsageExtension(
        keyUsages:
            X509KeyUsageFlags.DigitalSignature
            | X509KeyUsageFlags.KeyEncipherment,
        critical: false
    )
);</code>
</pre></div>You can take a look at the <a href="https://docs.microsoft.com/en-us/dotnet/api/system.security.cryptography.x509certificates.x509keyusageflags?f1url=%3FappId%3DDev16IDEF1%26l%3DEN-US%26k%3Dk(System.Security.Cryptography.X509Certificates.X509KeyUsageFlags);k(DevLang-csharp)%26rd%3Dtrue&view=net-5.0" rel="nofollow" target="_blank">X509KeyUsageFlags</a> enumeration yourself, where the various areas of use of certificates are listed.<br /><br />Next we provide a public key for identification:<pre><code lang="cs">certificateRequest.CertificateExtensions.Add(
    new X509SubjectKeyIdentifierExtension(
        key: certificateRequest.PublicKey,
        critical: false
    )
);</code>
</pre>And here comes a little bit of black magic. As I have already told you, if you want to use the certificate to protect the <i>www.example.com</i> site, its <i>subject</i> field must contain <i>CN=www.example.com</i>. But this is not enough for Chrome browsers. They require that the <i>Subject Alternative Name</i> field contain <i>DNS Name=www.example.com</i>. In our case, it must contain <i>DNS Name=localhost</i>. Otherwise Chrome will not trust such a certificate. Unfortunately, I have not found a convenient way to set the value of the <i>Subject Alternative Name</i> field for our certificate. But the following piece of code sets it to <i>DNS Name=localhost</i>:<pre><code lang="cs">certificateRequest.CertificateExtensions.Add(
    new X509Extension(
        new AsnEncodedData(
            "Subject Alternative Name",
            new byte[] { 48, 11, 130, 9, 108, 111, 99, 97, 108, 104, 111, 115, 116 }
        ),
        false
    )
);</code>
</pre>That's it. Our certificate request is ready. Now we can create the certificate itself:</div><pre><code lang="cs">var expireAt = DateTimeOffset.Now.AddYears(5);
var certificate = certificateRequest.CreateSelfSigned(DateTimeOffset.Now, expireAt);</code>
</pre>Here we say that the certificate will be valid for five years from the current moment.<br /><br />Now we have a certificate. But it exists only in the computer's memory so far. To be able to install it in our system, we need to write it to a file in the PFX format. But there is one obstacle here. The file we want to get must contain both public and private keys, because the server must perform both encryption and decryption. But for security reasons, our certificate cannot be used to export the private key. We can create a certificate ready for export as follows:<br /><pre><code lang="cs">// Export certificate with private key
var exportableCertificate = new X509Certificate2(
    certificate.Export(X509ContentType.Cert),
    (string)null,
    X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet
).CopyWithPrivateKey(rsaKey);</code>
</pre>For convenience, we can add a description (this is the server certificate, so its friendly name should say so, matching the file name used below):<pre><code lang="cs">exportableCertificate.FriendlyName = "Ivan Yakimov Test-only Certificate For Server Authorization";</code></pre>Now we can export the certificate to a file. Since this file also contains a private key, it is reasonable to protect it with a password. In this case, even if the file is stolen, the criminal will not be able to use it:<pre><code lang="cs">// Create password for certificate protection
var passwordForCertificateProtection = new SecureString();
foreach (var @char in "p@ssw0rd")
{
    passwordForCertificateProtection.AppendChar(@char);
}
// Export certificate to a file.
File.WriteAllBytes(
    "certificateForServerAuthorization.pfx",
    exportableCertificate.Export(
        X509ContentType.Pfx,
        passwordForCertificateProtection
    )
);</code>
</pre>So, we have a certificate file that can be used to protect the Web server. But you can also create a certificate to authenticate clients of this server. The creation process is almost the same as for the server certificate, but the <i>subject</i> field can contain anything, and we no longer need the <i>Subject Alternative Name</i> field:<pre><code lang="cs">// Generate private-public key pair
var rsaKey = RSA.Create(2048);
// Describe certificate
string subject = "CN=Ivan Yakimov";
// Create certificate request
var certificateRequest = new CertificateRequest(
    subject,
    rsaKey,
    HashAlgorithmName.SHA256,
    RSASignaturePadding.Pkcs1
);
certificateRequest.CertificateExtensions.Add(
    new X509BasicConstraintsExtension(
        certificateAuthority: false,
        hasPathLengthConstraint: false,
        pathLengthConstraint: 0,
        critical: true
    )
);
certificateRequest.CertificateExtensions.Add(
    new X509KeyUsageExtension(
        keyUsages:
            X509KeyUsageFlags.DigitalSignature
            | X509KeyUsageFlags.KeyEncipherment,
        critical: false
    )
);
certificateRequest.CertificateExtensions.Add(
    new X509SubjectKeyIdentifierExtension(
        key: certificateRequest.PublicKey,
        critical: false
    )
);
var expireAt = DateTimeOffset.Now.AddYears(5);
var certificate = certificateRequest.CreateSelfSigned(DateTimeOffset.Now, expireAt);
// Export certificate with private key
var exportableCertificate = new X509Certificate2(
    certificate.Export(X509ContentType.Cert),
    (string)null,
    X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet
).CopyWithPrivateKey(rsaKey);
exportableCertificate.FriendlyName = "Ivan Yakimov Test-only Certificate For Client Authorization";
// Create password for certificate protection
var passwordForCertificateProtection = new SecureString();
foreach (var @char in "p@ssw0rd")
{
    passwordForCertificateProtection.AppendChar(@char);
}
// Export certificate to a file.
File.WriteAllBytes(
    "certificateForClientAuthorization.pfx",
    exportableCertificate.Export(
        X509ContentType.Pfx,
        passwordForCertificateProtection
    )
);</code>
</pre>Now we can install the certificate we created into the system. To do this in Windows, double-click on the PFX certificate file. The wizard window opens. Specify that you want to install the certificate only for the current user, and not for the entire machine:<div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQAksxSBcdkgsG3IyFJjABpc8iswFVwk4MBf8zhYKpX21ko2GMaC8WEb4QujMTVJHaSMz82t8_7gxC007kfYLyUN4pjOk67-FMcH1UXV3yozTxurflJfCl0RO5hADmrWDdsWQ5BoXl8OkP/s675/Install+for+current+user.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="454" data-original-width="675" height="269" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQAksxSBcdkgsG3IyFJjABpc8iswFVwk4MBf8zhYKpX21ko2GMaC8WEb4QujMTVJHaSMz82t8_7gxC007kfYLyUN4pjOk67-FMcH1UXV3yozTxurflJfCl0RO5hADmrWDdsWQ5BoXl8OkP/w400-h269/Install+for+current+user.png" width="400" /></a></div><div><br /></div>On the next screen, you can specify the path to the certificate file. Leave everything as it is:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIZFWW5quKsw2cPuc-uhRH_yWluYuumTUbowhO6BTkGLfRrIuZFAGto74AZvWVSOY6kVGAeGWAnf1vOV75IYbZaR6fwHeX7-1YdpW9EREp2gTE971u1DYC2YApC8J3wUqdd23X8YpidPer/s660/Choose+certificate+file.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="365" data-original-width="660" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIZFWW5quKsw2cPuc-uhRH_yWluYuumTUbowhO6BTkGLfRrIuZFAGto74AZvWVSOY6kVGAeGWAnf1vOV75IYbZaR6fwHeX7-1YdpW9EREp2gTE971u1DYC2YApC8J3wUqdd23X8YpidPer/w400-h221/Choose+certificate+file.png" width="400" /></a></div><div><br /></div>On the next screen, enter the password that you used to protect the certificate file:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdmFMRmpf_xxQdEifFLjBovwXywv-vBlEJ2Bmh5drRmADDodxL-fjABEeUktw0C_3QXYYVcisKj4E4H5QwvpTRymYqdCMH7uUY-7Hi33z5PjXRueiYrujjCOAUCp-LCvcIIiqSl1INIaSO/s662/Entering+password.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="536" data-original-width="662" height="324" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdmFMRmpf_xxQdEifFLjBovwXywv-vBlEJ2Bmh5drRmADDodxL-fjABEeUktw0C_3QXYYVcisKj4E4H5QwvpTRymYqdCMH7uUY-7Hi33z5PjXRueiYrujjCOAUCp-LCvcIIiqSl1INIaSO/w400-h324/Entering+password.png" width="400" /></a></div><div><br /></div>Then specify that you want to install your certificate in Trusted Root Certification Authorities:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9VjQWLnDHOO_JfKgd4577bARp-guptxM1O4gQDTP1_cVUXklwk0O9kwV3TE_9wvMzORsYJMuQyg7TMDzE3S3cZNhh3zXb9rzfwO5DxCiHuSOLJXYFUA7-Y6QJx42a9eF9v481h-xvwstB/s678/Certificate+store.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="329" data-original-width="678" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9VjQWLnDHOO_JfKgd4577bARp-guptxM1O4gQDTP1_cVUXklwk0O9kwV3TE_9wvMzORsYJMuQyg7TMDzE3S3cZNhh3zXb9rzfwO5DxCiHuSOLJXYFUA7-Y6QJx42a9eF9v481h-xvwstB/w400-h194/Certificate+store.png" width="400" /></a></div><div><br /></div>Remember how we discussed earlier certificate trust chains? 
The Trusted Root Certification Authorities store holds those final (root) certificates that the system trusts without additional checks.<br /><br />This completes the certificate import configuration. From here, just click "Next", "Finish", and "OK".<br /><br />Now our certificate is present in the Trusted Root Certification Authorities store. You can open this store by clicking the Manage User Certificates link in the Control Panel:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4rBp2k1hQ3Wnge9OJxU6qsQPMT60vDUpTlYxPoAKZ6ygCHiaZ3o-MpvgLKIDiHRig1fV2NcZDOjBdW03PwoAIoGargBwh4YZkietqQNsAOp3fSUprG8fN_jO4ko2WCKjcK3biB9xosVLN/s732/Manage+User+Certificates.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="532" data-original-width="732" height="291" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4rBp2k1hQ3Wnge9OJxU6qsQPMT60vDUpTlYxPoAKZ6ygCHiaZ3o-MpvgLKIDiHRig1fV2NcZDOjBdW03PwoAIoGargBwh4YZkietqQNsAOp3fSUprG8fN_jO4ko2WCKjcK3biB9xosVLN/w400-h291/Manage+User+Certificates.png" width="400" /></a></div><div><br /></div>Here is what our certificate looks like:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGjlwlRp25pEiKhvoKnkjennn2BJspBkplQtsn5ja1TpKljWDkUZ1_LkHuQ98Lv2I5T7RNzhxDhC4cMRb58hPDrcIrEwQEp73yUxdinA1FjC0QUr0ayGGX4f7LM9y8WkZ0ksbv1wS_1dQK/s1108/Certificate+in+Trusted+Storage.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="352" data-original-width="1108" height="203" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGjlwlRp25pEiKhvoKnkjennn2BJspBkplQtsn5ja1TpKljWDkUZ1_LkHuQ98Lv2I5T7RNzhxDhC4cMRb58hPDrcIrEwQEp73yUxdinA1FjC0QUr0ayGGX4f7LM9y8WkZ0ksbv1wS_1dQK/w640-h203/Certificate+in+Trusted+Storage.png" width="640" /></a></div><div><br /></div>The certificate for client authentication can be installed in the same way.<br /><br />Before moving on to using these certificates in .NET code, I want to show you another way to create self-signed certificates. If you don't want to write a certificate creation program but have PowerShell, you can create a certificate with it.<br /><br />Here is the code that generates a certificate to protect the server:<pre><code lang="ps">$certificate = New-SelfSignedCertificate `
-Subject localhost `
-DnsName localhost `
-KeyAlgorithm RSA `
-KeyLength 2048 `
-NotBefore (Get-Date) `
-NotAfter (Get-Date).AddYears(5) `
-FriendlyName "Ivan Yakimov Test-only Certificate For Server Authorization" `
-HashAlgorithm SHA256 `
-KeyUsage DigitalSignature, KeyEncipherment, DataEncipherment `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1")
$pfxPassword = ConvertTo-SecureString `
-String "p@ssw0rd" `
-Force `
-AsPlainText
Export-PfxCertificate `
-Cert $certificate `
-FilePath "certificateForServerAuthorization.pfx" `
-Password $pfxPassword</code>
</pre>The <i>New-SelfSignedCertificate</i> and <i>Export-PfxCertificate</i> commands come from the <a href="https://docs.microsoft.com/en-us/powershell/module/pki/?view=windowsserver2019-ps" rel="nofollow" target="_blank">pki</a> module. I hope that by now you can understand the meaning of the various parameters here.<br /><br />And here is the code for creating a certificate for client authentication:<pre><code lang="ps">$certificate = New-SelfSignedCertificate `
-Type Custom `
-Subject "Ivan Yakimov" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2") `
-FriendlyName "Ivan Yakimov Test-only Certificate For Client Authorization" `
-KeyUsage DigitalSignature `
-KeyAlgorithm RSA `
-KeyLength 2048
$pfxPassword = ConvertTo-SecureString `
-String "p@ssw0rd" `
-Force `
-AsPlainText
Export-PfxCertificate `
-Cert $certificate `
-FilePath "certificateForClientAuthorization.pfx" `
-Password $pfxPassword</code>
</pre>Now let's see how we can use these certificates.</div><div><br /><div><h3 style="text-align: left;">How to use certificates in .NET code</h3><a name="usage"></a><div><br /></div>So, we have a web server written in ASP.NET Core, and we want to protect it with our certificate. First, we need to obtain this certificate in our server code. There are two ways to do this.<br /><br />The first option is to get the certificate from a PFX file. You can use this option if you have a certificate file that you have installed in the trusted certificate store. In this case, you can get the certificate as follows:</div><pre><code lang="cs">var certificate = new X509Certificate2(
"certificateForServerAuthorization.pfx",
"p@ssw0rd"
);</code>
</pre>Here <i>certificateForServerAuthorization.pfx</i> is the path to the certificate file, and <i>p@ssw0rd</i> is the password that you used to protect it.<br /><br />But you may not always have access to the certificate file. In this case, you can take the certificate directly from the certificate store:<pre><code lang="cs">var store = new X509Store(StoreName.Root, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
var certificate = store.Certificates.OfType<X509Certificate2>()
.First(c => c.FriendlyName == "Ivan Yakimov Test-only Certificate For Server Authorization");</code>
</pre>The value <i>StoreLocation.CurrentUser</i> means that we want to work with the certificate store of the current user, and not of the entire computer. The value <i>StoreName.Root</i> means that we must look for the certificate in the Trusted Root Certification Authorities store. Here, for simplicity, I'm looking for the certificate by name, but you can specify any suitable criterion.<br /><br />Now we have a certificate. Let's make our server use it. To do this, we need to change the code of the <i>Program.cs</i> file:<pre><code lang="cs">public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args)
{
var store = new X509Store(StoreName.Root, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
var certificate = store.Certificates.OfType<X509Certificate2>()
.First(c => c.FriendlyName == "Ivan Yakimov Test-only Certificate For Server Authorization");
return Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder
.UseKestrel(options =>
{
options.Listen(System.Net.IPAddress.Loopback, 44321, listenOptions =>
{
var connectionOptions = new HttpsConnectionAdapterOptions();
connectionOptions.ServerCertificate = certificate;
listenOptions.UseHttps(connectionOptions);
});
})
.UseStartup<Startup>();
});
}
}</code>
</pre>As you can see, all the magic happens inside the <i>UseKestrel</i> method. Here we specify which port we want to use and which certificate we want to apply.<br /><br />Now the browser considers our site protected:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXuwHAtnwD-hytmKiNmMWkn8hQA8oMZF7oySsrBkbjPVzv3JuXh7F4Avk7OXgI2tANQzNh_rKVsxkvpjWCcaDJL9fPBst_i3V5Ntjk2fFvLLwKc5i7wfbibvwzt_DA-ptzFYDMQ6W1UzAW/s603/Secure+connection.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="521" data-original-width="603" height="345" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXuwHAtnwD-hytmKiNmMWkn8hQA8oMZF7oySsrBkbjPVzv3JuXh7F4Avk7OXgI2tANQzNh_rKVsxkvpjWCcaDJL9fPBst_i3V5Ntjk2fFvLLwKc5i7wfbibvwzt_DA-ptzFYDMQ6W1UzAW/w400-h345/Secure+connection.png" width="400" /></a></div><div><br /></div>But we don't always work with a web server through a browser. Sometimes we need to contact it from code. Then <i>HttpClient</i> comes to the rescue:<br /><pre><code lang="cs">var client = new HttpClient()
{
BaseAddress = new Uri("https://localhost:44321")
};
var result = await client.GetAsync("data");
var content = await result.Content.ReadAsStringAsync();
Console.WriteLine(content);</code>
</pre>In fact, the standard <i>HttpClient</i> verifies the server certificate and will not establish a connection if it cannot verify its authenticity. But what if we want to do some additional checks? For example, you may want to check who signed the server certificate, or check some non-standard field of this certificate. This can be done. We just need to define a method that will be called after the system performs the standard certificate verification:<pre><code lang="cs">var handler = new HttpClientHandler()
{
ServerCertificateCustomValidationCallback = (request, certificate, chain, errors) => {
if (errors != SslPolicyErrors.None) return false;
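// Additional checks could go here. A hypothetical example (not from the
// original article): only trust certificates issued to "localhost".
// if (certificate.GetNameInfo(X509NameType.SimpleName, false) != "localhost") return false;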
return true;
}
};
var client = new HttpClient(handler)
{
BaseAddress = new Uri("https://localhost:44321")
};</code>
</pre>You assign this method to the <i>ServerCertificateCustomValidationCallback</i> property of the <i>HttpClientHandler</i> instance. The instance must be passed to the <i>HttpClient</i>'s constructor.<br /><br />Let's take a closer look at this verification method. As I said before, it is called after, not instead of, the standard check. The results of the standard check can be obtained from the last parameter of this method (<i>errors</i>). If this value is not equal to <i>SslPolicyErrors.None</i>, the standard verification failed, and you can't trust such a certificate. This method also allows you to get information about:<br /><br /><ul style="text-align: left;"><li>The request (<i>request</i>).</li><li>The server certificate (<i>certificate</i>).</li><li>The chain of trust for this certificate (<i>chain</i>). Here you can find the detailed reason why the standard check failed, if you are interested in this information.</li></ul><br />So, now we know how to protect our server with a certificate. But a certificate can also be used to authenticate the client. In this case, the server will only serve requests from those clients that provide the "correct" certificate. A certificate is considered correct if it passes the standard verification and also meets any additional conditions requested by the server.<br /><br />Let's see how to make the server require a certificate from the client. To do this, you only need a small code change:<pre><code lang="cs">return Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder
.UseKestrel(options =>
{
options.Listen(System.Net.IPAddress.Loopback, 44321, listenOptions =>
{
var connectionOptions = new HttpsConnectionAdapterOptions();
connectionOptions.ServerCertificate = certificate;
connectionOptions.ClientCertificateMode = ClientCertificateMode.RequireCertificate;
connectionOptions.ClientCertificateValidation = (clientCertificate, chain, errors) =>
{
if (errors != SslPolicyErrors.None) return false;
// Here is your code...
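// A hypothetical example: accept only our test client certificate.
// if (clientCertificate.Subject != "CN=Ivan Yakimov") return false;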
return true;
};
listenOptions.UseHttps(connectionOptions);
});
})
.UseStartup<Startup>();
});</code>
</pre>As you can see, we have set only two additional properties of the <i>HttpsConnectionAdapterOptions</i> object. Using the <i>ClientCertificateMode</i> property, we specify that the client certificate is mandatory, and using the <i>ClientCertificateValidation</i> property, we set our custom function for additional certificate verification.<br /><br />If you open such a site in a browser, it will ask you which client certificate you want to use:</div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8dQ42mKAYsOY9vzxYz4B3TC3xGRVGqUK9GxJD7jsgU0dkvpp0tCArFSPaCuRNTULS4NQTDTGgEFWbfFigwFYaJYFBP83bH9whMIOs7BoTTKzwvunK79UG_HUn6kp0nb05iillA-iRaY4R/s867/Specify+client+certificate.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="489" data-original-width="867" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8dQ42mKAYsOY9vzxYz4B3TC3xGRVGqUK9GxJD7jsgU0dkvpp0tCArFSPaCuRNTULS4NQTDTGgEFWbfFigwFYaJYFBP83bH9whMIOs7BoTTKzwvunK79UG_HUn6kp0nb05iillA-iRaY4R/w400-h225/Specify+client+certificate.png" width="400" /></a></div><div><br /></div>The only thing left to do is to provide a client certificate to <i>HttpClient</i>. You can get the certificate just like you did for the server. Other changes are minimal:<pre><code lang="cs">var handler = new HttpClientHandler()
{
ServerCertificateCustomValidationCallback = (request, serverCertificate, chain, errors) => {
if (errors != SslPolicyErrors.None) return false;
// Here is your code...
return true;
}
};
handler.ClientCertificates.Add(certificate);
var client = new HttpClient(handler)
{
BaseAddress = new Uri("https://localhost:44321")
};</code>
</pre>You just add the certificate to the <i>ClientCertificates</i> collection of the <i>HttpClientHandler</i> object.<br /><br /><h3 style="text-align: left;">Conclusion</h3><div><br /></div>So our article has come to an end. It was quite long. I conceived it as a single place where I will be able to refresh my knowledge about certificates and their use in the future. I hope that it will be useful for you as well.<br /><br /><h3 style="text-align: left;">Appendix</h3><div><br /></div>In my work, I used the following materials:<br /><br /><ul style="text-align: left;"><li><a href="https://www.humankode.com/asp-net-core/develop-locally-with-https-self-signed-certificates-and-asp-net-core" rel="nofollow" target="_blank">Develop Locally with HTTPS, Self-Signed Certificates and ASP.NET Core</a></li><li><a href="https://habr.com/ru/post/497160/" rel="nofollow" target="_blank">X.509 on your own in .Net Core (in Russian)</a></li><li>All icons were created by Vitaly Gorbachev at <a href="http://www.flaticon.com" rel="nofollow" target="_blank">Flaticon</a></li></ul><br />The source code for this article can be found on <a href="https://github.com/yakimovim/play-with-ssl" rel="nofollow" target="_blank">GitHub</a>.</div>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-88419115775550768352021-10-01T12:20:00.000+03:002021-10-01T12:20:26.623+03:00How to use dependency injection in any .NET application<p>If you work with ASP.NET applications, you know that ASP.NET uses a dependency injection mechanism. It is very convenient in many cases. But you may want to use the same mechanism in other types of applications: console, desktop, ... Here I'll show you how you can do it.</p><span><a name='more'></a></span><p>First of all, you need to install the <i>Microsoft.Extensions.DependencyInjection</i> NuGet package. Then you must create an instance of the <i>ServiceCollection</i> class.</p><pre><code lang="cs">var configurator = new ServiceCollection();</code></pre><p>Now you can configure your dependencies.</p><pre><code lang="cs">configurator.AddScoped<Worker>();
configurator.AddScoped<Logger>();
</code></pre><p>When you have finished the configuration, you should create a service provider.</p><pre><code lang="cs">ServiceProvider services = configurator.BuildServiceProvider();</code></pre><p>Then you can request instances of your services.</p><pre><code lang="cs">var worker = services.GetService<Worker>();
worker.Do();
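// For GetService<Worker>() to resolve, Worker can receive its dependencies
// through constructor injection (hypothetical classes, not shown in this post):
// public class Worker
// {
//     private readonly Logger _logger;
//     public Worker(Logger logger) => _logger = logger;
//     public void Do() { /* use _logger here */ }
// }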
</code></pre><p>That's it. Nice and easy. Happy coding!</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-25737781981588745082021-04-15T17:46:00.001+03:002021-10-01T12:07:09.394+03:00Description of the enumeration members in Swashbuckle<p><a href="https://swagger.io/" target="_blank">Swagger</a> is a great thing! It allows us to easily see the API of our service, generate a client for it in different languages, and even work with the service through the UI. In ASP.NET Core, we have the <a href="https://github.com/domaindrivendev/Swashbuckle.AspNetCore" target="_blank">Swashbuckle.AspNetCore</a> NuGet package for Swagger support.</p><p>But there is one thing I don't like about this implementation. Swashbuckle can show me descriptions of methods, parameters, and classes based on XML comments in the .NET code. But it does not show the descriptions of the enum members.</p><p>Let me show you what I mean.</p><span><a name='more'></a></span><p><br /></p><h2 style="text-align: left;">Service creation</h2><p>I created a simple Web service:</p><pre><code lang="cs">/// <summary>
/// Contains endpoints that use different enums.
/// </summary>
[Route("api/data")]
[ApiController]
public class EnumsController : ControllerBase
{
/// <summary>
/// Executes operation of requested type and returns result status.
/// </summary>
/// <param name="id">Operation id.</param>
/// <param name="type">Operation type.</param>
/// <returns>Result status.</returns>
[HttpGet]
public Task<Result> ExecuteOperation(int id, OperationType type)
{
return Task.FromResult(Result.Success);
}
/// <summary>
/// Changes data
/// </summary>
[HttpPost]
public Task<IActionResult> Change(DataChange change)
{
return Task.FromResult<IActionResult>(Ok());
}
}</code></pre><p>This controller makes extensive use of enums. It uses them as argument types, as method results, and as parts of more complex objects:</p><pre><code lang="cs">/// <summary>
/// Operation types.
/// </summary>
public enum OperationType
{
/// <summary>
/// Do operation.
/// </summary>
Do,
/// <summary>
/// Undo operation.
/// </summary>
Undo
}
/// <summary>
/// Operation results.
/// </summary>
public enum Result
{
/// <summary>
/// Operations was completed successfully.
/// </summary>
Success,
/// <summary>
/// Operation failed.
/// </summary>
Failure
}
/// <summary>
/// Data change information.
/// </summary>
public class DataChange
{
/// <summary>
/// Data id.
/// </summary>
public int Id { get; set; }
/// <summary>
/// Source type.
/// </summary>
public Sources Source { get; set; }
/// <summary>
/// Operation type.
/// </summary>
public OperationType Operation { get; set; }
}
/// <summary>
/// Types of sources.
/// </summary>
public enum Sources
{
/// <summary>
/// In-memory data source.
/// </summary>
Memory,
/// <summary>
/// Database data source.
/// </summary>
Database
}</code></pre><p>I installed the <i>Swashbuckle.AspNetCore</i> NuGet package to support Swagger. Now I must configure it. This can be done in the <i>Startup</i> file:</p><pre><code lang="cs">public class Startup
{
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddSwaggerGen(c => {
// Set the comments path for the Swagger JSON and UI.
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
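// Note: the project must be configured to produce this XML documentation file,
// e.g. with <GenerateDocumentationFile>true</GenerateDocumentationFile> in the .csproj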
c.IncludeXmlComments(xmlPath);
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseSwagger();
app.UseSwaggerUI();
app.UseRouting();
...
}
}</code></pre><p>Now we can start our service. At the address <i>http://localhost:5000/swagger/index.html</i>, we'll find its description:</p><p class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFi0_vrOeqFktuJPoMqaxAuaSjCgPzGtdeqLtV5rlv_m_BP-JLGq8p4fDE04IRmsMqoH-yi_3Oexm-m13S_J2CRX-QYspXk4lISBInEb8asqa8vXowoyhvHe1_RLQnfisA3R5f-92Qc3ZP/s935/Swagger.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Swagger UI for the service" border="0" data-original-height="563" data-original-width="935" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFi0_vrOeqFktuJPoMqaxAuaSjCgPzGtdeqLtV5rlv_m_BP-JLGq8p4fDE04IRmsMqoH-yi_3Oexm-m13S_J2CRX-QYspXk4lISBInEb8asqa8vXowoyhvHe1_RLQnfisA3R5f-92Qc3ZP/w400-h241/Swagger.png" width="400" /></a></p><p><br /></p><p>But now all our enumerations are represented by mere numbers:</p><p class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1aViYyi9eNtH2XsdiC3SBPLN2sG9ixM4gcPPKovDAtlGbxRaqdNuWKsRGEsyuJv1d6DcJElf4uEMVr0k6FC2Y1cyBxqIEOJN9NtdF1mjFynnc0nwNtTLmLOA5xXv-dyzC-rqwCDGqikDR/s396/Integer+Enum.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Representation of enumerations by numbers" border="0" data-original-height="195" data-original-width="396" height="158" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1aViYyi9eNtH2XsdiC3SBPLN2sG9ixM4gcPPKovDAtlGbxRaqdNuWKsRGEsyuJv1d6DcJElf4uEMVr0k6FC2Y1cyBxqIEOJN9NtdF1mjFynnc0nwNtTLmLOA5xXv-dyzC-rqwCDGqikDR/w320-h158/Integer+Enum.png" width="320" /></a></p><p><br /></p><p>I'd prefer to provide string values for enumerations. They at least make some sense to users, unlike these numbers.</p><p>To do this, we need to make some changes to the Swashbuckle configuration. I installed another NuGet package, <i>Swashbuckle.AspNetCore.Newtonsoft</i>. Here are my changes. I changed</p><pre><code lang="cs">services.AddControllers();</code></pre><p>to</p><pre><code lang="cs">services.AddControllers().AddNewtonsoftJson(o =>
{
o.SerializerSettings.Converters.Add(new StringEnumConverter
{
CamelCaseText = true
});
});</code></pre><p>Now our enumerations are represented as strings:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg15UJFLJ2KrHZ8476yJVrqp3WKPzNgWU6dcn32tDLxSgr2tFLXigzizlkI_1gNfIkMimjw7Bg1WgWQgljjQWhm-8DBVlDdO22eY2BiBXnVTx45zIj68VOuIfX4wRCXakN5wCP2qzWo5eBt/s320/String+Enum.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Representation of enumerations by strings" border="0" data-original-height="198" data-original-width="320" height="198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg15UJFLJ2KrHZ8476yJVrqp3WKPzNgWU6dcn32tDLxSgr2tFLXigzizlkI_1gNfIkMimjw7Bg1WgWQgljjQWhm-8DBVlDdO22eY2BiBXnVTx45zIj68VOuIfX4wRCXakN5wCP2qzWo5eBt/w320-h198/String+Enum.png" width="320" /></a></div><br /><p>But even now I see one drawback. The Swagger UI does not show me the XML comments assigned to the members of enumerations.</p><h2 style="text-align: left;">Description of enumeration types</h2><p>Let's see how we can get them. I did a bit of searching on the internet but found almost nothing. Although there is one very interesting <a href="https://gist.github.com/edfarrow/3162b0b7f7940e92b4d38da9b741fa4c" target="_blank">piece of code</a>. Unfortunately, it targets an old version of Swashbuckle. Nevertheless, it is a good starting point.</p><p>Swashbuckle allows us to interfere with the documentation generation process. For example, there is an interface <i>ISchemaFilter</i>, which allows you to change the schema description of individual classes. The following code shows how to change the descriptions of enumerations:</p><p><br /></p><pre><code lang="cs">public class EnumTypesSchemaFilter : ISchemaFilter
{
private readonly XDocument _xmlComments;
public EnumTypesSchemaFilter(string xmlPath)
{
if(File.Exists(xmlPath))
{
_xmlComments = XDocument.Load(xmlPath);
}
}
public void Apply(OpenApiSchema schema, SchemaFilterContext context)
{
if (_xmlComments == null) return;
if(schema.Enum != null && schema.Enum.Count > 0 &&
context.Type != null && context.Type.IsEnum)
{
schema.Description += "<p>Members:</p><ul>";
var fullTypeName = context.Type.FullName;
foreach (var enumMemberName in schema.Enum.OfType<OpenApiString>().Select(v => v.Value))
{
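// In the XML documentation file, enum members are recorded as fields:
// <member name="F:Namespace.EnumType.MemberName">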
var fullEnumMemberName = $"F:{fullTypeName}.{enumMemberName}";
var enumMemberComments = _xmlComments.Descendants("member")
.FirstOrDefault(m => m.Attribute("name").Value.Equals(fullEnumMemberName, StringComparison.OrdinalIgnoreCase));
if (enumMemberComments == null) continue;
var summary = enumMemberComments.Descendants("summary").FirstOrDefault();
if (summary == null) continue;
schema.Description += $"<li><i>{enumMemberName}</i> - {summary.Value.Trim()}</li>";
}
schema.Description += "</ul>";
}
}
}</code></pre><p>The constructor of this class accepts the path to the file with XML comments. I read its contents into an <i>XDocument</i> object. Then, in the <i>Apply</i> method, we check whether the current type is an enumeration. For such types, we append an HTML list with the descriptions of all the members of this enumeration to the type description.</p><p>Now we must plug our filter class into Swashbuckle:</p><pre><code lang="cs">services.AddSwaggerGen(c => {
// Set the comments path for the Swagger JSON and UI.
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
c.IncludeXmlComments(xmlPath);
c.SchemaFilter<EnumTypesSchemaFilter>(xmlPath);
});</code></pre><p>This is done using the <i>SchemaFilter</i> method in the configuration section for Swagger. I pass the path to the file with XML comments to this method. This value will be passed to the constructor of the <i>EnumTypesSchemaFilter</i> class.</p><p>Now the Swagger UI shows the enum descriptions as follows:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWAQSvjfho0lRpTj-FJmrzWI5SM5qfQhmN4a1i41CX0VBSdgmiIv0aUS4njHq2sU6jQ_z5LpeCMyUWt8HNL-PmXtoXWUZ_ohnagQ9Y_MbFMPVrVL3f4sMiGYPaUrYYk7Wlc_9_Inon1Ig-/s318/XML+comments+for+classes.png" style="margin-left: 1em; margin-right: 1em;"><img alt="XML comments for enumeration members" border="0" data-original-height="289" data-original-width="318" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWAQSvjfho0lRpTj-FJmrzWI5SM5qfQhmN4a1i41CX0VBSdgmiIv0aUS4njHq2sU6jQ_z5LpeCMyUWt8HNL-PmXtoXWUZ_ohnagQ9Y_MbFMPVrVL3f4sMiGYPaUrYYk7Wlc_9_Inon1Ig-/s16000/XML+comments+for+classes.png" /></a></div><br /><h2 style="text-align: left;">Description of enumeration parameters</h2><p>It looks better. But not good enough. Our controller has a method that takes an enum as a parameter:</p><pre><code lang="cs">public Task<Result> ExecuteOperation(int id, OperationType type)</code></pre><p>Let's see how the Swagger UI shows this:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhi3JNkqxTqKgOnnH3YyIXJNhIPmPQqEpm3cMYVjIHLjaOlbrEO3CtUFenn4C6_-4_uM52GJ7yPX_iDoyZSgrG-njFTDAH0H6GC-LWefNGV1ENGL4CYeL9DLsCkJbbWNbiNJsSNkXmg908t/s529/Parameter+description.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Parameter description" border="0" data-original-height="286" data-original-width="529" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhi3JNkqxTqKgOnnH3YyIXJNhIPmPQqEpm3cMYVjIHLjaOlbrEO3CtUFenn4C6_-4_uM52GJ7yPX_iDoyZSgrG-njFTDAH0H6GC-LWefNGV1ENGL4CYeL9DLsCkJbbWNbiNJsSNkXmg908t/w320-h173/Parameter+description.png" width="320" /></a></div><br /><p>As you can see, there is no description of the enum members here. The reason is that here we see a description of the parameter, not a description of the parameter type. So this is an XML comment for the parameter, not for the parameter type.</p><p>But we can solve this problem too. To do this, we will use another Swashbuckle interface - <i>IDocumentFilter</i>. Here is our implementation:</p><p><br /></p><pre><code lang="cs">public class EnumTypesDocumentFilter : IDocumentFilter
{
public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context)
{
foreach (var path in swaggerDoc.Paths.Values)
{
foreach(var operation in path.Operations.Values)
{
foreach(var parameter in operation.Parameters)
{
var schemaReferenceId = parameter.Schema.Reference?.Id;
if (string.IsNullOrEmpty(schemaReferenceId)) continue;
var schema = context.SchemaRepository.Schemas[schemaReferenceId];
if (schema.Enum == null || schema.Enum.Count == 0) continue;
parameter.Description += "<p>Variants:</p>";
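// EnumTypesSchemaFilter has already extended the schema description with
// a <ul>...</ul> list of members; copy that list into the parameter description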
int cutStart = schema.Description.IndexOf("<ul>");
int cutEnd = schema.Description.IndexOf("</ul>") + 5;
parameter.Description += schema.Description
.Substring(cutStart, cutEnd - cutStart);
}
}
}
}
}</code></pre><p>Here, in the <i>Apply</i> method, we iterate through all the parameters of all the methods of all the controllers. Unfortunately, in this interface we do not have access to the parameter type, only to the schema of this type (at least I think so). That's why I simply cut the description of the enum members out of the parameter type description string.</p><p>Our class must be registered in a similar way, using the <i>DocumentFilter</i> method:</p><pre><code lang="cs">services.AddSwaggerGen(c => {
// Set the comments path for the Swagger JSON and UI.
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
c.IncludeXmlComments(xmlPath);
c.SchemaFilter<EnumTypesSchemaFilter>(xmlPath);
c.DocumentFilter<EnumTypesDocumentFilter>();
});</code></pre><p>Here's what the parameter description in the Swagger UI looks like now:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtaLWS_i7G7bb6YHyn1zjKfnKHWx0gOCB32WFMAGWxXJaXvvWx0FPdiRhX4NpOm8hfZ-AwShNzlEvGQWgC9NlWwBgbCiGDTMBx50ADbaWmm3y9ZttphMqBNP3pAsgB0YIjKWhIsxSo1S3N/s522/Parameter+description+with+variants.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Parameter description with variants" border="0" data-original-height="410" data-original-width="522" height="251" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtaLWS_i7G7bb6YHyn1zjKfnKHWx0gOCB32WFMAGWxXJaXvvWx0FPdiRhX4NpOm8hfZ-AwShNzlEvGQWgC9NlWwBgbCiGDTMBx50ADbaWmm3y9ZttphMqBNP3pAsgB0YIjKWhIsxSo1S3N/w320-h251/Parameter+description+with+variants.png" width="320" /></a></div><br /><h2 style="text-align: left;">Conclusion</h2><p>The code presented in this article is more of a sketch than a final version. But I hope it can be useful and allow you to add a description of the enum members to your Swagger UI. Thank you!</p><p>P.S. You can find the whole code of the project on <a href="https://github.com/yakimovim/enum-description-in-swagger" target="_blank">GitHub</a>.</p>Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com2tag:blogger.com,1999:blog-5729371525642521663.post-51269489127501121732020-01-22T16:44:00.001+03:002020-01-22T16:44:25.799+03:00Constant reservation and Git hooks using C#<div dir="ltr" style="text-align: left;" trbidi="on">
Let me tell you a story. Once upon a time, there were two developers: Sam and Bob. They worked on a project that included a database. When a developer wanted to make changes to the database structure, they had to create a step file <i>stepNNN.sql</i>, where <i>NNN</i> was some number. To avoid collisions of numbers between different developers, they had a simple Web application. Before starting to write an SQL file, each developer had to go to the application and reserve a new number for their modifications.<br />
<br />
Then the time came for Sam and Bob to make changes to the database. Sam obediently went to the Web application and reserved number 333. But Bob forgot to do it. He just used 333 for his new step file. It happened that Bob was the first to commit his changes to the version control system. When Sam was ready to commit, it turned out that <i>step333.sql</i> already existed. He contacted Bob, explained that step 333 was already reserved, and asked Bob to fix the collision. But Bob answered:<br />
<br />
- Hey, man. You know, my code is already in the 'master' branch, and many developers have already taken it. Besides, it is already in production. Could you just fix your code instead?<br />
<br />
Have you noticed? The person who followed all the rules was the one who was punished. Sam had to change his files, modify his local database, etc. Personally, I hate such situations. Let's see how we can avoid them.<br />
<a name='more'></a><br />
<h2 style="text-align: left;">
General idea</h2>
<br />
How can we prevent such things from happening? What if Bob simply could not commit his changes without having reserved the corresponding number in the Web application?<br />
<br />
And it can be implemented. We can use Git hooks to execute custom code before each commit. This code will check all changes that a developer wants to commit. If these changes contain a new step file, the code will contact the Web application and check if the number of the step file is reserved by the current developer. And if the number is not reserved, the code will prevent the commit.<br />
<br />
This is the main idea. Now let's dig into details.<br />
<br />
<h2 style="text-align: left;">
Git hooks on C#</h2>
<br />
Git does not limit which language you use to write hooks. As a C# developer, I'd like to use the familiar C# for this purpose. Can I do it?<br />
<br />
Yes, I can. I took the main idea from <a href="https://medium.com/@max.hamulyak/using-c-code-in-your-git-hooks-66e507c01a0f" target="_blank">this article</a> by Max Hamulyák. It requires us to use the <a href="https://github.com/filipw/dotnet-script" target="_blank"><i>dotnet-script</i></a> global tool. This tool requires the .NET Core 2.1+ SDK to be installed on the developer machine. I think it is not unreasonable to have it installed if you are doing .NET development. Installation of <i>dotnet-script</i> is very straightforward:<br />
<br />
<pre><code lang="bash">> dotnet tool install -g dotnet-script</code></pre>
<br />
Now we can write Git hooks using C#. To do it, go to the <i>.git\hooks</i> directory in your project folder and create a <i>pre-commit</i> file (without any extension):<br />
<br />
<pre><code lang="cs">#!/usr/bin/env dotnet-script
Console.WriteLine("Git hook");</code></pre>
<br />
From this moment on, every time you run the <i>git commit</i> command, you'll see the <i>Git hook</i> message in your console.<br />
<br />
<h2 style="text-align: left;">
Several processors for one hook</h2>
<br />
Well, it was a start. Now we can write anything in the <i>pre-commit</i> file. But I don't like this idea very much.<br />
<br />
First, writing a script file is not very convenient. I'd prefer to use my favorite IDE with all its features. And I want to be able to split complex code across several files.<br />
<br />
But there is one more thing I don't like. Consider the following situation. You created a <i>pre-commit</i> file with some checks. But later you decided to add some more checks. You'll have to open the file, decide where to insert the new code, decide how it should interact with the old code, etc. Personally, I prefer to write new code, not modify existing code.<br />
<br />
Let's deal with these problems one at a time.<br />
<br />
<h2 style="text-align: left;">
Call of external code</h2>
<br />
Here is what we'll do. We'll create a folder (e.g. <i>gitHookAssemblies</i>). In this folder, I'll place a .NET Core assembly (e.g. <i>GitHooks</i>). My script in the <i>pre-commit</i> file will just call some method from this assembly.<br />
<br />
<pre><code lang="cs">public class RunHooks
{
public static void RunPreCommitHook()
{
Console.WriteLine("Git hook from assembly");
}
}</code></pre>
<br />
I can create the assembly in my favorite IDE using any tools I want.<br />
<br />
Now in the <i>pre-commit</i> file, I can write:<br />
<br />
<pre><code lang="cs">#!/usr/bin/env dotnet-script
#r "../../gitHookAssemblies/GitHooks.dll"
GitHooks.RunHooks.RunPreCommitHook();</code></pre>
<br />
<br />
See how cool it is! Now I only have to make changes in the <i>GitHooks</i> assembly. The code of the <i>pre-commit</i> file will never change. Any time I need a new check, I'll change the code of the <i>RunPreCommitHook</i> method, recompile the assembly, and place it into the <i>gitHookAssemblies</i> folder. And that's it!<br />
<br />
Well, not quite.<br />
<br />
<h2 style="text-align: left;">
Fighting with cache </h2>
<br />
Let's try to follow this process. Change the message for <i>Console.WriteLine</i> to something different, recompile the assembly, and put it into the <i>gitHookAssemblies</i> folder. After that, call <i>git commit</i> again. What will we see? The old message. Our changes were not picked up. Why is that?<br />
<br />
Let's say that your project is in the <i>c:\project</i> folder. This means that Git hooks are stored in the <i>c:\project\.git\hooks</i> folder. Now, if you are on Windows 10, go to the <i>c:\Users\<UserName>\AppData\Local\Temp\scripts\c\project\.git\hooks\</i> folder, where <i><UserName></i> is the name of your current user. What do we have here? When we run the <i>pre-commit</i> script, a compiled version of the script is created in this folder. Here you can also find all referenced assemblies (including our <i>GitHooks.dll</i>). And in the <i>execution-cache</i> sub-folder you can find an SHA256 file. I suspect that this file contains the SHA256 hash of our <i>pre-commit</i> file. Any time we run the script, the runtime compares the current hash of the file with the stored hash. If they are equal, the stored version of the compiled script is used.<br />
<br />
This means that, since we never change our <i>pre-commit</i> file, changes in <i>GitHooks.dll</i> will never reach the cache and will never be used.<br />
<br />
What can we do about it? Well, Reflection will help. I'll rewrite my script file to use Reflection instead of a direct reference to the <i>GitHooks</i> assembly. Here is what our <i>pre-commit</i> file will look like:<br />
<br />
<pre><code lang="cs">#!/usr/bin/env dotnet-script
#r "nuget: System.Runtime.Loader, 4.3.0"
using System.IO;
using System.Runtime.Loader;
var hooksDirectory = Path.Combine(Environment.CurrentDirectory, "gitHookAssemblies");
var assemblyPath = Path.Combine(hooksDirectory, "GitHooks.dll");
var assembly = AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath);
if(assembly == null)
{
Console.WriteLine($"Can't load assembly from '{assemblyPath}'.");
}
var collectorsType = assembly.GetType("GitHooks.RunHooks");
if(collectorsType == null)
{
Console.WriteLine("Can't find entry type.");
}
var method = collectorsType.GetMethod("RunPreCommitHook", System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static);
if(method == null)
{
Console.WriteLine("Can't find method for pre-commit hooks.");
}
method.Invoke(null, new object[0]);</code></pre>
<br />
Now we can update <i>GitHooks.dll</i> in our <i>gitHookAssemblies</i> folder at any moment, and all changes will be picked up by the same script. No need to change it at all.<br />
<br />
It sounds fine, but still, there is one more problem we need to solve before going further. I'm talking about references.<br />
<br />
<h2 style="text-align: left;">
Referencing assemblies</h2>
<br />
Everything looks fine as long as the only thing our <i>RunHooks.RunPreCommitHook</i> does is write a string to the console. But, frankly speaking, we usually want more than writing strings. We need to do more complex things. And to do them, we need to reference other assemblies and NuGet packages. Let's see how to do it.<br />
<br />
I'll modify my <i>RunHooks.RunPreCommitHook</i> to use the <i>LibGit2Sharp</i> package:<br />
<br />
<pre><code lang="cs">public static void RunPreCommitHook()
{
using var repo = new Repository(Environment.CurrentDirectory);
Console.WriteLine(repo.Info.WorkingDirectory);
}</code></pre>
<br />
Now, if I try to run <i>git commit</i>, I'll get the following error message:<br />
<br />
<pre><code>System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.IO.FileLoadException: Could not load file or assembly 'LibGit2Sharp, Version=0.26.0.0, Culture=neutral, PublicKeyToken=7cbde695407f0333'. General Exception (0x80131500)</code></pre>
<br />
So we need some way to provide all referenced assemblies. The main idea is the following: I'll place all assemblies required for execution into the same <i>gitHookAssemblies</i> folder alongside <i>GitHooks.dll</i>. To get all referenced assemblies of a .NET Core project, you can use the <i>dotnet publish</i> command. In our case, we need to place <i>LibGit2Sharp.dll</i> and <i>git2-7ce88e6.dll</i> in this folder.<br />
<br />
Also, we have to modify our <i>pre-commit</i> script. We'll add the following code:<br />
<br />
<pre><code lang="cs">#!/usr/bin/env dotnet-script
#r "nuget: System.Runtime.Loader, 4.3.0"
using System.IO;
using System.Runtime.Loader;
var hooksDirectory = Path.Combine(Environment.CurrentDirectory, "gitHookAssemblies");
var assemblyPath = Path.Combine(hooksDirectory, "GitHooks.dll");
AssemblyLoadContext.Default.Resolving += (context, assemblyName) => {
var assemblyPath = Path.Combine(hooksDirectory, $"{assemblyName.Name}.dll");
if(File.Exists(assemblyPath))
{
return AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath);
}
return null;
};
...</code></pre>
<br />
This code will try to find all unknown assemblies in the <i>gitHookAssemblies</i> folder.<br />
<br />
Now we can run <i>git commit</i> and it will execute without problems.<br />
<br />
<h2 style="text-align: left;">
Improve extensibility</h2>
<br />
Now our <i>pre-commit</i> file is complete. We don't need to modify it anymore. But for any new check, we'll still need to modify the <i>RunHooks.RunPreCommitHook</i> method. We have just moved the problem to another level. Personally, I'd prefer some sort of plug-in system: every time I need to add an action that must be executed before a commit, I just write another plug-in and modify nothing else. Is it hard to implement?<br />
<br />
Not at all. Let's use <a href="https://docs.microsoft.com/en-us/dotnet/framework/mef/" target="_blank">MEF</a>. Here is how it works.<br />
<br />
First, we define an interface for all hook handlers:<br />
<br />
<pre><code lang="cs">public interface IPreCommitHook
{
bool Process(IList<string> args);
}</code></pre>
<br />
Each Git hook can get some string arguments passed by Git. These arguments will be in the <i>args</i> parameter. The <i>Process</i> method will return <i>true</i> if it allows changes to be committed, and <i>false</i> otherwise.<br />
<br />
We can certainly define similar interfaces for other hooks, but in this article we'll concentrate on the pre-commit hook.<br />
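<br />
For example, an interface for commit-msg hook handlers might look like this (a hypothetical sketch, not used elsewhere in this article; Git passes the commit-msg hook a single argument: the path to the file with the commit message):<br />
<br />
<pre><code lang="cs">public interface ICommitMsgHook
{
    // args[0] is the path to the file containing the commit message
    bool Process(IList<string> args);
}</code></pre>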
<br />
Now let's implement the <i>IPreCommitHook</i> interface:<br />
<br />
<pre><code lang="cs">[Export(typeof(IPreCommitHook))]
public class MessageHook : IPreCommitHook
{
public bool Process(IList<string> args)
{
Console.WriteLine("Message hook...");
if(args != null)
{
Console.WriteLine("Arguments are:");
foreach(var arg in args)
{
Console.WriteLine(arg);
}
}
return true;
}
}</code></pre>
<br />
Such classes can be defined in different assemblies if we want; there are literally no limitations. The <i>Export</i> attribute comes from the <i>System.ComponentModel.Composition</i> NuGet package.<br />
<br />
And we'll define a helper method that collects all implementations of the <i>IPreCommitHook</i> interface marked with the <i>Export</i> attribute, runs them all, and reports whether any of them disallows the commit. I placed this code into a separate <i>GitHooksCollector</i> assembly, but that is not essential:<br />
<br />
<pre><code lang="cs">public class Collectors
{
private class PreCommitHooks
{
[ImportMany(typeof(IPreCommitHook))]
public IPreCommitHook[] Hooks { get; set; }
}
public static int RunPreCommitHooks(IList<string> args, string directory)
{
var catalog = new DirectoryCatalog(directory, "*Hooks.dll");
var container = new CompositionContainer(catalog);
var obj = new PreCommitHooks();
container.ComposeParts(obj);
bool success = true;
foreach(var hook in obj.Hooks)
{
success &= hook.Process(args);
}
return success ? 0 : 1;
}
}</code></pre>
<br />
This code also uses the <i>System.ComponentModel.Composition</i> NuGet package. First, we say that we'll look into all assemblies in the <i>directory</i> folder whose names match the <i>*Hooks.dll</i> pattern. You may use any pattern you want here. Then we collect all exported implementations of the <i>IPreCommitHook</i> interface into a <i>PreCommitHooks</i> object. And finally, we run all handlers and compute the aggregated execution result.<br />
<br />
The last thing to do is to slightly change the <i>pre-commit</i> file:<br />
<br />
<pre><code lang="cs">#!/usr/bin/env dotnet-script
#r "nuget: System.Runtime.Loader, 4.3.0"
using System.IO;
using System.Runtime.Loader;
var hooksDirectory = Path.Combine(Environment.CurrentDirectory, "gitHookAssemblies");
var assemblyPath = Path.Combine(hooksDirectory, "GitHooksCollector.dll");
AssemblyLoadContext.Default.Resolving += (context, assemblyName) => {
var assemblyPath = Path.Combine(hooksDirectory, $"{assemblyName.Name}.dll");
if(File.Exists(assemblyPath))
{
return AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath);
}
return null;
};
var assembly = AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath);
if(assembly == null)
{
Console.WriteLine($"Can't load assembly from '{assemblyPath}'.");
}
var collectorsType = assembly.GetType("GitHooksCollector.Collectors");
if(collectorsType == null)
{
Console.WriteLine("Can't find collector's type.");
}
var method = collectorsType.GetMethod("RunPreCommitHooks", System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static);
if(method == null)
{
Console.WriteLine("Can't find collector's method for pre-commit hooks.");
}
int exitCode = (int) method.Invoke(null, new object[] { Args, hooksDirectory });
Environment.Exit(exitCode);</code></pre>
<br />
And don't forget to place all participating assemblies into the <i>gitHookAssemblies</i> folder.<br />
<br />
Wow, that was a long preamble. But now we have a pretty robust solution for writing Git hooks using C#. All we ever need to modify is the content of the <i>gitHookAssemblies</i> folder. The content of this folder can be placed under version control and thus distributed to all developers.<br />
<br />
Anyway, it is time to solve our initial problem.<br />
<br />
<h2 style="text-align: left;">
Web service for constants registration</h2>
<br />
We wanted to make sure that developers are not able to commit changes if they forgot to register the corresponding constants in a Web service. Let's create a simple Web service for our needs. I'll use an ASP.NET Core Web service with Windows authentication. But actually, many variants could be used here.<br />
<br />
<pre><code lang="cs">using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
namespace ListsService.Controllers
{
public sealed class ListItem<T>
{
public ListItem(T value, string owner)
{
Value = value;
Owner = owner;
}
public T Value { get; }
public string Owner { get; }
}
public static class Lists
{
public static List<ListItem<int>> SqlVersions = new List<ListItem<int>>
{
new ListItem<int>(1, @"DOMAIN\Iakimov")
};
public static Dictionary<int, List<ListItem<int>>> AllLists = new Dictionary<int, List<ListItem<int>>>
{
{1, SqlVersions}
};
}
[Authorize]
public class ListsController : Controller
{
[Route("/api/lists/{listId}/ownerOf/{itemId}")]
[HttpGet]
public IActionResult GetOwner(int listId, int itemId)
{
if (!Lists.AllLists.ContainsKey(listId))
return NotFound();
var item = Lists.AllLists[listId].FirstOrDefault(li => li.Value == itemId);
if(item == null)
return NotFound();
return Json(item.Owner);
}
}
}</code></pre>
<br />
Here I use the static class <i>Lists</i> as a storage mechanism for testing purposes only. Each list has an integer identifier and contains integer items with information about the people who registered them. The <i>GetOwner</i> method of the <i>ListsController</i> class returns an identifier of the person who registered the corresponding list item.<br />
<br />
<h2 style="text-align: left;">
Checking SQL step files</h2>
<br />
Now we are ready to check whether we can commit a new SQL step file. Let's say that we store SQL step files the following way. In the main folder of the project, we have an <i>sql</i> sub-folder. In this folder, every developer can create a <i>verXXX</i> folder, where <i>XXX</i> is some number that must be registered in the Web service. Inside the <i>verXXX</i> folder there should be one or several <i>.sql</i> files with modifications to the database. We'll not discuss the order of execution of these <i>.sql</i> files here; it is not relevant to our discussion. All we want to do is the following: if a developer wants to commit any new file inside some <i>sql/verXXX</i> folder, we must check whether constant <i>XXX</i> was registered by this developer.<br />
<br />
Here is the code of corresponding Git hook:<br />
<br />
<pre><code lang="cs">[Export(typeof(IPreCommitHook))]
public class SqlStepsHook : IPreCommitHook
{
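// Matches path segments like "ver333" and captures the numeric part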
private static readonly Regex _expr = new Regex("\\bver(\\d+)\\b");
public bool Process(IList<string> args)
{
using var repo = new Repository(Environment.CurrentDirectory);
var items = repo.RetrieveStatus()
.Where(i => !i.State.HasFlag(FileStatus.Ignored))
.Where(i => i.State.HasFlag(FileStatus.NewInIndex))
.Where(i => i.FilePath.StartsWith(@"sql"));
var versions = new HashSet<int>(
items
.Select(i => _expr.Match(i.FilePath))
.Where(m => m.Success)
.Select(m => m.Groups[1].Value)
.Select(d => int.Parse(d))
);
foreach(var version in versions)
{
if (!ListItemOwnerChecker.DoesCurrentUserOwnListItem(1, version))
return false;
}
return true;
}
}</code></pre>
<br />
Here we use the <i>Repository</i> class from the <i>LibGit2Sharp</i> NuGet package. The <i>items</i> variable will contain all new files in the Git index located inside the <i>sql</i> folder. You can improve the procedure of finding such files if you wish. Into the <i>versions</i> variable, we collect all distinct <i>XXX</i> constants from the <i>verXXX</i> folders. And finally, the <i>ListItemOwnerChecker.DoesCurrentUserOwnListItem</i> method checks whether the version is registered by the current user in list 1 of the Web service.<br />
<br />
The implementation of <i>ListItemOwnerChecker.DoesCurrentUserOwnListItem</i> is quite simple:<br />
<br />
<pre><code lang="cs">class ListItemOwnerChecker
{
public static string GetListItemOwner(int listId, int itemId)
{
var handler = new HttpClientHandler
{
UseDefaultCredentials = true
};
var client = new HttpClient(handler);
var response = client.GetAsync($"https://localhost:44389/api/lists/{listId}/ownerOf/{itemId}")
.ConfigureAwait(false)
.GetAwaiter()
.GetResult();
if (response.StatusCode == System.Net.HttpStatusCode.NotFound)
{
return null;
}
var owner = response.Content
.ReadAsStringAsync()
.ConfigureAwait(false)
.GetAwaiter()
.GetResult();
return JsonConvert.DeserializeObject<string>(owner);
}
public static bool DoesCurrentUserOwnListItem(int listId, int itemId)
{
var owner = GetListItemOwner(listId, itemId);
if (owner == null)
{
Console.WriteLine($"There is no item '{itemId}' in the list '{listId}' registered on the lists service.");
return false;
}
if (owner != WindowsIdentity.GetCurrent().Name)
{
Console.WriteLine($"Item '{itemId}' in the list '{listId}' registered by '{owner}' and you are '{WindowsIdentity.GetCurrent().Name}'.");
return false;
}
return true;
}
}</code></pre>
<br />
Here we ask the Web service for the identifier of the user who registered the required constant (the <i>GetListItemOwner</i> method). Then we compare it with the name of the current Windows user. This is only one of many possible ways to implement this functionality. For example, you could use the name or e-mail of a user from the Git config.<br />
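<br />
For instance, with <i>LibGit2Sharp</i> the name and e-mail from the Git configuration could be read like this (a sketch; <i>user.name</i> and <i>user.email</i> are the standard Git configuration keys):<br />
<br />
<pre><code lang="cs">using var repo = new Repository(Environment.CurrentDirectory);
// Get<string> returns null when the key is not set
var userName = repo.Config.Get<string>("user.name")?.Value;
var userEmail = repo.Config.Get<string>("user.email")?.Value;</code></pre>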
<br />
And that is it. Just build the corresponding assembly and place it into the <i>gitHookAssemblies</i> folder with all referenced assemblies. Everything will work automatically.<br />
<br />
<h2 style="text-align: left;">
Checking enum values</h2>
<br />
Well, that's great. Now nobody can commit new changes to the SQL database without registering the corresponding constant in the Web service first. But we can use this method in other places where constants should be reserved.<br />
<br />
For example, somewhere in the code there can be an enum. Every developer can add a member to the enum and assign it some integer value:<br />
<br />
<pre><code lang="cs">enum Constants
{
Val1 = 1,
Val2 = 2,
Val3 = 3
}</code></pre>
<br />
We want to avoid collisions of values between members of this enum. This is why we require developers to register the corresponding integer constant in the Web service first. How hard is it to implement a registration check for such constants?<br />
<br />
Here is the code of new Git hook:<br />
<br />
<pre><code lang="cs">[Export(typeof(IPreCommitHook))]
public class ConstantValuesHook : IPreCommitHook
{
public bool Process(IList<string> args)
{
using var repo = new Repository(Environment.CurrentDirectory);
var constantsItem = repo.RetrieveStatus()
.Staged
.FirstOrDefault(i => i.FilePath == @"src/GitInteraction/Constants.cs");
if (constantsItem == null)
return true;
if (!constantsItem.State.HasFlag(FileStatus.NewInIndex)
&& !constantsItem.State.HasFlag(FileStatus.ModifiedInIndex))
return true;
var initialContent = GetInitialContent(repo, constantsItem);
var indexContent = GetIndexContent(repo, constantsItem);
var initialConstantValues = GetConstantValues(initialContent);
var indexConstantValues = GetConstantValues(indexContent);
indexConstantValues.ExceptWith(initialConstantValues);
if (indexConstantValues.Count == 0)
return true;
foreach (var version in indexConstantValues)
{
if (!ListItemOwnerChecker.DoesCurrentUserOwnListItem(2, version))
return false;
}
return true;
}
...
}</code></pre>
<br />
First, we check whether the corresponding file with our enum was modified. Then we extract the content of this file from Git storage (the previously committed version) and from the Git index using the <i>GetInitialContent</i> and <i>GetIndexContent</i> methods. Here are their implementations:<br />
<br />
<pre><code lang="cs">private string GetInitialContent(Repository repo, StatusEntry item)
{
var blob = repo.Head.Tip[item.FilePath]?.Target as Blob;
if (blob == null)
return null;
using var content = new StreamReader(blob.GetContentStream(), Encoding.UTF8);
return content.ReadToEnd();
}
private string GetIndexContent(Repository repo, StatusEntry item)
{
var id = repo.Index[item.FilePath]?.Id;
if (id == null)
return null;
var itemBlob = repo.Lookup<Blob>(id);
if (itemBlob == null)
return null;
using var content = new StreamReader(itemBlob.GetContentStream(), Encoding.UTF8);
return content.ReadToEnd();
}</code></pre>
<br />
Then we extract the integer values of the enum members from both versions of the enum. This is done in the <i>GetConstantValues</i> method. I have used <a href="https://github.com/dotnet/roslyn" target="_blank"><i>Roslyn</i></a> to implement this functionality. You can take it from the <i>Microsoft.CodeAnalysis.CSharp</i> NuGet package.<br />
<br />
<pre><code lang="cs">private ISet<int> GetConstantValues(string fileContent)
{
if (string.IsNullOrWhiteSpace(fileContent))
return new HashSet<int>();
var tree = CSharpSyntaxTree.ParseText(fileContent);
var root = tree.GetCompilationUnitRoot();
var enumDeclaration = root
.DescendantNodes()
.OfType<EnumDeclarationSyntax>()
.FirstOrDefault(e => e.Identifier.Text == "Constants");
if(enumDeclaration == null)
return new HashSet<int>();
var result = new HashSet<int>();
foreach (var member in enumDeclaration.Members)
{
// EqualsValue is null for members without an explicit value
if(member.EqualsValue != null && int.TryParse(member.EqualsValue.Value.ToString(), out var value))
{
result.Add(value);
}
}
return result;
}</code></pre>
<br />
When using <i>Roslyn</i>, I faced the following problem. When I wrote my code, the latest version of the <i>Microsoft.CodeAnalysis.CSharp</i> NuGet package was 3.4.0. I placed the assembly into the <i>gitHookAssemblies</i> folder, but the code said that it couldn't find the corresponding version of the assembly. Here is the reason: <i>dotnet-script</i> also uses <i>Roslyn</i> for its work. This means that some version of the <i>Microsoft.CodeAnalysis.CSharp</i> assembly was already loaded into the domain. For me, it was version 3.3.1. When I switched to this version of the NuGet package, the problem vanished.<br />
<br />
Finally, in the <i>Process</i> method of our hook handler, we choose all new values and check their owners on our Web service.<br />
<br />
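By the way, the <i>DoesCurrentUserOwnListItem</i> check itself is just an HTTP call to the Web service. Here is a minimal sketch of how it might look with Windows authentication (the service URL, the route, and the response format are my assumptions, not the real API). Note the blocking <i>GetAwaiter().GetResult()</i> calls: working with async operations from hook handlers is awkward, a point I return to below.<br />
<br />
<pre><code lang="cs">using System.Net.Http;

public static class ListItemOwnerChecker
{
    // Hypothetical client of the reservation Web service.
    public static bool DoesCurrentUserOwnListItem(int listId, int itemValue)
    {
        // UseDefaultCredentials sends the Windows credentials of the current user.
        using var handler = new HttpClientHandler { UseDefaultCredentials = true };
        using var client = new HttpClient(handler);

        // The route is an assumption; adjust it to your service.
        var response = client
            .GetAsync($"http://constants-service/api/lists/{listId}/items/{itemValue}/owner")
            .GetAwaiter().GetResult();
        response.EnsureSuccessStatusCode();

        // Assume the service returns "true" or "false" in the body.
        var body = response.Content.ReadAsStringAsync().GetAwaiter().GetResult();
        return bool.Parse(body);
    }
}</code></pre>
<br />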
<h2 style="text-align: left;">
Points of interest</h2>
<br />
Here we are. Our system for checking constant reservations is built. To wrap up, I'd like to talk about some problems that we should think about.<br />
<br />
1. We created a <i>pre-commit</i> hook file, but we have not talked about how to place it into the <i>.git\hooks</i> folder on the computers of all developers. We can use the <i>--template</i> parameter of the <i>git init</i> command. Or something like this:<br />
<br />
<pre><code lang="bash">git config init.templatedir git_template_dir
git init</code></pre>
<br />
Or, if you have Git 2.9 or later, we can use the <i>core.hooksPath</i> Git configuration option:<br />
<br />
<pre><code lang="bash">git config core.hooksPath git_template_dir</code></pre>
<br />
Or we can make it a part of the build process for our project.<br />
<br />
2. The same question arises for the installation of <i>dotnet-script</i>. We can either pre-install it on all developer machines along with some version of .NET Core, or install it as a part of the build process.<br />
<br />
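For example, assuming the .NET Core SDK is already present, <i>dotnet-script</i> can be installed as a global tool with a single command, which is easy to put into a build script:<br />
<br />
<pre><code lang="bash">dotnet tool install -g dotnet-script</code></pre>
<br />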
3. Personally, I see the biggest problem in the location of referenced assemblies. We agreed to place all of them into the <i>gitHookAssemblies</i> folder, but I'm not sure it helps in all situations. For example, the <i>LibGit2Sharp</i> package comes with many native libraries for different operating systems. Here I used <i>git2-7ce88e6.dll</i>, which is suitable for Win-x64. But if different developers use different operating systems, we can face some problems.<br />
<br />
4. We said almost nothing about the implementation of the Web service. Here we used Windows authentication, but there are many possible options. Also, the Web service should provide some UI for the reservation of new constants and for the creation of new lists.<br />
<br />
5. Maybe you have noticed that the usage of async operations in our Git hook handlers was awkward. I think better support for such operations should be implemented.<br />
<br />
<h2 style="text-align: left;">
Conclusion</h2>
<br />
In this article, we learned how to build a robust system for writing Git hooks using .NET languages. On this basis, we wrote several hook handlers that allow us to check the reservation of different constants and prevent commits in case of violations.<br />
<br />
I hope this information will be helpful to you. Good luck!<br />
<br />
P.S. You can find the code for the article on <a href="https://github.com/yakimovim/csharp-git-hooks" target="_blank">GitHub</a>.</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com1tag:blogger.com,1999:blog-5729371525642521663.post-75424572884858631452019-05-06T16:30:00.001+03:002021-10-01T12:07:50.123+03:00NLog: rules and filters<div dir="ltr" style="text-align: left;" trbidi="on">
In <a href="https://www.confirmit.com/" target="_blank">Confirmit</a> we use the <a href="https://github.com/NLog/NLog" target="_blank">NLog</a> library for logging in .NET applications. Although there is documentation for this library, I found it hard to understand how the loggers work. In this article, I’ll try to explain how rules and filters are used by NLog. Let’s start.</div><span><a name='more'></a></span><div dir="ltr" style="text-align: left;" trbidi="on"><br />
<h3 style="text-align: left;">
How to configure NLog</h3>
<br />
We’ll start with a small reminder of what we can do with NLog configuration. A simple configuration is usually an XML file (e.g. NLog.config):<br />
<br />
<pre><code lang="xml"><?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<targets>
<target name="target1" xsi:type="ColoredConsole" layout="Access Log|${level:uppercase=true}|${logger}|${message}">
<highlight-row condition="true" foregroundColor="red"/>
</target>
<target name="target2" xsi:type="ColoredConsole" layout="Common Log|${level:uppercase=true}|${logger}|${message}">
<highlight-row condition="true" foregroundColor="green"/>
</target>
<target name="target3" xsi:type="ColoredConsole" layout="Yellow Log|${level:uppercase=true}|${logger}|${message}">
<highlight-row condition="true" foregroundColor="yellow"/>
</target>
</targets>
<rules>
<logger name="*" minlevel="Warn" writeTo="target1,target2,target3" />
</rules>
</nlog>
</code></pre>
<br />
You can load this configuration with a single line of code:<br />
<br />
<pre><code lang="cs">LogManager.Configuration = new XmlLoggingConfiguration("NLog.config");</code></pre>
<br />
What can we do with NLog configuration? We can set several targets per rule:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Warn" writeTo="target1,target2,target3" />
</rules>
</code></pre>
<br />
We can define which log levels we want to log:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Warn" writeTo="target1" />
<logger name="*" levels="Debug,Warn,Info" writeTo="target2" />
</rules>
</code></pre>
<br />
We can set filters for each rule:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Info" writeTo="target1">
<filters defaultAction='Log'>
<when condition="contains('${message}','Common')" action="Ignore" />
</filters>
</logger>
</rules>
</code></pre>
<br />
And finally, we can use nested rules:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Info" writeTo="target1">
<logger name="*" minlevel="Warn" writeTo="target2" />
</logger>
</rules>
</code></pre>
<br />
It is time to take a look at how it all works.<br />
<br />
<h3 style="text-align: left;">
Construction of a logger configuration</h3>
<br />
When you request an instance of a logger,<br />
<br />
<pre><code lang="cs">var commonLogger = LogManager.GetLogger("Common");</code></pre>
<br />
NLog either takes it from its cache or creates a new one (see <a href="https://github.com/NLog/NLog/blob/af2ca41049fdb29c6da95d8e83156aad9c52d925/src/NLog/LogFactory.cs#L890" target="_blank">here</a>). In the latter case, it will create a new configuration for loggers with the given name. Let's take a closer look at this configuration.<br />
<br />
In general, the configuration of a logger contains a separate chain of log targets with corresponding filters for each log level (<i>Trace</i>, <i>Debug</i>, <i>Info</i>, <i>Warn</i>, <i>Error</i>, <i>Fatal</i>) (see <a href="https://github.com/NLog/NLog/blob/af2ca41049fdb29c6da95d8e83156aad9c52d925/src/NLog/Internal/LoggerConfiguration.cs#L41" target="_blank">here</a>). Now we'll see how these chains are constructed.<br />
<br />
The main method responsible for the construction of these chains is <a href="https://github.com/NLog/NLog/blob/af2ca41049fdb29c6da95d8e83156aad9c52d925/src/NLog/LogFactory.cs#L647" target="_blank"><i>GetTargetsByLevelForLogger</i></a> of the <i>LogFactory</i> class. Here is how it works. It traverses all rules in the NLog configuration. First of all, it checks if the name of the rule corresponds to the name of the logger. Rule names support the same wildcard symbols we use for file system objects:<br />
<br />
<ul style="text-align: left;">
<li>* - any sequence of symbols</li>
<li>? - any single symbol</li>
</ul>
<br />
So the rule name '<i>*</i>' matches any logger name, and '<i>Common*</i>' matches all loggers whose names start with '<i>Common</i>'.<br />
<br />
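This matching is easy to model. Here is a minimal sketch (not NLog's actual code) of how such wildcard rule names can be matched by converting them to regular expressions:<br />
<br />
<pre><code lang="cs">using System.Text.RegularExpressions;

public static class RuleNameMatcher
{
    public static bool Matches(string ruleNamePattern, string loggerName)
    {
        // Escape everything, then turn the escaped wildcards back
        // into their regex equivalents.
        var regex = "^" + Regex.Escape(ruleNamePattern)
            .Replace(@"\*", ".*")
            .Replace(@"\?", ".") + "$";
        return Regex.IsMatch(loggerName, regex);
    }
}

// RuleNameMatcher.Matches("*", "Common")              -> true
// RuleNameMatcher.Matches("Common*", "CommonLogger")  -> true
// RuleNameMatcher.Matches("Common*", "SpecialLogger") -> false</code></pre>
<br />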
If the name of the rule does not correspond to the name of the logger, this rule is skipped along with all its subrules. Otherwise, the method gets all log levels for which this rule is enabled. For each such level, NLog adds all targets of the rule, together with the filters of the rule, to the corresponding chain of targets.<br />
<br />
There is one more important step in the construction of the target chains. If the current rule is marked as <i>final</i> and its name corresponds to the name of the logger, NLog stops the construction of the chains of targets for all log levels enabled for this rule. Neither the following rules nor the nested rules will add anything to these chains: they are completely constructed and will not be changed. This means there is no point in writing something like this:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Info" writeTo="target1" final="true">
<logger name="*" minlevel="Warn" writeTo="target2" />
</logger>
</rules>
</code></pre>
<br />
Nothing will ever come to <i>target2</i>. But it does make sense to write this configuration:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Warn" writeTo="target1" final="true">
<logger name="*" minlevel="Info" writeTo="target2" />
</logger>
</rules>
</code></pre>
<br />
As the outer rule is not enabled for the <i>Info</i> log level, the chain of targets for this level will not be frozen by the outer rule. So all <i>Info</i> messages will go to <i>target2</i>.<br />
<br />
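To illustrate the last configuration (a sketch; the targets refer to the rules above):<br />
<br />
<pre><code lang="cs">using NLog;

var logger = LogManager.GetLogger("Common");

// The rule is final for Warn and above, so the Warn chain is frozen on target1.
logger.Warn("Goes to target1 only");

// The outer rule is not enabled for Info, so the nested rule still applies.
logger.Info("Goes to target2");</code></pre>
<br />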
After the targets of the rule are added to the corresponding chains, all subrules of the current rule are recursively processed using the same algorithm. This happens regardless of the log levels enabled for the parent rule.<br />
<br />
Finally, the configuration for the logger is constructed. It contains chains of targets with filters for each possible log level:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1LLVehETm_St_SGiNR8dgf-GlbehWqHUa9hR0WeH31ImPp2SB_PPW6iD2hq-Zl_lfrPvlKBOJ45jiRJmBplQiu0gAvhZYPULnWBliPIrqjUoTqz7jmYwziyCSjmbu6KRyOZ0sQMZv1BB5/s1600/2019-04-10_17-47-28.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="448" data-original-width="233" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1LLVehETm_St_SGiNR8dgf-GlbehWqHUa9hR0WeH31ImPp2SB_PPW6iD2hq-Zl_lfrPvlKBOJ45jiRJmBplQiu0gAvhZYPULnWBliPIrqjUoTqz7jmYwziyCSjmbu6KRyOZ0sQMZv1BB5/s400/2019-04-10_17-47-28.png" width="207" /></a></div>
<br />
Now it is time to use the configuration.<br />
<br />
<h3 style="text-align: left;">
Usage of a logger configuration</h3>
<br />
We'll start with a simple thing. The <i>Logger</i> class has the <i>IsEnabled</i> method and the corresponding <i>IsXXXEnabled</i> properties (<i>IsDebugEnabled</i>, <i>IsInfoEnabled</i>, ...). How do they work? They just check whether the chain of targets for the corresponding log level is not empty (see <a href="https://github.com/NLog/NLog/blob/af2ca41049fdb29c6da95d8e83156aad9c52d925/src/NLog/Logger.cs#L87" target="_blank">here</a>). It means that filters never influence the values of these properties and this method.<br />
<br />
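For example, with the single rule from the first configuration above (minlevel set to Warn), the chains of targets for Trace, Debug and Info are empty:<br />
<br />
<pre><code lang="cs">using System;
using NLog;

var logger = LogManager.GetLogger("Common");

Console.WriteLine(logger.IsWarnEnabled);  // True: the chain for Warn is not empty
Console.WriteLine(logger.IsDebugEnabled); // False: the chain for Debug is empty</code></pre>
<br />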
Next, let me explain what happens when you want to log some message. As you can guess, the logger takes the chain of targets for the corresponding log level. It starts to process the links of the chain one by one. For each link, the logger decides whether the message should be written to the corresponding target and whether the processing of the chain should be stopped there. This is done using filters. Let me show you how NLog filters work.<br />
<br />
Here is how filters are defined in the configuration:<br />
<br />
<pre><code lang="xml"><rules>
<logger name="*" minlevel="Info" writeTo="target1">
<filters defaultAction='Log'>
<when condition="contains('${message}','Common')" action="Ignore" />
</filters>
</logger>
</rules>
</code></pre>
<br />
A usual filter has a boolean condition. Here you might think that a filter returns <i>true</i> or <i>false</i> for each message. But that is not the case. Filters actually return a value of the <i><a href="https://github.com/NLog/NLog/blob/2fffff47a50423477a801afb416767cf64e34707/src/NLog/Filters/FilterResult.cs#L39" target="_blank">FilterResult</a></i> type. If the condition of a filter evaluates to <i>true</i>, the filter returns the value defined by the <i>action</i> attribute (it is <i>Ignore</i> in our example). If the condition evaluates to <i>false</i>, the filter returns <i>Neutral</i>, which means that this filter does not want to decide what to do with the message.<br />
<br />
You can see how the chain of targets is processed <a href="https://github.com/NLog/NLog/blob/af2ca41049fdb29c6da95d8e83156aad9c52d925/src/NLog/LoggerImpl.cs#L98" target="_blank">here</a>. For each target, the result of the filters is calculated using the <i><a href="https://github.com/NLog/NLog/blob/af2ca41049fdb29c6da95d8e83156aad9c52d925/src/NLog/LoggerImpl.cs#L239" target="_blank">GetFilterResult</a></i> method. This result is equal to the result of the first filter that returns something other than <i>Neutral</i>. It means that if some filter returns a value different from <i>Neutral</i>, the filters after it will not be executed.<br />
<br />
But what happens if all filters return <i>Neutral</i>? In this case, the default value is used. This value is set by the <i>defaultAction</i> attribute of the <i>filters</i> element of a rule. And what do you think is the default value of <i>defaultAction</i>? You are right if you think it is <i>Neutral</i>. This means that the whole chain of filters can return <i>Neutral</i> as a result. In this case, NLog treats it as <i>Log</i>: the message will still be written into the target (see <a href="https://github.com/NLog/NLog/blob/2fffff47a50423477a801afb416767cf64e34707/src/NLog/LoggerImpl.cs#L201" target="_blank">here</a>).<br />
<br />
As you can guess, if the filter result is <i>Ignore</i> or <i>IgnoreFinal</i>, the message will not be written into the target. If the result is <i>Log</i> or <i>LogFinal</i>, the message will be written into the target. But what is the difference between <i>Ignore</i> and <i>IgnoreFinal</i>, and between <i>Log</i> and <i>LogFinal</i>? It is simple. In the case of <i>IgnoreFinal</i> and <i>LogFinal</i>, NLog stops the processing of the chain of targets and writes nothing to the targets after the current one.<br />
<br />
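The whole decision process for one target can be summarized in a short sketch (this is my compressed model of the behavior described above, not NLog's actual code):<br />
<br />
<pre><code lang="cs">using System;
using System.Collections.Generic;

public enum FilterResult { Neutral, Log, Ignore, LogFinal, IgnoreFinal }

public static class FilterChainModel
{
    // The result of the chain of filters is the first non-Neutral result.
    public static FilterResult GetFilterResult(
        IEnumerable<Func<string, FilterResult>> filters,
        string message,
        FilterResult defaultAction)
    {
        foreach (var filter in filters)
        {
            var result = filter(message);
            if (result != FilterResult.Neutral)
                return result; // the rest of the filters are not executed
        }

        // All filters returned Neutral: use defaultAction.
        // If it is Neutral too, NLog treats it as Log.
        return defaultAction == FilterResult.Neutral ? FilterResult.Log : defaultAction;
    }
}</code></pre>
<br />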
<h3 style="text-align: left;">
Conclusion</h3>
<br />
Analysis of NLog code helped me a lot in understanding how its rules and filters work. I hope it will help you as well. Good luck!</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-80633907253343165632019-01-10T16:27:00.000+03:002019-01-15T17:00:03.640+03:00Log level per request<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
Some time ago I was reading <a href="https://www.thoughtworks.com/radar" rel="nofollow" target="_blank">Technology Radar</a> of <a href="https://www.thoughtworks.com/" rel="nofollow" target="_blank">ThoughtWorks</a>. And there was a technique called "<a href="https://www.thoughtworks.com/radar/techniques/log-level-per-request" rel="nofollow" target="_blank">Log level per request</a>". Here in <a href="https://www.confirmit.com/" rel="nofollow" target="_blank">Confirmit</a>, we use logging widely. So I was wondering how to implement a similar solution. And now I'm ready to show you one possible implementation.</div>
<div style="text-align: justify;">
</div>
<a name='more'></a><div style="text-align: left;">
<br /></div>
<h3 style="text-align: left;">
Problem description</h3>
<br />
So what are we talking about? Let's say you have a Web service. At some moment, it starts to fail in the production environment. But it fails only on some requests. For example, it fails only for one user. Or only for a specific endpoint... Certainly, we have to find the reason. In this case, logging should help.<br />
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We can insert into the code enough log instructions to pinpoint different reasons for failures. A log instruction usually associates a message with some log level (Debug, Info, Warning, ...). The logger also has its own log level. All messages with levels equal to or above the logger level will be written to a log sink (file, database, ...). If the level of the message is below the logger level, the message will be discarded.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
When the application works well, we want to have as few log messages as possible, to keep the size of the log sinks small. At the same time, if the application fails, we want to have as many log messages as possible to be able to find the reason for the problem. The difficulty here is that usually we set one level for all loggers in an application. If everything is OK, we keep this level high (e.g. Warning). If we need to investigate failures, we set this level low (e.g. Debug).</div>
<div style="text-align: justify;">
<br />
<h3>
Log level per application</h3>
<br /></div>
<div style="text-align: justify;">
When we set the application log level low, suddenly there will be a lot of messages in the log sinks. These messages will come from many requests and will be shuffled, as many requests can be processed simultaneously. This leads to several potential difficulties:</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
</div>
<ul>
<li style="text-align: justify;">How to separate messages from requests with problems from messages from requests without problems?</li>
<li style="text-align: justify;">Requests without problems still spend their time to write messages to the log sinks, although these messages will not be used.</li>
<li style="text-align: justify;">Messages from the requests without problems still occupy space in the log sinks, although these messages will not be used.</li>
</ul>
<div>
<br /></div>
<div>
<div style="text-align: justify;">
Well, all these difficulties are not very serious. To separate messages of "good" requests from messages of "bad" requests, we can use <a href="https://blog.rapid7.com/2016/12/23/the-value-of-correlation-ids/" rel="nofollow" target="_blank">correlation id</a>. All modern log-keeping systems support filtering of messages by their fields.</div>
</div>
<div>
<br /></div>
<div>
<div style="text-align: justify;">
Performance is also usually not a big problem. Logging systems support asynchronous writes, so the impact of heavy logging should not be severe.</div>
</div>
<div>
<br /></div>
<div>
<div style="text-align: justify;">
And storage space is relatively cheap now, so this is not a big problem either. Especially if we can delete old records.</div>
<br />
<div style="text-align: justify;">
Nevertheless, can we do better? Can we set a separate log level for each request depending on some condition? I'd like to investigate this problem here.</div>
<br />
<h3 style="text-align: left;">
Log level per request</h3>
<br />
<div style="text-align: justify;">
Allow me to formulate the requirements for the solution I'll implement. There should be a way to set the log level independently for each request. There must be a flexible way to define the conditions for choosing a log level for a request. And it should be possible to change these conditions at runtime without restarting the application.</div>
<br />
<div style="text-align: justify;">
The stage is set. Let's begin.</div>
<br />
<div style="text-align: justify;">
I'll write a simple Web API application using .NET Core. It will have a single controller:</div>
<br />
<pre><code lang="cs">[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
...
// GET api/values
[HttpGet]
public ActionResult<IEnumerable<string>> Get()
{
Logger.Info("Executing Get all");
return new[] { "value1", "value2" };
}
// GET api/values/5
[HttpGet("{id}")]
public ActionResult<string> Get(int id)
{
Logger.Info($"Executing Get {id}");
return "value";
}
}</code></pre>
<br />
<div style="text-align: justify;">
We'll discuss the implementation of the <i>Logger </i>property later. For this implementation, I'll use the <a href="https://logging.apache.org/log4net/index.html" rel="nofollow" target="_blank">log4net</a> library for logging. This library has an interesting feature. I'm talking about <a href="https://logging.apache.org/log4net/release/manual/introduction.html" target="_blank">level inheritance</a>. Very briefly, if in the logging configuration I say that the logger with the name <i>X</i> should have the log level <i>Info</i>, then all loggers with names starting with <i>X.</i> (like <i>X.Y</i>, <i>X.Z</i>, <i>X.A.B</i>) will inherit the same log level.</div>
<br />
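<div style="text-align: justify;">
For example (a sketch; the logger names here are hypothetical), if the configuration assigns the INFO level to the logger named <i>X</i>, the level propagates down the name hierarchy:</div>
<br />
<pre><code lang="cs">using System.Reflection;
using log4net;

var x   = LogManager.GetLogger(Assembly.GetExecutingAssembly(), "X");     // level Info (explicit)
var xy  = LogManager.GetLogger(Assembly.GetExecutingAssembly(), "X.Y");   // level Info (inherited)
var xab = LogManager.GetLogger(Assembly.GetExecutingAssembly(), "X.A.B"); // level Info (inherited)</code></pre>
<br />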
<div style="text-align: justify;">
So here comes the initial idea. For every request, I'll somehow calculate the required log level. Then I'll create a logger with a new name in the <i>log4net</i> configuration. This logger will have the desired log level. After that, all logger objects created during this request must have names prefixed with the name of that logger. The catch here is that <i>log4net</i> never removes loggers: once created, they live while the application is running. This is why I'll pre-create loggers with specific names for each log level:</div>
<br />
<pre><code lang="xml"><?xml version="1.0" encoding="utf-8" ?>
<log4net>
<appender name="Console" type="log4net.Appender.ConsoleAppender">
<layout type="log4net.Layout.PatternLayout">
<!-- Pattern to output the caller's file name and line number -->
<conversionPattern value="%5level [%thread] (%file:%line) - %message%newline" />
</layout>
</appender>
<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
<file value="RequestLoggingLog.log" />
<appendToFile value="true" />
<maximumFileSize value="100KB" />
<maxSizeRollBackups value="2" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%level %thread %logger - %message%newline" />
</layout>
</appender>
<root>
<level value="WARN" />
<appender-ref ref="Console" />
<appender-ref ref="RollingFile" />
</root>
<b> <logger name="EdlinSoftware.Log.Error">
<level value="ERROR" />
</logger>
<logger name="EdlinSoftware.Log.Warning">
<level value="WARN" />
</logger>
<logger name="EdlinSoftware.Log.Info">
<level value="INFO" />
</logger>
<logger name="EdlinSoftware.Log.Debug">
<level value="DEBUG" />
</logger></b>
</log4net></code></pre>
<br />
<div style="text-align: justify;">
Now I have several predefined loggers with names <i>EdlinSoftware.Log.XXXX</i>. These names will be prefixes for the names of the actual loggers. To avoid collisions between requests, I'll save the prefix for the current request in an <i>AsyncLocal</i> object. I'll set the value of this object inside a new middleware:</div>
<br />
<pre><code lang="cs">app.Use(async (context, next) =>
{
try
{
LogSupport.LogNamePrefix.Value = await LogSupport.GetLogNamePrefix(context);
await next();
}
finally
{
LogSupport.LogNamePrefix.Value = null;
}
});</code></pre>
<br />
<div style="text-align: justify;">
When this value is set, it is easy to create loggers with the desired prefix of name:</div>
<br />
<pre><code lang="cs">public static class LogSupport
{
public static readonly AsyncLocal<string> LogNamePrefix = new AsyncLocal<string>();
public static ILog GetLogger(string name)
{
return GetLoggerWithPrefixedName(name);
}
public static ILog GetLogger(Type type)
{
return GetLoggerWithPrefixedName(type.FullName);
}
private static ILog GetLoggerWithPrefixedName(string name)
{
if (!string.IsNullOrWhiteSpace(LogNamePrefix.Value))
{ name = $"{LogNamePrefix.Value}.{name}"; }
return LogManager.GetLogger(typeof(LogSupport).Assembly, name);
}
....
}</code></pre>
<br />
<div style="text-align: justify;">
It is clear now how to get an instance of logger inside our controller:</div>
<br />
<pre><code lang="cs">[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
private ILog _logger;
private ILog Logger
{
get => _logger ?? (_logger = LogSupport.GetLogger(typeof(ValuesController)));
}
....
}</code></pre>
<br />
<div style="text-align: justify;">
The only thing left is to set up the rules that define which log level should be assigned to a request. This mechanism should be flexible enough. The main idea here is to use C# scripting. I'll create a file <i>LogLevelRules.json</i> where I'll define a set of rule/log-level pairs:</div>
<br />
<pre><code lang="json">[
{
"logLevel": "Debug",
"ruleCode": "context.Request.Path.Value == \"/api/values/1\""
},
{
"logLevel": "Debug",
"ruleCode": "context.Request.Path.Value == \"/api/values/3\""
}
]</code></pre>
<br />
<div style="text-align: justify;">
Here <i>logLevel </i>is the desired log level, and <i>ruleCode</i> is C# code that returns a boolean value for a given request. The application will run the code of these rules one by one. The first rule that returns true sets the corresponding log level. If all rules return false, the default log level will be used.</div>
<br />
<div style="text-align: justify;">
To create delegates from the string representations of the rules, the <i>CSharpScript </i>class can be used:</div>
<br />
<pre><code lang="cs">public class Globals
{
public HttpContext context;
}
internal class LogLevelRulesCompiler
{
public IReadOnlyList<LogLevelRule> Compile(IReadOnlyList<LogLevelRuleDescription> levelRuleDescriptions)
{
var result = new List<LogLevelRule>();
foreach (var levelRuleDescription in levelRuleDescriptions ?? new LogLevelRuleDescription[0])
{
var script = CSharpScript.Create<bool>(levelRuleDescription.RuleCode, globalsType: typeof(Globals));
ScriptRunner<bool> runner = script.CreateDelegate();
result.Add(new LogLevelRule(levelRuleDescription.LogLevel, runner));
}
return result;
}
}
internal sealed class LogLevelRule
{
public string LogLevel { get; }
public ScriptRunner<bool> Rule { get; }
public LogLevelRule(string logLevel, ScriptRunner<bool> rule)
{
LogLevel = logLevel ?? throw new ArgumentNullException(nameof(logLevel));
Rule = rule ?? throw new ArgumentNullException(nameof(rule));
}
}</code></pre>
<br />
<div style="text-align: justify;">
Here the <i>Compile </i>method gets a list of objects read from the <i>LogLevelRules</i><i>.json</i> file. It creates a <i>runner</i> delegate for each rule and stores it for later usage.</div>
<br />
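The <i>LogLevelRuleDescription</i> class and the file reader are trivial. Here is a minimal sketch (my assumption: the JSON file maps directly onto these two properties; Newtonsoft.Json is used here):<br />
<br />
<pre><code lang="cs">using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public sealed class LogLevelRuleDescription
{
    public string LogLevel { get; set; }
    public string RuleCode { get; set; }
}

internal sealed class LogLevelRulesFileReader
{
    public IReadOnlyList<LogLevelRuleDescription> ReadFile(string path)
    {
        if (!File.Exists(path))
            return new LogLevelRuleDescription[0];

        var json = File.ReadAllText(path);
        return JsonConvert.DeserializeObject<LogLevelRuleDescription[]>(json)
               ?? new LogLevelRuleDescription[0];
    }
}</code></pre>
<br />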
This list of delegates can be stored:<br />
<br />
<pre><code lang="cs">LogSupport.LogLevelSetters = new LogLevelRulesCompiler().Compile(
new LogLevelRulesFileReader().ReadFile("LogLevelRules.json")
);</code></pre>
<br />
<div style="text-align: justify;">
and used later:</div>
<br />
<pre><code lang="cs">public static class LogSupport
{
internal static IReadOnlyList<LogLevelRule> LogLevelSetters = new LogLevelRule[0];
...
public static async Task<string> GetLogNamePrefix(HttpContext context)
{
var globals = new Globals
{
context = context
};
string result = null;
foreach (var logLevelSetter in LogLevelSetters)
{
if (await logLevelSetter.Rule(globals))
{
result = $"EdlinSoftware.Log.{logLevelSetter.LogLevel}";
break;
}
}
return result;
}
}</code></pre>
<br />
<div style="text-align: justify;">
So at the start of the application, we read the <i>LogLevelRules</i><i>.json</i> file, convert its content into a list of delegates using the <i>CSharpScript </i>class, and store it in the <i>LogSupport.LogLevelSetters</i> field. On each request, we run the delegates from this list to get the log level for the request.</div>
<br />
<div style="text-align: justify;">
The only thing left to do is to watch for modifications of the <i>LogLevelRules</i><i>.json</i> file. When we want to set the log level for some sort of requests, we add another checker to this file. To make the application apply these changes without a restart, we have to watch the file:</div>
<br />
<pre><code lang="cs">var watcher = new FileSystemWatcher
{
Path = Directory.GetCurrentDirectory(),
Filter = "*.json",
NotifyFilter = NotifyFilters.LastWrite
};
watcher.Changed += (sender, eventArgs) =>
{
// Wait for the application that modifies the file to release it..
Thread.Sleep(1000);
LogSupport.LogLevelSetters = new LogLevelRulesCompiler().Compile(
new LogLevelRulesFileReader().ReadFile("LogLevelRules.json")
);
};
watcher.EnableRaisingEvents = true;</code></pre>
<br />
<div style="text-align: justify;">
For the sake of brevity, I have not used thread synchronization code while working with the <i>LogSupport.LogLevelSetters</i> field. But in a real application, you really should add it.</div>
<div style="text-align: justify;">
<br /></div>
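<div style="text-align: justify;">
For example, since each compiled list is immutable, it is enough to publish the new reference atomically. A minimal sketch (the plain field from above becomes a property):</div>
<br />
<pre><code lang="cs">using System.Collections.Generic;
using System.Threading;

public static class LogSupport
{
    private static IReadOnlyList<LogLevelRule> _logLevelSetters = new LogLevelRule[0];

    // Readers always see either the old or the new complete list.
    internal static IReadOnlyList<LogLevelRule> LogLevelSetters
    {
        get => Volatile.Read(ref _logLevelSetters);
        set => Volatile.Write(ref _logLevelSetters, value);
    }
}</code></pre>
<br />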
<div style="text-align: justify;">
You can find the whole code of the application on <a href="https://github.com/yakimovim/request-logging/tree/62b8c3b28e4c85e37a71b9b5f5844c61b9be74d6" target="_blank">GitHub</a>.</div>
<br />
<h3 style="text-align: left;">
Disadvantages</h3>
<br />
<div style="text-align: justify;">
This code solves the problem of defining a log level per request. But it has some disadvantages too. Let's discuss them.</div>
<br />
<div style="text-align: justify;">
1. This approach changes the names of loggers. So in the log file, instead of "<i>MyClassLogger</i>", one will see something like "<i>EdlinSoftware.Log.Debug.MyClassLogger</i>". One can live with it, but it is not very convenient. Probably this problem can be overcome by playing with the log layout.</div>
<br />
<div style="text-align: justify;">
2. Now it is not possible to make logger instances static, as they should be created separately for each request. The most serious problem here for me is that all team members should always remember this. One can accidentally define a static field with a logger and get strange results. To overcome this, we could create a wrapper class for the logger and use it instead of using <i>log4net</i> classes directly. Such a wrapper class could create new instances of <i>log4net</i> loggers for each operation. In this case, there will be no problem with using a static instance of the wrapper class.</div>
<br />
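<div style="text-align: justify;">
A minimal sketch of such a wrapper (the class name is mine): a static instance is safe, because every call resolves a fresh <i>log4net</i> logger, so the request-specific name prefix is always applied.</div>
<br />
<pre><code lang="cs">using System;

public sealed class RequestAwareLogger
{
    private readonly string _name;

    public RequestAwareLogger(Type type) => _name = type.FullName;

    // Each call goes through LogSupport.GetLogger, which applies
    // the prefix of the current request.
    public void Debug(string message) => LogSupport.GetLogger(_name).Debug(message);
    public void Info(string message) => LogSupport.GetLogger(_name).Info(message);
    public void Warn(string message) => LogSupport.GetLogger(_name).Warn(message);
    // ... and so on for the other levels
}</code></pre>
<br />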
<div style="text-align: justify;">
3. The described approach creates many instances of loggers. It pollutes memory and takes up CPU cycles. Depending on the application, it may or may not be a problem.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
4. When we modify the JSON file with rules, the code of the rules can contain errors. It is easy to add try-catch blocks to make sure that these errors will not ruin our main program. But we still need to be aware that something went wrong. There could be two types of errors:</div>
<div style="text-align: justify;">
</div>
<ul>
<li>Compile-time errors while compiling rules code into delegates.</li>
<li>Run-time errors during execution of these delegates.</li>
</ul>
<br />
<div style="text-align: justify;">
Somehow we have to be aware of these errors; otherwise, we just won't get log messages and won't even know about it.</div>
<br />
5. The code of the rules in the JSON file can contain any instructions. Potentially, this is a security issue. We should somehow limit the capabilities of the code. On the other hand, if an adversary is able to modify your files, you already have a serious security problem.<br />
<h3 style="text-align: left;">
Conclusion</h3>
<br />
<div style="text-align: justify;">
In general, I should say that I don't feel this is a great solution that should replace the modern approach to logging. A good tool that can filter log records can help here even with a log level per application. In any case, I hope the analysis of the problem will give you something to think about.</div>
</div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-43377117541619967042018-12-10T16:40:00.001+03:002018-12-10T16:40:11.003+03:00Integration of Cake build script with TeamCity<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
<a class="" href="https://www.cakebuild.net/" rel="nofollow" target="_blank">Cake</a> is a great tool for organizing a delivery pipeline for your application. I like it because it lets me to write the pipeline using C#, the language I know well. The great property of Cake, PSake and other similar frameworks is that they allow as to use the same building script on a local development machine and on CI servers. Here I'll explain how to integrate Cake with <a href="https://www.jetbrains.com/teamcity/" rel="nofollow" target="_blank">TeamCity</a>.</div>
<br />
<a name='more'></a><div style="text-align: justify;">
I'll assume you have basic knowledge of Cake and TeamCity. Otherwise, you can start by reading:</div>
<br />
<ul style="text-align: left;">
<li>For Cake: </li>
<ul>
<li>Website: <a href="https://www.cakebuild.net/">https://www.cakebuild.net</a></li>
<li>Pluralsight course: <a href="https://www.pluralsight.com/courses/cake-applications-deploying-building">https://www.pluralsight.com/courses/cake-applications-deploying-building</a></li>
</ul>
<li>For TeamCity:</li>
<ul>
<li>Docs & demos: <a href="https://www.jetbrains.com/teamcity/documentation/">https://www.jetbrains.com/teamcity/documentation/</a></li>
<li>Online documentation: <a href="https://confluence.jetbrains.com/display/TCD18/TeamCity+Documentation">https://confluence.jetbrains.com/display/TCD18/TeamCity+Documentation</a></li>
</ul>
</ul>
<br />
<div style="text-align: justify;">
Now let's talk about Cake and TeamCity together.</div>
<br />
<h3 style="text-align: left;">
Logging</h3>
<div style="text-align: justify;">
<br />
A Cake pipeline usually consists of several tasks, and it would be good to have a separate, collapsible section for each such task in the TeamCity build log:</div>
<div style="text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4-C_wCnIqlgtCqVV3eswPnX4lm-t_Eg941bNn0WXVS2e0-DdDOsVWSqBNshkhn2rRedPpIldrnykTRTZilWTG4j9HjfePsJEFAX_2_BWAzKCPs3itiVhePwVGKuCgW84QiYKaVnmwJvz-/s1600/2018-12-04_17-35-37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="82" data-original-width="377" height="86" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4-C_wCnIqlgtCqVV3eswPnX4lm-t_Eg941bNn0WXVS2e0-DdDOsVWSqBNshkhn2rRedPpIldrnykTRTZilWTG4j9HjfePsJEFAX_2_BWAzKCPs3itiVhePwVGKuCgW84QiYKaVnmwJvz-/s400/2018-12-04_17-35-37.png" width="400" /></a></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The Cake API contains the methods <i>TeamCity.WriteStartBuildBlock</i> and <i>TeamCity.WriteEndBuildBlock</i>. Although it is possible to call them in each task, this can be automated. In Cake, there are <i>TaskSetup </i>and <i>TaskTeardown</i> methods that will be called before and after the execution of each task. They can be used to start and end a TeamCity block:<br />
<br />
<pre><code lang="cs">TaskSetup(setupContext =>
{
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.WriteStartBuildBlock(setupContext.Task.Name);
}
});
TaskTeardown(teardownContext =>
{
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.WriteEndBuildBlock(teardownContext.Task.Name);
}
});</code></pre>
<br /></div>
<div style="text-align: justify;">
Here the <i>TeamCity.IsRunningOnTeamCity</i> property is used to execute the code only when it runs on TeamCity.</div>
<br />
<div style="text-align: justify;">
Now we have collapsible blocks in the build log. But we can still improve it a little bit more.</div>
<br />
<div style="text-align: justify;">
Cake tasks tend to have short names like <i>Build</i>, <i>Test</i>, <i>Clean</i>, so that it is easier to run them from the command line. But in the build log, I'd prefer to have more detailed descriptions of Cake tasks. And it is possible to provide such descriptions. To set the description of a task, use the <i>Description</i> method:</div>
<br />
<pre><code lang="cs">Task("Clean")
.Description("Create and clean folders with results")
.Does(() => { ... });</code></pre>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Now, these descriptions can be used to form build log blocks:</div>
<div style="text-align: justify;">
<br /></div>
<pre><code lang="cs">TaskSetup(setupContext =>
{
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.WriteStartBuildBlock(setupContext.Task.Description ?? setupContext.Task.Name);
}
});
TaskTeardown(teardownContext =>
{
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.WriteEndProgress(teardownContext.Task.Description ?? teardownContext.Task.Name);
}
});</code></pre>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
This improves the readability of the build log.<br />
<br />
<h3>
Progress indication</h3>
<br />
If running a Cake script takes a lot of time, it would be great to see which task is currently executing.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9QU26ZhbEJLF1Pj93bsZf15zxqUQZy4mz_t-Pmgit_b-NLvqlEk-V0or4Qc6chxMfMaXCbjy2jeSUNZLQuVRyTeh35e_XCcqj_V7UuV2gjYnRXw79WYVb7x2llqzuEzF5SlECDdOsAnlT/s1600/2018-12-05_17-21-45.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="173" data-original-width="1019" height="107" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9QU26ZhbEJLF1Pj93bsZf15zxqUQZy4mz_t-Pmgit_b-NLvqlEk-V0or4Qc6chxMfMaXCbjy2jeSUNZLQuVRyTeh35e_XCcqj_V7UuV2gjYnRXw79WYVb7x2llqzuEzF5SlECDdOsAnlT/s640/2018-12-05_17-21-45.png" width="640" /></a></div>
<br />
<br />
This can be done using the <i>TeamCity.WriteStartProgress</i> and <i>TeamCity.WriteEndProgress</i> methods. Their calls can be inserted into the same <i>TaskSetup </i>and <i>TaskTeardown</i>:</div>
<div style="text-align: justify;">
<br />
<pre><code lang="cs">TaskSetup(setupContext =>
{
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.WriteStartBuildBlock(setupContext.Task.Description ?? setupContext.Task.Name);
TeamCity.WriteStartProgress(setupContext.Task.Description ?? setupContext.Task.Name);
}
});
TaskTeardown(teardownContext =>
{
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.WriteEndProgress(teardownContext.Task.Description ?? teardownContext.Task.Name);
TeamCity.WriteEndBuildBlock(teardownContext.Task.Description ?? teardownContext.Task.Name);
}
});</code></pre>
<br />
<h3>
Tests results</h3>
<br />
If you run some tests in your Cake task, it would be great to show the results of their execution in TeamCity.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTbnl4PqhmwgXkDLUHkqmKUsqsnhWpH8MBfOLEnO7ZXp80KpugAdVNPHPAY4ibnjuHlOh_d0gKjw8CvJYKUJcHSS6RCnYphztwuzAIfi15ut976oNfKrpCjDw0ZhtaHsV8KWBv78rgcP_w/s1600/2018-12-05_17-34-15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="127" data-original-width="489" height="103" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTbnl4PqhmwgXkDLUHkqmKUsqsnhWpH8MBfOLEnO7ZXp80KpugAdVNPHPAY4ibnjuHlOh_d0gKjw8CvJYKUJcHSS6RCnYphztwuzAIfi15ut976oNfKrpCjDw0ZhtaHsV8KWBv78rgcP_w/s400/2018-12-05_17-34-15.png" width="400" /></a></div>
<br />
<br />
This can be done using the <i>TeamCity.ImportData</i> method. This method accepts two parameters: a string description of the data type and a path to a file with the data. For example, if MSTest is used for tests, here is how you can execute the tests and inform TeamCity about their results:<br />
<br />
<pre><code lang="cs">Task("Run-Tests")
.Description("Run tests")
.IsDependentOn("Clean")
.IsDependentOn("Build")
.Does(() => {
var testDllsPattern = string.Format("./**/bin/{0}/*.*Tests.dll", configuration);
var testDlls = GetFiles(testDllsPattern);
var testResultsFile = System.IO.Path.Combine(temporaryFolder, "testResults.trx");
MSTest(testDlls, new MSTestSettings() {
ResultsFile = testResultsFile
});
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.ImportData("mstest", testResultsFile);
}
});</code></pre>
<br />
TeamCity supports several types of test reports. Instead of <i class="">mstest</i> you can use <i class="">nunit</i>, <i class="">vstest</i> and <a href="https://confluence.jetbrains.com/display/TCD18/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-ImportingXMLReports" rel="nofollow" target="_blank">several more</a>.<br />
<br />
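For example (a sketch; the result-file variables here are assumptions):<br />
<br />
<pre><code lang="cs">// For NUnit results:
TeamCity.ImportData("nunit", nunitResultsFile);

// For VSTest results:
TeamCity.ImportData("vstest", vstestResultsFile);</code></pre>
<br />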
<h3>
Code coverage analysis</h3>
<br />
TeamCity can show the results of code coverage by tests.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAhUGx47f7VZaXPgZvN5vZOVkw4-qb5tXDcWyqn0yQCfTt7ewlZbLKUL-cMvNFFNwyg6k-1d3wdCm95yoxS8k66wub_nZ5i8EVXSL8bN_yvGWnmN388ROmAgGTuYh-mnZ4aQriqJ3t-9Xg/s1600/2018-12-05_17-44-04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="320" data-original-width="559" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAhUGx47f7VZaXPgZvN5vZOVkw4-qb5tXDcWyqn0yQCfTt7ewlZbLKUL-cMvNFFNwyg6k-1d3wdCm95yoxS8k66wub_nZ5i8EVXSL8bN_yvGWnmN388ROmAgGTuYh-mnZ4aQriqJ3t-9Xg/s640/2018-12-05_17-44-04.png" width="640" /></a></div>
<br />
Currently, TeamCity supports integration with the <a href="https://www.jetbrains.com/dotcover/?fromMenu" rel="nofollow" target="_blank">DotCover</a> tool. Let me show how to use DotCover in your Cake script. First of all, DotCover must be installed by adding:<br />
<br />
<pre><code lang="cs">#tool "nuget:?package=JetBrains.dotCover.CommandLineTools"</code></pre>
<br />
Now it can be used in your task:<br />
<br />
<pre><code lang="cs">Task("Analyse-Test-Coverage")
.Description("Analyse code coverage by tests")
.IsDependentOn("Clean")
.IsDependentOn("Build")
.Does(() => {
var coverageResultFile = System.IO.Path.Combine(temporaryFolder, "coverageResult.dcvr");
var testDllsPattern = string.Format("./**/bin/{0}/*.*Tests.dll", configuration);
var testDlls = GetFiles(testDllsPattern);
var testResultsFile = System.IO.Path.Combine(temporaryFolder, "testResults.trx");
DotCoverCover(tool => {
tool.MSTest(testDlls, new MSTestSettings() {
ResultsFile = testResultsFile
});
},
new FilePath(coverageResultFile),
new DotCoverCoverSettings()
.WithFilter("+:Application")
.WithFilter("-:Application.*Tests")
);
if(TeamCity.IsRunningOnTeamCity)
{
TeamCity.ImportData("mstest", testResultsFile);
TeamCity.ImportDotCoverCoverage(coverageResultFile);
}
});</code></pre>
<br />
As you can see, the tests were also run during this task, so we can inform TeamCity both about the test results and the coverage analysis results. The <i>TeamCity.ImportDotCoverCoverage</i> method does the latter.<br />
<br />
<h3>
Publishing artifacts</h3>
<br />
TeamCity allows you to publish artifacts that will be available for each build. A good candidate for such an artifact is a NuGet package created during the build process:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgitRdxzhAOPuqd2tBe1Umr_TEDU9Eklg8qSp_fqPWnkb4lfLV4gqXY9uOOHBiFhYVOgwp59B_OeOfR83-07ZVIYHLz4MbQVpLedDHEuxWpWpHFePj3O41I-SrWSno5rMGATQtByzW4Hh8G/s1600/2018-12-07_11-01-59.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="192" data-original-width="577" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgitRdxzhAOPuqd2tBe1Umr_TEDU9Eklg8qSp_fqPWnkb4lfLV4gqXY9uOOHBiFhYVOgwp59B_OeOfR83-07ZVIYHLz4MbQVpLedDHEuxWpWpHFePj3O41I-SrWSno5rMGATQtByzW4Hh8G/s640/2018-12-07_11-01-59.png" width="640" /></a></div>
<br />
To do it, place all your artifacts in a folder. Then you can publish this folder using <i>TeamCity.PublishArtifacts</i>:<br />
<br />
<pre><code lang="cs">Task("Publish-Artifacts-On-TeamCity")
.Description("Publish artifacts on TeamCity")
.IsDependentOn("Create-NuGet-Package")
.WithCriteria(TeamCity.IsRunningOnTeamCity)
.Does(() => {
TeamCity.PublishArtifacts(artifactsFolder);
});</code></pre>
<br />
<h3>
Conclusion</h3>
<br />
I hope these short code snippets will save you some time and effort if you want to run your Cake script on TeamCity. You can find the full version of the Cake script and the application on <a href="https://github.com/yakimovim/cake-teamcity-integration/tree/77eb41dec2cbf3c11dfb989f99d443e2dd6e1c0b" target="_blank">GitHub</a>. Good luck!</div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-80522804825546655012018-10-01T20:32:00.000+03:002018-10-01T20:32:20.679+03:00Example of Microsoft Flow usage or Flowers for my wife<div dir="ltr" style="text-align: left;" trbidi="on">
Here I'll show you a not-so-simple example of using <a href="https://emea.flow.microsoft.com/en-us/" rel="nofollow" target="_blank">Microsoft Flow</a> for one practical task.<br />
<br />
<a name='more'></a><div style="text-align: justify;">
Sometimes programming challenges come from everyday life. This was one of them. I want to buy flowers for my wife from time to time, e.g. once a month. But there is a catch. I don't want to do it on some specific date, like the first day of every month; in that case, it would not look like a surprise. I want some randomization. For example, the next time to buy flowers should be one month after the previous time, plus or minus a couple of days.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
There is nothing special about this requirement. It is easy to write a computer program that will notify me of these events. But a desktop application would have a big drawback: it would work only on one machine, while I have several of them (at work, at home, ...). And a smartphone... Wouldn't it be great if I could get my notifications on any device I currently use?</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Well, I can store information about events somewhere in an Internet database (like <a href="https://mlab.com/" rel="nofollow" target="_blank">Mlab</a>). In this case, all instances of my application will use and change the same information, which will allow them to work in sync. But I'll still need to install this application on all my computers. Also, as I use Windows on my computers and Android on my smartphone, I'll actually have to write different applications if I want to use them everywhere. How can I overcome this obstacle? With the Web, of course.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
If I create a Web application, I'll be able to use it on virtually any device I have. Great! It's decided, I must create a Web application. But still, it is not cool enough. Let me explain my point. Although it is not hard to write a Web application solving my problem, there are still many things I should care about. I must think about hosting, about storage for my data, about a code repository, ... And why? The functionality I require is practically implemented in modern Web calendars like <a href="https://calendar.google.com/" rel="nofollow" target="_blank">Google Calendar</a>. They can create events, make them recurring, send notifications when the time has come, etc. The only thing they can't do is the randomization I want to have. Wouldn't it be cool if I could just add this single feature and use all the other features as well?<br />
<br />
When I thought about this problem I came across the site <a href="https://ifttt.com/" rel="nofollow" target="_blank">IFTTT.com</a>. The idea of this site is simple, yet powerful. If something happens, it does something. I know it sounds strange, so let me give you some examples. If I receive an email from a specific person, send me an SMS. If my favorite author writes a new article in his blog, inform me in Slack. Or if the time for some specific event has come in Google Calendar, send me an email. I hope you see now where I'm going. This service can observe some events (they are called triggers) and take some actions when events happen. There are really tons of triggers and actions on IFTTT.com. I could watch my events on Google Calendar, and when a particular event occurs, I could send myself an email, delete the old event from the calendar, and recreate it at some later time. Great! This is what I need! But there are obstacles here as well.<br />
<br />
First of all, IFTTT allows only one action per trigger. This is not a big problem, as I can have several applets with identical triggers (an applet is a combination of a trigger and an action in IFTTT). One will send me an email, another will delete the old event from the calendar, and the last will create a new event there. But there is a bigger problem. For the new event, I must generate the occurrence time randomly. And I have not found a way to do it in IFTTT. This means that this service can't solve my task. But maybe there are similar services on the Internet. And yes, there are.<br />
<br />
The next service I came across was <a href="https://zapier.com/" rel="nofollow" target="_blank">Zapier</a>. Here we can create several actions for one trigger, which is good. But it is available only on paid plans, which is bad. I did not play with Zapier a lot, but it seemed to me that it also has no means for the required randomization. I may be wrong here. Anyway, I moved on to the next candidate.<br />
<br />
It was <a href="https://emea.flow.microsoft.com/" rel="nofollow" target="_blank">Microsoft Flow</a>. This service allows 750 trigger operations per month for free, which is more than enough for my needs. Moreover, it supports expressions, and there is a <a href="https://docs.microsoft.com/ru-ru/azure/logic-apps/workflow-definition-language-functions-reference#rand" rel="nofollow" target="_blank">rand()</a> function! This is what I need. So let me show you how to solve the task with Microsoft Flow.<br />
<br />
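Before we get to the UI, here is the heart of the solution: an expression for the start time of the recreated event. One month from now, plus or minus a couple of days, can be computed like this (a sketch using the rand(), add() and addDays() functions; note that rand(-2, 3) returns an integer from -2 to 2, as the upper bound is exclusive):<br />
<br />
<pre><code>addDays(utcNow(), add(30, rand(-2, 3)))</code></pre>
<br />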
First of all, you should create a new flow. A flow is a combination of a trigger and actions. To do it, register on the site and click on "My flows":<br />
<br /></div>
<div style="text-align: justify;">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgT7nn-g1EHOVw3veZssEnsIWmcsQU3IWUic-IV7XMsfygiA6S4XhQshtt8I-oP53Crpyj-s_GZ91nyhwosMtqd80-FV8AaPpElLuY2rjo1a7t6E8mtqx0qKrX0WcjTsqncvt0tMqRw81Y-/s1600/2018-09-19_17-34-52.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="118" data-original-width="1420" height="33" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgT7nn-g1EHOVw3veZssEnsIWmcsQU3IWUic-IV7XMsfygiA6S4XhQshtt8I-oP53Crpyj-s_GZ91nyhwosMtqd80-FV8AaPpElLuY2rjo1a7t6E8mtqx0qKrX0WcjTsqncvt0tMqRw81Y-/s400/2018-09-19_17-34-52.png" width="400" /></a></div>
<br /></div>
<div style="text-align: justify;">
Click on "Create from blank" button:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeHkxey8ZZAF8MXN3SWXqwOZg4FegwdGWwfml91ArdBUmuRkFo9mM63n6Ct0w_WBL8_bOxBM6lquUm79L_wkbqh53nuBT0l6ZwP6RTyeW6Yk0BDgzybGCMcqjmPLQvItCyIRTrLSwkYjfu/s1600/2018-09-19_17-40-34.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="259" data-original-width="374" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeHkxey8ZZAF8MXN3SWXqwOZg4FegwdGWwfml91ArdBUmuRkFo9mM63n6Ct0w_WBL8_bOxBM6lquUm79L_wkbqh53nuBT0l6ZwP6RTyeW6Yk0BDgzybGCMcqjmPLQvItCyIRTrLSwkYjfu/s320/2018-09-19_17-40-34.png" width="320" /></a></div>
<br />
You'll be asked to select a trigger. Enter "calendar" in the search box and choose "Google Calendar":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1NjCjf9i9zrQ5UZEEw6tMLWkrq6BmPq4C3GsUsoU1Z_ogRSqle_Ugr7TxJEsvP3OJVV1NVYCGoSxjMU2x_ml47I_1likqHb6EkVhrRhKaeyX8UoCr-KyI_o7XhO6Sz4j3OLOL37W7wKPE/s1600/2018-09-19_17-42-25.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="234" data-original-width="919" height="100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1NjCjf9i9zrQ5UZEEw6tMLWkrq6BmPq4C3GsUsoU1Z_ogRSqle_Ugr7TxJEsvP3OJVV1NVYCGoSxjMU2x_ml47I_1likqHb6EkVhrRhKaeyX8UoCr-KyI_o7XhO6Sz4j3OLOL37W7wKPE/s400/2018-09-19_17-42-25.png" width="400" /></a></div>
<br />
In the list of available triggers for Google Calendar select "When an event starts":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnoYw90rsC3jFcim7L8HKma4-B1qsdurYQvNV8FCrq7TT18WCVlNrBlJj3Mp7D_BYQ0QnQxbuQqivOqi_0kfLcK8gmcYQ2a4wo0jX4DbWyb2SfPmYn7MHklHMXqDB0WW0gkbmqUyhjdMt-/s1600/2018-09-19_17-44-28.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="410" data-original-width="1055" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnoYw90rsC3jFcim7L8HKma4-B1qsdurYQvNV8FCrq7TT18WCVlNrBlJj3Mp7D_BYQ0QnQxbuQqivOqi_0kfLcK8gmcYQ2a4wo0jX4DbWyb2SfPmYn7MHklHMXqDB0WW0gkbmqUyhjdMt-/s400/2018-09-19_17-44-28.png" width="400" /></a></div>
Here you may be asked to authorize Microsoft Flow to access your Google Calendar on your behalf.<br />
<br />
Now your trigger is ready. The only parameter you should set is a calendar:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAH7-FY_zLor1oDqbI9cSLCA8k12JpHUHUrVbSOyU8b9k9x-q3SuWqRSKPjZt98Ci7-US3EvKhdTBzCYOLAahivRHj5JNOz0_OY6dF_xWHYXKhE_7zu6X-s64faEIq1p3Shx-yk3zcPnN5/s1600/2018-09-19_17-48-38.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="120" data-original-width="613" height="62" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAH7-FY_zLor1oDqbI9cSLCA8k12JpHUHUrVbSOyU8b9k9x-q3SuWqRSKPjZt98Ci7-US3EvKhdTBzCYOLAahivRHj5JNOz0_OY6dF_xWHYXKhE_7zu6X-s64faEIq1p3Shx-yk3zcPnN5/s320/2018-09-19_17-48-38.png" width="320" /></a></div>
Yes, in Google Calendar you can create several calendars. Each event belongs to one calendar, and you should choose the calendar whose events will trigger your actions.<br />
<br />
It is completely fine to create a calendar and put all the events that should trigger your actions there. But if not all events in the calendar should be processed, you may add filtering. Press "New step" and then "Add a condition":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGu5wjfyxhGNC7ENrkFNbZHLtBj1OwgQZVl1zcg-MvZtrFhnzNkFYKUUFxIL2YLRJzNWzdS5Ng_h_QJlSMgCgUWV7zsCB2sBwK2zPy87lLAaRndQYrfFVM9MY10ZPV4UnByFL7nVpVTiLQ/s1600/2018-09-19_17-54-43.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="189" data-original-width="400" height="151" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGu5wjfyxhGNC7ENrkFNbZHLtBj1OwgQZVl1zcg-MvZtrFhnzNkFYKUUFxIL2YLRJzNWzdS5Ng_h_QJlSMgCgUWV7zsCB2sBwK2zPy87lLAaRndQYrfFVM9MY10ZPV4UnByFL7nVpVTiLQ/s320/2018-09-19_17-54-43.png" width="320" /></a></div>
A filter will be created for you. Now we should decide how we want to filter our events. For example, I only want to process events that contain the text "[RANDOM]" in the location field. To do it, click on the "Choose value" text box. Microsoft Flow will show you a list of possible values you can work with:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAJ83E_yo3vsl_KtbQoXbkDkzQkHnh6U7419ugZr6rzqvcyCxyvO7qEhdhkaDichuQkfEhIZNCnELrmFgRyOABO40hucPkdnB3llFCyQ4Kixw7dvhY5NlwK__j2VBt_W5JDpUx1uwxpl-L/s1600/2018-09-20_17-25-32.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="529" data-original-width="635" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAJ83E_yo3vsl_KtbQoXbkDkzQkHnh6U7419ugZr6rzqvcyCxyvO7qEhdhkaDichuQkfEhIZNCnELrmFgRyOABO40hucPkdnB3llFCyQ4Kixw7dvhY5NlwK__j2VBt_W5JDpUx1uwxpl-L/s320/2018-09-20_17-25-32.png" width="320" /></a></div>
<br />
Click on "Event List Event Location". Also, fill other text boxes with corresponding values:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_06kWWytoJi-DR7xK0uCYzrSc2o4sv48eZxUFT-MTvqN2RshcnlAyVYI4Y_zUON0kZdsxeuhBB_0xOxW7af3Z8cFLD67859p8fwTozIHH3df-nE0tJHC55XYAtY7vNSZtPjnE745cc53q/s1600/2018-09-20_17-30-41.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="148" data-original-width="615" height="77" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_06kWWytoJi-DR7xK0uCYzrSc2o4sv48eZxUFT-MTvqN2RshcnlAyVYI4Y_zUON0kZdsxeuhBB_0xOxW7af3Z8cFLD67859p8fwTozIHH3df-nE0tJHC55XYAtY7vNSZtPjnE745cc53q/s320/2018-09-20_17-30-41.png" width="320" /></a></div>
<br />
Note the "Edit in advanced mode" link. It is very useful. If you click it, you'll see the same condition in the form of a text expression:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsntffHW3V0pU2FWLXADuGSe_7hhYY4cB1jiYTAb8YSUPGBfB8qdPWYgLuiXKZ3Msz1LPM2IXMUl1rWiYKzekW3KEP1gR3WVbJZv35xy9QNpNHtHTuc4utkk5eh4Hnrz01GYUiWBS9kYZb/s1600/2018-09-20_17-33-07.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="147" data-original-width="609" height="77" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsntffHW3V0pU2FWLXADuGSe_7hhYY4cB1jiYTAb8YSUPGBfB8qdPWYgLuiXKZ3Msz1LPM2IXMUl1rWiYKzekW3KEP1gR3WVbJZv35xy9QNpNHtHTuc4utkk5eh4Hnrz01GYUiWBS9kYZb/s320/2018-09-20_17-33-07.png" width="320" /></a></div>
<br />
It will be of great help when the time comes to write our own expressions.<br />
<br />
Now we can add actions to our events. In the "If yes" branch of our filter, click the "Add an action" link. Here I'll create an action that sends me an email using Gmail:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGamTvRVnN-V5lzrjElCDi5TMz8hLt9OfvW9Jola3KW2eKXQuXbam3kg1AXuXuX2hA7Em83vvQt3W5jsd-Y9lrTfWE-qi_iRWP0Nb0Tl4XPE_95lciH3TYrjqUN5Wg0FoczwvYUk5cqbFG/s1600/2018-09-20_17-37-32.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="608" data-original-width="647" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGamTvRVnN-V5lzrjElCDi5TMz8hLt9OfvW9Jola3KW2eKXQuXbam3kg1AXuXuX2hA7Em83vvQt3W5jsd-Y9lrTfWE-qi_iRWP0Nb0Tl4XPE_95lciH3TYrjqUN5Wg0FoczwvYUk5cqbFG/s320/2018-09-20_17-37-32.png" width="320" /></a></div>
<br />
As you can see, you can use data from our event to fill in the Subject, Body, and other properties of the action.<br />
<br />
Now we have our notification. It is time to delete the old event in the calendar and create a new one at a later time. With the help of the "Add an action" link, I'll create a "Delete an event" action for Google Calendar:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0x1al1G9ZVheqVmllo-EkqTq0SdrvoeOwx6uzabVZQ_XFvg4XWwUtvSQfdABmPvddv12Qg0mnqVmJ63C55e-AaBOhQ6KMfgsn3jSLoz0Skj005099KbcTUoNSBr_ETksNVrODSP1hZKpc/s1600/2018-09-20_17-45-53.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="444" data-original-width="638" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0x1al1G9ZVheqVmllo-EkqTq0SdrvoeOwx6uzabVZQ_XFvg4XWwUtvSQfdABmPvddv12Qg0mnqVmJ63C55e-AaBOhQ6KMfgsn3jSLoz0Skj005099KbcTUoNSBr_ETksNVrODSP1hZKpc/s320/2018-09-20_17-45-53.png" width="320" /></a></div>
<br />
<br />
I want to make you aware that it is possible to just update a calendar event instead of deleting and recreating it, but here I'll stick to the latter strategy to illustrate one more possibility of Microsoft Flow. Let's say I want my creation action to be executed not after the delete action, but in parallel with it. Hover your mouse over the arrow between the "Send email" and "Delete an event" actions. A plus sign will appear. Click on it and select "Add a parallel branch" -> "Add an action" from the context menu:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRDte82KHQXLr9VNEwTrhVyt6EpwBg9fSRMVlzeRe7TsOVwrKAY-IdafxCrNUkgUWaKxvleZ5FQWw5iBJT0W7vbBd9xR1L18TDKNhkMk_X8eJyIXxk8rls1lWtlRtfqM2pWzI9Rkye4s5F/s1600/2018-09-21_16-52-44.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="482" data-original-width="594" height="259" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRDte82KHQXLr9VNEwTrhVyt6EpwBg9fSRMVlzeRe7TsOVwrKAY-IdafxCrNUkgUWaKxvleZ5FQWw5iBJT0W7vbBd9xR1L18TDKNhkMk_X8eJyIXxk8rls1lWtlRtfqM2pWzI9Rkye4s5F/s320/2018-09-21_16-52-44.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Now we can add a parallel "Create an event" action for Google Calendar. For this action, we'll take the title, description, and location from the initial event. The only thing left to do is to fill in its start time and end time:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitihVWzYJ_zXI754lA4SmRIrWPM8AAA6L7Sf8ykxaZ-5RaaKr1VAp0hmiMa38hnlj2ONCugeYmX3l17_oglVXYvZUeLsQWK87wj_Ik83MIv6daUjltwjSm2kty7PtTycGo5_KZnrHCYbrl/s1600/2018-09-21_16-57-35.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="422" data-original-width="1212" height="111" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitihVWzYJ_zXI754lA4SmRIrWPM8AAA6L7Sf8ykxaZ-5RaaKr1VAp0hmiMa38hnlj2ONCugeYmX3l17_oglVXYvZUeLsQWK87wj_Ik83MIv6daUjltwjSm2kty7PtTycGo5_KZnrHCYbrl/s320/2018-09-21_16-57-35.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
And now we come to the really interesting part. We need to add some random number of days to the start time of the event, and we'll use this new date as the start time for the new event. For example, I'd like to add 30 days plus/minus 2 days. You can find documentation about the functions you can use <a href="https://docs.microsoft.com/ru-ru/azure/logic-apps/workflow-definition-language-functions-reference#functions-in-expressions" rel="nofollow" target="_blank">here</a>. I admit it is not a very easy text to read. I had many questions, especially about how to extract the start time of the initial event. Some help can be obtained from our condition action. Do you remember the "Edit in advanced mode" link:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh655k4Z7ZgrAGib2OkMf_Fh169_VBEMknXno9f9j5wzYpuuBdOjSg1uPWlGn5gIvzp057pe6S8Z5A6MEgLSgQ_JzkNI9mDGYMnf0mjsECBZCzTPHfiWWEfnfPPX8b17yIidvkopGaSFWw8/s1600/2018-09-20_17-30-41.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="148" data-original-width="615" height="77" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh655k4Z7ZgrAGib2OkMf_Fh169_VBEMknXno9f9j5wzYpuuBdOjSg1uPWlGn5gIvzp057pe6S8Z5A6MEgLSgQ_JzkNI9mDGYMnf0mjsECBZCzTPHfiWWEfnfPPX8b17yIidvkopGaSFWw8/s320/2018-09-20_17-30-41.png" width="320" /></a></div>
Clicking on this link will show the corresponding expression:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0wDyOAxInhacmJBE2jdJdtQ6DWo9NdE0M1DiuovKM1vGsu2hK26MAEnCJvl5sx9VDim0SM9DjUkpMw6IuSdaIgVv2Z4fH2Epy7GyfCj-qkYiHnYCPGX_fuLjFBRfY1KwSrXDLgEaI-jb8/s1600/2018-09-24_17-03-57.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="142" data-original-width="608" height="74" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0wDyOAxInhacmJBE2jdJdtQ6DWo9NdE0M1DiuovKM1vGsu2hK26MAEnCJvl5sx9VDim0SM9DjUkpMw6IuSdaIgVv2Z4fH2Epy7GyfCj-qkYiHnYCPGX_fuLjFBRfY1KwSrXDLgEaI-jb8/s320/2018-09-24_17-03-57.png" width="320" /></a></div>
<br />
It gave me some help with writing my own expressions. Now click on the "Start time" field of the "Create an event" action and choose the "Expression" tab:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSk4oG7wXf47CEei2tt0waeELfZY1Dw1d3uNmwGCcmV53MqAp9J0vdARoYs61n28C2RrqxFggUk2rnF-vusJloVa05jOsjvUu3Y8LMG25216T-oS_pZr7STqj1A7qhoNGM472Oux_dVA_Q/s1600/2018-09-24_17-06-34.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="370" data-original-width="1018" height="145" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSk4oG7wXf47CEei2tt0waeELfZY1Dw1d3uNmwGCcmV53MqAp9J0vdARoYs61n28C2RrqxFggUk2rnF-vusJloVa05jOsjvUu3Y8LMG25216T-oS_pZr7STqj1A7qhoNGM472Oux_dVA_Q/s400/2018-09-24_17-06-34.png" width="400" /></a></div>
<br />
<br />
In the text box we can enter an expression like:<br />
<br />
<div style="text-align: left;">
<i>addDays(triggerBody()?['start'], add(30, rand(mul(2, -1), add(2, 1))))</i></div>
<br />
and press the "Ok" button. It will solve exactly our task: add 30 days plus/minus 2 days. But what if I want some flexibility? What if I want to have several types of events. For the first type, I'd like to increase the start time by 30 days plus/minus 2 days, for the second type, I'd like to increase the start time by 14 days plus/minus 3 days, etc... How can I achieve this?<br />
<br />
Here is the approach I have used. Do you remember that we keep the string "[RANDOM]" inside the location fields of our events? Now I'll add some information to this field. It will contain text in the format "[RANDOM],NN,MM", where NN is two digits and MM is also two digits. I'll increase the start time of the event by NN days plus/minus MM days. In this case, I can be sure that characters 9 and 10 (counting from 0) of this string represent NN, and characters 12 and 13 represent MM. And here is an expression using this format to increase the start time of an event:<br />
<br />
<div style="text-align: left;">
<i>addDays(triggerBody()?['start'], add(int(substring(triggerBody()?['location'], 9, 2)), rand(mul(int(substring(triggerBody()?['location'], 12, 2)), -1), add(int(substring(triggerBody()?['location'], 12, 2)), 1))))</i></div>
<br />
In general, it uses the <i><a href="https://docs.microsoft.com/ru-ru/azure/logic-apps/workflow-definition-language-functions-reference#substring" rel="nofollow" target="_blank">substring</a></i> function to extract the required parts of the string and the <i><a href="https://docs.microsoft.com/ru-ru/azure/logic-apps/workflow-definition-language-functions-reference#int" rel="nofollow" target="_blank">int</a></i> function to convert the results into integers.<br />
<br />
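For example, take a hypothetical location value of "[RANDOM],30,02" (meaning 30 days plus/minus 2 days). The expression decomposes like this:<br />
<br />
<pre><code lang="text">location:                         [RANDOM],30,02
int(substring(location, 9, 2))    -> 30  (NN: characters 9-10, the base number of days)
int(substring(location, 12, 2))   -> 2   (MM: characters 12-13, the random spread in days)
rand(mul(2, -1), add(2, 1))       =  rand(-2, 3) -> an integer from -2 to 2
add(30, rand(-2, 3))              -> 28..32 days to add to the start time</code></pre>
<br />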
Cool! We are almost there. The only thing left to do is to set the end time of the event. And here we meet our last obstacle. I want the end time to be equal to the start time plus 15 minutes. Microsoft Flow has the function <a href="https://docs.microsoft.com/ru-ru/azure/logic-apps/workflow-definition-language-functions-reference#addMinutes" rel="nofollow" target="_blank"><i>addMinutes</i></a>, and we can write something like this:<br />
<br />
<div style="text-align: left;">
<i>addMinutes(<the previous expression here>, 15)</i></div>
<br />
But due to the nature of the <i><a href="https://docs.microsoft.com/ru-ru/azure/logic-apps/workflow-definition-language-functions-reference#rand" rel="nofollow" target="_blank">rand</a></i> function, evaluating the expression a second time would give us a different random value, completely unrelated to our start time. Instead, I would like to have some variable '<i>nextStart</i>' here that will keep the value of our expression. In this case, I'll use this variable for the start time, and use<br />
<br />
<div style="text-align: left;">
<i>addMinutes(<the value of 'nextStart' variable>, 15)</i></div>
<br />
for the end time of my new event. And you know what? Microsoft Flow has variables. First of all, we need to initialize one. Hover your mouse over the arrow between our trigger and condition. Click on the plus sign and select "Add action":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRFPT-BjdLYZMcCvYrJQo2ZSrx3QXMLSntNmz0cF8trJXKbC1JU0WHmhqTQDfsBMSsL0cT025vB1JtvGVST0JPaMuBuMjlp1S8snJbrHo_tkeWGbalDVyxtlttbUU_Ylq6DN_VigY-fgQB/s1600/2018-09-25_16-58-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="317" data-original-width="615" height="164" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRFPT-BjdLYZMcCvYrJQo2ZSrx3QXMLSntNmz0cF8trJXKbC1JU0WHmhqTQDfsBMSsL0cT025vB1JtvGVST0JPaMuBuMjlp1S8snJbrHo_tkeWGbalDVyxtlttbUU_Ylq6DN_VigY-fgQB/s320/2018-09-25_16-58-03.png" width="320" /></a></div>
<br />
Enter "Variables" into the search box and select "Initialize variable":<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgodPQVLy424b8z4Ms0JVTk30A5PKb5VrSbkRimlsnhNyDMxjWTzvjO8PcO4t3knB6dtYEcW9ioznm3kzdld-ria3s4P-XyRV-9tpVtVN8lU1qPHI0LZsP-Tf6n191r6kx7o1uy-7fSaqRr/s1600/2018-09-25_17-01-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="670" data-original-width="623" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgodPQVLy424b8z4Ms0JVTk30A5PKb5VrSbkRimlsnhNyDMxjWTzvjO8PcO4t3knB6dtYEcW9ioznm3kzdld-ria3s4P-XyRV-9tpVtVN8lU1qPHI0LZsP-Tf6n191r6kx7o1uy-7fSaqRr/s320/2018-09-25_17-01-07.png" width="297" /></a></div>
<br />
Set the name of the variable to '<i>nextStart</i>' and its type to "String". Microsoft Flow does not have a date/time type, so it uses strings (in ISO 8601 format) to represent dates and times as well.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_bsEVG_TiCm19SbHjgwQUGGGx6bw6Wurv2b2k8vOcav_3PB4Jh1jySleCLzDiTSUlsgmXJDoQ1R2897A7HoZXhvbiNGAHYRjm4y66SfPskPE0Q7_8a2ws3gvUeYzY7fmRzIwFuWKXfJv_/s1600/2018-09-25_17-02-32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="173" data-original-width="610" height="90" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_bsEVG_TiCm19SbHjgwQUGGGx6bw6Wurv2b2k8vOcav_3PB4Jh1jySleCLzDiTSUlsgmXJDoQ1R2897A7HoZXhvbiNGAHYRjm4y66SfPskPE0Q7_8a2ws3gvUeYzY7fmRzIwFuWKXfJv_/s320/2018-09-25_17-02-32.png" width="320" /></a></div>
<br />
Now we can set a value for the variable. I can't set it right at this point, as I don't yet know whether the event passes our condition. Only after the condition can I be sure about it. So I'll insert another action of type "Set variable" after sending the email:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOhYM2NKatoiKODO8KAR5TuwHcq1nI6KuWsfUdc8Y5us5PJyLR-YIIZLc4_eXKIFkwmD3slKgN1FPP3EMMz-P9zj2h7LUNA3jUCRAiaUfc2dZZB0jg3eRHNoQZCJ-WcPC-w-agdz8esRCR/s1600/2018-09-25_17-09-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="298" data-original-width="629" height="151" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOhYM2NKatoiKODO8KAR5TuwHcq1nI6KuWsfUdc8Y5us5PJyLR-YIIZLc4_eXKIFkwmD3slKgN1FPP3EMMz-P9zj2h7LUNA3jUCRAiaUfc2dZZB0jg3eRHNoQZCJ-WcPC-w-agdz8esRCR/s320/2018-09-25_17-09-03.png" width="320" /></a></div>
<br />
Here I set the value of the '<i>nextStart</i>' variable to our long expression. The only thing left to do is to reuse this variable in the expressions for the start and end time. We can reference our variable in expressions using:<br />
<br />
<div style="text-align: left;">
<i>variables('nextStart')</i></div>
<br />
So the expression for the end time of the event will be:<br />
<br />
<div style="text-align: left;">
<i>addMinutes(variables('nextStart'), 15)</i></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiS1ZxVe5elG-i7LtQl1YNtEcgj6hLVcKbeQuHx8jvB-7HaT8KXRXrv4cVkLwEHrqyQmX_YyJmTF6kpOTa8QVoyfDYy2wjjm9ur-MPQ5o4ko1YqtduYdam-ofgvzYA1kUL1HaCmqHeD8Hcb/s1600/2018-09-26_17-07-15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="340" data-original-width="1007" height="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiS1ZxVe5elG-i7LtQl1YNtEcgj6hLVcKbeQuHx8jvB-7HaT8KXRXrv4cVkLwEHrqyQmX_YyJmTF6kpOTa8QVoyfDYy2wjjm9ur-MPQ5o4ko1YqtduYdam-ofgvzYA1kUL1HaCmqHeD8Hcb/s400/2018-09-26_17-07-15.png" width="400" /></a></div>
<br />
<br />
This is the end of the story. We can save the flow and Microsoft Flow will run it for us.<br />
<br />
I hope this article will be useful to you. I think that Microsoft Flow is a great tool for automating different tasks.<br />
<br /></div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-36347777921589604542018-07-04T18:20:00.000+03:002018-07-04T18:20:09.936+03:00Gathering context information for logging<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
When you write messages to your logs, it is sometimes useful to add context information. For example, if you write information about some error, you could also include the input data in some form, to be able to reproduce the problem easily. Here I'll show how to gather this additional information.</div>
<div style="text-align: justify;">
</div>
<a name='more'></a><br />
<h3>
Setup</h3>
<br />
<div style="text-align: justify;">
First, let's describe the problem we want to solve. I have an ASP.NET MVC Web service. The service accepts POST requests containing JSON descriptions. After analyzing such a description, the service constructs and executes several SQL queries against a database. Then it combines the results and returns them to the client.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
It is necessary to say that our service heavily uses asynchronous APIs through async/await and tasks.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Now that the stage is set, let's move on to the problems.</div>
<div style="text-align: justify;">
<br /></div>
<h3 style="text-align: justify;">
Gathering context of errors</h3>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Sometimes our service fails. The reasons can be different: an error in the input JSON, bugs in our code, problems with the database, ... In this case, we need to log information about the error.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
There is no problem with the exception itself. We can catch it in the action method of our service:</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<pre><code lang="cs">public class ServiceController : ApiController
{
[Route("api/service")]
[HttpPost]
public async Task<HttpResponseMessage> ServiceAction(
[FromBody] RequestModel requestModel
)
{
try
{
...
}
catch (Exception ex)
{
Logger.LogError(ex);
throw;
}
}
}</code></pre>
<br />
Or you can create a custom action filter attribute:<br />
<br />
<pre><code lang="cs">public class LogErrorAttribute : ActionFilterAttribute
{
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
base.OnActionExecuted(actionExecutedContext);
if (actionExecutedContext.Exception != null)
{
Logger.LogError(actionExecutedContext.Exception);
}
}
}</code></pre>
<br />
and use it with your action method:<br />
<br />
<pre><code lang="cs">[Route("api/service")]
[HttpPost]
<b>[LogError]</b>
public async Task<HttpResponseMessage> ServiceAction(
[FromBody] RequestModel requestModel
)
{
...
}</code></pre>
<br />
But we need more. For every error I'd like to have the following additional information:<br />
<br />
<ul>
<li>JSON of the request.</li>
<li>Texts of all generated SQL queries.</li>
</ul>
<div>
<br />
And there is one more requirement. This information should be written to the log only if an error happened. Otherwise, I don't need it there.</div>
<div>
<br /></div>
<div>
Well, it is not very hard to do it with the JSON from the request:</div>
<div>
<br /></div>
<pre><code lang="cs">public class ServiceController : ApiController
{
[Route("api/service")]
[HttpPost]
public async Task<HttpResponseMessage> ServiceAction(
[FromBody] RequestModel requestModel
)
{
<b>var requestText = await Request.Content.ReadAsStringAsync();</b>
try
{
...
}
catch (Exception ex)
{
Logger.LogError(ex);
<b>Logger.LogError($"Request test is {requestText}");</b>
throw;
}
}
}</code></pre>
</div>
<div>
<br /></div>
<div>
<div style="text-align: justify;">
But it is not so simple with the texts of the SQL queries. Let me explain. These queries are not generated in our action method. They are not even generated in the class of our controller. There can be many calls to different methods of different classes before we reach the code that generates them. So, how can we extract these texts?</div>
<br />
<div style="text-align: justify;">
One variant is to use some list of messages (e.g. <i>List<string></i>). We create it in our action method (<i>ServiceAction</i>) and pass it down to the method where the SQL queries are generated. There we add the texts of the queries to this list. In this case, if an error happens, in the action method we'll have the list of messages we need to log.</div>
<br />
<div style="text-align: justify;">
This approach has a very serious drawback from my point of view. We have to pass the list of messages through the whole chain of method calls until we reach the method where we generate the SQL queries. It means that many methods will accept it as a parameter only to pass it down, as the sketch below shows. This complicates the code, and I'd try to avoid it.</div>
<div style="text-align: justify;">
<br /></div>
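<div style="text-align: justify;">
To make this drawback concrete, here is a hypothetical call chain (all the names are invented for illustration); every method accepts the list only to hand it further down:</div>
<div style="text-align: justify;">
<br /></div>
<pre><code lang="cs">using System.Collections.Generic;
public class ReportBuilder
{
    // The list enters at the top of the chain...
    public string BuildReport(string request, List<string> messages)
    {
        return LoadData(request, messages);
    }
    private string LoadData(string request, List<string> messages)
    {
        // ...is passed through intermediate layers...
        return ExecuteQuery(BuildSql(request, messages), messages);
    }
    private string BuildSql(string request, List<string> messages)
    {
        var sql = "SELECT * FROM Data WHERE Key = @key";
        // ...and is only used at the very bottom.
        messages.Add($"SQL Server query is: {sql}");
        return sql;
    }
    private string ExecuteQuery(string sql, List<string> messages)
    {
        return sql; // execution itself is omitted in this sketch
    }
}</code></pre>
<br />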
<div style="text-align: justify;">
If you use a DI container and can create your classes from the container, you can inject the list of messages into the controller class and into the classes creating SQL queries. Just register the list of messages with a 'per request' lifecycle.<br />
<br />
But there is a way even if your code does not use a DI container. It would be great if I could access the list of messages through a static property:<br />
<br />
<pre><code lang="cs">public static async Task<SqlDataReader> RunReaderAsync(this SqlCommand cmd)
{
var message = $"SQL Server query is: {cmd.CommandText}";
<b>ErrorContext.Current.AttachMessage(message);</b>
...
}</code></pre>
<br />
There is one serious problem here. Our service can serve several requests simultaneously, and each request must have its own list of messages. Furthermore, while processing a single request, our code can start multiple separate threads (e.g. through async/await). And all these threads must have access to the same instance of the list of messages. How can we implement this?<br />
<br />
It can be achieved using an instance of the <i>AsyncLocal<T></i> class. This class guarantees that if you set some value in one thread, this value can be obtained in that thread and in all threads created from it from this moment on. At the same time, all other threads will not see this value.<br />
<br />
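Here is a minimal sketch (not part of the service itself) illustrating this behavior:<br />
<br />
<pre><code lang="cs">using System;
using System.Threading;
using System.Threading.Tasks;
public static class AsyncLocalDemo
{
    private static readonly AsyncLocal<string> RequestId = new AsyncLocal<string>();
    public static async Task HandleRequestAsync(string id)
    {
        RequestId.Value = id;
        // The value flows into tasks and threads started from this point on.
        await Task.Run(() => Console.WriteLine($"{id}: sees {RequestId.Value}"));
    }
    public static void Main()
    {
        // Each logical flow of execution sees only its own value.
        Task.WaitAll(HandleRequestAsync("request-1"), HandleRequestAsync("request-2"));
    }
}</code></pre>
<br />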
Let's take a look at the complete implementation of the <i>ErrorContext</i> class:<br />
<br />
<pre><code lang="cs">public class ErrorContext
{
private static readonly object Lock = new object();
private static readonly AsyncLocal<ErrorContext> CurrentErrorContext = new AsyncLocal<ErrorContext>();
private readonly Lazy<ConcurrentBag<string>> _attachedMessages = new Lazy<ConcurrentBag<string>>(() => new ConcurrentBag<string>());
private ErrorContext()
{}
public static ErrorContext Current
{
get
{
lock (Lock)
{
var errorContext = CurrentErrorContext.Value;
if (errorContext == null)
{
CurrentErrorContext.Value = errorContext = new ErrorContext();
}
return errorContext;
}
}
}
public static ErrorContext CreateNewErrorContext()
{
lock (Lock)
{
var errorContext = new ErrorContext();
CurrentErrorContext.Value = errorContext;
return errorContext;
}
}
public void AttachMessage(string message)
{
if (!string.IsNullOrWhiteSpace(message))
{
_attachedMessages.Value.Add(message);
}
}
public IReadOnlyList<string> GetMessages()
{
return _attachedMessages.Value.ToArray();
}
}</code></pre>
<br />
Here is how it works. The <i>CreateNewErrorContext</i> method immediately creates a new error context (with its own lazily created list of messages) and stores it in the <i>CurrentErrorContext</i> field, implemented with <i>AsyncLocal<T></i>. You can get access to this context in any place of the code using the static <i>Current</i> property. The <i>AttachMessage</i> method puts a new message on the list. It stores messages inside a <i>ConcurrentBag</i> instance, as this method can be called simultaneously from several threads. The <i>GetMessages</i> method returns all stored messages so you can log them.<br />
<br />
Now it is easy to initialize and use <i>ErrorContext</i> inside your <i>LogErrorAttribute</i>:<br />
<br />
<pre><code lang="cs">public class LogErrorAttribute : ActionFilterAttribute
{
public override void OnActionExecuting(HttpActionContext actionContext)
{
ErrorContext.CreateNewErrorContext();
base.OnActionExecuting(actionContext);
}
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
base.OnActionExecuted(actionExecutedContext);
if (actionExecutedContext.Exception != null)
{
foreach(var message in ErrorContext.Current.GetMessages())
{
Logger.LogError(message);
}
Logger.LogError(actionExecutedContext.Exception);
}
}
}</code></pre>
<br />
In any place of your code you can add messages to the current error context like this:<br />
<br />
<pre><code lang="cs">ErrorContext.Current.AttachMessage(message);</code></pre>
<br />
<br />
<h3>
Logging performance issues</h3>
<br />
Sometimes my service answers slowly. Not for all requests, but for some of them it takes too long to get a response. I'd like to log information about such requests to be able to deal with them later. How should this be implemented, and what information do I need in this case?<br />
<br />
First of all, I'd like to set some sort of threshold on processing time. If my service takes less time than this to respond, everything is OK, and I'll log nothing. But if it takes more time, then I must log some information.<br />
<br />
What information do I need? Well, I certainly need to know how long it took to process the whole request. But it is not enough. My service does many things: validates the request, gets data from external services, constructs SQL queries, executes these queries, ... I may want to know how long each of these parts took, to understand the root of the problem.<br />
<br />
Also, I'll need the same information as for errors. I'll need the request body to be able to reproduce the problem. I'll need the texts of the SQL queries in case they are the slowest part.<br />
<br />
How can we do it? With the help of the same <i>AsyncLocal</i> class:<br />
<br />
<pre><code lang="cs">public class Timer : IDisposable
{
private static readonly object Lock = new object();
private static readonly AsyncLocal<Timer> CurrentTimer = new AsyncLocal<Timer>();
private readonly Stopwatch _stopwatch = new Stopwatch();
private readonly Lazy<ConcurrentQueue<Timer>> _attachedTimers = new Lazy<ConcurrentQueue<Timer>>(() => new ConcurrentQueue<Timer>());
private readonly Lazy<ConcurrentQueue<string>> _attachedMessages = new Lazy<ConcurrentQueue<string>>(() => new ConcurrentQueue<string>());
private readonly string _description;
private readonly TimeSpan? _threshold;
private readonly Timer _previousCurrent;
private bool _isDisposed;
private bool _suspendLogging;
private Timer(Timer previousCurrent, string description = null, TimeSpan? threshold = null)
{
_previousCurrent = previousCurrent;
_description = description;
_threshold = threshold;
_stopwatch.Start();
}
public static Timer Current
{
get
{
lock (Lock)
{
var timer = CurrentTimer.Value;
if (timer == null)
{
CurrentTimer.Value = timer = new Timer(null);
}
return timer;
}
}
}
public static Timer SetCurrentTimer(string description, TimeSpan? threshold = null)
{
lock (Lock)
{
var currentTimer = CurrentTimer.Value;
var timer = new Timer(currentTimer, description, threshold);
CurrentTimer.Value = timer;
currentTimer?._attachedTimers.Value.Enqueue(timer);
return timer;
}
}
public void AttachMessage(string message)
{
if (!string.IsNullOrWhiteSpace(message))
{
_attachedMessages.Value.Enqueue(message);
}
}
public void Dispose()
{
if (!_isDisposed)
{
_isDisposed = true;
_stopwatch.Stop();
if (_attachedTimers.IsValueCreated)
{
foreach (var attachedTimer in _attachedTimers.Value)
{
attachedTimer.Dispose();
}
}
if (!_suspendLogging && _threshold.HasValue && _stopwatch.Elapsed > _threshold.Value)
{
Log();
}
if (_previousCurrent != null)
{
CurrentTimer.Value = _previousCurrent;
}
}
}
private JObject Message
{
get
{
Dispose();
var message = new StringBuilder($"It took {_stopwatch.ElapsedMilliseconds} ms to execute {_description}.");
if (_threshold.HasValue)
{
message.Append($" Duration threshold is {_threshold.Value.TotalMilliseconds} ms.");
}
var messageObj = new JObject
{
["message"] = message.ToString(),
};
if (_attachedTimers.IsValueCreated && _attachedTimers.Value.Any())
{
messageObj["attachedTimers"] = new JArray(_attachedTimers.Value.Select(t => t.Message));
}
if (_attachedMessages.IsValueCreated && _attachedMessages.Value.Any())
{
messageObj["attachedMessages"] = new JArray(_attachedMessages.Value);
}
return messageObj;
}
}
public void Log()
{
try
{
_suspendLogging = true;
Dispose();
if (_stopwatch.Elapsed < _threshold)
{
Logger.LogDebug(Message.ToString());
}
else
{
Logger.LogWarning(Message.ToString());
}
}
finally
{
_suspendLogging = false;
}
}
}</code></pre>
<br />
Let's see how it works. The <i>SetCurrentTimer</i> method creates a new timer. Here you set its description and an optional duration threshold. Sometimes I want part of my code to execute within some time interval; e.g. I may want to get the response to my request within 3 seconds. In other cases, I don't have any particular limitations; e.g. I don't care how long it takes to execute the SQL queries, as long as the whole processing of a request takes less than 3 seconds. This is why for some timers you may want to set the threshold, while for others you may not.<br />
<br />
Inside this method, I create a new timer and assign it to the <i>AsyncLocal</i> variable <i>CurrentTimer</i>. But this is not the whole story. At this moment there can already be another active timer. In that case, I attach the new timer to the old one. This allows me to create nested timers, so I'll be able to measure the time of a whole block of code, as well as of its parts:<br />
<br />
<pre><code lang="cs">using (Timer.SetCurrentTimer("The whole block"))
{
...
using (Timer.SetCurrentTimer("Part 1"))
{
...
}
...
using (Timer.SetCurrentTimer("Part 2"))
{
...
}
...
}</code></pre>
<br />
The <i>Current</i> property gives access to the current timer. It is useful if we want to attach some messages to the timer:<br />
<br />
<pre><code lang="cs">var message = $"SQL Server query is: {cmd.CommandText}";
Timer.Current.AttachMessage(message);</code></pre>
<br />
Here I store attached messages and nested timers using <i>ConcurrentQueue</i> instances, because the order of timers and messages may be important.<br />
<br />
The <i>Message</i> property returns the combined messages from the current timer and all nested timers. To store these messages in a structured format, I use JSON by means of <a href="https://www.newtonsoft.com/json" target="_blank">JSON.NET</a>. But this is really not important; you may use any format.<br />
<br />
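For illustration, here is a hypothetical example of the JSON produced by the <i>Message</i> property for a timer that exceeded its threshold and contains one nested timer with an attached message (all durations and descriptions are invented):<br />
<br />
<pre><code lang="json">{
  "message": "It took 3512 ms to execute For ServiceAction method. Duration threshold is 3000 ms.",
  "attachedTimers": [
    {
      "message": "It took 3100 ms to execute SQL queries.",
      "attachedMessages": [
        "SQL Server query is: SELECT ..."
      ]
    }
  ]
}</code></pre>
<br />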
The <i>Log</i> method will write timer information to the log regardless of whether the timer has a threshold or not. At the same time, the <i>Dispose</i> method will write information to the log only if the threshold is set and was exceeded.<br />
<br />
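So for a timer without a threshold, we can call <i>Log</i> explicitly when we always want the measurement in the log. A hypothetical usage sketch (<i>LoadReferenceData</i> is an invented operation):<br />
<br />
<pre><code lang="cs">var timer = Timer.SetCurrentTimer("Loading of reference data");
try
{
    LoadReferenceData(); // the operation we want to measure
}
finally
{
    // Dispose() alone would log nothing here, because no threshold is set.
    timer.Log();
}</code></pre>
<br />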
Now we can create another action filter attribute for action methods of our controllers:<br />
<br />
<pre><code lang="cs">public class TimerContextAttribute : ActionFilterAttribute
{
private readonly string _timerDescription;
private readonly int _durationThresholdMs;
private readonly AsyncLocal<Timer> _timer = new AsyncLocal<Timer>();
public TimerContextAttribute(string timerDescription, int durationThresholdMs)
{
if (string.IsNullOrWhiteSpace(timerDescription)) throw new ArgumentNullException(nameof(timerDescription));
_timerDescription = timerDescription;
_durationThresholdMs = durationThresholdMs;
}
public override void OnActionExecuting(HttpActionContext actionContext)
{
_timer.Value = Timer.SetCurrentTimer(_timerDescription,
TimeSpan.FromMilliseconds(_durationThresholdMs));
base.OnActionExecuting(actionContext);
}
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
base.OnActionExecuted(actionExecutedContext);
_timer.Value?.Dispose();
}
}</code></pre>
<br />
and use it for our actions:<br />
<br />
<pre><code lang="cs">[Route("api/service")]
[HttpPost]
<b>[TimerContext("For ServiceAction method", 3000)]</b>
public async Task<HttpResponseMessage> ServiceAction(
[FromBody] RequestModel requestModel
)
{
...
}</code></pre>
<br />
<h3>
Conclusion</h3>
<br />
In this article, I described how you can easily collect information from many places in your code and retrieve it later. It can be done using static properties and methods, which give access to instances of the <i>AsyncLocal</i> class.<br />
<br />
I hope this information will be useful and will allow you to improve logging in your applications.<br />
<br /></div>
</div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-82821919857516004052018-03-12T12:40:00.000+03:002018-03-12T12:40:08.403+03:00Adding documentation into ASP.NET Web API<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
When you provide a Web API, there is the question of how to inform users about all its capabilities, the syntax of requests, etc. Usually, you create some publicly available Web page where you discuss these topics. But wouldn't it be great if the Web API itself provided access to the documentation?</div>
<div style="text-align: justify;">
</div>
<a name='more'></a><br />
<br />
<div style="text-align: justify;">
If you open a page of any serious project on <a href="https://github.com/" target="_blank">GitHub</a>, you'll see a well-written <i>Readme.md</i> document. This <a href="https://en.wikipedia.org/wiki/Markdown" target="_blank">Markdown</a> document describes the purpose of the repository and frequently contains links to other documents. The great thing here is that GitHub automatically converts these documents into HTML and shows the result to you. This makes Markdown files a good place to store documentation about your project. First of all, they can provide rich formatting. Also, they are stored in the VCS along with your code. This makes these files first-class citizens. You treat them as a part of your code and modify them when you make modifications to your code. At least that's how it should be in theory. Now you have all your documentation in your repository.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
It is a good thing if your repository is open. But I work for a company which provides Web APIs to external clients. These clients do not have access to our code repositories. How should I provide documentation about these services?<br />
<br />
I can create a separate site with documentation. But now I have two places where information about my product is stored: in the Markdown files and on this site. I can automate the creation of the documentation site by generating it from my Markdown files. Or I can create a separate document (e.g. a PDF) containing the content of these files.<br />
<br />
There is nothing wrong with this approach. But I think we can go one more step in this direction. Why should we separate our API from its documentation? Can we provide them in one place? For example, if our Web API is accessible at the URL <i>http://www.something.com/api/data</i>, then documentation about this API can be accessible at the URL <i>http://www.something.com/api/help.md</i>.<br />
<br />
How difficult is it to implement such a documentation system using ASP.NET Web API? Let's take a look.<br />
<br />
I'll start with a simple Web API using OWIN. Here is my <i>Startup</i> file:<br />
<br />
<pre><code lang="cs">[assembly: OwinStartup(typeof(OwinMarkdown.Startup))]
namespace OwinMarkdown
{
public class Startup
{
public void Configuration(IAppBuilder app)
{
HttpConfiguration config = new HttpConfiguration();
config.Formatters.Clear();
config.Formatters.Add(
new JsonMediaTypeFormatter
{
SerializerSettings = GetJsonSerializerSettings()
});
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new {id = RouteParameter.Optional}
);
app.UseWebApi(config);
}
private static JsonSerializerSettings GetJsonSerializerSettings()
{
var settings = new JsonSerializerSettings();
settings.Converters.Add(new StringEnumConverter { CamelCaseText = false });
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
return settings;
}
}
}
</code></pre>
<br />
I'll add some Markdown files to my project:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl_rMk_tzMCfQUBng3JVH7pIVokJO2e-98Dk8Wlih2zPQRj55kMzBvgVWIOWt05hwbB8Z7wBGdra-8R4rEIg8_VUqNVuNExeNIl4hcg3p9_A7WRkDXVgI_SujuK2qh5inadsdJ1Kf72Hx_/s1600/Project.png" imageanchor="1"><img border="0" data-original-height="312" data-original-width="224" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl_rMk_tzMCfQUBng3JVH7pIVokJO2e-98Dk8Wlih2zPQRj55kMzBvgVWIOWt05hwbB8Z7wBGdra-8R4rEIg8_VUqNVuNExeNIl4hcg3p9_A7WRkDXVgI_SujuK2qh5inadsdJ1Kf72Hx_/s320/Project.png" width="230" /></a>
<br />
<br />
I want to say a couple of words about these files. First of all, there could be a complex structure of subfolders keeping different parts of my documentation. Next, there are other files here, like images, which my Markdown files can reference. Our solution must support both: a tree of folders and additional files.<br />
<br />
Let's start with the <i>Web.config</i> file. We need to make some modifications to it. You see, Internet Information Services (IIS) can serve static files all by itself. For example, if I ask for http://myhost/help/root.md, IIS will see that there is such a file on the disk and will try to return it directly. It means that IIS will not pass the request to our application. But this is not what we want. We don't want to return the raw Markdown file; we want to convert it to HTML first. This is why we need to modify <i>Web.config</i>. We must instruct IIS to pass all requests to our application. We do it by altering the <i>system.webServer</i> section:<br />
<br />
<pre><code lanf="xml"><system.webServer>
<b><modules runAllManagedModulesForAllRequests="true" /></b>
<handlers>
<remove name="ExtensionlessUrlHandler-Integrated-4.0" />
<remove name="OPTIONSVerbHandler" />
<remove name="TRACEVerbHandler" />
<b><add name="Owin" verb="" path="*" type="Microsoft.Owin.Host.SystemWeb.OwinHttpHandler, Microsoft.Owin.Host.SystemWeb" /></b>
</handlers>
</system.webServer>
</code></pre>
<br />
Now IIS will not serve static files. But we still need them served (e.g. for images in our documentation). This is why we'll use the <i>Microsoft.Owin.StaticFiles</i> NuGet package. Let's say I want my documentation to be available at the path "<i>/api/doc</i>". In this case, I'll configure this package in the following way:<br />
<br />
<pre><code lang="cs">[assembly: OwinStartup(typeof(OwinMarkdown.Startup))]
namespace OwinMarkdown
{
public class Startup
{
<b>private static readonly string HelpUrlPart = "/api/doc";</b>
public void Configuration(IAppBuilder app)
{
<b> var basePath = AppDomain.CurrentDomain.SetupInformation.ApplicationBase;
app.UseStaticFiles(new StaticFileOptions
{
RequestPath = new PathString(HelpUrlPart),
FileSystem = new PhysicalFileSystem(Path.Combine(basePath, "Help"))
});
</b>
HttpConfiguration config = new HttpConfiguration();
config.Formatters.Clear();
config.Formatters.Add(
new JsonMediaTypeFormatter
{
SerializerSettings = GetJsonSerializerSettings()
});
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new {id = RouteParameter.Optional}
);
app.UseWebApi(config);
}
private static JsonSerializerSettings GetJsonSerializerSettings()
{
var settings = new JsonSerializerSettings();
settings.Converters.Add(new StringEnumConverter { CamelCaseText = false });
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
return settings;
}
}
}</code></pre>
<br />
Now we can serve static files from the "<i>Help</i>" folder of our application under the "<i>/api/doc</i>" path. But we still need to somehow convert Markdown files into HTML and serve them. For this purpose, we'll write an OWIN middleware. This middleware will use the <i>Markdig</i> NuGet package.<br />
<br />
<pre><code lang="cs">[assembly: OwinStartup(typeof(OwinMarkdown.Startup))]
namespace OwinMarkdown
{
public class Startup
{
private static readonly string HelpUrlPart = "/api/doc";
public void Configuration(IAppBuilder app)
{
<b> var pipeline = new MarkdownPipelineBuilder().UseAdvancedExtensions().Build();
app.Use(async (context, next) =>
{
var markDownFile = GetMarkdownFile(context.Request.Path.ToString());
if (markDownFile == null)
{
await next();
return;
}
using (var reader = markDownFile.OpenText())
{
context.Response.ContentType = @"text/html";
var fileContent = reader.ReadToEnd();
fileContent = Markdown.ToHtml(fileContent, pipeline);
// Send our modified content to the response body.
await context.Response.WriteAsync(fileContent);
}
});
</b>
var basePath = AppDomain.CurrentDomain.SetupInformation.ApplicationBase;
app.UseStaticFiles(new StaticFileOptions
{
RequestPath = new PathString(HelpUrlPart),
FileSystem = new PhysicalFileSystem(Path.Combine(basePath, "Help"))
});
HttpConfiguration config = new HttpConfiguration();
config.Formatters.Clear();
config.Formatters.Add(
new JsonMediaTypeFormatter
{
SerializerSettings = GetJsonSerializerSettings()
});
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new {id = RouteParameter.Optional}
);
app.UseWebApi(config);
}
private static JsonSerializerSettings GetJsonSerializerSettings()
{
var settings = new JsonSerializerSettings();
settings.Converters.Add(new StringEnumConverter { CamelCaseText = false });
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
return settings;
}
private static FileInfo GetMarkdownFile(string path)
{
if (Path.GetExtension(path) != ".md")
return null;
var basePath = AppDomain.CurrentDomain.SetupInformation.ApplicationBase;
var helpPath = Path.Combine(basePath, "Help");
var helpPosition = path.IndexOf(HelpUrlPart + "/", StringComparison.OrdinalIgnoreCase);
if (helpPosition < 0)
return null;
var markDownPathPart = path.Substring(helpPosition + HelpUrlPart.Length + 1);
var markDownFilePath = Path.Combine(helpPath, markDownPathPart);
if (!File.Exists(markDownFilePath))
return null;
return new FileInfo(markDownFilePath);
}
}
}</code></pre>
<br />
Let's take a look at how this middleware works. First of all, it checks whether the request was for a Markdown file or not. This is what the <i>GetMarkdownFile</i> function does. It tries to find a Markdown file corresponding to the request and returns its <i>FileInfo</i> object if the file is found, or null otherwise. I admit that my implementation of this function is not the best; it just serves to prove the concept. You can replace it with any other implementation you want.<br />
<br />
If the file was not found, our middleware just passes the request further using <i>await next()</i>. But if the file is found, the middleware reads its content, converts it to HTML, and returns the response.<br />
<br />
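For example, assuming the <i>Help</i> folder contains a file <i>root.md</i> with the content "# My API", a request to the documentation might look like this (a hypothetical exchange; the exact HTML depends on the Markdig pipeline):<br />
<br />
<pre><code lang="text">GET /api/doc/root.md HTTP/1.1
Host: www.something.com

HTTP/1.1 200 OK
Content-Type: text/html

<h1 id="my-api">My API</h1></code></pre>
<br />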
Now you have documentation available to users of your API in several places. It is available in your VCS repository. It is also available directly through your Web API. And your documentation is a part of your code, which is stored under VCS.<br />
<br />
This is a great achievement from my point of view. But there is one drawback I'd like to discuss. This system is good if your product is stable. But in the early phase of development, it is not always clear how your API should look, what the form of requests and responses is, etc. It means that in this phase your documentation should be open to comments. There must be some system to comment on the content of the Markdown files. GitHub has its system of Issues, where you can write comments about the code. As documentation is a part of our code now, we can use Issues to discuss the content of the documentation during the development phase. But I don't think it is the best solution. It would be much better to write comments directly on the document, like we can do in <a href="https://www.atlassian.com/software/confluence" target="_blank">Confluence</a>. In short, I think we still need some tool for discussing Markdown documents at an early stage of development.<br />
<br /></div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-66253797164452795812018-02-09T11:23:00.001+03:002018-02-09T11:50:39.991+03:00Handling JSON errors in OWIN Web application<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
Some time ago I wrote <a href="https://ivanyakimov.blogspot.ru/2017/12/finding-typos-and-usage-of-obsolete.html" target="_blank">an article</a> where I discussed how to find typos and usages of obsolete properties in the JSON body of a request. The described technique required providing a handler for the <i>Error</i> event of the <i>JsonSerializerSettings</i> object. The problem is that this object is shared across all requests to your application, but we need separate handling for each request. In this short article, I'll describe how to achieve this.</div>
<div style="text-align: justify;">
</div>
<a name='more'></a><br />
<div style="text-align: justify;">
First of all, I assume that you are familiar with <a href="https://ivanyakimov.blogspot.ru/2017/12/finding-typos-and-usage-of-obsolete.html" target="_blank">the previous article</a>, because I'll use the same model and classes here. You may consider this article a continuation of the previous one.</div>
<div style="text-align: justify;">
<br />
Let's take a look at how we usually define a Web API application with JSON serialization:</div>
<div style="text-align: justify;">
<br />
<pre><code lang="cs">[assembly: OwinStartup(typeof(JsonOwinWebApplication.Startup))]
namespace JsonOwinWebApplication
{
public class Startup
{
public void Configuration(IAppBuilder app)
{
HttpConfiguration config = new HttpConfiguration();
config.Formatters.Clear();
config.Formatters.Add(
new JsonMediaTypeFormatter {SerializerSettings = GetJsonSerializerSettings()});
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new {id = RouteParameter.Optional}
);
app.UseWebApi(config);
}
private static JsonSerializerSettings GetJsonSerializerSettings()
{
var settings = new JsonSerializerSettings();
settings.Converters.Add(new StringEnumConverter { CamelCaseText = false });
settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
return settings;
}
}
}</code>
</pre>
<br />
In our case, we need a more complex configuration of the JSON serializer. Take a look at the following pseudocode:<br />
<br />
<pre><code lang="cs">private static JsonSerializerSettings GetJsonSerializerSettings()
{
var settings = new JsonSerializerSettings
{
MissingMemberHandling = MissingMemberHandling.Error,
Error = (sender, args) =>
{
<b>handler</b>.Handle(sender, args);
}
};
settings.Converters.Add(new StringEnumConverter { CamelCaseText = false });
settings.Converters.Add(new ValueJsonConverterWithPath(<b>pathsStack</b>));
settings.ContractResolver = new ObsoletePropertiesContractResolver();
return settings;
}</code>
</pre>
<br />
Here <i>handler</i> and <i>pathsStack</i> should be different for each request. Moreover, the <i>handler</i> should have a reference to the corresponding <i>pathsStack</i>. Also, we must have access to the <i>handler</i> instance in our Web API methods to get the list of typos and obsolete properties for the current request.<br />
<br />
The problem here is that our <i>Startup</i> class configures the JSON serializer once for all requests, but we need it to be somehow different for each request. How can we solve this problem?<br />
<br />
The first idea I had was to use an inversion of control container. I knew that I could configure the lifespan of some objects as 'per request'. I thought that everywhere I needed request-unique objects, I would take them from such a container. Unfortunately, I was not able to implement it. I used the <a href="https://autofac.org/" target="_blank">Autofac</a> container, I configured it to provide my <i>handler</i> and <i>pathsStack</i> objects per request, and I attached the container to the OWIN pipeline and to the <i>HttpConfiguration</i> dependency resolver. But still, I got an error trying to get my objects from the container inside the <i>Error</i> handler of the <i>JsonSerializerSettings</i>. I'm not a big specialist in working with IoC containers; it may be that I made some mistake, so you are welcome to try.<br />
<br />
Still, I had a task to solve. At that moment I thought about the <a href="http://owin.org/html/spec/owin-1.0.html" target="_blank">OWIN specification</a>. It is very simple and elegant. It describes how your Web application communicates with the hosting Web server. The Web server provides your application with an instance of <i>IDictionary<string, object></i> and guarantees that the dictionary contains certain data (host, path, query, request body, ...). Your application passes the dictionary through the pipeline of middleware. At each stage, you are free to add/remove/modify the content of the dictionary. In the end, your application must add some information about the response (headers, cookies, body, ...) to the dictionary. That's it. And this process is repeated for each request.<br />
<br />
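Here is a minimal sketch of a raw OWIN middleware working directly with this dictionary (the "My.SomeKey" key is just an illustration):<br />
<br />
<pre><code lang="cs">using System.Collections.Generic;
using System.Threading.Tasks;
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;
public class EnvironmentInspectionMiddleware
{
    private readonly AppFunc _next;
    public EnvironmentInspectionMiddleware(AppFunc next)
    {
        _next = next;
    }
    public Task Invoke(IDictionary<string, object> environment)
    {
        // Standard keys guaranteed by the OWIN specification:
        var method = (string)environment["owin.RequestMethod"];
        var path = (string)environment["owin.RequestPath"];
        // Anything we add here lives only for the current request
        // and is visible to all later stages of the pipeline.
        environment["My.SomeKey"] = method + " " + path;
        return _next(environment);
    }
}</code></pre>
<br />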
And here comes my idea. At the beginning of the pipeline, I'll add my <i>handler</i> and <i>pathsStack</i> objects to the dictionary. Later I'll take them from the dictionary wherever I need them. Here is how it works:<br />
<br />
<pre><code lang="cs">[assembly: OwinStartup(typeof(JsonOwinWebApplication.Startup))]
namespace JsonOwinWebApplication
{
public class Startup
{
public void Configuration(IAppBuilder app)
{
<b> app.Use(async (context, next) =>
{
var pathsStack = new Stack<string>();
var handler = new TyposAndObsoleteHandlerWithPath(pathsStack);
handler.Ignore(e => e.CurrentObject is Value && e.ErrorContext.Member.ToString() == "type");
handler.AddObsoleteMessage((type, name) =>
{
if (type == typeof(StringValue) && name == "Id")
return "Use another property here";
return null;
});
context.Set("My.PathsStack", pathsStack);
context.Set("My.TyposHandler", handler);
await next.Invoke();
});
</b>
HttpConfiguration config = new HttpConfiguration();
config.Formatters.Clear();
config.Formatters.Add(
new JsonMediaTypeFormatter {SerializerSettings = GetJsonSerializerSettings()});
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new {id = RouteParameter.Optional}
);
app.UseWebApi(config);
}
private static JsonSerializerSettings GetJsonSerializerSettings()
{
var settings = new JsonSerializerSettings
{
MissingMemberHandling = MissingMemberHandling.Error,
Error = (sender, args) =>
{
<b> var owinContext = HttpContext.Current.GetOwinContext();
var handler = owinContext.Get<TyposAndObsoleteHandlerWithPath>("My.TyposHandler");
handler.Handle(sender, args);
</b> }
};
settings.Converters.Add(new StringEnumConverter { CamelCaseText = false });
settings.Converters.Add(new ValueJsonConverterWithPath(() =>
{
<b> var owinContext = HttpContext.Current.GetOwinContext();
var pathsStack = owinContext.Get<Stack<string>>("My.PathsStack");
return pathsStack;
</b> }));
settings.ContractResolver = new ObsoletePropertiesContractResolver();
return settings;
}
}
}</code>
</pre>
<br />
In the OWIN middleware I create my <i>handler</i> and <i>pathsStack</i> objects, configure them, and add them to the OWIN context using the <i>context.Set</i> method. Notice that the constructor of the <i>ValueJsonConverterWithPath</i> class does not accept an instance of the paths stack any more, but rather a function which can return this instance (see the sketch below). Again, the reason is that the <i>pathsStack</i> object should be different for each request.<br />
<br />
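A sketch of this change to the converter's constructor (the rest of the class comes from the previous article and is reduced to stubs here):<br />
<br />
<pre><code lang="cs">using System;
using System.Collections.Generic;
using Newtonsoft.Json;
public class ValueJsonConverterWithPath : JsonConverter
{
    private readonly Func<Stack<string>> _getPathsStack;
    // A factory function instead of a ready instance: the actual stack
    // is resolved per request, at the moment of deserialization.
    public ValueJsonConverterWithPath(Func<Stack<string>> getPathsStack)
    {
        _getPathsStack = getPathsStack;
    }
    public override bool CanConvert(Type objectType)
    {
        return typeof(Value).IsAssignableFrom(objectType);
    }
    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var pathsStack = _getPathsStack(); // the stack of the current request
        // ...the rest of the deserialization logic from the previous article...
        throw new NotImplementedException();
    }
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException("Custom converter should only be used while deserializing.");
    }
}</code></pre>
<br />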
Later, in any place of my application, I can get the OWIN context of the current request using <i>HttpContext.Current.GetOwinContext()</i>. I can use it in my Web API method to get access to the <i>handler</i> object and collect all the found typos and usages of obsolete properties:<br />
<br />
<pre><code lang="cs">namespace JsonOwinWebApplication
{
public class ValuesController : ApiController
{
public object Post([FromBody] Value[] values)
{
var owinContext = HttpContext.Current.GetOwinContext();
var handler = owinContext.Get<TyposAndObsoleteHandlerWithPath>("My.TyposHandler");
// Other processing...
return new
{
Message = "Message processed",
Warnings = handler.Messages.ToArray()
};
}
}
}</code>
</pre>
<br />
That's it. I hope you'll find this tip useful for the described use case and for other cases where you need separate objects for each Web request.<br />
<br /></div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0tag:blogger.com,1999:blog-5729371525642521663.post-91151832932402877182017-12-27T11:29:00.000+03:002017-12-27T11:29:31.862+03:00Finding typos and usage of obsolete properties in JSON<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
The JSON format is very widespread now. Many Web APIs return their results in this format. Also, many APIs accept incoming requests in the same format. The structure of an incoming JSON request can be very complex, and it is not uncommon to make a typo in such a document. In this article I'd like to discuss how we can detect these typos and inform users about them in a friendly form.<br />
<br />
<a name='more'></a>Let's start with a simple example. I have the following class:<br />
<br />
<pre><code lang="cs">public class Range
{
public int? From { get; set; }
public int? To { get; set; }
}</code>
</pre>
<br />
I want to deserialize a user request in the form of JSON string into this object:<br />
<br />
<pre><code lang="cs">var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false}
},
ContractResolver = new CamelCasePropertyNamesContractResolver()
};
var result = JsonConvert.DeserializeObject<Range>(jsonString, settings);
Console.WriteLine("Range is from: " + result.From);
Console.WriteLine("Range is to: " + result.To);</code>
</pre>
<br />
What do you think will be the result of executing this code if <i>jsonString</i> is:<br />
<br />
<pre><code lang="json">{
form: 3,
to: 5
}</code>
</pre>
<br />
Here is the result:<br />
<br />
<pre><code lang="text">Range is from:
Range is to: 5</code>
</pre>
<br />
The reason for this strange result is that instead of FROM we wrote FORM in our JSON.<br />
<br />
In this simple example, it is rather easy to find out why the result differs from the expected one. But consider a case when you have a very long JSON document with deep nesting. In that case, it is not so easy to identify the problem. I suggest helping the user to find these problems by providing useful warning messages when a typo occurs.<br />
<br />
<h3>
Looking for typos</h3>
<br />
How can we understand whether something is a typo or not? In general, if during deserialization we encounter a property in the JSON which does not have a corresponding property in the object model, we can talk about a typo.<br />
<br />
By default, <a href="https://www.newtonsoft.com/json">Json.Net</a> just ignores such problems. But we can change this behavior by modifying the <i>MissingMemberHandling</i> property of the serializer settings. If we set the value of this property to <i>MissingMemberHandling.Error</i>, the serializer will throw an exception if there is no member for a JSON property. We can handle this exception using the <i>Error</i> event of the serializer settings:<br />
<br />
<pre><code lang="cs">var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false}
},
ContractResolver = new CamelCasePropertyNamesContractResolver(),
<b> MissingMemberHandling = MissingMemberHandling.Error,
Error = (sender, args) =>
{
...
}
</b>};</code>
</pre>
<br />
The only thing we should do here is to distinguish errors raised by a missing member from all other sorts of errors. Unfortunately, Json.Net does not give us a lot of help here. The only thing we can do is to check the message of the exception:<br />
<br />
<pre><code lang="cs"><b>var discriminator = new Regex("^Could not find member '[^']*' on object of type '[^']*'");</b>
var messages = new List<string>();
var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false}
},
ContractResolver = new CamelCasePropertyNamesContractResolver(),
MissingMemberHandling = MissingMemberHandling.Error,
Error = (sender, args) =>
{
<b> if (discriminator.IsMatch(args.ErrorContext.Error.Message))
{
args.ErrorContext.Handled = true;
messages.Add($"Property {args.ErrorContext.Member} ({args.ErrorContext.Path}) is not defined on objects of '{args.CurrentObject.GetType().Name}' class.");
}
</b> }
};
var result = JsonConvert.DeserializeObject<Range>(jsonString, settings);
foreach (var message in messages)
{
Console.WriteLine(message);
}
Console.WriteLine("-----------------------------------");
Console.WriteLine("Range is from: " + result.From);
Console.WriteLine("Range is to: " + result.To);
</code>
</pre>
<br />
Please note that we set <i>args.ErrorContext.Handled</i> to true. This allows the serializer to continue its work.<br />
<br />
I want to emphasize that this is a very fragile way to distinguish between types of errors. If the Json.Net team decides to change the error message or implements internationalization support, this code will break.<br />
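<br />
As a small mitigation (only a sketch of an extra guard, not a real fix), we can at least check the exception type as well, since missing-member errors are reported as <i>JsonSerializationException</i>:<br />
<br />
<pre><code lang="cs">Error = (sender, args) =>
{
    var error = args.ErrorContext.Error;
    // Require both the expected exception type and the expected message shape.
    if (error is JsonSerializationException && discriminator.IsMatch(error.Message))
    {
        args.ErrorContext.Handled = true;
        messages.Add($"Property {args.ErrorContext.Member} ({args.ErrorContext.Path}) is not defined on objects of '{args.CurrentObject.GetType().Name}' class.");
    }
}</code></pre>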
<br />
Nevertheless, now we have our error message:<br />
<br />
<pre><code lang="text">Property form (form) is not defined on objects of 'Range' class.</code>
</pre>
<br />
Even better, the <i>args.ErrorContext.Path</i> property tells us exactly where the typo occurred. Try to deserialize the following array of ranges (you should use <i>JsonConvert.DeserializeObject<Range[]></i> now):<br />
<br />
<pre><code lang="json">[
{
from: 1,
to: 3
},
{
form: 3,
to: 5
},
{
from: 5,
to: 10
}
]</code>
</pre>
<br />
You'll get the following warning message:<br />
<br />
<pre><code lang="text">Property form ([1].form) is not defined on objects of 'Range' class.</code></pre>
<br />
As you can see, we have the exact path to the typo: the second element in the root array.<br />
<br />
It looks great! Are we done? Not yet. There are a couple of things left to do.<br />
<br />
<h3>
Discriminator fields</h3>
<br />
Let's consider a slightly more complex example. I want to deserialize objects belonging to a hierarchy of classes:<br />
<br />
<pre><code lang="cs">public abstract class Value
{
public Value[] Values { get; set; }
}
public class IntValue : Value
{
public int Value { get; set; }
}
public class StringValue : Value
{
public string Value { get; set; }
}</code></pre>
<br />
To handle this, I must implement a custom converter:<br />
<br />
<pre><code lang="cs">public enum ValueType
{
Integer,
String
}
public class ValueJsonConverter : JsonConverter
{
public override bool CanWrite => false;
public override bool CanConvert(Type objectType)
{
return typeof(Value).IsAssignableFrom(objectType);
}
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
throw new NotSupportedException("Custom converter should only be used while deserializing.");
}
public override object ReadJson(JsonReader reader, Type objectType, object existingValue,
JsonSerializer serializer)
{
if (reader.TokenType == JsonToken.Null)
return null;
// Load JObject from stream
JObject jObject = JObject.Load(reader);
if (jObject == null)
return null;
ValueType valueType;
if (Enum.TryParse(jObject.Value<string>("type"), true, out valueType))
{
switch (valueType)
{
case ValueType.String:
var stringValueModel = new StringValue();
serializer.Populate(jObject.CreateReader(), stringValueModel);
return stringValueModel;
case ValueType.Integer:
var intValueModel = new IntValue();
serializer.Populate(jObject.CreateReader(), intValueModel);
return intValueModel;
default:
throw new ArgumentException($"Unknown value type '{valueType}'");
}
}
throw new ArgumentException("Unable to parse value object");
}
}
</code></pre>
<br />
Now I can use it to deserialize objects of the <i>Value</i> class:<br />
<br />
<pre><code lang="cs">var jsonString = @"
[
{
type: 'integer',
value: 3
},
{
type: 'string',
value: 'aaa'
}
]
";
var discriminator = new Regex("^Could not find member '[^']*' on object of type '[^']*'");
var messages = new List<string>();
var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false},
<b>new ValueJsonConverter()</b>
},
ContractResolver = new CamelCasePropertyNamesContractResolver(),
MissingMemberHandling = MissingMemberHandling.Error,
Error = (sender, args) =>
{
if (discriminator.IsMatch(args.ErrorContext.Error.Message))
{
args.ErrorContext.Handled = true;
messages.Add($"Property {args.ErrorContext.Member} ({args.ErrorContext.Path}) is not defined on objects of '{args.CurrentObject.GetType().Name}' class.");
}
}
};
var result = JsonConvert.DeserializeObject<Value[]>(jsonString, settings);
foreach (var message in messages)
{
Console.WriteLine(message);
}
</code></pre>
<br />
What do you think the result of executing this code will be? Here it is:<br />
<br />
<pre><code lang="text">Property type (type) is not defined on objects of 'IntValue' class.
Property type (type) is not defined on objects of 'StringValue' class.</code></pre>
<br />
Indeed, the '<i>type</i>' property is not a member of the <i>Value</i> class or its descendants. We use it only to discriminate between the different classes.<br />
<br />
So there must be a way to exclude some properties from our warning messages. Here is how we'll do it.<br />
<br />
First of all, I'll extract the typo-handling logic into a separate class:<br />
<br />
<pre><code lang="cs">public class TyposHandler
{
private static readonly Regex Discriminator = new Regex("^Could not find member '[^']*' on object of type '[^']*'");
private readonly List<string> _messages = new List<string>();
private readonly List<Predicate<ErrorEventArgs>> _ignored = new List<Predicate<ErrorEventArgs>>();
public IReadOnlyList<string> Messages => _messages;
public void Handle(object sender, ErrorEventArgs args)
{
if (!Discriminator.IsMatch(args.ErrorContext.Error?.Message ?? ""))
return;
args.ErrorContext.Handled = true;
if (!_ignored.Any(p => p(args)))
{
_messages.Add($"Property {args.ErrorContext.Member} ({args.ErrorContext.Path}) is not defined on objects of '{args.CurrentObject.GetType().Name}' class.");
}
}
public void Ignore(Predicate<ErrorEventArgs> selector)
{
if (selector == null) throw new ArgumentNullException(nameof(selector));
_ignored.Add(selector);
}
}</code></pre>
<br />
It has an <i>Ignore</i> method, which lets us define a predicate for ignoring certain missing members. Here is how we can use it:<br />
<br />
<pre><code lang="cs">var jsonString = @"
[
{
type: 'integer',
value: 3
},
{
type: 'string',
value: 'aaa'
}
]
";
var handler = new TyposHandler();
<b>handler.Ignore(e => e.CurrentObject is Value && e.ErrorContext.Member.ToString() == "type");</b>
var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false},
new ValueJsonConverter()
},
ContractResolver = new CamelCasePropertyNamesContractResolver(),
MissingMemberHandling = MissingMemberHandling.Error,
<b>Error = handler.Handle</b>
};
var result = JsonConvert.DeserializeObject<Value[]>(jsonString, settings);
foreach (var message in handler.Messages)
{
Console.WriteLine(message);
}</code></pre>
<br />
Now we don't get any warning messages for the '<i>type</i>' property.<br />
<br />
<h3>
Incorrect path</h3>
<br />
Let me add one unknown property to the JSON I want to deserialize:<br />
<br />
<pre><code lang="json">[
{
type: 'integer',
value: 3,
<b>unknown: 'aaa'</b>
},
{
type: 'string',
value: 'aaa'
}
]</code></pre>
<br />
Now I'll have the following warning message:<br />
<br />
<pre><code lang="text">Property unknown (unknown) is not defined on objects of 'IntValue' class.</code></pre>
<br />
Do you see the problem? The path <i>(unknown)</i> is incorrect. It should be <i>([0].unknown)</i>. What is the reason for this?<br />
<br />
The reason lies in our <i>ValueJsonConverter</i> class. There we create a new standalone <i>JObject</i>:<br />
<br />
<pre><code lang="cs">JObject jObject = JObject.Load(reader);</code></pre>
<br />
and then populate our model from the properties of this object:<br />
<br />
<pre><code lang="cs">serializer.Populate(jObject.CreateReader(), model);</code></pre>
<br />
If you look at the implementation of the <i>Path</i> property of a <i>JToken</i> object, you'll see that it relies on the path of the parent token. But the object we created using <i>JObject.Load</i> does not have a parent; it is standalone. This means we have lost the path context here.<br />
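<br />
A quick way to see this (a small illustration added here, not code from the original solution):<br />
<br />
<pre><code lang="cs">var root = JArray.Parse("[ { 'a': 1 } ]");
Console.WriteLine(root[0]["a"].Path);      // prints "[0].a": the parent chain is intact

var standalone = JObject.Parse("{ 'a': 1 }");
Console.WriteLine(standalone["a"].Path);   // prints "a": no parent, the "[0]." prefix is lost</code></pre>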
<br />
To fix this problem we'll introduce a stack of paths (a stack, because converters can be invoked recursively when values are nested inside other values):<br />
<br />
<pre><code lang="cs">var jsonString = @"
[
{
type: 'integer',
value: 3,
unknown: 'aaa'
},
{
type: 'string',
value: 'aaa'
}
]
";
<b>var paths = new Stack<string>();</b>
<b>var handler = new TyposHandlerWithPath(paths);</b>
handler.Ignore(e => e.CurrentObject is Value && e.ErrorContext.Member.ToString() == "type");
var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false},
<b>new ValueJsonConverterWithPath(paths)</b>
},
ContractResolver = new CamelCasePropertyNamesContractResolver(),
MissingMemberHandling = MissingMemberHandling.Error,
Error = handler.Handle
};
var result = JsonConvert.DeserializeObject<Value[]>(jsonString, settings);
foreach (var message in handler.Messages)
{
Console.WriteLine(message);
}</code></pre>
<br />
We'll pass this stack to our typo handler and to every value converter we use. Here is how we use the stack in the <i>ReadJson</i> method of a value converter:<br />
<br />
<pre><code lang="cs">if (reader.TokenType == JsonToken.Null)
return null;
<b>var path = reader.Path;</b>
// Load JObject from stream
JObject jObject = JObject.Load(reader);
if (jObject == null)
return null;
ValueType valueType;
if (Enum.TryParse(jObject.Value<string>("type"), true, out valueType))
{
switch (valueType)
{
case ValueType.String:
var stringValueModel = new StringValue();
<b>_pathsStack.Push(path);</b>
serializer.Populate(jObject.CreateReader(), stringValueModel);
<b>_pathsStack.Pop();</b>
return stringValueModel;
case ValueType.Integer:
var intValueModel = new IntValue();
<b>_pathsStack.Push(path);</b>
serializer.Populate(jObject.CreateReader(), intValueModel);
<b>_pathsStack.Pop();</b>
return intValueModel;
default:
throw new ArgumentException($"Unknown value type '{valueType}'");
}
}
throw new ArgumentException($"Unable to parse value object");
</code></pre>
<br />
We push the current path onto the stack before calling <i>serializer.Populate</i> and pop it after the call. Now the stack contains all the parts of the full path from the root of the JSON document.<br />
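<br />
The snippet above shows only the <i>ReadJson</i> body. For completeness, here is a minimal sketch of what the whole <i>ValueJsonConverterWithPath</i> class might look like. The class name and the constructor injection of the stack follow the usage above; the <i>try/finally</i> is my own addition, so that the stack is unwound even if <i>Populate</i> throws:<br />
<br />
<pre><code lang="cs">public class ValueJsonConverterWithPath : JsonConverter
{
    private readonly Stack<string> _pathsStack;

    public ValueJsonConverterWithPath(Stack<string> pathsStack)
    {
        _pathsStack = pathsStack ?? throw new ArgumentNullException(nameof(pathsStack));
    }

    public override bool CanWrite => false;

    public override bool CanConvert(Type objectType)
    {
        return typeof(Value).IsAssignableFrom(objectType);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException("Custom converter should only be used while deserializing.");
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue,
        JsonSerializer serializer)
    {
        if (reader.TokenType == JsonToken.Null)
            return null;
        var path = reader.Path;
        JObject jObject = JObject.Load(reader);
        ValueType valueType;
        if (!Enum.TryParse(jObject.Value<string>("type"), true, out valueType))
            throw new ArgumentException("Unable to parse value object");
        Value model;
        switch (valueType)
        {
            case ValueType.String:
                model = new StringValue();
                break;
            case ValueType.Integer:
                model = new IntValue();
                break;
            default:
                throw new ArgumentException($"Unknown value type '{valueType}'");
        }
        _pathsStack.Push(path);
        try
        {
            // Missing-member errors raised inside Populate carry paths relative
            // to the standalone jObject; the handler prepends the stacked paths.
            serializer.Populate(jObject.CreateReader(), model);
        }
        finally
        {
            _pathsStack.Pop();
        }
        return model;
    }
}</code></pre>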
<br />
Here is how we use the stack in our typo handler. Take a look at the <i>GetPath</i> method:<br />
<br />
<pre><code lang="cs">public class TyposHandlerWithPath
{
<b>private readonly Stack<string> _paths;</b>
private static readonly Regex Discriminator = new Regex("^Could not find member '[^']*' on object of type '[^']*'");
private readonly List<string> _messages = new List<string>();
private readonly List<Predicate<ErrorEventArgs>> _ignored = new List<Predicate<ErrorEventArgs>>();
public IReadOnlyList<string> Messages => _messages;
<b>public TyposHandlerWithPath(Stack<string> paths)
{
_paths = paths;
}</b>
public void Handle(object sender, ErrorEventArgs args)
{
if (!Discriminator.IsMatch(args.ErrorContext.Error?.Message ?? ""))
return;
args.ErrorContext.Handled = true;
if (!_ignored.Any(p => p(args)))
{
_messages.Add($"Property {args.ErrorContext.Member} ({<b>GetPath(args.ErrorContext.Path)</b>}) is not defined on objects of '{args.CurrentObject.GetType().Name}' class.");
}
}
private string GetPath(string path)
{
var pathBuilder = new StringBuilder();
foreach (var pathPart in _paths.Reverse())
{
AddPathPart(pathBuilder, pathPart);
}
if (!string.IsNullOrWhiteSpace(path))
{
AddPathPart(pathBuilder, path);
}
return pathBuilder.ToString();
}
private void AddPathPart(StringBuilder pathBuilder, string pathPart)
{
if (pathBuilder.Length == 0)
pathBuilder.Append(pathPart);
else if (pathPart.StartsWith("["))
pathBuilder.Append(pathPart); // array indexers attach directly: "items" + "[0]" -> "items[0]"
else
pathBuilder.Append("." + pathPart);
}
public void Ignore(Predicate<ErrorEventArgs> selector)
{
if (selector == null) throw new ArgumentNullException(nameof(selector));
_ignored.Add(selector);
}
}</code></pre>
<br />
Here we combine the current path with all the paths previously stored in the stack. This lets us reconstruct the correct path to any JSON property. For example, with "[0]" on the stack and the relative path "unknown" coming from the error context, <i>GetPath</i> returns "[0].unknown". So in our case we get the following warning message:<br />
<br />
<pre><code lang="text">Property unknown (<b>[0].unknown</b>) is not defined on objects of 'IntValue' class.</code></pre>
<br />
Now it is time to consider the last problem we have.<br />
<br />
<h3>
Obsolete properties</h3>
<br />
What can I say? Things change. Even APIs. Some ways of interacting with them become obsolete. In .NET there is <i>ObsoleteAttribute</i>, which you can use to mark members that should not be used anymore. How do we do the same for JSON?<br />
<br />
The problem here is that usage of an obsolete property is not a typo: the property does exist on the .NET type we want to deserialize. How do we inform the serializer that using this property is not allowed? We will throw an exception.<br />
<br />
The <i>Error</i> property of the <i>JsonSerializerSettings</i> class allows us to set a handler for all exceptions (at least for <i>JsonSerializationException</i> exceptions). If the serializer tries to set a value on an obsolete property, we'll throw our own exception derived from <i>JsonSerializationException</i>. Then we'll catch this exception in the <i>Error</i> handler and process it.<br />
<br />
But how do we throw an exception while a property is being set? We'll use the <i>ContractResolver</i> for that. Currently we are using a standard one:<br />
<br />
<pre><code lang="cs">ContractResolver = new CamelCasePropertyNamesContractResolver()</code></pre>
<br />
Now let's create our own implementation of the contract resolver:<br />
<br />
<pre><code lang="cs">public class ObsoletePropertiesContractResolver : CamelCasePropertyNamesContractResolver
{
protected override IValueProvider CreateMemberValueProvider(MemberInfo member)
{
var provider = base.CreateMemberValueProvider(member);
if (member.GetCustomAttributes(typeof(ObsoleteAttribute)).Any())
return new ObsoletePropertyValueProvider(provider, member);
return provider;
}
}
public class ObsoletePropertyValueProvider : IValueProvider
{
private readonly IValueProvider _valueProvider;
private readonly MemberInfo _memberInfo;
public ObsoletePropertyValueProvider(
IValueProvider valueProvider,
MemberInfo memberInfo)
{
_valueProvider = valueProvider;
_memberInfo = memberInfo;
}
public void SetValue(object target, object value)
{
_valueProvider.SetValue(target, value);
throw new ObsoletePropertyException(_memberInfo.DeclaringType, _memberInfo.Name);
}
public object GetValue(object target)
{
return _valueProvider.GetValue(target);
}
}
[Serializable]
public class ObsoletePropertyException : JsonSerializationException
{
public Type MemberType { get; }
public string PropertyName { get; }
public ObsoletePropertyException(Type memberType, string propertyName)
{
MemberType = memberType;
PropertyName = propertyName;
}
}</code></pre>
<br />
As you can see, we return our own value provider for every property marked with the <i>Obsolete</i> attribute. This provider throws our exception after setting the property's value, so once the <i>Error</i> handler marks the exception as handled, deserialization continues and the property still receives its value. Now we can catch the exception:<br />
<br />
<pre><code lang="cs">public class TyposAndObsoleteHandlerWithPath
{
private static readonly Regex Discriminator = new Regex("^Could not find member '[^']*' on object of type '[^']*'");
private readonly Stack<string> _paths;
private readonly List<string> _messages = new List<string>();
private readonly List<Predicate<ErrorEventArgs>> _ignored = new List<Predicate<ErrorEventArgs>>();
private readonly List<Func<Type, string, string>> _obsoleteMessages = new List<Func<Type, string, string>>();
public TyposAndObsoleteHandlerWithPath(Stack<string> paths)
{
_paths = paths ?? throw new ArgumentNullException(nameof(paths));
}
public IReadOnlyList<string> Messages => _messages;
public void Handle(object sender, ErrorEventArgs args)
{
<b> if (args.ErrorContext.Error is ObsoletePropertyException)
{
HandleObsoleteProperty(args, (ObsoletePropertyException) args.ErrorContext.Error);
args.ErrorContext.Handled = true;
return;
}
</b>
if(!Discriminator.IsMatch(args.ErrorContext.Error?.Message ?? ""))
return;
args.ErrorContext.Handled = true;
if (!_ignored.Any(p => p(args)))
{
_messages.Add($"Property {args.ErrorContext.Member} ({GetPath(args.ErrorContext.Path)}) is not defined on objects of '{args.CurrentObject.GetType().Name}' class.");
}
}
private void HandleObsoleteProperty(ErrorEventArgs args, ObsoletePropertyException errorContextError)
{
var message = _obsoleteMessages
.Select(p => p(errorContextError.MemberType, errorContextError.PropertyName))
.FirstOrDefault(m => !string.IsNullOrWhiteSpace(m));
if(!string.IsNullOrWhiteSpace(message))
_messages.Add($"Property {args.ErrorContext.Member} ({GetPath(args.ErrorContext.Path)}) is obsolete on objects of '{args.CurrentObject.GetType().Name}' class. {message}");
else
_messages.Add($"Property {args.ErrorContext.Member} ({GetPath(args.ErrorContext.Path)}) is obsolete on objects of '{args.CurrentObject.GetType().Name}' class.");
}
private string GetPath(string path)
{
var pathBuilder = new StringBuilder();
foreach (var pathPart in _paths.Reverse())
{
AddPathPart(pathBuilder, pathPart);
}
if (!string.IsNullOrWhiteSpace(path))
{
AddPathPart(pathBuilder, path);
}
return pathBuilder.ToString();
}
private void AddPathPart(StringBuilder pathBuilder, string pathPart)
{
if (pathBuilder.Length == 0)
pathBuilder.Append(pathPart);
else if (pathPart.StartsWith("["))
pathBuilder.Append(pathPart); // array indexers attach directly: "items" + "[0]" -> "items[0]"
else
pathBuilder.Append("." + pathPart);
}
public void Ignore(Predicate<ErrorEventArgs> selector)
{
if (selector == null) throw new ArgumentNullException(nameof(selector));
_ignored.Add(selector);
}
public void AddObsoleteMessage(Func<Type, string, string> messageProvider)
{
if (messageProvider == null) throw new ArgumentNullException(nameof(messageProvider));
_obsoleteMessages.Add(messageProvider);
}
}</code></pre>
<br />
Here we also add custom messages for obsolete properties. These messages should explain how to achieve the same result without using the specific obsolete property. In fact, we could extract a message from the <i>Obsolete</i> attribute itself. But that message usually refers to the .NET API, not to the JSON API. This is why I think these messages should be different.<br />
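<br />
For completeness, here is a sketch (my own variation, not part of the solution above) of how the attribute's message could be picked up inside <i>ObsoletePropertiesContractResolver</i>, should you decide to reuse it anyway:<br />
<br />
<pre><code lang="cs">protected override IValueProvider CreateMemberValueProvider(MemberInfo member)
{
    var provider = base.CreateMemberValueProvider(member);
    var obsolete = member.GetCustomAttributes(typeof(ObsoleteAttribute))
        .Cast<ObsoleteAttribute>()
        .FirstOrDefault();
    if (obsolete != null)
    {
        // obsolete.Message (the text from [Obsolete("...")]) could be carried
        // inside ObsoletePropertyException and appended to the warning text.
        return new ObsoletePropertyValueProvider(provider, member);
    }
    return provider;
}</code></pre>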
<br />
Let's test our code now. I'll add an obsolete property to the <i>StringValue</i> class:<br />
<br />
<pre><code lang="cs">public class StringValue : Value
{
public string Value { get; set; }
<b>[Obsolete]
public string Id { get; set; }</b>
}</code></pre>
<br />
Now we'll deserialize JSON that sets the obsolete property:<br />
<br />
<pre><code lang="cs">var jsonString = @"
[
{
type: 'integer',
value: 3,
},
{
type: 'string',
value: 'aaa',
<b>id: 'bbb'</b>
}
]
";
Stack<string> pathsStack = new Stack<string>();
var handler = new TyposAndObsoleteHandlerWithPath(pathsStack);
handler.Ignore(e => e.CurrentObject is Value && e.ErrorContext.Member.ToString() == "type");
<b>handler.AddObsoleteMessage((type, name) =>
{
if (type == typeof(StringValue) && name == "Id")
return "Use another property here";
return null;
});</b>
var settings = new JsonSerializerSettings
{
Converters =
{
new StringEnumConverter {CamelCaseText = false},
new ValueJsonConverterWithPath(pathsStack)
},
<b>ContractResolver = new ObsoletePropertiesContractResolver(),</b>
MissingMemberHandling = MissingMemberHandling.Error,
Error = handler.Handle
};
var result = JsonConvert.DeserializeObject<Value[]>(jsonString, settings);
foreach (var message in handler.Messages)
{
Console.WriteLine(message);
}</code></pre>
<br />
As a result, we'll have the following warning message:<br />
<br />
<pre><code lang="text">Property id ([1].id) is obsolete on objects of 'StringValue' class. Use another property here</code></pre>
<br />
<h3>
Conclusion</h3>
<br />
That's it. The code here is not production-ready, but I think it is a good starting point. Such warning messages can make your Web API more user-friendly.<br />
<br />
Another interesting problem is how to make all this work with ASP.NET Web API. There we don't have direct access to the JSON serializer, and all serializer instances share the same <i>JsonSerializerSettings</i> object, so somehow we must distinguish one request from another. But that is a question for another article.</div>
</div>
Иван Якимовhttp://www.blogger.com/profile/07472426134528440328noreply@blogger.com0