Tuesday, April 15, 2025

Nuke, AppVeyor, GitVerse

I recently worked on a project of my own: a Roslyn generator that creates read-only interfaces for existing classes. When I decided it was time to share my results with the community in the form of a NuGet package, I wanted to implement a build pipeline. I had already solved a similar problem using AppVeyor, but this time there were some differences. First of all, in my previous projects I used Cake to write build pipeline tasks. This time I decided to try Nuke, which promises better integration with Visual Studio. I also wanted to use the Russian equivalent of GitHub - GitVerse. Here I'll tell you how it went.

Installation of Nuke

There are no problems with installing Nuke. First, you install the tool itself on your machine:

    
> dotnet tool install Nuke.GlobalTool --global
    

After that, you can add Nuke to your code. To do this, run the following command in the directory with your .sln file:

    
> nuke :setup
    

You will be asked to enter or confirm the root directory of your code, the name of the Nuke project, the desired location of this project, and the solution to which the Nuke project should be added. As a result, you'll have a new project whose default name is _build. Several new files will also appear in the directory where you executed this command (build.cmd, build.ps1, ...). They are used to run your Nuke code. You can launch it right now:

    
> .\build.ps1
    

Well, not really. For example, I got the following error:

    
 error CS0234: The type or namespace name 'FileSystemTasks' does not exist in the namespace 'Nuke.Common.IO' (are you missing an assembly reference?)
    

Pipeline file

You'll need to launch Visual Studio and open your solution there. In the _build project you'll find the Build.cs file. It seems that the error is caused by the directive using static Nuke.Common.IO.FileSystemTasks;, which is no longer needed. After you delete it, build.ps1 runs smoothly.

The structure of the Build.cs file is quite familiar. The file contains a description of the build tasks, the dependencies between them, and the order in which they are performed. The Nuke documentation describes everything you need to understand the contents of this file very well, and I recommend that you familiarize yourself with it.
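
To give you an idea of its shape, here is a stripped-down sketch of such a file (just an illustration, not the full generated code):

    
using Nuke.Common;

class Build : NukeBuild
{
    // The target executed when build.ps1 is run without arguments
    public static int Main() => Execute<Build>(x => x.Compile);

    Target Clean => _ => _
        .Executes(() =>
        {
            // delete previous build results
        });

    Target Compile => _ => _
        .DependsOn(Clean)   // Clean always runs before Compile
        .Executes(() =>
        {
            // compile the solution
        });
}
    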

Here we will write several tasks.

Creation of output directory

As I said before, the documentation describes everything related to Nuke itself very well. But we also need to compile projects, run tests, and so on, and there is no information about this in the Nuke documentation; you'll have to look for it in various places on the Internet. Let's start by creating a directory where we'll put the results of our build pipeline.

    
using Nuke.Common.IO;

...

private readonly AbsolutePath OutputDirectory = RootDirectory / "output";

...

Target Clean => _ => _
    .Description("Clean output directory")
    .Executes(() =>
    {
        // Remove all bin and obj directories under src and tests
        (RootDirectory / "src").GlobDirectories("**/bin", "**/obj").ForEach(d => d.DeleteDirectory());
        (RootDirectory / "tests").GlobDirectories("**/bin", "**/obj").ForEach(d => d.DeleteDirectory());

        // Create an empty directory for the build artifacts
        OutputDirectory.CreateOrCleanDirectory();
    });
    

Nuke provides us with the RootDirectory property, which contains the path to the root directory of the repository. In it, we want to create an output subdirectory for the build results. Since we use this directory often, let's create an additional field, OutputDirectory. Please note that it is very convenient to use the / operator to build file system paths. It allows us to compose paths without worrying about the actual path separator on different operating systems.
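
For example (just an illustration, these paths are not part of my pipeline):

    
// Both of these produce a correct path on Windows and on Linux
AbsolutePath reports   = RootDirectory / "output" / "reports";
AbsolutePath nugetFile = OutputDirectory / "MyPackage.1.0.0.nupkg";
    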

We do several things inside the method for the Clean build stage. First of all, we delete all the bin and obj directories. This is not strictly necessary, but it allows me to demonstrate how to work with glob expressions and how to process their results.

Finally, using the CreateOrCleanDirectory method, we create a directory for our artifacts. This method frees us from thinking about whether the directory already exists and whether there is any content in it. After execution, we can be sure that the directory exists and is empty.

Restoring NuGet dependencies

Before compiling a project, we need to restore all the NuGet packages it depends on. Here is how to do it:

    
using Nuke.Common.Tools.DotNet;

...

[Parameter]
readonly string Solution;

[Parameter]
readonly DotNetVerbosity DotNetVerbosity = DotNetVerbosity.quiet;

...

Target Restore => _ => _
    .Description("Restore dependencies")
    .DependsOn(Clean)
    .Executes(() =>
    {
        DotNetTasks.DotNetRestore(new DotNetRestoreSettings()
            .SetProjectFile(RootDirectory / Solution)
            .SetVerbosity(DotNetVerbosity));
    });

    

The Solution field has already been created for us by Nuke. We have now added an additional field, DotNetVerbosity, which determines how much information dotnet tasks will give us. By default, we set it to quiet, but you can change its value from the command line:

    
> .\build.ps1 --DotNetVerbosity detailed
    

Please note that, using the DependsOn method, we inform Nuke that before performing the Restore step it must also perform the Clean step. This is how you can combine different build steps into a single pipeline. Nuke also provides Before and After methods, which you can read about in the documentation.
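
To illustrate the difference (a made-up example, not part of my pipeline): DependsOn both triggers and orders the other target, while After and Before only define ordering when both targets are scheduled anyway:

    
// Hypothetical targets, only to show the ordering methods
Target A => _ => _
    .Executes(() => Log.Information("A"));

Target B => _ => _
    .DependsOn(A)   // invoking B always executes A first
    .Executes(() => Log.Information("B"));

Target C => _ => _
    .After(B)       // C does not trigger B, but if both are invoked, B runs first
    .Executes(() => Log.Information("C"));
    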

Solution compilation

Now let's compile our solution:

    
[Parameter("Configuration to build - Default is 'Debug' (local) or 'Release' (server)")]
readonly Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

Target Compile => _ => _
    .Description("Compile project")
    .DependsOn(Restore)
    .Executes(() =>
    {
        DotNetTasks.DotNetBuild(new DotNetBuildSettings()
            .SetConfiguration(Configuration)
            .SetNoRestore(true)
            .SetProjectFile(RootDirectory / Solution)
            .SetVerbosity(DotNetVerbosity));
    });
    

Here we select the desired configuration for compilation. We also say that there is no need to restore NuGet dependencies, as we have already done this in the previous step.

Running tests

Our project has been compiled. It is time to run the tests. I used MSTest v2 because it is the framework I use at work. The first version of this framework was very unreliable: every time we wanted to do something non-trivial, we had to deal with the problems that arose. But the developers have done their homework, and now everything works pretty well. It's not perfect, of course; there are still some problems. For example, one day after a Visual Studio update, all tests stopped running in the IDE, and we had to update the MSTest NuGet packages to fix this. But in general, the framework is quite usable now.

So, how can we run our tests?

    
Target Test => _ => _
    .Description("Run tests")
    .DependsOn(Compile)
    .Executes(() =>
    {
        DotNetTasks.DotNetTest(new DotNetTestSettings()
            .SetConfiguration(Configuration)
            .SetNoRestore(true)
            .SetNoBuild(true)
            .SetSettingsFile(RootDirectory / "tests" / "tests.runsettings")
            .SetProjectFile(RootDirectory / Solution)
            .SetVerbosity(DotNetVerbosity)
            .SetLoggers("trx;LogFileName=mstest-results.trx")
            .SetResultsDirectory(OutputDirectory));
    });
    

Here we say that there is no need to restore dependencies or compile the project; this has already been done. Using the SetLoggers method, we specify the format of the file with the results of our tests. We'll need it to display information about the tests in AppVeyor:

AppVeyor tests

I also use the tests.runsettings settings file for my tests:

    
<?xml version="1.0" encoding="utf-8" ?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="XPlat code coverage">
        <Configuration>
          <Format>cobertura,opencover</Format>
          <Exclude>[*.Tests?]*</Exclude>
          <SkipAutoProps>true</SkipAutoProps>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
    

There is a reason why I need it. I want to collect information about code coverage by tests. This can be done with the following call:

    
    .SetDataCollector("XPlat Code Coverage")
    

But I needed more fine-tuning. I wanted to exclude the assembly with the tests from the code coverage; this is exactly what the Exclude tag does. I'm also not interested in the coverage of auto-properties, which is why I added the SkipAutoProps tag.

Well, this code runs my tests. But for some reason, it creates two sets of files with the code coverage results in the output directory, located in different subdirectories. I don't know why this happens, and I don't want to spend the effort to fix it. It does not stop me from continuing.

Creation of code coverage report

Now, based on the collected information about the code coverage, I want to create a report in a human-readable form.

    
Target CoverageReport => _ => _
    .Description("Create code coverage report")
    .OnlyWhenStatic(() => IsLocalBuild)
    .TriggeredBy(Test)
    .Executes(() =>
    {
        var report = OutputDirectory.GlobFiles(@"**/*.cobertura.xml").First();

        var reportDirectory = (OutputDirectory / "CodeCoverageReport");

        reportDirectory.CreateOrCleanDirectory();

        ReportGeneratorTasks.ReportGenerator(new ReportGeneratorSettings()
            .SetReports(report)
            .SetTargetDirectory(reportDirectory)
            .SetReportTypes(ReportTypes.HtmlInline)
            );
    });
    

First of all, I want to create these reports only on my development machine. I don't want to create them on AppVeyor, because there is no convenient way to view them there. I could have published such a report as an artifact of the build, but decided against it. That is why I use a condition for this build stage

    
    .OnlyWhenStatic(() => IsLocalBuild)
    

which runs the stage only during a local build. In addition, the creation of the report does not affect other stages of the build. That is why I use TriggeredBy instead of DependsOn here.

For this stage to work, I need to find the files with information about the code coverage of my tests. As I said before, these files are created in several different subdirectories with arcane names; it looks like the names are generated from the build timestamps. That is why I just take the first file I find:

    
var report = OutputDirectory.GlobFiles(@"**/*.cobertura.xml").First();
    

When I tried to run this stage, I got the following error:

    
System.Exception: Missing package reference/download.
Run one of the following commands:
  - nuke :add-package ReportGenerator --version 5.4.4
  - nuke :add-package ReportGenerator --version 5.4.3
    

To make our code work, we need to add one NuGet package to our build project. We can do this as follows:

    
> nuke :add-package ReportGenerator --version 5.4.3
    

Now our code works without any problems. A set of files is created in the CodeCoverageReport subfolder of the output directory. It contains an index.html file, which is the code coverage report in HTML format. You can open index.html in any browser and view the report.

Creation of NuGet package

When we are satisfied with our code and tests, we can create a NuGet package. Here is the build stage where this is done:

    
Target CreateNuGet => _ => _
    .Description("Create NuGet package")
    .DependsOn(Test)
    .Produces(OutputDirectory / "*.nupkg")
    .Executes(() =>
    {
        DotNetTasks.DotNetPack(new DotNetPackSettings()
            .SetConfiguration(Configuration)
            .SetNoRestore(true)
            .SetNoBuild(true)
            .SetVerbosity(DotNetVerbosity)
            .SetProject(RootDirectory / "src" / "Generator" / "Generator.csproj")
            .SetIncludeSource(false)
            .SetIncludeSymbols(false)
            .SetOutputDirectory(OutputDirectory));
    });
    

Here I disabled the inclusion of the source code in the NuGet package (SetIncludeSource(false)) and the creation of debugging symbols (SetIncludeSymbols(false)).

It is interesting to discuss how we set the properties of the NuGet package. There have been many approaches to this over the years; in some of them, I used .nuspec files. Now everything can be configured in the .csproj file. I added the following PropertyGroup there:

    
<PropertyGroup>
	<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
	<IncludeBuildOutput>false</IncludeBuildOutput>
	<Version>1.0.1</Version>
	<AssemblyVersion>$(Version).0</AssemblyVersion>
	<AssemblyFileVersion>$(Version).0</AssemblyFileVersion>
	<Authors>Ivan Yakimov</Authors>
	<Description>Generator to create read-only interfaces.</Description>
	<PackageLicenseExpression>MIT</PackageLicenseExpression>
	<PackageIcon>settings.png</PackageIcon>
	<PackageProjectUrl>https://gitverse.ru/yakimovim/read-only-interface-generator</PackageProjectUrl>
	<RepositoryUrl>https://gitverse.ru/yakimovim/read-only-interface-generator</RepositoryUrl>
	<PackageTags>generator read-only interface</PackageTags>
	<PackageReadmeFile>README.md</PackageReadmeFile>
	<SuppressDependenciesWhenPacking>true</SuppressDependenciesWhenPacking>
</PropertyGroup>
    

The way to specify the package license has changed (the PackageLicenseExpression tag). In addition, we can now include the content of the README file in the NuGet package (the PackageReadmeFile tag). This allows us to view the package documentation directly on its Web page, which is very convenient.

NuGet package Web page
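
One detail that is easy to miss: for PackageReadmeFile and PackageIcon to work, the README.md and the icon file themselves have to be packed into the package. In the .csproj this is done with an ItemGroup along these lines (the Include paths depend on where these files are located in your repository):

    
<ItemGroup>
	<!-- Pack the readme and the icon into the root of the package -->
	<None Include="README.md" Pack="true" PackagePath="\" />
	<None Include="settings.png" Pack="true" PackagePath="\" />
</ItemGroup>
    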

NuGet package publishing

Now that our NuGet package is ready, it is time to publish it. To do this, we need to install another dependency:

    
> nuke :add-package NuGet.CommandLine --version 6.12.2
    

Now we can publish the package:

    
Target PublishNuGetToMyGet => _ => _
    .Description("Publish NuGet package to MyGet")
    .DependsOn(CreateNuGet)
    .Executes(() =>
    {
        var apiKey = Environment.GetEnvironmentVariable("MyGetApiKey");

        if (string.IsNullOrWhiteSpace(apiKey))
        {
            Log.Warning("Unable to find MyGet API Key");
            return;
        }

        var nuGetPackage = OutputDirectory.GlobFiles("*.nupkg").SingleOrError("Unable to find NuGet package");

        NuGetTasks.NuGetPush(new NuGetPushSettings()
            .SetTargetPath(nuGetPackage)
            .SetApiKey(apiKey)
            .SetSource("https://www.myget.org/F/ivani/api/v2/package")
        );
    });
    

Here I am publishing the package to MyGet. I take the API key from the environment variable. AppVeyor allows us to set environment variables for a project:

AppVeyor environment variables

Setting repository for AppVeyor

Now our build pipeline is ready. It is time to launch it on AppVeyor. After creating the AppVeyor project, we need to specify the repository from which to take the code. It can be done on the General page:

AppVeyor Git repository

Starting pipeline

Now AppVeyor knows where my code is located. But how to start the pipeline?

There are two ways. First, we can create an appveyor.yml file that describes all the actions AppVeyor should perform. This file should be placed in the root of our repository; AppVeyor will automatically find it and follow its instructions. Nuke can actually create this file for us. All we need to do is apply the appropriate attribute to the Build class:

    
[AppVeyor(AppVeyorImage.VisualStudio2022)]
class Build : NukeBuild
    

After launching build.ps1, appveyor.yml will appear in the root directory of your application:

    
image:
  - Visual Studio 2022

build_script:
  - cmd: .\build.cmd 
  - sh: ./build.cmd 
    

Now AppVeyor performs the build perfectly fine, with one exception: the list of completed tests was not displayed, and I really wanted to see it. Reading the documentation shows that we need to execute some code to send the results of our tests to AppVeyor. The problem is that if we use appveyor.yml to describe our pipeline, all settings in the AppVeyor UI are ignored.

That is why I disabled the generation of the appveyor.yml file and deleted it from the repository to achieve the desired behavior. Then I started the build manually:

AppVeyor build run

And finally, on the General tab, I wrote a script that transmits the results of my tests to AppVeyor:

Send test results to AppVeyor
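
If you prefer to keep everything inside the Nuke project, roughly the same upload can be done from a build target. Here is a sketch of how it might look (the target name and the trx file name are just illustrations based on the Test target above; the endpoint is the one described in the AppVeyor documentation):

    
Target UploadTestResults => _ => _
    .TriggeredBy(Test)
    .OnlyWhenStatic(() => !IsLocalBuild)
    .Executes(() =>
    {
        // APPVEYOR_JOB_ID is set by AppVeyor for every build job
        var jobId = Environment.GetEnvironmentVariable("APPVEYOR_JOB_ID");
        if (string.IsNullOrWhiteSpace(jobId))
        {
            Log.Warning("Unable to find AppVeyor job id");
            return;
        }

        // Send the trx file produced by the Test target to AppVeyor
        using var client = new System.Net.WebClient();
        client.UploadFile(
            $"https://ci.appveyor.com/api/testresults/mstest/{jobId}",
            OutputDirectory / "mstest-results.trx");
    });
    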

After that, I was able to see the results of my tests in the UI:

AppVeyor tests

Rebuilding of changes

The build works fine if I start it manually in AppVeyor. But I'd like the build to run automatically when I push new changes to my repository on GitVerse, and that did not happen. For GitHub everything works out of the box; with GitVerse we need to make some adjustments.

We should ask GitVerse to inform AppVeyor about changes in the repository. This can be done using a webhook. The address of the AppVeyor webhook can be found on the General tab:

AppVeyor webhook URL

This address should be entered in the GitVerse repository settings:

GitVerse webhooks

Conclusion

That's all. It took some effort to achieve the desired result. Using Nuke left a pleasant impression, despite the lack of documentation on solving specific build tasks. The integration with AppVeyor turned out to be more difficult than I expected, mostly because I wanted my test results to be easily accessible. Using GitVerse only requires connecting the webhook manually, and there is nothing complicated about that.

I hope this information will be useful to you. Good luck!
