Longs, Original

Cake.Console 1.2.0

After a bit of work, I have found Cake.Console stable enough for a first release. I decided to version it with the same number as Cake itself. If needed I will update the revision number.

Usage

Create a new project referencing Cake.Console. It will look something like this

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
    <OutputType>exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Cake.Console" Version="1.2.0" />
  </ItemGroup>
</Project>

Add a single Program.cs file with the code, taking advantage of top-level statements.

There are 2 ways of using Cake.Console:

  1. Building an IScriptHost. This is the implicit object in .cake scripts, so we can use it to register tasks, perform setup, etc.
var host = new CakeHostBuilder().BuildHost(args);

host.Setup(c => { /* do something */ });
host.Task("TaskName").Does(c => c.Information("Hello"));
host.RunTarget(host.Context.Arguments.GetArgument("target"));
  2. Using the Cake CLI, which includes arguments like --target, --version, --info, --tree, --description, --exclusive…
    It’s very similar to Frosting:
new CakeHostBuilder()
    .WorkingDirectory<WorkingDirectory>()
    .ContextData<BuildData>()
    .RegisterTasks<CakeTasks>()
    .InstallNugetTool("NuGet.CommandLine", "5.9.1")
    .RunCakeCli(args);

In this case, we don’t have access to the host, so we need to define the build with the 4 extensions that come with Cake.Console:

  • WorkingDirectory<>
  • RegisterTasks<>
  • ContextData<>
  • InstallNugetTool

WorkingDirectory<>

Here we can use a class that implements the IWorkingDirectory interface, which requires a string WorkingDirectory property.

The class can receive in the constructor any part of the cake infrastructure (ICakeContext, ICakeLog, ICakeArguments, ICakeConfiguration…)

RegisterTasks<>

Here we can use a class that implements the ICakeTasks interface.

The class can receive in the constructor any part of the cake infrastructure (ICakeContext, ICakeLog, ICakeArguments, ICakeConfiguration…)

All the methods that have the signature void Name(CakeTaskBuilder builder) will be called, and the name of the method will be the name of the task.

ContextData<>

Here we can use any class, which will then be available for use in the tasks’ definitions.

InstallNugetTool

Given a package name and a version, installs a NuGet package as a Cake tool.

Summary

Putting it all together

using Cake.Common.Diagnostics;
using Cake.Console;
using Cake.Core;

new CakeHostBuilder()
    .WorkingDirectory<WorkingDir>()
    .ContextData<ContextData>()
    .RegisterTasks<CakeTasks>()
    .InstallNugetTool("xunit.runner.console", "2.4.1")
    .RunCakeCli(args);

record WorkingDir(string WorkingDirectory = ".") : IWorkingDirectory;

class ContextData
{
    public string SomeVeryImportantData { get; set; } = "Cake is awesome!";
    public ContextData(ICakeArguments args)
    {
        if (args.HasArgument("tone-down"))
        {
            SomeVeryImportantData = "Cake is pretty good...";
        }
    }
}


class CakeTasks : ICakeTasks
{
    private readonly ICakeContext ctx;

    public CakeTasks(ICakeContext ctx) => this.ctx = ctx;

    public void TaskName(CakeTaskBuilder b) => b
        .Description("Some task")
        .Does(() => ctx.Information("Something"));

    public void AnotherTask(CakeTaskBuilder b) => b
        .IsDependentOn(nameof(TaskName))
        .Does<ContextData>(data => ctx.Information(data.SomeVeryImportantData));
}
Longs, Original

Presenting Cake.Console

I wanted to run Cake inside a console app, without the penalty of pre-processing the .cake DSL, and have all the power of an IDE (refactorings, find usages, …). I had 2 possibilities:

  1. Cake.Frosting
    This was the best option, but I really didn’t like a couple of things, like the ceremony of writing a class for each task, or using attributes to describe tasks instead of the fluent syntax of cake scripts
  2. Cake.Bridge
    This was more in line with what I wanted, but it missed some things, like tool installing.

So, it was time to roll up my sleeves and get to work. Presenting Cake.Console!

var cake = new CakeHostBuilder(args)
    .InstallNugetTool("xunit.runner.console", "2.4.1")
    .Build();

cake.Task("Hello")
    .Description("This is just like a cake script")
    .IsDependeeOf("World")
    .Does(c => c.Information("but methods are on the 'cake' object"));

cake.Task("World")
    .Does(c => c.Information("Hello world"));

var target = cake.Context.Argument("target", "hello");
cake.RunTarget(target);

It’s a fairly simple project, but I learned a lot about cake’s internals.

Cake has an architecture where every piece of functionality is behind an interface and is injected into objects as needed. The mapping of interfaces to implementations is defined in “Modules”. There is an ICakeContainerRegistrar object that can receive registrations. I needed to implement a registrar if I wanted to take advantage of Cake’s internal implementations of those interfaces. So I created a CakeContainer that can receive registrations from the Cake.Core and Cake.NuGet modules and then create an IServiceProvider that can instantiate the needed parts of Cake.

After understanding this part, it was just a case of wiring up some moving parts, and I got it to work. The only “hand-coded” part was the parsing of command-line arguments, which was done very naively.

What I got was a piece of code that can give me an IScriptHost object, which is the implicit object that is called in .cake scripts when we define tasks or use addins.

I still needed one thing: the installation of tools. To add that functionality to Cake.Console, I added 2 things: a way to register stuff into the ICakeContainerRegistrar, and a special IHostBuilderBehaviour interface that executes a Run() method before the IScriptHost is returned. What I got was a very simple CakeHostBuilder that I can then extend via extension methods.

With all this infrastructure in place, I then added 5 extensions that fulfilled all my needs in this project.

Installing Tools

I added the ICakeToolReference interface and the ToolInstallerBehaviour. I also created a CakeNugetTool class to build the correct URL for a NuGet package.
Then it’s just a matter of registering ICakeToolReferences into the ICakeContainerRegistrar.

Tasks from methods

I added the ICakeTasks interface and the TaskRegisteringBehaviour, which instantiates the ICakeTasks and calls all the methods that receive a CakeTaskBuilder. The CakeTaskBuilder will already have created a task with the same name as the method.

Changing WorkingDirectory

Once more I added an interface, IWorkingDirectory, which has a working directory string, and the WorkingDirectoryBehaviour, which converts it to an absolute path and changes the working directory. This is useful when your build scripts are not in the same tree as the code itself.

Auto setup context data

The Setup callback on the IScriptHost can return an object that can then be used in the CakeTaskBuilder extensions. This is called a Typed Context. I wanted a typed context that could tap into the internals of Cake, so it needed to be registered into the ICakeContainerRegistrar.

Once more I created a behaviour, SetupContextDataBehaviour, and I was good to go. I can even register multiple typed contexts and use the needed one in different tasks.

Run target

I found myself hating the part of the script that reads the “target” from the arguments. It just breaks the fluent vibe of the code! So I extended the CakeHostBuilder with a Run method that simply reads the target from the arguments and runs it. Putting it all together…

All modesty aside, I really think it is looking great!

Longs, Original

Goodbye!

In March 2021, I left the company and the project that I worked for 9 years. This is my goodbye message.

If I’m going to be completely honest, I never liked reading these kinds of goodbye messages, but now that it’s my turn to go, I feel compelled to write something. Now more than ever, when we cannot see each other face to face, I want to leave a few words with you. People seem to think that when we are going through this kind of transition we are suddenly filled with a burst of wisdom, and are thus more willing to hear us out.

I’ve been a part of Critical Software since 2012 and had my share of high and low moments, but overall I know this has been an invaluable experience. As our last CEO used to say, this company is a real “school of engineering”, for those who are willing to learn. It’s not because we are so much smarter than everyone else, but because we are willing to empower ourselves to experiment, learn and improve on the job.

In these 9 years I’ve worked on only one project, Verticalla‘s VisionCenter. I can proudly affirm that I’m partly responsible for its success. Nevertheless, even though I very much enjoyed the work I’ve done, upon some reflection I’ve come to the realization that what stays most fondly in my memory are the relationships I’ve built over the years.

Those who know me are aware that I’m not very good at building and keeping such relationships, but I’m grateful that some of you took the time and patience to get to know me and become more than mere co-workers. I advise everyone to get to know the people you work with and befriend them, even though some of them (like myself) are not very friendly. It will bring benefits not only to your personal life (it’s nice to have friends) but also to your work! When we like the ones we work with, we are (1) more available to listen to them and (2) more sympathetic to their shortcomings. This raises the level of trust, which in turn increases productivity, morale, teamwork and overall happiness.

Some of the friends I’ve made at Critical Software are no longer working there – like Nuno Sousa, Ricardo Guerra and Pedro Costa, my mentors – but many still are. Thank you Tiago Carregã for the chess games (I know I suck), Luisa Barbosa for telling me to shut up, Catarina Azevedo for the good mood in the office, Agne Pustovoitaite for the Lithuanian lessons, Paulo Silva for the music, and Matt Brake for being very approachable.

Also, from Sauter side, I really enjoyed working with Patrice Hell and Hartmut Melchin, both being top quality professionals. I cannot name you all but, thanks to Benjamim Cardoso, Carla Machado, Telmo Inácio, Jorge Ribeiro, Hugo Sousa, Ricardo Lamarão, Luís Silva, Nuno Alves, João Carloto, Braselina Sousa and so many others that had a good influence on me.

If you’re brave enough to say goodbye, life will reward you with a new hello

Paulo Coelho

So thank you all and goodbye!


As we’re approaching Easter, and this is one of the most important times of the year for a Christian like me, I also want to leave you with some thoughts about Jesus and his resurrection.

  • Christianity is a fact-based religion and the resurrection is the most important of those facts.
  • If Jesus did not rise from the dead, then Christianity is false and life ends at the grave
    • There is no heaven or hell
    • There is no punishment for evil
    • There is no reward for good
    • Jesus’ death is just another death
  • If Jesus did rise from the dead, then Christianity is true and life does not end at the grave
    • There is hope of heaven
    • Evil will be punished
    • Good will be rewarded
    • Jesus’ death unlocks the door to eternal life

When we have assurance of the fact of the resurrection:

  • However bad things get in this life, heaven is secure for us.
  • We have peace over the death of our loved ones, because they are not gone forever.
  • Christ’s resurrection is the basis of the trust we have of our own resurrection. He conquered death.

In short, being sure that the resurrection really occurred is of great importance. Don’t take my word for it. Check for yourselves!

If you do the research and find out I’m wrong, you just lost a bit of time.

If you do the research and find out I’m right, you also lost a bit of time but gained eternity.


Some resources to get you started:

Original, Shorts

Dependence

I’ve been thinking about the stuff I depend upon to keep this blog, and in particular about link rot.

Most of the stuff I post here is sharing something I saw elsewhere.

I depend on other sites I link to to keep the links alive. I could mitigate this by always linking via the Wayback Machine, but I would also be dependent on them…

I depend on YouTube for the videos I share. I could copy and host them myself, but that would increase a LOT the cost of keeping all the stuff. I depend on Spotify for the songs I share. I could copy the audio and host them myself, but that might be considered a copyright violation.

And even for stuff I own and have a backup for, I depend on Flickr to keep my photos, on SoundCloud to keep my music, on GitHub to keep my code. The list goes on and on…

Dependence and trust. This is what makes the web go round.

Speaking of trust…

Should I go live in a cabin in the woods?

Longs, Original

.NET

For the last 8 years I have worked with .NET as my main development platform. When someone asked what .NET is, I had some difficulty giving a clear response. So, to get my ideas in order, I wrote this.

Definition

.NET (pronounced “dot net”) is a free (as in beer), cross-platform (Windows, Linux, and macOS) and open-source framework for developing a variety of application models, such as Web, Desktop or Mobile Apps, Games, Microservices, Machine Learning and IoT. .NET is under the stewardship of the .NET Foundation, an independent organization that fosters open development and collaboration around the .NET ecosystem.

What is .NET?

We can view .NET from 3 different sides:

The Platform

The platform is where your code runs: the runtime and the development tooling that comes with it, like compilers.
There are 4 main runtime implementations of .NET:

  1. The old .NET Framework, which has stopped new development but will be supported as a component of the Windows OS (source)
  2. Mono, which is an open-source implementation of the .NET Framework
  3. .NET Core, which is the new open-source version of the .NET framework from Microsoft
  4. .NET Native for Universal Windows Platform

The last version of the .NET Framework will be 4.8, and the last version of .NET Core will be 3.1. After this there will be only .NET, starting with version 5.

Programs written for .NET execute in a software environment (in contrast to a hardware environment) named the Common Language Runtime (CLR).
The CLR is an application virtual machine that provides services such as security (type safety, memory access, …), memory management (allocation, garbage collection, …), and exception handling.

The CLR runs software that is compiled to Intermediate Language (IL). Any language that compiles to IL can be run in a .NET runtime.

The .NET compilers produce assemblies (files with the .dll extension) that contain executable code in the form of IL instructions, and symbolic information in the form of metadata.
Before it is executed, the IL code in an assembly is converted to processor-specific code by the CLR.

The Libraries

All .NET implementations implement a base set of APIs, called .NET Standard.

If you write code that targets .NET Standard, it will be able to run in any runtime that supports it.
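As a sketch, a class library that targets the contract rather than a concrete runtime declares it in its project file like this (netstandard2.0 is just an example version):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Target the API contract instead of a specific runtime -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

A library built this way can be consumed from the .NET Framework, .NET Core or Mono alike.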

Each implementation can also expose additional APIs that are specific to the operating systems it runs on.
For example, .NET Framework is a Windows-only .NET implementation that includes APIs for accessing the Windows Registry.

In addition to .NET Standard, we have NuGet, the default package manager and repository for .NET libraries, where we can find over 200,000 packages.

The Languages

.NET supports multiple programming languages. The .NET implementations implement the Common Language Infrastructure (CLI), which among other things specifies a language-independent runtime and language interoperability.
This means that you can choose any .NET language to build apps and services on .NET. The CLI is an ECMA standard (ECMA-335), a very interesting 500-page read.

You can write .NET apps in many languages, but the most used ones are C#, F#, and Visual Basic.

  • C# is an object-oriented and type-safe programming language. It’s now a standard in both ECMA and ISO (ECMA-334, ISO/IEC 23270).
  • F# is an open-source, functional programming language for .NET. It also includes object-oriented and imperative programming.
  • Visual Basic is object-oriented and type-safe, but has an approachable syntax that uses more words than symbols.

.NET Releases and Support (source)

.NET Core 3.0 shipped in September 2019, and .NET 5 is planned for November 2020. After that, a major version of .NET is expected once a year, every November.

There are 2 types of releases: Long Term Support (LTS) and Current. The even-numbered ones will be LTS.

LTS releases are supported for three years after the initial release. Current releases are supported for three months after a subsequent Current or LTS release.
LTS releases will receive only critical and compatible fixes throughout their life-cycle. Current releases will receive these same fixes and will also be updated with compatible innovations and features.

Original, Shorts

Powershell closures

As a follow-up to the previous article, I needed to warn you (and me) about PowerShell closures. They didn’t work as I expected.

$Foo = 1
Write-Host "I expect Foo to be 1: " $Foo

function New-Closure {
    param([Scriptblock] $Expression, $Foo = 2)
    & $Expression # the block is invoked inside this function's scope
}

# A plain ScriptBlock resolves variables dynamically at invocation time,
# so $Foo finds the function's parameter, not the caller's variable
New-Closure -Expression {
    Write-Host "I expected Foo to be 1, but was 2: " $Foo
}

# GetNewClosure() captures the caller's $Foo at creation time
New-Closure -Expression {
    Write-Host "I expected Foo to be 1, and it is: " $Foo
}.GetNewClosure()

The point of confusion for me was that $Foo can be changed by the invoker, unless we add the .GetNewClosure().

Since, as in the previous article, a parameter name like $ResourceGroupName is very common, it’s best to always use .GetNewClosure() on the ScriptBlock.

Original, Shorts

Execute an arbitrary piece of code with a temporary file uploaded to azure

A short one today.

My use case is to run a Set-AzureRmVMCustomScriptExtension command. For it to work, I need the script to be somewhere accessible by the VM.

Nothing too fancy. Might be useful for someone (or a future version of me). It creates a temp container, uploads the file, invokes the code passing in the URI, and, whatever happens, the container will be deleted. It assumes a connection to Azure and an already-existing storage account.

Just to highlight an interesting point:

"temp-container-" + (-join ((97..122) | Get-Random -Count 5 | % {[char]$_}))

Initially I just had "temp-container", but when running it twice in a row I got an error saying that the container could not be created as it was being deleted. So I found a way to generate random letters, and now it creates a "unique" container each time.

Longs, Original

How much logic is too much logic?

Today there was a small argument about constructors and how slim they should be.

In this post I will attempt to explain my position on it.

On a general note, we all agree that constructors should not do much. Nevertheless, I affirm that in some cases it is acceptable, and even useful, to have logic in them, given that some rules are respected.

In the remainder of this post I will refer to the logic in the constructors as simply “logic” but this can be:

  • Straightforward operations
  • Complex logic, or even
  • IO access

The main point of contention is the claim that constructors should not have operations that may throw. First I will review the arguments against it, and then give some arguments for it.

There were 4 lines of argument against having “logic” inside a constructor body:

  1. Hard to trace exceptions/memory leaks
  2. Single responsibility
  3. Principle of least astonishment (POLA)
  4. Dependency injection for decoupling/testing purposes

On point one: this is simply not true in most cases.

Hard-to-trace exceptions only occur when you don’t instantiate the class yourself and don’t have logging in place to see the stack trace. It’s really not applicable here.

The memory-leak concern traces back to C++, where memory allocated for an object may never be freed if an exception escapes the constructor. In .NET this is not applicable either, because the runtime is garbage-collected.

If there is no destructor and no object that needs to be disposed, this is no reason to avoid logic in the constructor.

On point two: the single responsibility argument is debatable.

It is a reasonable principle; the point of contention is what the responsibility of a constructor is.

If the constructor performs side effects or mutates other objects, that is clearly bad, but if the “logic” is about getting the object into a valid state, I would argue that it is valid logic to have inside the constructor.

On the third point, POLA, let’s see what could be astonishing

If you give an object a set of invalid parameters, should you be astonished that it throws an exception?

If you give an object a connection string, should you be astonished that it goes to the database?

If the object cannot get into a valid state, should you be astonished that it cannot be instantiated?

It is a good principle, but I don’t think any of the above cases is astonishing.

On the fourth point, inversion of control, I agree in all cases.

If you want your object to be testable, it should not instantiate any other object directly unless the two objects should only be tested together

If you want your object to be testable, it should not use any static member directly.

If you violate these principles, unit testing is impossible: your tests will include more than your object.

My main argument is that I believe a constructor should only allow an object to get into a valid state. I believe this is reasonable and not in contention.

The constructor’s job is to bring the object into a usable state. There are basically two ways you can do this:

  1. Two-stage construction. The constructor merely brings the object into a state in which it refuses to do any work. There’s an additional function that does the actual initialization.
  2. One-stage construction, where the object is fully initialized and usable after construction.

I don’t think the two-stage method is a good approach. Every method would have to enforce that it is not run if the object is not valid, and there is a good chance the Init method won’t be called if the class is to be reused by many people.

I’m in favour of the one-stage construction.

There are also two ways you can do this, if you want to keep your code DRY:

  1. Have a method somewhere that holds your initialization logic.
    The best place, in my opinion, is a static Create method alongside the constructor.
  2. Have your logic inside the constructor.

I have used both approaches and believe that both are valid. I don’t have a strong opinion about which one should be used.
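Both options might look something like this minimal sketch (the Connection classes and their validation are illustrative, not from any real codebase):

```csharp
using System;

// Option 2: the constructor itself validates and brings the object
// into a usable state, throwing if it cannot.
public class Connection
{
    public string Host { get; }

    public Connection(string host)
    {
        // validate up front: a constructed instance is always usable
        if (string.IsNullOrWhiteSpace(host))
            throw new ArgumentException("Host is required", nameof(host));
        Host = host;
    }
}

// Option 1: the initialization logic lives in a static Create method,
// and the constructor stays trivial and private.
public class Connection2
{
    public string Host { get; }

    private Connection2(string host) => Host = host;

    public static Connection2 Create(string host)
    {
        if (string.IsNullOrWhiteSpace(host))
            throw new ArgumentException("Host is required", nameof(host));
        return new Connection2(host.Trim());
    }
}
```

In both cases a caller can never hold an instance that is in an invalid state; the only difference is where the validation logic lives.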

What do you think?

Longs, Original

Quitting a git GUI habit

I’ve been a user of SourceTree, but it has let me down many times (crashes, high memory usage).

I’ve recognized that I really don’t know that much about git. I should learn at least the commands I use the most.

I’m a Windows user, so posh-git is a must-have. There are great docs for git, but they can be a bit overwhelming and have a lot of detail that is not needed 90% of the time. So, here it is, my 7-item git jump-start list:

1. git checkout <branch>

The most basic command, for switching branches. You can use -f to throw away any local changes.

2. git fetch

Used to update the remote status. Useful to know if I need to pull. Use -p to remove any remote-tracking references that no longer exist on the remote.

3. git pull

Pulls latest commits from origin. Use -r to rebase changes that are not yet pushed

4. git add

Adds changes to the staging area. Use -a to add all changes.

6. git commit -m "<message>"

Commits currently staged changes. Use -a to stage all changes before committing.

6. git push

Pushes currently committed changes to origin.
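The local half of this list (checkout, add, commit) can be tried safely in a throwaway repository; fetch, pull and push need a remote, so they are only sketched in comments:

```shell
set -e
repo=$(mktemp -d)                # throwaway repository for the demo
cd "$repo"
git init --quiet
git config user.email "demo@example.com"   # local identity just for this repo
git config user.name "Demo"

echo "hello" > file.txt
git add file.txt                 # 4. stage the change
git commit --quiet -m "message"  # 5. commit what is staged
git checkout --quiet -b feature  # 1. create and switch to a branch

# With a remote named origin these would complete the loop:
# git fetch -p                   # 2. update remote-tracking refs
# git pull -r                    # 3. pull, rebasing unpushed commits
# git push                       # 6. push committed changes

git log --oneline                # the single commit, on branch feature
```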

7. other stuff

I’m already cheating on the 7 items, but now I’m getting into less-used commands. git stash, git merge, git commit --amend and git push --force are all great tools to know about.

I cannot get my head around git diff. I still use GUI tools to handle conflicts and to review changes before committing.

That’s it. This list should be enough to keep me from using a GUI to interact with git most of the time.

Original, Shorts

Show up, kick ass, go home

I love programming. I really do! But programming is not my life, it’s just my job.

Since I’ve married and had kids, this has become even more evident. There are things I have to do, and things I also want to do, besides thinking about my job.

I had been thinking about this a couple of days ago but today I read an article that really summed up my thoughts.

A couple of highlights:

“I opted in to this mindset (show up, kick ass, go home) largely in order to protect my own sanity. If I don’t set clear boundaries as to when it’s OK to think about work problems, I’ll think about them all the time,”

“For a puzzle-hungry brain like mine, programming is so full of not-yet-solved problems that a mind like mine can find entertainment and solutions to their hearts content and still not feel like they’ve truly accomplished anything”

Thanks Matthew Jones for being so articulate!

Longs, Original

Developer evolution

One of the great things about staying in the same job – and on the same project – for a long time is that once in a while you get to see code you wrote years ago. Yesterday was one of those days. I had the opportunity to see how much my coding skills have evolved.

I was confronted with a performance issue in a piece of code I wrote 5 years ago. After a couple of hours, a colleague and I pinpointed the bottleneck and rewrote the problematic code. The code ran 6–8x faster.

I want to make 3 points from this:

  • Don’t optimize prematurely. The code was in production for 5 years and was good enough. In that time I could improve my understanding of the whole system, a crucial skill to make good design decisions.
  • Challenge yourself. The bottleneck was quite obvious once I saw it; 5 years ago I didn’t have the maturity to question my own code. This could have been avoided with practices like pair programming or code reviews, but it’s always a good thing to take a critical look at your work before checking in.
  • Measure. Sometimes the problematic code is not that obvious. There are great tools that help you to measure the performance of your code. Learn to use them.

I want to thank the developer – or team – behind CodeTrack. It’s a great tool and, on top of that, completely free! If you work in .NET, it’s a must-have.