Setting up Amazon ECS CI/CD with Jenkins

Following the shutdown of Docker Cloud, I needed to move my stuff elsewhere. There are a lot of alternatives and one of them is Amazon ECS.

Goal:

  • Run my Container Image on ECS with Jenkins

I will be following the guide AWS CICD_Jenkins_Pipeline.

Considering that the Docker Cloud setup and integration with GitHub and AWS was a breeze (it could be done entirely through the GUI), how does this setup with Amazon ECS compare?

After creating a fresh new account in AWS and reviewing all the “prerequisites”, I see that it’s going to use the AWS CLI a lot, so why not do all this configuration from a container?

https://bitbucket.org/geircode/awsjenkins

With this Container setup I mount my local folder directly into the Container, so that any changes made locally also show up inside the container. This means that I edit the shell scripts in Visual Studio Code but run them in the Container. Very handy, because I don’t need to install anything on my Windows Host in order to run bash/shell scripts, and this will work on any computer.
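
For reference, the mount boils down to something like this with plain docker run (the image name and paths are just placeholders for whatever the docker-compose file in the repository uses):

# Mount the current folder into the container and work from there
docker run -it --rm \
  -v "$(pwd)":/awsjenkins \
  -w /awsjenkins \
  <your-tooling-image> \
  bash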

Step 1: Build an ECS Cluster
Noob alert, I am failing already on the first step:

Apparently, I need an IAM user with some rights.

>> “<user_name> is an IAM user with Adminstrator Access.”

How to do this?

Adding user: https://console.aws.amazon.com/iam/home?region=eu-west-1#/home

Ok. That was easy.
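
For the record, the same thing can be done with the AWS CLI; a quick sketch (the user name is just an example):

# Create an IAM user and give it administrator access
aws iam create-user --user-name jenkins-admin
aws iam attach-user-policy --user-name jenkins-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Create an access key for use with the AWS CLI
aws iam create-access-key --user-name jenkins-admin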

>> “Create an SSH key in the us-west-2 region. You will use this SSH key to log in to the Jenkins server to retrieve the administrator password.”

https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#KeyPairs:sort=keyName (for some reason you have to manually copy and paste this URL into your browser)
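
Or, since everything else is done with the AWS CLI anyway, the key pair can be created like this (key name and region are just examples):

aws ec2 create-key-pair --key-name jenkins-key --region eu-west-1 \
  --query 'KeyMaterial' --output text > jenkins-key.pem
chmod 400 jenkins-key.pem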

>> Clone the GitHub repository that contains the AWS CloudFormation templates to create the infrastructure you will use to build your pipeline.

https://eu-west-1.console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks?filter=active
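
The console links do the job, but the clone-and-deploy step can also be scripted. I have not reproduced the exact repository or template names from the guide here, so this is only the general shape:

git clone <repository-from-the-guide>
aws cloudformation create-stack \
  --stack-name jenkins-ecs-demo \
  --template-body file://<template-from-the-repository>.yaml \
  --capabilities CAPABILITY_IAM \
  --region eu-west-1
# Wait until the stack (and its EC2 instances) is up
aws cloudformation wait stack-create-complete --stack-name jenkins-ecs-demo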

https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Instances

Some EC2 instances starting up.

>> Step 2: Create a Jenkins Server

>> Retrieve the public host name of the Jenkins server. Open a terminal window and type the following command:
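
The exact command is in the guide; something along these lines should return the public host name (the tag filter is my assumption about how the stack names the instance):

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=*Jenkins*" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicDnsName' \
  --output text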

>> SSH into the instance, and then copy the temp password from /var/lib/jenkins/secrets/initialAdminPassword.

Here I encountered this error:

Fix this by changing the mode of the file to exactly “0400”. It turned out that I couldn’t change the mode of a mounted file to 0400, so I had to copy the file to a different directory and change it there.
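
In practice the workaround looked roughly like this (paths and key name are examples):

# chmod has no effect on the key while it sits on the Windows-mounted folder
cp /awsjenkins/jenkins-key.pem /tmp/jenkins-key.pem
chmod 400 /tmp/jenkins-key.pem
ssh -i /tmp/jenkins-key.pem ec2-user@<jenkins-public-dns>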

Cool. So, what does this instance already have installed? Checking “docker”.

Hurray, it got Docker.

>> sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Ok, I got the password.

>> Step 3: Create an ECR Registry
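
Creating the registry itself is a one-liner with the AWS CLI (the repository name is just an example):

aws ecr create-repository --repository-name geircode/jenkinsdemo --region eu-west-1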

>> Verify that you can log in to the repository you created (optional).

How to do this from a Container?

First add “/var/run/docker.sock:/var/run/docker.sock” to docker-compose and set “COMPOSE_CONVERT_WINDOWS_PATHS=1” in a .env file that docker-compose then reads.
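
In plain docker run terms, the change amounts to this (the tooling image name is a placeholder):

# Give the container access to the Docker daemon on the host
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  <your-tooling-image> bash

# On the Windows side, docker-compose picks this up from a .env file next to it
echo "COMPOSE_CONVERT_WINDOWS_PATHS=1" >> .env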

Refs https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337/9

And https://stackoverflow.com/questions/49507912/docker-jwilder-nginx-proxy-container-create-issue

Then install the Docker Cli inside the Container:

curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.03.1-ce.tgz && tar --strip-components=1 -xvzf docker-17.03.1-ce.tgz -C /usr/local/bin
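
With the Docker CLI in place, logging in to ECR from inside the container looks like this (the first form matches the AWS CLI of that era; newer versions use get-login-password instead):

# Older AWS CLI: get-login prints a ready-made "docker login" command
$(aws ecr get-login --no-include-email --region eu-west-1)

# Newer AWS CLI equivalent:
# aws ecr get-login-password --region eu-west-1 | \
#   docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-west-1.amazonaws.com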

Login Succeeded! Nice.

>> Step 4: Configure Jenkins First Run

Navigating to my Jenkins:

While following the instructions I had some problems with Jenkins complaining about missing plugin dependencies, but the solution was to:


Click “downgrade to 1.9”, tick “Restart Jenkins when installation is complete and no jobs are running”, and then install the “Amazon ECR” plugin.

>> Step 5: Create and Import SSH Keys for GitHub

>> Step 6: Create a GitHub Repository

Done that a few times before. https://github.com/geircode/jenkinsdemo

>> Enable webhooks on your repository so Jenkins is notified when files are pushed.

Hmm, do I need to do this manually on every repository? That’s not very automation friendly. And if the Jenkins URL changes, do I then have to update all the repositories with the new URL by hand? That’s not going to scale very well. Plus, the password is written directly into the URL. There must be, and probably is, a better way to do this.
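
One way around the manual clicking (and the scaling worry) is to script the webhook creation against the GitHub API instead; a rough sketch, where the token, owner/repo and Jenkins host are placeholders:

curl -u <user>:<personal-access-token> \
  -X POST https://api.github.com/repos/<owner>/<repo>/hooks \
  -d '{"name": "web", "active": true, "events": ["push"],
       "config": {"url": "https://<jenkins-host>/github-webhook/", "content_type": "json"}}'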

>> Step 7: Configure Jenkins

After some time I got this configuration to work.

I was unable to complete step 7.1.f “Under build triggers, choose Build when a change is pushed to GitHub”, because the option was not there. Because of this, the webhook integration with GitHub doesn’t work.

Very important to have https:// here.

After setting it up, you can test the project by clicking on the “Build Now” button.

>> Under Network bindings, choose the IP address in the External Link column.

And Voila:

Conclusion:

By following the guide, I was able to create a running Container on ECS, but I was not able to get the integration between GitHub and Jenkins to work properly because of missing options in Jenkins. Also, when I changed the source code in my GitHub repository and built a new Container, the new Container did not pick up the latest changes, for some strange reason.

In addition to this, the webhook integration seems to be deprecated:

Anyways, I learnt a lot about setting up and managing Jenkins and ECS/ECR.

Install “Docker for Windows” in Azure Nested Virtualization and Debug in VS2017

In July this year, Azure got some interesting new VM types where it’s possible to run Virtual Machines inside each other. This is called nested virtualization, which was previously only possible on bare-metal machines.

Before we get started there is one prerequisite:

  • Azure account

Goals of this post (if the title was not enough):

  • Start a new VM with the nested virtualization
  • Install Visual Studio 2017
  • Install Docker for Windows
  • Debug a .NET CORE 2 service

Open Azure Portal to create a VM, and it looks like Azure already had an image ready:

Apparently nested virtualization is not yet available everywhere in the world: https://azure.microsoft.com/en-us/regions/services/

Select your region to see if it is supported. Since my region is Europe, only “West Europe” has support for nested virtualization.

When choosing a VM size, look for all VMs starting with “D” or “E” and ending with “_V3”.

My choice was the “D4_V3”:

By default, the Linux VM that “Docker for Windows” uses is given 2 GB of memory, so 16 GB of memory on the Host should be plenty. Tip: avoid “Premium disk support” if you are just testing stuff, because it will keep costing you even while the VM is shut down and deallocated. The new “Auto-Shutdown” option is also nice.
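
As an aside, roughly the same VM could be created from the Azure CLI (resource group, names, image URN and credentials are placeholders):

az group create --name nested-demo-rg --location westeurope
az vm create \
  --resource-group nested-demo-rg \
  --name win10-nested \
  --image <windows-10-image-urn> \
  --size Standard_D4_v3 \
  --admin-username <user> \
  --admin-password <password>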

Click “Create” and wait in intense suspense for deployment.

After a few minutes, the VM has started and it’s time for the fun stuff like “will it actually work out of the box??”.

So far so good. First, I want to find out exactly how much comes “out of the box”. Looking at the “Turn Windows Features on or off” list, it turns out that Hyper-V is disabled.

So, what happens if we enable it? Click through the “Features” wizard, tick “allow destination computer to restart”, and install Hyper-V.

It will restart automatically after installing Hyper-V.

After rebooting the Task Manager is now showing that Virtualization is Enabled. Great.

If you cannot find the newly installed Hyper-V Manager by searching for it, it’s because Windows has not yet indexed it. Just a bit annoying. Anyways, it will be somewhere around here: C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools.

Installing “Docker for Windows”.

https://store.docker.com/editions/community/docker-ce-desktop-windows

Promising..

Err, was it too good to be true? Hmm. Troubleshooting time!

I clicked the “Reset to factory defaults” button, but got a new error:

Restarting the VM to see if that will fix this, and behold!

“Docker for Windows” running on an Azure VM with nested virtualization.

Debug a .NET CORE 2 service with Visual Studio 2017

Visual Studio 2017 Community was included in the Azure VM image, but it probably did not include .NET CORE and Docker tooling.

Open “Visual Studio Installer” from the start menu and click “Modify”. To my surprise, Microsoft has installed everything😊

But the Azure image did not contain .NET CORE 2, so we need to install that and restart Visual Studio:

https://github.com/aspnet/Tooling/blob/master/install-2.0-vs15.3.md

Creating a new service:

Remember to tick “Enable Docker Support” and choose “Linux”.

Make sure the “docker-compose” is set to be “StartUp Project”.

Click the Start (debug) button to start debugging.

Eventually you might get this popup. Click on “Share it” to enable access to your source files directly in the Container.

So nested virtualization in Azure together with “Docker for Windows” is definitely working!

Debugging is also working!

A few small bumps to get there, but they were trivial to fix. Great success.

Links:

https://azure.microsoft.com/en-us/blog/nested-virtualization-in-azure/

Exploring Docker Cloud Swarm

This feature has been in beta in Docker Cloud for some time now, and I wanted to find out what all the fuss was about.

Goal:

  • Create and Connect a Docker Swarm running in Azure
  • Run a simple web app in the Swarm

Prerequisites:

  • Azure account (Global Admin)
  • Docker Cloud account

The Docker guide for linking Docker Cloud to Azure is pretty straightforward. Navigate to Docker Cloud and enable “Swarm mode”. Open “Cloud settings”.

https://docs.docker.com/docker-cloud/cloud-swarm/link-azure-swarm/

The integration process is almost completely automatic. Just type in the Subscription Id, click, log in to Azure and give access. Tada!

The only ‘challenge’ that might arise is that the Azure account must be Global Admin. If you only have partial access, you will need to get in touch with the supreme owner of the Azure account to activate this integration between Docker Cloud and Azure.

Creating the swarm!

For some reason, I always think of beeeeeeesssss when I hear this.

Navigate to “Swarms” tab, hit “Create” and follow this guide:

https://docs.docker.com/docker-cloud/cloud-swarm/create-cloud-swarm-azure/

The guide offered me no resistance.

Starting up!

When it is finished, click the swarm and get this:

Copy and paste the client connect command into a shell. I recommend trying out Ubuntu for Windows; it is great for doing Linux stuff without an actual Linux instance.

The shell is now running against the new Swarm, and running “docker ps” will show you this:

It is also possible to get here without the copy and pasting of the docker run command. In Docker for Windows, right click the white whale at the bottom of the screen and expand the “Swarms” option. Click the swarm (in this case the “swarm-poc”) and it will open a CMD where the Docker CLI is connected to the Swarm.

Since Docker CLI is now running against the selected Swarm, any command now runs directly against the swarm and not the Docker Host running locally in Hyper-V.

To see the swarm running, run this command:

>> docker node ls

>> docker run -it --rm hello-world

This, however, creates a Hello World container running on the Manager, and that is not what we are after at all. To run Hello World on one or more of the Workers, we need to create a Docker service.

Running Hello World in the Swarm

The commands to be used with Docker Swarm are listed here: https://docs.docker.com/engine/reference/commandline/service/

>> docker service create -p 13337:80 tutum/hello-world
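
To check that the task actually landed on a worker, something like this should do (the service got a random name since --name was not given, so take it from docker service ls):

docker service ls
docker service ps <service-name>
# Optionally spread it over more workers
docker service scale <service-name>=2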

The “Hello world” web application is now running on one of the workers and is reachable from the internet. You can get the public IP from either Docker Cloud or the Azure Portal:

In Azure portal, find the Resource Group and “externalLoadBalancer”:

Deallocate and save money

Deallocate the VMSS instances in Azure in order to save money. They do not show up in the Virtual Machines tab, but clicking on the Resource Group for the new swarm will show them. Remember to start them up again before using the swarm.
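
With the Azure CLI, that would be roughly this (resource group and scale set names are whatever the swarm deployment created):

# Stop paying for compute while keeping the disks
az vmss deallocate --resource-group <swarm-resource-group> --name <vmss-name>
# Start the nodes again before using the swarm
az vmss start --resource-group <swarm-resource-group> --name <vmss-name>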

My three nodes (1 Manager, 2 Workers) do not cost much though. Yet. So far they only run at approx. 0.5 euros a day, or 10€ (11$ or 101 NOK) for a month. Some people would call that inexpensive, low-priced, low-cost, economical, competitive, affordable, reasonable or free. At least Google Dictionary does. Except the last one.

In conclusion, we now have a Docker Swarm up and running that can run containerized workloads remotely instead of on your local Docker Host instance.

Quick demo of Docker Cloud

Prerequisites:

  • Azure account (see guide in a previous post)
  • GitHub account
  • Docker Cloud account (see the above link)
  • Docker Cloud integrated with GitHub and Azure (see the above link)
  • Visual Studio 2015 with .NET Core SDK and Docker tooling

Goals:

  • Create a new repository
  • Create a .NET Core web API
  • Run the .NET Core web API in a Container in a Cloud

Create the project and add the new repository to GitHub:

Click “Publish” and now there is a new repository in GitHub. Next up is Docker Cloud.

First we create a new Docker Hub repository directly in Docker Cloud:

Click “Create”

Click “Link to GitHub”

Tip: If your repository is not showing up, then you must first grant access in GitHub:

Ok, back to Docker Cloud. Select “SOURCE REPOSITORY“:

Set the “BUILD LOCATION” to “Small” and click “Save”.

Several things just happened; most notably, this Docker Hub repository will now be built every time someone commits code to the GitHub repository.

But first, the .NET Core web API needs to contain a “Dockerfile”.
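
The original screenshot of it is gone, so here is a minimal sketch of what it could look like for an ASP.NET Core web API of that vintage (base image tag, port and assembly name are assumptions, not taken from my actual repository):

cat > Dockerfile <<'EOF'
# Build and run the web API in one image (kept simple on purpose)
FROM microsoft/aspnetcore-build:1.1
WORKDIR /app
COPY . .
RUN dotnet restore && dotnet publish -c Release -o out
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "out/MyWebApi.dll"]
EOF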

This file is now inserted into the root folder of the .NET Core solution.

Commit this file to GitHub and it will trigger a build in Docker Cloud (Hub).

The Container image only takes a few minutes to build.

Next, the Container image needs somewhere to run. Create a Node Cluster:

Click “Launch node cluster”

This takes a few minutes to start up. In the meantime the Container image is finished:

We are ready to deploy the .NET Core web API when the Node Cluster itself is deployed.

Click “Launch service”

By default the port is not published, so we need to change that:

Here the Container port is published to the Docker Host, which makes it reachable from the internet.

Click “Create & Deploy” for the finishing move.

And it’s starting up.

Go to the “Timeline” to see what is happening:

After 2-3 minutes it’s running and ready to be tested.

Click on the “link” in the red circle, and remove the “tcp://” prefix from the URL.

Add “/api/values” to the URL and behold!

The web API is now running in a Container in Azure with a completely automatic CI/CD pipeline🙂

Thank you for reading.

Debug Containers running in nested virtualization with Visual Studio 2017

This means I will be running a Linux VM (the Docker Host) inside a Windows 10 VM inside a Windows 10 Host. And it works, but it’s slightly slower than running it straight on the host. The payoff is that I only need to copy the Windows 10 Guest VM to another computer if I want to develop somewhere else, and it is excellent for testing stuff without bloating my Host.


There are a few requirements to get started, and they are covered by the steps below.

The steps of the day are:

  • Get Windows 10 image
  • Create new Hyper-V VM => “Win10 Guest”
  • Enable Nested Virtualization on “Win10 Guest”
  • Enable Hyper-V on “Win10 Guest”
  • Install “Docker for Windows” on “Win10 Guest”
  • Install Visual Studio 2017 on “Win10 Guest”
  • Create a Dockerized .net Core Web API solution
  • Debug the webservice running in the Container

Ok. Let’s get started.

Download the latest Windows 10 image from MSDN downloads (or somewhere else if you do not have access).

In the Windows 10 Host, open the Hyper-V manager and add a new VM. I recommend at least these settings:

  • Generation 1. It comes in handy if you want to convert the VM to a VMware image later.
  • Do not use “Dynamic Memory”. It’s currently not supported with nested virtualization.

I find that setting the network in Hyper-V is always a bit tricky. My solution is to use “Virtual Switch Manager” to create an “internal only” network and share my Windows Host internet connection with this one.

Enable Hyper-V on “Win10 Guest”

Open a PowerShell console as Administrator inside the “Win10 Guest” VM, and run:

Enable-WindowsOptionalFeature -Online -FeatureName:Microsoft-Hyper-V -All

Enable Nested Virtualization on “Win10 Guest”. This one is run on the Windows 10 Host, with the VM shut down:

Set-VMProcessor -VMName "Win10 Guest" -ExposeVirtualizationExtensions $true

Install “Docker for Windows” on “Win10 Guest”

Right. Some hours or minutes later, depending on your setup, we can install “Docker for Windows”. https://www.docker.com/docker-windows

The install usually works, but sometimes you just need to restart a few times, including the Windows host. Finally, the new Windows 10 Guest looks something like this:

Hopefully, there is a white and not a red whale running in your taskbar.

Right-click on the white whale and alter these settings:

  • Shared Drives: Tick true for the drives and click Apply. This enables debugging in VS2017 later.

Install Visual Studio 2017 on “Win10 Guest”

Get the free community version and install Visual Studio from here https://www.microsoft.com/net/core#dockervs. Install the .NET Core workload.

The installer differs from earlier versions of Visual Studio in that you can be much more precise about what to install. That’s cool, I think. And faster.

Create a Dockerized .net Core Web API solution

Visual Studio 2017 comes preinstalled with .net core project templates.

Find “.NET Core” and create an ASP.NET Core project.

Enable Docker Support. This will add a new ‘docker compose’ project to the solution.

Visual Studio 2017 has done a lot more with the Docker tooling. Most of it is OK, but really, it’s just a giant wrapper around the docker CLI and the docker-compose CLI. And it is too bad Microsoft didn’t open-source this tooling instead of hardwiring it into VS2017. If you want to know what is going on inside this tooling, it is possible to look into the ‘source files’ => C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Microsoft\VisualStudio\v15.0\Docker

Ok, last step. Pushing F5.

And it failed with this error:

2>docker ps --filter "status=running" --filter "name=dockercompose2673636242_testcore_" --format {{.ID}} -n 1

1>CSC : error CS2012: Cannot open 'C:\Utvikling\TestCore\TestCore\obj\Debug\netcoreapp1.1\TestCore.dll' for writing -- 'The process cannot access the file 'C:\Utvikling\TestCore\TestCore\obj\Debug\netcoreapp1.1\TestCore.dll' because it is being used by another process.'

Hmm, ok. I restart Visual Studio and try again.

Hurray, great success. Visual Studio is now debugging my webservice running inside the Container.

Resources:

https://blogs.technet.microsoft.com/virtualization/2015/10/13/windows-insider-preview-nested-virtualization/

http://www.thomasmaurer.ch/2015/11/nested-virtualization-in-windows-server-2016-and-windows-10/

Exploring Docker Cloud automated builds

Automated builds are part of the Continuous Integration and Continuous Deployment (CI/CD) flow of checking code into GitHub and seeing it running in a container somewhere in a cloud service such as AWS or Azure. Automated builds make it possible to have a fully automated CI/CD pipeline, including running unit tests and deploying to different environments, without any user interaction. You just need to check code into GitHub (or Bitbucket).

Docker Cloud has had automated builds for some time now, but has always lacked most of the build functionality of Docker Hub. Therefore, I was pleased to see that automated builds in Docker Cloud have finally matured.

Docker Cloud has even surpassed Docker Hub in functionality, and since Docker Store is the new “Docker Hub”, it is probably just a matter of time until they discontinue Docker Hub. I sure hope not, though.

Since this link, https://docs.docker.com/docker-cloud/builds/automated-build/ kinda explains it all, I can write down the highlights as I see them:

  • Possible to upgrade the build Node, which makes building a Container image faster
  • Run unit tests automatically on each check-in (see the sketch after this list). Now, this is powerful. It actually means that you can trigger a script each time code is checked into your repository. You may run unit tests, or you can create a Python script that does anything.
  • See the output of the Container image build as it progresses
  • “Build caching”, meaning it finally caches build layers, which speeds the build up a lot.
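
About the unit test hook: Docker Cloud looks for a docker-compose.test.yml in the repository with a service named “sut”, runs it on every push, and marks the build as passed if it exits with 0. A minimal sketch (the test command is an assumption about the project):

cat > docker-compose.test.yml <<'EOF'
sut:
  build: .
  command: dotnet test
EOF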