Use Portainer.io to manage a Docker Swarm installation on Azure

Since the shutdown of Docker Cloud, I have been looking for a replacement that is as simple and powerful as Docker Cloud was. For now I am trying out different approaches to building my own version of it. This is one of the steps towards the ultimate goal: something that is 100% declarative, both for the infrastructure and for the containers running on it.

First, to test Portainer.io we need some infrastructure; this is covered in the article: http://geircode.no/setting-docker-community-edition-azure/

Following the results of this article, we now have a working Docker Swarm on Azure.

Goal

Use Portainer.io to manage a Docker Swarm installation on Azure

What is Portainer.io?

“Portainer.io is an open-source lightweight management UI which allows you to easily manage your Docker hosts or Swarm clusters”

The plan is to use this solution to manage the Containers visually from the web. Docker Swarm does not ship with a default GUI that makes it easy to log in to a container shell, deploy stacks, or monitor CPU and memory. Hopefully Portainer.io can bridge this gap.

Get started

To avoid bloating my own developer laptop, I create my own workspace in a Container and work from there and keep the files on Bitbucket. That way I can move my workspace around on different computers without caring about snowflakes. The template workspace setup used is shared here: https://bitbucket.org/geircode/setting_up_docker_community_edition_template

Starting the workspace container

Let’s find the commands and certificates used in the previous article.

You may need to copy the certificate files from the shared volume at /app/ubuntu to /ubuntu. Shared volumes in Docker for Windows force all files into a file mode that cannot be changed while the volume is shared, and SSH is picky about key file modes.

sh 005_prepare_ubuntu_keypair.sh

# Login
# ssh -i <path to private key> -p 50000 docker@<docker swarm public IP>
ssh -i /ubuntu/geircode_19f93204_rsa -p 50000 docker@13.80.106.142

After logging into the Docker Swarm manager, we are ready to install Portainer.

Starting Portainer

Following the docs: https://portainer.readthedocs.io/en/stable/deployment.html#inside-a-swarm-cluster

Log in to the Manager node and start Portainer using these commands:

curl -L https://portainer.io/download/portainer-agent-stack.yml -o portainer-agent-stack.yml
docker stack deploy --compose-file=portainer-agent-stack.yml portainer
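To check that the stack actually came up, you can list its services with the plain Docker CLI:

docker stack services portainer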

Navigate to the Public IP of the “externalLoadBalancer” in Azure:

Even the “Load balancing rules” are automatically configured:

Navigate to IP and port:

Oh my… That was easy, but is it working?

Yep, it is working:)

Deploying locally on Docker for Windows

First tick this option in Docker For Windows:

Open a CMD and execute:

>> “docker swarm init”

Get the script from https://bitbucket.org/geircode/setting_up_docker_community_edition_template/src/master/portainer/
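I have not inlined the script here, but based on the Azure steps above it presumably boils down to something like this sketch:

docker swarm init
curl -L https://portainer.io/download/portainer-agent-stack.yml -o portainer-agent-stack.yml
docker stack deploy --compose-file=portainer-agent-stack.yml portainer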

Execute, and navigate to http://localhost:9000/

And it just works out of the box. That’s indeed pretty fantastic.

Goal reached and then some.

Setting up Docker Community Edition Swarm on Azure


I wanted to test Portainer.io as a Control Plane for Docker Swarm, but in order to do this I first need some infrastructure running in the cloud. For this article I will focus on getting Docker Community Edition (CE) to run on Azure.

Goal: Set up Docker CE as a Docker Swarm on Azure

I will be following this guide to set up Docker CE on Azure:

https://docs.docker.com/docker-for-azure/

and this guide specifies that I need some prerequisites:

Access to an Azure account with admin privileges

Yep, got that. If not, create one for free => https://azure.microsoft.com/en-us/free/

SSH key that you want to use when accessing your completed Docker install on Azure

To avoid bloating my own developer laptop, I create my own workspace in a Container and work from there and keep the files on Bitbucket. That way I can move my workspace around on different computers without caring about snowflakes. The workspace setup used is shared here: https://bitbucket.org/geircode/setting_up_docker_community_edition_template

So if you do not have “ssh-keygen” installed, it is easy to start a Linux Container and execute something like:

ssh-keygen -t rsa -b 4096 -C geircode@geircode.no -f /app/ubuntu/geircode_19f93204_rsa

Or click on “Dockerfile.build.bat” and then “docker-compose.up.bat”. This starts a container and opens an interactive terminal. A shared volume is set up between the Docker Host and the Container, so the files you see inside the Container under /app are the same files you see on the Docker Host. Running the above command creates the key pair inside the /app/ubuntu folder, which is also accessible on the Docker Host because of the shared volume.

Creating a “Service principal”

Execute something like:

docker run -ti docker4x/create-sp-azure docker_ce_19f93204 docker-ce-19f93204_rg westeurope

If the script fails for some reason, then navigate to Azure Portal to delete the Service Principal that gets created before running the script again.

https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps

root@40198a525e8f:/app# docker run -ti docker4x/create-sp-azure docker_ce_19f93204 docker-ce-19f93204_rg westeurope
info: Executing command login
info: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXXX to authenticate.
info: Added subscription Visual Studio Premium med MSDN
info: Setting subscription "Visual Studio Premium" as default
info: login command OK
Using subscription xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
info: Executing command account set
info: Setting subscription to "Visual Studio Premium" with id "xxx".
info: Changes saved
info: account set command OK
Creating AD application docker_ce_19f93204
Created AD application, APP_ID= xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Creating AD App ServicePrincipal
Created ServicePrincipal ID= xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Create new Azure Resource Group docker-ce-19f93204_rg in westeurope
info: Executing command group create
+ Getting resource group docker-ce-19f93204_rg
+ Creating resource group docker-ce-19f93204_rg
info: Created resource group docker-ce-19f93204_rg
data: Id: /subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/docker-ce-19f93204_rg
data: Name: docker-ce-19f93204_rg
data: Location: westeurope
data: Provisioning State: Succeeded
data: Tags: null
info: group create command OK
Resource Group docker-ce-19f93204_rg created
Waiting for account updates to complete before proceeding ...
Creating role assignment for xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx scoped to docker-ce-19f93204_rg
info: Executing command role assignment create
+ Finding role with specified name
data: RoleAssignmentId : /subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/docker-ce-19f93204_rg/providers/Microsoft.Authorization/roleAssignments/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
data: RoleDefinitionName : Contributor
data: RoleDefinitionId : xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
data: Scope : /subscriptions/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/docker-ce-19f93204_rg
data: Display Name : docker_ce_19f93204
data: SignInName : undefined
data: ObjectId : xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
data: ObjectType : ServicePrincipal
info: role assignment create command OK
Successfully created role assignment for xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Test login...
Waiting for roles to take effect ...
info: Executing command login
info: Added subscription Visual Studio Enterprise
info: login command OK

Your access credentials ==================================================
AD ServicePrincipal App ID: xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AD ServicePrincipal App Secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
AD ServicePrincipal Tenant ID: xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Resource Group Name: docker-ce-19f93204_rg
Resource Group Location: westeurope
root@40198a525e8f:/app#

Great success. What now? Go to https://docs.docker.com/docker-for-azure/

and click on “Deploy Docker Community Edition (CE) for Azure (stable)”. This will load a custom deployment on Azure.

Let’s fill in the details:

Click on “Purchase” and wait 3-4 minutes.

Update! If “Linux Worker Count” is set to only 1 VM, as in this article, we get the setup described in the rest of this article. However, if we specify 2 or more for “Linux Worker Count”, the template creates a working Docker Swarm automatically; just wait a few more minutes for the workers to register.

Listing the resources created:

The new VMs are hiding inside the Virtual Machine Scale Set “swarm-xxx-vmss”. In order to save VM costs while testing, you can deallocate them from there.
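That can be scripted too. A sketch, assuming the Azure CLI (“az”) is installed and using the scale set name pattern from the portal (check the actual name in your resource group):

# Stop paying for the VMs (scale set name is an assumption)
az vmss deallocate --resource-group docker-ce-19f93204_rg --name swarm-xxx-vmss

# Bring them back later
az vmss start --resource-group docker-ce-19f93204_rg --name swarm-xxx-vmss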

Deploy your app on Docker for Azure

https://docs.docker.com/docker-for-azure/deploy/

Navigate to “Outputs” on the deployment:

Copy the URL of “SSH TARGETS” to a browser to get this:

Time to log in to the Swarm manager. I will use my Container to do this, but first I need to copy the private key from the shared volume directory “/app” to somewhere outside it. When a volume is shared with the Docker Host, all files inside get a new file mode, and SSH demands a very specific file mode on the key.
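In practice that is just a copy and a chmod (paths as in my setup; SSH wants mode 400 on the private key):

cp /app/ubuntu/geircode_19f93204_rsa /ubuntu/
chmod 400 /ubuntu/geircode_19f93204_rsa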

Trying to connect to the Swarm manager:

root@40198a525e8f:/ubuntu# ssh -i /ubuntu/geircode_19f93204_rsa docker@40.74.57.141
ssh: connect to host 40.74.57.141 port 22: Connection refused

Ok, so that did not work. Aha, the swarm managers listen for SSH on a different port, namely port 50000.

root@40198a525e8f:/ubuntu# ssh -i /ubuntu/geircode_19f93204_rsa -p 50000 docker@40.74.57.141
The authenticity of host '[40.74.57.141]:50000 ([40.74.57.141]:50000)' can't be established.
RSA key fingerprint is ca:2b:5a:34:39:46:32:b5:f5:31:81:d9:68:ec:03:13.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[40.74.57.141]:50000' (RSA) to the list of known hosts.
Enter passphrase for key '/ubuntu/geircode_19f93204_rsa':
Welcome to Docker!
swarm-manager000000:~$

Cool. I have now logged into my first Docker Swarm in Azure. Or so I thought.

Run “docker info”

Why is “Swarm: inactive”?

Ok, apparently I need to join the Swarm workers manually via the Swarm manager over SSH. Why didn’t the Azure template just do this automatically? Oh well.

Connecting to your Linux worker nodes using SSH

I tried to configure SSH agent forwarding and got this error:

root@40198a525e8f:~$ ssh-add
Could not open a connection to your authentication agent.

According to https://stackoverflow.com/questions/17846529/could-not-open-a-connection-to-your-authentication-agent, I need to start the ssh-agent first!

root@40198a525e8f:/ubuntu# eval `ssh-agent -s`
Agent pid 96
root@40198a525e8f:/ubuntu# ssh-add -L
The agent has no identities.
root@40198a525e8f:/ubuntu# ssh-add /ubuntu/geircode_19f93204_rsa
Enter passphrase for /ubuntu/geircode_19f93204_rsa:
Identity added: /ubuntu/geircode_19f93204_rsa (/ubuntu/geircode_19f93204_rsa)
root@40198a525e8f:/ubuntu# ssh -p 50000 -A docker@40.74.57.141
Welcome to Docker!
swarm-manager000000:~$

Yay. My SSH agent is working.

So where do I find my Swarm workers’ IP addresses?

Go to the resource list and find the vnet, i.e. “dockerswarm-vnet”.

swarm-manager000000:~$ ssh -A docker@10.0.0.4
The authenticity of host '10.0.0.4 (10.0.0.4)' can't be established.
RSA key fingerprint is SHA256:c260W6he0ppCfmik+oa7TN42K4/xfPigAK2VysCSe6U.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.4' (RSA) to the list of known hosts.
Welcome to Docker!
swarm-worker000000:~$
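The join command and token used below were fetched from the manager beforehand; running this on the manager prints the complete join command, token included:

docker swarm join-token worker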

Join the worker to the Docker Swarm:

swarm-worker000000:~$ docker swarm join --token SWMTKN-1-591r0uu76vnar02h8f16n1e2p5p0dbt2pzpm4g89o0zrymhn39-a9q5n6bv4svr3amjscmvp2x55 10.0.0.5:2377
This node joined a swarm as a worker.
swarm-worker000000:~$ exit
Connection to 10.0.0.4 closed.
swarm-manager000000:~$ docker node ls
ID                          HOSTNAME             STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
py4y0q2jmk8jgi806nc0jofgc * swarm-manager000000  Ready   Active        Leader          18.03.0-ce
1qv3m227fz9n9mz3fg0trwhxt   swarm-worker000000   Ready   Active                        18.03.0-ce
swarm-manager000000:~$

Whohoo!

To test my swarm I will use https://github.com/dockersamples/example-voting-app.

Log in to the Swarm Manager node and run:

git clone https://github.com/dockersamples/example-voting-app.git
cd example-voting-app/
docker stack deploy --compose-file docker-stack.yml vote

But first I need to scale up the workers by increasing the instance count here:

And joining the Worker to the Manager like before. Now I have this:

NB: At this point I ran into several quirks:

  • First, at the moment the Manager also acts as a worker node. So if you create a Docker service, it may end up running on the Manager node, where it is not reachable through the load balancer URL, which is only connected to the Worker nodes. A workaround is sketched below the list.
  • All network traffic to the Worker nodes is blocked by default. You have to create a rule for every port that is going to be open to the internet.
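For the first quirk, a standard Swarm technique (not something this template sets up for you) is to drain the manager so no containers are scheduled on it, or to pin a service to workers with a placement constraint. A sketch, using the node name from earlier and a hypothetical nginx service:

docker node update --availability drain swarm-manager000000
docker service create --name web --constraint node.role==worker -p 80:80 nginx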

Add a rule for each port that is going to be reachable from the internet.

Deploy the app:

swarm-manager000000:~/example-voting-app$ docker stack deploy --compose-file docker-stack.yml vote
Creating network vote_backend
Creating network vote_frontend
Creating network vote_default
Creating service vote_result
Creating service vote_worker
Creating service vote_visualizer
Creating service vote_redis
Creating service vote_db
Creating service vote_vote
swarm-manager000000:~/example-voting-app$

Find the URL of the public load balancer for the Worker nodes. In my case it was “dockerswarm-externalLoadBalancer-public-ip”:

Open the URL in a browser with the correct Port:

Yes!

Conclusion: Setting up Docker Swarm on Azure this way involved far more manual steps than I first expected. It should rather be done declaratively, along the lines of: “I want 1 Manager and 2 Workers. Give it to me please, and connect the swarm to my local Docker CLI.”

Conclusion update: If we specify 2 or more for “Linux Worker Count” when configuring the Azure template, the setup creates a working Docker Swarm automatically.

Article is also available here => https://geircode.atlassian.net/wiki/spaces/geircode/pages/185827331/Setting+up+Docker+Community+Edition+on+Azure

https://hub.docker.com/r/geircode/docker_ce_19f93204/

Setting up Amazon ECS CI/CD with Jenkins

Following the shutdown of Docker Cloud, I needed to move my stuff elsewhere. There are a lot of alternatives and one of them is Amazon ECS.

Goal:

  • Run my Container Image on ECS with Jenkins

I will be following the guide AWS CICD_Jenkins_Pipeline.

Considering that the Docker Cloud setup and integration with Github and AWS was a breeze (could be done completely by GUI), how does this setup with Amazon AWS ECS compare?

After creating a fresh new account in AWS and reviewing all the “prerequisites”, I see that the guide uses the AWS CLI a lot, so why not do all this configuration from a container.

https://bitbucket.org/geircode/awsjenkins

With this Container setup I mount my local folder directly into the Container, so any changes made locally also appear inside the container. This means that I edit the shell scripts in Visual Studio Code but run them in the Container. Very handy, because I don’t need to install anything on my Windows Host in order to run bash/shell scripts, and this works on any computer.

Step 1: Build an ECS Cluster
Noob alert, I am failing already on the first step:

Apparently, I need an IAM user with some rights.

>> “<user_name> is an IAM user with Administrator Access.”

How to do this?

Adding user: https://console.aws.amazon.com/iam/home?region=eu-west-1#/home

Ok. That was easy.

>> “Create an SSH key in the us-west-2 region. You will use this SSH key to log in to the Jenkins server to retrieve the administrator password.”

https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#KeyPairs:sort=keyName (for some reason you have to manually copy and paste this URL into your browser)

>> Clone the GitHub repository that contains the AWS CloudFormation templates to create the infrastructure you will use to build your pipeline.

https://eu-west-1.console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks?filter=active

https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Instances

Some EC2 instances starting up.

>> Step 2: Create a Jenkins Server

>> Retrieve the public host name of the Jenkins server. Open a terminal window and type the following command:

>> SSH into the instance, and then copy the temp password from /var/lib/jenkins/secrets/initialAdminPassword.

Here I encountered this error:

Fix this by changing the file mode to exactly 0400. It turned out that I can’t change the mode of a file on a mounted volume, so I had to copy the file to a different directory and change it there.

Cool. So, what does this instance already have installed? Checking “docker”:

Hurray, it has Docker.

>> sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Ok, I got the password.

>> Step 3: Create an ECR Registry

>> Verify that you can log in to the repository you created (optional).

How to do this from a Container?

First, add “/var/run/docker.sock:/var/run/docker.sock” to the docker-compose file and set “COMPOSE_CONVERT_WINDOWS_PATHS=1” in a .env file that docker-compose reads.
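A minimal sketch of the relevant parts of such a setup (the service name “workspace” is my own; the rest mirrors what the text above describes):

docker-compose.yml:

version: "3"
services:
  workspace:
    build: .
    volumes:
      - .:/app
      - /var/run/docker.sock:/var/run/docker.sock

.env (read automatically by docker-compose from the same folder):

COMPOSE_CONVERT_WINDOWS_PATHS=1

Mounting the Docker socket lets the docker CLI inside the container talk to the Docker engine on the host.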

Refs https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337/9

And https://stackoverflow.com/questions/49507912/docker-jwilder-nginx-proxy-container-create-issue

Then install the Docker CLI inside the Container:

curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.03.1-ce.tgz && tar --strip-components=1 -xvzf docker-17.03.1-ce.tgz -C /usr/local/bin

Login Succeeded! Nice.

>> Step 4: Configure Jenkins First Run

Navigating to my jenkins:

While following the instructions I had some problems with Jenkins complaining about missing dependencies. The solution:


Click on the downgrade to 1.9, choose “Restart Jenkins when installation is complete and no jobs are running”, and then install “Amazon ECR”.

>> Step 5: Create and Import SSH Keys for GitHub

>> Step 6: Create a GitHub Repository

Done that a few times before. https://github.com/geircode/jenkinsdemo

>> Enable webhooks on your repository so Jenkins is notified when files are pushed.

Hmm, do I need to do this manually on every repository? That’s not very automation friendly. And if the Jenkins URL changes, do I need to manually update all the repositories with the new URL? That’s not going to scale very well. Plus, the password is written directly into the URL. There must be, and probably is, a better way to do this.
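That said, webhook creation can at least be scripted through the GitHub API, which would make the per-repository step repeatable. A sketch (repository from this article; the token and Jenkins host are placeholders, and /github-webhook/ is the endpoint the Jenkins GitHub plugin listens on):

curl -u geircode:<api-token> -X POST https://api.github.com/repos/geircode/jenkinsdemo/hooks -d '{"name":"web","active":true,"events":["push"],"config":{"url":"https://<jenkins-host>/github-webhook/","content_type":"json"}}'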

>> Step 7: Configure Jenkins

After some time I got this configuration to work.

I was unable to complete step 7.1.f, “Under build triggers, choose Build when a change is pushed to GitHub”, because the option was not there. Because of this, the GitHub webhook integration doesn’t work.

It is very important to have https:// here.

After setting it up, you can test the project by clicking on the “Build Now” button.

>> Under Network bindings, choose the IP address in the External Link column.

And Voila:

Conclusion:

By following the guide, I was able to create a running Container using ECS, but I was not able to get the integration between GitHub and Jenkins to work properly because of missing options in Jenkins. Also, when I changed the source code of my GitHub repository and built a new Container, the new Container did not pick up the latest changes, for some strange reason.

In addition to this, the webhook integration seems to be deprecated:

Anyways, I learnt a lot about setting up and managing Jenkins and ECS/ECR.

Adding a health check to geircode.no and getting SMS alerts

This is the story of wanting to add an Azure health check to geircode.no, but ending up sending SMS alerts via an AWS Route 53 health check. Suddenly my website stopped responding, and I did not know it had happened. So this article is about how I tried to create free health checks for geircode.no in Azure, but ended up throwing money at AWS instead, using a Route 53 health check to trigger SMS alerts.

Goal:

  • Add some sort of email/SMS alert when geircode.no becomes unresponsive or is not 200 OK anymore

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-web-app-availability

This seems pretty straight forward.

Creating the health check via Application Insights:

Great! A new Application Insights resource in Azure:

Now I need to add the actual health check:

You can set who gets the alert if the website goes down by clicking on “Alerts”:

But an email alert is not enough. How do I get Azure to send me the alert on SMS?

Hmm, AWS has SMS-sending capabilities built into CloudWatch, but Azure does not seem to have this functionality. A quick Google search turns up a lot of links pointing to a Twilio integration. Ok, so do I need to build another service that calls Twilio? And that service needs to listen for a webhook from this Azure health check? That’s not very user friendly, and Twilio is not free. (Neither is AWS, though.)

Or perhaps I need (want) to create an App that can receive push notifications? Luckily I don’t have to build my own App; someone has already done this: https://pushed.co/quick-start-guide

But I would still need to create the service that responds to the webhook from Azure and then talks to the Pushed API, which in turn pushes the notification to my mobile. Almost like an SMS.

The workload for completing the goals of this article is increasing! Perhaps I should just ‘crawl to the cross’, as we say in Norway, and throw some money at AWS?

So that’s what I did: I added a health check for “geircode.no” in Route 53:

In order to add SMS, go here: https://console.aws.amazon.com/sns/v2/home

Click on “Create topic” and create a topic:

Click on “Create subscription”:

Tadaaaa!
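For reference, the same topic and SMS subscription can also be created with the AWS CLI (topic name and phone number are placeholders; note that Route 53 health check alarms live in the us-east-1 region):

aws sns create-topic --name geircode-no-health --region us-east-1
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:<account-id>:geircode-no-health --protocol sms --notification-endpoint +47XXXXXXXX --region us-east-1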

If you created the Route53 Health check first, go back and edit the alert and update the “Notification target”:

Now it’s all set up. AWS periodically checks whether “geircode.no” is up and sends an SMS if it is not. As with all cloud billing, it will be interesting to see how much this will cost.

Success! Too bad Azure does not have this flexibility.

Debug Containers running in nested virtualization with Visual Studio 2017

This means I will be running a Linux VM (the Docker Host) inside a Windows 10 VM inside a Windows 10 Host. And it works, but it’s slightly slower than running straight on the host. The payoff is that I only need to copy the Windows 10 Guest VM to another computer when I want to develop somewhere else, and it is excellent for testing stuff without bloating my Host.


There are a few requirements to get started. The steps of the day are:

  • Get Windows 10 image
  • Create new Hyper-V VM => “Win10 Guest”
  • Enable Nested Virtualization on “Win10 Guest”
  • Enable Hyper-V on “Win10 Guest”
  • Install “Docker for Windows” on “Win10 Guest”
  • Install Visual Studio 2017 on “Win10 Guest”
  • Create a Dockerized .net Core Web API solution
  • Debug the webservice running in the Container

Ok. Let’s get started.

Download the latest Windows 10 image from MSDN downloads (or somewhere else if you do not have access).

In the Windows 10 Host, open the Hyper-V manager and add a new VM. I recommend at least these settings:

  • Generation 1. It comes in handy if you want to convert the VM to a VMware image later.
  • Do not use “Dynamic Memory”. It’s currently not supported with nested virtualization.

I find that setting up networking in Hyper-V is always a bit tricky. My solution is to use the “Virtual Switch Manager” to create an “internal only” network and share my Windows Host’s internet connection with it.

Enable Hyper-V on “Win10 Guest”

Open a PowerShell console as Administrator inside “Win10 Guest”, and run:

Enable-WindowsOptionalFeature -Online -FeatureName:Microsoft-Hyper-V -All

Enable Nested Virtualization on “Win10 Guest”

From the Windows 10 Host, with the VM powered off, run:

Set-VMProcessor -VMName "Win10 Guest" -ExposeVirtualizationExtensions $true
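You can verify that the flag took effect from the same console:

Get-VMProcessor -VMName "Win10 Guest" | Format-List ExposeVirtualizationExtensions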

Install “Docker for Windows” on “Win10 Guest”

Right. Some hours or minutes later, depending on your setup, we can install “Docker for Windows”: https://www.docker.com/docker-windows

The install usually works, but sometimes you just need to restart a few times, including the Windows host. Finally, the new Windows 10 Guest looks something like this:

Hopefully, there is a white and not a red whale running in your taskbar.

Right-click on the white whale and alter these settings:

  • Shared Drives: Tick the drives and click Apply. This enables debugging in VS2017 later.

Install Visual Studio 2017 on “Win10 Guest”

Get the free Community version from https://www.microsoft.com/net/core#dockervs and install Visual Studio with the .NET Core workload.

The installer differs from earlier versions of Visual Studio in that you can choose much more precisely what to install. That’s cool, I think. And faster.

Create a Dockerized .net Core Web API solution

Visual Studio 2017 comes preinstalled with .net core project templates.

Find “.NET Core” and create an ASP.NET Core project.

Enable Docker Support. This will add a new ‘docker-compose’ project to the solution.

Visual Studio 2017 has done a lot more with the Docker tooling. Most of it is ok, but really, it’s just a giant wrapper around the docker CLI and the docker-compose CLI. And it is too bad Microsoft didn’t open-source this tooling instead of hardwiring it into VS2017. If you want to know what is going on inside it, you can look into the ‘source files’ => C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Microsoft\VisualStudio\v15.0\Docker

Ok, last step. Pushing F5.

And it failed with this error:

2>docker ps --filter "status=running" --filter "name=dockercompose2673636242_testcore_" --format {{.ID}} -n 1

1>CSC : error CS2012: Cannot open 'C:\Utvikling\TestCore\TestCore\obj\Debug\netcoreapp1.1\TestCore.dll' for writing -- 'The process cannot access the file 'C:\Utvikling\TestCore\TestCore\obj\Debug\netcoreapp1.1\TestCore.dll' because it is being used by another process.'

Hmm, ok. I restart Visual Studio and try again.

Hurray, great success. Visual Studio is now debugging my webservice running inside the Container.

Resources:

https://blogs.technet.microsoft.com/virtualization/2015/10/13/windows-insider-preview-nested-virtualization/

http://www.thomasmaurer.ch/2015/11/nested-virtualization-in-windows-server-2016-and-windows-10/

Deploy Staytus on Docker Cloud

If you already have a Docker Cloud that is integrated with a Cloud service, then click this button to deploy Staytus immediately.

But if you don’t, we can set it up.

What to do:

Create free Docker login https://docs.docker.com/docker-id/ and log into https://cloud.docker.com

Now you have a free user that allows one private or unlimited public builds. Leave this and move on to the next step.


Create free Azure login https://azure.microsoft.com/en-us/free/

This process is also rather painless, but it could take some time (or not). You need a Microsoft account if you do not have one already.


Integrate Docker Cloud with Azure

This process is a bit more technical. First, you need to access the non-Swarm mode of Docker Cloud. Log in using Chrome, not Firefox.

If your cloud.docker.com looks like this, you need to turn off “Swarm mode”:


Then you will get this:

Click on “Cloud Settings”.

Find “Cloud Providers” and click on the “Connection” link on the right for Microsoft Azure. As you can see, it is possible to choose several others, which will work perfectly well too, but they all have different integration procedures.

Open CHROME (Firefox does not work for some reason) and follow the instructions on: https://docs.docker.com/docker-cloud/infrastructure/link-azure/

Click on the big button “Download Management Certificate” with Chrome. With Firefox, this button will not respond.

What to do – Updated:

Staytus is an open source solution for publishing the status of your services, and will be the target software that we will deploy in Docker Cloud to run in Azure.

We are going to install a fork of Staytus (https://github.com/galexrt/staytus) that has a docker-compose.yml file ready for Docker Cloud.

Docker Cloud’s job is to deploy and maintain Container Images on a Cloud Service. In order to create a Container, Docker Cloud first needs a Virtual Machine where the Container can be deployed and run. This VM is called a Node, and it belongs to a Node Cluster.

How to create a Node Cluster in Docker Cloud? Navigate to “Node Clusters” and click “Create”:

Example Node cluster setup:

Click “Launch node cluster” and this will create a Node Cluster with one Node that runs in Azure. Navigate to the Node and find “Timeline” to see what is happening behind the scenes.

After 5-6 minutes, the Node is ready to serve.

In Azure, you will find this Node under Virtual Machines (Classic). Notice the instance guid: f41fb646.

Now you are ready to push the button below, which will create a Stackfile based on the docker-compose.yml file inside the https://github.com/galexrt/staytus repository. A Stackfile is the Docker Cloud way of describing a set of running containers.

Click on “Create and Deploy”.

If the Containers did not start, check the Timeline and the Logs that you find by clicking the links in the “Services” area. Perhaps try upgrading the size of your Node; this Staytus Stackfile needs a VM with 2-3 GB of memory because of the NodeJS runtime.

You can follow the progress of the deployment from the Timeline tab, and when everything is running, click on the Service or Container Endpoint. Just be sure to remove the tcp:// from the URL in the browser.

Nice! With one click we deployed Staytus to Azure and it’s now ready to use. Apart from setting up Docker Cloud with Azure, this was almost a one-two-ish click deployment :)

So what’s the catch? Well, the database follows the Node through something called volumes. The database is persisted on the Node (the Docker Host), which means that redeploying the Container does not lose data. However, if the Node goes down, so does the data. In production, the database would probably be backed up somewhere else, or be an Azure SQL or Amazon RDS database or something similar.

What to do – Updated:

Do it again, but this time Docker Cloud is already set up and ready to use. It is now much easier to add more Docker Services. A Docker Service is the parent of the actual Container.

Click on Create:

Go explore :)

Creating Ubuntu Docker VM in Azure

Mission goals:

– use a Linux VM in Azure

– install Docker

– connect via SSH

– install and run an open source system called Staytus

It should be straightforward, but this article is meant to record any ‘quirks’ that might occur through this adventure. This is to make it easier for someone else trying to do the same.

Prerequisites:

– Azure user. Create for free => https://azure.microsoft.com/en-us/free/

If you already have a MSDN sub then you should have some credits available. https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits-details/

Ok, I will be short and to the point. Fewer words, more action.

Go to https://azuremarketplace.microsoft.com/en-us/marketplace/apps/CanonicalandMSOpenTech.DockerOnUbuntuServer1404LTS

and start your Linux VM. Usually, this process is painless, and just works.

The result should look like something like this:


Ok, that means that we have a Docker installation in Ubuntu on Azure with the address docker-ubuntu-4ivk4ppp.cloudapp.net

Updated Mission goals:
– use a Linux VM in Azure – OK
– install Docker – OK
– connect via SSH
– install and run an open source system called Staytus

Next, we need to connect to Ubuntu somehow. If you are running Windows 10, I recommend trying out installing Ubuntu ‘natively’ http://www.windowscentral.com/how-install-bash-shell-command-line-windows-10 because it’s fun 🙂

Or just use your Git Bash: ssh <username>@<ip>

     ssh garg@docker-ubuntu-4ivk4ppp.cloudapp.net

and login.

If you get any password problems, try resetting it in Azure:


If you were successful, you should get this:


Hurray. It also shows that the latest Docker is installed.

Updated Mission goals:
– use a Linux VM in Azure – OK
– install Docker – OK
– connect via SSH – OK
– install and run an open source system called Staytus

Next, we are going to install a fork of Staytus => https://github.com/galexrt/staytus

mkdir git
cd git
git clone https://github.com/galexrt/staytus.git
cd staytus

This fork has a docker-compose.yml that splits Staytus and its database into two containers, instead of the original repository’s approach of running everything in one container (which is not “best practice” for containerization). Let’s start it!

docker-compose up

This command uses the default docker-compose.yml file.

You should start seeing something like this:


Looking at the running containers, it shows that everything is up and should be ready to use:


It also shows that Staytus is exposed on port 80 on the Ubuntu host, but runs on port 5000 inside the Container.

Open it!

Get the IP from Azure Portal and open it in a browser:


Great success! Ok, not really. We have to create some endpoints in Azure. The docker-compose file already forwards container port 5000 to port 80 on the Ubuntu server (the Docker Host). This means that in order to reach the container, we need to reach port 80 on the Ubuntu server.
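In docker-compose.yml terms, that forwarding is the usual host:container port mapping, something like:

ports:
  - "80:5000"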


Here I have added port 80 and called it “Staytus”.

Let’s check the browser again:


Updated Mission goals:
– use a Linux VM in Azure – OK
– install Docker – OK
– connect via SSH – OK
– install and run an open source system called Staytus – OK

Whoop whoop!

That was pretty cool. In less than 2 hours, including writing this article, Staytus is now running. I write slow:)

Breakpoint Bug in Visual Studio 2015

Today I discovered a peculiar bug in Visual Studio 2015. If you put in a very long string:

string strings = @"<very long string, at least 80000 characters>";

it does not stop on Breakpoints anymore:)

Visual Studio 2015 does not throw any errors. It just stops stopping on Breakpoints.

The solution was obviously to put the string in another file and read it into the program, but it was still a rather fun bug to track down.