Grav on Azure - Infrastructure Deployment

July 21st at 1:42am Kenny Lowe

As far as I'm aware, there isn't a guide for deploying Grav into Azure App service, or for integrating it into the various PaaS elements I've chosen to make use of. Even the documentation for deploying in a Windows environment is fairly scant, so we'll be learning as we deploy here for the most part, which in my experience is always the best way to learn!

We'll start out with a completely empty Azure environment, and deploy each component step by step. This post won't cover the code deployment; however, by the time we're finished here, we'll have all of the infrastructure components deployed and integrated as they need to be.

The first thing we need to do is deploy an Azure App Service into the West Europe region. My naming conventions are vaguely sensible, but it's not mandatory that you follow them, feel free to use your own.

Deploy another identical App Service and associated App Service Plan into East US, and we have the first parts of our infrastructure framework in place.
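If you'd rather script these two deployments than click through the portal, they can be sketched with the Azure CLI. The resource group, plan, and app names below are illustrative placeholders, not the names used in this walkthrough:

```shell
# Primary region: West Europe (all names are illustrative)
az group create --name rg-grav-weu --location westeurope
az appservice plan create --name plan-grav-weu --resource-group rg-grav-weu \
    --location westeurope --sku S1
az webapp create --name grav-weu --resource-group rg-grav-weu --plan plan-grav-weu

# Secondary region: East US
az group create --name rg-grav-eus --location eastus
az appservice plan create --name plan-grav-eus --resource-group rg-grav-eus \
    --location eastus --sku S1
az webapp create --name grav-eus --resource-group rg-grav-eus --plan plan-grav-eus
```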

The next thing we'll do is create a DNS zone. This is by no means necessary, and if you manage DNS for your domain elsewhere feel free to substitute in your own reality here. I find Azure DNS works really well, so let's use it here.
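The zone creation can also be scripted. As a sketch, assuming a global resource group and using the domain from later in this post (swap in your own domain and names):

```shell
# Create the DNS zone in a "global" resource group (names illustrative)
az group create --name rg-grav-global --location westeurope
az network dns zone create --resource-group rg-grav-global --name azurestack.tip

# Azure DNS assigns name servers; delegate your domain to these at your registrar
az network dns zone show --resource-group rg-grav-global --name azurestack.tip \
    --query nameServers
```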

At this stage we've created two generic Web Apps and a DNS zone, which pinned to the dashboard look like this:

Following through our previous post, the next thing we want to do is geographically balance traffic between the two web apps - after all that's why we have two deployed in the first place. In order to achieve that, we need to deploy a Traffic Manager Profile.

Having created the Traffic Manager Profile, we need to create Traffic Manager Endpoints, so Traffic Manager knows how to route its traffic. We select the Endpoints area in Traffic Manager...

... and create an endpoint pointing to one of our Azure Web Apps. We need to do this twice, once for the West Europe app and once for the East US app. Here I've chosen to add East US first.

After creating both Traffic Manager endpoints, they should both appear as Online as below.
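The profile and its two endpoints can be sketched with the Azure CLI as well. The profile name, DNS prefix, and app/group names are placeholders for illustration:

```shell
# Performance routing sends each visitor to the closest healthy endpoint
az network traffic-manager profile create --name tm-grav \
    --resource-group rg-grav-global --routing-method Performance \
    --unique-dns-name grav-demo   # resolves as grav-demo.trafficmanager.net

# Add each web app as an Azure endpoint (app/group names illustrative)
for app in grav-weu:rg-grav-weu grav-eus:rg-grav-eus; do
  name=${app%%:*}; rg=${app##*:}
  az network traffic-manager endpoint create --name "$name" \
      --profile-name tm-grav --resource-group rg-grav-global \
      --type azureEndpoints \
      --target-resource-id "$(az webapp show --name "$name" \
          --resource-group "$rg" --query id -o tsv)"
done
```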

You should also be able to load your website through the Traffic Manager URL. We haven't associated any custom DNS with it yet, so to test it, just navigate to the Traffic Manager URL directly.

Success! Traffic Manager is now serving content to me from the web app which is geographically closest to me.

We don't want to connect to the website via the trafficmanager.net URL though, so the next thing we need to do is associate custom domain names with each web app. Navigate back to the web app, and select 'Custom Domains'.

Add a hostname, and validate it. In order to validate it, you will need to create the appropriate DNS records as specified by the Custom Domains validation page.

Once you've added the appropriate CNAME or A records to validate your ownership of the domain, it will appear in the Custom Domains page, alongside the default domain name and Traffic Manager domains that we created earlier.
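Once the validating DNS record exists, attaching the hostname itself is a one-liner per app. The hostname here matches the domain used later in this post; the app and group names are placeholders:

```shell
# The DNS record proving ownership must exist before this succeeds
az webapp config hostname add --webapp-name grav-weu \
    --resource-group rg-grav-weu --hostname www.azurestack.tip
```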

Repeat the process for the second web app we deployed into the secondary region. If you have a multi-SAN cert or a wildcard cert, then you can use different subdomains here, which makes life slightly easier later. In my case my cert only covers a single hostname, so I must use the same domain for each web app.

Next we want to add an SSL certificate, so navigate to the SSL tab of the web app, and upload a certificate.

Select your certificate, and upload it. Repeat this for the secondary web app as well.

Next add an SSL binding and associate it with the www.azurestack.tip domain we added earlier - repeat this step for the secondary web app as well.
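The upload-and-bind steps above can be scripted too. A minimal sketch, assuming a PFX file and the placeholder names used in the earlier snippets:

```shell
# Upload the PFX and capture the thumbprint it returns (names illustrative)
thumbprint=$(az webapp config ssl upload --name grav-weu \
    --resource-group rg-grav-weu \
    --certificate-file mycert.pfx --certificate-password "$PFX_PASSWORD" \
    --query thumbprint -o tsv)

# Bind the certificate to the custom hostname using SNI
az webapp config ssl bind --name grav-weu --resource-group rg-grav-weu \
    --certificate-thumbprint "$thumbprint" --ssl-type SNI
```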

Lastly, add a CNAME DNS entry pointing www to our Traffic Manager URL.
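In Azure DNS that CNAME looks like the following; the trafficmanager.net name is a placeholder for whatever DNS prefix you chose on your profile:

```shell
# tm-grav.trafficmanager.net is a placeholder for your profile's DNS name
az network dns record-set cname set-record --resource-group rg-grav-global \
    --zone-name azurestack.tip --record-set-name www \
    --cname tm-grav.trafficmanager.net
```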

Once DNS propagates, test your new SSL-secured URL by navigating to it. Success! I am now geographically load balancing an SSL secured site between the US and Europe :)

This can be validated by performing a DNS query against the site from a machine in Europe and from a machine in the US - each returns the appropriate IP for the associated web app. Below is the result when querying from a client in Europe.
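The query itself is just a standard lookup against the custom hostname; with Performance routing, each client should see the IP of its nearest region:

```shell
# Run from clients in different regions; each should resolve to its local web app
nslookup www.azurestack.tip
# or, on systems with dig available:
dig +short www.azurestack.tip
```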

At this stage I've not had to log into a VM, not had to perform any patch and update activities, not had to install IIS or any of the associated dependencies, and not had to install and configure a load balancer or worry about where that load balancer is physically located. I've had to do very little really, and I have a geographically load balanced and SSL secured website deployed.

Next I want to deploy my Redis cache services, even though I'm not doing anything with them or integrating them into the web app quite yet. Search for Redis in the Azure Marketplace, and deploy a new cache service. I'm deploying one into West Europe and one into East US, so each web app has its own local Redis instance available. I've chosen a Standard C1 size because Standard delivers an SLA via a clustered instance behind the scenes, and C1 is the smallest size in the Standard category. This is massive overkill for my needs, but I have the free credits, so why not? :)
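The two cache deployments can be sketched with the CLI as well; cache and group names are placeholders:

```shell
# Standard C1 in each region (cache names illustrative)
az redis create --name grav-redis-weu --resource-group rg-grav-weu \
    --location westeurope --sku Standard --vm-size c1
az redis create --name grav-redis-eus --resource-group rg-grav-eus \
    --location eastus --sku Standard --vm-size c1
```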

The Redis instances will take a while to deploy - I think they're spun up and configured on demand, as opposed to App Service instances which sit on hot standby - so you may be waiting 30+ minutes for them to finish deploying.

While the Redis instances are deploying, we can take the time to deploy the CDN which was part of our high level architecture earlier.

As the CDN service is a global service, add it to the global resource group we created earlier, and don't worry about where it says it'll store its metadata. There are pros and cons to the various Verizon, Akamai, and Microsoft services, but for our blog we definitely just need the Microsoft Standard service.

Choose to create an endpoint now if you want, I forgot to tick that box and had to click a few other buttons to get back there. Oh the humanity!

Adding an endpoint tells the CDN service what data to geo-distribute, and because we can choose an Azure Web App as a source there's no wacky authentication/authorisation to worry about.
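A CLI sketch of the profile plus an endpoint sourced from the primary web app; the profile and endpoint names are placeholders:

```shell
# CDN profile plus an endpoint fronting the primary web app (names illustrative)
az cdn profile create --name cdn-grav --resource-group rg-grav-global \
    --sku Standard_Microsoft
az cdn endpoint create --name grav-cdn --profile-name cdn-grav \
    --resource-group rg-grav-global \
    --origin grav-weu.azurewebsites.net \
    --origin-host-header grav-weu.azurewebsites.net
```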

The last prep step we want to take before actually looking at deploying the codebase is scalability of the App Service itself. If I'm getting a lot of traffic, I want the app service to be able to auto scale itself out to accommodate, and then scale back in when required as well. I have no idea what scale looks like in this site yet, so I'll take some generic advice and scale based on CPU metrics for now.

For each of our two web apps, navigate to the Scale Out section, and choose 'Enable Autoscale'.

Here I've chosen to auto-scale to a max of three instances when a CPU threshold of 70% is breached. There is also a rule in place to scale back down to the default one instance if CPU time is back to normal.
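Those same rules can be sketched against the App Service plan with the CLI. This assumes the placeholder plan/group names from earlier, and a 30% scale-in threshold as one reasonable interpretation of "back to normal":

```shell
# Autoscale the App Service plan between 1 and 3 instances (names illustrative)
az monitor autoscale create --resource-group rg-grav-weu \
    --resource plan-grav-weu --resource-type Microsoft.Web/serverfarms \
    --name autoscale-grav-weu --min-count 1 --max-count 3 --count 1

# Scale out by one instance when average CPU exceeds 70% over 10 minutes
az monitor autoscale rule create --resource-group rg-grav-weu \
    --autoscale-name autoscale-grav-weu \
    --condition "CpuPercentage > 70 avg 10m" --scale out 1

# Scale back in when average CPU drops below 30%
az monitor autoscale rule create --resource-group rg-grav-weu \
    --autoscale-name autoscale-grav-weu \
    --condition "CpuPercentage < 30 avg 10m" --scale in 1
```

Repeat for the App Service plan in the secondary region.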

Having finished all that, we now have all of the required infrastructure to deploy a highly resilient, SSL secured, auto-scaling, very responsive website! Next we will focus on how we actually deploy our website, and then configure it to take advantage of these infrastructure features.

Kenny Lowe