Training is great, free training is even better…

With the final month-ish countdown to Azure Stack multi-node systems being delivered on-premises underway, training materials and courses have begun to pop up online to help get people up to speed. One of the first is from Opsgility, offering a ten-module Level 200 course on implementing Azure Stack solutions.

The course is available here: 

Opsgility is, of course, a paid-for service; however, if you sign up for a free Visual Studio Dev Essentials account, three months of Opsgility access are included for free, as well as the ever-useful $25 of free Azure credit every month for a year.

Enjoy! 🙂


Well Microsoft Inspire has kicked off in fine form, both with the announcement of the GA of the Azure Stack Development Kit (formerly One Node PoC), and with the announcement that Azure Stack multi-node systems from Dell EMC, HPE, and Lenovo are available for pre-order now, shipping in September.


Useful Links:


Azure Stack Overview

The Azure Stack Development Kit

The Azure Stack Development Kit Release Notes

Updated App Service Bits for Azure Stack Dev Kit

How to Buy Azure Stack

Azure Stack Management Pack for SCOM is RTM

Julia White discussed Azure Stack

TheRegister article on Azure Stack

NYT on Azure Stack

Why Azure Stack is a game changer for hybrid IT

Business Insider article on Azure Stack launch


This marks the start of a ~2 month countdown to launch – the final furlong in a multi-year journey to true hybrid cloud, and for those of us who have been working deep in the product for that whole time, the excitement in the community is palpable.


This isn’t the end of the journey, however – it’s day one of a whole new wave of datacentre innovation, as the reality and the power of the hybrid cloud plus the intelligent edge really start to be understood. There is still so much both to learn and to teach: how we in the service provider industry most effectively deliver value against the inevitability of data gravity, and how we most effectively build into the future without disregarding our heritage.


Cloud doesn’t replace virtualisation, certainly not in the next few years. I’ve run many, many Azure Stack customer workshops, and the single most common assumption about Azure Stack is that it is a VMware and Hyper-V replacement. The good news for those who have built careers in virtualisation over the last ten years or more is that, in most cases, applications designed for virtualisation still run best there today. Over time, the inevitable new waves of cloud-native and cloud-first applications will of course displace those traditional 1-, 2-, and 3-tier applications, but the important takeaway right now is that Digital Transformation is not all or nothing – it can occur over time.


It’s that breathing room in many cases that Azure Stack provides – the ability to iteratively modernise parts of applications over time, while maintaining intra-DC bandwidth and latencies, and without having to disregard or immediately abandon existing hardware and hypervisors.


So this has been a hugely exciting journey to date, and if one thing has been made clear at Inspire today, it’s that partners are the lynchpin that will drive forward products like Azure Stack in the future. I’m incredibly excited to keep sharing as we go forward, but even more importantly, I’m excited to keep on learning!


Right, time to get a VPN set up on this laptop so I can deploy the Azure Stack Development Kit… let’s get building the future!

While running through the (very worthwhile) Azure Functions Challenge, I encountered an error that was new to me, and a quick method of working around/fixing it.

After deploying Challenge 4, I received the following error when trying to open the Function:

“We are not able to retrieve the keys for function … This can happen if the runtime is not able to load your function. Check other function errors.”

It seems this is because I used the same app name multiple times. Encrypted values are stored in your app’s storage and tied to the app name, and while new keys are generated within the app context when you re-create it, the old encrypted values are simply carried over in the underlying folder structure.

Deleting the existing files will cause them to be regenerated with values from the new encryption keys, so go ahead and open up Kudu by navigating to:

Change directory to D:\home\data\Functions\secrets, and delete everything in that folder. In my instance, one file, host.json.
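If you’d rather script the cleanup than click around the Kudu console, the same delete can be done through Kudu’s VFS REST API. A rough sketch in PowerShell – the app name is a placeholder, and the credentials are your app’s Kudu/deployment credentials, not your Azure login:

```powershell
# Placeholder Function App name – substitute your own
$app   = "myfunctionapp"
$creds = Get-Credential   # Kudu (deployment) credentials

# The VFS API exposes the same file system as the Kudu console;
# the If-Match: * header is required for deletes
Invoke-RestMethod -Uri "https://$app.scm.azurewebsites.net/api/vfs/data/Functions/secrets/host.json" `
    -Method Delete -Credential $creds -Headers @{ "If-Match" = "*" }
```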

Refresh the portal and look in the folder again, and you should find newly regenerated files therein. Your Function should load and work properly as well. Hurrah!



One of the first steps many people take in their journey to Azure or Azure Stack is the migration of an existing workload, rather than building net new. Typically most people would recommend choosing a non-production-critical web-based application running within a VM or across multiple VMs.

There are three usual ways people move this sort of workload:

  • Lift and shift of IaaS to IaaS
  • Re-platform of IaaS to PaaS
  • Partial re-platform of IaaS to PaaS and IaaS

With the workload running within the cloud environment, we are far better positioned to modernise it gradually and when appropriate using cloud-native features.

There are other options available for workload migration, however we find they’re rarely used in the real world just now due either to lack of awareness, or perceived increased complexity. One of those methods which falls squarely into both camps for most people today is containerising an existing workload, and moving it into IaaS.

Containerising in this way can have many benefits – the two core benefits we’ll focus on here though are shrinking of workload footprint, and simplified migration and deployment into the cloud environment.

In order to significantly reduce the knowledge cliff and learning curve needed to containerise an existing workload, a really exciting new community-created and driven PowerShell module was announced at DockerCon a couple of weeks ago, Image2Docker for Windows. Image2Docker also exists for Linux, but for this blog we’ll be focused on the Windows variant.

Image2Docker is able to inspect a VHD, VHDX, or WIM file, identify installed application components (from a select group for now), extract them to an output folder, and then build a Dockerfile from which you can then build a container.

It’s a brilliant tool which begs the question ‘How quickly can I move an existing workload to a cloud provider then?’

… so let’s answer it!

My Azure Stack PoC hosts are being moved to a new rack just now, so for the purposes of this demo I’ll use Azure. The process and principles are identical here though, as we’re just using a Windows Server 2016 VM with Container support enabled.

First of all we will need a workload to move. For this first test I’ve deployed a very simple website in IIS – we can get bolder later with databases and jazz; for now this is a plain-jane HTML site running on-premises in Hyper-V.


The server we run the Image2Docker cmdlets from will need to have the Hyper-V role installed, so to keep life easy I’m running it from the Hyper-V host that the VM already lives on. I’ve also enabled the Containers role and installed the Docker Engine for Windows Server 2016.


Because the Image2Docker cmdlets mount the VHD/X, it needs to be offline when you run the tool. You can either take a copy of the VHD/X and run the tool against that, or, as I’m doing in this case, just shut the VM down.

Ok, so with the VM shut down, our first step on the Hyper-V host is to install the Image2Docker cmdlets. This is made extremely easy by virtue of the module being hosted in the PowerShell Gallery.

So simply Install-Module Image2Docker, then Import-Module Image2Docker, and you’re all set!
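In full, assuming the PowerShell Gallery is reachable from the host:

```powershell
# Pull the module down from the PowerShell Gallery and load it
Install-Module -Name Image2Docker
Import-Module -Name Image2Docker

# Sanity check – list the cmdlets the module provides
Get-Command -Module Image2Docker
```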


My VHDX here is hosted in a remote SOFS share, so to remove the need for me to keep having to edit hostnames out of images, I’ve just mapped it to X:


First up we’ll create a folder for the Image2Docker module to output to, both the contents of the IIS site and the resultant Dockerfile will live here.


Now comes time to extract the website and build the Dockerfile.

The documentation claims that the only required parameter is the VHD/X and that it will scan for any application/feature artifacts within the VM automatically. It also claims that you can specify multiple artifacts (e.g. IIS, MSSQL, Apache) for it to scan, and it will extract them all.

Sadly, after reviewing the PowerShell code for it here, it turns out that this is aspirational for now: the Artifacts parameter is required, and supports only a single argument. C’est la vie – luckily it’s not an issue for our basic IIS site here.

Run the cmdlet, targeting the VM’s VHDX, IIS as the artifact to scan, and the pre-created OutputPath.
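As a sketch – the VHDX path here is hypothetical, and the cmdlet name (ConvertTo-Dockerfile) is as shipped in the Image2Docker module at the time of writing:

```powershell
# Mounts the VHDX, scans it for the IIS artifact, extracts the site content,
# and writes a Dockerfile into the output folder
ConvertTo-Dockerfile -ImagePath "X:\VMs\IISDemo.vhdx" -Artifact IIS -OutputPath "C:\DockerOut"
```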


After running, the DockerOut folder will contain all the bits of the puzzle needed to build a container based on the IIS website within the VM – hurrah!


Ok! So before we go any further, let’s prep our Docker environment. I already have Docker Engine installed and it’s logged into my Docker account, so let’s get some base images ready.

Because PowerShell == Life, I’ve also installed the Docker PowerShell Module.

Register-PSRepository -Name DockerPS-Dev -SourceLocation

Install-Module -Name Docker -Repository DockerPS-Dev -Scope CurrentUser

This lets us check that there are no containers and no images currently on the server using Docker native and PowerShell cmdlets.


This is where we can either blast ahead with defaults, or make some informed choices…

Looking inside the Dockerfile, right at the top we can see that the base image for this container is the ASP.NET Windows Server Core image from


All we’re doing is running a very simple IIS website here, so why not run it with Nano Server as the base? For comparison, I’ve pulled down the IIS-enabled Nano Server Docker image, and the Windows Server Core image referenced in the Dockerfile.
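The pulls look something like this – the image names are as they were published on Docker Hub at the time of writing, and the tags may well have moved since:

```powershell
# IIS-enabled Nano Server, and the Server Core ASP.NET image
docker pull microsoft/iis:nanoserver
docker pull microsoft/aspnet

# Compare the image sizes side by side
docker images
```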


Holy moley! It’s almost a 90% image reduction going from Server Core to IIS-enabled Nano Server! Let’s definitely do that.


Kick off the build process with docker build DockerOut, and off we go!


… and just like that, the Image is built.

The image has no associated Repo or Tag yet, so let’s add those, then push it up to Docker Hub.
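Tagging and pushing looks something like the following – the image ID, Docker ID, and repo name are all placeholders for your own:

```powershell
# Associate a repo and tag with the freshly built image
docker tag <image-id> mydockerid/iisdemo:v1

# Authenticate, then push the tagged image up to Docker Hub
docker login
docker push mydockerid/iisdemo:v1
```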


The iisdemo repo doesn’t exist within my Docker account yet, but that’s fine – just pushing it up will create and initialise it.


… and hey presto, the image is in my Docker repo. This could just as easily be a private repo.


Now that we have the container in Docker Hub, I can go to any Windows Server container-friendly environment and just pull and run it. Just like that.

In Azure I have deployed a Windows Server 2016 VM with Containers support enabled, which can be a host for whatever containers I deign to run on it. In this simple demo I’ll just be running the one container of course.


Within this VM, getting our pre-built image is as simple as pulling it down from Docker Hub.


Running the container is a single command more, with a map of port 80 in the container to 80 on the host…
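Those two steps, sketched with a placeholder repo name:

```powershell
# Pull the pre-built image down from Docker Hub
docker pull mydockerid/iisdemo:v1

# Run detached, mapping port 80 in the container to port 80 on the host
docker run -d -p 80:80 mydockerid/iisdemo:v1
```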


… and hey presto! Our website has been containerised, pushed to Docker Hub, pulled down to a VM in Azure (or Azure Stack, or anywhere that can run Windows Containers), and the website is up and running happily.


If we break down the steps that were actually needed here to migrate this workload, we had:

  • Generate Dockerfile
  • Tweak Dockerfile
  • Build Container Image
  • Push Image to Docker Hub
  • Pull Image from Docker Hub
  • Run Container

We’re not getting any of the space-saving benefits that have been generated here due to the way I’ve deployed it as a 1:1 mapping of container to VM. Deploying this as a Hyper-V container within Hyper-V 2016 would have seen a ~90% space saving vs the original VM size, which is pretty awesome.
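For reference, running the same image as a Hyper-V container is just a matter of the isolation flag – a sketch, assuming a Windows Server 2016 host with the Hyper-V role installed and the same placeholder repo name:

```powershell
# --isolation=hyperv runs the container inside a lightweight utility VM
# rather than as a process on the shared host kernel
docker run -d -p 80:80 --isolation=hyperv mydockerid/iisdemo:v1
```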

This space saving per container can also be realised via the Azure Container Service preview for Windows Containers, which is available in public preview if you deploy a Container Service with Kubernetes as the orchestrator. The space savings really come into play when operating at a scale greater than that of our test site here though, so for a single website like this there’s really no point. There are other resource-saving options as well, which fall outwith the scope of this blog.

Obviously this is a very simple workload, and we’re still at very early days with this technology. It hopefully gives a glimpse into the future of how easy it will be to migrate any existing workload to any cloud which supports Windows or Linux containers though.

Once a workload is migrated like-for-like into a cloud environment, extending it and modernising it using cloud-native features within that environment becomes a much simpler proposition. For those for whom working out the best path to do an initial IaaS to IaaS migration is a pain just now (insert fog of war/cloud metaphor), tools like Image2Docker are going to significantly ease the pain and planning required for that first step towards cloud.

So how long did this take me, end to end, including taking screenshots and writing notes? Well the screenshots are there, and I was done in around 30 minutes – this is partly because I’d stripped back pre-reqs like the Core and Nano images in order to get screenshots. Normally these would already be in place and used as the base for multiple images.

Running through the process again with all pre-requisites already in place took around 3 minutes to go from on-premises to running in Azure. So, to answer the question we asked back at the start – not long. Not long at all.


When choosing a platform to build an application for, developers need to consider a number of common factors – skillset in the market, end user reach of the platform, supportability, roadmap, and so on. This is one of the reasons why Windows Phone has had such a difficult time; development houses won’t choose to invest time into it because of limited user reach, uncertain roadmap and support, and the need to develop new skills. There’s just no incentive there to do so, and there is much risk.

Reaching Equilibrium

When I say application delivery platform, I refer to any of a number of areas, including:

  • Desktop Operating Systems
  • Mobile Operating Systems
  • Virtualisation
  • Cloud Platforms

In each of these areas, during their birth as a new delivery paradigm there tend to be many contenders vying for developer attention. I strongly believe that in any category, over time the number of commonly used and accepted platforms will naturally tend towards a low number as a small subset of them reach developer critical mass, and the others lose traction.

Once you reach this developer critical mass, a platform becomes self-perpetuating. End-user reach is massive, developer tools are well matured, roadmaps are defined, and needed development skills fill the market. Once a few platforms in a category reach this point, no others can compete as they can’t attract developers, the laggards wither and die, and the platforms in the category become constant.

I call this process HomeOStasis.

  • In the Desktop and Server arena this has tended to Windows and *nix.
  • In the Mobile arena Android and iOS.
  • In Virtualisation land we predominantly have VMware and Hyper-V.
  • In Cloud there is currently AWS, Azure, and Google Cloud Platform.

It’s still a contentious view among both hardware vendors and IT Pros, as we haven’t quite reached that HomeOStatic point with cloud yet, but I can’t see cloud-native landing as anything other than AWS, Azure, and GCP. SoftLayer and Oracle do fit the bill, but given the certainty of eventual HomeOStasis, I don’t see them gaining the developer critical mass they need to become the core of the stable and defined cloud platform market.


Cloud and Virtualisation

Note that this isn’t Cloud vs Virtualisation, each are powerful and valuable application delivery platforms with their own strengths and weaknesses, and designed to achieve and deliver different outcomes, just like desktop and mobile operating systems.

Virtualisation is designed to support traditional monolithic and multi-tier applications, building resiliency into the fabric and hardware layers to support high availability of applications which can take advantage of scale-up functionality.

Cloud is designed to support containerised and microservice-based applications which span IaaS and PaaS and can take advantage of scale-out functionality, with resiliency designed into the application layer.

Yes you can run applications designed for virtualisation in a cloud-native environment, but it’s rarely the best thing to do, and it’s unlikely that they’ll be able to take advantage of most of the features which make cloud so attractive in the first place.


Hybrid Cloud and Multi Cloud

Today, the vast majority of customers I speak to say they are adopting a hybrid cloud approach, but the reality is that the implementation is multi cloud. The key differentiator between these is that in hybrid cloud the development, deployment, management, and capabilities are consistent across clouds, while in multi cloud the experience is disjointed and requires multiple skillsets and tools. Sometimes organisations will employ separate people to manage different cloud environments, sometimes one team will manage them all. Rarely is there an instance where the platforms involved in multi cloud are used to their full potential.

Yes, there are cloud brokerages and tools which purport to give a single management platform and a consistent experience across multiple different cloud platforms, but in my opinion this always results in a diminished overall experience. You end up with a lowest-common-denominator outcome where you’re unable to take advantage of many of the unique and powerful features in each platform, for the sake of consistent and normalised management. It’s actually not that different to development and management in desktop and mobile OSes – there have always been comparisons and trade-offs between native and cross-platform tooling and development, with ardent supporters in each camp.

Today, the need to either manage in a multi cloud model, or diminish overall experience with an abstracted management layer is a direct consequence of every cloud and service provider today delivering a different set of capabilities and APIs, coupled with a very real customer desire to avoid vendor lock-in.


Enabling True Cloud Consistency

The solution to this has been for Microsoft to finally deliver a platform which is consistent with Azure not just in look and feel, but truly consistent in capabilities, tooling, APIs, and roadmap. Through the appliance-based approach of Azure Stack, this consistency can be guaranteed through any vendor at any location.

This is true hybrid cloud, and enables the use of all the rich cloud-native capabilities within the Azure ecosystem, as well as the broad array of supported open-source development tools, without the risk of vendor lock-in. Applications can span and be moved between multiple providers with ease, with a common development and management skillset for all.

Once we have reached a point of HomeOStasis in Cloud, platform lock-in through use of native capabilities is not a concern either, as roadmap, customer-reach, skillset in the market, and support are all taken care of.

A little-discussed benefit of hybrid cloud through Azure Stack is the mitigation of collapse or failure of a vendor. An application which runs in Azure and Azure Stack can span multiple providers and the public cloud, protected by default from the failure or screw-up of one or more of those providers. The cost implications of architecting like this are similar to multi cloud, however the single skillset, management framework, and development experience can significantly help reduce TCO.

Azure Stack isn’t a silver bullet to solve all application delivery woes, and virtualisation platforms will remain as important as ever for many years to come. Over and above virtualisation though, when evaluating your cloud-native strategy, there are some important questions to bear in mind:

  • Who do I think will be the Cloud providers that remain when the dust settles and we achieve HomeOStasis?
  • Do I want to manage a multi cloud or hybrid cloud environment?
  • Do I want to use native or cross-platform tooling?
  • What will common and desirable skillsets in the market be?
  • Where will the next wave of applications I want to deploy be available to me from?

I’m choosing to invest a lot of my energy into learning Azure and Azure Stack, because I believe that the Azure ecosystem offers genuine and real differentiated capability over and above any other cloud-native vendor, and will be a skillset which has both value and longevity.

When any new platform paradigm comes into being, it’s a complete roll of the dice as to which will settle into common use. We’re far enough along in the world of cloud now to make such judgements though, and for Azure and Azure Stack it looks like a rosy future ahead indeed.

When you deploy a new Azure Function, one of the created elements is a Storage Account Connection, either to an existing storage account or to a new one. This is listed in the ‘Integrate’ section of the Function, and automatically sets the appropriate connection string behind the scenes when you select an existing connection, or create a new one.


Out of the box however, this didn’t work correctly for me, throwing an error about the storage account being invalid.


Normally to fix this, we could just go to Function App Settings, and Configure App Settings to check and fix the connection string…


… however after briefly flashing up, the App Settings blade reverts to the following ‘Not found’ status.


There are a fair few ways to fix these existing App Settings connection strings, or just have them deployed correctly in the first place (e.g. in an appsettings.json file). In this instance though I’m going to fix the existing strings through PowerShell, as it’s always my preferred troubleshooting tool.

Fire up an elevated PowerShell window, and let’s get cracking!

  1. Ensure all pre-requisites are enabled/imported/added.

Assuming you have followed all the steps to install Azure PowerShell, which you must have in order to have App Service deployed… 🙂

From within the AzureStack-Tools folder (available from GitHub).

Import-Module AzureRM 
Import-Module AzureStack 
Import-Module .\Connect\AzureStack.Connect.psm1 

Add-AzureStackAzureRmEnvironment -Name "AzureStackUser" -ArmEndpoint "https://management.local.azurestack.external"

# Login with your AAD User (not Admin) Credentials 
Login-AzureRmAccount -EnvironmentName "AzureStackUser"
  2. Investigate the status of the App Settings in the Functions App.

The Functions App is just a Web App, so we can connect to it and view settings as we would any normal Web App.

The Function in question here is called subtwitr-func, and lives within the Resource Group subtwitr-dev-rg.


$myResourceGroup = "subtwitr-dev-rg" 
$mySite = "subtwitr-func" 
$webApp = Get-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -Name $mySite -Slot Production 
$appSettingList = $webApp.SiteConfig.AppSettings 

# Copy the app settings into a hashtable so we can edit and re-apply them
$hash = @{} 
ForEach ($kvp in $appSettingList) { 
    $hash[$kvp.Name] = $kvp.Value 
} 

$hash | fl 

Below is the output of the above code, which shows all our different connection strings. There are two storage connection strings I’ve tried to create here – subtwitr_STORAGE which I created manually and storagesjaohrurf7flw_STORAGE which was created via ARM deployment.

I’m not worried about exposing the Account Keys for these isolated test environments so haven’t censored them.


As neither of these strings contains explicit paths to the Azure Stack endpoints, they are trying to resolve to the public Azure endpoints. Let’s fix that for the storagesjaohrurf7flw_STORAGE connection.

$hash['storagesjaohrurf7flw_STORAGE']= 'BlobEndpoint=https://storagesjaohrurf7flw.blob.local.azurestack.external;TableEndpoint=https://storagesjaohrurf7flw.table.local.azurestack.external;QueueEndpoint=https://storagesjaohrurf7flw.queue.local.azurestack.external;AccountName=storagesjaohrurf7flw;AccountKey=MZt4gAph+ro/35qE+AbFEiE4NK6s5XVU/Y4JAi3p3l7yy1d3qx0QPETNl+bGW+fNNvtJHxSXI7TETBWKJw2oQA==' 
set-azurermwebappslot -ResourceGroupName $myResourceGroup -name $mySite -AppSettings $hash -Slot Production 

Now with the endpoints configured, the Function is able to connect to the Blob storage endpoint successfully and there is no more connection error.

Had I explicitly defined the connection string in-code pre-deployment, this would not have been an issue. If it is an issue for anyone, here at least is a way to resolve it until the App Settings blade is functional.

Below are a few quick tips to be aware of with the advent of the TP3 Refresh.

Once you have finished deployment, there is a new Portal Activation step.

This has caught a few people out so far – as ever, the best tip is to make sure you read all of the documentation before deployment!


When Deploying a Default Image, make sure you use the -Net35 $True option to ensure that all is set up correctly in advance for when you come to deploy your MSSQL Resource Provider.

.Net 3.5 is a pre-requisite for the MSSQL RP just now, and if you don’t have an image with it installed, your deployment of that RP will fail. It’s included in the example code in the documentation, so just copy and paste that and you’ll be all good.

$ISOPath = "Fully_Qualified_Path_to_ISO" 
# Store the AAD service administrator account credentials in a variable
$UserName='Username of the service administrator account' 
$Password='Admin password provided when deploying Azure Stack'|ConvertTo-SecureString -Force -AsPlainText 
$Credential=New-Object PSCredential($UserName,$Password) 
# Add a Windows Server 2016 Evaluation VM Image. Make sure to configure the $AadTenant and AzureStackAdmin environment values as described in Step 6 
New-Server2016VMImage -ISOPath $ISOPath -TenantId $AadTenant -EnvironmentName "AzureStackAdmin" -Net35 $True -AzureStackCredentials $Credential 

Deployment of the MSSQL Resource Provider: the parameter name documentation is incorrect

The Parameters table lists DirectoryTenantID as being the name of your AAD tenant. In actual fact it requires the AAD tenant GUID. This has been fixed via Git and should be updated before too long.


Use the Get-AADTenantGUID command in the AzureStack-Tools\Connect\AzureStack.Connect.psm1 module to retrieve this.
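A sketch of retrieving the GUID – the directory name is a placeholder, and the parameter names here are from memory, so Get-Help Get-AADTenantGUID will confirm them in your copy of the tools:

```powershell
# Load the connect module from AzureStack-Tools
Import-Module .\Connect\AzureStack.Connect.psm1

# Resolve the directory name to the tenant GUID the RP deployment expects
$AadTenantGUID = Get-AADTenantGUID -AADTenantName "mydirectory.onmicrosoft.com"
```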


Deploy everything in UTC, at least to be safe.

While almost everything seems to work when the Azure Stack host and VMs are operating in a timezone other than UTC, I have been unable to get the Web Worker role in the App Service resource provider to deploy successfully in any timezone other than UTC.

UTC+1 Log




Well that’s it for now, I have some more specific lessons learned around Azure Functions which will be written up in a separate entry shortly.

During my TP3 Refresh deployment, I ran into an issue with the POC installer, wherein it seemingly wouldn’t download the bits for me to install and I ended up having to download each .bin file manually to proceed.

Charles Joy (@OrchestratorGuy) was kind enough to let me know via Twitter how to check the progress of download and for any errors. As ever, PowerShell is king.

To test this, I initiated a new download of the POC.

I chose a place to download to on my local machine, then started the download.

After starting the download, I fired up PowerShell and ran the Get-BitsTransfer | fl command to see what was going on with the transfer. In this instance, all is working perfectly, however something stuck out for me…


One thing to notice here is that Priority is set to Normal – this setting uses idle network bandwidth for transfer. Well I don’t want to use idle network bandwidth, I want to use all the network bandwidth! 🙂

We may be able to up the speed here by setting Priority to High or to Foreground. Set to Foreground, it will potentially ruin the rest of your internet experience while downloading, but it will move the process from being a background task using idle network bandwidth into actively competing with your other applications for bandwidth. In the race to deploy Azure Stack, this might be a decisive advantage! 🙂

Get-BitsTransfer | Set-BitsTransfer -Priority Foreground

Kicking off this PowerShell immediately after starting the PoC downloader could in theory improve your download speed. As ever, YMMV and this is a tip, not a recommendation.

Sometimes when you embark on a new piece of research, serendipity strikes which just makes the job so much simpler than you’d imagined it to be.

In this case, there are already a series of GitHub examples for integrating Azure Media Services and Azure Blob Storage via Azure Functions. It’s heartening to know that my use case is a commonly enough occurring one to have example code already up for pilfering.

Azure Media Services/Functions Integration Examples

If we recall the application ‘design’ referenced in previous blogs, the ‘WatchFolder’ console application performs a very specific function – watching a blob storage container, and when it sees a new file of a specific naming convention appear (guid.mp4), it kicks off the Transcription application. The Transcription application moves the file into Azure Media Services, performs subtitle transcription, copies out the subtitles file, runs an FFMPEG job locally to combine the video and the subtitles, and then finally tweets out the resultant subtitled video.

Through exploration of the GitHub examples linked above, specifically the ‘100-basic-encoding’ example, I can actually completely get rid of the WatchFolder application, and move everything in Transcription up to the FFMPEG job into a Function.

This is by virtue of the fact that there are pre-defined templates from which functions can be built, and one of those is a C# Function which will run whenever a blob is added to a specified container. Hurrah! Literally just by choosing this Functions template, I have removed the need for a whole C# console app which ran within a VM – this is already valuable stuff.
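Under the covers, that template is just a blob trigger binding in the function’s function.json – roughly along these lines, with the container path and connection name as assumptions for illustration:

```json
{
  "bindings": [
    {
      "name": "inputBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "input/{fileName}.mp4",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}
```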


Ok! So to get cracking with building out on top of the example function that looks to fit my use case, as ever we just hit the ‘Deploy to Azure’ button in the repo, and start to follow the instructions.


Actually, before we continue, the best thing to do is to fork this project into my own GitHub repo to protect against code-breaking changes to the example repo. Just use the Fork button at the top of the GitHub page, and choose where you want to fork it to. You’ll need to be signed into a GitHub account.


Now successfully forked, we can get on with deployment.


Enter some basic information – resource group, location, project you want to deploy etc. In this case, we’re taking the 100-basic-encoding function. Easy peasy!


Aaaaaand… Internal Server Error. Well, if everything went smoothly, we’d never learn anything, so time to get the ‘ole troubleshooting hat on.


The problem here is a common one if you use a lot of accounts for testing in Azure. When we look at the GitHub sourcecontrols provider, we can see that this particular test account has never deployed from GitHub before, and so the auth token is not appropriately set.


This is easily fixed in the Azure Portal. Open up your Functions App, select Function App Settings and then Configure Continuous Integration:


And then run through Setup to create a link to GitHub. This will kick off an OAuth process through to your GitHub account, so just follow the prompts.


After completing this and refreshing, the token now shows as set.


Excellent! Let’s redeploy 🙂

Hurrah! Success!


For no other reason than to show the consistency of approach between a traditional C# console application and a C# Azure Function, below I have pasted the bulk of the TranscribeVideo console app (down to just above the FFMPEG kick-off) directly alongside the out-of-the-box Function example code, as yet with zero changes. It’s also rather gratifying to see that the approach I took over a year ago and the one taken in this Function have significant parallels 🙂


Of course, the example code is designed to re-encode an MP4 and write it to an output blob, whereas what we want is to run an Indexing job and then output the resultant VTT subtitles file to an output blob. This only takes a handful of tiny changes, made all the easier by referencing my existing code.
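To give a flavour of the shape of those tweaks, submitting an Indexer job via the AMS v2 .NET SDK looks roughly like this. This is a sketch: `_context`, `asset`, and `indexerConfigXml` are assumed names from my own code, not the example’s verbatim identifiers:

```csharp
// Sketch: swap the encode task for an Azure Media Indexer task (AMS v2 SDK).
// _context is an authenticated CloudMediaContext; 'asset' is the uploaded video.
IJob job = _context.Jobs.Create("Transcription job");
IMediaProcessor indexer = _context.MediaProcessors
    .Where(p => p.Name == "Azure Media Indexer")
    .ToList()
    .OrderBy(p => new Version(p.Version))
    .Last();

// The configuration XML controls which artifacts (e.g. the .vtt file) come out.
ITask task = job.Tasks.AddNew("Indexing task", indexer, indexerConfigXml,
                              TaskOptions.None);
task.InputAssets.Add(asset);
task.OutputAssets.AddNew("Transcription output", AssetCreationOptions.None);

job.Submit();
job.GetExecutionProgressTask(CancellationToken.None).Wait();
// ...then download the .vtt from the output asset into the output container.
```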

With all the required tweaks to the example code – and they are just tweaks, no major changes – I have decommissioned a full console application, and migrated almost 80% of a second console application into a Functions app. This has exceeded my expectations so far.

Just for the avoidance of doubt, it all works beautifully. Below is a screenshot of the output log of the Function – it started automatically when I added a video file to the input container.


Above you can see the Function finding the video file Index.mp4, submitting it to Azure Media Services, running the transcription job, then taking the .vtt subtitles file and dropping it into the output container.

Here it is in Azure Storage Explorer:


So with that complete, I now need to look at how I encode the subtitles into the video and then tweet it. When I first wrote this many moons ago, it was significantly easier (or maybe actually only possible) to do this in an IaaS VM using FFMPEG to encode the subtitles into the video file. It looks like this might be a simple built-in function in Azure Media Services now. If that’s the case and it’s cost-effective enough, then I may be able to completely decommission the need for any IaaS, and migrate the entire application-set through into Functions.

I also want to change the function above to take advantage of the beta version of Azure Media Indexer 2, as the documentation suggests it should handle the transcription process significantly faster. If you look at the log file above, you’ll see that it took around 3 minutes to transcribe a 20 second video. If this can be sped up, so much the better.
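If I’ve read the preview documentation correctly, the change should be as small as pointing the task at the v2 preview processor, along these lines (and note Indexer 2 takes a JSON task configuration rather than v1’s XML):

```csharp
// Sketch: select the preview Indexer 2 processor instead of v1.
// The processor name here is my reading of the preview docs - verify it
// against your own MediaProcessors list before relying on it.
IMediaProcessor indexer2 = _context.MediaProcessors
    .Where(p => p.Name == "Azure Media Indexer 2 Preview")
    .ToList()
    .OrderBy(p => new Version(p.Version))
    .Last();
```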

So a few next steps to do, stay tuned for part 4 I guess! 🙂



So having made the decision to rewrite a console app in Azure Functions in my previous blog, I should probably explain what Azure Functions actually is, and the rationale and benefit behind a rewrite/port. As ever there’s no point just doing something because it’s the new shiny – it has to bring genuine cost, time, process, or operational benefit.

Azure Functions is Microsoft’s ‘Serverless’ programming environment in Azure, much like AWS Lambda. I apostrophise ‘Serverless’, because of course it isn’t – there are still servers behind the scenes, you just don’t have to care about their size or scalability. It’s another PaaS (or depending on your perspective, an actual PaaS), this time for you to deliver your code directly into without worrying about what’s beneath.




You only pay for your code when it’s being executed, unlike an IaaS VM, where you’re being charged any time the VM is running. For code which runs only occasionally, or intermittently at indeterminate times, this can result in pretty big savings.
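To put rough numbers on that claim – the rates below are the launch consumption-plan prices as I understand them, so do check the pricing page before relying on them:

```csharp
// Back-of-envelope consumption-plan maths. Assumed rates: $0.000016 per
// GB-second and $0.20 per million executions, with monthly free grants of
// 400,000 GB-seconds and 1,000,000 executions. Workload figures are invented.
double runsPerMonth = 100000;                 // illustrative call volume
double gbSeconds = runsPerMonth * 5 * 0.128;  // 5s per run at 128 MB = 64,000 GB-s
double computeCost = Math.Max(0, gbSeconds - 400000) * 0.000016;
double executionCost = Math.Max(0, runsPerMonth - 1000000) / 1000000 * 0.20;
// Both figures sit inside the free grants, so this workload costs nothing -
// compare that with a small VM left running 24/7 to do the same job.
```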

Functions will automatically scale the behind-the-scenes infrastructure on which your code runs if your call rate increases, meaning you never have to worry about scale in/up/out/down of infrastructure – it just happens for you.

Functions supports a range of languages – C#, F#, Node.js, PHP, PowerShell, Python, Bash, and so on. You can write your code in the Functions browser editor and execute it directly from there, or you can pre-compile it using your preferred environment and upload it into Functions. The choice, as they say, is yours.




Well no, don’t. When you’re looking at Functions for Serverless coding, it’s just as vital that you understand the appropriate use cases and where you can gain real operational and financial benefit as it is when you’re evaluating Azure and Azure Stack for running certain IaaS workloads.

There are a number of appropriate use cases documented on the Functions page in Azure; for our purposes there are two of immediate interest: Timer-Based Processing, and Azure Service Event Processing.

Timer-Based Processing will allow us to have a CRON-like job which ensures we keep both our blob storage containers and our Azure Media Services accounts fairly clean, so we’re not charged for storage of stale data.
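That clean-up job might look something like the run.csx sketch below. The container name, the seven-day window, and the app setting name are all my own choices; the CRON schedule itself lives in function.json:

```csharp
// run.csx sketch: timer-triggered clean-up of stale blobs. App settings
// surface as environment variables inside the Function App.
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    var account = CloudStorageAccount.Parse(
        Environment.GetEnvironmentVariable("StorageConnection"));
    var container = account.CreateCloudBlobClient()
                           .GetContainerReference("input");

    foreach (var blob in container.ListBlobs(useFlatBlobListing: true)
                                  .OfType<CloudBlockBlob>())
    {
        if (blob.Properties.LastModified < DateTimeOffset.UtcNow.AddDays(-7))
        {
            log.Info($"Deleting stale blob {blob.Name}");
            blob.DeleteIfExists();
        }
    }
}
```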

Azure Service Event Processing is the gem that will hopefully let us convert the WatchFolder app discussed in the previous blog post from a C# console app into Azure Functions. The goal of this function will be to do exactly what the C# application did, except instead of constantly watching a blob storage container and needing a whole VM to run, it will automatically trigger the appropriate code when a new file is added to a blob storage container by the UWP app.




Which leads us neatly on to design consideration #1. In the previous generation, the two console apps lived in the same VM, and could simply call each other directly. Now that the WatchFolder app is moving to Azure Functions, I need to re-think how it invokes the Transcription application.

A fairly recent addition to Functions is the ability to just upload an existing Console application into Functions and have it execute on a timer. This isn’t suitable for the whole WatchFolder app, however the sections which are responsible for timed clean-up of blob and AMS storage can be pretty easily split out and uploaded in this way.

For the part of the app which monitors for file addition to blob storage and invokes FFMPEG via the Transcription app, the way I see it with my admittedly mediocre knowledge, there are three vaguely sensible options:

    • Use the Azure Service Bus to queue appropriate data for the Transcription app to pick up and act on.
    • Create an API app within Azure Stack which can be called by the Functions app and which invokes the Transcription app to run FFMPEG.
    • Write some custom code in the Transcription app to watch AMS for new subtitles files on a schedule, and kick off from there.

Honestly, I want to avoid writing as much custom code as possible and just use whatever native functionality I can, but Service Bus won’t be available in Azure Stack at GA, an API app is probably overkill here, and I can do the required job in a handful of lines of code within the Transcription app, so that’s the way I’ll probably go here. At least in the short term while I continue to figure out the art of the possible.
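For completeness, that handful of lines in the Transcription app might look something like this sketch, where `container` is an existing CloudBlobContainer reference and `KickOffFfmpeg` stands in for the existing encode logic (both names are mine):

```csharp
// Sketch: the on-premises Transcription app polls the output container on a
// timer and kicks off FFMPEG for any subtitles file it hasn't seen before.
var seen = new HashSet<string>();
var timer = new System.Timers.Timer(60000); // poll every minute
timer.Elapsed += (s, e) =>
{
    foreach (var blob in container.ListBlobs(useFlatBlobListing: true)
                                  .OfType<CloudBlockBlob>()
                                  .Where(b => b.Name.EndsWith(".vtt")))
    {
        if (seen.Add(blob.Name))          // true only the first time we see it
            KickOffFfmpeg(blob);          // existing encode logic
    }
};
timer.Start();
```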

I should probably also note that Azure Media Services can do encoding natively itself, so in theory there’s no need for me to do all this faffing around with IaaS and FFMPEG. For my purposes here, though, it is significantly more cost-effective to have an IaaS VM running 24/7 on-premises handling the encoding, and use AMS for the transcription portion at which it excels. FFMPEG also gives me a lot more control over what I’m doing, and I’ve done a lot of tweaking to get a consistently valid output that the Twitter API will accept without losing video quality.
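For the curious, the shell-out to FFMPEG from C# is along these lines. The argument set here is illustrative of the sort of flags Twitter’s video rules push you towards (H.264 with yuv420p, AAC audio), not the app’s exact tweaked command, and the path variables are assumptions:

```csharp
// Sketch: burn the .vtt subtitles into the video via FFMPEG's subtitles
// filter, with Twitter-friendly codecs. inputPath, vttPath, and outputPath
// are illustrative.
var args = $"-i \"{inputPath}\" -vf subtitles=\"{vttPath}\" " +
           "-c:v libx264 -pix_fmt yuv420p -c:a aac -b:a 128k " +
           $"\"{outputPath}\"";
var psi = new ProcessStartInfo("ffmpeg", args)
{
    UseShellExecute = false,
    RedirectStandardError = true
};
using (var proc = Process.Start(psi))
{
    proc.WaitForExit();
}
```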

Right, time to start porting elements across into Functions, ensure the overall app still works end to end, and see what we’ve learned from there!