Modernising Applications with Azure and Azure Stack Pt. 1

Sunday, 2 April 2017



I’ve just spent the last week in Bellevue at the Azure Certified for Hybrid Cloud Airlift, talking non-stop to a huge number of people about Cloud delivery practices. Beyond the incredible technology and the massive opportunity that Azure Stack represents, my biggest takeaway from the week is that a lot of people still just don’t get it.

When Azure Stack launches, it will be the first truly hybrid Cloud platform to exist, delivering the same APIs and development experience on-premises and in a service provider environment as is available within the hyper-scale Cloud. It’s a unique and awesome product that loses all sense of differentiation as soon as people say ‘Great! So I can lift and shift my existing applications into VMs in Azure Stack! Then I’ll be doing Cloud!’

Well yes, you can, but you won’t be ‘doing Cloud’. If you have an existing enterprise application it was probably developed with traditional virtualisation in mind, and will probably still run most efficiently and most cost effectively in a virtualisation environment. Virtualisation isn’t going away any time soon, which is why we continue to put so much time and effort into the roadmaps of our existing platforms – most of the time these days it’s still the best place to put most existing enterprise workloads. Even if you drop it into Azure or Azure Stack, the application probably has no way of taking advantage of cloud-native features, so stick with the tried and proven world of virtualisation here.

If however you are developing or deploying net new applications, or are already taking advantage of cloud-native features, or can modernise your DB back end, or can take advantage of turn on/turn off, scale in/scale out type features, and want to bring those to a whole new region or market, then Azure and Azure Stack can open up a plethora of opportunity that hasn’t existed before.

So that’s all well and good to say, but what does modernising an existing application look like in practice? If we want to take advantage of buzzwords like infrastructure as code, serverless programming, containerisation and the like, where do we even begin?

Well it just so happens that I have an application I abandoned a while ago, predominantly due to the annoyance of managing updates and dependencies, and of automatically scaling the application out and in as workloads wax and wane. If I write something and chuck it up on an app store, I really want it to maintain and manage itself as much as possible without taking over my life.

SubTwitr is an app I wrote about a year ago to address a pain point I had with Twitter, where I found I would never watch any videos in my feed as I just couldn’t be bothered turning up the volume to listen. I had the idea that I could leverage Azure Media Services to automatically transcribe and subtitle any video content I posted to Twitter, to ensure that at least people viewing my content wouldn’t have that pain. I considered commercialising it, but eventually archived it into GitHub and moved on, as I didn’t really have the time to spend on the inevitable support burden.

Let’s be clear as well, I’m not a pro dev by trade, I dabble in code in order to solve problems for myself, and have done for around 30 years now. I don’t necessarily follow good design patterns, but I do try to at least create code I can maintain over time, with good source control and comment structure.

This is the first app I’ve attempted to modernise using certain Cloud-native features, so it is very much a learning experience for me – if I’m doing something stupid, please don’t hesitate to tell me!

Anyway! SubTwitr consists of two back end C# console applications which run in a Windows Server 2016 IaaS VM at brightsolid while leveraging Azure Blob storage and Media Services remotely, with a Windows 10 UWP front end application which will run on any Windows 10 device.

SubTwitr UWP App


There is currently no responsive layout built into the XAML, so it’d get rejected from the Windows Store anyway as it stands 🙂 We’re not here to build a pretty app though, we’re here to modernise back-end functionality!

The app is basic, it lets you choose a video, enter a Twitter message, and then post it to Twitter. At this point it authenticates you to the SubTwitr back end via OAuth, and uploads the video into an Azure Blob store along with some metadata – everything is GUIDised.
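I don’t have the upload code in front of me here, but conceptually the ‘GUIDised’ scheme looks something like this minimal Python sketch – the video gets a fresh GUID as its blob name, and the Tweet text and user identity travel alongside it as blob metadata, so nothing downstream ever has to parse a filename. The function and field names are my own illustrations, not SubTwitr’s actual code:

```python
import uuid

def build_upload_name(tweet_text: str, user_id: str) -> dict:
    """Illustrative sketch of the 'GUIDised' upload: a fresh GUID names
    the blob, and everything the back end needs rides as blob metadata."""
    video_id = str(uuid.uuid4())
    return {
        "blob_name": f"{video_id}.mp4",   # the video itself
        "metadata": {                     # stored as Azure Blob metadata
            "video_id": video_id,
            "user_id": user_id,
            "tweet_text": tweet_text,
        },
    }
```

The nice property of GUID naming is that two users uploading `holiday.mp4` at the same moment can never collide, and the back end processes can treat blob names as opaque work-item IDs.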

SubTwitr Console Apps


SubTwitr’s back end consists of two console apps – WatchFolder, and TranscribeVideo.

WatchFolder just sits and watches for a new video to be uploaded into an Azure Blob Store from the UWP app. When it sees a new video appear, it performs some slight renaming operations to prevent other SubTwitr processes from trying to grab it when running at larger scale, and then kicks off the second console app.
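The rename trick is really a claim-by-rename pattern: the first worker to rename the blob wins it, so multiple WatchFolder instances polling the same container never double-process a video. Here’s a rough Python sketch of the idea, simulated against an in-memory set standing in for the Blob container (the naming convention is my own illustration):

```python
def try_claim(blobs: set, name: str, worker_id: str):
    """First worker to rename the blob owns it; later workers see it's
    gone and move on. Simulated here with a set standing in for the
    Blob container."""
    if name not in blobs:
        return None                          # someone else claimed it first
    claimed = f"processing-{worker_id}-{name}"
    blobs.remove(name)                       # in real Blob storage this is a
    blobs.add(claimed)                       # copy + delete, not an atomic rename
    return claimed
```

One caveat worth noting: Azure Blob storage has no true atomic rename (it’s a copy followed by a delete), so at serious scale a blob lease is the safer way to take exclusive ownership – the rename approach is good enough for SubTwitr’s modest concurrency.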

TranscribeVideo does a little bit more than this…

  • It takes the video passed to it from WatchFolder, and sends it off to Azure Media Services for transcription.
  • AMS transcribes all of the audio in the video into text in a standard subtitle format, and then stores it in its media processing queue for collection.
  • TranscribeVideo watches for the subtitles appearing, and then downloads them and clears out the AMS queue so we don’t end up with a load of videos taking up space there.
  • TranscribeVideo kicks off an FFMPEG job to add the subtitles to the video in a format Twitter accepts, and at a size within Twitter’s limits.
    • There are a few limitations with the Twitter API around size and length which need to be taken into account.
  • Twitter OAuth credentials are fetched from Azure KeyStore, and the Tweet is sent.
  • Once the Tweet has been successfully posted, Azure Mobile Services sends a push notification back to the UWP app to say that it’s done.
  • Video is cleaned up from the processing server and TranscribeVideo ends.
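The FFMPEG step above is the one with the most Twitter-specific fiddliness. As a hedged illustration (the exact flags SubTwitr uses may differ), here’s a Python helper that builds the sort of command line involved – burning the subtitles into the frames and transcoding to the H.264/AAC MP4 Twitter expects, while enforcing the 140-second length cap the Twitter API imposed at the time of writing:

```python
def ffmpeg_subtitle_args(video_in: str, subs: str, video_out: str,
                         duration_s: float) -> list:
    """Build an illustrative FFMPEG command to burn subtitles into a
    video in a Twitter-friendly format. Flags are a sketch, not
    SubTwitr's actual invocation."""
    if duration_s > 140:
        raise ValueError("Twitter rejects videos longer than 140 seconds")
    return [
        "ffmpeg", "-i", video_in,
        "-vf", f"subtitles={subs}",     # burn the subtitle file into the frames
        "-c:v", "libx264",              # H.264 video, as Twitter expects
        "-c:a", "aac",                  # AAC audio
        "-movflags", "+faststart",      # moov atom up front for streamed playback
        video_out,
    ]
```

Burning the subtitles in (rather than muxing them as a separate track) is deliberate – Twitter’s player ignores embedded subtitle tracks, so the text has to be part of the picture.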

Note that WatchFolder can initiate as many instances of TranscribeVideo as it wants. Scalability limitations do come in in a few areas, though; I’ve listed some below, along with how I can address them using native Azure functionality.

  • VM Size
    • If a load of FFMPEG jobs are kicked off, the VM can become overloaded and slow to a crawl.
    • VM Scale Sets can be used to automatically deploy a new VM Instance if CPU is becoming contended. The code is designed to allow multiple instances to target the same Blob storage. It doesn’t care if they’re on one VM or multiple VMs.
  • Azure Media Services Indexer
    • By default, AMS allows one task to run at a time; concurrency is governed by Media Reserved Units, and you can pay for more of them if desired.
    • A new, faster version of the Indexer has been released since I initially wrote SubTwitr, and is currently in preview. Sounds like a good thing to test!
  • Bandwidth
    • With a lot of videos flying back and forth, ideally we want to limit charges incurred here.
    • The most cost-effective route I have available is video into Azure Blob (free), Blob to AMS (free), AMS to brightsolid over ExpressRoute (‘free’), brightsolid to Twitter (‘free’).
  • Resource and Dependency Contention
    • I haven’t done any at-scale testing of running loads of TranscribeVideo and WatchFolder processes concurrently, however as they share dependencies and resources at the VM level, there exists the chance for them to conflict and impact each other.
    • Moving WatchFolder into Azure Functions, and containerising TranscribeVideo should significantly help with this.
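To give a flavour of the VM Scale Set route mentioned above, the CPU-contention trigger would be expressed as an Azure autoscale rule along these lines. The scale set name is a placeholder and the thresholds would need tuning against real FFMPEG load, but the shape of the fragment is roughly this:

```json
{
  "metricTrigger": {
    "metricName": "Percentage CPU",
    "metricResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'subtwitr-vmss')]",
    "timeGrain": "PT1M",
    "statistic": "Average",
    "timeWindow": "PT5M",
    "timeAggregation": "Average",
    "operator": "GreaterThan",
    "threshold": 75
  },
  "scaleAction": {
    "direction": "Increase",
    "type": "ChangeCount",
    "value": "1",
    "cooldown": "PT5M"
  }
}
```

A matching scale-in rule with a lower threshold and `"direction": "Decrease"` handles the collapse side, so idle FFMPEG capacity isn’t left running up the bill.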

Next Steps

So there we are, I have a task list to work through in order to modernise this application!

  • Rewrite the WatchFolder console app as an Azure Functions app which will run on Azure today, and on Azure Stack prior to GA.
  • Deploy the VM hosting TranscribeVideo as a VM Scale Set and set the rules for expansion/collapse appropriately.
  • Rewrite the Azure Media Services portions of TranscribeVideo to use the new AMS Indexer 2 Preview.
  • Containerise the TranscribeVideo application.
  • Wrap the whole thing in an ARM template for simplified future deployment.


Right, time to get on with deploying my first Functions app – let’s see what the process is like, and what lessons we can learn.

