So having made the decision to rewrite a console app in Azure Functions in my previous blog, I should probably explain what Azure Functions actually is, and the rationale and benefit behind a rewrite/port. As ever, there's no point doing something just because it's the new shiny – it has to bring genuine cost, time, process, or operational benefit.
Azure Functions is Microsoft’s ‘Serverless’ programming environment in Azure, much like AWS Lambda. I apostrophise ‘Serverless’, because of course it isn’t – there are still servers behind the scenes, you just don’t have to care about their size or scalability. It’s another PaaS (or depending on your perspective, an actual PaaS), this time for you to deliver your code directly into without worrying about what’s beneath.
You only pay for your code when it’s being executed, unlike when running in an IaaS VM where you’re being charged any time the VM is running. For code which only runs occasionally or intermittently at indeterminate times, this can result in pretty big savings.
Functions will automatically scale the behind-the-scenes infrastructure on which your code runs if your call rate increases, meaning you never have to worry about scale in/up/out/down of infrastructure – it just happens for you.
Functions supports a range of languages – PowerShell, Node, PHP, C#, F#, Python, Bash, and so on. You can write your code in the Functions browser editor and execute it directly from there, or you can pre-compile it in your preferred environment and upload it into Functions. The choice, as they say, is yours.
Well no, don’t. When you’re looking at Functions for Serverless coding, it’s just as vital that you understand the appropriate use cases and where you can gain real operational and financial benefit as it is when you’re evaluating Azure and Azure Stack for running certain IaaS workloads.
There are a number of appropriate use cases documented on the Functions page in Azure; for our purposes, two are of immediate interest: Timer-Based Processing and Azure Service Event Processing.
Timer-Based Processing will allow us to have a CRON-like job which ensures we keep both our blob storage containers and our Azure Media Services accounts fairly clean, so we're not paying to store stale data.
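To make that concrete, here's a minimal sketch of what a timer-triggered clean-up function might look like in the C# script (.csx) style the Functions portal uses. The schedule, binding name, and clean-up logic are all placeholders of my own, not the actual app:

```json
// function.json – fires the function at 03:00 UTC daily (assumed schedule)
{
  "bindings": [
    {
      "name": "cleanupTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 3 * * *"
    }
  ]
}
```

```csharp
// run.csx – hypothetical clean-up sketch, not the real WatchFolder code
using System;

public static void Run(TimerInfo cleanupTimer, TraceWriter log)
{
    log.Info($"Clean-up triggered at {DateTime.UtcNow:u}");
    // Here you'd enumerate blobs and AMS assets older than your
    // retention window and delete them, so stale data stops costing money.
}
```

The CRON-style expression in `function.json` is all the scheduling you need – no Task Scheduler, no VM sitting idle between runs.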
Azure Service Event Processing is the gem that will hopefully let us convert the WatchFolder app discussed in the previous blog post from a C# console app into Azure Functions. The goal of this function will be to do exactly what the C# application did, except instead of constantly watching a blob storage container and needing a whole VM to run, it will automatically trigger the appropriate code when a new file is added to a blob storage container by the UWP app.
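In other words, the polling loop goes away entirely and the platform does the watching. A hedged sketch of what that blob-triggered function could look like, again in .csx style with container and binding names that are my own assumptions:

```json
// function.json – fires whenever a blob lands in the (assumed) "uploads" container
{
  "bindings": [
    {
      "name": "newVideo",
      "type": "blobTrigger",
      "direction": "in",
      "path": "uploads/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

```csharp
// run.csx – hypothetical replacement for the WatchFolder polling loop
using System.IO;

public static void Run(Stream newVideo, string name, TraceWriter log)
{
    log.Info($"New upload detected: {name} ({newVideo.Length} bytes)");
    // Here you'd hand the file off to the transcoding/transcription
    // pipeline, rather than polling the container from a VM.
}
```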
Which leads us neatly on to design consideration #1. In the previous generation, the two console apps lived in the same VM and could simply call each other directly to execute commands. Now that the WatchFolder app is moving to Azure Functions, I need to re-think how it invokes the Transcription application.
A fairly recent addition to Functions is the ability to simply upload an existing console application into Functions and have it execute on a timer. This isn't suitable for the whole WatchFolder app; however, the sections responsible for timed clean-up of blob and AMS storage can be split out pretty easily and uploaded in this way.
For the part of the app which monitors for file addition to blob storage and invokes FFMPEG via the Transcription app, the way I see it with my admittedly mediocre knowledge, there are three vaguely sensible options:

- Put an Azure Service Bus queue between the Function and the Transcription app
- Wrap the Transcription functionality in an API app and call it from the Function
- Add a few lines of custom code to the Transcription app itself to pick up work from the Function
Honestly, I want to avoid writing as much custom code as possible and just use whatever native functionality I can, but Service Bus won't be available in Azure Stack at GA, an API app is probably overkill here, and I can do the required job in a handful of lines of code within the Transcription app, so that's the way I'll probably go – at least in the short term, while I continue to figure out the art of the possible.
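For illustration, those "handful of lines" could be a simple polling loop against an Azure Storage queue that the Function drops job messages into. This is purely a sketch of that approach – the queue name, connection string, and `TranscribeAndPost` helper are all hypothetical, and it uses the classic `Microsoft.WindowsAzure.Storage` SDK of the era:

```csharp
// Hypothetical polling loop inside the Transcription app on the IaaS VM.
using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

static void PollForJobs(string connectionString)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var queue = account.CreateCloudQueueClient()
                       .GetQueueReference("transcription-jobs"); // assumed name

    while (true)
    {
        var message = queue.GetMessage();    // returns null when queue is empty
        if (message == null)
        {
            Thread.Sleep(TimeSpan.FromSeconds(30));
            continue;
        }
        TranscribeAndPost(message.AsString); // existing Transcription logic
        queue.DeleteMessage(message);        // delete only once handled
    }
}
```

The appeal of this shape is that the Function and the on-premises VM never need a direct network path to each other – the storage queue acts as the hand-off point.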
I should probably also note that Azure Media Services can do the encoding natively itself, so in theory there's no need for all this faffing around with IaaS and FFMPEG. For my purposes here, though, it is significantly more cost-effective to have an IaaS VM running 24/7 on-premises handling the encoding, and to use AMS for the transcription portion at which it excels. FFMPEG also gives me a lot more control over the output, and I've done a lot of tweaking to get consistently valid output that the Twitter API will accept without losing video quality.
Right, time to start porting elements across into Functions, ensure the overall app still works end to end, and see what we’ve learned from there!