Sometimes when you embark on a new piece of research, serendipity strikes which just makes the job so much simpler than you’d imagined it to be.

In this case, there are already a series of GitHub examples for integrating Azure Media Services and Azure Blob Storage via Azure Functions. It’s heartening to know that my use case is a common enough one to have example code already up for pilfering.

Azure Media Services/Functions Integration Examples

If we recall the application ‘design’ referenced in previous blogs, the ‘WatchFolder’ console application performs a very specific function – watching a blob storage container, and when it sees a new file of a specific naming convention appear (guid.mp4), it kicks off the Transcription application. The Transcription application moves the file into Azure Media Services, performs subtitle transcription, copies out the subtitles file, runs an FFMPEG job locally to combine the video and the subtitles, and then finally tweets out the resultant subtitled video.

Through exploration of the GitHub examples linked above, specifically the ‘100-basic-encoding’ example, I can actually completely get rid of the WatchFolder application, and move everything in Transcription up to the FFMPEG job into a function.

This is by virtue of the fact that there are pre-defined templates from which functions can be built, and one of those is a C# Function which will run whenever a blob is added to a specified container. Hurrah! Literally just by choosing this Functions template, I have removed the need for a whole C# console app which ran within a VM – this is already valuable stuff.
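To give a flavour of what that template provides, the default C# blob trigger function looks something like the sketch below. The container path and parameter names are illustrative rather than the exact values from the example – the binding itself (which container to watch, which storage connection to use) lives in the accompanying function.json file.

// run.csx sketch – default-style C# blob trigger; container/binding names are illustrative.
// function.json binds 'myBlob' to something like "input/{name}.mp4" on a storage connection.
using System;
using System.IO;

public static void Run(Stream myBlob, string name, TraceWriter log)
{
    // Fires automatically every time a new blob appears in the monitored container
    log.Info($"C# blob trigger processed blob. Name: {name}, Size: {myBlob.Length} bytes");
}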

clip_image001

Ok! So to get cracking with building out on top of the example function that looks to fit my use case, as ever we just hit the ‘Deploy to Azure’ button in the Readme.md, and start to follow the instructions.

clip_image002

Actually, before we continue, the best thing to do is to fork this project into my own GitHub repo to protect against code-breaking changes to the example repo. Just use the Fork button at the top of the GitHub page, and choose where you want to fork it to. You’ll need to be signed into a GitHub account.

clip_image003

With the repo successfully forked, we can get on with deployment.

clip_image004

Enter some basic information – resource group, location, project you want to deploy etc. In this case, we’re taking the 100-basic-encoding function. Easy peasy!

clip_image005

Aaaaaand Internal Server Error. Well, if everything went smoothly, we’d never learn anything, so time to get the ol’ troubleshooting hat on.

clip_image006

The problem here is a common one if you use a lot of accounts for testing in Azure. When we look at the GitHub sourcecontrols provider at https://resources.azure.com, we can see that this particular test account has never deployed from GitHub before, and so the auth token is not appropriately set.

clip_image007

This is easily fixed in the Azure Portal. Open up your Functions App, select Function App Settings and then Configure Continuous Integration:

clip_image008

And then run through Setup to create a link to GitHub. This will kick off an OAuth process through to your GitHub account, so just follow the prompts.

clip_image009

After completing this and refreshing https://resources.azure.com/providers/Microsoft.Web/sourcecontrols/GitHub, the token now shows as set.

clip_image010

Excellent! Let’s redeploy 🙂

Hurrah! Success!

clip_image011

For no other reason than to show the consistency of approach between a traditional C# console application and a C# Azure Function, below I have pasted the bulk of the TranscribeVideo console app (down to just above the FFMPEG kick-off) directly alongside the out-of-the-box Function example code, with zero changes made yet. It’s also rather gratifying to see that my approach over a year ago, and that taken in this Function, have significant parallels 🙂

clip_image001[7]

Of course the example code is designed to re-encode an MP4 and output it into an output blob, whereas what we want is to run an Indexing job and then output the resultant VTT subtitles file into an output blob. This only takes a handful of tiny changes, made all the easier by referencing my existing code.
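For context, the shape of those tweaks is roughly as below, using the AMS .NET SDK that both the console app and the example Function are built on. Treat it as an illustrative sketch rather than the actual Function code – the processor lookup, the empty configuration string, and the naming are assumptions on my part.

// Sketch: run an Azure Media Indexer job against an input asset and pull out the VTT.
// 'context' is an existing CloudMediaContext; names and the empty config are illustrative.
using System;
using System.Linq;
using System.Threading;
using Microsoft.WindowsAzure.MediaServices.Client;

public static class IndexingSketch
{
    public static IAssetFile RunIndexingJob(CloudMediaContext context, IAsset inputAsset)
    {
        // Pick the latest version of the Indexer processor
        IMediaProcessor indexer = context.MediaProcessors
            .Where(p => p.Name == "Azure Media Indexer")
            .ToList()
            .OrderBy(p => new Version(p.Version))
            .Last();

        IJob job = context.Jobs.Create("Indexing job for " + inputAsset.Name);
        ITask task = job.Tasks.AddNew("Indexing task", indexer, string.Empty, TaskOptions.None);
        task.InputAssets.Add(inputAsset);
        task.OutputAssets.AddNew(inputAsset.Name + " - Indexed", AssetCreationOptions.None);

        job.Submit();
        job.GetExecutionProgressTask(CancellationToken.None).Wait();

        // The output asset contains several artefacts; we only care about the .vtt
        IAsset outputAsset = job.OutputMediaAssets[0];
        return outputAsset.AssetFiles.ToList()
            .FirstOrDefault(f => f.Name.EndsWith(".vtt", StringComparison.OrdinalIgnoreCase));
    }
}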

With all the required tweaks to the example code – and they are just tweaks, no major changes – I have decommissioned a full console application, and migrated almost 80% of a second console application into a Functions app. This has exceeded my expectations so far.

Just for the avoidance of doubt, it all works beautifully. Below is a screenshot of the output log of the Function – it started automatically when I added a video file to the input container.

image

Above you can see the Function finding the video file Index.mp4, submitting it to Azure Media Services, running the transcription job, then taking the .vtt subtitles file and dropping it into the output container.

Here it is in Azure Storage Explorer:

image

So with that complete, I now need to look at how I encode the subtitles into the video and then tweet it. When I first wrote this many moons ago, it was significantly easier (or maybe actually only possible) to do this in an IaaS VM using FFMPEG to encode the subtitles into the video file. It looks like this might be a simple built-in function in Azure Media Services now. If that’s the case and it’s cost-effective enough, then I may be able to completely decommission the need for any IaaS, and migrate the entire application-set through into Functions.

I also want to change the function above to take advantage of the beta version of the Azure Media Indexer 2, as it suggests it should be able to do the transcription process significantly faster. If you look at the log file above, you’ll see that it took around 3 minutes to transcribe a 20 second video. If this can be sped up, so much the better.

So a few next steps to do, stay tuned for part 4 I guess! 🙂

image

 

So having made the decision to rewrite a console app in Azure Functions in my previous blog, I should probably explain what Azure Functions actually is, and the rationale and benefit behind a rewrite/port. As ever there’s no point just doing something because it’s the new shiny – it has to bring genuine cost, time, process, or operational benefit.

Azure Functions is Microsoft’s ‘Serverless’ programming environment in Azure, much like AWS Lambda. I apostrophise ‘Serverless’, because of course it isn’t – there are still servers behind the scenes, you just don’t have to care about their size or scalability. It’s another PaaS (or depending on your perspective, an actual PaaS), this time for you to deliver your code directly into without worrying about what’s beneath.

 

image

 

You only pay for your code when it’s being executed, unlike when running in an IaaS VM where you’re being charged any time the VM is running. For code which only runs occasionally or intermittently at indeterminate times, this can result in pretty big savings.

Functions will automatically scale the behind-the-scenes infrastructure on which your code runs if your call rate increases, meaning you never have to worry about scale in/up/out/down of infrastructure – it just happens for you.

Functions supports a range of languages – PowerShell, Node, PHP, C#, F#, Python, Bash, and so on. You can write your code in the Functions browser and execute directly from there, or you can pre-compile using your preferred environment and upload into Functions. The choice, as they say, is yours.

 

image

 

Well no, don’t. When you’re looking at Functions for Serverless coding, it’s just as vital that you understand the appropriate use cases and where you can gain real operational and financial benefit as it is when you’re evaluating Azure and Azure Stack for running certain IaaS workloads.

There are a number of appropriate use cases documented at the Functions page in Azure; for our purposes there are two of immediate interest: Timer-Based Processing, and Azure Service Event Processing.

Timer-Based Processing will allow us to have a CRON-like job which ensures we keep both our blob storage containers and our Azure Media Services accounts fairly clean, so we’re not charged for storage for stale data.
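As a rough illustration of what that clean-up function might look like – the container name, retention period, and connection setting are assumptions here, not the real SubTwitr values:

// run.csx sketch: timer-triggered clean-up of stale blobs (names and values illustrative)
#r "Microsoft.WindowsAzure.Storage"

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    var account = CloudStorageAccount.Parse(
        Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
    var container = account.CreateCloudBlobClient().GetContainerReference("input");

    // Delete anything older than a week so we stop paying to store stale uploads
    foreach (var item in container.ListBlobs(useFlatBlobListing: true))
    {
        var blob = item as CloudBlockBlob;
        if (blob != null && blob.Properties.LastModified < DateTimeOffset.UtcNow.AddDays(-7))
        {
            log.Info($"Deleting stale blob {blob.Name}");
            blob.Delete();
        }
    }
}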

Azure Service Event Processing is the gem that will hopefully let us convert the WatchFolder app discussed in the previous blog post from a C# console app into running in Azure Functions. The goal of this function will be to do exactly what the C# application did, except instead of watching a blob storage container constantly and needing a whole VM to run, it will automatically trigger the appropriate code when a new file is added into a blob storage container by the UWP app.

 

image

 

Which leads us neatly on to design consideration #1. In the previous generation, the two console apps existed in the same VM, and could simply call each other directly to execute commands. Now that the WatchFolder app is moving to Azure Functions, I need to re-think how it invokes the Transcription application.

A fairly recent addition to Functions is the ability to just upload an existing console application into Functions and have it execute on a timer. This isn’t suitable for the whole WatchFolder app; however, the sections which are responsible for timed clean-up of blob and AMS storage can be pretty easily split out and uploaded in this way.

For the part of the app which monitors for file addition to blob storage and invokes FFMPEG via the Transcription app, the way I see it with my admittedly mediocre knowledge, there are three vaguely sensible options:

    • Use the Azure Service Bus to queue appropriate data for the Transcription app to monitor for, pick up, and act on.
    • Create an API app within Azure Stack which can be called by the Functions app and which invokes the Transcription app to run FFMPEG.
    • Write some custom code in the Transcription app to watch AMS for new subtitles files on a schedule, and kick off from there.

Honestly, I want to avoid writing custom code as much as possible and just use whatever native functionality I can, but Service Bus won’t be available in Azure Stack at GA, an API app is probably overkill here, and I can do the required job in a handful of lines of code within the Transcription app, so that’s the way I’ll probably go here. At least in the short term while I continue to figure out the art of the possible.
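To make that ‘handful of lines’ claim a little more concrete, the polling approach in the Transcription app could look roughly like this. The container handling, polling interval, and the processSubtitlesAsync hook into the rest of the app are all hypothetical placeholders, not the real SubTwitr code.

// Sketch: poll an output container for new .vtt files and hand each one to the
// existing FFMPEG/tweet pipeline. In real code each processed blob would then be
// renamed or deleted so it isn't picked up again on the next pass.
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SubtitleWatcher
{
    public static async Task WatchForSubtitlesAsync(
        CloudBlobContainer output, Func<CloudBlockBlob, Task> processSubtitlesAsync)
    {
        while (true)
        {
            foreach (var item in output.ListBlobs(useFlatBlobListing: true))
            {
                var blob = item as CloudBlockBlob;
                if (blob != null && blob.Name.EndsWith(".vtt", StringComparison.OrdinalIgnoreCase))
                {
                    await processSubtitlesAsync(blob);
                }
            }
            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }
}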

I should probably also note that Azure Media Services offers native encoding functionality itself, so in theory there’s no need for me to do all this faffing around with IaaS and FFMPEG. For my purposes here though it is significantly more cost-effective to have an IaaS VM running 24/7 on-premises handling the encoding aspects, and use AMS for the transcription portion at which it excels. FFMPEG also gives me a lot more control over the output, which I’ve tweaked extensively to get something the Twitter API will consistently accept without losing video quality.

Right, time to start porting elements across into Functions, ensure the overall app still works end to end, and see what we’ve learned from there!

clip_image001

 

I’ve just spent the last week in Bellevue at the Azure Certified for Hybrid Cloud Airlift, talking non-stop to a huge number of people about Cloud delivery practices, and beyond the incredible technology and massive opportunity that Azure Stack represents, my biggest takeaway from the week is that a lot of people still just don’t get it.

When Azure Stack launches, it will be the first truly hybrid Cloud platform to exist, delivering the same APIs and development experience on-premises and in a service provider environment as is available within the hyper-scale Cloud. It’s a unique and awesome product that loses all sense of differentiation as soon as people say ‘Great! So I can lift and shift my existing applications into VMs in Azure Stack! Then I’ll be doing Cloud!’

Well yes, you can, but you won’t be ‘doing Cloud’. If you have an existing enterprise application it was probably developed with traditional virtualisation in mind, and will probably still run most efficiently and most cost effectively in a virtualisation environment. Virtualisation isn’t going away any time soon, which is why we continue to put so much time and effort into the roadmaps of our existing platforms – most of the time these days it’s still the best place to put most existing enterprise workloads. Even if you drop it into Azure or Azure Stack, the application probably has no way of taking advantage of cloud-native features, so stick with the tried and proven world of virtualisation here.

If however you are developing or deploying net new applications, or are already taking advantage of cloud-native features, or can modernise your DB back end, or can take advantage of turn on/turn off, scale in/scale out type features, and want to bring those to a whole new region or market, then Azure and Azure Stack can open up a plethora of opportunity that hasn’t existed before.

So that’s all well and good to say, but what does modernising an existing application look like in practice? If we want to take advantage of buzzwords like infrastructure as code, serverless programming, containerisation and the like, where do we even begin?

Well it just so happens that I have an application I abandoned a while ago, predominantly due to the annoyance of managing updates and dependencies, and of scaling the application out and in automatically as workloads wax and wane. If I write something and chuck it up on an app store, I really want it to maintain and manage itself as much as possible without taking over my life.

SubTwitr is an app I wrote about a year ago to address a pain point I had with Twitter, where I found I would never watch any videos in my feed as I just couldn’t be bothered turning up the volume to listen. I had the idea that I could leverage Azure Media Services to automatically transcribe and subtitle any video content I posted to Twitter, to ensure that at least people viewing my content wouldn’t have that pain. I considered commercialising it, but eventually archived it into GitHub and moved on, as I didn’t really have the time to spend on the inevitable support.

Let’s be clear as well, I’m not a pro dev by trade, I dabble in code in order to solve problems for myself, and have done for around 30 years now. I don’t necessarily follow good design patterns, but I do try to at least create code I can maintain over time, with good source control and comment structure.

This is the first app I’ve attempted to modernise using certain Cloud-native features, so is very much a learning experience for me – if I’m doing something stupid, please don’t hesitate to tell me!

Anyway! SubTwitr is comprised of two back end C# console applications which run in a Windows Server 2016 IaaS VM at brightsolid while leveraging Azure Blob storage and Media Services remotely, with a Windows 10 UWP front end application which will run on any Windows 10 device.

SubTwitr UWP App

clip_image002

There is currently no responsive layout built into the XAML, so it’d get rejected from the Windows Store anyway as it stands 🙂 We’re not here to build a pretty app though, we’re here to modernise back-end functionality!

The app is basic: it lets you choose a video, enter a Twitter message, and then post it to Twitter. At this point it authenticates you to the SubTwitr back end via OAuth, and uploads the video into an Azure Blob store along with some metadata – everything is GUIDised.
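Under the hood, the upload is along these lines – a hedged sketch rather than the actual app code; the metadata key and container handling are illustrative:

// Sketch: upload the chosen video as <guid>.mp4 with the tweet text attached as metadata.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class VideoUploader
{
    public static async Task<string> UploadVideoAsync(
        CloudBlobContainer container, Stream video, string tweetText)
    {
        string blobName = Guid.NewGuid().ToString() + ".mp4";    // 'everything is GUIDised'
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        await blob.UploadFromStreamAsync(video);

        blob.Metadata["tweetText"] = tweetText;                   // illustrative metadata key
        await blob.SetMetadataAsync();

        return blobName;                                          // WatchFolder picks this up
    }
}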

SubTwitr Console Apps

clip_image003

SubTwitr’s back end consists of two console apps – WatchFolder, and TranscribeVideo.

WatchFolder just sits and watches for a new video to be uploaded into an Azure Blob Store from the UWP app. When it sees a new video appear, it performs some slight renaming operations to prevent other SubTwitr processes trying to grab it when running at larger scale, and then kicks off the second console app.
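The renaming trick is essentially a ‘claim’: copy the blob to a new name, wait for the copy to complete, then delete the original so no other watcher can see it. Something like the sketch below, with names purely illustrative and not taken from the real code.

// Sketch: claim a freshly uploaded video by renaming it (server-side copy + delete).
using System.Threading;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobClaimer
{
    public static CloudBlockBlob ClaimVideo(CloudBlobContainer container, string guid)
    {
        CloudBlockBlob source = container.GetBlockBlobReference(guid + ".mp4");
        CloudBlockBlob claimed = container.GetBlockBlobReference(guid + ".processing.mp4");

        claimed.StartCopy(source);                      // server-side copy, runs asynchronously
        while (claimed.CopyState.Status == CopyStatus.Pending)
        {
            Thread.Sleep(500);
            claimed.FetchAttributes();                  // refresh CopyState
        }

        source.Delete();                                // original gone, nobody else grabs it
        return claimed;
    }
}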

TranscribeVideo does a little bit more than this…

  • It takes the video passed to it from WatchFolder, and sends it off to Azure Media Services for transcription.
  • AMS transcribes all of the audio in the video into text in a standard subtitle format, and then stores it in its media processing queue for collection.
  • TranscribeVideo watches for the subtitles appearing, and then downloads them and clears out the AMS queue so we don’t end up with a load of videos taking up space there.
  • TranscribeVideo kicks off an FFMPEG job to add the subtitles to the video, in a format and at a size that Twitter will accept – a rough sketch of this step follows after the list.
    • There are a few limitations with the Twitter API around size and length which need to be taken into account.
  • Twitter OAuth credentials are fetched from Azure KeyStore, and the Tweet is sent.
  • Once the Tweet has been successfully posted, Azure Mobile Services sends a push notification back to the UWP app to say that it’s done.
  • Video is cleaned up from the processing server and TranscribeVideo ends.
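As promised above, here’s a rough sketch of that FFMPEG step. The flags, bitrate, and path handling are assumptions for illustration, not the heavily tweaked arguments the real app uses.

// Sketch: burn the downloaded subtitles into the video with FFMPEG in a Twitter-friendly
// format. The second call runs with the .srt's folder as the working directory, which
// sidesteps the ':' escaping rules inside the subtitles filter string on Windows.
using System.Diagnostics;
using System.IO;

public static class SubtitleBurner
{
    public static void BurnSubtitles(string videoPath, string vttPath, string outputPath)
    {
        // Convert the WebVTT to SRT first, then burn it in with H.264/AAC at a bitrate
        // low enough to stay inside Twitter's upload limits.
        string srtPath = Path.ChangeExtension(vttPath, ".srt");
        RunFfmpeg($"-y -i \"{vttPath}\" \"{srtPath}\"");
        RunFfmpeg($"-y -i \"{videoPath}\" -vf subtitles={Path.GetFileName(srtPath)} " +
                  $"-c:v libx264 -b:v 2M -c:a aac -movflags +faststart \"{outputPath}\"",
                  Path.GetDirectoryName(srtPath));
    }

    private static void RunFfmpeg(string arguments, string workingDirectory = null)
    {
        var psi = new ProcessStartInfo("ffmpeg", arguments)
        {
            UseShellExecute = false,
            RedirectStandardError = true                 // FFMPEG writes its progress to stderr
        };
        if (workingDirectory != null) psi.WorkingDirectory = workingDirectory;

        using (var process = Process.Start(psi))
        {
            process.StandardError.ReadToEnd();
            process.WaitForExit();
        }
    }
}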

Note that WatchFolder can initiate as many instances of TranscribeVideo as it wants. Scalability limitations arise in a few areas though; I’ve listed some below along with how I can address them using native Azure functionality.

  • VM Size
    • If a load of FFMPEG jobs are kicked off, the VM can become overloaded and slow to a crawl.
    • VM Scale Sets can be used to automatically deploy a new VM Instance if CPU is becoming contended. The code is designed to allow multiple instances to target the same Blob storage. It doesn’t care if they’re on one VM or multiple VMs.
  • Azure Media Services Indexer
    • AMS allows one task to run at a time by default; concurrency is governed by Media Reserved Units, and you can pay for more concurrent tasks if desired.
    • A new version of this which performs faster has been released since I initially wrote SubTwitr, and is currently in beta. Sounds like a good thing to test!
  • Bandwidth
    • With a lot of videos flying back and forth, ideally we want to limit charges incurred here.
    • The most cost-effective route I have available is video into Azure Blob (free), Blob to AMS (free), AMS to brightsolid over ExpressRoute (‘free’), brightsolid to Twitter (‘free’).
  • Resource and Dependency Contention
    • I haven’t done any at-scale testing of running loads of TranscribeVideo and WatchFolder processes concurrently, however as they share dependencies and resources at the VM level, there exists the chance for them to conflict and impact each other.
    • Moving WatchFolder into Azure Functions, and containerising TranscribeVideo should significantly help with this.

Next Steps

So there we are, I have a task list to work through in order to modernise this application!

  • Rewrite the WatchFolder console app as an Azure Functions app which will run on Azure today, and on Azure Stack prior to GA.
  • Deploy the VM hosting TranscribeVideo as a VM Scale Set and set the rules for expansion/collapse appropriately.
  • Rewrite the Azure Media Services portions of TranscribeVideo to use the new AMS Indexer 2 Preview.
  • Containerise the TranscribeVideo application
  • Wrap the whole thing in an ARM template for simplified future deployment.

 

Right, time to get on with deploying my first Functions app – let’s see what the process is like, and what lessons we can learn.

 

There are often times in the technical previews of Azure Stack where you will need to collect logs to send back to the product teams. Fortunately, in TP3 this previously tedious process has been consolidated into a single command, as per Charles Joy in the Azure Stack forums:

  • Command: Get-AzureStackLogs

    Instructions:

    1. From the Azure Stack POC HOST…
    2. Run the following to import the required PowerShell module:

      cd C:\CloudDeployment\AzureStackDiagnostics\Microsoft.AzureStack.Diagnostics.DataCollection
      Import-Module .\Microsoft.AzureStack.Diagnostics.DataCollection.psd1

    3. Run the Get-AzureStackLogs command, with optional parameters (examples below):

      # Example 1 : collect all logs for all roles
      Get-AzureStackLogs -OutputPath C:\AzureStackLogs

      # Example 2 : collect logs from BareMetal Role (this is the Role where DEPLOYMENT LOGS are collected)
      Get-AzureStackLogs -OutputPath C:\AzureStackLogs -FilterByRole BareMetal

      # Example 3 : collect logs from VirtualMachines and BareMetal Roles, with date filtering for log files for the past 8 hours
      Get-AzureStackLogs -OutputPath C:\AzureStackLogs -FilterByRole VirtualMachines,BareMetal -FromDate (Get-Date).AddHours(-8) -ToDate (Get-Date)

      # If FromDate and ToDate parameters are not specified, logs will be collected for the past 4 hours by default.

    Other Notes about the Command:

    • Note that the command is expected to take some time to collect logs, depending on which roles logs are collected for, the time duration specified, and the number of nodes in the MASD environment.
    • After log collection completes, check the new folder created under the OutputPath specified in the command input (C:\AzureStackLogs in the examples above).
    • A file named Get-AzureStackLogs_Output will be created in the folder containing the zip files, and will include the command output, which can be used to troubleshoot any failures in log collection.
    • Each role will have the logs inside a zip file.

One of the wonderful new additions to Azure Stack in Technical Preview 3 is Marketplace Syndication.

The Azure Marketplace offers VM Images with pre-installed software/config, VM Extensions, SaaS Applications, Machine Learning services, and Data Services.

With Marketplace Syndication in TP3, we are now able to directly pull a subset of VM Images from Azure into Azure Stack for consumption by tenants. For anyone who built and deployed Gallery items in Azure Pack, this is just glorious.

The Public Azure Marketplace offers five pricing models:

 

  • BYOL model: Bring your own licence. You obtain the right to access or use the offering outside of the Azure Marketplace, and are not charged Azure Marketplace fees for use of the offering in the Azure Marketplace.
  • Free: Free SKU. Customers are not charged Azure Marketplace fees for use of the offering.
  • Free Software Trial (try it now): Full-featured version of the offer that is promotionally free for a limited period of time. You will not be charged Azure Marketplace fees for use of the offering during a trial period. Upon expiration of the trial period, customers will automatically be charged based on the standard rates for use of the offering.
  • Usage-based: You are charged or billed based on the extent of your use of the offering. For Virtual Machines Images, you are charged an hourly Azure Marketplace fee. For Data Services, Developer services and APIs, you are charged per unit of measurement as defined by the offering.
  • Monthly Fee: You are charged or billed a fixed monthly fee for a subscription to the offering (from date of subscription start for that particular plan). The monthly fee is not prorated for mid-month cancellations or unused services.
    Offer-specific pricing details can be found on the solution details page on /en-gb/marketplace/ or within the Microsoft Azure classic portal.

As of right now in TP3, BYOL is the only model available, and only for a small subset of offerings. That doesn’t matter though; we’re just proving the concept for now, so enabling Marketplace Management was the very first thing I did once I’d fired up my TP3 portal.

 

Registering the Resource Provider

When you click through to the Marketplace Management resource provider, it presents you with a link to follow in order to register and activate the resource provider. It needs to be registered against an existing Public Azure subscription in order to pull marketplace items down from hyperscale to on-prem.

[Screenshot: Marketplace Management blade – “You need to register and activate before you can start syndicating Azure Marketplace content. Follow instructions here to register and activate.”]

The documentation to do this is available at the following link:

https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-register

A PowerShell script is required in order to register the resource provider, available from GitHub:

https://github.com/Azure/AzureStack-Tools/blob/master/Registration/RegisterWithAzure.ps1

And of course you need the AzureRM PowerShell module installed, via Install-Module AzureRM

When registering the RP you are prompted for an Azure subscription, and an Azure username and password. This can be a completely separate subscription and username to the one used for Azure Stack deployment. It cannot, however, be a CSP subscription.

Run the script to completion…

[Screenshot: RegisterWithAzure.ps1 running in PowerShell ISE. The script works through four steps – configure local identity, create a registration request, register with Azure, and activate Azure Stack – prompting part-way through for the Azure subscription credentials to be re-entered, and finishes with “Registration complete. You may now access Marketplace Management in the Admin UI.”]

… and all should be well! You can now refresh the Marketplace Management resource provider, to be presented with a new message and an ‘Add from Azure’ button. Yay!

[Screenshot: Marketplace Management blade – “You have no items downloaded to your Azure Stack marketplace yet. Click ‘Add from Azure’ to add items.”]

The available list is currently quite small, but pretty much everything is useful, so kudos on the choices there Microsoft!

Simply select what you want to bring into your Azure Stack, and click download. One thing I noticed is that the transfers were pretty slow, even on our ridiculously fast connections. Pulling down a handful of gallery images had to be left running overnight.

[Screenshot: the ‘Add from Azure’ blade listing the available items – Remote Desktop Services (RDS) Basic Farm, SQL Server 2014 SP1 Express on Windows Server, and SQL Server 2016 RTM Developer on Windows Server from Microsoft, plus GitLab, LAMP, Magento, Moodle, Nginx, ownCloud, Redmine, Ruby, WordPress, and Drupal virtual machine images from Bitnami, ranging from 8.5GB to 30GB in size.]

Downloading… waiting… downloading…

[Screenshot: Marketplace Management showing Ruby, WordPress, and Drupal with a status of “Downloading…”.]

Five wonderful marketplace items added and ready for tenant consumption! Amazing.
[Screenshot: Marketplace Management showing LAMP, Ruby, WordPress, Remote Desktop Services (RDS) Basic Farm, and SQL Server 2014 SP1 Express on Windows Server, all with a status of “Succeeded”.]

Tenants can now select these Marketplace items, and deploy them immediately. This is such a leap forward from Azure Pack, and I feel such joy in using this feature. How important this is cannot be overstated.

[Screenshot: the tenant ‘New’ blade in the Azure Stack portal, showing the Marketplace categories with WordPress listed under featured apps.]

… and here we are! A WordPress VM deployed using an image from Public Azure, all controlled and managed from within the Azure Stack web UI – no PowerShell, not building VM images, all just so simple. Phenomenal.
[Screenshot: overview blade for the deployed ‘WP1’ WordPress VM – a Standard A1 Linux VM, running in resource group wp-dev-rg in the ‘Dundee’ location under the Default Provider Subscription.]

This is a bit of a non-blog, as the TP3 deployment experience is utterly joyous. It deployed first try for me, taking around four hours from start to completion, with no errors logged. I happened to screenshot the process, so here it is in all its glory 🙂

In order to deploy the PoC, I followed the documentation at https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-deploy

Download and run the PoC Downloader. It’s highly recommended that you tick the ‘Download the Windows Server 2016 EVAL (ISO)’ box so you can get something added to the marketplace once it’s deployed.

[Screenshot: the Azure Stack POC Downloader, with the Technical Preview release build (version 20170225.2) selected, the optional Windows Server 2016 EVAL ISO download ticked, and 15.07 GB of space required.]

The PoC Downloader will download the PoC.

[Screenshot: the downloader in progress – 9.27 GB transferred, 5.8 GB remaining.]

You will need at least 85GB of free storage to extract the downloaded files; extraction starts when you click the ‘Run’ button once the download has completed.

[Screenshot: download complete – close the window to exit, or click Run to launch the Azure Stack POC self-extractor.]

The extractor extracts a VHDX file from which we will boot and run a whole Azure Stack environment. One file to worry about – so simple, even I can’t mess it up.

[Screenshot: Setup extracting the Microsoft Azure Stack POC CloudBuilder.vhdx.]

[Screenshot: the Microsoft Azure Stack POC Setup Wizard confirming extraction has finished.]

Once extracted, copy the CloudBuilder.vhdx file to the root of your host’s C: drive.

The documented PowerShell below will download preparatory files you need, so blindly copy it into PowerShell ISE and run it 🙂

 

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create folder
New-Item $LocalPath -type directory

# Download files
( 'BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | foreach { Invoke-WebRequest ($uri + $_) -OutFile ($LocalPath + '\' + $_) }

The next step is the same as TP2, run the PrepareBootFromVHD PowerShell script to set the BCDBoot entry to allow the host to reboot into the CloudBuilder VHDX. Apply an Unattend file if you don’t have console access to the host. Or don’t, I’m not your boss.

[Screenshot: running .\PrepareBootFromVHD.ps1 -CloudBuilderDiskPath C:\CloudBuilder.vhdx -ApplyUnattend in an elevated PowerShell window – the script prompts for a local administrator password, creates a new boot entry for CloudBuilder.vhdx via bcdboot, renames the entry to ‘Azure Stack’, and offers to restart the host.]

Once you’ve rebooted into the CloudBuilder VHDX and logged in using the password you provided when applying the Unattend file, run through the same steps as you would have in TP2.

If not using DHCP, set a static IP on the host.

If you’re anywhere other than UTC-8, set a time server.

Rename the host.

Reboot.

Disable all NICs other than the NIC that provides internet connectivity.

Actually I haven’t validated the last step – it was necessary in TP1 and TP2, but I’m pretty certain I saw the deployment script checking for the correct NIC to use while it was installing. Let’s check…

[Screenshot: DeploySingleNode.ps1 lines 326–331 – the script gathers the Get-NetIPConfiguration results where NetAdapter.Status is ‘Up’ and throws MoreThanOneNicEnabled if more than one NIC is enabled.]

Yep, DeploySingleNode.ps1, lines 326 to 331 – only one NIC is allowed to be enabled still, so let’s disable all the other NICs.

[Screenshot: the Network Connections control panel with every NIC disabled except the one providing connectivity.]

Ok! So in this environment I’ve not got DHCP available so we need to set a Static IP, for this lab I’m using 10.20.39.124. Here are the steps to kick off deployment from an elevated PowerShell window. NOTE: Do not use PowerShell ISE for this – if you do, it may lead to fuckery.

cd C:\CloudDeployment\Setup

$adminpass = ConvertTo-SecureString "Local Admin Password" -AsPlainText -Force

$aadpass = ConvertTo-SecureString "Azure AD Account Password" -AsPlainText -Force

$aadcred = New-Object System.Management.Automation.PSCredential ("AAD account email address", $aadpass)

.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -InfraAzureDirectoryTenantAdminCredential $aadcred -NatIPv4Subnet 10.20.39.0/24 -NatIPv4Address 10.20.39.124 -NatIPv4DefaultGateway 10.20.39.1 -TimeServer sometimeserver

This is a slight change from TP2, with -AADCredential being renamed to -InfraAzureDirectoryTenantAdminCredential, which just rolls off the tongue :/

Deployment kicks off, and you pretty much wait for four hours. This is also a slight change from TP1 and TP2, with the ‘Cross Fingers and Pray to the Old Gods and the New’ step now being notably absent as everything just works.

[Screenshot: InstallAzureStackPOC.ps1 output – the ‘Deployment’ action plan working through its phases (deploying and configuring the physical machine, BGP and NAT, the storage cluster, JEA, the ECE service and so on), with plenty of ‘unapproved verbs’ module import warnings along the way, finishing with “Action plan ‘Deployment’ completed.”]

Change the default password expiry to 180 days as per the documentation:

 

Set-ADDefaultDomainPasswordPolicy -MaxPasswordAge 180.00:00:00 -Identity azurestack.local
And that's it! Azure Stack TP3 deployed and ready to rock and roll!

[Screenshot: the Azure Stack admin portal dashboard – Region Management shows the ‘local’ region on version 1.0.170225.2 with a state of UpToDate, alongside the resource providers (Compute, Storage, Network, Key Vault, SQL, Updates) and the tenant-facing quick-start tiles.]

 

 

Well Azure Stack TP3 has landed, and with it a whole host of excitement and capability! Jeffrey Snover has laid out the improvements and roadmap for TP3 through to GA in this blog, so I thought I’d note some thoughts about a few of the listed capabilities while my CloudBuilder VHDX extracts 🙂

What’s new in Azure Stack TP3

  • Deploy with ADFS for disconnected scenarios

I wonder if the only use case here is in disconnected scenarios as noted, or rather also perhaps for scenarios where the customer is comfortable with their Azure Stack having internet access, but not with their identity being synchronised to Azure Active Directory (AAD).

  • Start using Azure Virtual Machine Scale Sets for scale out workloads

These are self-explanatory and I look forward to trying this functionality – the ability to scale a set of identical IaaS VMs in or out with ease, just like Azure.
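As a rough sketch of what scaling a set out might look like from PowerShell once connected to an Azure Stack subscription (the resource group and scale set names here are made up, and I haven't validated this against TP3 yet – it's simply the standard AzureRM pattern):

# Hypothetical names – the pattern is the same one used against public Azure
$vmss = Get-AzureRmVmss -ResourceGroupName "myRg" -VMScaleSetName "myScaleSet"
$vmss.Sku.Capacity = 5    # scale out to five identical instances
Update-AzureRmVmss -ResourceGroupName "myRg" -VMScaleSetName "myScaleSet" -VirtualMachineScaleSet $vmss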

  • Syndicate content from the Azure Marketplace to make available in Azure Stack

This is hugely exciting to me, even if it is just limited to Bring Your Own License (BYOL) scenarios at this time. One of the major pains of Azure Pack was having to custom build all of the gallery items you wanted to deploy to your tenants. Now with marketplace syndication we can pull relevant gallery items directly from Azure into our Azure stack environments and make them available to our tenants. For me, this is the biggest new feature in TP3 😀

  • Use Azure D-Series VM sizes

But probably not the higher spec ones for those using 96GB or 128GB RAM hosts… 🙂 I should be able to deploy up to Standard_D14 in size, all things being equal.
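That should be easy enough to verify once the environment is up – listing the sizes the region actually offers is a one-liner (the region name 'local' is the TP3 default, and this assumes an AzureRM session already pointed at the Stack):

# List the VM sizes available in the local Azure Stack region
Get-AzureRmVMSize -Location local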

  • Deploy and create templates with Temp Disks that are consistent with Azure

This is interesting, as in Azure a temp disk is non-persistent and lives on the same host as the VM rather than in Azure Storage – if the host reboots, the contents of the temp disk are lost. In Azure Stack the underlying storage is hyperconverged Storage Spaces Direct rather than Azure Storage. In Azure, temp disks typically offer better IOPS and latency than data disks, but presumably that isn't the case in Azure Stack, since it's all the same storage on the same hosts. Will temp disk storage be wiped if a host reboots? Are the use cases the same as in Azure? Or is this purely a consistency item? Stay tuned and find out!

  • Take comfort in the enhanced security of an isolated administrator portal

I feel the comfort like a warm fuzzy blanket enveloping me, letting us split out the tenant and admin portals into separate security zones in the same way as Azure Pack 🙂

  • Take advantage of improvements to IaaS and PaaS functionality

In the short term I'll be spending the next few days testing and documenting changes and improvements in the IaaS portion of TP3, and I look forward to the PaaS services landing so I can go through them with rigour as well!

  • Use enhanced infrastructure management functionality, such as improved alerting

Alerting and monitoring are very important to me – I'd like to be able to gather data into any or all of SCOM, OMS, Power BI, or Grafana. I'm also very excited to see whether Usage and Rate Card data are now available in TP3, as that functionality was broken (at least for me!) in TP2.

Well the CloudBuilder.vhdx is just about finished extracting, so time to document the deployment process – fingers crossed!

There’s some speculation in here, but first up, some concrete notes learned from deploying SDN 2016:

  • MTU needs to be set to 1702 in the physical network for the tenant traffic, to support max ethernet frames of 1542 plus the required EncapOverhead of 160.
  • Make sure your NIC drivers are up to date so that they can use EncapOverhead (where supported) to auto-configure MTU within the server infrastructure – there's a quick way to check this in the sketch after this list.
  • If enabled, TCP sequence number randomisation needs to be disabled on your physical firewall, or return paths won't go back through the ASA and the sequence numbers can't be restored to their original values.
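On the EncapOverhead and MTU front, something along these lines does the job on each host – a sketch only, as adapter names and the keywords exposed by your drivers will differ:

# Show whether the physical NIC drivers expose the *EncapOverhead keyword (no output means they don't)
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*EncapOverhead" | Format-Table Name, DisplayName, DisplayValue

# Confirm the MTU each adapter is actually running with
Get-NetAdapter | Format-Table Name, InterfaceDescription, MtuSize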

Ok, so when Azure Stack ships it will be as a turnkey appliance – a black box of wonder that arrives at your datacentre, gets racked, powered on and immediately delivers the glory of Azure to you locally. Right?

Not exactly.

As with any appliance, there are backend integrations which will need to be done – identity, billing and chargeback, integration into existing network infrastructure, that sort of thing. It's the latter that we've been working on recently, and for which we have some useful lessons learned to share.

While multi-node Azure Stack infrastructures aren’t available for most to work on this integration piece yet and the 1-node Azure Stack implementation hides behind a BGPNAT VM, we do still have options for making sure we’re as well prepared as possible.

Specifically, the Software Defined Networking implementation in Azure Stack is the Azure-inspired SDN which is also in Windows Server 2016, meaning that if we can deploy the end to end SDN stack in a Hyper-V 2016 cluster, in theory much of the required physical network config from above the TOR level should be identical in Azure Stack.

Regardless, Hyper-V 2016 has a long and illustrious future ahead of it as a resilient, cost-effective, and ultra-secure IaaS platform which can be managed through the Azure Stack portal in its own right, so having consistent SDN implemented in Hyper-V 2016 isn't a nice-to-have – for me it's an absolute necessity.

This blog doesn't seek to document the step-by-step process for deploying SDN, but rather to showcase a few of the lessons learned along the way which can help inform the integration of Azure Stack into an existing physical network.

Useful Resources

Software Defined Networking Overview

There is a plethora of documentation available for SDN, and while it can at first glance be a little overwhelming, it's important to read and understand the entire documentation set before proceeding with deployment.

Set up a Software Defined Network Infrastructure in the VMM Fabric

This is the documentation set we have followed in order to deploy SDN successfully on a Hyper-V 2016 cluster managed by VMM 2016.

Set up SDN on One Single Physical Host using VMM

While we're deploying SDN in a clustered environment, this guide is still well worth reading through, as its step-by-step screenshots are extremely useful to reference during deployment.

Troubleshoot SDN

This is one of the most important pages to reference in an SDN deployment. Pretty much every single cmdlet and test documented therein is valuable in understanding the status of your SDN deployment and figuring out where any errors lie.
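As a flavour of what that looks like in practice, once the Network Controller is up a couple of quick queries against its REST endpoint give a good first read on whether the fabric resources all provisioned – a sketch only, and the ConnectionUri below is a placeholder for your own NC REST name:

# Substitute your own Network Controller REST FQDN
$uri = "https://nc.contoso.local"

# Logical networks (HNV PA, Transit, Management, etc.) – ProvisioningState should show Succeeded
Get-NetworkControllerLogicalNetwork -ConnectionUri $uri | Select-Object ResourceId, @{n='State';e={$_.Properties.ProvisioningState}}

# Same idea for the physical servers registered with the NC
Get-NetworkControllerServer -ConnectionUri $uri | Select-Object ResourceId, @{n='State';e={$_.Properties.ProvisioningState}}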

SDN GitHub

There are a number of useful resources in the SDN GitHub repo; in particular, the SwitchConfigExamples and the Diagnostics scripts are invaluable.

Physical Network Setup

Lesson the first…

SDN NC Planning

The SDN documentation includes a good amount of information about how to configure everything up to the TOR level and how traffic flows within that boundary; how you integrate this into your existing physical network, though, will vary significantly depending on what hardware you have and how it's set up.

[Image: Azure Stack network topology]

Public Azure Stack documentation to date uses the above image to show how separate cluster fault domains will connect through their TORs and AGGs, but naturally doesn’t go into any detail above that level.

Typically above this level we would find a set of hardware firewalls, and from there a series of core through to edge network devices. We questioned the firewall placement in this scenario early on though, the thought process being that tenant traffic would benefit from bypassing the physical firewall and making use of the SDN distributed firewall instead. This removes typical firewall bottlenecks, and enables the full automation power of the SDN infrastructure from the Hyper-V switch to the edge.

Management traffic still needs to be secured in the traditional way, however, so our implementation splits management and tenant traffic out via VRFs.

Per host, Management and SDN traffic run through a pair of 10Gbps Mellanox cards, while SMB/RDMA storage traffic is split out onto separate Chelsio NICs. Expanding the earlier topology image, it then starts to look more like this – yikes! It's not a pretty picture, but it's accurate.

Public VIPs route via core network over the Tenant VRF, while Private VIPs route via AGG/Firewalls, and all works joyously. BGP is in place from RRAS/SLB to ToR, then OSPF for Tenant traffic to the core and out to the edge.
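One check worth keeping handy here is confirming that the BGP peerings are actually established – from the RRAS gateway side this is straightforward (a sketch, assuming the RemoteAccess BGP cmdlets are available on the gateway VM):

# ConnectivityStatus should show Connected for each ToR peer
Get-BgpPeer | Format-Table PeerName, LocalIPAddress, PeerIPAddress, PeerASN, ConnectivityStatus

# And the routes being learned from those peers
Get-BgpRouteInformation | Format-Table Network, NextHop, LearnedFromPeer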

Is this how it’ll be in Azure Stack? I don’t know! One thing’s for sure, learning lessons on how to integrate SDN 2016 into your physical network now can only benefit your Azure Stack deployment in the future.

 

If you’ve ever sat and watched an Azure Stack deployment end to end you’ll have seen a few typos here and there. I had to look this one up though, and it is indeed a word! Today I learned 🙂

http://www.dictionary.com/browse/configurate

Brief Overview

The App Service Resource Provider in Azure Stack offers the same deployment and management experience for Web, Mobile, and API apps as is available in Public Azure, while also extending the provider with capabilities which are unique to Azure Stack.

In addition to the expected Azure consistent capabilities, the App Service in Azure Stack allows customisation of shared and dedicated web worker VMs which host tenant applications, as well as the associated pricing SKUs, in order to most efficiently meet the needs of on-premises and hosted customers, both computationally and financially.

[Screenshot: Worker Tiers blade – a Shared worker tier plus Dedicated worker tiers named Small, Medium, Large, and Supreme, running Windows Server 2012 R2 with memory sizes ranging from 1024MB up to 8192MB.]

Azure Stack administrators can deploy multiple shared worker instances, and define and deploy multiple different tiers of worker which do not exist in Public Azure. Different worker instances can have different core and memory counts, different operating systems, and custom software available.

The ability to deploy custom software is unique to Azure Stack and is not available within Public Azure, where you are limited to pre-defined options. Custom software can be made available for deployment via MSI, Zip, Exe, or DLL, and is packaged for consumption within custom worker tiers.

[Screenshot: Create New Custom Software blade – fields for Product Id, Title, Version, Installer Type (Msi, Zip, Exe, or Dll), Target Directory, Install Executable, and Install Arguments.]

In true PaaS fashion, these capabilities and the underlying compute resources are abstracted away from tenants and presented as easy to digest SKUs, which are again fully customisable to an Azure Stack admin’s customer requirements.

[Screenshot: SKUs blade – Free, Shared, Standard, and High SKUs mapped to Shared or Dedicated compute modes and worker tiers, and the SKU Features blade exposing settings such as Web Sockets, Custom Domains, IP-based SSL mode, 64-bit worker processes, concurrent request limits, and site CPU/memory/idle-timeout limits.]

Notes from Testing in Azure Stack TP2 Refresh 1

Hardware in operation:

  • Dell R630 13G Server
  • 2x 10-core E5-2650 processors
  • 384GB RAM
  • 2x 800GB SSD, 6x 1.2TB 15k HDD

Deployment of Web Workers

  • Ten Large Web Workers were deployed concurrently to test the deployment process and time to complete. While the deployment ultimately succeeded, the technical preview limitations of the (SSD-backed) Azure Stack storage under more than a single concurrent operation became apparent, with the deployment taking over 62 hours to complete.
  • After completing, the status of the Web Worker instances can occasionally change from Ready to Installing – connecting to the VMs' consoles shows that this is due to installation of Windows Updates. Because storage access is painfully slow with this many VMs running, in roles with only a single instance this had the effect of regularly taking that instance offline, as the tier becomes unavailable when there are insufficient Ready instances.

[Screenshots: the Large Web Worker Instances blade showing ten instances on platform version 57.0.10696.7 in a mix of Ready and Installing states; the single Shared Web Worker instance is Installing, with its console showing 'Updating your system (96%)'.]

  • Following one Windows update, the Shared instance Web Worker VM came back up reporting no network connections, even though the VM itself is definitely configured with a NIC.

[Screenshots: inside the guest, Network Connections is empty and Windows reports 'Not connected – no connections are available'; the same VM's Hyper-V settings show a Network Adapter attached to the SdnSwitch virtual switch.]

  • Running a Repair All on the Instance from within the Azure Stack portal did not resolve this.
  • Leaving the VM for >24 hours did not resolve this.
  • Manually rebooting the VM from the Hyper-V host did resolve this (a quick sketch of that follows this list).
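A minimal sketch of that reboot, with the GUID-style VM name below being illustrative – take the real name from Hyper-V Manager:

# VM name is illustrative – the worker VMs are named with GUIDs on the host
Get-VM -Name "e9038d1b-bec9-426d-b9cf-e851bd655138" | Restart-VM -Force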

 

  • One of the ten Web Workers (WW3) took significantly longer than the others to deploy – around 25 minutes extra – which is probably not an issue, but I'm noting it for rigour. The first nine VMs deployed in 45 minutes, with WW3 finishing after 70 minutes.
[Screenshot: Operation details for the deployment – the WW1-VM through WW10-VM OnStart operations listed with Created/OK statuses and 2017-01-20 timestamps.]
  • Specs for a Large Web Worker are 4 cores and 4GB RAM; however, all of the Large Web Workers deployed at Shared instance specs of 1 core and 1.75GB RAM. The same thing happened when deploying a custom Web Worker tier of 8 cores and 8GB RAM – it seems all Web Workers in TP2 Refresh 1 deploy at Shared tier spec for now.
[Screenshot: Hyper-V Manager on the host – the Web Worker VMs are all running with Shared-tier memory assignments of roughly 1792MB.]
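The mismatch is easy to confirm from the Hyper-V host with PowerShell too – a quick sketch, assuming the Hyper-V module and that you can pick the worker VMs out by their GUID names:

# List running VMs with their vCPU count and assigned memory – the workers all report Shared-tier sizing
Get-VM | Where-Object State -eq 'Running' | Select-Object Name, ProcessorCount, @{Name='MemoryAssignedMB';Expression={$_.MemoryAssigned / 1MB}} | Sort-Object Name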

  • Deleting a Web Worker Role from the Azure Stack Portal does not currently remove the underlying Virtual Machine from the Infrastructure.

Role Repair

  • Repair of all roles works as expected, other than on the controller instance, which fails with the following error:

    • Code: 400, Message: Role object is not present in the request body.
  • I am currently unaware of a fix for this, or indeed even if one is needed or available.

[Screenshot: Controller Instances blade – 'Repair 1 instance(s) failed. Reason: Client error occurred. Code: 400, Message: Role object is not present in the request body.']

Deployment of a WebApp

Most WebApp deployments succeed as expected; however, there have been instances where deployment failed.

  • During the below deployment of a new WebApp to a dedicated instance, an error was thrown – Conflict error: Not enough available reserved instance servers to satisfy this request. Currently 0 instances are available.

[Screenshot: failed deployment details – the Microsoft.Web/serverfarms operation returns status Conflict: 'Not enough available reserved instance servers to satisfy this request. Currently 0 instances are available. If you are changing instance size you can reserve up to 0 instances at this moment. If you are increasing instance count then you can add extra 0 instances at this moment.']

  • Two thoughts as to why this could happen were:

    • All Web Worker Instances were undergoing updates at the same time
    • All Web Worker Instances had sites running on them, and as they are dedicated there are none free

Only one instance was updating at the time, and deleting it to bring all instances to a Ready state had no impact – the deployment still failed.

[Screenshot: Large Web Worker Instances blade – five instances Ready and one Installing, all on platform version 57.0.10696.7.]

The Admin portal reports that none of the Instances have any running sites on them, which should make them available for placement…

[Screenshot: detail blade for the 10.0.2.17 Large Web Worker instance – Role: Web Worker, Compute Mode: Dedicated, Running Sites: 0, Allocated Dedicated Worker: No.]

… however, there are in fact currently two webapps deployed in the Dedicated Large tier, and another in the Shared tier, and not a single instance is reporting any WebApps deployed into them.

Conclusion: The Admin Portal does not accurately report the number of running WebApps in an instance yet, and all of the dedicated instances were indeed in use at this time.

Backup of WebApp

When configuring the storage for backup of a WebApp, I accidentally left the storage endpoint as core.windows.net rather than specifying the local Azure Stack endpoint. This naturally caused the backup to fail. After correcting the issue, backups continued to fail for around ten minutes because the service believed an existing backup was still in progress.

Once the infrastructure timed out the initial backup, subsequent backups to Azure Stack storage were able to proceed as expected, and completed successfully.

[Screenshot: Backup Configuration Summary – schedule and storage configured, with the last backup succeeding at 1:14 PM on 1/23/2017 (0.12 MB).]
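For reference, if you're scripting against the Stack's storage (or just want to sanity-check the endpoint you're handing to the backup blade), building a storage context with the local endpoint suffix looks roughly like this – the account name is made up, and the azurestack.local suffix is assumed from this environment's domain, so check your own deployment's endpoints:

# Assumptions: account name is illustrative, $key holds its storage account key, endpoint suffix is the local Stack domain
$ctx = New-AzureStorageContext -StorageAccountName "webappbackups" -StorageAccountKey $key -Endpoint "azurestack.local"

# Generate a full SAS URL for the backup container
New-AzureStorageContainerSASToken -Name "backups" -Permission rwdl -Context $ctx -FullUri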

Scale Up of an App Service Plan

Initial testing of Scale Up of an App Service Plan failed with a non-specific error.

[Screenshot: portal notification – 'Failed to update App Service plan klappspl: {"Message":"An error has occurred."}'.]

Deploying a new WebApp and testing scale-up of it worked as expected, so the cause of the initial failure is currently unknown.

Scale Out of an App Service

The initial test of Scale Out of an App Service failed with the error 'Failed to save scale settings'.

[Screenshot: portal notification – 'Saving scale settings – Failed to save scale settings.']

As with Scale Up, creating a new WebApp resolved this too, and Scale Out reported as completing successfully.

All other tests/integrations currently tried worked as expected. For completeness, these are:

  • Deploy the App Service RP
  • Add the App Service RP to Azure Stack
  • Create custom SKUs
  • Configure Source Control Providers (GitHub only)
  • Custom DNS Integration
  • Create an App Service Plan
  • Create an Empty Website (using custom SKU utilising custom Web Worker tier)
  • Deploy an existing Web App into App Service

    • For this I used a Bot for Dynamics CRM which I had previously created using the Microsoft Bot Framework. As this Azure Stack instance doesn't have an externally available URL/IP, I wasn't able to test it connecting through to Dynamics CRM; however, the deployment succeeded as expected.
    • Note: Microsoft should have called the Bot Framework the Bot Net Framework.