I find that there’s still a lot of confusion in the community around where to find the most canonical and up-to-date documentation for Microsoft products. Fortunately for everyone other than Xamarin developers, the answer is straightforward now – just go to Microsoft Docs at docs.microsoft.com, not to be confused with docs.com, which took a bullet to the temple just a few days ago.

image

The Docs platform is constantly evolving and growing, nothing like the staid, rarely updated documentation repositories of old – so how does it stay so up to date for platforms like Office 365 and Azure, which can themselves update on an almost daily basis?

The answer, as with so many things these days, is community. All of the documentation in the Docs platform is hosted on GitHub, and anyone can submit changes to it. Whenever you see a typo, or malformed code, or a cmdlet that’s changed, or something that’s just plain wrong, you can edit it then and there yourself, and in so doing, contribute to the collective ongoing success of the IT community.

When confronted with GitHub though, many people I’ve shown this to have thrown up their hands and proclaimed ‘I can’t do that! That’s a developer tool! I’m an IT Pro!’ Well they’re wrong, and to show how simple it is to submit a change, we’re going to walk through an example right now.

Here we are, just ambling along happily reading the Azure Stack documentation to better understand how storage balancing works, and what our role as a Cloud Operator should be in proactively managing it, when shock! Horror! A mistake!

clip_image002

In the past, and on other lesser document management platforms, we would have scoffed and carried on, but the good digital citizen of the 21st century document management community won’t let this pass! He or she will take thirty seconds out of his or her day to ensure that no one else’s sensibilities are impinged upon by this literary travesty. Our good digital citizen cares about the application or service the documentation is about, and by extension cares about their comrades-in-arms in the industry.

Onward now our plucky digital hero forges, and first clicks the ‘Edit’ button at the top right of the page.

clip_image003

Immediately a dark spell is cast! And we are transported in a whirl of sparks and smoke off-site to the Hub of the Gits, land of Sha1 and friends. It’s here in the Hub of Gits that we can actually lay our proposals for change at the feet of the mighty documentation owners, which is in no way meant to imply that the latter are in any way gits.

clip_image004

Before we venture forth, we must first have a name that the Sha1s and the Owners can know us by, for the nameless have no form and hence no power to effect change. The character creation screen is at the top right of the window, or if you already have an established character, simply sign in to sally forth your champion, and then ask why the hell you’re reading this guide if you already know how to use Git.

Your first unguided test here, adventurer, is to complete the sign up process, because to be honest if you can’t go through that unaided, then even setting this guide to “I’m Too Young to Die” would be too hard for you, and a Grue would very quickly gobble you up.

Ok fine, one hint.

clip_image005

There are various ways that we could now propose our change to the Owners; however, we’re speed-running this and want to make sure we’re away from our actual task for as little time as possible. That being the case, we coax the crafty Edit button out of hiding, and give him a good hard click.

clip_image006

You’d think that the obvious thing now would be to edit the page then and there, and submit it in place, but oh no! It is not to be! The Gods of Git are not so merciful, and thus your first piece of knowledge drops. In editing this document, a copy of the entire documentation repository has been created within your inventory screen! Much like the twists and turns in a path that any adventurer will encounter in their travels, this copy is called a Fork.

A strange message appears before you, and while some of the words now make sense, some are still completely alien to you.

clip_image007

The edit itself is painless and can be done in browser, and so with a deft sword stroke and a delete, the deed is done.

clip_image008

Our edit made, we simply scroll to the bottom of the page, enter a meaningful epitaph for our fallen foe, and click ‘Propose File Change’.

clip_image009

It’s at this point that we reach the ‘Are you sure you wish to continue, there is no turning back from this point’ screen, and this is where, in my experience, the majority of people back out. Stalwart adventurers though we are, we will forge on regardless and click that ‘Create pull request’ button!

clip_image010

So what is a Pull Request?
Quite simply, the changes you made to the file are in your local Fork of it, and you are requesting that the document owner Pulls your changes into their repository. I’ve had so many conversations that have started ‘Why isn’t it a push request! I’m pushing changes!’ No you’re not, you’re prostrating yourself before the Gods of Git and begging them to pull your changes into their One True Version, which will be displayed to all the world on docs.microsoft.com.

Now comes the real ‘This is your last chance to back out’ screen, but we’ve levelled up enough that we know what we’re doing, and fearlessly hit that final button to Create our Pull Request.

clip_image011

And that’s it! Your request will now go before the documentation owners, who will review and then commit or reject your changes.

clip_image012

Congratulations! Your quest is complete!

This is the path of the champion of documentation in the 21st century – a slight detour to a side-quest to help all those who come after them. As a bonus, every accepted change will see your character immortalised on that document as a contributor for all to see.

clip_image013

This is how documentation is maintained at the pace of Cloud, and it’s dependent on you to maintain that pace, so go forth and document! Unless you’re a Xamarin developer, then… I guess just wait until Xamarin documentation migrates to docs :/

I know the blog title is pretty clickbaity, and for that I sincerely apologise. This is a subject that’s pretty near and dear to my heart though, so I thought I’d dedicate a post to what options we have for protecting the contents of VMs from the fabric on which they run in Hyper-V, Azure, and Azure Stack. I’m particularly passionate about this subject as I’ve worked at hosting providers for the past decade or so, and so protecting and ensuring availability of customer workloads is pretty much my raison d’être.

Before we touch on practical steps we can take today, we need some positional work to set context.

Here is a representation of some of the key places we can deploy workloads, the hardware they live on, their management experience, and the built-in options we have for securing workloads against malicious admins or compromised credentials. You’ll note that there is a hole in there today.

[Diagram: key workload deployment platforms, the hardware they live on, their management experience, and their built-in workload protection options – note the gap for Azure Stack]

Shielded VMs in Hyper-V 2016 are just awesome – a fantastic way to rigorously protect your VM estate both in single and multi-tenanted environments, and I’ve previously blogged and spoken quite extensively publicly about the benefits you can achieve through this feature, as well as with just about every customer I’ve met with in the last year and a half.

Secure Enclaves in Azure through Azure Confidential Computing are similarly awesome, and achieve many of the same benefits for VMs running in Azure; architecturally, however, they are completely different from Shielded VMs.

Each of these features depends on trust rooted in hardware – this is critical to each model being able to achieve its goals; trust cannot rest with an individual, it must be rooted in the physical hardware.

So then if we look back to our diagram above, we can immediately see the problem we are faced with! Azure Stack has a stated design goal, which I completely agree with, to be consistent with Azure. If a feature is not available in Azure, then it won’t be brought into Stack. This is a line in the sand that I stand 100% behind, as consistency should absolutely be the priority here.

That being the case though, unless Shielded VMs as a feature come to Azure, they won’t be brought forward from Hyper-V 2016 to Azure Stack, and until the Azure Stack hardware can support Azure Confidential Computing’s Secure Enclaves, that feature cannot be brought back from Azure to Stack.

It leaves us in this really weird position where both Hyper-V and Azure are able to provide a higher level of assurance than Azure Stack can for VM workloads, despite Azure Stack inheriting many of the benefits of the Hyper-V 2016 Guarded Fabric that I cover off in this blog. In fact, pretty much everything other than VM Shielding is in there right now.

It’s important to note that the level of assurance provided by VM Shielding and Azure Confidential Computing outstrips anything else in the hypervisor or cloud space, so Azure Stack isn’t in any way behind other parts of the industry, only its peers in Microsoft-land.

Some may question the need to protect VMs from compromised fabric credentials, when Azure Stack is delivered as an appliance which is locked down and makes extensive use of Just in Time and Just Enough Administration to remove admin exposure to the physical hardware and hypervisor. I would pose the same question of Azure though, where Secure Enclaves are provided – assurance isn’t all about protecting against the likely, or the maybe possible, but sometimes the only slightly potentially possible. Assurance is about showing that you will go to the nth degree to protect your customer workloads in any eventuality, as trust once broken can be extremely hard to regain.

So how then can we work around this in the interim, and provide that absolute rigour and assurance offered in both Azure and Hyper-V 2016 to Azure Stack? The answer, it turns out, is quite simple.

Firstly, the option exists to make use of Dell EMC CloudLink SecureVM, an Azure Stack validated solution which enables many of the protections offered by Hyper-V 2016’s VM Shielding feature at the VM level. When combined with the Guarded Fabric infrastructure features which Azure Stack does inherit from Hyper-V, this goes a long way towards plugging that gap within Azure Stack.

That said, I’ve always been quite vehemently outspoken about the fact that not all VMs should be Shielded in this way by default – this is a technology which exists to protect sensitive workloads, not every workload, as there is both a resource and a management overhead associated with it. Domain controllers, finance data, HR data, IPR – these are the sorts of workloads which should be appropriately protected; anything which holds the keys to your proverbial kingdom.

This being the case, such workloads don’t have to reside within the Azure Stack infrastructure itself; they only have to be managed by Azure Stack, and for that we have the WAP Connector for Azure Stack. Shielded workloads can indeed run alongside and be managed by Azure Stack… soon. This is by no means a perfect solution, but while it’s in no way a deal breaker for most, and while Dell EMC gallantly step in for now, I sincerely hope that the gap in the image above is filled natively some day.

This does open up a question for a whole other blog though… What workloads should you run within your Azure Stack environment? One thing has become abundantly clear to me over the last couple of years working deeply in these technologies: Hyper-V 2016 and Azure Stack are deeply symbiotic, and the best architected solutions around Stack will make appropriate use of both to give the best mix of cloud consistency, resiliency, security, performance, and cost possible.

Training is great, free training is even better…

With the final month-ish countdown to Azure Stack multi-node systems being delivered on-premises underway, training materials and courses have begun to pop up online to help get people up to speed. One of the first is from Opsgility, offering a ten-module Level 200 course on implementing Azure Stack solutions.

The course is available here: https://www.opsgility.com/courses/player/implementing-azure-stack 

Opsgility is of course a paid-for service; however, if you sign up for a free Visual Studio Dev Essentials account, three months of Opsgility access is included for free, as well as the ever-useful $25 of free Azure credits every month for a year.

Enjoy! 🙂

 

Well Microsoft Inspire has kicked off in fine form, both with the announcement of the GA of the Azure Stack Development Kit (formerly One Node PoC), and with the announcement that Azure Stack multi-node systems from Dell EMC, HPE, and Lenovo are available for pre-order now, shipping in September.

 

Useful Links:

 

Azure Stack Overview

The Azure Stack Development Kit

The Azure Stack Development Kit Release Notes

Updated App Service Bits for Azure Stack Dev Kit

How to Buy Azure Stack

Azure Stack Management Pack for SCOM is RTM

Julia White discussed Azure Stack

TheRegister article on Azure Stack

NYT on Azure Stack

Why Azure Stack is a game changer for hybrid IT

Business Insider article on Azure Stack launch

 

This marks the start of a ~2 month countdown to launch – the final furlong in a multi-year journey to true hybrid cloud, and for those of us who have been working deeply with the product for that whole time, the excitement in the community is palpable.

 

This isn’t the end of the journey, however; this is day one of a whole new wave of datacentre innovation, as the reality and the power of the hybrid cloud plus the intelligent edge really start to be understood. There is still so much both to learn and to teach: how we in the service provider industry most effectively deliver value against the inevitability of data gravity, and how we most effectively build into the future without disregarding our heritage.

 

Cloud doesn’t replace virtualisation, certainly not in the next few years. I’ve run many, many Azure Stack customer workshops, and the single most common assumption about Azure Stack is that it is a VMware and Hyper-V replacement. The good news for those who have built careers in virtualisation over the last ten years or more is that, in most cases, applications designed for virtualisation still run best there today. Over time, the inevitable new waves of cloud-native and cloud-first applications will of course displace those traditional one-, two-, and three-tier applications, but the important takeaway right now is that Digital Transformation is not all or nothing – it can occur over time.

 

It’s that breathing room in many cases that Azure Stack provides – the ability to iteratively modernise parts of applications over time, while maintaining intra-DC bandwidth and latencies, and without having to disregard or immediately abandon existing hardware and hypervisors.

 

So this has been a hugely exciting journey to date, and if one thing has been made clear at Inspire today, it’s that partners are the lynchpin that will drive forward products like Azure Stack in the future. I’m incredibly excited to keep sharing as we go forward, but even more importantly, I’m excited to keep on learning!

 

Right, time to get a VPN set up on this laptop so I can deploy the Azure Stack Development Kit… let’s get building the future!

While running through the (very worthwhile) Azure Functions Challenge, I encountered an error that was new to me, and a quick method of working around/fixing it.

After deploying Challenge 4, I received the following error when trying to open the Function:

“We are not able to retrieve the keys for function … This can happen if the runtime is not able to load your function. Check other function errors.”

It seems this is because I used the same app name multiple times. Encrypted values tied to the app name are stored in your app’s storage, and while new keys are generated within the app context when you re-create it, the old encrypted values are simply carried over in the underlying folder structure.

Deleting the existing files will cause them to be regenerated with values from the new encryption keys, so go ahead and open up Kudu by navigating to:

https://yourappname.scm.azurewebsites.net/DebugConsole

Change directory to D:\home\data\Functions\secrets, and delete everything in that folder. In my instance that was a single file, host.json.
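If you’d rather do the cleanup from the Kudu PowerShell console than click around, here’s a minimal sketch (same path as above – this removes all stored function secrets, which are regenerated on the next load):

# Run from the Kudu PowerShell console
# Clears the stored secrets so the runtime regenerates them with the current keys
Remove-Item -Path D:\home\data\Functions\secrets\* -Force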

Refresh the portal and look in the folder again, and you should find newly regenerated files therein. Your Function should load and work properly as well. Hurrah!

 

 

One of the first steps many people take in their journey to Azure or Azure Stack is the migration of an existing workload, rather than building net new. Typically most people would recommend choosing a non-production-critical web-based application running within a VM or across multiple VMs.

There are three usual ways people move this sort of workload:

  • Lift and shift of IaaS to IaaS
  • Re-platform of IaaS to PaaS
  • Partial re-platform of IaaS to PaaS and IaaS

With the workload running within the cloud environment, we are far better positioned to modernise it gradually and when appropriate using cloud-native features.

There are other options available for workload migration, however we find they’re rarely used in the real world just now due either to lack of awareness, or perceived increased complexity. One of those methods which falls squarely into both camps for most people today is containerising an existing workload, and moving it into IaaS.

Containerising in this way can have many benefits – the two core benefits we’ll focus on here though are shrinking of workload footprint, and simplified migration and deployment into the cloud environment.

In order to significantly reduce the knowledge cliff and learning curve needed to containerise an existing workload, a really exciting new community-created and community-driven PowerShell module was announced at DockerCon a couple of weeks ago: Image2Docker for Windows. Image2Docker also exists for Linux, but for this blog we’ll be focused on the Windows variant.

Image2Docker is able to inspect a VHD, VHDX, or WIM file, identify installed application components (from a select group for now), extract them to an output folder, and then build a Dockerfile from which you can build a container.

It’s a brilliant tool which raises the question ‘How quickly can I move an existing workload to a cloud provider, then?’

… so let’s answer it!

My Azure Stack PoC hosts are being moved to a new rack just now, so for the purposes of this demo I’ll use Azure. The process and principles are identical here though, as we’re just using a Windows Server 2016 VM with Container support enabled.

First of all we will need a workload to move. For this first test I’ve deployed a very simple website in IIS – we can get bolder later with databases and the like; for now this is a plain-Jane HTML site running on-premises in Hyper-V.

clip_image001

The server we run the Image2Docker cmdlets from will need to have the Hyper-V role installed, so to keep life easy I’m running it from the Hyper-V host that the VM already lives on. I’ve also enabled the Containers feature and installed the Docker Engine for Windows Server 2016.

clip_image002

Because the Image2Docker cmdlets mount the VHD/X, the disk will need to be offline when you run the tool. You can either take a copy of the VHD/X and run it against that, or, as I’m doing in this case, just shut the VM down.

Ok, so with the VM shut down, our first step on the Hyper-V host is to install the Image2Docker cmdlets. This is made extremely easy by virtue of the module being hosted in the PowerShell Gallery.

So simply Install-Module Image2Docker, then Import-Module Image2Docker, and you’re all set!
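For copy-and-paste convenience, that amounts to:

# Image2Docker is published in the PowerShell Gallery
Install-Module -Name Image2Docker -Scope CurrentUser
Import-Module Image2Docker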

clip_image003

My VHDX here is hosted on a remote SOFS share, so to avoid having to keep editing hostnames out of screenshots, I’ve just mapped it to X:

clip_image004

First up we’ll create a folder for the Image2Docker module to output to; both the contents of the IIS site and the resultant Dockerfile will live here.

clip_image005

Now comes time to extract the website and build the Dockerfile.

The documentation claims that the only required parameter is the VHD/X and that it will scan for any application/feature artifacts within the VM automatically. It also claims that you can specify multiple artifacts (e.g. IIS, MSSQL, Apache) for it to scan, and it will extract them all.

Sadly, after reviewing the PowerShell code for it here, it turns out that this is aspirational for now: the Artifacts parameter is required, and only supports a single argument. C’est la vie – it’s not an issue for our basic IIS site here, luckily.

Run the cmdlet, targeting the VM’s VHDX, IIS as the artifact to scan, and the pre-created OutputPath.
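As a rough sketch of the full command line – the cmdlet is ConvertTo-Dockerfile, though parameter names can differ between module versions, and the VHDX path and output folder below are just placeholders for my setup, so check Get-Help ConvertTo-Dockerfile before running:

# Mount the VHDX, extract the IIS artifact, and generate a Dockerfile into the pre-created output folder
ConvertTo-Dockerfile -ImagePath X:\VMs\IISDemo.vhdx -Artifact IIS -OutputPath C:\DockerOut -Verbose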

clip_image006

After running, the DockerOut folder will contain all the bits of the puzzle needed to build a container based on the IIS website within the VM – hurrah!

clip_image007

Ok! So before we go any further, let’s prep our Docker environment. I already have Docker Engine installed and it’s logged into my Docker account, so let’s get some base images ready.

Because PowerShell == Life, I’ve also installed the Docker PowerShell Module.

 
Register-PSRepository -Name DockerPS-Dev -SourceLocation https://ci.appveyor.com/nuget/docker-powershell-dev

Install-Module -Name Docker -Repository DockerPS-Dev -Scope CurrentUser

This lets us check that there are no containers and no images currently on the server using Docker native and PowerShell cmdlets.
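For reference, those checks look something like the below (the Get-Container* cmdlet names are as I recall them from the preview Docker PowerShell module, so treat them as assumptions):

# Docker-native CLI checks
docker images
docker ps -a

# Docker PowerShell module equivalents
Get-ContainerImage
Get-Container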

clip_image008

This is where we can either blast ahead with defaults, or make some informed choices…

Looking inside the Dockerfile, right at the top we can see that the image this container will be based on is the ASP.NET Windows Server Core image from http://hub.docker.com/r/Microsoft/aspnet

clip_image009

All we’re doing here is running a very simple IIS website, so why not run this with Nano Server as the base? For comparison, I’ve pulled down the IIS-enabled Nano Server Docker image, and the Windows Server Core image referenced in the Dockerfile.
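To do the same comparison yourself, something along these lines works – the image names and tags are as they stood on Docker Hub at the time of writing, so treat them as assumptions and check the Hub pages first:

# Pull the Windows Server Core ASP.NET image referenced in the Dockerfile,
# plus the IIS-enabled Nano Server image, then compare their sizes
docker pull microsoft/aspnet
docker pull microsoft/iis:nanoserver
docker images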

clip_image010

Holy moly! That’s almost a 90% image-size reduction going from Server Core to IIS-enabled Nano Server! Let’s definitely do that.
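The only change needed in the generated Dockerfile is the FROM line at the top. A quick way to make that swap from PowerShell (the Nano Server image name is an assumption – use whichever IIS-on-Nano image you pulled above, and adjust the Dockerfile path to your output folder):

# Replace the generated FROM line with the much smaller IIS-enabled Nano Server image
(Get-Content C:\DockerOut\Dockerfile) -replace '^FROM .*', 'FROM microsoft/iis:nanoserver' |
    Set-Content C:\DockerOut\Dockerfile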

clip_image011

Kick off the build process with docker build DockerOut, and off we go!

clip_image012

… and just like that, the Image is built.

The image has no associated Repo or Tag yet, so let’s add those, then push it up to Docker Hub.
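With a hypothetical Docker Hub account name of mydockerid (swap in your own), the tag and push boil down to something like:

# Find the ID of the freshly built image, then tag and push it to Docker Hub
docker images
docker tag <image-id> mydockerid/iisdemo:latest
docker push mydockerid/iisdemo:latest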

clip_image013

The iisdemo repo doesn’t exist within my Docker account yet, but that’s fine, just pushing it up will create and initialise it.

clip_image014

… and hey presto, the image is in my Docker repo. This could just as easily be a private repo.

clip_image015

Now that we have the container image in Docker Hub, I can go to any Windows Server Container-friendly environment and just pull and run it! Just like that.

In Azure I have deployed a Windows Server 2016 VM with Containers support enabled, which can be a host for whatever containers I choose to run on it. In this simple demo I’ll just be running the one container, of course.

clip_image016

Within this VM, getting our pre-built image is as simple as pulling it down from Docker Hub.

clip_image017

Running the container takes just one more command, mapping port 80 in the container to port 80 on the host…
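Put together, the pull and run on the Azure VM look roughly like this (same hypothetical mydockerid/iisdemo image name as above):

# Pull the image from Docker Hub and run it detached, publishing container port 80 on host port 80
docker pull mydockerid/iisdemo:latest
docker run -d -p 80:80 mydockerid/iisdemo:latest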

clip_image018

… and hey presto! Our website has been containerised, pushed to Docker Hub, pulled down to a VM in Azure (or Azure Stack, or anywhere that can run Windows Containers), and the website is up and running happily.

clip_image019

If we break down the steps that were actually needed here to migrate this workload, we had:

  • Generate Dockerfile
  • Tweak Dockerfile
  • Build Container Image
  • Push Image to Docker Hub
  • Pull Image from Docker Hub
  • Run Container

Because of the way I’ve deployed it here – a 1:1 mapping of container to VM – we’re not actually realising any of the space-saving benefits. Deploying this as a Hyper-V container within Hyper-V 2016 would have seen a ~90% space saving versus the original VM size, which is pretty awesome.

This per-container space saving can also be realised in the Azure Container Service preview for Windows Containers, available if you deploy a Container Service with Kubernetes as the orchestrator. The space savings really come into play when operating at a scale greater than that of our test site here, though, so for a single website like this there’s really no point. There are other resource-saving options as well, which fall outwith the scope of this blog.

Obviously this is a very simple workload, and we’re still at very early days with this technology. It hopefully gives a glimpse into the future of how easy it will be to migrate any existing workload to any cloud which supports Windows or Linux containers though.

Once a workload is migrated like-for-like into a cloud environment, extending it and modernising it using cloud-native features within that environment becomes a much simpler proposition. For those for whom working out the best path for an initial IaaS-to-IaaS migration is a pain just now (insert fog of war/cloud metaphor), tools like Image2Docker are going to significantly ease the planning and effort required for that first step towards cloud.

So how long did this take me, end to end, including taking screenshots and writing notes? Well the screenshots are there, and I was done in around 30 minutes – this is partly because I’d stripped back pre-reqs like the Core and Nano images in order to get screenshots. Normally these would already be in place and used as the base for multiple images.

Running through the process again with all pre-requisites already in place took around 3 minutes to go from on-premises to running in Azure. So, to answer the question we asked back at the start – not long. Not long at all.

 

When choosing a platform to build an application for, developers need to consider a number of common factors – skillset in the market, end user reach of the platform, supportability, roadmap, and so on. This is one of the reasons why Windows Phone has had such a difficult time; development houses won’t choose to invest time into it because of limited user reach, uncertain roadmap and support, and the need to develop new skills. There’s just no incentive there to do so, and there is much risk.

Reaching Equilibrium

When I say application delivery platform, I refer to any of a number of areas, including:

  • Desktop Operating Systems
  • Mobile Operating Systems
  • Virtualisation
  • Cloud Platforms

In each of these areas, during their birth as a new delivery paradigm there tend to be many contenders vying for developer attention. I strongly believe that in any category, over time the number of commonly used and accepted platforms will naturally tend towards a low number as a small subset of them reach developer critical mass, and the others lose traction.

Once a platform reaches this developer critical mass, it becomes self-perpetuating. End-user reach is massive, developer tools are well matured, roadmaps are defined, and the needed development skills fill the market. Once a few platforms in a category reach this point, no others can compete as they can’t attract developers, the laggards wither and die, and the platforms in the category become constant.

I call this process HomeOStasis.

  • In the Desktop and Server arena this has tended to Windows and *nix.
  • In the Mobile arena Android and iOS.
  • In Virtualisation land we predominantly have VMware and Hyper-V.
  • In Cloud there is currently AWS, Azure, and Google Cloud Platform.

It’s still a contentious view, both among hardware vendors and IT Pros, as we haven’t quite reached that HomeOStatic point with cloud yet, but I can’t see cloud-native landing as anything other than AWS, Azure, and GCP. SoftLayer and Oracle do fit the bill, but in the certainty of eventual HomeOStasis, I don’t see them gaining the developer critical mass they need to become the core of the stable and defined cloud platform market.

 

Cloud and Virtualisation

Note that this isn’t Cloud vs Virtualisation; each is a powerful and valuable application delivery platform with its own strengths and weaknesses, designed to achieve and deliver different outcomes, just like desktop and mobile operating systems.

Virtualisation is designed to support traditional monolithic and multi-tier applications, building resiliency into the fabric and hardware layers to support high availability of applications which can take advantage of scale-up functionality.

Cloud is designed to support containerised and microservice-based applications which span IaaS and PaaS and can take advantage of scale-out functionality, with resiliency designed into the application layer.

Yes you can run applications designed for virtualisation in a cloud-native environment, but it’s rarely the best thing to do, and it’s unlikely that they’ll be able to take advantage of most of the features which make cloud so attractive in the first place.

 

Hybrid Cloud and Multi Cloud

Today, the vast majority of customers I speak to say they are adopting a hybrid cloud approach, but the reality is that the implementation is multi cloud. The key differentiator between these is that in hybrid cloud the development, deployment, management, and capabilities are consistent across clouds, while in multi cloud the experience is disjointed and requires multiple skillsets and tools. Sometimes organisations will employ separate people to manage different cloud environments, sometimes one team will manage them all. Rarely is there an instance where the platforms involved in multi cloud are used to their full potential.

Yes there are cloud brokerages and tools which purport to give a single management platform to give a consistent experience across multiple different cloud platforms, but in my opinion this always results in a diminished overall experience. You end up with a lowest-common-denominator outcome where you’re unable to take advantage of many of the unique and powerful features in each platform for the sake of consistent and normalised management. It’s actually not that different to development and management in desktop and mobile OS’s – there have always been comparisons and trade-offs between native and cross-platform tooling and development, with ardent supporters in each camp.

Today, the need to either manage in a multi cloud model, or diminish overall experience with an abstracted management layer is a direct consequence of every cloud and service provider today delivering a different set of capabilities and APIs, coupled with a very real customer desire to avoid vendor lock-in.

 

Enabling True Cloud Consistency

The solution to this has been for Microsoft to finally deliver a platform which is consistent with Azure not just in look and feel, but truly consistent in capabilities, tooling, APIs, and roadmap. Through the appliance-based approach of Azure Stack, this consistency can be guaranteed through any vendor at any location.

This is true hybrid cloud, and enables the use of all the rich cloud-native capabilities within the Azure ecosystem, as well as the broad array of supported open-source development tools, without the risk of vendor lock-in. Applications can span and be moved between multiple providers with ease, with a common development and management skillset for all.

Once we have reached a point of HomeOStasis in Cloud, platform lock-in through use of native capabilities is not a concern either, as roadmap, customer-reach, skillset in the market, and support are all taken care of.

A little-discussed benefit of hybrid cloud through Azure Stack is the mitigation of collapse or failure of a vendor. An application which runs in Azure and Azure Stack can span multiple providers and the public cloud, protected by default from the failure or screw-up of one or more of those providers. The cost implications of architecting like this are similar to multi cloud, however the single skillset, management framework, and development experience can significantly help reduce TCO.

Azure Stack isn’t a silver bullet to solve all application delivery woes, and virtualisation platforms will remain as important as ever for many years to come. Over and above virtualisation though, when evaluating your cloud-native strategy, there are some important questions to bear in mind:

  • Who do I think will be the Cloud providers that remain when the dust settles and we achieve HomeOStasis?
  • Do I want to manage a multi cloud or hybrid cloud environment?
  • Do I want to use native or cross-platform tooling?
  • What will common and desirable skillsets in the market be?
  • Where will the next wave of applications I want to deploy be available to me from?

I’m choosing to invest a lot of my energy into learning Azure and Azure Stack, because I believe that the Azure ecosystem offers genuine and real differentiated capability over and above any other cloud-native vendor, and will be a skillset which has both value and longevity.

When any new platform paradigm comes into being, it’s a complete roll of the dice as to which will settle into common use. We’re far enough along in the world of cloud now to make such judgements though, and for Azure and Azure Stack it looks like a rosy future ahead indeed.

When you deploy a new Azure Function, one of the created elements is a Storage Account Connection, either to an existing storage account or to a new one. This is listed in the ‘Integrate’ section of the Function, and automatically sets the appropriate connection string behind the scenes when you select an existing connection, or create a new one.

clip_image001

Out of the box however, this didn’t work correctly for me, throwing an error about the storage account being invalid.

clip_image002

Normally to fix this, we could just go to Function App Settings, and Configure App Settings to check and fix the connection string…

clip_image003

… however after briefly flashing up, the App Settings blade reverts to the following ‘Not found’ status.

clip_image004

There are a fair few ways to fix these existing App Settings connection strings, or just have them deployed correctly in the first place (e.g. in an appsettings.json file). In this instance though I’m going to fix the existing strings through PowerShell, as it’s always my preferred troubleshooting tool.

Fire up an elevated PowerShell window, and let’s get cracking!

  1. Ensure all pre-requisites are enabled/imported/added.

Assuming you have followed all the steps to install Azure PowerShell, which you must have in order to have App Service deployed… 🙂

From within the AzureStack-Tools folder (available from GitHub).

 
#
Import-Module AzureRM 
Import-Module AzureStack 
Import-Module .\Connect\AzureStack.Connect.psm1 

Add-AzureStackAzureRmEnvironment -Name "AzureStackUser" -ArmEndpoint "https://management.local.azurestack.external"

# Login with your AAD User (not Admin) Credentials 
Login-AzureRmAccount -EnvironmentName "AzureStackUser"
#
  2. Investigate the status of the App Settings in the Functions App.

The Functions App is just a Web App, so we can connect to it and view settings as we would any normal Web App.

The Function in question here is called subtwitr-func, and lives within the Resource Group subtwitr-dev-rg.

clip_image005

 
#
$myResourceGroup = "subtwitr-dev-rg" 
$mySite = "subtwitr-func" 

# Grab the Function App (it's just a Web App) in the Production slot
$webApp = Get-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -Name $mySite -Slot Production 

# Flatten the AppSettings collection into a hashtable so it's easy to inspect and edit
$appSettingList = $webApp.SiteConfig.AppSettings 
$hash = @{} 
ForEach ($kvp in $appSettingList)  
{ 
    $hash[$kvp.Name] = $kvp.Value 
} 

# Display the current settings, including the storage connection strings
$hash | Format-List 
# 

Below is the output of the above code, which shows all our different connection strings. There are two storage connection strings I’ve tried to create here – subtwitr_STORAGE which I created manually and storagesjaohrurf7flw_STORAGE which was created via ARM deployment.

I’m not worried about exposing the Account Keys for these isolated test environments so haven’t censored them.

clip_image006

As neither of these strings contains explicit paths to the Azure Stack endpoints, they are trying to resolve to the public Azure endpoints. Let’s fix that for the storagesjaohrurf7flw_STORAGE connection.

 
#
# Point the connection string explicitly at the Azure Stack storage endpoints
$hash['storagesjaohrurf7flw_STORAGE'] = 'BlobEndpoint=https://storagesjaohrurf7flw.blob.local.azurestack.external;TableEndpoint=https://storagesjaohrurf7flw.table.local.azurestack.external;QueueEndpoint=https://storagesjaohrurf7flw.queue.local.azurestack.external;AccountName=storagesjaohrurf7flw;AccountKey=MZt4gAph+ro/35qE+AbFEiE4NK6s5XVU/Y4JAi3p3l7yy1d3qx0QPETNl+bGW+fNNvtJHxSXI7TETBWKJw2oQA==' 

# Write the updated settings back to the Production slot of the Function App
Set-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -Name $mySite -AppSettings $hash -Slot Production 
#

Now with the endpoints configured, the Function is able to connect to the Blob storage endpoint successfully and there is no more connection error.

Had I explicitly defined the connection string in-code pre-deployment, this would not have been an issue. If it is an issue for anyone, here at least is a way to resolve it until the App Settings blade is functional.

Below are a few quick tips to be aware of with the advent of the TP3 Refresh.

Once you have finished deployment, there is a new Portal Activation step

This has caught a few people out so far; as ever, the best tip is to make sure you read all of the documentation before deployment!

clip_image001

When Deploying a Default Image, make sure you use the -Net35 $True option to ensure that all is set up correctly in advance for when you come to deploy your MSSQL Resource Provider.

.Net 3.5 is a pre-requisite for the MSSQL RP just now, and if you don’t have an image with it installed, your deployment of that RP will fail. It’s included in the example code in the documentation, so just copy and paste that and you’ll be all good.

 
 
$ISOPath = "Fully_Qualified_Path_to_ISO" 
# Store the AAD service administrator account credentials in a variable
$UserName='Username of the service administrator account' 
$Password='Admin password provided when deploying Azure Stack'|ConvertTo-SecureString -Force -AsPlainText 
$Credential=New-Object PSCredential($UserName,$Password) 
# Add a Windows Server 2016 Evaluation VM Image. Make sure to configure the $AadTenant and AzureStackAdmin environment values as described in Step 6 
New-Server2016VMImage -ISOPath $ISOPath -TenantId $AadTenant -EnvironmentName "AzureStackAdmin" -Net35 $True -AzureStackCredentials $Credential 
 

Deployment of the MSSQL Resource Provider Parameter Name Documentation is Incorrect

The Parameters table lists DirectoryTenantID as the name of your AAD tenant; in actual fact it requires the AAD tenant GUID. This has been fixed via Git and should be updated before too long.

clip_image002

Use the Get-AADTenantGUID command in the AzureStack-Tools\Connect\AzureStack.Connect.psm1 module to retrieve this.
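From memory, that call looks something like the below – the parameter name may differ between AzureStack-Tools releases, so treat it as an assumption and check Get-Help Get-AADTenantGUID first:

# Import the Connect module from AzureStack-Tools, then resolve the AAD tenant GUID
Import-Module .\Connect\AzureStack.Connect.psm1
Get-AADTenantGUID -AADTenantName "yourtenant.onmicrosoft.com"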

clip_image003

Deploy everything in UTC, at least to be safe.

While almost everything seems to work when the Azure Stack host and VMs are operating in a timezone other than UTC, I have been unable to get the Web Worker role in the App Service resource provider to deploy successfully in any timezone other than UTC.

UTC+1 Log

clip_image004

UTC Log

clip_image005

Well, that’s it for now – I have some more specific lessons learned around Azure Functions which will be written up in a separate entry shortly.

During my TP3 Refresh deployment, I ran into an issue with the POC installer, wherein it seemingly wouldn’t download the bits for me to install and I ended up having to download each .bin file manually to proceed.

Charles Joy (@OrchestratorGuy) was kind enough to let me know via Twitter how to check the progress of download and for any errors. As ever, PowerShell is king.

clip_image001
To test this, I initiated a new download of the POC.

clip_image002
I chose a place to download to on my local machine, then started the download.

clip_image003
After starting the download, I fired up PowerShell and ran the Get-BitsTransfer | fl command to see what was going on with the transfer. In this instance, all is working perfectly, however something stuck out for me…

clip_image004

One thing to notice here is that Priority is set to Normal – this setting uses idle network bandwidth for transfer. Well I don’t want to use idle network bandwidth, I want to use all the network bandwidth! 🙂

We can maybe up the speed here by setting Priority to High or to Foreground. Set to Foreground, it will potentially ruin the rest of your internet experience while downloading, but it will move the process from being a background task using idle network bandwidth into actively competing with your other applications for bandwidth. In the race to deploy Azure Stack, this might be a decisive advantage! 🙂

Get-BitsTransfer | Set-BitsTransfer -Priority Foreground

Kicking off this PowerShell immediately after starting the PoC downloader could in theory improve your download speed. As ever, YMMV and this is a tip, not a recommendation.
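If you want to keep an eye on progress after bumping the priority, the same Get-BitsTransfer output can be trimmed down to the interesting properties:

# Quick progress check – re-run (or wrap in a loop) while the download is in flight
Get-BitsTransfer | Select-Object DisplayName, Priority, JobState, BytesTransferred, BytesTotal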