Managed Service Identity (MSI) in Azure is a fairly new kid on the block. What it allows you to do is keep your code and configuration clear of keys, passwords, and secrets in general. Let’s say you have an Azure Function accessing a database hosted in Azure SQL Database. Often, developers put credentials for SQL Server authentication into the Function’s application settings in the form of a connection string. That takes sensitive information out of the code, but still quite often, configuration gets checked into source control. Wouldn’t it be great to manage credentials completely outside of the application realm and push that responsibility to the platform? That’s exactly what MSI allows you to do, and this post describes how to go about it.
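To give a first taste, the token handshake inside the Function could look roughly like this (a minimal sketch in PowerShell; the `MSI_ENDPOINT` and `MSI_SECRET` environment variables are injected by App Service once MSI is enabled, and the server, database and credential-free connection string are placeholder assumptions):

```powershell
# Request an access token for Azure SQL Database from the local MSI endpoint.
# App Service injects MSI_ENDPOINT and MSI_SECRET when MSI is turned on.
$resourceUri   = "https://database.windows.net/"
$tokenAuthUri  = "$($env:MSI_ENDPOINT)?resource=$resourceUri&api-version=2017-09-01"
$tokenResponse = Invoke-RestMethod -Method Get -Uri $tokenAuthUri `
                     -Headers @{ Secret = $env:MSI_SECRET }

# Open the connection with the token instead of a username/password --
# note the connection string contains no credentials at all.
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=tcp:myserver.database.windows.net;Database=mydb;"
$conn.AccessToken = $tokenResponse.access_token
$conn.Open()
```

The interesting part is what is *not* there: no secret ever touches application settings or source control.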
The officially recommended way for building highly available SQL Server environments in Azure is AlwaysOn Availability Groups. This approach has many benefits, e.g. failover for a discrete set of user databases, flexible failover policies and read-only secondaries, but it requires SQL Server Enterprise edition (as described in the feature matrix of SQL Server 2014).
If you don’t need these additional capabilities and you like saving some money, there is an alternative way to build a highly-available and very scalable 2-node cluster on top of AlwaysOn Failover Cluster Instances (FCI) using SQL Server Standard edition. ‘Hang on’, you might say, ‘doesn’t FCI require shared storage – is that possible at all in Azure?’ Actually it is, by leveraging SIOS DataKeeper from the Azure Marketplace in order to synchronize local disks attached to the cluster nodes.
This post will show you how to set up this environment in Azure Resource Manager step-by-step, using PowerShell 1.0 as well as the new Azure Portal.
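At its core, the two nodes form a standard Windows failover cluster without shared disks; a rough sketch of that first step might look like this (cluster name, node names and IP address are placeholder assumptions, and SIOS DataKeeper later presents the replicated local disks to the cluster as volumes):

```powershell
# Run on both nodes: install the failover clustering feature.
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Run once: form the two-node cluster without any shared storage --
# DataKeeper-replicated local disks stand in for a shared disk later on.
New-Cluster -Name "sqlfci-cl" -Node "sqlnode1","sqlnode2" `
    -NoStorage -StaticAddress 10.0.0.100
```

The SQL Server FCI setup then installs on top of this cluster as usual.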
Sometimes, especially in enterprise environments, firewalls prevent connecting via RDP to Azure Windows VMs over port 3389. Quite often, the only outgoing ports open in the network are 80 and 443 for HTTP(S). Additionally, RDP-over-SSL based technologies like Azure RemoteApp require users to install an app on the client, which also poses a challenge in many cases, e.g. due to corporate policies.
Wouldn’t it be nice to access your Azure VMs via the browser in an RDP-like manner, without the need for special client-side software and networking configuration? Well, that can actually be done by installing ThinRDP on the target VM. This post will show you how to achieve that in a completely automated manner, using PowerShell with an Azure Resource Manager template and a custom script extension.
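The automated part boils down to attaching the custom script extension to the VM; a sketch of that call could look like this (resource group, VM name, storage URI and script name are placeholder assumptions):

```powershell
# Attach the custom script extension to an existing ARM VM; the referenced
# script (hosted in blob storage) downloads and installs ThinRDP silently.
Set-AzureRmVMCustomScriptExtension -ResourceGroupName "rdp-rg" `
    -VMName "targetvm" -Name "InstallThinRDP" -Location "West Europe" `
    -FileUri "https://mystorage.blob.core.windows.net/scripts/install-thinrdp.ps1" `
    -Run "install-thinrdp.ps1"
```

With that in place, the VM only needs an endpoint on 443 and users connect with nothing but a browser.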
Until recently, it was not possible in Azure to host multiple SSL websites in a single IaaS VM (or load-balanced set of VMs), all listening on port 443 using different certificates (e.g. for separate domain names). People had to either use ARR farms in front of the web servers (making deployments more expensive and harder to manage) or use SNI (Server Name Indication) certificates, excluding all those brave people still running Windows XP, which doesn’t support SNI. This limitation was caused by the fact that a cloud service in Azure only got one Virtual IP address (VIP) from the fabric, which could be bound to port 443 with a single certificate.
Now, as Microsoft has announced availability of multiple VIPs per cloud service around Build 2015, it’s finally possible to configure several SSL endpoints, each of them pointing to a different website on the same VM (or set of VMs behind the Azure load balancer). This post will go through a simple example of setting up two SSL websites on a single VM.
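In a nutshell, the idea is to add a second VIP to the cloud service and bind a second HTTPS endpoint to it; a sketch of those two steps might look like this (service, VM, endpoint names and the internal port mapping are placeholder assumptions):

```powershell
# Add a second VIP to the cloud service.
Add-AzureVirtualIP -VirtualIPName "Vip2" -ServiceName "mycloudservice"

# Bind an HTTPS endpoint for the second website to the new VIP;
# IIS hosts the second site on a different local port (here 4432).
Get-AzureVM -ServiceName "mycloudservice" -Name "webvm" |
    Add-AzureEndpoint -Name "https-site2" -Protocol tcp `
        -LocalPort 4432 -PublicPort 443 -VirtualIPName "Vip2" |
    Update-AzureVM
```

Externally, both sites answer on 443, each on its own public IP with its own certificate.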
One of the great things that came with the plethora of new features and capabilities around Azure networking at Build 2015 and didn’t get a lot of attention is the fact that now you have much more flexibility in working with reserved IP addresses in your deployments. By default, VIP addresses of Azure cloud services are dynamic by nature, i.e. they may change when VMs get de-provisioned or the Azure fabric needs to move your VMs to another host, e.g. due to hardware failure.
What you can do now with the latest release of the Azure PowerShell Cmdlets is convert existing dynamic VIPs to reserved IP addresses. Doing so takes the current cloud service VIP out of the data center’s general IP address pool and assigns it as a reserved IP to your Azure subscription. The IP remains associated with the cloud service deployment, but can also be used for other deployments in your subscription, as we will see in this post.
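The conversion itself is a one-liner; a sketch could look like this (reserved IP name, location and service name are placeholder assumptions):

```powershell
# Convert the running deployment's dynamic VIP into a reserved IP that
# stays in the subscription even if the deployment is later deleted.
New-AzureReservedIP -ReservedIPName "myreservedip" `
    -Location "West Europe" -ServiceName "mycloudservice"
```

Passing `-ServiceName` is what turns this from allocating a fresh IP into converting the deployment’s existing VIP in place.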
People were asking for disk encryption in Azure VMs for quite a while now. With the announcements made at TechEd 2014 in Houston it’s finally here. Instead of re-inventing the wheel, Microsoft is relying on established solutions in the market and initially provides two encryption options for Azure VMs:
This post will walk you through the steps to enable Trend Micro SecureCloud in your Azure VMs in order to encrypt your drives.
A couple of months ago, I wrote a blog post about starting and stopping virtual machines in Azure using Windows Task Scheduler inside a dedicated VM. With the advent of Azure Automation this can actually be done much more easily, and free of charge. In this post I will describe how to use Azure Automation to start and stop virtual machines on a schedule.
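The heart of the solution is a runbook; a minimal sketch of the “stop” side might look like this (the credential asset, subscription and service names are placeholder assumptions):

```powershell
workflow Stop-DevVMs
{
    # Credential asset stored in the Automation account.
    $cred = Get-AutomationPSCredential -Name "AzureOrgIdCredential"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Deprovision every VM in the cloud service to stop compute charges.
    foreach ($vm in Get-AzureVM -ServiceName "dev-service") {
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    }
}
```

Attach the runbook to an Automation schedule (and a mirrored one calling `Start-AzureVM` in the morning) and you’re done.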
At TechEd 2014 in Houston, Microsoft announced preview availability of a much-anticipated feature, Azure Files. It is a platform capability to easily expose SMB file shares that can be used from multiple instances to read & write files, without having to build an infrastructure yourself (like I described in my blog post about building file shares using DFS). It is a native Azure service built on top of the same architecture as the other storage services (blobs, tables, queues), so it offers the same characteristics in terms of (geo-)redundancy, HA and scalability. Apart from providing an SMB 2.1 interface (which is only accessible to Azure VMs hosted in the same geographic region, or from on-premises environments via a VPN connection), shares in Azure Files can also be accessed remotely via REST from anywhere in the world, provided that a client knows the storage account credentials.
This post gives an introduction to getting started with Azure Files and shows how to automatically mount a share when provisioning a new Windows VM, using the new Custom Script configuration extension (also announced at TechEd).
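For a quick preview of what the custom script ends up doing on the VM, mounting a share boils down to two commands (storage account name, share name and drive letter are placeholder assumptions; the key placeholder is left as-is):

```powershell
# Persist the storage account credentials so the mapping survives logons...
cmdkey /add:mystorageacct.file.core.windows.net /user:mystorageacct /pass:<storage-account-key>

# ...then map the Azure Files share to a drive letter over SMB.
net use Z: \\mystorageacct.file.core.windows.net\myshare
```

The post shows how to run exactly this kind of logic automatically at provisioning time via the extension.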
One of the great new Azure features announced at TechEd 2014 in Houston is the capability of assigning public IP addresses directly to VMs at the instance level. As these IP addresses are public (that’s why they’re also called PIPs) they allow you to access your VMs directly from outside the datacenter, without having to define any endpoints on the virtual IP address (VIP) of the corresponding cloud service.
This can be handy for example if you need to access your Azure VMs via RDP from your corporate environment and your firewall admin has blocked ports other than the ‘mainstream’ ones (80, 443, 3389, …). If you have deployed multiple VMs in a single cloud service, the Azure load balancer provides port forwarding to those VMs from random high ports to port 3389 internally. If your firewall blocks those high ports you’re stuck. PIPs to the rescue! This post will describe what it takes to create a PIP for a VM and how to avoid common pitfalls.
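Assigning a PIP to an existing VM is pleasantly short; a sketch could look like this (service, VM and PIP names are placeholder assumptions):

```powershell
# Assign an instance-level public IP to an existing classic VM; afterwards
# the VM is reachable on its own public address, e.g. via RDP on 3389,
# without port forwarding through the cloud service VIP.
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
    Set-AzurePublicIP -PublicIPName "myvm-pip" |
    Update-AzureVM
```

Keep in mind a PIP exposes the VM directly, so the guest firewall becomes your only gatekeeper.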
Today, the Windows Azure management portal does not provide an out-of-the-box capability to define a time schedule for starting up and shutting down virtual machines. Having an automated process for this is of great use, as simply deprovisioning VMs during off-hours can save you a lot of money. This post will describe a lightweight approach for automated provisioning and deprovisioning of VMs according to a time schedule.
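The script that Task Scheduler invokes can be as small as this sketch (subscription and service names are placeholder assumptions, and authentication against the subscription is assumed to be configured beforehand):

```powershell
# Invoked by two Windows Task Scheduler triggers: one passing -Action Start
# in the morning, one passing -Action Stop in the evening.
param([ValidateSet("Start","Stop")] $Action)

Select-AzureSubscription -SubscriptionName "MySubscription"
foreach ($vm in Get-AzureVM -ServiceName "dev-service") {
    if ($Action -eq "Start") {
        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
    }
    else {
        # -Force deprovisions the last VM without an interactive prompt.
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    }
}
```

Two scheduled tasks with opposite `-Action` values give you the full start/stop cycle.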