Today, the Windows Azure management portal does not provide an out-of-the-box capability to define a time schedule for starting up and shutting down virtual machines. An automated process for this is of great use, as simply deprovisioning VMs during off-hours can save you a lot of money. This post describes a lightweight approach for automated provisioning of VMs according to a time schedule.
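The core of such a scheduler is a policy that maps the current time to a desired VM state. Here is a minimal sketch in Python; the hours, day set, and function name are illustrative assumptions, and the actual start/stop operations would go through the Azure management API or PowerShell cmdlets:

```python
from datetime import datetime

# Hypothetical business-hours policy (illustrative values, not an Azure API):
WORK_START_HOUR = 7          # start VMs at 07:00
WORK_STOP_HOUR = 19          # deallocate VMs at 19:00
WORK_DAYS = {0, 1, 2, 3, 4}  # Monday..Friday

def desired_state(now: datetime) -> str:
    """Return 'running' or 'deallocated' for a given point in time."""
    in_work_hours = WORK_START_HOUR <= now.hour < WORK_STOP_HOUR
    if now.weekday() in WORK_DAYS and in_work_hours:
        return "running"
    return "deallocated"
```

A scheduled task would evaluate this policy periodically, compare the result with the VM's actual state, and issue a start or deallocate request only on a mismatch.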
Quite often people want to use the D: drive in a Windows Azure VM for their apps or data. For example, you might want to migrate an existing Windows application to the cloud without changes, and this app relies on data being stored on the D: drive. Or your corporate policy mandates installing applications on D:.
By default, Windows VMs in Windows Azure host their operating system on drive C:, a persistent data disk backed by blob storage. Additionally, each VM gets a scratch disk labeled D: which is NOT persisted in blob storage. Instead, it is disk space provided by the specific Hyper-V host running your VM. Data on this scratch disk is volatile in the sense that it will get lost whenever your VM is relocated to another physical host (e.g. because you changed the VM size in the portal).
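As a safety check before writing data, note that Azure typically places a warning file at the root of the scratch disk, so a startup script can refuse to use a drive that carries it. The sketch below is a heuristic, not a documented API, and the function name is illustrative:

```python
import os

def looks_like_azure_temp_disk(root: str) -> bool:
    """Heuristic: Azure typically places this warning file at the
    root of the non-persistent scratch disk (e.g. D:\\)."""
    marker = os.path.join(root, "DataLoss_Warning_README.txt")
    return os.path.exists(marker)
```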
So if you want to use D: as a persistent data disk, read on…
Are you running your virtualized workloads on top of VMware? Have you heard about Windows Azure and believe it could be a valuable alternative or extension to your current IT landscape? Check out how you can migrate VMware Windows Server guest VMs directly to Windows Azure in just a couple of steps using the Microsoft Virtual Machine Converter.
With HDInsight, the Windows Azure platform provides a powerful Platform-as-a-Service (PaaS) offering for quickly spinning up and managing Hadoop clusters on top of Windows VMs. These clusters are based on the Hortonworks Data Platform (HDP) distribution. Currently, the newest version of HDP in HDInsight is 1.3.0, which is deployed with HDInsight version 2.1 (go here for the Microsoft versioning story). Hortonworks will surely release a 2.x version for HDInsight on Windows eventually, but if you prefer Ubuntu or you need a Hadoop v2 cluster now – what to do …?
Well, the good news is that Windows Azure is a very flexible platform: it provides not only platform services like HDInsight and many others, but also a powerful Infrastructure-as-a-Service (IaaS) model which allows you to deploy virtual machines based on Windows or Linux and manage them in virtual networks. So Windows Azure IaaS lets you build your own Apache Hadoop v2 cluster running on Ubuntu. In this post I will walk you through the steps to get there. We will build a Hadoop 2.2.0 cluster consisting of a single master node and two slave nodes, using a custom-built VM image for all Hadoop nodes. The post will thus enable you to build arbitrarily sized Hadoop clusters on top of Windows Azure.
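To give a feel for the target topology, the sketch below renders two small configuration artifacts every node of such a cluster needs: Hadoop's `slaves` file (one worker hostname per line) and matching `/etc/hosts` entries so the nodes can resolve each other by name. The hostnames and 10.0.0.x addresses are assumptions for illustration; in practice they would come from your Azure virtual network:

```python
# Illustrative topology for this walkthrough: one master, two slaves.
NODES = {
    "hadoop-master": "10.0.0.4",
    "hadoop-slave1": "10.0.0.5",
    "hadoop-slave2": "10.0.0.6",
}

def render_slaves_file(nodes: dict) -> str:
    """Contents of Hadoop's conf/slaves file: worker hostnames only."""
    return "\n".join(h for h in nodes if h != "hadoop-master") + "\n"

def render_hosts_entries(nodes: dict) -> str:
    """/etc/hosts lines mapping each node's IP to its hostname."""
    return "\n".join(f"{ip}\t{host}" for host, ip in nodes.items()) + "\n"
```

Scaling the cluster then amounts to adding entries to the node list and regenerating these files on every machine.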
Windows Azure Storage provides a scalable, reliable and highly available service for managing non-relational structured as well as unstructured data in the cloud. To access your data you can either use the Storage REST API directly or one of the available abstractions on top of it (e.g. the Management Portal, PowerShell cmdlets, .NET libraries, 3rd-party tools, etc.). Windows Azure Blob Storage can be used to store binary data. Many existing applications, however, need to access data on network shares via the SMB protocol in Windows. When migrating such applications to Windows Azure, one option is to change the file access code to use the native REST interface of Blob Storage. However, the effort for changing an application is often too high, and customers are looking for a ‘lift & shift’ migration without having to change any of their code.
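To illustrate the addressing difference behind such a code change: instead of opening a UNC path over SMB, an application would address each file through a per-blob REST URL. The sketch below only builds that URL (the account and container names are illustrative); real requests additionally carry the `x-ms-version` header and, for private containers, a SharedKey or SAS authorization:

```python
def blob_rest_url(account: str, container: str, blob_path: str) -> str:
    """Map a file-style path within a container to the Blob service
    REST endpoint. A GET on this URL returns the blob for publicly
    readable containers; private containers require an auth header."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob_path}"
```

So where the SMB version reads `\\fileserver\reports\q1.csv`, the REST version issues an HTTP GET against `blob_rest_url("myaccount", "reports", "q1.csv")` – which is exactly the kind of pervasive change a ‘lift & shift’ migration wants to avoid.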