Here are several that I found most exciting:
- New data centers in China
New data centers in China are now generally available. This development further validates that the Azure team will continue to build data centers around the world at a rapid pace. If you are building applications that need to be hosted closer to your users and customers worldwide, Azure is a good place to do so.
- Azure Traffic Manager
Azure Traffic Manager is now generally available. This means you can have one “common” DNS address for all instances of your application hosted across data centers. Azure Traffic Manager will then route incoming requests to this “common” DNS address to the appropriate data center hosting your application, based on a configurable policy (load balancing, failover, etc.).
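To illustrate the routing idea, here is a minimal sketch of a failover policy in plain Python. The endpoint names and data structure are hypothetical and illustrative only; this is not the actual Traffic Manager implementation.

```python
# Sketch of a failover routing policy: answer a DNS query with the first
# healthy endpoint, in priority order. Names are made up for illustration.

def resolve_failover(endpoints):
    """Return the name of the first healthy endpoint, or None."""
    for endpoint in endpoints:  # endpoints are listed in priority order
        if endpoint["healthy"]:
            return endpoint["name"]
    return None  # no healthy endpoint: the query cannot be served

endpoints = [
    {"name": "myapp-us-east.cloudapp.net", "healthy": False},
    {"name": "myapp-eu-west.cloudapp.net", "healthy": True},
]
print(resolve_failover(endpoints))  # myapp-eu-west.cloudapp.net
```

Other policies (such as load balancing) would simply pick among the healthy endpoints differently.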
- Static IP addresses
You now have the ability to specify a static IP address for your Azure service at deployment time. Up until now, your service was assigned an IP address dynamically at deployment. Even though the dynamically assigned IP address was fairly stable, if you deleted and recreated the service, you would end up with a new IP address. This new ability is a huge enhancement. Why? Consider this example: if you are setting up an on-premises firewall to allow “only” your trusted Azure service to call in, it is hard to set up such a rule if the IP address assigned to the service is subject to change. This new feature eliminates that problem.
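The firewall scenario can be sketched in a few lines of Python. This is purely illustrative (not a real firewall API), and the IP address shown is a made-up placeholder.

```python
# Why a stable IP matters for an on-premises allow-list rule: the rule
# only stays valid if the service's address survives redeployments.

ALLOWED_SOURCES = {"191.238.0.10"}  # reserved IP of our Azure service (hypothetical)

def is_allowed(source_ip):
    """Return True if the caller's IP is on the firewall allow-list."""
    return source_ip in ALLOWED_SOURCES

# With a static IP the rule keeps working across redeployments:
print(is_allowed("191.238.0.10"))   # True
# A dynamically reassigned address would silently break the rule:
print(is_allowed("191.238.77.3"))   # False
```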
- Move VMs from one subnet to another
You can now move VMs from one subnet to another without redeployment. Think of an Azure VNET as a logical isolation boundary for your VMs. This boundary can then be further divided into smaller groups, or subnets (think of a subnet as a pool of IP addresses). Up until now, you did not have the flexibility to change your subnets (i.e. expand or contract the IP address pool) without redeploying the VMs. This change will make it easy to reconfigure a VNET based on changing needs.
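The "subnet as a pool of IP addresses" idea can be made concrete with Python's standard `ipaddress` module. The address ranges below are arbitrary examples.

```python
import ipaddress

# A subnet is a pool of IP addresses carved out of the VNET's address space.

vnet = ipaddress.ip_network("10.0.0.0/16")       # the VNET address space
subnet_a = ipaddress.ip_network("10.0.1.0/24")   # one subnet within it
subnet_b = ipaddress.ip_network("10.0.2.0/24")   # another subnet

print(subnet_a.subnet_of(vnet))   # True: the subnet lives inside the VNET
print(subnet_a.num_addresses)     # 256 addresses in this pool
```

Expanding or contracting a subnet is just choosing a different prefix length (e.g. `/23` doubles the pool); the announcement means you can now re-home a VM into such a resized subnet without redeploying it.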
- Check Point, Fortinet and Openswan
As you know, VNET enables site-to-site connectivity between the Azure DC and your on-premises DC. In order for it to do this, you need to set up a gateway device on premises. Up until now, you had the option of using the RRAS role in Windows Server 2012, or a Cisco or Juniper appliance, as the gateway device. Now Check Point and Fortinet have been added, along with Openswan as a software VPN gateway. This update means that you have additional choices of hardware appliances, plus a whole set of open source software VPN gateways enabled by the Openswan support.
- Azure VNET and ExpressRoute
Azure VNET-based site-to-site connectivity is only the first step toward the promise of a hybrid cloud. A couple of things to note: you are still routing the site-to-site traffic over the public internet, and a site-to-site connection can reach only VMs and cloud services, not PaaS services such as Web Sites, SQL Database or storage (you have to go through the public endpoint to reach those). ExpressRoute is designed to alleviate these limitations. ExpressRoute provides organizations with a dedicated, high-throughput network connection between Windows Azure data centers and their on-premises DC. The dedicated connection can be set up in two ways: 1) through an ExpressRoute partner (such as Equinix or Level 3), or 2) by establishing a WAN connection between the Azure DC and the on-premises DC – essentially making Azure a site on your WAN. A presenter at Build 2014 shared a demo where the latency between a VM hosted in the Equinix partner facility and a VM hosted in Azure was 1 ms. Imagine the possibilities if latency between cloud and on-premises is not a factor. This feature has the potential to truly make the Azure DC a seamless and secure extension of your existing IT environment.
There is another benefit of ExpressRoute that has huge implications for the hybrid cloud. Up until now, you could not connect two VNETs (for example, one between your on-premises DC and the Azure East DC and another between your on-premises DC and the Azure West DC). ExpressRoute helps with this limitation: you can hook up more than one VNET to your ExpressRoute circuit. This means that if one of the Azure DCs becomes unavailable, you could easily fail over to the other Azure DC.
- Automation Service
This is a new addition to the portal (currently in preview) that is designed to simplify the Dev/Ops function. The biggest feature it provides is a robust environment to host your automation scripts. Today, customers have to provide their own custom hosting environment for executing scripts. This custom hosting environment is typically an on-premises or cloud-hosted VM combined with a scheduler service. As you can imagine, it is quite a bit of work to make such an environment robust (e.g. ensure successful script completion despite restarts) and current (e.g. ensure that the latest versions of all automation libraries are installed and configured). The automation service seeks to alleviate this challenge by providing a robust, PowerShell workflow-based hosting environment. Additionally, it provides a script authoring experience directly within the Azure portal. This means that automation scripts can be modified or adjusted easily without the need for a client machine that has the full PowerShell IDE, etc. Automation scripts created using the PowerShell IDE can be imported into the Azure Automation Service. The script authoring is based on the Visual Studio Online “Monaco” development environment. Finally, the automation service provides rich capabilities for scheduling, logging, reviewing job status, as well as drafting and publishing. Note that the Azure Automation Service in its current form is based on the Service Management Library (and not the new Azure Resource Manager, discussed below).
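To see why "successful script completion despite restarts" is the hard part, here is a minimal checkpoint/resume sketch in plain Python. This is not the Azure Automation Service (which uses PowerShell workflows for this); all names here are made up for illustration.

```python
import json, os, tempfile

# Sketch of checkpoint/resume: a robust script host records completed
# steps so a restart does not repeat work that already succeeded.

CHECKPOINT = os.path.join(tempfile.gettempdir(), "automation_checkpoint.json")

def run_with_checkpoints(steps):
    """Run (name, action) steps, skipping any recorded as already done."""
    done = []
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = json.load(f)
    for name, action in steps:
        if name in done:
            continue  # completed before a restart; skip it
        action()
        done.append(name)
        with open(CHECKPOINT, "w") as f:
            json.dump(done, f)  # checkpoint after each successful step
    return done

if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)  # start clean for this demo

log = []
steps = [
    ("provision_vm", lambda: log.append("provision_vm")),
    ("configure_vm", lambda: log.append("configure_vm")),
]
print(run_with_checkpoints(steps))  # ['provision_vm', 'configure_vm']
```

If the host crashed after the first step, a re-run would skip `provision_vm` and resume at `configure_vm` – the behavior a managed automation host gives you for free.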
- Azure Resource Manager
While the automation service will definitely help with the DevOps function, provisioning resources (such as a VM or a storage account) in Azure still takes place on a one-off, resource-by-resource basis. This means that there is no way to deploy or update a group of resources. Similarly, you cannot manage permissions on a group of resources. Also, there is no way to visualize a group of resources as a logical unit for billing or monitoring. This fundamental limitation is manifested in the Azure portal as well. For example, the “Virtual Machine” tab in the portal simply lists all VMs in a subscription without providing the user with any information about their logical grouping.
The Azure Resource Manager seeks to change that. It defines the notion of a resource group, which is a container for multiple resources of similar or different types. Every resource must exist in one and only one resource group. Interestingly, a resource group can span Azure regions. So how does the notion of a resource group help? First, think of the resource group as a scale unit, because the lifecycles of resources within a resource group are connected: you typically provision and de-provision the resources in a resource group together (although the resource group does not prevent you from adding resources dynamically during its life). Second, resources within a resource group can talk to each other innately. Third, billing, metering and quota information is rolled up to the resource group. As stated earlier, today there is no logical grouping below the level of the entire subscription, which makes it hard to compute costs for individual applications within a subscription; the resource group is going to be very important here. Lastly, the Resource Manager enables a model-driven approach to defining resource groups.
The resource group models are JSON-formatted files (referred to as resource templates) that define the resources, dependencies, and input/output parameters that make up a resource group. Once you have authored a resource group template, you can use it to instantiate resource groups in an idempotent and fault-tolerant manner. While the resource group template defines the resources and their dependencies, customizing a resource itself (changing firewall ports, adding an IIS virtual directory) can be achieved using one of several extensibility solutions, including Chef, Puppet, DSC (in-VM PowerShell) or the VMM agent.
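To make the idea concrete, a resource template might look roughly like the following. This is an illustrative sketch: the resource names and parameter are made up, and the properties shown are an abbreviation, not the complete or exact schema.

```json
{
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "name": "[parameters('siteName')]",
      "type": "Microsoft.Web/sites",
      "location": "East US"
    },
    {
      "name": "mysqldb",
      "type": "Microsoft.Sql/servers/databases",
      "location": "East US",
      "dependsOn": [ "[parameters('siteName')]" ]
    }
  ]
}
```

Deploying the same template twice should converge to the same result (that is the idempotency mentioned above), and the `dependsOn` entries let the Resource Manager order provisioning correctly.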
What is interesting to note is that the resource model concept applies to Azure as well as on-premises resources. A resource could be an Azure Web Site or its on-premises equivalent – the difference is just a configuration detail. In the former case, the website is provisioned by Azure Web Sites, while in the latter, it is provisioned by the on-premises version of Web Sites.
*Note* – In the current version it appears that the only resource types that are supported are Azure Web Sites and Azure SQL Databases.
It is worth noting that everything that can be achieved via the declarative JSON syntax can also be achieved via a REST API, or via equivalent PowerShell cmdlets. Speaking of PowerShell cmdlets, I think it is safe to assume that we will see a version of the Automation Service (described above) that is based on the concept of the Resource Manager.
- Mobile Services
There were some interesting announcements related to Mobile Services. The long-awaited feature that allows you to use .NET-based code to develop backend customizations is here. This feature, combined with the ability to publish directly from within Visual Studio 2013 (including local development, testing, and debugging), will further position Mobile Services as the API hosting service of choice. Another noteworthy addition is the offline sync capability – the ability to download data locally to the device, query it, and of course synchronize the changes when the app is back online. This feature is powered by the Azure Mobile Services SQLiteStore NuGet package.
- Azure Storage
By default, Azure storage data is geo-replicated across regions hundreds of miles apart. As a user, you get to choose the primary region; the secondary is predefined for each primary region. Recently, Microsoft announced that read-only access to the secondary data will be available (even if the primary is unavailable). What’s important to understand is that the underlying semantics needed to access a read-only secondary remain the same. So, for example, a shared access signature created against an Azure storage blob in the primary region will work against the secondary region (you simply change the endpoint from the primary accountname.<service>.core.windows.net to the secondary accountname-secondary.<service>.core.windows.net). As you can imagine, the storage client library has been updated to reflect these changes. Using the client library, you can programmatically determine the current maximum geo-replication delay for the client’s storage account. You can also set the retry options to “PrimaryOnly”, “SecondaryOnly”, “PrimaryThenSecondary”, etc.
Another noteworthy change was the announcement of a third durability option for Azure storage, called Zone Redundant Storage (ZRS). This is in addition to GRS (six replicas across primary and secondary data centers) and LRS (three replicas within a single region). ZRS stores three replicas across zones (facilities) within a region, but *may* span regions in some cases. So how is ZRS different? It provides additional durability against a zone failure within a region (e.g. a fire in one of the buildings). Note that ZRS is only available for block blobs. In other words, persistent VM disks cannot take advantage of ZRS.
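The endpoint naming convention described above is mechanical enough to sketch in code. The account name below is made up; the transformation follows the primary/secondary pattern stated in the text.

```python
# Derive the read-only secondary endpoint for an Azure storage account
# from its primary endpoint:
#   accountname.<service>.core.windows.net
#     -> accountname-secondary.<service>.core.windows.net

def secondary_endpoint(primary):
    """Return the secondary-endpoint hostname for a primary hostname."""
    account, rest = primary.split(".", 1)  # split off the account name
    return f"{account}-secondary.{rest}"

print(secondary_endpoint("mydata.blob.core.windows.net"))
# mydata-secondary.blob.core.windows.net
```

In practice you would let the storage client library handle this (e.g. via the "PrimaryThenSecondary" retry option) rather than rewriting URLs yourself.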
- SQL Database
Geo-replication is not limited to Azure storage, though. The SQL Database team announced that their premium offering will include “active” geo-replication as well. You can now have up to four readable secondaries in any Azure region, and you can also choose when to fail over. This means there is no need for a roll-your-own data-sync workaround to make your SQL Database fault-tolerant across Azure regions. The SQL Database team also announced that you can now restore your database to any point in time within the last 35 days. As you can imagine, this is designed to protect against accidental human or programmatic data deletion. Once again, this means you don’t have to roll your own scheme for taking SQL Database snapshots. Finally, there is now support for databases up to 500 GB, and when the premium service is released it will come with a higher uptime SLA of 99.95% (it was 99.9% earlier).
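The 35-day restore window is a simple constraint to reason about. Here is a small sketch of the check (plain Python, not the SQL Database API; the dates are arbitrary examples).

```python
from datetime import datetime, timedelta

# Point-in-time restore: a restore target must fall within the
# retention window (35 days) ending now.

RETENTION = timedelta(days=35)

def can_restore_to(target, now):
    """True if `target` lies within the 35-day window ending at `now`."""
    return now - RETENTION <= target <= now

now = datetime(2014, 4, 10)
print(can_restore_to(datetime(2014, 3, 20), now))  # True: 21 days ago
print(can_restore_to(datetime(2014, 2, 1), now))   # False: ~68 days ago
```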
- Service Bus
Service Bus SDK 2.3 comes with some very useful updates. These include an event-driven or push-style model that removes the need for message loops. Additionally, support for CORS, as well as an auto-detection mechanism to fall back to HTTP (if the TCP port is blocked), has been added. Finally, there are some enhancements related to dead-lettering (a powerful Service Bus feature that automatically places “bad” messages into a separate “dead letter” queue after a default of 10 failed delivery attempts). With the latest release you can forward dead-letter messages to a common queue for ease of management.
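The dead-lettering behavior is worth seeing in miniature. The sketch below is plain Python, not the Service Bus SDK; it simply mimics the "move to a dead-letter queue after repeated delivery failures" pattern described above.

```python
# Simulate dead-lettering: after a maximum number of failed delivery
# attempts, a message is moved to a dead-letter queue instead of being
# retried forever.

MAX_DELIVERY_COUNT = 10  # Service Bus default

def deliver(message, handler, dead_letter_queue):
    """Try to deliver a message; dead-letter it after repeated failures."""
    for _ in range(MAX_DELIVERY_COUNT):
        try:
            handler(message)
            return True       # delivery succeeded
        except Exception:
            continue          # redeliver on failure
    dead_letter_queue.append(message)  # give up: dead-letter the message
    return False

dlq = []

def always_fails(msg):
    raise RuntimeError("poison message")

deliver({"id": 1, "body": "bad payload"}, always_fails, dlq)
print(len(dlq))  # 1
```

The new forwarding capability means these dead-lettered messages no longer have to be inspected queue by queue; they can be funneled to one common queue.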
- Azure IaaS
As expected, there were a number of enhancements related to Azure IaaS. As you might already know, support for Oracle and Java VMs on Azure was available prior to Build 2014. At Build 2014, Microsoft announced additional Visual Studio-based tooling to create (and delete) a VM and the ability to remotely connect to it for debugging. How is this helpful? As a developer, it greatly simplifies testing an app against a specific VM configuration (i.e. OS, IIS version). The remote debugging is made possible because a debugger agent is injected into the VM. In fact, this idea of injecting an agent (or extension) into a VM instance is key. This very mechanism makes it possible to inject all kinds of agents, such as Chef, Puppet, DSC or even the PaaS (Windows Azure Guest) agent. The idea is that these agents can help bridge the divide between PaaS roles and IaaS VM instances. In other words, a PaaS role becomes an IaaS instance with additional agent configuration.
- The New Azure Portal
I saved the best for last. The brand new portal (portal.azure.com) has a highly customizable, modern UI. It is composed of a set of drill-down blades that surface information about various Azure services, including websites, SQL Databases, and team sites. Additionally, the blades can be rearranged as needed (or even pinned to the start screen of the portal for easy access). The biggest change, however, is related to the Azure resource groups discussed earlier. While the current Azure portal displays all the resources within a subscription grouped by resource type, the new portal relies on the notion of resource groups to present information – whether it is billing, metering or even the lifecycle of resources. So how does it help? If you think of an application as a resource group, you now have the ability to visualize the billing, notifications, metering, and metadata associated with that application as a logical unit.
Needless to say, this is an exciting time to be an Azure developer. I cannot wait to try many of these features. I hope this blog post has encouraged you to do the same.