On October 21st, 2014, Microsoft will be hosting AzureConf, another free event for the Azure community. This event will feature a keynote presentation by Scott Guthrie, along with numerous sessions presented by Azure community members. Streamed live for an online audience on Channel 9, the event will let you see how developers just like you are using Azure to develop robust, scalable applications.

AIS’ CTO Vishwas Lele will be leading the 4:40 PM session, titled “Latest advancements in Azure IaaS for Devs & IT Pros.” This session will discuss some of the recent advancements in Azure IaaS that have significantly widened the choices available to developers and IT Pros alike. Developers can remote debug an IaaS VM and run both Windows and non-Windows workloads. IT Pros can take advantage of advanced networking features, use rich CLI tools, and manage Azure and on-premises resources through a single pane of glass. Other speakers include Michael Collier, Mike Martin, Rick Garibay and Chris Auld.

Whether you’re just learning Microsoft Azure or you’ve already achieved success on the platform, you won’t want to miss this special event.

Register Now!

If you have a minute, take a look at last year’s sessions. Watch the AzureConf 2013 recording now!

Attachments, Notes, and Annotations

How do you handle document storage and management in CRM? While this is a prominent feature in SharePoint, it is not as obvious or as easy to use in CRM. However, if you need to attach and manage documents in CRM, there is a built-in option.

CRM offers a Notes field that can be turned on and associated with any entity. This Notes field is actually a reference to an entity called Annotation. The Annotation entity holds your file attachment and a reference ID back to the entity that the attachment belongs to. This feature is turned on by default for some of the out-of-the-box entities, but you need to enable it yourself for custom entities. Read More…
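To make the Annotation shape concrete, here is a minimal sketch in Python of building such a record for submission to CRM. The helper function and its parameter names are illustrative assumptions (not part of any SDK); the field names (`Subject`, `NoteText`, `FileName`, `MimeType`, `DocumentBody`, `ObjectId`) follow the Annotation entity's schema, where the attachment content travels base64-encoded and `ObjectId` carries the reference back to the owning entity.

```python
import base64
import os

def build_annotation_payload(entity_id, entity_logical_name, file_path, note_text):
    """Build a record shaped like CRM's Annotation (Note) entity.

    Illustrative helper only: the Annotation fields hold the attachment
    (base64-encoded) plus a reference back to the entity it belongs to.
    """
    with open(file_path, "rb") as f:
        # The attachment's bytes are stored base64-encoded in DocumentBody.
        document_body = base64.b64encode(f.read()).decode("ascii")
    return {
        "Subject": note_text,
        "NoteText": note_text,
        "FileName": os.path.basename(file_path),
        "MimeType": "application/octet-stream",
        "DocumentBody": document_body,
        # Reference ID pointing back at the owning entity record.
        "ObjectId": {
            "Id": entity_id,
            "LogicalName": entity_logical_name,
        },
    }
```

Because the attachment and its owner reference live together on the Annotation record, enabling Notes on a custom entity is all that is needed for that entity to accept attachments this way.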

Introduction by Vishwas Lele:

We released the first version of our Media Center App for SharePoint 2013 almost eighteen months ago. In building this app we wanted to bring the functionality of Azure Media Services (support for the complete media lifecycle, from ingestion to streaming) to SharePoint. You can read a detailed summary of the design decisions that went into building the first version here. In a nutshell, our goal was to build a SharePoint-hosted app, working both in the cloud and on-premises, which provides seamless access to the media assets hosted in Azure Media Services. Furthermore, the access is enabled via a SharePoint construct such as the asset library. This setup allows users to leverage search, custom views, and a built-in player, i.e. all the SharePoint functionality they are already familiar with. However, as you may have guessed, the asset library only holds the metadata about the assets: the assets themselves are streamed from the AMS origin servers. Read More…

Welcome to part seven of our blog series based on my latest Pluralsight course: Applied Azure. Previously, we’ve discussed Azure Web Sites, Azure Worker Roles, Identity and Access with Azure Active Directory, Azure Service Bus and MongoDB, HIPAA Compliant Apps in Azure and Offloading SharePoint Customizations to Azure.

Motivation

No lengthy commentary is needed to communicate the growing importance of big data technologies. Look no further than the rounds of funding [1][2][3] that companies like Cloudera, Hortonworks and MapR have attracted in recent months. The market for Hadoop is widely expected to grow to $20 billion by 2018.

The key motivations for the growth of big data technologies include:

  • The growing need to process ever-increasing volumes of data. This growth in data is not limited to web-scale companies alone; businesses of all sizes are seeing it.
  • Not all data conforms to a well-defined structure/schema, so there is a need to supplement (if not replace) traditional data processing and analysis tools such as enterprise data warehouses (EDWs).
  • The ability to take advantage of deep compute analytics using massively parallel, commodity-based clusters. We will see examples of deep compute analysis a little later, but this is a growing area of deriving knowledge from data.
  • Overall simplicity (from the standpoint of the analyst/developer authoring the query) that hides the non-trivial complexity of the underlying infrastructure.
  • The price-performance benefit afforded by commodity-based clusters and built-in fault tolerance.
  • The ability to tap into the fast-paced innovation taking place within the “Hadoop” ecosystem. Consider that MapReduce, which has been the underpinning of the Hadoop ecosystem for years, is being supplanted by projects such as YARN in recent months. Read More…