In this episode of the Azure Government video series, Steve Michelotti sits down with AIS’ very own Vishwas Lele to discuss migrating and modernizing with Kubernetes on Azure Government. You’ll learn about the traditional approaches for migrating workloads to the cloud, including:

1. Rehost
2. Refactor
3. Reimagine

You will also learn how Kubernetes provides an opportunity to fundamentally rethink these traditional approaches to cloud migration and get the “best of all worlds” in the migration journey. If you’re looking to migrate your existing legacy workloads to the cloud, while minimizing code changes and taking advantage of innovative cloud-native technologies, this is the video you should watch!

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT

Meet some of the AIS Recruiting Team – they’re going to talk you through some of their top recommended job interview tips.

(Transcript)

My name is Francesca Hawk. My name is Rana Shahbazi. My name is Kathleen McGurk. My name is Jenny Wan. My name is Denise Kim.

Tip #1: Be Open, Transparent & Direct

I think it’s important for candidates to be authentic and transparent throughout the entire interview process.

Keeping the line of communication open through the interview process is really important for both sides. If you have other opportunities on the table, say that. The recruiters are your advocates and, in essence, kind of your best friend. Be direct – give us, you know, enough feedback – if you are not interested, if the commute is an issue, if you want more money, or if your clearance was an issue – just let us know.

Tip #2: Know What You Want

So before even searching for opportunities, you have to figure out what you’re looking for in a company. And then once you figure out what you’re looking for – whether it’s the culture of the company or the location of the company – definitely ask questions of the recruiter prior to the interview, so while you’re at the interview you have a little bit of that information.

Tip #3: Be On Time & Be Prepared

You always want to make sure you’re on time. Generally, you want to arrive about 15 minutes before your interview. Know where you’re going to park, and make sure you look up directions ahead of time. And just be prepared in general.

Preparation is extremely underrated in the interview process, so really do your research and get familiar with the company and its culture. Go online. Check out, you know, the general website, check out the job description. Make sure you’re aware of the skills and qualifications and what they’re really looking for. Glassdoor always provides really good reviews from current employees. I think the company website and certainly LinkedIn are a huge aspect – social media in general.

Tip #4: Ask Questions

Ask questions, or have questions ready to ask us. Ask about the process, ask about the expectations, who you’ll potentially be meeting with, and what the duration of the process could be. The company can’t provide information unless you ask for it.

You also have to make sure that you are interviewing the company just as much as they’re interviewing you. Ask the interviewers about the culture, because you’re going to get a different response from everybody, but if they all seem to check out or say the same thing, then that means the culture is pretty good.

Just make sure that you feel comfortable with the environment that, you know, you’re going to be working in.

Tip #5: Make Sure You Understand the Role

Really use the opportunity to understand the position and then to sell your strengths and also kind of tie it back into your accomplishments.

Make sure that you talk about what you were individually able to accomplish in a project – what you were personally able to bring to the table – and not necessarily what the team accomplished as a whole.

Tip #6: Show Your Interest

I think your presentation and the way you present yourself to the interviewers, and anybody you interact with in the interview process, is extremely important.

So not just what you say, but how you say it. Eye contact and body language say a lot about your interest in the position and the company as a whole.

Showing your interest makes a recruiter feel that you’re confident and that you can certainly do the role, and also that you are excited about this opportunity.

I think you should be excited about interviewing with a company that you’re interested in. That sounds silly, but going in excited matters, and I think that’s why body language and eye contact are all very important aspects.

Tip #7: Listen

People are so busy thinking about what they’re going to say next that they don’t actually pay attention to the questions being asked.

So making sure that you’re hearing what they’re saying and then taking the time to respond is really important.

Tip #8: Follow Up

Certainly, you know, asking for next steps is very helpful, and it’s also another way of expressing your interest. And definitely be responsive – I would say the general rule of thumb is a turnaround time within 12 hours. If you’re not interested, or if AIS isn’t your number one opportunity, that’s okay – we like to know that as well.

You definitely want to send a thank you note – it goes a long way and it shows you’re very interested in the company and it always leaves a great impression.

We’re Hiring!

AIS is always looking to connect with talented technologists who are passionate about learning and growing to staff exciting new projects for our commercial and federal clients. If you’re interested in working at AIS, check out our current career openings.

I was fortunate enough to attend the Microsoft BUILD 2019 Conference in Seattle this year – the company’s annual developer conference. There was a lot of excitement and a TON of great information to consume; from both the scheduled sessions and one-on-one conversations with product team representatives. So I’m wrapping up BUILD 2019 with some of my highlights below.

(Admittedly, these highlights skew toward technologies I’m currently using most frequently – I’ve grouped some of them into related categories. Also, I’m sure I’ve left out some high points, so I’ll plan to update this post as needed.)

AIS at BUILD 2019

However, before describing announcements or specific technology updates I noted, my number one high point of the week was the session that Vishwas Lele (AIS CTO and MS Azure MVP) gave on Tuesday: “Architecting Cloud-Native Apps with AKS and Cosmos DB.” This was the first year that Microsoft allowed a few select partners to lead sessions at BUILD, so I consider his inclusion recognition of the great work he is doing to advance cloud-native technologies on Azure. His session was packed, and attendees got their money’s worth of content related to AKS, Cosmos DB, and strategies for using cloud-native conventions for the consumption of PaaS services to build resilient, globally scalable applications.

AIS Team at Microsoft Build 2019

Kubernetes and AKS

Most of the discussion about compute on Azure included at least one point related to AKS (Azure Kubernetes Service). AKS was everywhere, and one consistent theme was AKS forming a significant portion of the Azure “compute” offering in the future. There were many exciting K8s-related announcements and demonstrations I had not previously heard; a few stood out to me.

Azure AI

The company’s vision related to Artificial Intelligence (AI) and Machine Learning offerings is stronger than it’s ever been. This story’s been developing for the past few years, and the vision hasn’t always been crystal clear. Over the past two years, I’ve often asked the question: “If I were going to start a new custom machine learning project in Azure, what services would I start with?” Usually, that answer has been “Azure Databricks” by default, but I’m coming around to the idea that there is now a viable alternative – or at least additional tools to consider.

The BUILD 2019 conference included great sessions and content focused on Azure AI, segmented into three high-level areas:

Knowledge Mining: This is concerned with using Azure services to help discover hidden insights from your content – including docs, images, and other media. Sessions and announcements in this area focused on enhancements to two key services: Azure Search and a new “Form Recognizer” service.

  • Azure (Cognitive) Search is now generally available: This service uses built-in AI capabilities to discover patterns and relationships, understand sentiment, extract key phrases, etc., without the need for specific data science expertise. Additionally, Azure allows consumers to customize results by applying custom-tuned ranking models.
  • Form Recognizer: A new service announced in public preview. This service exposes a REST API that accepts document content (PDF, images, etc.) and extracts text, key/value pairs, and tables. The idea is that “usable data” can be gleaned from content that has been hard to unlock in the past.

Machine Learning: A set of services that enable building and deploying custom machine learning models. This area represents many capabilities on the Azure platform; I found that at this year’s conference some great new additions and enhancements were highlighted that help to answer that first “where do I start?” question. Some highlights:

  • AutoML is in public preview: This service allows a consumer to choose the “best” machine learning algorithm for a provided data set and the desired outcome. It does this by accepting the data set from the user (in preview it accepts files stored in blob storage exclusively), automatically training several different models based on this data, comparing performance, and reporting the performance to the end user.
  • Visual Interface for Azure Machine Learning Service is in public preview: This service enables consumers to build ML models using a drag and drop interface, with the ability to drop down into Python code when needed for specific activities. In many ways, this is a reincarnation of the “Azure ML Studio” service of the past, without some of the limitations that held this service back (data size restrictions, etc.).
  • Choose your underlying compute: Choose where your models are trained and run, including the Machine Learning Services managed compute environment, AKS, Virtual Machines, Azure Databricks, HDInsight clusters, or in Azure Data Lake Analytics.

AI apps and agents: This area includes Azure Cognitive Services and Azure Bot Service. Azure Cognitive Services is a set of APIs that allow developers to call pre-built AI models to enhance their applications in the areas of computer vision, speech-to-text, and language. A few data points that stuck out to me:

  • A new Cognitive Services category – “Decision”: This category will initially include three services: 1) Content Moderator, 2) Anomaly Detector (currently in preview), and 3) Personalizer (also currently in Preview). Personalizer is a service to help promote relevant content and experiences for users.
  • “Conversation Transcription”: An advanced speech to text capability.
  • Container Support Expansion: The portfolio of Cognitive Services that can be run locally in a Docker container now includes Anomaly Detector, Speech-to-Text, and Text-to-Speech, in addition to the existing text analytics and vision containers.
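
For reference, running one of these containers locally follows a common pattern: pull the image, then pass a billing endpoint and key so usage can be metered back to Azure. A rough sketch (the image name, region, and key below are illustrative placeholders – check the container documentation for current values):

    # Run the Text Analytics sentiment container locally on port 5000
    docker run --rm -it -p 5000:5000 \
      mcr.microsoft.com/azure-cognitive-services/sentiment \
      Eula=accept \
      Billing=https://<region>.api.cognitive.microsoft.com/text/analytics/v2.1 \
      ApiKey=<your-cognitive-services-key>

Once the container is up, the same REST API that Azure hosts is served from localhost – useful for disconnected or data-sovereignty scenarios.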

.NET Platform

It’s amazing for me to consider that .NET is now 17 years old – the official release of .NET 1.0 was in February 2002! And although .NET is now on the “mature” end of the spectrum compared to many other active programming frameworks, it’s also true that many new .NET developers are still adding C#, VB.NET, F#, or other CLR-based languages to their repertoire. In fact, at BUILD 2019 the company cited “a million new active .NET developers” added in the last year alone.

One of the reasons for this is that the .NET team continues to innovate with offerings like .NET Core, first announced in 2014. .NET Core is the cross-platform development stack that runs across operating systems and has been the “future” of .NET for some time.

One of the major announcements that will affect .NET developers in the future is that the next release of .NET Core will be “.NET 5”. Yes, this means there will be one unified platform that includes legacy .NET Framework components, .NET Core, and Mono. After the .NET 5 release in 2020, there will be one annual release of .NET.

.NET Schedule

A few other .NET related data points that stuck out to me as items to investigate in more detail:

  • “Blazor” got a lot of session time and seems to be a real project now. For some people, the idea of running C# in the browser can devolve into a philosophical debate. However, it’s clear that Microsoft sees enough upside that it has moved the technology beyond an “experimental” phase into a fully-supported preview.
  • .NET for Apache Spark was released (open source), aimed at providing access to Apache Spark for .NET developers.
  • Frequent mentions of gRPC support in .NET Core. gRPC is the language-agnostic remote procedure call framework published by Google.
  • ML.NET 1.0: A cross-platform (.NET Core) framework for creating custom ML models using C# or F# – without having to leave the .NET ecosystem.
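
To give a feel for that last point, here is a minimal ML.NET 1.0 sketch – a regression model trained and invoked entirely in C#. The data file, columns, and values are hypothetical placeholders, not from any session demo:

    using System;
    using Microsoft.ML;
    using Microsoft.ML.Data;

    // Input schema for a hypothetical CSV of house sizes and prices.
    public class HouseData
    {
        [LoadColumn(0)] public float Size { get; set; }
        [LoadColumn(1)] public float Price { get; set; }
    }

    public class PricePrediction
    {
        [ColumnName("Score")] public float Price { get; set; }
    }

    public static class Program
    {
        public static void Main()
        {
            var ml = new MLContext(seed: 0);

            // Load the training data from a local file (placeholder path).
            var data = ml.Data.LoadFromTextFile<HouseData>("houses.csv", separatorChar: ',');

            // Featurize, then train a simple SDCA regression model.
            var pipeline = ml.Transforms.Concatenate("Features", nameof(HouseData.Size))
                .Append(ml.Regression.Trainers.Sdca(labelColumnName: nameof(HouseData.Price)));
            var model = pipeline.Fit(data);

            // Score a single new example in-process.
            var engine = ml.Model.CreatePredictionEngine<HouseData, PricePrediction>(model);
            var prediction = engine.Predict(new HouseData { Size = 2.3f });
            Console.WriteLine($"Predicted price: {prediction.Price}");
        }
    }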

Cosmos DB

BUILD 2019 also had a few great sessions and announcements related to Cosmos DB, Microsoft’s fully managed, global, multi-model database service. My highlights:

  • Best practices for Azure Cosmos DB: Data modeling, Partitioning, and RUs: A great session given by Deborah Chen and Thomas Weiss (program managers on the Cosmos DB team). Practical, actionable examples related to how to partition, how to minimize request units (RUs) for common database calls, etc. (a small sketch of one such pattern follows this list).
  • Etcd API: In Kubernetes, etcd is used to store the state and the configuration of clusters. Ensuring the availability, reliability, and performance of etcd is crucial to the overall health, scalability, elasticity, availability, and performance of a Kubernetes cluster. The etcd API in Azure Cosmos DB allows you to use Azure Cosmos DB as the backend store for Kubernetes on Azure.
  • Spark API: New (preview) native support for Spark through the Cosmos DB Spark API. This one is interesting to me because it has the potential to enable a “serverless experience for Apache Spark” – where the “cluster” is Cosmos DB.  I would pay close attention to the consumed RUs though!
  • Cosmos DB will support multi-model access in the future: Cosmos DB is a multi-model database, meaning you can access the data using many different APIs. However, until now this has been a choice made up front, at the creation of the database. In his “Inside Datacenter Architecture” session, Mark Russinovich announced that in the future, Cosmos DB will support multi-model access to the same data.
  • Jupyter notebooks running inside Azure Cosmos DB: announced in preview. A native notebook experience that supports all the Cosmos DB APIs and is accessed directly in the Azure Portal.
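
To tie the RU guidance above to code: the cheapest way to fetch a document is a point read – id plus partition key – rather than a query, and the SDK reports the RU charge on every response. A minimal sketch using the v3 .NET SDK (the account endpoint, key, database, container, and item names are all placeholders):

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    public class Order
    {
        public string id { get; set; }
        public string customerId { get; set; } // also the partition key
        public decimal total { get; set; }
    }

    public static class Program
    {
        public static async Task Main()
        {
            // Placeholder endpoint and key.
            var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>");
            Container container = client.GetContainer("store", "orders");

            // Point read: id + partition key. Roughly 1 RU for a 1 KB item,
            // versus many more for an equivalent cross-partition query.
            ItemResponse<Order> response =
                await container.ReadItemAsync<Order>("order-123", new PartitionKey("customer-42"));

            // The SDK reports the actual cost of every call.
            Console.WriteLine($"Request charge: {response.RequestCharge} RUs");
        }
    }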

Other Announcements

There were several other BUILD 2019 announcements, highlights, and data points that I’ll be investigating in the coming weeks.

If you have any questions, feel free to reach out to me on Twitter at @Bwodicka or contact the AIS team online.

I just returned from Microsoft BUILD 2019 where I presented a session on Azure Kubernetes Services (AKS) and Cosmos. Thanks to everyone who attended. We had excellent attendance – the room was full! I like to think that the audience was there for the speaker 😊 but I’m sure the audience interest is a clear reflection of how popular AKS and Cosmos DB are becoming.

For those looking for a 2-minute overview, here it is:

In a nutshell, the focus was to discuss combining a cloud-native service (like AKS) with a managed database service (like Cosmos DB).


We started with a discussion of Cloud-Native Apps, along with a quick introduction to AKS and Cosmos. We quickly transitioned into stateful app considerations and talked about new stateful capabilities in Kubernetes including PV, PVC, Stateful Sets, CSI, and Operators. While these capabilities represent significant progress, they don’t match up with external services like Cosmos DB.



One option is to use the Open Service Broker – it allows Kubernetes-hosted services to talk to external services using cloud-native tooling like svcat (the Service Catalog CLI).
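
As a rough sketch of that workflow (the class and plan names below are placeholders – the actual values come from the catalog that your broker, e.g. Open Service Broker for Azure, registers):

    # List the services the registered broker can provision
    svcat get classes

    # Provision a Cosmos DB instance through the broker (placeholder class/plan)
    svcat provision my-cosmos --class azure-cosmosdb --plan standard \
      --params-json '{"location": "eastus", "resourceGroup": "my-rg"}'

    # Bind it: credentials are written into a Kubernetes secret
    # that the application consumes like any other secret
    svcat bind my-cosmos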



External services like Cosmos DB can go beyond cluster SRE and offer “turn-key” SRE, in essence – specifically, geo-replication, API-based scaling, and even multi-master writes (eliminating the need to fail over).




Since the Open Service Broker is an open specification, your app remains mostly portable even when you move from one cloud provider to another. Open Service Broker does not deal with syntactic differences – say, a connection-string prefix difference between cloud providers. One way to handle these differences is to use Helm.

Learn more about my BUILD session:

Here you can find the complete recording of the session and slide deck: https://mybuild.techcommunity.microsoft.com/sessions/77138?source=sessions#top-anchor

Additionally, you can find the code for the sample I used here: https://github.com/vlele/build2019 


As developers, we spend a lot of time developing APIs. Sometimes it’s to expose data that we’ve transformed, or to ingest data from other sources. Not coincidentally, more and more companies are jumping into the realm of API management – Microsoft, Google, MuleSoft, and Kong all have products now that provide this functionality. With this much investment from the big players in the tech industry, API management is obviously a priority. Now, why would anyone want to use an API Management tool?

The answer is simple: It allows you to create an API Gateway that you can load all your APIs into, providing a single source to query and curate. API Management makes life as an admin, a developer, and a consumer easier by providing everything for you in one package.

Azure API Management

What does Azure API Management provide? Azure API Management (APIM) is a cloud-based PaaS offering available in both commercial Azure and Azure Government. APIM provides a one-stop-shop for API authority, with the ability to create products, enforce policies, and utilize a robust developer portal.

Not only can API Management integrate seamlessly with your existing Azure infrastructure, but it can also manage APIs that exist on-prem and in other clouds. APIM is also available in both the IL4 and IL5 environments in Azure Government, which allows for extensibility and management for those working in the public sector.

APIM leverages a few key concepts to provide its functionality to you as a developer, including:

  • Products
  • Policies
  • Developer Portal

From providing security to leveraging rate-limiting and abstraction, Azure API Management does it all for API consolidation and governance in Azure. Any API can be ingested, and it gets even easier when APIs follow the OpenAPI Format.

What Are Products?

Products are a layer of abstraction provided inside APIM. Products allow you to create subsets of APIs that are already ingested into the solution—allowing you to overlap the use of APIs while restricting the use of individual collections of APIs. This level of compartmentalization allows you to not only separate your APIs into logical buckets but also enforce rules on these products separately, providing one more layer of control.

Product access is very similar to Azure RBAC – with different groups created inside of the APIM instance. These groups are yet another way for APIM admins to encapsulate and protect their APIs, allowing them to add users already associated with the APIM instance into separate subsets. Users can also be members of multiple groups, so admins can make sure the right people have access to the right APIs stored in their APIM instance.

What Are Policies?

Policies are APIM’s way of enforcing certain restrictions and providing a more granular level of control. There is an entire breadth of policies available in APIM, ranging from simply disallowing usage of an API after it has been called five times, to authentication, logging, caching, and transformation of requests or responses from JSON to XML and vice versa. Policies are perhaps the most powerful function of APIM and drive the control that everyone wants and needs. Policies are written in XML and can be easily edited within the APIM XML editor. Policies can also leverage C# 7 syntax, which brings the power of the .NET Framework to your APIM governance.
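
As an illustrative sketch (not a drop-in policy – the specifics depend on your APIs), a policy document combining a few of the capabilities above might look like this, including a C# expression that computes the rate-limit key:

    <policies>
      <inbound>
        <base />
        <!-- Allow 5 calls per 60 seconds, keyed per subscription
             via a C# policy expression -->
        <rate-limit-by-key calls="5" renewal-period="60"
                           counter-key="@(context.Subscription.Id)" />
        <!-- Strip an internal header before the request reaches the backend -->
        <set-header name="X-Internal-Trace" exists-action="delete" />
      </inbound>
      <backend>
        <base />
      </backend>
      <outbound>
        <base />
        <!-- Transform backend JSON responses to XML for legacy consumers -->
        <json-to-xml apply="always" consider-accept-header="false" />
      </outbound>
    </policies>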

What Is the Developer Portal?

The Azure API Management Developer Portal is an improved version of the Swagger documentation that’s generated when you use the OpenAPI spec. The Developer Portal provides an area for developers to readily see APIs, products, and associated applications. The Portal also provides sample request bodies (no more guessing API request structures!) and responses, along with code samples in many different languages.

Finally, the portal also allows you to try API calls with customized request bodies and headers, so you have the ability to see exactly what kind of call you want to make. Along with all that functionality, you can also download your own copy of the OpenAPI Spec for your API after it’s been ingested into your instance.

Why Should I Use APIM?

Every business should be using some form of API Management. You’ll be providing yourself a level of control previously not available. By deploying an API Gateway, that extra layer of abstraction allows for much tighter control of your APIs. Once an API has been ingested, APIM provides many additional functionalities.

First, you can match APIs to products, providing a greater level of compartmentalization. Second, you can add different groups to each product, with groups being subsets of users (i.e. Back-end Devs, Billing Devs, etc.). Third, you automatically generate a robust developer portal, which provides all of the functionality of the Swagger portal, but with added features, such as code snippets.  Finally, APIM also has complete integration with Application Insights in commercial Azure, providing access to a world-class logging and visualization tool.

Azure API Management brings power to the user, and no API should be left out.

ServiceNow is a sophisticated information technology tool, and one that can be easily underestimated.

Looking back, I initially thought ServiceNow was limited to a helpdesk front-end intake tool.  

But lately, I’ve realized that while becoming the leading request-intake tool, ServiceNow must have also recognized its advantage in offering additional capabilities sought by organizations of all sizes. Perhaps ServiceNow came to realize that aggregations of problem-related requests may be viewed from a different perspective: as an excellent barometer for latent pain points. This prompted ServiceNow to pursue larger goals.

Inside/Out

Delving into ServiceNow permitted me to easily write code within SharePoint that reaches into ServiceNow to obtain or modify data in its tables. Additionally, I was also able to write code that let me reach outward from inside ServiceNow and obtain or modify data that resides in SharePoint.  

This two-way data traffic was easily accomplished through the magic of REST and the appropriate sprinkle of user authentication. Any environment enabled with REST extensions would work just as well.    
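
To give a feel for one direction of that traffic, here is a minimal C# sketch that reads records from a ServiceNow table through its REST Table API. The instance name, credentials, and query are placeholders, and a production integration would prefer OAuth over basic authentication:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    public static class Program
    {
        public static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Basic auth with placeholder credentials.
                var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password"));
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", token);
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // Query the incident table for active records (placeholder instance).
                var url = "https://<instance>.service-now.com/api/now/table/incident" +
                          "?sysparm_query=active=true&sysparm_limit=5";
                Console.WriteLine(await client.GetStringAsync(url));
            }
        }
    }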

So ServiceNow is highly “integrate-able,” as we can easily extend outward from inside it, as well as extend inward from virtually any environment. Add in the fact that ServiceNow is a Platform as a Service (PaaS), and my eyes start widening with a feeling that I’ve stumbled upon an extensible business-support tool that might be the long-awaited game changer in how transactional work is organized, managed, and used to determine worthwhile projects, driven by demonstrated and specific business needs.

What’s Next

My expedition into the ServiceNow platform is still fairly new, but it makes me stand back when I hear that some government organization leaders are considering replacing certain (perhaps many) SharePoint uses with the evolving tools offered by ServiceNow. Many large organizations have already adopted it, and the company has experienced extraordinary growth since its birth in 2003. In just 16 years, ServiceNow has grown to 6,000 employees worldwide with revenue of over $2.6 billion.

My experience using ServiceNow from the perspective of a full administrator has been extremely positive. As an administrator, I’ve been able to access anything I need through the ease of highly-simplified navigation.  The vertical navigation segment lets me expand by clicking on high-level items. Alternatively, the navigator allows me to filter via a field that eliminates anything that doesn’t match the string I’m keying and surfaces items from the hierarchy for anything that does match. The result is a stunningly quick way to find any table, workflow, or tool built into ServiceNow for administrators. In fact, working as an administrator has been much more intuitive than entering a ServiceNow ticket for any specific request.