Late last Friday, the news of the Joint Enterprise Defense Infrastructure (JEDI) contract award to Microsoft Azure sent seismic waves through the software industry, government, and commercial IT circles alike.

Even as the dust settles on this contract award, including the inevitable requests for reconsideration and protest, DoD’s objectives from the solicitation are apparent.

DoD’s JEDI Objectives

Public Cloud is the Future DoD IT Backbone

A quick look at the JEDI statement of objectives illustrates the government’s comprehensive enterprise expectations with this procurement:

  • Fix fragmented, largely on-premises computing and storage solutions – This fragmentation makes it impossible to make data-driven decisions at “mission-speed”, negatively impacting outcomes. Moreover, the rising level of cyber-attacks requires a comprehensive, repeatable, verifiable, and measurable security posture.
  • Commercial parity with cloud offerings for all classification levels – A cordoned-off, dedicated government cloud that lags in features is no longer acceptable. Furthermore, it is acceptable for the unclassified data center locations not to be dedicated to a cloud exclusive to the government.
  • Globally accessible and highly available, resilient infrastructure – The need for infrastructure that is reliable, durable, and can continue to operate despite catastrophic failure of pieces of infrastructure is crucial. The infrastructure must be capable of supporting geographically dispersed users at all classification levels, including in closed-loop networks.
  • Centralized management and distributed control – Apply security policies; monitor security compliance and service usage across the network; and accredit standardized service configurations.
  • Fortified Security that enables enhanced cyber defenses from the root level – These cyber defenses are enabled through the application layer and down to the data layer with improved capabilities including continuous monitoring, auditing, and automated threat identification.
  • Edge computing and storage capabilities – These capabilities must be able to function totally disconnected, including provisioning IaaS and PaaS services and running containerized applications, data analytics, and processing data locally. These capabilities must also provide for automated bidirectional synchronization of data storage with the cloud environment when a connection is re-established.
  • Advanced data analytics – An environment that securely enables timely, data-driven decision making and supports advanced data analytics capabilities such as machine learning and artificial intelligence.

Key Considerations: Agility and Faster Time to Market

From its inception, with the September 2017 memo announcing the formation of the Cloud Executive Steering Group, through the release of the RFP in July 2018, DoD has been clear: it wanted a single cloud contract. It deemed a multi-cloud approach too slow and costly. The Pentagon’s Chief Management Officer defended the single-cloud approach by suggesting that a multi-cloud contract “could prevent DoD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable”, resulting in “additional costs and technical complexity on the Department in adopting enterprise-scale cloud technologies under a multiple-award contract. Requiring multiple vendors to provide cloud capabilities to the global tactical edge would require investment from each vendor to scale up their capabilities, adding expense without commensurate increase in capabilities”.

A Single, Unified Cloud Platform Was Required

The JEDI solicitation expected a unified cloud platform that supports a broad set of workloads, with detailed requirements for scale and long-term price projections.

  1. Unclassified webserver with a peak load of 400,000 requests per minute
  2. High volume ERP system – ~30,000 active users
  3. IoT + Tactical Edge – A set of sensors that captures 12 GB of High Definition Audio and Video data per hour
  4. Large data set analysis – 200 GB of storage per day, 4.5 TB of online result data, 4.5 TB of nearline result data, and 72 TB of offline result data
  5. Small form-factor data center – 100 PB of storage with 2,000 cores, deliverable within 30 days of request and able to fit inside a U.S. military cargo aircraft

Massive Validation for the Azure Platform

The fact that the Azure platform is the “last cloud standing” at the end of the long and arduous selection process is massive validation from our perspective.

As other bidders have discovered, much to their chagrin, the capabilities described above are not developed overnight. It’s a testament to Microsoft’s sustained commitment to meeting the wide-ranging requirements of the JEDI solicitation.

Lately, almost every major cloud provider has invested in bringing the latest innovations in compute (GPUs, FPGAs, ASICs), storage (very high IOPS, HPC), and network (VMs with 25 Gbps bandwidth) to their respective platforms. In the end, what I believe differentiates Azure is a long-standing focus on understanding and investing in enterprise IT needs. Here are a few examples:

  • Investments in Azure Stack started in 2010 with the announcement of the Azure Appliance. It took over seven years of learnings to finally run Azure completely in an isolated mode. Since then, investments in Data Box Edge and Azure Sphere, along with a commitment to hybrid solutions, have been a key differentiator for Azure.
  • With 54 Azure regions worldwide (available in 140 countries), including dedicated Azure Government regions – US DoD Central, US DoD East, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia, US Sec East, US Sec West – the Azure team has placed the highest priority on establishing a global footprint. Additionally, having a common team that builds, manages, and secures Azure’s cloud infrastructure means that even the public Azure services carry DoD CC SRG IL 2 and FedRAMP Moderate and High designations.
  • Whether it is embracing Linux or Docker, providing the highest number of contributions to GitHub projects, or open-sourcing the majority of  Azure SDKs and services, Microsoft has demonstrated a leading commitment to open source solutions.
  • Decades of investment in Microsoft Research, including the core Microsoft Research Labs and Microsoft Research AI, have meant that Microsoft has the most well-rounded story for advanced data analytics and AI.
  • Documentation and ease of use have been accorded the highest engineering priorities. Case in point: rebuilding Azure docs entirely on GitHub, which has enabled an open feedback mechanism powered by GitHub issues.
Microsoft HQ in Redmond

After much anticipation, the US Department of Defense (DoD) has awarded the $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract for cloud computing services to Microsoft over Amazon. This effort is crucial to the Pentagon’s push to modernize core technology and improve networking capabilities, and the decision on which cloud provider was the best fit was not taken lightly.

Current military operations run on software systems and hardware from the 80s and 90s, and the DoD has been dedicated to moving forward with connecting systems, streamlining operations, and enabling emerging technologies through cloud adoption.

Microsoft has always invested heavily back into its products, which is the leading reason we went all-in on our partnership, strengthening our capabilities and participating in numerous Microsoft programs since the inception of the partner program in 1994.

In our experience, one of the many differentiators for Microsoft Azure is its global networking capability. Azure’s global footprint is so vast that it includes 100K+ miles of fiber and subsea cables and 130 edge locations connecting over 50 regions worldwide – more regions than AWS and Google combined. As networking is a vital capability for the DoD, the Department is investing heavily in connecting its bases and improving networking speeds, data sharing, and operational efficiencies, all without compromising security and compliance.

Pioneering Cloud Adoption in the DoD

We are fortunate to have been on the front lines of Azure from the very beginning. AIS has been working with Azure since it started in pre-release under the code name Red Dog in 2008. We have been a leading partner in helping organizations adopt Azure since it officially came to market in 2010, with experience in numerous large, complex projects across highly regulated commercial and federal enterprises ever since.

When Azure Government came along for pre-release in the summer of 2014, AIS was among the few partners invited to participate and led all partners in client consumption. As the first partner to successfully support Azure Gov IL5 DISA Cloud Access Point (CAP) connectivity and ATO for the DoD, we’ve taken our experience and developed a reliable framework to help federal clients connect to the DISA CAP and expedite the Authority to Operate (ATO) process.

We have led important early adoption projects that show the path forward with Azure Government in the DoD, including the US Air Force, US Army EITaaS, Army Futures Command, and the Office of the Under Secretary of Defense for Policy. These experiences have allowed us to show proven success moving DoD customers’ IMPACT Level 2, 4, 5, and (soon) 6 workloads to the cloud quickly and thoroughly with AIS’ DoD Cloud Adoption Framework.

To enable faster cloud adoption and cloud-native development, AIS pioneered DevSecOps practices and built Azure Blueprints that help automate federal regulatory compliance and the ATO process. We were also the first to achieve the Trusted Internet Connections (TIC) and DoD Cyber Security Service Provider (CSSP) requirements, among others.

AIS continues to spearhead the development of processes, best practices, and standards across cloud adoption, modernization, and data & AI. It’s an exceptionally exciting time to be a Microsoft partner, and we are fortunate enough to be at the tip of the spear alongside the Microsoft product teams and enterprises leading the charge in cloud transformation.

Join Our Growing Team

We will continue to train, mentor, and support passionate cloud-hungry developers and engineers to help us face this massive opportunity and further the mission of the DoD.


I recently had the privilege and opportunity to attend this year’s DEF CON conference, one of the world’s largest and most notable hacker conventions, held annually in Las Vegas. Deciding which talks and sessions to attend can be a logistical nightmare for a conference with anywhere between 20,000 and 30,000 people in attendance, but I pinpointed the ones I felt would be most beneficial for me and AIS.

During the conference, Tanya Janca, a cloud advocate for Microsoft, and Teri Radichel from 2nd Sight Lab gave a presentation on “DIY Azure Security Assessment” that dove into how to verify the security of your Azure environments. More specifically, they went into detail on using Azure Security Center and on setting scope, policies, and threat protection. In this post, I want to share the takeaways from the talk that I found most helpful.

Security in Azure

Security is a huge part of deploying any implementation in Azure: ensuring fail-safes are in place to stop attacks before they occur. Below, I break down the topics I took away that can help you better understand and perform your own security assessment in Azure, including looking for vulnerabilities and gaps.

The first step in securing your Azure environment is to define the scope that you are trying to assess and protect. This could also include things external to Azure, such as hybrid solutions with on-premises systems. Scope items include the following:

  • Data Protection
  • Application Security
  • Network Security
  • Access Controls
  • Cloud Security Controls
  • Cloud Provider Security
  • Governance
  • Architecture

The second step is using the tools and features within Azure to accomplish this objective. Tanya and Teri started by listing a few key features that every Azure implementation should use, including:

  • Turning on Multi-Factor Authentication (MFA)
  • Identity and Access Management (IAM)
    • Roles in Azure AD
    • Policies for access
    • Service accounts
      • Least privilege
    • Account Structure and Governance
      • Management Groups
      • Subscriptions
      • Resource Groups

A key item I took away from this section was allowing access at the “least privilege” level using service accounts, meaning only the required permissions should be granted, when needed, using accounts that are not for administrative use. Along with tightening access, it’s also important to understand at what level to manage this governance. Granting access at the management group level casts a wider and more manageable net. A more granular level, such as the subscription level, could help with segregation of duties, but this depends heavily on the current landscape of your groups and subscription model.
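
To make that concrete, here is a minimal, hedged sketch of auditing who holds role assignments at a given scope before tightening toward least privilege. It assumes the azure-identity and azure-mgmt-authorization Python packages; the subscription ID and resource group name are placeholders, and model/attribute shapes vary between SDK versions, so treat it as a starting point rather than a definitive implementation.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope can be a management group, subscription, or resource group.
scope = f"/subscriptions/{subscription_id}/resourceGroups/my-resource-group"

# List every role assignment visible at this scope so you can spot
# over-privileged accounts before granting anything new.
for assignment in client.role_assignments.list_for_scope(scope):
    print(assignment.as_dict())
```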

The Center for Internet Security (CIS)

So maybe now you understand the scope at which you want to assess the security of your Azure environment, but you don’t know where to start. This is where the Center for Internet Security (CIS) can come into play. CIS provides crowd-sourced security best practices and threat-prevention guidance, with members including corporations, governments, and academic institutions. It was initially intended for on-premises use; however, as the cloud has grown, so has the need for increased security. CIS can help you decide which best practices to follow based on known threat vectors; these include 20 critical controls broken down into the following three sections:

Basic Center for Internet Security Controls

Examples of these CIS control practices could be:

  • Inventory and Control of Hardware Assets by utilizing a software inventory tool
  • Controlled Use of Administrative Privileges by setting up alerts and logs

An additional resource is the CIS Benchmarks, which provide best-practice recommendations for various platforms and services, such as Microsoft SQL Server or IIS. Plus, they’re free! Another cool offering from CIS is in the Azure Marketplace: pre-defined system images that are already hardened to these best practices.

CIS Offers in Azure Marketplace

The figure below shows an example benchmark for a control practice that gives you the recommendation to “Restrict access to Azure AD administration portal.” It then outputs audit steps that show what needs to be done to fall within the scope of that best practice.

Control Practice to Restrict access to Azure AD administration portal

Azure Security Center (ASC)

In this next section, I detail the features of Azure Security Center (ASC) that I took away from this presentation and how to get started using ASC. The figure below is of the dashboard. As you can see, there are a lot of options inside the ASC dashboard, including sections such as Policy & Compliance and Resource Security Hygiene. The settings inside of those can dive deeper into resources all the way down to the VM or application level.

Azure Security Center Dashboard

Making sure you have ASC turned on should be your first step before implementing the features within it. The visuals you get in ASC are very helpful, including things like subscription coverage and your secure score. Policy management is also an ASC feature, with pre-defined and custom rules to keep your environment within the desired compliance levels.

Cloud Networking

Your network design in Azure plays a crucial role in securing against incoming attacks, and it involves more than just closing ports. When you build a network with security in mind, you not only limit your attack surface but also make spotting vulnerabilities easier, all while making it harder for attackers to infiltrate your systems. Using Network Security Groups (NSGs) and routes can also help by allowing only the required ports, and you can utilize Network Watcher to test the effective security rules. Other best practices include not making RDP, SSH, and SQL accessible from the internet. At a higher level, here are some more networking features and options for securing Azure (a short NSG example in Python follows the list):

  • Azure Firewall
    • Protecting storage accounts
    • Using logging
    • Monitoring
  • VPN/Express Route
    • Encryption between on-premises and Azure
  • Bastion Host
    • Access to host using jump box feature
    • Heavy logging
  • Advanced Threat Protection
    • Alerts of threats in low, medium and high severity
    • Unusual activities, such as large numbers of storage files being copied
  • Just in Time (JIT)
    • Access host only when needed in a configured time frame.
    • Select IP Ranges and ports
  • Azure WAF (Web Application Firewall)
    • Layer 7 firewall for applications
    • Utilize logging and monitoring
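
As referenced above, here is a minimal sketch of one of these practices: a high-priority NSG rule that denies inbound RDP from the internet. It assumes the azure-identity and azure-mgmt-network Python packages; the subscription, resource group, and NSG names are placeholders, and exact method names can vary slightly between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

subscription_id = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Deny inbound RDP (TCP 3389) from the internet on an assumed web-tier NSG.
rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="Internet",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="3389",
    access="Deny",
    direction="Inbound",
    priority=100,
)

poller = client.security_rules.begin_create_or_update(
    "my-resource-group", "web-tier-nsg", "deny-rdp-from-internet", rule
)
print(poller.result().provisioning_state)
```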

An additional design factor to consider is the layout of your network architecture. Keeping your resources divided into tiers can be a great security practice to minimize risk to each component. An example would be a three-tier design, which divides a web application into three tiers (VNets). In the figure below you can see a separate web tier, app tier, and data tier. This is much more secure because the front-end web tier can still access the app tier but cannot talk directly to the data tier, which helps minimize risk to your data.

Three Tier Network Architecture: web tier, app tier, and data tier

Logging and Monitoring

Getting the best data and analytics to properly monitor and log your data is an important part of assessing your Azure environment. For those in security roles, liability is an important factor in the ‘chain of custody’. When handling security incidents, extensive logging is required to ensure you understand the full scope of the incident. This includes having logging and monitoring turned on for the following recommended items:

  • IDS/IPS
  • DLP
  • DNS
  • Firewall/WAF
  • Load Balancers
  • CDN

The next way to gather even more analytics is to use a SIEM (Security Information and Event Management) solution like Azure Sentinel. This adds another layer of protection to collect, detect, investigate, and respond to threats from on-premises systems to multi-cloud vendors. An important note here is to make sure you tune your SIEM, so you are detecting threats accurately and not diluting the alerts with false positives.
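
As a hedged illustration of that tuning point, the sketch below counts a week of alerts by severity and rule name in a Log Analytics workspace, which is one way to spot noisy rules that are diluting your alerts. It assumes the azure-identity and azure-monitor-query packages, a placeholder workspace ID, and that the SecurityAlert table is populated in your workspace.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count the last week of alerts by severity and rule name to find noisy rules.
query = """
SecurityAlert
| summarize Count = count() by AlertSeverity, AlertName
| order by Count desc
"""

response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(row)
```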

Advanced Data Security

The final point I want to dive into is Advanced Data Security. Protecting data should be at the top of any organization’s list of priorities. Classifying your data is an important first step in understanding its sensitivity, and this is where Data Discovery & Classification can help by labeling the sensitivity of your data. Next is vulnerability assessment scanning, which helps assess the risk level of your databases and minimize leaks. Overall, these cloud-native tools are just another great way to help secure your Azure environment.

Conclusion

In closing, Azure has a plethora of tools at your disposal within Azure Security Center to do your own security assessment and protect yourself, your company, and your clients from future attacks. ASC can become your hub for defining and maintaining a compliant security posture for your enterprise. Tanya and Teri go into great detail on the steps to take and even supply a checklist you can follow yourself to assess an Azure environment.

Checklist

  1. Set scope & only test what’s in scope
  2. Verify account structure, identity, and access control
  3. Set Azure policies
  4. Turn on Azure Security Center for all subs
  5. Use cloud-native security features – threat protection and adaptive controls, file integrity monitoring, JIT, etc.
  6. Follow networking best practices: NSGs, routes, access to compute and storage, Network Watcher, Azure Firewall, ExpressRoute, and Bastion host
  7. Always be on top of alerts and logs for Azure WAF and Sentinel
  8. VA everything, especially SQL databases
  9. Encryption for your disks and data (in transit and at rest)
  10. Monitor all that can be monitored
  11. Follow the Azure Security Center recommendations
  12. Then call a Penetration Tester

I hope you found this post to be helpful and make you, your company, and your clients’ experience on Azure more secure. For the full presentation, including a demo on Azure Security Center, check out this link. 

This case study was featured on Microsoft. Click here to view the case study.

When GEICO sought to migrate its sales mainframe application to the cloud, it had two choices. The first was to “rehost” its applications, which involved recompiling the code to run in a mainframe emulator hosted in a cloud instance. The second was to “rebuild” the infrastructure and replace the existing mainframe functionality with equivalent features built using cloud-native capabilities.

The rehost approach is arguably the easier of the two alternatives. Still, it comes with a downside – the inability to leverage native cloud capabilities and benefits like continuous integration and deployment that flow from it.

Even though a rebuild offers the benefits of native cloud capabilities, it is riskier as it adds cost and complexity. GEICO was clear that it not only wanted to move away from the mainframe codebase but also to significantly increase agility through frequent releases. At the same time, the risks of the “rebuild” approach, involving a million lines of COBOL code and 16 subsystems, were staring them in the face.

This was when GEICO turned to AIS. GEICO hired AIS for a fixed-price, fixed-time engagement to “rebuild” the existing mainframe application. AIS extracted the business logic from the current system and reimplemented it from the ground up using a modern cloud architecture, baking in the principles of DevOps and CI/CD from inception.

Together, the GEICO and AIS teams achieved the best of both worlds – a risk-mitigated “rebuild” approach to mainframe modernization. Check out the full success story on Microsoft: GEICO finds that the cloud is the best policy after seamless modernization and migration.


Azure Cognitive Services is Microsoft’s offering for enabling artificial intelligence (AI) applications in daily life: a collection of AI services that currently offer capabilities around speech, vision, search, language, and decision. These services are easy to integrate and consume in your business applications, and they bring together powerful capabilities that apply to numerous use cases.

Azure Personalizer is one of the services in the suite of Azure Cognitive Services, a cloud-based API service that allows you to choose the best experience to show to your users by learning from their real-time behavior. Azure Personalizer is based on cutting-edge technology and research in the areas of Reinforcement Learning and uses a machine learning (ML) model that is different from traditional supervised and unsupervised learning models.

This blog is divided into three parts. In part one, we will discuss the core concepts and architecture of Azure Personalizer Service, Feature Engineering, and its relevance and importance. In part two, we will go over a couple of use cases in which Azure Personalizer Service is implemented. Finally, in part three, we will list out recommendations and capacities for implementing solutions using Personalizer.

Core Concepts & Architecture

At its core, Azure Personalizer takes a list of items (e.g. list of drop-down choices) and their context (e.g. Report Name, User Name, Time Zone) as input and returns the ranked list of items for the given context. While doing that, it also allows feedback submission regarding the relevance and efficiency of the ranking results returned by the service. The feedback (reward score) can be automatically calculated and submitted to the service based on the given personalization use case.

Azure Personalizer uses the feedback submitted for continuous improvement of its ML model. It is essential to come up with well-thought-out features that represent the given items and their context most effectively, per the objective of the personalization use case. Some of the use cases for Personalizer are content highlighting, ad placement, recommendations, content filtering, automatic prioritizing, UI usability improvements, intent clarification, bot traits and tone, notification content and timing, contextual decision scenarios, rapidly changing content (e.g. news, live events), etc.
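
To make the Rank call concrete, here is a minimal, hedged sketch over REST. The endpoint, key, and the report/user features are illustrative placeholders rather than values from an actual implementation, and the request/response field names follow the v1.0 Personalizer API.

```python
import uuid

import requests

# Placeholders: your Personalizer resource endpoint and key.
endpoint = "https://<your-personalizer-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# Context features describe the situation; actions are the items to rank.
rank_request = {
    "eventId": str(uuid.uuid4()),
    "contextFeatures": [
        {"user": {"name": "jdoe", "timeZone": "EST"}},
        {"report": {"name": "Monthly Sales"}},
    ],
    "actions": [
        {"id": "chart-view", "features": [{"style": "visual"}]},
        {"id": "table-view", "features": [{"style": "tabular"}]},
    ],
}

response = requests.post(f"{endpoint}/personalizer/v1.0/rank", headers=headers, json=rank_request)
result = response.json()
print("Top-ranked action:", result["rewardActionId"], "for event", result["eventId"])
```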

There is a wide range of applications for the Personalizer service – in general, every use case where ranking a set of options makes sense. Its application is not limited to a simple static list of items to be ranked; you are limited only by the ability of feature engineering to define an item and its context, which can be anything from quite simple to quite complex. What makes Personalizer’s scope wide and effective is:

  • Definition of items (called Actions) and their context with features
  • No dependency on prior historically labeled data
  • Real-time optimization with consumption of feedback in the form of reward scores
  • Personalizer has the notion of exploitation (utilizing the ML model’s recommendation) as well as exploration, i.e. using an alternate approach (based on the Epsilon Greedy algorithm) to determine the item ranking instead of the ML model’s recommendation
  • Exploration ensures Personalizer continues to deliver good results even as user behavior changes, and avoids model stagnation, drift, and ultimately lower performance

The following diagram shows the architectural flow and components of the Personalizer service, followed by a description of each labeled component.

Azure Services Personalizer

  1. The user interacts with the site/application, features related to the actions and context are sent to the Personalizer in a Rank call.
  2. Personalizer decides whether to exploit the current model or explore new choices. Explore setting defines the percentage of Rank calls to be used for exploring.
  3. Personalizer currently uses Vowpal Wabbit as the foundation for machine learning. This framework allows maximum throughput and lowest latency when calculating ranks and training the model with all events.
  4. Personalizer exploration currently uses an algorithm called epsilon greedy to discover new choices.
  5. Ranking results are returned to the user as well as sent to the EventHub for later correlation with reward scores and training of the model.
  6. The user chooses an action (item) from the ranking results, and the reward score is calculated and submitted to the service in one or multiple calls using the Personalizer Rewards API. The total reward score is a value between -1 and 1 (see the short reward sketch after this list).
  7. The ranking results and reward scores are sent to the EventHub asynchronously and correlated based on EventID. The ML model is updated based on the correlation results, and the inference engine is updated with the new model.
  8. Training service updates the AI model based on the learning loops (cycle of ranking results and reward) and updates the engine.
  9. Personalizer provides an offline evaluation of the service based on the past data available from the ranking calls (learning loops). It helps determine the effectiveness of features defined for actions and context. This can be used to discover more optimized learning policies.

Learning policy determines the specific hyperparameters for the model training. These can be optimized offline (using offline evaluation) and then used online. These can be imported/exported for future reference, re-use, and audit.

Feature Engineering

Feature engineering is the process of producing data items that better represent the underlying problem to predictive models, resulting in improved model accuracy on unseen data; it is the turning of raw input data into things the model can understand. Estimates show that 60–70% of ML project time is spent on feature engineering.

Good-quality features for context and actions are the foundation that determines how effectively the Personalizer service will perform predictions and drive the highest reward scores, so due attention needs to be paid to this aspect of implementing Personalizer. In the field of data science, feature engineering is a complete subject on its own. Good features should:

  • Be related to the objective
  • Be known at prediction-time
  • Be numeric with meaningful magnitude
  • Have enough examples
  • Bring human insight into the problem

It is recommended to have enough features defined to drive personalization, and that these be of diverse densities. High-density features help Personalizer extrapolate learning from one item to another. A feature is dense if many items are grouped into a few buckets (e.g. nationality of a person), and sparse if items spread across a large number of buckets (e.g. book title). One of the objectives of feature engineering is to make features denser; for example, a timestamp down to the second is very sparse, but it can be made dense (effective) by classifying it into “morning”, “midday”, “afternoon”, etc., as in the small sketch below.
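
As a small illustration of that densification idea, the sketch below collapses a sparse timestamp into a handful of time-of-day buckets; the exact bucket boundaries are arbitrary and chosen only for illustration.

```python
from datetime import datetime

def time_of_day_bucket(ts: datetime) -> str:
    """Collapse a sparse timestamp into a dense time-of-day feature."""
    hour = ts.hour
    if hour < 6:
        return "night"
    if hour < 12:
        return "morning"
    if hour < 14:
        return "midday"
    if hour < 18:
        return "afternoon"
    return "evening"

print(time_of_day_bucket(datetime(2019, 11, 4, 9, 30)))  # -> "morning"
```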

Personalizer is flexible and adapts to the unavailability of some features for some items (actions), and to the addition or removal of features over time. For Personalizer, features can be categorized and grouped into namespaces as long as they are valid JSON objects.

In the next part of this blog post, we will go over a couple of use cases for which Personalizer was implemented, looking at features, reward calculation, and their test run results. Stay tuned!

One of the key experiences recommended during anyone’s time as an undergraduate is doing an internship. It not only gives a person their first glimpse of corporate life but is also essential for boosting an individual’s confidence.

In the third year of my bachelor’s, we were shortlisted for the internship process when Applied Information Sciences (AIS) decided to visit our University for campus placement. Fortunately, we got selected for the pilot internship batch at AIS.

Applied Information Sciences is a top cloud consulting firm that provides software and systems engineering services to government agencies and commercial enterprises.

The six-week program commenced on June 6th, 2019 and we were assigned our supervisor, Mr. Manish Kumar. The program started with an interactive session with Manish and was followed by a tour of the office.

We were expected to report back to our supervisor at the end of each day with questions and discuss what we had learned. The next day we were assigned our internship project.

We were expected to work on infrastructure as code and apply our knowledge of cloud computing to solve problems and create ARM templates. We also worked with Terraform and Pulumi. Manish would brief us on a topic that was being worked on and assign us the task of gathering information about it; from there we would work on the topic until completion.

The most exciting part of the day was the one-on-one session with Manish where all of us would discuss what we learned that day. This session was important because it gave us a chance to share the information we had gathered with each other as well as clear any questions that came up with Manish.

“This program is amazing. For anyone looking to do work in the cloud computing domain, AIS is the place to do it. I’ve loved every moment of this experience and would do it all over again in a heartbeat. My favorite part would be the interactive session we had every day and the freedom that was given to execute our ideas in a creative way”
-Devyanshi Tiwari

Each day brought us a new challenge and we gained an understanding of how to finish a task within a stipulated period of time. From the very first week, we were treated as regular employees of the company, not just interns. Even during the final evaluation, Manish asked us to present our projects as if we were demonstrating to a client. Over the six weeks, we developed friendships with our co-workers and got constant feedback from Manish. At the same time, we were always provided with enough scope to incorporate our ideas into our project in the way we wanted to. We were always given the space to express our ideas and thoughts without any hesitation, which is such a vital step to learn and grow.

“The best thing about my internship in AIS is that I got a lot of hands-on experience and a chance to work in my area of interest”
-Souradeep Banerjee

Our six weeks at AIS not only allowed us to grow personally but also helped us to gain new skills and information in cloud computing and Microsoft Azure. Also, we gained a better understanding of the industry and met industry veterans.

Most importantly the program gave us a new sense of professionalism, the importance of working in a team, and a clearer view of what it meant to be in the professional world and what shoes we are expected to fill in the near future.

“This internship not only enhanced my technical skills but also brought great improvement in my communication and idea-sharing skills”
-Riya Agrahari

We would advise everyone to take the opportunity and do an internship. There is much to gain from it on both a professional and personal level.


Last week, AIS was invited to speak at a DIR one-day conference for public sector IT leaders with the theme Journey to the Cloud: How to Get There.

AIS’ session was titled Improve Software Velocity and Portability with Cloud Native Development. As we know, with a significant increase in the adoption of containers, cloud-native development is emerging as an important trend with customers who are looking to build new applications or lift and reshape existing applications in the cloud.

In this session, we discussed the key attributes of the cloud-native approach and how it improves modularity, elasticity, robustness, and – at the same time – helps avoid cloud vendor lock-in.

Cloud-Native Attributes

  • Declarative formats to setup automation
  • Clean contract with the underlying operating system, offering portability
  • Minimize divergence between development and production, enabling continuous deployment
  • Scale-up without significant changes to tooling, architecture, or development practices
  • Containers, service meshes, and microservices
  • Public, private, and hybrid cloud

Multi-Cloud Deployment Demo

To demonstrate cross-cloud portability, we conducted a live demonstration of an application being deployed to the Google Cloud Platform and Microsoft Azure during the session.

Note that the exact same application (binaries and Kubernetes deployment manifests) was used across the two clouds. The only difference was the storage provisioner, as shown in the diagram below.

deployment on multi cloud

We thank Texas DIR forum organizers and attendees for a great conference!

Related Resources

For additional information on cloud-native development refer to links below:

If you’re looking for an intelligent cloud-native Security Information and Event Management (SIEM) solution that manages all incidents in one place, Azure Sentinel may be a good fit for you.

Not only does Azure Sentinel provide intelligent security analytics and threat intelligence, but it’s also considered a Security Orchestration and Automation Response (SOAR) solution, meaning it will collect data about security threats and you can automate responses to lower-level security events without the traditionally manual effort required. You can extend this solution across data sources by integrating Azure Sentinel with enterprise tools, like ServiceNow. There are also services offered at no additional cost, such as User Behavior Analytics (UBA), petabyte-scale daily ingestion, and Office 365 data ingestion, making Azure Sentinel even more valuable.

First Impression

After opening Azure Sentinel from the Azure portal, you will be presented with the below items:

Azure sentinel first view

Conceptually, Azure Sentinel has four core areas.

Azure Sentinel Four Core Areas

  • Collect – By using connections from multiple vendors or operating systems, Azure Sentinel collects security events and data and keeps them for 31 days by default. This is extendable up to 730 days.
  • Detect – Azure Sentinel has suggested queries; you can also find samples or build your own. Another option is Azure Notebooks, which is more interactive and lets you apply your own data science analysis.
  • Investigate – Triage using the same detection methodology in conjunction with event investigation. Later, a case is created for the incident.
  • Respond – Finally, responding can be manual or automated with the help of Azure Sentinel playbooks. You can also use graphs, dashboards, or workbooks for presentation.

For a better understanding, the example flow below of what happens behind the scenes is helpful.

Steps in Azure Sentinel

How do I enable Azure Sentinel?

If you already have an Azure Log Analytics workspace, you are one click away from Azure Sentinel. You need contributor RBAC permission on the subscription containing the Log Analytics workspace to which Azure Sentinel will bind itself.

Azure Sentinel has some prebuilt dashboards, and you are able to share them with your team members.

You can also enable the integration of security data from Security Center > Threat Detection > Enable integration with other Microsoft security services

Azure Sentinel has a variety of built-in connectors that collect data and process it with its artificial intelligence empowered processing engine. Azure Sentinel can relate your events to well-known or unknown anomalies (with the help of ML)!

Below is a sample connection, which offers two out-of-the-box dashboards:

sample connection in Azure Sentinel

All connections have a fair amount of instructions, which usually allows for a fast integration. A sample of an AWS connector can be found here.

Azure Sentinel has thirty out-of-the-box dashboards that make it easy to create an eloquent dashboard; however, built-in dashboards only work if you have configured the related connection.

Built-In Ready to Use Dashboards:

  • AWS Network Activities
  • AWS User Activities
  • Azure Activity
  • Azure AD Audit logs
  • Azure AD Sign-in logs
  • Azure Firewall
  • Azure Information Protection
  • Azure Network Watcher
  • Check Point Software Technologies
  • Cisco
  • CyberArk Privileged Access Security
  • DNS
  • Exchange Online
  • F5 BIG-IP ASM F5
  • FortiGate
  • Identity & Access
  • Insecure Protocols
  • Juniper
  • Linux machines
  • Microsoft Web Application Firewall (WAF)
  • Office 365
  • Palo Alto Networks
  • Palo Alto Networks Threat
  • SharePoint & OneDrive
  • Symantec File Threats
  • Symantec Security
  • Symantec Threats
  • Symantec URL Threats
  • Threat Intelligence
  • VM insights

A Sample Dashboard:

One of the most useful IaaS monitoring services that Azure provides is VMInsights, or Azure Monitor for VMs. Azure Sentinel has a prebuilt VMInsight Dashboard. You can connect your VM to your Azure Log Analytics Workspace, then enable VMInsights from VM > Monitoring > Insights. Make sure the Azure Log Analytics Workspace is the same one that has Azure Sentinel enabled on it.

Sample Dashboard VMInsights or Azure Monitor for VMs

Creating an alert is important: alerts are the first step toward having a case, or ‘incident’. After a case is created based on the alert, you can do your investigation. To create an alert, you need to use the KQL language that you have probably already used in Azure Log Analytics.

Azure Sentinel has a feature named entity mapping, which lets you relate the query to values like IP address and hostname. These values make the investigation much more meaningful; instead of going back and forth between multiple queries to relate them, you can use entities to make your life easier. At the time of writing this article, Azure Sentinel has four entities – Account, Host, IP address, and Timestamp – which you can bind to your query. You can easily enable or disable an alert, or run it manually, from Configuration > Analytics. Naming might be a little confusing since you also create your alerts from Analytics.
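
As a hedged example of the kind of KQL you might validate before wiring it into an analytics rule, the sketch below looks for repeated failed Azure AD sign-ins and projects the columns you would map to the Account, IP address, and Timestamp entities. It assumes the azure-identity and azure-monitor-query packages, a placeholder workspace ID, and that the SigninLogs table is populated via the Azure AD connector.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Repeated failed sign-ins, grouped by the columns you would map to the
# Account, IP address, and Timestamp entities in an alert rule.
detection_query = """
SigninLogs
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress, bin(TimeGenerated, 1h)
| where FailedAttempts > 10
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace("<workspace-id>", detection_query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```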

The Azure Sentinel investigation map of entities became public in September 2019, and you no longer need to fill out a form to request access.

Let’s Go Hunting

You can use Azure Sentinel’s built-in hunting queries. You can also go straight after the anomalies with your own KQL queries, if you know where to find them, and create an alert. Or you can use Azure Notebooks for AI/ML-based hunting; you can even bring your own ML model to Azure Sentinel. Azure Sentinel notebooks are aimed at your tier 4 SOC analysis.

Azure Sentinel built-in hunting query

Azure Sentinel uses MITRE ATT&CK-based queries and introduced eight types of queries, also known as bookmarks, for hunting.

After you become skilled in detection, you can start creating your playbooks built on Logic Apps workflows. You can also build your automated responses to threats or craft custom actions after an incident has happened. Later, you can enable Azure Sentinel Fusion to associate lower-fidelity anomalous activities with high-fidelity cases.

Azure Sentinel Detection Playbook

A sample playbook:

Azure Sentinel Sample Playbook

Image Source: Microsoft

Azure Notebooks is a Jupyter notebook service (an interactive computational tool) for facilitating your investigation using your data science skills. Azure Notebooks supports languages and packages from Python 2 and 3; you can also use R and F#.

We all love community-backed solutions. You can share your findings and designs with others and use their insights by using the Azure Sentinel Community on GitHub.

Azure Sentinel Fusion

Fusion helps reduce noise and prevent alert fatigue. Azure Sentinel Fusion uses this insight, and you can see how to enable Azure Sentinel Fusion in the documentation.

Traditionally, we assume an attacker follows a static kill chain as the attack path, or that all information about an attack is present in the logs. Fusion can help here by bringing a probabilistic kill chain and finding novel attacks. You can find more information on this topic here. Formerly, you had to run a PowerShell command to enable Fusion, but going forward Fusion is enabled by default.

What Data Sources Are Supported?

Azure Sentinel has three types of connectors. First, Microsoft services are connected natively and can be configured with a few clicks. Second is connecting to external solutions via API. And finally, connecting to external solutions via an agent. These connectors are not limited to the list below; there are also examples, such as IoT and Azure DevOps, that can communicate with Azure Sentinel.

  • Microsoft services (connected natively): Office 365; Azure AD audit logs and sign-ins; Azure Activity; Azure AD Identity Protection; Azure Security Center; Azure Information Protection; Azure Advanced Threat Protection; Cloud App Security; Windows security events; Windows firewall; DNS; Microsoft web application firewall (WAF)
  • External solutions via API: Barracuda; Symantec; Amazon Web Services
  • External solutions via an agent: F5; Check Point; Cisco ASA; Fortinet; Palo Alto; Common Event Format (CEF) appliances; other Syslog appliances; DLP solutions; threat intelligence providers; DNS machines; Linux servers; other clouds

Where Does Azure Sentinel Sit in the Azure Security Picture?

Azure Sentinel in the Azure Security Big Picture

Azure Sentinel can be used before an attack (for example, Azure Active Directory sign-ins from new locations), during an attack (such as malware on a machine), or post-attack, to investigate an incident and perform triage on it. Azure Sentinel has a service graph that can show you the events related to an incident.

If you hold a security-titled role or are part of the SOC team and you prefer a cloud-native solution, Azure Sentinel is a good option.

Security Providers or Why Azure Sentinel?

Azure Sentinel uses the Microsoft Intelligent Security Graph, which is backed by the Microsoft Intelligent Security Association. This association consists of almost 60 companies that work hand in hand to help find vulnerabilities more efficiently.

Microsoft brings its findings from 3,500+ security professionals, 18B+ Bing page scans per month, 470B emails analyzed per month, 1B+ Azure accounts, 1.2B devices updated each month, 630B authentications per month, and 5B threats blocked per month.

Microsoft Intelligent Security Graph Overview

Image Source: Microsoft

Microsoft has more solutions that create a valuable experience for its Microsoft Graph Security API: the Windows antimalware platform, Windows Defender ATP, Azure Active Directory, Azure Information Protection, DMARC reporting for Office 365, Microsoft Cloud App Security, and Microsoft Intune.

Microsoft Intelligent Security Association (MISA)

Microsoft creates vast threat intelligence solutions and has collaborated with other companies to create a product under the name of the Microsoft Intelligent Security Graph API. Microsoft calls the group the Microsoft Intelligent Security Association (MISA), an association of almost 60 companies who share their security insights from trillions of signals.

  • Microsoft products: Azure Active Directory, Azure Information Protection, Windows Defender ATP, Microsoft Intune, Microsoft Graph Security API, Microsoft Cloud App Security, DMARC reporting for Office 365, Windows antimalware platform, Microsoft Azure Sentinel
  • Identity and access management: Axonius, CyberArk, Duo, Entrust Datacard, Feitian, Omada, Ping Identity, Saviynt, Swimlane, Symantec, Trusona, Yubico, Zscaler
  • Information protection: Adobe, Better Mobile, Box, Citrix, Checkpoint, Digital Guardian, Entrust Datacard, EverTrust, Forcepoint, GlobalSign, Imperva, Informatica, Ionic Security, Lookout, Palo Alto Networks, Pradeo, Sectigo, Sophos, Symantec, Wandera, Zimperium, Zscaler
  • Threat protection: AttackIQ, Agari, Anomali, Asavie, Bay Dynamics, Better Mobile, Bitdefender, Citrix, Contrast Security, Corrata, Cymulate, DF Labs, dmarcian, Duo Security, FireEye, Illumio, Lookout, Minerva Labs, Morphisec, Palo Alto Networks, Red Canary, ThreatConnect, SafeBreach, SentinelOne, Swimlane, ValiMail, Wandera, Ziften
  • Security management: Aujas, Barracuda, Carbon Black, Checkpoint, Fortinet, F5, Imperva, Symantec, Verodin

MISA and Security Graph API

MISA is a combined security effort: it continuously monitors cyberthreats and fortifies itself. This enriched knowledge is accessible through the Microsoft Intelligent Security Graph API. Azure Sentinel Fusion is the engine that uses graph-powered machine learning algorithms; Fusion associates activities with patterns of anomalies.

Microsoft Intelligent Security Association (MISA) and Security Graph API

Below you can see the Azure Sentinel Big Picture:

Azure Sentinel Big Picture

I hope you found this blog helpful! As you can see, Azure Sentinel is just the tip of the Microsoft security ‘iceberg’.

Azure Sentinel Microsoft Security Iceberg