When you think about your organization’s cloud strategy, mindset may not be one of the first things that comes to mind, but it is crucial. Adopting a cloud mindset has been called the “single most important” predictor of cloud success (Lewis, 2017, p. 44). Why? Because it is key to aligning the people, processes, technology, and culture necessary for cloud transformation to take place at enterprise scale rather than as a one-off project with limited scope.

What is a Cloud Mindset?

Mindset refers to your set of attitudes or ways of thinking. Carol Dweck (2006), Stanford professor and popular author, defines mindset as “the view you adopt for yourself” (p. 6). Although a given mindset isn’t right or wrong, some mindsets are significantly more advantageous than others in a specific context.

Take for example a top-notch engineer who has developed a reputation for personally solving difficult problems. The engineer’s stellar individual contributor mindset has enabled him to make a difference for the business and achieve success. When that engineer is promoted to manager, it may be tempting for him to stay in the same mindset and to personally solve the problems his team now encounters. That mindset won’t allow him or his team to be successful in the long run. He will need to make the switch to a manager mindset, where he is focused on helping the team develop their own capacity to solve problems.

A cloud mindset has two key components:

  1. Willingness to rethink the role of technology and how it can be leveraged across the enterprise for strategic advantage and mission fulfillment.
  2. Willingness to rethink the value proposition across the organization, considering where alignments in people, processes, and culture are needed to deliver value more effectively and efficiently.

Rethinking Technology

Rethinking technology means shifting the view of technology from static resources to dynamic services, like the difference between a noun and a verb, or between a concrete example and an abstract formula. One example of this trend is de-emphasizing rigid architecture in favor of infrastructure-as-code (Chung & Rankin, 2017). Cloud is an enterprise capability/system delivering compute power where and when needed to help people and the business accomplish work, not an add-on service or outsourced data center (Wyckoff & Pilat, 2017). Cloud supercharges the speed and agility of the business, allocating and reallocating resources nearly instantaneously. Through the cloud’s scalability, there is a tremendous opportunity to move to continuous improvement/continuous delivery and try new ways of working that deliver better value to end-users – customers and employees.

Rethinking Work

Just as rigid architecture can be rethought as code, a cloud mindset enables work to be reconceptualized as data transformation and stewardship. Examples of this may include:

  • creating a document
  • monitoring a network
  • setting permission levels
  • configuring a tenant
  • collecting credit card information to securely process a transaction
  • sending an email
  • constructing a building from blueprints
  • having a conversation with a coworker

Get the right data to the right place at the right time, with the right interface, so it can be used by the workers doing the job, both humans and bots. Thinking in this way can help the business identify strengths, opportunities, and blockers that can be addressed, making work more productive, cost-effective, and potentially more meaningful.

Rethinking Value Delivery

Thinking of work as data transformation and stewardship opens new ways of considering how the business delivers and can deliver value. Delivering value is dependent on the ability to move data across the organization, and technology serves to increase flow or throughput.

A cloud mindset views customers and employees as important partners, seeking to understand their experiences and striving to make their experiences better by delivering the right data to the right people at the right time in a user-friendly way. Understanding what employees and customers perceive as valuable can help business leaders make the most informed decisions.

As decisions are made, there will be tradeoffs. For example, a company that moves payroll and talent management to a software-as-a-service (SaaS) offering will gain organizational agility but will have to trade away a certain degree of customization based on the limitations of the SaaS. Because experiences are valued, the business will ensure support is in place to navigate the tradeoffs and changes.

Rethinking Silos

Silos in organizations have gotten a bad reputation, yet silos enable a clear definition of workstreams, roles, and responsibilities, and promote work being done by subject matter experts. The key is to ensure the boundary is set up so that needed data can flow into and out of the silo for productive work. A cloud mindset treats silo boundaries as interfaces and intentionally designs them so that data that needs to move across the interface can be shared securely. The goal is to make the interface more user-friendly so that the silo does not unnecessarily slow down the movement of data needed to deliver business value.

The proliferation of cross-functional teams is one way that businesses are trying to address this, although there are limitations. Cross-functional teams can help share data across functional silos, but often processes are created within a silo. This is where the view of silo boundaries as interfaces can be especially helpful. Mapping the steps, inputs, and outputs in a process or series of processes that span functional units is a good tool to identify where interface improvements are needed to improve data throughput. Service blueprints are another option. Service blueprints visualize different components of a service (e.g., people, resources, and physical/digital artifacts) that are tied to touchpoints on a customer’s or employee’s journey.

Rethinking Culture

Organizational culture is like the operating system of the organization and refers to the collective values, beliefs, attitudes, and behaviors that are active in the organization. Staying with the operating system metaphor, cloud transformation has trouble running in certain environments. As with mindset, this does not mean that certain organizational cultures are better than others, but it does mean that, in the context of cloud transformation, culture can promote or hinder the effort:

  • Where culture is aligned to a cloud mindset, then cloud transformation accelerates.
  • Where culture is not aligned to a cloud mindset, then there is friction.

Often, the effect is mixed, with some elements of culture aligned and others not (e.g., Feghali, 2019). It is important to capitalize on the strengths of the current culture while using effective change management to overcome friction that, left unaddressed, can stunt cloud transformation. The goal of cloud transformation is to help your organization be its best, leveraging the cloud to do so. The goal is not to turn your organization into a copy of another organization or another organization’s culture. With that said, helping the organization be receptive to and successful in cloud transformation requires addressing culture.

Configuring Your Mindset

Our experience working with federal and commercial clients, together with the research on successful cloud adoption, points to the following settings as optimal for configuring a cloud mindset:

  • On Switch: Start with the expectation to learn, grow, and iterate
    Off Switch: Wait to figure out all the details before starting
  • On Switch: Views cloud as an enterprise capability/system delivering compute power where and when needed to help people and the business accomplish work
    Off Switch: Views cloud as an add-on service or outsourced data center
  • On Switch: Understands that work is ultimately data stewardship/transformation, and focuses on getting the right data to the right place at the right time with the right interface so it can be used by workers, both humans and bots
    Off Switch: Believes that new tools and a little training are all people need to make the transition
  • On Switch: Knows customers and employees are important partners, values their experiences, and strives to make their experiences better
  • On Switch: Thinks that helping culture better align with delivering business value is a key part of cloud transformation
    Off Switch: Thinks that cloud technology should not impact the organization’s culture

Cloud transformation reaches all areas of the business. This includes upgrading and syncing legacy systems as well as aligning organizational structure, processes, people, culture, and leadership to unleash the benefits of the cloud at an enterprise scale. Although this is not as straightforward as configuring a tenant, it is worth it. Successful cloud transformation starts with adopting a cloud mindset and then helping the other pieces align.

Want to learn more about managing changes associated with cloud transformation? Stay tuned for my next post on the people side of transformation.

References:

  • Chung, J., & Rankin (2017). How to manage organizational change and cultural impact during cloud transformation [SlideShare presentation].
  • Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.
  • Feghali, R. (2019). The “Microservic-ing” of culture. CHIPS Magazine.
  • Lewis (2017). Cloud success is about changing your mindset. NZ Business + Management, 31(6), 44-45.
  • Wyckoff, A., & Pilat, D. (2017). Key issues for digital transformation in the G20. OECD. https://www.oecd.org/internet/key-issues-for-digital-transformation-in-the-g20.pdf
The data lake has become a mainstay in data analytics architectures. By storing data in its native format, it allows organizations to defer the effort of structuring and organizing data upfront. This promotes data collection and serves as a rich platform for data analytics. Most data lakes are also backed by a distributed file system that enables massively parallel processing (MPP) and scales with even the largest of data sets.

The rise of data privacy regulations and governance demands, however, requires a new strategy. Simple tasks such as finding, updating, or deleting a record in a data lake can be difficult. They require an understanding of the data and typically involve an inefficient process that includes re-writing the entire data set. This can lead to resource contention and interruptions in critical analytics workloads.

Apache Spark has become one of the most widely adopted data analytics platforms. Earlier this year, its largest contributor, Databricks, open-sourced a library called Delta Lake. Delta Lake solves the problem of resource contention and interruption by creating an optimized, ACID-compliant storage repository that is fully compatible with the Spark API and sits on top of your existing data lake. Files are stored in Parquet format, which makes them portable to other analytics workloads. Optimizations like partitioning, caching, and data skipping are built in, so additional performance gains will be realized over native formats.
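As a brief sketch of the partitioning and data-skipping optimizations mentioned above (the table and column names here are hypothetical, and `OPTIMIZE ... ZORDER BY` is a Databricks feature rather than part of the open-source release at the time of writing):

```sql
%sql
-- Hypothetical table: choose a partition column at creation time
CREATE TABLE events (
  eventId STRING,
  eventType STRING,
  eventDate DATE
)
USING DELTA
PARTITIONED BY (eventDate);

-- Compact small files and co-locate rows by eventId to improve data skipping
OPTIMIZE events ZORDER BY (eventId)
```

Partitioning prunes whole directories at query time, while Z-ordering clusters related values within files so fewer files need to be read.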

Delta Lake is not intended to replace a traditional domain-modeled data warehouse. Rather, it is intended as an intermediate step to loosely structure and collect data. The schema can remain the same as the source system, and personally identifiable data like email addresses, phone numbers, or customer IDs can easily be found and modified. Another important Delta Lake capability is Spark Structured Streaming support for both ingest and data changes. This creates a unified ETL pipeline for both stream and batch while helping promote data quality.

Data Lake Lifecycle

  1. Ingest data directly from the source or from a temporary storage location (e.g., Azure Blob Storage with Lifecycle Management).
  2. Use Spark Structured Streaming or scheduled jobs to load data into Delta Lake table(s).
  3. Maintain data in Delta Lake tables to keep the data lake in compliance with data regulations.
  4. Perform analytics on files stored in the data lake, either through Delta Lake tables in Spark or directly against the Parquet files after they have been put in a consistent state using the `VACUUM` command.
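Step 4 can be sketched as follows: `VACUUM` physically removes files that are no longer referenced by the Delta transaction log, so external readers of the raw Parquet files see a consistent state (the table name and retention threshold here are illustrative):

```sql
%sql
-- Remove files no longer referenced by the table and older than the
-- retention threshold (168 hours matches the 7-day default)
VACUUM customers RETAIN 168 HOURS
```

Retaining fewer hours than `delta.deletedFileRetentionDuration` requires care, since it can break time travel and in-flight readers.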

Data Ingestion and Retention

Data retention is about establishing policies that ensure data that cannot be retained is automatically removed as part of the process. By default, Delta Lake stores a change-data-capture history of all data modifications. Two table properties control how long this history is kept: `delta.logRetentionDuration` (default: interval 30 days) and `delta.deletedFileRetentionDuration` (default: interval 1 week).

%sql
ALTER TABLE table_name SET TBLPROPERTIES ('delta.logRetentionDuration' = 'interval 240 hours', 'delta.deletedFileRetentionDuration' = 'interval 1 hours')
SHOW TBLPROPERTIES table_name

Load Data into Delta Lake

The key to Delta Lake is a SQL-style `MERGE` statement that is optimized to modify only the affected files. This eliminates the need to reprocess and re-write the entire data set.

%sql
MERGE INTO customers
USING updates
ON customers.customerId = updates.customerId
WHEN MATCHED THEN
      UPDATE SET email_address = updates.email_address
WHEN NOT MATCHED THEN
      INSERT (customerId, email_address) VALUES (updates.customerId, updates.email_address)

Maintain Data in Delta Lake

Just as data can be updated or inserted, it can be deleted as well. For example, if a list of `opted_out_customers` is maintained, data from related tables can be purged.

%sql
MERGE INTO customers
USING opted_out_customers
ON opted_out_customers.customerId = customers.customerId
WHEN MATCHED THEN
      DELETE

Summary

In summary, Databricks Delta Lake enables organizations to continue to store data in data lakes even when that data is subject to privacy and data regulations. With Delta Lake’s performance optimizations and open Parquet storage format, data can be easily modified and accessed using familiar code and tooling. For more information, including Delta Lake SQL and Python syntax references and examples, see the documentation: https://docs.databricks.com/delta/index.html

Whether you are just starting your journey to Office 365 or expanding your usage of the platform, it’s important to stop and define what you hope to accomplish in your project. User research is the most productive activity your team can do to define and shape your project, yet many teams underestimate the value of investing time and money in user research because they believe they already understand what needs to be built.

The Need to Define the Problem

A common misunderstanding with user research is that it’s intended to help create the solution. While it’s true that user research assists in this, the main purpose of user research is to define the problem you are trying to solve.

Often, in an attempt to save money, companies will reduce user research or jettison it altogether. Yet user research ensures a higher likelihood that your implementation will succeed and be well received and adopted. It makes end users feel like they had a voice in the project and that their unique challenges were considered. And the good news is that it’s not all or nothing: there are ways to do user research that will significantly help your project without breaking the budget.

An important distinction: user research is not about asking people what their preferences are. While preferences can lend insight, capturing them is not the goal of user research. Erika Hall, in her book Just Enough Research, says:

“As you start interviewing people involved in business and design decisions, you might hear them refer to what they do or don’t like. ‘Like’ is not a part of the critical thinker’s vocabulary. On some level, we all want the things we do to be liked (particularly on Facebook), so it’s easy to treat likability as a leading success indicator. But the concept of ‘liking’ is as subjective as it is empty. It is a superficial and self-reported mental state unmoored from any particular behavior. This means you can’t get any useful insights from any given individual reporting that they like or hate a particular thing. I like horses, but I’m not going to buy any online.” (p. 13)

What Can I Expect When Doing User Research?

Many companies that do not have in-house user research experience are unaware of the key steps and activities involved. Project goals and requirements vary, requiring slightly different approaches, but the core concepts are often the same.

The first step is usually soliciting input from the project team or stakeholders before engaging end-users. These inputs can come in the form of workshops or interviews, but the important thing at this stage is to understand how the stakeholders commissioning and running the project view the organization’s needs.

After gathering initial input, end-users need to be identified and interviewed to understand the many aspects of how they currently work, what their needs are, and how the various tools and processes they currently use do and do not satisfy their needs.

Below are some sample questions asked during a user interview for end-users regarding their existing intranet:

  • Is there content on the intranet you looked for and were unable to find?
  • What do you do when you cannot find the information you are looking for? Has this happened with the current intranet?
  • What are other tools and applications you need to do your work?
  • What are the most important things that the organization needs from you and you need from the organization?

The answers to these questions and the insights gleaned can be distilled to define the core issues that a new modern workplace solution needs to solve. From here, the team can work together on what specific solutions will address the issues, goals, and needs of the end-users.

AIS did this recently for the ACA Compliance Group in a project to help them roll out Microsoft Teams and Planner. Through systematic user research, the AIS team was able to identify opportunities to leverage these tools to address ACA’s collaboration and content management needs. Read more about our work with ACA Compliance Group.

Other Benefits of User Research

While the primary benefit of user research is to define the problem and help your team ultimately marry that to the correct technological solution, there are many other benefits of doing user research. Here are a few.

  1. It generates interest inside the organization. When doing research, many people get a chance to be heard, and oftentimes those are the very individuals who become some of the biggest supporters as the project moves along.
  2. It helps with change management and ultimately increases adoption of the final solution. Change is hard and bringing users into that process greatly increases the odds that the modern workplace solution they receive will aid them in their work. Nothing will slow down the adoption of a new solution faster than those who receive the solution feeling like their challenges were not taken into consideration.
  3. It helps your organization communicate the value of the new implementation in a way that appeals to people across the organization. It is always more impactful to frame your new investment in terms that will appeal to users.

Start Now and Continue to Iterate

If you take away one thing from this piece, I hope you realize the value of user research and how it can bring unique insights to your project that are otherwise left untapped. User research is one of those activities that truly never finishes because an organization and its people are constantly changing, but the more research is used, the better the end result.

Nielsen Norman Group, a well-known user experience firm, publishes its list of the best intranets every year, and it is no accident that, time after time, user research is a core component of these successful projects. In this year’s report, it specifically mentions the value of bringing in outside firms for expertise and perspective. AIS has years of experience helping organizations do great user research. If you are planning your next Office 365 project, please reach out to AIS for a Modern Workplace Assessment and begin your journey to building a successful modern workplace solution!

K.C. Jones-Evans, a User Experience Developer; Josh Lathery, a Full Stack Developer; and Sara Darrah, a User Experience Specialist, sat down recently to talk through the design and project development planning process they implemented for part of a project. The exercise was meant to improve our overall project development planning and create best practices moving forward. We coined the term “Design Huddle” to describe the process of taking a feature from a high level (often a one-sentence request from our customer) to a working product in our software.

We started the design huddle because the contract we were working on already had a software process that did not include User Experience (UX). We knew we needed to include UX but weren’t sure how it would work given the fast-paced (two-week sprint cycle) software process we were contractually obligated to follow. We needed a software design planning solution that allowed us to work efficiently and cohesively, and the Design Huddle allowed us to do just that.

What does the design huddle mean to you?

Josh: Previously the technical lead would have all the design processes worked out prior to being assigned a ticket for development. The huddle meant that I had more ownership in the feature upfront. It was nice to understand via the design process what the product would be used for and why.

K.C.: The huddle was an opportunity to get our thoughts together on the full product before diving right into development. In the past, we have developed too quickly and discovered major issues. Developing early created too much ownership of the code, so changes were painful when something needed to be corrected.

Sara: The huddle for me meant the opportunity to meet with the developers early to get on the same page prior to development. That way, when we had the final product, we could discuss details and make changes, but we were coming from the same starting point. I have been on other projects where I’m not brought in until after the development is finished, which immediately strains the relationship between UX and Development due to the big changes needed to finished code.

What does the design huddle look like?

  • The Team: UX Designer, at least one front-end or UX developer, a full-stack developer, the software tester, a graphic designer (as needed), and Subject Matter Experts (as needed)
  • The Meeting: This took time to work out. As with any group, sometimes louder or more passionate individuals seemed to overshadow the rest. At the end of the day, the group worked better with order and consensus:
    • Agendas were key: The UX lead created the agendas for our meetings. Without an agenda it was too easy to go down a rabbit hole of code details. Agendas also helped folks who were spread across multiple projects focus on the task at hand faster. We included time to report on action items, review old business and any design items, and set the stage for what we hoped to cover as new design work.
    • Action items: Create and assign actions to maintain in the task management system (JIRA). This was a good translation for developers and helped everyone understand their responsibilities leaving the room. It also really helped with sprint planning and the ability to scope tasking.
    • The facilitator had to be assertive: Yes, we are all professionals, and in an ideal world we could “King Arthur Round Table” these huddles. But the few times we tried this, the meeting quickly went off track and down a rabbit hole, and many teammates left frustrated, feeling like we hadn’t made any progress. The facilitator was the UX specialist for our meetings, but we think the owner of the feature should facilitate. The facilitator needs to be willing to assert themselves during conversations, keep the meeting on track, and force topics to be tabled for another time when needed.
    • Include everyone and know the crowd: The facilitator needs to quickly understand the team they are working with to figure out how to include everyone. One way we ensured this happened was to go around the room at the end of each meeting.
    • Visual aids the whole team can see during the meeting: Choose whichever tool works best for the topic at hand (a dry-erase board, a wireframe, JIRA tickets, or a mock-up) to help people stay on track and ensure common understanding.
    • Table tough items and know when to end the meeting: Sometimes in the larger meetings we needed to call it quits on a debate to allow more time for thinking, research, and discussion. Any member of the team could ask for something to be tabled. A tabled item was given to a smaller group of individuals to work through the details between the regular design huddle sessions.
    • Choose the team participants wisely: The first few meetings most likely will involve the entire team, but smaller huddles (team of 2 or 3) can often work through details of tabled items more efficiently.

What is the benefit?

  • Everyone felt ownership in the product by the end of the design. Sara: My favorite thing about this learning experience was when one of the developers I hadn’t worked with before said he loved the process. He had the opportunity to provide input early and often, and by the time development started there weren’t really any questions left on how to implement. Josh: The process helped me feel like an engineer and not just a code monkey. K.C.: In other projects, we have been handed mock-ups without context. This process took the “gray area” away.
  • Developers were able to temper the ideals of the UX designer by understanding the system, not just the user interface. Developers assisted UX by asking questions, helping them understand existing system limitations, and raising concerns when a design solution was too complicated given the time we had.
  • The software tester was able to understand the flow of the new designs, ask questions, and assist in writing acceptance criteria that were specific, measurable, attainable, realistic, and testable (SMART). She also provided guidance when we needed to go from concept to reality and ensured we understood the design requirements.
  • It was critical to design one to two sprints ahead of expected development as part of Agile software development. This allowed the best design to be used rather than forcing ourselves into something that could be completed in two weeks. Together we would agree on the end goal, then break the concept down into tiers. Each tier was a viable product one sprint long that always kept the end product design in mind.

The Design Huddle is a way for teams to collaborate early on a new feature or application. We feel it is a great way to work User Experience into the Agile software process and simplify project development planning. We have taken our lessons learned, applied them to a new project the three of us are tackling together, and expanded the concept of the huddle to different members of the team. If you are struggling to incorporate proper design, or sense frustration from teammates on an application task, this may be the software design planning solution for you!

I recently read Patrick Lencioni’s latest book, Getting Naked: A Business Fable About Shedding The Three Fears That Sabotage Client Loyalty. I found the book useful, well written, and insightful.

Here is a quick summary of the key ideas:

Always Consult instead of Sell

Rather than harping on past accolades and what you can do if the organization hires you, transform every sales situation into an opportunity to demonstrate value. Start adding value from the very first meeting… without waiting to be hired. Don’t worry about a potential client taking advantage of your generosity. (One potential client in 10 may – but you don’t want that one potential client as a customer anyway!)

Tell kind truth

Be ready to confront a client with a problematic message, even if the client might not like hearing it. Don’t sugarcoat or be obsequious. Rather, relay the message in a manner that recognizes the dignity and humanity of the client. Be prepared to deal with “the elephant in the room” that everyone else (including your competitors) is afraid to address.

Don’t be afraid to ask questions or suggest ideas, even if they seem obvious

There is a good chance that a seemingly obvious question/suggestion is actually of benefit to many in the audience.

Of course, there is also the chance of posing a “dumb” question. But no one expects perfection from their consultants; they do expect transparency and honesty. There is no better way to demonstrate both than by acknowledging mistakes.

Make everything about the client

Be prepared to humble yourself by sacrificially taking the burden off of a client in a difficult situation. Be ready to take on whatever the client needs you to do within the context of the services you offer.

The focus of all of your attention needs to be on understanding, supporting, and honoring the business of the client. Do not try to shift attention to yourself, your skills, or your knowledge. Honor the client’s work by taking an active interest in their business and appreciating the importance of their work.

Admit your weaknesses and limitations

Be true to your strengths and be open to admitting your weaknesses. Not doing so will wear you out and prevent you from doing your best in your areas of strength.

Like many of you, we at AIS strive to put many of the ideas from Lencioni’s book into practice. For example, rather than “sell” a project, we often co-invest (along with the customer) in a micro-POC* or a one-day architecture and design session that helps lay out the vision for the solution, identify the main risks, and crystallize the requirements for a project. With the insights derived from a micro-POC, clients can make an informed decision on whether to move forward or not. Here are a couple of examples of micro-POCs: Jupyter Notebooks as a Custom Calculation Engine, and Just-in-Time Permission Control with Azure RBAC.

* Micro-POC (micro proof of concept): a time-boxed (typically ~40 hours) “working” realization of an idea/concept, with the aim of better understanding the potential, risks, and effort involved. This is *not* an early version of a production application. Also see the Wikipedia definition of proof of concept.

Driving value, lowering costs, and building your organization’s future with Microsoft’s next great business technology

Lately, I’ve been helping folks understand the Microsoft Power Platform (MPP) by sharing two simple diagrams.

The first one is below and is my stab (others have made theirs) at contextualizing the platform’s various components in relation to one another.

The Common Data Service (CDS) is the real magic, I tell people. No matter which app you are using, the data lives there in that one CDS across your entire environment. (And no, folks outside your organization don’t get to use it.) This means that data available to one of your apps can be re-used and re-purposed by your other apps, no wizardry or custom integration required. I promise, it just works. Think expansively about the power of this in your organization, and you’ll come up with some cockamamie/brilliant ideas about what you can do.

These are the types of data-driving-business-function that geeks like me always dreamed of.

A diagram of Microsoft Power Platform components

Then there’s Power Apps, in purple. Most folks think of this as a low-code/no-code app development tool. It is, but it’s more. Imagine that there are three flavors of Power Apps:

  1. Dynamics 365, which in the end is a set of really big Power Apps developed by Microsoft
  2. COTS apps developed by Microsoft partners (including AIS), available for organizations to license and use
  3. Custom apps you build yourself

Point Microsoft Power BI at all of this, then mash it up with data from outside your CDS reached via hundreds of out-of-the-box connectors, automate it all together with workflows in Flow…and you’ve got the Power Platform in a nutshell.

When I’m presenting this to a group, I turn to my next slide pretty quickly at this point.

A rearranged look at Microsoft Power Platform

Here I’ve essentially re-arranged the pieces to make my broader point: When we think about the Power Platform, the emphasis needs to be on the Platform bit. When your organization invests in this technology, say via working with an implementation partner such as AIS or purchasing Power Apps P1/P2 licenses, you’re not just getting a product or a one-off app solution.

What you’re getting is a platform on which to build your modern business. You’re not just extending Office 365. Instead, you’re creating a future where your organization’s data and business processes are deeply integrated with, driving, and learning intelligently from one another.

The more you leverage the platform, the higher the ROI and the lower the marginal costs of those licenses become. A central goal of any implementing partner ought to be guiding organizations on the journey of migrating legacy systems onto the platform (i.e., retiring legacy licensing + O&M costs) and empowering workers to make the platform even more valuable.

We don’t invest in one-off apps anymore, i.e. a CRM in one corner of your network where you run your sales, something in another where you manage your delivery, a clunky human resources management system off over there where you take care of your people, etc. No, what we care about here is the platform, where you integrate all of the above — not through monolithic one-size-fits-all ERP — but rather through elegant app experiences across all your users’ devices that tie back to that magical Common Data Service.

This is what I mean when I tell folks the sky’s the limit, and thinking about your entire business is what’s called for here. It’s because Power Platform gives us the ability to learn and grow with our customers, constituents, vendors, employees, and other stakeholders like never before.

That’s what has everyone at Microsoft so excited. I am as well.

I want to learn from you. How do you make Power Platform understandable to those who haven’t thought about it too deeply? How does your organization make it valuable as a platform rather than just a product? I love to build beautiful things, so inspire me!

Make no mistake, most organizations and government agencies are—at least in part—software companies. The backbone of the services and products they sell, the internal business processes they use, and the customer feedback mechanisms they rely on are all built on software. Even in the age of software as a service (SaaS), a modern organization’s portfolio of applications and the specifics of how these apps are used influence its most important decisions.

So while it’s easy to understand that software is a foundational component to modern business, often the decision to invest in building or offering software to users must also be accompanied by a more specific, anticipated return on that investment. That process can go like this:

FBI UCR

The FBI’s Uniform Crime Reporting (UCR) Program is a nationwide, cooperative statistical effort of more than 18,000 city, university and college, county, state, tribal, and federal law enforcement agencies voluntarily reporting data on crimes brought to their attention. The program’s primary objective is to generate reliable information for use in law enforcement administration, operation, and management. Over the years, however, this data has become one of the country’s leading social indicators. The FBI is required by law to store national UCR data in a platform that allows the public to search and view results and download them for offline analysis.

The current UCR data reporting tool was built in ColdFusion and has not been updated since 2010. The FBI released an RFI solicitation to gather information about possible ways to replace the current UCR tool. The criteria for the new tool are that it should be web-based, with the ability to filter on any UCR attribute and then export the results in .csv format.

Using 2 gigabytes of sample data provided by the FBI, AIS set out to determine whether, in just a few weeks, a minimum viable product (MVP) solution could be built leveraging public cloud services that were currently (or soon to be) available in the Azure Government Cloud. The resulting MVP was designed to fully support the ability for citizens to query, filter, correlate, and display public FBI National Uniform Crime Reporting (UCR) information. Take a look at the solution we built!
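At its core, the RFI’s filter-and-export requirement is simple to sketch. The snippet below is an illustrative, standard-library-only sketch of that requirement, not the MVP’s actual code, and the field names are made up rather than drawn from the real UCR schema.

```python
import csv
import io

def filter_and_export(rows, out, **criteria):
    """Write rows matching all attribute=value criteria to `out` as CSV."""
    matches = [r for r in rows
               if all(r.get(k) == v for k, v in criteria.items())]
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(matches)
    return len(matches)

# Illustrative sample records (not the real UCR schema).
sample = [
    {"year": "2014", "state": "VA", "offense": "burglary", "count": "812"},
    {"year": "2014", "state": "MD", "offense": "burglary", "count": "644"},
    {"year": "2015", "state": "VA", "offense": "arson",    "count": "97"},
]

buf = io.StringIO()
n = filter_and_export(sample, buf, state="VA")
print(n)  # 2 matching rows written as CSV
```

In the actual MVP, the filtering runs against a cloud data warehouse rather than in-memory lists, but the user-facing contract is the same: pick any attribute, get back a .csv.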

Technologies:

  • ASP.NET MVC 4.5 (with jQuery, bootstrap, ADO.NET)
  • Power BI embedded
  • SQL Azure Data Warehouse
  • Blob storage with SAS (shared access signatures) for file download
Office Graph

In my previous post, I proposed an example application that leverages the resources available to us in the Office 365 development platform and Azure Active Directory, as well as the in-application integration of Office 365 Add-ins.

Now we’ll take a deeper look at the Graph API and some of the implementation points.

Build Your Enterprise Graph

The Graph API empowers developers and enterprises to build new relationships and interactions between resources in Azure Active Directory, Office 365, and other applications and data assets.
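As a taste of what “building on these relationships” looks like in practice, here is a minimal sketch of preparing an authenticated Microsoft Graph call. The v1.0 endpoint is the public Graph URL; the token is a placeholder, since in a real application it would be acquired from Azure Active Directory via OAuth 2.0.

```python
import urllib.request

# Public Microsoft Graph endpoint; resources like /me, /me/messages,
# and /me/contacts hang off of it.
GRAPH = "https://graph.microsoft.com/v1.0"

def graph_request(resource, token):
    """Build an authenticated GET request for a Graph resource.

    `token` is a bearer access token obtained from Azure AD;
    the placeholder below stands in for a real one.
    """
    req = urllib.request.Request(f"{GRAPH}/{resource}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = graph_request("me/messages", "<access-token>")
print(req.full_url)  # https://graph.microsoft.com/v1.0/me/messages
```

The same pattern reaches users, calendars, files, and directory objects just by swapping the resource path, which is what makes the graph such a natural integration surface.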

As Microsoft’s enterprise cloud offerings continue to expand, so will the opportunities to weave these resources together in new and innovative ways. Microsoft’s acquisition of LinkedIn will help it expand its social network graph, so it will be interesting to see how it plays into its Graph API in the future.

Enterprises have a trove of business resources and data that are often under-utilized: users, calendars, contacts, emails, tasks, documents, and other files. Often there are redundancies between what users do with Office applications and other enterprise applications, and a painful lack of integration.

In prior posts, I discussed the compelling new Office 365 development platform and introduced Matter Center to demonstrate how integrating web-based add-ins directly into Office applications like Outlook can lead to productivity gains and happy users.

In this post we’ll introduce a sample application to show a practical example of how we can use these technologies to bring enterprise applications together with these valuable resources.