
In this post, I’ll tie together the study links from my related YouTube video on the AZ-700 exam topic. You can find the video I produced to help you prepare for this exam at https://aka.ms/YouTube/SC-900. (NOTE: As of 12/19/2021 only the first video in the series has been uploaded. The others will follow, so keep checking back!) The exam objectives which should guide your studying, and which were also used to guide the creation of the content, can be found at Exam AZ-700: Designing and Implementing Microsoft Azure Networking Solutions

As referenced in the introduction of the video, Microsoft MVP Charbel Nemnom has another study guide with many solid reference links. It can be found at aka.ms/AZ-700StudyGuide.

Here are the slides to use for studying which contain all of the links provided and discussed in the video.

https://aka.ms/AZ-700Deck

Download the deck by clicking on the download icon on the right side, as shown below:


Please let me know if the content helps you pass!

Early in my career at Microsoft, I worked in Microsoft Consulting Services, supporting organizations looking to deploy Exchange 2007 and 2010 in their on-premises environments. During those engagements, the bulk of the conversations focused on availability and disaster recovery concepts for Exchange – things like CCR, SCR and building out the DAG to ensure performance and database availability during an outage – whether it was a disk outage, a server outage, a network outage or a datacenter outage.

Those were fun days. And by “fun”, I mean “I’m glad those days are over”.

It’s never a fun day when you have to tell a customer that they CAN have 99.999% availability (of course – who DOESN’T want five 9’s of availability??) for their email service, but it will probably cost them all the money they make in a year to get it.

Back then, BPOS (Business Productivity Online Service) wasn’t really on the radar for most organizations outside of some larger corporate and government customers.

Then on June 28, 2011, Microsoft announced the release of Office 365 – and the ballgame changed. In the years since then, Office 365 has become a hugely popular service, providing online services to tens of thousands of customers and millions of users.

As a result, more businesses are using Office 365 for their business-critical information. This, of course, is great for our customers, because they get access to a fantastic online service, but it requires a high degree of trust on the part of customers that Microsoft is doing everything possible to preserve the confidentiality, integrity and availability of their data.

A large part of that means that Microsoft must ensure that the impact of natural disasters, power outages, human attacks, and so on is mitigated as much as possible. I recently heard a talk about how Microsoft builds our datacenters and accounts for all sorts of disasters – earthquakes, floods, undersea cable cuts – even mitigations for a meteorite hitting Redmond!

It was an intriguing discussion and it’s good to hear the stories of datacenter survivability in our online services, but the truth is, customers want and need more than stories. This is evidenced by the fact that the contracts that are drawn up for Office 365 inevitably contain requirements related to defining Microsoft’s business continuity methodology.

Our enterprise customers, particularly those from regulated industries, are routinely required to perform business continuity testing to demonstrate that they are taking the steps necessary to keep their services up and running when some form of outage or disaster occurs.

The dynamics change somewhat when a customer moves to Office 365, however. These same customers now must assess the risk of outsourcing their services to a supplier, since the business continuity plans of that supplier directly impact the customer’s adherence to the regulations as well. In the case of Office 365, Microsoft is the outsourced supplier of services, so Microsoft’s Office 365 business continuity plans become very relevant.

Let’s take a simple example:

A customer named Contoso-Med has a large on-premises infrastructure. If business continuity testing were being done in-house by Contoso-Med and they failed the test, they would be held responsible for making the necessary corrections to their processes and procedures.

Moving those same business processes and data to Office 365, however, does not absolve Contoso-Med of the responsibility to ensure that the services meet the business continuity standards defined by regulators. They must still have a way of validating that Microsoft's business continuity processes meet the standards defined by the regulations.

However, since Contoso-Med doesn’t get to sit in and offer comments on Microsoft’s internal business continuity tests, they must have another way of confirming that they are compliant with the regulations.

First…a Definition

Before I go much further, I want to clarify something.

There are several concepts that often get intermingled and, at times, used interchangeably: high availability, service resilience, disaster recovery, and business continuity. We won't dig into the details of each, but suffice it to say that they all have at their core the desire to keep services running for a business when something goes wrong. However, "business continuity and disaster recovery" from Microsoft's perspective means that Microsoft will address the recovery and continuity of critical business functions, business system software, hardware, IT infrastructure services, and data required to maintain an acceptable level of operations during an incident.

To accomplish that, the Microsoft Online Service Terms (http://go.microsoft.com/?linkid=9840733), sometimes referred to simply as the OST, currently states the following regarding business continuity:

  • Microsoft maintains emergency and contingency plans for the facilities in which Microsoft information systems that process Customer Data are located
  • Microsoft’s redundant storage and its procedures for recovering data are designed to attempt to reconstruct Customer Data in its original or last-replicated state from before the time it was lost or destroyed

 

Nice Definition. But How Do You Do It?

I’ve referenced the Service Trust portal in a few other blog posts and described how it can help you track things like your organization’s compliance for NIST, HIPAA or GDPR. It’s also a good resource for understanding other efforts that factor into the equation of whether Microsoft’s services can be trusted by their customers and partners.

A large part of achieving that level of trust relates to how we set up the physical infrastructure of the services.

  • Microsoft’s Enterprise Business Continuity Management (EBCM) framework document outlines the methodology by which we ensure the reliability of our global data centers.
  • The Global Data Centers web page provides detailed insights into Microsoft’s framework for datacenter business continuity.

To be clear, Microsoft online services are always on, running in an active/active configuration with resilience at the service level across multiple data centers. Microsoft has designed the online services to anticipate, plan for, and address failures at the hardware, network, and datacenter levels. Over time, we have built intelligence into our products to allow us to address failures at the application layer rather than at the datacenter layer, which would mean relying on third-party hardware.

As a result, Microsoft is able to deliver significantly higher availability and reliability for Office 365 than most customers are able to achieve in their own environments, usually at a much lower cost. The datacenters operate with a high degree of redundancy, and the online services deliver against a financially backed service level agreement of 99.9%.
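To put the 99.9% figure in perspective, here's a quick back-of-the-envelope calculation of the downtime that SLA level permits, assuming a 30-day month:

```shell
# Downtime allowed by a 99.9% monthly SLA: 0.1% of a 30-day month
awk 'BEGIN { printf "%.1f minutes/month\n", 30 * 24 * 60 * 0.001 }'
# prints "43.2 minutes/month"
```

In other words, the financially backed commitment allows roughly three-quarters of an hour of downtime per month – a bar few on-premises Exchange deployments could meet at comparable cost.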

The Office 365 core reliability design principles include:

  • Redundancy is built into every layer: Physical redundancy (through the use of multiple disks, network cards, redundant servers, geographical sites, and datacenters); data redundancy (constant replication of data across datacenters); and functional redundancy (the ability for customers to work offline when network connectivity is interrupted or inconsistent).
  • Resiliency: We achieve service resiliency using active load balancing and dynamic prioritization of tasks based on current loads. Additionally, we are constantly performing recovery testing across failure domains, and exercising both automated failover and manual switchover to healthy resources.
  • Distributed functionality of component services: Component services of Office 365 are distributed across datacenters and regions to help limit the scope and impact of a failure in one area and to simplify all aspects of maintenance and deployment, diagnostics, repair and recovery.
  • Continuous monitoring: Our services are being actively monitored 24×7, with extensive recovery and diagnostic tools to drive automated and manual recovery of the service.
  • Simplification: Managing a global, online service is complex. To drive predictability, we use standardized components and processes wherever possible. Loose coupling among the software components makes deployment and maintenance less complex. Lastly, a change management process that moves through progressive stages from scope to validation before worldwide deployment helps ensure predictable behavior.
  • Human backup: Automation and technology are critical to success, but ultimately, it's people who make the most critical decisions during a failure, outage or disaster scenario. The online services are staffed with 24/7 on-call support to provide rapid response and information collection towards problem resolution.

These elements exist for all the online services – Azure, Office 365, Dynamics, and so on.

But how are they leveraged during business continuity testing?

Each service team tests its contingency plan at least annually to determine the plan's effectiveness and the team's readiness to execute it. The frequency and depth of testing are linked to a confidence level, which is different for each of the online services. Confidence levels indicate the predictability of a service's ability to recover.

For details on the confidence levels and testing frequencies for Exchange Online, SharePoint Online, OneDrive for Business, and the other services, please refer to the most recent EBCM Plan Validation Report available on the Office 365 Service Trust Portal.

BC/DR Plan Validation Report – FY19 Q1

A new reporting process has been developed in response to Microsoft Online Services customer expectations regarding our business continuity plan validation activities. The reporting process is designed to provide additional transparency into Microsoft’s Enterprise Business Continuity Management (EBCM) program operations.

The report will be published quarterly for the immediately preceding quarter and will be made available on the Service Trust Portal (STP). Each report will provide details from recent validations and control testing against selected online services.

For example, the FY19 Q1 report, which is posted on the Service Trust Portal (EBCM Testing Validation Report: FY19 Q1), includes information related to nine selected online services across Office 365, Azure and Dynamics, with the testing dates and testing outcomes for each of the selected services.

The current report only covers a subset of Microsoft cloud services, and we are committed to continuously improving this reporting process.

If you have any questions or feedback related to the content of the reporting, you can send an email to the Office 365 CXP team at [email protected].

Additional Business Continuity resources are available on the Trust Center, Service Trust Portal, Compliance Manager and TechNet:
  1. Azure SOC II audit report: The Azure SOC II report discusses business continuity (BC) starting on page 59, and the auditor confirms no exceptions noted for BC control testing on page 95.
  2. Azure SOC Bridge Letter Oct-Dec 2018: The Azure SOC Bridge letter confirms that there have been no material changes to the system of internal control that would impact the conclusions reached in the SOC 1 type 2 and SOC 2 type 2 audit assessment reports.
  3. Global Data Centers provides insights into Microsoft’s framework for datacenter Threat, Vulnerability and Risk Assessments (TVRA)
  4. Office 365 Core – SSAE 18 SOC 2 Report 9-30-2018: Similar to the Azure report, the Office 365 SOC II audit report (dated 10/1/2017 through 9/30/2018) discusses Microsoft's position on business continuity (BC) in Section V, page 71, and the auditor confirms no exceptions noted for the CA-50 control test on page 66.
  5. Office 365 SOC Bridge Letter Q4 2018 : SOC Bridge letter confirming no material changes to the system of internal control provided by Office 365 that would impact the conclusions reached in the SOC 1 type 2 and SOC 2 type 2 audit assessment reports.
  6. Compliance Manager's Office 365 NIST 800-53 control mapping provides positive (PASS) results for all 51 Business Continuity Disaster Recovery (BCDR)-related controls within the Microsoft Managed Controls section, under Contingency Planning. For example, the Exchange Online Recovery Time Objective and Recovery Point Objective (EXO RTO/RPO) metrics are tested by the third-party auditor per NIST 800-53 control ID CP2(3). Other workloads, such as SharePoint Online, were also audited and discussed in the same control section.
  7. ISO 22301: This business continuity certification has been awarded to Microsoft Azure, Microsoft Azure Government, Microsoft Cloud App Security, Microsoft Intune, and Microsoft Power BI. This is a special one. Microsoft is the first (and currently the ONLY) hyperscale cloud service provider to receive the ISO 22301 certification, which is specifically targeted at business continuity management. That's right. Google doesn't have it. Amazon Web Services doesn't have it. Just Microsoft.
  8. The Office 365 Service Health TechNet article provides useful information and insights related to Microsoft’s notification policy and post-incident review processes
  9. The Exchange Online (EXO) High Availability TechNet article outlines how continuous and multiple EXO replication in geographically dispersed data centers ensures data restoration capability in the wake of messaging infrastructure failure
  10. Microsoft’s Office 365 Data Resiliency Overview outlines ways Microsoft has built redundancy directly into our cloud services, moving away from complex physical infrastructure toward intelligent software to build data resiliency
  11. Microsoft’s current SLA commitments for online services
  12. Current worldwide up times are reported on Office 365 Trust Center Operations Transparency
  13. Azure SLAs and uptime reports are found on Azure Support

As you can see, there are a lot of places where you can find information related to business continuity, service resilience and related topics for Office 365.

This type of information is very useful for partners and customers who need to understand how Microsoft “keeps the lights on” with its Office 365 service and ensures that customers are able to meet regulatory standards, even if their data is in the cloud.

 

A Quick Overview of Kali

One of the tools that many security professionals use on a regular basis is the Kali Linux penetration testing platform. This tool is built and maintained by Offensive Security (www.offensive-security.com), an organization that also provides extensive training on the platform and a variety of other security and penetration testing topics.


The Kali Linux platform is based on the Debian GNU/Linux distribution and contains hundreds of open-source penetration-testing, forensic analysis, and security auditing tools. However, it isn't used exclusively by traditional "red teams" and "blue teams". In fact, it can also be used by IT admins to monitor their networks effectively (whether wired or wireless), perform analysis of data, and a variety of other tasks.

It's important to remember that Kali Linux is NOT a static tool. Rather, updates to the Kali distro arrive on a near-daily basis, so make sure you update the tool each time before you use it. (I'll show you how in a few minutes.)

Kali Linux can run on laptops, desktops or servers. You can download the ISO for Kali from https://www.kali.org/downloads, and create a bootable USB drive if you want to. But what we are doing today is running it on Azure, and this is one of the easiest ways to get started.

Let’s take a look.

Provisioning Kali on Azure

The Kali Linux distro is available without cost on Azure, but it might not be obvious where to find it. If you log in to your Azure subscription and try to provision a Kali box, you simply won’t find it in the list of operating systems or images you can deploy.

What you actually need to do is request it from the Azure Marketplace. To do this, go to https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kali-linux.kali-linux . There, you’ll see a page similar to the one shown below. Click on the “Get It Now” button to request the Kali Linux distro.


After you request the Kali Linux machine, you’ll be asked which account you use to sign in when you request apps from the Azure Marketplace. Enter the login ID that you use for your Azure subscription.


Once you request the machine as described above, you'll be able to provision the Kali box just like you would any other virtual machine or appliance. There are a couple of points in the description provided when you provision the box that should be highlighted.

First, the Installation Defaults section tells us that, by default, the only way to log in to your Kali instance is over SSH on port 22, using either a set of SSH keys or a user-provided password. This is because the default installation does not include a graphical user interface (GUI). The majority of the tools in Kali work just fine without a GUI, so this is the preferred way to use it. If you're just getting started, though, you may want the benefit of a GUI while you figure out how the tools work and how Linux itself is set up; I'll show you how to install one later in this article. For this article, I'll be using a username and password to log in, but again, SSH keys are more secure and would be preferred in a production environment.
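If you do want the more secure key-based option, you can generate a key pair up front and supply the public key when you provision the VM. Here's a minimal sketch – the file name and the kaliadmin username are placeholders, not values the portal requires:

```shell
# Generate a 4096-bit RSA key pair with no passphrase (file name is a placeholder)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/kali_azure -N "" -q

# Paste the contents of ~/.ssh/kali_azure.pub into the Azure portal when provisioning,
# then connect with the private key (substitute your VM's public IP):
# ssh -i ~/.ssh/kali_azure kaliadmin@<public-ip>
```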

Additionally, we see that it is recommended that you update the packages on your Kali machine after you deploy it. I’ll walk you through how to do that as well.


After you’ve provisioned your Kali Linux machine (using username and password during the initial configuration in the Azure portal), you’ll want to connect to the machine.

To do so, download and install PuTTY, or a similar SSH/telnet tool. PuTTY can be downloaded here: https://putty.org/

When PuTTY is installed, it will require you to enter the IP address of your Kali machine in order to connect. You can get the public IP address of the Kali machine from the Azure portal, as shown below.


Next, open your PuTTY client and connect to the IP address and port 22 of your Kali machine.


One thing that's unusual about the install is that the user account you defined for the Kali machine when you provisioned it does NOT have root access. This means you can't make any updates or modify the installation with the credentials you are logging in with.

Let’s fix that.

As you can see below, I’m logged in to my machine KALI-001 as the user named KALIADMIN. I now need to set a password for the root (administrator) account.

To do this, I type:

sudo passwd root

Then I define the password I want to use. That’s all there is to it!

Now I can log in as root using the command

su root


Now that I’m logged in with root permissions, I need to update my Kali machine.

To do this, simply type:

apt update && apt dist-upgrade

Type y to confirm the updates. Depending upon how many updates are available, this could take a while. For example, when I ran this command after provisioning my machine, it took about 20 minutes to get all the updates.


At this point, you have logged in over SSH, set a password for the root account and updated the machine. However, you are still doing everything from the command line. You may want to install a GUI. Basically, there are three tasks you have to perform to be able to manage the Kali instance the same way you'd manage a Windows server:

1. Install a GUI
2. Install RDP
3. Configure networking to allow connection over RDP

Install a GUI

Kali supports the GNOME desktop environment, but the Azure image doesn't install it by default, so you need to install it yourself.
To do so, use the command below:

apt-get install -f gdm3

Install RDP

Next, you’ll need to install an RDP package and enable the services using the commands below.

apt-get install xrdp
systemctl enable xrdp
echo xfce4-session >~/.xsession
service xrdp restart

Configure Networking to Allow Connection over RDP

Lastly, you’ll need to configure your Azure Network Security Group (NSG) to allow TCP port 3389 inbound (RDP) to your Kali machine. In the Networking section of your machine’s configuration, configure an inbound port rule for TCP 3389. Again, this is a penetration testing tool, so in a production environment, you would likely lock down the source IP addresses that can connect to this machine, but for this demonstration, we are leaving it at Any/Any.
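If you'd rather script this step than click through the portal, the Azure CLI can create the same inbound rule. A sketch, assuming hypothetical resource names (adjust the resource group, VM name, and priority to your deployment):

```shell
# Hypothetical resource group and VM names; opens TCP 3389 inbound on the VM's NSG
az vm open-port \
  --resource-group KaliRG \
  --name KALI-001 \
  --port 3389 \
  --priority 1010
```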


Now that you have this set up, you should be able to connect to your Kali box using RDP, just as you would connect to a typical Windows machine. The interface for GNOME will look something like this, but it can be customized.


What Can I Do With It?

In the past, Microsoft required you to submit an Azure Service Penetration Testing Notification form to let Microsoft know that it was not an actual attack against a tenant. However, as per the documentation noted here, https://docs.microsoft.com/en-us/azure/security/azure-security-pen-testing, this is no longer a requirement.

“As of June 15, 2017, Microsoft no longer requires pre-approval to conduct penetration tests against Azure resources. Customers who wish to formally document upcoming penetration testing engagements against Microsoft Azure are encouraged to fill out the Azure Service Penetration Testing Notification form. This process is only related to Microsoft Azure, and not applicable to any other Microsoft Cloud Service.”

In other words, there is no strict requirement to notify Microsoft when you perform a penetration test against your Azure resources. This means that you can perform many of the standard penetration tests against your Azure tenant, such as:

  • Tests on your endpoints to uncover the Open Web Application Security Project (OWASP) top 10 vulnerabilities
  • Fuzz testing of your endpoints
  • Port scanning of your endpoints

However, one type of test that you can’t perform is any kind of Denial of Service (DoS) attack. This includes initiating a DoS attack itself, or performing related tests that might determine, demonstrate or simulate any type of DoS attack.

Thanks, Captain Obvious……

It should be obvious, but just to be clear: DON’T use your Kali machine to attack anybody else’s stuff.

You would most definitely find yourself in a legal pickle if you decided to attack resources that didn’t belong to you (or one of your customers) without explicit permission in writing. Please, just don’t run the risk.

Practice using the 600+ tools available in the Kali Linux distro and learn how to better secure your environment!

So there I was in my kitchen yesterday, reading an article in ZDNet about how several organizations are teaming up to prevent fraudulent food production practices around the world. The group has created a “Food Trust Framework” that is designed to increase the integrity and quality of the food in a global supply chain.

And there it was. Another reference to blockchain.

Until a few months ago, all of the references to blockchain that I had seen centered around the cryptocurrency Bitcoin, and to be perfectly honest, I figured if I‘m not being forced to pay somebody off to remove ransomware, Bitcoin and blockchain technology don’t really touch my life.

But there was blockchain again – this time in the context of food safety.

And since I eat food on occasion – well, that’s interesting to me. The premise of the news article is that there are an increasing number of food suppliers in China that are using ingredients in the food they sell that probably shouldn’t be there. As an example, maybe a beverage is diluted with water, or a filler is put into the food to reduce their cost of production. Maybe diluting a drink with water isn’t going to kill me, but it still means I’m paying for something that I don’t receive. But what if the filler that was used in a particular food happened to be a nut that I’m allergic to? Then it starts to get scary. (If you really want to scare yourself about “food fraud”, read this article)

Anyway, this group is trying to find a way to ensure that food quality is maintained through the supply chain. But how can you do that in a supply chain that could have dozens of suppliers involved in the process, particularly if some of the suppliers are specifically trying to avoid getting caught?

The answer they decided upon was – you guessed it – “blockchain”.

Blockchain Basics

The basics of blockchain are not terribly hard to understand, but let’s use a simple example to explain the principle.

We all understand the concept behind money. If I want to pay someone with money, I can hand them a dollar bill, or four quarters, or a hundred pennies or whatever, and the deal is done. The other person has the physical, tangible object in their hand, so we don’t necessarily need a third party to confirm that the money has been transferred.

However, let’s say I owe ten dollars to each of six different people, but I only have ten dollars in my bank account. Let’s also say that it’s possible for me to pay people by emailing them PICTURES of money. Being a little bit sneaky, I take a picture of a ten-dollar bill and email it to all six people. At this point, nobody can definitively claim the ten dollars in my bank account because there’s no proof that they are the ONLY one who has the picture of that ten-dollar bill.

But what if there was somebody that I had to email the picture of my money to FIRST and who could then hold me accountable for the money transfer? He would receive my email, make a note of the transaction in a ledger, deduct it from what he knows to be in my account and then pass along the picture to the recipient. If I tried to email another picture of the $10 bill to someone, the person with the ledger would say, "Sorry, that money is already spoken for. You can't do that."

Expand the scenario a bit and say that HUNDREDS of people have a copy of the exact same ledger, and everyone keeps the ledger updated in near real-time. Then, even if I tried to get one person to change their ledger to my benefit, others in the chain would say “Nope, this is the accurate set of figures”.

That’s the concept behind blockchain – a shared digital ledger that allows you to verify and validate the transactions contained in that ledger. The ledger is shared among many machines in a decentralized, distributed, encrypted network, so nobody has the ability to artificially manipulate the data because all the other machines in the network serve as an integrity check.
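To make the "shared ledger" idea a bit more concrete, here's a toy sketch of the hash-chaining that underpins it. Each entry's hash incorporates the hash of the entry before it, so tampering with any earlier record changes every hash after it, and every other copy of the ledger would immediately disagree. (The transaction records are made up for illustration.)

```shell
# Toy hash-chained ledger: each entry's hash covers the record AND the previous hash
prev="genesis"
for record in "alice pays bob 10" "bob pays carol 10" "carol pays dave 10"; do
  prev=$(printf '%s|%s' "$record" "$prev" | sha256sum | awk '{ print $1 }')
  echo "$record -> $prev"
done
# Changing "alice pays bob 10" to "alice pays bob 20" would change the first hash,
# which feeds into the second, which feeds into the third -- the tampered chain no
# longer matches anyone else's copy.
```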

But remember – it doesn’t have to be about validating financial transactions. It can be used to validate just about any kind of record.

And that would include records that impact food safety.

Preventing Counterfeiting and Abuse

Let’s think about the application of blockchain to our original scenario.


Imagine for a moment that you are a dishonest supplier of orange juice. You have an order come in that requires you to provide 500 gallons of pure orange juice. But, being a bit of a crook, you reason that if you use half the number of oranges and instead supplement the mixture with 250 gallons of water, you could nearly double your profits!

So how would blockchain possibly prevent this fraud?

The first assumption must be that all the suppliers in the chain are being required to put their business records into the blockchain in order to be able to accurately track what is taking place.

Now, if blockchain is being used to track all the suppliers in the orange juice production process, it would show that you are only receiving half the needed number of oranges from your orange grower to produce 500 gallons of orange juice. There may be IoT (Internet of Things) sensors tracking water consumption in your plant and sending their records to the blockchain network, and these would show that you are using a lot of water in your production process, potentially indicating that you are watering down the orange juice. Shipping records would show that you are shipping twice as much product without twice the number of oranges being purchased. Perhaps there is a quality control test performed as your product leaves the plant that verifies the makeup of the orange juice and then another test performed when it reaches the next stage in the supply chain, all of which are recorded in blockchain. The blockchain allows the end user (in this case, that might be the grocery store who buys the orange juice, and who has access to all the records in the blockchain) to watch for anomalies in the production process that would indicate fraudulent production processes.

If you attempt to go back and falsify your records, what happens? Let’s say you try to claim you used twice the number of oranges that you actually used. Well, you have an orange supplier whose records are also being stored in the blockchain, and his records show that he only delivered half the number of oranges that your records indicate. The trucking company that delivered the oranges would have a record of the weight of the oranges that were delivered, and that would also be stored in the blockchain, again making your claim suspect. Your records would possibly indicate orders for twice as many orange juice cartons – and unless you have a lot of unused cartons in the plant, that would be hard to explain. When you ship your product, the next stop in the process may perform a quality check. If that quality check indicates that the orange juice is 50% water, they are going to ask questions because now THEIR reputation is at stake.

At each stage of the process, any attempt to manipulate the data will be blocked because the entire set of blockchain participants (who may not even be known to you, and with whom you therefore cannot collude to falsify the records) will be validating that your records match the set of records that they have. The grocery store at the end of the supply chain can backtrack through the entire process and identify where the orange juice is being watered down.

The orange juice supply is saved! (And you potentially go to jail.)


Conclusion

What this has taught me is that blockchain has some pretty interesting use cases. There are a number of industries that are already investigating blockchain scenarios very heavily. For example, the healthcare industry is investigating its use in tracking patient records and payment history; the financial services industry is using it to verify and validate financial transactions; and the energy industry is using it to protect themselves against intra-day price fluctuations in energy resources such as solar and wind-generated energy.

Microsoft Azure allows customers to set up blockchain solutions themselves to support their business initiatives. Some of the blockchain and distributed ledger protocols currently available are Ethereum, Hyperledger Fabric, R3 Corda, Quorum, Chain Core, and BlockApps, along with Azure Blockchain Service. These can be used to help customers provision their own blockchain network in just a few minutes on a globally distributed, highly available network, using Azure's massively scalable compute, storage, and network infrastructure as the foundation.

Investigate blockchain at the link below and try it out for yourself!

https://azure.microsoft.com/en-us/solutions/blockchain/

Announcing Microsoft’s Coco Framework for enterprise blockchain networks

 

I spend a lot of time working with partners and customers setting up and performing demos of new products.

In many cases, we are looking at features that are purely cloud-based – such as Skype for Business Cloud PBX or PSTN Conferencing. When that’s the case, I just go to the Office 365 tenant that I have set up for my own testing and show everyone where things are configured or what features are available.

Every so often, though, I get asked to set up a demo using a somewhat more complex type of environment involving a set of virtual machines or some other cloud product like EMS.

I used to manually set up the lab virtual machines on my laptop, but I found a great new resource that lets me build the environment in Azure using a documented and scripted process.

They’re called the Cloud Adoption Test Lab Guides, and they are located here: https://technet.microsoft.com/library/dn635308.aspx#O365

For example, if I needed to demonstrate how a highly-available SharePoint 2016 farm would be configured, I could use the guide found here, and it would walk me through building an Azure environment that looks like this:

[Diagram: a highly available SharePoint 2016 farm deployed in Azure]

There are a couple advantages to this approach:

  1. It frees up my laptop resources (VMs tend to be storage hogs, and I have a limited amount of CPU and RAM available for building out scenarios),
  2. I can access it from anywhere since the machines are in the Azure Cloud, and
  3. It gives me the chance to get more hands-on experience with Azure.

It’s a great option for those scenarios where you need to build a testing environment or demonstrate a product for customers.

The great thing is, you can build it in your own Azure environment so you always have a demo environment ready to go, or you can choose to build it in your customer’s Azure environment as a leave-behind for them to play with at their leisure. That also gives you the opportunity to talk to them about moving their existing on-premises workloads to Azure, using Azure as a backup/recovery location, setting up test/dev environments in Azure, and more.

The team responsible for the Cloud Adoption Test Lab Guides is constantly adding new scenarios, so check back frequently to see what they’ve created!

What is the Azure Service Trust Portal?

The Service Trust Portal is Microsoft's public site for publishing audit reports and other compliance-related information associated with Microsoft's cloud services.

What types of documents are available through the Microsoft Service Trust portal?

A list of standards that Microsoft follows, pen test results, security assessments, white papers, FAQs, and other documents that can be used to show Microsoft’s compliance efforts.

Which tool in the Service Trust Portal allows you to determine how compliant Azure is with regard to the GDPR?

Compliance Manager helps you assess and manage GDPR compliance. Compliance Manager is a free Microsoft cloud services solution designed to help organizations meet complex compliance obligations, including the GDPR, ISO 27001, ISO 27018, and NIST 800-53.

Does the Trust Center provide information about Azure compliance offerings?

Both the Azure compliance documentation and the Trust Center provide information on compliance offerings in the context of Azure. While the Azure compliance documentation might be a better place to start, using the Trust Center is a great option too.