Virtual machines
This section will introduce VMs, a compute service that you must understand, as outlined in the exam’s Skills Measured section: Describe Core Azure Services.
VMs are IaaS compute resources; they are common building blocks of most Azure solutions. A VM virtualizes the physical compute resources of CPU and memory; it is a software emulation of a physical computer.
VMs are the most appropriate compute service in the following scenarios:
- There is a need for complete control over the operating system (OS) and any installed software.
- There are customization requirements, such as customizing the OS, any software/applications, and runtimes. Custom and Azure Marketplace images can also be used; the OS can be Windows or a Linux distribution.
- In lift-and-shift migration scenarios where the workload cannot be containerized.
- There is a need to extend on-premises computing capacity, perhaps for development and testing, disaster recovery, or business continuity scenarios. VMs can be connected to an organization’s network either using a VPN, with traffic routed over the internet, or using the Microsoft ExpressRoute service, a private network connection that bypasses the internet for better performance where low latency is required. Alternatively, Azure VMs can be isolated and not connected to on-premises environments.
Because VMs are IaaS resources, creating them in your solution means there are tasks you are still responsible for under the shared responsibility model we looked at in Chapter 1, Introduction to Cloud Computing. These are the same tasks you would carry out for on-premises compute resources, such as configuring and patching the OS, installing and configuring any software, creating backups, securing the VM with network security controls, and managing and securing user account access.
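To make this concrete, the following is a minimal sketch of creating a VM with the Azure SDK for Python, assuming the azure-identity and azure-mgmt-compute packages are installed; the subscription ID, resource group, network interface, and credential values are hypothetical placeholders, and the resource group and network interface are assumed to already exist. Once the VM is running, everything inside the OS – patching, software, backups, and access control – remains your responsibility, as described previously.

```python
# A minimal sketch of creating a VM with the Azure SDK for Python.
# Assumes: pip install azure-identity azure-mgmt-compute, and that the
# resource group and network interface below already exist (hypothetical names).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # hypothetical placeholder
resource_group = "rg-demo"                  # assumed to exist
nic_id = (
    "/subscriptions/<your-subscription-id>/resourceGroups/rg-demo"
    "/providers/Microsoft.Network/networkInterfaces/nic-demo"  # assumed to exist
)

compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# The VM definition: size (hardware profile), image, OS credentials, and NIC.
vm_parameters = {
    "location": "westeurope",
    "hardware_profile": {"vm_size": "Standard_B2ms"},
    "storage_profile": {
        "image_reference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2022-datacenter",
            "version": "latest",
        }
    },
    "os_profile": {
        "computer_name": "vm-demo",
        "admin_username": "azureadmin",
        "admin_password": "<a-strong-password>",  # hypothetical; use a secret store in practice
    },
    "network_profile": {"network_interfaces": [{"id": nic_id}]},
}

# Creating a VM is a long-running operation: the call returns a poller.
poller = compute_client.virtual_machines.begin_create_or_update(
    resource_group, "vm-demo", vm_parameters
)
vm = poller.result()
print(f"Created VM {vm.name} in {vm.location}")
```

In practice, the supporting network resources would be created first (or the whole deployment expressed as an ARM or Bicep template), and the administrator password would come from a secret store such as Azure Key Vault rather than being hard-coded.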
This section introduced VMs as a core Azure IaaS compute service. In the next section, we will look at the differences between physical machines and VMs and the benefits of VMs.
Physical machines versus virtual machines
To understand the benefits of VMs over physical servers, in this section, we will explore the physical hardware operating model.
Historically, there was typically a one-to-one mapping between each application an organization wished to operate and a dedicated instance of an OS on a physical server, meaning that each app required its own physical server. In this scenario, many physical servers would be required: an organization would have individual dedicated physical servers to run Exchange, SharePoint, Active Directory (AD), file, print, web, database, and line-of-business workloads, and so on.
The following diagram outlines this traditional physical hardware operating model, which has a common shared infrastructure connecting and supporting multiple physical servers:

Figure 4.2 – Traditional physical servers approach
The preceding diagram shows the typical traditional approach of deploying one physical server per app, which was (and still is) a very costly and operationally inefficient model. Virtualization has many benefits for an organization when running on-premises, and when running on a cloud platform, there can be even greater operational and financial benefits and value, which we looked at in Chapter 1, Introduction to Cloud Computing. These include no longer having to purchase and maintain hardware in an on-premises facility and no longer having to finance it on a CapEx cost model (although it may be leased as OpEx).
Before we switch to the virtualization model, we need to understand what virtualization is as a technology concept, what its benefits are, and what value it provides to both the technology and business personas of an organization.
Virtualization, as a technology concept, is based on abstraction and, specifically, hardware abstraction. The technology layer that’s used to achieve this hardware abstraction is known as a hypervisor.
Abstraction means removing a dependency and filtering out or ignoring characteristics that are no longer relevant to us, which allows us to focus on what is important. For example, in the case of virtualization, the VM is no longer dependent on the hardware; once we have abstracted the hardware, or removed it from the equation, we no longer need that layer or its detail. That is somebody else’s aspect to consider – we just take what we need from it.
Likening this to our favorite topic of pizza and Pizza-as-a-Service from Chapter 1, Introduction to Cloud Computing, we could say that our favorite franchised pizza outlet has abstracted the cooking process: they have abstracted the oven, and indeed the whole kitchen. I requested a specific pizza that met my requirements and consumed it; the details of stocking ingredients, knowing the recipes, owning a specialty pizza oven, and the actual cooking process were filtered out and were no longer my concern.
In the virtualization versus containerization section, we will see that containerization is all about abstraction at the OS level. Virtualization means abstraction at the hardware level; beyond this, serverless provides abstraction at the runtime level.
The following diagram visualizes the approach of virtualization:

Figure 4.3 – Virtualization approach
This model allows fewer physical servers to run more applications. In the preceding diagram, we have two physical servers to run two apps; with virtualization, we can run four applications from just one physical server in this example.
Each VM shares the underlying hardware resources of the physical server it runs on, referred to as the host resources. From one set of physical CPU and memory resources, we can create many virtual CPU and memory resources.
How many VMs you can host on a physical host server will vary depending on the underlying host’s resources; your mileage may vary.
This section looked at VMs and their advantages over physical servers. In the next section, we will look at the different types of VMs that are available to run your workloads.
VM types
Different workloads have different requirements and need different solutions. Therefore, Azure has many different VM types, and each VM type is tailored to a specific use case and workload type.
VM types are broken down into categories and family series; the family series identifier is a leading alphabetic character, as seen in the following list. In addition, a naming convention breaks down the VM types based on their intended use case; it includes elements such as sub-families, the number of virtual CPUs (expressed as the number of vCPUs), additive features, and versions. We will take a closer look at VM naming conventions later in this section.
Although more of an advanced topic than required for the exam objectives, VMs also support nested virtualization, which allows you to run Hyper-V inside a VM. It is mainly used for testing, training, development, and non-production workloads. Not all VM sizes support nested virtualization, however; as this is liable to change, you should check the following URL for the latest information: https://docs.microsoft.com/azure/virtual-machines/sizes.
Let’s take a look at the categories, family series, and intended purpose of various VMs (a short sketch after this list shows how to enumerate the sizes available in a region):
- General Purpose (A, B, D family series): These VMs have a balanced CPU-to-memory ratio. They are best suited for testing and development (A series only), burstable workloads (B series only), and general-purpose workloads (D series only).
- Compute Optimized (F family series): These VMs have a high CPU-to-memory ratio. They are best suited for web servers, application servers, network appliances, batch processing, and any workload where the bottleneck will typically be CPU rather than memory.
- Memory Optimized (E, G, M family series): These VMs have a high memory-to-CPU ratio. They are best suited for relational databases, in-memory analytics, and any workload where the bottleneck will typically be memory rather than CPU.
- Storage Optimized (L family series): These VMs have high disk throughput and I/O. They are best suited for data analytics, data warehousing, and any workload where the bottleneck will typically be disk rather than memory or CPU.
- GPU Optimized (N family series): These VMs have graphics processing units (GPUs). They are best suited for compute-intensive, graphics- and gaming-intensive, visualization, and video conferencing/streaming workloads.
- High Performance (H family series): These are the most powerful CPU VMs that Azure provides and can offer high-throughput network interfaces. They are best suited for compute- and network-intensive workloads such as SAP HANA.
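As a hedged illustration of how these family letters appear in practice, the following sketch uses the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages, with a hypothetical subscription ID placeholder) to list the VM sizes available in one region and group them by their leading family letter; the sizes on offer vary by region, a point we will return to in the VM deployment considerations section.

```python
# Sketch: group the VM sizes available in one region by their leading family letter.
# Assumes the azure-identity and azure-mgmt-compute packages are installed and that
# the subscription ID placeholder below is replaced with a real value.
from collections import defaultdict

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(
    DefaultAzureCredential(), "<your-subscription-id>"  # hypothetical placeholder
)

families = defaultdict(list)
for size in compute_client.virtual_machine_sizes.list(location="westeurope"):
    if not size.name.startswith("Standard_"):
        continue  # skip legacy "Basic_" sizes
    family_letter = size.name[len("Standard_")]  # e.g. "D" from "Standard_D4s_v4"
    families[family_letter].append((size.name, size.number_of_cores, size.memory_in_mb))

for family, sizes in sorted(families.items()):
    print(f"{family} series: {len(sizes)} sizes available, e.g. {sizes[0][0]}")
```

Each item returned also carries the vCPU count and memory size, which is helpful when comparing sizes within a family.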
The preceding family series awareness is all that you need for the exam objectives; however, as this book aims to take your skills beyond the exam objectives so that you are prepared for an Azure role, we will take a closer look at the naming convention. This naming convention can help you identify the many VM sizes you will be able to choose from when you create a VM.
For simplicity and readability, we have just kept the core values:
[Family] + [Sub-Family] + [# of vCPUs] + [Additive Features] + [Version]:
- Family: The VM family series
- Sub-Family: The specialized VM differentiations
- # of vCPUs: The number of vCPUs of the VM
- Additive Features:
a) a: AMD-based processor.
b) d: Local temp disk (diskful size).
c) i: Isolated size.
d) l: Low memory size.
e) m: Memory-intensive size.
f) s: Premium storage capable; some newer sizes without the s attribute can still support premium storage.
g) t: Tiny memory size.
- Version: The version of the VM family series.
Some example breakdowns are as follows (a small parsing sketch of this convention follows the examples):
- B2ms: B series family, two vCPUs, memory-intensive, premium storage capable
- D4ds v4: D series family, four vCPUs, local temp disk, premium storage capable, version 4
- D4as v4: D series family, four vCPUs, AMD, premium storage capable, version 4
- E8s v3: E series family, eight vCPUs, premium storage capable, version 3
- NV16as v4: N series family, V sub-family, 16 vCPUs, AMD-based processor, premium storage capable, version 4
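To reinforce the convention, here is a small, self-contained Python sketch that splits a size name into the core values listed previously. It is an illustrative simplification rather than an official parser: it expects the short form of the name (without the Standard_ prefix), as used in the examples above, and does not handle every suffix (for example, constrained-core sizes).

```python
import re

# Simplified, illustrative breakdown of a short VM size name (e.g. "D4ds v4") into
# the core naming-convention values; not an official or exhaustive parser.
PATTERN = re.compile(
    r"^(?P<family>[A-Z])"            # family series, e.g. B, D, E, N
    r"(?P<subfamily>[A-Z]*)"         # optional sub-family, e.g. V in NV
    r"(?P<vcpus>\d+)"                # number of vCPUs
    r"(?P<features>[a-z]*)"          # additive features, e.g. a, d, m, s
    r"(?:[ _]?v(?P<version>\d+))?$"  # optional version, e.g. v4
)

FEATURES = {
    "a": "AMD-based processor",
    "d": "local temp disk",
    "i": "isolated size",
    "l": "low memory",
    "m": "memory-intensive",
    "s": "premium storage capable",
    "t": "tiny memory",
}

def describe(size_name: str) -> str:
    match = PATTERN.match(size_name)
    if not match:
        return f"{size_name}: does not match the simplified pattern"
    parts = [f"{match['family']} series family"]
    if match["subfamily"]:
        parts.append(f"{match['subfamily']} sub-family")
    parts.append(f"{match['vcpus']} vCPUs")
    parts += [FEATURES.get(f, f) for f in match["features"]]
    if match["version"]:
        parts.append(f"version {match['version']}")
    return f"{size_name}: " + ", ".join(parts)

for name in ["B2ms", "D4ds v4", "D4as v4", "E8s v3", "NV16as v4"]:
    print(describe(name))
```

Running it against the example names prints breakdowns equivalent to the preceding list.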
This section introduced the different VM types you can use to run your workloads. In the next section, we will look at what else to consider when choosing VMs as the compute service to host your workloads.
VM deployment considerations
Some additional elements to consider when creating VMs for a solution are outlined in this section and visualized in the following diagram:

Figure 4.4 – VM components and considerations
Let’s look at these additional elements in more detail:
- Additional VM resources: A VM includes virtual CPU and memory as its core components; however, you will need to provide an OS, software, storage, networking, connectivity, and security as a minimum for the VM, the same as you would for a physical computer.
- Location and data residency: Not all VM types and sizes are available in all regions; you should check that the VM types and sizes required for your solution are available in the region you will be creating resources in. Data residency may also be an important point to consider to ensure you meet any compliance needs mandated for your organization. Different regions will also have different costs for VM creation.
- VM quota limits: Each Azure subscription has default quota limits that could impact the creation of VMs; these include limits on the total number of VMs, the total VM cores, and the VMs per series. The default limits vary based on the Azure subscription billing type. Quota limits are commonly reached for workloads such as SQL VMs or virtual desktop VMs that require the NV series type, but this can be resolved by requesting a quota limit increase from Microsoft Support (a small sketch at the end of this section shows how to check current usage against these limits). The following link provides further information on these limits: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits.
- Monitoring: You can’t respond to what you don’t know about, so it’s vital to have visibility into performance metrics and operational and security event logs to gain insights. Unfortunately, Microsoft does not automatically monitor VMs or their resources; activity logs are captured automatically, but these are more of an audit and governance control than performance metrics or operational and security event logs, so you must enable monitoring yourself.
You can use Azure Monitor to gain insights into your VM’s performance and operational health, as well as Azure Advisor for recommendations. In addition, Azure Sentinel, a cloud-native Security Information and Event Management (SIEM) solution, can be used as a single pane of glass, providing a bird’s-eye view across all your Azure, other cloud, and on-premises resources.
- Backup: Microsoft does not automatically back up the OS or software running on your VMs. Under the shared responsibility model, you have complete control of that aspect and it is your responsibility; however, Microsoft’s responsibility is to protect the underlying host’s hardware and software.
- Update management: Microsoft does not automatically update the OS or software running on your VMs. Under the shared responsibility model, you have complete control over that aspect and it is your responsibility; however, Microsoft’s responsibility is to update the underlying host’s hardware and software. This leads us to the next important aspect: availability.
- Availability: This is the percentage of time a service is available for use. Chapter 1, Introduction to Cloud Computing, looked at the two core components for addressing SLA requirements for VMs: availability sets and Availability Zones, each of which provides an SLA. This is important to consider because it is Microsoft’s hardware and infrastructure that provides your VM; its resources can and will fail, and this is unavoidable. It is, rather, a case of how you handle the failure when it happens, given that you can’t prevent it.
It may be a planned or unplanned update/maintenance event that Microsoft carries out, so it may not be a failure as such, but it may mean that your VM gets rebooted or moved to another host, which could result in a short service interruption. So that these events don’t impact how your service operates, Microsoft provides measures such as availability sets and Availability Zones for you to implement.
It is important to note that these measures need to be actioned by you, and by default, they are not configured.
- Scalability: This is the ability of a system to handle increased loads while still meeting availability goals. By default, individual VMs do not support autoscaling; however, an IaaS resource service can provide this functionality while still leveraging VMs.
VM scale sets provide this scale functionality. This is an IaaS resource service with built-in autoscaling features for VM-based workloads such as web and application services and batch processing. A server farm of identical VMs can be created through VM scale sets, which provides the automatic scaling functionality expected from a cloud model.
This section looked at what to consider when choosing VMs as the compute service to host your workloads. In the next section, we will look at containerization and compare and contrast it to VMs as a compute service.
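Before we do, here is one final hedged sketch relating to the quota limits consideration earlier in this section. Assuming the azure-identity and azure-mgmt-compute packages and a hypothetical subscription ID placeholder, it lists the regional compute usage counters and flags any that are at 80% or more of their quota limit.

```python
# Sketch: check current compute usage against subscription quota limits in a region.
# Assumes: pip install azure-identity azure-mgmt-compute (subscription ID is a placeholder).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(
    DefaultAzureCredential(), "<your-subscription-id>"  # hypothetical placeholder
)

for usage in compute_client.usage.list(location="westeurope"):
    # Each counter reports a current value and a limit, e.g. total regional vCPUs.
    if usage.limit and usage.current_value / usage.limit >= 0.8:
        print(
            f"{usage.name.localized_value}: "
            f"{usage.current_value}/{usage.limit} (80%+ of quota used)"
        )
```

If a counter such as the total regional vCPUs is close to its limit, a quota increase request can be raised with Microsoft Support before a deployment fails.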