
Networking and firewalls – Principles of Modern Architecture

Preventing access to systems from non-authorized networks is, of course, a great way to control access. In an on-premises environment this is generally the case by default, but in cloud environments, many services are open by default—at least from a networking perspective.

If your application is an internal system, consider controls that would force access along internal routes and block access to external routes.

If systems do need to be external-facing, consider network segregation by breaking the individual components up into their own networks or subnets. In this scenario, your solution would have externally facing user interfaces in a public subnet, a middle tier managing business rules in another subnet, and your backend database in a third subnet.

Using firewalls, you would only allow public access to the public subnet. The other subnets would only allow access on specific ports from the adjacent subnet. In this way, the user interface would have no direct access to the databases; it would only be allowed access to the business layer, which then facilitates that access.

Azure provides firewall appliances and network security groups (NSGs) that allow or deny access between source and destination services, and using the two in combination provides even greater control.
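
As an illustration of the tiered layout described above, the following Azure PowerShell sketch builds NSG rules that allow only HTTPS into the web subnet from the internet and only SQL traffic into the data subnet from the middle tier. This is a minimal sketch: the resource group, region, rule and group names, ports, and address prefixes are all illustrative assumptions.

    # A minimal sketch using the Az PowerShell module; names, region, and address prefixes are assumptions.

    # Allow HTTPS from the internet into the public (web) subnet.
    $webRule = New-AzNetworkSecurityRuleConfig -Name "allow-https-in" -Priority 100 `
        -Direction Inbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix Internet -SourcePortRange * `
        -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange 443

    # Allow SQL traffic into the data subnet only from the middle (business) tier...
    $dataRule = New-AzNetworkSecurityRuleConfig -Name "allow-sql-from-mid" -Priority 100 `
        -Direction Inbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix "10.0.2.0/24" -SourcePortRange * `
        -DestinationAddressPrefix "10.0.3.0/24" -DestinationPortRange 1433

    # ...and explicitly deny any other inbound virtual network traffic to the data subnet,
    # because the default NSG rules would otherwise allow it.
    $dataDeny = New-AzNetworkSecurityRuleConfig -Name "deny-other-vnet-in" -Priority 200 `
        -Direction Inbound -Access Deny -Protocol * `
        -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
        -DestinationAddressPrefix "10.0.3.0/24" -DestinationPortRange *

    # Each group is then associated with its subnet (for example, via Set-AzVirtualNetworkSubnetConfig).
    $webNsg  = New-AzNetworkSecurityGroup -Name "nsg-web"  -ResourceGroupName "rg-demo" `
        -Location "westeurope" -SecurityRules $webRule
    $dataNsg = New-AzNetworkSecurityGroup -Name "nsg-data" -ResourceGroupName "rg-demo" `
        -Location "westeurope" -SecurityRules $dataRule, $dataDeny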

Finally, creating a virtual private network (VPN) from an on-premises network into a cloud environment ensures only corporate users can access your systems, as though they were accessing them on-premises.

Network-level controls help control both perimeter and internal routes, but once a user has access, we need to confirm that the user is who they claim to be.

Identity management

Managing user access is sometimes considered the first line of defense, especially for cloud solutions that need to support mobile workforces.

Therefore, you must have a well-thought-out plan for managing access. Identity management is split into two distinct areas: authentication and authorization.

Authentication is the act of a user proving they are who they say they are. Typically, this would be a username/password combination; however, as discussed in the How do they hack? section, these credentials can be compromised.

Therefore, you need to consider options for preventing these types of attacks, as either guessing or capturing a user’s password is a common exploit. You could use alternatives such as Multi-Factor Authentication (MFA), or monitor for suspicious login attributes, such as the location a user is logging on from.

Once a user is authenticated, the act of authorization determines what they can access. Following principles such as least privilege or Just Enough Access (JEA) ensures users can only access what they require to perform their role. Just-in-Time (JIT) processes provide elevated access only when a user needs it and remove it after a set period.
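
To make least privilege concrete, the sketch below grants a user a narrowly scoped, read-only role on a single resource group using Azure role-based access control; the account, role, and resource group names are assumptions. JIT elevation itself is typically handled by a service such as Azure AD Privileged Identity Management rather than by a standing assignment.

    # Minimal sketch, assuming the user and resource group already exist; names are illustrative.
    # Grant read-only access scoped to one resource group rather than the whole subscription.
    New-AzRoleAssignment -SignInName "jane.doe@contoso.com" `
        -RoleDefinitionName "Reader" `
        -ResourceGroupName "rg-payroll-prod"

    # Review what the user can currently access, so unnecessary assignments can be removed.
    Get-AzRoleAssignment -SignInName "jane.doe@contoso.com"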

Continual monitoring with automated alerting and threat management tools helps ensure that any compromised accounts are flagged and shut down quickly.

Combining authentication and authorization management with good user education around the dangers of phishing emails should help prevent the worst attacks. Still, you also need to protect against attacks that bypass the identity layer.

How do they hack? – Principles of Modern Architecture

There are, of course, many ways hackers can gain access to your systems, but once you have identified the reason why an attacker may want to hack you, you can at least narrow down the potential methods. The following are some of the more common ones:

  • Researching login credentials: Although the simplest method, this is perhaps one of the most common. If an attacker can get your login details, they can do a lot of damage very quickly. Details can be captured by researching a user’s social and public profiles, either to guess the password or to find the answers to security questions.
  • Phishing: Another way of capturing your credentials; you may receive an email notifying you that your account has been locked, with a link to a fake website. The website looks like what you are expecting, and when you enter your details, it merely captures them.
  • Email: Rather than capturing login details, some emails may contain malicious code in the form of an attachment or a link to a compromised site. The purpose is to infect your computer with a virus, Trojan, or similar. The payload could be a keylogger that captures keystrokes (that is, login details), or it could spread and use more sophisticated attacks to access other systems.
  • Website vulnerabilities: Poorly written code can lead to all sorts of entry points. SQL injection attacks, whereby Transact-SQL (T-SQL) statements are posted within a form, can update, add, or delete data if the backend is not written to protect against this type of attack (see the parameterized-query sketch after this list). Cross-site scripting, where script running on the attacker’s site accesses the backend of yours, can override form posts, and so on.
  • Distributed Denial of Service (DDoS): A DDoS attack seeks to overwhelm your servers and endpoints by flooding them with requests—this can either bring down your applications or potentially trigger other exploits that grant complete access.
  • Vulnerability exploits: Third-party applications and operating systems can also have vulnerable code that hackers seek to exploit in many different ways, from triggering remote execution scripts to taking complete control of the affected system.
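
On the website-vulnerabilities point above, the usual defence against SQL injection is to pass user input as parameters rather than concatenating it into the T-SQL text. The following PowerShell sketch illustrates this using the classic System.Data.SqlClient ADO.NET classes; the connection string, table, and column names are assumptions for illustration only.

    # Minimal sketch; illustrative names only. $userInput represents untrusted form input.
    $connectionString = "Server=tcp:example.database.windows.net,1433;Database=shop;User ID=appuser;Password=..."
    $userInput = "widget'; DROP TABLE Products;--"   # malicious input is treated as data, not as T-SQL

    $conn = New-Object System.Data.SqlClient.SqlConnection $connectionString
    $cmd  = $conn.CreateCommand()
    $cmd.CommandText = "SELECT Id, Name, Price FROM Products WHERE Name = @name"
    [void]$cmd.Parameters.AddWithValue("@name", $userInput)   # value is bound as a parameter

    $conn.Open()
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { "{0}: {1}" -f $reader["Id"], $reader["Name"] }
    $conn.Close()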

Of course, there are many more, but understanding the main reasons why and how hackers hack is the first step in your defense. With this knowledge, we can start to define and plan our strategy.

Defining your strategy

Once we have identified what we need to protect, including any prioritization based on your platform’s characteristics, we can start to define a set of rules that set out how we protect ourselves.

Based on your requirements, which may be solution- and business-led, the strategy will state which elements need protecting, and how. For example, you may have a rule that states all data must be encrypted at rest, or that all logging is monitored and captured.

There are several industry compliance standards, such as ISO27001, National Institute of Standards and Technology (NIST), and the Payment Card Industry Data Security Standard (PCI DSS). These can either form the basis of your internal policies or be used as a reference; however, depending on your business’s nature, you may be required to align with one or more of them.

Information

ISO is the acronym for International Organization for Standardization, which is an international standard-setting body with representatives from multiple other standards organizations.

We can now consider which technologies we will use to implement the various policies; next, we will look at some of the more common ones.

Architecting for security – Principles of Modern Architecture

In the previous chapter, we looked at why architecture is important, what it seeks to achieve, and how it has changed over time. Understanding how we got to where we are today helps us in our role and provides a solid framework for our designs.

This chapter will look at how we architect systems in general to understand the high-level requirements and potential methods. Split into pillars, we will examine different aspects of each; however, as we will see, they are all interlinked and have some element of dependency on each other.

We will start by looking at security, perhaps one of the essential aspects of architecture, and understand how access to systems is gained and how we prevent it.

Next, we’ll investigate resilience, which is closely related to performance. By understanding the principles of these subjects, we can ensure our designs produce stable and performant applications.

Deployment mechanisms have become far more sophisticated in recent years, and we’ll learn how our decisions around how we build platforms have become entwined with the systems themselves.

Finally, we will see how a well-designed solution must include a suite of monitoring, alerting, and analytics to support the other pillars.

With this in mind, throughout this chapter, we will cover the following topics:

  • Architecting for security
  • Architecting for resilience and business continuity
  • Architecting for performance
  • Architecting for deployment
  • Architecting for monitoring and operations

Architecting for security

As technology has advanced, the solutions we build have become more powerful, flexible, and complex. Our applications’ flexibility and dynamic nature enable a business to leverage data and intelligence at a level previously unknown. The cloud is often touted by many vendors as having near unlimited capacity and processing power that is accessible by anyone.

But power comes at a cost, because it’s not just businesses who wish to leverage the potential of the cloud—hackers also have access to that tooling. Therefore, the architect of any system must keep security at the core of any design they produce.

Knowing the enemy

The first step in ensuring security is to understand the hacker mindset or, at the very least, to think about what they wish to accomplish—why do hackers hack?

Of course, there are lots of reasons, but we’ll state the obvious one—because they can! Some people see hacking a system as a challenge. Because of this, their attack vector could be any security hole they can exploit, even if exploiting it yields no tangible benefit.

The most dangerous attackers are those who wish to profit from their actions or cause damage, either directly by selling or releasing private data, or by holding their victim to ransom by encrypting data and demanding money to release it.

Some wish to simply disrupt by bringing a system down—depending on the nature of the solution in question, the damage this causes can be reputational or financial.

All of these scenarios highlight some interesting points. The first is that when designing, you need to consider all areas of your solutions that might be vulnerable—authorization, data, application access points, network traffic, and so on.

The second is that, depending on your solution, you can prioritize the areas that would cause you the most damage. For example, if you are a high-volume internet retailer, uptime may be your primary concern, and you might therefore concentrate on preventing attacks that could overload your system. If data is your most valuable asset, then you should think about ways to ensure it cannot be accessed: either by preventing an intrusion in the first place, or by protecting information with encryption or obfuscation should an intrusion occur.

Once we have the why, we need to think about the how.

IaC – Architecture for the Cloud

In Azure, components can be created in the portal using the graphical user interface (GUI), with PowerShell, or by using JSON templates. In other words, you can deploy infrastructure in Azure purely with code.

IaC provides many benefits, such as the ability to define VMs, storage, databases, or any Azure component in a way that promotes reusability, automation, and testing.
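
As a simple example of the above, the following sketch deploys an ARM (JSON) template into a resource group with Azure PowerShell. The resource group, template file, and parameter names are assumptions; the point is that the same declarative deployment can be rerun and version-controlled like any other code.

    # Minimal sketch; resource group, template file, and parameter names are illustrative assumptions.
    New-AzResourceGroup -Name "rg-iac-demo" -Location "westeurope" -Force

    # Deploy the template declaratively; rerunning the same deployment produces the same result.
    New-AzResourceGroupDeployment -ResourceGroupName "rg-iac-demo" `
        -TemplateFile ".\storage-account.json" `
        -TemplateParameterObject @{ storageAccountName = "stiacdemo001" }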

Tools such as Azure DevOps provide a central repository for all your code that can be controlled and built upon in an iterative process by multiple engineers. DevOps also builds, tests, and releases your infrastructure using automated pipelines—again, in the same way that modern software is deployed.

The DevOps name embodies the fact that operational infrastructure can now be built using development methodologies and can follow the same Agile principles.

DevSecOps takes this even further and includes codifying security components into release pipelines. Security must be designed and built hand in hand with the infrastructure at every level, as opposed to merely being a perimeter or gateway device.

Cloud architects must, therefore, be fully conversant with the taxonomy, principles, and benefits of Agile, DevOps, and DevSecOps, incorporating them into working practices and designs.

Microsoft provides a range of tools and components to support your role, with best-in-class solutions that are reliable, resilient, scalable, and—of course—secure.

As we have seen, architecture in the cloud involves many more areas than you might traditionally have gotten involved in. Hopefully, you will appreciate the reasons why these changes have occurred.

From changes in technology to new ways of working, your role has changed in different ways—although challenging, this can also be very exciting as you become involved across various disciplines and work closely with business users.

Summary

During this chapter, we have defined what we mean by architecture in the context of the AZ-304 exam, which is an important starting point to ensure we agree on what the role entails and precisely what is expected for the Azure certification.

We have walked through a brief history of business computing and how this has changed architecture over the years, from monolithic systems through to the era of personal computing, virtualization, the web, and ultimately to the cloud. We examined how each period changed the responsibilities and design requirements for the solutions built on top.

Finally, we had a brief introduction to modern working practices with IaC and project management methodologies, moving from waterfall to Agile, and how this has also changed how we as architects must think about systems.

In the next chapter, we will explore specific areas of architectural principles, specifically those aligned to the Microsoft Azure Well-Architected Framework.

Moving from Waterfall to Agile projects – Architecture for the Cloud

As we move into the cloud, other new terms around working practices come to the fore. DevOps, DevSecOps, and Agile are becoming ingrained in those responsible for building software and infrastructure.

If you come from a software or a DevOps background, there is a good chance you already understand these concepts, but if not, it helps to understand them.

Waterfall

Traditional waterfall project delivery has distinct phases to manage and control the build. In the past, it has often been considered crucial that much effort goes into planning and designing a solution before any engineering or building work commences.

A typical example is that of building a house. Before a single brick is laid, a complete architectural blueprint is produced. Next, foundations must be put in place, followed by the walls, roof, and interiors. The reasoning is that should you change your mind halfway through, it would be very difficult to alter course: if you decide the house needs to be larger after the roof is built, you would need to tear everything down and start again.

With a waterfall approach, every step must be well planned and agreed at the outset. The software industry developed a bad reputation for delivering projects late and over budget. Businesses soon realized that this was not necessarily because of mismanagement but because it is difficult to articulate a vision for something that does not yet exist and, in many cases, has never existed before.

If we take the building metaphor, houses can be built as they are because, in many cases, they are merely copying elements of another house. Each house has a lot in common—walls, floors, and a roof, and there are set ways of building each of these.

The following diagram shows a typical setup of a waterfall project, with well-defined steps completed in turn:

Figure 1.7 – Typical waterfall process

With software, this is not always the case. We often build new applications to address a need that has never been considered or addressed before. Trying to follow a waterfall approach has led to many failed projects, mainly because it is impossible to fully design or even articulate the requirements upfront.

Agile

Thus, Agile was born. The concept is to break down a big project into lots of smaller projects that each deliver a particular facet of the entire solution. Each mini-project is called a sprint, and each sprint runs through a complete project life cycle—design, plan, build, test, and review.

The following diagram shows that in many ways, Agile is lots of mini-waterfall projects:

Figure 1.8 – Agile process

Sprints are also short-lived, usually 1 or 2 weeks, but at the end of each one, something is delivered. A waterfall project may last months or years before anything is provided to a customer—thus, there is a high margin for error. A small misunderstanding of a single element along the way can result in an end state that does not meet requirements.

A particular tenet of Agile is “fail fast”—it is better to know something is wrong and correct it as soon as possible than have that problem exacerbate over time.

This sprint-led delivery mechanism can only be achieved if solutions are built in a particular manner. The application must be modular, and those modules designed in such a way that they can be easily swapped out or modified in response to changing requirements. An application architect must consider this when designing systems.

At this point, you may be wondering how this relates to the cloud. Agile suits software delivery because solutions can be built in small increments, creating lots of small modules combined into an entire solution. To support this, DevOps tooling provides automated mechanisms that deploy code in a repeatable, consistent manner.

As infrastructure in the cloud is virtualized, deployments can now be scripted and therefore automated—this is known as Infrastructure as Code (IaC).

Understanding infrastructure and platform services – Architecture for the Cloud

One of the big differences between IaaS and PaaS is about how the responsibility of components shifts.

The simplest examples of this are with websites and Structured Query Language (SQL) databases. Before we look at IaaS, let’s consider an on-premises implementation.

When hosted in your own data center, you might have a server running IIS, upon which your website is hosted, and a database server running SQL. In this traditional scenario, you own full responsibility for the hardware, Basic Input/Output System (BIOS) updates, operating system (OS) patching, security updates, resilience, inbound and outbound traffic—often via a centralized firewall—and all physical security.

IaaS

The first step in migrating to the cloud might be via a lift-and-shift approach using virtual networks (VNETs) and VMs—again, running IIS and SQL. Because you are running in Microsoft’s data centers, you no longer need to worry about the physical aspects of the underlying hardware.

Microsoft ensures their data centers have all the necessary physical security systems, including personnel, monitoring, and access processes. They also worry about hardware maintenance and BIOS updates, as well as the resilience of the underlying hypervisor layer that all the VMs run on.

You must still, however, maintain the software and operating systems of those VMs. You need to ensure they are patched regularly with the latest security and improvement updates. You must architect your solution to provide application-level resilience, perhaps by building your SQL database as a failover cluster over multiple VMs; similarly, your web application may be load-balanced across a farm of IIS servers.

Microsoft maintains network access in general, through its networking and firewall hardware. However, you are still responsible for configuring certain aspects to ensure only the correct ports are open to valid sources and destinations.

A typical example of this split in responsibility is around access to an application. Microsoft ensures protection around the general Azure infrastructure, but it provides the relevant tools and options to allow you to set which ports are exposed from your platform. Through the use of network security groups (NSGs) and firewall appliances, you define source and destination firewall rules just as you would with a physical firewall device in your data center. If you misconfigure a rule, you’re still open to attack—and that’s your responsibility.

PaaS

As we move toward PaaS, accountability shifts again. With Azure SQL databases and Azure web apps, Microsoft takes full responsibility for ensuring all OS-level patches are applied; it ensures the platforms that run Azure SQL databases and Azure web apps are resilient against hardware failure.

Your focus now moves toward the configuration of these managed services. Again, for many of them, this includes setting the appropriate firewalls. However, depending on your corporate governance rules, this needs to be well planned.

By default, communications from a web app to a backend Azure SQL database are over the public network. Although this traffic is, of course, contained within Microsoft’s network, it is technically open. To provide more secure connectivity, Azure offers virtual network service endpoints, which route the communication directly over its internal backbone, but these need to be specifically configured at the web app, the SQL service, and the VNET level.
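
A hedged sketch of that configuration follows: it enables the Microsoft.Sql service endpoint on the subnet the web app integrates with, then adds a virtual network rule on the logical SQL server so that database traffic from that subnet is accepted over the Azure backbone. The VNet, subnet, SQL server, and resource group names are assumptions.

    # Minimal sketch; VNet, subnet, SQL server, and resource group names are illustrative assumptions.
    $vnet = Get-AzVirtualNetwork -Name "vnet-app" -ResourceGroupName "rg-demo"

    # Enable the Microsoft.Sql service endpoint on the subnet the web app is integrated with.
    Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-web" `
        -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Sql" | Set-AzVirtualNetwork

    # Allow that subnet through the logical SQL server's firewall via a virtual network rule.
    $subnet = Get-AzVirtualNetwork -Name "vnet-app" -ResourceGroupName "rg-demo" |
        Get-AzVirtualNetworkSubnetConfig -Name "snet-web"

    New-AzSqlServerVirtualNetworkRule -ResourceGroupName "rg-demo" -ServerName "sql-demo" `
        -VirtualNetworkRuleName "allow-snet-web" -VirtualNetworkSubnetId $subnet.Id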

As the methods of those who wish to circumvent these systems become increasingly sophisticated, further controls are required. For web applications, the use of a Web Application Firewall (WAF) is an essential part of this—as the architect, you must ensure one is included in your designs and configured correctly; a WAF is not included by default.
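
As one possible example, the sketch below builds a WAF configuration object for an Application Gateway using the OWASP rule set in prevention mode; the rule-set version and the gateway it would attach to are assumptions, and an equivalent policy can also be applied at services such as Azure Front Door.

    # Minimal sketch; rule-set version and the target gateway are illustrative assumptions.
    $wafConfig = New-AzApplicationGatewayWebApplicationFirewallConfiguration `
        -Enabled $true -FirewallMode "Prevention" `
        -RuleSetType "OWASP" -RuleSetVersion "3.2"

    # The configuration is supplied when creating or updating the Application Gateway, for example:
    # New-AzApplicationGateway ... -WebApplicationFirewallConfiguration $wafConfig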

Important note

Even though Microsoft spends billions of dollars a year on securing the Azure platform, unless you carefully architect your solutions, you are still vulnerable to attack. Making an incorrect assumption about where your responsibility lies leads to designing systems that are exposed—remember, many cloud platforms’ networking is open by default; it has to be, and you need to ensure you fully understand where the lines are drawn.

Throughout this chapter, we have covered how changing technologies have significantly impacted how we design and build solutions; however, so far, the discussion has been around the technical implementation.

As software and infrastructure become closely aligned, teams implementing solutions have started to utilize the same tools as developers, which has changed the way projects are managed.

This doesn’t just affect the day-to-day life of an architect; it has yet another impact on the way we design those solutions as well.

Migrating to the cloud from on-premises – Architecture for the Cloud

A new company starting up today can build its IT services as cloud native from day one. These born-in-the-cloud enterprises arguably have a much simpler route.

Existing businesses, especially larger ones, must consider how any cloud-based service will operate alongside the applications currently running within their infrastructure.

Even when a corporation chooses to migrate to the cloud, this is rarely performed in a single big-bang approach. Tools exist to perform a lift-and-shift copy of existing servers to VMs, but even this takes time and lots of planning.

For such companies, consideration at each step of the way is crucial. Individual services don’t always run on a single piece of hardware—even websites are generally split into at least two tiers: a frontend user interface running on an Internet Information Services (IIS) server, with a backend database running on a separate SQL server.

Other services may also communicate with each other—a payroll system will most likely need to interface with an HR database. At the very least, many systems share a standard user directory such as Microsoft Active Directory (AD) for user authentication and authorization.

An architect must decide which servers and systems should be migrated together, to ensure these communication lines aren’t impacted by adverse latency, and which can move independently provided there are adequate cloud-to-on-premises network links. Should we use dedicated connectivity such as ExpressRoute, or will a virtual private network (VPN) channel running over the internet suffice?

As already discussed, as we move to the cloud, we change from an inherently secure platform where services are firewalled off by default, to an open one where connectivity is exposed to the internet by default. Any new communication channels from the cloud to your on-premises network, required to support a potentially long, drawn-out migration, effectively provide an entry point from the internet back into your corporate system.

To alleviate business concerns, a strong governance and monitoring model must be in place, and this needs to be well designed from the outset. Will additional teams be required to support this? Will these tasks be added to existing teams’ responsibilities? What tooling is used? Will it be your current compliance monitoring and reporting software, or will you have a different set for the cloud?

There are many different ways to achieve this, all depending on the answers to these specific questions. However, for those who wish to embrace a cloud-first solution, this may involve the following technologies:

  • Azure Policy and Azure Blueprints for build control
  • Azure Recovery Services
  • Azure Update Management for VM patching
  • Azure Security Center for alerting and compliance reporting
  • Azure Monitor Agent installed on VMs
  • Azure Monitor
  • Azure Log Analytics and Azure Monitor Workbooks

Although these are Azure solutions, they can also be integrated with on-premises infrastructure. The following diagram shows an example of this:

Figure 1.6 – Cloud compliance and monitoring tooling

As you can see, having a well-architected framework in place is crucial for ensuring the health and safety of your platform, and this in turn feeds into your strategies and overall solution design when considering a migration into the cloud.

Once we have decided how our integration with an on-premises system might look, we can then start to consider whether we perform a simple “lift and shift” or take the opportunity to re-platform. Before making these choices, we need to understand the main differences between IaaS and PaaS, and when one might be better than the other.

Cloud computing – Architecture for the Cloud

Cloud platforms such as Azure sought to remove the difficulty and cost of maintaining the underlying hardware by providing pure compute and storage services on a pay-as-you-go or operational expenditure (OpEx) model rather than a capital expenditure (CapEx) model.

So, instead of providing hardware hosting, they offered Infrastructure as a Service (IaaS) components such as VMs, networking, and storage, and Platform as a Service (PaaS) components such as Azure Web Apps and Azure SQL Databases. The latter are the most interesting: Azure Web Apps and Azure SQL Databases were the first PaaS offerings, and the key difference is that they are services fully managed by Microsoft.

Under the hood, these services run on VMs; however, whereas with VMs you are responsible for the maintenance and management of them—patching, backups, resilience—with PaaS, the vendor takes over these tasks and just offers the basic service you want to consume.

Over time, Microsoft has developed and enhanced its service offerings and built many new ones as well. But as an architect, it is vital that you understand the differences and how each type of service has its own configurations, and what impact these have.

Many see Azure as “easy to use”, and to a certain extent, one of the marketing points around Microsoft’s service is just that—it’s easy. Billions of dollars are spent on securing the platform, and a central feature is that the vendor takes responsibility for ensuring the security of its services.

A common mistake made by many engineers, developers, system administrators, and architects is assuming that this means you can just start up a service, such as Azure Web Apps or Azure SQL Databases, and that it is inherently secure “out of the box”.

While to a certain extent this may be true, by the very nature of cloud, many services are open to the internet. Every component has its own configuration options, and some of these revolve around securing communications and how they interact with other Azure services.

Now, more than ever, with security taking center stage, an architect must be vigilant of all these aspects and ensure they are taken into consideration. So, whereas the requirement to design underlying hardware is no longer an issue, the correct configuration of higher-level services is critical.

As we can see in the following diagram, the designs of our solutions to a certain extent become more complex in that we must now consider how services communicate between our corporate and cloud networks. However, the need to worry about VMs and hardware disappears—at least when purely using PaaS:

Figure 1.5 – Cloud integration

As we have moved from confined systems such as mainframes to distributed systems with the cloud provider taking on more responsibility, our role as an architect has evolved. Certain aspects may no longer be required, such as hardware design, which has become extremely specialized. However, a cloud architect must simultaneously broaden their range of skills to handle software, security, resilience, and scalability. For many enterprises, the move to the cloud provides a massive opportunity, but due to existing assets, moving to a provider such as Azure will not necessarily be straightforward. Therefore, let’s consider which additional challenges may face an architect when considering migration.

Web apps, mobile apps, and APIs – Architecture for the Cloud

At around the same time as virtualization was starting to grow, the internet began to mature beyond an academic and military tool. Static, informational websites built purely in HTML gave way to database-driven dynamic content that enabled small start-ups to sell on a worldwide platform with minimal infrastructure.

Websites started to become ever more complex, and slowly the developer community began to realize that full-blown applications could be run as web apps within a browser window, rather than having to control and deploy software directly to a user’s PC.

Processing requirements now moved to the backend server—dynamic web pages were generated on the fly by the web server, with the user’s PC only rendering the HTML.

With all this reliance on the backend, those designing applications had to take into account how to react to failures automatically. The virtualization layer, and the software running on top, had to be able to respond to issues in a way that made the user completely unaware of them.

Architects had to design solutions to be able to cope with an unknown number of users that may vary over time, coming from different countries. Web farms helped spread the load across multiple servers, but this in itself required a new way of maintaining state, that is, remembering what a user was doing from one page request to the next, keeping in mind that consecutive requests might be served by different servers.

As the mobile world exploded, more and more mobile apps needed a way of using a centralized data store—one that could be accessed over the internet. Thus, a new type of web app, the API app, started serving raw data as RESTful services (where REST stands for REpresentational State Transfer) using formats such as Extensible Markup Language (XML) or JavaScript Object Notation (JSON).

Information

A RESTful service is an architectural pattern that uses web services to expose data that other systems can then consume. REST allows systems to interchange data in a pre-defined way. As opposed to an application that communicates directly with a database using database-specific commands and connection types, RESTful services use HTTP/HTTPS with standard methods (GET, POST, DELETE, and so on). This allows the underlying data source to be independent of the actual implementation—in other words, the consuming application does not need to know what the source database is, and in fact could be changed without the need to update the consumer.
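
As a quick illustration of the above, the following sketch calls a hypothetical RESTful endpoint with PowerShell; the URL and payload fields are assumptions. The consumer only sees JSON over HTTPS and knows nothing about the database behind the API.

    # Minimal sketch; the endpoint and payload are hypothetical.
    $baseUri = "https://api.example.com/v1"

    # GET returns JSON, which PowerShell deserializes into objects automatically.
    $product = Invoke-RestMethod -Uri "$baseUri/products/42" -Method Get
    $product.name

    # POST sends JSON in the request body to create a new record.
    $body = @{ name = "Widget"; price = 9.99 } | ConvertTo-Json
    Invoke-RestMethod -Uri "$baseUri/products" -Method Post -Body $body -ContentType "application/json"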

Eventually, hosted websites also started using these APIs, with JavaScript-based frameworks providing a more fluid experience to users. Ironically, this moved the compute requirements back to the user’s PC.

Now, architects have to consider both the capabilities of a backend server and the potential power of a user’s device—be it a phone, tablet, laptop, or desktop.

Security now starts to become increasingly problematic for many different reasons.

The first-generation apps mainly used form-based authentication backed by the same database running the app, which worked well for applications such as shopping sites. But as web applications started to serve businesses, users had to remember multiple logins and passwords for all the different systems they used.

As web applications became more popular—being used by corporates, small businesses, and retail customers—ensuring security became equally more difficult. There was no longer a natural internal barrier—systems needed to be accessible from anywhere. As apps themselves needed to be able to communicate to their respective backend APIs, or even APIs from other businesses providing complementary services, it was no longer just users we had to secure, but additional services too.

Having multiple user databases would no longer do the job, and therefore new security mechanisms had to be designed and built. OpenID, OAuth 2.0, Security Assertion Markup Language (SAML), and others were created to address these needs; however, each has its own nuances, and each needs to be considered when architecting solutions. The wrong decision no longer means something simply won’t work; it could mean a user’s data being exposed, which in turn leads to massive reputational and financial risk.

From an architectural point of view, solutions are more complex, and as the following diagram shows, the number of components required also increases to accommodate this:

Figure 1.4 – Web apps and APIs increase complexity

Advancements in hardware to support this new era, and to provide ever more stable and robust systems, meant that networking, storage, and compute each required roles focused on these niche but highly complex components.

In many ways, this complexity of the underlying hosting platforms led to businesses struggling to cope with or afford the necessary systems and skills. This, in turn, led to our next and final step—cloud computing.

Virtualization – Architecture for the Cloud

As the software that ran on servers started to become more complex and take on more diverse tasks, it became clear that having a server that ran internal emails during the day but sat unused in the evening and at the weekend was not very efficient.

Conversely, a backup or report-building server might only be used in the evening and not during the day.

One solution to this problem was virtualization, whereby multiple servers—even those with a different underlying operating system—could be run on the same physical hardware. The key was that physical resources such as random-access memory (RAM) and compute could be dynamically reassigned to the virtual servers running on them.

So, in the preceding example, more resources would be given to the email server during core hours, but would then be reduced and given to backup and reporting servers outside of core hours.

Virtualization also enabled better resilience as the software was no longer tied to hardware. It could move across physical servers in response to an underlying problem such as a power cut or a hardware failure. However, to truly leverage this, the software needed to accommodate it and automatically recover if a move caused a momentary communications failure.

From an architectural perspective, the usual issues remained the same—we still used a single user database directory; virtual servers needed to be able to communicate; and we still had the physically secure boundary of a network.

Virtualization technologies presented different capabilities to design around—centrally shared disks rather than dedicated disks communicating over an internal data bus; faster and more efficient communications between physical servers; and the ability to detect a failing physical server and respond by moving its resources to another physical server with capacity.

In the following diagram, we see that discrete servers such as databases, file services, and email servers run as separate virtual services, but now they share hardware. However, from a networking point of view, and arguably a software and security point of view, nothing has changed. A large role of the virtualization layer is to abstract away the underlying complexity so that the operating systems and applications they run are entirely unaware:

Figure 1.3 – Virtualization of servers

We will now look at web apps, mobile apps, and application programming interfaces (APIs).
