
Networking and firewalls – Principles of Modern Architecture

Preventing access to systems from non-authorized networks is, of course, a great way to control access. In an on-premises environment, this is generally the case by default; in cloud environments, however, many services are open by default—at least from a networking perspective.

If your application is an internal system, consider controls that would force access along internal routes and block access to external routes.

If systems do need to be external-facing, consider network segregation by breaking the individual components into their own networks or subnets. In this scenario, your solution would have its externally facing user interface in a public subnet, a middle tier managing business rules in a second subnet, and the backend database in a third subnet.

Using firewalls, you would only allow public access to the public subnet. The other subnets would only allow access on specific ports on the adjacent subnet. In this way, the user interface would have no direct access to the databases; it would only be allowed access to the business layer that then facilitates that access.

Azure provides firewall appliances and network security groups (NSGs) that deny and allow access between source and destination services, and using a combination of the two together provides even greater control.
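The tiered allow-list model described above can be sketched in a few lines. The following is a purely illustrative Python model, not the Azure NSG API; the subnet names, ports, and rule shape are all invented for the example. The key idea it demonstrates is deny-by-default: traffic flows only where an explicit rule permits it, so the user interface tier can never reach the database directly.

```python
# Illustrative model of NSG-style allow rules between tiers.
# Each rule permits traffic from a source subnet to a destination
# subnet on a specific port; anything not matched is denied.
RULES = [
    ("internet", "web-subnet", 443),      # public HTTPS into the UI tier
    ("web-subnet", "app-subnet", 8080),   # UI tier to business tier only
    ("app-subnet", "db-subnet", 1433),    # business tier to SQL only
]

def is_allowed(source: str, destination: str, port: int) -> bool:
    """Deny by default; allow only traffic matching an explicit rule."""
    return (source, destination, port) in RULES

# The UI tier cannot reach the database directly...
print(is_allowed("web-subnet", "db-subnet", 1433))  # False
# ...but the business tier can.
print(is_allowed("app-subnet", "db-subnet", 1433))  # True
```

Real NSGs add rule priorities and source/destination address prefixes, but the deny-by-default evaluation shown here is the same principle.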

Finally, creating a virtual private network (VPN) from an on-premises network into a cloud environment ensures only corporate users can access your systems, as though they were accessing them on-premises.

Network-level controls help control both perimeter and internal routes, but once a user has access, we need to confirm that the user is who they claim to be.

Identity management

Managing user access is sometimes considered the first line of defense, especially for cloud solutions that need to support mobile workforces.

Therefore, you must have a well-thought-out plan for managing access. Identity management is split into two distinct areas: authentication and authorization.

Authentication is the act of a user proving they are who they say they are. Typically, this is a username/password combination; however, as discussed in the How do they hack? section, these credentials can be compromised.

Therefore, you need to consider options for preventing these types of attacks, as either guessing or capturing a user’s password is a common exploit. You could use alternatives such as Multi-Factor Authentication (MFA) or monitoring for suspicious login attributes, such as from where a user is logging on.
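Monitoring for suspicious login attributes can be sketched as follows. This is entirely illustrative: real services such as Azure AD Identity Protection use far richer risk signals, whereas this hypothetical helper simply flags a sign-in from a country never before seen for that user.

```python
from collections import defaultdict

# Countries previously seen for each user (invented sample state).
recent_locations = defaultdict(set)

def record_login(user: str, country: str) -> None:
    """Remember where a user has successfully signed in from."""
    recent_locations[user].add(country)

def is_suspicious(user: str, country: str) -> bool:
    """Flag a sign-in from a country the user has never logged in from."""
    seen = recent_locations[user]
    return bool(seen) and country not in seen

record_login("alice", "GB")
print(is_suspicious("alice", "GB"))  # False: a familiar location
print(is_suspicious("alice", "RU"))  # True: worth an MFA challenge
```

A flagged attempt would not necessarily be blocked outright; a common design choice is to respond by stepping up to MFA.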

Once a user is authenticated, the act of authorization determines what they can access. Following principles such as least privilege or Just Enough Access (JEA) ensures that users can only access what they require to perform their role. Just-in-Time (JIT) processes provide elevated access only when a user needs it and remove it after a set period.
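The JIT idea can be sketched in a few lines. In Azure, this is provided by services such as Privileged Identity Management, but the class below is purely hypothetical: an elevated role is granted with an expiry, and every use of the role re-checks that the window is still open.

```python
import time

# Hypothetical sketch of Just-in-Time (JIT) elevation: a role is
# granted with an expiry timestamp and checked on every use.
class JitGrant:
    def __init__(self, user: str, role: str, duration_seconds: float):
        self.user = user
        self.role = role
        self.expires_at = time.monotonic() + duration_seconds

    def is_active(self) -> bool:
        """The grant is valid only until its time window elapses."""
        return time.monotonic() < self.expires_at

grant = JitGrant("alice", "vm-admin", duration_seconds=0.05)
print(grant.is_active())  # True while the window is open
time.sleep(0.1)
print(grant.is_active())  # False once the elevation has expired
```

The important property is that revocation is automatic: nobody has to remember to remove the elevated access.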

Continual monitoring with automated alerting and threat management tools helps ensure that any compromised accounts are flagged and shut down quickly.

Using a combination of authorization and authentication management and good user education around the danger of phishing emails should help prevent the worst attacks. Still, you also need to protect against attacks that bypass the identity layer.

How do they hack? – Principles of Modern Architecture

There are, of course, many ways hackers can gain access to your systems, but once you have identified the reason why an attacker may want to hack you, you can at least narrow down the potential methods. The following are some of the more common ones:

  • Researching login credentials: Although the simplest method, this is perhaps one of the most common. If an attacker can get your login details, they can do a lot of damage very quickly. Details can be captured by researching a user’s social and public profiles, either to guess the password or to find the answers to security questions.
  • Phishing: Another way of capturing your credentials; you may receive an email notifying you that your account has been locked, with a link to a fake website. The website looks like the one you are expecting, and when you enter your details, it merely captures them.
  • Email: Rather than capturing login details, some emails may contain malicious code in the form of an attachment or a link to a compromised site. The purpose is to infect your computer with a virus, Trojan, or similar. The payload could be a keylogger that captures keystrokes (that is, login details), or it could spread and use more sophisticated attacks to access other systems.
  • Website vulnerabilities: Poorly written code can lead to all sorts of entry points. SQL injection attacks, whereby Transact-SQL (T-SQL) statements are posted within a form, can update, add, or delete data if the backend is not written to protect against this type of attack. Cross-site scripting, whereby attacker-supplied script runs in your users’ browsers and accesses your backend, can override form posts, and so on.
  • Distributed Denial of Service (DDoS): A DDoS attack seeks to overwhelm your servers and endpoints by flooding them with requests—this can either bring down your applications or potentially trigger other exploits that grant complete access.
  • Vulnerability exploits: Third-party applications and operating systems can also have vulnerable code that hackers seek to exploit in many different ways, from triggering remote execution scripts to taking complete control of the affected system.
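The SQL injection risk described above is easiest to see with a small demonstration. This sketch uses Python’s built-in sqlite3 module with an invented table and data; the same principle applies to any SQL backend, including T-SQL. Concatenating user input into the query lets the attacker rewrite the WHERE clause, while a parameterized query treats the same input as a harmless literal.

```python
import sqlite3

# Demonstration of SQL injection and its standard defense.
# Table name and data are invented purely for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the WHERE clause.
unsafe_sql = "SELECT name FROM users WHERE password = '" + malicious + "'"
print(conn.execute(unsafe_sql).fetchall())  # [('alice',)] -- breached!

# Safe: the driver treats the input as a literal value, not as SQL.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE password = ?", (malicious,)
).fetchall()
print(safe_rows)  # [] -- the injection string matches no password
```

Parameterized queries (or an ORM that uses them) are the baseline defense; input validation and least-privilege database accounts add further layers.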

Of course, there are many more, but understanding the main reasons why and how hackers hack is the first step in your defense. With this knowledge, we can start to define and plan our strategy.

Defining your strategy

Once we have identified what we need to protect, including any prioritization based on our platform’s characteristics, we can start to define a set of rules that sets out how we will protect ourselves.

Based on your requirements, which may be solution- and business-led, the strategy will state which elements need protecting, and how. For example, you may have a rule that states all data must be encrypted at rest, or that all logging is monitored and captured.

There are several industry compliance standards, such as ISO27001, National Institute of Standards and Technology (NIST), and the Payment Card Industry Data Security Standard (PCI DSS). These can either form the basis of your internal policies or be used as a reference; however, depending on your business’s nature, you may be required to align with one or more of them.

Information

ISO stands for the International Organization for Standardization, an international standard-setting body with representatives from multiple national standards organizations.

We can now consider which technologies we will use to implement the various policies; next, we will look at some of the more common ones.

Architecting for security – Principles of Modern Architecture

In the previous chapter, we looked at why architecture is important, what it seeks to achieve, and how it has changed over time. Understanding how we got to where we are today helps us in our role and provides a solid framework for our designs.

This chapter will look at how we architect systems in general to understand the high-level requirements and potential methods. Split into pillars, we will examine different aspects of each; however, as we will see, they are all interlinked and have some element of dependency on each other.

We will start by looking at security, perhaps the most essential aspect of architecture, and understand how attackers gain access to systems and how we can prevent it.

Next, we’ll investigate resilience, which is closely related to performance. By understanding the principles of these subjects, we can ensure our designs produce stable and performant applications.

Deployment mechanisms have become far more sophisticated in recent years, and we’ll learn how our decisions around how we build platforms have become entwined with the systems themselves.

Finally, we will see how a well-designed solution must include a suite of monitoring, alerting, and analytics to support the other pillars.

With this in mind, throughout this chapter, we will cover the following topics:

  • Architecting for security
  • Architecting for resilience and business continuity
  • Architecting for performance
  • Architecting for deployment
  • Architecting for monitoring and operations

Architecting for security

As technology has advanced, the solutions we build have become more powerful, flexible, and complex. Our applications’ flexibility and dynamic nature enable a business to leverage data and intelligence at a level previously unknown. The cloud is often touted by many vendors as having near unlimited capacity and processing power that is accessible by anyone.

But power comes at a cost, because it’s not just businesses who wish to leverage the potential of the cloud—hackers also have access to that tooling. Therefore, the architect of any system must keep security at the core of any design they produce.

Knowing the enemy

The first step in ensuring security is to understand the hacker mindset or, at the very least, to think about what they wish to accomplish—why do hackers hack?

Of course, there are lots of reasons, but we’ll state the obvious one—because they can! Some people see hacking a system as a challenge, so their attack vector could be any security hole they can exploit, even if exploiting it yields no tangible benefit.

The most dangerous attackers are those who wish to profit from their actions or cause damage, either directly, by selling or releasing private data, or by holding their victim to ransom: encrypting data and demanding money to release it.

Some wish to simply disrupt by bringing a system down—depending on the nature of the solution in question, the damage this causes can be reputational or financial.

All of these scenarios highlight some interesting points. The first is that when designing, you need to consider all areas of your solutions that might be vulnerable—authorization, data, application access points, network traffic, and so on.

The second is that, depending on your solution, you can prioritize the areas that would cause you the most damage. For example, if you are a high-volume internet retailer, uptime may be your primary concern, and you might therefore concentrate on preventing attacks that could overload your system. If data is your most valuable asset, then you should think about ways to ensure it cannot be accessed, either by preventing an intrusion in the first place or by protecting information with encryption or obfuscation in case it is accessed.

Once we have the why, we need to think about the how.

IaC – Architecture for the Cloud

In Azure, components can be created either in the portal using the graphical user interface (GUI), programmatically with PowerShell, or by using JSON templates. In other words, you can deploy infrastructure in Azure purely with code.

IaC provides many benefits, such as the ability to define VMs, storage, databases, or any Azure component in a way that promotes reusability, automation, and testing.
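As an illustration of what such a JSON template looks like, the following is a minimal example that deploys a single storage account. It follows the common Azure Resource Manager (ARM) schema; the parameter name is invented, and the apiVersion shown should be checked against current Azure documentation before use.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Because the template is plain text, it can be stored in source control, reviewed like any other code, and deployed repeatably by an automated pipeline.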

Tools such as Azure DevOps provide a central repository for all your code that can be controlled and built upon in an iterative process by multiple engineers. DevOps also builds, tests, and releases your infrastructure using automated pipelines—again, in the same way that modern software is deployed.

The DevOps name embodies the fact that operational infrastructure can now be built using development methodologies and can follow the same Agile principles.

DevSecOps takes this even further and includes codifying security components into release pipelines. Security must be designed and built hand in hand with the infrastructure, applied at every level, rather than merely being a perimeter or gateway device.

Cloud architects must, therefore, be fully conversant with the taxonomy, principles, and benefits of Agile, DevOps, and DevSecOps, incorporating them into working practices and designs.

Microsoft provides a range of tools and components to support your role and provides best-in-class solutions that are reliable, resilient, scalable, and—of course—secure.

As we have seen, architecture in the cloud involves many more areas than you might traditionally have gotten involved in. Hopefully, you will appreciate the reasons why these changes have occurred.

From changes in technology to new ways of working, your role has changed in different ways—although challenging, this can also be very exciting as you become involved across various disciplines and work closely with business users.

Summary

During this chapter, we have defined what we mean by architecture in the context of the AZ-304 exam, which is an important starting point to ensure we agree on what the role entails and precisely what is expected for the Azure certification.

We have walked through a brief history of business computing and how this has changed architecture over the years, from monolithic systems through to the era of personal computing, virtualization, the web, and ultimately to the cloud. We examined how each period changed the responsibilities and design requirements for the solutions built on top.

Finally, we had a brief introduction to modern working practices with IaC and project management methodologies, moving from waterfall to Agile, and how this has also changed how we as architects must think about systems.

In the next chapter, we will explore specific areas of architectural principles, specifically those aligned to the Microsoft Azure Well-Architected Framework.

Cloud computing – Architecture for the Cloud

Cloud platforms such as Azure sought to remove the difficulty and cost of maintaining the underlying hardware by providing pure compute and storage services on a pay-as-you-go or operational expenditure (OpEx) model rather than a capital expenditure (CapEx) model.

So, instead of providing hardware hosting, they offered Infrastructure as a Service (IaaS) components such as VMs, networking, and storage, and Platform as a Service (PaaS) components such as Azure Web Apps and Azure SQL Databases. The latter are the most interesting: as some of the first PaaS offerings, their key difference is that they are services fully managed by Microsoft.

Under the hood, these services run on VMs; however, whereas with VMs you are responsible for the maintenance and management of them—patching, backups, resilience—with PaaS, the vendor takes over these tasks and just offers the basic service you want to consume.

Over time, Microsoft has developed and enhanced its service offerings and built many new ones as well. But as an architect, it is vital that you understand the differences and how each type of service has its own configurations, and what impact these have.

Many see Azure as “easy to use”, and to a certain extent, one of the marketing points around Microsoft’s service is just that—it’s easy. Billions of dollars are spent on securing the platform, and a central feature is that the vendor takes responsibility for ensuring the security of its services.

A common mistake made by many engineers, developers, system administrators, and architects is to assume that this means you can simply start up a service, such as Azure Web Apps or Azure SQL Databases, and that it is inherently secure “out of the box”.

While to a certain extent this may be true, by the very nature of cloud, many services are open to the internet. Every component has its own configuration options, and some of these revolve around securing communications and how they interact with other Azure services.

Now, more than ever, with security taking center stage, an architect must be vigilant of all these aspects and ensure they are taken into consideration. So, whereas the requirement to design underlying hardware is no longer an issue, the correct configuration of higher-level services is critical.

As we can see in the following diagram, the designs of our solutions to a certain extent become more complex in that we must now consider how services communicate between our corporate and cloud networks. However, the need to worry about VMs and hardware disappears—at least when purely using PaaS:

Figure 1.5 – Cloud integration

As we have moved from confined systems such as mainframes to distributed systems, with the cloud provider taking on more responsibility, our role as an architect has evolved. Certain aspects may no longer be required, such as hardware design, which has become extremely specialized; at the same time, a cloud architect must broaden their range of skills to handle software, security, resilience, and scalability.

For many enterprises, the move to the cloud provides a massive opportunity, but due to existing assets, moving to a provider such as Azure will not necessarily be straightforward. Therefore, let’s consider the additional challenges that may face an architect when considering migration.

Virtualization – Architecture for the Cloud

As the software that ran on servers became more complex and took on more diverse tasks, it became clear that having a server that ran internal email during the day but sat unused in the evenings and at weekends was not very efficient.

Conversely, a backup or report-building server might only be used in the evening and not during the day.

One solution to this problem was virtualization, whereby multiple servers—even those with a different underlying operating system—could be run on the same physical hardware. The key was that physical resources such as random-access memory (RAM) and compute could be dynamically reassigned to the virtual servers running on them.

So, in the preceding example, more resources would be given to the email server during core hours, but would then be reduced and given to backup and reporting servers outside of core hours.

Virtualization also enabled better resilience as the software was no longer tied to hardware. It could move across physical servers in response to an underlying problem such as a power cut or a hardware failure. However, to truly leverage this, the software needed to accommodate it and automatically recover if a move caused a momentary communications failure.

From an architectural perspective, the usual issues remained the same—we still used a single user database directory; virtual servers needed to be able to communicate; and we still had the physically secure boundary of a network.

Virtualization technologies presented different capabilities to design around—centrally shared disks rather than dedicated disks communicating over an internal data bus; faster and more efficient communications between physical servers; the ability to detect and respond to a physical server failing, and moving its resources to another physical server that has capacity.

In the following diagram, we see that discrete servers such as databases, file services, and email servers run as separate virtual services, but now they share hardware. However, from a networking point of view, and arguably a software and security point of view, nothing has changed. A large role of the virtualization layer is to abstract away the underlying complexity so that the operating systems and applications they run are entirely unaware:

Figure 1.3 – Virtualization of servers

We will now look at web apps, mobile apps, and application programming interfaces (APIs).

Mainframe computing – Architecture for the Cloud

Older IT systems were monolithic. The first business systems consisted of a large mainframe computer that users interacted with through dumb terminals—simple screens and keyboards with no processing power of their own.

The software that ran on them was often built similarly—they were often unwieldy chunks of code. There is a particularly famous photograph of the National Aeronautics and Space Administration (NASA) computer scientist Margaret Hamilton standing by a stack of printed code that is as tall as she is—this was the code that ran the Apollo Guidance Computer (AGC).

In these systems, the biggest concern was computing resources, and therefore architecture was about managing these resources efficiently. Security was primarily performed by a single user database contained within this monolithic system. While the internet did exist in a primitive way, external communications and, therefore, the security around them didn’t come into play. In other words, as the entire solution was essentially one big computer, there was a natural security boundary.

If we examine the following diagram, we can see that in many ways, the role of an architect dealt with fewer moving parts than today, and many of today’s complexities, such as security, didn’t exist because so much was intrinsic to the mainframe itself:

Figure 1.1 – Mainframe computing

Mainframe computing slowly gave way to personal computing, so next, we will look at how the PC revolution changed systems, and therefore design requirements.

Personal computing

The PC era brought about a business computing model in which you had lower-powered servers that delivered one or two duties—for example, a file server, a print server, or an internal email server.

PCs now connected to these servers over a local network and performed much of the processing themselves.

Early on, each of these servers might have had its own user database to control access. However, the notion of a directory server soon became the norm, so we still had a single user database, as in the days of the mainframe; the difference was that the information in that database now had to control access to services running on other servers.

Security had now become more complex as the resources were distributed, but there was still a naturally secure boundary—that of the local network.

Software also started to become more modular in that individual programs were written to run on single servers that performed discrete tasks; however, these servers and programs might have needed to communicate with each other.

The following diagram shows a typical server-based system whereby individual servers provide discrete services, but all within a corporate network:

Figure 1.2 – The personal computing era

Decentralizing applications into individual components running on their own servers enabled a new type of software architecture to emerge: N-tier architecture. In its simplest two-tier form, the first tier is the user interface and the second tier is the database, each running on a separate server responsible for providing those specific services.

As systems developed, additional tiers were added—for example, in a three-tier application the database moved to the third tier, and the middle tier encapsulated business logic—for example, performing calculations, or providing a façade over the database layer, which in turn made the software easier to update and expand.
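The three-tier layering described above can be sketched as follows. The class names, sample data, and the business rule (a flat 20% tax) are all invented for the example; the point is that the UI tier only ever talks to the business tier, which provides a façade over the data tier.

```python
# Illustrative three-tier layering: the UI never touches the data
# tier directly; the business tier provides a facade over it.
class DataTier:
    def __init__(self):
        self._prices = {"widget": 10.0}   # invented sample data

    def get_price(self, item: str) -> float:
        return self._prices[item]

class BusinessTier:
    """Encapsulates business rules (here, a 20% tax) and hides the data tier."""
    TAX_RATE = 0.20

    def __init__(self, data: DataTier):
        self._data = data

    def quote(self, item: str) -> float:
        return self._data.get_price(item) * (1 + self.TAX_RATE)

class UiTier:
    def __init__(self, business: BusinessTier):
        self._business = business

    def show_quote(self, item: str) -> str:
        return f"{item}: {self._business.quote(item):.2f}"

ui = UiTier(BusinessTier(DataTier()))
print(ui.show_quote("widget"))  # widget: 12.00
```

Because each tier only depends on the one directly below it, the data layer can be swapped or extended without the UI code changing, which is exactly what made N-tier software easier to update and expand.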

As PCs effectively brought about a divergence in hardware and software design, so too did the role of the architect split. It became more common to see architects who specialized in hardware and networking, with responsibilities for communication protocols and role-based security, and software architects who were more concerned with development patterns, data models, and user interfaces.

The lower-cost entry for PCs also vastly expanded their use; now, smaller businesses could start to leverage technologies. Greater adoption led to greater innovation—and one such advancement was to make more efficient use of hardware—through the use of virtualization.

Introducing architecture – Architecture for the Cloud

Before we examine the detailed knowledge that the AZ-304 exam tests, this chapter discusses some general principles of solution architecture and how the advent of cloud computing has changed the role of the architect. As applications have moved toward ever more sophisticated constructs, the role of the architect has, in turn, become more critical to ensure security, reliability, and scalability.

It is useful to agree on what architecture means today, how we arrived here, and what we need to achieve when documenting requirements and producing designs.

In this chapter, we’re going to cover the following main topics:

  • Introducing architecture
  • Exploring the transition from monolithic to microservices
  • Migrating to the cloud from on-premises
  • Understanding infrastructure and platform services
  • Moving from waterfall to Agile projects

Introducing architecture

What is architecture? It may seem a strange question to ask in a book about solution architecture; after all, it could be assumed that if you are reading this book, then you already know the answer.

In my experience, the architects I have worked with each have a very different view of what architecture is or, to be more precise, of what falls into the realm of architecture and what falls into other workstreams such as engineering or operational support.

These differing views usually depend on an architect’s background. Infrastructure engineers concern themselves with the more physical aspects such as servers, networking, and storage. Software developers see solutions in terms of communication layers, interactions, and lower-level data schemas. Finally, former business analysts are naturally more focused on operations, processes, and support tiers.

For me, as someone involved across disciplines, architecture is about all these aspects, and we need to realize that a solution’s components aren’t just technical—they also cover business, operations, and security.

Some would argue that actually, these would typically be broken down into infrastructure, application, or business architecture, with enterprise architecture sitting over the top of all three, providing strategic direction. In a more traditional, on-premises world, this indeed makes sense; however, as business has embraced and adopted cloud, how software is designed, built, and deployed has changed radically.

Where once there was a clear line between all these fields, today they are all treated the same. Every part of a solution’s components, from servers to code, must be created and implemented as part of a single set of tasks.

Software is no longer shaped by hardware; quite the opposite—the supporting systems that run code are now smaller, more agile, and more dynamic.

With so much change, cloud architects must now comprehend the entire stack, from storage to networking, code patterns to project management, and everything in between.

Let’s now look at how systems are transitioned from monolithic to microservices.

Exploring the transition from monolithic to microservices

I’ve often felt that it helps to understand what has led us to where we are in terms of what we’re trying to achieve—that is, well-designed solutions that provide business value and meet all technical and non-technical requirements.

When we architect a system, we must consider many aspects—security, resilience, performance, and more. But why do we need to think of these? At some point, something will go wrong, and therefore we must accommodate that eventuality in our designs.

For this reason, I want to go through a brief history of how technology has changed over the years, how it has affected system design, what new issues have arisen, and—most importantly—how it has changed the role of an IT architect.

We will start in the 1960s when big businesses began to leverage computers to bring efficiencies to their operating models.
