
Understanding conditional access, MFA and security defaults – Understanding User Authentication

In today’s environments that often expand beyond an organization’s network into the cloud, controlling access while still enabling users to access their resources becomes more complicated.

An additional complication is that different users may have different requirements. For example, a system administrator most definitely needs the most secure access policies in place. In contrast, an account that will always have more limited access may not need quite such stringent measures, because even if it were compromised, it would not be accessing (or be granted access to) particularly risky systems.

Another example is where a user is signing in from. If a user is on the corporate network, you already have physical boundaries in place; therefore, you don't need to be as concerned as you would with a user accessing from a public network.

You could argue that you should always take the most secure baseline; however, the more security measures you introduce, the more complex a user’s sign-on becomes, which in turn can impact a business.

It is, therefore, imperative that you get the right mix of security for who the user is, what their role is, the sensitivity of what they are accessing, and from where they are signing in.

MFA

Traditional usernames and passwords can be (relatively) easily compromised. Many solutions have been used to counteract this risk, such as requiring long and complex passwords, forcing periodic changes, and so on.

Another, more secure way is to provide an additional piece of information—generally, one that is randomly generated and delivered to you via a pre-approved device such as a mobile phone. This is known as MFA.

Usually, this additional token is provided to the approved device via a phone call, text message, or mobile app, often called an authentication app.

When setting up MFA, a user must register the mobile device with the provider. Initially, a text message will be sent to verify that the user registering the device owns it.

From that point on, authentication tokens can only be received by that device. We can see the process in action in the following diagram:

Figure 3.13 – MFA
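The one-time code produced by an authenticator app is typically a time-based one-time password (TOTP, RFC 6238): the registered device and the provider share a secret, and both independently derive the same short-lived code from it. A minimal sketch of the algorithm, using the RFC's published test secret rather than anything Azure-specific:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = unix_time // step              # both sides agree on the time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: "12345678901234567890" encoded in base32.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Because the code depends only on the shared secret and the current time window, the provider can verify it without any message ever being sent to the device; this is why app-based tokens work without a phone signal.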

Microsoft Azure provides MFA for free; however, the paid tiers of Azure AD offer more granular control.

On the Azure AD free tier, you can use a feature called Security Defaults to enable MFA for ALL users; even when Security Defaults isn't enabled, MFA remains available for Azure Global Administrators. However, for non-administrator accounts, the free tier only supports a mobile authenticator app for providing the token.

With an Azure AD Premium P1 license, you gain more granular control over MFA through the use of CA, which allows you to require MFA only in specific scenarios. For example, you can require it when users access resources over the public internet, but not from a trusted location such as the corporate network.
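Conceptually, a location-based CA rule is just a membership test on the sign-in's source address. The sketch below illustrates the idea, with RFC 5737 documentation ranges standing in for a corporate network; the ranges and function name are illustrative, not an Azure API:

```python
import ipaddress

# Illustrative "trusted location" ranges (RFC 5737 documentation prefixes).
TRUSTED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def requires_mfa(client_ip: str) -> bool:
    """Require MFA unless the sign-in originates from a trusted range."""
    ip = ipaddress.ip_address(client_ip)
    return not any(ip in network for network in TRUSTED_NETWORKS)
```

A real CA policy evaluates many more signals (user, device state, application, risk score), but each condition reduces to a check of this kind.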

Azure AD Premium P2 extends MFA support by introducing risk-based conditional access. We will explore these concepts shortly, but first we will delve into what is available with Security Defaults.

Federated authentication – Understanding User Authentication

Federated authentication uses an entirely separate authentication system such as Active Directory Federation Services (AD FS). AD FS has been available for some time to enable enterprises to provide SSO capabilities for users by extending access management to the internet.

Some organizations may already make use of AD FS, in which case it can make sense to leverage it.

AD FS provides additional advanced authentication services, such as smartcard-based authentication and/or third-party MFA.

Generally speaking, it is recommended to use PHS or PTA. You should only consider federated authentication if you already have a specific requirement to use it, such as the need to use smartcard-based authentication.

Azure AD Connect Health

Before we leave authentication and, specifically, AD Connect, we must look at one last aspect of the service: AD Connect Health.

Azure AD Connect Health provides different types of health monitoring, as follows:

  • AD Connect Health for Sync, which monitors the health of synchronization between your on-premises AD DS and Azure AD.
  • AD Connect Health for AD DS, which monitors your domain controllers and AD.
  • AD Connect Health for AD FS, which monitors AD FS servers.

There are three separate agents, one for each scenario; as with the main AD Connect agent, the download link for the AD Connect Health agent can be accessed in the Azure portal, as follows:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top bar, search for and select Azure Active Directory.
  3. In the left-hand menu, click AD Connect.
  4. On the main page, click Azure AD Connect Health under Health and Analytics.
  5. You will see the links to the AD Connect Health agents under Get tools, as in the following example:

Figure 3.12 – Downloading the AD Connect Health agents

The AD Connect Health blade also gives you a view on any alerts, performance statistics, usage analytics, and other information related to the AD.

Some of the benefits include the following:

  • Reports on these issues:
    – Extranet lockouts
    – Failed sign-ins
    – Privacy compliance
  • Alerts on the following:
    – Server configuration and availability
    – Performance
    – Connectivity

Finally, you need to know that to use AD Connect Health, the following must apply:

  • You must install an agent on your infrastructure on any identity servers you wish to monitor.
  • You must have an Azure AD P1 license.
  • You must be a global administrator to install and configure the agent.
  • You must have connectivity from your services to Azure AD Connect Health service endpoints (these endpoints can be found on the Microsoft website).

As we can see, Azure provides a range of options to manage authentication, ensure user details are synchronized between on-premises and cloud, and enable easier sign-on. We also looked at how we can monitor and ensure the health of the different services we use for integration.

In the next section, we will look at ways, other than just with passwords, we can control and manage user authentication.

Password Writeback – Understanding User Authentication

As we have already mentioned, one of the benefits Azure AD provides is self-service password resets—that is, the ability for users to reset their passwords by completing an online process. This process results in the user's credentials being reset in the cloud. However, in hybrid scenarios where those same accounts also exist on-premises, you would typically want a password reset performed in the cloud to write that change back to the on-premises directory.

To achieve this, we use an optional feature in AD Connect called Password Writeback.

Password Writeback is supported in all three hybrid scenarios—PHS, PTA, and AD Federation.

Using Password Writeback enables enforcement of password policies and zero-delay feedback; that is, if there is an issue resetting the password in the on-premises AD, the user is informed straightaway rather than waiting for a sync process. It doesn't require any additional firewall rules over and above those needed for AD Connect (which works over port 443).

Note, however, that this is an optional feature, and to use it, the account that AD Connect uses to integrate with your AD must be set with specific access rights—these are the following:

  • Reset password
  • Write permissions on lockoutTime
  • Write permissions on pwdLastSet

AD Connect ensures that the user login experience is consistent between cloud and hybrid systems. However, AD Connect has an additional option—the ability to sign in a user who has already authenticated to the corporate network without prompting them again. This is known as Seamless SSO.

Seamless SSO

Either PHS or PTA can be combined with an option called Seamless SSO. With Seamless SSO, users who are already authenticated to the corporate network will be automatically signed in to Azure AD when challenged.

As we can see in the example in the following diagram, if you have a cloud-based application that uses Azure AD for authentication and you have Seamless SSO enabled, users won’t be prompted again for a username and password if they have already signed in to an AD:

Figure 3.11 – SSO

However, it's important to note that Seamless SSO applies to devices that are domain-joined, meaning joined to an on-premises AD domain. Devices that are Azure AD-joined or Hybrid Azure AD-joined instead use primary refresh tokens to enable SSO, a slightly different method of achieving the same experience that is based on JSON Web Tokens (JWT).

In other words, Seamless SSO is a feature of hybrid scenarios where you are using AD Connect, whereas primary refresh tokens provide single sign-on for purely cloud-managed devices.
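A primary refresh token, like other modern authentication tokens, is ultimately carried as a JWT: three base64url-encoded segments (header, claims, signature) joined by dots. The sketch below decodes the claims segment of an arbitrary JWT without verifying its signature, which is useful for inspection only, never for trust decisions:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT without verification."""
    claims_b64 = token.split(".")[1]
    claims_b64 += "=" * (-len(claims_b64) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(claims_b64))
```

In production, the signature must always be validated against the issuer's published keys before any claim is trusted.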

Password Hash Synchronization – Understanding User Authentication

Password Hash Synchronization (PHS) ensures a hash of the user's password is copied from your on-premises directory into Azure AD. When a user's password is stored in AD, it is stored as a hash; that is, the password has a mathematical algorithm applied to it that effectively scrambles it. This prevents the password from being readable if the underlying database is compromised.

With Azure PHS, the password that is already hashed is hashed once again—a hash of a hash—before it is then stored in the Azure AD, providing an extra level of protection, as per the example shown in the following diagram:

Figure 3.9 – Azure PHS
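The "hash of a hash" step can be sketched as a key derivation over the existing hash. Microsoft documents PHS as applying a per-user salt and 1,000 iterations of PBKDF2-HMAC-SHA256 to the on-premises hash before upload; the function below mirrors that shape but is illustrative only, and does not reproduce the exact input encoding Azure uses:

```python
import hashlib

def cloud_hash(on_prem_hash: bytes, salt: bytes, iterations: int = 1000) -> bytes:
    """Re-hash an already-hashed password before it is synced to the cloud.
    Even if the cloud store were compromised, this value could not be
    replayed against the on-premises directory, which expects the
    original hash."""
    return hashlib.pbkdf2_hmac("sha256", on_prem_hash, salt, iterations)
```

The per-user salt also means that two users with identical on-premises hashes do not end up with identical cloud-side values.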

The result of all this is that when users authenticate against Azure AD, they do so directly against the information stored in the Azure AD database.

Although an AD Connect synchronization client is required on a computer in your organization, if communications to your domain are severed for whatever reason, users can still authenticate, because they are doing so against information held in Azure AD.

One potential downside, however, is that this synchronization process is not immediate. If a user updates their details, the change won't be reflected in Azure AD until the sync process has taken place, which by default runs every 30 minutes. However, this interval can be changed, or a synchronization can be forced, which is useful after a bulk update—especially one that disables accounts.

Note that some premium features such as Identity Protection and Azure AD DS require PHS.

For some organizations, storing the password hash in Azure AD is simply not an option. For these scenarios, one option would be pass-through authentication (PTA).

Azure AD PTA

With PTA, when a user needs to authenticate, the request is forwarded to an agent that performs the authentication and returns the result and authentication token, as in the example shown in the following diagram:

Figure 3.10 – Azure PTA

Another reason you may wish to go for this option is if any changes to user accounts must be enforced across the enterprise immediately.

These agents must have access to your AD, including unconstrained access to your domain controllers, and they need access to the internet. You therefore need to consider the security implications of this, and the effort involved; for example, opening firewall ports. All traffic is, of course, encrypted and limited to authentication requests.

Because PTA works by handing off the process to an agent, you need to consider resilience. It is recommended you install the AD Connect agent on at least three servers to both distribute the load and provide a backup should one of the agents go offline.

If you lose connectivity to all your agents, authentication will fail. Another failsafe is to enable PHS as a backup, although this, of course, requires storing password hashes in Azure AD, so you need to revisit why you opted for PTA in the first place.

Why AD? – Understanding User Authentication

Let’s take a step back and consider a straightforward scenario: that of an online e-commerce website. Before you can order something, you need to register with that website and provide some basic details—a sign-in name, an email, a password, and so on.

A typical website such as the one shown in the following diagram may simply store your details in a database and, at its simplest, this may just be a user record in a table:

Figure 3.1 – Simple username and password authentication

For more advanced scenarios that may require different users to have different levels of access—for example, in the preceding e-commerce website—the user databases may need to accommodate administrative users as well as customers. Administrative users will want to log in and process orders and, of course, we need to ensure the end customers don’t get this level of access.

So, now, we must also record who has access to what, and ensure users are granted access accordingly, as in the example shown in the following diagram:

Figure 3.2 – Role-based authorization

The same would also be valid for a corporate user database. For example, the company you work for must provide access to various internal systems—payroll, marketing, sales, file shares, email, and so on. Each application will have its own set of security requirements, and users may need access across multiple systems.

For corporate users, Microsoft introduced Active Directory Domain Services (AD DS), which is a dedicated identity management system that allows businesses to manage user databases in a secure and well-organized way. Users in an AD are granted access to other systems (provided they support it) from a single user database. Microsoft AD DS takes care of the complexity and security of user management. See the example shown in the following diagram:

Figure 3.3 – AD

From a single account, IT administrators can provide access to file shares, email systems, and even web applications—provided those systems are integrated with AD. Typically, this would be achieved by domain-joining the device that hosts the application—be it an email server, web server, or file server; that is, the hosting device becomes part of the network, and AD manages not only the user accounts but the computer accounts as well.

In this way, the identity mechanism is a closed system—that is, only internal computers and users have access. Although external access mechanisms have been developed over time to provide remote access, these are still about securely connecting users by essentially extending that “internal” network.

Microsoft AD DS manages the security of devices and users—that is, the way devices communicate with each other—using specific networking protocols, collectively known as Integrated Windows Authentication (IWA): New Technology LAN Manager (NTLM) and Kerberos.

Microsoft AD DS is a common standard today for many organizations. Still, as discussed, it is built around the concept of a closed system—that is, the components are all tightly integrated by enforcing the requirement for them to be “joined.”

Introducing Azure AD – Understanding User Authentication

Azure AD is a cloud-based mechanism that provides the tools to address our security needs. Backed by Microsoft AD, an industry-standard and, importantly, proven secure authentication and authorization system, it supports both cloud-first (that is, stored and managed entirely in the cloud) and hybrid (a mix of cloud and on-premises) solutions.

Some of these tools are included by default when you create an Azure AD tenant. Others require a Premium add-on, which we will cover later.

These tools include the following:

  • Self-service password resets: Allowing your users to reset their passwords themselves (through the provision of additional security measures) without needing to call the helpdesk.
  • MFA: Enforces a second form of identification during the authentication process. A code is generated and sent to the user, who enters it along with their password. The code is typically delivered as a text message or via an MFA authentication app on the user's mobile device. You can also use biometric devices such as fingerprint or face scanners.
  • Hybrid integration with password writeback: When Azure AD is synchronized to an on-premises AD with AD Connect, changes to the user's password in Azure AD are sent back to the on-premises AD to ensure the directories remain in sync.
  • Password protection policies: Policies in Azure can be set to enforce complex passwords or the period between password changes. These policies can be integrated with on-premises directories to ensure consistency.
  • Passwordless authentication: For many organizations, the desire to remove the need for passwords altogether in favor of alternative methods is seen as the ultimate solution to many authentication issues. Credentials are provided through the use of biometrics or a FIDO2 security key. These cannot be easily duplicated, and this removes the need for remembering complex passwords.
  • Single sign-on (SSO): With SSO, users only need to authenticate once to access all their applications—regardless of whether they sign on through their on-premises directory or Azure AD, the single authentication process should identify the user across different environments.
  • CA: To further tighten security, CA policies can apply additional restrictions to user sign-in, or apply different rules in different circumstances. For example, MFA can be set to not be required when signing in from specific Internet Protocol (IP) ranges, such as a corporate network range.

Differentiating authentication from authorization – Understanding User Authentication

User security is perhaps one of the most critical aspects of a system and, therefore, its architecture. Security has, of course, always been important to protect sensitive information within an organization. However, as we move our applications online and widen our audience, the need to ensure only the correct people gain access to their data has become crucial.

In this chapter, we explore the key differences between authentication and authorization, what tooling we have available within Azure to ensure the safety of user accounts, and how we design solutions according to different business needs.

In this chapter, we will examine the following topics:

  • Differentiating authentication from authorization
  • Introducing Active Directory (AD)
  • Integrating AD
  • Understanding Conditional Access (CA), Multi-Factor Authentication (MFA), and security defaults
  • Using external identities

Differentiating authentication from authorization

A significant and essential role of any platform is that of authentication and authorization. These two terms are often confused and combined as a single entity. When understanding security on platforms such as Azure, it’s vital to know how the different technologies are used.

Authentication is the act of proving who you are, often performed with a username/password combination. If you can provide the correct details, a system authenticates you.

Authentication does not give you access to anything; it merely proves who you are.

Once a system knows the who, it then checks to see what you have access to—this is termed authorization.

In Azure, authorization is the act of checking whether you have access to a particular resource such as a storage account, and what actions you can perform, such as creating, deleting, modifying, or even reading the data in the storage account.
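The distinction can be made concrete with a toy sketch: one function answers "who are you?" and the other "what may you do?". The store, names, and roles here are invented for illustration, and a real system would verify a salted hash rather than a plaintext password:

```python
# Illustrative in-memory stores; names and roles are invented for the example.
USERS = {"alice": {"password": "s3cret!", "roles": {"storage-reader"}}}
ROLE_PERMISSIONS = {
    "storage-reader": {"read"},
    "storage-contributor": {"read", "write", "delete"},
}

def authenticate(username: str, password: str) -> bool:
    """Authentication: prove who you are."""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username: str, action: str) -> bool:
    """Authorization: check what an already-authenticated user may do."""
    permissions = set()
    for role in USERS[username]["roles"]:
        permissions |= ROLE_PERMISSIONS[role]
    return action in permissions
```

Note that a successful `authenticate` call says nothing about access: alice can prove who she is yet still be refused a `delete` action by `authorize`.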

Because of the number of different services and their associated actions that are available to a user in Azure, and the importance of ensuring the validity of a user, the ensuing mechanisms that control all this can become quite complicated.

Luckily, Azure provides a range of services, broken down into authentication and authorization services, that enable you to strictly control how users authenticate and what they can then access, in a very granular way.

Traditionally, authentication has been via simple username/password combinations; however, this is ineffective on its own, and therefore you need to consider many factors and strategies when designing an authentication mechanism. For example, the following scenarios may apply:

  • A user may choose too simple a password, increasing the chances of it being compromised.
  • Complex passwords or regular changes mean users are more likely to forget their password.
  • There may be delays in the authentication process if a user needs to call a helpdesk to request a password reset.
  • A username/password combination itself is open to phishing attacks.
  • Password databases can be compromised.

Important note

A phishing attack is one whereby a malicious actor attempts to steal your credentials by directing you to a dummy website that looks like the one you want to access but is, in fact, their own. You enter your details, thinking it is the correct site, and they now have your personal information and can use it to log in to the real site.

When systems are hosted on a physically isolated network, some of these issues are mitigated as you first need physical access to a building or at least a device set up with a virtual private network (VPN) connection that, in turn, would require a certificate.

However, in cloud scenarios, and especially hybrid systems, whereby you need external authentication mechanisms that must also map or sync to internal systems, this physical firewall cannot always be achieved.

With these scenarios in mind, we need to consider how we might address the following:

  • Managing and enforcing password complexity rules
  • Providing additional layers over and above a password
  • How to securely store and protect passwords
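The first of these items, managing complexity rules, is straightforward to express in code. The sketch below checks a common style of policy (a minimum length plus at least three of four character classes); the thresholds are illustrative, not Azure AD's actual policy:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Check a password against an illustrative complexity policy:
    minimum length, and at least three of four character classes."""
    classes = [r"[a-z]", r"[A-Z]", r"\d", r"[^\w\s]"]
    matched = sum(bool(re.search(pattern, password)) for pattern in classes)
    return len(password) >= min_length and matched >= 3
```

As the bullet list above suggests, such rules are only one layer; they do nothing against phishing or database compromise, which is why the additional measures discussed in this chapter matter.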

Now that we understand some of the issues we face with authentication systems, especially those that rely on username/password combinations, we can investigate what options are available to mitigate them. First, we will examine Microsoft’s established security platform, AD.

Network monitoring – Principles of Modern Architecture

CPU and RAM utilization are not the only sources of problems; problems can also arise from misconfigured firewalls and routing, or from misbehaving services causing too much traffic.

Traffic analytics tools will provide an overview of the networks in the solution and help identify sources that generate high traffic levels. Network performance managers offer tools that allow you to create specific tests between two endpoints to investigate particular issues.

For hybrid environments, VPN monitoring tools specifically track your direct connection links to your on-premises networks.

Monitoring for DevOps and applications

For solutions with well-integrated DevOps code libraries and deployment pipelines, additional metrics and alerts will notify you of failed builds and deployments. Notifications, support tickets, or work tasks can be automatically raised and linked to the affected build.

Additional application-specific monitoring tools allow for an in-depth analysis of your application’s overall health, and again will help with troubleshooting problems.

Application maps, artificial intelligence (AI)-driven smart detection, usage analytics, and component communications can all be included in your designs to help drive operational efficiencies and warn of future problems.

We can see that for every aspect of your solution design—security, resilience, performance, and deployments—an effective monitoring and alerting regime is vital to ensure the platform’s ongoing health. With proper forethought, issues can be prevented before they happen. Forecasting and planning can be based on intelligent extrapolation rather than guesswork, and responding to failure events becomes a science instead of an art.

Summary

In this chapter, we looked at a high-level view of the architecture and the types of decisions that must be considered, agreed upon, and documented.

By thinking about how we might design for security, resilience, performance, and deployment and monitor all our systems, we get a greater understanding of our solution as a whole.

The last point is important—although a system design must contain the individual components, they must all work together as a single, seamless solution.

In the next chapter, we will look at the different tools and patterns we can use in Azure to build great applications that align with best-practice principles.

Architecting for monitoring and operations – Principles of Modern Architecture

For the topics we have covered in this chapter to be effective, we must continually monitor all aspects of our system. From security to resilience and performance, we must know what is happening at all times.

Monitoring for security

Maintaining the security of a solution requires a monitoring solution that can detect, respond to, and ultimately recover from incidents. When an attack happens, the speed at which we respond determines how much damage is incurred.

However, a monitoring solution needs to be intelligent enough to prioritize and filter false positives.

Azure provides several different monitoring mechanisms, both in general and specifically for security, and these can be configured according to your organization's capabilities. Therefore, when designing a monitoring solution, you must align with your company's existing teams so that alerts are directed appropriately and pertinent information reaches the right people.

Monitoring requirements cover more than just alerts; the policies that define business requirements around configuration settings such as encryption, passwords, and allowed resources must be checked to confirm they are being adhered to. The Azure risk and compliance reports will highlight any items that deviate so that the necessary team can investigate and remediate.

Other tools, such as Azure Security Center, will continually monitor your risk profile and suggest advice on improving your security posture.

Finally, security patching reports also need regular reviews to ensure VMs are being patched so that insecure hosts can be investigated and brought in line.

Monitoring for resilience

Monitoring your solution is not just about being alerted to issues; the ideal scenario is to detect and remediate problems before they cause an impact—in other words, we can use monitoring as an early warning system.

Applications should include in their designs the ability to output relevant logs and errors; this then enables health alerts to be set up that, when combined with resource thresholds, provide details of the running processes.

Next, a set of baselines can be created that identify what a healthy system looks like. When anomalies occur, such as long-running processes or specific error logs, they can then be spotted earlier.
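A baseline-plus-anomaly check can be as simple as a z-score test against recent healthy readings. The sketch below flags any reading more than three standard deviations from the baseline mean; the window size and threshold are assumptions you would tune per metric:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], reading: float, threshold: float = 3.0) -> bool:
    """Flag a reading that deviates more than `threshold` standard
    deviations from the healthy-system baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(reading - mu) > threshold * sigma
```

Real monitoring platforms use richer models (seasonality, trend), but the principle is the same: define "healthy" numerically, then alert on deviation.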

As well as defined alerts that will proactively contact administrators when possible issues are detected, visualization dashboards and reporting can also help responsible teams see potential problems or irregular readings as part of their daily checks.

Monitoring for performance

The same CPU, RAM, and input/output (I/O) thresholds used for early warning signs of errors also help identify performance issues. By monitoring response times and resource usage over time, you can understand usage patterns and predict when more power is required.

Performance statistics can be used either to schedule manual scaling events or to set automated scaling rules more accurately.

Keeping track of scaling events throughout the life cycle of an application is useful. If an application is continually scaling up and down or not scaling at all, it could indicate that thresholds are set incorrectly.

Again, creating and updating baseline metrics will help alert you to potential issues. If resources for a particular service are steadily increasing over time, this information can predict future bottlenecks.
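An automated scaling rule of the kind described above reduces to a decision over a sampling window. The percentages below are illustrative defaults you would tune from your baselines, not values from any Azure autoscale setting:

```python
def scale_decision(cpu_samples: list[float],
                   scale_out_at: float = 75.0,
                   scale_in_at: float = 25.0) -> str:
    """Decide a scaling action from average CPU over a sampling window."""
    average = sum(cpu_samples) / len(cpu_samples)
    if average >= scale_out_at:
        return "scale-out"
    if average <= scale_in_at:
        return "scale-in"
    return "hold"
```

Keeping a healthy gap between the two thresholds prevents the "flapping" behavior mentioned earlier, where a service continually scales out and back in.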

Architecting for deployment – Principles of Modern Architecture

One area of IT solutions in which the cloud has had a dramatic impact is deployment. Traditional system builds, certainly at the infrastructure level, were mostly manual processes. Engineers would run through a series of instructions to build and configure the underlying hosting platform, followed by another set of instructions for deploying the software on top.

Manual methods are error-prone because instructions can be misunderstood or implemented wrongly. Validating a deployment is also a complicated process as it would involve walking back through an installation guide, cross-checking the various configurations.

Software deployments led the way on this with automated mechanisms that are scripted, which means they can be repeated time and time again consistently—in other words, we remove the human element.

We can define our infrastructure in code within Azure, too, using either Azure Resource Manager (ARM) templates or other third-party tools; the entire platform can be codified and deployed by automated systems.

The ability to deploy and re-deploy in a consistent manner gives rise to some additional opportunities. Infrastructure as Code (IaC) enables another paradigm—immutable infrastructure.

Traditionally, when modifications are required to the server’s configuration, the process would be to manually make the configuration on the server and record the change in the build documentation. With immutable infrastructure, any modifications are made to the deployment code, and then the server is re-deployed. In other words, the server never changes; it is immutable. Instead, it is destroyed and recreated with the new configuration.

IaC and immutable infrastructure have an impact on our designs. PaaS components are more straightforward to automate than IaaS ones. That is not to say you can’t automate IaaS components; however, PaaS’s management does tend to be simpler. Although not a reason to use PaaS in its own right, it does provide yet one more reason to use technologies such as web apps over VMs running Internet Information Services (IIS).

You also need to consider which deployment tooling you will use. Again, Microsoft has its own native solution in the form of Azure DevOps; however, there are other third-party options. Whichever you choose will have some impact around connectivity and any additional agents and tools you use.

For example, most DevOps tools require some form of deployment agent to pull your code from a repository. Connectivity between the repository, the agent, and the Azure platform is required and must be established in a secure and resilient manner.

Because IaC and DevOps make deployments quicker and more consistent, it is easier to build different environments—development, testing, staging, and production. Solution changes progress through each environment and can be checked and signed off by various parties, thus creating a quality culture—as per the example in the following diagram:

Figure 2.5 – Example DevOps flow

The ability to codify and deploy complete solutions at the click of a button broadens the scope of your solution. An entire application environment can be encapsulated and deployed multiple times; this, in turn, provides the opportunity to create various single-tenant solutions instead of a one-off multi-tenant solution. This aspect is becoming increasingly valuable to organizations as it allows for better separation of data between customers.

In this section, we have introduced how deployment mechanisms can change what our end-state solution looks like, which impacts the architecture. Next, we will look in more detail at how monitoring and operations help keep our system healthy and secure.
