Federated authentication – Understanding User Authentication

Federated authentication uses an entirely separate authentication system, such as Active Directory Federation Services (AD FS). AD FS has been available for some time, enabling enterprises to provide SSO capabilities for users by extending access management to the internet.

Some organizations may therefore already make use of AD FS, in which case it can make sense to leverage that existing investment.

AD FS provides additional advanced authentication services, such as smartcard-based authentication and/or third-party MFA.

Generally speaking, it is recommended to use PHS or PTA. You should only consider federated authentication if you already have a specific requirement to use it, such as the need to use smartcard-based authentication.

Azure AD Connect Health

Before we leave authentication and, specifically, AD Connect, we must look at one last aspect of the service: AD Connect Health.

Azure AD Connect Health provides different types of health monitoring, as follows:

  • AD Connect Health for Sync, which monitors the health of the synchronization between your AD DS and Azure AD.
  • AD Connect Health for AD DS, which monitors your domain controllers and AD.
  • AD Connect Health for AD FS, which monitors AD FS servers.

There are three separate agents, one for each scenario; as with the main AD Connect agent, the download link for the AD Connect Health agent can be accessed in the Azure portal, as follows:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top bar, search for and select Azure Active Directory.
  3. In the left-hand menu, click Azure AD Connect.
  4. On the main page, click Azure AD Connect Health under Health and Analytics.
  5. You will see the links to the AD Connect Health agents under Get tools, as in the following example:

Figure 3.12 – Downloading the AD Connect Health agents

The AD Connect Health blade also gives you a view on any alerts, performance statistics, usage analytics, and other information related to the AD.

Some of the benefits include the following:

  • Reports on these issues:
    – Extranet lockouts
    – Failed sign-ins
    – Privacy compliance
  • Alerts on the following:
    – Server configuration and availability
    – Performance
    – Connectivity

Finally, you need to know that to use AD Connect Health, the following must apply:

  • You must install an agent on any identity servers in your infrastructure that you wish to monitor.
  • You must have an Azure AD P1 license.
  • You must be a global administrator to install and configure the agent.
  • You must have connectivity from your services to Azure AD Connect Health service endpoints (these endpoints can be found on the Microsoft website).

As we can see, Azure provides a range of options to manage authentication, ensure user details are synchronized between on-premises and the cloud, and enable easier sign-on. We also looked at how we can monitor and ensure the health of the different services we use for integration.

In the next section, we will look at the ways, beyond passwords alone, in which we can control and manage user authentication.

Password Writeback – Understanding User Authentication

As we have already mentioned, one of the benefits Azure AD provides is self-service password reset—that is, the ability for users to reset their passwords themselves by completing an online process. This process results in the user’s credentials being reset in the cloud—however, in hybrid scenarios using those same accounts, you would typically want a password reset performed in the cloud to write that change back to the on-premises directory.

To achieve this, we use an optional feature in AD Connect called Password Writeback.

Password Writeback is supported in all three hybrid scenarios—PHS, PTA, and federated authentication.

Using Password Writeback enables enforcement of password policies and zero-delay feedback—that is, if there is an issue resetting the password in the AD, the user is informed straightaway rather than waiting for a sync process. It doesn’t require any additional firewall rules over and above those needed for AD Connect (which works over port 443).

Note, however, that this is an optional feature, and to use it, the account that AD Connect uses to integrate with your AD must be set with specific access rights—these are the following:

  • Reset password
  • Write permissions on lockoutTime
  • Write permissions on pwdLastSet

AD Connect ensures that the user login experience is consistent between cloud and hybrid systems. However, AD Connect has an additional option—the ability to let a user who has already authenticated to the corporate network access cloud resources without needing to sign in again. This is known as Seamless SSO.

Seamless SSO

Either PHS or PTA can be combined with an option called Seamless SSO. With Seamless SSO, users who are already authenticated to the corporate network will be automatically signed in to Azure AD when challenged.

As we can see in the example in the following diagram, if you have a cloud-based application that uses Azure AD for authentication and you have Seamless SSO enabled, users won’t be prompted again for a username and password if they have already signed in to the on-premises AD:

Figure 3.11 – SSO

However, it’s important to note that Seamless SSO is for devices that are domain-joined—that is, joined to an on-premises AD domain. Devices that are Azure AD-joined or Hybrid Azure AD-joined instead use primary refresh tokens (PRTs) to enable SSO, which is a slightly different method of achieving the same SSO experience, based on JSON Web Tokens (JWTs).

In other words, Seamless SSO is a feature of hybrid scenarios where you are using AD Connect, whereas PRT-based SSO covers devices that are managed purely in the cloud.
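To make the token format more concrete, the following is a minimal Python sketch that decodes the payload of a JWT. This is illustrative only—the token shown is invented, and a real application must validate a token’s signature rather than simply decoding it:

    import base64
    import json

    def decode_jwt_payload(token: str) -> dict:
        # A JWT is three base64url-encoded segments: header.payload.signature.
        payload = token.split(".")[1]
        # Re-pad to a multiple of 4 so base64 decoding succeeds.
        payload += "=" * (-len(payload) % 4)
        return json.loads(base64.urlsafe_b64decode(payload))

    # Build a toy token so the sketch is runnable end to end.
    claims = base64.urlsafe_b64encode(
        json.dumps({"upn": "user@contoso.com"}).encode()
    ).decode().rstrip("=")
    toy_token = f"header.{claims}.signature"
    print(decode_jwt_payload(toy_token))  # {'upn': 'user@contoso.com'}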

Password Hash Synchronization – Understanding User Authentication

Password Hash Synchronization (PHS) ensures that a hash of the user’s password is copied from the on-premises directory into Azure AD. When a user’s password is stored in AD, it is stored as a hash; that is, the password has a mathematical algorithm applied to it that effectively scrambles it. This prevents passwords from being readable should the underlying database be compromised.

With Azure PHS, the password that is already hashed is hashed once again—a hash of a hash—before it is then stored in the Azure AD, providing an extra level of protection, as per the example shown in the following diagram:

Figure 3.9 – Azure PHS
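As a rough sketch of the hash-of-a-hash idea, the following Python snippet derives a second, salted hash from an existing hash so that the original is never stored in the cloud. This is an illustration only, not Microsoft’s actual implementation (Microsoft documents that AD Connect re-hashes the on-premises hash with a per-user salt using a key-derivation function); the parameters below are illustrative:

    import hashlib
    import os

    def rehash_for_cloud(on_prem_hash: bytes) -> tuple[bytes, bytes]:
        # Derive a second hash from the existing hash, so the
        # on-premises hash itself never leaves the organization.
        salt = os.urandom(16)  # per-user random salt
        cloud_hash = hashlib.pbkdf2_hmac("sha256", on_prem_hash, salt, 1000)
        return salt, cloud_hash

    salt, cloud_hash = rehash_for_cloud(b"\x01" * 16)  # dummy input hash
    print(cloud_hash.hex())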

The result of all this is that when users authenticate against Azure AD, they do so directly against the information stored in the Azure AD database.

Although an AD Connect synchronization client is required on a computer in your organization, if communications to your domain are severed for whatever reason, users can still authenticate because they are doing so against information held in Azure AD.

One potential downside, however, is that this synchronization process is not immediate. If a user updates their details, the change won’t be reflected in the Azure AD accounts until the sync process has taken place, which by default runs every 30 minutes. However, this interval can be changed, or a synchronization can be forced, which is useful when a bulk update is performed, especially when disabling accounts.

Note that some premium features such as Identity Protection and Azure AD DS require PHS.

For some organizations, storing the password hash in Azure AD is simply not an option. For these scenarios, one option would be pass-through authentication (PTA).

Azure AD PTA

With PTA, when a user needs to authenticate, the request is forwarded to an agent that performs the authentication and returns the result and authentication token, as in the example shown in the following diagram:

Figure 3.10 – Azure PTA
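The following is a minimal, hypothetical Python sketch of this agent pattern—the components are invented stand-ins, not the real PTA agent. It reflects the documented behavior that the agent makes outbound connections only, pulling pending sign-in requests and posting results back, which is why no inbound firewall ports are required:

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class SignInRequest:
        request_id: str
        username: str
        password: str

    # Hypothetical stand-in for a check against an on-premises domain controller.
    VALID_USERS = {"alice": "s3cret"}

    def validate_against_ad(username: str, password: str) -> bool:
        return VALID_USERS.get(username) == password

    def run_agent(pending: Queue, results: dict) -> None:
        # Pull each queued request, validate it on-premises, post the result back.
        while not pending.empty():
            req: SignInRequest = pending.get()
            results[req.request_id] = validate_against_ad(req.username, req.password)

    q, results = Queue(), {}
    q.put(SignInRequest("1", "alice", "s3cret"))
    run_agent(q, results)
    print(results)  # {'1': True}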

Another reason you may wish to choose this option is if changes to user accounts must take effect across the enterprise immediately—because each sign-in is validated directly against your on-premises AD, changes such as disabling an account apply straightaway rather than waiting for a sync cycle.

These agents must have access to your AD, including unconstrained access to your domain controllers, and they need access to the internet. You therefore need to consider the security implications of this, and the effort involved; for example, opening firewall ports. All traffic is, of course, encrypted and limited to authentication requests.

Because PTA works by handing off the process to an agent, you need to consider resilience. It is recommended you install the AD Connect agent on at least three servers to both distribute the load and provide a backup should one of the agents go offline.

If you lose connectivity to all your agents, authentication will fail. Another failsafe is to enable PHS as a backup, although this, of course, requires storing password hashes in Azure AD, so you need to consider why you opted for PTA in the first place.

Integrating AD – Understanding User Authentication

One of the first steps is to understand how your organization wishes to authenticate its users, and from where. A cloud-native approach may be sufficient for some organizations, but others will require some form of integration with an on-premises directory. We will look at these options in the following sections.

Cloud native

The simplest scenario is cloud native; we only need to set up user accounts within Azure AD. Authentication is performed via the web using HTTPS, and access is only required into Azure or other services that integrate with Azure AD—such as a web application using token-based authentication, as we can see in the following diagram:

Figure 3.6 – Cloud-native authentication
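As a concrete illustration of this token-based flow, the following minimal Python sketch acquires a token from Azure AD using the MSAL library; the client ID and tenant ID are placeholders you would replace with your own app registration values:

    import msal  # pip install msal

    # Placeholder values - substitute your own app registration details.
    app = msal.PublicClientApplication(
        client_id="00000000-0000-0000-0000-000000000000",
        authority="https://login.microsoftonline.com/<your-tenant-id>",
    )

    # Opens a browser for the user to sign in against Azure AD and
    # returns a token that a downstream web application can validate.
    result = app.acquire_token_interactive(scopes=["User.Read"])
    print(result.get("access_token") or result.get("error_description"))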

Cloud native is mostly used by new organizations or those without an existing directory service. For companies that already have an AD database, it is common to integrate with it, and for this we can use Azure AD Connect.

Azure AD Connect

Azure AD Connect provides the most straightforward option when you need to provide integration with AD.

AD Connect is a synchronization tool that you install on an on-premises server and that essentially copies objects between your on-premises network and your Azure AD.

This scenario is excellent for providing access to Azure resources for existing users, and it offers self-service capabilities such as password reset, although this requires the Azure AD Premium add-on.

In a typical use case, you may have a web application deployed in Azure that you need to provide remote access to for your users. In other words, users may be connecting to the web app over the public internet, but still need to be challenged for authentication. The following diagram depicts such a scenario:

Figure 3.7 – Hybrid authentication

Azure AD Connect provides a way of keeping your user details (such as login name and password) in sync with the accounts in Azure AD.

Also, note that there is no VNET integration—that is, the web apps themselves are not connected to a VNET accessed via a VPN or ExpressRoute. When users try to access the web app, they will authenticate against Azure AD.

When setting up AD Connect, you have several options. You can set up AD Connect to replicate only a subset of users using filtering—for example, by domain, organizational unit (OU), or group—so you should carefully consider which accounts are actually needed and sync only those required.

The AD Connect agent is installed on a server in your environment. The only requirement is that it must have access to the AD, which means it must be installed on a domain-joined computer.

The download link for the AD Connect agent can be retrieved by going to the AD Connect menu option in the Azure AD blade in the Azure portal. To access it, perform the following steps:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top bar, search for and select Azure Active Directory.
  3. In the left-hand menu, click Azure AD Connect.
  4. You will see the link to the AD Connect agent, as in the following example:

Figure 3.8 – Getting the AD Connect agent download link

  5. Copy the agent onto the server on which you wish to install it, run it, and then follow the installation wizard.

It is recommended that you install the agent on at least two or, preferably, three servers in your environment to protect the synchronization process should the primary server fail.

Important note

The AD Connect agent cannot actively synchronize on more than one server. When installing the agent on the standby servers, you must configure them to be in staging mode. In the event of the primary server failing, you need to manually reconfigure one of the secondary servers to take it out of staging mode.

An important aspect of Azure AD Connect is how users’ passwords are validated, which in turn defines where the password is stored.

Introducing Azure AD – Understanding User Authentication

Azure AD is a cloud-based mechanism that provides the tools to address our security needs. Backed by Microsoft AD—an industry-standard and, importantly, proven secure authentication and authorization system—it supports both cloud-first (that is, stored and managed entirely in the cloud) and hybrid (a mix of cloud and on-premises) solutions.

Some of these tools are included by default when you create an Azure AD tenant. Others require a Premium add-on, which we will cover later.

These tools include the following:

  • Self-service password resets: Allowing your users to reset their passwords themselves (through the provision of additional security measures) without needing to call the helpdesk.
  • MFA: MFA enforces a second form of identification during the authentication process—a code is generated and sent to the user, and this is entered along with the password. The code is typically delivered to the user’s mobile device, either as a text message or via an MFA authenticator app. You can also use biometric devices such as fingerprint or face scanners.
  • Hybrid integration with password writeback: When Azure AD is synchronized to an on-premises AD with AD Connect, changes to a user’s password in Azure AD are sent back to the on-premises AD to ensure the directories remain in sync.
  • Password protection policies: Policies in Azure can be set to enforce complex passwords or the period between password changes. These policies can be integrated with on-premises directories to ensure consistency.
  • Passwordless authentication: For many organizations, the desire to remove the need for passwords altogether in favor of alternative methods is seen as the ultimate solution to many authentication issues. Credentials are provided through the use of biometrics or a FIDO2 security key. These cannot be easily duplicated, and this removes the need for remembering complex passwords.
  • Single sign-on (SSO): With SSO, users only need to authenticate once to access all their applications—regardless of whether they sign on through their on-premises directory or Azure AD, the single authentication process should identify the user across different environments.
  • CA: To further tighten security, CA policies can apply additional restrictions to user sign-in, or relax them when different rules should apply. For example, MFA can be set not to be required when signing in from specific Internet Protocol (IP) ranges, such as a corporate network range (see the sketch following this list).
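As an illustration of how such a policy can be defined, the following Python sketch creates a CA policy through the Microsoft Graph API. It is a simplified example: the access token is assumed to have been acquired already with the appropriate permissions, and the named-location ID is a placeholder:

    import requests  # pip install requests

    token = "<access token with Policy.ReadWrite.ConditionalAccess permission>"

    # Require MFA everywhere except a trusted named location
    # (for example, the corporate IP range).
    policy = {
        "displayName": "Require MFA outside the corporate network",
        "state": "enabledForReportingButNotEnforced",  # report-only while testing
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
            "locations": {
                "includeLocations": ["All"],
                "excludeLocations": ["<named-location-id>"],
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

    resp = requests.post(
        "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        json=policy,
    )
    resp.raise_for_status()
    print(resp.json()["id"])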

Differentiating authentication from authorization – Understanding User Authentication

User security is perhaps one of the most critical aspects of a system and, therefore, its architecture. Security has, of course, always been important to protect sensitive information within an organization. However, as we move our applications online and widen our audience, the need to ensure only the correct people gain access to their data has become crucial.

In this chapter, we explore the key differences between authentication and authorization, what tooling we have available within Azure to ensure the safety of user accounts, and how we design solutions according to different business needs.

In this chapter, we will examine the following topics:

  • Differentiating authentication from authorization
  • Introducing Active Directory (AD)
  • Integrating AD
  • Understanding Conditional Access (CA), Multi-Factor Authentication (MFA), and security defaults
  • Using external identities

Differentiating authentication from authorization

A significant and essential role of any platform is that of authentication and authorization. These two terms are often confused and combined as a single entity. When understanding security on platforms such as Azure, it’s vital to know how the different technologies are used.

Authentication is the act of proving who you are, often performed with a username/password combination. If you can provide the correct details, a system authenticates you.

Authentication does not give you access to anything; it merely proves who you are.

Once a system knows the who, it then checks to see what you have access to—this is termed authorization.

In Azure, authorization is the act of checking whether you have access to a particular resource such as a storage account, and what actions you can perform, such as creating, deleting, modifying, or even reading the data in the storage account.
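To make the distinction concrete, here is a purely illustrative Python sketch—the names and data are invented, not an Azure API—in which authentication proves identity, and a separate authorization step decides what that identity may do:

    # Purely illustrative - not an Azure API.
    USERS = {"alice": "correct-horse"}                  # authentication data
    ROLES = {"alice": {"storage1": {"read", "write"}}}  # authorization data

    def authenticate(username: str, password: str) -> bool:
        # Proves who you are; grants access to nothing by itself.
        return USERS.get(username) == password

    def authorize(username: str, resource: str, action: str) -> bool:
        # Checks what the authenticated identity may do on a resource.
        return action in ROLES.get(username, {}).get(resource, set())

    assert authenticate("alice", "correct-horse")
    assert authorize("alice", "storage1", "read")
    assert not authorize("alice", "storage1", "delete")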

Because of the number of different services and their associated actions that are available to a user in Azure, and the importance of ensuring the validity of a user, the ensuing mechanisms that control all this can become quite complicated.

Luckily, Azure provides a range of services, broken down into authentication and authorization services, that enable you to strictly control how users authenticate and what they can then access, in a very granular way.

Traditionally, authentication has been via simple username/password combinations; however, this is ineffective on its own, and therefore you need to consider many factors and strategies when designing an authentication mechanism. For example, the following scenarios may apply:

  • A user may choose too simple a password, increasing the chances of it being compromised.
  • Complex passwords or regular changes mean users are more likely to forget their password.
  • There may be delays in the authentication process if a user needs to call a helpdesk to request a password reset.
  • A username/password combination itself is open to phishing attacks.
  • Password databases can be compromised.

Important note

A phishing attack is an action whereby a malicious person will attempt to steal your password by sending you to a dummy website that looks like the one you want to access but is, in fact, their site. You enter your details, thinking it is the correct site, and now they have your personal information and can then use this to log in to the real site.

When systems are hosted on a physically isolated network, some of these issues are mitigated as you first need physical access to a building or at least a device set up with a virtual private network (VPN) connection that, in turn, would require a certificate.

However, in cloud scenarios, and especially hybrid systems, whereby you need external authentication mechanisms that must also map or sync to internal systems, this physical firewall cannot always be achieved.

With these scenarios in mind, we need to consider how we might address the following:

  • Managing and enforcing password complexity rules
  • Providing additional layers over and above a password
  • How to securely store and protect passwords

Now that we understand some of the issues we face with authentication systems, especially those that rely on username/password combinations, we can investigate what options are available to mitigate them. First, we will examine Microsoft’s established security platform, AD.

Network monitoring – Principles of Modern Architecture

CPU and RAM utilization are not the only sources of problems; issues can also arise from misconfigured firewalls and routing, or from misbehaving services generating too much traffic.

Traffic analytics tools will provide an overview of the networks in the solution and help identify sources that generate high traffic levels. Network performance managers offer tools that allow you to create specific tests between two endpoints to investigate particular issues.

For hybrid environments, VPN monitors specifically watch the direct connection links to your on-premises networks.

Monitoring for DevOps and applications

For solutions with well-integrated DevOps code libraries and deployment pipelines, additional metrics and alerts will notify you of failed builds and deployments. Support tickets or work items can be automatically raised and linked to the affected build.

Additional application-specific monitoring tools allow for an in-depth analysis of your application’s overall health, and again will help with troubleshooting problems.

Application maps, artificial intelligence (AI)-driven smart detection, usage analytics, and component communications can all be included in your designs to help drive operational efficiencies and warn of future problems.

We can see that for every aspect of your solution design—security, resilience, performance, and deployments—an effective monitoring and alerting regime is vital to ensure the platform’s ongoing health. With proper forethought, issues can be prevented before they happen. Forecasting and planning can be based on intelligent extrapolation rather than guesswork, and responding to failure events becomes a science instead of an art.

Summary

In this chapter, we looked at a high-level view of the architecture and the types of decisions that must be considered, agreed upon, and documented.

By thinking about how we might design for security, resilience, performance, and deployment and monitor all our systems, we get a greater understanding of our solution as a whole.

The last point is important—although a system design must contain the individual components, they must all work together as a single, seamless solution.

In the next chapter, we will look at the different tools and patterns we can use in Azure to build great applications that align with best-practice principles.

Architecting for monitoring and operations – Principles of Modern Architecture

For the topics we have covered in this chapter to be effective, we must continually monitor all aspects of our system. From security to resilience and performance, we must know what is happening at all times.

Monitoring for security

Maintaining the security of a solution requires a monitoring solution that can detect, respond, and ultimately recover from incidents. When an attack happens, the speed at which we respond will determine how much damage is incurred.

However, a monitoring solution needs to be intelligent enough to prioritize and filter false positives.

Azure provides several different monitoring mechanisms, both in general and specifically for security, and these can be configured according to your organization’s capabilities. Therefore, when designing a monitoring solution, you must align with your company’s existing teams so that alerts are directed appropriately and pertinent information is sent where it is required.

Monitoring requirements cover more than just alerts; the policies that define business requirements around configuration settings such as encryption, passwords, and allowed resources must be checked to confirm they are being adhered to. Azure Policy compliance reports will highlight any items that deviate so that the necessary team can investigate and remediate.

Other tools, such as Azure Security Center, will continually monitor your risk profile and offer advice on improving your security posture.

Finally, security patching reports also need regular reviews to ensure VMs are being patched so that insecure hosts can be investigated and brought in line.

Monitoring for resilience

Monitoring your solution is not just about being alerted to issues; the ideal scenario is to detect and remediate potential problems before they cause failures—in other words, we can use monitoring as an early warning system.

Applications should include in their designs the ability to output relevant logs and errors; this then enables health alerts to be set up that, when combined with resource thresholds, provide details of the running processes.

Next, a set of baselines can be created that identify what a healthy system looks like. When anomalies occur, such as long-running processes or specific error logs, they are spotted earlier.

As well as defined alerts that will proactively contact administrators when possible issues are detected, visualization dashboards and reporting can also help responsible teams see potential problems or irregular readings as part of their daily checks.

Monitoring for performance

The same CPU, RAM, and input/output (I/O) thresholds used for early warning signs of errors also help identify performance issues. By monitoring response times and resource usage over time, you can understand usage patterns and predict when more power is required.

Performance statistics can be used either to manually schedule scaling events or to set automated scaling rules more accurately.

Keeping track of scaling events throughout the life cycle of an application is useful. If an application is continually scaling up and down or not scaling at all, it could indicate that thresholds are set incorrectly.

Again, creating and updating baseline metrics will help alert you to potential issues. If resources for a particular service are steadily increasing over time, this information can predict future bottlenecks.
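As a toy illustration of the baseline idea—using invented numbers, not an Azure API—the following Python snippet compares current readings against a recorded baseline and flags any metric that has drifted beyond a tolerance:

    # Baseline values recorded when the system was known to be healthy.
    baseline = {"cpu_percent": 40.0, "response_ms": 120.0}
    TOLERANCE = 0.5  # flag readings more than 50% above baseline

    def anomalies(current: dict[str, float]) -> list[str]:
        return [
            metric
            for metric, value in current.items()
            if metric in baseline and value > baseline[metric] * (1 + TOLERANCE)
        ]

    print(anomalies({"cpu_percent": 85.0, "response_ms": 130.0}))  # ['cpu_percent']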

Architecting for deployment – Principles of Modern Architecture

One area of IT solutions in which the cloud has had a dramatic impact is deployment. Traditional system builds, certainly at the infrastructure level, were mostly manual: engineers would run through one series of instructions to build and configure the underlying hosting platform, followed by another set of instructions to deploy the software on top.

Manual methods are error-prone because instructions can be misunderstood or implemented incorrectly. Validating a deployment is also a complicated process, as it involves walking back through an installation guide and cross-checking the various configurations.

Software deployments led the way on this with automated mechanisms that are scripted, which means they can be repeated time and time again consistently—in other words, we remove the human element.

We can define our infrastructure in code within Azure, too, using either Azure Resource Manager (ARM) templates or other third-party tools; the entire platform can be codified and deployed by automated systems.
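As a minimal sketch of driving such a codified deployment from Python—using the Azure SDK, with placeholder subscription and resource group names, and a deliberately empty ARM template—the following submits a template deployment that can be re-run repeatedly with the same result:

    from azure.identity import DefaultAzureCredential         # pip install azure-identity
    from azure.mgmt.resource import ResourceManagementClient  # pip install azure-mgmt-resource

    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    template = {  # a trivial, empty ARM template for illustration
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [],
    }

    poller = client.deployments.begin_create_or_update(
        "my-resource-group",
        "example-deployment",
        {"properties": {"mode": "Incremental", "template": template}},
    )
    poller.wait()  # the same deployment can be re-run consistently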

The ability to deploy and re-deploy in a consistent manner gives rise to some additional opportunities. Infrastructure as Code (IaC) enables another paradigm—immutable infrastructure.

Traditionally, when modifications were required to a server’s configuration, the process would be to make the change manually on the server and record it in the build documentation. With immutable infrastructure, any modifications are made to the deployment code, and the server is then re-deployed. In other words, the server never changes; it is immutable. Instead, it is destroyed and recreated with the new configuration.

IaC and immutable infrastructure have an impact on our designs. PaaS components are more straightforward to automate than IaaS ones. That is not to say you can’t automate IaaS components; however, PaaS’s management does tend to be simpler. Although not a reason to use PaaS in its own right, it does provide yet one more reason to use technologies such as web apps over VMs running Internet Information Services (IIS).

You also need to consider which deployment tooling you will use. Again, Microsoft has its own native solution in the form of Azure DevOps; however, there are also third-party options. Whichever you choose will have some impact on connectivity and on any additional agents and tools you use.

For example, most DevOps tools require some form of deployment agent to pull your code from a repository. Connectivity between the repository, the agent, and the Azure platform is required and must be established in a secure and resilient manner.

Because IaC and DevOps make deployments quicker and more consistent, it is easier to build different environments—development, testing, staging, and production. Solution changes progress through each environment and can be checked and signed off by various parties, thus creating a quality culture—as per the example in the following diagram:

Figure 2.5 – Example DevOps flow

The ability to codify and deploy complete solutions at the click of a button broadens the scope of your solution. An entire application environment can be encapsulated and deployed multiple times; this, in turn, provides the opportunity to create various single-tenant solutions instead of a one-off multi-tenant solution. This aspect is becoming increasingly valuable to organizations as it allows for better separation of data between customers.

In this section, we have introduced how deployment mechanisms can change what our end-state solution looks like, which impacts the architecture. Next, we will look in more detail at how monitoring and operations help keep our system healthy and secure.

Using architectural best practices – Principles of Modern Architecture

Through years of research and experience, vendors such as Microsoft have collected sets of best practices that, when followed, provide a solid framework for good architecture.

With the business requirements in mind, we can perform a Failure Model Analysis (FMA). An FMA is a process for identifying common types of failures and where they might appear in our application.

From the FMA, we can then start to create a redundancy and scalability plan; designing with scalability in mind helps build a resilient solution and a performant one, as technologies that allow us to scale also protect us from failure.

A load balancer is a powerful tool for achieving both scale and resilience: it allows us to run multiple copies of a service and distribute the load between them, with unhealthy nodes being automatically removed.

Consider the cost implications of any choices. As mentioned previously, we need to balance the cost of downtime versus the cost of providing protection. This, in turn, may impact decisions between the use of Infrastructure-as-a-Service (IaaS) components such as VMs or Platform-as-a-Service (PaaS) technologies such as web apps, functions, and containers. Using VMs in our solution means we must build out load balancing farms manually, which are challenging to scale, and demand that components such as load balancers be explicitly included. Opting for managed services such as Azure Web Apps or Azure Functions can be cheaper and far more dynamic, with load-balancing and auto-scaling technologies built in.

Data needs to be managed effectively, and there are multiple options for providing resilience and backup. Replication strategies involving geographically dispersed copies provide the best recovery point objective (RPO), as the data is always consistent, but this comes at a financial cost.

For less critical data or information that does not change often, daily backup tools that are cheaper may suffice, but these require manual intervention in the event of a failure.

A well-defined set of requirements and adherence to best practices will help design a robust solution, but regular testing should also be performed to ensure the correct choices have been made.

Testing and disaster recovery plans

A good architecture defines a blueprint for your solution, but it is only theory until it is built; therefore, solutions need to be tested to validate our design choices.

Work through the identified areas of concern and forcefully attempt to break them. Document and run through simulations that trigger the danger points we are trying to protect against.

Perform failover and failback tests to ensure that the application behaves as it should, and that data loss is within allowable tolerances.

Build test probes and monitoring systems to continually check for possible issues and to alert you to failed components so that these can be further investigated.

Always prepare for the worst—create a disaster recovery plan to detail how you would recover from complete system failure or loss, and then regularly run through that plan to ensure its integrity.

We have seen how a well-architected solution, combined with robust testing and detailed recovery plans, will prepare you for the worst outcomes. Next, we will look at a closely related aspect of design—performance.
