BloodHound is the de facto standard that both blue and red security teams use to find lateral movement and privilege escalation paths that could be exploited inside an enterprise environment. A typical environment can yield millions of paths, offering almost endless opportunities for red teams to attack and creating a seemingly insurmountable number of attack vectors for blue teams to defend.
However, a critical dimension that BloodHound ignores, namely network permissions, could hold the key to shutting down advanced attacks and ransomware spread. For example, in a least privilege network, where hosts are severely restricted in terms of what they can access (on average, they can access less than 2% of network resources), it turns out 99% of the paths BloodHound identifies are impractical!
We have developed a suite of tools that enable us to integrate network permissions into BloodHound’s database to improve the analysis of paths and increase the accuracy of assessments in restrictive, least privilege networks. The tools include: CornerShot, ShotHound, DB Creator, custom_queries and Ransomulator.
In this post, we present a resilience methodology, accompanied by tools, that enables security teams to continuously measure and improve the resilience of their network with restrictive network permissions, helping them achieve a least privilege networking stance. This methodology is largely based on Andrew Robbins’ (@_wald0) posts (part 1, part 2).
Part 2 of our blog post is designed to provide more technical details on each tool and technique.
Least privilege is a well-known security concept describing an enterprise environment in which each account has access only to the resources for which it has a legitimate need. Similarly, Least Privilege Networking (LPN) only permits legitimate accounts and devices to have network permissions to needed resources - all other network access is blocked.
In order to measure the “restrictiveness” of a network in this post, we will use the average percentage of hosts accessible to each host in the network. For example, if each host can access on average 10 other hosts in a network of 100 hosts, this would be a network with 10% access.
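This restrictiveness measure can be expressed as a short calculation. The sketch below assumes network access is available as a mapping from each host to the set of hosts it can reach; the map and host names are hypothetical, not output from any specific tool:

```python
def average_access_percentage(access):
    """Average percentage of the network each host can reach.

    `access` maps a host name to the set of hosts it can open
    connections to. Following the post's example (10 reachable
    hosts out of 100 = 10% access), the denominator is the
    total network size.
    """
    total_hosts = len(access)
    if total_hosts == 0:
        return 0.0
    per_host = [len(reachable) / total_hosts for reachable in access.values()]
    return 100.0 * sum(per_host) / total_hosts

# Toy network of 4 hosts: each host can reach exactly 1 other host,
# so the average access is 1/4 of the network = 25%.
toy = {
    "A": {"B"},
    "B": {"C"},
    "C": {"D"},
    "D": {"A"},
}
print(average_access_percentage(toy))  # 25.0
```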
If an organization can establish and maintain a least privilege network, it effectively shuts down unnecessary paths. The less hosts can access on average, the more difficult it is for attackers to find open attack paths. Not only does an adversary need to collect path information via BloodHound, they also need to map it to viable network access - information that is not easily obtained (even for blue teams). Additionally, even if an attacker is able to compromise privileged credentials, in an LPN those credentials can only be used from a legitimate device to access a specific set of resources.
Let us illustrate with the following example. Say an attacker compromises a host (COMP00160), executes BloodHound, and finds the following path:
This is a logical path from the host to an account that is a domain admin. But that doesn’t mean it is a practical path - do the network permissions allow such a path to exist? Let’s add the network layer and see if this path is practical.
We see here that there is no network access from the compromised computer (COMP00160) to the next hop (COMP00380), as there is no “open” edge between them. So even though there is access to the final hop (COMP00281), the attacker cannot take this path to get to the domain admin.
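The check performed in this example can be sketched in a few lines. This is a simplified illustration of the idea, not ShotHound itself, and it only validates computer-to-computer hops; the access map below is hypothetical:

```python
def is_practical(path, network_access):
    """Return True if every hop in a logical path is also
    allowed by network permissions."""
    return all(dst in network_access.get(src, set())
               for src, dst in zip(path, path[1:]))

# Hypothetical access map: the compromised host can reach the final
# hop directly, but not the intermediate hop BloodHound suggested.
network_access = {
    "COMP00160": {"COMP00281"},
    "COMP00380": {"COMP00281"},
}

logical_path = ["COMP00160", "COMP00380", "COMP00281"]
print(is_practical(logical_path, network_access))  # False: first hop is blocked
```

Even though the final hop (COMP00380 to COMP00281) is open, the blocked first hop makes the whole path impractical.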
Red teams can use CornerShot to find their way in a least privilege network environment. If they are using BloodHound to find paths, ShotHound can be used to locate paths that are valid (i.e., that also have sufficient network access). That way, red teamers avoid following “dead ends”.
With custom queries and Ransomulator, blue teams can measure the resilience of their network and identify critical areas for improvement. As blue teams often have more control over network security than over account access (which is usually in the realm of application owners), it is usually quicker and simpler to mitigate paths using network access. Additionally, it adds another dimension of security that defends against additional attack vectors (on top of lateral movement), such as network discovery and remote vulnerability exploitation.
Security teams have (or at least, should have) full visibility into their network permissions. This information can be integrated into the BloodHound database (more on that in part 2 of this blog series) and used by security teams to measure their resilience, so they can choose and deploy appropriate mitigations. While there are many ways to measure resilience, in this post we cover two methods: practical paths and ransomware infection.
One measurement is the percentage of practical paths in an environment. These are logical paths discovered by BloodHound that are also feasible due to open network permissions. If a BloodHound database already exists, ShotHound can be used to discover those practical paths.
For example, a shortest path query in a simulated environment with 5,000 hosts (generated using DBCreator; refer to part 2 for more information) found 21,814 logical paths to a domain admin account.
Running a similar query that filters out impractical paths yields only 202 paths - just 0.9% of the logical paths. A low percentage of practical paths is very good for blue teams because it means there are fewer ways for an attacker to accomplish their objectives. It also frustrates attackers by sending them down false paths, increasing the chance they will be discovered and shut down before they can do any damage.
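The practical-path percentage can be computed by filtering the logical paths through the network-access map. A minimal sketch, with hypothetical host names and paths standing in for real BloodHound output:

```python
def practical_percentage(logical_paths, access):
    """Percentage of logical paths whose every hop is allowed
    by network permissions."""
    def practical(path):
        return all(dst in access.get(src, set())
                   for src, dst in zip(path, path[1:]))
    if not logical_paths:
        return 0.0
    practical_count = sum(1 for p in logical_paths if practical(p))
    return 100.0 * practical_count / len(logical_paths)

# Hypothetical data: 4 logical paths, only 1 survives the network filter.
access = {"H1": {"H2"}, "H2": {"H3"}}
paths = [
    ["H1", "H2", "H3"],   # practical: both hops are open
    ["H1", "H3"],         # blocked: no direct access H1 -> H3
    ["H2", "H1"],         # blocked
    ["H3", "H2"],         # blocked
]
print(practical_percentage(paths, access))  # 25.0
```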
The more restrictive a network is, the lower the percentage of practical paths. We simulated several networks, with a varying average percentage of network access for each host to show you what we mean:
Notice that restricting network access to 50% immediately cuts the number of practical paths down to roughly 30%! That’s a big impact, which is why blue teams that use network restrictions will be able to implement effective mitigations that improve the overall resilience of their environment.
Not all attackers are stealthy. Some ransomware attacks use a worm-like spreading pattern, expanding to all accessible hosts over the network. These attacks replicate themselves on each compromised host, which increases the chance that they will stumble upon a “hidden” path that was not discovered by BloodHound.
Ransomulator can be used to discover these hidden paths and measure just how much of the network could be compromised from each given host. By continuously running simulations and then applying mitigations, we can reduce the size of the infection and its subsequent impact.
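This kind of worm-style spread can be approximated with a breadth-first search over the practical-access graph, counting how many hosts are reachable from each starting point. This is a simplified sketch of the concept, not Ransomulator’s actual implementation, and the access map is hypothetical:

```python
from collections import deque

def infection_size(start, access):
    """Number of hosts (including the start) a worm could reach
    from `start` by following allowed network connections."""
    infected = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbor in access.get(host, set()):
            if neighbor not in infected:
                infected.add(neighbor)
                queue.append(neighbor)
    return len(infected)

# Hypothetical 5-host network: A can only reach B, but B reaches
# C and D, so compromising A still infects 4 of the 5 hosts.
access = {
    "A": {"B"},
    "B": {"C", "D"},
    "C": set(),
    "D": set(),
    "E": set(),
}
sizes = {host: infection_size(host, access) for host in access}
print(sizes["A"])  # 4
print(sizes["E"])  # 1
```

Running this from every host yields exactly the kind of per-host compromise counts plotted in the figure below.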
To understand the kind of insights that can be gained from Ransomulator, we ran it over several simulated databases, each simulating a stricter network. We looked at 2% access, 75% access, and 100% access (simulating a flat network, where every logical path is practical).
The following figure shows how many hosts can be compromised from any given host in the dataset, illustrating how ransomware would spread in the network. Each number on the horizontal axis identifies a computer, and the vertical axis shows how many other hosts could potentially be compromised if that computer is infected.
Only about 300 computers have logical paths, but once a path exists, it quickly takes over the entire network. When examining practical paths for a “typical”, not very restrictive network (75% access), we see only a marginal improvement. However, once we apply it to a truly least privilege network (2% access), we see not only fewer hosts with paths to other computers, but also a reduction in the impact of a host being compromised. By how much?
We can measure the impact of infection waves in several ways. One method is to compare the area of the restrictive (green) portion of the graph against the logical (red) portion. In the above example, the restrictive portion makes up only 5% of the logical one, meaning a network with 2% access cuts possible ransomware infection down to 5% of its impact on an equivalent flat network, where any logical path would be practical.
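The area comparison boils down to summing the per-host infection counts under each network and taking the ratio. A minimal sketch, with hypothetical per-host counts standing in for real simulation output:

```python
def infection_area_ratio(restrictive_sizes, flat_sizes):
    """Ratio of total infection 'area' (sum of per-host compromise
    counts) in the restrictive network vs. the flat network."""
    flat_total = sum(flat_sizes)
    if flat_total == 0:
        return 0.0
    return sum(restrictive_sizes) / flat_total

# Hypothetical per-host infection counts for a 5-host network.
flat = [5, 5, 5, 5, 5]          # flat network: any host compromises all
restrictive = [2, 1, 1, 1, 1]   # least privilege network
ratio = infection_area_ratio(restrictive, flat)
print(f"{ratio:.0%}")  # 24%
```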
The resilience methodology aims to help blue teams efficiently reduce the number of attack paths discovered by BloodHound. Andrew Robbins (@_wald0) from SpecterOps wrote some excellent blog posts (part 1, part 2) that describe the methodology in great detail. In a nutshell, the methodology consists of cycles. Each cycle is broken down into four steps with clear metrics that help teams measure if security has gotten better, and by how much.
Following the same principles, we suggest the following steps to measure resilience with network access:
This step requires collecting information with SharpHound, as well as network access information. The latter can be gathered most efficiently via ShotHound (scanners such as nmap, TrustMeter, and CornerShot can also provide some of this information, as can firewall configurations). This information can then be integrated into BloodHound’s database - more on how this can be done in part 2 of this blog.
Using both resilience methods discussed and custom queries, we can analyze attack paths to measure the resilience value against our target. This will help us identify whether we need to implement more mitigations to improve our security stance.
For example, we may have a network with the following measurements, and target values, which tells us we need to do more to get closer to the resilience level we want:
If we still haven’t hit our target values in step two, we can use the information from Ransomulator or custom_queries to generate a hypothesis on how best to improve our resilience.
Mitigating BloodHound paths is not limited to logical mitigations. With the visibility we get from the custom queries and Ransomulator, we can now also deploy fixes in the form of additional network restrictions. The type of mitigation will change for each network and use case. Once a fix is deployed, the results can be reanalyzed and measured against our target goal.
Besides strict "logical" mitigations, security teams can continuously measure and improve their resilience using network permissions with the “enumerate, analyze, generate, and deploy” methodology and tools we laid out.
Check out part 2 of this blog series to get more technical details on the tools and methodology we introduced here to help you establish and maintain the level of resilience you are looking for in your environment.