It’s been … a weekend for security pros. The Apache Log4j vulnerability (CVE-2021-44228) affects somewhere between 0 and 3 billion-plus of the devices currently running Java. Luckily, a metric ton of amazing advice exists on #InfoSecTwitter right now. It’s a lot to consume at once, which is why we’ve put together three parallel workstreams you can use to divide and conquer, with links to some of the best Log4j resources available now. Check out these three streams — Prevention And Detection; Vendor Risk Management; and Internal And External Comms — below, and stay tuned for follow-up blogs with more in-depth information on each.
You’re going to see a lot of malicious traffic attempting to exploit the Log4j (aka Log4Shell) vulnerability, from “spray and pray” attempts to the opening salvo in sustained intrusion campaigns. Update your web application firewall (WAF) rules now to block these exploit attempts. If your WAF provider is pushing new rules automatically, accept them.
Some researchers contend that it’s going to be difficult to build a set of blocking rules that addresses every potential maliciously formatted string that could exploit Log4j … but that’s not an excuse to do nothing. Instead, assume that this will be an iterative process and that you might spend the next several weeks and months updating your WAF rules to address emerging exploit techniques. On a related note, if your process for updating WAF rules is clunky and manual, here’s your opportunity to streamline it.
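To make that iteration concrete, here is a minimal, deliberately incomplete sketch in Python of the kind of pattern matching a blocking rule might perform. The patterns and function name are illustrative assumptions, not a vendor's actual rule set; real rules must keep evolving to cover new obfuscations.

```python
import re

# Illustrative patterns only -- production WAF rules need to handle far more
# obfuscation (URL encoding, deeper nesting, case tricks) and will change often.
JNDI_PATTERNS = [
    # Plain form: ${jndi:ldap://...}
    re.compile(r"\$\{jndi:", re.IGNORECASE),
    # Nested-lookup obfuscation: ${${lower:j}ndi:...} or ${${upper:j}ndi:...}
    re.compile(r"\$\{[^}]*(?:lower|upper):j[^}]*\}ndi", re.IGNORECASE),
    # Default-value obfuscation: ${${::-j}${::-n}di:...}
    re.compile(r"\$\{::-j\}", re.IGNORECASE),
]

def looks_like_log4shell(payload: str) -> bool:
    """Return True if the string matches any known exploit pattern."""
    return any(p.search(payload) for p in JNDI_PATTERNS)
```

Treat a match as a signal to block or alert, and expect false negatives: as noted above, a complete blocklist is likely impossible, so pair this with patching and egress filtering rather than relying on it alone.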
Situational awareness is key for security operations teams during this type of event. The reports by Huntress Labs and LunaSec are required reading for your team. Further, your vendor risk management (VRM) and DevOps teams must work closely with your security operations center (SOC) to make sure the SOC knows which assets are affected by this vulnerability. As mentioned in CISA guidance, SOC teams should treat any alerts from affected assets as critical and prioritize them in the queue.
There are a lot of resources available for Log4Shell prevention, detection, and response. Instead of regurgitating that information, we’re providing links to some of the most relevant here:
Per Twitter, the researcher who discovered this vulnerability disclosed it to the Apache Software Foundation in November 2021. The researcher lists their location as China and their employer as Alibaba’s cloud security team. Given the vulnerability disclosure requirements China added in 2017, the Chinese government must authorize researchers before they disclose publicly. Based on those legal requirements, it appears one of the most active nation-states received advance notice of a massive vulnerability in, or before, November and had ample time to weaponize the technique. Factor that into your severity assessment based on your threat models.
Keep in mind that this vulnerability is still fresh, and there is a big difference between a vulnerability and the development of a full-fledged attack. Proof-of-concept exploits hit the internet almost immediately after the vulnerability became public. The visibility of this issue and the attention it has garnered make it less interesting for sophisticated intruders, but opportunistic threat actors focused on monetization jumped on it right away — researchers see it used largely for coin miners now.
Regardless of the sophistication of the threat actors behind the intrusion, even coin miners require a response. While this exploit takes center stage right now, exploits can live forever, much like MS08-067, which remained a reliable technique for pen testers for over a decade. Develop a library of active hunting hypotheses for how you could find this activity if passive detection fails. Some resources that may be beneficial are here and here.
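One such hunting hypothesis, sketched below under assumed paths and extensions: if passive detection fails, walk your file systems for archives that bundle the vulnerable JndiLookup class shipped in affected log4j-core builds. Filenames alone are unreliable because shaded and fat JARs can embed Log4j under any name. The function name and scan logic here are a sketch, not a substitute for a full software inventory.

```python
import os
import zipfile

# Class present in vulnerable log4j-core builds; CISA guidance suggested
# removing it outright where patching was not immediately possible.
VULN_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def find_vulnerable_jars(root: str) -> list[str]:
    """Walk a directory tree and flag Java archives bundling JndiLookup."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith((".jar", ".war", ".ear")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with zipfile.ZipFile(path) as zf:
                    if VULN_CLASS in zf.namelist():
                        hits.append(path)
            except (zipfile.BadZipFile, OSError):
                continue  # unreadable or corrupt archive -- log it in practice
    return hits
```

Note that a hit means the class is present, not that the build is exploitable — check the embedded version before declaring an asset vulnerable, and feed confirmed hits to your SOC’s asset list.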
This is neither the first nor the last vulnerability of this kind; use this time to take notes on what needs to improve for the next go-round. Have your team track after-action items and consider also highlighting the following, which we regularly see as challenges for security pros:
Vendors started dropping patches like Oprah gives away cars. Cars require taxes and insurance, and patches need someone to catalog, test, deploy, and potentially roll them back. Patching being mentioned next to taxes and insurance somehow seems … appropriate, though.
If your organization planned a holiday/end-of-year freeze on code releases and patches, well, things might need to change. Last year, it was SolarWinds playing Krampus, and this year, it’s Log4Shell taking on the role of the Grinch. To avoid breaking all the things, identify and prioritize mission-critical, highest-risk software and implement a patching and testing schedule. Highest-risk software will vary based on your industry but will likely include any systems with access to:
There are a few lists floating around of vendors that are affected/not affected by this vulnerability (for example, here). Your VRM team should work in concert with your IT, business applications, and business resilience teams to identify and classify affected vendors, validate that those vendors are issuing patches, and confirm when each patch has been applied by your team.
Some vendors may misjudge the urgency and the magnitude of the potential impact of this vulnerability, which affects more than just your security posture. Also understand that your ability to mobilize your own efforts and provide assurance about your vendors’ efforts will undoubtedly impact cyber-insurance rates and coverage. Check out this resource for technical guidance on protecting systems that will not be patched in a timely manner.
This vulnerability is going to plague security teams and enterprises more broadly for months. A conversation with the organization about what this means for the business is an inevitability. When talking to employees, the board, or anyone else external to the security organization, use easily understood language such as the following, which we have tweaked slightly from @ramen0x3f:
There is a weakness in a popular software tool used by a large part of the internet. When a user inputs certain text into applications built with this tool, it triggers behavior that gives the attacker total control of the device the software is running on.
Don’t stop there, as that explainer covers neither the effort your team has put into dealing with the issue nor your plan to continue addressing it. This is a leadership moment and a teaching moment to engage non-security stakeholders. Follow up your explainer with:
The primary challenge relates to the size and scope of the vulnerability. An enormous number of applications use this underlying software. Given the aperture, threat actors immediately began to weaponize this attack to facilitate intrusions into environments. We expect exploit activity to increase in the coming days and weeks. With that in mind, we have initiated the following actions: 1) Identify the presence of the software in our environment; 2) assess and upgrade our visibility and monitoring of the systems and infrastructure where the software resides; 3) mitigate the vulnerability where possible with existing controls already in place; and 4) patch as soon as possible in coordination with developer and information technology teams.
When all else fails, send your team some #InfoSecTwitter memes. Camaraderie is important, though I’m mostly joking here. But it is notable that this may be the first human-made vulnerability on Mars. I wonder how patching Ingenuity is going.
We will be delving into each of these streams in more depth in the coming days. Stay tuned for a detailed breakdown of each.