June 26, 2023
For those responsible directly or indirectly for the cyber defense of their organizations, June 2023 is proving to be an extremely challenging month.
In this month alone, vulnerabilities were discovered in various appliances, ranging from CVE-2023-27997, impacting FortiGate devices, to CVE-2023-35708, impacting MOVEit Transfer software, as well as exploitation activity against Barracuda appliances via CVE-2023-2868.
Responding to each of these vulnerabilities involves following multiple, fast-moving vendor advisories and performing different, often complex patching procedures.
Although well-meaning, “just patch” guidance is often difficult for organizations to follow, as various circumstances can prevent the swift patching of such appliances.
Similarly, building threat detection use cases for such appliances is not a simple task for several reasons:
Appliances may not allow the installation of custom telemetry collection agents
Telemetry from these devices tends to tilt toward debug and operational information rather than security use cases
Log formats from these appliances are often generated in non-standard and difficult-to-parse formats
These appliances may be located in network segments that do not allow for simple telemetry collection
The exploitation of these devices may occur in a “0-day” fashion that exploits gaps in existing threat detection coverage
Despite the above, in many networks these devices still generate ingress and egress traffic that traverses corporate firewalls. In addition, some of the above-mentioned appliances install as software on top of Linux or Windows operating systems, and both operating systems generate telemetry that can help us gain visibility into the operations performed by such devices.
Given the above context, this blog aims to showcase how the Sumo Logic platform can be brought to bear in detecting threats that stem from vulnerabilities discovered in remote service appliances using telemetry found on corporate firewalls and endpoints.
To aid in any response efforts, organizations must maintain an up-to-date inventory of assets. Organizations that follow NIST best practices can refer to NIST SP 1800-5 for information regarding IT asset management.
Once a vulnerability is identified in a remote service appliance, you can reference your inventory and gather information, particularly IP addresses of vulnerable appliances.
Once a vulnerable appliance is identified in the network, you can create a Sumo Logic Cloud SIEM match list with the relevant information.
Let’s create a match list with the following parameters:
Once the list is created, we can click the “Add List Item” button and add the IP address of our vulnerable appliance, as found in our asset inventory.
In our example, we have identified a vulnerable appliance with an external IP address of 137[.]117[.]85[.]175 – once finished, our match list should look something like this:
We can follow the same above steps to add an internal IP address as well.
Depending on how a particular network is configured and how hosts interact with firewalls and NAT, you might want to search for either the internal IP address of the vulnerable appliance or the external one. In our demonstration, we will focus on the internal IP address.
Once our match list is set, we can navigate to the Cloud SIEM interface and click on Records, and place the internal IP address of our vulnerable appliance in the filter input box:
Once you hit enter, any records containing this particular IP address within the time filter will be presented to you:
In this case, we can see that this particular IP address is associated with different record types such as Network, Authentication and Audit. Depending on what the appliance is and what telemetry is being sent into your particular Cloud SIEM instance, you may see other record types appear here.
When we dig into a particular record, however, we can see that data generated after we created our match list is enriched with our match list details:
In our case, this network data is stemming from a Windows host, using host-based telemetry, so process information is included as well. We also see the list items that were matched and the destination IP details.
With this information, we can now begin to craft alerts that use this match list information to aid our threat detection efforts.
At this point, we have identified a vulnerable appliance in our estate and have also referenced our inventory and plugged the inventory information into a Cloud SIEM match list.
Now we need to use this match list in various Cloud SIEM rules to gain visibility into what actions our vulnerable appliances are performing.
Let’s start with a simple match rule that looks at a vulnerable appliance initiating a network connection to an external IP address.
We can begin crafting this rule by navigating to the Content menu, then clicking Rules, then “Create”:
On the “Create a Rule” screen, we can proceed to select a “Match” rule.
In addition to creating match lists via the user interface, Cloud SIEM match lists can also be updated and created via API so this process can be automated and integrated into different security operations workflows.
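As a rough sketch of that automation, the following PowerShell snippet adds an item to an existing match list. The API path, payload shape and the match list ID used here are assumptions; confirm them against the Cloud SIEM API documentation for your deployment before use.

# Sketch: add an item to a Cloud SIEM match list via the API.
# The /sec/v1/match-lists/{id}/items path and the payload shape are assumptions -
# verify them against the Cloud SIEM API documentation for your deployment.
$accessId  = $env:SUMO_ACCESS_ID
$accessKey = $env:SUMO_ACCESS_KEY
$auth      = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${accessId}:${accessKey}"))
$headers   = @{ Authorization = "Basic $auth" }

$matchListId = '123'   # hypothetical ID of the CVEVulnerableAppliance match list
$body = @{
    items = @(
        @{ value = '10.10.10.10'; description = 'Vulnerable appliance from asset inventory'; active = $true }
    )
} | ConvertTo-Json -Depth 4

# The API hostname varies by deployment (for example api.sumologic.com or api.us2.sumologic.com)
Invoke-RestMethod -Method Post -Headers $headers -ContentType 'application/json' -Body $body -Uri "https://api.sumologic.com/api/sec/v1/match-lists/$matchListId/items"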
Within this match rule, we will be using the following logic which we will explain line by line:
array_contains(listMatches, "CVEVulnerableAppliance")
and metadata_deviceEventId = "Microsoft-Windows-Sysmon/Operational-3"
and fields["EventData.Initiated"] = true
and dstDevice_ip_isInternal = false
The first line of this query filters results to only those found on our match list
The second line filters the log to Sysmon Event ID 3 (NetworkConnect) - this line is optional, and different environments will have different sources for network telemetry
The third line sets the directionality; in this case, we only want outbound network connections
The final line in the query filters results that have an external destination IP address
Our final rule will look something like this, with the various names, descriptions and MITRE tag being fully customizable:
Crafting alerts of this nature, using a match list instead of a hardcoded IP address, allows for a division of labor: different members of your security or engineering teams can update the match list as more vulnerable appliances are discovered on the network, while detection engineers continually tweak rules to increase true positive rates and overall visibility.
Once our rule conditions are met, a Cloud SIEM signal will trigger:
When crafting the alert, we set the severity score to a high value of 10. As our source host performs additional actions, the Cloud SIEM Insight Generation process will kick in, and an insight will be generated should the entity in question continue to trigger signals.
An insight can also be created manually from the signal. Once the insight is created, we can click into it and see which signals are associated with it as well as which entities are involved:
On this screen, we can also click on the “Graph View” button to bring up the entity-relationship graph:
This graph view is incredibly powerful, displaying metadata for entities as well as giving analysts the ability to pivot off various entity elements.
In addition to straightforward match rules, our match list can be utilized by Sumo Logic UEBA rules, including First Seen and Outlier rules.
These rules allow analysts to flag “first seen” behavior after a certain baseline period.
For example, using the following rule logic, we can detect when a new User Agent is seen by our vulnerable appliance:
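As a sketch only, the filter expression for such a rule could reuse our match list and key on HTTP telemetry, with the “has new value for” field set to the user agent. The http_userAgent field name is an assumption and will depend on the record types present in your environment:

array_contains(listMatches, "CVEVulnerableAppliance") and http_userAgent != null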
The baseline and retention periods can be adjusted here, as can the “has new value for” fields. Some other first seen ideas include:
Protocols
Ports
Autonomous System Numbers (ASNs)
Destination IPs
Temporal elements like hours or days
The options available will heavily depend on the type of telemetry that is available within the Cloud SIEM platform.
In many cases, analysts and engineers may not understand exactly what these devices are doing on their network, as their operations may be opaque and may not have had threat detection use cases built out for them.
In such scenarios, Cloud SIEM’s Outlier Rule feature can help profile and baseline activities from vulnerable devices.
Once again we can filter on only vulnerable devices and detect outliers using counts, averages, distinct counts, max/min values and sums of various data elements, depending on what kind of telemetry is available.
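Outside the Cloud SIEM rule editor, the same baselining idea can be illustrated with the outlier operator in a Sumo Logic log search. This is a rough sketch rather than the Outlier rule itself; the _sourceCategory value is an assumption, and the IP is the example appliance address used earlier:

// count hourly connections involving the vulnerable appliance and flag deviations from the baseline
_sourceCategory=network/firewall "137.117.85.175"
| timeslice 1h
| count by _timeslice
| outlier _count window=24,threshold=3,consecutive=1,direction=+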
Once a signal is generated for an outlier rule, a graph is presented that illustrates the trend line, along with the configured threshold and where exactly that threshold was crossed:
In addition to data enrichment for items like ASN and geography, Cloud SIEM will also automatically calculate domain entropy and will flag on possible dynamically generated domains like those used by the SUNBURST implant.
All these data points can be used in the threat detection scenarios outlined above to craft advanced alerts that take advantage of any and all telemetry that is available from devices flagged as vulnerable.
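As a purely illustrative sketch, if such an entropy enrichment is exposed as a field on your records (the domainEntropy field name below is hypothetical), it could be combined with our match list in a rule expression along these lines:

array_contains(listMatches, "CVEVulnerableAppliance") and domainEntropy > 3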
One of the incredibly powerful aspects of the Sumo Logic platform is that data normalized by Cloud SIEM are available to be queried by the Sumo Search Query Language.
In addition, any generated Cloud SIEM signals and insights can be queried using the same query syntax. Users of the platform can therefore query raw and normalized data as well as signal and insight metadata with a single query language, which enables some very powerful searching and reporting capabilities.
Consider the following scenario: you are asked by your management to generate a report showing which geographic locations a specific vulnerable appliance initiated connections to within a certain time frame.
We already have a match list containing the IP addresses of the vulnerable device in question, and we already have signal data that was generated when this device performed network connections to an external IP address.
Now, we need to tie this information together and crunch some data.
We can do so with the following query, which uses the Cloud SIEM Signal partition:
_index=sec_signal ruleName = "Network Connection from Vulnerable Appliance"
| %"fullRecords[0].dstdevice_ip_countryname" as Country
| %"fullRecords[0].srcdevice_ip" as src_ip
| count(Country) as Country_Count by Country
In this query, we are performing a search on our signal data and are extracting certain fields such as Country and src_ip. We are then generating a count of connections per country.
We can click on the “Aggregates” tab of our search and click on the “Pie Chart” option to generate the following:
We can then click on the “Add to Dashboard” button as well to add this pie chart to a dashboard that we can later export.
In addition to the count of countries above, we can also generate a map with counts utilizing the following query:
_index=sec_signal ruleName = "Network Connection from Vulnerable Appliance"
| %"fullRecords[0].dstdevice_ip_longitude" as longitude
| %"fullRecords[0].dstdevice_ip_latitude" as latitude
| count by latitude, longitude
| sort _count
This will generate a map that looks like the following:
Once again we can add this panel to our dashboard. Once you are happy with your dashboard elements, you can go ahead and turn this dashboard into a scheduled deliverable report:
Once the schedule window occurs, we should see the report land in the designated mailbox:
The PDF attached will contain the same elements as in the dashboard that we built.
As stated above, telemetry from vulnerable remote service appliances is not always available in real-time within SIEM platforms. Often, it is necessary to perform some level of host-based forensic collection from such devices.
Various powerful tools exist that perform this type of activity, including Hayabusa and Velociraptor; these tools provide output options in common formats such as CSV or JSON. These logs can then be sent to Sumo Logic for further analysis.
Using Hayabusa as an example, we can collect host-based artifacts from Windows event logs and transform them into JSON format while running Sigma rules against this data set.
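For reference, an invocation along the following lines can produce that JSON output; Hayabusa subcommand and flag names vary between releases, so treat this as an assumption and confirm against the help output of your version:

# Assumed Hayabusa invocation - confirm the subcommand and flags with .\hayabusa.exe help
.\hayabusa.exe json-timeline -d C:\Windows\System32\winevt\Logs -o .\hayabusa-results.json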
Once we have our JSON file available, we can create a hosted collector in Sumo Logic:
Once the collector is configured, we can go ahead and add an HTTP source, taking note of the endpoint URL which we will use in our next step.
Using the following simple PowerShell script, we can send all JSON files found in the current directory to our collection endpoint:
# Get all the JSON files in the current directory
$JSONFiles = Get-ChildItem -Path . -Filter *.json

# For each of the JSON files in the array, loop through them
foreach ($JSONFile in $JSONFiles) {
    Write-Host "Sending Data for" $JSONFile.Name
    # Send the content of the individual JSON file to the Sumo Logic HTTP source
    Invoke-WebRequest -Method POST -ContentType 'application/json' -Body (Get-Content $JSONFile.Name -Raw) -Uri 'https://url.from.previous.step'
}
Once the data makes its way through the endpoint and into Sumo Logic, we can go ahead and query it, similar to the following:
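As a starting point, a search along these lines can summarize the ingested results. The _sourceCategory value and the RuleTitle and Level field names are assumptions based on typical Hayabusa JSON output; adjust them to match your source configuration:

// summarize Hayabusa Sigma rule hits by rule title and severity level
_sourceCategory=hayabusa
| json field=_raw "RuleTitle", "Level" as rule_title, level
| count by rule_title, level
| sort by _count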
From here, we can dig into the individual rules and their respective data elements to see what host-based collection tools discovered.
No matter which framework you follow, be it the Cyber Kill Chain, MITRE ATT&CK or others, exploitation of these types of remote service appliances typically leads to other activities on the host or network.
In other words, once a threat actor exploits one of these appliances, they must take additional action to meet their objectives.
What this means for us defenders is that these actions often hit detection categories other than the original exploitation: although a particular exploitation attempt may utilize a 0-day, the behavioral artifacts generated by that exploitation may, in some cases, trigger existing rule sets.
A good example here is web shell detection, which, depending on the type of appliance being exploited, should trigger regardless of the novelty of the exploit.
No one has a crystal ball filled with futuristic visions of upcoming 0-day exploits. However, we can utilize testing methodologies within our networks that identify blind spots and gaps in telemetry.
A great tool here is Atomic Red Team which provides various atomic tests that can be executed on hosts to generate telemetry that can be utilized for threat detection.
The following Atomic tests are highlighted as valuable for identifying post-exploitation activities of remote service appliances.
| MITRE ID | Atomic Name |
| --- | --- |
| T1133 | External Remote Services |
| T1059.004 | Command and Scripting Interpreter: Bash |
| T1505.003 | Server Software Component: Web Shell |
| T1082 | System Information Discovery |
| T1552.001 | Unsecured Credentials: Credentials In Files |
| T1003 | OS Credential Dumping |
| T1195 | Supply Chain Compromise |
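As an example, the web shell tests above (T1505.003) can be exercised with the Invoke-AtomicRedTeam PowerShell module roughly as follows. The technique and test numbers are illustrative; review each test and its cleanup steps before running anything outside a lab:

# Requires the Invoke-AtomicRedTeam module and the atomics folder
Import-Module Invoke-AtomicRedTeam
Invoke-AtomicTest T1505.003 -ShowDetailsBrief            # list the available tests
Invoke-AtomicTest T1505.003 -TestNumbers 1 -CheckPrereqs # verify prerequisites
Invoke-AtomicTest T1505.003 -TestNumbers 1               # execute, then validate telemetry in Sumo Logic
Invoke-AtomicTest T1505.003 -TestNumbers 1 -Cleanup      # revert changes made by the test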
Given current trends, there is no reason to expect that the discovery of vulnerabilities in various remote service appliances will slow down or abate. Given the criticality of these appliances, combined with the increased attention they receive from threat actors, information security leadership should expect more “June 2023” type months ahead.
In an ideal state, security teams will have full visibility into these appliances and will patch according to vendor guidance swiftly with no exploitation occurring. However, as we all know, ideal states rarely exist in today’s fast-moving world of technology.
Given these dynamics, organizations must bring to bear any and all available resources when suspected or actual exploitation of remote service appliances occurs.
Although telemetry may not always be directly available from remote service appliances, it is often the case that such appliances will generate other secondary telemetry on network devices or various endpoints, with this telemetry being fed into a SIEM.
In this post, we showed how the Sumo Logic platform can extract every ounce of utility from varied and often incomplete strands of telemetry. We also highlighted some helpful ways to stay ahead of the curve and to test your networks for gaps in threat detection and telemetry collection pipelines.
One of our core Sumo Logic values is that “We’re in it with our customers” and in this regard, the broader Sumo community and the Sumo Threat Labs team specifically stand with all defenders, analysts, engineers and incident responders who are tasked with responding to the ever-changing threat landscape.