September 19, 2022
In today’s hybrid, multi-cloud environments, users and administrators connect to various cloud services using Command Line Interface (CLI) tools and web browsers.
This post highlights the risks associated with unprotected and unmonitored cloud credentials which are found on endpoints, in file shares and in browser cookies.
To help you alert on and hunt for this malicious activity, this post provides actionable and direct guidance around:
Data collection
Baselining
Hunting & alerting
Business workloads are increasingly migrating to the cloud, and for good reason: applications built on cloud workloads enjoy advantages in speed of delivery, cost savings and a host of other benefits.
Cloud migration inherently adds complexity, particularly as you’ll need to secure both the cloud environment and your on-prem access point. Although workloads may reside in the cloud, they are typically administered and configured from an endpoint operated by a cloud administrator, cloud developer or other roles.
With this context as our backdrop, the Sumo Logic Threat Labs Team performs a deep dive into how cloud credentials are stored on Windows endpoints. We cover the risks associated with this dynamic, in addition to showcasing how Sumo Logic can be used to hunt for and monitor malicious activity in the context of cloud credential theft on the endpoint.
Before diving into queries, data and hunts - let us take a step back and perform some threat modeling in the context of cloud credentials.
Here, we can use the OWASP threat modeling cheat sheet as a starting reference point.
A few key terms within OWASP’s threat modeling terminology are worth calling out specifically:
Impact: Cloud credential theft from an endpoint can potentially have an outsized impact. Consider the following scenario: a cloud administrator operates from a locked down workstation, where they do not have administrative access. However, due to legitimate business requirements, this cloud administrator is granted high-privileged access to cloud resources. We can see the escalation path coming into focus in this scenario, whereby if this particular administrator had their cloud access token or browser cookies stolen, it would represent a rather impactful privilege escalation path.
Likelihood: The likelihood of cloud credential theft from endpoints is difficult to measure with accuracy, as many variables are involved, from threat actor motivation to the maturity of endpoint controls. However, if we view cloud credentials or sensitive browser cookies simply as files residing on the file system, we can infer that these credentials may be more attractive for a threat actor to target than other credential access avenues, such as dumping credentials from memory, an area that endpoint security solutions heavily monitor.
Controls: Controls are another aspect of our threat model with a large amount of variation. Several layers of controls may guard against cloud credential theft on the endpoint, particularly around the initial access phases. However, Windows hosts typically offer very little observability into file access unless it is specifically configured.
Trust Boundary: According to OWASP, a trust boundary is: “a location on the data flow diagram where data changes its level of trust.”
The diagram below shows that the trust boundary is the endpoint, as the data - cloud credentials in our case - are passed from the endpoint to the cloud platforms.
We can imagine a scenario where the “trust” level differs between an endpoint and a cloud platform, with administrators granted more rights to cloud resources versus the endpoint. In other words, a cloud administrator may have different privilege levels between the endpoint being used to access a cloud platform and the cloud platform itself. Therefore, credential theft in these scenarios may result in an escalation of privilege and crossing a trust boundary.
Now that we have some introductory threat modeling notes, we are better positioned to understand the nuances of attempting to articulate risks associated with threat actor behavior.
We also have a critical area to focus our detection engineering efforts on, as we know that the endpoint is our trust boundary. We also know that the cloud credentials that are often found on this endpoint are most likely not fully protected, with their theft potentially resulting in privilege escalation paths.
Let’s continue, and look at some of the data required to hunt and alert on this activity.
Recalling our threat model, we have seen that cloud credentials found on a Windows endpoint reside primarily as files.
As such, we will focus our attention on Event ID 4663 (An attempt was made to access an object.)
Enabling this event in a Windows environment is a two-step process:
The “Audit File System” setting needs to be enabled and applied to hosts on which cloud credentials reside.
A system access control list (SACL) entry must be applied to each file that we wish to audit.
Enabling the “Audit File System” setting can be accomplished by navigating to:
Computer Configuration → Policies → Windows Settings → Security Settings → Advanced Audit Policy Configuration → Audit Policies → Object Access: Audit File System: Enabled, Success and Failure
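If you want to test this on a single lab host outside of group policy, the same subcategory can be toggled locally with the built-in auditpol utility. A minimal sketch, run from an elevated PowerShell prompt:

# Enable the Object Access > File System subcategory locally (lab/test use; group policy is preferred at scale)
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# Verify the current setting
auditpol /get /subcategory:"File System"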
Once the “Audit File System” setting is enabled, we have to configure SACL auditing on each file we want to monitor.
Performing these steps manually per file would be a tedious process. Thankfully, researcher Roberto Rodriguez has done the heavy lifting for us and has shared a PowerShell script capable of setting these SACL entries in an automated fashion.
You can find the script here.
To further automate the setting of SACL entries on cloud credentials and keys specifically, the Sumo Logic Threat Research team is providing a wrapper script for Set-AuditRule, which attempts to locate cloud keys, credentials and tokens and set SACL entries on them. Additionally, the script also contains basic logic for checking whether a SACL already exists on a certain file.
The script makes all efforts to resolve paths dynamically. However, the locations of various cloud credentials can potentially change depending on the installation options used when installing tools like the AWS CLI, Kubectl or Azure PowerShell.
Please note that this script is provided as is and is not officially supported by the Sumo Logic team. Feel free to edit and use the script to suit specific environmental requirements.
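To illustrate the core of what such a script does, here is a minimal sketch that places a read-audit SACL entry on a single credential file. The path is only an example and the wrapper script linked above remains the recommended approach:

# Minimal sketch: add a ReadData audit (SACL) entry for "Everyone" to one credential file.
# The path below is an example; adjust for your environment. Writing SACLs requires an elevated session.
$credFile = "$env:USERPROFILE\.aws\credentials"

if (Test-Path $credFile) {
    $acl = Get-Acl -Path $credFile -Audit
    # Audit successful read attempts by any principal
    $rule = New-Object -TypeName System.Security.AccessControl.FileSystemAuditRule -ArgumentList 'Everyone', 'ReadData', 'Success'
    # Only add the entry if an equivalent audit rule is not already present
    $exists = $acl.Audit | Where-Object { $_.IdentityReference -eq 'Everyone' -and ($_.FileSystemRights -band [System.Security.AccessControl.FileSystemRights]::ReadData) }
    if (-not $exists) {
        $acl.AddAuditRule($rule)
        Set-Acl -Path $credFile -AclObject $acl
    }
}

Running something like this against each credential path, as the wrapper script does, is what produces the SACL entries that drive the 4663 events used throughout the rest of this post.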
After running our wrapper script, we see the following output:
In this case:
An Azure PowerShell token was not found, so no SACL was set
Two Azure CLI Tokens were found which had SACLs applied already
AWS CLI, Gcloud, and Kubeconfig credential files were located, with SACL auditing applied
Let’s take a step back to untangle some potentially confusing terminology around various Azure and Azure Active Directory tokens.
In the context of this blog, we are examining potentially malicious access to an Azure Access Token, not an Azure Primary Refresh Token (PRT).
PRT tokens can bypass conditional access policies and other controls if the target endpoint is joined to Azure Active Directory.
For more information regarding Azure PRT tokens, check out the following blogs and posts:
A post by Dirk-jan Mollema on abusing Azure Primary Refresh Tokens
A post by Thomas Naunheim on abusing and replaying Azure AD refresh tokens on macOS
A post by InverseCos on Skeleton Keys and Pass-Through authentication
In contrast to PRT tokens, Azure Access Tokens are used primarily by those who wish to perform some kind of Azure administration through CLI tools, such as Azure PowerShell or Azure CLI.
A way of thinking about this distinction is that Azure PRT tokens are primarily used to access various Azure resources, whereas Azure access tokens are used by Azure PowerShell or Azure CLI to programmatically manage Azure resources.
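As a quick illustration of the kind of token we are talking about, the standard Azure CLI commands below (shown here for context only) cache and return exactly the sort of access token that ends up on disk under the user's .azure directory:

# Standard Azure CLI usage, shown for context: logging in caches tokens under the
# user's .azure directory, and get-access-token returns a bearer token for ARM requests.
az login
az account get-access-token --output json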
As we have observed, various cloud service credential material can be found on an endpoint's file system.
Likewise, you can also find cloud credential material on various file shares.
Similar to file system auditing, file share auditing also requires multiple group policy settings to be configured.
The first step is enabling the “Audit File Share” and “Audit Detailed File Share” settings.
You can find these in:
Computer Configuration → Policies → Windows Settings → Security Settings → Advanced Audit Policy Configuration → Audit Policies → Object Access: Audit File Share: Enabled (Success and Failure)
Computer Configuration → Policies → Windows Settings → Security Settings → Advanced Audit Policy Configuration → Audit Policies → Object Access: Audit Detailed File Share: Enabled (Success and Failure)
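As with file system auditing, these two subcategories can also be enabled locally with auditpol on a test host. A minimal sketch:

# Enable both share-related subcategories locally (lab/test use; group policy is preferred at scale)
auditpol /set /subcategory:"File Share" /success:enable /failure:enable
auditpol /set /subcategory:"Detailed File Share" /success:enable /failure:enable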
Enabling these two settings will generate a few new Event IDs, with two of particular interest for our use cases:
5145: Detailed File Share
5140: File Share
Event IDs 5145 and 5140 will give us insight into what user accessed what share on our network. However, we still need to apply the SACL auditing to the individual files residing in our file shares for full visibility at the file level.
There may be some confusion regarding which host generates the relevant event IDs: these will be generated on the system on which the file share and SACL-audited file is located, not the system which accessed these file shares or files.
Having covered the telemetry required for monitoring files on both file shares and endpoints, let us dive into how we can baseline this activity.
Now that we have the necessary configurations to enable SACL auditing, in addition to this auditing being enabled on the files that we want to monitor, we can begin baselining this activity.
Let’s start with a relatively simple query:
_index=sec_record_audit AND "Security-4663"
| count(file_path) BY baseImage
To break this down, we are:
Looking for Event ID 4663 only
Counting the number of times a particular process accessed a particular object
In this case, an object can be a file, another process or an Active Directory Object.
Our objective for these baselining efforts is to see the types of data our 4663 events generate.
In other words, if we know what normal activity looks like on our endpoints, we will be better equipped to identify abnormal, suspicious or outright malicious activity.
To examine some potentially abnormal events, we have gone ahead and opened all the monitored credentials with Notepad, to generate “abnormal” 4663 events.
Let’s take a look at the results:
Perhaps not surprisingly, we can see that Windows Defender, Java and System processes are creating some noise.
After some whittling down and filtering of our events, we end up with the following, much more focused baseline:
We see the first process on our list, which is our “control” - opening our cloud credentials with Notepad to generate some abnormal activity.
We expect to see the bottom two processes, as many of the CLI tools used to manage Azure, GCP and AWS may be used from the PowerShell terminal. We will cover how to separate this legitimate PowerShell activity from potentially malicious activity later in this post.
Similar to SACL baselining, we also need to baseline our file share activity.
We can start with the following query:
_index=sec_record_audit AND "Security-5145"
| count(file_path) as File BY file_basename
After our results are returned, we can view them in a pie chart:
We can see a lot of noise for group policy objects, as well as some activity (on the right-hand side of the chart) relating to PowerShell Transcription.
After a few exclusions of noisier events, we rerun our query and come back with the following results:
At this point, we have visibility into file access for critical cloud credentials found on our endpoint and can gain similar visibility from network file shares and files located within these shares.
We have also completed some baselining of activity, which should make spotting anomalous or malicious usage patterns easier.
To sum up where we are in our cloud credential hunting journey, so far, we have:
Articulated a threat to our environment
Configured the required data sources
Baselined activity on our endpoints and file shares
Now, we can move towards hunting suspicious or malicious cloud credential access.
Since we have already undertaken baselining efforts, we should know what processes access sensitive cloud credentials on our endpoints and file shares.
Let us look at the following query:
_index=sec_record_audit
| where %"fields.EventID" matches "4663" and file_path matches /(\.azure\\msal_|\.aws\\credentials|\\gcloud\\credentials.db|\.kube\\config)/ and !(baseImage contains "\\Azure\\CLI2\\python.exe") and !(baseImage contains "C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense.exe") and !(baseImage contains "C:\\Windows\\System32\\SearchProtocolHost.exe") and !(baseImage contains "\\platform\\bundledpython\\python.exe") and !(baseImage contains "\\bin\\kubectl.exe") and !(baseImage matches "null")
| values(file_path) by baseImage
In this query, we are looking for which processes access sensitive credential files on our endpoint. We are excluding known-good processes such as Windows Search and legitimate tools like Python and Windows Defender.
Looking at our results, we see the following:
On row one we see a suspicious process named “beacon.exe” accessing our cloud credentials.
However, on row two we see a seemingly legitimate PowerShell process accessing the same set of credentials.
This dynamic highlights the limitations of using a singular event or data point to make a determination regarding potentially malicious activity on our endpoints.
Although the query above is an excellent start when hunting for malicious cloud credential use, we need to broaden our context a bit to gain more confidence in our determinations.
From our baselining efforts, we know that the PowerShell process accesses our sensitive cloud credentials through normal user or administrator activity. Given this, we need a way to differentiate a malicious PowerShell execution chain accessing our cloud credentials from a benign one.
Some of the factors that we can use, using Sysmon or EDR data, combined with our newly configured SACL auditing are:
Whether the PowerShell process was launched with a long and abnormal command line
Whether the PowerShell process made multiple outbound network connections
Whether the outbound network connection was to a public IP address
Whether the PowerShell process did all the above while accessing our sensitive cloud credentials
In query form, this looks like:
(_index=sec_record_network OR _index=sec_record_endpoint OR _index=sec_record_audit) (metadata_deviceEventId="Security-4663" OR metadata_deviceEventId="Security-4688" OR metadata_deviceEventId="Microsoft-Windows-Sysmon/Operational-1" OR metadata_deviceEventId="Microsoft-Windows-Sysmon/Operational-3") //Only looking for Event Codes 4663, 4688, Sysmon EID1 and Sysmon EID3
| where baseImage contains "powershell" //Only look for PowerShell processes
|timeslice 1h //1 hour time slice
// Initialize variables
| 0 as score
| "" as messageQualifiers
| "" as messageQualifiers1
| "" as messageQualifiers2
| "" as messageQualifiers3
| "" as messageQualifiers4
| if(%"fields.EventData.Initiated" matches "true",1,0) as connection_initiated //If Sysmon EID3 has Initiatied=true, count that as an initiated connection
| total connection_initiated as total_connections by %"fields.EventData.ProcessGuid" //Count the total number of connections PowerShell made, by ProcessGuid
| length(commandLine) as commandLineLength //Grab the length of the command line to be used in qualifiers later
| if (total_connections > 3, concat(messageQualifiers, "High Connection Count for PowerShell Process: ",total_connections, "\n# score: 300\n"),"") as messageQualifiers //Count the number of connections as high if there are more than 3 connections, this can be tweaked
| if (!isBlank(dstDevice_ip_asnOrg), concat(messageQualifiers1, "Powershell Connection to Public IP Address: ",dstDevice_ip,"\nASN: " ,dstDevice_ip_asnOrg,"\n# score: 300\n"),"") as messageQualifiers1 //If an ASN field is populated, consider that a public IP address
| if (connection_initiated =1, concat(messageQualifiers2, "PowerShell Outgoing Network Connection: ",dstDevice_ip,"\nPort: " ,dstPort,"\n# score: 300\n"),"") as messageQualifiers2 //If the connection was initiated by PowerShell, add this qualifier
| if (commandLineLength > 6000, concat(messageQualifiers3, "Long PowerShell CommandLine Found: ",commandLine,"\nParent Image: " ,baseImage,"\n# score: 3000\n"),"") as messageQualifiers3 //If the PowerShell command line is over 6000 characters, add this qualifier; the threshold can be tweaked
| if (file_path matches /(\.azure\\msal_|\.aws\\credentials|\\gcloud\\credentials.db|\.kube\\config)/,concat(messageQualifiers4, "Sensitive File Access Detected: ",file_path,"\nParent Image: " ,baseImage,"\n# score: 3000\n"),"") as messageQualifiers4 //Using 4663 events, if the file being accessed contains any of our sensitive cloud credential file paths
| concat(messageQualifiers,messageQualifiers1,messageQualifiers2,messageQualifiers3,messageQualifiers4) as q //Concatenate all the qualifiers together
| parse regex field=q "score:\s(?<score>-?\d+)" multi //Grab the score from the qualifiers
| where !isEmpty(q) //Only return results if there is a qualifier of some kind
| values(q) as qualifiers,sum(score) as score by _sourcehost,_timeslice //Return our full qualifiers and sum the score by host and 1 hour time slice
| where score > 40000 //Only return results if the score is over 40000, this value can be tweaked
To break this query down, we are:
Looking at Event IDs 4663 (SACL Auditing), 4688/Sysmon EID1 (Process Creations), Sysmon EID3 (Network Connection)
Looking at the PowerShell process only
Time slicing our data in one hour chunks
Checking to see if a particular connection was inbound or outbound
Counting the total number of connections a particular PowerShell process made
Setting qualifiers if:
The total number of connections PowerShell made was greater than three (3)
An ASN is present for the destination IP, which we treat as a connection to a public IP address
A connection was outbound
The command line used for the PowerShell process is abnormally long
The PowerShell process accessed our sensitive cloud credentials
And looking at our results:
We can see our abnormally long and malicious command line (which was snipped for readability), as well as the network information associated with this particular PowerShell execution. We can also see the files that this PowerShell process accessed.
This additional context assists us in determining whether cloud credential access was normal behavior or part of a malicious execution chain.
We can also utilize Sumo Logic’s Chain rule capability to build rule logic that looks for:
A long PowerShell command line
PowerShell establishing an outbound network connection
PowerShell accessing our sensitive cloud credentials
And looking at the results, we see an attack chain beginning to come into focus:
As we have highlighted, potentially sensitive credentials which grant access to cloud resources can be found as files on our endpoints. Another area where such credentials can exist is in browser cookies.
Tools like WhiteChocolateMacademiaNut by Justin Bui can connect to a Chromium browser’s remote debugging port in order to dump cookies or enumerate various aspects of the browser, like installed extensions and open tabs.
For this tool to function, Chrome or Edge needs to be started with a remote debugging port enabled.
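For context, the kind of process launch this implies looks something like the following hypothetical attacker-style relaunch of the browser; the port number is arbitrary:

# Hypothetical example of an attacker-controlled relaunch; --remote-debugging-port exposes
# the DevTools protocol locally on the chosen port. Fully qualify the browser path if needed.
Start-Process "msedge.exe" -ArgumentList "--remote-debugging-port=9222"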
We can hunt for this activity on our endpoints with the following query:
_index=sec_record_endpoint
| where toLowerCase(commandLine) contains "--remote-debugging-port" and toLowerCase(baseImage) matches /(chrome.exe|msedge.exe)/
| count by device_hostname,parentBaseImage,baseImage,commandLine
| order by _count DESC
And looking at the results, we can see our Chrome and Edge processes started with a remote debugging port enabled.
If Sysmon is available in the environment, or if your EDR can be instrumented with custom rules, another approach we can take is looking for incoming network connections to Chrome or Edge processes.
The following Sysmon rule snippet will capture incoming connections to Edge or Chrome over loopback IP addresses. Please note that an attacker can bypass the loopback portion of this rule if they use tunneling techniques:
<Rule name="Chromium Incoming Network Connect" groupRelation="and">
<Image condition="contains any">msedge;chrome</Image>
<Initiated condition="is">false</Initiated>
<SourceIp condition="contains any">127.0.0.1;0:0:0:0:0:0:0:1</SourceIp>
</Rule>
After gaining the relevant telemetry, we can query for this data with the following:
_index=sec_record_network and metadata_deviceEventId="Microsoft-Windows-Sysmon/Operational-3"
| where toLowerCase(baseImage) matches /(chrome.exe|msedge.exe)/ and (srcDevice_ip == "127.0.0.1" or srcDevice_ip == "0:0:0:0:0:0:0:1") and %"fields.EventData.Initiated" == "false"
| values(baseImage) as Process,values(user_username) as User,values(srcDevice_ip) as %"Source IP", values(dstPort) as %"Destination Port" by %"fields.EventData.ProcessGuid"
And looking at our results, we can see the following:
Apart from cookie theft, system enumeration and situational awareness tools like SeatBelt often enumerate browser bookmarks, history, installed extensions and other interesting browser artifacts.
We can detect this enumeration activity by setting SACL entries on sensitive browser files.
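The same SACL pattern shown earlier can be applied to browser artifacts. A minimal sketch, assuming default Chrome and Edge profile locations:

# Minimal sketch: audit successful reads of the default Chrome and Edge History files.
# Paths assume default profiles; writing SACLs requires an elevated session.
$historyFiles = @(
    "$env:LOCALAPPDATA\Google\Chrome\User Data\Default\History",
    "$env:LOCALAPPDATA\Microsoft\Edge\User Data\Default\History"
)
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAuditRule -ArgumentList 'Everyone', 'ReadData', 'Success'
foreach ($file in ($historyFiles | Where-Object { Test-Path $_ })) {
    $acl = Get-Acl -Path $file -Audit
    $acl.AddAuditRule($rule)
    Set-Acl -Path $file -AclObject $acl
}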
Let’s look at the following query, which looks at what processes are accessing Edge or Chrome’s history:
_index=sec_record_audit and metadata_deviceEventId="Security-4663"
| where (file_path matches /(Microsoft\\Edge\\User.Data\\Default\\History)/ or file_path matches /(Google\\Chrome\\User.Data\\Default\\History)/) and !(baseImage contains "msedge.exe") and !(baseImage contains "explorer.exe") and !(baseImage contains "chrome.exe")
| values(file_path) by baseImage
And looking at our results, we see our fictitious “beacon.exe” process accessing the history file of our Edge browser:
Consider the following scenario: a cloud Administrator stores SSH keys on a file share. These SSH keys grant access to a virtual machine hosted in the cloud. While the SSH keys grant administrative access to a critical system in a cloud environment, the permissions on the file share on which the SSH keys are stored are not locked down and are open to various domain users.
A threat actor compromises a machine of a user with access to the file share, but not to the organization’s cloud environment. The threat actor begins to crawl the various file shares, looking for sensitive files.
The threat actor locates the SSH keys saved to an unprotected file share, and proceeds to utilize these SSH keys to get a foothold into the organization’s cloud environment.
You can use tools like Snaffler by l0ss and sh3r4_hax to find these sensitive files:
We can take several hunting approaches to locate this activity on our network.
Let’s start with a basic example, looking for a larger than normal amount of failed requests to file shares.
This scenario assumes that the threat actor is scanning a large number of, or all of, the file shares in the environment.
The account utilized to scan these file shares may not have the necessary permissions for every file share on the network, and this scanning activity may result in many audit failures.
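If you want to reproduce this pattern safely in a lab to validate the query below, simply probing shares that a test account cannot access should generate the corresponding 5140 failure events. A rough sketch, with hypothetical server and share names:

# Rough lab sketch: each denied share access should surface as an Event ID 5140 Audit Failure
# on the file server (assuming "Audit File Share" failure auditing is enabled).
$servers = @('FILESRV01', 'FILESRV02')          # hypothetical lab file servers
$shares  = @('C$', 'ADMIN$', 'IT', 'Backups')   # hypothetical share names
foreach ($server in $servers) {
    foreach ($share in $shares) {
        # Test-Path quietly attempts to reach the share; denied attempts produce the failure telemetry
        Test-Path "\\$server\$share" | Out-Null
    }
}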
Let’s take a look at the following query, which attempts to capture this activity:
(_index=sec_record_audit AND "Security-5140") AND !"Audit Success"
| count (%"fields.EventData.ShareName") as %"Failure Count", values (%"fields.EventData.ShareName") as %"Share Names" by %"fields.EventData.IpAddress",%"fields.EventData.SubjectUserName"
| if (%"Failure Count" > 30,"1","0") as high_share_failure_count
Breaking this query down, we are:
Looking for Event ID 5140 in the normalized index
Filtering the results by Audit Failure only
Counting the number of failed share access requests by a certain IP address & user name
Using an IF statement to set a field called “high_share_failure_count” if the number of failed share access attempts is over a certain threshold
Looking at our results, we see the following:
In this example, the user “lowpriv”, coming from the IP address “10.0.1.25”, failed to access forty (40) shares - mostly system shares - within the time window we specified. We also see our “high_share_failure_count” field flipped to 1.
With this query, we now know that someone in our network is scanning file shares and does not have access to some shares in the environment.
This is great visibility to have, but the above hunt lacks contextualization.
For example, we don’t know if sensitive files were accessed and we do not have a good way of excluding legitimate activity. Various scanners exist on corporate networks which constantly reach out to computers on the domain for their tasks, often mimicking “scanning” or “brute force” behavior.
With this in mind, let’s add some color to our query and add some additional qualifiers, such as sensitive file access, using our SACL auditing events outlined above.
Looking at the following query:
_index=sec_record_audit AND ("Security-5140" OR "Security-4663")
| where !(%"fields.task" matches "*12802*") //Kernel Task, logs on LSASS access, not relevant to this search
| where !(file_path matches "*\\REGISTRY\\MACHINE*") //Registry SACLs, not relevant to this search
| where !(file_path matches "*\\*\\SYSVOL*") //SYSVOL exclusions for both 4663 and 5140
| where !(resource matches "\\\\*\\SYSVOL*") //SYSVOL exclusions for both 4663 and 5140
// This value is our time horizon window, this value can be tweaked depending on network size, number of shares etc
| timeslice 1h
// Initialize variables
| 0 as score
| "" as messageQualifiers
| "" as messageQualifiers1
| if(%"fields.keywords" matches "*Audit Failure*", 1,0) as share_failure //If the Audit Keyword is "Failure", set share_failure to "1"
| if(%"fields.keywords" matches "*Audit Success*", 1,0) as share_success //If the Audit Keyword is "Success", set share_success to "1"
//Add the total number of failed share access events by the IP address
| total share_failure as share_access_fail_count by %"fields.EventData.IpAddress"
//Add the total number of successful share access events by the IP address
| total share_success as share_access_success_count by %"fields.EventData.IpAddress"
// If the number of failed shares is higher than 20, consider this "high" - this value can be tweaked (Using 5140 Events)
| if (share_access_fail_count > 20, concat(messageQualifiers, "High Share Fail Count: ",share_access_fail_count,"\nUsername: " ,user_username,"\nSource IP: ",%"fields.EventData.IpAddress","\n# score: 300"),"") as messageQualifiers
// If the id_rsa file is accessed, consider this sensitive file access - this value can be tweaked (Using 4663 Events)
| file_path matches "*id_rsa*" as sensitiveFileAccess
| if (sensitiveFileAccess, concat(messageQualifiers, "Sensitive File Access: ",file_path,"\nUsername: " ,user_username,"\n# score: 300"),"") as messageQualifiers1
// Concatenate our two qualifiers together into one field named "q"
| concat(messageQualifiers,messageQualifiers1) as q
// Parse out the score from our qualifiers, so that we can sum it up later - note that the score is accumulated per event, not per groupings of events, so the score will be very high
| parse regex field=q "score:\s(?<score>-?\d+)" multi
| where !isEmpty(q) //Only return results if there is a qualifier of some kind
// Present all the qualifier values as a field called "qualifiers" and sum all the scores as a field called "score" by the timeslice
| values(q) as qualifiers,sum(score) as score by _timeslice
// Limit the query further by the score amount, this value can be tweaked based on baselining
| where score > 3000
To break this query down, we:
Look at event codes 4663 and 5140
Exclude some noisy events
Set a time window of one hour
Tabulate the number of failed and successful share access events per IP address
Add logic to determine if the number of failed share access attempts is considered “high”
Add logic to determine if a sensitive file has been accessed
Add a score if the number of failed share access events was high
Add a score if a sensitive file was accessed
Sum up the score and present the data
Looking at the results of our query, we can see the following:
We notice the machine account “WIN11-TB$” failing to access some shares on rows one and two, which triggered our qualifiers.
On rows three and four, we also see the machine account and a user account triggering our qualifiers. After some investigation, we determine that this machine and the “lowpriv” account are used for legitimate scanning activities on the network. How do we separate this legitimate scanning activity from potentially malicious share scanning activity?
The fictitious scenario outlined above lends itself well to qualifier or score-based queries and hunting approaches.
We can see our “legitimate” activity taking place on rows 1-4 with a score of about 4,800 - however, on row five, we see our score jump due to the “lowpriv” account accessing a sensitive file that is found on our network shares.
We can then add a “where” statement to our query to only return results if the score is higher than a certain threshold that we determine.
Hunting in file share and file access Windows events is tricky. Several factors work against us, including:
The interplay between NTFS and file share permissions and their disparate set of telemetry
The general “background noise” of various network and endpoint scanning appliances
Lack of visibility and intelligence regarding where sensitive files are located on shares
Overly-permissive file share access
Verbose and voluminous SACL and file share events
However, this visibility and monitoring gap can be closed with a tactical focus and the right hunting approach built to match the intricacies and particularities of your modern corporate enterprise network.
Cloud credentials on the filesystem, in network shares and in browser cookies represent an attractive target for a threat actor. Very little visibility exists into file and network share activity on Windows systems by default. Likewise, the browser is a critical yet poorly monitored component in today’s complex hybrid environments.
This blog post has highlighted this potentially impactful threat vector. We hope that organizations that access cloud resources from on-premises environments will take a critical look at this potentially underappreciated attack vector.
The Sumo Logic Threat Labs team has developed and deployed the following rules for Cloud SIEM Enterprise customers.
Rule ID | Rule Name
THRESHOLD-S00059 | Network Share Scan
MATCH-S00820 | Cloud Credential File Accessed
MATCH-S00819 | Chromium Process Started With Debugging Port
MATCH-S00821 | Suspicious Chromium Browser History Access
In terms of MITRE ATT&CK Mappings, the following MITRE IDs make up the bulk of the techniques which were covered in this blog:
Name | ID
Steal Web Session Cookie | T1539
Credentials from Password Stores: Credentials from Web Browsers | T1555.003
Forge Web Credentials: Web Cookies | T1606.001
Steal Application Access Token | T1528
Unsecured Credentials: Credentials In Files | T1552.001
Unsecured Credentials: Private Keys | T1552.004