The cyber-security landscape is changing rapidly as organizations increasingly adopt Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). This shift was inevitable, and the recent social distancing rules, which left large numbers of employees working from home and other remote locations, have only accelerated it.
However, the transition was anything but gradual, and many businesses are unprepared for the security challenges that switching to cloud services and platforms can bring.
According to a recent report by Gartner, there are five key areas that organizations need to focus on in order to keep their sensitive data secure when using IaaS and PaaS:
- Implement Cloud-Native Controls to Maintain Least Privilege Access to Sensitive Data
- Encrypt All Data at Rest Using Customer-Controlled Keys
- Use Zero Trust Network Access and Micro-segmentation to Reduce Risk and Contain Breaches
- Scan Continuously for Secure and Compliant Cloud Configuration Using CSPM Tools
- Log and Analyze Everything Using Enterprise SIEM and Cloud-Native Threat Detection Tools
Implement Cloud-Native Controls to Maintain Least Privilege Access to Sensitive Data
It is the customer’s responsibility to set up appropriate access controls to restrict access to sensitive data.
When accessing sensitive data in the cloud, multi-factor authentication (MFA) and time-limited access tokens for API calls should always be used.
Consider using hardware MFA tokens, and remember to store them in a safe location, with a robust policy in place to prevent them from falling into the wrong hands.
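As an illustration, the time-based one-time passwords produced by most MFA apps and hardware tokens can be verified with nothing more than a shared secret and the current time. The sketch below implements the RFC 6238 TOTP algorithm with Python's standard library; the one-step verification window is an illustrative choice, not a mandated value.

```python
import base64
import hmac
import struct

def totp(secret_b32: str, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", for_time // step)        # moving factor = time step count
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, now: int, window: int = 1) -> bool:
    """Accept codes from the current step +/- `window` steps to tolerate clock skew."""
    return any(hmac.compare_digest(totp(secret_b32, now + off * 30), submitted)
               for off in range(-window, window + 1))
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.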
Regardless of whether you use a role-based access control (RBAC) or attribute-based access control (ABAC) model, permissions should be assigned according to the principle of least privilege (PoLP), so that users are granted only the privileges they need to carry out their role.
These days, a granular RBAC model is the preferred model due to its simplicity and flexibility.
Roles should be created for specific job functions, and where possible, users should be assigned to them on a time-limited basis, so that privileges are revoked when they are no longer relevant. The same applies to any hardware device or service that accesses sensitive data via API endpoints.
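The time-limited role assignment described above can be sketched as follows. The role catalogue and permission names are hypothetical; a real deployment would enforce this through the provider's IAM service rather than an in-process check.

```python
from dataclasses import dataclass

# Hypothetical role catalogue: each role carries only the minimal
# permissions needed for one job function (least privilege).
ROLES = {
    "billing-reader": {"invoices:read"},
    "db-operator":    {"db:read", "db:backup"},
}

@dataclass
class Grant:
    user: str
    role: str
    expires_at: float          # epoch seconds; the grant is void after this

def effective_permissions(user: str, grants: list[Grant], now: float) -> set[str]:
    """Union of permissions from this user's unexpired role grants only."""
    perms: set[str] = set()
    for g in grants:
        if g.user == user and now < g.expires_at:
            perms |= ROLES.get(g.role, set())
    return perms
```

Because expired grants simply stop contributing permissions, revocation happens automatically when the time limit passes, with no clean-up step to forget.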
To limit the damage an attacker could cause by gaining access to an administrator account, it’s a good idea to set up multiple administrator accounts, each assigned to a specific role. Access controls need to be regularly reviewed and adjusted accordingly. This can be done using native tools, such as AWS Control Tower, AWS Access Advisor or Azure Advisor, or, if you prefer, a dedicated privileged access management (PAM) solution.
Either way, you will need a clear audit trail that logs all access requests, including the process by which approval was granted.
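One simple way to make such an audit trail tamper-evident is to chain each record to the hash of the previous one, so that any later edit invalidates the rest of the chain. This is a minimal sketch; the record fields are hypothetical.

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an access-request record, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)          # canonical serialization
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any tampered record breaks the chain from there on."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```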
Encrypt All Data at Rest Using Customer-Controlled Keys
Traditional on-premises architectures, with their own dedicated and trusted infrastructure, allow unencrypted data to be stored with minimal risk. However, on shared infrastructure that is theoretically reachable by the public, unencrypted data is far more vulnerable to attack. It is crucial that data is encrypted both at rest and in transit.
Keeping encryption consistent across all platforms and resources is not an easy task, as it requires continually rotating and revoking encryption keys. First, ensure that you have complete control over the encryption keys.
Make sure that you take advantage of the key management solutions offered by your service provider, such as AWS Key Management Service (KMS) and Azure Key Vault.
A good key management strategy should ensure that access to encryption keys is tightly controlled, and that there is a full audit trail of who has access to the keys, when, and for how long. Keys need to be periodically rotated to prevent persistent access to the encrypted data, were the keys to fall into the wrong hands.
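Services such as AWS KMS keep rotation cheap through envelope encryption: bulk data is encrypted under small data keys, which are in turn wrapped by a master key, so rotating the master key only means re-wrapping the data keys, never re-encrypting the data itself. The toy sketch below illustrates that idea only; the XOR "wrap" stands in for real AES key wrapping and must never be used as actual encryption.

```python
import hashlib
import secrets

def wrap(master: bytes, data_key: bytes) -> bytes:
    # Toy XOR "wrap" standing in for real key wrapping (e.g. AES-KW / KMS).
    pad = hashlib.sha256(master).digest()[: len(data_key)]
    return bytes(a ^ b for a, b in zip(data_key, pad))

unwrap = wrap  # XOR is its own inverse

def rotate_master(old_master: bytes, new_master: bytes,
                  wrapped_keys: list[bytes]) -> list[bytes]:
    """Re-wrap every data key under the new master key. The bulk data,
    encrypted under the (unchanged) data keys, never needs re-encrypting."""
    return [wrap(new_master, unwrap(old_master, w)) for w in wrapped_keys]
```

The design point is the asymmetry: rotating a master key touches a handful of 32-byte data keys instead of terabytes of stored data.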
Use Zero Trust Network Access and Micro-segmentation to Reduce Risk and Contain Breaches
Most data breaches involve the misuse of privileged credentials. Yet, despite this, most networks automatically trust privileged user accounts simply because of their status. The zero-trust security model inverts this principle: “never trust, always verify”.
Micro-segmentation, on the other hand, is where networks are broken up into secured isolated zones.
Used together, zero-trust and micro-segmentation can help to eliminate server to server threats and reduce the attack surface of the network.
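A micro-segmentation policy ultimately reduces to a default-deny check on every flow. A minimal sketch, assuming a hypothetical zone map and allowlist:

```python
# Illustrative zone map and allowlist; host and zone names are hypothetical.
ZONES = {"web-01": "dmz", "app-01": "app", "db-01": "data"}
ALLOWED_FLOWS = {("dmz", "app"), ("app", "data")}   # one-directional flows

def flow_permitted(src: str, dst: str) -> bool:
    """Default-deny: traffic crosses zone boundaries only via allowlisted flows."""
    s, d = ZONES.get(src), ZONES.get(dst)
    if s is None or d is None:
        return False                    # unknown hosts are never trusted
    return s == d or (s, d) in ALLOWED_FLOWS
```

Note that the allowlist is directional: the database zone can receive from the app zone, but nothing in the data zone may initiate connections outward, which is what contains a compromised database server.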
With more and more companies adopting bring your own device (BYOD), or at least allowing employees to work remotely, network activity is being pushed to the edges, making it much harder to keep track of the growing number of devices, applications and endpoints used to access sensitive data.
As such, many organizations are enforcing the use of virtual private networks (VPNs) to provide encrypted communication channels between the endpoints and the server containing the sensitive data.
Under a zero-trust model, the authentication process will take additional factors into consideration, such as the location, type and security profile of the requesting device. If an access request seems suspicious in some way, or if the user is attempting to access resources which they shouldn’t be accessing, an event is triggered, which may include sending an alert to the administrator, or asking the user to re-authenticate.
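Such contextual, risk-based decisions can be sketched as a simple scoring function. The signals, weights and thresholds below are purely illustrative; a real zero-trust broker would weigh far richer telemetry.

```python
def access_decision(request: dict) -> str:
    """Score a request on contextual signals and return 'allow',
    'step-up' (re-authenticate) or 'deny'. Thresholds are illustrative."""
    score = 0
    if request.get("location") not in {"office", "vpn"}:
        score += 2                      # unusual network location
    if not request.get("device_managed", False):
        score += 2                      # unmanaged / BYOD device
    if request.get("resource_sensitivity", "low") == "high":
        score += 1                      # sensitive target raises the bar
    if score >= 4:
        return "deny"
    if score >= 2:
        return "step-up"
    return "allow"
```

The middle outcome is what distinguishes zero trust from a simple firewall rule: a moderately risky request is not blocked outright but challenged to re-authenticate.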
Scan Continuously for Secure and Compliant Cloud Configuration Using CSPM Tools
Use cloud security posture management (CSPM) tools to continuously monitor for misconfigured cloud infrastructure, such as storage containers that are exposed to the public – a common problem with Amazon S3 buckets.
Upon detection, some form of response needs to follow. This might mean alerting an administrator to launch an investigation, and/or executing a custom script to revoke access, delete a security group, or take some other action that prevents the attack from spreading while remediation is under way.
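The detection step can be sketched as a rule check over resource configurations. The field names below are hypothetical stand-ins for what a real CSPM tool would read from the provider's API:

```python
def scan_buckets(buckets: list[dict]) -> list[str]:
    """Flag storage containers with risky settings: public exposure
    and missing at-rest encryption. Field names are illustrative."""
    findings = []
    for b in buckets:
        if b.get("public_access"):
            findings.append(f"{b['name']}: publicly accessible")
        if not b.get("encrypted", False):
            findings.append(f"{b['name']}: no at-rest encryption")
    return findings
```

In practice each finding would feed the response step described above, e.g. raising an alert or triggering a remediation script keyed on the finding type.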
Log and Analyze Everything Using Enterprise SIEM and Cloud-Native Threat Detection Tools
Most cloud platforms provide tools which allow you to scrutinize the event logs for suspicious activity. Some of the more sophisticated change auditing solutions use machine learning algorithms to learn typical patterns of behavior, which can be used as a baseline for detecting potentially malicious events.
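A crude stand-in for such baselining is a z-score check of today's event count against historical counts; real solutions model far richer behavior, but the principle of "deviation from a learned baseline" is the same.

```python
import statistics

def is_anomalous(history: list[int], today: int, z: float = 3.0) -> bool:
    """Flag today's event count if it deviates more than `z` standard
    deviations from the historical baseline (threshold is illustrative)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid divide-by-zero on flat history
    return abs(today - mean) / stdev > z
```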
If you need more visibility than the native audit solutions provide, consider a third-party SIEM/UBA solution. One of the main advantages of a dedicated third-party auditing solution is that it can aggregate and correlate event data from multiple cloud platforms (and your on-premises infrastructure), letting you review all changes via a single intuitive console.
Third-party solutions tend to provide other features that are not available with the native tools, such as automated inactive user account management, automated password rotation and advanced threshold alerting. Additionally, they typically provide more sophisticated, customizable reports, often with templates covering the data protection laws relevant to each industry.
Essentially, you need to know exactly who is accessing what data, when, how, and for how long. This must also include API calls, and access requests made by service accounts.
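Answering the “who is accessing what” question from raw logs is essentially an aggregation over principals, including service accounts and API callers. A minimal sketch, with hypothetical log field names:

```python
from collections import defaultdict

def access_summary(events: list[dict]) -> dict[str, set[str]]:
    """Summarize which principals (users, service accounts, API callers)
    touched which resources. 'principal' and 'resource' are assumed log keys."""
    summary: dict[str, set[str]] = defaultdict(set)
    for e in events:
        summary[e["principal"]].add(e["resource"])
    return dict(summary)
```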