
InRhythm

Your partners in accelerated digital transformation


Cloud Engineering

Jan 04 2023

Creating An Effective Proxy Using Node And Express

Overview

Engineers are often faced with the challenge of pulling together multi-website or multi-application projects without full cross-platform permissions. A proxy built with Node and Express can pull two websites or applications together into a single, cohesive experience.

This situation comes up more often than one would think. Whether it is a question of host permissions or compatibility, it can throw up a number of roadblocks. There are several reasons a developer might not be able to run a site or app in their local environment; perhaps it is complex to set up or requires permissions they do not have. Regardless of the reason, Node and Express present a practical way to solve the problem.


In Matt Billard’s Lightning Talk session, we will be uncovering the primary strategies for Creating An Effective Proxy Using Node And Express:

  • Overview
  • The Architecture
  • How It Works
  • “Gotchas” To Avoid
  • Live Demonstrations
  • Closing Thoughts

The Architecture


The browser makes a request to the Node/Express proxy server, where the following three scenarios come into play:

  1. If the user requested an HTML page, we need to combine the page from website 1 and 2. The proxy first asks website 1 and then website 2 for its HTML. It then modifies the two HTML files, combines them, and returns the result to the browser. (Details on how this works below.)
  2. The HTML page will then request the CSS, JavaScript, and other assets it requires. These requests will again go through the proxy which will pass on the requests. If website 1 has the asset, great, the proxy will return it to the browser. 
  3. If website 1 does not have the asset, the proxy will then ask website 2 and return it to the browser. (A minimal sketch of this fallback logic follows below.)
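
As a rough illustration of the fallback behavior in steps 2 and 3 (a minimal sketch, not Billard's actual Code Collider code – the origins and port below are placeholders, and Node 18+'s built-in fetch is assumed; the HTML-combining case from step 1 is covered in the next section):

    const express = require('express');

    // Placeholder origins for the two sites being combined.
    const SITE_1 = 'https://www.example-site-one.com';
    const SITE_2 = 'http://localhost:3000';

    const app = express();

    app.use(async (req, res) => {
      // Ask website 1 for the requested resource first...
      let upstream = await fetch(SITE_1 + req.url);

      // ...and fall back to website 2 if website 1 does not have it.
      if (!upstream.ok) {
        upstream = await fetch(SITE_2 + req.url);
      }

      res.status(upstream.status);
      res.set('content-type', upstream.headers.get('content-type') || 'application/octet-stream');
      res.send(Buffer.from(await upstream.arrayBuffer()));
    });

    app.listen(8080, () => console.log('Proxy listening on http://localhost:8080'));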

In the example from the talk, the InRhythm.com website is used as the target into which an engineer injects some local code (in this case a basic Create React App project); the final result is the two websites living together in the same browser window.


How It Works

As mentioned above, website 1’s and website 2’s HTML are combined. This involves a few steps. A webpage can’t have two doctype, html, head, or body tags, so some regex is required to strip those from website 2’s HTML. Once website 2’s HTML is ready, it can be injected just before website 1’s closing </body> tag.
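
As a rough sketch of this combining step (not the code from the talk – the function below is illustrative, and assumes the two pages' HTML has already been fetched into the strings site1Html and site2Html):

    // Combine the two pages into a single HTML document (illustrative sketch).
    function combineHtml(site1Html, site2Html) {
      // A page can only have one doctype, html, head, and body tag,
      // so strip those out of website 2's markup with some regex.
      const site2Fragment = site2Html
        .replace(/<!doctype[^>]*>/gi, '')
        .replace(/<\/?(html|head|body)(\s[^>]*)?>/gi, '');

      // Rewrite absolute links (https://www.something.com/path) to relative paths
      // so clicks stay on the proxy instead of leaving for the target site.
      const site1Rewritten = site1Html.replace(/https?:\/\/[^\/"'\s]+\//gi, '/');

      // (Site-specific CSS overrides – clearing backgrounds, letting clicks pass
      // through – can be injected here as well; see the notes below.)

      // Inject website 2's markup just before website 1's closing </body> tag.
      return site1Rewritten.replace('</body>', site2Fragment + '</body>');
    }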


The modifications to website 1’s HTML accomplish a few things:

  1. Many websites have full ‘absolute URLs’ for their links. They look like this: https://www.inrhythm.com/who-we-are/. The problem is that if the user clicks on one of these, they’ll be taken away from our proxy and go to the target website. One can solve this by removing all of the www.something.com pieces while retaining the path after the slash.
  2. Injecting the CSS as discussed above removes backgrounds and allows clicks to pass through website 2 to website 1. (Keep in mind this will probably be slightly different depending on the two sites a coder is combining.)
  3. Website 2’s modified HTML is injected just before website 1’s closing </body> tag.

“Gotchas” To Avoid

It can take a fair amount of trial and error to develop a proxy to one’s exact specifications. Some of the most common issues a coder may find themselves troubleshooting are:

  • Websites usually compress or “Gzip” their content. Normally this is a great thing: it means less data is transferred and websites load more quickly. For a proxy, however, it becomes quite troublesome – an engineer can’t parse, manipulate, and modify HTML if it looks like gibberish. The solution is actually quite simple: as it turns out, there’s a header one can send with the request to ask the server not to Gzip anything.
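
For example, a minimal sketch of that request (assuming Node's built-in https module rather than the exact code from the talk) – setting accept-encoding to identity asks the server not to compress its response:

    const https = require('https');

    // Ask the upstream server for uncompressed HTML so it can be parsed and modified.
    function fetchUncompressed(hostname, path, onHtml) {
      const options = {
        hostname,
        path,
        headers: { 'accept-encoding': 'identity' }, // "please don't Gzip"
      };
      https.get(options, (res) => {
        let html = '';
        res.on('data', (chunk) => (html += chunk));
        res.on('end', () => onHtml(html));
      });
    }
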
  • When using a proxy, all requests are going to have the header “host” set to “localhost.” This is probably not a problem for most sites, but to the target server it doesn’t look like a very normal request, and indeed, some websites respond abnormally and return pages that look nothing like what was expected. The solution can be found in modifying the “host” header of the request.
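
A minimal sketch of that fix, using Node's https module so the outgoing host header can be set explicitly (the hostname below is a placeholder):

    const https = require('https');

    // Forward the browser's request, but overwrite the "host" header so the
    // target site sees its own hostname instead of "localhost".
    function requestFromTarget(req, onResponse) {
      const options = {
        hostname: 'www.inrhythm.com', // placeholder target
        path: req.url,
        method: req.method,
        headers: { ...req.headers, host: 'www.inrhythm.com' },
      };
      https.request(options, onResponse).end();
    }
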
  • Because the proxy modifies responses quite a bit, the original “content-length” header no longer matches what is actually sent, which can cause browser abnormalities. The solution is to delete the “content-length” header before the proxy sends the browser any final response. This stops the browser from truncating the response and throwing away all the hard work that went into customizing the proxy.
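
A minimal sketch of that cleanup step (an assumed helper, not the original code), applied just before the proxy responds:

    // Send the modified HTML back to the browser without the stale length header.
    function sendModified(res, upstreamHeaders, modifiedHtml) {
      const headers = { ...upstreamHeaders };
      delete headers['content-length']; // the length changed when the HTML was modified
      res.set(headers);
      res.send(modifiedHtml);
    }
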
  • When combining sites that use https, the proxy might complain that the SSL certificates don’t match what it’s expecting. Turns out it’s rather easy to relax this with the following code: 
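
For instance, a minimal sketch using Node's https module – suitable for local experimentation only, never for production traffic:

    const https = require('https');

    // Relax certificate checks so mismatched SSL certificates don't stop the proxy.
    const insecureAgent = new https.Agent({ rejectUnauthorized: false });

    // Pass the agent along with each https request to the target site, e.g.:
    // https.request({ hostname: 'www.inrhythm.com', path: '/', agent: insecureAgent }, onResponse);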

Live Demonstrations

Matt Billard has crafted an intuitive demonstration to help guide you through these principles in practice.


Be sure to follow Billard’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

The Node.js framework Express allows an engineer to create web servers and APIs with minimal setup. Using Express in a Node.js application to create an API proxy that requests data from another API and returns it to a consumer is a vital skill to add to one’s toolkit. Using Express middleware to optimize the API proxy allows a coder to raise the bar and improve performance when returning data from the underlying API.

To develop and learn from Billard’s signature “Code Collider” proxy, feel free to download the code directly from GitHub.

Happy coding!

To learn more about Creating An Effective Proxy Using Node And Express, along with some live test samples, and to experience Matt Billard’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Cloud Engineering, InRhythmU, Product Development, Software Engineering · Tagged: Code Collider, Code lounge, express, INRHYTHMU, learning and growth, Node, Node.js, product development, proxy

Jan 03 2023

Creating Robust Test Automation For Microservices

Overview


Any project that a software engineer joins will come in one of two forms: a greenfield or a legacy codebase. In the majority of cases, projects fall into the realm of legacy repositories. It is the engineer’s responsibility to navigate either type of project strategically: looking objectively at opportunities to improve the codebase, lowering the cognitive load for the engineering team, and advising on better design strategies.

But, chances are, there is a problem. Before architecture or design refactors are undertaken, it’s best to take a pulse on the health of the platform end to end (E2E). The reason: lurking in a new or existing platform is likely a common ailment of the modern microservices approach – the inability to test the platform E2E across microservices that are, by design, engineered by different teams over time.

Revitalizing Legacy Systems


One primary challenge faced by a number of software engineers is adaptive work on a greenfield platform that has fallen several months behind from a quality assurance perspective. It is no longer possible for QA to catch up, nor is it possible for QA to engineer and execute E2E tests that exercise common user journeys throughout the enterprise system.

To solve this conundrum, E2E data generation tools need to be created so that the QA team can keep up, building and testing every scenario and edge case.

There are three main requirements for an E2E account and data generation tool.

The tool should:

1) Create test accounts with mock data for each microservice

2) Link those accounts between upstream and downstream microservices

3) Provide easy-to-access APIs that are self-documenting

Using a tool like Swagger, QA can use the REST API description – i.e. the OpenAPI Specification (formerly the Swagger Specification) – to view the available endpoints and operations to create accounts, generate test data, authenticate, authorize, and “connect the microservices.”
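
As a rough sketch of what such a tool's API could look like (purely illustrative – the service names, URLs, and fields below are invented, not the actual tool), an Express endpoint can create a mock account in an upstream service, link it downstream, and return everything QA needs in one response:

    const express = require('express');
    const app = express();
    app.use(express.json());

    // POST /test-accounts – create a mock account and link it across microservices.
    app.post('/test-accounts', async (req, res) => {
      // 1) Create a test account with mock data in the upstream microservice.
      const account = await fetch('http://accounts-service.local/accounts', {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ name: 'QA Test User', mock: true }),
      }).then((r) => r.json());

      // 2) Link the new account to a downstream microservice.
      const profile = await fetch('http://profiles-service.local/profiles', {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ accountId: account.id }),
      }).then((r) => r.json());

      // 3) Return the linked records so an E2E test can use them immediately.
      res.json({ account, profile });
    });

    app.listen(4000, () => console.log('E2E data generator on http://localhost:4000'));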


Closing Thoughts

By creating tools for E2E testing, the QA team was able to eliminate the hassle of figuring out which upstream and downstream microservices needed to be called to ensure that the required accounts and data were available and set up properly for a successful test of every scenario – across the variety of data types, user permissions, and user information, and including the negative test cases. The QA team was able to catch up and write their entire suite of test scenarios, generating the matching accounts and data to satisfy those requirements. The net result of having built an E2E test generation tool was that automated tests could be produced exponentially faster and the tests themselves were more resilient to failure.

Even as the microservices pattern continues to gain traction, developing E2E testing tools that generate accounts and test data across an enterprise platform will likely remain a pain point.

There’s no better way to maintain a healthy system than to ensure accounts and data in the lower environments actually work and unblock testing end-to-end. 

Written by Kaela Coppinger · Categorized: Agile & Lean, Cloud Engineering, Java Engineering, Product Development, Software Engineering · Tagged: cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering, testing

Dec 20 2022

Configuration Automation Tools: Orchestrating Successful Deployment

Overview

In the modern technology field, buzzwords come and go. One day databases are discussed as the best new thing in the world of Agile development, only for the conversation to shift the next day to the importance of programming languages, frameworks, and methodologies.

But one unchanging aspect of this lifecycle is the people, who are an irreplaceable part of the creation, popularity, and demise of any given technology. The modern world calls for close-to-perfect execution, which individuals cannot always deliver on their own.

How does this call for flawless mechanisms affect the developers and creators who are asked to build perfect products?


Automation is the technology by which a process or procedure is performed with minimal human interference through the use of technological or mechanical devices. It is the technique of making a process or a system operate automatically. Automation crosses all functions within almost every industry from installation, maintenance, manufacturing, marketing, sales, medicine, design, procurement, management, etc. Automation has revolutionized those areas in which it has been introduced, and there is scarcely an aspect of modern life that has been unaffected by it.

Automation provides a number of high-level advantages to every aspect of practice, making it an important process to have a working knowledge of:

  • Overview
  • Removing Human Error
  • Steps To Deploy
  • No Hidden Knowledge
  • Popular Implementation Technology Options
  • Closing Thoughts

Removing Human Error


Automation, automation, more automation – and of course throw in some orchestration, deployment, and configuration management. Leaving the buzzwords behind, what this “new technology frontier” is really about is removing human error. This translates to removing the dependency on tribal knowledge when it comes to application and system administration duties.

Those duties are performed in a repetitive fashion and are usually consolidated into various custom scripts, leaving a lot of those scripted actions able to be boxed up and reused over and over again.

Steps To Deployment


The primary cornerstones of prepping an automated deployment for an individual server follow a near-identical framework:

  1. Download and Install the various languages and/or framework libraries the application uses
  2. Download, Install, and Configure the Web server that the application will use
  3. Download, Install, and Configure the Database that the application will use
  4. Test to see if all the steps are installed and configured correctly

Running application tests ensures that the deployment is working as expected. Testing is crucial to a successful deployment.

For example, something as simple as a typo can be highly catastrophic. Consider the case of the following command:

  • cd /var/etc/ansible; rm -rf *

but instead a developer forgot the cd command and, from the root directory, only ran

  • rm -rf *

In this case, the whole drive is at risk of being erased – which can and will make or break a product.

Taking the time to ensure commands execute correctly can determine the overall success of a system.

No Hidden Knowledge


Looking back on the steps to deploy an application to an environment, there are inevitably a number of small intermediary steps involved. A leader’s priority should be to surface each of these unique sub-steps and effectively bring all the engineers around them up to speed on the associated best practices.

The information should be a source of truth, maintained in a repository or database that is easy and intuitive to leverage.

Popular Implementation Technology Options

What does a source of truth entail? Can one not skip documenting the information and go straight to executing the steps on a given system? Or create scripts to reconfigure the application if there is ever a need to? Those questions have been asked many times, and the answers have been formulated into extensive and comprehensive build tools and frameworks.

These tools are used throughout the industry to solve the problem of orchestrated development, configuration automation, and management. 

Furthermore, DevOps tools such as Puppet, Chef, and Ansible are well-matured automation and orchestration tools. Each provides enough architectural flexibility to handle virtually any use case presented.

Puppet


Puppet was the first widely used Configuration Automation and Orchestration software, dating back to its initial release in 2005. Puppet uses the master and slave (agent) paradigm to control any number of machines. Ruby is the scripting language used for executing commands in a destination environment.

The “Puppet Agents” (the slaves) are modularized, distinct components deployed to a server. They can be used to build out the server (i.e. web server, database, application) in its destination environment. “Puppet Enterprise” (the master) comprises all the inner workings needed to manage, secure, and organize agents.

Puppet Documentation

  • https://puppet.com/docs/
  • http://pub.agrarix.net/OpenSource/Puppet/puppetmanual.pdf
  • https://www.rubydoc.info/gems/puppet/ 

Chef


Chef is somewhat similar to Puppet. The core language used within Chef’s abstract module components is Ruby. Chef has several layers of management for individual infrastructure automation needs. The Chef workstation is the primary area for managing the various Chef components. The Chef components consist of “cookbooks”, “recipes”, and “nodes”.

“Recipes” are collections of configurations for a given system, virtual, bare-metal, or cloud environment. Chef calls those different environments “nodes”. “Cookbooks” contain “recipes” and other configurations for application deployment and control mechanisms for the different Chef clients.

Chef Documentation

  • https://docs.chef.io/
  • https://www.linode.com/docs/applications/configuration-management/beginners-guide-chef/ 

Ansible


Ansible is the newest mainstream automation/configuration management tool on the market, and accordingly it uses more modern programming languages, configuration concepts, and tools. Python is the programming language used in this framework. One of the fastest up-and-coming template languages is YAML, which is programming-language agnostic and a superset of the ever-so-popular JSON. YAML is used within Ansible to describe an Ansible Playbook.

An Ansible Playbook contains the steps that need to be executed on a given system. Once the Playbook is in place, configuration or further manipulation of the host can be executed through the Ansible API, which is implemented in Python. There are several other components within Ansible, such as modules, plugins, and inventory.

Ansible Documentation

  • https://docs.ansible.com/ansible/2.5/dev_guide/
  • https://devdocs.io/ansible/
  • https://geekflare.com/ansible-basics/ 

Closing Thoughts


After covering a few of the configuration automation and orchestration tools on the market, one can see the vast amount of flexibility available for eliminating human error from repeatable steps. These frameworks promote reusable software within an organization – the most viable path forward – and the ability to scale an application development environment and its infrastructure is critical.

The learning curve may be steeper than with plain bash scripts, but the structure and integrity of a proven tool, along with its ease of maintenance, outweigh the learning curve.

Written by Kaela Coppinger · Categorized: Cloud Engineering, Code Lounge, DevOps, Java Engineering, Learning and Development, Software Engineering, Web Engineering · Tagged: automation, best practices, cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering

Sep 21 2022

How To Write A Great Test Case

Overview

A test case is exactly what it sounds like: a test scenario measuring functionality across a set of actions or conditions to verify the expected result. Test cases apply to any software application, can be executed as manual or automated tests, and can make use of test case management tools.

Most digital-first business leaders know the value of software testing. Some value high-quality software more than others and might demand more test coverage to ultimately satisfy customers. So, how do they achieve that goal?

They test more, and test more efficiently. That means writing test cases that cover a broad spectrum of software functionality. It also means writing test cases clearly and efficiently, as a poor test can prove more damaging than helpful.

A key thing to remember when it comes to writing test cases is that they are intended to test a basic variable or task such as whether or not a discount code applies to the right product on an e-commerce web page. This allows a software tester more flexibility in how to test code and features.

In Nathan Barrett’s Lightning Talk session, we will be breaking down the following topics:

  • What Is A Test Case?
  • What Makes A Good Test Case?
  • Live Demonstration
  • Closing Thoughts

What Is A Test Case?

At a high level, to “test” means to establish the quality, performance, or reliability of a software application. A test case is a repeatable series of specific actions designed to either verify success or provoke failure in a given product, system, or process. 

A test case gives detailed information about the testing strategy, the testing process, preconditions, and expected output. Test cases are executed during the testing process to check whether the software application performs the task for which it was developed. A passed test case functions like a receipt verifying the correct functionality of the subject of the test.

To write a test case, we must have the requirements from which to derive the inputs, and the test scenarios must be written so that we do not miss any features in testing. We should also have a test case template to maintain uniformity, so that every test engineer follows the same approach when preparing the test document.

Test cases serve as final verification of functionality before releasing it to the direct product users. 

What Makes A Good Test Case?

Writing test cases varies depending on what the test case is measuring or testing. This is also a situation where sharing test assets across dev and test teams can accelerate software testing. But it all starts with knowing how to write a test case effectively and efficiently.

Test cases have a few integral parts that should always be present as fields, as well as some “nice to have” elements that can only enhance the presented results. A short worked example follows the list of required elements below.

Required Elements:

  • Summary
    • Concise, direct encapsulation of the purpose of the test case 
  • Prerequisites
    • What needs to be in place prior to starting the test?
    • Bad Prerequisites: captured in test steps, not present, overly specific
    • Good Prerequisites: concise/descriptive, lays out all set-up prior to testing, includes information to learn more if desired 
  • Test Steps
    • The meat of the test case
    • Good test steps: each step is a specific, atomic action performed by the user with an expected result; good steps call out divergent paths where necessary and cite which test data (laid out in the prerequisites) needs to be applied
    • Really great test steps should treat the user like they know “nothing” and communicate everything from start to finish
  • Expected Results
    • How do we know that the test hasn’t failed?
    • Bad Expected Results: Page loads correctly, view looks good, app behaves as expected
    • Good Expected Results: Landing page loads after spinner with user’s account details present, view renders with all appropriate configurations (title, subtitle, description, etc.), toggle changes state when tapped (enabled→disabled)
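
For illustration, here is a short, hypothetical test case built around the discount-code scenario mentioned earlier (the product, account, and code are invented for the example):

  • Summary: Applying a valid discount code reduces the cart total for the right product.
  • Prerequisites: Test account qa-user-01 exists and can sign in; product “Blue Mug” is in stock; discount code SAVE10 (10% off) is active.
  • Test Steps:
    1. Sign in as qa-user-01. Expected: the account dashboard loads with the user’s name displayed.
    2. Add “Blue Mug” to the cart and open the cart page. Expected: the cart shows one item at the correct unit price.
    3. Enter SAVE10 and tap Apply. Expected: the code is accepted and a discount line appears.
  • Expected Results: The cart total is reduced by 10%, and the discount persists through to the order confirmation page.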

Preferred Additional Elements:

  • Artifacts
    • Screenshots, files, builds, configurations, etc.
  • Test Data
    • Accounts, items, addresses, etc.
    • What information is needed during the test?
    • Pre-rendered prerequisite fulfillment
  • Historical Context
    • Previous failures, previous user journeys, development history, etc.
    • Has this feature been “flakey” in the past?
    • What are previous failure points?
    • How critical is this feature?

The very practice of writing test cases helps prepare the testing team by ensuring good test coverage across the application, but writing test cases has an even broader impact on quality assurance and user experience.

Live Demonstration

Nathan Barrett has crafted an intuitive test of specificity to help guide testers in understanding how they should structure their cases.

Be sure to follow Nathan’s entire Lightning Talk to follow along with these steps in real time.

Closing Thoughts

Test cases help guide the tester through a sequence of steps to validate whether a software application is free of bugs, and working as required by the end-user. A well-written test case should allow any tester to understand and execute the test.

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to understanding the beneficial prerequisites to writing a good test case for any type of application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about How To Write A Great Test Case, as well as its importance in the software development process, and to experience Nathan Barrett’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Cloud Engineering, Design UX/UI, DevOps, InRhythmU, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: devops, INRHYTHMU, learning and growth, SDET, software development, software engineering, ux, web engineering

Oct 20 2020

How Organizations Can Reduce Cloud Security Risks

12 Best Practices for Cloud Security

Cloud security is a top focus and priority for organizations today. Protecting your organization continues to be increasingly difficult as employees use their own devices and applications for work, and data flows in and out of your business in a variety of ways. On top of that, the COVID-19 pandemic has acted as a major stress test on cybersecurity controls and policies. The resulting surge in remote work complexifies the attack surface and brings up many new questions for security teams. While the attack surface has broadened, attacks have also become more sophisticated and more damaging. Today’s security leaders must balance these challenges with business needs to collaborate, innovate, and grow. 

Here are the top security best practices you can adopt to secure your cloud solutions:

1. Secrets management

Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication. Managing and securing access to these secrets can be complicated, and if they are not properly managed, they can end up in the wrong hands.

Regardless of your solution for managing secrets, here are best practices you should focus on addressing:

  1. Identify all types of passwords, keys and other secrets across your entire IT environment and bring them under centralized management. Continuously discover and onboard new secrets as they are created.
  2. Eliminate hardcoded secrets in DevOps tool configurations, build scripts, code files, test builds, production builds, applications, and more. Bring hardcoded credentials under management, such as by using API calls, and enforce password security best practices. Eliminating hardcoded and default passwords effectively removes dangerous backdoors to your environment (a minimal sketch follows this list).
  3. Enforce password security best practices, including password length, complexity, uniqueness, expiration, rotation, and more across all types of passwords. Secrets, if possible, should never be shared. If a secret is shared, it should be immediately changed. Secrets to more sensitive tools and systems should have more rigorous security parameters, such as one-time passwords, and rotation after each use.
  4. Apply privileged session monitoring to log, audit, and monitor all privileged sessions (for accounts, users, scripts, automation tools, etc.) to improve oversight and accountability. This can also entail capturing keystrokes and screens (allowing for live view and playback). Some enterprise privilege session management solutions also enable IT teams to pinpoint suspicious session activity in-progress, and pause, lock, or terminate the session until the activity can be adequately evaluated.
  5. Extend secrets management to third-parties – ensure partners and vendors conform to best practices in using and managing secrets.
  6. Threat analytics – continuously analyze secrets usage to detect anomalies and potential threats. The more integrated and centralized your secrets management, the better you will be able to report on accounts, keys, applications, containers, and systems exposed to risk.
  7. DevSecOps – With the speed and scale of DevOps, it’s crucial to build security into both the culture and the DevOps lifecycle (from inception, design, build, test, release, support, maintenance). Embracing a DevSecOps culture means that everyone shares responsibility for security, helping ensure accountability and alignment across teams. In practice, this should entail ensuring secrets management best practices are in place and that code does not contain embedded passwords in it.
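
As a minimal illustration of point 2 (a sketch only – the variable and secret names are invented), application code should resolve credentials at runtime from the environment or a secrets manager rather than embedding them in source:

    // Bad: a hardcoded credential checked into source control.
    // const dbPassword = 'SuperSecret123!';

    // Better: resolve the secret at runtime from the environment (populated by a
    // centrally managed secrets store), and fail fast if it is missing.
    const dbPassword = process.env.DB_PASSWORD;
    if (!dbPassword) {
      throw new Error('DB_PASSWORD is not set – fetch it from your secrets manager');
    }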

2. Web application firewall (WAF)

Web application firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities. WAF helps protect your web applications and APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.  

3. Multi-factor Authentication (MFA)

Multi-Factor Authentication (MFA) adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in, they will be prompted for their username and password (the first factor—what they know), as well as for an authentication code from their MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your Cloud account settings and resources.

Enforce multi-factor authentication for users, especially administrators and others in your organization who can have a significant impact if their account is compromised.

4. Encrypt virtual hard disk files

Encrypt your virtual hard disk files to help protect your boot volume and data volumes at rest in storage, along with your encryption keys and secrets. Disk Encryption helps you encrypt your Windows and Linux IaaS virtual machine disks. 

The following are the best practices for using Disk Encryption:

  • Enable encryption on VMs. 
  • Use a key encryption key (KEK) for an additional layer of security. 
  • Take a snapshot and/or backup before disks are encrypted. Backups provide a recovery option if an unexpected failure happens during encryption.

5. Use strong network controls 

You can connect virtual machines (VMs) and appliances to other networked devices by placing them on virtual networks. That is, you can connect virtual network interface cards to a virtual network to allow TCP/IP-based communications between network-enabled devices. Virtual machines connected to a virtual network can connect to devices on the same virtual network, different virtual networks, the internet, or your own on-premises networks. 

As you plan your network and the security of your network, centralize: 

  • Management of core network functions like virtual network and subnet provisioning, and IP addressing.  
  • Governance of network security elements, such as network virtual appliance functions. If you use a common set of management tools to monitor your network and the security of your network, you get clear visibility into both. A straightforward, unified security strategy reduces errors because it increases human understanding and the reliability of automation.

Best practices for logically segmenting subnets include:

  • Don’t assign Allow rules with broad ranges (e.g. allow 0.0.0.0 through 255.255.255.255).
  • Segment the larger address space into subnets.
  • Create network access controls between subnets. Routing between subnets happens automatically, and you don’t need to manually configure routing tables. By default, there are no network access controls between the subnets that you create on a virtual network.
  • Avoid small virtual networks and subnets to ensure simplicity and flexibility.

6. Mitigate and protect against DDoS

Distributed denial of service (DDoS) is a type of attack that tries to exhaust application resources. The goal is to affect the application’s availability and its ability to handle legitimate requests. These attacks are becoming more sophisticated and larger in size and impact. They can be targeted at any endpoint that is publicly reachable through the internet. Designing and building for DDoS resiliency requires planning and designing for a variety of failure modes. 

Following are the best practices for building DDoS-resilient services:

  • Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications can have bugs that allow a relatively low volume of requests to use a lot of resources, resulting in a service outage.
  • Design your applications to scale horizontally to meet the demand of an amplified load, specifically in the event of a DDoS attack. If your application depends on a single instance of a service, it creates a single point of failure. Provisioning multiple instances makes your system more resilient and more scalable.
  • Layering security defenses in an application reduces the chance of a successful attack. Implement secure designs for your applications by using the built-in capabilities of the Cloud provider.

7. Manage your VM updates 

Cloud VMs, like all on-premises VMs, are meant to be user managed. The cloud provider doesn’t push Windows or Linux updates to them. You need to manage your VM updates.

Here are the best practices to manage your VM updates:

  • Keep your VMs current. Use the update management solution by your cloud provider.
  • Ensure at deployment that images you built include the most recent round of Windows or Linux updates.
  • Periodically redeploy your VMs to force a fresh version of the OS.
  • Rapidly apply security updates to VMs. 
  • Deploy and test a backup solution. 

8. Enable password management 

If you have multiple tenants or you want to enable users to reset their own passwords, it’s important that you use appropriate security policies to prevent abuse.

Here are the best practices to manage your passwords:

  • Set up self-service password reset (SSPR) for your users
  • Monitor how or if SSPR is really being used. 
  • Extend cloud-based password policies to your on-premises infrastructure.

9. Role-based access control (RBAC)

Using role-based access control (RBAC) for cloud resources is critical for any organization that uses the cloud. Role-based access control helps you manage who has access to cloud resources, what they can do with those resources, and what areas they have access to. Designating groups or individual roles responsible for specific functions in the cloud helps avoid confusion that can lead to human and automation errors that create security risks. Restricting access based on the need to know and least privilege security principles is imperative for organizations that want to enforce security policies for data access. Your security team needs visibility into your cloud resources in order to assess and remediate risk. If the security team has operational responsibilities, they need additional permissions to do their jobs. You can use RBAC to assign permissions to users, groups, and applications at a certain scope. The scope of a role assignment can be a subscription, a resource group, or a single resource.

Best practices for using RBAC to manage access to your cloud resources are:

  • Segregate duties within your team and grant only the amount of access to users that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your cloud subscription or resources, allow only certain actions at a particular scope.
  • Grant security teams with responsibilities access to see cloud resources so they can assess and remediate risk.
  • Grant the appropriate permissions to security teams that have direct operational responsibilities.

Organizations that don’t enforce data access control by using capabilities like RBAC might be giving more privileges than necessary to their users. This can lead to data compromise by allowing users to access types of data (e.g. high business impact) that they shouldn’t have.

10. Perform security penetration testing 

Validating security defenses is as important as testing any other functionality. Make penetration testing a standard part of your build and deployment process. Schedule regular security tests and vulnerability scanning on deployed applications, and monitor for open ports, endpoints, and attacks. 

Fuzz testing is a method for finding program failures (code errors) by supplying malformed input data to program interfaces (entry points) that parse and consume this data. Use tools by your cloud provider to look for bugs and other security vulnerabilities in your software before you deploy it to cloud. It will help you catch vulnerabilities before you deploy software so you don’t have to patch a bug, deal with crashes, or respond to an attack after the software is released.

11. Keep up to date with security recommendations

Stay up to date with the security recommendations from your cloud provider to evolve the security posture of your workload. Cloud provider services like Azure Security Center, AWS Security Hub, and Google Cloud Security Command Center periodically analyze the security state of your cloud resources to identify potential security vulnerabilities. They then provide you with recommendations on how to remediate those vulnerabilities.

12. Build an incident response plan

Preparation is critical to timely and effective investigation, response to, and recovery from security incidents to help minimize disruption to your organization. 

Best Practices: 

  • Identify key personnel and external resources: Identify internal and external personnel, resources, and legal obligations that would help your organization respond to an incident. 
  • Develop incident management plans: Create plans to help you respond to, communicate during, and recover from an incident. For example, you can start an incident response plan with the most likely scenarios for your workload and organization. Include how you would communicate and escalate both internally and externally.
  • Prepare forensic capabilities: Identify and prepare forensic investigation capabilities that are suitable, including external specialists, tools, and automation.
  • Automate containment capability: Automate containment and recovery of an incident to reduce response times and organizational impact.
  • Pre-provision access: Ensure that incident responders have the correct access pre-provisioned to reduce the time for investigation through to recovery.
  • Pre-deploy tools: Ensure that security personnel have the right tools pre-deployed to reduce the time for investigation through to recovery.
  • Run game days: Practice incident response game days (simulations) regularly, incorporate lessons learned into your incident management plans, and continuously improve.

Conclusion:

It is clear that although the use of cloud computing has rapidly increased, security is still a major concern in the cloud computing environment.

Cloud security is not just a technical problem. It also involves standardization, monitoring, policies, and many other aspects.

Written by Parag Katkar · Categorized: Cloud Engineering

