
InRhythm

Your partners in accelerated digital transformation


DevOps

Sep 19 2023

The Full-Stack Observability Revolution: Enhancing DevOps Best Practices

Based on a Lightning Talk by: Taufiqur Ashrafy, Solutions Architect @ InRhythm on September 7th 2023, as part of this summer’s InRhythm Propel Summit 2023

Overview

Design Credit: Joel Colletti, Lead UI/UX Designer @ InRhythm

In today’s dynamic and rapidly evolving technological landscape, ensuring the seamless operation of complex software systems is paramount. DevOps has emerged as a critical approach to streamline software development and IT operations, fostering collaboration and accelerating release cycles. However, as systems grow in complexity and scale, understanding how they behave in real-time across the entire technology stack becomes increasingly challenging. This is where Full-Stack Observability steps in, transforming the way DevOps teams operate:

  • Overview
  • Full-Stack Observability Unveiled 
  • Components Of Full-Stack Observability
  • The Impact Of Full-Stack Observability
  • The InRhythm Propel Summit And Full-Stack Observability
  • Closing Thoughts

Full-Stack Observability Unveiled

Full-Stack Observability is not just the latest development “fad”; it’s a game-changer for modern DevOps practices. It represents the comprehensive understanding of a system’s performance by collecting, correlating, and analyzing data from every layer of the technology stack, including infrastructure, applications, and services. Traditionally, monitoring tools have focused on one aspect of the stack, but this limited view can lead to blind spots when trying to troubleshoot issues or optimize performance. Full-Stack Observability aims to eliminate these blind spots by providing a holistic view of the entire system.

Components Of Full-Stack Observability

  • Logs

Logs provide a textual record of events within applications and systems. They are invaluable for troubleshooting and auditing. Full-Stack Observability integrates log management, enabling DevOps teams to centralize logs from various components and analyze them collectively.

  • Metrics

Metrics are numeric data points that measure various aspects of system behavior, such as CPU usage, memory consumption, or response times. Full-Stack Observability tools gather and visualize metrics from different parts of the technology stack, aiding in performance analysis and trend identification.

  • Traces

Traces follow a transaction’s journey through the system, from the user interface down to the backend services. They help identify bottlenecks and latency issues. Full-Stack Observability incorporates distributed tracing to provide end-to-end visibility into transactions.

  • Events

Events represent specific occurrences or milestones in the system. They can include user actions, system alerts, or custom events defined by the organization. Full-Stack Observability platforms collect and correlate events to create a comprehensive timeline of system activity.
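The four telemetry types above become far more useful when they can be correlated. As a minimal illustration (real observability stacks use agents and collectors such as OpenTelemetry; the service name and fields here are hypothetical), a shared trace ID lets logs, metrics, and events emitted by different layers be joined later:

```python
import json
import logging
import uuid
from io import StringIO

# Minimal structured-telemetry sketch: every record carries a trace_id so
# logs, metrics, and events from different layers can be correlated.
stream = StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def record(kind, message, trace_id, **fields):
    """Emit one JSON line tagged with its kind (log, metric, or event)."""
    logger.info(json.dumps({"kind": kind, "trace_id": trace_id,
                            "message": message, **fields}))

trace_id = str(uuid.uuid4())
record("log", "payment authorized", trace_id)
record("metric", "response_time_ms", trace_id, value=42)
record("event", "order_placed", trace_id, order_id="A-1001")

# All three telemetry types now share one trace_id and can be joined later.
lines = [json.loads(l) for l in stream.getvalue().splitlines()]
assert {l["kind"] for l in lines} == {"log", "metric", "event"}
```

Because each line is self-describing JSON with a shared trace ID, a central platform can reconstruct the full timeline of one transaction across components.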

The Impact Of Full-Stack Observability

  • Enhanced Troubleshooting

With Full-Stack Observability, DevOps teams can pinpoint the root cause of issues faster. When an incident occurs, instead of relying on hunches or educated guesses, they can access a wealth of data that reveals what happened, where, and why.

  • Proactive Issue Resolution

Full-Stack Observability enables proactive monitoring. DevOps teams can set up alerts based on specific thresholds or patterns, allowing them to detect and address potential issues before they impact end-users.
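The threshold-based alerting idea can be sketched in a few lines of Python; the metric, threshold, and window size below are illustrative assumptions, not recommendations:

```python
from collections import deque

# A minimal proactive-alerting sketch: watch a stream of latency metrics and
# fire when the rolling average crosses a threshold. Values are illustrative.
class LatencyAlert:
    def __init__(self, threshold_ms=200, window=5):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)

    def observe(self, latency_ms):
        """Record a sample; return True if the rolling average breaches the threshold."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_ms

alert = LatencyAlert(threshold_ms=200, window=3)
readings = [120, 150, 180, 400, 450]
fired = [alert.observe(ms) for ms in readings]
# The alert fires only once sustained latency drags the average past 200 ms.
assert fired == [False, False, False, True, True]
```

Using a rolling average rather than a single sample keeps one transient spike from paging the team while still catching sustained degradation.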

  • Optimized Performance

By tracking the performance of every component in the technology stack, organizations can identify bottlenecks and areas for improvement. This data-driven approach to optimization can lead to more efficient systems and improved user experiences.

  • Improved Collaboration

Full-Stack Observability promotes collaboration between development and operations teams. When both sides have access to the same comprehensive data, communication improves, and issues are resolved more efficiently.

The InRhythm Propel Summit And Full-Stack Observability


The InRhythm Propel Summit is dedicated to fostering continuous learning and growth within the tech community. Our recent DevOps Workshop, hosted by Solutions Architect, Taufiqur Ashrafy, focused on Full-Stack Observability. This event aligns perfectly with the mission of the Propel Summit, which is to empower tech professionals to stay at the forefront of industry trends and best practices.

Closing Thoughts

Full-Stack Observability represents a significant shift in how DevOps teams approach monitoring and troubleshooting. By providing a comprehensive view of the entire technology stack, it empowers organizations to proactively address issues, optimize performance, and enhance collaboration between development and operations teams. As the tech landscape continues to evolve, Full-Stack Observability will play a crucial role in ensuring the reliability and efficiency of software systems. Embracing this approach is not just a choice; it’s a necessity for organizations aiming to thrive in the digital age.

Written by Kaela Coppinger · Categorized: Cloud Engineering, DevOps, Learning and Development, Product Development, Software Engineering · Tagged: devops, DevOps Best Practices, DevOps Workshop, Full-Stack, Full-Stack Observability, InRhythm Propel Summit, INRHYTHMU, learning and growth, Observability, product development, software engineering

Sep 12 2023

The Shift Left Testing Principle: Empowering Quality Assurance with Karate API Framework

Based on a Lightning Talk by: Oleksii Lavrenin, Lead Software Engineer In Test @ InRhythm on August 24th 2023, as part of this summer’s InRhythm Propel Summit 2023

Overview


In today’s fast-paced software development landscape, delivering high-quality products is paramount. As software development methodologies evolve, so do the testing practices associated with them. One such methodology gaining prominence is the Shift Left Testing principle, which aims to detect and fix defects early in the development lifecycle. This proactive approach significantly reduces the cost and effort associated with fixing issues at later stages:

  • Overview
  • Shift Left Testing: A Paradigm Shift In Testing Philosophy 
  • The Role Of The Karate API Framework
  • Early Validation And Rapid Feedback
  • Collaboration And Shared Understanding 
  • Expressive And Readable Tests
  • Continuous Testing And Integration
  • Closing Thoughts
  • The InRhythm Propel Summit And Our Core Values

Shift Left Testing: A Paradigm Shift In Testing Philosophy

Traditionally, testing activities were often performed towards the end of the development cycle, leading to a bottleneck in identifying and resolving defects. The Shift Left Testing principle challenges this status quo by advocating the integration of testing activities from the very beginning of the development process. This philosophy ensures that potential defects are identified and addressed early, preventing them from propagating further downstream and becoming more complex and costly to fix.

The Role Of The Karate API Framework

Enter the Karate API Framework, a powerful tool that aligns perfectly with the principles of Shift Left Testing. Karate is an open-source test automation framework specifically designed for API testing. Its unique combination of simplicity, flexibility, and effectiveness makes it an ideal choice for embracing the Shift Left approach.

Early Validation And Rapid Feedback

Karate enables teams to perform API testing as early as the development phase. This empowers developers to validate their APIs right from the initial stages, catching potential issues in real-time. By providing rapid feedback, Karate allows developers to address issues swiftly, reducing the need for rework and ensuring that the codebase remains robust.

Collaboration And Shared Understanding

One of the challenges in software development is maintaining clear communication between developers and quality assurance (QA) teams. Karate bridges this gap by using a domain-specific language that is accessible to both developers and testers. This shared language fosters collaboration, ensuring that everyone is on the same page when it comes to defining test scenarios and expectations.

Expressive And Readable Tests

Karate’s syntax is designed to be expressive and readable. Test scenarios are written in a narrative style that closely resembles plain English, making it easy to understand even for non-technical team members. This clarity enhances the Shift Left Testing principle by allowing all stakeholders, including business analysts and product owners, to review and contribute to test scenarios.

Continuous Testing And Integration

Another key aspect of Shift Left Testing is the integration of testing into the continuous integration and continuous delivery (CI/CD) pipeline. Karate seamlessly fits into this workflow, enabling automated API tests to be executed with every code commit. This constant validation ensures that defects are caught early, preventing them from reaching later stages of development.
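Karate scenarios are written in its own Gherkin-style DSL; as a language-neutral sketch of the same shift-left idea, the snippet below checks a hypothetical /users response payload against an expected contract in plain Python (the endpoint and schema are assumptions for illustration):

```python
# A minimal shift-left contract check, sketched in Python rather than
# Karate's Gherkin-style DSL. The /users payload shape is hypothetical.
EXPECTED_USER_SCHEMA = {"id": int, "name": str, "email": str}

def validate_user(payload):
    """Return a list of contract violations (empty means the payload passes)."""
    errors = []
    for field, expected_type in EXPECTED_USER_SCHEMA.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

# Running checks like this on every commit catches contract drift early.
assert validate_user({"id": 1, "name": "Ada", "email": "ada@example.com"}) == []
assert validate_user({"id": "1", "name": "Ada"}) == [
    "id: expected int", "missing field: email"]
```

Wired into a CI/CD pipeline, a check of this shape fails the build the moment an API response stops matching its agreed contract, rather than weeks later in QA.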

Closing Thoughts

The Shift Left Testing principle has transformed the way software testing is approached, emphasizing early detection and prevention of defects. The Karate API Framework perfectly complements this principle by providing a powerful yet accessible tool for API testing. Its ability to facilitate collaboration, provide rapid feedback, and integrate seamlessly into the development pipeline makes it an indispensable asset in achieving high-quality software development.

By embracing the Shift Left approach with tools like Karate, development teams can ensure that their products meet the highest quality standards while maximizing efficiency and minimizing costs. The journey towards software excellence begins with a proactive mindset, and the synergy between Shift Left Testing and the Karate API Framework paves the way for a brighter future in software development.

The InRhythm Propel Summit And Our Core Values


At InRhythm, our ethos centers around fostering a culture of learning and growth. We believe that staying at the forefront of technological advancements is key to providing exceptional solutions to our clients. The InRhythm Propel Summit perfectly encapsulates this commitment, serving as a platform for sharing insights, fostering innovation, and empowering our engineering community.

As part of this summit, we were thrilled to feature the SDET workshop, an immersive experience led by the esteemed Vidal Karan and Oleksii Lavrenin.

Written by Kaela Coppinger · Categorized: Code Lounge, DevOps, Learning and Development, Product Development, Software Engineering · Tagged: best practices, InRhythm Propel Summit, INRHYTHMU, Integration Testing, Karate, Karate API, learning and growth, product development, SDET, SDET Propel Workshop, Shift Left Testing, software engineering, testing, Testing API, testing automation

Jun 15 2023

Enhancing Test Automation With Visual AI Using Applitools

Based on a Lightning Talk by: Veda Vyas Prabu, Lead Software Development Engineer In Test @ InRhythm on June 1st, 2023

Overview

In the world of software development, testing automation plays a crucial role in ensuring the quality and reliability of applications. However, traditional testing approaches often struggle to validate the visual correctness of user interfaces, leading to potential issues and user dissatisfaction. Applitools, a leading provider of visual AI solutions, offers developers a powerful toolset to incorporate visual validation into their testing automation workflows.

In Vyas Prabu’s Lightning Talk session, we will explore how developers can leverage Applitools for testing automation with Visual AI, enabling them to enhance the accuracy and effectiveness of their automated tests:

  • Overview
  • Understanding Visual AI And Testing Automation
  • The Role Of Applitools In Testing Automation
  • Benefits Of Using Applitools For Testing Automation
  • Closing Thoughts

Understanding Visual AI And Testing Automation

Visual AI combines artificial intelligence and computer vision to analyze and understand the visual elements of an application. It enables automated visual testing, validating the visual aspects of user interfaces, layouts, and components. Testing automation, on the other hand, involves automating the execution and validation of tests, reducing manual effort and ensuring consistent results. By integrating Visual AI into testing automation, developers can achieve comprehensive validation of the visual aspects of their applications.

The Role Of Applitools In Testing Automation

Applitools provides a robust platform that seamlessly integrates Visual AI capabilities into existing testing frameworks. Here’s how developers can leverage Applitools for testing automation with Visual AI:

  • Automated Visual Validation

Applitools allows developers to define visual checkpoints within their automated tests. These checkpoints capture the expected visual state of an application at specific stages or actions. During test execution, Applitools compares the actual visual output with the expected checkpoint, detecting any visual differences or anomalies. This automated visual validation ensures the accuracy and consistency of the application’s visual elements across different test runs and environments.
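Conceptually, a visual checkpoint compares a captured render against a stored baseline. The sketch below is a deliberately naive pixel-level illustration in Python; Applitools’ Visual AI is perceptual and far more sophisticated than exact pixel matching:

```python
# A naive visual checkpoint: compare a captured "screenshot" (here, a grid of
# pixel values) against a stored baseline and report differing coordinates.
def visual_diff(baseline, actual):
    """Return coordinates where the actual render differs from the baseline."""
    return [(r, c)
            for r, row in enumerate(baseline)
            for c, pixel in enumerate(row)
            if actual[r][c] != pixel]

baseline = [[0, 0, 0], [0, 255, 0]]
rendered = [[0, 0, 0], [0, 250, 0]]   # one slightly-off pixel

diff = visual_diff(baseline, rendered)
assert diff == [(1, 1)]  # the checkpoint flags the changed pixel
```

A real Visual AI engine would ignore imperceptible differences like the one above while still catching layout breaks, which is exactly why perceptual comparison beats pixel-exact diffing for UI testing.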

  • Dynamic And Responsive UI Testing

Data in applications is rarely static, and it is crucial to validate the visual presentation of dynamic and varying data. Applitools’ Visual AI engine is capable of handling such scenarios by detecting and highlighting differences between expected and actual visual elements. Developers can define visual checkpoints based on expected visual output, and Applitools automatically verifies that the data-driven application generates the correct visuals regardless of the data’s dynamic nature.

  • Cross-Browser And Cross-Device Compatibility

Modern applications need to perform consistently across various browsers and devices. Applitools addresses this challenge by providing cross-browser and cross-device compatibility in visual testing. Developers can write tests once and leverage Applitools’ capabilities to automatically validate the visual correctness of their applications across multiple browsers and devices. This saves time and effort in managing separate test scripts for different platforms and ensures consistent user experiences.

  • Integrations With Testing Frameworks And Tools

Applitools seamlessly integrates with popular testing frameworks and tools, including Selenium, Cypress, and Appium. Developers can continue using their preferred automation frameworks and leverage Applitools for visual validation within their existing test scripts. This integration simplifies the adoption of Applitools into existing testing workflows and enables developers to enhance their automated tests with Visual AI capabilities without significant changes to their existing codebase.

Benefits Of Using Applitools For Testing Automation

  • Enhanced Accuracy And Coverage

By incorporating Applitools into testing automation, developers can achieve higher accuracy and broader coverage of their visual tests. Visual AI technology detects even subtle visual differences, ensuring that applications meet the desired visual standards. This leads to improved application quality and user satisfaction.

  • Time And Effort Savings

Automated visual validation with Applitools significantly reduces the time and effort required for manual visual inspection. Developers can focus on creating automated tests that cover a wider range of scenarios, including visual aspects. This saves valuable time, accelerates the testing cycle, and allows developers to deliver applications faster.

  • Improved Collaboration And Communication

Applitools provides collaboration features that enable team members to efficiently review and manage visual tests. Developers, QA engineers, and other stakeholders can easily communicate visual defects, discuss visual changes, and track the progress of visual testing. This fosters effective collaboration and streamlines the bug-fixing process.

Closing Thoughts

Applitools revolutionizes testing automation by integrating Visual AI capabilities into the testing workflow. By leveraging Applitools’ automated visual validation, developers can enhance the accuracy, coverage, and efficiency of their tests. The platform’s cross-browser compatibility, dynamic UI testing support, and seamless integration with popular testing frameworks make it a valuable tool for developers seeking to validate the visual correctness of their applications.

By incorporating Applitools into testing automation, developers can deliver visually robust and reliable applications, ensuring a delightful user experience. With Visual AI’s power and automation’s efficiency, developers can save time, reduce manual effort, and focus on building high-quality software.

Written by Kaela Coppinger · Categorized: Agile & Lean, DevOps, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: best practices, devops, learning and growth, product development, SDET, software engineering, testing, testing automation

May 08 2023

InRhythm Presents The Propel Spring Quarterly Summit


New York, NY – InRhythm recently concluded its very first Propel Spring Quarterly Summit, a premier event consisting of six individual coding workshops aimed at supporting the learning and growth of engineering teams around the world.

Over the last three weeks, our consulting practices have led a series of interactive experiences that delved into the latest technology trends and tools, designed to propel professionals forward into their careers. 

The workshops are free to access as a unique part of InRhythm’s mission to build a forward-thinking thought leadership annex:

  • InRhythm Propel Spring Quarterly Summit / SDET Workshop / March 17th 2023
  • InRhythm Propel Spring Quarterly Summit / Web Workshop / March 24th 2023
  • InRhythm Propel Spring Quarterly Summit / iOS Workshop / March 28th 2023
  • InRhythm Propel Spring Quarterly Summit / DevOps Workshop / March 29th 2023
  • InRhythm Propel Spring Quarterly Summit / Android Workshop / April 11th 2023
  • InRhythm Propel Spring Quarterly Summit / Cloud Native Workshop / April 21st 2023

SDET Workshop (03/17/23)


This workshop served as an introduction to writing and running tests using Microsoft Playwright. Our SDET Practice went over Playwright’s extensive feature set before diving deeper into its API.

For the workshop, the team covered setup and installation of the tool and wrote a series of comprehensive tests against a test application. Once the tests were run, participants had the opportunity to explore some of Playwright’s advanced features, such as its powerful debugger and enhanced reporting.

To close out the workshop, SDET Practice Leadership compared Playwright’s features to some of its competitors, went over its pros and cons, and discussed why they believed it to be a paramount tool to consider for automated testing solutions.

Web Workshop (03/24/23)


Our Web Practice focused their workshop on their top three intertwining technologies for development cycles.

Because many modern web applications take on responsibilities that the middle (presentation) layer and service (backend) layer traditionally provide to the frontend, the project kicked off by organizing these elements in a mono-repository.

Once the application moved into its build phase, it was time to accelerate the architecture to the next level using Next.js.

Web Practice Leadership wrapped the project with an overview of web bundling and the variety of bundling methods used to best adapt to each individual build.

DevOps Workshop (03/29/23)


In this workshop, the DevOps Practice demonstrated tools for provisioning infrastructure as well as how to construct a self-servicing platform for provisioning resources. With these new developments in the industry, bridging the gaps between development and ops by allowing developers to self-manage cloud infrastructure to satisfy their needs will be a paramount skill to adopt. Our DevOps practitioners discussed the pros and cons of a number of tools for provisioning infrastructure and identified which tools can best fit a business’ needs.

For the hands-on interactive session, the team ran through the necessary steps to get started with Pulumi and provision a resource onto AWS, along with demonstrating Terraform in order to get a feel for the difference between the two popular infrastructure-as-code tools. After that, we set up some plugins to enhance the development experience with IaC.  

Self-servicing platforms are the best way to allow engineers to provision resources and infrastructure for their needs en masse. With Backstage, the team demonstrated a platform engineers can come to and fulfill their needs, whether creating a new microservice, a new repository, or even provisioning a new k8s cluster. Provisioning resources this way also standardizes them and brings uniformity, ensuring that best practices are enforced. Long gone are the days of submitting a ticket to create a new instance to deploy an application, with a wait time of a few hours or even a few days. Self-servicing tools are the future of bringing operations into the hands of developers and bridging the gap between development and operations.

Finally, DevOps Practice Leadership set up a self-servicing platform and hooked it into the aforementioned IaC repository to allow for the provisioning of resources from a GUI. 

Managing infrastructure can quickly become tedious as the number of resources used on a cloud provider continues to grow. With infrastructure-as-code, not only DevOps engineers but also developers can lay out infrastructure using code. And because it is managed via code, version-control and source-code-management tools apply, making management of infrastructure significantly easier.

iOS Workshop (03/28/23)


Our iOS Practice did a full overview of Swift Async/Await for iOS application development.

Async/Await is a programming feature that simplifies asynchronous operations by allowing software engineers to write asynchronous code in a synchronous manner. It also makes code easy to read/write, improves performance/responsiveness, and reduces the likelihood of errors.

In short, Async/Await is a powerful modern feature on every front, from development speed and simplified code to application performance.

Android Workshop (04/11/23)


Our Android Practice performed a comprehensive demonstration of the practical integration of Kotlin Multi-Platform Mobile (KMM) for cross-platform development. 

Kotlin Multi-Platform Mobile is an exciting, growing new technology that allows sharing core code between Android, iOS, and Web.  

In this workshop, Android Practice Leadership explored what KMM is, showed how to set up a KMM project, walked through implementing a core module against a few APIs (network layer, data models, parsers, and business logic), and then consumed this core library in an Android (Jetpack Compose) and an iOS (SwiftUI) application.

Cloud Native Application Development Workshop (04/21/23)


In this workshop our Cloud Native Application Development Practice introduced participants to gRPC, Google’s take on Remote Procedure Calls. Our Practice Leadership presented a brief history of gRPC and Protocol Buffers. Google and other companies use gRPC to serialize data to binary, which results in smaller data packets. Throughout the presentation our team went over some of the pros and cons of using gRPC for individual API calls.

In our hands-on workshop portion participants created a simple application to manage users and notes powered by Java, gRPC, and Postgres. The grand finale featured a full-circle moment as we worked together to create a series of CRUD APIs in Java using gRPC to send/receive data packets, translate those into objects, and store them in a database.

About InRhythm

InRhythm is a leading modern product consultancy and digital innovation firm with a mission to make a dent in the digital economy. Founded in 2002, InRhythm is currently engaged by Fortune 50 enterprises and scale-ups to bring their next generation of modern digital products and platforms to market. InRhythm has helped hundreds of teams launch mission-critical products that have created a positive impact worth billions of dollars. The projects we work on literally change the world.

InRhythm’s unique capabilities of Product Innovation and Platform Modernization services are the most sought-after. The InRhythm team of A+ thought leaders don’t just “get a job,” they join the company to do what they love. InRhythm has a “who’s who” clients list and has barely scratched the surface in terms of providing those clients the digital solutions they need to compete. From greenfield to tier-one builds, our clients look to us to deliver their mission-critical projects in the fields of product strategy, design, cloud native applications, as well as mobile and web development. 

Written by Kaela Coppinger · Categorized: Culture, DevOps, Employee Engagement, Events, InRhythm News, InRhythmU, Java Engineering, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: Android, best practices, Cloud Native Application Development, devops, INRHYTHMU, ios, JavaScript, learning and growth, Mobile Development, Press Release 2023, Propel, Propel Workshop, SDET, software engineering, Spring Quarterly Propel Summit, Web

Apr 07 2023

Automating Cloud Infrastructure With Pulumi And Python

Based on a Lightning Talk by: James Putman, Senior DevOps Engineer @ InRhythm on March 29th, 2023 as part of the Propel Spring Quarterly Summit 2023

Author: Mike Adams, Senior Technical Writer @ InRhythm

Infrastructure as Code (IaC)


IaC provides many benefits. It brings the concepts of application development to infrastructure provisioning and allows the provisioning to occur much earlier in the lifecycle as part of application development.

IaC entails source-code management, version control, code-reviews, and collaboration with pull-requests. It allows you to simultaneously maintain multiple cloud providers using hybrid cloud setups to access different services across the major competitors.

With IaC you can provision or destroy your entire infrastructure architecture in one action, allowing different environments to seamlessly represent your architecture.

Other benefits include the ability to modularize infrastructure, thereby increasing its portability and flexibility. Another benefit of IaC is that it allows you to fully automate and easily configure the infrastructure using CI/CD pipelines as that is a part of source code control management.

Introduction To Pulumi

Pulumi is a language-agnostic IaC tool used to manage project infrastructure. It provisions cloud resources using a “general purpose language” (GPL), such as JavaScript, Go, or Python, to define pipelines. It integrates easily with GitHub, Azure, AWS, and more, and is an open source package.

When compared against Terraform (a similar package of longer standing), Pulumi supports a wider range of conditional options and has more robust utility functions and types. It is easier for developers to use and can integrate with Terraform providers. It supports testing frameworks, including unit, property, and integration testing. Pulumi describes workflows in a general-purpose language, while Terraform uses the HashiCorp Configuration Language (HCL).

Because Pulumi is newer than Terraform, its community is, as expected, smaller and its documentation less mature. Unrelated to maturity, however, Pulumi provides its managed state features behind a paywall. You can, however, manage the state locally or in AWS using an S3 bucket.

Pulumi Projects

Block diagram of Pulumi project architecture.

For our post, the Pulumi Project is the GitHub account that contains the Program, which is the code or repo in that account. The Stacks equate to the branches or environments in the repo and represent different deployment states.

The Program contains the Language Host, which is composed of the Resources, the Language Executor, and the Language Runtime.

The Language Executor is responsible for launching the Language Runtime, which is in turn responsible for detecting any Resource registration changes (i.e., new, removed, or updated) and then notifying the Deployment Engine of those changes. The Resources themselves are binary plugins that communicate with the various Providers. Pulumi Resources are stored in ~/.pulumi/plugins.

The Deployment Engine (part of the Pulumi CLI) reads the Notification and verifies that the current Resource States match those listed in the Notification. If the Deployment Engine detects differences, (i.e., a new, modified, or removed Resource), it notifies the Providers which then implement the desired Resource State(s).

The Pulumi SDK provides resource bindings.

Common Commands

Some common commands for working with Pulumi environments:

  • pulumi up – deploy code and resource changes
  • pulumi stack – manage instances of the project
  • pulumi config – alter the stack’s configuration or secrets
  • pulumi destroy – completely tear down the stack’s resources


Operating Details

This is a Python-based demo using a virtual environment. If you have the AWS CLI installed and configured, Pulumi will respect its configuration.

Prerequisites

  • Pulumi CLI
  • Python (v3.7+)
  • An AWS account (for development, you should also have the AWS CLI installed to simplify your activities)
  • A static website in your IDE to use with Amazon’s S3 website support

Pulumi Setup

Install the Pulumi CLI. Note that you’ll need your Pulumi access token, or you must log in to the Pulumi website during setup.

  1. Create a directory for your project.
  2. Open a terminal and change (cd) into that directory.
  3. Run the following command to create and initialize the Pulumi stack:

pulumi new

Pulumi displays the available cloud/language templates for project scaffolds and offers numerous cloud and language combinations (206 combinations as of 07 Apr 2023). This post uses aws-python.

Pulumi prompts for the project name (the PWD is the default project name), project description, stack name, and cloud region. If you need to create a new stack for this project, simply provide the stack name in “org-name/stack-name” format. If you do not provide the organization name for a new stack, Pulumi associates the stack with the current user. Finally, Pulumi prompts for the cloud region. At this point, Pulumi creates a virtual environment, updates that environment’s toolset, and then downloads and installs the dependencies.

On completion, Pulumi displays, “Your new project is ready to go.”

Pulumi also added the following scaffolding files to your project location:

  • Pulumi.dev.yaml – contains the configuration for the stack you’re initializing
  • Pulumi.yaml – defines the project
  • __main__.py – contains a program stub with the Pulumi imports and defines the program’s resources. As generated, __main__.py creates a new S3 bucket and exports the bucket name
  • requirements.txt – contains the Pulumi version requirements
  • venv directory – a Python virtual environment containing the Pulumi scripts, packages, libraries, and a Python runtime

Note that the files and their names may vary according to your choice of ‘cloud-language’ combination.
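
For reference, the stack configuration file for this aws-python project is a small YAML document; a minimal Pulumi.dev.yaml might look like the following (the region value is whatever you chose at the prompt, so treat this as an illustrative sketch rather than your exact generated file):

```yaml
config:
  aws:region: us-east-1
```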

If the AWS CLI is not available, please install it at this time. During the AWS configure step, you will need your AWS Access Key ID and AWS Secret Access Key. You must also know the Output format (if any) and the AWS Region.

At this point, run pulumi up to perform the initial update.

Pulumi displays sparse details on the proposed update and prompts for confirmation before taking any action. Select details to view a verbose explanation, or press CTRL+O (Mac: CMD+O) to open your default web browser to the changes page at the URL shown on the View in Browser (Ctrl+O) line (Mac shows CMD+O instead). Select yes to perform the update or no to abandon it.

Selecting yes causes Pulumi to create or update the resources on the AWS side. Pulumi then displays the Outputs (bucket, bucket_name, and website_url), a count of the changed and unchanged resources, and the duration of the update.

The Outputs reflect the information from your AWS configuration.
bucket : "my-bucket-92bcd61"
bucket_name: "s3-workshop-bucket-0146792"
website_url: "s3-workshop-bucket-0146792.s3-website-us-east-1.amazonaws.com"

The __main__.py file must import json, mimetypes, and os from the Python standard library; FileAsset, Output, and export from pulumi; and s3 from pulumi_aws. Additionally, you must set content_dir to the static website directory in your project (“www” in the demo) and configure the bucket policy for the static site.
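
Outside of Pulumi, the public-read bucket policy is ordinary JSON. As a plain-Python sketch of what the demo builds (using a literal, hypothetical bucket name instead of a Pulumi Output):

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Build an S3 bucket policy that allows anonymous GetObject reads."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            # Grant read access to every object in the bucket
            "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
        }]
    })

print(public_read_policy("s3-workshop-bucket-0146792"))
```

In the real program, Output.json_dumps and Output.format replace json.dumps and the f-string so the policy can depend on the bucket’s name, which is not known until deployment.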

Ensure that you add the appropriate Pulumi libraries to your IDE’s project.
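
For this demo, requirements.txt typically pins the two packages the program imports; the version ranges below are illustrative, not your exact generated file:

```
pulumi>=3.0.0,<4.0.0
pulumi-aws>=5.0.0
```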

Code

Pulumi-generated __main__.py

This is the file as generated by pulumi new:

"""An AWS Python Pulumi program"""

import pulumi
from pulumi_aws import s3

# Create an AWS resource (S3 Bucket)
bucket = s3.Bucket('my-bucket')

# Export the name of the bucket
pulumi.export('bucket_name', bucket.id)

__main__.py Modified for the Demo

This is the file modified to add buckets, policies, and a static website.

"""An AWS Python Pulumi program"""

# Setup the imports
import json
import mimetypes
import os
import pulumi
from pulumi import FileAsset, Output, export
from pulumi_aws import s3

# Create an AWS resource (S3 Bucket)
bucket = s3.Bucket('my-bucket')

# Export the name of the bucket
pulumi.export('bucket', bucket.id)

web_bucket = s3.Bucket('s3-workshop-bucket',
    website=s3.BucketWebsiteArgs(
        index_document="index.html",
    ))

# Set the content directory for the website
content_dir = "www"
for file in os.listdir(content_dir):
    filepath = os.path.join(content_dir, file)
    mime_type, _ = mimetypes.guess_type(filepath)
    obj = s3.BucketObject(file,
        bucket=web_bucket.id,
        source=FileAsset(filepath),
        content_type=mime_type)

# Build a public-read policy for the given bucket
def public_read_policy_for_bucket(bucket_name):
    return Output.json_dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                Output.format("arn:aws:s3:::{0}/*", bucket_name),
            ]
        }]
    })

bucket_name = web_bucket.id
bucket_policy = s3.BucketPolicy("bucket-policy",
    bucket=bucket_name,
    policy=public_read_policy_for_bucket(bucket_name))

# Export the website bucket's name and URL
export('bucket_name', web_bucket.id)
export('website_url', web_bucket.website_endpoint)
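
The upload loop above relies on mimetypes.guess_type to set each object’s Content-Type from its file extension. A quick standalone check of that behavior:

```python
import mimetypes

# guess_type returns a (type, encoding) tuple; type is None for unknown extensions
for name in ("index.html", "styles.css", "logo.png"):
    mime_type, _ = mimetypes.guess_type(name)
    print(f"{name}: {mime_type}")
    # index.html: text/html, styles.css: text/css, logo.png: image/png
```

Serving the correct Content-Type matters here because S3 website hosting returns whatever type the object was uploaded with; a missing type can cause browsers to download pages instead of rendering them.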

Static Website Files

There are no special changes or requirements for the website relating to Pulumi. Pulumi updates website components during the pulumi up process when the user selects yes at the update confirmation prompt.

Resources

  • Pulumi
    • Pulumi: https://www.pulumi.com/
    • Pulumi blog: https://www.pulumi.com/blog/
    • Pulumi Slack: https://slack.pulumi.com/
  • Documentation
    • Pulumi Documentation (Main): https://www.pulumi.com/docs/
    • Pulumi Getting Started: https://www.pulumi.com/docs/get-started/
    • Pulumi CLI Reference: https://www.pulumi.com/docs/reference/cli/
  • Community
    • Pulumi Community: https://www.pulumi.com/community/
  • Best Practices
    • Pulumi best practices: https://www.pulumi.com/blog/pulumi-recommended-patterns-the-basics/
  • Repositories
    • InRhythm Pulumi with Python Repo: https://github.com/inrhythm-workshops/pulumi-with-python
    • Pulumi Github Repo: https://github.com/pulumi/

Written by Mike Adams · Categorized: Cloud Engineering, DevOps, Learning and Development · Tagged: AWS, best practices, IaC, Infrastructure as Code, Learning and Development, Pulumi

