
May 08 2023

InRhythm Presents The Propel Spring Quarterly Summit

Design Credit: Joel Colletti, Lead UI/UX Designer @ InRhythm

New York, NY – InRhythm recently concluded its very first Propel Spring Quarterly Summit, a premier event consisting of six individual coding workshops aimed at supporting the learning and growth of engineering teams around the world.

Over the last three weeks, our consulting practices led a series of interactive experiences that delved into the latest technology trends and tools, designed to propel professionals forward in their careers.

The workshops are free to access as a unique part of InRhythm’s mission to build a forward-thinking thought leadership annex:

  • InRhythm Propel Spring Quarterly Summit / SDET Workshop / March 17th 2023
  • InRhythm Propel Spring Quarterly Summit / Web Workshop / March 24th 2023
  • InRhythm Propel Spring Quarterly Summit / DevOps Workshop / March 29th 2023
  • InRhythm Propel Spring Quarterly Summit / Android Workshop / April 11th 2023
  • InRhythm Propel Spring Quarterly Summit / Cloud Native Workshop / April 21st 2023

SDET Workshop (03/17/23)


This workshop served as an introduction to writing and running tests using Microsoft Playwright. Our SDET Practice went over Playwright's extensive feature set before diving more in-depth into its API.

For the workshop, the team covered setup and installation of the tool and wrote a series of comprehensive tests against a test application. Once the tests were running, the team gave participants the opportunity to explore some of Playwright's advanced features, such as its powerful debugger and enhanced reporting.
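
As a taste of what such a test looks like, here is a minimal Playwright sketch in JavaScript; the URL, selectors, and expected heading are hypothetical and not taken from the workshop's test application.

```javascript
// Minimal Playwright test sketch (hypothetical page and selectors).
const { test, expect } = require('@playwright/test');

test('user can sign in', async ({ page }) => {
  await page.goto('https://example.com/login');       // hypothetical URL
  await page.fill('#username', 'demo-user');
  await page.fill('#password', 'demo-pass');
  await page.click('button[type="submit"]');
  // Assert that the post-login page shows the expected heading.
  await expect(page.locator('h1')).toHaveText('Welcome');
});
```

Running `npx playwright test` executes the suite headlessly, and `npx playwright test --debug` opens the debugger mentioned above.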

To close out the workshop, SDET Practice Leadership compared Playwright's features to those of its competitors, weighed its pros and cons, and discussed why they believe it is a top tool to consider for automated testing solutions.

Web Workshop (03/24/23)


Our Web Practice focused their workshop on their top three intertwined technologies for modern development cycles.

With many modern web applications sharing the responsibilities that a middle/presentation layer and a service/backend layer provide to the frontend layer, the project kicked off by organizing those elements in a mono-repository.

Once the application moved into its build phase, it was time to take the architecture to the next level using Next.js.

Web Practice Leadership wrapped up the project with an intuitive overview of web bundling and the variety of methods used to best adapt to each individual build.

DevOps Workshop (03/29/23)


In this workshop, the DevOps Practice demonstrated tools for provisioning infrastructure as well as how to construct a self-service platform for provisioning resources. With these developments in the industry, bridging the gap between development and operations by allowing developers to self-manage the cloud infrastructure they need will be a paramount skill to adopt. Our DevOps practitioners discussed the pros and cons of a number of infrastructure-provisioning tools and identified which can best fit a business's needs.

For the hands-on interactive session, the team ran through the steps needed to get started with Pulumi and provision a resource on AWS, and also demonstrated Terraform to get a feel for the differences between the two popular infrastructure-as-code (IaC) tools. After that, the team set up some plugins to enhance the IaC development experience.
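
To give a flavor of the Pulumi portion, here is a minimal sketch in JavaScript, not the workshop's actual code; the bucket name is hypothetical and assumes AWS credentials are already configured.

```javascript
// Minimal Pulumi program: declare an S3 bucket as code (hypothetical name).
const aws = require('@pulumi/aws');

// Declaring the resource is all it takes; `pulumi up` plans and applies it.
const bucket = new aws.s3.Bucket('propel-demo-bucket');

// Export the bucket name so it appears in the stack outputs.
exports.bucketName = bucket.id;
```

The equivalent Terraform configuration declares the same resource in HCL, similar to the side-by-side comparison the session walked through.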

Self-service platforms are the best way to let engineers provision resources and infrastructure for their needs at scale. With Backstage, the team demonstrated a platform engineers can come to and fulfill those needs, whether that means creating a new microservice, a new repository, or even provisioning a new k8s cluster. Provisioning resources this way also standardizes them and brings uniformity, ensuring that best practices are enforced. Long gone are the days of submitting a ticket to create a new instance to deploy an application, with a wait time of a few hours or even a few days. Self-service tools are the future of bringing operations into the hands of developers and bridging the gap between development and operations.

Finally, DevOps Practice Leadership set up a self-service platform and hooked it into the aforementioned IaC repository to allow resources to be provisioned from a GUI.

Managing infrastructure can quickly become tedious as the number of resources in use on a cloud provider continues to grow. With infrastructure as code, not only DevOps engineers but also developers can lay out infrastructure using code. And because it is managed as code, version-control and source-code-management tools are available as well, making management of infrastructure significantly easier.

iOS Workshop (03/28/23)


Our iOS Practice did a full overview of Swift Async/Await for iOS application development.

Async/Await is a programming feature that simplifies asynchronous operations by allowing software engineers to write asynchronous code in a synchronous manner. It also makes code easy to read/write, improves performance/responsiveness, and reduces the likelihood of errors.

In short, Async/Await is a powerful modern feature on every front, from development speed and simplified code to application performance.

Android Workshop (04/11/23)


Our Android Practice performed a comprehensive demonstration of the practical integration of Kotlin Multiplatform Mobile (KMM) for cross-platform development.

Kotlin Multiplatform Mobile is an exciting, growing technology that allows core code to be shared between Android, iOS, and Web.

In this workshop, Android Practice Leadership explored what KMM is and how to set up a KMM project, walked through implementing a core module against a few APIs (network layer, data models, parsers, and business logic), and then consumed that core library in an Android (Jetpack Compose) and an iOS (SwiftUI) application.

Cloud Native Application Development Workshop (04/21/23)


In this workshop, our Cloud Native Application Development Practice introduced participants to gRPC, Google's take on Remote Procedure Calls. Our Practice Leadership presented a brief history of gRPC and Protocol Buffers. Google and other companies use gRPC to serialize data to binary, which results in smaller data packets. Throughout the presentation, our team went over some of the pros and cons of using gRPC for individual API calls.

In the hands-on workshop portion, participants created a simple application to manage users and notes powered by Java, gRPC, and Postgres. The grand finale featured a full-circle moment as we worked together to create a series of CRUD APIs in Java, using gRPC to send and receive data packets, translate them into objects, and store them in a database.

About InRhythm

InRhythm is a leading modern product consultancy and digital innovation firm with a mission to make a dent in the digital economy. Founded in 2002, InRhythm is currently engaged by Fortune 50 enterprises and scale-ups to bring their next generation of modern digital products and platforms to market. InRhythm has helped hundreds of teams launch mission-critical products that have created a positive impact worth billions of dollars. The projects we work on literally change the world.

InRhythm’s unique capabilities of Product Innovation and Platform Modernization services are the most sought-after. The InRhythm team of A+ thought leaders don’t just “get a job,” they join the company to do what they love. InRhythm has a “who’s who” clients list and has barely scratched the surface in terms of providing those clients the digital solutions they need to compete. From greenfield to tier-one builds, our clients look to us to deliver their mission-critical projects in the fields of product strategy, design, cloud native applications, as well as mobile and web development. 

Written by Kaela Coppinger · Categorized: Culture, DevOps, Employee Engagement, Events, InRhythm News, InRhythmU, Java Engineering, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: Android, best practices, Cloud Native Application Development, devops, INRHYTHMU, ios, JavaScript, learning and growth, Mobile Development, Press Release 2023, Propel, Propel Workshop, SDET, software engineering, Spring Quarterly Propel Summit, Web

Apr 12 2023

What You Can Expect From React 18

Based on a Lightning Talk by: Godfrey Best, Senior Software Engineer @ InRhythm on March 29th, 2023 as part of the Propel Spring Quarterly Summit 2023

Author: Paris Leach, Senior Software Engineer @ InRhythm

Overview

Recently we had an exciting Lightning Talk led by Godfrey Best, who walked us through the changes introduced by React 18.

React 18 ushers in structural changes to the library that will help developers create more performant applications. Among these changes is the highly anticipated concept of concurrent rendering, which gives the developer fine-grained control over how their components render. We will discuss these changes at a (mostly) high level, in the hope that you walk away from this article with a solid understanding of what this version introduces.

What Is React?

Before we dive into React 18, we’re going to take a brief look at what React is for those who are not familiar. Feel free to skip this section if you already have a good grasp of React.

React is, succinctly, a JavaScript library for building user interfaces. It was created at Meta in 2011 (many of you will remember that it was called Facebook at the time) and open-sourced in 2013. It swiftly became the most popular frontend library/framework.

It provides a declarative, component-based API so that you don’t need to worry about page changes on every update. You pass data to the components, and React determines how they should render. React renders a Virtual DOM (not to be confused with the Shadow DOM), listens for changes in component data, and by default re-renders only those components whose data has changed.

React data generally flows unidirectionally (from parent to child, and not vice versa), and is usually either a component prop (data passed from a parent to a child) or part of component state (internal data that belongs to a specific component and can only be changed directly by that same component).

For most of React’s lifespan, it has relied on synchronous rendering, which means that once an application has begun rendering, the user must wait for the render to be completed before they can interact with the components (these render methods and their callbacks are pushed to JavaScript’s single-threaded call stack).

About Version 18

React 18 was released on March 29th, 2022, and among other changes, it adds features that allow the developer to switch from synchronous rendering to asynchronous rendering, or, as React has coined it, concurrent features. This allows React to render and re-render components outside of the call stack, leaving the user free to interact while the render process occurs. In addition, the developer can assign priority to certain renders, giving them more granular control over their applications. React concurrent features:

  • are opt-in (when upgrading to React 18, components are not automatically set to render concurrently)
  • are backwards compatible
  • employ reusable state
  • are interruptible

How Can We Use These New Features?

There are a number of new hooks introduced in React 18, most of which are expected to be implemented by framework authors, such as Next.js, Hydrogen, and Remix. A few of the new hooks made available are:

  • useId
    • Used for generating unique ids on both the client and server to prevent hydration mismatches
  • useInsertionEffect
    • Allows for CSS and JavaScript libraries to address performance issues while they are injecting styles during rendering
  • useSyncExternalStore
    • Allows external stores to support concurrent reads by forcing updates to the stores to be synchronous (useful for state management libraries like Redux)
  • useTransition
    • Allows developers to mark certain state updates as low priority (such as switching between pages); we'll take a closer look at this hook later in the article

React 18 also splits the rendering API (ReactDOM) into two entry points, as sketched below:

  • react-dom/client
  • react-dom/server
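
A minimal sketch of the client entry point, assuming a typical App component (the component and mount node id are hypothetical):

```javascript
// React 18 root API: mounting through react-dom/client opts the tree
// in to concurrent features.
import { createRoot } from 'react-dom/client';
import App from './App';   // hypothetical application component

const root = createRoot(document.getElementById('root'));
root.render(<App />);
```

On the server side, react-dom/server exposes the corresponding rendering APIs, including streaming variants for Node environments.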

What Features Can We Use Today?

Not all of these features require time for frameworks (or us) to implement; some of them can be used today, out-of-box:

Automatic Batching

Before React 18, if you had multiple state updates called inside a React event handler function, they would be batched automatically, and the component would only be re-rendered once. This formerly applied only to React event handlers, and not, for example, to setTimeout callbacks or native event handlers. React 18 changes this by automatically batching all state updates inside any function by default, which reduces unnecessary re-renders.
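
A minimal sketch of the difference, with hypothetical component and state names; in React 18 the two updates inside setTimeout produce a single re-render, where React 17 would have rendered twice:

```javascript
import { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    setTimeout(() => {
      setCount((c) => c + 1);
      setFlag((f) => !f); // batched with the update above: one re-render
    }, 100);
  }

  return <button onClick={handleClick}>{count}</button>;
}
```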

useDeferredValue()

This tells React to only render a value when it's convenient, similar to debouncing, though with better performance. Unlike debounce, there is no fixed time delay before it fires; additionally, the deferred render can be interrupted and does not block user input.
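
A minimal sketch, assuming a hypothetical SearchResults component that is expensive to render; the text input stays responsive while the results list re-renders with a value that may briefly lag behind:

```javascript
import { useState, useDeferredValue } from 'react';

function Search() {
  const [query, setQuery] = useState('');
  const deferredQuery = useDeferredValue(query); // may lag behind `query`

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      {/* SearchResults is a hypothetical, render-heavy component */}
      <SearchResults query={deferredQuery} />
    </>
  );
}
```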

useTransition()

This hook is similar to useDeferredValue(), except that it tells React to apply a state update when it's convenient. It can also indicate whether a transition is pending, which is a useful status to surface within your application.
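
A minimal sketch with hypothetical Spinner and TabContent components; the tab switch is marked as a low-priority transition, and isPending lets the UI show that the switch is in progress:

```javascript
import { useState, useTransition } from 'react';

function Tabs() {
  const [tab, setTab] = useState('home');
  const [isPending, startTransition] = useTransition();

  function selectTab(next) {
    // Low priority: React keeps the UI responsive and can interrupt this update.
    startTransition(() => setTab(next));
  }

  return (
    <>
      <button onClick={() => selectTab('posts')}>Posts</button>
      {isPending ? <Spinner /> : <TabContent tab={tab} />}
    </>
  );
}
```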

Suspense For Data Fetching

This feature actually existed in previous versions of React, but it was only used for code splitting. In React 18, Suspense is available for data fetching, allowing for a declarative fallback UI in scenarios where the application is waiting on an asynchronous action to complete.
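
Full Suspense-driven data fetching is expected to arrive through frameworks, but the fallback mechanism can be sketched today with a lazily loaded component (the Profile module path is hypothetical):

```javascript
import { Suspense, lazy } from 'react';

// The Profile chunk is fetched only when first rendered.
const Profile = lazy(() => import('./Profile'));

function App() {
  return (
    <Suspense fallback={<p>Loading profile...</p>}>
      <Profile />
    </Suspense>
  );
}
```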

Conclusion

React 18 brings some long-awaited performance boons and quality-of-life improvements. The move from synchronous to concurrent rendering is something React has been working on since 2017, and developers can now finally avail themselves of its benefits. React also has some future developments in the pipeline:

  • Rendering components offscreen, allowing a developer to prepare UI to render before it appears on the page
  • Improvements around suspense for data fetching, such as more exposed primitives to make it easier to access your data, as well as the ability to use the feature without a framework having to implement it
  • Server components (an experimental but upcoming feature), allowing developers to build apps spanning both the client and the server

Written by Kaela Coppinger · Categorized: Cloud Engineering, Java Engineering, Product Development, Software Engineering, Web Engineering · Tagged: best practices, INRHYTHMU, learning and growth, React, React 18, software, software engineering, Web, Web Development, web engineering

Mar 23 2023

How To Write A Good Pull Request

Overview


Pull Requests are the backbone of open source software development. They allow contributions by anyone, from anywhere. PRs are also a vital form of communication, even within a localized development team working on proprietary software.

What makes a good Pull Request?

Let’s break it down into 4 Rules Of Thumb:

  • Provide Context
  • Make It As Small As Possible, But Not Any Smaller
  • Take Screenshots
  • Ask For Assistance

Provide Context


Providing context is an important first step in guiding the reviewer. Use this as an opportunity to explain why you are making this change. This can be as simple as referring to the bug/defect/issue number, and as detailed as necessary to describe your change.

In this example, the PR links to the original issue on the npm.community site and points directly to the pnpm source that was referenced in that issue.

https://github.com/npm/cli/pull/32#issue-204418739

In this more complicated example, the author wanted to make sure the PR was consistent with how other PRs were made to this repository; one of the suggestions is the old "when in Rome" rule. Some repositories even provide a template, which can be helpful, but this one didn't. So each item discussed in the issue was turned into a bullet item, checking off the ones that were completed and noting anything the author wasn't sure about.

https://github.com/npm/cli/pull/61#issue-211591081

Make It As Small As Possible, But Not Any Smaller


Everyone agrees that smaller PRs are easier to review. Sometimes it’s just not possible to make a very small change, so here’s some practical advice: don’t do more than necessary. If you need to stray from the core of the change you are making, separate it.

For example – imagine you are adding a path through the code to handle a new requirement. Along the way you realize that some of the variables and functions could use better names, but changing those means that you now need to update a bunch of files in another area of the code. STOP! Don't make that change! Or at least, don't bunch it together with your other changes. Instead, make the rename change in its own commit, and probably in its own Pull Request too. In that PR you can explain why you are making the change and how it simplifies the real change you want to make.

This same strategy should be applied to most whitespace changes and refactorings you want to do during the course of implementing a new feature or resolving a defect. Be considerate of the reviewer's time. There is nothing more frustrating than hunting through all the changes looking for the actual change and bumping into whitespace edits and renamings spread across many files.

Take Screenshots


So you’re working on a story that affects the UI? Maybe you are fixing alignment in IE11, or adding a new interstitial modal when the user clicks a button. The code will get reviewed as it always does, but many people struggle with visualizing layout changes or CSS tweaks. Include a screen capture of the before and after. It’s usually pretty easy to get a before shot – just use the QA or production environment. Then for the after shot, use your local server. Both GitHub and Atlassian BitBucket allow you to paste images, so you can literally SHIFT-CTRL-CMD-4 (OSX) to copy a section of the screen to your clipboard, then CMD-V to paste it into the input box of your PR description.

Another incredibly helpful option is to use an application like GIPHY Capture to record an animated GIF that can be added to your PR. These are great for when you want to show an animation or a sequence of steps.

Let’s face it, it’s a pain for the reviewers to fire up their Windows vm to try out your change that resolves an Edge problem. Make their life easier by including an animated GIF that shows exactly what changed right in the Pull Request!

Ask For Assistance


Become a big fan of the quote "it is better to beg for forgiveness than to ask for permission." In so many instances, not just in software, this rule helps save time. But making too big a change in your PR may be received poorly, especially when you are not a regular contributor.

Be cautious and curious in order to lead a better engagement and ultimately, a better solution.

Here is a comment made on a PR to the TSLint project. You can see how the author acknowledges the reviewer's feedback and asks a clarifying question because of the impact the change would have on so many files. This lets the reviewers know that you respect and consider the size of the changes coming into their code base, and also that you want to be collaborative in finding the best solution.

https://github.com/palantir/tslint/pull/1738#issuecomment-261527450

Closing Thoughts

What other things do you like to do in your PRs? What kinds of things would you like to see more of in PRs that you are reviewing? What would you like to see less of?


Written by Kaela Coppinger · Categorized: DevOps, Java Engineering, Product Development, Software Engineering · Tagged: agile, best practices, growth, INRHYTHMU, JavaScript, learning and growth, software engineering

Feb 28 2023

How To Structure PWAs With PRPL Patterns

Overview


It’s been over 10 years since the release of the first model of the iPhone. Back then, most people had primitive mobile devices, limited mostly to making calls and receiving brief text messages.

Anything close to decent was considered a pleasant user experience when it came to mobile. Nobody was concerned about the status quo, because nobody was using unstable mobile devices on a daily basis to browse through sites, make purchases, etc. (at least, not yet)


Over the years, however, a powerful shift has moved users' primary point of entry from desktop machines with fast, reliable network connections to relatively underpowered mobile devices with connections that are often slow or flaky. Unfortunately, Google reports that 53% of users abandon sites that take longer than 3 seconds to load, while the average load time is up to 19 seconds on a 3G connection and 14 seconds on a 4G connection.

Now you might ask yourself: right, but how does that happen? Why does the page load take 19 seconds? I wrote some CSS, it is responsive, it should work!

Here's the problem: the UI looks like it works, but it doesn't work in the real world. If you think about your mobile users, a good number of them are still using mid-range devices, the ones they receive for free with a new mobile plan, with just 1GB of RAM. They are a little (or even a lot) better than the devices of years past, but still slow and hampered by poor connectivity.


There’s clearly a significant gap between today’s consumer expectations, the capabilities of their devices, and the mobile behavior of most sites. The patterns we have developed for building feature-rich web apps are just not sufficient for a mobile device user anymore. In order to create the best experience, the PRPL pattern can be key to improved mobile website development and user experience.

PWAs To The Rescue


When trying to ensure that a web app is suitable for a mobile device, most organizations develop responsive apps. It can appear to be a great solution to the problem described above: pages automatically respond to the screen size, the UX stays consistent across all platforms, and there is only one code base for both mobile and desktop. Unfortunately, this solution comes with limitations. Responsive Web Design has a clear network dependency: as soon as the connection is lost, your page is gone, and if your connection is slow, you will see layout and UI glitches.

Responsive Web Design is a fast and simple solution: it doesn't solve all problems, but it does solve some of them, and quickly. It works best, however, when it naturally evolves into a Progressive Web App. While PWAs are still new and emerging, this architecture allows your app to inherit all the main behaviors of RWD while also offering advanced features such as push notifications or GPS awareness. Not only is the app visible immediately after entering the page, it also works better on a slow internet connection. What's more, thanks to clever caching methods, your content can be visible and flawless even if you are not connected to the internet.


One of the ways to achieve that improved behavior lies in a pattern for structuring and serving Progressive Web Apps with an emphasis on the performance of app delivery and launch.

It’s known as the PRPL pattern:

  • push
  • render
  • pre-cache
  • lazy-load

It is not a specific technology or set of tools, but more of a mindset and a long-term plan for improving the performance of the mobile web. The specific implementation of each of the steps is out of the scope of this article, but feel free to do additional research for more information.

Page Loading Process


What does it take to load a page, from the moment you first open that page to the moment it’s fully loaded and you can interact with it? When you try to open a site on a mobile device, an initial request is sent to a remote server somewhere far away. After some time, the server brings the response, usually in the form of an HTML document. After that, your browser runs through the HTML file to check what other resources are needed; for each additional resource, your browser needs to make a separate call to the server in order to get that resource. You’ve probably noticed: that’s a lot of calls. How do we optimize that performance?

Push Critical Resources


Not every file in your application has the same level of importance. Browsers know this and, using their own heuristics, decide which files to fetch first. It's useful to also tell the browser which files are more important to us. There are multiple ways of preloading critical resources faster; some of them include rel="preload" and rel="prefetch", though you may also want to explore webpack options.

It may be useful to keep in mind that prefetch is better suited for readying the resources needed for other navigation routes. In general, both of these methods allow you to mask initial latency by preparing resources that are important but usually take some time to load. This way your browser reads through the HTML and instantly warms up the connection with the source, so by the time the browser gets to the last line of the HTML file, the resource is ready to be rendered.
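
These hints are usually written as link tags in the HTML head; here is a minimal JavaScript sketch of both, with hypothetical file paths:

```javascript
// preload: fetch a critical resource for the current route as early as possible.
const preload = document.createElement('link');
preload.rel = 'preload';
preload.as = 'font';
preload.href = '/fonts/brand.woff2';       // hypothetical path
preload.crossOrigin = 'anonymous';
document.head.appendChild(preload);

// prefetch: a lower-priority hint for a resource a future route will need.
const prefetch = document.createElement('link');
prefetch.rel = 'prefetch';
prefetch.href = '/js/checkout-route.js';   // hypothetical path
document.head.appendChild(prefetch);
```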

Render An Initial Route As Soon As Possible


Providing basic user experience as soon as possible is critical when it comes to convincing users that the site they entered is worth staying on. How does it feel when you open a site that starts loading, and the only thing you see for the next 15 seconds is a blank screen? I always ask myself: is it loading? Is my connection not working? Maybe it’s my phone that is not working? Downloading and processing external stylesheets is probably blocking the content from being rendered until the whole process has finished. That creates an opportunity for improvement.

There are some parts of an application that can be pushed earlier to provide some basic user experience and assure the user of the loading progress. One method is to extract the styles responsible for the minimal initial render and inline them in the HTML document. You can either implement that solution yourself or use existing packages such as critical. This way the browser is able to render the styles right away. Another approach to improving first paint is to server-side render the initial HTML of your page. This displays content immediately to the user while scripts are still being fetched, parsed, and executed. However, this can increase the payload of the HTML file significantly, which can harm the time it takes for your application to become interactive and thereby respond to user input. There is no single correct solution to reduce the initial load of your application, and you should only consider inlining styles and server-side rendering if the benefits outweigh the tradeoffs for your application.
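
As a minimal sketch of the inlining approach, this build-time script uses the critical package; the paths and viewport sizes are hypothetical, and the exact option names can vary between versions of the package:

```javascript
// Extract above-the-fold CSS and inline it into the HTML at build time.
const critical = require('critical');

critical.generate({
  base: 'dist/',                      // hypothetical build output directory
  src: 'index.html',
  target: { html: 'index.html' },
  inline: true,                       // inline the critical CSS into the page
  width: 375,                         // viewport of a typical mid-range phone
  height: 667,
});
```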

Pre-Cache Remaining Routes


As you probably already noticed, minimizing server-side trips can be crucial in the process of shortening page load time. Here’s where the service worker really shines. Using a service worker cache allows you to store the resources that make up the shell. On repeat visits, your browser can fetch assets directly from the cache rather than the server. This way your user will not only be able to use your application offline, but also enjoy a much faster page load. You can either create the service worker file and write the logic yourself, or use libraries such as Workbox that can make this process easier.
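
A minimal service-worker sketch using Workbox; the file list is hypothetical, and in practice a build plugin usually injects the precache manifest for you:

```javascript
// sw.js: pre-cache the application shell so repeat visits load from cache.
import { precacheAndRoute } from 'workbox-precaching';

precacheAndRoute([
  { url: '/index.html', revision: 'abc123' },      // hypothetical revisions
  { url: '/styles/app.css', revision: 'def456' },
  { url: '/js/app-shell.js', revision: 'ghi789' },
]);
```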

Lazy-Load


We’ve arrived at the moment when all of our assets are finally delivered by the server at the speed of light, but the initial paint is still slow; what’s taking so long? Almost always the most expensive asset happens to be a JavaScript bundle. From the moment it gets loaded to the moment the UI gets fully interactive, your browser goes through a few phases: it has to download the files, parse through them, compile, and finally execute. In simple terms, after your browser’s received all the resources, it now has to compute what all the files combined together look like, and how they work together. The bigger the bundle you ship, the longer it will take for the browser to parse through it and put it together.

What does it really mean for the user? Shipping a large bundle of JavaScript can significantly delay how soon your user will be able to interact with UI components. That means your user will be tapping on the UI without anything meaningful happening. The previously mentioned phases don't take a lot of time on a desktop machine, but on a mid-range mobile device they can take forever. So how do we manage to quickly load the rest of the code necessary for the application to run? Should we just load all of the code at once?


Instead of providing users with all of the code that makes up the entire application as soon as they land on a site, you can split the code based on the routes used, otherwise known as code splitting. The idea behind it is to give the user small chunks of the code that the currently used route requires. As the user navigates through the site, the browser makes additional requests for the fragments of code that haven't been cached yet and creates the required views, known as lazy loading. This is another feature that you could implement yourself, but it may be worth using existing packages and plugins instead, such as webpack's aggressive splitting plugin.
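
A minimal sketch of route-based splitting with dynamic import(); the route names, file paths, and the render() contract of each view module are hypothetical, and bundlers such as webpack turn each import() into a separately loaded chunk:

```javascript
// Map each route to a loader that fetches its chunk on demand.
const routes = {
  '/': () => import('./views/home.js'),
  '/cart': () => import('./views/cart.js'),
};

async function navigate(path) {
  const load = routes[path];
  if (!load) return;
  const view = await load(); // the chunk is fetched only when the route is visited
  view.render(document.getElementById('app')); // hypothetical module contract
}
```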

Closing Thoughts


Nowadays, through improvements in Internet browsers, the expectations toward mobile websites are set very high. The purpose of the first websites over 20 years ago was simply to share information; these days the Internet provides everything from grocery shopping, maps, real estate, social networks, chatting, tickets… everything. If you are hoping for maximum engagement from your customers, improving their mobile experience by delivering content fast and reliably may be the way to go.

Written by Kaela Coppinger · Categorized: DevOps, Java Engineering, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: best practices, INRHYTHMU, JavaScript, learning and growth, product development, PWAs, software engineering, ux

Jan 03 2023

Creating Robust Test Automation For Microservices

Overview


Any project that a software engineer joins will come in one of two forms: a greenfield or a legacy codebase. In the majority of cases, projects fall into the realm of legacy repositories. As a software engineer, it is your responsibility to strategically navigate your way through either type of project by looking objectively at the opportunities to improve the code base, lower the cognitive load for software engineering, and advise on better design strategies.

But, chances are, there is a problem. Before architecture or design refactors are undertaken, it's best to take a pulse on the health of the platform end to end (E2E). The reason: lurking in a new or existing platform is likely a common ailment of a modern microservices approach, namely the inability to test the platform E2E across microservices that are, by design, commonly engineered by different teams over time.

Revitalizing Legacy Systems


One primary challenge faced by a number of software engineers is adaptive work on a greenfield platform that has fallen several months behind from a quality assurance perspective. It is no longer possible for QA to catch up, nor is it possible for QA to engineer and execute the E2E testing needed to complete common user journeys throughout the enterprise system.

To solve this conundrum, E2E data generation tools need to be created so that the QA team can keep up, building and testing every scenario and edge case.

There are three main requirements for an E2E account and data generation tool.

The tool should:

1) Create test accounts with mock data for each microservice

2) Link those accounts between upstream and downstream microservices

3) Provide easy-to-access APIs that are self-documenting

Using a tool like Swagger, QA can use the REST API description, i.e. the OpenAPI Specification (formerly the Swagger Specification), to view the available endpoints and operations to create accounts, generate test data, authenticate, authorize, and "connect the microservices."
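
As an illustration of what such a tool's surface might look like, here is a minimal, hypothetical Express sketch of a test-account endpoint; the route, fields, and the downstream linking call are illustrative only and not taken from any real platform:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

// Requirements 1 and 2: create a test account with mock data, then link it
// to downstream microservices so E2E journeys can run across services.
app.post('/test-data/accounts', async (req, res) => {
  const account = {
    id: `test-${Date.now()}`,
    type: req.body.type || 'standard',
    permissions: req.body.permissions || ['read'],
  };
  // await linkToDownstreamServices(account); // hypothetical helper
  res.status(201).json(account);
});

// Requirement 3: the same app would also serve its OpenAPI document
// (for example via swagger-ui-express) so the endpoints are self-documenting.
app.listen(3000);
```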


Closing Thoughts

By creating tools for E2E testing, the QA team was able to eliminate the hassle of trying to figure out which upstream and downstream microservices needed to be called to ensure that the required accounts and data were available and set up properly for a successful test of all scenarios, i.e. across the variety of different data types, user permissions, and user information, and covering the negative test cases. The QA team was able to catch up and write their entire suite of test scenarios, generating the matching accounts and data to satisfy those requirements. The net result of having built an E2E test generation tool was that automated tests could be produced exponentially quicker and the tests themselves were more resilient to failure.

Even though the microservices pattern continues to gain traction, developing E2E testing tools that generate accounts and test data across an enterprise platform will likely still remain a pain point.

There’s no better way to maintain a healthy system than to ensure accounts and data in the lower environments actually work and unblock testing end-to-end. 

Written by Kaela Coppinger · Categorized: Agile & Lean, Cloud Engineering, Java Engineering, Product Development, Software Engineering · Tagged: cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering, testing

