
InRhythm

Your partners in accelerated digital transformation


Mar 30 2023

Weekly InRhythmU Zoom Lightning Talks: Introduction To Flyway For Data Migration

Every Thursday at 5:15pm, one of our InRhythm team members shares their knowledge on evolving technology trends in the industry – so we can learn and grow together!

This week, join InRhythm’s own Lead Software Engineer, Ted Parton, as he takes us on a technical deep dive into Flyway for data migration and its best use cases.

Written by Kaela Coppinger · Tagged: Cloud, Data Migration, Flyway, INRHYTHMU, software engineering

Mar 30 2023

A Comprehensive Guide To Playwright’s Debugging And Tracing Features

Based on a Lightning Talk by: Alex Kurochka, Senior SDET Engineer @ InRhythm on March 17th, 2023 as part of the Propel Spring Quarterly Summit 2023

Author: Ted Parton, Lead Software Engineer @ InRhythm

Overview

Design Credit: Joel Colletti, Lead UI/UX Designer @ InRhythm

We recently held our Spring Summit consisting of six workshops hosted by each of our practice areas. On March 17th, 2023, our SDET Practice led a series of talks and workshops on Microsoft’s Playwright.

Playwright is a tool that enables end-to-end testing of modern web applications. Playwright works with all modern web browsers, including Chromium, Firefox, and WebKit.

In this article, we will go over three tools for debugging with Playwright:

  • Playwright Inspector
  • Playwright Test For VSCode
  • Trace Viewer

Fixtures


Before we get into specific tools, let’s talk about Playwright Fixtures.

For those unfamiliar with test fixtures, these can be useful in establishing an environment for each test. That is, a fixture can provide everything a test needs to run. It is recommended that your fixtures provide only the absolutely necessary things to run and nothing else. In Playwright, fixtures are isolated between tests. With fixtures, you can group tests based on their meaning, instead of their common setup.

According to the official Playwright documentation (source: https://playwright.dev/docs/test-fixtures), Fixtures have a number of advantages over before/after hooks:

  • Fixtures encapsulate setup and teardown in the same place so it is easier to write
  • Fixtures are reusable between test files – you can define them once and use them in all your tests. That’s how Playwright’s built-in page fixture works
  • Fixtures are on-demand – you can define as many fixtures as you’d like, and Playwright Test will setup only the ones needed by your test and nothing else
  • Fixtures are composable – they can depend on each other to provide complex behaviors
  • Fixtures are flexible. Tests can use any combination of the fixtures to tailor the precise environment they need, without affecting other tests
  • Fixtures simplify grouping. You no longer need to wrap tests in describes that set up environment, and are free to group your tests by their meaning instead
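
To make this concrete, here is a minimal sketch of a custom fixture in TypeScript, loosely modeled on the pattern shown in the Playwright documentation. The `TodoPage` page object, its selectors, and the demo URL are illustrative assumptions, not something from the talk:

```typescript
// todo-fixture.spec.ts: a minimal custom fixture sketch (names are illustrative)
import { test as base, expect, type Page } from '@playwright/test';

// A hypothetical page object the fixture will hand to each test
class TodoPage {
  constructor(public readonly page: Page) {}
  async goto() {
    await this.page.goto('https://demo.playwright.dev/todomvc');
  }
  async addItem(text: string) {
    await this.page.getByPlaceholder('What needs to be done?').fill(text);
    await this.page.keyboard.press('Enter');
  }
}

// Extend the base test with a `todoPage` fixture: everything before use() is
// setup, everything after use() is teardown, and each test gets a fresh instance.
const test = base.extend<{ todoPage: TodoPage }>({
  todoPage: async ({ page }, use) => {
    const todoPage = new TodoPage(page);
    await todoPage.goto();   // setup
    await use(todoPage);     // the test body runs here
    // teardown (e.g. clearing created data) would go after use()
  },
});

test('can add a todo item', async ({ todoPage }) => {
  await todoPage.addItem('buy milk');
  await expect(todoPage.page.getByTestId('todo-title')).toHaveText('buy milk');
});
```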

Playwright Inspector


Playwright Inspector is the default debugging tool for Playwright. Playwright scripts typically run in headless mode, but in debug mode the browser opens in headed mode and Playwright Inspector opens alongside it, giving you a GUI for troubleshooting your script during the test run.

To enable it, add `--debug` to the command line. You can specify which test to run by adding `-g "test name"`, so a full invocation looks something like `npx playwright test -g "test name" --debug`.


When you run tests in debug mode, the Playwright Inspector window will open, showing the code to be executed and the debugger controls.


Once we have started our application with Playwright debugging on, we can step through the code in the Playwright Inspector and choose when to use our fixtures for our tests. You can also let the tests run, and they will stop at any part that breaks. From there you can look at the error report generated by Playwright, which can include lots of useful information such as timeouts, missing variables, unexpected results, and so on. In addition to the report, you can also view the results in the terminal in VSCode.


With Playwright Inspector you can set breakpoints to help you debug. You may find it helpful to add a breakpoint on a line before the line you know is broken. Breakpoints are set in code with an `await page.pause()` statement. This gives you the ability to look at current variables and settings before you get to the line you are attempting to diagnose. But where Inspector truly shines is helping you debug the web page’s document object model (DOM).

The browser console can be used to debug locators while running tests in debug mode with Playwright Inspector. A JavaScript `playwright` object is available for evaluating different locators while the test run is stopped at a breakpoint. To test locators, use the `playwright` object’s methods, such as `playwright.locator("string-locator")` and `playwright.inspect("string-locator")`.
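
As a rough sketch of how a breakpoint fits into a test (the URL, locators, and test name here are illustrative, not taken from the talk):

```typescript
import { test, expect } from '@playwright/test';

test('checkout shows an order total', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // illustrative URL
  await page.getByRole('button', { name: 'Place order' }).click();

  // Execution stops here in headed/debug mode, opening Playwright Inspector.
  // While paused, you can inspect state and try locators in the browser
  // console (e.g. playwright.locator('text=Order total')) before reaching
  // the assertion you suspect is broken.
  await page.pause();

  await expect(page.getByTestId('order-total')).toBeVisible();
});
```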

Playwright Test For VSCode


Playwright Test for VSCode is a plugin that helps you integrate Playwright into your VSCode workflow. According to Microsoft, this plugin can:

  • Install Playwright
  • Run Tests With A Single Click
  • Run Multiple Tests
  • Show Browsers
  • Pick Locators
  • Debug Step-By-Step, Explore Locators
  • Tune Locators
  • Record New Tests
  • Record At Cursor

You can install this extension by clicking on the extension tab/button in VSCode, then search for “playwright.” Click on the one that says “Playwright Test for VSCode” by Microsoft.


Once installed, it will scan your project for all of your tests and group them together. With this integrated into VSCode, a play button will appear beside tests in your test files, which makes it much easier to start and debug tests than with Playwright Inspector, where you need to use the command line.

Additionally, this adds a Playwright panel to VSCode that makes it easier to toggle options, such as whether your tests run in headless mode. In short, this plugin adds a lot of nice features designed to provide a better user experience for testers.

Trace Viewer


Finally, we have the Trace Viewer. Playwright Trace Viewer is a GUI tool that helps you explore recorded Playwright traces after the script has run. You can open traces locally or in your browser at trace.playwright.dev.

There are a couple of ways to enable the Trace Viewer. The first is the command line, where you add the option `--trace on`; alternatively, you can enable (or disable) tracing in the Playwright settings file.

Results are stored in a trace folder. To open a trace via the command line, enter `playwright show-trace <path-to-file>` and hit ENTER. The Trace Viewer provides a lot of detailed information, such as page load times, calls to resources, and which JavaScript functions are being called.
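
For reference, one hedged example of enabling tracing from the settings file, assuming a standard playwright.config.ts:

```typescript
// playwright.config.ts: one way to turn tracing on for a whole project.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // 'on' records a trace for every test; 'on-first-retry' is a cheaper
    // option that only records traces when a failing test is retried.
    trace: 'on',
  },
});
```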

Closing Thoughts


In conclusion, though each of these tools has its pluses and minuses, utilizing a combination of all three can help you take your diagnostics to a whole new level.

Written by Kaela Coppinger · Categorized: DevOps, Software Engineering, Web Engineering · Tagged: best practices, debugging, devops, INRHYTHMU, learning and growth, Microsoft, Playwright, Propel, Propel Spring Quarterly Summit 2023, Propel Workshop, software engineering, Tracing

Mar 23 2023

How To Write A Good Pull Request

Overview


Pull Requests are the backbone of open source software development. They allow contributions by anyone, from anywhere. PRs are also a vital form of communication, even within a localized development team working on proprietary software.

What makes a good Pull Request?

Let’s break it down into 4 Rules Of Thumb:

  • Provide Context
  • Make It As Small As Possible, But Not Any Smaller
  • Take Screenshots
  • Ask For Assistance

Provide Context


Providing context is an important first step in guiding the reviewer. Use this as an opportunity to explain why you are making this change. This can be as simple as referring to the bug/defect/issue number, and as detailed as necessary to describe your change.

In this example, the PR description links to the original issue on the npm.community site and directly to the pnpm source that was referenced in that issue.

https://github.com/npm/cli/pull/32#issue-204418739

In this more complicated example, the goal was to stay consistent with how other PRs had been made to this repository; one of the suggestions is the old “when in Rome” rule. Some repositories even provide a template, which can be helpful, but this one didn’t. So, turn each item discussed in the issue into a bullet point, checking off the ones that were completed and noting anything you’re not sure about.

https://github.com/npm/cli/pull/61#issue-211591081

Make It As Small As Possible, But Not Any Smaller


Everyone agrees that smaller PRs are easier to review. Sometimes it’s just not possible to make a very small change, so here’s some practical advice: don’t do more than necessary. If you need to stray from the core of the change you are making, separate it.

For example – imagine you are adding a path through the code to handle a new requirement. Along the way you realize that some of the variables and functions could use better names, but changing those means that you now need to update a bunch of files in another area of the code. STOP! Don’t make that change! Or at least, don’t bunch it together with your other changes. Instead, make the rename change in its own commit, and probably in its own Pull Request too. In that PR you can explain why you are making the change and how it simplifies the real change you actually want to make.

This same strategy should be applied to most whitespace changes and refactorings you want to do during the course of implementing a new feature or resolving a defect. Be considerate of the reviewer’s time. There is nothing more frustrating than hunting through all the changes looking for the actual change and bumping into whitespace tweaks and renamings spread across many files.

Take Screenshots


So you’re working on a story that affects the UI? Maybe you are fixing alignment in IE11, or adding a new interstitial modal when the user clicks a button. The code will get reviewed as it always does, but many people struggle with visualizing layout changes or CSS tweaks. Include a screen capture of the before and after. It’s usually pretty easy to get a before shot – just use the QA or production environment. Then for the after shot, use your local server. Both GitHub and Atlassian BitBucket allow you to paste images, so you can literally SHIFT-CTRL-CMD-4 (OSX) to copy a section of the screen to your clipboard, then CMD-V to paste it into the input box of your PR description.

Another incredibly helpful option is to use an application like GIPHY Capture to record an animated GIF that can be added to your PR. These are great for when you want to show an animation or a sequence of steps.

Let’s face it, it’s a pain for the reviewers to fire up their Windows vm to try out your change that resolves an Edge problem. Make their life easier by including an animated GIF that shows exactly what changed right in the Pull Request!

Ask For Assistance


Many engineers are big fans of the quote “it is better to beg for forgiveness than to ask for permission.” In so many instances, not just in software, this rule helps save time. But making too big a change in your PR may be received poorly, especially when you are not a regular contributor.

Be cautious and curious in order to lead a better engagement and ultimately, a better solution.

Here is a comment made on a PR to the tslint project. You can see how the reviewer’s feedback is acknowledged, and a clarifying question is asked because of the impact the change would have on so many files. This lets the reviewers know that you respect and consider the size of the changes coming into their code base, and that you want to be collaborative in finding the best solution.

https://github.com/palantir/tslint/pull/1738#issuecomment-261527450

Closing Thoughts

What other things do you like to do in your PRs? What kinds of things would you like to see more of in PRs that you are reviewing? What would you like to see less of?


Written by Kaela Coppinger · Categorized: DevOps, Java Engineering, Product Development, Software Engineering · Tagged: agile, best practices, growth, INRHYTHMU, JavaScript, learning and growth, software engineering

Feb 28 2023

How To Structure PWAs With PRPL Patterns

Overview


It’s been over 10 years since the release of the first model of the iPhone. Back then, most people had primitive mobile devices, limited mostly to making calls and receiving brief text messages.

Anything close to decent was considered a pleasant user experience when it came to mobile. Nobody was concerned about the status quo, because nobody was using unstable mobile devices on a daily basis to browse through sites, make purchases, etc. (at least, not yet)


Over the years, however, a powerful shift has moved users’ primary point of entry from desktop machines with fast, reliable network connections to relatively underpowered mobile devices with connections that are often slow or flaky. Unfortunately, Google reports that 53% of users abandon sites that take longer than 3 seconds to load, while the average load time is up to 19 seconds on a 3G connection and 14 seconds on a 4G connection.

Now you might ask yourself: right, but how does that happen? Why does the page load take 19 seconds? I wrote some CSS, it is responsive, it should work!

Here’s the problem: the UI looks like it works, but it doesn’t work in the real world. If you think about your mobile users, a good amount of them are still using median devices—the ones they receive for free with a new mobile plan, with just 1GB of RAM. They are a little (or even a lot) better than years ago, but still slow and suffering from poor connectivity.


There’s clearly a significant gap between today’s consumer expectations, the capabilities of their devices, and the mobile behavior of most sites. The patterns we have developed for building feature-rich web apps are just not sufficient for a mobile device user anymore. In order to create the best experience, the PRPL pattern can be key to improved mobile website development and user experience.

PWAs To The Rescue


When trying to ensure that a web app is suitable for a mobile device, most organizations develop responsive apps. It could appear as a great solution to our previously mentioned problem: the pages automatically respond to the screen size, UX stays consistent across all platforms, and we only have one code base for both mobile and desktop platforms. Unfortunately, this solution comes with some limitations. Responsive Web Design has clear network dependency; as soon as the connection is lost, your page is gone. If your connection is slow, you will automatically see layout and UI glitches.

Responsive Web Design is a fast and simple solution; it doesn’t solve all problems, but it does solve some of them, and quickly. It works best, however, as a natural stepping stone toward a Progressive Web App. While PWAs are still relatively new and emerging, this architecture lets your app keep the main behaviors of RWD while also offering advanced features such as push notifications and GPS awareness. Not only is the app visible immediately after entering the page, but it also works better on a slow internet connection. What’s more, thanks to clever caching methods, your content can be visible and flawless even if you are not connected to the internet.


One of the ways to achieve that improved behavior lies in a pattern for structuring and serving Progressive Web Apps with an emphasis on the performance of app delivery and launch.

It’s known as the PRPL pattern:

  • push
  • render
  • pre-cache
  • lazy-load

It is not a specific technology or tool, but more of a mindset and a long-term plan for improving the performance of the mobile web. The specific implementation of each of the steps is outside the scope of this article, but feel free to do additional research for more information.

Page Loading Process


What does it take to load a page, from the moment you first open that page to the moment it’s fully loaded and you can interact with it? When you try to open a site on a mobile device, an initial request is sent to a remote server somewhere far away. After some time, the server brings the response, usually in the form of an HTML document. After that, your browser runs through the HTML file to check what other resources are needed; for each additional resource, your browser needs to make a separate call to the server in order to get that resource. You’ve probably noticed: that’s a lot of calls. How do we optimize that performance?

Push Critical Resources


Not every file in your application has the same level of importance. Browsers know this, and using their own heuristics they decide which files to fetch first. It’s still useful to tell the browser which files are most important to us. There are multiple ways of preloading critical resources faster; some of them include rel=”preload” and rel=”prefetch”, and you may also want to explore webpack options.

It may be useful to keep in mind that prefetch is better suited to resources needed for future navigation routes. In general, both of these methods allow you to mask the initial latency by preparing the resources that are important but usually take some time to load. This way your browser reads through the HTML and instantly warms up the connection with the source, so by the time it gets to the last line of the HTML file, the resource is ready to be rendered.
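
As one hedged illustration of the webpack route (the module paths here are hypothetical), dynamic imports can be annotated with magic comments so the bundler emits the prefetch/preload hints for you:

```typescript
// Hypothetical module paths; webpack turns these magic comments into
// <link rel="prefetch"> / <link rel="preload"> hints for the split-out chunks.

// Likely needed on a future navigation: fetch it when the browser is idle.
export async function openLoginModal() {
  const { LoginModal } = await import(/* webpackPrefetch: true */ './components/LoginModal');
  return new LoginModal();
}

// Needed together with the current chunk: request it in parallel, right away.
export async function loadCheckout() {
  const { Checkout } = await import(/* webpackPreload: true */ './routes/Checkout');
  return new Checkout();
}
```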

Render An Initial Route As Soon As Possible


Providing basic user experience as soon as possible is critical when it comes to convincing users that the site they entered is worth staying on. How does it feel when you open a site that starts loading, and the only thing you see for the next 15 seconds is a blank screen? I always ask myself: is it loading? Is my connection not working? Maybe it’s my phone that is not working? Downloading and processing external stylesheets is probably blocking the content from being rendered until the whole process has finished. That creates an opportunity for improvement.

There are some parts of an application that can be pushed earlier to provide some basic user experience and assure the user of the loading progress. One method is to extract the styles responsible for the minimum initial render and inline them in the HTML document. You can either implement that solution yourself or use an existing package such as the critical package. This way the browser is able to render those styles right away. Another approach to improving first paint is to server-side render the initial HTML of your page. This displays content immediately to the user while scripts are still being fetched, parsed, and executed. However, it can increase the payload of the HTML file significantly, which can harm the time it takes for your application to become interactive and thereby respond to user input. There is no single correct solution to reducing the initial load of your application, and you should only consider inlining styles and server-side rendering if the benefits outweigh the tradeoffs for your application.
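
As a rough sketch of what these two ideas can look like together, here is a minimal server-rendered shell with its critical styles inlined. Express, the markup, the class names, and the file paths are illustrative assumptions, and a tool like the critical package mentioned above can automate the style extraction:

```typescript
// A minimal server-rendering sketch: the app shell and first-paint styles are
// inlined in the HTML response, while the full stylesheet and JS bundle load
// without blocking the first render.
import express from 'express';

const app = express();

// Only the styles needed for the first paint (shell layout, header, spinner).
const criticalCss = `
  body { margin: 0; font-family: sans-serif; }
  .app-shell { min-height: 100vh; display: flex; flex-direction: column; }
`;

app.get('/', (_req, res) => {
  res.send(`<!doctype html>
<html>
  <head>
    <style>${criticalCss}</style>
    <!-- Full stylesheet loads without blocking first paint -->
    <link rel="stylesheet" href="/styles/main.css" media="print" onload="this.media='all'">
  </head>
  <body>
    <div class="app-shell">Server-rendered shell content goes here.</div>
    <script src="/bundle.js" defer></script>
  </body>
</html>`);
});

app.listen(3000);
```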

Pre-Cache Remaining Routes


As you probably already noticed, minimizing server-side trips can be crucial in the process of shortening page load time. Here’s where the service worker really shines. Using a service worker cache allows you to store the resources that make up the shell. On repeat visits, your browser can fetch assets directly from the cache rather than the server. This way your user will not only be able to use your application offline, but also enjoy a much faster page load. You can either create the service worker file and write the logic yourself, or use libraries such as Workbox that can make this process easier.
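
A minimal hand-written sketch of that shell pre-cache might look like the following; the asset list is illustrative, the events are typed loosely to keep the sketch short, and Workbox wraps the same idea in a friendlier API:

```typescript
// sw.ts: a hand-rolled app-shell pre-cache (Workbox can generate the equivalent).
const SHELL_CACHE = 'app-shell-v1';
const SHELL_ASSETS = ['/', '/styles/main.css', '/bundle.js', '/offline.html'];

self.addEventListener('install', (event: any) => {
  // Download and store the shell while the service worker installs.
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS))
  );
});

self.addEventListener('fetch', (event: any) => {
  // Serve cached shell assets first; fall back to the network for everything else.
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```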

Lazy-Load


We’ve arrived at the moment when all of our assets are finally delivered by the server at the speed of light, but the initial paint is still slow; what’s taking so long? Almost always the most expensive asset happens to be a JavaScript bundle. From the moment it gets loaded to the moment the UI gets fully interactive, your browser goes through a few phases: it has to download the files, parse through them, compile, and finally execute. In simple terms, after your browser’s received all the resources, it now has to compute what all the files combined together look like, and how they work together. The bigger the bundle you ship, the longer it will take for the browser to parse through it and put it together.

What does it really mean for the user? Shipping a large bundle of JavaScript can significantly delay how your user will be able to interact with UI components. That means your user will be tapping on the UI without anything meaningful happening. The previously mentioned phases don’t take a lot of time on a desktop machine, but on a median mobile device, it can take forever. So how do we manage to quickly load the rest of the code necessary for the application to run? Should we just load the entire code all at once?


Instead of providing users with all of the code that makes up the entire application as soon as they land on a site, you could split the code based on routes, otherwise known as code splitting. The idea behind it is to give the user small chunks of code that serve the currently used route. As the user navigates through the site, the browser makes additional requests for the fragments of code that haven’t been cached yet and creates the required views, known as lazy loading. This is another feature that you could implement yourself, but it may be worth using existing packages and plugins instead, such as webpack’s aggressive splitting plugin.
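
A hedged sketch of route-based code splitting with plain dynamic imports (the view modules, their shape, and the paths are hypothetical) might look like this:

```typescript
// Route-level code splitting: each view sits behind a dynamic import, so the
// bundler emits one chunk per route and the browser only downloads the chunk
// for the route the user actually visits.
type ViewModule = { render: (root: HTMLElement) => void };

const routes: Record<string, () => Promise<ViewModule>> = {
  '/': () => import('./views/home'),
  '/search': () => import('./views/search'),
  '/settings': () => import('./views/settings'),
};

async function navigate(path: string): Promise<void> {
  const loadView = routes[path] ?? routes['/'];
  const view = await loadView(); // the chunk is fetched lazily on first visit
  view.render(document.getElementById('app')!);
}

// Routes already visited (and pre-cached by the service worker) load instantly.
window.addEventListener('popstate', () => navigate(location.pathname));
```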

Closing Thoughts


Nowadays, through improvements in Internet browsers, the expectations toward mobile websites are set very high. The purpose of the first websites over 20 years ago was simply to share information; these days the Internet provides everything from grocery shopping, maps, real estate, social networks, chatting, tickets… everything. If you are hoping for maximum engagement from your customers, improving their mobile experience by delivering content fast and reliably may be the way to go.

Written by Kaela Coppinger · Categorized: DevOps, Java Engineering, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: best practices, INRHYTHMU, JavaScript, learning and growth, product development, PWAs, software engineering, ux

Feb 21 2023

When To Implement Pair Programming

Overview


A vast number of companies embrace pair programming as a way to increase programmer productivity (loosely defined as delivering “value,” which can take many forms, but is generally interpreted as writing more code per day), but is it really that great? It is worth asking why we should pair program and when is the right time to embrace it as a strategy.

Pair programming, as understood by the software community, came into popular culture as a facet of XP (extreme programming), a development framework that enforces practices that generally improve software quality and responsiveness. The idea is a new incarnation of the old adage: “two minds are better than one.”

Either way, the idea is right – two people have different histories, cultures, and experiences, so for those reasons they think about things in different ways. When two people work on a problem together they almost always come out with a better solution than a solo venture.

So how does this relate to programming?

The Positives


Pair Programming Does Reduce Bugs

The primary driver for pair programming is to increase quality and decrease bugs. When done well, it does that spectacularly. One study found that pairing reduced bugs in production by 15%!

Pairing Does Increase Code Quality

Many of the benefits of pair programming are not actually technical: they’re social. When working with a peer, it’s normal to feel encouraged to do one’s best, keeping the code clean and avoiding technical debt that someone will have to “fix later.”

When pairing, one tends to do things just a little bit cleaner, making algorithms easier to read and naming variables more sensibly. A pairing team will actually write unit tests to 100% coverage! With two sets of eyes, the quality is always higher.

The Variables To Keep In Mind


Pair Programming Does Not Entirely Eliminate Bugs

As much as one would love for pairing to just eliminate all bugs, it’s just not the reality. Bugs still happen. There are generally a lot fewer of them, but perfect code would require perfect programmers.

Pair Programming Does Not Fix Poor Product Direction

Good projects need very strong product direction. And to be clear, the responsibility for this direction is on everyone, not just “product people.” It begins with asking questions and making informed decisions about the work to be done. Then the team needs to thoroughly discuss the work, breaking it down as much as possible to understand the full scope. If this isn’t done properly, deadlines are missed, everyone is stressed, and work that should take minutes takes days. Pair programming can’t fix that. Nothing is more important than good product direction.

Pair Programming Needs To Be Done Right To Mean Anything

Pair programming is a tool meant to help make a difficult problem more digestible.

Pair programming, when done correctly, generally means one person is writing code and the other is directing the work. Directing in this case means providing feedback about best practices and constructive criticism. It also means researching those best practices when one doesn’t know them off the top of their head and taking the time to think deeply about possible edge cases and hangups relevant to the work at hand.

Pairs should communicate thoroughly, share all relevant information about their work, and swap duties as often as possible. It’s taxing to think about problems in both a creative and technical way, so it’s better to distribute that work. That’s one big reason that pair programming is such an effective tool.

When To Implement Pair Programming


Pair Program When There Is A Very Difficult Problem At Hand

If you have a problem that cannot reasonably be broken down into smaller parts, it should be met by multiple programmers.

An 8-point story should generally never exist in an organization doing normal web work. Features can almost always be broken down into “front-end” and “back-end” stories. Whole-page mockups can be broken down into component parts. Design and QA phases can also be separated out into their own stories. But a really tough problem is just a really tough problem.

Trying to add a new feature to a language is a really tough problem. Trying to figure out how to reduce the latency on database calls is a really tough problem. These are examples of problems that require both creative and technical thinking.

Pair Program When Two Programmers Are At Completely Different Skill Levels

Pair programming is a remarkably good way to teach junior programmers. Getting to participate live while a more senior programmer talks about how and why they’re doing something is an invaluable experience. So is writing code while a more senior programmer reinforces better practices on the fly.

Pair Program When Two Programmers Have Completely Different Skill Sets

Having two programmers with complementary skill sets can be very rewarding for both the programmers and the codebase. Pairing programmers who usually work only on the front end or only on the back end can get an end-to-end feature out the door, and a Postgres expert pairing with a Scala expert can make a database call more efficient.

When two programmers work together live, they absorb a lot of knowledge about each other’s domain and ensure there’s no aspect of the project that’s neglected.

Pair Program When Both Programmers Are New To A Language/Framework

Sometimes a situation arises where nobody is an expert. This is an excellent learning and growth opportunity! 

A project will end up with two programmers working through a difficult problem, contributing their individual skills to build a better product, helping each other learn, and building redundancy in skills. This is important because the skills within an organization should never be concentrated in one person. Having programmers pair on new languages and/or frameworks ensures that there are at least two people who can work on them in the future.

Closing Thoughts


The best way to approach pairing is to partner two programmers and have them share a computer. Make them work together to architect, code, and then test their code in a genuine spirit of partnership. While the ideal setup would include two programmers who are equally skilled (expert – expert or novice – novice), you can also use pair programming for training and educational purposes (expert – novice).

The pair should be able to decide how to split the work, and it is advisable that they should switch roles often.

Written by Kaela Coppinger · Categorized: Agile & Lean, Learning and Development, Product Development, Software Engineering · Tagged: best practices, INRHYTHMU, learning and growth, pair programming, product development, Programming, software development, software engineering
