
Sep 21 2022

How To Write A Great Test Case

Overview

A test case is exactly what it sounds like: a test scenario measuring functionality across a set of actions or conditions to verify the expected result. Test cases apply to any software application, can be executed manually or as automated tests, and can be managed with test case management tools.

Most digital-first business leaders know the value of software testing. Some value high-quality software more than others and might demand more test coverage to ultimately satisfy customers. So, how do they achieve that goal?

They test more, and test more efficiently. That means writing test cases that cover a broad spectrum of software functionality. It also means writing test cases clearly and efficiently, as a poor test can prove more damaging than helpful.

A key thing to remember when writing test cases is that each one is intended to test a single basic variable or task, such as whether or not a discount code applies to the right product on an e-commerce web page. Keeping that scope narrow gives a software tester more flexibility in how to test code and features.

In Nathan Barrett’s Lightning Talk session, we will be breaking down the following topics:

  • What Is A Test Case?
  • What Makes A Good Test Case?
  • Live Demonstration
  • Closing Thoughts

What Is A Test Case?

At a high level, to “test” means to establish the quality, performance, or reliability of a software application. A test case is a repeatable series of specific actions designed to either verify success or provoke failure in a given product, system, or process. 

A test case gives detailed information about the testing strategy, testing process, preconditions, and expected output. Test cases are executed during the testing process to check whether the software application performs the task for which it was developed. A passed test case functions like a receipt verifying the correct functionality of the subject of the test. 

To write a test case, we must have the requirements from which to derive the inputs, and the test scenarios must be written so that no feature is missed during testing. We should also have a test case template to maintain uniformity, so that every test engineer follows the same approach when preparing the test document.

Test cases serve as the final verification of functionality before it is released to the product's end users. 

What Makes A Good Test Case?

Writing test cases varies depending on what the test case is measuring or testing. This is also a situation where sharing test assets across dev and test teams can accelerate software testing. But it all starts with knowing how to write a test case effectively and efficiently.

Test cases have a few integral parts that should always be present, as well as some “nice to have” elements that can only enhance the presented results. 

Required Elements:

  • Summary
    • Concise, direct encapsulation of the purpose of the test case 
  • Prerequisites
    • What needs to be in place prior to starting the test?
    • Bad Prerequisites: captured in test steps, not present, overly specific
    • Good Prerequisites: concise/descriptive, lays out all set-up prior to testing, includes information to learn more if desired 
  • Test Steps
    • The meat of the test case
    • Good Test Steps: each step is a specific, atomic action performed by the user with its own expected result; divergent paths are called out where necessary; and each step cites which test data from the prerequisites needs to be applied
    • Really great test steps should treat the user like they know “nothing” and communicate everything from start to finish
  • Expected Results
    • How do we know that the test hasn’t failed?
    • Bad Expected Results: Page loads correctly, view looks good, app behaves as expected
    • Good Expected Results: Landing page loads after spinner with user’s account details present, view renders with all appropriate configurations (title, subtitle, description, etc.), toggle changes state when tapped (enabled→disabled)

Preferred Additional Elements:

  • Artifacts
    • Screenshots, files, builds, configurations, etc.
  • Test Data
    • Accounts, items, addresses, etc.
    • What information is needed during the test?
    • Pre-rendered prerequisite fulfillment
  • Historical Context
    • Previous failures, previous user journeys, development history, etc.
    • Has this feature been “flakey” in the past?
    • What are previous failure points?
    • How critical is this feature?

The very practice of writing test cases helps prepare the testing team by ensuring good test coverage across the application, but writing test cases has an even broader impact on quality assurance and user experience.
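
To make these elements concrete, here is a minimal sketch of how the same structure carries over to an automated check. It uses JUnit 5 and a hypothetical DiscountService standing in for the e-commerce discount example mentioned earlier; neither the class nor the scenario comes from the talk itself. Each comment maps a required element (summary, prerequisite, test step, expected result) onto a line of the test.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountCodeTest {

    // Hypothetical system under test: applies a 10% discount for the code "SAVE10".
    static class DiscountService {
        double apply(String code, double price) {
            return "SAVE10".equals(code) ? price * 0.9 : price;
        }
    }

    // Summary: a valid discount code reduces the price of an eligible item.
    @Test
    void validCodeAppliesTenPercentDiscount() {
        // Prerequisite: a DiscountService configured with the "SAVE10" code.
        DiscountService service = new DiscountService();

        // Test step: apply the code to a $100.00 item.
        double discounted = service.apply("SAVE10", 100.00);

        // Expected result: the price is exactly $90.00, not "the page looks right".
        assertEquals(90.00, discounted, 0.001);
    }
}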

Live Demonstration

Nathan Barrett has crafted an intuitive specificity exercise to help guide testers in understanding how they should be structuring their cases.

Be sure to follow Nathan’s entire Lightning Talk to follow along with these steps in real time.

Closing Thoughts

Test cases help guide the tester through a sequence of steps to validate whether a software application is free of bugs and working as required by the end user. A well-written test case should allow any tester to understand and execute the test.

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to understanding the beneficial prerequisites to writing a good test case for any type of application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about How To Write A Great Test Case as well as its importance in the software development process and to experience Nathan Barrett's full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: Cloud Engineering, Design UX/UI, DevOps, InRhythmU, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: devops, INRHYTHMU, learning and growth, SDET, software development, software engineering, ux, web engineering

Sep 16 2022

A Comprehensive Guide To Java’s New HTTP Client

Overview 

The Hypertext Transfer Protocol (HTTP) is the foundation of the World Wide Web, and is used to load web pages using hypertext links. HTTP is an application layer protocol designed to transfer information between networked devices and runs on top of other layers of the network protocol stack. A typical flow over HTTP involves a client machine making a message request to a server, which then sends a response message.

Java is a general-purpose, class-based, object-oriented programming language designed to have as few implementation dependencies as possible. It is a computing platform for application development. Java is fast, secure, and reliable, and is therefore widely used by everyone from the newest to the most advanced web developers. 

In Daniel Fuentes' Lightning Talk session, we will be breaking down the following topics:

  • What Is HTTP?
  • Improvements In HTTP 2.0
  • How HTTP 2.0 Impacts Java
  • Live Demonstrations
  • Closing Thoughts

The new HTTP 2.0 client was released in Java 11. This new client is used to request HTTP resources over the network. It supports HTTP/1.1 and HTTP/2.0 and both synchronous and asynchronous programming models, handles request and response bodies as reactive streams, and follows the familiar builder pattern.
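
As a minimal sketch of that builder pattern (https://example.com is a placeholder endpoint, not one from the talk), a synchronous HTTP/2 request looks roughly like this:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2ClientSketch {
    public static void main(String[] args) throws Exception {
        // Build a client that prefers HTTP/2 and negotiates down to HTTP/1.1 if the server requires it.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // Describe the request with the same builder style.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com"))
                .GET()
                .build();

        // Send synchronously and read the response body as a String.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.version() + " " + response.statusCode());
    }
}

The BodyHandlers factory shown here also offers handlers for byte arrays, files, and streams, so the same request shape works for bodies of any size.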

What Is HTTP?

HTTP is an application layer protocol designed to transfer information between networked devices. HTTP runs on top of other layers of the network protocol stack. 

HTTP is a protocol for fetching resources such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser. A complete document is reconstructed from the different sub-documents fetched, for instance, text, layout description, images, videos, scripts, and more.

Clients and servers communicate by exchanging individual messages (as opposed to a stream of data). The messages sent by the client, usually a Web browser, are called requests and the messages sent by the server as an answer are called responses.

The typical flow over HTTP involves a client machine making a request to a server, which then sends a response message. 

HTTP was invented alongside HTML to load web pages using links (hypertext). It was a part of the first interactive, text-based web browser: the original World Wide Web. Today, the protocol remains one of the primary means of using the Internet.

Improvements In HTTP 2.0

HTTP 2.0 is based on streams and binary frames, in contrast to the text-only request model of its previous iteration. Unlike text-only exchanges, streams can be multiplexed asynchronously over one TCP (Transmission Control Protocol) connection. As a result, HTTP 2.0 reduces latency and improves performance. 

How HTTP 2.0 Impacts Java

Java's HTTP support was commonly built upon the HttpURLConnection class – originally launched in 1999, when HTTP 1.0 was still a fresh protocol. With a backbone built on outdated technology, it was never able to evolve properly in response to the rapidly changing nature of web protocols. 

Its persistent incompatibilities and lack of ease of use led developers to opt out of Java's built-in class and instead employ third-party solutions (e.g., Apache, Netty, Eclipse, Google).

With the updated Java 11 developer toolkit came a number of operational changes – most notably, the adoption of HTTP 2.0. To meet the demands of an environment constantly in motion, Java made these forward-looking changes:

  • Eliminating the need for third-party client dependencies
  • Building in backwards compatibility with HTTP/1.1 for servers that have not yet made the switch to HTTP 2.0
  • Adding asynchronous support for issuing multiple HTTP requests (see the sketch after this list)
  • Vastly improving performance through header compression and the use of a single connection for multiple requests
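
As a sketch of that asynchronous model (again with placeholder URLs), several requests can be issued from one client and collected as CompletableFuture results:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AsyncRequestsSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // Fire off several requests without blocking; over HTTP/2 they can share a single connection.
        List<CompletableFuture<String>> bodies = List.of("https://example.com/a", "https://example.com/b")
                .stream()
                .map(url -> client.sendAsync(
                                HttpRequest.newBuilder(URI.create(url)).build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());

        // Block only at the end, once all the responses have arrived.
        bodies.forEach(future -> System.out.println(future.join().length() + " characters received"));
    }
}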

Live Demonstrations

Daniel Fuentes has crafted an intuitive demonstration to help guide you through the new Java HTTP client in practice.

Be sure to follow Daniel’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to exploring the basic components needed to test HTTP 2.0 in order to improve your application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Java’s Updated HTTP Server as well as its influence in web development and to experience Daniel Fuentes’ full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: DevOps, InRhythmU, Java Engineering, Product Development · Tagged: best practices, devops, HTTP, INRHYTHMU, Java 11, JavaScript, learning and growth, software development, Web Development

Sep 07 2022

Integrations For Apache – Flink, NiFi, And Kafka

Overview 

Apache is one of the go-to web servers for website owners and developers, with more than a 50% share in the commercial web server market. 

Apache HTTP Server is a free and open-source server that delivers web content over the internet. It is one of the oldest and most reliable web servers, maintained by the Apache Software Foundation, with the first version released in 1995. It is commonly referred to as Apache and, after its release, it quickly became the most popular HTTP server on the web.

It is a modular, process-based web server application that creates a new thread for each simultaneous connection. It supports a number of features, many of which are compiled as separate modules that extend its core functionality and provide everything from server-side programming language support to authentication mechanisms. Virtual hosting is one such feature, allowing a single Apache Web Server to serve a number of different websites.

In Tim Spann's Lightning Talk session, we will be breaking down the following topics:

  • What Is An Apache Integration?
  • Apache Flink
  • Apache NiFi
  • Apache Kafka
  • Live Demonstrations
  • Closing Thoughts

What Is An Apache Integration?

A software integration is the process of bringing together various types of software subsystems so that they create a unified single system. The integration should be carefully coordinated to result in a seamless connection of the separate parts. When done skillfully, the increased efficiency is a tremendous benefit to the Apache developer.

Implementing an integration that works best for an individual development team's workflow is essential to the future success of the project. Taking time to explore and walk through the available features will only benefit the user experience and help streamline the team's workflow. 

Apache Flink

Apache Flink is a real-time processing framework which can process streaming data. It is an open source stream processing framework for high-performance, scalable, and accurate real-time applications. It has a true streaming model and does not take input data as batch or micro-batches.

Example: the accompanying framework diagram shows the different layers that run as part of the Apache Flink ecosystem. 
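
Separately from the diagram, a minimal Flink job written against the Java DataStream API might look like the sketch below; the sample elements and job name are placeholders, not code from the talk.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkPipelineSketch {
    public static void main(String[] args) throws Exception {
        // Obtain the streaming execution environment (local in the IDE, or a cluster when deployed).
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A toy pipeline: filter and transform a stream of events, then print the results.
        env.fromElements("flink", "nifi", "kafka")
           .filter(word -> word.startsWith("k"))
           .map(String::toUpperCase)
           .print();

        // Nothing runs until the dataflow graph is executed.
        env.execute("pipeline-sketch");
    }
}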

Apache Nifi

Apache NiFi is a visual, data-flow-based system that performs data routing, transformation, and system mediation logic on data moving between sources and endpoints.

Apache Kafka

Apache Kafka is a distributed streaming platform that:

  • Publishes and subscribes to streams of records, similar to a message queue or enterprise messaging system
  • Stores streams of records in a fault-tolerant, durable way
  • Processes streams of records as they occur

Apache Kafka was built with the vision of becoming the central nervous system that makes real-time data available to all the applications that need it, with use cases ranging from stock trading and fraud detection to transportation, data integration, and real-time analytics.

Example: the accompanying framework diagram shows the operational flow of information using the Apache Kafka plug-in. 
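
A minimal producer sketch shows the Java side of that flow; the broker address, topic name, and record contents below are placeholders rather than code from the talk.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProducerSketch {
    public static void main(String[] args) {
        // Minimal configuration: where the brokers live and how to serialize keys and values.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Publish one record to a "trades" topic, then flush and close the producer.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("trades", "AAPL", "BUY 100 @ 172.50"));
            producer.flush();
        }
    }
}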

Live Demonstrations

Tim Spann has crafted an intuitive demonstration to help guide you through these different Apache integrations in practice.

Be sure to follow Tim’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts 

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to exploring the basic components needed to test how Apache Integrations can improve your personal data application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Apache Integrations in application development and to experience Tim Spann’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: InRhythmU, Learning and Development, Product Development, Web Engineering · Tagged: Apache, devops, Flink, INRHYTHMU, Kafka, learning and growth, NiFi, ux, Web Development, web engineering

Aug 23 2022

Web Bundling Alternatives

Overview

Bundling a full website as a single file and making it shareable opens up new use cases for the web. Imagine a world where one can:

  • Create their own content and distribute it in all sorts of ways without being restricted to the network
  • Share a web app or piece of web content with their friends via Bluetooth or Wi-Fi Direct
  • Carry their site on their own USB or even host it on their own local network

The Web Bundles API is an exciting proposal that could make this all possible and, in turn, simplify and streamline processes for the developer!

Web Bundling is the process of fetching, resolving, packing, and reducing a tree of dependencies into compressed static files that one can readily host on the web. 

In Wai Fai Lau's Lightning Talk session, we will be breaking down the following topics:

  • What Are Web Bundlers?
  • Why Web Bundlers?
  • Web Bundling Options
  • Live Demonstrations
  • Closing Thoughts

What Are Web Bundlers?

In the simplest terms, a Web Bundler is a tool that bundles all HTTP resources up into a single, web-optimized output folder. In doing so, it makes developers' lives simpler and their build processes easier to streamline.

The primary job of a bundler is to transpile the code down into something the browser can understand and then encapsulate those HTTP properties. The Web Bundler allows the developer to output all these properties into a singular folder that can then be included on a web page to load the entire application at once. 

Why Web Bundlers?

Web Bundlers provide a number of benefits for developers looking for succinct packaging. The primary areas of improvement associated with bundling are:

  1. Improvement Of Developer Experience
    • Dev server
    • Hot module replacement 
    • Debugging support
  2. Optimization Of Asset Production To Improve Performance And UX
    • Minification
    • Compressed and permanent code
    • Code splitting
    • Tree shaking

Web Bundling Options

There are a number of bundling tools available for developers looking to adopt the approach. The most important factors to weigh when considering a specific alternative are twofold: build speed and package size. A faster build and a smaller package are the ideal. 

Webpack

Webpack is currently the most popular alternative available to developers. It addresses the lack of tooling for complex single page applications. Before Webpack, developers had to manage dependencies manually.

Webpack is:

  • Highly customizable
  • A mature ecosystem 
  • A provider of responsive documentation and support
  • Capable of utilizing module federation 

Parcel

Parcel automatically tracks all files, configurations, plugins, and dev dependencies that are involved in the build, and granularly invalidates the cache when something changes. It integrates with low-level operating system APIs to determine what files have changed in milliseconds, no matter the project size.

Parcel is:

  • Faster than Webpack
  • Without a configuration requirement
  • A provider of a built in hot module replacement

Rollup

Rollup is a module bundler for JavaScript which compiles small pieces of code into something larger and more complex, such as a library or application. It uses the new standardized format for code modules included in the ES6 revision of JavaScript, instead of previous idiosyncratic solutions such as CommonJS and AMD.

Rollup is:

  • A simpler configuration than Webpack
  • A provider of automatic tree shaking
  • A smaller file size than Webpack 
  • A user of scope hoisting

esbuild

esbuild is a JavaScript bundler created by Evan Wallace. The code itself is written in Go with speed in mind, and it’s clear that the developer endeavored to avoid unnecessary allocations as much as possible.

esbuild is:

  • The fastest bundler currently on the market

Vite

Vite.js is a rapid development tool for modern web projects. It focuses on speed and performance by improving the development experience. Vite uses native browser ES imports to enable support for modern browsers without a build process.

Vite is:

  • Capable of hot module replacement
  • A provider of one of the fastest bundling times
  • An automated code splitter 
  • A provider of a multi-page support module right off the bat
  • Compatible with most popular framework templates

Live Demonstrations

Wai Fai Lau has crafted an intuitive demonstration to help guide you through these different Web Bundling alternatives in practice.

Be sure to follow Wai Fai’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to exploring the basic components needed to test how Web Bundling can improve your personal data application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Web Bundling in web development and to experience Wai Fai Lau’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: InRhythmU, Product Development, Web Engineering · Tagged: devops, INRHYTHMU, Learning and Development, Web Bundling, Web Development, web engineering

Aug 16 2022

The Reactive Streams Difference

Overview

As the world continues to shift towards online-first data, for everything from the remote workplace to streaming favorite shows on Netflix, it has become ever more imperative to recognize that “live” data needs to be handled with special care in an asynchronous system. 

Asynchrony is needed in order to enable the parallel use of computing resources, whether on collaborating network hosts or on multiple cores within a single machine. 

The primary goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary – similar to passing elements on to another thread or thread pool – while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. 

In other words, Reactive Streams allow data to be processed in parallel rather than strictly in sequence, leaving more room for faster response times and a smoother user experience.

In Hirav Oza's Lightning Talk session, we will be breaking down the following topics:

  • Functional Style Programming
  • Reactive Streams And Back Pressure
  • Popular Reactive Libraries
  • Closing Thoughts

Functional Style Programming

Functional programming languages are specially designed to handle symbolic computation and list processing applications. Functional programming is a declarative style with a focus on “what to solve,” in contrast to the imperative style's “how to solve.” 

Reactive Programming by nature embraces a functional programming style. It feels similar to common programmer staples such as the Streams API and lambdas, making for a smooth adoption curve. 

Reactive Streams And Backpressure

Backpressure happens when data arrives faster than it can be processed. This can cause a “buildup” of data at the I/O switch when buffers are full and cannot receive additional data. At that point, no additional data packets can be transferred until the bottleneck has been cleared or the buffer has been emptied.

Reactive Streams disrupt this buildup by supporting backpressure: the receiving side can signal the upstream source (for example, a database) to “hold off” on producing more output – in other words, to buffer – until the remaining data is processed. This makes for a smoother experience for the interface user and a more coherent data transfer for the programming system. 

Popular Reactive Libraries 

  1. RxJava

RxJava is a specific implementation of reactive programming for Java and Android that is influenced by functional programming. It favors function composition, avoidance of global state and side effects, and thinking in streams to compose asynchronous and event-based programs.

  2. Project Reactor

Reactor Core is a Java 8 library that implements the reactive programming model. It’s built on top of the Reactive Streams specification, a standard for building reactive applications. 

  3. Flow API

Flow API is the official support for the Reactive Streams specification since Java 9. It is a combination of both the Iterator and Observer patterns. 

Flow API consists of four basic interfaces, illustrated in the sketch after this list:

  • Subscriber: The Subscriber subscribes to Publisher for callbacks.
  • Publisher: The Publisher publishes the stream of data items to the registered subscribers.
  • Subscription: The link between publisher and subscriber.
  • Processor: The processor sits between Publisher and Subscriber, and transforms one stream to another.
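
Here is that sketch: the JDK's built-in SubmissionPublisher drives a hand-written Subscriber that requests items one at a time, which is exactly the back-pressure signal described earlier. The submitted strings are arbitrary placeholders.

import java.util.List;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowApiSketch {
    public static void main(String[] args) throws InterruptedException {
        // A Subscriber that requests one item at a time, signalling back-pressure explicitly.
        Flow.Subscriber<String> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); }
            public void onNext(String item) { System.out.println("received: " + item); subscription.request(1); }
            public void onError(Throwable t) { t.printStackTrace(); }
            public void onComplete() { System.out.println("done"); }
        };

        // SubmissionPublisher is the JDK's ready-made Publisher implementation.
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            List.of("tick", "tock", "tick").forEach(publisher::submit);
        }

        Thread.sleep(500); // crude wait so the asynchronous delivery finishes before the JVM exits
    }
}

Requesting a larger number in onSubscribe (for example, request(32)) trades tighter flow control for fewer signals; the point is that the consumer, not the producer, sets the pace.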

Test Your Skills 

You’ve unpacked quite a few introductory principles – think you’re up to exploring a live demonstration of these concepts in play? 

Hirav Oza has crafted an intuitive demonstration to help guide you through these principles in practice.

Be sure to follow Oza’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to exploring the basic components needed to test how Reactive Streams can improve your personal data application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Reactive Streams in web development and to experience Hirav Oza’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: DevOps, InRhythmU, Learning and Development, Product Development, Software Engineering · Tagged: best practices, devops, INRHYTHMU, learning and growth, product development, Reactive Streams, ux
