
InRhythm

Your partners in accelerated digital transformation


Java Engineering

Jan 03 2023

Creating Robust Test Automation For Microservices

Overview


Any project a software engineer joins will come in one of two forms: a greenfield or a legacy codebase. In the majority of cases, projects fall into the realm of legacy repositories. It is the software engineer's responsibility to navigate either type strategically: looking objectively at opportunities to improve the codebase, lowering the cognitive load on the engineering team, and advising on better design strategies.

But, chances are, there is a problem. Before any architecture or design refactoring is undertaken, it's best to take a pulse on the health of the platform end to end (E2E). Lurking in a new or existing platform is a common ailment of the modern microservices approach – the inability to test the platform E2E across microservices that are, by design, engineered by different teams over time.

Revitalizing Legacy Systems


One primary challenge faced by many software engineers is adaptive work on a greenfield platform that has fallen several months behind from a quality assurance perspective. It is no longer possible for QA to catch up, nor to engineer and execute the E2E tests that cover common user journeys throughout the enterprise system.

To solve this conundrum, E2E data generation tools need to be created so that the QA team can keep up, building and testing every scenario and edge case.

There are three main requirements for an E2E account and data generation tool.

The tool should:

1) Create test accounts with mock data for each microservice

2) Link those accounts between upstream and downstream microservices

3) Provide easy-to-access, self-documenting APIs

Using a tool like Swagger, QA can use the REST API description – i.e. the OpenAPI Specification (formerly the Swagger Specification) – to view the available endpoints and operations to create accounts, generate test data, authenticate, authorize, and "connect" the microservices.
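As a sketch of how QA might consume such a tool, the snippet below uses Java 11's built-in HttpClient to call a hypothetical account-generation endpoint. The host, path, and JSON payload are illustrative assumptions; in practice they would come from the tool's OpenAPI document.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestAccountGenerator {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by the E2E data generation tool;
        // the real path and payload are described in its OpenAPI spec.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://qa-tools.example.com/api/v1/test-accounts"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"accountType\":\"PREMIUM\",\"linkDownstream\":true}"))
                .build();

        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());

        // The tool would answer with the generated account IDs for each
        // linked microservice, ready for use in an E2E scenario.
        System.out.println(response.statusCode() + " " + response.body());
    }
}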


Closing Thoughts

By creating tools for E2E testing, a QA team was able to eliminate the hassle of figuring out which upstream and downstream microservices needed to be called to ensure that the required accounts and data were available and set up properly for a successful test of every scenario – across the variety of data types, user permissions, and user information, and covering the negative test cases. The QA team was able to catch up, writing its entire suite of test scenarios and generating the matching accounts and data to satisfy those requirements. The net result of having built an E2E test generation tool was that automated tests could be produced dramatically faster, and the tests themselves were more resilient to failure.

Even though the microservices pattern continues to gain traction, developing E2E testing tools that generate accounts and test data across an enterprise platform will likely remain a pain point.

There’s no better way to maintain a healthy system than to ensure accounts and data in the lower environments actually work and unblock testing end-to-end. 

Written by Kaela Coppinger · Categorized: Agile & Lean, Cloud Engineering, Java Engineering, Product Development, Software Engineering · Tagged: cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering, testing

Dec 20 2022

A Comprehensive Overview Of Apache Kafka

Overview

Apache Kafka is an open-source, distributed event-streaming platform, or message queuing system. Kafka provides real-time data analysis that runs on servers and clients, either locally or in the cloud, on Linux, Windows, or Mac platforms. Kafka’s messages are persisted on disk and replicated within the cluster to prevent data loss.

Some typical Kafka use cases are stream processing, log aggregation, data ingestion to Spark or Hadoop, error recovery, etc.

In Kyle Pollack's Lightning Talk session, we will be breaking down the following topics:

  • Overview
  • Basic Architecture
  • Benefits
  • Advantages Of Apache Kafka
  • Use Cases For Kafka
  • Closing Thoughts

Basic Architecture

There are four main components:

  • The Producer – The client apps that write their Events, or Topics, to the Kafka queue
  • The Topic – Topics are the Events that Kafka stores. They are multi-producer, multi-subscriber (Consumer), decoupled, and can have any number of subscribers or none at all
  • The Broker – Each Broker is a Kafka server that organizes and sequentially stores incoming Events by Topic and stores them on disk in Segmented Partitions
  • The Consumer – The apps that subscribe to Kafka Topics

A Kafka cluster is made of one or more servers, called Brokers. Topics live in one or more Partitions on one or more Brokers. 


As Producers write events to the Topic queues, the Brokers store the messages in Segments within their Partitions according to Topic ID. Kafka may write an Event message into any Partition configured for that Topic ID, on any Broker. Because writes are spread across all Brokers that service that Topic ID, and the data is written non-sequentially into Segments within those Partitions, no single Broker or Partition contains the full, sequential list of Events for that Topic. Each Partition holds only a subset of the Event records in its Segments.
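A minimal sketch of this layout in code, assuming a local Broker on localhost:9092 and the Kafka clients library on the classpath: a hypothetical "orders" Topic is created with three Partitions, each replicated to two Brokers.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateOrdersTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed local Broker address; adjust for your cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Three Partitions spread the Topic across Brokers; each
            // Partition is replicated to two Brokers for redundancy.
            NewTopic orders = new NewTopic("orders", 3, (short) 2);
            admin.createTopics(List.of(orders)).all().get();
        }
    }
}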

Kafka Producers

Producers are client applications writing Topics to the Kafka Cluster. 
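A minimal Producer sketch, reusing the assumed local cluster and the hypothetical "orders" Topic from above:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Events with the same key always land in the same Partition,
            // which preserves ordering per key. Closing the producer flushes
            // any buffered records.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}"));
        }
    }
}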

Kafka Brokers

Brokers receive event streams from Producers and store them sequentially by Topic ID in one or more Partitions across one or more Brokers. Each Broker can handle many Partitions in its storage. All received messages are stored with an Offset ID.

For example, when receiving three events on a given Broker having three Partitions, the Broker could store those Events to Partitions in the order 2, 1, 3, while another Broker in the cluster could store them in the order 3, 2, 1. Because the writes to Partitions within Brokers are ad hoc, the individual Segments in any one Partition do not contain a sequential string of events. However, on retrieval, Kafka provides those records in their correct order by using their Broker-assigned Offset IDs.

Additionally, you can configure the Event retention as suitable for the application.
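As one hedged example, retention can be adjusted per Topic through the admin API; the Topic name and the seven-day value below are assumptions:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders");
            // Keep Events for seven days (in milliseconds) before deletion.
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "604800000"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
        }
    }
}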

The Topic

Kafka organizes events by Topic and may store a Topic in multiple Partitions on multiple Brokers. This provides reliability and also enhances performance by avoiding the I/O bottlenecks that using a single Broker might entail, by spreading the store action across multiple computers. Topics are assigned Topic IDs.

Kafka Consumers

Consumers are apps that read Topic information from Kafka queues. Consumers automatically retrieve new messages as they arrive in the queue.
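A minimal Consumer sketch, again assuming the local cluster and "orders" Topic used in the earlier examples:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // poll() retrieves new records as they arrive, in offset
                // order within each Partition.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}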

Benefits

  • I/O Performance – Non-sequentially writing Event records to multiple Brokers/Partitions avoids I/O bottlenecks that could occur if they were written sequentially into a single Partition.
  • Scalability – Kafka scales horizontally by increasing the number of Brokers in the cluster.
  • Data Redundancy – You can configure Kafka to write each event to multiple brokers.
  • High concurrency, low latency, and high throughput
  • Fault-Tolerant
  • Message Broker Capabilities
  • Batch Handling Capability (providing ETL-like functionality)
  • Persistent by default

Advantages Of Apache Kafka

Real-time data analysis provides faster insights into your data, allowing faster response times. For example, a retailer can predict what should be stocked, promoted, or pulled from the shelves based on the most up-to-date information possible.

Even on very large systems, Kafka operates very quickly. You can stream all data in real time to make decisions based on current information, rather than waiting until the data has been obtained, aggregated, and analyzed, which is the case for many companies with large datasets.

Kafka is written in Java (with parts in Scala), which makes it approachable for teams already working on the JVM.

Use Cases For Kafka

Kafka is used for: 

  • Stream processing
  • Website activity tracking
  • Metrics collection and monitoring
  • Log aggregation
  • Real-time analytics
  • Complex Event Processing (CEP) support
  • Ingesting data into Spark
  • Ingesting data into Hadoop
  • Command Query Responsibility Segregation support (CQRS)
  • Replay messages
  • Error recovery
  • Guaranteed distributed commit log for in-memory computing (microservices)

Closing Thoughts

Apache Kafka is a distributed streaming platform capable of handling trillions of events a day. Kafka provides low-latency, high-throughput, fault-tolerant publish and subscribe pipelines and is able to process streams of events.

Happy coding! To learn more about the implementation of Apache Kafka and to experience Kyle Pollack’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, DevOps, Java Engineering, Product Development, Software Engineering, Web Engineering · Tagged: Apache, best practices, INRHYTHMU, JavaScript, Kafka, learning and growth, software engineering, web engineering

Dec 20 2022

Configuration Automation Tools: Orchestrating Successful Deployment

Overview

In the modern technology field, buzzwords come and go. One day databases are discussed as the best new thing in the world of Agile development, only for attention to recenter the next on programming languages, frameworks, and methodologies.

But one unchanging aspect of this lifecycle is the people, who are an irreplaceable part of the creation, demise, and popularity of any given technology. The modern world calls for close-to-perfect execution, which individuals alone cannot always deliver.

How does this call for flawless mechanisms affect the developers and creators who are asked to build perfect products?


Automation is the technology by which a process or procedure is performed with minimal human interference, through the use of technological or mechanical devices. It is the technique of making a process or a system operate automatically. Automation crosses all functions within almost every industry: installation, maintenance, manufacturing, marketing, sales, medicine, design, procurement, management, and more. Automation has revolutionized the areas in which it has been introduced, and there is scarcely an aspect of modern life it has not affected.

Automation provides a number of high-level advantages to every aspect of practice, making it an important process to have a working knowledge of:

  • Overview
  • Removing Human Error
  • Steps To Deploy
  • No Hidden Knowledge
  • Popular Implementation Technology Options
  • Closing Thoughts

Removing Human Error


Automation, automation, more automation – and of course throw in some orchestration, deployment, and configuration management. Leaving the buzzwords behind, the "new technology frontier" is removing human error. This translates to removing dependence on tribal knowledge for application and system administration duties.

Those duties are performed in a repetitive fashion and are usually consolidated into various custom scripts, which leaves many of those scripted actions able to be boxed up and reused over and over again.

Steps To Deployment


The primary cornerstones of prepping an automated deployment for an individual server follow a near-identical framework:

  1. Download and install the various languages and/or framework libraries the application uses
  2. Download, install, and configure the web server that the application will use
  3. Download, install, and configure the database that the application will use
  4. Test that every step is installed and configured correctly

Running application tests ensures that the deployment is running as expected. Testing is crucial to a successful deployment.

For example, something simple but highly catastrophic is the possibility of a typo. Consider the case of the following intended command:

  • cd /etc/ansible; rm -rf *

but a developer instead mistypes and runs only:

  • rm -rf /

In this case, the whole drive is at risk of being erased – which can and will make or break a product.

Taking the time to verify correct command execution can determine the overall success of a system.

No Hidden Knowledge


Looking back on the steps to deploy an application to an environment, there are inevitably a number of small intermediary steps involved. A leader's priority should be to surface each of these unique sub-steps and to bring every engineer up to speed on the associated best practices.

The information should be a source of truth, maintained in a repository that is easy and intuitive to leverage.

Popular Implementation Technology Options

What does a source of truth entail? Can one not skip the documentation and go straight to executing the steps on a given system? Or create scripts to reconfigure the application if there is ever a need? These questions have been raised many times, and the answers have repeatedly been formulated into extensive, comprehensive build tools and frameworks.

These tools are used throughout the industry to solve the problem of orchestrated development, configuration automation, and management. 

Furthermore, DevOps tools such as Puppet, Chef, and Ansible are well-matured automation/orchestration tools. Each provides enough architectural flexibility to handle virtually any use case presented.

Puppet


Puppet was the first widely used configuration automation and orchestration tool, dating back to its initial release in 2005. Puppet uses a master/agent paradigm to control any number of machines, and Ruby is the scripting language used for executing commands in a destination environment.

The “Puppet Agents” are modularized, distinct components deployed to a server. They can be used to create the server (i.e. web server, database, application) in its destination environment. “Puppet Enterprise” (the master) comprises all the inner workings needed to manage, secure, and organize agents.

Puppet Documentation

  • https://puppet.com/docs/
  • http://pub.agrarix.net/OpenSource/Puppet/puppetmanual.pdf
  • https://www.rubydoc.info/gems/puppet/ 

Chef


Chef is somewhat similar to Puppet. The core language used within Chef’s abstract module components is Ruby. Chef has several layers of management for individual infrastructure automation needs. The Chef workstation is the primary area for managing the various Chef components. The Chef components consist of “cookbooks”, “recipes”, and “nodes”.

“Recipes” are collections of configurations for a given system or environment – virtual, bare metal, or cloud. Chef calls those different environments “nodes”. “Cookbooks” contain “recipes” and other configurations for application deployment, as well as control mechanisms for the different Chef clients.

Chef Documentation

  • https://docs.chef.io/
  • https://www.linode.com/docs/applications/configuration-management/beginners-guide-chef/ 

Ansible


Ansible is the newest mainstream automation/configuration management tool on the market, and accordingly it uses more modern programming languages, configuration concepts, and tools. Python is the programming language used in the framework. One of the fastest up-and-coming template languages is YAML. YAML is programming-language agnostic, is a superset of the ever-popular JSON, and is used within Ansible to describe an Ansible Playbook.

An Ansible Playbook contains the steps that need to be executed on a given system. Once the Playbook is in place, configuration or further manipulation of the host can be driven through the Ansible API, which is implemented in Python. There are several other components within the Ansible ecosystem, such as modules, plugins, and inventory.

Ansible Documentation

  • https://docs.ansible.com/ansible/2.5/dev_guide/
  • https://devdocs.io/ansible/
  • https://geekflare.com/ansible-basics/ 

Closing Thoughts


After covering a few of the configuration automation and deployment tools on the market, one can see the vast flexibility available for eliminating human error from repeatable steps. These frameworks promote reusable software within an organization, which is where they are most viable. The ability to scale an application development environment and its underlying infrastructure is critical.

The learning curve may be steeper than with plain Bash scripts, but the structure and integrity of a proven tool, and its ease of maintenance, outweigh that curve.

Written by Kaela Coppinger · Categorized: Cloud Engineering, Code Lounge, DevOps, Java Engineering, Learning and Development, Software Engineering, Web Engineering · Tagged: automation, best practices, cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering

Sep 16 2022

A Comprehensive Guide To Java’s New HTTP Client

Overview 

The Hypertext Transfer Protocol (HTTP) is the foundation of the World Wide Web, and is used to load web pages using hypertext links. HTTP is an application layer protocol designed to transfer information between networked devices and runs on top of other layers of the network protocol stack. A typical flow over HTTP involves a client machine making a message request to a server, which then sends a response message.

Java is a general-purpose, class-based, object-oriented programming language designed to have fewer implementation dependencies. It is a computing platform for application development. Java is fast, secure, and reliable, and is therefore widely used by everyone from the newest to the most advanced web developers.

In Daniel Fuentes' Lightning Talk session, we will be breaking down the following topics:

  • What Is HTTP?
  • Improvements In HTTP 2.0
  • How HTTP 2.0 Impacts Java
  • Live Demonstrations
  • Closing Thoughts

The new HTTP client was released in Java 11. This new client is used to request HTTP resources over the network. It supports HTTP/1.1 and HTTP/2, supports both synchronous and asynchronous programming models, handles request and response bodies as reactive streams, and follows the familiar builder pattern.
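A minimal synchronous sketch using the standard java.net.http API (the target URL is a placeholder): the client prefers HTTP/2 and falls back to HTTP/1.1 when the server does not support it.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class Http2Demo {
    public static void main(String[] args) throws Exception {
        // The builder pattern in action: prefer HTTP/2, with a connect timeout.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .connectTimeout(Duration.ofSeconds(10))
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))
                .GET()
                .build();

        // Synchronous model: send() blocks until the response arrives.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.version() + " -> " + response.statusCode());
    }
}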

What Is HTTP?

HTTP is an application layer protocol designed to transfer information between networked devices. HTTP runs on top of other layers of the network protocol stack. 

HTTP is a protocol for fetching resources such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser. A complete document is reconstructed from the different sub-documents fetched, for instance, text, layout description, images, videos, scripts, and more.

Clients and servers communicate by exchanging individual messages (as opposed to a stream of data). The messages sent by the client, usually a Web browser, are called requests and the messages sent by the server as an answer are called responses.

The typical flow over HTTP involves a client machine making a request to a server, which then sends a response message. 

HTTP was invented alongside HTML to load web pages using links (hypertext). It was a part of the first interactive, text-based web browser: the original World Wide Web. Today, the protocol remains one of the primary means of using the Internet.

Improvements In HTTP 2.0

HTTP 2.0 is based on streams and binary frames, in comparison to the text-only request model of its previous iteration. Unlike text-only exchanges, streams can be multiplexed asynchronously over one TCP (Transmission Control Protocol) connection. As a result, HTTP 2.0 reduces latency and enhances performance.

How HTTP 2.0 Impacts Java

Java HTTP handling was commonly built upon the HttpURLConnection class – originally launched in 1999, when HTTP 1.0 was still a fresh protocol. With a backbone built from outdated technology, it was never able to keep pace with the rapidly changing nature of web protocols.

Its persistent incompatibilities and lack of ease of use led developers to opt out of Java's built-in class and instead employ third-party solutions (e.g. Apache, Netty, Eclipse, Google).

With the updated Java 11 developer toolkit came a number of operational changes – most notably, the adoption of HTTP 2.0. In order to meet the demands of an environment constantly in motion, Java made these longevity changes:

  • Eliminating the need for third-party client dependencies
  • Building in backwards compatibility with HTTP/1.1 for servers that have not yet made the switch to HTTP 2.0
  • Adding asynchronous support for multiple concurrent HTTP requests (sketched below)
  • Vastly improving performance through header compression and a single connection for multiple requests
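A short asynchronous sketch under the same placeholder-URL assumption: sendAsync() returns immediately, so several requests can be in flight at once over the multiplexed connection.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AsyncRequests {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        List<URI> targets = List.of(
                URI.create("https://example.com/a"),
                URI.create("https://example.com/b"));

        // sendAsync() does not block; each response completes independently.
        List<CompletableFuture<String>> futures = targets.stream()
                .map(uri -> client.sendAsync(
                                HttpRequest.newBuilder(uri).build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());

        // Block only at the end, once all requests are in flight.
        futures.forEach(f -> System.out.println(f.join().length()));
    }
}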

Live Demonstrations

Daniel Fuentes has crafted an intuitive demonstration to help guide you through the new Java HTTP client in practice:

Be sure to follow Daniel’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

All programs should be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to the basic components needed to test HTTP 2.0 and improve your application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Java’s Updated HTTP Server as well as its influence in web development and to experience Daniel Fuentes’ full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: DevOps, InRhythmU, Java Engineering, Product Development · Tagged: best practices, devops, HTTP, INRHYTHMU, Java 11, JavaScript, learning and growth, software development, Web Development

Feb 07 2019

InRhythm’s Cloud Engineering Digest: New Year, New Java News

January is a quiet month for releases and breaking news, but it’s usually full of great summaries and articles. For our first Cloud Engineering Digest of 2019, we round up the links and articles you need to see.

New year, new GitHub: GitHub Launches Free Private Repos with up to Three Collaborators
GitHub announced unlimited free private repositories and a unified Enterprise offering.

IBM Releases Open Liberty 18.0.0.4 with Support for MicroProfile 2.1 and Reactive Extensions
Open Liberty is a production-ready implementation of the MicroProfile specifications
You can read more about Eclipse MicroProfile here.

Raw String Literals have been removed from Java 12 scope.
A raw string literal can span multiple lines of source code and does not interpret escape sequences, such as \n, or Unicode escapes of the form \uXXXX.
Owner Brian Goetz offers an explanation here, and you can see the official feature description here.

Google Announces Spring Cloud GCP 1.1
A collaboration between Pivotal's Spring team and Google to integrate the Spring Framework and Google Cloud Platform (GCP). The project joins the Spring Cloud release train, is now compatible with Spring Boot 2.1 and Java 11, and includes all the goodness of the most recent Spring Boot version.

IntelliJ IDEA 2019.1 Early Access Program is open.
You can see major upcoming features ahead, including Gradle improvements, Spring Cloud Stream refinements, and more.

Spring Boot 2.1.2 released

90 New Features (and APIs) in JDK 11
Since JDK 12 is coming soon, this is a good recap of all the new features released in JDK 11.

Architecture

Developing Microservices with Behavior-Driven Development and Interface-Oriented Design
A good article that explains BDD (behavior-driven development) with a simple example.

Further Reading

Netflix Play API: Building an Evolutionary Architecture
Great article about architecture changes in Netflix in response to key business milestones for growth.

GraalVM in 2018
A high-performance polyglot VM with a new Java compiler, itself called Graal, which can be used in just-in-time or ahead-of-time configurations.

An Introduction to Kotlin for Serverside Java Developers
A straightforward intro to Kotlin, a newer language on the JVM, making the case for why it works and where it works best.

Share your thoughts—and what you’re reading—with us in the comments below or @GetInRhythm on Twitter.

Written by Nick Logvynenko · Categorized: Cloud Engineering, InRhythm News, InRhythmU, Java Engineering, Newsletters · Tagged: APIs, cloud newsletter, Graal, java newsletter, Kotlin, Learning, link digest, Spring Boot, updates
