InRhythm

Your partners in accelerated digital transformation


Dec 20 2022

A Comprehensive Overview Of Apache Kafka

Overview

Apache Kafka is an open-source, distributed event-streaming platform, or message queuing system. Kafka runs on servers and clients, either locally or in the cloud, on Linux, Windows, or macOS, and supports real-time data analysis. Kafka’s messages are persisted on disk and replicated within the cluster to prevent data loss.

Some typical Kafka use cases are stream processing, log aggregation, data ingestion to Spark or Hadoop, and error recovery.

In Kyle Pollack’s Lightning Talk session, we will be breaking down the following topics:

  • Overview
  • Basic Architecture
  • Benefits
  • Advantages Of Apache Kafka
  • Use Cases For Kafka
  • Closing Thoughts

Basic Architecture

There are four main components:

  • The Producer – The client apps that write Events to Kafka Topics
  • The Topic – The named logs of Events that Kafka stores. Topics are multi-producer and multi-subscriber (Consumer), decoupled, and can have any number of subscribers or none at all
  • The Broker – Each Broker is a Kafka server that receives incoming Events, organizes them by Topic, and stores them sequentially on disk in segmented Partitions
  • The Consumer – The apps that subscribe to Kafka Topics

A Kafka cluster is made of one or more servers, called Brokers. Topics live in one or more Partitions on one or more Brokers. 


As Producers write Events to the Topic queues, the Brokers store the messages in Segments within their Partitions according to Topic ID. Kafka may write an Event into any Partition configured for that Topic ID, on any Broker. Because storage is spread across all Brokers that service that Topic ID, and the data lands in Segments across those Partitions, no single Broker or Partition contains the full, sequential list of Events for that Topic. Each Partition holds only a subset of the Event records in its Segments.

Kafka Producers

Producers are client applications that write Events to Topics in the Kafka cluster.
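
A minimal Producer sketch in Java, assuming the official kafka-clients library (the broker address and the “orders” Topic are hypothetical placeholders):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrderProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Write one Event to the hypothetical "orders" Topic; the key
                // influences which Partition the record lands in
                producer.send(new ProducerRecord<>("orders", "order-123", "2 widgets"));
                producer.flush(); // block until the broker has the record
            }
        }
    }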

Kafka Brokers

Brokers receive Event streams from Producers and store them sequentially by Topic ID in one or more Partitions across one or more Brokers. Each Broker can handle many Partitions in its storage, and every received message is stored with an Offset ID.

For example, when receiving three Events, a given Broker with three Partitions could store those Events to its Partitions in the order 2, 1, 3, while another Broker in the cluster could store them to 3, 2, and 1. Because the writes to Partitions within Brokers are ad hoc, the individual Segments in any one Partition do not contain a sequential run of Events. On retrieval, however, Kafka provides those records in their correct order by using their Broker-assigned Offset IDs.

Additionally, you can configure Event retention to suit the application.
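
As a sketch of what that might look like programmatically, assuming the kafka-clients Admin API (the Topic name and the seven-day retention value are illustrative):

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class RetentionConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker

            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders");
                // Retain Events for 7 days (604,800,000 ms) before deletion
                AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("retention.ms", "604800000"),
                        AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
            }
        }
    }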

The Topic

Kafka organizes Events by Topic and may store a Topic in multiple Partitions on multiple Brokers. This provides reliability, and it also enhances performance by avoiding the I/O bottleneck of a single Broker, spreading the storage work across multiple machines. Each Topic is assigned a Topic ID.
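
A sketch of creating such a Topic with the Admin API (the name, Partition count, and replication factor below are illustrative): three Partitions spread reads and writes across Brokers, and a replication factor of two keeps a second copy for reliability.

    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicSetup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker

            try (Admin admin = Admin.create(props)) {
                // "orders": 3 Partitions, replicated to 2 Brokers
                NewTopic orders = new NewTopic("orders", 3, (short) 2);
                admin.createTopics(List.of(orders)).all().get();
            }
        }
    }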

Kafka Consumers

Consumers are apps that read Topic information from Kafka queues. Consumers automatically retrieve new messages as they arrive in the queue.
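
A matching Consumer sketch in Java (the consumer group and Topic names are the same hypothetical placeholders as above); poll() returns new records as they arrive, each carrying the Offset that preserves ordering within its Partition:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class OrderConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
            props.put("group.id", "order-processors");        // hypothetical consumer group
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) {
                    // Retrieve any new Events published since the last poll
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                    }
                }
            }
        }
    }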

Benefits

  • I/O Performance – Non-sequentially writing Event records to multiple Brokers/Partitions avoids I/O bottlenecks that could occur if they were written sequentially into a single Partition.
  • Scalability – Kafka scales horizontally by increasing the number of Brokers in the cluster.
  • Data Redundancy – You can configure Kafka to write each event to multiple brokers.
  • High concurrency, low latency, high throughput
  • Fault tolerance
  • Message Broker Capabilities
  • Batch Handling Capability (providing ETL-like functionality)
  • Persistent by default

Advantages Of Apache Kafka

Real-time data analysis provides faster insights into your data, allowing faster response times. For example, a retailer can decide what should be stocked, promoted, or pulled from the shelves based on the most up-to-date information possible.

Even on very large systems, Kafka operates very quickly. You can stream all data in real time to make decisions based on current information, rather than waiting until the data has been obtained, aggregated, and analyzed, which is the case for many companies with large datasets.

Kafka is written in Java, so it is easier to learn for the many developers already familiar with the language.

Use Cases For Kafka

Kafka is used for: 

  • Stream processing
  • Website activity tracking
  • Metrics collection and monitoring
  • Log aggregation
  • Real-time analytics
  • Complex Event Processing (CEP) support
  • Ingesting data into Spark
  • Ingesting data into Hadoop
  • Command Query Responsibility Segregation support (CQRS)
  • Replay messages
  • Error recovery
  • Guaranteed distributed commit log for in-memory computing (microservices)

Closing Thoughts

Apache Kafka is a distributed streaming platform capable of handling trillions of events a day. Kafka provides low-latency, high-throughput, fault-tolerant publish and subscribe pipelines and is able to process streams of events.

Happy coding! To learn more about the implementation of Apache Kafka and to experience Kyle Pollack’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, DevOps, Java Engineering, Product Development, Software Engineering, Web Engineering · Tagged: Apache, best practices, INRHYTHMU, JavaScript, Kafka, learning and growth, software engineering, web engineering

Dec 20 2022

Configuration Automation Tools: Orchestrating Successful Deployment

Overview

In the modern technology field, buzzwords come and go. One day databases are discussed as the best new thing in the world of Agile development, only for attention to shift the next day to programming languages, frameworks, and methodologies.

But one unchanging aspect of this lifecycle is the people, who are an irreplaceable part of the creation, popularity, and demise of any given technology. The modern world calls for close-to-perfect execution, which individuals cannot always account for on their own.

How does this call for flawless mechanisms affect developers and creators when they are asked to build perfect products?


Automation is the technology by which a process or procedure is performed with minimal human interference, through the use of technological or mechanical devices. It is the technique of making a process or a system operate automatically. Automation crosses functions within almost every industry, from installation, maintenance, and manufacturing to marketing, sales, medicine, design, procurement, and management. It has revolutionized the areas in which it has been introduced, and there is scarcely an aspect of modern life that has been unaffected by it.

Automation provides a number of high-level advantages to every aspect of practice, making it an important process to have a working knowledge of:

  • Overview
  • Removing Human Error
  • Steps To Deployment
  • No Hidden Knowledge
  • Popular Implementation Technology Options
  • Closing Thoughts

Removing Human Error


Automation, automation, and more automation – and, of course, some deployment orchestration and configuration management thrown in. Behind the buzzwords, the “new technology frontier” is about removing human error. That translates to removing dependencies on tribal knowledge in application and system administration duties.

Those duties are performed in a repetitive fashion and are usually consolidated into various custom scripts, leaving many of those scripted actions ready to be boxed up and reused over and over again.

Steps To Deployment


The primary cornerstones of prepping an automation deployment for an individual server follow a near-identical framework:

  1. Download and Install the various languages and/or framework libraries the application uses
  2. Download, Install, and Configure the Web server that the application will use
  3. Download, Install, and Configure the Database that the application will use
  4. Test to see if all the steps are installed and configured correctly

Running application tests ensures that the deployment is running as expected. Testing is crucial to a successful deployment.

For example, something simplistic but highly catastrophic is the possibility of a typo. Consider the case of the following command:

  • cd /var/etc/ansible; rm -rf *

but instead a developer forgot the cd command and ran only

  • rm -rf /

In this case, the whole drive is at risk of being erased – which can and will break a product.

Taking the time to ensure correct command execution can determine the overall success of a system.

No Hidden Knowledge


Looking back on the steps to deploy an application to an environment, there are inevitably a number of small intermediary steps involved. A leader’s priority should be to surface each of these unique sub-steps and to bring all of the engineers around them up to speed on the associated best practices.

That information should be maintained as a source of truth in a repository that is easy and intuitive to leverage.

Popular Implementation Technology Options

What does a source of truth entail? Can one not skip the documentation and go straight into executing the steps on a given system? Or create scripts to reconfigure the application if there is ever a need to? These questions have been posed many times, and the solutions have repeatedly taken the form of extensive, comprehensive build tools and frameworks.

These tools are used throughout the industry to solve the problem of orchestrated development, configuration automation, and management. 

DevOps tools such as Puppet, Chef, and Ansible are well-matured automation and orchestration tools. Each provides enough architectural flexibility to handle virtually any use case presented.

Puppet


Puppet was the first widely used Configuration Automation and Orchestration tool, dating back to its initial release in 2005. Puppet uses the Master and Slave paradigm to control any number of machines, and Ruby is the scripting language used for executing commands in a destination environment.

The “Puppet Agents” (Slaves) are modularized, distinct components deployed to a server. They can be used for the creation of the server (i.e. web server, database, application) in its destination environment. “Puppet Enterprise” (the Master) comprises all of the inner workings needed to manage, secure, and organize Agents.

Puppet Documentation

  • https://puppet.com/docs/
  • http://pub.agrarix.net/OpenSource/Puppet/puppetmanual.pdf
  • https://www.rubydoc.info/gems/puppet/ 

Chef


Chef is somewhat similar to Puppet. The core language used within Chef’s abstract module components is Ruby, and Chef has several layers of management for individual infrastructure automation needs. The Chef Workstation is the primary area for managing the various Chef components, which consist of “cookbooks”, “recipes”, and “nodes”.

“Recipes” are collections of configurations for a given system, whether a virtual, bare-metal, or cloud environment. Chef calls those different environments “nodes”. “Cookbooks” contain “recipes” and other configurations for application deployment, along with control mechanisms for the different Chef clients.

Chef Documentation

  • https://docs.chef.io/
  • https://www.linode.com/docs/applications/configuration-management/beginners-guide-chef/ 

Ansible


Ansible is the newest mainstream automation and configuration management tool on the market, and it therefore uses more modern programming languages, configuration concepts, and tools. Python is the programming language used in the framework. One of the fastest up-and-coming template languages is YAML: it is programming-language agnostic, a superset of the ever-so-popular JSON, and it is used within Ansible to describe an Ansible Playbook.

An Ansible Playbook contains the steps that need to be executed on a given system. Once the Playbook is in place, configuration or further manipulation of the host can be executed through the Ansible API, which is implemented in Python. There are several other components within the Ansible technology, such as modules, plugins, and inventory.

Ansible Documentation

  • https://docs.ansible.com/ansible/2.5/dev_guide/
  • https://devdocs.io/ansible/
  • https://geekflare.com/ansible-basics/ 

Closing Thoughts


After covering a couple of the Configuration Automation and Orchestration tools on the market, one can see the vast amount of flexibility available for eliminating repeatable steps prone to human error. These frameworks promote reusable software within an organization, and they make it possible to scale an application development environment and its underlying infrastructure – which is critical.

The learning curve may be steeper than using plain Bash scripts, but the structure and integrity of using a proven tool, and its ease of maintenance, outweigh the learning curve.

Written by Kaela Coppinger · Categorized: Cloud Engineering, Code Lounge, DevOps, Java Engineering, Learning and Development, Software Engineering, Web Engineering · Tagged: automation, best practices, cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering

Nov 01 2022

An Introduction To SwiftUI Animations

Overview

Since the very beginning of iOS, animations have been a key part of the user experience. Animation is a brilliant way to wow users, and make an app look and feel unique. Practically speaking, animation can grab a user’s attention, and allow them to focus on what’s most important. It can help users intuitively understand how to navigate an app or alert them to important changes.

SwiftUI takes care of much of the complexity in making these effects by automatically animating any transitions that may happen. Gone are the days of writing complicated code for a single animated transition – the framework comes with enough built-in effects to perform many different animations.

In Joshua Buchanan’s Lightning Talk session, we will be breaking down the following topics:

  • Overview
  • Why Have Animations?
  • SwiftUI Animation Options
  • How To Add Animations With SwiftUI
  • Live Demonstration
  • Closing Thoughts

Why Have Animations?

Animations are an easy way to show users the functionality hidden behind a beautiful wrapper.

Animations are quite an important part of any application because they draw users’ attention. An animation is simply a collection of images repeated at high speed, but animations can set your application apart.

Mobile app animations occupy an important place in the design of an app’s whole user experience. They are great for communicating a message and creating a feeling without being limited to static images or text. The dynamism inherent in in-app animations makes the user journey more vivid and delightful. Thanks to that, an animated app can achieve higher user-engagement rates.

SwiftUI Animation Options

Animations in SwiftUI come with a variety of options built into the framework’s interface. These options allow a number of different transitions to take place swiftly and automatically.

These options vary in involvement, from basic to advanced – and each brings a unique visual perspective to life.

The primary basic animation options available to a mobile developer utilizing SwiftUI for their mobile application are:

  • Enable
  • Offset
  • Color
  • Scale

The most popular of the advanced options are:

  • Repeat Count / Reverse
  • Duration Options (Speed)
  • Curve Options

How To Add Animations With SwiftUI

SwiftUI has built-in support for animations with its animation() modifier. To use this modifier, place it after any other modifiers for your views, tell it what kind of animation you want, and also make sure you attach it to a particular value so the animation triggers only when that specific value changes.

SwiftUI animates the effects that many built-in view modifiers produce, like those that set a scale or opacity value. You can animate other values by making your custom views conform to the Animatable protocol, and telling SwiftUI about the value you want to animate.

When an animated state change results in adding or removing a view to or from the view hierarchy, you can tell SwiftUI how to transition the view into or out of place using built-in transitions that AnyTransition defines, like slide or scale. You can also create custom transitions.

Live Demonstration

Joshua Buchanan has crafted an intuitive demonstration of real-time code and the resulting animations in SwiftUI:

Be sure to follow Joshua’s entire Lightning Talk to follow along with these steps in real time.

Closing Thoughts

In SwiftUI, an animation is nothing but the change of a state from start to finish with different curves and velocities. The syntax is mostly easy to understand and quick to implement, and animating is quite easy with the use of the different implicit animations like easeIn, easeOut, linear, spring, and interpolating spring. A developer can create awesome animations with just a few lines of code in SwiftUI.

Happy coding!

To learn more about the implementation of SwiftUI Animations and to experience Joshua Buchanan’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: Code Lounge, Design UX/UI, DevOps, Learning and Development, Product Development · Tagged: coding, INRHYTHMU, Mobile Development, software development, SwiftUI, ux

Sep 21 2022

How To Write A Great Test Case

Overview

A test case is exactly what it sounds like: a test scenario measuring functionality across a set of actions or conditions to verify the expected result. They apply to any software application, can use manual testing or an automated test, and can make use of test case management tools.

Most digital-first business leaders know the value of software testing. Some value high-quality software more than others and might demand more test coverage to ultimately satisfy customers. So, how do they achieve that goal?

They test more, and test more efficiently. That means writing test cases that cover a broad spectrum of software functionality. It also means writing test cases clearly and efficiently, as a poor test can prove more damaging than helpful.

A key thing to remember when it comes to writing test cases is that they are intended to test a basic variable or task such as whether or not a discount code applies to the right product on an e-commerce web page. This allows a software tester more flexibility in how to test code and features.

In Nathan Barrett’s Lightning Talk session, we will be breaking down the following topics:

  • What Is A Test Case?
  • What Makes A Good Test Case?
  • Live Demonstration
  • Closing Thoughts

What Is A Test Case?

At a high level, to “test” means to establish the quality, performance, or reliability of a software application. A test case is a repeatable series of specific actions designed to either verify success or provoke failure in a given product, system, or process. 

A test case gives detailed information about the testing strategy, the testing process, preconditions, and expected output. Test cases are executed during the testing process to check whether the software application performs the task for which it was developed. A passed test case functions like a receipt verifying the correct functionality of the subject of the test.

To write a test case, we must have the requirements from which to derive the inputs, and the test scenarios must be written so that we do not miss any features in testing. We should also have a test case template to maintain uniformity, so that every test engineer follows the same approach when preparing the test document.

Test cases serve as final verification of functionality before releasing it to the direct product users. 

What Makes A Good Test Case?

Writing test cases varies depending on what the test case is measuring or testing. This is also a situation where sharing test assets across dev and test teams can accelerate software testing. But it all starts with knowing how to write a test case effectively and efficiently.

Test cases have a few integral parts that should always be present, as well as some “nice to have” elements that can only enhance the presented results.

Required Elements:

  • Summary
    • Concise, direct encapsulation of the purpose of the test case 
  • Prerequisites
    • What needs to be in place prior to starting the test?
    • Bad Prerequisites: captured in test steps, not present, overly specific
    • Good Prerequisites: concise/descriptive, lays out all set-up prior to testing, includes information to learn more if desired 
  • Test Steps
    • The meat of the test case
    • Good Test Steps: each step is a specific, atomic action performed by the user with an expected result; divergent paths are called out where necessary; and the steps cite which test data from the prerequisites should be applied
    • Really great test steps treat the user as if they know “nothing” and communicate everything from start to finish
  • Expected Results
    • How do we know that the test hasn’t failed?
    • Bad Expected Results: Page loads correctly, view looks good, app behaves as expected
    • Good Expected Results: Landing page loads after spinner with user’s account details present, view renders with all appropriate configurations (title, subtitle, description, etc.), toggle changes state when tapped (enabled→disabled)

Preferred Additional Elements:

  • Artifacts
    • Screenshots, files, builds, configurations, etc.
  • Test Data
    • Accounts, items, addresses, etc.
    • What information is needed during the test?
    • Pre-rendered prerequisite fulfillment
  • Historical Context
    • Previous failures, previous user journeys, development history, etc.
    • Has this feature been “flakey” in the past?
    • What are previous failure points?
    • How critical is this feature?

The very practice of writing test cases helps prepare the testing team by ensuring good test coverage across the application, but writing test cases has an even broader impact on quality assurance and user experience.
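
The same required elements map naturally onto an automated test. As a sketch only – using JUnit 5, with a stubbed-out Cart class standing in for a hypothetical system under test – the discount-code example from earlier might look like this:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class DiscountCodeTest {

        // Hypothetical stand-in for the real system under test
        static class Cart {
            private final List<Double> prices = new ArrayList<>();
            private double discount = 0.0;

            void add(double price) { prices.add(price); }

            void applyDiscountCode(String code) {
                if ("SAVE10".equals(code)) discount = 0.10; // illustrative rule
            }

            double total() {
                double sum = prices.stream().mapToDouble(Double::doubleValue).sum();
                return sum * (1 - discount);
            }
        }

        private Cart cart;

        @BeforeEach
        void setUp() {
            // Prerequisites: a cart holding one eligible $10.00 product
            cart = new Cart();
            cart.add(10.00);
        }

        @Test
        @DisplayName("Summary: code SAVE10 applies a 10% discount to an eligible product")
        void discountCodeAppliesToEligibleProduct() {
            // Test step: the user applies the discount code
            cart.applyDiscountCode("SAVE10");

            // Expected result: specific and verifiable, not "total looks right"
            assertEquals(9.00, cart.total(), 0.001);
        }
    }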

Live Demonstration

Nathan Barrett has crafted an intuitive demonstration of specificity to help testers understand how they should be structuring their cases:

Be sure to follow Nathan’s entire Lightning Talk to follow along with these steps in real time.

Closing Thoughts

Test cases help guide the tester through a sequence of steps to validate whether a software application is free of bugs, and working as required by the end-user. A well-written test case should allow any tester to understand and execute the test.

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to understanding the beneficial prerequisites to writing a good test case for any type of application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about how to write a great test case and its importance in the software development process, and to experience Nathan Barrett’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Cloud Engineering, Design UX/UI, DevOps, InRhythmU, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: devops, INRHYTHMU, learning and growth, SDET, software development, software engineering, ux, web engineering

Sep 16 2022

A Comprehensive Guide To Java’s New HTTP Client

Overview 

The Hypertext Transfer Protocol (HTTP) is the foundation of the World Wide Web, and is used to load web pages using hypertext links. HTTP is an application layer protocol designed to transfer information between networked devices and runs on top of other layers of the network protocol stack. A typical flow over HTTP involves a client machine making a message request to a server, which then sends a response message.

Java is a general-purpose, class-based, object-oriented programming language designed to have fewer implementation dependencies. It is a computing platform for application development. Java is fast, secure, and reliable, and it is therefore widely used by everyone from the newest to the most advanced web developers.

In Daniel Fuentes’ Lightning Talk session, we will be breaking down the following topics:

  • What Is HTTP?
  • Improvements In HTTP 2.0
  • How HTTP 2.0 Impacts Java
  • Live Demonstrations
  • Closing Thoughts

The new HTTP 2.0 Client was released in Java 11. This new client is used to request HTTP resources over the network. It supports HTTP/1.1 and HTTP/2.0, supports both synchronous and asynchronous programming models, handles request and response bodies as reactive streams, and follows the familiar builder pattern.
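
As a minimal sketch of the builder pattern in action (the URL is a placeholder, and error handling is omitted):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SyncGetExample {
        public static void main(String[] args) throws Exception {
            // Prefer HTTP/2; the client falls back to HTTP/1.1 when the
            // server has not yet made the switch
            HttpClient client = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_2)
                    .build();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/")) // placeholder URL
                    .GET()
                    .build();

            // Synchronous send: blocks until the response arrives
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }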

What Is HTTP?

HTTP is an application layer protocol designed to transfer information between networked devices. HTTP runs on top of other layers of the network protocol stack. 

HTTP is a protocol for fetching resources such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser. A complete document is reconstructed from the different sub-documents fetched, for instance, text, layout description, images, videos, scripts, and more.

Clients and servers communicate by exchanging individual messages (as opposed to a stream of data). The messages sent by the client, usually a Web browser, are called requests and the messages sent by the server as an answer are called responses.

The typical flow over HTTP involves a client machine making a request to a server, which then sends a response message. 

HTTP was invented alongside HTML to load web pages using links (hypertext). It was a part of the first interactive, text-based web browser: the original World Wide Web. Today, the protocol remains one of the primary means of using the Internet.

Improvements In HTTP 2.0

HTTP 2.0 is based on streams and binary frames, in contrast to the text-only request model of its previous iteration. Unlike text-only exchanges, streams can be multiplexed asynchronously over one TCP (Transmission Control Protocol) connection. As a result, HTTP 2.0 reduces latency and enhances performance.

How HTTP 2.0 Impacts Java

Java’s HTTP support was commonly built upon the HttpURLConnection class, which originally launched in 1999, when HTTP 1.0 was still a fresh protocol. With a backbone built from outdated technology, it was never able to update properly in response to the rapidly changing nature of web protocols.

Its steady incompatibility and lack of ease of use led developers to opt out of Java’s built-in class and instead employ third-party solutions (e.g. Apache, Netty, Eclipse, Google, etc.).

With the new Java 11 developer toolkit came a number of operational changes – most notably, the adoption of HTTP 2.0. In order to meet the demands of an environment consistently in motion, Java made these changes for longevity:

  • Eliminating the need for third-party client dependencies
  • Building in backwards compatibility with HTTP/1.1 for remaining servers that have not yet made the switch to HTTP 2.0
  • Instating asynchronous support for issuing multiple HTTP requests concurrently (sketched below)
  • Vastly improving performance with the addition of header compression and a single connection for multiple requests
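
As a sketch of the asynchronous model referenced above (the URLs are placeholders), sendAsync() returns a CompletableFuture, so several requests can be in flight at once:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class AsyncGetExample {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();

            // Placeholder URLs; all three requests are issued without blocking
            List<URI> uris = List.of(
                    URI.create("https://example.com/a"),
                    URI.create("https://example.com/b"),
                    URI.create("https://example.com/c"));

            List<CompletableFuture<String>> futures = uris.stream()
                    .map(uri -> HttpRequest.newBuilder(uri).GET().build())
                    .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                            .thenApply(HttpResponse::body))
                    .collect(Collectors.toList());

            // Join each future and report the size of its response body
            futures.forEach(f -> System.out.println(f.join().length()));
        }
    }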

Live Demonstrations

Daniel Fuentes has crafted an intuitive demonstration to help guide you through the new Java HTTP client in practice:

Be sure to follow Daniel’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to the basic components needed to test HTTP 2.0 and improve your application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Java’s Updated HTTP Server as well as its influence in web development and to experience Daniel Fuentes’ full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: DevOps, InRhythmU, Java Engineering, Product Development · Tagged: best practices, devops, HTTP, INRHYTHMU, Java 11, JavaScript, learning and growth, software development, Web Development
