
InRhythm

Your partners in accelerated digital transformation



Jan 03 2023

Creating Robust Test Automation For Microservices

Overview


Every project a software engineer joins comes in one of two forms: a greenfield or a legacy codebase. In the majority of cases, projects fall into the realm of legacy repositories. Either way, it is the engineer's responsibility to navigate the project strategically: looking objectively at opportunities to improve the codebase, lowering the cognitive load for software engineering, and advising on better design strategies.

But, chances are, there is a problem. Before any architecture or design refactor is undertaken, it's best to take a pulse on the health of the platform end to end (E2E). The reason: lurking in a new or existing platform is likely a common ailment of the modern microservices approach – the inability to test the platform E2E across microservices that are, by design, commonly engineered by different teams over time.

Revitalizing Legacy Systems


One primary challenge faced by many software engineers is adaptive work on a greenfield platform that has fallen several months behind from a quality assurance perspective. At that point it is no longer possible for QA to catch up, nor to engineer and execute the E2E tests that cover common user journeys throughout the enterprise system.

To solve this conundrum, E2E data generation tools need to be created so that the QA team can keep up, building and testing every scenario and edge case.

There are three main requirements for an E2E account and data generation tool.

The tool should:

1) Create test accounts with mock data for each microservice

2) Link those accounts between upstream and downstream microservices

3) Provide easy-to-access APIs that are self-documenting

Using a tool like Swagger, QA can use the REST API description (the OpenAPI Specification, formerly the Swagger Specification) to view the available endpoints and operations for creating accounts, generating test data, authenticating, authorizing, and “connecting the microservices.”
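For illustration, a minimal OpenAPI-style description of one such endpoint might look like the sketch below (the /test-accounts path and its fields are hypothetical):

    # A hypothetical OpenAPI 3 description for a test-account generation endpoint
    openapi: 3.0.3
    info:
      title: E2E Test Data Service
      version: 1.0.0
    paths:
      /test-accounts:
        post:
          summary: Create a test account with mock data in each linked microservice
          requestBody:
            required: true
            content:
              application/json:
                schema:
                  type: object
                  properties:
                    accountType:
                      type: string
                      example: premium
                    permissions:
                      type: array
                      items:
                        type: string
          responses:
            '201':
              description: Account created and linked across upstream and downstream services

Because the specification doubles as documentation, QA can discover and exercise these operations without relying on tribal knowledge of each microservice.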


Closing Thoughts

By creating tools for E2E testing, the QA team was able to eliminate the hassle of figuring out which upstream and downstream microservices needed to be called to ensure the required accounts and data were available and set up properly for every scenario: the variety of data types, user permissions, and user information, plus the negative test cases. The QA team was able to catch up and write its entire suite of test scenarios, generating the matching accounts and data to satisfy those requirements. The net result of building an E2E test generation tool was that automated tests could be produced dramatically faster, and the tests themselves were more resilient to failure.

Even though the microservices pattern continues to gain traction, developing E2E testing tools that generate accounts and test data across an enterprise platform will likely remain a pain point.

There’s no better way to maintain a healthy system than to ensure accounts and data in the lower environments actually work and unblock testing end-to-end. 

Written by Kaela Coppinger · Categorized: Agile & Lean, Cloud Engineering, Java Engineering, Product Development, Software Engineering · Tagged: cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering, testing

Dec 20 2022

A Comprehensive Introduction To Swift Package Manager

Introduction

The Swift Package Manager (SwiftPM) is Apple’s tool for managing package dependencies for Swift application development. SwiftPM has been an integrated part of Swift since v3.0.

So, what are Packages? Packages contain reusable code or other resources, stored in repositories, that your application needs to provide a feature or function. Examples include the fonts and color scheme for your app, a library that provides access to a web resource, or a file of images required by your application. Other examples are the APIs and frameworks that add functionality to your app and simplify your coding tasks.

Let’s work together to break down a high-level overview of basic SwiftPM usage and its associated features.

What Do Package Managers Bring To The Table?

Package managers give developers a way to control which packages, and which versions of those packages, a project will consume.

Other features include:

  • Allows easy use of lightweight, reusable libraries
  • Reduces code duplication
  • Supports development best practices by simplifying and supporting modular coding (easy module/package creation)

While it is possible to manually manage dependencies in a project, it isn’t practical for anything but the smallest of applications. Best practices fall on the side of using a package manager to handle this function automatically.

SwiftPM supports Swift 3+ and Xcode 8+. Since Swift 5 and Xcode 11, SwiftPM has had cross-platform support for iOS, macOS, and tvOS.

Are There SwiftPM Alternatives?

There are two mainstream alternatives, CocoaPods and Carthage. Both are third-party tools that require installation and a fair amount of configuration before use.

CocoaPods


CocoaPods (2011) – Supports Swift 5+ and Xcode 3.11+. Mature and stable, it is a centralized, easy-to-use command-line tool, written in Ruby, for package dependency management and integration into the application.

However, CocoaPods lacks direct control over project configuration and is essentially a black box. CocoaPods also experiences random build failures that can't easily be explained. Resolving these often involves removing and reinstalling the CocoaPods dependencies, clearing caches, and completely rebuilding the entire project. These steps can add a significant amount of time to the development process, especially on large projects.

Each CocoaPod dependency must have a “Podspec” file to define the metadata for that dependency. These files are typically uploaded to a different repository from the dependency itself. A “Podfile” within the app project will include a reference to this repository, and CocoaPods will use the information in this Podfile to fetch and install all of the dependencies for the app project. 

Carthage


Carthage (2014) – A decentralized command-line tool for dependency management. Developers have full project control and must manually provide all package and dependency information; Carthage simply pulls the packages the developer specifies. Configuration requires much more manual work than CocoaPods and is somewhat complicated.

Using SwiftPM

SwiftPM has many advantages over Carthage and CocoaPods. Foremost, SwiftPM is a native Apple tool integrated into Swift. Other advantages include a quick and simple configuration, easier control over packages and their sub-dependencies, and a GUI built into Xcode for managing package configurations. Each package’s metadata is defined in a “Package.swift” file, which resides in the same repository as the package source files.
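As a minimal sketch (the package and dependency names here are hypothetical), a Package.swift manifest looks like this:

    // swift-tools-version:5.5
    // Package.swift – metadata and dependencies for a hypothetical "MyLibrary" package
    import PackageDescription

    let package = Package(
        name: "MyLibrary",
        platforms: [.iOS(.v14), .macOS(.v11)],
        products: [
            .library(name: "MyLibrary", targets: ["MyLibrary"]),
        ],
        dependencies: [
            // By version: from 1.2.0 up to (but not including) the next minor release
            .package(url: "https://github.com/example/SomeLibrary.git", .upToNextMinor(from: "1.2.0")),
            // Or pin to a branch or a specific commit instead:
            // .package(url: "https://github.com/example/SomeLibrary.git", branch: "main"),
            // .package(url: "https://github.com/example/SomeLibrary.git", revision: "a1b2c3d"),
        ],
        targets: [
            .target(name: "MyLibrary", dependencies: ["SomeLibrary"]),
            .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
        ]
    )

The commented-out lines mirror the version, branch, and commit rules described in the next section.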

Package Resolution

With the Package dependencies configured in the SwiftPM GUI, Xcode will download the packages and resolve dependencies at build time with no further developer interaction.

When setting dependencies, you have the following options:

  • By version: up to the next major version, up to the next minor version, a version range, or an exact version
  • By branch (specify a branch name)
  • By commit (specify a commit hash)

Adding A Package To Your Project

  1. Choose File > Swift Packages > Add Package Dependency.
  2. On the “Choose Package Repository” dialog, enter the desired Repository.
  3. Click Next. 
  4. On the “Choose Package Options” dialog, set the dependency rules (Versions, Branches, Commits). 
  5. Click Next. Xcode now fetches the dependency.
  6. On the “Add Package to YourPrjName” dialog, ensure that the relevant packages are checked and the proper Target is selected in the Add to Target cell.
  7. Click Finish.

Your package now appears in the Navigator under Swift Package Dependencies. Note that SwiftPM can reference both local and remote packages.

Create A Module And Add A Local Package With SwiftPM

SwiftPM simplifies the process of modularizing your code and adding it as a package to a local or external repository. This section will briefly describe the process of creating a module and adding it to a local repository.

  1. Identify a self-contained portion of code or another resource that is suitable for use in a module.
  2. Create a new package: File > New > Package. Name the package accordingly and add it to your application using the same dialog.
  3. Copy the code or resource to the new file.
  4. Delete the existing code or resource from the app.
  5. Add the new package to the Application Target. In “Application Project Settings,” add the new package under Frameworks and Libraries.
  6. In the Package code, define the platforms and versions where this Package works.
  7. In the application’s code files, import the new package into every file where its code or resources are used.

If desired, you could upload the new Package to an external repository for use by developers outside your organization.

Additionally, SwiftPM will allow you to create and use binary packages. Binary packages permit developers to distribute libraries and frameworks without also distributing their source code.

Package Distribution

You can use SwiftPM to distribute resources, in addition to frameworks and libraries. SwiftPM also has built-in support for Apple’s DocC documentation format, which makes it easy to build robust interactive documentation files and tutorials.

The Future

SwiftPM is a mature product still under active development to improve it and add new features. Usage statistics currently show it roughly equal in popularity to CocoaPods. However, since SwiftPM is a native tool that requires no additional work to begin using, its future is more secure than that of CocoaPods or Carthage.

Happy coding!

To learn more about Swift Package Manager as well as its influence in mobile development and to experience Andrew Balmer’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, InRhythmU, Learning and Development, Software Engineering · Tagged: best practices, INRHYTHMU, ios, iOS Ecosystem, learning and growth, Package Management, software engineering, SwiftPM, SwiftUI

Dec 20 2022

A Comprehensive Overview Of Apache Kafka

Overview

Apache Kafka is an open-source, distributed event-streaming platform, or message queuing system. Kafka provides real-time data analysis that runs on servers and clients, either locally or in the cloud, on Linux, Windows, or Mac platforms. Kafka’s messages are persisted on disk and replicated within the cluster to prevent data loss.

Some typical Kafka use cases are stream processing, log aggregation, data ingestion to Spark or Hadoop, error recovery, etc.

In Kyle Pollack’s Lightning Talk session, we will be breaking down the following topics:

  • Overview
  • Basic Architecture
  • Benefits
  • Advantages Of Apache Kafka
  • Use Cases For Kafka
  • Closing Thoughts

Basic Architecture

There are four main components:

  • The Producer – The client apps that write their Events, or Topics, to the Kafka queue
  • The Topic – Topics are the Events that Kafka stores. They are multi-producer, multi-subscriber (Consumer), decoupled, and can have any number of subscribers or none at all
  • The Broker – Each Broker is a Kafka server that organizes and sequentially stores incoming Events by Topic and stores them on disk in Segmented Partitions
  • The Consumer – The apps that subscribe to Kafka Topics

A Kafka cluster is made of one or more servers, called Brokers. Topics live in one or more Partitions on one or more Brokers. 


As Producers write events to the Topic queues, the Brokers store the messages in Segments within their Partitions according to Topic ID. Kafka may write an Event message into any Partition configured for that Topic ID, on any Broker. Because writes are spread across all Brokers that service that Topic ID, and the data is written non-sequentially into Segments within those Partitions, no single Broker or Partition contains the full, sequential list of Events for that Topic. Each Partition holds only a subset of the Event records in its Segments.

Kafka Producers

Producers are client applications writing Topics to the Kafka Cluster. 
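A minimal Producer sketch using Kafka's Java client (the topic name, key, and broker address below are illustrative):

    // OrderEventProducer.java – write one Event to a hypothetical "orders" Topic
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OrderEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // Events with the same key always land in the same Partition,
            // preserving per-key ordering.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("orders", "order-123", "{\"status\":\"created\"}"));
            }
        }
    }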

Kafka Brokers

Brokers receive event streams from Producers and store them sequentially by Topic ID in one or more Partitions across one or more Brokers. Each Broker can handle many Partitions in its storage. All received messages are stored with an Offset ID.

For example, when receiving three events, a Broker with three Partitions could store those Events to Partitions in the order 2, 1, 3, while another Broker in the cluster could store them to 3, 2, 1. Because writes to Partitions within Brokers are ad hoc, the individual Segments in any one Partition do not contain a sequential string of events. However, on retrieval, Kafka provides the records in their correct order by using their Broker-assigned Offset IDs.

Additionally, you can configure the Event retention as suitable for the application.

The Topic

Kafka organizes events by Topic and may store a Topic in multiple Partitions on multiple Brokers. This provides reliability and also enhances performance by avoiding the I/O bottleneck of a single Broker, spreading the store action across multiple computers. Topics are assigned Topic IDs.
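For example, with recent Kafka versions a Topic spread across three Partitions with two-way replication can be created using the kafka-topics tool that ships with Kafka (the Topic name and broker address are illustrative):

    # Create an "orders" Topic: 3 Partitions, each replicated to 2 Brokers,
    # retaining Event records for 7 days
    bin/kafka-topics.sh --create \
      --topic orders \
      --partitions 3 \
      --replication-factor 2 \
      --config retention.ms=604800000 \
      --bootstrap-server localhost:9092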

Kafka Consumers

Consumers are apps that read Topic information from Kafka queues. Consumers automatically retrieve new messages as they arrive in the queue.
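A matching Consumer sketch in Java, subscribing to the hypothetical "orders" Topic from the Producer example above:

    // OrderEventConsumer.java – poll the "orders" Topic as part of a consumer group
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OrderEventConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "order-dashboard");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) {
                    // Kafka returns records in Offset order within each Partition
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                    }
                }
            }
        }
    }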

Benefits

  • I/O Performance – Non-sequentially writing Event records to multiple Brokers/Partitions avoids I/O bottlenecks that could occur if they were written sequentially into a single Partition.
  • Scalability – Kafka scales horizontally by increasing the number of Brokers in the cluster.
  • Data Redundancy – You can configure Kafka to write each event to multiple brokers.
  • High concurrency, low latency, and high throughput
  • Fault-Tolerant
  • Message Broker Capabilities
  • Batch Handling Capability (providing ETL-like functionality)
  • Persistent by default

Advantages Of Apache Kafka

Real-time data analysis provides faster insights into your data, allowing faster response times. For example, a retailer can make predictions about what should be stocked, promoted, or pulled from the shelves based on the most up-to-date information possible.

Even on very large systems, Kafka operates very quickly. You can stream all data in real time to make decisions based on current information, rather than waiting until the data has been obtained, aggregated, and analyzed – which is the reality for many companies with large datasets.

Kafka is written in Java and Scala, so it is easy to pick up for developers already working on the JVM.

Use Cases For Kafka

Kafka is used for: 

  • Stream processing
  • Website activity tracking
  • Metrics collection and monitoring
  • Log aggregation
  • Real-time analytics
  • Complex Event Processing (CEP) support
  • Ingesting data into Spark
  • Ingesting data into Hadoop
  • Command Query Responsibility Segregation (CQRS) support
  • Replay messages
  • Error recovery
  • Guaranteed distributed commit log for in-memory computing (microservices)

Closing Thoughts

Apache Kafka is a distributed streaming platform capable of handling trillions of events a day. Kafka provides low-latency, high-throughput, fault-tolerant publish and subscribe pipelines and is able to process streams of events.

Happy coding! To learn more about the implementation of Apache Kafka and to experience Kyle Pollack’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, DevOps, Java Engineering, Product Development, Software Engineering, Web Engineering · Tagged: Apache, best practices, INRHYTHMU, JavaScript, Kafka, learning and growth, software engineering, web engineering

Dec 20 2022

Configuration Automation Tools: Orchestrating Successful Deployment

Overview

In the modern technology field, buzzwords come and go. One day databases are discussed as the best new thing in the world of Agile development, only for the conversation to recenter, the next day, on the importance of programming languages, frameworks, and methodologies.

But one unchanging aspect of this lifecycle is the people, who are an irreplaceable part of the creation, popularity, and demise of any given technology. The modern world calls for close-to-perfect execution, which individuals cannot always deliver on their own.

How does this demand for flawless mechanisms affect developers and creators when they are called on to build perfect products?


Automation is the technology by which a process or procedure is performed with minimal human interference, through the use of technological or mechanical devices. It is the technique of making a process or a system operate automatically. Automation crosses all functions within almost every industry, from installation and maintenance to manufacturing, marketing, sales, medicine, design, procurement, and management. Automation has revolutionized the areas in which it has been introduced, and there is scarcely an aspect of modern life that has been unaffected by it.

Automation provides a number of high-level advantages to every aspect of practice, making it an important process to have a working knowledge of:

  • Overview
  • Removing Human Error
  • Steps To Deployment
  • No Hidden Knowledge
  • Popular Implementation Technology Options
  • Closing Thoughts

Removing Human Error


Automation, automation, more automation – and, of course, throw in some orchestrated deployment and configuration management. Behind the buzzwords, the “new technology frontier” is about removing human error. This translates to removing the dependency on tribal knowledge in application and system administration job duties.

Those job duties are performed in a repetitive fashion and are usually consolidated into various custom scripts, leaving many of those scripted actions ready to be boxed up and reused over and over again.

Steps To Deployment


The primary cornerstones of prepping an automated deployment for an individual server follow a nearly identical framework:

  1. Download and install the various languages and/or framework libraries the application uses
  2. Download, Install, and Configure the Web server that the application will use
  3. Download, Install, and Configure the Database that the application will use
  4. Test to confirm that everything is installed and configured correctly

Running application tests ensures that the deployment is running as expected. Testing is crucial to a successful deployment.
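As a concrete illustration, a manual pass over those four steps for a hypothetical Python, nginx, and PostgreSQL stack might look like this:

    # 1. Language runtime and framework libraries
    sudo apt-get install -y python3 python3-pip
    pip3 install -r requirements.txt

    # 2. Web server
    sudo apt-get install -y nginx

    # 3. Database
    sudo apt-get install -y postgresql

    # 4. Verify that everything is installed and configured correctly
    curl -fsS http://localhost/health || echo "deployment check failed"

Every one of these lines is a candidate for the automation tools discussed below.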

For example, something as simple as a typo can be highly catastrophic. Consider the following intended command:

  • cd /var/etc/ansible && rm -rf *

(using && rather than ; guarantees the rm never runs if the cd fails). If a developer instead drops the cd and mistypes the target, running only:

  • rm -rf /

the whole drive is at risk of being erased – a mistake that can and will make or break a product.

Taking the time to ensure that the correct commands are executed can determine the overall success of a system.

No Hidden Knowledge


Looking back on the steps to deploy an application to an environment, there are inevitably a number of small intermediary steps involved. A leader's priority should be to surface each of these unique sub-steps and to bring all of the engineers around them up to speed on the associated best practices.

The information should be a source of truth, maintained in a repository that is easy and intuitive to leverage.

Popular Implementation Technology Options

What does a source of truth entail? Can one not skip documenting the information and go straight to executing the steps on a given system? Or create scripts to reconfigure the application if there is ever a need to? Those questions have been asked many times, and the solutions have repeatedly taken the form of extensive, comprehensive build tools and frameworks.

These tools are used throughout the industry to solve the problems of orchestrated deployment, configuration automation, and management.

DevOps tools such as Puppet, Chef, and Ansible are well-matured automation and orchestration tools. Each provides enough architectural flexibility to handle virtually any use case presented.

Puppet


Puppet was the first widely used configuration automation and orchestration tool, dating back to its initial release in 2005. Puppet uses a master/slave paradigm to control any number of machines, and Ruby serves as its scripting language for executing commands in a destination environment.

The “Puppet Agents” (slaves) are distinct, modularized components deployed to a server. They can be used to create the server (i.e., web server, database, application) in its destination environment. “Puppet Enterprise” (the master) comprises all the inner workings needed to manage, secure, and organize agents.
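As a small taste of the declarative style, here is a minimal, hypothetical Puppet manifest that ensures a web server is installed and running:

    # site.pp – ensure nginx is installed and its service is running
    package { 'nginx':
      ensure => installed,
    }

    service { 'nginx':
      ensure  => running,
      enable  => true,
      require => Package['nginx'],  # start the service only once the package exists
    }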

Puppet Documentation

  • https://puppet.com/docs/
  • http://pub.agrarix.net/OpenSource/Puppet/puppetmanual.pdf
  • https://www.rubydoc.info/gems/puppet/ 

Chef


Chef is somewhat similar to Puppet: the core language within Chef's abstract module components is also Ruby. Chef has several layers of management for individual infrastructure automation needs. The Chef workstation is the primary area for managing the various Chef components, which consist of “cookbooks”, “recipes”, and “nodes”.

“Recipes” are collections of configurations for a given system – a virtual, bare-metal, or cloud environment. Chef calls these different environments “nodes”. “Cookbooks” contain “recipes” and other configurations for application deployment and control mechanisms for the different Chef clients.
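For comparison, here is a minimal, hypothetical Chef recipe expressing the same web-server idea in Ruby:

    # cookbooks/webserver/recipes/default.rb – install and start nginx on a node
    package 'nginx'

    service 'nginx' do
      action [:enable, :start]  # enable at boot, then start immediately
    end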

Chef Documentation

  • https://docs.chef.io/
  • https://www.linode.com/docs/applications/configuration-management/beginners-guide-chef/ 

Ansible


Ansible is the newest mainstream automation and configuration management tool on the market, and accordingly it uses more modern programming languages, configuration concepts, and tools. Python is the programming language used in the framework, and configuration is described in YAML, a human-readable, language-agnostic format that is a superset of the ever-so-popular JSON. YAML is used within Ansible to describe an Ansible Playbook.

An Ansible Playbook contains the steps that need to be executed on a given system. Once the Playbook is in place, configuration or further manipulation of the host can be executed through the Ansible API, which is implemented in Python. There are several other components within the Ansible technology, such as modules, plugins, and inventory.
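To make this concrete, here is a minimal, hypothetical Playbook performing the same web-server setup sketched in the Puppet and Chef sections (the "web" host group is illustrative):

    # playbook.yml – install and start nginx on every host in the "web" group
    - name: Configure web servers
      hosts: web
      become: true  # escalate privileges for package installation
      tasks:
        - name: Install nginx
          ansible.builtin.apt:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled at boot
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true

Running it is a single command: ansible-playbook -i inventory playbook.yml.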

Ansible Documentation

  • https://docs.ansible.com/ansible/2.5/dev_guide/
  • https://devdocs.io/ansible/
  • https://geekflare.com/ansible-basics/ 

Closing Thoughts


After covering a couple of the configuration automation and deployment tools on the market, one can see the vast amount of flexibility available for eliminating repeatable steps and the human error that comes with them. These frameworks promote reusable software within an organization, and the ability to scale an application development environment and its infrastructure is critical.

The learning curve may be steeper than using plain bash scripts, but the structure and integrity of a proven tool, along with its ease of maintenance, outweigh the learning curve.

Written by Kaela Coppinger · Categorized: Cloud Engineering, Code Lounge, DevOps, Java Engineering, Learning and Development, Software Engineering, Web Engineering · Tagged: automation, best practices, cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering

Nov 07 2022

Progressive Web Applications: The Best Of Web And Native

Overview

The web is an incredible platform. Its mix of ubiquity across devices and operating systems, its user-centered security model, and the fact that neither its specification nor its implementation are controlled by a single company makes the web a unique platform to develop software on. Combined with its inherent linkability, it’s possible to search it and share what you’ve found with anyone, anywhere. 

Web applications can reach anyone, anywhere, on any device with a single codebase.

Progressive Web Apps (PWAs) build on open web technologies to provide cross-platform interoperability, giving users an app-like experience customized for their devices.

PWAs are websites that are progressively enhanced to function like installed, native apps on supporting platforms, while functioning like regular websites on other browsers.

In Aleks Rokhkind’s Lightning Talk session, we will be breaking down the following topics:

  • Overview
  • App User Expectations
  • PWAs
  • Service Workers
  • Live Demonstration
  • Closing Thoughts

App User Expectations

Users expect an incredibly intuitive and smooth experience while interacting with both web-based and native mobile applications. However, on mobile devices, users often prefer interacting with the same content through a native app rather than an external browser. As a result, content providers are forced to maintain multiple codebases that simultaneously target different platforms in order to meet these expectations.

PWAs

Progressive Web Applications work to address this cross-device challenge. In short, a Progressive Web Application, or PWA, is a website that has a near-identical feel to a native application on a mobile device.

A PWA combines the direct advantages of the web with the ability to work intuitively offline, as a native application does.

As a website, a PWA touts a few advantages over native apps:

  • Discoverability – can be easily discovered in online search engines and implements SEO recommendations 
  • Linkability – can be viewed, installed, and shared from a URL, effectively bypassing an app store

As an application, PWAs allow for implementations that are quite similar to native apps:

  • Installability – users can instantly open the app, by tapping an icon on the device’s home screen, effectively allowing it to look and feel more like a native app
  • Network Independence – provides an offline experience
  • Re-Engageability – background sync, providing user push notifications
  • Access To Device Hardware – camera, microphone, motion sensors, geolocation, etc. 

In order to meet a plethora of different needs, a PWA is not a singular technology but an amalgamation of several intersecting technologies (a sample manifest follows this list):

  • Service Worker – a background script running tasks for the main application
  • HTTPS – allowing only secure connections
  • Manifest File – a JSON file with metadata that helps to install said PWA on a device, similar to native application files
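A minimal sketch of such a manifest file (every value here is illustrative):

    {
      "name": "Example PWA",
      "short_name": "Example",
      "start_url": "/",
      "display": "standalone",
      "background_color": "#ffffff",
      "theme_color": "#1a73e8",
      "icons": [
        {
          "src": "/icons/icon-192.png",
          "sizes": "192x192",
          "type": "image/png"
        }
      ]
    }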

Service Worker

A service worker is a script that sits between a web application and the network, acting as a proxy.

A service worker can perform a number of tasks and capabilities that include, but are not limited to, the following (a sketch follows this list):

  • Intercepting, modifying, and serving the network requests and responses of the application. For example, when a device is offline, the service worker can serve up a previously cached response in order to provide a decent offline experience
  • Caching both the device’s static assets (stylesheets, scripts, icons, HTML, etc.) and dynamic data
  • Handling push notifications as well as background sync, even when the application is not being actively used
  • Running in a thread separate from the main application, keeping heavy work off the main thread
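Here is a minimal, cache-first sketch of those ideas (file names and cache contents are illustrative):

    // main.js – register the service worker once the page has loaded
    if ('serviceWorker' in navigator) {
      window.addEventListener('load', () => {
        navigator.serviceWorker.register('/sw.js');
      });
    }

    // sw.js – pre-cache the app shell and serve it when the network is unavailable
    const CACHE_NAME = 'app-shell-v1';
    const PRECACHE_URLS = ['/', '/styles.css', '/app.js', '/icons/icon-192.png'];

    self.addEventListener('install', (event) => {
      // Cache the static app shell while the service worker installs
      event.waitUntil(
        caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
      );
    });

    self.addEventListener('fetch', (event) => {
      // Serve cached responses first, falling back to the network;
      // this is what keeps the app usable offline
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });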

Live Demonstration

You’ve unpacked quite a few PWA principles – think you’re up to trying your hand at some practical application exercises?

Aleks Rokhkind has created an individual testing space just for you!

To rise to the challenge and apply what you’ve learned to the following application exercise, click here. 

Closing Thoughts

At their heart, Progressive Web Apps are just web applications. Using progressive enhancement, new capabilities are enabled in modern browsers. Using service workers and a web app manifest, a web application becomes reliable and installable. 

Progressive Web Apps provide a unique opportunity to deliver a web experience that users will love. Using the latest web features to bring enhanced capabilities and reliability, Progressive Web Apps allow a build to be installed by anyone, anywhere, on any device with a single codebase.

Happy coding!

To learn more about the implementation of Progressive Web Applications and to experience Aleks Rokhkind’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: Application Development, best practices, INRHYTHMU, learning and growth, progressive web apps, PWAs, software engineering, Web Development

