
InRhythm

Your partners in accelerated digital transformation



Jan 10 2023

Why You Should Migrate Away From AngularJS

Overview

Given its long legacy, web developers have grown to lean on the Angular JavaScript framework as a go-to tool. Today, we want to go through a few of the reasons why you should look to migrate your existing AngularJS projects (any Angular release version under 2) to a more modern and actively supported framework (or library).

AngularJS was a fantastic piece of technology, surely top of its class when it came out (October 2010), and we have enjoyed working with its successor Angular.io (also known as Angular 2+). Even so, AngularJS has become outdated (EOL December 2021) and a risk to your company in a variety of ways.
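
To make the gap concrete, here is a minimal before-and-after sketch of the same hypothetical greeting component, first in AngularJS and then in a modern library (React is shown purely as one possible migration target):

```js
// Before: an AngularJS (1.5+) component definition
angular.module('app', []).component('greeting', {
  bindings: { name: '<' },                     // one-way input binding
  template: '<h1>Hello, {{$ctrl.name}}!</h1>',
});

// After: a roughly equivalent modern component (React, as one example)
import React from 'react';

function Greeting({ name }) {
  return <h1>Hello, {name}!</h1>;
}

export default Greeting;
```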


The framework has reached end of support (see Version Support Status). This means the project is now read-only and will not be updated further. The framework has not been actively developed for over a year (release 1.8.2 shipped in October 2020), and even though extended support was supposed to end mid-2021, it was pushed to December 2021 due to the global pandemic. The Angular team published a blog post regarding the discontinued long-term support.

To further the point: Angular was created and is mainly maintained by Google. Google recognized the shortcomings of AngularJS and completely rewrote it to release Angular.io. AngularJS only made it to version 1.8.3, while Angular.io has already reached major version 13 (current at the time of writing), with many more versions to come.

What Could Be Making You Hold On To Your Existing AngularJS Apps


Trust us, we’ve been there. You have a perfectly functioning application that needs little maintenance, and engineers who already know it inside and out. Why invest part of your budget in fixing something that’s not broken? Why bring in new people who don’t know the product? Why push your engineers to do something different from what they’ve been doing?

The Reasons

  • Technology: As stated earlier, AngularJS is outdated. In its feature set, its performance, and its ability to keep up with the latest developments in JavaScript and web browsers, AngularJS has clearly lagged behind, mainly because it has been in maintenance mode rather than active development for years. If you stay on this framework, you won’t be able to take advantage of the rapidly evolving web, the evolving smart devices, and their new features.
  • Support: As the framework is no longer maintained, any new issues or limitations you encounter will not get an answer or help from the AngularJS team, and you most probably won’t have a large online community to help you with them either, as you would with any modern framework. This could mean longer turnaround on issues that come up in your application, and a rough experience for your engineers and users.
  • Security: Perhaps the biggest reason to move away from AngularJS. As with any unsupported package, you won’t be protected when new security exploits are identified, be it within the framework itself or in any of its thousands of direct and indirect dependencies (yes, your app can be exploited through vulnerabilities in the dependencies of the dependencies of AngularJS, which is your app’s dependency… you get the point). When something like this happens in an actively supported package, a fix is usually published quite swiftly, or any dependency that includes the vulnerability is updated to a newer version.
  • Talent: You want to provide the best possible experience not only for your users, but also for your engineers. When you are trying to retain or expand your software team, AngularJS will weigh on any engineer’s decision. Engineers want to work with quality, cutting-edge technology, and it is hard for them to get or stay excited about working on a framework that has reached end of life. It will be much easier to retain and hire engineers if your apps run on modern technologies and follow best practices and industry trends; it cannot be stressed enough how much easier it is to fill open positions when your tech stack is attractive to engineers. Consider also what happens once you do find someone willing to work on your legacy system: they can play hard to get, and you may end up paying more for an engineer who is probably not up to date on industry standards.
  • Business: For current technologies, the help you will get from the online community is massive, which shortens the time it takes to fix bugs, implement new features, and resolve critical situations that may arise. Not only will your engineers be happier and more engaged in what they are doing, it also impacts your branding. Are you a company that invests in and works with the latest and greatest? Or a company that settles for whatever is there?

Closing Thoughts


We can say with confidence, having seen it with many of our customers, that the impact of migrating legacy applications is massive. Not only do applications come alive and look and feel more modern, but engineers come to work in a better mood, eager to get things done, and a true engineering culture is fostered.

Written by Kaela Coppinger · Categorized: Code Lounge, Product Development, Software Engineering, Web Engineering · Tagged: angularjs, best practices, INRHYTHMU, JavaScript, ux, Web Development, web engineering

Dec 20 2022

A Comprehensive Overview Of Apache Kafka

Overview

Apache Kafka is an open-source, distributed event-streaming platform, or message queuing system. Kafka provides real-time data analysis that runs on servers and clients, either locally or in the cloud, on Linux, Windows, or Mac platforms. Kafka’s messages are persisted on disk and replicated within the cluster to prevent data loss.

Some typical Kafka use cases are stream processing, log aggregation, data ingestion to Spark or Hadoop, error recovery, etc.

In Kyle Pollack’s Lightning Talk session, we will be breaking down the following topics:

  • Overview
  • Basic Architecture
  • Benefits
  • Advantages Of Apache Kafka
  • Use Cases For Kafka
  • Closing Thoughts

Basic Architecture

There are four main components:

  • The Producer – The client apps that write their Events, or Topics, to the Kafka queue
  • The Topic – Topics are the Events that Kafka stores. They are multi-producer, multi-subscriber (Consumer), decoupled, and can have any number of subscribers or none at all
  • The Broker – Each Broker is a Kafka server that organizes and sequentially stores incoming Events by Topic and stores them on disk in Segmented Partitions
  • The Consumer – The apps that subscribe to Kafka Topics

A Kafka cluster is made of one or more servers, called Brokers. Topics live in one or more Partitions on one or more Brokers. 


As Producers write Events to the Topic queues, the Brokers store the messages in Segments within their Partitions according to Topic ID. Kafka writes Event messages into any Partition configured for that Topic ID, on any Broker. Because the save is spread across all Brokers that service that Topic ID, and the data is written non-sequentially into Segments within those Partitions, no single Broker or Partition contains the full, sequential list of Events for that Topic. Each Partition holds only a subset of the Event records in its Segments.

Kafka Producers

Producers are client applications writing Topics to the Kafka Cluster. 
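
As a concrete illustration, here is a minimal producer sketch using the kafkajs Node.js client (the broker address, Topic name, and message contents are placeholder assumptions, not part of the talk):

```js
const { Kafka } = require('kafkajs');

// Connect to the cluster through one or more bootstrap Brokers
const kafka = new Kafka({ clientId: 'my-app', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function produce() {
  await producer.connect();
  // Write an Event to the 'orders' Topic; Kafka picks the Partition
  await producer.send({
    topic: 'orders',
    messages: [{ key: 'order-1', value: JSON.stringify({ total: 42 }) }],
  });
  await producer.disconnect();
}

produce().catch(console.error);
```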

Kafka Brokers

Brokers receive event streams from Producers and store them sequentially by Topic ID in one or more Partitions across one or more Brokers. Each Broker can handle many Partitions in its storage. All received messages are stored with an Offset ID.

For example, when receiving three Events on a given Broker with three Partitions, the Broker could store those Events to Partitions in the order 2, 1, 3, while another Broker in the cluster could store them in the order 3, 2, 1. Because the writes to Partitions within Brokers are ad hoc, the individual Segments in any one Partition do not contain a sequential string of Events. On retrieval, however, Kafka returns those records in their correct order by using their Broker-assigned Offset IDs.

Additionally, you can configure the Event retention as suitable for the application.
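
For instance, retention can be set per Topic at creation time. Below is a hedged sketch using the kafkajs admin client; retention.ms is a standard Kafka topic configuration, while the Topic name, Partition count, and replication factor are placeholder assumptions:

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'admin-app', brokers: ['localhost:9092'] });
const admin = kafka.admin();

async function createTopic() {
  await admin.connect();
  // Create a Topic spread over 3 Partitions, replicated to 2 Brokers,
  // keeping Events for 7 days (retention.ms is a standard topic config)
  await admin.createTopics({
    topics: [{
      topic: 'orders',
      numPartitions: 3,
      replicationFactor: 2,
      configEntries: [{ name: 'retention.ms', value: String(7 * 24 * 60 * 60 * 1000) }],
    }],
  });
  await admin.disconnect();
}

createTopic().catch(console.error);
```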

The Topic

Kafka organizes Events by Topic and may store a Topic in multiple Partitions on multiple Brokers. This provides reliability, and it also enhances performance by spreading the store action across multiple computers, avoiding the I/O bottlenecks that using a single Broker might entail. Each Topic is assigned a Topic ID.

Kafka Consumers

Consumers are apps that read Topic information from Kafka queues. Consumers automatically retrieve new messages as they arrive in the queue.
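
Continuing the earlier sketch, a minimal kafkajs consumer might look like the following (the group ID and Topic name are placeholder assumptions):

```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'my-app', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'order-processors' });

async function consume() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'orders', fromBeginning: true });
  // Kafka delivers each Partition's records in Offset order
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`${topic}[${partition}] @ ${message.offset}: ${message.value.toString()}`);
    },
  });
}

consume().catch(console.error);
```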

Benefits

  • I/O Performance – Writing Event records non-sequentially to multiple Brokers/Partitions avoids the I/O bottlenecks that could occur if they were written sequentially into a single Partition
  • Scalability – Kafka scales horizontally by increasing the number of Brokers in the cluster
  • Data Redundancy – You can configure Kafka to write each Event to multiple Brokers
  • High concurrency, low latency, and high throughput
  • Fault tolerance
  • Message broker capabilities
  • Batch handling capability (providing ETL-like functionality)
  • Persistence by default

Advantages Of Apache Kafka

Real-time data analysis provides faster insights into your data, allowing faster response times. For example, a retailer can make predictions about what should be stocked, promoted, or pulled from the shelves based on the most up-to-date information possible.

Even on very large systems, Kafka operates very quickly. You can stream all data in real time to make decisions based on current information, rather than waiting until the data has been obtained, aggregated, and analyzed, which is the case for many companies with large datasets.

Kafka is written in Java (with some Scala), which makes it easier to learn for teams already working on the JVM.

Use Cases For Kafka

Kafka is used for: 

  • Stream processing
  • Website activity tracking
  • Metrics collection and monitoring
  • Log aggregation
  • Real-time analytics
  • Complex Event Processing (CEP) support
  • Ingesting data into Spark
  • Ingesting data into Hadoop
  • Command Query Responsibility Segregation (CQRS) support
  • Replay messages
  • Error recovery
  • Guaranteed distributed commit log for in-memory computing (microservices)

Closing Thoughts

Apache Kafka is a distributed streaming platform capable of handling trillions of events a day. Kafka provides low-latency, high-throughput, fault-tolerant publish and subscribe pipelines and is able to process streams of events.

Happy coding! To learn more about the implementation of Apache Kafka and to experience Kyle Pollack’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, DevOps, Java Engineering, Product Development, Software Engineering, Web Engineering · Tagged: Apache, best practices, INRHYTHMU, JavaScript, Kafka, learning and growth, software engineering, web engineering

Sep 21 2022

How To Write A Great Test Case

Overview

A test case is exactly what it sounds like: a test scenario measuring functionality across a set of actions or conditions to verify the expected result. Test cases apply to any software application, can be run manually or as automated tests, and can make use of test case management tools.

Most digital-first business leaders know the value of software testing. Some value high-quality software more than others and might demand more test coverage to ultimately satisfy customers. So, how do they achieve that goal?

They test more, and test more efficiently. That means writing test cases that cover a broad spectrum of software functionality. It also means writing test cases clearly and efficiently, as a poor test can prove more damaging than helpful.

A key thing to remember when it comes to writing test cases is that they are intended to test a basic variable or task such as whether or not a discount code applies to the right product on an e-commerce web page. This allows a software tester more flexibility in how to test code and features.

In Nathan Barrett’s Lightning Talk session, we will be breaking down the following topics:

  • What Is A Test Case?
  • What Makes A Good Test Case?
  • Live Demonstration
  • Closing Thoughts

What Is A Test Case?

At a high level, to “test” means to establish the quality, performance, or reliability of a software application. A test case is a repeatable series of specific actions designed to either verify success or provoke failure in a given product, system, or process. 

A test case gives detailed information about testing strategy, testing process, preconditions, and expected output. These are executed during the testing process to check whether the software application performs the task for which it was developed. A passed test case functions like a receipt verifying the correct functionality of the subject of the test. 

To write a test case, we must have the requirements from which to derive the inputs, and the test scenarios must be written so that no features are missed in testing. We should also have a test case template to maintain uniformity, so that every test engineer follows the same approach when preparing the test document.

Test cases serve as final verification of functionality before releasing it to the direct product users. 

What Makes A Good Test Case?

Writing test cases varies depending on what the test case is measuring or testing. This is also a situation where sharing test assets across dev and test teams can accelerate software testing. But it all starts with knowing how to write a test case effectively and efficiently.

Test cases have a few integral parts that should always be present as fields, as well as some “nice to have” elements that can only enhance the presented results. 

Required Elements:

  • Summary
    • Concise, direct encapsulation of the purpose of the test case 
  • Prerequisites
    • What needs to be in place prior to starting the test?
    • Bad Prerequisites: captured in test steps, not present, overly specific
    • Good Prerequisites: concise and descriptive, lay out all set-up prior to testing, include information to learn more if desired 
  • Test Steps
    • The meat of the test case
    • Good Test Steps: each step is a specific, atomic action performed by the user with an expected result; divergent paths are called out where necessary; and the steps cite which test data (as laid out in the prerequisites) needs to be applied
    • Really great test steps should treat the user like they know “nothing” and communicate everything from start to finish
  • Expected Results
    • How do we know that the test hasn’t failed?
    • Bad Expected Results: Page loads correctly, view looks good, app behaves as expected
    • Good Expected Results: Landing page loads after spinner with user’s account details present, view renders with all appropriate configurations (title, subtitle, description, etc.), toggle changes state when tapped (enabled→disabled)

Preferred Additional Elements:

  • Artifacts
    • Screenshots, files, builds, configurations, etc.
  • Test Data
    • Accounts, items, addresses, etc.
    • What information is needed during the test?
    • Pre-rendered prerequisite fulfillment
  • Historical Context
    • Previous failures, previous user journeys, development history, etc.
    • Has this feature been “flakey” in the past?
    • What are previous failure points?
    • How critical is this feature?

The very practice of writing test cases helps prepare the testing team by ensuring good test coverage across the application, but writing test cases has an even broader impact on quality assurance and user experience.
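
To tie the elements together, here is a hedged sketch of how Summary, Prerequisites, Test Steps, and Expected Results might map onto an automated test for the discount-code example from the overview (Jest syntax; applyDiscount, 'SAVE10', and the product objects are hypothetical names invented for illustration):

```js
// Summary: verify that a discount code reduces the price of the right product.
// Prerequisites: a cart module is importable and a known 10%-off code exists
// (applyDiscount and 'SAVE10' are hypothetical names for illustration).
const { applyDiscount } = require('./cart');

test('SAVE10 applies a 10% discount to an eligible product', () => {
  // Test step 1: start with an eligible product at a known price
  const product = { id: 'sku-123', price: 100, discountEligible: true };

  // Test step 2: apply the discount code
  const discounted = applyDiscount(product, 'SAVE10');

  // Expected result: a specific, verifiable price, not "looks good"
  expect(discounted.price).toBe(90);
});

test('SAVE10 does not apply to an ineligible product', () => {
  // Divergent path: the same action against a product outside the promotion
  const product = { id: 'sku-456', price: 100, discountEligible: false };
  expect(applyDiscount(product, 'SAVE10').price).toBe(100);
});
```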

Live Demonstration

Nathan Barrett has crafted an intuitive test of specificity to help guide testers in understanding how they should structure their cases:

Be sure to follow Nathan’s entire Lightning Talk to follow along with these steps in real time.

Closing Thoughts

Test cases help guide the tester through a sequence of steps to validate whether a software application is free of bugs, and working as required by the end-user. A well-written test case should allow any tester to understand and execute the test.

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to understanding the beneficial prerequisites to writing a good test case for any type of application. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about how to write a great test case, as well as its importance in the software development process, and to experience Nathan Barrett’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: Cloud Engineering, Design UX/UI, DevOps, InRhythmU, Learning and Development, Product Development, Software Engineering, Web Engineering · Tagged: devops, INRHYTHMU, learning and growth, SDET, software development, software engineering, ux, web engineering

Sep 07 2022

Integrations For Apache – Flink, NiFi, And Kafka

Overview 

Apache is one of the go-to web servers for website owners and developers, with more than a 50% share in the commercial web server market. 

Apache HTTP Server is a free and open-source server that delivers web content over the internet. It is one of the oldest and most reliable pieces of web server software, maintained by the Apache Software Foundation, with the first version released in 1995. Commonly referred to simply as Apache, it quickly became the most popular HTTP server on the web.

It is a modular, process-based web server application that creates a new thread with each simultaneous connection. It supports a number of features, many of which are compiled as separate modules that extend its core functionality, providing everything from server-side programming language support to authentication mechanisms. Virtual hosting is one such feature, allowing a single Apache web server to serve a number of different websites.

In Tim Spann’s Lightning Talk session, we will be breaking down the following topics:

  • What Is An Apache Integration?
  • Apache Flink
  • Apache NiFi
  • Apache Kafka
  • Live Demonstrations
  • Closing Thoughts

What Is An Apache Integration?

A software integration is the process of bringing together various types of software subsystems so that they create a unified single system. The integration should be carefully coordinated to result in a seamless connection of the separate parts. When done skillfully, the increased efficiency is a tremendous benefit to the Apache developer.

Implementing an integration that works best for an individual development team’s workflow is essential to the future success of the project. Taking the time to explore and walk through the available features will only benefit the user experience and streamline day-to-day tasks. 

Apache Flink

Apache Flink is a real-time processing framework which can process streaming data. It is an open source stream processing framework for high-performance, scalable, and accurate real-time applications. It has a true streaming model and does not take input data as batch or micro-batches.

(Diagram: the different layers that run as part of the Apache Flink ecosystem.)

Apache NiFi

Apache NiFi is a visual data flow based system which performs data routing, transformation and system mediation logic on data between sources or endpoints.

Apache Kafka

Apache Kafka is a distributed streaming platform that:

  • Publishes and subscribes to streams of records, similar to a message queue or enterprise messaging system
  • Stores streams of records in a fault-tolerant, durable way
  • Processes streams of records as they occur

Apache Kafka was built with the vision of becoming the central nervous system that makes real-time data available to all the applications that need it, with use cases ranging from stock trading and fraud detection to transportation, data integration, and real-time analytics.

(Diagram: the operational flow of information using the Apache Kafka plug-in.)

Live Demonstrations

Tim Spann has crafted an intuitive demonstration to help guide you through these different Apache integrations in practice: 

Be sure to follow Tim’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts 

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to exploring the basic components needed to test how Apache integrations can improve your own data applications. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Apache Integrations in application development and to experience Tim Spann’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: InRhythmU, Learning and Development, Product Development, Web Engineering · Tagged: Apache, devops, Flink, INRHYTHMU, Kafka, learning and growth, NiFi, ux, Web Development, web engineering

Aug 23 2022

Web Bundling Alternatives

Overview

Bundling a full website as a single file and making it shareable opens up new use cases for the web. Imagine a world where one can:

  • Create their own content and distribute it in all sorts of ways without being restricted to the network
  • Share a web app or piece of web content with their friends via Bluetooth or Wi-Fi Direct
  • Carry their site on their own USB or even host it on their own local network

The Web Bundles API is an exciting proposal that could make all of this possible and, in turn, simplify and streamline processes for the developer!

Web Bundling is the process of fetching, resolving, packing, and reducing a tree of dependencies into compressed static files that one can readily host on the web. 

In Wai Fai Lau’s Lightning Talk session, we will be breaking down the following topics:

  • What Are Web Bundlers?
  • Why Web Bundlers?
  • Web Bundling Options
  • Live Demonstrations
  • Closing Thoughts

What Are Web Bundlers?

In the simplest terms, a Web Bundler is a tool that bundles all HTTP resources into a single, web-optimized output folder. In doing so, it makes developers’ lives simpler and their workflows easier to streamline.

The primary job of a bundler is to transpile the code down into something the browser can understand and then encapsulate those HTTP resources. The bundler allows the developer to output all of these resources into a single folder that can then be included on a web page to load the entire application at once. 

Why Web Bundlers?

Web Bundlers provide a number of assets for developers looking to utilize succinct packaging. The primary sectors of improvement associated with bundling are:

  1. Improvement Of Developer Experience
    • Dev server
    • Hot module replacement 
    • Debugging support
  2. Optimization Of Asset Production To Improve Performance And UX
    • Minification
    • Compressed and performant code
    • Code splitting
    • Tree shaking (see the sketch below)
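
As a quick illustration of tree shaking (the file names and functions below are hypothetical), a bundler that starts at main.js follows the import graph and can drop the unused multiply export from the final bundle entirely:

```js
// math.js – exports two functions, but the app only ever imports one
export const add = (a, b) => a + b;
export const multiply = (a, b) => a * b; // unused – tree shaking drops this

// main.js – the bundler's entry point; only `add` is pulled into the bundle
import { add } from './math.js';
console.log(add(2, 3));
```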

Web Bundling Options

There are a number of bundling software options available for developers looking to implement the method. The most important factors to weigh when considering a specific alternative are two-fold: build speed and package size. A faster build and a smaller package are the ideal solution. 

Webpack

Webpack is currently the most popular alternative available to developers. It addresses the lack of tooling for complex single page applications. Before Webpack, developers had to manage dependencies manually.

Webpack is:

  • Highly customizable
  • Backed by a mature ecosystem 
  • A provider of responsive documentation and support
  • Capable of utilizing module federation 
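
For orientation, a minimal webpack.config.js might look like the sketch below; the entry and output paths are placeholder assumptions, and real projects typically add loaders and plugins:

```js
// webpack.config.js – a minimal production build
const path = require('path');

module.exports = {
  mode: 'production',        // enables minification and tree shaking
  entry: './src/index.js',   // the module the dependency graph starts from
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.[contenthash].js', // cache-busting file names
  },
  devServer: { hot: true },  // hot module replacement during development
};
```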

Parcel

Parcel automatically tracks all files, configurations, plugins, and dev dependencies that are involved in the build, and granularly invalidates the cache when something changes. It integrates with low-level operating system APIs to determine what files have changed in milliseconds, no matter the project size.

Parcel is:

  • Faster than Webpack
  • Zero-configuration by default
  • A provider of built-in hot module replacement

Rollup

Rollup is a module bundler for JavaScript which compiles small pieces of code into something larger and more complex, such as a library or application. It uses the new standardized format for code modules included in the ES6 revision of JavaScript, instead of previous idiosyncratic solutions such as CommonJS and AMD.

Rollup is:

  • Simpler to configure than Webpack
  • A provider of automatic tree shaking
  • A producer of smaller bundles than Webpack 
  • A user of scope hoisting
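
A minimal rollup.config.js sketch to the same effect (the input and output paths are placeholder assumptions):

```js
// rollup.config.js – Rollup consumes and emits standard ES modules
export default {
  input: 'src/main.js',   // entry point of the module graph
  output: {
    file: 'dist/bundle.js',
    format: 'esm',        // cjs, iife, and umd are also supported
  },
  // unused exports are tree-shaken automatically, with no extra configuration
};
```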

esbuild

esbuild is a JavaScript bundler created by Evan Wallace. The code itself is written in Go with speed in mind, and it’s clear that the developer endeavored to avoid unnecessary allocations as much as possible.

esbuild is:

  • The fastest bundler currently on the market
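
esbuild can be driven from a short Node.js script through its JavaScript API; in this hedged sketch, the entry point and output path are placeholder assumptions:

```js
// build.js – run with `node build.js`
const esbuild = require('esbuild');

esbuild.build({
  entryPoints: ['src/index.js'],
  bundle: true,      // inline the whole dependency graph into one file
  minify: true,
  sourcemap: true,
  outfile: 'dist/bundle.js',
}).catch(() => process.exit(1));
```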

Vite

Vite.js is a rapid development tool for modern web projects. It focuses on speed and performance by improving the development experience. Vite uses native browser ES imports to enable support for modern browsers without a build process.

Vite is:

  • Capable of hot module replacement
  • A provider of one of the fastest bundling times
  • An automated code splitter 
  • A provider of multi-page support right off the bat
  • Compatible with most popular framework templates
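
A minimal vite.config.js sketch (the port and output directory are placeholder assumptions):

```js
// vite.config.js – Vite serves native ES modules in development and
// bundles with Rollup for production builds
import { defineConfig } from 'vite';

export default defineConfig({
  server: { port: 3000 },  // dev server with hot module replacement built in
  build: {
    outDir: 'dist',
    sourcemap: true,
  },
});
```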

Live Demonstrations

Wai Fai Lau has crafted an intuitive demonstration to help guide you through these different Web Bundling alternatives in practice: 

Be sure to follow Wai Fai’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

All programs should always be designed with performance and the user experience in mind. The properties explored above are the primary stepping stones to exploring the basic components needed to test how Web Bundling can improve your own applications. Be sure to explore, have fun, and match up the components that work best for your project!

Happy coding!

To learn more about Web Bundling in web development and to experience Wai Fai Lau’s full Lightning Talk session, watch here. 

Written by Kaela Coppinger · Categorized: InRhythmU, Product Development, Web Engineering · Tagged: devops, INRHYTHMU, Learning and Development, Web Bundling, Web Development, web engineering
