
Software Engineering

Jan 10 2023

Why You Should Migrate Away From AngularJS

Overview

Thanks to its long history, the AngularJS JavaScript framework became a go-to tool that many web developers still lean on. Today, we want to go through a few of the reasons why you should look to migrate your existing AngularJS projects (any Angular release below version 2) to a more modern and actively supported framework (or library).

AngularJS was a fantastic piece of technology, surely top of its class when it came out (October 2010), and we’ve enjoyed working with its successor Angular.io (also known as Angular 2+). Even so, AngularJS has become outdated (EOL December 2021) and is now a risk to your company in a variety of ways.


The framework has reached end of support (Version Support Status). This means the codebase is now read-only and will not be updated further. The framework has not been developed for over a year now (release 1.8.2 shipped in October 2020), and even though extended support was originally supposed to end mid-2021, it was pushed to December 2021 due to the global pandemic. Here’s a blog post by the Angular team regarding discontinued long term support.

To add some support to the point, AngularJS was created and mainly maintained by Google. Google recognized the shortcomings of AngularJS and completely rewrote it to release Angular.io. AngularJS only made it to version 1.8.3, while Angular.io has already reached major version 13 (current at the time of writing), with many more versions to come.

What Could Be Making You Hold On To Your Existing AngularJS Apps


Trust us, we’ve been there. You have a perfectly functioning application which needs little maintenance, and you have engineers who already know it inside and out. Why invest part of your budget in fixing something that’s not broken? Why bring in new people who don’t know the product? Why push your engineers to do something new or different from what they’ve been doing?

The Reasons

  • Technology: As stated earlier, AngularJS is outdated. In its feature set, its performance, and its ability to keep up with the latest developments in JavaScript and web browsers, AngularJS has clearly lagged behind, mainly because it has been in maintenance mode rather than active development for years. If you stay on this framework, you won’t be able to take advantage of the rapidly evolving web, or of evolving smart devices and their new features.
  • Support: As the framework is no longer maintained, any new issues or limitations you encounter will not only lack an answer or help from the AngularJS team (again, it is not supported anymore); you most probably also won’t have a huge online community to help you with them, as you would with any modern framework. This could mean a longer time to fix issues that come up in your application, and a rough experience for your engineers and users.
  • Security: Perhaps the biggest reason why you should move away from AngularJS. As with any unsupported package, you won’t be protected when new security exploits are identified, whether within the framework itself or in any of its thousands of direct and indirect dependencies (yes, your app can be exploited through vulnerabilities in the dependencies of the dependencies of AngularJS, which is your app’s dependency… you get the point). When something like this happens in an actively supported package, a fix is usually published quite swiftly, or the dependency that contains the vulnerability is updated to a newer version.
  • Talent: Not only do you want to provide the best possible experience for your users, but also for your app engineers. When you are trying to retain or expand your software team, AngularJS will weigh on any engineer’s decision. Engineers want to work with quality, cutting-edge technology, and it is hard for them to get or even stay excited about working on a framework that has reached end of life. It will be much easier for you to retain and hire engineers if your apps run on modern technologies and follow best practices and industry trends. I cannot stress enough how much easier it will be to fill open positions when your tech stack is attractive to engineers. Consider, too, what happens once you actually find someone willing to work on your legacy system: they will play hard to get, and you’ll end up paying more for an engineer who is probably not up to date on industry standards.
  • Business: For current technologies, the help you will get from the online community is massive, which speeds up the time it takes to fix bugs, implement new features, and resolve critical situations that may arise. Not only will your engineers be happier and more engaged in what they are doing, it also impacts your branding. Are you a company that invests in and works with the latest and greatest? Or a company that settles for whatever is there?

Closing Thoughts


We have seen the impact that migrating legacy applications has had for many of our customers, and we can say with confidence that it is massive. Not only do applications come alive and look and feel more modern, but engineers come to work in a better mood and eager to get things done, and a true engineering culture is fostered.

Written by Kaela Coppinger · Categorized: Code Lounge, Product Development, Software Engineering, Web Engineering · Tagged: angularjs, best practices, INRHYTHMU, JavaScript, ux, Web Development, web engineering

Jan 04 2023

Creating An Effective Proxy Using Node And Express

Overview

Engineers are often faced with the challenge of pulling together multi-website/application projects without full cross-platform permissions. A proxy built with Node and Express can pull two websites or applications together in a formal and cohesive format.

The aforementioned situation comes up more often than one would think; whether it is a question of host permissions or compatibility, it can undoubtedly throw up a number of possible roadblocks. There are several reasons a developer might not be able to run a site or app in their local environment; perhaps it’s complex to set up or involves a number of permissions. Regardless of the reason, Node and Express can present a novel way to solve the problem.


In Matt Billard’s Lightning Talk session, we will be uncovering the primary strategies for Creating An Effective Proxy Using Node And Express:

  • Overview
  • The Architecture
  • How It Works
  • “Gotchas” To Avoid
  • Live Demonstrations
  • Closing Thoughts

The Architecture


The browser makes a request to the Node/Express proxy server, where one of the following three scenarios crops up:

  1. If the user requested an HTML page, we need to combine the page from website 1 and website 2. The proxy first asks website 1 and then website 2 for its HTML. It then modifies the two HTML files, combines them, and returns the result to the browser. (Details on how this works below.)
  2. The HTML page will then request the CSS, JavaScript, and other assets it requires. These requests again go through the proxy, which passes them on. If website 1 has the asset, great: the proxy returns it to the browser.
  3. If website 1 does not have the asset, the proxy asks website 2 instead and returns the result to the browser (a minimal sketch of this fallback follows below).
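
The Lightning Talk walks through Billard’s actual implementation; as a rough orientation only, here is a minimal sketch of the asset-fallback behavior (scenarios 2 and 3) in plain Express. It assumes Node 18+ for the global fetch, and the SITE1/SITE2 URLs are placeholders rather than the talk’s real configuration. The HTML-combining step from scenario 1 is covered in the next section.

// Minimal sketch only -- not Billard's "Code Collider" code.
const express = require('express');

const SITE1 = 'https://www.example-site-one.com'; // hypothetical remote target
const SITE2 = 'http://localhost:3000';            // hypothetical local app

const app = express();

app.use(async (req, res) => {
  // Try website 1 first; if it does not have the asset, fall back to website 2.
  const fromSite1 = await fetch(SITE1 + req.url);
  const upstream = fromSite1.ok ? fromSite1 : await fetch(SITE2 + req.url);

  res.status(upstream.status);
  res.set('content-type', upstream.headers.get('content-type') || 'text/plain');
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(8080, () => console.log('Proxy listening on http://localhost:8080'));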

In Billard’s example, the InRhythm.com website is used as the target into which an engineer injects some local code (in this case a basic Create React App project); the final result, shown in his talk, is the two websites living together in the same browser window.


How It Works

As mentioned above, website 1’s and website 2’s HTML are combined. This involves a few steps. A webpage can’t have two doctype, html, head, or body tags, so some regex is required to strip those from website 2’s markup. Once website 2’s HTML is ready, a coder can inject it just before website 1’s closing </body> tag.
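
The talk shows the real implementation on screen; purely as an illustration of the idea, a strip-and-inject step might look roughly like the following (the helper names are ours, and site1Html/site2Html are assumed to be the raw HTML strings the proxy has already fetched).

// Hypothetical helpers -- an illustration of the approach, not the talk's exact code.
function stripDocumentChrome(html) {
  return html
    .replace(/<!doctype[^>]*>/gi, '')
    .replace(/<\/?(html|head|body)[^>]*>/gi, '');
}

function combinePages(site1Html, site2Html) {
  const injectable = stripDocumentChrome(site2Html);
  // Inject website 2's content just before website 1's closing </body> tag.
  return site1Html.replace('</body>', `${injectable}</body>`);
}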


The modifications to website 1’s HTML accomplish a few things (a sketch of the first two follows the list):

  1. Many websites have full ‘absolute URLs’ for their links. They look like this: https://www.inrhythm.com/who-we-are/. The problem is that if the user clicks on one of these, they’ll be taken away from our proxy and go to the target website. One can solve this by removing the www.something.com piece while retaining the path after the slash.
  2. The injected CSS, as discussed above, removes backgrounds and allows clicks to pass through website 2 to website 1. (Keep in mind this will probably be slightly different depending on the two sites a coder is combining.)
  3. Website 2’s HTML, modified as described above, is injected just before website 1’s closing </body> tag.
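
Again as a hedged sketch rather than the talk’s actual code, the URL rewriting and the click-through CSS could look something like this (the regex and the .injected-site-2 class name are illustrative assumptions):

// Strip "https://www.something.com" prefixes so clicks stay on the proxy,
// keeping only the path after the domain.
function makeLinksRelative(html) {
  return html.replace(/https?:\/\/[^/"']+(\/[^"']*)/gi, '$1');
}

// CSS injected into the combined page: website 2's layer gets no background and
// lets clicks fall through to website 1, except on its own interactive elements.
const passThroughCss = `
  <style>
    .injected-site-2 { background: transparent; pointer-events: none; }
    .injected-site-2 a, .injected-site-2 button { pointer-events: auto; }
  </style>
`;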

“Gotchas” To Avoid

It can take a fair amount of trial and error to develop a proxy to one’s exact specifications. Some of the most common issues a coder may find themselves troubleshooting are listed below (a combined sketch of the fixes follows the list):

  • Websites usually compress or “Gzip” their content. Normally this is a great thing: it means less data is transferred and websites load more quickly. When building a proxy, however, it becomes quite troublesome, because an engineer can’t parse, manipulate, and modify HTML that looks like gibberish. The solution is actually quite simple: as it turns out, there’s a header one can send with the request to ask the server not to Gzip anything.
  • When using a proxy, all requests are going to have the “host” header set to “localhost.” This is probably not a problem for most sites, but to the target server it doesn’t look like a very normal request, and indeed, one will find that some websites respond abnormally and return pages that look nothing like expected. The solution can be found in modifying one of the headers of the request.
  • Because the proxy modifies responses quite a bit, the original content length no longer matches the body, which can result in browser abnormalities. The solution to this problem is to delete the ‘content-length’ header before the proxy sends the browser any final response. This stops the browser from truncating the response and throwing away all the hard work that went into customizing one’s proxy.
  • When combining sites that use https, the proxy might complain that the SSL certificates don’t match what it’s expecting. It turns out it’s rather easy to relax this check (see the sketch below).
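
Pulling those four fixes together, a hedged sketch against Node’s standard http/Express response objects might look like the following. The header names are standard; the variable names, the example host, and the choice of NODE_TLS_REJECT_UNAUTHORIZED as the SSL relaxation are illustrative assumptions rather than the talk’s exact code.

// 1. + 2. Headers to send on the upstream request (however it is made, e.g. via
//         Node's http.request): ask for uncompressed content and spoof the host.
const upstreamHeaders = {
  'accept-encoding': 'identity',      // don't Gzip -- keep the HTML parsable
  host: 'www.example-site-one.com',   // hypothetical target host, not "localhost"
};

// 3. The body was modified, so the original content-length no longer matches;
//    remove it so the browser doesn't truncate the response.
function finalizeResponse(res, modifiedBody) {
  res.removeHeader('content-length');
  res.end(modifiedBody);
}

// 4. Relax strict SSL certificate verification for https targets.
//    (Development use only -- never ship this to production.)
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';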

Live Demonstrations

Matt Billard has crafted an intuitive demonstration to help guide you through these principles in practice:


Be sure to follow Billard’s entire Lightning Talk to view this impressive demonstration in real time.

Closing Thoughts

The Node.js framework Express allows an engineer to create web servers and APIs with minimal setup. Using Express in a Node.js application to create an API proxy that requests data from another API and returns it to a consumer is a vital skill to add to one’s toolkit. Using Express middleware to help optimize the API proxy will allow a coder to raise the bar and improve performance when returning data from the underlying API.

To develop and learn from Billard’s signature “Code Collider” proxy, feel free to download the direct code from GitHub.

Happy coding!

To learn more about Creating An Effective Proxy Using Node And Express, along with some live test samples, and to experience Matt Billard’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Cloud Engineering, InRhythmU, Product Development, Software Engineering · Tagged: Code Collider, Code lounge, express, INRHYTHMU, learning and growth, Node, Node.js, product development, proxy

Jan 03 2023

Creating Robust Test Automation For Microservices

Overview


Any project that a software engineer joins will come in one of two forms: a greenfield or a legacy codebase. In the majority of cases, projects fall into the realm of legacy repositories. It is the software engineer’s responsibility to strategically navigate their way through either type of project by looking objectively at opportunities to improve the codebase, lower the cognitive load on software engineering, and advise on better design strategies.

But chances are, there is a problem. Before architecture or design refactors can be undertaken, it’s best to take a pulse on the health of the platform end to end (E2E). The reason: lurking in a new or existing platform is likely a common ailment of the modern microservices approach – the inability to test the platform E2E across microservices that are, by design, commonly engineered by different teams over time.

Revitalizing Legacy Systems


One primary challenge faced by a number of software engineers is adaptive work on a greenfield platform that has fallen several months behind from a quality assurance perspective. It becomes impossible for QA to catch up, and equally impossible for QA to engineer and execute the E2E tests that complete common user journeys throughout the enterprise system.

To solve this conundrum, E2E data generation tools need to be created so that the QA team can keep up, building and testing every scenario and edge case.

There are three main requirements for an E2E account and data generation tool.

The tool should:

1) Create test accounts with mock data for each microservice

2) Link those accounts between upstream and downstream microservices

3) Provide easy-to-access APIs that are self-documenting

Using a tool like Swagger, QA can use the REST API description, i.e. the OpenAPI Specification (formerly the Swagger Specification), to view the available endpoints and operations to create accounts, generate test data, authenticate, authorize, and “connect the microservices.”
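
As a rough, hedged sketch of what such a self-documenting data-generation service could look like (the endpoint, the OpenAPI document, and the seeding logic are illustrative assumptions, not any particular platform’s actual API), an Express service using swagger-ui-express might be wired up like this:

const express = require('express');
const swaggerUi = require('swagger-ui-express');

// A tiny, hypothetical OpenAPI document describing the tool's endpoints.
const openApiDoc = {
  openapi: '3.0.0',
  info: { title: 'E2E Test Data Generator', version: '1.0.0' },
  paths: {
    '/accounts': {
      post: { summary: 'Create a test account with mock data in each microservice' },
    },
  },
};

const app = express();
app.use(express.json());

// Self-documenting: QA can browse the available operations at /api-docs.
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(openApiDoc));

// Create a mock account, then link it across upstream and downstream services.
app.post('/accounts', (req, res) => {
  const account = { id: Date.now().toString(), ...req.body };
  // ...call each microservice's own seeding endpoint here to create and link data...
  res.status(201).json(account);
});

app.listen(4000);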


Closing Thoughts

By creating tools for E2E testing, a QA team was able to eliminate the hassle of trying to figure out which upstream and downstream microservices needed to be called to ensure that the required accounts and data were available and set up properly for a successful test of all scenarios, i.e. across the variety of different data types, user permissions, and user information, and covering the negative test cases. The QA team was able to catch up and write their entire suite of test scenarios, generating the matching accounts and data to satisfy those requirements. The net result of having built an E2E test generation tool was that automated tests could be produced exponentially more quickly and the tests themselves were more resilient to failure.

Even though the microservices pattern continues to gain traction, developing E2E testing tools that generate accounts and test data across an enterprise platform will likely still remain a pain point.

There’s no better way to maintain a healthy system than to ensure accounts and data in the lower environments actually work and unblock testing end-to-end. 

Written by Kaela Coppinger · Categorized: Agile & Lean, Cloud Engineering, Java Engineering, Product Development, Software Engineering · Tagged: cloud engineering, INRHYTHMU, JavaScript, learning and growth, microservices, software engineering, testing

Dec 20 2022

A Comprehensive Introduction To Swift Package Manager

Introduction

The Swift Package Manager (SwiftPM) is Apple’s tool for managing package dependencies for Swift application development. SwiftPM has been an integrated part of Swift since v3.0.

So, what are Packages? Packages contain reusable code or other resources stored in repositories that your application needs to provide a feature or function. Some examples include, but are not limited to, the fonts and color scheme for your app, a library that provides access to a web resource, or a file with images required by your application. Other examples are the APIs and frameworks that add functionality to your app and simplify your coding tasks.

Let’s work together and break down a high-level overview of basic SwiftPM usage and its associated features.

What Do Package Managers Bring To The Table?

Package managers give developers a way to control which packages, and which versions of those packages, the project will consume.

Other features include:

  • Allows easy use of light-weight, reusable libraries
  • Reduces code duplication
  • Supports development best practices by simplifying and supporting modular coding (easy module/package creation)

While it is possible to manually manage dependencies in a project, it isn’t practical for anything but the smallest of applications. Best practices fall on the side of using a package manager to handle this function automatically.

SwiftPM supports Swift 3+ and Xcode 8+. Since Swift 5 and Xcode 11, SwiftPM has had cross-platform support for iOS, macOS, and tvOS.

Are There SwiftPM Alternatives?

There are two mainstream alternatives, CocoaPods and Carthage. Both are third-party tools requiring installation and much configuration before use.

CocoaPods


CocoaPods (2011) – Supports Swift 5+ and Xcode 3.11+ – Mature and stable. A centralized, easy-to-use command-line tool for package dependency management and integration into the application. CocoaPods is written in Ruby.

However, CocoaPods lacks direct control over project configuration and is basically a black box. CocoaPods also experiences random build failures that can’t be explained. Resolving these often involves removing and reinstalling CocoaPod dependencies from the project, clearing caches, and completely rebuilding the entire project. These steps can add a significant amount of time to the development process, especially on large projects.

Each CocoaPod dependency must have a “Podspec” file to define the metadata for that dependency. These files are typically uploaded to a different repository from the dependency itself. A “Podfile” within the app project will include a reference to this repository, and CocoaPods will use the information in this Podfile to fetch and install all of the dependencies for the app project. 

Carthage


Carthage (2014) – A decentralized command-line tool for dependency management. Developers have full project control and must manually provide all package and dependency information. Carthage pulls the packages the developer specifies. Configuration requires much more manual work than CocoaPods and is somewhat complicated.

Using SwiftPM

SwiftPM has many advantages over Carthage and CocoaPods. Foremost, SwiftPM is a native Apple tool integrated into Swift. Other advantages include a quick and simple configuration, easier control over packages and their sub-dependencies, and a GUI built into Xcode for managing package configurations. Each package’s metadata is defined in a “Package.swift” file, which resides in the same repository as the package source files.

Package Resolution

With the Package dependencies configured in the SwiftPM GUI, Xcode will download the packages and resolve dependencies at build time with no further developer interaction.

When setting dependencies, you have the following options:

  • By version: up to the next major version, up to the next minor version, a range, or an exact version
  • Branch (specify a branch)
  • Commit (specify a commit)

Adding A Package To Your Project

  1. Swift Packages > Add Package Dependency
  2. On the “Choose Package Repository” dialog, enter the desired repository.
  3. Click Next.
  4. On the “Choose Package Options” dialog, set the dependency rules (versions, branches, commits).
  5. Click Next. Xcode now fetches the dependency.
  6. On the “Add Package to YourPrjName” dialog, ensure that the relevant packages are checked and the proper Target is selected in the Add to Target cell.
  7. Click Finish.

Your package now appears in the Navigator under Swift Package Dependencies. Note that SwiftPM can reference both local and remote packages.

Create A Module And Add A Local Package With SwiftPM

SwiftPM simplifies the process of modularizing your code and adding it as a package to a local or external repository. This section will briefly describe the process of creating a module and adding it to a local repository.

  1. Identify a self-contained portion of code or another resource that is suitable for use in a module.
  2. Create a new package: File > New > Package. Name the package accordingly and add it to your application using the same dialog.
  3. Copy the code or resource to the new file.
  4. Delete the existing code or resource from the app.
  5. Add the new package to the Application Target. In “Application Project Settings,” add the new package under Frameworks and Libraries.
  6. In the Package code, define the platforms and versions where this Package works.
  7. In the Application’s code files, import the new package into every file where its code or resources are used.

If desired, you could upload the new Package to an external repository for use by developers outside your organization.

Additionally, SwiftPM will allow you to create and use binary packages. Binary packages permit developers to distribute libraries and frameworks without also distributing their source code.

Package Distribution

You can use SwiftPM to distribute resources, in addition to frameworks and libraries. SwiftPM also has built-in support for Apple’s DocC documentation format, which makes it easy to build robust interactive documentation files and tutorials.

The Future

SwiftPM is a mature product still under active development to improve it and add new features. Usage stats currently show it to be about equal in popularity with CocoaPods. However, since SwiftPM is a native tool that requires no additional work to begin using, its future is more assured than that of CocoaPods or Carthage.

Happy coding!

To learn more about Swift Package Manager as well as its influence in mobile development and to experience Andrew Balmer’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, InRhythmU, Learning and Development, Software Engineering · Tagged: best practices, INRHYTHMU, ios, iOS Ecosystem, learning and growth, Package Management, software engineering, SwiftPM, SwiftUI

Dec 20 2022

State Management: Redux vs. MobX


Redux was developed in 2015 by Dan Abramov and Andrew Clark, based on concepts used in Facebook’s Flux architecture. MobX also began in 2015 and was originally called Mobservable by the dev team. It became MobX with the 2.0 release on 26 Feb 2016.

There are three parts to state management:

  • The State, which is the source of truth for your application’s State.
  • The View, which is a rendering of the current State.
  • The Action, which is something that triggers a State change.

Bare JavaScript passes State via property trees. A function needing State obtains the information via property drilling. Depending on a function’s layer depth, property drilling can introduce significant overhead.

Using State Management introduces a structured workflow in the application and makes obtaining the application State trivial.

Redux


Overview

Redux manages application state in a centralized repository. There is a single state in the application and it is immutable. It can’t be modified by the application. This is similar to how State is managed in Flux. Redux is designed to be a predictable container for application state.

Redux combines Flux and Functional Programming and uses an immutable State as a single, predictable, source of truth in the application stored in a Javascript object.

Implementation

Using Redux requires a considerable amount of ramp-up work. There is a great deal of boilerplate code and configuration, and to take advantage of some capabilities, developers must follow explicit coding patterns.

Workflow

There are three main workflow components:

  • The UI: The UI passes the user’s activity, or their Action, to the Event Handler. 
  • The Event or Action Handler/Dispatcher: The Event Handler’s Dispatcher then notifies the Store of the Action. 
  • The Store with Reducers and the State: The Store clones a copy of the current State for the Reducers to act on based on the user’s Action. On completion, the Store passes the new state to the UI. The updated State will trigger a refresh of any component accessing that State.

The State’s Reducers are responsible for updating the State of the Store. They are pure functions that analyze the Store’s recursive data structure. They do not mutate the current state, nor do they make external API calls. Using a combining function, they recombine the results of their recursive processing, thereby building the new State as a return value.
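
As a minimal, hedged illustration of that Action → Dispatcher → Store/Reducer → new State flow (the counter domain and names below are our own example, not tied to any particular application):

const { createStore } = require('redux');

// A pure reducer: it never mutates the current state, it returns a new one.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counterReducer);

// The UI layer would subscribe so it can re-render whenever the State changes.
store.subscribe(() => console.log('New state:', store.getState()));

// The Action is dispatched to the Store, which runs it through the reducer.
store.dispatch({ type: 'INCREMENT' }); // logs: New state: { count: 1 }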

MobX


Overview

MobX was created in ES5 JavaScript, follows Functional Reactive Programming patterns, and prevents an inconsistent State by automatically performing all derivations. MobX’s main differences from bare JavaScript and Redux are that State is not immutable and that your application can store different observable data types in different MobX Stores.

MobX uses abstraction to hide much of the background operation and reduce the boilerplate code requirement. MobX directly modifies its current state(s) without cloning beforehand.

Implementation

MobX is implicit by nature and requires little boilerplate code. After implementing State storage in the desired data structure (plain values, arrays, objects, etc.), mark the properties that will change over time as observable so MobX can track them.

Workflow

There are four main workflow components:

  • Actions
  • Observable State
  • Derivations (Computed Values) 
  • Reactions (or Side Effects)

Events invoke Actions. Those actions can originate with the user or the application, and are the only thing that can modify the Observable State. MobX propagates any changes to the Observable State to all Derivations and Reactions. 
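
A minimal, hedged sketch of those four components (the shopping-cart domain and names are our own example, using MobX 6’s makeAutoObservable and autorun):

const { makeAutoObservable, autorun } = require('mobx');

class CartStore {
  items = []; // Observable State

  constructor() {
    // Marks fields as observable, getters as computed values, and methods as actions.
    makeAutoObservable(this);
  }

  // Action: the only way the Observable State gets modified.
  addItem(name, price) {
    this.items.push({ name, price });
  }

  // Derivation (computed value): automatically kept consistent with the State.
  get total() {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

const cart = new CartStore();

// Reaction (side effect): re-runs whenever the observables it reads change.
autorun(() => console.log(`Cart total: ${cart.total}`));

cart.addItem('Coffee', 4); // triggers the reaction: Cart total: 4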

Deciding On A State Manager


Types of Derivations include computation, serialization, and rendering; example Reactions (side effects) include I/O activity, DOM updates, and network activity. When deciding on a state manager, consider the following factors:

  • Application Architecture – Is the architecture Flux/Functional Reactive Programming (Redux) or more OOP (MobX)?
  • Ramp-up – Time until productive
  • Boilerplate – Redux requires extensive boilerplate setup, MobX has very little boilerplate
  • Learning Curve – Steeper: Redux requires developers to follow explicit coding patterns to access capabilities. Flatter: MobX is more implicit and doesn’t require extra tooling. Setup is adding a class and setting Observables, Actions, and Computed Values.
  • State Stores – Redux usually maintains a single immutable State; although it is possible to set up multiple Stores in Redux, it is a cumbersome process. MobX makes it much simpler to have multiple Stores, and it uses mutable State.
  • Purity – Redux uses pure functions and does not allow State modification. MobX allows mutable States. 
  • Scalability – Redux’s pure functions make code scaling much simpler and more predictable. MobX’s workflow does not include many pure functions, making testing less straightforward, and as the application grows it becomes more difficult to determine exactly what the output should be for a given input.
  • Performance – Redux can suffer performance degradation from very frequent updates to its single State Store. MobX, on the other hand, with support for multiple Stores can better support frequent State updates, such as real-time updates on market pricing for a trading platform.
  • Community Support and Tools – Redux has a much larger community and more widely tested tools. MobX has a smaller community and fewer tools.
  • Debugging – Redux has a single source of truth and uses a pure Store, making its output predictable and repeatable. Redux has better debugging tools, including ReduxDevTools and the Time Traveling Debugger (which provides State history). MobX, with its implicit nature, multiple States, and code abstraction, can be very difficult to troubleshoot when determining the proper output for a given input. All State updates happen behind the scenes and do not offer any history. MobX also has fewer robust tools.

To learn more about State Management and to experience Eric Marcatoma’s full Lightning Talk session, watch here.

Written by Kaela Coppinger · Categorized: Code Lounge, InRhythmU, Learning and Development, Software Engineering · Tagged: best practices, INRHYTHMU, learning and growth, MobX, Redux, State management


