
InRhythm

Your partners in accelerated digital transformation


Software Engineering

Mar 30 2018

CSS the Right Way

With Sari Morninghawk!

Sari walks us through CSS best practices to help you keep your styles neat, tidy, and easy to maintain. If you couldn’t make it, check out the video below and make sure to sign up for our next workshop via Meetup!

Written by Jack Tarantino · Categorized: Code Lounge, Design UX/UI, Events, InRhythm News, Learning and Development, Software Engineering · Tagged: best practices, CSS, development

Mar 26 2018

Debugging Node without restarting processes

This post is another by InRhythm’s own Carl Vitullo. For the full, original post check out “Debugging Node without restarting processes” on hackernoon. Make sure to follow him on Twitter and in Reactiflux!

I’m typically a frontend developer, but every now and then I find myself writing or maintaining some backend code. One of the most significant drawbacks I’ve encountered when switching contexts to Node is the lack of the Chrome Developer Tools. Not having them really highlights how hard I lean on them during my day to day in web development. Luckily, there are options for enabling them, and they’ve gotten much more stable and usable in recent times. Node has a built-in debug mode that allows you to connect to the DevTools, and there’s a package called node-inspector that connects automatically.

It’s worth noting that versions of Node < 8 use a now-legacy Debugger API. Node 8 introduces the Inspector API, which better integrates with existing developer tools.

There’s one common theme that I’ve encountered when using these methods: they must be invoked when starting the node process. The other day, I found myself with a process in an odd state that I’ve had trouble reproducing, and I didn’t want to risk losing it by restarting the process to enable the inspector.

However, I found a solution, and from the official Node docs, no less:

A Node.js process started without inspect can also be instructed to start listening for debugging messages by signaling it with SIGUSR1 (on Linux and OS X).

This only applies to Unix-based OSes (sorry, Windows users), but it saved my bacon in this case. The kill command in Unix may be ominously named, but it can also be used to send arbitrary signals to a running process. man kill tells me that I can do so using the syntax kill -signal_name pid. The list of signal names can be enumerated with kill -l, shown below.

$ kill -l
hup int quit ill trap abrt emt fpe kill bus segv sys pipe alrm term urg
stop tstp cont chld ttin ttou io xcpu xfsz vtalrm prof winch info usr1 usr2

By default, kill sends an int, or interrupt signal, which is equivalent to hitting ctrl-c in a terminal window. There’s a lot of depth to process signals that I won’t get into (I encourage you to explore them!), but towards the end of the list is usr1. This is the SIGUSR1 that the Node docs are referring to, so now I just need a pid, or process ID, to send it to. I can find that by using ps and grep to narrow the list of all processes running on my system.

$ ps | grep node
9670 ttys000 0:01.04 node /snip/.bin/concurrently npm run watch:server npm run watch:client
9673 ttys000 0:00.46 node /snip/.bin/concurrently npm run watch:server-files npm run watch:dist
9674 ttys000 0:33.02 node /snip/.bin/webpack --watch
9677 ttys000 0:00.36 node /snip/.bin/concurrently npm run build:snip -- --watch
9678 ttys000 0:01.65 node /snip/.bin/nodemon --delay 2 --watch dist ./dist/src/server.js
9713 ttys000 0:01.00 /usr/local/bin/node ./dist/src/server.js
9736 ttys003 0:00.00 grep --color=auto node

My output is a little noisy due to a complex build toolchain that spawns many processes. But I see down towards the bottom the right process: node ./dist/src/server.js, with a pid of 9713.

Now I know the signal name is usr1 and the pid is 9713, so this is the command I run:

$ kill -usr1 9713

It runs with no output, but I check the logs of my node process and see:

Debugger listening on ws://127.0.0.1:9229/ad014904-c9be-4288-82da-bdd47be8283b
For help see https://nodejs.org/en/docs/inspector

I can open chrome://inspect, and I immediately see my inspect target.

I click “inspect”, and I’m rewarded with a Chrome DevTools window in the context of my node process! I can use the profiler to audit performance, use the source tab to add break points and inspect the running code, and use the console to view logs or modify the variables in the current scope, just like I would on the web.
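As an aside, signals like these can also be sent and handled from Node itself. SIGUSR1 is reserved by Node for exactly this debugger behavior, which is why kill -usr1 works with no setup, but other user signals can be caught in your own code. A small sketch using SIGUSR2:

```javascript
// Node reserves SIGUSR1 for activating the inspector; other signals, like
// SIGUSR2, can be handled in userland. process.kill sends an arbitrary
// signal; despite the name, it doesn't necessarily terminate the target.
let caught = false;

process.on('SIGUSR2', () => {
  caught = true;
  console.log(`SIGUSR2 received by pid ${process.pid}`);
});

// Equivalent to running `kill -usr2 <pid>` from another terminal.
process.kill(process.pid, 'SIGUSR2');

// Signal delivery is asynchronous, so check on a later turn of the event loop.
setTimeout(() => console.log('caught:', caught), 100);
```

Sending a signal to your own pid is just for demonstration; in practice you’d target another process’s pid, exactly as kill does from the shell.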

Written by Carl Vitullo · Categorized: InRhythm News, Learning and Development, Software Engineering · Tagged: best practices, Chrome, CLI, Devtools, Inspector, JavaScript, kill, Linux, Node, Processes, Signals

Mar 02 2018

One-directional data binding without frameworks

This post is another by InRhythm’s own Jack Tarantino. For the full post and additional links, check out the original “Frameworkless JavaScript Part 3: One-Way Data Binding” on his website.

The following article is one in a series about writing client-focused JavaScript without the help of libraries and frameworks. It’s meant to remind developers that they can write good code on their own using nothing but native APIs and methods. For more, check out the original article on writing small JavaScript components without frameworks and the previous article on templates and rendering.

This article is intended to be a deep-dive into data-binding, how it works, and how you can do it without frameworks like Angular, React, or Ember. It is strongly recommended that you read the previous article before this one.

1-way data-binding

1-way data-binding is “a method of putting data into the DOM which updates whenever that data changes”. This is the major selling point of the React framework, and with a little effort you can set up your own data binding with much less code. This is particularly useful when you have an application that sees routine changes to data, like a simple game, a stock ticker, or a Twitter feed; data needs to be pushed to the user, but no user feedback is required. In this case, we need an object with some data in it:

let data_blob = {  
  movie: 'Iron Man',
  quote: 'They say that the best weapon is the one you never have to fire.'
}

A Proxy:

const quote_data = new Proxy(data_blob, {
  set: (target, property, value) => {
    target[property] = value
    console.log('updated!')
    return true // required so the proxy reports a successful set
  }
})

And a poor DOM node to be our guinea pig:

<p class="js-bound-quote">My favorite {{ movie }} quote is "{{ quote }}".</p>  

In this case, we need the data_blob to serve as a storage unit for the proxy. Proxies in ES6 are a convenient way to trigger callbacks when certain actions are taken on an object. Here, we’re using the proxy to trigger a callback every time somebody changes a value in the data blob. We don’t have a way to update the text in the DOM node yet though so let’s set that up:

const quote_node = document.querySelector('.js-bound-quote')

quote_node.template = quote_node.innerHTML  
quote_node.render = function render (data) {  
  this.innerHTML = this.template.replace(/\{\{\s?(\w+)\s?\}\}/g, (match, variable) => {
    return data[variable] || ''
  })
}

This gives us a quick and dirty way to update the node’s inner HTML with some arbitrary data. The only thing needed to connect our new script with the proxy is to substitute the console.log call with quote_node.render(data_blob):

const quote_data = new Proxy(data_blob, {
  set: (target, property, value) => {
    target[property] = value
    quote_node.render(data_blob)
    return true // required so the proxy reports a successful set
  }
})

With all this set up, we can add a quick script to prove that our DOM node is, in fact, updated every time we change the data blob. Things happen exactly the way they would with a framework, but with no external dependencies and WAY less code.

const quotes = [  
  "What is the point of owning a race car if you can't drive it?",
  "Give me a scotch, I'm starving.",
  "I'm a huge fan of the way you lose control and turn into an enormous green rage monster.",
  "I already told you, I don't want to join your super secret boy band.",
  "You know, it's times like these when I realize what a superhero I am."
]

window.setInterval(() => {  
  const quote_number = Math.floor(Math.random() * quotes.length)
  quote_data.quote = quotes[quote_number]
}, 2000)

This adds a script that changes to a random quote every two seconds.

This is a little sloppy, as it only works for one node, one time. Let’s clean things up a bit and add constructors for both the nodes and Proxies. Continue reading on jack.ofspades.com…
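One possible shape for such a constructor, sketched here as a hypothetical bindNode helper that generalizes the quote_node wiring above (the name and API are illustrative, not from the original series):

```javascript
// Hypothetical helper generalizing the example above: bind any node to a
// data object so the node re-renders whenever a property is set.
function bindNode (node, data) {
  node.template = node.innerHTML
  node.render = function render (blob) {
    this.innerHTML = this.template.replace(/\{\{\s?(\w+)\s?\}\}/g,
      (match, variable) => blob[variable] || '')
  }
  node.render(data)
  return new Proxy(data, {
    set: (target, property, value) => {
      target[property] = value
      node.render(target)
      return true // report a successful set
    }
  })
}

// Usage:
// const quote_data = bindNode(document.querySelector('.js-bound-quote'), data_blob)
// quote_data.quote = 'New quote' // the node re-renders automatically
```

Each call captures its own node and data, so the same helper can bind any number of nodes without the one-node, one-time limitation.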

Written by Jack Tarantino · Categorized: Software Engineering · Tagged: best practices, JavaScript

Feb 20 2018

Overly defensive programming

This post is another one by InRhythm’s own Carl Vitullo. For the full post and additional links, check out the original “Overly Defensive Programming” on his medium channel.

I recently asked a coworker why a certain check was being done, and he answered with a shrug and said, “Just to be safe.” Over my career, I’ve seen a lot of code written just to be safe. I’ve written a lot of that code myself! If I wasn’t sure if I could rely on something, I’d add a safety check to prevent it from throwing an exception.

To give some examples, I mean idioms like providing unnecessary default values.

axios.get(url).then(({ data }) => {
  // If the response doesn't have a document, use an empty object
  this.setState({ document: data.document || {} });
});

Or checking that each key exists in deeply nested data.

render() {
  const { document } = this.state;
  const title = document &&
    document.page &&
    document.page.heading &&
    document.page.heading.title;
  return <h1>{title}</h1>;
}

And many other idioms like these. They prevent exceptions from being thrown, but used without care, suppressing an exception is like hanging art over a hole in the wall.

https://www.inrhythm.com/wp-content/uploads/2018/02/VCarl-Blog-vidro.mp4

At a glance, there doesn’t appear to be a problem. But you haven’t patched the hole and you haven’t fixed the bug. Instead of an easy-to-trace exception, you have unusable values — bad data — infiltrating your program. What if there’s a bad deployment on the backend and it begins returning an empty response? Your default value gets used, your chain of && checks returns undefined, and the string ‘undefined’ gets put on your page. In React code, it won’t render anything at all.

There’s an adage in computing, “be liberal in what you accept and conservative in what you send.” Some might argue that these are examples of this principle in action, but I disagree. I think these patterns, when used to excess, show a lack of understanding of what guarantees your libraries and services provide.

Data or arguments from third parties

What your code expects from somebody else’s code is a contract. Often, this contract is only implied, but care should be taken to identify what form the data takes and to document it. Without a well understood, clearly documented response format from an API, how can you tell whose code is in error when something breaks? Having a clear definition builds trust.

When you request data from an external HTTP API, you don’t need to inspect the response object to see if it has data. You already know that it exists because of the contract you have with your request library. For a specific example, the axios documentation defines a schema for the format the response comes back with. Further, you should know the shape of the data in the response. Unless the request is stateful or encounters an error, you’ll get the same response every time — this is the contract you have with the backend.

Data passed within the application

The functions you write and the classes you create are also contracts, but it’s up to you as a developer to enforce them. Trust in your data, and your code will be more predictable and your failure cases more obvious. Data errors are simpler to debug if an error is thrown close to the source of the bad data.

Unnecessary safety means that functions will continue to silently pass bad data along until it reaches a function that isn’t overly safe. This causes errors to manifest as strange behavior somewhere in the middle of your application, which can be hard to track down with automated tools. Debugging it means tracing the error back to find where the bad data was introduced.
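One way to make errors surface close to the source is to validate external data once, at the boundary, and throw immediately on a contract violation. A minimal sketch (validateDocument and the payload shape here are hypothetical):

```javascript
// Validate the API contract once, at the boundary. A violation throws
// immediately, close to the source, instead of letting bad data spread.
function validateDocument (payload) {
  if (typeof payload !== 'object' || payload === null || !('document' in payload)) {
    throw new TypeError('API contract violation: expected { document: ... }');
  }
  return payload.document;
}

const goodResponse = { document: { page: { heading: { title: 'Hello' } } } };
// Downstream code can now access nested keys without defensive checks.
console.log(validateDocument(goodResponse).page.heading.title); // → 'Hello'
```

Everything downstream of the validator can then trust the shape of the data, and a bad deployment on the backend fails with one loud, traceable error.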

I’ve set up a code sandbox with an example of overly safe and unsafe accesses.

const initialStuff = {
  things: {
    meta: {
      title: "I'm so meta, even this acronym",
      description: "will throw an error if you break the data"
    }
  }
};
// And within each component,
  handleClick = e => {
    if (this.state.stuff) {
      this.setState({ stuff: null });
    } else {
      this.setState({ stuff: initialStuff });
    }
  };

The “safe” component guards against exceptions being thrown.

const { title, description } =
  (stuff && stuff.things && stuff.things.meta) || {};

And the unsafe one gets the values without any checks.

const { title, description } = this.state.stuff.things.meta;

This approximates what could happen if an external API starts returning unusable data. Which of these failure modes would you rather diagnose?

https://www.inrhythm.com/wp-content/uploads/2018/02/VCarlBlog2.mp4


Performance and development speed

Beyond that, conditionals aren’t free. Individually, they have little impact on performance, but a codebase that makes a widespread habit of unnecessary checks will begin to spend a measurable amount of time on them. The impact can be significant: React’s production mode removes prop type checks for a significant performance increase. Some benchmarks show production mode in React 15 getting a 2–4x boost over development mode.

Conditional logic adds mental overhead as well, which affects all code that relies on the module. Being overly cautious with external data means that the next person to consume it doesn’t know if it’s trustworthy, either. Without digging into the source to see how trustworthy the data is, the safest choice is to treat it as unsafe. Thus the behavior of this code forces other developers to treat it as an unknown, infecting all new code that’s written.

Fixing the problem

When writing code, take a minute to think through the edge cases:

1. What kinds of errors might happen? What would cause them?

2. Are you handling the errors you can foresee?

3. Could the error occur in production, or should it be caught during development?

4. If you provide a default value, can it be used correctly downstream?

Many of the fixes to patterns like this are to handle the errors you can and to throw the errors you can’t. It makes sense to verify that data from an external API comes back in the shape you’re expecting, but if it doesn’t, can your app realistically continue? Lean on your error handling to show an appropriate response to the user, and your error logging to notify you that there’s an issue.
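As a sketch of that advice, the parsing step below throws loudly on a malformed payload, and the caller handles the one failure it can foresee by rendering an error state instead of silently defaulting (parseDocument is a hypothetical name):

```javascript
// Throw the errors you can't handle: a malformed payload is a contract
// violation, so fail loudly at the boundary rather than defaulting it away.
function parseDocument (raw) {
  const data = JSON.parse(raw); // may throw: a loud, easy-to-trace failure
  if (!data.document) {
    throw new Error('Malformed response: missing "document"');
  }
  return data.document;
}

// Handle the errors you can: the caller catches the foreseeable failure
// and shows an appropriate response instead of rendering bad data.
let view;
try {
  view = parseDocument('{"wrong":"shape"}');
} catch (err) {
  view = { error: err.message }; // e.g. render an error state, log the issue
}
console.log(view.error); // → 'Malformed response: missing "document"'
```

The error surfaces at the parse step, where the bad data entered, rather than as a mysterious undefined three components downstream.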

Learning what to expect from your tools is a large part of writing code you can trust. Many times this is documented explicitly, but sometimes it’s only implied. The format of data with a backend API is up to whoever’s writing that backend. If you’re full-stack, great news! You control both ends, and you can trust yourself (right?). If a separate team controls the backend API, then you’ll need to establish what is correct behavior and hold each other to it. A third party API can be harder to trust, but you’ll also have minimal influence over what it returns.

When writing React components, you have an even more powerful tool: PropTypes. Instead of scattering checks like a && a.b && a.b.c && typeof a.b.c === 'function' && a.b.c(), you can add a type definition as a static property.

Thing.propTypes = {
  a: PropTypes.shape({
    b: PropTypes.shape({
      c: PropTypes.func.isRequired
    }).isRequired
  }).isRequired
};

This might look a little ugly, but now the component will log an error during development if your data is wrong. The missing data will likely cause its own error to throw afterward, and which of these messages is more helpful?

Warning: Failed prop type: The prop 'a' is marked as required in 'Thing', but its value is 'undefined'.
// or
Uncaught TypeError: Cannot read property 'b' of undefined

External data that changes

Of course, sometimes you will have data that you’re not sure about. It might have keys a, b, c or x, y, z, or the data key might be null. These are good times to add checks, but consider defining them as functions that communicate their intent.

const hasDataLoaded = data => typeof data !== "undefined";
hasDataLoaded(data) && data.map(/* … */);

Well named functions will tell your coworkers down the road why these checks are present. Particularly good names will enable them to make the checks more accurate in the future.

Excessively safe idioms — and even well-considered checks — amount to stopgaps to guard against type errors. PropTypes are easy to add to an existing React codebase but aren’t the only option available. TypeScript and Flow are much more advanced tools to verify your data types. PropTypes will save you at runtime, but either TypeScript or Flow will allow you to verify that your code works at build time. Types give you an ironclad contract in your code: if you’re not using your data correctly, it won’t even build!

Types aren’t everyone’s jam, but they’ve grown on me for widely shared, highly complex, or difficult-to-change parts of the code. For the rest of the code, in React at least, PropTypes will help you catch errors more quickly and have more confidence in your codebase.

When a developer does something “just to be safe,” it’s a hint that there’s an unrecognized unknown. Ignoring these hints can cause small problems to accumulate into large problems. Know what errors you want when you’re making changes, how to guard against those you don’t, and learn to trust your code.

Written by Carl Vitullo · Categorized: Product Development, Software Engineering

Feb 13 2018

Keep Your Codebase Neat and Tidy

Developers tend to have opinions on style. If you’ve been in the industry for more than 15 minutes, you’ve at least heard about the arguments over spaces or tabs. And don’t even get me started on whether JavaScript needs semi-colons or not.

That’s where automatic code formatting comes in. Sure, when you’re working alone on a side project it doesn’t matter whether your formatting is consistent. But try working on a codebase with more developers! Everyone has different opinions, and a lot of times these opinions come to a head in a code review, resulting in returned PRs, wasted time, and maybe even missed deadlines.

Prettier and standard strive to solve this problem by creating rules around formatting and linting. Both projects then automatically apply these rules to a codebase. That way, developers can focus on shipping features, fixing bugs, and writing clean code, not arguing over semi-colon placement.

The folks working on prettier-standard have pulled together the best of these two projects to help you keep your code readable and your code reviews manageable.

The easiest way to get started is by adding a script to your package.json file that will format your entire codebase at once.

Start by installing prettier-standard:

npm install --save-dev prettier-standard

Then, you can set up a script that will run prettier-standard for you in your package.json file like this:

"scripts": {
  "format": "prettier-standard 'src/**/*.js'"
}

Once you have this set up, you can run npm run format from the command line and prettier-standard will format your whole codebase for you.


That’s all well and good if you’re introducing these changes into a new, small, personal project. But what if you’re working in a massive codebase, and you want to avoid a pile of merge conflicts over formatting while still holding other developers to agreed-upon standards?

That’s where git hooks come in: you can easily set them up using husky and lint-staged.

First, let’s install all three packages into our project. (If you’ve already installed prettier-standard, you don’t need to install it again).

npm install lint-staged husky prettier-standard --save-dev

Next, let’s make sure we have a precommit script that runs lint-staged set up in the "scripts" section of our package.json file.

"scripts": {
  "precommit": "lint-staged"
}

Finally, we’ll set up what we want to run when we call the precommit script.

"lint-staged": {
  "*.js": [
    "prettier-standard",
    "git add"
  ]
}

The above code tells our project to run prettier-standard on all staged JavaScript files when we commit our changes.


Now, we just need to go ahead and make sure we’ve set this up successfully.

Go ahead and git add your changes. Then git commit them. If you’ve done this right, you should see the feedback below in your terminal.

husky > npm run -s precommit (node v7.7.4)
↓ Running tasks for *.js [skipped]
   → No staged files match *.js

Adding prettier-standard to your codebase isn’t going to immediately improve your code’s performance or make a difference in how your app functions. It’s unlikely to directly impact the bottom line, and forget about explaining the benefits of setting up git hooks for linting and formatting to non-technical stakeholders.

However, it’s hard to argue against the long-term benefits of implementing a formatter and linter at any stage in your codebase’s lifecycle. Your codebase will grow in complexity. You will add new developers, each with their own opinions and experience levels. Protect your codebase by setting up a standard that all code must adhere to. Spend 10 minutes now to save hours later.


Written by Mae Capozzi · Categorized: Product Development, Software Engineering


Copyright © 2022 · InRhythm
