Refactoring JavaScript with Functional Patterns

Let's explore some common legacy JavaScript patterns and try to leverage functional programming concepts to improve the way we write our code.

Searching for the perfect flavor

Like any other software consultancy, at our company we do a lot of talent searching, and day by day we get applications from software developers that want to work with us. A great majority of these developers, if not all of them, have been exposed to JavaScript in one way or another, and maybe this is not a surprise since in the past few years the language's popularity has skyrocketed. We can find it virtually anywhere in the stack, being used for many different purposes. This delicious ice cream also comes in different flavors, like vanilla, jQuery, Node.js, React, Angular. And sometimes it comes with sugar and sprinkles on top, like the more delicious and recent TypeScript, or the stale coffee-flavored CoffeeScript nibs.

In the end, all of these flavors and sprinkles don't matter, since the base is always JavaScript. Every day we spend a lot of time reading it, not only in our candidates' code but also in the new projects we take. And here's the truth: 90% of all of this code is very imperative, and in most of these cases very hard to reason about. However, in time you can start seeing different patterns and little snippets that are commonly used across multiple places. These can be easily abstracted using functional programming concepts and the underlying features of the language. Let's explore them in more detail in this post.

The flavor profile of JavaScript

JavaScript has a lousy reputation. Just do a Google search for “javascript is bad” and you'll find a lot of articles about it. However, most of the underlying frustration in these articles can be summarized into one thing: “The language doesn't behave in the way that I expect it to behave”. And that might be true: JavaScript is not like other languages. You will commonly hear that it is “weird”, but I would say it's “different”. This blog post doesn't aim to prove those other articles right or wrong, but to help us understand some key concepts that can reduce JavaScript's “weirdness”. With that said, let's check some of the more common functional programming concepts and how they tie into the features that the language offers.

A quick primer on some functional JavaScript programming concepts

Pure functions

Simply put, a pure function is one that will always return the same output, given the same input. Also, these functions don't have any side effects, which means they are completely independent of outside state. This nature makes it easier to move them around and refactor them safely, because they will be completely deterministic and predictable.

To have an example, let's think about a y = 2x + 3 function from our high school / college algebra lectures. We can say that this function, given an input x of 1 will cause y to take a value of 5. When x is 2, y will be 7, and so on. As you can see, for a given input you have one and only one output, which is very convenient when you want to put this function into a programming language. You can write test cases over it, and you will be sure that the function will always return the same value. In JavaScript, our function would look roughly like the following:

function lineFunction(x) {
  return 2 * x + 3;
}

This is a pure function, however, let's say that my function won't depend on the value of the x argument, but on a global x variable. It would probably look like this:

var x = 2;

function lineFunction() {
  return 2 * x + 3;
}

What is happening here? In essence, my lineFunction depends on external state, which makes it impure. If I run the following code, my function won't return the same value every time:

lineFunction(); // First call, returns 7
x = 0;
lineFunction(); // Second call, returns 3
x = 1;
lineFunction(); // Third call, returns 5

So one of the best tips I can give you for writing modern and safe JavaScript is: please, do not use var to declare your variables. ES6 introduced the let and const keywords, which scope your variables only to the current block. This simple tip will help your code tremendously, and help you avoid those pesky external state changes. For the rest of this article, we won't use var anymore, and will switch to the let keyword.
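To see why this matters, here's a minimal sketch of the scoping difference:

```javascript
// var is function-scoped, so it leaks out of blocks,
// while let and const are confined to the block they're declared in.
function scopeDemo() {
  if (true) {
    var leaky = "I escaped the block";
    let contained = "I stay in the block";
  }
  // console.log(contained); // ReferenceError: contained is not defined
  return leaky; // the var is still visible here, outside the if block
}
```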


Immutability

Immutability is the concept that states that your data or objects are not supposed to be modified once they're defined. This means that it is not advisable to change the value of a variable after it has been declared. This sounds a bit crazy at first glance. After all, it's very challenging to rethink most of our current code without relying on variable changes.

A very common example of mutation comes up in the classic accumulate function, which receives a list of numbers and outputs the sum of those numbers:

function accumulate(arr) {
  let accumulator = 0;
  for (let i = 0; i < arr.length; i++) {
    accumulator += arr[i];
  }
  return accumulator;
}

As you can see, we are changing the accumulator value on every iteration of the for loop. When reading this, we force our brain to keep track of what the value of i is. In this function it's easy enough, but what would happen if you had this other piece of code?

function iHaveNoIdeaWhatThisDoes(multiDimensionalArr) {
  let accumulator = 0;
  for (let i = 0; i < multiDimensionalArr.length; i++) {
    for (let j = 0; j < multiDimensionalArr[i].length; j++) {
      for (let k = 0; k < multiDimensionalArr[i][j].length; k++) {
        if (k < j) {
          accumulator += multiDimensionalArr[i][j][k];
        }
      }
    }
  }
  return accumulator;
}

Our working memory is not as good as a computer's memory, since it's very limited both in duration and capacity. Just think about it: When debugging this, our brain needs to keep track of i, j, and k, the k < j condition that needs to be fulfilled for the data to be accumulated, as well as the actual accumulator value. The amount of information that we need to hold to process this is referred to in psychology as cognitive load, and in this function, the thought process that our brain needs to go through is impressive.

So what if we didn't have these variable changes, which we will call mutations from now on? It would be just a bit easier to reason about. So, remember the advice I gave you before about using let instead of var? Scratch that: we will only be using const in the rest of this article, to force ourselves to think without being able to mutate our variables. Everything will be a constant. We will also not be using any for loops anymore, since they rely heavily on mutation. More on that later!

However, before continuing with the next topic, I'd like to tell you that const works a bit differently with primitive types, like string and number, than it does with objects and arrays. Let's consider the following:

// const disallows reassignment, so this would produce an error
const name = "Marty McFly";
name = "Dr. Emmett Brown";

// Changing an object property is possible, because soldier only
// holds a reference to an object.
const soldier = { firstName: "John", lastName: "Rambo" };
soldier.lastName = "McClane";

// However, if we try to reassign the variable an error would pop,
// since we're changing the actual reference to the object
soldier = { firstName: "Steve", lastName: "Rogers" };

So keep that in mind when working with objects and/or arrays. A good way of avoiding mutation in these cases is through the use of the spread/rest syntax to copy objects:

// This doesn't copy the object, just makes rambo reference the
// original object. If later you change baseSoldier you will also
// change rambo, since they hold the same reference.
const baseSoldier = { firstName: "", lastName: "" };
const rambo = baseSoldier;

// Copying an object (shallow)
const diehard = { ...baseSoldier };

// Copying only some properties for the object
const vendetta = { ...baseSoldier, firstName: "V" };

// You can also do this with arrays!
const array1 = [1, 2, 3, 4];
const array2 = [...array1, 5, 6];

Higher order functions / functions as first-class citizens

JavaScript's functions are first-class citizens. This means that they can support operations that are generally available to other language entities, such as being modified, assigned to a variable or constant, passed as an argument or returned from a function.

With this in mind, in JavaScript we have higher order functions. These are functions that can either receive functions as arguments, or return other functions. To be completely clear about it, we are not returning the result of an applied function, but the function definitions themselves.

// We can assign a function to a variable
const addTwo = (x) => x + 2;

// ...and since that variable holds a function, we can pass
// values to it.
addTwo(2); // Output: 4

// We can also pass that function to another function
// In this case `doTwice` is a function that receives
// an 'operation' function. This returns another function
// that receives a number, and then applies the operation
// twice to it
const doTwice = (operation) => (number) => operation(operation(number));

doTwice(addTwo)(2); // Output: 6

The example above uses what we call an arrow function. We will be using them throughout this article, and for the sake of brevity, let's say that they're just a shorter, more concise way of writing normal functions. The difference lies in how the this keyword works in them, which we will not cover in this article. In general, the power behind passing functions around as variables lies in how they can be used as building blocks. Functional programming is all about having small, composable functions that can be chained and reused later.
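As a rough illustration of that equivalence (setting the this differences aside):

```javascript
// Classic function expression
const addTwoClassic = function (x) {
  return x + 2;
};

// The same function as an arrow function:
// the expression after => is implicitly returned
const addTwoArrow = (x) => x + 2;
```

Calling either with 2 returns 4.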

Our favorite three toppings: map, filter and reduce

So with all of this knowledge, we can start talking about three functions that facilitate an approach into functional programming: map, filter and reduce. These are the best-known higher order functions in JavaScript, since they all receive functions as arguments. The combination of these three can solve practically all of the different data transformation problems that we will face throughout our software development careers. Let's dissect each one of them quickly:


map

Our map function allows us to run a function on every element of an array and return the resulting array without mutating the original one. Generalizing, map receives a list of elements of type A and transforms it into a list of elements of type B, with the same size as the original array. Let's see an example:

// Takes a list of numbers and adds 1 to each element, then returns the result
const numbers = [1, 2, 3, 4];
const addedNumbers = => x + 1);

// We can also store a function in a variable, and pass it to map
const duplicate = (x) => x * 2;
const doubledNumbers =;

Just as with for loops and mutation, there is a very common function that shows up in almost every codebase. The function is forEach, and it does something similar to map. The main difference is that forEach ignores the return values from the passed function. Personally, I would recommend using map instead, since it allows for more flexibility, and returning a list of values from it as a default helps us keep our code immutable. We can also chain more array function calls after a map execution, which makes code concise and efficient.


filter

Similar to map, filter allows us to execute a predicate function (one that returns a boolean value) on each element of an array and return only the records for which the function holds true. We could say it receives a list of elements of type A and returns a subset of that list. An example would be:

// Returns only the even values from a list of numbers
const isEven = (x) => x % 2 === 0;
const numbers = [1, 2, 3, 4, 5];
const evenNumbers =;


reduce

This is the most complex of the three functions, but by far the most powerful. In fact, map and filter can be implemented using reduce. Given a list of elements of type A, the application of reduce on it receives:

  • A reducer, which is a function that receives an accumulator of type B, a value of type A, and returns a result of type B
  • An initial value of type B

Rather confusing, right? Let's elaborate a bit with an example:

const numbers = [1, 2, 3, 4];

// The reducer has:
// - acc: The accumulated result value from the last function application
// - value: The current value that we're applying the function to
const reducer = (acc, value) => acc + value;
// Reduce does:
// - Takes the first element, applies the reducer to it and sets the
//   accumulator to the return value of the function. The second argument
//   of reduce dictates the starting value. In this example:
//     First element (1) - Reducer result: 0 + 1 = 1
//     Second element (2) - Reducer result: 1 + 2 = 3
//     Third element (3) - Reducer result: 3 + 3 = 6
//     Fourth element (4) - Reducer result: 6 + 4 = 10
//     Final result: 10
const sumOfNumbers = numbers.reduce(reducer, 0);

The beauty of reduce is that it can be used to do all sorts of transformations, not only returning a numeric or string value. The accumulator can be of any type, and that gives this function a lot of flexibility. Let's see how we can implement map and filter behavior using reduce:

const numbers = [1, 2, 3, 4, 5];

// map - duplicating each element in the list
// - Initial value is an empty array
// - Accumulator uses the spread syntax. Creates a new array on each iteration and
//   appends the duplicated value.
const duplicated = numbers.reduce((acc, value) => {
  return [...acc, value * 2];
}, []);

// filter - keeping only the even values
// - Initial value is an empty array
// - Accumulator also uses spread syntax, and a ternary
//   conditional to check if it's an odd or even value.
//   If value is even appends to the accumulator, if it's
//   odd returns the previous accumulator untouched.
const onlyEven = numbers.reduce((acc, value) => {
  return value % 2 === 0 ? [...acc, value] : acc;
}, []);

As you can see, the reduce function is the most flexible of the three; however, I wouldn't recommend using it instead of map or filter. They are far simpler, and we should favor readability when working with these functions.
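To see why, compare the same hypothetical transformation, keeping the even numbers and doubling them, written both ways:

```javascript
const numbers = [1, 2, 3, 4, 5, 6];

// With reduce: correct, but the reader has to unpack the accumulator
const doubledEvensReduce = numbers.reduce(
  (acc, value) => (value % 2 === 0 ? [...acc, value * 2] : acc),
  []
);

// With filter + map: each step states its intent directly
const doubledEvens = numbers.filter((x) => x % 2 === 0).map((x) => x * 2);
```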

Okay, got it! So, how do I use these functions in the real world?

There are some really common use cases that can be refactored using these three functions. We will explore some of them, and see how they can be rewritten easily with functional programming patterns:

Initialize + Iterate + Push pattern

This pattern is very common when transforming each element in a list into something else. Every time you see a pattern like the one below, it's safe to use the map function to refactor:

const words = ["piece", "of", "cake"];

// Initialize
let capitalized = [];

// Iterate
for (let i = 0; i < words.length; i++) {
  // Push
  capitalized.push(words[i].toUpperCase());
}

This can be refactored to:

const words = ["piece", "of", "cake"];
const capitalized = => word.toUpperCase());

What do we gain: We removed mutation of the capitalized variable, the result is far more readable, and we avoid the for loop, with its underlying index mutation and tracking. We also avoid "indexitis", which means worrying about the underlying details of iterating a list by keeping track of indexes. Instead, functional programming suggests "wholemeal programming", which models operations and function definitions on the whole list, not on its individual elements.

Initialize + Iterate + Conditional push pattern

Very similar to the previous example:

const words = ["piece", "of", "cake"];

// Initialize
let longWords = [];

// Iterate
for (let i = 0; i < words.length; i++) {
  // Conditional Push
  if (words[i].length >= 5) {
    longWords.push(words[i]);
  }
}

Let's also think about a small variation of this problem. We want to know which words in the list are at least 5 characters long and which ones are shorter. We would do something like this:

const words = ["piece", "of", "cake"];

// Initialize
let lengthOfWords = [];

// Iterate
for (let i = 0; i < words.length; i++) {
  // Conditional Push
  if (words[i].length >= 5) {
    lengthOfWords.push("Long");
  } else {
    lengthOfWords.push("Short");
  }
}

These examples can be refactored to filter or map respectively, depending on whether we want only an if structure (filter) or an if...else (map).

const words = ["piece", "of", "cake"];
const isLong = (word) => word.length >= 5;

// filter
const longWords = words.filter(isLong);

// map
const longAndShort = => (isLong(word) ? "Long" : "Short"));

What do we gain: Again, we removed mutation of the longWords variable and avoided the for loop. The word.length >= 5 bit can be abstracted into its own function, which favors reuse.

Initialize + Iterate + Accumulate pattern

For anything that can be accumulated, counted, summarized or squashed.

const words = ["piece", "of", "cake"];

// Initialize
let camelCase = "";

// Iterate
for (let i = 0; i < words.length; i++) {
  // Accumulate
  camelCase += words[i].charAt(0).toUpperCase() + words[i].slice(1);
}

This pattern can be refactored to:

const words = ["piece", "of", "cake"];
const capitalize = (word) => `${word.charAt(0).toUpperCase()}${word.slice(1)}`;
const joinCamelWords = (word1, word2) => `${word1}${capitalize(word2)}`;
const camelCase = words.reduce(joinCamelWords, ""); // Output: PieceOfCake

What do we gain: Standalone capitalize and joinCamelWords functions are reusable and easier to reason about, and we avoid mutation and the for loop.

Transform array to object

This is a really common case, where we want to transform a two-dimensional array into an object:

const superheroes = [
  ["superman", "Clark Kent"],
  ["batman", "Bruce Wayne"],
  ["aquaman", "Arthur Curry"],
];

let heroObject = {};
for (let i = 0; i < superheroes.length; i++) {
  heroObject[superheroes[i][0]] = superheroes[i][1];
}

This is a perfect match for reduce! Keep in mind that reduce's output value is not only limited to strings, booleans or numbers, but also arrays and objects.

const superheroes = [
  ["superman", "Clark Kent"],
  ["batman", "Bruce Wayne"],
  ["aquaman", "Arthur Curry"],
];

const heroObject = superheroes.reduce((acc, [id, name]) => {
  return { [id]: name, ...acc };
}, {});

The inverse operation can also be done:

const heroObject = {
  aquaman: "Arthur Curry",
  batman: "Bruce Wayne",
  superman: "Clark Kent",
};

const heroArray = Object.keys(heroObject).reduce((acc, key) => {
  return [...acc, [key, heroObject[key]]];
}, []);

What do we gain: The complex transformation is made easier by using an object as an accumulator and the spread/rest syntax, and we avoid mutation of heroObject and the for loop.
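As a side note, modern runtimes also ship Object.fromEntries and Object.entries, which handle exactly these two conversions. The reduce versions are still worth knowing, since they generalize to transformations these built-ins don't cover:

```javascript
const superheroes = [
  ["superman", "Clark Kent"],
  ["batman", "Bruce Wayne"],
];

// Array of [key, value] pairs -> object
const heroObject = Object.fromEntries(superheroes);

// Object -> array of [key, value] pairs
const heroArray = Object.entries(heroObject);
```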

Nested function calls

When we want to do several sequential transformations on a list, we can usually see code like this:

// Adds one to each element in the list
function addOne(list) {...}
// Duplicates each element in the list
function duplicate(list) {...}
// Squares each element in the list
function square(list) {...}

// Call the list of functions in order
square(duplicate(addOne([1, 2, 3])))

This can be improved through chained map calls:

const addOne = (x) => x + 1;
const duplicate = (x) => x * 2;
const square = (x) => x * x;

[1, 2, 3].map(addOne).map(duplicate).map(square);

Or even better, introducing the concept of composition, which is combining two or more functions into one. It's based on the mathematical concept of function composition, where h(x) = g(f(x)). For this, we can create a helper compose function, which takes a variadic list of functions and combines them. This helper is a higher order function, since it returns another function, which is in turn the combination of all the given functions:

// Compose receives an undefined number of functions
function compose(...functions) {
  // Returns a function that receives an initial argument
  return (args) => {
    // Takes the functions given to compose and applies
    // them from left to right, threading each result
    // into the next function.
    return functions.reduce((acc, fn) => fn(acc), args);
  };
}

To use it, we would pass the list of functions we want to include in the pipeline. An important note is that the input type of each function needs to match the return type of the previous function in the pipeline, and its return type should match the input of the next function in the pipeline. That way, if we want to transform from type A to type D, the function types should be A to B, B to C, and C to D. To use our function we would do:

const addOne = (x) => x + 1;
const duplicate = (x) => x * 2;
const square = (x) => x * x;
const calculate = compose(addOne, duplicate, square);

// Calling calculate will execute all of the functions given
// to compose sequentially
calculate(1); // Output: 16

What do we gain: Transformations are simplified by using several smaller functions instead of one big, complex function, and we avoid deeply nested calls and intermediate variables.
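Composition also pairs nicely with map. Instead of traversing the list three times with chained map calls, we can compose the element-wise functions and traverse once; this self-contained sketch repeats the compose helper so it can run on its own:

```javascript
const compose =
  (...functions) =>
  (args) =>
    functions.reduce((acc, fn) => fn(acc), args);

const addOne = (x) => x + 1;
const duplicate = (x) => x * 2;
const square = (x) => x * x;

// Three traversals, one per map call...
const threePasses = [1, 2, 3].map(addOne).map(duplicate).map(square);

// ...versus a single traversal with the composed function
const onePass = [1, 2, 3].map(compose(addOne, duplicate, square));
```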

Switch statements for function maps

Imagine we have a piece of code like the following:

function operate(action, parameter) {
  switch (action) {
    case "action1":
      return doAction1(parameter);
    case "action2":
      return doAction2(parameter);
    default:
      return doDefaultAction(parameter);
  }
}

We can leverage functions as first-class citizens, by creating an object which will hold our desired actions. Then, instead of doing a switch statement, we would call the function map based on the given action:

const functionMap = {
  action1: doAction1,
  action2: doAction2,
};

function operate(action, parameter) {
  // Check if we have the action on the map; if it
  // doesn't exist, call the default function.
  if (!functionMap[action]) {
    return doDefaultAction(parameter);
  }

  // Since the object element exists and holds a
  // function, we can pass the parameter to it
  return functionMap[action](parameter);
}

What do we gain: Cleaner and more scalable code. When we need to add a new action, we would just add it to the map object.

Where do we go from here?

This is just the tip of the iceberg, and there's a lot more to learn in the functional programming world. If you want to dive deeper into it, I suggest looking at a functional library like Ramda or looking into Fantasy Land, which will help you go into the more complex concepts of functional programming.

While functional programming is not a silver bullet, it definitely helps us solve some of the shortcomings of the JavaScript language. As with all approaches and technologies, there are always pros and cons. While there are simple patterns, like the ones discussed in this article, some of the advanced terminology and mathematical concepts can be daunting for a beginner. Immutability also comes at a price, since implementing it usually means that you have to copy some structures. This, paired with recursion, can potentially lead to performance problems, such as slower processing and higher RAM usage.

My final recommendation would be to dive deeper into these concepts, taking one at a time. One of the key elements of my functional programming journey, one that has helped me grasp these concepts quicker, has been static typing through the use of TypeScript. It doesn't take a lot of effort to get started with this beautiful language, and it really helps you reason about your functions even before implementing them (what is usually called "thinking with types"). So I encourage you to go and explore these concepts further, and maybe implement some of the discussed patterns in your codebase. I guarantee that it will make a difference in how you write, read, and reason about code.

Published on: Jul. 27, 2021

Written by: Luis Fernando Alvarez, Software Developer
