Appendix D. Reintroducing JavaScript
This appendix covers
- Applying best practices when writing JavaScript
- Using JSON effectively to pass data
- Examining how to use callbacks and escaping callback hell
- Writing modular JavaScript with closures, patterns, and JavaScript classes
- Adopting functional programming principles
JavaScript is such a fundamental part of the MEAN stack (even if you’re writing the Angular part with TypeScript) that we’ll spend a little bit of time looking at it. We need to cover the bases because successful MEAN development depends on it. JavaScript is such a common language (uniquely, JavaScript has a runtime on almost every computer currently on the planet) that it seems that everybody knows some of it, partly because JavaScript is easy to start with and forgiving in the way it’s written. Unfortunately, this looseness and low barrier to entry can encourage bad habits, which can cause unexpected results.
The aim of this appendix isn’t to teach JavaScript from scratch; you should already know the basics. If you don’t know JavaScript at all, you may struggle and find it hard going. Like all things, JavaScript has a learning curve. On the other hand, not everybody needs to read this appendix in detail, particularly experienced JavaScript developers. If you’re lucky enough to count yourself as part of the experienced camp, it still may be worthwhile to skim this appendix in case you find something new here.
We don’t cover TypeScript, though we hope that chapters 8 through 12 cover it in enough detail for you to be comfortable with it.
One last thing before we get started in earnest. When you look around the internet for information around JavaScript, you’ll more than likely come across the appellations ES2015, ES2016, ES5, ES6, ES7, and so on.
ES5 is the version of JavaScript that has been available for the longest time, from the dim and distant past that includes the Firefox 4 web browser; the birth of Google Chrome; and the long, torturous death of the infamous Internet Explorer 6. Luckily, those days are long gone, but the specification still stands, and most browsers (mostly) adhere to it.
Officially, as of 2015, iterations of the JavaScript (or, if you prefer, ECMAScript [ES]) specification have been denoted by the year: ES2015, ES2016, and so on. Any reference to single-digit versioning post ES5, like ES6, is incorrect. Throughout this book, we’ve been careful to ensure that we named things correctly. Many authors across the internet haven’t been so diligent and continue to perpetuate the incorrect naming scheme.
As things stand today, most browsers adhere to most of the changes made in JavaScript as part of the ES2015 spec, with some browsers also providing some functionality for later iterations (2016, 2017, and so on). The pace of adoption and implementation is sometimes slower than we, as developers, would like, so transpilers such as Babel are available. JavaScript transpilers broadly take code written utilizing more modern ideas and convert it to a form that older browsers understand. They provide a bridge between old and new and between different languages. TypeScript, CoffeeScript, Elm, and ReasonML are all transpiled to JavaScript.
Not everybody knows JavaScript, but the vast majority of developers have used it in one form or another at some point. Naturally, different levels of knowledge and experience exist. As a test, take a look at the following code listing. The listing contains a chunk of JavaScript code, the aim of which is to output messages to the console. If you understand the way the code is written, correctly determine what the output messages will be, and (more important) why they are what they are, you're probably good for a skim read.
Listing D.1. Example JavaScript with intentional bugs
const myName = {
  first: 'Simon',
  last: 'Holmes'
};
var age = 37,
  country = 'UK';
console.log("1:", myName.first, myName.last);
const changeDetails = (function () {
  console.log("2:", age, country);
  var age = 35;
  country = 'United Kingdom';
  console.log("3:", age, country);
  const reduceAge = function (step) {
    age = age - step;
    console.log("4: Age:", age);
  };
  const doAgeIncrease = function (step) {
      for (let i = 0; i <= step; i++) {
        window.age += 1;
      }
      console.log("5: Age:", window.age);
    },
    increaseAge = function (step) {
      const waitForIncrease = setTimeout(function () {
        doAgeIncrease(step);
      }, step * 200);
    };
  console.log("6:", myName.first, myName.last, age, country);
  return {
    reduceAge: reduceAge,
    increaseAge: increaseAge
  };
})();
changeDetails.increaseAge(5);
console.log("7:", age, country);
changeDetails.reduceAge(5);
console.log("8:", age, country);
How did you get on with that? Listing D.1 has a couple of intentional bugs that JavaScript will let you make if you’re not careful. All this JavaScript is valid and legal, however, and it will run without throwing an error; you can test it by running it in a browser, if you like. The bugs highlight how easy it is to get unexpected results and also how difficult it can be to spot them if you don’t know what you’re looking for.
Want to know what the output of that code is? If you haven’t run it yourself, you can see the result in the following listing.
Listing D.2. Output of listing D.1
1: Simon Holmes
2: undefined UK                      #1
3: 35 United Kingdom
6: Simon Holmes 35 United Kingdom
7: 37 United Kingdom                 #2
4: Age: 30                           #3
8: 37 United Kingdom
5: Age: 43                           #4
Among other things, this code snippet shows a private closure exposing public methods, issues with variable scope and side effects, variables not being defined when expected, mixing of function and lexical scope, the effects of asynchronous code execution, and an easy mistake to make in a for loop. There’s quite a lot to take in when reading the code.
If you’re not sure what some of this means or didn’t get the outcome correct, read this appendix.
JavaScript is an easy language to learn. You can grab a snippet from the internet and pop it into your HTML page, and you’ve started on your journey. One reason why it’s easy to learn is that in some respects, it’s not as strict as it should be. It lets you do things that it possibly shouldn’t, which leads to bad habits. In this section, we’ll take a look at some of these bad habits and show you how to turn them into good habits.
The first step is looking at variables, scope, and functions, which are all closely tied together. JavaScript has three types of scope: global, function (using the var keyword), and lexical (using let or const keywords). JavaScript also has scope inheritance. If you declare a variable in global scope, it’s accessible by everything; if you declare a variable with var inside a function, it’s accessible only to that function and everything inside it; if you declare a variable with let or const in a block, it’s accessible inside the braces and everything inside that block, but unlike var, access doesn’t bleed through to the surrounding function block.
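As a quick illustration (a minimal sketch you can paste into a browser console; the variable names are ours), a variable declared with var inside a block leaks out into the surrounding function, whereas one declared with let stays contained:

const demoScope = function () {
  if (true) {
    var functionScoped = 'I leak out of the block';
    let blockScoped = 'I stay inside the braces';
  }
  console.log(functionScoped);    // "I leak out of the block"
  console.log(blockScoped);       // throws ReferenceError: blockScoped is not defined
};
demoScope();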
The var keyword in ES2015 and later
Modern practice tends to frown on using the var keyword, and you'll see it less and less in new code. var comes with a lot of baggage, and if you're coming from other languages, its scoping can be difficult to work with and can trip up even the most experienced developer. We'll discuss it here, though, because a lot of JavaScript has been built with var.
With ES2015, the language specification introduced the let and const keywords, which are lexically (block) scoped. These keywords have greater similarity with other variable-definition schemes. The difference is explained in more detail in the following sections.
Listing D.3. Scope example
const firstname = 'Simon';                   #1
const addSurname = function () {
  const surname = 'Holmes';                  #2
  console.log(firstname + ' ' + surname);    #3
};
addSurname();
console.log(firstname + ' ' + surname);      #4
This piece of code throws an error because it’s trying to use the variable surname in the global scope, but it was defined in the local scope of the function addSurname(). A good way to visualize the concept of scope is to draw some nested circles. In figure D.1, the outer circle depicts the global scope; the middle circle depicts the function scope; and the inner circle depicts lexical scope. You can see that the global scope has access to the variable firstname and that the local scope of the function addSurname() has access to the global variable firstname and the local variable surname. In this case, lexical scope and function scope overlap.
If you want the global scope to output the full name while keeping the surname private in the local scope, you need a way of pushing the value into the global scope. In terms of scope circles, you’re aiming for what you see in figure D.2. You want a new variable, fullname, that you can use in both global and local scopes.
One way you could do it—and we’ll warn you now that it’s bad practice—is to define a variable against the global scope from inside the local scope. In the browser, the global scope is the object window; in Node.js, it’s global. Sticking with browser examples for now, the following listing shows how this would look if you updated the code to use the fullname variable.
Listing D.4. Global fullname variable
const firstname = 'Simon';
const addSurname = function () {
  const surname = 'Holmes';
  window.fullname = firstname + ' ' + surname;    #1
  console.log(fullname);
};
addSurname();
console.log(fullname);                            #2
This approach allows you to add a variable to the global scope from inside a local scope, but it’s not ideal. The problems are twofold. First, if anything goes wrong with the addSurname() function and the variable isn’t defined, when the global scope tries to use it, you’ll get an error thrown. The second problem becomes obvious when your code grows. Suppose that you have dozens of functions adding things to different scopes. How do you keep track of them? How do you test them? How do you explain to someone else what’s going on? The answer to all these questions is with great difficulty.
If declaring the global variable in the local scope is wrong, what’s the right way? The rule of thumb is always declare variables in the scope in which they belong. If you need a global variable, you should define it in the global scope, as in the following listing.
Listing D.5. Declaring globally scoped variables
var firstname = 'Simon',
    fullname;                                     #1
var addSurname = function () {
  var surname = 'Holmes';
  window.fullname = firstname + ' ' + surname;
  console.log(fullname);
};
addSurname();
console.log(fullname);
Here, it’s obvious that the global scope now contains the variable fullname, which makes the code easier to read when you come back to it.
You may have noticed that from within the function, the code still references the global variable by using the fully qualified window.fullname. It’s best practice to do this whenever you reference a global variable from a local scope. Again, this practice makes your code easier to come back to and debug, because you can explicitly see which variable is being referenced. The code should look like the following listing.
Listing D.6. Using global variables in local scope
var firstname = 'Simon',
    fullname;
var addSurname = function () {
  var surname = 'Holmes';
  window.fullname = window.firstname + ' ' + surname;    #1
  console.log(window.fullname);                           #1
};
addSurname();
console.log(fullname);
This approach might add a few more characters to your code, but it makes it obvious which variable you’re referencing and where it came from. There’s another reason for this approach, particularly when assigning a value to a variable.
JavaScript lets you declare a variable without using var, which is a bad thing indeed. Worse, if you declare a variable without using var, JavaScript creates the variable in the global scope, as shown in the following listing.
Listing D.7. Declaring without var
var firstname = 'Simon';
var addSurname = function () {
  surname = 'Holmes';                      #1
  fullname = firstname + ' ' + surname;    #1
  console.log(fullname);
};
addSurname();
console.log(firstname + surname);          #2
console.log(fullname);                     #2
We hope that you can see how this could be confusing and is a bad practice. The takeaway is always declare variables in the scope in which they belong, using the var statement.
You've probably heard that with JavaScript, you should always declare your variables at the top of their scope. That's correct, and the reason is variable hoisting. With variable hoisting, JavaScript moves every var declaration to the top of its scope anyway without telling you, which can lead to some unexpected results.
The following code listing shows how variable hoisting might show itself. In the addSurname() function, you want to use the global value of firstname and later declare a local scope value.
Listing D.8. Shadowing example
var firstname = 'Simon';
var addSurname = function () {
  var surname = 'Holmes';
  var fullname = firstname + ' ' + surname;    #1
  var firstname = 'David';
  console.log(fullname);                       #2
};
addSurname();
Why is the output wrong? JavaScript “hoists” all variable declarations to the top of their scope. You see the code in listing D.8, but JavaScript sees the code in listing D.9.
Listing D.9. Hoisting example
var firstname = 'Simon';
var addSurname = function () {
  var firstname,                           #1
      surname,                             #1
      fullname;                            #1
  surname = 'Holmes';
  fullname = firstname + ' ' + surname;    #2
  firstname = 'David';
  console.log(fullname);
};
addSurname();
When you see what JavaScript is doing, the bug is a little more obvious. JavaScript has declared the variable firstname at the top of the scope, but it doesn’t have a value to assign to it, so JavaScript leaves the variable undefined when you first try to use it.
You should bear this fact in mind when writing your code. What JavaScript sees should be what you see. If you can see things from the same perspective, you have less room for error and unexpected problems.
Lexical scope is sometimes called block scope. Variables defined between a set of braces are limited to the scope of those braces, so a variable's scope can be confined to a single loop or branch of flow logic.
JavaScript defines two keywords that provide lexical scope: let and const. Why two? The functionality of the two is slightly different.
let is a bit like var. It sets up a variable that can be changed in the scope in which it is defined. It differs from var in that its scope is limited as described earlier, and variables declared this way aren't hoisted in the same way as var: if you try to reference one before the line that declares it, JavaScript throws a ReferenceError.
Listing D.10. let in action
if (true) {
  let foo = 1;               #1
  console.log(foo);          #2
  foo = 2;                   #3
  console.log(foo);          #4
  console.log(bar);          #5
  let bar = 'something';     #6
}
const has the same caveats as let. const differs from let in that a variable declared this way isn't allowed to change, either by reassignment or redeclaration; the binding is immutable (though the contents of an object or array assigned to a const can still be modified). If you try to reassign a const, or to redeclare a name that already exists in the same scope, JavaScript throws an error; the type of the error depends on what you're trying to do. Declaring a const with the same name as a variable from an outer scope isn't an error, though: the new constant simply shadows the outer variable inside its block.
Listing D.11. Using const
var bar = 'defined';               #1
if (true) {
  const foo = 1;                   #2
  console.log(foo);                #3
  foo = 2;                         #4
  const bar = 'something else';    #5
}
Because of the clarity afforded by declaring variables with let and const, this method is now the preferred way. Issues of hoisting are no longer a concern, and variables behave in a more conventional way that programmers familiar with other mainstream languages are more comfortable with.
You may have noticed throughout the preceding code snippets that the addSurname() function has been declared as a variable. Again, this is a best practice. First, it's close to how JavaScript treats a function declaration behind the scenes, and second, it makes it clear which scope the function is in.
Although you can declare a function in the format
function addSurname() {}
JavaScript treats it in much the same way as the following assignment (the main difference being that a function declaration is hoisted along with its body):
const addSurname = function() {}
We’ve talked a lot about using the global scope, but in reality, you should try to limit your use of global variables. Your aim should be to keep the global scope as clean as possible, which becomes important as applications grow. Chances are that you’ll add various third-party libraries and modules. If all these libraries and modules use the same variable names in the global scope, your application will go into meltdown.
Global variables aren't the "evil" that some people would have you believe, but you must be careful when using them. When you truly need global variables, a good approach is to create a container object in the global scope and put everything there. Let's do this with the ongoing name example by creating a nameSetup object in the global scope and using it to hold everything else.
Listing D.12. Using const to define functions globally
const nameSetup = {                                             #1
  firstname : 'Simon',
  fullname : '',
  addSurname : function () {
    const surname = 'Holmes';                                   #2
    nameSetup.fullname = nameSetup.firstname + ' ' + surname;   #3
    console.log(nameSetup.fullname);                            #3
  }
};
nameSetup.addSurname();                                         #3
console.log(nameSetup.fullname);                                #3
When you code like this, all your variables are held together as properties of an object, keeping the global space nice and neat. Working like this also minimizes the risk of having conflicting global variables. You can add more properties to this object after declaration, and even add new functions. Adding to the preceding code listing, you could have the code shown next.
Listing D.13. Adding object properties
nameSetup.addInitial = function (initial) {                                   #1
  nameSetup.fullname = nameSetup.fullname.replace(" ", " " + initial + " ");
};
nameSetup.addInitial('D');          #2
console.log(nameSetup.fullname);    #3
Working in this way gives you control of your JavaScript and reduces the chances that your code will give you unpleasant surprises. Remember to declare variables in the appropriate scope and at the correct time, and group them into objects wherever possible.
So far, we've avoided discussing the JavaScript this variable. this is a fairly large topic and can be the source of much confusion. Simply put, the value of this changes depending on the context in which it's used. For a function called on its own, outside any object context, this is undefined in strict mode and defaults to the global object otherwise; in general, its value depends on how and where the function is called rather than where it's defined.
Further, this can be bound to a different object explicitly by using the function methods call(), apply(), or bind().
If a function is defined as an Object method, this refers to the surrounding object context. When used in an event handler, this refers to the DOM object that triggered the event.
Arrow function expressions (or arrow functions) cut through some of this confusion by not defining a this variable of their own, as happens with the function keyword. Some other context-related things are also not available, but this is by far the most important. Instead, this inside an arrow function resolves to the this of the surrounding lexical context, which makes arrow functions ideal for nonmethod functions such as event handlers, callbacks, and global functions.
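To see why this matters, here's a minimal sketch (the counter object and its methods are invented for illustration) in which a timer callback tries to update a property of the object that started it:

const counter = {
  count: 0,
  startOld: function () {
    setTimeout(function () {      // a function-keyword callback gets its own this
      this.count += 1;            // this is not counter here
      console.log(this.count);    // NaN, because this.count started out undefined
    }, 100);
  },
  startNew: function () {
    setTimeout(() => {            // an arrow function keeps the surrounding this
      this.count += 1;            // this is counter, as intended
      console.log(this.count);    // 1
    }, 100);
  }
};
counter.startNew();

Calling counter.startOld() instead would log NaN in a browser, because the callback's this is the global object rather than counter.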
The following listing provides the general form and some variations for arrow functions.
Listing D.14. Arrow function format
(param, param2, ..., paramN) => { <function body> }    #1
(param, param2, ..., paramN) => expression             #2
singleParam => { <function body> }                     #3
singleParam => expression                              #4
() => { <function body> }                              #5
Arrow functions provide a simpler, cleaner syntax, which in turn facilitates shorter, more compact, more expressive functions, especially combined with destructuring assignments. Plenty of examples throughout the book show how arrow functions can be used. For further information on this, see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this; for more on arrow functions, see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions.
Dimly reminiscent of the idea of pattern matching as used in some functional programming languages, destructuring allows for the unpacking of array values and object properties into distinct variables. If you’re passing objects into functions, destructuring means that you can explicitly state which properties from the argument object you want to use.
To use destructuring, on the left-hand side of the assignment operator (=), place square brackets to destructure an array or braces to destructure an object; then add variable names for the values that you want. For arrays, variables are assigned values based on index order. For objects, the variable names normally match the keys of the object, though as you'll see, you can rename values as you unpack them.
The following listing details how to destructure an array.
Listing D.15. Destructuring an array
let fst, snd, rest;
const data = ['first', 'second', 'third', 'fourth', 'fifth'];
[fst, snd, ...rest] = data;    #1
[, fst, snd] = data;           #2
const shortArr = [1];
[fst, snd = 10] = shortArr;    #3
let a = 3, b = 4;
[a, b] = [b, a];               #4
Destructuring objects requires a little more care; you need to know what properties the object has so that they can be unpacked.
See the following listing for examples of use.
Listing D.16. Destructuring objects
const obj = {a: 10, b: 100, c: 1000};
const {a, c} = obj;                                                #1
const {a: ten, c: hundred} = obj;                                  #2
const {b, d = 50} = obj;                                           #3
const shape = {type: 'square', sides: {width: 10, height: 10}};    #4
const areaOfSquare = ({sides: {width}}) => width * width;          #5
areaOfSquare(shape);                                               #6
Destructuring can be applied wherever a value is being assigned or bound: to the result of an assignment (commonly function return values and regular expression matches), in function parameter lists, and in for ... of iteration. Further examples and information are available at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment.
We use this technique in multiple places in the Loc8r codebase to cut down on the amount of data a function or callback is allowed to work with.
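As a brief sketch (the records array here is ours, purely for illustration), destructuring works in a function's parameter list and in a for ... of loop as well as in straight assignments:

const records = [
  { name: 'Simon', dollars: 100 },
  { name: 'Sally', dollars: 250 }
];
const describe = ({ name, dollars }) => `${name} has $${dollars}`;    // destructure the argument
for (const { name, dollars } of records) {                            // destructure each item
  console.log(describe({ name, dollars }));
}
// Simon has $100
// Sally has $250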
Now we’ll take a quick look at best practices for the commonly used patterns of if statements and for loops. The text assumes that you’re familiar with these elements to some extent.
JavaScript is helpful with if statements. If you have one expression within an if block, you don’t have to wrap it in curly braces {}. You can even follow it with an else. The code in the following listing is valid JavaScript.
Listing D.17. if without braces (bad practice)
const firstname = 'Simon';
let surname, fullname;
if (firstname === 'Simon') surname = 'Holmes';              #1
else if (firstname === 'Sally') surname = 'Panayiotou';     #1
fullname = `${firstname} ${surname}`;
console.log(fullname);
Yes, you can do this in JavaScript, but no, you shouldn’t! Doing this relies on the layout of the code to be readable, which isn’t ideal. More important, what happens if you want to add some extra lines within the if blocks? Start by giving Sally a middle initial. See the following code listing for how you might logically try this.
Listing D.18. Demonstrating issue with no-brace if
const firstname = 'Simon';
let surname, fullname, initial = '';
if (firstname === 'Simon') surname = 'Holmes';
else if (firstname === 'Sally') initial = 'J';     #1
surname = 'Panayiotou';
fullname = `${firstname} ${initial} ${surname}`;
console.log(fullname);                             #2
What went wrong here is that without the block braces, only the first expression is considered to be part of the block, and anything following is outside the block. So here, if firstname is Sally, initial becomes J, but surname always becomes Panayiotou.
The following code listing shows the correct way of writing this.
Listing D.19. Correctly formatted if
const firstname = 'Simon';
let surname, fullname, initial = '';
if (firstname === 'Simon') {            #1
  surname = 'Holmes';
} else if (firstname === 'Sally') {     #1
  initial = 'J';
  surname = 'Panayiotou';
}                                       #1
fullname = `${firstname} ${initial} ${surname}`;
console.log(fullname);
By being prescriptive, you see what the JavaScript interpreter sees and reduce the risk of unexpected errors. It’s a good aim to make your code as explicit as possible, and not leave anything open to interpretation. This practice helps both the quality of your code and your ability to understand it when you come back to it after a year of working on other things.
How many = symbols to use
In the code snippets here, you’ll notice that in each of the if statements, === is used to check for a match. This is not only a best practice but also a great habit to get into.
The === (identity) operator is much stricter than == (equality). === provides a positive match only when the two operands have both the same value and the same type, such as number, string, or Boolean. == attempts type coercion, converting the operands so that it can compare values even when their types differ, which can lead to some interesting and unexpected results.
Look at the following comparisons, which could easily trip you up:

1 == '1'     // true: the string is coerced to a number before comparing
1 === '1'    // false: the operands are of different types
In some situations, this coercion might appear to be useful, but it's far better to be clear and specific about what you consider to be a positive match as opposed to what JavaScript interprets as a positive match. If it doesn't matter to your code whether number holds a string or a number type, you can match one or the other explicitly, with something like number === 1 || number === '1'.
The key is to always use the strict operator ===. The same goes for the not-equals operators: always use the strict !== instead of the loose !=.
The most common method of looping through a collection of items is the for loop. JavaScript handles this task fairly well, but you should be aware of a couple of pitfalls and best practices.
First, as with the if statement, JavaScript allows you to omit the curly braces {} around the block if you have only one expression in it. We hope that you know by now that this is a bad idea, as it is with the if statements. The following code listing shows some valid JavaScript that may not produce the results you expect.
Listing D.20. for loop without braces (bad practice)
for (let i = 0; i < 3; i++)
  console.log(i);
  console.log(i * 5);                              #1

// Output in the console
// 0
// 1
// 2
// Uncaught ReferenceError: i is not defined       #1
From the way this is written and laid out, you might expect both console.log() statements to run on each iteration of the loop. For clarity, the preceding snippet should be written as in the following listing.
Listing D.21. Adding braces to a for loop
for (let i = 0; i < 3; i++) {
  console.log(i);
}
console.log(i*5);
We know that we keep going on about this, but making sure that your code reads the same way that JavaScript interprets it helps you! Bearing in mind this fact and the best practice for declaring variables, you should never see a let declaration inside the parentheses of a for statement. Updating the preceding code snippet to meet this best practice gives you the following listing.
Listing D.22. Extracting the variable declaration
let i;                         #1
for (i = 0; i < 3; i++) {
  console.log(i);
}
console.log(i*5);
As the variable declaration should be at the top of the scope, there could be many lines of code between it and the variable’s first use in a loop. JavaScript interpreters act as though the variable has been defined there, so that’s where it should go.
A common use for the for loop is to iterate through the contents of an array, so next, we’ll cover some best practices and issues to look out for.
The key to using for loops with arrays is remembering that arrays are zero-indexed: the first item in an array is in position 0. The knock-on effect is that the position of the last item in the array is one less than the array's length. This sounds more complicated than it is. A simple array such as ["one","two","three"] breaks down like this: "one" is at index 0, "two" is at index 1, "three" is at index 2, and the length of the array is 3.
The typical code you might see for declaring an array like this and looping through it is in the following listing.
Listing D.23. More for loop
let i;
const myArray = ["one","two","three"];
for (i = 0; i < myArray.length; i++) {    #1
  console.log(myArray[i]);
}
This code works well and loops through the array correctly, starting at position 0 and going through to the final position, 2. Some people prefer to rule out the use of i++ to autoincrement in their code because it can make code difficult to fathom. Personally, we think that for loops are the exception to this rule; i++ in the loop header makes the code easier to read than adding a manual increment inside the loop body.
You can do one thing to improve the performance of this code. Each time the loop goes around, JavaScript checks the length of myArray. This process would be quicker if JavaScript checked against a variable, so a better practice is to declare a variable to hold the length of the array. You can see this solution in action in the following code listing.
Listing D.24. Alternative for loop declaration
let i, arrayLength;                                                   #1
const myArray = ["one","two","three"];
for (i = 0, arrayLength = myArray.length; i < arrayLength; i++) {     #2
  console.log(myArray[i]);
}
Now a new variable, arrayLength, is given the length of the array to be looped through when the loop is initiated. The script needs to check the length of the array only once, not on every loop.
JavaScript Object Notation (JSON) is a JavaScript-based approach to data exchange. It’s much smaller than XML, more flexible, and easier to read. JSON is based on the structure of JavaScript objects but is language independent and can be used to transfer data among all manner of programming languages.
We’ve used objects in our sample code in this book, and because JSON is based on JavaScript objects, we’ll discuss them here briefly.
In JavaScript, everything other than the simplest data types—string, number, Boolean, null, and undefined—is an object, including arrays and functions. Object literals are what most people think of as JavaScript objects; they’re typically used to store data but can also contain functions, as you’ve already seen.
A JavaScript object is a collection of key-value pairs, which are the properties of the object. Each key must have a value.
The rules for a key are simple:
- The key must be a string.
- The string must be wrapped in double quotes if it’s a JavaScript reserved word or an illegal JavaScript name.
The value can be any JavaScript value, including functions, arrays, and nested objects. The following listing shows a valid JavaScript object literal based on these rules.
Listing D.25. An example of a JavaScript object literal
const nameSetup = {
  firstname: 'Simon',                                #1
  fullname: '',
  age: 37,
  married: true,
  "clean-shaven": null,                              #2
  addSurname: function () {                          #3
    const surname = 'Holmes';
    this.fullname = `${this.firstname} ${surname}`;  #4
  },
  children: [                                        #5
    { firstname: 'Erica' },
    { firstname: 'Isobel' }
  ]
};
Here, all keys in the object are strings, but the values are a mixture of types: string, number, Boolean, null, function, and array.
The most common way to reference data in an object is dot notation: the name of the object and the name of the key, separated by a period. A couple of examples from this object are

nameSetup.firstname
nameSetup.fullname
These examples can be used to get or set property values. If a property doesn’t exist when you try to get it, JavaScript returns undefined. If a property doesn’t exist when you try to set it, JavaScript adds it to the object and creates it for you.
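For example, using the nameSetup object from listing D.25 (the nickname property is invented here, purely to show a property being created on the fly):

console.log(nameSetup.firstname);    // Simon
console.log(nameSetup.nickname);     // undefined: the property doesn't exist yet
nameSetup.nickname = 'Si';           // setting it creates the property on the object
console.log(nameSetup.nickname);     // Si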
You can't use dot notation when the key name is a reserved word or an illegal JavaScript name. To access these properties, you need to wrap the key string in square brackets []. A couple of examples are
nameSetup["clean-shaven"] nameSetup["var"]
Again, these references can be used to get or set the values.
Next, we’ll take a look at how JSON is related.
JSON is based on the notation of JavaScript object literals, but because it’s designed to be language independent, there are a couple of important differences:
- All key names and strings must be wrapped in double quotes.
- Functions aren’t a supported data type.
These two differences occur largely because you don’t know what will be interpreting it. Other programming languages won’t be able to process JavaScript functions and probably will have different sets of reserved names and restrictions on names. If you send all names as strings, you can bypass this issue.
You can’t send functions with JSON, but as it’s a data exchange format, that’s not such a bad thing. The data types you can send are
- Strings
- Numbers
- Objects
- Arrays
- Booleans
- The value null
Looking at this list and comparing it with the JavaScript object in listing D.25, if you remove the function property, you should be able to convert it to JSON.
Unlike with the JavaScript object, we’re not assigning the data to a variable; neither do we need a trailing semicolon. By wrapping all key names and strings in double quotes—and they do have to be double quotes—we can generate the following listing.
Listing D.26. An example of correctly formatted JSON
{ "firstname": "Simon", #1 "fullname": "", #2 "age": 37, #3 "married": true, #4 "has-own-hair": null, #5 "children": [ { #6 "firstname": "Erica" #6 }, #6 { #6 "firstname": "Isobel" #6 } #6 ] #6 } #6
This listing shows some valid JSON. This data can be exchanged between applications and programming languages without issue. It’s also easy for the human eye to read and understand.
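In JavaScript itself you rarely build or pick apart JSON strings by hand; the built-in JSON object does the conversion for you. The following minimal sketch (reusing some of the data from listing D.26) shows both directions:

const person = {
  firstname: 'Simon',
  age: 37,
  married: true,
  children: [{ firstname: 'Erica' }, { firstname: 'Isobel' }]
};
const asJson = JSON.stringify(person);    // JavaScript object to JSON string
console.log(asJson);
// {"firstname":"Simon","age":37,"married":true,"children":[{"firstname":"Erica"},{"firstname":"Isobel"}]}
const copy = JSON.parse(asJson);          // JSON string back to a JavaScript object
console.log(copy.children[0].firstname);  // Erica

Note that JSON.stringify() silently drops any function-valued properties, which is consistent with functions not being a supported JSON data type.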
Sending strings containing double quotes
JSON specifies that all strings must be wrapped in double quotes. What if your string contains double quotes? The first double quote that an interpreter comes across inside the string will be seen as the end delimiter for the string, so it will most likely throw an error when whatever follows isn't valid JSON.
For example, a value along these lines isn't valid JSON, because the two double quotes inside the string break it apart:

{ "quote": "Simon says "hello" to everybody" }

The answer to this problem is to escape nested double quotes with the backslash character (\). Applying this technique produces the following:

{ "quote": "Simon says \"hello\" to everybody" }

The escape character tells JSON interpreters that the following character shouldn't be treated as a string delimiter; it's part of the value.
The spacing and indentation in listing D.26 are purely to aid human readability; programming languages don’t need them. You can reduce the amount of information being transmitted if you remove unnecessary whitespace before sending the code.
The following code snippet shows a minimized version of listing D.26, which is more along the lines of what you’d expect to exchange between applications:
{"firstname":"Simon","fullname":"","age":37,"married":true,"has-own- hair":null,"children":[{"firstname":"Erica"},{"firstname":"Isobel"}]}
The content is exactly the same as that of listing D.26, but much more compact.
The popularity of JSON as a data exchange format predates the development of Node by quite some time. JSON began to flourish as the ability of browsers to run complex JavaScript increased. Having a data format that was (almost) natively supported was extremely helpful and made life considerably easier for front-end developers.
The previous preferred data exchange format was XML. Compared with JSON, XML is harder to read at a glance, much more rigid, and considerably larger to send across networks. As you saw in the JSON examples, JSON doesn’t waste much space on syntax. JSON uses the minimum amount of characters required to accurately hold and structure the data, not a lot more.
When it comes to the MEAN stack, JSON is the ideal format for passing data through the layers of the stack. MongoDB stores data as binary JSON (BSON). Node and Express can interpret this natively and also push it out to Angular, which also uses JSON natively. Every part of the MEAN stack, including the database, uses the same data format, so you have no data transformations to worry about.
The code samples in this book use some of our personal preferences for laying out code. Some of these practices are necessary best practices; others increase readability. If you have different preferences, as long as the code remains correct, that’s absolutely fine; the important thing is to be consistent.
The main reasons for being concerned about formatting are
- Ensuring syntactically correct JavaScript
- Ensuring that your code functions correctly when minified
- Improving readability for yourself and/or others on your team
Start with an easy formatting practice: indentation.
The only real reason to indent your code is to make it considerably easier for mere humans to read. JavaScript interpreters don’t care about it and will happily run code without any indentation or line breaks.
Best practice for indentation is to use spaces, not tabs, as there’s still no standard for the placement of tab stops. How many spaces you choose is up to you; we personally prefer two spaces. We find that using one space can make code difficult to follow at a glance, as the difference isn’t all that big. Four spaces can make your code unnecessarily wide (again, in our opinion). We like to balance the readability gains of indentation against the benefits of maximizing the amount of code you can see onscreen at any time—well, for that reason and a dislike of horizontal scrolling.
A best practice you should get into is placing the opening curly brace of a code block at the end of the statement that starts the block. What does that mean? All the code snippets so far have been written this way. The following code listing shows the right way and the wrong way of placing braces.
Listing D.27. Brace placements
const firstname = 'Simon';
let surname;
if (firstname === 'Simon') {                 #1
  surname = 'Holmes';                        #1
  console.log(`${firstname} ${surname}`);    #1
}

if (firstname === 'Simon')                   #2
{                                            #2
  surname = 'Holmes';                        #2
  console.log(`${firstname} ${surname}`);    #2
}
At least 99% of the time, the second approach won’t cause you a problem. The first approach won’t cause you a problem 100% of the time. We’ll take that over wasting time debugging; how about you?
What’s the 1% of the time when the wrong approach will cause you a problem? Consider a code snippet that uses the return statement:
return {
  name : 'name'
};
If you put your opening brace on a different line, JavaScript assumes that you've missed a semicolon after the return statement itself and adds one for you. JavaScript evaluates it like this:
return;           #1
{
  name:'name'
};
Due to JavaScript’s semicolon insertion, it doesn’t return the object you intended; instead, JavaScript returns undefined.
Next, we’ll look at semicolon use and JavaScript semicolon insertion in more detail.
JavaScript uses the semicolon character to denote the end of statements. It tries to be helpful by making this character optional and injects its own semicolons at runtime if it deems it necessary to do so, which isn’t a good thing at all.
When using semicolons to delimit statements, you should return to the goal of seeing in the code what the JavaScript interpreter sees and not let it make any assumptions. We treat semicolons as not optional, and we’re now at a point where code looks wrong to us if they’re not there.
Most lines of your JavaScript have a semicolon at the end, but not all; that would be too easy! All the statements in the following listing should end with a semicolon.
Listing D.28. Examples of semicolon use
const firstname = 'Simon';
let surname;                                                #1
surname = 'Holmes';                                         #1
console.log(`${firstname} ${surname}`);                     #1
const addSurname = function () {};                          #1
alert('Hello');                                             #1
const nameSetup = { firstname : 'Simon', fullname : ''};    #1
But code blocks shouldn’t end with a semicolon. We’re talking about blocks of code associated with if, switch, for, while, try, catch, and function (when not being assigned to a variable). The following listing shows a few examples.
Listing D.29. Using code blocks without semicolons
if (firstname === 'Simon') {
  ...
}                                   #1
function addSurname () {
  ...
}                                   #1
for (let i = 0; i < 3; i++) {
  ...
}
The rule isn’t quite so straightforward as “don’t use a semicolon” after curly braces. When assigning a function or object to a variable, you do have a semicolon after the curly braces. You’ve seen a couple of examples, which we’ve been using throughout the book.
Listing D.30. Semicolon placement for assigned blocks
const addSurname = function () {
  ...
};                                  #1
const nameSetup = {
  firstname : 'Simon'
};                                  #1
Putting semicolons after blocks can take a little while to get used to, but it’s worth the effort and eventually becomes second nature.
When you’re defining a long list of variables at the top of a scope, the most common approach is to write one variable name per line. This practice makes it easy to see at a glance what variables you’ve set up. The classic placement for the comma that separates variables is at the end of the line.
Listing D.31. Comma-last placement
let firstname = 'Simon',    #1
    surname,                #1
    initial = '',           #1
    fullname;
This approach is Simon’s preferred approach, as he’s been using it for about 15 years. Clive, on the other hand, advocates putting the comma at the front of each line.
Listing D.32. Comma-first placement
let firstname = 'Simon'
  , surname                 #1
  , initial = ''            #1
  , fullname;               #1
This JavaScript is perfectly valid and when minified to one line, reads exactly the same as the first code snippet. Simon has tried to get used to it, but he can’t; it looks wrong to him. Clive thinks that comma-first is a good idea, but he thinks Elm is great too.
There are arguments for and against both approaches. Your choice comes down to personal preference. The critical thing is to have a standard and stick to it.
Adding a bit of whitespace between sets of braces can help readability and won’t cause any problems for JavaScript. Again, you’ve seen this approach in all the code snippets so far. You can also add or remove whitespace from between a lot of JavaScript operators. Take a look at the following code snippet, showing the same piece of code with and without extra whitespace.
Listing D.33. Examples of whitespace formatting
const firstname = 'Simon';                   #1
let surname;                                 #1
if (firstname === 'Simon') {                 #1
  surname = 'Holmes';                        #1
  console.log(`${firstname} ${surname}`);    #1
}                                            #1

const firstname='Simon';                     #2
let surname;                                 #2
if(firstname==='Simon'){                     #2
  surname='Holmes';                          #2
  console.log(firstname+" "+surname);        #2
}                                            #2
As humans, we read by using whitespace as the delimiters for words, and the way we read code is no different. Yes, you can figure out the second part of the code snippet here, as many syntactic pointers act as delimiters, but it’s quicker and easier to read and understand the first part. JavaScript interpreters don’t notice the whitespace in these places, and if you’re concerned about increasing the file size for browser-based code, you can always minimize it before pushing it live.
A couple of code-quality checkers, JSHint and ESLint, check the quality and consistency of your code. Even better, most IDEs and good text editors have plugins or extensions for one or the other, so your code can be quality-checked as you go. These tools are useful for spotting the occasional missed semicolon or a comma in the wrong place.
Of the two tools, ESLint is geared more toward linting ES2015 code. TypeScript has its own linter, TSLint, which Angular installs by default.
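As an example, a minimal ESLint configuration might look something like the following sketch (the exact rules are our choice for illustration); it enforces the semicolon and strict-equality habits discussed in this appendix:

// .eslintrc.js
module.exports = {
  env: { browser: true, es6: true },
  extends: 'eslint:recommended',
  rules: {
    semi: ['error', 'always'],    // statements must end with a semicolon
    eqeqeq: 'error',              // require === and !== rather than == and !=
    'no-var': 'warn'              // prefer let and const over var
  }
};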
ES2015 introduced an alternative way of formatting strings akin to string interpolation, as you’d find in many different languages. JavaScript calls this type of formatting template literals.
A template literal is denoted with backticks where you'd ordinarily use single or double quotes to define a string. To perform the interpolation, the element (a variable or the result of a function call) that you wish to insert into the string is wrapped in ${}. The following listing shows how this works.
Listing D.34. Using template literals
const value = 10;
const square = x => x * x;
console.log(`Squaring the number ${value} gives a result of ${square(value)}`);    #1
// Squaring the number 10 gives a result of 100                                    #2
The next aspect of JavaScript programming that we’ll look at is callbacks. Callbacks often seem to be confusing or complicated at first, but if you take a look under the hood, you’ll find that they’re fairly straightforward. Chances are that you’ve already used them.
Callbacks are typically used to run a piece of code after a certain event has happened. Whether this event is a link being clicked, data being written to a database, or another piece of code finishing executing isn’t important, as the event could be almost anything. A callback function itself is typically an anonymous function—a function declared without a name—that’s passed directly to the receiving function as a parameter. Don’t worry if this seems like jargon right now; we’ll look at code examples soon, and you’ll see how easy it is.
Most of the time, you use callbacks to run code after something happens. To get accustomed to the concept, you can use a function that's built into JavaScript: setTimeout(). You may have already used it. In a nutshell, setTimeout() runs a callback function after the number of milliseconds that you declare. The basic construct for using it looks something like this:
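setTimeout(function () {
  // code to run once the delay has passed
}, 2000);                 // the delay, in milliseconds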
Canceling a setTimeout
If a setTimeout declaration has been assigned to a variable, you can use that variable to clear the timeout and stop it from completing, assuming that it hasn't already completed. You use the clearTimeout() function, which works something like this:
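const waitForIt = setTimeout(function () {
  console.log("My name is Simon");     // never runs
}, 2000);
clearTimeout(waitForIt);               // cancels the timeout before it fires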
This code snippet wouldn’t output anything to the log, as the waitForIt timer is cleared before it has the chance to complete.
First, the setTimeout() call is assigned to a variable so that you can access it again to cancel it, should you want to. As we mentioned earlier, a callback is typically an anonymous function. If you want to log your name to the JavaScript console after 2 seconds, you could use the code in the following listing.
Listing D.35. Capturing setTimeout reference
const waitForIt = setTimeout(function () {
  console.log("My name is Simon");
}, 2000);
Note
Callbacks are asynchronous. They run when they’re required, not necessarily in the order in which they appear in your code.
Keeping in mind this asynchronous nature, what would you expect the output of the following code snippet to be?
console.log("Hello, what's your name?"); const waitForIt = setTimeout(function () { console.log("My name is Simon"); }, 2000); console.log("Nice to meet you Simon");
If you read the code from top to bottom, the console log statements appear to make sense. But because the setTimeout() callback is asynchronous, it doesn’t hold up the processing of code, so you end up with this:
Hello, what's your name?
Nice to meet you Simon
My name is Simon
As a conversation, this result clearly doesn’t flow properly. In code, having the correct flow is essential; otherwise, your applications quickly fall apart.
Because this asynchronous approach is so fundamental to working with Node, we’ll look into it a little deeper.
Before you look at some more code, remind yourself of the bank-teller analogy from chapter 1. Figure D.3 shows how a bank teller can deal with multiple requests by passing any time-consuming tasks to other people.
The bank teller is able to respond to Sally’s request because she passed responsibility for Simon’s request to the safe manager. The teller isn’t interested in how the safe manager does what he does or how long it takes. This approach is asynchronous.
You can mimic this approach in JavaScript by using the setTimeout() function. All you need are some console.log() statements to represent the bank teller's activity and a couple of timeouts to represent the delegated tasks. You can see this approach in the following code listing, where it's assumed that Simon's request will take 3 seconds (3,000 ms) and Sally's will take 1 second.
Listing D.36. Asynchronous flow
console.log("Taking Simon's request"); #1 const requestA = setTimeout(function () { console.log("Simon: money's in the safe, you have $5000"); }, 3000); console.log("Taking Sally's request"); #2 const requestB = setTimeout(function () { console.log("Sally: Here's your $100"); }, 1000); console.log("Free to take another request"); #3 // ** console.log responses, in order ** // Taking Simon's request // Taking Sally's request // Free to take another request // Sally: Here's your $100 #4 // Simon: money's in the safe, you have $5000 #5
This code has three distinct blocks: taking the first request from Simon and sending it away 1; taking the second request from Sally and sending it away 2; and ready to take another request 3. If this code were synchronous code like you’d see in PHP or .NET, you’d deal with Simon’s request in its entirety before taking Sally’s request 3 seconds later.
With an asynchronous approach, the code doesn’t have to wait for one of the requests to complete before taking another one. You can run this code snippet in your browser to see how it works. Put it in an HTML page and run it, or enter it directly in the JavaScript console.
We hope that you see how this code mimics the scenario we talked through as we kicked off this section. Simon’s request was first in, but as it took some time to complete, the response didn’t come back immediately. While somebody was dealing with Simon’s request, Sally’s request was taken. While Sally’s request was being dealt with, the bank teller became available again to take another request. As Sally’s request took less time to complete, she got her response first, whereas Simon had to wait a bit longer for his response. Neither Sally nor Simon got held up by the other.
We're not going to show you the source code of setTimeout() here, but rather a skeleton function that uses a callback. Declare a new function called setTimeout() that accepts the parameters callback and delay. The names aren't important; they can be anything you want. The following code listing demonstrates this function. (Note that you won't be able to run this function in a JavaScript console.)
Listing D.37. setTimeout skeleton
const setTimeout = (callback, delay) => {
  ...                                        #1
  ...
  callback();                                #2
};
const requestB = setTimeout(() => {          #3
  console.log("Sally: Here's your $100");    #3
}, 1000);                                    #3
The callback parameter is expected to be a function, which can be invoked at a specific point in the setTimeout() function 1. In this case, you’re passing it a simple anonymous function 3 that will write a message to the console log. When the setTimeout() function deems it appropriate to do so, it invokes the callback, and the message is logged to the console. That’s not so difficult, is it?
If JavaScript is your first programming language, you’ll have no idea how weird this concept of passing anonymous functions around looks to those who are coming in from different backgrounds. But the ability to operate this way is one of JavaScript’s great strengths.
Typically, you won't look inside the function running the callbacks, whether it's setTimeout(), jQuery's ready(), or Node's createServer(). The documentation for these functions tells you what the expected parameters are and what parameters may be returned.
Why setTimeout() is unusual
The setTimeout() function is unusual in that you specify a delay after which the callback will fire. In a more typical use case, the function itself decides when the callback should be triggered. In jQuery’s ready() method, this is when jQuery says the DOM has loaded; in a save() operation in Node, this is when the data is saved to the database and a confirmation is returned.
Something to bear in mind when passing anonymous functions around this way is that the callback doesn’t inherit the scope of the function it’s passed into. The callback function isn’t declared inside the destination function, merely invoked from it. A callback function inherits the scope in which it’s defined.
Figure D.4 depicts scope circles. Here, you see that the callback has its own local scope inside the global scope, which is where requestB is defined. This is all well and good if your callback needs access only to its inherited scope, but what if you want it to be smarter? What if you want to use data from your asynchronous function in your callback?
Currently, the example callback function has a dollar amount hardcoded into it, but what if you want that value to be dynamic—to be a variable? Assuming that this value is set in the setTimeout() function, how do you get it into the callback? You could save it to the global scope, but as you know by now, doing so would be bad. You need to pass the value as a parameter into the callback function. You should get something like the scope circles shown in figure D.5.
The same thing in code would look like the following code listing.
Listing D.38. setTimeout with passing data
const setTimeout = (callback, delay) => {
  const dollars = 100;                             #1
  ...
  callback(dollars);                               #2
};
const requestB = setTimeout((dollars) => {         #3
  console.log("Sally: Here's your $" + dollars);   #3
}, 1000);
This code snippet outputs the same message to the console that you’ve already seen. The big difference now is that the value of dollars is being set in the setTimeout() function and being passed to the callback.
It’s important that you understand this approach, as the vast majority of Node code examples on the internet use asynchronous callbacks this way. But there are a couple of potential problems with this approach, particularly when your codebase gets larger and more complex. An overreliance on passing around anonymous callback functions can make the code hard to read and follow, especially when you find that you have multiple nested callbacks. It also makes running tests on the code difficult, as you can’t call any of these functions by name; they’re all anonymous. We don’t cover unit testing in this book, but in a nutshell, the idea is that every piece of code can be tested separately with repeatable and expected results.
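To picture the problem, here's a sketch of the kind of nesting that builds up; connectToDb(), findUser(), and the other functions are hypothetical asynchronous operations, and error handling is omitted to keep things short. Each step can run only inside the callback of the previous one, and the code drifts ever further to the right:

// connectToDb, findUser, loadAccount, and updateBalance are hypothetical
connectToDb(function (err, db) {
  findUser(db, 'simon', function (err, user) {
    loadAccount(db, user.id, function (err, account) {
      updateBalance(db, account.id, -5, function (err, result) {
        console.log('Done, eventually:', result);
      });
    });
  });
});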
Let’s look at a way that you can achieve this result with named callbacks.
Named callbacks differ from inline callbacks in one fundamental way. Instead of putting the code you want to run directly into the callback, you put the code inside a defined function. Then, rather than passing the code directly as an anonymous function, you can pass the function name. Rather than passing the code, you’re passing a reference to the code to run.
Sticking with the ongoing example, add a new function called onCompletion() that will be the callback function. Figure D.6 shows how this function looks in the scope circles.
This figure looks like the preceding example, except that the callback scope has a name. As with an anonymous callback, a named callback can be invoked without any parameters, as implied in figure D.6. The following code snippet shows how to declare and invoke a named callback, putting into code what you see in figure D.6.
Listing D.39. Named callbacks
const setTimeout = (callback, delay) => {
  const dollars = 100;
  ...
  callback();
};
const onCompletion = () => {                  #1
  console.log("Sally: Here's your $100");     #1
};                                            #1
const requestB = setTimeout(
  onCompletion,                               #2
  1000
);
The named function 1 now exists as an entity in its own right, creating its own scope. Notice that there’s no longer an anonymous function, but the name of the function 2 is passed as a reference.
Listing D.39 uses a hardcoded dollar value in the console log again. As with anonymous callbacks, passing a variable from one scope to another is straightforward. You can pass the parameters you need into the named function. Figure D.7 shows how this looks in the scope circles.
You need to pass the variable dollars from setTimeout() to the onCompletion() callback function. You can do so without changing anything in your request, as the following code snippet shows.
Listing D.40. setTimeout variable passing
const setTimeout = function (callback, delay) {
  const dollars = 100;
  ...
  callback(dollars);                                #1
};
const onCompletion = function (dollars) {           #2
  console.log("Sally: Here's your $" + dollars);    #2
};                                                  #2
const requestB = setTimeout(
  onCompletion,                                     #3
  1000
);
Here, the setTimeout() function sends the dollars variable to the onCompletion() function as a parameter. You’ll often have no control of the parameters sent to your callback, because asynchronous functions like setTimeout() are provided as is. But you’ll often want to use variables from other scopes inside your callback, not what your asynchronous function provides. Next, we’ll look at how to send the parameters you want to your callback.
Suppose that you want the name in the output to come through as a parameter. The updated function looks like the following:
const onCompletion = function (dollars, name) {
  console.log(name + ": Here's your $" + dollars);
};
The problem is that the setTimeout() function passes only a single parameter, dollars, to the callback. You can address this problem by using an anonymous function as a callback again, remembering that it inherits the scope in which it’s defined. To demonstrate this function outside the global scope, wrap the request in a new function, getMoney(), that accepts a single parameter, name.
Listing D.41. Variable scoping in setTimeout
const getMoney = function (name) {
  const requestB = setTimeout(function (dollars) {  #1
    onCompletion(dollars, name);                    #2
  }, 1000);
};
getMoney('Simon');
In the scope circles, this code looks like figure D.8.
The next listing puts all the code together for the sake of completeness.
Listing D.42. Complete setTimeout example
const setTimeout = (callback, delay) => {
  const dollars = 100;
  ...
  callback(dollars);                          #1
};
const onCompletion = (dollars, name) => {
  console.log(name + ": Here's your $" + dollars);
};
const getMoney = (name) => {
  const requestB = setTimeout((dollars) => {  #2
    onCompletion(dollars, name);              #3
  }, 1000);
};
getMoney('Simon');
The simple way to think of it is that calling the named function from inside the anonymous callback enables you to capture anything you need from the parent scope (getMoney(), in this case) and explicitly pass it to the named function (onCompletion()).
Seeing the flow in action
If you want to see this flow in action, you can add a debugger statement, run it in your browser, and step through the functions to see which variables and values are set where and when. Altogether, you have something like this:
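The full snippet isn't reproduced here, but a minimal sketch, based on listing D.42 with a debugger statement dropped into the anonymous callback (and the elided body of the mock setTimeout() replaced by a comment), looks something like this:

const setTimeout = (callback, delay) => {
  const dollars = 100;
  // simulate the long-running work here
  callback(dollars);
};
const onCompletion = (dollars, name) => {
  console.log(name + ": Here's your $" + dollars);
};
const getMoney = (name) => {
  const requestB = setTimeout((dollars) => {
    debugger;                      // execution pauses here with dev tools open
    onCompletion(dollars, name);
  }, 1000);
};
getMoney('Simon');

Open your browser's developer tools before running it; when execution pauses on the debugger statement, you can inspect which variables are in scope and what their values are at that point.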
Remember that you normally won’t have access to the code inside the function that invokes the callback and that the callback is often invoked with a fixed set of parameters (or none, as with setTimeout()). Anything extra that you need to add must be added inside the anonymous callback.
Defining a named function in this way makes the scope and code of the function easier to comprehend at a glance, especially if you name your functions well. With a small, simple example like this one, you could think that the flow is harder to understand when you move the code into its own function, and you could well have a point. But when the code becomes more complex and you have multiple lines of code inside multiple nested callbacks, you’ll definitely see the advantage of doing it this way.
Another advantage is that when you can easily see what the onCompletion() function should do and what parameters it expects, the function becomes easier to test. Now you can say, "When the function onCompletion() is passed a number of dollars and a name, it should output a message to the console, including this number and name." This case is a simple one, but we hope that you can see its value.
That brings us to the end of discussing callbacks from a code perspective. Now that you've got a good idea of how callbacks are defined and used, let's look at Node to see why callbacks are so useful.
In the browser, many events are based on user interaction, waiting for things to happen outside what the code can control. The concept of waiting for external things to happen is similar on the server side. The difference on the server side is that the events focus more on other things happening on the server or indeed on a different server. In the browser, the code waits for events such as a mouse click or form submit, whereas the server-side code waits for events such as reading a file from the file system or saving data to a database.
The big difference is that in the browser, it’s generally an individual user who initiates the event, and it’s only that user who’s waiting for a response. On the server side, the central code generally initiates the event and waits for a response. As discussed in chapter 1, only a single thread is running in Node, so if the central code has to stop and wait for a response, every visitor to the site gets held up—not a good thing! This is why it’s important to understand callbacks, because Node uses callbacks to delegate the waiting to other processes, making it asynchronous.
Next, we’ll look at an example of using callbacks in Node.
Using a callback in Node isn’t any different from using it in the browser. If you want to save some data, you don’t want the main Node process doing this, as you didn’t want the bank teller going with the safe manager and waiting for the response. You want to use an asynchronous function with a callback. All database drivers for Node provide this ability. We get into the specifics about how to create and save data in the book, so for now, we’ll use a simplified example. The following code snippet shows an example of asynchronously saving data using the save() method of the mySafe object and outputting a confirmation to the console when the database finishes and returns a response.
Listing D.43. Basic Node callback
mySafe.save( function (err, savedData) {
  console.log(`Data saved: ${savedData}`);
} );
Here, the save function expects a callback function that can accept two parameters: an error object (err) and the data returned from the database following the save (savedData). There's normally a bit more functionality in the callback than this, but the basic construct is simple.
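In practice, for example, you'd normally check the error parameter before touching the data. A minimal sketch, still using the simplified mySafe example, might look like this:

mySafe.save( function (err, savedData) {
  if (err) {
    console.log(`Save failed: ${err}`);    // report the problem
    return;                                // don't touch savedData on error
  }
  console.log(`Data saved: ${savedData}`);
} );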
You get the idea of running a callback, but what do you do if you want to run another asynchronous operation when the callback is finished? Returning to the banking metaphor, suppose that you want to get a total value from all of Simon’s accounts after the deposit is made to the safe. Simon doesn’t need to know that multiple steps and multiple people are involved, and the bank teller doesn’t need to know until everything is complete. You’re looking to create a flow like the one shown in figure D.9.
Clearly, two operations will be required, with another asynchronous call to the database. You know from what we’ve already discussed that you can’t put it in the code after the save function, as in the following code snippet.
Listing D.44. Node callback issues
mySafe.save( function (err, savedData) {
  console.log(`Data saved: ${savedData}`);
} );
myAccounts.findTotal(                            #1
  function (err, accountsData) {
    console.log(`Your total: ${accountsData}`);
  }
);

// ** console.log responses, in probable order **
// Your total: 4500
// Data saved: {dataObject}
That’s not going to work, because the myAccounts.findTotal() function will run immediately rather than when the mySafe.save() function has finished. The return value is likely to be incorrect, because it won’t take into account the value being added to the safe. You need to ensure that the second operation runs when you know that the first one has finished. The solution is simple: invoke the second function from inside the first callback, a process known as nesting the callbacks.
Nested callbacks are used to run asynchronous functions one after another. Put the second function inside the callback from the first, as in the following listing.
Listing D.45. Nesting callbacks
mySafe.save( function (err, savedData) {
  console.log(`Data saved: ${savedData}`);
  myAccounts.findTotal(                                #1
    function (err, accountsData) {
      console.log(`Your total: ${accountsData.total}`);
    }
  );
} );

// ** console.log responses, in order **
// Data saved: {dataObject}
// Your total: 5000
Now you can be sure that the myAccounts.findTotal() function will run at the appropriate time, which in turn means that you can predict the response.
This ability is important. Node is inherently asynchronous, jumping from request to request and from site visitor to site visitor. But sometimes, you need to do things in a sequential manner. Nesting callbacks gives you a good way of doing this by using native JavaScript.
The downside of nested callbacks is the complexity. You can probably see that with one level of nesting, the code is already a bit harder to read, and following the sequential flow takes a bit more mental effort. This problem is multiplied when the code gets more complex and you end up with multiple levels of nested callbacks. The problem is so great that it has become known as callback hell. Callback hell is why some people think that Node (and JavaScript) is particularly hard to learn and difficult to maintain, and they use it as an argument against the technology. In fairness, many code samples you can find online do suffer from this problem, which doesn’t do much to combat this opinion. It’s easy to end up in callback hell when you’re developing Node, but it’s also easy to avoid if you start in the right way.
We’ve already discussed the solution to callback hell: using named callbacks. Next, we’ll show you how named callbacks help with this problem.
Named callbacks can help you avoid nested callback hell because you can use them to separate each step into a distinct piece of code or functionality. Humans tend to find this type of code easier to read and understand.
To use a named callback, you need to take the content of a callback function and declare it as a separate function. The nested callback example has two callbacks, so you’re going to need two new functions: one for when the mySafe.save() operation has completed and one for when the myAccounts.findTotal() operation has completed. If these functions are called onSave() and onFindTotal(), respectively, you can create some code like the following listing.
Listing D.46. Refactor of callback code
mySafe.save( function (err, savedData) {
  onSave(err, savedData);                        #1
} );
const onSave = function (err, savedData) {
  console.log(`Data saved: ${savedData}`);
  myAccounts.findTotal(                          #2
    function (err, accountsData) {
      onFindTotal(err, accountsData);            #3
    }
  );
};
const onFindTotal = function (err, accountsData) {
  console.log(`Your total: ${accountsData.total}`);
};
Now that each piece of functionality is split into its own function, it's easier to look at each part in isolation and understand what it's doing. You can see what parameters it expects and what the outcomes should be. In reality, the outcomes are likely to be more complex than simple console.log() statements, but you get the idea. You can also follow the flow relatively easily and see the scope of each function.
By using named callbacks, you can reduce the perceived complexity of Node and also make your code easier to read and maintain. An important second advantage is that individual functions are much better suited to unit testing. Each part has defined inputs and outputs, with expected and repeatable behavior.
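To make that concrete, here's a rough sketch of what such a test could look like, using Node's built-in assert module. It assumes a version of onFindTotal() that returns its message rather than only logging it, which isn't quite how listing D.46 is written; the point is simply that a named function with defined inputs and outputs can be exercised in isolation:

const assert = require('assert');

const onFindTotal = function (err, accountsData) {
  return `Your total: ${accountsData.total}`;
};

// given a known input, the output is repeatable and easy to check
assert.strictEqual(
  onFindTotal(null, { total: 5000 }),
  'Your total: 5000'
);
console.log('onFindTotal behaves as expected');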
A Promise is like a contract: it states that a value will be available in the future when a long-running operation has completed. In essence, a Promise represents the result of an asynchronous operation. When that value has been determined, the Promise executes the given code or handles any error associated with not having received the expected value.
Promises are first-class citizens of the JavaScript specification. They have three states:
- Pending— The initial state of the Promise
- Fulfilled— The asynchronous operation successfully resolved
- Rejected— The asynchronous operation did not successfully resolve
When a Promise has been resolved, successfully or not, its value can’t change; it becomes immutable. We’ll discuss immutability in the section on functional programming later in this appendix.
To set up a Promise, you pass the Promise constructor a function that accepts two callbacks: one to call on success (resolve) and one to call on failure (reject). The Promise execution fires whichever of these callbacks is appropriate. The code you want to run once the Promise settles is then attached with then() functions (which are chainable) for success or a catch() function for failure.
Listing D.47. Setting up/using a Promise
const promise = new Promise((resolve, reject) => {          #1
  // set up long-running, possibly asynchronous operation,
  // like an API query
  if (/* successfully resolved */) {
    resolve(/* data response */);                           #2
  } else {
    reject();                                               #3
  }
});
promise
  .then((data) => {/* execute this on success */})          #4
  .then(() => {/* chained next function, and so on */})     #5
  .catch((err) => {/* handle error */});                    #6
We use Promises in the Loc8r application, but not in a complicated way. The Promises API provides some static functions that help if you’re trying to execute multiple Promises.
Promise.all() accepts an iterable of Promises and returns a single Promise that fulfills when all the input Promises have fulfilled, or rejects as soon as any one of them rejects. On fulfillment, the then() callback receives an array of the resolved values, in the same order as the input iterable. If one of the Promises rejects, the catch() callback receives that single rejection value.
Listing D.48. Promise.all()
const promise1 = new Promise((resolve, reject) => resolve());
const promise2 = new Promise((resolve, reject) => resolve());
const promise3 = new Promise((resolve, reject) => reject());
const promise4 = new Promise((resolve, reject) => resolve());
Promise.all([ promise1, promise2, promise3, promise4 ])
  .then((values) => {/* process success data iterable */})   #1
  .catch(err => console.log(err));                            #2
Promise.race() also accepts an iterable, but its output is different. Promise.race() returns a Promise that settles as soon as the first of the input Promises settles, with that Promise's fulfillment value or rejection reason.
Listing D.49. Promise.race()
const promise1 = new Promise((resolve, reject) =>
  setTimeout(resolve, 1000, 'first')
);
const promise2 = new Promise((resolve, reject) =>
  setTimeout(reject, 200, 'second')
);
Promise.race([promise1, promise2])
  .then(value => console.log(value))
  .catch(err => console.log(err));   #1
Because Promises still rely on callbacks for their asynchronous behavior, you can get into a muddle if several of them are nested; a deeply nested callback structure is what we referred to earlier as callback hell. Promises mitigate this problem somewhat by providing structure and making the asynchronicity explicit.
Promises have their drawbacks. They’re difficult to use in a synchronous manner, and you usually have to wade through a bunch of boilerplate code before getting to the good stuff.
async and await exist to simplify working with Promises in a synchronous-looking style. The await expression is valid only inside an async function; used outside one, it throws a SyntaxError. When an async function is declared, the definition returns an AsyncFunction object. This object operates asynchronously via the JavaScript event loop and returns an implicit Promise as its result. The syntax and the way it lets you structure code give the impression that using async functions is much like using synchronous functions.
await
The await expression causes the execution of the async function to pause and wait until the passed Promise resolves. Then, function execution resumes.
One thing to point out is that await is not the same as Promise.then(). Because await pauses execution of the async function, making the code inside it run in sequence, it isn't chainable in the way that Promise.then() is.
The next listing shows async/await in use.
Listing D.50. async/await
function resolvePromiseAfter2s () {
  return new Promise(resolve =>
    setTimeout(() => resolve('done in 2s'), 2000));
}
const resolveAnonPromise1s = () =>
  new Promise(resolve =>
    setTimeout(() => resolve('done in 1s'), 1000));

async function asyncCall () {                       #1
  const result1 = await resolvePromiseAfter2s();    #2
  console.log(result1);                             #3
  const result2 = await resolveAnonPromise1s();     #4
  console.log(result2);                             #5
}
asyncCall();                                        #6
You can find more details on async/await at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function.
Someone anonymously tweeted a great quote:
The secret to writing large apps in JavaScript is not to write large apps. Write many small apps that can talk to each other.
This quote makes great sense in a number of ways. Many applications share several features, such as user login and management, comments, reviews, and so on. The easier it is for you to take a feature from one application you’ve written and drop it into another, the more efficient you’ll be, particularly as you’ll already have (we hope) tested the feature in isolation, so you know it works.
This is where modular JavaScript comes in. JavaScript applications don’t have to be in one never-ending file with functions, logic, and global variables flying loose all over the place. You can contain functionality within enclosed modules.
A closure essentially gives you access to the variables set in a function after that function has completed and returned. This gives you a way to avoid pushing variables into the global scope, and it also offers a degree of protection to a variable and its value, because you can't overwrite it the way you could a global variable.
Sound a bit weird? Look at an example. The following listing demonstrates how you can send a value to a function and later retrieve it.
Listing D.51. Example closure
const user = {};
const setAge = function (myAge) {
  return {                     #1
    getAge: function () {      #1
      return myAge;            #1
    }                          #1
  };
};
user.age = setAge(30);             #2
console.log(user.age);             #3
console.log(user.age.getAge());    #4
Here’s what’s happening. The getAge() function is returned as a method of the setAge() function. The getAge() method has access to the scope in which it was created. So getAge(), and getAge() alone, has access to the myAge() parameter. As you saw earlier in this appendix, when a function is created, it also creates its own scope. Nothing outside this function has access to the scope.
myAge() isn’t a one-off shared variable. You can call the function again—creating a second new function scope—to set (and get) the age of a second user. You could happily run the following code snippet after the preceding one, creating a second user and giving them a different age.
Listing D.52. Continuing the closure example
const usertwo = {};
usertwo.age = setAge(35);             #1
console.log(usertwo.age.getAge());    #2
console.log(user.age.getAge());       #3
Each user has a different age that isn’t aware of or affected by the other. The closure protects the value from outside interference. The important takeaway here is that the returned method has access to the scope in which it was created.
This closure approach is a great start, but it has evolved into more useful patterns. For example, take a look at the module pattern.
The module pattern extends the closure concept, typically wrapping a collection of code, functions, and functionality into a module. The idea is that the module is self-contained, uses only data that’s explicitly passed into it, and reveals only data that it’s asked for directly.
Immediately Invoked Function Expression
The module pattern uses what is known as the Immediately Invoked Function Expression (IIFE). The functions we’ve been using in this book up until now have been function declarations, creating functions that you can call on later in the code. The IIFE creates a function expression and immediately invokes it, typically returning some values and/or methods.
The syntax for an IIFE wraps the function in parentheses and immediately invokes it by using another pair of parentheses.
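The snippet itself isn't reproduced here; a minimal sketch of the shape it takes, using invented names (counter, count, increment), looks something like this:

const counter = (function () {       #1
  let count = 0;
  return {
    increment: function () {
      count += 1;
      return count;
    }
  };
})();
console.log(counter.increment());    #2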
This example is a typical use but not the only one. The IIFE has been assigned to a variable 1. When you do this, the returned methods from the function become properties of the variable 2.
Like the basic closure, the module pattern returns functions and variables as properties of the variable it's assigned to. Unlike the basic closure, the module pattern doesn't have to be manually initiated; the module immediately calls itself as soon as it has been defined. This is made possible by using an IIFE. (See the sidebar in this section for a bit more information on IIFEs.)
The following listing shows a small but usable example of the module pattern.
Listing D.53. Module pattern example
const user = {firstname: "Simon"};
const userAge = (function () {          #1
  let myAge;                            #2
  return {
    setAge: function (initAge) {        #3
      myAge = initAge;                  #3
    },                                  #3
    getAge: function () {               #4
      return myAge;                     #4
    }                                   #4
  };
})();
userAge.setAge(30);                     #5
user.age = userAge.getAge();            #5
console.log(user.age);                  #6
In this example, the myAge variable exists within the scope of the module and is never directly exposed to the outside. You can interact with the myAge variable only in the ways defined by the exposed methods. In listing D.53, you can only get and set, but it's possible to do more than that. For example, you can add a happyBirthday() method to the userAge module that increases the value of myAge by 1 and returns the new value. The new parts are marked in the following listing.
Listing D.54. Adding the happyBirthday method to the module
const user = {firstname: "Simon"};
const userAge = (function () {
  let myAge;
  return {
    setAge: function (initAge) {
      myAge = initAge;
    },
    getAge: function () {
      return myAge;
    },
    happyBirthday: function () {    #1
      myAge += 1;                   #1
      return myAge;                 #1
    }                               #1
  };
})();
userAge.setAge(30);
user.age = userAge.getAge();
console.log(user.age);
user.age = userAge.happyBirthday();   #2
console.log(user.age);                #3
user.age = userAge.getAge();
console.log(user.age);                #3
The new happyBirthday() method increments the myAge value by 1 and returns the new value. This result is possible because the myAge variable exists in the scope of the module function, as does the returned happyBirthday() function. The new value of myAge continues to persist inside the module scope.
What we’ve looked at in the module pattern is heading close to the revealing module pattern. The revealing module pattern is essentially some syntax that sugarcoats the module pattern. The aim is to make obvious what is exposed as public and what remains private to the module.
Providing a return in this way is also a stylistic convention, but again, it's one that helps you and others understand your code when you come back to it after a break. With this approach, the return statement contains a list of the functions that you're returning, without any of the actual code. The code is declared in functions above the return statement, within the same module. The following listing shows an example.
Listing D.55. Revealing module pattern, short example
const userAge = (function () {
  let myAge;
  const setAge = function (initAge) {   #1
    myAge = initAge;                    #1
  };                                    #1
  return {
    setAge                              #2
  };
})();
You can’t see the benefit of this approach in such a small example. We’ll look at a longer example soon that will get you part of the way there, but you’ll see the benefits when you have a module that runs to several hundred lines of code. As gathering all the variables at the top of the scope makes it obvious which variables are being used, taking the code out of the return statement makes it obvious at a glance which functions are being exposed. If you had a dozen or so functions being returned, each with a dozen or more lines of code, chances are that you wouldn’t to be able to see the entire return statement on one screen of code without scrolling.
What’s important in the return statement, and what you’ll be looking for, is which methods are being exposed. In the context of the return statement, you aren’t interested in the inner workings of each method. Separating your code like this makes sense and sets you up to have great, maintainable, and understandable code.
In this section, we’ll take a look at a larger example of the pattern, using the userAge module. The following listing shows an example of the revealing module pattern and removing code from the return statement.
Listing D.56. Revealing module pattern, full example
const user = {};
const userAge = (function () {
  let myAge;                            #1
  const setAge = function (initAge) {
    myAge = initAge;
  };
  const getAge = function () {
    return myAge;
  };
  const addYear = function () {         #2
    myAge += 1;                         #2
  };                                    #2
  const happyBirthday = function () {
    addYear();                          #3
    return myAge;
  };
  return {
    setAge,                             #4
    getAge,                             #4
    happyBirthday                       #4
  };
})();
userAge.setAge(30);
user.age = userAge.getAge();
user.age = userAge.happyBirthday();     #5
This demonstrates a few interesting things. First, notice that the variable myAge 1 itself is never exposed outside the module. The value of the variable is returned by various methods, but the variable itself remains private to the module.
As well as private variables, you can have private functions such as addYear() 2 in the listing. Private functions can easily be called by public methods 3.
The return statement 4 is kept nice and simple and is now an at-a-glance reference to the methods being exposed by this module.
Strictly speaking, the order of the functions inside the module isn’t important so long as they’re above the return statement. Anything below the return statement never runs. When writing large modules, you may find it easier to group related functions. If it suits what you’re doing, you could also create a nested module or even a separate module with a public method exposed to the first module so that they can talk to each other.
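As a rough sketch of that last idea (the module names here are invented, not part of the ongoing example), one module can expose a public method that another module is given access to:

const logger = (function () {
  const log = function (message) {
    console.log(`[userAge] ${message}`);
  };
  return { log };
})();

const userAge = (function (logger) {   // the other module is passed in explicitly
  let myAge;
  const setAge = function (initAge) {
    myAge = initAge;
    logger.log(`age set to ${myAge}`);
  };
  return { setAge };
})(logger);

userAge.setAge(30);   // also logs "[userAge] age set to 30"

Because the second module receives the first as an explicit parameter, the dependency between them is visible at a glance.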
Remember the quote from the beginning of this section:
The secret to writing large apps in JavaScript is not to write large apps. Write many small apps that can talk to each other.
This quote applies not only to large-scale applications, but also to modules and functions. If you can keep your modules and functions small and to the point, you’re on your way to writing great code.
An extension to the modularity of JavaScript is the class syntax introduced with ES2015. Classes are syntactic sugar over JavaScript’s prototypal inheritance model, but they work as you mostly expect classes to work, if you have object-oriented programming (OOP) experience.
Note, though, that JavaScript classes, at least up until ES2017, have public properties and public and static methods. Private and protected class visibility are due to be added to the specification at some undetermined point. Classes do have an inheritance hierarchy that uses the extends keyword, but there are no interfaces. Accessing functionality from a parent class involves the super keyword, and initialization uses a constructor method.
We’re not going to cover the whys and wherefores of OOP, which is an exercise best left to you. (See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes for starters.) Here, we’ll cover the basics of the syntax.
Listing D.57. Class syntax examples
// Parent class
class Rectangle {
  width = 0;
  height = 0;
  constructor (width, height) {
    this.width = width;
    this.height = height;
  }
  get area() {
    return this.determineArea();
  }
  determineArea () {
    return this.width * this.height;
  }
}

// Child class of Rectangle
class Square extends Rectangle {
  constructor (side) {
    super(side, side);
  }
}

const square = new Square(10);
console.log(`Square area: ${square.area}`);   // prints Square area: 100
There’s plenty more to classes than this, and in this book, you’ll have used them mostly in Angular as TypeScript classes to build components.
Functional programming as a concept has been around longer than object orientation. For a long time, the concept was relegated to academia, because some of the languages used have steep learning curves, which raised the barrier to entry artificially high. Who wants to spend time learning obscure concepts only to be confused by the syntax when all you want to do is get information from the users of your site and push it into a database?
Recently, though, all mainstream object-oriented languages have been pulling in and integrating concepts of functional programming languages, because these concepts provide surety of data, reduce cognitive load, and allow for composition of functionality.
Concepts that you can apply to your JavaScript work include immutability, purity, declarative style, and function composition.
Other functional features may or may not be available natively, depending on which version of the language you're using. We'll cover these concepts one at a time.
Although immutability isn’t strictly enforced at a language level, through a little bit of forward planning and some rigor, you can implement it simply and effectively. Be aware that npm packages are available to help, such as immutable.js from Facebook (https://github.com/facebook/immutable-js).
The point is that data/state that you’re operating on isn’t mutated. Mutation is an in-place operation and can be the cause of hard-to-track bugs.
The concept as it applies to JavaScript means that the state isn’t altered; it’s copied, transformed, and assigned to an alternative variable. This concept can also be applied to collections of data and objects; although slightly more rigor needs to be applied, the outcome should be the same.
For simple scalar-type variables, applying immutability is simple: declare them with const. That way, the JavaScript execution context can't overwrite the variable, and it throws an exception if you try by mistake. We covered this topic earlier.
For object types (Arrays, Objects, Maps, Sets), declaring with const isn’t massively helpful. The issue is that const creates a reference to the object being created. As it’s a reference, the data within the object can be altered. This is where the rigor comes in. Instead of using looping constructs like for to manipulate the collection directly, use the iterators provided by that type; they’re prototype methods and should be available in both the browser and in Node.js. For functionality you want that isn’t supplied, there are always libraries such as Lodash.js and Ramda.js.
Listing D.58. Examples of applying the concept of immutability
const names = ['s holmes', 'c harber', 'l skywalker', 'h solo'];       #1
const uppercasedNames = names.map(name => name.toUpperCase());         #2
const shortNames = names.filter(name => name.length < 10);             #3
const values = [1, 2, 3, 4, 5, 6, 7, 8, 9];                            #4
const total = values.reduce((acc, value) => acc + value, 0);           #5
const product = values.reduceRight((acc, value) => acc * value, 1);    #6
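One thing listing D.58 doesn't show is why const alone isn't enough for object types, or what the copy-and-transform style looks like next to an in-place mutation. A small sketch:

const scores = [1, 2, 3];
scores.push(4);                          // allowed: const protects the reference, not the contents
const doubled = scores.map(n => n * 2);  // non-mutating: returns a new array
const extended = [...scores, 5];         // non-mutating: copies into a new array
// scores is now [1, 2, 3, 4]; doubled and extended are separate arrays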
Pure functions are functions that don’t exhibit side effects or use data that hasn’t been supplied. A side effect is a change to the program state that’s external to the function and differs from the return value of the function. Typical side effects include changing global variables’ values, sending text to the screen, and printing. Some of these side effects are unwanted and harmful, but some are unavoidable and necessary. As JavaScript programmers, we should strive to reduce side effects as much as possible. This way, program state is predictable and therefore easy to reason about if bugs occur.
Functions should operate only on the data that they’ve been provided. External data, such as global window state, shouldn’t be changed unless absolutely necessary, and even then, only in a controlled manner by a dedicated function. If your code is reliant on global state, that’s a bad code smell that you should investigate.
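If you genuinely need shared state, one way to keep changes controlled (sketched here with invented names, not an API from the book) is to funnel every update through a single dedicated function:

let appState = { theme: 'light' };         // shared state, deliberately not on window

const updateAppState = (changes) => {      // the one sanctioned place where change happens
  appState = { ...appState, ...changes };  // copy and transform rather than mutate in place
  return appState;
};

updateAppState({ theme: 'dark' });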
Pure functions are predictable: given a set of inputs, the output of the function is always the same. (This property is often loosely called idempotency, though strictly speaking it's referential transparency.)
A simple, somewhat contrived example is a function that adds two numbers together:
const sum = (a, b) => a + b;
If you supply 1 and 2 to such a function, you always expect 3 to be returned.
What if the function also relied on a value maintained outside it, such as const sumWithGlobal = (a, b) => a + b + window.c, and that value (window.c) was usually 0 but sometimes 1, or maybe something random like a string? What would you expect when you supplied 1 and 2 as function arguments? You couldn't rely on the result being 3; it might be 4, something wildly different, or even an exception.
This example is a simple one, but what if it involved thousands of lines of code? The issue becomes orders of magnitude larger. Try to keep functions pure; being able to predict outputs makes everyone's lives easier.
We don’t want to speak for everybody, but we guess that most code you write is imperative in style. You set out what you want the computer to do line by line, much like a recipe. You might overlay this code with notes of object orientation, but it’s still a recipe. There’s nothing wrong with this approach; it works and mostly works well.
With declarative programming, you state the logic of what you’re looking to achieve but leave the execution details up to the computer. In essence, you don’t care how the outcome of your program is achieved so long as it’s achieved.
In this style of JavaScript, code should favor the following:
- Array iterators over for loops
- Recursion
- Partially applicable and composable functions
- Ternary operators over if statements to ensure return values
- Avoiding changing state, mutating data, and side effects
We stress "should" because most JavaScript engines don't implement tail call optimization, so deep recursion can run into the call stack limit. Also, partial application and function composition are things that you build into your code, not things that are natively supported.
Listing D.59. Declarative programming example
const compose = (...fns) =>
  fns.reduce((f, g) => (...args) => f(g(...args)));     #1

const url = '...';
const parse = item => JSON.parse(item);
const fetchDataFromApi = url => data => fetch(url, data);
const convertData = item => item.toLowerCase();
const convert = (...data) => data.map(item => convertData(item));

const items = [...dataList];                            #2
const getProcessableList = compose(
  parse,
  fetchDataFromApi(url),
  convert
);                                                      #3
const list = getProcessableList(items);                 #4
In this code, the important part is the instruction to getProcessableList(). All the other elements are boilerplate required to present this contrived example. The point is that the intention is declared, but how it gets done isn’t.
Pure functions provide predictable outcomes. If you can predict outcomes, you can combine your functions in innovative ways. Smaller functions can become parts of larger functions, and you don't have to worry about intermediary results. To help you understand function composition, we'll discuss partial application first.
Partial application, or currying, means applying fewer arguments to a function than it requires, each time returning a new function and therefore holding off completing execution until all arguments are available.
Unfortunately, JavaScript has no native support for currying, but through use of syntax, you can emulate this feature. The following listing shows how.
Listing D.60. Currying example
const simpleSum = (x, y) => x + y;            #1
const curriedSum = x => y => x + y;           #2

const simpleResult = simpleSum(2, 3);         #3
const curriedResult = curriedSum(2)(3);       #4

const intermediary = curriedSum(2);           #5
const finalCurried = intermediary(3);         #6
Currying isn’t special. All you’re doing is taking a multiargument function and returning a new function after the application of a single argument.
With this knowledge in place, you can look at composition. Composition is combining multiple functions to create complex flows. This technique allows you to avoid code that uses looping code structures that read like streams of instructions. Instead, you abstract away the complexity of the processing by combining the operations into simple, descriptive functions.
To work properly, the functions need to be small and pure, free of side effects. The functions that are being composed need the inputs and outputs to match, so applying currying is helpful but not mandatory. Having the inputs and outputs match means that a function that takes an integer shouldn’t be composed with a function that takes a string. Although input mismatch is technically acceptable in JavaScript due to the language’s ability to implicitly typecast, it can be a source of bugs that may be difficult to track down.
A simple way to look at this is through an example. The next listing reuses the curriedSum() function from the preceding listing, renamed add().
Listing D.61. Simple composition
const add = x => y => x + y;                     #1
const multiplyFactor = fac => num => num * fac;  #2
const multiplyBy10 = multiplyFactor(10);

const result = multiplyBy10(add(2)(5));          #3
This example is simple and contrived but illustrates the point.
Some libraries provide a function called compose that allows you to handle composition in a more elegant way, although this function isn’t difficult to build by hand. The basic principle is the simple application of the mathematical formula g(f(x)).
Listing D.62. compose function
const compose = (g, f) => x => g(f(x));   #1
const composedCompute = compose(          #2
  multiplyBy10,
  add(2)
);
const result = composedCompute(5);        #3
Beyond these small examples, composition is a tool that can make your code cleaner and easier to understand.
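As a hint of how this scales beyond arithmetic, here's a small sketch, again with invented helper names, that composes two string-processing steps and applies the result across a list:

const compose = (g, f) => x => g(f(x));

const trim = s => s.trim();
const capitalize = s => s.charAt(0).toUpperCase() + s.slice(1);
const tidyName = compose(capitalize, trim);   // trim runs first, then capitalize

const names = ['  simon ', ' clive'];
const tidied = names.map(tidyName);           // ['Simon', 'Clive']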
JavaScript is a forgiving language, which makes it easy to learn, but it’s also easy to pick up bad habits. If you make a little mistake in your code, JavaScript sometimes thinks, “Well, I think you meant to do this, so that’s what I’ll go with.” Sometimes it’s right, and sometimes it’s wrong. This isn’t acceptable for good code, so it’s important to be specific about what your code should do, and you should try to write your code in the way that the JavaScript interpreter sees it.
A key to understanding the power of JavaScript is understanding scope: global scope, function scope, and, with let and const, block scope, all resolved lexically. You want to avoid using the global scope as much as possible, and when you do use it, try to do it in a clean and contained way. Scope inheritance cascades down from the global scope, so it can be difficult to maintain if you're not careful.
JSON is born of JavaScript but isn’t JavaScript; it’s a language-independent data exchange format. JSON contains no JavaScript code and can quite happily be passed between a PHP server and a .NET server; JavaScript isn’t required to interpret JSON.
Callbacks are vital to running successful Node applications, because they allow the central process to effectively delegate tasks that could hold it up. To put it another way, callbacks enable you to use sequential synchronous operations in an asynchronous environment. But callbacks aren’t without their problems. It’s easy to end up in callback hell, having multiple nested callbacks with overlapping inherited scopes making your code hard to read, test, debug, and maintain. Fortunately, you can use named callbacks to address this problem on all levels so long as you remember that named callbacks don’t inherit scope like their inline anonymous counterparts.
Closures and module patterns provide ways to write code that’s self-contained and reusable between projects. A closure enables you to define a set of functions and variables within its own distinct scope, which you can come back to and interact with through the exposed methods. This leads to the revealing module pattern, which is convention-driven to draw specific lines between what’s private and what’s public. Modules are perfect for writing self-contained pieces of code that can interact well with other code, not tripping up over any scope clashes.
Recent changes to the JavaScript specification, such as the addition of class syntax and greater emphasis on functional programming, flesh out the available toolkit to suit whichever style of code you want to use.
A great many other additions to the JavaScript specification aren’t covered here: the rest operator, the spread operator, and generators, to name a few. It’s an exciting time to be working with the JavaScript language.