Q1: Explain the difference between local and global npm package installation
Answer
The main difference between local and global packages is this:
- Local packages are installed in the directory where you run `npm install <package-name>`, and they are put in the `node_modules` folder under this directory.
- Global packages are all put in a single place in your system (exactly where depends on your setup), regardless of where you run `npm install -g <package-name>`.
In general, all packages should be installed locally.
- This makes sure you can have dozens of applications on your computer, all running a different version of each package if needed.
- Updating a global package would make all your projects use the new release, and as you can imagine this might cause maintenance nightmares, as some packages might break compatibility with other dependencies, and so on.
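As a sketch of the difference, npm itself can tell you where each kind of install lands (the install commands are shown as comments so nothing is actually fetched):

```shell
# Local install: puts the package in ./node_modules of the current directory.
#   npm install <package-name>
# Global install: puts it in one system-wide location.
#   npm install -g <package-name>

# npm can report where each kind of install ends up:
npm root        # path of the local node_modules folder for this directory
npm root -g     # path of the global packages folder
```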
Q2: What are the benefits of using Node.js?
Answer
Following are the main benefits of using Node.js:
- Asynchronous and Event-Driven – All APIs of the Node.js library are asynchronous, that is, non-blocking. It essentially means a Node.js based server never waits for an API to return data. The server moves to the next API after calling it, and the event notification mechanism of Node.js helps the server get a response from the previous API call.
- Very Fast – Being built on Google Chrome’s V8 JavaScript Engine, Node.js library is very fast in code execution.
- Single Threaded but Highly Scalable – Node.js uses a single-threaded model with event looping. The event mechanism helps the server respond in a non-blocking way and makes the server highly scalable, as opposed to traditional servers which create a limited number of threads to handle requests. Node.js uses a single-threaded program, and the same program can service a much larger number of requests than a traditional server like Apache HTTP Server.
- No Buffering – Node.js applications never buffer any data. These applications simply output the data in chunks.
Q3: What are the key features of Node.js?
Answer
Let’s look at some of the key features of Node.js.
- Asynchronous event driven IO helps concurrent request handling – All APIs of Node.js are asynchronous. This feature means that if a Node receives a request for some Input/Output operation, it will execute that operation in the background and continue with the processing of other requests. Thus it will not wait for the response from the previous requests.
- Fast in Code execution – Node.js uses the V8 JavaScript runtime engine, the one used by Google Chrome. Node has a wrapper over the JavaScript engine which makes the runtime faster, and hence the processing of requests within Node.js also becomes faster.
- Single Threaded but Highly Scalable – Node.js uses a single thread model for event looping. The response from these events may or may not reach the server immediately. However, this does not block other operations. Thus making Node.js highly scalable. Traditional servers create limited threads to handle requests while Node.js creates a single thread that provides service to much larger numbers of such requests.
- Node.js library uses JavaScript – This is another important aspect of Node.js from the developer’s point of view. The majority of developers are already well-versed in JavaScript. Hence, development in Node.js becomes easier for a developer who knows JavaScript.
- There is an Active and vibrant community for the Node.js framework – The active community always keeps the framework updated with the latest trends in web development.
- No Buffering – Node.js applications never buffer any data. They simply output the data in chunks.
Q4: What is Callback Hell and what is the main cause of it?
Answer
Asynchronous JavaScript, or JavaScript that uses callbacks, is hard to get right intuitively. A lot of code ends up looking like this:
```js
fs.readdir(source, function (err, files) {
  if (err) {
    console.log('Error finding files: ' + err)
  } else {
    files.forEach(function (filename, fileIndex) {
      console.log(filename)
      gm(source + filename).size(function (err, values) {
        if (err) {
          console.log('Error identifying file size: ' + err)
        } else {
          console.log(filename + ' : ' + values)
          aspect = (values.width / values.height)
          widths.forEach(function (width, widthIndex) {
            height = Math.round(width / aspect)
            console.log('resizing ' + filename + ' to ' + height + 'x' + height)
            this.resize(width, height).write(dest + 'w' + width + '_' + filename, function (err) {
              if (err) console.log('Error writing file: ' + err)
            })
          }.bind(this))
        }
      })
    })
  }
})
```
See the pyramid shape and all the `})` at the end? This is affectionately known as callback hell.
The cause of callback hell is when people try to write JavaScript in a way where execution happens visually from top to bottom. Lots of people make this mistake! In other languages like C, Ruby or Python there is the expectation that whatever happens on line 1 will finish before the code on line 2 starts running and so on down the file.
Q5: What is Callback?
Answer
A callback is a function called at the completion of a given task; this prevents any blocking, and allows other code to be run in the meantime. Callbacks are the foundation of Node.js. Callbacks give you an interface with which to say, “and when you’re done doing that, do all this.”
```js
var myCallback = function(data) {
  console.log('got data: ' + data);
};

var usingItNow = function(callback) {
  callback('get it?');
};

// Pass the callback into the function that will invoke it:
usingItNow(myCallback); // prints "got data: get it?"
```
Q6: What is V8?
Answer
The V8 library provides Node.js with a JavaScript engine (a program that converts JavaScript code into lower-level or machine code that microprocessors can understand), which Node.js controls via the V8 C++ API. V8 is maintained by Google, for use in Chrome.
The Chrome V8 engine:
- The V8 engine is written in C++ and used in Chrome and Node.js.
- It implements ECMAScript as specified in ECMA-262.
- The V8 engine can run standalone, or can be embedded into our own C++ programs.
Q7: What is libuv?
Answer
libuv is a C library that is used to abstract non-blocking I/O operations to a consistent interface across all supported platforms. It provides mechanisms to handle file system, DNS, network, child processes, pipes, signal handling, polling and streaming. It also includes a thread pool for offloading work for some things that can’t be done asynchronously at the operating system level.
Q8: What is the difference between returning a callback and just calling a callback?
Answer
```js
return callback();
// some more lines of code - won't be executed

callback();
// some more lines of code - will be executed
```
Of course returning will help the context calling async function get the value returned by callback.
```js
function do2(callback) {
  log.trace('Execute function: do2');
  return callback('do2 callback param');
}

var do2Result = do2((param) => {
  log.trace(`print ${param}`);
  return `return from callback(${param})`; // we could use that return
});

log.trace(`print ${do2Result}`);
```
Output:
```
C:\Work\Node>node --use-strict main.js
[0] Execute function: do2
[0] print do2 callback param
[0] print return from callback(do2 callback param)
```
Q9: Why do we always `require` modules at the top of a file? Can we `require` modules inside of functions?
Answer
Yes, we can, but we should never do it. Node.js always runs `require` synchronously. If you require an external module from within a function, the module will be synchronously loaded when that function runs, and this can cause two problems:
- If that module is only needed in one route handler function it might take some time for the module to load synchronously. As a result, several users would be unable to get any access to your server and requests will queue up.
- If the module you require causes an error and crashes the server you may not know about the error.
Q10: Are you familiar with differences between Node.js modules and ES6 modules?
Answer
The modules used in Node.js follow a module specification known as the CommonJS specification. The recent updates to the JavaScript programming language, in the form of ES6, specify changes to the language, adding things like new class syntax and a module system. This module system is different from Node.js modules. To import an ES6 module, we’d use the ES6 `import` functionality.
ES6 modules use a different loading mechanism than Node.js (CommonJS) modules, so the two formats are not directly interchangeable. If you use a compiler like Babel, you can mix and match module formats.
Q11: Explain the order of Event Listeners execution in Node.js
Problem
Consider this code:
```js
import { EventEmitter } from 'events';

const eventEmitter = new EventEmitter();

eventEmitter.on('myEvent', (data) => {
  console.log(data, '- FIRST');
});

console.log('Statement A');

eventEmitter.on("myEvent", data => {
  console.log(data, '- SECOND');
});

eventEmitter.emit('myEvent', 'Emitted Statement');

console.log("Statement B");
```
What will be the output of this code and why?
Answer
When executed, the above code gives the output:
```
> Statement A
> Emitted Statement - FIRST
> Emitted Statement - SECOND
> Statement B
```
The listeners are executed in the order the listeners are created for an event emitter.
Q12: Explain what is wrong with `async/await` use in a `forEach` loop
Problem
Consider this code:
```js
import fs from 'fs-promise'

async function printFiles () {
  const files = await getFilePaths() // Assume this works fine

  files.forEach(async (file) => {
    const contents = await fs.readFile(file, 'utf8')
    console.log(contents)
  })
}

printFiles()
```
Is it working as expected? Can you fix it?
Answer
The code doesn’t do what you expect it to do. It just fires off multiple asynchronous calls, and the `printFiles` function returns immediately after that.
If you want to read the files in sequence, you indeed cannot use `forEach`. Just use a modern `for…of` loop instead, in which `await` will work as expected:
```js
async function printFiles () {
  const files = await getFilePaths();

  for (const file of files) {
    const contents = await fs.readFile(file, 'utf8');
    console.log(contents);
  }
}
```
Q13: Fix this code. Explain the fix.
Problem
I have this code:
```js
for (var i = 0; i < 5; i++) {
  setTimeout(function () {
    console.log(i);
  }, i);
}
```
But the output is unexpected:
```
5
5
5
5
5
```
Can you fix it to print the following instead?

```
0
1
2
3
4
```
Answer
The reason this happens is that each timeout is created and then `i` is incremented. By the time the callback runs, it looks up the value of `i`, which is 5. The solution is to create a closure so that the current value of `i` is captured. (In modern JavaScript, declaring the loop variable with `let` achieves the same effect.) For example:
```js
for (var i = 0; i < 5; i++) {
  (function(i) {
    setTimeout(function () {
      console.log(i);
    }, i);
  })(i);
}
```
Q14: How does Node.js handle Child Threads?
Answer
Node.js, in its essence, is a single-threaded process. It does not expose child threads and thread-management methods to the developer. Node.js does spawn child threads for certain tasks such as asynchronous I/O, but these run behind the scenes and do not execute any application JavaScript code, nor block the main event loop.
If threading-like support is desired in a Node.js application, there are tools available to enable it, such as the `child_process` module.
Q15: How to avoid Callback Hell in Node.js?
Answer
Node.js internally uses a single-threaded event loop to process queued events. But this approach may lead to blocking the entire process if there is a task running longer than expected. Node.js addresses this problem by incorporating callbacks, also known as higher-order functions. So whenever a long-running process finishes its execution, it triggers the associated callback. Sometimes, this can lead to complex and unreadable code: the more callbacks there are, the longer the chain of returning callbacks becomes.
There are four solutions which can address the callback hell problem:
- Make your program modular – split the logic into smaller modules, then join them together from the main module to achieve the desired result.
- Use the `async/await` mechanism – async/await is another alternative for consuming promises, implemented in ES8 (ES2017). It is a new way of writing promise-based asynchronous code that looks and behaves more like synchronous code.
- Use the promises mechanism – promises give an alternate way to write async code. They either return the result of execution or the error/exception. Implementing promises requires the `.then()` function, which waits for the promise object to settle. It takes two optional function arguments; depending on the state of the promise, only one of them gets called. The first is called if the promise is fulfilled; the second if the promise is rejected.
- Use generators – generators are lightweight routines; they make a function wait and resume via the `yield` keyword. Generator functions use a special syntax, `function* ()`. They can also suspend and resume asynchronous operations using constructs such as promises or thunks, turning synchronous-looking code into asynchronous code.

```js
function* HelloGen() {
  yield 100;
  yield 400;
}

var gen = HelloGen();
console.log(gen.next()); // {value: 100, done: false}
console.log(gen.next()); // {value: 400, done: false}
console.log(gen.next()); // {value: undefined, done: true}
```
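As a sketch of the promise mechanism from the list above (the function names `readConfig` and `connectDb` are hypothetical), each step returns a promise, so the chain stays flat instead of nesting:

```javascript
// Each function returns a promise; .then() chains them without nesting.
function readConfig() {
  return Promise.resolve({ db: 'demo' });
}

function connectDb(config) {
  return Promise.resolve('connected to ' + config.db);
}

readConfig()
  .then((config) => connectDb(config))   // runs when readConfig fulfills
  .then((status) => console.log(status)) // runs when connectDb fulfills
  .catch((err) => console.error(err));   // runs if either step rejects
```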
Q16: How would you read files in sequence in Node.js? Provide a code example
Answer
If you want to read the files in sequence, just use a modern `for…of` loop, in which `await` will work as expected:
```js
async function printFiles () {
  const files = await getFilePaths();

  for (const file of files) {
    const contents = await fs.readFile(file, 'utf8');
    console.log(contents);
  }
}
```
Q17: Is an Event Emitter Synchronous or Asynchronous?
Answer
Events are synchronous and blocking. The events raised by event emitters are synchronously executed by the listeners in the current event loop’s iteration. They are implemented with simple function calls. If you look at the `EventEmitter` code, to send an event to all listeners, it literally just iterates through an array of listeners and calls each listener callback, one after the other.
Consider:
```js
import { EventEmitter } from 'events';

const eventEmitter = new EventEmitter();

eventEmitter.on('myEvent', (data) => {
  console.log(data);
});

console.log('Statement A');

eventEmitter.emit('myEvent', 'Statement B');

console.log("Statement C");
```
When we execute this code snippet, we get the following output in the console:
```
> Statement A
> Statement B
> Statement C
```
Q18: Provide your favourite reasons to use Node.js
Answer
Reasons to use Node.js:
- It runs JavaScript, so you can use the same language on server and client, and even share some code between them (e.g. for form validation, or to render views at either end).
- The single-threaded, event-driven system is fast even when handling lots of requests at once, and also simple, compared to traditional multi-threaded Java or RoR frameworks.
- The ever-growing pool of packages accessible through npm, including client- and server-side libraries/modules, as well as command-line tools for web development. Most of these are conveniently hosted on GitHub, where sometimes you can report an issue and find it fixed within hours! It’s nice to have everything under one roof, with standardised issue reporting and easy forking.
- It has become the de facto standard environment in which to run JavaScript-related tools and other web-related tools, including task runners, minifiers, beautifiers, linters, preprocessors, bundlers and analytics processors.
- It seems quite suitable for prototyping, agile development and rapid product iteration.
Q19: Rewrite promise-based Node.js applications to async/await
Problem
Rewrite this code to `async/await`:
```js
function asyncTask() {
  return functionA()
    .then((valueA) => functionB(valueA))
    .then((valueB) => functionC(valueB))
    .then((valueC) => functionD(valueC))
    .catch((err) => logger.error(err))
}
```
Answer
```js
async function asyncTask() {
  try {
    const valueA = await functionA()
    const valueB = await functionB(valueA)
    const valueC = await functionC(valueB)
    return await functionD(valueC)
  } catch (err) {
    logger.error(err)
  }
}
```
Q20: What are Buffers and why use them in Node.js?
Answer
Pure JavaScript, while great with unicode-encoded strings, does not handle straight binary data very well. This is fine in the browser, where most data is in the form of strings. However, Node.js servers also have to deal with TCP streams and with reading and writing to the filesystem, both of which make it necessary to deal with purely binary streams of data.
The `Buffer` class in Node.js is designed to handle raw binary data. Each buffer corresponds to some raw memory allocated outside V8. Buffers act somewhat like arrays of integers, but aren’t resizable and have a whole bunch of methods specifically for binary data.
```js
var buffer = Buffer.alloc(16)
buffer.write("Hello", "utf-8")
buffer.write(" world!", 5, "utf-8")
buffer.toString('utf-8')
```
Q21: What are Event Emitters?
Answer
`EventEmitter` is a class that helps us create a publisher-subscriber pattern in Node.js.
With an event emitter, we can simply raise a new event from a different part of an application, and a listener will listen to the raised event and have some action performed for the event.
```js
import { EventEmitter } from 'events';

const eventEmitter = new EventEmitter();

// listen to the event
eventEmitter.on('myEvent', () => {
  console.log('Data Received'); // this function is the event listener
});

// publish an event
eventEmitter.emit('myEvent');
```
Q22: What exactly does `module.exports` do in Node.js, and what would a simple example be?
Answer
`module.exports` is the object that’s actually returned as the result of a `require` call.
The `exports` variable is initially set to that same object (i.e. it’s a shorthand “alias”), so in the module code you would usually write something like this:
```js
let myFunc1 = function() { ... };
let myFunc2 = function() { ... };
exports.myFunc1 = myFunc1;
exports.myFunc2 = myFunc2;
```
to export (or “expose”) the internally scoped functions `myFunc1` and `myFunc2`.
And in the calling code you would use:
```js
const m = require('./mymodule');
m.myFunc1();
```
where the last line shows how the result of `require` is (usually) just a plain object whose properties may be accessed.
Q23: What is Stream Chaining in Node.js?
Answer
Chaining is a mechanism to connect the output of one stream to another stream and create a chain of multiple stream operations. It is normally used with piping operations. If we’re piping into a duplex stream, we can chain pipe calls just like we do in Linux:
```js
readableSrc
  .pipe(transformStream1)
  .pipe(transformStream2)
  .pipe(finalWritableDest)
```
The `pipe` method returns the destination stream, which enables us to do the chaining above. For streams `a` (readable), `b` and `c` (duplex), and `d` (writable), we can:
```js
a.pipe(b).pipe(c).pipe(d)

// Which is equivalent to:
a.pipe(b)
b.pipe(c)
c.pipe(d)

// Which, in Linux, is equivalent to:
// $ a | b | c | d
```
Q24: What is a `stream` and what are the types of streams available in Node.js?
Answer
A stream is an abstract interface for working with streaming data in Node.js.
Streams basically provide two major advantages over using other data handling methods:
- Memory efficiency: you don’t need to load large amounts of data in memory before you are able to process it
- Time efficiency: it takes way less time to start processing data, since you can start processing as soon as you have it, rather than waiting till the whole data payload is available
There are 4 types of streams in Node.js:
- Writable: streams to which we can write data. For example, `fs.createWriteStream()` lets us write data to a file using streams.
- Readable: streams from which data can be read. For example, `fs.createReadStream()` lets us read the contents of a file.
- Duplex: streams that are both Readable and Writable. For example, `net.Socket`.
- Transform: streams that can modify or transform the data as it is written and read. For example, in the instance of file compression, you can write compressed data and read decompressed data to and from a file.
Q25: What is Blocking Code in Node.js?
Answer
A blocking call causes results to be returned synchronously.

Performing a blocking system call causes the process to enter the blocked state. Control is let back to the process only after the I/O event that is being waited upon occurs.
```js
const fs = require("fs");
const contents = fs.readFileSync("file.txt", "utf8");
// this line is not reached until the read results are in
console.log(contents);
```
Q26: What is the difference between the synchronous and asynchronous methods of the `fs` module?
Answer
Every method in the `fs` module has a synchronous as well as an asynchronous form.
- Asynchronous methods take a completion callback as their last parameter, and the first parameter of that callback is the error.
- It is preferred to use the asynchronous methods over the synchronous ones, as the former never block program execution, whereas the latter do.
Q27: What is the difference between the `cluster` and `worker_threads` packages in Node.js?
Answer
Effectively, what you are comparing is process-based vs thread-based concurrency. Threads share memory (e.g. `SharedArrayBuffer`) whereas processes don’t.

`cluster`:
- One process is launched on each CPU, and the processes can communicate via IPC.
- Each process has its own memory with its own Node.js (V8) instance. Creating tons of them may create memory issues.
- Great for spawning many HTTP servers that share the same port, because the master process will multiplex the requests to the child processes.

`worker_threads`:
- One process total.
- Creates multiple threads, with each thread having its own Node.js instance (one event loop, one JS engine). Most Node.js APIs are available to each thread, except a few. So essentially Node.js is embedding itself and creating a new thread.
- Shares memory with other threads (e.g. `SharedArrayBuffer`).
- Great for CPU-intensive tasks like processing data or accessing the file system. Because Node.js is single-threaded, synchronous tasks can be made more efficient with workers.
Q28: What is the difference between `require(x)` and ES6 `import x` in Node.js?
Answer
- `import` (ES6) is the future of the JavaScript language, in both Node.js and the browser, and is used in ECMAScript modules (ESM) for loading other modules, either statically or dynamically.
- You can use named imports to selectively load only the pieces you need. That can save memory.
- Import can be asynchronous (and in the current ES6 Module Loader, it in fact is) and can perform a little better.
- ES6 → `import`, `export default`, `export`

```js
// hello.js
function hello() {
  return 'hello'
}
export default hello

// app.js
import hello from './hello'
hello() // returns hello
```
- `require()` is the original way Node.js loads modules and is used in CommonJS modules. `require()` is natively supported in Node.js, but not in browsers (though there are some third-party libraries that provide require-like module loaders for the browser).
- You can have dynamic loading, where the loaded module name isn’t predefined/static, or where you conditionally load a module only if it’s “truly required” (depending on certain code flow).
- Loading is synchronous. That means if you have multiple requires, they are loaded and processed one by one.
- CommonJS → `require`, `module.exports`, `exports.foo`

```js
// hello.js
function hello1() {
  return 'hello1'
}
function hello2() {
  return 'hello2'
}
module.exports = {
  hello1,
  hello2
}

// app.js
const hello = require('./hello')
hello.hello1() // returns hello1
hello.hello2() // returns hello2
```
Q29: What is the preferred method of resolving unhandled exceptions in Node.js?
Answer
Unhandled exceptions in Node.js can be caught at the process level by attaching a handler for the `uncaughtException` event.
```js
process.on('uncaughtException', function(err) {
  console.log('Caught exception: ' + err);
});
```
However, `uncaughtException` is a very crude mechanism for exception handling and may be removed from Node.js in the future. An exception that has bubbled all the way up to the process level means that your application, and Node.js itself, may be in an undefined state, and the only sensible approach would be to restart everything.
The preferred way is to add another layer between your application and the Node.js process, called the domain. Domains provide a way to handle multiple different I/O operations as a single group. So, by having your application, or part of it, running in a separate domain, you can safely handle exceptions at the domain level, before they reach the process level. (Note that the `domain` module is itself deprecated in current Node.js versions.)
Q30: What’s the Event Loop?
Answer
The event loop is what allows Node.js to perform non-blocking I/O operations — despite the fact that JavaScript is single-threaded — by offloading operations to the system kernel whenever possible.

Every I/O operation requires a callback; once the operation is done, its callback is pushed onto the event loop queue for execution. Since most modern kernels are multi-threaded, they can handle multiple operations executing in the background. When one of these operations completes, the kernel tells Node.js so that the appropriate callback may be added to the poll queue to eventually be executed.
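A small sketch of that deferral: synchronous code runs to completion first, then queued callbacks are picked up on later turns of the event loop:

```javascript
// Callbacks scheduled with setTimeout are not run inline; they wait
// until the current synchronous code has finished.
const order = [];

order.push('sync start');

setTimeout(() => order.push('timer callback'), 0); // queued for the timers phase
process.nextTick(() => order.push('nextTick'));    // runs before timers

order.push('sync end');

// At this point only the synchronous pushes have happened; the
// queued callbacks run once the current call stack is empty.
```

When the event loop eventually drains, `order` ends up as `['sync start', 'sync end', 'nextTick', 'timer callback']`: synchronous code first, then the `nextTick` queue, then the timers phase.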