Nodemon restart on error

@Wqrld

Expected behaviour

auto-restart on crash

Actual behaviour

waits for file changes


@remy

That’s kinda the point of nodemon — otherwise nodemon would send your script into a loop during development!

…but if you really want to, you can do this (assuming non-Windows):

nodemon -x 'node app.js || touch app.js'

Otherwise, I’d recommend forever if you need something to keep restarting over and over regardless of the cause of crash.


@salhernandez

Kind of late for this, but another alternative to forever is pm2.
You can use pm2 start server.js --watch which will

  1. watch the project for file changes
  2. restart the server after it crashes
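
For reference, the same setup can also be kept in a pm2 ecosystem file instead of command-line flags. A minimal sketch (the app name and entry file are placeholders):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'server',       // any label you like
      script: 'server.js',  // your entry file
      watch: true,          // restart on file changes
      autorestart: true     // restart after crashes (pm2's default)
    }
  ]
}

Then start it with pm2 start ecosystem.config.js.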

@Leone25

or just make a

start.bat

file and write this inside it:
:a
node index.js
goto:a

it should work
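
A comparable loop for macOS or Linux could be a small shell script (an untested sketch; the file name and delay are arbitrary):

#!/usr/bin/env bash
# start.sh - rerun node every time it exits, whatever the reason
while true; do
  node index.js
  echo "node exited, restarting in 1 second..."
  sleep 1
done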


@nadim-khan

While running app.js through nodemon, which creates a server and routes through pages,
it keeps auto-restarting.

Below is the output I'm getting:
[nodemon] starting node app.js
[nodemon] restarting due to changes…
[nodemon] restarting due to changes…
[nodemon] starting node app.js
[nodemon] restarting due to changes…
[nodemon] restarting due to changes…

Please let me know if any changes need to be made.

@pcnate

@nadim-khan, it sounds like your application is writing or touching a file that triggers the restart.
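
If that is what's happening, one option is to tell nodemon to ignore the paths your app writes to, for example via a nodemon.json next to your package.json (the paths below are just placeholders):

{
  "ignore": ["logs/*", "uploads/*", "data.json"]
}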

@nadim-khan

@pcnate The code runs fine when using the command "node app.js".
Although it sometimes runs fine under nodemon too, most of the time I get the output above.
Still looking for a fix.

@pcnate

of course the code works fine, but do you write to files?

@polyglotdev

I was having the same issue and could not figure out why nodemon would not restart, and it came down to 2 reasons:

  1. I had 2 GraphQL Playground instances running, both of them at localhost:4000/
  2. I did not fully understand how nodemon worked.

The Fix

Go into the file that nodemon is watching, save it, and the server restarts! Hopefully that is somewhat helpful; it worked for me!

Error Message

[nodemon] app crashed - waiting for file changes before starting...
[nodemon] restarting due to changes…
[nodemon] starting babel-node src/index.js
events.js:165
throw er; // Unhandled 'error' event
^

Error: listen EADDRINUSE :::4000
at Server.setupListenHandle [as _listen2] (net.js:1346:14)
at listenInCluster (net.js:1387:12)
at Server.listen (net.js:1475:7)
at /Users/dhallan/Desktop/react-developer-course/graphql-udemy/graphql-basics/node_modules/graphql-yoga/src/index.ts:380:22
at new Promise (<anonymous>)
at GraphQLServer.start (/Users/dhallan/Desktop/react-developer-course/graphql-udemy/graphql-basics/node_modules/graphql-yoga/src/index.ts:378:12)
at Object.<anonymous> (/Users/dhallan/Desktop/react-developer-course/graphql-udemy/graphql-basics/src/index.js:61:8)
at Module._compile (internal/modules/cjs/loader.js:654:30)
at loader (/Users/dhallan/Desktop/react-developer-course/graphql-udemy/graphql-basics/node_modules/babel-register/lib/node.js:144:5)
at Object.require.extensions.(anonymous function) [as .js] (/Users/dhallan/Desktop/react-developer-course/graphql-udemy/graphql-basics/node_modules/babel-register/lib/node.js:154:7)
Emitted 'error' event at:
at Server.emit (events.js:180:13)
at Server.emit (domain.js:422:20)
at emitErrorNT (net.js:1366:8)
at process._tickCallback (internal/process/next_tick.js:178:19)
at Function.Module.runMain (internal/modules/cjs/loader.js:697:11)
at Object.<anonymous> (/Users/dhallan/Desktop/react-developer-course/graphql-udemy/graphql-basics/node_modules/babel-cli/lib/_babel-node.js:154:22)
at Module._compile (internal/modules/cjs/loader.js:654:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:665:10)
at Module.load (internal/modules/cjs/loader.js:566:32)
at tryModuleLoad (internal/modules/cjs/loader.js:506:12)
[nodemon] app crashed - waiting for file changes before starting...

@tmcgann

You could also delay restart if you don’t want nodemon to restart immediately.

nodemon -x 'node app.js || (sleep 10; touch app.js)'
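
If you use this regularly, the command can live in an npm script so you don't have to retype it (a sketch; it assumes a Unix-like shell for sleep and touch, and your entry file may differ):

"scripts": {
  "dev": "nodemon -x \"node app.js || (sleep 10; touch app.js)\""
}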

@sgronblo

@remy if you check forever's GitHub page now, they say "For new installations we encourage you to use pm2 or nodemon".


@ghost

From time to time it does happen, but restarting Node gets past it. I don't really have a way to debug and see the actual error, and the site needs to stay online anyway.

I'd like to know a way to restart the server when this error happens, without having to change a file or anything like that.

I know this is a workaround, etc., but that's what I need to do for now…

ERROR:

[nodemon] app crashed - waiting for file changes before starting...

package.json

 "scripts": {
            "test": "echo \"Error: no test specified\" && exit 1",
            "start": "node app.js",
            "server": "nodemon app --ignore './client/'",
            "client": "npm start --prefix client",
            "dev": "concurrently \"npm run server\" \"npm run client\"",
            "build": "concurrently \"npm run server\" \"npm run client\""
        },

I start it using the server script above.
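
One way to get an auto-restart on crash with the scripts above is to combine your --ignore flag with the -x trick from earlier in this thread (a sketch, assuming a Unix-like shell; adjust the entry file if it isn't app.js):

"server": "nodemon --ignore './client/' -x \"node app.js || touch app.js\""

When app.js crashes, the touch marks the file as changed, so nodemon restarts it instead of waiting.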

A large number of developers use Nodemon during the development and testing of Node.js apps. When your code file changes, Nodemon will automatically restart the program. However, when the app crashes, it will stop:

nodemon app crashed - waiting for file changes before starting

This is deliberate behavior, intended to give you time to read the error messages and figure out what is going on. If Nodemon restarted your Node.js app on its own in this situation, chances are you would end up in an endless loop of errors and your console would be flooded with duplicate messages. If you are aware of this and still want Nodemon to automatically restart your Node.js program on a crash, there is a simple solution.

For Mac and Linux, use the following command:

nodemon -x 'node index.js || touch index.js'

If you’re working with a Windows laptop, use this one:

nodemon -x 'node index.js || copy /b index.js +,,'

If your entry file is not named index.js but app.js, server.js, or anything else, change the command accordingly.

Alternative Solution

If you’ve worked with Node.js for a while then you’re very likely to know about pm2, a popular process manager for Node.js in production. However, pm2 can still be used for development purposes and it performs very well.

Install pm2:

npm i -g pm2

Or:

sudo npm i -g pm2

Then you can run your app and watch for file changes like so:

pm2 start index.js --watch
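
A few pm2 commands you will probably want alongside it (pm2 names the process after the script file by default, so index here):

pm2 logs index      # stream the app's output
pm2 restart index   # restart it manually
pm2 stop index      # stop it
pm2 delete index    # remove it from pm2's process list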


Automatically restart a Node server on crash with Nodemon or pm2

Hi Guy’s Welcome to Proto Coders Point. In this nodejs article let’s look into how to make nodemon automatically get restart when the program crash without waiting for file changes.

Many Node.js developers use Nodemon while developing and testing applications so they can take advantage of its main feature: Nodemon automatically restarts the application whenever it detects a code file change.

However, if the Node.js program crashes due to an error, Nodemon will stop and print this message to the console:

nodemon app crashed - waiting for file changes before starting

This crash message is useful: it gives the developer time to see where and why the Node.js app crashed.

If nodemon restarted immediately after showing the "nodemon app crashed" message, the developer would never get a chance to debug the issue, and the code would keep running in an endless error loop.


How to make nodemon restart automatically when the program crashes, without waiting for file changes

If you are aware of this and still want nodemon to restart the Node.js application automatically on a crash, you can simply run the application using one of the commands below:

Windows

nodemon -x 'node index.js || copy /b index.js +,,'

Linux / Mac

nodemon -x 'node index.js || touch index.js'

Many Node.js developers use pm2 (a process manager for Node.js) in production to keep their application running in the background.

If you use pm2 to run your Node.js application and want it to auto-restart on crash (and on file changes), use the command below:

pm2 start index.js --watch

Julián Duque

December 17, 2019


This blog post is adapted from a talk given by Julián Duque at NodeConf EU 2019 titled "Let it crash!"

Before coming to Heroku, I did some consulting work as a Node.js solutions architect. My job was to visit various companies and make sure that they were successful in designing production-ready Node applications. Unfortunately, I witnessed many different problems when it came to error handling, especially on process shutdown. When an error occurred, there was often not enough visibility on why it happened, a lack of logging details, and bouts of downtime as applications attempted to recover from crashes.

Julián: Okay. So, as Brian said, my name is Julián Duque, that's how it sounds in proper Spanish. I come from a very beautiful town in Colombia called Medellín. So, if you haven't gone there, please visit us. We have an amazing community, as Brian said. Right now, I work as a senior developer advocate for Heroku. So, I live in the United States. Sadly, I'm away from my country, but I'm always constantly in communication with my community, and that's pretty much through the two main conferences that I organize. One is NodeConf Colombia and the other is JSConf Colombia. So, I know if you are like me right now, you are needing coffee. I'm needing coffee too. It's super early. So, please don't crash now. Let's wait until my talk finishes, and we can have some coffee to keep us awake.

Julián: So, a little bit of some background about this talk, why I presented this. These are pretty much lessons learned while I was working at NodeSource, previously. I was doing consulting work as a solutions architect, pleasing the customer, making sure they were using Node.js properly and they were successfully using Node. And I saw a lot of different bad patterns out there on how other companies were doing error handling, and especially when the process were crashing or the process were dying. They didn’t have enough visibility. They didn’t have logging strategies in place. They were missing the very important information about why the Node processes were having issues or were crashing. They were experiencing downtime, and we started to collect in a set of best practices and recommendations for them, that are aligned with the overall Node.js community.

Julián: If you go to the documentation, you'll find pretty much the same recommendations that I'm going to be speaking about today. We add a couple more things to make sure you have a very good exit strategy for your Node.js processes. These best practices apply mostly to web and network based applications, because we are also going to cover graceful shutdowns, but you can use them for other types of Node.js applications that are constantly running. And Node, sadly, is not Erlang. If you know about Erlang or Elixir, "let it crash" is a term that is very common in that community. When I started learning Erlang back in 2014, I loved the fault tolerance options that this platform and language has. And I always think about how to bring the same experience into Node.js. It's not the same, because you can't do hot code reloading or function swapping on Node. You can do those things in Erlang, but still, Node is pretty lightweight, and you can easily restart and recover from a crash.

Julián: First, before getting into the bad place, or when bad things happen, how do we make sure that everything is good? What do we need to do to our Node applications to make sure they are running properly? So, first, as a recommendation, and there is going to be a workshop later about this specific thing, Cloud Native JS, don't miss this workshop by Beth. She's going to also mention how to add health checks to your Node.js processes. So, pretty much as our recommendation, add a health check route. It's a simple route that is going to return a 200 status code, and you will need to set something up to monitor that route. You can do it at your load balancer level. If you are using a reverse proxy or a load balancer like nginx or HAProxy, or you're using ELB or ALB, any type of application that sits as the top layer of your Node.js process can be constantly monitoring that the health check is returning okay. So you are making sure that everything is fine.

Julián: And also, rely on APM, some tools that are going to monitor the performance and the health of your Node.js Processes. So, in order to make sure that everything is running fine, you will need to have tools, some very known tools, New Relic, App Dynamics, Dynatrace, and N|Solid. A lot of them in the market will give you way more visibility around the health of your Node.js processes, and you can live in peace when you are making sure your Node is running properly. But what to do if something bad and unexpected happens? So, what should we do with our Node.js processes? Letting them crash. If something bad and unexpected happened, I will let my Node.js process crash, but in order to be able to do it and drive, we will need to implement a set of best practices and follow some steps to make sure that the application is going to restart properly and continue running and serving to our customers and clients.

Julián: Before letting it crash, we will need to learn about the process lifecycle, especially on the shutdown side of things, some error handling best practices. There is going to be also another very recommended workshop around it. I’m not going to be covering how to properly handle errors in Node.js, just on shutdown, and this is pretty much so you stop worrying about unexpected errors and increased visibility of your Node.js processes, increased visibility of what happened when your process crashes and what might be the reason, so you can fix it and iterate over your application. So, similar to coming back to the Erlang concept, a Node.js process is very lightweight. It’s a small in memory. It doesn’t have a very big memory footprint, and the idea is to keep the processes very lean at a startup, so they can start like super fast. If you have a lot of operations, like high intensive CPU or synchronous operation at a startup, it might decrease the ability to restart super fast, your Node.js processes.

Julián: So, try to keep your processes very lean on a startup. Use the strategies, like prebuilding, so you are not going to build on a startup or on the bootstrap of your process. Do everything before you are going to start your process, and if something unexpected and bad happens, just exit and start a new Node.js process as soon as possible to avoid downtime. And pretty much this is called a restart. You’re late in the process crash, and then start the new one. But we will need to have some tools in place and settings to be able to have something that restarts all our Node.js processes. So, let’s learn how to exit a Node.js process. So, there are two common methods on the process module that will help you to shut down or terminate a Node.js process. The most common one is the process.exit. You can pass an exit code, the zero if it’s a success exit or higher than zero, commonly one, if it’s a failure. And this pretty much instructs Node.js to end a process with a specified exit code.

Julián: And there is the other one, which is a process.abort. With the process.abort, it’s going to cause Node.js process to exit immediately and generate the core file, if your operating system has core dumps enabled. So, in order to be able to have more visibility on postmortem debugging, to be able to see what happened or what clashes your Node.js process. If there is a memory issue, you can call process.abort, it will generate a core dump, and then you can use tools like llnode, which is a plugin for lldb to do a C and C++ debugging of the core dump, and to see what might happen in the native side of Node.js when your process scratch. So those are the two options you have to exit the Node.js process. How to handle exit events? So Node.js, it needs two different or two main events when your Node.js process is exiting. One is the beforeExit. So the beforeExit, it’s a handle that can make asynchronous calls and the event loop will continue to work until it finishes.

Julián: So before the process is ending, you can schedule more work on the event loop, do more a synchronous task and then you can clean up your process. This event is not immediate on conditions that are causing explicit termination like on an uncaught exception or when I explicitly call process that exit. So this is all other exit scenarios. And the exit event, it’s a handle, also can’t make a synchronous call. Only synchronous calls can happen in this part of the process life cycle because the event loop doesn’t have any more work to do. So the event loop is paused in here. So, if you try to do any asynchronous calling here, is not going to be executed. Only synchronous calls can happen here and this event is immediate when process.exit is called explicitly. It’s commonly used if you want to log at the end, some information when you process.exit, my process.exit with the specific exit code and you want to add some more context around the state of your application at the time that the process exits.

Julián: Some examples how to use it. You attach those events on the process module. The beforeExit can do asynchronous code so that setTimeout, even though the event loop is pause at that moment when you are scheduled more asynchronous work, it will receive the event loop and continue until there is no more work to do. There’s one thing I want to mention here is that normally a Node.js process exits when there is no more work is scheduled on the event loop. When there is nothing else on the event loop, a process is going to exit. How does a server keeps running? Because it has a handle register on the event loop, like a socket waiting for connections and that’s why a web server is constantly running until you close the server or you interrupt the process. Otherwise, if there is something register on the event loop, the Node.js process is going to continue running. So in this case, I execute setTimeout, schedule more work, it will continue working on until there is no more left to do.

Julián: On process.exit, pretty much just synchronous calls. I can do anything here with the event loop. The event loop is thoroughly paused, useful for logging information or saving the state of the application and exit. There is are a couple of signal events that are going to be useful on shutdown. There is the SIGTERM and SIGINT. SIGTERM, it’s normally immediate when a process monitor send a signal termination to your Node.js process to tell them that there is going to be a successful way to shut down your process. When you execute on systemd or using upstart, when you send stop that service or stop that process, it’s going to sending that SIGTERM to your Node.js process and you can handle that event and do some work on that specific part of the life cycle. And the SIGINT, it’s an interruption. It is immediate when you interrupt the Node.js processes, normally when you do control-C, when you are running on the console, you can also capture that event and do some work around it.

Julián: So these are two ways to expectedly finalize a Node.js process. So these two events are considered a successful termination. This is why I’m exiting here with the exit code zero because it is something that is expected. I say I don’t want this process to continue running. And there is also the error events. So there are two main error events. One is the uncaughtException, the famous uncaughtException. And recently, in promises we’re introducing to Node, the unhandledRejection. So the uncaught exception is immediate when a JavaScript error is not properly handled. So it pretty much represents a programmer error or represents a bug in your code. If an uncaughtException happens, the recommendation is to always crash your process, let it crash. Don’t try to recover from an uncaughtException because it might give you some troubles. And while even though, the community is not totally agree on the second one.

Julián: I will say the same for an unhandledRejection. An unhandledRejection, it is immediate when a promise is rejected and there is no handle attached to the promise. So there is no catch attached to the promise. It my represent an operational error, it my represent a programmer error, so it depends of what happened here. But in both of those cases, it’s better to log as much information as possible. Treat those as P1 issues that needs to be fixed in the next iteration or in the next release. So if you don’t have any strategy in place to be able to identify why your processes are crashing and you are not fixing and handling those properly, your application are going to remain having box. So if it is an uncaught exception, that’s a bug, that’s a programming error, that is something that is not expected. Please crash, log and file an issue, so that needs to be fixed.

Julián: If it is an unhandled rejection, see if this is a programmer error or if it’s an operational error that needs to be handled and go update the code, add the proper handling to that promise and continue with your job. So as I say in both cases an error event, it’s a cause of termination for your Node.js process. Always exit the process with an exit code different than zero. So it’s going to be one. So your process monitor and your logs know that it was a failure and as I say, don’t try to recover from an uncaught exception. While I was working as a consultant, I saw a lot of people trying to do a lot of magic to avoid the Node.js processes dying by adding some complex logic on uncaught exception. And that always ended your application on a bad state. They were having memory leaks or having sockets hanging and it was a mess. So it’s cheaper to let it crash, start a new process from a scratch and continue receiving more requests.

Julián: So a couple of examples on uncaught exception and unhandled rejection. The uncaught exception received such an argument and error instance. So you get the information about the error that was thrown or that wasn’t handling your Node.js code. And the unhandled rejection is going to give you a reason which can be an error instance tool and it will give you the promise that was not properly handled. So those are useful information that you can have in your logs to have more information where things are failing in your code. But we saw how to handle the events, how to handle the errors, some of the best practices, but how to do it properly? What we need to do a better to be able to have a very good shutdown a strategy for Node.js processes? So the first one is running more than one process at the same time. So rely on scaling load balancer processes, having more than one. So in that way, if one of those processes crashes, there is another process that is alive and it’s able to receive requests.

Julián: So it will give you time to do the restart and all the requests that are coming in. And maybe the only issue you are going to have are with the requests that were already happening in the Node.js process that crashes. But this is going to give you a little bit more leverage and prevent downtime. And what do you use for load balancing? Use whatever you have in hand. If it’s nginx or HAProxy as a reverse proxy for your Node.js applications. If you are on AWS or on the cloud, you can use their elastic load balancer application, load balancers or the order load-balancer solutions that cloud offers. If you are on Kubernetes, you can use Ingress or other different in the load balancer strategies for your application. So pretty much make sure that you have more than one Node.js process running, so you can be more in peace if one of those processes crashes. You will need to have process monitoring and process monitoring needs a pretty much something that is running in your operating system or an application that it’s constantly checking if your process is alive or not.

Julián: If it crashes, if there is a failure, the process monitor is in charge of restarting the process. So, the recommendation is to always use the native process monitoring that it’s available on your operating system. If it’s Unix or Linux, you can use systemd or upstart, specifically adding the restart on failure or respond when you are working on upstart. If you are using containers, use whatever is available. Docker has the restart option, Kubernetes has the restart policy and you can also configure your processes to restart when it fails to retry a number of times. So you don’t go into a crazy error, that is going to constantly make your application crash and you end up in the crash loop. So you can add some retries into there but always have a process monitoring in place. If you can’t use any of these tools as a last resource—but not recommended—use a Node.js process monitor like PM2 or forever.

Julián: But I will not recommend these to any customer of mine or any friend, but if you don’t have any more resource, if you can use the native stuff in your operating system or if you are not using containers, you can go this way. These tools are good for development. Don’t get me wrong. If you are logging on the development and they’re very good tools to restart your processes when the crashes. But for production, they might not be the best. Let’s talk about little bit about a graceful shutdown. So we have a web server running. The web server is getting request and it’s getting connection. Sometime we have some established connections between our customers or clients and the server. But what happens when the process crashes? When the process crashes, if we are not doing a graceful shutdown, some of those sockets are going to be kept hanging and are going to wait until a timeout has been reached and that might cause down time and a decreased experience of your users. So it is better. So setting up an un-reference timeout is going to let the server do its job.

Julián: So, we will need to close the server, it’s explicitly say to the server, stop receiving connections so they can reject the new connection. So new connections are going to the new or to the other Node.js process that is running through the load balancer and it will be able to send a TCP packet to the clients that are already connected. So they are going to be finishing the connection immediately when the server dies. They are not going to stay waiting until a timeout is reach out. They are going to be closing that connection and on the next retry, we expect that the process has restarted at that point or they go to another process that is running. So one example of that, un-reference time out, when we are handling the signal or error event, which is the shutdown part of the life cycle. What we can do, it’s too explicitly call server.close. If it is an instance of the net server, which is the same one that uses the http or https, Node modules, you can pass a callback.

Julián: So when it finishes closing the connection, it will exit the process successfully. But we will need to have our timeout in place because we don’t want to wait for a long time. Imagine if we had a lot of different clients connected that it’s taken a lot of time to clean up those processes. We need to have some way to have an internal timeout. So here, we are scheduling a new timeout, but that timeout is not on the event loop. That last part the, unref is not the scheduling the timeout on the event loop, so it is not adding more work to the event loop. So when the timeout is reach or the server close callback is reach, either of those paths are going to close the Node.js process. So this is a race between the two, between your time out that is not in the event loop or between the server close, whichever works better. And what timeout time we do need to put here depending on the needs of your applications.

Julián: We had customers that had the need to have very few timeouts or a small time out because they were doing a lot of real time trading and they needed the processes to restart as far as possible. There are others that can have longer timeouts to lead, or when the connection finishes, so this depends of the use case. If you don’t add the unref in here, since this timeout is going to be a schedule on the event loop, it’s going to wait until it finishes and the process is going to end. So this is like a safeguard. So there is no more work schedule on the event loop while we are exiting our process. Logging, this is one of the most important parts of having a very good exit strategy for Node.js processes. So implement the robust logging strategy for your application, especially on shutdown. If an error happens, please log as much information as possible. An error object will contain not only the message or the cost of the error, but it will also contain the stack trace.

Julián: So if you log the stack trace, you will be able to come back to your code and fix and look specifically why it failed and where it fail. And you can rely on libraries, like pino or winston and use transport to store the logs in an external service. You can use like Splunk or Papertrail or use whatever you like to store the logs. But have a way to always go back to the logs, search for those uncaught exceptions and unhandled rejections and being able to identify why your processes are crushing. Fix those issues and continue with your work. So how can we put these altogether? I have some pattern I use on my projects but there is also a lot of modules on NPM that are going to do the same thing even better than the approach I’m following here. So this is a pattern I use. I create a module called terminate or I use a file called terminate. I pass the server like the instance of that server that I’m going to be closing and some configuration options if I want to enable core dumps or not, and the timeout.

Julián: Usually when I want to enable the core dump of Node, I use an environment variable. When I am going to do some performance testing on my application or I want to replicate the error, I enable the core dump. I let it crash with the process.abort, I check out the core dump and get more information about it. So here, I have our exit function that switches between the abort or the process.exit, depending of the configuration you have here. And the rest, I’m returning a function that returns a function and that function is the one that I’m going to be using as the exit handler. And this is pretty much the code that I’m going to be using for uncaught exceptions, unhandled rejections, and signals. And here, log as much as possible. I’m using console log for simplicity, but please use a proper logging library here. And pretty much if there is an error and if that is an instance of the error, I want to get information about the message and the stack trace. And at the end, I’m going to be trying doing the graceful shutdown.

Julián: So this is the same thing I explained before. I will close this server and also I will have a timeout to also close the server after that timeout happens. So it depends whatever ends first. And how to use this small module I have here, this is as an example, I have an issue to the server. I have my terminate code that I use for my project. I create an exit handler with the options with the server I’m running, with the different parameters I want to pass into my exit handler and I attach that function into the different events. So here exit handler, on uncaught exception and unhandled rejection, I’m going to return an exit code of one and I can add a message to my logs to say what type of error or what type of handling was this, and also with the signals. And with the signals, I’m passing an exit code of zero because it is something that there is going to be successful.

Julián: So this is pretty much what I have for today and the presentation, some resources that are going to be useful for you. Please don't miss Ruben Bridgewater's workshop later today. It's going to be called "Error Handling: doing it right". Again, it's going to explain how to avoid getting here. How to avoid getting into the uncaught exception side of things? How to properly create the error objects to have more visibility? How to handle promise rejections? So, that is going to be a very good presentation, and also the Cloud Native JS one by Beth. She's going to mention also how to add monitoring and health checks to applications. So those are going to be good things if you want to run Node.js properly in production. Some npm modules to take a look at that pretty much solve the issue I was talking about today. There is a module I like, terminus, by the team at GoDaddy.

Julián: It supports adding health checks to your application. It has a C signal handlers too. It has a very good graceful shutdown strategy. Way more complex than the one I presented you. This is something that you can add to your projects pretty easily. Just create an instance of terminus, configure it, and add the different handlers there. There is another module called stoppable. Stoppable is the decorator over the server class that is going to be able to implement not a close function, but a stop function and it’s going to be also doing a lot of things around a graceful shutdown. And there is also a module that pretty much is what I presented today. It’s called http-graceful-shutdown. You also pass an instance of your HTTP server and it has different handlers, you can see what happened when there is an error or what signals I’m going to be monitoring.

Julián: It's pretty much… they're all going to be resources that are going to simplify your life and make you better at running Node in production, and you will be able to let it crash. One last thing, I want to invite you to NodeConf Colombia, so save the date. This is going to happen June 26 and 27, 2020. It's going to happen in Medellín, Colombia. More information at nodeconf.co. The CFP is not open yet, but I expect a lot of you all sending proposals to come to Medellín. We pay for travel, we pay for a hotel. And if you want to know a little bit about the experience of speaking at a conference in Colombia, you can ask James, you can ask Anna, and I think you can ask Brian. There are a couple of folks here that have spoken there. And thank you very much. This is it.

We started to assemble a collection of best practices and recommendations on error handling, to ensure they were aligned with the overall Node.js community. In this post, I’ll walk through some of the background on the Node.js process lifecycle and some strategies to properly handle graceful shutdown and quickly restart your application after a catastrophic error terminates your program.

The Node.js process lifecycle

Let’s first explore briefly how Node.js operates. A Node.js process is very lightweight and has a small memory footprint. Because crashes are an inevitable part of programming, your primary goal when architecting an application is to keep the startup process very lean, so that your application can quickly boot up. If your startup operations include CPU intensive work or synchronous operations, it might affect the ability of your Node.js processes to quickly restart.

A strategy you can use here is to prebuild as much as possible. That might mean preparing data or compiling assets during the building process. It may increase your deployment times, but it’s better to spend more time outside of the startup process. Ultimately, this ensures that when a crash does happen, you can exit a process and start a new one without much downtime.
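
For instance, if you transpile your code with Babel, doing it once at build time rather than at boot keeps startup lean. A hypothetical package.json sketch (script names and paths are illustrative):

"scripts": {
  "build": "babel src --out-dir dist",
  "start": "node dist/index.js"
}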

Node.js exit methods

Let’s take a look at several ways you can terminate a Node.js process and the differences between them.

The most common function to use is process.exit(), which takes a single argument, an integer. If the argument is 0, it represents a successful exit state. If it’s greater than that, it indicates that an error occurred; 1 is a common exit code for failures here.

Another option is process.abort(). When this method is called, the Node.js process terminates immediately. More importantly, if your operating system allows it, Node will also generate a core dump file, which contains a ton of useful information about the process. You can use this core dump to do some postmortem debugging using tools like llnode.

Node.js exit events

As Node.js is built on top of JavaScript, it has an event loop, which allows you to listen for events that occur and act on them. When Node.js exits, it also emits several types of events.

One of these is beforeExit, and as its name implies, it is emitted right before a Node process exits. You can provide an event handler which can make asynchronous calls, and the event loop will continue to perform the work until it’s all finished. It’s important to note that this event is not emitted on process.exit() calls or uncaughtExceptions; we’ll get into when you might use this event a little later.

Another event is exit, which is emitted only when process.exit() is explicitly called. As it fires after the event loop has been terminated, you can’t do any asynchronous work in this handler.

The code sample below illustrates the differences between the two events:

process.on('beforeExit', code => {
  // Can make asynchronous calls
  setTimeout(() => {
    console.log(`Process will exit with code: ${code}`)
    process.exit(code)
  }, 100)
})

process.on('exit', code => {
  // Only synchronous calls
  console.log(`Process exited with code: ${code}`)
})

OS signal events

Your operating system emits events to your Node.js process, too, depending on the circumstances occurring outside of your program. These are referred to as signals. Two of the more common signals are SIGTERM and SIGINT.

SIGTERM is normally sent by a process monitor to tell Node.js to expect a successful termination. If you’re running systemd or upstart to manage your Node application, and you stop the service, it sends a SIGTERM event so that you can handle the process shutdown.

SIGINT is emitted when a Node.js process is interrupted, usually as the result of a control-C (^-C) keyboard event. You can also capture that event and do some work around it.

Here is an example showing how you may act on these signal events:

process.on('SIGTERM', signal => {
  console.log(`Process ${process.pid} received a SIGTERM signal`)
  process.exit(0)
})

process.on('SIGINT', signal => {
  console.log(`Process ${process.pid} has been interrupted`)
  process.exit(0)
})

Since these two events are considered a successful termination, we call process.exit and pass an argument of 0 because it is something that is expected.

JavaScript error events

At last, we arrive at higher-level error types: the error events thrown by JavaScript itself.

When a JavaScript error is not properly handled, an uncaughtException is emitted. These suggest the programmer has made an error, and they should be treated with the utmost priority. Usually, it means a bug occurred on a piece of logic that needed more testing, such as calling a method on a null type.

An unhandledRejection error is a newer concept. It is emitted when a promise is not satisfied; in other words, a promise was rejected (it failed), and there was no handler attached to respond. These errors can indicate an operational error or a programmer error, and they should also be treated as high priority.

In both of these cases, you should do something counterintuitive and let your program crash! Please don’t try to be clever and introduce some complex logic trying to prevent a process restart. Doing so will almost always leave your application in a bad state, whether that’s having a memory leak or leaving sockets hanging. It’s simpler to let it crash, start a new process from scratch, and continue receiving more requests.

Here’s some code indicating how you might best handle these events:

process.on('uncaughtException', err => {
  console.log(`Uncaught Exception: ${err.message}`)
  process.exit(1)
})

We’re explicitly “crashing” the Node.js process here! Don’t be afraid of this! It is more likely than not unsafe to continue. The Node.js documentation says,

Unhandled exceptions inherently mean that an application is in an undefined state…The correct use of ‘uncaughtException’ is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc) before shutting down the process. It is not safe to resume normal operation after ‘uncaughtException’.

process.on('unhandledRejection', (reason, promise) => {
  console.log('Unhandled rejection at ', promise, `reason: ${reason}`)
  process.exit(1)
})

unhandledRejection is such a common error that the Node.js maintainers have decided it should really crash the process, and they warn us that in a future version of Node.js, unhandled rejections will terminate the process:

[DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Run more than one process

Even if your process startup time is extremely quick, running just a single process is a risk to safe and uninterrupted application operation. We recommend running more than one process and using a load balancer to handle the scheduling. That way, if one of the processes crashes, there is another process that is alive and able to receive new requests. This is going to give you a little bit more leverage and prevent downtime.

Use whatever you have on-hand for the load balancing. You can configure a reverse proxy like nginx or HAProxy to do this. If you’re on Heroku, you can scale your application to increase the number of dynos. If you’re on Kubernetes, you can use Ingress or other load balancer strategies for your application.

Monitor your processes

You should have process monitoring in-place, something running in your operating system or an application environment that’s constantly checking if your Node.js process is alive or not. If the process crashes due to a failure, the process monitor is in charge of restarting the process.

Our recommendation is to always use the native process monitoring that’s available on your operating system. For example, if you’re running on Unix or Linux, you can use the systemd or upstart commands. If you’re using containers, Docker has a --restart flag, and Kubernetes has restartPolicy, both of which are useful.
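
As an illustration, a minimal systemd unit with automatic restart might look like the sketch below (the service name, paths, and timings are placeholders):

# /etc/systemd/system/myapp.service
[Unit]
Description=My Node.js app

[Service]
ExecStart=/usr/bin/node /srv/myapp/index.js
# Restart only when the process exits with a non-zero code
Restart=on-failure
# Wait two seconds before restarting
RestartSec=2

[Install]
WantedBy=multi-user.target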

If you can’t use any existing tools, use a Node.js process monitor like PM2 or forever as a last resort. These tools are okay for development environments, but I can’t really recommend them for production use.

If your application is running on Heroku, don’t worry—we take care of the restart for you!

Graceful shutdowns

Let’s say we have a server running. It’s receiving requests and establishing connections with clients. But what happens if the process crashes? If we’re not performing a graceful shutdown, some of those sockets are going to hang around and keep waiting for a response until a timeout has been reached. That unnecessary time spent consumes resources, eventually leading to downtime and a degraded experience for your users.

It's best to explicitly stop receiving connections, so that the server can wind down its existing connections while it's recovering. Any new connections will go to the other Node.js processes running behind the load balancer.

To do this, you can call server.close(), which tells the server to stop accepting new connections. Most Node servers implement this class, and it accepts a callback function as an argument.

Now, imagine that your server has many clients connected, and the majority of them have not experienced an error or crashed. How can you close the server while not abruptly disconnecting valid clients? We’ll need to use a timeout to build a system to indicate that if all the connections don’t close within a certain limit, we will completely shutdown the server. We do this because we want to give existing, healthy clients time to finish up but don’t want the server to wait for an excessively long time to shutdown.

Here’s some sample code of what that might look like:

process.on('<signal or error event>', _ => {
  server.close(() => {
    process.exit(0)
  })
  // If server hasn't finished in 1000ms, shut down process
  setTimeout(() => {
    process.exit(0)
  }, 1000).unref() // Prevents the timer from keeping the event loop alive
})

Logging

Chances are you have already implemented a robust logging strategy for your running application, so I won't get into that too much here. Just remember to log with the same rigorous quality and amount of information when the application shuts down!

If a crash occurs, log as much relevant information as possible, including the errors and stack trace. Rely on libraries like pino or winston in your application, and store these logs using one of their transports for better visibility. You can also take a look at our various logging add-ons to find a provider which matches your application’s needs.
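
A minimal pino setup for the shutdown path might look like this sketch (where and how you ship the logs is up to you):

const pino = require('pino')
const logger = pino()   // structured JSON logs to stdout by default

process.on('uncaughtException', err => {
  // record the message and stack trace before exiting
  logger.fatal({ message: err.message, stack: err.stack }, 'uncaught exception, shutting down')
  process.exit(1)
})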

Make sure everything is still good

Last, and certainly not least, we recommend that you add a health check route. This is a simple endpoint that returns a 200 status code if your application is running:

// Add a health check route in express
app.get('/_health', (req, res) => {
  res.status(200).send('ok')
})

You can have a separate service continuously monitor that route. You can configure this in a number of ways, whether by using a reverse proxy, such as nginx or HAProxy, or a load balancer, like ELB or ALB.

Any application that acts as the top layer of your Node.js process can be used to constantly monitor that the health check is returning OK. These will also give you way more visibility around the health of your Node.js processes, and you can rest easy knowing that your Node processes are running properly. There are some great monitoring services to help you with this in the Add-ons section of our Elements Marketplace.

Putting it all together

Whenever I work on a new Node.js project, I use the same function to ensure that my crashes are logged and my recoveries are guaranteed. It looks something like this:

function terminate (server, options = { coredump: false, timeout: 500 }) {
  // Exit function
  const exit = code => {
    options.coredump ? process.abort() : process.exit(code)
  }

  return (code, reason) => (err, promise) => {
    if (err && err instanceof Error) {
      // Log error information, use a proper logging library here :)
      console.log(err.message, err.stack)
    }

    // Attempt a graceful shutdown
    server.close(exit)
    setTimeout(exit, options.timeout).unref()
  }
}

module.exports = terminate

Here, I’ve created a module called terminate. I pass the instance of that server that I’m going to be closing, and some configuration options, such as whether I want to enable core dumps, as well as the timeout. I usually use an environment variable to control when I want to enable a core dump. I enable them only when I am going to do some performance testing on my application or whenever I want to replicate the error.

This exported function can then be set to listen to our error events:

const http = require('http')
const terminate = require('./terminate')
const server = http.createServer(...)

const exitHandler = terminate(server, {
  coredump: false,
  timeout: 500
})

process.on('uncaughtException', exitHandler(1, 'Unexpected Error'))
process.on('unhandledRejection', exitHandler(1, 'Unhandled Promise'))
process.on('SIGTERM', exitHandler(0, 'SIGTERM'))
process.on('SIGINT', exitHandler(0, 'SIGINT'))
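
Following the environment-variable approach mentioned above, the coredump option could be toggled at startup (the variable name here is arbitrary):

const exitHandler = terminate(server, {
  coredump: process.env.ENABLE_COREDUMP === '1',  // hypothetical env var
  timeout: 500
})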

Additional resources

There are a number of existing npm modules that pretty much solve the aforementioned issues in similar ways. You can check these out as well:

  • @godaddy/terminus
  • stoppable
  • http-graceful-shutdown

Hopefully, this information will simplify your life and enable your Node app to run better and safer in production!

Errors are part of a developer’s life. We can neither run nor hide from them. While building production-ready software, we need to manage errors effectively to:

  1. improve the end-user experience; i.e., providing correct information and not the generic message “Unable to fulfill the request”
  2. develop a robust codebase
  3. reduce development time by finding bugs efficiently
  4. avoid abruptly stopping a program

Because you are here, I assume you are probably a web developer with a JavaScript background. Let’s take the typical use case of reading a file in Node.js without handling an error:

var fs = require('fs')

// read a file
const data = fs.readFileSync('/Users/Kedar/node.txt')

console.log("an important piece of code that should be run at the end")

Note that Node.js should execute some critical piece of code after the file-reading task. When we run it, we receive the output as shown below:

Output:

$node main.js
fs.js:641
  return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode);
                 ^

Error: ENOENT: no such file or directory, open '/Users/Kedar/node.txt'
    at Error (native)
    at Object.fs.openSync (fs.js:641:18)
    at Object.fs.readFileSync (fs.js:509:33)
    at Object.<anonymous> (/home/cg/root/7717036/main.js:3:17)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.runMain (module.js:604:10)

Here, the program ends abruptly without executing the necessary code. We will discuss the revised code with error handling later in the try...catch blocks section. This example demonstrates only one of many issues faced without error handling. Let’s take a look at what we’ll cover to better understand how we can handle errors:

  • Error
    • Programmer errors
    • Operational errors
  • Error handling techniques
    • try…catch blocks
    • The callback function
    • Promises
    • Async/await
    • Event emitters
  • Handling errors
    • Retry the operation
    • Report the failure to the client
    • Report failures directly to the top of the stack
    • Crash immediately

Before we learn about error handling, let’s understand an Error in Node.js.

An Error in Node.js is an extension of the generic JavaScript Error object. An error can be constructed and thrown, or passed to some function. Let's check out some examples:

throw new Error('bad request'); // throwing new error
callback_function(new Error('connectivity issue')); // passing error as an argument

While creating an error, we need to pass a human-readable string as an argument to understand what went wrong when our program is working incorrectly. In other words, we are creating an object by passing the string to the Error constructor.

You also need to know that errors and exceptions are different things in JavaScript, particularly in Node.js. Errors are instances of the Error class, and when you throw an error, it becomes an exception.
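
A tiny sketch of that distinction:

const err = new Error('bad request')  // just an ordinary object at this point

try {
  throw err                           // throwing it turns it into an exception
} catch (caught) {
  console.log(caught === err)         // true - the very same object is caught
}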

Humans do not cause all errors. There are two types of errors, programmer and operational. We use the phrase “error” to describe both, but they are quite different in reality because of their root causes. Let’s have a look at each one.

Programmer errors

Programmer errors depict issues with the program written — bugs. In other words, these are the errors caused by the programmer’s mistakes while writing a program. We cannot handle these errors properly, and the only way to correct them is to fix the codebase. Here are some of the common programmer errors:

  • Array index out of bounds — trying to access the seventh element of the array when only six are available
  • Syntax errors — failing to close the curly braces while defining a JavaScript function
  • Reference errors — accessing a function or variables that are not defined
  • Deprecation errors and warnings — calling an asynchronous function without a callback
  • Type error — x object is not iterable
  • Failing to handle operational errors

Operational errors

Every program faces operational errors (even if the program is correct). These are issues during runtime due to external factors that can interrupt the program’s normal flow. Unlike programmer errors, we can understand and handle them. These are some examples:

  • Unable to connect server/database
  • Request timeout
  • Invalid input from the user
  • Socket hang-up
  • 500 response from a server
  • File not found

You might wonder why this segregation is necessary when both have the same effect: interrupting the program. Well, you might have to act based on the type of error. For example, restarting the app may not be a suitable action for a file-not-found error (operational error), but restarting might be helpful when your program fails to catch a rejected promise (programmer error). A rough sketch of this idea follows below.
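
Here the error codes and the choices made are only illustrative:

function handleError(err) {
  if (err.code === 'ENOENT') {
    // operational: a file is missing - report it and keep the process alive
    console.error(`File not found: ${err.path}`);
  } else if (err.code === 'ETIMEDOUT') {
    // operational: a request timed out - retrying may be reasonable
    console.error('Request timed out, will retry');
  } else {
    // anything else is likely a programmer error - log it and crash
    console.error(err.stack);
    process.exit(1);
  }
}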

Now that you know about the errors, let’s handle them. We can avoid the abrupt termination of our program by managing these errors, which is an essential part of production-ready code.

Error handling techniques

To handle the errors effectively, we need to understand the error delivery techniques.

There are four fundamental strategies to report errors in Node.js:

  1. try…catch blocks
  2. Callbacks
  3. Promises
  4. Event emitters

Let’s understand using them one by one.

try…catch blocks

In the try…catch method, the try block surrounds the code where the error can occur. In other words, we wrap the code for which we want to check errors; the catch block handles exceptions in this block.

Here’s the try…catch code to handle errors:

var fs = require('fs')

try {
  const data = fs.readFileSync('/Users/Kedar/node.txt')
} catch (err) {
  console.log(err)
}

console.log("an important piece of code that should be run at the end")

We receive the output as shown below:

$node main.js
{ Error: ENOENT: no such file or directory, open '/Users/Kedar/node.txt'
    at Error (native)
    at Object.fs.openSync (fs.js:641:18)
    at Object.fs.readFileSync (fs.js:509:33)
    at Object.<anonymous> (/home/cg/root/7717036/main.js:4:17)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.runMain (module.js:604:10)
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '/Users/Kedar/node.txt' }
an important piece of code that should be run at the end

The error is processed and displayed. In the end, the rest of the code executes as planned.

Callbacks

A callback function (commonly used for asynchronous code) is an argument to the function in which we implement error handling.

The purpose of a callback function is to check the errors before the result of the primary function is used. The callback is usually the final argument to the primary function, and it executes when an error or outcome of the operation emerges.

Here’s the syntax for a callback function:

function (err, result) {}

The first argument is reserved for an error and the second for the result. If an error occurs, the first argument contains the error and the second is undefined; on success, the first argument is null and the second holds the result. Let’s check out an example where we try to read a file using this technique:

const fs = require('fs');

fs.readFile('/home/Kedar/node.txt', (err, result) => {
  if (err) {
    console.error(err);
    return;
  }

  console.log(result);
});

The result looks like this:

$node main.js
{ Error: ENOENT: no such file or directory, open '/home/Kedar/node.txt'
    at Error (native)
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '/home/Kedar/node.txt' }

We received the error because the file does not exist. We can also use error-first callbacks in our own functions. The example below illustrates a user-defined function that doubles a given number using a callback:

const udf_double = (num, callback) => {
  if (typeof callback !== 'function') {
    throw new TypeError(`Expected the function. Got: ${typeof callback}`);
  }

  // simulate the async operation
  setTimeout(() => {
    if (typeof num !== 'number') {
      callback(new TypeError(`Expected number, got: ${typeof num}`));
      return;
    }

    const result = num * 2;
    // callback invoked after the operation completes.
    callback(null, result);
  }, 100);
}

// function call
udf_double('2', (err, result) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(result);
});

The program above reports an error because we pass a string instead of a number. The result is as follows:

$node main.js
TypeError: Expected number, got: string
    at Timeout.setTimeout (/home/cg/root/7717036/main.js:9:16)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)

Promises

Promises are a more modern way to handle asynchronous errors in Node.js and are usually preferred over callbacks. Since promises are an alternative to callbacks, let’s convert the example discussed above (udf_double) to use promises:

const udf_double = num => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (typeof num !== 'number') {
        // reject the promise and return so resolve() below is never reached
        reject(new TypeError(`Expected number, got: ${typeof num}`));
        return;
      }

      const result = num * 2;
      resolve(result);
    }, 100);
  });
}

In the function, we return a promise, which wraps our primary logic. The executor function passed to the Promise constructor receives two arguments:

  1. resolve — used to resolve promises and provide results
  2. reject — used to report/throw errors

Now, let’s execute the function by passing the input:

udf_double('8')
  .then((result) => console.log(result))
  .catch((err) => console.error(err));

We get an error, as shown below:

$node main.js
TypeError: Expected number, got: string
    at Timeout.setTimeout (/home/cg/root/7717036/main.js:5:16)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)

Well, this looks much simpler than callbacks. We can also use a utility such as util.promisify() to convert callback-based code into a Promise. Let’s transform the fs.readFile example from the callback section to use promisify:

const fs = require('fs');
const util = require('util');

const readFile = util.promisify(fs.readFile);

readFile('/home/Kedar/node.txt')
  .then((result) => console.log(result))
  .catch((err) => console.error(err));

Here we are promisifying the readFile function. We get the result as below:

[Error: ENOENT: no such file or directory, open '/home/Kedar/node.txt'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '/home/Kedar/node.txt'
}
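
As a brief aside: if you are on a reasonably recent Node.js release, the fs module also ships a built-in promise-based API, so promisify is not strictly necessary for this particular case. A quick sketch:

const fs = require('fs').promises;

// the promise-based readFile can be used directly, no promisify needed
fs.readFile('/home/Kedar/node.txt')
  .then((result) => console.log(result))
  .catch((err) => console.error(err));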

Async/await

Async/await is syntactic sugar on top of promises. It gives asynchronous code a synchronous-looking structure. For simple cases, plain promises are easy enough to use, but once the chains of asynchronous operations grow more complex, code that reads as though it were synchronous is easier to follow.

Note that the return value of an async function is a Promise. The await keyword waits for the promise to be resolved or rejected. Let’s implement the readFile example using async/await:

const fs = require('fs');
const util = require('util');

const readFile = util.promisify(fs.readFile);

const read = async () => {
  try {
    const result = await readFile('/home/Kedar/node.txt');
    console.log(result);
  } catch (err) {
    console.error(err);
  }
};

read()

We are creating the async read function in which we are reading the file using await. The output is as below:

[Error: ENOENT: no such file or directory, open '/home/Kedar/node.txt'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'open',
  path: '/home/Kedar/node.txt'
}

Event emitters

We can use the EventEmitter class from the events module to report errors in complex scenarios — lengthy async operations that can produce numerous failures. We can continuously emit the errors caused and listen to them using an emitter.

Let’s check out an example where we mimic receiving data and validating it. We need to check whether the characters at the first six indexes (excluding the zeroth index) are digits. If any of the six is not a digit, we emit an error and can make further decisions based on it:

const { EventEmitter } = require('events'); // importing the events module

const getLetter = (index) => {
  // in a real scenario this would be a fetch function returning a new cipher every time
  let cipher = "*12345K%^*^&*"
  let cipher_split = cipher.split('')
  return cipher_split[index]
}

const emitterFn = () => {
  const emitter = new EventEmitter(); // initializing a new emitter
  let counter = 0;
  const interval = setInterval(() => {
    counter++;

    if (counter === 7) {
      clearInterval(interval);
      emitter.emit('end');
      return;
    }

    let letter = getLetter(counter)

    if (isNaN(letter)) { // check whether the received value is a digit
      emitter.emit(
        'error',
        new Error(`The index ${counter} needs to be a digit`)
      );
      return;
    }

    emitter.emit('success', counter);

  }, 1000);

  return emitter;
}

const listener = emitterFn();

listener.on('end', () => {
  console.info('All six indexes have been checked');
});

listener.on('success', (counter) => {
  console.log(`${counter} index is an integer`);
});

listener.on('error', (err) => {
  console.error(err.message);
});

First, we import the events module to use EventEmitter. Then we define the getLetter() function, which fetches the cipher and returns the value at a particular index whenever emitterFn() requests it. emitterFn() creates the EventEmitter object, fetches the values at the six indexes one by one, and emits an error if a value is not a digit.

A variable stores the emitter returned by emitterFn(), and we listen to its events using listener.on(). After all the indexes have been checked, the program ends. The output looks as shown below:

1 index is an integer
2 index is an integer
3 index is an integer
4 index is an integer
5 index is an integer
The index 6 needs to be a digit
All six indexes have been checked
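
One detail worth keeping in mind when using this technique: if an EventEmitter emits an 'error' event and no 'error' listener is registered, Node.js throws the error and the process crashes. A minimal sketch:

const { EventEmitter } = require('events');

const emitter = new EventEmitter();

// without this listener, emitting 'error' below would throw and crash the process
emitter.on('error', (err) => {
  console.error('Caught emitter error:', err.message);
});

emitter.emit('error', new Error('something went wrong'));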

Handling errors

Now that you know the techniques to report errors, let’s handle them.

Retry the operation

Sometimes, an external system produces errors even for valid requests. For example, while fetching coordinates from an API, you may receive a 503 Service Unavailable error caused by overload or a network failure.

The service might be back in a few seconds, so reporting an error immediately may not be ideal; instead, you can retry the operation, as shown in the example below. However, retrying is a poor choice deep down the stack, because every layer retrying the same operation multiplies the wait time heavily. In such cases, it’s better to abort and let the client retry from its side.
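
A rough sketch of a bounded retry with a short delay (the retry helper and the fetchCoordinates call are hypothetical, not tied to any particular library):

// Hypothetical helper: retries an async operation a limited number of times,
// waiting briefly between attempts, and rethrows the last error if all fail.
const retry = async (operation, attempts = 3, delayMs = 1000) => {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === attempts) throw err; // out of attempts, report the failure
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
};

// Usage (fetchCoordinates is a placeholder for your own API call):
// retry(() => fetchCoordinates('Berlin'))
//   .then((coords) => console.log(coords))
//   .catch((err) => console.error('Giving up:', err));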

Report the failure to the client

When the client sends invalid input, retrying doesn’t make sense, because reprocessing the same incorrect data would give the same result. In such cases, the most straightforward approach is to report the failure back to the client, as in the sketch below.
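
A minimal sketch, assuming an Express-style route handler (the route, field name, and validation rule here are illustrative only):

const express = require('express');
const app = express();
app.use(express.json());

app.post('/coordinates', (req, res) => {
  const { city } = req.body;

  if (typeof city !== 'string' || city.length === 0) {
    // invalid input: retrying won't help, so report the problem to the client
    return res.status(400).json({ error: 'city must be a non-empty string' });
  }

  // ...continue with normal processing...
  res.status(200).json({ city });
});

app.listen(3000);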


Report failures directly at the top of the stack

Sometimes it’s appropriate to report the error directly because you already know the cause. For example, the ENOENT error discussed in the try…catch blocks section is generated when you try to open a file that does not exist, and you can use any of the methods discussed above to report it. Reporting it directly lets the caller know that creating the missing file will solve the error.

Crash immediately

In the case of unrecoverable errors, crashing the program is the best option. For example, if an error is caused by accessing a file without permission, there is nothing you can do except crash the process and let the sysadmin grant the access. Crashing is also the most practical way to deal with programmer errors, since it brings the system back to a known clean state.
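
One common pattern (a sketch, not the only way to do this) is to log the unrecoverable error and exit, letting a process manager such as pm2 or forever restart the app with a clean state:

// log unrecoverable errors, then exit so a process manager can restart the app
process.on('uncaughtException', (err) => {
  console.error('Unrecoverable error, shutting down:', err);
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled promise rejection, shutting down:', reason);
  process.exit(1);
});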

Conclusion

To conclude, appropriate error handling is essential if you want to write good code and deliver reliable software. In this post, we learned what errors are and why handling them matters in Node.js. We covered the fundamental ways to report errors: try…catch blocks, callbacks, promises, async/await syntax, and event emitters. We also looked at how to handle these errors once they are reported.

I hope this was helpful and that you can now handle errors in your Node.js applications with confidence.
