How to Make HTTP Requests in Node.js With Fetch API
Augustas Pelakauskas
The world’s first website started with HTML only – no CSS, no images, and no JavaScript. Since then, browsers and websites have come a long way. Nowadays, it’s common for a website to depend on dozens of different resources such as images, CSS, fonts, JavaScript, JSON data, etc. On top of all that, dynamic websites load even more resources.
As an excellent language for client-side scripting, JavaScript has played an essential role in the evolution of websites. With the help of XMLHttpRequest or XHR objects, JavaScript enabled client-server communication without page reloads.
Today, XHR is being superseded by the Fetch API. JavaScript itself remains as popular as ever, not least because Node.js lets it run on the server side as well.
Fetch API is now included by default with Node.js version 18 and above. This article explains what Fetch API is, how it can be used in Node.js, and how it is better than alternatives such as Axios or XHR.
Fetch API is an application programming interface for fetching network resources. It facilitates making HTTP requests such as GET, POST, etc.
Fetch API supports new standards, such as Promise, resulting in cleaner code that doesn’t require callbacks.
Native support for the Fetch API exists in all major browsers. Before Node.js gained its own implementation, JavaScript developers relied on the npm node-fetch package for server-side code; the package remains wildly popular, with millions of downloads every week.
Node.js has released experimental support for the Fetch API with version 17.5, while with Node.js version 18, the Fetch API became stable and is enabled by default as part of the Node.js runtime. Since then, you can write your server-side JavaScript code that uses the Fetch API without installing a third-party library. To check your Node.js version, run the following command in your terminal:
node -v
If you’re running a Node.js version between 17.5 and 18, you can run Fetch API code files by enabling the --experimental-fetch flag:
node --experimental-fetch your_code.js
If your Node.js version is below 17.5, you can install node-fetch with the command below:
npm install node-fetch
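Note that node-fetch v3 is published as an ES module, so it is loaded with import rather than require(); if you need CommonJS, pin version 2 with npm install node-fetch@2. A minimal sketch of v3 usage (the example URL is the sandbox target introduced below):

// node-fetch v3 is ESM-only: this file must be a .mjs file, or live in a
// project with "type": "module" set in package.json.
import fetch from 'node-fetch';

const response = await fetch('https://sandbox.oxylabs.io/products/1');
console.log(await response.text());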
For the following examples, a scraping sandbox website will be used as a target. As the Fetch API returns a Promise object, you can use the fetch-then syntax. First, initialize the Node.js project by executing the following lines in your terminal:
mkdir fetch-api-scraper
cd fetch-api-scraper
npm init -y
These commands will create a package.json file in a new folder called fetch-api-scraper.
To see the Fetch API in action, create a new .js file and enter the following lines of code:
fetch('https://sandbox.oxylabs.io/products/1')
    .then((response) => response.text())
    .then((body) => {
        console.log(body);
    });
This code sends a fetch request using an HTTP GET method and prints the HTML by logging the response body.
To explain it further, the fetch() method returns a Promise object. The first then() extracts the text from the response, and the second then() prints the response HTML.
Save the code file as product.js, open the terminal, and run the following:
node product.js
Running this line will print the HTML document of the target page.
You can also send a fetch request using an async function with the async-await syntax as follows:
(async () => {
    const response = await fetch('https://sandbox.oxylabs.io/products/1');
    const body = await response.text();
    console.log(body);
})();
If you want to extend the code to fetch data like the product title from the entire HTML, you can scrape and parse data using Cheerio. Install the Cheerio library by executing the following line in your terminal:
npm install cheerio
The following Node.js Fetch example extracts the product’s title:
const cheerio = require('cheerio');

fetch('https://sandbox.oxylabs.io/products/1')
    .then((response) => response.text())
    .then((body) => {
        const $ = cheerio.load(body);
        console.log($('h2').text());
    });
Now, let's talk about the response headers. The response object contains all of the response headers in the response.headers collection. If you wish to print the response headers, you can do so by looping over all key and value pairs in the response.headers property:
fetch('https://sandbox.oxylabs.io/products/1')
    .then((response) => {
        response.headers.forEach((value, key) => {
            console.log(`${key}: ${value}`);
        });
    });
Running the code prints every response header as a key: value pair.
While running this code using Node.js, you’ll see all of the response headers as expected. However, things will be unexpectedly different when running in the browser. If a server you attempt to query has CORS headers enabled, your browser will limit the headers you can access for security reasons.
You’ll only be able to access the following headers: Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, and Pragma.
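If you only need a single header rather than the whole collection, you can read it directly with response.headers.get(). A minimal sketch, using the same sandbox target as above:

// Read one header directly instead of iterating over all of them.
fetch('https://sandbox.oxylabs.io/products/1')
    .then((response) => {
        console.log(response.headers.get('content-type'));
    });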
It’s also possible to send custom request headers using the second parameter of fetch(), where various options can be set, including headers. The following example shows how to send a custom User-Agent header in the HTTP request:
const options = {
    headers: {
        'User-Agent': 'My User Agent',
    },
};

fetch('https://ip.oxylabs.io/headers', options)
    .then((response) => response.text())
    .then((body) => {
        console.log(body);
    });
The ip.oxylabs.io/headers URL outputs the headers of your request. As discussed in the next section, the second parameter can be used for additional functionality.
Tip: If you have a cURL command with headers you want to use for your Fetch API code, you can easily extract them by using a cURL to JSON converter.
The default request method used by the Fetch API is GET. However, it’s possible to send a POST request as follows:
fetch(url, {method: 'POST'})
Let’s practice sending some dummy data to a test website that accepts POST requests: https://httpbin.org/post. You’ll need to convert the data you want to send in the POST request body into a string:
const url = 'https://httpbin.org/post';
const data = {x: 1920, y: 1080};
const headers = {'Content-Type': 'application/json'};

fetch(url, {
    method: 'POST',
    headers: headers,
    body: JSON.stringify(data),
})
    .then((response) => response.json())
    .then((data) => {
        console.log(data);
    });
Notice how to set method: 'POST' and how to use JSON.stringify(data) to convert the data into a string in the request body. To learn how to handle JSON data, follow our tutorial on how to read JSON files in JavaScript.
Similarly, you can use other HTTP methods, such as DELETE and PUT.
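As a minimal sketch, here is what PUT and DELETE requests could look like against httpbin.org, which echoes back the request it received:

// A PUT request with a JSON body.
fetch('https://httpbin.org/put', {
    method: 'PUT',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({x: 1920}),
})
    .then((response) => response.json())
    .then((data) => console.log(data));

// A DELETE request needs no body.
fetch('https://httpbin.org/delete', {method: 'DELETE'})
    .then((response) => response.json())
    .then((data) => console.log(data));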
As the Node Fetch API returns a Promise object, you can use the fetch-then-catch convention to handle errors:
fetch('https://invalid_url')
    .then((response) => response.text())
    .then((body) => {
        console.log(body);
    })
    .catch((error) => {
        console.error(error);
    });
If you’re using an async function, you can handle errors with the try-catch block as follows:
(async () => {
    try {
        const response = await fetch('https://invalid_url');
        const body = await response.text();
        console.log(body);
    } catch (error) {
        console.error(error);
    }
})();
Both methods will log the resulting network error to the console.
Axios is a popular Node package for making HTTP GET and POST requests with ease. Make sure to check our tutorial on web scraping with JavaScript and Node.js to see a practical example of Axios. Additionally, you might also find it useful to read about using proxies in Node-Fetch and proxy integration with Axios. In case you have a cURL command that you want to replicate using Axios, Node.js, or JavaScript, you can also take advantage of these cURL to Node Axios, cURL to Node.js, and cURL to JavaScript converters.
You can install Axios with the following command:
npm install axios
To send a GET request, call the get() method as follows:
const response = await axios.get(url);
Similarly, to send a POST request, call the post() method as follows:
const response = await axios.post(url);
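For context, here is a minimal self-contained version of such a GET call; the sandbox URL is the same example target used earlier in this article:

// A complete, runnable Axios GET example (assumes `npm install axios`).
const axios = require('axios');

(async () => {
    const response = await axios.get('https://sandbox.oxylabs.io/products/1');
    console.log(response.status); // HTTP status code
    console.log(response.data);   // response body (an HTML string here)
})();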
Let's take an example to see how the Node Fetch API differs from Axios. Send a POST request to https://httpbin.org/post with JSON data. The important things to note here are the following:
JSON data
Custom request headers
The response will be in JSON format
Writing the same code with both Axios and the Fetch API highlights the differences.
The following code uses Axios:
const axios = require('axios');

const url = 'https://httpbin.org/post';
const custom_data = {x: 1920, y: 1080};
const headers = {'Content-Type': 'application/json'};

axios.post(url, custom_data, {headers: headers})
    .then(({data}) => {
        console.log(data);
    })
    .catch((error) => {
        console.error(error);
    });
And below you can find a Node Fetch example:
const url = 'https://httpbin.org/post';
const custom_data = {x: 1920, y: 1080};
const headers = {'Content-Type': 'application/json'};

fetch(url, {
    method: 'POST',
    headers: headers,
    body: JSON.stringify(custom_data),
})
    .then((response) => response.json())
    .then((data) => {
        console.log(data);
    });
Both of these code snippets will produce the same output.
As evident from the examples above, here are the differences between Axios and Fetch API:
Fetch API uses the body property of the request, while Axios uses the data property.
Using Axios, JSON data can be sent directly, while Fetch API requires the conversion to a string.
Axios can handle JSON directly. The Fetch API requires the response.json() method to be called first to get the response in JSON format.
Axios exposes the parsed response body via the data property of its response object, while with the Fetch API you can name the parsed result anything you like.
Axios allows an easy way to monitor and update progress using the progress event. There is no direct method in Fetch API.
Fetch API does not support interceptors, while Axios does.
Fetch API allows the streaming of a response, while Axios buffers the whole body by default (see the sketch after this list).
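As a minimal sketch of that last point: in Node.js 18+, the response.body property is a WHATWG ReadableStream, which Node.js exposes as an async iterable, so chunks can be processed as they arrive:

// Stream a response body chunk by chunk instead of buffering it.
(async () => {
    const response = await fetch('https://sandbox.oxylabs.io/products/1');
    let total = 0;
    for await (const chunk of response.body) {
        total += chunk.length; // each chunk is a Uint8Array
        console.log(`Received ${chunk.length} bytes (${total} total)`);
    }
})();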
Here are some of the most common Fetch API errors and mistakes you may encounter, along with their solutions:
The Fetch API does not throw an error for HTTP response codes like 404 or 500. Check the response.ok property to determine if the request was successful (see the sketch after this list).
Fetch API does not natively support request timeouts, causing requests to hang indefinitely. Implement timeouts manually using techniques like AbortController.
When using response.body, forgetting to properly consume or close the stream can lead to memory leaks. Consume the stream completely using methods like .json() or .text().
Fetch doesn’t always send cookies by default, which can break authentication, particularly for cross-origin requests. To include cookies in your Fetch requests, explicitly set the credentials option.
Fetch fails silently for CORS errors; response.ok or catch won’t help since the request doesn't even reach the server. Check the browser console for CORS issues and configure server headers to allow cross-origin requests.
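The sketch below combines the first two fixes from this list: checking response.ok for HTTP-level errors and aborting a hung request with AbortController (the 5-second timeout value is an arbitrary example):

// Abort the request if it takes longer than 5 seconds.
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000);

fetch('https://sandbox.oxylabs.io/products/1', {signal: controller.signal})
    .then((response) => {
        if (!response.ok) {
            // fetch() doesn't reject on 404/500 - check the status yourself.
            throw new Error(`HTTP error: ${response.status}`);
        }
        return response.text();
    })
    .then((body) => console.log(body.length))
    .catch((error) => console.error(error))
    .finally(() => clearTimeout(timeout));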
The addition of the Fetch API to Node.js is a long-awaited feature, and it is now stable and enabled by default. Combined with libraries such as Cheerio, the Fetch API can also be used for web scraping.
Curious to find out more about web scraping? Make sure to check our blog. If you want to learn about a different method of scraping via a headless browser, refer to our Puppeteer tutorial. Also, don’t hesitate to try our general-purpose web scraper for free.
Node.js is a JavaScript runtime for building server-side applications, while node-fetch is a lightweight module that brings the Fetch API to Node.js, allowing developers to make HTTP requests similar to how they would in the browser.
About the author
Augustas Pelakauskas
Senior Copywriter
Augustas Pelakauskas is a Senior Copywriter at Oxylabs. Coming from an artistic background, he is deeply invested in various creative ventures - the most recent one being writing. After testing his abilities in the field of freelance journalism, he transitioned to tech content creation. When at ease, he enjoys sunny outdoors and active recreation. As it turns out, his bicycle is his fourth best friend.