Intro example
Scraping Fish is dead easy to use. Simply provide your API key and the URL you want to scrape as query parameters to our API endpoint, and the response contains the HTML of the requested website. Below is a simple example to get you started:
- Python
- NodeJS
- cURL
```python
import requests

payload = {
    "api_key": "[your API key]",
    "url": "https://example.com",
}
response = requests.get("https://scraping.narf.ai/api/v1/", params=payload)
print(response.content)
```
```javascript
const axios = require("axios");

const payload = {
  api_key: "[your API key]",
  url: "https://example.com",
};
const response = await axios.get("https://scraping.narf.ai/api/v1/", { params: payload });
console.log(response.data);
```
```bash
curl -G 'https://scraping.narf.ai/api/v1/' \
  --data-urlencode 'api_key=[your API key]' \
  --data-urlencode 'url=https://example.com'
```
An example response:
```html
<!DOCTYPE html>
<html>
  …
  <body>
    <div>
      <h1>Example Domain</h1>
      <p>
        This domain is for use in illustrative examples in documents. You may
        use this domain in literature without prior coordination or asking for
        permission.
      </p>
      <p>
        <a href="https://www.iana.org/domains/example">More information...</a>
      </p>
    </div>
  </body>
</html>
```
And that's it! No need to worry about getting blocked.
info
If the URL you want to scrape itself contains query parameters, for example https://httpbin.org/get?best_api=scrapingfish&website=https://scrapingfish.com, you need to URL-encode it. The `requests` Python package and `axios` in NodeJS encode parameters automatically when you pass them via `params`, but not if you construct the URL manually with template strings.
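To illustrate the encoding, here is a minimal sketch using Python's standard library (the target URL is the example from above; no request is actually sent):

```python
from urllib.parse import urlencode

# A target URL that itself contains query parameters.
target = "https://httpbin.org/get?best_api=scrapingfish&website=https://scrapingfish.com"

# urlencode percent-encodes the nested URL so it survives as a single
# "url" parameter instead of leaking its own "&" and "?" into the outer query.
query = urlencode({"api_key": "[your API key]", "url": target})
print(query)
```

This produces the same query string that `requests` builds for you when you pass a `params` dict, with `://`, `?`, and `&` inside the target URL percent-encoded.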
note
If the request fails for any reason, you won't be charged. Read more about possible response status codes in the next section.
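Because only successful requests are billed, one common pattern (a sketch, not an official client) is to check the status code before using the body:

```python
import requests

def handle_response(response):
    """Return the scraped HTML for a successful response, or None otherwise."""
    if response.ok:  # status code in the 2xx range: you were charged, body is the HTML
        return response.text
    # Non-2xx response: you were not charged; response.status_code tells you why.
    return None

# Usage (performs a real request; requires a valid API key):
# payload = {"api_key": "[your API key]", "url": "https://example.com"}
# response = requests.get("https://scraping.narf.ai/api/v1/", params=payload)
# html = handle_response(response)
```

How to react to each non-2xx code (retry, back off, fix the request) depends on the status codes described in the next section.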