Troubleshoot crawl errors and understand how to fix them.
Common crawl errors
If you start a crawl and see "Unable to Extract Content", it could be caused by one of the following:
- The website you're crawling does not have a sitemap
- The website blocks crawlers through its settings or configuration (a quick check for both is sketched below)
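To check both of these causes quickly, you can look for a sitemap at the conventional location and inspect the site's robots.txt. Here is a minimal sketch using the third-party `requests` library; the sitemap URL shown is the common default and may differ on your site:

```python
# Minimal check: does the site publish a sitemap, and does robots.txt
# block crawlers? Assumes the conventional /sitemap.xml location.
import requests

SITE = "https://example.com"  # replace with the site you're trying to crawl

# 1. Look for a sitemap at the usual location.
sitemap = requests.get(f"{SITE}/sitemap.xml", timeout=10)
print("sitemap.xml status:", sitemap.status_code)

# 2. Scan robots.txt for Sitemap entries and blanket Disallow rules.
robots = requests.get(f"{SITE}/robots.txt", timeout=10)
for line in robots.text.splitlines():
    if line.lower().startswith(("user-agent:", "disallow:", "sitemap:")):
        print(line.strip())
```

A 404 on sitemap.xml with no `Sitemap:` line in robots.txt suggests the site has no sitemap; a `Disallow: /` rule suggests crawlers are being blocked by configuration.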
Websites that block crawlers
Some websites prevent automated tools from accessing content. If this is the case:
- You’ll need to whitelist our IP addresses (listed below) to allow crawling
- Once the IPs are whitelisted, the crawler should be able to access and extract the content successfully
35.185.80.158
35.238.68.102
35.203.87.215
34.125.28.173
34.148.80.130
⚠️ Your developer or site administrator will usually need to handle whitelisting.
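If the block is implemented in application code rather than in a firewall or CDN, the allowlist can be as simple as a set membership check. The sketch below is hypothetical (the function name and integration point will depend on your stack); most sites handle this in firewall, CDN, or WAF settings instead:

```python
# Hypothetical application-level allowlist for the crawler IPs above.
# In most setups this lives in firewall, CDN, or WAF rules, not in code.
FRASE_CRAWLER_IPS = {
    "35.185.80.158",
    "35.238.68.102",
    "35.203.87.215",
    "34.125.28.173",
    "34.148.80.130",
}

def is_allowed_crawler(remote_ip: str) -> bool:
    """Return True if a request comes from one of the whitelisted crawler IPs."""
    return remote_ip in FRASE_CRAWLER_IPS
```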
Individual page failures
If only certain pages in your crawl fail, here are a few common reasons:
- The page contains no text
- The content is not properly formatted (e.g., heavily JavaScript-rendered)
- The page is blank or does not load correctly
In these cases, the system may skip the page or fail to extract usable content.
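One quick way to diagnose this is to fetch the page's raw HTML (without executing JavaScript) and see how much text it actually contains. This is a rough sketch using the third-party `requests` library; the 200-character threshold is an arbitrary rule of thumb:

```python
# Fetch the raw HTML (no JavaScript execution) and roughly measure how much
# visible text it contains. Very little text usually means the page is blank
# or rendered almost entirely by JavaScript.
import re
import requests

PAGE = "https://example.com/some-page"  # replace with the failing page

html = requests.get(PAGE, timeout=10).text

# Strip scripts, styles, and remaining tags to approximate the visible text.
text = re.sub(r"(?is)<(script|style).*?>.*?</\1>", " ", html)
text = re.sub(r"(?s)<[^>]+>", " ", text)
text = " ".join(text.split())

print(f"Extracted text length: {len(text)} characters")
if len(text) < 200:
    print("Very little extractable text; the page may be blank or JavaScript-rendered.")
```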
💡 Still stuck? Email us at team@frase.io and include:
- The site URL you're trying to crawl
- A brief description of the issue
- Any error messages you're seeing