If a Site Audit scans only one page, or you see a "Site crawl has stopped" error on the Site Audit page, it usually means access to the website is being blocked. Here's how to troubleshoot and fix the issue:
Whitelisting IP Addresses
First, ensure our IP addresses are whitelisted. Find the list of IP addresses here.
Common Causes and Solutions
- Cache or Security Plugins
  - Disable plugins: temporarily disable these to see if they are blocking the crawl.
- Indexation Instructions
  - Check robots.txt or meta directives: ensure these are not restricting the crawler (see the example robots.txt after this list).
- Server Settings
  - Allow IP addresses: verify server settings to ensure our IPs are not blocked.
- Authorization Settings
  - Remove authentication: ensure no password protection or other authentication is blocking our crawl.
- .htaccess File
  - Review restrictions: check the .htaccess file for access restrictions (see the sample rules after this list).
- Excessive Load Time
  - Optimize performance: improve site speed to reduce load times.
- Blocking Due to Too Many Requests
  - Adjust rate limits: ensure the site doesn't block our IPs after many requests.
- Canonical Loop Issue
  - Fix loops: resolve any canonical loop issues (see the example after this list).
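For the indexation check, a robots.txt file that blocks crawling outright looks like the example below. The bot name is only a placeholder for illustration; the Disallow: / rule is what stops a crawler from going past the first request.

```
# Example robots.txt that blocks crawling of the whole site
User-agent: *        # applies to every crawler
Disallow: /          # forbids every path, so the audit cannot go beyond the first page

# Or a rule that singles out one crawler (bot name is a placeholder)
User-agent: ExampleAuditBot
Disallow: /
```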
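If you'd rather verify this programmatically, here is a minimal sketch using Python's standard library. The URL and user-agent string are placeholders for illustration, not the exact values our crawler uses.

```python
from urllib import robotparser

# Placeholder values: substitute your own site and the crawler's real user agent.
ROBOTS_URL = "https://www.example.com/robots.txt"
PAGE_URL = "https://www.example.com/category/page/"
USER_AGENT = "ExampleAuditBot"  # assumed name, used here only for illustration

parser = robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses robots.txt

if parser.can_fetch(USER_AGENT, PAGE_URL):
    print("robots.txt allows this user agent to fetch the page.")
else:
    print("robots.txt blocks this user agent; a site audit would stop here.")
```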
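When reviewing the .htaccess file on an Apache server, directives like the ones below are a common source of blocked crawls. The IP address and bot name are placeholders; compare any such rules on your server against our whitelisted IPs.

```
# Apache 2.4 example: allow everyone except a specific IP (placeholder address)
<RequireAll>
    Require all granted
    Require not ip 203.0.113.10
</RequireAll>

# Blocking by user agent also stops a crawl (placeholder bot name)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "ExampleAuditBot" [NC]
RewriteRule .* - [F,L]
```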
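A canonical loop means two or more pages declare each other as the canonical version, so the crawler never reaches a final URL. The URLs below are placeholders showing what such a loop looks like:

```
<!-- On https://example.com/page-a/ -->
<link rel="canonical" href="https://example.com/page-b/">

<!-- On https://example.com/page-b/ -->
<link rel="canonical" href="https://example.com/page-a/">
```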
If any of these issues are present, our robots can't properly analyze your site. Once you review and address them, the Site Audit should be able to scan the entire website.
Check this article for insights: Site Audit Crawl Has Stopped
If you still can't properly analyze the website after going through the steps above, send us a message at support@ubersuggest.com.