Search Results for: *

Search function

The search box in the top right of the interface allows you to search all visible columns. It defaults to regular text search of the ‘Address’ column, but allows you to switch to regex, choose from a variety of predefined...

 Continue Reading SEO Spider Guide
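To give a feel for the difference between plain text search and regex search, here is a minimal Python sketch that filters a hypothetical 'Address' column with a regex. The URLs and the pattern are invented for the example, and this is not the SEO Spider's own code.

import re

# Hypothetical 'Address' column values.
addresses = [
    "https://example.com/blog/post-1",
    "https://example.com/products/widget",
    "https://example.com/blog/post-2",
]

# A regex typed into the search box matches anywhere in the value.
pattern = re.compile(r"/blog/")
print([url for url in addresses if pattern.search(url)])
# ['https://example.com/blog/post-1', 'https://example.com/blog/post-2']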

Robots.txt Testing In The SEO Spider

View URLs blocked by robots.txt and the disallow lines blocking them, and use the custom robots.txt feature to check and validate a site's robots.txt thoroughly, and at scale.

 Continue Reading SEO Spider Tutorials
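The same kind of check can be prototyped outside the tool with Python's standard urllib.robotparser, if you just want to sanity-check which URLs a robots.txt blocks. The robots.txt content and URLs below are invented for illustration, and this is not how the SEO Spider works internally.

import urllib.robotparser

# Invented robots.txt content for the example.
robots_lines = """
User-agent: *
Disallow: /private/
""".strip().splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_lines)

for url in ["https://example.com/", "https://example.com/private/report"]:
    print(url, rp.can_fetch("Screaming Frog SEO Spider", url))
# https://example.com/ True
# https://example.com/private/report False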

Crawling Password Protected Websites

Crawl websites that require a login, using web forms authentication via our inbuilt Chrome browser.

 Continue Reading SEO Spider Tutorials

How Accurate Are Website Traffic Estimators?

If you’ve worked at an agency for any significant amount of time, and particularly if you’ve been involved in forecasting, proposals or client pitches, you’ve likely been asked at least one of (or a combination or amalgamation of) the following...

 Continue Reading Screaming Frog Blog

Exclude

Configuration > Exclude The exclude configuration allows you to exclude URLs from a crawl by using partial regex matching. A URL that matches an exclude is not crawled at all (it’s not just ‘hidden’ in the interface). This will mean...

 Continue Reading SEO Spider Guide
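As a rough sketch of what partial regex matching means in practice (the exclude patterns and URLs here are invented, and this is an illustration rather than the tool's implementation):

import re

# Hypothetical exclude patterns; each is matched partially against the URL.
excludes = [r"\?page=", r"/tag/"]

def is_excluded(url):
    return any(re.search(pattern, url) for pattern in excludes)

print(is_excluded("https://example.com/blog/tag/seo/"))  # True  -> not crawled
print(is_excluded("https://example.com/blog/a-post/"))   # False -> crawled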

URL rewriting

Configuration > URL Rewriting The URL rewriting feature allows you to rewrite URLs on the fly. For the majority of cases, the ‘remove parameters’ and common options (under ‘options’) will suffice. However, we do also offer an advanced regex replace...

 Continue Reading SEO Spider Guide
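For a sense of what a regex replace rule does, here is a small Python sketch using re.sub. The rewrite rule (stripping an utm_source parameter) and the URL are examples only, not a built-in option.

import re

url = "https://example.com/page?utm_source=newsletter"

# Replace the matched query string with nothing, leaving the bare URL.
rewritten = re.sub(r"\?utm_source=[^&]+$", "", url)
print(rewritten)  # https://example.com/page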

Robots.txt

The Screaming Frog SEO Spider is robots.txt compliant. It obeys robots.txt in the same way as Google. It will check the robots.txt of the subdomain(s) and follow (allow/disallow) directives specifically for the Screaming Frog SEO Spider user-agent and, if none are present, Googlebot...

 Continue Reading SEO Spider Guide
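To see how user-agent specific groups behave, here is a Python sketch using the standard urllib.robotparser with an invented robots.txt. Note that robotparser's matching rules differ in some details from Google's, so treat this only as an illustration of directives applying to one user-agent and not another.

import urllib.robotparser

# Invented robots.txt with a group for the Screaming Frog SEO Spider
# user-agent alongside a catch-all group.
robots_lines = """
User-agent: Screaming Frog SEO Spider
Disallow: /staging/

User-agent: *
Disallow: /admin/
""".strip().splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_lines)

print(rp.can_fetch("Screaming Frog SEO Spider", "https://example.com/staging/"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/staging/"))                  # True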

Crawling

The Screaming Frog SEO Spider is free to download and use for crawling up to 500 URLs at a time. For £199 a year you can buy a licence, which removes the 500 URL crawl limit. A licence also provides...

 Continue Reading SEO Spider Guide

How do I extract multiple matches of a regex?

If you want all the H1s from the following HTML:

<html>
<head>
<title>2 h1s</title>
</head>
<body>
<h1>h1-1</h1>
<h1>h1-2</h1>
</body>
</html>

Then we can use:

<h1>(.*?)</h1>

 Continue Reading SEO Spider FAQ
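Outside the SEO Spider, the same pattern can be checked quickly in Python, where re.findall returns every non-overlapping match of the capture group (the HTML string below is just the example above, flattened onto one line):

import re

html = "<html><head><title>2 h1s</title></head><body><h1>h1-1</h1><h1>h1-2</h1></body></html>"

# findall returns the text captured by (.*?) for every <h1> element.
print(re.findall(r"<h1>(.*?)</h1>", html))  # ['h1-1', 'h1-2']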

Why is my regex extracting more than expected?

If you are using a regex like .* that contains a greedy quantifier, you may end up matching more than you want. The solution is to use a lazy version instead, such as .*?. For example, if you are trying to...

 Continue Reading SEO Spider FAQ
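A quick Python comparison makes the difference visible; the HTML snippet is invented for the example:

import re

html = "<h1>h1-1</h1><h1>h1-2</h1>"

# Greedy: .* runs to the last </h1>, so both headings end up in one match.
print(re.findall(r"<h1>(.*)</h1>", html))   # ['h1-1</h1><h1>h1-2']

# Lazy: .*? stops at the first </h1>, giving one clean match per heading.
print(re.findall(r"<h1>(.*?)</h1>", html))  # ['h1-1', 'h1-2']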
