Robots.txt
The Screaming Frog SEO Spider is robots.txt compliant. It obeys robots.txt in the same way as Google. It will check the robots.txt of the subdomain(s) and follow (allow/disallow) directives specifically for the Screaming Frog SEO Spider user-agent; if none exist, it follows those for Googlebot...
Crawling
The Screaming Frog SEO Spider is free to download and use for crawling up to 500 URLs at a time. For £199 a year you can buy a licence, which removes the 500 URL crawl limit. A licence also provides...
How do I extract multiple matches of a regex?
If you want to extract all the H1s from the following HTML: <html> <head> <title>2 h1s</title> </head> <body> <h1>h1-1</h1> <h1>h1-2</h1> </body> </html> then you can use: <h1>(.*?)</h1>
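As a rough sketch of how that pattern behaves outside the SEO Spider, Python's `re.findall` applies the same non-greedy regex to the sample HTML above and returns one match per heading:

```python
import re

# The sample HTML from the snippet above.
html = """<html>
<head><title>2 h1s</title></head>
<body>
<h1>h1-1</h1>
<h1>h1-2</h1>
</body>
</html>"""

# re.findall returns the capture group for every non-overlapping match,
# so the non-greedy <h1>(.*?)</h1> pulls out each heading separately.
matches = re.findall(r"<h1>(.*?)</h1>", html)
print(matches)  # → ['h1-1', 'h1-2']
```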
Why is my regex extracting more than expected?
If you are using a regex like .* that contains a greedy quantifier, you may end up matching more than you want. The solution is to use a lazy quantifier instead, such as .*?. For example, if you are trying to...
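A minimal illustration of the difference, using Python's `re` module on a hypothetical two-heading fragment: the greedy quantifier runs to the last closing tag, while the lazy one stops at the first.

```python
import re

# Hypothetical fragment with two headings.
html = "<h1>h1-1</h1> <h1>h1-2</h1>"

# Greedy: .* extends to the LAST </h1>, swallowing both headings in one match.
greedy = re.findall(r"<h1>(.*)</h1>", html)
print(greedy)  # → ['h1-1</h1> <h1>h1-2']

# Lazy: .*? stops at the FIRST </h1>, giving one match per heading.
lazy = re.findall(r"<h1>(.*?)</h1>", html)
print(lazy)    # → ['h1-1', 'h1-2']
```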
How does the Spider treat robots.txt?
The SEO Spider is robots.txt compliant. It checks robots.txt in the same way as Google. It will check the robots.txt of the (sub)domain and follow directives specifically for Googlebot, or for all user-agents. You are able to adjust the...
Why isn’t my Include/Exclude function working?
The Include and Exclude functions are case sensitive, so patterns need to match the URL exactly as it appears. Please read both guides for more information. Functions will be applied to URLs that have not yet been discovered by the...
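A quick sketch of why casing matters: the Include/Exclude fields take regex patterns, and a regex match is case sensitive by default. The URL and patterns below are hypothetical, purely to illustrate the behaviour.

```python
import re

# Hypothetical URL with a capitalised path segment.
url = "https://example.com/Blog/post-1"

# A lower-case pattern does NOT match the capitalised segment.
print(bool(re.search(r"/blog/", url)))  # → False

# The pattern must mirror the URL's casing exactly.
print(bool(re.search(r"/Blog/", url)))  # → True
```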
Web Scraping & Custom Extraction
Scrape any data from the HTML of a page using CSS Path, XPath and regex to enhance a crawl, such as author name, comments, shares or more.
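Outside the SEO Spider, the same idea can be sketched with Python's standard-library `xml.etree.ElementTree`, which supports a limited XPath subset. The markup, class name, and author value here are invented for illustration; in the tool itself you would enter the XPath in the Custom Extraction configuration instead.

```python
import xml.etree.ElementTree as ET

# Hypothetical, well-formed page fragment with an author byline.
html = """<html><body>
<span class="author">Jane Doe</span>
<p>Article text...</p>
</body></html>"""

root = ET.fromstring(html)

# ElementTree's XPath subset handles simple attribute predicates,
# enough to target the byline element by its class.
author = root.find(".//span[@class='author']").text
print(author)  # → Jane Doe
```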