Screaming Frog SEO Spider Update – Version 2.10
Posted 30 January, 2013 by Dan Sharp in Screaming Frog SEO Spider
Just a quick post to let you all know we have released an update of the Screaming Frog SEO spider to version 2.10.
The updated version of the SEO spider has plenty of new features, which include –
- Save Configuration – You can now save the spider configuration to be default on start-up. This feature is fairly basic currently, but will be developed further to allow a number of configurations to be saved and accessed quickly.
- Response Time – The SEO spider now collects response times for URLs under the ‘Response Codes’ tab. This should help identify slow loading pages!
- Filter Totals – There is now a filter total in the GUI, to save exporting to Excel and calculating.
- Bulk Export Out Links – You can now bulk export out link data via the ‘advanced export’ menu.
- Accept Cookies – The SEO spider can now accept cookies, which can be useful for crawling some websites that require them. This feature is still turned off by default because, of course, search bots do not accept cookies.
- Auto Pause On Reaching Memory – One of our most frequent queries is: how many URLs can the SEO spider crawl? Well, this is dependent on memory allocation. Now, when you start to reach the memory allocation, the SEO spider will automatically pause and recommend saving the crawl and increasing the memory allocation, before opening the saved crawl again and resuming.
- 5XX Response Retries – There is now an ‘advanced’ tab in the spider configuration which allows the user to adjust the number of automatic retries for 5XX responses.
- Request Authentication – Users can now switch off the authentication prompts shown for websites which require a login.
- Response Timeout – The amount of time the SEO spider waits for a response from a URL is now configurable, which is useful for very slow loading websites.
- New Filters – We have introduced a couple of new filters, such as page titles with under 30 characters or meta descriptions with under 70 characters.
You can download the new version on our SEO spider page.
Update 2.11 (25th February 2013)
We have released a small update to version 2.11. This release essentially irons out a few bugs in version 2.10 which include –
- Fixed a ‘Null Pointer Exception’ error when exporting the ‘response codes’ tab for any results where the response time field has no value (such as a DNS look-up failure).
- Fixed a bug with custom filters loading from older projects.
- Fixed a domain matching bug with .com and .com.au domains.
- Changed the date format in the SEO spider so there is no confusion between UK and US date formats :-).
There were a number of other smaller amends and updates along the way as well. As always, if you spot any problems, please contact us via the support page with the details.
Thanks again to everyone for all their support of the tool, we are already working on the next batch of features. There are still plenty on the list!
Absolutely love this tool! Will dig into the updates later today.
Cheers Yousaf, you know where I am if you have any feedback! :-)
Good work guys! Nice to have the response time :-)
The auto pause on memory is going to be sweeeeet!
Fantastic. Will upgrade this morning. I love this tool. I think that, along with Excel, it’s my favourite SEO tool.
Excellent news, just downloading it now.
Excellent update guys, great tool!
oooh, lovely
Thanks, I love this tool, it is the first thing I use looking at any new site, and have suggested to many clients who can now see duplicate issues quickly.
This is huge… Bulk Export Out Links – You can now bulk export out link data via the ‘advanced export’ menu.
Hoping this saves a boat load of time in determining where on-site 301s are pointing to!
Great updates!
Might be a silly question, but is there any way to ignore main navigation links when doing a crawl?
I’m looking to crawl sites to measure the extent of internal linking within the page body content only.
Thanks!
Hi Marc,
You can exclude them, using the exclude feature (although it will involve you manually doing this) – https://www.screamingfrog.co.uk/seo-spider/user-guide/configuration/#10
You could do it quite quickly though. Crawl the homepage, export the out links and then copy and paste the nav links into the exclude.
But don’t forget – if the nav links to URLs which can’t be reached from any other URLs the spider can crawl, it obviously won’t find them!
Cheers.
Dan
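For reference, the exclude configuration takes regular expressions, one per line. A small sketch (in Python, with made-up URLs, purely to show the matching) of how copied nav links translate into exclude patterns:

```python
import re

# Hypothetical nav URLs copied from the homepage's out links export
nav_links = [
    "https://example.com/about/",
    "https://example.com/services/",
    "https://example.com/contact/",
]

# One regex per line in the SEO spider's exclude configuration;
# the trailing .* also catches anything beneath those sections
exclude_patterns = [
    r"https://example\.com/about/.*",
    r"https://example\.com/services/.*",
    r"https://example\.com/contact/.*",
]

for url in nav_links:
    excluded = any(re.fullmatch(p, url) for p in exclude_patterns)
    print(url, "excluded" if excluded else "kept")
```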
Hmmm yeah, the pages in my nav are also linked in the body, so I don’t want to exclude the URLs – just the instance of the nav link. So not possible?
thanks
No, in that case, not possible unfortunately.
You could just remove 1 (or the number of times it’s linked to from the nav) from the ‘in links’ counts to adjust for the nav ;-)
Cheers
One of the best tools in my arsenal… thanks for the update!
Was hoping Bulk Export Out Links was something else…..
Feature request –
Bulk Export 301 Redirects…..page containing the link to the 301, the 301 redirecting URL, and the final location that the 301 points to.
You can bulk export in links to 3XX URLs via the ‘advanced export’ menu. However, you have to follow the chain manually by analysing them (if there is one).
We do have a redirect chain report on the ‘todo’ list, which will be coming soon for when there are lots of hops :-)
Awesome, cheers!
I think it’s about time I convince my boss to let me upgrade to the paid version :)
Awesome! Can’t wait to dig into all the new features!
Giving it a spin as we speak! Nice updates :)
Cheers buddy :-)
I love this tool, though the SEO in the title is almost an injustice given it can be used for so much more. An SEO turned me on to it, but I use it for pre-release testing, data collection and broken link detection. Essential tool. This is not a shill post; I am actually a big fan boy.
Thanks for the kind comments Glenn, really appreciated.
Love the addition of response time data per page! Thanks as always Screaming Frog
Great tool, fast, easy, a lot of features and a Linux client (a big plus)!
Good job! Best wishes to Screaming Frog Team!
Nice! This is a great update.
Finally got some basic config saving… and turning off the password request thing is awesome… that sometimes becomes VERY annoying. So that’s a good add. And I’m excited to look at the new filters… should be cool. Export outgoing links is big too! Response time will be helpful as well. Thanks guys!
PS: Still want line numbers. :)
Great update for a very useful software. Keep up the good work. When I’m teaching SEO, I’m teaching with Screaming Frog.
Great update! Exporting to Excel with CSV is still a problem when there are commas in the meta description. Excel sees them as a new column and you have to manually correct all those rows. Can you fix that please?
Hey Simon,
This is fine for us!
So it’s not an SEO Spider issue, it’s an Excel/system thing unfortunately. I recommend you take a look at your set-up!
Cheers.
You could open the file in Notepad++ and do a find/replace.
To differentiate the commas, you’ll notice the actual separator is next to a double quote. So set the Find function to regular expression and:
Find ,([^"])
Replace ^$1 (or something random that doesn’t appear in the text in place of the ^ – the $1 keeps the character that followed the comma).
The file will now open as expected in Excel and you can do a Find/Replace to switch the ^ back to ,
Perhaps you didn’t use the right procedure? Sometimes it happens to me when I’m in a rush.
Unfortunately, opening a raw export directly sometimes confuses Excel, and when you split text using the menu in the Data ribbon, adjacent columns (that contain data) are overwritten and you lose the content.
In order to get everything working, you don’t need Notepad++ or any other editor but just follow the instructions below:
1) Open a blank spreadsheet
2) Go to the Data ribbon
3) Select ‘Import text’
4) Choose the raw data file you exported from SF
5) The configuration menu where you can choose the delimiter is shown
6) Pick ‘comma’ and press Next
7) Select the destination cell where the data will be pasted (if you haven’t clicked anywhere else, this defaults to A1)
8) Press Finish and enjoy
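For anyone who prefers to script it, a proper CSV parser makes both problems go away, since quoted fields can legally contain commas and even line breaks. A minimal sketch in Python with made-up export rows (the column names here are assumptions, not the actual export headers):

```python
import csv
import io

# A made-up two-row export: the meta description contains both a
# comma and a line break, all inside one quoted field
raw = ('"Address","Meta Description 1"\r\n'
       '"https://example.com/","Fast, friendly service\nacross the UK"\r\n')

# csv.reader respects the quoting, so the comma and the embedded
# newline stay inside a single field instead of splitting the row
rows = list(csv.reader(io.StringIO(raw)))
print(rows[1])
```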
I have a problem with the meta descriptions breaking everything up to different lines in Excel when the meta description in the code have a line break in them. Any solution to this?
Thanks
Tnx Andrea, you have no idea how long I have been struggling with that problem and you just solved it for me :)
Thank you sooo much!
This software is an excellent crawler to explore in depth all pages present on a website.
I’m just happy/nostalgic to see the Nullsoft installer…
Oh, and saved configs yay :)
Great update team. I am especially happy with the response time option in the new version; that will indeed be great. I was thinking about asking you if it would be possible to add this, but it looks like you read my thoughts :)
I am going to start petting a frog as recognition of your work :)
The memory usage auto pause feature will be great. I have never been able to use this tool on a couple of my larger clients as it chokes out. We’ll see how it works today!
Fantastic update, thanks guys. This is an awesome tool, perfect for highlighting issues or areas for improvement in sites. Keep up the good work!
You guys did great updates recently, thanks for keeping the Screaming frog a must have tool for SEOs!
I love this tool, it helps me a lot with my SEO tactics… Thanks for the post…
Cool, nice update guys. I especially like the bulk export links feature :) Thanks!
Screaming Frog, you are an awesome piece of software – so many cool features and very user friendly. Thanks to the guys who made this, you are great!
Hey, really great tool and updates. Keep up the good work.
One question about the response times: what exactly is the response time? Is it the time until the first byte is transferred, or until the whole HTML is transferred?
Hi Mike,
The time it takes to issue an HTTP request and get the full HTTP response back from the server. The spider is multi-threaded (configurable), and making HTTP requests is I/O bound, so this may not be a truly accurate figure.
Cheers,
Dan
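In code terms, Dan’s definition amounts to timing the whole round trip, not just the time to first byte. A rough sketch in Python (an illustration only, not the SEO spider’s actual implementation):

```python
import time
import urllib.request


def response_time(url, timeout=10):
    # Time from issuing the HTTP request to reading back the full
    # response body - the whole transfer, not just the first byte
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()  # include the full body transfer in the timing
    return time.monotonic() - start
```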