Screaming Frog SEO Spider Update – Version 2.55

Dan Sharp

Posted 29 July, 2014 in Screaming Frog SEO Spider


I don’t usually write a blog post for smaller intermediary updates, which are mainly bug fixes. On this occasion, however, we have decided to release a supporting post, as version 2.55 of the Screaming Frog SEO Spider includes a small feature update alongside a number of bug fixes and improved messaging & help for the user.

Without further ado, version 2.55 includes the following –

1) Command Line Option To Start Crawls

We are working on a scheduling feature and a full command line option. In the meantime, we have made a quick and easy update which allows you to start the SEO Spider and launch a crawl via the command line, which means you can now schedule a crawl.

Please see our post How To Schedule A Crawl By Command Line In The SEO Spider for more information on scheduling a crawl.

Supplying no arguments starts the application as normal. Supplying a single argument of a file path attempts to load that file as a saved crawl. Supplying the following:

--crawl http://www.example.com/

starts the spider and immediately triggers the crawl of the supplied domain. This switches the spider to crawl mode if it’s not the last used mode, and uses your default configuration for the crawl.

Note: If your last used mode was not crawl, “Ignore robots.txt” and “Limit Search Depth” will be overwritten.

Windows

Open a command prompt (Start button, then search programs and files for ‘Windows Command Processor’)

Move into the SEO Spider directory:

cd "C:\Program Files\Screaming Frog SEO Spider"

To start normally:

ScreamingFrogSEOSpider.exe

To open a crawl file (Only available to licensed users):

ScreamingFrogSEOSpider.exe C:\tmp\crawl.seospider

To auto start a crawl:

ScreamingFrogSEOSpider.exe --crawl http://www.example.com/
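Until the built-in scheduler arrives, this command can be hooked up to Windows Task Scheduler. A minimal sketch using the standard `schtasks` syntax (the task name, time and URL are illustrative; run from an elevated prompt):

```shell
:: Create a daily 2am task that launches a crawl (task name and URL are illustrative).
:: The inner quotes are escaped so the install path with spaces survives.
schtasks /create /tn "SEO Spider Crawl" ^
  /tr "\"C:\Program Files\Screaming Frog SEO Spider\ScreamingFrogSEOSpider.exe\" --crawl http://www.example.com/" ^
  /sc daily /st 02:00
```

Note the SEO Spider is a desktop application, so the task needs to run in an interactive session rather than in the background.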

Mac OS X

Open a terminal, found in the Utilities folder within the Applications folder, or directly via Spotlight by typing ‘terminal’.

To start normally:

open "/Applications/Screaming Frog SEO Spider.app"

To open a saved crawl file:

open "/Applications/Screaming Frog SEO Spider.app" /tmp/crawl.seospider

To auto start a crawl:

open "/Applications/Screaming Frog SEO Spider.app" --args --crawl http://www.example.com/
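On OS X, the usual way to put this on a schedule is a launchd agent. A sketch, assuming the default install location (the plist label, filename, time and URL are all illustrative):

```shell
# Write a LaunchAgent that opens the app with crawl arguments at 2am daily.
# The label/filename is illustrative; load it once with launchctl.
cat > ~/Library/LaunchAgents/uk.co.screamingfrog.crawl.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>uk.co.screamingfrog.crawl</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/open</string>
    <string>/Applications/Screaming Frog SEO Spider.app</string>
    <string>--args</string>
    <string>--crawl</string>
    <string>http://www.example.com/</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict><key>Hour</key><integer>2</integer><key>Minute</key><integer>0</integer></dict>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/uk.co.screamingfrog.crawl.plist
```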

Linux

The following commands are available from the command line:

To start normally:

screamingfrogseospider

To open a saved crawl file:

screamingfrogseospider /tmp/crawl.seospider

To auto start a crawl:

screamingfrogseospider --crawl http://www.example.com/
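On Linux, this command slots straight into cron for the kind of scheduling described above. A sketch of a crontab entry (edit with `crontab -e`; the time and URL are illustrative, and the machine needs a running desktop session since the SEO Spider opens a GUI):

```shell
# m h  dom mon dow  command
# Start a crawl of example.com at 2am every night.
0 2 * * * screamingfrogseospider --crawl http://www.example.com/
```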

2) Small Tweaks

We also made a few smaller updates which include –

  • A new ‘User Interface’ configuration which allows graphs to be enabled and disabled. There are performance issues with Java FX on Late 2013 Retina MacBook Pros, which we explain here. A bug has been raised with Oracle and we are pressing for a fix. In the meantime, affected users can work around this by disabling graphs or using low resolution mode. A restart is required for this to take effect.
  • We have also introduced a warning on start-up (and in the UI settings) for affected Mac users, explaining that they can either disable graphs or open in low resolution mode to improve performance.
  • Mac memory allocation settings now persist when the app is reinstalled, rather than being overwritten. The new way of configuring memory settings is detailed in the memory section of our user guide.
  • We have further optimised graphs to only update when visible.
  • We re-worded the spider authentication pop up, which often confused users who thought it was an SEO Spider login!
  • We introduced a new pop-up message for memory related crashes.

3) Bug Fixes

Version 2.55 includes the following bug fixes –

  • Fixed a crash with invalid regex entered into the exclude feature.
  • Fixed a bug introduced in 2.50 where starting up in list mode, then moving to crawl mode left the crawl depth at 0.
  • Fixed a minor UI issue with the default config menu allowing you to clear default configuration during a crawl. It’s now greyed out at the top level to be consistent with the rest of the File menu.
  • Fixed various window size issues on Macs.
  • Detect buggy saved window coordinates for version 2.5 users on the Mac, so they get full screen on start-up.
  • Fixed a couple of other crashes.
  • Fixed a typo in the crash warning pop up. Oops!

That’s everything for this small release; there’s lots more to come in the next update.

Thanks for all your support.

Dan Sharp is founder & Director of Screaming Frog. He has developed search strategies for a variety of clients from international brands to small and medium-sized businesses and designed and managed the build of the innovative SEO Spider software.

35 Comments

  • Lee Branch 10 years ago

    Hi there,

    I just tried upgrading to the latest Mac version, but now when I try to open Screaming Frog I get a message saying:

    Screaming Frog SEO Spider needs Java 7.
    Please go to https://java.com/download/ to download and install.

    If I go to that URL it gives me this error message:

    Chrome does not support Java 7 on Mac OS X. Java 7 runs only on 64-bit browsers and Chrome is a 32-bit browser.

    If you download Java 7, you will not be able to run Java content in Chrome on Mac OS X and will need to use a 64-bit browser (such as Safari or Firefox) to run Java content within a browser. Additionally, installing Java 7 will disable the ability to use Apple Java 6 on your system.

    Going to try and revert to a previous SF version as I need to do some crawling right now! Any ideas on fixing this please?

    Thanks, and thanks for the great tool :-)

    • screamingfrog 10 years ago

      Hi Lee,

      Thanks for comment –

      FAQ for you over here which answers all your queries –

      https://www.screamingfrog.co.uk/seo-spider/faq/#44

      After upgrading to Java 7 about 6 months ago on OS X, I’ve encountered I think one website which didn’t work in Chrome, to give you an estimation of the size of the issue!

      Cheers.

      Dan

  • Lee Branch 10 years ago

    Great! Sorry I didn’t check the FAQ’s first…

    http://cdn.meme.li/instances/400x/24779540.jpg

    Thanks for the fast reply :-)

    • screamingfrog 10 years ago

      Hey Lee,

      No worries, FAQ’s can be a pain to look through I know!

      Glad that has helped anyway! We still support the older 2.40 version (as per the FAQ) if you were worried about any Chrome issues anyway.

      Cheers.

      Dan

  • Mondane 10 years ago

    I just updated the frog on Ubuntu 14.04 and now I’m receiving the following message:

    ‘Please set your default jre to be java 7 using: sudo update-alternatives --config java’

    As I’m seeing my choices, it’s already set to 7 (albeit openjdk):

    There are 2 choices for the alternative java (providing /usr/bin/java).

    Selection Path Priority Status
    ————————————————————
    * 0 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java 1071 auto mode
    1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 manual mode
    2 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java 1071 manual mode

    Is this a bug in the frog?

    Regards.
    Mondane

    • Mondane 10 years ago

      I tried it, but it doesn’t work. Also, on a clean Ubuntu installation, I get this message:

      ~$ sudo update-alternatives --config java
      There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
      Nothing to configure.

      And still, the frog doesn’t start. I don’t want to install Oracle Java 7. Do you have any suggestions?

      Regards,
      Mondane

  • konrad 10 years ago

    Have you considered adding a “Linked from” option to, let’s say, the URL list? E.g. I’ve got a 180-page site that has a few links to a certain URL and I don’t know which pages it’s linked from.

    • screamingfrog 10 years ago

      Hi Konrad,

      I am not 100% sure I understand your suggestion – But if you mean viewing ‘in links’ to any URL in a crawl, you can use the ‘in links’ tab at the bottom which shows you where a URL is ‘linked from’, the anchor text and whether it’s followed or nofollowed.

      Cheers,

      Dan

  • Haikson 10 years ago

    Can not install.
    “Screaming Frog SEO Spider is running. Please close it…”
    It’s the first installation. SFSS is not running and is not in the process list.

    Win7x64

    • screamingfrog 10 years ago

      Hi Haikson,

      I have never seen that before without it being installed & open.

      Try a reboot and make sure you have admin rights.

      If that doesn’t help, do ping through a message to our support –

      https://www.screamingfrog.co.uk/seo-spider/support/

      Thanks,

      Dan

      Update – We’re using a Windows system call to ask if there is a process called ‘ScreamingfrogSEOSpider.exe’ for this check. So I’d say it would be a coincidence to have another process of the same name. A reboot should do it either way!

  • Joe 10 years ago

    Thanks for keeping this product up to date. At first I used your free version, thinking it wasn’t worth the money. After having paid for the spider, which comes out to be less than $10 a month, I realize how powerful it is and use it all the time.

    A scheduler would definitely be great for me, since my internet is best at the middle of the night.

    Any ideas why the spider would pick up a .css file being referenced on about every page? It’s the first time I’ve seen something like that happen

    • screamingfrog 10 years ago

      Hi Joe,

      Thanks for your kind comments.

      The spider will crawl any CSS files that are referenced in the HTML.

      If you meant to say ‘why wouldn’t the spider pick up a particular CSS file’, it’s generally due to being blocked via robots.txt etc. We list a few common reasons here in our FAQ –

      https://www.screamingfrog.co.uk/seo-spider/faq/#14

      Cheers,

      Dan

  • Joe 10 years ago

    Is there a tech support channel that can analyse a crawl output, or crawl a URL to see the issue? I understand that a CSS file would get picked up, but it seems unnatural for a CSS file to show for about every page, while other CSS files that exist in the source code are not shown.

    Is it possible for CSS to be referenced in a wrong way, so that crawlers see one CSS file multiple times, with the Spider showing it as a page, complete with titles, descriptions, etc. and an HTTP status code of 200?

  • Joshu Thomas 10 years ago

    I would say this has been THE Best update and your crawling tool has advanced to such a useful tool. Really really thankful to this great tool and new features to find the duplicate meta are super useful. Keeping it free is something great you are doing. Good job guys and great contribution to the SEO industry.

    thanks again
    josh

  • Tony 10 years ago

    How do I install the latest version? I don’t see where I can install it.

  • Chris 10 years ago

    Hey Dan,

    Does the updated version of Screaming Frog (v2.55) facilitate image and video xml Sitemap creation? If not, is it on the list of improvements to the tool?

    Can you recommend an OS X compatible tool that would be able to provide image and video XML Sitemaps for enterprise level sites?

    Thanks,

    /ck

    • screamingfrog 10 years ago

      Hey Chris,

      No, the new version doesn’t have those features just yet. They are both on the ‘todo’ list though!

      I don’t know an OSX tool which does it I am afraid.

      Sorry I couldn’t be of more help!

      Cheers.

      Dan

  • gnana 10 years ago

    hi

    I downloaded the SEO Spider exe file but I am not able to run the tool, and my OS is Windows XP advance. Please advise me.

    Gnana

    • screamingfrog 10 years ago

      Hi Gnana,

      It will work fine on Windows XP if it has Java 7 installed correctly.

      Thanks,

      Dan

  • Kerstin 10 years ago

    Please, in the next version: PDF URLs in the XML Sitemap.

    Thanks!

  • Stephanos 10 years ago

    Hi,

    I tried to run Screaming Frog 2.55 (free version) and I get a “not responding” message. I have an Asus netbook with an Intel Atom CPU Z520 @ 1.33GHz, 2 GB RAM and an Intel Graphics Media Accelerator 500. Moreover, I have installed Java version 8 (verified) and I have cleaned the Windows registry. By the way, I used to run version 2.4 without any problem. Any clue how to fix it?

  • Łukasz 10 years ago

    Hi! There is still a problem with the word count section. Even when a page has lots of content, Screaming Frog counts it as 0.
    Also, in list mode, external links don’t show up in the External section.

    Hope this can be fixed in next update ;)

    • screamingfrog 10 years ago

      Hi Łukasz,

      If you’re not seeing a word count on some pages, it’s due to invalid HTML mark-up which is throwing the parser off (similar to descriptions not appearing etc – https://www.screamingfrog.co.uk/seo-spider/faq/#37).

      You can pop it through to our support (https://www.screamingfrog.co.uk/seo-spider/support/) and we can let you know which particular piece of code is the issue.

      We don’t currently show links under ‘external’ in list mode, as often users upload 50k different domains in a list etc and it can make the whole thing more intensive. You can always export ‘all outlinks’ using the ‘bulk export’ and do a quick filter in Excel though.

      Thanks for the suggestions, will give the last one more thought!

      Cheers.

      Dan

  • Omar Taghlabi 10 years ago

    Hi there,
    Each time I run the crawler, the CPU usage jumps to 100%, slowing my computer down to the point where it’s useless.
    I’ve had Screaming Frog for a while and never had this issue.
    I also changed the memory to 6GB in the ScreamingFrogSEOSpider.l4j.ini file, but that didn’t resolve anything.
    I’m running a 64-bit Windows machine and have both 32-bit and 64-bit Java installed.
    Please advise!

  • Flavio 10 years ago

    Hi,

    I’ve managed to start the crawler on win CMD fine, however is there a way to force it to:

    1 – do the crawl
    2 – dump the crawl data to a file

    The goal being that I schedule a weekly crawl of our site for our website team to fix any issues (404 etc..).

    Is this possible?

    BR,

    Flavio

  • Swanand 10 years ago

    I tried using this tool to find broken links on my website.
    The tool does not crawl sites hosted on demo or staging servers.
    Why is it like that?
    I find this tool very user friendly, but I cannot use it given the above restriction.
    Please improve.

