How to Work in Teams Using the SEO Spider
Introduction
Collaboration is often an essential part of working optimally, especially when it comes to SEO. Due to the nature of the work, you’re often engaging with several different areas of a business, such as developers, designers and copywriters, to share insights and get recommendations actioned.
While the SEO Spider Tool is a local website crawler, you needn’t silo yourself away. There are several ways to work collaboratively and ensure everyone is on the same page, whether it’s sharing crawls, automatically exporting to the cloud, or implementing advanced scheduling processes.
Below are several ways to work collaboratively with the Screaming Frog SEO Spider.
Exporting Data
One of the most widely used features is the ability to export data into spreadsheets. This is useful if you want to share data with someone who doesn’t have the SEO Spider, or if you don’t want to send over a full crawl file.
Simply click the ‘export’ button in the top left-hand corner to export data from the top window tabs and filters.
To export lower window data, right-click on the URL(s) that you wish to export data from in the top window, then choose one of the export options.
There’s also a ‘Bulk Export’ option located under the top-level menu, allowing you to export things such as response codes, canonical issues, directives, structured data and much more.
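If the person you’re sharing with prefers working with the raw data, these exports can also be processed with a few lines of code. As a minimal sketch, assuming you’ve exported the ‘Internal’ tab as a CSV named internal_all.csv (column names follow the default English export headers), you could filter it down to just the error pages before passing it on:

```python
import pandas as pd

# Load the 'Internal:All' export (filename assumed from a default CSV export)
df = pd.read_csv("internal_all.csv")

# Keep only rows that returned a client or server error (4xx/5xx)
errors = df[df["Status Code"] >= 400]

# Write a slimmed-down file to share with developers
errors[["Address", "Status Code", "Status"]].to_csv("error_pages.csv", index=False)
print(f"{len(errors)} error pages written to error_pages.csv")
```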
Exporting Crawls
Exporting your crawl data is one of the easiest ways to share your findings with someone who also uses the SEO Spider. When you export a crawl file, it also remembers your configuration and tab layout, making it easy for someone to pick up where you left off and dive into the data.
Exporting and opening a crawl requires a licence, and the method varies depending on your storage mode.
In the default memory storage mode, you can save a crawl at any time (when paused or finished) and re-open by selecting ‘File > Save/Save As…’, or ‘File > Open’.
In database storage mode, crawls are automatically ‘saved’ and committed in the database during a crawl. To open a crawl, click ‘File > Crawls’ in the main menu. To open a crawl that was previously saved using memory storage mode, click ‘File > Import’. To export a crawl, click ‘File > Export’.
The ‘Crawls’ window displays an overview of automatically stored crawls, where you can open, rename, organise into project folders, duplicate, compare, export, or delete them in bulk.
Exporting Configuration Files
As well as sharing crawl files, you’re able to share SEO Spider configuration files. Being able to share a configuration is useful for teams working on the same site or set of sites that require a particular config, such as excludes, includes, limits and more.
To export a configuration file, go to ‘File > Configuration > Save As…’.
To import a configuration, click ‘File > Configuration > Load’. If all is well, you’ll see a success message within the SEO Spider.
Exporting Data to Google Drive
It’s also possible to export data from the SEO Spider directly into Google Drive. To do this, simply change the ‘Type’ to Google Sheets on the Export window.
If you haven’t connected a Google account yet, simply click ‘Manage’, then click ‘Add’.
A new browser window will open, where you can log in to your desired Google account and grant the required permissions.
Once access has been granted, you can close the browser window and return to the SEO Spider.
Exporting to Google Sheets will save exports in a ‘Screaming Frog SEO Spider’ folder within your Google Drive account. You can then share this folder with other users, and it also opens up opportunities for automation.
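For example, once the folder is shared, a colleague (or a script acting on their behalf) can pull the latest export without ever opening the SEO Spider. Here’s a rough sketch using the gspread Python library, assuming the exported sheet is named ‘internal_all’ and the folder has been shared with a service account:

```python
import gspread

# Authenticate with a service account the Drive folder has been shared with
gc = gspread.service_account(filename="service_account.json")

# Open the exported sheet by name ('internal_all' is assumed for this example)
worksheet = gc.open("internal_all").sheet1

# Pull every row into a list of dicts keyed by the header row
rows = worksheet.get_all_records()
print(f"Fetched {len(rows)} URLs from the shared export")
```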
Scheduling Crawls
You’re able to schedule crawls to run automatically within the SEO Spider, as a one-off, or at chosen intervals. This feature can be found under ‘File > Scheduling’ within the app.
You’re able to preselect the mode (spider or list) and saved configuration, as well as APIs (Google Analytics, Search Console, Majestic, Ahrefs, Moz) to pull in any data for the scheduled crawl.
You can also automatically save the crawl file and export any of the tabs, bulk exports, reports or XML Sitemaps to a chosen location, such as a network drive.
Scheduling also allows you to automatically export any tabs, filters, exports or reports to Google Sheets by switching the ‘format’ to gsheet. As mentioned previously, this will save a Google Sheet within your Google Drive account in a ‘Screaming Frog SEO Spider’ folder.
The ‘project name’ and ‘crawl name’ used in scheduling are used as folder names for the exports. For example, a ‘Screaming Frog’ project name with a ‘Weekly Crawl’ crawl name will be stored in Google Drive as nested folders of those names.
You’re also able to choose to overwrite the existing file (if present) or create a timestamped folder in Google Drive.
If you wish to export to Google Sheets to connect to Google Data Studio, then use the ‘Export for Data Studio’ custom overview export.
This allows users to select crawl overview data to be exported as a single summary row to Google Sheets. Each new scheduled export is automatically appended as a new row in the same sheet, building up a time series. Please read our tutorial on ‘How to Automate Crawl Reports In Data Studio’ to set this up.
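Because each scheduled crawl appends a single summary row, the sheet becomes a crawl-over-crawl time series that can also be analysed outside of Data Studio. A hypothetical sketch, again using gspread, that compares the two most recent crawls (the sheet title and column name below are placeholders, not fixed names from the export):

```python
import gspread

gc = gspread.service_account(filename="service_account.json")

# Open the summary sheet created by the 'Export for Data Studio' export
# (the title below is a placeholder for whatever your scheduled export creates)
rows = gc.open("Weekly Crawl Summary").sheet1.get_all_records()

if len(rows) >= 2:
    previous, latest = rows[-2], rows[-1]
    # 'Internal URLs' is a hypothetical column; adjust to your chosen metrics
    delta = latest["Internal URLs"] - previous["Internal URLs"]
    print(f"Internal URLs changed by {delta} since the previous crawl")
```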
When using scheduling, the SEO Spider will run in headless mode (meaning without an interface) when scheduled to export data. This avoids any user interaction, or the application launching in front of you and options being clicked, which would be a little strange. It does mean, however, that the machine must be running at the scheduled time. It’s also possible to use the SEO Spider as a cloud crawler, so you don’t have to keep your own device running (more on this later).
If you prefer to use the command line to operate the SEO Spider, please see our command line interface guide.
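As a flavour of how the two approaches combine, the sketch below launches a headless crawl from Python and exports a tab to a shared folder. The flags shown are taken from the command line interface guide, but do verify the binary name and options against your platform and version:

```python
import subprocess

# Run a headless crawl and export the 'Internal:All' tab on completion.
# Binary name and flags follow the CLI guide; verify for your installation.
subprocess.run([
    "screamingfrogseospider",
    "--crawl", "https://www.example.com",
    "--headless",
    "--output-folder", "/mnt/shared/crawls",  # e.g. a shared network location
    "--export-tabs", "Internal:All",
], check=True)
```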
Scheduling, combined with exporting to a shared location such as Google Drive, Dropbox or OneDrive, opens up a realm of possibilities when it comes to automation and working collaboratively with the SEO Spider.
Saving to a Shared Location
Similar to exporting directly to Google Sheets, you can of course export to a shared location such as Dropbox or OneDrive. This is as simple as selecting the desired synced folder or network drive when saving crawls, configuration files, exports and more.
If you’re using database storage mode, note that using a network drive as the location for the database itself is not supported, as it would be far too slow and the connection unreliable. Vault drives are also not supported.
Google Data Studio
Thanks to the Data Studio export option within scheduling, it’s possible to create fully automated Data Studio reports. This is useful for things like regular website health checks, or communicating top-level findings to stakeholders.
We have a comprehensive guide on how to set up a fully automated crawl report within Google Data Studio, opening up more options for a collaborative approach.
One of the most useful features of Google Data Studio is email scheduling, which allows you to send reports out on a daily, weekly or monthly basis. To do this, open up your Data Studio report, click the dropdown next to ‘Share’, and select ‘Schedule email delivery’.
Ensure you allow enough time for the crawl to complete and Google Sheets to sync when setting the email time. For instance, if your crawl normally takes 1 hour to fully complete, set email delivery for at least an hour after the scheduled crawl start time.
Running the SEO Spider in the Cloud
As mentioned, if you want to automate areas of your workflow when using the SEO Spider, you need to ensure that your device is running at the time the automation is due to occur. However, there are various ways of running the SEO Spider that don’t require you to leave your machine turned on all the time.
Running the SEO Spider in the cloud is one such way of achieving this, and we have an in-depth guide on how to do this using Google’s Compute Engine. You can quickly spin up virtual machines for everyday tasks, run concurrent crawls and use Compute Engine’s advanced scheduling capabilities to achieve automation without tying up your local resources.
As well as this, running it in the cloud allows you or your colleagues to access ongoing or finished crawls from anywhere, so you can easily get the information you need.
Any of the aforementioned methods for saving and sharing data apply to an instance of the SEO Spider running in the cloud, with the added benefit of also being able to export to Google storage buckets. Storage buckets allow you to store your data in the cloud, opening up further opportunities for automation whilst also maintaining affordability. Other users can also be added to storage buckets, making them great for sharing SEO Spider crawls and exports.
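As a sketch of how that might look from the VM, the official google-cloud-storage Python client can push a finished export into a shared bucket (the bucket and file names below are placeholders):

```python
from google.cloud import storage

# On Compute Engine, this picks up the VM's default service account credentials
client = storage.Client()

# Upload a finished export to a shared bucket (names are placeholders)
bucket = client.bucket("my-seo-crawls")
blob = bucket.blob("weekly-crawl/internal_all.csv")
blob.upload_from_filename("/home/user/crawls/internal_all.csv")

print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```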
While there are several ways of running the SEO Spider in the cloud, our guide uses Chrome Remote Desktop to access the Virtual Machine, mostly for simplicity. However, this means that if you want others to be able to access the same VM, you’ll need to set up Chrome Remote Desktop on a Google account that multiple users have access to. Chrome Remote Desktop does not support concurrent connections, so if that’s something you require, you’ll want to explore something like Windows Server Remote Desktop Connections.
Please note: licences are per user, so you’ll need one for every user that will be accessing it.
Summary
This tutorial will hopefully help improve your workflows when working in teams with the Screaming Frog SEO Spider tool.
If you experience any issues when working collaboratively or have any further questions or suggestions for this feature, please contact us via support and we can help.