If you want to learn what you can do with Screaming Frog, one of the tools most frequently used by SEO experts, before you start using it, this content is for you.
What Is Screaming Frog?
Screaming Frog is a tool that crawls your website the way search engines do and lists the metrics that matter for SEO, allowing you to see your website's shortcomings. It was created in 2010 by Dan Sharp. Its most important difference from its competitors is that it is a Java-based desktop application rather than a cloud-based service: you install it and run it on your own computer.
Screaming Frog has free and premium versions. The free version is limited to 500 URLs per crawl, while the paid version allows unlimited crawling. You can start using it by downloading it from https://www.screamingfrog.co.uk/seo-spider/#download.
Obtaining and Activating a Screaming Frog License
If you want to use the premium version instead of the free version, you must obtain a license from https://www.screamingfrog.co.uk/seo-spider/licence/ and enter it in the tool you downloaded. After entering your username and license key, close and reopen Screaming Frog to start using it.
Settings and Configuration Options
Memory Allocation: Screaming Frog allocates 1GB of RAM by default on 32-bit machines and 2GB on 64-bit machines. Increasing the amount of RAM here allows you to crawl more URLs while in memory storage mode.
When crawling websites with many pages, you can raise this setting manually; on computers with plenty of memory, this lets you complete your crawls faster.
Storage Mode: In this section about where the data you scan will be stored and processed, there are two options: Memory Storage and Database Storage. When the Memory Storage setting is selected, all data is stored in RAM, while when Database Storage is selected, storage is provided on the HDD/SSD.
Memory Storage Mode is recommended for websites with fewer URLs and machines with high RAM.
Proxy Configuration: If you want to use a proxy, you can make settings from this section.
Language Configuration: Here you can set the language you want to use the tool in.
Screaming Frog Mode Settings
Spider: In this mode, Screaming Frog will keep crawling until it has discovered every URL on the website you started crawling.
List: This mode allows you to manually specify the URLs you want to crawl. It only scans the URLs you specify.
SERP: It provides a preview of how the meta title and description tags of the pages you scan are reflected in the search results.
Compare Mode: This mode compares your new crawl with a previous one, so you can more easily see which issues have been resolved.
Screaming Frog Configuration Settings
Spider: In the Crawl tab, you can select which resource and link types to include in or exclude from the crawl. In the Extraction tab, you can specify which data is extracted and stored. In the Limits tab, you can configure the limit or depth of your crawl. In the Rendering tab, you can change the render option so that data on a JavaScript-rendered site is read correctly. In the Advanced tab, you can specify whether directives such as noindex, canonical, and next/prev should be respected. In the Preferences tab, you can set or change the pixel limits used for your meta tags.
Content Area: In this area, you can control which part of the page is used for content analysis and spelling and grammar checks. Screaming Frog considers content in the body of a page by default. On a website that was not built with HTML5 semantic elements, it is recommended that you customize this area so the tool can interpret the content more accurately.
Content Duplicates: You can make your custom settings here to test the originality of the content on your site.
Spelling & Grammar: The “Spelling” and “Grammar” options must be enabled for spelling and grammatical errors to be displayed in the Content tab. Enabling them adds these checks to your crawl.
Robots.txt: In this section, you can configure whether the crawl should respect or ignore the directives in the robots.txt file of the site you are scanning. You can also create a custom robots.txt yourself and have the site crawled according to it.
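To get a feel for how robots.txt directives are interpreted before testing a custom file in Screaming Frog, you can check individual URLs against the same rules with Python's standard library (the domain, paths, and rules below are hypothetical examples):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical custom robots.txt, like one you might test a crawl against.
robots_txt = """
User-agent: *
Allow: /private/public-page.html
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("*", "https://example.com/private/secret.html"))       # False
print(parser.can_fetch("*", "https://example.com/private/public-page.html"))  # True
print(parser.can_fetch("*", "https://example.com/blog/"))                     # True
```

Note that the more specific Allow rule is listed before the broader Disallow, so the one public page stays crawlable while the rest of the folder is blocked.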
URL Rewriting: In the URL Rewriting field, you can modify the URLs Screaming Frog reports, for example to strip parameters you do not want to see in the crawl results. With the regex rules you apply, you can display URLs with www as non-www, or show URLs with a .co.uk domain extension under the .com extension, and see them that way in the crawl results. This area is also very useful if you want your test-site URLs, which live on a separate subdomain, to be reported as if they were on your live site.
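The rewrite rules described above are ordinary regex replacements, so you can prototype them in Python before entering them under Configuration > URL Rewriting (the domains and rules here are hypothetical examples):

```python
import re

# Hypothetical regex-replace rules, mirroring entries you might add
# in Screaming Frog's URL Rewriting > Regex Replace tab.
rules = [
    (r"https://www\.", "https://"),   # report www URLs as non-www
    (r"\.co\.uk/", ".com/"),          # view .co.uk URLs under the .com extension
]

def rewrite(url: str) -> str:
    # Apply each rule in order, exactly as a chain of regex replacements would.
    for pattern, replacement in rules:
        url = re.sub(pattern, replacement, url)
    return url

print(rewrite("https://www.example.co.uk/page"))  # https://example.com/page
```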
CDN: The CDN tab is a feature you can use to make Screaming Frog see the URLs of your CDN service as internal links when browsing. After this adjustment, the links of the CDN address will appear in the "Internal" tab in Screaming Frog, and more details will be previewed.
Include: Screaming Frog's "Include" feature lets you restrict a crawl to the URLs you prefer, which is useful on sites with a large number of URLs. For this feature to work, the URL you start the crawl from must contain an internal link that matches the regex; otherwise Screaming Frog will not be able to crawl a second URL after the first. In addition, the URL you start the crawl from must itself comply with your matching rules.
https://www.screamingfrog.co.uk/seo-spider/user-guide/configuration/#include
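As a sketch of how such an include rule filters discovered URLs (the example.com domain and /blog/ folder are hypothetical), the same regex matching can be reproduced in Python:

```python
import re

# A hypothetical include rule: crawl only URLs under the /blog/ folder,
# like a pattern entered under Configuration > Include.
include_pattern = re.compile(r"https://example\.com/blog/.*")

discovered = [
    "https://example.com/blog/seo-tips",
    "https://example.com/products/widget",
    "https://example.com/blog/",
]
# Only URLs matching the include pattern would be crawled.
crawlable = [url for url in discovered if include_pattern.fullmatch(url)]
print(crawlable)
```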
Exclude: You can use this feature to exclude URLs, folders, and parameters that you do not want crawled. Exclusion rules are written as regex. The exclusions you apply here do not affect the first URL you start crawling from in Screaming Frog; the Exclude setting applies only to URLs discovered during the crawl.
https://www.screamingfrog.co.uk/seo-spider/user-guide/configuration/#exclude
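The exclusion patterns work the same way as include rules, just in reverse. A minimal sketch, using hypothetical patterns for a folder, a parameter, and a file type:

```python
import re

# Hypothetical exclude rules, like entries under Configuration > Exclude.
exclude_patterns = [
    r".*/private/.*",   # exclude a folder
    r".*\?sort=.*",     # exclude a URL parameter
    r".*\.pdf$",        # exclude PDF files
]
compiled = [re.compile(p) for p in exclude_patterns]

def is_excluded(url: str) -> bool:
    # A URL is skipped if any exclude pattern matches it in full.
    return any(p.fullmatch(url) for p in compiled)

print(is_excluded("https://example.com/private/area"))    # True
print(is_excluded("https://example.com/shop?sort=price")) # True
print(is_excluded("https://example.com/blog/post"))       # False
```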
Speed: In this field, you can set how fast the tool crawls. By default, Screaming Frog uses a maximum of 5 threads and, when the limit is enabled, crawls 2 URLs per second. You may need to reduce the crawl speed depending on the server performance of the site you are crawling; if the number of requests per second is too high, you are likely to encounter 500 errors.
User-Agent: Before you start a crawl, you can choose which User-Agent will be used to request pages with this setting. A website may block the Screaming Frog User-Agent on the server side; in that case, a crawl with the default User-Agent will not work. You can also use this field to check the crawlability of your website under different user-agent options.
HTTP Header: The HTTP Header option allows you to supply custom HTTP request headers during a crawl in Screaming Frog.
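As an illustration of what a custom request header is, the snippet below builds a request carrying one with Python's standard library (the URL and the Accept-Language value are hypothetical; Screaming Frog lets you configure this without writing code):

```python
from urllib.request import Request

# A request with a custom header, similar in spirit to what you could
# supply under Configuration > HTTP Header.
req = Request(
    "https://example.com/",
    headers={"Accept-Language": "en-GB"},
)

# urllib stores header names with only the first letter capitalized.
print(req.get_header("Accept-language"))  # en-GB
```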
Custom Search: The Custom Search section is a feature that allows you to search for any data you want in the source code of a site with Screaming Frog. You can search the HTML for a value you enter as Text or Regex in the Custom Search section. You can check whether the value you entered is in the HTML with the "Contains" or "Does Not Contain" filters, and you can get the results.
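A "Contains" check is simply a text or regex search over the page source. The sketch below reproduces two hypothetical checks in Python (the HTML fragment and patterns are made-up examples, such as verifying a tag-manager snippet or out-of-stock text):

```python
import re

# A hypothetical page source to search, like the HTML Screaming Frog fetches.
html = """
<html><head>
<script async src="https://www.googletagmanager.com/gtag/js"></script>
</head><body><p>Out of stock</p></body></html>
"""

# "Contains" style checks: is the value present in the HTML?
has_tag_manager = bool(re.search(r"googletagmanager\.com", html))
has_stock_notice = bool(re.search(r"Out of stock", html))
print(has_tag_manager, has_stock_notice)
```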
Custom Extraction: The Custom Extraction section allows you to extract data from a site's HTML using CSSPath, XPath, or Regex. For example, you can capture the product codes on the product pages of an e-commerce site with the CSSPath, XPath, or Regex rules you define in this field. The Custom Extraction tool only extracts data from internal HTML pages that return a 200 response code. You can switch to JavaScript rendering mode to pull data from the rendered HTML rather than the static HTML.
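To illustrate what an XPath or regex extraction rule pulls out, here is a minimal sketch against a hypothetical product-page fragment (the class names, SKU, and price are invented; in Screaming Frog you would only enter the XPath or regex, and the tool applies it to every crawled page):

```python
import re
import xml.etree.ElementTree as ET

# A hypothetical, well-formed product page fragment.
html = """
<html><body>
  <div class="product">
    <span class="sku">SKU-12345</span>
    <span class="price">19.99</span>
  </div>
</body></html>
"""

root = ET.fromstring(html)
# XPath-style extraction, like a Custom Extraction XPath rule.
sku = root.find(".//span[@class='sku']").text
# Regex extraction, like a Custom Extraction Regex rule.
price = re.search(r'class="price">([\d.]+)<', html).group(1)
print(sku, price)  # SKU-12345 19.99
```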
Custom Link Positions: Screaming Frog categorizes each link it discovers based on its position on the page, such as the content, sidebar, or footer areas. With the Custom Link Positions tool, you can supply your own XPath so that the link positions shown in the "Inlinks" and "Outlinks" sections in Screaming Frog match the criteria you specify. This lets you check whether a link appears in the area you defined.
Google Analytics API: By connecting your Google Analytics account to Screaming Frog, you can enrich your crawl with analytics data for your URLs. To make this connection, click the Google Analytics tab in the API Access menu, then click the “Connect to New Account” button and sign in with your Analytics account.
Search Console API: For the Search Console connection, click Google Search Console in the API Access menu, click the "Connect to New Account" button in the window that opens, select the Google account where your Search Console property is located, and grant the requested permissions. In this way, you can also check for URLs that were previously discovered and crawled by Google but are no longer present on your website.
PageSpeed Insights API: If you want to see the speed values obtained from PageSpeed Insights alongside your Screaming Frog crawl, you must click the PageSpeed Insights section of the API Access menu and enter the requested "Secret Key" before starting your crawl. You will then be able to see the speed performance metrics of your crawled URLs in your crawl results.
Ahrefs API Connection: If you want to see data from Ahrefs in your scan results, you can connect your Ahrefs account with Screaming Frog. When you click the Ahrefs section in the API Access menu, you need to enter the "Access Token" information in the window that opens.
Authentication: If the website you want to crawl sits behind a user login, you can save the site's login URL and username & password in the "Forms Based" section of this area. While crawling, the Screaming Frog bot will first log into your site like a user and will then be able to crawl the pages behind the login screen.