AnalyticaHouse

Marketing tips, news and more

Explore expert-backed articles on SEO, data, AI, and performance marketing. From strategic trends to hands-on tips, our blog delivers everything you need to grow smarter.

How to Integrate Virtual Page with GTM Element Visibility
Sep 5, 2022 976 reads

In this blog post, you'll find answers to the question above and gain detailed insights into what a Virtual Page is, its advantages and disadvantages, and how to manage this process without developer support by integrating it with GTM Element Visibility.

In cases where the page URL does not change but the content does, you may need additional page tracking and more detailed analysis. To conduct in-depth funnel analysis in such scenarios, let's explore what these definitions mean and how you can implement them via GTM Element Visibility without developer assistance.

What Is a Virtual Page (Single Page Application)?

A Virtual Page, or SPA (Single Page Application), is a type of web application that interacts with the user by dynamically rewriting the current page instead of loading entire new pages from the server, which helps your site perform faster.

In a SPA, all the source code loads at once when the site opens, and new pages are displayed by running frontend scripts against the preloaded code. The advantage is that when users navigate to a different page, they don't have to wait for the code to reload. In short, Virtual Pages improve site speed and enhance the user experience.

For example, imagine browsing an e-commerce website and opening various products. In a traditional structure, each product click sends a new request to the server, which reloads the entire page. While this may seem fine, it can significantly slow down your site during high-traffic periods. In a SPA scenario, since all source code is preloaded, user actions are handled by existing code and page speed isn't affected. When a product is clicked, the page changes, but it isn't reloaded.

However, alongside the speed advantages, SPAs can also hurt your site. One issue is broken source tracking, known as a rogue referrer: the referrer data that tells you where your users came from and how long they stayed may be disrupted in SPAs. SPAs can also be a disadvantage for SEO: since a SPA is seen as a single page, it can hinder proper indexing by search engines and lower your page rankings.

Advantages of SPA (Single Page Application):
- Speed / Performance: As mentioned, SPAs dynamically update content without reloading the whole page, allowing users to take action quickly within the site.
- UX (User Experience): SPAs provide an experience similar to mobile apps by preventing interruptions during navigation, offering a smooth and practical experience.
- Caching: SPAs enable effective caching, so local data can keep the site usable even during connection issues.

Disadvantages of SPA (Single Page Application):
- Rogue Referrer: Source tracking may be disrupted, making it difficult to detect where users came from and hindering detailed funnel analysis.
- SEO: Since SPAs appear as a single page, they may prevent your site from being indexed properly, affecting search engine rankings.

The "All Pages" trigger in GTM fires on full page loads, so it doesn't work on Virtual Pages. This becomes a limitation when detailed funnel analysis is required. To solve it, first identify where Virtual Pages are used on your site, then see how integration is done using GTM Element Visibility.

Areas Where Virtual Pages Are Used
- Virtual Cart Pages: Side panels showing the cart or an order summary without navigating to a new page.
- Lead Generation Pages: Often used in SMS or consent pop-ups.

Common Examples of Virtual Page Use: Gmail, Facebook, Twitter, Google Drive, Google Maps, Netflix.

How to Integrate a Virtual Page?

The first method is pushing events via developer support, either natively or through GTM. While this may seem easy, relying on developers can slow things down or prevent fast intervention when issues arise. The second method, covered here, is using GTM Element Visibility. It allows fast implementation without developer involvement and supports detailed funnel tracking.

Virtual Page Integration with GTM Element Visibility

We'll use GTM for the virtual cart and lead generation examples, so your website must have GTM installed. Then you can configure the Element Visibility trigger. First, let's understand how this trigger works.

In GTM, select "New Trigger", click "Trigger Configuration", and choose "Element Visibility" as the trigger type.

The Element Visibility trigger can fire in three ways:
- Once per page: Fires only once per page load, ideal for limiting duplicate hits.
- Once per element: Fires once per element instance, useful if a user reopens a pop-up multiple times.
- Every time an element appears on screen: Fires each time the element appears, according to the specified visibility threshold.

Percent visible defines how much of the element is visible, and minimum percent visible is the threshold required to trigger; the default is 50%. With minimum on-screen duration, you can also require a minimum time (e.g., 1,000 ms) before triggering. If the element loads after the page load, enable observe DOM changes so it is still detected.

GTM Element Visibility for Virtual Cart Pages

Select your target element using the browser's inspector tools (Inspect > Elements), and use the most minimal and stable selector (the class names below are examples; use your own site's classes), for example:

document.querySelector(".box-flex.cart-summary")

Then validate that the element exists only once on the page. Since querySelector returns only the first match, count the matches with querySelectorAll:

document.querySelectorAll(".box-flex.cart-summary").length // should be 1

Once the trigger is set, proceed to tag configuration. To push the data as a pageview, configure it in GA and override the default page URL via More Settings > Fields to Set, customizing the page and title fields.
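If you prefer pushing the values explicitly instead of deriving them inside the tag, a Custom HTML tag fired by the same Element Visibility trigger can push a virtual pageview to the dataLayer. This is only a minimal sketch: the event name and both keys are assumptions, and they must match the Data Layer Variables your GA tag reads via Fields to Set.

// Minimal sketch: announce a virtual pageview when the cart summary appears.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: "virtualPageview",                 // hypothetical event name
  virtualPagePath: "/virtual/cart-summary", // value for the "page" field
  virtualPageTitle: "Cart Summary"          // value for the "title" field
});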
GTM Element Visibility for Lead Generation Pages

In some cases, SMS confirmation is shown in pop-ups rather than on new pages. Without tracking these, it's hard to know when users exit the SMS funnel. By tracking the SMS modals with Element Visibility, you can push virtual pageviews to GA and gain detailed insights.

With these examples, brands using virtual cart pages or lead generation pop-ups can perform Virtual Page integration using GTM Element Visibility, without developer support, allowing more accurate performance measurement and analysis.

See you in the next post…

List of Things to Consider in Blog Posts
Sep 5, 2022 2556 reads

You should definitely take a look at this checklist to increase your organic traffic with blog posts tailored to user search intent. With the 13 tips we've compiled for you, you'll ensure search engines understand your content while offering users a great experience.

One of the most important channels in content marketing is blog writing. You're probably familiar with the following advice from SEO experts and content writers: "Create content for users, not search engines." "Google always rewards high-quality content." With the 13 recommendations we'll share, your content can earn respect from both search engines and users.

Checklist for Blog Writing

Review your blog posts carefully. If you want greater visibility and more clicks for a wide range of keywords, certain elements must be considered. Start by checking the fundamentals, then expand your SEO efforts from there.

1. Engaging Content Topics
2. SEO-Friendly URL Structure
3. Create an Author Profile
4. Table of Contents
5. Catchy & Powerful Titles
6. Optimize Your Intro Paragraph
7. Content Headings
8. Ideal Content Length
9. Use of Visuals
10. Summarizing Your Content
11. The Power of Internal Linking
12. Structured Data Markups
13. Display of Related Content

1. Engaging Content Topics

Your blog topics should be engaging for your target audience. When you cover what they're curious about, they'll pay more attention to your content and website.

Ways to identify engaging topics:
- Review your site search terms and landing pages via Google Analytics.
- Ask your sales and marketing teams which questions they frequently receive.
- Listen to discussions about your industry on social media.

You can use these methods or analyze competitors' top-performing pages with tools like Ahrefs to identify the types of content your audience prefers.

2. SEO-Friendly URL Structure

We recommend using a simple, short, and memorable SEO-friendly URL for every page on your website. Using dates or long parameter strings in URLs makes them difficult to remember and may negatively affect server response time during crawling.

❌ Not recommended: https://www.example.com/index.php?id_sezione=360&sid=0108mb202233ah4234
✅ Recommended: https://www.example.com/seo

An SEO-friendly URL gives both users and search engines a quick clue about the page's content. Also remember that URLs differing only in uppercase and lowercase letters are treated as separate pages by Google.

3. Create an Author Profile

Having an author profile for your blog posts is important for E-A-T (Expertise, Authoritativeness, Trustworthiness); Google may use it to evaluate content quality. Listing the author's name and linking to their profile lets users easily explore more content by the same writer. For YMYL (Your Money or Your Life) topics, author profiles help build credibility.

4. Table of Contents

We recommend adding a table of contents to your blog post, ideally at the top, left, or right of the article. It helps users and search engine bots quickly understand and navigate the content structure. For WordPress sites, you can use plugins like TOC to add one; check plugin reviews and download numbers before installation. Tables of contents can also lead to rich results in Google SERPs by showing sitelink text under the main result.

5. Catchy & Powerful Titles

Your meta titles (the page titles shown in SERPs) significantly influence click-through rates. Titles matter for both SEO and user attention, and relevant, optimized titles improve search performance.

Tips for writing page titles:
- Review Google's search result pages.
- Include your brand name.
- Use numbers where possible.
- Optimize for search engines, write for users.
- Revise meta titles for pages with low CTR.
- Use questions to spark curiosity.

6. Optimize Your Intro Paragraph

Meta titles get users to click, but the intro paragraph gets them to stay. Your intro should clearly explain what the content is about and why it's worth reading. Include stats if possible, and use your target keyword naturally.

7. Content Headings

Use heading tags (H1 to H6) to organize your content; they help search engines and users understand the topic hierarchy. Use keywords naturally in your headings. H1 is the main title, and H6 carries the least emphasis.

8. Ideal Content Length

Content quality matters more than length. That said, the recommended length for blog posts in 2022 is between 1,500 and 3,000 words. Short content is easier to consume, but Google tends to prefer in-depth coverage. Aim for at least 300 words, and consider going longer if your audience prefers detailed insights. Use "read more" accordion menus or A/B testing to gauge reader engagement and refine your strategy accordingly.

9. Use of Visuals

Include images and visuals in your blog posts. Sometimes a graphic or infographic communicates better than text, especially for complex topics.

10. Summarizing Your Content

Just like a school essay, your post should include an introduction, body, and conclusion. A summary at the end helps reinforce your message. Use a call-to-action in your conclusion, such as subscribing to a newsletter or browsing a product page, to encourage conversions.

11. The Power of Internal Linking

Internal links are essential across your entire site. They help both users and bots explore your content and boost overall SEO performance. Link high-traffic pages to lower-traffic ones to spread visibility, and think of your site as a web of interconnected content.

12. Structured Data Markups

Structured data gives search engines context about your pages, which can lead to rich results in Google and better visibility. Use schema.org vocabulary, especially for product, category, service, or blog pages. For blogs, use BlogPosting or Article, and add FAQ, HowTo, Breadcrumb, or Recipe markup where relevant.
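To give one concrete illustration for the blog case, here is a minimal BlogPosting snippet; the author name, date, and image URL are placeholders, and you should extend it with the properties your pages actually have:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "List of Things to Consider in Blog Posts",
  "datePublished": "2022-09-05",
  "author": { "@type": "Person", "name": "Author Name" },
  "image": "https://www.example.com/images/cover.jpg"
}
</script>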
13. Display of Related Content

When a user finishes reading a blog post, don't let them leave your site; show related posts to keep them engaged. Display up to 3 related posts at the end of your article. This not only reduces bounce rate but also strengthens internal linking.

Conclusion

In this article, we covered 13 essential tips for writing effective blog content. By implementing these strategies, your content will be better understood by search engines and offer users a more optimized experience. If you'd like us to cover more SEO-related topics like this checklist, let us know. And if you found this content helpful, feel free to share it on social media to support us!

Exit, Entrance and Landing Page Reporting in GA4
Sep 5, 2022 6029 reads

It may seem that the GA4 panel, which you'll soon be using exclusively, doesn't make it as easy as the GA3 panel to access information about your pages. With a few editing and reporting techniques, you can create your own dashboard or a customized report. But first, let's look at what these pages mean and how you can access them in the GA3 panel.

Exit Page

Exit pages are the last pages your site visitors viewed before leaving the site. For example, if a user reads your blog content, then visits a product page on your site and leaves from there, the product page is the exit page.

Here's how to see exit pages in the GA3 panel: select Site Content under the Behavior report, then click Exit Pages in the drop-down menu.

The exit page report can give you useful insights. If you run a blog or news site, it's perfectly normal for visitors to read a single article and leave. However, if many people are leaving your e-commerce site during checkout, that's a red flag. The report can also help you spot poorly performing pages, or give you ideas about whether a page is loading very slowly.

Now let's see where you can find this data in the GA4 panel. In GA4, you can follow exit pages with the Exits metric. A default exit-page dashboard may be added to the panel in the future, but for now you have to create it manually:

1. Open an empty report template in Explore.
2. Click the + sign in the Dimensions field.
3. Select "Page path + query string" as the dimension and click Import.
4. Add Exits and Views to the Metrics field.

Once all the data has been added, your exit page table is ready.

Entrance Page

Entrance pages are where the user begins their journey through your site. This can be confused with terms like pageview and session, so let's clarify. Google Analytics records a pageview every time a page on your website is loaded and the tracking code executes; the number of views a page receives constitutes the pageview metric. This is different from an entrance, because a pageview doesn't have to be the first page a user visits: any page a user views during a session is recorded as a pageview.

Google Analytics also counts one session each time a user visits your website. It logs all the pages they visit and the events they trigger as a single session, unless they are inactive for more than 30 minutes. If a user reaches the 30-minute inactivity limit, Analytics records a new session the next time they interact with your site.

An entrance source is what directs a user to your site. Entrance sources can be paid campaigns, social media posts, or other external sources linking to your site.

You can see this data in the GA3 panel as follows: select Site Content under the Behavior report, then click All Pages in the drop-down menu.

To see this data in the GA4 panel:

1. Open an empty report template in Explore.
2. Click the + sign in the Dimensions field.
3. Select "Page path + query string" as the dimension and click Import.
4. Add Entrances and Views to the Metrics field.

Once all the data has been added, your entrance page table is ready.
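To make the pageview and entrance definitions concrete, this is roughly the hit the tracking code generates on each page load, shown as a minimal gtag.js sketch (it assumes the standard gtag snippet is already installed; the measurement ID behind it is a placeholder):

// gtag("config", ...) normally sends a page_view automatically on load;
// a manual event like this is only needed for virtual pages or custom setups.
gtag("event", "page_view", {
  page_location: window.location.href, // each execution counts one pageview
  page_title: document.title           // a session's first page_view is the entrance
});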
Landing Page

The landing page is the web page people arrive at after clicking your ad, and its URL is usually the same as your ad's final URL. For each ad, you specify a final URL to determine the landing page people are directed to when they click.

Your landing page experience is one of several factors that determine a keyword's Quality Score. It reflects things such as the usefulness and relevance of the information on the page, ease of navigation, the number of links on the page, and users' expectations based on the ad they clicked.

You can see landing pages in the GA3 panel as follows: select Site Content under the Behavior report, then click Landing Pages in the drop-down menu.

In the GA4 panel, you can get this data in two different ways. First, you can create a customized report:

1. Open an empty report template in Explore.
2. Click the + sign in the Dimensions field.
3. Select "Landing page" as the dimension and click Import.
4. Add any metrics you want to see to the Metrics field. You can use metrics such as views, sessions, engaged sessions, total users, new users, returning users, engagement rate, average engagement time per session, conversions, and total revenue, or create your own customized metric as described in the GA4 Custom Definitions and Usage Areas post.

Once all the data has been added, your landing page table is ready.

The second method is to create a dashboard by customizing the Pages and Screens report:

1. Open the Reports area in the GA4 panel and select the Engagement category.
2. Open the Pages and Screens dashboard located there.
3. Click Customize Report in the upper right.
4. Click Save and select "Save as a new report".
5. Update the name of the report to Landing Pages.
6. Click Dimensions in the Customize report area.
7. Select Add Dimension on the screen that opens.
8. Add Landing Page as a dimension.
9. Select the three dots next to Landing Page and click "Set as default".
10. Save your changes by clicking Apply.
11. Click the Metrics field to add or remove the metrics you want, and save again with Apply.
12. Select "Save changes to the current report" to save the changes made to the entire report.

To reach this dashboard more easily, select Library in the Reports section, click Edit Collection in Life Cycle, drag the Landing Pages report under Engagement from the report collections on the right, and click Save. Now, when you open the Engagement menu under the Reports area, you'll find a dedicated section where you can see the landing page data.

Thus, you'll be able to better measure and analyze the performance of your pages in the GA4 panel.

Screaming Frog User Guide and Configuration Settings
Sep 5, 2022 29665 reads

If you want to learn what you can do before you start using Screaming Frog, one of the tools most frequently used by SEO experts, this content is for you.

What Is Screaming Frog?

Screaming Frog is a tool that crawls your website the way search engine bots do and lets you see your site's shortcomings by listing the metrics that matter for SEO. It was founded in 2010 by Dan Sharp. Its most important difference from competitors is that it is Java-based rather than cloud-based, so you install and run it on your own computer.

There are free and premium versions of Screaming Frog. The free version is limited to 500 URLs, while the paid version allows unlimited crawling. You can start by downloading it from https://www.screamingfrog.co.uk/seo-spider/#download.

Obtaining and Activating a Screaming Frog License

If you want to use the premium version instead of the free one, obtain a license from https://www.screamingfrog.co.uk/seo-spider/licence/ and enter it in the tool you downloaded. After entering the username and license number, close and reopen Screaming Frog to start using it.

Settings and Configuration Options

Memory Allocation: Defaults to 1GB on 32-bit machines and 2GB on 64-bit machines. If you increase the amount of RAM allocated here, the tool can crawl more URLs while in memory (RAM) mode. When crawling websites with many pages, you can raise this setting manually; on computers with plenty of memory, this lets you finish your crawl faster.

Storage Mode: This section controls where the crawled data is stored and processed, with two options: Memory Storage and Database Storage. With Memory Storage, all data is kept in RAM; with Database Storage, it is stored on the HDD/SSD. Memory Storage mode is recommended for websites with fewer URLs and machines with plenty of RAM.

Proxy Configuration: If you want to crawl through a proxy, configure it in this section.

Language Configuration: Here you can set the language you want to use the tool in.

Screaming Frog Mode Settings

Spider: In this mode, Screaming Frog's bots keep crawling until they have discovered all URLs on the website where you started the crawl.

List: This mode lets you manually specify the URLs you want to crawl; it scans only the URLs you provide.

SERP: Provides a preview of how the meta title and description tags of the crawled pages appear in the search results.

Compare: This mode compares your new crawl with a previous one, so you can more easily see which issues have been resolved.

Screaming Frog Configuration Settings

Spider: Before the crawl, you can select which areas to include or skip in the Crawl tab, and specify which data to pull out in the Extraction tab. You can configure the limit or depth of your crawl in the Limits section. On a site that renders with JavaScript, change the option in the Rendering section so the data is captured correctly. In the Advanced section, you can decide whether tags such as noindex, canonical, and next/prev should be respected. In the Preferences section, you can adjust the pixel limits used to evaluate your meta tags.

Content Area: In this area, you can control which parts of a page are analyzed for content and grammar checks on the site you crawl. Screaming Frog considers content in the body of a page, so on a website that was not built with HTML5 semantic elements, it is recommended to customize this setting so the tool can interpret the content more accurately.
Content Duplicates: You can adjust the settings here to test the originality of the content on your site.

Spelling & Grammar: The "Spelling" and "Grammar" options must be enabled for spelling and grammatical errors to be displayed in the Content tab. Enabling them adds these checks to your crawl.

Robots.txt: Here you can configure the crawl to ignore the commands in the target site's robots.txt file. You can also write a custom robots.txt yourself and have the site crawled according to it.

URL Rewriting: In the URL Rewriting field, you can strip parameters you don't want to see in the crawl results and rewrite URLs. For example, with regex rules you can display URLs with www as non-www, or show URLs with the .co.uk domain extension under the .com extension, and see them in the crawl results accordingly. This area is also very useful if you want your test-site URLs, which live on a separate subdomain, to be displayed as they would appear on your live site.

CDN: The CDN tab lets Screaming Frog treat the URLs of your CDN service as internal links while crawling. After this adjustment, the CDN links appear in the "Internal" tab in Screaming Frog, with more details previewed.

Include: The Include feature is useful when you only want to crawl selected URLs on sites with a very large number of URLs. For this feature to work, the URL where you start the crawl must contain an internal link matching the regex, otherwise Screaming Frog cannot crawl a second URL after the first. The start URL itself must also comply with your matching rules. See: https://www.screamingfrog.co.uk/seo-spider/user-guide/configuration/#include

Exclude: Use this feature to exclude URLs, folders, and parameters you don't want crawled. Exclusions are written as regex, and they do not apply to the first URL you start the crawl from; the Exclude setting applies only to URLs discovered during the crawl. See: https://www.screamingfrog.co.uk/seo-spider/user-guide/configuration/#exclude A few example patterns follow below.
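For illustration, a few example patterns (Screaming Frog matches the regex against the full URL; the domain and paths below are hypothetical):

Exclude a folder and all parameterized URLs:

https://www.example.com/do-not-crawl/.*
.*\?.*

Include only the blog section:

https://www.example.com/blog/.*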
Speed: In this field, you set how many URLs the tool crawls per second. By default, Screaming Frog uses a maximum of 5 threads and limits crawling to 2 URLs per second. You may need to reduce the crawl speed based on the server performance of the site you are crawling; if the number of requests per second is too high, you are likely to run into 5xx errors.

User-Agent: Before you start a crawl, this setting lets you choose which User-Agent visits the pages. A website's server may block the Screaming Frog User-Agent, in which case a crawl with the default User-Agent will not work. You can also use this field to check your website's crawlability under different user-agents.

HTTP Header: The HTTP Header option lets you supply custom HTTP Header requests during a Screaming Frog crawl.

Custom Search: The Custom Search section lets you search for any data you want in a site's source code. You can search the HTML for a value entered as Text or Regex, and check whether it is present with the "Contains" or "Does Not Contain" options.

Custom Extraction: The Custom Extraction section lets you extract data from a site's HTML using CSSPath, XPath, or Regex. For example, you can pull the product codes from the product pages of an e-commerce site with the CSSPath, XPath, or Regex rules you define here. Custom Extraction only works on HTML pages that return a 200 response code; to pull data from HTML rendered by JavaScript rather than static HTML, switch to JavaScript rendering mode. (A quick way to test a candidate selector is shown at the end of this section.)

Custom Link Positions: Screaming Frog scans a web page's content, sidebar, and footer areas and categorizes each link it discovers by its position. With the Custom Link Positions tool, you can make the XPath that determines link positions in the "Inlinks" and "Outlinks" sections work according to your own criteria, and check whether a link exists in the area you specified.

Google Analytics API: By connecting your Google Analytics account to Screaming Frog, you can run a more detailed crawl that includes your historical URLs. To set this up, click the Google Analytics tab in the API Access menu, then click "Connect to New Account" to link your Analytics account.

Search Console API: For the Search Console connection, click Google Search Console in the API Access menu, click "Connect to New Account" in the window that opens, select the Google account that holds your Search Console property, and grant the requested permissions. This also lets you check URLs that Google previously discovered and crawled but that are no longer on your website.

PageSpeed Insights API: If you want the speed values from PageSpeed Insights alongside your Screaming Frog crawl, click the PageSpeed Insights section of the API Access menu and enter the required "Secret Key" before starting. Your crawl results will then include speed performance metrics for the crawled URLs.

Ahrefs API Connection: If you want Ahrefs data in your crawl results, connect your Ahrefs account to Screaming Frog by clicking the Ahrefs section in the API Access menu and entering your "Access Token" in the window that opens.

Authentication: If the website you want to crawl requires a user login, you can save the login URL and username & password in the "Forms Based" section of this area. While crawling, the Screaming Frog bot will first log in to your site like a user and will be able to crawl the pages behind the login screen.
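As noted under Custom Extraction, it helps to sanity-check a candidate selector in the browser console before adding it to Screaming Frog. A minimal sketch, where ".product-sku" is a hypothetical class name:

// Run in the browser console on a product page.
document.querySelectorAll(".product-sku").length;           // how many matches?
document.querySelector(".product-sku")?.textContent.trim(); // sample extracted value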

What is Robots.txt and How to Create and Use It?
Sep 4, 2022 11769 reads

When search engine bots visit a website, they use the robots.txt file to control crawling. Also known as the Robots Exclusion Standard, robots.txt tells crawlers which files, folders, or URLs on your web server they may or may not access.

You may have heard many misconceptions about how to use robots.txt. In reality, it simply tells visiting bots which URLs on your site they should crawl. It is used primarily to reduce request load and optimize crawl budget. It is not a way to keep pages out of search results; that requires a noindex tag or an authentication barrier.

What Is Robots.txt?

robots.txt is a plain-text file placed in your site's root directory that gives crawlers directives about which URLs they may or may not crawl. Bots generally obey these directives: pages disallowed in robots.txt won't be crawled, though if those URLs are linked from elsewhere, Google may still index them without crawling.

SEO tip: If bots encounter a 5xx server error when reading your robots.txt, they'll assume something is wrong and stop crawling. That can make, for example, images served behind a CDN disappear from Google's view.

Why Is Robots.txt Important for SEO?

Before crawling your sitemap URLs, bots first fetch your robots.txt, so any incorrect directive can lead to important pages being skipped. A temporary misconfiguration shouldn't be irreversible, but fix it quickly to avoid lasting harm. For instance, if you accidentally disallow a key category page, it won't be crawled until you remove the directive. Google caches your robots.txt for up to 24 hours, so changes can take up to a day to take effect.

Where to Find Robots.txt

Place your robots.txt in your site's root directory (e.g., example.com/robots.txt). Crawlers universally look for it there; never move it.

Creating Robots.txt

You can hand-edit robots.txt with any text editor or generate it via an online tool, then upload it to your site's root.

Manual Creation

Open a plain-text editor and enter directives such as:

User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml

Save the file as robots.txt and upload it to your root directory.

Recommended Directives

Key robots.txt commands:
- User-agent: Selects which crawler a rule applies to.
- Allow: Grants crawling permission.
- Disallow: Blocks crawling of specified paths.
- Sitemap: Points crawlers to your sitemap URL.

User-agent

Specifies which bot the following rules apply to. Common bots include Googlebot, Bingbot, YandexBot, DuckDuckBot, Baiduspider, and many more.

Example: block only Googlebot from a thank-you page:

User-agent: Googlebot
Disallow: /thank-you

Allow & Disallow

Allow permits crawling; without any directives, the default is "allow all". Disallow forbids crawling of the specified path.

Examples:

Allow all:

User-agent: *
Allow: /

Block all:

User-agent: *
Disallow: /

Block a folder but allow one subpage:

User-agent: *
Disallow: /private/
Allow: /private/public-info

Testing with Google's Robots.txt Tester

In Google Search Console, under Index > Coverage, you'll see any robots.txt-related errors. You can also use the Robots.txt Tester to simulate how Googlebot handles specific URLs.

Common GSC warnings:
- Blocked by robots.txt: The URL is disallowed.
- Indexed, though blocked by robots.txt: The page is in the index despite being disallowed. To fix it, allow crawling so a noindex tag can be seen, or remove the links pointing to it.

Best Practices & Reminders
- Bots fetch robots.txt before crawling any page.
- Use Disallow to prevent low-value pages from being crawled and wasting budget.
- Include your sitemap with the Sitemap directive.
- Keep robots.txt under 500 KiB; Google only reads up to that size.
- Test for server errors; 5xx responses cause bots to stop crawling.
- Respect case sensitivity in URL paths.
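Putting the directives together, here is a consolidated sketch of a robots.txt file (the paths are hypothetical; adapt them to your own site):

User-agent: *
Disallow: /cart/
Disallow: /private/
Allow: /private/public-info

User-agent: Googlebot
Disallow: /thank-you

Sitemap: https://www.example.com/sitemap.xml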
Conclusion

robots.txt is a simple yet critical file for guiding crawlers and optimizing your crawl budget. Ensure it's correct, keep it at your root, and test any changes promptly.

GA4 Custom Definitions and Usage Areas
Sep 4, 2022 1221 reads

The latest version of Analytics, GA4, allows you to understand the full customer lifecycle across both your website and mobile apps. While GA4 still shares many features with Universal Analytics, it also introduces some intriguing changes, most notably custom definitions. Of course, dimensions and metrics are not new concepts in GA4; if you've been using Analytics for a while, you're probably already familiar with custom dimensions.

When you send a hit to Google Analytics, certain data is collected automatically. This includes user identifiers (such as user IDs) and device information, along with which content the user interacted with and how they behaved on your website or mobile apps.

Benefits of Using Dimensions in GA4

However, there may be other data you want to collect that is specific to your website or mobile app, and that's where dimensions come into play. Simply put, a dimension is a parameter or feature of your data. It can describe a property of a product, event, user, page, and so on. Dimensions help us better define and understand the what, where, and when of our data.

For example, when a transaction occurs on a website, some possible dimensions are:
- Transaction ID
- Coupon Code
- Last Traffic Source

When a user logs in to a website and we send a login event to Google Analytics, the dimensions of that event could include:
- Login Method
- User ID

When a product is purchased, possible dimensions are:
- Product Name
- Product Category
- Product Variant
- Product Size

When a logged-in user views their account page, dimensions might include:
- User ID
- Registered Country

Scopes in GA4

Within a GA4 property, you'll notice you can also configure metrics. Dimensions and metrics are very similar, but one key difference is that dimensions can have an event scope or a user scope, whereas metrics are always event-scoped. The variety of scopes we knew from Universal Analytics has been simplified, with a primary focus on the event scope. Let's review the available scopes:

Event

Applies only to the specific event/hit with which the dimension is sent. For example, you might send a "trial_started" event to GA4 along with an extra parameter, "pricing_plan"; that dimension will only apply to the "trial_started" event.

User

Applies to all events for a user from the moment the user property is set (as long as the GA cookie remains valid). In GA4, this is called a User Property. It is similar to user-scoped dimensions in Universal Analytics, but it only affects events going forward; it does not retroactively apply to past events within the same session.

Product

Only valid for a specific product (tracked via Enhanced Ecommerce). Even if you send multiple products in the same transaction, each product can have its own product-scoped dimensions (e.g., "product_color", "product_size").

Note: Session and hit scopes are not directly supported in GA4 yet, though Google has indicated it plans to add session scope in the future. If you need to apply a dimension to all events in a session, you must include that dimension with every event (via gtag.js or GTM).
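To see the two scopes in practice, here is a minimal gtag.js sketch using the hypothetical names from the examples above:

// Event-scoped parameter: applies only to this "trial_started" event.
gtag("event", "trial_started", {
  pricing_plan: "pro" // register as an event-scoped custom dimension
});

// User property: applies to all of this user's events from now on.
gtag("set", "user_properties", {
  registered_country: "TR" // register as a user-scoped custom dimension
});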
Using Custom Definitions

If you want to use a parameter in GA4's reports, you must register it as a custom definition; otherwise, it won't appear in the interface. Do this at the same time you start sending the parameter, because custom definitions are not retroactive: any data sent before you register the custom dimension or metric will not show up in your reports, though raw data (e.g., in BigQuery) will include it.

How to Add a Custom Dimension in GA4

1. In your existing GA4 property, go to Configure → Custom definitions.
2. Click Create custom dimension.
3. Enter the name you want to see in reports into the Dimension name field.
4. Select Event as the scope (since we're defining an event parameter).
5. Optionally add a description in the Description field.
6. Choose the parameter you're sending from the Event parameter dropdown.
7. Click Save. You can now use this dimension in your reports.

How to Add a Custom Metric in GA4

1. In the Custom definitions area, switch to the Custom metrics tab and click Create custom metric.
2. Enter the name for the metric in the Metric name field.
3. Provide a description in the Description field.
4. Select the parameter from the Event parameter dropdown.
5. Choose the appropriate Unit of measurement (e.g., Integer, Currency, Time).
6. Save; your new custom metric is now available for reporting.

Note: Free GA4 properties have limits on the number of custom definitions (e.g., up to 25 user properties and 50 custom dimensions). GA360 accounts have higher limits.

Summary and Tips
- Register custom dimensions and metrics at the same time you begin sending the parameters; definitions are not retroactive.
- Use a tag manager or gtag.js to include parameters with every relevant event.
- Choose the correct scope: Event for event parameters, User for user properties, and Product for ecommerce product data.
- Leverage GA4's custom definitions to enrich your data and unlock deeper analysis.