LSI Keywords: What Are They and Why Do They Matter in SEO?

Posted by JessicaFoster

The written content on your website serves not only to inform and entertain readers, but also to capture the attention of search engines and improve your organic rankings.

And while using SEO keywords in your content can help you get found by users, focusing solely on keyword density doesn’t cut it when it comes to creating SEO-friendly, reader-focused content.

This is where LSI keywords come in.

LSI keywords serve to add context to your content, making it easier for search engines and readers alike to understand. Want to write content that ranks and wows your readers? Learn how to use LSI keywords the right way.

What are LSI keywords?

Latent Semantic Indexing (LSI) keywords are terms that are conceptually related to the main keyword you’re targeting in your content. They help provide context to your content, making it easier for readers and search engines to understand what your content is about.

Latent semantic analysis

LSI keywords are based on the concept of latent semantic analysis, a natural language processing technique for understanding how terms relate to one another across a body of text. In other words, it analyzes the relationship between one word and another in order to make sense of the overall content.

Search engine algorithms use latent semantic analysis to understand web content and ultimately determine what content best fits what the user is actually searching for when they use a certain keyword in their search.
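
If you want a more concrete picture of what latent semantic analysis does, here is a minimal, illustrative Python sketch using scikit-learn and a handful of made-up documents. It only demonstrates the general technique of reducing a term-document matrix to latent "concepts"; it is not how Google or any search engine actually implements its algorithms.

```python
# Toy demonstration of latent semantic analysis (LSA):
# build a TF-IDF term-document matrix, then use truncated SVD
# to project documents into a small number of latent "concepts".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# A few made-up documents, mostly about the same broad topic ("sofa")
docs = [
    "leather sofa with a comfortable sectional design",
    "sleeper sofa that converts into a bed for guests",
    "how to clean a leather couch and keep it comfortable",
    "best running shoes for marathon training",
]

# Term-document matrix weighted by TF-IDF
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# Reduce to 2 latent dimensions ("concepts")
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsa.fit_transform(tfidf)

# Documents about sofas end up close together in the latent space,
# even when they use different words ("sofa" vs. "couch"),
# while the unrelated shoe document lands elsewhere.
for doc, coords in zip(docs, doc_topics):
    print(f"{coords.round(2)}  {doc}")
```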

Why are LSI keywords important for SEO?

The use of LSI keywords in your content helps search engines understand what your content is about, and therefore makes it easier for them to match your content to what users are searching for.

Exact keyword usage is less important than whether your overall content fits the user’s search query and the intention behind their search. After all, the goal of search engines is to showcase content that best matches what users are searching for and actually want to read.

LSI keywords are not synonyms

Using synonyms can help add context to your content, but they are not the same as LSI keywords. For example, a synonym for the word “sofa” could be “couch”, but some LSI keywords for “sofa” would be terms like “leather”, “comfortable”, “sleeper”, and “sectional”.

When users search for products, services, or information online, they are likely to add modifiers to their main search term in order to refine their search. A user might type something like “red leather sofa” or “large sleeper sofa”. These phrases still contain the primary keyword “sofa”, but with the addition of semantically-related terms.

How to find LSI keywords to use in your content

One of the best ways to find LSI keywords is to put yourself in the mind of someone who is searching for your primary keyword. What other details might they be searching for? What terms might they use to modify their search?

Doing a bit of brainstorming can help set your LSI keyword research off on the right track. Then, you can use a few of the methods below to identify additional LSI keywords, phrases, and modifiers to use in your content.

Google autocomplete

Use Google to search for your target keyword. In most cases, Google’s autocomplete feature will fill the search box with semantically-related terms and/or related keywords.

For the keyword “sofa”, we can see some related keywords (like “sofa vs couch”) as well as LSI keywords like “sofa [bed]”, “[corner] sofa”, and “[leather] sofa”.
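
If you'd like to pull these suggestions programmatically rather than typing queries by hand, the sketch below queries Google's public but unofficial suggest endpoint. Treat it as a rough illustration: the endpoint and response format are undocumented and could change at any time.

```python
# Fetch Google autocomplete suggestions for a seed keyword.
# Note: this uses an unofficial, undocumented endpoint that Google
# may change or rate-limit at any time.
import requests

def autocomplete_suggestions(keyword: str) -> list[str]:
    response = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": keyword},
        timeout=10,
    )
    response.raise_for_status()
    # The response is a JSON array: [query, [suggestion, suggestion, ...], ...]
    return response.json()[1]

if __name__ == "__main__":
    for suggestion in autocomplete_suggestions("sofa"):
        print(suggestion)
```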

Competitor analysis

Search for your target keyword and click on the first few competing pages or articles that rank highest in the search results. You can then use the find function to search the content for your primary keyword and identify LSI keywords that bookend that key term.

For example, a search for “digital marketing services” may yield several competitor service pages. You can then visit these pages, find the phrase “digital marketing services”, and see what semantically-related keywords are tied in with your target keyword.

Some examples might include:

  • “Customizable”
  • “Full-service”
  • “Results-driven”
  • “Comprehensive”
  • “Custom”
  • “Campaigns”
  • “Agency”
  • “Targeted”
  • “Effective”

You can later use these LSI keywords in your own content to add context and help search engines understand the types of services (or products) you offer.
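
If checking each competitor page by hand gets tedious, the same idea can be scripted: fetch a page, strip it to visible text, and print the words surrounding your target phrase. The sketch below is a rough illustration using requests and BeautifulSoup; the URL is a placeholder, and real pages may need more careful text extraction.

```python
# Print the words that surround a target phrase on a competitor page,
# to surface candidate semantically related terms.
# The URL below is a placeholder; swap in a real competitor page.
import re
import requests
from bs4 import BeautifulSoup

def keyword_contexts(url: str, phrase: str, window: int = 6) -> list[str]:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    words = text.split()
    phrase_words = phrase.lower().split()
    n = len(phrase_words)

    contexts = []
    for i in range(len(words) - n + 1):
        candidate = [re.sub(r"\W+", "", w).lower() for w in words[i:i + n]]
        if candidate == phrase_words:
            start = max(0, i - window)
            end = min(len(words), i + n + window)
            contexts.append(" ".join(words[start:end]))
    return contexts

if __name__ == "__main__":
    for snippet in keyword_contexts(
        "https://www.example.com/services", "digital marketing services"
    ):
        print(snippet)
```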

LSI keyword tools

If conducting manual LSI keyword research isn’t your forte, you can also use dedicated LSI keyword tools. Tools like LSIGraph and Ubersuggest enable you to find semantic keywords and related keywords to use in your content.

LSIGraph is a free LSI keyword tool that helps you “Generate LSI keywords Google loves”. Simply search for your target keyword and LSIGraph will come up with a list of terms you can consider using in your content.

In the image above, you can see how LSIGraph searched its database to come up with a slew of LSI keywords. Some examples include: “[reclining] sofa”, “sofa [designs]”, and “[discount] sofas”.

Content optimization tools

Some on-page optimization tools include LSI keyword analysis and suggestions directly within the content editor.

Surfer SEO is one tool that provides immediate LSI keyword recommendations for you to use in your content and analyzes the keyword density of your content in real-time.

Here we see that Surfer SEO makes additional keyword suggestions related to the primary term “rainboots”. These LSI keywords include: “little”, “pair”, “waterproof”, “hunter”, “rubber”, “men’s”, and so on.

Using LSI keywords to improve SEO

You can use any or all of the LSI keywords you identified during your research as long as they are applicable to the topic you are writing about and add value to your content. Using LSI keywords can help beef up your content, but not all of the terms you identify will relate to what you are writing about.

For example, if you sell women’s rain boots, including LSI terms like “men’s” or “masculine” may not tie in to what you’re offering. Use your best judgment in determining which terms should be included in your content.

In terms of using LSI keywords throughout your content, here are a few places you can add in these keywords to improve your SEO:

  • Title tags
  • Image alt text
  • Body content
  • H2 or H3 subheadings
  • H1 heading
  • Meta description

LSI keywords made simple

Identifying and using LSI keywords is made simple when you take a moment to consider what your target audience is searching for. They aren’t just searching for your primary keyword, but are likely using semantically-related terms to refine their search and find the exact service, product, or information they are searching for.

You can also use data-driven keyword research and content optimization tools to identify LSI keywords that are showing up in other high-ranking articles and web pages. Use these terms in your own content to improve your on-page SEO and attract more users to your website.


LiveBlogPosting Schema: A Powerful Tool for Top Stories Success

Posted by cml63

One of the great things about doing SEO at an agency is that you’re constantly working on different projects you might not have had the opportunity to explore before. Working agency-side lets you see such a large variety of sites that you gain a more holistic perspective on the algorithm, and you get to work with all kinds of unique problems and implementations.

This year, one of the most interesting projects that we worked on at Go Fish Digital revolved around helping a large media company break into Google’s Top Stories for major single-day events.

When doing competitor research for the project, we discovered that one way many sites appear to be doing this is through use of a schema type called LiveBlogPosting. This sent us down a pathway of fairly deep research into what this structured data type is, how sites are using it, and what impact it might have on Top Stories visibility.

Today, I’d like to share all of the findings we’ve made around this schema type, and draw conclusions about what this means for search moving forward.

Who does this apply to?

With regard to LiveBlogPosting schema, the most relevant sites will be those where getting into Google’s Top Stories is a priority. These will generally be publishers that regularly post news coverage. Ideally, AMP will already be implemented, as the vast majority of Top Stories URLs are AMP-compatible (though this is not required).

Why non-publisher sites should still care

Even if your site isn’t a publisher eligible for Top Stories results, the content of this article may still provide you with interesting takeaways. While you might not be able to directly implement the structured data at this point, I believe we can use the findings of this article to draw conclusions about where the search engines are potentially headed.

If Google is ranking articles that are updated with regular frequency and even providing rich-features for this content, this might be an indication that Google is trying to incentivize the indexation of more real-time content. This structured data may be an attempt to help Google “fill a gap” that it has in terms of providing real-time results to its users.

While it makes sense that “freshness” ranking factors would apply most to publishers, there could be interesting tests that non-publishers can perform in order to measure whether there is a positive impact on their own site’s content.

What is LiveBlogPosting schema?

The LiveBlogPosting schema type is structured data that allows you to signal to search engines that your content is being updated in real-time. This provides search engines with contextual signals that the page is receiving frequent updates for a certain period of time.

The LiveBlogPosting structured data can be found on schema.org as a subtype of “Article” structured data. The official definition from the site says it is: “A blog post intended to provide a rolling textual coverage of an ongoing event through continuous updates.”

Imagine a columnist watching a football game and creating a blog post about it. With every single play, the columnist updates the blog with what happened and the result of that play. Each time the columnist makes an update, the structured data also updates indicating that a recent addition has been made to the article.

Articles with LiveBlogPosting structured data will often appear in Google’s Top Stories feature. In the top left-hand corner of the thumbnail image, there will be a “Live” indicator to signal to users that live updates are getting made to the page.

Two Top Stories Results With The “Live” Tag

In the image above, you can see an example of two publishers (The Washington Post and CNN) that are implementing LiveBlogPosting schema on their pages for the term “coronavirus”. It’s likely that they’re utilizing this structured data type in order to significantly improve their Top Stories visibility.

Why is this Structured Data important?

So now you might be asking yourself, why is this schema even important? I certainly don’t have the resources available to have an editor continually publish updates to a piece of content throughout the day.

We’ve been monitoring Google’s usage of this structured data specifically for publishers. Stories with this structured data type appear to have significantly improved visibility in the SERPs, and we can see publishers aggressively utilizing it for large events.

For instance, the below screenshot shows you the mobile SERP for the query “us election” on November 3, 2020. Notice how four of the seven results in the carousel are utilizing LiveBlogPosting schema. Also, beneath this carousel, you can see the same CNN page is getting pulled into the organic results with the “Live” tag next to it:

Now let’s look at the same query for the day after the election, November 4, 2020. We still see that publishers heavily utilize this structured data type. In this result, five of the first seven Top Stories results use this structured data type.

In addition, CNN gets to double dip and claim an additional organic result with the same URL that’s already shown in Top Stories. This is another common result of LiveBlogPosting implementation.

In fact, this type of live blog post was one of CNN’s core strategies for ranking well for the US Election.

Here is how they implemented this strategy:

  1. Create a new URL every day (to signal freshness)
  2. Apply LiveBlogPosting schema and continually make updates to that URL
  3. Ensure each update has its own dedicated timestamp

Below you can see some examples of URLs CNN posted during this event. Each day a new URL was posted with LiveBlogPosting schema attached:

https://www.cnn.com/politics/live-news/us-election-news-11-02-2020/index.html

https://www.cnn.com/politics/live-news/election-results-and-news-11-03-20/index.html

https://www.cnn.com/politics/live-news/election-results-and-news-11-04-20/index.html

Here’s another telling result for “us election” on November 4, 2020. We can see that The New York Times is ranking in the #2 position on mobile for the term. While the ranking page isn’t a live blog post, we can see underneath the result is an AMP carousel. Their strategy was to live blog each individual state’s results:

It’s clear that publishers are heavily utilizing this schema type for extremely competitive news articles that are based around big events. Oftentimes, we’re seeing this strategy result in prominent visibility in Top Stories and even the organic results.

How do you implement LiveBlogPosting schema?

So you have a big event that you want to optimize around and are interested in implementing LiveBlogPosting schema. What should you do?

1. Get whitelisted

The first thing you’ll need to do is get whitelisted by Google. If you have a Google representative that’s in contact with your organization, I recommend reaching out to them. There isn’t a lot of information out there on this and we can even see that Google has previously removed help documentation for it. However, the form to request access to the Live Coverage Pilot is still available.

This makes sense, as Google might not want news sites with questionable credibility to access this feature. The fact that Google wants to limit how many sites can utilize it is another indication that this feature is potentially very powerful.

2. Technical implementation

Next, with the help of a developer, you’ll need to implement LiveBlogPosting structured data on your site. There are several key properties you’ll need to include such as:

  1. coverageStartTime: When the live blog post begins
  2. coverageEndTime: When the live blog post ends
  3. liveBlogUpdate: A property that indicates an update to the live blog. This is perhaps the most important property:
    1. headline: The headline of the blog update
    2. articleBody: The full description of the blog update
    3. datePublished: The time when the update was originally posted
    4. dateModified: The time when the update was adjusted

To make this a little easier to conceptualize, below you can find an example of how CNN has implemented this on one of their live blogs. The example below features two “liveBlogUpdate” properties on their November 3, 2020 coverage of the election.
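
Since the screenshot of CNN’s markup isn’t reproduced here, the sketch below shows what a minimal LiveBlogPosting block with two “liveBlogUpdate” entries might look like, built from the properties listed above with made-up headlines, URLs, and timestamps. It illustrates the structure only, it is not CNN’s actual markup, and any real implementation should be validated with Google’s Rich Results Test.

```python
# Build a minimal LiveBlogPosting JSON-LD block with two updates.
# All headlines, URLs, and timestamps here are made up for illustration.
import json

live_blog = {
    "@context": "https://schema.org",
    "@type": "LiveBlogPosting",
    "headline": "Election Day live coverage",
    "url": "https://www.example.com/live-news/election-day/",
    "coverageStartTime": "2020-11-03T06:00:00-05:00",
    "coverageEndTime": "2020-11-04T00:00:00-05:00",
    "liveBlogUpdate": [
        {
            "@type": "BlogPosting",
            "headline": "Polls open on the East Coast",
            "articleBody": "Voting is underway in several states...",
            "datePublished": "2020-11-03T06:05:00-05:00",
            "dateModified": "2020-11-03T06:20:00-05:00",
        },
        {
            "@type": "BlogPosting",
            "headline": "First results expected this evening",
            "articleBody": "Officials say initial counts will arrive after 7 p.m...",
            "datePublished": "2020-11-03T12:30:00-05:00",
            "dateModified": "2020-11-03T12:45:00-05:00",
        },
    ],
}

# Emit the <script> tag you would place in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(live_blog, indent=2))
print("</script>")
```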

Case study

As I previously mentioned, many of these findings were discovered during research for a particular client who was interested in improving visibility for several large single-day events. Because of how agile the client is, they were actually able to get LiveBlogPosting structured data up and running on their site in a fairly short period of time. We then tested to see if this structured data would help improve visibility for very competitive “head” keywords during the day.

While we can’t share too much information about the particular wins we saw, we did see significant improvements in visibility for the competitive terms the live blog post was mapped to. Looking in Search Console, we saw lifts of between 200% and 600% in year-over-year clicks and visibility for many of these terms. During our spot checks throughout the day, we often found the live blog post ranking in the top one to three results (first carousel) in Top Stories. The implementation appeared to be a major success in improving visibility for this section of the SERPs.

Google vs. Twitter and the need for real-time updates

So the question then becomes, why would Google place so much emphasis on the LiveBlogPosting structured data type? Is it the fact that the page is likely going to have really in-depth content? Does it improve E-A-T in any way?

I would interpret the success of this feature as demonstrating one of the weaknesses of a search engine and how Google is trying to adjust accordingly. One of the primary issues with a search engine is that it’s much harder for it to be real-time. If “something” happens in the world, it’s going to take search engines a bit of time to deliver that information to users. The information not only needs to be published, but Google must then crawl, index, and rank that information.

However, by the time this happens, the news might already be readily available on platforms such as Twitter. One of the primary reasons that users might navigate away from Google to the Twitterverse is because users are seeking information that they want to know right now, and don’t feel like waiting 30 minutes to an hour for it to populate in Google News.

For instance, when I’m watching the Steelers and see one of our players have the misfortune of sustaining an injury, I don’t start to search Google hoping the answer will appear. Instead, I immediately jump to Twitter and start refreshing like crazy to see if a sports beat writer has posted any news about it.

What I believe Google is creating is a schema type that signals a page is in real-time. This gives Google the confidence to know that a trusted publisher has created a piece of content that should be crawled much more frequently and served to users, since the information is more likely to be up to date and accurate. By giving rich features and increased visibility to articles using this structured data, Google is further incentivizing the creation of real-time content that will retain searches on their platform.

This evidence also signals that indicating to search engines that content is fresh and regularly updated may be an increasingly important factor for the algorithm. When talking to Dan Hinckley, CTO of Go Fish Digital, he proposed that search engines might need to give preference to articles that have been updated more recently. Google might not be able to “trust” that older articles still have accurate information. Thus, ensuring content is updated may be important to a search engine’s confidence in the accuracy of its results.

Conclusion

You really never know what types of paths you’re going to go down as an SEO, and this was by far one of the most interesting ones during my time in the industry. Through researching just this one example, we not only figured out a piece of the Top Stories algorithm, but also gained insights into the future of the algorithm.

It’s entirely possible that Google will continue to incentivize and reward “real-time” content in an effort to better compete with platforms such as Twitter. I’ll be very interested to see any new research that’s done on LiveBlogPosting schema, or Google’s continual preference towards updated content.


Google My Business: What It Is, How To Use It, and Why

Posted by MiriamEllis

Google My Business is both a free tool and a suite of interfaces that encompasses a dashboard, local business profiles, and a volunteer-driven support forum with this branding. Google My Business and the associated Google Maps make up the core of Google’s free local search marketing options for eligible local businesses.

Today, we’re doing foundational learning! Share this simple, comprehensive article with incoming clients and team members to get off on the right foot with this important local business digital asset.

An introduction to the basics of Google My Business

First, let’s get on the same page regarding what Google My Business is and how to be part of it.

What is Google My Business?

Google My Business (GMB) is a multi-layered platform that enables you to submit information about local businesses, to manage interactive features like reviews and questions, and to publish a variety of media like photos, posts, and videos.

What is GMB eligibility?

Eligibility to be listed within the Google My Business setting is governed by the Guidelines for representing your business on Google, which is a living document that undergoes frequent changes. Before listing any business, you should consult the guidelines to avoid violations that can result in penalties or the removal of your listings.

You need a Google account to get started

You will need a Google account to use Google’s products and can create one here, if you don’t already have one. It’s best for each local business to have its own company account, instead of marketing agencies using their accounts to manage clients’ local business profiles.

When a local business you’re marketing has a large in-house marketing department or works with third party agencies, Google My Business permits you to add and remove listing owners and managers so that multiple people can be given a variety of permissions to contribute to listings management.

How to create and claim/verify a Google My Business profile

Once the business you’re marketing has a Google account and has determined that it’s eligible for Google My Business inclusion, you can create a single local business profile by starting here, using Google’s walkthrough wizard to get listed.

Fill out as many fields as possible in creating your profile. This guide will help you understand how best to fill out many of the fields and utilize many of the features. Once you’ve provided as much information as you can, you’ll be given options to verify your listing so that you can control and edit it going forward.

Alternatively, if you need to list 10+ locations of a business all at the same time, you can do a bulk upload via spreadsheet and then request bulk verification.

Where your Google My Business information can display

Once your data has been accepted into the GMB system, it will begin showing up in a variety of Google’s local search displays, including the mobile and desktop versions of:

Google Business Profiles

Your comprehensive Google Business Profile (GBP) will most typically appear when you search for a business by its brand name, often with a city name included in your search language (e.g. “Amy’s Drive Thru Corte Madera”). In some cases, GBPs will show for non-branded searches as well (e.g. “vegan burger near me”). This can happen if there is low competition for a search term, or if Google believes (rightly or wrongly) that a search phrase has the intent of finding a specific brand instead of a variety of results.

Google Business Profiles are extremely lengthy, but a truncated view looks something like this, located to the right of the organic search engine results:

Google Local Packs

Local packs are one of the chief displays Google uses to rank and present the local business information in their index. Local packs are shown any time Google believes a search phrase has a local intent (e.g. “best vegan burger near me”, “plant-based burger in corte madera”, “onion rings downtown”). The searcher does not have to include geographic terms in their phrase for Google to presume the intent is local.

Most typically these days, a local pack is made up of three business listings, with the option to click on a map or a “view all” button to see further listings. On occasion, local packs may feature fewer than three listings, and the types of information Google presents in them varies.

Local pack results look something like this on desktop search, generally located above the organic search results:

Google Local Finders

When a searcher clicks through on the map or the “view all” link in a local pack, they will be taken to the display commonly known as the Local Finder. Here, many listings can be displayed, typically paginated in groups of ten, and the searcher can zoom in and out on the map to see their options change.

The URL of this type of result begins google.com/search. Some industries, like hospitality, have unique displays, but most local business categories will have a local finder display that looks like this, with the ranked list of results to the left and the map to the right:

Google Maps

Google Maps is the default display on Android mobile phones, and desktop users can also choose to search via this interface instead of through Google’s general search. You’ll notice a “maps” link at the top of Google’s desktop display, like this:

Searches made via Google Maps yield results that look rather similar to the local finder results, though there are some differences. It’s a distinct possibility that Google could, at some point, consolidate the user experience and have local packs default to Google Maps instead of the local finder.

The URL of these results begins google.com/maps instead of google.com/search and on desktop, Google’s ranked Maps’ display looks like this:

The GMB dashboard is where you manage most of this

Once you’ve created and claimed your Google Business Profiles, you’ll have access to managing most (but not all) of the features they contain in your Google My Business dashboard, which looks like this:

The GMB dashboard has components for ongoing management of your basic contact info, reviews, posts, images, products and other features.

GMB Insights

The GMB dashboard also hosts the analytical features called GMB Insights. It’s a very useful interface, though the titles and functions of some of its components can be opaque. Some of the data you’ll see in GMB Insights includes:

  • How many impressions happened surrounding searches for your business name or location (called Direct), general searches that don’t specify your company by name but relate to what you offer (called Discovery), and searches relating to brands your business carries (called Branded).
  • Customer actions, like website visits, phone calls, messaging, and requests for driving directions.
  • Search terms people used that resulted in an impression of your business.

There are multiple other GMB Insights features, and I highly recommend this tutorial by Joy Hawkins for a next-level understanding of why reporting from this interface can be conflicting and confusing. There’s really important data in GMB Insights, but interpreting it properly deserves a post of its own and a bit of patience with some imperfections.

When things go wrong with Google My Business

When engaging in GMB marketing, you’re bound to encounter problems and find that all kinds of questions arise from your day-to-day work. Google relies heavily on volunteer support in their Google My Business Help Community Forum and you can post most issues there in hopes of a reply from the general public or from volunteer contributors titled Gold Product Experts.

In some cases, however, problems with your listings will necessitate speaking directly with Google or filling out forms. Download the free Local SEO Cheat Sheet for robust documentation of your various GMB support options.

How to use Google My Business as a digital marketing tool

Let’s gain a quick, no-frills understanding of how GMB can be used as one of your most important local marketing tools.

How to drive local business growth with Google’s local features

While each local business will need to take a nuanced approach to using Google My Business and Google Maps to market itself, most brands will maximize their growth potential on these platforms by following these seven basic steps:

1) Determine the business model (brick-and-mortar, service area business, home-based business, or hybrid). Need help? Try this guide.

2) Based on the business model, determine Google My Business eligibility and follow the attendant rules laid out in the Guidelines for representing your business on Google.

3) Before you create GMB profiles, be certain you are working from a canonical source of data that has been vetted by all relevant parties at the business you’re marketing. This means that you’ve checked and double-checked that the name, address, phone number, hours of operation, business categories and other data you have about the company you are listing is 100% accurate.

4) Create and claim a profile for each of the locations you’re marketing. Depending on the business model, you may also be eligible for additional listings for practitioners at the business or multiple departments at a location. Some models, like car dealerships, are even allowed multiple listings for the car makes they sell. Consult the guidelines. Provide as much high quality, accurate, and complete information as possible in creating your profiles.

5) Once your listings are live, it’s time to begin managing them on an ongoing basis. Management tasks will include:

  • Analyzing chosen categories on an ongoing basis to be sure you’ve selected the best and most influential ones, and know of any new categories that appear over time for your industry.
  • Uploading high quality photos that reflect inventory, services, seasonality, premises, and other features.
  • Acquiring and responding to all reviews as a core component of your customer service policy.
  • Committing to a Google Posts schedule, publishing micro-blog-style content on an ongoing basis to increase awareness about products, services, events, and news surrounding the locations you’re marketing.
  • Populating Google Questions & Answers with company FAQs, providing simple replies to queries your staff receives all the time. Then, answer any incoming questions from the public on an ongoing basis.
  • Adding video to your listings. Check out how even a brand on a budget can create a cool, free video pulled from features of the GMB listing.
  • Committing to keeping your basic information up-to-date, including any changes in contact info and hours, and adding special hours for holidays or other events and circumstances.
  • Investigating and utilizing additional features that could be relevant to the model you’re marketing, like menus for goods and services, product listings, booking functionality, and so much more!
  • Analyzing listing performance by reviewing Google My Business Insights in your dashboard, and using tactics like UTM tagging to track how the public is interacting with your listings.
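
For the UTM tagging mentioned in the last point, one common approach is to append campaign parameters to the website URL in your GMB profile so that clicks from the listing are segmented in your analytics. The snippet below is a simple illustration; the domain and parameter values are placeholders you’d adapt to your own naming conventions.

```python
# Append UTM parameters to the website URL used in a GMB profile
# so clicks from the listing can be segmented in analytics.
# The domain and parameter values below are placeholders.
from urllib.parse import urlencode

base_url = "https://www.example.com/"
utm_params = {
    "utm_source": "google",
    "utm_medium": "organic",
    "utm_campaign": "gmb-listing",
}

tagged_url = f"{base_url}?{urlencode(utm_params)}"
print(tagged_url)
# https://www.example.com/?utm_source=google&utm_medium=organic&utm_campaign=gmb-listing
```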

Need help? Moz Local is Moz’s software that helps with ongoing management of your listings not just on Google, but across multiple local business platforms.

6) Ongoing education is key to maintaining awareness of Google rolling out new features, altering platforms, and adjusting how they weight different local ranking factors. Follow local SEO experts on social media, subscribe to local SEO newsletters, and tune in to professional and street level industry surveys to continuously evaluate which factors appear to be facilitating maximum visibility and growth.

7) In addition to managing your own local business profiles, you’ll need to learn to view them in the dynamic context of competitive local markets. You’ll have competitors for each search phrase for which you want to increase your visibility and your customers will see different pack, finder, and maps results based on their locations at the time of search. Don’t get stuck on the goal of being #1, but do learn to do basic local competitive audits so that you can identify patterns of how dominant competitors are winning.

In sum, providing Google with great and appropriate data at the outset, following up with ongoing management of all relevant GMB features, and making a commitment to ongoing local SEO education is the right recipe for creating a growth engine that’s a top asset for the local brands you market.

How to optimize Google My Business listings

This SEO forum FAQ is actually a bit tricky, because so many resources talk about GMB optimization without enough context. Let’s get a handle on this topic together.

Google uses calculations known as “algorithms” to determine the order in which they list businesses for public viewing. Local SEOs and local business owners are always working to better understand the secret ranking factors in Google’s local algorithm so that the locations they’re marketing can achieve maximum visibility in packs, finders, and maps.

Many local SEO experts feel that there are very few fields you can fill out in a Google Business Profile that actually have any impact on ranking. While most experts agree that it’s pretty evident the business name field, the primary chosen category, the linked website URL, and some aspects of reviews may be ranking factors, the Internet is full of confusing advice about “optimizing” service radii, business descriptions, and other features with no evidence that these elements influence rank.

My personal take is that this conversation about GMB optimization matters, but I prefer to think more holistically about the features working in concert to drive visibility, conversions, and growth, rather than speculating too much about how an individual feature may or may not impact rank.

Whether answering a GMB Q&A query delivers a direct lead, or writing a post moves a searcher further along the buyer journey, or choosing a different primary category boosts visibility for certain searches, or responding to a review to demonstrate empathy wins back an unhappy customer, you want it all. If it contributes to business growth, it matters.

Why Google My Business plays a major role in local search marketing strategy

As of mid-2020, Google’s global search engine market share was at 92.16%. While other search engines like Bing or Yahoo still have a role to play, their share is simply tiny, compared to Google’s. We could see a shift of this dynamic with the rumored development of an Apple search engine, but for now, Google has a near-monopoly on search.

Within Google’s massive share of search, a company representative stated in 2018 that 46% of queries have a local intent. It’s been estimated that Google processes 5.8 billion global daily queries. By my calculation, this would mean that roughly 2.7 billion searches are being done every day by people seeking nearby goods, services, and resources. It’s also good to know that, according to Google, searches with the intent of supporting local business increased 20,000% in 2020.

Local businesses seeking to capture the share they need of these queries to become visible in their geographic markets must know how to incorporate Google My Business marketing into their local SEO campaigns.

A definition of local search engine optimization (local SEO)

Local SEO is the practice of optimizing a business’s web presence for increased visibility in local and localized organic search engine results. It’s core to providing modern customer service, ensuring today’s businesses can be found and chosen on the internet. Small and local businesses make up the largest business sector in the United States, making local SEO the most prevalent form of SEO.

Local SEO and Google My Business marketing are not the same thing, but learning to utilize GMB as a tool and asset is key to driving local business growth, because of Google’s near monopoly.

A complete local SEO campaign will include management of the many components of the Google My Business profile, as well as managing listings on other location data and review platforms, social media publication, image and video production and distribution, and a strong focus on the organic and local optimization of the company website. Comprehensive local search marketing campaigns also encompass all the offline efforts a business makes to be found and chosen.

When trying to prioritize, it can help to think of the website as the #1 digital asset of most brands you’ll market, with GMB marketing as #2. And within the local search marketing framework, it’s the customer and their satisfaction that must be centered at every stage of on-and-offline promotion.

Focus on GMB but diversify beyond Google

Every aspect of marketing a brand contains pluses, minuses, and pitfalls. Google My Business is no exception. Let’s categorize this scenario into four parts for a realistic take on the terrain.

1) The positive

The most positive aspect of GMB is that it meets our criteria as owners and marketers of helping local businesses get found and chosen. At the end of the day, this is the goal of nearly all marketing tactics, and Google’s huge market share makes their platforms a peerless place to compete for the attention of and selection by customers.

What Google has developed is a wonder of technology. With modest effort on your part, GMB lets you digitize a business so that it can be ever-present to communities, facilitate conversations with the public which generate loyalty and underpin everything from inventory development to quality control, and build the kind of online reputation that makes brands local household names in the offline world.

2) The negative

The most obvious negative aspect of GMB is that its very dominance has cut Google too much slack in letting issues like listing and review spam undermine results quality. Without a real competitor, Google hasn’t demonstrated the internal will to solve problems like these that have real-world impacts on local brands and communities.

Meanwhile, a dry-eyed appraisal of Google’s local strategy observes that the company is increasingly monetizing their results. For now, GMB profiles are free, but expanding programs like Local Service Ads point the way to a more costly local SEO future for small businesses on tight budgets.

Finally, local brands and marketers (as well as Google’s own employees) are finding themselves increasingly confronted with ethical concerns surrounding Google that have made them the subject of company walkouts, public protests, major lawsuits, and government investigations. If you’re devoting your professional life to building diverse, inclusive local communities that cherish human rights, you may sometimes encounter a fundamental disconnect between your goals and Google’s.

3) The pitfall

Managing your Google-based assets takes time, but don’t let it take all of your time. Because local business owners are so busy and Google is so omnipresent, a pitfall has developed where it can appear that GMB is the only game in town.

The old adage about eggs in baskets comes into play every time Google has a frustrating bug, monetizes a formerly-free business category, or lets competitors and lead generators park their advertising in what you felt was your space. Sometimes, Google’s vision of local simply doesn’t match real-world realities, and something like a missing category or an undeveloped feature you need is standing in the way of fully communicating what your business offers.

The pitfall is that Google’s walls can be so high that the limits and limitations of their platforms can be mistaken as all there is to local search marketing.

4) The path to success

My article on how to feed, fight, and flip Google was one of the most-read here on the Moz blog in 2020. With nearly 14,000 unique page views, this message is one I am doubling down on in 2021:

  • Feed Google everything they need to view the businesses you’re marketing as the most relevant answers to people in close proximity to brand locations so that the companies you promote become the prominent local resources in Google’s index.
  • Fight spam in the communities you’re marketing to so that you’re weeding out fake and ineligible competitors and protecting neighbors from scams, and take principled stands on the issues that matter to you and your customers, building affinity with the public and a better future where you work and live.
  • Flip the online scenario where Google controls so much local business fate into a one-on-one environment in which you have full control over creating customer experiences exceptional enough to win repeat business and WOM recommendations, outside the GMB loop. Turn every customer Google sends you into a keeper who comes directly to you — not Google — for multiple transactions.

GMB is vital, but there’s so much to see beyond it! Get listed on multiple platforms and deeply engage in your reviews across them. Add generous value to neighborhood sites like Nextdoor, or to old school fora that nobody but locals use. Forge B2B alliances and join the Buy Local movement to become a local business advocate and community sponsor. Help a Reporter Out. Evaluate whether image, video, or podcasting media could boost your brand to local fame. Profoundly grow your email base. Be part of the home delivery revival, fill the hungry longing for bygone quality and expertise, or invest in your website like never before and make the leap into digital sales. The options and opportunities are enticing and there’s a right fit for every local brand.

Key takeaway: don’t get stuck in Google’s world — build your own with your customers from a place of openness to possibilities.

A glance at the future of Google My Business

By now, you’ve likely decided that investing time and resources into your GMB assets is a basic necessity to marketing a local business. But will your efforts pay off for a long time to come? Is GMB built to last, and where is Google heading with their vision of local?

Barring unforeseen circumstances, yes, Google My Business is here to stay, though it could be rebranded, as Google has often rebranded their local features in the past. Here are eight developments I believe we could see over the next half decade:

  1. As mentioned above, Google could default local packs to Maps instead of the local finder, making their network a bit tidier. This is a good time to learn more about Google Maps, because some aspects of it are quite different.
  2. Pay-to-play visibility will become increasingly prevalent in packs, organic, and Maps, including lead generation features and trust badges.
  3. If Apple Maps manages to make Google feel anxious, they may determine to invest in better spam filters for both listings and reviews to defend the quality of their index.
  4. Location-based image filters and search features will grow, so photograph your inventory.
  5. Google will make further strides into local commerce by surfacing, and possibly even beginning to take commissions from, sales of real time inventory. The brands you market will need to decide whether to sell via Google, via their own company websites, or both.
  6. Google could release a feature depicting the mapped delivery radii of brick-and-mortar brands. Home delivery is here to stay, and if it’s relevant to brands you market, now is the time to dive in.
  7. Google has a limited time window to see if they can drive adoption of Google Messaging as a major brand-to-consumer communications platform. The next five years will be telling, in this regard, and brands you market should discuss whether they wish to invite Google into their conversations with customers.
  8. Google could add public commenting on Google Posts to increase their interactivity and push brands into greater use of this feature. Nextdoor has this functionality on their posts and it’s a bit of a surprise that Google doesn’t yet.

What I’m not seeing on the near horizon is a real commitment to better one-on-one support for the local business owners whose data makes up Google’s vast and profitable local index. While the company has substantially increased the amount of automated communications it sends GMB listing owners, Google’s vision of local as an open-source, DIY free-for-all appears to continue to be where they’re at with this evolving venture.

Your job, then, is to be vigilant about both the best and worst aspects of the fascinating Google My Business platform, taking as much control as you can of how customers experience your brand in Google’s territory. This is no easy task, but with ongoing education, supporting tools, and a primary focus on serving the customer, your investment in Google My Business marketing can yield exceptional rewards!

Ready to continue your local SEO education? Read: The Essential Local SEO Strategy Guide.


How to Select Meaningful B2B SEO Keywords

Posted by Cody_McDaniel

It’s no secret that B2B marketing is different than B2C. The sales cycle is longer, there are multiple stakeholders involved, and it’s usually more expensive. To market effectively, you need to create content that helps, educates, and informs your clientele. The best way to do that is to identify the keywords that matter most to them, and build out content accordingly.

To find out how, watch this week’s episode of Whiteboard Friday! 


Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hi and welcome to another Whiteboard Friday. My name is Cody McDaniel, and I’m an SEO manager at Obility. We are a B2B digital marketing agency, and today I want to talk about selecting meaningful B2B SEO keyword targets and the process and steps you can take in your own keyword research.

So B2B is a little bit different than you would see in your normal B2C types of marketing, right? The sales cycle or the length of time it takes to actually make a purchasing decision is usually a lot longer than you would see just buying something off Amazon, right? It’s going to take multiple stakeholders. Individuals are going to be involved in that process. It’s going to be usually a lot more expensive.

So in order to do that, they’re going to want to be informed about their decision. They’re going to have to look up content and information across the web to help inform that decision and make sure that they’re doing the right thing for their own business. So in order to do that, we have to create content that helps, educates, and informs these users, and the way to do that is finding keywords that matter and building content around them.

1. Gather seed list

So when we’re developing keyword research for our own clientele, the first thing that we do is gather a seed list. So usually we’ll talk with our client contact and speak to them about what they care about. But it also helps to get a few other stakeholders involved, right, so the product marketing team or the sales team, individuals that will eventually want to use that information for their clients, and talk with them about what they care about, what do they want to show up for, what’s important to them.

That will sort of help frame the conversation you want to be having and give you an understanding or an idea of where eventually you want to take this keyword research. It shouldn’t be very long. It’s a seed list. It should eventually grow, right? 

2. Review your content

So once you’ve done that and you have a baseline understanding of where you want to go, the next thing you can do is review the content that you have on your own website, and that can start with your homepage.

What’s the way that you describe yourselves to the greater masses? What’s the flagship page have to say about what you offer? You can go a little bit deeper into some of your other top-level pages and About Us. But try to generate an understanding of how you speak to your product, especially in relation to your clients in the industry that you’re in. You can use that, and from there you can go a little bit further.

Go through your blog posts to see how you speak to the industry and to educate and inform individuals. Go to newsletters. Just try to get an understanding of what exists currently on the website, where your efficiencies may be, and of course where your deficiencies are or your lack of content. That will help you generate ideas on where you need to look for more keywords or modifications in the keywords you have.

3. Determine your rankings

Speaking of which, with the keywords that you currently have, it’s important to know how you stand. So at this point, I try to look to see how we’re ranking in the greater scheme of things, and there are a lot of different tools that you can use for that. Search Console is a great way to see how potential users across the web are going to your website currently. That can help you filter by page or by query.

You can get an understanding of what’s getting clicks and generating interest. But you can also use other tools — SEMrush, SpyFu, Ahrefs, and Moz, of course. They’ll all give you a keyword list that can help you determine what users are searching for in order to find your website and where they currently rank in the search engine results page. Now usually these lists are pretty extensive.

I mean, they can be anything from a few hundred to a few thousand terms. So it helps to parse it down a little bit. I like to filter it by things like if it has no search volume, nix it. If it’s a branded term, I don’t like to include it because you should be showing up for your branded terms already. Maybe if it’s outside the top 50 in rankings, things like that, I don’t want that information here right now.
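
To make that pruning a bit more concrete, here is a small pandas sketch applying those same filters to a keyword export. The file name, column names, and brand term are hypothetical; adjust them to whatever your rank-tracking tool actually exports.

```python
# Prune a raw keyword export down to workable targets:
# drop zero-volume terms, branded terms, and anything ranking worse than 50.
# File name, column names, and brand term are hypothetical placeholders.
import pandas as pd

BRAND_TERMS = ("yourbrand",)  # your own brand words, lowercase

keywords = pd.read_csv("keyword_export.csv")  # columns: keyword, search_volume, rank

filtered = keywords[
    (keywords["search_volume"] > 0)
    & (keywords["rank"] <= 50)
    & ~keywords["keyword"].str.lower().str.contains("|".join(BRAND_TERMS))
]

filtered = filtered.sort_values("search_volume", ascending=False)
print(filtered.head(20))
```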

4. Competitive research

I want to understand how we’re showing up, where our competencies are, and how we can leverage that in our keyword research. So that should help the list to be a little bit more condensed. But one of the things you can also look at is not just internal but external, right? So you can look at your competition and see how we’re ranking or comparing at least on the web.

What do they use? What sort of content do they have on their website? What are they promoting? How are they framing that conversation? Are they using blog posts? All that information is going to be useful for maybe developing your own strategies or maybe finding a niche where, if you have particularly stiff competition, you can find areas they’re not discussing.

But use that competition as a framework for identifying areas and potential opportunities and how the general public or industry speaks to some of the content that you’re interested in writing about. So once you have that list, which should be pretty big and give you a good idea of the ecosystem you’re working with, it’s important to gather metrics.

5. Gather metrics

This is going to contextualize the information that you have, right? You want to make informed decisions on the keywords that you have, so this metric gathering will be important. There are a lot of different ways you can do it. Here at Obility, we might categorize them by different topic types so we can make sure that we’re touching on all the different levels of keyword usage for the different topics that we discuss in our content.

You can look at things like search volume. There are a lot of different tools that do that, the same ones I mentioned earlier — Moz, SpyFu, SEMrush. There’s a great tool we use called Keyword Keg that sort of aggregates all of them. That will give you an idea of search volume on a monthly basis. But you can also use other metrics, things like difficulty, like how hard it is to rank compared to some of the other people on the web, or organic click-through rate, like what’s the level of competition you’re going to be going up against in terms of ads or videos or carousels or other sorts of Google snippets.

Moz does a great job of that. So use these metrics, and what they should help you do is contextualize the information so that maybe if you’re pretty close on two or three keywords, that metric gathering should help you identify which one is maybe the easiest, it has the most potential, so on and so forth. So once you have that, you should be getting a good understanding of where each of those keywords lives and you should be selecting your targets.
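
There’s no single correct formula for this, but as an illustration, one simple, hypothetical way to roll volume and difficulty into a rough priority score might look like the sketch below (again, the file name, column names, and weighting are made up for the example).

```python
# A rough, made-up priority score: reward volume, penalize difficulty.
# File name, column names, and the weighting are illustrative only.
import pandas as pd

metrics = pd.read_csv("keyword_metrics.csv")  # columns: keyword, search_volume, difficulty (0-100)

metrics["priority"] = metrics["search_volume"] * (1 - metrics["difficulty"] / 100)

print(metrics.sort_values("priority", ascending=False).head(30))
```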

6. Select target keywords

Now I’ve run through a ton of clients whose former agencies have sent them a list of 300 to 400 keywords that they’re trying to rank for, and I cannot stand it. There’s no value to be had, because how can you possibly try to optimize and rank for hundreds and hundreds of different variations of keywords? It would take too long, right? You could spend years in that rabbit hole.

What we try to do is focus on maybe 30 or 40 keywords and really narrow down what sort of content is going to be created for it, what you need to optimize. Does it exist on your website? If not, what do we need to make? Having that list makes a much more compartmentalized marketing strategy, and you can actually look at that and weigh it against how you’re currently deploying content internally.

You can look at success metrics and KPIs. It just helps to have something a little bit more tangible to bite down on. Of course, you can grow from there, right? You start ranking well for those 20 or 30 terms, and you can add a few more on at the end of it. But again, I think it’s really important to focus on a very select number, categorizing them by the importance of which ones you want to go first, and start there because this process in content creation takes a long time.

7. Consider intent

But once you’ve selected those, it’s also important to consider intent. You can see I’ve outlined intent here a little bit more in depth. What do I mean by that? Well, the best way that I’ve seen intent described online is as an equation. So every query is made up of two parts, the implicit and the explicit. What are you saying, and what do you mean when you’re saying it?

So when I think of that and trying to relate it to keywords, it’s really important to use that framework to develop the strategy that you have. An example that I have here is “email marketing.” So what’s the implicit and explicit nature of that? Well, “email marketing” is a pretty broad term.

So implicitly they’re probably looking to educate themselves on the topic, learn a little bit more about what it’s about. You’ll see, when you search for that, it’s usually a lot more educational related content that helps the user understand it better. They’re not ready to buy yet. They just want to know a little bit more. But what happens when I add a modifier on it? What if I add “software”? Well, now that you would have intent, it may mean the same thing as email marketing in some context, but software implies that they’re looking for a solution.

We’ve now gone down the funnel and are starting to identify terms where a user is more interested in purchasing. So that type of content is going to be significantly different, and it’s going to lean more heavily on features and benefits than content written for just “email marketing”. So that intent is important for framing your keywords, and it’s important to make sure that you have them at every step of your purchasing funnel.

The way that I like to usually look at that, and you see it everywhere, it’s an upside down triangle. You have your top, middle, and bottom level pieces of content. Usually the top is going to be things like blogs and other sorts of informational content that you’re going to be having to use to inform users of the types of topics and things in the industry you care about.

That’s probably where something like “email marketing” would exist. But “email marketing software” is probably going to be sitting right here in the middle, where somebody is going to want to make an informed decision, relate it to other pieces of content on competitor websites, check those features, and determine if it’s a useful product for them, right? From there, you can go a little bit further and move them into different types of content, maybe email marketing software for small business.

That’s far more nuanced and specific, and maybe you’ll have a white paper or a demo that’s specifically tailored to businesses that are looking for email marketing in the small business space. So having content in three separate spaces and three different modifications will help you identify where your content gaps are and make sure that users can move throughout your website and throughout the funnel and inform themselves on the decision they’re trying to make.

Conclusion

So with that, this should give you some idea of how we develop keyword research here at our own agency, and I hope that you guys can utilize some of these strategies in your own keyword research wherever you are out in the world. So thanks again for listening. Happy New Year. Take care.

Video transcription by Speechpad.com


What I Found After Experimenting with Google Discover for Two Months

Posted by Moberstein

I’m completely fascinated by Google’s Discover Feed. Besides the fact that it serves highly-relevant content, it also seems beyond the reach of being gamed. In a way, it almost seems beyond the reach of pure SEO (which makes it downright tantalizing to me).

It all made me want to understand what makes the feed tick.

So I did what any sensible person would do. I spent the better part of two months running all sorts of queries in all sorts of different ways to see how it impacted my Discover Feed.

Here are my ramblings.

My approach to analyzing Google’s Discover Feed

Let me explain what I did and how I did it, to both give you a better understanding of this analysis and point out its gaping limitations.

For five days a week, and over the course of two months, I executed all sorts of user behavior aimed at influencing my Discover Feed.

I ran queries on specific topics on mobile, searched for other topics on desktop… clicked on results… didn’t click on results… went directly to sites and clicked… went directly to sites and didn’t click anything, etc.

In other words, I wanted to see how Google reacted to my various behaviors. I wanted to see if one behavior influenced what showed in my Discover Feed more than other behaviors.

To do this, I searched for things I would generally never search for, went to sites I would normally never visit, and limited my normal search behavior at times so as not to influence the feed.

For example, I hate celebrity news and gossip with a passion, so I went to people.com every day (outside of the weekends) and scrolled through the site without clicking a thing. I then recorded if related material (i.e. celebrity nonsense) ended up in my Discover Feed the next day.

I recorded all of my various “web behaviors” in the same way. I would execute a given behavior (e.g. search for things related to a specific topic on mobile, but without clicking any results) and record what happened in my Discover Feed as time went on.

Here’s a breakdown of the various behaviors I executed along with the topics associated with each behavior. (For the record, each behavior corresponds to a single topic or website so I could determine the impact of that behavior on my Discover Feed.)

Allow me to quickly elaborate on the behaviors above:

  • These are all topics/sites that I am in no way interested or involved in (particularly self-help car repair).
  • When I clicked a YouTube video, I watched the entire thing (I mean, I didn’t actually watch half the time, but Google doesn’t know that… or do they?)
  • When I visited a site, I did scroll through the content and stay on the page for a bit.
  • A search for a “segment of a topic already existing in Discover Feed” means that the overall topic was something that regularly appeared in my feed (in this case, NFL football and MLB baseball). However, the subtopics, in this case the Cowboys and Marlins, were topics I never specifically searched for and did not appear in my feed. Also, the data for these two categories only reflects one month of experimentation.

All of this points to various limitations.

Is it possible that Google sees a topic like entertainment news as more “Discover-worthy” than sewing? It is.

Is it possible that going to a site like Fandango during a pandemic (when many theaters were closed) influenced Google’s decision to include or exclude things related to the topic matter dealt with by the site? It is.

What if I hadn’t skipped the weekends and had executed the above every single day? Would that have made a difference? I don’t know.

I’m not trying to portray any of the data I’ll present as being overly-conclusive. This is merely what I did, what I found, and what it all made me think.

Let’s have at it then.

How user behavior impacted my Discover Feed

Before I dive into the “data”, I want to point out that the heart of my observations isn’t found in the data itself, but in some of the things I noticed in my Discover Feed along the way.

More than that, this data is far from conclusive or concrete, and in most ways speaks to my unique user-profile. That said, let’s have a look at the data, because there just may be some general takeaways.

As I mentioned, I wanted to see the impact of the various online behaviors on my Discover Feed. That is, how frequently did Google insert content related to the topics associated with each specific behavior into my feed?

For all the times I went to japantimes.co.jp, how often was there content in my feed related to Japanese news? For all the times I searched for and watched YouTube videos on lawn care, how often did Google show me such content in Discover?

Survey says:

Here are some of the most intriguing highlights reflected in the graph above:

  • Watching YouTube videos on mobile had no impact on my feed whatsoever (though it certainly did on what YouTube ads I was served).
  • Watching YouTube videos on desktop had little impact (in fact, any insertion of “sewing” into my feed was only as a topic card which contained no URL).
  • Searching on Google alone, without clicking a result, was ineffective.
  • Visiting a desktop site and clicking around was very effective at filling my feed with “cooking” content, however the same was not true on mobile.

Watching YouTube videos (desktop) about sewing was only successful in getting Google to include the topic in its “Discover more” cards.

I want to emphasize that when I say things like “YouTube mobile watches had no impact”, I don’t mean that as a general statement. Rather, such a statement only applies to the way I engaged with YouTube (one video watch per day). Clearly, if you watch a large number of YouTube videos around one topic in a short time, Discover will pick this up.

I did, in fact, test this.

I gave my kids free rein at various moments to take my phone and watch a large number of videos related to specific topics (surprisingly, they were happy to oblige and to watch vast amounts of YouTube).

I have twin 9-year-old boys. One watched an obscene number of YouTube videos and executed an insane number of searches related to airplanes and flight simulators. I am still awaiting the day where my feed stops showing cards related to this topic. Here’s my search history to prove it:

The other little fellow watched videos about the weather and animal behavior that results from it for a few hours straight (hey, it was during the height of quarantine). That same day, this is what I saw in my feed:

You don’t need me to tell you that if Google thinks you’re going gaga over a specific topic, it will throw said topic into your Discover Feed posthaste.

My goal in all of this was not to find the quickest way to get Google to update the topics it shows in your Discover Feed. The point of my methodology was to see if there was one type of behavior that Google seemed to take more seriously than another vis-a-vis inserting new topics into my Discover Feed.

To that point, Google did react differently to my various behaviors.

That doesn’t mean I can draw many conclusions based on the above data. For example, Google clearly saw my going to foodnetwork.com and clicking on an article each day as a strong signal that “cooking” deserves to be in my Discover Feed.

Google was apt to think of my behavior of visiting foodnetwork.com and clicking an article each day as an endorsement for wanting “cooking” content in my Discover Feed.

At the same time, Google completely ignored that behavior on mobile. Each day I went to japantimes.co.jp and scrolled through an article. Yet, not once did Google include anything even remotely related to Japanese news in my feed.

I suspect that the topic here was too far removed from overall search behavior. So while it was reasonable for Google to assume I wanted cooking-related material in my feed, the same did not hold true for topics related to Japan.

I think this is the same reason why the topic associated with my visiting a site on desktop without clicking anything made it into my feed. The topic here was celebrity news, and I imagine that Google has profiled this topic as being one that is highly-relevant to Discover. So much so that Google tested including it in my feed at various points.

Despite never clicking on an article when visiting people.com each day, Google still flirted with showing celebrity news content in my Discover Feed.

That said, there is some reason to believe that desktop behavior has more of an impact than mobile user behavior.

The case for desktop Discover Feed dominance

About a month into my little experiment I wondered what would happen if I started searching for and clicking on things that were segments of topics that already appeared in my feed.

Deciding on these segments was quite easy. My feed is constantly filled with material on baseball and American football. Thus, I decided to search for and click on two teams I have no interest in. This way, while the topic overall was already in my feed, I would be able to see the impact of my behavior.

Specifically, on desktop I searched for things related to the Dallas Cowboys, clicking on a search result each time. Similarly, I did the same for the Miami Marlins baseball team on mobile.

Again, in both cases, content specific to these teams had yet to appear in my feed.

Here are the results of this activity:

Over a 30-day period, I found 10 instances of content related to the Dallas Cowboys in my feed and 6 instances of content about the Miami Marlins.

Again, just as in the first set of data I presented, a disparity between mobile and desktop exists.

Is this a general rule? Is this based on my particular profile? I don’t know. It’s just an interesting point that should be investigated further.

I will say that I doubt the content itself played a role. If anything, there should have been more results on mobile about the Marlins, as I was very much caught up in the World Series that was taking place at the time of my activity.

What does this data actually mean?

There are so many factors at play that using any of the data above is a bit “hard.” Yes, I think there are some trends or indicators within it. However, that’s not really what I want you to take away from all of this. (Also, is it such a crime to consume data solely because it’s interesting to see some of what’s going on?)

What do I want you to take away then?

As part of my data analysis (if you’ll even call it that) I looked at how long it took for a behavior to result in Discover Feed inclusion. Surprisingly, the numbers were pretty consistent:

Discounting the 31 behavior instances it took for my “Search Desktop No Click” activity (e.g. searching for all things related to “fishing” but clicking on nothing) to impact my feed, Google picked up on what I was doing fairly quickly.

Generally speaking, it took less than 10 behaviors for Google to think it should update the topics shown in my feed.

That’s really the point. Despite the normal things I search for and engage with both regularly and heavily (things like SEO, for example), Google took this “lighter” yet consistent behavior as a signal to update my feed.

Google was very aware of what I was doing and acted on it pretty quickly all things considered. In the case of “food/cooking” content, as shown earlier, Google took my behavior very seriously and consistently showed such content in my feed.

Forget which behavior on which device produced more cards in my feed. The fact that it varied at all is telling. It shows Google is looking at the type of engagement and where it happens in the context of your overall profile.

Personally, I think if you (yes, you, the person reading this) did this experiment, you would get different results. Maybe some of the trends might align, but I would imagine that would be it.

And now for the really interesting part of all this.

Diving into what was and what wasn’t in my Discover Feed

As I’ve mentioned, the data is interesting in some of the possible trends it alludes to and in that it shows how closely Google is watching your behavior. However, the most interesting facets of this little project of mine came from seeing what Google was and was not showing day-in and day-out.

Is Google profiling users utilizing the same account?

The first month of this study coincided with a lockdown due to COVID-19. That meant my kids were home, all day, for a month. It also meant they watched a lot of YouTube. From Wild Kratts to Woody Woodpecker, my kids consumed a heap of cartoons and they did so using my Google account (so I could see what they were watching).

Wouldn’t you know, a funny thing happened. There was no “cartoon” content in my Discover Feed. I checked my feed religiously that month and not once did I notice a card about a cartoon.

Isn’t that odd?

Not if Google is profiling my account according to the devices being used or even according to the content being consumed. All signs point to Google being well aware that the content my kids were watching was not being consumed by the one using Discover (me).

This isn’t a stretch at all. The same happens in my YouTube feed all the time. While my desktop feed is filled to the brim with Fireman Sam, the YouTube app on my phone is a mixture of news and sports (I don’t “SEO” on YouTube) as my kids generally don’t watch their “programs” on my phone.

The URLs I visited were absent from Discover

There was one other thing missing from my Discover Feed and this one has enormous implications.

URLs.

Virtually none of the URLs I visited during my two-month experiment popped up in my Discover Feed!

I visited the Food Network’s website some 40 times, each time clicking and reading (pretending to read, to be fair) an article or recipe. By the time I was nearing the end of my experiment, Discover was showing me some sort of food/cooking-related content every day.

Through all of that, not once did Google show me a URL from the Food Network! Do you like apples? Well, how do you like them apples? (Cooked slowly with a hint of cinnamon.)

This was the general trend for each type of behavior that produced new topics in my feed. I visited a few websites about car repair, and Google threw some cards about the topic into my feed… none of which were from the sites I visited.

The only sites I visited that also appeared in my Discover Feed were ESPN, for some of the sports queries I ran, and people.com, which I visited every day. However, I think that was entirely coincidental, as both sites are top sources of content in their spaces.

Yes, some sites I visit regularly do appear in my feed in general. For example, there were some local news sites that I visited multiple times a day for the better part of a month so as to track COVID-19 in my area. I freely admit it was a compulsion. One that Google picked up on.

In other words, it took a heck of a lot for Google to think I wanted that specific site or sites in my feed. Moreover, it would seem that Google doesn’t want to simply show content from the URLs you visit unless the signal otherwise is immense.

This leads me to my next question…

Is Discover really an SEO issue?

What can you do to optimize for Google Discover? It’s almost an absurd question. I visited the same site every day and Google still didn’t include its URL in my feed. (Again, I am aware that certain behaviors will trigger a specific URL; my point is that Google is not as apt to do so as you might think.) It all points to a certain lack of control. It all points to Google specifically not wanting to pigeon-hole the content it shows in Discover.

In other words, you can’t create content specifically for Discover. There’s no such concept. There’s no such control. There is no set of standardized “ranking signals” that you can try to optimize for.

Optimizing your images to make sure they’re high-quality or ensuring they’re at least 1,200 pixels wide and so forth isn’t really “optimizing” for Discover. It’s merely making yourself eligible to get into the ballpark. There is no standardized path to actually get on the field.

The entire idea of Discover is to offer content that’s specifically relevant to one user and all of their various idiosyncrasies. The notion of “optimizing” for something like that almost doesn’t compute.

Like with optimizing your images for Discover, all you can really do is position yourself.

And how does one position themselves for inclusion into the Discover Feed?

One of the sites that kept popping up in my feed was dallascowboys.com. This makes sense as I was searching for things related to the Dallas Cowboys and clicking on all sorts of results as a consequence. However, in my “travels” I specifically did not visit dallascowboys.com. Yet, once Google saw I was interested in the Cowboys, it was one of the first sites I was served with.

You don’t need to be a rocket scientist to see why. What other site is more relevant and more authoritative than the official site of the franchise?

If you want your site to be included in Discover, you need to be incredibly relevant and authoritative on whatever it is your site deals with.

That means investing time and resources into creating unique and substantial content. It means crafting an entire strategy around creating topical identity. After all, the idea is to get Google to understand that your site deals with a given topic, deals with it in-depth, and deals with it often (i.e., the topic is closely related to who you are as a site).

That sounds a heck of a lot more like “content marketing” than pure SEO, at least it does to me.

A cross-discipline marketing mind meld

Discover, to me, is the poster child for the merging of pure content creation and SEO. It speaks to the idea of needing a more abstract understanding of what a sound content strategy is, in order to be effective in the “Google-verse.”

It’s perhaps a different sort of motion than what you might typically find in the world of pure SEO. As opposed to diving into the minute details (be it a specific technical problem or a specific aspect of content optimization), Discover urges us to take a more holistic approach, to take a step back.

The way Discover is constructed advocates for a broader approach based on a meta-analysis of how a site is perceived by Google and what can be done to create a stronger profile. It’s almost the perfect blend of content, marketing, and an understanding of how Google works (SEO).

Fascinating.


According to the Experts: 5 Technical SEO Trends to Watch in 2021

Posted by morgan.mcmurray

It’s no secret that SEO relies heavily on technical components to drive site rankability, and with so many emerging technologies, new tools, and metrics (*cough* Core Web Vitals *cough*), you might be wondering whether these constant updates will affect your more technical work.

To find out more about the state of technical SEO in 2021, we asked seven industry experts for their thoughts. The overwhelming answer? Keep doing what you’re doing.

“The core essentials in 2021 will remain about the same — every SEO needs to understand the fundamentals of crawling vs. indexing and the technical basics that have to be met before a site can rank,” says Moz Search Scientist, Dr. Pete Meyers. “All the fancy footwork in the world won’t get you anywhere if there’s no floor beneath you.”

But what should that floor be constructed of? Read on to find out!

1. Focus on the fundamentals


Technical best practices are the “best” for a reason, so having a strong foundation of basic technical SEO skills is still a must.

“For me, the most underrated technical SEO strategy has always been the fundamental best practices,” says consultant Joe Hall. “It might sound surprising to some, but the vast majority of my clients have a difficulty in grasping the importance of best practices or just the basic fundamentals of SEO. I think this is in large part because of our community’s focus and attention on the ‘next best thing’, and not very often talking about the basic fundamentals of technical SEO.”

Those fundamentals include hard skills like understanding how to recognize and fix crawlability, indexation, accessibility, and site performance issues, but also how to prioritize the issues you come across.

SEO expert Lily Ray notes that prioritization is an area of improvement that novice technical SEOs need to address first, as they may be more inclined to cite small things as major problems when they’re really not: “It is common for new or junior SEOs to send a laundry list of technical items as exported by [SEO tools] directly to clients, without prioritizing which ones are the most important to fix, or knowing which ones can be ignored,” she says. “In many cases, the tools don’t even flag some of the more severe technical issues that may be affecting crawling, indexation, or rendering… Good technical SEOs are able to pinpoint real problems that are having a significant impact on the website’s ability to rank well, and they know which tools or other resources to use to be able to solve those problems.”

So start taking note of not just the what when it comes to technical issues, but also the influence those issues actually have on the sites you work on.

Need to brush up or build on these hard skills? Not to worry — Moz Academy recently released a Technical SEO Certification that can help you do just that!


Beyond the more hands-on, practical skill sets required for building and maintaining search-optimized websites, our experts agree that basic soft skills are just as important, with several citing the need for cross-team collaboration abilities.

“Technical SEO implementations generally require working with multiple teams… which means there’s a lot of partnership, persuasion, give and take collaborations,” says Alexis Sanders, the SEO Director at Merkle. “Having project management, client services, storytelling, and communication skills will support implementation.”

So don’t get stuck in the weeds of your technical work — make sure you’re in constant communication with the teams and stakeholders who will help support your initiatives.

2. Gear up for Core Web Vitals


One of the hottest topics in the industry right now is no doubt Core Web Vitals, the new Google ranking factors update expected in May 2021. But do technical SEOs really need to worry about them?

The experts say yes, but to work as a team to address them, and make your SEO voice heard. Alexis Sanders puts it this way: “The page experience update consists of Core Web Vitals, mobile-friendliness, web app security, and removing interstitials. Regardless of how teams are structured, making progress is going to require a wide array of talents, giving SEO a more involved seat at the table, as these elements affect our bottom-line.”

When prioritizing what to focus on, make sure that improving site speed is at the top of your list.

“If you only work on one area of Technical SEO in 2021, make it site speed,” advises Kick Point President Dana DiTomaso. “Site speed is one of those great parts of technical SEO where the benefit isn’t only for search engines — it also helps the people visiting your website. Remember, not everyone is coming to your website using the latest technology and fastest internet connection.”

When asked about their favorite ways to optimize, here’s what the experts suggested:

  1. Start using a content delivery network, such as cloudflare.com.
  2. Implement server-side caching for markup and design assets like CSS and JavaScript, and minimize the number of individual requests made for each page by bringing CSS and JavaScript in-line.
  3. Optimize media files by converting to next-generation formats and compressing for size and use of data (see the sketch after this list).
  4. Use tools like BuiltWith, Wappalyzer, and Lighthouse to investigate what third party scripts have been loaded on a page, and remove them if you no longer need them, or move as many as compatible to a tag management tool.
  5. Focus on image performance optimization.
  6. Work with analytics and other internal teams to establish processes and expectations for adding and removing tagging.
  7. Set requirements and expectations around page speed early in the development process.
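For item three above, here’s roughly what that conversion step can look like when scripted. This is a minimal sketch using the Pillow imaging library, with hypothetical folder names and an arbitrary quality setting, not a one-size-fits-all recipe:

```python
# Minimal sketch: batch-convert JPEG/PNG images to WebP with Pillow.
# Folder names and the quality value are illustrative assumptions.
from pathlib import Path

from PIL import Image

SOURCE_DIR = Path("images")        # hypothetical input folder
OUTPUT_DIR = Path("images/webp")   # hypothetical output folder
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for src in SOURCE_DIR.iterdir():
    if src.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    dest = OUTPUT_DIR / (src.stem + ".webp")
    with Image.open(src) as img:
        if img.mode not in ("RGB", "RGBA"):
            img = img.convert("RGBA")  # WebP wants RGB/RGBA input
        # quality=80 is a common starting point; tune it against your own images
        img.save(dest, "WEBP", quality=80)
    print(f"{src.name}: {src.stat().st_size} -> {dest.stat().st_size} bytes")
```

Whether you script this or lean on your CDN’s image optimization, the goal is the same: smaller, next-generation formats shave real time off page loads.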

Addressing any site speed and usability issues now will set you up to better weather rankings shake-ups caused by Core Web Vitals.


3. Use schema and structured data strategically

To ensure that crawlers can read, index, and serve the content of their sites to searchers, many SEOs rely on structured data and schema frameworks to organize everything — as well they should. But when implementing structured data, the experts agree, make sure you’re using it to achieve specific goals, and not just because you can.

“Some structured data has material impact on search results or how Google can process and understand a site, while other structured data will be totally irrelevant to any given site or have no measurable impact,” says Dr. Pete. “We need to use structured data with clear intent and purpose in order to see results.”

Lily Ray agrees, pointing out the debate on the topic of schema within the industry:

“There is a wide range of opinions on this topic within the SEO community, with some SEOs wanting to ‘mark up all the things’ and others not believing schema is important if it doesn’t generate Rich Results. Personally, I like to apply structured data if I believe it can provide search engines with more context about the entities included in our clients’ websites, even if that schema does not generate Rich Results. For example, I believe that adding Schema attributes related to your brand and your authors is a good approach to help solidify information in Google’s Knowledge Graph.”

The takeaway? Get clear on your goals, and implement structured data if it makes sense for your strategy, but don’t “mark up all the things” if doing so will create unnecessary work for you and your team without bringing about the results you’re looking for.

4. Leverage automation to get things done


Emerging technologies don’t always stick around long enough to become useful, but one innovation that won’t be going away anytime soon is using languages like Python to help automate various workflows, like data analysis and research.

“The technical SEO industry has been exploding with new ideas and innovations in the past couple of years, particularly related to analyzing data at scale and automating SEO processes, which has resulted in programming languages like Python moving into the spotlight,” says Lily Ray.

Why is automation important? Not only can it make your day-to-day work easier and more streamlined, it can have positive effects on your business as well.

“I still think that improving time to task completion (performance optimization) is core to every business,” says Miracle Inameti-Archibong, the Head of SEO at Erudite. “Not just because of the page experience update coming in May, but because it affects all channels and directly affects the bottom line of the business (sale, leads) which is what key decision-makers are interested in.”

In 2021, explore ways in which automation can help you achieve both your technical SEO and broader business goals more effectively.

5. Don’t forget about analytics and reporting


SEO success is incremental and gradual, usually taking months to years before you can definitively show how the work you put in has paid off. But if something goes wrong? Well, Dr. Pete has the perfect analogy: “The truth is that technical SEO is often like washing dishes — no one gives you much credit for it, but they sure notice when you break something.”

While technical SEO is the basis for all other SEO work, your non-SEO co-workers and managers will likely pay attention more when things are going wrong than when they’re going right. To help mitigate this issue, he suggests steering clear of “vanity metrics”, such as pages indexed, and instead “showing how a clear plan of action led to improvements in relevant rankings, traffic, and sales.”


Make sure you’re outlining specific metrics and goals from the start of every campaign, which will help guide your efforts and give you an easier framework for reporting on things down the line. And don’t forget to factor in outside forces that may be affecting your results.

“Organic traffic can be impacted by a lot of external factors, or your other, non-technical SEO campaigns,” says Tom Capper, Moz’s Senior Search Scientist (say that five times fast). “Tactics like SEO split-testing or, at a more primitive level, counterfactual forecasting, can help to isolate these effects in many cases, and happily technical changes tend to have a quicker, more direct impact than some other types of change that don’t see returns until the next core update.”

So when analyzing and reporting, remember: quantity isn’t always quality, and make sure you have the full picture before gleaning insights.

Conclusion


While the core of your technical SEO work will stay the same in 2021, there is plenty of opportunity to build and improve on foundational skills, implement structured data and automation, clarify the way you analyze and report your results, and plan for Core Web Vitals to take effect. And while technical work can sometimes feel isolating, remember that cross-team collaboration is key to success, and that you’re part of a community of SEOs with similar goals!

Speaking of community, we’d be remiss if we didn’t mention the amazing work of Areej AbuAli and the Women in Tech SEO network.

“If you identify as a woman, do join the Women in Tech SEO Slack channel and subscribe to its newsletter,” advises Miracle Inameti-Archibong. “I wish I had a community like that at the beginning of my career. There are loads of people always willing to help with not just technical SEO issues, but mentoring and sharing of opportunities.”

Have questions for the experts, or advice not mentioned here? Let us know in the comments!


Daily SEO Fix: Gaining Insight from Exported Moz Pro Data

Posted by M.Cole

Gaining insight from data you’ve gathered is vital to the success of your SEO efforts, allowing you to monitor your performance over time and make strategic changes where necessary.

One of the Moz onboarding team’s goals is to make sure you’re getting the most out of your Moz Pro data. Most of the time, that will involve providing insight within the tool. But you can also get great insight from exported Moz Pro data.

To help ensure that you’re doing all you can to maximize the value of your exported data, we’ve created a handy collection of Daily SEO Fix videos. Remember, if you’d like to speak directly with a member of our onboarding team, you can book a personalized, one-on-one Moz Pro walkthrough.


What’s the most beneficial thing you’ve learned from your exported Moz Pro data? Let us know in the comments!


Filter ranking keywords by subfolders

How can I see ranking keywords for a specific subfolder of my site?

When analyzing your keywords, you may sometimes want to focus on the performance of a certain subfolder of your site.

In this video, Jo guides you through exporting a CSV of ranking keywords, and filtering to identify keywords for a specific subfolder.

This gives you the ability to focus on ranking keywords that are relevant to what you’re currently working on.
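If you want to do the same filtering outside the tool, here’s a rough sketch in Python with pandas. The file name and the “URL” column header are assumptions, so check the headers in your own export before running it:

```python
# Sketch: filter an exported ranking-keywords CSV down to a single subfolder.
# The file name and "URL" column name are assumptions; match them to your export.
import pandas as pd

df = pd.read_csv("ranking_keywords_export.csv")
subfolder = "/blog/"

blog_keywords = df[df["URL"].str.contains(subfolder, na=False)]
blog_keywords.to_csv("ranking_keywords_blog.csv", index=False)
print(f"{len(blog_keywords)} of {len(df)} ranking keywords point to {subfolder}")
```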


Filter and export popular keywords

How do I filter my keywords before I export them?

In this Daily Fix, Emilie takes us through filtering your keywords prior to export.

Filtering your keywords allows you to hone in on the keywords that you’re ranking highest for, in addition to keywords that have a high monthly volume and low difficulty score.
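Those same filters can also be replicated on an exported CSV if you ever need to repeat them outside the tool. A small sketch with assumed column names (“Rank”, “Monthly Volume”, “Difficulty”) and arbitrary thresholds:

```python
# Sketch: replicate in-tool keyword filters on an exported CSV.
# Column names and thresholds are assumptions; adjust them to your export.
import pandas as pd

df = pd.read_csv("keywords_export.csv")
popular = df[
    (df["Rank"] <= 10)
    & (df["Monthly Volume"] >= 1000)
    & (df["Difficulty"] <= 40)
]
popular.to_csv("keywords_popular.csv", index=False)
```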


Export your rankings to a CSV

How can I export a CSV of my tracked keywords?

In this video, I show you how to export a CSV of your rankings keyword data.

Exporting a CSV from the rankings section of your Moz Pro Campaign allows you to see a detailed overview of how your tracked keywords are performing.

You can also filter these keywords by label to analyze a specific group of keywords.


Export title and description tags

How do I view all of my site’s page URLs, titles, and descriptions?

In this Daily Fix, Jo explains how to export a site’s page URLs, titles, and descriptions to a CSV with Moz Pro.

This is super helpful for a content audit, as it gives you a great overview of every page that we’ve been able to crawl on your site.


Export follow links from Link Explorer

How do I export a CSV of my followed links?

In this video, Maddie shows you how to export a CSV of your followed links.

It can be beneficial to view the links that are passing link equity through to your site and contributing to your Page Authority and Domain Authority.

By doing this, you can get a better idea of whether you need to work on building more followed links to your site.


How to Resolve Duplicate Content

Posted by meghanpahinui

What is duplicate content, and why is it a concern for your website? Better yet, how can you find it and fix it?

In this week’s episode of Whiteboard Friday, Moz Learn Team specialist, Meghan, walks through some handy (and hunger-inducing) analogies to help you answer these questions! 


Video Transcription

Hey, Moz fans. Welcome to another edition of Whiteboard Friday. I’m Meghan, and I’m part of the Learn Team here at Moz. Today we’re going to talk a bit about duplicate content. 

So why are we talking about duplicate content?

Well, this is a pretty common issue, and it can often be a bit confusing. What is it? How is it determined? Why are certain pages on my site being flagged as duplicates of one another? And most importantly, how do I resolve it if I find that this is something that I want to tackle on my site? 

What is duplicate content?

So first off, what is duplicate content?

Essentially, duplicate content is content that appears in more than one place on the Internet. But this may not be as cut and dry as it seems. Content that is too similar, even if it isn’t identical, may be considered duplicates of one another. 

When thinking about duplicate content, it’s important to remember that it’s not just about what human visitors see when they go to your site and compare two pages. It’s also about what search engines and crawlers see when they access those pages. Since they can’t see the rendered page, they typically go off of the source code of the page, and if that code is too similar, the crawler may think that it’s looking at two versions of the same page. 

Imagine that you go to a bakery and there are two cupcakes in front of you that look almost identical. They don’t have any signs. How do you know which one you want? That’s what happens when a search engine encounters two pages that are too similar. 

This confusion between pieces of content can lead to things like ranking issues, because search engines may not be able to figure out which page they should rank or they may rank the incorrect page. Within the Moz tools, we have a 90% threshold for duplicate content, which means that any pages with code that is at least 90% the same will be flagged as duplicates of one another.
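To make that idea concrete, here is a rough, back-of-the-napkin sketch of how two pages’ source code could be scored for similarity. It is only an illustration of the concept, not Moz’s actual duplicate-detection logic, and the URLs are placeholders:

```python
# Rough illustration only: compare the raw source of two pages and flag them
# above a 90% similarity score. This is not how Moz's crawler actually works.
import difflib
import urllib.request


def fetch_source(url: str) -> str:
    """Download a page's raw HTML source."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="ignore")


page_a = fetch_source("https://example.com/page-a")  # placeholder URLs
page_b = fetch_source("https://example.com/page-b")

score = difflib.SequenceMatcher(None, page_a, page_b).ratio()
print(f"Source similarity: {score:.1%}")

if score >= 0.90:
    print("These pages would likely be flagged as duplicates of one another.")
```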

Solutions

So now that we’ve briefly covered what duplicate content is, what do we do about it? There are a few different ways that you can resolve duplicate content. 

301 redirects

First is the option to implement 301 redirects. This option would be similar to having a VHS copy of a movie, which maybe isn’t so relevant anymore.

So you want to be sure to provide folks with the digital version that’s streaming online. On your site, you can redirect older versions of pages to new, updated versions. This is relevant for issues with subdomain or protocol changes as well as content updates where you no longer want people to be able to access that older content.

Rel=canonicals

Next is the option to implement rel=canonicals on your page. Say you’re at a bake sale and you have two types of cookies with you, sugar and chocolate chip. You consider your sugar cookies to be top-notch. So when folks ask you which one they should try, you point them to the sugar cookies even though they still have the option to try the chocolate chip.

On your site, this would be similar to having two items for sale that are different colors. You want human visitors to be able to see and access both colors, but you would use a canonical tag to tell crawlers which one is the more relevant page to rank. 

Meta noindex

You also have the option to mark pages as meta noindex.

For example, you may have two editions of your favorite book. You’re going to read and reference that second edition because it’s the newest and most relevant. But you still want to be able to read and access edition one should you need to. Meta noindex tags tell the crawler that they can still crawl that duplicate page, but they shouldn’t include it in their index. This can help with duplicate content issues due to things like pagination. 

Add content

But what if you have two pages that really aren’t duplicates of one another? They are about different topics, and they should be treated as separate pieces of content. Well, in this case, you may opt to add more content to each of these pages so it’s less confusing to the crawler.

This would allow them to stand out from one another, and it would be similar to say adding sprinkles and a cherry to one cupcake and maybe a different color frosting to the other. 

Use Moz Pro to help identify and resolve duplicate content

If you ever need help identifying which pages on your site may be considered duplicates of one another, Moz Pro Site Crawl and On-Demand Crawl can help.

Within both of these tools, we’ll flag which pages are considered duplicates of one another, and you can even export that data to CSV so you can analyze it outside of the tool. Just a little pro tip here. In the CSV export of that data, the duplicate content group will tell you which pages are considered duplicates of one another.

So any pages with the same duplicate content group number are part of the same group of duplicate pages. This is by no means an exhaustive list of the ways you can resolve duplicate content, but I do hope that it helps to point you in the right direction when it comes to tackling this issue. If you’re interested in learning more about SEO fundamentals and strategy, be sure to check out the SEO Essentials Certification that’s offered through the Moz Academy.

Thanks for watching.

Video transcription by Speechpad.com


10 Steps to Blend STAT Ranking Data with Site Performance Metrics

Posted by AndrewMiller

Too often, we assume that SEO best practices will work in any industry against any competitive set. But most best practices go untested and may not be “best” in every situation.

We all know that tactics that worked in 2020 won’t necessarily move the needle in 2021 as Core Web Vitals (CWV) and other signals shuffle to the front. We have to do better for our businesses and our clients.

I’m a data nerd at heart with lots of battle scars from 15 years in SEO. The idea of analyzing thousands of local SERPs sounded like too much fun to pass up. I found some surprising correlations, and just as importantly, built a methodology and data set that can be updated quarterly to show changes over time.

I analyzed 50,000+ SERPs in the retail banking sector so I could make sense of the massive shifts in rankings and search behaviors during the lockdown period. We have a lot of historical data for bank websites, so comparing pre/post COVID data would be easier than starting from scratch.

I’ll share how I did it below. But first, I want to share WHY I think sharing this type of research is so important for the SEO community.

Why validate SEO best practices with data?

It’s a great time to be an SEO. We have amazing tools and can gather more data than ever. We have thriving communities and excellent basic training materials.

Yet, we often see our craft distilled into overly-simplified “best practices” that are assumed to be universally true. But if there’s one universal truth in SEO, it’s that there are no universal truths. A best practice can be misinterpreted or outdated, leading to missed opportunities or outright harm to a business.

Using the increasing importance of CWV as an example, SEOs have an opportunity (and obligation) to separate fact from fiction. We need to know if, and by how much, CWV will impact rankings over time so we can prioritize our efforts.

We can elevate our SEO game individually and collectively by testing and validating best practices with research. It just takes a curious mind, the right tools, and a willingness to accept the results rather than force a narrative.

Failing to validate best practices is a liability for SEO practitioners and shows an unwillingness to challenge assumptions. In my experience, a lack of data can lead to senior stakeholders’ opinions carrying more weight than an SEO expert’s recommendations.

Start by asking the right questions

Real insight comes from combining data from multiple sources to answer critical questions and ensure your strategies are backed by valid data. In my analysis of local banks, I started by listing the questions I wanted to know the answers to:

  • What characteristics are shared by top-ranking local bank websites?
  • Who are banks actually competing against in the SERPs? Is it primarily other banks?
  • How do competitive SERPs change based on when/where/how users search?
  • How can smaller, local businesses gain an edge over larger competitors from outside their region?
  • How does SERP composition affect a bank’s ability to rank well for targeted keywords?
  • How important are Core Web Vitals (CWV) for rankings? How does this change over time?

You could run this same analysis by replacing “banks” with other local business categories. The list of potential questions is endless so you can adjust them based on your needs.

Here’s an important reminder – be prepared to accept the answers even if they are inconclusive or contradictory to your assumptions. Data-driven SEOs have to avoid confirmation bias if we’re going to remain objective.

Here’s how I analyzed 50,000 search results in a few hours

I combined three of my favorite tools to analyze SERPs at scale and gather the data needed to answer my questions:

  • STAT to generate ranking reports for select keywords
  • Screaming Frog to crawl websites and gather technical SEO data
  • Power BI to analyze the large data sets and create simple visualizations

Step 1: Determine your data needs

I used US Census Bureau data to identify all cities with populations over 100,000, because I wanted a representation of local bank SERPs across the country. My list ended up including 314 separate cities, but you could customize your list to suit your needs.
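If you want to script that step, a quick sketch is below. It assumes a Census export with “city”, “state”, and “population” columns; the real file may use different headers:

```python
# Sketch: build the list of markets from a Census population file.
# The file name and column names are assumptions about the Census export.
import pandas as pd

census = pd.read_csv("census_city_populations.csv")
markets = census[census["population"] > 100_000].copy()

# Build "City, ST" strings in the format STAT expects for its Location field
markets["location"] = markets["city"] + ", " + markets["state"]
markets[["location"]].to_csv("cities.csv", index=False)
print(f"{len(markets)} markets selected")
```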

I also wanted to gather data for desktop and mobile searches to compare SERP differences between the device types.

Step 2: Identify your keywords

I chose “banks near me” and “banks in {city, st}” based on their strong local intent and high search volumes, compared to more specific keywords for banking services.

Step 3: Generate a STAT import file in .csv format

Once you have your keywords and market list, it’s time to prepare the bulk upload for STAT. Use the template provided in the link to create a .csv file with the following fields:

  • Project: The name of the new STAT project, or an existing project.
  • Folder: The name of the new folder, or an existing folder. (This is an optional column that you can leave blank.)
  • Site: The domain name for the site you want to track. Note, for our purposes you can enter any URL you want to track here. The Top 20 Report will include all ranking URLs for the target keywords even if they aren’t listed in your “Site” column.
  • Keyword: The search query you’re adding.
  • Tags: Enter as many keyword tags as you want, separated by commas. I used “city” and “near me” as tags to distinguish between the query types. (This is an optional column that you can leave blank.)
  • Market: Specify the market (country and language) in which you would like to track the keyword. I used “US-en” for US English.
  • Location: If you want to track the keyword in a specific location, specify the city, state, province, ZIP code, and/or postal code. I used the city and state list in “city, st” format.
  • Device: Select whether you would like Desktop or Smartphone results. I selected both.

Each market, location, and device type will multiply the number of keywords you must track. I ended up with 1,256 keywords (314 markets X 2 keywords X 2 devices) in my import file.
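Rather than building 1,256 rows by hand, you can script the import file. Here’s a minimal sketch that mirrors the columns above; the project name and site are placeholders, and cities.csv is the market list from Step 1:

```python
# Sketch: generate the STAT bulk-import CSV (314 markets x 2 keywords x 2 devices).
# "Local Banks" and "example.com" are placeholders; cities.csv comes from Step 1.
import csv

keywords = {"banks near me": "near me", "banks in {city}": "city"}  # keyword -> tag
devices = ["Desktop", "Smartphone"]

with open("cities.csv", newline="") as f:
    locations = [row["location"] for row in csv.DictReader(f)]

with open("stat_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Project", "Folder", "Site", "Keyword",
                     "Tags", "Market", "Location", "Device"])
    for location in locations:
        for template, tag in keywords.items():
            keyword = template.replace("{city}", location)
            for device in devices:
                writer.writerow(["Local Banks", "", "example.com",
                                 keyword, tag, "US-en", location, device])

print(f"Wrote {len(locations) * len(keywords) * len(devices)} keyword rows")
```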

Once your file is complete, you can import to STAT and begin tracking.

Step 4: Run a Top 20 Report in STAT for all keywords

STAT’s built-in Google SERP Top 20 Comparison report captures the top 20 organic results from each SERP at different intervals (daily, weekly, monthly, etc.) to look at changes over time. I did not need daily data so I simply let it run on two consecutive days and removed the data I did not need. I re-run the same report quarterly to track changes over time.

Watch the video below to learn how to set up this report! 

My 1,256 keywords generated over 25,000 rows of data per day. Each row is a different organic listing and includes the keyword, monthly search volume, rank (includes the local pack), base rank (does not include the local pack), https/http protocol of the ranking URL, the ranking URL, and your tags.

Here’s an example of the raw output in CSV format:

It’s easy to see how useful this data is by itself, but it becomes even more powerful when we clean it up and start crawling the ranking URLs.

Step 5: Clean up and normalize your STAT URLs data

At this point you may have invested 1-2 hours in gathering the initial data. This step is a bit more time consuming, but data cleansing allows you to run more advanced analysis and uncover more useful insights in Screaming Frog.

Here are the changes I made to the STAT rankings data to prepare for the next steps in Screaming Frog and Power BI. You’ll end up with multiple columns of URLs. Each serves a purpose later.

  1. Duplicate the Ranking URL column to a new column called Normalized URL.
  2. Remove URL parameters from the Normalized URL fields by using Excel’s text to columns tool and separating by “?”. I deleted the new column(s) containing the URL parameters because they were not helpful in my analysis.
  3. Duplicate the new, clean Normalized URL column to a new column called TLD. Use the text to columns tool on the TLD column and separate by “/” to remove everything except the domain name and subdomains. Delete the new columns. I chose to keep the subdomains in my TLD column but you can remove them if it helps your analysis.
  4. Finally, create one more column called Full URL that will eventually become the list of URLs that you’ll crawl in Screaming Frog. To generate the Full URL, simply use Excel’s concatenate function to combine the Protocol and Normalized URL columns. Your formula will look something like this: =concatenate(A1, “://”, C1) to include the “://” in a valid URL string.

The 25,000+ rows in my data set are well within Excel’s limitations, so I am able to manipulate the data easily in one place. You may need to use a database (I like BigQuery) as your data sets grow.
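If you’d rather skip the spreadsheet work entirely, the same cleanup can be scripted. A sketch in pandas, assuming the STAT export includes “Protocol” and “Ranking URL” columns as described above; rename them to match your own file:

```python
# Sketch: replicate the Excel cleanup steps in pandas.
# "Protocol" and "Ranking URL" are assumed column names from the STAT export.
import pandas as pd

df = pd.read_csv("stat_rankings.csv")

# Steps 1-2: Normalized URL with any query-string parameters stripped
df["Normalized URL"] = df["Ranking URL"].str.split("?").str[0]

# Step 3: TLD column with only the domain/subdomain portion
df["TLD"] = df["Normalized URL"].str.split("/").str[0]

# Step 4: Full URL to feed into Screaming Frog
df["Full URL"] = df["Protocol"] + "://" + df["Normalized URL"]

df.to_csv("stat_rankings_clean.csv", index=False)
```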

Step 6: Categorize your SERP results by website type

Skimming through the SERP results, it’s easy to see that banks are not the only type of website that rank for keywords with local search intent. Since one of my initial questions was SERP composition, I had to identify all of the different types of websites and label each one for further analysis.

This step is by far the most time consuming and insightful. I spent 3 hours categorizing the initial batch of 25,000+ URLs into one of the following categories:

  • Institution (banks and credit union websites)
  • Directory (aggregators, local business directories, etc.)
  • Reviews (local and national sites like Yelp.com)
  • Education (content about banks on .edu domains)
  • Government (content about banks on .gov domains and municipal sites)
  • Jobs (careers sites and job aggregators)
  • News (local and national news sites with banking content)
  • Food Banks (yes, plenty of food banks rank for “banks near me” keywords)
  • Real Estate (commercial and residential real estate listings)
  • Search Engines (ranked content belonging to a search engine)
  • Social Media (ranked content on social media sites)
  • Other (completely random results not related to any of the above)

Your local SERPs will likely contain many of these website types and other unrelated categories such as food banks. Speed up the process by sorting and filtering your TLD and Normalized URL columns to categorize multiple rows simultaneously. For example, all the yelp.com rankings can be categorized as “Reviews” with a quick copy/paste.
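Copy/paste works, but a lookup table can make a first pass for you. Here’s a sketch with a deliberately tiny, made-up mapping; anything it doesn’t recognize is left blank for manual review:

```python
# Sketch: first-pass categorization via a domain lookup table.
# The mapping below is a tiny illustrative sample, not a complete list.
import pandas as pd

df = pd.read_csv("stat_rankings_clean.csv")

category_map = {
    "www.chase.com": "Institution",
    "www.bankofamerica.com": "Institution",
    "www.yelp.com": "Reviews",
    "www.yellowpages.com": "Directory",
    "www.indeed.com": "Jobs",
}

df["Category"] = df["TLD"].map(category_map)

uncategorized = df["Category"].isna().sum()
print(f"{uncategorized} rows still need manual categorization")

df.to_csv("stat_rankings_categorized.csv", index=False)
```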

At this point, your rankings data set is complete and you are ready to begin crawling the top-ranking sites in your industry to see what they have in common.

Step 7: Crawl your target websites with Screaming Frog

My initial STAT data identified over 6,600 unique pages from local bank websites that ranked in the top 20 organic search results. This is far too many pages to evaluate manually. Enter Screaming Frog, a crawler that mimics Google’s web crawler and extracts tons of SEO data from websites.

I configured Screaming Frog to crawl each of the 6,600 ranking pages for a larger analysis of characteristics shared by top-ranking bank websites. Don’t just let SF loose though. Be sure to configure it properly to save time and avoid crawling unnecessary pages.

These settings ensure we’ll get all the info we need to answer our questions in one crawl:

List Mode: Paste in a de-duplicated Full URL list from your STAT data. In my case, this was 6,600+ URLs.

Database Storage Mode: It may be a bit slower than Memory (RAM) Storage, but saving your crawl results on your hard disk ensures you won’t lose your results if you make a mistake (like I have many times) and close your report before you finish analyzing the data.

Limit Crawl Depth: Set this to 0 (zero) so the spider will only crawl the URLs on your list without following internal links to other pages on those domains.

APIs: I highly recommend using the Pagespeed Insights Integration to pull Lighthouse speed metrics directly into your crawl data. If you have a Moz account with API access, you can also pull link and domain data from the Moz API with the built-in integration.

Once you have configured the spider, let it rip! It could take several minutes to several hours depending on how many URLs you’re crawling and your computer’s speed and memory constraints. Just be patient! You might try running larger crawls overnight or on an extra computer to avoid bogging your primary machine down.

Step 8: Export your Screaming Frog crawl data to Excel

Dumping your crawl data into Excel is remarkably easy.

Step 9: Join your data sets in Power BI

At this point, you should have two data sources in Excel: one for your STAT rankings data and another for your Screaming Frog crawl data. Our goal is to combine the two data sources to see how organic search rank may be influenced by on-page SEO elements and site performance. To do this, we must first merge the data.

If you have access to a Windows PC, the free version of Power BI is powerful enough to get you started. Begin by loading your two data sources into a new project using the Get Data wizard.

Once your data sets are loaded, it’s time to make the magic happen by creating relationships in your data to unlock correlations between rankings and site characteristics. To combine your data in Power BI, create a many-to-many relationship between your STAT Full URL and Screaming Frog Original URL fields. 

If you are new to BI tools and data visualization, don’t worry! There are lots of helpful tutorials and videos just a quick search away. At this point, it’s really hard to break anything and you can experiment with lots of ways to analyze your data and share insights with many types of charts and graphs.

I should note that Power BI is my preferred data visualization tool, but you may be able to use Tableau or something equally powerful. Google Data Studio was not an option for this analysis because it only allows for left outer joins of the multiple data sources and does not support “many-to-many” relationships. It’s a technical way of saying Data Studio isn’t flexible enough to create the data relationships that we need.
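If you don’t have access to Power BI or Tableau, the same join can be expressed in pandas. A sketch with assumed column names (“Full URL” from the STAT data, “Address” for the crawled URL in the Screaming Frog export, plus whatever rank and performance columns your files actually use):

```python
# Sketch: merge STAT rankings with the Screaming Frog crawl export in pandas.
# All column names here are assumptions; rename them to match your exports.
import pandas as pd

rankings = pd.read_csv("stat_rankings_categorized.csv")
crawl = pd.read_excel("screaming_frog_crawl.xlsx")  # or read_csv for a CSV export

merged = rankings.merge(
    crawl,
    left_on="Full URL",
    right_on="Address",  # assumed name of the crawled-URL column
    how="left",
)

# Example question: do faster pages tend to rank better?
# "Rank" and "Performance Score" are assumed column names.
print(merged.groupby("Rank")["Performance Score"].mean().head(20))
```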

Step 10: Analyze and visualize!

Power BI’s built-in visualizations allow you to quickly summarize and present data. This is where we can start analyzing the data to answer the questions we came up with earlier.

Results — what did we learn?

Here are a couple examples of the insights gleaned from merging our rankings and crawl data. Spoiler alert — CWV doesn’t strongly impact organic rankings… yet!

Who are banks actually competing against in the SERPs? Is it primarily other banks?

On desktops, about 67% of organic search results belong to financial institutions (banks and credit unions) with heavy competition from reviews sites (7%) and online directories (22%). This information helps shape our SEO strategies for banks by exposing opportunities to monitor and maintain listings in relevant directories and reviews sites.

Okay, now let’s mash up our data sources to see how the distribution of website categories varies by rank on desktop devices. Suddenly, we can see that financial institutions actually occupy the majority of the top 3 results while reviews sites and directories are more prevalent in positions 4-10.

How important are Core Web Vitals (CWV) for rankings? How does this change over time?

Site performance and site speed are hot topics in SEO and will only become more important as CWV becomes a ranking signal in May this year. We can begin to understand the relationships between site speed and rankings by comparing STAT rankings and Pagespeed Insights data from Screaming Frog reports.

As of January 2021, sites with higher Lighthouse Performance Scores (i.e. they load faster) tend to rank better than sites with lower scores. This could help justify investments in site speed and site performance.

Some CWV elements correlate more closely with better rankings and others are more scattered. This isn’t to say CWV aren’t important or meaningful, but rather it’s a starting point for further analysis after May.

So what? What can we learn from this type of analysis?

Separately, STAT and Screaming Frog are incredibly powerful SEO tools. The data they provide are useful if you happen to be an SEO, but the ability to merge data and extract relationships will multiply your value in any organization that values data and acts on insights.

Besides validating some generally accepted SEO knowledge with data (“faster sites are rewarded with better rankings”), better use of relational data can also help us avoid spending valuable time on less important tactics (“improve Cumulative Layout Shift at all costs!”).

Of course, correlation does not imply causation, and aggregated data does not guarantee an outcome for individual sites. But if you’re a bank marketing professional responsible for customer acquisition from organic channels, you’ll need to bring this type of data to your stakeholders to justify increased investments in SEO.

By sharing the tools and methodology, I hope others will take this further and contribute their own findings to the SEO community. What other datasets can we combine to deepen our understanding of SERPs on a larger scale? Let me know your thoughts in the comments!


Featured Snippets Drop to Historic Lows

Posted by Dr-Pete

On February 19, MozCast measured a dramatic drop (40% day-over-day) in SERPs with Featured Snippets, with no immediate signs of recovery. Here’s a two-week view (February 10-23):

Here’s a 60-day view, highlighting this historic low-point in our 10K-keyword data set:

I could take the graph back further, but let’s cut to the chase — this is the lowest prevalence rate of Featured Snippets in our data set since we started collecting reliable data in the summer of 2015.

Are we losing our minds?

After the year we’ve all had, it’s always good to check our sanity. In this case, other data sets showed a drop on the same date, but the severity of the drop varied dramatically. So, I checked our STAT data across desktop queries (en-US only) — over two million daily SERPs — and saw the following:

STAT recorded an 11% day-over-day drop. Interestingly, there’s been a 16% total drop since February 10, if we include a second, smaller drop on February 13. While MozCast is desktop-only, STAT has access to mobile data. Here’s the desktop/mobile comparison:

While mobile SERPs in STAT showed higher overall prevalence, the pattern was very similar, with a 9% day-over-day drop on February 19 and a total drop of about 12% since February 10. Note that, while there is considerable overlap, the desktop and mobile data sets may contain different search phrases. While the desktop data set is currently about 2.2M daily SERPs, mobile is closer to 1.7M.
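For anyone wanting to reproduce this kind of prevalence tracking on their own keyword set, the calculation is straightforward: count the share of tracked SERPs showing a Featured Snippet each day, then look at the day-over-day change. The sketch below assumes a hypothetical daily log with one row per keyword per day; the file and column names are placeholders.

```python
import pandas as pd

# Hypothetical daily log: one row per tracked keyword per day, with a
# boolean "has_featured_snippet" flag for that day's SERP.
serps = pd.read_csv("daily_serp_features.csv", parse_dates=["date"])

# Prevalence = share of tracked SERPs showing a Featured Snippet each day.
prevalence = serps.groupby("date")["has_featured_snippet"].mean()

# Day-over-day percentage change makes drops like February 19 easy to spot.
day_over_day = prevalence.pct_change().mul(100).round(1)
print(day_over_day.tail(14))
```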

Note that the MozCast 10K keywords are skewed (deliberately) toward shorter, more competitive phrases, whereas STAT includes many more “long-tail” phrases. This explains the overall higher prevalence in STAT, as longer phrases tend to include questions and other natural-language queries that are more likely to drive Featured Snippets.

Why the big difference?

What’s driving the 40% drop in MozCast and, presumably, more competitive terms? First things first: we’ve hand-verified a number of these losses, and there is no evidence of measurement error. One helpful aspect of the 10K MozCast keywords is that they’re evenly divided across 20 historical Google Ads categories. While some changes impact industry categories similarly, the Featured Snippet loss showed a dramatic range of impact:

Competitive healthcare terms lost more than two-thirds of their Featured Snippets. It turns out that many of these terms had other prominent features, such as Medical Knowledge Panels. Here are some high-volume terms that lost Featured Snippets in the Health category:

  • diabetes
  • lupus
  • autism
  • fibromyalgia
  • acne

While Finance had a much lower initial prevalence of Featured Snippets, Finance SERPs also saw massive losses on February 19. Some high-volume examples include:

  • pension
  • risk management
  • mutual funds
  • roth ira
  • investment

Like the Health category, these terms have a Knowledge Panel in the right-hand column on desktop, with some basic information (primarily from Wikipedia/Wikidata). Again, these are competitive “head” terms, where Google was displaying multiple SERP features prior to February 19.

Both Health and Finance search phrases align closely with so-called YMYL (Your Money or Your Life) content areas, which, in Google’s own words “… could potentially impact a person’s future happiness, health, financial stability, or safety.” These are areas where Google is clearly concerned about the quality of the answers they provide.

What about passage indexing?

Could this be tied to the “passage indexing” update that rolled out around February 10? While there’s a lot we still don’t know about the impact of that update, and while that update impacted rankings and very likely impacted organic snippets of all types, there’s no reason to believe that update would impact whether or not a Featured Snippet is displayed for any given query. While the timelines overlap slightly, these events are most likely separate.

Is the snippet sky falling?

While the 40% drop in Featured Snippets in MozCast appears to be real, the impact was primarily on shorter, more competitive terms and specific industry categories. For those in YMYL categories, it certainly makes sense to evaluate the impact on your rankings and search traffic.

Generally speaking, this is a common pattern with SERP features — Google ramps them up over time, then reaches a threshold where quality starts to suffer, and then lowers the volume. As Google becomes more confident in the quality of their Featured Snippet algorithms, they may turn that volume back up. I certainly don’t expect Featured Snippets to disappear any time soon, and they’re still very prevalent in longer, natural-language queries.

Consider, too, that some of these Featured Snippets may just have been redundant. Prior to February 19, someone searching for “mutual fund” might have seen this Featured Snippet:

Google is assuming a “What is/are …?” question here, but “mutual fund” is a highly ambiguous search that could have multiple intents. At the same time, Google was already showing a Knowledge Graph entity in the right-hand column (on desktop), presumably from trusted sources:

Why display both, especially if Google has concerns about quality in a category where they’re very sensitive to quality issues? At the same time, while it may sting a bit to lose these Featured Snippets, consider whether they were really delivering. While this term may be great for vanity, how often are people at the very beginning of a search journey — who may not even know what a mutual fund is — going to convert into a customer? In many cases, they may be jumping straight to the Knowledge Panel and not even taking the Featured Snippet into account.

For Moz Pro customers, remember that you can easily track Featured Snippets from the “SERP Features” page (under “Rankings” in the left-hand nav) and filter for keywords with Featured Snippets. You’ll get a report something like this — look for the scissors icon to see where Featured Snippets are appearing and whether you (blue) or a competitor (red) are capturing them:

Whatever the impact, one thing remains true — Google giveth and Google taketh away. Unlike losing a ranking or losing a Featured Snippet to a competitor, there’s very little you can do to reverse this kind of sweeping change. For sites in heavily-impacted verticals, we can only monitor the situation and try to assess our new reality.
