Review of 2019 Updates
Last year, Google made more than 3,200 changes to its search systems, including visible launches of new features as well as regular updates meant to keep results relevant.
“Our search algorithms are complex math equations that rely on hundreds of variables, and last year alone, we made more than 3,200 changes to our search systems. Some of these were visible launches of new features, while many others were regular updates meant to keep our results relevant as content on the web changes.”
Sullivan, Danny. “How we keep Search relevant and useful.” Google, 15 July 2019, https://www.blog.google/products/search/how-we-keep-google-search-relevant-and-useful/. Accessed 2 Jan. 2020.
Timeline of Important 2019 Google Updates
June 6, 2019: SITE DIVERSITY
On June 6, 2019, Google announced via its Search Liaison Twitter account that it had updated Search to show a more diverse set of results.
For a given query, Google aims to show no more than two listings from the same domain in the top results. The change affects only core web results, not additional search features.
“Site diversity will generally treat subdomains as part of a root domain. IE: listings from subdomains and the root domain will all be considered from the same single site.”
@searchliaison. “The site diversity change means that you usually won’t see more than two listings from the same site in our top results…” Twitter, 6 June 2019, 1:58 p.m., twitter.com/searchliaison.
July 1, 2019: ROBOTS.TXT DIRECTIVES
On July 1, 2019, Google announced that it was working to make the robots.txt protocol (the Robots Exclusion Protocol) an Internet standard. The change affecting the most websites concerns how to “deindex” a page, that is, how to keep a page from displaying in search results.
Previously, many websites simply disallowed the page in their robots.txt file. This practice is ineffective because crawlers can still find the page through links from other sites.
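For illustration, here is what that ineffective approach typically looked like (the path is hypothetical):

Example:
User-agent: *
Disallow: /members-only/

A rule like this only blocks crawling; the blocked URL can still be indexed and shown in results if other sites link to it.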
September 1, 2019: END TO NOINDEX AS A DIRECTIVE
On September 1, 2019, Google officially withdrew support for unpublished rules within the Robots Exclusion Protocol, putting an end to noindex as a directive within the robots.txt file.
To prevent a page from appearing in Google Search, developers now need to include a noindex meta tag in the page’s HTML code.
Example: <meta name="robots" content="noindex" />
Be sure the page is not blocked by the robots.txt file; Googlebot must be able to crawl the page in order to see the tag. When Googlebot next crawls the page and sees the tag, it will drop the page entirely from Google Search results, regardless of whether other sites link to it.
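Putting the pieces together, a minimal sketch of a page meant to stay out of search results (the content is hypothetical) leaves the page crawlable in robots.txt and carries the noindex tag in its head:

Example:
<!DOCTYPE html>
<html>
<head>
  <meta name="robots" content="noindex" />
  <title>Members Only</title>
</head>
<body>...</body>
</html>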
September 10, 2019: NEW WAYS TO IDENTIFY THE NATURE OF LINKS
On September 10, 2019, Google introduced rel="sponsored" and rel="ugc" to identify paid links and links in user-generated content, respectively.
“All the link attributes — sponsored, UGC and nofollow — are treated as hints about which links to consider or exclude within Search. We’ll use these hints — along with other signals — as a way to better understand how to appropriately analyze and use links within our systems.”
“Evolving ‘nofollow’ – new ways to identify the nature of links.” Google Webmaster Central Blog, 10 Sept. 2019, https://webmasters.googleblog.com/2019/09/evolving-nofollow-new-ways-to-identify.html. Accessed 2 Jan. 2020.
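As a sketch of how these attributes look in markup (the URLs are hypothetical), the new values are applied just like nofollow, and Google permits combining multiple values in one rel attribute:

Example:
<!-- a paid or affiliate link -->
<a href="https://example.com/partner" rel="sponsored">Partner offer</a>
<!-- a link inside a comment or forum post -->
<a href="https://example.com/blog" rel="ugc">Commenter's blog</a>
<!-- multiple values may be combined -->
<a href="https://example.com" rel="ugc nofollow">User-submitted link</a>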
October 3, 2019: H1 TAGS
On October 3, 2019, Google’s John Mueller explained that multiple H1 tags on a page, or the lack of any H1 at all, will not trip up Google’s systems.
“In short, when thinking about this topic, SEO shouldn’t be your primary objective,” Mueller advised. “Instead, think about your users: if you have ways of making your content accessible to them, be it by using multiple H1 headings or other standard HTML constructs, then that’s not going to get in the way of your SEO efforts.”
Mueller, John. “Multiple H1 Headings: How to Handle Them for SEO & Accessibility? #AskGoogleWebmasters.” YouTube, 3 Oct. 2019, youtu.be/gK645_7TA6c.
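As a simple illustration (the content is hypothetical), a page like the following uses multiple H1 headings for readability, which, per Mueller, will not get in the way of your SEO efforts:

Example:
<section>
  <h1>Pricing</h1>
  <p>Plans start at $10/month.</p>
</section>
<section>
  <h1>Support</h1>
  <p>Reach our team any time.</p>
</section>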
October 25, 2019: BERT
On October 25, 2019, Google announced BERT: Bidirectional Encoder Representations from Transformers.
*Note: Bing had already been using BERT for six months, since April 2019.
Bill Slawski, a search engine patent expert, describes BERT this way:
“Bert is a natural language processing pre-training approach that can be used on a large body of text. It handles tasks such as entity recognition, part of speech tagging, and question-answering among other natural language processes. Bert helps Google understand natural language text from the Web.”
Montti, Roger. “Google BERT Update – What it Means.” Search Engine Journal, 25 Oct. 2019, https://www.searchenginejournal.com/google-bert-update. Accessed 2 Jan. 2020.
If you have heard advice like:
- “Think more about natural language searches.”
- “BERT means optimizing your site for long-tail queries.”
Whoever offered it likely misunderstood natural language processing. It is not that your site has to be friendly to long-tail searches.
BERT is about Google understanding what a user means and being able to connect that to more specific information that already exists on your website.
You do not need to optimize for BERT.
Curious about what 2020 SEO will bring? These are my thoughts on industry buzzwords and how to stay relevant in a no-click SERP.