Information Satisfaction is a key ranking factor for Google

Discover how to put information satisfaction (IS), a crucial metric in Google’s organic search rankings, to work for your company.

Google has been gradually modifying its language when discussing ranking algorithms.

While astute industry observers will note that Google hasn’t revealed anything particularly new, it’s still critical to clarify and codify what has been implied and apply it to digital marketing.

Information satisfaction and people-first content

Information satisfaction (IS) is the central idea we should focus on. It is not new, and it has long been hiding in plain sight.

What is information satisfaction?

Information satisfaction is a key criterion for evaluating how well a system meets user needs.

In terms of search engine optimization, the system is an information retrieval system, or search engine. Yet, IS is employed in a wide range of other contexts and systems, including the evaluation of content management systems and the experiences of cancer patients.

A search engine’s results pages would have high IS ratings if the search features and the websites displayed as results, both separately and together, satisfy the expectations of the searcher.

We need to address two issues in order to comprehend IS’s function in organic search:

  • How and why is IS important to Google?
  • How can companies address this? 

What Google has stated regarding IS

I’ll limit my discussion to Google’s most recent statements here, even though the term “satisfaction” has been used sporadically in the industry for well over a decade.

Since even before the release of the Panda algorithm, Google has been discussing ranking algorithms aimed at surfacing “more original, helpful content written by people, for people.”

Google’s helpful content ranking method is centered around the concept of satisfaction, with the stated objective being to “reward content where visitors feel they’ve had a satisfying experience.”

Google states later in the document that the “classifier process is entirely automated, using a machine-learning model” for the helpful content update.

Simplifying this and speculating a little, this indicates that the machine learning model is:

  • operating continuously in the background.
  • being updated or refined on a regular or constant basis to accommodate changes resulting from page revisions and the publication of new pages.

This largely explains why it can take several months to recover from a helpful content demotion.

The idea of information satisfaction, variously described as the primary top-level metric for the entire SERP that Google optimizes for, was at the center of Pandu Nayak’s testimony in October 2023 in Google’s US antitrust case.

The full testimony is well worth reading through, but to make my point, I’d like to focus on just three very lucid passages:

In the middle of a discussion on ranking tests, on page 6428:

Q: I understand that IS is Google’s main criterion for quality.

A: In agreement.

Page 6432, discussing RankBrain:

Q: After that, [RankBrain] is adjusted using IS data?

A: You’re right.

On page 6448, regarding RankEmbed BERT:

Q: After that, is [RankEmbed BERT] adjusted using data from human IS raters?

A: It is, indeed. 

This means that, for many algorithms, the IS score serves as the final arbiter of Google search quality, independent of the signals that the algorithms consider.

For instance, IS scores are used to assess and optimize RankBrain, regardless of its internal workings. That is essentially the main SEO lesson to be learned from that testimony. (DeepRank has already surpassed RankBrain in several areas.)

The Search Quality Rater Guidelines also treat satisfaction as a metric. Let’s return to the version from 2017. High-quality pages are said to have:

  • Section 4.2: “A Satisfying Amount of High Quality Main Content”
  • Section 4.3: “Clear and Satisfying Website Information”

The rater guidelines have continued to evolve and provide further detail on this since then.

In fact, in section 0.0 of the current edition, Google notes that “diversity in search results is essential to satisfy the diversity of people who use search.” This is qualified further as soon as Google mentions “authoritative and trustworthy information.”

Information satisfaction appears again much later. Section 3.1 of the most recent version of the guidelines emphasizes its importance in rating page quality: when evaluating the main content (MC), raters should “Consider the extent to which the MC is satisfying and helps the page achieve its purpose.”

More recently, discussing Search Generative Experience’s capacity to answer a wider range of queries, Google CEO Sundar Pichai stated, “We are improving satisfaction,” during the company’s Q4 2023 earnings call at the end of January 2024 (the full recording is available). I could go on.

To put it succinctly, Google uses information satisfaction as its primary search quality metric. Information retrieval experts may find it unsurprising that IS is a crucial metric. However, in general, businesses and content producers might require a little more assistance.

Using IS in digital marketing and SEO

I’ve been arguing that SEO is really product management for a very long time.

This prompts the audience to consider SEO in terms of the user journey, wherein each encounter between the user and the business is a part of the user experience.

Since websites are typically the focal point of a brand’s online presence, user satisfaction on the site is essential for both happy consumers and high Google rankings.

Since the application of IS thinking in reality largely depends on how sophisticated the client’s user engagement operation is, one of the first things I look for when taking on a new client is information about their customer journeys and user personas.

When an organization’s user engagement procedures are basic, it will take a sustained effort to start from scratch, or to drastically alter current internal processes, in order to integrate IS thinking into content production, website operations, and marketing activities.

Conversely, companies that have effective customer engagement systems in place and record these interactions in a way that other teams can use are probably already meeting the needs of their consumers. This is especially true in situations where various specialized teams work together.

Whichever end of this spectrum your business is on, you need two things in order to incorporate IS thinking:

  • a thorough understanding of the customer journeys, as they provide the fundamental structure required for content production and upkeep.
  • a method of evaluating data at every stage of the journey.

Customer journeys serve as the foundation.

First and foremost, you must possess a thorough comprehension of the various users (roughly represented by user personas) and every stage of the customer journey for every user persona.

From there, you may determine how to best meet the user’s needs at each stage of the process, at that particular location and time.

Let’s examine this more closely. Any given customer journey needs to be:

  • thorough, going into great detail about the user, their background, their state of mind, their influences and influencers, and a plethora of other details at every turn.
  • updated often to take into account shifts in consumer behavior or the market, as well as the company’s most recent understanding of its clientele. 

In my experience applying IS thinking in client engagements, issues with the customer journey are the most common problems.

First, no company has a single type of customer, and a given type of customer may be represented by several personas. In practice, then, we are talking about several customer journeys.

Another common issue I observe is that customer journeys are conceptualized in a linear fashion, as if they were a neat sequence of steps or stages that customers follow exactly as documented. In practice, human behavior and psychology are far more complex.

Thinking linearly can also have the negative side effect of making you lose sight of the customer’s post-conversion behaviors as journey steps in their own right. This covers customer support, customer service, and user retention.

Measuring information satisfaction

Once the customer journeys are gathered, the content and SEO teams have the task of satisfying the user’s searches at every stage of the journey.

As outsiders, we cannot know (and need not worry) exactly how Google measures satisfaction, since the company is constantly researching and creating new signals. We can do even better, because we understand our customers more deeply. And we know who to ask if we get stuck.

The first step is to reframe the way that keyword research is thought of: When IS is the objective, a Google search query is the outward expression of an issue that people are experiencing at that particular stage of the customer journey.

Satisfying the user’s information need, then, means helping them complete the task at hand.

SEOs have been discussing query intent for a while now. That is a good first step toward IS thinking, but you need to dig far deeper.

Your interpretation of a particular query at one point in the customer journey may differ from your interpretation of it in the context of a later stage of the same user’s journey.

Indeed, it is not uncommon to discover the same query being used at several stages of the customer journey, and each time the query would probably require a different response.
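
To make this concrete, here is a small, hypothetical sketch of how one query can map to different journey stages, each calling for a different satisfying response. The query, stages, and responses are invented for illustration, not taken from any real customer journey:

```python
# Hypothetical mapping: the same query at different customer journey stages
# calls for different satisfying responses. All values here are invented.
QUERY = "crm pricing"

journey_map = [
    {
        "stage": "awareness",
        "user_problem": "Is a CRM worth the cost for a five-person team?",
        "satisfying_response": "Plain-language cost/benefit overview, no sign-up wall",
    },
    {
        "stage": "evaluation",
        "user_problem": "How do vendor tiers compare against my feature list?",
        "satisfying_response": "Transparent tier comparison with per-seat pricing",
    },
    {
        "stage": "post-purchase",
        "user_problem": "Am I still on the right plan now that the team has grown?",
        "satisfying_response": "Plan-change guide plus an upgrade cost calculator",
    },
]

for entry in journey_map:
    print(f'{QUERY} ({entry["stage"]}): {entry["satisfying_response"]}')
```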

We now have a conceptual model to help us understand how to quantify information satisfaction.

IS has been around for decades, and a lot of research has been done on how to use it most effectively in various contexts.

At the risk of oversimplifying the depth of the research, the key instrument for measuring IS is the survey. The majority of IS survey questions use a Likert scale, a sliding scale that runs from a negative rating to a positive rating.
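
As a minimal sketch of how such a question could be scored in practice, here is one possible setup. The question wording, the 1 to 5 scale, and the labels are assumptions for illustration, not Google’s actual instrument:

```python
# Minimal sketch of a Likert-scale information satisfaction (IS) survey item.
# The question wording, the 1-5 scale and the labels are illustrative assumptions.
LIKERT_LABELS = {
    1: "Very dissatisfied",
    2: "Dissatisfied",
    3: "Neutral",
    4: "Satisfied",
    5: "Very satisfied",
}

QUESTION = "How well did this page satisfy the information need behind your search?"

def is_score(responses):
    """Average a list of 1-5 Likert responses into a single IS score."""
    if not responses:
        raise ValueError("No responses collected")
    if any(r not in LIKERT_LABELS for r in responses):
        raise ValueError("Responses must be on the 1-5 scale")
    return sum(responses) / len(responses)

# Example: ten raters answer the question for one page.
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]
print(f"{QUESTION}\nPage IS score: {is_score(ratings):.2f} / 5")  # 4.00 / 5
```

A plain average is just one way to summarize the responses; a median or a full distribution would also work.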

This structure will be instantly recognizable to anyone who is familiar with the Search Quality Rating Program’s General Guidelines: the page quality (PQ) rating task in the current guidelines is presented as exactly this kind of sliding scale, running from Lowest to Highest.

That is a standard information satisfaction questionnaire question with a Likert scale structure.

Nayak attested to this in his testimony. For instance, in the middle of a discussion about algorithm change experiments, on page 6425 (Nayak’s answers are the lines labeled A):

Q: We examine each and every one of these 15,000 query results.

A: Alright.

Q: And our raters rate them for us. So these are being rated by people?

A: They offer ratings for each of them. Consequently, the query set as a whole receives an IS score.

Q: For the entire set?

A: You do receive an IS score for that. (The testimony continues.)

Conclusion

This means you can design a procedure that follows the guidelines to determine a page’s IS score in much the same way Google does.
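
As a sketch of what such an in-house procedure could look like, the snippet below rolls rater scores for individual (query, page) pairs up into per-page scores and then into a score for the whole query set, mirroring the structure described in the testimony. The queries, URLs, numbers, and the simple mean aggregation are all assumptions for illustration, not Google’s actual method:

```python
from statistics import mean

# Illustrative rater scores (1-5 Likert) per (query, result page) pair.
# Queries, URLs and values are invented; the plain mean is an assumed aggregation.
ratings = {
    ("best running shoes", "/reviews/running-shoes-2024"): [4, 5, 4],
    ("best running shoes", "/shop/running-shoes"): [3, 3, 4],
    ("marathon training plan", "/guides/marathon-16-weeks"): [5, 5, 4],
}

# IS score per result page, averaged over raters.
page_scores = {pair: mean(scores) for pair, scores in ratings.items()}

# IS score for the query set as a whole, averaged over all rated pairs.
query_set_score = mean(page_scores.values())

for (query, url), score in page_scores.items():
    print(f"{query} -> {url}: {score:.2f}")
print(f"Query-set IS score: {query_set_score:.2f}")
```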

In other words, Google is effectively telling us that it is a satisfaction engine, and SEO is really satisfaction engine optimization.
