
User-first SEO: A modern approach to growing your online visibility

Author: Michel Fortin



In traditional SEO, experts typically recommend following a keyword-first approach: find high-volume keywords, create content around them, meet a bunch of ranking factors, and wait for traffic to come rolling in. But, if the results are less than expected, where did it fail?


The problem is not necessarily that it failed, but that it failed to take into account the most important part of SEO: the user. With a keyword-first approach, the focus is on the search engine, which can lead to optimizations that end up being at the user's expense.


Traditional SEO is often a zero-sum game, where trying to appeal to both searchers and search engines becomes a delicate balancing act that forces unwelcome sacrifices and compromises. Today, that trade-off is no longer necessary. A user-first approach naturally appeals to the search engines because they have evolved to meet the same objective you have: to help the user.


In this article about user-first SEO, I’ll discuss:

  • Why traditional SEO is outdated

  • Why “ranking factors” is misleading in modern SEO

  • The opportunity costs of chasing rankings

  • Why rankings are not key performance indicators

  • The three pillars of search visibility

  • The two essential ingredients SEO boils down to

  • Intent alignment and signal quality


Why traditional SEO is outdated


In the late 90s, I taught marketing at a local college. Part of the curriculum included SEO. While it was quite basic back then, the keyword-first SEO approach was the prevailing method. It’s the same method that persists today. But, simply because something is done a certain way and has been done that way for over 20 years doesn’t mean it’s the best way, let alone the only way.


There’s an old folktale about a new Jewish bride who, for her first big meal, would cut off the ends of the brisket before cooking it. There are several variations of this story, but a popular one is when her in-laws ask her why she cuts off the ends, to which she answers: “That’s how my mother always did it!”


Curious, they ask the mother the same question. Her answer was identical: “That’s how my mother always did it!” Thinking it’s some sort of secret family recipe handed down from generation to generation, one day they ask the grandmother the same question, to which she exclaims: “Because it wouldn’t fit in the pan!”


Similarly, the traditional, keyword-first approach to SEO made sense because it worked, and worked well, for a long time. But today, it’s less effective, and there are a few reasons for that, chief among them how search itself has evolved. To understand what’s more effective now, it’s important to understand what has changed and why.


According to Google, SEO is the process of making your site better for search engines so they can easily find, crawl, and understand it. But SEO is not really about search engines—they’re an intermediary. Your goal is to target your audience while going through that intermediary. It’s to improve your chances of appearing when your audience searches for you or something you offer.


A visual representation of three dimensions of SEO

If you’re acquainted with SEO, you know that it has three dimensions: technical SEO, on-page SEO, and off-page SEO. Another way to look at it is optimizing signals that come from behind your site, on your site, and beyond your site. These three things haven’t changed since the beginning.


However, Google updates itself thousands of times a year in an effort to improve the quality of its results. Over the course of the 20+ years it has existed, that puts the total estimated number of updates in the tens of thousands. It would be foolish to think that the signals Google looks for have remained stagnant—or that an SEO approach from two decades ago still applies today exactly the way it did back then.

“Our computers, smartphones and apps are regularly updated to help make them better. The same thing happens with Google Search. In fact, Google Search is updated thousands of times a year to improve the experience and the quality of results.” — Danny Sullivan, public liaison of search at Google (Google, 2021)

As search engines have evolved to improve their results (and to borrow from Google’s definition), so too has “the process of making your site better for the search engines.”


Traditional SEO vs. Modern SEO


One of the biggest changes we’ve seen is Google’s increasing transparency. It used to operate as a black box, where ranking factors were mostly unknown. Obviously, Google kept its cards close to its chest because it didn’t want to open itself to misinterpretation and degrade the quality of its results—let alone allow spammers and other bad actors to exploit any potential loopholes to game the system.


Granted, Google is still not 100% transparent. But, that black box is definitely shrinking. Today, we know more about what Google wants because it tells us what it’s looking for.


For example, Google offers several comprehensive guidelines to follow. Its search advocate team openly answers questions through its search forums, podcasts, and social media accounts. Google also regularly files patents on its search process, which are available for all to see. Thanks to SEO experts like Bill Slawski (RIP) and Olaf Kopp, among others, we get a glimpse into Google’s inner workings.


What precipitated the change in transparency, in my opinion, is the way search has evolved, which has made it possible for Google to become more transparent (if not pressured it into doing so). More specifically, there are three key shifts that may have played a role.


01. Search environments

Environmental factors play a significant role in how we search for stuff. The questions we ask and the answers we seek may vary according to trends, times, news, politics, habits, economies, and much more. New external factors come into play all the time, too—some of which we never would have imagined or prepared for (like a global pandemic, for example).


Moreover, we used to search the internet only on desktop computers and via browsers. Today, we have hundreds of mobile devices and smart appliances that can search the web, not to mention countless apps and platforms to search with (and not just browsers). Because these technologies are updated all the time, as Google’s Danny Sullivan noted, they create new opportunities for Google to adapt and improve.


02. Search behaviors

SEO is a never-ending process because Google doesn’t operate in a vacuum. Rankings are volatile because they have to be. If all things were equal (i.e., if all users searched the same way with the same intent, and using the same keywords meant for the same purpose), then there would be little need to optimize things. Obviously, that’s not reality. “The only constant is change,” as Heraclitus famously posited.


Google is constantly introducing new features as well as new ways to appear in its results. At the same time, doing so also increases the likelihood that new bugs, errors, and vulnerabilities will surface. This creates new avenues for spammers to exploit in their attempts to leapfrog legitimate sites, potentially filling the search engine results pages (SERPs) with poor-quality content that gives users less value.


03. Search algorithms

Finally, in alignment with and likely as a consequence of the previous two shifts, Google has three goals: serve higher-quality results, make them harder to manipulate, and above all, remove spam. Google deals with an ever-increasing volume of spam (it reported detecting some 40 billion spammy pages per day in 2020), so it’s no surprise that it has made significant advancements in spam detection.


Plus, Google’s algorithms use machine learning, which is becoming increasingly sophisticated and effective. This artificial intelligence is always learning and growing, sometimes beyond Google’s own ability to fully grasp or explain succinctly how it works. In fact, Google employs human quality raters to manually verify its algorithms and ensure they’re performing as expected. (I’ll come back to this point.)


Consequently, the three shifts mentioned above have likely influenced the need for greater transparency. While that’s a good thing, it’s also an important wake-up call: SEO may remain technically the same over the years, but as search engines continue to advance to provide better results for their users, so too should we consider advancing the way we optimize for them.


In modern SEO, “ranking factors” is misleading


“We’ve kind of moved away from the over 200 ranking signals number because it feels like even having a number like that is kind of misleading. (...) A lot of these [signals] take into account so many different things, you can’t just isolate them out.” — John Mueller, search advocate at Google (SEJ, Montti, 2021)

Google’s challenge in explaining its algorithms (at least in simple terms) is also the very reason why it’s becoming difficult to pinpoint any specific ranking factor that will have an impact on rankings, particularly beyond the technical factors. This is not to say that ranking factors don’t exist, but things are becoming less black and white, and any previously identified ranking factors are getting more complex.


Many years ago, it used to be easy to rank. For the most part, and competition aside, you only had to research targeted keywords with high search volumes and low competition; stuff those keywords into your content and throughout the page; and get backlinks from other sites that point to those pages.


Back then, if your keyword had a misspelling, it didn’t matter. If the keyword didn’t make sense or didn’t fit with the content, it didn’t matter. If it made the content look awkward and robotic, it didn’t matter. After all, these pieces of content were never meant for human consumption—they were meant for ranking, not reading. There was no need to create good content or offer any value, anyway.


Once you had your keywords covered, backlinks were next. But trying to earn legitimate backlinks to those awful pieces of content was next to impossible. You had no choice but to beg for them, which rarely worked. So you had to bluff, borrow, or buy them instead. Once you were able to farm enough backlinks, it worked. Your content would rank and rank well.


(Then one day, a Panda and a Penguin walked into a bar.)


As Google grew smarter and more sophisticated, trying to rank the same way was no longer as easy as it used to be. Ranking factors became a little harder to gauge. Many SEOs have tried to crack the Google code and speculated on a number of ranking factors, while testing helped to prove but a few. Google has publicly confirmed some of them, but for many others, it denied or skated around them.


Discovering any unconfirmed ranking factors was a double-edged sword: While they made it easy to rank, they also made it easy for spammers to exploit and game the system. It may have forced Google’s hand in becoming more transparent and in getting smarter in order to fight spam. However, “getting smarter” elicited more speculation and added more possible ranking factors to the ever-growing list, which seemed to expand with every update. And therein lies the problem.


Google continues to introduce smaller algorithms—or “baby algorithms,” as Google’s Gary Illyes once called them—that pay attention to an increasingly larger number of signals. Unlike a small, yet broad, set of ranking factors, Google uses statistical analysis and millions of these micro-algorithms in its ranking process. Trying to keep up is next to impossible and irrelevant, in my estimation.


With ranking factors that work differently in different situations, that Google measures in a variety of ways, and that it weights differently according to an increasing number of variables, it’s no wonder the SEO industry has jokingly memefied the phrase “it depends” as the typical answer to questions about ranking factors. Because it really does depend.


The opportunity costs of chasing rankings


So if ranking factors are misleading, should you ignore rankings altogether? No. Rankings are still important. But ranking for specific keywords—particularly those you selected based on their high search volumes, as in my previous scenario—is a vanity metric and specious at best. Chasing them will only divert your attention away from more important metrics that have a stronger impact on your business.


It’s tripping over dollars picking up pennies.


For example, a common problem I encounter when I conduct SEO audits is a client who wants to rank for a pet keyword that failed to get any traction. (Of course, it was the SEO’s fault.) But while the client was so focused on ranking for their beloved keyword, they failed to see the 200 other related keywords their site was already ranking for—keywords that, combined, drove far more traffic.


Now, let’s say that the keyword is a good one after all. By putting all their attention on that one keyword, this client is likely to ignore all of its possible variations, some of which may have driven more traffic, and perhaps better-quality traffic. And let’s not forget alternative and related keywords, too.


There’s also the issue of a keyword’s ambiguity, fluidity, and polysemy—i.e., keywords having completely different meanings depending on the context or situation. For example, say you want to rank for the term “Philadelphia lawyer” because it has 23K monthly searches. Plus, it’s trending upwards, which makes it even more enticing. But, Google shows almost 50M results, which is extremely competitive.


Monthly search volume reported by Google Trends for the term “Philadelphia lawyer”

More importantly, other types of results dominate the SERPs. There are no actual practicing attorneys until a few SERPs later. That’s because “Philadelphia lawyer” is a term dating back to colonial America to describe a shrewd and skillful attorney. Thanks to a 1948 Woody Guthrie song, it’s also used to describe a sneaky, unethical lawyer. More recently, it’s a label given to someone who loves to debate and argue.


First page of search results for the term “Philadelphia lawyer”

So it will be very tough (if not next to impossible) to rank for that term. Add in the fact that it’s for a completely different audience and type of result, and it’s clear that this would be a waste of time that could be better spent elsewhere.


Regardless of what you want to rank for, keep in mind that search volume is not all that it’s cracked up to be. I know of many sites that have generated considerable traffic with keywords that initially reported little to no volume. Conversely (although rarely), some sites that ranked very well for some popular keywords received little to no traffic.


There are several reasons for these discrepancies, but the most common one is that many keyword research tools report search volumes that are incomplete, inflated, or inaccurate.

  • Some tools extract their volume data based on spelling-specific keywords, and they don’t include common misspellings, keyword variations, or related words that trigger the same search results, which may translate into a far greater number of searches for that one keyword.

  • On the other hand, some source their data from Google Ads, whose estimates are based on broad match and may simply be too broad. Google often lumps together keywords that may be similar but have completely different meanings or intents. (If you’ve ever used Google Ads before, you know how handy the negative keywords feature is.)

  • Some calculate search volumes based on snapshots taken at different points in time or during certain intervals, and averaged out (such as over a year), regardless of any fluctuations that may occur. Others are based on mere extrapolations, guesstimates, or forecasts.

  • Some use clickstream data (i.e., aggregate data from apps, browsers, extensions, and so on), which doesn’t account for all searches. In our increasingly privacy-conscious world, limits on tracking make that data less reliable. Moreover, many providers anonymize their data and then sanitize it by filtering out automated queries and suspected spam, making the numbers even less precise.


Ultimately, chasing keyword rankings based solely on their search volumes is a futile endeavor. While rankings are important in general, keyword rankings by themselves are less so. They’re vanity metrics based on unreliable or inflated data that provide little value other than the perception of popularity.


And contrary to popular opinion, SEO shouldn’t be a popularity contest.


Rankings are not key performance indicators


Not all performance indicators are the same, and not all of them are important or relevant, either. Those that have an impact on your business are called key performance indicators (KPIs) because they are key to your business and should tie into your business goals. Some are tied directly, others are not.

In economics, what’s called a “leading indicator” is a measurement of something that will directly impact, influence, or predict other results in the future. On the other hand, a “lagging indicator” is a measurement of something that is the result of, or influenced by, other indicators—often, the leading ones.


In marketing, leading KPIs are the direct results of a campaign occurring in the early stages, while lagging KPIs are the consequential results that mostly occur in the later stages. A good analogy is to think of your business as a car. Leading indicators look through the windshield at the road ahead, while lagging indicators look through the rearview mirror at the road you’ve already traveled.


What does this have to do with SEO?


Looking at keyword rankings is like snapping photos of the rearview mirror while you’re driving your business. Worse, they take your eyes off the road, not to mention the important gauges on your dashboard, like the one telling you when you’re running out of gas (i.e., resources).


Lagging indicators look at the end-results, such as leads, sales, and profits. With SEO specifically, they include organic traffic and the conversions it generates. But, unlike paid campaigns that produce quick, immediate results, generating organic traffic takes time. Gathering feedback and applying course corrections along the way takes longer as well.


Now, the most common leading indicators in SEO are rankings. But rankings are poor KPIs as they may not always translate into meaningful results (i.e., they may not influence lagging indicators like quality traffic and sales). Moreover, keyword rankings are never an appropriate feedback mechanism for making course corrections for the many reasons expressed already.


Instead of keyword rankings, focus on visibility.


The more visible you are in search results, particularly in those that matter (specifically, to your audience) and in relation to your competition for those results, the greater and more positive the impact will be on your traffic, your conversions, and your business.


Furthermore, tracking your search visibility instead of your rankings allows you to identify problems early and apply any improvements quickly. Like a paid ads campaign, you can see how many times your site shows up in search results. Monitoring visibility also provides competitive insights that can help you benchmark performance and identify any gaps worth pursuing.


The three pillars of search visibility


Arguably, search visibility is truly the best indicator of an SEO strategy’s success. SEO is about gaining visibility that drives traffic and generates business. Rankings are important but only one small part of the equation. Visibility includes rankings as well as other KPIs. In fact, search visibility is an aggregate indicator based on three key characteristics. I call them the “Three Ps of SEO.” They are:

  1. Presence

  2. Prominence

  3. Performance


The three Ps of SEO (presence, prominence, performance).


01. Presence

Presence refers to the number of properties indexed by the search engines. A property is any asset that search engines can discover, crawl, and index such as pages, images, videos, documents, and so on. For the sake of simplicity, I will use “pages” from now on to refer to any indexable property.


Presence is also based on the number of associated queries for which you show up in search results. The more pages appear and the more keywords trigger their appearance, the stronger your presence will be. Of course, not all pages need or deserve to be present. But those that do should be.


02. Prominence

Prominence refers to how often and how well your site stands out in search results. It’s based on the number of times your site appears and how high it ranks when it does. These appearances also include search features (such as cards, snippets, and carousels) and other sources (such as Google Discover and Google News).


The best tool to gauge search visibility is Google Search Console (GSC). In the Search results report (in the left-hand navigation panel), the “average position” is averaged across all pages and all impressions in Google Search. You can drill down to see more granular metrics, such as individual pages. Even for a single page, the position is an average, because the triggering keyword and the position can differ each time the page appears in search results.


03. Performance

Of the three, performance is the most important. It refers to the number of times your pages appear and generate clicks. GSC generally counts a search impression when a result appears in a loaded page of results, whether or not the user actually scrolls down the SERP and sees it. If the result is on a subsequent SERP that’s never loaded, or further down in continuously scrolling mobile results, Google doesn’t count it as an impression.


This is important, as the more searches trigger your pages to appear in results, the stronger the indication that your pages are ranking well. The more often your pages appear in the SERPs, the higher the chances they are going to get clicks. And, the higher the number of search impressions that translate into clicks (called “clickthrough rate” or CTR), the more you know how productive your SEO efforts are.


Changes in impressions don’t always correlate with changes in clicks. When they don’t, there can be many reasons, such as the intent behind the query (I’ll return to this). That’s why CTR is a helpful metric to watch.
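If you want to monitor these performance metrics programmatically rather than only in the GSC interface, the same clicks, impressions, CTR, and average position data is available through the Search Console API. Below is a minimal sketch, assuming you’ve already verified the property and set up OAuth credentials; the property URL and date range are placeholders.

```python
# Minimal sketch: pull clicks, impressions, CTR, and average position per query
# from the Google Search Console API (google-api-python-client).
# Assumes an authorized `credentials` object (e.g., via google-auth-oauthlib);
# the property URL and dates below are placeholders.
from googleapiclient.discovery import build

def fetch_search_performance(credentials, site_url="https://www.example.com/"):
    service = build("searchconsole", "v1", credentials=credentials)
    request = {
        "startDate": "2024-01-01",  # placeholder date range
        "endDate": "2024-03-31",
        "dimensions": ["query"],    # could also be ["page"] or ["query", "page"]
        "rowLimit": 1000,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=request).execute()
    for row in response.get("rows", []):
        query = row["keys"][0]
        # CTR is simply clicks divided by impressions; GSC reports it directly.
        print(f"{query}: {row['clicks']} clicks, {row['impressions']} impressions, "
              f"CTR {row['ctr']:.1%}, avg. position {row['position']:.1f}")
```

Tracking impressions over time gives you the visibility trend discussed above, while CTR flags queries where you show up but fail to earn the click.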


Google Search Console results from a recent SEO campaign by the author.

It’s important to monitor changes over time (especially sudden changes), which may indicate that something is going on—like a Google update, for example. But changes also provide important feedback on your SEO strategy, including the competitive landscape, that can help you make course corrections.


Are keywords dead? Or is it just semantics?


Higher search visibility reveals a page’s ability to answer questions. For instance, if your site is more visible in search, it means that it’s not only ranking well but also that Google determines your pages to be in alignment with what users are looking for—with or without keywords.


Keyword research is still important because it’s a starting point. The goal is not to find out what keywords to rank for but to learn what your users are looking for instead. The distinction is subtle but critical.


In the early days, keywords were restrictive, which made searching the web a tedious task. A keyword search could turn up countless results that were completely disjointed and disparate. While they may have contained keywords that matched the query exactly, they were vastly different from the intent of the search, forcing you to wade through endless SERPs to find what you were looking for.


If you’ve been around the Internet for as long as I have, you might remember when we had to “search the search engines” (which seemed paradoxical) to find any decent matches. Even when a match was found, the chosen page might still offer nonsense like keyword-crammed content.


Lucky for us, Google has evolved. For many years, it was primarily a lexical search engine that served results based on literal keyword matches. (It’s for this reason that stuffing keywords worked so well.) With the help of machine learning, however, Google is getting better at understanding the different ways we use words by taking into account the irregularities and nuances of the human language.


This is called semantic search.


For this reason, keyword research is no longer about finding what keywords to optimize for, which is a core aspect of traditional SEO. Now, it’s about learning what topics to build content around. It’s about addressing the searcher’s problem and helping them along their journey in solving that problem—and not just merely matching content to keywords.


“Content that ranks well in semantic search is well-written in a natural voice, focuses on the user's intent, and considers related topics that the user may look for in the future.” — “Semantic Search” (Wikipedia, 2020)

In machine learning, topics are called entities (e.g., people, places, events, or ideas) associated together in a collection called a “knowledge graph.” In simple terms, the knowledge graph is a group of related entities and the inferred relationships between them that help search engines better understand context and nuance.


For example, someone took Harry Potter and The Philosopher’s Stone and fed it into a basic machine-learning algorithm. It came up with this simple knowledge graph:



In the same way, Google’s algorithms (which are vastly more complex) create knowledge graphs by grouping entities and inferring relationships between them. Like the example above, it does so by looking at the various ways these entities are mentioned in the content it finds. It then creates logical associations to build clusters of topics and subtopics. It’s like Google’s version of “Six Degrees of Kevin Bacon.”


What’s important to note is that, with the help of knowledge graphs and a form of machine learning called “natural language processing” (NLP), Google can understand queries like humans do, infer associations from them, and make connections without the need to find exact keywords.
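To make the idea of entities and inferred relationships a little more tangible, here’s a small illustrative sketch of my own (not Google’s method) that uses the open-source spaCy library to extract named entities from a passage and link the ones that co-occur in the same sentence, a crude stand-in for the kind of entity graph described above.

```python
# Illustrative only: a crude entity "graph" built from sentence-level
# co-occurrence, using spaCy for named-entity recognition.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
from collections import defaultdict
from itertools import combinations
import spacy

nlp = spacy.load("en_core_web_sm")

# Made-up sample text for the sake of the example.
text = (
    "Harry Potter lives with the Dursleys until Hagrid brings him to Hogwarts, "
    "where Dumbledore watches over the Philosopher's Stone."
)

doc = nlp(text)
edges = defaultdict(int)

for sent in doc.sents:
    # Entities appearing in the same sentence get a (weighted) edge between them.
    ents = {ent.text for ent in sent.ents}
    for a, b in combinations(sorted(ents), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.items():
    print(f"{a} -- {b} (co-occurrences: {weight})")
```

Google’s entity understanding is vastly more sophisticated, of course, but the principle of associating entities and inferring relationships between them is the same.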


And that’s just the tip of the iceberg.


SEO boils down to two essential ingredients


In a way, Google wants to think like a human in order to understand the search user and their intent, allowing it to cut through countless possibilities and irrelevant results and surface the single most relevant listing or piece of content. One way to look at it is Google trying to remove itself as an intermediary from the search equation.


What does this mean? Where before SEO meant to optimize for the search engine, today it means to optimize for the user, which is what Google really wants, anyway. In essence, the practice of search engine optimization is evolving to be less about engines and more about users. Some call it “search user optimization.” I like to call it “user-first SEO.”


Throughout my 30-year career in digital marketing, I’ve seen SEO start out as being about just a handful of things. It eventually grew to include a lot of things, growing more and more complex while the speculated number of ranking factors ballooned. But, with the help of machine learning algorithms, the pendulum is swinging back to somewhat simpler times.


Nowadays, SEO has returned to being about just a few things. In fact, modern SEO boils down to two simple but fundamental ingredients. (Of course, each one of these two may have many facets and variations.) They are:

  1. The quality of the content, and

  2. The quality of the experience.

In essence, simply offer great content users want and a great experience consuming that content. The higher the quality of the content and the user experience, the more likely your page is to earn greater visibility—and more importantly, your traffic and conversions will likely be of higher quality, too.


You might be asking, “How do you measure quality, then?” As with all things in SEO, it depends. Quality is subjective and relative. What it means is different with every user and in every situation. With some things, it’s pretty clear-cut; with others, not so much. Google’s many algorithms are complex for this very reason. It’s also the reason why relying on specific, black-and-white ranking factors is futile.


Instead of focusing on ranking factors (or what some call “ranking signals”), focus on improving your quality signals and the quality of those signals. Google certainly does—in fact, it provides two sets of detailed guidelines that offer plenty of recommendations to follow.


  1. The SEO guidelines are for site owners and SEOs. They’re fairly straightforward and address what your site needs to have and what it must avoid. While they focus on requirements for inclusion in Google’s database, they provide a lot of advice on how to improve your visibility, too.

  2. The quality raters guidelines (QRG) are intended for individuals whose job is to spot-check search results and confirm that Google’s algorithms are performing as intended. Unlike the SEO guidelines, however, raters use scales to measure quality, since it’s subject to interpretation.


What makes the QRG interesting is that it offers plenty of examples, illustrations, and screenshots that give us a glimpse of what quality signals Google prefers. In helping raters understand what to look for, the manual is also helping site owners and SEOs do the same.


Intent alignment is a key quality signal


There are two major categories in the QRG. “Needs met” refers to how well the content matches what the user is looking for, while “page quality” is how well it meets their expectations. The goal should always be to fully satisfy the user’s search. Your ability to do so depends on how efficiently, how effectively, and how thoroughly your content answers their query.


According to the QRG, when users must see additional results to get the answers they want (by bouncing or “pogosticking” back to Google), then the result is not relevant and should be rated as “Fails to Meet” (the lowest rating on the QRG’s scale). But, when the result removes any need to investigate other results, it typically gets the highest rating (“Fully Meets”).


Your aim should be to answer your users’ queries by giving them relevant answers and providing value. It’s more than just fulfilling the request. It’s also understanding the user’s journey, anticipating subsequent needs, answering other questions they may have on that journey, and delivering those answers in a way that achieves the user’s intended purpose and preferred method of consumption.


Specifically, what’s often referred to as “search intent” is the objective the query aims to reach. There are four kinds of search intent. To put it in simple terms, it’s when the user wants to know something, research something, do something, or go somewhere. Here’s a quick look at each one:


  1. Informational intent is when the user is looking for information. Some easy questions may only need short answers, while other questions may be complex and call for a more detailed explanation. For example, queries may range from “weather today” to “how do I build a birdhouse.”

  2. Investigational intent is when the user is conducting research prior to making a decision. They’re shopping around, or narrowing down their choices and validating them. Some examples include “Apple vs Android” or “best tacos Vancouver.”

  3. Transactional intent is when the user wants to perform a transaction. While it’s often to make a purchase, it can be any transaction, which may be specific or implicit. For example, “order pizza,” “free trial Wix,” “signup PayPal,” or “dress shoes sale.”

  4. Navigational intent is when the user is looking for (or trying to get to) a website. Either they’re unsure of the domain name or are simply taking a shortcut. They may also be searching for a quick way to access specific internal pages. “TD Bank” and “Twitter login” are a few examples.
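To make these four categories concrete, here’s a toy sketch (entirely my own simplification, not how Google actually classifies intent) that buckets queries by common modifier words:

```python
# A toy heuristic for bucketing queries into the four intent types above.
# This is a simplification for illustration; real intent classification
# (Google's included) relies on machine learning, not keyword lists.

INTENT_HINTS = {
    "navigational": ["login", "sign in", "website", ".com", "homepage"],
    "transactional": ["buy", "order", "price", "coupon", "free trial", "signup", "sale"],
    "investigational": ["best", "vs", "review", "compare", "top", "alternatives"],
    "informational": ["how", "what", "why", "when", "guide", "tutorial"],
}

def guess_intent(query: str) -> str:
    q = query.lower()
    for intent, hints in INTENT_HINTS.items():
        if any(hint in q for hint in hints):
            return intent
    return "informational"  # default bucket when nothing matches

for q in ["how do I build a birdhouse", "Apple vs Android",
          "free trial Wix", "Twitter login"]:
    print(f"{q!r} -> {guess_intent(q)}")
```

A crude heuristic like this breaks quickly in the real world, since many queries carry mixed or ambiguous intent, which is exactly why examining the actual SERPs, as described below, is the more reliable signal.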


Some practitioners and publications use “search intent” and “user intent” interchangeably, but for my purposes, user intent goes a step further. Whereas “search intent” is the objective the query aims to reach, “user intent” is the objective the user wants to achieve. But, unless you’re a psychic, true user intent is unknown.


However, with the help of machine learning algorithms and countless split-tests conducted behind the scenes across millions of searches, Google constantly refines its results and makes educated guesses and predictions as to what people are looking for and how they want it. It stands to reason, therefore, that analyzing the SERPs can offer clues worth paying attention to.


For example, enter a topic your audience is interested in and examine the SERPs. Look for patterns and any distinct elements that stand out. See what’s being served, both as and beyond the blue links. Note the search features (like cards and carousels) or media types (like images and videos) that appear the most. Consider the types of content that show up, such as how-to tutorials, buying guides, listicles, etc.


Also, visit some of the top results to see what they do and how they do it. Pay attention to the quality of their content. It’s more than just looking at the word count, which doesn’t matter when you apply a user-first SEO approach, anyway. Instead, see if the content answers the questions the user may have related to their query and if it provides them with sufficient value.


If the intent doesn’t fit, go beyond the search results. Google offers helpful suggestions to guide the user along their journey. Called “query refinements,” they’re not only opportunities to refine and validate the intent around your chosen topics but also a rich source of content ideas.


An example of Google’s People also search for feature.

For example, they include the dropdown autocomplete search predictions in the search bar and the related searches at the bottom of the SERP. But the most insightful ones are the “People also ask” questions in the middle and the “People also search for” suggestions that appear when returning to Google.
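If you want to collect autocomplete predictions in bulk for a seed topic, many keyword tools rely on Google’s unofficial suggest endpoint. The sketch below assumes that endpoint remains available; it’s undocumented and unsupported, so treat it as a convenience that may change or break at any time.

```python
# Rough sketch: fetch Google's autocomplete predictions for a seed topic.
# Uses the widely known but unofficial/undocumented "suggest" endpoint,
# so it may change or stop working; prefer official tools where possible.
import json
import urllib.parse
import urllib.request

def autocomplete_suggestions(seed: str) -> list[str]:
    url = (
        "https://suggestqueries.google.com/complete/search?client=firefox&q="
        + urllib.parse.quote(seed)
    )
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    # The response is a JSON array; the second element holds the suggestions.
    return data[1]

print(autocomplete_suggestions("philadelphia lawyer"))
```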


If you’re looking for more content ideas without having to dive into every individual SERP, I recommend AlsoAsked.com and SearchResponse.io. They both analyze Google’s SERPs around a given topic or page, mine further, and combine the results, which should give you plenty to work with.


Ultimately, keyword research can help you identify topics around which to create content for your audience, while intent research can help verify the potential viability and visibility of your chosen topic. Creating quality content that aligns with a clear intent will generate stronger quality signals and improve your visibility. In fact, it’s important, if not essential, to ensure intent alignment before doing anything else.


Otherwise, it’s like paddling a canoe in the dark while on dry land.


Quality signals, meet signal quality


Finally and most importantly, the quality of your quality signals is just as important as the signals themselves. I know this sounds odd, but if Google can’t find or validate your quality signals, they will be for naught. You want to ensure they’re clear and strong. That’s where “page quality” from the QRG comes in.


The QRG provides a lot of detail around things like usefulness, appropriateness, and transparency. But the thing that stands out as the most vital is a concept that the QRG mentions over 120 times—it’s called “E-A-T,” which stands for “Expertise,” “Authoritativeness,” and “Trustworthiness.”


In other words, E-A-T refers to being credible, believable, and reliable. These quality signals are, in my estimation, critical in modern SEO for a variety of reasons. In an age where misinformation is rampant, spammers (and scammers) are busier than ever, and competition is at an all-time high, giving your users what they want will fall short if they don’t have confidence in the content you’re offering them (or in you, for that matter).


There are plenty of tutorials on E-A-T out there, including some of my own. Some revolve around the quality of the content and improving a variety of signals that support it. Others include adding helpful snippets of code called “Schema markup” that help Google contextualize those signals.
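As a quick illustration of that last point, here’s a minimal sketch that builds a basic schema.org Article object (output as JSON-LD) connecting a page to its author and publisher. All of the names and URLs are placeholders to adapt, and you can validate the result with Google’s Rich Results Test.

```python
# Minimal sketch: generate schema.org Article markup (JSON-LD) that ties
# a page to its author and publisher. All names/URLs below are placeholders;
# adapt the fields to your own content.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "User-first SEO: A modern approach to growing your online visibility",
    "author": {
        "@type": "Person",
        "name": "Michel Fortin",
        "url": "https://www.example.com/about",  # placeholder author bio URL
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Publisher",             # placeholder
        "logo": {"@type": "ImageObject", "url": "https://www.example.com/logo.png"},
    },
    "datePublished": "2024-01-01",               # placeholder
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```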


Distilling key signals from Google’s 170+-page set of guidelines into a simple checklist, while possible, would be incomplete given its subjective nature. But short of reading the entire QRG, here’s a simple list:

  • Is the content helpful and does it have a beneficial purpose?

  • Is the author or site owner clear, transparent, and credentialed?

  • Is the site safe, secure, intuitive, and aesthetically pleasing?

  • Is the content current, unique, accurate, and grammatically correct?

  • Is the author or owner in good standing, with a good reputation?

  • Is the site user-friendly, mobile-friendly, and obstruction-free?

  • Is the author or owner mentioned in authoritative sources?

  • Is the content cited elsewhere, authenticated, or peer-reviewed?

  • Is the page offering an adequate amount of information?


There are plenty of other signals, of course. But, these are the kinds of questions your audience wants answers to when visiting your site, whether they do so consciously or unconsciously. Your job is to provide those answers and to make them abundantly clear.


How many answers you will need to provide and the depth of those answers will vary depending on many factors, such as the sensitive nature of the content or its degree of influence on the user. For example, a site offering medical advice will likely require more depth than one about cat lovers.


To make the distinction, think of Abraham Maslow’s hierarchy of human needs. Your users may have different needs. But there are foundational human needs, like survival, safety, and security, that must not only be met but also protected (from risks, threats, or perils) and not compromised.


For instance, a site with a misleading, deceptive, or harmful page will have poor quality signals. If it has the potential of being any of these inappropriate things (or if it can be misconstrued as such), even if the site is legitimate and genuinely trying to be helpful, then the objective will be to strengthen the quality signals as much as possible to increase confidence in it and remove any doubt.


All sites require quality signals. I submit that all sites need E-A-T signals to some degree, too. How much E-A-T, however, will depend on how much harm it can potentially cause.


I’m certainly not a lawyer. But in civil litigation, I know that you must have a preponderance of evidence to support your argument. With SEO, the same holds true. The “preponderance of evidence” is directly tied to the weight of the argument, the risks involved, or the gravity of the potential consequences. The heavier the weight is, the stronger the evidence (or in this case, the quality signals) must be.


Success leaves clues


A keyword-first approach to SEO is the traditional way of getting rankings. That way still works, but it’s losing steam and not as effective as it used to be. A more effective way that will have a stronger impact on your visibility, and therefore your traffic, is to apply a user-first SEO approach.


Search engines evolve and become more intelligent all the time so that they can combat spam, improve the quality of their users’ experience, and increase confidence in their results. Fundamentally, SEO hasn’t changed. But, to keep applying it in ways that Google has learned to overcome and improve on is wasted effort that could be better spent elsewhere, such as on providing users with more value.


Think of it this way: you and Google share the same audience. Consequently, you ought to share the same goal, too, which is to provide users with the best possible content and user experience. If you don’t, the competition that does a better job of achieving that goal will eventually steal your share.


 

Michel Fortin

Michel Fortin is a marketing advisor, author, speaker, and the VP of Digital Marketing at Musora Media, the company behind Drumeo. For nearly 30 years, he has worked with clients from around the globe to improve their visibility, build their authority, and grow their business.


