Search Fatigue has been studied for many years, at least since the 1970s. Economists define search fatigue as the point at which a consumer stops searching for a product or service at a desired value point. In the Searchable Web Ecosystem (which consists of Publishers, Indexers, and Searchers) only the Indexers are in a position to measure search fatigue for the whole population of Searchers. Each Searcher knows when he or she becomes fatigued, but Publishers know nothing about when a Searcher becomes fatigued or why a Searcher chooses a given listing in a search result.
This blind spot in Publisher analytics has been there from the very start. Although we once received much more complete referral data from major search engines like Bing and Google, they withheld all data pertaining to search fatigue.
For all intents and purposes you the Publisher are dealing solely with traffic derived from Queries of Last Resort. A Query of Last Resort is the final query in a Searcher’s quest for the best possible Website, at least from your perspective. You have no way of knowing if the Searcher continues exploring the Web for better options after leaving your site (even if they make a purchase).
The Searcher’s Journey is measured in several ways. For example, economists recognize Mu-type Searchers (“a shopper or an expert with zero search costs”, per Bruce Ian Carlin and Florian Ederer in their classic “Search Fatigue” paper). Although you may have a zero search cost in some topics, you have a non-zero search cost in most topics. That is simply because you cannot be an expert in very many topics.
A search engine may measure a Searcher’s Journey in terms of sessions. The session data may or may not be linked together across multiple sessions, but if the searcher logs out of the personalized search service, changes platform or IP address, or mixes query paths together, the data the search engine collects may not be meaningful.
A consumer may measure his or her own Search Journey in terms of “from the day I first heard about that to now [several years later]”. Here is an example of a multi-year search I performed for the name of a band and a song from 1970. I used to hear that song on the radio all the time when it was still popular. And then it fell out of favor (as songs do) and I went on with my life. But years later I started to think about the song occasionally, remembering some of the striking notes in the version I liked. I knew about the Crosby, Stills, Nash, & Young version of the song and occasionally came across others (including Joni Mitchell’s original version, as she wrote the song). Although not knowing which artist’s cover of the song I had loved in my youth bothered me on occasion, it was hardly a driving priority in my life. But then in 2006 I heard the song again on a cable TV music channel, and I had the presence of mind to make a note of who the artist was.
I’m not sure when we can say I began that search, but probably it started somewhere in the late 1980s or early 1990s. That may seem extreme to you, but things can stay with me. For example, there was a song I used to hear on a radio station right before the news break on the half hour and top of the hour. It featured an electric guitar playing a memorable riff and there was an organ that dueled with it. The radio station never named the song (ever) and to the best of my memory they never played the whole song. But in those days I used to record songs off the radio on cassette tapes and I recorded that song. Everyone who heard it asked who was playing and what the name of the song was. And eventually someone stole the tape from me so I can only hear the song in my memories.
That search has been going on for more than 40 years. I don’t have much hope of finding the song again. And there was another song, by an Irish show band called The Times, that I always wanted to find. It was called “When I Look Around Me”. For years I have been hanging around YouTube and various online music archives looking for old songs I wanted to hear again. Last month (August 2015) someone finally uploaded a scratchy recording of the song to YouTube. Okay, it doesn’t sound quite as great as I remembered, but now I have heard it once again.
Most of our fatiguing searches last hours, days, weeks, and sometimes months. It’s rare that you search for something for years, although I have noticed people will occasionally ask about old television shows, movies, books, and songs in forums (usually receiving no definitive answers). Once in a while you find what you were looking for.
On the Web today we have the ability to bookmark many things, but I rarely bookmark sites anymore. Instead I allow my browser history to record what I visit. Unfortunately the Web browser developers design really awful search interfaces, and eventually I give up searching my history for sites I have recently visited. In-browser search is usually more frustrating to me than any other form of search, if only because the browser produces so many false positives.
But let’s talk about your Web marketing metrics. What led me to wander down memory lane was a discussion in the comments on Search Engine Roundtable, where someone confused “search engine optimization” with “conversion rate optimization”. He took the position that you can attribute revenue to your SEO efforts.
I pointed out that this is simply not true. You cannot accurately attribute revenue to an SEO channel.
There are several reasons why this is so, but Queries of Last Resort are the major stumbling block for attribution models. You have no way of determining (from the data available to you) how many queries brought a particular search visitor to your Website.
Why is this important in attribution modeling? Because of search friction. Search friction is the failure to find what you are looking for: the results of your search (whether you wander down the street looking at all the stores and restaurants or just type queries into a search engine) are unsatisfying. Search fatigue is the result of accumulated search friction.
Search engineers make a big noise about how they interpret searcher intent. That’s really all it is: noise. They are struggling to define proper signals from a huge amount of data, and while I grant they have made some huge strides forward in this area, the poor quality of my search results assures me that the search engines have a long, long way to go in correctly interpreting my intent.
It’s not their fault. It’s just one of the great mountains standing before the search world, awaiting the as-yet unnamed hero who will conquer it.
Search friction increases as you run more queries in your Search Journey. A Search Journey is the sum total of all your efforts to find something specific. You may occasionally settle for something less but you will return to the Search Journey sooner or later until you either give up completely or you find what you are looking for.
You as the marketer have neither the tools nor the data to track and measure the Search Journey, and without being able to map a Search Journey your attribution model is incomplete. Take the query “seo theory blog”. Anyone who is reasonably familiar with this blog will type that query into the search engine and will be presented with this Website as the top listing (well, that is what happens for me on Bing, DuckDuckGo, Google, and Yandex). If I block a smaller search engine (or a large one like Baidu), you won’t find this blog there.
A knowledgeable searcher who is looking for this blog can usually find it without incurring any search costs. These queries are navigational queries and as long as you type in the correct keywords (on a search engine that has indexed this site) your queries are satisfied. But if you type a different query in first, maybe “michael seo expert” because you cannot remember my name or the name of the blog, you won’t find me. I don’t target that expression and this site is not optimized to expand in that topic direction.
Search engine optimization improves the visibility of a Website in the search results. If I knew that people tried to find this Website with that query, then to optimize this site I should make an effort to make it relevant to that query. But since you don’t find me for that query, you either stop looking or you change the query. Eventually you find me, and whatever that last query was that brought you here was a Query of Last Resort.
You can also call it a “Terminal Query” but I assure you that if you try to search for this article using that expression you will curse every would-be UNIX user on the planet. “Terminal Query” is a good expression for illustrating one reason why search friction occurs: the language is too ambiguous for search engines to know how to be precise. Search on the query “dog day afternoon” and you will find a perfect example of why search engines create friction in their results. They simply don’t know what you are looking for.
This is, of course, why search engines study search session data. By analyzing the queries you type in over a short period of time (maybe 30 minutes) they can glean clues from your search behavior that may predict what you will ultimately click on. But if you click on several or many search results, it doesn’t matter how good their predictions of what you’ll click next may be; they cannot see that they have failed to satisfy your query.
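To make the session idea concrete, here is a minimal sketch of how a time-gap sessionizer works. The query log, the timestamps, and the 30-minute gap rule are all assumptions for illustration; the search engines’ actual session logic is proprietary and certainly far more elaborate.

```python
from datetime import datetime, timedelta

# Assumed gap threshold: a pause longer than this starts a new session.
SESSION_GAP = timedelta(minutes=30)

def sessionize(events):
    """Split a time-ordered list of (timestamp, query) pairs into
    sessions wherever the gap between consecutive queries exceeds
    SESSION_GAP."""
    sessions = []
    current = []
    last_time = None
    for ts, query in events:
        if last_time is not None and ts - last_time > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(query)
        last_time = ts
    if current:
        sessions.append(current)
    return sessions

# Hypothetical query log for one searcher.
log = [
    (datetime(2015, 9, 1, 9, 0), "michael seo expert"),
    (datetime(2015, 9, 1, 9, 5), "seo theory"),
    (datetime(2015, 9, 1, 9, 6), "seo theory blog"),
    (datetime(2015, 9, 1, 14, 0), "dog day afternoon"),
]
print(sessionize(log))
# The three morning queries form one session; the afternoon query starts another.
```

Notice what the sketch makes obvious: the morning session and the afternoon session look unrelated, even if both belong to one Search Journey. That is the blind spot.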
A query is satisfied when the searcher stops searching. But when does the searcher stop searching? And does the searcher confine their search to a single search engine?
Without data for the Search Journey you have no idea of how many other keywords you could be optimizing for. Without that list of keywords you have no way of estimating how many potential visitors never reach your site simply because they gave up before they found you.
They may not be looking for you specifically but you may be the best possible resource for their needs. In that case you want to be found for keywords of which you are totally unaware. So when you compute your SEO channel attribution for revenue, how much lost revenue do you include in the model? Pick a number at random because any number will be just as good as another.
Without access to the user’s search history your metrics are dead in the water before they even try to rev up their engines.
As a publisher you can only crudely optimize for Web search. You don’t have enough data to do more than that. But you can compensate for the lack of search optimization by improving your conversion optimization. This is what most of us do anyway. After all, you got the visitor to your site, you might as well try to get them to buy something (if you’re selling).
But what if you are not always interested in selling something? What if you are trying to establish yourself as “an authority in the field”? Authorities don’t always make sales. So should you not be breaking your visitor data into “wants to buy something” and “wants to read something” groups? With such a division you might be able to improve your channel attribution but what about crossover visitors who come to read and end up buying or who come to buy and end up reading?
And then you have the consumer research phase, where they may visit multiple Websites for discovery before making some sort of purchase decision. They may find your site during the discovery phase but be unable to find you later on when they are ready to buy. Is that lost revenue? How do you account for it in your model?
The Query of Last Resort undermines all your KPIs. If you don’t know how many non-navigational first-query visits your site receives then how can you possibly know if your site is performing optimally? You cannot. Your KPIs never rise out of the “Sheer Dumb Luck” category.
Most people choose to target certain keywords. That means they ignore the vast majority of keywords that can, may, and often do bring relevant, interested traffic to their sites. Furthermore, every time someone tells me they need to rank 1st for “[Keyword1 Keyword2]” I know they have not done an analysis of their long-form keyword revenues (which, sadly, we cannot perform with any hope of accuracy now).
Long-form keywords are those queries that are longer than the ones you think are most important. When analyzing several different Websites’ data (in the past) I found that 60-70% of their revenues were coming from long-form queries, not from the “vanity” or “premium” 1-, 2-, and 3-word queries they wanted to rank for. If you are not making most of your money from queries you don’t care about, you should be. Brands be damned; most people don’t know what they are looking for, so being a brand isn’t much help at all.
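The long-form check above is simple arithmetic once you have query-level revenue data. A minimal sketch, with an entirely made-up list of (query, revenue) pairs; the 3-word cutoff mirrors the 1-, 2-, and 3-word “premium” queries mentioned above and is itself an assumption you should tune to your own data.

```python
# Hypothetical query-level revenue rows.
sales = [
    ("seo", 120.0),
    ("seo consultant", 340.0),
    ("affordable seo consultant for small ecommerce site", 95.0),
    ("how do i attribute revenue to seo traffic", 180.0),
    ("why does my site rank for queries i never targeted", 210.0),
]

def long_form_share(rows, cutoff=3):
    """Return the fraction of revenue earned by queries longer than
    `cutoff` words."""
    total = sum(rev for _, rev in rows)
    long_form = sum(rev for q, rev in rows if len(q.split()) > cutoff)
    return long_form / total

print(f"{long_form_share(sales):.0%} of revenue from long-form queries")
```

If a computation like this shows most of your revenue arriving on queries you never targeted, the “rank 1st for [Keyword1 Keyword2]” obsession loses its justification.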
What the searcher wants to find is the right value proposition. The value proposition they want almost certainly is not your carefully crafted spiel. I went into a bank to get a yes-or-no answer one day about an automated transaction. I asked the banker point-blank, “Can you reverse this?” and she spent the next 5 minutes explaining to me what I already knew (that the transaction was automated). Your expectations and assumptions about what the consumer is searching for get in the way of satisfying that consumer’s need. This happens all the time, mostly with fringe questions and queries, but there is an endless array of fringe queries.
Another mistake people make is in assuming that their optimization is directly affecting the search results. I used to mull over the feasibility of writing a book called I Control the Web: How Search Engine Optimization Really Works but I am not yet ready to share that much theory. Maybe when I am ready to leave SEO behind me I’ll publish the story and see who still cares enough to read it.
You cannot directly affect the search results no matter how hard you try. The system works against you. What might happen is that the search engine picks you up and thrusts you into the limelight, all for reasons you will never understand. It’s not because you put the right keywords in the title tags, wrote long enough copy, earned the right links, and have lots of link value flowing through your navigation. The search engines cannot (yet) predict when your site will become important enough to be that popular.
When you can predict the spikes and valleys in your search referral traffic two years ahead you will be on par with me, and I can only do that some of the time. Neither of us is yet ready to leave the monastery, Grasshopper (search for the source of that metaphor – maybe you’ll enjoy the journey).
The Query of Last Resort is your enemy. It defaces everything you do in metrics analysis and Key Performance measurement. The Query of Last Resort laughs at your futile efforts to create a “consumer model”. A few years back my friend Kent Yunk published an article here on SEO Theory about consumer intent modeling. He mentioned one of the pitfalls of informal marketing early in the article: “you are not your target audience”. What you think you would search for is not necessarily what other people think they should search for.
We are left destitute in the world of consumer modeling data because the search engines and advertising networks keep all the good data for themselves. All we can do is optimize for the last queries used before people arrived at our sites, and we can only get those queries from the crude reports in Webmaster dashboards and advertising accounts. And yet lacking the search history data we have no idea of what people really want to find. We just know what we want them to find.
“You are not your target audience.”
Knowing what you want people to find puts you at a disadvantage. There are indeed many tools out there attempting to help you understand who is coming to your site, who is visiting your competitor sites, and what they are acquiring (in terms of products, services, and information) on their Search Journeys but this data is segmented in the worst way possible. You are only seeing termination events without knowing how long the sequences are. This is like trying to analyze a genome by studying the length of telomeres on random strands of DNA. Most of the data is being kept from you.
Squeezing more money out of a Website is not search engine optimization.
Furthermore, attributing revenue to an SEO channel, while necessary for budgeting purposes, is misleading. These attribution models are at best bad guesses and at worst complete and total fantasies. Until you develop a method for separating the “Sheer Dumb Luck” results from the “They Came Because We Called” results you’re really attributing all your SEO revenue to “Sheer Dumb Luck”, this despite the rises in traffic you have engineered through publishing more content, fixing technical errors, creating great structure, adding information scent to the content and navigation, and earning brilliant links from the most authoritative sites on the Web.
Your sieve does not have to be perfect. It won’t be perfect. You don’t need to defend it, but you do need to justify it. People will insist that diving into the data and “doing the science” will give you the justification, but that misses the mark completely. Without the right kind of data your sieve must use intuitive modeling. Intuitive modeling is a form of game theory. Game theory is not necessarily the purview of mathematical geniuses and super-computers. You just have to lay out enough options to account for what you don’t know, cannot know, but need to know.
You cannot prove an intuitive model to be correct but if you can show that it is consistent and that it consistently changes direction when your data changes direction then you have an indicator to work with. You can push or pull and see some sort of result. It is a crude result but if your revenue goes up that’s all well and good, is it not?
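One crude way to quantify “consistently changes direction when your data changes direction” is to count how often two series move the same way period over period. A minimal sketch, with invented monthly numbers; the driver metric, the figures, and the idea of scoring agreement this way are all illustrative assumptions, not a prescribed method.

```python
def direction(series):
    """+1, -1, or 0 for each period-over-period change."""
    return [(b > a) - (b < a) for a, b in zip(series, series[1:])]

def directional_agreement(driver, outcome):
    """Fraction of periods in which the driver metric and the outcome
    metric changed in the same direction."""
    d, o = direction(driver), direction(outcome)
    matches = sum(1 for x, y in zip(d, o) if x == y)
    return matches / len(d)

# Hypothetical monthly data: something you push (pages published)
# and something you watch (revenue).
pages_published = [10, 12, 15, 14, 18, 21]
revenue = [900, 950, 1100, 1050, 1200, 1150]

score = directional_agreement(pages_published, revenue)
print(f"directional agreement: {score:.0%}")
```

A high score proves nothing about causation; it only gives you the push-and-see-a-result indicator described above, which is all an intuitive model promises.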
We cannot be the perfect little experts we want to be but as long as we can stick a thermometer up the data’s ass and see that something is hot we’re better off than if we just stand around trying to measure an elephant with three blind men. The elephant is what you think your optimization goals should be. Your assumptions are wrong, maybe even crazy wrong, but if you’re humble enough to admit that then there is hope for you yet.
I have been analyzing huge volumes of search referral and Website traversal data for almost twenty years. No one has yet brought me a tool that can tell me where the visitor has been and where they are going. All I know for sure is that every now and then someone pauses on their Search Journey to listen to a different song, and every now and then someone finds the song they are looking for.
I still can’t tell the two apart and neither can you. I hope you don’t get fired for admitting as much in public.