Hot or Not?
A Biased Review of Recent Keyword Tool Launches
Streaming Podcast with Theme Zoom Architect Sue Bell
A while back I made a bold move by announcing some of the new keyword research tools on the market, and even mentioning them by name. Team members inside my own company emailed me and told me it was a “bad” marketing strategy. I quietly thanked them for their opinion. There is a method to my madness . . . or so the voices inside my head tell me. ;-)
The reason for my lack of concern about mentioning other keyword tools is that I simply do not view them as competition. The keyword research tools in today’s marketplace are not in the same league as the TZ technology. None of these tools have any natural language processing technology in them whatsoever... and that is just the beginning.
In order to build a “military-industrial-grade” or “professional-level” keyword research tool it is necessary to include authentic semantic keyword technology.
This is a challenging and time-consuming task.
Doing this right is expensive and difficult to maintain in multiple languages. (Yes, TLKT™ is the only semantic keyword tool in the marketplace at the moment with a capacity for multiple languages and multiple foreign currencies.)
Back-Engineering the Competition for Fun and Profit:
As promised, The Theme Zoom engineering team has spent the weekend “playing around with” several of these newly launched keyword tools. Heck, we even waded through some of the “never-ending-scrolling-sales-letters” which is something that we never do.
After we tore off the wrapping and discarded the sales hype from these recent keyword tool launches, the Theme Zoom programmers proceeded to “back-engineer” these tools. (This is always fun for the marketing department to watch.)
Theme Zoom Architect (and successful web-entrepreneur) Sue Bell gave me the following distillation of the “generic keyword feature sets” that you will find in many standard keyword research tools. You will be required to use a compilation of many different tools in order to get them all; no one tool has had all of them . . . until now. (We have decided to put all of the most useful ones in one application . . . The Last Keyword Tool™ .)
Overwhelmed by Useless Keyword Features!
When the Theme Zoom experts back-engineered over a dozen keyword research and competitive analysis tools on the market, we were overwhelmed by useless features.
In addition to the basic functions and raw data sets outlined below, Theme Zoom architect Sue Bell commented,
“There are generally a lot of smoke and mirrors in many of the keyword research tools on the market.”
We came across hundreds of overwhelming and useless features provided by some of these new tools.
By overwhelming and useless we mean that these tools provided functionality, data, or analysis of data which cannot possibly be “accurate”. While all keyword tools have “less-than-perfect” data, there are some tools where data can be misleading, not just inaccurate.
Examples of Useless, Misleading and Overwhelming Adwords Features
We found at least one “keyword tool” that decided to extend its keyword functionality and move into competitive adwords analysis. The marketing copy on this tool convinces you that, among other things, it will answer the following question: “How many pay per click ads are being run on Google for keyword ‘X’?”
It seems like a simple enough question, right? You go to Google, type in keyword ‘X’ and count the ads that show up on the side, right?
Reason 1 - Day Parting:
Have you ever set up a Google adwords campaign?
If so you may be aware of a feature called ‘day parting’ – this is the idea that you probably have more customers looking for you on certain days of the week or certain hours of the day. There are some times of some days when you get random clicks that NEVER amount to sales, so you turn your ads off during these unproductive times, and save yourself money.
Note that these hours are radically different depending on your market; office supply stores are generally 9-5 Monday through Friday; urgent care is usually the exact opposite.
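To make day parting concrete, here is a minimal sketch in Python; the market names and the hours are purely illustrative assumptions, not real campaign data:

```python
# Hypothetical day-parting schedules: for each market, a map from weekday
# to the hours (24h clock) during which its ads are switched on.
WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
SCHEDULES = {
    # Office supplies: business hours, business days.
    "office_supplies": {day: set(range(9, 17)) for day in WEEKDAYS},
    # Urgent care: the opposite -- evenings and all weekend.
    "urgent_care": {day: set(range(17, 24)) for day in WEEKDAYS}
                   | {day: set(range(24)) for day in ["Sat", "Sun"]},
}

def ad_is_live(market, day, hour):
    """True if the given market's ads are running on that day at that hour."""
    return hour in SCHEDULES[market].get(day, set())

print(ad_is_live("office_supplies", "Mon", 10))  # True
print(ad_is_live("urgent_care", "Mon", 10))      # False
```

An ad counter that samples at 10 a.m. on a Monday would see the office supply ad but never the urgent care ad, which is exactly the skew described above.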
So what happens when your analysis software decides to count the ads during the unproductive times for your market?
It’s going to come back with completely bogus information!
The only ads it’s going to be able to count are:
- big businesses that can afford to dump money into a pit
- businesses investing in big branding
- people too stupid to turn their ads off
- all of the above
The important point here is that just because an ad shows up at a certain time does not mean that the ad is making money! Sometimes ads are placed at odd hours by businesses that can afford to pay more to acquire a customer than you can. They are not even looking at whether any one particular ad is profitable. They simply look at the bottom line of their overall online advertising campaign to ensure that it’s not in the red. If they are in “branding” mode, they may not even care about that.
How, then, are you supposed to tell if this keyword has few ads running because of “big branding” during off-peak hours or because it is a profitable low hanging niche?
Reason 2 - Local Search Prediction Issues and Adwords:
Further… there is this thing that is becoming popular called “local search” ;-)
How this is implemented in Google AdWords ads is as follows:
Let’s say I’m in Chicago and I go searching for the term “real estate agent”. I’m going to get ads for Chicago based real estate agents along with some national results.
In this case, a downloadable app that I run from my desktop, located in my home in Chicago, is going to give me appropriate results if my business is in Chicago… But what if I’m doing research for a client in LA? Or what if my product is national and I am researching multiple locales around the country?
Clearly you can see that this problem is even worse when you are using a web based service that uses a local IP to collect this information – it can be located anywhere and almost guaranteed to not be in the location where your ads will be running – causing some seriously skewed data.
Reason 3 - Adwords Budget Limit Set
Oh, there is one more thing.
You can set up how much money you want to spend on your ad. When that limit is hit, your ad is turned off.
If you have a highly popular ad and a limited budget, then your ad will hit that limit pretty fast. (If this is you, you will generally learn ‘day parting’ pretty fast, to make sure every single ad cent counts.) This issue, too, would skew the results for someone like me who might be tracking AdWords ads. Unless I’m tracking ads during the limited times of the day when your ad is running, I’ll never see your highly profitable ad… Or, I might catch it once and then never see it again, and assume that the ad, because it was short-lived, was not profitable.
How Can You Really Decide if an Adwords Ad is Profitable?
So now you see that in order to really get an idea of how many ads are running successfully for keyword ‘X’, you need to be sampling the query results every hour or so for at least a week and comparing the results to one another to get a good idea of what the situation really is. And either you’ll want to do it from many different locations around the country or around the world OR you’ll want to be able to specify the locale for where the counting should occur.
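As a rough sketch of that sampling idea (the ad identifiers and the 50% persistence threshold are assumptions for illustration, not a real spec), you could aggregate hourly snapshots and keep only the ads that show up consistently:

```python
from collections import Counter

def persistent_ads(samples, min_fraction=0.5):
    """Given a sequence of hourly samples (each a set of ad identifiers
    seen for the keyword), return the ads that appeared in at least
    min_fraction of the samples -- a crude proxy for ads that run
    consistently enough to be presumed deliberate and sustainable."""
    counts = Counter()
    for ads in samples:
        counts.update(set(ads))
    threshold = len(samples) * min_fraction
    return {ad for ad, n in counts.items() if n >= threshold}

# A real run would use ~168 hourly samples (one week); a toy example:
samples = [
    {"ad_a", "ad_b"},
    {"ad_a"},
    {"ad_a", "ad_c"},
    {"ad_a", "ad_b"},
]
print(sorted(persistent_ads(samples)))  # ['ad_a', 'ad_b']
```

Here `ad_c` appeared only once in four samples and is discarded as noise, while the budget-limited but steady `ad_b` survives; the per-locale problem would still require running this collection from (or targeted at) each region of interest.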
Why is “wrong” SO VERY WRONG?
Doing anything less than what I just described above is what I mean by “useless data”.
Querying data from a single location only once a day (or even worse, only once period!), at an arbitrary time and getting a sampling of adwords ads is well… useless.
It cannot hope to accurately convey anything about the real picture of what is going on in a market around a given keyword term and can lead you into thinking that your market is actually something which it absolutely is not.
This is actually much worse than simply useless… it can cost you money. And that’s where wrong gets REALLY WRONG.
At Theme Zoom we looked into what it would take to program such a system correctly in order to do this “right”. We created a formal “spec” for how much bandwidth and how many servers scattered around the world (virtual or physical) it would take to give an accurate picture, and the cost was decidedly prohibitive. So if a $97/month utility is claiming to give you such data, caveat emptor (buyer beware) is all I can say!
Now, let’s get back to the features that actually need to be on a “military-industrial professional-grade” natural language processing keyword research tool.
1. Related Keywords: You need to be able to enter a “seed word” and get related keywords back. This list can be exceptionally long and confusing or it can be short and meaningful.
Big Keyword Lists are More Exciting than Useful:
There is a dopamine hit that your brain experiences when your software downloads an exceptionally long list of keywords. For neophytes, the initial response is something like “Oh GOODIE! Here MUST be exactly the keywords I am looking for!” simply because there are sooooo many. Marketers count on you feeling this “hit.” One of the tools on the market in 2007 was actually called “Keyword Bonanza,” like the all-you-can-eat buffet restaurant. (sigh)
But is more really better?
When you are stuck with a giant list of keywords you actually have a multifold problem:
- How do you sort the wheat from the chaff? (good from bad keywords)
- How do you find which keywords are actually relevant to your seed keyword?
- Which keywords are most important to your market conversation?
More is Not Better
What a lot of keyword tools do to create such enormously long lists of keywords is to tumble the list of words that they collect with generic terms such as “buy” or “how to”.
It doesn’t take a very long list of keywords married with a short list of tumblers to create a gigantic keyword list where all the keywords are really just variations on a theme.
Selective Tumbling Makes More Sense
What makes more sense is to give YOU the ability to tumble your own stock terms that are relevant to your business model with the keywords returned by your seed term.
Better still, we will allow you to tumble keywords with locale names to create great local search terms!
This is an elegant solution and we have slated this feature to be implemented into TLKT very soon. Specific tumble terms will be combined with keywords you hand-select, creating a highly useful list of keywords. As a result, the keyword list returned after the tumbling will have all the raw data any of our drills give you, and you will be able to analyze it in all the usual ways with filters, custom columns and the like. You may then analyze these terms using natural language processing, set with any filter that you desire, available only in TLKT™. Now the list will be meaningful, even if it’s still unmanageably long.
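The tumbling step itself is easy to sketch; assuming hypothetical keywords, modifier terms and locale names (these are not TLKT internals, just an illustration of the combination logic), it might look like this:

```python
from itertools import product

def tumble(keywords, modifiers, locales=()):
    """Combine hand-selected keywords with user-chosen modifier terms,
    and optionally with locale names, to build targeted phrase variants."""
    phrases = [f"{mod} {kw}" for mod, kw in product(modifiers, keywords)]
    phrases += [f"{kw} {loc}" for kw, loc in product(keywords, locales)]
    return phrases

# 2 keywords x 2 modifiers + 2 keywords x 2 locales = 8 targeted phrases.
result = tumble(["plumber", "water heater repair"],
                ["emergency", "licensed"],
                locales=["Chicago", "Evanston"])
print(result)
```

The point of *selective* tumbling is visible in the arithmetic: the list size is the product of the inputs, so a short, relevant modifier list keeps the output focused instead of exploding into thousands of near-duplicates.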
See? You can still get your dopamine hit for those of you who still believe that “bigger is better” AND sort the wheat from the chaff all at the same time.
Natural Language Processing, LSI and True Keyword Relevance:
Now let’s look at relevance; this problem is even harder to solve.
Latent Semantic Indexing has long been proven to be the most effective way to determine the relevance of a keyword to a body of text, but until recently calculating the latent semantic index for a set of terms was a long and arduous (really EXPENSIVE) task. This is not a function that could be put into an inexpensive keyword tool.
This is where TLKT members get the advantage of Theme Zoom’s many years of engineering efforts. The R&D dollars spent over the years to refine this ability now brings an affordable LSI algorithm to the masses.
Natural Language Processing: What Does LSI Have to Do with Ranking?
But how does LSI help you to rank higher on the search engine for keywords that convert? We use natural language processing to determine the semantic relationship between each keyword in your project with the top 100 conversations about your seed term on the web.
Market Conversations not Keywords
What this means is that the higher the LSI score, the more relevant that specific keyword is to the top-ranking conversations about your seed term happening on the web RIGHT NOW.
That means you’ve got your finger on the pulse of the internet every time you hit that drill button.
Let me give you an example of exactly how this translates into your keyword research. Within 24 hours after Michael Jackson's death, his name and related terms were showing up in health related drills.
Think about that for a second. Michael died at the age of 50, significantly younger than most of his contemporaries, meaning that there were health issues involved in his death. This was immediately reflected in health topics on the internet, and showed up in TLKT drills within a 24 hour period. That’s what good LSI can do for you – it will keep your research current with the trends that are happening RIGHT NOW in your market.
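For readers curious what an LSI-style relevance score looks like mechanically, here is a toy latent semantic analysis sketch; the tiny term-document matrix is invented for illustration and bears no relation to TLKT’s actual data or algorithm:

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = "conversations".
terms = ["diet", "exercise", "mortgage", "health"]
X = np.array([
    [2, 3, 0, 1],   # diet
    [1, 2, 0, 2],   # exercise
    [0, 0, 3, 2],   # mortgage
    [3, 2, 0, 1],   # health
], dtype=float)

# Latent semantic analysis: a truncated SVD projects terms into a
# low-rank "concept" space where co-occurrence patterns dominate.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]   # term coordinates in the latent space

def relevance(term_a, term_b):
    """Cosine similarity of two terms in the latent space."""
    a = term_vecs[terms.index(term_a)]
    b = term_vecs[terms.index(term_b)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "diet" co-occurs with "health" conversations; "mortgage" does not.
print(relevance("diet", "health") > relevance("mortgage", "health"))  # True
```

The same principle, applied to the live top-ranking pages for a seed term instead of a toy matrix, is what lets a score reflect the conversations happening right now.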
There are other tools on the market that pull “fresh keywords” in real time, but almost none of them can give you ‘decision insurance’ based on cost, traffic, competition as well as natural language processing.
By the way, this technology is not available anywhere else except with Theme Zoom products. Have I said it? I am trying not to be too sales-ish, but they told me I had to try to market this tool, even though we have no competition.
2. Keyword Data Function: This functionality brings you the associated standard data for each keyword returned by your drill. The standard data returned is generally:
- Broad Match Cost
- Broad Match Paid Traffic
- Phrase Match Cost
- Phrase Match Paid Traffic
- Competing Pages Google
- Competing Pages Yahoo
- Frequency of use by competitors
- Other variations of above standard data
Of interest here is the data source for these elements. Some keyword tool builders get pretty creative about where their program goes to find/scrape/borrow this data. This makes comparing the results from one keyword research tool to another really confusing.
For example, Wordtracker used to get most of their data from Dogpile.com and later changed that to include other sources. Furthermore, the “queries-per-day” indicator uses a multiplier. Keyword Discovery has gone through many cycles where their data gets pulled. I finally gave up on depending upon the query numbers, because they were simply too outlandish. It is not that I cannot use these tools, it is that I do not take their “estimated query results” literally.
Additionally, data on the web is pretty dynamic, meaning that data that was fetched an hour ago may not match the data that is fetched right now, even when using the same software and the same source (unless the data is in a static database, heaven forbid!).
There is no right or wrong to these data anomalies. This is simply the situation with a dynamic internet. It’s kind of like a silk shirt; those imperfections are not flaws, but rather part of the “charm” of trying to take a snapshot of something in motion. When you step into the stream, you realize that the water has moved on.
The solution is to use the “least imperfect” data that you can, which means jumping through some fancy hoops programmatically.
3. Domain Rank Info for Keyword: This function gets data for the top ranking domains, occasionally with the provision to compare specific domain ranks entered by the user. Only a couple of the keyword tools on the market provide this added feature. . . and generally they do not do it well.
But it’s not their fault. Really.
Let me tell you a story about why all domain rank tools are not created equal.
Personalized Search Results
Once upon a time, in a Googleplex far far away, the programmers started playing around with “personalized results”. It started a couple of years ago. ;-)
This is where Google creates a profile for you when you are logged in. The profile tracks the websites you frequent and the topics you view. Based on the things you appear to like, Google tries to “tweak” the results of your query to place personalized results at the top. (Today you can even add comments and manually adjust those results by dragging entries up and down the page).
I had no issue with this and it was all fine and good until they started bringing those ideas out from behind the firewall of a “personalized” result.
Browser Dependent Search Results:
Today they skew the results based on what kind of browser you are using. In other words, they have determined that if you are using a Linux browser you must be a propeller head, so they are more likely to include a Wikipedia result as the number one result for your query, while someone doing the exact same query from exactly the same IP address (your street address on the internet) but using a different browser might not see Wikipedia anywhere in the top 10 results!
So now the top-ranked domains depend upon which browser you are using, among a host of other technicalities (like which Google data center you are getting your result from and when it was last synchronized with the master database).
This officially moves the idea of “correct” rank results into the category of “even harder to figure out than LSI”!
So aside from the now non-trivial information of which are the top domains, a keyword tool is also expected to include:
- Page Rank
- Inbound Links to Page
- Pages indexed from Site
- Age of Domain
- Deep links to overall domain versus specifically to the Index Page
- Important Inbound Links from .gov, .edu, Wikipedia, etc
4. Commercial Intent and Transactional Intent keyword indicator (My favorite topic): Some of these keyword tools are including data from the MSN commercial intent database. We have decided that the data is bogus, and will not include it.
We are working on an algorithm in the depths of our secret labs that we suspect more accurately predicts the likelihood of a keyword being transactional. We are still deciding how and if we will integrate this into the public tool. If you would like more information on the problem of accurately predicting “commercial intent” keywords, read Shari Thurow’s article on Search Engine Land.
5. Data Analysis: After exploring the stated benefits of the keyword research tools on the market, here are Sue’s final comments about how we are integrating the above distillation of features into The Last Keyword Tool.
“It’s not the aggregation of raw data that makes a keyword tool useful, but rather the ability it gives you to analyze that data. This is because information that is not easily comprehensible and actionable to the ordinary person is useless.”
What makes The Last Keyword Tool (and Krakken for that matter) the most USEFUL international keyword research and semantic (conversation) analysis tool in the world is the ability it gives you to analyze and understand the meaning behind the raw data for each individual keyword, as well as the relationships between the keywords.
Let’s examine this for a second. Knowing the number of competing pages for a single term is pretty useless information all by itself. Taken out of context, this information is simply a meaningless number. So how do we add meaning? How can we place this data in context so it suddenly becomes important? There are two different ways.
1) We can contrast it with the same data field for other keywords.
2) We can combine it with other data for that keyword and contrast it against other keywords.
Contrasting the numbers of competing pages from one keyword to the next will give us an idea of the scope of that keyword. It helps to make a vertical map, with the keywords which have the lowest numbers of competing pages at the bottom and the largest (or top of the vertical markets) at the top.
To illustrate the second example, if you combine this with a natural language semantic proximity index it will start to paint a three dimensional picture. Keywords that are highly relevant will be directly above or below the seed term, depending on their numbers of competing pages; and keywords which are less relevant will be scattered further out, the less relevant the further they are scattered.
This is starting to give you an idea of vertical markets and a crude form of market segmentation. The easiest segmentation you can make with this data is to find those terms which are the vertical markets and those which are direct niches. There is not enough data contained here to analyze that further.
So let’s take a different example.
Again, looking at only the competing pages, we have a crude idea of how competitive a term is. When we compare competing pages from term to term we can say this term is more competitive than another.
Now, if we combine the competing pages with the natural traffic, we can get a low hanging fruit or KEI indicator. Further, comparing one keyword to the next, we can start to find under optimized niches.
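One common formulation of that “low hanging fruit” idea is the classic KEI: popularity squared relative to competition. A sketch with invented search and competition numbers (not real market data):

```python
def kei(monthly_searches, competing_pages):
    """Keyword Effectiveness Index: searches squared over competing pages.
    Higher means more traffic is available per unit of competition."""
    if competing_pages == 0:
        return float("inf")  # uncontested term
    return monthly_searches ** 2 / competing_pages

# Hypothetical figures: (monthly searches, competing pages).
keywords = {
    "real estate": (90_000, 50_000_000),
    "chicago real estate agent": (3_000, 20_000),
}
ranked = sorted(keywords, key=lambda k: kei(*keywords[k]), reverse=True)
print(ranked)  # ['chicago real estate agent', 'real estate']
```

Note how the niche term wins despite far less traffic: 3,000² / 20,000 = 450 versus 90,000² / 50,000,000 = 162, which is exactly the under-optimized-niche signal described above.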
Our goal with The Last Keyword Tool was to set up a construct in which we would provide you with all the raw data that is USEFUL and ACCURATE and give you the ability to analyze this data either with our filters and columns or your own.
Among other things, The Last Keyword Tool comes equipped with filters and columns which will allow you to examine and execute:
- Market Analysis
- Niche Analysis
- Keyword Analysis
- Competitive Analysis and beyond
- Semantic (conversation) relational analysis
In this way you are systematically working to “swallow your market whole” using far fewer inbound links than your competition. This is accomplished using high “quality” semantically related keywords instead of picking through thousands of “general” keywords.
For example, if you type in a keyword or theme and drill for keywords to be returned, it is not enough to simply return keywords that have been “scraped” from competitor websites. Why not return the most profitable and semantically related keywords in the proper order to save time?
Not only will this save time, but it will help you develop a long-term ranking strategy that encourages you to “swallow your market whole” over time, while your competition is running “to and fro” chasing random keywords. There is more to the process of building a long-term profitable website than simply collecting a bucket of “low hanging fruit”.
But don’t take my word for it. Try it yourself. It’s free.
We are so confident that you will notice the difference between the other tools on the market and a Professional Grade Keyword Research Tool (which must include semantic/conversation analysis) that, for the moment, we are giving it away for free. I look forward to seeing you on the inside:
Russell Wright, Sue Bell and the Theme Zoom Team