Do You Need Local Pages? – Whiteboard Friday

Posted by Tom.Capper

Does it make sense for you to create local-specific pages on your website? Regardless of whether you own or market a local business, it may make sense to compete for space in the organic SERPs using local pages. Please give a warm welcome to our friend Tom Capper as he shares a 4-step process for determining whether local pages are something you should explore in this week’s Whiteboard Friday!

https://fast.wistia.net/embed/iframe/pxn5jrjj6k?seo=false&videoFoam=true


Video Transcription

Hello, Moz fans. Welcome to another Whiteboard Friday. I’m Tom Capper. I’m a consultant at Distilled, and today I’m going to be talking to you about whether you need local pages. Just to be clear right off the bat what I’m talking about, I’m not talking about local rankings as we normally think of them, the local map pack results that you see in search results, the Google Maps rankings, that kind of thing.

A 4-step process to deciding whether you need local pages

I’m talking about conventional, 10 blue links rankings but for local pages, and by local pages I mean pages from a national or international business that are location-specific. What are some examples of that? Maybe on Indeed.com they would have a page for jobs in Seattle. Indeed doesn’t have a bricks-and-mortar premises in Seattle, but they do have a page that is about jobs in Seattle.

You might get a similar thing with flower delivery. You might get a similar thing with used cars, all sorts of different verticals. I think it can actually be quite a broadly applicable tactic. There’s a four-step process I’m going to outline for you. The first step is actually not on the board. It’s just doing some keyword research.

1. Know (or discover) your key transactional terms

I haven’t done much on that here because hopefully you’ve already done it: you already know what your key transactional terms are. Whatever happens, you don’t want to end up developing location pages for too many different keyword types, because that’s going to bloat your site. You probably just need to pick one or two key transactional terms that you’re going to build the local variants of. For this purpose, I’m going to talk through an SEO job board as an example.

2. Categorize your keywords as implicit, explicit, or near me and log their search volumes

We might have “SEO jobs” as our core head term. We then want to figure out what the implicit, explicit, and near me versions of that keyword are and what their search volumes are. In this case, the implicit version is probably just “SEO jobs.” If you search for “SEO jobs” right now, in a new browser tab, you’ll probably find that a lot of locally oriented results appear, because that is an implicitly local term. In fact, an awful lot of terms use local data to affect rankings now, which affects how you should think about your rank tracking, but we’ll get to that later.

“SEO jobs,” maybe “SEO vacancies,” that kind of thing: those are all going into your implicitly local terms bucket. The next bucket is your explicitly local terms. That’s going to be things like “SEO jobs in Seattle,” “SEO jobs in London,” and so on. You’re never going to get complete coverage of different locations. Try to keep it simple.

You’re just trying to get a rough idea here. Lastly, you’ve got your near me or nearby terms, and it turns out that for SEO jobs, not many people search “SEO jobs near me” or “SEO jobs nearby.” This will also vary a lot by vertical. I would imagine that if you’re in food delivery or something like that, that bucket would be huge.

3. Examine the SERPs to see whether local-specific pages are ranking

Now we’ve categorized our keywords. We want to figure out what kind of results are going to do well for what kind of keywords, because obviously if local pages are the answer, then we might want to build some.

In this case, I’m looking at the SERP for “SEO jobs.” This is imaginary. The rankings don’t really look like this. But we’ve got SEO jobs in Seattle from Indeed. That’s an example of a local page, because this is a national business with a location-specific page. Then we’ve got SEO jobs Glassdoor. That’s a national page, because in this case they’re not putting anything on this page that makes it location specific.

Then we’ve got SEO jobs Seattle Times. That’s a local business: the Seattle Times only operates in Seattle and probably has a bricks-and-mortar location. If you’re going to be pulling a lot of data of this type, maybe from a rank tracking tool like STAT, for all the locations you’re mentioning, then you’re probably going to want to categorize these results at scale rather than going through them one at a time.

I’ve drawn up a little flowchart here that you could encapsulate in an Excel formula or something like that. If the location is mentioned in the URL and in the domain, then we know we’ve got a local business. (Most of the time this is just a rule of thumb.) If the location is mentioned in the URL but not in the domain, then we know we’ve got a local page, and so on.
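
To show how that categorization might be encapsulated at scale, here’s a minimal sketch in Python rather than Excel. The function and the example URLs are invented for illustration; it implements the same rule of thumb, not a guarantee:

```python
from urllib.parse import urlparse

def classify_result(url: str, location: str) -> str:
    """Flowchart rule of thumb: location in the domain -> local business;
    location only in the path or query -> local page; neither -> national page."""
    parsed = urlparse(url.lower())
    loc = location.lower().replace(" ", "-")
    if loc in parsed.netloc:
        return "local business"
    if loc in parsed.path or loc in parsed.query:
        return "local page"
    return "national page"

# Illustrative URLs, not real rankings:
for url in ["https://www.seattletimes.com/jobs/seo/",
            "https://www.indeed.com/jobs/seo-seattle",
            "https://www.glassdoor.com/Job/seo-jobs.htm"]:
    print(url, "->", classify_result(url, "seattle"))
```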

4. Compare & decide where to focus your efforts

You can categorize all the different result types we’ve got at scale. Then we can start to fill out a chart like this using the rankings. What I’d recommend doing is finding a click-through rate curve that you’re happy to use. You could go somewhere like AdvancedWebRanking.com and download some example click-through rate curves.

Again, this doesn’t have to be super precise. We’re looking for a rough, directional indication of what would be useful here. I’ve got Implicit, Explicit, and Near Me keyword groups, and Local Business, Local Page, and National Page result types. Then I’m just figuring out what the visibility share of each of these types is. In my particular example, it turns out that for explicit terms, it could be worth building some local pages.
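
To make that visibility-share calculation concrete, here’s a minimal sketch. The CTR values and rankings below are placeholders; a real version would use the CTR curve you downloaded and your own rank tracking export:

```python
from collections import defaultdict

# Hypothetical CTR curve by ranking position (swap in a real one,
# e.g. downloaded from AdvancedWebRanking.com).
CTR = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

# (keyword_group, result_type, position) for each tracked ranking: dummy data.
rankings = [
    ("explicit", "local page", 1),
    ("explicit", "national page", 2),
    ("explicit", "local business", 3),
    ("implicit", "national page", 1),
    ("implicit", "local page", 4),
]

share = defaultdict(float)
totals = defaultdict(float)
for group, rtype, pos in rankings:
    ctr = CTR.get(pos, 0.01)  # token CTR for long-tail positions
    share[(group, rtype)] += ctr
    totals[group] += ctr

# Visibility share: the fraction of each keyword group's estimated clicks
# that goes to each result type.
for (group, rtype), value in sorted(share.items()):
    print(f"{group:9s} {rtype:15s} {value / totals[group]:.0%}")
```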

That’s all. I’d love to hear your thoughts in the comments. Thanks.

Video transcription by Speechpad.com



Ranking the 6 Most Accurate Keyword Research Tools

Posted by Jeff_Baker

In January of 2018, Brafton began a massive organic keyword targeting campaign, publishing over 90,000 words of blog content.

Did it work?

Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process, published earlier this year, we also increased our organic traffic by 45% and grew the number of keywords ranking in the top ten results by 130%.

But we got a whole lot more than just traffic.

From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.

As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece…

How well keyword research tools can predict where you will rank.

A little background

We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.

We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.

With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:

This image links to an example of a content brief Brafton delivers to writers.

Between mid-January and late May, we ended up writing 55 blog posts each targeting 55 unique keywords. 50 of those blog posts ended up ranking in the top 100 of Google results.

We then paused and took a snapshot of each URL’s Google ranking position for its target keyword and its corresponding organic difficulty scores from Moz, SEMrush, Ahrefs, SpyFu, and KW Finder. We also took the PPC competition scores from the Keyword Planner Tool.

Our intention was to draw statistical correlations between our keyword rankings and each tool’s organic difficulty score. With this data, we were able to report on how accurately each tool predicted where we would rank.

This study is unusually controlled, in that each blog post had one specific keyword target and was optimized specifically for that keyword, so every post was created in a similar fashion.

Do keyword research tools actually work?

We use them every day, on faith. But has anyone ever actually asked, or better yet, measured how well keyword research tools report on the organic difficulty of a given keyword?

Today, we are doing just that. So let’s cut through the chit-chat and get to the results…

This image ranks each of the 6 keyword research tools in order: Moz leads with 4.95 stars out of 5, followed by KW Finder, SEMrush, Ahrefs, SpyFu, and lastly the Keyword Planner Tool.

While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).

As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.

Let’s dig in!

The Pearson Correlation Coefficient

Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.

In order to understand the relationship between two variables, our first step is to create a scatter plot chart.

Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.

This image shows a scatter plot for Moz's keyword difficulty scores versus our keyword rankings. In general, the data clusters fairly tight around the regression line.

We start with a visual inspection of the data to determine if there is a linear relationship between the two variables. Ideally for each tool, you would expect to see the X variable (keyword ranking) increase proportionately with the Y variable (organic difficulty). Put simply, if the tool is working, the higher the keyword difficulty, the less likely you will rank in a top position, and vice-versa.

This chart is all fine and dandy, however, it’s not very scientific. This is where the Pearson Correlation Coefficient (PCC) comes into play.

The PCC measures the strength of a linear relationship between two variables. The output of the PCC is a score ranging from +1 to -1. A score greater than zero indicates a positive relationship: as one variable increases, the other increases as well. A score less than zero indicates a negative relationship: as one variable increases, the other decreases. Both scenarios indicate a correlation between the two variables (though not necessarily a causal relationship). The stronger the relationship between the two variables, the closer to +1 or -1 the PCC will be. Scores near zero indicate a weak or no relationship.
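
If you want to compute this yourself, SciPy exposes the PCC directly, along with the p-value reported for each tool below. The numbers here are placeholders, not the study’s raw data:

```python
from scipy.stats import pearsonr

# Placeholder data: Google ranking position vs. one tool's difficulty score
# for each post (the study's raw data isn't reproduced here).
rankings = [3, 7, 12, 25, 41, 5, 18, 33, 9, 2]
difficulty = [28, 35, 40, 52, 60, 30, 45, 55, 33, 25]

r, p_value = pearsonr(rankings, difficulty)
print(f"PCC = {r:.3f}, p-value = {p_value:.3f}")
```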

Phew. Still with me?

So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.

We will use the following table from statisticshowto.com to interpret the PCC score for each tool:

Coefficient Correlation R Score (Key)

.70 or higher: Very strong positive relationship
.40 to +.69: Strong positive relationship
.30 to +.39: Moderate positive relationship
.20 to +.29: Weak positive relationship
.01 to +.19: No or negligible relationship
0: No relationship [zero correlation]
-.01 to -.19: No or negligible relationship
-.20 to -.29: Weak negative relationship
-.30 to -.39: Moderate negative relationship
-.40 to -.69: Strong negative relationship
-.70 or higher: Very strong negative relationship

In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.

These scatter plots show three types of correlations: positive, negative, and no correlation. Positive correlations have data plots that move up and to the right. Negative correlations move down and to the right. No correlation has data that follows no linear pattern

And here are some examples of charts with their corresponding PCC scores (r):

These scatter plots show what different PCC values look like visually. The tighter the grouping of data around the regression line, the higher the PCC value.

The more tightly the data points cluster around the regression line, whether the slope is positive or negative, the stronger the relationship.

That was the tough part – you still with me? Great, now let’s look at each tool’s results.

Test 1: The Pearson Correlation Coefficient

Now that we’ve all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.

In order of performance:

#1: Moz

This image shows a scatter plot for Moz's keyword difficulty scores versus our keyword rankings. In general, the data clusters fairly tight around the regression line.

Revisiting Moz’s scatter plot, we observe a tight grouping of results relative to the regression line with few moderate outliers.

Moz Organic Difficulty Predictability

PCC: 0.412
P-val: .003 (P<0.05)
Relationship: Strong
% Keywords Matched: 100.00%

Moz came in first with the highest PCC of .412. As an added bonus, Moz grabs data on keyword difficulty in real time, rather than from a fixed database. This means you can get a keyword difficulty score for any keyword.

In other words, Moz was able to generate keyword difficulty scores for 100% of the 50 keywords studied.

#2: SpyFu

This image shows a scatter plot for SpyFu's keyword difficulty scores versus our keyword rankings. The plot is similar looking to Moz's, with a few larger outliers.

Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.

SpyFu Organic Difficulty Predictability

PCC: 0.405
P-val: .01 (P<0.05)
Relationship: Strong
% Keywords Matched: 80.00%

SpyFu came in right under Moz with a PCC 1.7% weaker (.405). However, the tool ran into the largest issue with keyword matching: only 40 of the 50 keywords produced keyword difficulty scores.

#3: SEMrush

This image shows a scatter plot for SEMrush's keyword difficulty scores versus our keyword rankings. The data has a significant amount of outliers relative to the regression line.

SEMrush would certainly benefit from a couple mulligans (a second chance to perform an action). The Correlation Coefficient is very sensitive to outliers, which pushed SEMrush’s score down to third (.364).

SEMrush Organic Difficulty Predictability

PCC: 0.364
P-val: .01 (P<0.05)
Relationship: Moderate
% Keywords Matched: 92.00%

Further complicating the research process, only 46 of 50 keywords had keyword difficulty scores associated with them, and many of those had to be found individually through SEMrush’s “phrase match” feature rather than through the difficulty tool, which made digging around for the data more laborious.

#4: KW Finder

This image shows a scatter plot for KW Finder's keyword difficulty scores versus our keyword rankings. The data also has a significant amount of outliers relative to the regression line.

KW Finder definitely could have benefited from more than a few mulligans, with numerous strong outliers, coming in right behind SEMrush with a score of .360.

KW Finder Organic Difficulty Predictability

PCC: 0.360
P-val: .01 (P<0.05)
Relationship: Moderate
% Keywords Matched: 100.00%

Fortunately, the KW Finder tool had a 100% match rate without any trouble digging around for the data.

#5: Ahrefs

This image shows a scatter plot for AHREF's keyword difficulty scores versus our keyword rankings. The data shows tight clustering amongst low difficulty score keywords, and a wide distribution amongst higher difficulty scores.

Ahrefs comes in fifth by a large margin at .316, barely clearing the “moderate relationship” threshold.

Ahrefs Organic Difficulty Predictability

PCC: 0.316
P-val: .03 (P<0.05)
Relationship: Moderate
% Keywords Matched: 100%

On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.

#6: Google Keyword Planner Tool

This image shows a scatter plot for Google Keyword Planner Tool's keyword difficulty scores versus our keyword rankings. The data shows randomly distributed plots with no linear relationship.

Before you ask, yes, SEO companies still use the paid competition figures from Google’s Keyword Planner Tool (and other tools) to assess organic ranking potential. As you can see from the scatter plot, there is in fact no linear relationship between the two variables.

Google Keyword Planner Tool Organic Difficulty Predictability

PCC: 0.045
P-val: Statistically insignificant/no linear relationship
Relationship: Negligible/None
% Keywords Matched: 88.00%

SEO agencies still using KPT for organic research (you know who you are!) — let this serve as a warning: You need to evolve.

Test 1 summary

For scoring, we will use a ten-point scale and score every tool relative to the highest-scoring competitor. For example, if the second highest score is 98% of the highest score, the tool will receive a 9.8. As a reminder, here are the results from the PCC test:

This bar chart shows the final PCC values for the first test, summarized.
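
To make the scaling concrete, here’s a minimal sketch that reproduces the scores from the PCC values above:

```python
pcc = {"Moz": 0.412, "SpyFu": 0.405, "SEMrush": 0.364,
       "KW Finder": 0.360, "Ahrefs": 0.316, "KPT": 0.045}

top = max(pcc.values())
scores = {tool: round(10 * value / top, 1) for tool, value in pcc.items()}
print(scores)
# {'Moz': 10.0, 'SpyFu': 9.8, 'SEMrush': 8.8, 'KW Finder': 8.7,
#  'Ahrefs': 7.7, 'KPT': 1.1}
```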

And the resulting scores are as follows:

Moz: 10
SpyFu: 9.8
SEMrush: 8.8
KW Finder: 8.7
Ahrefs: 7.7
KPT: 1.1

Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).

Test 2: Adjusted Pearson Correlation Coefficient

Let’s call this the “Mulligan Round.” In this round, on the assumption that sometimes things just go haywire and a tool flat-out misses, we will remove the three most egregious outliers from each tool’s data.

Here are the adjusted results for the handicap round:

Adjusted Scores (3 outliers removed), PCC with difference vs. original in parentheses:

SpyFu: 0.527 (+0.122)
SEMrush: 0.515 (+0.150)
Moz: 0.514 (+0.101)
Ahrefs: 0.478 (+0.162)
KW Finder: 0.470 (+0.110)
Keyword Planner Tool: 0.189 (+0.144)

As noted in the original PCC test, some of these tools really took a big hit from major outliers. Specifically, Ahrefs and SEMrush benefited the most from having their outliers removed, adding .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.

For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.

Here are the updated scores at the end of round two:

Tool: PCC Test + Adjusted PCC = Total

SpyFu: 9.8 + 10 = 19.8
Moz: 10 + 9.7 = 19.7
SEMrush: 8.8 + 9.8 = 18.6
KW Finder: 8.7 + 8.9 = 17.6
Ahrefs: 7.7 + 9.1 = 16.8
KPT: 1.1 + 3.6 = 4.7

SpyFu takes the lead! Now let’s jump into the final round of statistical tests.

Test 3: Resampling

Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.

Big thanks to Russ Jones, who put together an entirely different model that answers the question: “What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?”

He randomly selected 2 keywords from the list and their associated difficulty scores.

Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for the score-30 keyword ranks higher than the article written for the score-60 keyword? He then performed the same test 1,000 times.

He also threw out examples where the two randomly selected keywords shared the same rankings, or where data points were missing.
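
Here’s a minimal sketch of that resampling model, assuming each tool’s data is a list of (difficulty, ranking) pairs. The demo data is made up, and ties or missing values are discarded as described:

```python
import random

def resample_accuracy(pairs, trials=1000, seed=42):
    """pairs: (difficulty_score, ranking) tuples for one tool's keywords.
    Returns the share of random keyword pairs where the lower difficulty
    score correctly predicted the better (numerically lower) ranking."""
    rng = random.Random(seed)
    valid = [p for p in pairs if p[0] is not None and p[1] is not None]
    hits = tries = 0
    while tries < trials:
        (d1, r1), (d2, r2) = rng.sample(valid, 2)
        if d1 == d2 or r1 == r2:
            continue  # throw out ties, as the study did
        tries += 1
        if (d1 < d2) == (r1 < r2):
            hits += 1
    return hits / trials

# Made-up demo data: (difficulty, ranking) for a handful of keywords.
demo = [(30, 4), (60, 18), (45, 9), (25, 2), (70, 35), (50, 12), (40, 28)]
print(f"{resample_accuracy(demo):.1%} guessed correctly")
```

Here was the outcome: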

Resampling: % guessed correctly

Moz: 62.2%
Ahrefs: 61.2%
SEMrush: 60.3%
KW Finder: 58.9%
SpyFu: 54.3%
KPT: 45.9%

As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps us make more educated decisions than guessing.

Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.

In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.

For example, Ahrefs scored 11.2% better than flipping a coin, which is 8.2% less than Moz, which scored 12.2% better than flipping a coin, giving Ahrefs a score of 9.2.
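
In code, that baseline scaling looks roughly like this, assuming a straight linear scaling of each tool’s lift over 50% (this reproduces the reported scores, with KPT landing near -3.4, carried as -4 in the running totals below):

```python
correct = {"Moz": 62.2, "Ahrefs": 61.2, "SEMrush": 60.3,
           "KW Finder": 58.9, "SpyFu": 54.3, "KPT": 45.9}

lift = {tool: pct - 50 for tool, pct in correct.items()}  # points over a coin flip
top = max(lift.values())                                  # Moz's 12.2-point lift
scores = {tool: round(10 * x / top, 1) for tool, x in lift.items()}
print(scores)
# {'Moz': 10.0, 'Ahrefs': 9.2, 'SEMrush': 8.4, 'KW Finder': 7.3,
#  'SpyFu': 3.5, 'KPT': -3.4}
```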

The updated scores are as follows:

Tool: PCC Test + Adjusted PCC + Resampling = Total

Moz: 10 + 9.7 + 10 = 29.7
SEMrush: 8.8 + 9.8 + 8.4 = 27
Ahrefs: 7.7 + 9.1 + 9.2 = 26
KW Finder: 8.7 + 8.9 + 7.3 = 24.9
SpyFu: 9.8 + 10 + 3.5 = 23.3
KPT: 1.1 + 3.6 - 4 = 0.7

So after the last statistical accuracy test, we have Moz performing consistently, alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstandingly in the first two tests (albeit only returning results for 80% of the tested keywords), then fell flat on the final test.

Finally, we need to make some usability adjustments.

Usability Adjustment 1: Keyword Matching

A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.

To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:

  1. You have to use another tool to get the data, which defeats the entire point of using the original tool.
  2. You miss an opportunity to rank for a high-value keyword.

Neither scenario is good, so we developed a penalty system: for every 10% of match rate below 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.

One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
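
Sketched in code, the penalty rule looks like this (match rates from the study):

```python
match_rate = {"KW Finder": 100, "Ahrefs": 100, "Moz": 100,
              "SEMrush": 92, "Keyword Planner Tool": 88, "SpyFu": 80}

for tool, rate in match_rate.items():
    # One point per 10% of match rate below 100%, capped at 5 points.
    penalty = min((100 - rate) / 10, 5.0)
    print(f"{tool}: match rate {rate}%, deduct {penalty} points")
```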

The penalties are as follows:

Tool: Match Rate, Penalty

KW Finder: 100%, no penalty
Ahrefs: 100%, no penalty
Moz: 100%, no penalty
SEMrush: 92%, -.8
Keyword Planner Tool: 88%, -1.2
SpyFu: 80%, -2

Please note we gave SEMrush a lot of leniency here: technically, many of the keywords evaluated were not found in its keyword difficulty tool, but rather by manually digging through the phrase match tool. We will give them a pass, but with a stern warning!

Usability Adjustment 2: Reliability

I told you we would come back to this! Revisiting the second test in which we threw away the three strongest outliers that negatively impacted each tool’s score, we will now make adjustments.

In real life, there are no mulligans. In real life, each of those three blog posts that were thrown out represented a significant monetary and time investment. Therefore, when a tool has a major blunder, the result can be a total waste of time and resources.

For that reason, we will impose a slight penalty on those tools that benefited the most from their handicap.

We will use the level of PCC improvement to evaluate how much a tool benefitted from removing their outliers. In doing so, we will be rewarding the tools that were the most consistently reliable. As a reminder, the amounts each tool benefitted were as follows:

Tool: Benefit from removing outliers (+/-)

Ahrefs: 0.162
SEMrush: 0.150
Keyword Planner Tool: 0.144
SpyFu: 0.122
KW Finder: 0.110
Moz: 0.101

In calculating the penalty, we scored each of the tools relative to the top performer, giving the top performer zero penalty and imposing penalties based on how much additional benefit the tools received over the most reliable tool, on a scale of 0–100%, with a maximum deduction of 5 points.

So if a tool received twice the benefit of the top-performing tool, it would have had a 100% benefit, receiving the maximum deduction of 5 points. If another tool received a 20% benefit over the most reliable tool, it would get a 1-point deduction. And so on.
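
A minimal sketch of that calculation, assuming the benefit percentages are truncated to whole numbers (this reproduces the table below):

```python
benefit = {"Ahrefs": 0.162, "SEMrush": 0.150, "Keyword Planner Tool": 0.144,
           "SpyFu": 0.122, "KW Finder": 0.110, "Moz": 0.101}

baseline = min(benefit.values())  # the most reliable tool (Moz)
for tool, gain in benefit.items():
    pct_over = int((gain - baseline) / baseline * 100)  # % extra benefit, truncated
    deduction = min(pct_over / 100 * 5, 5.0)            # scaled to a max of 5 points
    print(f"{tool}: {pct_over}% benefit, deduct {deduction} points")
```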

Tool: % Benefit, Penalty

Ahrefs: 60%, -3
SEMrush: 48%, -2.4
Keyword Planner Tool: 42%, -2.1
SpyFu: 20%, -1
KW Finder: 8%, -.4
Moz: 0%, no penalty

Results

All told, our penalties were fairly mild, with a slight shuffling in the middle tier. The final scores are as follows:

Tool: Total Score, Stars (5 max)

Moz: 29.7, 4.95
KW Finder: 24.5, 4.08
SEMrush: 23.8, 3.97
Ahrefs: 23.0, 3.83
SpyFu: 20.3, 3.38
KPT: -2.6, 0.00

Conclusion

Using any organic keyword difficulty tool will give you an advantage over not doing so. While none of the tools are a crystal ball, providing perfect predictability, they will certainly give you an edge. Further, if you record enough data on your own blogs’ performance, you will get a clearer picture of the keyword difficulty scores you should target in order to rank on the first page.

For example, we know the following about how we should target keywords with each tool:

Tool: Average KD when ranking ≤10 / Average KD when ranking ≥11

Moz: 33.3 / 37.0
SpyFu: 47.7 / 50.6
SEMrush: 60.3 / 64.5
KW Finder: 43.3 / 46.5
Ahrefs: 11.9 / 23.6

This is pretty powerful information! It’s either first page or bust, so we now know the threshold for each tool that we should set when selecting keywords.

Stay tuned, because we drew a lot more correlations between word count, days live, total keywords ranking, and all kinds of other juicy stuff. Tune in again in early September for updates!

We hope you found this test useful, and feel free to reach out with any questions on our math!

Disclaimer: These results are estimates based on 50 ranking keywords from 50 blog posts and keyword research data pulled from a single moment in time. Search is a shifting landscape, and these results have certainly changed since the data was pulled. In other words, this is about as accurate as we can get from analyzing a moving target.



How to Win Some Local Customers Back from Amazon this Holiday Season

Posted by MiriamEllis

Your local business may not be able to beat Amazon at the volume of their own game of convenient shipping this holiday season, but don’t assume it’s a game you can’t at least get into!

This small revelation took me by surprise last month while I was shopping for a birthday gift for my brother. Like many Americans, I’m feeling growing qualms about the economic and societal impacts of putting my own perceived convenience at the top of a list of larger concerns like ensuring fair business practices, humane working conditions, and sustainable communities.

So, when I found myself on the periphery of an author talk at the local independent bookstore and the book happened to be one I thought my brother would enjoy, I asked myself a new question:

“I wonder if this shop would ship?”

There was no signage indicating such a service, but I asked anyway, and was delighted to discover that they do. Minutes later, the friendly staff was wrapping up a signed copy of the volume in nice paper and popping a card in at no extra charge. Shipping wasn’t free, but I walked away feeling a new kind of happiness in wishing my sibling a “Happy Birthday” this year.

And that single transaction not only opened my eyes to the fact that I don’t have to remain habituated to gift shopping at Amazon or similar online giants for remote loved ones, but it also inspired this article.

Let’s talk about this now, while your local business, large or small, still has time to make plans for the holidays. Let’s examine this opportunity together, with a small study, a checklist, and some inspiration for seasonal success.

What do people buy most at the holidays and who’s shipping?

According to Statista, the categories in the following chart are the most heavily shopped during the holiday season. I selected a large town in California with a population of 60,000+, and phoned every business in these categories that was ranking in the top 10 of Google’s Local Finder view. This comprised both branded chains and independently owned businesses. I asked each business whether, if I came in and purchased items, they could ship them to a friend.

Clothing: 80% offer shipping. Some employees weren’t sure. Outlets of larger store brands couldn’t ship. Some offered shipping only if you were a member of their loyalty program. Small independents consistently offered shipping. Larger brands promoted shopping online.

Electronics: 10% offer shipping. Larger stores all stressed going online. The few smaller stores said they could ship, but made it clear that it was an unusual request.

Games/Toys/Dolls etc.: 25% offer shipping. Large stores promote online shopping. One said they would ship some items but not all. Independents did not ship.

Food/Liquor: 20% offer shipping. USPS prohibits shipping alcohol. I surveyed grocery, gourmet, and candy stores. None of the grocery stores shipped, and only two candy stores did.

Books: 50% offer shipping. Only two bookstores in this town, both independent. One gladly ships. The other had never considered it.

Jewelry: 60% offer shipping. Chains require online shopping. Independents were more open to shipping, but some didn’t offer it.

Health/Beauty: 20% offer shipping. With a few exceptions, cosmetic and fitness-related stores either had no shipping service or offered limited or full online shopping.

Takeaways from the study

  • Most of the chains promote online shopping vs. shopping in their stores, which didn’t surprise me, but which strikes me as opportunity being left on the table.
  • I was pleasantly surprised by the number of independent clothing and jewelry stores that gladly offered to ship gift purchases.
  • I was concerned by how many employees initially didn’t know whether or not their employer offered shipping, indicating a lack of adequate training.
  • Finally, I’ll add that I’ve physically visited at least 85% of these businesses in the past few years and have never been told by any staff member about their shipping services, nor have I seen any in-store signage promoting such an offer.

My overarching takeaway from the experiment is that, though all of us are now steeped in the idea that consumers love the convenience of shipping, a dominant percentage of physical businesses are still operating as though this realization hasn’t fully sunk in… or as though it can be safely ignored.

To put it another way, if Amazon has taken some of your customers, why not take a page from their playbook and get shipping?

The nitty-gritty of brick-and-mortar shipping

“62% of consumers say the reason they’d shop offline is that they want to see, touch, and try out items.” (RetailDive)

There’s no time like the holidays to experiment with a new campaign. I sat down with a staff member at the bookstore where I bought my brother’s gift and asked her some questions about how they manage shipping. From that conversation, and from some additional research, I came away with the following checklist for implementing a shipping offer at your brick-and-mortar locations:

✔ Determine whether your business category is one that lends itself to holiday gift shopping.

✔ Train core or holiday temp staff to package and ship gifts.

✔ Craft compelling messaging surrounding your shipping offer, perhaps promoting pride in the local community vs. pride in Amazon. Don’t leave it to customers to shop online on autopilot — help them realize there’s a choice.

✔ Cover your store and website with messaging highlighting this offering, at least two months in advance of the holidays.

✔ In October, run an in-store campaign in which cashiers verbally communicate your holiday shipping service to every customer.

✔ Sweeten the offer with a dedication of X% of sales to a most popular local cause/organization/institution.

✔ Promote your shipping service via your social accounts.

✔ Make an effort to earn a mention of your shipping service in local print and radio news.

✔ Set clear dates for when the last purchases can be made to reach their destinations in time for the holidays.

✔ Coordinate with the USPS, FedEx, or UPS to have them pick up packages from your location daily.

✔ Determine the finances of your shipping charges. You may need to experiment with whether free shipping would put too big of a hole in your pocket, or whether it’s necessary to compete with online giants at the holidays.

✔ Track the success of this campaign to discover ROI.

Not every business is a holiday shopping destination, and online shopping may simply have become too dominant in some categories to overcome the Amazon habit. But, if you determine you’ve got an opportunity here, designate 2018 as a year to experiment with shipping with a view towards making refinements in the new year.

You may discover that your customers so appreciate the lightbulb moment of being able to support local businesses when they want something mailed that shipping is a service you’ll want to instate year-round. And not just for gifts… consumers are already signaling at full strength that they like having merchandise shipped to themselves!

Adding the lagniappe: Something extra

For the past couple of years, economists have reported that Americans are spending more on restaurants than on groceries. I see a combination of a desire for experiences and convenience in that, don’t you? It has been joked that someone needs to invent food that takes pictures of itself for social sharing! What can you do to capitalize on this desire for ease and experience in your business?

Cards, carols, and customs are wreathed in the “joy” part of the holidays, but how often do customers genuinely feel the enjoyment when they are shopping these days? True, a run to the store for a box of cereal may not require aesthetic satisfaction, but shouldn’t we be able to expect some pleasure in our purchasing experiences, especially when we are buying gifts that are meant to spread goodwill?

When my great-grandmother got tired from shopping at the Emporium in San Francisco, one of the superabundant sales clerks would direct her to the soft surroundings of the ladies’ lounge to refresh her weary feet on an automatic massager. She could lunch at a variety of nicely appointed in-store restaurants at varied prices. Money was often tight, but she could browse happily in the “bargain basement”. There were holiday roof rides for the kiddies, and holiday window displays beckoning passersby to stop and gaze in wonder. Great-grandmother, an immigrant from Ireland, got quite a bit of enjoyment out of the few dollars in her purse.

It may be that those lavish days of yore are long gone, taking the pleasure of shopping with them, and that we’re doomed to a meager choice between impersonal online shopping and impersonal offline warehouses… but I don’t think so.

The old Emporium was huge, with multiple floors and hundreds of employees … but it wasn’t a “big box store”.

There’s still opportunity for larger brands to differentiate themselves from their warehouse-lookalike competitors. Who says retail has to look like a fast food chain or a mobile phone store?

And as for small, independent businesses? I can’t open my Twitter feed nowadays without encountering a new and encouraging story about the rise of localism and local entrepreneurialism.

It’s a good time to revive the ethos of the lagniappe — the Louisiana custom of giving patrons a little something extra with their purchase, something that will make it worth it to get off the computer and head into town for a fun, seasonal experience. Yesterday’s extra cookie that made up the baker’s dozen could be today’s enjoyable atmosphere, truly expert salesperson, chair to sit down in when weary, free cup of spiced cider on a wintry day… or the highly desirable service of free shipping. Chalk up the knowledge of this need as one great thing Amazon has gifted you.

In 2017, our household chose to buy as many holiday presents as possible from Main Street for our nearby family and friends. We actually enjoyed the experience. In 2018, we plan to see how far our town can take us in terms of shipping gifts to loved ones we won’t have a chance to see. Will your business be ready to serve our newfound need?



What Do Dolphins Eat? Lessons from How Kids Search

Posted by willcritchlow

Kids may search differently than adults, but there are some interesting insights from how they use Google that can help deepen our understanding of searchers in general. Comfort levels with particular search strategies, reading only the bold words, taking search suggestions and related searches as answers — there’s a lot to dig into. In this week’s slightly different-from-the-norm Whiteboard Friday, we welcome the fantastic Will Critchlow to share lessons from how kids search.

https://fast.wistia.net/embed/iframe/i5pivmij3z?seo=false&videoFoam=true



Video Transcription

Hi, everyone. I’m Will Critchlow, founder and CEO of Distilled, and this week’s Whiteboard Friday is a little bit different. I want to talk about some surprising and interesting and a few funny facts that I learnt when I was reading some research that Google did about how kids search for information. So this isn’t super actionable. This is not about tactics of improving your website particularly. But I think we get some insights — they were studying kids aged 7 to 11 — by looking at how kids interact. We can see some reflections or some ideas about how there might be some misconceptions out there about how adults search as well. So let’s dive into it.

What do dolphins eat?

I’ve got this “What do dolphins eat?” because this was the first question that the researchers gave the kids: sit down in front of a search box, go. They tell this little anecdote, a little bit soul-destroying, of a seven-year-old child who starts typing dolphin, D-O-L-F, and then presses Enter, and it was like, sadly, there are no dolphins; hopefully they found him some dolphins. But a lot of the kids succeeded at this task.

Different kinds of searchers

The researchers divided the ways that the kids approached it up into a bunch of different categories. They found that some kids were power searchers. Some are what they called “developing.” They classified some as “distracted.” But one that I found fascinating was what they called visual searchers. I think they found this more commonly among the younger kids who were perhaps a little bit less confident reading and writing. It turns out that, for almost any question you asked them, these kids would turn first to image search.

So for this particular question, they would go to image search, typically just type “dolphin” and then scroll and go looking for pictures of a dolphin eating something. Then they’d find a dolphin eating a fish, and they’d turn to the researcher and say “Look, dolphins eat fish.” Which, when you think about it, I quite like in an era of fake news. This is the kids doing primary research. They’re going direct to the primary source. But it’s not something that I would have ever really considered, and I don’t know if you would. But hopefully this kind of sparks some thought and some insights and discussions at your end. They found that there were some kids who pretty much always, no matter what you asked them, would always go and look for pictures.

Kids who were a bit more developed, a bit more confident in their reading and writing, would often fall into one of these other camps, where they were hopefully focusing their attention. They found a lot of kids were obviously distracted, and I think as adults this is something we can relate to. Many of the kids were not really very interested in the task at hand. But this kind of path from distracted to developing to power searcher is an interesting journey that I think totally applies to grown-ups as well.

In practice: [wat do dolfin eat]

So I actually, after I read this paper, went and did some research on my kids. So my kids were in roughly this age range. When I was doing it, my daughter was eight and my son was five and a half. Both of them interestingly typed “wat do dolfin eat” pretty much like this. They both misspelled “what,” and they both misspelled “dolphin.” Google was fine with that. Obviously, these days this is plenty close enough to get the result you wanted. Both of them successfully answered the question pretty much, but both of them went straight to the OneBox. This is, again, probably unsurprising. You can guess this is probably how most people search.

“Oh, what’s a cephalopod?” The path from distracted to developing

So there’s a OneBox that comes up, and it’s got a picture of a dolphin. So my daughter, a very confident reader, she loves reading, “wat do dolfin eat,” she sat and she read the OneBox, and then she turned to me and she said, “It says they eat fish and herring. Oh, what’s a cephalopod?” I think this was her going from distracted into developing probably. To start off with, she was just answering this question because I had asked her to. But then she saw a word that she didn’t know, and suddenly she was curious. She had to kind of carefully type it because it’s a slightly tricky word to spell. But she was off looking up what is a cephalopod, and you could see the engagement shift from “I’m typing this because Dad has asked me to and it’s a bit interesting I guess” to “huh, I don’t know what a cephalopod is, and now I’m doing my own research for my own reasons.” So that was interesting.

“Dolphins eat fish, herring, killer whales”: Reading the bold words

My son, as I said, typed something pretty similar, and he, at the point when he was doing this, was at the stage of certainly capable of reading, but generally would read out loud and a little bit halting. What was fascinating on this was he only read the bold words. He read it out loud, and he didn’t read the OneBox. He just read the bold words. So he said to me, “Dolphins eat fish, herring, killer whales,” because killer whales, for some reason, was bolded. I guess it was pivoting from talking about what dolphins eat to what killer whales eat, and he didn’t read the context. This cracked him up. So he thought that was ridiculous, and isn’t it funny that Google thinks that dolphins eat killer whales.

That is similar to some stuff that was in the original research, where there were a bunch of common misconceptions it turns out that kids have and I bet a bunch of adults have. Most adults probably don’t think that the bold words in the OneBox are the list of the answer, but it does point to the problems with factual-based, truthy type queries where Google is being asked to be the arbiter of truth on some of this stuff. We won’t get too deep into that.

Common misconceptions for kids when searching

1. Search suggestions are answers

But on to some common misconceptions: they found some kids thought that the search suggestions, the drop-down as you start typing, were the answers, which is a bit problematic. I mean, we’ve all seen kind of racist or hateful drop-downs in those search queries. But in this particular case, it was mainly just funny. It would end up with things like you start asking “what do dolphins eat,” and “Do dolphins eat cats” was one of the search suggestions.

2. Related searches are answers

It was similar with related searches, which, as we know, are not answers to the question; they’re other questions. But kids in particular (and I think this is true of all users) didn’t necessarily read the directions on the page, didn’t read that they were related searches, just saw these things that said “dolphin” a lot and started reading out those. So that was interesting.

How kids search complicated questions

The next bit of the research was much more complex. So they started with these easy questions, and they got into much harder kind of questions. One of them that they asked was this one, which is really quite hard. So the question was, “Can you find what day of the week the vice president’s birthday will fall on next year?” This is a multifaceted, multipart question.

How do they handle complex, multi-step queries?

Most of the younger kids were pretty stumped on this question. Some did manage it. I think a lot of adults would fail at this. So if you just turn to Google, if you just typed this in or do a voice search, this is the kind of thing that Google is almost on the verge of being able to do. If you said something like, “When is the vice president’s birthday,” that’s a question that Google might just be able to answer. But this kind of three-layered thing, what day of the week and next year, make this actually a very hard query. So the kids had to first figure out that, to answer this, this wasn’t a single query. They had to do multiple stages of research. When is the vice president’s birthday? What day of the week is that date next year? Work through it like that.

I found with my kids that my eight-year-old daughter got stuck halfway through. She realized that she wasn’t going to get there in one step, but couldn’t quite structure the multiple levels needed to get to the answer, and also started getting a bit distracted again. It was no longer about cephalopods, so she wasn’t quite as interested.

Search volume will grow in new areas as Google’s capabilities develop

This I think is a whole area that, as Google’s capabilities develop to answer more complex queries and as we start to trust and learn that those kind of queries can be answered, what we see is that there is going to be increasing, growing search volume in new areas. So I’m going to link to a post I wrote about a presentation I gave about the next trillion searches. This is my hypothesis that essentially, very broad brush strokes, there are a trillion desktop searches a year. There are a trillion mobile searches a year. There’s another trillion out there in searches that we don’t do yet because they can’t be answered well. I’ve got some data to back that up and some arguments why I think it’s about that size. But I think this is kind of closely related to this kind of thing, where you see kids get stuck on these kind of queries.

Incidentally, I’d encourage you to go and try this. It’s quite interesting, because as you work through trying to get the answer, you’ll find search results that appear to give the answer. So, for example, I think there was an About.com page that actually purported to give the answer. It said, “What day of the week is the vice president’s birthday on?” But it had been written a year before, and there was no date on the page. So actually it was wrong. It said Thursday. That was the answer in 2016 or 2017. So that just, again, points to the difference between primary research, the difference between answering a question and truth. I think there’s a lot of kind of philosophical questions baked away in there.

Kids get comfortable with how they search – even if it’s wrong

So we’re going to wrap up with possibly my favorite anecdote of the user research that these guys did, which was that they said some of these kids, somewhere in this developing stage, get very attached to searching in one particular way. I guess this is kind of related to the visual search thing. They find something that works for them. It works once. They get comfortable with it, they’re familiar with it, and they just do that for everything, whether it’s appropriate or not. My favorite example was this one child who apparently looked for information about both dolphins and the vice president of the United States on the SpongeBob SquarePants website, which I mean maybe it works for dolphins, but I’m guessing there isn’t an awful lot of VP information.

So anyway, I hope you’ve enjoyed this little adventure into how kids search and maybe some things that we can learn from it. Drop some anecdotes of your own in the comments. I’d love to hear your experiences and some of the funny things that you’ve learnt along the way. Take care.

Video transcription by Speechpad.com



Google’s August 1st Core Update: Week 1

Posted by Dr-Pete

On August 1, Google (via Danny Sullivan’s @searchliaison account) announced that they released a “broad core algorithm update.” Algorithm trackers and webmaster chatter confirmed multiple days of heavy ranking flux, including our own MozCast system:

Temperatures peaked on August 1-2 (both around 114°F), with a 4-day period of sustained rankings flux (purple bars are all over 100°F). While this has settled somewhat, yesterday’s data suggests that we may not be done.

August 2nd set a 2018 record for MozCast at 114.4°F. Keep in mind that, while MozCast was originally tuned to an average temperature of 70°F, 2017-2018 average temperatures have been much higher (closer to 90° in 2018).

Temperatures by Vertical

There’s been speculation that this algo update targeted so-called YMYL queries (Your Money or Your Life) and disproportionately impacted health and wellness sites. MozCast is broken up into 20 keyword categories (roughly corresponding to Google Ads categories). Here are the August 2nd temperatures by category:

At first glance, the “Health” category does appear to be the most impacted. Keywords in that category had a daily average temperature of 124°F. Note, though, that all categories showed temperatures over 100°F on August 1st – this isn’t a situation where one category was blasted and the rest were left untouched. It’s also important to note that this pattern shifted during the other three days of heavy flux, with other categories showing higher average temperatures. The multi-day update impacted a wide range of verticals.

Top 30 winners

So, who were the big winners (so far) of this update? I always hesitate to do a winners/losers analysis – while useful, especially for spotting patterns, there are plenty of pitfalls. First and foremost, a site can gain or lose SERP share for many reasons that have nothing to do with algorithm updates. Second, any winners/losers analysis is only a snapshot in time (and often just one day).

Since we know that this update spanned multiple days, I’ve decided to look at the percentage increase (or decrease) in SERP share between July 31st and August 7th. In this analysis, “Share” is a raw percentage of page-1 rankings in the MozCast 10K data set. I’ve limited this analysis to only sites that had at least 25 rankings across our data set on July 31 (below that the data gets very noisy). Here are the top 30…

The first column is the percentage increase across the 7 days. The final column is the overall share – this is very low for all but mega-sites (Wikipedia hovers in the colossal 5% range).

Before you over-analyze, note the second column – this is the percent change from the highest July SERP share for that site. What the 7-day share doesn’t tell us is whether the site is naturally volatile. Look at Time.com (#27) for a stark example. Time Magazine saw a +19.5% lift over the 7 days, which sounds great, except that they landed on a final share that was down 54.4% from their highest point in July. As a news site, Time’s rankings are naturally volatile, and it’s unclear whether this has much to do with the algorithm update.

Similarly, LinkedIn, AMC Theaters, OpenTable, World Market, MapQuest, and RE/MAX all show highs in July that were near or above their August 7th peaks. Take their gains with a grain of salt.
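
For anyone who wants to run a similar winners/losers analysis on their own rank tracking data, here’s a rough pandas sketch. The file name and column names are assumptions for illustration; MozCast’s actual pipeline isn’t public:

```python
import pandas as pd

# Assumed input: one row per (site, date) with counts of page-1 rankings
# across a fixed keyword set, e.g. exported from a rank tracker.
df = pd.read_csv("serp_share.csv")  # columns: site, date, rankings

pivot = df.pivot(index="site", columns="date", values="rankings").fillna(0)

# Only sites with at least 25 rankings at the start, to cut noise.
pivot = pivot[pivot["2018-07-31"] >= 25]

july = [c for c in pivot.columns if c.startswith("2018-07")]
result = pd.DataFrame({
    "pct_7day": (pivot["2018-08-07"] / pivot["2018-07-31"] - 1) * 100,
    # Change vs. each site's July high flags naturally volatile sites
    # (like news publishers) whose "gains" may just be normal churn.
    "pct_vs_july_max": (pivot["2018-08-07"] / pivot[july].max(axis=1) - 1) * 100,
})
print(result.sort_values("pct_7day", ascending=False).head(30))
```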

Top 30 losers

We can run the same analysis for the sites that lost the most ground. In this case, the “Max %” is calculated against the July low. Again, we want to be mindful of any site where the 7-day drop looks a lot different than the drop from that site’s July low-point…

Comparing the first two columns, Verywell Health immediately stands out. While the site ended the 7-day period down 52.3%, it was up just over 200% from July lows. It turns out that this site was sitting very low during the first week of July and then saw a jump in SERP share. Interestingly, Verywell Family and Verywell Fit also appear on our top 30 losers list, suggesting that there’s a deeper story here.

Anecdotally, it’s easy to spot a pattern of health and wellness sites in this list, including big players like Prevention and LIVESTRONG. Whether this list represents the entire world of sites hit by the algorithm update is impossible to say, but our data certainly seems to echo what others are seeing.

Are you what you E-A-T?

There’s been some speculation that this update is connected to Google’s recent changes to their Quality Rater Guidelines. While it’s very unlikely that manual ratings based on the new guidelines would drive major ranking shifts (especially so quickly), it’s entirely plausible that the guideline updates and this algorithm update share a common philosophical view of quality and Google’s latest thinking on the subject.

Marie Haynes’ post theorizing the YMYL connection also raises the idea that Google may be looking more closely at E-A-T signals (Expertise, Authoritativeness and Trust). While certainly an interesting theory, I can’t adequately address that question with this data set. Declines in sites like Fortune, IGN and Android Central pose some interesting questions about authoritativeness and trust outside of the health and wellness vertical, but I hesitate to speculate based only on a handful of outliers.

If your site has been impacted in a material way (including significant traffic gains or drops), I’d love to hear more details in the comments section. If you’ve taken losses, try to isolate whether those losses are tied to specific keywords, keyword groups, or pages/content. For now, I’d advise that this update could still be rolling out or being tweaked, and we all need to keep our eyes open.



Rewriting the Beginner’s Guide to SEO, Chapter 4: On-Page Optimization

Posted by BritneyMuller

Chapter Four of the Beginner’s Guide to SEO rewrite is chock full of on-page SEO learnings. After all the great feedback you’ve provided thus far on our outline, Chapter One, Chapter Two, and Chapter Three, we’re eager to hear how you feel about Chapter Four. What really works for you? What do you think is missing? Read on, and let us know your thoughts in the comments!


Chapter 4: On-Page Optimization

Use your research to craft your message.

Now that you know how your target market is searching, it’s time to dive into on-page optimization, the practice of crafting web pages that answer searchers’ questions. On-page SEO is multifaceted, and extends beyond content into other things like schema and meta tags, which we’ll discuss at greater length in the next chapter on technical optimization. For now, put on your wordsmithing hats — it’s time to create your content!

Creating your content

Applying your keyword research

In the last chapter, we learned methods for discovering how your target audience is searching for your content. Now, it’s time to put that research into practice. Here is a simple outline to follow for applying your keyword research:

  1. Survey your keywords and group those with similar topics and intent. Each group will become a single page, rather than a separate page for every keyword variation.
  2. If you haven’t done so already, evaluate the SERP for each keyword or group of keywords to determine what type and format your content should be. Some characteristics of ranking pages to take note of:
    1. Are they image or video heavy?
    2. Is the content long-form or short and concise?
    3. Is the content formatted in lists, bullets, or paragraphs?
  3. Ask yourself, “What unique value could I offer to make my page better than the pages that are currently ranking for my keyword?”

On-page optimization allows you to turn your research into content your audience will love. Just make sure to avoid falling into the trap of low-value tactics that could hurt more than help!

Low-value tactics to avoid

Your web content should exist to answer searchers’ questions, to guide them through your site, and to help them understand your site’s purpose. Content should not be created for the purpose of ranking highly in search alone. Ranking is a means to an end, the end being to help searchers. If we put the cart before the horse, we risk falling into the trap of low-value content tactics.

Some of these tactics were introduced in Chapter 2, but by way of review, let’s take a deeper dive into some low-value tactics you should avoid when crafting search engine optimized content.

Thin content

While it’s common for a website to have unique pages on different topics, an older content strategy was to create a page for every single iteration of your keywords in order to rank on page 1 for those highly specific queries.

For example, if you were selling bridal dresses, you might have created individual pages for bridal gowns, bridal dresses, wedding gowns, and wedding dresses, even if each page was essentially saying the same thing. A similar tactic for local businesses was to create multiple pages of content for each city or region from which they wanted clients. These “geo pages” often had the same or very similar content, with the location name being the only unique factor.

Tactics like these clearly weren’t helpful for users, so why did publishers do it? Google wasn’t always as good as it is today at understanding the relationships between words and phrases (or semantics). So, if you wanted to rank on page 1 for “bridal gowns” but you only had a page on “wedding dresses,” that may not have cut it.

This practice created tons of thin, low-quality content across the web, which Google addressed specifically with its 2011 update known as Panda. This algorithm update penalized low-quality pages, which resulted in more quality pages taking the top spots of the SERPs. Google continues to iterate on this process of demoting low-quality content and promoting high-quality content today.

Google is clear that you should have a comprehensive page on a topic instead of multiple, weaker pages for each variation of a keyword.

Depiction of distinct pages for each keyword variation versus one page covering multiple variations

Duplicate content

Like it sounds, “duplicate content” refers to content that is shared between domains or between multiple pages of a single domain. “Scraped” content goes a step further, and entails the blatant and unauthorized use of content from other sites. This can include taking content and republishing as-is, or modifying it slightly before republishing, without adding any original content or value.

There are plenty of legitimate reasons for internal or cross-domain duplicate content, so Google encourages the use of a rel=canonical tag to point to the original version of the web content. While you don’t need to know about this tag just yet, the main thing to note for now is that your content should be unique in word and in value.
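As a quick sketch of what that looks like in practice (the URLs here are placeholders), the tag is placed in the head of the duplicate page and points to the preferred version:

<head>
  <!-- Tells search engines that the original version of this content lives at the href below -->
  <link rel="canonical" href="https://www.example.com/original-page/" />
</head>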

Depiction of how duplicate content looks between pages.

Cloaking

A basic tenet of search engine guidelines is to show the same content to the engine’s crawlers that you’d show to a human visitor. This means that you should never hide text in the HTML code of your website that a normal visitor can’t see.

When this guideline is broken, search engines call it “cloaking” and take action to prevent these pages from ranking in search results. Cloaking can be accomplished in any number of ways and for a variety of reasons, both positive and negative. Below is an example of an instance where Spotify showed different content to users than to Google.

Spotify shows a login page to Google.
Spotify shows a National Philharmonic Orchestra landing page to logged-in visitors.

In some cases, Google may let practices that are technically cloaking pass because they contribute to a positive user experience. For more on the subject of cloaking and the levels of risk associated with various tactics, see our article on White Hat Cloaking.

Keyword stuffing

If you’ve ever been told, “You need to include {critical keyword} on this page X times,” you’ve seen the confusion over keyword usage in action. Many people mistakenly think that if you just include a keyword within your page’s content X times, you will automatically rank for it. The truth is, although Google looks for mentions of keywords and related concepts on your site’s pages, the page itself has to add value outside of pure keyword usage. If a page is going to be valuable to users, it won’t sound like it was written by a robot, so incorporate your keywords and phrases naturally in a way that is understandable to your readers.

Below is an example of a keyword-stuffed page of content that also uses another old method: bolding all your targeted keywords. Oy.

Screenshot of a site that bolds keywords in a paragraph.

Auto-generated content

Arguably one of the most offensive forms of low-quality content is the kind that is auto-generated, or created programmatically with the intent of manipulating search rankings and not helping users. You may recognize some auto-generated content by how little sense it makes when read — it’s technically made up of words, but they’re strung together by a program rather than a human being.

Gibberish text on a webpage

It is worth noting that advancements in machine learning have contributed to more sophisticated auto-generated content that will only get better over time. This is likely why in Google’s quality guidelines on automatically generated content, Google specifically calls out the brand of auto-generated content that attempts to manipulate search rankings, rather than any-and-all auto-generated content.

What to do instead: 10x it!

There is no “secret sauce” to ranking in search results. Google ranks pages highly because it has determined they are the best answers to the searcher’s questions. In today’s search engine, it’s not enough that your page isn’t duplicated, spammy, or broken. Your page has to provide value to searchers and be better than any other page Google is currently serving as the answer to a particular query. Here’s a simple formula for content creation:

  • Search the keyword(s) you want your page to rank for
  • Identify which pages are ranking highly for those keywords
  • Determine what qualities those pages possess
  • Create content that’s better than that

We like to call this 10x content. If you create a page on a keyword that is 10x better than the pages being shown in search results (for that keyword), Google will reward you for it, and better yet, you’ll naturally get people linking to it! Creating 10x content is hard work, but will pay dividends in organic traffic.

Just remember, there’s no magic number when it comes to words on a page. What we should be aiming for is whatever sufficiently satisfies user intent. Some queries can be answered thoroughly and accurately in 300 words while others might require 1,000 words!

Pro tip: Don’t reinvent the wheel!
If you already have content on your website, save yourself time by evaluating which of those pages are already bringing in good amounts of organic traffic and converting well. Refurbish that content on different platforms to help get more visibility to your site. On the other side of the coin, evaluate what existing content isn’t performing as well and adjust it, rather than starting from square one with all new content.

NAP: A note for local businesses

If you’re a business that makes in-person contact with your customers, be sure to include your business name, address, and phone number (NAP) prominently, accurately, and consistently throughout your site’s content. This information is often displayed in the footer or header of a local business website, as well as on any “contact us” pages. You’ll also want to mark up this information using local business schema. Schema and structured data are discussed more at length in the “Code” section of this chapter.
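As a rough sketch of what that markup can look like (the business details below are invented purely for illustration), LocalBusiness schema is commonly added as JSON-LD in the page’s HTML. Whatever you publish here should exactly match the NAP displayed in your visible content:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Coffee Roasters",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Seattle",
    "addressRegion": "WA",
    "postalCode": "98101"
  },
  "telephone": "+1-206-555-0100"
}
</script>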

If you are a multi-location business, it’s best to build unique, optimized pages for each location. For example, a business that has locations in Seattle, Tacoma, and Bellevue should consider having a page for each:

example.com/seattle
example.com/tacoma
example.com/bellevue

Each page should be uniquely optimized for that location, so the Seattle page would have unique content discussing the Seattle location, list the Seattle NAP, and even testimonials specifically from Seattle customers. If there are dozens, hundreds, or even thousands of locations, a store locator widget could be employed to help you scale.

Hope you still have some energy left after handling the difficult-yet-rewarding task of putting together a page that is 10x better than your competitors’ pages, because there are just a few more things needed before your page is complete! In the next sections, we’ll talk about the other on-page optimizations your pages need, as well as naming and organizing your content.

Beyond content: Other optimizations your pages need

Can I just bump up the font size to create paragraph headings?

How can I control what title and description show up for my page in search results?

After reading this section, you’ll understand other important on-page elements that help search engines understand the 10x content you just created, so let’s dive in!

Header tags

Header tags are an HTML element used to designate headings on your page. The main header tag, called an H1, is typically reserved for the title of the page. It looks like this:

 <h1>Page Title</h1>

There are also sub-headings that go from H2 (<h2>) to H6 (<h6>) tags, although using all of these on a page is not required. The hierarchy of header tags goes from H1 to H6 in descending order of importance.

Each page should have a unique H1 that describes the main topic of the page; it is often automatically created from the title of the page. As the main descriptive title of the page, the H1 should contain that page’s primary keyword or phrase. You should avoid using header tags to mark up non-heading elements, such as navigational buttons and phone numbers. Use header tags to introduce what the following content will discuss.

Take this page about touring Copenhagen, for example:

<h1>Copenhagen Travel Guide</h1>
<h2>Copenhagen by the Seasons</h2>
<h3>Visiting in Winter</h3>
<h3>Visiting in Spring</h3>

The main topic of the page is introduced in the main <h1> heading, and each additional heading is used to introduce a new sub-topic. In this example, the <h2> is more specific than the <h1>, and the <h3> tags are more specific than the <h2>. This is just an example of a structure you could use.

Although what you choose to put in your header tags can be used by search engines to evaluate and rank your page, it’s important to avoid inflating their importance. Header tags are one among many on-page SEO factors, and typically would not move the needle like quality backlinks and content would, so focus on your site visitors when crafting your headings.

Internal links

In Chapter 2, we discussed the importance of having a crawlable website. Part of a website’s crawlability lies in its internal linking structure. When you link to other pages on your website, you ensure that search engine crawlers can find all your site’s pages, you pass link equity (ranking power) to other pages on your site, and you help visitors navigate your site.

The importance of internal linking is well established, but there can be confusion over how this looks in practice.

Link accessibility

Links that require a click to be revealed (like those tucked inside a navigation drop-down) are often hidden from search engine crawlers, so if the only links to internal pages on your website are through these types of links, you may have trouble getting those pages indexed. Opt instead for links that are directly accessible on the page.
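To make the contrast concrete, here’s a minimal sketch (the URL and function name are hypothetical):

<!-- Harder for crawlers: the destination URL only exists inside JavaScript -->
<span onclick="navigateTo('/services/')">Services</span>

<!-- Crawler-friendly: a standard link whose destination is directly accessible in the HTML -->
<a href="/services/">Services</a>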

Anchor text

Anchor text is the text with which you link to pages. Below, you can see an example of what a hyperlink using the bare URL as its anchor text and a hyperlink using descriptive anchor text look like in the HTML:

<a href="http://www.example.com/">http://www.example.com/</a>
<a href="http://www.example.com/">Keyword Text</a>

On live view, that would look like this:

http://www.example.com/

Keyword Text

The anchor text sends signals to search engines regarding the content of the destination page. For example, if I link to a page on my site using the anchor text “learn SEO,” that’s a good indicator to search engines that the targeted page is one at which people can learn about SEO. Be careful not to overdo it, though. Too many internal links using the same, keyword-stuffed anchor text can appear to search engines that you’re trying to manipulate a page’s ranking. It’s best to make anchor text natural rather than formulaic.

Link volume

In Google’s General Webmaster Guidelines, they say to “limit the number of links on a page to a reasonable number (a few thousand at most).” This is part of Google’s technical guidelines, rather than the quality guideline section, so having too many internal links isn’t something that on its own is going to get you penalized, but it does affect how Google finds and evaluates your pages.

The more links on a page, the less equity each link can pass to its destination page. A page only has so much equity to go around.

Depiction of how link equity works between pages

So it’s safe to say that you should only link when you mean it! You can learn more about link equity from our SEO Learning Center.

Aside from passing authority between pages, a link is also a way to help users navigate to other pages on your site. This is a case where doing what’s best for search engines is also doing what’s best for searchers. Too many links not only dilute the authority of each link, but they can also be unhelpful and overwhelming. Consider how a searcher might feel landing on a page that looks like this:

Welcome to our gardening website! We have many articles on gardening, how to garden, and helpful tips on herbs, fruits, vegetables, perennials, and annuals. Learn more about gardening from our gardening blog.

Whew! Not only is that a lot of links to process, but it also reads pretty unnaturally and doesn’t contain much substance (which could be considered “thin content” by Google). Focus on quality and helping your users navigate your site, and you likely won’t have to worry about too many links.

Redirection

Removing and renaming pages is a common practice, but in the event that you do move a page, make sure to update the links to that old URL! At the very least, you should make sure to redirect the URL to its new location, but if possible, update all internal links to that URL at the source so that users and crawlers don’t have to pass through redirects to arrive at the destination page. If you choose to redirect only, be careful to avoid redirect chains that are too long (Google says, “Avoid chaining redirects… keep the number of redirects in the chain low, ideally no more than 3 and fewer than 5.”)

Example of a redirect chain:

(original location of content) example.com/location1 >> example.com/location2 >> (current location of content) example.com/location3

Better:

example.com/location1 >> example.com/location3

Image optimization

Images are the biggest culprits of slow web pages! The best way to solve for this is to compress your images. While there is no one-size-fits-all approach to image compression, the way to go is to test various options (“save for web” exports, different image dimensions, and compression tools like Optimizilla or ImageOptim for Mac, along with their Windows alternatives) and evaluate what works best for your site.

Another way to help optimize your images (and improve your page speed) is by choosing the right image format.

How to choose which image format to use:

Flowchart for how to choose image formats (Source: Google’s image optimization guide)

Choosing image formats:

  • If your image requires animation, use a GIF.
  • If you don’t need to preserve high image resolution, use JPEG (and test out different compression settings).
  • If you do need to preserve high image resolution, use PNG.
    • If your image has a lot of colors, use PNG-24.
    • If your image doesn’t have a lot of colors, use PNG-8.

There are also ways to make a slower-loading page feel faster for visitors, such as rendering a colored placeholder box or a very blurry, low-resolution version of an image while the full-resolution version loads. We will discuss these options in more detail in Chapter 5.

Pro tip: Don’t forget about thumbnails!
Thumbnails (especially for e-commerce sites) can be a huge page speed slowdown. Optimize them properly to avoid slow pages and to help retain more qualified visitors.

Alt text

Alt text (alternative text) within images is a principle of web accessibility, and is used to describe images to the visually impaired via screen readers. It’s important to have alt text descriptions so that any visually impaired person can understand what the pictures on your website depict.

Search engine bots also crawl alt text to better understand your images, which gives you the added benefit of providing better image context to search engines. Just ensure that your alt descriptions read naturally for people, and avoid stuffing keywords for search engines.

Bad:

<img src="grumpycat.gif" alt="grumpy cat, cat is grumpy, grumpy cat gif">

Good:

<img src="grumpycat.gif" alt="A black cat looking very grumpy at a big spotted dog">

Submit an image sitemap

To ensure that Google can crawl and index your images, submit an image sitemap in your Google Search Console account. This helps Google discover images they may have otherwise missed.
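As a sketch, an image sitemap is an ordinary XML sitemap with image entries added under Google’s image sitemap namespace (the URLs below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/sample-page/</loc>
    <!-- One image:image entry per image that appears on the page -->
    <image:image>
      <image:loc>https://example.com/images/grumpy-cat.jpg</image:loc>
    </image:image>
  </url>
</urlset>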

Formatting for readability & featured snippets

Your page could contain the best content ever written on a subject, but if it’s formatted improperly, your audience might never read it! While we can never guarantee that visitors will read our content, there are some principles that can promote readability, including:

  • Text size and color – Avoid fonts that are too tiny. Google recommends 16+px font to minimize the need for “pinching and zooming” on mobile. The text color in relation to the page’s background color should also promote readability. Additional information on text can be found in the website accessibility guidelines and Google’s web accessibility fundamentals.
  • Headings – Breaking up your content with helpful headings can help readers navigate the page. This is especially useful on long pages where a reader might be looking only for information from a particular section.
  • Bullet points – Great for lists, bullet points can help readers skim and more quickly find the information they need.
  • Paragraph breaks – Avoiding walls of text can help prevent page abandonment and encourage site visitors to read more of your page.
  • Supporting media – When appropriate, include images, videos, and widgets that would complement your content.
  • Bold and italics for emphasis – Putting words in bold or italics can add emphasis, so they should be the exception, not the rule. Appropriate use of these formatting options can call out important points you want to communicate.

Formatting can also affect your page’s ability to show up in featured snippets, those “position 0” results that appear above the rest of organic results.

Screenshot of a featured snippet

There is no special code that you can add to your page to show up here, nor can you pay for this placement, but taking note of the query intent can help you better structure your content for featured snippets. For example, if you’re trying to rank for “cake vs. pie,” it might make sense to include a table in your content, with the benefits of cake in one column and the benefits of pie in the other. Or if you’re trying to rank for “best restaurants to try in Portland,” that could indicate Google wants a list, so formatting your content in bullets could help.
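For the “cake vs. pie” example, that table could be as simple as this sketch (the rows are invented placeholders):

<table>
  <tr>
    <th>Benefits of cake</th>
    <th>Benefits of pie</th>
  </tr>
  <tr>
    <td>Easier to decorate and layer</td>
    <td>Flakier, buttery crust</td>
  </tr>
  <tr>
    <td>Simple to slice and serve</td>
    <td>Pairs well with fruit fillings</td>
  </tr>
</table>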

Title tags

A page’s title tag is a descriptive HTML element that specifies the title of a particular web page. It is nested within the head tag of each page and looks like this:

<head>
  <title>Example Title</title>
</head>

Each page on your website should have a unique, descriptive title tag. What you input into your title tag field will show up here in search results, although in some cases Google may adjust how your title tag appears in search results.

Screenshot with the page title highlighted in the SERPs

It can also show up in web browsers…

Screenshot of a page title in a browser window

Or when you share the link to your page on certain external websites…

Screenshot of a page title shared on an external website

Your title tag has a big role to play in people’s first impression of your website, and it’s an incredibly effective tool for drawing searchers to your page over any other result on the SERP. The more compelling your title tag, combined with high rankings in search results, the more visitors you’ll attract to your website. This underscores that SEO is not only about search engines, but rather the entire user experience.

What makes an effective title tag?

  • Keyword usage: Having your target keyword in the title can help both users and search engines understand what your page is about. Also, the closer to the front of the title tag your keywords are, the more likely a user will be to read them (and hopefully click) and the more helpful they can be for ranking.
  • Length: On average, search engines display the first 50–60 characters (~512 pixels) of a title tag in search results. If your title tag exceeds the characters allowed on that SERP, an ellipsis “…” will appear where the title was cut off. While sticking to 50–60 characters is safe, never sacrifice quality for strict character counts. If you can’t get your title tag down to 60 characters without harming its readability, go longer (within reason).
  • Branding: At Moz, we love to end our title tags with a brand name mention because it promotes brand awareness and creates a higher click-through rate among people who are familiar with Moz. Sometimes it makes sense to place your brand at the beginning of the title tag, such as on your homepage, but be mindful of what you are trying to rank for and place those words closer toward the beginning of your title tag (see the sketch below).
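Putting those qualities together, here’s one hypothetical sketch of a title tag for the Seattle location page imagined earlier: keyword phrase up front, brand name at the end, and within the typical length limit.

<head>
  <!-- Under 60 characters: target keywords first, brand name last -->
  <title>Seattle Coffee Shop and Roastery | Example Coffee Roasters</title>
</head>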

Meta descriptions

Like title tags, meta descriptions are HTML elements that describe the contents of the page that they’re on. They are also nested in the head tag, and look like this:

<head>
  <meta name="description" content="Description of page here."/>
</head>

What you input into the description field will show up here in search results:

In many cases though, Google will choose different snippets of text to display in search results, dependent upon the searcher’s query.

For example, if you search “find backlinks,” Google will provide this meta description as it deems it more relevant to the specific search:

The meta description pulls each step from the page content and lists it out.

While the actual meta description is:

How to find backlinks? Step 1: Navigate to Link Explorer, a tool used to research the backlink profile of a website. It will show you the quality of backlinks using metrics like Domain Authority, Page Authority, and Spam Score. You can do a good amount of backlink research with the free version or pay to receive unlimited backlink data.

This behavior often works in your favor, improving the description shown for unique searches. However, don’t let it deter you from writing a default page meta description — they’re still extremely valuable.

What makes an effective meta description?

The qualities that make an effective title tag also apply to effective meta descriptions. Although Google says that meta descriptions are not a ranking factor, like title tags, they are incredibly important for click-through rate.

  • Relevance: Meta descriptions should be highly relevant to the content of your page, so yours should summarize the page’s key concept in some form. Give the searcher enough information to know they’ve found a page relevant enough to answer their question, without giving away so much that it eliminates the need to click through to your web page.
  • Length: Search engines tend to truncate meta descriptions to around 300 characters. It’s best to write meta descriptions between 150–300 characters in length. On some SERPs, you’ll notice that Google gives much more real estate to the descriptions of some pages. This usually happens for web pages ranking right below a featured snippet.

URL structure: Naming and organizing your pages

URL stands for Uniform Resource Locator. URLs are the locations or addresses for individual pieces of content on the web. Like title tags and meta descriptions, search engines display URLs on the SERPs, so URL naming and format can impact click-through rates. Not only do searchers use them to make decisions about which web pages to click on, but URLs are also used by search engines in evaluating and ranking pages.

Clear page naming

Search engines require unique URLs for each page on your website so they can display your pages in search results, but clear URL structure and naming is also helpful for people who are trying to understand what a specific URL is about. For example, which URL is clearer?

example.com/desserts/chocolate-pie

OR

example.com/asdf/453?=recipe-23432-1123

Searchers are more likely to click on URLs that reinforce and clarify what information is contained on that page, and less likely to click on URLs that confuse them.

Page organization

If you discuss multiple topics on your website, you should also make sure to avoid nesting pages under irrelevant folders. For example:

example.com/commercial-litigation/alimony

It would have been better for this fictional multi-practice law firm website to nest alimony under “/family-law/” than to host it under the irrelevant “/commercial-litigation/” section of the website.

The folders in which you locate your content can also send signals about the type, not just the topic, of your content. For example, dated URLs can indicate time-sensitive content. While appropriate for news-based websites, dated URLs for evergreen content can actually turn searchers away because the information seems outdated. For example:

example.com/2015/april/what-is-seo/

vs.

example.com/what-is-seo/

Since the topic “What is SEO?” isn’t confined to a specific date, it’s best to host on a non-dated URL structure or else risk your information appearing stale.

As you can see, what you name your pages, and in what folders you choose to organize your pages, is an important way to clarify the topic of your page to users and search engines.

URL length

While it is not necessary to have a completely flat URL structure, many click-through rate studies indicate that, when given the choice between a longer URL and a shorter URL, searchers often prefer shorter URLs. Like title tags and meta descriptions that are too long, too-long URLs will also be cut off with an ellipsis. Just remember, having a descriptive URL is just as important, so don’t cut down on URL length if it means sacrificing descriptiveness.

example.com/services/plumbing/plumbing-repair/toilets/leaks/

vs.

example.com/plumbing-repair/toilets/

Minimizing length, both by including fewer words in your page names and removing unnecessary subfolders, makes your URLs easier to copy and paste, as well as more clickable.

Keywords in URL

If your page is targeting a specific term or phrase, make sure to include it in the URL. However, don’t go overboard by trying to stuff in multiple keywords for purely SEO purposes. It’s also important to watch out for repeat keywords in different subfolders. For example, you may have naturally incorporated a keyword into a page name, but if located within other folders that are also optimized with that keyword, the URL could begin to appear keyword-stuffed.

Example:

example.com/seattle-dentist/dental-services/dental-crowns/

Keyword overuse in URLs can appear spammy and manipulative. If you aren’t sure whether your keyword usage is too aggressive, just read your URL through the eyes of a searcher and ask, “Does this look natural? Would I click on this?”

Static URLs

The best URLs are those that can easily be read by humans, so you should avoid the overuse of parameters, numbers, and symbols. Using technologies like mod_rewrite for Apache and ISAPI_rewrite for Microsoft, you can easily transform dynamic URLs like this:

http://moz.com/blog?id=123

into a more readable static version like this:

https://moz.com/google-algorithm-change

Hyphens for word separation

Not all web applications accurately interpret separators like underscores (_), plus signs (+), or spaces (%20). Search engines also do not understand how to separate words in URLs when they run together without a separator (example.com/optimizefeaturedsnippets/). Instead, use the hyphen character (-) to separate words in a URL.

Geographic Modifiers in URLs

Some local business owners omit geographic terms that describe their physical location or service area because they believe that search engines can figure this out on their own. On the contrary, it’s vital that local business websites’ content, URLs, and other on-page assets make specific mention of city names, neighborhood names, and other regional descriptors. Let both consumers and search engines know exactly where you are and where you serve, rather than relying on your physical location alone.

Protocols: HTTP vs. HTTPS

A protocol is that “http” or “https” preceding your domain name. Google recommends that all websites have a secure protocol (the “s” in “https” stands for “secure”). To ensure that your URLs are using the https:// protocol instead of http://, you must obtain an SSL (Secure Sockets Layer) certificate. SSL certificates are used to encrypt data. They ensure that any data passed between the web server and browser of the searcher remains private. As of July 2018, Google Chrome displays “not secure” for all HTTP sites, which could cause these sites to appear untrustworthy to visitors and result in them leaving the site.


If you’ve made it this far, congratulations on surpassing the halfway point of the Beginner’s Guide to SEO! So far, we’ve learned how search engines crawl, index, and rank content, how to find keyword opportunities to target, and now, you know the on-page optimization strategies that can help your pages get found. Next, buckle up, because we’ll be diving into the exciting world of technical SEO!


Take the 2018 Moz Local Search Marketing Industry Survey

Posted by MiriamEllis

Local search marketing is a dynamic and exciting discipline, but like many digital professions, it can be a bit isolating. You may find yourself running into questions that don’t have a ready answer, things like…

  • What sort of benchmarks should I be measuring my daily work by?
  • Do my clients’ needs align with what my colleagues are seeing?
  • Am I over/undervaluing the role of Google in my future work?

Here’s a chance to find out what your peers are observing and doing on a day-to-day basis.

The Moz Local Search Marketing Industry Survey will dive into job descriptions, industries served, most effective tactics, tool usage, and the non-stop growth of Google’s local features. We’ll even touch on how folks may have been impacted by the recent August 1 algorithm update, if at all. In-house local SEOs, agency local SEOs, and other digital marketers are all welcome! All participants will be entered into a drawing for a $100 Amazon gift card. The winner will be notified on 8/27/18.

Give just 5 minutes of your time and you’ll get insights and quotable statistics back when we publish the survey results. Be sure to participate by 8/24/2018. We sincerely appreciate your contributions!

Take the Local SEO Survey Now
