Pop-Ups, Overlays, Modals, Interstitials, and How They Interact with SEO – Whiteboard Friday

Posted by randfish

Have you thought about what your pop-ups might be doing to your SEO? There are plenty of considerations, from their timing and how they affect your engagement rates, all the way to Google’s official guidelines on the matter. In this episode of Whiteboard Friday, Rand goes over all the reasons why you ought to carefully consider how your overlays and modals work and whether the gains are worth the sacrifice.

https://fast.wistia.net/embed/iframe/ohomyv8n62?videoFoam=true

Pop-ups, modals, overlays, interstitials, and how they work with SEO

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about pop-ups, overlays, modals, interstitials, and all things like them. They have specific kinds of interactions with SEO. In addition to Google having some guidelines around them, they also can change how people interact with your website, and that can adversely or positively affect you accomplishing your goals, SEO and otherwise.

Types

So let’s walk through what these elements, these design and UX elements do, how they work, and best practices for how we should be thinking about them and how they might interfere with our SEO efforts.

Pop-ups

So, first up, let’s talk specifically about what each element is. A pop-up, okay, there are a few kinds. There are pop-ups that happen in new windows. New-window pop-ups are, basically, no good. Google hates those. They are fundamentally against them. Many browsers will block them automatically. Chrome does. Firefox does. In fact, users despise these as well. There are still some spammy and sketchy sites out there that use them, but, generally speaking, bad news.

Overlays

When we’re talking about a pop-up that happens in the same browser window, essentially it’s just a visual element, that’s often also referred to as an overlay. So, for the purposes of this Whiteboard Friday, we’ll call that an overlay. An overlay is basically like this, where you have the page’s content and there’s some smaller element, a piece, a box, a window, a visual of some kind that comes up and essentially says something like, “Sign up for my email newsletter,” with a place to enter your email, or, “Get my book now,” and you click that and get the book. Those types of overlays are pretty common on the web, and they do not create quite the same problems that pop-ups do, at least from Google’s perspective. However, as we’ll discuss later, there are some issues around them, especially on mobile.

Modals

Modals tend to be windows of interaction, more functional elements. Lightboxes for images are a very popular kind of modal. A modal is something where you are doing work inside that new box rather than in the content that’s underneath it. So a sign-in form that pops up over the rest of the content, but that doesn’t allow you to engage with the content underneath it, would be considered a modal. Generally, most of the time, these aren’t a problem, unless they involve something like spam or advertising, or something that takes you out of the user experience.

Interstitials

Then finally, interstitials. Many of these elements can also be called interstitial experiences, but a classic interstitial is something like what Forbes.com does. When you visit a Forbes article for the first time, you get this: “Welcome. Our sponsor of the day is Brawndo. Brawndo, it has what plants need.” Then you can continue after a certain number of seconds. These really piss people off, myself included. I really hate the interstitial experience. I understand that it’s an advertising thing. But, yeah, Google hates them too. Not quite enough to kick Forbes out of their SERPs entirely yet, but, fingers crossed, it will happen sometime soon. They have certainly penalized plenty of other folks who have gone with invasive or overly heavy interstitials over the years and made things pretty tough for them.

What are the factors that matter for SEO?

A) Timing

Well, it turns out timing is a big one. When the element appears matters. Basically, if the element shows up immediately upon page load, Google will consider it differently than if it shows up after a few minutes. So, for example, if you have a “Sign Up Now” overlay that pops up the second you visit the page, that’s going to be treated differently than something that appears when you’re 80% of the way through a blog post, or have just finished scrolling it. That will get treated very differently. Or it may actually have no effect on how Google treats the SEO, and then it really comes down to how users react.

Then there’s how long it lasts. With interstitials, especially advertising interstitials, there are some issues governing that, as with Forbes. There are also some issues around overlays that can’t be closed, and around how long a window can stay up, especially if it shows advertising and those types of things. Generally speaking, obviously, shorter is better, but you can get into trouble even with very short ones.
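
As a rough sketch of that timing advice, you can gate an overlay on engagement signals instead of firing it on page load. The function name and thresholds below are hypothetical illustrations, not anything prescribed in the transcript:

```typescript
// Decide whether a "Sign Up Now" overlay should appear yet.
// Thresholds are hypothetical; tune them against your own engagement metrics.
function shouldShowOverlay(scrollFraction: number, secondsOnPage: number): boolean {
  const SCROLL_THRESHOLD = 0.8; // visitor has scrolled 80% of the way through
  const TIME_THRESHOLD = 30;    // or has spent 30 seconds on the page
  return scrollFraction >= SCROLL_THRESHOLD || secondsOnPage >= TIME_THRESHOLD;
}

// Immediately on page load: suppressed.
console.log(shouldShowOverlay(0.05, 1));  // false
// After scrolling most of a blog post: allowed.
console.log(shouldShowOverlay(0.85, 12)); // true
```

In a browser you would feed this from scroll and timer events, and show the overlay only once the function first returns true.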

B) Interaction

Can that element easily be closed, and does it interfere with the content or readability? Google’s mobile guidelines, updated just a few months ago, now state that if an overlay or a modal or something similar interferes with a visitor’s ability to read the actual content on the page, Google may penalize it or strip the page’s mobile-friendly tag and any mobile-friendly ranking benefit. That’s obviously quite concerning for SEO.

C) Content

So there’s an exception, or an exclusion, to a lot of Google’s rules around this, which is if you have an element that is essentially asking for the user’s age, asking for some form of legal consent, or giving a warning about cookies, which is very popular in the EU and the UK because of the legal requirements around saying, “Hey, this website uses cookies,” and you have to agree to it. Those kinds of things actually get around Google’s issues. So Google will not give you a hard time if you have an overlay, interstitial, or modal that says, “Are you of legal drinking age in your country? Enter your birth date to continue.” They will not necessarily penalize those types of things.

Advertising, on the other hand, could get you into more trouble, as we have discussed. If it’s a call to action for the website itself, again, that could go either way. If it’s part of the user experience, generally you are just fine there. Meaning something like a modal where you get to a website and then you say, “Hey, I want to leave a comment,” and so there’s a modal that makes you log in, that type of a modal. Or you click on an image and it shows you a larger version of that image in a modal, again, no problem. That’s part of the user experience.

D) Conditions

Conditions matter as well. So if it is triggered from SERP visits versus not, meaning that if you have an exclusionary protocol in your interstitial, your overlay, your modal that says, “Hey, if someone’s visiting from Google, don’t show this to them,” or “If someone’s visiting from Bing, someone’s visiting from DuckDuckGo, don’t show this to them,” that can change how the search engines perceive it as well.
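
One way to implement that kind of exclusion, sketched here with a hypothetical function name and a deliberately short engine list, is to check the referrer before triggering the element:

```typescript
// Suppress the overlay for visits arriving from a search engine results page.
// The engine list is illustrative, not exhaustive.
const SEARCH_ENGINE_HOSTS = ["google.", "bing.", "duckduckgo."];

function isFromSearchEngine(referrer: string): boolean {
  try {
    const host = new URL(referrer).hostname;
    return SEARCH_ENGINE_HOSTS.some((engine) => host.includes(engine));
  } catch {
    return false; // empty or malformed referrer: treat as direct traffic
  }
}

// In the browser you would call this with document.referrer:
// if (!isFromSearchEngine(document.referrer)) { showOverlay(); }
```

Note that, as the transcript says, varying the experience for search visitors can itself change how engines perceive the page, so this is a tactic to apply carefully rather than a free pass.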

It’s also the case that this can change if you only show the element to cookied, logged-in, or logged-out types of users. Now, logged-out users means that everyone coming from a search engine could or will get it. But for logged-in users, for example, you can imagine that if you visit a page on a social media site and there’s a modal or an overlay that includes some notification around activity that you’ve already been performing on the site, that becomes more a part of the user experience. That’s not necessarily going to harm you.

Where it can hurt is the other way around, where you get visitors from search engines, they are logged out, and you require them to log in before seeing the content. Quora had a big issue with this for a long time, and they seem to have mostly resolved that through a variety of measures, and they’re fairly sophisticated about it. But you can see that Facebook still struggles with this, because a lot of their content, they demand that you log in before you can ever view or access it. That does keep some of their results out of Google, or certainly ranking lower.

E) Engagement impact

I think this is what Google’s ultimately trying to measure, and what they’re essentially saying is, “Hey, this is why we have these issues around this.” If you are hurting your click-through rate or increasing pogo-sticking, meaning that more people are clicking onto your website from Google and then immediately clicking the Back button when one of these things appears, that is a sign to Google that you have provided a poor user experience, that people are not willing to jump through whatever hoop you’ve created for them to get access to your content, and that suggests they don’t want to get there. So this is the ultimate thing you should be measuring. Some of these elements can still hurt you even if your engagement metrics look okay, but this is the big one.

Best practices

So some best practices around using all these types of elements on your website. I would strongly urge you to avoid elements that are significantly harming UX. If you’re willing to take a small sacrifice in user experience in exchange for a great deal of value because you capture people’s email addresses or you get more engagement of other different kinds, okay. But this would be something I’d watch.

There are three or four metrics that I’d urge you to check out to compare whether this is doing the right thing. Those are:

  • Bounce rate
  • Browse rate
  • Return visitor rates, meaning the percentage of people who come back to your site again and again, and
  • Time on site after the element appears

So those four will help tell you whether you are truly interfering badly with user experience.

On mobile, ensure that your crucial content is not covered up, that the reading experience, the browsing experience isn’t covered up by one of these elements. Please, whatever you do, make those elements easy and obvious to dismiss. This is part of Google’s guidelines around it, but it’s also a best practice, and it will certainly help your user experience metrics.

Only choose to keep one of these elements if you are finding that the sacrifice… and there’s almost always a sacrifice cost; you will hurt bounce rate or browse rate or return visitor rate or time on site. You will hurt it. The question is: is it a slight enough hurt in exchange for enough gain? That’s the trade-off you need to weigh. I think if you are hurting visitor interaction by a few seconds on average per visit, but you are getting 5% of your visitors to give you an email address, that’s probably worth it. If it’s more like 30 seconds and 1%, maybe not as good.
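
That trade-off can be framed as simple expected-value arithmetic. Every number below, including the per-second cost of lost engagement, is a hypothetical placeholder; the point is only the shape of the comparison:

```typescript
// Weigh value gained per visit against engagement cost per visit.
// All inputs below are hypothetical illustrations, not benchmarks.
function netValuePerVisit(
  conversionRate: number,     // fraction of visitors who convert (e.g., give an email)
  valuePerConversion: number, // what one conversion is worth to you, in dollars
  secondsLostPerVisit: number,
  costPerSecond: number       // what a second of lost engagement costs, in dollars
): number {
  return conversionRate * valuePerConversion - secondsLostPerVisit * costPerSecond;
}

// 5% capture an email worth $2, at a cost of ~3 seconds per visit:
console.log(netValuePerVisit(0.05, 2, 3, 0.01));  // roughly +0.07 per visit: probably worth it
// 1% capture, but 30 seconds lost per visit:
console.log(netValuePerVisit(0.01, 2, 30, 0.01)); // roughly -0.28 per visit: probably not
```

The hard part in practice is estimating the cost side, which is exactly what the four metrics above (bounce, browse, return visits, time on site) are for.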

Consider removing the elements from triggering if the visit comes from search engines. So if you’re finding that this works fine and great, but you’re having issues around search guidelines, you could consider potentially just removing the element from any visit that comes directly from a search engine and instead placing that in the content itself or letting it happen on a second page load, assuming that your browse rate is decently high. That’s a fine way to go as well.

If you are trying to get the most effective value out of these types of elements, it tends to be the case that the less common and less well used the visual element is, the more interaction and engagement it’s going to get. But the other side of that coin is that it can create a more frustrating experience. So if people are not familiar with the overlay or modal or interstitial visual layout design that you’ve chosen, they may engage more with it. They might not dismiss it out of hand, because they’re not used to it yet, but they can also get more frustrated by it. So, again, return to looking at those metrics.

With that in mind, hopefully you’ll be able to use these pop-ups, overlays, interstitials, modals, and all the other elements that interfere with user experience effectively, and without too much harm to your SEO.

And we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

There’s No Such Thing as a Site Migration

Posted by jonoalderson

Websites, like the businesses who operate them, are often deceptively complicated machines.

They’re fragile systems, and changing or replacing any one of the parts can easily affect (or even break) the whole setup — often in ways not immediately obvious to stakeholders or developers.

Even seemingly simple sites are often powered by complex technology, like content management systems, databases, and templating engines. There’s much more going on behind the scenes — technically and organizationally — than you can easily observe by crawling a site or viewing the source code.

When you change a website and remove or add elements, it’s not uncommon to introduce new errors, flaws, or faults.

That’s why I get extremely nervous whenever I hear a client or business announce that they’re intending to undergo a “site migration.”

Chances are, and experience suggests, that something’s going to go wrong.

Migrations vary wildly in scope

As an SEO consultant and practitioner, I’ve been involved in more “site migrations” than I can remember or count — for charities, startups, international e-commerce sites, and even global household brands. Every one has been uniquely challenging and stressful.

In each case, the businesses involved have underestimated (and in some cases, increased) the complexity, the risk, and the details involved in successfully executing their “migration.”

As a result, many of these projects negatively impacted performance and potential in ways that could have been easily avoided.

This isn’t a case of the scope of the “migration” being too big, but rather, a misalignment of understanding, objectives, methods, and priorities, resulting in stakeholders working on entirely different scopes.

The migrations I’ve experienced have varied from simple domain transfers to complete overhauls of server infrastructure, content management frameworks, templates, and pages — sometimes even scaling up to include the consolidation (or fragmentation) of multiple websites and brands.

In the minds of each organization, however, these have all been “migration” projects despite their significantly varying (and poorly defined) scopes. In each case, the definition and understanding of the word “migration” has varied wildly.

We suck at definitions

As an industry, we’re used to struggling with labels. We’re still not sure if we’re SEOs, inbound marketers, digital marketers, or just… marketers. The problem is that, when we speak to each other (and those outside of our industry), these words can carry different meanings and expectations.

Even amongst ourselves, a conversation between two digital marketers, analysts, or SEOs about their fields of expertise is likely to reveal that they have surprisingly different definitions of their roles, responsibilities, and remits. To them, words like “content” or “platform” might mean different things.

In the same way, “site migrations” vary wildly, in form, function, and execution — and when we discuss them, we’re not necessarily talking about the same thing. If we don’t clarify our meanings and have shared definitions, we risk misunderstandings, errors, or even offense.

Ambiguity creates risk

Poorly managed migrations can have a number of consequences beyond just drops in rankings, traffic, and performance. There are secondary impacts, too. They can also inadvertently:

  • Provide a poor user experience (e.g., old URLs now 404, or error states are confusing to users, or a user reaches a page different from what they expected).
  • Break or omit tracking and/or analytics implementations, resulting in loss of business intelligence.
  • Limit the size, shape, or scalability of a site, resulting in static, stagnant, or inflexible templates and content (e.g., omitting the ability to add or edit pages, content, and/or sections in a CMS), and a site which struggles to compete as a result.
  • Miss opportunities to benefit from what SEOs do best: blending an understanding of consumer demand and behavior, the market and competitors, and the brand in question to create more effective strategies, functionality and content.
  • Create conflict between stakeholders, when we need to “hustle” at the last minute to retrofit our requirements into an already complex project (“I know it’s about to go live, but PLEASE can we add analytics conversion tracking?”) — often at the cost of our reputation.
  • Waste future resource, where mistakes mean that future resource must be spent recouping equity lost to faults or omissions in the process, rather than building on and enhancing performance.

I should point out that there’s nothing wrong with hustle in this case; that, in fact, begging, borrowing, and stealing can often be a viable solution in these kinds of scenarios. There’s been more than one occasion when, late at night before a site migration, I’ve averted disaster by literally begging developers to include template review processes, to implement redirects, or to stall deployments.

But this isn’t a sensible or sustainable or reliable way of working.

Mistakes will inevitably be made. Resources, favors, and patience are finite. Too much reliance on “hustle” from individuals (or multiple individuals) may in fact further widen the gap in understanding and scope, and positions the hustler as a single point of failure.

More importantly, hustle may only fix the symptoms, not the cause of these issues. That means that we remain stuck in a role as the disruptive outsiders who constantly squeeze in extra unscoped requirements at the eleventh hour.

Where things go wrong

If we’re to begin to address some of these challenges, we need to understand when, where, and why migration projects go wrong.

The root cause of all less-than-perfect migrations can be traced to at least one of the following scenarios:

  • The migration project occurs without consultation.
  • Consultation is sought too late in the process, and/or after the migration.
  • There is insufficient planned resource/time/budget to add requirements (or processes), or to make recommended changes to the brief.
  • The scope is changed mid-project, without consultation, or in a way which de-prioritizes requirements.
  • Requirements and/or recommended changes are axed at the eleventh hour (due to resource/time/budget limitations, or educational/political conflicts).

There’s a common theme in each of these cases. We’re not involved early enough in the process, or our opinions and priorities don’t carry sufficient weight to impact timelines and resources.

Chances are, these mistakes are rarely the product of spite or of intentional omission; rather, they’re born of gaps in the education and experience of the stakeholders and decision-makers involved.

We can address this, to a degree, by elevating ourselves to senior stakeholders in these kinds of projects, and by being consulted much earlier in the timeline.

Let’s be more specific

I think that it’s our responsibility to help the organizations we work for to avoid these mistakes. One of the easiest opportunities to do that is to make sure that we’re talking about the same thing, as early in the process as possible.

Otherwise, migrations will continue to go wrong, and we will continue to spend far too much of our collective time fixing broken links, recommending changes or improvements to templates, and holding together bruised-and-broken websites — all at the expense of doing meaningful, impactful work.

Perhaps we can begin to answer some of these challenges by creating better definitions and helping to clarify exactly what’s involved in a “site migration” process.

Unfortunately, I suspect that we’re stuck with the word “migration,” at least for now. It’s a term which is already widely used, and which people believe is a correct and appropriate label. It’s unrealistic to try to change everybody else’s language when we’re already too late to the conversation.

Our next best opportunity to reduce ambiguity and risk is to codify the types of migration. This gives us a chance to prompt further exploration and better definitions.

For example, if we can say “This sounds like it’s actually a domain migration paired with a template migration,” we can steer the conversation a little and rely on a much better shared frame of reference.

If we can raise a challenge that, e.g., the “translation project” a different part of the business is working on is actually a whole bunch of interwoven migration types, then we can raise our concerns earlier and pursue more appropriate resource, budget, and authority (e.g., “This project actually consists of a series of migrations involving templates, content, and domains. Therefore, it’s imperative that we also consider X and Y as part of the project scope.”).

By persisting in labelling this way, stakeholders may gradually come to understand that, e.g., changing the design typically also involves changing the templates, and so the SEO folks should really be involved earlier in the process. By challenging the language, we can challenge the thinking.

Let’s codify migration types

I’ve identified at least seven distinct types of migration. Next time you encounter a “migration” project, you can investigate the proposed changes, map them back to these types, and flag any gaps in understanding, expectations, and resource.

You could argue that some of these aren’t strictly “migrations” in a technical sense (i.e., changing something isn’t the same as moving it), but grouping them this way is intentional.

Remember, our goal here isn’t to neatly categorize all of the requirements for any possible type of migration. There are plenty of resources, guides, and lists which already try to do that.

Instead, we’re trying to provide neat, universal labels which help us (the SEO folks) and them (the business stakeholders) to have shared definitions and to remove unknown unknowns.

They’re a set of shared definitions which we can use to trigger early warning signals, and to help us better manage stakeholder expectations.

Feel free to suggest your own, to grow, shrink, combine, or bin any of these to fit your own experience and requirements!

1. Hosting migrations

A broad bundling of infrastructure, hardware, and server considerations (while these are each broad categories in their own right, it makes sense to bundle them together in this context).

If your migration project contains any of the following changes, you’re talking about a hosting migration, and you’ll need to explore the SEO implications (and development resource requirements) to make sure that changes to the underlying platform don’t impact front-end performance or visibility.

  • You’re changing hosting provider.
  • You’re changing, adding, or removing server locations.
  • You’re altering the specifications of your physical (or virtual) servers (e.g., RAM, CPU, storage, hardware types, etc).
  • You’re changing your server technology stack (e.g., moving from Apache to Nginx).*
  • You’re implementing or removing load balancing, mirroring, or extra server environments.
  • You’re implementing or altering caching systems (database, static page caches, varnish, object, memcached, etc).
  • You’re altering the physical or server security protocols and features.**
  • You’re changing, adding or removing CDNs.***

*Might overlap into a software migration if the changes affect the configuration or behavior of any front-end components (e.g., the CMS).

**Might overlap into other migrations, depending on how this manifests (e.g., template, software, domain).

***Might overlap into a domain migration if the CDN is presented as/on a distinct hostname (e.g., AWS), rather than invisibly (e.g., Cloudflare).

2. Software migrations

Unless your website is composed of purely static HTML files, chances are that it’s running some kind of software to serve the right pages, behaviors, and content to users.

If your migration project contains any of the following changes, you’re talking about a software migration, and you’ll need to understand (and input into) how things like managing error codes, site functionality, and back-end behavior work.

  • You’re changing CMS.
  • You’re adding or removing plugins/modules/add-ons in your CMS.
  • You’re upgrading or downgrading the CMS, or plugins/modules/add-ons (by a significant degree/major release).
  • You’re changing the language used to render the website (e.g., adopting Angular2 or NodeJS).
  • You’re developing new functionality on the website (forms, processes, widgets, tools).
  • You’re merging platforms; e.g., a blog which operated on a separate domain and system is being integrated into a single CMS.*

*Might overlap into a domain migration if you’re absorbing software which was previously located/accessed on a different domain.

3. Domain migrations

Domain migrations can be pleasantly straightforward if executed in isolation, but this is rarely the case. Changes to domains are often paired with (or the result of) other structural and functional changes.

If your migration project alters the URL(s) by which users are able to reach your website, or contains any of the following changes, then you’re talking about a domain migration, and you need to consider how redirects, protocols (e.g., HTTP/S), hostnames (e.g., www/non-www), and branding are impacted.

  • You’re changing the main domain of your website.
  • You’re buying/adding new domains to your ecosystem.
  • You’re adding or removing subdomains (e.g., removing domain sharding following a migration to HTTP2).
  • You’re moving a website, or part of a website, between domains (e.g., moving a blog on a subdomain into a subfolder, or vice-versa).
  • You’re intentionally allowing an active domain to expire.
  • You’re purchasing an expired/dropped domain.
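
Whichever of these changes is in play, the common SEO requirement is a 301 redirect that preserves paths and query strings. Here is a minimal sketch of that mapping, with hypothetical hostnames (in practice the rule usually lives in server or CDN configuration rather than application code):

```typescript
// Path-preserving redirect mapping for a domain migration.
// Hostnames are hypothetical; serve the returned URL as an HTTP 301.
function mapToNewDomain(oldUrl: string, oldHost: string, newHost: string): string | null {
  const url = new URL(oldUrl);
  if (url.hostname !== oldHost) return null; // not ours to redirect
  url.hostname = newHost;                    // keep path, query, and fragment intact
  return url.toString();
}

console.log(mapToNewDomain("https://old-brand.example/blog/post?p=1",
                           "old-brand.example", "new-brand.example"));
// "https://new-brand.example/blog/post?p=1"
```

Ideally each old URL answers with a single 301 to its mapped counterpart, rather than a chain of hops.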

4. Template migrations

Chances are that your website uses a number of HTML templates, which control the structure, layout, and peripheral content of your pages. The logic which controls how your content looks, feels, and behaves (as well as the behavior of hidden/meta elements like descriptions or canonical URLs) tends to live here.

If your migration project alters elements like your internal navigation (e.g., the header or footer), elements in your <head>, or otherwise changes the page structure around your content in the ways I’ve outlined, then you’re talking about a template migration. You’ll need to consider how users and search engines perceive and engage with your pages, how context, relevance, and authority flow through internal linking structures, and how well-structured your HTML (and JS/CSS) code is.

  • You’re making changes to internal navigation.
  • You’re changing the layout and structure of important pages/templates (e.g., homepage, product pages).
  • You’re adding or removing template components (e.g., sidebars, interstitials).
  • You’re changing elements in your <head> code, like title, canonical, or hreflang tags.
  • You’re adding or removing specific templates (e.g., a template which shows all the blog posts by a specific author).
  • You’re changing the URL pattern used by one or more templates.
  • You’re making changes to how device-specific rendering works.*

*Might involve domain, software, and/or hosting migrations, depending on implementation mechanics.

5. Content migrations

Your content is everything which attracts, engages with, and convinces users that you’re the best brand to answer their questions and meet their needs. That includes the words you use to describe your products and services, the things you talk about on your blog, and every image and video you produce or use.

If your migration project significantly changes the tone (including language, demographic targeting, etc), format, or quantity/quality of your content in the ways I’ve outlined, then you’re talking about a content migration. You’ll need to consider the needs of your market and audience, and how the words and media on your website answer to that — and how well they do so in comparison with your competitors.

  • You significantly increase or reduce the number of pages on your website.
  • You significantly change the tone, targeting, or focus of your content.
  • You begin to produce content on/about a new topic.
  • You translate and/or internationalize your content.*
  • You change the categorization, tagging, or other classification system on your blog or product content.**
  • You use tools like canonical tags, meta robots indexation directives, or robots.txt files to control how search engines (and other bots) access and attribute value to a content piece (individually or at scale).

*Might involve domain, software and/or hosting, and template migrations, depending on implementation mechanics.

**May overlap into a template migration if the layout and/or URL structure changes as a result.

6. Design migrations

The look and feel of your website doesn’t necessarily directly impact your performance (though user signals like engagement and trust certainly do). However, simple changes to design components can often have unintended knock-on effects and consequences.

If your migration project contains any of the following changes, you’re talking about a design migration, and you’ll need to clarify whether changes are purely cosmetic or whether they go deeper and impact other areas.

  • You’re changing the look and feel of key pages (like your homepage).*
  • You’re adding or removing interaction layers, e.g. conditionally hiding content based on device or state.*
  • You’re making design/creative changes which change the HTML (as opposed to just images or CSS files) of specific elements.*
  • You’re changing key messaging, like logos and brand slogans.
  • You’re altering the look and feel to react to changing strategies or monetization models (e.g., introducing space for ads in a sidebar, or removing ads in favor of using interstitial popups/states).
  • You’re changing images and media.**

*All template migrations.

**Don’t forget to 301 redirect these, unless you’re replacing like-for-like filenames (which isn’t always best practice if you wish to invalidate local or remote caches).

7. Strategy migrations

A change in organizational or marketing strategy might not directly impact the website, but a widening gap between a brand’s audience, objectives, and platform can have a significant impact on performance.

If your market or audience (or your understanding of it) changes significantly, or if your mission, your reputation, or the way in which you describe your products/services/purpose changes, then you’re talking about a strategy migration. You’ll need to consider how you structure your website, how you target your audiences, how you write content, and how you campaign (all of which might trigger a set of new migration projects!).

  • You change the company mission statement.
  • You change the website’s key objectives, goals, or metrics.
  • You enter a new marketplace (or leave one).
  • Your channel focus (and/or your audience’s) changes significantly.
  • A competitor disrupts the market and/or takes a significant amount of your market share.
  • Responsibility for the website (or its performance, SEO, or digital more broadly) changes hands.
  • You appoint a new agency or team responsible for the website’s performance.
  • Senior/C-level stakeholders leave or join.
  • Changes in legal frameworks (e.g. privacy compliance or new/changing content restrictions in prescriptive sectors) constrain your publishing/content capabilities.

Let’s get in earlier

Armed with better definitions, we can begin to force a more considered conversation around what a “migration” project actually involves. We can use a shared language and ensure that stakeholders understand the risks and opportunities of the changes they intend to make.

Unfortunately, however, we don’t always hear about proposed changes until they’ve already been decided and signed off.

People don’t know that they need to tell us that they’re changing domain, templates, hosting, etc. So it’s often too late when — or if — we finally get involved. Decisions have already been made before they trickle down into our awareness.

That’s still a problem.

By the time you’re aware of a project, it’s usually too late to impact it.

While our new-and-improved definitions are a great starting place to catch risks as you encounter them, avoiding those risks altogether requires us to develop a much better understanding of how, where, and when migrations are planned, managed, and start to go wrong.

Let’s identify trigger points

I’ve identified four common scenarios which lead to organizations deciding to undergo a migration project.

If you can keep your ears to the ground and spot these types of events unfolding, you have an opportunity to give yourself permission to insert yourself into the conversation, and to interrogate the project to find out exactly which types of migrations might be looming.

It’s worth finding ways to get added to deployment lists and notifications, internal project management tools, and other systems so that you can look for early warning signs (without creating unnecessary overhead and comms processes).

1. Mergers, acquisitions, and closures

When brands are bought, sold, or merged, this almost universally triggers changes to their websites. These requirements are often dictated from on-high, and there’s limited (or no) opportunity to impact the brief.

Migration strategies in these situations are rarely comfortable, and almost always defensive by nature (focusing on minimizing impact/cost rather than capitalizing upon opportunity).

Typically, these kinds of scenarios manifest in a small number of ways:

  • The “parent” brand absorbs the website of the purchased brand into their own website; either by “bolting it on” to their existing architecture, moving it to a subdomain/folder, or by distributing salvageable content throughout their existing site and killing the old one (often triggering most, if not every type of migration).
  • The purchased brand website remains where it is, but undergoes a design migration and possibly template migrations to align it with the parent brand.
  • A brand website is retired and redirected (a domain migration).

2. Rebrands

All sorts of pressures and opportunities lead to rebranding activity. Pressures to remain relevant, to reposition within marketplaces, or change how the brand represents itself can trigger migration requirements — though these activities are often led by brand and creative teams who don’t necessarily understand the implications.

Often, the outcome of branding processes and initiatives creates a new or alternate understanding of markets and consumers, and/or creates new guidelines/collateral/creative which must be reflected on the website(s). Typically, this can result in:

  • Changes to core/target audiences, and the content or language/phrasing used to communicate with them (strategy and content migrations — more if this involves, for example, opening up to international audiences).
  • New collateral, replacing or adding to existing media, content, and messaging (content and design migrations).
  • Changes to website structure and domain names (template and domain migrations) to align to new branding requirements.

3. C-level vision

It’s not uncommon for senior stakeholders to decide that the strategy to save a struggling business, to grow into new markets, or to make their mark on an organization is to launch a brand-new, shiny website.

These kinds of decisions often involve a scorched-earth approach, tearing down the work of their predecessors or of previously under-performing strategies. And the more senior the decision-maker, the less likely they’ll understand the implications of their decisions.

In these kinds of scenarios, your best opportunity to avert disaster is to watch for warning signs and to make yourself heard before it’s too late. In particular, you can watch out for:

  • Senior stakeholders with marketing, IT, or C-level responsibilities joining, leaving, or being replaced (particularly if in response to poor performance).
  • Boards of directors, investors, or similar pressuring web/digital teams for unrealistic performance goals (based on current performance/constraints).
  • Gradual reduction in budget and resource for day-to-day management and improvements to the website (as a likely prelude to a big strategy migration).
  • New agencies being brought on board to optimize website performance, who’re hindered by the current framework/constraints.
  • The adoption of new martech and marketing automation software.*

*Integrations of solutions like Salesforce, Marketo, and similar sometimes rely on utilizing proxied subdomains, embedded forms/content, and other mechanics which will need careful consideration as part of a template migration.

4. Technical or financial necessity

The current website is in such a poor, restrictive, or cost-ineffective condition that it makes it impossible to adopt new-and-required improvements (such as compliance with new standards, an integration of new martech stacks, changes following a brand purchase/merger, etc).

Generally, like the kinds of C-level “new website” initiatives I’ve outlined above, these result in scorched-earth solutions.

These are particularly frustrating: they’re the kinds of migration projects which you may well argue and fight for, for years on end, only to find that they’ve been scoped (and maybe even begun or completed) without your input or awareness.

Here are some danger signs to watch out for which might mean that your migration project is imminent (or, at least, definitely required):

  • Licensing costs for parts or the whole platform become cost-prohibitive (e.g., enterprise CMS platforms, user seats, developer training, etc).
  • The software or hardware skill set required to maintain the site becomes rarer or more expensive (e.g., outdated technologies).
  • Minor-but-urgent technical changes take more than six months to implement.
  • New technical implementations/integrations are agreed upon in principle, budgeted for, but not implemented.
  • The technical backlog of tasks grows faster than it shrinks as it fills with breakages and fixes rather than new features, initiatives, and improvements.
  • The website ecosystem doesn’t support the organization’s ways of working (e.g., the organization adopts agile methodologies, but the website only supports waterfall-style codebase releases).
  • Key technology which underpins the site is being deprecated, and there’s no easy upgrade path.*

*Will likely trigger hosting or software migrations.

Let’s not count on this

While this kind of labelling undoubtedly goes some way to helping us spot and better manage migrations, it’s far from a perfect or complete system.

In fact, I suspect it may be far too ambitious, and unrealistic in its aspiration. Accessing conversations early enough — and being listened to and empowered in those conversations — relies on the goodwill and openness of companies who aren’t always completely bought into or enamored with SEO.

This will only work in an organization which is open to this kind of thinking and internal challenging — and chances are, they’re not the kinds of organizations who are routinely breaking their websites. The very people who need our help and this kind of system are fundamentally unsuited to receive it.

I suspect, then, it might be impossible in many cases to make the kinds of changes required to shift behaviors and catch these problems earlier. In most organizations, at least.

Avoiding disasters resulting from ambiguous migration projects relies heavily on broad education. Everything else aside, people tend to change companies faster than you can build deep enough tribal knowledge.

That doesn’t mean that the structure isn’t still valuable, however. The types of changes and triggers I’ve outlined can still be used as alarm bells and direction for your own use.

Let’s get real

If you can’t effectively educate stakeholders on the complexities and impact of them making changes to their website, there are more “lightweight” solutions.

At the very least, you can turn these kinds of items (and expand with your own, and in more detail) into simple lists which can be printed off, laminated, and stuck to a wall. If nothing else, perhaps you’ll remind somebody to pick up the phone to the SEO team when they recognize an issue.

In a more pragmatic world, stakeholders don’t necessarily have to understand the nuance or the detail if they at least understand that they’re meant to ask for help when they’re changing domain, for example, or adding new templates to their website.

Whilst this doesn’t solve the underlying problems, it does provide a mechanism through which the damage can be systematically avoided or limited. You can identify problems earlier and be part of the conversation.

If it’s still too late and things do go wrong, you’ll have something you can point to and say “I told you so,” or, more constructively perhaps, “Here’s the resource you need to avoid this happening next time.”

And in your moment of self-righteous vindication, having successfully made it through this post and now armed to save your company from a botched migration project, you can migrate over to the bar. Good work, you.


Thanks to…

This turned into a monster of a post, and its scope meant that it almost never made it to print. Thanks to a few folks in particular for helping me to shape, form, and ship it. In particular:

  • Hannah Thorpe, for help in exploring and structuring the initial concept.
  • Greg Mitchell, for a heavy dose of pragmatism in the conclusion.
  • Gerry White, for some insightful additions and the removal of dozens of typos.
  • Sam Simpson for putting up with me spending hours rambling and ranting at her about failed site migrations.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from The Moz Blog http://tracking.feedpress.it/link/9375/5736349
via IFTTT

The State of Links: Yesterday’s Ranking Factor?

Posted by Tom.Capper

Back in September last year, I was lucky enough to see Rand speak at MozCon. His talk was about link building and the main types of strategy that he saw as still being relevant and effective today. During his introduction, he said something that really got me thinking, about how the whole purpose of links and PageRank had been to approximate traffic.


Essentially, back in the late ’90s, links were a much bigger part of how we experienced the web — think of hubs like Excite, AOL, and Yahoo. Google’s big innovation was to realize that, because people navigated the web by clicking on links, they could approximate the relative popularity of pages by looking at those links.

So many links, such little time.

Rand pointed out that, given all the information at their disposal in the present day — as an Internet Service Provider, a search engine, a browser, an operating system, and so on — Google could now far more accurately model whether a link drives traffic, so you shouldn’t aim to build links that don’t drive traffic. This is a pretty big step forward from the link-building tactics of old, but it occurred to me that it probably doesn’t go far enough.

If Google has enough data to figure out which links are genuinely driving traffic, why bother with links at all? The whole point was to figure out which sites and pages were popular, and they can now answer that question directly. (It’s worth noting that there’s a dichotomy between “popular” and “trustworthy” that I don’t want to get too stuck into, but which isn’t too big a deal here given that both can be inferred from either link-based data sources, or from non-link-based data sources — for example, SERP click-through rate might correlate well with “trustworthy,” while “search volume” might correlate well with “popular”).

However, there’s plenty of evidence out there suggesting that Google is in fact still making significant use of links as a ranking factor, so I decided to set out to challenge the data on both sides of that argument. The end result of that research is this post.

The horse’s mouth

One reasonably authoritative source on matters relating to Google is Google themselves. Google has been fairly unequivocal, even in recent times, that links are still a big deal. For example:

  • March 2016: Google Senior Search Quality Strategist Andrey Lipattsev confirms that content and links are the first and second greatest ranking factors. (The full quote is: “Yes; I can tell you what they [the number 1 and 2 ranking factors] are. It’s content, and links pointing to your site.”)
  • April 2014: Matt Cutts confirms that Google has tested search quality without links, and found it to be inferior.
  • October 2016: Gary Illyes implies that text links continue to be valuable while playing down the concept of Domain Authority.

Then, of course, there’s their continued focus on unnatural backlinks and so on — none of which would be necessary in a world where links are not a ranking factor.

However, I’d argue that this doesn’t indicate the end of our discussion before it’s even begun. Firstly, Google has a great track record of giving out dodgy SEO advice. Consider HTTPS migrations pre-2016. Will Critchlow talked at SearchLove San Diego about how Google’s algorithms are at a level of complexity and opaqueness where they’re no longer even trying to understand them themselves — and of course there are numerous stories of unintentional behaviors from machine learning algorithms out in the wild.

Third-party correlation studies

It’s not difficult to put together your own data and show a correlation between link-based metrics and rankings. Take, for example:

  • Moz’s most recent study in 2015, showing strong relationships between link-based factors and rankings across the board.
  • This more recent study by Stone Temple Consulting.

However, these studies fall into significant issues with correlation vs. causation.

There are three main mechanisms which could explain the relationships that they show:

  1. Getting more links causes sites to rank higher (yay!)
  2. Ranking higher causes sites to get more links
  3. Some third factor, such as brand awareness, is related to both links and rankings, causing them to be correlated with each other despite the absence of a direct causal relationship

I’ve yet to see any correlation study that addresses these very serious shortcomings, or even particularly acknowledges them. Indeed, I’m not sure that it would even be possible to do so given the available data, but this does show that as an industry we need to apply some critical thinking to the advice that we’re consuming.

However, earlier this year I did write up some research of my own here on the Moz Blog, demonstrating that brand awareness could in fact be a more useful factor than links for predicting rankings.


The problem with this study was that it showed a relationship that was concrete (i.e. extremely statistically significant), but that was surprisingly lacking in explanatory power. Indeed, I discussed in that post how I’d ended up with a correlation that was far lower than Moz’s for Domain Authority.

Fortunately, Malcolm Slade recently discussed some of his very similar research at BrightonSEO, in which he finds broad correlations similar to mine between brand factors and rankings, but far, far stronger correlations for certain types of query, especially big, high-volume, highly competitive head terms.

So what can we conclude overall from these third-party studies? Two main things:

  1. We should take with a large pinch of salt any study that does not address the possibilities of reverse causation, or a jointly-causing third factor.
  2. Links can add very little explanatory power to a rankings prediction model based on branded search volume, at least at a domain level.

The real world: Why do rankings change?

At the end of the day, we’re interested in whether links are a ranking factor because we’re interested in whether we should be trying to use them to improve the rankings of our sites, or our clients’ sites.

Fluctuation

The first example I want to look at here is this graph, showing UK rankings for the keyword “flowers” from May to December last year:

The fact is that our traditional understanding of ranking changes — which breaks down into links, on-site, and algorithm changes — cannot explain this degree of rapid fluctuation. If you don’t believe me, the above data is available publicly through platforms like SEMRush and Searchmetrics, so try to dig into it yourself and see if there’s any external explanation.

This level and frequency of fluctuation is increasingly common for hotly contested terms, and it shows a tendency by Google to continuously iterate and optimize — just as marketers do when they’re optimizing a paid search advert, or a landing page, or an email campaign.

What is Google optimizing for?


The above slide is from Larry Kim’s presentation at SearchLove San Diego, and it shows how the highest SERP positions are gaining click-through rate over time, despite all the changes in Google Search (such as increased non-organic results) that ought to drive the opposite.

Larry’s suggestion is that this is a symptom of Google’s procedural optimization — not of the algorithm, but by the algorithm and of results. This certainly fits in with everything we’ve seen.

Successful link building

However, at the other end of the scale, we get examples like this:


The above graph (courtesy of STAT) shows rankings for the commercial keywords for Fleximize.com during a Distilled creative campaign. This is a particularly interesting example for two reasons:

  • Fleximize started off as a domain with relatively little equity, meaning that changes were measurable, and that there were fairly easy gains to be made
  • Nothing happened with the first two pieces (1, 2), even though they scored high-quality coverage and were seemingly very comparable to the third (3).

It seems that links did eventually move the needle here, and massively so, but the mechanisms at work are highly opaque.

The above two examples — “Flowers” and Fleximize — are just two real-world examples of ranking changes. I’ve picked one that seems obviously link-driven but a little strange, and one that shows how volatile things are for more competitive terms. I’m sure there are countless massive folders out there full of case studies that show links moving rankings — but the point is that it can happen, yet it isn’t always as simple as it seems.

How do we explain all of this?

A lot of the evidence I’ve gone through above is contradictory. Links are correlated with rankings, and Google says they’re important, and sometimes they clearly move the needle, but on the other hand brand awareness seems to explain away most of their statistical usefulness, and Google’s operating with more subtle methods in the data-rich top end.

My favored explanation right now for how this all fits together is this:

  • There are two tiers — probably fuzzily separated.
  • At the top end, user signals — and factors that Google’s algorithms associate with user signals — are everything. For competitive queries with lots of search volume, links don’t tell Google anything it couldn’t figure out anyway, and links don’t help with the final refinement of fine-grained ordering.
  • However, links may still be a big part of how you qualify for that competition in the top end.

This is very much a work in progress, however, and I’d love to see other people’s thoughts, and especially their fresh research. Let me know what you think in the comments below.


Half of Page-1 Google Results Are Now HTTPS

Posted by Dr-Pete

Just over 9 months ago, I wrote that 30% of page-1 Google results in our 10,000-keyword tracking set were secure (HTTPS). As of earlier this week, that number topped 50%:

While there haven’t been any big jumps recently – suggesting this change is due to steady adoption of HTTPS and not a major algorithm update – the end result of a year of small changes is dramatic. More and more Google results are secure.

MozCast is, of course, just one data set, so I asked the folks at Rank Ranger, who operate a similar (but entirely different) tracking system, if they thought I was crazy…

Could we both be crazy? Absolutely. However, we operate completely independent systems with no shared data, so I think the consistency in these numbers suggests that we’re not wildly off.

What about the future?

Projecting the fairly stable trend line forward, the data suggests that HTTPS could hit about 65% of page-1 results by the end of 2017. The trend line is, of course, an educated guess at best, and many events could change the adoption rate of HTTPS pages.
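As a rough sanity check on that projection, here's the arithmetic as a sketch. The two data points (roughly 30% HTTPS nine months ago, roughly 50% now) come from the post; the assumption that "now" is mid-2017, about 6.5 months before year-end, is mine:

```python
# Two observations from the post: ~30% HTTPS about 9 months ago, ~50% now.
months = [0, 9]       # elapsed months between the two measurements
https_pct = [30, 50]  # page-1 HTTPS share at each point

# Slope of the straight line through the two points (percentage points/month).
slope = (https_pct[1] - https_pct[0]) / (months[1] - months[0])

# Assuming "now" is roughly mid-2017, the end of 2017 is ~6.5 months away.
projection = https_pct[1] + slope * 6.5
print(f"Projected page-1 HTTPS share at end of 2017: {projection:.0f}%")
```

A straight line through two points is the crudest possible model, which is why the post calls the trend line an educated guess at best.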

I’ve speculated previously that, as the adoption rate increased, Google would have more freedom to bump up the algorithmic (i.e. ranking) boost for HTTPS pages. I asked Gary Illyes if such a plan was in the works, and he said “no”:

As with any Google statement, some of you will take this as gospel truth and some will take it as devilish lies. While he isn’t promising that Google will never boost the ranking benefits of HTTPS, I believe Gary on this one. I think Google is happy with the current adoption rate and wary of the collateral damage that an aggressive HTTPS ranking boost (or penalty) could cause. It makes sense that they would bide their time.

Who hasn’t converted?

One of the reasons Google may be proceeding with caution on another HTTPS boost (or penalty) is that not all of the big players have made the switch. Here are the Top 20 subdomains in the MozCast dataset, along with the percentage of ranking URLs that use HTTPS:

(1) en.wikipedia.org — 100.0%
(2) www.amazon.com — 99.9%
(3) www.facebook.com — 100.0%
(4) www.yelp.com — 99.7%
(5) www.youtube.com — 99.6%
(6) www.pinterest.com — 100.0%
(7) www.walmart.com — 100.0%
(8) www.tripadvisor.com — 99.7%
(9) www.webmd.com — 0.2%
(10) allrecipes.com — 0.0%
(11) www.target.com — 0.0%
(12) www.foodnetwork.com — 0.0%
(13) www.ebay.com — 0.0%
(14) play.google.com — 100.0%
(15) www.bestbuy.com — 0.0%
(16) www.mayoclinic.org — 0.0%
(17) www.homedepot.com — 0.0%
(18) www.indeed.com — 0.0%
(19) www.zillow.com — 100.0%
(20) shop.nordstrom.com — 0.0%

Of the Top 20, exactly half have switched to HTTPS, although most of the Top 10 have converted. Not surprisingly, switching is nearly all-or-none, with only minor exceptions. Most sites naturally opt for a site-wide switch, at least after initial testing.
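Checking that "exactly half" figure against the table above (the percentages are copied straight from the list; treating anything above 50% as converted is my own simplification):

```python
# (site, % of ranking URLs on HTTPS) for the Top 20 subdomains above.
top20 = [
    ("en.wikipedia.org", 100.0), ("www.amazon.com", 99.9),
    ("www.facebook.com", 100.0), ("www.yelp.com", 99.7),
    ("www.youtube.com", 99.6), ("www.pinterest.com", 100.0),
    ("www.walmart.com", 100.0), ("www.tripadvisor.com", 99.7),
    ("www.webmd.com", 0.2), ("allrecipes.com", 0.0),
    ("www.target.com", 0.0), ("www.foodnetwork.com", 0.0),
    ("www.ebay.com", 0.0), ("play.google.com", 100.0),
    ("www.bestbuy.com", 0.0), ("www.mayoclinic.org", 0.0),
    ("www.homedepot.com", 0.0), ("www.indeed.com", 0.0),
    ("www.zillow.com", 100.0), ("shop.nordstrom.com", 0.0),
]

converted = [site for site, pct in top20 if pct > 50]
print(f"{len(converted)} of {len(top20)} have switched to HTTPS")

# The all-or-none pattern: every site sits at roughly 100% or roughly 0%.
all_or_none = all(pct > 99 or pct < 1 for _, pct in top20)
```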

What should you do?

Even if Google doesn’t turn up the reward or penalty for HTTPS, other changes are in play, such as Chrome warning visitors about non-secure pages when those pages collect sensitive data. As the adoption rate increases, you can expect pressure to switch to increase.

For new sites, I’d recommend jumping in as soon as possible. Security certificates are inexpensive these days (some are free), and the risks are low. For existing sites, it’s a lot tougher. Any site-wide change carries risks, and there have certainly been a few horror stories this past year. At minimum, make sure to secure pages that collect sensitive information or process transactions, and keep your eyes open for more changes.


Why Net Neutrality Matters for SEO and Web Marketing – Whiteboard Friday

Posted by randfish

Net neutrality is a hot-button issue lately, and whether it’s upheld or not could have real ramifications for the online marketing industry. In this Whiteboard Friday, Rand covers the potential consequences and fallout of losing net neutrality. Be sure to join the ensuing discussion in the comments!

https://fast.wistia.net/embed/iframe/44ceuzw53o?videoFoam=true

https://fast.wistia.net/assets/external/E-v1.js

Why net neutrality matters for SEO and web marketing

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re taking a departure from our usual SEO tactics and marketing tactics to talk for a minute about net neutrality. Net neutrality is actually something that is hugely critical and massively important to web marketers, especially those of us who help small and medium businesses, local businesses, and websites that aren’t in the top 100 most popular sites and wealthiest sites on the web.

The reason that we’re going to talk net neutrality is because, for the first time in a while, it’s actually at high risk and there are some things that we might be able to do about it. By protecting net neutrality, especially here in the United States, although this is true all over the world, wherever you might be, we can actually help to preserve our jobs and our roles as marketers. I’ll talk you through it in just a sec.

What is net neutrality?

So, to start off, you might be asking yourself, “Okay, Rand, I might have heard of net neutrality, but explain to me what it is.” I’m going to give you a very basic introduction, and then I’ll invite you to dig deeper into it.

But essentially, net neutrality is this idea that as a user of the Internet, through my Internet service provider — that might be through my cellphone network, that might be through my home Internet provider, through my Internet provider at work, these ISPs, a Verizon or a Comcast or a Cox or a T-Mobile or AT&T, those are all here in the United States and there are plenty of others overseas — you essentially can get access to the whole web equally, meaning that these ISPs are not regulating download speed based on someone paying them more or less or based on a website being favored by them or owned by them or invested in by them. Essentially, when you get access to the web, you get access to it equally. There’s equality and neutrality for the entire Internet.

Non-neutrality

In a non-neutrality scenario, you can see my little fellow here is very unhappy, because his ISP is essentially regulating and saying, “Hey, if you want to pay us $50 a month, you can have access to Google, Facebook, Twitter, and MSN. Then if you want to pay a little bit more, $100 a month, you can get access to all these second-tier sites, and we’ll let you visit those and use those services. If you want to pay us $150 a month, you can get access to all websites.”

This is just one model of how a non-neutrality situation might work. There are a bunch of other ones. This is probably not the most realistic one, and it might be slightly hyperbolic, but the idea behind it is always the same — that essentially the ISP can work however they’d like. They are not bound by government rules and regulations requiring them to serve the entire web equally.

Now, if you’re an ISP, you can imagine that this is a wonderful scenario. If I’m AT&T or I’m Verizon, I might be maxing out how much money I can make from consumers, and I’m constantly having to be competitive against other ISPs. But if I can do this, I can then have a bunch more vectors (a) to get money from all these different websites and web services, and (b) to charge consumers much more based on tiering their access.

So this is wonderful for me, which is why ISPs like Comcast and Verizon and AT&T and Cox and all these others have put a lot of money towards lobbyists to try and change the opinions of the federal government, and that’s mostly, well, in the United States right now, it’s the Republican administration and the folks in Congress and the Federal Communications Chair, who is Ajit Pai, recently selected by Trump as the new FCC Chair.

Why should marketers care?

Reasons that you should care about this as a web marketer are:

1. Equal footing for web access creates a more even playing field.

  • Greater competition. It allows websites to compete with each other without having to pay and without having to only serve different consumers who may be paying different rates to their ISPs.
  • It also means more players, because anyone can enter the field. Simply by registering a website and hosting it, you’re now on an even playing field technically with everyone, with Google, with Facebook, with Microsoft, with Amazon. You get the same access, at least at the fundamental ISP level, with everyone else on the Internet. That means there’s a lot more demand for competitive web marketing services, because there are many more businesses who need to compete and who can compete.
  • Also much less of an inherent advantage for these big, established, rich players. If you’re a very, very rich website, you have tons of money, you have tons of resources, lots of influence, it’s easy to say, “Hey, I’m not going to worry about this because I know I can always be in this lowest tier or whatever they’re providing for free because I can pay the ISP, and I can influence government rules and regulations, and I can connect with all the different ISPs out there and make sure I’m always accessible.” But for a small website, that’s a nightmare scenario, incredibly hard, and it makes a huge competitive advantage by being big and established already, which means it’s much tougher to get started.

2. The costs of getting started online are much lower under net neutrality.

Currently, if you register your website and you start doing your hosting:

  • You don’t need to pay off any ISPs. You don’t need to go approach Comcast or Verizon or AT&T or Cox or anybody like this and say, “Hey, we would like to get on to your high-speed, fastest-tier, best-access plan.” You don’t have to do that, because net neutrality, the law of the land means that you are automatically guaranteed that.
  • There’s also no separate process. So it’s not just the cost, it’s also the work involved to go to these ISPs. There are several hundred ISPs with hundreds of thousands of active customers in the United States today. That number has generally been shrinking as that industry has been consolidating a little bit more. But still, that’s a very, very challenging thing to have to do. If you are a big insurance provider, it’s one thing to have someone go manage that task, but if you’re a brand-new startup website, that’s entirely another to try and do that.

3. Under net neutrality, the talent, strategy, and quality of products, services, and marketing that a new company or website brings are what create winners and losers in its field. Compare that to the potential non-neutral situation, which is not quite a rigged system, but which I'm calling a little bit rigged because of the built-in advantage it gives to money and influence.

I think we would all generally agree that, in 2017, in late-stage capitalist societies, that, generally speaking, there’s already a huge advantage by having a lot of money and influence. I’m not sure those with money and influence necessarily need another leg up on entrepreneurs and startups and folks who are trying to compete on the web.

What might happen?

Now, maybe you’ll disagree, but I think that these together make a very compelling case scenario. Here’s what might actually happen.

  • “Fast lanes” for some sites – Most observers of the space think that fast lanes would become a default. So fast lanes, meaning that certain tiers, certain parts of the web, certain web services and companies would get fast access. Others would be slower or potentially even disallowed on certain ISPs. That would create some real challenges.
  • Free vs. paid access by some ISPs – There would probably be some free and some paid services. You can see T-Mobile actually tried to do this recently, where they basically said, “Hey, if you are on a T-Mobile device, even if you’re paying us the smallest amount, we’re going to allow you to access these certain things.” I think it was a video service for free. Essentially, that currently is against the rules.

You might say, “Rand, that seems unfair. Why shouldn’t T-Mobile be able to offer some access to the web for free and then you just have to pay for the rest of it?” I hear you. I think unfortunately that’s a bit of a red herring, because that particular implementation of a non-neutral situation is not that bad. It’s not particularly bad for consumers. It’s not particularly bad for businesses.

If T-Mobile just charged their normal rate, and then they happen to have this, “Oh, by the way, here you get this little portion of the web for free,” no one’s going to complain about that. It’s not particularly terrible. But it does violate net neutrality, and it is a very slippery slope to a world like this, a very painful world for a lot of people. That’s why we’re willing to sort of take the sacrifices of saying, “Hey, we don’t want to allow this because it violates the principle and the law of net neutrality.”

  • Payoffs required for access or speed – Then the third thing that would almost certainly happen is that there would be payoffs. There would be payoffs on both sides. You have to pay more as a consumer, to your ISP, in order to access the web in the way that you do today, and as a website, in order to reach consumers who maybe can’t afford or choose not to pay more, you have to pay off the ISP to get that full access.

What’s the risk?

Why am I bringing this up now?

  • Higher than ever… Why is the risk so high? Well, it turns out the new American administration has basically come out against net neutrality in a very aggressive fashion.
  • The new FCC Chair, Ajit Pai, has been fighting this pretty hard. He’s made a bunch of statements just in the last few days, and he has already overturned rulings from years past that required smaller ISPs to be transparent about their net neutrality practices. He’s arguing this basically under what I’m going to call a guise of free markets and competitive marketplaces. I think that is a total misnomer.

Net neutrality creates a truly equal marketplace for everyone. While it is somewhat restrictive, one of the most interesting things to observe is that this is a non-political issue, or at least not a very politicized one, for most American voters. 81% of Democrats in a survey said that they support net neutrality, and an even greater percentage of Republicans, 85%, said they support it.* So, really, you have an overwhelming swath of voters in the United States saying this should be the law of the land.

The reason that this is generally being fought against by both Congress and the FCC is because these big ISPs have a lot of money, and they’ve paid a lot of lobbying dollars to try and influence politics. For those of you outside the United States, I know that sounds like it should be illegal. It’s not in our country. I know it’s illegal in most democracies, but it’s sort of how democracy in the United States works.

*Editor’s note: This poll was conducted by the University of Delaware.

What can we do?

If you want to take some action on this and fight back and tell your Congress person, your senator, your representatives locally and federally that you are against this, I would check out SaveTheInternet.com for those folks who are in the United States. For whatever country you’re in, I would urge you to search for “support net neutrality” and check out the initiatives that may be available in your country or your geography locally so that you can take some action.

This is something that we’ve fought against as Internet users in the past and as businesses on the web before, and I think we’re going to have to renew that fight in order to maintain the status quo and keep equal footing with each other. This will help us preserve our careers in web marketing, but it will also help preserve an open, free, competitive Internet. I think that’s something we can all agree is very important.

All right. Thanks, everyone. Look forward to your comments. Certainly open to your critiques. Please try and keep them as kosher and as kind as you can. I know when it gets into political territory, it can be a little frustrating. And we will see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from The Moz Blog http://tracking.feedpress.it/link/9375/5706209
via IFTTT

Large Site SEO Basics: Faceted Navigation

Posted by sergeystefoglo

If you work on an enterprise site — particularly in e-commerce or listings (such as a job board site) — you probably use some sort of faceted navigation structure. Why wouldn’t you? It helps users filter down to their desired set of results fairly painlessly.

While helpful to users, it’s no secret that faceted navigation can be a nightmare for SEO. At Distilled, it’s not uncommon for us to get a client that has tens of millions of URLs that are live and indexable when they shouldn’t be. More often than not, this is due to their faceted nav setup.

There are a number of great posts out there that discuss what faceted navigation is and why it can be a problem for search engines, so I won’t go into much detail on this. A great place to start is this post from 2011.

What I want to focus on instead is narrowing this problem down to a simple question, and then provide the possible solutions to that question. The question we need to answer is, “What options do we have to decide what Google crawls/indexes, and what are their pros/cons?”

Brief overview of faceted navigation

As a quick refresher, we can define faceted navigation as any way to filter and/or sort results on a webpage by specific attributes that aren’t necessarily related. For example, the color, processor type, and screen resolution of a laptop. Here is an example:

Because every possible combination of facets typically generates at least one unique URL, faceted navigation can create a few problems for SEO:

  1. It creates a lot of duplicate content, which is bad for various reasons.
  2. It eats up valuable crawl budget and can send Google incorrect signals.
  3. It dilutes link equity and passes equity to pages that we don’t even want indexed.

But first… some quick examples

It’s worth taking a few minutes and looking at some examples of faceted navigation that are probably hurting SEO. These are simple examples that illustrate how faceted navigation can (and usually does) become an issue.

Macy’s

First up, we have Macy’s. I’ve done a simple site:search for the domain and added “black dresses” as a keyword to see what would appear. At the time of writing this post, Macy’s has 1,991 products that fit under “black dresses” — so why are over 12,000 pages indexed for this keyword? The answer could have something to do with how their faceted navigation is set up. As SEOs, we can remedy this.

Home Depot

Let’s take Home Depot as another example. Again, doing a simple site:search we find 8,930 pages on left-hand/inswing front exterior doors. Is there a reason to have that many pages in the index targeting similar products? Probably not. The good news is this can be fixed with the proper combinations of tags (which we’ll explore below).

I’ll leave the examples at that. You can go to most large-scale e-commerce websites and find issues with their navigation. The point is, many large websites that use faceted navigation could be doing better for SEO purposes.

Faceted navigation solutions

When deciding on a faceted navigation solution, you’ll have to decide what you want in the index, what can go, and then how to make that happen. Let’s take a look at the options.

“Noindex, follow”

Probably the first solution that comes to mind would be using noindex tags. A noindex tag is used for the sole purpose of letting bots know to not include a specific page in the index. So, if we just wanted to remove pages from the index, this solution would make a lot of sense.

The issue here is that while you can reduce the amount of duplicate content that’s in the index, you will still be wasting crawl budget on pages. Also, these pages are receiving link equity, which is a waste (since it doesn’t benefit any indexed page).

Example: If we wanted to include our page for “black dresses” in the index, but we didn’t want to have “black dresses under $100” in the index, adding a noindex tag to the latter would exclude it. However, bots would still be coming to the page (which wastes crawl budget), and the page(s) would still be receiving link equity (which would be a waste).
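In markup terms, the directive is a single tag in the `<head>` of the page you want excluded (a sketch using the example above; the page itself is hypothetical):

```html
<!-- On the "black dresses under $100" page: stay out of the index,
     but let bots keep following the links on the page -->
<meta name="robots" content="noindex, follow">
```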

Canonicalization

Many sites approach this issue by using canonical tags. With a canonical tag, you can let Google know that in a collection of similar pages, you have a preferred version that should get credit. Since canonical tags were designed as a solution to duplicate content, it would seem that this is a reasonable solution. Additionally, link equity will be consolidated to the canonical page (the one you deem most important).

However, Google will still be wasting crawl budget on pages.

Example: /black-dresses?under-100/ would have the canonical URL set to /black-dresses/. In this instance, Google would give the canonical page the authority and link equity. Additionally, Google wouldn’t see the “under $100” page as a duplicate of the canonical version.
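The tag itself, sketched for that example (the domain is a placeholder):

```html
<!-- In the <head> of /black-dresses?under-100/ -->
<link rel="canonical" href="https://www.example.com/black-dresses/">
```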

Disallow via robots.txt

Disallowing sections of the site (such as certain parameters) can be a great solution. It’s quick, easy, and customizable. But it does come with some downsides. Namely, link equity will be trapped and unable to move anywhere on your website (even if it’s coming from an external source). Another downside: even if you tell Google not to visit a certain page (or section) of your site, Google can still index it.

Example: We could disallow *?under-100* in our robots.txt file. This would tell Google to not visit any page with that parameter. However, if there were any “follow” links pointing to any URL with that parameter in it, Google could still index it.
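A minimal robots.txt sketch of that rule (the parameter name comes from the example above; you'd adjust the pattern to your own URL structure):

```
User-agent: *
# Block crawling of any URL containing the "under-100" parameter
Disallow: /*under-100
```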

“Nofollow” internal links to undesirable facets

An option for solving the crawl budget issue is to “nofollow” all internal links to facets that aren’t important for bots to crawl. Unfortunately, “nofollow” tags don’t solve the issue entirely. Duplicate content can still be indexed, and link equity will still get trapped.

Example: If we didn’t want Google to visit any page that had two or more facets indexed, adding a “nofollow” tag to all internal links pointing to those pages would help us get there.
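At the markup level, this just means adding rel="nofollow" to the internal facet links themselves (a sketch; the URL and anchor text are hypothetical):

```html
<a href="/clothing/womens/dresses?color=black&amp;brand=express"
   rel="nofollow">Express black dresses</a>
```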

Avoiding the issue altogether

Obviously, if we could avoid this issue altogether, we should just do that. If you are currently in the process of building or rebuilding your navigation or website, I would highly recommend considering building your faceted navigation in a way that limits the URL being changed (this is commonly done with JavaScript). The reason is simple: it provides the ease of browsing and filtering products, while potentially only generating a single URL. However, this can go too far in the opposite direction — you will need to manually ensure that you have indexable landing pages for key facet combinations (e.g. black dresses).

Here’s a table outlining what I wrote above in a more digestible way.

| Option | Solves duplicate content? | Solves crawl budget? | Recycles link equity? | Passes equity from external links? | Allows internal link equity flow? | Other notes |
| --- | --- | --- | --- | --- | --- | --- |
| “Noindex, follow” | Yes | No | No | Yes | Yes | |
| Canonicalization | Yes | No | Yes | Yes | Yes | Can only be used on pages that are similar. |
| Robots.txt | Yes | Yes | No | No | No | Technically, pages that are blocked in robots.txt can still be indexed. |
| Nofollow internal links to undesirable facets | No | Yes | No | Yes | No | |
| JavaScript setup | Yes | Yes | Yes | Yes | Yes | Requires more work to set up in most cases. |

But what’s the ideal setup?

First off, it’s important to understand there is no “one-size-fits-all solution.” In order to get to your ideal setup, you will most likely need to use a combination of the above options. I’m going to highlight an example fix below that should work for most sites, but it’s important to understand that your solution might vary based on how your site is built, how your URLs are structured, etc.

Fortunately, we can break down how we get to an ideal solution by asking ourselves one question. “Do we care more about our crawl budget, or our link equity?” By answering this question, we’re able to get closer to an ideal solution.

Consider this: You have a website that has a faceted navigation that allows the indexation and discovery of every single facet and facet combination. You aren’t concerned about link equity, but clearly Google is spending valuable time crawling millions of pages that don’t need to be crawled. What we care about in this scenario is crawl budget.

In this specific scenario, I would recommend the following solution.

  1. Category, subcategory, and sub-subcategory pages should remain discoverable and indexable. (e.g. /clothing/, /clothing/womens/, /clothing/womens/dresses/)
  2. For each category page, only allow versions with 1 facet selected to be indexed.
    1. On pages that have one or more facets selected, all facet links become “nofollow” links (e.g. /clothing/womens/dresses?color=black/)
    2. On pages that have two or more facets selected, a “noindex” tag is added as well (e.g. /clothing/womens/dresses?color=black?brand=express?/)
  3. Determine which facets could have an SEO benefit (for example, “color” and “brand”) and whitelist them. Essentially, throw them back in the index for SEO purposes.
  4. Ensure your canonical tags and rel=prev/next tags are set up appropriately.
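The indexation rules in steps 1–3 can be sketched as a small decision function. This is a simplified sketch: the function name is mine, and it assumes whitelisting a facet simply re-enables indexing for it.

```python
def facet_meta(num_facets, facet_whitelisted=False):
    """Return (robots_meta_content, nofollow_facet_links) for a page.

    num_facets: how many facets are selected on the current page.
    facet_whitelisted: True for SEO-valuable facets (e.g. "color",
    "brand") deliberately allowed back into the index (step 3).
    """
    # Rule 2.1: once any facet is selected, facet links become nofollow
    nofollow_facet_links = num_facets >= 1
    # Rule 2.2: two or more facets -> noindex (unless whitelisted)
    if num_facets >= 2 and not facet_whitelisted:
        return ("noindex", nofollow_facet_links)
    # Category pages and single-facet pages stay indexable
    return ("index, follow", nofollow_facet_links)
```

A template would then emit the returned robots meta value and mark facet links nofollow accordingly.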

This solution will (in time) start to solve our issues with unnecessary pages being in the index due to the navigation of the site. Also, notice how in this scenario we used a combination of the possible solutions. We used “nofollow,” “noindex, nofollow,” and proper canonicalization to achieve a more desirable result.

Other things to consider

There are many more variables to consider on this topic — I want to address two that I believe are the most important.

Breadcrumbs (and markup) help a lot

If you don’t have breadcrumbs on each category/subcategory page on your website, you’re doing yourself a disservice. Please go implement them! Furthermore, if you have breadcrumbs on your website but aren’t marking them up with microdata, you’re missing out on a huge win.

The reason why is simple: You have a complicated site navigation, and bots that visit your site might not be reading the hierarchy correctly. By adding accurate breadcrumbs (and marking them up), we’re effectively telling Google, “Hey, I know this navigation is confusing, but please consider crawling our site in this manner.”
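A sketch of what that markup can look like, using schema.org's BreadcrumbList vocabulary in microdata (the category names mirror the example URLs used earlier):

```html
<ol itemscope itemtype="https://schema.org/BreadcrumbList">
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="/clothing/"><span itemprop="name">Clothing</span></a>
    <meta itemprop="position" content="1">
  </li>
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="/clothing/womens/"><span itemprop="name">Women's</span></a>
    <meta itemprop="position" content="2">
  </li>
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="/clothing/womens/dresses/"><span itemprop="name">Dresses</span></a>
    <meta itemprop="position" content="3">
  </li>
</ol>
```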

Enforcing a URL order for facet combinations

In extreme situations, you can come across a site that has a unique URL for every facet combination. For example, if you are on a laptop page and choose “red” and “SSD” (in that order) from the filters, the URL could be /laptops?color=red?SSD/. Now imagine if you chose the filters in the opposite order (first “SSD” then “red”) and the URL that’s generated is /laptops?SSD?color=red/.

This is really bad because it exponentially increases the amount of URLs you have. Avoid this by enforcing a specific order for URLs!
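One way to enforce that order is to normalize the query string server-side before generating links or canonical tags. A minimal Python sketch (the parameter names are hypothetical):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def canonical_facet_url(url):
    """Rebuild a faceted URL with its query parameters in one fixed
    (here: alphabetical) order, so the same facet combination always
    yields the same URL regardless of the order filters were clicked."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = sorted(parse_qsl(query))  # enforce a single canonical order
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))
```

With this in place, both click orders collapse to one URL: canonical_facet_url("/laptops?ssd=1&color=red") and canonical_facet_url("/laptops?color=red&ssd=1") both return "/laptops?color=red&ssd=1".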

Conclusions

My hope is that you feel more equipped (and have some ideas) on how to tackle controlling your faceted navigation in a way that benefits your search presence.

To summarize, here are the main takeaways:

  1. Faceted navigation can be great for users, but it's usually set up in a way that negatively impacts SEO.
  2. There are many reasons why faceted navigation can negatively impact SEO, but the top three are:
    1. Duplicate content
    2. Crawl budget being wasted
    3. Link equity not being used as effectively as it should be
  3. Boiled down further, the question we want to answer to begin approaching a solution is, “What are the ways we can control what Google crawls and indexes?”
  4. When it comes to a solution, there is no “one-size-fits-all” solution. There are numerous fixes (and combinations) that can be used. Most commonly:
    1. Noindex, follow
    2. Canonicalization
    3. Robots.txt
    4. Nofollow internal links to undesirable facets
    5. Avoiding the problem with an AJAX/JavaScript solution
  5. When trying to think of an ideal solution, the most important question you can ask yourself is, “What’s more important to our website: link equity, or crawl budget?” This can help focus your possible solutions.

I would love to hear any example setups. What have you found that’s worked well? Anything you’ve tried that has impacted your site negatively? Let’s discuss in the comments or feel free to shoot me a tweet.



[Case Study] How We Ranked #1 for a High-Volume Keyword in Under 3 Months

Posted by DmitryDragilev

This blog post was co-written with Brad Zomick, the former Director of Content Marketing at Pipedrive, where this case study took place.

It’s tough out there for SEOs and content marketers. With the sheer amount of quality content being produced, it has become nearly impossible to stand out in most industries.

Recently we were running content marketing for Pipedrive, a sales CRM. We created a content strategy that used educational sales content to educate and build trust with our target audience.

This was a great idea, in theory — we’d educate readers, establish trust, and turn some of our readers into customers.

The problem is that there are already countless others producing similar sales-focused content. We weren’t just competing against other startups for readers; we also had to contend with established companies, sales trainers, strategists, bloggers and large business sites.

The good news is that ranking a strategic keyword is still very much possible. It’s certainly not easy, but with the right process, anyone can rank for their target keyword.

Below, we’re going to show you the process we used to rank on page one for a high-volume keyword.

If you’re not sure about reading ahead, here is a quick summary:

We were able to rank #1 for a high-volume keyword: “sales management” (9,900 search volume). We outranked established sites including SalesManagement.org, Apptus, InsightSquared, Docurated, and even US News, Wikipedia, and the Bureau of Labor Statistics. We managed this through good old-fashioned content creation + outreach + guest posting, aka the “Skyscraper Technique.”

Here are the eight steps we took to reach our goal (click on a step to jump straight to that section):

  1. Select the right topic
  2. Create bad-ass content for our own blog
  3. Optimize on-page SEO & engagement metrics
  4. Build internal links
  5. Find people who would link to this content
  6. Ask people to link to our content
  7. Write guest posts on leading blogs
  8. Fine-tune content with TF-IDF

Before we start, understand that this is a labor-intensive process. Winning a top SERP spot required the focus of a 3-person team for the better part of 3 months.

If you’re willing to invest a similar amount of time and effort, read on!


Step 1: Finding a good topic

We wanted three things from our target keyword:

1. Significant keyword volume

If you’re going to spend months ranking for a single keyword, you need to pick something big enough to justify the effort.

In our case, we settled on a keyword with 9,900 searches each month as per the Keyword Planner (1k–10k range after the last update).

That same keyword registered a search volume of 1.7–2.9k in Moz Keyword Explorer, so take AdWords’ estimates with a grain of salt.

One way to settle on a target volume is to see it in terms of your conversion rate and buyer’s journey:

  • Buyer’s journey: Search volume decreases as customers move further along the buyer’s journey. Fewer searches are okay if you’re targeting Decision-stage keywords.
  • Conversion rate: The stronger your conversion rate at each stage of the buyer’s journey, the more you can get away with targeting a lower search volume keyword.

Also consider the actual traffic from the keyword, not just search volume.

For instance, we knew from Moz’s research that the first result gets about 30% of all clicks.

For a keyword with 9,900 monthly searches, that would translate into nearly 3,000 visitors per month for the top position.

If we could convert even 5% of those visitors into leads, we’d net close to 1,800 leads each year, which makes the keyword worth our time.
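The back-of-the-envelope math, spelled out (the 5% visitor-to-lead rate is the post's assumption, not a measured figure):

```python
search_volume = 9_900    # monthly searches (Keyword Planner)
top_result_ctr = 0.30    # ~30% of clicks go to the #1 result (Moz research)
lead_conversion = 0.05   # assumed visitor-to-lead conversion rate

monthly_visitors = search_volume * top_result_ctr       # ~2,970 visits/month
yearly_leads = monthly_visitors * lead_conversion * 12  # ~1,782 leads/year
```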

2. Pick a winnable topic

Some SERPs are incredibly competitive. For instance, if you’re trying to rank for “content marketing,” you’ll find that the first page is dominated by CMI (DA 84):

You might be able to fight out a first-page rank, but it’s really not worth the effort in 99% of cases.

So our second requirement was to see if we could actually rank for our shortlisted keywords.

This can be done in one of two ways:

Informal method

The old-fashioned way to gauge keyword difficulty is to simply eyeball SERPs for your selected keywords.

If you see a lot of older articles, web 1.0 pages, unrecognizable brands, and generic content sites, the keyword should be solid.

On the other hand, if the first page is dominated by big niche brands with in-depth articles, you’ll have a hard time ranking well.

I also recommend using the MozBar to check metrics on the fly. If you see a ton of high DA/PA pages, move on to another keyword.

In our case, the top results mostly consisted of generic content sites or newish domains.

Moz Keyword Explorer

Moz’s Keyword Explorer gives you a more quantifiable way to gauge keyword difficulty. You’ll get actual difficulty vs. potential scores.

Aim for a competitiveness score under 50 and opportunity/potential scores above 50. If a keyword falls outside those thresholds, keep looking.

Of course, if you have an established domain, you can target more difficult keywords.

Following this step, we had a shortlist of four keywords:

  1. sales techniques (8100)
  2. sales process (8100)
  3. sales management (9900)
  4. sales forecast (4400)

We could have honestly picked anything from this list, but for added impact, we decided to add another filter.

3. Strategic relevance

If you’re going to turn visitors into leads, it’s important to focus on keywords that are strategically relevant to your conversion goals.

In our case, we chose “sales management” as the target keyword.

We did this because Pipedrive is a sales management tool, so the keyword describes us perfectly.

Additionally, a small business owner searching for “sales management” has likely moved from Awareness to Consideration and thus, is one step closer to buying.

In contrast, “sales techniques” and “sales forecast” are keywords a sales person would search for, not a sales leader or small business owner (decision-makers).


Step 2: Writing a bad-ass piece of content

Content might not be king anymore, but it is still the foundation of good SEO. We wanted to get this part absolutely right.

Here’s the process we followed to create our content:

1. Extremely thorough research

We had a simple goal from the start: create something substantially better than anything in the top SERPs.

To get there, we started by reviewing every article ranking for “sales management,” noting what we liked and what we didn’t.

For instance, we liked how InsightSquared started the article with a substantive quote. We didn’t like how Apptus went overboard with headers.

We also looked for anomalies. One thing that caught our attention was that two of the top 10 results were dedicated to the keyword “sales manager.”

We took note of this and made sure to talk about “sales managers” in our article.

We also looked at related searches at the bottom of the page:

We also scoured more than 50 sales-related books for chapters about sales management.

Finally, we also talked to some real salespeople. This step helped us add expert insight that outsourced article writers just don’t have.

At the end, we had a superior outline of what we were going to write.

2. Content creation

You don’t need to be a subject matter expert to create an excellent piece of content.

What you do need is good writing skills… and the discipline to actually finish an article.

Adopt a journalistic style where you report insight from experts. This gives you a better end product, since you’re curating insight and often presenting it more clearly than the subject matter experts themselves would.

Unfortunately, there is no magic bullet to speed up the writing part — you’ll just have to grind it out. Set aside a few days at least to write anything substantive.

There are a few things we learned through the content creation experience:

  1. Don’t multi-task. Go all-in on writing and don’t stop until it’s done.
  2. Work alone. Writing is a solitary endeavor. Work in a place where you won’t be bothered by coworkers.
  3. Listen to ambient music. Search “homework edit” on YouTube for some ambient tracks, or use a site like Noisli.com

Take tip #1 as non-negotiable. We tried to juggle a couple of projects and finishing the article ended up taking two weeks. Learn from our mistake — focus on writing alone!

Before you hit publish, make sure to get some editorial feedback from someone on your team, or if possible, a professional editor.

We also added a note at the end of the article where we solicit feedback for future revisions.

If you can’t get access to editors, at the very least put your article through Grammarly.

3. Add lots of visuals and make content more readable

Getting visuals in B2B content can be surprisingly challenging. This is mostly due to the fact that there are a lot of abstract, hard-to-visualize concepts in B2B writing.

This is why we found a lot of blog posts like this with meaningless stock images:

To avoid this, we decided to use four custom images spread throughout the article.

We wanted to use visuals to:

  • Illustrate abstract concepts and ideas
  • Break up the content into more readable chunks
  • Emphasize key takeaways in a readily digestible format

We could have done even more — prolific content creators like Neil Patel often use images every 200–300 words.

Aside from imagery, there are a few other ways to break up and highlight text to make your content more readable.

  • Section headers
  • Bullets and numbered lists
  • Small paragraphs
  • Highlighted text
  • Blockquotes
  • Use simple words

We used most of these tactics, especially blockquotes to create sub-sections.

Given our audience — sales leaders and managers — we didn’t have to bother with dumbing down our writing. But if you’re worried that your writing is too complex, try using an app like Hemingway to edit your draft.


Step 3: Optimize on-page SEO and engagement metrics

Here’s what we did to optimize on-page SEO:

1. Fix title

We wanted traffic from people searching for keywords related to “sales management,” such as:

  • “Sales management definition” (currently #2)
  • “Sales management process” (currently #1)
  • “Sales management strategies” (currently #4)
  • “Sales management resources” (currently #3)

To make sure we tapped all these keywords, we changed our main H1 header tag to include the words definition, process, strategies, and resources.

These are called “modifiers” in SEO terms.

Google is now smart enough to know that a single article can cover multiple related keywords. Adding such modifiers helped us increase our potential traffic.
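For illustration, a modifier-rich H1 might look something like this (the exact wording here is hypothetical, not the article's actual title):

```html
<h1>Sales Management: Definition, Process, Strategies and Resources</h1>
```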

2. Fix section headers

Next, we used the right headers for each section:

Instead of writing “sales management definition,” we used an actual question a reader might ask.

Here’s why:

  • It makes the article easier to read
  • It’s a natural question, which makes it more likely to rank for voice searches and Google’s “answers”

We also peppered related keywords in headers throughout the article. Note how we used the keyword at the beginning of the header, not at the end:

We didn’t want to go overboard with the keywords. Our goal was to give readers something they’d actually want to read.

This is why our <h2> tag headers did not have any obvious keywords:

This helps the article read naturally while still using our target keywords.

3. Improve content engagement

Notice the colon and the line break at the very start of the article:

This is a “bucket brigade”: an old copywriting trick to grab a reader’s attention.

We used it at the beginning of the article to stop readers from hitting the back button and going back to Google (i.e. increase our dwell time).

We also added outgoing and internal links to the article.

4. Fix URL

According to research, shorter URLs tend to rank better than longer ones.

We didn’t pay a lot of attention to the URL length when we first started blogging.

Here’s one of our blog post URLs from 2013:

Not very nice, right?

For this post, we used a simple, keyword-rich URL:

Ideally, we wouldn’t have the /2016/05/ bit, but by now, it’s too late to change.

5. Improve keyword density

One common piece of on-page SEO advice is to add your keywords to the first 100 words of your content.

If you search for “sales management” on our site, this is what you’ll see:

If you were Googlebot, you’d have no confusion about what this article was about: sales management.

We also wanted to use related keywords in the article without it sounding over-optimized. Gaetano DiNardi, our SEO manager at the time, came up with a great solution to fix this:

We created a “resources” or “glossary” section to hit a number of related keywords while still being useful. Here’s an example:

It’s important to make these keyword mentions as organic as possible.

As a result of this on-page keyword optimization, traffic increased sharply.

We over-optimized keyword density in the beginning, which likely hurt rankings. Once we spotted this, we changed things around and saw an immediate improvement (more on this below).


Step 4: Build internal links to article

Building internal links to your new content can be surprisingly effective when promoting content.

As Moz has already written before:

“Internal links are most useful for establishing site architecture and spreading link juice.”

Essentially, these links:

  • Help Googlebot discover your content
  • Tell Google that a particular page is “important” on your site since a lot of pages point to it

Our approach to internal linking was highly strategic. We picked two kinds of pages:

1. Pages that had high traffic and Page Authority (PA). You can find these in Google Analytics under Behavior –> Site Content.

2. Pages where the keyword already existed unlinked. You can use this query to find such pages:

site:[yoursite.com] “your keyword”

In our case, searching for “sales management” showed us a number of mentions:
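If you want to automate that check, the core question is simply “does this page mention the keyword outside of an existing link?” Here’s a minimal sketch using Python’s standard-library HTML parser; the class and function names are illustrative, not from any tool we used:

```python
from html.parser import HTMLParser

class MentionFinder(HTMLParser):
    """Detect whether a keyword appears in text outside of any <a> tag."""
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword.lower()
        self.in_link = 0            # depth of open <a> tags
        self.unlinked_mention = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        # Only count mentions that are not already inside a link
        if not self.in_link and self.keyword in data.lower():
            self.unlinked_mention = True

def has_unlinked_mention(html, keyword):
    parser = MentionFinder(keyword)
    parser.feed(html)
    return parser.unlinked_mention

page = '<p>Good sales management matters. See our <a href="/guide">guide</a>.</p>'
print(has_unlinked_mention(page, "sales management"))  # True
```

Pages where this returns True are the internal-linking opportunities: the mention already exists, it just isn’t a link yet.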

After making a list of these pages, we dove into our CMS and added internal links by hand.

These new links from established posts showed Google that we thought of this page as “important.”


Step 5: Finding link targets

This is where things become more fun. In this step, we used our detective SEO skills to find targets for our outreach campaign.

There are multiple ways to approach this process, but the easiest — and the one we followed — is to simply find sites that had linked to our top competitors.

We used Open Site Explorer to pull backlinks for each of the top ten results.

By digging beyond the first page, we managed to build up a list of hundreds of prospects, which we exported to Excel.

This was still a very “raw” list. To maximize our outreach efficiency, we filtered out the following from our list:

  • Sites with DA under 30.
  • Sites on free blog hosts like Blogspot.com, WordPress.com, etc.

This gave us a highly targeted list of hundreds of prospects.
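The filtering itself is trivial to script once you’ve exported domains and DA scores. A minimal sketch, assuming a list of dicts exported from your spreadsheet (the field names are illustrative):

```python
FREE_HOSTS = ("blogspot.com", "wordpress.com")

def filter_prospects(prospects, min_da=30):
    """Keep prospects at or above a DA threshold that aren't on free blog hosts."""
    return [
        p for p in prospects
        if p["da"] >= min_da
        and not p["domain"].endswith(FREE_HOSTS)
    ]

raw = [
    {"domain": "salesblog.example.com", "da": 45},
    {"domain": "myblog.blogspot.com", "da": 52},
    {"domain": "tinysite.example.org", "da": 12},
]
print(filter_prospects(raw))
# [{'domain': 'salesblog.example.com', 'da': 45}]
```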

Here’s how we organized our Excel file:

Finding email addresses

Next step: find email addresses.

This has become much easier than it used to be thanks to a bunch of new tools. We used EmailHunter (Hunter.io) but you can also use VoilaNorbert, Email Finder, etc.

EmailHunter works by finding the pattern people use for emails on a domain name, like this:

To use this tool, you will need either the author’s name or the editor/webmaster’s name.

In some cases, the author of the article is clearly displayed.

If you can’t find the author’s name (common with guest posts), look for the site’s editor or content manager instead.

LinkedIn is very helpful here.

Try a query like this:

site:linkedin.com “Editor/Blog Editor” at “[SiteName]”

Once you have a name, plug the domain name into Hunter.io to get an email address guess of important contacts.
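Tools like Hunter.io essentially guess from the corporate patterns they’ve already observed on a domain. As a rough fallback without a tool, you can enumerate the usual patterns yourself; this is a minimal sketch of that idea (the pattern list is an assumption, not Hunter’s actual algorithm):

```python
def candidate_emails(first, last, domain):
    """Generate common corporate email patterns for a given name and domain."""
    f, l = first.lower(), last.lower()
    patterns = [
        f"{f}.{l}",    # jane.doe
        f"{f}{l}",     # janedoe
        f"{f[0]}{l}",  # jdoe
        f"{f}",        # jane
        f"{f}_{l}",    # jane_doe
    ]
    return [f"{p}@{domain}" for p in patterns]

print(candidate_emails("Jane", "Doe", "example.com"))
# ['jane.doe@example.com', 'janedoe@example.com', 'jdoe@example.com',
#  'jane@example.com', 'jane_doe@example.com']
```

You’d still want to verify candidates (e.g. with an email verification service) before sending anything.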


Step 6: Outreach like crazy

After all the data retrieval, prioritization, deduping, and cleanup, we were left with hundreds of contacts to reach out to.

To make things easier, we segmented our list into two categories:

  • Category 1: Low-quality, generic sites with poor domain authority. You can send email templates to them without any problems.
  • Category 2: Up-and-coming bloggers/authoritative sites we wanted to build relationships with. To these sites, we sent personalized emails by hand.

With the first category of sites, our goal was volume over personalization.

For the second category, our objective was to get a response. It didn’t matter whether we got a backlink or not — we wanted to start a conversation which could yield a link or, better, a relationship.

You can use a number of tools to make outreach easier. Here are a few of these tools:

  1. JustReachOut
  2. MixMax
  3. LeadIQ
  4. Toutapp
  5. Prospectify

We loved using a sales tool called MixMax. Its ability to mail merge outreach templates and track open rates works wonderfully well for SEO outreach.

If you’re looking for templates, here’s one email we sent out:

Let’s break it down:

  1. Curiosity-evoking subject line: Lowercase letters in the subject line make the email look authentic rather than automated. The “something missing” part evokes curiosity.
  2. Name drop familiar brands: Name dropping your relationship to familiar brands is another good way to show your legitimacy. It’s also a good idea to include a link to their article to jog their memory.
  3. What’s missing: The meat of the email. Make sure that you’re specific here.
  4. The “why”: Your prospects need a “because” to link to you. Give actual details as to what makes it great — in-depth research, new data, or maybe a quote or two from Rand Fishkin.
  5. Never demand a link: Asking for feedback first is a good way to show that you want a genuine conversation, not just a link.

This is just one example. We tested 3 different emails initially and used the best one for the rest of the campaign. Our response rate for the whole campaign was 42%.


Step 7: Be prepared to guest post

Does guest blogging still work?

If you’re doing it for traffic and authority, I say: go ahead. You are likely putting your best work out there on industry-leading blogs. Neither your readers nor Google will mind that.

In our case, guest blogging was already a part of our long-term content marketing strategy. The only thing we changed was adding links to our sales management post within guest posts.

Your guest post links should be contextually relevant, i.e. the topic of the post and the content you link to should match. Otherwise, Google might discount the link, even if it is dofollow.

Keep this in mind when you start a guest blogging campaign. Getting links isn’t enough; you need contextually relevant links.

Here are some of the guest posts we published:

  • 7 Keys to Scaling a Startup Globally [INC]
  • An Introduction to Activity-Based Selling [LinkedIn]
  • 7 Tips for MBAs Entering Sales Management Careers [TopMBA]

We weren’t exclusively promoting our sales management post in any of these guest posts. The sales management post just fit naturally into the context, so we linked to it.

If you’re guest blogging in 2017, this is the approach you need to adopt.


Step 8: Fine-tuning content with TF * IDF

After the article went live, we realized that we had heavily over-optimized it for the term “sales management.” It occurred 48 times throughout the article, far too many for a 2,500-word piece.

Moreover, we hadn’t always used the term naturally in the article.

To solve this problem, we turned to TF-IDF.

Recognizing TF-IDF as a ranking factor

TF-IDF (Term Frequency-Inverse Document Frequency) scores how important a word is to a document: it weighs how often the word appears in that document against how common the word is across a larger collection of documents.

This is a pretty standard statistical process in information retrieval. It is also one of the oldest ranking factors in Google’s algorithms.
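For the curious, the computation itself is only a few lines. Here is a minimal sketch over pre-tokenized documents (the toy data is illustrative, not our actual corpus):

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Score a term's importance in one document relative to a corpus.

    tf  = term frequency within the document
    idf = log(total docs / docs containing the term)
    """
    tf = Counter(doc)[term] / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

docs = [
    ["sales", "management", "guide", "sales"],
    ["marketing", "funnel", "basics"],
    ["sales", "pipeline", "tips"],
]
# "management" appears in only one doc, so it outscores the corpus-wide "sales"
print(tf_idf("management", docs[0], docs) > tf_idf("sales", docs[0], docs))  # True
```

The intuition for our case: terms that are everywhere in a corpus (like a keyword stuffed 48 times) carry less distinguishing weight than related terms that appear more selectively.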

Hypothesis: We hypothesized that dropping the number of “sales management” occurrences from 48 to 20 and replacing it with terms that have high lexical relevance would improve rankings.

Were we right?

See for yourself:

Our organic pageviews increased from nearly 0 to over 5,000 in just over 8 months.

Note that no new links or link acquisition initiatives were in progress during this mini-experiment.

Experiment timeline:

  • July 18th – Over-optimized keyword recognized.
  • July 25th – Content team finished updating body copy, H2s with relevant topics/synonyms.
  • July 26th – Updated internal anchor text to include relevant terms.
  • July 27th – Flushed cache & re-submitted to Search Console.
  • August 4th – Improved from #4 to #2 for “Sales Management”
  • August 17th – Improved from #2 to #1 for “Sales Management”

The results were fast. We were able to normalize our content and see results within weeks.

We’ll show you our exact process below.

Normalization process — How did we do it?

The normalization process focused on identifying over-optimized terms, replacing them with related words and submitting the new page to search engines.

Here’s how we did it:

1. Identifying over-optimized term(s)

We started off using Moz’s on-page optimization tool to scan our page.

According to Moz, we shouldn’t have used the target term — “sales management” — more than 15 times. This means we had to drop 33 occurrences.
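You can run a rough version of this audit yourself before reaching for a tool. Here is a minimal sketch that counts a phrase’s occurrences and its rate per 1,000 words (the synthetic article below is illustrative, not our actual copy):

```python
import re

def keyword_stats(text, phrase):
    """Return (occurrences, occurrences per 1,000 words) for a phrase."""
    lowered = text.lower()
    count = lowered.count(phrase.lower())
    words = len(re.findall(r"\w+", lowered))
    return count, round(1000 * count / words, 1)

# Synthetic 2,500-word "article" with 48 keyword occurrences, as in our case
article = "sales management " * 48 + "filler " * 2404
count, per_k = keyword_stats(article, "sales management")
print(count, per_k)  # 48 19.2
```

Run against a reasonable target (Moz suggested 15 occurrences at most for us), this tells you immediately how many mentions have to go.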

2. Finding synonymous terms with high lexical relevance

Next, we had to replace our 28+ mentions with synonyms that wouldn’t feel out of place.

We used Moz’s Keyword Explorer to get some ideas.

3. Removed “sales management” from H2 headings

Initially, we had the keyword in both H1 and H2 headings, which was just overkill.

We removed it from H2 headings and used lexically similar variants instead for better flow.

4. Diluted “sales management” from body copy

We used our list of lexically relevant words to bring the number of “sales management” occurrences down to under 20, a much better fit for a 2,500+ word article.

5. Diversify internal anchors

While we were changing our body copy, we realized that we also needed more anchor text diversity for our internal links.

Our anchor cloud was mostly “sales management” links:

We diversified this list by adding links to related terms like “sales manager,” “sales process,” etc.

6. Social amplification

We ramped up our activity on LinkedIn and Facebook to get the ball rolling on social shares.

The end result of this experimentation was an over 100% increase in traffic between August ‘16 and January ‘17.

The lesson?

Don’t just build backlinks — optimize your on-page content as well!


Conclusion

There’s a lot to learn from this case study. Some findings were surprising for us as well, particularly the impact of keyword density normalization.

While there are a lot of tricks and tactics detailed here, you’ll find that the fundamentals are essentially the same as what Rand and team have been preaching here for years. Create good content, reach out to link prospects, and use strategic guest posts to get your page to rank.

This might sound like a lot of work, but the results are worth it. Big industry players like Salesforce and Oracle actually advertise on AdWords for this term. While they have to pay for every single click, Pipedrive gets its clicks for free.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
