The Website Migration Guide: SEO Strategy & Process

Posted by Modestos

What is a site migration?

A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.

Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that they so often result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent to which search engine ranking signals have been affected, as well as how long it may take the affected business to roll out a successful recovery plan.

Quick access links

Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Appendix: Useful tools

Site migration examples

The following section discusses what both successful and unsuccessful site migrations look like and explains why it is 100% possible to come out of a site migration without suffering significant losses.

Debunking the “expected traffic drop” myth

Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (i.e. moving from an established domain to a brand new one) it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.

Examples of unsuccessful site migrations

The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.

Example of a poor site migration — recovery took 6 months!

But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.

Another example of a poor site migration — no signs of recovery 6 months on!

In fact, it’s entirely possible to migrate from HTTP to HTTPS without losing that much traffic, or for such a long period, aside from the first few weeks where there is high volatility as Google discovers the new URLs and updates search results.

Examples of successful site migrations

What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:

  1. Minimal visibility loss during the first few weeks (short-term goal)
  2. Visibility growth thereafter — depending on the type of migration (long-term goal)

The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.

The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.

As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.

Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.

Example of a very successful site migration — instant growth following new site launch!

This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.

In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.

Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.

Site migration types

There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.

Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:

  • Site moves with URL changes
  • Site moves without URL changes

Site move migrations


These typically occur when a site moves to a different URL due to any of the below:

Protocol change

A classic example is when migrating from HTTP to HTTPS.

Subdomain or subfolder change

Very common in international SEO where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is when a mobile site that sits on a separate subdomain or subfolder becomes responsive and the desktop and mobile URLs are unified.

Domain name change

Commonly occurs when a business is rebranding and must move from one domain to another.

Top-level domain change

This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from a ccTLD to .com, or from .com to a ccTLD, and so on.

Site structure changes

These are changes to the site architecture that usually affect the site’s internal linking and URL structure.

Other types of migrations

There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.

Replatforming

This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.

Content migrations

Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.

Mobile setup changes

With so many options available for a site’s mobile setup, changes such as enabling app indexing, building an AMP site, or building a PWA can also be considered partial site migrations, especially when an existing mobile site is being replaced by an app, AMP, or PWA.

Structural changes

These are often caused by major changes to the site’s taxonomy that impact the site navigation, internal linking, and user journeys.

Site redesigns

These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.

Hybrid migrations

In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.

Common site migration pitfalls

Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:

Poor strategy

Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.

Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.

Poor planning

Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.

Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser to launch during the quieter summer months.

Lack of resources

Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.

As a rule of thumb, try to include a buffer of at least 20% on top of the resources you initially estimate the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.

Lack of SEO/UX consultation

When changes are taking place on a website, every single decision needs to be weighed from both a UX and SEO standpoint. For instance, removing great amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and too few images may have a negative impact on user engagement and damage the site’s conversions.

To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.

Late involvement

Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.

Lack of testing

In addition to a great strategy and thoughtful plan, dedicate some time and effort to thorough testing before launching the site. It’s far preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.

Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.

Slow response to bug fixing

There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.

Underestimating scale

Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let’s launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.

It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.

Site migration process

The site migration process can be split into six essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.

Phase 1: Scope & Planning

Work out the project scope

Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.

A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.

However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.

Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.

You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.

Prepare the project plan

Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.

The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.

A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.

Phase 2: Pre-launch preparation

This phase covers any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.

Wireframes review

Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.

Preparing the technical SEO specifications

Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.

The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.

Make sure to include specific requirements that cover at least the following areas:

  • URL structure
  • Meta data (including dynamically generated default values)
  • Structured data
  • Canonicals and meta robots directives
  • Copy & headings
  • Main & secondary navigation
  • Internal linking (in any form)
  • Pagination
  • XML sitemap(s)
  • HTML sitemap
  • Hreflang (if there are international sites)
  • Mobile setup (including the app, AMP, or PWA site)
  • Redirects
  • Custom 404 page
  • JavaScript, CSS, and image files
  • Page loading times (for desktop & mobile)

The specification should also include areas of the CMS functionality that allow users to:

  • Specify custom URLs and override default ones
  • Update page titles
  • Update meta descriptions
  • Update any h1–h6 headings
  • Add or amend the default canonical tag
  • Set the meta robots attributes to index/noindex/follow/nofollow
  • Add or edit the alt text of each image
  • Include Open Graph fields for description, URL, image, type, sitename
  • Include Twitter Open Graph fields for card, URL, title, description, image
  • Bulk upload or amend redirects
  • Update the robots.txt file

It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).

Identifying priority pages

One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.

In order to do this, you need to:

  1. Crawl the legacy site
  2. Identify all indexable pages
  3. Identify top performing pages

How to crawl the legacy site

Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:

  • Ignore robots.txt (in case any vital parts are accidentally blocked)
  • Follow internal “nofollow” links (so the crawler reaches more pages)
  • Crawl all subdomains (depending on scope)
  • Crawl outside start folder (depending on scope)
  • Change the user agent to Googlebot (desktop)
  • Change the user agent to Googlebot (smartphone)

Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.

How to identify the indexable pages

Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:

  • Return a 200 server response
  • Either do not have a canonical tag or have a self-referring canonical URL
  • Do not have a meta robots noindex
  • Aren’t excluded from the robots.txt file
  • Are internally linked from other pages (non-orphan pages)

The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
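
As an illustration only, if your crawler can export its results to CSV with columns such as url, status_code, canonical_url, meta_robots, and inlinks (column names vary by tool and are assumptions here), a short script along these lines could shortlist the indexable pages:

import csv

indexable = []
with open("legacy_crawl.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        is_200 = row["status_code"] == "200"
        self_canonical = row["canonical_url"] in ("", row["url"])
        not_noindex = "noindex" not in row["meta_robots"].lower()
        has_inlinks = int(row["inlinks"] or 0) > 0  # exclude orphan pages
        if is_200 and self_canonical and not_noindex and has_inlinks:
            indexable.append(row["url"])

# Note: robots.txt exclusions still need checking separately (e.g. with urllib.robotparser).
with open("indexable_pages.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["url"])
    writer.writerows([url] for url in indexable)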

How to identify the top performing pages

Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.

If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.

It’s recommended to prepare a spreadsheet that includes the below fields:

  • Legacy URL (include only the indexable ones from the crawl data)
  • Organic visits during the last 12 months (Analytics)
  • Revenue, conversions, and conversion rate during the last 12 months (Analytics)
  • Pageviews during the last 12 months (Analytics)
  • Number of clicks from the last 90 days (Search Console)
  • Top linked pages (Majestic SEO/Ahrefs)

With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
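
If the various exports are saved as CSV files, a rough pandas sketch like the one below can combine them into a single prioritization sheet; the file and column names are placeholders and will differ depending on your own exports:

import pandas as pd

# Placeholder file and column names; adjust to match your exports.
crawl = pd.read_csv("indexable_pages.csv")           # url
analytics = pd.read_csv("analytics_12_months.csv")   # url, organic_visits, revenue, conversions, pageviews
search_console = pd.read_csv("gsc_90_days.csv")      # url, clicks
backlinks = pd.read_csv("top_linked_pages.csv")      # url, referring_domains

priority = (
    crawl.merge(analytics, on="url", how="left")
         .merge(search_console, on="url", how="left")
         .merge(backlinks, on="url", how="left")
         .fillna(0)
         .sort_values(["organic_visits", "revenue"], ascending=False)
)
priority.to_csv("priority_pages.csv", index=False)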

The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will negatively be affected.


Benchmarking the legacy site’s performance

Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.

Keywords rank tracking

If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle figuring out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.

Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).

If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.

Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.

Site performance

The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.

It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.

[Benchmark table templates, one per tool: rows for the category, subcategory, and product page templates, with columns for metrics such as the optimization score.]

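One way to make this benchmarking repeatable is to script it against the PageSpeed Insights API (v5). The sketch below is illustrative only; the template URLs are placeholders, the response field names are taken from the v5 API and worth double-checking, and for regular use you’d want to add an API key:

import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
templates = {
    "Category page": "https://www.example.com/category/",        # placeholder URLs
    "Subcategory page": "https://www.example.com/category/sub/",
    "Product page": "https://www.example.com/product/example/",
}

for name, url in templates.items():
    for strategy in ("desktop", "mobile"):
        data = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": strategy}).json()
        score = data["lighthouseResult"]["categories"]["performance"]["score"]
        print(f"{name} ({strategy}): performance score {score * 100:.0f}")
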
Old site crawl data

A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.

Search Console data

Also consider exporting as much of the old site’s Search Console data as possible. These are only available for 90 days, and chances are that once the new site goes live the old site’s Search Console data will disappear sooner or later. Data worth exporting includes:

  • Search analytics queries & pages
  • Crawl errors
  • Blocked resources
  • Mobile usability issues
  • URL parameters
  • Structured data errors
  • Links to your site
  • Internal links
  • Index status

Redirects preparation

The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.

Why are redirects important in site migrations?

Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.

What happens when redirects aren’t correctly implemented?

When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.

301, 302, JavaScript redirects, or meta refresh?

When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.

302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.

Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.

If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.

Redirect mapping process

If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.

The redirect mapping file is a spreadsheet that includes the following two columns:

  • Legacy site URL –> a page’s URL on the old site.
  • New site URL –> a page’s URL on the new site.

When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.

Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.

Increasing efficiencies during the redirect mapping process

Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
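
As a rough illustration of that approach, assuming both the legacy and staging crawls have been exported to CSV with url and h1 columns (assumed names), matching on the H1 could produce a first-pass mapping, with anything left unmatched handled manually:

import pandas as pd

legacy = pd.read_csv("legacy_crawl.csv")   # columns: url, h1
new = pd.read_csv("staging_crawl.csv")     # columns: url, h1

# Only match on an attribute that is unique on both sites, otherwise the mapping will be wrong.
legacy = legacy.drop_duplicates(subset="h1", keep=False)
new = new.drop_duplicates(subset="h1", keep=False)

mapping = legacy.merge(new, on="h1", suffixes=("_legacy", "_new"))
mapping = mapping.rename(columns={"url_legacy": "legacy_url", "url_new": "new_url"})
mapping[["legacy_url", "new_url"]].to_csv("redirect_mapping.csv", index=False)

# Anything that couldn't be matched automatically still needs to be mapped by hand.
unmatched = legacy[~legacy["h1"].isin(new["h1"])]
unmatched[["url"]].to_csv("unmatched_legacy_urls.csv", index=False)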

Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.

Don’t forget the legacy redirects!

You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.

Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.

For example:

URL A redirects to URL B (legacy redirect)

URL B redirects to URL C (new redirect)

Which results in the following redirect chain:

URL A –> URL B –> URL C

To eliminate this, amend the existing legacy redirect and create a new one so that:

URL A redirects to URL C (amended legacy redirect)

URL B redirects to URL C (new redirect)

Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “New site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines, making them instant traffic, conversion, and ranking killers!
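
Both the chain and loop checks can be scripted against the mapping spreadsheet. A minimal sketch, assuming the two columns are named legacy_url and new_url:

import pandas as pd

mapping = pd.read_csv("redirect_mapping.csv")  # columns: legacy_url, new_url

# Redirect loops: the legacy URL and its destination are identical.
loops = mapping[mapping["legacy_url"] == mapping["new_url"]]

# Potential chains: a destination URL also appears as a legacy URL in another row.
chains = mapping[
    mapping["new_url"].isin(mapping["legacy_url"])
    & (mapping["new_url"] != mapping["legacy_url"])
]

print(f"{len(loops)} loops and {len(chains)} potential chains found")
loops.to_csv("redirect_loops.csv", index=False)
chains.to_csv("redirect_chains.csv", index=False)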

Implement blanket redirect rules to avoid duplicate content

It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.

In any case, there are some standard redirect rules that should be in place to avoid generating duplicate content issues. These typically ensure that only one version of each URL resolves with a 200 response, covering the protocol (HTTP vs. HTTPS), hostname (www vs. non-www), trailing slashes, and character case, with every other variant 301-redirecting to the canonical version.

Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
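
The exact rules depend on which URL format the new site treats as canonical and are normally implemented at web server level, but the underlying logic is simple. Purely as an illustration (the canonical protocol and hostname below are assumptions), a blanket rule normalizing any requested variant would behave like this:

from urllib.parse import urlsplit, urlunsplit

CANONICAL_SCHEME = "https"          # assumption: HTTPS is the canonical protocol
CANONICAL_HOST = "www.example.com"  # assumption: www is the canonical hostname

def canonical_url(requested):
    parts = urlsplit(requested)
    path = parts.path.lower().rstrip("/") or "/"  # lowercase paths, no trailing slash
    return urlunsplit((CANONICAL_SCHEME, CANONICAL_HOST, path, parts.query, ""))

# Any of these variants should 301 to the same canonical URL:
for variant in ("http://example.com/Shoes/", "https://www.example.com/shoes"):
    print(variant, "->", canonical_url(variant))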

Avoid internal redirects

Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.

Don’t forget your image files

If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.

Phase 3: Pre-launch testing

The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified from as early as the prototypes or wireframes design. Content-related issues between the old and new site or content inconsistencies (e.g. between the desktop and mobile site) could also be identified at an early stage. But the more technical components should only be tested once fully implemented — things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost-effective, will require more resources, and can cause significant delays. Poor testing, or not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance, can have disastrous consequences soon after the new site has gone live.

Making sure search engines cannot access the staging/test site

Before making the new site available on a staging/testing environment, take precautions to ensure that search engines cannot crawl and index it. There are a few different ways to do this, each with different pros and cons.

Site available to specific IPs (most recommended)

Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.

Password protection

Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.

Robots.txt blocking

Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.

User-agent: *
Disallow: /

One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.

User journey review

If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and try to get some feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.

On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.

Site architecture review

A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.

Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.

Meta data & copy review

Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.

Internal linking review

Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:

  • Main & secondary navigation
  • Header & footer links
  • Body content links
  • Pagination links
  • Horizontal links (related articles, similar products, etc)
  • Vertical links (e.g. breadcrumb navigation)
  • Cross-site links (e.g. links across international sites)

Technical checks

A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.

Robots.txt file review

Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:

Disallow: /

If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.

But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.

When preparing the new site’s robots.txt file, make sure that:

  • It doesn’t block search engine access to pages that are intended to get indexed.
  • It doesn’t block any JavaScript or CSS resources search engines require to render page content.
  • The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
  • It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
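
To sanity-check the first point, Python’s built-in robotparser can be run against the staging robots.txt with a sample of URLs that must remain crawlable; the staging hostname and sample URLs below are placeholders:

from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://staging.example.com/robots.txt")  # placeholder staging host
parser.read()

must_be_crawlable = [
    "https://staging.example.com/",
    "https://staging.example.com/category/",
    "https://staging.example.com/product/example/",
]
for url in must_be_crawlable:
    if not parser.can_fetch("Googlebot", url):
        print(f"Blocked for Googlebot: {url}")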

Canonical tags review

Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag that is pointing to another URL and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.
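
A quick way to check the canonical destinations, assuming the crawler export includes a canonical_url column (an assumed name), is to request each distinct canonical URL and flag anything that doesn’t return a 200:

import csv
import requests

canonicals = set()
with open("staging_crawl.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row["canonical_url"]:
            canonicals.add(row["canonical_url"])

for url in sorted(canonicals):
    # Some servers handle HEAD poorly; switch to requests.get if results look odd.
    status = requests.head(url, allow_redirects=False).status_code
    if status != 200:
        print(f"{status} {url}")  # 3xx/4xx/5xx canonical destinations need updating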

Meta robots review

Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.

XML sitemaps review

Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.

You should check each XML sitemap to make sure that:

  • It validates without issues
  • It is encoded as UTF-8
  • It does not contain more than 50,000 rows
  • Its size does not exceed 50MBs when uncompressed

If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.
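
Splitting a large URL list is straightforward to script. A minimal sketch that writes sequentially numbered sitemap files of up to 50,000 URLs each (in practice you’d also generate a sitemap index file referencing them):

from xml.sax.saxutils import escape

def write_sitemaps(urls, prefix="sitemap", max_urls=50000):
    for i in range(0, len(urls), max_urls):
        chunk = urls[i:i + max_urls]
        with open(f"{prefix}-{i // max_urls + 1}.xml", "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in chunk:
                f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
            f.write("</urlset>\n")

write_sitemaps(["https://www.example.com/"])  # placeholder URL list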

In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:

  • 3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
  • Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
  • Canonicalized pages (apart from self-referring canonical URLs)
  • Pages with a meta robots noindex directive
<!DOCTYPE html>
<meta name="robots" content="noindex" />
  • Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Fri, 10 Nov 2017 17:12:43 GMT
X-Robots-Tag: noindex
  • Pages blocked from the robots.txt file

Building clean XML sitemaps can help monitor the true indexing levels of the new site once it goes live. If you don’t, it will be very difficult to spot any indexing issues.

Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.

HTML sitemap review

Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.

The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.

For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.

The NYTimes HTML sitemap (level 1)

The NYTimes HTML sitemap (level 2)

Structured data review

Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.

Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.

Google’s Structured Data Testing Tool in action

The tool will only report any existing errors but not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors you should also make sure that each page template includes the appropriate structured data markup for its content type.

Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.

JavaScript crawling review

You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.

As Bartosz Góralewicz’s tests proved, even if Google is able to crawl and index JavaScript-generated content, that doesn’t hold across all major JavaScript frameworks. Bartosz’s findings showed that some JavaScript frameworks are not SEO-friendly, with AngularJS being the most problematic of all at the time of testing.

Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.

Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.

Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!

Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.

Mobile site SEO review

Assets blocking review

First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.

Mobile-first index review

In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:

  • Page titles
  • Meta descriptions
  • Headings
  • Copy
  • Canonical tags
  • Meta robots attributes (i.e. noindex, nofollow)
  • Internal links
  • Structured data

A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
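
A spot check of these attributes can be scripted by requesting the same pages with desktop and smartphone user agents and comparing the key elements. A rough sketch using requests and BeautifulSoup; the URL and user-agent strings are illustrative placeholders:

import requests
from bs4 import BeautifulSoup

DESKTOP_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
MOBILE_UA = "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36"

def key_elements(url, user_agent):
    html = requests.get(url, headers={"User-Agent": user_agent}).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "h1": [h.get_text(strip=True) for h in soup.find_all("h1")],
        "canonical": [link.get("href") for link in soup.find_all("link", rel="canonical")],
        "meta robots": [m.get("content") for m in soup.find_all("meta", attrs={"name": "robots"})],
    }

url = "https://staging.example.com/category/"  # placeholder URL
desktop, mobile = key_elements(url, DESKTOP_UA), key_elements(url, MOBILE_UA)
for field, value in desktop.items():
    if value != mobile[field]:
        print(f"Mismatch on {field}: desktop={value} mobile={mobile[field]}")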

In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.

Responsive site review

A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.

Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.

To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page.

<meta name="viewport" content="width=device-width, initial-scale=1.0">

If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.

Separate mobile URLs review

If the mobile website uses separate URLs from desktop, make sure that:

  1. Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL.
  2. Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
  3. When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
  4. Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
  5. There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages and those found on a mobile page should only link to other mobile pages.
  6. The mobile URLs return a 200 server response.

Dynamic serving review

Dynamic serving websites serve different code to each device, but on the same URL.

On dynamic serving websites, review whether the Vary HTTP response header has been correctly set up (i.e. Vary: User-Agent). This is necessary because dynamic serving websites alter the HTML for mobile user agents, and the Vary header helps Googlebot discover the mobile content.

Mobile-friendliness review

Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:

  1. The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
  2. The font size isn’t too small.
  3. Touch elements (i.e. buttons, links) aren’t too close.
  4. There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, app download pop-ups, etc. To avoid any issues, you should either use a small HTML or image banner.
  5. Mobile pages aren’t too slow to load (see next section).

Google’s mobile-friendly test tool can help diagnose most of the above issues:

Google’s mobile-friendly test tool in action

AMP site review

If there is an AMP website and a desktop version of the site is available, make sure that:

  • Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL.
  • Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
  • Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.

You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.

Mixed content errors

With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.

Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.

Mixed content errors in Chrome’s JavaScript Console

There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
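
As a rough first pass, a short script can also flag hard-coded http:// references in a page’s source, although it won’t catch assets injected by JavaScript at runtime; the URL below is a placeholder:

import re
import requests

page = "https://staging.example.com/"  # placeholder URL
html = requests.get(page).text

# Hard-coded insecure references in src/href attributes.
# Note: plain <a href> links to HTTP pages aren't mixed content, so review the matches manually.
insecure = re.findall(r'(?:src|href)=["\'](http://[^"\']+)', html)
for url in sorted(set(insecure)):
    print("Insecure reference:", url)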

Image assets review

Google crawls images less frequently than HTML pages. If migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to help Google discover the migrated images more quickly. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page an image appears on and the image file itself have to get indexed.
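
As a starting point, an image XML sitemap can be generated with a few lines of code using Google’s image sitemap extension namespace. A minimal sketch; the page and CDN URLs are placeholders, and in practice you would feed the mapping from a crawl or your database:

from xml.etree import ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
IMAGE_NS = "http://www.google.com/schemas/sitemap-image/1.1"

# Map of page URL -> image URLs hosted at the new location (e.g. a CDN)
pages = {
    "https://www.example.com/product/blue-widget": [
        "https://cdn.example.com/images/blue-widget-front.jpg",
        "https://cdn.example.com/images/blue-widget-side.jpg",
    ],
}

ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("image", IMAGE_NS)
urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")

for page, images in pages.items():
    url_el = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
    ET.SubElement(url_el, f"{{{SITEMAP_NS}}}loc").text = page
    for img in images:
        img_el = ET.SubElement(url_el, f"{{{IMAGE_NS}}}image")
        ET.SubElement(img_el, f"{{{IMAGE_NS}}}loc").text = img

ET.ElementTree(urlset).write("image-sitemap.xml", encoding="utf-8", xml_declaration=True)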

Site performance review

Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.

Analytics tracking review

Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.

Redirects testing

Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.

Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues (a minimal checker sketch follows the list):

  • Redirect loops (a URL that infinitely redirects to itself)
  • Redirects with a 4xx or 5xx server response.
  • Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
  • Canonical URLs that return a 4xx or 5xx server response.
  • Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
  • Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
  • Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
  • Leading/trailing whitespace characters. Use trim() in Excel to eliminate them.
  • Invalid characters in URLs.
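
The sketch below walks the first few items on the list: it follows each redirect one hop at a time and reports loops, chains, and bad final responses. It assumes the requests package and reads the old URLs from a plain text file (one per line); the filename and the ten-hop limit are arbitrary placeholders.

from urllib.parse import urljoin

import requests

MAX_HOPS = 10

def trace(url):
    """Follow redirects one hop at a time and report loops, chains, and bad final responses."""
    chain = [url]
    current = url
    for _ in range(MAX_HOPS):
        resp = requests.get(current, allow_redirects=False, timeout=10)
        if resp.status_code in (301, 302, 303, 307, 308):
            target = urljoin(current, resp.headers.get("Location", ""))
            if target in chain:
                return chain + [target], "redirect loop"
            chain.append(target)
            current = target
        else:
            issues = []
            if len(chain) > 2:
                issues.append("redirect chain")
            if resp.status_code >= 400:
                issues.append(f"{resp.status_code} at final URL")
            return chain, ", ".join(issues) or "ok"
    return chain, "too many hops"

with open("old_urls.txt") as f:
    for line in f:
        url = line.strip()
        if url:
            chain, verdict = trace(url)
            print(verdict, "|", " -> ".join(chain))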

Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth doing. The fact that a URL redirects does not mean it redirects to the right page.

Phase 4: Launch day activities

When the site is down…

While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.

If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
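
How this is implemented depends entirely on your stack, but as an illustration, here is a minimal Flask sketch of a holding page that returns a 503 together with a Retry-After header; the one-hour value and the message are placeholders.

from flask import Flask, Response

app = Flask(__name__)

HOLDING_PAGE = """<!doctype html>
<title>Down for maintenance</title>
<p>We're making some improvements and will be back shortly.</p>
"""

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def maintenance(path):
    # 503 tells crawlers the outage is temporary; Retry-After hints at when to come back.
    return Response(HOLDING_PAGE, status=503,
                    headers={"Retry-After": "3600", "Content-Type": "text/html; charset=utf-8"})

if __name__ == "__main__":
    app.run()

In most real setups the equivalent rule would live in the web server or load balancer configuration rather than the application, but the principle is the same: every URL answers with a 503 and a holding page until the switchover is complete.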

Technical spot checks

As soon as the new site has gone live, take a quick look at:

  1. The robots.txt file to make sure search engines are not blocked from crawling
  2. Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
  3. Top pages canonical tags
  4. Top pages server responses
  5. Noindex/nofollow directives, in case they are unintentional

The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
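
These spot checks also lend themselves to a small script. A rough sketch that prints the robots.txt file and then reports the status code, canonical tag, and robots meta tag for a handful of top URLs (the URL list is a placeholder; run it with both desktop and mobile user agents if the site isn’t responsive):

import requests
from bs4 import BeautifulSoup

TOP_URLS = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets",
    "https://www.example.com/product/blue-widget",
]

# Eyeball robots.txt first to confirm search engines aren't blocked
print(requests.get("https://www.example.com/robots.txt", timeout=10).text)

for url in TOP_URLS:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    canonical = next((l.get("href") for l in soup.find_all("link")
                      if "canonical" in (l.get("rel") or [])), None)
    robots_meta = soup.find("meta", attrs={"name": "robots"})
    print(url)
    print("  final URL:", resp.url, "| status:", resp.status_code)
    print("  canonical:", canonical or "missing")
    print("  robots meta:", robots_meta.get("content") if robots_meta else "none")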

Search Console actions

The following activities should take place as soon as the new website has gone live:

  1. Test & upload the XML sitemap(s)
  2. Set the Preferred location of the domain (www or non-www)
  3. Set the International targeting (if applicable)
  4. Configure the URL parameters to tackle any potential duplicate content issues early.
  5. Upload the Disavow file (if applicable)
  6. Use the Change of Address tool (if switching domains)

Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.

Blocked resources prevent Googlebot from rendering the content of the page

Phase 5: Post-launch review

Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.

However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.

In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.

Check crawl stats and server logs

Keep an eye on the crawl stats available in the Search Console to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to increase the average number of pages it crawls per day. But if you can’t spot a spike around the time of the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.

Crawl stats on Google’s Search Console

Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and OnCrawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
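
Even before reaching for those tools, a simple script can surface the basics. The sketch below counts Googlebot requests per day, per status code, and per URL from an access log in the combined format; the log path and format are assumptions (adjust the parsing to your server), and in a serious analysis you would verify Googlebot via reverse DNS rather than trusting the user agent string.

import re
from collections import Counter

LOG_PATH = "access.log"  # placeholder path

# Loose parse of a combined-format line: ... [day/Mon/year:time] "METHOD path HTTP/x" status ... "user agent"
line_re = re.compile(r'\[(\d{2}/\w{3}/\d{4}):.*?"\w+ (\S+) [^"]*" (\d{3}) .*"([^"]*)"$')

hits_per_day = Counter()
status_counts = Counter()
top_paths = Counter()

with open(LOG_PATH, encoding="utf-8", errors="ignore") as f:
    for line in f:
        m = line_re.search(line)
        if not m:
            continue
        day, path, status, user_agent = m.groups()
        if "Googlebot" in user_agent:
            hits_per_day[day] += 1
            status_counts[status] += 1
            top_paths[path] += 1

print("Googlebot hits per day:", dict(hits_per_day))
print("Status codes crawled:", dict(status_counts))
print("Most crawled URLs:", top_paths.most_common(10))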

Review crawl errors regularly

Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (e.g. implementing additional 301 redirects, fixing soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.

Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!

Other useful Search Console features

Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).

Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.

Measuring site speed

Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, you should take some immediate action; otherwise, your site’s traffic and conversions will almost certainly take a hit.

Evaluating speed using Google’s tools

Two tools that can help with this are Google’s Lighthouse and PageSpeed Insights.

The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks to see if a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:

  • Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
  • Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
  • Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
  • Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
  • Optimization suggestions: A list of best practices that could be applied to a page.

Google’s PageSpeed Insights in action
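
PageSpeed Insights also has an API, which makes it easy to record scores for your key templates before and after launch and compare them over time. The sketch below is a rough illustration only: the v5 endpoint and response fields are quoted from memory and may differ from the current API reference, and the API key and URL are placeholders.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

resp = requests.get(ENDPOINT, params={
    "url": "https://www.example.com/category/widgets",
    "strategy": "mobile",
    "key": API_KEY,
}, timeout=60)
data = resp.json()

# Field data from the Chrome User Experience Report, when available for the URL
loading = data.get("loadingExperience", {})
print("CrUX overall category:", loading.get("overall_category"))

# Lab data from the embedded Lighthouse run
performance = data.get("lighthouseResult", {}).get("categories", {}).get("performance", {})
print("Lighthouse performance score:", performance.get("score"))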

Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:

  • First Meaningful Paint, which measures when the primary content of a page is visible.
  • Time to Interactive, the point at which the page is ready for a user to interact with it.
  • Speed Index, which shows how quickly a page is visibly populated.

Both tools provide recommendations to help improve any reported site performance issues.

Google’s Lighthouse in action

You can also use this Google tool to get a rough estimate of the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.

The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.

Measuring speed from real users

Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.

In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The map below illustrates how visitors experience very different loading times depending on their geographic location. In this example, page loading times appear satisfactory for visitors from the UK, US, and Germany, but are much higher for users in other countries.

Phase 6: Measuring site migration performance

When to measure

Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.

In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and acclimatize to the new taxonomy, user journeys, etc. Such changes initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, drawing data-driven conclusions about the new site’s UX from this early period can be risky.

But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.

How to measure

Performance measurement is very important, and even though business stakeholders may only be interested in hearing about the revenue and traffic impact, there is a whole host of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:

  • Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
  • Desktop and mobile rankings (from any reliable rank tracking tool)
  • User engagement (bounce rate, average time on page)
  • Sessions per page type (i.e. are the category pages driving as many sessions as before?)
  • Conversion rate per page type (i.e. are the product pages converting the same way as before?)
  • Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)

Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:

  • Number of indexed pages (Search Console)
  • Submitted vs indexed pages in XML sitemaps (Search Console)
  • Pages receiving at least one visit (analytics)
  • Site speed (PageSpeed Insights, Lighthouse, Google Analytics)

It’s only after you’ve looked into all the above areas that you can safely conclude whether your migration has been successful or not.

Good luck and if you need any consultation or assistance with your site migration, please get in touch!

Appendix: Useful tools


  • Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
  • Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
  • Deep Crawl: Cloud-based crawler with the ability to crawl staging sites and make crawl comparisons. Allows for comparisons between different crawls and copes well with large websites.
  • Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
  • OnCrawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.

Handy Chrome add-ons

  • Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
  • User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
  • Ayima Redirect Path: A great header and redirect checker.
  • SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
  • Scraper: An easy way to scrape website data into a spreadsheet.

Site monitoring tools

  • Uptime Robot: Free website uptime monitoring.
  • Robotto: Free robots.txt monitoring tool.
  • Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
  • SEO Radar: Monitors all critical SEO elements and fires alerts when these change.

Site performance tools

  • PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
  • Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
  • WebPagetest: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.

Structured data testing tools

Mobile testing tools

Backlink data sources



Declining Organic Traffic? How to Tell if it’s a Tracking or Optimization Issue

Posted by andrewchoco

Picture this scenario. You’re a new employee who has just been brought in to a struggling marketing department (or an agency brought on to help recover lost numbers). You get access to Google Analytics, and see something like this:

(Actual screenshot of the client I audited)

This can generate two types of emotional response: excitement or fear (or both). The steady decline in organic traffic excites you because you have so many tactics and ideas that you think can save this company from spiraling downward out of control. But there’s also the fear that these tactics won’t be enough to correct the course.

Regardless of whether these new tactics would work or not, it’s important to understand the history of the account and determine not only what is happening, but why.

The company may have an idea of why the traffic is declining (i.e. competitors have come in and made ranking for keywords much harder, or they did a website redesign and have never recovered).

Essentially, this boils down to two things: 1) either you’re genuinely struggling with organic optimization, or 2) something was off with your tracking in Google Analytics, has since been corrected, and the correction was never caught.

In this article, I’ll go over an audit I did for one of my clients to help determine if the decline we saw in organic traffic was due to actual poor SEO performance, an influx in competitors, tracking issues, or a combination of these things.

I’ll be breaking it down into five different areas of investigation:

  1. Keyword ranking differences from 2015–2017
    1. Did the keywords we were ranking for in 2015 change drastically in 2017? Did we lose rankings and therefore lose organic traffic?
  2. Top organic landing pages from 2015–2017
    1. Are the top ranking organic landing pages the same currently as they were in 2015? Are we missing any pages due to a website redesign?
  3. On-page metrics
    1. Did something happen to the site speed, bounce rate, page views, etc.?
  4. SEMrush/Moz keyword, traffic, and domain authority data
    1. Looking at the SEMrush organic traffic cost metric as well as Moz metrics like Domain Authority and competitors.
  5. Goal completions
    1. Did our conversion numbers stay consistent throughout the traffic drop? Or did the conversions drop in correlation with the traffic drop?

By the end of this post, my goal is that you’ll be able to replicate this audit to determine exactly what’s causing your organic traffic decline and how to get back on the right track.

Let’s dive in!

Keyword ranking differences from 2015–2017

This was the starting point for my audit. I started here specifically because the most obvious explanation for a decline in traffic is a decline in keyword rankings.

I wanted to look at what keywords we were ranking for in 2015 to see if we significantly dropped in the rankings or if the search volume had dropped. If the company you’re auditing has had a long-running Moz account, start by looking at the keyword rankings from the initial start of the decline, compared to current keyword rankings.

I exported keyword data from both SEMrush and Moz, and looked specifically at the ranking changes of core keywords.

March was a particularly strong month across the board, so I narrowed it down and exported the keyword rankings in:

  • March 2015
  • March 2016
  • March 2017
  • December 2017 (so I could get the most current rankings)

Once the keywords were exported, I went in and highlighted in red the keywords that we were ranking for in 2015 (and driving traffic from) that we were no longer ranking for in 2017. I also highlighted in yellow the keywords we were ranking for in 2015 that were still ranking in 2017.

2015 keywords:

2017 keywords:

(Brand-related queries and URLs are blurred out for anonymity)

One thing that immediately stood out: in 2015, this company was ranking for five keywords that included the word “free.” They have since changed their offering, so it made sense that in 2017, we weren’t ranking for those keywords.

After removing the free queries, we pulled the “core” keywords to look at their differences.

March 2015 core keywords:

  • Appointment scheduling software: position 9
  • Online appointment scheduling: position 11
  • Online appointment scheduling: position 9
  • Online scheduling software: position 9
  • Online scheduler: position 9
  • Online scheduling: position 13

December 2017 core keywords:

  • Appointment scheduler: position 11
  • Appointment scheduling software: position 10
  • Online schedule: position 6
  • Online appointment scheduler: position 11
  • Online appointment scheduling: position 12
  • Online scheduling software: position 12
  • Online scheduling tool: position 10
  • Online scheduling: position 15
  • SaaS appointment scheduling: position 2

There were no particular red flags here. While some of the keywords have moved down 1–2 spots, we had new ones jump up. These small changes in movement didn’t explain the nearly 30–40% drop in organic traffic. I checked this off my list and moved on to organic landing pages.

Top organic landing page changes

Since the dive into keyword rankings didn’t provide the answer for the decline in traffic, the next thing I looked at were the organic landing pages. I knew this client had switched over CMS systems in early 2017, and had done a few small redesign projects the past three years.

After exporting our organic landing pages for 2015, 2016, and 2017, we compared the top ten (by organic sessions) and got the following results.

2015 top organic landing pages:

2016 top organic landing pages:

2017 top organic landing pages:

Because of their redesign, you can see that the subfolders changed between 2015/2016 and 2017. What really got my attention, however, was the /get-started page. In 2015/2016, the Get Started page accounted for nearly 16% of all organic traffic. In 2017, it was nowhere to be found.

If you run into this problem and notice there are pages missing from your current top organic pages, a great way to uncover why is to use the Wayback Machine. It’s a great tool that allows you to see what a web page looked like in the past.

When we looked at the /get-started URL in the Wayback Machine, we noticed something pretty interesting:

In 2015, their /get-started page also acted as their login page. When people were searching on Google for “[Company Name] login,” this page was ranking, bringing in a significant amount of organic traffic.

Their current setup sends logins to a subdomain that doesn’t have a GA code (as it’s strictly used as a portal to the actual application).

That helped explain some of the organic traffic loss, but knowing that this client had gone through a few website redesigns, I wanted to make sure that all redirects were done properly. Regardless of whether or not your traffic has changed, if you’ve recently done a website redesign where you’re changing URLs, it’s smart to look at your top organic landing pages from before the redesign and double check to make sure they’re redirecting to the correct pages.

While this helped explain some of the traffic loss, the next thing we looked at was the on-page metrics to see if we could spot any obvious tracking issues.

Comparing on-page engagement metrics

Looking at the keyword rankings and organic landing pages provided a little bit of insight into the organic traffic loss, but it was nothing definitive. Because of this, I moved to the on-page metrics for further clarity. As a disclaimer, when I talk about on-page metrics, I’m talking about bounce rate, page views, average page views per session, and time on site.

Looking at the same top organic pages, I compared the on-page engagement metrics.

2015 on-page metrics:

2016 on-page metrics:

2017 on-page metrics:

While the overall engagement metrics changed slightly, the biggest and most interesting discrepancy I saw was in the bounce rates for the home page and Get Started page.

According to a number of different studies (like this one, this one, or even this one), the average bounce rate for a B2B site is around 40–60%. Seeing the home page with a bounce rate under 20% was definitely a red flag.

This led me to look into some other metrics as well. I compared key metrics between 2015 and 2017, and was utterly confused by the findings:

Looking at the organic sessions (overall), we saw a decrease of around 80,000 sessions, or 27.93%.

Looking at the organic users (overall) we saw a similar number, with a decrease of around 38,000 users, or 25%.

When we looked at page views, however, we saw a much more drastic drop:

For the entire site, we saw a 50% decrease in pageviews, or a decrease of nearly 400,000 page views.

This didn’t make much sense, because even if we had those extra 38,000 users, and each user averaged roughly 2.49 pages per session (looking above), that would only account for, at most, 100,000 more page views. This left 300,000 page views unaccounted for.

This led me to believe that there was definitely some sort of tracking issue. The high number of page views and low bounce rate made me suspect that some users were being double counted.
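
Expressed as quick arithmetic, the mismatch looks like this (the figures are the rounded numbers quoted above):

# Rounded figures from the audit above (2015 vs. 2017)
users_drop = 38_000          # decline in organic users
pages_per_session = 2.49     # average pages per session
pageviews_drop = 400_000     # decline in pageviews

# If traffic had simply fallen, the lost users should explain roughly this many pageviews:
explained = users_drop * pages_per_session
unexplained = pageviews_drop - explained

print(f"Pageviews explained by the user drop: ~{explained:,.0f}")    # ~94,620
print(f"Pageviews left unaccounted for:       ~{unexplained:,.0f}")  # ~305,380
# A gap this large points to pageviews being inflated (e.g. double-counted) in the earlier data.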

However, to confirm these assumptions, I took a look at some external data sources.

Using SEMrush and Moz data to exclude user error

If you have a feeling that your tracking was messed up in previous years, a good way to confirm or deny this hypothesis is to check external sources like Moz and SEMrush.

Unfortunately, this particular client was fairly new, so their Moz campaign data wasn’t around during the high organic traffic times in 2015. If it had been, a good place to start would be looking at the search visibility metric (as long as the primary keywords have stayed the same). If this metric has changed drastically over the years, it’s a good indicator that your organic rankings have slipped quite a bit.

Another good thing to look at is domain authority and core page authority. If your site has had a few redesigns, moved URLs, or anything like that, it’s important to make sure that the domain authority has carried over. It’s also important to look at the page authorities of your core pages. If these are much lower than they were before the organic traffic slide, there’s a good chance your redirects weren’t done properly, and the page authority isn’t being carried over to the new URLs.

If, like me, you don’t have Moz data that dates back far enough, a good thing to check is the organic traffic cost in SEMrush.

Organic traffic cost can change because of a few reasons:

  1. Your site is ranking for more valuable keywords, making the organic traffic cost rise.
  2. More competitors have entered the space, making the keywords you were ranking for more expensive to bid on.

Usually it’s a combination of both of these.

If our organic traffic really had been steadily decreasing for the past two years, we’d likely see a similar trendline in our organic traffic cost. However, that’s not what we saw.

In March of 2015, the organic traffic cost of my client’s site was $14,300.

In March of 2016, the organic traffic cost was $22,200

In December of 2017, the organic traffic cost spiked all the way up to $69,200. According to SEMrush, we also saw increases in keywords and traffic.

Looking at all of this external data re-affirmed the assumption that something must have been off with our tracking.

However, as a final check, I went back to internal metrics to see if the conversion data had decreased at a similar rate as the organic traffic.

Analyzing and comparing conversion metrics

This seemed like a natural final step in uncovering the mystery of this traffic drop. After all, it’s not organic traffic itself that profits your business (although it’s a key component). The big revenue drivers are goal completions and form fills.

This was a fairly simple procedure. I went into Google Analytics to compare goal completion numbers and goal completion conversion rates over the past three years.

If your company is like my client’s, there’s a good chance you’re taking advantage of the maximum 20 goal completions that can be simultaneously tracked in Analytics. However, to make things easier and more consistent (since goal completions can change), I looked at only buyer intent conversions. In this case it was Enterprise, Business, and Personal edition form fills, as well as Contact Us form fills.

If you’re doing this on your own site, I would recommend doing the same thing. Gated content goal completions usually have a natural shelf life, and this natural slowdown in goal completions can skew the data. I’d look at the most important conversion on your site (usually a contact us or a demo form) and go strictly off those numbers.

For my client, you can see those goal completion numbers below:

Goal completions by year (2015–2017): Contact Us, Individual Edition, Business Edition, and Enterprise Edition
Conversion rates:

Goal conversion rates by year (2015–2017): Contact Us, Individual Edition, Business Edition, and Enterprise Edition
This was pretty interesting. Although there was clearly some fluctuation in the goal completions and conversion rates, there were no changes large enough to line up with our nearly 40,000-user drop from 2015 to 2016 to 2017.

All of these findings further confirmed that we were chasing an inaccurate goal. In fact, we spent the first three months working together to try and get back a 40% loss that, quite frankly, was never even there in the first place.

Tying everything together and final thoughts

For this particular case, we had to go down all five of these roads in order to reach the conclusion that we did: Our tracking was off in the past.

However, this may not be the case for your company or your clients. You may start by looking at keyword rankings, and realize that you’re no longer ranking on the first page for ten of your core keywords. If that’s the case, you quickly discovered your issue, and your game plan should be investing in your core pages to help get them ranking again for these core keywords.

If your goal completions are way down (by a similar percentage as your traffic), that’s also a good clue that your declining traffic numbers are correct.

If you’ve looked at all of these metrics and still can’t seem to figure out the reason for the decrease, and you’re blindly trying tactics and struggling to crawl your way back up, this is a great checklist to go through to answer the ominous question: tracking issue or optimization issue?

If you’re having a similar issue to mine, I’m hoping this post helps you get to the root of the problem quickly, and gets you one step closer to creating realistic organic traffic goals for the future!


The #1 Reason Paid Ads (On Search, Social, and Display) Fail – Whiteboard Friday

Posted by randfish

Pouring money into a paid ad campaign that’s destined to fail isn’t a sound growth strategy. Time and again, companies breaking into online ads don’t see success due to the same issue: they aren’t known to their audiences. There’s no trust, no recognition, and so the cost per click remains high and rising.

In this edition of Whiteboard Friday, Rand identifies the cycle many brands get trapped in and outlines a solution to make those paid ad campaigns worth the dollars you put behind them.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about the number one reason so many paid ad campaigns, especially from new companies and companies with new products or new ventures that they’re going into, new markets and audiences they’re addressing, fail. They just fall apart. I see this scenario play out so many times, especially in the startup and entrepreneurial world but, to be honest, across the marketing landscape.

Here’s how it usually goes. You’ve got your CEO or your CMO or your business owner and they’re like, “Hey, we have this great new product. Let’s spread the word.” So they talk to a marketer. It could be a contractor. It could be an agency. It could be someone in-house.

The marketer is like, “Okay, yeah, I’ll buy some ads online, help us get the word out there and get us some traffic and conversions.”

Then a few months later, you basically get this. “How’s that paid ad campaign going?” “Well, not so good. I have bad news.”

The cycle

Almost always, this is the result of a cycle that looks like this. You have a new company’s campaign. The campaign is to sell something or get exposure for something, to try and drive visits back to a web page or a website, several pages on the site and then get conversions out of it. So you buy Facebook ads, Instagram ads, maybe LinkedIn and Twitter. You probably use the Google Display Network. You’re probably using AdWords. All of these sources are trying to drive traffic to your web page and then get a conversion that turns into money.

Now, what happens is that these get a high cost per click. They start out with a high cost per click because it’s a new campaign. So none of these platforms have experience with your campaign or your company. So you’re naturally going to get a higher-than-normal cost per click until you prove to them that you get high engagement, at which point they bring the cost per click down. But instead of proving to them you get high engagement, you end up getting low engagement, low click-through rate, low conversion rate. People don’t make it here. They don’t make it there. Why is that?

Why does this happen?

Well, before we address that, let’s talk about what happens here. When these are low, when you have a low engagement rate on the platform itself, when no one engages with your Facebook ads, no one engages with your Instagram ads, when no one clicks on your AdWords ad, when no one clicks on your display ads, the cost to show to more people goes up, and, as a result, these campaigns are much harder to make profitable and they’re shown to far fewer people.

So your exposure to the audience you want to reach is smaller and the cost to reach each next person and to drive each next action goes up. This, fundamentally, is because…

  • The audience that you’re trying to reach hasn’t heard of you before. They don’t know who you are.
  • Because they don’t know, trust, or like you, your company, or your product, they don’t click. They don’t buy. They don’t share. They don’t like.

They don’t do all the engagement things that would drive this high cost per click down, and, because of that, your campaigns suffer and struggle.

I see so many marketers who think like this, who say yes to new company campaigns that start with an advertising-first approach. I want to be clear, there are some exceptions to the rule. I have seen some brand new companies that fit a certain mold do very well with Instagram advertising for certain types of products that appeal to that audience and don’t need a previously existing brand association. I’ve seen some players in the Google AdWords market do okay with this, some local businesses, some folks in areas where people don’t expect to have knowledge and awareness of a brand already in the space where they’re trying to discover them.

So it’s not the case always that this fails, but very often, often enough that I’m calling this the number one reason I see paid ads fail.

The solution

There’s only one solution and it’s not pretty. The solution is…

You have to get known to your audience before you pour money into advertising.

Meaning you need to invest in organic channels — content or SEO or press and PR or sponsorships or events, what have you, anything that can get your brand name and the names of your product out there.

Brand advertising, in fact, can work for this. So television brand advertising, folks have noticed that TV brand advertising often drives the cost per click down and drives engagement and click-through rates up, because people have heard of you and they know who you are. Magazine and offline advertising works like this. Sometimes even display advertising can work this way.

The second option is to…

Advertise primarily or exclusively to an audience that already has experience with you.

The way you can do this is through systems like Google’s retargeting and remarketing platforms. You can do the same thing with Facebook, through custom audiences of email addresses that you upload, same thing with Instagram, same thing with Twitter. You can target people who specifically only follow the accounts that you already own and control. Through these, you can get better engagement, better click-through rate, better conversion rate and drive down that cost per click and reach a broader audience.

But if you don’t do these things first, a lot of times these types of investments fall flat on their face, and a lot of marketers, to be honest, and agencies and consultants lose their jobs as a result. I don’t want that to happen to you. So invest in these first or find the niches where advertising can work for a first-time product. You’re going to be a lot happier.

All right, everyone. Look forward to your comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by


How to Diagnose SEO Traffic Drops: 11 Questions to Answer

Posted by Daniel_Marks

Almost every consultant or in-house SEO will be asked at some point to investigate an organic traffic drop. I’ve investigated quite a few, so I thought I’d share some steps I’ve found helpful when doing so.

Is it just normal noise?

Before you sound the alarm and get lost down a rabbit hole, you should make sure that the drop you’re seeing is actually real. This involves answering two questions:

A.) Do you trust the data?

This might seem trivial, but at least a quarter of the traffic drops I’ve seen were simply due to data problems.

The best way to check on this is to sense-check other metrics that might be impacted by data problems. Does anything else look funky? If you have a data engineering team, are they aware of any data issues? Are you flat-out missing data for certain days or page types or devices, etc.? Thankfully, data problems will usually make themselves pretty obvious once you start turning over a few rocks.

One of the more common sources of data issues is simply missing data for a day.

B.) Is this just normal variance?

Metrics go up and down all the time for no discernible reason. One way to quantify this is to use your historical standard deviation for SEO traffic.

For example, you could plot your weekly SEO traffic for the past 12 months and calculate the standard deviation (using the STDEV function on Google Sheets or Excel makes this very easy) to figure out if a drop in weekly traffic is abnormal. You’d expect about 16% of weeks to be one standard deviation below your weekly average just by sheer luck. You could therefore set a one-standard-deviation threshold before investigating traffic drops, for example (but you should adjust this threshold to whatever is appropriate for your business). You can also look at the standard deviation for your year-over-year or week-over-week SEO traffic if that’s where you’re seeing the drop (i.e. plot your % change in YoY SEO traffic by week for the past 12 months and calculate the standard deviation).

SEO traffic is usually pretty noisy, especially on a short time frame like a week.
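
If you keep a simple weekly export of organic sessions, this check takes only a few lines. The numbers below are made-up placeholders; swap in your own series and threshold:

from statistics import mean, stdev

# Placeholder: weekly organic sessions for the trailing year, oldest first
weekly_sessions = [10400, 9800, 11200, 10150, 9900, 10800, 10300, 9700,
                   10600, 11000, 9500, 10250, 9950, 10450, 10900, 10050]

avg = mean(weekly_sessions)
sd = stdev(weekly_sessions)
latest = weekly_sessions[-1]

threshold = avg - sd  # one standard deviation below the weekly average
print(f"Average: {avg:.0f} | std dev: {sd:.0f} | latest week: {latest}")
if latest < threshold:
    print("More than one standard deviation below average: worth investigating")
else:
    print("Within normal variance")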

Let’s assume you’ve decided this is indeed a real traffic drop. Now what? I’d recommend trying to answer the eleven questions below; at least one of them will usually identify the culprit.

Questions to ask yourself when facing an organic traffic drop

1. Was there a recent Google algorithm update?

MozCast, Search Engine Land, and Moz’s algorithm history are all good resources here.

Expedia seems to have been penalized by a Penguin-related update.

If there was an algorithm update, do you have any reason to suspect you’d be impacted? It can sometimes be difficult to understand the exact nature of a Google update, but it’s worth tracking down any information you can to make sure your site isn’t at risk of being hit.

2. Is the drop specific to any segment?

One of the more useful practices whenever you’re looking at aggregated data (such as a site’s overall search traffic) is to segment the data until you find something interesting. In this case, we’d be looking for a segment that has dropped in traffic much more than any other. This is often the first step in tracking down the root cause of the issue. The two segments I’ve found most useful in diagnosing SEO traffic drops specifically:

  • Device type (mobile vs. desktop vs. tablet)
  • Page type (product pages vs. category pages vs. blog posts vs. homepage etc.)

But there will likely be plenty of other segments that might make sense to look at for your business (for example, product category).

3. Are you being penalized?

This is unlikely, but it’s also usually pretty quick to disprove. Look at Search Console for any messages related to penalties and search for your brand name on Google. If you’re not showing up, then you might be penalized.

Rap Genius (now Genius) was penalized for their link building tactics and didn’t show up for their own brand name on Google.

4. Did the drop coincide with a major site change?

This can take a thousand different forms (did you migrate a bunch of URLs, move to a different JavaScript framework, update all your title tags, remove your navigation menu, etc.?). If this is the case, and you have a reasonable hypothesis for how this could impact SEO traffic, you might have found your culprit.

One site saw a pretty big drop in SEO traffic after changing their JavaScript framework.

5. Did you lose ranking share to a competitor?

There are a bunch of tools that can tell you if you’ve lost rankings to a competitor.

If you’ve lost rankings, it’s worth investigating the specific keywords that you’ve lost and figuring out if there’s a trend. Did your competitors launch a new page type? Did they add content to their pages? Do they have more internal links pointing to these pages than you do?

GetStat’s Share of Voice report lets you quickly see whether a competitor is usurping your rankings

It could also just be a new competitor that’s entered the scene.

6. Did it coincide with a rise in direct or dark traffic?

If so, make sure you haven’t changed how you’re classifying this traffic on your end. Otherwise, you might simply be re-classifying organic traffic as direct or dark traffic.

7. Has there been a change to the search engine results pages you care about?

You can either use Moz’s SERP features report, or manually look at the SERPs you care about to figure out if their design has materially changed. It’s possible that Google is now answering many of your relevant queries directly in search results, has put an image carousel on them, has added a local pack, etc., all of which would likely decrease your organic search traffic.

One site lost most of its SEO traffic because of rich snippets like the one above.

8. Is the drop specific to branded or unbranded traffic?

If you have historical Search Console data, you can look at number of branded clicks vs. unbranded clicks over time. You could also look at this data through AdWords if you spend on paid search. Another simple proxy to branded traffic is homepage traffic (for most sites, the majority of homepage traffic will be branded). If the drop is specific to branded search then it’s probably a brand problem, not an SEO problem.

9. Did a bunch of pages drop out of the index?

Search Console’s Index Status Report will make it clear if you suddenly have way fewer URLs being indexed. If this is the case, you might be accidentally disallowing or noindexing URLs (through robots.txt, meta tags on the page, or HTTP headers).

Search Console’s Index Status Report is a quick way to make sure you’re not accidentally noindexing or disallowing large portions of your site.

10. Did your number of referring domains and/or links drop?

It’s possible that a large number of your backlinks have been removed or are no longer accessible for whatever reason.

Ahrefs can be a quick way to determine if you’ve lost backlinks and also offers very handy reports for your lost backlinks or referring domains that will allow you to identify why you might have lost these links.

A sudden drop in backlinks could be the reason you’re seeing a traffic drop.

11. Is SEM cannibalizing SEO traffic?

It’s possible that your paid search team has recently ramped up their spend and that this is eating into your SEO traffic. You should be able to check on this pretty quickly by plotting your SEM vs. SEO traffic. If it’s not obvious after doing this whether it’s a factor, then it can be worth pausing your SEM campaigns for specific landing pages and seeing if SEO traffic rebounds for those pages.

To be clear, some level of cannibalization between SEM and SEO is inevitable, but it’s still worth understanding how much of your traffic is being cannibalized and whether the incremental clicks your SEM campaigns are driving outweigh the loss in SEO traffic (in my experience they usually do outweigh the loss in SEO traffic, but still worth checking!).

If your SEM vs. SEO traffic graph looks similar to the (slightly extreme) one above, then SEM campaigns might be cannibalizing your SEO traffic.

That’s all I’ve got — hopefully at least one of these questions will lead you to the root cause of an organic search traffic drop. Are there any other questions that you’ve found particularly helpful for diagnosing traffic drops? Let me know in the comments.


Google’s Walled Garden: Are We Being Pushed Out of Our Own Digital Backyards?

Posted by Dr-Pete

Early search engines were built on an unspoken transaction — a pact between search engines and website owners — you give us your data, and we’ll send you traffic. While Google changed the game of how search engines rank content, they honored the same pact in the beginning. Publishers, who owned their own content and traditionally were fueled by subscription revenue, operated differently. Over time, they built walls around their gardens to keep visitors in and, hopefully, keep them paying.

Over the past six years, Google has crossed this divide, building walls around their content and no longer linking out to the sources that content was originally built on. Is this the inevitable evolution of search, or has Google forgotten their pact with the people whose backyards their garden was built on?

I don’t think there’s an easy answer to this question, but the evolution itself is undeniable. I’m going to take you through an exhaustive (yes, you may need a sandwich) journey of the ways that Google is building in-search experiences, from answer boxes to custom portals, and rerouting paths back to their own garden.

I. The Knowledge Graph

In May of 2012, Google launched the Knowledge Graph. This was Google’s first large-scale attempt at providing direct answers in search results, using structured data from trusted sources. One incarnation of the Knowledge Graph is Knowledge Panels, which return rich information about known entities. Here’s part of one for actor Chiwetel Ejiofor (note: this image is truncated)…

The Knowledge Graph marked two very important shifts. First, Google created deep in-search experiences. As Knowledge Panels have evolved, searchers have access to rich information and answers without ever going to an external site. Second, Google started to aggressively link back to their own resources. It’s easy to overlook those faded blue links, but here’s the full Knowledge Panel with every link back to a Google property marked…

Including links to Google Images, that’s 33 different links back to Google. These two changes — self-contained in-search experiences and aggressive internal linking — represent a radical shift in the nature of search engines, and that shift has continued and expanded over the past six years.

More recently, Google added a sharing icon (on the right, directly below the top images). This provides a custom link that allows people to directly share rich Google search results as content on Facebook, Twitter, Google+, and by email. Google no longer views these pages as a path to a destination. Search results are the destination.

The Knowledge Graph also spawned Knowledge Cards, more broadly known as “answer boxes.” Take any fact in the panel above and pose it as a question, and you’re likely to get a Knowledge Card. For example, “How old is Chiwetel Ejiofor?” returns the following…

For many searchers, this will be the end of their journey. Google has answered their question and created a self-contained experience. Note that this example also contains links to additional Google searches.

In 2015, Google launched Medical Knowledge Panels. These gradually evolved into fully customized content experiences created with partners in the medical field. Here’s one for “cardiac arrest” (truncated)…

Note the fully customized design (these images were created specifically for these panels), as well as the multi-tabbed experience. It is now possible to have a complete, customized content experience without ever leaving Google.

II. Live Results

In some specialized cases, Google uses private data partnerships to create customized answer boxes. Google calls these “Live Results.” You’ve probably seen them many times now on weather, sports and stock market searches. Here’s one for “Seattle weather”…

For the casual information seeker, these are self-contained information experiences with most or all of what we care about. Live Results are somewhat unique in that, unlike the general knowledge in the Knowledge Graph, each partnership represents a disruption to an industry.

These partnerships have branched out over time into even more specialized results. Consider, for example, “Snoqualmie ski conditions”…

Sports results are incredibly disruptive, and Google has expanded and enriched these results quite a bit over the past couple of years. Here’s one for “Super Bowl 2018”…

Note that clicking any portion of this Live Result leads to a customized portal on Google that can no longer be called a “search result” in any traditional sense (more on portals later). Special sporting events, such as the 2018 Winter Olympics, have even more rich features. Here are some custom carousels for “Olympic snowboarding results”…

Note that these are multi-column carousels that ultimately lead to dozens of smaller cards. All of these cards click to more Google search results. This design choice may look strange on desktop and marks another trend — Google’s shift to mobile-first design. Here’s the same set of results on a Google Pixel phone…

Here, the horizontal scrolling feels more intuitive, and the carousel is the full-width of the screen, instead of feeling like a free-floating design element. These features are not only rich experiences on mobile screens, but dominate mobile results much more than they do two-column desktop results.

III. Carousels

Speaking of carousels, Google has been experimenting with a variety of horizontal result formats, and many of them are built around driving traffic back to Google searches and properties. One of the older styles of carousels is the list format, which runs across the top of desktop searches (above other results). Here’s one for “Seattle Sounders roster”…

Each player links to a new search result with that player in a Knowledge Panel. This carousel expands to the width of the screen (which is unusual, since Google’s core desktop design is fixed-width). On my 1920×1080 screen, you can see 14 players, each linking to a new Google search, and the option to scroll for more…

This type of list carousel covers a wide range of topics, from “cat breeds” to “types of cheese.” Here’s an interesting one for “best movies of 1984.” The image is truncated, but the full result includes drop-downs to select movie genres and other years…

Once again, each result links to a new search with a Knowledge Panel dedicated to that movie. Another style of carousel is the multi-row horizontal scroller, like this one for “songs by Nirvana”…

In this case, not only does each entry click to a new search result, but many of them have prominent featured videos at the top of the left column (more on that later). My screen shows at least partial information for 24 songs, all representing in-Google links above the traditional search results…

A search for “laptops” (a very competitive, commercial term, unlike the informational searches above) has a number of interesting features. At the bottom of the search is this “Refine by brand” carousel…

Clicking on one of these results leads to a new search with the brand name prepended (e.g. “Apple laptops”). The same search shows this “Best of” carousel…

The smaller “Mentioned in:” links go to articles from the listed publishers. The main product links go to a Google search result with a product panel. Here’s what I see when I click on “Dell XPS 13 9350” (image is truncated)…

This entity lives in the right-hand column and looks like a Knowledge Panel, but is commercial in nature (notice the “Sponsored” label in the upper right). Here, Google is driving searchers directly into a paid/advertising channel.

IV. Answers & Questions

As Google realized that the Knowledge Graph would never scale at the pace of the wider web, they started to extract answers directly from their index (i.e. all of the content in the world, or at least most of it). This led to what they call “Featured Snippets”, a special kind of answer box. Here’s one for “Can hamsters eat cheese?” (yes, I have a lot of cheese-related questions)…

Featured Snippets are an interesting hybrid. On the one hand, they’re an in-search experience (in this case, my basic question has been answered before I’ve even left Google). On the other hand, they do link out to the source site and are a form of organic search result.

Featured Snippets also power answers on Google Assistant and Google Home. If I ask Google Home the same question about hamsters, I hear the following:

On the website, they say “Yes, hamsters can eat cheese! Cheese should not be a significant part of your hamster’s diet and you should not feed cheese to your hamster too often. However, feeding cheese to your hamster as a treat, perhaps once per week in small quantities, should be fine.”

You’ll see the answer is identical to the Featured Snippet shown above. Note the attribution (which I’ve bolded) — a voice search can’t link back to the source, posing unique challenges. Google does attempt to provide attribution on Google Home, but as they use answers extracted from the web more broadly, we may see the way original sources are credited change depending on the use case and device.

This broader answer engine powers another type of result, called “Related Questions” or the “People Also Ask” box. Here’s one on that same search…

These questions are at least partially machine-generated, which is why the grammar can read a little oddly — that’s a fascinating topic for another time. If you click on “What can hamsters eat list?” you get what looks a lot like a Featured Snippet (and links to an outside source)…

Notice two other things that are going on here. First, Google has included a link to search results for the question you clicked on (see the purple arrow). Second, the list has expanded. The two questions at the end are new. Let’s click “What do hamsters like to do for fun?” (because how can I resist?)…

This opens up a second answer, a second link to a new Google search, and two more answers. You can continue this to your heart’s content. What’s especially interesting is that this isn’t just some static list that expands as you click on it. The new questions are generated based on your interactions, as Google tries to understand your intent and shape your journey around it.

My colleague, Britney Muller, has done some excellent research on the subject and has taken to calling these infinite PAAs. They’re probably not quite infinite — eventually, the sun will explode and consume the Earth. Until then, they do represent a massively recursive in-Google experience.

V. Videos & Movies

One particularly interesting type of Featured Snippet is the Featured Video result. Search for “umbrella” and you should see a panel like this at the top of the left column (truncated):

This is a unique hybrid — it has Knowledge Panel features (which link back to Google results), but it also has an organic-style link and a large video thumbnail. While it appears organic, all of the Featured Videos we’ve seen in the wild have come from YouTube (Vevo is a YouTube partner), which essentially makes this an in-Google experience. These Featured Videos consume a lot of screen real estate and appear even on commercial terms, like “umbrella” (shown here, returning Rihanna’s video) or “swimming pools” (Kendrick Lamar).

Movie searches yield a rich array of features, from Live Results for local showtimes to rich Knowledge Panels. Last year, Google completely redesigned their mobile experience for movie results, creating a deep in-search experience. Here’s a mobile panel for “Black Panther”…

Notice the tabs below the title. You can navigate within this panel to a wealth of information, including cast members and photos. Clicking on any cast member goes to a new search about that actor/actress.

Although the search results eventually continue below this panel, the experience is rich, self-contained, and incredibly disruptive to high-ranking powerhouses in this space, including IMDB. You can even view trailers from the panel…

On my phone, Google displayed 10 videos (at roughly two per screen), and nine of those were links to YouTube. Given YouTube’s dominance, it’s difficult to say if Google is purposely favoring their own properties, but the end result is the same — even seemingly “external” clicks are often still Google-owned clicks.

VI. Local Results

A similar evolution has been happening in local results. Take the local 3-pack — here’s one on a search for “Seattle movie theaters”…

Originally, the individual business links went directly to each of those businesses’ websites. Over the past year or two, these have instead gone to local panels on Google Maps, like this one…

On mobile, these local panels stand out even more, with prominent photos, tabbed navigation and easy access to click-to-call and directions.

In certain industries, local packs have additional options to run a search within a search. Here’s a pack for Chicago taco restaurants, where you can filter results (from the broader set of Google Maps results) by rating, price, or hours…

Once again, we have a fully embedded search experience. I don’t usually vouch for any of the businesses in my screenshots, but I just had the pork belly al pastor at Broken English Taco Pub and it was amazing (this is my personal opinion and in no way reflects the taco preferences of Moz, its employees, or its lawyers).

The hospitality industry has been similarly affected. Search for an individual hotel, like “Kimpton Alexis Seattle” (one of my usual haunts when visiting the home office), and you’ll get a local panel like the one below. Pardon the long image, but I wanted you to have the full effect…

This is an incredible blend of local business result, informational panel, and commercial result, allowing you direct access to booking information. It’s not just organic local results that have changed, though. Recently, Google started offering ads in local packs, primarily on mobile results. Here’s one for “tax attorneys”…

Unlike traditional AdWords ads, these results don’t go directly to the advertiser’s website. Instead, like standard pack results, they go to a Google local panel. Here’s what the mobile version looks like…

In addition, Google has launched specialized ads for local service providers, such as plumbers and electricians. These appear carousel-style on desktop, such as this one for “plumbers in Seattle”…

Unlike AdWords advertisers, local service providers buy into a specialized program, and these local service ads click through to a fully customized Google sub-site, which brings us to the next topic: portals.

VII. Custom Portals

Some Google experiences have become so customized that they operate as stand-alone portals. If you click on a local service ad, you get a Google-owned portal that allows you to view the provider, check to see if they can handle your particular problem in your zip code, and (if not) view other, relevant providers…

You’ve completely left the search result at this point, and can continue your experience fully within this Google property. These local service ads have now expanded to more than 30 US cities.

In 2016, Google launched their own travel guides. Run a search like “things to do in Seattle” and you’ll see a carousel-style result like this one…

Click on “Seattle travel guide” and you’ll be taken to a customized travel portal for the city of Seattle. The screen below is a desktop result — note the increasing similarity to rich mobile experiences.

Once again, you’ve been taken to a complete Google experience outside of search results.

Last year, Google jumped into the job-hunting game, launching a 3-pack of job listings covering all major players in this space, like this one for “marketing jobs in Seattle”…

Click on any job listing, and you’ll be taken to a separate Google jobs portal. Let’s try Facebook…

From here, you can view other listings, refine your search, and even save jobs and set up alerts. Once again, you’ve jumped from a specialized Google result to a completely Google-controlled experience.

As with hotels, Google has dabbled in flight data and search for years. If I search for “flights to Seattle,” Google will automatically note my current location and offer me a search interface and a few choices…

Click on one of these choices and you’re taken to a completely redesigned Google Flights portal…

Once again, you can continue your journey completely within this Google-owned portal, never returning to your original search. This is a trend we can expect to continue for the foreseeable future.

VIII. Hard Questions

If I’ve bludgeoned you with examples, then I apologize, but I want to make it perfectly clear that this is not a case of one or two isolated incidents. Google is systematically driving more clicks from search to new searches, in-search experiences, and other Google-owned properties. This leads to a few hard questions…

Why is Google doing this?

Right about now, you’re rushing to the comments section to type “For the money!” along with a bunch of other words that may include variations of my name, “sheeple,” and “dumb-ass.” Yes, Google is a for-profit company that is motivated in part by making money. Moz is a for-profit company that is motivated in part by making money. Stating the obvious isn’t insight.

In some cases, the revenue motivation is clear. Suggesting the best laptops to searchers and linking those to shopping opportunities drives direct dollars. In traditional walled gardens, publishers are trying to produce more page-views, driving more ad impressions. Is Google driving us to more searches, in-search experiences, and portals to drive more ad clicks?

The answer isn’t entirely clear. Knowledge Graph links, for example, usually go to informational searches with few or no ads. Rich experiences like Medical Knowledge Panels and movie results on mobile have no ads at all. Some portals have direct revenues (local service providers have to pay for inclusion), but others, like travel guides, have no apparent revenue model (at least for now).

Google is competing directly with Facebook for hours in our day — while Google has massive traffic and ad revenue, people on average spend much more time on Facebook. Could Google be trying to drive up their time-on-site metrics? Possibly, but it’s unclear what this accomplishes beyond being a vanity metric to make investors feel good.

Looking to the long game, keeping us on Google and within Google properties does open up the opportunity for additional advertising and new revenue streams. Maybe Google simply realizes that letting us go so easily off to other destinations is leaving future money on the table.

Is this good for users?

I think the most objective answer I can give is — it depends. As a daily search user, I’ve found many of these developments useful, especially on mobile. If I can get an answer at a glance or in an in-search entity, such as a Live Result for weather or sports, or the phone number and address of a local restaurant, it saves me time and spares me the trouble of learning the user interfaces of thousands of different websites. On the other hand, if I feel that I’m being run in circles through search after search or am being given fewer and fewer choices, that can feel manipulative and frustrating.

Is this fair to marketers?

Let’s be brutally honest — it doesn’t matter. Google has no obligation to us as marketers. Sites don’t deserve to rank and get traffic simply because we’ve spent time and effort or think we know all the tricks. I believe our relationship with Google can be symbiotic, but that’s a delicate balance and always in flux.

In some cases, I do think we have to take a deep breath and think about what’s good for our customers. As marketers, we find local packs linking directly to in-Google properties alarming — we measure our success based on traffic. However, these local panels are well-designed, consistent, and offer easy access to vital information like business addresses, phone numbers, and hours. If these properties drive phone calls and foot traffic, should we discount their value simply because it’s harder to measure?

Is this fair to businesses?

This is a more interesting question. I believe that, like other search engines before it, Google made an unwritten pact with website owners — in exchange for our information and the privilege to monetize that information, Google would send us traffic. This is not altruism on Google’s part. The vast majority of Google’s $95B in 2017 advertising revenue came from search advertising, and that advertising would have no audience without organic search results. Those results come from the collective content of the web.

As Google replaces that content and sends more clicks back to themselves, I do believe that the fundamental pact that Google’s success was built on is gradually being broken. Google’s garden was built on our collective property, and it does feel like we’re slowly being herded out of our own backyards.

We also have to consider the deeper question of content ownership. If Google chooses to pursue private data partnerships — such as with Live Results or the original Knowledge Graph — then they own that data, or at least are leasing it fairly. It may seem unfair that they’re displacing us, but they have the right to do so.

Much of the Knowledge Graph is built on human-curated sources such as Wikidata (i.e. Wikipedia). While Google undoubtedly has an ironclad agreement with Wikipedia, what about the people who originally contributed and edited that content? Would they have done so knowing their content could ultimately displace other content creators (including possibly their own websites) in Google results? Are those contributors willing participants in this experiment? The question of ownership isn’t as easy as it seems.

If Google extracts the data we provide as part of the pact, such as with Featured Snippets and People Also Ask results, and begins to wall off those portions of the garden, then we have every right to protest. Even the concept of a partnership isn’t always black-and-white. Some job listing providers I’ve spoken with privately felt pressured to enter Google’s new jobs portal (out of fear of cutting off the paths to their own gardens), but they weren’t happy to see the new walls built.

Google is also trying to survive. Search has to evolve, and it has to answer questions and fit a rapidly changing world of device formats, from desktop to mobile to voice. I think the time has come, though, for Google to stop and think about the pact that built their nearly hundred-billion-dollar ad empire.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from The Moz Blog

MozCon 2018: Making the Case for the Conference (& All the Snacks!)

Posted by Danielle_Launders

You’ve got that conference looming on the horizon. You want to go — you’ve spent the past few years desperately following hashtags on Twitter, memorizing catchy quotes, zooming in on grainy snapshots of a deck, and furiously downloading anything and everything you can scour from Slideshare.

But there’s a problem: conferences cost money, and your boss won’t even approve a Keurig in the communal kitchen, much less a ticket to a three-day-long learning sesh complete with its own travel and lodging expenses.

What’s an education-hungry digital marketer to do?

How do you convince your boss to send you to the conference of your dreams?

First of all, you gather evidence to make your case.

There’s a plethora of excellent reasons why attending conferences is good for your career (and your bottom line). In digital marketing, we exist in the ever-changing tech space, hurtling toward the future at breakneck speed and often missing the details of the scenery along the way.

A good SEO conference will keep you both on the edge of your seat and on the cutting-edge of what’s new and noteworthy in our industry, highlighting some of the most important and impactful things your work depends on.

A good SEO conference will flip a switch for you, will trigger that lightbulb moment that empowers you and levels you up as both a marketer and a critical thinker.

If that doesn’t paint a beautiful enough picture to convince the folks that hold the credit card, though, there are also some great statistics and resources available.

Specifically, we’re talking about MozCon

Yes, that MozCon!

Let’s just take a moment to address the elephant in the room here: you all know why we wrote this post. We want to see your smiling face in the audience at MozCon this July (the 9th–11th, if you were wondering). There are a few specific benefits worth mentioning:

  • Speakers and content: Our speakers bring their A-game each year. We work with them to bring the best content and latest trends to the stage to help set you up for a year of success.
  • Videos to share with your team: About a month or so after the conference, we’ll send you a link to professionally edited videos of every presentation at the conference. Your colleagues won’t get to partake in the morning Top Pot doughnuts or Starbucks coffee, but they will get a chance to learn everything you did, for free.
  • Great food onsite: We understand that conference food isn’t typically worth mentioning, but at MozCon you can expect snacks from local Seattle vendors: in the past these have included Trophy cupcakes, KuKuRuZa popcorn, Starbucks’ Seattle Reserve cold brew, and did we mention bacon at breakfast? Let’s not forget the bacon.
  • Swag: Expect to go home with a one-of-a-kind Roger Mozbot, a super-soft t-shirt from American Apparel, and swag worth keeping. We’ve given away Roger Legos, Moleskine notebooks, and phone chargers, and we’ve even had vending machines with additional swag in case you didn’t get enough.
  • Networking: You work hard taking notes, learning new insights, and digesting all of that knowledge — that’s why we think you deserve a little fun in the evenings to chat with fellow attendees. Each night after the conference, we’ll offer a different networking event that adds to the value you’ll get from your day of education.
  • A supportive network after the fact: Our MozCon Facebook group is incredibly active, and it’s grown to have a life of its own — marketers ask one another SEO questions, post jobs, look for and offer advice and empathy, and more. It’s a great place to find TAGFEE support and camaraderie long after the conference itself has ended.
  • Discounts for subscribers and groups: Moz Pro subscribers get a whopping $500 off their ticket cost (even if you’re on a free 30-day trial!) and there are discounts for groups as well, so make sure to take advantage of savings where you can!
  • Ticket cost: At MozCon our goal is to break even, which means we invest all of your ticket price back into you. Check out the full breakdown below:

Can you tell we’re serious about the snacks?

You can check out videos from years past to get a taste of the caliber of our speakers. We’ll also be putting out a call for community speaker pitches in April, so if you’ve been thinking about breaking into the speaking circuit, it could be an amazing opportunity — keep an eye on the blog for your chance to submit a pitch.

If you’ve ever seriously considered attending an SEO conference like MozCon, now’s the time to do it. You’ll save actual hundreds of dollars by grabbing subscriber or group pricing while you can (think of all the Keurigs you could get for that communal kitchen!), and you’ll be bound for an unforgettable experience that lives and grows with you beyond just the three days you spend in Seattle.

Grab your ticket to MozCon!
