Technical SEO

This is the more technical part of the SEO guide. Don’t panic, we’ll make it easy to digest!

Technical SEO is essential. The truth is: it doesn’t matter how amazing your content is. If Google can’t see it, it won’t rank.

Let’s make sure your content is ranking.

When talking about website accessibility for search engines, you’ll hear a lot about crawling, which is done by a crawler (also called a web crawler, spider or spiderbot).

Crawling is the process search engines use to find your content and pass it on for indexing. Simply put, a crawler follows all the links it finds and then analyzes the content on those pages (similar to how you would browse the internet).

There are some limitations that we will talk about below.

Robots.txt file

The robots.txt file is a plain text file placed in your website’s root folder. It’s used to instruct crawlers not to crawl your website, or certain parts of it.

If you have the file, you can find it by entering https://yourdomain12345.com/robots.txt (with your own domain) into your browser’s address bar.

The robots.txt file is the first thing a crawler checks when it arrives at your website, and the rules in it determine which parts of the site the crawler will visit.

However, robots.txt only controls crawling, not indexing: your pages can still end up indexed if a crawler discovers them through a different website that links back to you (we will cover this in the next section).

If a person isn’t going to search for a specific page on your website using Google, it’s probably not worth it for search engines to crawl it.

Typical examples: your website’s admin pages, a thank-you page shown after a successful sign-up, or the thousands of unique pages generated by your website’s internal search. None of them serve any purpose being indexed on Google.
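For illustration, here’s a minimal robots.txt sketch that blocks those kinds of pages. The paths are hypothetical; adjust them to match your own site:

# Applies to all crawlers
User-agent: *
# Hypothetical paths: admin area, sign-up thank-you page, internal search results
Disallow: /admin/
Disallow: /thank-you
Disallow: /search

# Optional: point crawlers to your sitemap (covered later in this guide)
Sitemap: https://yourdomain12345.com/sitemap.xml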

Sometimes mistakes happen and important pages get blocked, especially when migrating to a new domain or CMS, or when making big changes to your website’s structure.

Make sure your robots.txt file is in order.

Learn more: Official Google documentation about the robots.txt file

Noindex and nofollow tags

Noindex and nofollow tags are small snippets of code inserted into your website’s pages. If you inspect the code, they can look like this:

<meta name="robots" content="noindex, nofollow">

A noindex tag is used when you want crawlers to crawl your pages but not index them.

A good example is when a couple of filter combinations for your products or services create hundreds or thousands of unique pages. There may be no use indexing them in search engines, but they link to other important pages within your website, so it’s good practice to let Google crawl them without indexing them.
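For those filter pages, the tag could look like this (follow is the crawler’s default behavior, so stating it simply makes the intent explicit):

<meta name="robots" content="noindex, follow">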

Nofollow tags are used when you don’t want crawlers to “click” (follow) the links to the pages you are linking to.

Typical examples include paid links, links in forum comments or under your blog posts, and other user-generated content. The reasoning is that such links can be low-value or even harmful, and Google may frown upon them.
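Unlike the page-level meta tag shown above, a nofollow hint for a single link is added with the rel attribute. A hypothetical example (the URL and anchor text are placeholders):

<a href="https://example.com/some-page" rel="nofollow">Example anchor text</a>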

Noindex/nofollow tags checklist:

  • Use robots.txt if you want to stop crawlers from crawling your website

  • Use noindex tag if you want to stop search engines from indexing your pages

  • Use nofollow tag if you want to stop search engines from following certain links

  • Use noindex AND nofollow tags combined if you want to stop search engines from indexing your pages AND following certain links

Learn more: DeepCrawl's guide about noindex and nofollow tags

Sitemap

If you’re migrating to a new domain, launching a new website, making big structural changes, or running a large website with a lot of new content, a dynamic sitemap is very good practice. It helps crawlers better understand the structure of your website and find new pages more quickly.

You can also have separate sitemaps dedicated to images or videos if that is a big part of your website.
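For reference, a minimal XML sitemap following the sitemaps.org protocol could look like this. The URLs and dates are placeholders; most CMSs and website builders generate this file for you, usually at https://yourdomain12345.com/sitemap.xml:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain12345.com/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
  <url>
    <loc>https://yourdomain12345.com/blog/sample-post</loc>
    <lastmod>2021-05-20</lastmod>
  </url>
</urlset>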

Learn more: Official Google documentation about creating sitemaps (with free tools to use)

Interlinking and website architecture

Your website’s architecture and internal links play an important role in helping search engines determine the importance of your pages, and they also affect your user experience.

Every page within a small-to-medium website should be accessible within a maximum of four clicks.

The “closer” your page is to the homepage, the more important it gets in the eyes of search engines. The same logic applies to the visibility of your links.

For example, links in your website’s header carry more weight than links in the footer, and links under your blog posts’ comments carry even less.

The structure of a small-to-medium-size website could look similar to this:
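(The page names below are hypothetical.)

Homepage
├── About
├── Services
│   ├── Service A
│   └── Service B
└── Blog
    ├── Category 1
    │   ├── Blog post
    │   └── Blog post
    └── Category 2

Here, every page is reachable within two to three clicks from the homepage.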

If your blog runs deep with many posts, add categories to help organize them by topic.

Make sure all your important pages have links pointing back to them so they are discoverable by search engines.

And don’t forget to link to your older but relevant articles when writing new content. The more relevant links an article receives, the better chance it will have to rank higher.

Website load speed

Website load speed is a very important ranking factor. Make sure your website loads as fast as possible.

There are plenty of effective and free tools that can measure your website’s loading speed and provide suggestions on how it can be improved.

We recommend Google’s official PageSpeed Insights and GTmetrix.

The most common issues that may cause slower loading speeds:

  • Poorly optimized images (see the example after this list)

  • An excessive amount of code

  • JavaScript issues

  • Too many plugins

  • Too many ads

  • Disabled caching
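On the image side, two basic HTML attributes already go a long way: explicit dimensions prevent layout shifts while the page loads, and lazy loading defers offscreen images. A sketch (the file name and sizes are placeholders):

<img src="/images/hero.jpg" alt="Descriptive alt text" width="1200" height="600" loading="lazy">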

Mobile-friendly website

Mobile device usage increases every year, so there’s no question how important it is for your website to be mobile-friendly (responsive).

If you are using quality website builders (like Ycode), your website is likely already optimized for mobile devices. It’s still recommended to check your site on all devices, just to be sure.

You can use Google’s Mobile-Friendly Test to find out whether your website is mobile-friendly and get suggestions on how to improve it.
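If you’re hand-coding pages, one basic ingredient of a responsive layout is the viewport meta tag, which tells mobile browsers to scale the page to the device’s width:

<meta name="viewport" content="width=device-width, initial-scale=1">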

Structured data

Google sums up what is structured data very well: “Structured data is a standardized format for providing information about a page and classifying the page content; for example, on a recipe page, what are the ingredients, the cooking time and temperature, the calories, and so on.”

Schema.org is what defines how each element on the page (ingredients, cooking time, temperature and so on) should be marked in your code, in a way that’s understandable for different search engines.

Structured data is not a ranking factor, but it’s a perfect way to make your search results stand out. A few examples of what schema implementation can look like in search results:

  • If you are running an e-commerce store: star ratings on your product pages

  • If you are running a recipe website: star rating, preparation time and a picture for your recipes

  • If you are running an event website: listings of your events
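Structured data is most commonly added as a JSON-LD snippet in the page’s code. A minimal sketch for a product with a star rating (all names and values are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "120"
  }
}
</script>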

There are many more options. Read more about them in the official Google structured data documentation.

Up next: Off-page SEO
