It struck me the other day, while dealing with yet another client who had been given an SEO analysis by a company pitching for their business, just how many myths there are in the SEO industry.
When the person you are talking to starts to repeat some of these myths, it’s a good sign that they don’t understand how to optimise a site effectively, so I thought I’d mention some of my favourites (as of November 2015).
- Meta tags – there are so many people out there who insist that you need some or all of the following meta tags to get your site to the top of the search results:
- Keywords Meta – Google hasn’t read this meta tag for many years; in fact, as far back as 2009 you’ll find updates from Google saying so (I remember an earlier announcement from around 2005, but that seems to have vanished).
- Revisit Meta – there was, a long time ago (in a universe far away), a search engine that looked at this meta tag, and if you “told” it to revisit every 7 days, that’s what it did. None of the major search engines use this tag now, as they all decide for themselves how often a page changes and therefore how often they need to come back and read it. If you think about it, even Google has limited resources, so why would it waste bandwidth checking a page that hasn’t changed for 3 years every 7 days when it could be using that bandwidth to check news pages on the BBC, for example?
- Robots Meta – There is one very good use for the robots meta tag: telling the search engines that you don’t want them to index the page and/or follow any links on it. The typical use by people who don’t understand this is a robots “follow, index” statement, which is what the search engines would do anyway (they want to “index” the page so it stands a chance of appearing in the results, and to “follow” the links on the page to find other pages). There’s an example just after these meta tag points.
- Distribution Meta – The idea of this tag was, a long time ago, to indicate whether you wanted a page available globally or wanted to place restrictions on it. It had three values, but it isn’t used nowadays (if it ever was by search engines like Google):
- Global – indicates that your webpage is intended for everyone
- Local – intended for local distribution of your document
- IU – Internal Use, not intended for public distribution
- Other Meta Tags – There are several other meta tags that you will see people saying you “must” have. My advice is to look at https://support.google.com/webmasters/answer/79812?hl=en, which tells you exactly which meta tags Google understands, and it’s a fairly safe bet that if Google doesn’t understand a tag then none of the other major engines will either.
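To make the meta tag points above concrete, here’s a quick catalogue of the tags discussed (not a real page head – the content values are invented purely for illustration):

```html
<!-- Catalogue of the meta tags discussed above; values invented for illustration -->

<!-- Ignored by Google for many years: -->
<meta name="keywords" content="accountant, birmingham accountant">

<!-- No major engine honours this; they set their own crawl rate: -->
<meta name="revisit-after" content="7 days">

<!-- Effectively dead, if the major engines ever used it at all: -->
<meta name="distribution" content="global">

<!-- Redundant: indexing the page and following its links is the
     default behaviour anyway: -->
<meta name="robots" content="index, follow">

<!-- The one genuinely useful form: keep this page out of the index
     and don't follow its links (use on its own, never alongside the
     tag above): -->
<meta name="robots" content="noindex, nofollow">
```

If a page carries no robots meta tag at all, the engines treat it as “index, follow” by default, which is why the fourth tag above achieves nothing.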
- You only have to optimise your home page – Contrary to what you may think (and you’re not alone in thinking this), the home page is not the most important page of a site. If your site is about photography and someone is looking for hints on how to take photos of a lunar eclipse, then the most important page is the one about photographing the moon during an eclipse; similarly, if someone is looking for a menu for your restaurant, then that page is the most important one for them. Every page on your site should be optimised for the content of that page.
- You have to have a sitemap – As you will see from our “do I need a site map” page (which is itself a good example of point 2 above, as it’s a very popular page on this site that answers a question people are actually asking), there are two types of sitemap. You only really need them if the site is large and/or the navigation is difficult for people (and the search engines) to follow. A small site where every page is linked to from the navigation really doesn’t need either an HTML or an XML sitemap. As an aside, the XML sitemap is there to tell the search engines where the pages are and not, as I’ve been told in the past, to act as a list of pages that you don’t want the engines to visit.
- You have to have a robots.txt file – This file has one main purpose: telling search engines that you don’t want them to visit or index particular pages or directories. If there are just one or two pages that you don’t want indexed, it’s just as easy to tag those pages with a meta robots tag (see above). There is a secondary use for this file that very few people seem to know about, and that’s to list the locations of the XML sitemaps for the site (see the example after this list). Despite what you may be told, if you don’t want to block any pages or directories and you don’t want to list an XML sitemap, you don’t have to have a robots.txt file at all.
- Just list a series of locations on your page and you’ll be found for all of them – I’ve seen this recently on a couple of websites belonging to firms of accountants: they have an office in, say, Birmingham and think that by listing Coventry, West Bromwich, Dudley, Solihull, Sutton Coldfield, Wolverhampton and Walsall, Google will think they are based in all those areas (this is just an example and not an actual case). What they seem to forget is that even if that were to work (which it doesn’t), a visitor looking for accountants in Coventry is not likely to want a firm based in Birmingham.
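To round off points 3 and 4, here’s a minimal, hypothetical robots.txt (the paths and domain are invented for illustration). It blocks a couple of areas you don’t want crawled and lists the location of the XML sitemap; a site that needs neither can simply go without the file:

```
# Hypothetical robots.txt – paths and domain are examples only
User-agent: *
Disallow: /admin/
Disallow: /thank-you.html

# The little-known secondary use: point the engines at the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Note that the file lists what to keep out; everything not mentioned is crawled as normal, which is exactly the opposite of treating it as a list of pages for the engines to visit.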
The above five points are just a few of the SEO myths you will see floating around on the internet or be told by some SEO people. Hopefully you are now in a position to understand why they are wrong and can look at these kinds of claims with a slightly more cautious eye.
One thing you can always ask the “expert” is when they last tested whatever it is they are telling you. If the answer is “I haven’t”, treat everything they say with caution. A good SEO professional is constantly testing things; even if they know that something worked in a particular way last month or the month before, they know that the rules change and it may not be working now.