These are some basic steps I feel every search engine optimizer must follow when optimizing a website from scratch. If these issues are resolved at the start, you will not face optimization problems later on.
Check for 404 pages/broken links on website
Broken links are poison for any website. If you have access to a crawler, the first thing to do is check for broken links on the website. This will give you a clear picture of the site's navigation structure as well as the links that are broken. To save time you can also ask the website owner for access to their Google Webmaster account, which will outline the site's complete crawl issues. Once you have the complete list of broken links, have them corrected as soon as possible. I have developed a free online 404 checker for this purpose 🙂 so please do use it.
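If you want to script this check yourself, here is a minimal sketch using only the Python standard library. The helper names are my own, and the sketch only flags plain 404 responses (other network errors are not handled):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import HTTPError

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

def find_broken_links(page_url):
    """Fetch a page and return the links on it that respond with 404."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    broken = []
    for link in extract_links(html, page_url):
        try:
            urlopen(link)
        except HTTPError as e:
            if e.code == 404:  # only plain 404s are flagged in this sketch
                broken.append(link)
    return broken
```

A real crawler would also need to respect robots.txt and throttle its requests; this sketch just shows the core idea.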
Enable search engine friendly URLs on website
If the website URL structure is dynamic in nature, i.e. its URLs contain "?" and "&", then you should try to have search engine friendly URLs enabled on the website. The benefit of these URLs is that they are keyword rich and help those pages rank in search engine results. Most websites nowadays already have such URLs enabled, as support is available on both Linux and Windows hosting servers.
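As an illustration, the usual approach is to derive a keyword-rich "slug" from the page title and serve that instead of the query string. A minimal sketch (the URL and paths are hypothetical):

```python
import re

def slugify(title):
    """Turn a page title into a keyword-rich, search-engine-friendly slug."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics become one hyphen
    return slug.strip("-")

# A dynamic URL like /product.php?id=42&name=Blue+Widget (hypothetical)
# could instead be served as:
print("/products/" + slugify("Blue Widget"))  # /products/blue-widget
```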
Check for duplicate pages on website and remove them
Try to check for duplicate content using Moz or PlagSpotter. Both are paid services; however, Moz offers a 30-day trial period, which you can cancel once the 30 days are up. Both websites tell you whether other web pages carry the same content as yours, which helps you take action. Most of the time the duplicate content is due to minor navigational issues in your website which can be resolved with some tweaks; however, these need to be addressed, otherwise the website may never rank high in search engine results and may even get banned.
Implement the canonical tag on website
As per Google's definition:
“A canonical page is the preferred version of a set of pages with highly similar content”
Google provides this one-stop solution for website owners: implement the canonical tag to inform Google and other search engines which version of a page they should prioritize in results. You can read more about canonical tags here and here. The best part of this solution is that it can resolve your duplicate page content issues very quickly.
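For illustration, the tag goes in the `<head>` of every duplicate variant of a page and points at the preferred URL (the URL below is a placeholder):

```html
<!-- Placed in the <head> of every variant of the page;
     the href is the preferred (canonical) URL -->
<link rel="canonical" href="http://www.example.com/products/blue-widget">
```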
Make sure that the meta tags are unique for each page on website
Most websites have nearly the same meta title, meta description and meta keywords on multiple pages, which leads search engines to treat these pages as duplicates. As a result, search engines do not rank these pages highly and they rarely appear in search results. Try to create meta content that is as unique as possible and relevant to each web page; it will help your website in the long run.
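A quick way to audit this is to collect each page's meta title during a crawl and look for repeats. A small sketch (the function and sample data are my own, not part of any tool mentioned here):

```python
from collections import Counter

def duplicate_titles(pages):
    """Given a mapping of URL -> meta title, return titles used on more
    than one page (a sign search engines may treat the pages as duplicates)."""
    counts = Counter(pages.values())
    return {title for title, n in counts.items() if n > 1}

pages = {  # hypothetical crawl results
    "/red-shoes":  "Buy Shoes Online",
    "/blue-shoes": "Buy Shoes Online",
    "/contact":    "Contact Us",
}
print(duplicate_titles(pages))  # {'Buy Shoes Online'}
```

The same check works for meta descriptions by swapping in those values.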
Sometimes Google does not display the right title for a web page in its search listings. This does not mean that Google has not indexed your website. If you look at the Google cache of the page, you will see that it has been indexed with the correct title.
This usually happens when Google feels that the page title is not descriptive enough of the content on the page. Google may also take information from DMOZ, depending on whether your website is listed there. If it is listed on DMOZ, add the following tag in the <head> section of your web page.
<meta name="robots" content="NOODP">
This tells Googlebot and other robots to ignore content from DMOZ and use only the web page's own title for display in its search listings.
You can read more on this on Google's Site title and description page.
Hope the above helped!
If your website has static URLs, i.e. they do not contain any query string parameters, and you need to redirect your old website URLs to your new ones, you don't have to worry. You can write simple redirect statements in your .htaccess file without being a regular-expression guru.
Add the following line in your .htaccess file
Redirect 301 /oldname http://www.example.com/newname
As you can see in the above statement, the .htaccess rule tells the Apache web server to issue a 301 redirect from /oldname to http://www.example.com/newname. You can add as many statements as you like; however, if there are more than 20 or so URLs, consider using RedirectMatch. Try your hand at regular expressions and get the job done in a few statements.
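For example, a single RedirectMatch with a regular expression can cover a whole directory of old URLs (the paths and domain here are placeholders):

```apache
# Redirect every URL under /old-shop/ to the same path under /shop/
RedirectMatch 301 ^/old-shop/(.*)$ http://www.example.com/shop/$1
```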
My company's SEO team lead came to me with an interesting problem and I thought I'd share it with everyone.
It so happens that his team was doing SEO work on an e-commerce website and they were seeing many of the website's pages indexed in Google with the text
The page you're looking for doesn't exist. If you followed a link from elsewhere in the site, please contact us.
Return to home page
One look at the content on these pages and I assumed it was their custom 404 page; Google, however, thought otherwise. It was treating all these pages as duplicate content and the website's ranking was suffering for it.
As the website had not been developed by us, we contacted that website's technical team and the client and informed them of the issue. They came back to us a couple of days later saying that these pages were 404 pages and this was a non-issue for them. In short, they did not believe that their 404 pages could be the cause of the duplicate content problem.
I then decided to check the HTTP status being returned when anybody browsed these pages and, voila, we had our answer. All the so-called custom 404 pages returned a 200 status, meaning that as far as the web server was concerned they really existed. The server should have sent a 404 status if they were truly 404 pages; that would have told the bots to treat those pages as non-existent, and the problem would have been resolved.
The pieces started to come together.
Google considered all these pages actual pages, and because the same content was displayed on all of them, the website was being penalized for duplicate content. The problem had more to do with the e-commerce software running the website, as it did not send out the 404 status; it simply displayed a page saying the page was not found.
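This class of problem, a "soft 404", is easy to script a check for. A minimal sketch using the Python standard library (the helper names and the not-found phrases are my own and would need adjusting per site):

```python
from urllib.request import urlopen
from urllib.error import HTTPError

NOT_FOUND_PHRASES = ("doesn't exist", "page not found")  # adjust per site

def is_soft_404(status, body):
    """A 'soft 404': the server says 200 OK but the body is an error page.
    Real 404 pages must return the 404 status so crawlers drop them."""
    return status == 200 and any(p in body.lower() for p in NOT_FOUND_PHRASES)

def check_url(url):
    """Return (status, soft_404_flag) for a URL. Hypothetical helper;
    network errors other than HTTP errors are not handled in this sketch."""
    try:
        resp = urlopen(url)
        body = resp.read().decode("utf-8", errors="replace")
        return resp.status, is_soft_404(resp.status, body)
    except HTTPError as e:
        return e.code, False  # a real 404 arrives here
```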
This incident should be an eye-opener for website owners, SEOs and web developers alike, as a simple mistake in your software can bring down your website's rankings like ninepins.
Hope the above helped. Comments?
Check for 404 status with the online 404 checker utility I have developed.
Our SEO department recently tweaked the URLs of a client website in order to improve its search engine rankings. The change was minor: they just removed a category path from the URL, which shortened it. There was one problem. The website was already indexed in Google, and even though the change was minimal it affected hundreds of URLs. When anyone clicked on an old URL indexed in Google, the 404 page showed up, which is a big no-no if you are managing a website with more than 50,000 GBP in sales.
The solution was an addition of one line in the .htaccess file which would automatically redirect the visitor from the old URL to the new URL.
I added the following line which resolved the problem.
RedirectMatch 301 ^/webman/(.*)$ http://www.adeelsarfaraz.com/$1
Please note that the above is just an example. Replace webman with your own path name and the domain with the relevant domain name.
I have just developed a very simple keyword analysis tool that allows visitors to check how many times they have used their keywords in a web page, article or paragraph, and in the process save themselves from keyword stuffing.
It works by letting the visitor enter their keywords (one per line) and then paste the article in the next text box. The visitor then clicks the Generate button and the system calculates the number of times each keyword has been used in the article. Keywords that were found show up in the Found Keywords text box, and the ones not found are shown in the Not Found Keywords text box.
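The core counting logic behind a tool like this can be sketched in a few lines of Python. This is my own illustration of the idea, not the tool's actual code:

```python
import re

def keyword_counts(keywords, article):
    """Count how often each keyword appears in the article, case-insensitively
    and as whole words, mirroring the Found / Not Found split described above."""
    text = article.lower()
    found, not_found = {}, []
    for kw in keywords:
        n = len(re.findall(r"\b" + re.escape(kw.lower()) + r"\b", text))
        if n:
            found[kw] = n
        else:
            not_found.append(kw)
    return found, not_found

found, missing = keyword_counts(
    ["seo", "canonical"],
    "SEO basics: good SEO starts with clean URLs.",
)
print(found)    # {'seo': 2}
print(missing)  # ['canonical']
```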
Here is the keyword analysis tool
As luck would have it, August seems to be my lucky month. First my blog went to number 1 on Yahoo and Bing, and now it is on the first page of Google.com. I had been nowhere on Google for some time, and with some work I have been able to get these results.
Here is the snapshot of the results on Google
I just checked the results for Pakistan Web Development Consultant on Yahoo and Bing, and my blog was ranked in the top position on both search engines. It has some way to go on Google, but I think I will achieve it in a few days.
Please see the results on Bing
Please see the results on Yahoo
A Coming Soon page is an important, if not necessary, step during website development. The Coming Soon page usually appears near the end of development, when the website has been built and is in the testing/quality assurance stage. Depending on the progress at this stage, the client can ask the development team to set up this page on his hosting server. Most clients are very eager for the launch of their website, especially if it's their first one, and some even place ads in the daily newspapers to attract visitors.
A Coming Soon page is a single HTML page comprising the website logo, a brief introduction to the website, and a "first to know when the website launches" email input box. If the website is to be optimized for search after its launch, the content is tweaked further to include some keyword-rich phrases so that if a crawler comes along it will index the page with those keywords. It does help with future SEO.
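The pieces described above fit on one page; here is a minimal sketch (the site name, paths and form endpoint are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Example Widgets - Quality Widgets Online - Coming Soon</title>
  <meta name="description" content="Example Widgets will soon offer quality widgets online.">
</head>
<body>
  <img src="logo.png" alt="Example Widgets logo">
  <h1>Example Widgets - Coming Soon</h1>
  <p>We are building an online store for quality widgets. Launching soon.</p>
  <form action="/notify" method="post">
    <input type="email" name="email" placeholder="Be the first to know">
    <input type="submit" value="Notify me">
  </form>
</body>
</html>
```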
People tend to think of the Coming Soon page as just another HTML page; however, in light of the above, it usually serves as the last step before the website launch and provides a good boost to the website upon launch.
Let me know your thoughts. It’s always nice to hear what others have to say.
We all know that content is king when it comes to websites trying to achieve high rankings on Google. To achieve this, website owners employ content writers to write keyword-rich content for their websites. Sometimes they go overboard with keyword stuffing, but most of the time these efforts pay off and reap huge dividends when the website starts receiving traffic for targeted keywords.
However, while all of the above is true, there are some things you cannot control, and those are client requirements. Clients usually don't want the look and feel of the website to change much, so you have limited room to manoeuvre. The reason is that they don't want the website's revenue function hampered in any way, while at the same time they want their SEO team to deliver the results they are paying for.
The Accordion effect has brought a much-needed solution in this regard. It allows you to add content to the web page that is initially hidden from the visitor; if the visitor clicks on the title of an article, its content is shown, otherwise it stays hidden. This lets you optimize the content for search engines as well as for the customer without the danger of Google penalizing your website for hidden content. The accordion has become a very important tool for an SEO, as it provides a solution in the fast-moving environment we work in.
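As a simple illustration of the idea, modern browsers even offer a built-in accordion via the `<details>` element (most sites implement the effect with JavaScript instead; the text below is placeholder content):

```html
<!-- A minimal accordion: the article body stays in the page markup
     (so crawlers can read it) but is hidden until the title is clicked. -->
<details>
  <summary>Why broken links hurt your rankings</summary>
  <p>Keyword-rich article content goes here; it is part of the page
     source even while visually collapsed.</p>
</details>
```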
If you would like to see the Accordion effect in action on a website then you can browse this page and see how it uses this effect to optimize itself.
Hope the above helped.