SEO and Web Development Tips

Written by Graham W Wöbcke

The following information outlines some basic SEO tips for your webpages and website. I actually don't like to think of these things as SEO tips - I'd rather call them a best practice guide for any page you build - but by following this guide, I am positive you will improve your site's chances of being indexed completely and correctly in search engines.

Part One - Google Sitemaps

I'm going to write a series of articles on SEO showing all that I have learnt and the tips I have picked up. SEO is becoming an essential part of every web developer's skill set, and those without it will be left behind, and possibly without work! In all of these posts, I will use a fictitious website (at the moment), The Lawn Man, to illustrate the examples. In my first article, I am going to talk about Google Sitemaps.

Google XML Sitemaps have been around for a while now, and it is probably a good time to start becoming familiar with them. They can help you achieve up-to-date indexing in Google, which in turn should help you with search placement.

A Google XML Sitemap allows web developers to provide Google directly with a master list of all their site's critical pages for indexing/crawling. The sitemap data is recorded inside an XML file that includes a list of the URLs belonging to the site, the date each page was last modified, how often each page is updated, and each page's relative priority.

A Google XML sitemap generally helps Google index your site, but if your site is small (say, under 10 pages) or is not updated very often, it may not help much at all, especially if your site is already inside the index. It helps most with keeping the latest versions of your pages in Google. Larger sites with lots of pages should benefit, as sometimes not all of your pages appear in the Google index.

Google XML sitemaps will in many cases not improve your page rankings directly, but having the most current version of your site in Google's index can improve your results in Google SERPs (Search Engine Result Pages). When you update a page, Google's index will reflect the change more quickly than it would without the XML sitemap. This effectively means that more frequent spidering keeps your latest site version in the index, and this should help with your site's rankings.

If your site is small, or if you simply prefer to, you can create your XML sitemap manually. The structure is not difficult to follow, and even an XML novice will be able to build one. If you prefer an automated tool that requires few changes to the output, I can recommend VIGOS GSitemap, a free, easy-to-use tool that will help you create your XML sitemaps with ease.

Here is an example sitemap:
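The sketch below covers our fictitious three-page site; the placeholder domain, page names and dates are assumptions, since the real URLs are not shown in this article:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2007-03-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/requests.asp</loc>
    <lastmod>2007-03-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://www.example.com/schedule.asp</loc>
    <lastmod>2007-03-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>

Each <url> entry records the page location, the date it was last modified, how often it changes, and its relative priority, as described above.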

Submitting your completed XML Sitemap to Google is relatively straightforward. After the file has been created, the first thing you want to do is upload it to your server, preferably at the root level (e.g. http://www.example.com/sitemap.xml). Now you will need to log into the Sitemap console using your Google account login. From here you can add a site to your account. To do this, simply enter your top level domain where it says "Add Site". This will add the domain to your account and allow you to submit the XML sitemap you have created. You will then be taken to the site summary page, where you will see a text link that says "Submit a Sitemap". Click on this link to enter the online location of your XML sitemap. Once you have entered the location, click "Add Web Sitemap" and it is recorded.

So, that's it. It is quite a simple process to get a sitemap added into Google. There is one final, optional step that I recommend you perform - verifying your Sitemap. This is easily done by placing a specific meta tag, provided by Google, on your home page. Verification gives you access to crawl stats and other valuable information regarding your Google listing.

Part Two - The Top of your Page

The top of your HTML is some of the most important 'code' you are likely to write if you are interested in obtaining a good search engine position. I'm going to break down the six most important factors and hopefully describe them in a way that helps.

1. Title Meta Tag

The title tag is displayed as the headline in Search Engine Result Pages. It should be easy to read and contain your main keyword phrase toward the beginning of the tag. Don't put your company name first unless you are already a huge company like Amazon or eBay - people are more likely to search for the products or services on offer, not your name. I think it is a good idea to capitalise important words in your tag as well.

Titles should be five to ten words long and no more than 70 to 80 characters.

2. Description Meta Tag

The description tag is the paragraph that will be displayed in the Search Engine Result Pages. Your description should be designed to attract customers and compel the reader to act right now and follow your link. If you do not include a description tag, search engines will sometimes display the first text on your page instead (which may not be the best option). When writing your description, approach it as though it is a proper sentence, with correct grammar and punctuation. It is also a good idea to include your subject and geographical references to where you are, like your city name.

The description should not exceed 150-200 characters in length.

3. Keywords Meta Tag

The importance of the meta keywords tag fluctuates among different search engines, and some people deeply involved in the SEO community debate whether it helps at all. I usually advise using a small number of relevant, targeted keywords, as they may help with rankings in niche search engines. Just use relevant 'tags' that apply directly to the content of that particular page, and don't overdo it. Remember this simple rule:

The total of all keywords on a page adds up to a 'score value' of 100. Therefore each keyword you use takes a percentage away from that value.

For example, if you use 5 keywords, each has a value of 1:5 (or 20/100), and if you use 10 keywords, each has a value of 1:10 (or 10/100), so each keyword carries less weight than if you had used fewer.

4. Revisit Tag

While not strictly a search engine optimization item, the "revisit" tag may encourage a search engine spider to return in so many days to re-index the site. This can be of great importance when your site updates its data and you are trying to get your latest content inside the index.
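For example, to suggest that spiders return every seven days (a sketch only - most major engines treat this tag as a hint at best, and many ignore it entirely):

<meta name="revisit-after" content="7 days">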

5. Robots Tag

The Robots Tag was created to give developers who cannot upload or control the robots.txt file on their website a method of keeping pages out of search engine indexes. The Robots Tag may contain one or more of the following keywords, without regard to case: none, noindex, nofollow, all, index and follow. Here is a breakdown of these keywords:

  • none - ignore this page (equiv. to: noindex, nofollow)
  • noindex - page may not be indexed
  • nofollow - do not follow links from this page
  • all - no restrictions (equiv. to: index, follow)
  • index - include this page
  • follow - follow links from this page to find other pages

You can also specify actions for specific robots, like Googlebot for example. This can be helpful if you find you are being crawled frequently by robots not offering any value to your site, and you wish to limit crawling to just Googlebot and MSNBot, for example.

One new keyword that has appeared is NOODP. This relates to the description displayed on search engine result pages and DMOZ (the Open Directory Project). Some search engines will take the description of your site from DMOZ in preference to your own description tag. Use this keyword if you do not wish for this to happen.
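As a sketch, here are a page-wide exclusion, a directive aimed only at Googlebot, and the NOODP keyword in use:

<meta name="robots" content="noindex, nofollow">
<meta name="googlebot" content="index, follow">
<meta name="robots" content="noodp">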

6. Put CSS and JavaScript into External Files

For search engines, excessive or poorly formatted code at the top of your page will have a negative impact on your rankings. By including your CSS and JavaScript in external files, you will increase the amount of text a search engine can read in your page.

If you have lots of inline JavaScript/CSS code, the text content will suffer and may not be read correctly by Search Engines.

So what is an example of a good header?

So now, using our example website for The Lawn Man, which offers lawn mowing services in Sydney, here is an example of a good header targeting three 'keywords': lawn mowing services, tree lopping, garden maintenance.
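A sketch of such a header follows; the site name comes from later in this series, while the file paths are placeholder assumptions (the fictitious domain is not shown in this article):

<html>
<head>
<title>Lawn Mowing, Tree Lopping and Garden Maintenance Services in Sydney :: The Lawn Man</title>
<meta name="description" content="The Lawn Man offers lawn mowing services, tree lopping and garden maintenance across Sydney. Request a free quote online today.">
<meta name="keywords" content="lawn mowing services, tree lopping, garden maintenance">
<meta name="robots" content="index, follow">
<link rel="stylesheet" type="text/css" href="/styles/main.css">
<script type="text/javascript" src="/scripts/main.js"></script>
</head>

Note how the title and description lead with the keyword phrases, the keywords tag stays small and relevant, and the CSS and JavaScript live in external files.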

Of course, this article did not discuss selecting appropriate keywords or displaying appropriate content, but it should be a guide to help you create an effective page header.

Part Three - Six Quick Tips for Web Developers

1. Always use the standard HTML tags for headings, bold text and lists.

HTML has tags for headings, bold text and ordered/unordered lists, and you should always use them. With CSS, you can style them practically any way you like. Using an H1 tag for your headings, strong tags for important text, and ul, ol and li tags for lists helps search engines understand which text on the page is a heading and which terms are the most important.

Using our lawn mowing example from the previous post, here is an example of how NOT to make a heading for SEO:

<p style='font-size:16pt; font-weight:bold'>Garden Maintenance Services</p>

Applying a CSS style to a paragraph tag to make the text larger and bold doesn't tell search engines it is a heading - they will treat it as a paragraph of normal text, because that is what the tag says it is. A much better way is to define styles for your heading tags (H1, H2, H3, H4, H5 and H6) in your CSS file. The better way to make a heading is this:

<h1>Garden Maintenance Services</h1>

2. Always use ALT text when adding an image.

As search engines can't read the text inside an image, adding ALT text to your image tag helps the engines understand more about that image. You can use it to place more keywords and phrases into your page, but don't keyword-stuff the ALT text, as this is frowned upon and can get you 'blacklisted'.

Here is an example of using ALT text inside an image:

<img src='/images/gardens.gif' width='400' height='300' alt='Garden Maintenance Services'>

3. Avoid Canonical URL issues and never place session IDs on the URL.

Search engines see http://example.com, http://www.example.com and http://www.example.com/index.asp (for example) as three different pages. To correct this, you should always link to the preferred domain, and at the root level, i.e. http://www.example.com/. You may also wish to use a redirect to point all of your pages to this preferred option. If you have signed up at Google and are using Google Sitemaps and the Webmaster Tools, you can also select the preferred domain that Google uses.

If you are using sessions, make sure your session ID (especially if you have a PHP site) is not added as a PHPSESSID parameter at the end of the URL, e.g. ?PHPSESSID=34467908. This can be extremely problematic for your site's search engine ranking. Search engines will see a unique PHPSESSID in the URL every time they visit a page on your site, and in turn think it is a different page each time; even worse, it could be viewed as lots of duplicate content and your site may be banned. Turn this setting off in your php.ini file, or ask your server administrator to do it for you.
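As a sketch, these are the relevant php.ini directives (names as used in the PHP 4/5 era - confirm against your PHP version):

; stop PHP appending ?PHPSESSID= to URLs
session.use_trans_sid = 0
; propagate the session ID via cookies only
session.use_only_cookies = 1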

4. Have a unique, meaningful TITLE tag on each and EVERY PAGE.

Many web developers neglect this rule, and their websites suffer by not being indexed correctly.

By not having a meaningful title tag, you are greatly reducing the amount of traffic to your site. Each and every page has different text, content and images, so why would it not also have a different title? Why use a title of "untitled-1" when you can put meaningful words about your page in the title? It is also a bad idea to use one generic title across all of the pages on your site.

So, using our dummy sitemap from Part One above, here are titles for the three pages on our site:

<title>Lawn Mowing, Tree Lopping and Garden Maintenance Services in Sydney :: The Lawn Man</title>

<title>Work Request for Lawn Mowing and Garden Maintenance Services in Sydney :: The Lawn Man</title>

<title>Current Work Schedule of The Lawn Man's Garden Maintenance Services in Sydney</title>

5. Do not use "click here" or "read more" text links unless absolutely necessary.

Many content producers ignore or neglect this, so you need to be aware of the implications of doing so.

By not using meaningful text inside links, search engines cannot establish whether the link actually relates to the subject mentioned in the link text and in the text around the link. As an example, search for "click here" in Google. You will find the top result is Adobe Acrobat Reader or Apple QuickTime Player. This is because millions of pages use something similar to this:

To install the latest version of Acrobat Reader <a href="">click here</a>.

So, it is important to have meaningful text inside your links as well. Let's say we wanted to link from the homepage of our lawnman site to the work request page. The best way to do this would be:

If you would like The Lawn Man to work for you, please fill out the <a href="">Work Request for Lawn Mowing and Garden Maintenance Services</a>.

The link text contains many of the same keywords as the TITLE tag and META data of the work request page, so search engines can determine that the link pointing to that page is one they can trust.

6. Where possible, use Validators on your site.

Your site does not need to be technically perfect to rank highly in the search engines, but having a validated HTML page will help ensure that search engines (and browsers) see your page accurately. Try using the official W3C Validator as a guide. Validating generally identifies areas of your HTML that are redundant, unnecessary, or not accepted across all browsers - all of which helps make your site more search engine friendly.

Classic ASP String Functions

The first code snippet is a little function that converts text into mixed case, or sentence case. It seeks out spaces in a string and converts the first character after each space to a capital letter and the remaining characters before the next space to lower case.

Mixed Case Function
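A minimal VBScript sketch of such a function (the function name is an assumption):

Function MixedCase(strInput)
	Dim arrWords, i
	' split on spaces, lower-casing everything first
	arrWords = Split(LCase(strInput), " ")
	For i = 0 To UBound(arrWords)
		' capitalise the first character of each word
		If Len(arrWords(i)) > 0 Then
			arrWords(i) = UCase(Left(arrWords(i), 1)) & Mid(arrWords(i), 2)
		End If
	Next
	MixedCase = Join(arrWords, " ")
End Function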

ASP seems to lack a URL decode function, although a URL encode function is available (Server.URLEncode). Here is a function that can decode any URL-encoded URL or variable.

URL Decode Function
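A sketch of a decoder that handles "+" as a space and %XX hex escapes:

Function URLDecode(strEncoded)
	Dim strResult, i, ch
	strResult = ""
	i = 1
	Do While i <= Len(strEncoded)
		ch = Mid(strEncoded, i, 1)
		If ch = "+" Then
			' plus signs represent spaces
			strResult = strResult & " "
			i = i + 1
		ElseIf ch = "%" And i + 2 <= Len(strEncoded) Then
			' %XX is a hex-encoded character
			strResult = strResult & Chr(CInt("&H" & Mid(strEncoded, i + 1, 2)))
			i = i + 3
		Else
			strResult = strResult & ch
			i = i + 1
		End If
	Loop
	URLDecode = strResult
End Function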

Here is a practical example of when you would use the URLDecode function - recording the search terms someone has used to find your site in Google. The code examines the referring URL to see if it contains "google." and, if it does, assumes the string contains Google keywords and seeks out "q=" inside the string. If it finds this too, stripStr is created with the prefix "GOOGLE:" and the keywords are appended to the end.
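A sketch of that referrer check, assuming the URLDecode function above (the variable names beyond stripStr are mine):

Dim strReferrer, stripStr, intPos
strReferrer = Request.ServerVariables("HTTP_REFERER")
stripStr = ""
If InStr(1, strReferrer, "google.", vbTextCompare) > 0 Then
	intPos = InStr(1, strReferrer, "q=", vbTextCompare)
	If intPos > 0 Then
		stripStr = Mid(strReferrer, intPos + 2)
		' drop any parameters that follow the keywords
		If InStr(stripStr, "&") > 0 Then
			stripStr = Left(stripStr, InStr(stripStr, "&") - 1)
		End If
		stripStr = "GOOGLE:" & URLDecode(stripStr)
	End If
End If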

Strip HTML Tags Function
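This function strips ALL HTML tags from a string using a regular expression and returns plain text; it is used again in the scraping article further down. A minimal sketch:

Function stripHTML(strHTML)
	Dim objRegExp
	Set objRegExp = New RegExp
	' match any tag: "<", anything that is not ">", then ">"
	objRegExp.Pattern = "<[^>]+>"
	objRegExp.Global = True
	objRegExp.IgnoreCase = True
	stripHTML = objRegExp.Replace(strHTML, "")
	Set objRegExp = Nothing
End Function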

This snippet is a little function that strips out specified special characters. It seeks out any of the characters that exist in a given array and removes them all.

Strip Any Character Function
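A sketch; the character list is an assumption and should be adjusted to suit:

Function StripAnyCharacter(strInput)
	Dim arrChars, i
	arrChars = Array("!", "#", "$", "%", "^", "*", "~")
	For i = 0 To UBound(arrChars)
		' remove every occurrence of this character
		strInput = Replace(strInput, arrChars(i), "")
	Next
	StripAnyCharacter = strInput
End Function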

This next snippet is a simple function that will check an array for any occurrences of a particular string.

Check In Array Function
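A sketch using a case-insensitive comparison (the function name is an assumption):

Function CheckInArray(arrValues, strFind)
	Dim i
	CheckInArray = False
	For i = 0 To UBound(arrValues)
		If StrComp(arrValues(i), strFind, vbTextCompare) = 0 Then
			CheckInArray = True
			Exit Function
		End If
	Next
End Function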

General Classic ASP Tips

Lately I have needed to perform a lot of redirects from old pages to new pages. Some of these old pages have decent rankings in search engines, so how do I make sure the new pages retain this ranking? By using an HTTP 301 redirect (HTTP Moved Permanently). Here is how you code a 301 redirect in ASP:

301 Redirect
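A minimal sketch (the destination URL is a placeholder):

' send the permanent-redirect status and the new location, then stop
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "/new-page.asp"
Response.End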

If you want to redirect to your default page name (index.asp, default.asp etc.), it is a good idea to redirect only to the folder name and leave off the name of the page, like:

Response.AddHeader "Location", "/subfolder/"

Scrape External Site Content With Classic ASP

There are many times when you want to grab something from another site that isn't provided via RSS. You could type it in, but that would be time consuming. So how do we do this? We scrape the content from the site using the XMLHTTP object. If you have read the previous article on how to cache an RSS feed, you will no doubt see many similarities in this article, so I won't re-explain those portions. So let's have a look at the code.

The code contains two functions, LoadThePage and GrabTheContent, and their names explain what they do. LoadThePage saves a copy of the external HTML retrieved via an XMLHTTP request. GrabTheContent manipulates this text and returns a string that we can use; we give it two parameters, the text at the start of the section we want to grab and the text that ends what we want to grab. Pretty simple. For this example, we will retrieve text from a page about the Galah.
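A sketch of the two functions; the names follow the text, while the implementation details are assumptions:

Function LoadThePage(strURL)
	Dim objXMLHTTP
	Set objXMLHTTP = Server.CreateObject("MSXML2.ServerXMLHTTP")
	objXMLHTTP.Open "GET", strURL, False
	objXMLHTTP.Send
	LoadThePage = objXMLHTTP.ResponseText
	Set objXMLHTTP = Nothing
End Function

Function GrabTheContent(strPage, strStart, strEnd)
	Dim intStart, intEnd
	intStart = InStr(strPage, strStart)
	If intStart > 0 Then
		intStart = intStart + Len(strStart)
		intEnd = InStr(intStart, strPage, strEnd)
		If intEnd > intStart Then
			' return only the text between the two markers
			GrabTheContent = Mid(strPage, intStart, intEnd - intStart)
			Exit Function
		End If
	End If
	GrabTheContent = ""
End Function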

Now to the main portion of the program. It first checks the date stamp of the application variable from when the content was last cached and, if necessary, retrieves a new copy. We do this so we don't hammer someone else's website and slow down the performance of our page (and to avoid annoying the other webserver with heaps of connections). Once we have either retrieved a new cache or used the existing one, we display the contents on the page.
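A sketch of that logic, assuming a one-hour cache window (the application variable names, source URL and boundary markers are placeholders):

Dim blnRefresh, strContent, strPage
blnRefresh = True
If IsDate(Application("scrapeTime")) Then
	' refresh only if the cache is more than an hour old
	If DateDiff("h", Application("scrapeTime"), Now()) < 1 Then blnRefresh = False
End If
If blnRefresh Then
	strPage = LoadThePage("http://www.example.com/galah.html")
	strContent = GrabTheContent(strPage, "<!-- start -->", "<!-- end -->")
	Application.Lock
	Application("scrapeCache") = strContent
	Application("scrapeTime") = Now()
	Application.Unlock
Else
	' cache is still fresh - reuse it
	strContent = Application("scrapeCache")
End If
Response.Write strContent

Running the script, we would see these results: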


The Galah can be easily identified by its rose-pink head, neck and underparts,
with paler pink crown, and grey back, wings and undertail. Birds from the west of
Australia have comparatively paler plumage. Galahs have a bouncing acrobatic flight,
but spend much of the day sheltering from heat in the foliage of trees and shrubs.
Huge noisy flocks of birds congregate and roost together at night.

OK, so that is nice, but how would we format this retrieved text? How do we remove the HTML tags?

To strip the HTML tags, we will use the function I published earlier named stripHTML, which strips ALL HTML tags using a regular expression and returns plain text. To format the text's appearance, we use the string Replace function to change, remove or insert tags in the appropriate places. Essentially, you scan through the retrieved page and add in the formatting you like. We will finally present this text inside a DIV with an inline style applied. So we would first add in the stripHTML function as shown previously on this page, and then change how the text is presented at the bottom of the script.
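A sketch of that change, using the stripHTML function from earlier (the inline style values are assumptions):

' strip the tags and present the clean text inside a styled DIV
Response.Write "<div style='font:normal 10pt Verdana, Arial, sans-serif; color:#333333;'>"
Response.Write stripHTML(strContent)
Response.Write "</div>"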

So let's take a look at how it appears now:


The Galah can be easily identified by its rose-pink head, neck and underparts, with paler pink crown, and grey back, wings and undertail. Birds from the west of Australia have comparatively paler plumage. Galahs have a bouncing acrobatic flight, but spend much of the day sheltering from heat in the foliage of trees and shrubs. Huge noisy flocks of birds congregate and roost together at night.

There you go - much better, and exactly the same result as the first example, but much more readable and adaptable to your site once you apply your own formatting. You could now go a step further and use classes rather than inline styles.

Cache an RSS Feed in ASP

If, like me, you work a lot with XML and RSS, it will be necessary to cache feeds to reduce the load on systems. ASP would normally request the file each and every time the script is loaded in the browser. The following method caches an RSS feed for a specified period of time, greatly reducing the load on the server. I am going to assume you understand the RSS 2.0 format in this example.

Essentially, the script checks the timestamp of an application variable, and if it has been longer than 1 hour, it reloads the RSS feed into the application variable; otherwise it uses the existing cached content. If the script needs to retrieve the RSS feed, it loads it into a DOM object and then parses the RSS looking for <item> tags. We then create a string for each of the RSS fields we are interested in storing. While we parse each <item>, we add a delimiter "#%#" inside these strings to separate the entries. Once we have finished parsing the RSS, we are left with three strings.

We now split these strings into arrays using the Split function, seeking the delimiter "#%#" we used earlier. Once the elements are inside three arrays, we loop through with a FOR loop until we reach the end of the arrays, processing and formatting each element as required. The reason I prefer not to feed the RSS straight into an array is that we would need to determine the number of records and ReDim the array, whereas with this method you create a concatenated string that can easily be split at the delimiters and fed into an array later.
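A sketch of the whole approach, assuming a one-hour window and storing each delimited string in its own application variable for simplicity (the feed URL and variable names are placeholders):

Dim blnRefresh, strTitles, strLinks, strDescs
blnRefresh = True
If IsDate(Application("rssTime")) Then
	If DateDiff("h", Application("rssTime"), Now()) < 1 Then blnRefresh = False
End If

If blnRefresh Then
	Dim objXML, objItem
	Set objXML = Server.CreateObject("MSXML2.DOMDocument")
	objXML.Async = False
	objXML.Load "http://www.example.com/feed.xml"
	strTitles = "" : strLinks = "" : strDescs = ""
	' build one "#%#"-delimited string per field we want to keep
	For Each objItem In objXML.getElementsByTagName("item")
		strTitles = strTitles & objItem.selectSingleNode("title").Text & "#%#"
		strLinks  = strLinks  & objItem.selectSingleNode("link").Text & "#%#"
		strDescs  = strDescs  & objItem.selectSingleNode("description").Text & "#%#"
	Next
	Application.Lock
	Application("rssTitles") = strTitles
	Application("rssLinks")  = strLinks
	Application("rssDescs")  = strDescs
	Application("rssTime")   = Now()
	Application.Unlock
Else
	strTitles = Application("rssTitles")
	strLinks  = Application("rssLinks")
	strDescs  = Application("rssDescs")
End If

' split the delimited strings back into arrays and display them
Dim arrTitles, arrLinks, arrDescs, i
arrTitles = Split(strTitles, "#%#")
arrLinks  = Split(strLinks, "#%#")
arrDescs  = Split(strDescs, "#%#")
' the trailing delimiter leaves an empty final element, so stop one short
For i = 0 To UBound(arrTitles) - 1
	Response.Write "<a href='" & arrLinks(i) & "'>" & arrTitles(i) & "</a><br/>"
	Response.Write arrDescs(i) & "<br/><br/>"
Next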

General PHP Tips

Quite often, you will want to take the information provided freely on another website and include it on your own. This post will explain how you can do some basic parsing using PHP.

I will now describe how to obtain the value of 1 Australian Dollar in US Dollars from Yahoo!'s finance site. This is probably one of the simplest parse requests, but it is still quite useful to understand. Here is the current exchange rate:

I will now explain how this is done.

	// historical Yahoo! CSV quote endpoint (an assumption - the original URL was not shown)
	$open = fopen("http://quote.yahoo.com/d/quotes.csv?s=AUDUSD=X&f=sl1", "r");
	$results1 = fread($open, 32);
	fclose($open);
	printf ("1 AUD = %01.4f USD", substr($results1, 11, 12));

Ignoring the comment, this simple program is only 4 lines long.

  • Line 1: Opens the URL you are retrieving the quote from and assigns the file handle to a variable named $open.
  • Line 2: Reads the first 32 bytes from the URL into the variable $results1. You can change the byte value of 32 to whatever you feel is necessary for the job.
  • Line 3: Closes the connection to the website opened in $open.
  • Line 4: Formats the output using printf to 4 decimal places (float). Only the information contained inside the substr is displayed in this format.

Of course, no one should be writing code without some form of re-usability, so here is the code inside a function that can be called to display the value of any known currency type.
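A sketch of that function, reusing the historical Yahoo! CSV endpoint from the first example (still an assumption):

function QuoteExchangeRate($from, $to) {
	// e.g. $from = "AUD", $to = "USD" builds the symbol AUDUSD=X
	$open = fopen("http://quote.yahoo.com/d/quotes.csv?s=" . $from . $to . "=X&f=sl1", "r");
	$results = fread($open, 32);
	fclose($open);
	// the rate starts after the 11-character quoted symbol prefix
	printf("1 %s = %01.4f %s", $from, substr($results, 11, 12), $to);
}

QuoteExchangeRate("AUD", "USD");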

The function named QuoteExchangeRate works in exactly the same manner as the first example, but you can now pass it two variables specifying the currencies you are after, using the codes that Yahoo! finance understands. So QuoteExchangeRate("AUD", "USD"); specifies that we want 1 Australian Dollar (AUD) converted into US Dollars (USD).

You could also set the script up to use values passed on the URL by using QuoteExchangeRate($_GET['from'], $_GET['to']);. You would then need to call the PHP file with the parameters ?from=AUD&to=USD to achieve the same result.

You can specify other currencies very easily, such as QuoteExchangeRate("AUD", "GBP"); which will display 1 Australian Dollar in British Pounds:

You can also reverse the lookup by specifying QuoteExchangeRate("USD", "AUD"); which will display 1 US Dollar in Australian Dollars:

How to parse RSS with PHP

RSS is a very important part of the internet now, and it is being widely adopted by many sites as a method of keeping visitors informed of the latest news and offerings.

In the example script, I will show you how to parse an eBay RSS feed based on search criteria passed to it via a URL variable named st, e.g. parse.php?st=searchtext. The code is split into sections, and I'll explain how each part works underneath it.

	// create rssItem class
	class rssItem {
		var $rssItemTitle;
		var $rssItemLink;
		var $rssItemDescription;
	}
	// working variables
	$feedTitle = "";
	$feedLink = "";
	$feedDescription = "";
	$arItems = array();
	$itemCount = 0;
	// feed variables, expects ?st= on URL
	$searchTerm = str_replace(" ","+",$_GET['st']);
	// historical eBay search RSS URL (an assumption - the original prefix was not shown)
	$rssFile = "http://shop.ebay.com/i.html?_nkw=$searchTerm&_sacat=0&_rss=1";
	// descriptions (true or false) goes here
	$showDescriptions = true;

The first part of the script creates a class and defines variables. We create a class named rssItem to hold our RSS data; inside it we create variables to store the various elements of each feed item, such as title, link and description. We also define some working variables to hold information as required, we configure a link to the RSS file we want, and we define an array to hold the headline objects. There is also a boolean variable, set to either true or false, that determines whether the item descriptions are displayed.

The next three functions are based on functions found in Chapter 22 of the book PHP Developer's Cookbook by Sterling Hughes [Sams], an absolutely invaluable source of PHP inspiration. At the bottom of this post, I will list the PHP books I own and refer to when solving my issues.

function startElement($parser, $name, $attrs) {
	global $curTag;
	$curTag .= "^$name";
}

function endElement($parser, $name) {
	global $curTag;
	$caret_pos = strrpos($curTag,'^');
	$curTag = substr($curTag,0,$caret_pos);
}

function characterData($parser, $data) {
	global $curTag; // get the Channel information first
	global $feedTitle, $feedLink, $feedDescription;
	$titleKey = "^RSS^CHANNEL^TITLE";
	$linkKey = "^RSS^CHANNEL^LINK";
	$descKey = "^RSS^CHANNEL^DESCRIPTION";
	if ($curTag == $titleKey) {
		$feedTitle = $data;
	} elseif ($curTag == $linkKey) {
		$feedLink = $data;
	} elseif ($curTag == $descKey) {
		$feedDescription = $data;
	}
	// now get the items
	global $arItems, $itemCount;
	$itemTitleKey = "^RSS^CHANNEL^ITEM^TITLE";
	$itemLinkKey = "^RSS^CHANNEL^ITEM^LINK";
	$itemDescKey = "^RSS^CHANNEL^ITEM^DESCRIPTION";

	if ($curTag == $itemTitleKey) {	// make new rssItem
		$arItems[$itemCount] = new rssItem();
		// set new item object's properties
		$arItems[$itemCount]->rssItemTitle = $data;
	} elseif ($curTag == $itemLinkKey) {
		$arItems[$itemCount]->rssItemLink = $data;
	} elseif ($curTag == $itemDescKey) {
		$arItems[$itemCount]->rssItemDescription = $data;
		$itemCount++; // increment item counter
	}
}
These functions - startElement, endElement and characterData - are used to extract the data contained inside the XML document. To parse an XML document in PHP, you need to define three functions to handle what the parser encounters:

  1. the start of a tag - function startElement - e.g. <item>
  2. the end of a tag - function endElement - e.g. </item>
  3. the data within these tags - function characterData - e.g. This is a test item.

The way we track these events is by setting a global variable ($curTag) to a string containing all the parent tags separated by a caret (^). You could change this to any other character, like a comma or a colon, if you wish. This means that the $curTag variable could hold a value similar to ^RSS^CHANNEL^ITEM.

Once the parser has found, for example, the <ITEM> tag, we check whether $curTag is one we care about and extract the data via the characterData function. This function checks if $curTag contains something we want to extract and, if true, assigns the data to our variables. The characterData function extracts the general information inside the RSS as well as any items it comes across. For each item, it creates a new rssItem and inserts it into our $arItems array with the data it has found in the RSS.

// main program portion - start parser
$xml_parser = xml_parser_create();
// use our above functions when elements or data is found
xml_set_element_handler($xml_parser, "startElement", "endElement");
xml_set_character_data_handler($xml_parser, "characterData");

We now start the actual parsing. Luckily, PHP has a standard function for XML parsing. We can easily activate this by declaring a variable that holds the parser created by the PHP function xml_parser_create(). Once we have done this, we have access to the other XML functions built into PHP. The code tells PHP's XML parser to use our functions when it comes across a start tag, an end tag, or character data.

// open the RSS feed as specified in $rssFile
if (!($fp = fopen($rssFile,"r"))) {
	die ("could not open RSS for input");
}
// if successfully opened, parse the file
while ($data = fread($fp, 8192)) {
	if (!xml_parse($xml_parser, $data, feof($fp))) {
		die(sprintf("XML error: %s at line %d", xml_error_string(xml_get_error_code($xml_parser)), xml_get_current_line_number($xml_parser)));
	}
}
// free the parser to reclaim memory
xml_parser_free($xml_parser);
This portion opens the RSS file specified in $rssFile and, if successful, assigns the file handle to the variable $fp. We then parse through the data using the xml_parse() function until there is no more data to process. There is also some built-in error trapping should there be an error inside the RSS file. Once done, we close down the XML parser using the function xml_parser_free() so that we reclaim any used memory.

// write out the items
echo ("<html><head><meta name=\"description\" content=\"$feedDescription\"></head>");
echo ("<body bgcolor=\"#ffffff\" style=\"font:normal 13pt 'Trebuchet MS', Georgia, 'Times New Roman', Times;color:#333333;\">");
//echo ("Link to feed: <a href=\"$feedLink\">$feedTitle</a><br/><br/>");
for ($i=0;$i<count($arItems);$i++) {
	$trssItem = $arItems[$i];
	echo ("<a href=\"$trssItem->rssItemLink\"><strong>$trssItem->rssItemTitle</strong></a><br/>");
	if ($showDescriptions) {
		echo ($trssItem->rssItemDescription);
		echo ("<br/><br/>");
	}
}
echo ("</body></html>");

After we have successfully parsed the RSS file, we have our data inside our declared objects and variables, which makes formatting it on screen relatively simple. Essentially, you loop through your array with a for loop, from zero to the upper boundary of the array. You then assign the current array item to a temporary variable and print out each element. As each element in the array contains an object of your rssItem class, you access your data using something like $trssItem->rssItemTitle to get the item title. This continues until you have displayed all of your array's elements.

Let's now see the script in action by performing a search on eBay with the search term "brio train" and then parsing the returned RSS. Enjoy this script, and make your own copy for your site.

Note: If you are an absolute PHP beginner, a book I recommend for learning is PHP/MySQL Programming for the Absolute Beginner by Andy Harris.