XML Site-Maps

Are XML Site-Maps Good for Your SEO?

Your Website

Every website owner is concerned that once search engines locate their website, they find every single one of its web pages and add those pages to their index. It’s crucial for a website owner that the search engine picks up the keywords and page descriptions loaded into the page content.

It is the existence of these keywords within a search engine’s index that helps get the web page displayed on the search engine’s results page. This happens whenever someone does a search using the same keyword(s) in the search engine’s search box.

It’s every website owner’s responsibility to make it as simple as possible for search engine spiders to locate and index every single webpage on their website, whenever search engine spiders visit the website.

A tried and tested way of doing this is to include a site map within the website. Normally the site map is accessed via a menu item, so both site visitors and search engine spiders can locate it easily enough. So let’s take a look at what a site map is, how many types of site map there are, and how a site map is created and attached to a website.

What is a site map?

A sitemap is a presentation ( a page or file ) that lists all the different pages contained within a website.

How many types of site maps are there?

There are two types of sitemaps: HTML ( HyperText Markup Language ) based and XML ( eXtensible Markup Language ) based.

The HTML based site map

An HTML sitemap is a graphical presentation that displays hierarchically grouped, lists of hyperlinks belonging to all the pages of a website. It’s primarily designed for humans.

By adding an HTML sitemap to your website, site visitors can easily navigate through the website. Additionally, a sitemap of this sort helps in – Sitemap based SEO – because it allows search engine spiders to easily find all the hyperlinks to every page on the website, thus avoiding – missed pages.

Missed pages are simply pages that will not be in the search engine’s index. Therefore these pages never show up for those who use search engines ( like Google, Bing and Yahoo ) to locate appropriate pages to visit.

The XML based site map

An XML sitemap is basically a list of the different URLs of a website, but the list is created using very specific syntax ( i.e. XML ) that all search engine spiders are trained to understand with ease.

Using an XML site map for – Sitemap based SEO – accelerates search engine indexing because an XML site map informs search engine spiders about the different URLs of the website in a language they understand.

Most search engine spiders are trained to look for and identify whether an XML based site map exists on a website. If it does, it’s accessed and used immediately by the spider.

A search engine’s spider does not have to visit each page of the website and navigate through the links on the page to understand the website’s architecture. The spider only has to locate the XML based site map to see and understand the entire website architecture.

Hence a website that uses an XML based site map for – Sitemap based SEO – would be indexed faster and more accurately, and may well rank better in search engines because its pages have been more completely indexed.

The Importance of Registering Your Sitemap with Google

Google actually assists webmasters in multiple ways. In the past, – Sitemap based SEO – was more of a guessing game. Then Google launched a free service called Google Webmaster Tools (now Google Search Console). This service provides valuable data about your site. It reports a ton of useful information including:

  • The last time the Googlebot paid a visit to your website
  • The keywords ( keyphrases ) people use in searches to locate your site
  • Problems in your website that need fixing

And a whole lot more.

If you have not created a Google Webmaster Tools account for your website, you really are missing out on a whole lot of very important information about your website, offered freely by the biggest search engine in use on the Internet.

One of the Google Webmaster Tools data entry forms permits a website owner ( webmaster ) to enter the URL that points to their website’s XML site map. This means that all webmasters have the power to inform Google exactly where their XML site map is to be found. This goes a really long way in ensuring that your website is found and indexed by Google’s spiders.

What’s really nice about using an XML based site map on your website is that search engine spiders other than Google’s also access and use the same XML site map. There are a lot of tools available on the Internet that will create an XML based site map for a website – for free. Do a search in Google and you will be pleasantly surprised at how many there are.

All of these tools essentially do the same thing. Once you’ve given them a website URL, they thoroughly scan the website and deliver a properly formatted XML file, normally named sitemap.xml, as the output of this exercise. This file must be saved to your local computer. Once done, use FTP to place sitemap.xml in the root directory of your website, which is almost always /public_html.

If you are constantly making changes to your website, i.e. adding pages or deleting old pages, then after every such change you need to go back to the XML based sitemap creation tool, re-create the file sitemap.xml and overwrite the old sitemap.xml file in your website’s root directory. This is because the website architecture has changed, and hence the contents of the current sitemap.xml are obsolete.
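If your page list is available programmatically, regeneration can even be scripted. Below is a minimal Python sketch, assuming your page URLs are held in a plain list; the URLs, the lastmod date, and the output file name are illustrative, not taken from any real site.

```python
# Sketch: regenerate sitemap.xml from a list of page URLs.
# URLs and lastmod date below are illustrative assumptions.
import xml.etree.ElementTree as ET

def build_sitemap(urls, lastmod=None):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for page in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page  # ElementTree escapes & < > for us
        if lastmod:
            ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

xml_text = build_sitemap(
    ["http://www.example.com/", "http://www.example.com/about.html"],
    lastmod="2012-01-01")

# Overwrite the old sitemap.xml so the file matches the current site structure.
with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n' + xml_text)
```

Running this after every structural change (or from a scheduled job) keeps the sitemap from going stale.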

Using an obsolete sitemap.xml file on a website hurts the website’s ranking in search engines. This is because one or more of the URLs contained within sitemap.xml could point to a nonexistent resource on the website.  Hence, it’s pretty important to update sitemap.xml immediately after making any structural change in the website.  Not to do so is unwise.

According to Google, an XML site map is specifically helpful in the following cases:

  • Your website contains dynamic content.
  • Your website is new and doesn’t have many links that point to it. Since spiders follow inbound and outbound links during the crawling process, your site may not be fully scanned if it hardly has any links that lead to it.
  • Your website has pages that Googlebot cannot easily discover, e.g. pages with rich AJAX or images.
  • Your site contains a huge archive of webpages that are not well interlinked to each other, or not linked at all.

Do take a look at diagram 1 and diagram 2 below.

Ivan Bayross
Open source tutorials | open source training

Sitemaps XML format

Jump to:
XML tag definitions
Entity escaping
Using Sitemap index files
Other Sitemap formats
Sitemap file location
Validating your Sitemap
Extending the Sitemaps protocol
Informing search engine crawlers

This document describes the XML schema for the Sitemap protocol.

The Sitemap protocol format consists of XML tags. All data values in a Sitemap must be entity-escaped. The file itself must be UTF-8 encoded.

The Sitemap must:

  • Begin with an opening <urlset> tag and end with a closing </urlset> tag.
  • Specify the namespace (protocol standard) within the <urlset> tag.
  • Include a <url> entry for each URL, as a parent XML tag.
  • Include a <loc> child entry for each <url> parent tag.

All other tags are optional. Support for these optional tags may vary among search engines. Refer to each search engine’s documentation for details.
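As an illustration, the four structural rules above can be checked mechanically. The following Python sketch (standard library only, and not a substitute for full schema validation) verifies the root tag, its namespace, and that every <url> carries a <loc>; the function name and sample document are my own.

```python
# Sketch: minimal structural check of a Sitemap against the rules above.
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def check_sitemap(xml_text):
    root = ET.fromstring(xml_text)
    # Root must be <urlset> in the protocol namespace.
    if root.tag != NS + "urlset":
        return False
    # Every <url> entry must contain a <loc> child.
    urls = root.findall(NS + "url")
    return bool(urls) and all(u.find(NS + "loc") is not None for u in urls)

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
</urlset>"""
print(check_sitemap(sample))
```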

Also, all URLs in a Sitemap must be from a single host, such as www.example.com or store.example.com. For further details, refer to the Sitemap file location section below.

Sample XML Sitemap

The following example shows a Sitemap that contains just one URL and uses all the optional tags ( <lastmod>, <changefreq> and <priority> ).

<?xml version="1.0" encoding="UTF-8"?>

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>http://www.example.com/</loc>
      <lastmod>2005-01-01</lastmod>
      <changefreq>monthly</changefreq>
      <priority>0.8</priority>
   </url>
</urlset>

Also see our example with multiple URLs.

XML tag definitions

The available XML tags are described below.

Attribute Description
<urlset> required Encapsulates the file and references the current protocol standard.
<url> required Parent tag for each URL entry. The remaining tags are children of this tag.
<loc> required URL of the page. This URL must begin with the protocol (such as http) and end with a trailing slash, if your web server requires it. This value must be less than 2,048 characters.
<lastmod> optional The date of last modification of the file. This date should be in W3C Datetime format. This format allows you to omit the time portion, if desired, and use YYYY-MM-DD. Note that this tag is separate from the If-Modified-Since (304) header the server can return, and search engines may use the information from both sources differently.
<changefreq> optional How frequently the page is likely to change. This value provides general information to search engines and may not correlate exactly to how often they crawl the page. Valid values are:

  • always
  • hourly
  • daily
  • weekly
  • monthly
  • yearly
  • never

The value “always” should be used to describe documents that change each time they are accessed. The value “never” should be used to describe archived URLs.

Please note that the value of this tag is considered a hint and not a command. Even though search engine crawlers may consider this information when making decisions, they may crawl pages marked “hourly” less frequently than that, and they may crawl pages marked “yearly” more frequently than that. Crawlers may periodically crawl pages marked “never” so that they can handle unexpected changes to those pages.

<priority> optional The priority of this URL relative to other URLs on your site. Valid values range from 0.0 to 1.0. This value does not affect how your pages are compared to pages on other sites—it only lets the search engines know which pages you deem most important for the crawlers. The default priority of a page is 0.5. Please note that the priority you assign to a page is not likely to influence the position of your URLs in a search engine’s result pages. Search engines may use this information when selecting between URLs on the same site, so you can use this tag to increase the likelihood that your most important pages are present in a search index. Also, please note that assigning a high priority to all of the URLs on your site is not likely to help you. Since the priority is relative, it is only used to select between URLs on your site.


Entity escaping

Your Sitemap file must be UTF-8 encoded (you can generally do this when you save the file). As with all XML files, any data values (including URLs) must use entity escape codes for the characters listed in the table below.

Character          Escape Code
Ampersand     &    &amp;
Single Quote  '    &apos;
Double Quote  "    &quot;
Greater Than  >    &gt;
Less Than     <    &lt;

In addition, all URLs (including the URL of your Sitemap) must be URL-escaped and encoded for readability by the web server on which they are located. However, if you are using any sort of script, tool, or log file to generate your URLs (anything except typing them in by hand), this is usually already done for you. Please check to make sure that your URLs follow the RFC-3986 standard for URIs, the RFC-3987 standard for IRIs, and the XML standard.
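For scripted Sitemap generation, the escaping described above can be done with Python’s standard library; the helper name below is my own. `escape()` handles `&`, `<` and `>` on its own, and the extra entity map covers both quote characters from the table above.

```python
# Sketch: entity-escape a data value before writing it into a Sitemap.
from xml.sax.saxutils import escape

def escape_for_sitemap(value):
    # escape() replaces & < > ; the entities dict adds both quote characters.
    return escape(value, {"'": "&apos;", '"': "&quot;"})

loc = "http://www.example.com/catalog?item=12&desc=vacation_hawaii"
print(escape_for_sitemap(loc))
# → http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii
```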

Below is an example of a URL that uses a non-ASCII character (ü), as well as a character that requires entity escaping (&):

http://www.example.com/ümlat.php&q=name

Below is that same URL, ISO-8859-1 encoded (for hosting on a server that uses that encoding) and URL escaped:

http://www.example.com/%FCmlat.php&q=name

Below is that same URL, UTF-8 encoded (for hosting on a server that uses that encoding) and URL escaped:

http://www.example.com/%C3%BCmlat.php&q=name

Below is that same URL, but also entity escaped:

http://www.example.com/%C3%BCmlat.php&amp;q=name

Sample XML Sitemap

The following example shows a Sitemap in XML format. The Sitemap in the example contains a small number of URLs, each using a different set of optional parameters.

<?xml version="1.0" encoding="UTF-8"?>

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>http://www.example.com/</loc>
      <lastmod>2005-01-01</lastmod>
      <changefreq>monthly</changefreq>
      <priority>0.8</priority>
   </url>
   <url>
      <loc>http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii</loc>
      <changefreq>weekly</changefreq>
   </url>
   <url>
      <loc>http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand</loc>
      <lastmod>2004-12-23</lastmod>
      <changefreq>weekly</changefreq>
   </url>
   <url>
      <loc>http://www.example.com/catalog?item=74&amp;desc=vacation_newfoundland</loc>
      <lastmod>2004-12-23T18:00:15+00:00</lastmod>
      <priority>0.3</priority>
   </url>
   <url>
      <loc>http://www.example.com/catalog?item=83&amp;desc=vacation_usa</loc>
      <lastmod>2004-11-23</lastmod>
   </url>
</urlset>

Using Sitemap index files (to group multiple sitemap files)

You can provide multiple Sitemap files, but each Sitemap file that you provide must have no more than 50,000 URLs and must be no larger than 10MB (10,485,760 bytes). If you would like, you may compress your Sitemap files using gzip to reduce your bandwidth requirement; however, the Sitemap file, once uncompressed, must be no larger than 10MB. If you want to list more than 50,000 URLs, you must create multiple Sitemap files.

If you do provide multiple Sitemaps, you should then list each Sitemap file in a Sitemap index file. Sitemap index files may not list more than 50,000 Sitemaps and must be no larger than 10MB (10,485,760 bytes) and can be compressed. You can have more than one Sitemap index file. The XML format of a Sitemap index file is very similar to the XML format of a Sitemap file.

The Sitemap index file must:

  • Begin with an opening <sitemapindex> tag and end with a closing </sitemapindex> tag.
  • Include a <sitemap> entry for each Sitemap as a parent XML tag.
  • Include a <loc> child entry for each <sitemap> parent tag.

The optional <lastmod> tag is also available for Sitemap index files.

Note: A Sitemap index file can only specify Sitemaps that are found on the same site as the Sitemap index file. For example, http://www.yoursite.com/sitemap_index.xml can include Sitemaps on http://www.yoursite.com but not on http://www.example.com or http://yourhost.yoursite.com. As with Sitemaps, your Sitemap index file must be UTF-8 encoded.
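The splitting arithmetic described above is easy to automate. Here is a minimal Python sketch; the chunking threshold comes from the 50,000-URL limit, while the file names and base URL are purely illustrative assumptions.

```python
# Sketch: split a large URL list into Sitemap-sized chunks and build an index.
MAX_URLS = 50000  # per-Sitemap limit from the protocol

def chunk(urls, size=MAX_URLS):
    # Slice the flat URL list into consecutive groups of at most `size`.
    return [urls[i:i + size] for i in range(0, len(urls), size)]

def index_xml(sitemap_urls):
    # Build a minimal Sitemap index listing each generated Sitemap file.
    entries = "".join(
        "  <sitemap><loc>%s</loc></sitemap>\n" % u for u in sitemap_urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            + entries + "</sitemapindex>\n")

urls = ["http://www.example.com/page%d" % i for i in range(120000)]
parts = chunk(urls)  # 50,000 + 50,000 + 20,000 URLs
names = ["http://www.example.com/sitemap%d.xml" % (i + 1)
         for i in range(len(parts))]
print(len(parts))  # → 3
```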

Sample XML Sitemap Index

The following example shows a Sitemap index that lists two Sitemaps:

<?xml version="1.0" encoding="UTF-8"?>

<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <sitemap>
      <loc>http://www.example.com/sitemap1.xml.gz</loc>
      <lastmod>2004-10-01T18:23:17+00:00</lastmod>
   </sitemap>
   <sitemap>
      <loc>http://www.example.com/sitemap2.xml.gz</loc>
      <lastmod>2005-01-01</lastmod>
   </sitemap>
</sitemapindex>


Note: Sitemap URLs, like all values in your XML files, must be entity escaped.

Sitemap Index XML Tag Definitions

Attribute Description
<sitemapindex> required Encapsulates information about all of the Sitemaps in the file.
<sitemap> required Encapsulates information about an individual Sitemap.
<loc> required Identifies the location of the Sitemap. This location can be a Sitemap, an Atom file, an RSS file or a simple text file.
<lastmod> optional Identifies the time that the corresponding Sitemap file was modified. It does not correspond to the time that any of the pages listed in that Sitemap were changed. The value for the lastmod tag should be in W3C Datetime format. By providing the last modification timestamp, you enable search engine crawlers to retrieve only a subset of the Sitemaps in the index, i.e. a crawler may only retrieve Sitemaps that were modified since a certain date. This incremental Sitemap fetching mechanism allows for the rapid discovery of new URLs on very large sites.


Other Sitemap formats

The Sitemap protocol enables you to provide details about your pages to search engines, and we encourage its use since you can provide additional information about site pages beyond just the URLs. However, in addition to the XML protocol, we support RSS feeds and text files, which provide more limited information.

Syndication feed

You can provide an RSS (Really Simple Syndication) 2.0 or Atom 0.3 or 1.0 feed. Generally, you would use this format only if your site already has a syndication feed. Note that this method may not let search engines know about all the URLs in your site, since the feed may only provide information on recent URLs, although search engines can still use that information to find out about other pages on your site during their normal crawling processes by following links inside pages in the feed. Make sure that the feed is located in the highest-level directory you want search engines to crawl. Search engines extract the information from the feed as follows:

  • <link> field – indicates the URL
  • modified date field (the <pubDate> field for RSS feeds and the <updated> date for Atom feeds) – indicates when each URL was last modified. Use of the modified date field is optional.

Text file

You can provide a simple text file that contains one URL per line. The text file must follow these guidelines:

  • The text file must have one URL per line. The URLs cannot contain embedded new lines.
  • You must fully specify URLs, including the http:// prefix.
  • Each text file can contain a maximum of 50,000 URLs and must be no larger than 10MB (10,485,760 bytes). If your site includes more than 50,000 URLs, you can separate the list into multiple text files and add each one separately.
  • The text file must use UTF-8 encoding. You can specify this when you save the file (for instance, in Notepad, this is listed in the Encoding menu of the Save As dialog box).
  • The text file should contain no information other than the list of URLs.
  • The text file should contain no header or footer information.
  • If you would like, you may compress your Sitemap text file using gzip to reduce your bandwidth requirement.
  • You can name the text file anything you wish. Please check to make sure that your URLs follow the RFC-3986 standard for URIs, the RFC-3987 standard for IRIs
  • You should upload the text file to the highest-level directory you want search engines to crawl and make sure that you don’t list URLs in the text file that are located in a higher-level directory.

Sample text file entries are shown below.

http://www.example.com/catalog?item=1
http://www.example.com/catalog?item=11


Sitemap file location

The location of a Sitemap file determines the set of URLs that can be included in that Sitemap. A Sitemap file located at http://example.com/catalog/sitemap.xml can include any URLs starting with http://example.com/catalog/ but cannot include URLs starting with http://example.com/images/.

If you have the permission to change http://example.org/path/sitemap.xml, it is assumed that you also have permission to provide information for URLs with the prefix http://example.org/path/. Examples of URLs considered valid in http://example.com/catalog/sitemap.xml include:

http://example.com/catalog/show?item=23
http://example.com/catalog/show?item=233&user=3453

URLs not considered valid in http://example.com/catalog/sitemap.xml include:

http://example.com/image/show?item=23
http://example.com/image/show?item=233&user=3453
https://example.com/catalog/page1.php

Note that this means that all URLs listed in the Sitemap must use the same protocol (http, in this example) and reside on the same host as the Sitemap. For instance, if the Sitemap is located at http://www.example.com/sitemap.xml, it can’t include URLs from http://subdomain.example.com.
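This single-host, single-protocol rule is easy to enforce in a generator script. Below is a small Python sketch (the function name is my own) that compares scheme and host, including any port number, using only the standard library.

```python
# Sketch: check that a page URL shares the Sitemap's scheme and host.
from urllib.parse import urlsplit

def same_scope(sitemap_url, page_url):
    s, p = urlsplit(sitemap_url), urlsplit(page_url)
    # netloc includes the port, so http://host:100 and http://host differ.
    return (s.scheme, s.netloc) == (p.scheme, p.netloc)

print(same_scope("http://www.example.com/sitemap.xml",
                 "http://www.example.com/catalog/show?item=23"))   # → True
print(same_scope("http://www.example.com/sitemap.xml",
                 "http://subdomain.example.com/page.html"))        # → False
```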

URLs that are not considered valid are dropped from further consideration. It is strongly recommended that you place your Sitemap at the root directory of your web server. For example, if your web server is at example.com, then your Sitemap index file would be at http://example.com/sitemap.xml. In certain cases, you may need to produce different Sitemaps for different paths (e.g., if security permissions in your organization compartmentalize write access to different directories).

If you submit a Sitemap using a path with a port number, you must include that port number as part of the path in each URL listed in the Sitemap file. For instance, if your Sitemap is located at http://www.example.com:100/sitemap.xml, then each URL listed in the Sitemap must begin with http://www.example.com:100.

Sitemaps & Cross Submits

To submit Sitemaps for multiple hosts from a single host, you need to “prove” ownership of the host(s) for which URLs are being submitted in a Sitemap. Here’s an example. Let’s say that you want to submit Sitemaps for 3 hosts:

www.host1.com with Sitemap file sitemap-host1.xml

www.host2.com with Sitemap file sitemap-host2.xml

www.host3.com with Sitemap file sitemap-host3.xml

Moreover, you want to place all three Sitemaps on a single host: www.sitemaphost.com. So the Sitemap URLs will be:

http://www.sitemaphost.com/sitemap-host1.xml
http://www.sitemaphost.com/sitemap-host2.xml
http://www.sitemaphost.com/sitemap-host3.xml
By default, this will result in a “cross submission” error since you are trying to submit URLs for www.host1.com through a Sitemap that is hosted on www.sitemaphost.com (and same for the other two hosts). One way to avoid the error is to prove that you own (i.e. have the authority to modify files) www.host1.com. You can do this by modifying the robots.txt file on www.host1.com to point to the Sitemap on www.sitemaphost.com.

In this example, the robots.txt file at http://www.host1.com/robots.txt would contain the line “Sitemap: http://www.sitemaphost.com/sitemap-host1.xml”. By modifying the robots.txt file on www.host1.com and having it point to the Sitemap on www.sitemaphost.com, you have implicitly proven that you own www.host1.com. In other words, whoever controls the robots.txt file on www.host1.com trusts the Sitemap at http://www.sitemaphost.com/sitemap-host1.xml to contain URLs for www.host1.com. The same process can be repeated for the other two hosts.

Now you can submit the Sitemaps on www.sitemaphost.com.

When a particular host’s robots.txt, say http://www.host1.com/robots.txt, points to a Sitemap or a Sitemap index on another host, it is expected that for each of the target Sitemaps, such as http://www.sitemaphost.com/sitemap-host1.xml, all the URLs belong to the host pointing to it. This is because, as noted earlier, a Sitemap is expected to have URLs from a single host only.


Validating your Sitemap

The following XML schemas define the elements and attributes that can appear in your Sitemap file. You can download this schema from the links below:

For Sitemaps: http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd
For Sitemap index files: http://www.sitemaps.org/schemas/sitemap/0.9/siteindex.xsd

There are a number of tools available on the web to help you validate the structure of your Sitemap or Sitemap index file based on these schemas.


In order to validate your Sitemap or Sitemap index file against a schema, the XML file will need additional headers as shown below.


Sitemap file:

<?xml version='1.0' encoding='UTF-8'?>

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
   <url>
      ...
   </url>
</urlset>

Sitemap index file:

<?xml version='1.0' encoding='UTF-8'?>

<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/siteindex.xsd">
   <sitemap>
      ...
   </sitemap>
</sitemapindex>


Extending the Sitemaps protocol

You can extend the Sitemaps protocol using your own namespace. Simply specify this namespace in the root element. For example:

<?xml version='1.0' encoding='UTF-8'?>

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd"
         xmlns:example="http://www.example.com/schemas/example_schema"> <!-- namespace extension -->
   <url>
      <loc>http://www.example.com/</loc>
      <example:example_tag>
         ...
      </example:example_tag>
   </url>
</urlset>


Informing search engine crawlers

Once you have created the Sitemap file and placed it on your webserver, you need to inform the search engines that support this protocol of its location. You can do this by:

  • Submitting your Sitemap via the search engine’s submission interface
  • Specifying the Sitemap location in your robots.txt file
  • Submitting your Sitemap via an HTTP request

The search engines can then retrieve your Sitemap and make the URLs available to their crawlers.

Submitting your Sitemap via the search engine’s submission interface

To submit your Sitemap directly to a search engine, which will enable you to receive status information and any processing errors, refer to each search engine’s documentation.

Specifying the Sitemap location in your robots.txt file

You can specify the location of the Sitemap using a robots.txt file. To do this, simply add the following line including the full URL to the sitemap:

Sitemap: http://www.example.com/sitemap.xml

This directive is independent of the user-agent line, so it doesn’t matter where you place it in your file. If you have a Sitemap index file, you can include the location of just that file. You don’t need to list each individual Sitemap listed in the index file.

You can specify more than one Sitemap file per robots.txt file.

Sitemap: http://www.example.com/sitemap-host1.xml

Sitemap: http://www.example.com/sitemap-host2.xml
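On the consuming side, a crawler (or your own tooling) can pick these directives out of robots.txt with a few lines of Python; the sample robots.txt body below is an assumption for illustration.

```python
# Sketch: extract Sitemap directives from a robots.txt body.
robots_txt = """User-agent: *
Disallow: /private/
Sitemap: http://www.example.com/sitemap-host1.xml
Sitemap: http://www.example.com/sitemap-host2.xml
"""

def sitemap_urls(text):
    urls = []
    for line in text.splitlines():
        # The directive is independent of user-agent groups; match it anywhere.
        if line.lower().startswith("sitemap:"):
            # Split only on the first colon so the URL's own "://" survives.
            urls.append(line.split(":", 1)[1].strip())
    return urls

print(sitemap_urls(robots_txt))
```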

Submitting your Sitemap via an HTTP request

To submit your Sitemap using an HTTP request (replace <searchengine_URL> with the URL provided by the search engine), issue your request to the following URL:

<searchengine_URL>/ping?sitemap=sitemap_url

For example, if your Sitemap is located at http://www.example.com/sitemap.gz, your URL will become:

<searchengine_URL>/ping?sitemap=http://www.example.com/sitemap.gz

URL encode everything after the /ping?sitemap=:

<searchengine_URL>/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.gz
You can issue the HTTP request using wget, curl, or another mechanism of your choosing. A successful request will return an HTTP 200 response code; if you receive a different response, you should resubmit your request. The HTTP 200 response code only indicates that the search engine has received your Sitemap, not that the Sitemap itself or the URLs contained in it were valid. Since you should resubmit your Sitemap whenever it changes, an easy way to keep it current is to set up an automated job that generates and submits Sitemaps on a regular basis.
Note: If you are providing a Sitemap index file, you only need to issue one HTTP request that includes the location of the Sitemap index file; you do not need to issue individual requests for each Sitemap listed in the index.
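The URL-encoding step above can be scripted. In this Python sketch, SEARCH_ENGINE_URL is a hypothetical placeholder, not a real ping endpoint; substitute whatever endpoint your search engine documents.

```python
# Sketch: build the ping URL for an HTTP Sitemap submission.
from urllib.parse import quote

SEARCH_ENGINE_URL = "http://searchengine.example.com"  # hypothetical endpoint
sitemap = "http://www.example.com/sitemap.gz"

# Percent-encode everything after /ping?sitemap= (safe="" escapes '/' and ':').
ping = SEARCH_ENGINE_URL + "/ping?sitemap=" + quote(sitemap, safe="")
print(ping)
```

The resulting URL could then be fetched with wget, curl, or urllib.request, checking for an HTTP 200 status as described above.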


Excluding content

The Sitemaps protocol enables you to let search engines know what content you would like indexed. To tell search engines the content you don’t want indexed, use a robots.txt file or robots meta tag. See robotstxt.org for more information on how to exclude content from search engines.

