Monday, December 24, 2012

Local SEO Tips for Business

• Create an account on a Google Places page and add your address:- You can create your Google Places account easily, but the most important thing is to write the Name, Address and Phone number (NAP) carefully. Google tracks your NAP and verifies it. Also add your Google Map listing.

• Create accounts with full address details on local citation sites:- Local citation sites such as Yellow Pages, Quikr.com and Justdial are also useful for local SEO, and the address should be the same as your Google Places address.

• Use schema tagging in your content (the address is a must):- Google and other search engines created a structured data standard known as schema markup. To make your content more effective, use these sites: http://schema.org/ , http://schema-creator.org/ (a sample markup sketch is given after this list).

• Use local SEO keyword phrases in your meta tags and content (not mandatory).

• Submit reviews:- Aim to have roughly one new review submitted every 7 days.

• Create a Google+ page and share it:- Adding your website and address to the G+ page is mandatory.

• Do off-page work regularly:- Do bookmarking and profile linking with the correct address.
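To illustrate the schema tagging tip above, here is a minimal sketch of LocalBusiness markup using schema.org microdata; the business name, address and phone number are placeholders and should be replaced with the same NAP details used on your Google Places page and citation listings.

<div itemscope itemtype="http://schema.org/LocalBusiness">
  <span itemprop="name">Example Business Name</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Example Street</span>,
    <span itemprop="addressLocality">Example City</span> -
    <span itemprop="postalCode">000000</span>
  </div>
  Phone: <span itemprop="telephone">+91-00000-00000</span>
</div>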

Saturday, December 22, 2012

How to start optimization (SEO) for a home page:-

Search Engine Optimization (SEO) does not have to be difficult; it just needs to be worked on regularly for maximum results.
  • Use the meta keywords tag; two or three keyword phrases are good. Always optimize for keyword phrases, NOT single keywords, though you can place a keyword within a keyword phrase. In any case, most people do not search for just one word any more.
  • Use the meta description tag; keep your most important keyword phrase near the beginning and make this tag a full sentence of approximately 164-170 characters.
  • Use the title tag; put your keyword phrases in it, separated by the pipe sign "|". Keep the most important keyword phrase left-most and keep the title within 70 characters (a sample head section is sketched below).
  • Do NOT bold or italicize keyword phrases in the first sentence on the page, but DO use your most important keyword phrase in that first sentence (just not as the first word).
  • Use your keyword phrases in your headings (H1, H2 and H3).
  • Start putting keyword phrases in bold in the second paragraph.
  • Use keywords in image ALT attributes.
  • Get inbound links from high-PR pages. You are not in control of how other sites link to you, but work hard to get them to use your keyword phrase. Most sites will link to your home page, so give them the most important keyword phrase you are optimizing your home page for.
  • When you are linking from any page back to your home page, use your most important keyword phrase in the link. When your home page links to any other page, use the keyword phrase that the other page is being optimized for in that link.
  • A sentence, according to Google, is three or more words starting with a capital letter and ending with a period or other punctuation. Stop words such as "I," "a," "the," and "of" do NOT count as one of your three words.
  • Keyword phrase density can be 2% to 6%; sometimes it can go up to 10%.
Using these tips, you can work toward a first-page ranking on Google. Source: http://seomoz.blogspot.in/
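Putting the meta tag tips above together, here is a minimal sketch of a home page head section; the site name, keyword phrases and description text are placeholders, not recommendations for any real site.

<head>
<title>Main Keyword Phrase | Second Keyword Phrase | Example Site</title>
<meta name="description" content="A single full sentence of roughly 164-170 characters that puts the main keyword phrase near the beginning and tells visitors what the home page offers them.">
<meta name="keywords" content="main keyword phrase, second keyword phrase, third keyword phrase">
</head>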

Friday, December 21, 2012

Difference between Google Panda and Penguin | Keyword Stuffing | Doorway Pages



What is the difference between Google Panda and Penguin?

S. No.   Penguin (hates)               Panda (hates)
1        Low-quality links             Thin content
2        Over-optimized anchor text    Content farms
3        Keyword stuffing              High bounce rate

Keyword Stuffing:- Keyword stuffing is the practice of inserting a large number of keywords into web content and meta tags to increase a webpage's ranking in the SERPs and get more traffic to the site.
How is it done?
        a)     Coloring text or links to match the page background
        b)    Hiding text or links behind other elements or off-screen
        c)     Using "noscript" tags to hold keywords that visitors never see (see the HTML sketch after this list)
Some other examples of keyword stuffing include:
  • Lists of phone numbers without substantial added value
  • Blocks of text listing cities and states a webpage is trying to rank for
  • Repeating the same words or phrases so often that it sounds unnatural, for example: "We sell cheap widgets. Our cheap widgets are the cheapest widgets, so buy your cheap widgets from our cheap widget store."
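As a rough illustration of the hiding techniques listed above, a keyword-stuffed page might bury blocks like these in its HTML; the keywords and styles are only examples, and search engines penalize this kind of markup.

<!-- Hidden keyword block: white text on a white background -->
<div style="color:#ffffff; background-color:#ffffff;">
cheap widgets best widgets buy widgets widget store widget deals
</div>

<!-- Keywords stuffed into a noscript tag that most visitors never see -->
<noscript>
cheap widgets best widgets buy widgets widget store widget deals
</noscript>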

Doorway pages:- Doorway pages are often easy to identify because they have been designed primarily for search engines, not for human beings. Sometimes a doorway page is copied from another high-ranking page, but this is likely to cause the search engine to detect the page as a duplicate and exclude it from its listings. Having multiple domain names targeted at specific regions is also considered a form of doorway page.

Cloaking:- Another form of doorway page uses a method called cloaking. The site shows one version of a page to visitors and a different version to crawlers, usually by means of server-side scripts.
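As a rough sketch of how this user-agent switching can be set up on the server (shown here with Apache mod_rewrite rules rather than a custom script; the file names are hypothetical, and this technique is penalized when detected):

RewriteEngine On
# Send Googlebot a different file than the one normal visitors receive
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^offer\.html$ /crawler-offer.html [L]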

Keyword Density:- Keyword density is the percentage of times a keyword or phrase appears on a web page compared to the total number of words on the page. For example, if a 500-word page uses a phrase 10 times, its keyword density is 10 / 500 = 2%.

Content farm- A content farm, also called a content mill, is a Web site whose content is written for search engine bots instead of human readers. Topics on a content farm are chosen specifically for their ability to rank highly in search engine results. A content farm generates revenue by placing ads on the page or selling contextual hyperlinks within the content. The more Internet traffic a page gets the more revenue the page can generate.
 
Content spinner- A content spinner is software that rewrites content to make it reusable as new content.

Thursday, December 20, 2012

Difference between Blogger and BlogSpot?

www.blogger.com and www.blogspot.com are different, but they lead to the same website when we visit them. When we visit www.blogspot.com, we are redirected to www.blogger.com, because the BlogSpot service can only be accessed through Blogger. The differences between Blogger and BlogSpot are given below…
  • Blogger (www.blogger.com) is a free publishing platform owned by the giant company Google. BlogSpot (www.blogspot.com) is a free domain service provider owned by the same giant: Google.
  • Blogger DOES NOT have to be used with BlogSpot, but BlogSpot must be used with Blogger. BlogSpot is a domain acquired by Google and then integrated with Blogger, which is why BlogSpot has to be used with Blogger while Blogger does not have to be used with BlogSpot.
  • Blogger can use the File Transfer Protocol (FTP) to publish websites.


Summary:- The difference between Blogger and BlogSpot is like the difference between .NET and ASP.NET, in which .NET is a platform (framework) and ASP.NET is a web technology built on it that is used to create web pages; similarly, Blogger is a platform and BlogSpot is a domain service provider used to create blogs or articles. If you want to use BlogSpot to create a blog, you have to register on Blogger.

Saturday, December 15, 2012

Robots.txt file, Robots Meta tag and X-Robots tag



What is a robots.txt file?

Robots.txt files tell search engine spiders how to interact with your content when crawling and indexing it. By default, search engines are greedy: they want to index as much high-quality information as they can, and they will assume that they can crawl everything unless you tell them otherwise. If you want search engines to index everything in your site, you do not need a robots.txt file (not even an empty one). The robots.txt file must exist in the root of the domain; search engine bots only check for this file there.

How do you create a robots.txt file?

The simplest robots.txt file uses two rules:

1.     User-Agent: the robot the following rule applies to, for example:

 Googlebot: crawls pages for the web index

 Googlebot-Mobile: crawls pages for the mobile index

 Googlebot-Image: crawls pages for the image index

 Mediapartners-Google: crawls pages to determine AdSense content (used only if you show   AdSense ads on your site)

2.     Disallow: the pages you want to block

A user-agent is a specific search engine robot and the Disallow line lists the pages you want to block.

 To block the entire site, use a forward slash.

Disallow: /

To block a directory, follow the directory name with a forward slash.

Disallow: /private_directory/

To block a page, list the page.

Disallow: /private_file.html

 Note:- Googlebot is the search bot software used by Google.

Can I allow pages?

Yes, Googlebot recognizes an extension to the robots.txt standard called Allow. This extension may not be recognized by all other search engine bots, so check with other search engines you're interested in to find out. The Allow line works exactly like the Disallow line. Simply list a directory or page you want to allow.

You may want to use Disallow and Allow together. For instance, to block access to all pages in a subdirectory except one, you could use the following entries:

User-Agent: Googlebot
Disallow: /folder1/

User-Agent: Googlebot
Allow: /folder1/myfile.html

To specify rules for all bots, use an asterisk (*) as the user-agent; it acts as a global match.

User-Agent:  *

Disallow: /folder1/

You can use an asterisk (*) to match a sequence of characters. For instance, to block access to all subdirectories that begin with private, you could use the following entry:

User-Agent: Googlebot
Disallow: /private*/
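Putting these rules together, a complete robots.txt might look like the sketch below; the directory and file names are placeholders (lines starting with # are comments).

# Keep every crawler out of a private directory
User-agent: *
Disallow: /private_directory/

# Give Googlebot an exception for one file inside it
User-agent: Googlebot
Allow: /private_directory/myfile.html
Disallow: /private_directory/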


Robots Meta tag

You can use a robots.txt analysis tool to check that your robots.txt file is working correctly.

When you block URLs from being indexed in Google via robots.txt, Google may still show those pages as URL-only listings in its search results. A better solution for completely blocking a particular page from the index is to use a robots noindex meta tag on a per-page basis. You can tell search engines not to index a page, or not to index it and not to follow its outbound links, by inserting one of the following code snippets in the HTML head of the document you do not want indexed.

    <meta name="robots" content="noindex">
    <meta name="robots" content="noindex,nofollow">
      <meta name="googlebot" content="noindex">
    Please note that if you block the search engines in robots.txt and via the meta tags then they may never get to crawl the page to see the meta tags, so the URL may still appear in the search results URL only.
    If your page is still appearing in results, it is probably because Search Engine have not crawled your site since you added the tag.

Crawl Delay

Search engines allow you to set crawl priorities. Google does not support the crawl delay command directly, but you can lower your crawl priority inside Google Webmaster Central.

For search engines that support it directly, such as Yahoo!'s Slurp crawler, the robots.txt crawl-delay rule looks like this:
User-agent: Slurp
Crawl-delay: 5
where the 5 is in seconds.

To block access to all URLs that include a question mark (?), you could use the following entry:

User-agent: *
Disallow: /*?

You can use the $ character to match the end of a URL:

User-agent: Googlebot
Disallow: /*.asp$
Sources:-  http://sitemaps.blogspot.in/2006/02/using-robotstxt-file.html

http://tools.seobook.com/robots-txt/

Using the X-Robots-Tag HTTP header

The X-Robots-Tag can be used as an element of the HTTP header response for a given URL. Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag. Here's an example of an HTTP response with an X-Robots-Tag instructing crawlers not to index a page:

HTTP/1.1 200 OK

Date: Tue, 25 May 2010 21:42:43 GMT

(…)

X-Robots-Tag: noindex

(…)

Multiple X-Robots-Tag headers can be combined within the HTTP response, or you can specify a comma-separated list of directives. Here's an example of an HTTP header response which has a noarchive X-Robots-Tag combined with an unavailable_after X-Robots-Tag.

HTTP/1.1 200 OK

Date: Tue, 25 May 2010 21:42:43 GMT

(…)

X-Robots-Tag: noarchive

X-Robots-Tag: unavailable_after: 25 Jun 2010 15:00:00 PST

(…)

The X-Robots-Tag may optionally specify a user-agent before the directives. For instance, the following set of X-Robots-Tag HTTP headers can be used to conditionally allow showing of a page in search results for different search engines:

HTTP/1.1 200 OK

Date: Tue, 25 May 2010 21:42:43 GMT

(…)

X-Robots-Tag: googlebot: nofollow

X-Robots-Tag: otherbot: noindex, nofollow

(…)

Directives specified without a user-agent are valid for all crawlers. Neither the directive names nor the specified values are case sensitive.
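In practice these headers are usually added through the web server configuration rather than by hand. As a minimal sketch (Apache with mod_headers enabled; the file pattern is just an example), a noindex header could be sent for every PDF file like this:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>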

Sources:- https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag

Friday, December 14, 2012

HTTP status codes

HTTP status codes are standard response codes given by website servers on the Internet as part of a Hypertext Transfer Protocol (HTTP) response. They include codes from IETF Internet standards. The status codes are part of the HTTP/1.1 standard (RFC 2616), which groups them into five classes:
  • 1xx Informational (100 - 102):- indicates a provisional response
  • 2xx Success (200 - 208 and 226):- indicates the action requested by the client was received and accepted
  • 3xx Redirection (300 - 308):- indicates redirections
  • 4xx Client Error (400 - 431 and 444, 449-451, 494-499):- indicates client errors
  • 5xx Server Error (500-511, 598 and 599):- indicates server failures
Note: - 5xx codes are for cases in which the server is aware that it has erred, whereas the 4xx codes are intended for cases in which the client seems to have erred.

503 Service Unavailable:-
The 503 Service Unavailable error can appear in any browser on any operating system. It is a server-side error, meaning the problem is usually with the website's server. It is possible that your computer has a problem that is causing the 503 error, but that is not likely.
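For illustration, the raw response headers for a 503 error might look like the following; the Retry-After value (in seconds) is optional, and the number here is only an example:

HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html
(…)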

504 Gateway Timeout:-
The 504 Gateway Timeout error occurs when one server does not receive a timely response from another server it was accessing while attempting to load the web page or fulfil another request from the browser. The 502 Bad Gateway error is similar to this one.

403 Forbidden Errors:-
The 403 Forbidden error is an HTTP status code that means access to the page or resource you were trying to reach is absolutely forbidden for some reason. In other words, "Go away and don't come back here."
By far the most common reason for this error is that directory browsing is forbidden for the website. Most websites want you to navigate using the URLs in their web pages; they do not often allow you to browse the file directory structure of the site.
For example, requesting a bare directory such as /accounts/grpb/B1394343/ on the CheckUpDown website fails with a 403 error saying "Forbidden: You don't have permission to access /accounts/grpb/B1394343/ on this server". This is because that site deliberately does not allow directory browsing - you have to navigate from one specific web page to another using the hyperlinks in those pages. This is true for most websites on the Internet - their web server has "Allow directory browsing" set to OFF.
A robots.txt file can also be responsible for this type of error. The HTTP Error 401 Unauthorized is a similar problem.
