How Google Determines Search Results
A good way to understand how Google indexes web pages is to think of the web as a very large book with an impressive index that tells you where everything is located. When a user enters a search query, Google checks its index of web pages and returns the most relevant results. What Googlebot can see on a page drives how Google crawls, indexes, and serves it.
Three key processes deliver Google search results to the user:
•Crawling: Does Google know about your site? Can Google find it?
•Indexing: Can Google index your site?
•Serving: Does the site have good, useful content that is relevant to the user's search?
Crawling: Crawling is the process by which automated software known as Googlebot discovers new and updated web pages to add to the Google index. Google uses a huge set of computers to fetch (or "crawl") billions of pages on the web. Googlebot is Google's spider: it travels the web finding and indexing web pages. Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site.
Googlebot starts the crawl process with a list of URLs from previous crawl sessions, augmented with sitemap data provided by webmasters. As Googlebot visits each page, it extracts the links on that page and adds them to its list of pages to crawl. New sites, updated sites, and dead links are all noted and used to update the Google index.
One thing to remember: you cannot pay Google to crawl your website more frequently; crawling is not part of Google's revenue-generating services.
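To make the crawl loop concrete, here is a minimal Python sketch of the general technique: keep a frontier of URLs, fetch each page, extract its links, and queue any new ones. This is an illustration only, not Google's actual implementation, and the seed URL in the usage note below is hypothetical.

# A sketch of the crawl frontier described above: start from seed URLs,
# fetch each page, extract its links, and queue new ones for crawling.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    # Collects the href value of every <a> tag on a page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=10):
    frontier = deque(seed_urls)   # URLs still to be crawled
    seen = set(seed_urls)         # avoid crawling the same URL twice
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue              # a dead link: note it and move on
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return seen

Calling crawl(["https://example.com"]) would walk outward from that page exactly as the list-of-URLs-plus-discovered-links process above describes. A real crawler also adds politeness delays, robots.txt checks, and crawl prioritization, which this sketch omits.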
Indexing: Googlebot processes each page it crawls to compile a massive index of all the words it sees and their location on each page. In addition, Google processes information in key content tags and attributes, such as title tags and ALT attributes. Googlebot cannot process every content type; for example, it cannot process the content of some rich media files or dynamically generated pages.
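The "word and its location" structure described above is essentially an inverted index. Below is a toy Python version, assuming simple whitespace tokenization; the URLs and page texts are made up for illustration.

# A toy inverted index: maps each word to the (page, position) pairs
# where it occurs, mirroring the word-to-location structure above.
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping URL -> page text."""
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index

pages = {  # hypothetical page contents
    "https://example.com/a": "google crawls and indexes pages",
    "https://example.com/b": "pages are served from the index",
}
index = build_index(pages)
print(index["pages"])  # [('https://example.com/a', 4), ('https://example.com/b', 0)]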
Serving: When a user enters a search term, Google searches its index for matching pages and returns the results it considers most relevant to the user. Relevance is determined by more than 200 factors, one of which is the PageRank of a given page. PageRank is a measure of a page's importance based on the incoming links it receives from other pages. Google works hard to deliver the best results for every search by identifying spam links and other practices that have a negative impact on search results.
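The classic published PageRank formula is PR(p) = (1 - d)/N + d * sum of PR(q)/L(q) over all pages q that link to p, where d is a damping factor, N is the number of pages, and L(q) is the number of outgoing links on q. Here is a small power-iteration sketch of that formula; the damping factor d = 0.85 and the three-page link graph are illustrative assumptions, not Google's live ranking system.

# Power iteration over the classic PageRank formula:
# PR(p) = (1 - d)/N + d * sum(PR(q) / L(q)) over pages q linking to p.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}        # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1 - d) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:
                continue                      # dangling page: ignored in this toy version
            share = d * rank[p] / len(outgoing)
            for q in outgoing:
                new_rank[q] += share          # each link passes on a share of rank
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # hypothetical link graph
print(pagerank(graph))  # "c" scores highest: it receives the most incoming rank

This is exactly the "importance based on incoming links" idea: a page linked to by many (or by important) pages accumulates more rank than one that is rarely linked.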
Before a website can rank well in search results, Google must be able to index it properly, so make sure Googlebot can crawl and index your site. Broken and dead links can have a negative impact on a site's rankings.