They are programmed to ignore file types and pages that do not meet certain criteria. At the heart of a search engine, I think, is the mechanism, or algorithm, by which it indexes sites. Knowledge: search engines know every detail of how sites are indexed, and the knowledge they hold is far ahead of that of users, webmasters, and SEOs combined. A search engine commands many methods of sorting and reporting, and naturally it also has its own specific set of tricks and techniques.
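The filtering criteria mentioned above can be sketched as a simple URL check. This is a minimal illustration, not any real engine's policy: the extension list and the function name are my own assumptions.

```python
# Hypothetical sketch: how a crawler might skip file types that do not
# meet its indexing criteria. The extension list is illustrative only.
from urllib.parse import urlparse

IGNORED_EXTENSIONS = {".exe", ".zip", ".iso", ".dmg", ".bin"}

def should_index(url: str) -> bool:
    """Return True unless the URL's path ends in an ignored extension."""
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in IGNORED_EXTENSIONS)

print(should_index("http://example.com/page.html"))  # True
print(should_index("http://example.com/setup.exe"))  # False
```

A real crawler would combine many such criteria (robots.txt rules, content size, MIME type), but each one reduces to a yes/no gate like this before a page is fetched or indexed.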
As the search robot travels the Internet and indexes web pages, it sends the data it has collected back to the data center in parallel. There the data is processed according to the engine's algorithms, and spam filters weed out the unnecessary. Just as we analyze information from a newspaper article according to our own view of the world, a search engine processes and ranks data in strict accordance with its own rules and its understanding of the Internet.

Study: a search engine ranks web pages according to its view of how the Internet functions, but those rules change constantly, and the search algorithms change with them. This is exactly where the search engine's mechanism of adaptation, or learning, comes in. At the same time, beyond the ability to review pages, a search engine must be able to detect and punish attempts at artificial promotion; this works in favor of honest webmasters and optimizers. Two areas in which search engines especially often change their algorithms are determining how relevant a page's content is to the link through which it was found, and detecting information stored in newer data formats, for example databases, Flash, and so on.
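The pipeline described above (crawler collects pages, the data center filters out spam, the remainder is ranked by the engine's own rules) can be sketched in a few lines. Everything here is an assumption for illustration: the spam markers, the page structure, and the toy relevance score (query-term frequency) stand in for the far more elaborate signals a real engine uses.

```python
# Hypothetical sketch of the crawl -> filter -> rank pipeline.
# The spam heuristic and scoring rule are illustrative assumptions.

SPAM_MARKERS = ("buy now!!!", "free $$$")  # toy spam heuristic

def is_spam(page: dict) -> bool:
    """Flag a page if its text contains any known spam marker."""
    text = page["text"].lower()
    return any(marker in text for marker in SPAM_MARKERS)

def rank(pages: list, query: str) -> list:
    """Drop spam, then order pages by a toy relevance score:
    how many times the query term appears in the page text."""
    clean = [p for p in pages if not is_spam(p)]
    return sorted(clean,
                  key=lambda p: p["text"].lower().count(query.lower()),
                  reverse=True)

crawled = [
    {"url": "a.html", "text": "search engines index pages about search"},
    {"url": "b.html", "text": "BUY NOW!!! free $$$ pills"},
    {"url": "c.html", "text": "a page about search"},
]
results = rank(crawled, "search")
print([p["url"] for p in results])  # ['a.html', 'c.html']
```

The point of the sketch is the separation of stages: because filtering and ranking are distinct steps, the engine can change either one (tighten the spam filter, swap the scoring rule) without touching the crawler, which is why ranking algorithms can change as often as the text says.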