The Internet may be the most revolutionary invention of the late 20th century, but without search engines, it would be virtually worthless today.
Nobody could cope with the flood of information if keyword searches from Google and its competitors did not make it accessible.
It helps us every day to find the needle in a haystack.
More than 90 percent of internet users rely on search engines to find information online. If you want to reach people on the Internet with your website, you have to make sure they can find it through search engines.
If the website does not appear in the search engine results for its important search terms, or only shows up on the second results page, it is time to act.
There are many strategies for making your own website more attractive to search engines. Search engine optimization (SEO) deals with precisely this goal.
Search Engine Optimization Basics
Search engine optimization is neither witchcraft nor fraud, but analytical know-how and hard work.
Search engine optimizers analyze how search engines work and adapt websites as closely as possible to their criteria: they optimize them for the search engines.
How do search engines work?
Millions of people use a search engine many times a day without wondering how the results actually come about. Search engines are vast machines that collect and sort information from the net.
Of course, a search engine cannot scour the entire Internet for every single search query – the mass of data would be unmanageable, and each search would take far too long.
That’s why every search engine runs countless data collectors: independent programs, called crawlers or spiders, that are constantly on the net, looking at websites and collecting the most important data.
This information is archived in a huge database, the index. This index of the search engine is well sorted and can be queried at lightning speed. From this, the search engine gets its results.
What can be found with search engines is therefore limited by two factors: the perception capabilities of the crawler and the time of its last visit to a website.
The information a crawler can analyze on a website is limited. Crawlers essentially only understand text. They are blind to the content of videos and images, as well as to the design of a page. A crawler therefore only takes away information that is contained in some form in the page's source code.
The second limiting factor is the time of the crawler's last visit to a website. If changes are made to a page after the crawler has collected its information, the search engine does not know about them. They cannot appear in the search results until the crawler revisits the page.
Depending on the importance of the page, this can take from a few minutes to several days.
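The crawl-and-index mechanism described above can be sketched in a few lines. The following Python toy skips real HTTP fetching and link-following and uses two hard-coded example pages (the URLs and contents are made up); it only illustrates the principle of stripping markup from text and building a searchable inverted index:

```python
import re
from collections import defaultdict

# Hypothetical mini-corpus standing in for crawled pages; a real crawler
# would fetch these documents over HTTP and follow the links they contain.
PAGES = {
    "example.com/a": "<h1>Neurosurgery clinic</h1><p>Brain and spine surgery.</p>",
    "example.com/b": "<p>Spine specialists near you.</p>",
}

def extract_text(html: str) -> str:
    """Crawlers essentially only understand text: strip the markup."""
    return re.sub(r"<[^>]+>", " ", html)

def build_index(pages: dict) -> dict:
    """Inverted index: each word maps to the set of URLs containing it."""
    index = defaultdict(set)
    for url, html in pages.items():
        for word in re.findall(r"[a-z]+", extract_text(html).lower()):
            index[word].add(url)
    return index

index = build_index(PAGES)
print(sorted(index["spine"]))  # ['example.com/a', 'example.com/b']
print(sorted(index["brain"]))  # ['example.com/a']
```

Answering a query then means looking words up in the prebuilt index instead of scanning pages – which is why results arrive at lightning speed, but only reflect what the crawler saw at its last visit.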
But how does the sorting of the results list come about? Why is one result ranked first and another 164th, if both websites deal with the same topic?
Search engines determine the relevance of hits using complex algorithms in which many different criteria flow together. Which criteria these are in detail is a trade secret of each search engine provider. Google currently uses several hundred parameters to calculate the ranking of search results.
Of course, some of the criteria are easy to grasp, and some are even officially confirmed. One example, almost mythical among website owners, is Google's PageRank. Named after Larry Page, one of the inventors of the search engine, this index was once the cornerstone of Google's phenomenal rise.
It is based on the fact that Google's crawlers register which links point to which websites. Put simply, pages that are linked to often from other pages have a high PageRank, and pages with few such "backlinks" have a low one.
Google treats links as recommendations for the linked page, which makes PageRank effectively a measure of a page's popularity on the net. This can be used as a criterion to rank the pages in a search engine's hit list.
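The original idea can be illustrated with a toy computation. The four-page link graph below and the damping factor of 0.85 are illustrative assumptions, not Google's actual implementation; the sketch uses the classic power-iteration method, in which each page repeatedly distributes its current rank along its outgoing links:

```python
# Hypothetical link graph: page "C" receives links from three pages.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: every page shares its rank among its outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split rank over outlinks
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# "C" collects the most inbound links, so it ends up with the highest rank.
print(max(ranks, key=ranks.get))  # C
```

The frequently linked page "C" comes out on top – exactly the "links as recommendations" idea described above, and exactly the property that made the score easy to game by buying or exchanging links.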
PageRank itself is now considered a relic and has only a small influence on the sorting of search results. Like much from the early days of search engines, it was too easy to manipulate once clever webmasters began exchanging links, selling them, or stuffing blog posts with them.
The modern mechanisms are more complex and harder to manipulate.
The supremacy of Google
For the vast majority of Germans, searching the Internet is synonymous with "googling". Although there are hundreds of search engines, most of them lead a niche existence.
Nine out of ten internet searches in Germany are conducted with Google. Even Microsoft's search engine Bing lags far behind, with a market share of less than 4 percent.
For search engine optimization in German-speaking countries, this quasi-monopoly, stable for several years now, means that in practice one only has to deal with the mechanisms of Google.
Other search engines only need to be considered when foreign, especially non-European, target groups are to be reached.
In China, for example, the search engine Baidu, almost unknown in Germany, holds a market share of around two-thirds. In Russia, the search engine Yandex leads the field with a similar share.
In modern search engines, not every searcher sees the same results anymore. Search engine operators long ago began to factor all sorts of user-related criteria into the search.
For example, a user in Mainz who enters the word "neurosurgery" into Google receives different results than someone searching for the same term on a computer in Ingolstadt. Google registers the user's location via the identifier of the requesting computer. For search queries that are usually meant locally, the search engine primarily offers results near the user's location.
Beyond localization, Google also makes adjustments based on a user's personal search and click behavior. Virtually every interaction of a user with Google is logged, creating a kind of user profile.
The data for this comes mainly from the many millions of Android phones, on which Google can analyze virtually every user action. From click behavior, bookmarks and similar indicators, Google aims to deliver results to each user that are ever better tailored to their interests and needs.
This development stems from the search engines' struggle for user-relevant results. The success of a search engine depends on exactly that: users actually finding what they need.
The factor of user behavior
Not every position in the search results for a query is worth the same. This has much to do with how users take in information on the Internet. Studies have shown that perception on the web is far more selective and impatient than when reading a book or newspaper, for example.
Users quickly scan texts rather than reading them thoroughly. They only pause when something catches their attention through placement, emphasis or other, partly individual, criteria.
In the perception of search result lists, this peculiarity has an even stronger effect: studies show that users devote by far the greatest attention to the top search results.
The probability that a result is clicked is over 50 percent for the top-ranked result; for the second it is only about 14 percent, and for the third not even 10. Most search engine users never look at the results on the second page at all.
So if a site appears on page two or later, the position is practically worthless. Only top results actually bring visitors.
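Those click-through figures translate directly into traffic. A back-of-the-envelope sketch, using the percentages from the text and an assumed volume of 10,000 monthly searches for a single keyword (a made-up example value):

```python
# Click-through rates by position, taken from the figures in the text
# (1st ≈ 50 %, 2nd ≈ 14 %, 3rd ≈ 10 %).
ctr_by_position = {1: 0.50, 2: 0.14, 3: 0.10}
monthly_searches = 10_000  # hypothetical search volume for one keyword

expected_visitors = {
    pos: round(monthly_searches * rate)
    for pos, rate in ctr_by_position.items()
}
print(expected_visitors)  # {1: 5000, 2: 1400, 3: 1000}
```

Under these assumptions, dropping from first to third place costs roughly four-fifths of the potential visitors – which is why every single position at the top of the list matters.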
No surgeon would operate on a limping patient's knee without first examining them and determining whether the limp really originates in the knee.
Yet when a website does not rank as well in the search results as its operators would like, action is often taken on gut feeling and justified with vague guesses. In most cases, this achieves little.
True search engine optimization is an empirical business and, like a good diagnosis, begins with a thorough, step-by-step assessment.