Search Engine Market Share Worldwide - September 2020

Google 92.26%
Bing 2.83%
Yahoo! 1.59%
Baidu 1.14%
DuckDuckGo 0.5%
Yandex 0.5%


Google states that Google Search “operates in a highly competitive environment,” facing a “vast array of competitors” in general online search, including Bing, DuckDuckGo, and Yahoo!.

That is what Google told the House Committee on the Judiciary, Nov. 22, 2019.

The question is: how can you trust the Google search engine if they do not get their own numbers right? Are they biased by their own artificial intelligence, or do they just lie to avoid antitrust regulation?

 
I bet most of those stats (Bing's 2.83%) come from Windows 10 Start Menu search suggestions, and also from search in Office 365.

True. However, I also imagine that just typing this post in the Chrome web browser sends about 20 requests to Google, so many of these stats probably can't be trusted.

At least with DuckDuckGo, almost every one of those is a legitimate request. Google is even making things hard for them by removing DuckDuckGo from the "known" search engine selection on Android.

DuckDuckGo recently started an advertising campaign here, at least in Berlin.
That is actually great to hear. These guys seem less crooked than the rest.
 
I use Startpage, which isn't included among browsers' default search engines.
But seriously, at least in the US, if they want to break up a monopoly, they should do something about the ISPs and their collusion. Here in NYC we have Spectrum, which got permission to buy Time Warner by saying "we won't do bandwidth caps, at least till whenever"; now they're saying, well, we want permission. The only other choice is Verizon, and in most parts of the country there are only one or two possible providers.

Facebook, Google, and Amazon give you the choice not to use them. Not the same with the ISPs. Pity about the lobbyists -- for those not in the US, lobbying is basically legalized bribery, where they give money to lawmakers and call it something other than a bribe. But seriously -- THAT's the monopoly that affects people here in the US. (Not so much in other countries. I remember in Japan, when my wife needed a portable hotspot and my in-laws were looking for providers, you had a real choice.)

This is old but still true: Honest ISP Ad. (Contains obscene language.)

URL: https://youtube.com/0ilMx7k7mso

Again, I don't *have* to use Google, or any of the others. The blatantly unfair monopolies are the ISPs, which frequently prevent things such as a city providing its own Internet service.

A PS, as we were talking about this sort of thing in another thread. This forum seems to embed the video if I just paste a link. Ideally, I think it should just have the link and people can click it or not. (My original post embedded it. vigole's post below showed me how to fix that.)
 
This forum seems to embed the video if I just paste a link. Ideally, I think it should just have the link and people can click it or not.

I believe there is a checkbox at the bottom of the forum editor that allows you to embed the video or just show the link...
 
I use Startpage

+1

I believe there is a checkbox at the bottom of the forum editor that allows you to embed the video or just show the link...

I think he was rather getting at embedding being the default, so that every page of a thread where someone has posted a video sends data to Google by default. (If I really cared THAT much, I'd obviously just block youtube.com, but a less experienced user might not realize this or know how to.)
 
I believe there is a checkbox at the bottom of the forum editor that allows you to embed the video or just show the link...
I can't find that!
I think it should just have the link and people can click it or not.

My solution:

Method 0:
[NOPARSE]https://domain/file[/NOPARSE]

Method 1:
  1. Insert Link (Ctrl+K)
  2. URL: https://domain/file/
  3. Text: https://domain/file

Method 2:
  1. Insert Link (Ctrl+K)
  2. URL: https://domain/file
  3. Text: https://domain/file/
 
Loosely related to the actual topic: The Atlas of the Digital World (German). Long story short: the seven biggest players in the digital world account for more than 50% of internet usage on their sites. The biggest four, AAAF (Alphabet, Amazon, Apple, Facebook), are very efficient at keeping users inside their corporate bubble, i.e. at guiding them to services owned by the same company, thus disadvantaging competitors.
 
In public statements, Google claimed that “competition is just a click away.”
Is it?
 
In 2018, Findx—a privacy-oriented search engine that had attempted to build its own index—shut down its crawler, citing the impossibility of building a comprehensive search index when many large websites only permit crawlers from Google and Bing.

Many large websites, such as LinkedIn, Yelp, Quora, GitHub, and Facebook, only allow certain specific crawlers, like Google's and Bing's, to include their webpages in a search engine index. That meant that the Findx search index was incomplete and could not return results that were both relevant and of good quality.

When you compare any independent search engine's results to Google's, for example, they have no chance of being as relevant or complete, because many large websites refuse to allow any other search engine to include their pages.

Why do webmasters deny privacy-oriented search engines access to their sites?

Are they told to do so? Do agreements exist forcing such decisions?

And if you look at the robots.txt files of top websites, why is the monopolist Google so often favored in those ACLs over its competitors?
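For illustration, here is what that favoritism typically looks like in practice. This is a hypothetical robots.txt (the pattern is common on large sites, but the file itself is made up); an empty Disallow value means "nothing is off limits," so only the named crawlers get in:

Code:
# Hypothetical robots.txt showing the allow-list pattern.
# An empty Disallow value means the named crawler may fetch everything.
User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

# Every other crawler is denied the entire site.
User-agent: *
Disallow: /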
 
It depends on general consensus! Some people think Google is going to conquer and save the universe. They have never heard of the honourable EIC. They're busy watching Netflix.
Well, that's not bad per se: the docudrama The Social Dilemma is in the top-10 on Netflix.
"Home-keeping youth have ever homely wits." - William Shakespeare
 
In 2018, Findx—a privacy-oriented search engine that had attempted to build its own index—shut down its crawler, citing the impossibility of building a comprehensive search index when many large websites only permit crawlers from Google and Bing.

Many large websites, such as LinkedIn, Yelp, Quora, GitHub, and Facebook, only allow certain specific crawlers, like Google's and Bing's, to include their webpages in a search engine index. That meant that the Findx search index was incomplete and could not return results that were both relevant and of good quality.

While I understand the problem, simply throwing in the towel is a bit weak in my opinion. As far as I know, robots.txt is not even an official standard, let alone is honoring it legally required. Sure, if you disrespect it, IP bans will likely follow, but it's not like those can't be worked around. It's obviously not a nice thing to do, nor something I would really want to spend time on, but if my only other option were to close up shop, there wouldn't be much of a question about how to proceed.
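For what it's worth, honoring robots.txt is also trivial to implement: Python ships a parser in the standard library. A minimal sketch (the crawler name and URLs are placeholders, not a real service):

Code:
# Check robots.txt before fetching, using Python's standard library.
# "MyCrawler" and example.com are placeholders for illustration.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # download and parse the site's robots.txt

url = "https://example.com/some/page"
if rp.can_fetch("MyCrawler", url):
    print("allowed to fetch", url)
else:
    print("disallowed by robots.txt:", url)

Whether to respect the file is a policy decision; the parsing itself is the easy part.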
 
Some people think Google is going to conquer and save the universe. They have never heard of the honourable EIC.
Thank you for comparing Google with the East India Company (EIC), aka the Honourable East India Company (HEIC), in a more military context.
The British Crown needed to call them "honourable" in those times, as otherwise one might have gotten the idea that serving in a regiment protecting the EIC's commercial interests was really the repression of an exploiting occupier.

One may take comfort in the fact that even the EIC ceased to exist at a certain point. It took a rebellion by the Indians, and parts of the company were nationalized in the aftermath.

I leave it to the readers here to connect the dots as to where global monopolists are heading in a digitalized (post)modern world.
 
I try to use DDG, but what I find is that the results are not as comprehensive as Google's. I understand Google is the "V'Ger" of the universe (a Star Trek movie reference), but they have better search results, at least in my opinion. Probably because of their market share, I'm guessing, plus an unlimited budget...

I use Google exclusively for searches at work, in Chrome, but whatever browser or platform I'm using Google on, I never sign in to Google services. I know you can still be tracked even if you don't sign in.
 
Many large websites ... only allow certain specific crawlers like Google and Bing to include their webpages ...

Why do webmasters deny privacy-oriented search engines access to their sites?
And I've been doing the same thing, in effect, for the last ~15 years. But as I'll explain below, you have the reasons for the denial all wrong. I look at my web server logs and find that a large fraction of traffic comes from crawlers. Then I look at each crawler and make decisions. If it is a reputable search engine, I allow it access (in my case, not to everything; I don't allow them to load pictures). If it is clearly a hack attack (looking for scripts etc.), I deny it completely. I also deny all crawling from Russia and China, because my web page is intended for family and friends, I have no family and friends in those countries, and (for lack of language skills) I can't validate whether crawlers from those countries are reputable. If a crawler ignores robots.txt, it gets blocked immediately (by IP address, and I don't bother being surgically accurate; I typically deny the whole IP range used by the crawler's company or hosting provider).

When I say "reputable", I mean: the crawler honors robots.txt, it doesn't crawl excessively, it doesn't probe for vulnerabilities, and its website has clear instructions for webmasters about how to control crawling. In particular, I try not to block crawling by academic researchers, unless they get out of hand.
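To make the triage concrete, a rough sketch of the first step -- tallying requests per user agent from an access log -- might look like this in Python (the log path and the combined log format are assumptions; adjust for your server):

Code:
# Count requests per user-agent in a combined-format access log
# to spot heavy crawlers. Log path and format are assumptions.
import re
from collections import Counter

# In the combined log format, the user-agent is the last quoted field.
ua_pattern = re.compile(r'"([^"]*)"\s*$')

counts = Counter()
with open("/var/log/nginx/access.log") as log:
    for line in log:
        match = ua_pattern.search(line.rstrip())
        if match:
            counts[match.group(1)] += 1

# The top entries are usually crawlers; decide per agent what to do.
for agent, hits in counts.most_common(20):
    print(f"{hits:8d}  {agent}")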

The real problem is that the crawler space is dominated by attackers and completely incompetent would-be search engines. Real search engines (Google, Microsoft, Yahoo when that was still a thing) are really good at crawling: very efficient, with minimal impact. They honor robots.txt and typically give you feedback on what they see. The hacking attackers are obviously no good, and I block them quickly. I sometimes wonder whether some of the crawl entries were attempts at low-level DoS attacks; they were impossible to explain otherwise. I finally gave up and configured my robots.txt to allow only the Google and Microsoft crawlers, since trying to stomp out all the evil/stupid ones was too much work. But here are two anecdotes:

Friends of mine are the founders of the (failed) search engine CUIL, which was started by a former search engineer/manager from Google and her husband, a computer scientist from IBM who specialized in the semantic web. And I had to block CUIL, because their crawler was a god-awful mess: it ignored robots.txt and walked into directories it shouldn't have, it re-crawled the same few files every minute or two even though they hadn't changed, and so on. While they were not evil, they were incompetent.

Second anecdote: I used to work at IBM, and our research lab had a very high speed connection to the worldwide internet. One day I saw an enormous number of crawls of my personal website coming from the public IP address of the lab where I worked. It clearly wasn't a web browser; it was a crawler gone insane. The first step was to block it at the IP level. A little investigation showed that it was a young researcher trying to crawl the web looking for some form of content, except that they had forgotten to rate-limit the spider, forgotten to avoid crawling the same resource multiple times (which is admittedly hard if you are using a large distributed system), and were just generally a big mess. Fortunately, colleagues got them to stop before they got in big trouble.
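Both failure modes in these anecdotes -- no rate limiting, and re-fetching resources that were already crawled -- are cheap to avoid on a single machine. A minimal sketch (the delay value is illustrative; a real distributed crawler would need a shared store instead of an in-memory set):

Code:
# The two courtesies the anecdotes say were missing: a delay between
# requests and a record of what has already been fetched.
import time
import urllib.request

CRAWL_DELAY = 5.0   # seconds between requests; illustrative value
visited = set()     # in-memory; a distributed crawler needs a shared store

def polite_fetch(url):
    if url in visited:
        return None              # don't crawl the same resource twice
    visited.add(url)
    time.sleep(CRAWL_DELAY)      # rate-limit so the server isn't hammered
    with urllib.request.urlopen(url) as resp:
        return resp.read()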

So the answer to your question is: you are jumping to conclusions. I don't know of webmasters who deny privacy-oriented search engines access because they are anti-privacy. I know webmasters (including myself) who deny crawling that is damaging and pointless.
 
I try to use DDG

Me too. I set DDG as my default in FF because for most (probably all) casual searches -- meaning (1) I don't type in a full URL, so I'm basically using the search engine to give me the link I want, or (2) I'm just pulling up fairly common stuff -- DDG is a-OK. In fact, it's MUCH better, since it has much less garbage advertising and other distraction.

But for more esoteric questions, I sometimes have to jump to Google. It seems to do better sifting through things like Stack Exchange, for example. DDG just can't do needle-in-a-haystack searches quite as well.
 