How Mozilla could improve search engine competition
Browsers have been improving rapidly now that there are multiple strong players with significant market share. Wouldn't it be great if the search engine market were more like that?
Mozilla is in a unique position to be able to improve competition among search engines. We're not a search engine company and we stand for meaningful user choice.
If Mozilla were to decide that improving search competition was important to us, here are some things we could do:
Measurements and surveys
We could run Test Pilot studies to learn about search engine choice, speed, and result quality. A study could even encourage users to try multiple search engines and report their experiences.
The content and ordering of Firefox's search bar have always been determined solely by what we believe is best for Firefox users. We should strive to have the best data possible when we make these decisions.
Ease of experimentation
We could make it easier for users to compare search engines. For example, after I enter a query into the search bar, it would be nice if I could run the same query on another search engine in one or two clicks. The Search Tabs extension is one example of how this could be done.
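To make this concrete, a "run this query elsewhere" feature mostly comes down to substituting the query into each engine's search URL template, similar to the templates engines publish in OpenSearch description files. Here is a minimal TypeScript sketch; the engine list and the {searchTerms} placeholder convention are illustrative assumptions, not a proposed Firefox API:

```typescript
// Minimal sketch: re-run one query against several engines by
// substituting it into each engine's search URL template.

interface Engine {
  name: string;
  template: string; // "{searchTerms}" stands in for the query
}

const engines: Engine[] = [
  { name: "Google", template: "https://www.google.com/search?q={searchTerms}" },
  { name: "Bing", template: "https://www.bing.com/search?q={searchTerms}" },
  { name: "DuckDuckGo", template: "https://duckduckgo.com/?q={searchTerms}" },
];

// Build the URL that re-runs `query` on the given engine.
function searchUrl(engine: Engine, query: string): string {
  return engine.template.replace("{searchTerms}", encodeURIComponent(query));
}

// A "try this search elsewhere" menu could be generated like this:
for (const engine of engines) {
  console.log(`${engine.name}: ${searchUrl(engine, "firefox add-ons")}`);
}
```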
Ease of migration
We could let users share their search history with a new search provider, so the search engine they've used in the past doesn't have a hidden personalization advantage.
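As a rough sketch of what such sharing might look like, here is one possible export format in TypeScript. The JSON shape and field names are hypothetical; a real feature would need a format that participating engines agree to accept:

```typescript
// A sketch of a portable search-history export. All field names
// here are made up for illustration.

interface SearchHistoryEntry {
  query: string;     // what the user searched for
  timestamp: string; // when, as ISO 8601
  engine: string;    // which engine the search was originally run on
}

// Serialize only the history the user chose to share.
function exportHistory(entries: SearchHistoryEntry[]): string {
  return JSON.stringify({ version: 1, entries }, null, 2);
}

const sample: SearchHistoryEntry[] = [
  { query: "recipe for pancakes", timestamp: "2011-08-01T09:30:00Z", engine: "google" },
  { query: "bicycle repair", timestamp: "2011-08-02T17:05:00Z", engine: "google" },
];

console.log(exportHistory(sample));
```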
Helping users make informed choices is core to the Mozilla mission. Users should be aware if their current search engine is collecting search history, and be able to make informed decisions about the tradeoffs. For example, they should be able to take their history to an engine with a better privacy policy.
We could require special promises from search engines about how they will use data shared through this feature. For example, we could require they only use the data for result personalization, and only on the search engine site itself.
There may be ways to use history to improve search quality that involve only partial history sharing, such as client-side relevance tweaks or RePriv. If Firefox supported one of these methods, users could comfortably ask their search engine not to store any search history.
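To make the client-side option concrete, here is a minimal TypeScript sketch of one such relevance tweak: the engine returns unpersonalized results, and the browser re-orders them using visit counts that never leave the machine. The scoring blend is a made-up example, not how RePriv or any actual engine works:

```typescript
// Sketch of a client-side relevance tweak: re-rank the engine's
// generic results using local history that is never shared.

interface Result {
  url: string;
  rank: number; // engine's original position; 0 = best
}

// Per-hostname visit counts, kept entirely client-side.
const localVisits = new Map<string, number>([
  ["developer.mozilla.org", 12],
  ["en.wikipedia.org", 5],
]);

function score(r: Result): number {
  const visits = localVisits.get(new URL(r.url).hostname) ?? 0;
  // Start from the engine's ranking and add a small local boost.
  return -r.rank + Math.log1p(visits);
}

function rerank(results: Result[]): Result[] {
  return [...results].sort((a, b) => score(b) - score(a));
}

const results: Result[] = [
  { url: "https://example.com/css-tutorial", rank: 0 },
  { url: "https://developer.mozilla.org/en/CSS", rank: 1 },
];
console.log(rerank(results).map((r) => r.url)); // MDN gets boosted to the top
```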
August 12th, 2011 at 3:46 pm
I’m glad someone’s paying attention. But the reason there is no competition in search is that it costs too much to build a competitive search engine. Index size and query speed are largely determined by the size and capability of the data centers processing them.
The only ways I see to fix this are by promoting a “Federated Search” paradigm or developing “Distributed (P2P) Search Engine” technology.
August 13th, 2011 at 1:09 am
Is this another ploy to peddle our search history to multiple search engines for a quick buck? Please do not force unnecessary data sharing on Firefox users. Let’s give users the benefit of the doubt that they are intelligent and capable of selecting a search engine of their liking. It’s enough to remind people that they have a choice and let them run with that information. Right?
August 13th, 2011 at 11:51 am
Looks like somebody had a bad day with Google.
August 13th, 2011 at 12:30 pm
I’m actually quite happy with Google at the moment. But I’d like to stir up competition before Google’s market dominance causes them to stop innovating.
Google is coming under the twin anti-innovation incentives of monopoly economics and fear of antitrust law. I think we’re already starting to see this in the way Google responds to spam, and the way Google is ultra-cautious about adding non-search information above search results.
August 13th, 2011 at 12:30 pm
If Google is convicted of abusing its market dominance, one interesting remedy would be for Google to open its web crawl data. Don’t attack Google’s innovations, attack the barriers to entry.
It’s not just the cost of building a datacenter, by the way. Google has the advantage of its crawler being targeted by web site owners.
How often do you see “User-Agent: Googlebot” sections in robots.txt files? How often do you see sites serving full content to Googlebot, but putting up paywalls for all other visitors? How often do you see sites throttle robots in general but with exceptions for Googlebot?
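For illustration, here is a hypothetical robots.txt following the first pattern, welcoming Googlebot while shutting out every other crawler:

```
# Hypothetical example: full access for Googlebot, nothing for anyone else.
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
```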
August 13th, 2011 at 1:42 pm
Well, that’s the thing: for the most part these are just natural network effects they got from being a popular search engine. So it’s kind of hard to charge them with being anti-competitive. Even if convicted, I doubt the DOJ would do anything as effective as demanding Google open their web crawl data. Besides all of that, it would take years, maybe decades, before such a trial would be over.
You’re right, though: the barriers to entry need to be eliminated. (To be clear, the cost of building a datacenter is a barrier to entry too.) But you can’t count on the courts to do anything about that.