

[Repost] Is Google ranking based on machine learning?

Viewed 3687 times | 2014-6-18 17:21 | Personal category: 立委科普 | System category: Research notes | Keywords: Google, search, Machine Learning | Source: repost

Quora has a question with discussions on "Why is machine learning used heavily for Google's ad ranking and less for their search ranking?" A lot of people I've talked to at Google have told me that the ad ranking system is largely machine learning based, while search ranking is rooted in functions that are written by humans using their intuition (with some components using machine learning). 

Surprise? Contrary to what many people have believed, Google search consists of hand-crafted functions using heuristics. Why?


One very popular reply there is from Edmond Lau, an ex-Google Search Quality engineer. He makes a point we have experienced and noted repeatedly in past blogs comparing Machine Learning with Rule Systems: it is very difficult to debug an ML system for a specific observed quality bug, while a rule system, if designed modularly, is easy to control and fine-tune:


From what I gathered while I was there, Amit Singhal, who heads Google's core ranking team, has a philosophical bias against using machine learning in search ranking.  My understanding of the two main reasons behind this philosophy is:

  1. In a machine learning system, it's hard to explain and ascertain why a particular search result ranks more highly than another result for a given query.  The explainability of a certain decision can be fairly elusive; most machine learning algorithms tend to be black boxes that at best expose weights and models that can only paint a coarse picture of why a certain decision was made.

  2. Even in situations where someone succeeds in identifying the signals that factored into why one result was ranked more highly than another, it's difficult to directly tweak a machine learning-based system to boost the importance of certain signals over others in isolated contexts.  The signals and features that feed into a machine learning system tend to only indirectly affect the output through layers of weights, and this lack of direct control means that even if a human can explain why one web page is better than another for a given query, it can be difficult to embed that human intuition into a system based on machine learning.


Rule-based scoring metrics, while still complex, provide a greater opportunity for engineers to directly tweak weights in specific situations.  From Google's dominance in web search, it's fairly clear that the decision to optimize for explainability and control over search result rankings has been successful at allowing the team to iterate and improve rapidly on search ranking quality.  The team launched 450 improvements in 2008 [1], and the number is likely only growing with time.
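To make "directly tweaking weights in specific situations" concrete, here is a minimal sketch of what a hand-crafted scoring function looks like. Every signal name and weight below is a hypothetical assumption chosen for illustration, not anything from Google's actual ranking:

```python
# Hypothetical hand-tuned ranking sketch: signals are assumed to be
# pre-computed floats in [0, 1]. An engineer can read every term,
# explain any ranking decision, and adjust a weight or rule directly.
def rule_based_score(doc):
    score = 0.0
    score += 3.0 * doc["query_term_match"]   # weight set by hand
    score += 1.5 * doc["link_authority"]     # weight set by hand
    score += 0.5 * doc["freshness"]          # weight set by hand
    if doc["spam_suspect"]:                  # an explicit, debuggable rule
        score *= 0.1
    return score

strong_page = {"query_term_match": 0.9, "link_authority": 0.8,
               "freshness": 0.5, "spam_suspect": False}
spammy_page = {"query_term_match": 0.9, "link_authority": 0.2,
               "freshness": 0.9, "spam_suspect": True}
```

If a specific query surfaces a bad result, the engineer can trace exactly which term produced the score and tweak that one weight or rule, which is the kind of isolated intervention that is hard to perform on a learned model.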

Ads ranking, on the other hand, tends to be much more of an optimization problem, where the quality of two ads is much harder to compare and intuit than that of two web page results.  Whereas web pages are fairly distinctive and can be compared and rated by human evaluators on their relevance and quality for a given query [2], the short three- or four-line ads that appear in web search all look fairly similar to humans.  It might be easy for a human to identify an obviously terrible ad, but it's difficult to compare two reasonable ones.


Branding differences, subtle textual cues, and behavioral traits of the user, which are hard for humans to intuit but easy for machines to identify, become much more important.  Moreover, different advertisers have different budgets and different bids, making ad ranking more of a revenue optimization problem than merely a quality optimization problem.  Because humans are less able to understand the decision behind an ads ranking decision that may work well empirically, explainability and control -- both of which are important for search ranking -- become comparatively less useful in ads ranking, and machine learning becomes a much more viable option.
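The "revenue optimization" framing above can be sketched in a few lines: rank ads by expected revenue per impression, i.e. the advertiser's bid times a predicted click-through rate. The bids and pCTR values below are made-up placeholders; in a real system the pCTR would come from a learned model:

```python
# Minimal sketch of ad ranking as revenue optimization.
# expected revenue per impression = bid * predicted click-through rate
def expected_revenue(ad):
    return ad["bid"] * ad["pctr"]

ads = [
    {"name": "A", "bid": 2.00, "pctr": 0.01},  # expected $0.020
    {"name": "B", "bid": 0.50, "pctr": 0.05},  # expected $0.025
]
ranked = sorted(ads, key=expected_revenue, reverse=True)
```

Note that ad B outranks ad A despite bidding a quarter as much, because the model predicts it earns more clicks; no human intuition about which ad "looks better" enters the decision, which is why explainability matters less here.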

Jackie Bavaro, Google PM for 3 years
Edmond Lau's answer is great, but I wanted to add one more important piece of information.

When I was on the search team at Google (2008-2010), many of the groups in search were moving away from machine learning systems to rule-based systems.  That is to say, Google Search used to use more machine learning and then went the other direction, because the team realized they could make faster improvements to search quality with a rules-based system. It's not just a bias; it's something that many sub-teams of search tried out and preferred.

I was the PM for Images, Video, and Local Universal - 3 teams that focus on including the best results when they are images, videos, or places.  For each of those teams I could easily understand and remember how the rules worked.  I would frequently look at random searches and their results and think "Did we include the right Images for this search?  If not, how could we have done better?". And when we asked that question, we were usually able to think of signals that would have helped - try it yourself.  The reasons why *you* think we should have shown a certain image are usually things that Google can actually figure out.
 




Anonymous
Part of the answer is legacy, but a bigger part of the answer is the difference in objectives, scope and customers of the two systems.

The customer for the ad-system is the advertiser (and by proxy, Google's sales dept).  If the machine-learning system does a poor job, the advertisers are unhappy and Google makes less money. Relatively speaking, this is tolerable to Google. The system has an objective function ($) and machine learning systems can be used when they can work with an objective function to optimize. The total search-space (# of ads) is also much much smaller.

The search ranking system has a very subjective goal - user happiness. CTR, query volume, etc. are very inexact metrics for this goal, especially on the fringes (i.e. query terms that are low-volume or volatile). While much of the decision-making can be automated, there are still many decisions that need human intuition.

Telling whether site A is better than site B for topic X with limited behavioural data is still a very hard problem. It degenerates into lots of little messy rules and exceptions that try to impose a fragile structure onto human knowledge, a structure that necessarily needs constant tweaking.

An interesting question: is the Google search index (and associated semantic structures) catching up, in size and robustness, to the subset of the corpus of human knowledge that people are interested in and searching for?

My guess is that right now the gap is growing - i.e. interesting, search-worthy human knowledge is growing faster than Google's index, and Amit Singhal's job is probably getting harder every year. By extension, there are opportunities for new search providers to step into the widening gap with unique offerings.

p.s: I used to manage an engineering team for a large search provider (many years ago).






http://blog.sciencenet.cn/blog-362400-804469.html


