Massive Data Mining (海量数据挖掘), Wang Yongli: ch12-ml2.ppt

Slides after J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets.

Support vector machines: the margin

- The separating hyperplane is w·x + b = 0; the two margin boundaries are w·x + b = +1 and w·x + b = -1.
- The distance between the two boundaries is 2γ = 2/|w|, so maximizing the margin γ = 1/|w| is the same as minimizing |w|.
- Note: rescaling (w, b) lets us fix the margin boundaries at ±1 without loss of generality.

Hard-margin SVM

- minimize ½|w|²  subject to  y_i (w·x_i + b) ≥ 1 for all training points i.
- This is called SVM with "hard" constraints: every point must lie on or outside its margin boundary, which is only feasible when the data are linearly separable.

[Figure: positive (+) and negative (-) points on either side of w·x + b = 0; a few points fall on the wrong side, motivating slack.]

Soft-margin SVM

- Introduce a slack variable ξ_i ≥ 0 for each data point:
  minimize ½|w|² + C·Σ_i ξ_i  subject to  y_i (w·x_i + b) ≥ 1 - ξ_i.
- For each data point: if its margin is ≥ 1, don't care; if its margin is < 1, pay a linear penalty ξ_i.

What is the role of the slack penalty C?

- C = ∞: only want w, b that separate the data (pure hard margin).
- C = 0: can set ξ_i to anything, then w = 0 (basically ignores the data).
- An intermediate "good" C trades margin width against training mistakes.

[Figure: the same data fit with big C, "good" C, and small C; the boundary is drawn through (0,0).]

Hinge loss

- Substituting ξ_i = max{0, 1 - y_i(w·x_i + b)} gives the unconstrained objective
  min over w, b of  ½|w|² + C·Σ_i max{0, 1 - y_i(w·x_i + b)},
  where ½|w|² controls the margin (regularization), the sum is the empirical loss L (how well we fit the training data), and C is the regularization parameter.
- The hinge loss g(z) = max{0, 1 - z} is a convex upper bound on the 0/1 loss: the penalty is zero for z ≥ 1 and grows linearly as z falls below 1.

[Figure: 0/1 loss and hinge loss g(z) plotted against z over roughly -1 to 2.]

Gradient descent

- Want to minimize f(w, b): compute the gradient ∇f(j) with respect to each coordinate w(j):
  ∇f(j) = ∂f/∂w(j) = w(j) + C·Σ_i ∂L(x_i, y_i)/∂w(j),
  where ∂L(x_i, y_i)/∂w(j) = 0 if y_i(w·x_i + b) ≥ 1, and -y_i·x_i(j) otherwise.
- Gradient descent: iterate until convergence; for j = 1 … d, evaluate ∇f(j), then update w(j) ← w(j) - η·∇f(j).
  (η … learning rate parameter; C … regularization parameter)
- Problem: computing ∇f(j) takes O(n) time! (n … size of the training dataset)

Stochastic gradient descent

- Instead of evaluating the gradient over all examples, evaluate it for each individual training example:
  ∇f(j)(x_i) = w(j) + C·∂L(x_i, y_i)/∂w(j),
  updating w after every single example rather than after a full pass.
- We just had batch gradient descent, which iterates until convergence over the full gradient; SGD replaces that inner sum over all n examples with one example at a time.

Minimal NumPy sketches of the objective, batch gradient descent, and SGD follow below.
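To make the unconstrained objective concrete, here is a minimal NumPy sketch (not from the slides; the function name and layout are my own) that evaluates ½|w|² + C·Σ_i max{0, 1 - y_i(w·x_i + b)} for labels y_i in {-1, +1}:

```python
import numpy as np

def svm_objective(w, b, X, y, C):
    """Soft-margin SVM objective: 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b)).

    X: (n, d) data matrix; y: (n,) labels in {-1, +1}; C: slack penalty.
    Illustrative sketch, not the course's code.
    """
    margins = y * (X @ w + b)                  # y_i * (w.x_i + b) for every point
    hinge = np.maximum(0.0, 1.0 - margins)     # linear penalty where margin < 1
    return 0.5 * np.dot(w, w) + C * hinge.sum()
```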
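The batch gradient-descent loop can be sketched the same way; note that every iteration touches all n examples, which is exactly the O(n)-per-step cost the slide points out. The names (`svm_gradient`, `gradient_descent`), the fixed iteration count, and the default hyperparameters are illustrative assumptions:

```python
def svm_gradient(w, b, X, y, C):
    """Full-batch gradient of f: the regularizer contributes w, and each
    margin violator (y_i*(w.x_i + b) < 1) contributes -C * y_i * x_i."""
    margins = y * (X @ w + b)
    active = margins < 1.0                     # points whose hinge term is nonzero
    grad_w = w - C * (X[active].T @ y[active])
    grad_b = -C * y[active].sum()
    return grad_w, grad_b

def gradient_descent(X, y, C=1.0, eta=0.01, iters=1000):
    """Batch gradient descent; a fixed iteration count stands in for
    'iterate until convergence'."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        grad_w, grad_b = svm_gradient(w, b, X, y, C)   # O(n) work per iteration
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b
```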
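SGD replaces the sum over all examples with one example at a time, updating w and b immediately. A minimal sketch under the same assumptions (shuffled single-example updates, constant learning rate η):

```python
def sgd(X, y, C=1.0, eta=0.01, epochs=10, seed=0):
    """Stochastic gradient descent on the soft-margin SVM objective.
    Illustrative sketch; a decaying learning rate is more common in practice."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):           # visit examples in random order
            if y[i] * (X[i] @ w + b) < 1.0:    # hinge term active: include its gradient
                w -= eta * (w - C * y[i] * X[i])
                b -= eta * (-C * y[i])
            else:                              # only the regularizer 0.5*||w||^2 contributes
                w -= eta * w
    return w, b
```

Each update costs O(d) instead of the O(n·d) of a full-batch step, which is why SGD is the workhorse for massive datasets.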
