Mofan's TensorFlow Study Notes (8)

TensorFlow provides the following optimizers:

1. [tf.train.GradientDescentOptimizer](https://www.cnblogs.com/Lee-yl/p/10022615.html#tf.train.GradientDescentOptimizer): gradient descent
2. [tf.train.AdadeltaOptimizer](https://www.cnblogs.com/Lee-yl/p/10022615.html#tf.train.AdadeltaOptimizer)
3. [tf.train.AdagradOptimizer](https://www.cnblogs.com/Lee-yl/p/10022615.html#tf.train.AdagradOptimizer)
4. [tf.train.MomentumOptimizer](https://www.cnblogs.com/Lee-yl/p/10022615.html#tf.train.MomentumOptimizer): gradient descent with momentum
5. [tf.train.AdamOptimizer](https://www.cnblogs.com/Lee-yl/p/10022615.html#tf.train.AdamOptimizer): adaptive moment estimation
6. [tf.train.RMSPropOptimizer](https://www.cnblogs.com/Lee-yl/p/10022615.html#tf.train.RMSPropOptimizer)
7. tf.train.AdagradDAOptimizer
8. tf.train.FtrlOptimizer
9. tf.train.ProximalGradientDescentOptimizer
10. tf.train.ProximalAdagradOptimizer
11. tf.train.RMSPropOptimizer

Guidelines for choosing an optimizer:

(1) If the data is sparse, use an adaptive learning-rate method.
(2) RMSprop, Adadelta and Adam are very similar algorithms; Adam's bias correction lets it slightly beat RMSprop late in training, when the gradients become sparse. Overall, Adam is the best choice.
(3) Many papers use vanilla SGD without momentum. SGD can usually find a minimum, but it relies on a robust initialization and tends to get stuck at saddle points. So if you want faster convergence, or need to train deeper and more complex networks, pick an adaptive learning-rate method.

Reference: https://blog.csdn.net/winycg/article/details/79363169

![optimizer comparison (animation)](https://img2018.cnblogs.com/blog/1338991/201811/1338991-20181126203011170-1206183239.gif) ![optimizer comparison at a saddle point (animation)](https://img2018.cnblogs.com/blog/1338991/201811/1338991-20181126204507462-285504275.gif)

class tf.train.Optimizer is the base class of all optimizers. It provides methods for computing the gradients of a loss and applying them to variables, and it defines the API for adding a training op to a model. You will essentially never use this class directly; you use its subclasses such as GradientDescentOptimizer, AdagradOptimizer, MomentumOptimizer and so on.

- batch GD (all samples; slow)
- stochastic GD (one random sample; fast, but noisy and prone to poor local optima)
- mini-batch GD (a batch of samples; the usual choice when the dataset is large)

Small training set (≤ 2000 samples): use batch GD.
Larger training set: use mini-batch GD with a batch size of roughly 64-512; try a few powers of 2 during training to find the batch size that works best.
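Before the per-optimizer details, here is a minimal TF1-style training sketch (the placeholders, loss and batch feed are hypothetical, not from the notes) showing how any tf.train.Optimizer subclass plugs into minimize():

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Hypothetical linear model and MSE loss, just to have something to minimize.
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([10, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

# Any subclass of tf.train.Optimizer can be swapped in here.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed one mini-batch per step, e.g. batch size 64:
    # sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
```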
**tf.train.GradientDescentOptimizer**

![gradient descent](https://img2018.cnblogs.com/blog/1338991/201812/1338991-20181203142437776-870365557.png)

This class implements the plain gradient descent optimizer. Its constructor only needs a learning rate.

Constructor and typical call: tf.train.GradientDescentOptimizer(0.001).minimize(loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)

```python
__init__(
    learning_rate,
    use_locking=False,
    name='GradientDescent'
)
```

- learning_rate: a tensor or a float (the learning rate)
- use_locking: if True, use locks for the update operations
- name: name for the operations; defaults to "GradientDescent"

**tf.train.AdadeltaOptimizer**

Implements the Adadelta algorithm; it can be seen as an improved version of the Adagrad algorithm below.

Constructor: tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')

**tf.train.AdagradOptimizer**

Constructor: tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')

![formula](https://img2018.cnblogs.com/blog/1338991/201812/1338991-20181203145755640-2040724812.png)

![formula](https://img2018.cnblogs.com/blog/1338991/201812/1338991-20181203145520701-795323274.png)
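For reference, one common way to write the Adagrad update (per-coordinate accumulator $G_t$ of squared gradients, learning rate $\eta$):

$$
G_t = G_{t-1} + g_t \odot g_t, \qquad
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t + \epsilon}} \odot g_t
$$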
**tf.train.MomentumOptimizer**

momentum controls how much of the previous update direction is kept, and lies between 0 and 1. At the start of training the gradients can be large, so the initial value is usually chosen as 0.5; once the gradients are no longer that large, 0.9 is used. α is the learning rate, i.e. how much the current batch's gradient influences the final update direction, with the same meaning as in plain SGD.

Constructor: tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False)

```python
__init__(
    learning_rate,
    momentum,
    use_locking=False,
    name='Momentum',
    use_nesterov=False
)
```

- learning_rate: a tensor or a float (the learning rate)
- momentum: a tensor or a float (the momentum)
- use_locking: if True, use locks for the update operations
- name: name for the operations; defaults to "Momentum"
- use_nesterov: if True, use Nesterov momentum

**tf.train.RMSPropOptimizer**

The goal is the same as momentum gradient descent: damp the updates in the vertical direction and enlarge them in the horizontal direction (W is the horizontal direction, b the vertical one).

![RMSProp](https://img2018.cnblogs.com/blog/1338991/201812/1338991-20181203150540244-1656352451.png)

![RMSProp update](https://img2018.cnblogs.com/blog/1338991/201812/1338991-20181203150525824-1596812867.png)
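In the $W$/$b$ notation used above, one common way to write the RMSProp update is:

$$
S_{dW} \leftarrow \beta\, S_{dW} + (1-\beta)\,(dW)^2, \qquad
S_{db} \leftarrow \beta\, S_{db} + (1-\beta)\,(db)^2
$$
$$
W \leftarrow W - \alpha\, \frac{dW}{\sqrt{S_{dW}} + \epsilon}, \qquad
b \leftarrow b - \alpha\, \frac{db}{\sqrt{S_{db}} + \epsilon}
$$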
**tf.train.AdamOptimizer**

![Adam algorithm](https://img2018.cnblogs.com/blog/1338991/201812/1338991-20181203150933862-418465125.png)

Constructor: tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')

```python
__init__(
    learning_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-08,
    use_locking=False,
    name='Adam'
)
```

- learning_rate: a tensor or a float; needs tuning
- beta1: a float or a constant float tensor; the exponential decay rate for the 1st moment estimates [0.9 recommended]
- beta2: a float or a constant float tensor; the exponential decay rate for the 2nd moment estimates [0.999 recommended]
- epsilon: a small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper
- use_locking: if True, use locks for the update operations
- name: name for the operations; defaults to "Adam"
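These parameters correspond to the standard Adam update from Kingma and Ba (the `lr` expression in the LazyAdam source further below folds the two bias corrections into the step size):

$$
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2
$$
$$
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \frac{\alpha\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
$$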
**AdaBound**

[Paper](https://openreview.net/pdf?id=Bkg3g2R9FX) · [GitHub (PyTorch)](https://github.com/Luolc/AdaBound) · [GitHub (TensorFlow)](https://github.com/taki0112/AdaBound-Tensorflow)

**Drawbacks of SGD:** SGD is still widely used for late-stage fine-tuning, but it converges slowly early in training. The reason is that SGD scales the gradient identically in every dimension, and it trains poorly when the data distribution is highly imbalanced. The adaptive optimizers that grew out of this slow-convergence problem (Adam, AdaGrad, RMSprop, etc.) do converge quickly early on, but their test-set performance soon plateaus and is eventually overtaken by SGD.

**Drawbacks of Adam-style adaptive learning rates:** this is why many experienced researchers still prefer SGD. The AdaBound paper analyzes Adam's late-stage problems and traces them to the unstable, extreme learning rates that adaptive methods produce late in training: when the parameters are updated, some dimensions get extremely large effective learning rates while others get extremely small ones.

(Figure in the paper: sampled per-parameter learning rates; each cell contains a value computed from the learning rates, and lighter colors mean smaller rates.) We can see that as the model approaches convergence, the learning rates contain a large number of extreme values (many below 0.01 and many above 1000). This shows that extreme learning rates really do occur in practice.

How do we fix this and combine the strengths of the two approaches? By putting a constraint on the adaptive learning rate, specifically by clipping it dynamically. Under this scheme, the upper and lower bounds have little effect early in training, so the algorithm behaves almost like Adam; as training progresses the clipping interval tightens, the learning rates stabilize, and late in training the method behaves more like SGD. AMSBound is obtained by applying the same clipping to AMSGrad.

In other words, Adam and SGD are special cases of AdaBound.

**Lookahead**

[Paper](https://arxiv.org/abs/1907.08610v1) · [GitHub (PyTorch)](https://github.com/alphadl/lookahead.pytorch) · [GitHub (TensorFlow)](https://github.com/Janus-Shiau/lookahead_tensorflow)

The idea behind Lookahead is very simple. Strictly speaking it is not an optimizer but a scheme for using an existing optimizer: it just cycles through the following three steps.

![Lookahead steps](https://img-blog.csdnimg.cn/20190802171642721.png)

See also: [机器之心's introduction to Lookahead](https://www.jiqizhixin.com/articles/2019-07-25-14).

**LazyAdam**

Unlike images and similar domains, NLU-type tasks sample only a limited set of words per batch, so every gradient estimate for the Embedding is sparse. Non-momentum-based optimizers update only the sampled words at each step, but all optimizers with momentum (which includes Adam as well as SGD with momentum) share a problem: words not sampled in the current batch are still updated from their historical momentum, which can make the Embedding layer overfit. Concretely, once a word has been sampled, its Embedding's gradient is nonzero and that gradient is recorded in the momentum, and the actual update is driven by the momentum; in later batches where the word is not sampled, its Embedding's gradient is zero but its momentum is not, so the word still gets updated. As a result, even rarely sampled words have their Embeddings updated over and over, which leads to overfitting.

So an improved scheme is to update a word only when it has actually been sampled; that is the basic idea of a lazy optimizer.

LazyAdam is a variant of Adam that handles sparse updates more efficiently. The original Adam algorithm maintains two moving-average accumulators for every trainable variable, and both accumulators are updated at every step. LazyAdam handles gradient updates for sparse variables more lazily: it only updates the moving-average accumulators at the sparse variable indices that appear in the current batch, instead of at all indices. Compared with the original Adam optimizer this can give a large improvement in training throughput for some applications, but its semantics differ slightly from the original algorithm and may lead to different empirical results.

Implementation-wise, how do we decide whether a word has been sampled? The definitive way is of course to pass in the indices of the sampled words, but that is not very convenient to use. Here an approximation is used instead: check whether the gradient of the word's Embedding is zero; if it is, the word "very probably" was not sampled in the current batch. The reasoning is that if a word was not sampled its gradient is certainly zero, while if it was sampled the probability that every component of its gradient is simultaneously zero is tiny, so this approximation is good enough in practice.

In the AdamOptimizer source, the functions _apply_sparse and _resource_apply_sparse are the ones used for sparse updates, and the actual work is done in _apply_sparse_shared.
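Before looking at the source, here is a minimal sketch of the zero-gradient heuristic described above (the dense-gradient tensor and helper name are hypothetical, not code from the post; in practice TF hands back IndexedSlices for embedding gradients):

```python
import tensorflow as tf  # TensorFlow 1.x style, matching the rest of the notes

def sampled_row_mask(grad_rows):
    """grad_rows: dense gradient of the embedding matrix, shape [vocab_size, dim].

    A row counts as "sampled in this batch" if any component is nonzero: an
    unsampled row has an exactly-zero gradient, while a sampled row is almost
    surely nonzero in at least one component.
    """
    return tf.reduce_any(tf.not_equal(grad_rows, 0.0), axis=1)  # [vocab_size] bool
```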
**LazyAdam source:**

The formulas are the same as Adam's; the difference is that at each iteration the first and second moments are only updated at the indices present in the current batch.

```python
def _apply_sparse(self, grad, var):
    beta1_power, beta2_power = self._get_beta_accumulators()
    beta1_power = math_ops.cast(beta1_power, var.dtype.base_dtype)
    beta2_power = math_ops.cast(beta2_power, var.dtype.base_dtype)
    lr_t = math_ops.cast(self._lr_t, var.dtype.base_dtype)
    beta1_t = math_ops.cast(self._beta1_t, var.dtype.base_dtype)
    beta2_t = math_ops.cast(self._beta2_t, var.dtype.base_dtype)
    epsilon_t = math_ops.cast(self._epsilon_t, var.dtype.base_dtype)
    lr = (lr_t * math_ops.sqrt(1 - beta2_power) / (1 - beta1_power))

    # m := beta1 * m + (1 - beta1) * g_t  -- first moment, updated only at grad.indices
    m = self.get_slot(var, "m")
    m_t = state_ops.scatter_update(
        m, grad.indices,
        beta1_t * array_ops.gather(m, grad.indices) + (1 - beta1_t) * grad.values,
        use_locking=self._use_locking)

    # v := beta2 * v + (1 - beta2) * (g_t * g_t)  -- second moment, updated only at grad.indices
    v = self.get_slot(var, "v")
    v_t = state_ops.scatter_update(
        v, grad.indices,
        beta2_t * array_ops.gather(v, grad.indices) + (1 - beta2_t) * math_ops.square(grad.values),
        use_locking=self._use_locking)

    # variable -= learning_rate * m_t / (epsilon_t + sqrt(v_t))
    m_t_slice = array_ops.gather(m_t, grad.indices)
    v_t_slice = array_ops.gather(v_t, grad.indices)
    denominator_slice = math_ops.sqrt(v_t_slice) + epsilon_t
    var_update = state_ops.scatter_sub(var, grad.indices,
                                       lr * m_t_slice / denominator_slice,
                                       use_locking=self._use_locking)
    return control_flow_ops.group(var_update, m_t, v_t)
```

**Madam:**

```python
from tensorflow.python.ops import array_ops
from tensorflow.python.training import adam
from tensorflow.python.framework import ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.training import optimizer


class MaskedAdamOptimizer(adam.AdamOptimizer):
    def _apply_sparse_shared(self, grad, var, indices, scatter_add):
        beta1_power, beta2_power = self._get_beta_accumulators()
        beta1_power = math_ops.cast(beta1_power, var.dtype.base_dtype)
        beta2_power = math_ops.cast(beta2_power, var.dtype.base_dtype)
        lr_t = math_ops.cast(self._lr_t, var.dtype.base_dtype)
        beta1_t = math_ops.cast(self._beta1_t, var.dtype.base_dtype)
        beta2_t = math_ops.cast(self._beta2_t, var.dtype.base_dtype)
        epsilon_t = math_ops.cast(self._epsilon_t, var.dtype.base_dtype)
        lr = (lr_t * math_ops.sqrt(1 - beta2_power) / (1 - beta1_power))
        # m_t = beta1 * m + (1 - beta1) * g_t  -- decay the whole m, then add at the sampled indices
        m = self.get_slot(var, "m")
        m_scaled_g_values = grad * (1 - beta1_t)
        m_t = state_ops.assign(m, m * beta1_t, use_locking=self._use_locking)
        with ops.control_dependencies([m_t]):
            m_t = scatter_add(m, indices, m_scaled_g_values)
        # v_t = beta2 * v + (1 - beta2) * (g_t * g_t)  -- same treatment for the second moment
        v = self.get_slot(var, "v")
        v_scaled_g_values = (grad * grad) * (1 - beta2_t)
        v_t = state_ops.assign(v, v * beta2_t, use_locking=self._use_locking)
        with ops.control_dependencies([v_t]):
            v_t = scatter_add(v, indices, v_scaled_g_values)
        gather_m_t = array_ops.gather(m_t, indices)
        gather_v_t = array_ops.gather(v_t, indices)
        gather_v_sqrt = math_ops.sqrt(gather_v_t)
        var_update = scatter_add(var, indices,
                                 -lr * gather_m_t / (gather_v_sqrt + epsilon_t))
        return control_flow_ops.group(*[var_update, m_t, v_t])
```
The two differ in how they update the moving-average accumulators (the first and second moments):

**LazyAdam:**

```python
m_t = state_ops.scatter_update(
    m, grad.indices,
    beta1_t * array_ops.gather(m, grad.indices) + (1 - beta1_t) * grad.values,
    use_locking=self._use_locking)
```

**Madam:**

```python
m_scaled_g_values = grad * (1 - beta1_t)
m_t = state_ops.assign(m, m * beta1_t, use_locking=self._use_locking)
with ops.control_dependencies([m_t]):
    m_t = scatter_add(m, indices, m_scaled_g_values)
```

Madam actually sits between LazyAdam and Adam. Its only difference from LazyAdam is how the first moment m and the second moment v are decayed: Madam decays everything, i.e. the accumulated momentum of variables that were not sampled in the current batch is also decayed, whereas LazyAdam only decays the embeddings that were sampled. (When computing the exponential moving averages, LazyAdam only folds new values into the averages of the variables sampled in the current batch and leaves unsampled ones untouched, while Madam decays all of them.)

One problem with LazyAdam is that when a gradient is 0 it does not update the corresponding m and v at all, even though m and v should in fact keep being updated as the other weights change. Madam presumably fixes exactly this, which is why it performs better.

To make the difference more concrete, here is a hypothetical example using the first moment:

![first-moment comparison](https://img2020.cnblogs.com/blog/1338991/202006/1338991-20200611101919888-2023438762.png)
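A small numerical sketch of the same point (assumed values, not from the post: beta1 = 0.9, one embedding row whose word is sampled at step 1 with gradient 1.0 and then not sampled at steps 2 and 3):

```python
beta1 = 0.9
grads = [1.0, 0.0, 0.0]   # gradient of one embedding row over three steps
                          # (0.0 means the word was not sampled in that batch)

m_lazy, m_madam = 0.0, 0.0
for g_t in grads:
    if g_t != 0.0:                                   # LazyAdam: only touch sampled rows
        m_lazy = beta1 * m_lazy + (1 - beta1) * g_t
    m_madam = beta1 * m_madam + (1 - beta1) * g_t    # Madam: decay every row at every step

print(m_lazy, m_madam)   # ~0.1 vs ~0.081: Madam keeps decaying the unsampled row
```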
Source: https://www.zhihu.com/question/265357659/answer/580469438 (Zhihu)

AllenNLP also mentioned this in its 2018 EMNLP tutorial.

Unlike images and similar domains, NLU-type tasks sample only a limited set of words per batch, so every gradient estimate for the Embedding is sparse. Non-momentum-based optimizers only update the sampled words at each step, but for momentum-based optimizers, every framework's current implementation uses the current momentum to update all words, even words that have not been sampled for dozens of consecutive steps. This can make the Embedding overfit.

Below are the valid-set accuracy curves of a text classification problem under different settings, with a fixed learning rate. "madam" is the corrected Adam; "LoEmbed" indicates whether pretrained word vectors are loaded. With pretrained embeddings loaded, unmodified Adam overfits unbearably.

![valid-set accuracy curves](https://pic2.zhimg.com/v2-1b964f57beab7fbaf2c4ebec4c5d06f1_r.jpg)

**Code:** the same MaskedAdamOptimizer implementation shown above (written for TensorFlow 1.12.0).
TensorFlow ships something called LazyAdamOptimizer, but in my tests it is consistently worse than this implementation.
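For completeness, a minimal sketch of how the MaskedAdamOptimizer class above might be used (the embedding variable, lookup and loss here are hypothetical; gradients coming from tf.nn.embedding_lookup are IndexedSlices, so they go through _apply_sparse_shared):

```python
import tensorflow as tf  # TensorFlow 1.x

embeddings = tf.get_variable("emb", shape=[10000, 128])        # embedding matrix
ids = tf.placeholder(tf.int32, [None])                         # word ids in the batch
vecs = tf.nn.embedding_lookup(embeddings, ids)                 # sparse gradient w.r.t. embeddings
loss = tf.reduce_mean(tf.reduce_sum(tf.square(vecs), axis=1))  # dummy loss for illustration

# MaskedAdamOptimizer is the class defined in the code block above.
train_op = MaskedAdamOptimizer(learning_rate=1e-3).minimize(loss)
```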