API


TensorLayer provides rich layer implementations tailored for various benchmarks and domain-specific problems. In addition, we also support transparent access to native TensorFlow parameters. For example, we provide not only a layer for local response normalization, but also layers that allow users to apply tf.nn.lrn on network.outputs. More functions can be found in the TensorFlow API.
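
As a minimal sketch of this transparent access, assuming the TensorFlow 1.x graph API (the network, layer names, and shapes below are illustrative, not prescribed by the library), a native TensorFlow op such as tf.nn.lrn can be applied directly to a layer's output tensor:

    import tensorflow as tf
    import tensorlayer as tl

    # Illustrative network; the input shape and layer names are assumptions.
    x = tf.placeholder(tf.float32, [None, 32, 32, 3], name='x')
    net = tl.layers.InputLayer(x, name='input')
    net = tl.layers.Conv2d(net, 64, (3, 3), act=tf.nn.relu, name='conv1')

    # Apply a native TensorFlow op directly to the layer's outputs.
    net.outputs = tf.nn.lrn(net.outputs, depth_radius=4, bias=1.0,
                            alpha=0.001 / 9.0, beta=0.75, name='lrn1')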

We provide new optimizer APIs compatible with TensorFlow to save you development time.

AMSGrad([learning_rate, beta1, beta2, ...]) -- Implementation of the AMSGrad optimization algorithm.

class tensorlayer.optimizers.AMSGrad(learning_rate=0.01, beta1=0.9, beta2=0.99, epsilon=1e-08, use_locking=False, name='AMSGrad') [source]

Implementation of the AMSGrad optimization algorithm.

See: On the Convergence of Adam and Beyond - [Reddi et al., 2018].
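For reference, AMSGrad differs from Adam by keeping a running maximum of the second-moment estimate and using it in the update. With gradient g_t, decay rates beta1 and beta2, and the stability constant epsilon, the update as typically implemented is:

    m_t     = beta1 * m_{t-1} + (1 - beta1) * g_t
    v_t     = beta2 * v_{t-1} + (1 - beta2) * g_t^2
    vhat_t  = max(vhat_{t-1}, v_t)
    theta_t = theta_{t-1} - learning_rate * m_t / (sqrt(vhat_t) + epsilon)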

Parameters
  • learning_rate (float) -- A Tensor or a floating point value. The learning rate.

  • beta1 (float) -- A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.

  • beta2 (float) -- A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.

  • epsilon (float) -- A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper.

  • use_locking (bool) -- If True, use locks for update operations.

  • name (str) -- Optional name for the operations created when applying gradients. Defaults to "AMSGrad".
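
A minimal usage sketch, assuming AMSGrad is exposed as tl.optimizers.AMSGrad and follows the standard tf.train.Optimizer interface (the placeholder model and cost below are illustrative, not part of the optimizer API):

    import tensorflow as tf
    import tensorlayer as tl

    # Illustrative model; names and shapes are assumptions.
    x = tf.placeholder(tf.float32, [None, 784], name='x')
    y_ = tf.placeholder(tf.int64, [None], name='y_')
    net = tl.layers.InputLayer(x, name='input')
    net = tl.layers.DenseLayer(net, n_units=10, name='output')
    cost = tl.cost.cross_entropy(net.outputs, y_, name='cost')

    # Since AMSGrad follows the tf.train.Optimizer interface,
    # minimize() builds the training op as usual.
    train_op = tl.optimizers.AMSGrad(learning_rate=0.01, beta1=0.9,
                                     beta2=0.99, epsilon=1e-8).minimize(cost)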
