【TensorFlow Slim】The slim.learning package
TF-Slim provides a simple yet powerful set of tools for training models in learning.py. These include a training function that repeatedly evaluates the loss, computes gradients, and saves the model to disk, as well as several convenience functions for manipulating gradients. For example, once we have specified a model, a loss function, and an optimization scheme, we can call slim.learning.create_train_op and slim.learning.train to perform the optimization:
g = tf.Graph()
# Create the model and specify the losses...
...

total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

# create_train_op ensures that each time we ask for the loss, the update_ops
# are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = ...  # Where checkpoints are stored.

slim.learning.train(
    train_op,
    logdir,
    number_of_steps=1000,
    save_summaries_secs=300,
    save_interval_secs=600)
In this example, slim.learning.train is provided with the train_op, which is used to (a) compute the loss and (b) apply the gradient step. logdir specifies the directory where checkpoints and event files are stored. We can limit the number of gradient steps to any value; in this case we ask for 1000 steps. Finally, save_summaries_secs=300 indicates that summaries are computed every 5 minutes, and save_interval_secs=600 indicates that a model checkpoint is saved every 10 minutes.
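The "convenience functions for manipulating gradients" mentioned above are exposed through extra arguments of slim.learning.create_train_op. Below is a minimal sketch, not part of the original example, assuming the same total_loss and optimizer as above; the variable name used in gradient_multipliers is purely illustrative:

# Clip gradients whose norm exceeds 4.0 before they are applied:
train_op = slim.learning.create_train_op(
    total_loss,
    optimizer,
    clip_gradient_norm=4.0)

# Or scale the gradients of selected variables, e.g. to train a final layer
# faster than the rest of the network ('fc8/weights' is a hypothetical
# variable name, not one defined in this article):
train_op = slim.learning.create_train_op(
    total_loss,
    optimizer,
    gradient_multipliers={'fc8/weights': 10.0})

Either train_op can then be passed to slim.learning.train exactly as in the snippet above.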
Working Example: Training the VGG16 Model
import tensorflow as tf

slim = tf.contrib.slim
vgg = tf.contrib.slim.nets.vgg
...
train_log_dir = ...
if not tf.gfile.Exists(train_log_dir):
  tf.gfile.MakeDirs(train_log_dir)

with tf.Graph().as_default():
  # Set up the data loading:
  images, labels = ...

  # Define the model. vgg_16 returns the logits and a dict of end points;
  # only the logits are needed for the loss:
  predictions, _ = vgg.vgg_16(images, is_training=True)

  # Specify the loss function:
  slim.losses.softmax_cross_entropy(predictions, labels)
  total_loss = slim.losses.get_total_loss()
  tf.summary.scalar('losses/total_loss', total_loss)

  # Specify the optimization scheme:
  optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)

  # create_train_op ensures that when we evaluate it to get the loss,
  # the update_ops are run and the gradient updates are computed.
  train_tensor = slim.learning.create_train_op(total_loss, optimizer)

  # Actually runs training.
  slim.learning.train(train_tensor, train_log_dir)
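As in the first snippet, this slim.learning.train call also accepts the step limit and summary/checkpoint intervals; a minimal variant, with illustrative values only:

  slim.learning.train(
      train_tensor,
      train_log_dir,
      number_of_steps=1000,      # stop after 1000 gradient steps
      save_summaries_secs=300,   # write summaries every 5 minutes
      save_interval_secs=600)    # save a checkpoint every 10 minutes

The summaries and checkpoints written to train_log_dir can then be inspected while training runs by pointing TensorBoard at that directory, e.g. tensorboard --logdir=<train_log_dir>.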