Recurrent Neural Networks

Introduction to Recurrent Neural Networks

The main use of recurrent neural networks (RNNs) is processing and predicting sequence data. Their ability to extract deep representations of the temporal and semantic information in data is fully exploited, and they have produced breakthroughs in speech recognition, language modeling, machine translation, and time series analysis. RNNs were designed to model how the current output of a sequence depends on the information that came before it. Structurally, an RNN remembers earlier information and uses it to influence the outputs of later nodes. In other words, the nodes of the hidden layer are connected across time steps: the input of the hidden layer includes not only the output of the input layer, but also the output of the hidden layer at the previous time step.

At each time step, an RNN produces an output based on the current input combined with the current state of the model. In theory, an RNN can therefore be viewed as the same network structure replicated an unlimited number of times.

The structure that gets replicated at every time step is called the recurrent cell. Designing the network structure of the recurrent cell is the key to using RNNs to solve real problems. The simplest recurrent cell consists of nothing more than a structure similar to a single fully connected layer.

A Concrete RNN Example

The forward propagation of this simple recurrent cell consists of the following two steps (summarized in the formulas after the list):

  1. Using the weights and biases of the recurrent cell, compute the new system state from the previous system state and the current input.
  2. From the computed system state, obtain the system output through a fully connected network.
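
In the notation of the code below, one time step of this cell can therefore be written as

state_t  = tanh(state_{t-1} · w_cell_state + x_t · w_cell_input + b_cell)
output_t = state_t · w_output + b_output
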
import numpy as np

# Input sequence and initial state of the recurrent cell.
X = [1, 2]
state = [0.0, 0.0]

# Weights and bias of the recurrent cell (state transition part).
w_cell_state = np.asarray([[0.1, 0.2], [0.3, 0.4]])
w_cell_input = np.asarray([0.5, 0.6])
b_cell = np.asarray([0.1, -0.1])

# Weights and bias of the fully connected output layer.
w_output = np.asarray([[1.0], [2.0]])
b_output = 0.1

for i in range(len(X)):
    # Step 1: compute the new state from the previous state and the current input.
    before_activation = np.dot(state, w_cell_state) + X[i] * w_cell_input + b_cell
    state = np.tanh(before_activation)
    # Step 2: compute the output from the new state through a fully connected layer.
    final_output = np.dot(state, w_output) + b_output
    print("before activation: ", before_activation)
    print("state: ", state)
    print("output: ", final_output)

"""
before activation:  [ 0.6  0.5]
state:  [ 0.53704957  0.46211716]
output:  [ 1.56128388]
before activation:  [ 1.2923401   1.39225678]
state:  [ 0.85973818  0.88366641]
output:  [ 2.72707101]
"""

Finally, it should be pointed out that in theory an RNN can handle sequences of arbitrary length. In practice, however, sequences that are too long lead to the vanishing gradient problem during optimization, so a maximum length is usually specified, and sequences longer than that are truncated.

The Long Short-Term Memory (LSTM) Structure

When making predictions on a sequence, the current decision may depend on information that is either very recent or very far in the past. In complex language scenarios the gap between a prediction and the relevant information varies greatly, and the performance of a plain RNN suffers as a result. LSTM was designed to solve exactly this problem.

An LSTM uses gate structures to let information selectively influence the state of the network at each time step. A "gate" is simply a sigmoid network layer combined with an element-wise multiplication. To let the network retain long-term memories effectively, the forget gate and the input gate are essential; they are the core of the LSTM structure.

The forget gate decides, based on the current input, the previous state, and the previous output, which part of the memory should be forgotten. The input gate decides, from the same information, which parts should be written into the current state. For example, a passage may first describe a place as having clear water and blue skies, and later say that it has been polluted. After reading "polluted", the forget gate discards the earlier "clear water and blue skies" memory, while the input gate writes "the environment has been polluted" into the new state.

Finally, the output gate determines the output of the current time step based on the latest state, the previous output, and the current input.
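
Written out, one common formulation of these gates is (sigma is the sigmoid function, * denotes element-wise multiplication, x_t is the current input, h_{t-1} the previous output, and c_{t-1} the previous state):

f_t  = sigma(W_f · [h_{t-1}, x_t] + b_f)    (forget gate)
i_t  = sigma(W_i · [h_{t-1}, x_t] + b_i)    (input gate)
c'_t = tanh(W_c · [h_{t-1}, x_t] + b_c)     (candidate state)
c_t  = f_t * c_{t-1} + i_t * c'_t           (new state)
o_t  = sigma(W_o · [h_{t-1}, x_t] + b_o)    (output gate)
h_t  = o_t * tanh(c_t)                      (new output)

TensorFlow provides such a cell as BasicLSTMCell; the pseudocode below shows how it is used in a training loop: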

# Define an LSTM cell and initialize its state to zeros.
lstm = rnn_cell.BasicLSTMCell(lstm_hidden_size)
state = lstm.zero_state(batch_size, tf.float32)
loss = 0.0
for i in range(num_steps):
    # Reuse the variables of the LSTM cell from the second time step onward.
    if i > 0:
        tf.get_variable_scope().reuse_variables()
    # Pass the current input and the previous state through the LSTM cell.
    lstm_output, state = lstm(current_input, state)
    # A fully connected layer turns the LSTM output into the final output,
    # which is compared against the expected output to accumulate the loss.
    final_output = fully_connected(lstm_output)
    loss += calc_loss(final_output, expected_output)

RNN Variants: Bidirectional RNNs and Deep RNNs

In some problems, the output at the current time step depends not only on earlier states but also on later ones, and a bidirectional RNN is needed. For example, predicting a missing word in a sentence requires looking not only at the preceding text but also at what follows, which is exactly where a bidirectional RNN is useful.

The main structure of a bidirectional RNN is simply two unidirectional RNNs combined. At every time step the input is fed to both networks, which process the sequence in opposite directions, and the output is determined jointly by the two.
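
As a minimal sketch (assuming TensorFlow 1.x, where lstm_hidden_size and the 3-D tensor inputs of shape [batch_size, num_steps, input_size] are hypothetical placeholders), tf.nn.bidirectional_dynamic_rnn runs a forward cell and a backward cell over the same inputs:

lstm_fw = tf.contrib.rnn.BasicLSTMCell(lstm_hidden_size)  # forward direction
lstm_bw = tf.contrib.rnn.BasicLSTMCell(lstm_hidden_size)  # backward direction
(output_fw, output_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    lstm_fw, lstm_bw, inputs, dtype=tf.float32)
# The output at each time step is determined jointly by both directions,
# for example by concatenating the two output sequences.
bi_output = tf.concat([output_fw, output_bw], axis=-1)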

A deep RNN is another variant of the recurrent network. To increase the expressive power of the model, the recurrent cell at each time step can be stacked multiple times. As in a convolutional network, the parameters within one layer are shared across time steps, while different layers may use different parameters.

To make deep RNNs easier to build, TensorFlow provides the MultiRNNCell class, which implements the forward propagation of a deep recurrent network.

lstm = rnn_cell.BasicLSTMCell(lstm_size)
# Stack number_of_layers LSTM cells to build a deep RNN.
stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers)
state = stacked_lstm.zero_state(batch_size, tf.float32)
for i in range(num_steps):
    if i > 0: tf.get_variable_scope().reuse_variables()
    stacked_lstm_output, state = stacked_lstm(current_input, state)
    final_output = fully_connected(stacked_lstm_output)
    loss += calc_loss(final_output, expected_output)

Dropout in Recurrent Neural Networks

Just as a convolutional network applies dropout only in its final fully connected layers, an RNN normally applies dropout only between the recurrent cells of different layers, not between the recurrent cells within the same layer (that is, not along the time dimension).

In TensorFlow, the tf.nn.rnn_cell.DropoutWrapper class makes it easy to add dropout.

lstm = rnn_cell.BasicLSTMCell(lstm_size)
dropout_lstm = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=0.5)
stacked_lstm = rnn_cell.MultiRNNCell([dropout_lstm] * number_of_layers)

Sample Applications of Recurrent Neural Networks

Natural Language Modeling

Introduction to the PTB Text Dataset

import tensorflow as tf
import reader

DATA_PATH = "../../datasets/PTB_data"
train_data, valid_data, test_data, _ = reader.ptb_raw_data(DATA_PATH)
print(len(train_data))
print(train_data[:100])

"""
929589
[9970, 9971, 9972, 9974, 9975, 9976, 9980, 9981, 9982, 9983, 9984, 9986, 9987, 9988, 9989, 9991, 9992, 9993, 9994, 9995, 9996, 9997, 9998, 9999, 2, 9256, 1, 3, 72, 393, 33, 2133, 0, 146, 19, 6, 9207, 276, 407, 3, 2, 23, 1, 13, 141, 4, 1, 5465, 0, 3081, 1596, 96, 2, 7682, 1, 3, 72, 393, 8, 337, 141, 4, 2477, 657, 2170, 955, 24, 521, 6, 9207, 276, 4, 39, 303, 438, 3684, 2, 6, 942, 4, 3150, 496, 263, 5, 138, 6092, 4241, 6036, 30, 988, 6, 241, 760, 4, 1015, 2786, 211, 6, 96, 4]
"""

# ptb_producer returns a pair of 2-D tensors (the input batch and the label batch);
# here batch_size = 4 and num_steps = 5.
result = reader.ptb_producer(train_data, 4, 5)

# Read batches one after another through the input queue.
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(3):
        x, y = sess.run(result)
        print("X%d: " % i, x)
        print("Y%d: " % i, y)
    coord.request_stop()
    coord.join(threads)

"""
X0:  [[9970 9971 9972 9974 9975]
 [ 332 7147  328 1452 8595]
 [1969    0   98   89 2254]
 [   3    3    2   14   24]]
Y0:  [[9971 9972 9974 9975 9976]
 [7147  328 1452 8595   59]
 [   0   98   89 2254    0]
 [   3    2   14   24  198]]
X1:  [[9976 9980 9981 9982 9983]
 [  59 1569  105 2231    1]
 [   0  312 1641    4 1063]
 [ 198  150 2262   10    0]]
Y1:  [[9980 9981 9982 9983 9984]
 [1569  105 2231    1  895]
 [ 312 1641    4 1063    8]
 [ 150 2262   10    0  507]]
X2:  [[9984 9986 9987 9988 9989]
 [ 895    1 5574    4  618]
 [   8  713    0  264  820]
 [ 507   74 2619    0    1]]
Y2:  [[9986 9987 9988 9989 9991]
 [   1 5574    4  618    2]
 [ 713    0  264  820    2]
 [  74 2619    0    1    8]]
"""

Building a Language Model with a Recurrent Neural Network

import numpy as np
import tensorflow as tf
import reader

DATA_PATH = "../../datasets/PTB_data"
HIDDEN_SIZE = 200
NUM_LAYERS = 2
VOCAB_SIZE = 10000

LEARNING_RATE = 1.0
TRAIN_BATCH_SIZE = 20
TRAIN_NUM_STEP = 35

EVAL_BATCH_SIZE = 1
EVAL_NUM_STEP = 1
NUM_EPOCH = 2
KEEP_PROB = 0.5
MAX_GRAD_NORM = 5

class PTBModel(object):
    def __init__(self, is_training, batch_size, num_steps):
        
        self.batch_size = batch_size
        self.num_steps = num_steps
        
        # Define the input layer.
        self.input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
        self.targets = tf.placeholder(tf.int32, [batch_size, num_steps])
        
        # Define the LSTM structure and apply dropout during training.
        lstm_cell = tf.contrib.rnn.BasicLSTMCell(HIDDEN_SIZE)
        if is_training:
            lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=KEEP_PROB)
        cell = tf.contrib.rnn.MultiRNNCell([lstm_cell]*NUM_LAYERS)
        
        # Initialize the initial state.
        self.initial_state = cell.zero_state(batch_size, tf.float32)
        embedding = tf.get_variable("embedding", [VOCAB_SIZE, HIDDEN_SIZE])
        
        # Convert the original word IDs into word vectors (embeddings).
        inputs = tf.nn.embedding_lookup(embedding, self.input_data)
        
        if is_training:
            inputs = tf.nn.dropout(inputs, KEEP_PROB)

        # Define the list that collects the LSTM outputs.
        outputs = []
        state = self.initial_state
        with tf.variable_scope("RNN"):
            for time_step in range(num_steps):
                if time_step > 0: tf.get_variable_scope().reuse_variables()
                cell_output, state = cell(inputs[:, time_step, :], state)
                outputs.append(cell_output) 
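        # outputs is a list of num_steps tensors of shape [batch_size, HIDDEN_SIZE];
        # concatenating and reshaping below turns it into a
        # [batch_size * num_steps, HIDDEN_SIZE] matrix.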
        output = tf.reshape(tf.concat(outputs, 1), [-1, HIDDEN_SIZE])
        weight = tf.get_variable("weight", [HIDDEN_SIZE, VOCAB_SIZE])
        bias = tf.get_variable("bias", [VOCAB_SIZE])
        logits = tf.matmul(output, weight) + bias
        
        # Define the cross-entropy loss function and the average loss.
        loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
            [logits],
            [tf.reshape(self.targets, [-1])],
            [tf.ones([batch_size * num_steps], dtype=tf.float32)])
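        # sequence_loss_by_example returns the cross-entropy of each of the
        # batch_size * num_steps predicted words; summing it and dividing by
        # batch_size gives the average total loss per sequence in the batch.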
        self.cost = tf.reduce_sum(loss) / batch_size
        self.final_state = state
        
        # Define the backpropagation operation only when training the model.
        if not is_training: return
        trainable_variables = tf.trainable_variables()

        # Clip the gradients, then define the optimizer and the training step.
        grads, _ = tf.clip_by_global_norm(tf.gradients(self.cost, trainable_variables), MAX_GRAD_NORM)
        optimizer = tf.train.GradientDescentOptimizer(LEARNING_RATE)
        self.train_op = optimizer.apply_gradients(zip(grads, trainable_variables))

def run_epoch(session, model, data, train_op, output_log, epoch_size):
    total_costs = 0.0
    iters = 0
    state = session.run(model.initial_state)

    # Train one epoch.
    for step in range(epoch_size):
        x, y = session.run(data)
        cost, state, _ = session.run([model.cost, model.final_state, train_op],
                                        {model.input_data: x, model.targets: y, model.initial_state: state})
        total_costs += cost
        iters += model.num_steps

        if output_log and step % 100 == 0:
            print("After %d steps, perplexity is %.3f" % (step, np.exp(total_costs / iters)))
    return np.exp(total_costs / iters)

def main():
    train_data, valid_data, test_data, _ = reader.ptb_raw_data(DATA_PATH)

    # Compute the number of training iterations per epoch.
    train_data_len = len(train_data)
    train_batch_len = train_data_len // TRAIN_BATCH_SIZE
    train_epoch_size = (train_batch_len - 1) // TRAIN_NUM_STEP

    valid_data_len = len(valid_data)
    valid_batch_len = valid_data_len // EVAL_BATCH_SIZE
    valid_epoch_size = (valid_batch_len - 1) // EVAL_NUM_STEP

    test_data_len = len(test_data)
    test_batch_len = test_data_len // EVAL_BATCH_SIZE
    test_epoch_size = (test_batch_len - 1) // EVAL_NUM_STEP

    initializer = tf.random_uniform_initializer(-0.05, 0.05)
    with tf.variable_scope("language_model", reuse=None, initializer=initializer):
        train_model = PTBModel(True, TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)

    with tf.variable_scope("language_model", reuse=True, initializer=initializer):
        eval_model = PTBModel(False, EVAL_BATCH_SIZE, EVAL_NUM_STEP)
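    # Because the second variable scope sets reuse=True, the evaluation model
    # shares all of its variables with the training model defined above.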

    # Train the model.
    with tf.Session() as session:
        tf.global_variables_initializer().run()

        train_queue = reader.ptb_producer(train_data, train_model.batch_size, train_model.num_steps)
        eval_queue = reader.ptb_producer(valid_data, eval_model.batch_size, eval_model.num_steps)
        test_queue = reader.ptb_producer(test_data, eval_model.batch_size, eval_model.num_steps)

        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=session, coord=coord)

        for i in range(NUM_EPOCH):
            print("In iteration: %d" % (i + 1))
            run_epoch(session, train_model, train_queue, train_model.train_op, True, train_epoch_size)

            valid_perplexity = run_epoch(session, eval_model, eval_queue, tf.no_op(), False, valid_epoch_size)
            print("Epoch: %d Validation Perplexity: %.3f" % (i + 1, valid_perplexity))

        test_perplexity = run_epoch(session, eval_model, test_queue, tf.no_op(), False, test_epoch_size)
        print("Test Perplexity: %.3f" % test_perplexity)

        coord.request_stop()
        coord.join(threads)

if __name__ == "__main__":
    main()
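
Running this script prints perplexity values like the ones below. Perplexity is the standard evaluation metric for language models: it is the exponential of the average per-word cross-entropy,

perplexity = exp( -(1/N) * sum_i log p(w_i) )

which is exactly what np.exp(total_costs / iters) computes in run_epoch. It can be read as the number of words the model is, on average, choosing between at each position, so lower is better; here the validation perplexity falls to about 181 after two epochs.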

"""
In iteration: 1
After 0 steps, perplexity is 9979.425
After 100 steps, perplexity is 1419.514
After 200 steps, perplexity is 1069.710
After 300 steps, perplexity is 890.482
After 400 steps, perplexity is 779.052
After 500 steps, perplexity is 702.706
After 600 steps, perplexity is 645.688
After 700 steps, perplexity is 595.862
After 800 steps, perplexity is 551.698
After 900 steps, perplexity is 517.378
After 1000 steps, perplexity is 490.698
After 1100 steps, perplexity is 465.219
After 1200 steps, perplexity is 444.531
After 1300 steps, perplexity is 425.488
Epoch: 1 Validation Perplexity: 235.968
In iteration: 2
After 0 steps, perplexity is 353.259
After 100 steps, perplexity is 244.039
After 200 steps, perplexity is 249.303
After 300 steps, perplexity is 249.369
After 400 steps, perplexity is 246.174
After 500 steps, perplexity is 243.266
After 600 steps, perplexity is 242.566
After 700 steps, perplexity is 239.728
After 800 steps, perplexity is 235.205
After 900 steps, perplexity is 232.611
After 1000 steps, perplexity is 231.105
After 1100 steps, perplexity is 227.587
After 1200 steps, perplexity is 225.045
After 1300 steps, perplexity is 222.219
Epoch: 2 Validation Perplexity: 181.089
Test Perplexity: 176.798
"""

An SKlearn-Style Wrapper Example

from sklearn import model_selection
from sklearn import datasets
from sklearn import metrics
import tensorflow as tf
import numpy as np
from tensorflow.contrib.learn.python.learn.estimators.estimator import SKCompat
learn = tf.contrib.learn

def my_model(features, target):
    target = tf.one_hot(target, 3, 1, 0)
    
    # Compute the predictions and the loss.
    logits = tf.contrib.layers.fully_connected(features, 3, tf.nn.softmax)
    loss = tf.losses.softmax_cross_entropy(target, logits)
    
    # Create the optimization step.
    train_op = tf.contrib.layers.optimize_loss(
        loss,
        tf.contrib.framework.get_global_step(),
        optimizer='Adam',
        learning_rate=0.01)
    return tf.arg_max(logits, 1), loss, train_op

iris = datasets.load_iris()
x_train, x_test, y_train, y_test = model_selection.train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)

x_train, x_test = map(np.float32, [x_train, x_test])

classifier = SKCompat(learn.Estimator(model_fn=my_model, model_dir="Models/model_1"))
classifier.fit(x_train, y_train, steps=800)

y_predicted = [i for i in classifier.predict(x_test)]
score = metrics.accuracy_score(y_test, y_predicted)
print('Accuracy: %.2f%%' % (score * 100))

"""
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_tf_random_seed': None, '_task_type': None, '_environment': 'local', '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x11487e290>, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1
}
, '_task_id': 0, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_evaluation_master': '', '_keep_checkpoint_every_n_hours': 10000, '_master': ''}
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into Models/model_1/model.ckpt.
INFO:tensorflow:loss = 1.09918, step = 1
INFO:tensorflow:global_step/sec: 677.497
INFO:tensorflow:loss = 0.783171, step = 101
INFO:tensorflow:global_step/sec: 706.629
INFO:tensorflow:loss = 0.696257, step = 201
INFO:tensorflow:global_step/sec: 701.641
INFO:tensorflow:loss = 0.655656, step = 301
INFO:tensorflow:global_step/sec: 562.503
INFO:tensorflow:loss = 0.634671, step = 401
INFO:tensorflow:global_step/sec: 582.211
INFO:tensorflow:loss = 0.622115, step = 501
INFO:tensorflow:global_step/sec: 522.269
INFO:tensorflow:loss = 0.613794, step = 601
INFO:tensorflow:global_step/sec: 583.315
INFO:tensorflow:loss = 0.607876, step = 701
INFO:tensorflow:Saving checkpoints for 800 into Models/model_1/model.ckpt.
INFO:tensorflow:Loss for final step: 0.603483.
Accuracy: 100.00%
"""

Time Series Prediction

import numpy as np
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.estimators.estimator import SKCompat
from tensorflow.python.ops import array_ops as array_ops_
import matplotlib.pyplot as plt
learn = tf.contrib.learn

HIDDEN_SIZE = 30
NUM_LAYERS = 2

TIMESTEPS = 10
TRAINING_STEPS = 3000
BATCH_SIZE = 32

TRAINING_EXAMPLES = 10000
TESTING_EXAMPLES = 1000
SAMPLE_GAP = 0.01

def generate_data(seq):
    X = []
    y = []

    for i in range(len(seq) - TIMESTEPS - 1):
        X.append([seq[i: i + TIMESTEPS]])
        y.append([seq[i + TIMESTEPS]])
    return np.array(X, dtype=np.float32), np.array(y, dtype=np.float32)
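
# For example, with TIMESTEPS = 10 and seq = [s_0, s_1, ..., s_n], generate_data
# produces samples of the form
#     X[i] = [[s_i, s_{i+1}, ..., s_{i+9}]],   y[i] = [s_{i+10}]
# i.e. each window of TIMESTEPS consecutive values is used to predict the value
# that immediately follows it.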

def lstm_model(X, y):
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(HIDDEN_SIZE, state_is_tuple=True)
    cell = tf.contrib.rnn.MultiRNNCell([lstm_cell] * NUM_LAYERS)
    
    output, _ = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
    output = tf.reshape(output, [-1, HIDDEN_SIZE])
    
    # Compute a linear regression through a fully connected layer with no
    # activation function, and flatten the result into a one-dimensional array.
    predictions = tf.contrib.layers.fully_connected(output, 1, None)
    
    # Reshape predictions and labels into the same shape.
    labels = tf.reshape(y, [-1])
    predictions=tf.reshape(predictions, [-1])
    
    loss = tf.losses.mean_squared_error(predictions, labels)
    
    train_op = tf.contrib.layers.optimize_loss(
        loss, tf.contrib.framework.get_global_step(),
        optimizer="Adagrad", learning_rate=0.1)

    return predictions, loss, train_op

# Wrap the LSTM model defined above in an sklearn-style estimator.
regressor = SKCompat(learn.Estimator(model_fn=lstm_model,model_dir="Models/model_2"))

# Generate the training and test data.
test_start = TRAINING_EXAMPLES * SAMPLE_GAP
test_end = (TRAINING_EXAMPLES + TESTING_EXAMPLES) * SAMPLE_GAP
train_X, train_y = generate_data(np.sin(np.linspace(
    0, test_start, TRAINING_EXAMPLES, dtype=np.float32)))
test_X, test_y = generate_data(np.sin(np.linspace(
    test_start, test_end, TESTING_EXAMPLES, dtype=np.float32)))

# Fit the model on the training data.
regressor.fit(train_X, train_y, batch_size=BATCH_SIZE, steps=TRAINING_STEPS)

# Compute predictions on the test data.
predicted = [[pred] for pred in regressor.predict(test_X)]

# Compute the root mean squared error (RMSE) of the predictions.
rmse = np.sqrt(((predicted - test_y) ** 2).mean(axis=0))
print ("Mean Square Error is: %f" % rmse[0])

"""
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_tf_random_seed': None, '_task_type': None, '_environment': 'local', '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x11a5590d0>, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1
}
, '_task_id': 0, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_evaluation_master': '', '_keep_checkpoint_every_n_hours': 10000, '_master': ''}
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into Models/model_2/model.ckpt.
INFO:tensorflow:loss = 0.491647, step = 1
INFO:tensorflow:global_step/sec: 188.742
INFO:tensorflow:loss = 0.00536189, step = 101
INFO:tensorflow:global_step/sec: 232.293
INFO:tensorflow:loss = 0.00523746, step = 201
INFO:tensorflow:global_step/sec: 222.852
INFO:tensorflow:loss = 0.00308377, step = 301
INFO:tensorflow:global_step/sec: 221.96
INFO:tensorflow:loss = 0.00336958, step = 401
INFO:tensorflow:global_step/sec: 214.658
INFO:tensorflow:loss = 0.00279854, step = 501
INFO:tensorflow:global_step/sec: 198.507
INFO:tensorflow:loss = 0.00239536, step = 601
INFO:tensorflow:global_step/sec: 235.11
INFO:tensorflow:loss = 0.00155067, step = 701
INFO:tensorflow:global_step/sec: 234.653
INFO:tensorflow:loss = 0.00157833, step = 801
INFO:tensorflow:global_step/sec: 204.6
INFO:tensorflow:loss = 0.00120256, step = 901
INFO:tensorflow:global_step/sec: 205.953
INFO:tensorflow:loss = 0.000943568, step = 1001
INFO:tensorflow:global_step/sec: 220.453
INFO:tensorflow:loss = 0.000876339, step = 1101
INFO:tensorflow:global_step/sec: 221.479
INFO:tensorflow:loss = 0.000911509, step = 1201
INFO:tensorflow:global_step/sec: 196.529
INFO:tensorflow:loss = 0.000629159, step = 1301
INFO:tensorflow:global_step/sec: 212.558
INFO:tensorflow:loss = 0.000686743, step = 1401
INFO:tensorflow:global_step/sec: 212.959
INFO:tensorflow:loss = 0.000698992, step = 1501
INFO:tensorflow:global_step/sec: 224.379
INFO:tensorflow:loss = 0.000569221, step = 1601
INFO:tensorflow:global_step/sec: 206.306
INFO:tensorflow:loss = 0.000498735, step = 1701
INFO:tensorflow:global_step/sec: 224.091
INFO:tensorflow:loss = 0.000503566, step = 1801
INFO:tensorflow:global_step/sec: 176.59
INFO:tensorflow:loss = 0.000373, step = 1901
INFO:tensorflow:global_step/sec: 220.858
INFO:tensorflow:loss = 0.000287493, step = 2001
INFO:tensorflow:global_step/sec: 195.514
INFO:tensorflow:loss = 0.000209309, step = 2101
INFO:tensorflow:global_step/sec: 224.015
INFO:tensorflow:loss = 0.000323008, step = 2201
INFO:tensorflow:global_step/sec: 195.972
INFO:tensorflow:loss = 0.00015437, step = 2301
INFO:tensorflow:global_step/sec: 219.664
INFO:tensorflow:loss = 0.000121245, step = 2401
INFO:tensorflow:global_step/sec: 216.561
INFO:tensorflow:loss = 0.000165566, step = 2501
INFO:tensorflow:global_step/sec: 233.534
INFO:tensorflow:loss = 0.000130429, step = 2601
INFO:tensorflow:global_step/sec: 230.028
INFO:tensorflow:loss = 0.000143373, step = 2701
INFO:tensorflow:global_step/sec: 236.298
INFO:tensorflow:loss = 0.000118591, step = 2801
INFO:tensorflow:global_step/sec: 202.394
INFO:tensorflow:loss = 6.92828e-05, step = 2901
INFO:tensorflow:Saving checkpoints for 3000 into Models/model_2/model.ckpt.
INFO:tensorflow:Loss for final step: 8.82684e-05.
Mean Square Error is: 0.009076
"""
plot_predicted, = plt.plot(predicted, label='predicted')
plot_test, = plt.plot(test_y, label='real_sin')
plt.legend([plot_predicted, plot_test],['predicted', 'real_sin'])
plt.show()