The standard modular template ("八股") for building a neural network:

Forward propagation builds the network, i.e. defines the network structure (forward.py).

You normally create a forward.py file to describe the forward-propagation process; it usually contains the following functions:

import tensorflow as tf

def forward(x, regularizer):
    """
    Defines the forward-propagation process.
    :param x: the input tensor x
    :param regularizer: the regularization weight
    :return: the output y
    """
    # These lines are the template's blanks; the shapes are per-layer choices.
    # A concrete two-layer wiring is sketched after get_bias() below.
    w = get_weight(shape, regularizer)
    b = get_bias(shape)
    y = tf.matmul(x, w) + b
    return y

def get_weight(shape, regularizer):
    # The initializer is the template's blank; a truncated normal is a common choice.
    w = tf.Variable(tf.truncated_normal(shape, stddev=0.1))
    # Register this weight's L2 penalty in the 'losses' collection;
    # backward.py later sums the collection into the total loss.
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    b = tf.Variable(tf.zeros(shape))   # biases are commonly initialized to zero
    return b
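
As a concrete illustration of how the blanks get filled, here is a minimal two-layer wiring of forward() built on the helpers above. The layer sizes INPUT_NODE, HIDDEN_NODE and OUTPUT_NODE are example values assumed here (MNIST-style 784-in/10-out), not part of the original template:

INPUT_NODE = 784     # assumed example sizes, not from the template
HIDDEN_NODE = 500
OUTPUT_NODE = 10

def forward(x, regularizer):
    w1 = get_weight([INPUT_NODE, HIDDEN_NODE], regularizer)
    b1 = get_bias([HIDDEN_NODE])
    y1 = tf.nn.relu(tf.matmul(x, w1) + b1)    # hidden layer with ReLU

    w2 = get_weight([HIDDEN_NODE, OUTPUT_NODE], regularizer)
    b2 = get_bias([OUTPUT_NODE])
    y = tf.matmul(y1, w2) + b2                # output logits; softmax happens in the loss
    return y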

Back propagation trains the network, i.e. optimizes the network parameters (backward.py).

Likewise, you normally create a backward.py file to describe the back-propagation (training) process; it usually contains the following:

import tensorflow as tf
import forward

def backward():
    # INPUT_NODE / OUTPUT_NODE stand for your data's dimensions
    x = tf.placeholder(tf.float32, shape=(None, INPUT_NODE))
    y_ = tf.placeholder(tf.float32, shape=(None, OUTPUT_NODE))
    y = forward.forward(x, REGULARIZER)
    global_step = tf.Variable(0, trainable=False)
    loss = ...   # one of the loss definitions below, plus regularization

Choosing the loss and adding regularization:

The loss can be the distance between y and y_, e.g. mean squared error:

loss_mse = tf.reduce_mean(tf.square(y - y_))

Or it can be cross entropy:

ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))

(the sparse variant expects integer class labels, so tf.argmax(y_, 1) converts the one-hot y_ into class indices)

cem = tf.reduce_mean(ce)   # the distance between y and y_

After adding regularization:

loss = cem + tf.add_n(tf.get_collection('losses'))   # cem or loss_mse, whichever distance you chose
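
To see how the 'losses' collection combines the data loss with the per-weight L2 terms, here is a small self-contained sketch (all values are made up for illustration):

import tensorflow as tf

REGULARIZER = 0.01                 # example L2 weight, assumed

w = tf.Variable(tf.ones([2, 2]))
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(REGULARIZER)(w))

data_loss = tf.constant(1.5)       # stand-in for loss_mse or cem
total_loss = data_loss + tf.add_n(tf.get_collection('losses'))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # l2_regularizer(0.01)(w) = 0.01 * sum(w**2) / 2 = 0.01 * 4 / 2 = 0.02
    print(sess.run(total_loss))    # -> 1.52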

Exponentially decaying learning rate (if you want to use it, add the code below):

learning_rate = tf.train.exponential_decay(
    LEARNING_RATE_BASE,
    global_step,
    TOTAL_SAMPLES / BATCH_SIZE,   # decay_steps: total samples in the data set / BATCH_SIZE
    LEARNING_RATE_DECAY,
    staircase=True
)

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
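
Passing global_step to minimize() makes the optimizer increment it on every step, which is what drives the decay. With staircase=True the rate drops by a factor of LEARNING_RATE_DECAY once per "epoch" (every TOTAL_SAMPLES / BATCH_SIZE steps); the schedule is equivalent to this plain-Python sketch (constants are example values):

LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
decay_steps = 60000 // 200         # TOTAL_SAMPLES / BATCH_SIZE, assumed values

def decayed_lr(global_step):
    # staircase=True means integer division, so the rate is piecewise constant
    return LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step // decay_steps)

print(decayed_lr(0))      # 0.1
print(decayed_lr(300))    # 0.1 * 0.99 = 0.099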

Moving average (if you want to use it, add the code below):

ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
ema_op = ema.apply(tf.trainable_variables())   # tf.trainable_variables() lists every trainable parameter
with tf.control_dependencies([train_step, ema_op]):
    train_op = tf.no_op(name='train')          # running train_op runs both train_step and ema_op
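
At evaluation time you normally load the shadow (averaged) values rather than the raw weights. A sketch of how a separate test script could restore them, assuming the same graph has been rebuilt (the checkpoint path is hypothetical):

ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY)
ema_restore = ema.variables_to_restore()        # maps shadow names -> variables
saver = tf.train.Saver(ema_restore)

with tf.Session() as sess:
    saver.restore(sess, './model/model.ckpt')   # hypothetical checkpoint path
    # sess.run(...) now evaluates with the averaged weights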

with tf.Session() as sess:
    # Initialize all variables
    init_op = tf.global_variables_initializer()
    sess.run(init_op)

    for i in range(STEPS):
        # xs, ys: one batch of inputs and labels (see the sketch below);
        # run train_op instead if you added the moving-average code above.
        _, loss_value = sess.run([train_step, loss], feed_dict={x: xs, y_: ys})
        if i % PRINT_EVERY == 0:   # PRINT_EVERY: report every so many rounds
            print('After %d steps, loss is %g' % (i, loss_value))
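
The xs, ys fed each step are one batch of inputs and labels, for example sliced out of NumPy arrays. X_train and Y_train are hypothetical data arrays assumed here, shaped (N, INPUT_NODE) and (N, OUTPUT_NODE); this sketch replaces the loop body above:

BATCH_SIZE = 200                   # example value, assumed
for i in range(STEPS):
    start = (i * BATCH_SIZE) % len(X_train)
    end = start + BATCH_SIZE
    xs, ys = X_train[start:end], Y_train[start:end]
    _, loss_value = sess.run([train_step, loss], feed_dict={x: xs, y_: ys})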

if __name__=='__main__':
    backward()
