How to do multitask deep learning with TensorFlow



Problem description

Does anyone know how to do multitask deep learning with TensorFlow? That is, sharing the bottom layers while not sharing the top layers. Could you kindly share some example code?

Recommended answer

Keras with the TensorFlow backend can do this easily. The functional API was designed for exactly these use cases; take a look at the functional API guide. Here is an example with a shared LSTM layer, adapted from that guide:

from tensorflow.keras.layers import Input, LSTM, Dense, concatenate
from tensorflow.keras.models import Model

# each input is a sequence, e.g. 140 timesteps of 256-dim vectors
tweet_a = Input(shape=(140, 256))
tweet_b = Input(shape=(140, 256))

# this layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)

# when we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)

# we can then concatenate the two vectors:
merged_vector = concatenate([encoded_a, encoded_b], axis=-1)

# and add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)

# we define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
# data_a, data_b and labels are your training arrays
model.fit([data_a, data_b], labels, epochs=10)

When you train a Keras model with multiple outputs, you can define a loss function for each output, and Keras will optimize the sum of all losses, which is very useful for multitask learning.
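To connect this back to the original question (shared bottom layers, separate top layers), here is a minimal sketch of such a multitask model. The layer sizes, task names and loss weights are illustrative assumptions, not anything prescribed by Keras:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(32,))

# shared bottom layers: both tasks read from the same representation
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)

# task-specific top layers: these are NOT shared
task_a_out = Dense(1, activation='sigmoid', name='task_a')(x)
task_b_out = Dense(10, activation='softmax', name='task_b')(x)

model = Model(inputs=inputs, outputs=[task_a_out, task_b_out])

# one loss per output; Keras minimizes their weighted sum
model.compile(optimizer='rmsprop',
              loss={'task_a': 'binary_crossentropy',
                    'task_b': 'sparse_categorical_crossentropy'},
              loss_weights={'task_a': 1.0, 'task_b': 0.5})

# dummy data just to show the fit() call shape
x_train = np.random.rand(16, 32)
y_a = np.random.randint(0, 2, size=(16, 1))
y_b = np.random.randint(0, 10, size=(16, 1))
model.fit(x_train, {'task_a': y_a, 'task_b': y_b}, epochs=1, verbose=0)
```

Gradients from both heads flow back into the shared `Dense` layers, so those layers learn a representation useful for both tasks, while each head specializes on its own objective.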

