Custom loss function updated at each step by gradient descent

From this post, we can write a custom loss function. Now, assume that the custom loss function depends on a parameter a:

```python
def customLoss(yTrue, yPred):
    return (K.log(yTrue) - K.log(yPred))**2 + a*yPred
```

How can we update the parameter a at each step in a gradient descent manner, like the weights?

    a_new = a_old - alpha * (derivative of custom loss with respect to a)

P.S. The real custom loss is different from the above. Please give me a general answer that works for any arbitrary custom loss function, not an answer to the example above.

Solution

Create a custom layer to hold the trainable parameter. This layer will not return the inputs in its call, but it still takes inputs to comply with how layers are created:

```python
class TrainableLossLayer(Layer):

    def __init__(self, a_initializer, **kwargs):
        super(TrainableLossLayer, self).__init__(**kwargs)
        self.a_initializer = keras.initializers.get(a_initializer)

    # method where weights are defined
    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel_a',
                                      shape=(1,),
                                      initializer=self.a_initializer,
                                      trainable=True)
        self.built = True

    # method defining the layer's operation (only return the weights)
    def call(self, inputs):
        return self.kernel

    # output shape
    def compute_output_shape(self, input_shape):
        return (1,)
```

Use the layer in your model to get a from any input (this is not compatible with a Sequential model):

```python
a = TrainableLossLayer(a_init, name="somename")(anyInput)
```

Now you can try to define your loss in a somewhat ugly way:

```python
def customLoss(yTrue, yPred):
    return (K.log(yTrue) - K.log(yPred))**2 + a*yPred
```

If this works, then it's ready.

You can also try a more complicated model (if you don't want a to jump over the layers into the loss like that, since that might cause problems in model saving/loading). In this case, you will need y_train to go in as an input instead of a target:

```python
y_true_inputs = Input(...)
```

Your loss function will go into a Lambda layer, taking all parameters properly:

```python
def lambdaLoss(x):
    yTrue, yPred, alpha = x
    return (K.log(yTrue) - K.log(yPred))**2 + alpha*yPred

loss = Lambda(lambdaLoss)([y_true_inputs, original_model_outputs, a])
```

Your model will output this loss:

```python
model = Model([original_model_inputs, y_true_inputs], loss)
```

You will have a dummy loss function:

```python
def dummyLoss(true, pred):
    return pred

model.compile(loss=dummyLoss, ...)
```

And train as:

```python
model.fit([x_train, y_train], anything_maybe_None_or_np_zeros, ...)
```
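For concreteness, below is a minimal end-to-end sketch wiring the pieces of the second (Lambda-loss) approach together. The one-feature regression model, the softplus output (to keep predictions positive for K.log), the 'ones' initializer for a, the layer name a_layer, and the toy data are all illustrative assumptions, not part of the original answer; it also targets tf.keras rather than standalone Keras.

```python
# Minimal end-to-end sketch of the Lambda-loss approach above.
# Assumptions (not from the original answer): tf.keras, a tiny
# one-feature regression model, a softplus output so K.log(yPred)
# is defined, and random toy data with strictly positive targets.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense, Lambda, Layer
from tensorflow.keras.models import Model


class TrainableLossLayer(Layer):
    """The layer from above: holds the scalar `a` as a trainable weight."""
    def __init__(self, a_initializer, **kwargs):
        super(TrainableLossLayer, self).__init__(**kwargs)
        self.a_initializer = keras.initializers.get(a_initializer)

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel_a', shape=(1,),
                                      initializer=self.a_initializer,
                                      trainable=True)
        self.built = True

    def call(self, inputs):
        # tf.identity turns the variable into a tensor, which keeps the
        # TF2 functional API happy; the inputs themselves are ignored.
        return tf.identity(self.kernel)

    def compute_output_shape(self, input_shape):
        return (1,)


def lambdaLoss(x):
    yTrue, yPred, alpha = x
    return (K.log(yTrue) - K.log(yPred)) ** 2 + alpha * yPred


def dummyLoss(true, pred):
    return pred  # the model's output already is the loss


# Hypothetical original model: 1 input feature -> 1 positive prediction.
original_model_inputs = Input(shape=(1,))
hidden = Dense(8, activation='relu')(original_model_inputs)
original_model_outputs = Dense(1, activation='softplus')(hidden)

# y_train enters the graph as a second input, as described above.
y_true_inputs = Input(shape=(1,))

# `a` comes out of the trainable layer; any tensor works as its input.
a = TrainableLossLayer('ones', name='a_layer')(original_model_inputs)

loss = Lambda(lambdaLoss)([y_true_inputs, original_model_outputs, a])

model = Model([original_model_inputs, y_true_inputs], loss)
model.compile(optimizer='adam', loss=dummyLoss)

# Toy data; targets are strictly positive so K.log(yTrue) is defined.
x_train = np.random.rand(64, 1)
y_train = np.random.rand(64, 1) + 0.5

model.fit([x_train, y_train], np.zeros((64, 1)), epochs=2, verbose=0)

# `a` was updated by the optimizer like any other weight:
print("learned a:", model.get_layer('a_layer').get_weights()[0])
```

Because a lives in a layer's weights, the optimizer applies essentially the update from the question, a_new = a_old - learning_rate * (dLoss/da), subject to the chosen optimizer's own rules, and you can read its learned value back with get_weights() as shown at the end.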