Within a mini-batch, I want to return a tensor of size (batch_size, dim), then compute the average inner product between same-class features and the average inner product between different-class features.
There is actually a point worth considering here: is it better to optimize the inner product as the objective, or the distance?
With the inner product, the same-class case is easy: the closer to 1 the better. But for the inner product between different classes, is it better to be close to 0, or to -1? That really is a question.
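One thing worth noting: with L2-normalized features the two choices are nearly equivalent, because the squared Euclidean distance is an affine function of the inner product:

\|a - b\|^2 = \|a\|^2 + \|b\|^2 - 2\langle a, b \rangle = 2 - 2\langle a, b \rangle

So pulling same-class distances toward 0 is exactly pulling their inner products toward 1. For different classes, a target of 0 means orthogonal features (in R^dim, up to dim directions can be mutually orthogonal), while a target of -1 means diametrically opposite features, the largest possible separation, although that can only hold exactly for two directions at once.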
In the code below, I set -1 as the target value for the different-class inner product. Note that the feature argument passed to Pull_Push_Features_loss has already been L2-normalized.
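To make "L2-normalized" concrete, here is a minimal sketch of that normalization step (assuming features is the raw (batch_size, dim) backbone output; the variable name is mine):

import torch

# renorm(2, 0, 1e-5) clamps each row's L2 norm to at most 1e-5,
# and mul(1e5) scales it back up, leaving every row with unit norm.
features = features.renorm(2, 0, 1e-5).mul(1e5)
# torch.nn.functional.normalize(features, p=2, dim=1) does essentially the same thing.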
import torch
import torch.nn as nn

class Pull_Push_Features_loss(nn.Module):  # pull-close / push-apart feature loss
    def __init__(self):
        super(Pull_Push_Features_loss, self).__init__()

    def forward(self, features, batch_size, person_num):
        # features: already normalized via renorm(2, 0, 1e-5).mul(1e5), with size
        # (batch_size, dim), where dim is the feature length; person_num is the
        # number of images per label. With this loss, the data fed to the network
        # during training is always an N*K combination (person_num consecutive
        # rows share the same label).
        ID_num = batch_size // person_num  # how many labels the batch contains

        # Pull term: average inner product between same-label features.
        # The ideal target is 1, hence the 1 - distance below.
        pull_distance_all = 0
        for id_index in range(ID_num):
            features_temp_list = features[id_index * person_num:(id_index + 1) * person_num]
            distance = torch.mm(features_temp_list, features_temp_list.t())
            distance = 1 - distance  # diagonal entries become 0 and contribute nothing
            pull_distance_all = pull_distance_all + distance.sum()
        pull_distance_avg = pull_distance_all / (ID_num * (person_num * (person_num - 1)))

        # Push term: average inner product between different labels, using the
        # first feature of each label as its representative.
        dif_ID_features = features[0:1]
        for i in range(1, ID_num):
            dif_ID_features = torch.cat((dif_ID_features, features[i * person_num:i * person_num + 1]), 0)
        mat_similary = torch.mm(dif_ID_features, dif_ID_features.t())
        push_similary_all = torch.sum(mat_similary) - ID_num  # drop the diagonal (self-similarity is 1)
        push_similary_avg = push_similary_all / (ID_num * ID_num - ID_num)

        loss = pull_distance_avg + push_similary_avg
        return loss
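As a quick sanity check, this is how I would call it, under assumed values (2 labels with person_num = 4 images each, so batch_size = 8; dim = 128 is arbitrary):

import torch

criterion = Pull_Push_Features_loss()
batch_size, person_num, dim = 8, 4, 128
# rows must be grouped by label: the first person_num rows share one label, and so on
features = torch.randn(batch_size, dim, requires_grad=True)
normed = features.renorm(2, 0, 1e-5).mul(1e5)  # L2-normalize rows before the loss
loss = criterion(normed, batch_size, person_num)
loss.backward()  # gradients flow back through the normalization
print(loss.item())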
Thinking it over again, there is really not that much to it: I simply used the cosine distance as the similarity measure.