torch.nn.NLLLoss
NLLLoss stands for negative log likelihood loss.
The input given through a forward call is expected to contain log-probabilities of each class.
input has to be a Tensor of size either (minibatch,C) or (minibatch,C,d1,d2,...,dK) with K≥1 for the K-dimensional case.
The latter is useful for computing NLLLoss per pixel for 2D images.
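For instance, a minimal sketch of the pixel-wise case (the sizes N=2, C=5, H=W=4 are hypothetical, chosen only for illustration):
code:python
>>> import torch
>>> import torch.nn as nn
>>> m = nn.LogSoftmax(dim=1)                  # softmax over the class dimension C
>>> loss = nn.NLLLoss()
>>> images = torch.randn(2, 5, 4, 4)          # N=2, C=5, H=W=4
>>> target = torch.randint(0, 5, (2, 4, 4))   # one class index per pixel
>>> loss(m(images), target).size()            # mean over all pixels -> scalar
torch.Size([])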
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network.
"Add a LogSoftmax layer as the last layer of the network"
You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
❓ So it's either "add a LogSoftmax layer + NLLLoss" or use CrossEntropyLoss directly?
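Yes: CrossEntropyLoss applies LogSoftmax internally, so the two routes are equivalent. A quick sanity check (my own sketch, with arbitrary random values):
code:python
>>> import torch
>>> import torch.nn as nn
>>> torch.manual_seed(0)
>>> x = torch.randn(3, 5)                     # raw logits, N=3, C=5
>>> t = torch.tensor([1, 0, 4])
>>> a = nn.NLLLoss()(nn.LogSoftmax(dim=1)(x), t)
>>> b = nn.CrossEntropyLoss()(x, t)           # no explicit LogSoftmax layer
>>> torch.allclose(a, b)
True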
The target that this loss expects should be a class index in the range [0,C−1] where C = number of classes;
if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range)
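A small sketch of ignore_index (the value ignore_index=2 is an arbitrary choice here; with the default reduction="mean", ignored targets are also excluded from the average):
code:python
>>> import torch
>>> import torch.nn as nn
>>> torch.manual_seed(0)
>>> log_p = nn.LogSoftmax(dim=1)(torch.randn(3, 5))
>>> loss = nn.NLLLoss(ignore_index=2)         # targets equal to 2 contribute nothing
>>> full = loss(log_p, torch.tensor([1, 2, 4]))
>>> part = nn.NLLLoss()(log_p[[0, 2]], torch.tensor([1, 4]))  # drop the ignored row
>>> torch.allclose(full, part)
True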
table:NLLLoss input, target, output (3 shape combinations)
	input	(N, C)	(C)	(N, C, d1, d2, ..., dK)
	target	(N)	()	(N, d1, d2, ..., dK)
	output (reduction == "none")	(N)	()	(N, d1, d2, ..., dK)
	output (reduction != "none")	() = scalar	() = scalar	() = scalar
N=minibatch
C=number of classes
The default is reduction = "mean".
code:python
>>> import torch
>>> import torch.nn as nn
>>> torch.manual_seed(1)
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> inputs = torch.randn(3, 5, requires_grad=True)  # N=3 x C=5
>>> inputs.size()
torch.Size([3, 5])
>>> target = torch.tensor([1, 0, 4], dtype=torch.long)  # class indices < C=5; size equals N=3
>>> target.size()
torch.Size([3])
>>> log_softmax = m(inputs)
>>> log_softmax.size()
torch.Size([3, 5])
>>> output = loss(log_softmax, target)
>>> output.size()  # reduced to a scalar!
torch.Size([])
>>> output
tensor(1.3471, grad_fn=<NllLossBackward0>)
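To connect back to the table above: with reduction="none", the same call returns per-sample losses of shape (N) instead of a scalar (continuing with the variables from the block above):
code:python
>>> loss_none = nn.NLLLoss(reduction="none")
>>> loss_none(log_softmax, target).size()  # one loss per sample, shape (N)
torch.Size([3])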