Shared
Functions
clip
one_hot
Linear
einsum
where
Comprehension
Softmax
Softmax rescales the input Tensor along a given dim so that the elements of that dim lie in the range [0, 1] and sum to 1. The Softmax function is defined as:

$$\mathrm{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_{j} \exp(x_j)}$$
Softmax is usually used in classification settings where we have n labels/tags on the last dim, so we apply softmax to magnify the differences between them. For example:
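A minimal sketch of applying softmax over the last dim (the logit values are made up for illustration):

```python
import torch
import torch.nn.functional as F

# hypothetical logits for a batch of 2 samples with 3 labels each
logits = torch.tensor([[1.0, 2.0, 3.0],
                       [0.5, 0.5, 4.0]])

probs = F.softmax(logits, dim=-1)   # rescale along the last dim
print(probs)                        # every element lies in [0, 1]
print(probs.sum(dim=-1))            # each row sums to 1
```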
Loss
What it really does
When you do loss.backward(), it is a shortcut for loss.backward(torch.Tensor([1])). This is only valid if loss is a tensor containing a single element. DataParallel returns the partial loss that was computed on each GPU, so you usually want to do loss.backward(torch.Tensor([1, 1])) or loss.sum().backward(). Both have exactly the same behaviour.
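A minimal sketch of the single-element requirement, assuming a per-GPU partial loss of shape (2,) (the values are made up):

```python
import torch

# pretend DataParallel returned one partial loss per GPU
partial_losses = torch.tensor([0.7, 0.9], requires_grad=True)

# partial_losses.backward() would fail: a gradient can only be created
# implicitly for a scalar, so reduce to a single element first
partial_losses.sum().backward()

# equivalent: pass an explicit gradient of ones
# partial_losses.backward(torch.ones_like(partial_losses))
```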
NLL & CrossEntropy
sparse means the targets are integer (class-index) encoded, whereas the non-sparse version expects one_hot encoded targets.
The Keras version defaults to from_logits=False. Note that from_logits=False here only means the inputs have already been passed through a softmax layer (over axis=1 for 2-D inputs, axis=2 for 3-D inputs); the log is still taken inside the sparse_categorical_crossentropy function.
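A minimal sketch (shapes and values are made up) contrasting the sparse and one-hot variants and the from_logits flag in Keras:

```python
import tensorflow as tf

logits = tf.constant([[2.0, 0.5, 0.1],
                      [0.2, 3.0, 0.3]])      # (batch=2, classes=3)
targets = tf.constant([0, 1])                # integer class indices
one_hot_targets = tf.one_hot(targets, depth=3)

# sparse variant: integer targets, raw logits
loss_sparse = tf.keras.losses.sparse_categorical_crossentropy(
    targets, logits, from_logits=True)

# non-sparse variant: one_hot targets, probabilities (from_logits=False)
probs = tf.nn.softmax(logits, axis=-1)
loss_dense = tf.keras.losses.categorical_crossentropy(
    one_hot_targets, probs, from_logits=False)

# both give the same per-sample losses
print(loss_sparse.numpy(), loss_dense.numpy())
```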
Since CrossEntropy always performs the classification along dim=1, when computing the batch loss you need to put the categories on dim=1:
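A minimal sketch, assuming sequence-model logits of shape (batch, seq_len, vocab) (hypothetical shapes), where the class dimension must be moved to dim=1 for nn.CrossEntropyLoss:

```python
import torch
import torch.nn as nn

batch, seq_len, vocab = 4, 7, 10            # hypothetical shapes
logits = torch.randn(batch, seq_len, vocab)
targets = torch.randint(0, vocab, (batch, seq_len))

criterion = nn.CrossEntropyLoss()

# CrossEntropyLoss expects (N, C, ...) with categories on dim=1,
# so permute (batch, seq_len, vocab) -> (batch, vocab, seq_len)
loss = criterion(logits.permute(0, 2, 1), targets)

# equivalent: flatten to (N, C) logits and (N,) targets
loss_flat = criterion(logits.reshape(-1, vocab), targets.reshape(-1))
```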
Accuracy
Categorical Accuracy
The proportion of samples whose predicted class (argmax) matches the true class, averaged over the entire dataset:
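A minimal sketch of categorical accuracy, assuming predictions of shape (batch, classes) and integer labels (the values are made up):

```python
import torch

preds = torch.tensor([[0.1, 0.7, 0.2],
                      [0.8, 0.1, 0.1],
                      [0.3, 0.3, 0.4]])   # (batch=3, classes=3)
labels = torch.tensor([1, 0, 2])

pred_classes = preds.argmax(dim=-1)                 # predicted class per sample
accuracy = (pred_classes == labels).float().mean()  # fraction of correct predictions
print(accuracy.item())                              # 1.0 for this toy batch
```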
Alignment
Model
create a fake pytorch model, whose data will be filled/replaced by the tf checkpoint;
collect all saved parameter names and values;
use getattr to index into the submodules of the pytorch model; fill the data by pointer.data = torch.from_numpy(array).
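A minimal sketch of this alignment loop, loosely following the style of TF-to-PyTorch conversion scripts; the assumption that tf variable scopes map one-to-one onto pytorch submodule names is hypothetical and depends on the actual model:

```python
import tensorflow as tf
import torch

def load_tf_weights(model, tf_checkpoint_path):
    """Fill a freshly constructed pytorch model with tf checkpoint values."""
    # collect all saved parameter names and values
    init_vars = tf.train.list_variables(tf_checkpoint_path)
    names, arrays = [], []
    for name, shape in init_vars:
        names.append(name)
        arrays.append(tf.train.load_variable(tf_checkpoint_path, name))

    for name, array in zip(names, arrays):
        # assumed naming scheme: each tf scope corresponds to a pytorch submodule
        pointer = model
        for scope in name.split("/"):
            pointer = getattr(pointer, scope)    # index into the submodule
        assert pointer.shape == array.shape      # sanity-check the alignment
        pointer.data = torch.from_numpy(array)   # replace the fake data
    return model
```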