2025.4.14 Comparing the Frobenius norm and the mixed L1,2 norm
Is it something like this?
code:p1.py
import numpy as np
import numpy.linalg as npal
A0 = np.random.random((10, 40))
A1 = A0.copy()
A1 = A1 + np.random.random(A0.shape)  # add noise (assumed uniform; the note does not record the exact noise used)
# Frobenius norm
y_f0 = npal.norm(A0, ord='fro')
y_f1 = npal.norm(A1, ord='fro')
print('FRO : ', y_f1 / y_f0, y_f0)
# 1,2 mixed norm
m, n = A0.shape
y_m0 = 0
y_m1 = 0
for i in range(m):
    y_m0 = y_m0 + npal.norm(A0[i, :], ord=2)
    y_m1 = y_m1 + npal.norm(A1[i, :], ord=2)
print('MIX : ', y_m1 / y_m0, y_m0)
Results
code:result.txt
FRO : 1.5707142534960188
MIX : 1.372922050852048
Comparing the growth rate of each norm under the added noise, the MIX norm grows more slowly than the Frobenius norm. Note that the absolute value printed for MIX is the larger of the two.
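The row loop above can be collapsed into a single vectorized call; a minimal sketch (assuming the same definition, the sum of row-wise L2 norms) that also checks the observation that the MIX value is always at least the Frobenius value:

```python
import numpy as np

A = np.random.random((10, 40))
# Mixed L2,1 norm: L2 norm of each row, summed over rows
mix_loop = sum(np.linalg.norm(A[i, :], ord=2) for i in range(A.shape[0]))
mix_vec = np.linalg.norm(A, ord=2, axis=1).sum()
assert np.isclose(mix_loop, mix_vec)
# The Frobenius norm is the L2 norm of the vector of row norms,
# so it never exceeds their plain sum (the MIX value)
assert np.linalg.norm(A, ord='fro') <= mix_vec
```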
PyTorch version
code:p.py
import torch as pt
import torch.linalg as ptal
A0 = pt.rand((10, 40))
A1 = A0.clone()
A1 = A1 + pt.rand(A0.shape)  # add noise (assumed uniform; the note does not record the exact noise used)
# Frobenius norm
y_f0 = ptal.norm(A0, ord='fro')
y_f1 = ptal.norm(A1, ord='fro')
print('FRO : ', y_f1 / y_f0, y_f0)
# 1,2 mixed norm
m, n = A0.shape
y_m0 = 0
y_m1 = 0
for i in range(m):
    y_m0 = y_m0 + ptal.norm(A0[i, :], ord=2)
    y_m1 = y_m1 + ptal.norm(A1[i, :], ord=2)
print('MIX : ', y_m1 / y_m0, y_m0)
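In PyTorch the explicit Python loop can likewise be replaced by the `dim` argument; a sketch that computes the same sum of row norms and stays autograd-friendly:

```python
import torch as pt
import torch.linalg as ptal

A = pt.rand((10, 40), requires_grad=True)
# Row-wise L2 norms via dim=1, then summed: same value as the loop version
y = ptal.norm(A, ord=2, dim=1).sum()
y.backward()
print(A.grad.shape)  # gradient has the same shape as A
```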
Confirming that automatic differentiation applies to the norm computations
code:p1.py
import torch as pt
import torch.linalg as ptal
# Frobenius norm
A0 = pt.rand((5, 5), requires_grad=True)
y_f0 = ptal.norm(A0, ord='fro')
y_f0.backward()
print('A0.grad : \n', A0.grad)
# 1,2 mixed norm
A1 = pt.rand((5, 5), requires_grad=True)
m, n = A1.shape
y_m0 = 0
for i in range(m):
    y_m0 = y_m0 + ptal.norm(A1[i, :], ord=2)
y_m0.backward()
print('A1.grad : \n', A1.grad)
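The Frobenius case can also be checked against the closed form: the gradient of ||A||_F with respect to A is A / ||A||_F. A minimal sketch of that check:

```python
import torch as pt
import torch.linalg as ptal

A = pt.rand((5, 5), requires_grad=True)
y = ptal.norm(A, ord='fro')
y.backward()
# Analytic gradient of the Frobenius norm: A / ||A||_F
expected = A.detach() / ptal.norm(A.detach(), ord='fro')
assert pt.allclose(A.grad, expected)
```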