NC215118. BatchNormalization1D
Description
Batch Normalization (BN) was introduced in the paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", published by Google in 2015.
Many of you will know that a machine learning model is usually fitted on a training set and only afterwards applied to a test set or other real-world data. A basic assumption here is that the training and test data are independent and identically distributed. In probability theory and statistics, independent and identically distributed (IID) means that every variable in a collection of random variables has the same probability distribution and that the variables are mutually independent. A collection of random variables being IID does not mean that every event in their sample space is equally likely. For example, the sequence of results from throwing a loaded die is IID, even though the probability of each face coming up is not the same.
In deep learning, because of the complexity of the problems, we usually train networks with many layers. During forward propagation, each layer's input is the previous layer's output, and for various reasons the data passed between adjacent layers is not necessarily IID; this phenomenon is called "Internal Covariate Shift". We therefore want to bring each layer's output back toward an IID distribution, and normalization is commonly used for this: Dropout, L1 regularization, and L2 regularization can all be viewed as forms of normalization. Another approach is whitening (Whiten), which transforms the input distribution to zero mean and unit variance, i.e., a normal distribution. Whitening, however, is computationally expensive, even though it may be the most effective of these methods.
You are welcome to look up how whitening is computed after the contest.
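For intuition only, the zero-mean, unit-variance part of whitening amounts to standardizing each feature. A minimal sketch in plain Python (the helper name and the eps guard are ours, added for illustration; full whitening would additionally decorrelate the features, which is what makes it expensive):

# Sketch: per-feature standardization, the zero-mean/unit-variance part of
# whitening. Full whitening would also decorrelate the features.
def standardize(X, eps=1e-5):
    n, c = len(X), len(X[0])
    mu = [sum(X[j][i] for j in range(n)) / n for i in range(c)]
    var = [sum((X[j][i] - mu[i]) ** 2 for j in range(n)) / n for i in range(c)]
    return [[(X[j][i] - mu[i]) / (var[i] + eps) ** 0.5 for i in range(c)]
            for j in range(n)]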
Moreover, because of compute constraints, we usually cannot feed the entire training set into a deep network at once, so the mini-batch approach splits the original dataset into n_batch batches that are trained on in turn; this is also part of why deep learning has attracted so much attention (it lends itself to distributed training). Whitening is hard to extend to the mini-batch setting, and that is where Batch Normalization comes in. A sketch of the batching idea follows.
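As a rough illustration (the helper name and the contiguous-slicing choice are ours, not part of the problem), splitting a dataset into mini-batches could look like:

# Illustrative only: carve a dataset into n_batch contiguous mini-batches.
def make_batches(dataset, n_batch):
    size = (len(dataset) + n_batch - 1) // n_batch  # ceiling division
    return [dataset[i * size:(i + 1) * size] for i in range(n_batch)]

In this problem, however, the same batch X is simply fed through the layer n_batch times, so no actual splitting is needed.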
A BN layer has four parameters: moving_mean, moving_var, gamma, beta.
Of these, gamma and beta are the parameters that receive backpropagated updates and pass gradients along in a deep network. In this problem we do not consider training by backpropagation; the given values are used throughout. (In BN theory, gamma and beta preserve the distribution information of the input.)
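Putting the four parameters together, the computation (consistent with the worked example and the reference solutions below) is, for each feature column j:

Training mode:
    mu[j]  = (1/n) * sum_i X[i][j]
    var[j] = (1/n) * sum_i (X[i][j] - mu[j])^2              (biased variance)
    X_hat[i][j] = (X[i][j] - mu[j]) / sqrt(var[j] + eps)
    moving_mean[j] = momentum * moving_mean[j] + (1 - momentum) * mu[j]
    moving_var[j]  = momentum * moving_var[j]  + (1 - momentum) * var[j]
Inference mode:
    X_hat[i][j] = (X[i][j] - moving_mean[j]) / sqrt(moving_var[j] + eps)
Both modes:
    out[i][j] = gamma[j] * X_hat[i][j] + beta[j]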
Here is a direct computation template (pseudo-Python; you should be able to follow it even if you have never learned Python. Contestants using Python should not expect a copy-paste of it to run, and Python libraries such as numpy and scipy are not available).
Function definition:

def batch_norm(X, gamma, beta, moving_mean, moving_var, is_training=True, eps=1e-5, momentum=0.9):

Here X is a tensor of shape (n, c); gamma, beta, moving_mean, and moving_var are tensors of shape (c,); is_training indicates whether the layer is in training mode.
The function returns out, moving_mean, moving_var, where out is the computed result and moving_mean, moving_var are the running statistics after this call's update.
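A minimal body consistent with this contract (our sketch in plain Python, matching the accepted solutions at the end of this page rather than any official implementation):

# Sketch of batch_norm: normalize per feature column, then scale and shift.
def batch_norm(X, gamma, beta, moving_mean, moving_var,
               is_training=True, eps=1e-5, momentum=0.9):
    n, c = len(X), len(X[0])
    mu = [sum(X[j][i] for j in range(n)) / n for i in range(c)]
    var = [sum((X[j][i] - mu[i]) ** 2 for j in range(n)) / n for i in range(c)]
    if is_training:
        # Use batch statistics and fold them into the running averages.
        m, v = mu, var
        moving_mean = [momentum * a + (1 - momentum) * b
                       for a, b in zip(moving_mean, mu)]
        moving_var = [momentum * a + (1 - momentum) * b
                      for a, b in zip(moving_var, var)]
    else:
        # Use the accumulated running statistics.
        m, v = moving_mean, moving_var
    out = [[gamma[i] * (X[j][i] - m[i]) / (v[i] + eps) ** 0.5 + beta[i]
            for i in range(c)] for j in range(n)]
    return out, moving_mean, moving_var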
For X = {{1.2, 2.3, 3.4, 4.5}, {2.3, 3.4, 4.5, 5.6}, {3.4, 4.5, 5.6, 6.7}}, gamma = {1.0, 2.0, 3.0, 4.0}, beta = {2.0, 3.0, 4.0, 5.0}, moving_mean = {0.0, 0.0, 0.0, 0.0}, moving_var = {1.0, 1.0, 1.0, 1.0}, eps = 1e-5, momentum = 0.9:

With is_training=True, the intermediate values and result are:
mu          = {2.3000, 3.4000, 4.5000, 5.6000}
var         = {0.8067, 0.8067, 0.8067, 0.8067}
moving_mean = {0.2300, 0.3400, 0.4500, 0.5600}
moving_var  = {0.9807, 0.9807, 0.9807, 0.9807}
X_hat       = {{-1.2247, -1.2247, -1.2247, -1.2247}, {0.0, 0.0, 0.0, 0.0}, {1.2247, 1.2247, 1.2247, 1.2247}}
out         = {{0.7753, 0.5505, 0.3258, 0.1011}, {2.0000, 3.0000, 4.0000, 5.0000}, {3.2247, 5.4495, 7.6742, 9.8989}}

With is_training=False (starting from the same initial moving_mean and moving_var):
mu    = {2.3000, 3.4000, 4.5000, 5.6000}
var   = {0.8067, 0.8067, 0.8067, 0.8067}
X_hat = {{1.2000, 2.3000, 3.4000, 4.5000}, {2.3000, 3.4000, 4.5000, 5.6000}, {3.4000, 4.5000, 5.6000, 6.7000}}
out   = {{3.2000, 7.6000, 14.1999, 22.9999}, {4.3000, 9.8000, 17.4999, 27.3999}, {5.4000, 12.0000, 20.7999, 31.7999}}

Note that your submission must not print these intermediate values; the prints in the function definition are only to aid understanding.
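To see where one of these numbers comes from: in the training-mode call, out[0][0] = gamma[0] * (X[0][0] - mu[0]) / sqrt(var[0] + eps) + beta[0] = 1.0 * (1.2 - 2.3) / sqrt(0.8067 + 1e-5) + 2.0 ≈ 0.7753.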
The overall flow, in Python:

for i in range(n_batch):
    out, moving_mean, moving_var = batch_norm(X, gamma, beta, moving_mean, moving_var, True, eps, momentum)
out, moving_mean, moving_var = batch_norm(X, gamma, beta, moving_mean, moving_var, False, eps, momentum)
or, in C/C++:

while (n_batch--) {
    out, moving_mean, moving_var = batch_norm(X, gamma, beta, moving_mean, moving_var, true, eps, momentum);
}
out, moving_mean, moving_var = batch_norm(X, gamma, beta, moving_mean, moving_var, false, eps, momentum);
// C/C++ cannot actually return three values like this; the line just means that
// batch_norm updates moving_mean and moving_var in addition to producing out.
Input description
The first line contains three integers and two floats: n, c, n_batch, eps, momentum.
The next n lines each contain c floats, the elements of X.
The next line contains c floats, the initial values of gamma.
The next line contains c floats, the initial values of beta.
The next line contains c floats, the initial values of moving_mean.
The next line contains c floats, the initial values of moving_var. All values lie within a bounded range.
Output description
The result out is an (n, c) tensor. Print it to four decimal places as n lines, the i-th line containing the c values of the i-th row, separated by single spaces. An answer is judged correct if every element differs from the expected result by no more than 0.001.
Example 1
Input:
3 4 3 0.00005 0.9
1.2 2.3 3.4 4.5
2.3 3.4 4.5 5.6
3.4 4.5 5.6 6.7
1.0 2.0 3.0 4.0
2.0 3.0 4.0 5.0
0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0
Output:
2.5924 5.8323 10.7200 17.2550
3.7224 8.0923 14.1100 21.7750
4.8524 10.3520 17.5000 26.2940
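A quick sanity check of this output: since the same batch is used in all three training passes, the running statistics have a closed form after k passes, moving_mean = (1 - momentum^k) * mu and moving_var = momentum^k * 1.0 + (1 - momentum^k) * var. The snippet below (our illustration, not a submission) recomputes the first output value:

# Recompute out[0][0] of Example 1 from the closed-form running statistics.
momentum, eps, k = 0.9, 0.00005, 3
mu0, var0 = 2.3, 0.8067                        # column-0 batch mean / variance
w = 1 - momentum ** k                          # EMA weight after k identical passes (0.271)
moving_mean0 = w * mu0                         # 0.6233
moving_var0 = momentum ** k * 1.0 + w * var0   # about 0.9476
out00 = 1.0 * (1.2 - moving_mean0) / (moving_var0 + eps) ** 0.5 + 2.0
print(round(out00, 4))                         # about 2.5924, matching the output above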
Python3 (3.9) solution, runtime: 16ms, memory: 3096K, submitted: 2020-12-19 14:57:46
import sys

def input():
    return sys.stdin.readline().rstrip()

# Per-column mean of the batch.
def mean(X):
    n = len(X)
    c = len(X[0])
    mu = []
    for i in range(c):
        total = 0.
        for j in range(n):
            total += X[j][i]
        total /= n
        mu.append(total)
    return mu

# Per-column biased variance of the batch.
def variance(X, mu):
    n = len(X)
    c = len(X[0])
    var = []
    for i in range(c):
        total = 0.
        for j in range(n):
            total += (X[j][i] - mu[i]) ** 2
        total /= n
        var.append(total)
    return var

# Normalize each element with the supplied mean and variance.
def get_X_hat(X, mu, var, eps):
    n = len(X)
    c = len(X[0])
    X_hat = [[0.0] * c for _ in range(n)]
    for i in range(c):
        for j in range(n):
            X_hat[j][i] = (X[j][i] - mu[i]) / (var[i] + eps) ** 0.5
    return X_hat

# Exponential moving average update of the running statistics.
def get_moving(moving, momentum, data):
    c = len(moving)
    for i in range(c):
        moving[i] = momentum * moving[i] + (1.0 - momentum) * data[i]
    return moving

# Scale and shift: out = gamma * X_hat + beta (in place).
def get_res(X_hat, gamma, beta):
    n = len(X_hat)
    c = len(X_hat[0])
    for i in range(c):
        for j in range(n):
            X_hat[j][i] = gamma[i] * X_hat[j][i] + beta[i]
    return X_hat

def batch_norm(X, gamma, beta, moving_mean, moving_var, is_training=True, eps=1e-5, momentum=0.9):
    mu = mean(X)
    var = variance(X, mu)
    if is_training:
        # Training: normalize with batch statistics, update running stats.
        X_hat = get_X_hat(X, mu, var, eps)
        moving_mean = get_moving(moving_mean, momentum, mu)
        moving_var = get_moving(moving_var, momentum, var)
    else:
        # Inference: normalize with the running statistics.
        X_hat = get_X_hat(X, moving_mean, moving_var, eps)
    res = get_res(X_hat, gamma, beta)
    return res, moving_mean, moving_var

if __name__ == '__main__':
    n, c, n_batch, eps, momentum = input().split()
    n, c, n_batch, eps, momentum = int(n), int(c), int(n_batch), float(eps), float(momentum)
    X = []
    for i in range(n):
        X.append(list(map(float, input().split())))
    gamma = list(map(float, input().split()))
    beta = list(map(float, input().split()))
    moving_mean = list(map(float, input().split()))
    moving_var = list(map(float, input().split()))
    # n_batch training passes over the same batch, then one inference pass.
    for i in range(n_batch):
        out, moving_mean, moving_var = batch_norm(X, gamma, beta, moving_mean, moving_var, True, eps, momentum)
    out, moving_mean, moving_var = batch_norm(X, gamma, beta, moving_mean, moving_var, False, eps, momentum)
    for i in range(n):
        for j in range(c):
            if j < c - 1:
                print("%.4f" % out[i][j], end=" ")
            else:
                print("%.4f" % out[i][j])
C++ (g++ 7.5.0) solution, runtime: 3ms, memory: 500K, submitted: 2022-11-25 15:00:59
#include <bits/stdc++.h>
using namespace std;

int n, c, n_batch;
double out[100][100];
double moving_mean[100];
double moving_var[100];

void batch_norm(double x[][100], double gamma[], double beta[], bool is_training = true, double eps = 0.00005, double momentum = 0.9)
{
    int i, j;
    double x_hat[100][100];
    double mu[100] = {0};
    double var[100] = {0};
    // Per-column mean of the batch.
    for (j = 0; j < c; j++) {
        for (i = 0; i < n; i++) {
            mu[j] = mu[j] + x[i][j];
        }
        mu[j] = mu[j] / n;
    }
    // Per-column biased variance of the batch.
    for (j = 0; j < c; j++) {
        for (i = 0; i < n; i++) {
            var[j] = var[j] + (x[i][j] - mu[j]) * (x[i][j] - mu[j]);
        }
        var[j] = var[j] / n;
    }
    if (is_training) {
        // Training: normalize with batch statistics.
        for (i = 0; i < n; i++) {
            for (j = 0; j < c; j++) {
                x_hat[i][j] = (x[i][j] - mu[j]) / sqrt(var[j] + eps);
            }
        }
        // Exponential moving average of the running statistics.
        for (i = 0; i < c; i++) {
            moving_mean[i] = momentum * moving_mean[i] + (1.0 - momentum) * mu[i];
            moving_var[i] = momentum * moving_var[i] + (1.0 - momentum) * var[i];
        }
    } else {
        // Inference: normalize with the running statistics.
        for (i = 0; i < n; i++) {
            for (j = 0; j < c; j++) {
                x_hat[i][j] = (x[i][j] - moving_mean[j]) / sqrt(moving_var[j] + eps);
            }
        }
    }
    // Scale and shift.
    for (i = 0; i < n; i++) {
        for (j = 0; j < c; j++) {
            out[i][j] = gamma[j] * x_hat[i][j] + beta[j];
        }
    }
}

int main()
{
    int i, j;
    double eps, momentum;
    double x[100][100];
    double gamma[100], beta[100];
    scanf("%d%d%d%lf%lf", &n, &c, &n_batch, &eps, &momentum);
    for (i = 0; i < n; i++) {
        for (j = 0; j < c; j++) {
            scanf("%lf", &x[i][j]);
        }
    }
    for (i = 0; i < c; i++) scanf("%lf", &gamma[i]);
    for (i = 0; i < c; i++) scanf("%lf", &beta[i]);
    for (i = 0; i < c; i++) scanf("%lf", &moving_mean[i]);
    for (i = 0; i < c; i++) scanf("%lf", &moving_var[i]);
    // n_batch training passes, then one inference pass.
    for (i = 0; i < n_batch; i++) {
        batch_norm(x, gamma, beta, true, eps, momentum);
    }
    batch_norm(x, gamma, beta, false, eps, momentum);
    for (i = 0; i < n; i++) {
        for (j = 0; j < c; j++) {
            printf("%.4lf ", out[i][j]);
        }
        printf("\n");
    }
    return 0;
}
C (clang11) solution, runtime: 2ms, memory: 364K, submitted: 2020-12-19 16:37:03
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define max 100
#define scanf_s scanf

/* Read n doubles into len[]. */
void scan(double len[], int n)
{
    for (int i = 0; i < n; i++) {
        scanf_s("%lf", &len[i]);
    }
}

int main()
{
    double X[max][max], out[max][max], X_hat[max][max];
    int n, c, n_ba;
    double eps, mo, tmp;
    double gam[max], beta[max], mov_mean[max], mov_var[max], mu[max], var[max];
    scanf_s("%d%d%d", &n, &c, &n_ba);
    scanf_s("%lf%lf", &eps, &mo);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            scanf_s("%lf", &X[i][j]);
        }
    }
    scan(gam, c);
    scan(beta, c);
    scan(mov_mean, c);
    scan(mov_var, c);
    /* Per-column mean of the batch. */
    for (int i = 0; i < c; i++) {
        tmp = 0.0;
        for (int j = 0; j < n; j++) {
            tmp += X[j][i] / n;
        }
        mu[i] = tmp;
    }
    /* Per-column biased variance of the batch. */
    for (int i = 0; i < c; i++) {
        tmp = 0.0;
        for (int j = 0; j < n; j++) {
            tmp += (X[j][i] - mu[i]) * (X[j][i] - mu[i]) / n;
        }
        var[i] = tmp;
    }
    /* The batch is identical in every pass, so each training pass only
       changes the running statistics (X_hat here is overwritten below). */
    while (n_ba--) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < c; j++) {
                X_hat[i][j] = (X[i][j] - mu[j]) / sqrt(var[j] + eps);
            }
        }
        for (int i = 0; i < c; i++) {
            mov_mean[i] = mo * mov_mean[i] + (1.0 - mo) * mu[i];
            mov_var[i] = mo * mov_var[i] + (1.0 - mo) * var[i];
        }
    }
    /* Inference pass: normalize with the running statistics. */
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            X_hat[i][j] = (X[i][j] - mov_mean[j]) / sqrt(mov_var[j] + eps);
        }
    }
    /* Scale and shift. */
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            out[i][j] = gam[j] * X_hat[i][j] + beta[j];
        }
    }
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            printf("%lf ", out[i][j]);
        }
        printf("\n");
    }
    return 0;
}
C++ (clang++11) solution, runtime: 4ms, memory: 380K, submitted: 2020-12-19 15:35:11
#include <bits/stdc++.h>
#define longlong ll
#define MEM(a,b) memset(a,b,sizeof(a))
using namespace std;

double gamm[25];
double beta[25] = {0};
double mmean[25] = {0};
double mvar[25] = {0};
double x[25][25] = {0};
double meanx[25] = {0};
double varx[25] = {0};

// Per-column mean of the batch.
void mean(int n, int c)
{
    for (int i = 0; i < c; i++) {
        double sum = 0;
        for (int j = 0; j < n; j++) {
            sum += x[j][i];
        }
        meanx[i] = sum / n;
    }
    return;
}

// Per-column biased variance of the batch.
void var(int n, int c)
{
    for (int i = 0; i < c; i++) {
        double sum = 0;
        for (int j = 0; j < n; j++) {
            sum += pow(x[j][i] - meanx[i], 2);
        }
        varx[i] = sum / n;
    }
}

int main()
{
    int n, c, n_batch;
    double eps, mom;
    cin >> n >> c >> n_batch >> eps >> mom;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            cin >> x[i][j];
        }
    }
    for (int i = 0; i < c; i++) cin >> gamm[i];
    for (int i = 0; i < c; i++) cin >> beta[i];
    for (int i = 0; i < c; i++) cin >> mmean[i];
    for (int i = 0; i < c; i++) cin >> mvar[i];
    mean(n, c);
    var(n, c);
    // The batch never changes, so each training pass only updates the running stats.
    while (n_batch--) {
        for (int i = 0; i < c; i++) {
            mmean[i] = mom * mmean[i] + (1.0 - mom) * meanx[i];
        }
        for (int i = 0; i < c; i++) {
            mvar[i] = mom * mvar[i] + (1.0 - mom) * varx[i];
        }
    }
    // Inference: normalize in place with the running statistics.
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            x[i][j] = (x[i][j] - mmean[j]) / sqrt(mvar[j] + eps);
        }
    }
    // Scale and shift.
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            x[i][j] = gamm[j] * x[i][j] + beta[j];
        }
    }
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < c; j++) {
            printf("%.4f ", x[i][j]);
        }
        cout << endl;
    }
    return 0;
}