
[23] Does Batch Normalization Make Regularizers Useless? A Look at How the Two Interact

Colab link

Yesterday we looked at L1 and L2 regularizers. It turns out regularizers also have an interesting relationship with Batch Normalization: the article L2 Regularization and Batch Norm discusses the curious issue that arises when BN and regularizers are used at the same time.

First, recall that applying a regularizer multiplies the weights by a decay factor smaller than 1 at every update step, nudging them toward 0. With learning rate α and regularization coefficient c, this factor is (1 − αc), which we'll call λ.

https://ithelp.ithome.com.tw/upload/images/20211007/20107299j0nWjCwr4I.png
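
In case the image doesn't render, the update it depicts should be roughly the following (assuming vanilla SGD with learning rate α and L2 coefficient c):

    w_{t+1} = w_t - \alpha\left(\frac{\partial L}{\partial w_t} + c\,w_t\right)
            = (1 - \alpha c)\,w_t - \alpha\,\frac{\partial L}{\partial w_t},
    \qquad \lambda := 1 - \alpha c < 1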

On the other hand, the output of a Conv or Dense layer usually passes through a BN layer next. BN computes the mean and variance over the current batch and normalizes the output:

https://ithelp.ithome.com.tw/upload/images/20211007/20107299WByevLFMYM.png
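
Spelled out (per channel, over a batch of m values, with learnable scale γ and shift β):

    \mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i,\qquad
    \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_B)^2,\qquad
    \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}},\qquad
    y_i = \gamma\,\hat{x}_i + \beta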

Now, scaling the weights by λ also scales the layer's output by λ, so the batch mean is scaled by λ and the batch standard deviation is scaled by λ as well (the variance by λ²). Plugging this back into the BN formula, the λ in the numerator and denominator cancel exactly, which means that after the BN layer the regularizer has no effect on the output at all.

https://ithelp.ithome.com.tw/upload/images/20211007/201072991S3AiOlNB7.png
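
The cancellation the image shows, written out (ignoring ε):

    \frac{\lambda x_i - \lambda\mu_B}{\sqrt{\lambda^2\sigma_B^2}}
    = \frac{\lambda\,(x_i - \mu_B)}{\lambda\,\sigma_B}
    = \frac{x_i - \mu_B}{\sigma_B}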

So if that's the case, what do we still need regularizers for? Even though BN cancels out their effect on the output, they can still be used to constrain the magnitude of the weights, which actually gives regularizers a new role!
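
We can also sanity-check the cancellation numerically. The snippet below (a minimal NumPy sketch, not taken from the Colab) scales a weight matrix by λ and confirms that the batch-normalized output barely moves:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))   # a batch of inputs
w = rng.normal(size=(8, 4))    # some layer weights
lam = 0.7                      # pretend the regularizer shrank the weights by this factor

def batch_norm(z, eps=1e-5):
    # Batch normalization over the batch axis, without the learnable gamma/beta.
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

out_original = batch_norm(x @ w)
out_decayed = batch_norm(x @ (lam * w))
print(np.abs(out_original - out_decayed).max())  # close to 0: BN cancels the scaling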

With that in mind, we'll train three models and observe how their weights change (a small helper callback for printing the weights is sketched right after this list):

  1. BN only
  2. L2 regularizers only
  3. BN and L2 regularizers together
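
Each experiment below passes a PrintWeightsCallback to model.fit() to print a few kernel weights of one convolution layer at the end of every epoch. The callback itself isn't listed in this post (it lives in the Colab); a minimal reconstruction might look like this (hypothetical, and the layer actually picked may differ):

import tensorflow as tf

class PrintWeightsCallback(tf.keras.callbacks.Callback):
    """Print a few kernel weights of the last Conv2D layer after each epoch."""
    def on_epoch_end(self, epoch, logs=None):
        conv_layers = [l for l in self.model.layers
                       if isinstance(l, tf.keras.layers.Conv2D)]
        layer = conv_layers[-1]
        kernel = layer.get_weights()[0]          # the convolution kernel tensor
        print(f'{layer.name} layer:', kernel.flatten()[:10])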

Experiment 1: BN only

# MobileNetV2-style inverted-residual (bottleneck) block, with BN after every convolution.
def bottleneck(net, filters, out_ch, strides, shortcut=True, zero_pad=False):

    padding = 'valid' if zero_pad else 'same'
    shortcut_net = net

    net = tf.keras.layers.Conv2D(filters * 6, 1, use_bias=False, padding='same')(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)
    if zero_pad:
        net = tf.keras.layers.ZeroPadding2D(padding=((0, 1), (0, 1)))(net)

    net = tf.keras.layers.DepthwiseConv2D(3, strides=strides, use_bias=False, padding=padding)(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    net = tf.keras.layers.Conv2D(out_ch, 1, use_bias=False, padding='same')(net)
    net = tf.keras.layers.BatchNormalization()(net)

    if shortcut:
        net = tf.keras.layers.Add()([net, shortcut_net])

    return net

def get_mobilenetV2_bn(shape):
    input_node = tf.keras.layers.Input(shape=shape)

    net = tf.keras.layers.Conv2D(32, 3, (2, 2), use_bias=False, padding='same')(input_node)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    net = tf.keras.layers.DepthwiseConv2D(3, use_bias=False, padding='same')(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)
    net = tf.keras.layers.Conv2D(16, 1, use_bias=False, padding='same')(net)
    net = tf.keras.layers.BatchNormalization()(net)

    net = bottleneck(net, 16, 24, (2, 2), shortcut=False, zero_pad=True)  # block_1
    net = bottleneck(net, 24, 24, (1, 1), shortcut=True)  # block_2

    net = bottleneck(net, 24, 32, (2, 2), shortcut=False, zero_pad=True)  # block_3
    net = bottleneck(net, 32, 32, (1, 1), shortcut=True)  # block_4
    net = bottleneck(net, 32, 32, (1, 1), shortcut=True)  # block_5

    net = bottleneck(net, 32, 64, (2, 2), shortcut=False, zero_pad=True)  # block_6
    net = bottleneck(net, 64, 64, (1, 1), shortcut=True)  # block_7
    net = bottleneck(net, 64, 64, (1, 1), shortcut=True)  # block_8
    net = bottleneck(net, 64, 64, (1, 1), shortcut=True)  # block_9

    net = bottleneck(net, 64, 96, (1, 1), shortcut=False)  # block_10
    net = bottleneck(net, 96, 96, (1, 1), shortcut=True)  # block_11
    net = bottleneck(net, 96, 96, (1, 1), shortcut=True)  # block_12

    net = bottleneck(net, 96, 160, (2, 2), shortcut=False, zero_pad=True)  # block_13
    net = bottleneck(net, 160, 160, (1, 1), shortcut=True)  # block_14
    net = bottleneck(net, 160, 160, (1, 1), shortcut=True)  # block_15

    net = bottleneck(net, 160, 320, (1, 1), shortcut=False)  # block_16

    net = tf.keras.layers.Conv2D(1280, 1, use_bias=False, padding='same')(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    return input_node, net
input_node, net = get_mobilenetV2_bn((224,224,3))
net = tf.keras.layers.GlobalAveragePooling2D()(net)
net = tf.keras.layers.Dense(NUM_OF_CLASS)(net)

model = tf.keras.Model(inputs=[input_node], outputs=[net])

model.compile(
    optimizer=tf.keras.optimizers.SGD(LR),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

history = model.fit(
    ds_train,
    epochs=EPOCHS,
    validation_data=ds_test,
    verbose=True,
    callbacks=[PrintWeightsCallback()])
Output:

Epoch 1/10
conv2d_70 layer: [-0.02620085  0.10190549 -0.11347758 -0.120565    0.10449789  0.09324147 -0.02767587  0.03863515  0.12804998  0.10915002]
(omitted)
Epoch 10/10
conv2d_70 layer: [-0.17744583  0.17197324 -0.25094765 -0.4260909  -0.2861711   0.12653658  -0.18487974  0.10149723 -0.08124655  0.30025354]

https://ithelp.ithome.com.tw/upload/images/20211007/20107299EfhQsv4EQx.png

We can see that the model's accuracy improves, and the weights mostly stay on the order of 0.x.

Experiment 2: L2 regularizers only

# Same bottleneck block, but with the BN layers removed and L2 regularizers on the convolutions.
def bottleneck(net, filters, out_ch, strides, regularizer, shortcut=True, zero_pad=False):

    padding = 'valid' if zero_pad else 'same'
    shortcut_net = net

    net = tf.keras.layers.Conv2D(filters * 6, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)
    if zero_pad:
        net = tf.keras.layers.ZeroPadding2D(padding=((0, 1), (0, 1)))(net)

    net = tf.keras.layers.DepthwiseConv2D(3, strides=strides, use_bias=False, padding=padding, depthwise_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    net = tf.keras.layers.Conv2D(out_ch, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)

    if shortcut:
        net = tf.keras.layers.Add()([net, shortcut_net])

    return net

def get_mobilenetV2_l2(shape, regularizer):
    input_node = tf.keras.layers.Input(shape=shape)

    net = tf.keras.layers.Conv2D(32, 3, (2, 2), use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(input_node)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    net = tf.keras.layers.DepthwiseConv2D(3, use_bias=False, padding='same', depthwise_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)
    net = tf.keras.layers.Conv2D(16, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)

    net = bottleneck(net, 16, 24, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_1
    net = bottleneck(net, 24, 24, (1, 1), regularizer, shortcut=True)  # block_2

    net = bottleneck(net, 24, 32, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_3
    net = bottleneck(net, 32, 32, (1, 1), regularizer, shortcut=True)  # block_4
    net = bottleneck(net, 32, 32, (1, 1), regularizer, shortcut=True)  # block_5

    net = bottleneck(net, 32, 64, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_6
    net = bottleneck(net, 64, 64, (1, 1), regularizer, shortcut=True)  # block_7
    net = bottleneck(net, 64, 64, (1, 1), regularizer, shortcut=True)  # block_8
    net = bottleneck(net, 64, 64, (1, 1), regularizer, shortcut=True)  # block_9

    net = bottleneck(net, 64, 96, (1, 1), regularizer, shortcut=False)  # block_10
    net = bottleneck(net, 96, 96, (1, 1), regularizer, shortcut=True)  # block_11
    net = bottleneck(net, 96, 96, (1, 1), regularizer, shortcut=True)  # block_12

    net = bottleneck(net, 96, 160, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_13
    net = bottleneck(net, 160, 160, (1, 1), regularizer, shortcut=True)  # block_14
    net = bottleneck(net, 160, 160, (1, 1), regularizer, shortcut=True)  # block_15

    net = bottleneck(net, 160, 320, (1, 1), regularizer, shortcut=False)  # block_16

    net = tf.keras.layers.Conv2D(1280, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    return input_node, net
REGULARIZER=tf.keras.regularizers.l2(0.001)

input_node, net = get_mobilenetV2_l2((224,224,3), REGULARIZER)
net = tf.keras.layers.GlobalAveragePooling2D()(net)
net = tf.keras.layers.Dense(NUM_OF_CLASS, kernel_regularizer=REGULARIZER, bias_regularizer=REGULARIZER)(net)

model = tf.keras.Model(inputs=[input_node], outputs=[net])

model.compile(
    optimizer=tf.keras.optimizers.SGD(LR),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

history = model.fit(
    ds_train,
    epochs=EPOCHS,
    validation_data=ds_test,
    verbose=True,
    callbacks=[PrintWeightsCallback()])

Output:

Epoch 1/10
conv2d_140 layer: [ 0.13498078 -0.12406565  0.05617869  0.1112484   0.0101945  -0.09676153  0.05650736  0.03752249 -0.01030101  0.08091953]
(omitted)
Epoch 10/10
conv2d_140 layer: [ 0.00809666 -0.00744193  0.00336981  0.00667309  0.0006115  -0.00580412  0.00338953  0.00225074 -0.00061789  0.00485385]

Accuracy does not improve, but the weights are clearly affected by the regularizer: by the end they sit around 0.00x in magnitude.

https://ithelp.ithome.com.tw/upload/images/20211007/20107299DYLhhTLQ2D.png

Experiment 3: BN and L2 regularizers together

# Same bottleneck block again, this time with both BN and L2 regularizers.
def bottleneck(net, filters, out_ch, strides, regularizer, shortcut=True, zero_pad=False):

    padding = 'valid' if zero_pad else 'same'
    shortcut_net = net

    net = tf.keras.layers.Conv2D(filters * 6, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)
    if zero_pad:
        net = tf.keras.layers.ZeroPadding2D(padding=((0, 1), (0, 1)))(net)

    net = tf.keras.layers.DepthwiseConv2D(3, strides=strides, use_bias=False, padding=padding, depthwise_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    net = tf.keras.layers.Conv2D(out_ch, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.BatchNormalization()(net)

    if shortcut:
        net = tf.keras.layers.Add()([net, shortcut_net])

    return net

def get_mobilenetV2_bn_l2(shape, regularizer):
    input_node = tf.keras.layers.Input(shape=shape)

    net = tf.keras.layers.Conv2D(32, 3, (2, 2), use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(input_node)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    net = tf.keras.layers.DepthwiseConv2D(3, use_bias=False, padding='same', depthwise_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)
    net = tf.keras.layers.Conv2D(16, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.BatchNormalization()(net)

    net = bottleneck(net, 16, 24, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_1
    net = bottleneck(net, 24, 24, (1, 1), regularizer, shortcut=True)  # block_2

    net = bottleneck(net, 24, 32, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_3
    net = bottleneck(net, 32, 32, (1, 1), regularizer, shortcut=True)  # block_4
    net = bottleneck(net, 32, 32, (1, 1), regularizer, shortcut=True)  # block_5

    net = bottleneck(net, 32, 64, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_6
    net = bottleneck(net, 64, 64, (1, 1), regularizer, shortcut=True)  # block_7
    net = bottleneck(net, 64, 64, (1, 1), regularizer, shortcut=True)  # block_8
    net = bottleneck(net, 64, 64, (1, 1), regularizer, shortcut=True)  # block_9

    net = bottleneck(net, 64, 96, (1, 1), regularizer, shortcut=False)  # block_10
    net = bottleneck(net, 96, 96, (1, 1), regularizer, shortcut=True)  # block_11
    net = bottleneck(net, 96, 96, (1, 1), regularizer, shortcut=True)  # block_12

    net = bottleneck(net, 96, 160, (2, 2), regularizer, shortcut=False, zero_pad=True)  # block_13
    net = bottleneck(net, 160, 160, (1, 1), regularizer, shortcut=True)  # block_14
    net = bottleneck(net, 160, 160, (1, 1), regularizer, shortcut=True)  # block_15

    net = bottleneck(net, 160, 320, (1, 1), regularizer, shortcut=False)  # block_16

    net = tf.keras.layers.Conv2D(1280, 1, use_bias=False, padding='same', kernel_regularizer=regularizer, bias_regularizer=regularizer)(net)
    net = tf.keras.layers.BatchNormalization()(net)
    net = tf.keras.layers.ReLU(max_value=6)(net)

    return input_node, net
REGULARIZER=tf.keras.regularizers.l2(0.001)

input_node, net = get_mobilenetV2_bn_l2((224,224,3), REGULARIZER)
net = tf.keras.layers.GlobalAveragePooling2D()(net)
net = tf.keras.layers.Dense(NUM_OF_CLASS, kernel_regularizer=REGULARIZER, bias_regularizer=REGULARIZER)(net)

model = tf.keras.Model(inputs=[input_node], outputs=[net])

model.compile(
    optimizer=tf.keras.optimizers.SGD(LR),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

history = model.fit(
    ds_train,
    epochs=EPOCHS,
    validation_data=ds_test,
    verbose=True,
    callbacks=[PrintWeightsCallback()])

Output:

Epoch 1/10
conv2d layer: [ 0.08555499  0.12598534 -0.03244241  0.11502795  0.05982415  0.11824064  0.13084657  0.08825392  0.02584527 -0.0615561 ]
(omitted)
Epoch 10/10
conv2d layer: [-0.12869535 -0.09060557  0.01533028 -0.05165869 -0.00263854 -0.06649228 -0.02129784  0.09801372  0.12735689  0.0331211 ]

https://ithelp.ithome.com.tw/upload/images/20211007/20107299KfQ1sZvfy5.png

Accuracy improves, and the weights do look more suppressed than in experiment 1: the largest is around 0.12, but most are on the order of 0.0x.
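
Eyeballing ten printed weights is a rough gauge; to compare the three models more systematically, you could collect the absolute kernel values across a whole model, e.g. (a quick sketch, not part of the original Colab; model is whichever trained model you want to inspect):

import numpy as np

def kernel_abs_values(model):
    # Gather the absolute values of every conv/dense kernel in the model.
    values = []
    for layer in model.layers:
        for w in layer.trainable_weights:
            if 'kernel' in w.name:   # matches kernel and depthwise_kernel, skips BN gamma/beta and biases
                values.append(np.abs(w.numpy()).flatten())
    return np.concatenate(values)

abs_w = kernel_abs_values(model)
print('mean |w| = %.4f, 99th percentile = %.4f'
      % (abs_w.mean(), np.percentile(abs_w, 99)))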

As I mentioned yesterday, in practice I rarely apply regularizers in my training tasks. Experiment 3 shows that regularizers can indeed keep the weights small, but the weight values after BN in experiment 1 are also within an acceptable range.

ref:

https://blog.janestreet.com/l2-regularization-and-batch-norm/

