Purpose of the additional parameters in the quantization nodes of TensorFlow quantization-aware training


Currently, I am trying to understand quantization-aware training in TensorFlow. I understand that fake quantization nodes are needed to gather dynamic-range information as calibration for the quantization operations. When I compare the same model once as a "plain" Keras model and once as a quantization-aware model, the latter has more parameters, which makes sense since we need to store the minimum and maximum values of the activations during quantization-aware training.

Consider the following example:

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model

def get_model(in_shape):
  inpt = layers.Input(shape=in_shape)
  dense1 = layers.Dense(256, activation="relu")(inpt)
  dense2 = layers.Dense(128, activation="relu")(dense1)
  out = layers.Dense(10, activation="softmax")(dense2)

  model = Model(inpt, out)

  return model

standard = get_model((784,))
standard.summary()

This model has the following summary:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 784)]             0         
_________________________________________________________________
dense_3 (Dense)              (None, 256)               200960    
_________________________________________________________________
dense_4 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_5 (Dense)              (None, 10)                1290      
=================================================================
Total params: 235,146
Trainable params: 235,146
Non-trainable params: 0
_________________________________________________________________

However, if I make my model quantization aware, it shows the following summary:

import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model

# q_aware stands for quantization aware.
q_aware_model = quantize_model(standard)
q_aware_model.summary()

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 784)]             0         
_________________________________________________________________
quantize_layer (QuantizeLaye (None, 784)               3         
_________________________________________________________________
quant_dense_3 (QuantizeWrapp (None, 256)               200965    
_________________________________________________________________
quant_dense_4 (QuantizeWrapp (None, 128)               32901     
_________________________________________________________________
quant_dense_5 (QuantizeWrapp (None, 10)                1295      
=================================================================
Total params: 235,164
Trainable params: 235,146
Non-trainable params: 18
_________________________________________________________________

I have two questions in particular:

  1. What is the purpose of the quantize_layer with 3 parameters that sits after the input layer?
  2. Why do we have 5 additional non-trainable parameters per layer, and what exactly are they used for?

I appreciate any hints or additional material that helps me (and others who stumble upon this question) understand quantization-aware training.

tensorflow tensorflow-lite
1 Answer
  1. The quantize layer converts the floating-point inputs to int8. Its quantization parameters are used to compute the min/max and zero point of its output (see the first sketch below).

  2. The quantized dense layers need some additional parameters: min/max for the kernel and min/max/zero point for the output activation (see the second sketch below).
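
To make concrete what the recorded min/max values are for, here is a minimal sketch of the standard asymmetric int8 scheme: a scale and zero point are derived from an observed (min, max) range and then used to map float values onto the int8 grid. The function names are illustrative only; this is not TFMOT's internal code.

import numpy as np

def affine_int8_params(x_min, x_max):
    # Derive scale and zero point from an observed (min, max) range,
    # making sure the representable range contains zero.
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)
    qmin, qmax = -128, 127
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    return scale, zero_point

def quantize(x, scale, zero_point):
    # Map float values onto the int8 grid.
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

scale, zp = affine_int8_params(x_min=-1.0, x_max=3.0)
print(scale, zp, quantize(np.array([-1.0, 0.0, 1.5, 3.0]), scale, zp))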
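
For item 2, one way to see which extra variables each QuantizeWrapper holds is to list its non-trainable weights; their exact names (e.g. kernel min/max, post-activation min/max) depend on the tensorflow_model_optimization version. Assuming the q_aware_model built above:

# Print the non-trainable weights added by the quantize wrappers.
for layer in q_aware_model.layers:
    extra = [w.name for w in layer.non_trainable_weights]
    if extra:
        print(layer.name, extra)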
