ConvTranspose2dQAT

Function Usage

Constructs the quantization-aware training (QAT) counterpart of the ConvTranspose2d operator.

Prototype

API for operator construction from scratch:

amct_pytorch.nn.module.quantization.conv_transpose_2d.ConvTranspose2dQAT(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias, padding_mode, device, dtype, config)

API for construction based on the native operator:

amct_pytorch.nn.module.quantization.conv_transpose_2d.ConvTranspose2dQAT.from_float(mod, config)

Parameters

Table 1 Parameters in the API for operator construction from scratch

Parameter

Input/Output

Meaning

Restriction

in_channels

Input

Number of input channels.

Type: int

This parameter is mandatory.

out_channels

Input

Number of output channels.

Type: int

This parameter is mandatory.

kernel_size

Input

Size of the convolution kernel.

An int or a tuple.

This parameter is mandatory.

stride

Input

Convolution stride.

An int or a tuple.

The default value is 1.

padding

Input

Padding size.

An int or a tuple.

The default value is 0.

dilation

Input

Spacing between kernel elements.

An int or a tuple.

The default value is 1.

groups

Input

Number of blocked connections from input channels to output channels.

Type: int

Default value: 1

bias

Input

Whether to add a learnable bias to the output.

Type: bool

The default value is True.

padding_mode

Input

Padding mode.

Only 'zeros' is supported.

device

Input

Running device.

Default: None

dtype

Input

Torch data type.

Only torch.float32 is supported.

config

Input

Quantization configuration.

The following is a configuration example. For details about quantization configuration parameters, see Quantization Configuration Parameters.

config = {
    "retrain_enable":true,
    "retrain_data_config": {
        "dst_type": "INT8",
        "batch_num": 10,
        "fixed_min": False,
        "clip_min": -1.0,
        "clip_max": 1.0
    },
    "retrain_weight_config": {
        "dst_type": "INT8",
        "weights_retrain_algo": "arq_retrain",
        "channel_wise": False
    }
}

A dict.

Default: None
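In code, config is an ordinary Python dict. The sketch below assembles the documented example and runs a small sanity check before the dict would be passed to the constructor; the helper functions are illustrative and not part of the amct_pytorch API:

```python
# Illustrative helpers, not part of amct_pytorch: build the example QAT config
# and check that the documented keys are present with plausible values.

def make_qat_config(batch_num=10, channel_wise=False):
    """Return a quantization config dict matching the documented example."""
    return {
        "retrain_enable": True,  # Python booleans, not JSON true/false
        "retrain_data_config": {
            "dst_type": "INT8",
            "batch_num": batch_num,
            "fixed_min": False,
            "clip_min": -1.0,
            "clip_max": 1.0,
        },
        "retrain_weight_config": {
            "dst_type": "INT8",
            "weights_retrain_algo": "arq_retrain",
            "channel_wise": channel_wise,
        },
    }


def check_qat_config(config):
    """Raise ValueError if the documented keys are missing or mistyped."""
    if not isinstance(config.get("retrain_enable"), bool):
        raise ValueError("retrain_enable must be a bool")
    data = config.get("retrain_data_config", {})
    if data.get("dst_type") != "INT8":
        raise ValueError("only INT8 appears in the documented example")
    if data.get("clip_min", 0.0) >= data.get("clip_max", 1.0):
        raise ValueError("clip_min must be less than clip_max")
    return True
```

A call such as check_qat_config(make_qat_config()) returns True for the example configuration, and the resulting dict can be passed as the config argument.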

Table 2 Parameters in the API for construction based on native operators

Parameter

Input/Output

Description

Restriction

mod

Input

Native ConvTranspose2d operator to be quantized.

A torch.nn.Module.

config

Input

Quantization configuration.

The following is a configuration example. For details about quantization configuration parameters, see Quantization Configuration Parameters.

config = {
    "retrain_enable":true,
    "retrain_data_config": {
        "dst_type": "INT8",
        "batch_num": 10,
        "fixed_min": False,
        "clip_min": -1.0,
        "clip_max": 1.0
    },
    "retrain_weight_config": {
        "dst_type": "INT8",
        "weights_retrain_algo": "arq_retrain",
        "channel_wise": False
    }
}

A dict.

Default: None

Returns

A ConvTranspose2dQAT operator for subsequent quantization-aware training.

Calling Example

Construction from scratch:

from amct_pytorch.nn.module.quantization.conv_transpose_2d import ConvTranspose2dQAT

ConvTranspose2dQAT(in_channels=1, out_channels=1, kernel_size=1, stride=1,
                   padding=0, dilation=1, groups=1, bias=True,
                   padding_mode='zeros', device=None, dtype=None, config=None)

Construction based on the native operator:

import torch

from amct_pytorch.nn.module.quantization.conv_transpose_2d import ConvTranspose2dQAT

conv_transpose2d_op = torch.nn.ConvTranspose2d(in_channels=1, out_channels=1, kernel_size=1,
                                               stride=1, padding=0, dilation=1, groups=1,
                                               bias=True, padding_mode='zeros',
                                               device=None, dtype=None)
ConvTranspose2dQAT.from_float(mod=conv_transpose2d_op, config=None)
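For intuition, a QAT operator wraps the native layer with quantize-dequantize ("fake quantization") steps so training sees INT8 precision loss. The pure-Python sketch below shows symmetric INT8 quantize-dequantize over the [clip_min, clip_max] range from the config example; it is illustrative only and does not reproduce AMCT's actual algorithms (such as arq_retrain):

```python
def fake_quant_int8(values, clip_min=-1.0, clip_max=1.0):
    """Symmetric INT8 quantize-dequantize for a list of floats.

    Each value is clipped to [clip_min, clip_max], mapped onto the
    integer grid [-128, 127], then mapped back to float, modelling the
    precision loss that the QAT operator exposes during training.
    """
    scale = max(abs(clip_min), abs(clip_max)) / 127.0
    out = []
    for v in values:
        v = min(max(v, clip_min), clip_max)  # clip to the configured range
        q = round(v / scale)                 # quantize to the integer grid
        q = min(max(q, -128), 127)           # saturate to the INT8 range
        out.append(q * scale)                # dequantize back to float
    return out
```

For example, inputs outside the clip range collapse to the boundary (fake_quant_int8([2.0]) yields [1.0] with the defaults), which is why clip_min and clip_max must bracket the expected activation range.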