Conv3dQAT

Function Usage

Constructs the quantization-aware training (QAT) counterpart of the Conv3d operator.

Prototype

API for operator construction from scratch:

amct_pytorch.nn.module.quantization.conv3d.Conv3dQAT(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias, padding_mode, device, dtype, config)

API for construction based on the native operator:

amct_pytorch.nn.module.quantization.conv3d.Conv3dQAT.from_float(mod, config)

Parameters

Table 1 Parameters in the API for operator construction from scratch

in_channels (Input)
Number of input channels.
Type: int. This parameter is mandatory.

out_channels (Input)
Number of output channels.
Type: int. This parameter is mandatory.

kernel_size (Input)
Size of the convolution kernel.
Type: int or tuple. This parameter is mandatory.

stride (Input)
Convolution stride.
Type: int or tuple. Defaults to 1.

padding (Input)
Padding size.
Type: int or tuple. Defaults to 0.

dilation (Input)
Spacing between kernel elements.
Type: int or tuple. Defaults to 1.

groups (Input)
Number of blocked connections from input channels to output channels.
Type: int. Defaults to 1.

bias (Input)
Whether to add a learnable bias to the output.
Type: bool. Defaults to True.

padding_mode (Input)
Padding mode.
Must be 'zeros'.

device (Input)
Device on which the operator runs.
Defaults to None.

dtype (Input)
Torch data type.
Only torch.float32 is supported.

config (Input)
Quantization configuration. The following is a configuration example. For details about quantization configuration parameters, see Quantization Configuration Parameters.

config = {
    "retrain_enable": True,
    "retrain_data_config": {
        "dst_type": "INT8",
        "batch_num": 10,
        "fixed_min": False,
        "clip_min": -1.0,
        "clip_max": 1.0
    },
    "retrain_weight_config": {
        "dst_type": "INT8",
        "weights_retrain_algo": "arq_retrain",
        "channel_wise": False
    }
}

Type: dict. Defaults to None.
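The kernel_size, stride, padding, and dilation parameters determine the output volume size in the same way as for torch.nn.Conv3d. The standard per-axis formula can be checked with a few lines of plain Python (a sketch independent of amct_pytorch; the helper name is illustrative):

```python
import math

def conv3d_out_dim(n, kernel_size, stride=1, padding=0, dilation=1):
    """Output length along one spatial axis (depth, height, or width)
    of a 3D convolution, using the standard convolution size formula."""
    return math.floor(
        (n + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
    )

# A 16x16x16 input volume with a 3x3x3 kernel, stride 1, no padding:
print(conv3d_out_dim(16, kernel_size=3))              # -> 14

# Padding 1 preserves the spatial size for a 3x3x3 kernel:
print(conv3d_out_dim(16, kernel_size=3, padding=1))   # -> 16
```

The same formula applies independently along each of the three spatial axes when kernel_size, stride, padding, or dilation are given as tuples.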

Table 2 Parameters in the API for construction based on the native operator

mod (Input)
Native Conv3d operator to be quantized.
Type: torch.nn.Module.

config (Input)
Quantization configuration. The following is a configuration example. For details about quantization configuration parameters, see Quantization Configuration Parameters.

config = {
    "retrain_enable": True,
    "retrain_data_config": {
        "dst_type": "INT8",
        "batch_num": 10,
        "fixed_min": False,
        "clip_min": -1.0,
        "clip_max": 1.0
    },
    "retrain_weight_config": {
        "dst_type": "INT8",
        "weights_retrain_algo": "arq_retrain",
        "channel_wise": False
    }
}

Type: dict. Defaults to None.
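Because config defaults to None and the same nested dict structure is passed to both construction APIs, it can be convenient to build it from a baseline with selective overrides. A minimal sketch in plain Python (the helper and the DEFAULT_QAT_CONFIG name are illustrative, not part of the amct_pytorch API; the keys mirror the example configuration above):

```python
import copy

# Baseline mirroring the documented example configuration.
DEFAULT_QAT_CONFIG = {
    "retrain_enable": True,
    "retrain_data_config": {
        "dst_type": "INT8",
        "batch_num": 10,
        "fixed_min": False,
        "clip_min": -1.0,
        "clip_max": 1.0,
    },
    "retrain_weight_config": {
        "dst_type": "INT8",
        "weights_retrain_algo": "arq_retrain",
        "channel_wise": False,
    },
}

def make_qat_config(**overrides):
    """Deep-copy the baseline config, then merge overrides one level deep,
    e.g. retrain_weight_config={"channel_wise": True}."""
    cfg = copy.deepcopy(DEFAULT_QAT_CONFIG)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(cfg.get(key), dict):
            cfg[key].update(value)   # merge into the nested sub-config
        else:
            cfg[key] = value         # replace a top-level entry
    return cfg

cfg = make_qat_config(retrain_weight_config={"channel_wise": True})
print(cfg["retrain_weight_config"]["channel_wise"])  # -> True
print(cfg["retrain_data_config"]["batch_num"])       # -> 10
```

The deep copy keeps the baseline untouched, so per-layer configs can be derived repeatedly without cross-contamination.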

Returns

A QAT operator of Conv3d for subsequent quantization-aware training.

Example

Construction from scratch:

from amct_pytorch.nn.module.quantization.conv3d import Conv3dQAT

Conv3dQAT(in_channels=1, out_channels=1, kernel_size=1, stride=1,
          padding=0, dilation=1, groups=1, bias=True,
          padding_mode='zeros', device=None, dtype=None, config=None)

Construction based on the native operator:

import torch

from amct_pytorch.nn.module.quantization.conv3d import Conv3dQAT

conv3d_op = torch.nn.Conv3d(in_channels=1, out_channels=1, kernel_size=1, stride=1,
                            padding=0, dilation=1, groups=1, bias=True,
                            padding_mode='zeros', device=None, dtype=None)
Conv3dQAT.from_float(mod=conv3d_op, config=None)