Inference Scenario Configuration File
Run the following command to convert the trained model and parameters into the format that can be used by the inference service:

```shell
python3 scripts/infer_service_generation.py --model_params=model_service_param.json
```
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| task_name | String | None | (Mandatory) Task name. |
| project_list | Array[Object] | None | (Mandatory) Project list. |
| custom_params | | None | (Optional) User-defined field. |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| scene | String | None | (Mandatory) Scenario name. |
| project_name | String | None | (Mandatory) Project name. |
| model_list | Array[Object] | None | (Mandatory) Model list. |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| application | String | None | (Mandatory) Application scenario. |
| model_name | String | None | (Optional) Model name. Mandatory in the wafer detection scenario. |
| result_path | String | None | (Mandatory) Path of the folder containing the OM file and the train_params.config file generated during training. |
| ref_path | String | None | (Optional) Path for storing the reference image. Used in the wafer detection scenario. |
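Taken together, the three tables above describe the nested structure of model_service_param.json: a task contains a project list, and each project contains a model list. The following is a minimal sketch only; the task, project, model, and path names are illustrative, and the exact value of application depends on the supported scenario list:

```json
{
    "task_name": "assm1-2",
    "project_list": [
        {
            "scene": "wafer",
            "project_name": "project_wafer",
            "model_list": [
                {
                    "application": "wafer_detection",
                    "model_name": "wafer_model",
                    "result_path": "output/train_result/",
                    "ref_path": "output/ref_images/"
                }
            ]
        }
    ]
}
```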
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| max_infer_timeout | String | 5000 | (Optional) Maximum inference latency. |
| FirstDetectionFilter | String | None | (Optional) Filters detected object boxes by area and confidence. Example: {'Type': 'Area', 'TopN': 0, 'BottomN': 0, 'MinArea': 0, 'MaxArea': 0, 'ConfThresh': 0.0} |
| tag_1 | String | None | (Optional) Name of the tag to be detected. Used in the tag defect detection scenario. 1 indicates the label sequence number. |
| tag_1_params | String | None | (Optional) Edge defect parameter configuration. Used in the tag defect detection scenario. 1 indicates the label sequence number. Example: {'edge_defect_thres_all': 10, 'shape': (1, 128, 512, 3), 'physical_size': [70, 30]} |
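Because each of these fields is typed as String, nested structures such as FirstDetectionFilter are passed as string literals rather than JSON objects. A hedged sketch of such a block, with purely illustrative values:

```json
{
    "max_infer_timeout": "5000",
    "FirstDetectionFilter": "{'Type': 'Area', 'TopN': 0, 'BottomN': 0, 'MinArea': 100, 'MaxArea': 50000, 'ConfThresh': 0.5}",
    "tag_1": "scratch",
    "tag_1_params": "{'edge_defect_thres_all': 10, 'shape': (1, 128, 512, 3), 'physical_size': [70, 30]}"
}
```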
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| infer_scene | String | None | (Mandatory) Name of the inference scenario. |
| ProjectName | String | None | (Mandatory) Project name. |
| TaskName | String | None | (Mandatory) Task name. |
| config_path | String | None | (Optional) Configuration file path. Used in semiconductor detection scenarios. |
| pipeline_path | String | None | (Mandatory) Pipeline file path. |
| default_device_id | int | 0 | (Mandatory) ID of the processor on which the service runs. |
| custom_params | Dict | None | (Optional) User-defined parameters. |
A compilation example of model_configs.json is as follows:

```json
{
    "model_configs": [
        {
            "infer_scene": "wafer",
            "ProjectName": "project_wafer",
            "TaskName": "assm1-2",
            "config_path": "config/models/project_jsx/assm1-2/1/aoi_ai_config.json",
            "pipeline_path": "config/models/project_jsx/assm1-2/1/aoi_ai_sdk.pipeline",
            "default_device_id": 0,
            "custom_params": {
                "maximum_tag_num": "100"
            }
        }
    ]
}
```
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| station_names | Dict | None | (Mandatory) Mapping of machine name to ID. Currently, only two machines are supported: "RudolphTechnologies": 0, "Camtek": 1. |
| ref_key_exclude | List | None | (Optional) IDs of the processor layers that require center cropping. |
| labels | List | None | (Mandatory) Supported processor processes. |
| convert_map | Dict | None | (Mandatory) The format is as follows: "convert_map": {"Processor ID": {"Processor process": {"Processor layer ID": ["M1"], "classes": ["A", "B"]}}} |
| reg_range_lut | Dict | None | (Mandatory) The format is as follows: "reg_range_lut": {"Processor ID and layer ID": [[[X-axis value range on machine R], [X-axis value range on machine C]], [[Y-axis value range on machine R], [Y-axis value range on machine C]], registration threshold, original die width on the workstation, original die height on the workstation, width of the processor grayscale reference, height of the processor grayscale reference, ratio of the second microscope to the first microscope]}. For details, see Table 7. |
| preprocess.blur_threds | String | 7 | (Mandatory) Threshold for blur filtering. |
| model_infer.model_num | int | 4 | (Mandatory) Number of inference models. |
| postprocess.reject_prob_thred_and_version.reject_prob_thred | Float | 0.1 | (Mandatory) Inference rejection threshold. |
| postprocess.reject_prob_thred_and_version.version | String | None | (Mandatory) Version number of an inference model. |
| Index | Type | Value Range | Description |
|---|---|---|---|
| 0 | list | Two integer values within [0, 100000] | Value range on the X-axis. |
| 1 | list | Two integer values within [0, 100000] | Value range on the Y-axis. |
| 2 | float | [0, 1] | Registration threshold. |
| 3 | float | (0, 100000) | Width of a die. |
| 4 | float | (0, 100000) | Height of a die. |
| 5 | float | (0, 100000) | Grayscale image width. |
| 6 | float | (0, 100000) | Grayscale image height. |
| 7 | float | [0.1, 10] | Magnification ratio. |
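The index ranges in Table 7 lend themselves to a quick sanity check before a config is deployed. The following validator is a sketch only, not part of the product; the function name is hypothetical and the sample entry mirrors the 00101M1 example shown later:

```python
# Hypothetical validator for one reg_range_lut value list, based on the
# index ranges in Table 7. Illustrative only; not a product API.

def validate_reg_range_entry(entry):
    """Return True if the 8-element entry satisfies the ranges in Table 7."""
    if len(entry) != 8:
        return False
    # Indexes 0 and 1: pairs of [low, high] integer ranges within [0, 100000].
    for axis in (entry[0], entry[1]):
        for rng in axis:
            if len(rng) != 2:
                return False
            if not all(isinstance(v, int) and 0 <= v <= 100000 for v in rng):
                return False
    # Index 2: registration threshold in [0, 1].
    if not 0 <= entry[2] <= 1:
        return False
    # Indexes 3-6: die width/height and grayscale reference size, in (0, 100000).
    if not all(0 < entry[i] < 100000 for i in range(3, 7)):
        return False
    # Index 7: magnification ratio in [0.1, 10].
    return 0.1 <= entry[7] <= 10

entry = [[[40, 130], [57, 97]], [[20, 110], [71, 111]],
         0.5, 17752.8, 18257.0, 10140, 10431, 2]
print(validate_reg_range_entry(entry))  # True for this sample entry
```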
A compilation example of aoi_ai_config.json is as follows:

```json
{
    "station_names": {
        "RudolphTechnologies": 0,
        "Camtek": 1
    },
    "ref_key_exclude": ["00101PI3"],
    "labels": ["FOI_FOP", "BMP"],
    "convert_map": {
        "001": {
            "FOI_FOP": {
                "layers": ["M1"],
                "classes": ["A", "B", "E", "F", "L", "M", "MS"]
            },
            "BMP": {
                "layers": [],
                "classes": [
                    ["A", "B", "D", "E"],
                    ["C", "F"]
                ]
            }
        }
    },
    "reg_range_lut": {
        "00101M1": [
            [[40, 130], [57, 97]], [[20, 110], [71, 111]],
            0.5, 17752.80126254142, 18257.015024389606, 10140, 10431, 2
        ]
    },
    "preprocess": {
        "blur_threds": {
            "default": 7,
            "00101M1": 7.95
        }
    },
    "model_infer": {
        "model_num": 4
    },
    "postprocess": {
        "reject_prob_thred_and_version": {
            "default": {
                "reject_prob_thred": 0,
                "version": "beta"
            },
            "00101M1": {
                "reject_prob_thred": 0.3,
                "version": "1.1.1"
            }
        }
    }
}
```
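A malformed aoi_ai_config.json is easier to debug before deployment than at inference time. The snippet below is a sketch of such a pre-check, assuming the mandatory top-level keys listed in the table above; the function name and sample string are illustrative:

```python
# Sketch of a pre-deployment sanity check for aoi_ai_config.json.
# MANDATORY_KEYS follows the (Mandatory) rows of the table above.
import json

MANDATORY_KEYS = ["station_names", "labels", "convert_map", "reg_range_lut",
                  "preprocess", "model_infer", "postprocess"]

def check_config(text):
    """Parse the config text and return any missing mandatory top-level keys."""
    config = json.loads(text)  # raises ValueError on malformed JSON
    return [key for key in MANDATORY_KEYS if key not in config]

sample = '{"station_names": {"RudolphTechnologies": 0}, "labels": ["FOI_FOP"]}'
print(check_config(sample))
# ['convert_map', 'reg_range_lut', 'preprocess', 'model_infer', 'postprocess']
```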