Indad
Overview
INDAD is an Industrial KNN-based Anomaly Detection package.
Original repository: https://github.com/rvorias/ind_knn_ad.git
The package contains 3 different algorithms:
- SPADE (2021)
- PaDiM (2020)
- PatchCore (2021): Recommended
In this document, we will show you how to use the HACHIX version of these algorithms.
APIs
Model
Bases: BaseModel
This is the base model for INDAD
Parameters:
Name | Type | Description | Default |
---|---|---|---|
BaseModel | Class | Common class for all ecos_core models | required |
Returns: None
__init__(opt)
This function is called when creating a new instance of the INDAD Model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
opt | NameSpace | The default options necessary for initializing the model. opt should include these attributes: class_names, weight, device. | required |
build_model(custom_weight=None)
Build the model as an ensemble model. The type of model is defined in opt.json.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
custom_weight | str | Weight to load that is not defined in opt. | None |
Returns:
Type | Description |
---|---|
Model | Model after being built |
Examples:
>>> import json
>>> from argparse import ArgumentParser
>>> from ecos_core.indad.indad import Model
>>> parser = ArgumentParser()
>>> opts = parser.parse_args()
>>> with open(opt_path, 'r') as f:
...     opts.__dict__ = json.load(f)
>>> model = Model(opts)
>>> # Build model with default options
>>> model.build_model()
>>> # Build model with custom weight
>>> model.build_model(custom_weight=<PATH_TO_WEIGHT>)
calculate_threshold_score(weight_path, valid_path)
Calculate the threshold score from the validation dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
weight_path | str | Weight path | required |
valid_path | str | Validation images folder path | required |
Returns:
Type | Description |
---|---|
List | Highest threshold score and pixel score |
List | Score per image |
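Examples:
An illustrative sketch only; the paths below are placeholder assumptions, the model is assumed to be already built as in the build_model example, and we assume the two documented lists are returned together.
>>> threshold_scores, per_image_scores = model.calculate_threshold_score(
...     weight_path="./weights/model_weight.pt", valid_path="./dataset/valid")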
create_augment_images(good_folder_path, image_files, number_of_copy)
staticmethod
Randomly augment the images in a directory, then generate the training dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
good_folder_path | str | The path to the directory containing the images | required |
image_files | list | List of image files | required |
number_of_copy | int | Maximum number of copies per image to be augmented | required |
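Examples:
An illustrative sketch only; the folder path and file names are placeholder assumptions (the file list is typically obtained from load_dataset).
>>> from ecos_core.indad.indad import Model
>>> image_files = ["0001.png", "0002.png"]
>>> Model.create_augment_images("./dataset/good", image_files, number_of_copy=5)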
create_validation_folder(valid_folder_path)
staticmethod
Create the validation folder if it does not exist or is not qualified.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
valid_folder_path | str | Validation folder path | required |
Returns:
Type | Description |
---|---|
Bool | True: exists and is qualified. False: does not exist or is not qualified. |
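Examples:
An illustrative sketch only; the folder path is a placeholder assumption.
>>> from ecos_core.indad.indad import Model
>>> is_qualified = Model.create_validation_folder("./dataset/valid")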
generate_instance_model(method)
staticmethod
Generate a model instance for the given method.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
method | str | spade, padim, patchcore, or amazon_patchcore method | required |
Returns:
Type | Description |
---|---|
object | Instance of the method |
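Examples:
An illustrative sketch only; patchcore is used here because it is the recommended method.
>>> from ecos_core.indad.indad import Model
>>> patchcore = Model.generate_instance_model("patchcore")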
generate_training_data(train_path, number_of_copy)
Randomly augment the images in a directory, then generate the training dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
train_path | str | The path to the directory containing the images to be augmented | required |
number_of_copy | int | Maximum number of copies per image to be augmented | required |
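Examples:
An illustrative sketch only; the path is a placeholder assumption and the model is assumed to be already built as in the build_model example.
>>> model.generate_training_data(train_path="./dataset/train", number_of_copy=5)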
generate_validation_dataset(base_path, valid_folder_path, split_percentage)
staticmethod
Create a validation dataset based on the training dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
base_path | str | Path to the data directory | required |
valid_folder_path | str | Validation folder path | required |
split_percentage | float | Ratio of validation dataset / training dataset | required |
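Examples:
An illustrative sketch only; the paths and split ratio are placeholder assumptions.
>>> from ecos_core.indad.indad import Model
>>> Model.generate_validation_dataset(
...     base_path="./dataset", valid_folder_path="./dataset/valid", split_percentage=0.2)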
get_transform()
Transform input by ImageNet normalization.
Returns:
Type | Description |
---|---|
Function | A function that transforms a single image from a path |
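Examples:
An illustrative sketch only; we assume the returned function accepts an image path, as described above, and that the model is already built.
>>> transform = model.get_transform()
>>> transformed = transform("./dataset/good/0001.png")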
load_dataset(good_folder_path)
staticmethod
Load data from the specified folder and return the list of image file names and the number of images.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
good_folder_path | str | The path to the directory containing the images | required |
Returns:
Type | Description |
---|---|
number | Number of images |
list | List of image files |
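Examples:
An illustrative sketch only; the folder path is a placeholder assumption and the return values are unpacked in the order listed above.
>>> from ecos_core.indad.indad import Model
>>> number_of_images, image_files = Model.load_dataset("./dataset/good")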
load_opt(opt_path)
staticmethod
Load the opt file and return its data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
opt_path | str | opt.json path | required |
Raises:
Type | Description |
---|---|
FileNotFoundError | Raised when the opt file cannot be found at opt_path |
Returns:
Type | Description |
---|---|
dict | opt configuration |
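Examples:
An illustrative sketch only; the path is a placeholder assumption.
>>> from ecos_core.indad.indad import Model
>>> opts_dict = Model.load_opt("./opt.json")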
load_threshold_score(weight_path, valid_path)
Read the threshold_score file and return the threshold score calculated from the validation dataset if opt.json exists.
If it does not exist, calculate the threshold score from the validation dataset.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
weight_path | str | Weight path | required |
valid_path | str | Validation images folder path | required |
Returns:
Type | Description |
---|---|
List | Threshold score and pixel score |
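Examples:
An illustrative sketch only; the paths are placeholder assumptions, the model is assumed to be already built, and we assume the returned list unpacks into the threshold score and the pixel score.
>>> threshold, pixel_threshold = model.load_threshold_score(
...     weight_path="./weights/model_weight.pt", valid_path="./dataset/valid")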
process_predictions(pred, raw_image_path, raw_image, image_transform, save_image_path)
Post-processing for a prediction.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pred | _type_ | The prediction output to be processed | required |
image_transform | Array | Original image after transformation | required |
raw_image | Image | PIL image | required |
save_image_path | str | Path to save the image | required |
Returns:
Type | Description |
---|---|
str | Path to the saved image |
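Examples:
An illustrative sketch only; pred, raw_image_path, raw_image, and image_transform are assumed to come from an earlier prediction step that is not shown here, and the save path is a placeholder.
>>> saved_path = model.process_predictions(
...     pred, raw_image_path, raw_image, image_transform,
...     save_image_path="./output/result.png")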
save_threshold_score(threshold, threshold_pixel_score=0.0)
Save the threshold score to opt.json.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
threshold | int | Threshold score | required |
threshold_pixel_score | float | Threshold for the pixel-level score. | 0.0 |
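Examples:
An illustrative sketch only; the values are placeholder assumptions and the model is assumed to be already built.
>>> model.save_threshold_score(threshold=13, threshold_pixel_score=0.5)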