From 4efefd91fde1f298337349b1ac5b5c7bbc6af387 Mon Sep 17 00:00:00 2001 From: "Lyalyushkin, Nikolay" Date: Thu, 29 Aug 2019 10:45:21 +0300 Subject: [PATCH 001/927] Added description for compressed classification ONNX models --- .../description/inceptionv3-int8-onnx-0001.md | 49 +++++++++++++++++++ .../description/mobilenetv2-int8-onnx-0001.md | 49 +++++++++++++++++++ .../description/resnet50v1-int8-onnx-0001.md | 49 +++++++++++++++++++ .../description/resnetv1-101-int8-tf-0001.md | 49 +++++++++++++++++++ .../squeezenetv1.1-int8-onnx-0001.md | 49 +++++++++++++++++++ ...squeezenetv1.1-int8-sparse-v1-onnx-0001.md | 49 +++++++++++++++++++ 6 files changed, 294 insertions(+) create mode 100644 intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md create mode 100644 intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md create mode 100644 intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md create mode 100644 intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md create mode 100644 intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md create mode 100644 intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md diff --git a/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md b/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md new file mode 100644 index 00000000000..6febaa3899c --- /dev/null +++ b/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md @@ -0,0 +1,49 @@ +# inceptionv3-int8-onnx-0001 + +## Use Case and High-Level Description + +This is the Inception v3 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x299x299x3" in BGR order. + +The model output for `inceptionv3-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 11.469 | +| MParams | 23.817 | +| Source framework | PyTorch | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 77.3% accuracy top-1. 
+ +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 77.3% | + +## Performance + +## Input + +Image, shape - `1,299,299,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md b/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md new file mode 100644 index 00000000000..db0f4d0fb47 --- /dev/null +++ b/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md @@ -0,0 +1,49 @@ +# mobilenetv2-int8-onnx-0001 + +## Use Case and High-Level Description + +This is the MobileNet v2 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `mobilenetv2-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.615 | +| MParams | 3.488 | +| Source framework | PyTorch | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 71.5% accuracy top-1. + +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 71.5% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md new file mode 100644 index 00000000000..05c68638886 --- /dev/null +++ b/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md @@ -0,0 +1,49 @@ +# resnet50v1-int8-onnx-0001 + +## Use Case and High-Level Description + +This is the Resnet-50 v1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `resnet50v1-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 8.216 | +| MParams | 25.53 | +| Source framework | PyTorch | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 76.1% accuracy top-1. 
+ +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 76.1% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md b/intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md new file mode 100644 index 00000000000..47db9040d53 --- /dev/null +++ b/intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md @@ -0,0 +1,49 @@ +# resnetv1-101-int8-tf-0001 + +## Use Case and High-Level Description + +This is the Resnet-101 v1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then quantized to INT8 fixed-point precision using so-called Quantization-aware training approach implemented in TensorFlow framework. For details about the original floating point model, check out the [paper](https://arxiv.org/pdf/1512.03385.pdf). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `resnetv1-101-int8-tf-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 14.441 | +| MParams | 44.496 | +| Source framework | TensorFlow | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 74.5% accuracy top-1. + +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 74.5% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md b/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md new file mode 100644 index 00000000000..88855e90109 --- /dev/null +++ b/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md @@ -0,0 +1,49 @@ +# squeezenetv1.1-int8-onnx-0001 + +## Use Case and High-Level Description + +This is the SquuezeNet v1.1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `squeezenetv1.1-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. 
+ +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.707 | +| MParams | 1.236 | +| Source framework | PyTorch | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 57.9% accuracy top-1. + +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 57.9% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md b/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md new file mode 100644 index 00000000000..b71821f9f83 --- /dev/null +++ b/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md @@ -0,0 +1,49 @@ +# squeezenetv1.1-int8-sparse-v1-onnx-0001 + +## Use Case and High-Level Description + +This is the SquuezeNet v1.1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 51% sparsity using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `squeezenetv1.1-int8-sparse-v1-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.707 | +| MParams | 1.236 | +| Source framework | PyTorch | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 56.76% accuracy top-1. 
+ +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 56.76% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + From 9b061e01cd2797f47b771428086e5102f0208aa2 Mon Sep 17 00:00:00 2001 From: "Lyalyushkin, Nikolay" Date: Fri, 30 Aug 2019 18:14:05 +0300 Subject: [PATCH 002/927] updated md files --- .../description/inceptionv3-int8-onnx-0001.md | 11 ++-- .../inceptionv3-int8-sparse-v2-onnx-0001.md | 52 +++++++++++++++++++ .../description/mobilenetv2-int8-onnx-0001.md | 4 +- .../mobilenetv2-int8-sparse-v2-onnx-0001.md | 50 ++++++++++++++++++ .../resnet50-int8-sparse-v2-onnx-0001.md | 52 +++++++++++++++++++ .../description/resnet50v1-int8-onnx-0001.md | 4 +- .../squeezenetv1.1-int8-onnx-0001.md | 4 +- ...squeezenetv1.1-int8-sparse-v1-onnx-0001.md | 5 +- 8 files changed, 173 insertions(+), 9 deletions(-) create mode 100644 intel_models/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md create mode 100644 intel_models/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md create mode 100644 intel_models/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md diff --git a/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md b/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md index 6febaa3899c..5ffab5deadd 100644 --- a/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md +++ b/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md @@ -2,11 +2,14 @@ ## Use Case and High-Level Description -This is the Inception v3 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). +This is the Inception v3 model that is designed to perform image classification. The model has been pretrained on the +ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression +Framework (NNCF). The model input is a blob that consists of a single image of "1x299x299x3" in BGR order. -The model output for `inceptionv3-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. +The model output for `inceptionv3-int8-onnx-0001` is the usual object classifier output for the 1000 different +classifications matching those in the ImageNet database. ## Example @@ -21,11 +24,11 @@ The model output for `inceptionv3-int8-onnx-0001` is the usual object classifier ## Accuracy -The quality metrics calculated on ImageNet validation dataset is 77.3% accuracy top-1. +The quality metrics calculated on ImageNet validation dataset is 78.36% accuracy top-1. 
| Metric | Value | |---------------------------|---------------| -| Accuracy top-1 (ImageNet) | 77.3% | +| Accuracy top-1 (ImageNet) | 78.36% | ## Performance diff --git a/intel_models/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md b/intel_models/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md new file mode 100644 index 00000000000..e47fa696344 --- /dev/null +++ b/intel_models/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md @@ -0,0 +1,52 @@ +# inceptionv3-int8-sparse-v2-onnx-0001 + +## Use Case and High-Level Description + +This is the Inception v3 model that is designed to perform image classification. +The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point +precision with 60.31% sparsity using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x299x299x3" in BGR order. + +The model output for `inceptionv3-int8-sparse-v2-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 11.469 | +| MParams | 23.817 | +| Source framework | PyTorch | +| Sparsity | 60.31% | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 77.05% accuracy top-1. + +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 77.05% | + +## Performance + +## Input + +Image, shape - `1,299,299,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md b/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md index db0f4d0fb47..a7380a4c274 100644 --- a/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md +++ b/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md @@ -21,11 +21,11 @@ The model output for `mobilenetv2-int8-onnx-0001` is the usual object classifier ## Accuracy -The quality metrics calculated on ImageNet validation dataset is 71.5% accuracy top-1. +The quality metrics calculated on ImageNet validation dataset is 71.32% accuracy top-1. | Metric | Value | |---------------------------|---------------| -| Accuracy top-1 (ImageNet) | 71.5% | +| Accuracy top-1 (ImageNet) | 71.32% | ## Performance diff --git a/intel_models/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md b/intel_models/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md new file mode 100644 index 00000000000..daa6b4207f0 --- /dev/null +++ b/intel_models/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md @@ -0,0 +1,50 @@ +# mobilenetv2-int8-sparse-v2-onnx-0001 + +## Use Case and High-Level Description + +This is the MobileNet v2 model that is designed to perform image classification. 
The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 51% of sparsity using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `mobilenetv2-int8-sparse-v2-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.615 | +| MParams | 3.488 | +| Source framework | PyTorch | +| Sparsity | 51% | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 70.84% accuracy top-1. + +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 70.84% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md b/intel_models/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md new file mode 100644 index 00000000000..0525387b3ee --- /dev/null +++ b/intel_models/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md @@ -0,0 +1,52 @@ +# resnet50-int8-sparse-v2-onnx-0001 + +## Use Case and High-Level Description + +This is the Resnet-50 v1 model that is designed to perform image classification. +The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point +precision with 61.02% sparsity using Neural Network Compression Framework (NNCF). + +The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. + +The model output for `resnet50-int8-sparse-v2-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 8.216 | +| MParams | 25.53 | +| Source framework | PyTorch | +| Sparsity | 61.02% | + +## Accuracy + +The quality metrics calculated on ImageNet validation dataset is 75.19% accuracy top-1. 
+ +| Metric | Value | +|---------------------------|---------------| +| Accuracy top-1 (ImageNet) | 75.19% | + +## Performance + +## Input + +Image, shape - `1,224,224,3`, format is `B,H,W,C` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR` + +## Output + +Object classifier according to ImageNet classes, shape -`1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + diff --git a/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md index 05c68638886..599bbbdd8fe 100644 --- a/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md +++ b/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md @@ -2,7 +2,9 @@ ## Use Case and High-Level Description -This is the Resnet-50 v1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). +This is the Resnet-50 v1 model that is designed to perform image classification. +The model has been pretrained on the ImageNet image database and then symmetrically quantized to +INT8 fixed-point precision using Neural Network Compression Framework (NNCF). The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. diff --git a/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md b/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md index 88855e90109..6156f43ab58 100644 --- a/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md +++ b/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md @@ -2,7 +2,9 @@ ## Use Case and High-Level Description -This is the SquuezeNet v1.1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). +This is the SqueezeNet v1.1 model that is designed to perform image classification. +The model has been pretrained on the ImageNet image database and then symmetrically quantized +to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. diff --git a/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md b/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md index b71821f9f83..179cd418938 100644 --- a/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md +++ b/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md @@ -2,7 +2,9 @@ ## Use Case and High-Level Description -This is the SquuezeNet v1.1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 51% sparsity using Neural Network Compression Framework (NNCF). +This is the SquuezeNet v1.1 model that is designed to perform image classification. 
+The model has been pretrained on the ImageNet image database and then symmetrically quantized +to INT8 fixed-point precision with 51% sparsity using Neural Network Compression Framework (NNCF). The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. @@ -18,6 +20,7 @@ The model output for `squeezenetv1.1-int8-sparse-v1-onnx-0001` is the usual obje | GFLOPs | 0.707 | | MParams | 1.236 | | Source framework | PyTorch | +| Sparsity | 51% | ## Accuracy From 590c185f9fd5f01cc3d7edae3b9f13ccdada0370 Mon Sep 17 00:00:00 2001 From: "Lyalyushkin, Nikolay" Date: Fri, 30 Aug 2019 18:39:48 +0300 Subject: [PATCH 003/927] merged with develop + added index.md --- .../description/inceptionv3-int8-onnx-0001.md | 0 .../description/inceptionv3-int8-sparse-v2-onnx-0001.md | 0 models/intel/index.md | 9 +++++++++ .../description/mobilenetv2-int8-onnx-0001.md | 0 .../description/mobilenetv2-int8-sparse-v2-onnx-0001.md | 0 .../description/resnet50-int8-sparse-v2-onnx-0001.md | 0 .../description/resnet50v1-int8-onnx-0001.md | 0 .../description/resnetv1-101-int8-tf-0001.md | 5 ++++- .../description/squeezenetv1.1-int8-onnx-0001.md | 0 .../squeezenetv1.1-int8-sparse-v1-onnx-0001.md | 0 10 files changed, 13 insertions(+), 1 deletion(-) rename {intel_models => models/intel}/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md (100%) rename {intel_models => models/intel}/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md (100%) rename {intel_models => models/intel}/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md (100%) rename {intel_models => models/intel}/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md (100%) rename {intel_models => models/intel}/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md (100%) rename {intel_models => models/intel}/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md (100%) rename {intel_models => models/intel}/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md (79%) rename {intel_models => models/intel}/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md (100%) rename {intel_models => models/intel}/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md (100%) diff --git a/intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md similarity index 100% rename from intel_models/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md rename to models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md diff --git a/intel_models/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md similarity index 100% rename from intel_models/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md rename to models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md diff --git a/models/intel/index.md b/models/intel/index.md index 1fef279cb4f..89794fd0b04 100644 --- a/models/intel/index.md +++ b/models/intel/index.md @@ -187,12 +187,21 @@ Deep Learning compressed models | [resnet-50-int8-tf-0001](./resnet-50-int8-tf-0001/description/resnet-50-int8-tf-0001.md) | 6.996 | 25.530 | | 
[resnet-50-int8-sparse-v1-tf-0001](./resnet-50-int8-sparse-v1-tf-0001/description/resnet-50-int8-sparse-v1-tf-0001.md) | 6.996 | 25.530 | | [resnet-50-int8-sparse-v2-tf-0001](./resnet-50-int8-sparse-v2-tf-0001/description/resnet-50-int8-sparse-v2-tf-0001.md) | 6.996 | 25.530 | +| [resnetv1-101-int8-tf-0001](./resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md) | 14.441 | 44.496 | | [inceptionv3-int8-tf-0001](./inceptionv3-int8-tf-0001/description/inceptionv3-int8-tf-0001.md) | 11.469 | 23.819 | | [inceptionv3-int8-sparse-v1-tf-0001](./inceptionv3-int8-sparse-v1-tf-0001/description/inceptionv3-int8-sparse-v1-tf-0001.md) | 11.469 | 23.819 | | [inceptionv3-int8-sparse-v2-tf-0001](./inceptionv3-int8-sparse-v2-tf-0001/description/inceptionv3-int8-sparse-v2-tf-0001.md) | 11.469 | 23.819 | | [mobilenetv2-int8-tf-0001](./mobilenetv2-int8-tf-0001/description/mobilenetv2-int8-tf-0001.md) | 0.615 | 3.489 | | [mobilenetv2-int8-sparse-v1-tf-0001](./mobilenetv2-int8-sparse-v1-tf-0001/description/mobilenetv2-int8-sparse-v1-tf-0001.md) | 0.615 | 3.489 | | [mobilenetv2-int8-sparse-v2-tf-0001](./mobilenetv2-int8-sparse-v2-tf-0001/description/mobilenetv2-int8-sparse-v2-tf-0001.md) | 0.615 | 3.489 | +| [resnet50v1-int8-onnx-0001](./resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md) | 8.216 | 25.53 | +| [resnet50-int8-sparse-v2-onnx-0001](./resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md) | 8.216 | 25.53 | +| [inceptionv3-int8-onnx-0001](./inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md) | 11.469 | 23.817 | +| [inceptionv3-int8-sparse-v2-onnx-0001](./inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md) | 11.469 | 23.817 | +| [mobilenetv2-int8-onnx-0001](./mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md) | 0.615 | 3.488 | +| [mobilenetv2-int8-sparse-v2-onnx-0001](./mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md) | 0.615 | 3.488 | +| [squeezenetv1.1-int8-onnx-0001](./squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md) | 0.707 | 1.236 | +| [squeezenetv1.1-int8-sparse-v1-onnx-0001](./squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md) | 0.707 | 1.236 | ## Legal Information [*] Other names and brands may be claimed as the property of others. 
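As a rough way to read the MParams column above: INT8 weights take about one byte each, so the parameter count in millions approximates the weight payload in megabytes, compared with roughly four times that for the original FP32 networks. The snippet below only illustrates that rule of thumb; the per-byte factors are simplifying assumptions that ignore quantization scales and graph metadata, and the 25.53 figure is copied from the table.

```python
# Back-of-the-envelope weight storage for resnet50v1-int8-onnx-0001 (25.53 MParams).
# Assumes ~1 byte per INT8 weight and ~4 bytes per FP32 weight; ignores
# quantization scales, graph metadata and activation memory.
mparams = 25.53
print(f"INT8: ~{mparams:.1f} MB of weights, FP32 baseline: ~{mparams * 4:.1f} MB")
```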
diff --git a/intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md similarity index 100% rename from intel_models/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md rename to models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md diff --git a/intel_models/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md similarity index 100% rename from intel_models/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md rename to models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md diff --git a/intel_models/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md similarity index 100% rename from intel_models/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md rename to models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md diff --git a/intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md similarity index 100% rename from intel_models/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md rename to models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md diff --git a/intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md b/models/intel/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md similarity index 79% rename from intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md rename to models/intel/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md index 47db9040d53..33de7d41a37 100644 --- a/intel_models/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md +++ b/models/intel/resnetv1-101-int8-tf-0001/description/resnetv1-101-int8-tf-0001.md @@ -2,7 +2,10 @@ ## Use Case and High-Level Description -This is the Resnet-101 v1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then quantized to INT8 fixed-point precision using so-called Quantization-aware training approach implemented in TensorFlow framework. For details about the original floating point model, check out the [paper](https://arxiv.org/pdf/1512.03385.pdf). +This is the Resnet-101 v1 model that is designed to perform image classification. +The model has been pretrained on the ImageNet image database and then quantized to INT8 fixed-point precision using +so-called Quantization-aware training approach implemented in TensorFlow framework. +For details about the original floating point model, check out the [paper](https://arxiv.org/pdf/1512.03385.pdf). The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. 
diff --git a/intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md similarity index 100% rename from intel_models/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md rename to models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md diff --git a/intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md similarity index 100% rename from intel_models/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md rename to models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md From 7b294f4633cc4087cc4f48b7342971e59f4ba149 Mon Sep 17 00:00:00 2001 From: Nikolay Date: Mon, 2 Sep 2019 11:09:54 +0300 Subject: [PATCH 004/927] Update resnet50v1-int8-onnx-0001.md updated accuracy for resnet50 --- .../description/resnet50v1-int8-onnx-0001.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md index 599bbbdd8fe..8c36b3a4258 100644 --- a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md +++ b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md @@ -23,11 +23,11 @@ The model output for `resnet50v1-int8-onnx-0001` is the usual object classifier ## Accuracy -The quality metrics calculated on ImageNet validation dataset is 76.1% accuracy top-1. +The quality metrics calculated on ImageNet validation dataset is 76.55% accuracy top-1. 
| Metric | Value | |---------------------------|---------------| -| Accuracy top-1 (ImageNet) | 76.1% | +| Accuracy top-1 (ImageNet) | 76.55% | ## Performance From c8c68e1cc4f65e4737b09bab77dec379157536d4 Mon Sep 17 00:00:00 2001 From: "Lyalyushkin, Nikolay" Date: Mon, 2 Sep 2019 14:09:51 +0300 Subject: [PATCH 005/927] B,H,W,C -> B,C,H,W --- .../description/inceptionv3-int8-onnx-0001.md | 2 +- .../description/inceptionv3-int8-sparse-v2-onnx-0001.md | 2 +- .../description/mobilenetv2-int8-onnx-0001.md | 2 +- .../description/mobilenetv2-int8-sparse-v2-onnx-0001.md | 2 +- .../description/resnet50-int8-sparse-v2-onnx-0001.md | 2 +- .../description/resnet50v1-int8-onnx-0001.md | 2 +- .../description/squeezenetv1.1-int8-onnx-0001.md | 2 +- .../description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md | 2 +- 8 files changed, 8 insertions(+), 8 deletions(-) diff --git a/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md index 5ffab5deadd..6d5939abc6c 100644 --- a/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md +++ b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md @@ -34,7 +34,7 @@ The quality metrics calculated on ImageNet validation dataset is 78.36% accuracy ## Input -Image, shape - `1,299,299,3`, format is `B,H,W,C` where: +Image, shape - `1,3,299,299`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md index e47fa696344..9a82c6990b4 100644 --- a/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md +++ b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md @@ -34,7 +34,7 @@ The quality metrics calculated on ImageNet validation dataset is 77.05% accuracy ## Input -Image, shape - `1,299,299,3`, format is `B,H,W,C` where: +Image, shape - `1,3,299,299`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md index a7380a4c274..386bb7f6275 100644 --- a/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md +++ b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md @@ -31,7 +31,7 @@ The quality metrics calculated on ImageNet validation dataset is 71.32% accuracy ## Input -Image, shape - `1,224,224,3`, format is `B,H,W,C` where: +Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md index daa6b4207f0..5e0cdffb92c 100644 --- a/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md +++ b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md @@ -32,7 +32,7 @@ The quality metrics calculated on ImageNet validation dataset is 70.84% accuracy ## Input -Image, shape - `1,224,224,3`, format is `B,H,W,C` where: +Image, shape - 
`1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md index 0525387b3ee..0fdc24e4490 100644 --- a/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md +++ b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md @@ -34,7 +34,7 @@ The quality metrics calculated on ImageNet validation dataset is 75.19% accuracy ## Input -Image, shape - `1,224,224,3`, format is `B,H,W,C` where: +Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md index 8c36b3a4258..e4f1bee31cb 100644 --- a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md +++ b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md @@ -33,7 +33,7 @@ The quality metrics calculated on ImageNet validation dataset is 76.55% accuracy ## Input -Image, shape - `1,224,224,3`, format is `B,H,W,C` where: +Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md index 6156f43ab58..0fb57d29f26 100644 --- a/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md +++ b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md @@ -33,7 +33,7 @@ The quality metrics calculated on ImageNet validation dataset is 57.9% accuracy ## Input -Image, shape - `1,224,224,3`, format is `B,H,W,C` where: +Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size - `H` - height diff --git a/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md index 179cd418938..1bac4e99dbf 100644 --- a/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md +++ b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md @@ -34,7 +34,7 @@ The quality metrics calculated on ImageNet validation dataset is 56.76% accuracy ## Input -Image, shape - `1,224,224,3`, format is `B,H,W,C` where: +Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size - `H` - height From 9a326c7327488ff8baccf6322ae8f44627b49a6a Mon Sep 17 00:00:00 2001 From: "Lyalyushkin, Nikolay" Date: Mon, 2 Sep 2019 14:25:58 +0300 Subject: [PATCH 006/927] reversed chape in description --- .../description/inceptionv3-int8-onnx-0001.md | 2 +- .../description/inceptionv3-int8-sparse-v2-onnx-0001.md | 2 +- .../description/mobilenetv2-int8-onnx-0001.md | 2 +- .../description/mobilenetv2-int8-sparse-v2-onnx-0001.md | 2 +- .../description/resnet50-int8-sparse-v2-onnx-0001.md | 2 +- .../description/resnet50v1-int8-onnx-0001.md | 2 +- .../description/squeezenetv1.1-int8-onnx-0001.md | 2 +- .../description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md | 2 +- 8 files changed, 8 insertions(+), 8 
deletions(-) diff --git a/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md index 6d5939abc6c..12165f4bbd5 100644 --- a/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md +++ b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md @@ -6,7 +6,7 @@ This is the Inception v3 model that is designed to perform image classification. ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x299x299x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x299x299" in BGR order. The model output for `inceptionv3-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. diff --git a/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md index 9a82c6990b4..d9fa98849f9 100644 --- a/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md +++ b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md @@ -6,7 +6,7 @@ This is the Inception v3 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 60.31% sparsity using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x299x299x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x299x299" in BGR order. The model output for `inceptionv3-int8-sparse-v2-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. diff --git a/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md index 386bb7f6275..b71e7371bfc 100644 --- a/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md +++ b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md @@ -4,7 +4,7 @@ This is the MobileNet v2 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x224x224" in BGR order. The model output for `mobilenetv2-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. 
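Since the hunks above switch the documented input layout from `1,224,224,3` (`B,H,W,C`) to `1,3,224,224` (`B,C,H,W`), a minimal preprocessing sketch may help make the target layout concrete. This is an editorial illustration rather than part of the patch: the image path is a placeholder, plain resizing is assumed (no aspect-preserving crop), and any mean/scale normalization the models expect is omitted because it is not specified in these descriptions.

```python
import cv2
import numpy as np

def prepare_blob(image_path, size=224):
    """Read an image and lay it out as a 1x3xHxW blob in BGR channel order."""
    image = cv2.imread(image_path)            # OpenCV already returns BGR
    image = cv2.resize(image, (size, size))   # match the model's spatial size
    chw = image.astype(np.float32).transpose(2, 0, 1)   # HWC -> CHW
    return np.expand_dims(chw, 0)             # add batch dimension -> (1, 3, size, size)

# blob = prepare_blob("cat.jpg")              # placeholder input image
# assert blob.shape == (1, 3, 224, 224)
```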
diff --git a/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md index 5e0cdffb92c..ec849ed1a6e 100644 --- a/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md +++ b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md @@ -4,7 +4,7 @@ This is the MobileNet v2 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 51% of sparsity using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x224x224" in BGR order. The model output for `mobilenetv2-int8-sparse-v2-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. diff --git a/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md index 0fdc24e4490..d8aae864a5e 100644 --- a/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md +++ b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md @@ -6,7 +6,7 @@ This is the Resnet-50 v1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 61.02% sparsity using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x224x224" in BGR order. The model output for `resnet50-int8-sparse-v2-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. diff --git a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md index e4f1bee31cb..8c74fb11843 100644 --- a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md +++ b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md @@ -6,7 +6,7 @@ This is the Resnet-50 v1 model that is designed to perform image classification. The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x224x224" in BGR order. The model output for `resnet50v1-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. 
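On the output side, each of these classifiers returns a `1,1000` blob of per-class probabilities in the `[0, 1]` range, so the most likely ImageNet classes can be picked directly from the flattened vector. The helper below is a small illustrative sketch; mapping the indices to human-readable labels needs an ImageNet class list, which is not part of these descriptions.

```python
import numpy as np

def top_k(probabilities, k=5):
    """Return (class_index, probability) pairs for the k best-scoring classes."""
    scores = np.squeeze(probabilities)      # (1, 1000) -> (1000,)
    best = np.argsort(scores)[::-1][:k]     # class indices by descending probability
    return [(int(i), float(scores[i])) for i in best]

# predictions = top_k(output_blob)          # output_blob: the 1x1000 inference result
```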
diff --git a/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md index 0fb57d29f26..3cddfb991bc 100644 --- a/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md +++ b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md @@ -6,7 +6,7 @@ This is the SqueezeNet v1.1 model that is designed to perform image classificati The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x224x224" in BGR order. The model output for `squeezenetv1.1-int8-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. diff --git a/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md index 1bac4e99dbf..9f3a27077e8 100644 --- a/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md +++ b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md @@ -6,7 +6,7 @@ This is the SquuezeNet v1.1 model that is designed to perform image classificati The model has been pretrained on the ImageNet image database and then symmetrically quantized to INT8 fixed-point precision with 51% sparsity using Neural Network Compression Framework (NNCF). -The model input is a blob that consists of a single image of "1x224x224x3" in BGR order. +The model input is a blob that consists of a single image of "1x3x224x224" in BGR order. The model output for `squeezenetv1.1-int8-sparse-v1-onnx-0001` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database. From 721770269c973a594074bd0644d22a0785a67227 Mon Sep 17 00:00:00 2001 From: "Lyalyushkin, Nikolay" Date: Mon, 2 Sep 2019 15:06:40 +0300 Subject: [PATCH 007/927] legal info and asterisk --- .../description/inceptionv3-int8-onnx-0001.md | 4 ++-- .../description/inceptionv3-int8-sparse-v2-onnx-0001.md | 4 ++-- .../description/mobilenetv2-int8-onnx-0001.md | 6 ++++-- .../description/mobilenetv2-int8-sparse-v2-onnx-0001.md | 6 ++++-- .../description/resnet50-int8-sparse-v2-onnx-0001.md | 6 ++++-- .../description/resnet50v1-int8-onnx-0001.md | 6 ++++-- .../description/squeezenetv1.1-int8-onnx-0001.md | 6 ++++-- .../description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md | 6 ++++-- 8 files changed, 28 insertions(+), 16 deletions(-) diff --git a/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md index 12165f4bbd5..1fb132d9b51 100644 --- a/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md +++ b/models/intel/inceptionv3-int8-onnx-0001/description/inceptionv3-int8-onnx-0001.md @@ -20,7 +20,7 @@ classifications matching those in the ImageNet database. 
| Type | Classification| | GFLOPs | 11.469 | | MParams | 23.817 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | ## Accuracy @@ -37,9 +37,9 @@ The quality metrics calculated on ImageNet validation dataset is 78.36% accuracy Image, shape - `1,3,299,299`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` diff --git a/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md index d9fa98849f9..8ffa048dbeb 100644 --- a/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md +++ b/models/intel/inceptionv3-int8-sparse-v2-onnx-0001/description/inceptionv3-int8-sparse-v2-onnx-0001.md @@ -19,7 +19,7 @@ The model output for `inceptionv3-int8-sparse-v2-onnx-0001` is the usual object | Type | Classification| | GFLOPs | 11.469 | | MParams | 23.817 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | | Sparsity | 60.31% | ## Accuracy @@ -37,9 +37,9 @@ The quality metrics calculated on ImageNet validation dataset is 77.05% accuracy Image, shape - `1,3,299,299`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` diff --git a/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md index b71e7371bfc..6d9e89c25d4 100644 --- a/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md +++ b/models/intel/mobilenetv2-int8-onnx-0001/description/mobilenetv2-int8-onnx-0001.md @@ -17,7 +17,7 @@ The model output for `mobilenetv2-int8-onnx-0001` is the usual object classifier | Type | Classification| | GFLOPs | 0.615 | | MParams | 3.488 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | ## Accuracy @@ -34,9 +34,9 @@ The quality metrics calculated on ImageNet validation dataset is 71.32% accuracy Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` @@ -47,3 +47,5 @@ Object classifier according to ImageNet classes, shape -`1,1000`, output data fo - `B` - batch size - `C` - predicted probabilities for each class in [0, 1] range +## Legal Information +[*] Other names and brands may be claimed as the property of others. 
diff --git a/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md index ec849ed1a6e..ba069d7811b 100644 --- a/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md +++ b/models/intel/mobilenetv2-int8-sparse-v2-onnx-0001/description/mobilenetv2-int8-sparse-v2-onnx-0001.md @@ -17,7 +17,7 @@ The model output for `mobilenetv2-int8-sparse-v2-onnx-0001` is the usual object | Type | Classification| | GFLOPs | 0.615 | | MParams | 3.488 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | | Sparsity | 51% | ## Accuracy @@ -35,9 +35,9 @@ The quality metrics calculated on ImageNet validation dataset is 70.84% accuracy Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` @@ -48,3 +48,5 @@ Object classifier according to ImageNet classes, shape -`1,1000`, output data fo - `B` - batch size - `C` - predicted probabilities for each class in [0, 1] range +## Legal Information +[*] Other names and brands may be claimed as the property of others. diff --git a/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md index d8aae864a5e..75bb1232d35 100644 --- a/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md +++ b/models/intel/resnet50-int8-sparse-v2-onnx-0001/description/resnet50-int8-sparse-v2-onnx-0001.md @@ -19,7 +19,7 @@ The model output for `resnet50-int8-sparse-v2-onnx-0001` is the usual object cla | Type | Classification| | GFLOPs | 8.216 | | MParams | 25.53 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | | Sparsity | 61.02% | ## Accuracy @@ -37,9 +37,9 @@ The quality metrics calculated on ImageNet validation dataset is 75.19% accuracy Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` @@ -50,3 +50,5 @@ Object classifier according to ImageNet classes, shape -`1,1000`, output data fo - `B` - batch size - `C` - predicted probabilities for each class in [0, 1] range +## Legal Information +[*] Other names and brands may be claimed as the property of others. 
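For the sparse variants touched above, the Sparsity row deserves a short interpretation: NNCF reports sparsity as the share of weights driven to zero, while MParams presumably still counts the dense tensor shapes. Under that reading (an assumption, not a statement taken from these files), the remaining non-zero weights can be estimated as follows.

```python
# Approximate non-zero weight counts, assuming "Sparsity" is the share of zeroed
# weights and "MParams" is the dense parameter count (interpretation, not stated
# explicitly in the model descriptions).
sparse_models = {
    "mobilenetv2-int8-sparse-v2-onnx-0001": (3.488, 0.51),
    "resnet50-int8-sparse-v2-onnx-0001": (25.53, 0.6102),
}

for name, (mparams, sparsity) in sparse_models.items():
    nonzero = mparams * (1.0 - sparsity)
    print(f"{name}: ~{nonzero:.2f}M non-zero weights out of {mparams}M")
```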
diff --git a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md index 8c74fb11843..58a9360e9ea 100644 --- a/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md +++ b/models/intel/resnet50v1-int8-onnx-0001/description/resnet50v1-int8-onnx-0001.md @@ -19,7 +19,7 @@ The model output for `resnet50v1-int8-onnx-0001` is the usual object classifier | Type | Classification| | GFLOPs | 8.216 | | MParams | 25.53 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | ## Accuracy @@ -36,9 +36,9 @@ The quality metrics calculated on ImageNet validation dataset is 76.55% accuracy Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` @@ -49,3 +49,5 @@ Object classifier according to ImageNet classes, shape -`1,1000`, output data fo - `B` - batch size - `C` - predicted probabilities for each class in [0, 1] range +## Legal Information +[*] Other names and brands may be claimed as the property of others. diff --git a/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md index 3cddfb991bc..2418e35d6b9 100644 --- a/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md +++ b/models/intel/squeezenetv1.1-int8-onnx-0001/description/squeezenetv1.1-int8-onnx-0001.md @@ -19,7 +19,7 @@ The model output for `squeezenetv1.1-int8-onnx-0001` is the usual object classif | Type | Classification| | GFLOPs | 0.707 | | MParams | 1.236 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | ## Accuracy @@ -36,9 +36,9 @@ The quality metrics calculated on ImageNet validation dataset is 57.9% accuracy Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` @@ -49,3 +49,5 @@ Object classifier according to ImageNet classes, shape -`1,1000`, output data fo - `B` - batch size - `C` - predicted probabilities for each class in [0, 1] range +## Legal Information +[*] Other names and brands may be claimed as the property of others. 
diff --git a/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md index 9f3a27077e8..d55187fa27c 100644 --- a/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md +++ b/models/intel/squeezenetv1.1-int8-sparse-v1-onnx-0001/description/squeezenetv1.1-int8-sparse-v1-onnx-0001.md @@ -19,7 +19,7 @@ The model output for `squeezenetv1.1-int8-sparse-v1-onnx-0001` is the usual obje | Type | Classification| | GFLOPs | 0.707 | | MParams | 1.236 | -| Source framework | PyTorch | +| Source framework | PyTorch\* | | Sparsity | 51% | ## Accuracy @@ -37,9 +37,9 @@ The quality metrics calculated on ImageNet validation dataset is 56.76% accuracy Image, shape - `1,3,224,224`, format is `B,C,H,W` where: - `B` - batch size +- `C` - channel - `H` - height - `W` - width -- `C` - channel Channel order is `BGR` @@ -50,3 +50,5 @@ Object classifier according to ImageNet classes, shape -`1,1000`, output data fo - `B` - batch size - `C` - predicted probabilities for each class in [0, 1] range +## Legal Information +[*] Other names and brands may be claimed as the property of others. From ff20d848edf5739b78f1f34fce4a8c495c123a7e Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 18 Sep 2019 13:13:05 +0300 Subject: [PATCH 008/927] AC: remove usage depricated scipy image api --- .../data_readers/data_reader.py | 159 +++++++++++++++++- .../resample_segmentation_prediction.py | 5 +- .../postprocessor/resize_segmentation_mask.py | 7 +- .../postprocessor/zoom_segmentation_mask.py | 1 + tools/accuracy_checker/setup.py | 2 +- 5 files changed, 166 insertions(+), 8 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index f28bd2972b8..aa28ba52e46 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -20,7 +20,6 @@ import re import cv2 from PIL import Image -import scipy.misc import numpy as np import nibabel as nib @@ -180,8 +179,164 @@ def read(self, data_id): class ScipyImageReader(BaseReader): __provider__ = 'scipy_imread' + @staticmethod + def _from_image(image, flatten=False, mode=None): + if mode is not None: + if mode != image.mode: + image = image.convert(mode) + elif image.mode == 'P': + image = image.convert('RGBA') if 'transparency' in image.info else image.convert('RGB') + + if flatten: + image = image.convert('F') + elif image.mode == '1': + image = image.convert('L') + + return np.array(image) + + @staticmethod + def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, channel_axis=None): + _errstr = "Mode is unknown or incompatible with input array shape." 
+ data = np.asarray(arr) + if np.iscomplexobj(data): + raise ValueError("Cannot convert a complex-valued array.") + shape = list(data.shape) + valid = len(shape) == 2 or ((len(shape) == 3) and ((3 in shape) or (4 in shape))) + if not valid: + raise ValueError("'arr' does not have a suitable array shape for any mode.") + if len(shape) == 2: + shape = (shape[1], shape[0]) # columns show up first + if mode == 'F': + data32 = data.astype(np.float32) + image = Image.frombytes(mode, shape, data32.tostring()) + + return image + if mode in [None, 'L', 'P']: + bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) + image = Image.frombytes('L', shape, bytedata.tostring()) + if pal is not None: + image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) + # Becomes a mode='P' automagically. + elif mode == 'P': # default gray-scale + pal = ( + np.arange(0, 256, 1, dtype=np.uint8)[:, np.newaxis] * + np.ones((3,), dtype=np.uint8)[np.newaxis, :] + ) + image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) + + return image + if mode == '1': # high input gives threshold for 1 + bytedata = (data > high) + image = Image.frombytes('1', shape, bytedata.tostring()) + + return image + + cmin = cmin or np.amin(np.ravel(data)) + cmax = cmax or np.amax(np.ravel(data)) + data = (data * 1.0 - cmin) * (high - low) / (cmax - cmin) + low + if mode == 'I': + data32 = data.astype(np.uint32) + image = Image.frombytes(mode, shape, data32.tostring()) + else: + raise ValueError(_errstr) + + return image + + # if here then 3-d array with a 3 or a 4 in the shape length. + # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' + if channel_axis is None: + ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) + if np.size(ca): + ca = ca[0] + else: + raise ValueError("Could not find channel dimension.") + else: + ca = channel_axis + + numch = shape[ca] + if numch not in [3, 4]: + raise ValueError("Channel axis dimension is not valid.") + + bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) + if ca == 2: + strdata = bytedata.tostring() + shape = (shape[1], shape[0]) + elif ca == 1: + strdata = np.transpose(bytedata, (0, 2, 1)).tostring() + shape = (shape[2], shape[0]) + elif ca == 0: + strdata = np.transpose(bytedata, (1, 2, 0)).tostring() + shape = (shape[2], shape[1]) + if mode is None: + if numch == 3: + mode = 'RGB' + else: + mode = 'RGBA' + + if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: + raise ValueError(_errstr) + + if mode in ['RGB', 'YCbCr']: + if numch != 3: + raise ValueError("Invalid array shape for mode.") + if mode in ['RGBA', 'CMYK']: + if numch != 4: + raise ValueError("Invalid array shape for mode.") + + # Here we know data and mode is correct + image = Image.frombytes(mode, shape, strdata) + return image + + @staticmethod + def _imread(name): + # reimplementation scipy.misc.imread + image = Image.open(name) + + return ScipyImageReader._from_image(image) + + @staticmethod + def _bytescale(data, cmin=None, cmax=None, high=255, low=0): + if data.dtype == np.uint8: + return data + + if high > 255: + raise ValueError("`high` should be less than or equal to 255.") + if low < 0: + raise ValueError("`low` should be greater than or equal to 0.") + if high < low: + raise ValueError("`high` should be greater than or equal to `low`.") + cmin = cmin or data.min() + cmax = cmax or data.max() + + cscale = cmax - cmin + if cscale < 0: + raise ValueError("`cmax` should be larger than `cmin`.") + elif 
cscale == 0: + cscale = 1 + + scale = float(high - low) / cscale + bytedata = (data - cmin) * scale + low + + return (bytedata.clip(low, high) + 0.5).astype(np.uint8) + + @staticmethod + def imresize(arr, size, interp='bilinear', mode=None): + im = ScipyImageReader._to_image(arr, mode=mode) + ts = type(size) + if np.issubdtype(ts, np.signedinteger): + percent = size / 100.0 + size = tuple((np.array(im.size) * percent).astype(int)) + elif np.issubdtype(type(size), np.floating): + size = tuple((np.array(im.size) * size).astype(int)) + else: + size = (size[1], size[0]) + func = {'nearest': 0, 'lanczos': 1, 'bilinear': 2, 'bicubic': 3, 'cubic': 3} + imnew = im.resize(size, resample=func[interp]) + + return ScipyImageReader._from_image(imnew) + def read(self, data_id): - return np.array(scipy.misc.imread(str(get_path(self.data_source / data_id)))) + return ScipyImageReader._imread(str(get_path(self.data_source / data_id))) class OpenCVFrameReader(BaseReader): diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/resample_segmentation_prediction.py b/tools/accuracy_checker/accuracy_checker/postprocessor/resample_segmentation_prediction.py index 80e85437a86..35c03743bbd 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/resample_segmentation_prediction.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/resample_segmentation_prediction.py @@ -56,8 +56,9 @@ def process_image(self, annotation, prediction): label = np.zeros(shape=(prediction_shape[0],) + image_shape) - label[:, low[0]:high[0], low[1]:high[1], low[2]:high[2]] = resample(prediction_.mask, - (prediction_shape[0],) + box_shape) + label[:, low[0]:high[0], low[1]:high[1], low[2]:high[2]] = resample( + prediction_.mask, (prediction_shape[0],) + box_shape + ) prediction[0].mask = label diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py index 239e6c1171a..d5bb75e1ed7 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py @@ -15,13 +15,14 @@ """ from functools import singledispatch -import scipy.misc import numpy as np from ..config import NumberField from ..utils import get_size_from_config from .postprocessor import PostprocessorWithSpecificTargets from ..representation import SegmentationPrediction, SegmentationAnnotation +from ..data_readers import ScipyImageReader + class ResizeSegmentationMask(PostprocessorWithSpecificTargets): __provider__ = 'resize_segmentation_mask' @@ -61,7 +62,7 @@ def resize_segmentation_mask(entry, height, width): def _(entry, height, width): entry_mask = [] for class_mask in entry.mask: - resized_mask = scipy.misc.imresize(class_mask, (height, width), 'nearest') + resized_mask = ScipyImageReader.imresize(class_mask, (height, width), 'nearest') entry_mask.append(resized_mask) entry.mask = np.array(entry_mask) @@ -69,7 +70,7 @@ def _(entry, height, width): @resize_segmentation_mask.register(SegmentationAnnotation) def _(entry, height, width): - entry.mask = scipy.misc.imresize(entry.mask, (height, width), 'nearest') + entry.mask = ScipyImageReader.imresize(entry.mask, (height, width), 'nearest') return entry for target in annotation: diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/zoom_segmentation_mask.py b/tools/accuracy_checker/accuracy_checker/postprocessor/zoom_segmentation_mask.py index 
8bc87c85780..5bac25938a3 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/zoom_segmentation_mask.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/zoom_segmentation_mask.py @@ -21,6 +21,7 @@ from ..config import NumberField from ..logging import warning + class ZoomSegMask(Postprocessor): """ Zoom probabilities of segmentation prediction. diff --git a/tools/accuracy_checker/setup.py b/tools/accuracy_checker/setup.py index 417f189206c..87b9026fe10 100644 --- a/tools/accuracy_checker/setup.py +++ b/tools/accuracy_checker/setup.py @@ -29,7 +29,7 @@ ('ymlloader', 'yamlloader'), ('Pillow', 'pillow'), ('scikit-learn', 'scikit-learn'), - ('scipy', 'scipy<1.2'), + ('scipy', 'scipy'), ('cpuinfo', 'py-cpuinfo<=4.0'), ('shapely', 'shapely'), ('nibabel', 'nibabel') From 8c4bfd5e941f8bda3774e94de21dc24a3092dde5 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 18 Sep 2019 13:34:14 +0300 Subject: [PATCH 009/927] refactoring --- .../data_readers/data_reader.py | 47 ++++++++----------- .../postprocessor/resize_segmentation_mask.py | 2 +- 2 files changed, 21 insertions(+), 28 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index aa28ba52e46..ed582881faa 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -218,10 +218,9 @@ def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, c image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) # Becomes a mode='P' automagically. elif mode == 'P': # default gray-scale - pal = ( - np.arange(0, 256, 1, dtype=np.uint8)[:, np.newaxis] * - np.ones((3,), dtype=np.uint8)[np.newaxis, :] - ) + pal1 = np.arange(0, 256, 1, dtype=np.uint8)[:, np.newaxis] + pal2 = np.ones((3,), dtype=np.uint8)[np.newaxis, :] + pal = pal1 * pal2 image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) return image @@ -246,10 +245,9 @@ def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, c # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' if channel_axis is None: ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) - if np.size(ca): - ca = ca[0] - else: + if not np.size(ca): raise ValueError("Could not find channel dimension.") + ca = ca[0] else: ca = channel_axis @@ -258,30 +256,25 @@ def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, c raise ValueError("Channel axis dimension is not valid.") bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) - if ca == 2: - strdata = bytedata.tostring() - shape = (shape[1], shape[0]) - elif ca == 1: - strdata = np.transpose(bytedata, (0, 2, 1)).tostring() - shape = (shape[2], shape[0]) - elif ca == 0: - strdata = np.transpose(bytedata, (1, 2, 0)).tostring() - shape = (shape[2], shape[1]) + channel_axis_mapping = { + 0: ((1, 2, 0), (shape[1], shape[0])), + 1: ((0, 2, 1), (shape[2], shape[0])), + 2: ((0, 1, 2), (shape[1], shape[0])) + } + if ca in channel_axis_mapping: + transposition, shape = channel_axis_mapping[ca] + strdata = np.transpose(bytedata, transposition).tostring() + if mode is None: - if numch == 3: - mode = 'RGB' - else: - mode = 'RGBA' + mode = 'RGB'if numch == 3 else 'RGBA' if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: raise ValueError(_errstr) - if mode in ['RGB', 'YCbCr']: - if numch != 3: - raise 
ValueError("Invalid array shape for mode.") - if mode in ['RGBA', 'CMYK']: - if numch != 4: - raise ValueError("Invalid array shape for mode.") + if mode in ['RGB', 'YCbCr'] and numch != 3: + raise ValueError("Invalid array shape for mode.") + if mode in ['RGBA', 'CMYK'] and numch != 4: + raise ValueError("Invalid array shape for mode.") # Here we know data and mode is correct image = Image.frombytes(mode, shape, strdata) @@ -311,7 +304,7 @@ def _bytescale(data, cmin=None, cmax=None, high=255, low=0): cscale = cmax - cmin if cscale < 0: raise ValueError("`cmax` should be larger than `cmin`.") - elif cscale == 0: + if cscale == 0: cscale = 1 scale = float(high - low) / cscale diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py index d5bb75e1ed7..cbe2b56318c 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py @@ -62,7 +62,7 @@ def resize_segmentation_mask(entry, height, width): def _(entry, height, width): entry_mask = [] for class_mask in entry.mask: - resized_mask = ScipyImageReader.imresize(class_mask, (height, width), 'nearest') + resized_mask = ScipyImageReader.imresize(class_mask, (height, width), 'nearest') entry_mask.append(resized_mask) entry.mask = np.array(entry_mask) From f22e49132b8602d6a117d90bc591550a679e8a9c Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 18 Sep 2019 14:34:42 +0300 Subject: [PATCH 010/927] refactoring 2 --- .../data_readers/data_reader.py | 98 ++++++++++--------- 1 file changed, 52 insertions(+), 46 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index ed582881faa..d5ae0a7fd63 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -197,14 +197,8 @@ def _from_image(image, flatten=False, mode=None): @staticmethod def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, channel_axis=None): _errstr = "Mode is unknown or incompatible with input array shape." - data = np.asarray(arr) - if np.iscomplexobj(data): - raise ValueError("Cannot convert a complex-valued array.") - shape = list(data.shape) - valid = len(shape) == 2 or ((len(shape) == 3) and ((3 in shape) or (4 in shape))) - if not valid: - raise ValueError("'arr' does not have a suitable array shape for any mode.") - if len(shape) == 2: + + def process_2d(data, shape, pal, cmax, cmin): shape = (shape[1], shape[0]) # columns show up first if mode == 'F': data32 = data.astype(np.float32) @@ -241,44 +235,56 @@ def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, c return image - # if here then 3-d array with a 3 or a 4 in the shape length. 
- # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' - if channel_axis is None: - ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) - if not np.size(ca): - raise ValueError("Could not find channel dimension.") - ca = ca[0] - else: - ca = channel_axis - - numch = shape[ca] - if numch not in [3, 4]: - raise ValueError("Channel axis dimension is not valid.") - - bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) - channel_axis_mapping = { - 0: ((1, 2, 0), (shape[1], shape[0])), - 1: ((0, 2, 1), (shape[2], shape[0])), - 2: ((0, 1, 2), (shape[1], shape[0])) - } - if ca in channel_axis_mapping: - transposition, shape = channel_axis_mapping[ca] - strdata = np.transpose(bytedata, transposition).tostring() - - if mode is None: - mode = 'RGB'if numch == 3 else 'RGBA' - - if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: - raise ValueError(_errstr) - - if mode in ['RGB', 'YCbCr'] and numch != 3: - raise ValueError("Invalid array shape for mode.") - if mode in ['RGBA', 'CMYK'] and numch != 4: - raise ValueError("Invalid array shape for mode.") - - # Here we know data and mode is correct - image = Image.frombytes(mode, shape, strdata) - return image + def process_3d(data, shape, mode): + # if here then 3-d array with a 3 or a 4 in the shape length. + # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' + if channel_axis is None: + ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) + if not np.size(ca): + raise ValueError("Could not find channel dimension.") + ca = ca[0] + else: + ca = channel_axis + + numch = shape[ca] + if numch not in [3, 4]: + raise ValueError("Channel axis dimension is not valid.") + + bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) + channel_axis_mapping = { + 0: ((1, 2, 0), (shape[1], shape[0])), + 1: ((0, 2, 1), (shape[2], shape[0])), + 2: ((0, 1, 2), (shape[1], shape[0])) + } + if ca in channel_axis_mapping: + transposition, shape = channel_axis_mapping[ca] + strdata = np.transpose(bytedata, transposition).tostring() + + if mode is None: + mode = 'RGB' if numch == 3 else 'RGBA' + + if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: + raise ValueError(_errstr) + + if mode in ['RGB', 'YCbCr'] and numch != 3: + raise ValueError("Invalid array shape for mode.") + if mode in ['RGBA', 'CMYK'] and numch != 4: + raise ValueError("Invalid array shape for mode.") + + # Here we know data and mode is correct + image = Image.frombytes(mode, shape, strdata) + return image + + data = np.asarray(arr) + if np.iscomplexobj(data): + raise ValueError("Cannot convert a complex-valued array.") + shape = list(data.shape) + valid = len(shape) == 2 or ((len(shape) == 3) and ((3 in shape) or (4 in shape))) + if not valid: + raise ValueError("'arr' does not have a suitable array shape for any mode.") + if len(shape) == 2: + return process_2d(data, shape, pal, cmax, cmin) + return process_3d(data, shape, mode) @staticmethod def _imread(name): From 8a56b278521bccdd4f395016f6d9e0a22fda4f26 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 18 Sep 2019 14:45:08 +0300 Subject: [PATCH 011/927] refactoring 3 --- .../data_readers/data_reader.py | 152 +++++++++--------- 1 file changed, 76 insertions(+), 76 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index d5ae0a7fd63..d098971a6d6 100644 --- 
a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -195,86 +195,86 @@ def _from_image(image, flatten=False, mode=None): return np.array(image) @staticmethod - def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, channel_axis=None): - _errstr = "Mode is unknown or incompatible with input array shape." - - def process_2d(data, shape, pal, cmax, cmin): - shape = (shape[1], shape[0]) # columns show up first - if mode == 'F': - data32 = data.astype(np.float32) - image = Image.frombytes(mode, shape, data32.tostring()) - - return image - if mode in [None, 'L', 'P']: - bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) - image = Image.frombytes('L', shape, bytedata.tostring()) - if pal is not None: - image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) - # Becomes a mode='P' automagically. - elif mode == 'P': # default gray-scale - pal1 = np.arange(0, 256, 1, dtype=np.uint8)[:, np.newaxis] - pal2 = np.ones((3,), dtype=np.uint8)[np.newaxis, :] - pal = pal1 * pal2 - image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) - - return image - if mode == '1': # high input gives threshold for 1 - bytedata = (data > high) - image = Image.frombytes('1', shape, bytedata.tostring()) - - return image - - cmin = cmin or np.amin(np.ravel(data)) - cmax = cmax or np.amax(np.ravel(data)) - data = (data * 1.0 - cmin) * (high - low) / (cmax - cmin) + low - if mode == 'I': - data32 = data.astype(np.uint32) - image = Image.frombytes(mode, shape, data32.tostring()) - else: - raise ValueError(_errstr) + def _process_2d(data, shape, mode, pal, high, low, cmax, cmin): + shape = (shape[1], shape[0]) # columns show up first + if mode == 'F': + data32 = data.astype(np.float32) + image = Image.frombytes(mode, shape, data32.tostring()) return image + if mode in [None, 'L', 'P']: + bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) + image = Image.frombytes('L', shape, bytedata.tostring()) + if pal is not None: + image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) + # Becomes a mode='P' automagically. + elif mode == 'P': # default gray-scale + pal1 = np.arange(0, 256, 1, dtype=np.uint8)[:, np.newaxis] + pal2 = np.ones((3,), dtype=np.uint8)[np.newaxis, :] + pal = pal1 * pal2 + image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) - def process_3d(data, shape, mode): - # if here then 3-d array with a 3 or a 4 in the shape length. 
- # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' - if channel_axis is None: - ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) - if not np.size(ca): - raise ValueError("Could not find channel dimension.") - ca = ca[0] - else: - ca = channel_axis - - numch = shape[ca] - if numch not in [3, 4]: - raise ValueError("Channel axis dimension is not valid.") + return image + if mode == '1': # high input gives threshold for 1 + bytedata = (data > high) + image = Image.frombytes('1', shape, bytedata.tostring()) - bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) - channel_axis_mapping = { - 0: ((1, 2, 0), (shape[1], shape[0])), - 1: ((0, 2, 1), (shape[2], shape[0])), - 2: ((0, 1, 2), (shape[1], shape[0])) - } - if ca in channel_axis_mapping: - transposition, shape = channel_axis_mapping[ca] - strdata = np.transpose(bytedata, transposition).tostring() - - if mode is None: - mode = 'RGB' if numch == 3 else 'RGBA' - - if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: - raise ValueError(_errstr) - - if mode in ['RGB', 'YCbCr'] and numch != 3: - raise ValueError("Invalid array shape for mode.") - if mode in ['RGBA', 'CMYK'] and numch != 4: - raise ValueError("Invalid array shape for mode.") - - # Here we know data and mode is correct - image = Image.frombytes(mode, shape, strdata) return image + cmin = cmin or np.amin(np.ravel(data)) + cmax = cmax or np.amax(np.ravel(data)) + data = (data * 1.0 - cmin) * (high - low) / (cmax - cmin) + low + if mode == 'I': + data32 = data.astype(np.uint32) + image = Image.frombytes(mode, shape, data32.tostring()) + else: + raise ValueError("Mode is unknown or incompatible with input array shape.") + + return image + + @staticmethod + def _process_3d(data, shape, mode, channel_axis, high, low, cmin, cmax): + # if here then 3-d array with a 3 or a 4 in the shape length. 
+ # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' + if channel_axis is None: + ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) + if not np.size(ca): + raise ValueError("Could not find channel dimension.") + ca = ca[0] + else: + ca = channel_axis + + numch = shape[ca] + if numch not in [3, 4]: + raise ValueError("Channel axis dimension is not valid.") + + bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) + channel_axis_mapping = { + 0: ((1, 2, 0), (shape[1], shape[0])), + 1: ((0, 2, 1), (shape[2], shape[0])), + 2: ((0, 1, 2), (shape[1], shape[0])) + } + if ca in channel_axis_mapping: + transposition, shape = channel_axis_mapping[ca] + strdata = np.transpose(bytedata, transposition).tostring() + + if mode is None: + mode = 'RGB' if numch == 3 else 'RGBA' + + if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: + raise ValueError("Mode is unknown or incompatible with input array shape.") + + if mode in ['RGB', 'YCbCr'] and numch != 3: + raise ValueError("Invalid array shape for mode.") + if mode in ['RGBA', 'CMYK'] and numch != 4: + raise ValueError("Invalid array shape for mode.") + + # Here we know data and mode is correct + image = Image.frombytes(mode, shape, strdata) + return image + + @staticmethod + def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, channel_axis=None): data = np.asarray(arr) if np.iscomplexobj(data): raise ValueError("Cannot convert a complex-valued array.") @@ -283,8 +283,8 @@ def process_3d(data, shape, mode): if not valid: raise ValueError("'arr' does not have a suitable array shape for any mode.") if len(shape) == 2: - return process_2d(data, shape, pal, cmax, cmin) - return process_3d(data, shape, mode) + return ScipyImageReader._process_2d(data, shape, mode, pal, high, low, cmax, cmin) + return ScipyImageReader._process_3d(data, shape, mode, channel_axis, high, low, cmin, cmax) @staticmethod def _imread(name): From 0ac0db4fcf1e5dafaa7d8507de35e992b0cfd0e7 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Wed, 18 Sep 2019 17:18:23 +0300 Subject: [PATCH 012/927] demos: check number of channels for net input and image --- demos/common/samples/ocv_common.hpp | 3 +++ demos/tests/image_sequences.py | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/demos/common/samples/ocv_common.hpp b/demos/common/samples/ocv_common.hpp index 770b0d7df75..b1e8c9c1d48 100644 --- a/demos/common/samples/ocv_common.hpp +++ b/demos/common/samples/ocv_common.hpp @@ -24,6 +24,9 @@ void matU8ToBlob(const cv::Mat& orig_image, InferenceEngine::Blob::Ptr& blob, in const size_t width = blobSize[3]; const size_t height = blobSize[2]; const size_t channels = blobSize[1]; + if (static_cast(orig_image.channels()) != channels) { + THROW_IE_EXCEPTION << "A number of channels for net input and image must match"; + } T* blob_data = blob->buffer().as(); cv::Mat resized_image(orig_image); diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index 17d4ba48683..a9d9583ca6c 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -138,8 +138,8 @@ 'smart-classroom-demo': [ image_net_arg('00000074'), - image_net_arg('00000141'), - image_net_arg('00000141'), + image_net_arg('00000002'), + image_net_arg('00000002'), image_net_arg('00000164'), image_net_arg('00000181'), image_net_arg('00000164'), From e0d5557c7fe147ddd1625fc414a382274030c24f Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 19 Sep 2019 11:24:20 +0300 
Subject: [PATCH 013/927] Added model contribution guide --- README.md | 1 + contribution.md | 202 ++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 203 insertions(+) create mode 100644 contribution.md diff --git a/README.md b/README.md index 85ed05305da..28886d496f2 100644 --- a/README.md +++ b/README.md @@ -12,6 +12,7 @@ This repository includes optimized deep learning models and a set of demos to ex * [Model Downloader](tools/downloader/README.md) and other automation tools * [Demos](demos/README.md) that demonstrate models usage with Deep Learning Deployment Toolkit * [Accuracy Checker](tools/accuracy_checker/README.md) tool for models accuracy validation +* [Model Contribution Guide](contribution.md) ## License Open Model Zoo is licensed under [Apache License Version 2.0](LICENSE). diff --git a/contribution.md b/contribution.md new file mode 100644 index 00000000000..600fa683959 --- /dev/null +++ b/contribution.md @@ -0,0 +1,202 @@ +# How to contribute to OMZ + +From this document you will know how to contribute your model to OpenVINO™ Open Model Zoo. Almost any model from supported frameworks (see list below) can be added. To do this do next few steps. + +1. [Model location] +2. [Model conversion] +3. [Demo] +4. [Accuracy validation] +5. [Documentation] +6. [Configuration file] +7. [Pull request requirements] + +List of supported frameworks: +* Caffe\* +* Caffe2\* (by conversion to ONNX\*) +* TensorFlow\* +* MXNet\* +* PyTorch\* (by conversion to ONNX\*) + + + +## Model location + +Upload your model to any Internet file storage with easy and direct access to it. It can be www.github.com, GoogleDrive\*, or any other. + +*After this step you got links to the model, which will be used later.* + +## Model conversion + +OpenVINO™ supports models in its own format IR. Model from any supported framework can be easily converted to IR using Model Optimizer tool included in OpenVINO™ package. More information about conversion you can learn [here](). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. + +> **NOTE 1**: due OpenVINO&trade paradigms, mean and scale values are built-in converted model. + +> **NOTE 2**: due OpenVINO&trade paradigms, if model take colored image as input, color channel order supposed to be `BGR`. + +*After this step you`ll get conversion parameters for Model Optimizer.* + +## Demo + +Demo will show main idea of how work with yout model. If your model solves one of the supported by Open Model Zoo task, try find appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). + +## Accuracy validation + +To run accuracy validation, use [Accuracy Checker]() tool, provided with repository. Doing this very simple if model task from supported. You must only create accuracy validation configuration file in this case. Most information about Accuracy Checker you can find [here](https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/README.md). + +When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means that conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][Model conversion] parameters. 
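For a classification model, such a validation configuration can be as small as the sketch below. The model, weights and dataset names here are placeholders, and the preprocessing and metric sections depend on how the particular model was trained.

```
models:
  - name: my-model
    launchers:
      - framework: dlsdk
        model: my-model.xml
        weights: my-model.bin
        adapter: classification
    datasets:
      - name: imagenet_1000_classes
        preprocessing:
          - type: resize
            size: 224
        metrics:
          - type: accuracy
            top_k: 1
```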
+ +## Pull request requirements + +Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. This pull request is strictly formalized and must contains changes: +* configuration file - model.yml +* documentation of model in markdown format +* license + +Configuration and documentation files must be located in `models/public` directory in subfolder, which name will represent model name in Open Model Zoo and will be used by downloader and converter tools. Also, please add suffix to model name, according to origin framework (e.g. `cf`, `cf2`, `tf`, `mx` or `pt`). + +### Configuration file + +Models configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be in the models subfolder. Let look closer to the file content. + +**`description`** + +This tag must contain description of model. + +**`task_type`** + +This tag describes on of the task that model solves: +- `action_recognition` +- `classification` +- `detection` +- `face_recognition` +- `head_pose_estimation` +- `human_pose_estimation` +- `image_processing` +- `instance_segmentation` +- `object_attributes` +- `optical_character_recognition` +- `semantic_segmentation` + +If your model solves another task, you can freely add it with modification of [tools/downloader/common.py](https://github.com/opencv/open_model_zoo/blob/develop/tools/downloader/common.py) file list `KNOWN_TASK_TYPES` + +**`files`** + +You must describe all files, which must be downloaded, in this section. Each file must is described in few tags: + +* `name` sets file name after downloading +* `size` sets file size +* `sha256` sets file hash sum +* `source` sets direct link to file *OR* describes file access parameters + +If file is located on GoogleDrive\*, section `source` must contain: +``` + - $type: google_drive + id: +``` +**`postprocessing`** (*optional*) + +Sometimes right after downloading model are not readty for conversion, or conversion may be incorrect or failure. It may be avoided by some manipulation with original files, such as unpacking, replacing or deleting some part of file. This manipulation must be described in this section. + +For unpacking archive: + +``` + - $type: unpack_archive + file: + format: zip | tar | gztar | bztar | xztar +``` + +For replacement operations: + +``` + - $type: regex_replace + file: + pattern: + replacement: + count: +``` +where +- `file` name of file where replacement must be executed +- `pattern` string or regexp ([learn more](https://docs.python.org/2/library/re.html)) to find +- `replacement` replacement string +- `count` (*optional*) maximum number of pattern occurrences to be replaced + + +**`pytorch_to_onnx`** (*optional*) + +List of pytorch-to-onnx conversion parameters, see `model_optimizer_args` for details. + +**`caffe2_to_onnx`** (*optional*) + +List of caffe2-to-onnx conversion parameters, see `model_optimizer_args` for details. + +**`model_optimizer_args`** + +Conversion parameter, obtained [early][Model Conversion], must be specified in this section like this: +``` + - --shape=[1,3,224,224] +``` +> **NOTE:** no need to specify `framework`, `data_type`, `model_name` and `output_dir` parameters since them are deduced automatically. + +**`framework`** + +Framework of original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`) + +**`license`** + +Path to license + +---- +*Congratulation! 
You've got configuration file!* + +#### Example + +In this [example](https://github.com/opencv/open_model_zoo/blob/develop/models/public/densenet-121-tf/model.yml) classificational model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from GoogleDrive\* as archive. + +``` +description: >- + This is an Tensorflow\* version of `densenet-121` model, one of the DenseNet + group of models designed to perform image classification. The weights were converted + from DenseNet-Keras Models. For details see repository , + paper +task_type: classification +files: + - name: tf-densenet121.tar.gz + size: 30597420 + sha256: b31ec840358f1d20e1c6364d05ce463cb0bc0480042e663ad54547189501852d + source: + $type: google_drive + id: 0B_fUSpodN0t0eW1sVk1aeWREaDA +postprocessing: + - $type: unpack_archive + format: gztar + file: tf-densenet121.tar.gz +model_optimizer_args: + - --reverse_input_channels + - --input_shape=[1,224,224,3] + - --input=Placeholder + - --mean_values=Placeholder[123.68,116.78,103.94] + - --scale_values=Placeholder[58.8235294117647] + - --output=densenet121/predictions/Reshape_1 + - --input_meta_graph=$dl_dir/tf-densenet121.ckpt.meta +framework: tf +license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICENSE +``` + +### Documentation + +Documentation if very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. +Doucmentation must contain: +* description of model, where you describe main purpose of the model and its features, add some links to paper or/and source code of original model and so on +* model specification, e.g. type, source framework, GFLOPs and number of parameters +* main accuracy values (also description of metric) +* detailed description of input and output for original and converted models + +Detailed structure and headers naming convention you can learn from any other model, e.g. [alexnet](https://github.com/opencv/open_model_zoo/blob/develop/models/public/alexnet/alexnet.md). + +### License + +Add your models license to `tools/downloader/license.txt` file + +## Legal Information + +[*] From 42d7b23b51f4387f195672aeef2ab67e1b27a365 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 19 Sep 2019 13:00:34 +0300 Subject: [PATCH 014/927] Update links --- contribution.md | 54 ++++++++++++++++++++++++++++++++----------------- 1 file changed, 36 insertions(+), 18 deletions(-) diff --git a/contribution.md b/contribution.md index 600fa683959..f514dc12e95 100644 --- a/contribution.md +++ b/contribution.md @@ -2,13 +2,13 @@ From this document you will know how to contribute your model to OpenVINO™ Open Model Zoo. Almost any model from supported frameworks (see list below) can be added. To do this do next few steps. -1. [Model location] -2. [Model conversion] -3. [Demo] -4. [Accuracy validation] -5. [Documentation] -6. [Configuration file] -7. [Pull request requirements] +1. [Model location](#model-location) +2. [Model conversion](#model-conversion) +3. [Demo](#demo) +4. [Accuracy validation](#accuracy-validation) +5. [Documentation](#documentation) +6. [Configuration file](#configuration-file) +7. [Pull request requirements](#pull-request-requirements) List of supported frameworks: * Caffe\* @@ -27,11 +27,11 @@ Upload your model to any Internet file storage with easy and direct access to it ## Model conversion -OpenVINO™ supports models in its own format IR. 
Model from any supported framework can be easily converted to IR using Model Optimizer tool included in OpenVINO™ package. More information about conversion you can learn [here](). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +OpenVINO™ supports models in its own format IR. Model from any supported framework can be easily converted to IR using Model Optimizer tool included in OpenVINO™ package. More information about conversion you can learn [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. -> **NOTE 1**: due OpenVINO&trade paradigms, mean and scale values are built-in converted model. +> **NOTE 1**: due OpenVINO™ paradigms, mean and scale values are built-in converted model. -> **NOTE 2**: due OpenVINO&trade paradigms, if model take colored image as input, color channel order supposed to be `BGR`. +> **NOTE 2**: due OpenVINO™ paradigms, if model take colored image as input, color channel order supposed to be `BGR`. *After this step you`ll get conversion parameters for Model Optimizer.* @@ -41,15 +41,16 @@ Demo will show main idea of how work with yout model. If your model solves one o ## Accuracy validation -To run accuracy validation, use [Accuracy Checker]() tool, provided with repository. Doing this very simple if model task from supported. You must only create accuracy validation configuration file in this case. Most information about Accuracy Checker you can find [here](https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/README.md). +To run accuracy validation, use [Accuracy Checker](./tools/accuracy_checker/README.md) tool, provided with repository. Doing this very simple if model task from supported. You must only create accuracy validation configuration file in this case. -When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means that conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][Model conversion] parameters. +When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means that conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][#model-conversion] parameters. ## Pull request requirements Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. This pull request is strictly formalized and must contains changes: -* configuration file - model.yml +* configuration file - `model.yml` * documentation of model in markdown format +* accuracy validation configuration file (see [Accuracy validation](#accuracy-validation)) * license Configuration and documentation files must be located in `models/public` directory in subfolder, which name will represent model name in Open Model Zoo and will be used by downloader and converter tools. Also, please add suffix to model name, according to origin framework (e.g. `cf`, `cf2`, `tf`, `mx` or `pt`). 
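The conversion parameters collected at this step typically translate into a single Model Optimizer call. The example below is illustrative only: the script location, the input name `data`, and the mean/scale values are placeholders that depend on the local OpenVINO™ installation and on the particular model.

```
python3 mo.py \
    --input_model my-model.onnx \
    --input_shape [1,3,224,224] \
    --mean_values data[123.68,116.78,103.94] \
    --scale_values data[58.82] \
    --reverse_input_channels \
    --output_dir ir/FP32
```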
@@ -184,14 +185,21 @@ license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICE ### Documentation -Documentation if very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. -Doucmentation must contain: -* description of model, where you describe main purpose of the model and its features, add some links to paper or/and source code of original model and so on +Documentation is very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. +Documentation must contain: +* description of model + * main purpose + * features + * links to paper or/and source * model specification, e.g. type, source framework, GFLOPs and number of parameters + * type + * framework + * GFLOPs + * number of parameters * main accuracy values (also description of metric) * detailed description of input and output for original and converted models -Detailed structure and headers naming convention you can learn from any other model, e.g. [alexnet](https://github.com/opencv/open_model_zoo/blob/develop/models/public/alexnet/alexnet.md). +Detailed structure and headers naming convention you can learn from any other model, e.g. [alexnet](./models/public/alexnet/alexnet.md). ### License @@ -199,4 +207,14 @@ Add your models license to `tools/downloader/license.txt` file ## Legal Information -[*] +[*] Other names and brands may be claimed as the property of others. + +OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries. + +Copyright © 2018-2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at +``` +http://www.apache.org/licenses/LICENSE-2.0 +``` +Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
From c8a7a3218ffd7f43fc39aa37ba705b04019b65e1 Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 19 Sep 2019 15:58:21 +0300 Subject: [PATCH 015/927] fix negative coordinates of bounding boxes (#428) --- .../annotation_converters/vgg_face_regression.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/vgg_face_regression.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/vgg_face_regression.py index a41155cfc37..2dd33f80ff0 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/vgg_face_regression.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/vgg_face_regression.py @@ -76,7 +76,7 @@ def convert(self, check_content=False, progress_callback=None, progress_interval if self.bbox_csv: for index, row in enumerate(read_csv(self.bbox_csv)): annotations[index].metadata['rect'] = convert_bboxes_xywh_to_x1y1x2y2( - int(row["X"]), int(row["Y"]), int(row["W"]), int(row["H"]) + max(int(row["X"]), 0), max(int(row["Y"]), 0), max(int(row["W"]), 0), max(int(row["H"]), 0) ) meta = { From 24afe39dff5fa1d07ac03beb137bea781a7e9a96 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 20 Sep 2019 10:46:54 +0300 Subject: [PATCH 016/927] Major fixes --- contribution.md | 73 ++++++++++++++++++++++++++++++------------------- 1 file changed, 45 insertions(+), 28 deletions(-) diff --git a/contribution.md b/contribution.md index f514dc12e95..8d6a0e6e33a 100644 --- a/contribution.md +++ b/contribution.md @@ -6,8 +6,8 @@ From this document you will know how to contribute your model to OpenVINO™ 2. [Model conversion](#model-conversion) 3. [Demo](#demo) 4. [Accuracy validation](#accuracy-validation) -5. [Documentation](#documentation) -6. [Configuration file](#configuration-file) +7. [Configuration file](#configuration-file) +6. [Documentation](#documentation) 7. [Pull request requirements](#pull-request-requirements) List of supported frameworks: @@ -23,7 +23,7 @@ List of supported frameworks: Upload your model to any Internet file storage with easy and direct access to it. It can be www.github.com, GoogleDrive\*, or any other. -*After this step you got links to the model, which will be used later.* +*After this step you will get **links** to the model* ## Model conversion @@ -33,29 +33,30 @@ OpenVINO™ supports models in its own format IR. Model from any supported f > **NOTE 2**: due OpenVINO™ paradigms, if model take colored image as input, color channel order supposed to be `BGR`. -*After this step you`ll get conversion parameters for Model Optimizer.* +*After this step you`ll get **conversion parameters** for Model Optimizer.* ## Demo -Demo will show main idea of how work with yout model. If your model solves one of the supported by Open Model Zoo task, try find appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). +Demo will show main idea of how work with your model. If your model solves one of the supported by Open Model Zoo task, try find appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). + +If appropriate demo or sample are absent, you must provide your own demo (C++ or Python). Demo's input requires next options: +``` + -i "" Optional. Path to a input file or directory (for multiple inferences). 
By default input must be generated randomly. + -m "" Required. Path to an .xml file with a trained model + -d "" Optional. Target device for model inference. Usually CPU and GPU. By default CPU + -no_show Optional. Do not show inference result. +``` +Also you can add all necessary parameters for inference your model. ## Accuracy validation To run accuracy validation, use [Accuracy Checker](./tools/accuracy_checker/README.md) tool, provided with repository. Doing this very simple if model task from supported. You must only create accuracy validation configuration file in this case. -When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means that conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][#model-conversion] parameters. - -## Pull request requirements - -Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. This pull request is strictly formalized and must contains changes: -* configuration file - `model.yml` -* documentation of model in markdown format -* accuracy validation configuration file (see [Accuracy validation](#accuracy-validation)) -* license +When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][#model-conversion] parameters or validation configuration. -Configuration and documentation files must be located in `models/public` directory in subfolder, which name will represent model name in Open Model Zoo and will be used by downloader and converter tools. Also, please add suffix to model name, according to origin framework (e.g. `cf`, `cf2`, `tf`, `mx` or `pt`). +*After this step you will get accuracy validation configuration file - **.yml*** -### Configuration file +## Configuration file Models configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be in the models subfolder. Let look closer to the file content. @@ -78,7 +79,7 @@ This tag describes on of the task that model solves: - `optical_character_recognition` - `semantic_segmentation` -If your model solves another task, you can freely add it with modification of [tools/downloader/common.py](https://github.com/opencv/open_model_zoo/blob/develop/tools/downloader/common.py) file list `KNOWN_TASK_TYPES` +If your model solves another task, you can freely add it with modification of [tools/downloader/common.py](tools/downloader/common.py) file list `KNOWN_TASK_TYPES` **`files`** @@ -96,7 +97,7 @@ If file is located on GoogleDrive\*, section `source` must contain: ``` **`postprocessing`** (*optional*) -Sometimes right after downloading model are not readty for conversion, or conversion may be incorrect or failure. It may be avoided by some manipulation with original files, such as unpacking, replacing or deleting some part of file. This manipulation must be described in this section. +Sometimes right after downloading model are not ready for conversion, or conversion may be incorrect or failure. It may be avoided by some manipulation with original files, such as unpacking, replacing or deleting some part of file. This manipulation must be described in this section. 
For unpacking archive: @@ -132,9 +133,14 @@ List of caffe2-to-onnx conversion parameters, see `model_optimizer_args` for det **`model_optimizer_args`** -Conversion parameter, obtained [early][Model Conversion], must be specified in this section like this: +Conversion parameter, obtained [earlier](#model-conversion), must be specified in this section, e.g.: ``` - - --shape=[1,3,224,224] + - --input=data + - --mean_values=data[127.5] + - --scale_values=data[127.5] + - --reverse_input_channels + - --output=prob + - --input_model=$conv_dir/googlenet-v3.onnx ``` > **NOTE:** no need to specify `framework`, `data_type`, `model_name` and `output_dir` parameters since them are deduced automatically. @@ -144,14 +150,14 @@ Framework of original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`) **`license`** -Path to license +Path to model's license. ---- -*Congratulation! You've got configuration file!* +*After this step you will obtain **model.yml** file* -#### Example +### Example -In this [example](https://github.com/opencv/open_model_zoo/blob/develop/models/public/densenet-121-tf/model.yml) classificational model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from GoogleDrive\* as archive. +In this [example](models/public/densenet-121-tf/model.yml) classificational model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from GoogleDrive\* as archive. ``` description: >- @@ -183,7 +189,7 @@ framework: tf license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICENSE ``` -### Documentation +## Documentation Documentation is very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. Documentation must contain: @@ -201,13 +207,24 @@ Documentation must contain: Detailed structure and headers naming convention you can learn from any other model, e.g. [alexnet](./models/public/alexnet/alexnet.md). -### License +--- +*After this step you will obtain **.md** - documentation file* + +## Pull request requirements + +Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. This pull request is strictly formalized and must contains changes: +* configuration file - `model.yml` [from here](#configuration-file) +* documentation of model in markdown format [from here](#documentation) +* accuracy validation configuration file [from here](#accuracy-validation) +* license added to [tools/downloader/license.txt](tools/downloader/license.txt) + +Configuration and documentation files must be located in `models/public` directory in subfolder, which name will represent model name in Open Model Zoo and will be used by downloader and converter tools. Also, please add suffix to model name, according to origin framework (e.g. `cf`, `cf2`, `tf`, `mx` or `pt`). -Add your models license to `tools/downloader/license.txt` file +Validation configuration file must be located in [tools/accuracy_checker/configs](tools/accuracy_checker/configs). ## Legal Information -[*] Other names and brands may be claimed as the property of others. +[\*] Other names and brands may be claimed as the property of others. OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries. 
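With the input options listed above (`-i`, `-m`, `-d`, `-no_show`), a contributed demo is expected to be runnable along these lines; the demo name and paths below are placeholders:

```
./my_new_demo -m public/my-model/FP32/my-model.xml -i test_images/ -d CPU -no_show
```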
From 53524c89819d5d628e9488e408691a3a659236a8 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 20 Sep 2019 13:52:09 +0300 Subject: [PATCH 017/927] AC: get full size of dataset (#432) --- tools/accuracy_checker/accuracy_checker/dataset.py | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index f5a76e06f4f..d77ac80e52c 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -114,6 +114,10 @@ def size(self): return len(self.subset) return len(self._annotation) + @property + def full_size(self): + return len(self._annotation) + def __call__(self, context, *args, **kwargs): batch_annotation = self.__getitem__(self.iteration) self.iteration += 1 @@ -242,6 +246,12 @@ def reset(self): if self.annotation_reader: self.annotation_reader.subset = None + @property + def full_size(self): + if self.annotation_reader: + return self.annotation_reader.full_size + return len(self._identifiers) + @property def size(self): if self.annotation_reader: From 6f0cbd4dab7615d7547d5de16b7b669fb12a780e Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 20 Sep 2019 16:42:50 +0300 Subject: [PATCH 018/927] AC: fix reseting metric meta (#434) --- .../metrics/classification.py | 2 +- .../accuracy_checker/metrics/coco_metrics.py | 4 ++- .../metrics/coco_orig_metrics.py | 1 + .../accuracy_checker/metrics/detection.py | 4 +++ .../metrics/multilabel_recognition.py | 36 ++++++++++--------- .../accuracy_checker/metrics/regression.py | 29 ++++++++------- 6 files changed, 45 insertions(+), 31 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/classification.py b/tools/accuracy_checker/accuracy_checker/metrics/classification.py index e5d504f7a7f..00062cbede2 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/classification.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/classification.py @@ -90,7 +90,6 @@ def configure(self): self.top_k = self.get_value_from_config('top_k') label_map = self.get_value_from_config('label_map') self.labels = self.dataset.metadata.get(label_map) - self.meta['names'] = list(self.labels.values()) def loss(annotation_label, prediction_top_k_labels): result = np.zeros_like(list(self.labels.keys())) @@ -110,6 +109,7 @@ def update(self, annotation, prediction): self.accuracy.update(annotation.label, prediction.top_k(self.top_k)) def evaluate(self, annotations, predictions): + self.meta['names'] = list(self.labels.values()) return self.accuracy.evaluate() def reset(self): diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py index a33225bd8b4..2b09d5c3096 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py @@ -60,7 +60,7 @@ def parameters(cls): def configure(self): self.max_detections = self.get_value_from_config('max_detections') self.thresholds = get_or_parse_value(self.get_value_from_config('threshold'), COCO_THRESHOLDS) - label_map = self.dataset.metadata.get('label_map', []) + label_map = self.dataset.metadata.get('label_map', {}) self.labels = [ label for label in label_map if label != self.dataset.metadata.get('background_label') @@ -92,6 +92,8 @@ def evaluate(self, annotations, predictions): def reset(self): self.matching_results = [[] for _ in self.labels] + label_map = 
self.dataset.metadata.get('label_map', {}) + self.meta['names'] = [label_map[label] for label in self.labels] class MSCOCOAveragePrecision(MSCOCOBaseMetric): diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py index f716476a8f9..871433df698 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py @@ -74,6 +74,7 @@ def keypoints_to_coco(prediction_data_to_store, pred): return prediction_data_to_store + iou_specific_processing = { 'bbox': box_to_coco, 'segm': segm_to_coco, diff --git a/tools/accuracy_checker/accuracy_checker/metrics/detection.py b/tools/accuracy_checker/accuracy_checker/metrics/detection.py index 7265ccbc807..dae4b74eef8 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/detection.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/detection.py @@ -127,6 +127,10 @@ def per_class_detection_statistics(self, annotations, predictions, labels): def evaluate(self, annotations, predictions): pass + def reset(self): + valid_labels = list(filter(lambda x: x != self.dataset.metadata.get('background_label'), self.labels)) + self.meta['names'] = [self.labels[name] for name in valid_labels] + class DetectionMAP(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): """ diff --git a/tools/accuracy_checker/accuracy_checker/metrics/multilabel_recognition.py b/tools/accuracy_checker/accuracy_checker/metrics/multilabel_recognition.py index 16a75990a12..aebfc034bb9 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/multilabel_recognition.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/multilabel_recognition.py @@ -42,19 +42,12 @@ def parameters(cls): def configure(self): self.labels = self.dataset.metadata.get(self.get_value_from_config('label_map')) self.calculate_average = self.get_value_from_config('calculate_average') - - self.meta['scale'] = 1 - self.meta['postfix'] = '' - self.meta['calculate_mean'] = False - self.meta['names'] = list(self.labels.values()) - if self.calculate_average: - self.meta['names'].append('average') self.tp = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.fp = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.tn = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.fn = np.zeros_like(list(self.labels.keys()), dtype=np.float) - self.counter = np.zeros_like(list(self.labels.keys()), dtype=np.float) + self._create_meta() def update(self, annotation, prediction): def loss(annotation_labels, prediction_labels): @@ -99,12 +92,21 @@ def counter(annotation_label): def evaluate(self, annotations, predictions): pass + def _create_meta(self): + self.meta['scale'] = 1 + self.meta['postfix'] = '' + self.meta['calculate_mean'] = False + self.meta['names'] = list(self.labels.values()) + if self.calculate_average: + self.meta['names'].append('average') + def reset(self): self.tp = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.fp = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.tn = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.fn = np.zeros_like(list(self.labels.keys()), dtype=np.float) self.counter = np.zeros_like(list(self.labels.keys()), dtype=np.float) + self._create_meta() class MultiLabelAccuracy(MultiLabelMetric): @@ -180,14 +182,7 @@ def configure(self): label_map = self.get_value_from_config('label_map') self.labels = self.dataset.metadata.get(label_map) 
self.calculate_average = self.get_value_from_config('calculate_average') - self.meta['names'] = list(self.labels.values()) - if self.calculate_average: - self.meta['names'].append('average') - - self.meta['scale'] = 1 - self.meta['postfix'] = '' - self.meta['calculate_mean'] = False - self.meta['names'] = list(self.labels.values()) + ['average'] + self._create_meta() def update(self, annotation, prediction): self.precision.update(annotation, prediction) @@ -214,3 +209,12 @@ def evaluate(self, annotations, predictions): def reset(self): self.precision.reset() self.recall.reset() + self._create_meta() + + def _create_meta(self): + self.meta['names'] = list(self.labels.values()) + if self.calculate_average: + self.meta['names'].append('average') + self.meta['scale'] = 1 + self.meta['postfix'] = '' + self.meta['calculate_mean'] = False diff --git a/tools/accuracy_checker/accuracy_checker/metrics/regression.py b/tools/accuracy_checker/accuracy_checker/metrics/regression.py index d6390e0f655..2dc47d5b529 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/regression.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/regression.py @@ -108,18 +108,7 @@ def configure(self): self.intervals = np.unique(self.intervals) self.magnitude = [[] for _ in range(len(self.intervals) + 1)] - - self.meta['names'] = ([]) - if not self.ignore_out_of_range: - self.meta['names'] = (['mean: < ' + str(self.intervals[0]), 'std: < ' + str(self.intervals[0])]) - - for index in range(len(self.intervals) - 1): - self.meta['names'].append('mean: <= ' + str(self.intervals[index]) + ' < ' + str(self.intervals[index + 1])) - self.meta['names'].append('std: <= ' + str(self.intervals[index]) + ' < ' + str(self.intervals[index + 1])) - - if not self.ignore_out_of_range: - self.meta['names'].append('mean: > ' + str(self.intervals[-1])) - self.meta['names'].append('std: > ' + str(self.intervals[-1])) + self._create_meta() def update(self, annotation, prediction): index = find_interval(annotation.value, self.intervals) @@ -138,8 +127,22 @@ def evaluate(self, annotations, predictions): return result + def _create_meta(self): + self.meta['names'] = ([]) + if not self.ignore_out_of_range: + self.meta['names'] = (['mean: < ' + str(self.intervals[0]), 'std: < ' + str(self.intervals[0])]) + + for index in range(len(self.intervals) - 1): + self.meta['names'].append('mean: <= ' + str(self.intervals[index]) + ' < ' + str(self.intervals[index + 1])) + self.meta['names'].append('std: <= ' + str(self.intervals[index]) + ' < ' + str(self.intervals[index + 1])) + + if not self.ignore_out_of_range: + self.meta['names'].append('mean: > ' + str(self.intervals[-1])) + self.meta['names'].append('std: > ' + str(self.intervals[-1])) + def reset(self): self.magnitude = [[] for _ in range(len(self.intervals) + 1)] + self._create_meta() class MeanAbsoluteError(BaseRegressionMetric): @@ -263,7 +266,6 @@ def configure(self): 'postfix': ' ', 'calculate_mean': not self.calculate_std or not self.percentile, 'data_format': '{:.4f}', - 'names': ['mean'] }) self.magnitude = [] @@ -276,6 +278,7 @@ def update(self, annotation, prediction): self.magnitude.append(avg_result) def evaluate(self, annotations, predictions): + self.meta['names'] = ['mean'] result = [np.mean(self.magnitude)] if self.calculate_std: From 29eb923dafe5a1fc0b023102540a597f8eaaa1fb Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 20 Sep 2019 17:23:24 +0300 Subject: [PATCH 019/927] AC: fix labels reset (#435) --- .../accuracy_checker/accuracy_checker/metrics/detection.py | 6 ++++-- 1 
file changed, 4 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/detection.py b/tools/accuracy_checker/accuracy_checker/metrics/detection.py index dae4b74eef8..beadc030177 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/detection.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/detection.py @@ -128,8 +128,10 @@ def evaluate(self, annotations, predictions): pass def reset(self): - valid_labels = list(filter(lambda x: x != self.dataset.metadata.get('background_label'), self.labels)) - self.meta['names'] = [self.labels[name] for name in valid_labels] + label_map = self.config.get('label_map', 'label_map') + dataset_labels = self.dataset.metadata.get(label_map, {}) + valid_labels = list(filter(lambda x: x != self.dataset.metadata.get('background_label'), dataset_labels)) + self.meta['names'] = [dataset_labels[name] for name in valid_labels] class DetectionMAP(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): From e0de22514d9e176b3b69399cde28b06c97163f19 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 20 Sep 2019 18:04:38 +0300 Subject: [PATCH 020/927] Fixes --- contribution.md | 58 ++++++++++++++++++++++++++++--------------------- 1 file changed, 33 insertions(+), 25 deletions(-) diff --git a/contribution.md b/contribution.md index 8d6a0e6e33a..ccac2a7145a 100644 --- a/contribution.md +++ b/contribution.md @@ -1,6 +1,6 @@ # How to contribute to OMZ -From this document you will know how to contribute your model to OpenVINO™ Open Model Zoo. Almost any model from supported frameworks (see list below) can be added. To do this do next few steps. +From this document you will know how to contribute your model to OpenVINO™ Open Model Zoo. Almost any model from supported frameworks (see list below) can be added. It could be done in few simple steps. 1. [Model location](#model-location) 2. [Model conversion](#model-conversion) @@ -17,8 +17,6 @@ List of supported frameworks: * MXNet\* * PyTorch\* (by conversion to ONNX\*) - - ## Model location Upload your model to any Internet file storage with easy and direct access to it. It can be www.github.com, GoogleDrive\*, or any other. @@ -29,9 +27,9 @@ Upload your model to any Internet file storage with easy and direct access to it OpenVINO™ supports models in its own format IR. Model from any supported framework can be easily converted to IR using Model Optimizer tool included in OpenVINO™ package. More information about conversion you can learn [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. -> **NOTE 1**: due OpenVINO™ paradigms, mean and scale values are built-in converted model. +> **NOTE 1**: due to OpenVINO™ paradigm, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. -> **NOTE 2**: due OpenVINO™ paradigms, if model take colored image as input, color channel order supposed to be `BGR`. +> **NOTE 2**: due to OpenVINO™ paradigms, if model input is a colored image, color channel order should be `BGR`. *After this step you`ll get **conversion parameters** for Model Optimizer.* @@ -43,14 +41,14 @@ If appropriate demo or sample are absent, you must provide your own demo (C++ or ``` -i "" Optional. Path to a input file or directory (for multiple inferences). By default input must be generated randomly. -m "" Required. 
Path to an .xml file with a trained model - -d "" Optional. Target device for model inference. Usually CPU and GPU. By default CPU - -no_show Optional. Do not show inference result. + -d "" Optional. Target device for model inference. By default CPU + -no_show Optional. Do not launch GUI window to visualize inference results. Needed for demo CI tests automation. ``` Also you can add all necessary parameters for inference your model. ## Accuracy validation -To run accuracy validation, use [Accuracy Checker](./tools/accuracy_checker/README.md) tool, provided with repository. Doing this very simple if model task from supported. You must only create accuracy validation configuration file in this case. +To run accuracy validation, use [Accuracy Checker](./tools/accuracy_checker#testing-new-models) tool, provided with repository. It is simple if model task already supported by Accuracy Checker. You need only create Accuracy Checker configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][#model-conversion] parameters or validation configuration. @@ -58,15 +56,15 @@ When the configuration file is ready, you must run Accuracy Checker to obtain me ## Configuration file -Models configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be in the models subfolder. Let look closer to the file content. +Models configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be located in the model subfolder. Let look closer to the file content. **`description`** -This tag must contain description of model. +This tag contains description of model. **`task_type`** -This tag describes on of the task that model solves: +This tag describes task that model solves: - `action_recognition` - `classification` - `detection` @@ -79,11 +77,11 @@ This tag describes on of the task that model solves: - `optical_character_recognition` - `semantic_segmentation` -If your model solves another task, you can freely add it with modification of [tools/downloader/common.py](tools/downloader/common.py) file list `KNOWN_TASK_TYPES` +If task, that your model solve, is not listed here, please add new type of task to [tools/downloader/common.py](tools/downloader/common.py) file list `KNOWN_TASK_TYPES` **`files`** -You must describe all files, which must be downloaded, in this section. Each file must is described in few tags: +You describe all files, which need to be downloaded, in this section. Each file is described in few tags: * `name` sets file name after downloading * `size` sets file size @@ -97,7 +95,7 @@ If file is located on GoogleDrive\*, section `source` must contain: ``` **`postprocessing`** (*optional*) -Sometimes right after downloading model are not ready for conversion, or conversion may be incorrect or failure. It may be avoided by some manipulation with original files, such as unpacking, replacing or deleting some part of file. This manipulation must be described in this section. 
+Sometimes right after downloading model is not ready for conversion by Model Optimizer and some additional preprocessing needed, such as unpacking, replacing or deleting some part of file. This manipulation is described in this section. For unpacking archive: @@ -133,7 +131,7 @@ List of caffe2-to-onnx conversion parameters, see `model_optimizer_args` for det **`model_optimizer_args`** -Conversion parameter, obtained [earlier](#model-conversion), must be specified in this section, e.g.: +Conversion parameters, obtained [earlier](#model-conversion), is specified in this section, e.g.: ``` - --input=data - --mean_values=data[127.5] @@ -146,7 +144,7 @@ Conversion parameter, obtained [earlier](#model-conversion), must be specified i **`framework`** -Framework of original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`) +Framework of original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`). **`license`** @@ -157,7 +155,7 @@ Path to model's license. ### Example -In this [example](models/public/densenet-121-tf/model.yml) classificational model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from GoogleDrive\* as archive. +In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from GoogleDrive\* as archive. ``` description: >- @@ -192,16 +190,16 @@ license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICE ## Documentation Documentation is very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. -Documentation must contain: +Documentation should contain: * description of model * main purpose * features * links to paper or/and source -* model specification, e.g. type, source framework, GFLOPs and number of parameters +* model specification * type * framework - * GFLOPs - * number of parameters + * GFLOPs (*if available*) + * number of parameters (*if available*) * main accuracy values (also description of metric) * detailed description of input and output for original and converted models @@ -212,16 +210,26 @@ Detailed structure and headers naming convention you can learn from any other mo ## Pull request requirements -Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. This pull request is strictly formalized and must contains changes: -* configuration file - `model.yml` [from here](#configuration-file) -* documentation of model in markdown format [from here](#documentation) -* accuracy validation configuration file [from here](#accuracy-validation) +Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. Please use `develop` branch when creating your PR. Pull request is strictly formalized and must contains changes: +* configuration file - `model.yml` from [here](#configuration-file) +* documentation of model in markdown format from [here](#documentation) +* accuracy validation configuration file from [here](#accuracy-validation) * license added to [tools/downloader/license.txt](tools/downloader/license.txt) +> If model uses your own demo, add it to [demos](/demos) folder. + +> If you made any other changes, that make auto downloading and conversion possible, add it too. + Configuration and documentation files must be located in `models/public` directory in subfolder, which name will represent model name in Open Model Zoo and will be used by downloader and converter tools. 
Also, please add suffix to model name, according to origin framework (e.g. `cf`, `cf2`, `tf`, `mx` or `pt`). Validation configuration file must be located in [tools/accuracy_checker/configs](tools/accuracy_checker/configs). +This PR must pass next tests: +* model is downloadable by `tools/downloader/downloader.py` script +* model is convertible by `tools/downloader/converter.py` script +* model can be used by demo or sample and provides adequate results +* model passes accuracy validation + ## Legal Information [\*] Other names and brands may be claimed as the property of others. From 2d80581201788985c2c1471042c659181ca8cf17 Mon Sep 17 00:00:00 2001 From: xuejun Date: Mon, 23 Sep 2019 10:52:53 +0800 Subject: [PATCH 021/927] =?UTF-8?q?modify=20security=5Fbarrier=5Fcamera=5F?= =?UTF-8?q?demo=20readme=20about=20how=20to=20configure=20parameters=20for?= =?UTF-8?q?=20using=20all=20computation=20ability=20of=20Intel=C2=AE=20Vis?= =?UTF-8?q?ion=20Accelerator=20Design=20with=20Intel=C2=AE=20Movidius?= =?UTF-8?q?=E2=84=A2=20VPUs?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- demos/security_barrier_camera_demo/README.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/demos/security_barrier_camera_demo/README.md b/demos/security_barrier_camera_demo/README.md index dcd837cb411..99b7a6c98a0 100644 --- a/demos/security_barrier_camera_demo/README.md +++ b/demos/security_barrier_camera_demo/README.md @@ -96,6 +96,12 @@ To do inference for two video inputs using two asynchronous infer request on FPG ./security_barrier_camera_demo -i /inputVideo_0.mp4 /inputVideo_1.mp4 -m /vehicle-license-plate-detection-barrier-0106.xml -m_va /vehicle-attributes-recognition-barrier-0039.xml -m_lpr /license-plate-recognition-barrier-0001.xml -d HETERO:FPGA,CPU -d_va HETERO:FPGA,CPU -d_lpr HETERO:FPGA,CPU -nireq 2 ``` +To do inference for video inputs on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, somes optimization hints are suggested to make good use of the computation ability. Configuring the number of allocated frames (-n_iqs) to provide enough inputs for inference. Configuring the number of infer request (-nireq) to achieve asynchronous infer. Configuring the number of threads (-n_wt) for multi thread processing. 
For example, to run the sample on one Intel® Vision Accelerator Design with Intel® Movidius™ VPUs Compact R card, run the following command: +```sh +./security_barrier_camera_demo -i /inputVideo.mp4 -m /vehicle-license-plate-detection-barrier-0106.xml -m_va /vehicle-attributes-recognition-barrier-0039.xml -m_lpr /license-plate-recognition-barrier-0001.xml +-d HDDL -d_va HDDL -d_lpr HDDL -n_iqs 10 -n_wt 4 -nireq 10 +``` + > **NOTE**: For the `-tag` option (HDDL plugin only), you must specify the number of VPUs for each network in the `hddl_service.config` file located in the `/deployment_tools/inference_engine/external/hddl/config/` direcrtory using the following tags: > * `tagDetect` for the Vehicle and License Plate Detection network > * `tagAttr` for the Vehicle Attributes Recognition network From 84d66815204b61ead843cf341c028f25ad1d0847 Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Mon, 23 Sep 2019 12:36:12 +0800 Subject: [PATCH 022/927] Update face_detector.py add fd_input_reshape function to support detect more small faces in image --- .../python_demos/face_recognition_demo/face_detector.py | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/face_detector.py b/demos/python_demos/face_recognition_demo/face_detector.py index a88c7d74131..ba20843fd5b 100644 --- a/demos/python_demos/face_recognition_demo/face_detector.py +++ b/demos/python_demos/face_recognition_demo/face_detector.py @@ -46,14 +46,19 @@ def clip(self, width, height): self.position[:] = clip(self.position, min, max) self.size[:] = clip(self.size, min, max) - def __init__(self, model, confidence_threshold=0.5, roi_scale_factor=1.15): + def __init__(self, model, confidence_threshold=0.5, roi_scale_factor=1.15,input_h=0, input_w=0): super(FaceDetector, self).__init__(model) assert len(model.inputs) == 1, "Expected 1 input blob" assert len(model.outputs) == 1, "Expected 1 output blob" self.input_blob = next(iter(model.inputs)) self.output_blob = next(iter(model.outputs)) - self.input_shape = model.inputs[self.input_blob].shape + #add reshape function to detect more small faces in image + if (input_h and input_w ): + self.input_shape = [1, 3, input_h, input_w] + else: + self.input_shape = model.inputs[self.input_blob].shape + print("face detector input shape:", self.input_shape) self.output_shape = model.outputs[self.output_blob].shape assert len(self.output_shape) == 4 and \ From caa015781b1efdb3d904084d0e890f2303d9f900 Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Mon, 23 Sep 2019 12:43:58 +0800 Subject: [PATCH 023/927] Update face_recognition_demo.py add fd_input_reshape function to detect more small faces in image --- .../face_recognition_demo.py | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/face_recognition_demo.py b/demos/python_demos/face_recognition_demo/face_recognition_demo.py index 531fb6e6148..053efaca9b6 100755 --- a/demos/python_demos/face_recognition_demo/face_recognition_demo.py +++ b/demos/python_demos/face_recognition_demo/face_recognition_demo.py @@ -70,7 +70,14 @@ def build_argparser(): help="Path to the Facial Landmarks Regression model XML file") models.add_argument('-m_reid', metavar="PATH", default="", required=True, help="Path to the Face Reidentification model XML file") - + models.add_argument('-fd_iw', '--fd_input_width', default=0, type=int, + help="(optional) specified the input shape of detection model " \ + "(default: use default input shape of model). 
Both -fd_iw and -fd_ih parameters " \ + "should be specified for reshape.") + models.add_argument('-fd_ih', '--fd_input_height', default=0, type=int, + help="(optional) specified the input shape of detection model " \ + "(default: use default input shape of model). Both -fd_iw and -fd_ih parameters " \ + "should be specified for reshape.") infer = parser.add_argument_group('Inference options') infer.add_argument('-d_fd', default='CPU', choices=DEVICE_KINDS, help="(optional) Target device for the " \ @@ -122,12 +129,16 @@ def __init__(self, args): log.info("Loading models") face_detector_net = self.load_model(args.m_fd) + if args.fd_input_height and args.fd_input_width : + face_detector_net.reshape({"data": [1, 3, args.fd_input_height,args.fd_input_width]}) landmarks_net = self.load_model(args.m_lm) face_reid_net = self.load_model(args.m_reid) self.face_detector = FaceDetector(face_detector_net, confidence_threshold=args.t_fd, - roi_scale_factor=args.exp_r_fd) + roi_scale_factor=args.exp_r_fd, + input_h = args.fd_input_height, + input_w = args.fd_input_width) self.landmarks_detector = LandmarksDetector(landmarks_net) self.face_identifier = FaceIdentifier(face_reid_net, match_threshold=args.t_id) From 29ef443a98155fd26364f4981b096d3498dd9c6a Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Mon, 23 Sep 2019 15:48:13 +0800 Subject: [PATCH 024/927] This is for commit test add Pillow>=6.0.0 in requriement --- demos/python_demos/face_recognition_demo/requirements.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/demos/python_demos/face_recognition_demo/requirements.txt b/demos/python_demos/face_recognition_demo/requirements.txt index 778b007447c..220b1d38dba 100644 --- a/demos/python_demos/face_recognition_demo/requirements.txt +++ b/demos/python_demos/face_recognition_demo/requirements.txt @@ -1,3 +1,4 @@ opencv-python>=3.4.0 numpy>=1.11.0 scipy>=1.1.0 +Pillow>=6.0.0 From 776032bdcdad5e9747c944b2060b045b7d0b231a Mon Sep 17 00:00:00 2001 From: Katya Date: Mon, 23 Sep 2019 13:15:01 +0300 Subject: [PATCH 025/927] AC: PTCT dataset selection from config (#429) --- .../accuracy_checker/config/config_reader.py | 9 +- .../accuracy_checker/dataset.py | 3 +- .../quantization_model_evaluator.py | 109 ++++++++++-------- 3 files changed, 70 insertions(+), 51 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index 931b13ab907..991e9d76ff2 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -104,9 +104,8 @@ def _check_models_config(config): for model in models: if _is_requirements_missed(model, required_model_entries): raise ConfigError('Each model must specify {}'.format(', '.join(required_model_entries))) - - if list(filter(lambda entry: _is_requirements_missed(entry, required_dataset_entries), - model['datasets'])): + datasets = model['datasets'].values() if isinstance(model['datasets'], dict) else model['datasets'] + if list(filter(lambda entry: _is_requirements_missed(entry, required_dataset_entries), datasets)): raise ConfigError(required_dataset_error.format(model['name'], ', '.join(required_dataset_entries))) def _check_pipelines_config(config): @@ -478,7 +477,6 @@ def filter_pipelines(config, target_devices): filtering_mode = functors_by_mode[mode] filtering_mode(config, target_devices) - @staticmethod def convert_paths(config): def convert_launcher_paths(launcher_config): @@ -513,7 +511,8 @@ def 
convert_dataset_paths(dataset_config): for model in config['models']: for launcher_config in model['launchers']: convert_launcher_paths(launcher_config) - for dataset_config in model['datasets']: + datasets = model['datasets'].values() if isinstance(model['datasets'], dict) else model['datasets'] + for dataset_config in datasets: convert_dataset_paths(dataset_config) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index d77ac80e52c..7e9ca6df640 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -190,7 +190,8 @@ def read_annotation(annotation_file: Path): class DatasetWrapper: - def __init__(self, data_reader, annotation_reader=None): + def __init__(self, data_reader, annotation_reader=None, tag=''): + self.tag = tag self.data_reader = data_reader self.annotation_reader = annotation_reader self._batch = 1 diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index d869b3987d6..d8a3fff9727 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -32,16 +32,17 @@ class ModelEvaluator: def __init__( - self, launcher, input_feeder, adapter, dataset, preprocessor, postprocessor, metric + self, launcher, adapter, dataset_config ): self.launcher = launcher - self.input_feeder = input_feeder + self.input_feeder = None self.adapter = adapter - self.dataset = dataset - self.preprocessor = preprocessor - self.postprocessor = postprocessor - self.metric_executor = metric + self.dataset_config = dataset_config self.stat_collector = None + self.preprocessor = None + self.dataset = None + self.postprocessor = None + self.metric_executor = None self._annotations = [] self._predictions = [] @@ -50,38 +51,14 @@ def __init__( @classmethod def from_configs(cls, config): model_config = config['models'][0] - dataset_config = model_config['datasets'][0] + dataset_config = model_config['datasets'] launcher_config = model_config['launchers'][0] - dataset_name = dataset_config['name'] - data_reader_config = dataset_config.get('reader', 'opencv_imread') - data_source = dataset_config.get('data_source') - - if isinstance(data_reader_config, str): - data_reader = BaseReader.provide(data_reader_config, data_source) - elif isinstance(data_reader_config, dict): - data_reader = BaseReader.provide(data_reader_config['type'], data_source, data_reader_config) - else: - raise ConfigError('reader should be dict or string') - annotation_reader = None - dataset_meta = {} - metric_dispatcher = None - if contains_any(dataset_config, ['annotation', 'annotation_conversion']): - annotation_reader = Dataset(dataset_config) - dataset_meta = annotation_reader.metadata - dataset = DatasetWrapper(data_reader, annotation_reader) launcher = create_launcher(launcher_config, delayed_model_loading=True) config_adapter = launcher_config.get('adapter') - adapter = None if not config_adapter else create_adapter(config_adapter, None, annotation_reader) - preprocessor = PreprocessingExecutor( - dataset_config.get('preprocessing'), dataset_name, dataset_meta - ) - postprocessor = PostprocessingExecutor(dataset_config.get('postprocessing'), dataset_name, dataset_meta) - if 'metrics' in dataset_config: - metric_dispatcher = 
MetricsExecutor(dataset_config.get('metrics', []), annotation_reader) + adapter = None if not config_adapter else create_adapter(config_adapter, None, None) return cls( - launcher, None, adapter, dataset, - preprocessor, postprocessor, metric_dispatcher + launcher, adapter, dataset_config ) def _get_batch_input(self, batch_input, batch_annotation): @@ -98,8 +75,10 @@ def process_dataset_async( subset=None, num_images=None, check_progress=False, + dataset_tag='', **kwargs ): + def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, adapter, raw_outputs_callback): if self.stat_collector: self.stat_collector.process_batch(batch_predictions) @@ -116,6 +95,9 @@ def _create_subset(subset, num_images): elif num_images is not None: self.dataset.make_subset(end=num_images) + if self.dataset is None or (dataset_tag and self.dataset.tag != dataset_tag): + self.select_dataset(dataset_tag) + self.dataset.batch = self.launcher.batch self.stat_collector = None progress_reporter = None @@ -149,7 +131,7 @@ def _create_subset(subset, num_images): free_irs.append(ir) annotations, predictions = self.postprocessor.process_batch(batch_annotation, batch_predictions) - if not self.postprocessor.has_dataset_processors and self.metric_executor: + if self.metric_executor: self.metric_executor.update_metrics_on_batch(annotations, predictions) self._annotations.extend(annotations) @@ -161,13 +143,14 @@ def _create_subset(subset, num_images): time.sleep(wait_time) wait_time = max(wait_time * 2, .16) - if self.postprocessor.has_dataset_processors: - self.metric_executor.update_metrics_on_batch(self._annotations, self._predictions) - if progress_reporter: progress_reporter.finish() - return self.postprocessor.process_dataset(self._annotations, self._predictions) + def select_dataset(self, dataset_tag): + dataset_attributes = create_dataset_attributes(self.dataset_config, dataset_tag) + self.dataset, self.metric_executor, self.preprocessor, self.postprocessor = dataset_attributes + if self.dataset.annotation_reader and self.dataset.annotation_reader.metadata: + self.adapter.label_map = self.dataset.annotation_reader.metadata.get('label_map') def process_dataset( self, @@ -175,8 +158,11 @@ def process_dataset( subset=None, num_images=None, check_progress=False, + dataset_tag='', **kwargs ): + if self.dataset is None or (dataset_tag and self.dataset.tag != dataset_tag): + self.select_dataset(dataset_tag) self.dataset.batch = self.launcher.batch progress_reporter = None @@ -202,7 +188,7 @@ def process_dataset( batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta) annotations, predictions = self.postprocessor.process_batch(batch_annotation, batch_predictions, batch_meta) - if not self.postprocessor.has_dataset_processors and self.metric_executor: + if self.metric_executor: self.metric_executor.update_metrics_on_batch(annotations, predictions) self._annotations.extend(annotations) @@ -211,14 +197,9 @@ def process_dataset( if progress_reporter: progress_reporter.update(batch_id, len(batch_predictions)) - if self.postprocessor.has_dataset_processors and self.metric_executor: - self.metric_executor.update_metrics_on_batch(self._annotations, self._predictions) - if progress_reporter: progress_reporter.finish() - return self.postprocessor.process_dataset(self._annotations, self._predictions) - @staticmethod def _wait_for_any(irs): if not irs: @@ -310,7 +291,45 @@ def reset(self): self._annotations = [] self._predictions = [] self._metrics_results = [] - 
self.dataset.reset() + if self.dataset: + self.dataset.reset() def release(self): self.launcher.release() + + +def create_dataset_attributes(config, tag): + if isinstance(config, list): + dataset_config = config[0] + elif isinstance(config, dict): + dataset_config = config.get(tag) + if not dataset_config: + raise ConfigError('suitable dataset for *{}* not found'.format(tag)) + else: + raise TypeError('unknown type for config, dictionary or list must be') + + dataset_name = dataset_config['name'] + data_reader_config = dataset_config.get('reader', 'opencv_imread') + data_source = dataset_config.get('data_source') + + if isinstance(data_reader_config, str): + data_reader = BaseReader.provide(data_reader_config, data_source) + elif isinstance(data_reader_config, dict): + data_reader = BaseReader.provide(data_reader_config['type'], data_source, data_reader_config) + else: + raise ConfigError('reader should be dict or string') + annotation_reader = None + dataset_meta = {} + metric_dispatcher = None + if contains_any(dataset_config, ['annotation', 'annotation_conversion']): + annotation_reader = Dataset(dataset_config) + dataset_meta = annotation_reader.metadata + dataset = DatasetWrapper(data_reader, annotation_reader) + preprocessor = PreprocessingExecutor( + dataset_config.get('preprocessing'), dataset_name, dataset_meta + ) + postprocessor = PostprocessingExecutor(dataset_config.get('postprocessing'), dataset_name, dataset_meta) + if 'metrics' in dataset_config: + metric_dispatcher = MetricsExecutor(dataset_config.get('metrics', []), annotation_reader) + + return dataset, metric_dispatcher, preprocessor, postprocessor From ddb81086e2fbf36f092496c51bf7eeb34e7103eb Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Tue, 24 Sep 2019 09:25:13 +0800 Subject: [PATCH 026/927] Update requirements.txt remove unecessary component in requirements --- demos/python_demos/face_recognition_demo/requirements.txt | 1 - 1 file changed, 1 deletion(-) diff --git a/demos/python_demos/face_recognition_demo/requirements.txt b/demos/python_demos/face_recognition_demo/requirements.txt index 220b1d38dba..778b007447c 100644 --- a/demos/python_demos/face_recognition_demo/requirements.txt +++ b/demos/python_demos/face_recognition_demo/requirements.txt @@ -1,4 +1,3 @@ opencv-python>=3.4.0 numpy>=1.11.0 scipy>=1.1.0 -Pillow>=6.0.0 From 51bebec2dbab1f8f875a3617eb9d4c744df73134 Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Tue, 24 Sep 2019 09:28:52 +0800 Subject: [PATCH 027/927] Update face_detector.py follow up the comments. 
no need change the input_shape after reshape --- demos/python_demos/face_recognition_demo/face_detector.py | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/face_detector.py b/demos/python_demos/face_recognition_demo/face_detector.py index ba20843fd5b..1cc1c38f42b 100644 --- a/demos/python_demos/face_recognition_demo/face_detector.py +++ b/demos/python_demos/face_recognition_demo/face_detector.py @@ -53,12 +53,7 @@ def __init__(self, model, confidence_threshold=0.5, roi_scale_factor=1.15,input_ assert len(model.outputs) == 1, "Expected 1 output blob" self.input_blob = next(iter(model.inputs)) self.output_blob = next(iter(model.outputs)) - #add reshape function to detect more small faces in image - if (input_h and input_w ): - self.input_shape = [1, 3, input_h, input_w] - else: - self.input_shape = model.inputs[self.input_blob].shape - print("face detector input shape:", self.input_shape) + self.input_shape = model.inputs[self.input_blob].shape self.output_shape = model.outputs[self.output_blob].shape assert len(self.output_shape) == 4 and \ From 10aee2286dc7d5f1b3fe29a0ef21ff66bc0e9c0d Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Tue, 24 Sep 2019 09:50:56 +0800 Subject: [PATCH 028/927] Update README.md add option "fd_iw, fd_ih" for demo app, change the Readme.md accordingly --- .../python_demos/face_recognition_demo/README.md | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/README.md b/demos/python_demos/face_recognition_demo/README.md index 14f21920d2a..85b8485f3f6 100644 --- a/demos/python_demos/face_recognition_demo/README.md +++ b/demos/python_demos/face_recognition_demo/README.md @@ -76,8 +76,9 @@ python ./face_recognition_demo.py -h usage: face_recognition_demo.py [-h] [-i PATH] [-o PATH] [--no_show] [-tl] [-cw CROP_WIDTH] [-ch CROP_HEIGHT] -fg PATH - [--run_detector] -m_fd PATH -m_lm PATH -m_reid - PATH [-d_fd {CPU,GPU,FPGA,MYRIAD,HETERO}] + [--run_detector] -m_fd PATH -m_lm PATH -m_reid PATH + [-fd_iw FD_INPUT_WIDTH] [-fd_ih FD_INPUT_HEIGHT] + [-d_fd {CPU,GPU,FPGA,MYRIAD,HETERO}] [-d_lm {CPU,GPU,FPGA,MYRIAD,HETERO}] [-d_reid {CPU,GPU,FPGA,MYRIAD,HETERO}] [-l PATH] [-c PATH] [-v] [-pc] [-t_fd [0..1]] @@ -122,6 +123,16 @@ Models: -m_fd PATH Path to the Face Detection model XML file -m_lm PATH Path to the Facial Landmarks Regression model XML file -m_reid PATH Path to the Face Reidentification model XML file + -fd_iw FD_INPUT_WIDTH, --fd_input_width FD_INPUT_WIDTH + (optional) specify the input width of detection model + (default: use default input width of model). + Both -fd_iw and -fd_ih parameters should be specified + for reshape. + -fd_ih FD_INPUT_HEIGHT, --fd_input_height FD_INPUT_HEIGHT + (optional) specify the input height of detection model + (default: use default input height of model). + Both -fd_iw and -fd_ih parameters should be specified + for reshape. 
Inference options: -d_fd {CPU,GPU,FPGA,MYRIAD,HETERO} From 6a379324bb9d607e54ccbda92dcfec3da617b95c Mon Sep 17 00:00:00 2001 From: maozhong1 Date: Tue, 24 Sep 2019 10:19:31 +0800 Subject: [PATCH 029/927] Update face_recognition_demo.py change per the review comments --- .../face_recognition_demo/face_recognition_demo.py | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/face_recognition_demo.py b/demos/python_demos/face_recognition_demo/face_recognition_demo.py index 053efaca9b6..9ff67c61daf 100755 --- a/demos/python_demos/face_recognition_demo/face_recognition_demo.py +++ b/demos/python_demos/face_recognition_demo/face_recognition_demo.py @@ -71,13 +71,14 @@ def build_argparser(): models.add_argument('-m_reid', metavar="PATH", default="", required=True, help="Path to the Face Reidentification model XML file") models.add_argument('-fd_iw', '--fd_input_width', default=0, type=int, - help="(optional) specified the input shape of detection model " \ - "(default: use default input shape of model). Both -fd_iw and -fd_ih parameters " \ + help="(optional) specify the input width of detection model " \ + "(default: use default input width of model). Both -fd_iw and -fd_ih parameters " \ "should be specified for reshape.") models.add_argument('-fd_ih', '--fd_input_height', default=0, type=int, - help="(optional) specified the input shape of detection model " \ - "(default: use default input shape of model). Both -fd_iw and -fd_ih parameters " \ + help="(optional) specify the input height of detection model " \ + "(default: use default input height of model). Both -fd_iw and -fd_ih parameters " \ "should be specified for reshape.") + infer = parser.add_argument_group('Inference options') infer.add_argument('-d_fd', default='CPU', choices=DEVICE_KINDS, help="(optional) Target device for the " \ @@ -129,6 +130,11 @@ def __init__(self, args): log.info("Loading models") face_detector_net = self.load_model(args.m_fd) + + assert (args.fd_input_height and args.fd_input_width) or \ + (args.fd_input_height==0 and args.fd_input_width==0), \ + "Both -fd_iw and -fd_ih parameters should be specified for reshape" + if args.fd_input_height and args.fd_input_width : face_detector_net.reshape({"data": [1, 3, args.fd_input_height,args.fd_input_width]}) landmarks_net = self.load_model(args.m_lm) From c83cc62a38b95551b13adf5182051037e4e6ec97 Mon Sep 17 00:00:00 2001 From: Nikolay Tyukaev Date: Tue, 24 Sep 2019 17:08:03 +0300 Subject: [PATCH 030/927] doc fix --- .../accuracy_checker/annotation_converters/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md index 8e9d44f5eac..b1ca4049f66 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md @@ -66,9 +66,9 @@ Accuracy Checker supports following list of annotation converters and specific f * `labels_file` - path to file with word description of labels (synset_words). * `has_background` - allows to add background label to original labels and convert dataset for 1001 classes instead 1000 (default value is False). * `voc_detection` - converts Pascal VOC annotation for detection task to `DetectionAnnotation`. - * `imageset_file` - path to file with validation image list. 
- * `annotations_dir` - path to directory with annotation files. - * `images_dir` - path to directory with images related to devkit root (default JPEGImages). + * `imageset_file` - path to file with validation image list. + * `annotations_dir` - path to directory with annotation files. + * `images_dir` - path to directory with images related to devkit root (default JPEGImages). * `has_background` - allows convert dataset with/without adding background_label. Accepted values are True or False. (default is True) * `voc_segmentation` - converts Pascal VOC annotation for semantic segmentation task to `SegmentationAnnotation`. * `imageset_file` - path to file with validation image list. From c6b76cd3349a9bfdacaee0d4b5e318c72d7e6829 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Tue, 24 Sep 2019 18:16:32 +0300 Subject: [PATCH 031/927] FIX --- contribution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contribution.md b/contribution.md index ccac2a7145a..ade146d2430 100644 --- a/contribution.md +++ b/contribution.md @@ -48,9 +48,9 @@ Also you can add all necessary parameters for inference your model. ## Accuracy validation -To run accuracy validation, use [Accuracy Checker](./tools/accuracy_checker#testing-new-models) tool, provided with repository. It is simple if model task already supported by Accuracy Checker. You need only create Accuracy Checker configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). +Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use OpenVINO™ Inference Engine to run converted model or original framework to run original model. OpenVINO™ Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create Accuracy Checker configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models) -When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion][#model-conversion] parameters or validation configuration. +When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. 
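As an illustration only, a minimal validation configuration can look like the sketch below. It mirrors the `models` → `launchers`/`datasets` layout that the Accuracy Checker config reader works with, but every concrete name, path and metric here is a placeholder for a hypothetical classification model, not a value taken from this repository; the configuration file itself is ordinary YAML, generated here from Python just to show the structure.

```
# Illustrative sketch only: writes a hypothetical Accuracy Checker
# configuration for a classification model. All names and paths are
# placeholders; real launcher sections may also need model/weights paths
# unless they are resolved by the tool's command-line options.
import yaml

config = {
    'models': [{
        'name': 'my-model-tf',                    # hypothetical model name
        'launchers': [{
            'framework': 'dlsdk',                 # run the converted IR with Inference Engine
            'adapter': 'classification',
        }],
        'datasets': [{
            'name': 'imagenet_1000_classes',
            'data_source': 'ImageNet/val',        # placeholder path to validation images
            'annotation': 'imagenet.pickle',      # placeholder converted annotation
            'preprocessing': [
                {'type': 'resize', 'size': 256},
                {'type': 'crop', 'size': 224},
            ],
            'metrics': [{'type': 'accuracy', 'top_k': 1}],
        }],
    }],
}

with open('my-model-tf.yml', 'w') as config_file:
    yaml.safe_dump(config, config_file, default_flow_style=False)
```

Keeping the dataset and metric sections minimal at first makes it easier to compare the reported metric with your reference results before the file is placed under `tools/accuracy_checker/configs`.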
*After this step you will get accuracy validation configuration file - **.yml*** From 3a4ff93673b2d86dfa53f8e9e4eabfe84c75222d Mon Sep 17 00:00:00 2001 From: maozhong Date: Wed, 25 Sep 2019 09:02:12 +0800 Subject: [PATCH 032/927] remove unused parameter Signed-off-by: maozhong --- demos/python_demos/face_recognition_demo/face_detector.py | 2 +- .../face_recognition_demo/face_recognition_demo.py | 5 ++--- 2 files changed, 3 insertions(+), 4 deletions(-) mode change 100644 => 100755 demos/python_demos/face_recognition_demo/face_detector.py diff --git a/demos/python_demos/face_recognition_demo/face_detector.py b/demos/python_demos/face_recognition_demo/face_detector.py old mode 100644 new mode 100755 index 1cc1c38f42b..a88c7d74131 --- a/demos/python_demos/face_recognition_demo/face_detector.py +++ b/demos/python_demos/face_recognition_demo/face_detector.py @@ -46,7 +46,7 @@ def clip(self, width, height): self.position[:] = clip(self.position, min, max) self.size[:] = clip(self.size, min, max) - def __init__(self, model, confidence_threshold=0.5, roi_scale_factor=1.15,input_h=0, input_w=0): + def __init__(self, model, confidence_threshold=0.5, roi_scale_factor=1.15): super(FaceDetector, self).__init__(model) assert len(model.inputs) == 1, "Expected 1 input blob" diff --git a/demos/python_demos/face_recognition_demo/face_recognition_demo.py b/demos/python_demos/face_recognition_demo/face_recognition_demo.py index 9ff67c61daf..d1748134dc7 100755 --- a/demos/python_demos/face_recognition_demo/face_recognition_demo.py +++ b/demos/python_demos/face_recognition_demo/face_recognition_demo.py @@ -142,9 +142,8 @@ def __init__(self, args): self.face_detector = FaceDetector(face_detector_net, confidence_threshold=args.t_fd, - roi_scale_factor=args.exp_r_fd, - input_h = args.fd_input_height, - input_w = args.fd_input_width) + roi_scale_factor=args.exp_r_fd) + self.landmarks_detector = LandmarksDetector(landmarks_net) self.face_identifier = FaceIdentifier(face_reid_net, match_threshold=args.t_id) From c90280dc2b388e0b1f81f847081f6ea03a981ece Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 25 Sep 2019 08:20:16 +0300 Subject: [PATCH 033/927] refactoring 3 --- .../data_readers/data_reader.py | 125 ++++-------------- .../postprocessor/resize_segmentation_mask.py | 4 +- 2 files changed, 30 insertions(+), 99 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index d098971a6d6..801188a44f5 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -180,101 +180,50 @@ class ScipyImageReader(BaseReader): __provider__ = 'scipy_imread' @staticmethod - def _from_image(image, flatten=False, mode=None): - if mode is not None: - if mode != image.mode: - image = image.convert(mode) - elif image.mode == 'P': + def _from_image(image): + if image.mode == 'P': image = image.convert('RGBA') if 'transparency' in image.info else image.convert('RGB') - if flatten: - image = image.convert('F') - elif image.mode == '1': - image = image.convert('L') - return np.array(image) @staticmethod - def _process_2d(data, shape, mode, pal, high, low, cmax, cmin): + def _process_2d(data, shape): shape = (shape[1], shape[0]) # columns show up first - if mode == 'F': - data32 = data.astype(np.float32) - image = Image.frombytes(mode, shape, data32.tostring()) - - return image - if mode in [None, 'L', 'P']: - bytedata = 
ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) - image = Image.frombytes('L', shape, bytedata.tostring()) - if pal is not None: - image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) - # Becomes a mode='P' automagically. - elif mode == 'P': # default gray-scale - pal1 = np.arange(0, 256, 1, dtype=np.uint8)[:, np.newaxis] - pal2 = np.ones((3,), dtype=np.uint8)[np.newaxis, :] - pal = pal1 * pal2 - image.putpalette(np.asarray(pal, dtype=np.uint8).tostring()) - - return image - if mode == '1': # high input gives threshold for 1 - bytedata = (data > high) - image = Image.frombytes('1', shape, bytedata.tostring()) - - return image - - cmin = cmin or np.amin(np.ravel(data)) - cmax = cmax or np.amax(np.ravel(data)) - data = (data * 1.0 - cmin) * (high - low) / (cmax - cmin) + low - if mode == 'I': - data32 = data.astype(np.uint32) - image = Image.frombytes(mode, shape, data32.tostring()) - else: - raise ValueError("Mode is unknown or incompatible with input array shape.") + bytedata = ScipyImageReader._bytescale(data) + image = Image.frombytes('L', shape, bytedata.tostring()) return image @staticmethod - def _process_3d(data, shape, mode, channel_axis, high, low, cmin, cmax): + def _process_3d(data, shape): # if here then 3-d array with a 3 or a 4 in the shape length. # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' - if channel_axis is None: - ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) - if not np.size(ca): - raise ValueError("Could not find channel dimension.") - ca = ca[0] - else: - ca = channel_axis + ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) + if not np.size(ca): + raise ValueError("Could not find channel dimension.") + ca = ca[0] numch = shape[ca] if numch not in [3, 4]: raise ValueError("Channel axis dimension is not valid.") - bytedata = ScipyImageReader._bytescale(data, high=high, low=low, cmin=cmin, cmax=cmax) + bytedata = ScipyImageReader._bytescale(data) channel_axis_mapping = { 0: ((1, 2, 0), (shape[1], shape[0])), 1: ((0, 2, 1), (shape[2], shape[0])), 2: ((0, 1, 2), (shape[1], shape[0])) } - if ca in channel_axis_mapping: - transposition, shape = channel_axis_mapping[ca] - strdata = np.transpose(bytedata, transposition).tostring() - - if mode is None: - mode = 'RGB' if numch == 3 else 'RGBA' - - if mode not in ['RGB', 'RGBA', 'YCbCr', 'CMYK']: - raise ValueError("Mode is unknown or incompatible with input array shape.") - - if mode in ['RGB', 'YCbCr'] and numch != 3: - raise ValueError("Invalid array shape for mode.") - if mode in ['RGBA', 'CMYK'] and numch != 4: - raise ValueError("Invalid array shape for mode.") + transposition, shape = channel_axis_mapping[ca] + strdata = np.transpose(bytedata, transposition).tostring() + mode = 'RGB' if numch == 3 else 'RGBA' # Here we know data and mode is correct image = Image.frombytes(mode, shape, strdata) + return image @staticmethod - def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, channel_axis=None): + def _to_image(arr): data = np.asarray(arr) if np.iscomplexobj(data): raise ValueError("Cannot convert a complex-valued array.") @@ -283,8 +232,8 @@ def _to_image(arr, high=255, low=0, cmin=None, cmax=None, pal=None, mode=None, c if not valid: raise ValueError("'arr' does not have a suitable array shape for any mode.") if len(shape) == 2: - return ScipyImageReader._process_2d(data, shape, mode, pal, high, low, cmax, cmin) - return 
ScipyImageReader._process_3d(data, shape, mode, channel_axis, high, low, cmin, cmax) + return ScipyImageReader._process_2d(data, shape) + return ScipyImageReader._process_3d(data, shape) @staticmethod def _imread(name): @@ -294,43 +243,25 @@ def _imread(name): return ScipyImageReader._from_image(image) @staticmethod - def _bytescale(data, cmin=None, cmax=None, high=255, low=0): + def _bytescale(data): if data.dtype == np.uint8: return data - - if high > 255: - raise ValueError("`high` should be less than or equal to 255.") - if low < 0: - raise ValueError("`low` should be greater than or equal to 0.") - if high < low: - raise ValueError("`high` should be greater than or equal to `low`.") - cmin = cmin or data.min() - cmax = cmax or data.max() - + cmin = data.min() + cmax = data.max() cscale = cmax - cmin - if cscale < 0: - raise ValueError("`cmax` should be larger than `cmin`.") if cscale == 0: cscale = 1 - scale = float(high - low) / cscale - bytedata = (data - cmin) * scale + low + scale = float(255) / cscale + bytedata = (data - cmin) * scale - return (bytedata.clip(low, high) + 0.5).astype(np.uint8) + return (bytedata.clip(0, 255) + 0.5).astype(np.uint8) @staticmethod - def imresize(arr, size, interp='bilinear', mode=None): - im = ScipyImageReader._to_image(arr, mode=mode) - ts = type(size) - if np.issubdtype(ts, np.signedinteger): - percent = size / 100.0 - size = tuple((np.array(im.size) * percent).astype(int)) - elif np.issubdtype(type(size), np.floating): - size = tuple((np.array(im.size) * size).astype(int)) - else: - size = (size[1], size[0]) - func = {'nearest': 0, 'lanczos': 1, 'bilinear': 2, 'bicubic': 3, 'cubic': 3} - imnew = im.resize(size, resample=func[interp]) + def imresize(arr, size): + im = ScipyImageReader._to_image(arr) + size = (size[1], size[0]) + imnew = im.resize(size, resample=0) return ScipyImageReader._from_image(imnew) diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py index cbe2b56318c..a54136d630f 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py @@ -62,7 +62,7 @@ def resize_segmentation_mask(entry, height, width): def _(entry, height, width): entry_mask = [] for class_mask in entry.mask: - resized_mask = ScipyImageReader.imresize(class_mask, (height, width), 'nearest') + resized_mask = ScipyImageReader.imresize(class_mask, (height, width)) entry_mask.append(resized_mask) entry.mask = np.array(entry_mask) @@ -70,7 +70,7 @@ def _(entry, height, width): @resize_segmentation_mask.register(SegmentationAnnotation) def _(entry, height, width): - entry.mask = ScipyImageReader.imresize(entry.mask, (height, width), 'nearest') + entry.mask = ScipyImageReader.imresize(entry.mask, (height, width)) return entry for target in annotation: From ef82f5f9c1095c601b8fd99975f685bd6d7bf490 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 25 Sep 2019 10:46:48 +0300 Subject: [PATCH 034/927] AC: fix code style errors detected after pylint update (#450) * AC: fix code style errors detected after pylint update * ignore false positives --- .../accuracy_checker/adapters/mask_rcnn.py | 12 +++++++----- .../accuracy_checker/adapters/nlp.py | 4 +--- .../accuracy_checker/annotation_converters/brats.py | 2 +- .../annotation_converters/convert.py | 2 +- .../accuracy_checker/data_readers/data_reader.py | 12 ++++++------ 
.../accuracy_checker/launcher/onnx_launcher.py | 4 +--- .../accuracy_checker/launcher/opencv_launcher.py | 4 +--- .../metrics/semantic_segmentation.py | 2 +- .../accuracy_checker/postprocessor/filter.py | 4 ++-- .../accuracy_checker/postprocessor/nms.py | 2 +- .../accuracy_checker/preprocessor/__init__.py | 2 +- ...pece_conversion.py => color_space_conversion.py} | 13 +++++++------ .../preprocessor/geometric_transformations.py | 12 ++++++------ 13 files changed, 36 insertions(+), 39 deletions(-) rename tools/accuracy_checker/accuracy_checker/preprocessor/{color_spece_conversion.py => color_space_conversion.py} (87%) diff --git a/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py b/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py index 6861e5288dd..553acfd088d 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py @@ -16,6 +16,10 @@ import cv2 import numpy as np +try: + import pycocotools.mask as mask_util +except ImportError: + mask_util = None from .adapter import Adapter from ..config import StringField, ConfigError from ..representation import CoCocInstanceSegmentationPrediction, DetectionPrediction, ContainerPrediction @@ -27,11 +31,9 @@ class MaskRCNNAdapter(Adapter): def __init__(self, launcher_config, label_map=None, output_blob=None): super().__init__(launcher_config, label_map, output_blob) - try: - import pycocotools.mask as mask_util - self.encoder = mask_util.encode - except ImportError: + if mask_util is None: raise ImportError('pycocotools is not installed. Please install it before using mask_rcnn adapter.') + self.encoder = mask_util.encode @classmethod def parameters(cls): @@ -220,7 +222,7 @@ def segm_postprocess(self, box, raw_cls_mask, im_h, im_w, full_image_mask=False, raw_cls_mask = np.pad(raw_cls_mask, ((1, 1), (1, 1)), 'constant', constant_values=0) extended_box = self.expand_boxes(box[np.newaxis, :], raw_cls_mask.shape[0] / (raw_cls_mask.shape[0] - 2.0))[0] extended_box = extended_box.astype(int) - w, h = np.maximum(extended_box[2:] - extended_box[:2] + 1, 1) + w, h = np.maximum(extended_box[2:] - extended_box[:2] + 1, 1) # pylint: disable=E0633 x0, y0 = np.clip(extended_box[:2], a_min=0, a_max=[im_w, im_h]) x1, y1 = np.clip(extended_box[2:] + 1, a_min=0, a_max=[im_w, im_h]) diff --git a/tools/accuracy_checker/accuracy_checker/adapters/nlp.py b/tools/accuracy_checker/accuracy_checker/adapters/nlp.py index b38efcac0c0..4b817a90b86 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/nlp.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/nlp.py @@ -42,9 +42,7 @@ def parameters(cls): def configure(self): vocab_file = self.get_value_from_config('vocabulary_file') - self.encoding_vocab = { - idx: word for idx, word in enumerate(read_txt(vocab_file, encoding='utf-8')) - } + self.encoding_vocab = dict(enumerate(read_txt(vocab_file, encoding='utf-8'))) self.eos_index = self.get_value_from_config('eos_index') self.subword_option = vocab_file.name.split('.')[1] if len(vocab_file.name.split('.')) > 1 else None diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/brats.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/brats.py index 76fabff523b..46d75e63d21 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/brats.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/brats.py @@ -148,4 +148,4 @@ def convert(self, check_content=False, progress_callback=None, progress_interval 
def _get_meta(self): if not self.labels_file: return None - return {'label_map': [line for line in read_txt(self.labels_file)]} + return {'label_map': dict(enumerate(read_txt(self.labels_file)))} diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py index 4c2fc2e40ab..bc33c767bff 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py @@ -74,7 +74,7 @@ def get_pairs(pairs_list): subsample_set = OrderedSet() potential_ann_ind = np.random.choice(len(annotation), size, replace=False) - for ann_ind in potential_ann_ind: + for ann_ind in potential_ann_ind: # pylint: disable=E1133 annotation_for_subset = annotation[ann_ind] positive_pairs = annotation_for_subset.positive_pairs negative_pairs = annotation_for_subset.negative_pairs diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index f28bd2972b8..57d5d65f7ca 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -23,6 +23,10 @@ import scipy.misc import numpy as np import nibabel as nib +try: + import tensorflow as tf +except ImportError as import_error: + tf = None from ..utils import get_path, read_json, zipped_transform, set_image_metadata, contains_all from ..dependency import ClassProvider @@ -279,12 +283,8 @@ class TensorflowImageReader(BaseReader): def __init__(self, data_source, config=None, **kwargs): super().__init__(data_source, config) - try: - import tensorflow as tf - except ImportError as import_error: - raise ConfigError( - 'tf_imread reader disabled.Please, install Tensorflow before using. \n{}'.format(import_error.msg) - ) + if tf is None: + raise ImportError('tf backend for image reading requires TensorFlow. 
Please install it before usage.') tf.enable_eager_execution() diff --git a/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py index 5128bf04b8e..a41768cc729 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py @@ -67,9 +67,7 @@ def predict(self, inputs, metadata, *args, **kwargs): results = [] for infer_input in inputs: prediction_list = self._inference_session.run(self.output_names, infer_input) - results.append( - {output_name: prediction for output_name, prediction in zip(self.output_names, prediction_list)} - ) + results.append(dict(zip(self.output_names, prediction_list))) for meta_ in metadata: meta_['input_shape'] = self.inputs_info_for_meta() diff --git a/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py index 5c616f8492b..ce803bd4dff 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py @@ -144,9 +144,7 @@ def predict(self, inputs, metadata=None, **kwargs): for blob_name in self._inputs_shapes: self.network.setInput(input_blobs[blob_name].astype(np.float32), blob_name) list_prediction = self.network.forward(self.output_names) - dict_result = { - output_name: output_value for output_name, output_value in zip(self.output_names, list_prediction) - } + dict_result = dict(zip(self.output_names, list_prediction)) results.append(dict_result) if metadata is not None: diff --git a/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py b/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py index bab6877f18c..36d93115e54 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py @@ -163,7 +163,7 @@ def configure(self): self.mean = self.get_value_from_config('mean') self.median = self.get_value_from_config('median') - labels = self.dataset.labels if self.dataset.metadata else ['overall'] + labels = self.dataset.labels.values() if self.dataset.metadata else ['overall'] self.classes = len(labels) names_mean = ['mean@{}'.format(name) for name in labels] if self.mean else [] diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/filter.py b/tools/accuracy_checker/accuracy_checker/postprocessor/filter.py index e7e223c3301..2a4fda1d3e9 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/filter.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/filter.py @@ -321,8 +321,8 @@ class FilterInvalidBoxes(BaseFilter): __provider__ = 'invalid_boxes' def apply_filter(self, entry, invalid_boxes): - infinite_mask_x = np.logical_or(~np.isfinite(entry.x_mins), ~np.isfinite(entry.x_maxs)) - infinite_mask_y = np.logical_or(~np.isfinite(entry.y_mins), ~np.isfinite(entry.y_maxs)) + infinite_mask_x = np.logical_or(~np.isfinite(entry.x_mins), ~np.isfinite(entry.x_maxs)) # pylint: disable=E1130 + infinite_mask_y = np.logical_or(~np.isfinite(entry.y_mins), ~np.isfinite(entry.y_maxs)) # pylint: disable=E1130 infinite_mask = np.logical_or(infinite_mask_x, infinite_mask_y) return np.argwhere(infinite_mask).reshape(-1).tolist() diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/nms.py b/tools/accuracy_checker/accuracy_checker/postprocessor/nms.py index 
a30003466f4..4b08704f12e 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/nms.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/nms.py @@ -115,7 +115,7 @@ def nms(x1, y1, x2, y2, scores, thresh, include_boundaries=True, keep_top_k=None union = (areas[i] + areas[order[1:]] - intersection) overlap = np.divide(intersection, union, out=np.zeros_like(intersection, dtype=float), where=union != 0) - order = order[np.where(overlap <= thresh)[0] + 1] + order = order[np.where(overlap <= thresh)[0] + 1] # pylint: disable=W0143 return keep diff --git a/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py b/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py index df6fcf598fb..6b05569e881 100644 --- a/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py @@ -16,7 +16,7 @@ from .preprocessing_executor import PreprocessingExecutor from .preprocessor import Preprocessor -from .color_spece_conversion import BgrToRgb, BgrToGray, TfConvertImageDType +from .color_space_conversion import BgrToRgb, BgrToGray, TfConvertImageDType from .normalization import Normalize, Normalize3d from .geometric_transformations import ( GeometricOperationMetadata, diff --git a/tools/accuracy_checker/accuracy_checker/preprocessor/color_spece_conversion.py b/tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py similarity index 87% rename from tools/accuracy_checker/accuracy_checker/preprocessor/color_spece_conversion.py rename to tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py index 428e9e45e40..199a3253b41 100644 --- a/tools/accuracy_checker/accuracy_checker/preprocessor/color_spece_conversion.py +++ b/tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py @@ -18,6 +18,11 @@ import cv2 import numpy as np +try: + import tensorflow as tf +except ImportError as import_error: + tf = None + from .preprocessor import Preprocessor @@ -46,12 +51,8 @@ class TfConvertImageDType(Preprocessor): def __init__(self, config, name, input_shapes=None): super().__init__(config, name, input_shapes) - try: - import tensorflow as tf - except ImportError as import_error: - raise ImportError( - 'tf_convert_image_dtype disabled.Please, install Tensorflow before using. \n{}'.format(import_error.msg) - ) + if tf is None: + raise ImportError('*tf_convert_image_dtype* operation requires TensorFlow. Please install it before usage') tf.enable_eager_execution() self.converter = tf.image.convert_image_dtype self.dtype = tf.float32 diff --git a/tools/accuracy_checker/accuracy_checker/preprocessor/geometric_transformations.py b/tools/accuracy_checker/accuracy_checker/preprocessor/geometric_transformations.py index ee117594239..9e8c66f8770 100644 --- a/tools/accuracy_checker/accuracy_checker/preprocessor/geometric_transformations.py +++ b/tools/accuracy_checker/accuracy_checker/preprocessor/geometric_transformations.py @@ -20,6 +20,10 @@ import cv2 import numpy as np from PIL import Image +try: + import tensorflow as tf +except ImportError as import_error: + tf = None from ..config import ConfigError, NumberField, StringField, BoolField from ..dependency import ClassProvider @@ -185,12 +189,8 @@ class _TFResizer(_Resizer): __provider__ = 'tf' def __init__(self, interpolation): - try: - import tensorflow as tf - except ImportError as import_error: - raise ImportError( - 'tf resize disabled. Please, install Tensorflow before using. 
\n{}'.format(import_error.msg) - ) + if tf is None: + raise ImportError('tf backend for resize operation requires TensorFlow. Please install it before usage.') tf.enable_eager_execution() self.supported_interpolations = { 'BILINEAR': tf.image.ResizeMethod.BILINEAR, From 19d9e7f444ff14963914f74e0822b6d4833c75a9 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Wed, 25 Sep 2019 10:59:58 +0300 Subject: [PATCH 035/927] FIX --- contribution.md => CONTRIBUTING.md | 0 README.md | 3 ++- 2 files changed, 2 insertions(+), 1 deletion(-) rename contribution.md => CONTRIBUTING.md (100%) diff --git a/contribution.md b/CONTRIBUTING.md similarity index 100% rename from contribution.md rename to CONTRIBUTING.md diff --git a/README.md b/README.md index 28886d496f2..1b17e29842d 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,6 @@ This repository includes optimized deep learning models and a set of demos to ex * [Model Downloader](tools/downloader/README.md) and other automation tools * [Demos](demos/README.md) that demonstrate models usage with Deep Learning Deployment Toolkit * [Accuracy Checker](tools/accuracy_checker/README.md) tool for models accuracy validation -* [Model Contribution Guide](contribution.md) ## License Open Model Zoo is licensed under [Apache License Version 2.0](LICENSE). @@ -35,6 +34,8 @@ We welcome community contributions to the Open Model Zoo repository. If you have * In case of a larger feature, provide a relevant demo. * Submit a pull request at https://github.com/opencv/open_model_zoo/pulls +Additional information about contributing your model to Open Model Zoo can be found [here](CONTRIBUTING.md). + We will review your contribution and, if any additional fixes or modifications are needed, may give you feedback to guide you. When accepted, your pull request will be merged into the GitHub* repositories. Open Model Zoo is licensed under Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
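The accuracy_checker refactoring above applies the same two idioms in several files (the mask_rcnn adapter, the data readers and the preprocessors): optional third-party imports such as pycocotools and TensorFlow are moved to module level with a try/except that falls back to None and are re-checked only where the dependency is actually used, and hand-written dict comprehensions are replaced with dict(zip(...)) or dict(enumerate(...)). Below is a minimal sketch of both idioms; the class and variable names are illustrative only and do not come from the repository.

```
try:
    import pycocotools.mask as mask_util  # optional dependency, may be absent
except ImportError:
    mask_util = None


class ExampleAdapter:
    """Illustrative adapter showing the deferred check for an optional backend."""

    def __init__(self, output_names):
        # Importing this module never requires pycocotools; only constructing
        # the adapter does, mirroring the refactored mask_rcnn adapter above.
        if mask_util is None:
            raise ImportError('pycocotools is not installed. '
                              'Please install it before using this adapter.')
        self.encoder = mask_util.encode
        self.output_names = output_names

    def pack(self, prediction_list):
        # dict(zip(...)) is equivalent to the removed comprehension
        # {name: value for name, value in zip(self.output_names, prediction_list)}.
        return dict(zip(self.output_names, prediction_list))
```

Keeping the import at module level makes the dependency visible to linters and readers, while the None check preserves the original behaviour of failing only when the optional backend is actually requested.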
From 161349ea8db0ada60fa25b430431f8db2323b07b Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 25 Sep 2019 11:40:08 +0300 Subject: [PATCH 036/927] AC: fix reset annotation (#451) --- tools/accuracy_checker/accuracy_checker/dataset.py | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index 7e9ca6df640..90a92f24b68 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -50,9 +50,12 @@ def __init__(self, config_entry): self.iteration = 0 dataset_config = DatasetConfig('Dataset') dataset_config.validate(self._config) + self._images_dir = Path(self._config.get('data_source', '')) + self._load_annotation() + + def _load_annotation(self): annotation, meta = None, None use_converted_annotation = True - self._images_dir = Path(self._config.get('data_source', '')) if 'annotation' in self._config: annotation_file = Path(self._config['annotation']) if annotation_file.exists(): @@ -174,6 +177,10 @@ def _convert_annotation(self): return annotation, meta + def reset(self): + self.subset = None + self._load_annotation() + def read_annotation(annotation_file: Path): annotation_file = get_path(annotation_file) @@ -245,7 +252,7 @@ def reset(self): if self.subset: self.subset = None if self.annotation_reader: - self.annotation_reader.subset = None + self.annotation_reader.reset() @property def full_size(self): From f0122b5432b18f1f17d82d276eb84045f9552a85 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Wed, 25 Sep 2019 18:11:52 +0300 Subject: [PATCH 037/927] FIX --- CONTRIBUTING.md | 61 ++++++++++++++++++++----------------------------- 1 file changed, 25 insertions(+), 36 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index ade146d2430..de37ee06f81 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,6 +1,6 @@ -# How to contribute to OMZ +# How to contribute model to Open Model Zoo -From this document you will know how to contribute your model to OpenVINO™ Open Model Zoo. Almost any model from supported frameworks (see list below) can be added. It could be done in few simple steps. +From this document you will learn how to contribute your model to OpenVINO™ Open Model Zoo (OMZ). It could be done in few steps. 1. [Model location](#model-location) 2. [Model conversion](#model-conversion) @@ -19,17 +19,17 @@ List of supported frameworks: ## Model location -Upload your model to any Internet file storage with easy and direct access to it. It can be www.github.com, GoogleDrive\*, or any other. +Upload your model to any Internet file storage. The main requirements are that the model must either be downloadable from a direct HTTP(S) link or from Google Drive\*. *After this step you will get **links** to the model* ## Model conversion -OpenVINO™ supports models in its own format IR. Model from any supported framework can be easily converted to IR using Model Optimizer tool included in OpenVINO™ package. More information about conversion you can learn [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. Model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. 
More information about conversion you can learn [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. -> **NOTE 1**: due to OpenVINO™ paradigm, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. -> **NOTE 2**: due to OpenVINO™ paradigms, if model input is a colored image, color channel order should be `BGR`. +> **NOTE 1**: due to OpenVINO™ paradigms, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. +> **NOTE 2**: due to OpenVINO™ paradigms, if model input is a color image, color channel order should be `BGR`. *After this step you`ll get **conversion parameters** for Model Optimizer.* @@ -38,17 +38,19 @@ OpenVINO™ supports models in its own format IR. Model from any supported f ## Demo -Demo will show main idea of how work with your model. If your model solves one of the supported by Open Model Zoo task, try find appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). If appropriate demo or sample are absent, you must provide your own demo (C++ or Python). Demo's input requires next options: -``` - -i "" Optional. Path to a input file or directory (for multiple inferences). By default input must be generated randomly. - -m "" Required. Path to an .xml file with a trained model - -d "" Optional. Target device for model inference. By default CPU - -no_show Optional. Do not launch GUI window to visualize inference results. Needed for demo CI tests automation. -``` -Also you can add all necessary parameters for inference your model. + +- `-i ""` Optional. Path to a input file or directory (for multiple inferences). +- `-m ""` Required. Path to an .xml file with a trained model +- `-d ""` Optional. Target device for model inference. By default CPU +- `-no_show` Optional. Do not launch GUI window to visualize inference results. Needed for demo CI tests automation. + +Also you can add any other necessary parameters. + +*After this step you'll get **demo** for your model (if no demo was available)* ## Accuracy validation -Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use OpenVINO™ Inference Engine to run converted model or original framework to run original model. OpenVINO™ Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create Accuracy Checker configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models) +Accuracy validation can be performed by the [Accuracy Checker](./tools/accuracy_checker) tool provided with the repository. This tool can use IE to run the converted model or the original framework to run the original model. Accuracy Checker supports a lot of datasets, metrics and preprocessing options, which makes validation quite simple (if the task is supported by the tool). 
You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models) When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. @@ -88,39 +90,26 @@ You describe all files, which need to be downloaded, in this section. Each file * `sha256` sets file hash sum * `source` sets direct link to file *OR* describes file access parameters -If file is located on GoogleDrive\*, section `source` must contain: -``` - - $type: google_drive - id: -``` +If file is located on Google Drive\*, section `source` must contain: +- `$type: google_drive` +- `id` file ID on Google Drive\* + **`postprocessing`** (*optional*) Sometimes right after downloading model is not ready for conversion by Model Optimizer and some additional preprocessing needed, such as unpacking, replacing or deleting some part of file. This manipulation is described in this section. For unpacking archive: - -``` - - $type: unpack_archive - file: - format: zip | tar | gztar | bztar | xztar -``` +- `$type: unpack_archive` +- `file` archive file name +- `format` archive format (zip | tar | gztar | bztar | xztar) For replacement operations: - -``` - - $type: regex_replace - file: - pattern: - replacement: - count: -``` -where +- `$type: regex_replace` - `file` name of file where replacement must be executed - `pattern` string or regexp ([learn more](https://docs.python.org/2/library/re.html)) to find - `replacement` replacement string - `count` (*optional*) maximum number of pattern occurrences to be replaced - **`pytorch_to_onnx`** (*optional*) List of pytorch-to-onnx conversion parameters, see `model_optimizer_args` for details. @@ -155,7 +144,7 @@ Path to model's license. ### Example -In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from GoogleDrive\* as archive. +In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from Google Drive\* as archive. ``` description: >- From 14caf9556785517985e17293813072c25503f81e Mon Sep 17 00:00:00 2001 From: maozhong Date: Thu, 26 Sep 2019 10:03:45 +0800 Subject: [PATCH 038/927] revert file mode change Signed-off-by: maozhong --- demos/python_demos/face_recognition_demo/face_detector.py | 0 1 file changed, 0 insertions(+), 0 deletions(-) mode change 100755 => 100644 demos/python_demos/face_recognition_demo/face_detector.py diff --git a/demos/python_demos/face_recognition_demo/face_detector.py b/demos/python_demos/face_recognition_demo/face_detector.py old mode 100755 new mode 100644 From 2046e6f7af809a8cb0f7d5a93dd460a2abc85e28 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 26 Sep 2019 12:36:18 +0300 Subject: [PATCH 039/927] FIX --- CONTRIBUTING.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index de37ee06f81..c80beaec530 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -25,7 +25,7 @@ Upload your model to any Internet file storage. 
The main requirements are that t ## Model conversion -Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. Model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. More information about conversion you can learn [here](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ package. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get the model in IR format: `*.xml` representing the net graph and `*.bin` containing the net parameters. > **NOTE 1**: due to OpenVINO™ paradigms, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. @@ -35,14 +35,14 @@ ## Demo -Demo will show main idea of how work with your model. If your model solves one of the supported by Open Model Zoo task, try find appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). +The demo shows the main idea of model inference using IE. If your model solves one of the tasks supported by Open Model Zoo, find an appropriate demo from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or sample from [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). -If appropriate demo or sample are absent, you must provide your own demo (C++ or Python). Demo's input requires next options: +If an appropriate demo or sample is absent, you must provide your own demo (C++ or Python). Demos are required to support the following keys: -- `-i ""` Optional. Path to a input file or directory (for multiple inferences). -- `-m ""` Required. Path to an .xml file with a trained model -- `-d ""` Optional. Target device for model inference. By default CPU -- `-no_show` Optional. Do not launch GUI window to visualize inference results. Needed for demo CI tests automation. +- `-i ""` Required. Input to process. +- `-m ""` Required. Path to an .xml file with a trained model. +- `-d ""` Optional. Target device for model inference. Default is CPU. +- `-no_show` Optional. Do not visualize inference results. Also you can add any other necessary parameters. 
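The four keys required by the hunk above map directly onto a small argparse parser. A minimal, hypothetical sketch follows; the `build_argparser` helper, the defaults and the help strings are assumptions for illustration, not code taken from an existing demo.

```
from argparse import ArgumentParser


def build_argparser():
    parser = ArgumentParser(description='Example Open Model Zoo demo')
    parser.add_argument('-i', dest='input', required=True,
                        help='Required. Input to process.')
    parser.add_argument('-m', dest='model', required=True,
                        help='Required. Path to an .xml file with a trained model.')
    parser.add_argument('-d', dest='device', default='CPU',
                        help='Optional. Target device for model inference. Default is CPU.')
    parser.add_argument('-no_show', action='store_true',
                        help='Optional. Do not visualize inference results.')
    return parser


if __name__ == '__main__':
    args = build_argparser().parse_args()
    print(args.input, args.model, args.device, args.no_show)
```

A real demo would register its model-specific options on top of these common keys, as the guide allows.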
From cde1d2ddb21bd1b02cd386a8d01f73b871c96300 Mon Sep 17 00:00:00 2001 From: Leonid Beynenson Date: Thu, 26 Sep 2019 14:30:03 +0300 Subject: [PATCH 040/927] Add comment to smart classroom demo (#453) * Add a comment concerning one assert in smart classroom demo * Small update in the comment Co-Authored-By: Sergei Nosov --- demos/smart_classroom_demo/src/reid_gallery.cpp | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/demos/smart_classroom_demo/src/reid_gallery.cpp b/demos/smart_classroom_demo/src/reid_gallery.cpp index 8743d0640e5..5d21cd51215 100644 --- a/demos/smart_classroom_demo/src/reid_gallery.cpp +++ b/demos/smart_classroom_demo/src/reid_gallery.cpp @@ -102,6 +102,11 @@ EmbeddingsGallery::EmbeddingsGallery(const std::string& ids_list, cv::FileNode item = *fit; std::string label = item.name(); std::vector embeddings; + + // Please, note that the case when there are more than one image in gallery + // for a person might not work properly with the current implementation + // of the demo. + // Remove this assert by your own risk. CV_Assert(item.size() == 1); for (size_t i = 0; i < item.size(); i++) { From 00a144569f86b655d2e86bb2af41cd5f54a495af Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 26 Sep 2019 14:46:13 +0300 Subject: [PATCH 041/927] add configs for super resolution and road validation (#454) --- .../configs/road-segmentation-adas-0001.yml | 22 +++++ .../single-image-super-resolution-1032.yml | 57 +++++++++++++ .../single-image-super-resolution-1033.yml | 62 ++++++++++++++ .../text-image-super-resolution-0001.yml | 31 +++++++ .../accuracy_checker/dataset_definitions.yml | 82 ++++++++++++++++++- 5 files changed, 253 insertions(+), 1 deletion(-) create mode 100644 tools/accuracy_checker/configs/road-segmentation-adas-0001.yml create mode 100644 tools/accuracy_checker/configs/single-image-super-resolution-1032.yml create mode 100644 tools/accuracy_checker/configs/single-image-super-resolution-1033.yml create mode 100644 tools/accuracy_checker/configs/text-image-super-resolution-0001.yml diff --git a/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml b/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml new file mode 100644 index 00000000000..1e3853991ec --- /dev/null +++ b/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml @@ -0,0 +1,22 @@ +models: + - name: road-segmentation-adas-0001 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/road-segmentation-adas-0001/FP32/road-segmentation-adas-0001.xml + weights: intel/road-segmentation-adas-0001/FP32/road-segmentation-adas-0001.bin + adapter: segmentation + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: intel/road-segmentation-adas-0001/FP16/road-segmentation-adas-0001.xml + weights: intel/road-segmentation-adas-0001/FP16/road-segmentation-adas-0001.bin + adapter: segmentation + cpu_extensions: AUTO + + datasets: + - name: road_segmentation diff --git a/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml b/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml new file mode 100644 index 00000000000..87266d46445 --- /dev/null +++ b/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml @@ -0,0 +1,57 @@ +models: + - name: single-image-super-resolution-1032 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/single-image-super-resolution-1032/FP32/single-image-super-resolution-1032.xml + weights: intel/single-image-super-resolution-1032/FP32/single-image-super-resolution-1032.bin + adapter: 
+ type: super_resolution + reverse_channels: True + cpu_extensions: AUTO + inputs: + - name: "0" + type: INPUT + value: ".*lr_x4*.png" + - name: "1" + type: INPUT + value: ".*upsample_x4*.png" + + - framework: dlsdk + tags: + - FP16 + model: intel/single-image-super-resolution-1032/FP16/single-image-super-resolution-1032.xml + weights: intel/single-image-super-resolution-1032/FP16/single-image-super-resolution-1032.bin + adapter: + type: super_resolution + reverse_channels: True + cpu_extensions: AUTO + inputs: + - name: "0" + type: INPUT + value: ".*lr_x4*.png" + - name: "1" + type: INPUT + value: ".*upsample_x4*.png" + + - framework: dlsdk + tags: + - INT8 + model: intel/single-image-super-resolution-1032/INT8/single-image-super-resolution-1032.xml + weights: intel/single-image-super-resolution-1032/INT8/single-image-super-resolution-1032.bin + adapter: + type: super_resolution + reverse_channels: True + cpu_extensions: AUTO + inputs: + - name: "0" + type: INPUT + value: ".*lr_x4*.png" + - name: "1" + type: INPUT + value: ".*upsample_x4*.png" + + datasets: + - name: super_resolution_x4 diff --git a/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml b/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml new file mode 100644 index 00000000000..9871b134e99 --- /dev/null +++ b/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml @@ -0,0 +1,62 @@ +models: + - name: single-image-super-resolution-1033 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/single-image-super-resolution-1033/FP32/single-image-super-resolution-1033.xml + weights: intel/single-image-super-resolution-1033/FP32/single-image-super-resolution-1033.bin + adapter: + type: super_resolution + reverse_channels: True + cpu_extensions: AUTO + inputs: + - name: "0" + type: INPUT + value: ".*lr_x3*.png" + - name: "1" + type: INPUT + value: ".*upsample_x3*.png" + + - framework: dlsdk + tags: + - FP16 + model: intel/single-image-super-resolution-1033/FP16/single-image-super-resolution-1033.xml + weights: intel/single-image-super-resolution-1033/FP16/single-image-super-resolution-1033.bin + adapter: + type: super_resolution + reverse_channels: True + cpu_extensions: AUTO + inputs: + - name: "0" + type: INPUT + value: ".*lr_x3*.png" + - name: "1" + type: INPUT + value: ".*upsample_x3*.png" + + - framework: dlsdk + tags: + - INT8 + model: intel/single-image-super-resolution-1033/INT8/single-image-super-resolution-1033.xml + weights: intel/single-image-super-resolution-1033/INT8/single-image-super-resolution-1033.bin + adapter: + type: super_resolution + reverse_channels: True + cpu_extensions: AUTO + inputs: + - name: "0" + type: INPUT + value: ".*lr_x3*.png" + - name: "1" + type: INPUT + value: ".*upsample_x3*.png" + + datasets: + - name: super_resolution_x3 + + metrics: + - type: psnr + scale_border: 4 + presenter: print_vector diff --git a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml new file mode 100644 index 00000000000..550a80b151c --- /dev/null +++ b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml @@ -0,0 +1,31 @@ +models: + - name: text-image-super-resolution-0001 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/text-image-super-resolution-0001/FP32/text-image-super-resolution-0001.xml + weights: intel/text-image-super-resolution-0001/FP32/text-image-super-resolution-0001.bin + adapter: + type: super_resolution + cpu_extensions: 
AUTO + + - framework: dlsdk + tags: + - FP16 + model: intel/text-image-super-resolution-0001/FP16/text-image-super-resolution-0001.xml + weights: intel/text-image-super-resolution-0001/FP16/text-image-super-resolution-0001.bin + adapter: + type: super_resolution + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - INT8 + model: intel/text-image-super-resolution-0001/INT8/text-image-super-resolution-0001.xml + weights: intel/text-image-super-resolution-0001/INT8/text-image-super-resolution-0001.bin + adapter: + type: super_resolution + cpu_extensions: AUTO + diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index 650e0f92b58..ef14bf7ba11 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -305,7 +305,7 @@ datasets: - name: image_retrieval data_source: textile_crops - annotattion_conversion: + annotation_conversion: converter: image_retrieval data_dir: textile_crops gallery_annotation_file: textile_crops/gallery/gallery.txt @@ -315,3 +315,83 @@ datasets: preprocessing: - type: resize size: 224 + + - name: road_segmentation + data_source: segmentation + annotation_conversion: + converter: common_semantic_segmentation + images_dir: segmentation/images + masks_dir: segmentation/mask_road_segmentation + image_postfix: .JPEG + mask_postfix: .png + dataset_meta: segmentation/mask_road_segmentation/dataset_meta.json + annotation: road_segmentation.pickle + dataset_meta: road_segmentation.json + + preprocessing: + - type: resize + dst_height: 512 + dst_width: 896 + + postprocessing: + - type: encode_segmentation_mask + apply_to: annotation + - type: resize_segmentation_mask + apply_to: annotation + dst_height: 512 + dst_width: 896 + + metrics: + - type: mean_iou + presenter: print_vector + - type: mean_accuracy + presenter: print_vector + + - name: super_resolution_x3 + data_source: super_resolution + annotation_conversion: + converter: super_resolution + data_dir: super_resolution + lr_suffix: lr_x3 + upsample_suffix: upsample_x3 + hr_suffix: hr + two_streams: True + annotation: super_resolution_x3.pickle + + metrics: + - type: psnr + scale_border: 4 + presenter: print_vector + + - name: super_resolution_x4 + data_source: super_resolution + annotation_conversion: + converter: super_resolution + data_dir: super_resolution + lr_suffix: lr_x4 + upsample_suffix: upsample_x4 + hr_suffix: hr + two_streams: True + annotation: super_resolution_x4.pickle + + metrics: + - type: psnr + scale_border: 4 + presenter: print_vector + + - name: text_super_resolution_x3 + data_source: super_resolution + annotation_conversion: + converter: super_resolution + data_dir: super_resolution + lr_suffix: lr_x3 + hr_suffix: hr_gray + annotation: text_super_resolution_x3.pickle + + preprocessing: + - type: bgr_to_gray + + metrics: + - type: psnr + scale_border: 4 + presenter: print_vector From aa51efa48ee8113d6fdc32a44e539c2f264eca6e Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 26 Sep 2019 17:38:36 +0300 Subject: [PATCH 042/927] added configs for head pose estimation and semantic segmentation (#456) --- .../head-pose-estimation-adas-0001.yml | 47 +++++++++++++++++++ .../semantic-segmentation-adas-0001.yml | 35 ++++++++++++++ .../text-image-super-resolution-0001.yml | 1 - .../accuracy_checker/dataset_definitions.yml | 34 ++++++++++++++ 4 files changed, 116 insertions(+), 1 deletion(-) create mode 100644 tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml create mode 100644 
tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml diff --git a/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml b/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml new file mode 100644 index 00000000000..f67513faf30 --- /dev/null +++ b/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml @@ -0,0 +1,47 @@ +models: + - name: head-pose-estimation-adas-0001 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/head-pose-estimation-adas-0001/FP32/head-pose-estimation-adas-0001.xml + weights: intel/ead-pose-estimation-adas-0001/FP32/head-pose-estimation-adas-0001.bin + adapter: + type: head_pose + angle_yaw: angle_y_fc + angle_pitch: angle_p_fc + angle_roll: angle_r_fc + + - framework: dlsdk + tags: + - FP16 + model: intel/head-pose-estimation-adas-0001/FP16/head-pose-estimation-adas-0001.xml + weights: intel/head-pose-estimation-adas-0001/FP16/head-pose-estimation-adas-0001.bin + adapter: + type: head_pose + angle_yaw: angle_y_fc + angle_pitch: angle_p_fc + angle_roll: angle_r_fc + + datasets: + - name: head_pose + + metrics: + - name: yaw_mae + type: mae + presenter: print_vector + annotation_source: yaw + prediction_source: angle_yaw + + - name: pitch_mae + type: mae + presenter: print_vector + annotation_source: pitch + prediction_source: angle_pitch + + - name: roll_mae + type: mae + presenter: print_vector + annotation_source: roll + prediction_source: angle_roll diff --git a/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml b/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml new file mode 100644 index 00000000000..742339e09fa --- /dev/null +++ b/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml @@ -0,0 +1,35 @@ +models: + - name: semantic-segmentation-adas-0001 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/semantic-segmentation-adas-0001/FP32/semantic-segmentation-adas-0001.xml + weights: intel/semantic-segmentation-adas-0001/FP32/semantic-segmentation-adas-0001.bin + adapter: segmentation + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: intel/semantic-segmentation-adas-0001/FP16/semantic-segmentation-adas-0001.xml + weights: intel/semantic-segmentation-adas-0001/FP16/semantic-segmentation-adas-0001.bin + adapter: segmentation + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - INT8 + model: intel/semantic-segmentation-adas-0001/INT8/semantic-segmentation-adas-0001.xml + weights: intel/semantic-segmentation-adas-0001/INT8/semantic-segmentation-adas-0001.bin + adapter: segmentation + cpu_extensions: AUTO + + datasets: + - name: semantic_segmentation_adas + + metrics: + - type: mean_iou + use_argmax: False + presenter: print_vector diff --git a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml index 550a80b151c..37fabdffbca 100644 --- a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml +++ b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml @@ -28,4 +28,3 @@ models: adapter: type: super_resolution cpu_extensions: AUTO - diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index ef14bf7ba11..e6d692abc4e 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -316,6 +316,31 @@ datasets: - type: resize size: 224 + - name: semantic_segmentation_adas + data_source: segmentation + 
annotation_conversion: + converter: common_semantic_segmentation + images_dir: segmentation/images + masks_dir: segmentation/mask_segmentation_adas + image_postfix: .JPEG + mask_postfix: .png + dataset_meta: segmentation/mask_segmentation_adas/dataset_meta.json + annotation: semantic_segmentation_adas.pickle + dataset_meta: semantic_segmentation_adas.json + + preprocessing: + - type: resize + dst_height: 1024 + dst_width: 2048 + + postprocessing: + - type: encode_segmentation_mask + apply_to: annotation + - type: resize_segmentation_mask + apply_to: annotation + dst_height: 1024 + dst_width: 2048 + - name: road_segmentation data_source: segmentation annotation_conversion: @@ -395,3 +420,12 @@ datasets: - type: psnr scale_border: 4 presenter: print_vector + + - name: head_pose + data_source: WIDER_val/images/16--Award_Ceremony + annotation: head_pose.pickle + + preprocessing: + - type: crop_rect + - type: resize + size: 60 From 4c6452912105dfc25f6636718bd0a035e7902ccf Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 27 Sep 2019 17:14:35 +0300 Subject: [PATCH 043/927] AC: add configs (#425) * fix definition for existed configs * add new --- tools/accuracy_checker/configs/Sphereface.yml | 30 ++++++++++ tools/accuracy_checker/configs/ctpn.yml | 58 +++++++++++++++++++ ...face-recognition-mobilefacenet-arcface.yml | 12 ---- .../face-recognition-resnet100-arcface.yml | 12 ---- .../face-recognition-resnet34-arcface.yml | 12 ---- .../face-recognition-resnet50-arcface.yml | 12 ---- .../face-reidentification-retail-0095.yml | 5 -- .../configs/facenet-20180408-102900.yml | 29 ++++++++++ .../configs/image-retrieval-0001.yml | 29 ++++++++++ .../landmarks-regression-retail-0009.yml | 5 -- .../person-reidentification-retail-0031.yml | 4 -- .../person-reidentification-retail-0076.yml | 4 -- .../person-reidentification-retail-0079.yml | 6 +- .../configs/text-detection-0003.yml | 5 -- .../configs/text-detection-0004.yml | 5 -- .../configs/text-recognition-0012.yml | 4 -- .../accuracy_checker/dataset_definitions.yml | 38 ++++++++++++ 17 files changed, 185 insertions(+), 85 deletions(-) create mode 100644 tools/accuracy_checker/configs/Sphereface.yml create mode 100644 tools/accuracy_checker/configs/ctpn.yml create mode 100644 tools/accuracy_checker/configs/facenet-20180408-102900.yml create mode 100644 tools/accuracy_checker/configs/image-retrieval-0001.yml diff --git a/tools/accuracy_checker/configs/Sphereface.yml b/tools/accuracy_checker/configs/Sphereface.yml new file mode 100644 index 00000000000..3489da6142f --- /dev/null +++ b/tools/accuracy_checker/configs/Sphereface.yml @@ -0,0 +1,30 @@ +models: + - name: Sphereface + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/Sphereface/FP32/Sphereface.xml + weights: public/Sphereface/FP32/Sphereface.bin + adapter: reid + + - framework: dlsdk + tags: + - FP16 + model: public/Sphereface/FP16/Sphereface.xml + weights: public/Sphereface/FP16/Sphereface.bin + adapter: reid + + datasets: + - name: lfw + + preprocessing: + - type: point_alignment + size: 400 + - type: resize + dst_height: 112 + dst_width: 96 + + metrics: + - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/ctpn.yml b/tools/accuracy_checker/configs/ctpn.yml new file mode 100644 index 00000000000..e34fc18032e --- /dev/null +++ b/tools/accuracy_checker/configs/ctpn.yml @@ -0,0 +1,58 @@ +models: + - name: ctpn + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/ctpn/FP32/ctpn.bin + weights: public/ctpn/FP32/ctpn.bin + adapter: + type: 
ctpn_text_detection + cls_prob_out: 'Reshape_2/Transpose' + bbox_pred_out: 'rpn_bbox_pred/Reshape_1/Transpose' + allow_reshape_input: True + + - framework: dlsdk + tags: + - FP16 + model: public/ctpn/FP16/ctpn.bin + weights: public/ctpn/FP16/ctpn.bin + adapter: + type: ctpn_text_detection + cls_prob_out: 'Reshape_2/Transpose' + bbox_pred_out: 'rpn_bbox_pred/Reshape_1/Transpose' + allow_reshape_input: True + datasets: + - name: ICDAR2015 + + preprocessing: + - type: resize + dst_width: 1200 + dst_height: 600 + aspect_ratio_scale: ctpn_keep_aspect_ratio + - type: resize + dst_width: 600 + dst_height: 600 + + postprocessing: + - type: cast_to_int + round_policy: lower + + metrics: + - type: focused_text_precision + name: precision + ignore_difficult: True + area_recall_constrain: 0.8 + area_precision_constrain: 0.4 + + - type: focused_text_recall + name: recall + ignore_difficult: True + area_recall_constrain: 0.8 + area_precision_constrain: 0.4 + + - type: focused_text_hmean + name: hmean + ignore_difficult: True + area_recall_constrain: 0.8 + area_precision_constrain: 0.4 diff --git a/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml b/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml index fada39828a4..b0694240b50 100644 --- a/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml @@ -12,12 +12,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: bgr_to_rgb @@ -48,12 +42,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: point_alignment diff --git a/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml index 0c9f210eec0..1a18ca58931 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml @@ -13,12 +13,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: bgr_to_rgb @@ -42,12 +36,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: point_alignment diff --git a/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml index c359fb11890..9a4c5c2640f 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml @@ -13,12 +13,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: bgr_to_rgb @@ -49,12 +43,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - 
converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: point_alignment diff --git a/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml index 030e3970adc..0cfd32c2426 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml @@ -12,12 +12,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: bgr_to_rgb @@ -49,12 +43,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt - annotation: lfw.pickle preprocessing: - type: point_alignment diff --git a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml index 6442f395376..b3a50daefb7 100644 --- a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml +++ b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml @@ -26,11 +26,6 @@ models: datasets: - name: lfw - data_source: LFW/lfw - annotation_conversion: - converter: lfw - pairs_file: LFW/annotation/pairs.txt - landmarks_file: LFW/annotation/lfw_landmark.txt preprocessing: - type: point_alignment diff --git a/tools/accuracy_checker/configs/facenet-20180408-102900.yml b/tools/accuracy_checker/configs/facenet-20180408-102900.yml new file mode 100644 index 00000000000..2f4e8964423 --- /dev/null +++ b/tools/accuracy_checker/configs/facenet-20180408-102900.yml @@ -0,0 +1,29 @@ +models: + - name: facenet-20180408-102900 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/facenet-20180408-102900/FP32/facenet-20180408-102900.xml + weights: public/facenet-20180408-102900/FP32/facenet-20180408-102900.bin + adapter: reid + + - framework: dlsdk + tags: + - FP16 + model: public/facenet-20180408-102900/FP16/facenet-20180408-102900.xml + weights: public/facenet-20180408-102900/FP16/facenet-20180408-102900.bin + adapter: reid + + datasets: + - name: lfw + + preprocessing: + - type: point_alignment + size: 400 + - type: resize + size: 160 + + metrics: + - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/image-retrieval-0001.yml b/tools/accuracy_checker/configs/image-retrieval-0001.yml new file mode 100644 index 00000000000..cf1e34b280c --- /dev/null +++ b/tools/accuracy_checker/configs/image-retrieval-0001.yml @@ -0,0 +1,29 @@ +models: + - name: image-retrieval-0001 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/image-retrieval-0001/FP32/image-retrieval-0001.xml + weights: intel/image-retrieval-0001/FP32/image-retrieval-0001.bin + adapter: reid + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: intel/image-retrieval-0001/FP16/image-retrieval-0001.xml + weights: intel/image-retrieval-0001/FP16/image-retrieval-0001.bin + adapter: reid + cpu_extensions: AUTO + + datasets: + - name: image_retrieval + + metrics: + - name: rank@1 + type: cmc + top_k: 1 + + - type: reid_map diff --git a/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml b/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml index 
e67924e8dd4..5c3e574dd24 100644 --- a/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml +++ b/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml @@ -18,11 +18,6 @@ models: datasets: - name: vgg2face - data_source: VGGFaces2/test - annotation_conversion: - converter: vgg_face - landmarks_csv_file: VGGFaces2/bb_landmark/loose_landmark_test.csv - bbox_csv_file: VGGFaces2/bb_landmark/loose_bb_test.csv preprocessing: - type: crop_rect diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml index c9c6f6b0e42..0ca3246e03f 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml @@ -27,10 +27,6 @@ models: datasets: - name: market1501 reader: pillow_imread - data_source: Market-1501-v15.09.15 - annotation_conversion: - converter: market1501_reid - data_dir: Market-1501-v15.09.15 preprocessing: - type: bgr_to_rgb diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml index 607b1f26cd6..5eef3ffee0d 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml @@ -26,10 +26,6 @@ models: datasets: - name: market1501 - data_source: Market-1501-v15.09.15 - annotation_conversion: - converter: market1501_reid - data_dir: Market-1501-v15.09.15 preprocessing: - type: resize diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml index 3fd8bfd48cf..77d4c7ee5e4 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml @@ -20,16 +20,12 @@ models: tags: - INT8 device: CPU - model: intel/person-reidentification-retail-0079/dldt/INT8/person-reidentification-retail-0079.xml + model: intel/person-reidentification-retail-0079/dldt/INT8/person-reidentification-retail-0079.xml weights: intel/person-reidentification-retail-0079/0079/dldt/INT8/person-reidentification-retail-0079.bin adapter: reid datasets: - name: market1501 - data_source: Market-1501-v15.09.15 - annotation_conversion: - converter: market1501_reid - data_dir: Market-1501-v15.09.15 preprocessing: - type: resize diff --git a/tools/accuracy_checker/configs/text-detection-0003.yml b/tools/accuracy_checker/configs/text-detection-0003.yml index 00e7bc56a55..6fd09b621a0 100644 --- a/tools/accuracy_checker/configs/text-detection-0003.yml +++ b/tools/accuracy_checker/configs/text-detection-0003.yml @@ -48,11 +48,6 @@ models: datasets: - name: ICDAR2015 - data_source: ICDAR15_DET_validation/ch4_test_images - annotation_conversion: - converter: icdar_detection - data_dir: ICDAR15_DET_validation/gt - preprocessing: - type: resize dst_width: 1280 diff --git a/tools/accuracy_checker/configs/text-detection-0004.yml b/tools/accuracy_checker/configs/text-detection-0004.yml index a0a8686bf2a..75dee878114 100644 --- a/tools/accuracy_checker/configs/text-detection-0004.yml +++ b/tools/accuracy_checker/configs/text-detection-0004.yml @@ -50,11 +50,6 @@ models: datasets: - name: ICDAR2015 - data_source: ICDAR15_DET_validation/ch4_test_images - annotation_conversion: - converter: icdar_detection - data_dir: ICDAR15_DET_validation/gt - 
preprocessing: - type: resize dst_width: 1280 diff --git a/tools/accuracy_checker/configs/text-recognition-0012.yml b/tools/accuracy_checker/configs/text-recognition-0012.yml index 6bc5d50ac73..cc95667634f 100644 --- a/tools/accuracy_checker/configs/text-recognition-0012.yml +++ b/tools/accuracy_checker/configs/text-recognition-0012.yml @@ -26,10 +26,6 @@ models: datasets: - name: ICDAR2013 - data_source: ICDAR13_REC_validation/Challenge2_Test_Task3_Images - annotation_conversion: - converter: icdar13_recognition - annotation_file: ICDAR13_REC_validation/gt/gt.txt.fixed.alfanumeric preprocessing: - type: bgr_to_gray diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index e6d692abc4e..12f9b6c9bc1 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -315,6 +315,44 @@ datasets: preprocessing: - type: resize size: 224 + + - name: lfw + data_source: LFW/lfw + annotation_conversion: + converter: lfw + pairs_file: LFW/annotation/pairs.txt + landmarks_file: LFW/annotation/lfw_landmark.txt + annotation: lfw.pickle + + - name: ICDAR2015 + data_source: ICDAR15_DET/ch4_test_images + annotation_conversion: + converter: icdar_detection + data_dir: ICDAR15_DET/gt + annotation: icdar15_detection.pickle + + - name: ICDAR2013 + annotation_conversion: + converter: icdar13_recognition + annotation_file: ICDAR13_REC/gt/gt.txt.fixed.alfanumeric + annotation: icdar13_recognition.pickle + dataset_meta: icdar13_recognition.json + + - name: market1501 + data_source: Market-1501-v15.09.15 + annotation_conversion: + converter: market1501_reid + data_dir: Market-1501-v15.09.15 + annotation: market1501_reid.pickle + + - name: vgg2face + data_source: VGGFaces2/test + annotation_conversion: + converter: vgg_face + landmarks_csv_file: VGGFaces2/bb_landmark/loose_landmark_test.csv + bbox_csv_file: VGGFaces2/bb_landmark/loose_bb_test.csv + annotation: vggfaces2.pickle + dataset_meta: vgfces2.json - name: semantic_segmentation_adas data_source: segmentation From 366330d0258fcd5915154557c0eddae4f5656647 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 27 Sep 2019 17:33:22 +0300 Subject: [PATCH 044/927] AC: added config for gaze estimation (#461) --- .../configs/gaze-estimation-adas-0002.yml | 65 +++++++++++++++++++ .../accuracy_checker/dataset_definitions.yml | 13 ++++ 2 files changed, 78 insertions(+) create mode 100644 tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml diff --git a/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml b/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml new file mode 100644 index 00000000000..ab8c6b3f18e --- /dev/null +++ b/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml @@ -0,0 +1,65 @@ +models: + - name: gaze-estimation-adas-0002 + + launchers: + + - framework: dlsdk + tags: + - FP32 + model: intel/gaze-estimation-adas-0002/FP32/gaze-estimation-adas-0002.xml + weights: intel/gaze-estimation-adas-0002/FP32/gaze-estimation-adas-0002.bin + inputs: + - name: left_eye_image + type: INPUT + value: ".*_left.png" + - name: right_eye_image + type: INPUT + value: ".*_right.png" + - name: 'head_pose_angles' + type: INPUT + value: ".*.json" + adapter: gaze_estimation + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: intel/gaze-estimation-adas-0002/FP16/gaze-estimation-adas-0002.xml + weights: intel/gaze-estimation-adas-0002/FP16/gaze-estimation-adas-0002.bin + inputs: + - name: left_eye_image + type: INPUT + value: 
".*_left.png" + - name: right_eye_image + type: INPUT + value: ".*_right.png" + - name: 'head_pose_angles' + type: INPUT + value: ".*.json" + adapter: gaze_estimation + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - INT8 + model: intel/gaze-estimation-adas-0002/INT8/gaze-estimation-adas-0002.xml + weights: intel/gaze-estimation-adas-0002/INT8/gaze-estimation-adas-0002.bin + inputs: + - name: left_eye_image + type: INPUT + value: ".*_left.png" + - name: right_eye_image + type: INPUT + value: ".*_right.png" + - name: 'head_pose_angles' + type: INPUT + value: ".*.json" + adapter: gaze_estimation + cpu_extensions: AUTO + + datasets: + - name: gaze_estimation_dataset + + metrics: + - type: angle_error + presenter: print_vector diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index 12f9b6c9bc1..db2ee84ca0c 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -467,3 +467,16 @@ datasets: - type: crop_rect - type: resize size: 60 + + - name: gaze_estimation_dataset + + data_source: gaze-estimation + annotation: gaze_estimation.pickle + + reader: + type: combine_reader + scheme: + ".*.png": opencv_imread + ".*.json": + type: json_reader + key: head_pose_angles From d517f66c4a65ad83098c1153b682960c1d658ce9 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 27 Sep 2019 19:34:36 +0300 Subject: [PATCH 045/927] AC: added config for handwritten-score-recognition (#462) --- .../handwritten-score-recognition-0003.yml | 31 +++++++++++++++++++ .../accuracy_checker/dataset_definitions.yml | 5 +++ 2 files changed, 36 insertions(+) create mode 100644 tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml diff --git a/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml b/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml new file mode 100644 index 00000000000..acbc6e4b685 --- /dev/null +++ b/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml @@ -0,0 +1,31 @@ +models: + - name: handwritten-score-recognition-0003 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: intel/handwritten-score-recognition-0003/FP32/handwritten-score-recognition-0003.xml + weights: intel/handwritten-score-recognition-0003/FP32/handwritten-score-recognition-0003.bin + adapter: beam_search_decoder + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: intel/handwritten-score-recognition-0003/FP16/handwritten-score-recognition-0003.xml + weights: intel/handwritten-score-recognition-0003/FP16/handwritten-score-recognition-0003.bin + adapter: beam_search_decoder + cpu_extensions: AUTO + + datasets: + - name: handwritten_score_recognition + + preprocessing: + - type: bgr_to_gray + - type: resize + dst_width: 64 + dst_height: 32 + + metrics: + - type: character_recognition_accuracy diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index db2ee84ca0c..a42ecbfa38a 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -480,3 +480,8 @@ datasets: ".*.json": type: json_reader key: head_pose_angles + + - name: handwritten_score_recognition + data_source: ILSVRC2012_img_val + annotation: hadwritten_score_recognition.pickle + dataset_meta: hadwritten_score_recognition.json From 48766ddf9801ccab2ceb9c32f4d0269be89a2b01 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Mon, 30 Sep 2019 11:40:01 +0300 Subject: [PATCH 
046/927] Conversion to ONNX script is unified --- models/public/googlenet-v3-pytorch/model.yml | 2 +- models/public/mobilenet-v2-pytorch/model.yml | 2 +- models/public/resnet-50-pytorch/model.yml | 2 +- tools/downloader/common.py | 14 ++++++------ tools/downloader/converter.py | 24 ++++++++++++-------- 5 files changed, 25 insertions(+), 19 deletions(-) diff --git a/models/public/googlenet-v3-pytorch/model.yml b/models/public/googlenet-v3-pytorch/model.yml index e0c98fb22e1..a369c327dad 100644 --- a/models/public/googlenet-v3-pytorch/model.yml +++ b/models/public/googlenet-v3-pytorch/model.yml @@ -30,7 +30,7 @@ files: size: 108857766 source: https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth framework: pytorch -pytorch_to_onnx: +conversion_to_onnx_args: - --model-name=inception_v3 - --weights=$dl_dir/inception_v3_google-1a9a5a14.pth - --from-torchvision diff --git a/models/public/mobilenet-v2-pytorch/model.yml b/models/public/mobilenet-v2-pytorch/model.yml index 591943f0eca..d6c5ba7ac7a 100644 --- a/models/public/mobilenet-v2-pytorch/model.yml +++ b/models/public/mobilenet-v2-pytorch/model.yml @@ -36,7 +36,7 @@ files: $type: google_drive id: 1jlto6HRVD3ipNkAl1lNhDbkBp7HylaqR framework: pytorch -pytorch_to_onnx: +conversion_to_onnx_args: - --model-name=MobileNetV2 - --model-path=$dl_dir - --weights=$dl_dir/mobilenet-v2.pth diff --git a/models/public/resnet-50-pytorch/model.yml b/models/public/resnet-50-pytorch/model.yml index 1c8bc24c6a4..7c2f3271f40 100644 --- a/models/public/resnet-50-pytorch/model.yml +++ b/models/public/resnet-50-pytorch/model.yml @@ -30,7 +30,7 @@ files: size: 102502400 source: https://download.pytorch.org/models/resnet50-19c8e357.pth framework: pytorch -pytorch_to_onnx: +conversion_to_onnx_args: - --model-name=resnet50 - --weights=$dl_dir/resnet50-19c8e357.pth - --from-torchvision diff --git a/tools/downloader/common.py b/tools/downloader/common.py index 11aae7ec4bf..073fd468f24 100644 --- a/tools/downloader/common.py +++ b/tools/downloader/common.py @@ -292,7 +292,7 @@ def apply(self, reporter, output_dir): class Model: def __init__(self, name, subdirectory, files, postprocessing, mo_args, framework, - description, license_url, precisions, task_type, pytorch_to_onnx_args): + description, license_url, precisions, task_type, conversion_to_onnx_args): self.name = name self.subdirectory = subdirectory self.files = files @@ -303,7 +303,7 @@ def __init__(self, name, subdirectory, files, postprocessing, mo_args, framework self.license_url = license_url self.precisions = precisions self.task_type = task_type - self.pytorch_to_onnx_args = pytorch_to_onnx_args + self.conversion_to_onnx_args = conversion_to_onnx_args @classmethod def deserialize(cls, model, name, subdirectory): @@ -328,10 +328,10 @@ def deserialize(cls, model, name, subdirectory): with deserialization_context('"postprocessing" #{}'.format(i)): postprocessing.append(Postproc.deserialize(postproc)) - pytorch_to_onnx_args = None - if model.get('pytorch_to_onnx', None): - pytorch_to_onnx_args = [validate_string('"pytorch_to_onnx" #{}'.format(i), arg) - for i, arg in enumerate(model['pytorch_to_onnx'])] + conversion_to_onnx_args = None + if model.get('conversion_to_onnx_args', None): + conversion_to_onnx_args = [validate_string('"conversion_to_onnx_args" #{}'.format(i), arg) + for i, arg in enumerate(model['conversion_to_onnx_args'])] framework = validate_string_enum('"framework"', model['framework'], KNOWN_FRAMEWORKS) @@ -372,7 +372,7 @@ def deserialize(cls, model, name, subdirectory): task_type = 
validate_string_enum('"task_type"', model['task_type'], KNOWN_TASK_TYPES) return cls(name, subdirectory, files, postprocessing, mo_args, framework, - description, license_url, precisions, task_type, pytorch_to_onnx_args) + description, license_url, precisions, task_type, conversion_to_onnx_args) def load_models(args): models = [] diff --git a/tools/downloader/converter.py b/tools/downloader/converter.py index 0e2f835ebff..dde133f168d 100755 --- a/tools/downloader/converter.py +++ b/tools/downloader/converter.py @@ -56,19 +56,25 @@ def prefixed_subprocess(prefix, args): sys.stdout.write(prefix + ': ' + line) return p.wait() == 0 + def convert_to_onnx(model, output_dir, args, stdout_prefix): - pytorch_converter = Path(__file__).absolute().parent / 'pytorch_to_onnx.py' - prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX', - '(DRY RUN) ' if args.dry_run else '', model.name) - - pytorch_to_onnx_args = [string.Template(arg).substitute(conv_dir=output_dir / model.subdirectory, - dl_dir=args.download_dir / model.subdirectory) - for arg in model.pytorch_to_onnx_args] - cmd = [str(args.python), str(pytorch_converter), *pytorch_to_onnx_args] + prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX from {}', + '(DRY RUN) ' if args.dry_run else '', model.name, model.framework) + + conversion_to_onnx_args = [string.Template(arg).substitute(conv_dir=output_dir / model.subdirectory, + dl_dir=args.download_dir / model.subdirectory) + for arg in model.conversion_to_onnx_args] + if model.framework=='pytorch': + converter = Path(__file__).absolute().parent / 'pytorch_to_onnx.py' + else: + prefixed_printf(stdout_prefix, 'Conversion to ONNX not supported for {} framework', model.framework) + + cmd = [str(args.python), str(converter), *conversion_to_onnx_args] prefixed_printf(stdout_prefix, 'Conversion to ONNX command: {}', ' '.join(map(quote_arg, cmd))) return True if args.dry_run else prefixed_subprocess(stdout_prefix, cmd) + def num_jobs_arg(value_str): if value_str == 'auto': return os.cpu_count() or 1 @@ -149,7 +155,7 @@ def convert(model, do_prefix_stdout=True): model_format = model.framework - if model.pytorch_to_onnx_args: + if model.conversion_to_onnx_args: if not convert_to_onnx(model, output_dir, args, stdout_prefix): return False model_format = 'onnx' From 7a93aaf5f956cf3220b9a7623730552769e16027 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Mon, 30 Sep 2019 15:21:04 +0300 Subject: [PATCH 047/927] Added dict for converters --- tools/downloader/converter.py | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/tools/downloader/converter.py b/tools/downloader/converter.py index dde133f168d..3849c37e684 100755 --- a/tools/downloader/converter.py +++ b/tools/downloader/converter.py @@ -58,16 +58,19 @@ def prefixed_subprocess(prefix, args): def convert_to_onnx(model, output_dir, args, stdout_prefix): + converters = { + 'pytorch': Path(__file__).absolute().parent / 'pytorch_to_onnx.py' + } prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX from {}', '(DRY RUN) ' if args.dry_run else '', model.name, model.framework) conversion_to_onnx_args = [string.Template(arg).substitute(conv_dir=output_dir / model.subdirectory, dl_dir=args.download_dir / model.subdirectory) for arg in model.conversion_to_onnx_args] - if model.framework=='pytorch': - converter = Path(__file__).absolute().parent / 'pytorch_to_onnx.py' - else: + converter = converters.get(model.framework) + if converter is None: prefixed_printf(stdout_prefix, 'Conversion to ONNX not supported 
for {} framework', model.framework) + return False cmd = [str(args.python), str(converter), *conversion_to_onnx_args] prefixed_printf(stdout_prefix, 'Conversion to ONNX command: {}', ' '.join(map(quote_arg, cmd))) From ade2e915f1b64c090919ca76cf57d18399b2f243 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Mon, 30 Sep 2019 15:25:45 +0300 Subject: [PATCH 048/927] refacotring 4 --- .../data_readers/data_reader.py | 87 +------------------ .../postprocessor/resize_segmentation_mask.py | 55 +++++++++++- 2 files changed, 55 insertions(+), 87 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index 801188a44f5..50014946475 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -179,95 +179,14 @@ def read(self, data_id): class ScipyImageReader(BaseReader): __provider__ = 'scipy_imread' - @staticmethod - def _from_image(image): + def read(self, data_id): + # reimplementation scipy.misc.imread + image = Image.open(str(get_path(self.data_source / data_id))) if image.mode == 'P': image = image.convert('RGBA') if 'transparency' in image.info else image.convert('RGB') return np.array(image) - @staticmethod - def _process_2d(data, shape): - shape = (shape[1], shape[0]) # columns show up first - bytedata = ScipyImageReader._bytescale(data) - image = Image.frombytes('L', shape, bytedata.tostring()) - - return image - - @staticmethod - def _process_3d(data, shape): - # if here then 3-d array with a 3 or a 4 in the shape length. - # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' - ca = np.flatnonzero(np.asarray(shape) == 3) if 3 in shape else np.flatnonzero(np.asarray(shape) == 4) - if not np.size(ca): - raise ValueError("Could not find channel dimension.") - ca = ca[0] - - numch = shape[ca] - if numch not in [3, 4]: - raise ValueError("Channel axis dimension is not valid.") - - bytedata = ScipyImageReader._bytescale(data) - channel_axis_mapping = { - 0: ((1, 2, 0), (shape[1], shape[0])), - 1: ((0, 2, 1), (shape[2], shape[0])), - 2: ((0, 1, 2), (shape[1], shape[0])) - } - transposition, shape = channel_axis_mapping[ca] - strdata = np.transpose(bytedata, transposition).tostring() - - mode = 'RGB' if numch == 3 else 'RGBA' - # Here we know data and mode is correct - image = Image.frombytes(mode, shape, strdata) - - return image - - @staticmethod - def _to_image(arr): - data = np.asarray(arr) - if np.iscomplexobj(data): - raise ValueError("Cannot convert a complex-valued array.") - shape = list(data.shape) - valid = len(shape) == 2 or ((len(shape) == 3) and ((3 in shape) or (4 in shape))) - if not valid: - raise ValueError("'arr' does not have a suitable array shape for any mode.") - if len(shape) == 2: - return ScipyImageReader._process_2d(data, shape) - return ScipyImageReader._process_3d(data, shape) - - @staticmethod - def _imread(name): - # reimplementation scipy.misc.imread - image = Image.open(name) - - return ScipyImageReader._from_image(image) - - @staticmethod - def _bytescale(data): - if data.dtype == np.uint8: - return data - cmin = data.min() - cmax = data.max() - cscale = cmax - cmin - if cscale == 0: - cscale = 1 - - scale = float(255) / cscale - bytedata = (data - cmin) * scale - - return (bytedata.clip(0, 255) + 0.5).astype(np.uint8) - - @staticmethod - def imresize(arr, size): - im = ScipyImageReader._to_image(arr) - size = (size[1], size[0]) - imnew = im.resize(size, 
resample=0) - - return ScipyImageReader._from_image(imnew) - - def read(self, data_id): - return ScipyImageReader._imread(str(get_path(self.data_source / data_id))) - class OpenCVFrameReader(BaseReader): __provider__ = 'opencv_capture' diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py index a54136d630f..2e9a528e491 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py @@ -16,12 +16,12 @@ from functools import singledispatch import numpy as np +from PIL import Image from ..config import NumberField from ..utils import get_size_from_config from .postprocessor import PostprocessorWithSpecificTargets from ..representation import SegmentationPrediction, SegmentationAnnotation -from ..data_readers import ScipyImageReader class ResizeSegmentationMask(PostprocessorWithSpecificTargets): @@ -62,7 +62,7 @@ def resize_segmentation_mask(entry, height, width): def _(entry, height, width): entry_mask = [] for class_mask in entry.mask: - resized_mask = ScipyImageReader.imresize(class_mask, (height, width)) + resized_mask = self.resize(class_mask, width, height) entry_mask.append(resized_mask) entry.mask = np.array(entry_mask) @@ -70,7 +70,7 @@ def _(entry, height, width): @resize_segmentation_mask.register(SegmentationAnnotation) def _(entry, height, width): - entry.mask = ScipyImageReader.imresize(entry.mask, (height, width)) + entry.mask = self.resize(entry.mask, width, height) return entry for target in annotation: @@ -80,3 +80,52 @@ def _(entry, height, width): resize_segmentation_mask(target, target_height, target_width) return annotation, prediction + + def to_image(self, arr): + data = np.asarray(arr) + if np.iscomplexobj(data): + raise ValueError("Cannot convert a complex-valued array.") + shape = list(data.shape) + if len(shape) == 2: + return self._process_2d(data, shape) + if len(shape) == 3 and ((3 in shape) or (4 in shape)): + return self._process_3d(data, shape) + raise ValueError("'arr' does not have a suitable array shape for any mode.") + + def _process_2d(self, data, shape): + shape = (shape[1], shape[0]) # columns show up first + bytedata = self._bytescale(data) + image = Image.frombytes('L', shape, bytedata.tostring()) + + return image + + def _process_3d(self, data, shape): + # if here then 3-d array with a 3 or a 4 in the shape length. 
+ # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' + bytedata = self._bytescale(data) + height, width, channels = shape + mode = 'RGB' if channels == 3 else 'RGBA' + image = Image.frombytes(mode, (width, height), bytedata.tostring()) + + return image + + @staticmethod + def _bytescale(data): + if data.dtype == np.uint8: + return data + cmin = data.min() + cmax = data.max() + cscale = cmax - cmin + if cscale == 0: + cscale = 1 + + scale = float(255) / cscale + bytedata = (data - cmin) * scale + + return (bytedata.clip(0, 255) + 0.5).astype(np.uint8) + + def resize(self, mask, width, height): + image = self.to_image(mask) + image_new = image.resize((width, height), resample=0) + + return np.array(image_new) From a95d428d9a2047cf4ba2f7cbce3b125ed07c15f2 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 30 Sep 2019 15:55:23 +0300 Subject: [PATCH 049/927] pedestrian_tracker: remove VideoCapture wrapper --- demos/pedestrian_tracker_demo/README.md | 6 +- .../include/image_reader.hpp | 38 ----- .../include/pedestrian_tracker_demo.hpp | 11 +- demos/pedestrian_tracker_demo/main.cpp | 62 ++++---- .../src/image_reader.cpp | 145 ------------------ demos/tests/cases.py | 2 +- demos/tests/image_sequences.py | 10 +- 7 files changed, 44 insertions(+), 230 deletions(-) delete mode 100644 demos/pedestrian_tracker_demo/include/image_reader.hpp delete mode 100644 demos/pedestrian_tracker_demo/src/image_reader.cpp diff --git a/demos/pedestrian_tracker_demo/README.md b/demos/pedestrian_tracker_demo/README.md index ef889f6dfef..e5e8f70c93a 100644 --- a/demos/pedestrian_tracker_demo/README.md +++ b/demos/pedestrian_tracker_demo/README.md @@ -37,7 +37,7 @@ pedestrian_tracker_demo [OPTION] Options: -h Print a usage message. - -i "" Required. Path to a video file or a folder with images (all images should have names 0000000001.jpg, 0000000002.jpg, etc). + -i "" Required. Video sequence to process. -m_det "" Required. Path to the Pedestrian Detection Retail model (.xml) file. -m_reid "" Required. Path to the Pedestrian Reidentification Retail model (.xml) file. -l "" Optional. For CPU custom layers, if any. Absolute path to a shared library with the kernels implementation. @@ -50,8 +50,8 @@ Options: -no_show Optional. Do not show processed video. -delay Optional. Delay between frames used for visualization. If negative, the visualization is turned off (like with the option 'no_show'). If zero, the visualization is made frame-by-frame. -out "" Optional. The file name to write output log file with results of pedestrian tracking. The format of the log file is compatible with MOTChallenge format. - -first Optional. The index of the first frame of video sequence to process. This has effect only if it is positive and the source video sequence is an image folder. - -last Optional. The index of the last frame of video sequence to process. This has effect only if it is positive and the source video sequence is an image folder. + -first Optional. The index of the first frame of video sequence to process. This has effect only if it is positive. + -last Optional. The index of the last frame of video sequence to process. This has effect only if it is positive. ``` To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](../../tools/downloader/README.md) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/). 
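For reference, the PIL-based segmentation-mask resize introduced in PATCH 048/927 above boils down to the following standalone sketch. It is not the exact accuracy_checker code; the helper names `bytescale` and `imresize` are illustrative only, and NumPy plus Pillow are assumed to be installed.

```
# Minimal sketch of the scipy.misc.imresize replacement from PATCH 048/927.
import numpy as np
from PIL import Image


def bytescale(data):
    # Linearly rescale an arbitrary numeric array to uint8 [0, 255],
    # reproducing the behaviour of the removed _bytescale helper.
    if data.dtype == np.uint8:
        return data
    cmin, cmax = float(data.min()), float(data.max())
    cscale = cmax - cmin
    if cscale == 0:
        cscale = 1.0
    scaled = (data - cmin) * (255.0 / cscale)
    return (scaled.clip(0, 255) + 0.5).astype(np.uint8)


def imresize(mask, width, height):
    # Nearest-neighbour resize (resample=0), mirroring ScipyImageReader.imresize.
    image = Image.fromarray(bytescale(mask))
    return np.array(image.resize((width, height), resample=0))


# Example: shrink a synthetic 80x100 class-id mask to 40x50.
resized = imresize(np.random.randint(0, 20, (80, 100), dtype=np.uint8), 50, 40)
assert resized.shape == (40, 50)
```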
diff --git a/demos/pedestrian_tracker_demo/include/image_reader.hpp b/demos/pedestrian_tracker_demo/include/image_reader.hpp deleted file mode 100644 index 5780ecd5b08..00000000000 --- a/demos/pedestrian_tracker_demo/include/image_reader.hpp +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (C) 2018-2019 Intel Corporation -// SPDX-License-Identifier: Apache-2.0 -// - -#pragma once -#include -#include -#include -#include - -using ImageWithFrameIndex = std::pair; - -class ImageReader { -public: - virtual bool IsOpened() const = 0; - virtual void SetFrameIndex(size_t frame_index) = 0; - virtual double GetFrameRate() const = 0; - virtual int FrameIndex() const = 0; - virtual ImageWithFrameIndex Read() = 0; - - virtual ~ImageReader() {} - - /// @brief Create ImageReader to read from a folder with images. - static std::unique_ptr CreateImageReaderForImageFolder( - const std::string& folder_path, size_t start_frame_index = 1); - - /// @brief Create ImageReader to read from a video file. - static std::unique_ptr CreateImageReaderForVideoFile( - const std::string& file_path); - - /// @brief Create ImageReader to read either from a video file - /// (if the path points to a file) or from a folder with images - /// (if the path points to a folder) - static std::unique_ptr CreateImageReaderForPath( - const std::string& path); -}; - - diff --git a/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp b/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp index 5273cd08eb7..036325ea6bd 100644 --- a/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp +++ b/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp @@ -13,8 +13,7 @@ static const char help_message[] = "Print a usage message."; /// @brief message for images argument -static const char video_message[] = "Required. Path to a video file or a folder with images "\ - "(all images should have names 0000000001.jpg, 0000000002.jpg, etc)."; +static const char video_message[] = "Required. Video sequence to process."; /// @brief message for model arguments static const char pedestrian_detection_model_message[] = "Required. Path to the Pedestrian Detection Retail model (.xml) file."; @@ -59,10 +58,10 @@ static const char output_log_message[] = "Optional. The file name to write outpu /// @brief message for the first frame static const char first_frame_message[] = "Optional. The index of the first frame of video sequence to process. "\ - "This has effect only if it is positive and the source video sequence is an image folder."; + "This has effect only if it is positive."; /// @brief message for the last frame static const char last_frame_message[] = "Optional. The index of the last frame of video sequence to process. "\ - "This has effect only if it is positive and the source video sequence is an image folder."; + "This has effect only if it is positive."; /// @brief Define flag for showing help message
@@ -115,11 +114,11 @@ DEFINE_string(out, "", output_log_message); /// @brief Define the first frame to process
/// It is an optional parameter -DEFINE_int32(first, -1, first_frame_message); +DEFINE_uint32(first, 0, first_frame_message); /// @brief Define the last frame to process
/// It is an optional parameter -DEFINE_int32(last, -1, last_frame_message); +DEFINE_uint32(last, 0, last_frame_message); /** diff --git a/demos/pedestrian_tracker_demo/main.cpp b/demos/pedestrian_tracker_demo/main.cpp index 703017f9657..6e6ba8858d0 100644 --- a/demos/pedestrian_tracker_demo/main.cpp +++ b/demos/pedestrian_tracker_demo/main.cpp @@ -8,7 +8,6 @@ #include "descriptor.hpp" #include "distance.hpp" #include "detector.hpp" -#include "image_reader.hpp" #include "pedestrian_tracker_demo.hpp" #include @@ -109,8 +108,6 @@ int main_work(int argc, char **argv) { // Reading command line parameters. - auto video_path = FLAGS_i; - auto det_model = FLAGS_m_det; auto det_weights = fileNameNoExt(FLAGS_m_det) + ".bin"; @@ -134,15 +131,12 @@ int main_work(int argc, char **argv) { delay = -1; should_show = (delay >= 0); - int first_frame = FLAGS_first; - int last_frame = FLAGS_last; - bool should_save_det_log = !detlog_out.empty(); - if (first_frame >= 0) - std::cout << "first_frame = " << first_frame << std::endl; - if (last_frame >= 0) - std::cout << "last_frame = " << last_frame << std::endl; + if (FLAGS_first != 0) + std::cout << "first_frame = " << FLAGS_first << std::endl; + if (FLAGS_last != 0) + std::cout << "last_frame = " << FLAGS_last << std::endl; std::vector devices{detector_mode, reid_mode}; InferenceEngine::Core ie = @@ -158,16 +152,29 @@ int main_work(int argc, char **argv) { CreatePedestrianTracker(reid_model, reid_weights, ie, reid_mode, should_keep_tracking_info); - - // Opening video. - std::unique_ptr video = - ImageReader::CreateImageReaderForPath(video_path); - - PT_CHECK(video->IsOpened()) << "Failed to open video: " << video_path; - double video_fps = video->GetFrameRate(); - - if (first_frame > 0) - video->SetFrameIndex(first_frame); + cv::VideoCapture cap; + try { + int intInput = std::stoi(FLAGS_i); + if (!cap.open(intInput)) { + throw std::runtime_error("Can't open " + std::to_string(intInput)); + } + } catch (const std::invalid_argument&) { + if (!cap.open(FLAGS_i)) { + throw std::runtime_error("Can't open " + FLAGS_i); + } + } catch (const std::out_of_range&) { + if (!cap.open(FLAGS_i)) { + throw std::runtime_error("Can't open " + FLAGS_i); + } + } + double video_fps = cap.get(cv::CAP_PROP_FPS); + if (0.0 == video_fps) { + // the default frame rate for DukeMTMC dataset + video_fps = 60.0; + } + if (0 != FLAGS_first && !cap.set(cv::CAP_PROP_POS_FRAMES, FLAGS_first)) { + throw std::runtime_error("Can't set the frame to begin with"); + } std::cout << "To close the application, press 'CTRL+C' here"; if (!FLAGS_no_show) { @@ -175,18 +182,9 @@ int main_work(int argc, char **argv) { } std::cout << std::endl; - for (;;) { - auto pair = video->Read(); - cv::Mat frame = pair.first; - int frame_idx = pair.second; - - if (frame.empty()) break; - - PT_CHECK(frame_idx >= first_frame); - - if ( (last_frame >= 0) && (frame_idx > last_frame) ) { - std::cout << "Frame " << frame_idx << " is greater than last_frame = " - << last_frame << " -- break"; + for (uint32_t frame_idx = FLAGS_first; 0 == FLAGS_last || frame_idx <= FLAGS_last; ++frame_idx) { + cv::Mat frame; + if (!cap.read(frame)) { break; } diff --git a/demos/pedestrian_tracker_demo/src/image_reader.cpp b/demos/pedestrian_tracker_demo/src/image_reader.cpp deleted file mode 100644 index ca2e61cd19d..00000000000 --- a/demos/pedestrian_tracker_demo/src/image_reader.cpp +++ /dev/null @@ -1,145 +0,0 @@ -// Copyright (C) 2018-2019 Intel Corporation -// SPDX-License-Identifier: Apache-2.0 -// - -#include "image_reader.hpp" 
-#include
-#include -#include -#include -#include -#include -#include -#include -#include - -namespace { -bool IsFolder(const std::string& folder_path) { - struct stat folder_info; - if ( stat( folder_path.c_str(), &folder_info ) != 0 ) - return false; - if ( folder_info.st_mode & S_IFDIR ) - return true; - return false; -} -bool IsFile(const std::string& path) { - struct stat info; - if ( stat( path.c_str(), &info ) != 0 ) - return false; - if ( info.st_mode & S_IFREG ) - return true; - return false; -} -} // anonymous namespace - -class ImageReaderForFolder: public ImageReader { -public: - ImageReaderForFolder(const std::string& folder_path, size_t start_frame_index) { - folder_path_ = folder_path; - frame_index_ = start_frame_index; - } - - bool IsOpened() const { - return IsFolder(folder_path_); - } - void SetFrameIndex(size_t frame_index) { - frame_index_ = frame_index; - } - - int FrameIndex() const { - return frame_index_; - } - - ImageWithFrameIndex Read() { - auto path = GetImagePath(folder_path_, frame_index_); - cv::Mat img = cv::imread(path); - - ImageWithFrameIndex result; - result.first = img; - result.second = frame_index_; - - frame_index_++; - return result; - } - - // Note that for images folder - // the default frame rate for DukeMTMC dataset is returned - double GetFrameRate() const {return 60.0;} - -private: - std::string folder_path_; - size_t frame_index_ = 1; - - static std::string GetImagePath(const std::string& folder_path, - size_t frame_index) { - std::stringstream strstr; - strstr << folder_path << "/" - << std::internal - << std::setfill('0') - << std::setw(10) - << frame_index - << ".jpg"; - return strstr.str(); - } -}; - -class ImageReaderForVideoFile: public ImageReader { -public: - explicit ImageReaderForVideoFile(const std::string& file_path) - : video_capture(file_path) {} - - bool IsOpened() const { - return video_capture.isOpened(); - } - void SetFrameIndex(size_t frame_index) { - THROW_IE_EXCEPTION << "ImageReader does not set frame index in video, " - << "since in the current implementation it is not precise"; - } - - int FrameIndex() const { - return frame_index_; - } - - ImageWithFrameIndex Read() { - ImageWithFrameIndex result; - video_capture >> result.first; - result.second = frame_index_; - frame_index_++; - return result; - } - - double GetFrameRate() const { - double video_fps = video_capture.get(cv::CAP_PROP_FPS); - if ((video_fps <= 0) || (video_fps > 200)) { - video_fps = 30; - } - return video_fps; - } - -private: - size_t frame_index_ = 1; - cv::VideoCapture video_capture; -}; - -std::unique_ptr ImageReader::CreateImageReaderForImageFolder( - const std::string& folder_path, size_t start_frame_index) { - return std::unique_ptr( - new ImageReaderForFolder(folder_path, start_frame_index)); -} - -std::unique_ptr ImageReader::CreateImageReaderForVideoFile( - const std::string& file_path) { - return std::unique_ptr( - new ImageReaderForVideoFile(file_path)); -} - -std::unique_ptr ImageReader::CreateImageReaderForPath( - const std::string& path) { - if (IsFolder(path)) - return ImageReader::CreateImageReaderForImageFolder(path); - - if (IsFile(path)) - return ImageReader::CreateImageReaderForVideoFile(path); - - return std::unique_ptr(); -} diff --git a/demos/tests/cases.py b/demos/tests/cases.py index 77204031e7f..cb65e5001c6 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -147,7 +147,7 @@ def device_cases(*args): NativeDemo('pedestrian_tracker_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, - '-i': 
ImageDirectoryArg('person-detection-retail')}), + '-i': ImagePatternArg('person-detection-retail')}), device_cases('-d_det', '-d_reid'), [ TestCase(options={'-m_det': ModelArg('person-detection-retail-0002')}), diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index 17d4ba48683..c41c3786ee5 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -55,16 +55,16 @@ ], 'person-detection-retail': [ + image_net_arg('00000002'), + image_net_arg('00000002'), + image_net_arg('00000002'), + image_net_arg('00000002'), image_net_arg('00000002'), image_net_arg('00000032'), - image_net_arg('00000184'), - image_net_arg('00000442'), - image_net_arg('00008165'), + image_net_arg('00000002'), image_net_arg('00017291'), image_net_arg('00017293'), image_net_arg('00040547'), - image_net_arg('00040548'), - image_net_arg('00040554'), ], 'person-vehicle-bike-detection-crossroad': [ From bce716ddafe255f61cd7681776af7808e595b38b Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Mon, 30 Sep 2019 16:13:12 +0300 Subject: [PATCH 050/927] refacotring 5 --- .../postprocessor/resize_segmentation_mask.py | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py index 2e9a528e491..d1d259dffcb 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/resize_segmentation_mask.py @@ -81,27 +81,25 @@ def _(entry, height, width): return annotation, prediction - def to_image(self, arr): + def _to_image(self, arr): data = np.asarray(arr) if np.iscomplexobj(data): raise ValueError("Cannot convert a complex-valued array.") shape = list(data.shape) if len(shape) == 2: return self._process_2d(data, shape) - if len(shape) == 3 and ((3 in shape) or (4 in shape)): + if len(shape) == 3 and shape[2] in (3, 4): return self._process_3d(data, shape) raise ValueError("'arr' does not have a suitable array shape for any mode.") def _process_2d(self, data, shape): - shape = (shape[1], shape[0]) # columns show up first + height, width = shape bytedata = self._bytescale(data) - image = Image.frombytes('L', shape, bytedata.tostring()) + image = Image.frombytes('L', (width, height), bytedata.tostring()) return image def _process_3d(self, data, shape): - # if here then 3-d array with a 3 or a 4 in the shape length. 
- # Check for 3 in datacube shape --- 'RGB' or 'YCbCr' bytedata = self._bytescale(data) height, width, channels = shape mode = 'RGB' if channels == 3 else 'RGBA' @@ -125,7 +123,7 @@ def _bytescale(data): return (bytedata.clip(0, 255) + 0.5).astype(np.uint8) def resize(self, mask, width, height): - image = self.to_image(mask) + image = self._to_image(mask) image_new = image.resize((width, height), resample=0) return np.array(image_new) From 7a9360da0ad923df8972ef0e534056cf5a724a7d Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 30 Sep 2019 21:02:45 +0300 Subject: [PATCH 051/927] pedestrian_tracker: add a note about -first imprecision --- demos/pedestrian_tracker_demo/README.md | 2 +- .../include/pedestrian_tracker_demo.hpp | 6 ++++-- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/demos/pedestrian_tracker_demo/README.md b/demos/pedestrian_tracker_demo/README.md index e5e8f70c93a..0173b989cbe 100644 --- a/demos/pedestrian_tracker_demo/README.md +++ b/demos/pedestrian_tracker_demo/README.md @@ -50,7 +50,7 @@ Options: -no_show Optional. Do not show processed video. -delay Optional. Delay between frames used for visualization. If negative, the visualization is turned off (like with the option 'no_show'). If zero, the visualization is made frame-by-frame. -out "" Optional. The file name to write output log file with results of pedestrian tracking. The format of the log file is compatible with MOTChallenge format. - -first Optional. The index of the first frame of video sequence to process. This has effect only if it is positive. + -first Optional. The index of the first frame of video sequence to process. This has effect only if it is positive. The actual first frame captured depends on cv::VideoCapture implementation and may have slightly different number. -last Optional. The index of the last frame of video sequence to process. This has effect only if it is positive. ``` diff --git a/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp b/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp index 036325ea6bd..7ebb7553f3b 100644 --- a/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp +++ b/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp @@ -57,8 +57,10 @@ static const char output_log_message[] = "Optional. The file name to write outpu "The format of the log file is compatible with MOTChallenge format."; /// @brief message for the first frame -static const char first_frame_message[] = "Optional. The index of the first frame of video sequence to process. "\ - "This has effect only if it is positive."; +static const char first_frame_message[] = "Optional. The index of the first frame of video sequence to process. " + "This has effect only if it is positive. The actual first frame captured " + "depends on cv::VideoCapture implementation and may have slightly different " + "number."; /// @brief message for the last frame static const char last_frame_message[] = "Optional. The index of the last frame of video sequence to process. 
"\ "This has effect only if it is positive."; From 92756ede3a4a8d55ae2183546dc93d67dd3f7cad Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Tue, 1 Oct 2019 19:06:21 +0300 Subject: [PATCH 052/927] update readme --- .../accuracy_checker/accuracy_checker/data_readers/README.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/README.md b/tools/accuracy_checker/accuracy_checker/data_readers/README.md index 5b36b161ebf..e2b8a3adc33 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/README.md +++ b/tools/accuracy_checker/accuracy_checker/data_readers/README.md @@ -30,7 +30,10 @@ reader: AccuracyChecker supports following list of data readers: * `opencv_imread` - read images using OpenCV library. Default color space is BGR. * `pillow_imread` - read images using Pillow library. Default color space is RGB. -* `scipy_imread` - read images using Scipy library. +* `scipy_imread` - read images using similar approach as in `scipy.misc.imread` +``` +Note: since 1.3.0 version the image processing module is not a part of scipy library. This reader does not use scipy anymore. +``` * `tf_imred`- read images using Tensorflow. Default color space is RGB. Requires Tensorflow installation. * `opencv_capture` - read frames from video using OpenCV. * `json_reader` - read value from json file. From 35771e83fdd3bfca1e03f781f09c247569ccf6f0 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Fri, 20 Sep 2019 19:02:57 +0300 Subject: [PATCH 053/927] AC: added PyTorch launcher --- .../adapters/classification.py | 2 + .../accuracy_checker/launcher/__init__.py | 8 ++ .../launcher/pytorch_launcher.py | 126 ++++++++++++++++++ 3 files changed, 136 insertions(+) create mode 100644 tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py diff --git a/tools/accuracy_checker/accuracy_checker/adapters/classification.py b/tools/accuracy_checker/accuracy_checker/adapters/classification.py index 75f16b312c2..fd15715d161 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/classification.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/classification.py @@ -54,6 +54,8 @@ def process(self, raw, identifiers=None, frame_meta=None): list of ClassificationPrediction objects """ prediction = self._extract_predictions(raw, frame_meta)[self.output_blob] + if len(np.shape(prediction)) == 1: + prediction = np.expand_dims(prediction, axis=0) prediction = np.reshape(prediction, (prediction.shape[0], -1)) result = [] diff --git a/tools/accuracy_checker/accuracy_checker/launcher/__init__.py b/tools/accuracy_checker/accuracy_checker/launcher/__init__.py index c7c87e30c8f..f177ab22816 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/__init__.py @@ -62,6 +62,13 @@ 'onnx_runtime', "ONNX Runtime isn't installed. Please, install it before using. \n{}".format(import_error.msg) ) +try: + from .pytorch_launcher import PyTorchLauncher +except ImportError as import_error: + PyTorchLauncher = unsupported_launcher( + 'pytorch', "PyTorch isn't installed. Please, install it before using. 
\n{}".format(import_error.msg) + ) + __all__ = [ 'create_launcher', 'Launcher', @@ -72,6 +79,7 @@ 'DLSDKLauncher', 'OpenCVLauncher', 'ONNXLauncher', + 'PyTorchLauncher', 'DummyLauncher', 'InputFeeder' ] diff --git a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py new file mode 100644 index 00000000000..257c166b1cd --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py @@ -0,0 +1,126 @@ +from contextlib import contextmanager +import sys +import importlib +from collections import OrderedDict + +import numpy as np +import torch +from torch.autograd import Variable + +from ..config import PathField, StringField, DictField, NumberField, ListField +from .launcher import Launcher + +MODULE_REGEX = r'(?:\w+)(?:(?:.\w+)*)' +DEVICE_REGEX = r'(?Pcpu$|cuda)?' + +class PyTorchLauncher(Launcher): + __provider__ = 'pytorch' + + @classmethod + def parameters(cls): + parameters = super().parameters() + parameters.update({ + 'module': StringField(regex=MODULE_REGEX), + 'checkpoint': PathField(check_exists=True, is_directory=False, optional=True), + 'python_path': PathField(check_exists=True, is_directory=True, optional=True), + 'module_args': ListField(optional=True), + 'module_kwargs': DictField(key_type=str, validate_values=False, optional=True, default={}), + 'device': StringField(default='cpu', regex=DEVICE_REGEX), + 'batch': NumberField(value_type=float, min_value=1, optional=True, description="Batch size."), + 'output_names': ListField( + optional=True, value_type=str, description='output tensor names' + ) + }) + return parameters + + def __init__(self, config_entry: dict, *args, **kwargs): + super().__init__(config_entry, *args, **kwargs) + module_args = config_entry.get("module_args", ()) + module_kwargs = config_entry.get("module_kwargs", {}) + self.cuda = 'cuda' in self.get_value_from_config('device') + self.module = self.load_module( + config_entry['module'], + module_args, + module_kwargs, + config_entry.get('checkpoint'), + config_entry.get('state_key'), + config_entry.get("python_path") + ) + + self._batch = self.get_value_from_config('batch') + # torch modules does not have input information + self._generate_inputs() + self.output_names = self.get_value_from_config('output_names') or ['output'] + + def _generate_inputs(self): + config_inputs = self.config.get('inputs') + if not config_inputs: + self._inputs = {'input': (self.batch, ) + (-1, ) * 3} + return + input_shapes = OrderedDict() + for input_description in config_inputs: + input_shapes[input_description['name']] = input_description.get('shape', (self.batch, ) + (-1, ) * 3) + self._inputs = input_shapes + + @property + def inputs(self): + return self._inputs + + @property + def batch(self): + return self._batch + @property + def output_blob(self): + return next(iter(self.output_names)) + + def load_module(self, model_cls, module_args, module_kwargs, checkpoint=None, state_key=None, python_path=None): + module_parts = model_cls.split(".") + model_cls = module_parts[-1] + model_path = ".".join(module_parts[:-1]) + with append_to_path(python_path): + model_cls = importlib.import_module(model_path).__getattribute__(model_cls) + module = model_cls(*module_args, **module_kwargs) + if checkpoint: + checkpoint = torch.load(checkpoint) + state = checkpoint if not state_key else checkpoint[state_key] + module.load_state_dict(state) + if self.cuda: + module.cuda() + else: + module.cpu() + module.eval() + return module + + 
def fit_to_input(self, data, layer_name, layout): + data = np.transpose(data, layout) + tensor = torch.from_numpy(data.astype(np.float32)) + if self.cuda: + tensor = tensor.cuda() + with torch.no_grad(): + return Variable(tensor) + + def predict(self,inputs, metadata=None, **kwargs): + results = [] + for batch_input in inputs: + outputs = list(self.module(*batch_input.values())) + result_dict = { + output_name: res.data.cpu().numpy() if self.cuda else res.data.numpy() + for output_name, res in zip(self.output_names, outputs) + } + results.append(result_dict) + + return results + + def release(self): + del self.module + + +@contextmanager +def append_to_path(path): + if path: + sys.path.append(path) + + yield + + if path: + sys.path.remove(path) From 32f53d8682044b18c772e91bd9d98d6cf62db550 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Fri, 20 Sep 2019 19:06:58 +0300 Subject: [PATCH 054/927] linter --- tools/accuracy_checker/.pylintrc | 2 +- .../accuracy_checker/launcher/pytorch_launcher.py | 7 +++++-- tools/accuracy_checker/dataset_definitions.yml | 4 ++-- tools/accuracy_checker/setup.cfg | 2 +- 4 files changed, 9 insertions(+), 6 deletions(-) diff --git a/tools/accuracy_checker/.pylintrc b/tools/accuracy_checker/.pylintrc index 9c6fd442b41..c27ba95f9dd 100644 --- a/tools/accuracy_checker/.pylintrc +++ b/tools/accuracy_checker/.pylintrc @@ -21,7 +21,7 @@ disable = C0103, max-line-length = 120 ignore-docstrings = yes extension-pkg-whitelist=inference_engine,cv2,numpy,mxnet,tensorflow,pycocotools,onnxruntime -ignored-modules = numpy,cv2,openvino.inference_engine,caffe,mxnet,tensorflow,pycocotools,onnxruntime +ignored-modules = numpy,cv2,openvino.inference_engine,caffe,mxnet,tensorflow,pycocotools,onnxruntime,torch load-plugins = pylint_checkers ignored-classes = pathlib.PurePath jobs=0 diff --git a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py index 257c166b1cd..4e4ec4d32a2 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py @@ -64,7 +64,7 @@ def _generate_inputs(self): @property def inputs(self): - return self._inputs + return self._inputs @property def batch(self): @@ -99,7 +99,7 @@ def fit_to_input(self, data, layer_name, layout): with torch.no_grad(): return Variable(tensor) - def predict(self,inputs, metadata=None, **kwargs): + def predict(self, inputs, metadata=None, **kwargs): results = [] for batch_input in inputs: outputs = list(self.module(*batch_input.values())) @@ -111,6 +111,9 @@ def predict(self,inputs, metadata=None, **kwargs): return results + def predict_async(self, *args, **kwargs): + raise ValueError('MxNet Launcher does not support async mode yet') + def release(self): del self.module diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index 9bdcfa9d681..6a8ff19fa22 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -60,9 +60,9 @@ datasets: - name: imagenet_1000_classes annotation_conversion: converter: imagenet - annotation_file: val.txt + annotation_file: /home/automation/datasets/imagenet/val.txt annotation: imagenet1000.pickle - data_source: ILSVRC2012_img_val + data_source: /home/automation/datasets/imagenet/images metrics: - name: accuracy@top1 type: accuracy diff --git a/tools/accuracy_checker/setup.cfg b/tools/accuracy_checker/setup.cfg 
index 7c49a45db3c..ad131b321c4 100644 --- a/tools/accuracy_checker/setup.cfg +++ b/tools/accuracy_checker/setup.cfg @@ -5,4 +5,4 @@ ignore = F401 [isort] line_length = 120 use_parentheses = True -known_third_party = openvino.inference_engine,caffe,cv2,mxnet,tensorflow +known_third_party = openvino.inference_engine,caffe,cv2,mxnet,tensorflow,torch From 0e89be4ac102ce55f8f0c94fab8ef4d85204879d Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Mon, 23 Sep 2019 18:38:56 +0300 Subject: [PATCH 055/927] added tests for pytorch launcher --- .../launcher/caffe_launcher.py | 2 +- .../accuracy_checker/launcher/launcher.py | 1 + .../launcher/mxnet_launcher.py | 2 +- .../launcher/onnx_launcher.py | 2 +- .../launcher/pytorch_launcher.py | 16 +++-- .../launcher/tf_lite_launcher.py | 2 +- .../test_models/pytorch_model/__init__.py | 15 +++++ .../test_models/pytorch_model/samplenet.pth | Bin 0 -> 249564 bytes .../test_models/pytorch_model/samplenet.py | 38 +++++++++++ .../accuracy_checker/dataset_definitions.yml | 4 +- .../tests/test_pytorch_launcher.py | 63 ++++++++++++++++++ 11 files changed, 134 insertions(+), 11 deletions(-) create mode 100644 tools/accuracy_checker/data/test_models/pytorch_model/__init__.py create mode 100644 tools/accuracy_checker/data/test_models/pytorch_model/samplenet.pth create mode 100644 tools/accuracy_checker/data/test_models/pytorch_model/samplenet.py create mode 100644 tools/accuracy_checker/tests/test_pytorch_launcher.py diff --git a/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py index 6c5c681fa1d..2152bb45943 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py @@ -97,7 +97,7 @@ def fit_to_input(self, data, layer_name, layout): return data - def predict(self, inputs, metadata, *args, **kwargs): + def predict(self, inputs, metadata=None, **kwargs): """ Args: inputs: dictionary where keys are input layers names and values are data for them. diff --git a/tools/accuracy_checker/accuracy_checker/launcher/launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/launcher.py index 414440a2ab1..5ab5135377f 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/launcher.py @@ -152,6 +152,7 @@ def inputs_info_for_meta(self): def name(self): return self.__provider__ + def unsupported_launcher(name, error_message=None): class UnsupportedLauncher(Launcher): __provider__ = name diff --git a/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py index e3019c38d66..a273190d7aa 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py @@ -109,7 +109,7 @@ def fit_to_input(self, data, input_layer, layout): def inputs(self): return self._inputs - def predict(self, inputs, metadata, *args, **kwargs): + def predict(self, inputs, metadata=None, **kwargs): """ Args: inputs: dictionary where keys are input layers names and values are data for them. 
diff --git a/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py index a41768cc729..84ef7629962 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py @@ -63,7 +63,7 @@ def output_blob(self): def batch(self): return 1 - def predict(self, inputs, metadata, *args, **kwargs): + def predict(self, inputs, metadata=None, **kwargs): results = [] for infer_input in inputs: prediction_list = self._inference_session.run(self.output_names, infer_input) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py index 4e4ec4d32a2..246b2728dc3 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py @@ -8,11 +8,12 @@ from torch.autograd import Variable from ..config import PathField, StringField, DictField, NumberField, ListField -from .launcher import Launcher +from .launcher import Launcher, LauncherConfigValidator MODULE_REGEX = r'(?:\w+)(?:(?:.\w+)*)' DEVICE_REGEX = r'(?Pcpu$|cuda)?' + class PyTorchLauncher(Launcher): __provider__ = 'pytorch' @@ -26,7 +27,7 @@ def parameters(cls): 'module_args': ListField(optional=True), 'module_kwargs': DictField(key_type=str, validate_values=False, optional=True, default={}), 'device': StringField(default='cpu', regex=DEVICE_REGEX), - 'batch': NumberField(value_type=float, min_value=1, optional=True, description="Batch size."), + 'batch': NumberField(value_type=float, min_value=1, optional=True, description="Batch size.", default=1), 'output_names': ListField( optional=True, value_type=str, description='output tensor names' ) @@ -35,6 +36,8 @@ def parameters(cls): def __init__(self, config_entry: dict, *args, **kwargs): super().__init__(config_entry, *args, **kwargs) + pytorch_launcher_config = LauncherConfigValidator('Pytorch_Launcher', fields=self.parameters()) + pytorch_launcher_config.validate(self.config) module_args = config_entry.get("module_args", ()) module_kwargs = config_entry.get("module_kwargs", {}) self.cuda = 'cuda' in self.get_value_from_config('device') @@ -69,6 +72,7 @@ def inputs(self): @property def batch(self): return self._batch + @property def output_blob(self): return next(iter(self.output_names)) @@ -108,11 +112,13 @@ def predict(self, inputs, metadata=None, **kwargs): for output_name, res in zip(self.output_names, outputs) } results.append(result_dict) + for meta_ in metadata: + meta_['input_shape'] = {key: list(data.shape) for key, data in batch_input.items()} return results def predict_async(self, *args, **kwargs): - raise ValueError('MxNet Launcher does not support async mode yet') + raise ValueError('Pytorch Launcher does not support async mode yet') def release(self): del self.module @@ -121,9 +127,9 @@ def release(self): @contextmanager def append_to_path(path): if path: - sys.path.append(path) + sys.path.append(str(path)) yield if path: - sys.path.remove(path) + sys.path.remove(str(path)) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py index e084f4a950b..2e821974c56 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py @@ -47,7 +47,7 @@ def __init__(self, 
config_entry, adapter, *args, **kwargs): self._inputs = {input_layer['name']: input_layer for input_layer in self._input_details} self.device = '/{}:0'.format(self.config.get('device', 'cpu').lower()) - def predict(self, inputs, metadata, *args, **kwargs): + def predict(self, inputs, metadata=None, **kwargs): """ Args: inputs: dictionary where keys are input layers names and values are data for them. diff --git a/tools/accuracy_checker/data/test_models/pytorch_model/__init__.py b/tools/accuracy_checker/data/test_models/pytorch_model/__init__.py new file mode 100644 index 00000000000..7c9fcf6dc14 --- /dev/null +++ b/tools/accuracy_checker/data/test_models/pytorch_model/__init__.py @@ -0,0 +1,15 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +""" diff --git a/tools/accuracy_checker/data/test_models/pytorch_model/samplenet.pth b/tools/accuracy_checker/data/test_models/pytorch_model/samplenet.pth new file mode 100644 index 0000000000000000000000000000000000000000..6c70368e0957fe41fbc207c0ce88056530d2b600 GIT binary patch literal 249564 zcmZU)d0dU%*FT=-i3Sx-ilP!}p3Yv|l!}lBiBM@!rko}zQE4EhG?|h}gCrsv&R$nE zkYpYrW9GSJiVVNwzCX|N`aHky^~X79ud~m4y|2CZT4$}j_LU%-F1zYyq2qVm6&3T; zO%g=sycQF_#`8rCy@!g3h^z{Yj9eAu8?+`c)PD_s$QEx|p)hj&+92PE(Ab~|zObj5 zp|_Y&7`i5sFYY4lEg}3I5fT<2$(L{u^%nn&5w>PIU(!W9%3D#`S{oh~85R(>$~QVF zJR&q~4PQ#AGD@g3Kqy=l6c8ztMaa5_2L^=)1v-QVM9T6-jNJJ{487%rzYYjn6K!U) zE+}+)NF-l0ATlgGAjHHsDl&9cgsg9P(6Xq|Re`>dL2Dwy!hNI7WrgGny`}y#687|8 z9>f>__b0NBtHS&vJ^mHS@+G{bg?+57%q{J#tu6VI-jaV+MFslXn+oYQ51AXumv(az zF?5l15p$7nk%;8W7`nK)s0%+2iR8;}4f)GZE|M?r&L8^E2>nZQS*U*me^`i+Lx_0D ze{@)yTie>$TJjbCTZf{Hq)>&DP=$+R3SZe}ND6=Wzo06Sd{uY8+W+3W`TyMei2n!H z!p_Q^Kk~m})#v!h2yr$3p)S<%FLg~rmxBMHuJx}L?MVJ8cfQX5Qa3mGhq~_nPluhY zshycQfAoLr7~`TSRH65mx?&1n|1b5i|ALN-NW?$>@3;TVxzxYS|5;kn|7rebkY)a-`LA!14Ki{5dtbt<%teBlZdgYp zvSn$cpY^fn9^mDJh-@B9b;_UFmXrt%5P*)y;*Azy$|D>d=s2J^&ajxRg1gT zM7R|-Ee+ESZsB}?&gVK+AF-!RP4Ikz3JrI1Viro-Xep=3{hYTAHD~M5%(_1cLWDBRAHyv7TUUhGwE_#N#$3+#Q4NoSmE&l z&qy8Sz7F=nzNbTo{)FjhEAPnF{Yr=V{cZSS%Q@g%`4f-QRyg)0m2Q%~3|hDM!PuaK z+|F|wIPcwxaCsyUdylK=b7Lzl-uMTz)piLymrbR+4bv$7zKktB(FRd{(zLB^6c^&r z!`^pn=avskg^*iQNa77om{60BS69Sx^BoyV2;!)@{dX9Q^Mm_5bxy|A2}?f7(1fsE%-x;20`3V8M{7`>;=_f{a;3a(dsJFKhCYeeL~B;W@+RynfQ`eZ z!`$^ee0H~pd#JGhs+ygsSFQY*n^ZsDDF zrX4th4nLla?UC~M$;20qsA|!I12$lEU^PzBJ&#UqcARACJZ`0kHdz@sz-yW-Nf%65 zPJ4Q?xNeD6bbDV4oiF_Xmnutf7EdL)zWPzzO+KdKt;ubKi9527(U zYd%#Pp-F3;Vxg_{EUk|&A=_K6T(T(!!kFD6TK&6Ru5U_mf_`D7ApH6O-J z`PKs!GeU59`W$Y@t|N`D3vR)%Ri|LG!*L?VE49CFCY+OJ+t4~sh03zIoL_JcX>ziq zwap5gT-!tN7?%#8{=9~DkEY{+bx~Z_-8GP=c?Xh1x1w@P6KQ`H%R5=)$FyIJgZ=vX zxYy$VouO0#`|mHLesV$FK%*&jaWtgaZSQc*BWZ5Dt|Y!n490gF1@L4tVY(!odlqyA zkNP@rRo>I+^l&RUduch>7P%3o{LW-<&mZBiqm3Amei1&4@561QKR~5rF0=0cgeMjp zz?0*T;j!s)aKYUdN1t}2^DZ6b4V&P_ebwH~E>B42`N*}yhew{=zB9wQb!XGz%i=bo znV`viwzy2RjMVAJ4+p_T{11`wn~7ny0CKiH5INwFr;1)c_2(0)y21|&#%XXZR%yIB z>e}?+J3H>d)zMsi&=;OWl_+N=W6MoipbFv-avP=1*q#WC|XQx25k!KZD|%>!9qcJNLe4 z7dnhigL?zVh@Z+vSmejxdz(Hz;<*`ieE5cEeQ(1ewIW=VKb-q~V+kDenoaC)e77H# 
zu~#T2Q4iN7IfJA34VduZk7)eZn=Opgc-w*Quzb@cnMJ-D4w7x6_x~B-&%;+?UF37Y zaI80vy0wEoy&K87PMI`!_k6OTZ17Q)W{FO&!ku%?)WJ*IBkrG$M~(V$rr~IeUC~kM z;T#YxZGzyd^g3 z5w&A6ucwr`9h*jjx9)?`VjnDWdL~#+s3Wrj9(-&BijSKr=x9VK(Y<$2&|QI7-d`hD zj=b)kuJTmMS>;0CvX1;;4gMO^`brn>-ccc*gC3mE2flb zx#D;Iu55Q9NE~uR3wz&k$EYMt;dS~8y43RuRee!r^i*L3+n-d|R79>PUz7JGi5)v= z2o`_v({=qHL+*dkB-rg$Y7+GUUoeB>u_U9#eYWTXK zFHB4RC2VUwNvEYuYjgb<+3OQobVu&Y3y#Uib>tS|TgxEM(GA5%i(O#%-G%Z|d!5i~ zs~L7!+5yY_H-O5)G>Yl*f)v&M3Z5f>3#{Lh1|9oGRf-1KyTFUR-5%1itjFZ9w+f27 zt)Z%G4DB>c@ITfL&UQ*{dRySq(Vfu!Wic2xHi@Z*1JOmw_Ak5U!i(n57Q(*oBKdn& zer2YB22<68Fa?a#TzNeeu6M`T7$UHuLwZvIT@%vfwlSSJj|j4RzJ-a&F$QTV@dSx5O%F4`eMj-_x_QW^;fvm;W(r`OA=jE z%vif701rOvChZjrP*G8j_s8i#%kSf`e2ub@RTF;p|c{k!Ba4{awNO>pvK`urHkxUz1g@9q{SrI&$kVj(`0n z_k(j@QsN>tK5wrAb52YrBP-;5m3r`!^7!}SOL!%Q!o@cMxHfJ#U-{M!;m=ZN8r6bR zW6L!t$PE~^<@Y&`KG=1BDEK{FM*P~{0N3RlEe@cUV z*6WDzR!?Avq88nMeTTk&+bUjossblH)5tO;177aXqmc6_=wE$ToQDF3ZHSILLB={}pD$m&*MQ!gk(Vkil+;}Suh9?cevFlS17WD<2 zhG}FPa){1+n~EW|YlTg90Vw;{08h5s;n(1mvORZh(716&>EDksT(GXMuy;ZMy3W<2 zbq)zURH%W3GG){#(SWU`OW0{(Hh-(G5ffc~$ah2sdLCm29xWYtZstY$;NpP;C1#?R z@dm2z-I3M%rHXdPqOgnGAaaRHfZ{hs9NY9B>P#)UX-tyjJ2WCgQv;NxT$6vRImv4; z<v$q?Cs`c;v?e*}U>}4$d~EkD?}qA8(`Ohm53qegLb+xZtU4((ZWW zE1|=18677*B}F?F(nMd&8iH1sGcw zjIt;5XzKbB9GE0A_bv|d_0!{N1V(DQO2 z=ZKQ;Sr#NYTwYSaxE4q`ze4i5`E$m|p}fXnr&#=JGkupjCLa^n zJE1EWSNT+a3YwXETKHs}4jsy_!i^n<=s9_h@c86q zL9TcN23C#0JC%1R&fOL_yqOQT>-ypc{nebKrB3gc945S~0%MhWp!yAK4zHYy9k$Fz z)eoh()g}r?Bwms4%E^a&N@4Ums|0okuYmSB@QLCA%B*`x!`5Ac=%|nM&!!0MRtLyJ zZ@RHipuq7Vk7;213mSU3kaEVa!43rmG)UzU%~d`R{IMJq)?b6OKgz|e3IQ-T^elW# z_zPn+W}#Y#OJq86C=N-t#deJ(UNqGoM~+`2s!vKL<9l*2U9%1rKiCGoW$Vc{p@m$M z4huS;GK4dwBYAR$362|?frqxwb9?1oM=K)s!vJkf9Q3LiKA%(#!wpj?R?CjWVAIL!4HE-T;xM}=AZcsJF=#_c!+K+{GUyqZ6LoVO9 zYLNUv2)mys3TNwz#ke`uP%w2mb?d#DdYQ=hO!ZQD=>CSz$}Pk^@e7FZxm-O-4z+RZ z@~#hRh0D_HwZrS)m|rDv{dF2?PcJw8p6mySUb=kpn<&^N2n4CH>prQA)UD!I;%BJ1AVv{;|{J(=_acVlb^&1b3O%k6@Wwe+WpA6gN4t&3D7k_w? 
ziBBDC#M;G|=-*2hs*}3l&8ISDVKon6q^dc-U%pDf{chB4=Nf8L?!;b&6;QhUDs=C& zm-hNAv#Y%wPtp7=o-3^r9p9yJ>@+WM(p|(myBJ~dq28RLSwTNmy%ukF^n%9DOY!-& zQkpTx1kdJMkz=7MSP%Fv29ZB^D~!eHk*PGTzapp?81RDNLFlZ%7faJO;$Xd4R1MR` zL1Xmj_LLeZ_LiP8k7grOn2Yzv`qSIcSh8E8%1^9M$%0o0(adMscspo1O|tdp#2&^{ z&(aT9jF)B*d*(t|v^DRly+ti;dGPtuMqHpNhu%?k*kf>xFvA7#Nb6>5DLC%&Jgycl z_v%F@xhAOmcp;hUFMy8Sx?*DAMi?)eQlsYr#9M>uQ|}XE;uqU=@;0RW6Wh0hqiT^EcD=cd&;E$S=a=npMSKYKA9YsTF}qlJJ=PW6 zB&9`8FC(1c(-STvGYy=d%KJunvEj&6to)RSaB&ig+G}zB@Cd2vl}I&L{aM*J23MaL zh;!DYg2Pl6Tn@W%t<7iXr?LxwY3uRlQ8ncD?VcPsyn|W&W>Z*aJ?wDgKK$0w zhKy;e(QrXH9{-syq~0%)E%rVI!K()2?67_CDc~(+|Fweztvd1N5;?p*qln9}1TIr`+|6)P)r%vaz5-Sse8-3CBK@kM z!O)I$tlEXe5}#+iFoZ{&?5E?ElPKg=ECz1h0YeI1d7*zX&UM*{>f@|9?%sGpHHqOp z<+>2`)syON&vMAp)zpx^09$;zU?rq-P-79?+`dLAtr;v7zi$w#q|Cb0bT?d(Eb#+x z4P|F%O>T*o_O($R`Od~n?l#zr|2PEm;W1j^Hl|)&x;Bdoj%Pvrq)d2~slxNJG%36J zA{}1Z6)!(8ZpT-LFf%t=nnToicL1 zBV~NUn9Y2tI; z;1pSnNr&#QJvq%{5A+)wLe{mebYUe!sFh;3n^JbmKwd3%dIsv| zuoSU?x}VB&3$@b_f2JGW5eM+r<{D~^_(}fWm+4MtDUvBptJDQA?JT3d;Kr?!b$EKja#_>9XnvuQ041Al)1ymnykYQU3OKFKuA|&> z+(0w3xcM5EIYdC@vK!);MWytzth@Z5lofTpmyMluEX2THUUJTz@4w~6E!ROko zIA({6#Iltf2C0>(IVBKldnKah#zvvrhr<*g?J(yjy1|iKq42Tu99ng7C0`w!!lUjl z=gq$7K|MEH-Z{Q2_0w6-7ps!_@61%td_=rGG8vY2 zoB{j2RB)kZM|8d3gY+CqXyqL#|EBUszQMJ(#Gb4Mv8lIMzxtC*VO9}-dDUNJX)n9g z<&v27m89HNC#F@bnkO{TILnmz|)+{3@BoKZmfV zGx$!soaQ?%6>FsV{!aMic5Ee3yOzU@#E>+4&?*y7zx@jhkx#*PggzIa_miIU!#RYb zJrc7IiNkz$bFk+iHd(x$)VfTRfFaLp2CtAo62S_zM7SHOKQ4|u$KRs&V}lHWOO z5*FO^WzCvYID9z{^(SA48=ccJ;(G`FvM5v3QyVI`tZSE7ec1%xu6~2?!4;xH*Mq`F z$0V>x@FSD0?eg0(9ckXa0yMSB#623(Xt73eKqe*Qw8L^rQ@aewzXxNN!8-iQ(1Gsf zeik3+?F0Q!6KU*!`n=EKJiK}}8FU{Vf@AQDPIcTy)87|MK3fI)>$(A3M`+QNg)8yJ zHBH#LcpPaf)zFVCtKpc{Qr^1mSw)b}2En^Qke7E@M`Nqs()8msuzqD5d<-h4AHVuh zXyjC$bnOFmeY69P>ua+@oEhF*WXVwyH!$6CGi$t=%#oH&G)18c8Ftabug8>eVX@@i z9C3uE+uQS?zPUo@a7#RTv?ujH`JLtl=h5&Zdc5!ECt3PxS9C1WhP1W4v7hZ^h!?GK zy3t%z3Mj&}_q^bXSW_{1K_(vVUM9I$hT+rHN3x5<+hKb@9cWVgBMK2+@ZHc7IJO}c zcRlPSvFLY+2kqA5y{JCo?8ggmO8+|$zsHx`vMliOxz1cJPltyGf7846Z2VZ}&%+Ns zp|>;BvF{PULz9w0$KC@+bcn$~jmL1I#RRK0Qz4})L$r%f$0ISvsc2&{Eqa&%g)53d zjM@usVR~rz-v+s3kIo!dJAtQ^{-nzr)4;x3k-t_8(B3XF0%aEX?s*|>SYyL|RsNQH z9rD1P{?<2ev$N>Z^f=*7 zuiiXH>@J=d3Ov6-a>+|wzI&q|LO6Vcxp`Y@nYJUwWHeCO;HhYQvWnh%I|)f6RPjNX zDi86T0E&yH8AH$C;-W8Y;A@^u!S)^`QKBe!;YHD`X(~JJF9+M*)?(GeZfGrc<4(4> zWXDA_F57+qCM;V3i(5wVes3AOl8Ohpj+Ne7tKSM!mY$_+8`bE2L@!LgZ7STG6Cy4;G+De==!`}N zUUX;vU2?x9bv_2f@PyJa+!PSRW|qd-e@K6Re9T?)!Z_g5F`uYksWbYHJ4Xw1dh;Ee!lBn?Hg`N}Y_=$vy3QOHS-3QkJaW;CCVf&t{1kSGN7yW z-^H-_Zfv%BG>^_c4fmy7*sv|5`FWi_ITej&;iC%v3N*o*yJsQ&N_=)EV2zaxF#%a#w*?Tg;T{RuD^P)L3_zqb9 z>BjOk^>lH%g>-L;gpAEnmR=}=-I+54;Zh7k4+Dbcl?O87;4 z1D(AP!kW_kp$F@r-EVaq&>G4&LbIq~UsP^bs4F@7>(QRKY{&9cDO&nmZxs&gX$+bqrs+H5N2E8 zv$j3rM`axp;x1CWfd_uy5i2S@3dAc>hWO6&99?{3&386w(^!+fe0SU^nUc8@Ybp7I z-RT*4?X)?M@0t$x_2XgQn9jU6X%D1pO1;|R-DLQsOuS;~#|fKjAnR(4cw+5XY?$xN z(Ou6|(1cn_?&(Nn#HzboVPl@aD zr^%8#@S#8_`sYxm^i){0XDIxRFNDwQI`YyqGtPac!C^DHqy5YQ5WRFZ-RzFMUH6ul zf4duebZP->r}6kJ>>^b*9;1E=*Wp=j9j+_R5#E>{qo)UwLI27Q^0Ze2k5g0lFDKwXS~1 zg~uB`&`>j)8m{M3Y>mW>Y`aB!k`BRyb8FGj$BD~tMTzBy^I+GO|F~gA94B<}z}eYe z{PT<&7br@s;BFe+yJss{4-VoEVWY&wp8L@0t>mV255&HSN5$=Z+%fsa76^}hN|#so z3oq{YS12allJ=JxSYw=nswXp0qvvCx-`HPZH?SRiLjCZY+b|5d_#8A&?xMIumnqmR z9Tj5kW2u=o*I#hwN16J3)cFXkS(=Pa0~Uzme_w~@tmhKP%ahxJl3`k{H($S3N~cWy zVcp;r+&Z-iBCeG2tFx#1(VJ;}Z@^CRP3Tm~CAf=izm>43i8c#675KR{H>~?Qj`u!r zBlk~9;(RdXzt)3-BH>8D{bHP8eZ9I zL%LcQexaS~uDi?`^G_TU-mXgG85cylfBrKBH%Pl;&Au!`F3CD?pokqUBuJU4=45Nu ziTmjub?+c;?t6*W*>vEeI?uu4`)sj8))uhavky|ARSA}xH_9q)Dq!raCYoBV$f*l8 
zFtV%yzB%t^qvjbxz$(cRQu#(Ww>KFkrH!W1!U*2)v`zfFbR$_T2lPqzhSS?m(ujwB zG22VPTkFOOi?$x5S5Cj7(Xp&7acCwP?+&L~E?emN&S>Pp9#L;W+eamU!`a zEG13s!QN}%Q*E6E4h-D~^X(#GTkJ|6IpL4Tpr#+Ps@;{e_mmRsd^3s4gx*3}c__#< z6ZqVn)53he4J23{F7MfG7n~Wji}PC6F+mvLH@L-fz06qYP=;tTp^m}&G~JETqxg?T zc=C(^4&PJ&s`?dBFYTTU7jL1-Tkk{B{6SFIHy;O_yar1<3Ebs=IWCw~4?XpYP-Suy zhEJW0(M`2@bK`xmRL{hO1%#dS=Hb!z>p-=Ol3XFa4bMla3@r4}kzGq!D;%A)2Gp+V z^Z4QY@MMD$*Hv7E(5q|Vw)Qc}>sm~`O*ATD=vDBxwL+`NAl@*flnK9!*Y2zHc&BFK zg?{+ogJC?g+c{noX)h$pn)%!?TiiZ>5#5csMZ><_BOAX?B#y2U!)65Iokb0l-#84P zbnhdca2td%XZ1M$lp%a~s}^RN0t{Uu!@pjO@oP~sE_snIygcd5-4oq$vW7364E91C zUkmFpg3#wF!AyxycdphRyHuWlVOLCf!mcM0|2>scM|9$bUHhqNM<*QEF74{at>s1g zp3sxPP8{7Bgf<1%&?`HUzFjyX^mSWFp#dwn=}|YB^UxUW^V(qI{9*9MG@QLQui`HI zb*R`jjTZ!-UjVjL{)7b@pg!MvbU z9uWUTG@9HW{>=3gv-E$G%7|60@Z}c#7q<|mIhP0n-NM*Jr?b%5e-ocfNv2l^yTSh` zI`4Qa-!G1%2qnrYDutx9M8k8RLxm73q$H%MRGM0fhLur9loExcj20E1`69o(_g$i&wbt3b*|6n{SMtQkhe+wrjrKcRN%IXm((p{Gp!{Uske-h8YXaq zd0+mQmB#&Yu3_z_Pw?0NH0eom*E>O;*yKD7x2vzk4`Yf%<#l6t(qd7niva`j2+p6UrW9s}y7pA9Q@-_ybB z`P^-KG>p62Zu9Q`A$-%Z20V89F-(}lK~J@Xrm7MNN2bXKCiG|Z)qcFb)&(u*mP$O4 zHpp5{G%U0ye>^^iuRrzUg<>33Y2K-D@8iuAs$X#9JP|zpbl}$W#=?}M3c5+VXvcaL z+4Fu!>G#YISQ_j@)tPg6MczeNct9UJGz@^F5x>M9D<07h?**7T%}88P(?~Isl4NUV z>=OQL2!=A1JouYF1}9$IjZ?>D@^VK+;bacmNHbJXCmPGVdkY3qKX}xMY|vh-2z&h^ zaiFJ#p!j1ktE3yC=J8lM-E9HyGDt_6!fAN_Qi0EhJ(Ts|{EuFY=?`6|+rbE#I=fF9 zPJf1YQ^WCXTvauYmv%Jbf*K_}mSZo9Eh^mkaH?2*R+^RmSxVb|e5-67P|9l-eze)ys$D`C!@`xUYye~X!yjE3w`VIwH zAEt%Ik3mm2UdR-O@nhdQYIwU1DjWY3OAiF#guEH}^^u))FQ~(B`_|JCiG3fmQYQ6= zjQBt663X9@#>c1aL**N9#ba$JVV8SXUTFSNb`BOo-l8+~QAf%)xsKxM30G{MoyvyX zy{}=|<&K=ZbuS(BS%8^Any7cz21$dOMXaB0K*Lui!;{n1 z!m$_`KdGG{PQE?_Bb|PTCsg{9gY^6!ZZBp}4R`t}F{b7xydujib0|-m!_)oJpv&-P z*g4~gctx6HYA!e^jIF!~u7@jV#golAXv1fixITtdjo0D!{b_Vbx=)?q(i?M256k^_ z&xZJ+EAUxCC!RAWA1-ys2Cc~okl%d+xcCL)+BrUGEYF~+m)pSNssm(vxq#}^HE_*V zKkD^sIu_sd0XxkwY`^dUmQ`5d!qIc!TzUZ6V5Hhu(ph>RUTn-k)1Va6o6}1c+BP148|L86?_b4R51e^so#d*LI_VGD z00dbK{3Ga~rUcGie?gVC3T?3=!D?k9UI@36=c>hG*|MIP{%$BG zdV1ksg`eUuGZR65-%1d^-J;ji2U6%EX??C8Oe?~l!?Wz2tZRIWLL(AcN4JhYeHqHo zZ=R^KUxrU&L&RaxW5Bxo9myWg;v3aY==lBv+>VnN6fQ?0r(+kaYO=;&oA(GQ10M?V z?gz!ehxG98X*V3*^9KHMwa4DhOGVS}hse|+lQbqKgSw{=RCpV3LCj~+>{bLm&!R{r z^$3N^4Ctm+fy71*L9MVO5K_D!$vFmnyKRCE1~SyFE`{F_z47qUHrQP{fLD!5rk)G` z&?@8K^k>^-EUJv>?w+MW+wpLIy00hHObTXA?-Ov}M-g{wj9}EhO`pRri|5wZk=9Z* z%rVZPLDFZn|0VL8AIeT27{8Od@>7V*Ln!|_km7O{MLIe&gy1O++ad|!Vk zRv0?6er*Zg(T$K(NhjtweYSWNP6qCaG2?DL4vfA{8F9IAXHJ>WepwD3J_TX?w|n%$ zwkHh>*Z{3hAB#Z&n!<?Ox zJv3t9SG~mTvvzZPLNC@m_M9j0C}$hHFkW{pl52GD&@-1|JiELa3J!ndzXh`}wQ#xQ z)w1BvTYiZBTQ$+>?lIa_`Chmyi{pubi@8nT3xgdNlE%Yf>?4No)?Q}3ZNX#l_gr%h zH;JNcHb;d44g;ZUjXD%A`bhm3O@pqh?4YCce#p7G49!jNif=r|)2TpJ%*^)yS=}}) zTq0$LKTTnirS%+?>&XSL4DkKZ9KK!snWGen$);5mhCi&ulfL=9cZUi@6%@dthr{{h zvmo3!LWXJ&`eSZt34HY|#$n+V=oqjQ){fr>E|!mM?hWsab$fD24@0Oj${EH>eT0T$ zfp4Z*aDb9J<~HpSUmy9v62yZiO^e{X7LffrodDz2ro&N#d(`04-A3U|Kep?xB7QZ@ zfE_D#ajmmH51u+!ROn=ZD@$Wh*KHkSX~v=70X<5xlcDbbJN&b=AF}3ll4hdWV+J}f*cnTI!K{3e6W6+-eRPcFJ#LxUdA;Bx8ycuw_j@t3zMTwGK} zW4?{U9g!n3$7dEsE{rAdMkcyMFJ_~%R64z|MzCwW16J{GU{-JnkCk{LkIuIU&$3H| zDQA!dZIJeZf4lO7=Ff81aa+KxyCvUCl-#&?2IBiP5w>ZJgVpj~Fl$3E;g6y+Xf8Ys z&pdbH!rJAirK(64Ze>E^#EE=gcQD4^e+ZThl54kNHD)L85w_=r$vYjn3E>0Q;K9}X zakyPi3{<-*+TU&#O12!59FWUpqm0wZyyIqZMqXd=Ojd`tk^f2es)fR@>&?Q#q$K(< zdjjn6eoQM@d$VIcV9d{F2$cq%@xIe^bzN#{oC-h1$m z5@ANa6Qr{)io^B(3G^nK6WsLCtY3~$di)NwpVHuzgVSmG1Aox&_ZfD*8H`%tRf0-j zAKbgzh=*LeO_7~DaQ~j#+;VjcCf&=2_s{>)X%oq*QG1AzcjtrA6-_Lh8b-mXs?vj3 z2Tw;{qTu2hajrCLn|Q>7@4SC0yvkfx74$q0`#SICo(0;fc?PCJ)APa_`mPwem0} zbTs6|!X8pKPKkBi1@ZL(im>=@Z_(d<@BK`U;6NTaXp-m 
z?a3UeT9Lx?e*F@pAHmhnD!+Dc;G+s7?+E=P_j`Bu6r*uRb ztzRLPiK9SKJ`tWs{;N&fRq*u*5#AL{MzG8kFJ_LzZb@@t&7_rr-|q@IFL74ahLuyv zj2GgseKPjd>5g-be1dR|c&G`;W93aNC1=E0{3y)`Qdh*lz>RHScWo$*@3{wUU5~+A zApsY>_lH^S5=+$npOS}k^of~vGZ>kF2Qwqan6QX$5@KY2zx(}NF7=guAYsBf+8?fv8 zzF1tp9a^%Y=yX~ZTT1t+{x{npU@SqZOCKCsbBIT3b_0bwT3i==M#x8rb@py6$%ppA z?9ly$UR$t0df&}6j^zR25_7A$2fum{&xzYJ$g4P>W_%rqF#)<&hsPd(vVrQXn)rl% z7IonA13}U`F@v2l{VCtGkzW?R1dofi1l_I@*UdGMzTF&4g0?!o)=!b?_N#&EiEBCD z?VH%35`%%EmxbQXVlb~l4^4_*3H^=C@yYXT*!^URltbxoueR^uV2?9r}# zzThOqKI+LEPcFiUgUnZ9T(V;m$Hneep6~>N_!KP71OZDUh??WB=M(vj-)5X%Eoo@t|ZLUON$;ekkD7 za69aIIZO24Z7KAQ&Ve(Rqq(r+xezz)kGLvoGdkF7;^vfbJat6}-hR9f#u(qiK3^+k zF!~Z$s~wR#2tP)yq z*YozmDfa}-h2e<-ez`Z3K`SLsq`BAZ(xj6I>q zh6!%c++!>!emD%-tON$NUgF|waTFgvhN~CM1iM34cxBrr2-rQFWX2!{uhQgG8M~zX z*h?_eYlQ7u_3*cDDcN@}Mx`Z{xT_%recu>Sm{}|ZI9&ssLxE@-wgo=#a{!$SBcUa; zC)*h>fa^Cd(ru@IWTkYOp1s}7txL|y3}&0ao98*8nt7RCD!l=!q1dIRNpD(65eXk#K=$-X0?ImjOEy|RSnZVj^Mb3Ct%x>iG?Ls(BJ_=M>7PO}ZfX7OfTDc==(~&tU#nrjOsANYBZHKeYSO z1)8kwMo{369;cS#Z`oMVduYQ6wV@C$Wl%So8If~+Jr&N5!c}|pa2^Naj($A63u<=<5 zUY_8HV>KU(P0~A9%P9`t7)gEq^jJ{ezMcLvUO+-#F$K7Ug6+cH5NW@R%^N<$6)DFs ze@}?e?6iwtQX0>>cApGJjH5##Wso(qiQoM4rMXQh9QIlj+@cd$o*d5^((Jg4mm8l` zJ5R+;$-J>w6C}QH;|`0{#K_eXt8z+uz}<`qta;dxzKmW4|7aT@bI_pgZEa*fOB)-c zY~Y1>Q$gA6s$e{MA2>bK<)34E;rgJJ!qN>(;MmI+p~HnU;=A$cRe|^s-p8OnB zw6D_&I7Je`26cms@#EPw^j>u{=-rthWi!U(`nu6{Ff>=VGg}KI@}9$&A5{=HXPVq% znkAmw=MQg=n{%^k6MS%-h1){2V8^C?@YMdG%y&kFSRJrU;?x=8vmWXEQMZ^k%(_Hj zNwuszOc%BOmCFA5tK) zAAl)?FHnc>2cVaiD)>Lt5#HDq;oDekJ~fQs#IpnTF%8W5o=JWE?oU)&c9I>&S2y@@G?A9jQLsgwE1 z!ap!>?P@YkKLA;or?~OGC6D}+%JY6oJof6v)R2(@N@g{jWo;zrhgFc~%dtGBj}JK} zyU-_TZ)E>y6emmXdec~-$eMh9SUv)_ea?nomTI^$_qixIs0&;FJ0s?K$uJ~x9mdOj zgn-}>oGiZ$%M7Q0>Y!|(LKkjvyde6&9LB3xJ|nB}Cv?MU3NLKw1jY`TV%ZuKE~^V- z;|4SAlF*HhEn5dmj;qih({~l~o7Pf{h8{T;92Cw>2Zm!GMzm4P358jjJ-}T%YZHo6!$Y7d-9+wX;ruo}9@JKgv9P-VAI$D^+h8@SC z<3|Z1+xfV7W!_kxXgq{JSLl$|xGr?HOXB-??*;?w`hYXGX!68F4sU)^1bq#29T%DcMOVtLQ^La%+h`$PSHOK zQBG09z4MmTd9)22-tPlb56qPA$uyw0n;{mzG-v%!OK{5&6|u4F9w;~Hi`S0qmhvJA zxKTL{l+C@NVuv!i@p3*swGUd}eJ=|fz6*p$N1!6-DVblF_!HrkG{DY_%oeYPxuX?? 
zrE6W$$umu8C{+{s7@G2{lGAWnQ-y6dO7p^}w&?L{DqFX25q*t2;gb0=G-0ZT#GdlO zy7DHtm>hu3MkQi|L%q;N+z1}?m5~Q~@&4!w;DWKD{U?7mOIuc@viz6WS7M5k?aIY& z5`$sG$Mbm6dIgdxh95< zod#g;R7Y0%pv^-TG(x`hBQlfj8kRVZlH4~kayTA|A`HfRR?`1uLXj-kv>U(dJe6lg z%%IwxTPZ3uRX85ngD$Om2M^-{!299}c(rT@Sm-$LTOCD=UHlwgUDF0n=QnVCwjB)H z*BiYDOFs6+eKFrU4OF5hiN*b*p;u%!y13}DUQL*=sI*b)e%})PjTCHyrR+mlwB&dB z)C+GHuf-dyb12?^o6XMO%Cz@TU$o!&P~LVrn_Lv{2{!#Y!pF`Fac^{A^m>p+cW3zs z$Mh#qk9q1;vr>ukqMgKzI}EsXLOI-N>?+!C$Ll{1!m?(!~Js`aZ>96bc;L( zZL1~!j#fV*;9(k`%dW!TA6Qfxa7s`aG=pPrh4a|?CBo>cUhG&l4^LEWr$G@KY+cd` zo1)Dj+$&9Jw>IW^E|pXg`x3nSW>%H|OeNoiOYz3XR#ET2g)^3Ix(&kwGx%q*oz9xpioX9%#>ob9Qao8lSI4ShX2M+b zFjz{H-tCvI^3>qxQ*M*mwl3URo1o9F<51JOoksLi!9=}$y0KE?0@TVd>-YrpbgIQy zr+Z+Z@wdt1-63J~0hv5Z^9tI3nj%D(4+TvnZ<-ig2G-mSw>aFA&Vif6Vy*y^Tmj~- zhhb)sKyA|9qWNbfRP0&}kGA>nj@5baxZ@i-+IujCIk(Y}2Oc2ynjrmsQa)&oCChJF zp-?jkKTCUn1M8oV|H>4~(EJY+JXB%mLt9~7(m&{AT}@F>9jN8Mxv=TheAF5|8}2Ta zyuWVJeedE!P`2eCl=-Oe9*1UmRF7P?bKL@?ovR_vaFP7Jc7=RgUZmis9Ye0y%I%b%9PGHp<0pt4c zZ!`;!Cc1KF^B$P%Y0j-l<=k?<9Bx0kCZGAW3#Tq||yb|4QR?mB7}mml%-0nQR6{(JtpcT%z3v6=eQ=s!{>;+LC}< z+#vk%AsGK9o*t$XH#)zerI~R;(QGZ!d*6{O)h-JzPrlLld1mk}ZZ4KB=t`q zRo2_mUwW2%;Q0Y%H0xs!w~lLs8|O@fKSyg}ghGVW0iMmBG?L-p4_nqS8VRq*{-8&# zX4q(U4^~}0%eNLsaKeK=G`qJxmaENV-(p*Fz|4ixz0h@frRs=BOKecrd4$ zrD)e}29DBtEOQy9kC~bqX>VQ%pIulYCjXHbs;1d=M{lCAz-Sh$@94x6dISlP6E@+U zRb|4P3EkPw{yH7iUqut=bfNZ^WAyv8HbiXMMXG^ypxx<~_&Y*_{-jECGfD)*1A9ah z;|6wmb=C3iaH!8ZMf9C!!4J=ZSy%jO7* z=T1Xdx`AM>9Ybp0-;jpEddOU0L5~ZkiW9oFf@Rlkc=xd=s;o1?p);ny55x0xKX8Nm zZmN;+J_a25_quDBjLA4EsYD@Ak0?SDjc+}fU89bg7tR;esfnx z;&#l$u1CDl;Et_0&nBEZXy>u6)>SBxH%X3VZ4P_+Sa>bIAxCL%8svTmdBri@e^3b% z>_)Pt)j3)}I|LWa&j9?8&h;v{#QU?~2}M#p$716G3UBxeMUf{c+A)s8ePncMeJbZp z9?H{aD>H3XrhtG`aHBSzbvLXaw9Tm0D`F;Ou`i?%6AuNoWA=$ zsf5o>7V_!m(&4>sBBp~LRi+#f&U7;nZTGvPO7J3BfA}Un z`p=5@uhry2vy-Ct#*ftCIE>DXKLabB8w8sP{@7;tTl{x^nRwbhmwNmQqH)n3aZvd` zvChC3+cl@+tiheI{pb(a@=lMl-QBT1(}3y&?SysNQl~qW7U^zT37@kTm}(>R zG~FV2X!&EeCz&K)5rVcSGN86p3CV33ErTai9G{I3D{Y0XZzfV(L9O^IYZ=)r?*gW9 zkxnn!ET;6?1HnZn1*eS((9>fQS^FQL;}RRA;lxfD^-LEhN_?-beV60Y#rv=^v@>=% z8YOj$VsT9EK)h7B7lqhWxTM(?7HkT~V?NSeV(~=$_N->kH$&Uz7EgeOBJnF@FWA;M8_HS^izBAfbe3o3L8LYe9h5tKhM2(;C z!I3>?U>iWp6fzf8(RrnYVB{{(z#{z8kt$?|IyTY>ERQwlPh3@J3=sfQROz0ZQ zGb_CD#}G5&-{ENR8a5Pp(jIboFoJ6)JrF#8?!>UtDSY5UK8Mu~m3Z+{(8E1l{QbKp zemJ&QY!6<+!v>oHjf&+B@;kIlVp?`c*XGS5YQ?RN)%0OQv+zP)@Z+dbd8QrC6E(PtM;Bb18Oc|Vt_QV++DIAN zSlkpv-j(O2pXU(gmsc(S)5YdzRv{HMMu8Y$#NTxWlTDQm_gMc2dJdX_^FkMjEgc`i zaF*UTO_A&yc86YU>@PeTxR>N#OW?5i3vp6NUo=cL!PVVol8(Zfi4ucj6cKOQ_r3rkHL82hKf ze2dZ4l)r>qEKfs__Yd0NV>0dzONXb>m5mD)uy*?#jLkR<<-W!o=JktO&k$u!z9*X< zF^HAN5#5v4(@FD-g+8WVz{1B5`W$T)I_eqlU5S&}<9-G0Su}ytO!C2E*D>naCyafs z?&jZHUx}?PN1*1Tls!+&1OJAzpd<9AHQ$y%=k8nJ`txKwb8il{cOGjGWEWg#*j`+7Ih1E(IG-tjfHN2K!g3tD*Eq6%7{r;w8KRxXUu zCZlr|bbGxvRqpZx`L7$~dM}Bm+RWniIy-#z`5lFMg;C;$zhu`klX@zCgiHT@fvSBv z7}WC^t<)RHKV~-3UVRrzxfITu7U%Hr(0f9kA02Rfu5>2#myuukPjTNKRkk?L2HC-X zA#3|AI6gI?_hCly$=+P_L83I0KE049CQ13gML3+ z>2KR2S~K!6gpe`64@u&tS>PGD36Fi3Lu|}zabQb1 zte}n9A!Y%l{GEp1gNNZC$?I*h@ed?Dn1!##E5oN5*JZm?o%oLbDtLZRlV{9!rQ3aD zVa=KrR=zuxU;h~b>iUYXOH4)Eln}PIjfO7N6|e1RM2DTCFm%`r(tdQDGy;#vopTDX z%Wn@s*LoCrY-pl=m63w~n_a-Uxx#2E_x-Fu8<(dw!2oIIkW%4?#cLOG@pKFPm$eGt z`ZZBytSTR~_k$ee+u*MVJnusQ3{f7A)B7vSbCcAtG3BH1ei73NmF+x9@f;=lXyA=> z1>BP51zU9$aO(_HhG$*TMbii#wcV77;qTyw+XlK&lM71{ZLzuc3#d>wV*5vLWoy2U zsqs8}>HD}*;L_7BFwgY_r2dv1Klgpe z;h_sZq7<3qWK*zh)8*A;^iZjr5@(z^2k1JPwg_Hy>-iPFc;t&BLH zDQXxwxhM8|z7KniwuHYk_QP3!Yp#jv2n(n8C0KKW3_~*6r-Q^4%Pvw8ChNHPSU;NoP5+~}M z2frPQ;mV3Rpn9ku50zhrwk5h8SdH{(pe8Ss`YQHmS)e>nOL#fXj(sNNLU>soYcJ|c 
z77GsX=6S~C@0Y}~YZ8$cokQ`=wq2Q3k=x;F?KkNHpyZu4h&?fOzy&N%k^DnSid0)so zqJZ_izd%^N0)~E=2%XPpV{m3UeDG)Jo}z@S3g6R;6eHY#OYv8aE;wOl0=?XeJn?rL zA^2nai7J_0cd2Vrmdw$cw}OMdC+N7J28$2fxc8zp9F;wvpZ8mYd7T{5$u)-JlInzx z`HQIX=@!hlO2pbJ6VYJLF-lY43sVlgk`1ra7mU64N*$sonr7AoC*PQaYj@1Tju-Az zvD0Nb6x$tLdiRpfb#n#%?V)_(!#EH!4nSnqQ+ikYMc9(yg_S;1#xkfIkL)Y)E~E_X z?|HlF>X+w~W@SqyM{m&dOvG4?fwCH0Hqhuowt{Z_-nng$xsGm?$$HQ|YJ zH+@xK=Y>SZRBW_DbAy*%bXA^p-r`YFKBuS9rTo3!|5o3K=8ng--8YQuxPu z_-7c5tugcQkn(6W)!k{+f1D~FOOcp`M~2feFFPp*4`k51PcYwi3Qg-*3R}-MlJb!N zamA`$tg<{?v`$T+@9hu3Idd}E1*M7+r4!lo^>AKhAW9pMTa@k{4FyN*q44cojEEYI zhGo6+P>1c}-`1UAw){Obh54Z6f0LxV=XEh4T~qt}C1pUyRs< z&m@oM-EAzt`Y#ge1U+H;q_NmMeX}^~!UeJU=??69UxNnJc(cunj&QQY5zlMJ)9#1n zoHR6erSZ@9dxP5Fg&P$!dHo^P&!tf>Blr}`- z8{HyXqjPk3@Gr=`BsqiLCUeY*5MJV!7Uv!71h*ZHVA{hVUYGa|igY`Q8>J3u!aAVqQ)5Jn6HB?m zDheXTSR)*LEF}5WQm3_1MCwy0%Apyat{BZ@I*q^!>nv$%n-vds(d2WpPC?vzCowu_ zI_$onLrSUY+#0Nc(>e?yn_5qyv@u((KdQp%1CG%fV}F|ME%kaw70~dua+*A;I~33B z%6cK=A+1Ai`K~!PS^w@{p1IM1>xw3FdWJ7^(nFf{#fQys3}ZomwmW;7I;)&zpWHIu zoodLlYr3G?P7RJo02W8yfz|sg*ugoF{jTbx_e?!p(nrRp<{u~9Ki0TBJ_);NsbXNS zLHOc$2g#9TjhX>RsLf_y)dHl zQP3Q@1)QrCVBt1@Ow?^7fftGC731;DN)_CuZ^$c?&p|h}caV3Y6;l73FMhg^Kr?oF zu;HH!OnDiP3(jqT4ztvGtFANY-g(Pk)@`+W~u;l*-m_Tg$8b5_weJB%UF? z3p!5jgC{++pg`jE9=12;%MV=e$AkuGikQJ360cS_vJbDDB6TbOO{QM`WjN`X)cck8 zO4Ix5@b}BJMc-5LbXWR}&nr&nX1g>>*`X?W72cr^Nhj&rwG$M3+Kx~DbK~anft;T= zfSZpT0$ZU)ba-+cI@&eLuD1V^XJ05Jv9JnOM@^P>E3pNd^$pT{X^BUE9;F6L15=3OI8aF-%ml}`%N_bRs=fjIwjBP`5NkaPQ;F?vY{-`?%FC?bu8k=8Y{|C*`8hi)TFInQ17zc9E ztRtTc3)bId4PGS@Kd=B%`GO&?KQcuy@r-5lOZUV_TVnY}hhFf$%A1Ws&(Iaso$NR~ zpNBUp^X8O79x|ek4K~kZ4KFQL+o#AkW^Cv39c6U5=!)RirpY}&jDm)W6c{ppmax0f ziv_iO*qx?|Z}MJ4pp>y%oc~n3dLt5#tgC|oNrPpZ&Nd2}p1rYpceZ%kDitd~d0@?e zb_fgEF3VZiAJZ?4!|X&)&`(R?70%MT+OeJ%x$BAk=F54Y;Y(R@Y7*qdORTa(R(xrF zA-oHI1LIS}+4Aol`gD6c2+NOApHu3v@lvI%%i-f38C^zh2Rq}<`3(@Gcu356E|Z)b z-{5xWFHm_<0~%dh=x5Ld>~vuu24|E}^Vu5waQg;EsjQ{LLp9OcHilcJj`v|BL)dX^ zAZphlnn&xvm#QP;#EL^y{Bk>gN*{=7_ww+K_dY5*5CzvZ$KXrnSRv@$Y3L{U5b;YZ zXg_kq)aEuqjVgGf`;-3j>m`c7^zLjY416H)JrV&NqmYn4;@GEy7%#1K?h@ z0sDSkjH#9D(5>%mQR#GnKLx}Di}S8PS4orZw8;S7u8d^874PVNMI7eX9l*0r{b7sSJVDrcgwzlFvzCh@9~ocB zr3)v~grgedw&op89XgUt-94mSnkH|1BsrkRD{#pC&lK)m&lgPs@amn3cuZx00L$X2 zx=NF)>*w*ZFIMOkqE(f=Y9Rje3`Bjaj(qn=8^uRP!#6uQhz8-@-mQfWbonfCD&9aw z;ze=hk}3SyOk(%Cbrw2{K2+Pqm_H2e$J5=kxiIS_d2RQ?^6DkDKX9}-U{D;@&F{+* z)z>LpYLhLT6D!JlWeP98rO3+fgb7_<_~Jk5%+%*pA=iDKgT~)xW3iNp@qds>kx943 zmy29bEjEvCnDxfJ&Q=&aVHNB;c@>6G6iv3U=j2;j++aA1PJW)l^%9r$!edLmc_L5n zf33n>x1WV>1O9{QPRr@_S9dTJpF-)bMBKK1F<(5EOxKEY#P=_AapY#z|F1thA#)C; zsZ5~y00$`B6k8SfxhM2h-X+t0+yYI??wn?^m-cwuVM}2@yioEU6t;I~lSh5LU)0S7O_+2KeuZ11}2>$8j@`!dBA|nB|jAGpct(CzQapF(ro=JT#N9pfu}h0NB__J?oKKPXY4$4owD%+244(!cW8E-t zYXd2iOFYS5Mbzy?t}v*3H>}$fCM(OO=Hsl5bj-vh9z}Y=u?}9O=tVVteN4c(W)qQ!?G|*dk%!Ejs>eQ54!nW zffegI@s;(Ham-6~l4~f7HwS9tl9)unq%2)HY&nztMsDS9qqL-)lNm3&c$plh=h2mZ z7CgGI^nCmOhNh_9T<>**R#xBV!>@btnbgjh_o4%4JzXOoub9a-{!u)kypR%SbmK9` z?(og^5xEca#iwaoqryy^a}9y@OVt` z@=Un*V=p!BKMRqG=2WwJJ09!jDc^Sx=~<_E{L%X+tTh>jx%1*NsxiB&bD4k{+0rZ| zf_TTxJsh5Jh0>>Wl)QKcX|9RCyl2Q}@RUYbzv)8jgQd{-PY2N26$Vd?`m?FT`>$#m2;Qo-)NpDHHg(D1QRkL{ z_p7_$KXX3yAM_u%w;z#TJKvFgPk3-yM`hR(F&vH;B;av}IJ~ga2Cr_5o5&x=$V4uo>AZumLQK9KbHcf_x$2-^KtiB1$^DL0r%M_(W%n?q~hF_-ju$PCpMa) z^}kKxj}g^ytY`{4T$Qm-%66LM;vqlrV7@SX##S6Bs`9@bYP4})iSTIgeYtJ)FyWTL z0{*kag%++mA$I<%LkVMRNVFMtx;g1EJE~kHC22O=_+cJ>7;gmCF*;~*aVqw1?@U{> zCc!YtC%b3hHu${O5q3@N$)Tq+X)70qn(2s(|0$qpA8*R9H^HB?6NF~D7MrdxmDpBF zg6@fhIDgF-`cQfaOl)6)+6EK&^-}>4NBLk|y#elAdR5{FoT4*zGlcMSuj%6SWO#6L z58BPume>0HplOnXeDmtQJjaIU>&VeqE%EfL?)vg*6J_-1;X&JzjcIt?6bgLQoey?y 
zh1VHz^zpU>`ilo4ro%G1nL#;OR6McCzu!%aH5wtLl)3V^Q-!ce%^9k8qs0Gy zJ)i_%7yhO(j)pJpjZ0D-IawnG3&-CP#=f`<|3+xA>gw^Vb$luA`sdH~`k~ZkR#&PV zRY*xm-uU3g8E{({Ea>)m0Tz!#xzYS6Xa|0WYS($RxOV``%lrka@-Gm%KaO7CEXP;- zhoY)}Z(gur2#mOrh^JO;z)HUa6qXrs!Js4XLBCWOJGL3>_4;wNTFx5fiGzI z2#X^A(ub*=g-#X3@P^@1-tGkmZz26*?}B&W)-X>z z<(N)pzoPI%NITqEF$4o`swsD06eorR@R|z>obGPK?)#m49SI1Sw+PKEA1!@a6ljWD?XxJ$o`y8ExJyK?2=>c2(r+XdCa~8lyY5$>e z)E}xmRaklF16t)d59Zn|=OMvE;k^GA-jSe%0d1Y2rC&TxYR%%+WPnbQqoHM_2V4$} zrEgy|#4ao6@Sj9Ce5ZVzKE;jYHHl`lYSu}fo8n2C7Ss5TO$^LGFo5gteWGQ%Zqe7~ zO6aR3qq49+uvo2vGma(!q(_Lhqil{S*wV`VN`XC$)<3_z2lBmC{vCbIlI z6&!pH(K{ zuwD6dED0Tn@9!ya^4UQg_O}{Wt?Vkk=+cP7rD*i542P%3-C)%B4(PV}IsJ@v$I%W| zQnvrIEa_bwv|EhBR}o`H-O}SU`cNKTl|0g(M&>ZK`$DQ7JRIL<2SH1h%k-)TLF=p_ zaPRMl|218b_K2fc-CxS*Gz8<>fxE?({~d>b@EPFMEb;eqNARGbtMGttJczI5aQBT1 zDyDR2hmvee^j?dne>76^R|j%D)|K~tuAobv<8VRLAzF960H10}`-D*oh&~L&#Ks?> z^7X8sCyRrN64U8!S{NKv?tm&`g`B>#lDuYmliu`dF-!7QZ0MXp?hX6+ncN(k+`Dk* zrf@16SH)}J-67|*(%p{ED!AT#46HKOz=xsV$*(S-`{-UI?T-l*72B5;SEWk5$_Y>x zHJg~{)`5Z1k7J+L_H(7VwWawsES<- zvcyd`mSNo>SK}T`yF3FIPneC$>sR3|y$^!@IWIgh*#htblz8Y$bUkIR>g z5D%93Mc@0>jXrx*d-C9t3*LgJ9Om@i@r; z0bGe_7L=T}h=!IQ$nxSQell~|m|@dpW9A+>2p3LDzr)FtJzvFB>xw`;+5aOn-0Fst z!mJ@U3t4}#4otp0nQcEV=fmZ(+*Du3dkszSUZ6cUMyha6@jP65nG4e$-_teMRQfb^ zDBN{E4KrLjQl`lkv3H0O-s^LMrdf=G+wsN1y7(9|q`VvBfQ^*CD+~vg?GUV1$-u}h zm9|}}p=sMBw!)HkV85>vABOeAu6f3scUH>zYIPTqvu?^%Q=Hi6bG3NXECTYE^}(VZ zJJ5Yj0R9@;1DXTU#1p-5 z&S|36_e^n#s}esq9?pIAWO>ZK^(_0cKUu1)vP)|`SA93+BKH?`_ihYVAI;$dDXH{j za}GV)(+R!59OKm~sr)TcgBSdlNOsdcTzS7TW;w@Quw)$NHXK$Rvj_mn(%2$gFR~~1- zjYu+LG8+At2xet!VpKq1Om~~bS#vsR=Kuq^Vki0j%Y39gqA6UE)1lST^LclyHm<4N zg1mJBCa8?z(-;lWr-JG79C`feun?B`_QrmKB6ri6pDxUF+PHdm=* z!K86`b*!?e@=+w)RAc;l@=;lg+!LrC+nrr{rgEQ-ladB=5|oxM#C^{)=;Z#HoGrhD zeSX!6Q!HJnW!nMp3-ac+phy3)H{U;+KDy2sag4Nxiqk3LX`U zFB%^Td!{ZYt;$faJ@JMf|Gk2X<+`IDF6Tjy4e{Jh5(4H%V9@GBNoLBRCb5JS7Rw-cqpKNk{;0K{QBKUT!zu-KDV2a8q>buO0 zBeqY5KbczMMG*v#$Z`s@pNAjcslv$Kt5GM{oBo@AkPntmqlWD<+*PkHotoZ8Q`gSr zy;_6$*U$-E^L+rU{!f<&*#@IhPua4Uw}#@A_NAzPZXDIDN`_edIq>6|G2B&%#nVYI z#fI8hD99+{?dgvoDmIgxbal|s-4?Cu<4MQ%*}1hRB_D;!eQ|c?BMSKI%9UmZ$!dH8 z6}5U`?xX^cMnGYET`(V5$aLK}5Odrnk=(?&e5W-^aBQ>3gEsxyDrT>EF>wim>|QLk zUU@-rpR=jAr6>PXDWxs-{cy0}0^B&a52{Y>h3_^zqrhoeP&D749wv3+0a2z1*{*m| z^80-oJ(kaoDCW|l9=vvC5=YDKVlvgC(3b6REdP^0vvY+muO`zhjhUG5JdFK}?!uxn zPgJW~D%=Vmi~5!`Xz+i#vFCVgu2?RL&#FSOM*kYDm(_&7acAh9%VBWqI#<}(Z%^4Q z$+P0JVlSsi+@jMTJ+MO3R&=y3(dMfG@ZEX_wwMaI@5})4!RQ|Fw5?6Bow>|?{`;Qz zdW8)-F5F8C_W1FKl~bgwT06WMBlVDN9YpQN>PRlb5C_hghWY=^rAuM!d9UXe+L8AG z-UjT(yiews-;pmdk7eCc(hrk{%YWq~_D|!cYwEmW%`kTEew3CqmBE~=8+mk>{SwRR zwiqvQoq8p0Bkr@6H+*ZMKfB|_!j46x))v7s68b2r+zow;o{5D3lH&%llJ(de}7Dr z?#ZUjBk6CF4j=jAiVa5=Ve+1J(tXzkU51Xsc3oY}Q_mOX5AourrEdIeiw=FX*i1Lu zbJ$|?1YW)787;gVCul{Uld?D&!t=j2sODKwAq-GJ{)XQN|avan%OE)NW@rcpz`0<3I+rL7f^ zpl2wSZ^?x*?Wt5}@*>_g}ZDH%ofi^Bi}h z_Cun=fjQ7@7Ax+%KLl@BjKo((@5y(GIYjlmL(}f&lCHEPUGr)iq%Gbl@z1Vs$+wsE z^{f%R$x;WOj~jW?ldY6q8O-%T=~(D|9E!g5L-mHfyx>$&{xV1`9P$CZzD9gtP6wTLNr%f}&hU6#lkh9dS1=seDa=^8gyTZm2^MAYQq5$p zqE<1d-JX^{sirdtMHs!=i_LtK#UERW*sFRvw3YjS&*0T`!)+a&t?k86lI>{F_09Mr z^oG0BraNNYq$D0ZGK@PUHv8a!1bU)4i{1B6L+{R+5<@td4uqz0Ld!NVdu+_GZq zttWRkmw2QzC9bVqn4t2*oKzN0hwa~9yD#}+4|Y%wTQ_Lp9EnSlwc!Bu3|cE}wfDr8 zJMH=Wy57Qvv`pCc`57g-N}iFmd))WhZV}W%<8j1;L3pC@_PHqcOkAy9MCBoeAf@+k zv~irvroC?qcE|kP&&4O;h;v$S>&+=j{yi4e-e=(N#Co)QDftQu5>a$sjXtkBNqu@D zY5aZY=3-b#CPmVF$Hee+cP*3nk&+_y&~*@x?;eSEuXWk$>|S0w#}3<*6T~OVUm-j7 z3RTaZfa`xB2b1hr+CSe8x;N^xGbo4FD3`q^?2{?bu76Rhko<>i@V=9 zz&(RJ>@_75$7=6Fx2hrR(yt$>c8tPFV_wn9sjn$;fi4+q2MF74NIvvcc4#+Z3r(JZ 
zyrxiuhpPf`NyaevlTr)ohi(w8M~4Z^j{Yh$O-ynRUO5>j$$x=>wVwsm@T>4}`4{*w zvI`b?|AfZ!Od)P}i`Zg!2?{M`@ykJP3>xMMQ8yOC%RGt8=!JB!p&y-|iqd&wgyub^ z1o?AIJo(9nE)A>|dTQRJr2O-g_`RCOD($aj)cGj~r6~3HufK z%f%QR=Wh>bhvS5i#uuT|T8A@w)`OSH#`4YGgW<&BH*jOtQn8PEn)oZSLg*ZRk=k!x z7e;_1=UB;M#lPvK<$euXI}N$g_dAU_zY*RhJS($vRb-*mhC_OPBEzCyc!X5=`yo}# zu$oLkN{^^ESV8*#H$wD(x9NG@LI~>Z%LhJp;a@vSgm=-K#NQc7aQgWvx_>+Z(q8tE zvI<@KVyyyxY?wrwKL>(+tv!0?HPecB&)~OtqZs?%l{JoRreA?Je9qJXM?KWWu7*P} zdq^sTTrh$72H+)mzub>!4B`-lAh!_~cgkNW>fzPZ0&E>T1eR^RR^BF^lmA+pDdtZc z%|3onSREP%DGT-BoymCIyl*&Kyu1Kfm2G6(_XgDuaz>}RM(Ub5R-7f@NINGRad~tK zy!`POR+=jF`TuwC+?Yq{`z<*%Y!p@O8OQA_Ww_Jgp*Y`0k9QPV;Ivhl)Xyp(ijI3i z;k~ZduV^0TWSVf)%X%o-APZvEUGlqN1_st1nDa);qCOtY-a(^z#D<^3JpXZgFsM>u ziY3qiU1zwx&k?F-8PNBR?)>|PDWAOcT>S5^)XSzN!#2i^aCz1n*q>*GcjQf2Gx#nA zCP{4C34M9n=mU_|*bfJ!@1S8j^|{h96$WgSMHl0-cxm8TJQlBqYbB4dbG;l792!SO z&D+T!{~(oEN%O_*TjHMl!(zbeSUA667d6Z03JU_gIeT9um)a}wihb80Z{0#TGG#wq zSC~xdf32vs;fMG}z0dF8@YF&5Zn z(@$98^og|gPp0ZhM;rq_@V;7?)iQeX_}dd;$B^FqwaE{(XYBwL(H?ht^v1k#Z%N7b z4e4LEgjG_Hx`yQz;n@C9wCCno%6c_Nh}pA%v;I>hha<7%)=>gq_xFZHMo2U?LHsGj^u19AMC>m`@IGKhDqYb!XCo(AWL34 z{xTiKN}46-O2zkdaLQ#fA)|INs_*ZHx4aFgW=#*ixx^9Ak2m1LgZ+6~`%ui1PZn#! zo9J=48rGTV;olh&es`8Ej4(Ff^B)KC#6e#{Yj`O+2KsSBqogNh$l{MITUIX8k@|`w zsP^+nHhnq+1Z!jbjKj163aE?fXW{WnBmVbcD4#9%K3poiNLtfLpEixQxGmM+#8RTwzO5WjB^m5_{a4+}-!n4DWIHE!v6_!V9 ztWLq-OiebrD&?%6*nym0E!?-f3zKW4ZjSyYxC!65&y7unuhZ<t68 z@8m?4&+=$EGX%@`b9!=FQE;MvcCM?@4M@j2HfY&SOZ$4hgVb}LkhMu2b*UcGQ zBewBR>728ce+qAI)S!6P1UK&eDVX%HCAE-eF!Yut4(cuOVOF%!a_>N{GT4e%t^eR$ zP^#FJ=*afl6fxmNA)Ifuz-ar|Fe2_F*AJTjo!92_fonz3A!%+O9PW|rJ1Lt~IhJpI zo5W4CR?)_?Ba+UV%*j&kZ^`MOuJ=}Dva{uQ_INdwo&KDmNXsth+vl`U(seG6P_2Yl z;wic=Z%7MmorHzXQM6Lq2ye6mV6lUVD1UDhd>y=|JW6>CE?6CpvQqx&V9Pe~k!mWc zf6&C+rS@2HZ83ViwZm((#~{oJp&_luV3O1+@%ow($=Y`3S3Q1ERIW7$I|Bs=qYU`& zxkkM1?SqRf4#7q3f#_tO%)CgG_aENG5Of%Jteb_WT=Iq2wI`^|bTy0*7>oPAuE)tk zG4m;om(>qn0{DG4QX%6>dICch)zFyS$C)xlR{}tNBO#U2F|brYiVE z5)Ie)x-1qiY{WTz_CdO&iKZ`K$UC}cvBJWU=ol#%f{l z$sJ(*_k`%E+7rE3^rw3r&b%aPA0>wE5MEDch2`VtQL1VUJ!-Fo$rIC1#bq1rYXB&| z{<~at!**Qc>LZME$cFTQiTp+Ops>io1s#73N6oZS*qN$=oDel#9%HBOd@->=s z!ML9MM|cPImg9I{%@J^%Gn&6SCa}t$1X0I)4Tk;w1g_h|Iq|nGhwO-B+s{n`W@Pb* zRTV@gKauAwF6J!lpHT3rNXmh&W$gjyNOh;gW_)RmBff>;vd&YIKQfjlcqiJ__^J%n*Xi>Y*{zTI#FW%*|^`;iZc{hD>UJ1|uEX;D3z2j_H9; z$=jg7QW1L}SK+9F-L$>uPN?3$oyx~J;Z08)xassC_#bT)lBI-)L0BM$ugroQyC%`s z03Y}4m$t(Dhx&BGRN^8APKJrnOr#y&2Y-+q9{#hM3_m5H*TNgV! z2D0h09Pr7^<+9knl(p9a?7wLi{O8gKhKidg z>DE+Httf3H#qxTll~qy#6FkikevQkIv1>rK?RlY ztXu_ae&2x=yT8HFg|9eY~?#jaTRW0H0O;`S9^BP!xVye3oLtgG<`zaCSZxW+{~a92>_s z!tL>_DD|&>?ko6p{3)-HdkR&FHgI^{Ei%sN1-rcR$ZLcQ&D}Ya3I;zG`m9;){`A@d ziaa@z2OSy3Ula^EH?ol|3?>Uv_U~X>V`GB3-}W3BzN zXtKQubeENa_S#H}TK-+sZCe3`eK!l2N}Vw2QxYC)U0Z%-a}BN>HGn+)_KTZmg}S#^ zcB5%h2SQzE7rv&tO7j0%Vq%wikdr+iRNveTd9B8jX?+9=v%+x6jvhSOc&jj1)f$I2 znN!Tr3ZZG#Jy>G7U05^p8}#4%j{46?N8_HFIOSs#%~1$~3;Qhj>&|}=z2UcDzjr9b z)=1}^?izaf+7u&OUPT6kRiM_ABFmxrczL;89j7c z%SWD@vP|_==(x6kVyaWY_KMVJD`h0rbGmb&LU)|yd5LE4|0W!L(GFp%2Ze@hnz;7M zN9Z=smhPU_N4upZtZ%BvQ~&!YSa`(?gDH!8+EnpbmE-hwxejG53FQyA3GBPjk^VT9i`4cAxRM!K3QKA zyxm8grhDlMYhF}=pTy13F$?7BEm|{?d%e0xbMj(ubqX4%fe)xLg~caQ=Uc0{GM+IxF9wx3fXY#+Gq+DC8OH=j{gCM}B6B?)+jkNg zZ=RMoT^qr4Qz92gJjHuUZZE>XCr#E1pxl@Mp7M4jWa!-lqe&9CcG7h! 
zZrDRcR}ayE70TE?#e)91?<#+9Xof3h+JT{zhVjo@O-BtwsKl(8bUsetJ@&v|UT)yN zB`LHkHkZ4M&E@c+{`jEwxVSMXkntZ zrB`YE;d4U(O}jc(%;e{6r^R)akRi&qfOXjWC=gD-ASb&`OEvh zA?xF|?3`ccZt>HFU#Y1J-1-{N&Zde*S>^;4qa(?Q$VuoJgKQ6kq)hv*c2yI z#mg~nxO1llZ&<&PazBJ}__#t4S35&kmrh!jI+!=LTov6$mxHC2^gHym+@ue6O6v(;W5Pw-ona?dc=1_mN3hk~|sT%!q~Q5`(eT>;iQe!-+3S!B@|mNrn9zcaMr?@is`ex}vWJE6T`b-6`zDV%ttN>@rkxqWUwHt073Wo7SE{-OjdbLb&#{Si&Nh9|^? zIxjx>YMUstelQm;aYC)AQ36fhP8mx^!h*w=bfn0THcmm9 zUBdJCxQgqhN-T*VlfaA|f=_%CY!53|u_Q_R7T`Vg#p)C>jl+k`Vi{DdX{&con{ zbkr*v34LDgfSl%aC||aMtX@yWp-;1*<;HPZrZ05>hgrhad$+-2kvC{9^Tj@jqwtP@ z-2eCX`0i#to(Q{)r}u8a?u!#qlT~2y*dFblJt;pPchCLJR7Vb;c#dk- ze$kXg`)I82kmQ`4u_|{ueN2&Izs{i;q+LgLijLIspVXsuc`#∾K!b-ynBRvaouX zv~ws+hK#A>P$@YWyZE`$iMnEZQLsr2x_24!LVMt#PhVl*+6nkvF$5RP8;N(vzN4NI z=EBZrnJ_c96DkjV!{9lRbeK~`n^#9rx%8RD407i;l3&dv<0$Pe7>9bl15wUV9X~g^ z!?XwY&b@01L(O-4pbxiEl&Qqk`Pqg1v$DZg`I9hh=`c?9*5#KgEa0@Ihr~5ErI!zO zqj2akWWLSDF*~QA*#$+3J)z7`Ru966Ac+BjeZ?m?Z;9*FuL^f$)OktB9va@JiP{zE zStLFnJ>g&-GcfOb!;*>v9q3MqBN16Q`NP3AqCYEegpT@I5F*oB`r?BvpfVoDJ= z2vEC;+FHj$|M{kTs&+EJUgynW4-_EN)D2@k*1$UV8+1`VnLU5*DA$O)S{}cxJ1$u3 zfnUbEf!Sy?-r1*4EbE;L^68oQ^4npUkYdIw`Tz$UFvftPq1YTL$D^FbV}zAHzDh8| zAwn?K`wiej0S@SRz8BY*S>T_Zl8?|`6FZcBIN-01XyK87D`a&sVDVOX>z!UcYHk^H z!vHaRok&nYXEaTV~nej~(R+zqwPGx#fdH~LfLzstgY(8gFTH&~kconpfOf!%^HWY*AMX!)94re|lz zeJln3^yjbJ#+ph>`Z!$l`cn%lc^bdpeSk_DDygB;ne#Ti5qz@;ky4KUHd97USeGcQ zJ8Fx~ff*F}J{O8Y*1)pSc@RB9G7AsBLqDgyp^qw-9C&RbZyH=BWdrrlOy3ySotjJ? z!8K(1Z9FV38iEJHEb*c1Rl4?Wdbt?-jIOF2h5ad)%X#8nA?S`W*yPmFtUN1zbzg9B!RI*>O9*1SlIMH4?8{_gI3qo(93=ZRScQT8`sP6h`U$u zeQgKH$bOQTs}X$MAc+GWL}U9V1$^r0i?Z|P@vn%fpo*ScQ}+*+I8Q_M(HlACkRjKG z6jO_y3p-vh$5YNzgqn|Sa9!eL%SU!nsG5{t_#BFfYlccW!ObwFR+9(rG8VVV-=V1d zuEH#rDfIq@P5wi2T-EK*uU^=4gG7dqg_|^ag>7s#;VW2*HUlp#Q}%KNj_sRV4W>1To?xn z%_`iLO0CPUmbF1|opD0n?EYvzR*OAiWW}bRyLhnF#aJyy2yIEG)WxkwLlHj>j#HK^F`hb!ayaO5#v-rzf*VYwH+Gt7rE z4;mp56}Wir4cKUpc-qvFchq>`kPb_b;7aqpEH5R!hSof# zz`Q0pWzhsDa2nTadL#z7KBq-l9zwg0GVeIzL!?wVmD$`^ zlj8Sza<78(B=bd?L$8&BpOh0e81qH^s;W+HuWk!67j02*tvThrGvnDKO?Yp(4Ca*&O)5LIpP28FQ_n@fpKZ0Xk}qL6bKS?+SQbI%*&+6(}rkX z@&)#K4@HZC1=OMbuxwIJr;zhmfv?scB*!c3#KTuo@t={B`>|e=>FlX8y4B?wEpnE0 z@de}>zBE&e)N@O=QtQKV7<0Ye{oJ%$a86?CufKg8%ulG{^DSl^Uq6IvR0AZgxFQ~r za=B_N^gpGyJh*;+GFk` zK0bplW17g^Hi-g9y@pk>pXkSgNA$z?Cf&R6K}?vYFYU@o>F#?|-nRatAn1J(GM>kX z$#U{ERm+CE|MBB_m1#W4+7Y|oT#o9y6?x4J7mhUV>9VI(g@)!`rRw|uH1F0MC#*Xy z*jM?Y`RIQ1<6)3!e||1MbMoWm*6HLVMu6SZAsn$dhhKE-&feD~hRcmX_;^uTx$>3M zWMG^G8{Vj+gRdfZxG2M}BpFPx@xk9$m*U&0a#*rZ5ssU^h38KsT{iv(e)Br(78PL% zj;42@Qb7gUo&0gHq@_M=?GM&&7r-Ow8Vvk7hes?qEUx?Z32gf5W2xVEvAjpF+RopH{zyiL= zTH4(8_(JTXGk{05l)AS)mGWCDR?xFt;72uAV0^|iXiSj7m;f1Z@Q%*%w;fCPva=aJ zv>D3pmRs_>3D#^?xLfqnPNQSh+u+W!wRo;t6_<_-CG)T$xc5{!biMr<4vkdi)T&2f z*M(BRpyI6%YaB22G9oef4n6WqP9Nj%d>miL^Br=ooZnDek% z;?|~f_>U8A){j(C`Mx&Z=re+~Mum#H>A|3JBMh!Dc`2-4d0KGW;sw7qPLvn~64$I{ zG|!P8g}(|0N&McSEaz!~3WrR1-;FqQ+oFo=`h5^*Y0spI$F9(plfL|B#(dxVq|ry4>8S>r%Lo|0rXOm}T?UKx8{k;AV9J&|3t3S=V9@|gz8W_kI(}>w z0-Y6LV#iaMCcjXq&gp}1jc<|fv?s#7hG}THI? zkcm%Iq@T1Svv_e{>Y$4c>oURJrobu^yLTUE*M_&24-4F435qFA+qry?P;(> z(PSt6s*4sjT}9ljr^?L_D&VcG4_jV8BYYlgF0Aub!1Twy=o5H>ZVmH?{V!B``}8)k zOu0=sC~^Ax&6Tux!GqRS`QwE>(z{k-Ev4Fqqvgx}qW)@Y93ou{zZCA#?klmVo#==< z#nL;NpM%6Pj;G^mrqOjXRdnvUi|Wo+!LmDl8wJs{6*WnKAlyrnIm=v;bOCBwB3m{f$Ebi`ol^UOEAdEka zZC5SPJ}{A*!lH!JQS2UcT8m#Dx&~h#C6kf5Ls{8_XEeFwFC@G^ie3hx7<@e&2H8D? 
z*VT(?srm+V32@-?F*n5bE(H|Xk`D#Lrb2?(Q3!a@hc)`hV-K%H_%Lodbyrkoza7z% zmtqPgmx@qoFdbc9X7YWlkC19IP};Zcrmo#yl)pQY!PTeD*Qg=n+m1WX)N2KEQ;QsN&qnDjRou92K?7woquh%#g9kd zxt5-egxzxoqK4TmP=9xm;uIwx^aXXSn-v6glJ|N*dOg_HpT@gC3?VJ@HMs`Zu*KqQ zR61V*MxCz)3Rc0A19}v^av|kDP{;a+%^ZJUi??;}#tpYk`S&te9$DOz_1;#~+X3eU zp9KtST^fXg{}y1!IY*lErbc)_Pv9F{mf+I`TZHKswu%2@lsR76jV~vx!;W1k*yY7$ z9@JJ(`<{-I-qQ{6J$mz=vPB%?Je3=JTJem~FO=TjR^l7}fLTU~?ky>j>l9|Pv7|N0 zKRe1s2WD`<3lGZKRKj6#N;oF3k-dAVbNtvTywlSF{r@v(&3|Y4-*7$NwdyoAZ ztk`?=V`~4w(~-!4(a#=ZGQJq?d>uhu5LP@13~fzZhs~ zOJn`Jf5ct~zPoowOvA&Pj-)n1m9;O3wCHIs$nlWjot?e-ZQoV!aPD9C$Q8lh(A%Cx zse5#{q7NvkZJ?VY9qDqR3##}V^P^Qi;F*52(D&s@xZKnYGh_O4g-2iBd3P5@ns$PU zzX1ovkK^8ppFzaV!J^6bLA-*aX}!l@TJzBz_sdtn-QSzYSHBm(+hNK%dW%4&<7fHY z0R=R|NAmft>(1uSbZ}0l9GXw~0|VAxhD49;WT>)^g1Qu9=+ij-RJ{PZr_Tb-)lPUZ z94J8QGmSqj&nFwB*!gLu)D7Awriv-TwqKd7m7Ob`K0S|vXL$0|4mlk7*;+c&^x468 z3~rvh7kyui;IPOPGF0{k>q!&9>Y6teRgC7gNBgNu>~9GA9)p)0zLcfdl(LVS>_7WTd04M*!t62= zC^5Ui4OX^1=CV>*B z6I;xV2%W#vxKr{0QgS0M?GXTz790Z4-Cc!4p3Pu;FN5kcb6|wwV$vw@%friaQ8pqP z=JzWFvj_x}O3Cl2mP(K5oH!5$NUZvvFjuP=1-tE|Q2__A3#;Oz7ndNrXA1AS9>6^! z=3=pR3@T-e!sceg=eBvg=EX@GwIu>yMrz~b)?l=^+{foLZ$ZHP;TZDK1Y2@<3)chB z(X@#vm}GYVHF~5%y20}DH+Sq%t*`>c_h-;`v@cXdGeoc00Y%>P&^muN8k}r}`%VtR zlTFK6UVAuvo+ME3uw&41>khnq^^6TXRV6(&5SKN}pnSd`&d-VGO22ZBx@3bYzR{>C z{|L60^y7P1_j7b_Tfuh7E8%Ez8C;vNgXSA<;knBkIBsbkR(@)O+) z?^==f$yPdVMJ> ze#nP&J~pthn=JbpT8efD58=z-Zv}NdWy!lWnrGYgXA`Hx;Ilgg7fEb&pH5e)_og@d zCAx5K$QUW}n}i`#bJ)goJ^y=B37h{LhC3}L!F3-CPF?ScW`;N5#-DlOA$21h=atB< za`(930TKk)g*ZO30yI75bFyp*4o`nY%Qk=gKQ9h?NxceIpR(!4wSM$^Myr@mvl0K5 zAK~g{SHZ2)j(?`-g6H+4VDM`svSGKu-D4XdaKTeyit8izr-Opo`4#N>*^RpYFyVwZ z+E~^5kZ2oyoI6vs`KfFv>BP1}`ji!*6Ml|;Pv#4|3`BCx2*jJ-`W&0zpUY-R{?yL~ zLiXhI;Jq-7Jv5$+34)q9>mNwofS%Cp@N^zI<)@UdFo1KD95^g$o#CxMP{bJ}%~b z&EQU&JS7A)0taz?QUg^#E*78X%b=NRB-h5>g2CA`RCIl$px4J8lRw142#qQz>^X~d z1F{4|-_4X8dqI$$avpkhP3FO8lX1ngEofZPlk=QjlZL#^d1bf)&tD7>->yG~|6M#m zV|@+IJ6_vCaZ!pm^IML4>zKnr<7YXVbubv`eWgcE4NzAzS)$)5Ls-BM@^8Ju74}D1Upbc>YaO85-gsQyb0b8U?-w!pju7nj6~?J-#jOpe z#H^3e5F8T8J$~NbMo+dY)!s0h2R_(9vZGB{Neif6i)QGoh& zp`)r3l1qI^uX-c?RZZbv?v{P}U)<^dwCs`OYdc>>8;dtnUffTZl(nAJpC@D6up(i{#o=sfZ-we-GQpy|E@W8e&|thR z)V(pq*@I39=`BlHZsaS$@PxkjV7m)_Jr#v#jGmKy*AZCYeh*&Xy-62enQ%xyX+Aaz zMTO38=zL@=U-BKu5wU>tHFdF%Z3Y~;rzpcv zJurhQM`G_8o_Be-Bc^oDX1$PHU(wZ@N&IxG5_FrSOcyTc;M|X0(Cgh6 zdVSy(T{|O#_M5bYQQyXb;c*QrZ&XD$^B$0^fZc_$LIrJ1km4%#`pFRwk`PG|lZqNlIEG-KJ4wcK-XE9uYc zU+)23*}KBf#l0lIhAgcZ=tM_kHj{>ZF@?)%;rMCi%Dex2Ks`S{0HezT@cg7n?pH%@ zi>-$=#ARjC&^>oMSp_)Jg+7h+qT6y_zdiz{9rH*3bRGEHEeATjUZINKyT$GHlWB!T z5hMN}F)-+?4f2sLHDuYNFK-3L0jZzj$B_Y3--{0m0MQek_Y zblz9)A=9PqJU)|P`_BS#{$q9CHNq3;3)<`=X$(Ei-lj*}ba79~EhxVig^Q2QWc7~A zoT>2?4C7YfJI9Uiducz4PONheUv-%NesaK+j~{qQj68=0mDA(m0XS+&DXzE3lRQgl z!cm`@wD`EQ(EaFNSj7GK?YL@qVKWd149Q2EE)!wMD=Sfb|8cOqriDp4wc^zE^C-?f zf|YmgBiYT1Bu}6_hCEr0Nhh}Qmctu?YPHbn%n0ssY6w4?b&WhPMZ>tP5U%S!$bFEp zYpaGl(Yb zv4HHpH|fBPF)(p?D6T);mF#wA@X3cGaINtlx2GlhaM0F#TDWsFU5q#eHN$4%VU5wy z@Y$1{Q<|yQB0%*KDo|>E2oi!4>HXcGGxW(?b4Vb) zxvNm_$bP}lKZ3%HWHJ2H1pe?fj`DsEVVgxtVtd{;Xo>QWG8da*m3$vg6!+5jeNH?q z;V)@FI3;?|Fr*fjUi@RR0KfNSKdSr{kS4bS~<3pJ(5$dfI z;gsx9@wUS)+W1SE1La%99a`6*u+vJ zkKbcfNKtDycIe2Y+8*I}d3gi%;V+vqwoAm;!?CpfE7PH~U=7I2Hk`nJ&oejg+j=^-N|;oEddK*Tgn)r%Y-f*pq+%3DamZ?G@VzgpN1`mlzHL8A*f#_ zF-`O8AztDhIc=(<2kFB^wAJF%Q`N~cFp(#k7>P|9QZ`ofWt)Q!X`;nRh+KJ&R`&K5 zpH|(b{+Is8(0Rvm^+#b`Qi$x(QZh>_+24B(NztICkdlnjP}^t7)v^R0>*dikA-R@VlQQV56js2j+f(j?o82+q6_HEL?!% zyd_Xx-t5R)P_T`m$Zn7QC`xlRG*6G~m z@=k7iyN#m;exSy-ZE*6S#3i^O7gW*}Xwc@5wILxL(P+U_Skxs)G^mdj7xi52TsAoa 
z9xPSm(&3}&paHJ!1ux&qsc7x2AuiKyp&hP0{>Z)ljJn}as>^!qGvTKkBnMxGIl4tYf`n|IUa z2y;&VA$3&V^b-~}FTkAqU2@$t>Flk250Zi{X^Ak0vK;z=ky|IS(XWynKX(Ko-u0ms zkx!|9XtH=|le6IbOS)gvhCrP{XKv~?k~`cV&2FW|aN^Qe(QV6m`p-3jt+t(p!{w^N z&6jn8)ejF`aLtQnm|OB8>l$*=MI5n83*UZIfLH3x6xdvkevg01s$OqI!;g0b)zU8F z=+r0R{>TZ>_FII*KG%o=9w|_A!wqX55t*-i?EK`}NjSSZ124E}aqsp{?EX3h2X{#n zj(3RX>_(llPtAxR_u9G=McPUezTUK=f&vPS-7 z`CPV5F=SVJKVGS7!p&3XP^Hx|K{Yakmd(&7-wO;qlkdse&zH)VM(DDAT~C^L5_xye z*<`gegLOK~_|DjSbRl*ColvmA2m5Pil*=l?B;_gfjaTAhe&c0Z^*TaW+W?+zsKvQv z%lPOK>HpaGnt0F#!s`UZwf6j77DyxaO7nQ~JnG+L3%yLcNK40DsjdAnmSXAwcKy?) z@6(;IOZp}9*gp}6PEy7C^yO$jxf|{n^Ocs?_vb$YwK(vhBW=#;h%L|8b81nNkaF9T z<k>GQ}(*$PzFcnCR{?0EKor?hjF5dTXND=jpOMq>WUv=@qsg4n#fvp<3i2IkL&{RG>-~{Bc$e_eVG(5CtS*%7m<>M{+#{ulC?)|J@PaJ=TTs z15L%R2VRh6e?h(`)`#OARWNd1jo`Ch8S48NiIJYeAhc?aa4EQyG(G)=oiqDMS?8l- z;Rg%xQ1}bT>Gw(M!CuA#jYcBRTg9KB4#uXxk{-Fb68xkd#UWoaWvx#-@gUn=L2+<* zI1$nX_4jO~AxB=rh(lhq02Se(RaaVdz7xM6ZNigGPLRtX8Jga-=PT9b92{UqV?)MZ zmyca>!cD&5LjCQO$M{o9a zqY;xQL6hTd8h*>3vyxkDZ@u$JqhIMTSI0uq%^uO=_FR|~WRDBKXuvg(SZawcpn2|{ zpkj~(*GN4C=019y{wbcDF4j@g z76y}_necW~B`jTB4!a{$NvaPe=pY`rJBQ_W2f#_i3h?PwNYf5vvX;bl@*Zulyg zH-xqO>qr9L2Do+3g>}@FNd3H&v5Wjnp4sl49PxqPPg5n0*}F)Lvl06|aHrcnhrpnm zkL3OSDtvkqhAWmkvtG_o;pW6eFsA5;c)puC{Vn`UCUdGGW8Ou=758EG40Xv*xrsHC zvV;Q1IoLk)kMJ?jj?SJw%JV)B<$c*hcxB6Ba>au@=;=^s&Z(xRN$xnT+60t5D_~Ah zCv1)?$EV+>3n$JF#YoxrGaJY61Mh~tOM5L@{+ocdgi9rRRrChZdEe#*z7#(Z3S^)PzAx5L&!SH;rO zX1V9rKCk9!S)O4&4iaiBz)HA9IH#2E7G3wz-5 zIAhLR;02mDCGYPv>3KELm9DuO4%pR=&&Z!pUP&J+P|+m2_VF;mM(VGv{{`klkQEL* zh4+JXXh>PC@GoBthZ)AoHl4U4SXHFM8|M{Vn07_<$WxP;Re!)MCl;T6FQ*G$8O{sW zFQbhEB#+D3Wppehj8;n>tQCoE!b^Xt|HD8N-HZBSb!rRP8*P$DuDK!HE-tP4FmR+8 z@HJgLT^7u5KO!vql}3RICt2%$U%}4QgsqeNiJyWMdGJFkwyK@T4}ZJy;==DFl`m0` zXmipl+Qw_6jj`M62$=7-nsc@u0?mX^bV}09YHtMb`JV9{Xco-Ies^HOgFn>msUg4H zypmjO3~+F0GJ79>O#KJgb6eIAp>Xh6tXUL{r`GJiN$0C+zca(`rPE{|OH$BX(hS6{ z_O!Eojo6mjEJk~jQ1lo#mS_QQF|w+*BN_5rG4aAimt zJL)G3*Sjo~Z!?@nm+H&Kl^yg!iWIJ)!xk(=dbBya4lS$UG*{sLP#elZS@R6x?~>Kn!9ovBp2=za zYF{eU*)RRRyJ^6uS@@tWjo`~rRPAoXwAv0I6dPjob~&g$IVlEJx6gI*yUGY*dsRL&% zd6XG`JEDd&%E!{rxURfy^I(kHt3-LT`=XHT%Hp4FxooKocp2DoqS`g8vZ|#nBVC1# zx>>Not_T#G=gI%%De!!~BUG>Pikw_u)34rLv9OOG7YMz0(Ugb6{==sHX+n{(Yn&F0SOJgZN{)HG(K(aN*TqTC_Anw}!5UO1|;>?K0bkH@39{NN7QiFB?8k>83xz7?TE3ckE?x zZHJzGU%vxpev|l?jew~ehT!QR?-WNkJ0q#%4vDF*auJmOnR2m}i9EzwoojBZaB74x-{@%1f0ycVZQFXW z`u-l^n)KQc$I8U56{fs6H47fz+C$z_{=K$`J69|1fipw5f~Dj)93b(m=hrhe&(0Rg zemn2YtTT08W+$#o=*-iu_Z1aQ_fz=l5}G~6LmXZ4LF&GfuDh25FaD;_dcJzR z{>2wzd|tXRd`2IM_tlxWv&6W*^qdCn2;v>Fui!x4e({vXL|N|~Z^87zD6oopE4>OU zDMHdnzbEdel8ffJV34ykM+{{Xt&b3Xs2!$}y~N8oK>i-*$WBceGv@1Z-5~|4$r#3& z{w<KkYs>d1$dpBJ|7{{yQ{+}KF%y%4Z+Alse24OeZ?2wKh0;P=jv9C1W0L_NC( z8$zzY1F;8J+H7<_Wamc@)jMNR+XQ%2fbzzU7UDpkixlmCnfmD+hnd%nx$>_EK3~0s z=5|aLdZeYoe@iyg&)r)fYR)m~e4$OwTL<7{(UhlL8b+^sJ8-$}M;LOrCup8$@>>%v z>^OElPY~g#nw)Y(iu3DXM%;(S+S5NaFcYf%rf{1XBU*n zjN%r<^5qM}i)+>JtIhy!mAZUV&DA+n@?35|q)V;~hw!yu4zfz?eq5>fh`i2!hVZ2M z$|Fv^_=YBGPH%x-PHH$f;US%(U6lApdd@~FVNfqqOj+(JmNlth z^AvlInV2K=+1L)b1!MdtW^QZ}P@-mcm)OeoU2Z4;tZjUr+up?;fbX`Y5a1 zE9r)w)ikg9UTtj9T}u9FMuoH&Zde-2qe_iATSbOpW}D!6YN*(Udk3VE6H%st4D%PN(#kr5`Vlsp8T(r3Y%W5vGG=d;37xz-m8Mn z-WjmcGY(fy?uf4x?0KDBm5saIf(VPw{JTUHm-*{q=Hado!xm&?>nGl-ABE$We}L^n zQfNhH59~5{C=VL-Rd}%=9I6!?z<6>DejBLOub4u~dq&SQ?kcjU!n}l5h zo1oY!nZ~~`Af>ft+|Z$khPPO8LUaf#7#s83zcYE)i?3wRw?u4MJCyv#8Di{yYnIB= zIOyJe$q%5wk@u%^>CY;Dsmx--`QG??+$U-~kOC7Q_2LdC0lYROo~N1)r+g7Sd&LnO10@G>AnhTl40y2?v}V#X;feu+<YxHvWPC5Oj<8qSCr^5W09f zYq|N*OAl%1G8g&EbWL9SJcE~Oi&TGYD(hTX$_r{bu;G~m@j&Ya6sAgDRYxGyMAKA7Je{~#FFY@x$@mGN%FG*aJKgs(V)KAb8NIy-&@g&&R(V;CiLYTqU# 
z?jH+vf$QPP*;d$hzD!IVc#5xgEvKk4li6Tl6eNu>lD@wup!j?W1mdMCOlys=}ZM1U5Rfz69 zim$y0r=u(5=$eBseH*t?Uh?HRZE2P=v)@|A-JT2)9)mvTBr{*h!<~)H# zg`Ti@%?%h<vaW*cRY#ub?VNWJ}02-@RKw@G#ehOPU24E-Ll7q~?{?%4qz0#7`sT>xH zE7N##zyGMGjQ(pee)%Z;=GILf zWpZ8Yw=ahj?ej7ChZ!dg-_IRKex%{c<2d1q4rk4r%X*6}dD5ntH2UjcNoOd4V~ghS zi0w+e?_mf!cp1Tgr2)9WPy+`{3gPJ*>oMn52rT&^^~)w{fSos9sk{zZ5@{z2xUKT((;HP;QoumOe zgJ)ywUuAUf(pSoxxPw8g3mz}GB!A1}@MqCCxZgDv&igSXcb_6LNsmzb=-XsFqASy^ z1ESRCN4lq*;ZdFm9!!(=7*i*UYbw6dHHnid_qBnV%tCM(V2yUok4a^bAElm^SXB8+ zxFWxn+UPfZiS?!v1+DO;C>|T$m5Nd2+3>i>T%Zw(ux^Jfrk_oKVMg6CGT;p)UarHt z!A;mBErw%8PUL(SRX+A07EjlxL&cVjLe6oCF|cP0z8hMGuYG^g)K5J~hZIown25JL zw8Yp)cVX=s8M``(lvQAlzjSB9ma!$G;vq$x_SR17FY$)&@1)Gu6^TcDZlRdbPYcGb zmgXTfDfc^n10}vrlmGgu4WCTUz&%YhnmJqQVVk&wR69qqO2?~o-RKcG_sC-NH#+>S zN{_04Potdc7a@4^H`-)+fo)}laItxWaADO0+P_l?9cLs#*6eEfTYionemX>pS8ZYs zr!MFkwx5nl^QzhNbF?zgm}Z97!>xg4+-+$BKCuZUzu))BW2Ohr+tM8mo^_=}yN+n? z9m9vEzjx0ng`y!1;yLf-Wb$YeEvoZow~8y|b+8+MGBD>^eI!4UsU3gSw-S#J(B^A4 z>abe6SB=b%m4NW8<(kk|m?PfCC1AMk2BqBZ!ZVgktt~b>O?%?4 z@p6t5Z}I5Q^Ri8_Q{o1Qw|@rdZxw{5Vhz%-`v5WTgK+AVez?qWrKqrXHplEG`1V4F zt)=tB6OWU$b4~)(-ICfZm8u2xJ!eVd*+PtAXX)rQB(;Y$II+AHy;f5qaI z-BqB@Z{SO4I)p8d^08AGX0R^Q>a_{)B2}ba=~B`tNRxK{Bp&VCMfl;`4OYGyg+H48 z_-)s3!k&&+xOJcnPTrFX=fcj&A610W1G_pLXGw4~$r4`{*kF=JAQa_!i*zIe=REEw zJk`k|^~vpmxyE=|t-dPm86@=^?A!`!BP@l8hAq-?T^lT-B2 zE%f$bGP;&LfE~|kVSlm}uH2CawukmW)HE&XD(#o*uV&z_539xWz-Pj(&2kEFFQfC0 zhhfBXb$s$&VpZr^(b}1tp;T@IRf9+1;<8^f?ePrE9eoadt;@x{QTt%6oiQFR8H2YS zWB}v#15BbP84gLq`lYC1Ekeq8_w}t3$G53 zK{K~Y@{+~x$xbIr%ukUzl|C=VqSQ4o;~`@Ie;0)ch27LD^~E~B&%~Ac3*m`VPjs8s zDjscF0=7v>a7WTtUXII$wgV5ug9cNux8n;s%H_~}rz5BJ%Vt+Kfsbx`LE9v+pmKQt zZ5!yqO3t3*DA!J?njgzEPAw8bj!{2B@4^Mm?Sb~8&tI%xz zJLbx$`~*Yx@lNDt`JE7=UedXqSJ*u80>AW8V_~TX zZyeNlvcy)>|NRrX?VpUBe+59dDcOK;uYi4cKBSD9f{o1^u>Mai_HR0bp7Ua{!Po=8 zbjiX_)oBoJoI#qq{(zIg8eFANf_=v*V@PagJm~zcHhhv9y!uiKaiSr-9excCnW~_M zf&vB(+<>Utx3<+U63{;r>q8~KtbeG~4VghVRrZR1ZjZ;z)y913z$VZhTSa>^19{l^ z^E64yjX$tIPx|XK_|z9ilB*|ys$}i`=p*Q0ba6_6JUJd_E^QK92>%fyRNER%#T&$tG z?wjF?^Ap^8;+5LwVAM_p|3vBGb z>v{~s{VR-VxwLna`<|!1eP=+~!@=NvO^d^v^QEpqb20JOa^aluL@qYgVdurt8Iywf zIZcI#s>i~ku-8z(C>a9oS5u0o6Hgpwj9tE%ip}eEq;BncIy5DohreiskE`k-I%F9h zC@3U%sSkE!z&)6&wFK_>cm}S`+PGll8z?b(N2WvG!_O5@>E@Vt_&p>Z`gV|ZFAEFE zBdeR(e&naD%S!`cuvs#^AMS|L4l>=BdemD7y`+iOlObmEOu%)UDSU@ME&VZr&r3St z`d9@l*k-{UKlKJ9iFKSjViWFgbD^(stKeU!w=hl!!&6I5aGJ{@h`SNN3Q?ZIH@7Nk zlfIY3PxPaQ9}+-m;47&E))sb@>fyl${uq5r;?zoBr6wO`eioW0-dr#pWCPb=rD-eO z8SjK2O&efXlpzQm*7NCKs&FhI7rNa(N)-_X>}(MTRfj#HVx^QPT1jFkr-FXfPT1+0 zf!m*)h3pM0@$v1gQ06lYtb`$`xN|d^2K3>mt8Nmz^B={`v7nUp1c`gP1NL2zctAIt zanPAxaDT*Zcx%*A%76}`O4nVq;Z!Aze%3}C+qRIw(W$8C=OzxlULfhZ`5a8EaM7#b zoOsCvvqC3h@zuXVPW@PXcPcKJ@lGi#xr+Az9!dp2!fy$7y5-GX8A z7U8I}q>bw);a!^&zW(|ZG+o|`apU$1Jth9S@%+2u$gz21KJ271mv>V8+^4dpSL-=f zt2-ujAozPaRLC;$1t)*0E9G_xoErX`+)_^Q-OhKo(9?m{^0at-KMne+bsfZE@2E60 ziEeB(XC721oHZT?VQJTBergYj%N+p|e*K0&|CGUgcMC6^RDs*+KDvR2cWL*%0 z=B9cy%(()u&Aba=Mmo@{UM5(dDS02mi+RiL9PluYho<6iunY?(@syr8K1SLj&fO?j zd`W=8NxA6ucmOJ;Y^6JwZP|P7E3qYJI;-}yq=~&PSns?t?>_RAY7ajW?!W6#Z=Wp? zju*Gmrqz$g^I;fxPh&FM>;V1u-xuEGr$L+BBFi3u`!{d4y(NKGdVZX|PBmP~bhJSISxucR^CzkNn)8_m! 
z=r>%7vBL!khv@pNAi-|1jGrZ+f#H8gh|5B6Qe@*N*p+FF$@)KF#`6e{milgM8a*&D zA`Kp#5!m$32tK~~2@M`%Bjl=?!1aWV*h^ds&LI+iG*Clc)k%*({b+=hK&_3l$fo~w=y^GS zZA<#H>suv$6l08=2aSMTw@WzkrX3u*JroxiR^XE^i?MElDs|kx2P*%w!ynZzX7X*bma>1Hjux{5b;Z0B% z!L++F#ycIM+Y!;ith~8A@Nz8Vq}d6V{y}iNy=ndyX-vr3BxCcj;&6fPm ze*D!_(ipbRr#z26e6C#+JxuCh-qk&1^P79<^8SKC z<^^Fz0I+PGKYm;`0w1+LynczxXG?0GBP?GRhHcT;|FxLPn~gdUrRi=P75EKkx(Dn9Y5?`LTTAnJU%#%7R%Im ze%J-z(YJ&s2QAjVx)@F!x+1JOwIAH>>aoYl`FzrNzNq0o26p7wVXW0)NF8xX@HMD~ zfE8b1+~C8otp65y>xE3pU6nv#rn-1_j5XFuI$76bLsV`3P7a55l0u#$yE~nevPEB^ zqpKs2`&2=rteWVqh7Rh#me>n-gCOE6ioNWlTz%|0+V}!^fss4Eh*crkv<%)H=E3iR zGI{%{p_FoV7cUr7N$CxtaP-?qUNZVHS^TPlzz>VL%Yp~=sQ-8R_H!j1P5vzGH+AO8 zqm{5q_(mT+YhiHSb<%9@&oP@fF`fBLZ{5o{{!xXHGv@_G=JpWj;5S;ce*kK&c@K6D z?yU2BqqOfFL4gVC*wi_iY(~_J2APKVxAhAAtRE-VdX2!bRX3oq=pw@@q(*dyismr-J>~zvQ~P5dwNe zg1kTx@>aA_RfY+yuQ&zPo+8hF7s7s<_LKI7=d^^oV&B><(rwVgJ<{%fl8QSuDh}g* z_(@4UF*hyGL>Wy1~^r@%lOinlVHP>Y(xg)>ZR>$mv8P2VHNqiKK+E3~e~cb*5ypy@0425#WsyNR6Q8w2O( zgP?SL9hYD8;e!K?(-C!F=P&Qq;Mc-|X#4y$Nt6oSbXUgP-3JSed5X2#db3!r+XC}# z+)0=%u?`k)gRhq?`B8K@p6YLn)&UBgbn<%lan(P0;U_VF8;UeL#@Yv<8%x4z(fsS{cJ?ue}~br zk-W%f1&R$4udVNN_LcbAgJ1sxkKzQp^gK;4Fg#65RA%7M#gw?ursb}nII&`kK7e<9A02gKBONHxTw&sO+Wpx~?yZjc`Ri?no*`EM|FG9?o+s+3@ zf0Fb%Pb{@cfT?mL4A^oLlAPj%UBCW{mbzV7HCjNo%u`Tx&HzWO-3|`Zu7cXl(O@*m z3%Z%Bi@#4Bfoo$ncugx1l#+(hoD1nvpP>gf{VBxN;vVX7svc(b>xJd+fzV^M4lk&+ zk2wG2ymgM%yF}p}oUG z2o0HpiES=eQR~rQlZG1zWmH8t8?Id=m6n`=J-b zwLGPPTQ7*?Yd+A!h(|QnC5Y0}7h*r5T=pqnmCl_gWzQB-HvHNCu{sErZ>K$c;~}dcrKd; z+OZ9k)X)#_Wb45OttimxEJba^GU3CzZJ?1`j&JYj;F_5^n4Dz|Rbo#}mdJ-L+5LCq5*zL2cT-8O{bFLfuZ9CDTZ4DP@YL!zjI z{&!J7V+)UIEvM`(dzc+#i+kUdQ2VBS?0f5*T=R7cTzYteG^fjW_!nLAw&qPzcJQEx zg~2@K^%Rbl_GP#3^oAS#=5o%cM7~otS;{`mWa~NWDQ?^{DC%5Ir*hAecc>;UT};$- zeF1!K`wUZO6^V6W9(Y8#RANG%77xr?03K_*mTk1_r#gC#+eaYeytJl8l1?pBtAl8(e=ST+GqC&hyC z#|aYOaWG-!@nYw4 z-h5hO6P9g;^&flUs6Aik%*>raSKBtwO+1QM&HCYH%{KTK6NHDBMYFe*70U~#fQpR) zc*ws542DJXam6dLOUkcFZLBJNT9FRgupN%?Gv=X}!^OCYYS{~?_mn+HOX62-O?CWas1|)#m;t716Y=b|Cc$s4yZFp71y4uz$0ZkCIeMf&K5C2->Cb$?zmm=;Wu#W8 z--bqC$G)#3U z#)KP}x%^bIJZ_^o1ZFD3`@UJAv~e-TO4+A-$!pLc~opRnxx73%#l4A1yY zAjhbW6xhIf1R zraj+wLi9~n9+PqavfuS&zyB_arfE z1(soJ{n`gB&)Z^^#Am;;=_0{Fse|)`GVik- zgciLNQ9J0HeDJGsx~Z5d=)M{WM>KllfGK;>@PP`yHW>=f?AxK(I2-DwUxI>&dvH1O z2GxBU$O~4vW3z7_)X*w8GxddVFy9D|FTDgy?YnSE$u|CRJX=uP{+MdMEhgIz=Y%Wg zOE`OXCCBe>gXUKv1pMU^DXEZDBz8#&A9icABU!lLM7jw_|oD6CfDmS zFYv}8*Ii&$a~8~~w8G2ZA5(;-0bG4fu=}T`-SsHy&5{cuK|=YQp7=?uf$GsF7boQzI-I6H&55CW52J_q*FYYIWb&Z+dGa{ zES`pEMofgEU9{PTp7F!qQ_=iW6x>f^&>A=qCb-yO#i$DWQ7QTJ?};$6-iF@3OBb6H z-EsIrBgl#GNb4`U;}n&2VUch}9%t<%FL?kon&VO?X$vGwjTU?;F z^jh*4bz!vBZMCUqUmn_D5Kj*4%8iE_G12}74KxqqZVs2AtZ*4^xIPa{>?1JyxeZSL zw-GMhJ0>hiRHgr(&xEO4X0u_pt@1_X`TTMAG+MIV9>#}Urze>gs4PW|zmFVGE0Yzt zKFgASy}v;AC(Q6c<6m*EbpjO{TXIBfqD;H#0yrwSi@8dOJ1csT*Zng%GV-y+UZ{d( z)3G?}LO)!a(;K($+=v0@BcS^Le{^kJN5-{FadAR6Hk7+#>!f4wW=VJ0HatPniZ)<_ zQ8eB&9E5tM_M~n+43{bzVAjc9cxqG&KTa7a`&J}%8Vpy*@83pH2P1)duG>P_P8nhD z#c~d=-zvH&PJ{75j_9*d5ov-R4mmoJ{SD61U;lg5&}{}zw+f-p=*_!MI+CXK%NP-S14`7-LgbHs}xICWNycyr$+C=Cn#;hVF~pBuu+f4Xydm~rK~1axet%r7s+7rThtv_uM9ca~C$zk-5Rch3B8 z3;#1YNEX`%@V@gC*y7$H?x%VK)~(;flTE|f?DZsC*1C+YjgG|nBnAFwQH<-Wu0xed zF|;i*;~uN*IMMK*P}q7Hd|eu#dVDj?+WdvWx*y{~PJ!&X%Sg)I#6w8hc8c}tUg!I8 zk*M?PGaUE*23PGfDR{tfw)J1mtB*Ut#hqEgcd5fJ_~%O)*~JF`Z9ESfBAsLtED{FH=aD%>=-jdy@bFwmobULRLK_7*d37L9F+N6ls+B^J&K_9dc0^1ZxJ;~j z)S0aVHL<(Y5j$QZ6H+{DseFD87LSv1r+VgOb5Vl_#(p7<%UNPh|Lt@;L4~d-X0y_e zv*dO(ozqtClbM~o4=IzYX|K5<*J!76;yW)|cey);4yvP=qE2x2K#rL3@iJ)`ZN~{A z0^S?E3g->f!wpxoFxMguFWBUY$M2Y-k>3;87v6-?o+0q|c6VNv`4X-k*A{HS5)&&j 
zC7=IS@!-34a6ev}HOKT22HaD|m;Xj#Q+T7y?3gFV^tdGSE|9qN7f0g#-Xo|jEm@o~ zd?)1HnSnu(d2oD}1)Chph1it0wD8VnA#uCF7&(CER7CN8hyNTS@9Dtmf&}Pfk^{SX zULcn`2dT5Y2^Rj_i5;BGaIKCTUo{?)8tB=X{O*P};hjY>I@DLsp*cO(cQdSoMf|vW6Ne!kG^r7ojr&iPl>01 zlecJNf;YCl{z&#Sh;q7Jq!ZiAam}m4!iNDPux!FnFu2oI>V!(>e@AY^?OC5;?)MpB z-sigP^$wCPw00!n!9gt9oFUV-=poEnULa-mjJeEI(s4dM6SZ2Lpr^`ym}9XIEWTfX z503g!{^uqh`(upT&wqj%=hHCeRyt16+$i5txB?&jRAakIhvkI@x!@D9RsPEAF0C*A zOM9Gb+0e`wo2F#RChxyW&6Z)}*wjv>yY@W|oubc?ITN|sdo2GO6VFfVo1lI|G1=No z#vvu0*|NrgeZosH(!(8Av`RBX=3dzMYa3AdXZWz*n)W2j;L7@~I9f>)wZFUqofC8T zc!&m*{|3(JXUcPX9U#@N3hX^%q0D(|3@B-6@cToxl2;)f{X6}Hf}E~=yG0Lnrw8$W zKJzf~XBGTt>%oFc65pC2bs{AzqR(#?^cee1{MnMq9>>$k=W?Fxc=ZInY1fywHuVvd z9^aJm1tqvhvjJ=ZiGJ8hV3alydVKf?iq3WL_0v{y`|hjY;`9{m4wE<_FP*8QdOUU; zSqp`88zAks84k_4PV#S?c-n{}m^mO`c>dxj#9Gd#0fV}c^AN*itZ=THq_NNO<+&yfbZJf$tdiY=)Uk1B=A$K3 z(L2Z=%57Qe<0*<7AIwQZhSRblSDyZ6KSxO9->z5Jh;sKm7<&087$!)Xio+6#4SA15 zJ!L%VHWBBg8t|1mJq#XWMs1-(*{0frJKQdSiY!y%WkDl{tcwQs8_2)@w39~1X}IF& zFpNoUpwpcuu$6}*@86OP+Yb+d#cKax(7rnIi|m4-js9$?smF0nyM*qyY%qLsI%z3p zQ}%Wle;Z;Zw^y7GMomA!ePM<8r^^(K?erR2&u+tyJ3rH~wN-e+egn2Q&BxZ2gG52y z1mC;wruN-4@FADUE#pc+{BDO`<;Tfmjyun3jfNikWu*O9OU7dx!9e;hhYoc_{jfBN zxw4%?29;r4)e@XM?*vT@yCJN+=nl@I$KmYTI+@yX$xHPP;8%q$#N&lfo}Z9IRYFs4`FGO9|z4u`numTZoR)cwzthU|@SpEE3FckA4Bi zclwW1`l$+i9(kjhjt|Z$E{A!(#q4{|4aIv;X;{|+I9SsakGK>B|2tHiVe->2Uv@jrdZLq}g+%Fx*XW73*70(3mYZpnvHSVdxw;jAJwWt1|&zCO@xr*cJ@e56#st?nr87u+c;Uz z58-Un8pAkj0b6GIk^l1nxOh^s(_zYERPMYoRQlZIAsn9>D9e(e!^ z2V50R1XZCrVHb>BS93gn=>;shaU1f3x8NPy&ZM#B5jYOD5PCxZu3w}$YTw( zb!w!4+fB&!)j*tg+8&pLZGzalkEn?jW5zfm$+tBdgH2Asd+F@F!FN2SY(F4|Y}kkD z+I{)uj7njFzXLC`W3h8|8LE8PCD_DgvqkbGnErb=6n5J$v(t^E=e>!(_g044qjD&F zY&!`hg=~2Fm{`1gAnh4b&V7u}(`>v-y~?i3Y8>ZKa$|!qJvBpqGG!4v7P@f8$#8x! zI)ThWv_!X~xLkiJ}rziQg?>5wcbN*7%_%QFFt8<(?z zS}fj5{tZhy{;Ba%H-))27I-2$mUO-XuR0saH&uJ`IC%kW?ySPo?`Vm2m7^i_kp~RV zY!Tmd)n`wyi=wL5JkY(UhJEU^FFa%QCUxVxB0^a2QRFMok!N=$McVklR!dW z3qAj*=sf&-`oB0{R0yT5sgh6{TB>_aB@rdFjI4}o*!t$J8Z&UL z>Mj~HP0GOyd_`|(?uY+6y@6hdHvGWtp~Tvm#0MqDxXSM}XjkIEGn~G-HtIyusD!b! 
z`BWVon;6KNxt-9z&uzF|5iCSkxsqkKQ#ACvGJkk5M`qMmjww^Nit?GGg@><|vB$`4 zcrfm^>&ID2yyp8M@mq`{70Uf4-Bk;a`_+qra)~gXz0awl+wEMzbXf?G_mNzdJFK`Yyk7h^M31)Z8;cKjJ&+CEYmD0N0;$ez0LKx^ z;=0YXp24{OQGvSy6vGVg?IX|@|44NWB1-V!@l!nh?ZOaMpc6T+Bv1NhDHN=i+70g>^c zwDI3L>HXN5t7qClfcO}iTYkBk&s-_yUeTqp5eXb{73tpHDfCKQ%O^ig;?vTMcXaj) zELJ6nmLcH6Ki`d@r=j$%CW8@|6+ijz#l2>8a7U#+XX^P%oW&j} z?!E~-rM}&`S(EYf-|0Mb;Ue0jStB&>*(=0dIY=GS4t-U5fopazb5ylbz}@<$7`x>b z^}X{Hwxy?${#Om$eDyL7khoQc*DZvgRWB$%>V|01V#V1dMU*9^!&~QYn6cKFB9CaZ z<+tN>dFnvAe%k`KL_ZSSx3~)_M;7C(!`hhrc{K{lqA;{$J31E6!$bG)3bv{4wAynT zH(&oGym>T=H+Ub0{0mFD-C2``W&NZ%of{NhRAaX&XWD74#hdQ8QGBrys_cJ8r*4gd z3DHuADeSPI)%&J!Pwu|(Sic@>2M&gBZeuY^Ia;`|!GRYWB2J7tE6cl`4k_A7!psMy z@MrQzdfn>CCf|B-hrbH7W!Q4)U3K35T;eVL`3Zgz^M!eTEID2+yL{IT1^&LolV-jN zLa%};LLO9q$1v?jD7lflV~bzyK=?BcW-jm6(4sjpUaX;gl(%Sb97T6}{Ui zv#-<{3e-lglHoYAPbvI2eguC}o)8v7KZ~UB?ht2eqZ_>m`RayZMi-8t~ginK5U(7Nf}L!IPYU5MLvBi zQ>)0rj-To9EZBfPehtGuFAieQ%y|B%?S}q#S}0#{i?e1L)1m}-l+EvsYCYz`XN|}7 zNxxa#@Z%6Zt3686nvJcaJK@XcH^6+vH!M75j0!#TX~8N}PVg#|*i(z>$ks@@m8Xbv z^$7yhc3^I(4>wvYW5byX#3{Q6NWZHhUK?^qoSXj`Zduhx{)tm$(C@0CuN_4_auQrO zEKVuk^VA7T?p^}B^j&1_)(Rd2n#9s;lKb%a8_M0h9Ge0j3S|x*uz!0Pm<{y9-fjf# zXLLB{oHabY?}paW$=i2ADWw#J!;_!-)IZb_pLGr=zy0bw@Vcd-qO}Fb_BX=j0U4xF z-YnQJ=nAS@TL8WrVa2*cK~8e%+*_vzFYcJ*&ebvEv1q2m^O5+(XFjfg6 zeHnh(J`S}NyYooRXwFhOPkMhm_|t-3>}uu7T}#e@&s94fe`Np6Hm$Qpxx zz7aeGS6pNIKp6aLnP6QbhiQ(7D6GqFYFwd;xaAy-Q1D@C0>b-MU8vrrLQLCY0$nF5 ziE(e+LHkw$XfAn6BTr=Tb$44%JZz2?bH`&-g%Nk{FL70t#}T%TfIBx9GMyQPpNe17 z^_~6r>F!w&-+W)ZlW7ENth&SBi&G$X#|X(5VTg97m&BZ)Orc}zV%d#$$wgNF7e+mc z7F&a+@MD=CIi&rD0p~I(rI#KbkY+yp@=SQor$WdJ>xIYT4Y)_-Sd3SWr1F{o7E)5d zakD8elzNl0VmsJbXU;PWI%9`w4{<=apIETTg;Sj|x9 zcCV|_ayAKGt}T+)E49;*y@j-=Pz8_gx=gzcM!?|0S^VSUDDLEwG=Tqbm{pzE)T=;SV_r-d(}c1mX);oZd?)(tGUf3L zhM-;bDYz})2ktpL!!?DKG$P9qZ%!Kr`o8Tn`0QKScfE=($@JOY+z7)>hhk^v9=vLB zPo83JjBsd z5!uEZqQl+IXmY(D&gfYy#2g&N<@Jql;%8@=7}5=I8!5vHD+N}ZK7j7dcjVOIE_~=~ zxj67p8TGP^78kFMhCh33dBYzaX-+tfdz#-Tx9fk%Hab=25TJ!q4g$}4i8+(GTB~yDy&M9Sf&;V!ZV9Zn6CT@QnEex zw~sTA$~VW16Lpkyt^<^$-0{JF%P4rmHM;RWRpJ?5aHi{swTGN>{LTtN9RCRpckPbJ zP5p7QbeBpL3EYoeg?P0l@>W;i>X+}~wbAGBXkl6LAvm+O0cv#fMNM-(yyKn=_f=m}Sm_mt z+%-WIho2$6EyFlVDT8cw_ZN>(^5@m*Ni=MfHBY)FWzOPO(6$^yu{3KA4QjJvOA}p| z<@aT~OIN9Hh_>)${1X8ZF4Op7WnzlIC5N_t6AA}r2>PST>0rGdy<0qp$N-#r8Q_{hVg?|lz8uC?P>DZe;oL#SYDIFq{89iZQ_N;uUl2>MFn$E*W2)aEP4 z85*V3t@;``PwPY3YnAXp!&#Tys|bF(5KgSsp}$R1{=R1`b$h8RdF?uZ=Ia4cr*;4i zlg=KIKKi1;&}gdPT1PE8CW2z(U)UXG%`LwjIq#wd+dtMug8);W@qI1qGiSP+ZHlS0 z@4}{cw?S@RjMR%Af+zKQ@Q8&cAZ|pYFw^xTRm^xxeeQdUKjx3e;Ly)>51i@Ku}Q3G zc#JmMSmO7^-SL*hYYM%voiZ;-KCnp-1c$t8F(>amoz(Q;UV~fU-q{dveB=ZZvOM@* zms@0{2(GF2SaX*F%@-1P(v{9N#nhuy4lM0ZrV;Q-j2pFw&dCa7 zZ?|{CXZ1R~^{*MWzBa_F&r!mTmPlH9^(tg`$`EeESm0CbeUutn2KT&e;Qsuhv~aEh zZ~bnA`xit|_ya4h-!5^*+S93P58Nx0%-xTw)-2)|p?44eH&K(C<1 z5N^{MtSZJxy!uGt^vp`|y4e|(%EMt;wI634xk#TSH@5cx6hi`jfK%vFlAE)TTE>jx zPYU~Cbh!iH7-7g#0uYW_4d;XiqL58XNa=wGOvxH2#0#C+d4YxGQFtq^IAV%dr_QD0 z*$cC3%5#X!%c@YIi<&Eu-y1VOxKQrjZ%JQO^qmynP|ZE39I4i z2Yr5{??B$(XTWX8adH<I5ehJ+u-b%xwop}7c zaB*?ZPY`<0i@RL?0=3b#a4%hnN*?%A(Xvix67rq)uJMD+kM(qKa*R+hXqQ-1dJ5t? 
[Remainder of the GIT binary patch omitted: a long run of base85-encoded binary data (the serialized PyTorch test checkpoint, presumably samplenet.pth) with no human-readable content.]
zw>k6G=@wb@x(B@LB%t0_jw$eQV2m{`Fn=BE;Nr%a%mNife&ya@Q1NXw6S>0#ta|Km z=Z*$=P|kaUu79Kax!}&u7 zY{*nDQbjL)4$51jm2j=UXqL3bi>eL>A@TyGx!uD#Lk&Bl|6CcIgyw3 zAUlV*(pgt0!O)CFjO)t9%&kyyX3FP<_#sz_xm6m%5SyjUrpMFZdAbOA?_E!KpY(^} z-TI8(uKmpG!ES@8A8QSKR_%pdx7IO*t`DKkd@|GD(+uCZ{8R|H`=0E!k_-m?g-cSF z&^@&q1uec|d|f|{kM_aHh#73AxDBg5bq2e#LxZ)rrp6ZTn8waiQ(!~<_On}M05f8B z>BAK}AvJIw+wKuZ)PqCe;qDd8?K_>2*(Sy$`4vOw_q`l@Pl(-T6Oa7Bemo&Nmo-gw zV>b@A;;z26tk>{FHoAWZJ(^uuqxtLDq=-wb*u!TNTN;rY-Tke|cxIMlbo%;&1CwTl6J_RD5A@TUTPKJbQ;mM$Wf zeHd4A9L;q>B`DFb5vNDI$K`Kc;qdJaI%o4y*jH2qZ)fSkl}rAxWS$}>s+B|kvGq*J T)1A!6mKyk|Gn?rdc4Phraz4W3 literal 0 HcmV?d00001 diff --git a/tools/accuracy_checker/data/test_models/pytorch_model/samplenet.py b/tools/accuracy_checker/data/test_models/pytorch_model/samplenet.py new file mode 100644 index 00000000000..a742a650c02 --- /dev/null +++ b/tools/accuracy_checker/data/test_models/pytorch_model/samplenet.py @@ -0,0 +1,38 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +""" + +import torch.nn as nn +import torch.nn.functional as F + + +class SampLeNet(nn.Module): + def __init__(self): + super(SampLeNet, self).__init__() + self.conv1 = nn.Conv2d(3, 6, 5) + self.pool = nn.MaxPool2d(2, 2) + self.conv2 = nn.Conv2d(6, 16, 5) + self.fc1 = nn.Linear(16 * 5 * 5, 120) + self.fc2 = nn.Linear(120, 84) + self.fc3 = nn.Linear(84, 10) + + def forward(self, x): + x = self.pool(F.relu(self.conv1(x))) + x = self.pool(F.relu(self.conv2(x))) + x = x.view(-1, 16 * 5 * 5) + x = F.relu(self.fc1(x)) + x = F.relu(self.fc2(x)) + x = self.fc3(x) + return x diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index 6a8ff19fa22..9bdcfa9d681 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -60,9 +60,9 @@ datasets: - name: imagenet_1000_classes annotation_conversion: converter: imagenet - annotation_file: /home/automation/datasets/imagenet/val.txt + annotation_file: val.txt annotation: imagenet1000.pickle - data_source: /home/automation/datasets/imagenet/images + data_source: ILSVRC2012_img_val metrics: - name: accuracy@top1 type: accuracy diff --git a/tools/accuracy_checker/tests/test_pytorch_launcher.py b/tools/accuracy_checker/tests/test_pytorch_launcher.py new file mode 100644 index 00000000000..05a43de4fca --- /dev/null +++ b/tools/accuracy_checker/tests/test_pytorch_launcher.py @@ -0,0 +1,63 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+""" + +import pytest +pytest.importorskip('accuracy_checker.launcher.pytorch_launcher') +import cv2 +import numpy as np + +from accuracy_checker.launcher.launcher import create_launcher +from accuracy_checker.config import ConfigError +from accuracy_checker.data_readers import DataRepresentation + +def get_pth_test_model(models_dir): + config = { + "framework": 'pytorch', + "module": 'samplenet.SampLeNet', + "checkpoint": models_dir/'pytorch_model'/'samplenet.pth', + 'python_path': models_dir/'pytorch_model', + "adapter": 'classification', + "device": 'cpu', + } + + return create_launcher(config) + + +class TestPytorchLauncher: + def test_launcher_creates(self, models_dir): + launcher = get_pth_test_model(models_dir) + assert launcher.inputs['input'] == (1, -1, -1, -1) + assert launcher.output_blob == 'output' + + def test_infer(self, data_dir, models_dir): + pytorch_test_model = get_pth_test_model(models_dir) + img_raw = cv2.imread(str(data_dir / '1.jpg')) + img_resized = cv2.resize(img_raw, (32, 32)) + rgb_image = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB) + input_blob = pytorch_test_model.fit_to_input([rgb_image], 'input', (0, 3, 1, 2)) + + res = pytorch_test_model.predict([{'input': input_blob}], [{}]) + + assert np.argmax(res[0]['output']) == 5 + + +@pytest.mark.usefixtures('mock_path_exists') +class TestMxNetLauncherConfig: + def test_missed_model_in_create_pytoch_launcher_raises_config_error_exception(self): + config = {'framework': 'pytorch'} + + with pytest.raises(ConfigError): + create_launcher(config) From b8a6f1aafcf4a44c3d121b92bb4ff0280b78d874 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 2 Oct 2019 12:48:15 +0300 Subject: [PATCH 056/927] AC: fix typo in definitions (#471) --- tools/accuracy_checker/dataset_definitions.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index a42ecbfa38a..9bdcfa9d681 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -352,7 +352,7 @@ datasets: landmarks_csv_file: VGGFaces2/bb_landmark/loose_landmark_test.csv bbox_csv_file: VGGFaces2/bb_landmark/loose_bb_test.csv annotation: vggfaces2.pickle - dataset_meta: vgfces2.json + dataset_meta: vggfaces2.json - name: semantic_segmentation_adas data_source: segmentation From 58dbdf4ccbda9a66b5b5f593cd631314bb1ade3e Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 27 Sep 2019 18:10:15 +0300 Subject: [PATCH 057/927] Refactoring --- CONTRIBUTING.md | 143 ++++++++++++++++++++++-------------------------- 1 file changed, 66 insertions(+), 77 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c80beaec530..f9fb04bd3f4 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,60 +1,38 @@ # How to contribute model to Open Model Zoo -From this document you will learn how to contribute your model to OpenVINO™ Open Model Zoo (OMZ). It could be done in few steps. - -1. [Model location](#model-location) -2. [Model conversion](#model-conversion) -3. [Demo](#demo) -4. [Accuracy validation](#accuracy-validation) -7. [Configuration file](#configuration-file) -6. [Documentation](#documentation) -7. [Pull request requirements](#pull-request-requirements) - -List of supported frameworks: -* Caffe\* -* Caffe2\* (by conversion to ONNX\*) -* TensorFlow\* -* MXNet\* -* PyTorch\* (by conversion to ONNX\*) - -## Model location - -Upload your model to any Internet file storage. 
The main requirements are that the model must either be downloadable from a direct HTTP(S) link or from Google Drive\*. - -*After this step you will get **links** to the model* - -## Model conversion - -Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. Find more information about conversion in [[Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. - -> **NOTE 1**: due to OpenVINO™ paradigms, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. - -> **NOTE 2**: due to OpenVINO™ paradigms, if model input is a color image, color channel order should be `BGR`. - -*After this step you`ll get **conversion parameters** for Model Optimizer.* - -## Demo - -The demo shows the main idea of model inference using IE. If your model solves one of the tasks supported by Open Model Zoo, find appropriate demo from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or sample from [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). - -If appropriate demo or sample are absent, you must provide your own demo (C++ or Python). Demos are required to support the following keys: +## Pull request requirements -- `-i ""` Required. Input to process. -- `-m ""` Required. Path to an .xml file with a trained model. -- `-d ""` Optional. Target device for model inference. Default is CPU. -- `-no_show` Optional. Do not visualize inference results. +Contribution to OpenVINO™ Open Model Zoo (OMZ) comes down to creating pull request (PR) in this repository. Please use `develop` branch when creating your PR. Pull request is strictly formalized and must contains: +* configuration file - `model.yml` (learn more in [Configuration file](#configuration-file) section) +* documentation of model in markdown format (learn more in [Documentation](#documentation) section) +* accuracy validation configuration file (learn more in [Accuracy Validation](#accuracy-validation) section) +* license added to [tools/downloader/license.txt](tools/downloader/license.txt) +* (*optional*) demo (learn more about it in [Demo](#demo) section) -Also you can add any other necessary parameters. +Name your model in OMZ using next rules: +- name must be consistent with name given by authors, but full match not necessary +- use lowercase +- spaces are not allowed in the name, use `-` or `_` (`-` is preferable) as delimiters instead +- if necessary, add suffix to model name, according to origin framework (see **`framework`** description in [configuration file](#configuration-file) section) -*After this step you'll get **demo** for your model (if no demo was available)* +This name will be used for downloading, converting, etc. +Example: +``` +resnet-50-pytorch +mobilenet-v2-1.0-224 +``` -## Accuracy validation +Configuration and documentation files must be located in `models/public/` directory. -Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use IE to run converted model or original framework to run original model. 
Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models) +Validation configuration file must be located in [`tools/accuracy_checker/configs`](tools/accuracy_checker/configs). -When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. +If you adding demo, it must be locate it in [demos](/demos) folder. Learn more about it in [Demo](#demo) section. -*After this step you will get accuracy validation configuration file - **.yml*** +This PR must pass next tests: +* model is downloadable by `tools/downloader/downloader.py` script (see [Configuration file](#configuration-file) for details) +* model is convertible by `tools/downloader/converter.py` script (see [Model conversion](#model-conversion) for details) +* model can be used by demo or sample and provides adequate results (see [Demo](#demo) for details) +* model passes accuracy validation (see [Accuracy validation](#accuracy-validation) for details) ## Configuration file @@ -83,6 +61,8 @@ If task, that your model solve, is not listed here, please add new type of task **`files`** +> Before filling this section, you must ensure that a model is downloadable either from a direct HTTP(S) link or from Google Drive\*. + You describe all files, which need to be downloaded, in this section. Each file is described in few tags: * `name` sets file name after downloading @@ -94,6 +74,8 @@ If file is located on Google Drive\*, section `source` must contain: - `$type: google_drive` - `id` file ID on Google Drive\* +> **NOTE:** if file is located on GitHub\* the version of the file must be fixed. + **`postprocessing`** (*optional*) Sometimes right after downloading model is not ready for conversion by Model Optimizer and some additional preprocessing needed, such as unpacking, replacing or deleting some part of file. This manipulation is described in this section. @@ -106,17 +88,13 @@ For unpacking archive: For replacement operations: - `$type: regex_replace` - `file` name of file where replacement must be executed -- `pattern` string or regexp ([learn more](https://docs.python.org/2/library/re.html)) to find +- `pattern` string or regexp ([learn more](https://docs.python.org/2/library/re.html)) - `replacement` replacement string - `count` (*optional*) maximum number of pattern occurrences to be replaced -**`pytorch_to_onnx`** (*optional*) - -List of pytorch-to-onnx conversion parameters, see `model_optimizer_args` for details. - -**`caffe2_to_onnx`** (*optional*) +**`conversion_to_onnx_args`** (*optional*) -List of caffe2-to-onnx conversion parameters, see `model_optimizer_args` for details. +List of onnx conversion parameters, see `model_optimizer_args` for details. Applicable for Caffe2\* and PyTorch\* frameworks. **`model_optimizer_args`** @@ -142,6 +120,39 @@ Path to model's license. 
---- *After this step you will obtain **model.yml** file* +## Model conversion + +Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. Find more information about conversion in [[Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. + +> **NOTE 1**: due to OpenVINO™ paradigms, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. + +> **NOTE 2**: due to OpenVINO™ paradigms, if model input is a color image, color channel order should be `BGR`. + +*After this step you`ll get **conversion parameters** for Model Optimizer.* + +## Demo + +The demo shows the main idea of model inference using IE. If your model solves one of the tasks supported by Open Model Zoo, find appropriate demo from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or sample from [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). + +If appropriate demo or sample are absent, you must provide your own demo (C++ or Python). Demos are required to support the following keys: + +- `-i ""` Required. Input to process. +- `-m ""` Required. Path to an .xml file with a trained model. +- `-d ""` Optional. Target device for model inference. Default is CPU. +- `-no_show` Optional. Do not visualize inference results. + +Also you can add any other necessary parameters. + +*After this step you'll get **demo** for your model (if no demo was available)* + +## Accuracy validation + +Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use IE to run converted model or original framework to run original model. Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models). + +When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. + +*After this step you will get accuracy validation configuration file - **.yml*** + ### Example In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from Google Drive\* as archive. @@ -197,28 +208,6 @@ Detailed structure and headers naming convention you can learn from any other mo --- *After this step you will obtain **.md** - documentation file* -## Pull request requirements - -Contribution to OpenVINO™ Open Model Zoo comes down to creating pull request in this repository. Please use `develop` branch when creating your PR. 
Pull request is strictly formalized and must contains changes: -* configuration file - `model.yml` from [here](#configuration-file) -* documentation of model in markdown format from [here](#documentation) -* accuracy validation configuration file from [here](#accuracy-validation) -* license added to [tools/downloader/license.txt](tools/downloader/license.txt) - -> If model uses your own demo, add it to [demos](/demos) folder. - -> If you made any other changes, that make auto downloading and conversion possible, add it too. - -Configuration and documentation files must be located in `models/public` directory in subfolder, which name will represent model name in Open Model Zoo and will be used by downloader and converter tools. Also, please add suffix to model name, according to origin framework (e.g. `cf`, `cf2`, `tf`, `mx` or `pt`). - -Validation configuration file must be located in [tools/accuracy_checker/configs](tools/accuracy_checker/configs). - -This PR must pass next tests: -* model is downloadable by `tools/downloader/downloader.py` script -* model is convertible by `tools/downloader/converter.py` script -* model can be used by demo or sample and provides adequate results -* model passes accuracy validation - ## Legal Information [\*] Other names and brands may be claimed as the property of others. From cdbe2d3b0afb6af1ed72febbea2be11383d98f7d Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Wed, 2 Oct 2019 13:12:08 +0300 Subject: [PATCH 058/927] Added "legal" info --- CONTRIBUTING.md | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index f9fb04bd3f4..b4fb1724f3c 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,8 +1,17 @@ # How to contribute model to Open Model Zoo +We appreciate your intention to contribute model to OpenVINO™ Open Model Zoo (OMZ). This guide would help you and explain main issues. OMZ is licensed under Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. Please note, that we accept models under permissive license: **MIT**, **Apache 2.0**, **BSD-3-Clause**. In other case it may take longer time to get approval (or even denial) for your model. + +Nowadays OMZ supports models from frameworks: +* Caffe\* +* Caffe2\* (via conversion to ONNX\*) +* TensorFlow\* +* PyTorch\* (via conversion to ONNX\*) +* MXNet\* + ## Pull request requirements -Contribution to OpenVINO™ Open Model Zoo (OMZ) comes down to creating pull request (PR) in this repository. Please use `develop` branch when creating your PR. Pull request is strictly formalized and must contains: +Contribution to OMZ comes down to creating pull request (PR) in this repository. Please use `develop` branch when creating your PR. Pull request is strictly formalized and must contains: * configuration file - `model.yml` (learn more in [Configuration file](#configuration-file) section) * documentation of model in markdown format (learn more in [Documentation](#documentation) section) * accuracy validation configuration file (learn more in [Accuracy Validation](#accuracy-validation) section) @@ -34,6 +43,13 @@ This PR must pass next tests: * model can be used by demo or sample and provides adequate results (see [Demo](#demo) for details) * model passes accuracy validation (see [Accuracy validation](#accuracy-validation) for details) +After the end, your PR will be review by OpenVINO™'s team for consistence and legal. 
+ +Your PR can be denied in case: +* inappropriate license (e.g. GPL-like licenses) +* inaccessible dataset +* PR fails one of the test above + ## Configuration file Models configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be located in the model subfolder. Let look closer to the file content. @@ -70,6 +86,8 @@ You describe all files, which need to be downloaded, in this section. Each file * `sha256` sets file hash sum * `source` sets direct link to file *OR* describes file access parameters +> You may obtain hash sum using `sha256sum ` command on Linux\*. + If file is located on Google Drive\*, section `source` must contain: - `$type: google_drive` - `id` file ID on Google Drive\* @@ -149,6 +167,8 @@ Also you can add any other necessary parameters. Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use IE to run converted model or original framework to run original model. Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models). +If model uses dataset which is unsupported by Accuracy Checker, you also must provide link to it. Please notice this issue in PR description. Don't forget about dataset license too (see [above](#how-to-contribute-model-to-open-model-zoo)). + When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. *After this step you will get accuracy validation configuration file - **.yml*** @@ -200,6 +220,7 @@ Documentation should contain: * framework * GFLOPs (*if available*) * number of parameters (*if available*) +* validation dataset description/link * main accuracy values (also description of metric) * detailed description of input and output for original and converted models From ff498a512f4676a3840a71aa39fbda4b477136ad Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 2 Oct 2019 13:24:28 +0300 Subject: [PATCH 059/927] added parameters descriptions and readme --- tools/accuracy_checker/README.md | 2 ++ .../launcher/pytorch_launcher.py | 18 +++++++--- .../launcher/pytorch_launcher_readme.md | 36 +++++++++++++++++++ 3 files changed, 51 insertions(+), 5 deletions(-) create mode 100644 tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher_readme.md diff --git a/tools/accuracy_checker/README.md b/tools/accuracy_checker/README.md index d36daf4479c..d8557516838 100644 --- a/tools/accuracy_checker/README.md +++ b/tools/accuracy_checker/README.md @@ -49,6 +49,7 @@ In order to evaluate some models required frameworks have to be installed. Accur - [OpenCV DNN](https://docs.opencv.org/4.1.0/d2/de6/tutorial_py_setup_in_ubuntu.html). - [TensorFlow](https://www.tensorflow.org/). - [ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/README.md). 
+- [Pytorch](https://pytorch.org/) You can use any of them or several at a time. @@ -124,6 +125,7 @@ Please view: - [how to configure TensorFlow Launcher](accuracy_checker/launcher/tf_launcher_readme.md). - [how to configure TensorFlow Lite Launcher](accuracy_checker/launcher/tf_lite_launcher_readme.md). - [how to configure ONNX Runtime Launcher](accuracy_checker/launcher/onnx_runtime_launcher_readme.md). +- [how to configure Pytorch Launcher](accuracy_checker/launcher/pytorch_launcher_readme.md) ### Datasets diff --git a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py index 246b2728dc3..cdada200652 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py @@ -21,11 +21,19 @@ class PyTorchLauncher(Launcher): def parameters(cls): parameters = super().parameters() parameters.update({ - 'module': StringField(regex=MODULE_REGEX), - 'checkpoint': PathField(check_exists=True, is_directory=False, optional=True), - 'python_path': PathField(check_exists=True, is_directory=True, optional=True), - 'module_args': ListField(optional=True), - 'module_kwargs': DictField(key_type=str, validate_values=False, optional=True, default={}), + 'module': StringField(regex=MODULE_REGEX, description='Network module for loading'), + 'checkpoint': PathField( + check_exists=True, is_directory=False, optional=True, description='pre-trained model checkpoint' + ), + 'python_path': PathField( + check_exists=True, is_directory=True, optional=True, + description='appendix for PYTHONPATH for making network module visible in current python environment' + ), + 'module_args': ListField(optional=True, description='positional arguments for network module'), + 'module_kwargs': DictField( + key_type=str, validate_values=False, optional=True, default={}, + description='keyword arguments for network module' + ), 'device': StringField(default='cpu', regex=DEVICE_REGEX), 'batch': NumberField(value_type=float, min_value=1, optional=True, description="Batch size.", default=1), 'output_names': ListField( diff --git a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher_readme.md b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher_readme.md new file mode 100644 index 00000000000..0df5e9263b4 --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher_readme.md @@ -0,0 +1,36 @@ +# How to configure Pytorch launcher + +For enabling Pytorch launcher you need to add `framework: pytorch` in launchers section of your configuration file and provide following parameters: + +* `device` - specifies which device will be used for infer (`cpu`, `cuda` and so on). +* `module`- pytorch network module for loading. +* `checkpoint` - pre-trained model checkpoint (Optional). +* `python_path` - appendix for PYTHONPATH for making network module visible in current python environment (Optional). +* `module_args` - list of positional arguments for network module (Optional). +* `module_kwargs` - dictionary (`key`: `value` where `key` is argument name, `value` is argument value) which represent network module keyword arguments. +* `adapter` - approach how raw output will be converted to representation of dataset problem, some adapters can be specific to framework. You can find detailed instruction how to use adapters [here](../adapters/README.md). +* `batch` - batch size for running model (Optional, default 1). 
+If your model has several inputs, you need to specify them in the config using the dedicated `inputs` parameter.
+Each input description should contain the following info:
+ * `name` - input layer name in the network
+ * `type` - type of input values; it affects the filling policy. Available options:
+ * `CONST_INPUT` - input will be filled using the constant provided in the config. It also requires `value` to be provided.
+ * `IMAGE_INFO` - specific key for setting information about the input shape to a layer (used in Faster RCNN based topologies). You do not need to provide `value`, because it will be calculated at runtime. The value format is `Nx[H, W, S]`, where `N` is the batch size, `H` - original image height, `W` - original image width, `S` - scale of the original image (default 1).
+ * `INPUT` - network input for the main data stream (e.g. images). If you have several data inputs, you should provide a regular expression for the identifier as `value` to specify which data should be fed to a specific input.
+ * `shape` - shape of the input layer described as a comma-separated list of all dimension sizes except the batch size.
+ Optionally, you can specify `layout` in case your model was trained with a non-standard data layout (for Pytorch the default layout is `NCHW`).
+If your model has several outputs, you also need to specify their names in the config via the `output_names` option to be able to get their values in the adapter.
+
+Pytorch launcher config example (demonstrates how to run the AlexNet model from [torchvision](https://pytorch.org/docs/stable/torchvision/models.html)):
+
+```yml
+launchers:
+  - framework: pytorch
+    device: CPU
+    module: torchvision.models.alexnet
+
+    module_kwargs:
+      pretrained: True
+
+    adapter: classification
+```
From d0eaf856f784fa559083afe873605fd325f97ba7 Mon Sep 17 00:00:00 2001
From: Roman Donchenko
Date: Wed, 2 Oct 2019 14:26:17 +0300
Subject: [PATCH 060/927] tools/accuracy_checker: move the list of dependencies
 to a standalone file

This makes it easier to use pip-compile on it.

Turn `tests_requirements` into a list, which is both shorter and consistent
with `requirements`.
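As a minimal usage sketch of that workflow (assuming the `pip-tools` package, which provides the `pip-compile` command, is installed — it is not a dependency added by this patch, and the pinned output file name is only an example):

```sh
# Sketch: pin the loose requirements listed in requirements.in
python3 -mpip install --user pip-tools
cd tools/accuracy_checker
pip-compile requirements.in --output-file requirements.txt
```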
--- tools/accuracy_checker/requirements.in | 10 ++++++++ tools/accuracy_checker/setup.py | 34 +++++++++----------------- 2 files changed, 21 insertions(+), 23 deletions(-) create mode 100644 tools/accuracy_checker/requirements.in diff --git a/tools/accuracy_checker/requirements.in b/tools/accuracy_checker/requirements.in new file mode 100644 index 00000000000..99f80689dc8 --- /dev/null +++ b/tools/accuracy_checker/requirements.in @@ -0,0 +1,10 @@ +nibabel +numpy +pillow +py-cpuinfo<=4.0 +PyYAML +scikit-learn +scipy +shapely +tqdm +yamlloader diff --git a/tools/accuracy_checker/setup.py b/tools/accuracy_checker/setup.py index 87b9026fe10..1374ab719e4 100644 --- a/tools/accuracy_checker/setup.py +++ b/tools/accuracy_checker/setup.py @@ -22,27 +22,6 @@ from setuptools.command.test import test as test_command from pathlib import Path -requirements = OrderedDict([ - ('NumPy', 'numpy'), - ('tqdm', 'tqdm'), - ('PyYAML', 'PyYAML'), - ('ymlloader', 'yamlloader'), - ('Pillow', 'pillow'), - ('scikit-learn', 'scikit-learn'), - ('scipy', 'scipy'), - ('cpuinfo', 'py-cpuinfo<=4.0'), - ('shapely', 'shapely'), - ('nibabel', 'nibabel') -]) - -try: - importlib.import_module('cv2') -except ImportError: - requirements['opencv'] = 'opencv-python' - -tests_requirements = OrderedDict([("PyTest", 'pytest==4.0.0'), ("PyTest Mock", 'pytest-mock==1.10.4')]) - - class PyTest(test_command): user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")] @@ -77,6 +56,15 @@ def find_version(*path): long_description = read("README.md") version = find_version("accuracy_checker", "__init__.py") +requirements = [read("requirements.in")] + +try: + importlib.import_module('cv2') +except ImportError: + requirements.append('opencv-python') + +tests_requirements = ['pytest==4.0.0', 'pytest-mock==1.10.4'] + setup( name="accuracy_checker", description="Deep Learning Accuracy validation framework", @@ -90,7 +78,7 @@ def find_version(*path): ]}, zip_safe=False, python_requires='>=3.5', - install_requires=list(requirements.values()), - tests_require=list(tests_requirements.values()), + install_requires=requirements, + tests_require=tests_requirements, cmdclass={'test': PyTest} ) From 40057f2ab9899d660baa06b36a23034b59338fc3 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 2 Oct 2019 15:01:03 +0300 Subject: [PATCH 061/927] AC: extend color space conversion (#472) --- .../accuracy_checker/preprocessor/README.md | 4 +++- .../accuracy_checker/preprocessor/__init__.py | 4 +++- .../preprocessor/color_space_conversion.py | 19 +++++++++++++++++++ 3 files changed, 25 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/preprocessor/README.md b/tools/accuracy_checker/accuracy_checker/preprocessor/README.md index a2929dff680..3172dfa43ed 100644 --- a/tools/accuracy_checker/accuracy_checker/preprocessor/README.md +++ b/tools/accuracy_checker/accuracy_checker/preprocessor/README.md @@ -44,7 +44,9 @@ Accuracy Checker supports following set of preprocessors: These parameters support work with precomputed values of frequently used datasets (e.g. `cifar10` or `imagenet`). * `bgr_to_rgb` - reversing image channels. Convert image in BGR format to RGB. -* `bgr_to_gray` - converting image in BGR to grayscale color space. +* `bgr_to_gray` - converting image in BGR to gray scale color space. +* `rgb_to_bgr` - reversing image channels. Convert image in RGB format to BGR. +* `rgb_to_gray` - converting image in RGB to gray scale color space. * `flip` - image mirroring around specified axis. 
* `mode` specifies the axis for flipping (`vertical` or `horizontal`). * `crop` - central cropping for image. diff --git a/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py b/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py index 6b05569e881..9e803986624 100644 --- a/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/preprocessor/__init__.py @@ -16,7 +16,7 @@ from .preprocessing_executor import PreprocessingExecutor from .preprocessor import Preprocessor -from .color_space_conversion import BgrToRgb, BgrToGray, TfConvertImageDType +from .color_space_conversion import BgrToRgb, RgbToBgr, BgrToGray, RgbToGray, TfConvertImageDType from .normalization import Normalize, Normalize3d from .geometric_transformations import ( GeometricOperationMetadata, @@ -50,6 +50,8 @@ 'BgrToGray', 'BgrToRgb', + 'RgbToGray', + 'RgbToBgr', 'TfConvertImageDType', 'Normalize3d', diff --git a/tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py b/tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py index 199a3253b41..c3c4474f4ee 100644 --- a/tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py +++ b/tools/accuracy_checker/accuracy_checker/preprocessor/color_space_conversion.py @@ -45,6 +45,25 @@ def process(self, image, annotation_meta=None): image.data = np.expand_dims(cv2.cvtColor(image.data, cv2.COLOR_BGR2GRAY).astype(np.float32), -1) return image +class RgbToBgr(Preprocessor): + __provider__ = 'rgb_to_bgr' + + def process(self, image, annotation_meta=None): + def process_data(data): + return cv2.cvtColor(data, cv2.COLOR_RGB2BGR) + image.data = process_data(image.data) if not isinstance(image.data, list) else [ + process_data(fragment) for fragment in image.data + ] + return image + + +class RgbToGray(Preprocessor): + __provider__ = 'rgb_to_gray' + + def process(self, image, annotation_meta=None): + image.data = np.expand_dims(cv2.cvtColor(image.data, cv2.COLOR_RGB2GRAY).astype(np.float32), -1) + return image + class TfConvertImageDType(Preprocessor): __provider__ = 'tf_convert_image_dtype' From 0b0cfc55eb23e64269fc0b80a9a9c03182da8694 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 2 Oct 2019 15:01:33 +0300 Subject: [PATCH 062/927] AC: do not store annotattion and predictions if they is not used (#470) --- .../accuracy_checker/evaluators/model_evaluator.py | 5 +++-- .../evaluators/quantization_model_evaluator.py | 6 +++--- .../accuracy_checker/accuracy_checker/metrics/hit_ratio.py | 4 ++-- .../accuracy_checker/metrics/metric_executor.py | 5 ++++- 4 files changed, 12 insertions(+), 8 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py index 91951b4d950..cbf4469ae40 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py @@ -136,8 +136,9 @@ def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, if not self.postprocessor.has_dataset_processors: self.metric_executor.update_metrics_on_batch(annotations, predictions) - self._annotations.extend(annotations) - self._predictions.extend(predictions) + if self.metric_executor.need_store_predictions: + self._annotations.extend(annotations) + self._predictions.extend(predictions) if progress_reporter: progress_reporter.update(batch_id, len(batch_predictions)) diff --git 
a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index d8a3fff9727..548ba4cb8fb 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -133,9 +133,9 @@ def _create_subset(subset, num_images): if self.metric_executor: self.metric_executor.update_metrics_on_batch(annotations, predictions) - - self._annotations.extend(annotations) - self._predictions.extend(predictions) + if self.metric_executor.need_store_predictions: + self._annotations.extend(annotations) + self._predictions.extend(predictions) if progress_reporter: progress_reporter.update(batch_id, len(batch_predictions)) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/hit_ratio.py b/tools/accuracy_checker/accuracy_checker/metrics/hit_ratio.py index de27dccf9c9..853d692d75e 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/hit_ratio.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/hit_ratio.py @@ -20,10 +20,10 @@ import numpy as np from ..representation import HitRatioAnnotation, HitRatioPrediction -from .metric import FullDatasetEvaluationMetric +from .metric import PerImageEvaluationMetric from ..config import NumberField -class BaseRecommenderMetric(FullDatasetEvaluationMetric): +class BaseRecommenderMetric(PerImageEvaluationMetric): annotation_types = (HitRatioAnnotation, ) prediction_types = (HitRatioPrediction, ) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py b/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py index f3d5426f46d..18344d4ec3a 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py @@ -19,7 +19,7 @@ from ..presenters import BasePresenter, EvaluationResult from ..config import StringField from ..utils import zipped_transform -from .metric import Metric +from .metric import Metric, FullDatasetEvaluationMetric from ..config import ConfigValidator, ConfigError MetricInstance = namedtuple( @@ -47,6 +47,7 @@ def __init__(self, metrics_config, dataset=None, state=None): reference = 'reference' threshold = 'threshold' presenter = 'presenter' + self.need_store_predictions = False for metric_config_entry in metrics_config: metric_config = ConfigValidator( "metrics", on_extra_argument=ConfigValidator.IGNORE_ON_EXTRA_ARGUMENT, @@ -70,6 +71,8 @@ def __init__(self, metrics_config, dataset=None, state=None): metric_config_entry.get(threshold), metric_presenter )) + if isinstance(metric_fn, FullDatasetEvaluationMetric): + self.need_store_predictions = True @classmethod def parameters(cls): From 3218a0518ff95e4ba17ec422826ad5fce7111f97 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 2 Oct 2019 15:01:59 +0300 Subject: [PATCH 063/927] AC: add __len__ attribute to dataset (#436) --- .../accuracy_checker/dataset.py | 21 +++++++++++-------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index 90a92f24b68..52789005124 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -101,7 +101,9 @@ def config(self): return deepcopy(self._config) #read-only def __len__(self): - return self.size + if self.subset: + return len(self.subset) + 
return len(self._annotation) @property def metadata(self): @@ -113,9 +115,7 @@ def labels(self): @property def size(self): - if self.subset: - return len(self.subset) - return len(self._annotation) + return self.__len__() @property def full_size(self): @@ -228,6 +228,13 @@ def __getitem__(self, item): return batch_annotation, batch_input, batch_identifiers + def __len__(self): + if self.annotation_reader: + return self.annotation_reader.size + if self.subset: + return len(self.subset) + return len(self._identifiers) + def make_subset(self, ids=None, start=0, step=1, end=None): if self.annotation_reader: self.annotation_reader.make_subset(ids, start, step, end) @@ -262,8 +269,4 @@ def full_size(self): @property def size(self): - if self.annotation_reader: - return self.annotation_reader.size - if self.subset: - return len(self.subset) - return len(self._identifiers) + return self.__len__() From f04cb6bd3331c991325690714c73c1ad43c92420 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Wed, 2 Oct 2019 15:14:13 +0300 Subject: [PATCH 064/927] FIX --- tools/downloader/common.py | 7 +++++-- tools/downloader/converter.py | 14 ++++++-------- 2 files changed, 11 insertions(+), 10 deletions(-) diff --git a/tools/downloader/common.py b/tools/downloader/common.py index 073fd468f24..66b1f40fd8d 100644 --- a/tools/downloader/common.py +++ b/tools/downloader/common.py @@ -30,6 +30,7 @@ # make sure to update the documentation if you modify these KNOWN_FRAMEWORKS = {'caffe', 'dldt', 'mxnet', 'pytorch', 'tf'} +KNOWN_ONNX_SUPPORT = {'pytorch'} KNOWN_PRECISIONS = {'FP16', 'FP32', 'INT1', 'INT8'} KNOWN_TASK_TYPES = { 'action_recognition', @@ -328,13 +329,15 @@ def deserialize(cls, model, name, subdirectory): with deserialization_context('"postprocessing" #{}'.format(i)): postprocessing.append(Postproc.deserialize(postproc)) + framework = validate_string_enum('"framework"', model['framework'], KNOWN_FRAMEWORKS) + conversion_to_onnx_args = None if model.get('conversion_to_onnx_args', None): + if framework not in KNOWN_ONNX_SUPPORT: + raise DeserializationError('Conversion to ONNX not supported for "{}" framework!'.format(framework)) conversion_to_onnx_args = [validate_string('"conversion_to_onnx_args" #{}'.format(i), arg) for i, arg in enumerate(model['conversion_to_onnx_args'])] - framework = validate_string_enum('"framework"', model['framework'], KNOWN_FRAMEWORKS) - if 'model_optimizer_args' in model: mo_args = [validate_string('"model_optimizer_args" #{}'.format(i), arg) for i, arg in enumerate(model['model_optimizer_args'])] diff --git a/tools/downloader/converter.py b/tools/downloader/converter.py index 3849c37e684..8c1b4f1711d 100755 --- a/tools/downloader/converter.py +++ b/tools/downloader/converter.py @@ -58,21 +58,19 @@ def prefixed_subprocess(prefix, args): def convert_to_onnx(model, output_dir, args, stdout_prefix): + script_dir = Path(__file__).absolute().parent converters = { - 'pytorch': Path(__file__).absolute().parent / 'pytorch_to_onnx.py' + 'pytorch': 'pytorch_to_onnx.py', } - prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX from {}', - '(DRY RUN) ' if args.dry_run else '', model.name, model.framework) + prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX ', + '(DRY RUN) ' if args.dry_run else '', model.name) + converter = converters[model.framework] conversion_to_onnx_args = [string.Template(arg).substitute(conv_dir=output_dir / model.subdirectory, dl_dir=args.download_dir / model.subdirectory) for arg in model.conversion_to_onnx_args] - converter = 
converters.get(model.framework) - if converter is None: - prefixed_printf(stdout_prefix, 'Conversion to ONNX not supported for {} framework', model.framework) - return False + cmd = [str(args.python), str(script_dir / converter), *conversion_to_onnx_args] - cmd = [str(args.python), str(converter), *conversion_to_onnx_args] prefixed_printf(stdout_prefix, 'Conversion to ONNX command: {}', ' '.join(map(quote_arg, cmd))) return True if args.dry_run else prefixed_subprocess(stdout_prefix, cmd) From 4014aa5a31684fa48b335a47ef9db8c8800e7b27 Mon Sep 17 00:00:00 2001 From: Roman Donchenko Date: Wed, 2 Oct 2019 15:22:05 +0300 Subject: [PATCH 065/927] Bump the develop version to R4 --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 85ed05305da..d4528d8e71f 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Open Model Zoo repository [![Build Status](http://134.191.240.124/buildStatus/icon?job=omz/2018/trigger)](http://134.191.240.124/job/omz/job/2018/job/trigger/) -[![Stable release](https://img.shields.io/badge/version-2019_R3-green.svg)](https://github.com/opencv/open_model_zoo/releases/tag/2019_R3) +[![Stable release](https://img.shields.io/badge/version-2019_R4-green.svg)](https://github.com/opencv/open_model_zoo/releases/tag/2019_R4) [![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/open_model_zoo/community) [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE) From 4158f4f65d847056033e7822bfe87989bf97bd18 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Wed, 2 Oct 2019 17:02:53 +0300 Subject: [PATCH 066/927] Conversion script moved to class Model --- tools/downloader/common.py | 25 ++++++++++++++++++------- tools/downloader/converter.py | 9 ++------- 2 files changed, 20 insertions(+), 14 deletions(-) diff --git a/tools/downloader/common.py b/tools/downloader/common.py index 66b1f40fd8d..1af2a665652 100644 --- a/tools/downloader/common.py +++ b/tools/downloader/common.py @@ -29,8 +29,13 @@ DOWNLOAD_TIMEOUT = 5 * 60 # make sure to update the documentation if you modify these -KNOWN_FRAMEWORKS = {'caffe', 'dldt', 'mxnet', 'pytorch', 'tf'} -KNOWN_ONNX_SUPPORT = {'pytorch'} +KNOWN_FRAMEWORKS = { + 'caffe': None, + 'dldt': None, + 'mxnet': None, + 'pytorch': 'pytorch_to_onnx.py', + 'tf': None, +} KNOWN_PRECISIONS = {'FP16', 'FP32', 'INT1', 'INT8'} KNOWN_TASK_TYPES = { 'action_recognition', @@ -305,6 +310,7 @@ def __init__(self, name, subdirectory, files, postprocessing, mo_args, framework self.precisions = precisions self.task_type = task_type self.conversion_to_onnx_args = conversion_to_onnx_args + self.converter_to_onnx = KNOWN_FRAMEWORKS[framework] @classmethod def deserialize(cls, model, name, subdirectory): @@ -329,14 +335,19 @@ def deserialize(cls, model, name, subdirectory): with deserialization_context('"postprocessing" #{}'.format(i)): postprocessing.append(Postproc.deserialize(postproc)) - framework = validate_string_enum('"framework"', model['framework'], KNOWN_FRAMEWORKS) + framework = validate_string_enum('"framework"', model['framework'], KNOWN_FRAMEWORKS.keys()) - conversion_to_onnx_args = None - if model.get('conversion_to_onnx_args', None): - if framework not in KNOWN_ONNX_SUPPORT: - raise DeserializationError('Conversion to ONNX not supported for "{}" framework!'.format(framework)) + conversion_to_onnx_args = model.get('conversion_to_onnx_args', None) + if KNOWN_FRAMEWORKS[framework]: + if not 
conversion_to_onnx_args: + raise DeserializationError('"conversion_to_onnx_args" is absent. ' + 'Framework "{}" is supported only by conversion to ONNX.' + .format(framework)) conversion_to_onnx_args = [validate_string('"conversion_to_onnx_args" #{}'.format(i), arg) for i, arg in enumerate(model['conversion_to_onnx_args'])] + else: + if conversion_to_onnx_args: + raise DeserializationError('Conversion to ONNX not supported for "{}" framework'.format(framework)) if 'model_optimizer_args' in model: mo_args = [validate_string('"model_optimizer_args" #{}'.format(i), arg) diff --git a/tools/downloader/converter.py b/tools/downloader/converter.py index 8c1b4f1711d..defe295a863 100755 --- a/tools/downloader/converter.py +++ b/tools/downloader/converter.py @@ -58,18 +58,13 @@ def prefixed_subprocess(prefix, args): def convert_to_onnx(model, output_dir, args, stdout_prefix): - script_dir = Path(__file__).absolute().parent - converters = { - 'pytorch': 'pytorch_to_onnx.py', - } - prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX ', + prefixed_printf(stdout_prefix, '========= {}Converting {} to ONNX', '(DRY RUN) ' if args.dry_run else '', model.name) - converter = converters[model.framework] conversion_to_onnx_args = [string.Template(arg).substitute(conv_dir=output_dir / model.subdirectory, dl_dir=args.download_dir / model.subdirectory) for arg in model.conversion_to_onnx_args] - cmd = [str(args.python), str(script_dir / converter), *conversion_to_onnx_args] + cmd = [str(args.python), str(Path(__file__).absolute().parent / model.converter_to_onnx), *conversion_to_onnx_args] prefixed_printf(stdout_prefix, 'Conversion to ONNX command: {}', ' '.join(map(quote_arg, cmd))) From 51ce2136c8e4494ae5b9f7bca0d0272e7b3cbf9d Mon Sep 17 00:00:00 2001 From: Roman Donchenko Date: Thu, 26 Sep 2019 15:50:49 +0300 Subject: [PATCH 067/927] tools/downloader: Document a workaround for a TLS connection issue The official macOS Python 3.5 binaries are linked with system OpenSSL, which doesn't support TLS 1.2. Unfortunately, some of our file sources only support TLS 1.2 and up. The workaround is to install 'requests[security]', which includes pyOpenSSL (which Requests will automatically use if it's available). Since Python 3.6, the official binaries bundle a newer version of OpenSSL, so the issue no longer exists. --- tools/downloader/README.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/tools/downloader/README.md b/tools/downloader/README.md index 5ddf7575456..7dab59ebcf6 100644 --- a/tools/downloader/README.md +++ b/tools/downloader/README.md @@ -39,6 +39,20 @@ conversion to ONNX format. To use automatic conversion install additional depend python3 -mpip install --user -r ./requirements-pytorch.in ``` +When running the model downloader with Python 3.5.x on macOS, you may encounter +an error similar to the following: + +> requests.exceptions.SSLError: [...] (Caused by SSLError(SSLError(1, '[SSL: TLSV1_ALERT_PROTOCOL_VERSION] +tlsv1 alert protocol version (_ssl.c:719)'),)) + +You can work around this by installing additional packages: + +```sh +python3 -mpip install --user 'requests[security]' +``` + +Alternatively, upgrade to Python 3.6 or a later version. 
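A quick way to check whether this workaround applies (a diagnostic sketch, not part of the documented steps) is to print the OpenSSL build the interpreter is linked against; TLS 1.2 requires OpenSSL 1.0.1 or newer:

```sh
# Shows the OpenSSL version used by this Python's ssl module
python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
```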
+ Model downloader usage ---------------------- From ad0ac79154fa0760e6160feb0dff64732d0ac85b Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Thu, 26 Sep 2019 10:17:52 +0300 Subject: [PATCH 068/927] enabling video super resolution --- .../accuracy_checker/data_readers/__init__.py | 2 ++ .../accuracy_checker/data_readers/data_reader.py | 5 +++++ .../accuracy_checker/launcher/input_feeder.py | 8 +++++++- 3 files changed, 14 insertions(+), 1 deletion(-) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/__init__.py b/tools/accuracy_checker/accuracy_checker/data_readers/__init__.py index dd9216749c2..d0156e6b8a4 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/__init__.py @@ -29,6 +29,7 @@ DataRepresentation, ClipIdentifier, + MultiFramesInputIdentifier, create_reader ) @@ -48,5 +49,6 @@ 'DataRepresentation', 'ClipIdentifier', + 'MultiFramesInputIdentifier', 'create_reader' ] diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index 2b41d14e93d..d1a569ceb8f 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -46,6 +46,7 @@ def __init__(self, data, meta=None, identifier=''): ClipIdentifier = namedtuple('ClipIdentifier', ['video', 'clip_id', 'frames']) +MultiFramesInputIdentifier = namedtuple('MultiFrames', ['input_id', 'frames']) def create_reader(config): @@ -83,6 +84,7 @@ def __init__(self, data_source, config=None, **kwargs): self.read_dispatcher = singledispatch(self.read) self.read_dispatcher.register(list, self._read_list) self.read_dispatcher.register(ClipIdentifier, self._read_clip) + self.read_dispatcher.register(MultiFramesInputIdentifier, self._read_frames_multi_input) self.validate_config() self.configure() @@ -120,6 +122,9 @@ def _read_clip(self, data_id): frames_identifiers = [video / frame for frame in data_id.frames] return self.read_dispatcher(frames_identifiers) + def _read_frames_multi_input(self, data_id): + return self.read_dispatcher(data_id.frames) + def read_item(self, data_id): return DataRepresentation(self.read_dispatcher(data_id), identifier=data_id) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py b/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py index 2cdabcd2164..2f84cb07b5b 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py @@ -19,6 +19,7 @@ from ..config import ConfigError from ..utils import extract_image_representations +from ..data_readers import MultiFramesInputIdentifier LAYER_LAYOUT_TO_IMAGE_LAYOUT = { 'NCHW': [0, 3, 1, 2], @@ -96,6 +97,11 @@ def fill_non_constant_inputs(self, data_representation_batch): if not input_regex: raise ConfigError('Impossible to choose correct data for layer {}.' 
'Please provide regular expression for matching in config.'.format(input_layer)) + if isinstance(identifiers, MultiFramesInputIdentifier): + input_id_order = { + input_index: frame_id for frame_id, input_index in enumerate(identifiers.input_id) + } + input_data = data[input_id_order[input_regex]] data = [data] if np.isscalar(identifiers) else data identifiers = [identifiers] if np.isscalar(identifiers) else identifiers for identifier, data_value in zip(identifiers, data): @@ -139,7 +145,7 @@ def _parse_inputs_config(self, inputs_entry, default_layout='NCHW'): else: config_non_constant_inputs.append(name) if value: - value = re.compile(value) + value = re.compile(value) if not isinstance(value, int) else value non_constant_inputs_mapping[name] = value layout = input_.get('layout', default_layout) layouts[name] = LAYER_LAYOUT_TO_IMAGE_LAYOUT[layout] From b1723334c8aa28707226ab1c9dfb561dd9839d38 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Thu, 26 Sep 2019 11:01:07 +0300 Subject: [PATCH 069/927] ea/video_super_resolution --- .../annotation_converters/__init__.py | 3 +- .../super_resolution_converter.py | 74 ++++++++++++++++++- .../data_readers/data_reader.py | 2 +- .../accuracy_checker/launcher/launcher.py | 1 + 4 files changed, 76 insertions(+), 4 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py index 28b7f40ce1f..19011ce2ade 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py @@ -24,7 +24,7 @@ from .detection_opencv_storage import DetectionOpenCVStorageFormatConverter from .lfw import LFWConverter from .vgg_face_regression import VGGFaceRegressionConverter -from .super_resolution_converter import SRConverter +from .super_resolution_converter import SRConverter, SRMultiFrameConverter from .imagenet import ImageNetFormatConverter from .icdar import ICDAR13RecognitionDatasetConverter, ICDAR15DetectionDatasetConverter from .ms_coco import MSCocoDetectionConverter, MSCocoKeypointsConverter @@ -63,6 +63,7 @@ 'LFWConverter', 'VGGFaceRegressionConverter', 'SRConverter', + 'SRMultiFrameConverter', 'ICDAR13RecognitionDatasetConverter', 'ICDAR15DetectionDatasetConverter', 'MSCocoKeypointsConverter', diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py index 1f4328d0387..755087e36c7 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py @@ -18,8 +18,9 @@ from ..config import PathField, StringField, BoolField, ConfigError, NumberField from ..representation import SuperResolutionAnnotation from ..representation.super_resolution_representation import GTLoader +from ..utils import check_file_existence +from ..data_readers import MultiFramesInputIdentifier from .format_converter import BaseFormatConverter, ConverterReturn -from ..utils import check_file_existence LOADERS_MAPPING = { 'opencv': GTLoader.OPENCV, @@ -95,7 +96,7 @@ def convert(self, check_content=False, progress_callback=None, progress_interval if not check_file_existence(self.data_dir / hr_file_name): content_errors.append('{}: does not exist'.format(self.data_dir / hr_file_name)) if self.two_streams and not 
check_file_existence(self.data_dir / upsampled_file_name): - content_errors.append('{}: does not exist'.format(self.data_dir / hr_file_name)) + content_errors.append('{}: does not exist'.format(self.data_dir / upsampled_file_name)) identifier = [lr_file_name, upsampled_file_name] if self.two_streams else lr_file_name annotation.append(SuperResolutionAnnotation(identifier, hr_file_name, gt_loader=self.annotation_loader)) @@ -109,3 +110,72 @@ def generate_upsample_file(original_image_path, scale_factor, upsampled_file_nam image = cv2.imread(str(original_image_path)) upsampled_image = cv2.resize(image, None, fx=scale_factor, fy=scale_factor, interpolation=cv2.INTER_CUBIC) cv2.imwrite(str(original_image_path.parent / upsampled_file_name), upsampled_image) + + +class SRMultiFrameConverter(BaseFormatConverter): + __provider__ = 'multi_frame_super_resolution' + annotation_types = (SuperResolutionAnnotation, ) + + @classmethod + def parameters(cls): + params = super().parameters() + params.update({ + 'data_dir': PathField( + is_directory=True, description="Path to folder, where images in low and high resolution are located." + ), + 'lr_suffix': StringField( + optional=True, default="lr", description="Low resolution file name's suffix." + ), + 'hr_suffix': StringField( + optional=True, default="hr", description="High resolution file name's suffix." + ), + 'number_input_frames': NumberField( + description='number inputs per inference', value_type=int, + ), + 'max_frame_id': NumberField( + description='the last frame index', optional=True, value_type=int, + ), + 'annotation_loader': StringField( + optional=True, choices=LOADERS_MAPPING.keys(), default='pillow', + description="Which library will be used for ground truth image reading. " + "Supported: {}".format(', '.join(LOADERS_MAPPING.keys())) + ) + }) + return params + + def configure(self): + self.data_dir = self.get_value_from_config('data_dir') + self.lr_suffix = self.get_value_from_config('lr_suffix') + self.hr_suffix = self.get_value_from_config('hr_suffix') + self.annotation_loader = LOADERS_MAPPING.get(self.get_value_from_config('annotation_loader')) + self.num_frames = self.get_value_from_config('number_input_frames') + self.max_frame_id = self.get_value_from_config('max_frame_id') + + def convert(self, check_content=False, progress_callback=None, progress_interval=100, **kwargs): + content_errors = [] if check_content else None + frames_ids = [] + annotations = [] + if not self.max_frame_id: + for file_in_dir in self.data_dir.iterdir(): + image_name = file_in_dir.parts[-1] + if self.lr_suffix in image_name and self.hr_suffix not in image_name: + frames_ids.append(int(image_name.split(self.lr_suffix))) + frames_ids.sort() + else: + frames_ids = list(range(self.max_frame_id)) + num_iterations = len(frames_ids) + for idx, _ in enumerate(frames_ids): + if len(frames_ids) - idx < self.num_frames: + break + input_ids = list(range(self.num_frames)) + input_frames = ['{}{}'.format(idx + shift, self.lr_suffix)for shift in input_ids] + hr_name = self.hr_suffix.join(input_frames[0].split(self.lr_suffix)) + if check_content and not check_file_existence(self.data_dir / hr_name): + content_errors.append('{}: does not exist'.format(self.data_dir / hr_name)) + annotations.append(SuperResolutionAnnotation( + MultiFramesInputIdentifier(input_ids, input_frames), hr_name, gt_loader=self.annotation_loader + )) + if progress_callback and idx % progress_interval == 0: + progress_callback(idx * 100 / num_iterations) + + return ConverterReturn(annotations, None, 
content_errors) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index d1a569ceb8f..44258d4fdf0 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -46,7 +46,7 @@ def __init__(self, data, meta=None, identifier=''): ClipIdentifier = namedtuple('ClipIdentifier', ['video', 'clip_id', 'frames']) -MultiFramesInputIdentifier = namedtuple('MultiFrames', ['input_id', 'frames']) +MultiFramesInputIdentifier = namedtuple('MultiFramesInputIdentifier', ['input_id', 'frames']) def create_reader(config): diff --git a/tools/accuracy_checker/accuracy_checker/launcher/launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/launcher.py index 414440a2ab1..5ab5135377f 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/launcher.py @@ -152,6 +152,7 @@ def inputs_info_for_meta(self): def name(self): return self.__provider__ + def unsupported_launcher(name, error_message=None): class UnsupportedLauncher(Launcher): __provider__ = name From 6d4228334e6652610a05fd6dba2f4ed557fb9a9c Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Mon, 30 Sep 2019 14:37:27 +0300 Subject: [PATCH 070/927] apply fix --- .../super_resolution_converter.py | 24 +++++++++---------- .../accuracy_checker/launcher/input_feeder.py | 19 ++++++++------- 2 files changed, 22 insertions(+), 21 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py index 755087e36c7..adf2e7d28ad 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/super_resolution_converter.py @@ -15,6 +15,7 @@ """ import cv2 +import numpy as np from ..config import PathField, StringField, BoolField, ConfigError, NumberField from ..representation import SuperResolutionAnnotation from ..representation.super_resolution_representation import GTLoader @@ -132,9 +133,6 @@ def parameters(cls): 'number_input_frames': NumberField( description='number inputs per inference', value_type=int, ), - 'max_frame_id': NumberField( - description='the last frame index', optional=True, value_type=int, - ), 'annotation_loader': StringField( optional=True, choices=LOADERS_MAPPING.keys(), default='pillow', description="Which library will be used for ground truth image reading. 
" @@ -154,21 +152,23 @@ def configure(self): def convert(self, check_content=False, progress_callback=None, progress_interval=100, **kwargs): content_errors = [] if check_content else None frames_ids = [] + frame_names = [] annotations = [] - if not self.max_frame_id: - for file_in_dir in self.data_dir.iterdir(): - image_name = file_in_dir.parts[-1] - if self.lr_suffix in image_name and self.hr_suffix not in image_name: - frames_ids.append(int(image_name.split(self.lr_suffix))) - frames_ids.sort() - else: - frames_ids = list(range(self.max_frame_id)) + for file_in_dir in self.data_dir.iterdir(): + image_name = file_in_dir.parts[-1] + if self.lr_suffix in image_name and self.hr_suffix not in image_name: + frame_names.append(image_name) + frames_ids.append(int(image_name.split(self.lr_suffix)[0])) + sorted_frames = np.argsort(frames_ids) + frames_ids.sort() + sorted_frame_names = [frame_names[idx] for idx in sorted_frames] + num_iterations = len(frames_ids) for idx, _ in enumerate(frames_ids): if len(frames_ids) - idx < self.num_frames: break input_ids = list(range(self.num_frames)) - input_frames = ['{}{}'.format(idx + shift, self.lr_suffix)for shift in input_ids] + input_frames = [sorted_frame_names[idx + shift] for shift in input_ids] hr_name = self.hr_suffix.join(input_frames[0].split(self.lr_suffix)) if check_content and not check_file_existence(self.data_dir / hr_name): content_errors.append('{}: does not exist'.format(self.data_dir / hr_name)) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py b/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py index 2f84cb07b5b..d08bdd4f16f 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py @@ -89,12 +89,12 @@ def fill_non_constant_inputs(self, data_representation_batch): input_data = None identifiers = data_representation.identifier data = data_representation.data - if not isinstance(identifiers, list) and not input_regex: + if not isinstance(identifiers, list) and input_regex is None: input_data = data input_batch.append(input_data) continue - if not input_regex: + if input_regex is None: raise ConfigError('Impossible to choose correct data for layer {}.' 
'Please provide regular expression for matching in config.'.format(input_layer)) if isinstance(identifiers, MultiFramesInputIdentifier): @@ -102,12 +102,13 @@ def fill_non_constant_inputs(self, data_representation_batch): input_index: frame_id for frame_id, input_index in enumerate(identifiers.input_id) } input_data = data[input_id_order[input_regex]] - data = [data] if np.isscalar(identifiers) else data - identifiers = [identifiers] if np.isscalar(identifiers) else identifiers - for identifier, data_value in zip(identifiers, data): - if input_regex.match(identifier): - input_data = data_value - break + else: + data = [data] if np.isscalar(identifiers) else data + identifiers = [identifiers] if np.isscalar(identifiers) else identifiers + for identifier, data_value in zip(identifiers, data): + if input_regex.match(identifier): + input_data = data_value + break if input_data is None: raise ConfigError('Suitable data for filling layer {} not found'.format(input_layer)) input_batch.append(input_data) @@ -144,7 +145,7 @@ def _parse_inputs_config(self, inputs_entry, default_layout='NCHW'): constant_inputs[name] = value else: config_non_constant_inputs.append(name) - if value: + if value is not None: value = re.compile(value) if not isinstance(value, int) else value non_constant_inputs_mapping[name] = value layout = input_.get('layout', default_layout) From 7198aad50a3780c63e48a2e655874aae6526fe3e Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Wed, 2 Oct 2019 18:46:49 +0300 Subject: [PATCH 071/927] update readme --- .../accuracy_checker/annotation_converters/README.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md index b1ca4049f66..6708c4b039c 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md @@ -118,11 +118,19 @@ Accuracy Checker supports following list of annotation converters and specific f * `data_dir` - path to data directory, where gallery (`bbox_test`) and `query` subdirectories are located. * `market1501_reid` - converts Market1501 person reidentification dataset to `ReidentificationAnnotation`. * `data_dir` - path to data directory, where gallery (`bounding_box_test`) and `query` subdirectories are located. -* `super_resolution` - converts dataset for super resolution task to `SuperResolutionAnnotation`. +* `super_resolution` - converts dataset for single image super resolution task to `SuperResolutionAnnotation`. * `data_dir` - path to folder, where images in low and high resolution are located. * `lr_suffix` - low resolution file name's suffix (default lr). * `hr_suffix` - high resolution file name's suffix (default hr). * `annotation_loader` - which library will be used for ground truth image reading. Supported: `opencv`, `pillow` (Optional. Default value is pillow). Note, color space of image depends on loader (OpenCV uses BGR, Pillow uses RGB for image reading). + * `two_streams` - enable 2 input streams where usually first for original image and second for upsampled image. (Optional, default False). + * `upsample_suffix` - upsample images file name's suffix (default upsample). +* `multi_frame_super_resolution` - converts dataset for super resolution task with multiple input frames usage. + * `data_dir` - path to folder, where images in low and high resolution are located. 
+ * `lr_suffix` - low resolution file name's suffix (default lr). + * `hr_suffix` - high resolution file name's suffix (default hr). + * `annotation_loader` - which library will be used for ground truth image reading. Supported: `opencv`, `pillow` (Optional. Default value is pillow). Note, color space of image depends on loader (OpenCV uses BGR, Pillow uses RGB for image reading). + * `number_input_frames` - the number of input frames per inference. * `icdar_detection` - converts ICDAR13 and ICDAR15 datasets for text detection challenge to `TextDetectionAnnotation`. * `data_dir` - path to folder with annotations on txt format. * `icdar13_recognition` - converts ICDAR13 dataset for text recognition task to `CharecterRecognitionAnnotation`. From 9a6b4112248e8c3a3f21491bbde76c0150eb9df2 Mon Sep 17 00:00:00 2001 From: Roman Donchenko Date: Wed, 2 Oct 2019 18:55:18 +0300 Subject: [PATCH 072/927] ci: Bump all dependency versions --- ci/requirements-ac.txt | 18 ++++++++++++------ ci/requirements-conversion.txt | 25 ++++++++++++------------- ci/requirements-demos.txt | 2 +- ci/requirements-downloader.txt | 4 ++-- 4 files changed, 27 insertions(+), 22 deletions(-) diff --git a/ci/requirements-ac.txt b/ci/requirements-ac.txt index 342f042f103..2dd0733567d 100644 --- a/ci/requirements-ac.txt +++ b/ci/requirements-ac.txt @@ -1,12 +1,18 @@ -joblib==0.13.2 # via scikit-learn -nibabel==2.5.0 -numpy==1.17.0 -pillow==6.1.0 +# +# This file is autogenerated by pip-compile +# To update, run: +# +# pip-compile --output-file=ci/requirements-ac.txt tools/accuracy_checker/requirements.in +# +joblib==0.14.0 # via scikit-learn +nibabel==2.5.1 +numpy==1.17.2 +pillow==6.2.0 py-cpuinfo==4.0.0 pyyaml==5.1.2 scikit-learn==0.21.3 -scipy==0.19.0 +scipy==1.3.1 shapely==1.6.4.post2 six==1.12.0 # via nibabel -tqdm==4.33.0 +tqdm==4.36.1 yamlloader==0.5.5 diff --git a/ci/requirements-conversion.txt b/ci/requirements-conversion.txt index d147fcefea7..cb067e25b8b 100644 --- a/ci/requirements-conversion.txt +++ b/ci/requirements-conversion.txt @@ -1,14 +1,14 @@ -absl-py==0.7.1 # via tensorboard, tensorflow +absl-py==0.8.0 # via tensorboard, tensorflow astor==0.8.0 # via tensorflow -certifi==2019.6.16 # via requests +certifi==2019.9.11 # via requests chardet==3.0.4 # via requests decorator==4.4.0 # via networkx defusedxml==0.6.0 -gast==0.2.2 # via tensorflow +gast==0.3.2 # via tensorflow google-pasta==0.1.7 # via tensorflow graphviz==0.8.4 # via mxnet -grpcio==1.22.0 # via tensorboard, tensorflow -h5py==2.9.0 # via keras-applications +grpcio==1.24.0 # via tensorboard, tensorflow +h5py==2.10.0 # via keras-applications idna==2.8 # via requests keras-applications==1.0.8 # via tensorflow keras-preprocessing==1.1.0 # via tensorflow @@ -16,25 +16,24 @@ markdown==3.1.1 # via tensorboard mxnet==1.3.1 networkx==2.3 numpy==1.14.6 -onnx==1.5.0 -pillow==6.1.0 # via torchvision +onnx==1.6.0 +pillow==6.2.0 # via torchvision protobuf==3.6.1 requests==2.22.0 # via mxnet scipy==1.3.1 -six==1.12.0 # via absl-py, grpcio, h5py, keras-preprocessing, onnx, protobuf, tensorboard, tensorflow, test-generator, torchvision +six==1.12.0 # via absl-py, grpcio, h5py, keras-preprocessing, onnx, protobuf, tensorboard, tensorflow, torchvision tensorboard==1.14.0 # via tensorflow tensorflow-estimator==1.14.0 # via tensorflow tensorflow==1.14.0 termcolor==1.1.0 # via tensorflow -test-generator==0.1.1 torch==1.2.0 torchvision==0.4.0 typing-extensions==3.7.4 # via onnx typing==3.7.4 # via onnx -urllib3==1.25.3 # via requests -werkzeug==0.15.5 # via tensorboard 
-wheel==0.33.4 # via tensorboard, tensorflow +urllib3==1.25.6 # via requests +werkzeug==0.16.0 # via tensorboard +wheel==0.33.6 # via tensorboard, tensorflow wrapt==1.11.2 # via tensorflow # The following packages are considered to be unsafe in a requirements file: -# setuptools==41.0.1 # via markdown, protobuf, tensorboard +# setuptools==41.2.0 # via markdown, protobuf, tensorboard diff --git a/ci/requirements-demos.txt b/ci/requirements-demos.txt index e07312c7697..9d13b7e9119 100644 --- a/ci/requirements-demos.txt +++ b/ci/requirements-demos.txt @@ -1 +1 @@ -numpy==1.17.0 ; python_version >= "3.4" +numpy==1.17.2 ; python_version >= "3.4" diff --git a/ci/requirements-downloader.txt b/ci/requirements-downloader.txt index 3f154ea4cb8..3aab6bae5b8 100644 --- a/ci/requirements-downloader.txt +++ b/ci/requirements-downloader.txt @@ -4,9 +4,9 @@ # # pip-compile --output-file=ci/requirements-downloader.txt tools/downloader/requirements.in # -certifi==2019.6.16 # via requests +certifi==2019.9.11 # via requests chardet==3.0.4 # via requests idna==2.8 # via requests pyyaml==5.1.2 requests==2.22.0 -urllib3==1.25.3 # via requests +urllib3==1.25.6 # via requests From 17857dd142c72e7cf5156ecd8b18eeadf33b6e9d Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Thu, 3 Oct 2019 11:53:40 +0300 Subject: [PATCH 073/927] Bump AC version: 0.7.3 -> 0.7.4 --- tools/accuracy_checker/accuracy_checker/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/accuracy_checker/accuracy_checker/__init__.py b/tools/accuracy_checker/accuracy_checker/__init__.py index b6d89a2d811..b132441be98 100644 --- a/tools/accuracy_checker/accuracy_checker/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/__init__.py @@ -14,4 +14,4 @@ limitations under the License. """ -__version__ = "0.7.3" +__version__ = "0.7.4" From 0a42b3442223cae6f1e996ee600cbb54689fa183 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 4 Oct 2019 14:03:31 +0300 Subject: [PATCH 074/927] FIX --- CONTRIBUTING.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index b4fb1724f3c..2d4d3ef2c97 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -116,7 +116,7 @@ List of onnx conversion parameters, see `model_optimizer_args` for details. Appl **`model_optimizer_args`** -Conversion parameters, obtained [earlier](#model-conversion), is specified in this section, e.g.: +Conversion parameters (learn more in [Model conversion](#model-conversion) section) is specified in this section, e.g.: ``` - --input=data - --mean_values=data[127.5] @@ -140,7 +140,7 @@ Path to model's license. ## Model conversion -Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. Find more information about conversion in [[Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. 
Find more information about conversion in [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. > **NOTE 1**: due to OpenVINO™ paradigms, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. From 2d8682b69f77859552d5a81b2d1c9f8638e3a707 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Mon, 9 Sep 2019 16:43:06 +0300 Subject: [PATCH 075/927] add caffe2 to onnx converter --- models/public/densenet-121-cf2/model.yml | 45 ++++++++++++ tools/downloader/caffe2_to_onnx.py | 87 ++++++++++++++++++++++++ tools/downloader/converter.py | 4 ++ 3 files changed, 136 insertions(+) create mode 100644 models/public/densenet-121-cf2/model.yml create mode 100644 tools/downloader/caffe2_to_onnx.py diff --git a/models/public/densenet-121-cf2/model.yml b/models/public/densenet-121-cf2/model.yml new file mode 100644 index 00000000000..efb73adece5 --- /dev/null +++ b/models/public/densenet-121-cf2/model.yml @@ -0,0 +1,45 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + This is an Caffe2\* version of `densenet-121` model, one of the DenseNet + group of models designed to perform image classification. This model + was converted from Caffe\* to Caffe2\* fromat. 
+ For details see repository , + paper +task_type: classification +files: + - name: predict_net.pb + size: 77239 + sha256: 820772d4e7b907599cba93ab0e7d2db0dc0b6e313e842a8729a0ea0354e4a719 + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/densenet121/predict_net.pb + - name: init_net.pb + size: 40785727 + sha256: a3650579bc883a1755750994507c48d84d0f75d193e304eb8caf5031acb5f028 + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/densenet121/init_net.pb +framework: caffe2 +caffe2_to_onnx: + - --model-name=densenet-121-cf2 + - --predict-net-path=$dl_dir/predict_net.pb + - --init-net-path=$dl_dir/init_net.pb + - --input-shape=1,3,224,224 + - --input-names=data + - --output-file=$conv_dir/densenet-121-cf2.onnx +model_optimizer_args: + - --input_shape=[1,3,224,224] + - --input=data + - --mean_values=data[103.94,116.78,123.68] + - --scale_values=data[58.8235294] + - --input_model=$conv_dir/densenet-121-cf2.onnx +license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/tools/downloader/caffe2_to_onnx.py b/tools/downloader/caffe2_to_onnx.py new file mode 100644 index 00000000000..9d06aa21aeb --- /dev/null +++ b/tools/downloader/caffe2_to_onnx.py @@ -0,0 +1,87 @@ +import argparse +from pathlib import Path +import sys +import json +import os + +import onnx +from caffe2.python.onnx.frontend import Caffe2Frontend +from caffe2.proto import caffe2_pb2 + + +def positive_int_arg(values): + """Check positive integer type for input argument""" + result = [] + for value in values.split(','): + try: + ivalue = int(value) + if ivalue < 0: + raise argparse.ArgumentTypeError('Argument must be a positive integer') + result.append(ivalue) + except Exception as exc: + print(exc) + sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) + return result + +def parse_args(): + """Parse input arguments""" + + parser = argparse.ArgumentParser(description='Conversion of pretrained models from Caffe2 to ONNX') + + parser.add_argument('--model-name', type=str, required=True, + help='Model to convert. 
May be class name or name of constructor function') + parser.add_argument('--output-file', type=Path, required=True, + help='Path to the output ONNX model') + parser.add_argument('--predict-net-path', type=str, required=True, + help='Path to predict_net .pb file') + parser.add_argument('--init-net-path', type=str, required=True, + help='Path to init_net .pb file') + parser.add_argument('--input-shape', metavar='INPUT_DIM', type=positive_int_arg, nargs='+', + required=True, help='Shape of the input blob') + parser.add_argument('--input-names', type=str, nargs='+', + help='Space separated names of the input layers') + + return parser.parse_args() + +def load_model(predict_net_path, init_net_path): + predict_net = caffe2_pb2.NetDef() + with open(predict_net_path, 'rb') as file: + predict_net.ParseFromString(file.read()) + + init_net = caffe2_pb2.NetDef() + with open(init_net_path, 'rb') as file: + init_net.ParseFromString(file.read()) + + return predict_net, init_net + +def convert_to_onnx(predict_net, init_net, input_shape, input_names, output_file, model_name=''): + """Convert Caffe2 model to ONNX and check the resulting onnx model""" + + output_file.parent.mkdir(parents=True, exist_ok=True) + value_info = {} + for name, shape in zip(input_names, input_shape): + value_info[name] = [shape[0], shape] + if predict_net.name == "": + predict_net.name = model_name + + onnx_model = Caffe2Frontend.caffe2_net_to_onnx_model( + predict_net, + init_net, + value_info, + ) + try: + onnx.checker.check_model(onnx_model) + print('ONNX check passed successfully.') + with open(str(output_file), 'wb') as f: + f.write(onnx_model.SerializeToString()) + except onnx.onnx_cpp2py_export.checker.ValidationError as exc: + sys.exit('ONNX check failed with error: ' + str(exc)) + +def main(): + args = parse_args() + predict_net, init_net = load_model(args.predict_net_path, args.init_net_path) + convert_to_onnx(predict_net, init_net, args.input_shape, args.input_names, args.output_file, args.model_name) + + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/tools/downloader/converter.py b/tools/downloader/converter.py index defe295a863..245dd0819b1 100755 --- a/tools/downloader/converter.py +++ b/tools/downloader/converter.py @@ -155,6 +155,10 @@ def convert(model, do_prefix_stdout=True): if not convert_to_onnx(model, output_dir, args, stdout_prefix): return False model_format = 'onnx' + if model.caffe2_to_onnx_args: + if not convert_to_onnx(model, output_dir, args, stdout_prefix): + return False + model_format = 'onnx' expanded_mo_args = [ string.Template(arg).substitute(dl_dir=args.download_dir / model.subdirectory, From 07689bf5b95e521dda160f9d2f40e665a5726f84 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Mon, 9 Sep 2019 18:26:48 +0300 Subject: [PATCH 076/927] support of multiple inputs --- models/public/densenet-121-cf2/model.yml | 2 +- tools/downloader/caffe2_to_onnx.py | 34 ++++++++++++++---------- 2 files changed, 21 insertions(+), 15 deletions(-) diff --git a/models/public/densenet-121-cf2/model.yml b/models/public/densenet-121-cf2/model.yml index efb73adece5..ce73214ae26 100644 --- a/models/public/densenet-121-cf2/model.yml +++ b/models/public/densenet-121-cf2/model.yml @@ -33,7 +33,7 @@ caffe2_to_onnx: - --model-name=densenet-121-cf2 - --predict-net-path=$dl_dir/predict_net.pb - --init-net-path=$dl_dir/init_net.pb - - --input-shape=1,3,224,224 + - --input-shape=[1,3,224,224] - --input-names=data - --output-file=$conv_dir/densenet-121-cf2.onnx model_optimizer_args: diff --git 
a/tools/downloader/caffe2_to_onnx.py b/tools/downloader/caffe2_to_onnx.py index 9d06aa21aeb..35e2213317a 100644 --- a/tools/downloader/caffe2_to_onnx.py +++ b/tools/downloader/caffe2_to_onnx.py @@ -3,6 +3,7 @@ import sys import json import os +import re import onnx from caffe2.python.onnx.frontend import Caffe2Frontend @@ -12,15 +13,19 @@ def positive_int_arg(values): """Check positive integer type for input argument""" result = [] - for value in values.split(','): - try: - ivalue = int(value) - if ivalue < 0: - raise argparse.ArgumentTypeError('Argument must be a positive integer') - result.append(ivalue) - except Exception as exc: - print(exc) - sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) + shapes = re.findall(r'[(\[]([0-9, -]+)[)\]]', values) + for shape in shapes: + single_shape = [] + for value in shape.split(','): + try: + ivalue = int(value) + if ivalue < 0: + raise argparse.ArgumentTypeError('Argument must be a positive integer') + single_shape.append(ivalue) + except Exception as exc: + print(exc) + sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) + result.append(single_shape) return result def parse_args(): @@ -36,10 +41,10 @@ def parse_args(): help='Path to predict_net .pb file') parser.add_argument('--init-net-path', type=str, required=True, help='Path to init_net .pb file') - parser.add_argument('--input-shape', metavar='INPUT_DIM', type=positive_int_arg, nargs='+', + parser.add_argument('--input-shape', metavar='INPUT_DIM', type=positive_int_arg, required=True, help='Shape of the input blob') - parser.add_argument('--input-names', type=str, nargs='+', - help='Space separated names of the input layers') + parser.add_argument('--input-names', type=str, + help='Comma separated names of the input layers') return parser.parse_args() @@ -59,7 +64,8 @@ def convert_to_onnx(predict_net, init_net, input_shape, input_names, output_file output_file.parent.mkdir(parents=True, exist_ok=True) value_info = {} - for name, shape in zip(input_names, input_shape): + input_names = input_names[0].split(',') + for name, shape in zip(input_names, input_shape[0]): value_info[name] = [shape[0], shape] if predict_net.name == "": predict_net.name = model_name @@ -67,7 +73,7 @@ def convert_to_onnx(predict_net, init_net, input_shape, input_names, output_file onnx_model = Caffe2Frontend.caffe2_net_to_onnx_model( predict_net, init_net, - value_info, + value_info ) try: onnx.checker.check_model(onnx_model) From 9afd2e6577b4ef5a1f33262e009e9993732dbbf7 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Tue, 10 Sep 2019 12:03:24 +0300 Subject: [PATCH 077/927] add more models --- .../densenet-121-cf2/densenet-121-cf2.md | 65 ++++++++++++++++++ models/public/resnet-50-cf2/model.yml | 44 ++++++++++++ models/public/resnet-50-cf2/resnet-50-cf2.md | 68 +++++++++++++++++++ models/public/squeezenet1.1-cf2/model.yml | 43 ++++++++++++ .../squeezenet1.1-cf2/squeezenet1.1-cf2.md | 68 +++++++++++++++++++ models/public/vgg19-cf2/model.yml | 43 ++++++++++++ models/public/vgg19-cf2/vgg19-cf2.md | 67 ++++++++++++++++++ tools/downloader/caffe2_to_onnx.py | 4 +- 8 files changed, 400 insertions(+), 2 deletions(-) create mode 100644 models/public/densenet-121-cf2/densenet-121-cf2.md create mode 100644 models/public/resnet-50-cf2/model.yml create mode 100644 models/public/resnet-50-cf2/resnet-50-cf2.md create mode 100644 models/public/squeezenet1.1-cf2/model.yml create mode 100644 models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md 
create mode 100644 models/public/vgg19-cf2/model.yml create mode 100644 models/public/vgg19-cf2/vgg19-cf2.md diff --git a/models/public/densenet-121-cf2/densenet-121-cf2.md b/models/public/densenet-121-cf2/densenet-121-cf2.md new file mode 100644 index 00000000000..6f444349a18 --- /dev/null +++ b/models/public/densenet-121-cf2/densenet-121-cf2.md @@ -0,0 +1,65 @@ +# densenet-121-cf2 + +## Use Case and High-Level Description + +This is an Caffe2\* version of `densenet-121` model, one of the DenseNet +group of models designed to perform image classification. This model +was converted from Caffe\* to Caffe2\* fromat. +For details see repository , +paper + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 5.723 | +| MParams | 7.971 | +| Source framework | Caffe2\* | + +## Accuracy + +## Performance + +## Input + +### Original model + +Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR`. +Mean values - [103.94,116.78,123.68], scale value - 58.8235294 + +### Converted model + +Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR` + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `fc6`, shape - `1,1000,1,1`, contains predicted +probability for each class in logits format + +### Converted model + +Object classifier according to ImageNet classes, name - `fc6`, shape - `1,1000,1,1`, contains predicted +probability for each class in logits format + +## Legal Information + +[https://raw.githubusercontent.com/caffe2/models/master/LICENSE]() diff --git a/models/public/resnet-50-cf2/model.yml b/models/public/resnet-50-cf2/model.yml new file mode 100644 index 00000000000..d098fd5392d --- /dev/null +++ b/models/public/resnet-50-cf2/model.yml @@ -0,0 +1,44 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + This is an Caffe2\* version of `resnet-50` model, designed to perform image classification. + This model was converted from Caffe\* to Caffe2\* fromat. 
+ For details see repository , + paper +task_type: classification +files: + - name: predict_net.pb + size: 31649 + sha256: 657081428cd8a8d9f1a6b20a8b6dba51725d3fc1eaabf0f19747a3b843e18a16 + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/resnet50/predict_net.pb + - name: init_net.pb + size: 128070759 + sha256: 97046c44ecd15b3c8806f609a15d0cc52af7bdc8aa19c720f8a1f6abe68e9a74 + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/resnet50/init_net.pb +framework: caffe2 +caffe2_to_onnx: + - --model-name=resnet-50-cf2 + - --predict-net-path=$dl_dir/predict_net.pb + - --init-net-path=$dl_dir/init_net.pb + - --input-shape=[1,3,224,224] + - --input-names=gpu_0/data + - --output-file=$conv_dir/resnet-50-cf2.onnx +model_optimizer_args: + - --input_shape=[1,3,224,224] + - --input=gpu_0/data + - --mean_values=gpu_0/data[103.53,116.28,123.675] + - --scale_values=gpu_0/data[57.375,57.12,58.395] + - --input_model=$conv_dir/resnet-50-cf2.onnx +license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/resnet-50-cf2/resnet-50-cf2.md b/models/public/resnet-50-cf2/resnet-50-cf2.md new file mode 100644 index 00000000000..122776abf4f --- /dev/null +++ b/models/public/resnet-50-cf2/resnet-50-cf2.md @@ -0,0 +1,68 @@ +# resnet-50-cf2 + +## Use Case and High-Level Description + +This is an Caffe2\* version of `resnet-50` model, designed to perform image classification. +This model was converted from Caffe\* to Caffe2\* fromat. +For details see repository , +paper + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 8.216 | +| MParams | 25.53 | +| Source framework | Caffe2\* | + +## Accuracy + +## Performance + +## Input + +### Original model + +Image, name - `gpu_0/data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR`. +Mean values - [103.53,116.28,123.675], scale values - [57.375,57.12,58.395] + +### Converted model + +Image, name - `gpu_0/data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR` + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `gpu_0/softmax`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `gpu_0/softmax`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/KaimingHe/deep-residual-networks/master/LICENSE) diff --git a/models/public/squeezenet1.1-cf2/model.yml b/models/public/squeezenet1.1-cf2/model.yml new file mode 100644 index 00000000000..5322c899f9b --- /dev/null +++ b/models/public/squeezenet1.1-cf2/model.yml @@ -0,0 +1,43 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + This is an Caffe2\* version of `squeezenet1.1` model, designed to perform image classification. + This model was converted from Caffe\* to Caffe2\* fromat. + For details see repository , + paper +task_type: classification +files: + - name: predict_net.pb + size: 6175 + sha256: d20be00eb448d3952265620357132916aba8744b027937b56c469b001b46472b + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/squeezenet/predict_net.pb + - name: init_net.pb + size: 6181001 + sha256: d8115221de899d081a1a83785bf0dbaeea19463cdf7dbddba662cc7abb4f32dc + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/squeezenet/init_net.pb +framework: caffe2 +caffe2_to_onnx: + - --model-name=squeezenet1.1-cf2 + - --predict-net-path=$dl_dir/predict_net.pb + - --init-net-path=$dl_dir/init_net.pb + - --input-shape=[1,3,227,227] + - --input-names=data + - --output-file=$conv_dir/squeezenet1.1-cf2.onnx +model_optimizer_args: + - --input_shape=[1,3,227,227] + - --input=data + - --mean_values=data[103.96,116.78,123.68] + - --input_model=$conv_dir/squeezenet1.1-cf2.onnx +license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md b/models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md new file mode 100644 index 00000000000..859141c87db --- /dev/null +++ b/models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md @@ -0,0 +1,68 @@ +# squeezenet1.1-cf2 + +## Use Case and High-Level Description + +This is an Caffe2\* version of `squeezenet1.1` model, designed to perform image classification. +This model was converted from Caffe\* to Caffe2\* fromat. +For details see repository , +paper + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.784 | +| MParams | 1.235 | +| Source framework | Caffe2\* | + +## Accuracy + +## Performance + +## Input + +### Original model + +Image, name - `data`, shape - `1,3,227,227`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR`. +Mean values - [103.96,116.78,123.68] + +### Converted model + +Image, name - `data`, shape - `1,3,227,227`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR`. 
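
Editor's note: the mean values listed for the original model are not applied by the Caffe2 network itself, while the converted IR has them baked in through the `--mean_values` option shown in `model.yml`. Below is a minimal preprocessing sketch for feeding the *original* Caffe2 model; it is illustrative only, assumes OpenCV is used for reading (so the channel order is already BGR), and takes the input name, shape and mean values from the specification above.

```
import cv2
import numpy as np

MEAN_BGR = np.array([103.96, 116.78, 123.68], dtype=np.float32)  # values from the spec above

def preprocess(image_path):
    image = cv2.imread(image_path)              # OpenCV reads in BGR order
    image = cv2.resize(image, (227, 227))       # model expects a 227x227 input
    blob = image.astype(np.float32) - MEAN_BGR  # per-channel mean subtraction
    blob = blob.transpose(2, 0, 1)[np.newaxis]  # HWC -> NCHW, add batch dimension
    return blob                                 # shape (1, 3, 227, 227), fed as `data`
```

For the converted model only the resize and the HWC to NCHW transposition are needed, since the mean subtraction is embedded at conversion time.
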
+ +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `softmaxout`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `softmaxout`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +## Legal Information + +[https://raw.githubusercontent.com/DeepScale/SqueezeNet/master/LICENSE]() diff --git a/models/public/vgg19-cf2/model.yml b/models/public/vgg19-cf2/model.yml new file mode 100644 index 00000000000..235461abeb2 --- /dev/null +++ b/models/public/vgg19-cf2/model.yml @@ -0,0 +1,43 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + This is an Caffe2\* version of `vgg19` model, designed to perform image classification. + This model was converted from Caffe\* to Caffe2\* fromat. + For details see repository , + paper +task_type: classification +files: + - name: predict_net.pb + size: 2862 + sha256: ebb8608fe80ee8bce096a60ecf3e6e8442eb118aaa0fa77d5d58b5dcff7dfb5f + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/vgg19/predict_net.pb + - name: init_net.pb + size: 718338501 + sha256: 492dbbbc7dd23cb052c66964714759f88370f3fa8542aa32556e93abc7beb69f + source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/vgg19/init_net.pb +framework: caffe2 +caffe2_to_onnx: + - --model-name=vgg19-cf2 + - --predict-net-path=$dl_dir/predict_net.pb + - --init-net-path=$dl_dir/init_net.pb + - --input-shape=[1,3,224,224] + - --input-names=data + - --output-file=$conv_dir/vgg19-cf2.onnx +model_optimizer_args: + - --input_shape=[1,3,224,224] + - --input=data + - --mean_values=data[103.939,116.779,123.68] + - --input_model=$conv_dir/vgg19-cf2.onnx +license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/vgg19-cf2/vgg19-cf2.md b/models/public/vgg19-cf2/vgg19-cf2.md new file mode 100644 index 00000000000..830f7c0fb18 --- /dev/null +++ b/models/public/vgg19-cf2/vgg19-cf2.md @@ -0,0 +1,67 @@ +# vgg19-cf2 + +## Use Case and High-Level Description + +This is an Caffe2\* version of `vgg19` model, designed to perform image classification. +This model was converted from Caffe\* to Caffe2\* fromat. +For details see repository , +paper +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 39.3 | +| MParams | 143.667 | +| Source framework | Caffe2\* | + +## Accuracy + +## Performance + +## Input + +### Original mode + +Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR`. 
+Mean values - [103.939, 116.779, 123.68] + +### Converted model + +Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR`. + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +## Legal Information + +[https://raw.githubusercontent.com/keras-team/keras/master/LICENSE]() diff --git a/tools/downloader/caffe2_to_onnx.py b/tools/downloader/caffe2_to_onnx.py index 35e2213317a..2fc909529d6 100644 --- a/tools/downloader/caffe2_to_onnx.py +++ b/tools/downloader/caffe2_to_onnx.py @@ -64,8 +64,8 @@ def convert_to_onnx(predict_net, init_net, input_shape, input_names, output_file output_file.parent.mkdir(parents=True, exist_ok=True) value_info = {} - input_names = input_names[0].split(',') - for name, shape in zip(input_names, input_shape[0]): + input_names = input_names.split(',') + for name, shape in zip(input_names, input_shape): value_info[name] = [shape[0], shape] if predict_net.name == "": predict_net.name = model_name From e6f37a7fb6839f652ed0fe00ed79864c74f4c281 Mon Sep 17 00:00:00 2001 From: Ekaterina Aidova Date: Fri, 4 Oct 2019 15:42:22 +0300 Subject: [PATCH 078/927] AC: update mask rcnn adapter for support mask rcnn ONNX --- .../accuracy_checker/adapters/mask_rcnn.py | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py b/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py index 553acfd088d..4379b732dda 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/mask_rcnn.py @@ -164,10 +164,18 @@ def _process_pytorch_outputs(self, raw_outputs, identifiers, frame_meta): boxes[:, 1::2] /= im_scale_y classes = classes.astype(np.uint32) masks = [] - for box, cls, raw_mask in zip(boxes, classes, raw_masks): - raw_cls_mask = raw_mask[cls, ...] 
+ raw_mask_for_all_classes = np.shape(raw_masks)[1] != len(identifiers) + if raw_mask_for_all_classes: + per_obj_raw_masks = [] + for cls, raw_mask in zip(classes, raw_masks): + per_obj_raw_masks.append(raw_mask[cls, ...]) + else: + per_obj_raw_masks = np.squeeze(raw_masks, axis=1) + + for box, raw_cls_mask in zip(boxes, per_obj_raw_masks): mask = self.segm_postprocess(box, raw_cls_mask, *original_image_size, True, True) masks.append(mask) + x_mins, y_mins, x_maxs, y_maxs = boxes.T detection_prediction = DetectionPrediction(identifier, classes, scores, x_mins, y_mins, x_maxs, y_maxs) instance_segmentation_prediction = CoCocInstanceSegmentationPrediction(identifier, masks, classes, scores) From 7299e36364e5f7b0604284f2789389fc69dc6802 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Thu, 3 Oct 2019 20:11:49 +0300 Subject: [PATCH 079/927] smart_classroom: check action number --- demos/smart_classroom_demo/include/action_detector.hpp | 2 +- demos/smart_classroom_demo/src/action_detector.cpp | 8 ++++++-- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/demos/smart_classroom_demo/include/action_detector.hpp b/demos/smart_classroom_demo/include/action_detector.hpp index e50337301c0..1feb9a2bbb7 100644 --- a/demos/smart_classroom_demo/include/action_detector.hpp +++ b/demos/smart_classroom_demo/include/action_detector.hpp @@ -95,7 +95,7 @@ struct ActionDetectorConfig : public CnnConfig { /** @brief Number of SSD anchors for the new network version */ std::vector new_anchors{1, 4}; /** @brief Number of actions to detect */ - int num_action_classes = 3; + size_t num_action_classes = 3; /** @brief Async execution flag */ bool is_async = true; /** @brief SSD bbox encoding variances */ diff --git a/demos/smart_classroom_demo/src/action_detector.cpp b/demos/smart_classroom_demo/src/action_detector.cpp index b7e2c413749..04ca3ec5a73 100644 --- a/demos/smart_classroom_demo/src/action_detector.cpp +++ b/demos/smart_classroom_demo/src/action_detector.cpp @@ -77,7 +77,6 @@ ActionDetection::ActionDetection(const ActionDetectorConfig& config) for (auto&& item : outputInfo) { item.second->setPrecision(Precision::FP32); - item.second->setLayout(InferenceEngine::TensorDesc::getLayoutByDims(item.second->getDims())); } new_network_ = outputInfo.find(config_.new_loc_blob_name) != outputInfo.end(); @@ -108,6 +107,11 @@ ActionDetection::ActionDetection(const ActionDetectorConfig& config) const auto anchor_dims = outputInfo[glob_anchor_name]->getDims(); anchor_height = new_network_ ? anchor_dims[2] : anchor_dims[1]; anchor_width = new_network_ ? anchor_dims[3] : anchor_dims[2]; + decltype(anchor_dims.size()) action_dimention_idx = new_network_ ? 
1 : 3; + if (anchor_dims[action_dimention_idx] != config_.num_action_classes) { + throw std::logic_error("The number of specified actions and the number of actions predicted by " + "the Person/Action Detection Retail model must match"); + } const int anchor_size = anchor_height * anchor_width; head_shift += anchor_size; @@ -279,7 +283,7 @@ void ActionDetection::GetDetections(const cv::Mat& loc, const cv::Mat& main_conf int action_label = -1; float action_max_exp_value = 0.f; float action_sum_exp_values = 0.f; - for (int c = 0; c < config_.num_action_classes; ++c) { + for (size_t c = 0; c < config_.num_action_classes; ++c) { float action_exp_value = std::exp(scale * anchor_conf_data[action_conf_idx_shift + c * action_conf_step]); action_sum_exp_values += action_exp_value; From 53fe1f763f1477aca2b9ac82c8c4507abbd45b02 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Fri, 4 Oct 2019 19:30:39 +0300 Subject: [PATCH 080/927] demos/tests/cases.py: update smart_classroom_demo actions --- demos/tests/cases.py | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/demos/tests/cases.py b/demos/tests/cases.py index 555a959f588..ad15d95fa5b 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -189,10 +189,15 @@ def device_cases(*args): device_cases('-d_act', '-d_fd', '-d_lm', '-d_reid'), [ *combine_cases( - single_option_cases('-m_act', - ModelArg('person-detection-action-recognition-0005'), - ModelArg('person-detection-action-recognition-0006'), - ModelArg('person-detection-action-recognition-teacher-0002')), + [ + TestCase(options={'-m_act': ModelArg('person-detection-action-recognition-0005')}), + TestCase(options={'-m_act': ModelArg('person-detection-action-recognition-0006'), + '-student_ac': 'sitting,writing,raising_hand,standing,turned_around,lie_on_the_desk'}), + # person-detection-action-recognition-teacher-0002 is supposed to be provided with -teacher_id, but + # this would require providing a gallery file with -fg key. Unless -teqcher_id is provided + # -teacher_ac is ignored thus run the test just with default actions pretending it's about students + TestCase(options={'-m_act': ModelArg('person-detection-action-recognition-teacher-0002')}), + ], single_option_cases('-m_lm', None, ModelArg('landmarks-regression-retail-0009')), single_option_cases('-m_reid', None, ModelArg('face-reidentification-retail-0095'))), TestCase(options={'-m_act': ModelArg('person-detection-raisinghand-recognition-0001'), '-a_top': '5'}), From 4635bb9f4adff013a15818871f1ac8ef50d33870 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Fri, 4 Oct 2019 19:32:15 +0300 Subject: [PATCH 081/927] smart_classroom: decltype(anchor_dims.size()) -> std::size_t --- demos/smart_classroom_demo/src/action_detector.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demos/smart_classroom_demo/src/action_detector.cpp b/demos/smart_classroom_demo/src/action_detector.cpp index 04ca3ec5a73..91b46c6564e 100644 --- a/demos/smart_classroom_demo/src/action_detector.cpp +++ b/demos/smart_classroom_demo/src/action_detector.cpp @@ -107,7 +107,7 @@ ActionDetection::ActionDetection(const ActionDetectorConfig& config) const auto anchor_dims = outputInfo[glob_anchor_name]->getDims(); anchor_height = new_network_ ? anchor_dims[2] : anchor_dims[1]; anchor_width = new_network_ ? anchor_dims[3] : anchor_dims[2]; - decltype(anchor_dims.size()) action_dimention_idx = new_network_ ? 1 : 3; + std::size_t action_dimention_idx = new_network_ ? 
1 : 3; if (anchor_dims[action_dimention_idx] != config_.num_action_classes) { throw std::logic_error("The number of specified actions and the number of actions predicted by " "the Person/Action Detection Retail model must match"); From 0c62c60b14f5873062ece7824cd484860d2cfe7f Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 7 Oct 2019 12:37:20 +0300 Subject: [PATCH 082/927] demos/tests/cases.py: teqcher_id->teacher_id --- demos/tests/cases.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demos/tests/cases.py b/demos/tests/cases.py index ad15d95fa5b..b28f44533df 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -194,7 +194,7 @@ def device_cases(*args): TestCase(options={'-m_act': ModelArg('person-detection-action-recognition-0006'), '-student_ac': 'sitting,writing,raising_hand,standing,turned_around,lie_on_the_desk'}), # person-detection-action-recognition-teacher-0002 is supposed to be provided with -teacher_id, but - # this would require providing a gallery file with -fg key. Unless -teqcher_id is provided + # this would require providing a gallery file with -fg key. Unless -teacher_id is provided # -teacher_ac is ignored thus run the test just with default actions pretending it's about students TestCase(options={'-m_act': ModelArg('person-detection-action-recognition-teacher-0002')}), ], From f7f20460a17e9ad523e1091b818de31ba706f5d8 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 7 Oct 2019 13:06:00 +0300 Subject: [PATCH 083/927] pedestrian_tracker: uint32->int32, remove useless prints --- .../include/pedestrian_tracker_demo.hpp | 4 ++-- demos/pedestrian_tracker_demo/main.cpp | 14 ++++++-------- 2 files changed, 8 insertions(+), 10 deletions(-) diff --git a/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp b/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp index 7ebb7553f3b..eebe55f9e44 100644 --- a/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp +++ b/demos/pedestrian_tracker_demo/include/pedestrian_tracker_demo.hpp @@ -116,11 +116,11 @@ DEFINE_string(out, "", output_log_message); /// @brief Define the first frame to process
/// It is an optional parameter -DEFINE_uint32(first, 0, first_frame_message); +DEFINE_int32(first, -1, first_frame_message); /// @brief Define the last frame to process
/// It is an optional parameter -DEFINE_uint32(last, 0, last_frame_message); +DEFINE_int32(last, -1, last_frame_message); /** diff --git a/demos/pedestrian_tracker_demo/main.cpp b/demos/pedestrian_tracker_demo/main.cpp index 6e6ba8858d0..32a2b0205ca 100644 --- a/demos/pedestrian_tracker_demo/main.cpp +++ b/demos/pedestrian_tracker_demo/main.cpp @@ -82,8 +82,6 @@ bool ParseAndCheckCommandLine(int argc, char *argv[]) { return false; } - std::cout << "Parsing input parameters" << std::endl; - if (FLAGS_i.empty()) { throw std::logic_error("Parameter -i is not set"); } @@ -133,10 +131,10 @@ int main_work(int argc, char **argv) { bool should_save_det_log = !detlog_out.empty(); - if (FLAGS_first != 0) - std::cout << "first_frame = " << FLAGS_first << std::endl; - if (FLAGS_last != 0) - std::cout << "last_frame = " << FLAGS_last << std::endl; + if ((FLAGS_last >= 0) && (FLAGS_first > FLAGS_last)) { + throw std::runtime_error("The first frame index (" + std::to_string(FLAGS_first) + ") must be greater than the " + "last frame index (" + std::to_string(FLAGS_last) + ')'); + } std::vector devices{detector_mode, reid_mode}; InferenceEngine::Core ie = @@ -172,7 +170,7 @@ int main_work(int argc, char **argv) { // the default frame rate for DukeMTMC dataset video_fps = 60.0; } - if (0 != FLAGS_first && !cap.set(cv::CAP_PROP_POS_FRAMES, FLAGS_first)) { + if (0 >= FLAGS_first && !cap.set(cv::CAP_PROP_POS_FRAMES, FLAGS_first)) { throw std::runtime_error("Can't set the frame to begin with"); } @@ -182,7 +180,7 @@ int main_work(int argc, char **argv) { } std::cout << std::endl; - for (uint32_t frame_idx = FLAGS_first; 0 == FLAGS_last || frame_idx <= FLAGS_last; ++frame_idx) { + for (int32_t frame_idx = std::max(0, FLAGS_first); 0 > FLAGS_last || frame_idx <= FLAGS_last; ++frame_idx) { cv::Mat frame; if (!cap.read(frame)) { break; From a933851aac746b02215d3759b3f7ffae79b1e174 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 7 Oct 2019 13:11:22 +0300 Subject: [PATCH 084/927] demos/ocv_common: A->The --- demos/common/samples/ocv_common.hpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demos/common/samples/ocv_common.hpp b/demos/common/samples/ocv_common.hpp index b1e8c9c1d48..ee29eca0f1c 100644 --- a/demos/common/samples/ocv_common.hpp +++ b/demos/common/samples/ocv_common.hpp @@ -25,7 +25,7 @@ void matU8ToBlob(const cv::Mat& orig_image, InferenceEngine::Blob::Ptr& blob, in const size_t height = blobSize[2]; const size_t channels = blobSize[1]; if (static_cast(orig_image.channels()) != channels) { - THROW_IE_EXCEPTION << "A number of channels for net input and image must match"; + THROW_IE_EXCEPTION << "The number of channels for net input and image must match"; } T* blob_data = blob->buffer().as(); From 96a4edb6f8e93a2dce02073237ec4d8dda4bb120 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 7 Oct 2019 13:15:49 +0300 Subject: [PATCH 085/927] pedestrian_tracker: fix formatting --- demos/pedestrian_tracker_demo/main.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demos/pedestrian_tracker_demo/main.cpp b/demos/pedestrian_tracker_demo/main.cpp index 32a2b0205ca..a01528c6636 100644 --- a/demos/pedestrian_tracker_demo/main.cpp +++ b/demos/pedestrian_tracker_demo/main.cpp @@ -133,7 +133,7 @@ int main_work(int argc, char **argv) { if ((FLAGS_last >= 0) && (FLAGS_first > FLAGS_last)) { throw std::runtime_error("The first frame index (" + std::to_string(FLAGS_first) + ") must be greater than the " - "last frame index (" + std::to_string(FLAGS_last) + ')'); + 
"last frame index (" + std::to_string(FLAGS_last) + ')'); } std::vector devices{detector_mode, reid_mode}; From eacc3d08dc65bd090281e9c1df1360fb39e33961 Mon Sep 17 00:00:00 2001 From: Katya Date: Mon, 7 Oct 2019 14:34:09 +0300 Subject: [PATCH 086/927] AC: supported single class semantic segmentation. Updated dice index metric (#489) --- .../accuracy_checker/adapters/README.md | 2 ++ .../accuracy_checker/adapters/segmentation.py | 31 ++++++++++++++++++- .../accuracy_checker/metrics/README.md | 7 +++-- .../metrics/semantic_segmentation.py | 16 +++++++--- 4 files changed, 48 insertions(+), 8 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/adapters/README.md b/tools/accuracy_checker/accuracy_checker/adapters/README.md index 224c2040ede..01e54ff8441 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/README.md +++ b/tools/accuracy_checker/accuracy_checker/adapters/README.md @@ -20,6 +20,8 @@ AccuracyChecker supports following set of adapters: * `classification` - converting output of classification model to `ClassificationPrediction` representation. * `segmentation` - converting output of semantic segmentation model to `SeegmentationPrediction` representation. * `make_argmax` - allows to apply argmax operation to output values. +* `segmentation_one_class` - converting output of semantic segmentation to `SeegmentationPrediction` representation. It is suitable for situation when model's output is probability of belong each pixel to foreground class. + * `threshold` - minimum probability threshold for valid class belonging. * `tiny_yolo_v1` - converting output of Tiny YOLO v1 model to `DetectionPrediction` representation. * `reid` - converting output of reidentification model to `ReIdentificationPrediction` representation. * `grn_workaround` - enabling processing output with adding Global Region Normalization layer. 
diff --git a/tools/accuracy_checker/accuracy_checker/adapters/segmentation.py b/tools/accuracy_checker/accuracy_checker/adapters/segmentation.py index 32a030f5f47..8ce4be078f4 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/segmentation.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/segmentation.py @@ -17,7 +17,7 @@ import numpy as np from ..adapters import Adapter from ..representation import SegmentationPrediction, BrainTumorSegmentationPrediction -from ..config import ConfigValidator, BoolField +from ..config import ConfigValidator, BoolField, NumberField class SegmentationAdapter(Adapter): @@ -71,6 +71,35 @@ def _extract_predictions(self, outputs_list, meta): return {self.output_blob: restore_output} +class SegmentationOneClassAdapter(Adapter): + __provider__ = 'segmentation_one_class' + prediction_types = (SegmentationPrediction, ) + + @classmethod + def parameters(cls): + params = super().parameters() + params.update({ + 'threshold': NumberField( + optional=True, value_type=float, min_value=0.0, default=0.5, + description='minimal probability threshold for separating predicted class from background' + ) + }) + return params + + def configure(self): + self.threshold = self.get_value_from_config('threshold') + + def process(self, raw, identifiers=None, frame_meta=None): + result = [] + frame_meta = frame_meta or [] * len(identifiers) + raw_outputs = self._extract_predictions(raw, frame_meta) + for identifier, output in zip(identifiers, raw_outputs[self.output_blob]): + output = output > self.threshold + result.append(SegmentationPrediction(identifier, output)) + + return result + + class BrainTumorSegmentationAdapter(Adapter): __provider__ = 'brain_tumor_segmentation' prediction_types = (BrainTumorSegmentationPrediction, ) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/README.md b/tools/accuracy_checker/accuracy_checker/metrics/README.md index b87de8f67f9..0fa0accdb23 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/README.md +++ b/tools/accuracy_checker/accuracy_checker/metrics/README.md @@ -153,9 +153,10 @@ More detailed information about calculation segmentation metrics you can find [h * `ndcg` - [Normalized Discounted Cumulative Gain](https://en.wikipedia.org/wiki/Discounted_cumulative_gain). Supported representations: `HitRatioAnnotation`, `HitRatioPrediction`. * `top_k` - definition of number elements in rank list (optional, default 10). * `dice` - [Sørensen–Dice coefficient](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient). Supported representations: `BrainTumorSegmentationAnnotation, BrainTumorSegmentationPrediction`. -* `dice_index` - [Sørensen–Dice coefficient](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient). Supported representations: `BrainTumorSegmentationAnnotation, BrainTumorSegmentationPrediction`. Supports result representation for multiple classes. Metric represents result for each class if `label_map` for used dataset is provided, otherwise it represents overall result. For `brats_numpy` converter file with labels set in `labels_file` tag. - * `mean` - allows calculation mean value (default - `True`) - * `median` - allows calculation median value (default - `False`) +* `dice_index` - [Sørensen–Dice coefficient](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient). Supported representations: `BrainTumorSegmentationAnnotation, BrainTumorSegmentationPrediction`, `SegmentationAnnotation, SegmentationPrediction`. 
Supports result representation for multiple classes. Metric represents result for each class if `label_map` for used dataset is provided, otherwise it represents overall result. For `brats_numpy` converter file with labels set in `labels_file` tag. + * `mean` - allows calculation mean value (default - `True`). + * `median` - allows calculation median value (default - `False`). + * `use_argmax` - allows to use argmax for prediction mask (default - `True`). * `bleu` - [Bilingual Evaluation Understudy](https://en.wikipedia.org/wiki/BLEU). Supperted representations: `MachineTranslationAnnotation`, `MachineTranslationPrediction`. * `smooth` - Whether or not to apply Lin et al. 2004 smoothing. * `max_order` - Maximum n-gram order to use when computing BLEU score. (Optional, default 4). diff --git a/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py b/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py index 36d93115e54..eaf8e380f96 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py @@ -144,8 +144,8 @@ def reset(self): class SegmentationDIAcc(PerImageEvaluationMetric): __provider__ = 'dice_index' - annotation_types = (BrainTumorSegmentationAnnotation,) - prediction_types = (BrainTumorSegmentationPrediction,) + annotation_types = (BrainTumorSegmentationAnnotation, SegmentationAnnotation) + prediction_types = (BrainTumorSegmentationPrediction, SegmentationPrediction) overall_metric = [] @@ -154,7 +154,8 @@ def parameters(cls): parameters = super().parameters() parameters.update({ 'mean': BoolField(optional=True, default=True, description='Allows calculation mean value.'), - 'median': BoolField(optional=True, default=False, description='Allows calculation median value.') + 'median': BoolField(optional=True, default=False, description='Allows calculation median value.'), + 'use_argmax': BoolField(optional=True, default=True, description="Allows to use argmax for prediction mask") }) return parameters @@ -162,6 +163,7 @@ def parameters(cls): def configure(self): self.mean = self.get_value_from_config('mean') self.median = self.get_value_from_config('median') + self.use_argmax = self.get_value_from_config('use_argmax') labels = self.dataset.labels.values() if self.dataset.metadata else ['overall'] self.classes = len(labels) @@ -178,7 +180,7 @@ def update(self, annotation, prediction): result = np.zeros(shape=self.classes) annotation_data = annotation.mask - prediction_data = np.argmax(prediction.mask, axis=0) + prediction_data = np.argmax(prediction.mask, axis=0) if self.use_argmax else prediction.mask.astype('int64') for c in range(1, self.classes): annotation_data_ = (annotation_data == c) @@ -206,4 +208,10 @@ def evaluate(self, annotations, predictions): return result def reset(self): + labels = self.dataset.labels.values() if self.dataset.metadata else ['overall'] + self.classes = len(labels) + names_mean = ['mean@{}'.format(name) for name in labels] if self.mean else [] + names_median = ['median@{}'.format(name) for name in labels] if self.median else [] + self.meta['names'] = names_mean + names_median + self.meta['calculate_mean'] = False self.overall_metric = [] From 1863d41fcd92c4d07e8f8dfe74036b50582080ea Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Tue, 8 Oct 2019 14:37:06 +0300 Subject: [PATCH 087/927] image_retrieval: fix example of execution --- demos/python_demos/image_retrieval_demo/README.md | 4 ++-- 1 file changed, 2 
insertions(+), 2 deletions(-) diff --git a/demos/python_demos/image_retrieval_demo/README.md b/demos/python_demos/image_retrieval_demo/README.md index 91c70b38515..e5c24501704 100644 --- a/demos/python_demos/image_retrieval_demo/README.md +++ b/demos/python_demos/image_retrieval_demo/README.md @@ -63,8 +63,8 @@ To run the demo, please provide paths to the model in the IR format, to a file w ```bash python image_retrieval_demo.py \ -m /home/user/image-retrieval-0001.xml \ --v /home/user/video.dav.mp4 \ --i /home/user/list.txt \ +-i /home/user/video.dav.mp4 \ +-g /home/user/list.txt \ -l /opt/intel/openvino/inference_engine/lib/intel64/libcpu_extension_avx512.so \ --ground_truth text_label ``` From 4a8c774f05ab3f9b350402ec70596a99570a74d2 Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 8 Oct 2019 17:19:07 +0300 Subject: [PATCH 088/927] AC: fix light weight accuracy configs (#494) --- tools/accuracy_checker/configs/Sphereface.yml | 3 --- .../configs/face-recognition-mobilefacenet-arcface.yml | 3 --- .../configs/face-recognition-resnet100-arcface.yml | 3 --- .../configs/face-recognition-resnet34-arcface.yml | 3 --- .../configs/face-recognition-resnet50-arcface.yml | 3 --- .../configs/face-reidentification-retail-0095.yml | 3 --- .../accuracy_checker/configs/facenet-20180408-102900.yml | 3 --- tools/accuracy_checker/dataset_definitions.yml | 8 ++++++-- 8 files changed, 6 insertions(+), 23 deletions(-) diff --git a/tools/accuracy_checker/configs/Sphereface.yml b/tools/accuracy_checker/configs/Sphereface.yml index 3489da6142f..b6981cfc122 100644 --- a/tools/accuracy_checker/configs/Sphereface.yml +++ b/tools/accuracy_checker/configs/Sphereface.yml @@ -25,6 +25,3 @@ models: - type: resize dst_height: 112 dst_width: 96 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml b/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml index b0694240b50..992d2aadfde 100644 --- a/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml @@ -48,6 +48,3 @@ models: size: 400 - type: resize size: 112 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml index 1a18ca58931..571f7ab2a7a 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml @@ -42,6 +42,3 @@ models: size: 400 - type: resize size: 112 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml index 9a4c5c2640f..a303d7e2ba2 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml @@ -49,6 +49,3 @@ models: size: 400 - type: resize size: 112 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml index 0cfd32c2426..d7e7c6f5ad3 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml @@ -49,6 +49,3 @@ models: size: 400 - type: 
resize size: 112 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml index b3a50daefb7..ef8ce00885d 100644 --- a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml +++ b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml @@ -32,6 +32,3 @@ models: size: 400 - type: resize size: 128 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/configs/facenet-20180408-102900.yml b/tools/accuracy_checker/configs/facenet-20180408-102900.yml index 2f4e8964423..0b697926331 100644 --- a/tools/accuracy_checker/configs/facenet-20180408-102900.yml +++ b/tools/accuracy_checker/configs/facenet-20180408-102900.yml @@ -24,6 +24,3 @@ models: size: 400 - type: resize size: 160 - - metrics: - - type: pairwise_accuracy_subsets diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index 9bdcfa9d681..ccdb4f704a4 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -324,6 +324,10 @@ datasets: landmarks_file: LFW/annotation/lfw_landmark.txt annotation: lfw.pickle + metrics: + - type: pairwise_accuracy_subsets + subset_number: 2 + - name: ICDAR2015 data_source: ICDAR15_DET/ch4_test_images annotation_conversion: @@ -339,10 +343,10 @@ datasets: dataset_meta: icdar13_recognition.json - name: market1501 - data_source: Market-1501-v15.09.15 + data_source: Market1501-person-reidentification/Market-1501-v15.09.15 annotation_conversion: converter: market1501_reid - data_dir: Market-1501-v15.09.15 + data_dir: Market1501-person-reidentification/Market-1501-v15.09.15 annotation: market1501_reid.pickle - name: vgg2face From 1b486736240c915204dea28863de4a77146c22e0 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Tue, 8 Oct 2019 18:28:45 +0300 Subject: [PATCH 089/927] Added info about demo --- CONTRIBUTING.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 2d4d3ef2c97..f91b3ab34b0 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -159,8 +159,14 @@ If appropriate demo or sample are absent, you must provide your own demo (C++ or - `-d ""` Optional. Target device for model inference. Default is CPU. - `-no_show` Optional. Do not visualize inference results. +> Note: For Python is preferable to use `-` instead of `_` as word separators (e.g. `-no-show`) + Also you can add any other necessary parameters. 
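A minimal `argparse` sketch of the parameter conventions listed above (flag spellings follow the list; the description and help strings are illustrative, not a required implementation):

```python
from argparse import ArgumentParser

def build_argparser():
    parser = ArgumentParser(description='Demo argument layout sketch')
    parser.add_argument('-i', required=True, help='Path to the input image or video')
    parser.add_argument('-m', required=True, help='Path to the model in IR format (.xml)')
    parser.add_argument('-d', default='CPU', help='Target device for model inference')
    # For Python demos the note above suggests '-' as the word separator, i.e. -no-show
    parser.add_argument('-no_show', action='store_true', help='Do not visualize inference results')
    return parser

if __name__ == '__main__':
    args = build_argparser().parse_args()
```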
+If you adding new demo, please provide auto-testing support too: +- add demo launch parameters in [demos/tests/cases.py](demos/tests/cases.py) +- prepare list of input images in [demos/tests/image_sequences.py](demos/tests/image_sequences.py) + *After this step you'll get **demo** for your model (if no demo was available)* ## Accuracy validation From f98f2e6fb8327b90d10df1c90a4c38f7e7e0dfaf Mon Sep 17 00:00:00 2001 From: maozhong Date: Wed, 9 Oct 2019 15:30:01 +0800 Subject: [PATCH 090/927] add minimun cosion distance option support for match matching Signed-off-by: maozhong --- .../face_recognition_demo/README.md | 17 +++++++---- .../face_recognition_demo/face_identifier.py | 5 ++-- .../face_recognition_demo.py | 6 +++- .../face_recognition_demo/faces_database.py | 28 ++++++++++++------- 4 files changed, 37 insertions(+), 19 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/README.md b/demos/python_demos/face_recognition_demo/README.md index 85b8485f3f6..6bc541ffc8f 100644 --- a/demos/python_demos/face_recognition_demo/README.md +++ b/demos/python_demos/face_recognition_demo/README.md @@ -75,12 +75,14 @@ any arguments yields the following message: python ./face_recognition_demo.py -h usage: face_recognition_demo.py [-h] [-i PATH] [-o PATH] [--no_show] [-tl] - [-cw CROP_WIDTH] [-ch CROP_HEIGHT] -fg PATH - [--run_detector] -m_fd PATH -m_lm PATH -m_reid PATH - [-fd_iw FD_INPUT_WIDTH] [-fd_ih FD_INPUT_HEIGHT] - [-d_fd {CPU,GPU,FPGA,MYRIAD,HETERO}] - [-d_lm {CPU,GPU,FPGA,MYRIAD,HETERO}] - [-d_reid {CPU,GPU,FPGA,MYRIAD,HETERO}] + [-cw CROP_WIDTH] [-ch CROP_HEIGHT] + [-match_algo {HUNGARIAN,MIN_DIST}] -fg PATH + [--run_detector] -m_fd PATH -m_lm PATH -m_reid + PATH [-fd_iw FD_INPUT_WIDTH] + [-fd_ih FD_INPUT_HEIGHT] + [-d_fd {CPU,GPU,FPGA,MYRIAD,HETERO,HDDL}] + [-d_lm {CPU,GPU,FPGA,MYRIAD,HETERO,HDDL}] + [-d_reid {CPU,GPU,FPGA,MYRIAD,HETERO,HDDL}] [-l PATH] [-c PATH] [-v] [-pc] [-t_fd [0..1]] [-t_id [0..1]] [-exp_r_fd NUMBER] @@ -103,6 +105,9 @@ General: (optional) Crop the input stream to this height (default: no crop). Both -cw and -ch parameters should be specified to use crop. 
+ -match_algo {HUNGARIAN,MIN_DIST} + (optional)algorithm for face matching(default: + HUNGARIAN) Faces database: -fg PATH Path to the face images directory diff --git a/demos/python_demos/face_recognition_demo/face_identifier.py b/demos/python_demos/face_recognition_demo/face_identifier.py index 9a54dea0714..95c696b2d17 100644 --- a/demos/python_demos/face_recognition_demo/face_identifier.py +++ b/demos/python_demos/face_recognition_demo/face_identifier.py @@ -39,7 +39,7 @@ def __init__(self, id, distance, desc): self.distance = distance self.descriptor = desc - def __init__(self, model, match_threshold=0.5): + def __init__(self, model, match_threshold=0.5, match_algo='HUNGARIAN'): super(FaceIdentifier, self).__init__(model) assert len(model.inputs) == 1, "Expected 1 input blob" @@ -57,6 +57,7 @@ def __init__(self, model, match_threshold=0.5): self.faces_database = None self.match_threshold = match_threshold + self.match_algo = match_algo def set_faces_database(self, database): self.faces_database = database @@ -89,7 +90,7 @@ def get_matches(self): matches = [] if len(descriptors) != 0: - matches = self.faces_database.match_faces(descriptors) + matches = self.faces_database.match_faces(descriptors, self.match_algo) results = [] unknowns_list = [] diff --git a/demos/python_demos/face_recognition_demo/face_recognition_demo.py b/demos/python_demos/face_recognition_demo/face_recognition_demo.py index d1748134dc7..d4ca264d717 100755 --- a/demos/python_demos/face_recognition_demo/face_recognition_demo.py +++ b/demos/python_demos/face_recognition_demo/face_recognition_demo.py @@ -32,6 +32,7 @@ from face_identifier import FaceIdentifier DEVICE_KINDS = ['CPU', 'GPU', 'FPGA', 'MYRIAD', 'HETERO', 'HDDL'] +MATCH_ALGO = ['HUNGARIAN', 'MIN_DIST'] def build_argparser(): @@ -55,6 +56,8 @@ def build_argparser(): help="(optional) Crop the input stream to this height " \ "(default: no crop). 
Both -cw and -ch parameters " \ "should be specified to use crop.") + general.add_argument('-match_algo', default='HUNGARIAN', choices=MATCH_ALGO, + help="(optional)algorithm for face matching(default: %(default)s)") gallery = parser.add_argument_group('Faces database') gallery.add_argument('-fg', metavar="PATH", required=True, @@ -146,7 +149,8 @@ def __init__(self, args): self.landmarks_detector = LandmarksDetector(landmarks_net) self.face_identifier = FaceIdentifier(face_reid_net, - match_threshold=args.t_id) + match_threshold=args.t_id, + match_algo = args.match_algo) self.face_detector.deploy(args.d_fd, context) self.landmarks_detector.deploy(args.d_lm, context, diff --git a/demos/python_demos/face_recognition_demo/faces_database.py b/demos/python_demos/face_recognition_demo/faces_database.py index 7ead34bb193..f953d69920a 100644 --- a/demos/python_demos/face_recognition_demo/faces_database.py +++ b/demos/python_demos/face_recognition_demo/faces_database.py @@ -149,7 +149,7 @@ def ask_to_save(self, image): label = name if save else None return label - def match_faces(self, descriptors): + def match_faces(self, descriptors, match_algo='HUNGARIAN'): database = self.database distances = np.empty((len(descriptors), len(database))) for i, desc in enumerate(descriptors): @@ -159,17 +159,25 @@ def match_faces(self, descriptors): dist.append(FacesDatabase.Identity.cosine_dist(desc, id_desc)) distances[i][j] = dist[np.argmin(dist)] - # Find best assignments, prevent repeats, assuming faces can not repeat - _, assignments = linear_sum_assignment(distances) matches = [] - for i in range(len(descriptors)): - if len(assignments) <= i: # assignment failure, too many faces - matches.append((0, 1.0)) - continue + # if user specify MIN_DIST for face matching, face with minium cosine distance will be selected. + if match_algo == 'MIN_DIST': + for i in range(len(descriptors)): + id = np.argmin(distances[i]) + min_dist = np.min(distances[i]) + matches.append((id, min_dist)) + else: + # Find best assignments, prevent repeats, assuming faces can not repeat + _, assignments = linear_sum_assignment(distances) + for i in range(len(descriptors)): + if len(assignments) <= i: # assignment failure, too many faces + matches.append((0, 1.0)) + continue + + id = assignments[i] + distance = distances[i, id] + matches.append((id, distance)) - id = assignments[i] - distance = distances[i, id] - matches.append((id, distance)) return matches def create_new_label(self, path, id): From deebf59ec28b7fe613df6d33ac6730d824ae93de Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 9 Oct 2019 17:29:27 +0300 Subject: [PATCH 091/927] AC: Pytorch Launcher import (#498) --- .../accuracy_checker/launcher/__init__.py | 7 +------ .../launcher/pytorch_launcher.py | 18 +++++++++++------- .../tests/test_pytorch_launcher.py | 3 +-- 3 files changed, 13 insertions(+), 15 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/__init__.py b/tools/accuracy_checker/accuracy_checker/launcher/__init__.py index f177ab22816..46cc8a8c253 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/__init__.py @@ -62,12 +62,7 @@ 'onnx_runtime', "ONNX Runtime isn't installed. Please, install it before using. \n{}".format(import_error.msg) ) -try: - from .pytorch_launcher import PyTorchLauncher -except ImportError as import_error: - PyTorchLauncher = unsupported_launcher( - 'pytorch', "PyTorch isn't installed. Please, install it before using. 
\n{}".format(import_error.msg) - ) +from .pytorch_launcher import PyTorchLauncher __all__ = [ 'create_launcher', diff --git a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py index cdada200652..4ddf194f413 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/pytorch_launcher.py @@ -4,9 +4,6 @@ from collections import OrderedDict import numpy as np -import torch -from torch.autograd import Variable - from ..config import PathField, StringField, DictField, NumberField, ListField from .launcher import Launcher, LauncherConfigValidator @@ -44,6 +41,13 @@ def parameters(cls): def __init__(self, config_entry: dict, *args, **kwargs): super().__init__(config_entry, *args, **kwargs) + try: + # Pytorch import affects performance of common pipeline + # it is the reason, why it is imported only when it used + import torch # pylint: disable=C0415 + except ImportError as import_error: + raise ValueError("PyTorch isn't installed. Please, install it before using. \n{}".format(import_error.msg)) + self._torch = torch pytorch_launcher_config = LauncherConfigValidator('Pytorch_Launcher', fields=self.parameters()) pytorch_launcher_config.validate(self.config) module_args = config_entry.get("module_args", ()) @@ -93,7 +97,7 @@ def load_module(self, model_cls, module_args, module_kwargs, checkpoint=None, st model_cls = importlib.import_module(model_path).__getattribute__(model_cls) module = model_cls(*module_args, **module_kwargs) if checkpoint: - checkpoint = torch.load(checkpoint) + checkpoint = self._torch.load(checkpoint) state = checkpoint if not state_key else checkpoint[state_key] module.load_state_dict(state) if self.cuda: @@ -105,11 +109,11 @@ def load_module(self, model_cls, module_args, module_kwargs, checkpoint=None, st def fit_to_input(self, data, layer_name, layout): data = np.transpose(data, layout) - tensor = torch.from_numpy(data.astype(np.float32)) + tensor = self._torch.from_numpy(data.astype(np.float32)) if self.cuda: tensor = tensor.cuda() - with torch.no_grad(): - return Variable(tensor) + with self._torch.no_grad(): + return self._torch.autograd.Variable(tensor) def predict(self, inputs, metadata=None, **kwargs): results = [] diff --git a/tools/accuracy_checker/tests/test_pytorch_launcher.py b/tools/accuracy_checker/tests/test_pytorch_launcher.py index 05a43de4fca..788bf7497fd 100644 --- a/tools/accuracy_checker/tests/test_pytorch_launcher.py +++ b/tools/accuracy_checker/tests/test_pytorch_launcher.py @@ -15,13 +15,12 @@ """ import pytest -pytest.importorskip('accuracy_checker.launcher.pytorch_launcher') +pytest.importorskip('torch') import cv2 import numpy as np from accuracy_checker.launcher.launcher import create_launcher from accuracy_checker.config import ConfigError -from accuracy_checker.data_readers import DataRepresentation def get_pth_test_model(models_dir): config = { From da761c146efbea6c393dd68a577aeb4a8f3a075d Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Wed, 9 Oct 2019 17:57:59 +0300 Subject: [PATCH 092/927] FIX --- CONTRIBUTING.md | 131 ++++++++++++++++++++++++++++++++++-------------- 1 file changed, 93 insertions(+), 38 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index f91b3ab34b0..7efce2190f3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -22,7 +22,7 @@ Name your model in OMZ using next rules: - name must be consistent with name given by authors, but full match not 
necessary - use lowercase - spaces are not allowed in the name, use `-` or `_` (`-` is preferable) as delimiters instead -- if necessary, add suffix to model name, according to origin framework (see **`framework`** description in [configuration file](#configuration-file) section) +- suffix to model name, according to origin framework (see **`framework`** description in [configuration file](#configuration-file) section), if you adding reimplementation of existing model in OMZ from another framework This name will be used for downloading, converting, etc. Example: @@ -43,7 +43,7 @@ This PR must pass next tests: * model can be used by demo or sample and provides adequate results (see [Demo](#demo) for details) * model passes accuracy validation (see [Accuracy validation](#accuracy-validation) for details) -After the end, your PR will be review by OpenVINO™'s team for consistence and legal. +After the end, your PR will be reviewed our team for consistence and legal compliance. Your PR can be denied in case: * inappropriate license (e.g. GPL-like licenses) @@ -56,11 +56,11 @@ Models configuration file contains information about model: what it is, how to d **`description`** -This tag contains description of model. +Description of model. **`task_type`** -This tag describes task that model solves: +Model task class: - `action_recognition` - `classification` - `detection` @@ -79,7 +79,7 @@ If task, that your model solve, is not listed here, please add new type of task > Before filling this section, you must ensure that a model is downloadable either from a direct HTTP(S) link or from Google Drive\*. -You describe all files, which need to be downloaded, in this section. Each file is described in few tags: +You describe all files, which need to be downloaded, in this section. Each file is described by: * `name` sets file name after downloading * `size` sets file size @@ -106,7 +106,7 @@ For unpacking archive: For replacement operations: - `$type: regex_replace` - `file` name of file where replacement must be executed -- `pattern` string or regexp ([learn more](https://docs.python.org/2/library/re.html)) +- `pattern` regular expression ([learn more](https://docs.python.org/3/library/re.html)) - `replacement` replacement string - `count` (*optional*) maximum number of pattern occurrences to be replaced @@ -135,16 +135,49 @@ Framework of original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`). Path to model's license. +### Example + +In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from Google Drive\* as archive. + +``` +description: >- + This is an Tensorflow\* version of `densenet-121` model, one of the DenseNet + group of models designed to perform image classification. The weights were converted + from DenseNet-Keras Models. 
For details see repository , + paper +task_type: classification +files: + - name: tf-densenet121.tar.gz + size: 30597420 + sha256: b31ec840358f1d20e1c6364d05ce463cb0bc0480042e663ad54547189501852d + source: + $type: google_drive + id: 0B_fUSpodN0t0eW1sVk1aeWREaDA +postprocessing: + - $type: unpack_archive + format: gztar + file: tf-densenet121.tar.gz +model_optimizer_args: + - --reverse_input_channels + - --input_shape=[1,224,224,3] + - --input=Placeholder + - --mean_values=Placeholder[123.68,116.78,103.94] + - --scale_values=Placeholder[58.8235294117647] + - --output=densenet121/predictions/Reshape_1 + - --input_meta_graph=$dl_dir/tf-densenet121.ckpt.meta +framework: tf +license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICENSE +``` ---- *After this step you will obtain **model.yml** file* ## Model conversion -Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ package. Find more information about conversion in [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ toolkit. Find more information about conversion in [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. -> **NOTE 1**: due to OpenVINO™ paradigms, image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. +> **NOTE 1**: image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. -> **NOTE 2**: due to OpenVINO™ paradigms, if model input is a color image, color channel order should be `BGR`. +> **NOTE 2**: if model input is a color image, color channel order should be `BGR`. *After this step you`ll get **conversion parameters** for Model Optimizer.* @@ -171,7 +204,7 @@ If you adding new demo, please provide auto-testing support too: ## Accuracy validation -Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use IE to run converted model or original framework to run original model. Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#resting-new-models). +Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use IE to run converted model or original framework to run original model. Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). 
You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#testing-new-models). If model uses dataset which is unsupported by Accuracy Checker, you also must provide link to it. Please notice this issue in PR description. Don't forget about dataset license too (see [above](#how-to-contribute-model-to-open-model-zoo)). @@ -181,38 +214,60 @@ When the configuration file is ready, you must run Accuracy Checker to obtain me ### Example -In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from Google Drive\* as archive. +Let use one of the file from `tools/accuracy_checker/configs`, for example, this is validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml): ``` -description: >- - This is an Tensorflow\* version of `densenet-121` model, one of the DenseNet - group of models designed to perform image classification. The weights were converted - from DenseNet-Keras Models. For details see repository , - paper -task_type: classification -files: - - name: tf-densenet121.tar.gz - size: 30597420 - sha256: b31ec840358f1d20e1c6364d05ce463cb0bc0480042e663ad54547189501852d - source: - $type: google_drive - id: 0B_fUSpodN0t0eW1sVk1aeWREaDA -postprocessing: - - $type: unpack_archive - format: gztar - file: tf-densenet121.tar.gz -model_optimizer_args: - - --reverse_input_channels - - --input_shape=[1,224,224,3] - - --input=Placeholder - - --mean_values=Placeholder[123.68,116.78,103.94] - - --scale_values=Placeholder[58.8235294117647] - - --output=densenet121/predictions/Reshape_1 - - --input_meta_graph=$dl_dir/tf-densenet121.ckpt.meta -framework: tf -license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICENSE +models: + - name: alexnet-cf + launchers: + - framework: caffe + model: public/alexnet/alexnet.prototxt + weights: public/alexnet/alexnet.caffemodel + adapter: classification + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + - type: crop + size: 227 + - type: normalization + mean: 104, 117, 123 + + - name: alexnet + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/alexnet/FP32/alexnet.xml + weights: public/alexnet/FP32/alexnet.bin + adapter: classification + + - framework: dlsdk + tags: + - FP16 + model: public/alexnet/FP16/alexnet.xml + weights: public/alexnet/FP16/alexnet.bin + adapter: classification + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + - type: crop + size: 227 + + metrics: + - name: accuracy@top1 + type: accuracy + top_k: 1 + - name: acciracy@top5 + type: accuracy + top_k: 5 ``` + ## Documentation Documentation is very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. 
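The `accuracy` metric entries in the AlexNet configuration above reduce to a standard top-k check. A rough NumPy illustration of what `accuracy@top1`/`accuracy@top5` compute (the Accuracy Checker implementation differs in details; this only shows the idea):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """scores: N x C class scores, labels: N ground-truth class indices."""
    top_k = np.argsort(-scores, axis=1)[:, :k]
    return float((top_k == labels[:, None]).any(axis=1).mean())

scores = np.array([[0.1, 0.6, 0.3],
                   [0.8, 0.1, 0.1]])
labels = np.array([1, 2])
print(top_k_accuracy(scores, labels, k=1))  # 0.5
```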
From 4798f32ca563bc104e8c62ef6b37696fdd58131f Mon Sep 17 00:00:00 2001 From: maozhong Date: Thu, 10 Oct 2019 09:26:24 +0800 Subject: [PATCH 093/927] change per review comments Signed-off-by: maozhong --- demos/python_demos/face_recognition_demo/README.md | 4 ++-- .../face_recognition_demo/face_recognition_demo.py | 2 +- demos/python_demos/face_recognition_demo/faces_database.py | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/demos/python_demos/face_recognition_demo/README.md b/demos/python_demos/face_recognition_demo/README.md index 6bc541ffc8f..0ae5502f199 100644 --- a/demos/python_demos/face_recognition_demo/README.md +++ b/demos/python_demos/face_recognition_demo/README.md @@ -76,7 +76,7 @@ python ./face_recognition_demo.py -h usage: face_recognition_demo.py [-h] [-i PATH] [-o PATH] [--no_show] [-tl] [-cw CROP_WIDTH] [-ch CROP_HEIGHT] - [-match_algo {HUNGARIAN,MIN_DIST}] -fg PATH + [--match_algo {HUNGARIAN,MIN_DIST}] -fg PATH [--run_detector] -m_fd PATH -m_lm PATH -m_reid PATH [-fd_iw FD_INPUT_WIDTH] [-fd_ih FD_INPUT_HEIGHT] @@ -105,7 +105,7 @@ General: (optional) Crop the input stream to this height (default: no crop). Both -cw and -ch parameters should be specified to use crop. - -match_algo {HUNGARIAN,MIN_DIST} + --match_algo {HUNGARIAN,MIN_DIST} (optional)algorithm for face matching(default: HUNGARIAN) diff --git a/demos/python_demos/face_recognition_demo/face_recognition_demo.py b/demos/python_demos/face_recognition_demo/face_recognition_demo.py index d4ca264d717..a7d0ece4f9e 100755 --- a/demos/python_demos/face_recognition_demo/face_recognition_demo.py +++ b/demos/python_demos/face_recognition_demo/face_recognition_demo.py @@ -56,7 +56,7 @@ def build_argparser(): help="(optional) Crop the input stream to this height " \ "(default: no crop). 
Both -cw and -ch parameters " \ "should be specified to use crop.") - general.add_argument('-match_algo', default='HUNGARIAN', choices=MATCH_ALGO, + general.add_argument('--match_algo', default='HUNGARIAN', choices=MATCH_ALGO, help="(optional)algorithm for face matching(default: %(default)s)") gallery = parser.add_argument_group('Faces database') diff --git a/demos/python_demos/face_recognition_demo/faces_database.py b/demos/python_demos/face_recognition_demo/faces_database.py index f953d69920a..b81eb3bd43b 100644 --- a/demos/python_demos/face_recognition_demo/faces_database.py +++ b/demos/python_demos/face_recognition_demo/faces_database.py @@ -164,7 +164,7 @@ def match_faces(self, descriptors, match_algo='HUNGARIAN'): if match_algo == 'MIN_DIST': for i in range(len(descriptors)): id = np.argmin(distances[i]) - min_dist = np.min(distances[i]) + min_dist = distances[i][id] matches.append((id, min_dist)) else: # Find best assignments, prevent repeats, assuming faces can not repeat From 3f28ddf75824d06e202ce35989becbc9ad644f7e Mon Sep 17 00:00:00 2001 From: dliang0406 Date: Thu, 10 Oct 2019 15:50:07 +0800 Subject: [PATCH 094/927] make a folder for the annotation/meta file if it doesn't exist (#501) --- .../accuracy_checker/annotation_converters/convert.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py index bc33c767bff..a4e6f216e3a 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py @@ -147,10 +147,16 @@ def main(): def save_annotation(annotation, meta, annotation_file, meta_file): if annotation_file: + annotation_dir = annotation_file.resolve().parent + if not annotation_dir.exists(): + annotation_dir.mkdir(parents=True) with annotation_file.open('wb') as file: for representation in annotation: representation.dump(file) if meta_file and meta: + meta_dir = meta_file.resolve().parent + if not meta_dir.exists(): + meta_dir.mkdir(parents=True) with meta_file.open('wt') as file: json.dump(meta, file) From 7313faaf79adeb34e9a59aa61814c577d39dc8d9 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Tue, 10 Sep 2019 13:37:11 +0300 Subject: [PATCH 095/927] unify conversion --- ci/requirements-conversion.txt | 1 + .../densenet-121-caffe2.md} | 18 +- .../model.yml | 24 +- models/public/index.md | 8 +- .../model.yml | 24 +- .../resnet-50-caffe2.md} | 18 +- .../model.yml | 24 +- .../squeezenet1.1-caffe2.md} | 16 +- .../{vgg19-cf2 => vgg19-caffe2}/model.yml | 24 +- .../vgg19-caffe2.md} | 16 +- .../configs/densenet121-caffe2.yml | 46 ++++ .../configs/resnet-50-caffe2.yml | 46 ++++ .../configs/squeezenet1.1-caffe2.yml | 44 ++++ .../accuracy_checker/configs/vgg19-caffe2.yml | 45 ++++ tools/downloader/README.md | 15 +- tools/downloader/caffe2_to_onnx.py | 71 +++--- tools/downloader/common.py | 1 + tools/downloader/converter.py | 4 - tools/downloader/license.txt | 208 ++++++++++++++++++ tools/downloader/requirements-caffe2.in | 3 + .../tests/representative-models.lst | 1 + 21 files changed, 520 insertions(+), 137 deletions(-) rename models/public/{densenet-121-cf2/densenet-121-cf2.md => densenet-121-caffe2/densenet-121-caffe2.md} (74%) rename models/public/{densenet-121-cf2 => densenet-121-caffe2}/model.yml (64%) rename models/public/{resnet-50-cf2 => resnet-50-caffe2}/model.yml (63%) rename models/public/{resnet-50-cf2/resnet-50-cf2.md => 
resnet-50-caffe2/resnet-50-caffe2.md} (75%) rename models/public/{squeezenet1.1-cf2 => squeezenet1.1-caffe2}/model.yml (61%) rename models/public/{squeezenet1.1-cf2/squeezenet1.1-cf2.md => squeezenet1.1-caffe2/squeezenet1.1-caffe2.md} (73%) rename models/public/{vgg19-cf2 => vgg19-caffe2}/model.yml (62%) rename models/public/{vgg19-cf2/vgg19-cf2.md => vgg19-caffe2/vgg19-caffe2.md} (73%) create mode 100644 tools/accuracy_checker/configs/densenet121-caffe2.yml create mode 100644 tools/accuracy_checker/configs/resnet-50-caffe2.yml create mode 100644 tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml create mode 100644 tools/accuracy_checker/configs/vgg19-caffe2.yml create mode 100644 tools/downloader/requirements-caffe2.in diff --git a/ci/requirements-conversion.txt b/ci/requirements-conversion.txt index cb067e25b8b..adaa344a39e 100644 --- a/ci/requirements-conversion.txt +++ b/ci/requirements-conversion.txt @@ -4,6 +4,7 @@ certifi==2019.9.11 # via requests chardet==3.0.4 # via requests decorator==4.4.0 # via networkx defusedxml==0.6.0 +future==0.17.1 gast==0.3.2 # via tensorflow google-pasta==0.1.7 # via tensorflow graphviz==0.8.4 # via mxnet diff --git a/models/public/densenet-121-cf2/densenet-121-cf2.md b/models/public/densenet-121-caffe2/densenet-121-caffe2.md similarity index 74% rename from models/public/densenet-121-cf2/densenet-121-cf2.md rename to models/public/densenet-121-caffe2/densenet-121-caffe2.md index 6f444349a18..24b5d5ec58b 100644 --- a/models/public/densenet-121-cf2/densenet-121-cf2.md +++ b/models/public/densenet-121-caffe2/densenet-121-caffe2.md @@ -1,12 +1,12 @@ -# densenet-121-cf2 +# densenet-121-caffe2 ## Use Case and High-Level Description -This is an Caffe2\* version of `densenet-121` model, one of the DenseNet +This is a Caffe2\* version of `densenet-121` model, one of the DenseNet group of models designed to perform image classification. This model -was converted from Caffe\* to Caffe2\* fromat. +was converted from Caffe\* to Caffe2\* format. For details see repository , -paper +paper . ## Example @@ -17,7 +17,7 @@ paper | Type | Classification| | GFLOPs | 5.723 | | MParams | 7.971 | -| Source framework | Caffe2\* | +| Source framework | Caffe2\* | ## Accuracy @@ -35,7 +35,7 @@ Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `W` - width Channel order is `BGR`. -Mean values - [103.94,116.78,123.68], scale value - 58.8235294 +Mean values - [103.94,116.78,123.68], scale value - 58.8235294. ### Converted model @@ -46,19 +46,19 @@ Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `H` - height - `W` - width -Channel order is `BGR` +Channel order is `BGR`. ## Output ### Original model Object classifier according to ImageNet classes, name - `fc6`, shape - `1,1000,1,1`, contains predicted -probability for each class in logits format +probability for each class in logits format. ### Converted model Object classifier according to ImageNet classes, name - `fc6`, shape - `1,1000,1,1`, contains predicted -probability for each class in logits format +probability for each class in logits format. ## Legal Information diff --git a/models/public/densenet-121-cf2/model.yml b/models/public/densenet-121-caffe2/model.yml similarity index 64% rename from models/public/densenet-121-cf2/model.yml rename to models/public/densenet-121-caffe2/model.yml index ce73214ae26..fbca024b47b 100644 --- a/models/public/densenet-121-cf2/model.yml +++ b/models/public/densenet-121-caffe2/model.yml @@ -13,33 +13,33 @@ # limitations under the License. 
description: >- - This is an Caffe2\* version of `densenet-121` model, one of the DenseNet + This is a Caffe2\* version of `densenet-121` model, one of the DenseNet group of models designed to perform image classification. This model - was converted from Caffe\* to Caffe2\* fromat. + was converted from Caffe\* to Caffe2\* format. For details see repository , - paper + paper . task_type: classification files: - name: predict_net.pb size: 77239 sha256: 820772d4e7b907599cba93ab0e7d2db0dc0b6e313e842a8729a0ea0354e4a719 - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/densenet121/predict_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/densenet121/predict_net.pb - name: init_net.pb size: 40785727 sha256: a3650579bc883a1755750994507c48d84d0f75d193e304eb8caf5031acb5f028 - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/densenet121/init_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/densenet121/init_net.pb framework: caffe2 -caffe2_to_onnx: - - --model-name=densenet-121-cf2 - - --predict-net-path=$dl_dir/predict_net.pb - - --init-net-path=$dl_dir/init_net.pb - - --input-shape=[1,3,224,224] +conversion_to_onnx_args: + - --model-path=$dl_dir/predict_net.pb + - --model-name=densenet-121-caffe2 + - --weights=$dl_dir/init_net.pb + - --input-shape=1,3,224,224 - --input-names=data - - --output-file=$conv_dir/densenet-121-cf2.onnx + - --output-file=$conv_dir/densenet-121-caffe2.onnx model_optimizer_args: - --input_shape=[1,3,224,224] - --input=data - --mean_values=data[103.94,116.78,123.68] - --scale_values=data[58.8235294] - - --input_model=$conv_dir/densenet-121-cf2.onnx + - --input_model=$conv_dir/densenet-121-caffe2.onnx license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/index.md b/models/public/index.md index 6a1d273dd28..328a552a71c 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -13,7 +13,7 @@ The models can be downloaded via Model Downloader | ----------------- | ---------------| -------------- | -------- | ------ | ------- | | AlexNet | [Caffe\*](./alexnet/alexnet.md) | alexnet | | 1.5 | 60.965 | | CaffeNet | [Caffe\*](./caffenet/caffenet.md) | caffenet | | 1.5 | 60.965 | -| DenseNet 121 | [Caffe\*](./densenet-121/densenet-121.md)
[TensorFlow\*](./densenet-121-tf/densenet-121-tf.md) | densenet-121
densenet-121-tf | | 5.289~5.724 | 7.971 | +| DenseNet 121 | [Caffe\*](./densenet-121/densenet-121.md)
[TensorFlow\*](./densenet-121-tf/densenet-121-tf.md)
[Caffe2\*](./densenet-121-caffe2/densenet-121-caffe2.md) | densenet-121
densenet-121-tf
densenet-121-caffe2 | | 5.289~5.724 | 7.971 | | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161
densenet-161-tf | | 14.128~15.561 | 28.666 | | DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)
[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169
densenet-169-tf | | 6.16~6.788 | 14.139 | | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | @@ -28,7 +28,7 @@ The models can be downloaded via Model Downloader | MobileNet V1 1.0 224 | [Caffe\*](./mobilenet-v1-1.0-224/mobilenet-v1-1.0-224.md)
[TensorFlow\*](./mobilenet-v1-1.0-224-tf/mobilenet-v1-1.0-224-tf.md) | mobilenet-v1-1.0-224
mobilenet-v1-1.0-224-tf | | 1.148 | 4.221 | | MobileNet V2 1.0 224 | [Caffe\*](./mobilenet-v2/mobilenet-v2.md)
[TensorFlow\*](./mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md)
[PyTorch\*](./mobilenet-v2-pytorch/mobilenet-v2-pytorch.md) | mobilenet-v2
mobilenet-v2-1.0-224
mobilenet-v2-pytorch | | 0.615~0.876 | 3.489 | | MobileNet V2 1.4 224 | [TensorFlow\*](./mobilenet-v2-1.4-224/mobilenet-v2-1.4-224.md) | mobilenet-v2-1.4-224 | | 1.183 | 6.087 | -| ResNet 50 | [Caffe\*](./resnet-50/resnet-50.md)
[PyTorch\*](./resnet-50-pytorch/resnet-50-pytorch.md) | resnet-50
resnet-50-pytorch | | 6.996~8.216 | 25.53 | +| ResNet 50 | [Caffe\*](./resnet-50/resnet-50.md)
[PyTorch\*](./resnet-50-pytorch/resnet-50-pytorch.md)
[Caffe2\*](./resnet-50-caffe2/resnet-50-caffe2.md) | resnet-50
resnet-50-pytorch
resnet-50-caffe2 | | 6.996~8.216 | 25.53 | | ResNet 101 | [Caffe\*](./resnet-101/resnet-101.md) | resnet-101 | | 14.441 | 44.496 | | ResNet 152 | [Caffe\*](./resnet-152/resnet-152.md) | resnet-152 | | 21.89 | 60.117 | | SE-Inception | [Caffe\*](./se-inception/se-inception.md) | se-inception | | 4.091 | 11.922 | @@ -38,9 +38,9 @@ The models can be downloaded via Model Downloader | SE-ResNeXt 50 | [Caffe\*](./se-resnext-50/se-resnext-50.md) | se-resnext-50 | | 8.533 | 27.526| | SE-ResNeXt 101 | [Caffe\*](./se-resnext-101/se-resnext-101.md) | se-resnext-101 | | 16.054 | 48.886 | | SqueezeNet v1.0 | [Caffe\*](./squeezenet1.0/squeezenet1.0.md) | squeezenet1.0| | 1.737 | 1.248 | -| SqueezeNet v1.1 | [Caffe\*](./squeezenet1.1/squeezenet1.1.md) | squeezenet1.1| | 0.785 | 1.236 | +| SqueezeNet v1.1 | [Caffe\*](./squeezenet1.1/squeezenet1.1.md)
[Caffe2\*](./squeezenet1.1-caffe2/squeezenet1.1-caffe2.md) | squeezenet1.1
squeezenet1.1-caffe2| | 0.785 | 1.236 | | VGG 16 | [Caffe\*](./vgg16/vgg16.md) | vgg16 | | 30.974 | 138.358 | -| VGG 19 | [Caffe\*](./vgg19/vgg19.md) | vgg19 | | 39.3 | 143.667 | +| VGG 19 | [Caffe\*](./vgg19/vgg19.md)
[Caffe2\*](./vgg19-caffe2/vgg19-caffe2.md) | vgg19
vgg19-caffe2 | | 39.3 | 143.667 | **Octave Convolutions Networks** diff --git a/models/public/resnet-50-cf2/model.yml b/models/public/resnet-50-caffe2/model.yml similarity index 63% rename from models/public/resnet-50-cf2/model.yml rename to models/public/resnet-50-caffe2/model.yml index d098fd5392d..d64390b3b0d 100644 --- a/models/public/resnet-50-cf2/model.yml +++ b/models/public/resnet-50-caffe2/model.yml @@ -13,32 +13,32 @@ # limitations under the License. description: >- - This is an Caffe2\* version of `resnet-50` model, designed to perform image classification. - This model was converted from Caffe\* to Caffe2\* fromat. + This is a Caffe2\* version of `resnet-50` model, designed to perform image classification. + This model was converted from Caffe\* to Caffe2\* format. For details see repository , - paper + paper . task_type: classification files: - name: predict_net.pb size: 31649 sha256: 657081428cd8a8d9f1a6b20a8b6dba51725d3fc1eaabf0f19747a3b843e18a16 - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/resnet50/predict_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/resnet50/predict_net.pb - name: init_net.pb size: 128070759 sha256: 97046c44ecd15b3c8806f609a15d0cc52af7bdc8aa19c720f8a1f6abe68e9a74 - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/resnet50/init_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/resnet50/init_net.pb framework: caffe2 -caffe2_to_onnx: - - --model-name=resnet-50-cf2 - - --predict-net-path=$dl_dir/predict_net.pb - - --init-net-path=$dl_dir/init_net.pb - - --input-shape=[1,3,224,224] +conversion_to_onnx_args: + - --model-path=$dl_dir/predict_net.pb + - --model-name=resnet-50-caffe2 + - --weights=$dl_dir/init_net.pb + - --input-shape=1,3,224,224 - --input-names=gpu_0/data - - --output-file=$conv_dir/resnet-50-cf2.onnx + - --output-file=$conv_dir/resnet-50-caffe2.onnx model_optimizer_args: - --input_shape=[1,3,224,224] - --input=gpu_0/data - --mean_values=gpu_0/data[103.53,116.28,123.675] - --scale_values=gpu_0/data[57.375,57.12,58.395] - - --input_model=$conv_dir/resnet-50-cf2.onnx + - --input_model=$conv_dir/resnet-50-caffe2.onnx license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/resnet-50-cf2/resnet-50-cf2.md b/models/public/resnet-50-caffe2/resnet-50-caffe2.md similarity index 75% rename from models/public/resnet-50-cf2/resnet-50-cf2.md rename to models/public/resnet-50-caffe2/resnet-50-caffe2.md index 122776abf4f..1f769c937dc 100644 --- a/models/public/resnet-50-cf2/resnet-50-cf2.md +++ b/models/public/resnet-50-caffe2/resnet-50-caffe2.md @@ -1,11 +1,11 @@ -# resnet-50-cf2 +# resnet-50-caffe2 ## Use Case and High-Level Description -This is an Caffe2\* version of `resnet-50` model, designed to perform image classification. -This model was converted from Caffe\* to Caffe2\* fromat. +This is a Caffe2\* version of `resnet-50` model, designed to perform image classification. +This model was converted from Caffe\* to Caffe2\* format. For details see repository , -paper +paper . ## Example @@ -16,7 +16,7 @@ paper | Type | Classification| | GFLOPs | 8.216 | | MParams | 25.53 | -| Source framework | Caffe2\* | +| Source framework | Caffe2\* | ## Accuracy @@ -34,7 +34,7 @@ Image, name - `gpu_0/data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `W` - width Channel order is `BGR`. 
-Mean values - [103.53,116.28,123.675], scale values - [57.375,57.12,58.395] +Mean values - [103.53,116.28,123.675], scale values - [57.375,57.12,58.395]. ### Converted model @@ -45,7 +45,7 @@ Image, name - `gpu_0/data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `H` - height - `W` - width -Channel order is `BGR` +Channel order is `BGR`. ## Output @@ -54,14 +54,14 @@ Channel order is `BGR` Object classifier according to ImageNet classes, name - `gpu_0/softmax`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ### Converted model Object classifier according to ImageNet classes, name - `gpu_0/softmax`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ## Legal Information diff --git a/models/public/squeezenet1.1-cf2/model.yml b/models/public/squeezenet1.1-caffe2/model.yml similarity index 61% rename from models/public/squeezenet1.1-cf2/model.yml rename to models/public/squeezenet1.1-caffe2/model.yml index 5322c899f9b..f85ee091e97 100644 --- a/models/public/squeezenet1.1-cf2/model.yml +++ b/models/public/squeezenet1.1-caffe2/model.yml @@ -13,31 +13,31 @@ # limitations under the License. description: >- - This is an Caffe2\* version of `squeezenet1.1` model, designed to perform image classification. - This model was converted from Caffe\* to Caffe2\* fromat. + This is a Caffe2\* version of `squeezenet1.1` model, designed to perform image classification. + This model was converted from Caffe\* to Caffe2\* format. For details see repository , - paper + paper . 
task_type: classification files: - name: predict_net.pb size: 6175 sha256: d20be00eb448d3952265620357132916aba8744b027937b56c469b001b46472b - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/squeezenet/predict_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/squeezenet/predict_net.pb - name: init_net.pb size: 6181001 sha256: d8115221de899d081a1a83785bf0dbaeea19463cdf7dbddba662cc7abb4f32dc - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/squeezenet/init_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/squeezenet/init_net.pb framework: caffe2 -caffe2_to_onnx: - - --model-name=squeezenet1.1-cf2 - - --predict-net-path=$dl_dir/predict_net.pb - - --init-net-path=$dl_dir/init_net.pb - - --input-shape=[1,3,227,227] +conversion_to_onnx_args: + - --model-path=$dl_dir/predict_net.pb + - --model-name=squeezenet1.1-caffe2 + - --weights=$dl_dir/init_net.pb + - --input-shape=1,3,227,227 - --input-names=data - - --output-file=$conv_dir/squeezenet1.1-cf2.onnx + - --output-file=$conv_dir/squeezenet1.1-caffe2.onnx model_optimizer_args: - --input_shape=[1,3,227,227] - --input=data - --mean_values=data[103.96,116.78,123.68] - - --input_model=$conv_dir/squeezenet1.1-cf2.onnx + - --input_model=$conv_dir/squeezenet1.1-caffe2.onnx license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md b/models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md similarity index 73% rename from models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md rename to models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md index 859141c87db..abcffc4895c 100644 --- a/models/public/squeezenet1.1-cf2/squeezenet1.1-cf2.md +++ b/models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md @@ -1,11 +1,11 @@ -# squeezenet1.1-cf2 +# squeezenet1.1-caffe2 ## Use Case and High-Level Description -This is an Caffe2\* version of `squeezenet1.1` model, designed to perform image classification. -This model was converted from Caffe\* to Caffe2\* fromat. +This is a Caffe2\* version of `squeezenet1.1` model, designed to perform image classification. +This model was converted from Caffe\* to Caffe2\* format. For details see repository , -paper +paper . ## Example @@ -16,7 +16,7 @@ paper | Type | Classification| | GFLOPs | 0.784 | | MParams | 1.235 | -| Source framework | Caffe2\* | +| Source framework | Caffe2\* | ## Accuracy @@ -34,7 +34,7 @@ Image, name - `data`, shape - `1,3,227,227`, format is `B,C,H,W` where: - `W` - width Channel order is `BGR`. -Mean values - [103.96,116.78,123.68] +Mean values - [103.96,116.78,123.68]. ### Converted model @@ -54,14 +54,14 @@ Channel order is `BGR`. 
Object classifier according to ImageNet classes, name - `softmaxout`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ### Converted model Object classifier according to ImageNet classes, name - `softmaxout`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ## Legal Information diff --git a/models/public/vgg19-cf2/model.yml b/models/public/vgg19-caffe2/model.yml similarity index 62% rename from models/public/vgg19-cf2/model.yml rename to models/public/vgg19-caffe2/model.yml index 235461abeb2..a551785accd 100644 --- a/models/public/vgg19-cf2/model.yml +++ b/models/public/vgg19-caffe2/model.yml @@ -13,31 +13,31 @@ # limitations under the License. description: >- - This is an Caffe2\* version of `vgg19` model, designed to perform image classification. - This model was converted from Caffe\* to Caffe2\* fromat. + This is a Caffe2\* version of `vgg19` model, designed to perform image classification. + This model was converted from Caffe\* to Caffe2\* format. For details see repository , - paper + paper . task_type: classification files: - name: predict_net.pb size: 2862 sha256: ebb8608fe80ee8bce096a60ecf3e6e8442eb118aaa0fa77d5d58b5dcff7dfb5f - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/vgg19/predict_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/vgg19/predict_net.pb - name: init_net.pb size: 718338501 sha256: 492dbbbc7dd23cb052c66964714759f88370f3fa8542aa32556e93abc7beb69f - source: https://media.githubusercontent.com/media/caffe2/models/ad1087994835832df28f08d486b53abc4a92b183/vgg19/init_net.pb + source: https://s3.amazonaws.com/download.caffe2.ai/models/vgg19/init_net.pb framework: caffe2 -caffe2_to_onnx: - - --model-name=vgg19-cf2 - - --predict-net-path=$dl_dir/predict_net.pb - - --init-net-path=$dl_dir/init_net.pb - - --input-shape=[1,3,224,224] +conversion_to_onnx_args: + - --model-path=$dl_dir/predict_net.pb + - --model-name=vgg19-caffe2 + - --weights=$dl_dir/init_net.pb + - --input-shape=1,3,224,224 - --input-names=data - - --output-file=$conv_dir/vgg19-cf2.onnx + - --output-file=$conv_dir/vgg19-caffe2.onnx model_optimizer_args: - --input_shape=[1,3,224,224] - --input=data - --mean_values=data[103.939,116.779,123.68] - - --input_model=$conv_dir/vgg19-cf2.onnx + - --input_model=$conv_dir/vgg19-caffe2.onnx license: https://raw.githubusercontent.com/caffe2/models/master/LICENSE \ No newline at end of file diff --git a/models/public/vgg19-cf2/vgg19-cf2.md b/models/public/vgg19-caffe2/vgg19-caffe2.md similarity index 73% rename from models/public/vgg19-cf2/vgg19-cf2.md rename to models/public/vgg19-caffe2/vgg19-caffe2.md index 830f7c0fb18..48778b6ac29 100644 --- a/models/public/vgg19-cf2/vgg19-cf2.md +++ b/models/public/vgg19-caffe2/vgg19-caffe2.md @@ -1,11 +1,11 @@ -# vgg19-cf2 +# vgg19-caffe2 ## Use Case and High-Level Description -This is an Caffe2\* version of `vgg19` model, designed to perform image classification. -This model was converted from Caffe\* to Caffe2\* fromat. +This is a Caffe2\* version of `vgg19` model, designed to perform image classification. +This model was converted from Caffe\* to Caffe2\* format. For details see repository , -paper +paper . 
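The `conversion_to_onnx_args` introduced above map one-to-one onto the rewritten `caffe2_to_onnx.py` entry point that appears later in this patch series. A minimal sketch of the equivalent direct call for `vgg19-caffe2`, assuming the script is importable and the downloaded `.pb` files sit in the current directory:

```python
# Sketch only: manual equivalent of the vgg19-caffe2 conversion_to_onnx_args,
# calling the convert_to_onnx() function from the rewritten caffe2_to_onnx.py.
from pathlib import Path

from caffe2_to_onnx import convert_to_onnx  # assumes tools/downloader is on sys.path

convert_to_onnx(
    Path('predict_net.pb'),         # --model-path
    Path('init_net.pb'),            # --weights
    [1, 3, 224, 224],               # --input-shape
    'data',                         # --input-names
    Path('vgg19-caffe2.onnx'),      # --output-file
    model_name='vgg19-caffe2',      # --model-name
)
```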
## Example ## Specification @@ -15,7 +15,7 @@ paper | Type | Classification| | GFLOPs | 39.3 | | MParams | 143.667 | -| Source framework | Caffe2\* | +| Source framework | Caffe2\* | ## Accuracy @@ -33,7 +33,7 @@ Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `W` - width Channel order is `BGR`. -Mean values - [103.939, 116.779, 123.68] +Mean values - [103.939, 116.779, 123.68]. ### Converted model @@ -53,14 +53,14 @@ Channel order is `BGR`. Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ### Converted model Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ## Legal Information diff --git a/tools/accuracy_checker/configs/densenet121-caffe2.yml b/tools/accuracy_checker/configs/densenet121-caffe2.yml new file mode 100644 index 00000000000..29c44c2fde9 --- /dev/null +++ b/tools/accuracy_checker/configs/densenet121-caffe2.yml @@ -0,0 +1,46 @@ +models: + + - name: densenet-121-caffe2 + launchers: + - framework: onnx_runtime + model: public/densenet-121-caffe2/densenet-121.onnx + adapter: classification + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: greater + - type: crop + size: 224 + - type: normalization + mean: 103.94,116.78,123.68 + std: 58.8235294 + + - name: densenet-121-caffe2 + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/densenet-121-caffe2/FP32/densenet-121-caffe2.xml + weights: public/densenet-121-caffe2/FP32/densenet-121-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/densenet-121-caffe2/FP16/densenet-121-caffe2.xml + weights: public/densenet-121-caffe2/FP16/densenet-121-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: greater + - type: crop + size: 224 diff --git a/tools/accuracy_checker/configs/resnet-50-caffe2.yml b/tools/accuracy_checker/configs/resnet-50-caffe2.yml new file mode 100644 index 00000000000..512d7d737c9 --- /dev/null +++ b/tools/accuracy_checker/configs/resnet-50-caffe2.yml @@ -0,0 +1,46 @@ +models: + + - name: resnet-50-caffe2 + launchers: + - framework: onnx_runtime + model: public/resnet-50-caffe2/resnet-50-caffe2.onnx + adapter: classification + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: greater + - type: crop + size: 224 + - type: normalization + mean: 103.53, 116.28, 123.675 + std: 57.375, 57.12, 58.395 + + - name: resnet-50-caffe2 + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/resnet-50-caffe2/FP32/resnet-50-caffe2.xml + weights: public/resnet-50-caffe2/FP32/resnet-50-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/resnet-50-caffe2/FP16/resnet-50-caffe2.xml + weights: public/resnet-50-caffe2/FP16/resnet-50-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: 
greater + - type: crop + size: 224 diff --git a/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml b/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml new file mode 100644 index 00000000000..8bcbe35f181 --- /dev/null +++ b/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml @@ -0,0 +1,44 @@ +models: + - name: squeezenet1.1-caffe2 + + launchers: + - framework: onnx_runtime + model: public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.onnx + adapter: classification + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + - type: crop + size: 227 + - type: normalization + mean: 103.96,116.78,123.68 + + - name: squeezenet1.1-caffe2 + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/squeezenet1.1-caffe2/FP32/squeezenet1.1-caffe2.xml + weights: public/squeezenet1.1-caffe2/FP32/squeezenet1.1-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/squeezenet1.1-caffe2/FP16/squeezenet1.1-caffe2.xml + weights: public/squeezenet1.1-caffe2/FP16/squeezenet1.1-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: greater + - type: crop + size: 227 diff --git a/tools/accuracy_checker/configs/vgg19-caffe2.yml b/tools/accuracy_checker/configs/vgg19-caffe2.yml new file mode 100644 index 00000000000..75319cd8ab3 --- /dev/null +++ b/tools/accuracy_checker/configs/vgg19-caffe2.yml @@ -0,0 +1,45 @@ +models: + + - name: vgg19-caffe2 + launchers: + - framework: onnx_runtime + model: public/vgg19-caffe2/vgg19-caffe2.onnx + adapter: classification + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: greater + - type: crop + size: 224 + - type: normalization + mean: 103.939, 116.779, 123.68 + + - name: vgg19-caffe2 + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/vgg19-caffe2/FP32/vgg19-caffe2.xml + weights: public/vgg19-caffe2/FP32/vgg19-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/vgg19-caffe2/FP16/vgg19-caffe2.xml + weights: public/vgg19-caffe2/FP16/vgg19-caffe2.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: resize + size: 256 + aspect_ratio_scale: greater + - type: crop + size: 224 diff --git a/tools/downloader/README.md b/tools/downloader/README.md index 7dab59ebcf6..3f74984f621 100644 --- a/tools/downloader/README.md +++ b/tools/downloader/README.md @@ -32,12 +32,17 @@ For the model converter, you will also need to install the OpenVINO™ toolkit and the prerequisite libraries for Model Optimizer. See the [OpenVINO toolkit documentation](https://docs.openvinotoolkit.org/) for details. -If you using models from PyTorch framework, you will also need to use intermediate -conversion to ONNX format. To use automatic conversion install additional dependencies: +If you using models from PyTorch or Caffe2 framework, you will also need to use intermediate +conversion to ONNX format. To use automatic conversion install additional dependencies. 
+For models from PyTorch: ```sh python3 -mpip install --user -r ./requirements-pytorch.in ``` +For models from Caffe2: +```sh +python3 -mpip install --user -r ./requirements-caffe2.in +``` When running the model downloader with Python 3.5.x on macOS, you may encounter an error similar to the following: @@ -213,7 +218,7 @@ The basic usage is to run the script like this: ``` This will convert all models into the Inference Engine IR format. Models that -were originally in that format are ignored. Models in PyTorch's format will be +were originally in that format are ignored. Models in PyTorch and Caffe2 formats will be converted in ONNX format first. The current directory must be the root of a download tree created by the model @@ -312,8 +317,8 @@ describing a single model. Each such object has the following keys: * `description`: text describing the model. Paragraphs are separated by line feed characters. * `framework`: a string identifying the framework whose format the model is downloaded in. - Current possible values are `dldt` (Inference Engine IR), `caffe`, `mxnet`, `pytorch` and `tf` (TensorFlow). - Additional possible values might be added in the future. + Current possible values are `dldt` (Inference Engine IR), `caffe`, `caffe2`, `mxnet`, `pytorch` + and `tf` (TensorFlow). Additional possible values might be added in the future. * `license_url`: an URL for the license that the model is distributed under. diff --git a/tools/downloader/caffe2_to_onnx.py b/tools/downloader/caffe2_to_onnx.py index 2fc909529d6..0438552a734 100644 --- a/tools/downloader/caffe2_to_onnx.py +++ b/tools/downloader/caffe2_to_onnx.py @@ -1,9 +1,6 @@ import argparse from pathlib import Path import sys -import json -import os -import re import onnx from caffe2.python.onnx.frontend import Caffe2Frontend @@ -13,19 +10,15 @@ def positive_int_arg(values): """Check positive integer type for input argument""" result = [] - shapes = re.findall(r'[(\[]([0-9, -]+)[)\]]', values) - for shape in shapes: - single_shape = [] - for value in shape.split(','): - try: - ivalue = int(value) - if ivalue < 0: - raise argparse.ArgumentTypeError('Argument must be a positive integer') - single_shape.append(ivalue) - except Exception as exc: - print(exc) - sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) - result.append(single_shape) + for value in values.split(','): + try: + ivalue = int(value) + if ivalue < 0: + raise argparse.ArgumentTypeError('Argument must be a positive integer') + result.append(ivalue) + except Exception as exc: + print(exc) + sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) return result def parse_args(): @@ -34,41 +27,35 @@ def parse_args(): parser = argparse.ArgumentParser(description='Conversion of pretrained models from Caffe2 to ONNX') parser.add_argument('--model-name', type=str, required=True, - help='Model to convert. 
May be class name or name of constructor function') + help='Model name to convert.') parser.add_argument('--output-file', type=Path, required=True, help='Path to the output ONNX model') - parser.add_argument('--predict-net-path', type=str, required=True, + parser.add_argument('--model-path', type=Path, required=True, help='Path to predict_net .pb file') - parser.add_argument('--init-net-path', type=str, required=True, + parser.add_argument('--weights', type=Path, required=True, help='Path to init_net .pb file') parser.add_argument('--input-shape', metavar='INPUT_DIM', type=positive_int_arg, required=True, help='Shape of the input blob') - parser.add_argument('--input-names', type=str, + parser.add_argument('--input-names', type=str, required=True, help='Comma separated names of the input layers') return parser.parse_args() -def load_model(predict_net_path, init_net_path): - predict_net = caffe2_pb2.NetDef() - with open(predict_net_path, 'rb') as file: - predict_net.ParseFromString(file.read()) +def convert_to_onnx(predict_net_path, init_net_path, input_shape, input_names, output_file, model_name=''): + """Convert Caffe2 model to ONNX and check the resulting onnx model""" - init_net = caffe2_pb2.NetDef() - with open(init_net_path, 'rb') as file: - init_net.ParseFromString(file.read()) + output_file.parent.mkdir(parents=True, exist_ok=True) - return predict_net, init_net + data_type = onnx.TensorProto.FLOAT + value_info = {input_names: [data_type, input_shape]} -def convert_to_onnx(predict_net, init_net, input_shape, input_names, output_file, model_name=''): - """Convert Caffe2 model to ONNX and check the resulting onnx model""" + predict_net = caffe2_pb2.NetDef() + predict_net.ParseFromString(predict_net_path.read_bytes()) - output_file.parent.mkdir(parents=True, exist_ok=True) - value_info = {} - input_names = input_names.split(',') - for name, shape in zip(input_names, input_shape): - value_info[name] = [shape[0], shape] - if predict_net.name == "": - predict_net.name = model_name + predict_net.name = model_name + + init_net = caffe2_pb2.NetDef() + init_net.ParseFromString(init_net_path.read_bytes()) onnx_model = Caffe2Frontend.caffe2_net_to_onnx_model( predict_net, @@ -78,16 +65,16 @@ def convert_to_onnx(predict_net, init_net, input_shape, input_names, output_file try: onnx.checker.check_model(onnx_model) print('ONNX check passed successfully.') - with open(str(output_file), 'wb') as f: - f.write(onnx_model.SerializeToString()) + output_file.write_bytes(onnx_model.SerializeToString()) except onnx.onnx_cpp2py_export.checker.ValidationError as exc: sys.exit('ONNX check failed with error: ' + str(exc)) def main(): args = parse_args() - predict_net, init_net = load_model(args.predict_net_path, args.init_net_path) - convert_to_onnx(predict_net, init_net, args.input_shape, args.input_names, args.output_file, args.model_name) + convert_to_onnx(args.model_path, args.weights, args.input_shape, + args.input_names, args.output_file, args.model_name + ) if __name__ == '__main__': - main() \ No newline at end of file + main() diff --git a/tools/downloader/common.py b/tools/downloader/common.py index 1af2a665652..53481f15453 100644 --- a/tools/downloader/common.py +++ b/tools/downloader/common.py @@ -31,6 +31,7 @@ # make sure to update the documentation if you modify these KNOWN_FRAMEWORKS = { 'caffe': None, + 'caffe2': 'caffe2_to_onnx.py', 'dldt': None, 'mxnet': None, 'pytorch': 'pytorch_to_onnx.py', diff --git a/tools/downloader/converter.py b/tools/downloader/converter.py index 
245dd0819b1..defe295a863 100755 --- a/tools/downloader/converter.py +++ b/tools/downloader/converter.py @@ -155,10 +155,6 @@ def convert(model, do_prefix_stdout=True): if not convert_to_onnx(model, output_dir, args, stdout_prefix): return False model_format = 'onnx' - if model.caffe2_to_onnx_args: - if not convert_to_onnx(model, output_dir, args, stdout_prefix): - return False - model_format = 'onnx' expanded_mo_args = [ string.Template(arg).substitute(dl_dir=args.download_dir / model.subdirectory, diff --git a/tools/downloader/license.txt b/tools/downloader/license.txt index 29c7e63941b..d105a8464c7 100644 --- a/tools/downloader/license.txt +++ b/tools/downloader/license.txt @@ -44,6 +44,214 @@ License terms: ================================================================================================== +* squeezenet1.1-caffe2, resnet-50-caffe2, vgg19-caffe2, densenet-121-caffe2 - Caffe2 networks converted from Caffe https://github.com/caffe2/models + +License terms: + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + +================================================================================================== + * squeezenet1.0, squeezenet1.1 - SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and $<$0.5MB model size https://github.com/DeepScale/SqueezeNet License terms: diff --git a/tools/downloader/requirements-caffe2.in b/tools/downloader/requirements-caffe2.in new file mode 100644 index 00000000000..5f1ceb698af --- /dev/null +++ b/tools/downloader/requirements-caffe2.in @@ -0,0 +1,3 @@ +future +onnx +torch diff --git a/tools/downloader/tests/representative-models.lst b/tools/downloader/tests/representative-models.lst index 81b4005e76d..f20286c989d 100644 --- a/tools/downloader/tests/representative-models.lst +++ b/tools/downloader/tests/representative-models.lst @@ -4,5 +4,6 @@ mobilenet-v1-0.25-128 # TensorFlow, HTTP downloads mobilenet-v2-pytorch # PyTorch (external module), Google Drive downloads mtcnn-p # Caffe, HTTPS downloads, regex replacement octave-densenet-121-0.125 # MXNet, archive unpacking +resnet-50-caffe2 # Caffe2 resnet-50-pytorch # PyTorch (torchvision) single-image-super-resolution-1032 # DLDT From bbac9c1317849615de29583bbb3f75adf905ab07 Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 10 Oct 2019 14:25:07 +0300 Subject: [PATCH 096/927] AC: Clean up unused API (#500) --- .../quantization_model_evaluator.py | 20 -------- .../statistics_collector/__init__.py | 5 -- .../statistics_collector.py | 48 ------------------- 3 files changed, 73 deletions(-) delete mode 100644 tools/accuracy_checker/accuracy_checker/statistics_collector/__init__.py delete mode 100644 tools/accuracy_checker/accuracy_checker/statistics_collector/statistics_collector.py diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index 548ba4cb8fb..6263846904a 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -26,7 +26,6 @@ from ..adapters import create_adapter from ..config import ConfigError from ..data_readers import BaseReader -from ..statistics_collector import StatisticsCollector from ..progress_reporters import ProgressReporter @@ -38,7 +37,6 @@ def __init__( self.input_feeder = 
None self.adapter = adapter self.dataset_config = dataset_config - self.stat_collector = None self.preprocessor = None self.dataset = None self.postprocessor = None @@ -71,7 +69,6 @@ def _get_batch_input(self, batch_input, batch_annotation): def process_dataset_async( self, nreq=2, - statistics_functors_maping=None, subset=None, num_images=None, check_progress=False, @@ -80,8 +77,6 @@ def process_dataset_async( ): def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, adapter, raw_outputs_callback): - if self.stat_collector: - self.stat_collector.process_batch(batch_predictions) if raw_outputs_callback: raw_outputs_callback(batch_predictions) if adapter: @@ -99,7 +94,6 @@ def _create_subset(subset, num_images): self.select_dataset(dataset_tag) self.dataset.batch = self.launcher.batch - self.stat_collector = None progress_reporter = None _create_subset(subset, num_images) @@ -110,9 +104,6 @@ def _create_subset(subset, num_images): dataset_iterator = iter(enumerate(self.dataset)) if self.launcher.num_requests != nreq: self.launcher.num_requests = nreq - - if statistics_functors_maping: - self.stat_collector = StatisticsCollector(statistics_functors_maping, self.launcher.batch) free_irs = self.launcher.infer_requests queued_irs = [] wait_time = 0.01 @@ -154,7 +145,6 @@ def select_dataset(self, dataset_tag): def process_dataset( self, - statistics_functors_maping=None, subset=None, num_images=None, check_progress=False, @@ -166,9 +156,6 @@ def process_dataset( self.dataset.batch = self.launcher.batch progress_reporter = None - if statistics_functors_maping: - self.stat_collector = StatisticsCollector(statistics_functors_maping, self.launcher.batch) - if subset is not None: self.dataset.make_subset(ids=subset) @@ -181,8 +168,6 @@ def process_dataset( for batch_id, (batch_annotation, batch_inputs, batch_identifiers) in enumerate(self.dataset): filled_inputs, batch_meta = self._get_batch_input(batch_inputs, batch_annotation) batch_predictions = self.launcher.predict(filled_inputs, batch_meta, **kwargs) - if self.stat_collector: - self.stat_collector.process_batch(batch_predictions) if self.adapter: self.adapter.output_blob = self.adapter.output_blob or self.launcher.output_blob batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta) @@ -277,11 +262,6 @@ def load_network_from_ir(self, xml_path, bin_path): def get_network(self): return self.launcher.network - def get_statistics(self): - if not self.stat_collector: - return None - return self.stat_collector.get_statistics() - def reset(self): if self.metric_executor: self.metric_executor.reset() diff --git a/tools/accuracy_checker/accuracy_checker/statistics_collector/__init__.py b/tools/accuracy_checker/accuracy_checker/statistics_collector/__init__.py deleted file mode 100644 index 160bbb83881..00000000000 --- a/tools/accuracy_checker/accuracy_checker/statistics_collector/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .statistics_collector import StatisticsCollector - -__all__ = [ - 'StatisticsCollector', -] diff --git a/tools/accuracy_checker/accuracy_checker/statistics_collector/statistics_collector.py b/tools/accuracy_checker/accuracy_checker/statistics_collector/statistics_collector.py deleted file mode 100644 index 31519db1992..00000000000 --- a/tools/accuracy_checker/accuracy_checker/statistics_collector/statistics_collector.py +++ /dev/null @@ -1,48 +0,0 @@ -import numpy as np - - -class Statistic: - def __init__(self, functor, batch_size): - self.iter_counter = 0 - self.state = 
np.array([]) - self.processor = functor - self.batch_size = batch_size - - def update(self, activation): - an = self.processor(activation) - if self.iter_counter == 0: - self.state = an - else: - self.state = (self.state * self.iter_counter + an) / (self.iter_counter + 1) - self.iter_counter += 1 - - def update_on_batch(self, batch_a): - an_shape = np.shape(batch_a) - if an_shape[0] != self.batch_size: - self.update(batch_a) - return - for activation in batch_a: - self.update(activation) - - -class StatisticsCollector: - def __init__(self, functors_mapping, batch=1): - self.statistics = {} - for layer_name, functors in functors_mapping.items(): - self.statistics[layer_name] = [Statistic(functor, batch) for functor in functors] - - def process_batch(self, outputs): - output_dict = outputs[0] - for layer_name, output in output_dict.items(): - if layer_name not in self.statistics: - continue - - for statistic in self.statistics[layer_name]: - statistic.update_on_batch(output) - - def get_statistics(self): - per_layer_statistics = {} - for layer_name, layer_statistics in self.statistics.items(): - per_layer_statistics[layer_name] = [statistic.state for statistic in layer_statistics] - - return per_layer_statistics From 3a31d628702b589530dc257d706b3185f7dc0136 Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 10 Oct 2019 15:23:33 +0300 Subject: [PATCH 097/927] fix resolve for non-existing file (#502) --- .../accuracy_checker/annotation_converters/convert.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py index a4e6f216e3a..49cfdfb8b89 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/convert.py @@ -147,14 +147,14 @@ def main(): def save_annotation(annotation, meta, annotation_file, meta_file): if annotation_file: - annotation_dir = annotation_file.resolve().parent + annotation_dir = annotation_file.parent if not annotation_dir.exists(): annotation_dir.mkdir(parents=True) with annotation_file.open('wb') as file: for representation in annotation: representation.dump(file) if meta_file and meta: - meta_dir = meta_file.resolve().parent + meta_dir = meta_file.parent if not meta_dir.exists(): meta_dir.mkdir(parents=True) with meta_file.open('wt') as file: From 4ee69aef6c0b7eb61967c72585e342b8c97b13b1 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 11 Oct 2019 10:21:13 +0300 Subject: [PATCH 098/927] fix ready ir processing cycle (#506) --- .../accuracy_checker/evaluators/model_evaluator.py | 3 ++- .../evaluators/quantization_model_evaluator.py | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py index cbf4469ae40..a14ff065ba9 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py @@ -123,7 +123,8 @@ def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, ready_irs, queued_irs = self._wait_for_any(queued_irs) if ready_irs: wait_time = 0.01 - for batch_id, batch_annotation, batch_meta, batch_predictions, ir in ready_irs: + while ready_irs: + batch_id, batch_annotation, batch_meta, batch_predictions, ir = ready_irs.pop(0) batch_identifiers = 
[annotation.identifier for annotation in batch_annotation] batch_predictions = _process_ready_predictions( batch_predictions, batch_identifiers, batch_meta, self.adapter, kwargs.get('output_callback') diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index 6263846904a..ba443475345 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -115,7 +115,8 @@ def _create_subset(subset, num_images): ready_irs, queued_irs = self._wait_for_any(queued_irs) if ready_irs: wait_time = 0.01 - for batch_id, batch_annotation, batch_identifiers, batch_meta, batch_predictions, ir in ready_irs: + while ready_irs: + batch_id, batch_annotation, batch_identifiers, batch_meta, batch_predictions, ir = ready_irs.pop(0) batch_predictions = _process_ready_predictions( batch_predictions, batch_identifiers, batch_meta, self.adapter, kwargs.get('output_callback') ) From 6d4cf566630c379e86ca7e00b2db719a084ab0b4 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 11 Oct 2019 10:48:59 +0300 Subject: [PATCH 099/927] AC: overload commandline parameters if they already merged with config (#507) --- .../accuracy_checker/accuracy_checker/config/config_reader.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index 991e9d76ff2..0c7e4692926 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -371,16 +371,19 @@ def merge_dlsdk_launcher_args(arguments, launcher_entry, update_launcher_entry): if 'bitstream' not in launcher_entry and 'bitstreams' in arguments and arguments.bitstreams: if not arguments.bitstreams.is_dir(): launcher_entry['bitstream'] = arguments.bitstreams + arguments.bitstreams = None if 'cpu_extensions' not in launcher_entry and 'extensions' in arguments and arguments.extensions: extensions = arguments.extensions if not extensions.is_dir() or extensions.name == 'AUTO': launcher_entry['cpu_extensions'] = arguments.extensions + arguments.extensions = None if 'affinity_map' not in launcher_entry and 'affinity_map' in arguments and arguments.affinity_map: am = arguments.affinity_map if not am.is_dir(): launcher_entry['affinity_map'] = arguments.affinity_map + arguments.affinity_map = None return launcher_entry From 6ce9c63042c057b9e2c18fc975b2ad965b80cc9a Mon Sep 17 00:00:00 2001 From: ezamalie Date: Mon, 2 Sep 2019 13:17:08 +0300 Subject: [PATCH 100/927] Added EfficientNet B0,B5,B7 from PyTorch --- .../efficientnet-b0-pt/efficientnet-b0-pt.md | 78 +++++++ models/public/efficientnet-b0-pt/model.yml | 73 +++++++ .../efficientnet-b5-pt/efficientnet-b5-pt.md | 78 +++++++ models/public/efficientnet-b5-pt/model.yml | 79 +++++++ .../efficientnet-b7-pt/efficientnet-b7-pt.md | 78 +++++++ models/public/efficientnet-b7-pt/model.yml | 79 +++++++ models/public/index.md | 3 + tools/downloader/license.txt | 206 ++++++++++++++++++ 8 files changed, 674 insertions(+) create mode 100644 models/public/efficientnet-b0-pt/efficientnet-b0-pt.md create mode 100644 models/public/efficientnet-b0-pt/model.yml create mode 100644 models/public/efficientnet-b5-pt/efficientnet-b5-pt.md create mode 100644 models/public/efficientnet-b5-pt/model.yml create mode 100644 
models/public/efficientnet-b7-pt/efficientnet-b7-pt.md create mode 100644 models/public/efficientnet-b7-pt/model.yml diff --git a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md b/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md new file mode 100644 index 00000000000..951d2cc0fac --- /dev/null +++ b/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md @@ -0,0 +1,78 @@ +# efficientnet-b0-pt + +## Use Case and High-Level Description + +The `efficientnet-b0-pt` model is one of the [efficientnet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). + +The model input is a blob that consists of a single image of 1x3x224x224 in RGB +order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] +before passing the image blob into the network. In addition, values must be multiplied +by [58.395,57.12,57.375]. + +The model output for `efficientnet-b0-pt` is the typical object classifier output for +the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.819 | +| MParams | 5.268 | +| Source framework | PyTorch\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 76.91 | 76.91 | +| Top 5 | 93.21 | 93.21 | + +## Performance + +## Input + +### Original model + +Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `RGB`. +Mean values - [123.675,116.28,103.53], scale value - [58.395,57.12,57.375] + +### Converted model + +Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR` + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE) diff --git a/models/public/efficientnet-b0-pt/model.yml b/models/public/efficientnet-b0-pt/model.yml new file mode 100644 index 00000000000..552e8410e07 --- /dev/null +++ b/models/public/efficientnet-b0-pt/model.yml @@ -0,0 +1,73 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b0-pt` model is one of the efficientnet + group of models designed to perform image classification. This model was pretrained + in PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image + database. For details about this family of models, check out the repository . + + The model input is a blob that consists of a single image of 1x3x224x224 in RGB + order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] + before passing the image blob into the network. In addition, values must be multiplied + by [58.395,57.12,57.375]. + + The model output for `efficientnet-b0-pt` is the typical object classifier output + for the 1000 different classifications matching those in the ImageNet database. +task_type: classification +files: + - name: model/gen_efficientnet.py + size: 40997 + sha256: 8613d48ca74611d3566d8c02cbf7c92aad6b16a708ffcf21183a7014fecdec09 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/gen_efficientnet.py + - name: model/efficientnet_builder.py + size: 18446 + sha256: 69bb2adc49dc79c8860f36acca910bad6733e23b46ed80551248b90141d5e1b5 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/efficientnet_builder.py + - name: model/helpers.py + size: 1097 + sha256: 0415f198b8a87cb34c9f9aed79f267043010fbd197e27df805bab9d070da82ed + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/helpers.py + - name: model/conv2d_helpers.py + size: 6175 + sha256: e6ba7e878dd28c6d0ccc9707205bf6b58bea664d7fee820d7101a6a8e48990a9 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/conv2d_helpers.py + - name: model/__init__.py + size: 32 + sha256: e0bedda5f5e949b8ace7d4f5cdf80e7e664c0b0a935486b6152078fa61c80c1b + source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/master/gen_efficientnet/__init__.py + - name: efficientnet-b0.pth + size: 21376958 + sha256: d6904d92f92ccdca67c9717f9d119392d658577f99e5ce021b57d157985783db + source: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b0-d6904d92.pth +pytorch_to_onnx: + - --model-path=$dl_dir + - --model-name=efficientnet_b0 + - --import-module=model + - --weights=$dl_dir/efficientnet-b0.pth + - --input-shape=1,3,224,224 + - --input-names=data + - --output-names=prob + - --output-file=$conv_dir/efficientnet-b0.onnx + - --model-params=pretrained=False +model_optimizer_args: + - --reverse_input_channels + - --input_shape=[1,3,224,224] + - --input=data + - --mean_values=data[123.675,116.28,103.53] + - --scale_values=data[58.395,57.12,57.375] + - --output=prob + - --input_model=$conv_dir/efficientnet-b0.onnx +framework: pytorch +license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE diff --git a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md b/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md new file mode 100644 index 00000000000..4ccd8c25269 --- /dev/null +++ b/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md @@ -0,0 +1,78 @@ +# efficientnet-b5-pt + +## Use Case and High-Level Description + +The `efficientnet-b5-pt` model is one of the [efficientnet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. 
This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). + +The model input is a blob that consists of a single image of 1x3x456x456 in RGB +order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] +before passing the image blob into the network. In addition, values must be multiplied +by [58.395,57.12,57.375]. + +The model output for `efficientnet-b5-pt` is the typical object classifier output for +the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 21.252 | +| MParams | 30.303 | +| Source framework | PyTorch\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 83.69 | 83.69 | +| Top 5 | 96.71 | 96.71 | + +## Performance + +## Input + +### Original model + +Image, name - `data`, shape - `1,3,456,456`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `RGB`. +Mean values - [123.675,116.28,103.53], scale value - [58.395,57.12,57.375] + +### Converted model + +Image, name - `data`, shape - `1,3,456,456`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR` + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE) diff --git a/models/public/efficientnet-b5-pt/model.yml b/models/public/efficientnet-b5-pt/model.yml new file mode 100644 index 00000000000..a14c335704c --- /dev/null +++ b/models/public/efficientnet-b5-pt/model.yml @@ -0,0 +1,79 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b5-pt` model is one of the efficientnet + group of models designed to perform image classification. This model was pretrained + in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models + have been pretrained on the ImageNet image database. For details about this family + of models, check out the repository . + + The model input is a blob that consists of a single image of 1x3x456x456 in RGB + order. 
The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] + before passing the image blob into the network. In addition, values must be multiplied + by [58.395,57.12,57.375]. + + The model output for `efficientnet-b5-pt` is the typical object classifier output + for the 1000 different classifications matching those in the ImageNet database. +task_type: classification +files: + - name: model/gen_efficientnet.py + size: 40997 + sha256: 8613d48ca74611d3566d8c02cbf7c92aad6b16a708ffcf21183a7014fecdec09 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/gen_efficientnet.py + - name: model/efficientnet_builder.py + size: 18446 + sha256: 69bb2adc49dc79c8860f36acca910bad6733e23b46ed80551248b90141d5e1b5 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/efficientnet_builder.py + - name: model/helpers.py + size: 1097 + sha256: 0415f198b8a87cb34c9f9aed79f267043010fbd197e27df805bab9d070da82ed + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/helpers.py + - name: model/conv2d_helpers.py + size: 6175 + sha256: e6ba7e878dd28c6d0ccc9707205bf6b58bea664d7fee820d7101a6a8e48990a9 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/conv2d_helpers.py + - name: model/__init__.py + size: 32 + sha256: e0bedda5f5e949b8ace7d4f5cdf80e7e664c0b0a935486b6152078fa61c80c1b + source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/master/gen_efficientnet/__init__.py + - name: tf-efficientnet-b5.pth + size: 122398414 + sha256: 99018a74e61e5948a955ebfaa2b02ba9abe7bb2e6b7f3d2dfe100e07e103bbdb + source: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_aa-99018a74.pth +postprocessing: + - $type: regex_replace + file: model/conv2d_helpers.py + pattern: '_EXPORTABLE = False' + replacement: '_EXPORTABLE = True' +pytorch_to_onnx: + - --model-path=$dl_dir + - --model-name=tf_efficientnet_b5 + - --import-module=model + - --weights=$dl_dir/tf-efficientnet-b5.pth + - --input-shape=1,3,456,456 + - --input-names=data + - --output-names=prob + - --output-file=$conv_dir/efficientnet-b5.onnx + - --model-params=pretrained=False +model_optimizer_args: + - --reverse_input_channels + - --input_shape=[1,3,456,456] + - --input=data + - --mean_values=data[123.675,116.28,103.53] + - --scale_values=data[58.395,57.12,57.375] + - --output=prob + - --input_model=$conv_dir/efficientnet-b5.onnx +framework: pytorch +license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE diff --git a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md b/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md new file mode 100644 index 00000000000..13cb5d526fc --- /dev/null +++ b/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md @@ -0,0 +1,78 @@ +# efficientnet-b7-pt + +## Use Case and High-Level Description + +The `efficientnet-b7-pt` model is one of the [efficientnet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). + +The model input is a blob that consists of a single image of 1x3x600x600 in RGB +order. 
The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] +before passing the image blob into the network. In addition, values must be multiplied +by [58.395,57.12,57.375]. + +The model output for `efficientnet-b7-pt` is the typical object classifier output for +the 1000 different classifications matching those in the ImageNet database. + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 77.618 | +| MParams | 66.193 | +| Source framework | PyTorch\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 84.42% | 84.42% | +| Top 5 | 96.91% | 96.91% | + +## Performance + +## Input + +### Original model + +Image, name - `data`, shape - `1,3,600,600`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `RGB`. +Mean values - [123.675,116.28,103.53], scale value - [58.395,57.12,57.375] + +### Converted model + +Image, name - `data`, shape - `1,3,600,600`, format is `B,C,H,W` where: + +- `B` - batch size +- `C` - channel +- `H` - height +- `W` - width + +Channel order is `BGR` + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - Predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE) diff --git a/models/public/efficientnet-b7-pt/model.yml b/models/public/efficientnet-b7-pt/model.yml new file mode 100644 index 00000000000..3f233d82372 --- /dev/null +++ b/models/public/efficientnet-b7-pt/model.yml @@ -0,0 +1,79 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b7-pt` model is one of the efficientnet + group of models designed to perform image classification. This model was pretrained + in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models + have been pretrained on the ImageNet image database. For details about this family + of models, check out the repository . + + The model input is a blob that consists of a single image of 1x3x600x600 in RGB + order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] + before passing the image blob into the network. In addition, values must be multiplied + by [58.395,57.12,57.375]. + + The model output for `efficientnet-b7-pt` is the typical object classifier output + for the 1000 different classifications matching those in the ImageNet database. 
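For reference, Model Optimizer's `--scale_values` divides the input by the listed values, so the equivalent manual preprocessing for the original network is `(image - mean) / scale`. A simplified sketch, assuming OpenCV is available and ignoring the exact resize/crop policy; the file name is a placeholder:

```python
# Simplified sketch (not part of the toolkit): preprocessing for the original
# efficientnet-b7 network with the mean/scale values listed above.
import cv2
import numpy as np

mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
scale = np.array([58.395, 57.12, 57.375], dtype=np.float32)

image = cv2.imread('example.jpg')                      # BGR, HxWx3
image = cv2.resize(image, (600, 600))
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
blob = (image - mean) / scale                          # per-channel normalization
blob = blob.transpose(2, 0, 1)[np.newaxis]             # -> 1x3x600x600 (B,C,H,W)
```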
+task_type: classification +files: + - name: model/gen_efficientnet.py + size: 40997 + sha256: 8613d48ca74611d3566d8c02cbf7c92aad6b16a708ffcf21183a7014fecdec09 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/gen_efficientnet.py + - name: model/efficientnet_builder.py + size: 18446 + sha256: 69bb2adc49dc79c8860f36acca910bad6733e23b46ed80551248b90141d5e1b5 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/efficientnet_builder.py + - name: model/helpers.py + size: 1097 + sha256: 0415f198b8a87cb34c9f9aed79f267043010fbd197e27df805bab9d070da82ed + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/helpers.py + - name: model/conv2d_helpers.py + size: 6175 + sha256: e6ba7e878dd28c6d0ccc9707205bf6b58bea664d7fee820d7101a6a8e48990a9 + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/conv2d_helpers.py + - name: model/__init__.py + size: 32 + sha256: e0bedda5f5e949b8ace7d4f5cdf80e7e664c0b0a935486b6152078fa61c80c1b + source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/master/gen_efficientnet/__init__.py + - name: tf-efficientnet-b7.pth + size: 266843942 + sha256: 076e3472fb198ec7c3091aecc73ff205bcca4a114f5862e2297a4c2720c91826 + source: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_aa-076e3472.pth +postprocessing: + - $type: regex_replace + file: model/conv2d_helpers.py + pattern: '_EXPORTABLE = False' + replacement: '_EXPORTABLE = True' +pytorch_to_onnx: + - --model-path=$dl_dir + - --model-name=tf_efficientnet_b7 + - --import-module=model + - --weights=$dl_dir/tf-efficientnet-b7.pth + - --input-shape=1,3,600,600 + - --input-names=data + - --output-names=prob + - --output-file=$conv_dir/efficientnet-b7.onnx + - --model-params=pretrained=False +model_optimizer_args: + - --reverse_input_channels + - --input_shape=[1,3,600,600] + - --input=data + - --mean_values=data[123.675,116.28,103.53] + - --scale_values=data[58.395,57.12,57.375] + - --output=prob + - --input_model=$conv_dir/efficientnet-b7.onnx +framework: pytorch +license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE diff --git a/models/public/index.md b/models/public/index.md index 6a1d273dd28..5fb51671359 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -17,6 +17,9 @@ The models can be downloaded via Model Downloader | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161
densenet-161-tf | | 14.128~15.561 | 28.666 | | DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)
[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169
densenet-169-tf | | 6.16~6.788 | 14.139 | | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | +| EfficientNet B0 | [PyTorch\*](./efficientnet-b0-pt/efficientnet-b0-pt.md) | efficientnet-b0-pt | 76.91/93.21 | 0.819 | 5.268 | +| EfficientNet B5 | [PyTorch\*](./efficientnet-b5-pt/efficientnet-b5-pt.md) | efficientnet-b5-pt | 83.69/96.71 | 21.252 | 30.303 | +| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pt/efficientnet-b7-pt.md) | efficientnet-b7-pt | 84.42/96.91 | 77.618 | 66.193 | | Inception (GoogleNet) V1 | [Caffe\*](./googlenet-v1/googlenet-v1.md) | googlenet-v1 | | 3.266 | 6.999 | | Inception (GoogleNet) V2 | [Caffe\*](./googlenet-v2/googlenet-v2.md) | googlenet-v2 | | 4.058 | 11.185 | | Inception (GoogleNet) V3 | [Caffe\*](./googlenet-v3/googlenet-v3.md)
[PyTorch\*](./googlenet-v3-pytorch/googlenet-v3-pytorch.md) | googlenet-v3
googlenet-v3-pytorch | | 11.469 | 23.817 | diff --git a/tools/downloader/license.txt b/tools/downloader/license.txt index 29c7e63941b..190af6dd575 100644 --- a/tools/downloader/license.txt +++ b/tools/downloader/license.txt @@ -3990,3 +3990,209 @@ License terms: SOFTWARE. ================================================================================================== + +* efficientnet-b0-pt, efficientnet-b5-pt, efficientnet-b7-pt - EfficientNet B0, B5, B7 - https://github.com/rwightman/gen-efficientnet-pytorch + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2019 Ross Wightman + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + +================================================================================================== From 568c89f285a3a67ecd63549a6a6c5cdb5822c8ca Mon Sep 17 00:00:00 2001 From: ezamalie Date: Mon, 2 Sep 2019 12:58:54 +0300 Subject: [PATCH 101/927] Conversion to onnx sript expanded for more models support --- tools/downloader/pytorch_to_onnx.py | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/tools/downloader/pytorch_to_onnx.py b/tools/downloader/pytorch_to_onnx.py index 80dec8fcc17..c1e68b6dd25 100644 --- a/tools/downloader/pytorch_to_onnx.py +++ b/tools/downloader/pytorch_to_onnx.py @@ -21,6 +21,14 @@ def positive_int_arg(values): sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) return result +def model_parameters(parameters): + if not parameters: + return None + model_params = dict((param, value) for param, value in (element.split('=') for element in parameters.split(','))) + for param in model_params.keys(): + model_params[param] = eval(model_params[param]) + return model_params + def parse_args(): """Parse input arguments""" @@ -46,11 +54,12 @@ def parse_args(): help='Space separated names of the input layers') parser.add_argument('--output-names', type=str, nargs='+', help='Space separated names of the output layers') - + parser.add_argument('--model-params', type=model_parameters, + help='Pairs "name"="value" of model constructor comma-separeted parameters') return parser.parse_args() -def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None): +def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None, model_params=None): """Import model and load pretrained weights""" if from_torchvision: @@ -72,7 +81,7 @@ def load_model(model_name, weights, from_torchvision=True, model_path=None, modu try: module = __import__(module_name) creator = getattr(module, model_name) - model = creator() + model = creator(**model_params) if model_params else creator() except ImportError as err: print('Module {} in {} doesn\'t exist. 
Check import path and name'.format(model_name, model_path)) sys.exit(err) @@ -95,6 +104,7 @@ def convert_to_onnx(model, input_shape, output_file, input_names, output_names): output_file.parent.mkdir(parents=True, exist_ok=True) model.eval() dummy_input = torch.randn(input_shape) + model(dummy_input) torch.onnx.export(model, dummy_input, str(output_file), verbose=False, input_names=input_names, output_names=output_names) @@ -109,7 +119,8 @@ def convert_to_onnx(model, input_shape, output_file, input_names, output_names): def main(): args = parse_args() - model = load_model(args.model_name, args.weights, args.from_torchvision, args.model_path, args.import_module) + model = load_model(args.model_name, args.weights, args.from_torchvision, + args.model_path, args.import_module, args.model_params) convert_to_onnx(model, args.input_shape, args.output_file, args.input_names, args.output_names) From 55959cefbdc6078cf05a1c799ccc2db86a486beb Mon Sep 17 00:00:00 2001 From: ezamalie Date: Tue, 24 Sep 2019 17:42:12 +0300 Subject: [PATCH 102/927] FIX --- models/public/efficientnet-b0-pt/efficientnet-b0-pt.md | 2 +- models/public/efficientnet-b5-pt/efficientnet-b5-pt.md | 2 +- models/public/efficientnet-b7-pt/efficientnet-b7-pt.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md b/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md index 951d2cc0fac..958b82910b9 100644 --- a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md +++ b/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md @@ -6,7 +6,7 @@ The `efficientnet-b0-pt` model is one of the [efficientnet](https://arxiv.org/ab The model input is a blob that consists of a single image of 1x3x224x224 in RGB order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] -before passing the image blob into the network. In addition, values must be multiplied +before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. The model output for `efficientnet-b0-pt` is the typical object classifier output for diff --git a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md b/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md index 4ccd8c25269..5f5f12cab01 100644 --- a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md +++ b/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md @@ -6,7 +6,7 @@ The `efficientnet-b5-pt` model is one of the [efficientnet](https://arxiv.org/ab The model input is a blob that consists of a single image of 1x3x456x456 in RGB order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] -before passing the image blob into the network. In addition, values must be multiplied +before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. The model output for `efficientnet-b5-pt` is the typical object classifier output for diff --git a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md b/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md index 13cb5d526fc..4d13863a184 100644 --- a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md +++ b/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md @@ -6,7 +6,7 @@ The `efficientnet-b7-pt` model is one of the [efficientnet](https://arxiv.org/ab The model input is a blob that consists of a single image of 1x3x600x600 in RGB order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] -before passing the image blob into the network. 
In addition, values must be multiplied +before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. The model output for `efficientnet-b7-pt` is the typical object classifier output for From 18d30f89e2555281b2d72adcbeafe4d2fdee13a5 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Wed, 25 Sep 2019 11:38:19 +0300 Subject: [PATCH 103/927] FIXES --- .../efficientnet-b0-pt/efficientnet-b0-pt.md | 16 +++++++------- models/public/efficientnet-b0-pt/model.yml | 21 +++++++++---------- .../efficientnet-b5-pt/efficientnet-b5-pt.md | 16 +++++++------- models/public/efficientnet-b5-pt/model.yml | 20 +++++++++--------- .../efficientnet-b7-pt/efficientnet-b7-pt.md | 16 +++++++------- models/public/efficientnet-b7-pt/model.yml | 20 +++++++++--------- 6 files changed, 54 insertions(+), 55 deletions(-) diff --git a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md b/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md index 958b82910b9..9841b53401b 100644 --- a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md +++ b/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md @@ -2,10 +2,10 @@ ## Use Case and High-Level Description -The `efficientnet-b0-pt` model is one of the [efficientnet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). +The `efficientnet-b0-pt` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). -The model input is a blob that consists of a single image of 1x3x224x224 in RGB -order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] +The model input is a blob that consists of a single image 3x224x224 in RGB +order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. @@ -44,7 +44,7 @@ Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `W` - width Channel order is `RGB`. -Mean values - [123.675,116.28,103.53], scale value - [58.395,57.12,57.375] +Mean values - [123.675,116.28,103.53], scale values - [58.395,57.12,57.375]. ### Converted model @@ -55,7 +55,7 @@ Image, name - `data`, shape - `1,3,224,224`, format is `B,C,H,W` where: - `H` - height - `W` - width -Channel order is `BGR` +Channel order is `BGR`. 
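A companion sketch for the converted model follows, under the assumption that the Model Optimizer arguments listed in the model.yml files of this series (--reverse_input_channels, --mean_values, --scale_values) fold the channel flip and normalization into the IR, so only the BGR layout and the B,C,H,W reshaping are left to the caller. The image path is again a placeholder.

```python
# Hedged sketch of input preparation for the converted (IR) model; no manual
# mean/scale step is applied because those are assumed to be baked into the IR
# by the Model Optimizer arguments shown in the model.yml files of this series.
import cv2
import numpy as np

bgr = cv2.imread("example.jpg")          # OpenCV reads H,W,C in BGR order
bgr = cv2.resize(bgr, (224, 224))
blob = bgr.transpose(2, 0, 1)[np.newaxis].astype(np.float32)  # 1,3,224,224 (B,C,H,W)
```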
## Output @@ -64,15 +64,15 @@ Channel order is `BGR` Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ### Converted model Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ## Legal Information -[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE) +[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/5e91628ed98250989a7ddd20abfe27385e0493c1/LICENSE) diff --git a/models/public/efficientnet-b0-pt/model.yml b/models/public/efficientnet-b0-pt/model.yml index 552e8410e07..bdc473972db 100644 --- a/models/public/efficientnet-b0-pt/model.yml +++ b/models/public/efficientnet-b0-pt/model.yml @@ -13,13 +13,13 @@ # limitations under the License. description: >- - The `efficientnet-b0-pt` model is one of the efficientnet + The `efficientnet-b0-pt` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained - in PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image + in PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the repository . - The model input is a blob that consists of a single image of 1x3x224x224 in RGB - order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] + The model input is a blob that consists of a single image 3x224x224 in RGB + order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be multiplied by [58.395,57.12,57.375]. 
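The Output sections above describe a 1x1000 vector of per-class probabilities in the [0, 1] range; a minimal, illustrative read-out (with a random array standing in for the real output blob) could look like this:

```python
# Minimal, illustrative top-5 read-out of the 1x1000 classifier output
# described above; `probs` is a random stand-in for the real output blob.
import numpy as np

probs = np.random.rand(1, 1000).astype(np.float32)
top5 = probs[0].argsort()[-5:][::-1]     # class ids of the five highest scores
for class_id in top5:
    print(class_id, float(probs[0][class_id]))
```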
@@ -30,23 +30,23 @@ files: - name: model/gen_efficientnet.py size: 40997 sha256: 8613d48ca74611d3566d8c02cbf7c92aad6b16a708ffcf21183a7014fecdec09 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/gen_efficientnet.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/gen_efficientnet.py - name: model/efficientnet_builder.py size: 18446 sha256: 69bb2adc49dc79c8860f36acca910bad6733e23b46ed80551248b90141d5e1b5 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/efficientnet_builder.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/efficientnet_builder.py - name: model/helpers.py size: 1097 sha256: 0415f198b8a87cb34c9f9aed79f267043010fbd197e27df805bab9d070da82ed - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/helpers.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/helpers.py - name: model/conv2d_helpers.py size: 6175 sha256: e6ba7e878dd28c6d0ccc9707205bf6b58bea664d7fee820d7101a6a8e48990a9 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/conv2d_helpers.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/conv2d_helpers.py - name: model/__init__.py size: 32 sha256: e0bedda5f5e949b8ace7d4f5cdf80e7e664c0b0a935486b6152078fa61c80c1b - source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/master/gen_efficientnet/__init__.py + source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/__init__.py - name: efficientnet-b0.pth size: 21376958 sha256: d6904d92f92ccdca67c9717f9d119392d658577f99e5ce021b57d157985783db @@ -60,7 +60,6 @@ pytorch_to_onnx: - --input-names=data - --output-names=prob - --output-file=$conv_dir/efficientnet-b0.onnx - - --model-params=pretrained=False model_optimizer_args: - --reverse_input_channels - --input_shape=[1,3,224,224] @@ -70,4 +69,4 @@ model_optimizer_args: - --output=prob - --input_model=$conv_dir/efficientnet-b0.onnx framework: pytorch -license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE +license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/LICENSE diff --git a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md b/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md index 5f5f12cab01..a7f1c0e0093 100644 --- a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md +++ b/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md @@ -2,10 +2,10 @@ ## Use Case and High-Level Description -The `efficientnet-b5-pt` model is one of the [efficientnet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). 
+The `efficientnet-b5-pt` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). -The model input is a blob that consists of a single image of 1x3x456x456 in RGB -order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] +The model input is a blob that consists of a single image 3x456x456 in RGB +order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. @@ -44,7 +44,7 @@ Image, name - `data`, shape - `1,3,456,456`, format is `B,C,H,W` where: - `W` - width Channel order is `RGB`. -Mean values - [123.675,116.28,103.53], scale value - [58.395,57.12,57.375] +Mean values - [123.675,116.28,103.53], scale values - [58.395,57.12,57.375]. ### Converted model @@ -55,7 +55,7 @@ Image, name - `data`, shape - `1,3,456,456`, format is `B,C,H,W` where: - `H` - height - `W` - width -Channel order is `BGR` +Channel order is `BGR`. ## Output @@ -64,15 +64,15 @@ Channel order is `BGR` Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ### Converted model Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ## Legal Information -[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE) +[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/LICENSE) diff --git a/models/public/efficientnet-b5-pt/model.yml b/models/public/efficientnet-b5-pt/model.yml index a14c335704c..f84416b3c3c 100644 --- a/models/public/efficientnet-b5-pt/model.yml +++ b/models/public/efficientnet-b5-pt/model.yml @@ -13,14 +13,14 @@ # limitations under the License. description: >- - The `efficientnet-b5-pt` model is one of the efficientnet + The `efficientnet-b5-pt` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained - in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models + in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the repository . - The model input is a blob that consists of a single image of 1x3x456x456 in RGB - order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] + The model input is a blob that consists of a single image 3x456x456 in RGB + order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be multiplied by [58.395,57.12,57.375]. 
@@ -31,23 +31,23 @@ files: - name: model/gen_efficientnet.py size: 40997 sha256: 8613d48ca74611d3566d8c02cbf7c92aad6b16a708ffcf21183a7014fecdec09 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/gen_efficientnet.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/gen_efficientnet.py - name: model/efficientnet_builder.py size: 18446 sha256: 69bb2adc49dc79c8860f36acca910bad6733e23b46ed80551248b90141d5e1b5 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/efficientnet_builder.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/efficientnet_builder.py - name: model/helpers.py size: 1097 sha256: 0415f198b8a87cb34c9f9aed79f267043010fbd197e27df805bab9d070da82ed - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/helpers.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/helpers.py - name: model/conv2d_helpers.py size: 6175 sha256: e6ba7e878dd28c6d0ccc9707205bf6b58bea664d7fee820d7101a6a8e48990a9 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/conv2d_helpers.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/conv2d_helpers.py - name: model/__init__.py size: 32 sha256: e0bedda5f5e949b8ace7d4f5cdf80e7e664c0b0a935486b6152078fa61c80c1b - source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/master/gen_efficientnet/__init__.py + source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/__init__.py - name: tf-efficientnet-b5.pth size: 122398414 sha256: 99018a74e61e5948a955ebfaa2b02ba9abe7bb2e6b7f3d2dfe100e07e103bbdb @@ -76,4 +76,4 @@ model_optimizer_args: - --output=prob - --input_model=$conv_dir/efficientnet-b5.onnx framework: pytorch -license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE +license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/LICENSE diff --git a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md b/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md index 4d13863a184..8467cd38ca2 100644 --- a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md +++ b/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md @@ -2,10 +2,10 @@ ## Use Case and High-Level Description -The `efficientnet-b7-pt` model is one of the [efficientnet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). +The `efficientnet-b7-pt` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. 
For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). -The model input is a blob that consists of a single image of 1x3x600x600 in RGB -order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] +The model input is a blob that consists of a single image 3x600x600 in RGB +order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. @@ -44,7 +44,7 @@ Image, name - `data`, shape - `1,3,600,600`, format is `B,C,H,W` where: - `W` - width Channel order is `RGB`. -Mean values - [123.675,116.28,103.53], scale value - [58.395,57.12,57.375] +Mean values - [123.675,116.28,103.53], scale values - [58.395,57.12,57.375]. ### Converted model @@ -55,7 +55,7 @@ Image, name - `data`, shape - `1,3,600,600`, format is `B,C,H,W` where: - `H` - height - `W` - width -Channel order is `BGR` +Channel order is `BGR`. ## Output @@ -64,15 +64,15 @@ Channel order is `BGR` Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ### Converted model Object classifier according to ImageNet classes, name - `prob`, shape - `1,1000`, output data format is `B,C` where: - `B` - batch size -- `C` - Predicted probabilities for each class in [0, 1] range +- `C` - predicted probabilities for each class in [0, 1] range ## Legal Information -[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE) +[LICENSE](https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/LICENSE) diff --git a/models/public/efficientnet-b7-pt/model.yml b/models/public/efficientnet-b7-pt/model.yml index 3f233d82372..14dd22b676a 100644 --- a/models/public/efficientnet-b7-pt/model.yml +++ b/models/public/efficientnet-b7-pt/model.yml @@ -13,14 +13,14 @@ # limitations under the License. description: >- - The `efficientnet-b7-pt` model is one of the efficientnet + The `efficientnet-b7-pt` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained - in TensorFlow\*, then weights were converted to PyTorch\* All the EfficientNet models + in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the repository . - The model input is a blob that consists of a single image of 1x3x600x600 in RGB - order. The BGR mean values need to be subtracted as follows: [123.675,116.28,103.53] + The model input is a blob that consists of a single image 3x600x600 in RGB + order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be multiplied by [58.395,57.12,57.375]. 
@@ -31,23 +31,23 @@ files: - name: model/gen_efficientnet.py size: 40997 sha256: 8613d48ca74611d3566d8c02cbf7c92aad6b16a708ffcf21183a7014fecdec09 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/gen_efficientnet.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/gen_efficientnet.py - name: model/efficientnet_builder.py size: 18446 sha256: 69bb2adc49dc79c8860f36acca910bad6733e23b46ed80551248b90141d5e1b5 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/efficientnet_builder.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/efficientnet_builder.py - name: model/helpers.py size: 1097 sha256: 0415f198b8a87cb34c9f9aed79f267043010fbd197e27df805bab9d070da82ed - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/helpers.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/helpers.py - name: model/conv2d_helpers.py size: 6175 sha256: e6ba7e878dd28c6d0ccc9707205bf6b58bea664d7fee820d7101a6a8e48990a9 - source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/gen_efficientnet/conv2d_helpers.py + source: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/conv2d_helpers.py - name: model/__init__.py size: 32 sha256: e0bedda5f5e949b8ace7d4f5cdf80e7e664c0b0a935486b6152078fa61c80c1b - source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/master/gen_efficientnet/__init__.py + source: https://github.com/rwightman/gen-efficientnet-pytorch/raw/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/gen_efficientnet/__init__.py - name: tf-efficientnet-b7.pth size: 266843942 sha256: 076e3472fb198ec7c3091aecc73ff205bcca4a114f5862e2297a4c2720c91826 @@ -76,4 +76,4 @@ model_optimizer_args: - --output=prob - --input_model=$conv_dir/efficientnet-b7.onnx framework: pytorch -license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/master/LICENSE +license: https://raw.githubusercontent.com/rwightman/gen-efficientnet-pytorch/a36e2b2cd1bd122a508a6fffeaa7606890f8c882/LICENSE From 34fee9fb50c6e0a25cb3eb06ea81f34f6d033c04 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 26 Sep 2019 15:55:51 +0300 Subject: [PATCH 104/927] pt -> pytorch --- .../efficientnet-b0-pytorch.md} | 6 +++--- .../model.yml | 4 ++-- .../efficientnet-b5-pytorch.md} | 6 +++--- .../model.yml | 4 ++-- .../efficientnet-b7-pytorch.md} | 6 +++--- .../model.yml | 4 ++-- models/public/index.md | 6 +++--- 7 files changed, 18 insertions(+), 18 deletions(-) rename models/public/{efficientnet-b0-pt/efficientnet-b0-pt.md => efficientnet-b0-pytorch/efficientnet-b0-pytorch.md} (78%) rename models/public/{efficientnet-b0-pt => efficientnet-b0-pytorch}/model.yml (95%) rename models/public/{efficientnet-b5-pt/efficientnet-b5-pt.md => efficientnet-b5-pytroch/efficientnet-b5-pytorch.md} (77%) rename models/public/{efficientnet-b5-pt => efficientnet-b5-pytroch}/model.yml (95%) rename models/public/{efficientnet-b7-pt/efficientnet-b7-pt.md => efficientnet-b7-pytorch/efficientnet-b7-pytorch.md} (77%) rename models/public/{efficientnet-b7-pt => efficientnet-b7-pytorch}/model.yml (95%) diff --git 
a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md b/models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md similarity index 78% rename from models/public/efficientnet-b0-pt/efficientnet-b0-pt.md rename to models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md index 9841b53401b..abdbb93363c 100644 --- a/models/public/efficientnet-b0-pt/efficientnet-b0-pt.md +++ b/models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md @@ -1,15 +1,15 @@ -# efficientnet-b0-pt +# efficientnet-b0-pytorch ## Use Case and High-Level Description -The `efficientnet-b0-pt` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). +The `efficientnet-b0-pytorch` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). The model input is a blob that consists of a single image 3x224x224 in RGB order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. -The model output for `efficientnet-b0-pt` is the typical object classifier output for +The model output for `efficientnet-b0-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. ## Example diff --git a/models/public/efficientnet-b0-pt/model.yml b/models/public/efficientnet-b0-pytorch/model.yml similarity index 95% rename from models/public/efficientnet-b0-pt/model.yml rename to models/public/efficientnet-b0-pytorch/model.yml index bdc473972db..78a980838b7 100644 --- a/models/public/efficientnet-b0-pt/model.yml +++ b/models/public/efficientnet-b0-pytorch/model.yml @@ -13,7 +13,7 @@ # limitations under the License. description: >- - The `efficientnet-b0-pt` model is one of the EfficientNet + The `efficientnet-b0-pytorch` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained in PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the repository . @@ -23,7 +23,7 @@ description: >- before passing the image blob into the network. In addition, values must be multiplied by [58.395,57.12,57.375]. - The model output for `efficientnet-b0-pt` is the typical object classifier output + The model output for `efficientnet-b0-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. 
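As a side note on the `--model-params=pretrained=False` entries that remain in the efficientnet-b5 and efficientnet-b7 model.yml files: the `model_parameters()` helper added to `pytorch_to_onnx.py` earlier in this series turns that string into constructor keyword arguments. Below is a condensed, illustrative restatement, not the script itself; the `num_classes=1000` pair is a made-up extra parameter, not one used by these configs.

```python
# Condensed, illustrative restatement of the model_parameters() helper added to
# pytorch_to_onnx.py earlier in this series; "num_classes=1000" is hypothetical.
def model_parameters(parameters):
    if not parameters:
        return None
    pairs = (element.split('=') for element in parameters.split(','))
    return {param: eval(value) for param, value in pairs}

print(model_parameters('pretrained=False,num_classes=1000'))
# -> {'pretrained': False, 'num_classes': 1000}
```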
task_type: classification files: diff --git a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md b/models/public/efficientnet-b5-pytroch/efficientnet-b5-pytorch.md similarity index 77% rename from models/public/efficientnet-b5-pt/efficientnet-b5-pt.md rename to models/public/efficientnet-b5-pytroch/efficientnet-b5-pytorch.md index a7f1c0e0093..42981791ae1 100644 --- a/models/public/efficientnet-b5-pt/efficientnet-b5-pt.md +++ b/models/public/efficientnet-b5-pytroch/efficientnet-b5-pytorch.md @@ -1,15 +1,15 @@ -# efficientnet-b5-pt +# efficientnet-b5-pytorch ## Use Case and High-Level Description -The `efficientnet-b5-pt` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). +The `efficientnet-b5-pytorch` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). The model input is a blob that consists of a single image 3x456x456 in RGB order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. -The model output for `efficientnet-b5-pt` is the typical object classifier output for +The model output for `efficientnet-b5-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. ## Example diff --git a/models/public/efficientnet-b5-pt/model.yml b/models/public/efficientnet-b5-pytroch/model.yml similarity index 95% rename from models/public/efficientnet-b5-pt/model.yml rename to models/public/efficientnet-b5-pytroch/model.yml index f84416b3c3c..2d8104555d8 100644 --- a/models/public/efficientnet-b5-pt/model.yml +++ b/models/public/efficientnet-b5-pytroch/model.yml @@ -13,7 +13,7 @@ # limitations under the License. description: >- - The `efficientnet-b5-pt` model is one of the EfficientNet + The `efficientnet-b5-pytorch` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family @@ -24,7 +24,7 @@ description: >- before passing the image blob into the network. In addition, values must be multiplied by [58.395,57.12,57.375]. - The model output for `efficientnet-b5-pt` is the typical object classifier output + The model output for `efficientnet-b5-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. 
task_type: classification files: diff --git a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md b/models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md similarity index 77% rename from models/public/efficientnet-b7-pt/efficientnet-b7-pt.md rename to models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md index 8467cd38ca2..a9f5baf31fd 100644 --- a/models/public/efficientnet-b7-pt/efficientnet-b7-pt.md +++ b/models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md @@ -1,15 +1,15 @@ -# efficientnet-b7-pt +# efficientnet-b7-pytorch ## Use Case and High-Level Description -The `efficientnet-b7-pt` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). +The `efficientnet-b7-pytorch` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family of models, check out the [repository](https://github.com/rwightman/gen-efficientnet-pytorch). The model input is a blob that consists of a single image 3x600x600 in RGB order. The RGB mean values need to be subtracted as follows: [123.675,116.28,103.53] before passing the image blob into the network. In addition, values must be divided by [58.395,57.12,57.375]. -The model output for `efficientnet-b7-pt` is the typical object classifier output for +The model output for `efficientnet-b7-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. ## Example diff --git a/models/public/efficientnet-b7-pt/model.yml b/models/public/efficientnet-b7-pytorch/model.yml similarity index 95% rename from models/public/efficientnet-b7-pt/model.yml rename to models/public/efficientnet-b7-pytorch/model.yml index 14dd22b676a..d871040e6d2 100644 --- a/models/public/efficientnet-b7-pt/model.yml +++ b/models/public/efficientnet-b7-pytorch/model.yml @@ -13,7 +13,7 @@ # limitations under the License. description: >- - The `efficientnet-b7-pt` model is one of the EfficientNet + The `efficientnet-b7-pytorch` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained in TensorFlow\*, then weights were converted to PyTorch\*. All the EfficientNet models have been pretrained on the ImageNet image database. For details about this family @@ -24,7 +24,7 @@ description: >- before passing the image blob into the network. In addition, values must be multiplied by [58.395,57.12,57.375]. - The model output for `efficientnet-b7-pt` is the typical object classifier output + The model output for `efficientnet-b7-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. 
task_type: classification files: diff --git a/models/public/index.md b/models/public/index.md index 5fb51671359..2ea0724c4f3 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -17,9 +17,9 @@ The models can be downloaded via Model Downloader | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161
densenet-161-tf | | 14.128~15.561 | 28.666 | | DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)
[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169
densenet-169-tf | | 6.16~6.788 | 14.139 | | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | -| EfficientNet B0 | [PyTorch\*](./efficientnet-b0-pt/efficientnet-b0-pt.md) | efficientnet-b0-pt | 76.91/93.21 | 0.819 | 5.268 | -| EfficientNet B5 | [PyTorch\*](./efficientnet-b5-pt/efficientnet-b5-pt.md) | efficientnet-b5-pt | 83.69/96.71 | 21.252 | 30.303 | -| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pt/efficientnet-b7-pt.md) | efficientnet-b7-pt | 84.42/96.91 | 77.618 | 66.193 | +| EfficientNet B0 | [PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0-pytorch | 76.91/93.21 | 0.819 | 5.268 | +| EfficientNet B5 | [PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 | +| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 | | Inception (GoogleNet) V1 | [Caffe\*](./googlenet-v1/googlenet-v1.md) | googlenet-v1 | | 3.266 | 6.999 | | Inception (GoogleNet) V2 | [Caffe\*](./googlenet-v2/googlenet-v2.md) | googlenet-v2 | | 4.058 | 11.185 | | Inception (GoogleNet) V3 | [Caffe\*](./googlenet-v3/googlenet-v3.md)
[PyTorch\*](./googlenet-v3-pytorch/googlenet-v3-pytorch.md) | googlenet-v3
googlenet-v3-pytorch | | 11.469 | 23.817 |

From 06580ca50d0d682d5192ec7fed71150729417e93 Mon Sep 17 00:00:00 2001
From: ezamalie
Date: Thu, 26 Sep 2019 16:11:37 +0300
Subject: [PATCH 105/927] Added configs for accuracy validation

---
 .../configs/efficientnet-b0-pytorch.yml       | 96 +++++++++++++++++++
 .../configs/efficientnet-b5-pytorch.yml       | 95 ++++++++++++++++++
 .../configs/efficientnet-b7-pytorch.yml       | 95 ++++++++++++++++++
 3 files changed, 286 insertions(+)
 create mode 100644 tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml
 create mode 100644 tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml
 create mode 100644 tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml

diff --git a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml
new file mode 100644
index 00000000000..9519c8e5514
--- /dev/null
+++ b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml
@@ -0,0 +1,96 @@
+models:
+  - name: efficientnet-b0-pytorch
+
+    launchers:
+      - framework: onnx_runtime
+        model: efficientnet-b0-pytorch/efficientnet-b0.onnx
+        adapter: classification
+        cpu_extensions: AUTO
+        inputs:
+          - name: data
+            type: INPUT
+            shape: 1,3,224,224
+
+
+    datasets:
+      - name: imagenet_1000_classes
+        data_source: ImageNet
+        reader: pillow_imread
+
+        preprocessing:
+          - type: resize
+            size: 256
+            aspect_ratio_scale: greater
+            use_pillow: True
+            interpolation: BICUBIC
+
+          - type: crop
+            use_pillow: True
+            size: 224
+
+          - type: normalization
+            mean: (123.675,116.28,103.53)
+            std: (58.395,57.12,57.375)
+
+        metrics:
+          - name: accuracy@top1
+            type: accuracy
+            top_k: 1
+            reference: 76.91
+            threshold: 0.05
+
+          - name: accuracy@top5
+            type: accuracy
+            top_k: 5
+            reference: 93.21
+            threshold: 0.05
+
+  - name: efficientnet-b0-pytorch
+
+    launchers:
+      - framework: dlsdk
+        tags:
+          - FP32
+        model: efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.xml
+        weights: efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.bin
+        adapter: classification
+        cpu_extensions: AUTO
+
+      - framework: dlsdk
+        tags:
+          - FP16
+        model: efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.xml
+        weights: efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.bin
+        adapter: classification
+        cpu_extensions: AUTO
+
+
+    datasets:
+      - name: imagenet_1000_classes
+        data_source: ImageNet
+        reader: pillow_imread
+
+        preprocessing:
+          - type: bgr_to_rgb # Actually rgb->bgr
+
+          - type: resize
+            size: 256
+            aspect_ratio_scale: greater
+            use_pillow: True
+            interpolation: BICUBIC
+
+          - type: crop
+            use_pillow: True
+            size: 224
+
+        metrics:
+          - name: accuracy@top1
+            type: accuracy
+            top_k: 1
+            reference: 76.91
+            threshold: 0.05
+          - name: accuracy@top5
+            type: accuracy
+            top_k: 5
+            reference: 93.21
+            threshold: 0.05
diff --git a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml
new file mode 100644
index 00000000000..0bc89bb4061
--- /dev/null
+++ b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml
@@ -0,0 +1,95 @@
+models:
+  - name: efficientnet-b5-pytorch
+
+    launchers:
+      - framework: onnx_runtime
+        model: efficientnet-b5-pytorch/efficientnet-b5-pytorch.onnx
+        adapter: classification
+        cpu_extensions: AUTO
+        inputs:
+          - name: data
+            type: INPUT
+            shape: 1,3,456,456
+
+
+    datasets:
+      - name: imagenet_1000_classes
+        data_source: ImageNet
+        reader: pillow_imread
+
+        preprocessing:
+          - type: resize
+            size: 488
+            aspect_ratio_scale: greater
+            use_pillow: True
+            interpolation: BICUBIC
+          # Crop 
ratio 0.934 + - type: crop + use_pillow: True + size: 456 + + - type: normalization + mean: (123.675,116.28,103.53) + std: (58.395,57.12,57.375) + + metrics: + - name: accuracy@top1 + type: accuracy + top_k: 1 + reference: 83.69 + threshold: 0.05 + - name: acciracy@top5 + type: accuracy + top_k: 5 + reference: 96.71 + threshold: 0.05 + + - name: efficientnet-b5-pytorch + + launchers: + - framework: dlsdk + tags: + - FP32 + model: efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.xml + weights: efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.bin + adapytorcher: classification + cpu_extensions: AUTO + + + - framework: dlsdk + tags: + - FP16 + model: efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.xml + weights: efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.bin + adapytorcher: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + data_source: ImageNet + reader: pillow_imread + + preprocessing: + - type: bgr_to_rgb # Actually rgb->bgr + + - type: resize + size: 488 + aspect_ratio_scale: greater + use_pillow: True + interpolation: BICUBIC + # Crop ration 0.934 + - type: crop + use_pillow: True + size: 456 + + metrics: + - name: accuracy@top1 + type: accuracy + top_k: 1 + reference: 83.69 + threshold: 0.05 + - name: acciracy@top5 + type: accuracy + top_k: 5 + reference: 96.71 + threshold: 0.05 diff --git a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml new file mode 100644 index 00000000000..6616ef6ec42 --- /dev/null +++ b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml @@ -0,0 +1,95 @@ +models: + - name: efficientnet-b7-pytorch + + launchers: + - framework: onnx_runtime + model: efficientnet-b7-pytorch/efficientnet-b7-pytorch.onnx + adapytorcher: classification + cpu_extensions: AUTO + inputs: + - name: data + type: INPUT + shape: 1,3,600,600 + + + datasets: + - name: imagenet_1000_classes + data_source: ImageNet + reader: pillow_imread + + preprocessing: + - type: resize + size: 632 + aspect_ratio_scale: greater + use_pillow: True + interpolation: BICUBIC + # Crop ratio 0.949 + - type: crop + use_pillow: True + size: 600 + + - type: normalization + mean: (123.675,116.28,103.53) + std: (58.395,57.12,57.375) + + metrics: + - name: accuracy@top1 + type: accuracy + top_k: 1 + reference: 84.42 + threshold: 0.05 + - name: acciracy@top5 + type: accuracy + top_k: 5 + reference: 96.91 + threshold: 0.05 + + - name: efficientnet-b7-pytorch + + launchers: + - framework: dlsdk + tags: + - FP32 + model: efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.xml + weights: efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.bin + adapytorcher: classification + cpu_extensions: AUTO + + + - framework: dlsdk + tags: + - FP16 + model: efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.xml + weights: efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.bin + adapytorcher: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + data_source: ImageNet + reader: pillow_imread + + preprocessing: + - type: brg_to_rgb # Actually rgb->bgr + + - type: resize + size: 632 + aspect_ratio_scale: greater + use_pillow: True + interpolation: BICUBIC + # Crop ratio 0.949 + - type: crop + use_pillow: True + size: 600 + + metrics: + - name: accuracy@top1 + type: accuracy + top_k: 1 + reference: 84.42 + threshold: 0.05 + - name: acciracy@top5 + type: accuracy + top_k: 5 + reference: 96.91 + threshold: 0.05 From a0027e0fad30043dfd4988c4250ec05cf72a2d63 Mon Sep 17 
00:00:00 2001 From: ezamalie Date: Thu, 26 Sep 2019 16:16:22 +0300 Subject: [PATCH 106/927] FIX --- .../efficientnet-b5-pytorch.md | 0 .../model.yml | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename models/public/{efficientnet-b5-pytroch => efficientnet-b5-pytorch}/efficientnet-b5-pytorch.md (100%) rename models/public/{efficientnet-b5-pytroch => efficientnet-b5-pytorch}/model.yml (100%) diff --git a/models/public/efficientnet-b5-pytroch/efficientnet-b5-pytorch.md b/models/public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.md similarity index 100% rename from models/public/efficientnet-b5-pytroch/efficientnet-b5-pytorch.md rename to models/public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.md diff --git a/models/public/efficientnet-b5-pytroch/model.yml b/models/public/efficientnet-b5-pytorch/model.yml similarity index 100% rename from models/public/efficientnet-b5-pytroch/model.yml rename to models/public/efficientnet-b5-pytorch/model.yml From 847674da19785cbc6752734e3d3423c648b7ca1c Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 27 Sep 2019 11:26:27 +0300 Subject: [PATCH 107/927] FIX: AC configs --- .../configs/efficientnet-b0-pytorch.yml | 37 +++--------------- .../configs/efficientnet-b5-pytorch.yml | 36 +++--------------- .../configs/efficientnet-b7-pytorch.yml | 38 +++---------------- 3 files changed, 16 insertions(+), 95 deletions(-) diff --git a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml index 9519c8e5514..5bd0090d3f3 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml @@ -3,7 +3,7 @@ models: launchers: - framework: onnx_runtime - model: efficientnet-b0-pytorch/efficientnet-b0.onnx + model: public/efficientnet-b0-pytorch/efficientnet-b0.onnx adapytorcher: classification cpu_extensions: AUTO inputs: @@ -14,7 +14,6 @@ models: datasets: - name: imagenet_1000_classes - data_source: ImageNet reader: pillow_imread preprocessing: @@ -32,42 +31,28 @@ models: mean: (123.675,116.28,103.53) std: (58.395,57.12,57.375) - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - reference: 76.91 - threshold: 0.05 - - - name: acciracy@top5 - type: accuracy - top_k: 5 - reference: 93.21 - threshold: 0.05 - - name: efficientnet-b0-pytorch launchers: - framework: dlsdk tags: - FP32 - model: efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.xml - weights: efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.bin + model: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.xml + weights: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.bin adapytorcher: classification cpu_extensions: AUTO - framework: dlsdk tags: - FP16 - model: efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.xml - weights: efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.bin + model: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.xml + weights: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.bin adapytorcher: classification cpu_extensions: AUTO datasets: - name: imagenet_1000_classes - data_source: ImageNet reader: pillow_imread preprocessing: @@ -82,15 +67,3 @@ models: - type: crop use_pillow: True size: 224 - - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - reference: 76.91 - threshold: 0.05 - - name: acciracy@top5 - type: accuracy - top_k: 5 - reference: 93.21 - threshold: 0.05 diff --git a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml 
b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml index 0bc89bb4061..fe19c636eb3 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml @@ -3,7 +3,7 @@ models: launchers: - framework: onnx_runtime - model: efficientnet-b5-pytorch/efficientnet-b5-pytorch.onnx + model: public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.onnx adapytorcher: classification cpu_extensions: AUTO inputs: @@ -14,7 +14,6 @@ models: datasets: - name: imagenet_1000_classes - data_source: ImageNet reader: pillow_imread preprocessing: @@ -32,26 +31,14 @@ models: mean: (123.675,116.28,103.53) std: (58.395,57.12,57.375) - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - reference: 83.69 - threshold: 0.05 - - name: acciracy@top5 - type: accuracy - top_k: 5 - reference: 96.71 - threshold: 0.05 - - name: efficientnet-b5-pytorch launchers: - framework: dlsdk tags: - FP32 - model: efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.xml - weights: efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.bin + model: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.xml + weights: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.bin adapytorcher: classification cpu_extensions: AUTO @@ -59,14 +46,13 @@ models: - framework: dlsdk tags: - FP16 - model: efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.xml - weights: efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.bin + model: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.xml + weights: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.bin adapytorcher: classification cpu_extensions: AUTO datasets: - name: imagenet_1000_classes - data_source: ImageNet reader: pillow_imread preprocessing: @@ -81,15 +67,3 @@ models: - type: crop use_pillow: True size: 456 - - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - reference: 83.69 - threshold: 0.05 - - name: acciracy@top5 - type: accuracy - top_k: 5 - reference: 96.71 - threshold: 0.05 diff --git a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml index 6616ef6ec42..2d552a9043f 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml @@ -3,7 +3,7 @@ models: launchers: - framework: onnx_runtime - model: efficientnet-b7-pytorch/efficientnet-b7-pytorch.onnx + model: public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.onnx adapytorcher: classification cpu_extensions: AUTO inputs: @@ -14,7 +14,6 @@ models: datasets: - name: imagenet_1000_classes - data_source: ImageNet reader: pillow_imread preprocessing: @@ -32,26 +31,14 @@ models: mean: (123.675,116.28,103.53) std: (58.395,57.12,57.375) - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - reference: 84.42 - threshold: 0.05 - - name: acciracy@top5 - type: accuracy - top_k: 5 - reference: 96.91 - threshold: 0.05 - - name: efficientnet-b7-pytorch launchers: - framework: dlsdk tags: - FP32 - model: efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.xml - weights: efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.bin + model: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.xml + weights: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.bin adapytorcher: classification cpu_extensions: AUTO @@ -59,18 +46,17 @@ models: - framework: dlsdk tags: - FP16 - model: efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.xml - weights: 
efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.bin + model: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.xml + weights: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.bin adapytorcher: classification cpu_extensions: AUTO datasets: - name: imagenet_1000_classes - data_source: ImageNet reader: pillow_imread preprocessing: - - type: brg_to_rgb # Actually rgb->bgr + - type: bgr_to_rgb # Actually rgb->bgr - type: resize size: 632 @@ -81,15 +67,3 @@ models: - type: crop use_pillow: True size: 600 - - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - reference: 84.42 - threshold: 0.05 - - name: acciracy@top5 - type: accuracy - top_k: 5 - reference: 96.91 - threshold: 0.05 From 89065660adaa32dd815017dbb2dd36f1d27ab38f Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 27 Sep 2019 18:11:33 +0300 Subject: [PATCH 108/927] FIX --- tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml | 6 +++--- tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml | 6 +++--- tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml | 6 +++--- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml index 5bd0090d3f3..c94086aca61 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml @@ -4,7 +4,7 @@ models: launchers: - framework: onnx_runtime model: public/efficientnet-b0-pytorch/efficientnet-b0.onnx - adapytorcher: classification + adapter: classification cpu_extensions: AUTO inputs: - name: data @@ -39,7 +39,7 @@ models: - FP32 model: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.xml weights: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.bin - adapytorcher: classification + adapter: classification cpu_extensions: AUTO - framework: dlsdk @@ -47,7 +47,7 @@ models: - FP16 model: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.xml weights: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.bin - adapytorcher: classification + adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml index fe19c636eb3..b5976da4724 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml @@ -4,7 +4,7 @@ models: launchers: - framework: onnx_runtime model: public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.onnx - adapytorcher: classification + adapter: classification cpu_extensions: AUTO inputs: - name: data @@ -39,7 +39,7 @@ models: - FP32 model: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.xml weights: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.bin - adapytorcher: classification + adapter: classification cpu_extensions: AUTO @@ -48,7 +48,7 @@ models: - FP16 model: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.xml weights: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.bin - adapytorcher: classification + adapter: classification cpu_extensions: AUTO datasets: diff --git a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml index 2d552a9043f..e6f3799f62f 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml @@ -4,7 +4,7 @@ 
models: launchers: - framework: onnx_runtime model: public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.onnx - adapytorcher: classification + adapter: classification cpu_extensions: AUTO inputs: - name: data @@ -39,7 +39,7 @@ models: - FP32 model: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.xml weights: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.bin - adapytorcher: classification + adapter: classification cpu_extensions: AUTO @@ -48,7 +48,7 @@ models: - FP16 model: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.xml weights: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.bin - adapytorcher: classification + adapter: classification cpu_extensions: AUTO datasets: From 0b5882f7930ec8427dc1d14d9ef4e13cb14a8b8f Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 10 Oct 2019 15:04:56 +0300 Subject: [PATCH 109/927] fixed configs --- .../accuracy_checker/configs/efficientnet-b0-pytorch.yml | 9 +++------ .../accuracy_checker/configs/efficientnet-b5-pytorch.yml | 8 +++----- .../accuracy_checker/configs/efficientnet-b7-pytorch.yml | 8 +++----- 3 files changed, 9 insertions(+), 16 deletions(-) diff --git a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml index c94086aca61..8c8e4d304eb 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml @@ -5,7 +5,6 @@ models: - framework: onnx_runtime model: public/efficientnet-b0-pytorch/efficientnet-b0.onnx adapter: classification - cpu_extensions: AUTO inputs: - name: data type: INPUT @@ -37,26 +36,24 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.xml + model: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.xml weights: public/efficientnet-b0-pytorch/FP32/efficientnet-b0-pytorch.bin adapter: classification - cpu_extensions: AUTO - framework: dlsdk tags: - FP16 - model: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.xml + model: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.xml weights: public/efficientnet-b0-pytorch/FP16/efficientnet-b0-pytorch.bin adapter: classification cpu_extensions: AUTO - datasets: - name: imagenet_1000_classes reader: pillow_imread preprocessing: - - type: bgr_to_rgb # Actually rgb->bgr + - type: rgb_to_bgr - type: resize size: 256 diff --git a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml index b5976da4724..8ceed5791c0 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml @@ -5,7 +5,6 @@ models: - framework: onnx_runtime model: public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.onnx adapter: classification - cpu_extensions: AUTO inputs: - name: data type: INPUT @@ -37,16 +36,15 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.xml + model: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.xml weights: public/efficientnet-b5-pytorch/FP32/efficientnet-b5-pytorch.bin adapter: classification cpu_extensions: AUTO - - framework: dlsdk tags: - FP16 - model: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.xml + model: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.xml weights: public/efficientnet-b5-pytorch/FP16/efficientnet-b5-pytorch.bin adapter: classification cpu_extensions: 
AUTO @@ -56,7 +54,7 @@ models: reader: pillow_imread preprocessing: - - type: bgr_to_rgb # Actually rgb->bgr + - type: rgb_to_bgr - type: resize size: 488 diff --git a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml index e6f3799f62f..ac195507e3b 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml @@ -5,13 +5,11 @@ models: - framework: onnx_runtime model: public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.onnx adapter: classification - cpu_extensions: AUTO inputs: - name: data type: INPUT shape: 1,3,600,600 - datasets: - name: imagenet_1000_classes reader: pillow_imread @@ -37,7 +35,7 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.xml + model: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.xml weights: public/efficientnet-b7-pytorch/FP32/efficientnet-b7-pytorch.bin adapter: classification cpu_extensions: AUTO @@ -46,7 +44,7 @@ models: - framework: dlsdk tags: - FP16 - model: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.xml + model: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.xml weights: public/efficientnet-b7-pytorch/FP16/efficientnet-b7-pytorch.bin adapter: classification cpu_extensions: AUTO @@ -56,7 +54,7 @@ models: reader: pillow_imread preprocessing: - - type: bgr_to_rgb # Actually rgb->bgr + - type: rgb_to_bgr - type: resize size: 632 From 2f7f9ebb36f6e263a96560aa9b4ad8b6de510fee Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 10 Oct 2019 15:06:29 +0300 Subject: [PATCH 110/927] FIX --- models/public/efficientnet-b5-pytorch/model.yml | 1 - models/public/efficientnet-b7-pytorch/model.yml | 1 - 2 files changed, 2 deletions(-) diff --git a/models/public/efficientnet-b5-pytorch/model.yml b/models/public/efficientnet-b5-pytorch/model.yml index 2d8104555d8..b6ee9c33729 100644 --- a/models/public/efficientnet-b5-pytorch/model.yml +++ b/models/public/efficientnet-b5-pytorch/model.yml @@ -66,7 +66,6 @@ pytorch_to_onnx: - --input-names=data - --output-names=prob - --output-file=$conv_dir/efficientnet-b5.onnx - - --model-params=pretrained=False model_optimizer_args: - --reverse_input_channels - --input_shape=[1,3,456,456] diff --git a/models/public/efficientnet-b7-pytorch/model.yml b/models/public/efficientnet-b7-pytorch/model.yml index d871040e6d2..f118d4ebcb3 100644 --- a/models/public/efficientnet-b7-pytorch/model.yml +++ b/models/public/efficientnet-b7-pytorch/model.yml @@ -66,7 +66,6 @@ pytorch_to_onnx: - --input-names=data - --output-names=prob - --output-file=$conv_dir/efficientnet-b7.onnx - - --model-params=pretrained=False model_optimizer_args: - --reverse_input_channels - --input_shape=[1,3,600,600] From bd908bd8892f442dc0e6a52a3f8b5bd15b46a32e Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 10 Oct 2019 15:13:45 +0300 Subject: [PATCH 111/927] FIX --- models/public/efficientnet-b0-pytorch/model.yml | 2 +- models/public/efficientnet-b5-pytorch/model.yml | 2 +- models/public/efficientnet-b7-pytorch/model.yml | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/models/public/efficientnet-b0-pytorch/model.yml b/models/public/efficientnet-b0-pytorch/model.yml index 78a980838b7..3e4251b38ec 100644 --- a/models/public/efficientnet-b0-pytorch/model.yml +++ b/models/public/efficientnet-b0-pytorch/model.yml @@ -51,7 +51,7 @@ files: size: 21376958 sha256: 
d6904d92f92ccdca67c9717f9d119392d658577f99e5ce021b57d157985783db source: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b0-d6904d92.pth -pytorch_to_onnx: +conversion_to_onnx_args: - --model-path=$dl_dir - --model-name=efficientnet_b0 - --import-module=model diff --git a/models/public/efficientnet-b5-pytorch/model.yml b/models/public/efficientnet-b5-pytorch/model.yml index b6ee9c33729..b8a24807c53 100644 --- a/models/public/efficientnet-b5-pytorch/model.yml +++ b/models/public/efficientnet-b5-pytorch/model.yml @@ -57,7 +57,7 @@ postprocessing: file: model/conv2d_helpers.py pattern: '_EXPORTABLE = False' replacement: '_EXPORTABLE = True' -pytorch_to_onnx: +conversion_to_onnx_args: - --model-path=$dl_dir - --model-name=tf_efficientnet_b5 - --import-module=model diff --git a/models/public/efficientnet-b7-pytorch/model.yml b/models/public/efficientnet-b7-pytorch/model.yml index f118d4ebcb3..8cd8350a19a 100644 --- a/models/public/efficientnet-b7-pytorch/model.yml +++ b/models/public/efficientnet-b7-pytorch/model.yml @@ -57,7 +57,7 @@ postprocessing: file: model/conv2d_helpers.py pattern: '_EXPORTABLE = False' replacement: '_EXPORTABLE = True' -pytorch_to_onnx: +conversion_to_onnx_args: - --model-path=$dl_dir - --model-name=tf_efficientnet_b7 - --import-module=model From bce9f49253f76126c1739de6fda15801eeacca65 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Mon, 2 Sep 2019 12:58:54 +0300 Subject: [PATCH 112/927] Revert "Conversion to onnx sript expanded for more models support" This reverts commit f011e45b06cc3dd144ef59165d53f2d781e05fa5. --- tools/downloader/pytorch_to_onnx.py | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/tools/downloader/pytorch_to_onnx.py b/tools/downloader/pytorch_to_onnx.py index c1e68b6dd25..4371f2fec2d 100644 --- a/tools/downloader/pytorch_to_onnx.py +++ b/tools/downloader/pytorch_to_onnx.py @@ -21,14 +21,6 @@ def positive_int_arg(values): sys.exit('Invalid value for input argument: {!r}, a positive integer is expected'.format(value)) return result -def model_parameters(parameters): - if not parameters: - return None - model_params = dict((param, value) for param, value in (element.split('=') for element in parameters.split(','))) - for param in model_params.keys(): - model_params[param] = eval(model_params[param]) - return model_params - def parse_args(): """Parse input arguments""" @@ -54,12 +46,11 @@ def parse_args(): help='Space separated names of the input layers') parser.add_argument('--output-names', type=str, nargs='+', help='Space separated names of the output layers') - parser.add_argument('--model-params', type=model_parameters, - help='Pairs "name"="value" of model constructor comma-separeted parameters') + return parser.parse_args() -def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None, model_params=None): +def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None): """Import model and load pretrained weights""" if from_torchvision: @@ -81,7 +72,7 @@ def load_model(model_name, weights, from_torchvision=True, model_path=None, modu try: module = __import__(module_name) creator = getattr(module, model_name) - model = creator(**model_params) if model_params else creator() + model = creator() except ImportError as err: print('Module {} in {} doesn\'t exist. 
Check import path and name'.format(model_name, model_path)) sys.exit(err) @@ -119,8 +110,7 @@ def convert_to_onnx(model, input_shape, output_file, input_names, output_names): def main(): args = parse_args() - model = load_model(args.model_name, args.weights, args.from_torchvision, - args.model_path, args.import_module, args.model_params) + model = load_model(args.model_name, args.weights, args.from_torchvision, args.model_path, args.import_module) convert_to_onnx(model, args.input_shape, args.output_file, args.input_names, args.output_names) From c02605dcb166717d2fe437f1642388b19d4702b2 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Thu, 10 Oct 2019 19:12:21 +0300 Subject: [PATCH 113/927] Some grammar and style fixes --- CONTRIBUTING.md | 84 ++++++++++++++++++++++++------------------------- README.md | 2 +- 2 files changed, 42 insertions(+), 44 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7efce2190f3..1ad6811d33b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,6 +1,6 @@ # How to contribute model to Open Model Zoo -We appreciate your intention to contribute model to OpenVINO™ Open Model Zoo (OMZ). This guide would help you and explain main issues. OMZ is licensed under Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. Please note, that we accept models under permissive license: **MIT**, **Apache 2.0**, **BSD-3-Clause**. In other case it may take longer time to get approval (or even denial) for your model. +We appreciate your intention to contribute model to OpenVINO™ Open Model Zoo (OMZ). This guide would help you and explain main issues. OMZ is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. Please note, that we accept models under permissive licenses, as **MIT**, **Apache 2.0**, **BSD-3-Clause**, etc. Otherwise, it may take longer time to get approve (or even refuse) for your model. Nowadays OMZ supports models from frameworks: * Caffe\* @@ -11,15 +11,15 @@ Nowadays OMZ supports models from frameworks: ## Pull request requirements -Contribution to OMZ comes down to creating pull request (PR) in this repository. Please use `develop` branch when creating your PR. Pull request is strictly formalized and must contains: -* configuration file - `model.yml` (learn more in [Configuration file](#configuration-file) section) +Contribution to OMZ comes down to creating pull request (PR) in this repository. Please use `develop` branch when creating your PR. 
Pull request is strictly formalized and must contain: +* configuration file `model.yml` (learn more in [Configuration file](#configuration-file) section) * documentation of model in markdown format (learn more in [Documentation](#documentation) section) * accuracy validation configuration file (learn more in [Accuracy Validation](#accuracy-validation) section) * license added to [tools/downloader/license.txt](tools/downloader/license.txt) * (*optional*) demo (learn more about it in [Demo](#demo) section) -Name your model in OMZ using next rules: -- name must be consistent with name given by authors, but full match not necessary +Name your model in OMZ according to the following rules: +- name must be consistent with original name, but complete match is not necessary - use lowercase - spaces are not allowed in the name, use `-` or `_` (`-` is preferable) as delimiters instead - suffix to model name, according to origin framework (see **`framework`** description in [configuration file](#configuration-file) section), if you adding reimplementation of existing model in OMZ from another framework @@ -31,11 +31,10 @@ resnet-50-pytorch mobilenet-v2-1.0-224 ``` -Configuration and documentation files must be located in `models/public/` directory. - -Validation configuration file must be located in [`tools/accuracy_checker/configs`](tools/accuracy_checker/configs). - -If you adding demo, it must be locate it in [demos](/demos) folder. Learn more about it in [Demo](#demo) section. +Files location: +* the configuration and documentation files must be in the `models/public/` directory +* the validation configuration file must be in the `tools/accuracy_checker/configs` directory +* the demo must be in the `demos` directory This PR must pass next tests: * model is downloadable by `tools/downloader/downloader.py` script (see [Configuration file](#configuration-file) for details) @@ -43,20 +42,20 @@ This PR must pass next tests: * model can be used by demo or sample and provides adequate results (see [Demo](#demo) for details) * model passes accuracy validation (see [Accuracy validation](#accuracy-validation) for details) -After the end, your PR will be reviewed our team for consistence and legal compliance. +At the end, your PR will be reviewed by OMZ maintainers for consistence and legal compliance. -Your PR can be denied in case: +Your PR can be rejected in some cases, e.g.: * inappropriate license (e.g. GPL-like licenses) * inaccessible dataset * PR fails one of the test above ## Configuration file -Models configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be located in the model subfolder. Let look closer to the file content. +The model configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be located in the model subfolder. Let's look closer to the file content. **`description`** -Description of model. +Description of the model. Must match with the description from model [documentation](#documentation). 
**`task_type`** @@ -73,18 +72,18 @@ Model task class: - `optical_character_recognition` - `semantic_segmentation` -If task, that your model solve, is not listed here, please add new type of task to [tools/downloader/common.py](tools/downloader/common.py) file list `KNOWN_TASK_TYPES` +If the task, that your model solves, is not listed here, please add new it to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. **`files`** -> Before filling this section, you must ensure that a model is downloadable either from a direct HTTP(S) link or from Google Drive\*. +> Before filling this section, make sure that the model can be downloaded either via the direct HTTP(S) link or from Google Drive\*. -You describe all files, which need to be downloaded, in this section. Each file is described by: +Downlodable files. Each file is described by: -* `name` sets file name after downloading -* `size` sets file size -* `sha256` sets file hash sum -* `source` sets direct link to file *OR* describes file access parameters +* `name` - sets file name after downloading +* `size` - sets file size +* `sha256` - sets file hash sum +* `source` - sets direct link to file *OR* describes file access parameters > You may obtain hash sum using `sha256sum ` command on Linux\*. @@ -92,18 +91,18 @@ If file is located on Google Drive\*, section `source` must contain: - `$type: google_drive` - `id` file ID on Google Drive\* -> **NOTE:** if file is located on GitHub\* the version of the file must be fixed. +> **NOTE:** if file is on GitHub\*, use the specific file version. **`postprocessing`** (*optional*) -Sometimes right after downloading model is not ready for conversion by Model Optimizer and some additional preprocessing needed, such as unpacking, replacing or deleting some part of file. This manipulation is described in this section. +Post processing of the downloaded files. For unpacking archive: - `$type: unpack_archive` - `file` archive file name - `format` archive format (zip | tar | gztar | bztar | xztar) -For replacement operations: +For replacement operation: - `$type: regex_replace` - `file` name of file where replacement must be executed - `pattern` regular expression ([learn more](https://docs.python.org/3/library/re.html)) @@ -116,7 +115,7 @@ List of onnx conversion parameters, see `model_optimizer_args` for details. Appl **`model_optimizer_args`** -Conversion parameters (learn more in [Model conversion](#model-conversion) section) is specified in this section, e.g.: +Conversion parameters (learn more in [Model conversion](#model-conversion) section), e.g.: ``` - --input=data - --mean_values=data[127.5] @@ -125,15 +124,15 @@ Conversion parameters (learn more in [Model conversion](#model-conversion) secti - --output=prob - --input_model=$conv_dir/googlenet-v3.onnx ``` -> **NOTE:** no need to specify `framework`, `data_type`, `model_name` and `output_dir` parameters since them are deduced automatically. +> **NOTE:** no need to specify `framework`, `data_type`, `model_name` and `output_dir`, since they are deduced automatically. **`framework`** -Framework of original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`). +Framework of the original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`, etc.). **`license`** -Path to model's license. +Path to the model license. 
### Example @@ -173,7 +172,7 @@ license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICE ## Model conversion -Deep Learning Inference Engine (IE) supports models in Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in OpenVINO™ toolkit. Find more information about conversion in [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +Deep Learning Inference Engine (IE) supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. > **NOTE 1**: image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. @@ -183,20 +182,20 @@ Deep Learning Inference Engine (IE) supports models in Intermediate Representati ## Demo -The demo shows the main idea of model inference using IE. If your model solves one of the tasks supported by Open Model Zoo, find appropriate demo from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or sample from [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). +A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). -If appropriate demo or sample are absent, you must provide your own demo (C++ or Python). Demos are required to support the following keys: +Demos are required to support the following keys: - `-i ""` Required. Input to process. -- `-m ""` Required. Path to an .xml file with a trained model. -- `-d ""` Optional. Target device for model inference. Default is CPU. +- `-m ""` Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. +- `-d ""` Optional. Default is CPU. - `-no_show` Optional. Do not visualize inference results. > Note: For Python is preferable to use `-` instead of `_` as word separators (e.g. `-no-show`) Also you can add any other necessary parameters. -If you adding new demo, please provide auto-testing support too: +If you add new demo, please provide auto-testing support too: - add demo launch parameters in [demos/tests/cases.py](demos/tests/cases.py) - prepare list of input images in [demos/tests/image_sequences.py](demos/tests/image_sequences.py) @@ -204,18 +203,17 @@ If you adding new demo, please provide auto-testing support too: ## Accuracy validation -Accuracy validation can be performed by [Accuracy Checker](./tools/accuracy_checker) tool, provided with repository. This tool can use IE to run converted model or original framework to run original model. 
Accuracy Checker supports lot of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre- and post processing parameters, accuracy metric to compute and so on). More details you can find [here](./tools/accuracy_checker#testing-new-models). +Accuracy validation can be performed by the [Accuracy Checker](./tools/accuracy_checker) tool. This tool can use IE to run converted model or original framework to run original model. Accuracy Checker supports lots of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre and post processing parameters, accuracy metric to compute and so on). Find more details [here](./tools/accuracy_checker#testing-new-models). If model uses dataset which is unsupported by Accuracy Checker, you also must provide link to it. Please notice this issue in PR description. Don't forget about dataset license too (see [above](#how-to-contribute-model-to-open-model-zoo)). -When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was fully successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. +When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. *After this step you will get accuracy validation configuration file - **.yml*** ### Example -Let use one of the file from `tools/accuracy_checker/configs`, for example, this is validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml): - +Let use one of the files from `tools/accuracy_checker/configs`, for example, validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml): ``` models: - name: alexnet-cf @@ -270,22 +268,22 @@ models: ## Documentation -Documentation is very important part of model contribution, it helps to better understand possible usage of the model. Documentation must be named after suggested models name. -Documentation should contain: +Documentation is very important part of model contribution, it helps to better understand possible use of the model. Documentation must be named after the name of the model. +The documentation should contain: * description of model * main purpose * features - * links to paper or/and source + * references to paper or/and source * model specification * type * framework * GFLOPs (*if available*) * number of parameters (*if available*) -* validation dataset description/link +* validation dataset description and/or link * main accuracy values (also description of metric) * detailed description of input and output for original and converted models -Detailed structure and headers naming convention you can learn from any other model, e.g. [alexnet](./models/public/alexnet/alexnet.md). +Learn the detailed structure and headers naming convention from any model documentation, e.g. 
[alexnet](./models/public/alexnet/alexnet.md). --- *After this step you will obtain **.md** - documentation file* diff --git a/README.md b/README.md index 1b17e29842d..76ca4ad6540 100644 --- a/README.md +++ b/README.md @@ -34,7 +34,7 @@ We welcome community contributions to the Open Model Zoo repository. If you have * In case of a larger feature, provide a relevant demo. * Submit a pull request at https://github.com/opencv/open_model_zoo/pulls -Additional information about contributing your model to Open Model Zoo you can find [here](CONTRIBUTING.md). +You can find additional information about model contribution [here](CONTRIBUTING.md). We will review your contribution and, if any additional fixes or modifications are needed, may give you feedback to guide you. When accepted, your pull request will be merged into the GitHub* repositories. From 843a7917cb3e9c1bfab70f73a3fa6271cc1d4213 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 11 Oct 2019 14:23:12 +0300 Subject: [PATCH 114/927] AC: initial version of BERT model support (#467) * AC: initial version of BERT model support * fix adapter and postprocessing logic * add XNLI dataset * add support cased models * add adapter for text classification * add number of classes in bert classification adapter * clean up & add readme --- .../accuracy_checker/adapters/README.md | 3 + .../accuracy_checker/adapters/__init__.py | 5 +- .../accuracy_checker/adapters/nlp.py | 52 ++++- .../annotation_converters/README.md | 13 ++ .../annotation_converters/__init__.py | 6 +- .../annotation_converters/_nlp_common.py | 151 ++++++++++++++ .../annotation_converters/squad.py | 193 ++++++++++++++++++ .../annotation_converters/xnli.py | 135 ++++++++++++ .../data_readers/data_reader.py | 2 +- .../accuracy_checker/launcher/input_feeder.py | 6 +- .../accuracy_checker/launcher/tf_launcher.py | 53 +++-- .../accuracy_checker/metrics/README.md | 4 +- .../accuracy_checker/metrics/__init__.py | 7 +- .../metrics/classification.py | 6 +- .../metrics/question_answering.py | 91 +++++++++ .../accuracy_checker/postprocessor/README.md | 3 + .../postprocessor/__init__.py | 5 +- .../postprocessor/extract_answers_tokens.py | 100 +++++++++ .../representation/__init__.py | 13 +- .../representation/nlp_representation.py | 37 ++++ 20 files changed, 857 insertions(+), 28 deletions(-) create mode 100644 tools/accuracy_checker/accuracy_checker/annotation_converters/_nlp_common.py create mode 100644 tools/accuracy_checker/accuracy_checker/annotation_converters/squad.py create mode 100644 tools/accuracy_checker/accuracy_checker/annotation_converters/xnli.py create mode 100644 tools/accuracy_checker/accuracy_checker/metrics/question_answering.py create mode 100644 tools/accuracy_checker/accuracy_checker/postprocessor/extract_answers_tokens.py diff --git a/tools/accuracy_checker/accuracy_checker/adapters/README.md b/tools/accuracy_checker/accuracy_checker/adapters/README.md index 01e54ff8441..c7fb90f315d 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/README.md +++ b/tools/accuracy_checker/accuracy_checker/adapters/README.md @@ -125,3 +125,6 @@ AccuracyChecker supports following set of adapters: * `nmt` - converting output of neural machine translation model to `MachineTranslationPrediction`. * `vocabulary_file` - file which contains vocabulary for encoding model predicted indexes to words (e. g. vocab.bpe.32000.de). Path can be prefixed with `--models` arguments. 
* `eos_index` - index end of string symbol in vocabulary (Optional, used in cases when launcher does not support dynamic output shape for cut off empty prediction). +* `bert_question_answering` - converting output of BERT model trained to solve question answering task to `QuestionAnsweringPrediction`. +* `bert_classification` - converting output of BERT model trained for classification task to `ClassificationPrediction`. + * `num_classes` - number of predicted classes. diff --git a/tools/accuracy_checker/accuracy_checker/adapters/__init__.py b/tools/accuracy_checker/accuracy_checker/adapters/__init__.py index ac8c4b57ede..4f6cef05690 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/__init__.py @@ -49,7 +49,7 @@ from .mask_rcnn import MaskRCNNAdapter -from .nlp import MachineTranslationAdapter +from .nlp import MachineTranslationAdapter, QuestionAnsweringAdapter __all__ = [ 'Adapter', @@ -96,5 +96,6 @@ 'MaskRCNNAdapter', - 'MachineTranslationAdapter' + 'MachineTranslationAdapter', + 'QuestionAnsweringAdapter' ] diff --git a/tools/accuracy_checker/accuracy_checker/adapters/nlp.py b/tools/accuracy_checker/accuracy_checker/adapters/nlp.py index 4b817a90b86..adf09c48e8d 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/nlp.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/nlp.py @@ -1,7 +1,7 @@ import re import numpy as np from .adapter import Adapter -from ..representation import MachineTranslationPrediction +from ..representation import MachineTranslationPrediction, QuestionAnsweringPrediction, ClassificationPrediction from ..config import PathField, NumberField from ..utils import read_txt @@ -69,3 +69,53 @@ def process(self, raw, identifiers=None, frame_meta=None): results.append(MachineTranslationPrediction(identifier, _clean(encoded_words, self.subword_option))) return results + + +class QuestionAnsweringAdapter(Adapter): + __provider__ = 'bert_question_answering' + prediction_types = (QuestionAnsweringPrediction, ) + + def process(self, raw, identifiers=None, frame_meta=None): + predictions = self._extract_predictions(raw, frame_meta)[self.output_blob] + result = [] + batch_size, seq_length, hidden_size = predictions.shape + output_weights = np.random.normal(scale=0.02, size=(2, hidden_size)) + output_bias = np.zeros(2) + prediction_matrix = predictions.reshape((batch_size * seq_length, hidden_size)) + predictions = np.matmul(prediction_matrix, output_weights.T) + predictions = predictions + output_bias + predictions = predictions.reshape((batch_size, seq_length, 2)) + for identifier, prediction in zip(identifiers, predictions): + prediction = np.transpose(prediction, (1, 0)) + result.append(QuestionAnsweringPrediction(identifier, prediction[0], prediction[1])) + + return result + + +class BertTextClassification(Adapter): + __provider__ = 'bert_classification' + + @classmethod + def parameters(cls): + params = super().parameters() + params.update({"num_classes": ( + NumberField(value_type=int, min_value=1, description='number of classes for classification') + )}) + + return params + + def configure(self): + self.num_classes = self.get_value_from_config('num_classes') + + def process(self, raw, identifiers=None, frame_meta=None): + outputs = self._extract_predictions(raw, frame_meta)[self.output_blob] + _, hidden_size = outputs.shape + output_weights = np.random.normal(scale=0.02, size=(self.num_classes, hidden_size)) + output_bias = np.zeros(self.num_classes) + predictions = 
np.matmul(outputs, output_weights.T) + predictions += output_bias + result = [] + for identifier, output in zip(identifiers, predictions): + result.append(ClassificationPrediction(identifier, output)) + + return result diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md index 6708c4b039c..9c8b2a8ba26 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md @@ -198,3 +198,16 @@ Accuracy Checker supports following list of annotation converters and specific f * `lpr_txt` - converts annotation for license plate recognition task in txt format to `CharacterRecognitionAnnotation`. * `annotation_file` - path to txt annotation. * `decoding_dictionary` - path to file containing dictionary for output decoding. +* `squad` - converts the Stanford Question Answering Dataset ([SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)) to `Question Answering Annotation`. **Note: This converter not only converts data to metric specific format but also tokenize and encodes input for BERT.** + * `testing_file` - path to testing file. + * `vocab_file` - path to model co vocabulary file. + * `max_seq_length` - maximum total input sequence length after word-piece tokenization (Optional, default value is 128). + * `max_query_length` - maximum number of tokens for the question (Optional, default value is 64). + * `doc_stride` -stride size between chunks for splitting up long document (Optional, default value is 128). + * `lower_case` - allows switching tokens to lower case register. It is useful for working with uncased models (Optional, default value is False) +* `xnli` - converts The Cross-lingual Natural Language Inference Corpus ([XNLI](https://github.com/facebookresearch/XNLI)) to `TextClassificationAnnotattion`. **Note: This converter not only converts data to metric specific format but also tokenize and encodes input for BERT.** + * `annotation_file` - path to dataset annotation file in tsv format. + * `vocab_file` - path to model co vocabulary file. + * `max_seq_length` - maximum total input sequence length after word-piece tokenization (Optional, default value is 128). + * `lower_case` - allows switching tokens to lower case register. It is useful for working with uncased models (Optional, default value is False). + * `language_filter` - comma-separated list of used in annotation language tags for selecting records for specific languages only. (Optional, if not used full annotation will be converted). 
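To make the `squad` and `xnli` converter descriptions above more concrete, here is a minimal, self-contained sketch of the BERT-style input packing they perform: tokenize both text segments, truncate them to fit, assemble a `[CLS] A [SEP] B [SEP]` sequence, and emit `input_ids`, `input_mask` and `segment_ids` zero-padded to `max_seq_length`. The `toy_tokenize` function, the `encode_pair` helper and the tiny vocabulary are illustrative stand-ins only; the actual converters in this patch use the WordPiece `Tokenizer` added in `_nlp_common.py`.

```
# Sketch only: a toy whitespace tokenizer and vocabulary stand in for the
# WordPiece Tokenizer that the squad/xnli converters actually use.
def toy_tokenize(text):
    return text.lower().split()


def encode_pair(text_a, text_b, vocab, max_seq_length=16):
    tokens_a, tokens_b = toy_tokenize(text_a), toy_tokenize(text_b)
    # Reserve room for [CLS], [SEP], [SEP]; trim the longer segment first.
    while len(tokens_a) + len(tokens_b) > max_seq_length - 3:
        (tokens_a if len(tokens_a) > len(tokens_b) else tokens_b).pop()
    tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    input_ids = [vocab.get(token, vocab['[UNK]']) for token in tokens]
    input_mask = [1] * len(input_ids)  # 1 marks real tokens, 0 marks padding
    # Zero-pad all three sequences up to max_seq_length.
    padding = max_seq_length - len(input_ids)
    input_ids += [0] * padding
    input_mask += [0] * padding
    segment_ids += [0] * padding
    return input_ids, input_mask, segment_ids


if __name__ == '__main__':
    vocab = {'[UNK]': 0, '[CLS]': 1, '[SEP]': 2,
             'where': 3, 'is': 4, 'the': 5, 'cat': 6, 'sat': 7}
    for name, seq in zip(('input_ids', 'input_mask', 'segment_ids'),
                         encode_pair('Where is the cat', 'The cat sat', vocab)):
        print(name, seq)
```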
diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py index 19011ce2ade..b8e225784bd 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py @@ -46,6 +46,8 @@ from .cvat_multilabel_recognition import CVATMultilabelAttributesRecognitionConverter from .cvat_human_pose import CVATPoseEstimationConverter from .cvat_person_detection_action_recognition import CVATPersonDetectionActionRecognitionConverter +from .squad import SQUADConverter +from .xnli import XNLIDatasetConverter __all__ = [ 'BaseFormatConverter', @@ -86,5 +88,7 @@ 'CVATTextRecognitionConverter', 'CVATMultilabelAttributesRecognitionConverter', 'CVATPoseEstimationConverter', - 'CVATPersonDetectionActionRecognitionConverter' + 'CVATPersonDetectionActionRecognitionConverter', + 'SQUADConverter', + 'XNLIDatasetConverter' ] diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/_nlp_common.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/_nlp_common.py new file mode 100644 index 00000000000..daf761b3062 --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/_nlp_common.py @@ -0,0 +1,151 @@ +import unicodedata + + +class Tokenizer: + def __init__(self, vocab_file, lower_case=True): + self.vocab = self.load_vocab(vocab_file) + self.lower_case = lower_case + + @staticmethod + def _run_strip_accents(text): + text = unicodedata.normalize("NFD", text) + output = [] + for char in text: + cat = unicodedata.category(char) + if cat == "Mn": + continue + output.append(char) + return "".join(output) + + @staticmethod + def _run_split_on_punc(text): + def _is_punctuation(char): + punct = set('!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~') + if char in punct: + return True + cat = unicodedata.category(char) + if cat.startswith("P"): + return True + return False + + chars = list(text) + i = 0 + start_new_word = True + output = [] + while i < len(chars): + char = chars[i] + if _is_punctuation(char): + output.append([char]) + start_new_word = True + else: + if start_new_word: + output.append([]) + start_new_word = False + output[-1].append(char) + i += 1 + + return ["".join(x) for x in output] + + def basic_tokenizer(self, text): + if isinstance(text, bytes): + text = text.decode("utf-8", "ignore") + + text = text.strip() + tokens = text.split() if text else [] + split_tokens = [] + for token in tokens: + if self.lower_case: + token = token.lower() + token = self._run_strip_accents(token) + split_tokens.extend(self._run_split_on_punc(token)) + + output_tokens = " ".join(split_tokens) + output_tokens = output_tokens.strip() + output_tokens = output_tokens.split() if output_tokens else [] + return output_tokens + + def wordpiece_tokenizer(self, text): + if isinstance(text, bytes): + text = text.decode("utf-8", "ignore") + + output_tokens = [] + text = text.strip() + tokens = text.split() if text else [] + for token in tokens: + chars = list(token) + if len(chars) > 200: + output_tokens.append("[UNK]") + continue + + is_bad = False + start = 0 + sub_tokens = [] + while start < len(chars): + end = len(chars) + cur_substr = None + while start < end: + substr = "".join(chars[start:end]) + if start > 0: + substr = "##" + substr + if substr in self.vocab: + cur_substr = substr + break + end -= 1 + if cur_substr is None: + is_bad = True + break + sub_tokens.append(cur_substr) + start = end + 
+ if is_bad: + output_tokens.append("[UNK]") + else: + output_tokens.extend(sub_tokens) + return output_tokens + + def tokenize(self, text): + tokens = [] + for token in self.basic_tokenizer(text): + for sub_token in self.wordpiece_tokenizer(token): + tokens.append(sub_token) + + return tokens + + def convert_tokens_to_ids(self, items): + output = [] + for item in items: + output.append(self.vocab[item]) + return output + + @staticmethod + def load_vocab(file): + vocab = {} + index = 0 + with open(str(file), 'r') as reader: + while True: + token = reader.readline() + if isinstance(token, bytes): + token = token.decode("utf-8", "ignore") + if not token: + break + token = token.strip() + vocab[token] = index + index += 1 + return vocab + + +def truncate_seq_pair(tokens_a, tokens_b, max_length): + """Truncates a sequence pair in place to the maximum length.""" + + # This is a simple heuristic which will always truncate the longer sequence + # one token at a time. This makes more sense than truncating an equal percent + # of tokens from each, since if one sequence is very short then each token + # that's truncated likely contains more information than a longer sequence. + while True: + total_length = len(tokens_a) + len(tokens_b) + if total_length <= max_length: + break + if len(tokens_a) > len(tokens_b): + tokens_a.pop() + else: + tokens_b.pop() diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/squad.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/squad.py new file mode 100644 index 00000000000..785935b8a8c --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/squad.py @@ -0,0 +1,193 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+""" + +from collections import namedtuple + +import numpy as np + +from ..representation import QuestionAnsweringAnnotation +from ..utils import read_json +from ..config import PathField, NumberField, BoolField + +from .format_converter import BaseFormatConverter, ConverterReturn +from ._nlp_common import Tokenizer + + +class SQUADConverter(BaseFormatConverter): + __provider__ = "squad" + annotation_types = (QuestionAnsweringAnnotation, ) + + @classmethod + def parameters(cls): + parameters = super().parameters() + parameters.update({ + 'testing_file': PathField(description="Path to testing file."), + 'vocab_file': PathField(description='Path to vocabulary file.'), + 'max_seq_length': NumberField( + description='The maximum total input sequence length after WordPiece tokenization.', + optional=True, default=128 + ), + 'max_query_length': NumberField( + description='The maximum number of tokens for the question.', + optional=True, default=64 + ), + 'doc_stride': NumberField( + description="When splitting up a long document into chunks, how much stride to take between chunks.", + optional=True, default=128 + ), + 'lower_case': BoolField(optional=True, default=False, description='Switch tokens to lower case register') + }) + + return parameters + + def configure(self): + self.testing_file = self.get_value_from_config('testing_file') + self.vocab_file = self.get_value_from_config('vocab_file') + self.max_seq_length = self.get_value_from_config('max_seq_length') + self.max_query_length = self.get_value_from_config('max_query_length') + self.doc_stride = self.get_value_from_config('doc_stride') + self.lower_case = self.get_value_from_config('lower_case') + + @staticmethod + def _load_examples(file): + def _is_whitespace(c): + if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F: + return True + return False + + examples = [] + answers = [] + data = read_json(file)['data'] + + for entry in data: + for paragraph in entry['paragraphs']: + paragraph_text = paragraph["context"] + doc_tokens = [] + char_to_word_offset = [] + prev_is_whitespace = True + for c in paragraph_text: + if _is_whitespace(c): + prev_is_whitespace = True + else: + if prev_is_whitespace: + doc_tokens.append(c) + else: + doc_tokens[-1] += c + prev_is_whitespace = False + char_to_word_offset.append(len(doc_tokens) - 1) + + for qa in paragraph["qas"]: + qas_id = qa["id"] + question_text = qa["question"] + orig_answer_text = qa["answers"] + is_impossible = False + + example = { + 'id': qas_id, + 'question_text': question_text, + 'tokens': doc_tokens, + 'is_impossible': is_impossible + } + examples.append(example) + answers.append(orig_answer_text) + return examples, answers + + def convert(self, check_content=False, progress_callback=None, progress_interval=100, **kwargs): + examples, answers = self._load_examples(self.testing_file) + annotations = [] + tokenizer = Tokenizer(self.vocab_file, self.lower_case) + unique_id = 1000000000 + DocSpan = namedtuple("DocSpan", ["start", "length"]) + + for (example_index, example) in enumerate(examples): + query_tokens = tokenizer.tokenize(example['question_text']) + if len(query_tokens) > self.max_query_length: + query_tokens = query_tokens[:self.max_query_length] + all_doc_tokens = [] + for (i, token) in enumerate(example['tokens']): + sub_tokens = tokenizer.tokenize(token) + for sub_token in sub_tokens: + all_doc_tokens.append(sub_token) + max_tokens_for_doc = self.max_seq_length - len(query_tokens) - 3 + doc_spans = [] + start_offset = 0 + while start_offset < 
len(all_doc_tokens): + length = len(all_doc_tokens) - start_offset + if length > max_tokens_for_doc: + length = max_tokens_for_doc + doc_spans.append(DocSpan(start_offset, length)) + if start_offset + length == len(all_doc_tokens): + break + start_offset += min(length, self.doc_stride) + + for idx, doc_span in enumerate(doc_spans): + tokens = [] + segment_ids = [] + tokens.append("[CLS]") + segment_ids.append(0) + for token in query_tokens: + tokens.append(token) + segment_ids.append(0) + tokens.append("[SEP]") + segment_ids.append(0) + + for i in range(doc_span.length): + split_token_index = doc_span.start + i + tokens.append(all_doc_tokens[split_token_index]) + segment_ids.append(1) + tokens.append("[SEP]") + segment_ids.append(1) + input_ids = tokenizer.convert_tokens_to_ids(tokens) + input_mask = [1] * len(input_ids) + + while len(input_ids) < self.max_seq_length: + input_ids.append(0) + input_mask.append(0) + segment_ids.append(0) + + # add index to make identifier unique + identifier = ['input_ids_{}'.format(idx), 'input_mask_{}'.format(idx), 'segment_ids_{}'.format(idx)] + annotation = QuestionAnsweringAnnotation( + identifier, + np.array(unique_id), + np.array(input_ids), + np.array(input_mask), + np.array(segment_ids), + tokens, + answers[example_index], + ) + annotations.append(annotation) + unique_id += 1 + return ConverterReturn(annotations, None, None) + + @staticmethod + def _is_max_context(doc_spans, cur_span_index, position): + best_score = None + best_span_index = None + for (span_index, doc_span) in enumerate(doc_spans): + end = doc_span.start + doc_span.length - 1 + if position < doc_span.start: + continue + if position > end: + continue + num_left_context = position - doc_span.start + num_right_context = end - position + score = min(num_left_context, num_right_context) + 0.01 * doc_span.length + if best_score is None or score > best_score: + best_score = score + best_span_index = span_index + + return cur_span_index == best_span_index diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/xnli.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/xnli.py new file mode 100644 index 00000000000..c66836ea161 --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/xnli.py @@ -0,0 +1,135 @@ +from collections import namedtuple +import csv +import numpy as np + +from ..config import PathField, StringField, NumberField, BoolField +from ..representation import TextClassificationAnnotation +from ..utils import string_to_list +from .format_converter import BaseFormatConverter, ConverterReturn +from ._nlp_common import Tokenizer, truncate_seq_pair + + +InputExample = namedtuple('InputExample', ['guid', 'text_a', 'text_b', 'label']) + +labels = ["contradiction", "entailment", "neutral"] +label_map = dict(enumerate(labels)) +reversed_label_map = {value: key for key, value in label_map.items()} + + +class XNLIDatasetConverter(BaseFormatConverter): + __provider__ = 'xnli' + annotation_types = (TextClassificationAnnotation, ) + + @classmethod + def parameters(cls): + params = super().parameters() + params.update({ + 'annotation_file': PathField(description='path to annotation file in json or tsv format'), + 'language_filter': StringField( + description='comma-separated list of languages for selection only appropriate annotations.' 
+ 'If not provided full dataset used', + optional=True + ), + 'vocab_file': PathField(description='Path to vocabulary file.'), + 'max_seq_length': NumberField( + description='The maximum total input sequence length after WordPiece tokenization.', + optional=True, default=128 + ), + 'lower_case': BoolField(optional=True, default=False, description='Switch tokens to lower case register') + }) + + return params + + def configure(self): + self.annotation_file = self.get_value_from_config('annotation_file') + self.language_filter = self.get_value_from_config('language_filter') + if self.language_filter is not None: + self.language_filter = string_to_list(self.language_filter) + self.vocab_file = self.get_value_from_config('vocab_file') + self.max_seq_length = self.get_value_from_config('max_seq_length') + self.lower_case = self.get_value_from_config('lower_case') + self.tokenizer = Tokenizer(self.vocab_file, self.lower_case) + + def read_tsv(self): + lines = [] + with self.annotation_file.open('r') as ann_file: + reader = csv.reader(ann_file, delimiter="\t", quotechar=None) + for idx, line in enumerate(reader): + if idx == 0: + continue + guid = "dev-{}".format(idx) + language = line[0] + if self.language_filter and language not in self.language_filter: + continue + label = reversed_label_map[line[1]] + text_a = line[6] + text_b = line[7] + lines.append(InputExample(guid, text_a, text_b, label)) + + return lines + + def convert_single_example(self, example): + identifier = [ + 'input_ids_{}'.format(example.guid), + 'input_mask_{}'.format(example.guid), + 'segment_ids_{}'.format(example.guid) + ] + tokens_a = self.tokenizer.tokenize(example.text_a) + tokens_b = None + if example.text_b: + tokens_b = self.tokenizer.tokenize(example.text_b) + + if tokens_b: + # Modifies `tokens_a` and `tokens_b` in place so that the total + # length is less than the specified length. + # Account for [CLS], [SEP], [SEP] with "- 3" + truncate_seq_pair(tokens_a, tokens_b, self.max_seq_length - 3) + else: + # Account for [CLS] and [SEP] with "- 2" + if len(tokens_a) > self.max_seq_length - 2: + tokens_a = tokens_a[:self.max_seq_length - 2] + tokens = [] + segment_ids = [] + tokens.append("[CLS]") + segment_ids.append(0) + for token in tokens_a: + tokens.append(token) + segment_ids.append(0) + tokens.append("[SEP]") + segment_ids.append(0) + + if tokens_b: + for token in tokens_b: + tokens.append(token) + segment_ids.append(1) + tokens.append("[SEP]") + segment_ids.append(1) + + input_ids = self.tokenizer.convert_tokens_to_ids(tokens) + + # The mask has 1 for real tokens and 0 for padding tokens. Only real + # tokens are attended to. + input_mask = [1] * len(input_ids) + + # Zero-pad up to the sequence length. 
+ padding_size = self.max_seq_length - len(input_ids) + if padding_size: + padding = [0] * padding_size + input_ids.extend(padding) + input_mask.extend(padding) + segment_ids.extend(padding) + + return TextClassificationAnnotation( + identifier, example.label, np.array(input_ids), np.array(input_mask), np.array(segment_ids), tokens + ) + + def convert(self, check_content=False, progress_callback=None, progress_interval=100, **kwargs): + examples = self.read_tsv() + annotations = [] + num_iter = len(examples) + for example_id, example in enumerate(examples): + annotations.append(self.convert_single_example(example)) + if progress_callback and example_id % progress_interval == 0: + progress_callback(example_id * 100 / num_iter) + + return ConverterReturn(annotations, {'label_map': label_map}, None) diff --git a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py index 44258d4fdf0..4a7cf1532d8 100644 --- a/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py +++ b/tools/accuracy_checker/accuracy_checker/data_readers/data_reader.py @@ -42,7 +42,7 @@ def __init__(self, data, meta=None, identifier=''): elif isinstance(data, list) and np.isscalar(data[0]): self.metadata['image_size'] = len(data) else: - self.metadata['image_size'] = data.shape if not isinstance(data, list) else data[0].shape + self.metadata['image_size'] = data.shape if not isinstance(data, list) else np.shape(data[0]) ClipIdentifier = namedtuple('ClipIdentifier', ['video', 'clip_id', 'frames']) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py b/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py index d08bdd4f16f..b6350beed1b 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/input_feeder.py @@ -78,8 +78,10 @@ def prepare_image_info(image_sizes_batch): return image_infos def fill_non_constant_inputs(self, data_representation_batch): - image_info_inputs = self._fill_image_info_inputs(data_representation_batch) - filled_inputs = {**image_info_inputs} + filled_inputs = {} + if self.image_info_inputs: + image_info_inputs = self._fill_image_info_inputs(data_representation_batch) + filled_inputs = {**image_info_inputs} for input_layer in self.non_constant_inputs: input_regex = None input_batch = [] diff --git a/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py index f41e9aad8a7..1c0a016ca0a 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py @@ -17,9 +17,10 @@ import re from pathlib import Path import tensorflow as tf - +from tensorflow.python.saved_model import tag_constants from .launcher import Launcher from ..config import BaseField, ListField, PathField, StringField, ConfigError, ConfigValidator +from ..utils import contains_any, contains_all class TFLauncher(Launcher): @@ -29,7 +30,10 @@ class TFLauncher(Launcher): def parameters(cls): parameters = super().parameters() parameters.update({ - 'model': PathField(is_directory=False, description="Path to model file."), + 'model': PathField( + is_directory=False, description="Path to model file (frozen graph of checkpoint meta).", optional=True + ), + 'saved_model_dir': PathField(is_directory=True, optional=True, description='Path to saved model directory'), 'device': StringField( 
choices=('cpu', 'gpu'), default='cpu', optional=True, description="Device name: cpu or gpu"), 'inputs': BaseField(optional=True, description="Inputs."), @@ -45,9 +49,17 @@ def __init__(self, config_entry, *args, **kwargs): tf_launcher_config = ConfigValidator('TF_Launcher', fields=self.parameters()) tf_launcher_config.validate(self.config) + if not contains_any(self.config, ['model', 'saved_model_dir']): + raise ConfigError('model or saved model directory should be provided') + + if contains_all(self.config, ['model', 'saved_model']): + raise ConfigError('only one option: model or saved_model_dir should be provided') self._config_outputs = self.get_value_from_config('output_names') - self._graph = self._load_graph(str(self.get_value_from_config('model'))) + if 'model' in self.config: + self._graph = self._load_graph(str(self.get_value_from_config('model'))) + else: + self._graph = self._load_graph(str(self.get_value_from_config('saved_model_dir')), True) self._outputs_names = self._get_outputs_names(self._graph, self._config_outputs) @@ -113,18 +125,14 @@ def output_blob(self): def predict_async(self, *args, **kwargs): raise ValueError('TensorFlow Launcher does not support async mode yet') - def _load_graph(self, model): + def _load_graph(self, model, saved_model=False): + if saved_model: + return self._load_saved_model(model) + if 'meta' in Path(model).suffix: return self._load_graph_using_meta(model) - with tf.gfile.GFile(model, 'rb') as file: - graph_def = tf.GraphDef() - graph_def.ParseFromString(file.read()) - - with tf.Graph().as_default() as graph: - tf.import_graph_def(graph_def) - - return graph + return self._load_frozen_graph(model) def _load_graph_using_meta(self, model): tf.reset_default_graph() @@ -145,6 +153,27 @@ def _load_graph_using_meta(self, model): tf.import_graph_def(graph_def, name='') return graph + @staticmethod + def _load_frozen_graph(model): + with tf.gfile.GFile(model, 'rb') as file: + graph_def = tf.GraphDef() + graph_def.ParseFromString(file.read()) + + with tf.Graph().as_default() as graph: + tf.import_graph_def(graph_def) + + return graph + + @staticmethod + def _load_saved_model(model_dir): + graph = tf.Graph() + + with graph.as_default(): + with tf.Session() as sess: + tf.saved_model.loader.load(sess, [tag_constants.SERVING], model_dir) + + return graph + def _get_graph_inputs(self, graph, config_inputs=None): inputs_ops = {'Placeholder'} inputs = [x for x in graph.as_graph_def().node if not x.input and x.op in inputs_ops] diff --git a/tools/accuracy_checker/accuracy_checker/metrics/README.md b/tools/accuracy_checker/accuracy_checker/metrics/README.md index 0fa0accdb23..f278ff6b783 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/README.md +++ b/tools/accuracy_checker/accuracy_checker/metrics/README.md @@ -13,7 +13,7 @@ Every metric has parameters available for configuration. Accuracy Checker supports following set of metrics: * `accuracy` - classification accuracy metric, defined as the number of correct predictions divided by the total number of predictions. -Supported representation: `ClassificationAnnotation`, `ClassificationPrediction` +Supported representation: `ClassificationAnnotation`, `TextClassificationAnnotation`, `ClassificationPrediction` * `top_k` - the number of classes with the highest probability, which will be used to decide if prediction is correct. * `accuracy_per_class` - classification accuracy metric which represents results for each class. 
Supported representation: `ClassificationAnnotation`, `ClassificationPrediction`. * `top_k` - the number of classes with the highest probability, which will be used to decide if prediction is correct. @@ -160,3 +160,5 @@ More detailed information about calculation segmentation metrics you can find [h * `bleu` - [Bilingual Evaluation Understudy](https://en.wikipedia.org/wiki/BLEU). Supperted representations: `MachineTranslationAnnotation`, `MachineTranslationPrediction`. * `smooth` - Whether or not to apply Lin et al. 2004 smoothing. * `max_order` - Maximum n-gram order to use when computing BLEU score. (Optional, default 4). +* `f1` - F1-score for question answering task. Supported representations: `QuestionAnsweringAnnotation`, `QuestionAnsweringPrediction`. +* `exact_match` - Exact matching (EM) metric for question answering task. Supported representations: `QuestionAnsweringAnnotation`, `QuestionAnsweringPrediction`. diff --git a/tools/accuracy_checker/accuracy_checker/metrics/__init__.py b/tools/accuracy_checker/accuracy_checker/metrics/__init__.py index 24cd1ca6773..41c0f3d415f 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/__init__.py @@ -57,7 +57,7 @@ ) from .hit_ratio import HitRatioMetric, NDSGMetric from .machine_translation import BilingualEvaluationUnderstudy - +from .question_answering import ExactMatchScore, ScoreF1 __all__ = [ 'Metric', @@ -117,5 +117,8 @@ 'HitRatioMetric', 'NDSGMetric', - 'BilingualEvaluationUnderstudy' + 'BilingualEvaluationUnderstudy', + + 'ScoreF1', + 'ExactMatchScore' ] diff --git a/tools/accuracy_checker/accuracy_checker/metrics/classification.py b/tools/accuracy_checker/accuracy_checker/metrics/classification.py index 00062cbede2..d14258661ac 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/classification.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/classification.py @@ -16,7 +16,7 @@ import numpy as np -from ..representation import ClassificationAnnotation, ClassificationPrediction +from ..representation import ClassificationAnnotation, ClassificationPrediction, TextClassificationAnnotation from ..config import NumberField, StringField from .metric import PerImageEvaluationMetric from .average_meter import AverageMeter @@ -29,7 +29,7 @@ class ClassificationAccuracy(PerImageEvaluationMetric): __provider__ = 'accuracy' - annotation_types = (ClassificationAnnotation, ) + annotation_types = (ClassificationAnnotation, TextClassificationAnnotation) prediction_types = (ClassificationPrediction, ) @classmethod @@ -70,7 +70,7 @@ class ClassificationAccuracyClasses(PerImageEvaluationMetric): __provider__ = 'accuracy_per_class' - annotation_types = (ClassificationAnnotation, ) + annotation_types = (ClassificationAnnotation, TextClassificationAnnotation) prediction_types = (ClassificationPrediction, ) @classmethod diff --git a/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py b/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py new file mode 100644 index 00000000000..e0bde0e63ba --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py @@ -0,0 +1,91 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +""" + +import re +from collections import Counter + +from ..representation import QuestionAnsweringAnnotation, QuestionAnsweringPrediction +from .metric import PerImageEvaluationMetric + + +def normalize_answer(s): + def remove_articles(text): + return re.sub(r'\b(a|an|the)\b', ' ', text) + + def white_space_fix(text): + return ' '.join(text.split()) + + def remove_punc(text): + exclude = set('!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~') + return ''.join(ch for ch in text if ch not in exclude) + + return white_space_fix(remove_articles(remove_punc(s.lower()))) + + +class ScoreF1(PerImageEvaluationMetric): + __provider__ = 'f1' + + annotation_types = (QuestionAnsweringAnnotation,) + prediction_types = (QuestionAnsweringPrediction,) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.f1 = 0 + self.total = 0 + + def update(self, annotation, prediction): + max_f1_score = 0 + for gt_answer in annotation.orig_answer_text: + for pred_answer in prediction.tokens: + prediction_tokens = normalize_answer(pred_answer).split() + annotation_tokens = normalize_answer(gt_answer['text']).split() + common = Counter(prediction_tokens) & Counter(annotation_tokens) + same = sum(common.values()) + if same == 0: + continue + precision = 1.0 * same / len(prediction_tokens) + recall = 1.0 * same / len(annotation_tokens) + f1 = (2 * precision * recall) / (precision + recall) + max_f1_score = f1 if f1 > max_f1_score else max_f1_score + self.f1 += max_f1_score + self.total += 1 + + def evaluate(self, annotation, prediction): + return self.f1 / self.total + + +class ExactMatchScore(PerImageEvaluationMetric): + __provider__ = 'exact_match' + + annotation_types = (QuestionAnsweringAnnotation,) + prediction_types = (QuestionAnsweringPrediction,) + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.exact_match = 0 + self.total = 0 + + def update(self, annotation, prediction): + max_exact_match = 0 + for gt_answer in annotation.orig_answer_text: + for pred_answer in prediction.tokens: + exact_match = normalize_answer(gt_answer['text']) == normalize_answer(pred_answer) + max_exact_match = exact_match if exact_match > max_exact_match else max_exact_match + self.exact_match += max_exact_match + self.total += 1 + + def evaluate(self, annotation, prediction): + return self.exact_match / self.total diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/README.md b/tools/accuracy_checker/accuracy_checker/postprocessor/README.md index b444fb0fbf1..6cfbc7900e9 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/README.md +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/README.md @@ -57,3 +57,6 @@ Accuracy Checker supports following set of postprocessors: * `min_value` - lower bound of range. * `max_value` - upper bound of range. * `segmentation-prediction-resample` - resamples output prediction in two steps: 1) resizes it to bounding box size; 2) extends to annotation size. Supported representations: `BrainTumorSegmentationAnnotation`, `BrainTumorSegmentationPrediction`. 
For correct bounding box size must be set via tag `boxes_file` in `brats_numpy` [converter](../annotation_converters/README.md). +* `extract_prediction_answers` - extract predicted sequence of tokens from annotation text. Supported representations: `QuestionAnsweringAnnotation`, `QuestionAnsweringPrediction`. + * `max_answer` - maximum answer length (Optional, default value is 30). + * `n_best_size` - total number of n-best prediction size for the answer (Optional, default value is 20). diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/__init__.py b/tools/accuracy_checker/accuracy_checker/postprocessor/__init__.py index 8905d186add..878291cad25 100644 --- a/tools/accuracy_checker/accuracy_checker/postprocessor/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/__init__.py @@ -43,6 +43,7 @@ from .clip_segmentation_mask import ClipSegmentationMask from .normalize_boxes import NormalizeBoxes from .resample_segmentation_prediction import SegmentationPredictionResample +from .extract_answers_tokens import ExtractSQUADPrediction __all__ = [ 'Postprocessor', @@ -73,5 +74,7 @@ 'ClipSegmentationMask', 'SegmentationPredictionResample', - 'NormalizeLandmarksPoints' + 'NormalizeLandmarksPoints', + + 'ExtractSQUADPrediction' ] diff --git a/tools/accuracy_checker/accuracy_checker/postprocessor/extract_answers_tokens.py b/tools/accuracy_checker/accuracy_checker/postprocessor/extract_answers_tokens.py new file mode 100644 index 00000000000..5c5e50e0d02 --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/postprocessor/extract_answers_tokens.py @@ -0,0 +1,100 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +""" + +import numpy as np + +from .postprocessor import Postprocessor +from ..representation import QuestionAnsweringAnnotation, QuestionAnsweringPrediction +from ..config import NumberField + + +class ExtractSQUADPrediction(Postprocessor): + """ + Extract text answers from predictions + """ + + __provider__ = 'extract_answers_tokens' + + annotation_types = (QuestionAnsweringAnnotation, ) + prediction_types = (QuestionAnsweringPrediction, ) + + @classmethod + def parameters(cls): + parameters = super().parameters() + parameters.update({ + 'max_answer': NumberField( + optional=True, value_type=int, default=30, description="Maximum length of answer" + ), + 'n_best_size': NumberField( + optional=True, value_type=int, default=20, description="The total number of n-best predictions." 
+ ) + }) + return parameters + + def configure(self): + self.max_answer = self.get_value_from_config('max_answer') + self.n_best_size = self.get_value_from_config('n_best_size') + + def process_image(self, annotation, prediction): + def _get_best_indexes(logits, n_best_size): + indexes = np.argsort(logits)[::-1] + score = np.array(logits)[indexes] + best_indexes_mask = np.arange(len(score)) < n_best_size + best_indexes = indexes[best_indexes_mask] + return best_indexes + + def _check_indexes(start, end, length, max_answer): + if start >= length or end >= length: + return False + if end < start or end - start + 1 > max_answer: + return False + return True + + for annotation_, prediction_ in zip(annotation, prediction): + start_indexes = _get_best_indexes(prediction_.start_logits, self.n_best_size) + end_indexes = _get_best_indexes(prediction_.end_logits, self.n_best_size) + valid_start_indexes = [] + valid_end_indexes = [] + tokens = [] + + for start_index in start_indexes: + for end_index in end_indexes: + if _check_indexes(start_index, end_index, len(annotation_.tokens), self.max_answer): + valid_start_indexes.append(start_index) + valid_end_indexes.append(end_index) + tokens.append(annotation_.tokens[start_index:(end_index + 1)]) + + start_logits = prediction_.start_logits[valid_start_indexes] + end_logits = prediction_.end_logits[valid_end_indexes] + + start_indexes = [val for _, val in sorted(zip(start_logits+end_logits, start_indexes), reverse=True)] + if not start_indexes: + continue + start_indexes_ = start_indexes[0] + end_indexes_ = [val for _, val in sorted(zip(start_logits+end_logits, end_indexes), reverse=True)] + end_indexes_ = end_indexes_[0] + + prediction_.start_index.append(start_indexes_) + prediction_.end_index.append(end_indexes_) + + tokens_ = [" ".join(tok) for _, tok in sorted(zip(start_logits+end_logits, tokens), reverse=True)] + tokens_ = tokens_[0] + tokens_ = tokens_.replace(" ##", "") + tokens_ = tokens_.replace("##", "") + tokens_ = tokens_.strip() + prediction_.tokens.append(tokens_) + + return annotation, prediction diff --git a/tools/accuracy_checker/accuracy_checker/representation/__init__.py b/tools/accuracy_checker/accuracy_checker/representation/__init__.py index 81caaf60dbd..7e00bd394ba 100644 --- a/tools/accuracy_checker/accuracy_checker/representation/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/representation/__init__.py @@ -61,7 +61,13 @@ from .text_detection_representation import TextDetectionAnnotation, TextDetectionPrediction from .pose_estimation_representation import PoseEstimationAnnotation, PoseEstimationPrediction from .hit_ratio_representation import HitRatio, HitRatioAnnotation, HitRatioPrediction -from .nlp_representation import MachineTranslationAnnotation, MachineTranslationPrediction +from .nlp_representation import ( + MachineTranslationAnnotation, + MachineTranslationPrediction, + QuestionAnsweringAnnotation, + QuestionAnsweringPrediction, + TextClassificationAnnotation +) __all__ = [ 'BaseRepresentation', @@ -124,5 +130,8 @@ 'HitRatioPrediction', 'MachineTranslationAnnotation', - 'MachineTranslationPrediction' + 'MachineTranslationPrediction', + 'QuestionAnsweringAnnotation', + 'QuestionAnsweringPrediction', + 'TextClassificationAnnotation' ] diff --git a/tools/accuracy_checker/accuracy_checker/representation/nlp_representation.py b/tools/accuracy_checker/accuracy_checker/representation/nlp_representation.py index 448ed900d0a..5dbc7c08655 100644 --- 
a/tools/accuracy_checker/accuracy_checker/representation/nlp_representation.py +++ b/tools/accuracy_checker/accuracy_checker/representation/nlp_representation.py @@ -1,4 +1,5 @@ from .base_representation import BaseRepresentation +from .classification_representation import ClassificationAnnotation class MachineTranslationRepresentation(BaseRepresentation): @@ -16,3 +17,39 @@ class MachineTranslationPrediction(MachineTranslationRepresentation): def __init__(self, identifier, translation=''): super().__init__(identifier) self.translation = translation + + +class QuestionAnswering(BaseRepresentation): + def __init__(self, identifier=''): + super().__init__(identifier) + + +class QuestionAnsweringAnnotation(QuestionAnswering): + def __init__(self, identifier, unique_id, input_ids, input_mask, segment_ids, tokens, orig_answer_text=None): + super().__init__(identifier) + self.orig_answer_text = orig_answer_text if orig_answer_text is not None else '' + self.unique_id = unique_id + self.input_ids = input_ids + self.input_mask = input_mask + self.segment_ids = segment_ids + self.tokens = tokens + + +class QuestionAnsweringPrediction(QuestionAnswering): + def __init__(self, identifier, start_logits, end_logits, start_index=None, end_index=None, tokens=None): + super().__init__(identifier) + + self.start_logits = start_logits + self.end_logits = end_logits + self.start_index = start_index if start_index is not None else [] + self.end_index = end_index if end_index is not None else [] + self.tokens = tokens if tokens is not None else [] + + +class TextClassificationAnnotation(ClassificationAnnotation): + def __init__(self, identifier, label, input_ids, input_mask, segment_ids, tokens): + super().__init__(identifier, label) + self.input_ids = input_ids + self.input_mask = input_mask + self.segment_ids = segment_ids + self.tokens = tokens From b778c59e2e3727525dd2132dc66b21f8b8f38f47 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Fri, 11 Oct 2019 16:56:21 +0300 Subject: [PATCH 115/927] FIX --- CONTRIBUTING.md | 15 +-------------- 1 file changed, 1 insertion(+), 14 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 1ad6811d33b..3430fc5c955 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -59,20 +59,7 @@ Description of the model. Must match with the description from model [documentat **`task_type`** -Model task class: -- `action_recognition` -- `classification` -- `detection` -- `face_recognition` -- `head_pose_estimation` -- `human_pose_estimation` -- `image_processing` -- `instance_segmentation` -- `object_attributes` -- `optical_character_recognition` -- `semantic_segmentation` - -If the task, that your model solves, is not listed here, please add new it to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. +Model task class, see [here](tools/downloader/README.md#model-information-dumper-usage) for details. If the task class of your model is absent, please add new to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. 
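To make the requirement concrete, here is a minimal sketch of the kind of membership check that `KNOWN_TASK_TYPES` enables. It is an illustration only, not the actual downloader code; the helper name and the abbreviated set contents are assumptions.

# Hypothetical sketch: validate a model.yml task_type against the known list.
KNOWN_TASK_TYPES = {'classification', 'detection', 'semantic_segmentation'}  # abbreviated for the example

def check_task_type(task_type):
    # A task class missing from this list should first be added to
    # tools/downloader/common.py, as described above.
    if task_type not in KNOWN_TASK_TYPES:
        raise ValueError('unknown task_type: {}'.format(task_type))
    return task_type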
**`files`** From b5db171d4138c625c0b321d6ad1086dad84e3282 Mon Sep 17 00:00:00 2001 From: Katya Date: Fri, 11 Oct 2019 17:09:23 +0300 Subject: [PATCH 116/927] AC: fix order for config merging (#509) --- .../accuracy_checker/config/config_reader.py | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index 0c7e4692926..13a856a28b3 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -77,8 +77,8 @@ def merge(arguments): def process_config(config, mode='models', arguments=None): if arguments is None: arguments = dict() - ConfigReader._provide_cmd_arguments(arguments, config, mode) ConfigReader._merge_paths_with_prefixes(arguments, config, mode) + ConfigReader._provide_cmd_arguments(arguments, config, mode) ConfigReader._filter_launchers(config, arguments, mode) @staticmethod @@ -371,19 +371,16 @@ def merge_dlsdk_launcher_args(arguments, launcher_entry, update_launcher_entry): if 'bitstream' not in launcher_entry and 'bitstreams' in arguments and arguments.bitstreams: if not arguments.bitstreams.is_dir(): launcher_entry['bitstream'] = arguments.bitstreams - arguments.bitstreams = None if 'cpu_extensions' not in launcher_entry and 'extensions' in arguments and arguments.extensions: extensions = arguments.extensions if not extensions.is_dir() or extensions.name == 'AUTO': launcher_entry['cpu_extensions'] = arguments.extensions - arguments.extensions = None if 'affinity_map' not in launcher_entry and 'affinity_map' in arguments and arguments.affinity_map: am = arguments.affinity_map if not am.is_dir(): launcher_entry['affinity_map'] = arguments.affinity_map - arguments.affinity_map = None return launcher_entry From 7504544699f730620c6743c50b4d6f1e5cec8faf Mon Sep 17 00:00:00 2001 From: Katya Date: Mon, 14 Oct 2019 11:32:01 +0300 Subject: [PATCH 117/927] Rename densenet121-caffe2.yml to densenet-121-caffe2.yml (#512) --- .../configs/{densenet121-caffe2.yml => densenet-121-caffe2.yml} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename tools/accuracy_checker/configs/{densenet121-caffe2.yml => densenet-121-caffe2.yml} (100%) diff --git a/tools/accuracy_checker/configs/densenet121-caffe2.yml b/tools/accuracy_checker/configs/densenet-121-caffe2.yml similarity index 100% rename from tools/accuracy_checker/configs/densenet121-caffe2.yml rename to tools/accuracy_checker/configs/densenet-121-caffe2.yml From 2a4c046f7f25fda3ac88543e7710fa1b2f4bcd00 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Thu, 26 Sep 2019 10:25:50 +0300 Subject: [PATCH 118/927] add efficientnets --- .../efficientnet-b0-tf/efficientnet-b0-tf.md | 73 ++++++++++++++++++ models/public/efficientnet-b0-tf/model.yml | 38 ++++++++++ .../efficientnet-b0_auto_aug-tf.md | 74 +++++++++++++++++++ .../efficientnet-b0_auto_aug-tf/model.yml | 39 ++++++++++ .../efficientnet-b5-tf/efficientnet-b5-tf.md | 73 ++++++++++++++++++ models/public/efficientnet-b5-tf/model.yml | 38 ++++++++++ .../efficientnet-b7_auto_aug-tf.md | 74 +++++++++++++++++++ .../efficientnet-b7_auto_aug-tf/model.yml | 38 ++++++++++ .../configs/efficientnet-b0-tf.yml | 30 ++++++++ .../configs/efficientnet-b0_auto_aug-tf.yml | 30 ++++++++ .../configs/efficientnet-b5-tf.yml | 30 ++++++++ .../configs/efficientnet-b7_auto_aug-tf.yml | 30 ++++++++ 12 files changed, 567 insertions(+) create mode 100644 
models/public/efficientnet-b0-tf/efficientnet-b0-tf.md create mode 100644 models/public/efficientnet-b0-tf/model.yml create mode 100644 models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md create mode 100644 models/public/efficientnet-b0_auto_aug-tf/model.yml create mode 100644 models/public/efficientnet-b5-tf/efficientnet-b5-tf.md create mode 100644 models/public/efficientnet-b5-tf/model.yml create mode 100644 models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md create mode 100644 models/public/efficientnet-b7_auto_aug-tf/model.yml create mode 100644 tools/accuracy_checker/configs/efficientnet-b0-tf.yml create mode 100644 tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml create mode 100644 tools/accuracy_checker/configs/efficientnet-b5-tf.yml create mode 100644 tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml diff --git a/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md b/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md new file mode 100644 index 00000000000..32313740445 --- /dev/null +++ b/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md @@ -0,0 +1,73 @@ +# efficientnet-b0-tf + +## Use Case and High-Level Description + +The `efficientnet-b0-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +group of models designed to perform image classification. +This model was pretrained in TensorFlow\*. +All the EfficientNet models have been pretrained on the ImageNet image database. +For details about this family of models, check out the [repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.819 | +| MParams | 5.268 | +| Source framework | TensorFlow\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 75.70 | 75.70 | +| Top 5 | 92.76 | 92.76 | + +## Performance + +## Input + +### Original model + +Image, name - `image`, shape - `[1x224x224x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +### Converted model + +Image, name - `sub/placeholder_port_0`, shape - `[1x224x224x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `logits`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `efficientnet-b0/model/head/dense/MatMul`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE) \ No newline at end of file diff --git a/models/public/efficientnet-b0-tf/model.yml b/models/public/efficientnet-b0-tf/model.yml new file mode 100644 index 00000000000..5c0eec15332 --- /dev/null +++ b/models/public/efficientnet-b0-tf/model.yml @@ -0,0 +1,38 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b0-tf` model is one of the EfficientNet + group of models designed to perform image classification. This model was pretrained + in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image + database. + For details about this family of models, check out the repository + . +task_type: classification +files: + - name: efficientnet-b0.tar.gz + size: 47390720 + sha256: b82d670255bd48b0a122d571e5766091048a503209caf15e2cda58a41118c613 + source: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckpts/efficientnet-b0.tar.gz +postprocessing: + - $type: unpack_archive + format: gztar + file: efficientnet-b0.tar.gz +model_optimizer_args: + - --input_shape=[1,224,224,3] + - --input=0:sub + - --output=logits + - --input_meta_graph=$dl_dir/efficientnet-b0/model.ckpt.meta +framework: tf +license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md b/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md new file mode 100644 index 00000000000..446175edc3d --- /dev/null +++ b/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md @@ -0,0 +1,74 @@ +# efficientnet-b0_auto_aug-tf + +## Use Case and High-Level Description + +The `efficientnet-b0_auto_aug-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +group of models designed to perform image classification, trained with +[AutoAugmentation preprocessing](https://arxiv.org/abs/1805.09501). +This model was pretrainedin TensorFlow\*. +All the EfficientNet models have been pretrained on the ImageNet image database. +For details about this family of models, check out the [repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 0.819 | +| MParams | 5.268 | +| Source framework | TensorFlow\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 76.43 | 76.43 | +| Top 5 | 93.04 | 93.04 | + +## Performance + +## Input + +### Original model + +Image, name - `image`, shape - `[1x224x224x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +### Converted model + +Image, name - `sub/placeholder_port_0`, shape - `[1x224x224x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. 
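For illustration, a minimal sketch of assembling a blob with this shape and layout using OpenCV, which already returns images in BGR order; it covers only shape and channel order, not any cropping or normalization a particular pipeline may apply:

import cv2
import numpy as np

def prepare_input(image_path, size=224):
    image = cv2.imread(image_path)           # HxWxC, BGR
    image = cv2.resize(image, (size, size))  # size x size x 3
    return np.expand_dims(image, 0)          # 1 x size x size x 3, i.e. B,H,W,C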
+ +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `logits`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `efficientnet-b0/model/head/dense/MatMul`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE) \ No newline at end of file diff --git a/models/public/efficientnet-b0_auto_aug-tf/model.yml b/models/public/efficientnet-b0_auto_aug-tf/model.yml new file mode 100644 index 00000000000..d0e488856b8 --- /dev/null +++ b/models/public/efficientnet-b0_auto_aug-tf/model.yml @@ -0,0 +1,39 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b0_auto_aug-tf` model is one of the EfficientNet + group of models designed to perform image classification, trained with AutoAugmentation preprocessing + . This model was pretrained + in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image + database. + For details about this family of models, check out the repository + . +task_type: classification +files: + - name: efficientnet-b0.tar.gz + size: 39302973 + sha256: c1109c4842c2294d9df2de9fcebc28692d1fe48ff4447e265a5d8e2f74e0fe65 + source: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckptsaug/efficientnet-b0.tar.gz +postprocessing: + - $type: unpack_archive + format: gztar + file: efficientnet-b0.tar.gz +model_optimizer_args: + - --input_shape=[1,224,224,3] + - --input=0:sub + - --output=logits + - --input_meta_graph=$dl_dir/efficientnet-b0/model.ckpt.meta +framework: tf +license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md b/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md new file mode 100644 index 00000000000..a7d6287a5f9 --- /dev/null +++ b/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md @@ -0,0 +1,73 @@ +# efficientnet-b5-tf + +## Use Case and High-Level Description + +The `efficientnet-b5-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +group of models designed to perform image classification. +This model was pretrained in TensorFlow\*. +All the EfficientNet models have been pretrained on the ImageNet image database. +For details about this family of models, check out the [repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). 
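As a small usage sketch for the `1,1000` classifier output described in the Output section of this card, the top-5 ImageNet class indices can be recovered with an argsort; the function name is illustrative only:

import numpy as np

def top_k_classes(class_probabilities, k=5):
    scores = np.squeeze(class_probabilities)  # (1, 1000) -> (1000,)
    return np.argsort(scores)[::-1][:k]       # indices of the k highest-scoring classes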
+ +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 21.252 | +| MParams | 30.303 | +| Source framework | TensorFlow\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 83.33 | 83.33 | +| Top 5 | 96.67 | 96.67 | + +## Performance + +## Input + +### Original model + +Image, name - `image`, shape - `[1x456x456x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +### Converted model + +Image, name - `sub/placeholder_port_0`, shape - `[1x456x456x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `logits`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `efficientnet-b5/model/head/dense/MatMul`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE) \ No newline at end of file diff --git a/models/public/efficientnet-b5-tf/model.yml b/models/public/efficientnet-b5-tf/model.yml new file mode 100644 index 00000000000..540f5d964fc --- /dev/null +++ b/models/public/efficientnet-b5-tf/model.yml @@ -0,0 +1,38 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b0-tf` model is one of the EfficientNet + group of models designed to perform image classification. This model was pretrained + in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image + database. + For details about this family of models, check out the repository + . 
+task_type: classification +files: + - name: efficientnet-b5.tar.gz + size: 255918080 + sha256: 088c222266c64608da87d8730b3f5bf22da7677fd128357d8040a0389aab338e + source: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckpts/efficientnet-b5.tar.gz +postprocessing: + - $type: unpack_archive + format: gztar + file: efficientnet-b5.tar.gz +model_optimizer_args: + - --input_shape=[1,456,456,3] + - --input=0:sub + - --output=logits + - --input_meta_graph=$dl_dir/efficientnet-b5/model.ckpt.meta +framework: tf +license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md b/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md new file mode 100644 index 00000000000..6e6d059168f --- /dev/null +++ b/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md @@ -0,0 +1,74 @@ +# efficientnet-b7-tf + +## Use Case and High-Level Description + +The `efficientnet-b7_auto_aug-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +group of models designed to perform image classification, trained with +[AutoAugmentation preprocessing](https://arxiv.org/abs/1805.09501). +This model was pretrained in TensorFlow\*. +All the EfficientNet models have been pretrained on the ImageNet image database. +For details about this family of models, check out the [repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet). + +## Example + +## Specification + +| Metric | Value | +|-------------------|---------------| +| Type | Classification| +| GFLOPs | 77.618 | +| MParams | 66.193 | +| Source framework | TensorFlow\* | + +## Accuracy + +| Metric | Original model | Converted model | +| ------ | -------------- | --------------- | +| Top 1 | 84.68 | 84.68 | +| Top 5 | 97.09 | 97.09 | + +## Performance + +## Input + +### Original model + +Image, name - `image`, shape - `[1x600x600x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +### Converted model + +Image, name - `sub/placeholder_port_0`, shape - `[1x600x600x3]`, format is `[BxHxWxC]` where: + +- `B` - batch size +- `H` - height +- `W` - width +- `C` - channel + +Channel order is `BGR`. + +## Output + +### Original model + +Object classifier according to ImageNet classes, name - `logits`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +### Converted model + +Object classifier according to ImageNet classes, name - `efficientnet-b7/model/head/dense/MatMul`, shape - `1,1000`, output data format is `B,C` where: + +- `B` - batch size +- `C` - predicted probabilities for each class in [0, 1] range + +## Legal Information + +[LICENSE](https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE) \ No newline at end of file diff --git a/models/public/efficientnet-b7_auto_aug-tf/model.yml b/models/public/efficientnet-b7_auto_aug-tf/model.yml new file mode 100644 index 00000000000..1fb0dd73fbb --- /dev/null +++ b/models/public/efficientnet-b7_auto_aug-tf/model.yml @@ -0,0 +1,38 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + The `efficientnet-b0-tf` model is one of the EfficientNet + group of models designed to perform image classification. This model was pretrained + in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image + database. + For details about this family of models, check out the repository + . +task_type: classification +files: + - name: efficientnet-b7.tar.gz + size: 492077218 + sha256: b5705cc53da6fa3e953f8509063695ed50f7adebb2488144783e20df71d0fca8 + source: https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckptsaug/efficientnet-b7.tar.gz +postprocessing: + - $type: unpack_archive + format: gztar + file: efficientnet-b7.tar.gz +model_optimizer_args: + - --input_shape=[1,600,600,3] + - --input=0:sub + - --output=logits + - --input_meta_graph=$dl_dir/efficientnet-b7/model.ckpt.meta +framework: tf +license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0-tf.yml new file mode 100644 index 00000000000..bf74b849920 --- /dev/null +++ b/tools/accuracy_checker/configs/efficientnet-b0-tf.yml @@ -0,0 +1,30 @@ +models: + - name: efficientnet-b0 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.xml + weights: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.xml + weights: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 224 + use_pillow: True + interpolation: BICUBIC diff --git a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml new file mode 100644 index 00000000000..90879eb470c --- /dev/null +++ b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml @@ -0,0 +1,30 @@ +models: + - name: efficientnet-b0_auto_aug + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.xml + weights: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.xml + weights: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 224 + use_pillow: True + interpolation: BICUBIC \ No newline at end of file diff --git a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml b/tools/accuracy_checker/configs/efficientnet-b5-tf.yml new file mode 100644 
index 00000000000..ece7da4a263 --- /dev/null +++ b/tools/accuracy_checker/configs/efficientnet-b5-tf.yml @@ -0,0 +1,30 @@ +models: + - name: efficientnet-b5 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.xml + weights: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.xml + weights: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 456 + use_pillow: True + interpolation: BICUBIC \ No newline at end of file diff --git a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml new file mode 100644 index 00000000000..165138d1fda --- /dev/null +++ b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml @@ -0,0 +1,30 @@ +models: + - name: efficientnet-b7_auto_aug + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.xml + weights: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.bin + adapter: classification + cpu_extensions: AUTO + + - framework: dlsdk + tags: + - FP16 + model: public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.xml + weights: public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.bin + adapter: classification + cpu_extensions: AUTO + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 600 + use_pillow: True + interpolation: BICUBIC From e80fef28298d1dd92e003b178d09acc6226119d6 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Fri, 27 Sep 2019 15:10:47 +0300 Subject: [PATCH 119/927] update configs --- models/public/index.md | 8 +++++--- tools/accuracy_checker/configs/efficientnet-b0-tf.yml | 4 ++-- .../configs/efficientnet-b0_auto_aug-tf.yml | 4 ++-- tools/accuracy_checker/configs/efficientnet-b5-tf.yml | 4 ++-- .../configs/efficientnet-b7_auto_aug-tf.yml | 4 ++-- 5 files changed, 13 insertions(+), 11 deletions(-) diff --git a/models/public/index.md b/models/public/index.md index c3ee313b2c9..4cdc6d79c7b 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -17,9 +17,11 @@ The models can be downloaded via Model Downloader | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161<br>densenet-161-tf | | 14.128~15.561 | 28.666 |
| DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)<br>[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169<br>densenet-169-tf | | 6.16~6.788 | 14.139 |
| DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 |
-| EfficientNet B0 | [PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0-pytorch | 76.91/93.21 | 0.819 | 5.268 |
-| EfficientNet B5 | [PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 |
-| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 |
+| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0-tf/efficientnet-b0-tf.md)<br>[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0<br>efficientnet-b0-pytorch | 75.70~76.91/92.76~93.21 | 0.819 | 5.268 |
+| EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md) | efficientnet-b0_auto_aug-tf | 76.43/93.04 | 0.819 | 5.268 |
+| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5-tf/efficientnet-b5-tf.md)<br>[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5<br>efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 |
+| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 |
+| EfficientNet B7 AutoAugment | [TensorFlow\*](./efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md) | efficientnet-b7_auto_aug-tf | 84.68/97.09 | 77.618 | 66.193 |
| Inception (GoogleNet) V1 | [Caffe\*](./googlenet-v1/googlenet-v1.md) | googlenet-v1 | | 3.266 | 6.999 |
| Inception (GoogleNet) V2 | [Caffe\*](./googlenet-v2/googlenet-v2.md) | googlenet-v2 | | 4.058 | 11.185 |
| Inception (GoogleNet) V3 | [Caffe\*](./googlenet-v3/googlenet-v3.md)<br>[PyTorch\*](./googlenet-v3-pytorch/googlenet-v3-pytorch.md) | googlenet-v3
googlenet-v3-pytorch | | 11.469 | 23.817 | diff --git a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0-tf.yml index bf74b849920..1487ad46952 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0-tf.yml @@ -5,7 +5,7 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.xml + model: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.xml weights: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.bin adapter: classification cpu_extensions: AUTO @@ -13,7 +13,7 @@ models: - framework: dlsdk tags: - FP16 - model: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.xml + model: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.xml weights: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml index 90879eb470c..0ef041655c1 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml @@ -5,7 +5,7 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.xml + model: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.xml weights: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.bin adapter: classification cpu_extensions: AUTO @@ -13,7 +13,7 @@ models: - framework: dlsdk tags: - FP16 - model: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.xml + model: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.xml weights: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml b/tools/accuracy_checker/configs/efficientnet-b5-tf.yml index ece7da4a263..3809fb8ec72 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5-tf.yml @@ -5,7 +5,7 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.xml + model: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.xml weights: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.bin adapter: classification cpu_extensions: AUTO @@ -13,7 +13,7 @@ models: - framework: dlsdk tags: - FP16 - model: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.xml + model: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.xml weights: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml index 165138d1fda..a4047a10e3b 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml @@ -5,7 +5,7 @@ models: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.xml + model: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.xml weights: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.bin adapter: classification cpu_extensions: AUTO @@ -13,7 +13,7 @@ models: - framework: dlsdk tags: - FP16 - model: 
public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.xml + model: public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.xml weights: public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.bin adapter: classification cpu_extensions: AUTO From 516032de4c36cf305bf4c5009531064a16a6bdd4 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Mon, 30 Sep 2019 11:29:34 +0300 Subject: [PATCH 120/927] fix --- .../efficientnet-b0-tf/efficientnet-b0-tf.md | 2 +- models/public/efficientnet-b0-tf/model.yml | 1 + .../efficientnet-b0_auto_aug-tf.md | 2 +- .../efficientnet-b0_auto_aug-tf/model.yml | 1 + .../efficientnet-b5-tf/efficientnet-b5-tf.md | 2 +- models/public/efficientnet-b5-tf/model.yml | 1 + .../efficientnet-b7_auto_aug-tf.md | 2 +- .../efficientnet-b7_auto_aug-tf/model.yml | 1 + .../configs/efficientnet-b0-tf.yml | 27 ++- .../configs/efficientnet-b0_auto_aug-tf.yml | 29 ++- .../configs/efficientnet-b5-tf.yml | 29 ++- .../configs/efficientnet-b7_auto_aug-tf.yml | 27 ++- tools/downloader/license.txt | 210 +++++++++++++++++- 13 files changed, 323 insertions(+), 11 deletions(-) diff --git a/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md b/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md index 32313740445..a43ae4cca7e 100644 --- a/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md +++ b/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md @@ -39,7 +39,7 @@ Image, name - `image`, shape - `[1x224x224x3]`, format is `[BxHxWxC]` where: - `W` - width - `C` - channel -Channel order is `BGR`. +Channel order is `RGB`. ### Converted model diff --git a/models/public/efficientnet-b0-tf/model.yml b/models/public/efficientnet-b0-tf/model.yml index 5c0eec15332..2218f751b21 100644 --- a/models/public/efficientnet-b0-tf/model.yml +++ b/models/public/efficientnet-b0-tf/model.yml @@ -34,5 +34,6 @@ model_optimizer_args: - --input=0:sub - --output=logits - --input_meta_graph=$dl_dir/efficientnet-b0/model.ckpt.meta + - --reverse_input_channels framework: tf license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md b/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md index 446175edc3d..2216ac1ad01 100644 --- a/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md +++ b/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md @@ -40,7 +40,7 @@ Image, name - `image`, shape - `[1x224x224x3]`, format is `[BxHxWxC]` where: - `W` - width - `C` - channel -Channel order is `BGR`. +Channel order is `RGB`. 
### Converted model diff --git a/models/public/efficientnet-b0_auto_aug-tf/model.yml b/models/public/efficientnet-b0_auto_aug-tf/model.yml index d0e488856b8..78d4419e889 100644 --- a/models/public/efficientnet-b0_auto_aug-tf/model.yml +++ b/models/public/efficientnet-b0_auto_aug-tf/model.yml @@ -35,5 +35,6 @@ model_optimizer_args: - --input=0:sub - --output=logits - --input_meta_graph=$dl_dir/efficientnet-b0/model.ckpt.meta + - --reverse_input_channels framework: tf license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md b/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md index a7d6287a5f9..8dd05f84bef 100644 --- a/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md +++ b/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md @@ -39,7 +39,7 @@ Image, name - `image`, shape - `[1x456x456x3]`, format is `[BxHxWxC]` where: - `W` - width - `C` - channel -Channel order is `BGR`. +Channel order is `RGB`. ### Converted model diff --git a/models/public/efficientnet-b5-tf/model.yml b/models/public/efficientnet-b5-tf/model.yml index 540f5d964fc..5ec5e6dddff 100644 --- a/models/public/efficientnet-b5-tf/model.yml +++ b/models/public/efficientnet-b5-tf/model.yml @@ -34,5 +34,6 @@ model_optimizer_args: - --input=0:sub - --output=logits - --input_meta_graph=$dl_dir/efficientnet-b5/model.ckpt.meta + - --reverse_input_channels framework: tf license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md b/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md index 6e6d059168f..b480957b3e3 100644 --- a/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md +++ b/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md @@ -40,7 +40,7 @@ Image, name - `image`, shape - `[1x600x600x3]`, format is `[BxHxWxC]` where: - `W` - width - `C` - channel -Channel order is `BGR`. +Channel order is `RGB`. 
### Converted model diff --git a/models/public/efficientnet-b7_auto_aug-tf/model.yml b/models/public/efficientnet-b7_auto_aug-tf/model.yml index 1fb0dd73fbb..923d6fa76a8 100644 --- a/models/public/efficientnet-b7_auto_aug-tf/model.yml +++ b/models/public/efficientnet-b7_auto_aug-tf/model.yml @@ -34,5 +34,6 @@ model_optimizer_args: - --input=0:sub - --output=logits - --input_meta_graph=$dl_dir/efficientnet-b7/model.ckpt.meta + - --reverse_input_channels framework: tf license: https://raw.githubusercontent.com/tensorflow/tpu/master/LICENSE \ No newline at end of file diff --git a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0-tf.yml index 1487ad46952..4fa6556224c 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0-tf.yml @@ -1,5 +1,30 @@ models: - - name: efficientnet-b0 + - name: efficientnet-b0-tf + + launchers: + - framework: tf + model: public/efficientnet-b0-tf/efficientnet-b0/model.ckpt.meta + adapter: classification + output_names: + - logits + inputs: + - name: IteratorGetNext + type: INPUT + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: bgr_to_rgb + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 224 + use_pillow: True + interpolation: BICUBIC + + + - name: efficientnet-b0-tf launchers: - framework: dlsdk diff --git a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml index 0ef041655c1..1ca0f400109 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml @@ -1,5 +1,30 @@ models: - - name: efficientnet-b0_auto_aug + - name: efficientnet-b0_auto_aug-tf + + launchers: + - framework: tf + model: public/efficientnet-b0_auto_aug-tf/efficientnet-b0/model.ckpt.meta + adapter: classification + output_names: + - logits + inputs: + - name: IteratorGetNext + type: INPUT + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: bgr_to_rgb + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 224 + use_pillow: True + interpolation: BICUBIC + + + - name: efficientnet-b0_auto_aug-tf launchers: - framework: dlsdk @@ -27,4 +52,4 @@ models: - type: resize size: 224 use_pillow: True - interpolation: BICUBIC \ No newline at end of file + interpolation: BICUBIC diff --git a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml b/tools/accuracy_checker/configs/efficientnet-b5-tf.yml index 3809fb8ec72..728802f7ee0 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5-tf.yml @@ -1,5 +1,30 @@ models: - - name: efficientnet-b5 + - name: efficientnet-b5-tf + + launchers: + - framework: tf + model: public/efficientnet-b5-tf/efficientnet-b5/model.ckpt.meta + adapter: classification + output_names: + - logits + inputs: + - name: IteratorGetNext + type: INPUT + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: bgr_to_rgb + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 456 + use_pillow: True + interpolation: BICUBIC + + + - name: efficientnet-b5-tf launchers: - framework: dlsdk @@ -27,4 +52,4 @@ models: - type: resize size: 456 use_pillow: True - interpolation: BICUBIC \ No newline at end of file + interpolation: BICUBIC diff --git a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml 
b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml index a4047a10e3b..36c5e5a26c5 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml @@ -1,5 +1,30 @@ models: - - name: efficientnet-b7_auto_aug + - name: efficientnet-b7_auto_aug-tf + + launchers: + - framework: tf + model: public/efficientnet-b7_auto_aug-tf/efficientnet-b7/model.ckpt.meta + adapter: classification + output_names: + - logits + inputs: + - name: IteratorGetNext + type: INPUT + + datasets: + - name: imagenet_1000_classes + preprocessing: + - type: bgr_to_rgb + - type: crop + central_fraction: 0.875 + use_pillow: True + - type: resize + size: 600 + use_pillow: True + interpolation: BICUBIC + + + - name: efficientnet-b7_auto_aug-tf launchers: - framework: dlsdk diff --git a/tools/downloader/license.txt b/tools/downloader/license.txt index 9297083956b..7c63c33bce8 100644 --- a/tools/downloader/license.txt +++ b/tools/downloader/license.txt @@ -2,7 +2,7 @@ Configuration file for the automation tools includes following models: ================================================================================================== -* densenet-121, densenet-161, densenet-169, densenet-201 - Densely Connected Convolutional Networks https://github.com/shicai/DenseNet-Caffe +* densenet-121, densenet-161, densenet-169, densenet-201 - Densely Connected Convolutional Networks https://github.com/shicai/DenseNet-Caffe License terms: @@ -36,6 +36,214 @@ License terms: ================================================================================================== +* efficientnet-b0-tf, efficientnet-b0_auto_aug-tf, efficientnet-b5-tf, efficientnet-b7_auto_aug-tf - EfficientNet https://arxiv.org/abs/1905.11946 + +Copyright 2017 The TensorFlow Authors. All rights reserved. + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2017, The TensorFlow Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+ +================================================================================================== + * caffenet - CaffeNet https://arxiv.org/abs/1408.5093 License terms: From 9d9f43a993f2ab16a44810cb396094d97aca9c5e Mon Sep 17 00:00:00 2001 From: jkamelin Date: Fri, 11 Oct 2019 17:02:11 +0300 Subject: [PATCH 121/927] remove suffix --- .../efficientnet-b0.md} | 4 ++-- .../model.yml | 2 +- .../efficientnet-b0_auto_aug.md} | 4 ++-- .../model.yml | 2 +- .../efficientnet-b5.md} | 4 ++-- .../model.yml | 2 +- .../efficientnet-b7_auto_aug.md} | 4 ++-- .../model.yml | 2 +- models/public/index.md | 7 +++++++ ...{efficientnet-b0-tf.yml => efficientnet-b0.yml} | 14 +++++++------- ...uto_aug-tf.yml => efficientnet-b0_auto_aug.yml} | 14 +++++++------- ...{efficientnet-b5-tf.yml => efficientnet-b5.yml} | 14 +++++++------- ...uto_aug-tf.yml => efficientnet-b7_auto_aug.yml} | 14 +++++++------- tools/downloader/license.txt | 2 +- 14 files changed, 48 insertions(+), 41 deletions(-) rename models/public/{efficientnet-b0-tf/efficientnet-b0-tf.md => efficientnet-b0/efficientnet-b0.md} (94%) rename models/public/{efficientnet-b0-tf => efficientnet-b0}/model.yml (94%) rename models/public/{efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md => efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md} (93%) rename models/public/{efficientnet-b0_auto_aug-tf => efficientnet-b0_auto_aug}/model.yml (94%) rename models/public/{efficientnet-b5-tf/efficientnet-b5-tf.md => efficientnet-b5/efficientnet-b5.md} (94%) rename models/public/{efficientnet-b5-tf => efficientnet-b5}/model.yml (94%) rename models/public/{efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md => efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md} (93%) rename models/public/{efficientnet-b7_auto_aug-tf => efficientnet-b7_auto_aug}/model.yml (93%) rename tools/accuracy_checker/configs/{efficientnet-b0-tf.yml => efficientnet-b0.yml} (70%) rename tools/accuracy_checker/configs/{efficientnet-b0_auto_aug-tf.yml => efficientnet-b0_auto_aug.yml} (65%) rename tools/accuracy_checker/configs/{efficientnet-b5-tf.yml => efficientnet-b5.yml} (70%) rename tools/accuracy_checker/configs/{efficientnet-b7_auto_aug-tf.yml => efficientnet-b7_auto_aug.yml} (65%) diff --git a/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md b/models/public/efficientnet-b0/efficientnet-b0.md similarity index 94% rename from models/public/efficientnet-b0-tf/efficientnet-b0-tf.md rename to models/public/efficientnet-b0/efficientnet-b0.md index a43ae4cca7e..d32e500dfa4 100644 --- a/models/public/efficientnet-b0-tf/efficientnet-b0-tf.md +++ b/models/public/efficientnet-b0/efficientnet-b0.md @@ -1,8 +1,8 @@ -# efficientnet-b0-tf +# efficientnet-b0 ## Use Case and High-Level Description -The `efficientnet-b0-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +The `efficientnet-b0` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image database. diff --git a/models/public/efficientnet-b0-tf/model.yml b/models/public/efficientnet-b0/model.yml similarity index 94% rename from models/public/efficientnet-b0-tf/model.yml rename to models/public/efficientnet-b0/model.yml index 2218f751b21..c7790939b7d 100644 --- a/models/public/efficientnet-b0-tf/model.yml +++ b/models/public/efficientnet-b0/model.yml @@ -13,7 +13,7 @@ # limitations under the License. 
description: >- - The `efficientnet-b0-tf` model is one of the EfficientNet + The `efficientnet-b0` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image database. diff --git a/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md b/models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md similarity index 93% rename from models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md rename to models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md index 2216ac1ad01..09eb6ad4ae9 100644 --- a/models/public/efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md +++ b/models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md @@ -1,8 +1,8 @@ -# efficientnet-b0_auto_aug-tf +# efficientnet-b0_auto_aug ## Use Case and High-Level Description -The `efficientnet-b0_auto_aug-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +The `efficientnet-b0_auto_aug` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification, trained with [AutoAugmentation preprocessing](https://arxiv.org/abs/1805.09501). This model was pretrainedin TensorFlow\*. diff --git a/models/public/efficientnet-b0_auto_aug-tf/model.yml b/models/public/efficientnet-b0_auto_aug/model.yml similarity index 94% rename from models/public/efficientnet-b0_auto_aug-tf/model.yml rename to models/public/efficientnet-b0_auto_aug/model.yml index 78d4419e889..171a5dd61a3 100644 --- a/models/public/efficientnet-b0_auto_aug-tf/model.yml +++ b/models/public/efficientnet-b0_auto_aug/model.yml @@ -13,7 +13,7 @@ # limitations under the License. description: >- - The `efficientnet-b0_auto_aug-tf` model is one of the EfficientNet + The `efficientnet-b0_auto_aug` model is one of the EfficientNet group of models designed to perform image classification, trained with AutoAugmentation preprocessing . This model was pretrained in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image diff --git a/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md b/models/public/efficientnet-b5/efficientnet-b5.md similarity index 94% rename from models/public/efficientnet-b5-tf/efficientnet-b5-tf.md rename to models/public/efficientnet-b5/efficientnet-b5.md index 8dd05f84bef..f66183ce902 100644 --- a/models/public/efficientnet-b5-tf/efficientnet-b5-tf.md +++ b/models/public/efficientnet-b5/efficientnet-b5.md @@ -1,8 +1,8 @@ -# efficientnet-b5-tf +# efficientnet-b5 ## Use Case and High-Level Description -The `efficientnet-b5-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +The `efficientnet-b5` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification. This model was pretrained in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image database. diff --git a/models/public/efficientnet-b5-tf/model.yml b/models/public/efficientnet-b5/model.yml similarity index 94% rename from models/public/efficientnet-b5-tf/model.yml rename to models/public/efficientnet-b5/model.yml index 5ec5e6dddff..4b51c4815eb 100644 --- a/models/public/efficientnet-b5-tf/model.yml +++ b/models/public/efficientnet-b5/model.yml @@ -13,7 +13,7 @@ # limitations under the License. 
description: >- - The `efficientnet-b0-tf` model is one of the EfficientNet + The `efficientnet-b5` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image database. diff --git a/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md b/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md similarity index 93% rename from models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md rename to models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md index b480957b3e3..6e1b3b2b3f2 100644 --- a/models/public/efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md +++ b/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md @@ -1,8 +1,8 @@ -# efficientnet-b7-tf +# efficientnet-b7 ## Use Case and High-Level Description -The `efficientnet-b7_auto_aug-tf` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) +The `efficientnet-b7_auto_aug` model is one of the [EfficientNet](https://arxiv.org/abs/1905.11946) group of models designed to perform image classification, trained with [AutoAugmentation preprocessing](https://arxiv.org/abs/1805.09501). This model was pretrained in TensorFlow\*. diff --git a/models/public/efficientnet-b7_auto_aug-tf/model.yml b/models/public/efficientnet-b7_auto_aug/model.yml similarity index 93% rename from models/public/efficientnet-b7_auto_aug-tf/model.yml rename to models/public/efficientnet-b7_auto_aug/model.yml index 923d6fa76a8..53b39ffd5fa 100644 --- a/models/public/efficientnet-b7_auto_aug-tf/model.yml +++ b/models/public/efficientnet-b7_auto_aug/model.yml @@ -13,7 +13,7 @@ # limitations under the License. description: >- - The `efficientnet-b0-tf` model is one of the EfficientNet + The `efficientnet-b7_auto_aug` model is one of the EfficientNet group of models designed to perform image classification. This model was pretrained in TensorFlow\*. All the EfficientNet models have been pretrained on the ImageNet image database. diff --git a/models/public/index.md b/models/public/index.md index 4cdc6d79c7b..9d779e43d1f 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -17,11 +17,18 @@ The models can be downloaded via Model Downloader | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161
densenet-161-tf | | 14.128~15.561 | 28.666 | | DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)
[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169
densenet-169-tf | | 6.16~6.788 | 14.139 | | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | +<<<<<<< 516032de4c36cf305bf4c5009531064a16a6bdd4 | EfficientNet B0 | [TensorFlow\*](./efficientnet-b0-tf/efficientnet-b0-tf.md)
[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0
efficientnet-b0-pytorch | 75.70~76.91/92.76~93.21 | 0.819 | 5.268 | | EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md) | efficientnet-b0_auto_aug-tf | 76.43/93.04 | 0.819 | 5.268 | | EfficientNet B5 | [TensorFlow\*](./efficientnet-b5-tf/efficientnet-b5-tf.md)
[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5
efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 | | EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 | | EfficientNet B7 AutoAugment | [TensorFlow\*](./efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md) | efficientnet-b7_auto_aug-tf | 84.68/97.09 | 77.618 | 66.193 | +======= +| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0/efficientnet-b0.md) | efficientnet-b0 | 75.70/92.76 | 0.819 | 5.268 | +| EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md) | efficientnet-b0_auto_aug | 76.43/93.04 | 0.819 | 5.268 | +| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5/efficientnet-b5.md) | efficientnet-b5 | 83.33/96.67 | 21.252 | 30.303 | +| EfficientNet B7 AutoAugment | [TensorFlow\*](./efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md) | efficientnet-b7_auto_aug | 84.68/97.09 | 77.618 | 66.193 | +>>>>>>> remove suffix | Inception (GoogleNet) V1 | [Caffe\*](./googlenet-v1/googlenet-v1.md) | googlenet-v1 | | 3.266 | 6.999 | | Inception (GoogleNet) V2 | [Caffe\*](./googlenet-v2/googlenet-v2.md) | googlenet-v2 | | 4.058 | 11.185 | | Inception (GoogleNet) V3 | [Caffe\*](./googlenet-v3/googlenet-v3.md)
[PyTorch\*](./googlenet-v3-pytorch/googlenet-v3-pytorch.md) | googlenet-v3
googlenet-v3-pytorch | | 11.469 | 23.817 | diff --git a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0.yml similarity index 70% rename from tools/accuracy_checker/configs/efficientnet-b0-tf.yml rename to tools/accuracy_checker/configs/efficientnet-b0.yml index 4fa6556224c..d1f44b2eaef 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0.yml @@ -1,9 +1,9 @@ models: - - name: efficientnet-b0-tf + - name: efficientnet-b0 launchers: - framework: tf - model: public/efficientnet-b0-tf/efficientnet-b0/model.ckpt.meta + model: public/efficientnet-b0/efficientnet-b0/model.ckpt.meta adapter: classification output_names: - logits @@ -24,22 +24,22 @@ models: interpolation: BICUBIC - - name: efficientnet-b0-tf + - name: efficientnet-b0 launchers: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.xml - weights: public/efficientnet-b0-tf/FP32/efficientnet-b0-tf.bin + model: public/efficientnet-b0/FP32/efficientnet-b0.xml + weights: public/efficientnet-b0/FP32/efficientnet-b0.bin adapter: classification cpu_extensions: AUTO - framework: dlsdk tags: - FP16 - model: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.xml - weights: public/efficientnet-b0-tf/FP16/efficientnet-b0-tf.bin + model: public/efficientnet-b0/FP16/efficientnet-b0.xml + weights: public/efficientnet-b0/FP16/efficientnet-b0.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml similarity index 65% rename from tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml rename to tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml index 1ca0f400109..44e8f4ae13a 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml @@ -1,9 +1,9 @@ models: - - name: efficientnet-b0_auto_aug-tf + - name: efficientnet-b0_auto_aug launchers: - framework: tf - model: public/efficientnet-b0_auto_aug-tf/efficientnet-b0/model.ckpt.meta + model: public/efficientnet-b0_auto_aug/efficientnet-b0/model.ckpt.meta adapter: classification output_names: - logits @@ -24,22 +24,22 @@ models: interpolation: BICUBIC - - name: efficientnet-b0_auto_aug-tf + - name: efficientnet-b0_auto_aug launchers: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.xml - weights: public/efficientnet-b0_auto_aug-tf/FP32/efficientnet-b0_auto_aug-tf.bin + model: public/efficientnet-b0_auto_aug/FP32/efficientnet-b0_auto_aug.xml + weights: public/efficientnet-b0_auto_aug/FP32/efficientnet-b0_auto_aug.bin adapter: classification cpu_extensions: AUTO - framework: dlsdk tags: - FP16 - model: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.xml - weights: public/efficientnet-b0_auto_aug-tf/FP16/efficientnet-b0_auto_aug-tf.bin + model: public/efficientnet-b0_auto_aug/FP16/efficientnet-b0_auto_aug.xml + weights: public/efficientnet-b0_auto_aug/FP16/efficientnet-b0_auto_aug.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml b/tools/accuracy_checker/configs/efficientnet-b5.yml similarity index 70% rename from tools/accuracy_checker/configs/efficientnet-b5-tf.yml rename to tools/accuracy_checker/configs/efficientnet-b5.yml index 728802f7ee0..efdfc5266a2 100644 --- 
a/tools/accuracy_checker/configs/efficientnet-b5-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5.yml @@ -1,9 +1,9 @@ models: - - name: efficientnet-b5-tf + - name: efficientnet-b5 launchers: - framework: tf - model: public/efficientnet-b5-tf/efficientnet-b5/model.ckpt.meta + model: public/efficientnet-b5/efficientnet-b5/model.ckpt.meta adapter: classification output_names: - logits @@ -24,22 +24,22 @@ models: interpolation: BICUBIC - - name: efficientnet-b5-tf + - name: efficientnet-b5 launchers: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.xml - weights: public/efficientnet-b5-tf/FP32/efficientnet-b5-tf.bin + model: public/efficientnet-b5/FP32/efficientnet-b5.xml + weights: public/efficientnet-b5/FP32/efficientnet-b5.bin adapter: classification cpu_extensions: AUTO - framework: dlsdk tags: - FP16 - model: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.xml - weights: public/efficientnet-b5-tf/FP16/efficientnet-b5-tf.bin + model: public/efficientnet-b5/FP16/efficientnet-b5.xml + weights: public/efficientnet-b5/FP16/efficientnet-b5.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml similarity index 65% rename from tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml rename to tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml index 36c5e5a26c5..e18eb906d06 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug-tf.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml @@ -1,9 +1,9 @@ models: - - name: efficientnet-b7_auto_aug-tf + - name: efficientnet-b7_auto_aug launchers: - framework: tf - model: public/efficientnet-b7_auto_aug-tf/efficientnet-b7/model.ckpt.meta + model: public/efficientnet-b7_auto_aug/efficientnet-b7/model.ckpt.meta adapter: classification output_names: - logits @@ -24,22 +24,22 @@ models: interpolation: BICUBIC - - name: efficientnet-b7_auto_aug-tf + - name: efficientnet-b7_auto_aug launchers: - framework: dlsdk tags: - FP32 - model: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.xml - weights: public/efficientnet-b7_auto_aug-tf/FP32/efficientnet-b7_auto_aug-tf.bin + model: public/efficientnet-b7_auto_aug/FP32/efficientnet-b7_auto_aug.xml + weights: public/efficientnet-b7_auto_aug/FP32/efficientnet-b7_auto_aug.bin adapter: classification cpu_extensions: AUTO - framework: dlsdk tags: - FP16 - model: public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.xml - weights: public/efficientnet-b7_auto_aug-tf/FP16/efficientnet-b7_auto_aug-tf.bin + model: public/efficientnet-b7_auto_aug/FP16/efficientnet-b7_auto_aug.xml + weights: public/efficientnet-b7_auto_aug/FP16/efficientnet-b7_auto_aug.bin adapter: classification cpu_extensions: AUTO diff --git a/tools/downloader/license.txt b/tools/downloader/license.txt index 7c63c33bce8..9c5dcf910ec 100644 --- a/tools/downloader/license.txt +++ b/tools/downloader/license.txt @@ -36,7 +36,7 @@ License terms: ================================================================================================== -* efficientnet-b0-tf, efficientnet-b0_auto_aug-tf, efficientnet-b5-tf, efficientnet-b7_auto_aug-tf - EfficientNet https://arxiv.org/abs/1905.11946 +* efficientnet-b0, efficientnet-b0_auto_aug, efficientnet-b5, efficientnet-b7_auto_aug - EfficientNet https://arxiv.org/abs/1905.11946 Copyright 2017 The TensorFlow Authors. All rights reserved. 
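The accuracy-checker configs renamed above all declare the same preprocessing chain (BGR-to-RGB conversion, a central crop with fraction 0.875, then a bicubic Pillow resize to the model's input size). The sketch below only mirrors those config values for reference; it is not the accuracy checker's own implementation, and the image path, function name, and use of Pillow/NumPy are illustrative assumptions.

```python
# Standalone sketch of the preprocessing declared in the configs above for
# efficientnet-b5: central crop (central_fraction=0.875), bicubic resize to 456,
# RGB channel order. Mirrors the config values only; not the checker's code.
import numpy as np
from PIL import Image

def preprocess(image_path, size=456, central_fraction=0.875):
    image = Image.open(image_path).convert('RGB')   # load directly as RGB
    width, height = image.size
    crop_w = int(width * central_fraction)
    crop_h = int(height * central_fraction)
    left = (width - crop_w) // 2
    top = (height - crop_h) // 2
    image = image.crop((left, top, left + crop_w, top + crop_h))
    image = image.resize((size, size), Image.BICUBIC)
    return np.asarray(image, dtype=np.float32)

# Example (placeholder path):
# blob = preprocess('example.jpg')   # -> (456, 456, 3) RGB array
```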
From 95c6e32911f852f548024c8871f8649344eada9c Mon Sep 17 00:00:00 2001 From: jkamelin Date: Mon, 14 Oct 2019 13:06:04 +0300 Subject: [PATCH 122/927] rebase --- models/public/index.md | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/models/public/index.md b/models/public/index.md index 9d779e43d1f..3451848855a 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -17,18 +17,11 @@ The models can be downloaded via Model Downloader | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161
densenet-161-tf | | 14.128~15.561 | 28.666 | | DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)
[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169
densenet-169-tf | | 6.16~6.788 | 14.139 | | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | -<<<<<<< 516032de4c36cf305bf4c5009531064a16a6bdd4 -| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0-tf/efficientnet-b0-tf.md)
[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0
efficientnet-b0-pytorch | 75.70~76.91/92.76~93.21 | 0.819 | 5.268 | -| EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug-tf/efficientnet-b0_auto_aug-tf.md) | efficientnet-b0_auto_aug-tf | 76.43/93.04 | 0.819 | 5.268 | -| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5-tf/efficientnet-b5-tf.md)
[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5
efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 | -| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 | -| EfficientNet B7 AutoAugment | [TensorFlow\*](./efficientnet-b7_auto_aug-tf/efficientnet-b7_auto_aug-tf.md) | efficientnet-b7_auto_aug-tf | 84.68/97.09 | 77.618 | 66.193 | -======= -| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0/efficientnet-b0.md) | efficientnet-b0 | 75.70/92.76 | 0.819 | 5.268 | +| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0/efficientnet-b0.md)
[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0
efficientnet-b0-pytorch | 75.70~76.91/92.76~93.21 | 0.819 | 5.268 | | EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md) | efficientnet-b0_auto_aug | 76.43/93.04 | 0.819 | 5.268 | -| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5/efficientnet-b5.md) | efficientnet-b5 | 83.33/96.67 | 21.252 | 30.303 | +| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5/efficientnet-b5.md)
[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5
efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 | +| EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 | | EfficientNet B7 AutoAugment | [TensorFlow\*](./efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md) | efficientnet-b7_auto_aug | 84.68/97.09 | 77.618 | 66.193 | ->>>>>>> remove suffix | Inception (GoogleNet) V1 | [Caffe\*](./googlenet-v1/googlenet-v1.md) | googlenet-v1 | | 3.266 | 6.999 | | Inception (GoogleNet) V2 | [Caffe\*](./googlenet-v2/googlenet-v2.md) | googlenet-v2 | | 4.058 | 11.185 | | Inception (GoogleNet) V3 | [Caffe\*](./googlenet-v3/googlenet-v3.md)
[PyTorch\*](./googlenet-v3-pytorch/googlenet-v3-pytorch.md) | googlenet-v3
googlenet-v3-pytorch | | 11.469 | 23.817 | From 33f847c785bf024d9c0ec003e6bcbc6b7fc3dc03 Mon Sep 17 00:00:00 2001 From: jkamelin Date: Mon, 14 Oct 2019 13:24:13 +0300 Subject: [PATCH 123/927] fix index.md --- models/public/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/models/public/index.md b/models/public/index.md index 3451848855a..6505a671dd4 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -17,7 +17,7 @@ The models can be downloaded via Model Downloader | DenseNet 161 | [Caffe\*](./densenet-161/densenet-161.md)
[TensorFlow\*](./densenet-161-tf/densenet-161-tf.md) | densenet-161
densenet-161-tf | | 14.128~15.561 | 28.666 | | DenseNet 169 | [Caffe\*](./densenet-169/densenet-169.md)
[TensorFlow\*](./densenet-169-tf/densenet-169-tf.md) | densenet-169
densenet-169-tf | | 6.16~6.788 | 14.139 | | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | -| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0/efficientnet-b0.md)
[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0
efficientnet-b0-pytorch | 75.70~76.91/92.76~93.21 | 0.819 | 5.268 | +| EfficientNet B0 | [TensorFlow\*](./efficientnet-b0/efficientnet-b0.md)
[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0
efficientnet-b0-pytorch | 75.70/92.76
76.91/93.21 | 0.819 | 5.268 | | EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md) | efficientnet-b0_auto_aug | 76.43/93.04 | 0.819 | 5.268 | | EfficientNet B5 | [TensorFlow\*](./efficientnet-b5/efficientnet-b5.md)
[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5
efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 | | EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 | From 031ac46c751b96b278a21f3568fe0b11e95f4f8b Mon Sep 17 00:00:00 2001 From: jkamelin Date: Mon, 14 Oct 2019 14:46:29 +0300 Subject: [PATCH 124/927] fixes --- .../public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md | 2 +- models/public/index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md b/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md index 6e1b3b2b3f2..76a3ec6b42d 100644 --- a/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md +++ b/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md @@ -1,4 +1,4 @@ -# efficientnet-b7 +# efficientnet-b7_auto_aug ## Use Case and High-Level Description diff --git a/models/public/index.md b/models/public/index.md index 6505a671dd4..7e46f69f07c 100644 --- a/models/public/index.md +++ b/models/public/index.md @@ -19,7 +19,7 @@ The models can be downloaded via Model Downloader | DenseNet 201 | [Caffe\*](./densenet-201/densenet-201.md) | densenet-201 | | 8.673 | 20.001 | | EfficientNet B0 | [TensorFlow\*](./efficientnet-b0/efficientnet-b0.md)
[PyTorch\*](./efficientnet-b0-pytorch/efficientnet-b0-pytorch.md) | efficientnet-b0
efficientnet-b0-pytorch | 75.70/92.76
76.91/93.21 | 0.819 | 5.268 | | EfficientNet B0 AutoAugment | [TensorFlow\*](./efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md) | efficientnet-b0_auto_aug | 76.43/93.04 | 0.819 | 5.268 | -| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5/efficientnet-b5.md)
[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5
efficientnet-b5-pytorch | 83.69/96.71 | 21.252 | 30.303 | +| EfficientNet B5 | [TensorFlow\*](./efficientnet-b5/efficientnet-b5.md)
[PyTorch\*](./efficientnet-b5-pytorch/efficientnet-b5-pytorch.md) | efficientnet-b5
efficientnet-b5-pytorch | 83.33/96.67
83.69/96.71 | 21.252 | 30.303 | | EfficientNet B7 | [PyTorch\*](./efficientnet-b7-pytorch/efficientnet-b7-pytorch.md) | efficientnet-b7-pytorch | 84.42/96.91 | 77.618 | 66.193 | | EfficientNet B7 AutoAugment | [TensorFlow\*](./efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md) | efficientnet-b7_auto_aug | 84.68/97.09 | 77.618 | 66.193 | | Inception (GoogleNet) V1 | [Caffe\*](./googlenet-v1/googlenet-v1.md) | googlenet-v1 | | 3.266 | 6.999 | From 129b50f2e75688398d82f290e359010b0a2ed5f6 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Wed, 11 Sep 2019 11:36:30 +0300 Subject: [PATCH 125/927] demos/tests add py demo tests --- ci/requirements-demos.txt | 1 + .../python_demos/action_recognition/README.md | 1 + .../action_recognition/action_recognition.py | 7 +- .../result_renderer.py | 6 +- .../action_recognition_demo/timer.py | 2 +- .../face_recognition_demo/models.lst | 6 ++ .../instance_segmentation_demo/README.md | 1 + .../instance_segmentation_demo.py | 8 +- .../README.md | 1 + .../multi_camera_multi_person_tracking.py | 4 +- .../object_detection_demo_ssd_async/README.md | 1 + .../models.lst | 1 + .../object_detection_demo_ssd_async.py | 26 +++--- demos/python_demos/requirements.txt | 3 +- demos/tests/cases.py | 87 +++++++++++++++++-- demos/tests/image_sequences.py | 54 ++++++++++++ 16 files changed, 179 insertions(+), 30 deletions(-) create mode 100644 demos/python_demos/face_recognition_demo/models.lst diff --git a/ci/requirements-demos.txt b/ci/requirements-demos.txt index 9d13b7e9119..65992e864fb 100644 --- a/ci/requirements-demos.txt +++ b/ci/requirements-demos.txt @@ -1 +1,2 @@ numpy==1.17.2 ; python_version >= "3.4" +scipy==1.3.1 diff --git a/demos/python_demos/action_recognition/README.md b/demos/python_demos/action_recognition/README.md index aa18d773fa8..3aef50f6677 100644 --- a/demos/python_demos/action_recognition/README.md +++ b/demos/python_demos/action_recognition/README.md @@ -60,6 +60,7 @@ Options: --fps FPS Optional. FPS for renderer -lb LABELS, --labels LABELS Optional. Path to file with label names + --no_show Optional. Don't show output ``` Running the application with an empty list of options yields the usage message given above and an error message. diff --git a/demos/python_demos/action_recognition/action_recognition.py b/demos/python_demos/action_recognition/action_recognition.py index 1a4cd54a0d9..b8355977fc2 100755 --- a/demos/python_demos/action_recognition/action_recognition.py +++ b/demos/python_demos/action_recognition/action_recognition.py @@ -28,9 +28,9 @@ from os import path -def video_demo(encoder, decoder, videos, fps=30, labels=None): +def video_demo(encoder, decoder, videos, no_show, fps=30, labels=None): """Continuously run demo on provided video list""" - result_presenter = ResultRenderer(labels=labels) + result_presenter = ResultRenderer(no_show=no_show, labels=labels) run_pipeline(videos, encoder, decoder, result_presenter.render_frame, fps=fps) @@ -54,6 +54,7 @@ def build_argparser(): default="CPU", type=str) args.add_argument("--fps", help="Optional. FPS for renderer", default=30, type=int) args.add_argument("-lb", "--labels", help="Optional. Path to file with label names", type=str) + args.add_argument("--no_show", action='store_true', help="Optional. 
Don't show output") return parser @@ -102,7 +103,7 @@ def main(): encoder = IEModel(encoder_xml, encoder_bin, ie, encoder_target_device, num_requests=(3 if args.device == 'MYRIAD' else 1)) decoder = IEModel(decoder_xml, decoder_bin, ie, decoder_target_device, num_requests=2) - video_demo(encoder, decoder, videos, args.fps, labels) + video_demo(encoder, decoder, videos, args.no_show, args.fps, labels) if __name__ == '__main__': diff --git a/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py b/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py index 52fe5fdaa93..cc21779abfe 100644 --- a/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py +++ b/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py @@ -31,8 +31,9 @@ class ResultRenderer(object): - def __init__(self, display_fps=False, display_confidence=True, number_of_predictions=1, labels=None, + def __init__(self, no_show, display_fps=False, display_confidence=True, number_of_predictions=1, labels=None, output_height=720): + self.no_show = no_show self.number_of_predictions = number_of_predictions self.display_confidence = display_confidence self.display_fps = display_fps @@ -84,7 +85,8 @@ def render_frame(self, frame, logits, timers, frame_ind): cv2.putText(frame, "Inference time: {:.2f}ms ({:.2f} FPS)".format(inference_time, fps), text_loc, FONT_STYLE, FONT_SIZE, FONT_COLOR) - cv2.imshow("Action Recognition", frame) + if not self.no_show: + cv2.imshow("Action Recognition", frame) key = cv2.waitKey(1) & 0xFF if key in {ord('q'), ord('Q'), 27}: diff --git a/demos/python_demos/action_recognition/action_recognition_demo/timer.py b/demos/python_demos/action_recognition/action_recognition_demo/timer.py index a1f9e639905..7d3c3323f80 100644 --- a/demos/python_demos/action_recognition/action_recognition_demo/timer.py +++ b/demos/python_demos/action_recognition/action_recognition_demo/timer.py @@ -60,7 +60,7 @@ def time_section(self): self.tock() def __repr__(self): - return "{:.2f}ms (±{:.2f}) {:.2f}fps".format(self.avg, self.std, self.fps) + return "{:.2f}ms (std: {:.2f}) {:.2f}fps".format(self.avg, self.std, self.fps) class TimerGroup: diff --git a/demos/python_demos/face_recognition_demo/models.lst b/demos/python_demos/face_recognition_demo/models.lst new file mode 100644 index 00000000000..02c47c13233 --- /dev/null +++ b/demos/python_demos/face_recognition_demo/models.lst @@ -0,0 +1,6 @@ +# This file can be used with the --list option of the model downloader. +face-detection-adas-???? +face-detection-adas-binary-???? +face-detection-retail-???? +landmarks-regression-retail-???? +face-reidentification-retail-???? diff --git a/demos/python_demos/instance_segmentation_demo/README.md b/demos/python_demos/instance_segmentation_demo/README.md index eff240fe6b1..b3aa844952c 100644 --- a/demos/python_demos/instance_segmentation_demo/README.md +++ b/demos/python_demos/instance_segmentation_demo/README.md @@ -70,6 +70,7 @@ Options: -pc, --perf_counts Optional. Report performance counters. -r, --raw_output_message Optional. Output inference results raw values. + --no_show Optional. Don't show output ``` Running the application with an empty list of options yields the short version of the usage message and an error message. 
diff --git a/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py b/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py index a85de93c378..8ae3534f515 100644 --- a/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py +++ b/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py @@ -79,6 +79,9 @@ def build_argparser(): args.add_argument('-r', '--raw_output_message', help='Optional. Output inference results raw values.', action='store_true') + args.add_argument("--no_show", + help="Optional. Don't show output", + action='store_true') return parser @@ -259,8 +262,9 @@ def main(): print('{:<70} {:<15} {:<15} {:<15} {:<10}'.format(layer, stats['layer_type'], stats['exec_type'], stats['status'], stats['real_time'])) - # Show resulting image. - cv2.imshow('Results', frame) + if not args.no_show: + # Show resulting image. + cv2.imshow('Results', frame) render_end = time.time() render_time = render_end - render_start diff --git a/demos/python_demos/multi_camera_multi_person_tracking/README.md b/demos/python_demos/multi_camera_multi_person_tracking/README.md index 4354a161ba0..a7df26b02f6 100644 --- a/demos/python_demos/multi_camera_multi_person_tracking/README.md +++ b/demos/python_demos/multi_camera_multi_person_tracking/README.md @@ -66,6 +66,7 @@ optional arguments: -l CPU_EXTENSION, --cpu_extension CPU_EXTENSION MKLDNN (CPU)-targeted custom layers.Absolute path to a shared library with the kernels impl. + --no_show Optional. Don't show output ``` Minimum command examples to run the demo: diff --git a/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py b/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py index abafce55a52..1ff096e86d7 100644 --- a/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py +++ b/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py @@ -91,7 +91,8 @@ def run(params, capture, detector, reid): fps = round(1 / (time.time() - start), 1) vis = visualize_multicam_detections(frames, tracked_objects, fps) - cv.imshow(win_name, vis) + if not params.no_show: + cv.imshow(win_name, vis) if output_video: output_video.write(cv.resize(vis, video_output_size)) @@ -128,6 +129,7 @@ def main(): help='MKLDNN (CPU)-targeted custom layers.Absolute \ path to a shared library with the kernels impl.', type=str, default=None) + parser.add_argument("--no_show", help="Optional. Don't show output", action='store_true') args = parser.parse_args() diff --git a/demos/python_demos/object_detection_demo_ssd_async/README.md b/demos/python_demos/object_detection_demo_ssd_async/README.md index e1740e532b2..703fedda256 100644 --- a/demos/python_demos/object_detection_demo_ssd_async/README.md +++ b/demos/python_demos/object_detection_demo_ssd_async/README.md @@ -131,6 +131,7 @@ Options: -pt PROB_THRESHOLD, --prob_threshold PROB_THRESHOLD Optional. Probability threshold for detections filtering + --no_show Optional. Don't show output ``` Running the application with the empty list of options yields the usage message given above and an error message. 
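The object_detection_demo_ssd_async rewrite that follows drops the `cap.get(3)` / `cap.get(4)` size queries in favour of the decoded frame's own shape. A small illustration of the difference, assuming only a placeholder video path: the capture properties are backend-reported floats and can come back as 0.0 for some sources, while `frame.shape` always describes the image that was actually read.

```
import cv2

cap = cv2.VideoCapture('input.mp4')  # placeholder path
assert cap.isOpened()

# Backend-reported properties: floats, possibly 0.0 for some streams.
reported_w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)   # same property as cap.get(3)
reported_h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)  # same property as cap.get(4)

ret, frame = cap.read()
if ret:
    # Ground truth for the decoded image: numpy shape is (height, width, channels).
    frame_h, frame_w = frame.shape[:2]
    print('reported:', reported_w, reported_h, 'decoded:', frame_w, frame_h)

cap.release()
```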
diff --git a/demos/python_demos/object_detection_demo_ssd_async/models.lst b/demos/python_demos/object_detection_demo_ssd_async/models.lst index 70702bc6921..c6544958d4d 100644 --- a/demos/python_demos/object_detection_demo_ssd_async/models.lst +++ b/demos/python_demos/object_detection_demo_ssd_async/models.lst @@ -5,6 +5,7 @@ face-detection-retail-???? pedestrian-and-vehicle-detector-adas-???? pedestrian-detection-adas-???? pedestrian-detection-adas-binary-???? +person-detection-retail-0013 vehicle-detection-adas-???? vehicle-detection-adas-binary-???? vehicle-license-plate-detection-barrier-???? diff --git a/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py b/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py index d901fe8d5af..f1ff9c260af 100755 --- a/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py +++ b/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py @@ -45,6 +45,7 @@ def build_argparser(): args.add_argument("--labels", help="Optional. Path to labels mapping file", default=None, type=str) args.add_argument("-pt", "--prob_threshold", help="Optional. Probability threshold for detections filtering", default=0.5, type=float) + args.add_argument("--no_show", help="Optional. Don't show output", action='store_true') return parser @@ -98,15 +99,15 @@ def main(): input_stream = 0 else: input_stream = args.input - assert os.path.isfile(args.input), "Specified input file doesn't exist" + cap = cv2.VideoCapture(input_stream) + assert cap.isOpened(), "Can't open " + input_stream + if args.labels: with open(args.labels, 'r') as f: labels_map = [x.strip() for x in f] else: labels_map = None - cap = cv2.VideoCapture(input_stream) - cur_request_id = 0 next_request_id = 1 @@ -114,6 +115,7 @@ def main(): is_async_mode = True render_time = 0 ret, frame = cap.read() + initial_frame_h, initial_frame_w = frame.shape[:2] print("To close the application, press 'CTRL+C' here or switch to the output window and press ESC key") print("To switch between sync/async modes, press TAB key in the output window") @@ -121,12 +123,12 @@ def main(): while cap.isOpened(): if is_async_mode: ret, next_frame = cap.read() + if ret: + initial_next_frame_h, initial_next_frame_w = next_frame.shape[:2] else: ret, frame = cap.read() if not ret: break - initial_w = cap.get(3) - initial_h = cap.get(4) # Main sync point: # in the truly Async mode we start the NEXT infer request, while waiting for the CURRENT to complete # in the regular mode we start the CURRENT request and immediately wait for it's completion @@ -152,10 +154,10 @@ def main(): for obj in res[0][0]: # Draw only objects when probability more than specified threshold if obj[2] > args.prob_threshold: - xmin = int(obj[3] * initial_w) - ymin = int(obj[4] * initial_h) - xmax = int(obj[5] * initial_w) - ymax = int(obj[6] * initial_h) + xmin = int(obj[3] * initial_frame_w) + ymin = int(obj[4] * initial_frame_h) + xmax = int(obj[5] * initial_frame_w) + ymax = int(obj[6] * initial_frame_h) class_id = int(obj[1]) # Draw box and label\class_id color = (min(class_id * 12.5, 255), min(class_id * 7, 255), min(class_id * 5, 255)) @@ -173,18 +175,20 @@ def main(): cv2.putText(frame, inf_time_message, (15, 15), cv2.FONT_HERSHEY_COMPLEX, 0.5, (200, 10, 10), 1) cv2.putText(frame, render_time_message, (15, 30), cv2.FONT_HERSHEY_COMPLEX, 0.5, (10, 10, 200), 1) - cv2.putText(frame, async_mode_message, (10, int(initial_h - 20)), 
cv2.FONT_HERSHEY_COMPLEX, 0.5, + cv2.putText(frame, async_mode_message, (10, int(initial_frame_h - 20)), cv2.FONT_HERSHEY_COMPLEX, 0.5, (10, 10, 200), 1) # render_start = time.time() - cv2.imshow("Detection Results", frame) + if not args.no_show: + cv2.imshow("Detection Results", frame) render_end = time.time() render_time = render_end - render_start if is_async_mode: cur_request_id, next_request_id = next_request_id, cur_request_id frame = next_frame + initial_frame_h, initial_frame_w = initial_next_frame_h, initial_next_frame_w key = cv2.waitKey(1) if key == 27: diff --git a/demos/python_demos/requirements.txt b/demos/python_demos/requirements.txt index 6f1e5232f72..1a80eafc6bb 100644 --- a/demos/python_demos/requirements.txt +++ b/demos/python_demos/requirements.txt @@ -1,2 +1,3 @@ opencv-python -numpy \ No newline at end of file +numpy +scipy diff --git a/demos/tests/cases.py b/demos/tests/cases.py index c1e2bb046c8..af731de297a 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -55,6 +55,11 @@ def fixed_args(self, source_dir, build_dir): return [sys.executable, str(source_dir / 'python_demos' / self._name / (self._name + '.py')), '-l', str(build_dir / 'lib/libcpu_extension.so')] +class InstanceSegmentationPythonDemo(PythonDemo): + def fixed_args(self, source_dir, build_dir): + return super().fixed_args(source_dir, build_dir) \ + + ['--labels', str(source_dir / 'python_demos' / self._name / ('coco_labels.txt'))] + def join_cases(*args): options = {} for case in args: options.update(case.options) @@ -119,11 +124,18 @@ def device_cases(*args): ], )), - # TODO: mask_rcnn_demo + # TODO: mask_rcnn_demo: no models.lst - # TODO: multichannel demos + # TODO: multichannel demos: different path for demo and demo name, + # cant accept ImagePatternArg, fails for ImageDirectoryArg, does not stop for IMAGE_SEQUENCES[] + # face-detection-adas-0001 + # INT1/face-detection-adas-binary-0001 + # face-detection-retail-0004 + # face-detection-retail-0005 + # face-detection-retail-0044 + # human-pose-estimation-0001 - # TODO: object_detection_demo_faster_rcnn + # TODO: object_detection_demo_faster_rcnn: no models.lst NativeDemo(name='object_detection_demo_ssd_async', test_cases=combine_cases( TestCase(options={'-no_show': None}), @@ -143,7 +155,7 @@ def device_cases(*args): ], )), - # TODO: object_detection_demo_yolov3_async + # TODO: object_detection_demo_yolov3_async: no models.lst NativeDemo('pedestrian_tracker_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, @@ -222,11 +234,68 @@ def device_cases(*args): ] PYTHON_DEMOS = [ - # TODO: 3d_segmentation_demo - # TODO: action_recognition - # TODO: instance_segmentation_demo - # TODO: object_detection_demo_ssd_async - # TODO: object_detection_demo_yolov3_async + # TODO: 3d_segmentation_demo: no input data + + PythonDemo(name='action_recognition', test_cases=combine_cases( + TestCase(options={'--no_show': None, '-i': ImagePatternArg('action-recognition')}), + device_cases('-d'), + [ + TestCase(options={ + '-m_en': ModelArg('action-recognition-0001-encoder'), + '-m_de': ModelArg('action-recognition-0001-decoder'), + }), + TestCase(options={ + '-m_en': ModelArg('driver-action-recognition-adas-0002-encoder'), + '-m_de': ModelArg('driver-action-recognition-adas-0002-decoder'), + }), + ], + )), + + # TODO: face_recognition_demo: requires face gallery + # TODO: image_retrieval_demo: current images does not suit the usecase, requires user defined gallery + + InstanceSegmentationPythonDemo(name='instance_segmentation_demo', 
test_cases=combine_cases( + TestCase(options={'--no_show': None, + '-i': ImagePatternArg('instance-segmentation-demo'), + '--delay': '1', + '-d': 'CPU'}), # GPU is not supported + single_option_cases('-m', ModelArg('instance-segmentation-security-0010'), + ModelArg('instance-segmentation-security-0050'), + ModelArg('instance-segmentation-security-0083')), + )), + + PythonDemo(name='multi_camera_multi_person_tracking', test_cases=combine_cases( + TestCase(options={'--no_show': None, + # TODO: run_tests.py does not handle multiple ImagePatternArg + '-i': [ImagePatternArg('multi-camera-multi-person-tracking')], + '-m': ModelArg('person-detection-retail-0013')}), + device_cases('-d'), + single_option_cases('--m_reid', + ModelArg('person-reidentification-retail-0031'), + ModelArg('person-reidentification-retail-0076'), + ModelArg('person-reidentification-retail-0079')), + )), + + PythonDemo(name='object_detection_demo_ssd_async', test_cases=combine_cases( + TestCase(options={'--no_show': None, + '-i': ImagePatternArg('py/object-detection-demo-ssd-async')}), + device_cases('-d'), + single_option_cases('-m', + ModelArg('face-detection-adas-0001'), + ModelArg('face-detection-adas-binary-0001', "INT1"), + ModelArg('face-detection-retail-0004'), + ModelArg('face-detection-retail-0005'), + # TODO: face-detection-retail-0044 + ModelArg('pedestrian-and-vehicle-detector-adas-0001'), + ModelArg('pedestrian-detection-adas-0002'), + ModelArg('pedestrian-detection-adas-binary-0001', "INT1"), + ModelArg('person-detection-retail-0013'), + ModelArg('vehicle-detection-adas-0002'), + ModelArg('vehicle-detection-adas-binary-0001', "INT1"), + ModelArg('vehicle-license-plate-detection-barrier-0106')), + )), + + # TODO: object_detection_demo_yolov3_async: no models.lst PythonDemo(name='segmentation_demo', test_cases=combine_cases( device_cases('-d'), diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index 66a867fe204..27ea00ca42a 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -147,4 +147,58 @@ image_net_arg('00000001'), image_net_arg('00000074'), ], + + 'action-recognition': [ + image_net_arg('00000001'), + image_net_arg('00000002'), + image_net_arg('00000003'), + image_net_arg('00000004'), + image_net_arg('00000005'), + image_net_arg('00000006'), + image_net_arg('00000007'), + image_net_arg('00000008'), + image_net_arg('00000009'), + image_net_arg('00000010'), + image_net_arg('00000011'), + image_net_arg('00000012'), + image_net_arg('00000013'), + image_net_arg('00000014'), + image_net_arg('00000015'), + image_net_arg('00000016'), + image_net_arg('00000017'), + image_net_arg('00000018'), + image_net_arg('00000019'), + image_net_arg('00000020'), + ], + + 'instance-segmentation-demo': [ + image_net_arg('00000001'), + image_net_arg('00000002'), + image_net_arg('00000002'), # the demo has simple reid + image_net_arg('00000003'), + image_net_arg('00000004'), + image_net_arg('00000008'), + image_net_arg('00000010'), + image_net_arg('00000017'), + image_net_arg('00000019'), + image_net_arg('00000020'), + ], + + 'multi-camera-multi-person-tracking': [image_net_arg('00000002')] * 11, + + 'py/object-detection-demo-ssd-async': [ + image_net_arg('00000001'), + image_net_arg('00000002'), + image_net_arg('00000003'), + image_net_arg('00000004'), + image_net_arg('00000005'), + image_net_arg('00000006'), + image_net_arg('00000007'), + image_net_arg('00000008'), + image_net_arg('00000014'), + image_net_arg('00000018'), + image_net_arg('00000022'), + 
image_net_arg('00000023'), + image_net_arg('00000032'), + ], } From 95a830e04a42d84036b360c54e0acfb991aac087 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Tue, 17 Sep 2019 13:01:10 +0300 Subject: [PATCH 126/927] multichannel: stop at the end of any input --- demos/multichannel_demo/common/graph.cpp | 16 ++- demos/multichannel_demo/common/graph.hpp | 2 + demos/multichannel_demo/common/input.cpp | 135 +++++++++------------- demos/multichannel_demo/common/input.hpp | 2 + demos/multichannel_demo/common/output.cpp | 3 +- demos/multichannel_demo/fd/main.cpp | 6 +- demos/multichannel_demo/hpe/main.cpp | 5 +- 7 files changed, 78 insertions(+), 91 deletions(-) diff --git a/demos/multichannel_demo/common/graph.cpp b/demos/multichannel_demo/common/graph.cpp index 5756f04cbd3..c62123ec057 100644 --- a/demos/multichannel_demo/common/graph.cpp +++ b/demos/multichannel_demo/common/graph.cpp @@ -119,9 +119,8 @@ void IEGraph::start(GetterFunc getterFunc, PostprocessingFunc postprocessingFunc vframes.push_back(std::make_shared(vframe)); ++b; } else { - if (terminate) { - break; - } + terminate = true; + break; } } @@ -188,6 +187,7 @@ void IEGraph::start(GetterFunc getterFunc, PostprocessingFunc postprocessingFunc } condVarBusyRequests.notify_one(); } + condVarBusyRequests.notify_one(); // notify that there will be no new InferRequests }); } @@ -204,6 +204,11 @@ IEGraph::IEGraph(const InitParams& p): initNetwork(p.deviceName); } +bool IEGraph::isRunning() { + std::lock_guard lock(mtxBusyRequests); + return !terminate || !busyBatchRequests.empty(); +} + InferenceEngine::SizeVector IEGraph::getInputDims() const { assert(!availableRequests.empty()); auto inputBlob = availableRequests.front()->GetBlob(inputDataBlobName); @@ -217,8 +222,11 @@ std::vector > IEGraph::getBatchData(cv::Size frameSi { std::unique_lock lock(mtxBusyRequests); condVarBusyRequests.wait(lock, [&]() { - return !busyBatchRequests.empty(); + return terminate || !busyBatchRequests.empty(); // wait until the pipeline is stopped or there are new InferRequests }); + if (busyBatchRequests.empty()) { + return {}; // woke up because of termination, so leave if nothing to preces + } vframes = std::move(busyBatchRequests.front().vfPtrVec); req = std::move(busyBatchRequests.front().req); startTime = std::move(busyBatchRequests.front().startTime); diff --git a/demos/multichannel_demo/common/graph.hpp b/demos/multichannel_demo/common/graph.hpp index 300c0e650aa..19c2f0cef0a 100644 --- a/demos/multichannel_demo/common/graph.hpp +++ b/demos/multichannel_demo/common/graph.hpp @@ -98,6 +98,8 @@ class IEGraph{ void start(GetterFunc getterFunc, PostprocessingFunc postprocessingFunc); + bool isRunning(); + InferenceEngine::SizeVector getInputDims() const; std::vector> getBatchData(cv::Size windowSize); diff --git a/demos/multichannel_demo/common/input.cpp b/demos/multichannel_demo/common/input.cpp index 14b25e7153d..a3816d90232 100644 --- a/demos/multichannel_demo/common/input.cpp +++ b/demos/multichannel_demo/common/input.cpp @@ -34,7 +34,7 @@ class VideoSource { public: - virtual bool init() = 0; + virtual bool isRunning() const = 0; virtual void start() = 0; @@ -142,7 +142,7 @@ class VideoSourceStreamFile : public VideoSource { VideoStream stream; - std::atomic_bool terminate = {false}; + std::atomic_bool running = {false}; std::atomic_bool is_decoding = {false}; std::mutex mutex; @@ -170,12 +170,14 @@ class VideoSourceStreamFile : public VideoSource { queueSize(queueSize_), perfTimer(collectStats_ ? 
PerfTimer::DefaultIterationsCount : 0) { } - bool init() { return true; } + bool isRunning() const override { + return running; + } void start() { - terminate = false; + running = true; workThread = std::thread([&]() { - while (!terminate) { + while (running) { { cv::Mat frame; { @@ -203,7 +205,7 @@ class VideoSourceStreamFile : public VideoSource { std::unique_lock lock(mutex); condVar.wait(lock, [&]() { - return !is_decoding && (frameQueue.size() < queueSize || terminate); + return !is_decoding && (frameQueue.size() < queueSize || !running); }); } hasFrame.notify_one(); @@ -212,7 +214,7 @@ class VideoSourceStreamFile : public VideoSource { } void stop() { - terminate = true; + running = false; condVar.notify_one(); if (workThread.joinable()) { workThread.join(); @@ -222,13 +224,13 @@ class VideoSourceStreamFile : public VideoSource { bool read(VideoFrame& frame) { queue_elem_t elem; - if (terminate) + if (!running) return false; { std::unique_lock lock(mutex); hasFrame.wait(lock, [&]() { - return !frameQueue.empty() || terminate; + return !frameQueue.empty() || !running; }); elem = std::move(frameQueue.front()); frameQueue.pop(); @@ -236,7 +238,7 @@ class VideoSourceStreamFile : public VideoSource { condVar.notify_one(); frame.frame = std::move(elem.second); - return elem.first && !terminate; + return elem.first && running; } float getAvgReadTime() const { @@ -250,7 +252,7 @@ class VideoSourceOCV : public VideoSource { PerfTimer perfTimer; std::thread workThread; const bool isAsync = false; - std::atomic_bool terminate = {false}; + std::atomic_bool running = {true}; std::string videoName; std::mutex mutex; @@ -268,9 +270,6 @@ class VideoSourceOCV : public VideoSource { template bool readFrame(cv::Mat& frame); - template - bool readFrameImpl(cv::Mat& frame); - template void startImpl(); @@ -282,7 +281,7 @@ class VideoSourceOCV : public VideoSource { void start(); - bool init(); + bool isRunning() const override; void stop(); @@ -331,7 +330,7 @@ class VideoSourceNative : public VideoSource { void start(); - bool init(); + bool isRunning() const override; bool read(VideoFrame& frame); @@ -364,8 +363,7 @@ void VideoSourceNative::start() { // nothing } -bool VideoSourceNative::init() { - // nothing +bool VideoSourceNative::isRunning() const override { return true; } @@ -440,38 +438,8 @@ bool isNumeric(const std::string& str) { } } // namespace -bool VideoSourceOCV::init() { - static std::mutex initMutex; // HACK: opencv camera init is not thread-safe - std::unique_lock lock(initMutex); - bool res = false; - if (isNumeric(videoName)) { -#ifdef __linux__ - res = source.open("/dev/video" + videoName); -#else - res = source.open(std::stoi(videoName)); -#endif - } else { - res = source.open(videoName); - } - if (res) { - source.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); - } - return res; -} - template bool VideoSourceOCV::readFrame(cv::Mat& frame) { - if (!source.isOpened() && !init()) { - return false; - } - if (!readFrameImpl(frame)) { - return init() && readFrameImpl(frame); - } - return true; -} - -template -bool VideoSourceOCV::readFrameImpl(cv::Mat& frame) { if (CollectStats) { ScopedTimer st(perfTimer); return source.read(frame); @@ -483,38 +451,40 @@ bool VideoSourceOCV::readFrameImpl(cv::Mat& frame) { VideoSourceOCV::VideoSourceOCV(bool async, bool collectStats_, const std::string& name, size_t queueSize_, size_t pollingTimeMSec_, bool realFps_): - perfTimer(collectStats_ ? 
PerfTimer::DefaultIterationsCount : 0), - isAsync(async), videoName(name), - realFps(realFps_), - queueSize(queueSize_), - pollingTimeMSec(pollingTimeMSec_) {} + perfTimer(collectStats_ ? PerfTimer::DefaultIterationsCount : 0), + isAsync(async), videoName(name), + realFps(realFps_), + queueSize(queueSize_), + pollingTimeMSec(pollingTimeMSec_) { + if (isNumeric(videoName)) { + if (!source.open(std::stoi(videoName))) { + throw std::runtime_error("Can't open " + videoName + " with cv::VideoCapture::open(int)"); + } + } else { + if (!source.open(videoName)) { + throw std::runtime_error("Can't open " + videoName + " with cv::VideoCapture::open(std::string)"); + } + } + source.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); +} VideoSourceOCV::~VideoSourceOCV() { stop(); } +bool VideoSourceOCV::isRunning() const { + return running; +} + template void VideoSourceOCV::thread_fn(VideoSourceOCV *vs) { - while (!vs->terminate) { + while (vs->running) { cv::Mat frame; - bool result = false; - while (!((result = vs->readFrame(frame)) || vs->terminate)) { - std::unique_lock lock(vs->mutex); - if (vs->queue.empty() || vs->queue.back().first) { - vs->queue.push({false, frame}); - lock.unlock(); - vs->hasFrame.notify_one(); - lock.lock(); - } - std::chrono::milliseconds timeout(vs->pollingTimeMSec); - vs->condVar.wait_for(lock, - timeout, - [&]() { - return vs->terminate.load(); - }); + const bool result = vs->readFrame(frame); + if (!result) { + vs->running = false; // stop() also affects running, so override it only when out of frames } - - if (vs->queue.size() < vs->queueSize) { + if (vs->queue.size() < vs->queueSize || !result) { // queue has space or source run out of frames std::unique_lock lock(vs->mutex); vs->queue.push({result, frame}); } @@ -525,7 +495,7 @@ void VideoSourceOCV::thread_fn(VideoSourceOCV *vs) { template void VideoSourceOCV::startImpl() { if (isAsync) { - terminate = false; + running = true; workThread = std::thread(&VideoSourceOCV::thread_fn, this); } } @@ -540,7 +510,7 @@ void VideoSourceOCV::start() { void VideoSourceOCV::stop() { if (isAsync) { - terminate = true; + running = false; condVar.notify_one(); if (workThread.joinable()) { workThread.join(); @@ -550,20 +520,17 @@ void VideoSourceOCV::stop() { bool VideoSourceOCV::read(cv::Mat& frame) { if (isAsync) { - size_t count = 0; - bool res = false; + bool res; { std::unique_lock lock(mutex); hasFrame.wait(lock, [&]() { - return !queue.empty() || terminate; + return !queue.empty() || !running; }); res = queue.front().first; frame = queue.front().second; if (realFps || queue.size() > 1 || queueSize == 1) { queue.pop(); } - count = queue.size(); - (void)count; } condVar.notify_one(); return res; @@ -608,6 +575,12 @@ VideoSources::~VideoSources() { // nothing } +bool VideoSources::isRunning() const { + // when one of VideoSources will be out of frames, it will stop IEGraph, so this isRunning() requires that all inpus were running + return std::all_of(inputs.begin(), inputs.end(), + [](const std::unique_ptr& input){return input->isRunning();}); +} + void VideoSources::openVideo(const std::string& source, bool native) { #ifdef USE_NATIVE_CAMERA_API if (native) { @@ -643,11 +616,7 @@ void VideoSources::openVideo(const std::string& source, bool native) { std::unique_ptr newSrc(new VideoSourceOCV(isAsync, collectStats, source, queueSize, pollingTimeMSec, realFps)); #endif - if (newSrc->init()) { - inputs.emplace_back(std::move(newSrc)); - } else { - throw std::runtime_error("Cannot open cv::VideoCapture"); - } + 
inputs.emplace_back(std::move(newSrc)); } } diff --git a/demos/multichannel_demo/common/input.hpp b/demos/multichannel_demo/common/input.hpp index 413ee022b26..ffd68a4c11d 100644 --- a/demos/multichannel_demo/common/input.hpp +++ b/demos/multichannel_demo/common/input.hpp @@ -90,6 +90,8 @@ class VideoSources { void start(); + virtual bool isRunning() const; + bool getFrame(size_t index, VideoFrame& frame); struct Stats { diff --git a/demos/multichannel_demo/common/output.cpp b/demos/multichannel_demo/common/output.cpp index 4aec97fdf6e..3e0c1e8da12 100644 --- a/demos/multichannel_demo/common/output.cpp +++ b/demos/multichannel_demo/common/output.cpp @@ -40,7 +40,7 @@ void AsyncOutput::start() { condVar.wait(lock, [&]() { return !queue.empty() || terminate; }); - if (terminate) { + if (queue.empty()) { break; } @@ -62,7 +62,6 @@ void AsyncOutput::start() { }); } - bool AsyncOutput::isAlive() const { return !terminate; } diff --git a/demos/multichannel_demo/fd/main.cpp b/demos/multichannel_demo/fd/main.cpp index 7f2ebd8fc4f..21370ff6459 100644 --- a/demos/multichannel_demo/fd/main.cpp +++ b/demos/multichannel_demo/fd/main.cpp @@ -370,11 +370,15 @@ int main(int argc, char* argv[]) { size_t perfItersCounter = 0; - while (true) { + while (sources.isRunning() || network->isRunning()) { bool readData = true; while (readData) { auto br = network->getBatchData(params.frameSize); + if (br.empty()) { + break; // IEGraph::getBatchData had nothing to process and returned. That means it was stopped + } for (size_t i = 0; i < br.size(); i++) { + // this approach waits for the next input image for sourceIdx. If you provide a single image, it may not show results, especially if -real_input_fps is enabled auto val = static_cast(br[i]->sourceIdx); auto it = find_if(batchRes.begin(), batchRes.end(), [val] (const std::shared_ptr& vf) { return vf->sourceIdx == val; } ); if (it != batchRes.end()) { diff --git a/demos/multichannel_demo/hpe/main.cpp b/demos/multichannel_demo/hpe/main.cpp index a6fd3e5457c..19208d83d07 100644 --- a/demos/multichannel_demo/hpe/main.cpp +++ b/demos/multichannel_demo/hpe/main.cpp @@ -368,10 +368,13 @@ int main(int argc, char* argv[]) { size_t perfItersCounter = 0; - while (true) { + while (sources.isRunning() || network->isRunning()) { bool readData = true; while (readData) { auto br = network->getBatchData(params.frameSize); + if (br.empty()) { + break; // IEGraph::getBatchData had nothing to process and returned. 
That means it was stopped + } for (size_t i = 0; i < br.size(); i++) { auto val = static_cast(br[i]->sourceIdx); auto it = find_if(batchRes.begin(), batchRes.end(), [val] (const std::shared_ptr& vf) { return vf->sourceIdx == val; } ); From a7d874a37690e8b4a29b2a2b2a580e718353b841 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Tue, 17 Sep 2019 16:27:59 +0300 Subject: [PATCH 127/927] demos/tests: add multichannel --- demos/multichannel_demo/common/graph.cpp | 3 ++- demos/multichannel_demo/common/input.cpp | 3 ++- demos/multichannel_demo/fd/main.cpp | 3 ++- demos/multichannel_demo/hpe/main.cpp | 2 ++ demos/tests/cases.py | 30 +++++++++++++++++------- 5 files changed, 29 insertions(+), 12 deletions(-) diff --git a/demos/multichannel_demo/common/graph.cpp b/demos/multichannel_demo/common/graph.cpp index c62123ec057..5665b512e0d 100644 --- a/demos/multichannel_demo/common/graph.cpp +++ b/demos/multichannel_demo/common/graph.cpp @@ -222,7 +222,8 @@ std::vector > IEGraph::getBatchData(cv::Size frameSi { std::unique_lock lock(mtxBusyRequests); condVarBusyRequests.wait(lock, [&]() { - return terminate || !busyBatchRequests.empty(); // wait until the pipeline is stopped or there are new InferRequests + // wait until the pipeline is stopped or there are new InferRequests + return terminate || !busyBatchRequests.empty(); }); if (busyBatchRequests.empty()) { return {}; // woke up because of termination, so leave if nothing to preces diff --git a/demos/multichannel_demo/common/input.cpp b/demos/multichannel_demo/common/input.cpp index a3816d90232..a13425eebe3 100644 --- a/demos/multichannel_demo/common/input.cpp +++ b/demos/multichannel_demo/common/input.cpp @@ -576,7 +576,8 @@ VideoSources::~VideoSources() { } bool VideoSources::isRunning() const { - // when one of VideoSources will be out of frames, it will stop IEGraph, so this isRunning() requires that all inpus were running + // when one of VideoSources will be out of frames, it will stop IEGraph, + // so this isRunning() requires that all inpus were running return std::all_of(inputs.begin(), inputs.end(), [](const std::unique_ptr& input){return input->isRunning();}); } diff --git a/demos/multichannel_demo/fd/main.cpp b/demos/multichannel_demo/fd/main.cpp index 21370ff6459..e5ce0f87086 100644 --- a/demos/multichannel_demo/fd/main.cpp +++ b/demos/multichannel_demo/fd/main.cpp @@ -378,7 +378,8 @@ int main(int argc, char* argv[]) { break; // IEGraph::getBatchData had nothing to process and returned. That means it was stopped } for (size_t i = 0; i < br.size(); i++) { - // this approach waits for the next input image for sourceIdx. If you provide a single image, it may not show results, especially if -real_input_fps is enabled + // this approach waits for the next input image for sourceIdx. If provided a single image, + // it may not show results, especially if -real_input_fps is enabled auto val = static_cast(br[i]->sourceIdx); auto it = find_if(batchRes.begin(), batchRes.end(), [val] (const std::shared_ptr& vf) { return vf->sourceIdx == val; } ); if (it != batchRes.end()) { diff --git a/demos/multichannel_demo/hpe/main.cpp b/demos/multichannel_demo/hpe/main.cpp index 19208d83d07..b5450978602 100644 --- a/demos/multichannel_demo/hpe/main.cpp +++ b/demos/multichannel_demo/hpe/main.cpp @@ -376,6 +376,8 @@ int main(int argc, char* argv[]) { break; // IEGraph::getBatchData had nothing to process and returned. That means it was stopped } for (size_t i = 0; i < br.size(); i++) { + // this approach waits for the next input image for sourceIdx. 
If provided a single image, + // it may not show results, especially if -real_input_fps is enabled auto val = static_cast(br[i]->sourceIdx); auto it = find_if(batchRes.begin(), batchRes.end(), [val] (const std::shared_ptr& vf) { return vf->sourceIdx == val; } ); if (it != batchRes.end()) { diff --git a/demos/tests/cases.py b/demos/tests/cases.py index af731de297a..3166c4844ae 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -38,6 +38,10 @@ def models_lst_path(self, source_dir): def fixed_args(self, source_dir, build_dir): return [str(build_dir / self._name)] +class MultichannelNativeDemo(NativeDemo): + def models_lst_path(self, source_dir): + return source_dir / 'multichannel_demo' / 'models.lst' + class PythonDemo: def __init__(self, name, test_cases): self._name = name @@ -126,16 +130,24 @@ def device_cases(*args): # TODO: mask_rcnn_demo: no models.lst - # TODO: multichannel demos: different path for demo and demo name, - # cant accept ImagePatternArg, fails for ImageDirectoryArg, does not stop for IMAGE_SEQUENCES[] - # face-detection-adas-0001 - # INT1/face-detection-adas-binary-0001 - # face-detection-retail-0004 - # face-detection-retail-0005 - # face-detection-retail-0044 - # human-pose-estimation-0001 + MultichannelNativeDemo(name='multi-channel-face-detection-demo', test_cases=combine_cases( + TestCase(options={'-no_show': None, + '-i': IMAGE_SEQUENCES['face-detection-adas']}), + device_cases('-d'), + single_option_cases('-m', + ModelArg('face-detection-adas-0001'), + ModelArg('face-detection-adas-binary-0001', "INT1"), + ModelArg('face-detection-retail-0004'), + ModelArg('face-detection-retail-0005')), + # TODO: face-detection-retail-0044 + )), - # TODO: object_detection_demo_faster_rcnn: no models.lst + MultichannelNativeDemo(name='multi-channel-human-pose-estimation-demo', test_cases=combine_cases( + TestCase(options={'-no_show': None, + '-i': IMAGE_SEQUENCES['human-pose-estimation'], + '-m': ModelArg('human-pose-estimation-0001')}), + device_cases('-d'), + )), NativeDemo(name='object_detection_demo_ssd_async', test_cases=combine_cases( TestCase(options={'-no_show': None}), From 74580d680070d2493eeafe63d61465381140984c Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Thu, 19 Sep 2019 11:29:35 +0300 Subject: [PATCH 128/927] demos/tests: run converter, update multi_camera_multi_person_tracking test --- demos/tests/cases.py | 10 +++++----- demos/tests/image_sequences.py | 16 +++++++++++++++- demos/tests/run_tests.py | 13 +++++++++++++ 3 files changed, 33 insertions(+), 6 deletions(-) diff --git a/demos/tests/cases.py b/demos/tests/cases.py index 3166c4844ae..106a589d3e3 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -138,8 +138,8 @@ def device_cases(*args): ModelArg('face-detection-adas-0001'), ModelArg('face-detection-adas-binary-0001', "INT1"), ModelArg('face-detection-retail-0004'), - ModelArg('face-detection-retail-0005')), - # TODO: face-detection-retail-0044 + ModelArg('face-detection-retail-0005'), + ModelArg('face-detection-retail-0044')), )), MultichannelNativeDemo(name='multi-channel-human-pose-estimation-demo', test_cases=combine_cases( @@ -278,8 +278,8 @@ def device_cases(*args): PythonDemo(name='multi_camera_multi_person_tracking', test_cases=combine_cases( TestCase(options={'--no_show': None, - # TODO: run_tests.py does not handle multiple ImagePatternArg - '-i': [ImagePatternArg('multi-camera-multi-person-tracking')], + '-i': [ImagePatternArg('multi-camera-multi-person-tracking-1'), + 
ImagePatternArg('multi-camera-multi-person-tracking-2')], '-m': ModelArg('person-detection-retail-0013')}), device_cases('-d'), single_option_cases('--m_reid', @@ -297,7 +297,7 @@ def device_cases(*args): ModelArg('face-detection-adas-binary-0001', "INT1"), ModelArg('face-detection-retail-0004'), ModelArg('face-detection-retail-0005'), - # TODO: face-detection-retail-0044 + ModelArg('face-detection-retail-0044'), ModelArg('pedestrian-and-vehicle-detector-adas-0001'), ModelArg('pedestrian-detection-adas-0002'), ModelArg('pedestrian-detection-adas-binary-0001', "INT1"), diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index 27ea00ca42a..bd8b3095c77 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -184,7 +184,21 @@ image_net_arg('00000020'), ], - 'multi-camera-multi-person-tracking': [image_net_arg('00000002')] * 11, + 'multi-camera-multi-person-tracking-1': [image_net_arg('00000002')] * 11, + + 'multi-camera-multi-person-tracking-2': [ + image_net_arg('00000002'), + image_net_arg('00000032'), + image_net_arg('00017291'), + image_net_arg('00017293'), + image_net_arg('00040547'), + image_net_arg('00000002'), + image_net_arg('00000032'), + image_net_arg('00017291'), + image_net_arg('00017293'), + image_net_arg('00040547'), + image_net_arg('00000002'), + ], 'py/object-detection-demo-ssd-async': [ image_net_arg('00000001'), diff --git a/demos/tests/run_tests.py b/demos/tests/run_tests.py index f20bbd777aa..ba12ff7b1ee 100755 --- a/demos/tests/run_tests.py +++ b/demos/tests/run_tests.py @@ -94,6 +94,19 @@ def main(): num_failures += len(demo.test_cases) continue + try: + subprocess.check_output( + [ + sys.executable, '--', str(auto_tools_dir / 'converter.py'), + '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), + ], + stderr=subprocess.STDOUT, universal_newlines=True) + except subprocess.CalledProcessError as e: + print(e.output) + print('Exit code:', e.returncode) + num_failures += len(demo.test_cases) + continue + print() arg_context = ArgContext( From fe28b9dc6a3407885783c510368a74a2138737f0 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Thu, 19 Sep 2019 12:25:34 +0300 Subject: [PATCH 129/927] demos/tests: add --mo key --- demos/tests/run_tests.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/demos/tests/run_tests.py b/demos/tests/run_tests.py index ba12ff7b1ee..1e17f4bf279 100755 --- a/demos/tests/run_tests.py +++ b/demos/tests/run_tests.py @@ -48,6 +48,8 @@ def parse_args(): help='directory to use as the cache for the model downloader') parser.add_argument('--demos', metavar='DEMO[,DEMO...]', help='list of demos to run tests for (by default, every demo is tested)') + parser.add_argument('--mo', type=Path, metavar='MO.PY', + help='Model Optimizer entry point script') return parser.parse_args() def main(): @@ -98,8 +100,8 @@ def main(): subprocess.check_output( [ sys.executable, '--', str(auto_tools_dir / 'converter.py'), - '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), - ], + '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), '--jobs', 'auto' + ] + ([] if args.mo is None else ['--mo', str(args.mo)]), stderr=subprocess.STDOUT, universal_newlines=True) except subprocess.CalledProcessError as e: print(e.output) From 62223ca61db84e0bd037df7a0448f34cefe89949 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 23 Sep 2019 15:02:04 +0300 Subject: [PATCH 130/927] demos/tests: reorder IMAGE_SEQUENCES --- 
demos/tests/cases.py | 4 ++-- demos/tests/image_sequences.py | 32 ++++++++++++++++---------------- 2 files changed, 18 insertions(+), 18 deletions(-) diff --git a/demos/tests/cases.py b/demos/tests/cases.py index 106a589d3e3..97b45572887 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -249,7 +249,7 @@ def device_cases(*args): # TODO: 3d_segmentation_demo: no input data PythonDemo(name='action_recognition', test_cases=combine_cases( - TestCase(options={'--no_show': None, '-i': ImagePatternArg('action-recognition')}), + TestCase(options={'--no_show': None, '-i': ImagePatternArg('py/action-recognition')}), device_cases('-d'), [ TestCase(options={ @@ -268,7 +268,7 @@ def device_cases(*args): InstanceSegmentationPythonDemo(name='instance_segmentation_demo', test_cases=combine_cases( TestCase(options={'--no_show': None, - '-i': ImagePatternArg('instance-segmentation-demo'), + '-i': ImagePatternArg('py/instance-segmentation-demo'), '--delay': '1', '-d': 'CPU'}), # GPU is not supported single_option_cases('-m', ModelArg('instance-segmentation-security-0010'), diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index bd8b3095c77..d66def1802d 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -110,6 +110,18 @@ image_net_arg('00005409'), ], + 'smart-classroom-demo': [ + image_net_arg('00000074'), + image_net_arg('00000141'), + image_net_arg('00000141'), + image_net_arg('00000164'), + image_net_arg('00000181'), + image_net_arg('00000164'), + image_net_arg('00000181'), + image_net_arg('00000001'), + image_net_arg('00000074'), + ], + 'text-detection': [ image_net_arg('00000032'), image_net_arg('00001893'), @@ -136,19 +148,7 @@ image_net_arg('00048316'), ], - 'smart-classroom-demo': [ - image_net_arg('00000074'), - image_net_arg('00000002'), - image_net_arg('00000002'), - image_net_arg('00000164'), - image_net_arg('00000181'), - image_net_arg('00000164'), - image_net_arg('00000181'), - image_net_arg('00000001'), - image_net_arg('00000074'), - ], - - 'action-recognition': [ + 'py/action-recognition': [ image_net_arg('00000001'), image_net_arg('00000002'), image_net_arg('00000003'), @@ -171,7 +171,7 @@ image_net_arg('00000020'), ], - 'instance-segmentation-demo': [ + 'py/instance-segmentation-demo': [ image_net_arg('00000001'), image_net_arg('00000002'), image_net_arg('00000002'), # the demo has simple reid @@ -184,9 +184,9 @@ image_net_arg('00000020'), ], - 'multi-camera-multi-person-tracking-1': [image_net_arg('00000002')] * 11, + 'py/multi-camera-multi-person-tracking-1': [image_net_arg('00000002')] * 11, - 'multi-camera-multi-person-tracking-2': [ + 'py/multi-camera-multi-person-tracking-2': [ image_net_arg('00000002'), image_net_arg('00000032'), image_net_arg('00017291'), From ea62c4582673d4a1e84bb6f712199cc1ac85dc0a Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Thu, 26 Sep 2019 19:50:36 +0300 Subject: [PATCH 131/927] demos: change layout of multichannel_demo, minor --- demos/multichannel_demo/README.md | 4 ++-- demos/multichannel_demo/common/input.cpp | 2 +- .../{fd => face_detection}/CMakeLists.txt | 0 .../{fd => face_detection}/README.md | 0 .../{fd => face_detection}/main.cpp | 0 .../{ => face_detection}/models.lst | 1 - .../multichannel_face_detection_params.hpp | 0 .../CMakeLists.txt | 0 .../{hpe => human_pose_estimation}/README.md | 0 .../human_pose.cpp | 0 .../human_pose.hpp | 0 .../{hpe => human_pose_estimation}/main.cpp | 0 .../human_pose_estimation/models.lst | 2 ++ .../{hpe => human_pose_estimation}/peak.cpp | 
0 .../{hpe => human_pose_estimation}/peak.hpp | 0 .../postprocess.cpp | 0 .../postprocess.hpp | 0 .../postprocessor.cpp | 0 .../postprocessor.hpp | 0 .../render_human_pose.cpp | 0 .../render_human_pose.hpp | 0 demos/tests/cases.py | 19 ++++++++++++------- demos/tests/run_tests.py | 2 +- 23 files changed, 18 insertions(+), 12 deletions(-) rename demos/multichannel_demo/{fd => face_detection}/CMakeLists.txt (100%) rename demos/multichannel_demo/{fd => face_detection}/README.md (100%) rename demos/multichannel_demo/{fd => face_detection}/main.cpp (100%) rename demos/multichannel_demo/{ => face_detection}/models.lst (85%) rename demos/multichannel_demo/{fd => face_detection}/multichannel_face_detection_params.hpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/CMakeLists.txt (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/README.md (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/human_pose.cpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/human_pose.hpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/main.cpp (100%) create mode 100644 demos/multichannel_demo/human_pose_estimation/models.lst rename demos/multichannel_demo/{hpe => human_pose_estimation}/peak.cpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/peak.hpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/postprocess.cpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/postprocess.hpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/postprocessor.cpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/postprocessor.hpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/render_human_pose.cpp (100%) rename demos/multichannel_demo/{hpe => human_pose_estimation}/render_human_pose.hpp (100%) diff --git a/demos/multichannel_demo/README.md b/demos/multichannel_demo/README.md index 90176bc9499..8fc2cbbb6b8 100644 --- a/demos/multichannel_demo/README.md +++ b/demos/multichannel_demo/README.md @@ -1,5 +1,5 @@ # Multi-Channel C++ Demos The demos provide an inference pipeline for two multi-channel scenarios: face detection and human pose estimation. 
For more information, refer to the corresponding pages: -* [Multi-Channel Face Detection C++ Demo](./fd/README.md) -* [Multi-Channel Human Pose Estimation C++ Demo](./hpe/README.md) +* [Multi-Channel Face Detection C++ Demo](./face_detection/README.md) +* [Multi-Channel Human Pose Estimation C++ Demo](./human_pose_estimation/README.md) diff --git a/demos/multichannel_demo/common/input.cpp b/demos/multichannel_demo/common/input.cpp index a13425eebe3..6b1d966b263 100644 --- a/demos/multichannel_demo/common/input.cpp +++ b/demos/multichannel_demo/common/input.cpp @@ -577,7 +577,7 @@ VideoSources::~VideoSources() { bool VideoSources::isRunning() const { // when one of VideoSources will be out of frames, it will stop IEGraph, - // so this isRunning() requires that all inpus were running + // so this isRunning() requires that all inputs were running return std::all_of(inputs.begin(), inputs.end(), [](const std::unique_ptr& input){return input->isRunning();}); } diff --git a/demos/multichannel_demo/fd/CMakeLists.txt b/demos/multichannel_demo/face_detection/CMakeLists.txt similarity index 100% rename from demos/multichannel_demo/fd/CMakeLists.txt rename to demos/multichannel_demo/face_detection/CMakeLists.txt diff --git a/demos/multichannel_demo/fd/README.md b/demos/multichannel_demo/face_detection/README.md similarity index 100% rename from demos/multichannel_demo/fd/README.md rename to demos/multichannel_demo/face_detection/README.md diff --git a/demos/multichannel_demo/fd/main.cpp b/demos/multichannel_demo/face_detection/main.cpp similarity index 100% rename from demos/multichannel_demo/fd/main.cpp rename to demos/multichannel_demo/face_detection/main.cpp diff --git a/demos/multichannel_demo/models.lst b/demos/multichannel_demo/face_detection/models.lst similarity index 85% rename from demos/multichannel_demo/models.lst rename to demos/multichannel_demo/face_detection/models.lst index 205180adf86..b08884eca53 100644 --- a/demos/multichannel_demo/models.lst +++ b/demos/multichannel_demo/face_detection/models.lst @@ -2,4 +2,3 @@ face-detection-adas-???? face-detection-adas-binary-???? face-detection-retail-???? -human-pose-estimation-???? 
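The `????` placeholders in these models.lst files behave like shell-style wildcards, with each `?` standing for one character of the version suffix; that is why `face-detection-adas-binary-????` needs its own line instead of being covered by `face-detection-adas-????`. A rough sketch of that matching behaviour using `fnmatch` (an illustration only, not the downloader's actual implementation):

```
from fnmatch import fnmatchcase

patterns = [
    'face-detection-adas-????',
    'face-detection-adas-binary-????',
    'face-detection-retail-????',
]
models = [
    'face-detection-adas-0001',
    'face-detection-adas-binary-0001',
    'face-detection-retail-0004',
    'face-detection-retail-0044',
    'human-pose-estimation-0001',
]

for name in models:
    if any(fnmatchcase(name, pattern) for pattern in patterns):
        print('selected:', name)
# human-pose-estimation-0001 is no longer selected here once its pattern
# moves to the separate human_pose_estimation/models.lst below.
```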
diff --git a/demos/multichannel_demo/fd/multichannel_face_detection_params.hpp b/demos/multichannel_demo/face_detection/multichannel_face_detection_params.hpp similarity index 100% rename from demos/multichannel_demo/fd/multichannel_face_detection_params.hpp rename to demos/multichannel_demo/face_detection/multichannel_face_detection_params.hpp diff --git a/demos/multichannel_demo/hpe/CMakeLists.txt b/demos/multichannel_demo/human_pose_estimation/CMakeLists.txt similarity index 100% rename from demos/multichannel_demo/hpe/CMakeLists.txt rename to demos/multichannel_demo/human_pose_estimation/CMakeLists.txt diff --git a/demos/multichannel_demo/hpe/README.md b/demos/multichannel_demo/human_pose_estimation/README.md similarity index 100% rename from demos/multichannel_demo/hpe/README.md rename to demos/multichannel_demo/human_pose_estimation/README.md diff --git a/demos/multichannel_demo/hpe/human_pose.cpp b/demos/multichannel_demo/human_pose_estimation/human_pose.cpp similarity index 100% rename from demos/multichannel_demo/hpe/human_pose.cpp rename to demos/multichannel_demo/human_pose_estimation/human_pose.cpp diff --git a/demos/multichannel_demo/hpe/human_pose.hpp b/demos/multichannel_demo/human_pose_estimation/human_pose.hpp similarity index 100% rename from demos/multichannel_demo/hpe/human_pose.hpp rename to demos/multichannel_demo/human_pose_estimation/human_pose.hpp diff --git a/demos/multichannel_demo/hpe/main.cpp b/demos/multichannel_demo/human_pose_estimation/main.cpp similarity index 100% rename from demos/multichannel_demo/hpe/main.cpp rename to demos/multichannel_demo/human_pose_estimation/main.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/models.lst b/demos/multichannel_demo/human_pose_estimation/models.lst new file mode 100644 index 00000000000..26d650bc499 --- /dev/null +++ b/demos/multichannel_demo/human_pose_estimation/models.lst @@ -0,0 +1,2 @@ +# This file can be used with the --list option of the model downloader. +human-pose-estimation-???? 
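The cases.py hunks in this and the earlier patches build their test matrices from `combine_cases` and `single_option_cases`. A simplified sketch of what that expansion amounts to, assuming plain option dictionaries instead of the harness's TestCase objects (the Cartesian-product idea, not the harness's actual code):

```
import itertools


def single_option_cases(key, *values):
    # One alternative per value of a single option.
    return [{key: value} for value in values]


def combine_cases(*groups):
    # Each argument is either one option dict or a list of alternatives;
    # the result is every combination, merged into a single options dict.
    alternatives = [group if isinstance(group, list) else [group] for group in groups]
    cases = []
    for combo in itertools.product(*alternatives):
        options = {}
        for case in combo:
            options.update(case)
        cases.append(options)
    return cases


# Two devices x three models -> six concrete test cases:
cases = combine_cases(
    {'-no_show': None},
    single_option_cases('-d', 'CPU', 'GPU'),
    single_option_cases('-m', 'model-a.xml', 'model-b.xml', 'model-c.xml'),
)
print(len(cases))  # 6
```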
diff --git a/demos/multichannel_demo/hpe/peak.cpp b/demos/multichannel_demo/human_pose_estimation/peak.cpp similarity index 100% rename from demos/multichannel_demo/hpe/peak.cpp rename to demos/multichannel_demo/human_pose_estimation/peak.cpp diff --git a/demos/multichannel_demo/hpe/peak.hpp b/demos/multichannel_demo/human_pose_estimation/peak.hpp similarity index 100% rename from demos/multichannel_demo/hpe/peak.hpp rename to demos/multichannel_demo/human_pose_estimation/peak.hpp diff --git a/demos/multichannel_demo/hpe/postprocess.cpp b/demos/multichannel_demo/human_pose_estimation/postprocess.cpp similarity index 100% rename from demos/multichannel_demo/hpe/postprocess.cpp rename to demos/multichannel_demo/human_pose_estimation/postprocess.cpp diff --git a/demos/multichannel_demo/hpe/postprocess.hpp b/demos/multichannel_demo/human_pose_estimation/postprocess.hpp similarity index 100% rename from demos/multichannel_demo/hpe/postprocess.hpp rename to demos/multichannel_demo/human_pose_estimation/postprocess.hpp diff --git a/demos/multichannel_demo/hpe/postprocessor.cpp b/demos/multichannel_demo/human_pose_estimation/postprocessor.cpp similarity index 100% rename from demos/multichannel_demo/hpe/postprocessor.cpp rename to demos/multichannel_demo/human_pose_estimation/postprocessor.cpp diff --git a/demos/multichannel_demo/hpe/postprocessor.hpp b/demos/multichannel_demo/human_pose_estimation/postprocessor.hpp similarity index 100% rename from demos/multichannel_demo/hpe/postprocessor.hpp rename to demos/multichannel_demo/human_pose_estimation/postprocessor.hpp diff --git a/demos/multichannel_demo/hpe/render_human_pose.cpp b/demos/multichannel_demo/human_pose_estimation/render_human_pose.cpp similarity index 100% rename from demos/multichannel_demo/hpe/render_human_pose.cpp rename to demos/multichannel_demo/human_pose_estimation/render_human_pose.cpp diff --git a/demos/multichannel_demo/hpe/render_human_pose.hpp b/demos/multichannel_demo/human_pose_estimation/render_human_pose.hpp similarity index 100% rename from demos/multichannel_demo/hpe/render_human_pose.hpp rename to demos/multichannel_demo/human_pose_estimation/render_human_pose.hpp diff --git a/demos/tests/cases.py b/demos/tests/cases.py index 97b45572887..f57a5c7e207 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -38,9 +38,13 @@ def models_lst_path(self, source_dir): def fixed_args(self, source_dir, build_dir): return [str(build_dir / self._name)] -class MultichannelNativeDemo(NativeDemo): +class MultichannelFaceDetectionNativeDemo(NativeDemo): def models_lst_path(self, source_dir): - return source_dir / 'multichannel_demo' / 'models.lst' + return source_dir / 'multichannel_demo' / 'face_detection' / 'models.lst' + +class MultichannelHumanPoseEstimationNativeDemo(NativeDemo): + def models_lst_path(self, source_dir): + return source_dir / 'multichannel_demo' / 'human_pose_estimation' / 'models.lst' class PythonDemo: def __init__(self, name, test_cases): @@ -130,7 +134,7 @@ def device_cases(*args): # TODO: mask_rcnn_demo: no models.lst - MultichannelNativeDemo(name='multi-channel-face-detection-demo', test_cases=combine_cases( + MultichannelFaceDetectionNativeDemo(name='multi-channel-face-detection-demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['face-detection-adas']}), device_cases('-d'), @@ -142,7 +146,7 @@ def device_cases(*args): ModelArg('face-detection-retail-0044')), )), - MultichannelNativeDemo(name='multi-channel-human-pose-estimation-demo', 
test_cases=combine_cases( + MultichannelHumanPoseEstimationNativeDemo(name='multi-channel-human-pose-estimation-demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['human-pose-estimation'], '-m': ModelArg('human-pose-estimation-0001')}), @@ -271,9 +275,10 @@ def device_cases(*args): '-i': ImagePatternArg('py/instance-segmentation-demo'), '--delay': '1', '-d': 'CPU'}), # GPU is not supported - single_option_cases('-m', ModelArg('instance-segmentation-security-0010'), - ModelArg('instance-segmentation-security-0050'), - ModelArg('instance-segmentation-security-0083')), + single_option_cases('-m', + ModelArg('instance-segmentation-security-0010'), + ModelArg('instance-segmentation-security-0050'), + ModelArg('instance-segmentation-security-0083')), )), PythonDemo(name='multi_camera_multi_person_tracking', test_cases=combine_cases( diff --git a/demos/tests/run_tests.py b/demos/tests/run_tests.py index 1e17f4bf279..0dfa42cf1c4 100755 --- a/demos/tests/run_tests.py +++ b/demos/tests/run_tests.py @@ -100,7 +100,7 @@ def main(): subprocess.check_output( [ sys.executable, '--', str(auto_tools_dir / 'converter.py'), - '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), '--jobs', 'auto' + '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), '--jobs', 'auto', ] + ([] if args.mo is None else ['--mo', str(args.mo)]), stderr=subprocess.STDOUT, universal_newlines=True) except subprocess.CalledProcessError as e: From 95a0187e3b04f7d2f6fdd836f44daec9add6582b Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Thu, 26 Sep 2019 20:05:04 +0300 Subject: [PATCH 132/927] demos: fix multichannel_demo/CMakeLists.txt, tests/cases.py --- demos/multichannel_demo/CMakeLists.txt | 4 ++-- demos/tests/cases.py | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/demos/multichannel_demo/CMakeLists.txt b/demos/multichannel_demo/CMakeLists.txt index ff575b7784f..0dd412fbfe1 100644 --- a/demos/multichannel_demo/CMakeLists.txt +++ b/demos/multichannel_demo/CMakeLists.txt @@ -21,5 +21,5 @@ if(MULTICHANNEL_DEMO_USE_NATIVE_CAM) endif() add_subdirectory(common) -add_subdirectory(fd) -add_subdirectory(hpe) +add_subdirectory(face_detection) +add_subdirectory(human_pose_estimation) diff --git a/demos/tests/cases.py b/demos/tests/cases.py index f57a5c7e207..079ddd37a3f 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -283,8 +283,8 @@ def device_cases(*args): PythonDemo(name='multi_camera_multi_person_tracking', test_cases=combine_cases( TestCase(options={'--no_show': None, - '-i': [ImagePatternArg('multi-camera-multi-person-tracking-1'), - ImagePatternArg('multi-camera-multi-person-tracking-2')], + '-i': [ImagePatternArg('py/multi-camera-multi-person-tracking-1'), + ImagePatternArg('py/multi-camera-multi-person-tracking-2')], '-m': ModelArg('person-detection-retail-0013')}), device_cases('-d'), single_option_cases('--m_reid', From 118281a9aef322e2b76c725996ff474461b957ce Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Fri, 27 Sep 2019 17:08:49 +0300 Subject: [PATCH 133/927] object_detection_demo_ssd_async: fix --- .../object_detection_demo_ssd_async.py | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py b/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py index f1ff9c260af..829aea99126 100755 --- 
a/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py +++ b/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py @@ -114,8 +114,9 @@ def main(): log.info("Starting inference in async mode...") is_async_mode = True render_time = 0 - ret, frame = cap.read() - initial_frame_h, initial_frame_w = frame.shape[:2] + if is_async_mode: + ret, frame = cap.read() + initial_frame_h, initial_frame_w = frame.shape[:2] print("To close the application, press 'CTRL+C' here or switch to the output window and press ESC key") print("To switch between sync/async modes, press TAB key in the output window") @@ -123,12 +124,12 @@ def main(): while cap.isOpened(): if is_async_mode: ret, next_frame = cap.read() - if ret: - initial_next_frame_h, initial_next_frame_w = next_frame.shape[:2] else: ret, frame = cap.read() + if ret: + initial_frame_h, initial_frame_w = frame.shape[:2] if not ret: - break + break # abandons the last frame in case of async_mode # Main sync point: # in the truly Async mode we start the NEXT infer request, while waiting for the CURRENT to complete # in the regular mode we start the CURRENT request and immediately wait for it's completion @@ -188,7 +189,7 @@ def main(): if is_async_mode: cur_request_id, next_request_id = next_request_id, cur_request_id frame = next_frame - initial_frame_h, initial_frame_w = initial_next_frame_h, initial_next_frame_w + initial_frame_h, initial_frame_w = frame.shape[:2] key = cv2.waitKey(1) if key == 27: From 2d9060b70063f49cde03a7e1e3b2fddabb28aabf Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Tue, 1 Oct 2019 19:29:28 +0300 Subject: [PATCH 134/927] demos: update -no_show, tests --- .../result_renderer.py | 6 +- .../action_recognition_demo/timer.py | 2 +- .../instance_segmentation_demo.py | 9 +-- .../multi_camera_multi_person_tracking.py | 4 +- .../object_detection_demo_ssd_async.py | 31 +++++----- demos/tests/args.py | 9 ++- demos/tests/cases.py | 56 +++++++++++-------- demos/tests/image_sequences.py | 12 ++-- demos/tests/run_tests.py | 7 ++- 9 files changed, 79 insertions(+), 57 deletions(-) diff --git a/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py b/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py index cc21779abfe..8888f0284fe 100644 --- a/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py +++ b/demos/python_demos/action_recognition/action_recognition_demo/result_renderer.py @@ -88,9 +88,9 @@ def render_frame(self, frame, logits, timers, frame_ind): if not self.no_show: cv2.imshow("Action Recognition", frame) - key = cv2.waitKey(1) & 0xFF - if key in {ord('q'), ord('Q'), 27}: - return -1 + key = cv2.waitKey(1) & 0xFF + if key in {ord('q'), ord('Q'), 27}: + return -1 class LabelPostprocessing: diff --git a/demos/python_demos/action_recognition/action_recognition_demo/timer.py b/demos/python_demos/action_recognition/action_recognition_demo/timer.py index 7d3c3323f80..07acd7e7490 100644 --- a/demos/python_demos/action_recognition/action_recognition_demo/timer.py +++ b/demos/python_demos/action_recognition/action_recognition_demo/timer.py @@ -60,7 +60,7 @@ def time_section(self): self.tock() def __repr__(self): - return "{:.2f}ms (std: {:.2f}) {:.2f}fps".format(self.avg, self.std, self.fps) + return "{:.2f}ms (+/-: {:.2f}) {:.2f}fps".format(self.avg, self.std, self.fps) class TimerGroup: diff --git a/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py 
b/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py index 8ae3534f515..60ed60d6953 100644 --- a/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py +++ b/demos/python_demos/instance_segmentation_demo/instance_segmentation_demo.py @@ -268,10 +268,11 @@ def main(): render_end = time.time() render_time = render_end - render_start - key = cv2.waitKey(args.delay) - esc_code = 27 - if key == esc_code: - break + if not args.no_show: + key = cv2.waitKey(args.delay) + esc_code = 27 + if key == esc_code: + break cv2.destroyAllWindows() cap.release() diff --git a/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py b/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py index 1ff096e86d7..d0084c2f58f 100644 --- a/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py +++ b/demos/python_demos/multi_camera_multi_person_tracking/multi_camera_multi_person_tracking.py @@ -70,7 +70,7 @@ def run(params, capture, detector, reid): else: output_video = None - while cv.waitKey(1) != 27 and thread_body.process: + while thread_body.process: start = time.time() try: frames = thread_body.frames_queue.get_nowait() @@ -93,6 +93,8 @@ def run(params, capture, detector, reid): vis = visualize_multicam_detections(frames, tracked_objects, fps) if not params.no_show: cv.imshow(win_name, vis) + if cv.waitKey(1) == 27: + break if output_video: output_video.write(cv.resize(vis, video_output_size)) diff --git a/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py b/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py index 829aea99126..ec63397b63e 100755 --- a/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py +++ b/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py @@ -116,7 +116,7 @@ def main(): render_time = 0 if is_async_mode: ret, frame = cap.read() - initial_frame_h, initial_frame_w = frame.shape[:2] + frame_h, frame_w = frame.shape[:2] print("To close the application, press 'CTRL+C' here or switch to the output window and press ESC key") print("To switch between sync/async modes, press TAB key in the output window") @@ -127,7 +127,7 @@ def main(): else: ret, frame = cap.read() if ret: - initial_frame_h, initial_frame_w = frame.shape[:2] + frame_h, frame_w = frame.shape[:2] if not ret: break # abandons the last frame in case of async_mode # Main sync point: @@ -155,10 +155,10 @@ def main(): for obj in res[0][0]: # Draw only objects when probability more than specified threshold if obj[2] > args.prob_threshold: - xmin = int(obj[3] * initial_frame_w) - ymin = int(obj[4] * initial_frame_h) - xmax = int(obj[5] * initial_frame_w) - ymax = int(obj[6] * initial_frame_h) + xmin = int(obj[3] * frame_w) + ymin = int(obj[4] * frame_h) + xmax = int(obj[5] * frame_w) + ymax = int(obj[6] * frame_h) class_id = int(obj[1]) # Draw box and label\class_id color = (min(class_id * 12.5, 255), min(class_id * 7, 255), min(class_id * 5, 255)) @@ -176,7 +176,7 @@ def main(): cv2.putText(frame, inf_time_message, (15, 15), cv2.FONT_HERSHEY_COMPLEX, 0.5, (200, 10, 10), 1) cv2.putText(frame, render_time_message, (15, 30), cv2.FONT_HERSHEY_COMPLEX, 0.5, (10, 10, 200), 1) - cv2.putText(frame, async_mode_message, (10, int(initial_frame_h - 20)), cv2.FONT_HERSHEY_COMPLEX, 0.5, + cv2.putText(frame, async_mode_message, (10, int(frame_h - 20)), 
cv2.FONT_HERSHEY_COMPLEX, 0.5, (10, 10, 200), 1) # @@ -189,14 +189,15 @@ def main(): if is_async_mode: cur_request_id, next_request_id = next_request_id, cur_request_id frame = next_frame - initial_frame_h, initial_frame_w = frame.shape[:2] - - key = cv2.waitKey(1) - if key == 27: - break - if (9 == key): - is_async_mode = not is_async_mode - log.info("Switched to {} mode".format("async" if is_async_mode else "sync")) + frame_h, frame_w = frame.shape[:2] + + if not args.no_show: + key = cv2.waitKey(1) + if key == 27: + break + if (9 == key): + is_async_mode = not is_async_mode + log.info("Switched to {} mode".format("async" if is_async_mode else "sync")) cv2.destroyAllWindows() diff --git a/demos/tests/args.py b/demos/tests/args.py index 5df36eb6347..d0dc5da6bbf 100644 --- a/demos/tests/args.py +++ b/demos/tests/args.py @@ -18,7 +18,7 @@ from pathlib import Path ArgContext = collections.namedtuple('ArgContext', - ['test_data_dir', 'dl_dir', 'model_info', 'image_sequences', 'image_sequence_dir']) + ['source_dir', 'test_data_dir', 'dl_dir', 'model_info', 'image_sequences', 'image_sequence_dir']) class TestDataArg: def __init__(self, rel_path): @@ -67,3 +67,10 @@ def __init__(self, sequence_name): def resolve(self, context): pattern = self.backend.resolve(context) return str(Path(pattern).parent) + +class DemoFileArg: + def __init__(self, file_name): + self.file_name = file_name + + def resolve(self, context): + return str(context.source_dir / self.file_name) diff --git a/demos/tests/cases.py b/demos/tests/cases.py index 079ddd37a3f..a5f1efad3ca 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -23,8 +23,12 @@ TestCase = collections.namedtuple('TestCase', ['options']) class NativeDemo: - def __init__(self, name, test_cases): + def __init__(self, name, test_cases, subdirectory=None): self._name = name + if subdirectory is None: + self._subdirectory = name + else: + self._subdirectory = subdirectory self.test_cases = test_cases @@ -32,23 +36,23 @@ def __init__(self, name, test_cases): def full_name(self): return self._name + @property + def subdirectory(self): + return Path(self._subdirectory) + def models_lst_path(self, source_dir): return source_dir / self._name / 'models.lst' def fixed_args(self, source_dir, build_dir): return [str(build_dir / self._name)] -class MultichannelFaceDetectionNativeDemo(NativeDemo): - def models_lst_path(self, source_dir): - return source_dir / 'multichannel_demo' / 'face_detection' / 'models.lst' - -class MultichannelHumanPoseEstimationNativeDemo(NativeDemo): - def models_lst_path(self, source_dir): - return source_dir / 'multichannel_demo' / 'human_pose_estimation' / 'models.lst' - class PythonDemo: - def __init__(self, name, test_cases): + def __init__(self, name, test_cases, subdirectory=None): self._name = name + if subdirectory is None: + self._subdirectory = name + else: + self._subdirectory = subdirectory self.test_cases = test_cases @@ -56,6 +60,10 @@ def __init__(self, name, test_cases): def full_name(self): return 'py/' + self._name + @property + def subdirectory(self): + return Path('python_demos') / self._subdirectory + def models_lst_path(self, source_dir): return source_dir / 'python_demos' / self._name / 'models.lst' @@ -63,11 +71,6 @@ def fixed_args(self, source_dir, build_dir): return [sys.executable, str(source_dir / 'python_demos' / self._name / (self._name + '.py')), '-l', str(build_dir / 'lib/libcpu_extension.so')] -class InstanceSegmentationPythonDemo(PythonDemo): - def fixed_args(self, source_dir, build_dir): - return 
super().fixed_args(source_dir, build_dir) \ - + ['--labels', str(source_dir / 'python_demos' / self._name / ('coco_labels.txt'))] - def join_cases(*args): options = {} for case in args: options.update(case.options) @@ -134,7 +137,9 @@ def device_cases(*args): # TODO: mask_rcnn_demo: no models.lst - MultichannelFaceDetectionNativeDemo(name='multi-channel-face-detection-demo', test_cases=combine_cases( + NativeDemo(name='multi-channel-face-detection-demo', + subdirectory='multichannel_demo/face_detection', + test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['face-detection-adas']}), device_cases('-d'), @@ -146,7 +151,9 @@ def device_cases(*args): ModelArg('face-detection-retail-0044')), )), - MultichannelHumanPoseEstimationNativeDemo(name='multi-channel-human-pose-estimation-demo', test_cases=combine_cases( + NativeDemo(name='multi-channel-human-pose-estimation-demo', + subdirectory='multichannel_demo/human_pose_estimation', + test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['human-pose-estimation'], '-m': ModelArg('human-pose-estimation-0001')}), @@ -253,7 +260,7 @@ def device_cases(*args): # TODO: 3d_segmentation_demo: no input data PythonDemo(name='action_recognition', test_cases=combine_cases( - TestCase(options={'--no_show': None, '-i': ImagePatternArg('py/action-recognition')}), + TestCase(options={'--no_show': None, '-i': ImagePatternArg('action-recognition')}), device_cases('-d'), [ TestCase(options={ @@ -270,11 +277,12 @@ def device_cases(*args): # TODO: face_recognition_demo: requires face gallery # TODO: image_retrieval_demo: current images does not suit the usecase, requires user defined gallery - InstanceSegmentationPythonDemo(name='instance_segmentation_demo', test_cases=combine_cases( + PythonDemo(name='instance_segmentation_demo', test_cases=combine_cases( TestCase(options={'--no_show': None, - '-i': ImagePatternArg('py/instance-segmentation-demo'), + '-i': ImagePatternArg('instance-segmentation'), '--delay': '1', - '-d': 'CPU'}), # GPU is not supported + '-d': 'CPU', # GPU is not supported + '--labels': DemoFileArg('coco_labels.txt')}), single_option_cases('-m', ModelArg('instance-segmentation-security-0010'), ModelArg('instance-segmentation-security-0050'), @@ -283,8 +291,8 @@ def device_cases(*args): PythonDemo(name='multi_camera_multi_person_tracking', test_cases=combine_cases( TestCase(options={'--no_show': None, - '-i': [ImagePatternArg('py/multi-camera-multi-person-tracking-1'), - ImagePatternArg('py/multi-camera-multi-person-tracking-2')], + '-i': [ImagePatternArg('multi-camera-multi-person-tracking'), + ImagePatternArg('multi-camera-multi-person-tracking/repeated')], '-m': ModelArg('person-detection-retail-0013')}), device_cases('-d'), single_option_cases('--m_reid', @@ -295,7 +303,7 @@ def device_cases(*args): PythonDemo(name='object_detection_demo_ssd_async', test_cases=combine_cases( TestCase(options={'--no_show': None, - '-i': ImagePatternArg('py/object-detection-demo-ssd-async')}), + '-i': ImagePatternArg('object-detection-demo-ssd-async')}), device_cases('-d'), single_option_cases('-m', ModelArg('face-detection-adas-0001'), diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index d66def1802d..7cd6c2893ee 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -148,7 +148,7 @@ image_net_arg('00048316'), ], - 'py/action-recognition': [ + 'action-recognition': [ image_net_arg('00000001'), image_net_arg('00000002'), image_net_arg('00000003'), 
@@ -171,7 +171,7 @@ image_net_arg('00000020'), ], - 'py/instance-segmentation-demo': [ + 'instance-segmentation': [ image_net_arg('00000001'), image_net_arg('00000002'), image_net_arg('00000002'), # the demo has simple reid @@ -184,9 +184,7 @@ image_net_arg('00000020'), ], - 'py/multi-camera-multi-person-tracking-1': [image_net_arg('00000002')] * 11, - - 'py/multi-camera-multi-person-tracking-2': [ + 'multi-camera-multi-person-tracking': [ image_net_arg('00000002'), image_net_arg('00000032'), image_net_arg('00017291'), @@ -200,7 +198,9 @@ image_net_arg('00000002'), ], - 'py/object-detection-demo-ssd-async': [ + 'multi-camera-multi-person-tracking/repeated': [image_net_arg('00000002')] * 11, + + 'object-detection-demo-ssd-async': [ image_net_arg('00000001'), image_net_arg('00000002'), image_net_arg('00000003'), diff --git a/demos/tests/run_tests.py b/demos/tests/run_tests.py index 0dfa42cf1c4..203dc1faf75 100755 --- a/demos/tests/run_tests.py +++ b/demos/tests/run_tests.py @@ -77,6 +77,8 @@ def main(): print('Testing {}...'.format(demo.full_name)) print() + demo_source_dir = demos_dir / demo.subdirectory + with tempfile.TemporaryDirectory() as temp_dir: dl_dir = Path(temp_dir) / 'models' @@ -87,7 +89,7 @@ def main(): [ sys.executable, '--', str(auto_tools_dir / 'downloader.py'), '--output_dir', str(dl_dir), '--cache_dir', str(args.downloader_cache_dir), - '--list', str(demo.models_lst_path(demos_dir)), + '--list', str(demo_source_dir / 'models.lst'), ], stderr=subprocess.STDOUT, universal_newlines=True) except subprocess.CalledProcessError as e: @@ -100,7 +102,7 @@ def main(): subprocess.check_output( [ sys.executable, '--', str(auto_tools_dir / 'converter.py'), - '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), '--jobs', 'auto', + '--download_dir', str(dl_dir), '--list', str(demo_source_dir / 'models.lst'), '--jobs', 'auto', ] + ([] if args.mo is None else ['--mo', str(args.mo)]), stderr=subprocess.STDOUT, universal_newlines=True) except subprocess.CalledProcessError as e: @@ -112,6 +114,7 @@ def main(): print() arg_context = ArgContext( + source_dir=demo_source_dir, dl_dir=dl_dir, image_sequence_dir=Path(temp_dir) / 'image_seq', image_sequences=IMAGE_SEQUENCES, From eed04c8b46fb12719f7d434978cb8f8f618233eb Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Tue, 15 Oct 2019 14:54:29 +0300 Subject: [PATCH 135/927] demos: update tests, rename multi_channel, sort lists --- ci/requirements-demos.txt | 2 + .../CMakeLists.txt | 0 .../README.md | 0 .../common/CMakeLists.txt | 0 .../common/decoder.cpp | 0 .../common/decoder.hpp | 0 .../common/graph.cpp | 0 .../common/graph.hpp | 0 .../common/input.cpp | 0 .../common/input.hpp | 0 .../common/multicam/CMakeLists.txt | 0 .../common/multicam/camera.cpp | 0 .../common/multicam/camera.hpp | 0 .../common/multicam/controller.cpp | 0 .../common/multicam/controller.hpp | 0 .../common/multicam/utils.cpp | 0 .../common/multicam/utils.hpp | 0 .../common/multichannel_params.hpp | 0 .../common/output.cpp | 0 .../common/output.hpp | 0 .../common/perf_timer.cpp | 0 .../common/perf_timer.hpp | 0 .../common/threading.cpp | 0 .../common/threading.hpp | 0 .../face_detection/CMakeLists.txt | 2 +- .../face_detection/README.md | 0 .../face_detection/main.cpp | 0 .../face_detection/models.lst | 0 .../multichannel_face_detection_params.hpp | 0 .../human_pose_estimation/CMakeLists.txt | 2 +- .../human_pose_estimation/README.md | 0 .../human_pose_estimation/human_pose.cpp | 0 .../human_pose_estimation/human_pose.hpp | 0 
.../human_pose_estimation/main.cpp | 0 .../human_pose_estimation/models.lst | 0 .../human_pose_estimation/peak.cpp | 0 .../human_pose_estimation/peak.hpp | 0 .../human_pose_estimation/postprocess.cpp | 0 .../human_pose_estimation/postprocess.hpp | 0 .../human_pose_estimation/postprocessor.cpp | 0 .../human_pose_estimation/postprocessor.hpp | 0 .../render_human_pose.cpp | 0 .../render_human_pose.hpp | 0 .../face_recognition_demo/models.lst | 2 +- demos/tests/cases.py | 72 ++++------ demos/tests/image_sequences.py | 136 +++++++++--------- demos/tests/run_tests.py | 8 +- 47 files changed, 104 insertions(+), 120 deletions(-) rename demos/{multichannel_demo => multi_channel}/CMakeLists.txt (100%) rename demos/{multichannel_demo => multi_channel}/README.md (100%) rename demos/{multichannel_demo => multi_channel}/common/CMakeLists.txt (100%) rename demos/{multichannel_demo => multi_channel}/common/decoder.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/decoder.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/graph.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/graph.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/input.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/input.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/CMakeLists.txt (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/camera.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/camera.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/controller.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/controller.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/utils.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multicam/utils.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/multichannel_params.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/output.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/output.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/perf_timer.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/perf_timer.hpp (100%) rename demos/{multichannel_demo => multi_channel}/common/threading.cpp (100%) rename demos/{multichannel_demo => multi_channel}/common/threading.hpp (100%) rename demos/{multichannel_demo => multi_channel}/face_detection/CMakeLists.txt (97%) rename demos/{multichannel_demo => multi_channel}/face_detection/README.md (100%) rename demos/{multichannel_demo => multi_channel}/face_detection/main.cpp (100%) rename demos/{multichannel_demo => multi_channel}/face_detection/models.lst (100%) rename demos/{multichannel_demo => multi_channel}/face_detection/multichannel_face_detection_params.hpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/CMakeLists.txt (97%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/README.md (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/human_pose.cpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/human_pose.hpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/main.cpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/models.lst (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/peak.cpp (100%) rename 
demos/{multichannel_demo => multi_channel}/human_pose_estimation/peak.hpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/postprocess.cpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/postprocess.hpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/postprocessor.cpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/postprocessor.hpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/render_human_pose.cpp (100%) rename demos/{multichannel_demo => multi_channel}/human_pose_estimation/render_human_pose.hpp (100%) diff --git a/ci/requirements-demos.txt b/ci/requirements-demos.txt index 65992e864fb..8682c8aae05 100644 --- a/ci/requirements-demos.txt +++ b/ci/requirements-demos.txt @@ -1,2 +1,4 @@ +nibabel==2.5.1 numpy==1.17.2 ; python_version >= "3.4" scipy==1.3.1 +six==1.12.0 # via nibabel diff --git a/demos/multichannel_demo/CMakeLists.txt b/demos/multi_channel/CMakeLists.txt similarity index 100% rename from demos/multichannel_demo/CMakeLists.txt rename to demos/multi_channel/CMakeLists.txt diff --git a/demos/multichannel_demo/README.md b/demos/multi_channel/README.md similarity index 100% rename from demos/multichannel_demo/README.md rename to demos/multi_channel/README.md diff --git a/demos/multichannel_demo/common/CMakeLists.txt b/demos/multi_channel/common/CMakeLists.txt similarity index 100% rename from demos/multichannel_demo/common/CMakeLists.txt rename to demos/multi_channel/common/CMakeLists.txt diff --git a/demos/multichannel_demo/common/decoder.cpp b/demos/multi_channel/common/decoder.cpp similarity index 100% rename from demos/multichannel_demo/common/decoder.cpp rename to demos/multi_channel/common/decoder.cpp diff --git a/demos/multichannel_demo/common/decoder.hpp b/demos/multi_channel/common/decoder.hpp similarity index 100% rename from demos/multichannel_demo/common/decoder.hpp rename to demos/multi_channel/common/decoder.hpp diff --git a/demos/multichannel_demo/common/graph.cpp b/demos/multi_channel/common/graph.cpp similarity index 100% rename from demos/multichannel_demo/common/graph.cpp rename to demos/multi_channel/common/graph.cpp diff --git a/demos/multichannel_demo/common/graph.hpp b/demos/multi_channel/common/graph.hpp similarity index 100% rename from demos/multichannel_demo/common/graph.hpp rename to demos/multi_channel/common/graph.hpp diff --git a/demos/multichannel_demo/common/input.cpp b/demos/multi_channel/common/input.cpp similarity index 100% rename from demos/multichannel_demo/common/input.cpp rename to demos/multi_channel/common/input.cpp diff --git a/demos/multichannel_demo/common/input.hpp b/demos/multi_channel/common/input.hpp similarity index 100% rename from demos/multichannel_demo/common/input.hpp rename to demos/multi_channel/common/input.hpp diff --git a/demos/multichannel_demo/common/multicam/CMakeLists.txt b/demos/multi_channel/common/multicam/CMakeLists.txt similarity index 100% rename from demos/multichannel_demo/common/multicam/CMakeLists.txt rename to demos/multi_channel/common/multicam/CMakeLists.txt diff --git a/demos/multichannel_demo/common/multicam/camera.cpp b/demos/multi_channel/common/multicam/camera.cpp similarity index 100% rename from demos/multichannel_demo/common/multicam/camera.cpp rename to demos/multi_channel/common/multicam/camera.cpp diff --git a/demos/multichannel_demo/common/multicam/camera.hpp b/demos/multi_channel/common/multicam/camera.hpp similarity index 100% rename 
from demos/multichannel_demo/common/multicam/camera.hpp rename to demos/multi_channel/common/multicam/camera.hpp diff --git a/demos/multichannel_demo/common/multicam/controller.cpp b/demos/multi_channel/common/multicam/controller.cpp similarity index 100% rename from demos/multichannel_demo/common/multicam/controller.cpp rename to demos/multi_channel/common/multicam/controller.cpp diff --git a/demos/multichannel_demo/common/multicam/controller.hpp b/demos/multi_channel/common/multicam/controller.hpp similarity index 100% rename from demos/multichannel_demo/common/multicam/controller.hpp rename to demos/multi_channel/common/multicam/controller.hpp diff --git a/demos/multichannel_demo/common/multicam/utils.cpp b/demos/multi_channel/common/multicam/utils.cpp similarity index 100% rename from demos/multichannel_demo/common/multicam/utils.cpp rename to demos/multi_channel/common/multicam/utils.cpp diff --git a/demos/multichannel_demo/common/multicam/utils.hpp b/demos/multi_channel/common/multicam/utils.hpp similarity index 100% rename from demos/multichannel_demo/common/multicam/utils.hpp rename to demos/multi_channel/common/multicam/utils.hpp diff --git a/demos/multichannel_demo/common/multichannel_params.hpp b/demos/multi_channel/common/multichannel_params.hpp similarity index 100% rename from demos/multichannel_demo/common/multichannel_params.hpp rename to demos/multi_channel/common/multichannel_params.hpp diff --git a/demos/multichannel_demo/common/output.cpp b/demos/multi_channel/common/output.cpp similarity index 100% rename from demos/multichannel_demo/common/output.cpp rename to demos/multi_channel/common/output.cpp diff --git a/demos/multichannel_demo/common/output.hpp b/demos/multi_channel/common/output.hpp similarity index 100% rename from demos/multichannel_demo/common/output.hpp rename to demos/multi_channel/common/output.hpp diff --git a/demos/multichannel_demo/common/perf_timer.cpp b/demos/multi_channel/common/perf_timer.cpp similarity index 100% rename from demos/multichannel_demo/common/perf_timer.cpp rename to demos/multi_channel/common/perf_timer.cpp diff --git a/demos/multichannel_demo/common/perf_timer.hpp b/demos/multi_channel/common/perf_timer.hpp similarity index 100% rename from demos/multichannel_demo/common/perf_timer.hpp rename to demos/multi_channel/common/perf_timer.hpp diff --git a/demos/multichannel_demo/common/threading.cpp b/demos/multi_channel/common/threading.cpp similarity index 100% rename from demos/multichannel_demo/common/threading.cpp rename to demos/multi_channel/common/threading.cpp diff --git a/demos/multichannel_demo/common/threading.hpp b/demos/multi_channel/common/threading.hpp similarity index 100% rename from demos/multichannel_demo/common/threading.hpp rename to demos/multi_channel/common/threading.hpp diff --git a/demos/multichannel_demo/face_detection/CMakeLists.txt b/demos/multi_channel/face_detection/CMakeLists.txt similarity index 97% rename from demos/multichannel_demo/face_detection/CMakeLists.txt rename to demos/multi_channel/face_detection/CMakeLists.txt index fac14a84a1d..9a808d363fb 100644 --- a/demos/multichannel_demo/face_detection/CMakeLists.txt +++ b/demos/multi_channel/face_detection/CMakeLists.txt @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-set(TARGET_NAME "multi-channel-face-detection-demo") +set(TARGET_NAME "multi_channel_face_detection") if( BUILD_DEMO_NAME AND NOT ${BUILD_DEMO_NAME} STREQUAL ${TARGET_NAME} ) message(STATUS "DEMO ${TARGET_NAME} SKIPPED") diff --git a/demos/multichannel_demo/face_detection/README.md b/demos/multi_channel/face_detection/README.md similarity index 100% rename from demos/multichannel_demo/face_detection/README.md rename to demos/multi_channel/face_detection/README.md diff --git a/demos/multichannel_demo/face_detection/main.cpp b/demos/multi_channel/face_detection/main.cpp similarity index 100% rename from demos/multichannel_demo/face_detection/main.cpp rename to demos/multi_channel/face_detection/main.cpp diff --git a/demos/multichannel_demo/face_detection/models.lst b/demos/multi_channel/face_detection/models.lst similarity index 100% rename from demos/multichannel_demo/face_detection/models.lst rename to demos/multi_channel/face_detection/models.lst diff --git a/demos/multichannel_demo/face_detection/multichannel_face_detection_params.hpp b/demos/multi_channel/face_detection/multichannel_face_detection_params.hpp similarity index 100% rename from demos/multichannel_demo/face_detection/multichannel_face_detection_params.hpp rename to demos/multi_channel/face_detection/multichannel_face_detection_params.hpp diff --git a/demos/multichannel_demo/human_pose_estimation/CMakeLists.txt b/demos/multi_channel/human_pose_estimation/CMakeLists.txt similarity index 97% rename from demos/multichannel_demo/human_pose_estimation/CMakeLists.txt rename to demos/multi_channel/human_pose_estimation/CMakeLists.txt index 4789bcd36b1..bef03f54bc2 100644 --- a/demos/multichannel_demo/human_pose_estimation/CMakeLists.txt +++ b/demos/multi_channel/human_pose_estimation/CMakeLists.txt @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-set(TARGET_NAME "multi-channel-human-pose-estimation-demo") +set(TARGET_NAME "multi_channel_human_pose_estimation") if( BUILD_DEMO_NAME AND NOT ${BUILD_DEMO_NAME} STREQUAL ${TARGET_NAME} ) message(STATUS "DEMO ${TARGET_NAME} SKIPPED") diff --git a/demos/multichannel_demo/human_pose_estimation/README.md b/demos/multi_channel/human_pose_estimation/README.md similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/README.md rename to demos/multi_channel/human_pose_estimation/README.md diff --git a/demos/multichannel_demo/human_pose_estimation/human_pose.cpp b/demos/multi_channel/human_pose_estimation/human_pose.cpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/human_pose.cpp rename to demos/multi_channel/human_pose_estimation/human_pose.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/human_pose.hpp b/demos/multi_channel/human_pose_estimation/human_pose.hpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/human_pose.hpp rename to demos/multi_channel/human_pose_estimation/human_pose.hpp diff --git a/demos/multichannel_demo/human_pose_estimation/main.cpp b/demos/multi_channel/human_pose_estimation/main.cpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/main.cpp rename to demos/multi_channel/human_pose_estimation/main.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/models.lst b/demos/multi_channel/human_pose_estimation/models.lst similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/models.lst rename to demos/multi_channel/human_pose_estimation/models.lst diff --git a/demos/multichannel_demo/human_pose_estimation/peak.cpp b/demos/multi_channel/human_pose_estimation/peak.cpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/peak.cpp rename to demos/multi_channel/human_pose_estimation/peak.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/peak.hpp b/demos/multi_channel/human_pose_estimation/peak.hpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/peak.hpp rename to demos/multi_channel/human_pose_estimation/peak.hpp diff --git a/demos/multichannel_demo/human_pose_estimation/postprocess.cpp b/demos/multi_channel/human_pose_estimation/postprocess.cpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/postprocess.cpp rename to demos/multi_channel/human_pose_estimation/postprocess.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/postprocess.hpp b/demos/multi_channel/human_pose_estimation/postprocess.hpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/postprocess.hpp rename to demos/multi_channel/human_pose_estimation/postprocess.hpp diff --git a/demos/multichannel_demo/human_pose_estimation/postprocessor.cpp b/demos/multi_channel/human_pose_estimation/postprocessor.cpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/postprocessor.cpp rename to demos/multi_channel/human_pose_estimation/postprocessor.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/postprocessor.hpp b/demos/multi_channel/human_pose_estimation/postprocessor.hpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/postprocessor.hpp rename to demos/multi_channel/human_pose_estimation/postprocessor.hpp diff --git a/demos/multichannel_demo/human_pose_estimation/render_human_pose.cpp b/demos/multi_channel/human_pose_estimation/render_human_pose.cpp 
similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/render_human_pose.cpp rename to demos/multi_channel/human_pose_estimation/render_human_pose.cpp diff --git a/demos/multichannel_demo/human_pose_estimation/render_human_pose.hpp b/demos/multi_channel/human_pose_estimation/render_human_pose.hpp similarity index 100% rename from demos/multichannel_demo/human_pose_estimation/render_human_pose.hpp rename to demos/multi_channel/human_pose_estimation/render_human_pose.hpp diff --git a/demos/python_demos/face_recognition_demo/models.lst b/demos/python_demos/face_recognition_demo/models.lst index 02c47c13233..cdd4d6b4667 100644 --- a/demos/python_demos/face_recognition_demo/models.lst +++ b/demos/python_demos/face_recognition_demo/models.lst @@ -2,5 +2,5 @@ face-detection-adas-???? face-detection-adas-binary-???? face-detection-retail-???? -landmarks-regression-retail-???? face-reidentification-retail-???? +landmarks-regression-retail-???? diff --git a/demos/tests/cases.py b/demos/tests/cases.py index a5f1efad3ca..d90d36b2183 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -23,12 +23,10 @@ TestCase = collections.namedtuple('TestCase', ['options']) class NativeDemo: - def __init__(self, name, test_cases, subdirectory=None): - self._name = name - if subdirectory is None: - self._subdirectory = name - else: - self._subdirectory = subdirectory + def __init__(self, subdirectory, test_cases): + self.subdirectory = subdirectory + + self._name = subdirectory.replace('/', '_') self.test_cases = test_cases @@ -36,23 +34,17 @@ def __init__(self, name, test_cases, subdirectory=None): def full_name(self): return self._name - @property - def subdirectory(self): - return Path(self._subdirectory) - def models_lst_path(self, source_dir): - return source_dir / self._name / 'models.lst' + return source_dir / self.subdirectory / 'models.lst' def fixed_args(self, source_dir, build_dir): return [str(build_dir / self._name)] class PythonDemo: - def __init__(self, name, test_cases, subdirectory=None): - self._name = name - if subdirectory is None: - self._subdirectory = name - else: - self._subdirectory = subdirectory + def __init__(self, subdirectory, test_cases): + self.subdirectory = 'python_demos/' + subdirectory + + self._name = subdirectory.replace('/', '_') self.test_cases = test_cases @@ -60,15 +52,11 @@ def __init__(self, name, test_cases, subdirectory=None): def full_name(self): return 'py/' + self._name - @property - def subdirectory(self): - return Path('python_demos') / self._subdirectory - def models_lst_path(self, source_dir): - return source_dir / 'python_demos' / self._name / 'models.lst' + return source_dir / self.subdirectory / 'models.lst' def fixed_args(self, source_dir, build_dir): - return [sys.executable, str(source_dir / 'python_demos' / self._name / (self._name + '.py')), + return [sys.executable, str(source_dir / self.subdirectory / (self._name + '.py')), '-l', str(build_dir / 'lib/libcpu_extension.so')] def join_cases(*args): @@ -87,7 +75,7 @@ def device_cases(*args): return [TestCase(options={opt: device for opt in args}) for device in ALL_DEVICES] NATIVE_DEMOS = [ - NativeDemo(name='crossroad_camera_demo', test_cases=combine_cases( + NativeDemo(subdirectory='crossroad_camera_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': ImagePatternArg('person-vehicle-bike-detection-crossroad')}), device_cases('-d', '-d_pa', '-d_reid'), @@ -96,7 +84,7 @@ def device_cases(*args): single_option_cases('-m_reid', None, 
ModelArg('person-reidentification-retail-0079')), )), - NativeDemo(name='gaze_estimation_demo', test_cases=combine_cases( + NativeDemo(subdirectory='gaze_estimation_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': ImagePatternArg('gaze-estimation-adas')}), device_cases('-d', '-d_fd', '-d_hp', '-d_lm'), @@ -108,14 +96,14 @@ def device_cases(*args): }), )), - NativeDemo(name='human_pose_estimation_demo', test_cases=combine_cases( + NativeDemo(subdirectory='human_pose_estimation_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': ImagePatternArg('human-pose-estimation')}), device_cases('-d'), TestCase(options={'-m': ModelArg('human-pose-estimation-0001')}), )), - NativeDemo(name='interactive_face_detection_demo', test_cases=combine_cases( + NativeDemo(subdirectory='interactive_face_detection_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': ImagePatternArg('face-detection-adas')}), device_cases('-d', '-d_ag', '-d_em', '-d_lm', '-d_hp'), @@ -137,9 +125,7 @@ def device_cases(*args): # TODO: mask_rcnn_demo: no models.lst - NativeDemo(name='multi-channel-face-detection-demo', - subdirectory='multichannel_demo/face_detection', - test_cases=combine_cases( + NativeDemo(subdirectory='multi_channel/face_detection', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['face-detection-adas']}), device_cases('-d'), @@ -151,16 +137,14 @@ def device_cases(*args): ModelArg('face-detection-retail-0044')), )), - NativeDemo(name='multi-channel-human-pose-estimation-demo', - subdirectory='multichannel_demo/human_pose_estimation', - test_cases=combine_cases( + NativeDemo(subdirectory='multi_channel/human_pose_estimation', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['human-pose-estimation'], '-m': ModelArg('human-pose-estimation-0001')}), device_cases('-d'), )), - NativeDemo(name='object_detection_demo_ssd_async', test_cases=combine_cases( + NativeDemo(subdirectory='object_detection_demo_ssd_async', test_cases=combine_cases( TestCase(options={'-no_show': None}), [ TestCase(options={ @@ -194,7 +178,7 @@ def device_cases(*args): ModelArg('person-reidentification-retail-0079')), )), - NativeDemo(name='security_barrier_camera_demo', test_cases=combine_cases( + NativeDemo(subdirectory='security_barrier_camera_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': ImageDirectoryArg('vehicle-license-plate-detection-barrier')}), device_cases('-d', '-d_lpr', '-d_va'), @@ -203,7 +187,7 @@ def device_cases(*args): single_option_cases('-m_va', None, ModelArg('vehicle-attributes-recognition-barrier-0039')), )), - NativeDemo(name='segmentation_demo', test_cases=combine_cases( + NativeDemo(subdirectory='segmentation_demo', test_cases=combine_cases( device_cases('-d'), [ TestCase(options={ @@ -217,7 +201,7 @@ def device_cases(*args): ], )), - NativeDemo(name='smart_classroom_demo', test_cases=combine_cases( + NativeDemo(subdirectory='smart_classroom_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': ImagePatternArg('smart-classroom-demo'), '-m_fd': ModelArg('face-detection-adas-0001')}), @@ -239,7 +223,7 @@ def device_cases(*args): ], )), - NativeDemo(name='super_resolution_demo', test_cases=combine_cases( + NativeDemo(subdirectory='super_resolution_demo', test_cases=combine_cases( TestCase(options={'-i': ImageDirectoryArg('single-image-super-resolution')}), device_cases('-d'), TestCase(options={ @@ -247,7 +231,7 @@ def device_cases(*args): 
}), )), - NativeDemo(name='text_detection_demo', test_cases=combine_cases( + NativeDemo(subdirectory='text_detection_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-dt': 'video', '-i': ImagePatternArg('text-detection')}), device_cases('-d_td', '-d_tr'), @@ -259,7 +243,7 @@ def device_cases(*args): PYTHON_DEMOS = [ # TODO: 3d_segmentation_demo: no input data - PythonDemo(name='action_recognition', test_cases=combine_cases( + PythonDemo(subdirectory='action_recognition', test_cases=combine_cases( TestCase(options={'--no_show': None, '-i': ImagePatternArg('action-recognition')}), device_cases('-d'), [ @@ -277,7 +261,7 @@ def device_cases(*args): # TODO: face_recognition_demo: requires face gallery # TODO: image_retrieval_demo: current images does not suit the usecase, requires user defined gallery - PythonDemo(name='instance_segmentation_demo', test_cases=combine_cases( + PythonDemo(subdirectory='instance_segmentation_demo', test_cases=combine_cases( TestCase(options={'--no_show': None, '-i': ImagePatternArg('instance-segmentation'), '--delay': '1', @@ -289,7 +273,7 @@ def device_cases(*args): ModelArg('instance-segmentation-security-0083')), )), - PythonDemo(name='multi_camera_multi_person_tracking', test_cases=combine_cases( + PythonDemo(subdirectory='multi_camera_multi_person_tracking', test_cases=combine_cases( TestCase(options={'--no_show': None, '-i': [ImagePatternArg('multi-camera-multi-person-tracking'), ImagePatternArg('multi-camera-multi-person-tracking/repeated')], @@ -301,7 +285,7 @@ def device_cases(*args): ModelArg('person-reidentification-retail-0079')), )), - PythonDemo(name='object_detection_demo_ssd_async', test_cases=combine_cases( + PythonDemo(subdirectory='object_detection_demo_ssd_async', test_cases=combine_cases( TestCase(options={'--no_show': None, '-i': ImagePatternArg('object-detection-demo-ssd-async')}), device_cases('-d'), @@ -322,7 +306,7 @@ def device_cases(*args): # TODO: object_detection_demo_yolov3_async: no models.lst - PythonDemo(name='segmentation_demo', test_cases=combine_cases( + PythonDemo(subdirectory='segmentation_demo', test_cases=combine_cases( device_cases('-d'), [ TestCase(options={ diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index 7cd6c2893ee..36254421632 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -15,6 +15,29 @@ from args import image_net_arg IMAGE_SEQUENCES = { + 'action-recognition': [ + image_net_arg('00000001'), + image_net_arg('00000002'), + image_net_arg('00000003'), + image_net_arg('00000004'), + image_net_arg('00000005'), + image_net_arg('00000006'), + image_net_arg('00000007'), + image_net_arg('00000008'), + image_net_arg('00000009'), + image_net_arg('00000010'), + image_net_arg('00000011'), + image_net_arg('00000012'), + image_net_arg('00000013'), + image_net_arg('00000014'), + image_net_arg('00000015'), + image_net_arg('00000016'), + image_net_arg('00000017'), + image_net_arg('00000018'), + image_net_arg('00000019'), + image_net_arg('00000020'), + ], + 'face-detection-adas': [ image_net_arg('00000002'), image_net_arg('00000032'), @@ -54,6 +77,51 @@ image_net_arg('00048311'), ], + 'instance-segmentation': [ + image_net_arg('00000001'), + image_net_arg('00000002'), + image_net_arg('00000002'), # the demo has simple reid + image_net_arg('00000003'), + image_net_arg('00000004'), + image_net_arg('00000008'), + image_net_arg('00000010'), + image_net_arg('00000017'), + image_net_arg('00000019'), + image_net_arg('00000020'), + ], + + 
'multi-camera-multi-person-tracking': [ + image_net_arg('00000002'), + image_net_arg('00000032'), + image_net_arg('00017291'), + image_net_arg('00017293'), + image_net_arg('00040547'), + image_net_arg('00000002'), + image_net_arg('00000032'), + image_net_arg('00017291'), + image_net_arg('00017293'), + image_net_arg('00040547'), + image_net_arg('00000002'), + ], + + 'multi-camera-multi-person-tracking/repeated': [image_net_arg('00000002')] * 11, + + 'object-detection-demo-ssd-async': [ + image_net_arg('00000001'), + image_net_arg('00000002'), + image_net_arg('00000003'), + image_net_arg('00000004'), + image_net_arg('00000005'), + image_net_arg('00000006'), + image_net_arg('00000007'), + image_net_arg('00000008'), + image_net_arg('00000014'), + image_net_arg('00000018'), + image_net_arg('00000022'), + image_net_arg('00000023'), + image_net_arg('00000032'), + ], + 'person-detection-retail': [ image_net_arg('00000002'), image_net_arg('00000002'), @@ -147,72 +215,4 @@ image_net_arg('00037128'), image_net_arg('00048316'), ], - - 'action-recognition': [ - image_net_arg('00000001'), - image_net_arg('00000002'), - image_net_arg('00000003'), - image_net_arg('00000004'), - image_net_arg('00000005'), - image_net_arg('00000006'), - image_net_arg('00000007'), - image_net_arg('00000008'), - image_net_arg('00000009'), - image_net_arg('00000010'), - image_net_arg('00000011'), - image_net_arg('00000012'), - image_net_arg('00000013'), - image_net_arg('00000014'), - image_net_arg('00000015'), - image_net_arg('00000016'), - image_net_arg('00000017'), - image_net_arg('00000018'), - image_net_arg('00000019'), - image_net_arg('00000020'), - ], - - 'instance-segmentation': [ - image_net_arg('00000001'), - image_net_arg('00000002'), - image_net_arg('00000002'), # the demo has simple reid - image_net_arg('00000003'), - image_net_arg('00000004'), - image_net_arg('00000008'), - image_net_arg('00000010'), - image_net_arg('00000017'), - image_net_arg('00000019'), - image_net_arg('00000020'), - ], - - 'multi-camera-multi-person-tracking': [ - image_net_arg('00000002'), - image_net_arg('00000032'), - image_net_arg('00017291'), - image_net_arg('00017293'), - image_net_arg('00040547'), - image_net_arg('00000002'), - image_net_arg('00000032'), - image_net_arg('00017291'), - image_net_arg('00017293'), - image_net_arg('00040547'), - image_net_arg('00000002'), - ], - - 'multi-camera-multi-person-tracking/repeated': [image_net_arg('00000002')] * 11, - - 'object-detection-demo-ssd-async': [ - image_net_arg('00000001'), - image_net_arg('00000002'), - image_net_arg('00000003'), - image_net_arg('00000004'), - image_net_arg('00000005'), - image_net_arg('00000006'), - image_net_arg('00000007'), - image_net_arg('00000008'), - image_net_arg('00000014'), - image_net_arg('00000018'), - image_net_arg('00000022'), - image_net_arg('00000023'), - image_net_arg('00000032'), - ], } diff --git a/demos/tests/run_tests.py b/demos/tests/run_tests.py index 203dc1faf75..8b021eabf84 100755 --- a/demos/tests/run_tests.py +++ b/demos/tests/run_tests.py @@ -77,8 +77,6 @@ def main(): print('Testing {}...'.format(demo.full_name)) print() - demo_source_dir = demos_dir / demo.subdirectory - with tempfile.TemporaryDirectory() as temp_dir: dl_dir = Path(temp_dir) / 'models' @@ -89,7 +87,7 @@ def main(): [ sys.executable, '--', str(auto_tools_dir / 'downloader.py'), '--output_dir', str(dl_dir), '--cache_dir', str(args.downloader_cache_dir), - '--list', str(demo_source_dir / 'models.lst'), + '--list', str(demo.models_lst_path(demos_dir)), ], 
stderr=subprocess.STDOUT, universal_newlines=True) except subprocess.CalledProcessError as e: @@ -102,7 +100,7 @@ def main(): subprocess.check_output( [ sys.executable, '--', str(auto_tools_dir / 'converter.py'), - '--download_dir', str(dl_dir), '--list', str(demo_source_dir / 'models.lst'), '--jobs', 'auto', + '--download_dir', str(dl_dir), '--list', str(demo.models_lst_path(demos_dir)), '--jobs', 'auto', ] + ([] if args.mo is None else ['--mo', str(args.mo)]), stderr=subprocess.STDOUT, universal_newlines=True) except subprocess.CalledProcessError as e: @@ -114,7 +112,7 @@ def main(): print() arg_context = ArgContext( - source_dir=demo_source_dir, + source_dir=demos_dir / demo.subdirectory, dl_dir=dl_dir, image_sequence_dir=Path(temp_dir) / 'image_seq', image_sequences=IMAGE_SEQUENCES, From 2df365d95e41ac5ec840e8e41fc0a43f8b6509d6 Mon Sep 17 00:00:00 2001 From: aalborov Date: Tue, 15 Oct 2019 16:02:50 +0300 Subject: [PATCH 136/927] Documentation review --- CONTRIBUTING.md | 188 ++++++++++++++++++++++++++---------------------- 1 file changed, 103 insertions(+), 85 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 3430fc5c955..69f5f577bac 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,84 +1,98 @@ -# How to contribute model to Open Model Zoo +# How to Contribute Models to Open Model Zoo -We appreciate your intention to contribute model to OpenVINO™ Open Model Zoo (OMZ). This guide would help you and explain main issues. OMZ is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. Please note, that we accept models under permissive licenses, as **MIT**, **Apache 2.0**, **BSD-3-Clause**, etc. Otherwise, it may take longer time to get approve (or even refuse) for your model. +We appreciate your intention to contribute model to the OpenVINO™ Open Model Zoo (OMZ). OMZ is licensed under the Apache\* License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. Note that we accept models under permissive licenses, such as **MIT**, **Apache 2.0**, and **BSD-3-Clause**. Otherwise, it might take longer time to get your model approved. -Nowadays OMZ supports models from frameworks: +Frameworks supported by the Open Model Zoo: * Caffe\* * Caffe2\* (via conversion to ONNX\*) * TensorFlow\* * PyTorch\* (via conversion to ONNX\*) * MXNet\* -## Pull request requirements +## Pull Request Requirements -Contribution to OMZ comes down to creating pull request (PR) in this repository. Please use `develop` branch when creating your PR. Pull request is strictly formalized and must contain: -* configuration file `model.yml` (learn more in [Configuration file](#configuration-file) section) -* documentation of model in markdown format (learn more in [Documentation](#documentation) section) -* accuracy validation configuration file (learn more in [Accuracy Validation](#accuracy-validation) section) +To contribute to OMZ, create a pull request (PR) in this repository using the `develop` branch. +Pull requests are strictly formalized and are reviewed by the OMZ maintainers for consistence and legal compliance. 
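For contributors new to the workflow, a minimal sketch of opening such a PR is shown below; it assumes your fork is the `origin` remote, this repository is added as the `upstream` remote, and `add-my-model` is only a placeholder branch name.

```
# branch off the develop branch of this repository (assumed remote name: upstream)
git fetch upstream
git checkout -b add-my-model upstream/develop
# ...add the files described below and commit them...
# push to your fork (assumed remote name: origin) and open the PR against develop
git push origin add-my-model
```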
+ +Each PR must contain: +* [configuration file `model.yml`](#configuration-file) +* [documentation of model in markdown format](#documentation) +* [accuracy validation configuration file](#accuracy-validation) * license added to [tools/downloader/license.txt](tools/downloader/license.txt) -* (*optional*) demo (learn more about it in [Demo](#demo) section) +* (*optional*) [demo](#demo) + +Follow the rules in the sections below before submitting a pull request. + +### Model Name Name your model in OMZ according to the following rules: -- name must be consistent with original name, but complete match is not necessary -- use lowercase -- spaces are not allowed in the name, use `-` or `_` (`-` is preferable) as delimiters instead -- suffix to model name, according to origin framework (see **`framework`** description in [configuration file](#configuration-file) section), if you adding reimplementation of existing model in OMZ from another framework +- Use a name that is consistent with an original name, but complete match is not necessary +- Use lowercase +- Use `-`(preferable) or `_` as delimiters, for spaces are not allowed +- Include a suffix according to an original framework (see **`framework`** description in the [configuration file](#configuration-file) section for examples), if you add a reimplementation of an existing model in OMZ from another framework -This name will be used for downloading, converting, etc. -Example: -``` -resnet-50-pytorch -mobilenet-v2-1.0-224 -``` +This name will be used for downloading, converting, and other operations. +Examples of model names: +- `resnet-50-pytorch` +- `mobilenet-v2-1.0-224` + +### Files Location + +Place your files as shown in the table below: +File | Directory +---|--- +configuration file
documentation file |`models/public/` +validation configuration file|`tools/accuracy_checker/configs` +demo file|`demos` + +### Tests + +Your PR must pass next tests: +* Model is downloadable by the `tools/downloader/downloader.py` script. See [Configuration file](#configuration-file) for details. +* Model is convertible by the `tools/downloader/converter.py` script. See [Model conversion](#model-conversion) for details. +* Model is usable by demo or sample and provides adequate results. See [Demo](#demo) for details. +* Model passes accuracy validation. See [Accuracy validation](#accuracy-validation) for details. -Files location: -* the configuration and documentation files must be in the `models/public/` directory -* the validation configuration file must be in the `tools/accuracy_checker/configs` directory -* the demo must be in the `demos` directory -This PR must pass next tests: -* model is downloadable by `tools/downloader/downloader.py` script (see [Configuration file](#configuration-file) for details) -* model is convertible by `tools/downloader/converter.py` script (see [Model conversion](#model-conversion) for details) -* model can be used by demo or sample and provides adequate results (see [Demo](#demo) for details) -* model passes accuracy validation (see [Accuracy validation](#accuracy-validation) for details) +### PR Rejection -At the end, your PR will be reviewed by OMZ maintainers for consistence and legal compliance. +Your PR may be rejected in some cases, for example: +* If a license is inappropriate (such as GPL-like licenses). +* If a dataset is inaccessible. +* If the PR fails one of the tests above. -Your PR can be rejected in some cases, e.g.: -* inappropriate license (e.g. GPL-like licenses) -* inaccessible dataset -* PR fails one of the test above +## Configuration File -## Configuration file +The model configuration file contains information about model: what it is, how to download it, and how to convert it to the IR format. This information must be specified in the `model.yml` file that must be located in the model subfolder. -The model configuration file contains information about model: what it is, how to download it and how to convert it to IR format. This information must be specified in `model.yml` file, which must be located in the model subfolder. Let's look closer to the file content. +Refer to the detailed descriptions of each file provided below. **`description`** -Description of the model. Must match with the description from model [documentation](#documentation). +Description of the model. Must match with the description from the model [documentation](#documentation). **`task_type`** -Model task class, see [here](tools/downloader/README.md#model-information-dumper-usage) for details. If the task class of your model is absent, please add new to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. +Model task class, see [Model information dumper usage](tools/downloader/README.md#model-information-dumper-usage) for details. If there is no task class of your model, add a new one to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. **`files`** -> Before filling this section, make sure that the model can be downloaded either via the direct HTTP(S) link or from Google Drive\*. +> **NOTE**: Before filling this section, make sure that the model can be downloaded either via a direct HTTP(S) link or from Google Drive\*. -Downlodable files. 
Each file is described by: +Downloadable files. Each file is described by: -* `name` - sets file name after downloading -* `size` - sets file size -* `sha256` - sets file hash sum -* `source` - sets direct link to file *OR* describes file access parameters +* `name` - sets a file name after downloading +* `size` - sets a file size +* `sha256` - sets a file hash sum +* `source` - sets a direct link to a file *OR* describes a file access parameters -> You may obtain hash sum using `sha256sum ` command on Linux\*. +> **TIP**: You can obtain a hash sum using the `sha256sum ` command on Linux\*. -If file is located on Google Drive\*, section `source` must contain: +If file is located on Google Drive\*, the `source` section must contain: - `$type: google_drive` - `id` file ID on Google Drive\* -> **NOTE:** if file is on GitHub\*, use the specific file version. +> **NOTE:** If file is on GitHub\*, use the specific file version. **`postprocessing`** (*optional*) @@ -86,23 +100,23 @@ Post processing of the downloaded files. For unpacking archive: - `$type: unpack_archive` -- `file` archive file name -- `format` archive format (zip | tar | gztar | bztar | xztar) +- `file` — Archive file name +- `format` — Archive format (zip | tar | gztar | bztar | xztar) For replacement operation: - `$type: regex_replace` -- `file` name of file where replacement must be executed -- `pattern` regular expression ([learn more](https://docs.python.org/3/library/re.html)) -- `replacement` replacement string -- `count` (*optional*) maximum number of pattern occurrences to be replaced +- `file` — Name of file to run replacement in +- `pattern` — [Regular expression](https://docs.python.org/3/library/re.html) +- `replacement` — Replacement string +- `count` (*optional*) — Maximum number of pattern occurrences to be replaced **`conversion_to_onnx_args`** (*optional*) -List of onnx conversion parameters, see `model_optimizer_args` for details. Applicable for Caffe2\* and PyTorch\* frameworks. +List of ONNX\* conversion parameters, see `model_optimizer_args` for details. Applicable for Caffe2\* and PyTorch\* frameworks. **`model_optimizer_args`** -Conversion parameters (learn more in [Model conversion](#model-conversion) section), e.g.: +Conversion parameters (learn more in the [Model conversion](#model-conversion) section). For example: ``` - --input=data - --mean_values=data[127.5] @@ -111,11 +125,11 @@ Conversion parameters (learn more in [Model conversion](#model-conversion) secti - --output=prob - --input_model=$conv_dir/googlenet-v3.onnx ``` -> **NOTE:** no need to specify `framework`, `data_type`, `model_name` and `output_dir`, since they are deduced automatically. +> **NOTE:** Do not specify `framework`, `data_type`, `model_name` and `output_dir`, since they are deduced automatically. **`framework`** -Framework of the original model (`caffe`, `dldt`, `mxnet`, `pytorch`, `tf`, etc.). +Framework of the original model. Examples: `caffe`, `dldt`, `mxnet`, `pytorch`, `tf`. **`license`** @@ -123,7 +137,7 @@ Path to the model license. ### Example -In this [example](models/public/densenet-121-tf/model.yml) classification model DenseNet-121\*, pretrained in TensorFlow\*, is downloading from Google Drive\* as archive. +This example shows how to download the [classification model DenseNet-121*](models/public/densenet-121-tf/model.yml) pretrained in TensorFlow\* from Google Drive\* as an archive. 
``` description: >- @@ -155,52 +169,56 @@ framework: tf license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICENSE ``` ---- -*After this step you will obtain **model.yml** file* +*After this step you get the **model.yml** file.* -## Model conversion +## Model Conversion -Deep Learning Inference Engine (IE) supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After successful conversion you will get model in IR format `*.xml` representing net graph and `*.bin` containing net parameters. +Deep Learning Inference Engine (IE) supports models in the Intermediate Representation (IR) format. A model from any supported framework can be converted to IR using the Model Optimizer tool included in the OpenVINO™ toolkit. Find more information about conversion in the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). After a successful conversion you get a model in the IR format, with the `*.xml` file representing the net graph and the `*.bin` file containing the net parameters. -> **NOTE 1**: image pre-processing parameters (mean and scale) should be built into converted model to simplify model usage. +> **NOTE 1**: Image preprocessing parameters (mean and scale) must be built into a converted model to simplify model usage. -> **NOTE 2**: if model input is a color image, color channel order should be `BGR`. +> **NOTE 2**: If a model input is a color image, color channel order should be `BGR`. -*After this step you`ll get **conversion parameters** for Model Optimizer.* +*After this step you get **conversion parameters** for the Model Optimizer.* ## Demo -A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). +A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). Demos are required to support the following keys: -- `-i ""` Required. Input to process. -- `-m ""` Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. -- `-d ""` Optional. Default is CPU. -- `-no_show` Optional. Do not visualize inference results. +Keys | Explanation +--|-- + `-i ""` | Required. Input to process. + `-m ""` | Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. + `-d ""` | Optional. Default is CPU. + `-no_show` | Optional. Do not visualize inference results. -> Note: For Python is preferable to use `-` instead of `_` as word separators (e.g. 
`-no-show`) +> **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. Example: `-no-show`. -Also you can add any other necessary parameters. +You can also add any other necessary parameters. -If you add new demo, please provide auto-testing support too: +If you add a new demo, provide autotesting support as well: - add demo launch parameters in [demos/tests/cases.py](demos/tests/cases.py) - prepare list of input images in [demos/tests/image_sequences.py](demos/tests/image_sequences.py) -*After this step you'll get **demo** for your model (if no demo was available)* +___ +*After this step you get a **demo** for your model (if no demo was available).* -## Accuracy validation +## Accuracy Validation -Accuracy validation can be performed by the [Accuracy Checker](./tools/accuracy_checker) tool. This tool can use IE to run converted model or original framework to run original model. Accuracy Checker supports lots of datasets, metrics and preprocessing options, what makes validation quite simple (if task is supported by tool). You need only create configuration file, which contain necessary parameters to do accuracy validation (specify dataset and annotation, pre and post processing parameters, accuracy metric to compute and so on). Find more details [here](./tools/accuracy_checker#testing-new-models). +Accuracy validation can be performed by the [Accuracy Checker](./tools/accuracy_checker) tool. This tool can use either IE to run a converted model, or an original framework to run an original model. Accuracy Checker supports lots of datasets, metrics and preprocessing options, what simplifies validation if a task is supported by the tool. You only need to create a configuration file that contains necessary parameters for accuracy validation (specify a dataset and annotation, pre- and post-processing parameters, accuracy metrics to compute and so on). For details, refer to [Testing new models](./tools/accuracy_checker#testing-new-models). -If model uses dataset which is unsupported by Accuracy Checker, you also must provide link to it. Please notice this issue in PR description. Don't forget about dataset license too (see [above](#how-to-contribute-model-to-open-model-zoo)). +If a model uses a dataset which is not supported by the Accuracy Checker, you also must provide the license and the link to it and mention it in the PR description. -When the configuration file is ready, you must run Accuracy Checker to obtain metric results. If they match your results, that means conversion was successful and Accuracy Checker fully supports your model, metric and dataset. If no - recheck [conversion](#model-conversion) parameters or validation configuration file. +When the configuration file is ready, you must run the Accuracy Checker to obtain metric results. If they match your results, that means conversion was successful and the Accuracy Checker fully supports your model, metric and dataset. Otherwise, recheck the[conversion](#model-conversion) parameters or the validation configuration file. 
-*After this step you will get accuracy validation configuration file - **.yml*** +___ +*After this step you get the accuracy validation configuration file **.yml**.* ### Example -Let use one of the files from `tools/accuracy_checker/configs`, for example, validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml): +This example uses one of the files from `tools/accuracy_checker/configs` — validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml)\*: ``` models: - name: alexnet-cf @@ -255,25 +273,25 @@ models: ## Documentation -Documentation is very important part of model contribution, it helps to better understand possible use of the model. Documentation must be named after the name of the model. +Documentation is a very important part of model contribution as it helps to better understand the possible usage of the model. Documentation must be named in accordance with the name of the model. The documentation should contain: -* description of model +* description of a model * main purpose * features - * references to paper or/and source + * references to a paper or/and a source * model specification * type * framework * GFLOPs (*if available*) * number of parameters (*if available*) -* validation dataset description and/or link -* main accuracy values (also description of metric) +* validation dataset description and/or a link +* main accuracy values (also description of a metric) * detailed description of input and output for original and converted models -Learn the detailed structure and headers naming convention from any model documentation, e.g. [alexnet](./models/public/alexnet/alexnet.md). +Learn the detailed structure and headers naming convention from any model documentation (for example, [alexnet](./models/public/alexnet/alexnet.md)). --- -*After this step you will obtain **.md** - documentation file* +*After this step you get **.md** — the documentation file.* ## Legal Information From ab1f25ddb97ca33b88715a5a14be66472224223c Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 15 Oct 2019 16:18:05 +0300 Subject: [PATCH 137/927] fix AC configs (#513) --- .../accuracy_checker/accuracy_checker/metrics/reid.py | 10 ++++++---- tools/accuracy_checker/configs/ctpn.yml | 4 ++-- .../configs/head-pose-estimation-adas-0001.yml | 2 +- .../configs/semantic-segmentation-adas-0001.yml | 8 -------- tools/accuracy_checker/configs/text-detection-0003.yml | 4 ++-- tools/accuracy_checker/configs/text-detection-0004.yml | 2 +- .../configs/text-image-super-resolution-0001.yml | 3 +++ tools/accuracy_checker/dataset_definitions.yml | 3 ++- 8 files changed, 17 insertions(+), 19 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/reid.py b/tools/accuracy_checker/accuracy_checker/metrics/reid.py index c006e2459a8..b7fe9e825db 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/reid.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/reid.py @@ -184,17 +184,19 @@ class PairwiseAccuracySubsets(FullDatasetEvaluationMetric): @classmethod def parameters(cls): - parameters = super().parameters() - parameters.update({ + params = super().parameters() + params.update({ 'subset_number': NumberField( optional=True, min_value=1, value_type=int, default=10, description="Number of subsets for separating." 
) }) - return parameters + return params def configure(self): self.subset_num = self.get_value_from_config('subset_number') - self.accuracy_metric = PairwiseAccuracy(self.config, self.dataset) + config_copy = self.config.copy() + config_copy.pop('subset_number') + self.accuracy_metric = PairwiseAccuracy(config_copy, self.dataset) def evaluate(self, annotations, predictions): subset_results = [] diff --git a/tools/accuracy_checker/configs/ctpn.yml b/tools/accuracy_checker/configs/ctpn.yml index e34fc18032e..1789da89396 100644 --- a/tools/accuracy_checker/configs/ctpn.yml +++ b/tools/accuracy_checker/configs/ctpn.yml @@ -4,7 +4,7 @@ models: - framework: dlsdk tags: - FP32 - model: public/ctpn/FP32/ctpn.bin + model: public/ctpn/FP32/ctpn.xml weights: public/ctpn/FP32/ctpn.bin adapter: type: ctpn_text_detection @@ -15,7 +15,7 @@ models: - framework: dlsdk tags: - FP16 - model: public/ctpn/FP16/ctpn.bin + model: public/ctpn/FP16/ctpn.xml weights: public/ctpn/FP16/ctpn.bin adapter: type: ctpn_text_detection diff --git a/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml b/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml index f67513faf30..d15ed015371 100644 --- a/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml +++ b/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml @@ -6,7 +6,7 @@ models: tags: - FP32 model: intel/head-pose-estimation-adas-0001/FP32/head-pose-estimation-adas-0001.xml - weights: intel/ead-pose-estimation-adas-0001/FP32/head-pose-estimation-adas-0001.bin + weights: intel/head-pose-estimation-adas-0001/FP32/head-pose-estimation-adas-0001.bin adapter: type: head_pose angle_yaw: angle_y_fc diff --git a/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml b/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml index 742339e09fa..2effe8fb568 100644 --- a/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml +++ b/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml @@ -18,14 +18,6 @@ models: adapter: segmentation cpu_extensions: AUTO - - framework: dlsdk - tags: - - INT8 - model: intel/semantic-segmentation-adas-0001/INT8/semantic-segmentation-adas-0001.xml - weights: intel/semantic-segmentation-adas-0001/INT8/semantic-segmentation-adas-0001.bin - adapter: segmentation - cpu_extensions: AUTO - datasets: - name: semantic_segmentation_adas diff --git a/tools/accuracy_checker/configs/text-detection-0003.yml b/tools/accuracy_checker/configs/text-detection-0003.yml index 6fd09b621a0..56f2abdff6e 100644 --- a/tools/accuracy_checker/configs/text-detection-0003.yml +++ b/tools/accuracy_checker/configs/text-detection-0003.yml @@ -22,7 +22,7 @@ models: model: intel/text-detection-0003/FP16/text-detection-0003.xml weights: intel/text-detection-0003/FP16/text-detection-0003.bin adapter: - type: text_detection + type: pixel_link_text_detection pixel_link_out: model/link_logits_/add pixel_class_out: model/segm_logits/add pixel_class_confidence_threshold: 0.8 @@ -37,7 +37,7 @@ models: model: intel/text-detection-0003/INT8/text-detection-0003.xml weights: intel/text-detection-0003/INT8/text-detection-0003.bin adapter: - type: text_detection + type: pixel_link_text_detection pixel_link_out: model/link_logits_/add pixel_class_out: model/segm_logits/add pixel_class_confidence_threshold: 0.8 diff --git a/tools/accuracy_checker/configs/text-detection-0004.yml b/tools/accuracy_checker/configs/text-detection-0004.yml index 75dee878114..f7b69e9cef2 100644 --- 
a/tools/accuracy_checker/configs/text-detection-0004.yml +++ b/tools/accuracy_checker/configs/text-detection-0004.yml @@ -21,7 +21,7 @@ models: tags: - FP16 model: intel/text-detection-0004/FP16/text-detection-0004.xml - weights: intel/text-detection-0004/dldt/FP16/text-detection-0004.bin + weights: intel/text-detection-0004/FP16/text-detection-0004.bin adapter: type: pixel_link_text_detection pixel_link_out: model/link_logits_/add diff --git a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml index 37fabdffbca..bec1590d9d2 100644 --- a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml +++ b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml @@ -28,3 +28,6 @@ models: adapter: type: super_resolution cpu_extensions: AUTO + + datasets: + - name: text_super_resolution_x3 diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index ccdb4f704a4..fa2103fbec0 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -336,6 +336,7 @@ datasets: annotation: icdar15_detection.pickle - name: ICDAR2013 + data_source: ICDAR13_REC/Challenge2_Test_Task3_Images annotation_conversion: converter: icdar13_recognition annotation_file: ICDAR13_REC/gt/gt.txt.fixed.alfanumeric @@ -474,7 +475,7 @@ datasets: - name: gaze_estimation_dataset - data_source: gaze-estimation + data_source: gaze_estimation annotation: gaze_estimation.pickle reader: From 00653aaaf679270461fa48d13b70bb8622dbb0bf Mon Sep 17 00:00:00 2001 From: Alina Alborova Date: Tue, 15 Oct 2019 16:18:52 +0300 Subject: [PATCH 138/927] fix --- CONTRIBUTING.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 69f5f577bac..8bf1c1276f8 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -39,6 +39,7 @@ Examples of model names: ### Files Location Place your files as shown in the table below: + File | Directory ---|--- configuration file
documentation file |`models/public/` From 1c473b5c873f0d03f6d68cb3c1a0740eb1617e4d Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 15 Oct 2019 16:32:08 +0300 Subject: [PATCH 139/927] AC: fix configs (#514) --- tools/accuracy_checker/dataset_definitions.yml | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index fa2103fbec0..c3ae70d86ab 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -344,10 +344,10 @@ datasets: dataset_meta: icdar13_recognition.json - name: market1501 - data_source: Market1501-person-reidentification/Market-1501-v15.09.15 + data_source: Market-1501-v15.09.15 annotation_conversion: converter: market1501_reid - data_dir: Market1501-person-reidentification/Market-1501-v15.09.15 + data_dir: Market-1501-v15.09.15 annotation: market1501_reid.pickle - name: vgg2face @@ -474,7 +474,6 @@ datasets: size: 60 - name: gaze_estimation_dataset - data_source: gaze_estimation annotation: gaze_estimation.pickle @@ -488,5 +487,5 @@ datasets: - name: handwritten_score_recognition data_source: ILSVRC2012_img_val - annotation: hadwritten_score_recognition.pickle - dataset_meta: hadwritten_score_recognition.json + annotation: handwritten_score_recognition.pickle + dataset_meta: handwritten_score_recognition.json From 5c00f9656e10091cf5c9d3c38877d39c17abc826 Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 15 Oct 2019 17:10:45 +0300 Subject: [PATCH 140/927] AC: remove extra INT8 models from configs (#515) --- .../configs/face-reidentification-retail-0095.yml | 8 -------- .../configs/text-detection-0004.yml | 15 --------------- .../configs/text-recognition-0012.yml | 8 -------- 3 files changed, 31 deletions(-) diff --git a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml index ef8ce00885d..4abb1b41393 100644 --- a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml +++ b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml @@ -16,14 +16,6 @@ models: weights: intel/face-reidentification-retail-0095/FP16/face-reidentification-retail-0095.bin adapter: reid - - framework: dlsdk - tags: - - INT8 - device: CPU - model: intel/face-reidentification-retail-0095/INT8/face-reidentification-retail-0095.xml - weights: intel/face-reidentification-retail-0095/INT8/face-reidentification-retail-0095.bin - adapter: reid - datasets: - name: lfw diff --git a/tools/accuracy_checker/configs/text-detection-0004.yml b/tools/accuracy_checker/configs/text-detection-0004.yml index f7b69e9cef2..3dc506e3154 100644 --- a/tools/accuracy_checker/configs/text-detection-0004.yml +++ b/tools/accuracy_checker/configs/text-detection-0004.yml @@ -32,21 +32,6 @@ models: min_height: 10 cpu_extensions: AUTO - - framework: dlsdk - tags: - - INT8 - device: CPU - model: intel/text-detection-0004/INT8/text-detection-0004.xml - weights: intel/text-detection-0004/INT8/text-detection-0004.bin - adapter: - type: pixel_link_text_detection - pixel_link_out: model/link_logits_/add - pixel_class_out: model/segm_logits/add - pixel_class_confidence_threshold: 0.8 - pixel_link_confidence_threshold: 0.8 - min_area: 300 - min_height: 10 - datasets: - name: ICDAR2015 diff --git a/tools/accuracy_checker/configs/text-recognition-0012.yml b/tools/accuracy_checker/configs/text-recognition-0012.yml index cc95667634f..28154833815 100644 --- 
a/tools/accuracy_checker/configs/text-recognition-0012.yml +++ b/tools/accuracy_checker/configs/text-recognition-0012.yml @@ -16,14 +16,6 @@ models: weights: intel/text-recognition-0012/FP16/text-recognition-0012.bin adapter: beam_search_decoder - - framework: dlsdk - tags: - - INT8 - device: CPU - model: intel/text-recognition-0012/INT8/text-recognition-0012.xml - weights: intel/text-recognition-0012/INT8/text-recognition-0012.bin - adapter: beam_search_decoder - datasets: - name: ICDAR2013 From 622f8d2199b11744a80078e0a761c1de849ed4ec Mon Sep 17 00:00:00 2001 From: Alina Alborova Date: Tue, 15 Oct 2019 17:22:57 +0300 Subject: [PATCH 141/927] Update CONTRIBUTING.md --- CONTRIBUTING.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 8bf1c1276f8..983c11d4fd1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -14,7 +14,7 @@ Frameworks supported by the Open Model Zoo: To contribute to OMZ, create a pull request (PR) in this repository using the `develop` branch. Pull requests are strictly formalized and are reviewed by the OMZ maintainers for consistence and legal compliance. -Each PR must contain: +Each PR contributing a model must contain: * [configuration file `model.yml`](#configuration-file) * [documentation of model in markdown format](#documentation) * [accuracy validation configuration file](#accuracy-validation) @@ -188,12 +188,10 @@ A demo shows the main idea of how to infer a model using IE. If your model solve Demos are required to support the following keys: -Keys | Explanation ---|-- - `-i ""` | Required. Input to process. - `-m ""` | Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. - `-d ""` | Optional. Default is CPU. - `-no_show` | Optional. Do not visualize inference results. + - `-i ""`: Required. Input to process. + - `-m ""`: Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. + - `-d ""`: Optional. Default is CPU. + - `-no_show`: Optional. Do not visualize inference results. > **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. Example: `-no-show`. From a1c865d60c0112c865d793af1669922e3f2b7c20 Mon Sep 17 00:00:00 2001 From: Alina Alborova Date: Tue, 15 Oct 2019 17:31:39 +0300 Subject: [PATCH 142/927] fix the link name --- CONTRIBUTING.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 983c11d4fd1..98b39a8694e 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -74,7 +74,7 @@ Description of the model. Must match with the description from the model [docume **`task_type`** -Model task class, see [Model information dumper usage](tools/downloader/README.md#model-information-dumper-usage) for details. If there is no task class of your model, add a new one to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. +[Model task class](tools/downloader/README.md#model-information-dumper-usage). If there is no task class of your model, add a new one to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. 
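For illustration only — the snippet below is not part of the patch above, and the actual structure and contents of `KNOWN_TASK_TYPES` in `tools/downloader/common.py` may differ — registering a new task class is assumed to be a one-line addition to that collection:

```
# tools/downloader/common.py (illustrative excerpt; real contents may differ)
KNOWN_TASK_TYPES = {
    'classification',
    'detection',
    'semantic_segmentation',
    'my_new_task_type',  # hypothetical task class added together with a contributed model
}
```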
**`files`** From c9f9c435a56f48d9517944ca85733bc394c25fbd Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Tue, 15 Oct 2019 19:41:29 +0300 Subject: [PATCH 143/927] demos: multi_channel_face_detection->multi_channel_face_detection_demo, multi_channel_human_pose_estimation->multi_channel_human_pose_estimation_demo --- demos/multi_channel/CMakeLists.txt | 4 ++-- .../{face_detection => face_detection_demo}/CMakeLists.txt | 2 +- .../{face_detection => face_detection_demo}/README.md | 0 .../{face_detection => face_detection_demo}/main.cpp | 0 .../{face_detection => face_detection_demo}/models.lst | 0 .../multichannel_face_detection_params.hpp | 0 .../CMakeLists.txt | 2 +- .../README.md | 0 .../human_pose.cpp | 0 .../human_pose.hpp | 0 .../main.cpp | 0 .../models.lst | 0 .../peak.cpp | 0 .../peak.hpp | 0 .../postprocess.cpp | 0 .../postprocess.hpp | 0 .../postprocessor.cpp | 0 .../postprocessor.hpp | 0 .../render_human_pose.cpp | 0 .../render_human_pose.hpp | 0 demos/tests/cases.py | 4 ++-- 21 files changed, 6 insertions(+), 6 deletions(-) rename demos/multi_channel/{face_detection => face_detection_demo}/CMakeLists.txt (97%) rename demos/multi_channel/{face_detection => face_detection_demo}/README.md (100%) rename demos/multi_channel/{face_detection => face_detection_demo}/main.cpp (100%) rename demos/multi_channel/{face_detection => face_detection_demo}/models.lst (100%) rename demos/multi_channel/{face_detection => face_detection_demo}/multichannel_face_detection_params.hpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/CMakeLists.txt (97%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/README.md (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/human_pose.cpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/human_pose.hpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/main.cpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/models.lst (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/peak.cpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/peak.hpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/postprocess.cpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/postprocess.hpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/postprocessor.cpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/postprocessor.hpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/render_human_pose.cpp (100%) rename demos/multi_channel/{human_pose_estimation => human_pose_estimation_demo}/render_human_pose.hpp (100%) diff --git a/demos/multi_channel/CMakeLists.txt b/demos/multi_channel/CMakeLists.txt index 0dd412fbfe1..e942cfdc9c5 100644 --- a/demos/multi_channel/CMakeLists.txt +++ b/demos/multi_channel/CMakeLists.txt @@ -21,5 +21,5 @@ if(MULTICHANNEL_DEMO_USE_NATIVE_CAM) endif() add_subdirectory(common) -add_subdirectory(face_detection) -add_subdirectory(human_pose_estimation) +add_subdirectory(face_detection_demo) +add_subdirectory(human_pose_estimation_demo) diff --git a/demos/multi_channel/face_detection/CMakeLists.txt b/demos/multi_channel/face_detection_demo/CMakeLists.txt similarity index 97% rename from 
demos/multi_channel/face_detection/CMakeLists.txt rename to demos/multi_channel/face_detection_demo/CMakeLists.txt index 9a808d363fb..8f2ddc070c3 100644 --- a/demos/multi_channel/face_detection/CMakeLists.txt +++ b/demos/multi_channel/face_detection_demo/CMakeLists.txt @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -set(TARGET_NAME "multi_channel_face_detection") +set(TARGET_NAME "multi_channel_face_detection_demo") if( BUILD_DEMO_NAME AND NOT ${BUILD_DEMO_NAME} STREQUAL ${TARGET_NAME} ) message(STATUS "DEMO ${TARGET_NAME} SKIPPED") diff --git a/demos/multi_channel/face_detection/README.md b/demos/multi_channel/face_detection_demo/README.md similarity index 100% rename from demos/multi_channel/face_detection/README.md rename to demos/multi_channel/face_detection_demo/README.md diff --git a/demos/multi_channel/face_detection/main.cpp b/demos/multi_channel/face_detection_demo/main.cpp similarity index 100% rename from demos/multi_channel/face_detection/main.cpp rename to demos/multi_channel/face_detection_demo/main.cpp diff --git a/demos/multi_channel/face_detection/models.lst b/demos/multi_channel/face_detection_demo/models.lst similarity index 100% rename from demos/multi_channel/face_detection/models.lst rename to demos/multi_channel/face_detection_demo/models.lst diff --git a/demos/multi_channel/face_detection/multichannel_face_detection_params.hpp b/demos/multi_channel/face_detection_demo/multichannel_face_detection_params.hpp similarity index 100% rename from demos/multi_channel/face_detection/multichannel_face_detection_params.hpp rename to demos/multi_channel/face_detection_demo/multichannel_face_detection_params.hpp diff --git a/demos/multi_channel/human_pose_estimation/CMakeLists.txt b/demos/multi_channel/human_pose_estimation_demo/CMakeLists.txt similarity index 97% rename from demos/multi_channel/human_pose_estimation/CMakeLists.txt rename to demos/multi_channel/human_pose_estimation_demo/CMakeLists.txt index bef03f54bc2..fe0db95809a 100644 --- a/demos/multi_channel/human_pose_estimation/CMakeLists.txt +++ b/demos/multi_channel/human_pose_estimation_demo/CMakeLists.txt @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-set(TARGET_NAME "multi_channel_human_pose_estimation") +set(TARGET_NAME "multi_channel_human_pose_estimation_demo") if( BUILD_DEMO_NAME AND NOT ${BUILD_DEMO_NAME} STREQUAL ${TARGET_NAME} ) message(STATUS "DEMO ${TARGET_NAME} SKIPPED") diff --git a/demos/multi_channel/human_pose_estimation/README.md b/demos/multi_channel/human_pose_estimation_demo/README.md similarity index 100% rename from demos/multi_channel/human_pose_estimation/README.md rename to demos/multi_channel/human_pose_estimation_demo/README.md diff --git a/demos/multi_channel/human_pose_estimation/human_pose.cpp b/demos/multi_channel/human_pose_estimation_demo/human_pose.cpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/human_pose.cpp rename to demos/multi_channel/human_pose_estimation_demo/human_pose.cpp diff --git a/demos/multi_channel/human_pose_estimation/human_pose.hpp b/demos/multi_channel/human_pose_estimation_demo/human_pose.hpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/human_pose.hpp rename to demos/multi_channel/human_pose_estimation_demo/human_pose.hpp diff --git a/demos/multi_channel/human_pose_estimation/main.cpp b/demos/multi_channel/human_pose_estimation_demo/main.cpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/main.cpp rename to demos/multi_channel/human_pose_estimation_demo/main.cpp diff --git a/demos/multi_channel/human_pose_estimation/models.lst b/demos/multi_channel/human_pose_estimation_demo/models.lst similarity index 100% rename from demos/multi_channel/human_pose_estimation/models.lst rename to demos/multi_channel/human_pose_estimation_demo/models.lst diff --git a/demos/multi_channel/human_pose_estimation/peak.cpp b/demos/multi_channel/human_pose_estimation_demo/peak.cpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/peak.cpp rename to demos/multi_channel/human_pose_estimation_demo/peak.cpp diff --git a/demos/multi_channel/human_pose_estimation/peak.hpp b/demos/multi_channel/human_pose_estimation_demo/peak.hpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/peak.hpp rename to demos/multi_channel/human_pose_estimation_demo/peak.hpp diff --git a/demos/multi_channel/human_pose_estimation/postprocess.cpp b/demos/multi_channel/human_pose_estimation_demo/postprocess.cpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/postprocess.cpp rename to demos/multi_channel/human_pose_estimation_demo/postprocess.cpp diff --git a/demos/multi_channel/human_pose_estimation/postprocess.hpp b/demos/multi_channel/human_pose_estimation_demo/postprocess.hpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/postprocess.hpp rename to demos/multi_channel/human_pose_estimation_demo/postprocess.hpp diff --git a/demos/multi_channel/human_pose_estimation/postprocessor.cpp b/demos/multi_channel/human_pose_estimation_demo/postprocessor.cpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/postprocessor.cpp rename to demos/multi_channel/human_pose_estimation_demo/postprocessor.cpp diff --git a/demos/multi_channel/human_pose_estimation/postprocessor.hpp b/demos/multi_channel/human_pose_estimation_demo/postprocessor.hpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/postprocessor.hpp rename to demos/multi_channel/human_pose_estimation_demo/postprocessor.hpp diff --git a/demos/multi_channel/human_pose_estimation/render_human_pose.cpp 
b/demos/multi_channel/human_pose_estimation_demo/render_human_pose.cpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/render_human_pose.cpp rename to demos/multi_channel/human_pose_estimation_demo/render_human_pose.cpp diff --git a/demos/multi_channel/human_pose_estimation/render_human_pose.hpp b/demos/multi_channel/human_pose_estimation_demo/render_human_pose.hpp similarity index 100% rename from demos/multi_channel/human_pose_estimation/render_human_pose.hpp rename to demos/multi_channel/human_pose_estimation_demo/render_human_pose.hpp diff --git a/demos/tests/cases.py b/demos/tests/cases.py index d90d36b2183..dd456a398f0 100644 --- a/demos/tests/cases.py +++ b/demos/tests/cases.py @@ -125,7 +125,7 @@ def device_cases(*args): # TODO: mask_rcnn_demo: no models.lst - NativeDemo(subdirectory='multi_channel/face_detection', test_cases=combine_cases( + NativeDemo(subdirectory='multi_channel/face_detection_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['face-detection-adas']}), device_cases('-d'), @@ -137,7 +137,7 @@ def device_cases(*args): ModelArg('face-detection-retail-0044')), )), - NativeDemo(subdirectory='multi_channel/human_pose_estimation', test_cases=combine_cases( + NativeDemo(subdirectory='multi_channel/human_pose_estimation_demo', test_cases=combine_cases( TestCase(options={'-no_show': None, '-i': IMAGE_SEQUENCES['human-pose-estimation'], '-m': ModelArg('human-pose-estimation-0001')}), From 20c1628232835f414e64d8613e8816391377937b Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 16 Oct 2019 10:24:06 +0300 Subject: [PATCH 144/927] AC: use_pil -> use_pillow (#516) --- .../configs/person-reidentification-retail-0031.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml index 0ca3246e03f..b86c61936ca 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml @@ -33,7 +33,7 @@ models: - type: resize dst_width: 48 dst_height: 96 - use_pil: True + use_pillow: True interpolation: ANTIALIAS metrics: From 2b24c659b58e71425134032b2be996929c3d324b Mon Sep 17 00:00:00 2001 From: Ilya Krylov Date: Wed, 16 Oct 2019 10:26:34 +0300 Subject: [PATCH 145/927] docs update --- demos/python_demos/image_retrieval_demo/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/demos/python_demos/image_retrieval_demo/README.md b/demos/python_demos/image_retrieval_demo/README.md index e5c24501704..3dbf76e6bec 100644 --- a/demos/python_demos/image_retrieval_demo/README.md +++ b/demos/python_demos/image_retrieval_demo/README.md @@ -69,6 +69,10 @@ python image_retrieval_demo.py \ --ground_truth text_label ``` +An example of file listing gallery images can be found [here](https://github.com/opencv/openvino_training_extensions/blob/develop/tensorflow_toolkit/image_retrieval/data/gallery/gallery.txt). + +Examples of videos can be found [here](https://github.com/19900531/test) + ## Demo Output The application uses OpenCV to display gallery searching result and current inference performance. 
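The patches below (see [PATCH 147/927] "AC: per data instance metrics") make `MetricsExecutor.update_metrics_on_batch` accept the dataset indices of the processed batch and return per-batch metric results, which the evaluators then forward to an optional `output_callback`. As a rough illustration only — the callback name and the logging logic are invented here, and only the keyword argument names are taken from the patched call sites — a caller of `process_dataset()` could hook into those results like this:

```
def log_batch_metrics(batch_raw_predictions, metrics_result=None,
                      element_identifiers=None, dataset_indices=None, **kwargs):
    # Report which dataset elements were processed and whatever metric results
    # update_metrics_on_batch() returned for this batch.
    if metrics_result is None:
        return
    print('processed {} (dataset indices {}): {}'.format(
        element_identifiers, list(dataset_indices or []), metrics_result))

# hypothetical usage:
# evaluator.process_dataset(output_callback=log_batch_metrics, check_progress=True)
```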
From 17242e75b1da3657a9e5b547ca3423ef74d9ed3b Mon Sep 17 00:00:00 2001 From: Ilya Krylov Date: Wed, 16 Oct 2019 10:53:23 +0300 Subject: [PATCH 146/927] minor --- demos/python_demos/image_retrieval_demo/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demos/python_demos/image_retrieval_demo/README.md b/demos/python_demos/image_retrieval_demo/README.md index 3dbf76e6bec..8e1f3bde245 100644 --- a/demos/python_demos/image_retrieval_demo/README.md +++ b/demos/python_demos/image_retrieval_demo/README.md @@ -71,7 +71,7 @@ python image_retrieval_demo.py \ An example of file listing gallery images can be found [here](https://github.com/opencv/openvino_training_extensions/blob/develop/tensorflow_toolkit/image_retrieval/data/gallery/gallery.txt). -Examples of videos can be found [here](https://github.com/19900531/test) +Examples of videos can be found [here](https://github.com/19900531/test). ## Demo Output From 8ffab6d98c5b60dfd9c5fadf1b4fc032ec365f4d Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 16 Oct 2019 13:44:06 +0300 Subject: [PATCH 147/927] AC: per data instance metrics (#503) --- .../accuracy_checker/dataset.py | 21 +- .../evaluators/model_evaluator.py | 49 ++-- .../evaluators/pipeline_evaluator.py | 4 +- .../quantization_model_evaluator.py | 75 +++++-- .../launcher/dlsdk_launcher.py | 13 -- .../accuracy_checker/metrics/__init__.py | 4 +- .../accuracy_checker/metrics/average_meter.py | 6 + .../metrics/character_recognition.py | 3 +- .../metrics/classification.py | 10 +- .../accuracy_checker/metrics/coco_metrics.py | 27 ++- .../accuracy_checker/metrics/detection.py | 210 +++++++++--------- .../accuracy_checker/metrics/metric.py | 9 +- .../metrics/metric_executor.py | 22 +- .../metrics/question_answering.py | 2 + .../accuracy_checker/metrics/regression.py | 22 +- .../metrics/semantic_segmentation.py | 56 ++++- .../metrics/text_detection.py | 36 ++- .../regression_representation.py | 4 +- .../tests/test_metric_evaluator.py | 111 +++++++-- .../tests/test_model_evaluator.py | 4 +- .../accuracy_checker/tests/test_presenter.py | 6 +- .../tests/test_regression_metrics.py | 113 ++++++++-- .../tests/test_segmentation_metrics.py | 64 +++++- 23 files changed, 620 insertions(+), 251 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index 52789005124..3203cb876f4 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -122,10 +122,11 @@ def full_size(self): return len(self._annotation) def __call__(self, context, *args, **kwargs): - batch_annotation = self.__getitem__(self.iteration) + batch_input_ids, batch_annotation = self.__getitem__(self.iteration) self.iteration += 1 context.annotation_batch = batch_annotation context.identifiers_batch = [annotation.identifier for annotation in batch_annotation] + context.input_ids_batch = batch_input_ids def __getitem__(self, item): if self.size <= item * self.batch: @@ -134,9 +135,11 @@ def __getitem__(self, item): batch_start = item * self.batch batch_end = min(self.size, batch_start + self.batch) if self.subset: - return [self._annotation[idx] for idx in self.subset[batch_start:batch_end]] + batch_ids = self.subset[batch_start:batch_end] + return batch_ids, [self._annotation[idx] for idx in batch_ids] + batch_ids = range(batch_start, batch_end) - return self._annotation[batch_start:batch_end] + return batch_ids, self._annotation[batch_start:batch_end] def make_subset(self, ids=None, 
start=0, step=1, end=None): if ids: @@ -211,22 +214,20 @@ def __getitem__(self, item): raise IndexError batch_annotation = [] if self.annotation_reader: - batch_annotation = self.annotation_reader[item] + batch_annotation_ids, batch_annotation = self.annotation_reader[item] batch_identifiers = [annotation.identifier for annotation in batch_annotation] batch_input = [self.data_reader(identifier=identifier) for identifier in batch_identifiers] for annotation, input_data in zip(batch_annotation, batch_input): set_image_metadata(annotation, input_data) annotation.metadata['data_source'] = self.data_reader.data_source - return batch_annotation, batch_input, batch_identifiers + return batch_annotation_ids, batch_annotation, batch_input, batch_identifiers batch_start = item * self.batch batch_end = min(self.size, batch_start + self.batch) - if self.subset: - batch_identifiers = [self._identifiers[idx] for idx in self.subset[batch_start:batch_end]] - else: - batch_identifiers = self._identifiers[batch_start:batch_end] + batch_input_ids = self.subset[batch_start:batch_end] if self.subset else range(batch_start, batch_end) + batch_identifiers = [self._identifiers[idx] for idx in batch_input_ids] batch_input = [self.data_reader(identifier=identifier) for identifier in batch_identifiers] - return batch_annotation, batch_input, batch_identifiers + return batch_input_ids, batch_annotation, batch_input, batch_identifiers def __len__(self): if self.annotation_reader: diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py index a14ff065ba9..98e45b7ca80 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py @@ -99,7 +99,7 @@ def process_dataset_async(self, stored_predictions, progress_reporter, *args, ** def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, adapter, raw_outputs_callback): if raw_outputs_callback: raw_outputs_callback( - [batch_predictions], network=self.launcher.network, exec_network=self.launcher.exec_network + batch_predictions, network=self.launcher.network, exec_network=self.launcher.exec_network ) if adapter: batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta) @@ -124,10 +124,12 @@ def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, if ready_irs: wait_time = 0.01 while ready_irs: - batch_id, batch_annotation, batch_meta, batch_predictions, ir = ready_irs.pop(0) + ready_data = ready_irs.pop(0) + batch_id, batch_input_ids, batch_annotation, batch_meta, batch_raw_predictions, ir = ready_data batch_identifiers = [annotation.identifier for annotation in batch_annotation] batch_predictions = _process_ready_predictions( - batch_predictions, batch_identifiers, batch_meta, self.adapter, kwargs.get('output_callback') + batch_raw_predictions, batch_identifiers, batch_meta, self.adapter, + kwargs.get('raw_outputs_callback') ) free_irs.append(ir) if stored_predictions: @@ -135,7 +137,7 @@ def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, annotations, predictions = self.postprocessor.process_batch(batch_annotation, batch_predictions) if not self.postprocessor.has_dataset_processors: - self.metric_executor.update_metrics_on_batch(annotations, predictions) + self.metric_executor.update_metrics_on_batch(batch_input_ids, annotations, predictions) if 
self.metric_executor.need_store_predictions: self._annotations.extend(annotations) @@ -155,7 +157,9 @@ def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, self.store_predictions(stored_predictions, predictions_to_store) if self.postprocessor.has_dataset_processors: - self.metric_executor.update_metrics_on_batch(self._annotations, self._predictions) + self.metric_executor.update_metrics_on_batch( + range(len(self._annotations)), self._annotations, self._predictions + ) return self.postprocessor.process_dataset(self._annotations, self._predictions) @@ -164,14 +168,21 @@ def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs self._annotations, self._predictions = self.load(stored_predictions, progress_reporter) self._annotations, self._predictions = self.postprocessor.full_process(self._annotations, self._predictions) - self.metric_executor.update_metrics_on_batch(self._annotations, self._predictions) + self.metric_executor.update_metrics_on_batch( + range(len(self._annotations)), self._annotations, self._predictions + ) return self._annotations, self._predictions self.dataset.batch = self.launcher.batch + raw_outputs_callback = kwargs.get('output_callback') predictions_to_store = [] - for batch_id, batch_annotation in enumerate(self.dataset): + for batch_id, (batch_input_ids, batch_annotation) in enumerate(self.dataset): filled_inputs, batch_meta, batch_identifiers = self._get_batch_input(batch_annotation) batch_predictions = self.launcher.predict(filled_inputs, batch_meta, **kwargs) + if raw_outputs_callback: + raw_outputs_callback( + batch_predictions, network=self.launcher.network, exec_network=self.launcher.exec_network + ) if self.adapter: self.adapter.output_blob = self.adapter.output_blob or self.launcher.output_blob batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta) @@ -181,7 +192,7 @@ def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs annotations, predictions = self.postprocessor.process_batch(batch_annotation, batch_predictions, batch_meta) if not self.postprocessor.has_dataset_processors: - self.metric_executor.update_metrics_on_batch(annotations, predictions) + self.metric_executor.update_metrics_on_batch(batch_input_ids, annotations, predictions) self._annotations.extend(annotations) self._predictions.extend(predictions) @@ -196,7 +207,9 @@ def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs self.store_predictions(stored_predictions, predictions_to_store) if self.postprocessor.has_dataset_processors: - self.metric_executor.update_metrics_on_batch(self._annotations, self._predictions) + self.metric_executor.update_metrics_on_batch( + range(len(self._annotations)), self._annotations, self._predictions + ) return self.postprocessor.process_dataset(self._annotations, self._predictions) @@ -214,7 +227,9 @@ def _is_stored(stored_predictions=None): def _load_stored_predictions(self, stored_predictions, progress_reporter): self._annotations, self._predictions = self.load(stored_predictions, progress_reporter) self._annotations, self._predictions = self.postprocessor.full_process(self._annotations, self._predictions) - self.metric_executor.update_metrics_on_batch(self._annotations, self._predictions) + self.metric_executor.update_metrics_on_batch( + range(len(self._annotations)), self._annotations, self._predictions + ) return self._annotations, self._predictions @@ -223,25 +238,27 @@ def _wait_for_any(irs): if not irs: return [], [] - 
result = [] free_indexes = [] - for ir_id, (batch_id, batch_annotation, batch_meta, ir) in enumerate(irs): + for ir_id, (_, _, _, _, ir) in enumerate(irs): if ir.wait(0) == 0: - result.append((batch_id, batch_annotation, batch_meta, [ir.outputs], ir)) free_indexes.append(ir_id) - irs = [ir for ir_id, ir in enumerate(irs) if ir_id not in free_indexes] + result = [] + for idx in free_indexes: + batch_id, batch_input_ids, batch_annotation, batch_meta, ir = irs.pop(idx) + result.append((batch_id, batch_input_ids, batch_annotation, batch_meta, ir.outputs, ir)) + return result, irs def _fill_free_irs(self, free_irs, queued_irs, dataset_iterator): for ir in free_irs: try: - batch_id, batch_annotation = next(dataset_iterator) + batch_id, (batch_input_ids, batch_annotation) = next(dataset_iterator) except StopIteration: break batch_input, batch_meta, _ = self._get_batch_input(batch_annotation) self.launcher.predict_async(ir, batch_input, batch_meta) - queued_irs.append((batch_id, batch_annotation, batch_meta, ir)) + queued_irs.append((batch_id, batch_input_ids, batch_annotation, batch_meta, ir)) return free_irs, queued_irs diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py index 4357ac65b16..fe067368cec 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py @@ -145,6 +145,7 @@ def __init__(self, dataset, metric_executor=None, launcher=None): self.predictions = [] self.annotation_batch = [] self.prediction_batch = [] + self.input_ids_batch = [] self.data_batch = [] self.metrics_results = [] self.identifiers_batch = [] @@ -161,7 +162,8 @@ def shared_context(self): 'annotation_batch': self.annotation_batch, 'prediction_batch': self.prediction_batch, 'data_batch': self.data_batch, - 'identifiers_batch': self.identifiers_batch + 'identifiers_batch': self.identifiers_batch, + 'input_ids_batch': self.input_ids_batch } return _shared_context diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index ba443475345..f597972b5d7 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -73,16 +73,15 @@ def process_dataset_async( num_images=None, check_progress=False, dataset_tag='', + output_callback=None, **kwargs ): - def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, adapter, raw_outputs_callback): - if raw_outputs_callback: - raw_outputs_callback(batch_predictions) + def _process_ready_predictions(batch_raw_predictions, batch_identifiers, batch_meta, adapter): if adapter: - batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta) + return self.adapter.process(batch_raw_predictions, batch_identifiers, batch_meta) - return batch_predictions + return batch_raw_predictions def _create_subset(subset, num_images): if subset is not None: @@ -116,19 +115,39 @@ def _create_subset(subset, num_images): if ready_irs: wait_time = 0.01 while ready_irs: - batch_id, batch_annotation, batch_identifiers, batch_meta, batch_predictions, ir = ready_irs.pop(0) + ready_data = ready_irs.pop(0) + ( + batch_id, + batch_input_ids, + batch_annotation, + batch_identifiers, + batch_meta, + 
batch_raw_predictions, + ir + ) = ready_data batch_predictions = _process_ready_predictions( - batch_predictions, batch_identifiers, batch_meta, self.adapter, kwargs.get('output_callback') + batch_raw_predictions, batch_identifiers, batch_meta, self.adapter ) free_irs.append(ir) annotations, predictions = self.postprocessor.process_batch(batch_annotation, batch_predictions) + metrics_result = None if self.metric_executor: - self.metric_executor.update_metrics_on_batch(annotations, predictions) + metrics_result = self.metric_executor.update_metrics_on_batch( + batch_input_ids, annotations, predictions + ) if self.metric_executor.need_store_predictions: self._annotations.extend(annotations) self._predictions.extend(predictions) + if output_callback: + output_callback( + batch_raw_predictions, + metrics_result=metrics_result, + element_identifiers=batch_identifiers, + dataset_indices=batch_input_ids + ) + if progress_reporter: progress_reporter.update(batch_id, len(batch_predictions)) else: @@ -150,6 +169,7 @@ def process_dataset( num_images=None, check_progress=False, dataset_tag='', + output_callback=None, **kwargs ): if self.dataset is None or (dataset_tag and self.dataset.tag != dataset_tag): @@ -166,19 +186,30 @@ def process_dataset( if check_progress: progress_reporter = ProgressReporter.provide('print', self.dataset.size) - for batch_id, (batch_annotation, batch_inputs, batch_identifiers) in enumerate(self.dataset): + for batch_id, (batch_input_ids, batch_annotation, batch_inputs, batch_identifiers) in enumerate(self.dataset): filled_inputs, batch_meta = self._get_batch_input(batch_inputs, batch_annotation) - batch_predictions = self.launcher.predict(filled_inputs, batch_meta, **kwargs) + batch_raw_predictions = self.launcher.predict(filled_inputs, batch_meta, **kwargs) if self.adapter: self.adapter.output_blob = self.adapter.output_blob or self.launcher.output_blob - batch_predictions = self.adapter.process(batch_predictions, batch_identifiers, batch_meta) + batch_predictions = self.adapter.process(batch_raw_predictions, batch_identifiers, batch_meta) + else: + batch_predictions = batch_raw_predictions annotations, predictions = self.postprocessor.process_batch(batch_annotation, batch_predictions, batch_meta) + metrics_result = None if self.metric_executor: - self.metric_executor.update_metrics_on_batch(annotations, predictions) - - self._annotations.extend(annotations) - self._predictions.extend(predictions) + metrics_result = self.metric_executor.update_metrics_on_batch(batch_input_ids, annotations, predictions) + if self.metric_executor.need_store_predictions: + self._annotations.extend(annotations) + self._predictions.extend(predictions) + + if output_callback: + output_callback( + batch_raw_predictions, + metrics_result=metrics_result, + element_identifiers=batch_identifiers, + dataset_indices=batch_input_ids + ) if progress_reporter: progress_reporter.update(batch_id, len(batch_predictions)) @@ -191,25 +222,27 @@ def _wait_for_any(irs): if not irs: return [], [] - result = [] free_indexes = [] - for ir_id, (batch_id, batch_annotation, batch_identifiers, batch_meta, ir) in enumerate(irs): + for ir_id, (_, _, _, _, _, ir) in enumerate(irs): if ir.wait(0) == 0: - result.append((batch_id, batch_annotation, batch_identifiers, batch_meta, ir.outputs, ir)) free_indexes.append(ir_id) - irs = [ir for ir_id, ir in enumerate(irs) if ir_id not in free_indexes] + result = [] + for idx in free_indexes: + batch_id, batch_input_ids, batch_annotation, batch_identifiers, batch_meta, ir = 
irs.pop(idx) + result.append((batch_id, batch_input_ids, batch_annotation, batch_identifiers, batch_meta, ir.outputs, ir)) + return result, irs def _fill_free_irs(self, free_irs, queued_irs, dataset_iterator, **kwargs): for ir in free_irs: try: - batch_id, (batch_annotation, batch_inputs, batch_identifiers) = next(dataset_iterator) + batch_id, (batch_input_ids, batch_annotation, batch_inputs, batch_identifiers) = next(dataset_iterator) except StopIteration: break batch_input, batch_meta = self._get_batch_input(batch_inputs, batch_annotation) self.launcher.predict_async(ir, batch_input, batch_meta, **kwargs) - queued_irs.append((batch_id, batch_annotation, batch_identifiers, batch_meta, ir)) + queued_irs.append((batch_id, batch_input_ids, batch_annotation, batch_identifiers, batch_meta, ir)) return free_irs, queued_irs diff --git a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py index 1e49a006918..9357ba6e445 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py @@ -269,17 +269,7 @@ def predict(self, inputs, metadata=None, **kwargs): input_shapes = {layer_name: data.shape for layer_name, data in infer_inputs.items()} self._reshape_input(input_shapes) - benchmark = kwargs.get('benchmark') - - if benchmark: - benchmark(infer_inputs) - result = self.exec_network.infer(infer_inputs) - - raw_outputs_callback = kwargs.get('output_callback') - - if raw_outputs_callback: - raw_outputs_callback(result, network=self.network, exec_network=self.exec_network) results.append(result) if metadata is not None: @@ -292,9 +282,6 @@ def predict(self, inputs, metadata=None, **kwargs): def predict_async(self, ir, inputs, metadata=None, **kwargs): infer_inputs = inputs[0] - benchmark = kwargs.get('benchmark') - if benchmark: - benchmark(infer_inputs) ir.async_infer(inputs=infer_inputs) if metadata is not None: for meta_ in metadata: diff --git a/tools/accuracy_checker/accuracy_checker/metrics/__init__.py b/tools/accuracy_checker/accuracy_checker/metrics/__init__.py index 41c0f3d415f..44a61808f8a 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/__init__.py @@ -14,7 +14,8 @@ limitations under the License. 
""" -from .metric_executor import MetricsExecutor, Metric +from .metric_executor import MetricsExecutor +from .metric import Metric, PerImageMetricResult from .classification import ClassificationAccuracy, ClassificationAccuracyClasses, ClipAccuracy from .detection import (DetectionMAP, MissRate, Recall, DetectionAccuracyMetric) @@ -62,6 +63,7 @@ __all__ = [ 'Metric', 'MetricsExecutor', + 'PerImageMetricResult', 'ClassificationAccuracy', 'ClassificationAccuracyClasses', diff --git a/tools/accuracy_checker/accuracy_checker/metrics/average_meter.py b/tools/accuracy_checker/accuracy_checker/metrics/average_meter.py index eaae62ab4c4..8872dafbd9c 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/average_meter.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/average_meter.py @@ -37,6 +37,12 @@ def update(self, annotation_val, prediction_val): self.accumulator += loss self.total_count += increment + if np.isscalar(loss): + loss = float(loss) + else: + loss = loss.astype(float) + return np.divide(loss, increment, out=np.zeros_like(loss), where=self.total_count != 0) + def evaluate(self): if self.total_count is None: return 0.0 diff --git a/tools/accuracy_checker/accuracy_checker/metrics/character_recognition.py b/tools/accuracy_checker/accuracy_checker/metrics/character_recognition.py index e4ffa16e4db..61cf7e28612 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/character_recognition.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/character_recognition.py @@ -29,7 +29,8 @@ def configure(self): self.accuracy = AverageMeter(lambda annotation, prediction: int(annotation == prediction)) def update(self, annotation, prediction): - self.accuracy.update(annotation.label, prediction.label) + return self.accuracy.update(annotation.label, prediction.label) + def evaluate(self, annotations, predictions): return self.accuracy.evaluate() diff --git a/tools/accuracy_checker/accuracy_checker/metrics/classification.py b/tools/accuracy_checker/accuracy_checker/metrics/classification.py index d14258661ac..21726d51558 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/classification.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/classification.py @@ -54,7 +54,7 @@ def loss(annotation_label, prediction_top_k_labels): self.accuracy = AverageMeter(loss) def update(self, annotation, prediction): - self.accuracy.update(annotation.label, prediction.top_k(self.top_k)) + return self.accuracy.update(annotation.label, prediction.top_k(self.top_k)) def evaluate(self, annotations, predictions): return self.accuracy.evaluate() @@ -106,7 +106,7 @@ def counter(annotation_label): self.accuracy = AverageMeter(loss, counter) def update(self, annotation, prediction): - self.accuracy.update(annotation.label, prediction.top_k(self.top_k)) + return self.accuracy.update(annotation.label, prediction.top_k(self.top_k)) def evaluate(self, annotations, predictions): self.meta['names'] = list(self.labels.values()) @@ -145,13 +145,15 @@ def update(self, annotation, prediction): self.video_accuracy.update(video_top_label, self.previous_video_label) self.video_avg_prob = AverageProbMeter() - self.video_avg_prob.update(annotation.label, prediction.scores) + video_avg = self.video_avg_prob.update(annotation.label, prediction.scores) - self.clip_accuracy.update(annotation.label, prediction.label) + clip_accuracy = self.clip_accuracy.update(annotation.label, prediction.label) self.previous_video_id = video_id self.previous_video_label = annotation.label + return [clip_accuracy, video_avg] 
+ def evaluate(self, annotations, predictions): self.meta['names'] = ['clip_accuracy', 'video_accuracy'] return [self.clip_accuracy.evaluate(), self.video_accuracy.evaluate()] diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py index 2b09d5c3096..28edfd9ff7a 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py @@ -70,22 +70,19 @@ def configure(self): def update(self, annotation, prediction): compute_iou, create_boxes = select_specific_parameters(annotation) + per_class_results = [] for label_id, label in enumerate(self.labels): detections, scores, dt_difficult = prepare_predictions(prediction, label, self.max_detections) ground_truth, gt_difficult, iscrowd, boxes, areas = prepare_annotations(annotation, label, create_boxes) iou = compute_iou(ground_truth, detections, boxes, areas) - self.matching_results[label_id].append( - evaluate_image( - ground_truth, - gt_difficult, - iscrowd, - detections, - dt_difficult, - scores, - iou, - self.thresholds - )) + eval_result = evaluate_image( + ground_truth, gt_difficult, iscrowd, detections, dt_difficult, scores, iou, self.thresholds + ) + self.matching_results[label_id].append(eval_result) + per_class_results.append(eval_result) + + return per_class_results def evaluate(self, annotations, predictions): pass @@ -99,6 +96,10 @@ def reset(self): class MSCOCOAveragePrecision(MSCOCOBaseMetric): __provider__ = 'coco_precision' + def update(self, annotation, prediction): + per_class_matching = super().update(annotation, prediction) + return [compute_precision_recall(self.thresholds, per_class_matching[i])[0] for i, _ in enumerate(self.labels)] + def evaluate(self, annotations, predictions): precision = [ compute_precision_recall(self.thresholds, self.matching_results[i])[0] @@ -111,6 +112,10 @@ def evaluate(self, annotations, predictions): class MSCOCORecall(MSCOCOBaseMetric): __provider__ = 'coco_recall' + def update(self, annotation, prediction): + per_class_matching = super().update(annotation, prediction) + return [compute_precision_recall(self.thresholds, per_class_matching[i])[1] for i, _ in enumerate(self.labels)] + def evaluate(self, annotations, predictions): recalls = [ compute_precision_recall(self.thresholds, self.matching_results[i])[1] diff --git a/tools/accuracy_checker/accuracy_checker/metrics/detection.py b/tools/accuracy_checker/accuracy_checker/metrics/detection.py index beadc030177..3ffbc09588d 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/detection.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/detection.py @@ -28,7 +28,7 @@ DetectionAnnotation, DetectionPrediction, ActionDetectionPrediction, ActionDetectionAnnotation ) -from .metric import Metric, FullDatasetEvaluationMetric +from .metric import Metric, FullDatasetEvaluationMetric, PerImageEvaluationMetric class APIntegralType(enum.Enum): @@ -134,7 +134,7 @@ def reset(self): self.meta['names'] = [dataset_labels[name] for name in valid_labels] -class DetectionMAP(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): +class DetectionMAP(BaseDetectionMetricMixin, FullDatasetEvaluationMetric, PerImageEvaluationMetric): """ Class for evaluating mAP metric of detection models. 
""" @@ -162,7 +162,19 @@ def configure(self): super().configure() self.integral = APIntegralType(self.get_value_from_config('integral')) + def update(self, annotation, prediction): + return self._calculate_map([annotation], [prediction]) + def evaluate(self, annotations, predictions): + average_precisions = self._calculate_map(annotations, predictions) + average_precisions, self.meta['names'] = finalize_metric_result(average_precisions, self.meta['names']) + if not average_precisions: + warnings.warn("No detections to compute mAP") + average_precisions.append(0) + + return average_precisions + + def _calculate_map(self, annotations, predictions): valid_labels = get_valid_labels(self.labels, self.dataset.metadata.get('background_label')) labels_stat = self.per_class_detection_statistics(annotations, predictions, valid_labels) @@ -175,16 +187,10 @@ def evaluate(self, annotations, predictions): average_precisions.append(ap) else: average_precisions.append(np.nan) - - average_precisions, self.meta['names'] = finalize_metric_result(average_precisions, self.meta['names']) - if not average_precisions: - warnings.warn("No detections to compute mAP") - average_precisions.append(0) - return average_precisions -class MissRate(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): +class MissRate(BaseDetectionMetricMixin, FullDatasetEvaluationMetric, PerImageEvaluationMetric): """ Class for evaluating Miss Rate metric of detection models. """ @@ -206,6 +212,21 @@ def configure(self): super().configure() self.fppi_level = self.get_value_from_config('fppi_level') + def update(self, annotation, prediction): + valid_labels = get_valid_labels(self.labels, self.dataset.metadata.get('background_label')) + labels_stat = self.per_class_detection_statistics([annotation], [prediction], valid_labels) + miss_rates = [] + for label in labels_stat: + label_miss_rate = 1.0 - labels_stat[label]['recall'] + label_fppi = labels_stat[label]['fppi'] + + position = bisect.bisect_left(label_fppi, self.fppi_level) + m0 = max(0, position - 1) + m1 = position if position < len(label_miss_rate) else m0 + miss_rates.append(0.5 * (label_miss_rate[m0] + label_miss_rate[m1])) + + return miss_rates + def evaluate(self, annotations, predictions): valid_labels = get_valid_labels(self.labels, self.dataset.metadata.get('background_label')) labels_stat = self.per_class_detection_statistics(annotations, predictions, valid_labels) @@ -223,7 +244,7 @@ def evaluate(self, annotations, predictions): return miss_rates -class Recall(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): +class Recall(BaseDetectionMetricMixin, FullDatasetEvaluationMetric, PerImageEvaluationMetric): """ Class for evaluating recall metric of detection models. 
""" @@ -233,7 +254,19 @@ class Recall(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): annotation_types = (DetectionAnnotation, ActionDetectionAnnotation) prediction_types = (DetectionPrediction, ActionDetectionPrediction) + def update(self, annotation, prediction): + return self._calculate_recall([annotation], [prediction]) + def evaluate(self, annotations, predictions): + recalls = self._calculate_recall(annotations, predictions) + recalls, self.meta['names'] = finalize_metric_result(recalls, self.meta['names']) + if not recalls: + warnings.warn("No detections to compute mAP") + recalls.append(0) + + return recalls + + def _calculate_recall(self, annotations, predictions): valid_labels = get_valid_labels(self.labels, self.dataset.metadata.get('background_label')) labels_stat = self.per_class_detection_statistics(annotations, predictions, valid_labels) @@ -246,15 +279,10 @@ def evaluate(self, annotations, predictions): else: recalls.append(np.nan) - recalls, self.meta['names'] = finalize_metric_result(recalls, self.meta['names']) - if not recalls: - warnings.warn("No detections to compute mAP") - recalls.append(0) - return recalls -class DetectionAccuracyMetric(BaseDetectionMetricMixin, FullDatasetEvaluationMetric): +class DetectionAccuracyMetric(BaseDetectionMetricMixin, PerImageEvaluationMetric): __provider__ = 'detection_accuracy' annotation_types = (DetectionAnnotation, ActionDetectionAnnotation) @@ -283,29 +311,32 @@ def configure(self): self.ignore_label = self.get_value_from_config('ignore_label') fast_match = self.get_value_from_config('fast_match') self.match_func = match_detections_class_agnostic if not fast_match else fast_match_detections_class_agnostic + self.cm = np.zeros([len(self.labels), len(self.labels)], dtype=np.int32) + + def update(self, annotation, prediction): + matches = self.match_func(prediction, annotation, self.overlap_threshold, self.overlap_method) + update_cm = confusion_matrix(matches, prediction, annotation, len(self.labels), self.ignore_label) + self.cm += update_cm + if self.use_normalization: + return np.mean(normalize_confusion_matrix(update_cm).diagonal()) + return float(np.sum(update_cm.diagonal())) / float(np.maximum(1, np.sum(update_cm))) def evaluate(self, annotations, predictions): - all_matches = self.match_func( - predictions, annotations, self.overlap_threshold, self.overlap_method - ) - cm = confusion_matrix(all_matches, predictions, annotations, len(self.labels), self.ignore_label) if self.use_normalization: - return np.mean(normalize_confusion_matrix(cm).diagonal()) + return np.mean(normalize_confusion_matrix(self.cm).diagonal()) - return float(np.sum(cm.diagonal())) / float(np.maximum(1, np.sum(cm))) + return float(np.sum(self.cm.diagonal())) / float(np.maximum(1, np.sum(self.cm))) -def confusion_matrix(all_matched_ids, predicted_data, gt_data, num_classes, ignore_label=None): +def confusion_matrix(matched_ids, prediction, gt, num_classes, ignore_label=None): out_cm = np.zeros([num_classes, num_classes], dtype=np.int32) - for gt, prediction in zip(gt_data, predicted_data): - for match_pair in all_matched_ids[gt.identifier]: - gt_label = int(gt.labels[match_pair[0]]) - - if ignore_label and gt_label == ignore_label: - continue + for match_pair in matched_ids: + gt_label = int(gt.labels[match_pair[0]]) + if ignore_label and gt_label == ignore_label: + continue - pred_label = int(prediction.labels[match_pair[1]]) - out_cm[gt_label, pred_label] += 1 + pred_label = int(prediction.labels[match_pair[1]]) + out_cm[gt_label, pred_label] += 
1 return out_cm @@ -315,91 +346,72 @@ def normalize_confusion_matrix(cm): return cm.astype(np.float32) / row_sums -def match_detections_class_agnostic(predicted_data, gt_data, min_iou, overlap_method): - all_matches = {} - total_gt_bbox_num = 0 - matched_gt_bbox_num = 0 - - for gt, prediction in zip(gt_data, predicted_data): - gt_bboxes = np.stack((gt.x_mins, gt.y_mins, gt.x_maxs, gt.y_maxs), axis=-1) - predicted_bboxes = np.stack( - (prediction.x_mins, prediction.y_mins, prediction.x_maxs, prediction.y_maxs), axis=-1 - ) - predicted_scores = prediction.scores - - gt_bboxes_num = len(gt_bboxes) - predicted_bboxes_num = len(predicted_bboxes) - - sorted_ind = np.argsort(-predicted_scores) - predicted_bboxes = predicted_bboxes[sorted_ind] - predicted_original_ids = np.arange(predicted_bboxes_num)[sorted_ind] - - similarity_matrix = calculate_similarity_matrix(predicted_bboxes, gt_bboxes, overlap_method) - - matches = [] - visited_gt = np.zeros(gt_bboxes_num, dtype=np.bool) - for predicted_id in range(predicted_bboxes_num): - best_overlap = 0.0 - best_gt_id = -1 - for gt_id in range(gt_bboxes_num): - if visited_gt[gt_id]: - continue +def match_detections_class_agnostic(prediction, gt, min_iou, overlap_method): + gt_bboxes = np.stack((gt.x_mins, gt.y_mins, gt.x_maxs, gt.y_maxs), axis=-1) + predicted_bboxes = np.stack( + (prediction.x_mins, prediction.y_mins, prediction.x_maxs, prediction.y_maxs), axis=-1 + ) + predicted_scores = prediction.scores - overlap_value = similarity_matrix[predicted_id, gt_id] - if overlap_value > best_overlap: - best_overlap = overlap_value - best_gt_id = gt_id + gt_bboxes_num = len(gt_bboxes) + predicted_bboxes_num = len(predicted_bboxes) - if best_gt_id >= 0 and best_overlap > min_iou: - visited_gt[best_gt_id] = True + sorted_ind = np.argsort(-predicted_scores) + predicted_bboxes = predicted_bboxes[sorted_ind] + predicted_original_ids = np.arange(predicted_bboxes_num)[sorted_ind] - matches.append((best_gt_id, predicted_original_ids[predicted_id])) - if len(matches) >= gt_bboxes_num: - break + similarity_matrix = calculate_similarity_matrix(predicted_bboxes, gt_bboxes, overlap_method) - all_matches[gt.identifier] = matches + matches = [] + visited_gt = np.zeros(gt_bboxes_num, dtype=np.bool) + for predicted_id in range(predicted_bboxes_num): + best_overlap = 0.0 + best_gt_id = -1 + for gt_id in range(gt_bboxes_num): + if visited_gt[gt_id]: + continue - total_gt_bbox_num += gt_bboxes_num - matched_gt_bbox_num += len(matches) + overlap_value = similarity_matrix[predicted_id, gt_id] + if overlap_value > best_overlap: + best_overlap = overlap_value + best_gt_id = gt_id - return all_matches + if best_gt_id >= 0 and best_overlap > min_iou: + visited_gt[best_gt_id] = True + matches.append((best_gt_id, predicted_original_ids[predicted_id])) + if len(matches) >= gt_bboxes_num: + break -def fast_match_detections_class_agnostic(predicted_data, gt_data, min_iou, overlap_method): - all_matches = {} - total_gt_bbox_num = 0 - matched_gt_bbox_num = 0 + return matches - for gt, prediction in zip(gt_data, predicted_data): - gt_bboxes = np.stack((gt.x_mins, gt.y_mins, gt.x_maxs, gt.y_maxs), axis=-1) - matches = [] - total_gt_bbox_num += len(gt_bboxes) - if prediction.size: - predicted_bboxes = np.stack( - (prediction.x_mins, prediction.y_mins, prediction.x_maxs, prediction.y_maxs), axis=-1 - ) - similarity_matrix = calculate_similarity_matrix(gt_bboxes, predicted_bboxes, overlap_method) +def fast_match_detections_class_agnostic(prediction, gt, min_iou, overlap_method): + matches = [] + 
gt_bboxes = np.stack((gt.x_mins, gt.y_mins, gt.x_maxs, gt.y_maxs), axis=-1) + if prediction.size: + predicted_bboxes = np.stack( + (prediction.x_mins, prediction.y_mins, prediction.x_maxs, prediction.y_maxs), axis=-1 + ) - for _ in gt_bboxes: - best_match_pos = np.unravel_index(similarity_matrix.argmax(), similarity_matrix.shape) - best_match_value = similarity_matrix[best_match_pos] + similarity_matrix = calculate_similarity_matrix(gt_bboxes, predicted_bboxes, overlap_method) - if best_match_value <= min_iou: - break + for _ in gt_bboxes: + best_match_pos = np.unravel_index(similarity_matrix.argmax(), similarity_matrix.shape) + best_match_value = similarity_matrix[best_match_pos] - gt_id = best_match_pos[0] - predicted_id = best_match_pos[1] + if best_match_value <= min_iou: + break - similarity_matrix[gt_id, :] = 0.0 - similarity_matrix[:, predicted_id] = 0.0 + gt_id = best_match_pos[0] + predicted_id = best_match_pos[1] - matches.append((gt_id, predicted_id)) - matched_gt_bbox_num += 1 + similarity_matrix[gt_id, :] = 0.0 + similarity_matrix[:, predicted_id] = 0.0 - all_matches[gt.identifier] = matches + matches.append((gt_id, predicted_id)) - return all_matches + return matches def calculate_similarity_matrix(set_a, set_b, overlap): diff --git a/tools/accuracy_checker/accuracy_checker/metrics/metric.py b/tools/accuracy_checker/accuracy_checker/metrics/metric.py index 72bd4dd0308..62eadf426d5 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/metric.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/metric.py @@ -15,6 +15,7 @@ """ import copy +from collections import namedtuple from ..representation import ContainerRepresentation from ..config import ConfigError from ..utils import is_single_metric_source, get_supported_representations @@ -23,6 +24,9 @@ from ..dependency import ClassProvider from ..utils import zipped_transform, get_parameter_value_from_config, contains_any +PerImageMetricResult = namedtuple('PerImageMetricResult', ['metric_name', 'metric_type', 'result']) + + class Metric(ClassProvider): """ @@ -92,7 +96,7 @@ def get_value_from_config(self, key): return get_parameter_value_from_config(self.config, self.parameters(), key) def submit(self, annotation, prediction): - self.update(annotation, prediction) + return PerImageMetricResult(self.name, self.config['type'], self.update(annotation, prediction)) def submit_all(self, annotations, predictions): return self.evaluate(annotations, predictions) @@ -182,7 +186,8 @@ def reset(self): class PerImageEvaluationMetric(Metric): def submit(self, annotation, prediction): annotation_, prediction_ = self._resolve_representation_containers(annotation, prediction) - self.update(annotation_, prediction_) + metric_result = self.update(annotation_, prediction_) + return PerImageMetricResult(self.name, self.config['type'], metric_result) def evaluate(self, annotations, predictions): raise NotImplementedError diff --git a/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py b/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py index 18344d4ec3a..71d697a9b93 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py @@ -14,11 +14,10 @@ limitations under the License. 
""" -from collections import namedtuple +from collections import namedtuple, OrderedDict from ..presenters import BasePresenter, EvaluationResult from ..config import StringField -from ..utils import zipped_transform from .metric import Metric, FullDatasetEvaluationMetric from ..config import ConfigValidator, ConfigError @@ -93,7 +92,9 @@ def _set_dataset(self, dataset): metric.metric_fn.dataset = dataset def __call__(self, context, *args, **kwargs): - self.update_metrics_on_batch(context.annotation_batch, context.prediction_batch) + self.update_metrics_on_batch( + context.input_ids_batch, context.annotation_batch, context.prediction_batch + ) context.annotations.extend(context.annotation_batch) context.predictions.extend(context.prediction_batch) @@ -102,10 +103,14 @@ def update_metrics_on_object(self, annotation, prediction): Updates metric value corresponding given annotation and prediction objects. """ + metric_results = [] + for metric in self.metrics: - metric.metric_fn.submit(annotation, prediction) + metric_results.append(metric.metric_fn.submit(annotation, prediction)) + + return metric_results - def update_metrics_on_batch(self, annotation, prediction): + def update_metrics_on_batch(self, batch_ids, annotation, prediction): """ Updates metric value corresponding given batch. @@ -114,7 +119,12 @@ def update_metrics_on_batch(self, annotation, prediction): prediction: list of batch number of prediction objects. """ - zipped_transform(self.update_metrics_on_object, annotation, prediction) + results = OrderedDict() + + for input_id, single_annotation, single_prediction in zip(batch_ids, annotation, prediction): + results[input_id] = self.update_metrics_on_object(single_annotation, single_prediction) + + return results def iterate_metrics(self, annotations, predictions): for name, metric_type, functor, reference, threshold, presenter in self.metrics: diff --git a/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py b/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py index e0bde0e63ba..8f40fbc98ff 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/question_answering.py @@ -62,6 +62,7 @@ def update(self, annotation, prediction): max_f1_score = f1 if f1 > max_f1_score else max_f1_score self.f1 += max_f1_score self.total += 1 + return max_f1_score def evaluate(self, annotation, prediction): return self.f1 / self.total @@ -86,6 +87,7 @@ def update(self, annotation, prediction): max_exact_match = exact_match if exact_match > max_exact_match else max_exact_match self.exact_match += max_exact_match self.total += 1 + return max_exact_match def evaluate(self, annotation, prediction): return self.exact_match / self.total diff --git a/tools/accuracy_checker/accuracy_checker/metrics/regression.py b/tools/accuracy_checker/accuracy_checker/metrics/regression.py index 2dc47d5b529..1476d3ca02d 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/regression.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/regression.py @@ -47,7 +47,10 @@ def configure(self): self.magnitude = [] def update(self, annotation, prediction): - self.magnitude.append(self.value_differ(annotation.value, prediction.value)) + diff = self.value_differ(annotation.value, prediction.value) + self.magnitude.append(diff) + + return diff def evaluate(self, annotations, predictions): return np.mean(self.magnitude), np.std(self.magnitude) @@ -112,7 +115,10 @@ def configure(self): def update(self, 
annotation, prediction): index = find_interval(annotation.value, self.intervals) - self.magnitude[index].append(self.value_differ(annotation.value, prediction.value)) + diff = self.value_differ(annotation.value, prediction.value) + self.magnitude[index].append(diff) + + return diff def evaluate(self, annotations, predictions): if self.ignore_out_of_range: @@ -165,6 +171,10 @@ class RootMeanSquaredError(BaseRegressionMetric): def __init__(self, *args, **kwargs): super().__init__(mse_differ, *args, **kwargs) + def update(self, annotation, prediction): + mse = super().update(annotation, prediction) + return np.sqrt(mse) + def evaluate(self, annotations, predictions): return np.sqrt(np.mean(self.magnitude)), np.sqrt(np.std(self.magnitude)) @@ -189,6 +199,10 @@ class RootMeanSquaredErrorOnInterval(BaseRegressionOnIntervals): def __init__(self, *args, **kwargs): super().__init__(mse_differ, *args, **kwargs) + def update(self, annotation, prediction): + mse = super().update(annotation, prediction) + return np.sqrt(mse) + def evaluate(self, annotations, predictions): if self.ignore_out_of_range: self.magnitude = self.magnitude[1:-1] @@ -224,6 +238,8 @@ def update(self, annotation, prediction): result /= np.maximum(annotation.interocular_distance, np.finfo(np.float64).eps) self.magnitude.append(result) + return result + def evaluate(self, annotations, predictions): num_points = np.shape(self.magnitude)[1] point_result_name_pattern = 'point_{}_normed_error' @@ -277,6 +293,8 @@ def update(self, annotation, prediction): avg_result /= np.maximum(annotation.interocular_distance, np.finfo(np.float64).eps) self.magnitude.append(avg_result) + return avg_result + def evaluate(self, annotations, predictions): self.meta['names'] = ['mean'] result = [np.mean(self.magnitude)] diff --git a/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py b/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py index eaf8e380f96..16d11e620b9 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/semantic_segmentation.py @@ -54,22 +54,31 @@ def update(self, annotation, prediction): n_classes = len(self.dataset.labels) prediction_mask = np.argmax(prediction.mask, axis=0) if self.use_argmax else prediction.mask.astype('int64') - def update_confusion_matrix(confusion_matrix): + def confusion_matrix(): label_true = annotation.mask.flatten() label_pred = prediction_mask.flatten() mask = (label_true >= 0) & (label_true < n_classes) & (label_pred < n_classes) & (label_pred >= 0) hist = np.bincount(n_classes * label_true[mask].astype(int) + label_pred[mask], minlength=n_classes ** 2) hist = hist.reshape(n_classes, n_classes) - confusion_matrix += hist - return confusion_matrix + return hist - self._update_state(update_confusion_matrix, self.CONFUSION_MATRIX_KEY, lambda: np.zeros((n_classes, n_classes))) + def accumulate(confusion_matrixs): + return confusion_matrixs + cm + + cm = confusion_matrix() + + self._update_state(accumulate, self.CONFUSION_MATRIX_KEY, lambda: np.zeros((n_classes, n_classes))) + return cm class SegmentationAccuracy(SegmentationMetric): __provider__ = 'segmentation_accuracy' + def update(self, annotation, prediction): + cm = super().update(annotation, prediction) + return np.diag(cm).sum() / cm.sum() + def evaluate(self, annotations, predictions): confusion_matrix = self.state[self.CONFUSION_MATRIX_KEY] return np.diag(confusion_matrix).sum() / confusion_matrix.sum() @@ -78,10 +87,18 @@ def 
evaluate(self, annotations, predictions): class SegmentationIOU(SegmentationMetric): __provider__ = 'mean_iou' + def update(self, annotation, prediction): + cm = super().update(annotation, prediction) + diagonal = np.diag(cm).astype(float) + union = cm.sum(axis=1) + cm.sum(axis=0) - diagonal + iou = np.divide(diagonal, union, out=np.zeros_like(diagonal), where=union != 0) + + return iou + def evaluate(self, annotations, predictions): confusion_matrix = self.state[self.CONFUSION_MATRIX_KEY] - union = confusion_matrix.sum(axis=1) + confusion_matrix.sum(axis=0) - np.diag(confusion_matrix) diagonal = np.diag(confusion_matrix) + union = confusion_matrix.sum(axis=1) + confusion_matrix.sum(axis=0) - diagonal iou = np.divide(diagonal, union, out=np.zeros_like(diagonal), where=union != 0) values, names = finalize_metric_result(iou, list(self.dataset.labels.values())) @@ -93,6 +110,14 @@ def evaluate(self, annotations, predictions): class SegmentationMeanAccuracy(SegmentationMetric): __provider__ = 'mean_accuracy' + def update(self, annotation, prediction): + cm = super().update(annotation, prediction) + diagonal = np.diag(cm).astype(float) + per_class_count = cm.sum(axis=1) + acc_cls = np.divide(diagonal, per_class_count, out=np.zeros_like(diagonal), where=per_class_count != 0) + + return acc_cls + def evaluate(self, annotations, predictions): confusion_matrix = self.state[self.CONFUSION_MATRIX_KEY] diagonal = np.diag(confusion_matrix) @@ -108,11 +133,19 @@ def evaluate(self, annotations, predictions): class SegmentationFWAcc(SegmentationMetric): __provider__ = 'frequency_weighted_accuracy' + def update(self, annotation, prediction): + cm = super().update(annotation, prediction) + diagonal = np.diag(cm).astype(float) + union = cm.sum(axis=1) + cm.sum(axis=0) - diagonal + iou = np.divide(diagonal, union, out=np.zeros_like(diagonal), where=union != 0) + freq = cm.sum(axis=1) / cm.sum() + + return (freq[freq > 0] * iou[freq > 0]).sum() + def evaluate(self, annotations, predictions): confusion_matrix = self.state[self.CONFUSION_MATRIX_KEY] - - union = (confusion_matrix.sum(axis=1) + confusion_matrix.sum(axis=0) - np.diag(confusion_matrix)) diagonal = np.diag(confusion_matrix) + union = confusion_matrix.sum(axis=1) + confusion_matrix.sum(axis=0) - diagonal iou = np.divide(diagonal, union, out=np.zeros_like(diagonal), where=union != 0) freq = confusion_matrix.sum(axis=1) / confusion_matrix.sum() @@ -126,14 +159,15 @@ class SegmentationDSCAcc(PerImageEvaluationMetric): overall_metric = [] def update(self, annotation, prediction): - cnt = 0 + result = [] for prediction_mask, annotation_mask in zip(prediction.mask, annotation.mask): annotation_mask = np.transpose(annotation_mask, (2, 0, 1)) annotation_mask = np.expand_dims(annotation_mask, 0) numerator = np.sum(prediction_mask * annotation_mask) * 2.0 + 1.0 denominator = np.sum(annotation_mask) + np.sum(prediction_mask) + 1.0 - self.overall_metric.append(numerator / denominator) - cnt += 1 + result.append(numerator / denominator) + self.overall_metric.extend(result) + return np.mean(result) def evaluate(self, annotations, predictions): return sum(self.overall_metric) / len(self.overall_metric) @@ -201,6 +235,8 @@ def update(self, annotation, prediction): self.overall_metric.append(result) + return result + def evaluate(self, annotations, predictions): mean = np.mean(self.overall_metric, axis=0) if self.mean else [] median = np.median(self.overall_metric, axis=0) if self.median else [] diff --git 
a/tools/accuracy_checker/accuracy_checker/metrics/text_detection.py b/tools/accuracy_checker/accuracy_checker/metrics/text_detection.py index f0d7a204ea0..2a390fe0eb3 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/text_detection.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/text_detection.py @@ -176,7 +176,7 @@ def update(self, annotation, prediction): precision = 0 if num_det > 0 else 1 self.precision_sum += precision self.recall_sum += recall - return + return precision, recall, num_valid_gt, num_valid_pred recall_accum = 0 precision_accum = 0 @@ -218,6 +218,8 @@ def update(self, annotation, prediction): self.recall_sum += recall self.precision_sum += precision + return precision, recall, num_valid_gt, num_valid_pred + def evaluate(self, annotations, predictions): raise NotImplementedError() @@ -362,6 +364,10 @@ def reset(self): class FocusedTextLocalizationPrecision(FocusedTextLocalizationMetric): __provider__ = 'focused_text_precision' + def update(self, annotation, prediction): + precision, _, _, num_valid_dt = super().update(annotation, prediction) + return precision / num_valid_dt if num_valid_dt != 0 else 0 + def evaluate(self, annotations, predictions): return self.precision_sum / self.num_valid_detections if self.num_valid_detections != 0 else 0 @@ -369,6 +375,10 @@ def evaluate(self, annotations, predictions): class FocusedTextLocalizationRecall(FocusedTextLocalizationMetric): __provider__ = 'focused_text_recall' + def update(self, annotation, prediction): + precision, _, num_valid_gt, _ = super().update(annotation, prediction) + return precision / num_valid_gt if num_valid_gt != 0 else 0 + def evaluate(self, annotations, predictions): return self.recall_sum / self.num_valid_gt if self.num_valid_gt != 0 else 0 @@ -376,6 +386,13 @@ def evaluate(self, annotations, predictions): class FocusedTextLocalizationHMean(FocusedTextLocalizationMetric): __provider__ = 'focused_text_hmean' + def update(self, annotation, prediction): + precision, recall, num_valid_gt, num_valid_dt = super().update(annotation, prediction) + overall_p = precision / num_valid_dt if num_valid_dt != 0 else 0 + overall_r = recall / num_valid_gt if num_valid_gt != 0 else 0 + + return 2 * overall_r * overall_p / (overall_r + overall_p) if overall_r + overall_p != 0 else 0 + def evaluate(self, annotations, predictions): recall = self.recall_sum / self.num_valid_gt if self.num_valid_gt != 0 else 0 precision = self.precision_sum / self.num_valid_detections if self.num_valid_detections != 0 else 0 @@ -465,6 +482,8 @@ def update(self, annotation, prediction): self.number_valid_annotations += num_valid_gt self.number_valid_detections += num_valid_pred + return num_det_matched, num_valid_gt, num_valid_pred + def evaluate(self, annotations, predictions): raise NotImplementedError() @@ -477,6 +496,10 @@ def reset(self): class IncidentalSceneTextLocalizationPrecision(IncidentalSceneTextLocalizationMetric): __provider__ = 'incidental_text_precision' + def update(self, annotation, prediction): + num_det_matched, _, num_valid_dt = super().update(annotation, prediction) + return 0 if num_valid_dt == 0 else float(num_det_matched) / num_valid_dt + def evaluate(self, annotations, predictions): precision = ( 0 if self.number_valid_detections == 0 @@ -489,6 +512,10 @@ def evaluate(self, annotations, predictions): class IncidentalSceneTextLocalizationRecall(IncidentalSceneTextLocalizationMetric): __provider__ = 'incidental_text_recall' + def update(self, annotation, prediction): + num_det_matched, num_valid_gt, 
_ = super().update(annotation, prediction) + return 0 if num_valid_gt == 0 else float(num_det_matched) / num_valid_gt + def evaluate(self, annotations, predictions): recall = ( 0 if self.number_valid_annotations == 0 @@ -501,6 +528,13 @@ def evaluate(self, annotations, predictions): class IncidentalSceneTextLocalizationHMean(IncidentalSceneTextLocalizationMetric): __provider__ = 'incidental_text_hmean' + def update(self, annotation, prediction): + num_det_matched, num_valid_gt, num_valid_pred = super().update(annotation, prediction) + precision = 0 if num_valid_pred == 0 else num_det_matched / num_valid_pred + recall = 0 if num_valid_gt == 0 else num_det_matched / num_valid_gt + + return 0 if precision + recall == 0 else 2 * recall * precision / (recall + precision) + def evaluate(self, annotations, predictions): recall = ( 0 if self.number_valid_annotations == 0 diff --git a/tools/accuracy_checker/accuracy_checker/representation/regression_representation.py b/tools/accuracy_checker/accuracy_checker/representation/regression_representation.py index 99800d36233..48bb73f4a69 100644 --- a/tools/accuracy_checker/accuracy_checker/representation/regression_representation.py +++ b/tools/accuracy_checker/accuracy_checker/representation/regression_representation.py @@ -49,8 +49,8 @@ class GazeVectorPrediction(GazeVectorRepresentation): class FacialLandmarksRepresentation(BaseRepresentation): def __init__(self, identifier='', x_values=None, y_values=None): super().__init__(identifier) - self.x_values = x_values if x_values.any() else [] - self.y_values = y_values if y_values.any() else [] + self.x_values = x_values if x_values is not None else [] + self.y_values = y_values if y_values is not None else [] class FacialLandmarksAnnotation(FacialLandmarksRepresentation): diff --git a/tools/accuracy_checker/tests/test_metric_evaluator.py b/tools/accuracy_checker/tests/test_metric_evaluator.py index fc0c4d28f85..ff8edc1076e 100644 --- a/tools/accuracy_checker/tests/test_metric_evaluator.py +++ b/tools/accuracy_checker/tests/test_metric_evaluator.py @@ -16,7 +16,7 @@ import pytest from accuracy_checker.config import ConfigError -from accuracy_checker.metrics import ClassificationAccuracy, MetricsExecutor +from accuracy_checker.metrics import ClassificationAccuracy, MetricsExecutor, PerImageMetricResult from accuracy_checker.metrics.metric import Metric from accuracy_checker.representation import ( ClassificationAnnotation, @@ -70,7 +70,7 @@ def test_accuracy_on_container_with_wrong_annotation_source_name_raise_config_er dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'a'}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_with_wrong_annotation_type_raise_config_error_exception(self): annotations = [DetectionAnnotation('identifier', 3)] @@ -78,7 +78,7 @@ def test_accuracy_with_wrong_annotation_type_raise_config_error_exception(self): dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_with_unsupported_annotations_in_container_raise_config_error_exception(self): annotations = [ContainerAnnotation({'annotation': DetectionAnnotation('identifier', 3)})] @@ -86,7 +86,7 @@ def 
test_accuracy_with_unsupported_annotations_in_container_raise_config_error_e dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_with_unsupported_annotation_type_as_annotation_source_for_container_raises_config_error(self): annotations = [ContainerAnnotation({'annotation': DetectionAnnotation('identifier', 3)})] @@ -94,7 +94,7 @@ def test_accuracy_with_unsupported_annotation_type_as_annotation_source_for_cont dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'annotation'}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_on_annotation_container_with_several_suitable_representations_config_value_error_exception(self): annotations = [ContainerAnnotation({ @@ -105,7 +105,7 @@ def test_accuracy_on_annotation_container_with_several_suitable_representations_ dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_with_wrong_prediction_type_raise_config_error_exception(self): annotations = [ClassificationAnnotation('identifier', 3)] @@ -113,7 +113,7 @@ def test_accuracy_with_wrong_prediction_type_raise_config_error_exception(self): dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_with_unsupported_prediction_in_container_raise_config_error_exception(self): annotations = [ClassificationAnnotation('identifier', 3)] @@ -121,7 +121,7 @@ def test_accuracy_with_unsupported_prediction_in_container_raise_config_error_ex dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_with_unsupported_prediction_type_as_prediction_source_for_container_raises_config_error(self): annotations = [ClassificationAnnotation('identifier', 3)] @@ -129,7 +129,7 @@ def test_accuracy_with_unsupported_prediction_type_as_prediction_source_for_cont dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'prediction_source': 'prediction'}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def test_accuracy_on_prediction_container_with_several_suitable_representations_raise_config_error_exception(self): annotations = [ClassificationAnnotation('identifier', 3)] @@ -140,14 +140,14 @@ def test_accuracy_on_prediction_container_with_several_suitable_representations_ dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) with pytest.raises(ConfigError): - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) def 
test_complete_accuracy(self): annotations = [ClassificationAnnotation('identifier', 3)] predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result.name == 'accuracy' @@ -160,7 +160,7 @@ def test_complete_accuracy_with_container_default_sources(self): predictions = [ContainerPrediction({'p': ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])})] dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result.name == 'accuracy' @@ -174,7 +174,7 @@ def test_complete_accuracy_with_container_sources(self): config = [{'type': 'accuracy', 'top_k': 1, 'annotation_source': 'a', 'prediction_source': 'p'}] dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result.name == 'accuracy' @@ -199,7 +199,7 @@ def test_complete_accuracy_top_3(self): predictions = [ClassificationPrediction('identifier', [1.0, 3.0, 4.0, 2.0])] dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 3}], None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result.name == 'accuracy' @@ -248,7 +248,7 @@ def test_classification_per_class_accuracy_fully_zero_prediction(self): prediction = ClassificationPrediction('identifier', [1.0, 2.0]) dataset = DummyDataset(label_map={0: '0', 1: '1'}) dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset) - dispatcher.update_metrics_on_batch([annotation], [prediction]) + dispatcher.update_metrics_on_batch(range(1), [annotation], [prediction]) for _, evaluation_result in dispatcher.iterate_metrics([annotation], [prediction]): assert evaluation_result.name == 'accuracy_per_class' assert len(evaluation_result.evaluated_value) == 2 @@ -263,7 +263,7 @@ def test_classification_per_class_accuracy_partially_zero_prediction(self): dataset = DummyDataset(label_map={0: '0', 1: '1'}) dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset) - dispatcher.update_metrics_on_batch(annotation, prediction) + dispatcher.update_metrics_on_batch(range(len(annotation)), annotation, prediction) for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction): assert evaluation_result.name == 'accuracy_per_class' @@ -282,7 +282,7 @@ def test_classification_per_class_accuracy_complete_prediction(self): dataset = DummyDataset(label_map={0: '0', 1: '1'}) dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset) - dispatcher.update_metrics_on_batch(annotation, prediction) + dispatcher.update_metrics_on_batch(range(len(annotation)), annotation, prediction) for _, 
evaluation_result in dispatcher.iterate_metrics(annotation, prediction): assert evaluation_result.name == 'accuracy_per_class' @@ -306,7 +306,7 @@ def test_classification_per_class_accuracy_partially_prediction(self): dataset = DummyDataset(label_map={0: '0', 1: '1'}) dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 1}], dataset) - dispatcher.update_metrics_on_batch(annotation, prediction) + dispatcher.update_metrics_on_batch(range(len(annotation)), annotation, prediction) for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction): assert evaluation_result.name == 'accuracy_per_class' @@ -325,7 +325,7 @@ def test_classification_per_class_accuracy_prediction_top3_zero(self): dataset = DummyDataset(label_map={0: '0', 1: '1', 2: '2', 3: '3'}) dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 3}], dataset) - dispatcher.update_metrics_on_batch(annotation, prediction) + dispatcher.update_metrics_on_batch(range(len(annotation)), annotation, prediction) for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction): assert evaluation_result.name == 'accuracy_per_class' @@ -346,7 +346,7 @@ def test_classification_per_class_accuracy_prediction_top3(self): dataset = DummyDataset(label_map={0: '0', 1: '1', 2: '2', 3: '3'}) dispatcher = MetricsExecutor([{'type': 'accuracy_per_class', 'top_k': 3}], dataset) - dispatcher.update_metrics_on_batch(annotation, prediction) + dispatcher.update_metrics_on_batch(range(len(annotation)), annotation, prediction) for _, evaluation_result in dispatcher.iterate_metrics(annotation, prediction): assert evaluation_result.name == 'accuracy_per_class' @@ -359,6 +359,77 @@ def test_classification_per_class_accuracy_prediction_top3(self): assert evaluation_result.threshold is None +class TestMetricPerInstanceResult: + def test_classification_accuracy_result_for_batch_1(self): + annotations = [ClassificationAnnotation('identifier', 3)] + predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] + + dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + expected_metric_result = PerImageMetricResult('accuracy', 'accuracy', 1.0) + assert len(metric_result) == 1 + assert 0 in metric_result + assert len(metric_result[0]) == 1 + assert metric_result[0][0] == expected_metric_result + + def test_classification_accuracy_result_for_batch_1_with_named_metric(self): + annotations = [ClassificationAnnotation('identifier', 3)] + predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] + + dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1, 'name': 'accuracy@top1'}], None) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + expected_metric_result = PerImageMetricResult('accuracy@top1', 'accuracy', 1.0) + assert len(metric_result) == 1 + assert 0 in metric_result + assert len(metric_result[0]) == 1 + assert metric_result[0][0] == expected_metric_result + + def test_classification_accuracy_result_for_batch_1_with_2_metrics(self): + annotations = [ClassificationAnnotation('identifier', 3)] + predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] + + dispatcher = MetricsExecutor([ + {'name': 'top1', 'type': 'accuracy', 'top_k': 1}, {'name': 'top3', 'type': 'accuracy', 'top_k': 3} + ], None) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), 
annotations, predictions) + expected_metric_result = [PerImageMetricResult('top1', 'accuracy', 1.0), PerImageMetricResult('top3', 'accuracy', 1.0)] + assert len(metric_result) == 1 + assert 0 in metric_result + assert len(metric_result[0]) == 2 + assert metric_result[0][0] == expected_metric_result[0] + assert metric_result[0][1] == expected_metric_result[1] + + def test_classification_accuracy_result_for_batch_2(self): + annotations = [ClassificationAnnotation('identifier', 3), ClassificationAnnotation('identifier1', 1)] + predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0]), ClassificationPrediction('identifier2', [1.0, 1.0, 1.0, 4.0])] + + dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + expected_metric_result = [PerImageMetricResult('accuracy', 'accuracy', 1.0), PerImageMetricResult('accuracy', 'accuracy', 0.0)] + assert len(metric_result) == 2 + assert 0 in metric_result + assert len(metric_result[0]) == 1 + assert metric_result[0][0] == expected_metric_result[0] + assert 1 in metric_result + assert len(metric_result[1]) == 1 + assert metric_result[1][0] == expected_metric_result[1] + + def test_classification_accuracy_result_for_batch_2_with_not_ordered_ids(self): + annotations = [ClassificationAnnotation('identifier', 3), ClassificationAnnotation('identifier1', 1)] + predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0]), ClassificationPrediction('identifier2', [1.0, 1.0, 1.0, 4.0])] + + dispatcher = MetricsExecutor([{'type': 'accuracy', 'top_k': 1}], None) + metric_result = dispatcher.update_metrics_on_batch([42, 17], annotations, predictions) + expected_metric_result = [PerImageMetricResult('accuracy', 'accuracy', 1.0), PerImageMetricResult('accuracy', 'accuracy', 0.0)] + assert len(metric_result) == 2 + assert 42 in metric_result + assert len(metric_result[42]) == 1 + assert metric_result[42][0] == expected_metric_result[0] + assert 17 in metric_result + assert len(metric_result[17]) == 1 + assert metric_result[17][0] == expected_metric_result[1] + + class TestMetricExtraArgs: def test_all_metrics_raise_config_error_on_extra_args(self): for provider in Metric.providers: diff --git a/tools/accuracy_checker/tests/test_model_evaluator.py b/tools/accuracy_checker/tests/test_model_evaluator.py index d4b10a62691..f5faa940c58 100644 --- a/tools/accuracy_checker/tests/test_model_evaluator.py +++ b/tools/accuracy_checker/tests/test_model_evaluator.py @@ -45,7 +45,7 @@ def setup_method(self): self.annotations = [[annotation_container_0], [annotation_container_1]] self.dataset = MagicMock() - self.dataset.__iter__.return_value = self.annotations + self.dataset.__iter__.return_value = [(range(1), self.annotations[0]), (range(1), self.annotations[1])] self.postprocessor.process_batch = Mock(side_effect=[ ([annotation_container_0], [annotation_container_0]), ([annotation_container_1], [annotation_container_1]) @@ -185,7 +185,7 @@ def setup_method(self): self.annotations = [[annotation_container_0], [annotation_container_1]] self.dataset = MagicMock() - self.dataset.__iter__.return_value = self.annotations + self.dataset.__iter__.return_value = [(range(1), self.annotations[0]), (range(1), self.annotations[1])] self.postprocessor.process_batch = Mock(side_effect=[ ([annotation_container_0], [annotation_container_0]), ([annotation_container_1], [annotation_container_1]) diff --git 
a/tools/accuracy_checker/tests/test_presenter.py b/tools/accuracy_checker/tests/test_presenter.py index 4d2b5d4f501..76deb176b1f 100644 --- a/tools/accuracy_checker/tests/test_presenter.py +++ b/tools/accuracy_checker/tests/test_presenter.py @@ -28,7 +28,7 @@ def test_config_default_presenter(self): predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] config = [{'type': 'accuracy', 'top_k': 1}] dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for presenter, _ in dispatcher.iterate_metrics(annotations, predictions): assert isinstance(presenter, ScalarPrintPresenter) @@ -38,7 +38,7 @@ def test_config_scalar_presenter(self): predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] config = [{'type': 'accuracy', 'top_k': 1, 'presenter': 'print_scalar'}] dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for presenter, _ in dispatcher.iterate_metrics(annotations, predictions): assert isinstance(presenter, ScalarPrintPresenter) @@ -48,7 +48,7 @@ def test_config_vector_presenter(self): predictions = [ClassificationPrediction('identifier', [1.0, 1.0, 1.0, 4.0])] config = [{'type': 'accuracy', 'top_k': 1, 'presenter': 'print_vector'}] dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for presenter, _ in dispatcher.iterate_metrics(annotations, predictions): assert isinstance(presenter, VectorPrintPresenter) diff --git a/tools/accuracy_checker/tests/test_regression_metrics.py b/tools/accuracy_checker/tests/test_regression_metrics.py index 5e478043497..03302b4a244 100644 --- a/tools/accuracy_checker/tests/test_regression_metrics.py +++ b/tools/accuracy_checker/tests/test_regression_metrics.py @@ -15,8 +15,11 @@ """ import pytest +import numpy as np from accuracy_checker.metrics import MetricsExecutor -from accuracy_checker.representation import RegressionPrediction, RegressionAnnotation +from accuracy_checker.representation import ( + RegressionPrediction, RegressionAnnotation, FacialLandmarksAnnotation, FacialLandmarksPrediction +) from accuracy_checker.presenters import EvaluationResult @@ -38,7 +41,7 @@ def test_mae_with_zero_diff_between_annotation_and_prediction(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -57,7 +60,7 @@ def test_mae_with_negative_diff_between_annotation_and_prediction(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -76,7 +79,7 @@ def test_mae_with_positive_diff_between_annotation_and_prediction(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + 
dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -95,7 +98,7 @@ def test_mse_with_zero_diff_between_annotation_and_prediction(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -114,7 +117,7 @@ def test_mse_with_negative_diff_between_annotation_and_prediction(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -133,7 +136,7 @@ def test_mse_with_positive_diff_between_annotation_and_prediction(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -157,7 +160,7 @@ def test_mae_on_interval_default_all_missed(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) with pytest.warns(UserWarning) as warnings: for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): @@ -183,7 +186,7 @@ def test_mae_on_interval_default_all_not_in_range_not_ignore_out_of_range(self): config = [{'type': 'mae_on_interval', 'end': 1, 'ignore_values_not_in_interval': False}] dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -202,7 +205,7 @@ def test_mae_on_interval_values_in_range(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -241,7 +244,7 @@ def test_mae_on_interval_default_not_ignore_out_of_range(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -273,7 +276,7 @@ def test_mae_on_interval_with_given_interval(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -305,7 +308,7 @@ def 
test_mae_on_interval_with_repeated_values(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -336,7 +339,89 @@ def test_mae_on_interval_with_unsorted_values(self): ) dispatcher = MetricsExecutor(config, None) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected + + +class TestUpdateRegressionMetrics: + def test_update_mae_metric_result(self): + annotations = [RegressionAnnotation('identifier', 3), RegressionAnnotation('identifier2', 1)] + predictions = [RegressionPrediction('identifier', 5), RegressionPrediction('identifier2', 5)] + config = [{'type': 'mae'}] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 2 + assert metric_result[1][0].result == 4 + + def test_update_mse_metric_result(self): + annotations = [RegressionAnnotation('identifier', 3), RegressionAnnotation('identifier2', 1)] + predictions = [RegressionPrediction('identifier', 5), RegressionPrediction('identifier2', 5)] + config = [{'type': 'mse'}] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 4 + assert metric_result[1][0].result == 16 + + def test_update_rmse_metric_result(self): + annotations = [RegressionAnnotation('identifier', 3), RegressionAnnotation('identifier2', 1)] + predictions = [RegressionPrediction('identifier', 5), RegressionPrediction('identifier2', 5)] + config = [{'type': 'rmse'}] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 2 + assert metric_result[1][0].result == 4 + + def test_update_mae_on_interval_metric(self): + config = [{'type': 'mae_on_interval', 'intervals': [0.0, 2.0, 4.0]}] + annotations = [RegressionAnnotation('identifier', 3), RegressionAnnotation('identifier2', 1)] + predictions = [RegressionPrediction('identifier', 5), RegressionPrediction('identifier2', 5)] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 2 + assert metric_result[1][0].result == 4 + + def test_update_mse_on_interval_metric(self): + config = [{'type': 'mse_on_interval', 'intervals': [0.0, 2.0, 4.0]}] + annotations = [RegressionAnnotation('identifier', 3), RegressionAnnotation('identifier2', 1)] + predictions = [RegressionPrediction('identifier', 5), RegressionPrediction('identifier2', 5)] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 4 + assert metric_result[1][0].result == 16 + + def test_update_rmse_on_interval_metric(self): + config = [{'type': 'rmse_on_interval', 'intervals': [0.0, 2.0, 4.0]}] + annotations = 
[RegressionAnnotation('identifier', 3), RegressionAnnotation('identifier2', 1)] + predictions = [RegressionPrediction('identifier', 5), RegressionPrediction('identifier2', 5)] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 2 + assert metric_result[1][0].result == 4 + + def test_update_per_point_normed_error(self): + config = [{'type': 'per_point_normed_error'}] + annotations = [FacialLandmarksAnnotation('identifier', np.array([1, 1, 1, 1, 1]), np.array([1, 1, 1, 1, 1]))] + annotations[0].metadata.update({'left_eye': 0, 'right_eye': 1}) + predictions = [FacialLandmarksPrediction('identifier', np.array([1, 1, 1, 1, 1]), np.array([1, 1, 1, 1, 1]))] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert np.equal(metric_result[0][0].result.all(), np.zeros(5).all()) + + def test_update_normed_error(self): + config = [{'type': 'normed_error'}] + annotations = [FacialLandmarksAnnotation('identifier', np.array([1, 1, 1, 1, 1]), np.array([1, 1, 1, 1, 1]))] + annotations[0].metadata.update({'left_eye': 0, 'right_eye': 1}) + predictions = [FacialLandmarksPrediction('identifier', np.array([1, 1, 1, 1, 1]), np.array([1, 1, 1, 1, 1]))] + dispatcher = MetricsExecutor(config, None) + + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 0 diff --git a/tools/accuracy_checker/tests/test_segmentation_metrics.py b/tools/accuracy_checker/tests/test_segmentation_metrics.py index 56e13b660af..46f59507544 100644 --- a/tools/accuracy_checker/tests/test_segmentation_metrics.py +++ b/tools/accuracy_checker/tests/test_segmentation_metrics.py @@ -38,16 +38,23 @@ def test_one_class(self): annotations = make_segmentation_representation(np.array([[0, 0], [0, 0]]), True) predictions = make_segmentation_representation(np.array([[0, 0], [0, 0]]), False) dispatcher = MetricsExecutor(create_config(self.name), single_class_dataset()) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result(1.0, self.name) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected + def test_one_class_update_metric_result(self): + annotations = make_segmentation_representation(np.array([[0, 0], [0, 0]]), True) + predictions = make_segmentation_representation(np.array([[0, 0], [0, 0]]), False) + dispatcher = MetricsExecutor(create_config(self.name), single_class_dataset()) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 1 + def test_multi_class_not_matched(self): annotations = make_segmentation_representation(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]), True) predictions = make_segmentation_representation(np.array([[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]), False) dispatcher = MetricsExecutor(create_config(self.name), multi_class_dataset()) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result(0.0, self.name) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert 
evaluation_result == expected @@ -56,11 +63,18 @@ def test_multi_class(self): annotations = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), True) predictions = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), False) dispatcher = MetricsExecutor(create_config(self.name), multi_class_dataset()) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result((5.0+1.0+1.0)/(8.0+1.0+1.0), self.name) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected + def test_multi_class_update_metric_result(self): + annotations = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), True) + predictions = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), False) + dispatcher = MetricsExecutor(create_config(self.name), multi_class_dataset()) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 0.7 + class TestMeanAccuracy: name = 'mean_accuracy' @@ -70,7 +84,7 @@ def test_one_class(self): predictions = make_segmentation_representation(np.array([[0, 0], [0, 0]]), False) dataset = single_class_dataset() dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result([1.0, 0.0], self.name, dataset.labels) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -80,7 +94,7 @@ def test_multi_class_not_matched(self): predictions = make_segmentation_representation(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]), False) dataset = multi_class_dataset() dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result([0.0, 0.0, 0.0, 0.0], self.name, dataset.labels) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -90,11 +104,20 @@ def test_multi_class(self): annotations = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), True) predictions = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), False) dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result([1.0, 1.0, 0.0, 0.5], self.name, dataset.labels) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected + def test_update_metric_result(self): + dataset = multi_class_dataset() + annotations = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), True) + predictions = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), False) + dispatcher = MetricsExecutor(create_config(self.name), dataset) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + for class_result, 
expected_class_result in zip(metric_result[0][0].result, [1.0, 1.0, 0.0, 0.5]): + assert class_result == expected_class_result + class TestMeanIOU: name = 'mean_iou' @@ -104,7 +127,7 @@ def test_one_class(self): predictions = make_segmentation_representation(np.array([[0, 0], [0, 0]]), False) dataset = single_class_dataset() dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result([1.0, 0.0], self.name, dataset.labels) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -114,7 +137,7 @@ def test_multi_class_not_matched(self): predictions = make_segmentation_representation(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]), False) dataset = multi_class_dataset() dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result([0.0, 0.0, 0.0, 0.0], self.name, dataset.labels) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -124,11 +147,20 @@ def test_multi_class(self): annotations = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), True) predictions = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), False) dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result([0.625, 1.0, 0.0, 0.5], self.name, dataset.labels) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected + def test_update_metric_result(self): + dataset = multi_class_dataset() + annotations = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), True) + predictions = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), False) + dispatcher = MetricsExecutor(create_config(self.name), dataset) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + for class_result, expected_class_result in zip(metric_result[0][0].result, [0.625, 1.0, 0.0, 0.5]): + assert class_result == expected_class_result + class TestSegmentationFWAcc: name = 'frequency_weighted_accuracy' @@ -138,7 +170,7 @@ def test_one_class(self): predictions = make_segmentation_representation(np.array([[0, 0], [0, 0]]), False) dataset = single_class_dataset() dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result(1.0, self.name) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -148,7 +180,7 @@ def test_multi_class_not_matched(self): predictions = make_segmentation_representation(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]), False) dataset = multi_class_dataset() dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + 
dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result(0.0, self.name) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected @@ -158,7 +190,15 @@ def test_multi_class(self): annotations = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), True) predictions = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), False) dispatcher = MetricsExecutor(create_config(self.name), dataset) - dispatcher.update_metrics_on_batch(annotations, predictions) + dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) expected = generate_expected_result(0.5125, self.name) for _, evaluation_result in dispatcher.iterate_metrics(annotations, predictions): assert evaluation_result == expected + + def test_update_metric_result(self): + dataset = multi_class_dataset() + annotations = make_segmentation_representation(np.array([[1, 2, 3, 2, 3], [0, 0, 0, 0, 0]]), True) + predictions = make_segmentation_representation(np.array([[1, 0, 3, 0, 0], [0, 0, 0, 0, 0]]), False) + dispatcher = MetricsExecutor(create_config(self.name), dataset) + metric_result = dispatcher.update_metrics_on_batch(range(len(annotations)), annotations, predictions) + assert metric_result[0][0].result == 0.5125 From 6e935bca40429138abca4964ce7a1611662897b9 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 16 Oct 2019 13:51:56 +0300 Subject: [PATCH 148/927] AC: fix models paths (#517) --- .../configs/person-reidentification-retail-0076.yml | 12 ++++++------ .../configs/person-reidentification-retail-0079.yml | 4 ++-- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml index 5eef3ffee0d..4c9e36f67b6 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml @@ -5,23 +5,23 @@ models: - framework: dlsdk tags: - FP32 - model: intel/person-reidentification-retail-0031/FP32/person-reidentification-retail-0076.xml - weights: intel/person-reidentification-retail-0031/FP32/person-reidentification-retail-0076.bin + model: intel/person-reidentification-retail-0076/FP32/person-reidentification-retail-0076.xml + weights: intel/person-reidentification-retail-0076/FP32/person-reidentification-retail-0076.bin adapter: reid - framework: dlsdk tags: - FP16 - model: intel/person-reidentification-retail-0031/FP16/person-reidentification-retail-0076.xml - weights: intel/person-reidentification-retail-0031/FP16/person-reidentification-retail-0076.bin + model: intel/person-reidentification-retail-0076/FP16/person-reidentification-retail-0076.xml + weights: intel/person-reidentification-retail-0076/FP16/person-reidentification-retail-0076.bin adapter: reid - framework: dlsdk tags: - INT8 device: CPU - model: intel/person-reidentification-retail-0031/INT8/person-reidentification-retail-0076.xml - weights: intel/person-reidentification-retail-0031/INT8/person-reidentification-retail-0076.bin + model: intel/person-reidentification-retail-0076/INT8/person-reidentification-retail-0076.xml + weights: intel/person-reidentification-retail-0076/INT8/person-reidentification-retail-0076.bin adapter: reid datasets: diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml 
b/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml index 77d4c7ee5e4..c1187fc8d83 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml @@ -20,8 +20,8 @@ models: tags: - INT8 device: CPU - model: intel/person-reidentification-retail-0079/dldt/INT8/person-reidentification-retail-0079.xml - weights: intel/person-reidentification-retail-0079/0079/dldt/INT8/person-reidentification-retail-0079.bin + model: intel/person-reidentification-retail-0079/INT8/person-reidentification-retail-0079.xml + weights: intel/person-reidentification-retail-0079/INT8/person-reidentification-retail-0079.bin adapter: reid datasets: From ef79c04f901af81445b0fb0da8bd116b65579bcf Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 16 Oct 2019 14:58:12 +0300 Subject: [PATCH 149/927] AC: bugfix after introduction per data instance metrics (#518) --- .../accuracy_checker/evaluators/model_evaluator.py | 1 + .../evaluators/quantization_model_evaluator.py | 1 + .../accuracy_checker/accuracy_checker/metrics/coco_metrics.py | 4 ++-- 3 files changed, 4 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py index 98e45b7ca80..772be3868e0 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py @@ -243,6 +243,7 @@ def _wait_for_any(irs): if ir.wait(0) == 0: free_indexes.append(ir_id) result = [] + free_indexes.sort(reverse=True) for idx in free_indexes: batch_id, batch_input_ids, batch_annotation, batch_meta, ir = irs.pop(idx) result.append((batch_id, batch_input_ids, batch_annotation, batch_meta, ir.outputs, ir)) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index f597972b5d7..f60050b33a8 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -227,6 +227,7 @@ def _wait_for_any(irs): if ir.wait(0) == 0: free_indexes.append(ir_id) result = [] + free_indexes.sort(reverse=True) for idx in free_indexes: batch_id, batch_input_ids, batch_annotation, batch_identifiers, batch_meta, ir = irs.pop(idx) result.append((batch_id, batch_input_ids, batch_annotation, batch_identifiers, batch_meta, ir.outputs, ir)) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py index 28edfd9ff7a..13b6ab9db7f 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py @@ -98,7 +98,7 @@ class MSCOCOAveragePrecision(MSCOCOBaseMetric): def update(self, annotation, prediction): per_class_matching = super().update(annotation, prediction) - return [compute_precision_recall(self.thresholds, per_class_matching[i])[0] for i, _ in enumerate(self.labels)] + return [compute_precision_recall(self.thresholds, [per_class_matching[i]])[0] for i, _ in enumerate(self.labels)] def evaluate(self, annotations, predictions): precision = [ @@ -114,7 +114,7 @@ class MSCOCORecall(MSCOCOBaseMetric): def update(self, annotation, prediction): per_class_matching = super().update(annotation, 
prediction) - return [compute_precision_recall(self.thresholds, per_class_matching[i])[1] for i, _ in enumerate(self.labels)] + return [compute_precision_recall(self.thresholds, [per_class_matching[i]])[1] for i, _ in enumerate(self.labels)] def evaluate(self, annotations, predictions): recalls = [ From af21724b7ee0d09faea7d99c7758d80167742c19 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Wed, 16 Oct 2019 15:19:00 +0300 Subject: [PATCH 150/927] Fixed Demo section --- CONTRIBUTING.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 98b39a8694e..b822783b3a1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -184,7 +184,9 @@ Deep Learning Inference Engine (IE) supports models in the Intermediate Represen ## Demo -A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). +A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). + +The demo's name should end with `_demo` suffix to follow the convention of the project. Demos are required to support the following keys: @@ -201,6 +203,7 @@ If you add a new demo, provide autotesting support as well: - add demo launch parameters in [demos/tests/cases.py](demos/tests/cases.py) - prepare list of input images in [demos/tests/image_sequences.py](demos/tests/image_sequences.py) +Update [demos' README.md](demos/README.md) adding your demo to the list. 
___ *After this step you get a **demo** for your model (if no demo was available).* From 9ef91c63f968208b7a85caaed39249c93ae3050b Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 16 Oct 2019 16:12:27 +0300 Subject: [PATCH 151/927] AC: custom evals introduction (#306) --- tools/accuracy_checker/README.md | 6 + .../accuracy_checker/adapters/adapter.py | 1 + .../annotation_converters/README.md | 7 + .../annotation_converters/__init__.py | 4 +- .../action_recognition.py | 158 ++++++ .../accuracy_checker/config/config_reader.py | 519 ++++++++++++------ .../accuracy_checker/evaluators/__init__.py | 6 +- .../evaluators/base_evaluator.py | 44 ++ .../evaluators/model_evaluator.py | 27 +- .../evaluators/module_evaluator.py | 57 ++ .../evaluators/pipeline_evaluator.py | 27 +- .../launcher/caffe_launcher.py | 24 +- .../launcher/dlsdk_launcher.py | 5 +- .../launcher/dummy_launcher.py | 1 + .../launcher/mxnet_launcher.py | 84 +-- .../launcher/onnx_launcher.py | 27 +- .../launcher/opencv_launcher.py | 65 ++- .../accuracy_checker/launcher/tf_launcher.py | 55 +- .../launcher/tf_lite_launcher.py | 21 +- .../accuracy_checker/accuracy_checker/main.py | 45 +- .../accuracy_checker/metrics/coco_metrics.py | 8 +- .../accuracy_checker/utils.py | 1 + .../custom_evaluators/README.md | 22 + .../custom_evaluators/__init__.py | 15 + ...sequential_action_recognition_evaluator.py | 370 +++++++++++++ .../tests/test_caffe_launcher.py | 3 +- .../tests/test_config_reader.py | 38 +- .../tests/test_dlsdk_launcher.py | 11 +- 28 files changed, 1303 insertions(+), 348 deletions(-) create mode 100644 tools/accuracy_checker/accuracy_checker/annotation_converters/action_recognition.py create mode 100644 tools/accuracy_checker/accuracy_checker/evaluators/base_evaluator.py create mode 100644 tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py create mode 100644 tools/accuracy_checker/custom_evaluators/README.md create mode 100644 tools/accuracy_checker/custom_evaluators/__init__.py create mode 100644 tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py diff --git a/tools/accuracy_checker/README.md b/tools/accuracy_checker/README.md index d8557516838..3e544c23c6a 100644 --- a/tools/accuracy_checker/README.md +++ b/tools/accuracy_checker/README.md @@ -218,3 +218,9 @@ Typical workflow for testing new model include: 1. Choose one of *adapters* or write your own. Adapter converts raw output produced by framework to high level problem specific representation (e.g. *ClassificationPrediction*, *DetectionPrediction*, etc). 1. Reproduce preprocessing, metrics and postprocessing from canonical paper. 1. Create entry in config file and execute. + +### Customizing Evaluation + +The standard Accuracy Checker validation pipeline is: Annotation Reading -> Data Reading -> Preprocessing -> Inference -> Postprocessing -> Metrics. +In some cases this pipeline is unsuitable (e.g. if you have a sequence of models). You can customize the validation pipeline with your own evaluator. +More details about custom evaluations can be found in the [related section](custom_evaluators/README.md).
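To make the new `evaluations` mode introduced by this patch concrete, a minimal sketch of a custom evaluator module is shown below. Only the method names and signatures come from the `BaseEvaluator` interface added in this commit; the module and class names and the placeholder return values are illustrative assumptions, and the import path assumes the `accuracy_checker` package is installed so that the new `base_evaluator` module is importable.

```python
# my_custom_evaluator.py -- hypothetical module placed next to the config or on `python_path`
from accuracy_checker.evaluators.base_evaluator import BaseEvaluator


class MyEvaluator(BaseEvaluator):
    """Sketch of a custom evaluator; a real implementation would build its
    launchers, dataset and metrics from the `module_config` section."""

    @classmethod
    def from_configs(cls, config):
        # `config` is the `module_config` dictionary of the evaluation entry
        return cls()

    @staticmethod
    def get_processing_info(config):
        # name, framework, device, tags, dataset name -- used only for result reporting
        return config['name'], 'framework', 'device', None, 'dataset_name'

    def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs):
        # custom validation loop goes here (e.g. chaining several models)
        pass

    def compute_metrics(self, print_results=True, output_callback=None, ignore_results_formatting=False):
        # finalize metric computation and report results
        pass

    def release(self):
        pass

    def reset(self):
        pass
```

A config entry of the new `evaluations` type would then point at this class through its `module` key (a dotted path), optionally extend `sys.path` via `python_path` before import, and carry the usual `launchers`/`datasets` sections inside `module_config`, mirroring what `ModuleEvaluator.from_configs` does later in this patch.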
diff --git a/tools/accuracy_checker/accuracy_checker/adapters/adapter.py b/tools/accuracy_checker/accuracy_checker/adapters/adapter.py index bee31bed6d3..1da70d8aec0 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/adapter.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/adapter.py @@ -109,6 +109,7 @@ def create_adapter(adapter_config, launcher=None, dataset=None): adapter = Adapter.provide(adapter_config['type'], adapter_config, label_map=label_map) else: raise ConfigError('Unknown type for adapter configuration') + if launcher: adapter.output_blob = launcher.output_blob diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md index 9c8b2a8ba26..1ec6233ec67 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/README.md @@ -211,3 +211,10 @@ Accuracy Checker supports following list of annotation converters and specific f * `max_seq_length` - maximum total input sequence length after word-piece tokenization (Optional, default value is 128). * `lower_case` - allows switching tokens to lower case register. It is useful for working with uncased models (Optional, default value is False). * `language_filter` - comma-separated list of used in annotation language tags for selecting records for specific languages only. (Optional, if not used full annotation will be converted). +* `clip_action_recognition` - converts annotation for video-based action recognition datasets. Before conversion, the validation set should be preprocessed using the approach described [here](https://github.com/opencv/openvino_training_extensions/tree/develop/pytorch_toolkit/action_recognition#preparation). + * `annotation_file` - path to annotation file in json format. + * `data_dir` - path to directory with prepared data (e. g. data/kinetics/frames_data). + * `clips_per_video` - number of clips per video (Optional, default 3). + * `clip_duration` - clip duration (Optional, default 16). + * `temporal_stride` - temporal stride for frames selection (Optional, default 2). + * `subset` - dataset split: `train`, `validation` or `test` (Optional, default `validation`).
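To connect the converter options above with the dataset definitions used elsewhere in the tool, a dataset entry using `clip_action_recognition` might look like the sketch below. The dataset name and paths are placeholders; only the `annotation_conversion` parameter names come from the converter documented in this patch.

```yaml
datasets:
  - name: kinetics_400                # placeholder dataset name
    data_source: data/kinetics/frames_data
    annotation_conversion:
      converter: clip_action_recognition
      annotation_file: data/kinetics/kinetics_400.json   # placeholder path
      data_dir: data/kinetics/frames_data
      clips_per_video: 3              # optional, default 3
      clip_duration: 16               # optional, default 16
      temporal_stride: 2              # optional, default 2
      subset: validation              # train, test or validation
```

Each clip produced by the converter becomes a `ClassificationAnnotation` whose `ClipIdentifier` lists the selected frames, as implemented in `action_recognition.py` below.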
diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py index b8e225784bd..d1dc9977c77 100644 --- a/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/__init__.py @@ -48,6 +48,7 @@ from .cvat_person_detection_action_recognition import CVATPersonDetectionActionRecognitionConverter from .squad import SQUADConverter from .xnli import XNLIDatasetConverter +from .action_recognition import ActionRecognitionConverter __all__ = [ 'BaseFormatConverter', @@ -90,5 +91,6 @@ 'CVATPoseEstimationConverter', 'CVATPersonDetectionActionRecognitionConverter', 'SQUADConverter', - 'XNLIDatasetConverter' + 'XNLIDatasetConverter', + 'ActionRecognitionConverter' ] diff --git a/tools/accuracy_checker/accuracy_checker/annotation_converters/action_recognition.py b/tools/accuracy_checker/accuracy_checker/annotation_converters/action_recognition.py new file mode 100644 index 00000000000..a66a5c009aa --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/annotation_converters/action_recognition.py @@ -0,0 +1,158 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +""" + +from ..utils import read_json, read_txt, check_file_existence +from ..representation import ClassificationAnnotation +from ..data_readers import ClipIdentifier +from ..config import PathField, NumberField, StringField + +from .format_converter import BaseFormatConverter, ConverterReturn + + +class ActionRecognitionConverter(BaseFormatConverter): + __provider__ = 'clip_action_recognition' + annotation_types = (ClassificationAnnotation, ) + + @classmethod + def parameters(cls): + parameters = super().parameters() + parameters.update({ + 'annotation_file': PathField(description="Path to annotation file."), + 'data_dir': PathField(is_directory=True, description="Path to data directory."), + 'clips_per_video': NumberField( + value_type=int, optional=True, min_value=0, default=3, description="Number of clips per video." + ), + 'clip_duration': NumberField( + value_type=int, optional=True, min_value=0, default=16, description="Clip duration." + ), + 'temporal_stride': NumberField( + value_type=int, optional=True, min_value=0, default=2, description="Temporal Stride." + ), + 'subset': StringField( + choices=['train', 'test', 'validation'], default='validation', + optional=True, description="Subset: train, test or validation." 
+ ) + }) + + return parameters + + def configure(self): + self.annotation_file = self.get_value_from_config('annotation_file') + self.data_dir = self.get_value_from_config('data_dir') + self.clips_per_video = self.get_value_from_config('clips_per_video') + self.clip_duration = self.get_value_from_config('clip_duration') + self.temporal_stride = self.get_value_from_config('temporal_stride') + self.subset = self.get_value_from_config('subset') + + def convert(self, check_content=False, progress_callback=None, progress_interval=100, **kwargs): + full_annotation = read_json(self.annotation_file) + label_map = dict(enumerate(full_annotation['labels'])) + video_names, annotation = self.get_video_names_and_annotations(full_annotation['database'], self.subset) + class_to_idx = {v: k for k, v in label_map.items()} + + videos = [] + for video_name, annotation in zip(video_names, annotation): + video_path = self.data_dir / video_name + if not video_path.exists(): + continue + + n_frames_file = video_path / 'n_frames' + n_frames = ( + int(read_txt(n_frames_file)[0].rstrip('\n\r')) if n_frames_file.exists() + else len(list(video_path.glob('*.jpg'))) + ) + if n_frames <= 0: + continue + + begin_t = 1 + end_t = n_frames + sample = { + 'video': video_path, + 'video_name': video_name, + 'segment': [begin_t, end_t], + 'n_frames': n_frames, + 'video_id': video_name, + 'label': class_to_idx[annotation['label']] + } + + videos.append(sample) + + videos = sorted(videos, key=lambda v: v['video_id'].split('/')[-1]) + + clips = [] + for video in videos: + for clip in self.get_clips(video, self.clips_per_video, self.clip_duration, self.temporal_stride): + clips.append(clip) + + annotations = [] + num_iterations = len(clips) + content_errors = None if not check_content else [] + for clip_idx, clip in enumerate(clips): + if progress_callback is not None and clip_idx % progress_interval: + progress_callback(clip_idx * 100 / num_iterations) + identifier = ClipIdentifier(clip['video_name'], clip_idx, clip['frames']) + if check_content: + content_errors.extend([ + '{}: does not exist'.format(self.data_dir / frame) + for frame in clip['frames'] if not check_file_existence(self.data_dir / frame) + ]) + annotations.append(ClassificationAnnotation(identifier, clip['label'])) + + return ConverterReturn(annotations, {'label_map': label_map}, content_errors) + + @staticmethod + def get_clips(video, clips_per_video, clip_duration, temporal_stride=1): + num_frames = video['n_frames'] + clip_duration *= temporal_stride + + if clips_per_video == 0: + step = clip_duration + else: + step = max(1, (num_frames - clip_duration) // (clips_per_video - 1)) + + for clip_start in range(1, 1 + clips_per_video * step, step): + clip_end = min(clip_start + clip_duration, num_frames + 1) + + clip_idxs = list(range(clip_start, clip_end)) + + if not clip_idxs: + return + + # loop clip if it is shorter than clip_duration + while len(clip_idxs) < clip_duration: + clip_idxs = (clip_idxs * 2)[:clip_duration] + + clip = dict(video) + frames_idx = clip_idxs[::temporal_stride] + clip['frames'] = ['image_{:05d}.jpg'.format(frame_idx) for frame_idx in frames_idx] + yield clip + + @staticmethod + def get_video_names_and_annotations(data, subset): + video_names = [] + annotations = [] + + for key, value in data.items(): + this_subset = value['subset'] + if this_subset == subset: + if subset == 'testing': + video_names.append('test/{}'.format(key)) + else: + label = value['annotations']['label'] + video_names.append('{}/{}'.format(label, key)) + 
annotations.append(value['annotations']) + + return video_names, annotations diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index 13a856a28b3..b6a1172a1bc 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -80,6 +80,7 @@ def process_config(config, mode='models', arguments=None): ConfigReader._merge_paths_with_prefixes(arguments, config, mode) ConfigReader._provide_cmd_arguments(arguments, config, mode) ConfigReader._filter_launchers(config, arguments, mode) + ConfigReader._separate_evaluations(config, mode) @staticmethod def _read_configs(arguments): @@ -137,12 +138,33 @@ def _count_entry(stages, entry): if not count_metrics: raise ConfigError('Metrics are not specified') - if 'pipelines' in config: - _check_pipelines_config(config) - return 'pipelines' + def _check_module_config(config): + required_entries = ['name', 'module'] + evaluations = config['evaluations'] + if not evaluations: + raise ConfigError('Missing "{}" in local config'.format('evaluations')) + for evaluation in evaluations: + if _is_requirements_missed(evaluation, required_entries): + raise ConfigError('Each evaluation must specify {}'.format(', '.join(required_entries))) + + config_checkers = { + 'evaluations': _check_module_config, + 'models': _check_models_config, + 'pipelines': _check_pipelines_config, + } + + if not isinstance(config, dict): + raise ConfigError('local config should have a dictionary-based structure') + + eval_mode = next(iter(config)) + config_checker_func = config_checkers.get(eval_mode) + if config_checker_func is None: + raise ConfigError('Accuracy Checker {} mode is not supported. Please select between {}'.
format( + eval_mode, ', '.join(['evaluations', 'models', 'pipelines']) + )) + config_checker_func(config) - _check_models_config(config) - return 'models' + return eval_mode @staticmethod def _prepare_global_configs(global_configs): @@ -168,59 +190,87 @@ def merge(local_entries, global_entries, identifier): merge(dataset.get('postprocessing'), global_configs.get('postprocessing'), 'type') @staticmethod - def _merge_configs(global_configs, local_config, arguments, mode='models'): - def _merge_models_config(global_configs, local_config, arguments): - config = copy.deepcopy(local_config) - if not global_configs: - return config + def _merge_models_config(global_configs, local_config, arguments): + config = copy.deepcopy(local_config) + if not global_configs: + return config - models = config.get('models') - for model in models: - if 'launchers' in global_configs: - for i, launcher_entry in enumerate(model['launchers']): - model['launchers'][i] = ConfigReader._merge_configs_by_identifier( - global_configs['launchers'], launcher_entry, 'framework' - ) - if 'datasets' in global_configs: - for i, dataset in enumerate(model['datasets']): - model['datasets'][i] = ConfigReader._merge_configs_by_identifier( - global_configs['datasets'], dataset, 'name' + models = config['models'] + for model in models: + if 'launchers' in global_configs: + for i, launcher_entry in enumerate(model['launchers']): + model['launchers'][i] = ConfigReader._merge_configs_by_identifier( + global_configs['launchers'], launcher_entry, 'framework' + ) + if 'datasets' in global_configs: + for i, dataset in enumerate(model['datasets']): + model['datasets'][i] = ConfigReader._merge_configs_by_identifier( + global_configs['datasets'], dataset, 'name' + ) + + config['models'] = models + return config + + @staticmethod + def _merge_pipelines_config(global_config, local_config, args): + config = copy.deepcopy(local_config) + pipelines = [] + raw_pipelines = local_config['pipelines'] + for pipeline in raw_pipelines: + device_infos = pipeline.get('device_info', []) + if not device_infos: + device_infos = [{'device': device} for device in args.target_devices] + per_device_pipelines = [] + for device_info in device_infos: + copy_pipeline = copy.deepcopy(pipeline) + for stage in copy_pipeline['stages']: + if 'launcher' in stage: + stage['launcher'].update(device_info) + if global_config and global_config is not None and 'launchers' in global_config: + stage['launcher'] = ConfigReader._merge_configs_by_identifier( + global_config['launchers'], stage['launcher'], 'framework' + ) + if 'dataset' in stage and global_config is not None and 'datasets' in global_config: + dataset = stage['dataset'] + stage['dataset'] = ConfigReader._merge_configs_by_identifier( + global_config['datasets'], dataset, 'name' ) + per_device_pipelines.append(copy_pipeline) + pipelines.extend(per_device_pipelines) + config['pipelines'] = pipelines - return config + return config - def _merge_pipelines_config(global_config, local_config, args): - config = copy.deepcopy(local_config) - pipelines = [] - raw_pipelines = local_config['pipelines'] - for pipeline in raw_pipelines: - device_infos = pipeline.get('device_info', []) - if not device_infos: - device_infos = [{'device': device} for device in args.target_devices] - per_device_pipelines = [] - for device_info in device_infos: - copy_pipeline = copy.deepcopy(pipeline) - for stage in copy_pipeline['stages']: - if 'launcher' in stage: - stage['launcher'].update(device_info) - if global_config and global_config is not None 
and 'launchers' in global_config: - stage['launcher'] = ConfigReader._merge_configs_by_identifier( - global_config['launchers'], stage['launcher'], 'framework' - ) - if 'dataset' in stage and global_config is not None and 'datasets' in global_config: - dataset = stage['dataset'] - stage['dataset'] = ConfigReader._merge_configs_by_identifier( - global_configs['datasets'], dataset, 'name' - ) - per_device_pipelines.append(copy_pipeline) - pipelines.extend(per_device_pipelines) - config['pipelines'] = pipelines + @staticmethod + def _merge_module_config(global_config, local_config, args): + config = copy.deepcopy(local_config) + if not global_config: return config + for evaluation in config['evaluations']: + if 'module_config' not in evaluation: + continue + module_config = evaluation['module_config'] + if 'launchers' in module_config and 'launchers' in global_config: + for i, launcher_entry in enumerate(module_config['launchers']): + module_config['launchers'][i] = ConfigReader._merge_configs_by_identifier( + global_config['launchers'], launcher_entry, 'framework' + ) + if 'datasets' in module_config and 'datasets' in global_config: + for i, dataset in enumerate(module_config['datasets']): + module_config['datasets'][i] = ConfigReader._merge_configs_by_identifier( + global_config['datasets'], dataset, 'name' + ) + + return config + + @staticmethod + def _merge_configs(global_configs, local_config, arguments, mode='models'): functors_by_mode = { - 'models': _merge_models_config, - 'pipelines': _merge_pipelines_config + 'models': ConfigReader._merge_models_config, + 'pipelines': ConfigReader._merge_pipelines_config, + 'evaluations': ConfigReader._merge_module_config } return functors_by_mode[mode](global_configs, local_config, arguments) @@ -252,84 +302,35 @@ def _merge_configs_by_identifier(global_config, local_config, identifier): def _merge_paths_with_prefixes(arguments, config, mode='models'): args = arguments if isinstance(arguments, dict) else vars(arguments) - def merge_entry_paths(keys, value): - for field, argument in keys.items(): - if field not in value: - continue - - config_path = Path(value[field]) - if config_path.is_absolute(): - value[field] = Path(value[field]) - continue - - if argument not in args or not args[argument]: - continue - - if not args[argument].is_dir(): - raise ConfigError('argument: {} should be a directory'.format(argument)) - value[field] = args[argument] / config_path - - def process_config( - config_item, entries_paths, dataset_identifier='datasets', - launchers_identifier='launchers', identifiers_mapping=None - ): - - def process_dataset(datasets_configs): - if not isinstance(datasets_configs, list): - datasets_configs = [datasets_configs] - for datasets_config in datasets_configs: - annotation_conversion_config = datasets_config.get('annotation_conversion') - if annotation_conversion_config: - command_line_conversion = (create_command_line_mapping(annotation_conversion_config, 'source')) - merge_entry_paths(command_line_conversion, annotation_conversion_config) - if 'preprocessing' in datasets_config: - for preprocessor in datasets_config['preprocessing']: - command_line_preprocessing = (create_command_line_mapping(preprocessor, 'models')) - merge_entry_paths(command_line_preprocessing, preprocessor) - - def process_launchers(launchers_configs): - if not isinstance(launchers_configs, list): - launchers_configs = [launchers_configs] - - for launcher_config in launchers_configs: - adapter_config = launcher_config.get('adapter') - if not 
isinstance(adapter_config, dict): - continue - command_line_adapter = (create_command_line_mapping(adapter_config, 'models')) - merge_entry_paths(command_line_adapter, adapter_config) - - for entry, command_line_arg in entries_paths.items(): - entry_id = entry if not identifiers_mapping else identifiers_mapping[entry] - if entry_id not in config_item: - continue - - if entry_id == dataset_identifier: - process_dataset(config_item[entry_id]) - - if entry_id == launchers_identifier: - launchers_configs = config_item[entry_id] - process_launchers(launchers_configs) - - config_entries = config_item[entry_id] - if not isinstance(config_entries, list): - config_entries = [config_entries] - for config_entry in config_entries: - merge_entry_paths(command_line_arg, config_entry) - def process_models(config, entries_paths): for model in config['models']: - process_config(model, entries_paths) + process_config(model, entries_paths, args) def process_pipelines(config, entries_paths): identifiers_mapping = {'datasets': 'dataset', 'launchers': 'launcher', 'reader': 'reader'} entries_paths.update({'reader': {'data_source': 'source'}}) for pipeline in config['pipelines']: for stage in pipeline['stages']: - process_config(stage, entries_paths, 'dataset', 'launcher', identifiers_mapping) + process_config(stage, entries_paths, args, 'dataset', 'launcher', identifiers_mapping) + + def process_modules(config, entries_paths): + for evaluation in config['evaluations']: + module_config = evaluation.get('module_config') + if not module_config: + continue + process_config(module_config, entries_paths, args) + if 'network_info' in module_config: + networks_info = module_config['network_info'] + if isinstance(networks_info, dict): + for _, params in networks_info.items(): + merge_entry_paths(entries_paths['launchers'], params, args) + if isinstance(networks_info, list): + merge_entry_paths(entries_paths['launchers'], networks_info, args) functors_by_mode = { 'models': process_models, - 'pipelines': process_pipelines + 'pipelines': process_pipelines, + 'evaluations': process_modules } processing_func = functors_by_mode[mode] @@ -394,9 +395,21 @@ def merge_pipelines(config, arguments, update_launcher_entry): for stage in pipeline['stages']: if 'launcher' in stage: merge_dlsdk_launcher_args(arguments, stage['launcher'], update_launcher_entry) + + def merge_modules(config, arguments, update_launcher_entry): + for evaluation in config['evaluations']: + module_config = evaluation.get('module_config') + if not module_config: + continue + if 'launchers' not in module_config: + continue + for launcher in module_config['launchers']: + merge_dlsdk_launcher_args(arguments, launcher, update_launcher_entry) + functors_by_mode = { 'models': merge_models, - 'pipelines': merge_pipelines + 'pipelines': merge_pipelines, + 'evaluations': merge_modules } additional_keys = [ @@ -416,66 +429,80 @@ def merge_pipelines(config, arguments, update_launcher_entry): @staticmethod def _filter_launchers(config, arguments, mode='models'): - def filtered(launcher, targets): - target_tags = args.get('target_tags') or [] - if target_tags: - if not contains_any(target_tags, launcher.get('tags', [])): - return True - - config_framework = launcher['framework'].lower() - target_framework = (args.get('target_framework') or config_framework).lower() - if config_framework != target_framework: - return True - - return targets and launcher.get('device', '').lower() not in targets - - def filter_models(config, target_devices): - models_after_filtration = [] - 
for model in config['models']: - launchers_after_filtration = [] - launchers = model['launchers'] - for launcher in launchers: - if 'device' not in launcher and target_devices: - for device in target_devices: - launcher_with_device = copy.deepcopy(launcher) - launcher_with_device['device'] = device - if not filtered(launcher_with_device, target_devices): - launchers_after_filtration.append(launcher_with_device) - if not filtered(launcher, target_devices): - launchers_after_filtration.append(launcher) - - if not launchers_after_filtration: - warnings.warn('Model "{}" has no launchers'.format(model['name'])) - continue - - model['launchers'] = launchers_after_filtration - models_after_filtration.append(model) - - config['models'] = models_after_filtration - - def filter_pipelines(config, target_devices): - saved_pipelines = [] - for pipeline in config['pipelines']: - filtered_pipeline = False - for stage in pipeline['stages']: - if 'launcher' in stage: - if filtered(stage['launcher'], target_devices): - filtered_pipeline = True - break - if filtered_pipeline: - continue - saved_pipelines.append(pipeline) - config['pipelines'] = saved_pipelines - functors_by_mode = { 'models': filter_models, - 'pipelines': filter_pipelines + 'pipelines': filter_pipelines, + 'evaluations': filter_modules } args = arguments if isinstance(arguments, dict) else vars(arguments) target_devices = to_lower_register(args.get('target_devices') or []) filtering_mode = functors_by_mode[mode] - filtering_mode(config, target_devices) + filtering_mode(config, target_devices, args) + + @staticmethod + def _separate_evaluations(config, mode='models'): + def _separate_models_evaluations(models_config): + evaluations = [] + for model in models_config['models']: + launchers = model['launchers'] + datasets = model['datasets'] + if not launchers: + continue + if len(launchers) == 1 and len(datasets) == 1: + evaluations.append(model) + continue + for launcher in model['launchers']: + model_evaluations = [] + model_config_copy_launcher = copy.deepcopy(model) + model_config_copy_launcher['launchers'] = [launcher] + + for dataset in model_config_copy_launcher['datasets']: + model_config_copy_dataset = copy.deepcopy(model_config_copy_launcher) + model_config_copy_dataset['datasets'] = [dataset] + model_evaluations.append(model_config_copy_dataset) + + evaluations.extend(model_evaluations) + + models_config['models'] = evaluations + + def _separate_modules_evaluations(modules_config): + evals = modules_config['evaluations'] + eval_list = [] + for evaluation in evals: + if 'module_config' not in evaluation: + eval_list.append(evaluation) + continue + module_config = evaluation['module_config'] + launchers = module_config.get('launchers', []) + datasets = module_config.get('datasets', []) + eval_config_list = [] + for launcher in launchers: + copy_module_config = copy.deepcopy(module_config) + copy_module_config['launchers'] = [launcher] + if not datasets: + eval_config_list.append(copy_module_config) + continue + for dataset in datasets: + copy_evaluation_for_dataset = copy.deepcopy(copy_module_config) + copy_evaluation_for_dataset['datasets'] = [dataset] + eval_config_list.append(copy_evaluation_for_dataset) + for eval_config in eval_config_list: + copy_evaluation = copy.deepcopy(evaluation) + copy_evaluation['module_config'] = eval_config + eval_list.append(copy_evaluation) + + modules_config['evaluations'] = eval_list + + mode_func = { + 'models': _separate_models_evaluations, + 'evaluations': _separate_modules_evaluations + } + + 
separator = mode_func.get(mode) + if not separator: + return + separator(config) @staticmethod def convert_paths(config): @@ -523,3 +550,153 @@ def create_command_line_mapping(config, value): mapping[key] = value return mapping + + +def filtered(launcher, targets, args): + target_tags = args.get('target_tags') or [] + if target_tags: + if not contains_any(target_tags, launcher.get('tags', [])): + return True + + config_framework = launcher['framework'].lower() + target_framework = (args.get('target_framework') or config_framework).lower() + if config_framework != target_framework: + return True + + return targets and launcher.get('device', '').lower() not in targets + + +def filter_models(config, target_devices, args): + models_after_filtration = [] + for model in config['models']: + launchers_after_filtration = [] + launchers = model['launchers'] + for launcher in launchers: + if 'device' not in launcher and target_devices: + for device in target_devices: + launcher_with_device = copy.deepcopy(launcher) + launcher_with_device['device'] = device + if not filtered(launcher_with_device, target_devices, args): + launchers_after_filtration.append(launcher_with_device) + continue + if not filtered(launcher, target_devices, args): + launchers_after_filtration.append(launcher) + + if not launchers_after_filtration: + warnings.warn('Model "{}" has no launchers'.format(model['name'])) + continue + + model['launchers'] = launchers_after_filtration + models_after_filtration.append(model) + + config['models'] = models_after_filtration + + +def filter_pipelines(config, target_devices, args): + saved_pipelines = [] + for pipeline in config['pipelines']: + filtered_pipeline = False + for stage in pipeline['stages']: + if 'launcher' in stage: + if filtered(stage['launcher'], target_devices, args): + filtered_pipeline = True + break + if filtered_pipeline: + continue + saved_pipelines.append(pipeline) + config['pipelines'] = saved_pipelines + + +def filter_modules(config, target_devices, args): + filtered_evals = [] + for evaluation in config['evaluations']: + if 'module_config' not in evaluation or 'launchers' not in evaluation['module_config']: + if target_devices: + warnings.warn( + 'Information about launcher is not provided in config for {}. 
' + 'Filtration can not be done'.format(evaluation['name']) + ) + filtered_evals.append(evaluation) + continue + module_config = evaluation['module_config'] + launchers = module_config['launchers'] + if target_devices: + launchers_without_device = [launcher for launcher in launchers if 'device' not in launcher] + for launcher in launchers_without_device: + for device in target_devices: + launcher_with_device = copy.deepcopy(launcher) + launcher_with_device['device'] = device + launchers.append(launcher_with_device) + launchers = [ + launcher for launcher in launchers if not filtered(launcher, target_devices, args) + ] + if not launchers: + warnings.warn('Model "{}" has no launchers'.format(evaluation['name'])) + evaluation['module_config']['launchers'] = launchers + filtered_evals.append(evaluation) + config['evaluations'] = filtered_evals + + +def process_config( + config_item, entries_paths, args, dataset_identifier='datasets', + launchers_idenitfier='launchers', identifers_mapping=None +): + def process_dataset(datasets_configs): + if not isinstance(datasets_configs, list): + datasets_configs = [datasets_configs] + for datasets_config in datasets_configs: + annotation_conversion_config = datasets_config.get('annotation_conversion') + if annotation_conversion_config: + command_line_conversion = (create_command_line_mapping(annotation_conversion_config, 'source')) + merge_entry_paths(command_line_conversion, annotation_conversion_config, args) + if 'preprocessing' in datasets_config: + for preprocessor in datasets_config['preprocessing']: + command_line_preprocessing = (create_command_line_mapping(preprocessor, 'models')) + merge_entry_paths(command_line_preprocessing, preprocessor, args) + + def process_launchers(launchers_configs): + if not isinstance(launchers_configs, list): + launchers_configs = [launchers_configs] + + for launcher_config in launchers_configs: + adapter_config = launcher_config.get('adapter') + if not isinstance(adapter_config, dict): + continue + command_line_adapter = (create_command_line_mapping(adapter_config, 'models')) + merge_entry_paths(command_line_adapter, adapter_config, args) + + for entry, command_line_arg in entries_paths.items(): + entry_id = entry if not identifers_mapping else identifers_mapping[entry] + if entry_id not in config_item: + continue + + if entry_id == dataset_identifier: + process_dataset(config_item[entry_id]) + + if entry_id == launchers_idenitfier: + launchers_configs = config_item[entry_id] + process_launchers(launchers_configs) + + config_entires = config_item[entry_id] + if not isinstance(config_entires, list): + config_entires = [config_entires] + for config_entry in config_entires: + merge_entry_paths(command_line_arg, config_entry, args) + + +def merge_entry_paths(keys, value, args): + for field, argument in keys.items(): + if field not in value: + continue + + config_path = Path(value[field]) + if config_path.is_absolute(): + value[field] = Path(value[field]) + continue + + if not argument in args or not args[argument]: + continue + + if not args[argument].is_dir(): + raise ConfigError('argument: {} should be a directory'.format(argument)) + value[field] = args[argument] / config_path diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/__init__.py b/tools/accuracy_checker/accuracy_checker/evaluators/__init__.py index 765549ea638..77986d0faf9 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/__init__.py @@ -15,10 +15,12 @@ """ from 
.model_evaluator import ModelEvaluator -from .pipeline_evaluator import PipeLineEvaluator, get_processing_info +from .pipeline_evaluator import PipeLineEvaluator +from .module_evaluator import ModuleEvaluator + __all__ = [ 'ModelEvaluator', 'PipeLineEvaluator', - 'get_processing_info' + 'ModuleEvaluator' ] diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/base_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/base_evaluator.py new file mode 100644 index 00000000000..22b523a1942 --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/evaluators/base_evaluator.py @@ -0,0 +1,44 @@ +""" +Copyright (c) 2019 Intel Corporation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +""" + + +# base class for custom evaluators +class BaseEvaluator: + # create class instance using config + @classmethod + def from_configs(cls, config): + return cls() + + # extract information related to evaluation from config + @staticmethod + def get_processing_info(config): + return config['name'], 'framework', 'device', None, 'dataset_name' + + # determine cycle for dataset processing + def process_dataset(self, *args, **kwargs): + raise NotImplementedError + + # finalize and get metrics results + def compute_metrics(self, print_results=True, output_callback=None, ignore_results_formatting=False): + raise NotImplementedError + + # destruction for entity, which can not be deleted automatically + def release(self): + pass + + # reset progress for metrics calculation + def reset(self): + raise NotImplementedError diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py index 772be3868e0..b1b5c460533 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/model_evaluator.py @@ -29,9 +29,10 @@ from ..adapters import create_adapter from ..config import ConfigError from ..data_readers import BaseReader +from .base_evaluator import BaseEvaluator -class ModelEvaluator: +class ModelEvaluator(BaseEvaluator): def __init__( self, launcher, input_feeder, adapter, reader, preprocessor, postprocessor, dataset, metric, async_mode ): @@ -50,10 +51,13 @@ def __init__( self._metrics_results = [] @classmethod - def from_configs(cls, launcher_config, dataset_config): + def from_configs(cls, model_config): + launcher_config = model_config['launchers'][0] + dataset_config = model_config['datasets'][0] dataset_name = dataset_config['name'] data_reader_config = dataset_config.get('reader', 'opencv_imread') data_source = dataset_config.get('data_source') + dataset = Dataset(dataset_config) if isinstance(data_reader_config, str): data_reader = BaseReader.provide(data_reader_config, data_source, annotations=dataset.annotation) @@ -63,8 +67,6 @@ def from_configs(cls, launcher_config, dataset_config): ) else: raise ConfigError('reader should be dict or string') - - dataset = Dataset(dataset_config) launcher = create_launcher(launcher_config) async_mode = 
launcher.async_mode if hasattr(launcher, 'async_mode') else False config_adapter = launcher_config.get('adapter') @@ -83,6 +85,17 @@ def from_configs(cls, launcher_config, dataset_config): preprocessor, postprocessor, dataset, metric_dispatcher, async_mode ) + @staticmethod + def get_processing_info(config): + launcher_config = config['launchers'][0] + dataset_config = config['datasets'][0] + + return ( + config['name'], + launcher_config['framework'], launcher_config['device'], launcher_config.get('tags'), + dataset_config['name'] + ) + def _get_batch_input(self, batch_annotation): batch_identifiers = [annotation.identifier for annotation in batch_annotation] batch_input = [self.reader(identifier=identifier) for identifier in batch_identifiers] @@ -164,6 +177,8 @@ def _process_ready_predictions(batch_predictions, batch_identifiers, batch_meta, return self.postprocessor.process_dataset(self._annotations, self._predictions) def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs): + if progress_reporter: + progress_reporter.reset(self.dataset.size) if self._is_stored(stored_predictions) or isinstance(self.launcher, DummyLauncher): self._annotations, self._predictions = self.load(stored_predictions, progress_reporter) self._annotations, self._predictions = self.postprocessor.full_process(self._annotations, self._predictions) @@ -314,6 +329,9 @@ def store_predictions(stored_predictions, predictions): pickle.dump(predictions, content) print_info("prediction objects are save to {}".format(stored_predictions)) + def reset_progress(self, progress_reporter): + progress_reporter.reset(self.dataset.size) + def reset(self): self.metric_executor.reset() del self._annotations @@ -322,6 +340,7 @@ def reset(self): self._annotations = [] self._predictions = [] self._metrics_results = [] + self.dataset.reset() def release(self): self.launcher.release() diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py new file mode 100644 index 00000000000..fe1c7989c18 --- /dev/null +++ b/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py @@ -0,0 +1,57 @@ +from contextlib import contextmanager +import sys +import importlib + +from .base_evaluator import BaseEvaluator + + +class ModuleEvaluator(BaseEvaluator): + def __init__(self, internal_module): + super().__init__() + self._internal_module = internal_module + + @classmethod + def from_configs(cls, config): + module = config['module'] + module_config = config.get('module_config') + python_path = config.get('python_path') + + return cls(load_module(module, python_path).from_configs(module_config)) + + def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs): + self._internal_module.process_dataset(stored_predictions, progress_reporter, *args, **kwargs) + + def compute_metrics(self, print_results=True, output_callback=None, ignore_results_formatting=False): + self._internal_module.compute_metrics(print_results, output_callback, ignore_results_formatting) + + def release(self): + self._internal_module.release() + del self._internal_module + + def reset(self): + self._internal_module.reset() + + @staticmethod + def get_processing_info(config): + module = config['module'] + python_path = config.get('python_path') + return load_module(module, python_path).get_processing_info(config) + + +def load_module(model_cls, python_path=None): + module_parts = model_cls.split(".") + model_cls = module_parts[-1] + 
model_path = ".".join(module_parts[:-1]) + with append_to_path(python_path): + moodule_cls = importlib.import_module(model_path).__getattribute__(model_cls) + return moodule_cls + + +@contextmanager +def append_to_path(path): + if path: + sys.path.append(path) + yield + + if path: + sys.path.remove(path) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py index fe067368cec..8fec0367af0 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py @@ -26,7 +26,8 @@ from ..metrics import MetricsExecutor from ..pipeline_connectors import StageConnectionDescription, Connection from ..postprocessor import PostprocessingExecutor -from..preprocessor import PreprocessingExecutor +from ..preprocessor import PreprocessingExecutor +from .base_evaluator import BaseEvaluator def get_processing_info(pipeline_config): @@ -179,7 +180,7 @@ def reset(self): self.metrics_executor.reset() -class PipeLineEvaluator: +class PipeLineEvaluator(BaseEvaluator): def __init__(self, stages): self.stages = stages self.create_connectors() @@ -194,6 +195,22 @@ def from_configs(cls, pipeline_config): stages[stage_name] = evaluation_stage return cls(stages) + @staticmethod + def get_processing_info(config): + name = config['name'] + stages = config['stages'] + dataset_name = stages[0]['dataset']['name'] + launcher = {} + for stage in stages: + if 'launcher' in stage: + launcher = stage['launcher'] + break + framework = launcher.get('framework') + device = launcher.get('device') + tags = launcher.get('tags') + + return name, framework, device, tags, dataset_name + def create_connectors(self): def make_connection(stages, connection_template): return Connection(stages, connection_template) @@ -226,7 +243,7 @@ def process_dataset(self, stored_predictions, progress_reporter, *args, **kwargs if progress_reporter: progress_reporter.finish() - def compute_metrics(self, output_callback=None, ignore_results_formatting=False): + def compute_metrics(self, print_results=True, output_callback=None, ignore_results_formatting=False): def eval_metrics(metrics_executor, annotations, predictions): for result_presenter, evaluated_metric in metrics_executor.iterate_metrics(annotations, predictions): result_presenter.write_result(evaluated_metric, output_callback, ignore_results_formatting) @@ -241,3 +258,7 @@ def release(self): for _, stage in self.stages.items(): for launcher in stage.evaluation_context.launcher: launcher.release() + + def reset(self): + for _, stage in self.stages.items(): + stage.evaluation_context.reset() diff --git a/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py index 2152bb45943..e7e7d344b84 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/caffe_launcher.py @@ -37,11 +37,13 @@ def __init__(self, config_entry: dict, *args, **kwargs): caffe_launcher_config = LauncherConfigValidator('Caffe_Launcher', fields=self.parameters()) caffe_launcher_config.validate(self.config) + self._delayed_model_loading = kwargs.get('delayed_model_loading', False) + self._do_reshape = False - self.model = str(self.get_value_from_config('model')) - self.weights = str(self.get_value_from_config('weights')) - - self.network = caffe.Net(self.model, self.weights, 
caffe.TEST) + if not self._delayed_model_loading: + self.model = str(self.get_value_from_config('model')) + self.weights = str(self.get_value_from_config('weights')) + self.network = caffe.Net(self.model, self.weights, caffe.TEST) self.allow_reshape_input = self.get_value_from_config('allow_reshape_input') match = re.match(DEVICE_REGEX, self.get_value_from_config('device').lower()) @@ -90,10 +92,14 @@ def output_blob(self): def fit_to_input(self, data, layer_name, layout): data_shape = np.shape(data) - data = np.transpose(data, layout) if len(data_shape) == 4 else np.array(data) layer_shape = self.inputs[layer_name] + if len(data_shape) == 5 and len(layer_shape) == 4: + data = data[0] + data_shape = np.shape(data) + data = np.transpose(data, layout) if len(data_shape) == 4 else np.array(data) + data_shape = np.shape(data) if layer_shape != data_shape: - self.network.blobs[layer_name].reshape(*data.shape) + self._do_reshape = True return data @@ -107,7 +113,13 @@ def predict(self, inputs, metadata=None, **kwargs): """ results = [] for infer_input in inputs: + if self._do_reshape: + for layer_name, data in infer_input.items(): + if data.shape != self.inputs[layer_name]: + self.network.blobs[layer_name].reshape(*data.shape) + results.append(self.network.forward(**infer_input)) + if metadata is not None: for image_meta in metadata: image_meta['input_shape'] = self.inputs_info_for_meta() diff --git a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py index 9357ba6e445..32bfe8ee632 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py @@ -273,9 +273,8 @@ def predict(self, inputs, metadata=None, **kwargs): results.append(result) if metadata is not None: - for image_meta in metadata: - image_meta['input_shape'] = self.inputs_info_for_meta() - + for meta_ in metadata: + meta_['input_shape'] = self.inputs_info_for_meta() self._do_reshape = False return results diff --git a/tools/accuracy_checker/accuracy_checker/launcher/dummy_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/dummy_launcher.py index 0076739b258..fa40624c2c4 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/dummy_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/dummy_launcher.py @@ -21,6 +21,7 @@ from .loaders import Loader from .launcher import Launcher, LauncherConfigValidator + class DummyLauncher(Launcher): """ Class for using predictions from another tool. 
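The `delayed_model_loading` keyword added to the launchers above is meant for evaluators that construct networks themselves after the launcher object exists. A minimal sketch of that call pattern, assuming only the `create_launcher` keyword argument used later in this patch (the launcher entry itself is a placeholder):

```python
# Sketch only: skip model loading in the launcher, then attach networks manually.
from accuracy_checker.launcher import create_launcher

launcher_config = {'framework': 'dlsdk', 'device': 'CPU'}  # hypothetical minimal entry
launcher = create_launcher(launcher_config, delayed_model_loading=True)
# The caller is now responsible for loading models, e.g. through
# launcher.create_ie_network(...) as the encoder/decoder helpers added below do.
```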
diff --git a/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py index a273190d7aa..f8c10af7423 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/mxnet_launcher.py @@ -57,46 +57,49 @@ def parameters(cls): def __init__(self, config_entry: dict, *args, **kwargs): super().__init__(config_entry, *args, **kwargs) + self._delayed_model_loading = kwargs.get('delayed_model_loading', False) - mxnet_launcher_config = MxNetLauncherConfigValidator('MxNet_Launcher', fields=self.parameters()) - mxnet_launcher_config.validate(self.config) - - # Get model name, prefix, epoch - self.model = self.config['model'] - model_path, model_file = self.model.parent, self.model.name - model_name = model_file.rsplit('.', 1)[0] - model_prefix, model_epoch = model_name.rsplit('-', 1) - - # Get device and set device context - match = re.match(DEVICE_REGEX, self.config['device'].lower()) - if match.group('device') == 'gpu': - identifier = match.group('identifier') - if identifier is None: - identifier = 0 - device_context = mxnet.gpu(int(identifier)) - else: - device_context = mxnet.cpu() - - # Get batch from config or 1 - self._batch = self.config.get('batch', 1) - - # Get input shapes - input_shapes = [] - - for input_config in self.config['inputs']: - input_shape = input_config['shape'] - input_shape = string_to_tuple(input_shape, casting_type=int) - input_shapes.append((input_config['name'], (self._batch, *input_shape))) - - # Load checkpoints - sym, arg_params, aux_params = mxnet.model.load_checkpoint( - model_path / model_prefix, int(model_epoch) + mxnet_launcher_config = MxNetLauncherConfigValidator( + 'MxNet_Launcher', fields=self.parameters(), delayed_model_loading=self._delayed_model_loading ) - self._inputs = OrderedDict(input_shapes) - # Create a module - self.module = mxnet.mod.Module(symbol=sym, context=device_context, label_names=None) - self.module.bind(for_training=False, data_shapes=input_shapes) - self.module.set_params(arg_params, aux_params, allow_missing=True) + mxnet_launcher_config.validate(self.config) + if not self._delayed_model_loading: + # Get model name, prefix, epoch + self.model = self.config['model'] + model_path, model_file = self.model.parent, self.model.name + model_name = model_file.rsplit('.', 1)[0] + model_prefix, model_epoch = model_name.rsplit('-', 1) + + # Get device and set device context + match = re.match(DEVICE_REGEX, self.config['device'].lower()) + if match.group('device') == 'gpu': + identifier = match.group('identifier') + if identifier is None: + identifier = 0 + device_context = mxnet.gpu(int(identifier)) + else: + device_context = mxnet.cpu() + + # Get batch from config or 1 + self._batch = self.config.get('batch', 1) + + # Get input shapes + input_shapes = [] + + for input_config in self.config['inputs']: + input_shape = input_config['shape'] + input_shape = string_to_tuple(input_shape, casting_type=int) + input_shapes.append((input_config['name'], (self._batch, *input_shape))) + + # Load checkpoints + sym, arg_params, aux_params = mxnet.model.load_checkpoint( + model_path / model_prefix, int(model_epoch) + ) + self._inputs = OrderedDict(input_shapes) + # Create a module + self.module = mxnet.mod.Module(symbol=sym, context=device_context, label_names=None) + self.module.bind(for_training=False, data_shapes=input_shapes) + self.module.set_params(arg_params, aux_params, allow_missing=True) @property def 
batch(self): @@ -130,8 +133,9 @@ def predict(self, inputs, metadata=None, **kwargs): infer_res[layer.replace('_output', '')] = out.asnumpy() results.append(infer_res) - for meta_ in metadata: - meta_['input_shape'] = self.inputs_info_for_meta() + if metadata is not None: + for meta_ in metadata: + meta_['input_shape'] = self.inputs_info_for_meta() return results diff --git a/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py index 84ef7629962..98222fc3c42 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/onnx_launcher.py @@ -28,17 +28,17 @@ class ONNXLauncher(Launcher): def __init__(self, config_entry: dict, *args, **kwargs): super().__init__(config_entry, *args, **kwargs) + self._delayed_model_loading = kwargs.get('delayed_model_loading', False) - onnx_launcher_config = LauncherConfigValidator('ONNX_Launcher', fields=self.parameters()) + onnx_launcher_config = LauncherConfigValidator( + 'ONNX_Launcher', fields=self.parameters(), delayed_model_loading=self._delayed_model_loading) onnx_launcher_config.validate(self.config) - - self.model = str(self.get_value_from_config('model')) - - device = re.match(DEVICE_REGEX, self.get_value_from_config('device').lower()).group('device') - beckend_rep = backend.prepare(model=self.model, device=device.upper()) - self._inference_session = beckend_rep._session # pylint: disable=W0212 - outputs = self._inference_session.get_outputs() - self.output_names = [output.name for output in outputs] + self.device = re.match(DEVICE_REGEX, self.get_value_from_config('device').lower()).group('device') + if not self._delayed_model_loading: + self.model = self.get_value_from_config('model') + self._inference_session = self.create_inference_session(self.model) + outputs = self._inference_session.get_outputs() + self.output_names = [output.name for output in outputs] @classmethod def parameters(cls): @@ -63,13 +63,18 @@ def output_blob(self): def batch(self): return 1 + def create_inference_session(self, model): + beckend_rep = backend.prepare(model=str(model), device=self.device.upper()) + return beckend_rep._session # pylint: disable=W0212 + def predict(self, inputs, metadata=None, **kwargs): results = [] for infer_input in inputs: prediction_list = self._inference_session.run(self.output_names, infer_input) results.append(dict(zip(self.output_names, prediction_list))) - for meta_ in metadata: - meta_['input_shape'] = self.inputs_info_for_meta() + if metadata is not None: + for meta_ in metadata: + meta_['input_shape'] = self.inputs_info_for_meta() return results diff --git a/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py index ce803bd4dff..2655d67d9c6 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/opencv_launcher.py @@ -30,11 +30,13 @@ class OpenCVLauncherConfigValidator(LauncherConfigValidator): def validate(self, entry, field_uri=None): + self.fields['inputs'].optional = self.delayed_model_loading super().validate(entry, field_uri) - inputs = entry.get('inputs') - for input_layer in inputs: - if 'shape' not in input_layer: - raise ConfigError('input value should have shape field') + if not self.delayed_model_loading: + inputs = entry.get('inputs') + for input_layer in inputs: + if 'shape' not in input_layer: + raise 
ConfigError('input value should have shape field') class OpenCVLauncher(Launcher): @@ -75,22 +77,16 @@ def parameters(cls): def __init__(self, config_entry: dict, *args, **kwargs): super().__init__(config_entry, *args, **kwargs) + self._delayed_model_loading = kwargs.get('delayed_model_loading', False) - opencv_launcher_config = OpenCVLauncherConfigValidator('OpenCV_Launcher', fields=self.parameters()) + opencv_launcher_config = OpenCVLauncherConfigValidator( + 'OpenCV_Launcher', fields=self.parameters(), delayed_model_loading=self._delayed_model_loading + ) opencv_launcher_config.validate(self.config) - - self.model = str(self.get_value_from_config('model')) - self.weights = str(self.get_value_from_config('weights')) - - self.network = cv2.dnn.readNet(self.model, self.weights) - match = re.match(BACKEND_REGEX, self.get_value_from_config('backend').lower()) selected_backend = match.group('backend') print_info('backend: {}'.format(selected_backend)) - backend = OpenCVLauncher.OPENCV_BACKENDS.get(selected_backend) - - self.network.setPreferableBackend(backend) - + self.backend = OpenCVLauncher.OPENCV_BACKENDS.get(selected_backend) match = re.match(DEVICE_REGEX, self.get_value_from_config('device').lower()) selected_device = match.group('device') @@ -99,21 +95,18 @@ def __init__(self, config_entry: dict, *args, **kwargs): if ('FP16' in tags) and (selected_device == 'gpu'): selected_device = 'gpu_fp16' - target = OpenCVLauncher.TARGET_DEVICES.get(selected_device) + self.target = OpenCVLauncher.TARGET_DEVICES.get(selected_device) - if target is None: + if self.target is None: raise ConfigError('{} is not supported device'.format(selected_device)) - self.network.setPreferableTarget(target) - - inputs = self.config['inputs'] - - def parse_shape_value(shape): - return tuple([1, *[int(elem) for elem in get_or_parse_value(shape, ())]]) - - self._inputs_shapes = OrderedDict({elem.get('name'): parse_shape_value(elem.get('shape')) for elem in inputs}) - self.network.setInputsNames(list(self._inputs_shapes.keys())) - self.output_names = self.network.getUnconnectedOutLayersNames() + if not self._delayed_model_loading: + self.model = self.get_value_from_config('model') + self.weights = self.get_value_from_config('weights') + self.network = self.create_network(self.model, self.weights) + self._inputs_shapes = self.get_inputs_from_config(self.config) + self.network.setInputsNames(list(self._inputs_shapes.keys())) + self.output_names = self.network.getUnconnectedOutLayersNames() @property def inputs(self): @@ -156,6 +149,24 @@ def predict(self, inputs, metadata=None, **kwargs): def predict_async(self, *args, **kwargs): raise ValueError('OpenCV Launcher does not support async mode yet') + def create_network(self, model, weights): + network = cv2.dnn.readNet(str(model), str(weights)) + network.setPreferableBackend(self.backend) + network.setPreferableTarget(self.target) + + return network + + @staticmethod + def get_inputs_from_config(config): + inputs = config.get('inputs') + if not inputs: + raise ConfigError('inputs should be provided in config') + + def parse_shape_value(shape): + return tuple([1, *[int(elem) for elem in get_or_parse_value(shape, ())]]) + + return OrderedDict([(elem.get('name'), parse_shape_value(elem.get('shape'))) for elem in inputs]) + def release(self): """ Releases launcher. 
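The OpenCV launcher now exposes `create_network` and `get_inputs_from_config` so that model loading can happen outside `__init__` when `delayed_model_loading` is set. A rough usage sketch with a made-up `network_info` dictionary (file names and shapes are placeholders):

```python
# Sketch: reusing the refactored OpenCV launcher helpers after delayed construction.
network_info = {
    'model': 'encoder.xml',      # placeholder paths
    'weights': 'encoder.bin',
    'inputs': [{'name': 'data', 'shape': '3,224,224'}],
}
network = launcher.create_network(network_info['model'], network_info.get('weights', ''))
input_shapes = launcher.get_inputs_from_config(network_info)
network.setInputsNames(list(input_shapes.keys()))
output_names = network.getUnconnectedOutLayersNames()
```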
diff --git a/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py index 1c0a016ca0a..1355b1765f7 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/tf_launcher.py @@ -46,35 +46,40 @@ def parameters(cls): def __init__(self, config_entry, *args, **kwargs): super().__init__(config_entry, *args, **kwargs) self.default_layout = 'NHWC' + self._delayed_model_loading = kwargs.get('delayed_model_loading', False) - tf_launcher_config = ConfigValidator('TF_Launcher', fields=self.parameters()) + tf_launcher_config = ConfigValidator( + 'TF_Launcher', fields=self.parameters(), delayed_model_loading=self._delayed_model_loading + ) tf_launcher_config.validate(self.config) - if not contains_any(self.config, ['model', 'saved_model_dir']): - raise ConfigError('model or saved model directory should be provided') - - if contains_all(self.config, ['model', 'saved_model']): - raise ConfigError('only one option: model or saved_model_dir should be provided') - self._config_outputs = self.get_value_from_config('output_names') - - if 'model' in self.config: - self._graph = self._load_graph(str(self.get_value_from_config('model'))) - else: - self._graph = self._load_graph(str(self.get_value_from_config('saved_model_dir')), True) - - self._outputs_names = self._get_outputs_names(self._graph, self._config_outputs) - - self._outputs_tensors = [] - self.node_pattern = 'import/{}:0' - for output in self._outputs_names: - try: - tensor = self._graph.get_tensor_by_name('import/{}:0'.format(output)) - except KeyError: + + if not self._delayed_model_loading: + if not contains_any(self.config, ['model', 'saved_model_dir']): + raise ConfigError('model or saved model directory should be provided') + + if contains_all(self.config, ['model', 'saved_model']): + raise ConfigError('only one option: model or saved_model_dir should be provided') + + self._config_outputs = self.get_value_from_config('output_names') + if 'model' in self.config: + self._graph = self._load_graph(str(self.get_value_from_config('model'))) + else: + self._graph = self._load_graph(str(self.get_value_from_config('saved_model_dir')), True) + + self._outputs_names = self._get_outputs_names(self._graph, self._config_outputs) + + self._outputs_tensors = [] + self.node_pattern = 'import/{}:0' + for output in self._outputs_names: try: - tensor = self._graph.get_tensor_by_name('{}:0'.format(output)) - self.node_pattern = '{}:0' + tensor = self._graph.get_tensor_by_name('import/{}:0'.format(output)) except KeyError: - raise ConfigError('model graph does not contains output {}'.format(output)) - self._outputs_tensors.append(tensor) + try: + tensor = self._graph.get_tensor_by_name('{}:0'.format(output)) + self.node_pattern = '{}:0' + except KeyError: + raise ConfigError('model graph does not contains output {}'.format(output)) + self._outputs_tensors.append(tensor) self.device = '/{}:0'.format(self.get_value_from_config('device').lower()) diff --git a/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py index 2e821974c56..23f55773af0 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/tf_lite_launcher.py @@ -36,15 +36,18 @@ def parameters(cls): def __init__(self, config_entry, adapter, *args, **kwargs): super().__init__(config_entry, adapter, 
*args, **kwargs) self.default_layout = 'NHWC' + self._delayed_model_loading = kwargs.get('delayed_model_loading', False) - tf_launcher_config = LauncherConfigValidator('TF_Lite_Launcher', fields=self.parameters()) + tf_launcher_config = LauncherConfigValidator( + 'TF_Lite_Launcher', fields=self.parameters(), delayed_model_loading=self._delayed_model_loading + ) tf_launcher_config.validate(self.config) - - self._interpreter = tf.contrib.lite.Interpreter(model_path=str(self.config['model'])) - self._interpreter.allocate_tensors() - self._input_details = self._interpreter.get_input_details() - self._output_details = self._interpreter.get_output_details() - self._inputs = {input_layer['name']: input_layer for input_layer in self._input_details} + if not self._delayed_model_loading: + self._interpreter = tf.contrib.lite.Interpreter(model_path=str(self.config['model'])) + self._interpreter.allocate_tensors() + self._input_details = self._interpreter.get_input_details() + self._output_details = self._interpreter.get_output_details() + self._inputs = {input_layer['name']: input_layer for input_layer in self._input_details} self.device = '/{}:0'.format(self.config.get('device', 'cpu').lower()) def predict(self, inputs, metadata=None, **kwargs): @@ -64,6 +67,10 @@ def predict(self, inputs, metadata=None, **kwargs): res = {output['name']: self._interpreter.get_tensor(output['index']) for output in self._output_details} results.append(res) + if metadata is not None: + for meta_ in metadata: + meta_['input_shape'] = self.inputs_info_for_meta() + return results @property diff --git a/tools/accuracy_checker/accuracy_checker/main.py b/tools/accuracy_checker/accuracy_checker/main.py index c50fd7c6994..1c226b8bec0 100644 --- a/tools/accuracy_checker/accuracy_checker/main.py +++ b/tools/accuracy_checker/accuracy_checker/main.py @@ -22,10 +22,16 @@ from .config import ConfigReader from .logging import print_info, add_file_handler -from .evaluators import ModelEvaluator, PipeLineEvaluator, get_processing_info +from .evaluators import ModelEvaluator, PipeLineEvaluator, ModuleEvaluator from .progress_reporters import ProgressReporter from .utils import get_path, cast_to_bool +EVALUATION_MODE = { + 'models': ModelEvaluator, + 'pipelines': PipeLineEvaluator, + 'evaluations': ModuleEvaluator +} + def build_arguments_parser(): parser = ArgumentParser(description='Deep Learning accuracy validation framework', allow_abbrev=False) @@ -198,38 +204,15 @@ def main(): add_file_handler(args.log_file) config, mode = ConfigReader.merge(args) - if mode == 'models': - model_evaluation_mode(config, progress_reporter, args) - else: - pipeline_evaluation_mode(config, progress_reporter, args) - - -def model_evaluation_mode(config, progress_reporter, args): - for model in config['models']: - for launcher_config in model['launchers']: - for dataset_config in model['datasets']: - print_processing_info( - model['name'], - launcher_config['framework'], - launcher_config['device'], - launcher_config.get('tags'), - dataset_config['name'] - ) - model_evaluator = ModelEvaluator.from_configs(launcher_config, dataset_config) - progress_reporter.reset(model_evaluator.dataset.size) - model_evaluator.dataset_processor(args.stored_predictions, progress_reporter=progress_reporter) - model_evaluator.compute_metrics(ignore_results_formatting=args.ignore_result_formatting) - - model_evaluator.release() - - -def pipeline_evaluation_mode(config, progress_reporter, args): - for pipeline_config in config['pipelines']: - 
print_processing_info(*get_processing_info(pipeline_config)) - evaluator = PipeLineEvaluator.from_configs(pipeline_config['stages']) + evaluator_class = EVALUATION_MODE.get(mode) + if not evaluator_class: + raise ValueError('Unknown evaluation mode') + for config_entry in config[mode]: + evaluator = evaluator_class.from_configs(config_entry) + processing_info = evaluator.get_processing_info(config_entry) + print_processing_info(*processing_info) evaluator.process_dataset(args.stored_predictions, progress_reporter=progress_reporter) evaluator.compute_metrics(ignore_results_formatting=args.ignore_result_formatting) - evaluator.release() diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py index 13b6ab9db7f..0412de99a48 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py @@ -98,7 +98,9 @@ class MSCOCOAveragePrecision(MSCOCOBaseMetric): def update(self, annotation, prediction): per_class_matching = super().update(annotation, prediction) - return [compute_precision_recall(self.thresholds, [per_class_matching[i]])[0] for i, _ in enumerate(self.labels)] + return [ + compute_precision_recall(self.thresholds, [per_class_matching[i]])[0] for i, _ in enumerate(self.labels) + ] def evaluate(self, annotations, predictions): precision = [ @@ -114,7 +116,9 @@ class MSCOCORecall(MSCOCOBaseMetric): def update(self, annotation, prediction): per_class_matching = super().update(annotation, prediction) - return [compute_precision_recall(self.thresholds, [per_class_matching[i]])[1] for i, _ in enumerate(self.labels)] + return [ + compute_precision_recall(self.thresholds, [per_class_matching[i]])[1] for i, _ in enumerate(self.labels) + ] def evaluate(self, annotations, predictions): recalls = [ diff --git a/tools/accuracy_checker/accuracy_checker/utils.py b/tools/accuracy_checker/accuracy_checker/utils.py index e884b835f27..29e9d03db8e 100644 --- a/tools/accuracy_checker/accuracy_checker/utils.py +++ b/tools/accuracy_checker/accuracy_checker/utils.py @@ -274,6 +274,7 @@ def is_empty(string): def read_xml(file: Union[str, Path], *args, **kwargs): return et.parse(str(get_path(file)), *args, **kwargs).getroot() + def read_json(file: Union[str, Path], *args, **kwargs): with get_path(file).open() as content: return json.load(content, *args, **kwargs)
diff --git a/tools/accuracy_checker/custom_evaluators/README.md b/tools/accuracy_checker/custom_evaluators/README.md
new file mode 100644
index 00000000000..f6fd65cf546
--- /dev/null
+++ b/tools/accuracy_checker/custom_evaluators/README.md
@@ -0,0 +1,22 @@
+# Custom Evaluators for Accuracy Checker
+The standard Accuracy Checker validation pipeline is: Annotation Reading -> Data Reading -> Preprocessing -> Inference -> Postprocessing -> Metrics.
+In some cases this pipeline is unsuitable, for example when you need to validate a sequence of models. In that case you can customize the validation pipeline with your own evaluator.
+The suggested approach is to write a Python module that describes the validation procedure.
+
+## Implementation
+Adding a new evaluator is similar to adding any other entity to the tool.
+A custom evaluator is a class that inherits from BaseEvaluator and overrides all of its abstract methods.
+
+The most important methods to override:
+
+* `from_configs` - creates a new instance from a configuration dictionary.
+* `process_dataset` - defines the validation cycle over all data batches in the dataset.
+* `compute_metrics` - evaluates metrics after dataset processing.
+* `reset` - resets the evaluation progress.
+
+## Configuration
+Each custom evaluation config should start with the keyword `evaluations` and contain:
+ * `name` - the model name.
+ * `module` - the evaluation module to load.
+Before running, make sure that the prefix to the module is added to your Python path, or specify it with the `python_path` parameter in the config.
+Optionally, you can provide a `module_config` section with the configuration for the custom evaluator (depending on the implementation, it can contain evaluator-specific parameters); a hypothetical configuration sketch is shown below, after the new file headers.
diff --git a/tools/accuracy_checker/custom_evaluators/__init__.py b/tools/accuracy_checker/custom_evaluators/__init__.py
new file mode 100644
index 00000000000..7c9fcf6dc14
--- /dev/null
+++ b/tools/accuracy_checker/custom_evaluators/__init__.py
@@ -0,0 +1,15 @@
+"""
+Copyright (c) 2019 Intel Corporation
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
diff --git a/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py b/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py
new file mode 100644
index 00000000000..b1906f131b0
--- /dev/null
+++ b/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py
@@ -0,0 +1,370 @@
+"""
+Copyright (c) 2019 Intel Corporation
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
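The Configuration section above maps onto a plain dictionary entry under the top-level `evaluations` key. The sketch below is hypothetical (module path, launcher, and dataset values are placeholders chosen to match the evaluator added in this patch) and only illustrates which keys `ModuleEvaluator.from_configs` and `get_processing_info` read:

```python
# Hypothetical 'evaluations' entry; all concrete values are placeholders.
evaluation_entry = {
    'name': 'sequential-action-recognition',
    'module': 'custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator',
    'python_path': '<path prefix that makes custom_evaluators importable>',  # optional
    'module_config': {  # passed unchanged to the module's from_configs
        'network_info': {'encoder': {'model': 'encoder.xml', 'weights': 'encoder.bin'},
                         'decoder': {'model': 'decoder.xml', 'weights': 'decoder.bin'}},
        'launchers': [{'framework': 'dlsdk', 'device': 'CPU'}],
        'datasets': [{'name': 'placeholder_dataset', 'data_source': 'data',
                      'annotation': 'placeholder.pickle', 'metrics': [{'type': 'accuracy'}]}],
    },
}
```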
+""" +from pathlib import Path +import pickle +import numpy as np +from accuracy_checker.evaluators.base_evaluator import BaseEvaluator +from accuracy_checker.dataset import Dataset +from accuracy_checker.adapters import create_adapter +from accuracy_checker.data_readers import BaseReader +from accuracy_checker.config import ConfigError +from accuracy_checker.preprocessor import PreprocessingExecutor +from accuracy_checker.metrics import MetricsExecutor +from accuracy_checker.launcher import create_launcher +from accuracy_checker.utils import contains_all, extract_image_representations, read_pickle + + +class SequentialActionRecognitionEvaluator(BaseEvaluator): + def __init__(self, dataset, reader, preprocessing, metric_executor, launcher, model): + self.dataset = dataset + self.preprocessing_executor = preprocessing + self.metric_executor = metric_executor + self.launcher = launcher + self.model = model + self.reader = reader + self._metrics_results = [] + + @classmethod + def from_configs(cls, config): + dataset_config = config['datasets'][0] + dataset = Dataset(dataset_config) + data_reader_config = dataset_config.get('reader', 'opencv_imread') + data_source = dataset_config['data_source'] + if isinstance(data_reader_config, str): + reader = BaseReader.provide(data_reader_config, data_source) + elif isinstance(data_reader_config, dict): + reader = BaseReader.provide(data_reader_config['type'], data_source, data_reader_config) + else: + raise ConfigError('reader should be dict or string') + preprocessing = PreprocessingExecutor(dataset_config.get('preprocessing', []), dataset.name) + metrics_executor = MetricsExecutor(dataset_config['metrics'], dataset) + launcher = create_launcher(config['launchers'][0], delayed_model_loading=True) + model = SequentialModel(config.get('network_info', {}), launcher) + return cls(dataset, reader, preprocessing, metrics_executor, launcher, model) + + def process_dataset(self, stored_predictions, progress_reporter, *args, ** kwargs): + self._annotations, self._predictions = ([], []) if self.metric_executor.need_store_predictions else None, None + if progress_reporter: + progress_reporter.reset(self.dataset.size) + + for batch_id, batch_annotation in enumerate(self.dataset): + batch_identifiers = [annotation.identifier for annotation in batch_annotation] + batch_input = [self.reader(identifier=identifier) for identifier in batch_identifiers] + batch_input = self.preprocessing_executor.process(batch_input, batch_annotation) + batch_input, _ = extract_image_representations(batch_input) + batch_prediction = self.model.predict(batch_identifiers, batch_input) + self.metric_executor.update_metrics_on_batch(batch_annotation, batch_prediction) + if self.metric_executor.need_store_predictions: + self._annotations.extend(batch_annotation) + self._predictions.extend(batch_prediction) + progress_reporter.update(batch_id, len(batch_prediction)) + + if progress_reporter: + progress_reporter.finish() + + if self.model.store_encoder_predictions: + self.model.save_encoder_predictions() + + def compute_metrics(self, print_results=True, output_callback=None, ignore_results_formatting=False): + if self._metrics_results: + del self._metrics_results + self._metrics_results = [] + + for result_presenter, evaluated_metric in self.metric_executor.iterate_metrics( + self._annotations, self._predictions): + self._metrics_results.append(evaluated_metric) + if print_results: + result_presenter.write_result(evaluated_metric, output_callback, ignore_results_formatting) + + return 
self._metrics_results + + def print_metrics_results(self, output_callback=None, ignore_results_formatting=False): + if not self._metrics_results: + self.compute_metrics(True, output_callback, ignore_results_formatting) + return + result_presenters = self.metric_executor.get_metric_presenters() + for presenter, metric_result in zip(result_presenters, self._metrics_results): + presenter.write_results(metric_result, output_callback, ignore_results_formatting) + + def release(self): + self.model.release() + self.launcher.release() + + def reset(self): + self.metric_executor.reset() + self.model.reset() + + def get_processing_info(self, config): + module_specific_params = config.get('module_config') + model_name = config['name'] + dataset_config = module_specific_params['datasets'][0] + launcher_config = module_specific_params['launchers'][0] + return ( + model_name,launcher_config['framework'], launcher_config['device'], launcher_config.get('tags'), + dataset_config['name'] + ) + + +class BaseModel: + def __init__(self, network_info, launcher): + self.network_info = network_info + + def predict(self, idenitifers, input_data): + raise NotImplementedError + + def release(self): + pass + + +def create_encoder(model_config, launcher): + launcher_model_mapping = { + 'dlsdk': EncoderModelDLSDKL, + 'onnx_runtime': EncoderONNXModel, + 'opencv': EncoderOpenCVModel, + 'dummy': DummyEncoder + } + framework = launcher.config['framework'] + if 'predictions' in model_config and not model_config.get('store_predictions', False): + framework = 'dummy' + model_class = launcher_model_mapping.get(framework) + if not model_class: + raise ValueError('model for framework {} is not supported'.format(framework)) + return model_class(model_config, launcher) + + +def create_decoder(model_config, launcher): + launcher_model_mapping = { + 'dlsdk': DecoderModelDLSDKL, + 'onnx_runtime': DecoderONNXModel, + 'opencv': DecoderOpenCVModel, + } + framework = launcher.config['framework'] + model_class = launcher_model_mapping.get(framework) + if not model_class: + raise ValueError('model for framework {] is not supported'.format(framework)) + return model_class(model_config, launcher) + + +class SequentialModel(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + if not contains_all(network_info, ['encoder', 'decoder']): + raise ConfigError('network_info should contains encoder and decoder fields') + self.num_processing_frames = network_info['decoder'].get('num_processing_frames', 16) + self.processing_frames_buffer = [] + self.encoder = create_encoder(network_info['encoder'], launcher) + self.decoder = create_decoder(network_info['decoder'], launcher) + self.store_encoder_predictions = network_info['encoder'].get('store_predictions', False) + self._encoder_predictions = [] if self.store_encoder_predictions else None + + def predict(self, idenitifiers, input_data): + predictions = [] + if len(np.shape(input_data)) == 5: + input_data = input_data[0] + for data in input_data: + encoder_prediction = self.encoder.predict(idenitifiers, [data]) + self.processing_frames_buffer.append(encoder_prediction) + if self.store_encoder_predictions: + self._encoder_predictions.append(encoder_prediction) + if len(self.processing_frames_buffer) == self.num_processing_frames: + predictions.append(self.decoder.predict(idenitifiers, [self.processing_frames_buffer])) + self.processing_frames_buffer = [] + + return predictions + + def reset(self): + self.processing_frames_buffer = [] + if 
self._encoder_predictions is not None: + self._encoder_predictions = [] + + def release(self): + self.encoder.release() + self.decoder.release() + + def save_encoder_predictions(self): + if self._encoder_predictions is not None: + prediction_file = self.network_info['encoder'].get('predictions', Path('encoder_predictions.pickle')) + with prediction_file.open('wb') as file: + for representation in self._encoder_predictions: + pickle.dump(representation, file) + + +class EncoderModelDLSDKL(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + if 'onnx_model' in network_info: + network_info.update(launcher.config) + model_xml, model_bin = launcher.convert_model(network_info) + else: + model_xml = str(network_info['model']) + model_bin = str(network_info['weights']) + self.network = launcher.create_ie_network(model_xml, model_bin) + if not hasattr(launcher, 'plugin'): + launcher.create_ie_plugin() + self.exec_network = launcher.plugin.load(self.network) + self.input_blob = next(iter(self.network.inputs)) + self.output_blob = next(iter(self.network.outputs)) + + def predict(self, identifiers, input_data): + return self.exec_network.infer(self.fit_to_input(input_data))[self.output_blob] + + def release(self): + del self.exec_network + + def fit_to_input(self, input_data): + input_data = np.transpose(input_data, (0, 3, 1, 2)) + input_data = input_data.reshape(self.network.inputs[self.input_blob].shape) + + return {self.input_blob: input_data} + + +class DecoderModelDLSDKL(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + if 'onnx_model' in network_info: + network_info.update(launcher.config) + model_xml, model_bin = launcher.convert_model(network_info) + else: + model_xml = str(network_info['model']) + model_bin = str(network_info['weights']) + + self.network = launcher.create_ie_network(model_xml, model_bin) + self.exec_network = launcher.plugin.load(self.network) + self.input_blob = next(iter(self.network.inputs)) + self.output_blob = next(iter(self.network.outputs)) + self.adapter = create_adapter('classification') + self.adapter.output_blob = self.output_blob + self.num_processing_frames = network_info.get('num_processing_frames', 16) + + def predict(self, identifiers, input_data): + result = self.exec_network.infer(self.fit_to_input(input_data)) + result = self.adapter.process([result], identifiers, [{}]) + + return result + + def release(self): + del self.exec_network + + def fit_to_input(self, input_data): + input_data = np.reshape(input_data, self.network.inputs[self.input_blob].shape) + return {self.input_blob: input_data} + + +class EncoderONNXModel(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + self.inference_session = launcher.create_inference_session(network_info['model']) + self.input_blob = next(iter(self.inference_session.get_inputs())) + self.output_blob = next(iter(self.inference_session.get_outputs())) + + def predict(self, identifiers, input_data): + return self.inference_session.run((self.output_blob.name, ), self.fit_to_input(input_data))[0] + + def fit_to_input(self, input_data): + input_data = np.transpose(input_data, (0, 3, 1, 2)) + input_data = input_data.reshape(self.input_blob.shape) + + return {self.input_blob.name: input_data} + + def release(self): + del self.inference_session + + +class DecoderONNXModel(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + 
self.inference_session = launcher.create_inference_session(network_info['model']) + self.input_blob = next(iter(self.inference_session.get_inputs())) + self.output_blob = next(iter(self.inference_session.get_outputs())) + self.adapter = create_adapter('classification') + self.adapter.output_blob = self.output_blob.name + self.num_processing_frames = network_info.get('num_processing_frames', 16) + + def predict(self, identifiers, input_data): + result = self.inference_session.run((self.output_blob.name,), self.fit_to_input(input_data)) + return self.adapter.process([{self.output_blob.name: result[0]}], identifiers, [{}]) + + def fit_to_input(self, input_data): + input_data = np.reshape(input_data, self.input_blob.shape) + return {self.input_blob.name: input_data} + + def release(self): + del self.inference_session + + +class DummyEncoder(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + if 'predictions' not in network_info: + raise ConfigError('predictions_file is not found') + self._predictions = read_pickle(network_info['predictions']) + self.iterator = 0 + + def predict(self, idenitifers, input_data): + result = self._predictions[self.iterator] + self.iterator += 1 + return result + + +class EncoderOpenCVModel(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + self.network = launcher.create_network(network_info['model'], network_info.get('weights', '')) + network_info.update(launcher.config) + input_shapes = launcher.get_inputs_from_config(network_info) + self.input_blob = next(iter(input_shapes)) + self.input_shape = input_shapes[self.input_blob] + self.network.setInputsNames(list(self.input_blob)) + self.output_blob = next(iter(self.network.getUnconnectedOutLayersNames())) + + def predict(self, identifiers, input_data): + self.network.setInput(self.fit_to_input(input_data)[self.input_blob], self.input_blob) + return self.network.forward([self.output_blob])[0] + + def fit_to_input(self, input_data): + input_data = np.transpose(input_data, (0, 3, 1, 2)) + input_data = input_data.reshape(self.input_shape) + + return {self.input_blob: input_data.astype(np.float32)} + + def release(self): + del self.network + + +class DecoderOpenCVModel(BaseModel): + def __init__(self, network_info, launcher): + super().__init__(network_info, launcher) + self.network = launcher.create_network(network_info['model'], network_info.get('weights', '')) + input_shapes = launcher.get_inputs_from_config(network_info) + self.input_blob = next(iter(input_shapes)) + self.input_shape = input_shapes[self.input_blob] + self.network.setInputsNames(list(self.input_blob)) + self.output_blob = next(iter(self.network.getUnconnectedOutLayersNames())) + self.adapter = create_adapter('classification') + self.adapter.output_blob = self.output_blob + self.num_processing_frames = network_info.get('num_processing_frames', 16) + + def predict(self, identifiers, input_data): + self.network.setInput(self.fit_to_input(input_data)[self.input_blob], self.input_blob) + result = self.network.forward([self.output_blob])[0] + return self.adapter.process([{self.output_blob.name: result}], identifiers, [{}]) + + def fit_to_input(self, input_data): + input_data = np.reshape(input_data, self.input_shape) + return {self.input_blob: input_data.astype(np.float32)} + + def release(self): + del self.network diff --git a/tools/accuracy_checker/tests/test_caffe_launcher.py b/tools/accuracy_checker/tests/test_caffe_launcher.py index 
a68b40bb71d..f8cd542164c 100644 --- a/tools/accuracy_checker/tests/test_caffe_launcher.py +++ b/tools/accuracy_checker/tests/test_caffe_launcher.py @@ -49,7 +49,7 @@ def test_infer(self, data_dir, models_dir): input_blob = np.transpose([img_resized], (0, 3, 1, 2)) res = caffe_test_model.predict([{'data': input_blob.astype(np.float32)}], [{}]) - assert np.argmax(res[0]['fc3']) == 7 + assert np.argmax(res[0]['fc3']) == 6 def test_caffe_launcher_provide_input_shape_to_adapter(self, mocker, models_dir): mocker.patch('caffe.Net.forward', return_value={'fc3': 0}) @@ -71,4 +71,3 @@ def test_missed_weights_in_create_caffe_launcher_raises_config_error_exception() with pytest.raises(ConfigError): create_launcher(launcher) - diff --git a/tools/accuracy_checker/tests/test_config_reader.py b/tools/accuracy_checker/tests/test_config_reader.py index 5bbc72cd8c8..f61b4603d48 100644 --- a/tools/accuracy_checker/tests/test_config_reader.py +++ b/tools/accuracy_checker/tests/test_config_reader.py @@ -142,7 +142,7 @@ def test_missed_models_in_local_config_raises_value_error_exception(self, mocker ConfigReader.merge(self.arguments) error_message = str(exception).split(sep=': ')[-1] - assert error_message == 'Missed "{}" in local config'.format('models') + assert error_message == 'Accuracy Checker not_models mode is not supported. Please select between evaluations, models, pipelines' def test_empty_models_in_local_config_raises_value_error_exception(self, mocker): mocker.patch(self.module + '._read_configs', return_value=( @@ -794,7 +794,10 @@ def test_both_launchers_are_not_filtered_by_the_same_tag(self, mocker): config = ConfigReader.merge(args)[0] - launchers = config['models'][0]['launchers'] + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert launchers == config_launchers def test_both_launchers_are_filtered_by_another_tag(self, mocker): @@ -966,7 +969,10 @@ def test_both_launchers_with_different_tags_are_not_filtered_by_the_same_tags(se config = ConfigReader.merge(args)[0] - launchers = config['models'][0]['launchers'] + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert launchers == config_launchers def test_launcher_is_not_filtered_by_the_same_framework(self, mocker): @@ -1021,7 +1027,10 @@ def test_both_launchers_are_not_filtered_by_the_same_framework(self, mocker): config = ConfigReader.merge(args)[0] - launchers = config['models'][0]['launchers'] + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert launchers == config_launchers def test_launcher_is_filtered_by_another_framework(self, mocker): @@ -1164,7 +1173,10 @@ def test_both_launchers_are_not_filtered_by_the_same_device(self, mocker): config = ConfigReader.merge(args)[0] - launchers = config['models'][0]['launchers'] + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert launchers == config_launchers def 
test_launcher_is_filtered_by_another_device(self, mocker): @@ -1289,7 +1301,10 @@ def test_only_appropriate_launcher_is_filtered_by_user_input_devices(self, mocke config = ConfigReader.merge(args)[0] - launchers = config['models'][0]['launchers'] + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert launchers == [config_launchers[0], config_launchers[2]] def test_both_launchers_are_filtered_by_other_devices(self, mocker): @@ -1349,7 +1364,10 @@ def test_both_launchers_are_not_filtered_by_same_devices(self, mocker): config = ConfigReader.merge(args)[0] - launchers = config['models'][0]['launchers'] + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert launchers == config_launchers def test_launcher_is_not_filtered_by_device_with_tail(self, mocker): @@ -1461,8 +1479,10 @@ def test_replace_empty_device_by_several_targets_in_models_mode(self, mocker): args = copy.deepcopy(self.arguments) args.target_devices = ['CPU', 'GPU'] config, _ = ConfigReader.merge(args) - launchers = config['models'][0]['launchers'] - assert len(launchers) == 2 + assert len(config['models']) == 2 + assert len(config['models'][0]['launchers']) == 1 + assert len(config['models'][1]['launchers']) == 1 + launchers = [config['models'][0]['launchers'][0], config['models'][1]['launchers'][0]] assert 'device' in launchers[0] assert 'device' in launchers[1] assert launchers[0]['device'].upper() == 'CPU' diff --git a/tools/accuracy_checker/tests/test_dlsdk_launcher.py b/tools/accuracy_checker/tests/test_dlsdk_launcher.py index 899973030ee..745d2d89a89 100644 --- a/tools/accuracy_checker/tests/test_dlsdk_launcher.py +++ b/tools/accuracy_checker/tests/test_dlsdk_launcher.py @@ -34,6 +34,7 @@ from accuracy_checker.data_readers import DataRepresentation from accuracy_checker.utils import contains_all + def check_no_gpu(): try: import openvino.inference_engine as ie @@ -43,6 +44,7 @@ def check_no_gpu(): except (ImportError, RuntimeError): return True + @pytest.fixture() def mock_inference_engine(mocker): try: @@ -107,6 +109,7 @@ def test_dlsd_launcher_set_batch_size(self, models_dir): dlsdk_test_model = get_dlsdk_test_model(models_dir, {'batch': 2}) assert dlsdk_test_model.batch == 2 + @pytest.mark.skipif(check_no_gpu(), reason="GPU is not installed") @pytest.mark.usefixtures('mock_path_exists') class TestDLSDKLauncherAffinity: @@ -117,7 +120,7 @@ def test_dlsdk_launcher_valid_affinity_map(self, mocker, models_dir): 'accuracy_checker.launcher.dlsdk_launcher.read_yaml', return_value=affinity_map ) - dlsdk_test_model = get_dlsdk_test_model(models_dir, {'device' : 'HETERO:CPU,GPU', 'affinity_map': './affinity_map.yml'}) + dlsdk_test_model = get_dlsdk_test_model(models_dir, {'device': 'HETERO:CPU,GPU', 'affinity_map': './affinity_map.yml'}) layers = dlsdk_test_model.network.layers for key, value in affinity_map.items(): assert layers[key].affinity == value @@ -130,17 +133,17 @@ def test_dlsdk_launcher_affinity_map_invalid_device(self, mocker, models_dir): ) with pytest.raises(ConfigError): - get_dlsdk_test_model(models_dir, {'device' : 'HETERO:CPU,CPU', 'affinity_map' : './affinity_map.yml'}) + get_dlsdk_test_model(models_dir, {'device': 'HETERO:CPU,CPU', 'affinity_map': 
'./affinity_map.yml'}) def test_dlsdk_launcher_affinity_map_invalid_layer(self, mocker, models_dir): - affinity_map = {'none-existing-layer' : 'CPU'} + affinity_map = {'none-existing-layer': 'CPU'} mocker.patch( 'accuracy_checker.launcher.dlsdk_launcher.read_yaml', return_value=affinity_map ) with pytest.raises(ConfigError): - get_dlsdk_test_model(models_dir, {'device' : 'HETERO:CPU,CPU', 'affinity_map' : './affinity_map.yml'}) + get_dlsdk_test_model(models_dir, {'device': 'HETERO:CPU,CPU', 'affinity_map': './affinity_map.yml'}) @pytest.mark.usefixtures('mock_path_exists', 'mock_inference_engine', 'mock_inputs') From 758fd825c108c667184a46a3647946343d166b21 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Thu, 10 Oct 2019 11:07:54 +0300 Subject: [PATCH 152/927] Script updated for human-pose-estimation-3d support --- tools/downloader/pytorch_to_onnx.py | 24 ++++++++++++++++-------- 1 file changed, 16 insertions(+), 8 deletions(-) diff --git a/tools/downloader/pytorch_to_onnx.py b/tools/downloader/pytorch_to_onnx.py index 4371f2fec2d..421bb2916e9 100644 --- a/tools/downloader/pytorch_to_onnx.py +++ b/tools/downloader/pytorch_to_onnx.py @@ -22,6 +22,12 @@ def positive_int_arg(values): return result +def model_parameters(parameters): + if not parameters: + return dict() + return dict((param, eval(value)) for param, value in (element.split('=') for element in parameters.split(','))) + + def parse_args(): """Parse input arguments""" @@ -42,15 +48,16 @@ def parse_args(): parser.add_argument('--import-module', type=str, default='', help='Name of module, which contains model\'s constructor.' 'Requires if model not from Torchvision') - parser.add_argument('--input-names', type=str, nargs='+', + parser.add_argument('--input-names', type=str, metavar='L[,L...]', help='Space separated names of the input layers') - parser.add_argument('--output-names', type=str, nargs='+', + parser.add_argument('--output-names', type=str, metavar='L[,L...]', help='Space separated names of the output layers') - + parser.add_argument('--model-params', type=model_parameters, default='', + help='Pairs "name"="value" of model constructor comma-separeted parameters') return parser.parse_args() -def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None): +def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None, model_params=None): """Import model and load pretrained weights""" if from_torchvision: @@ -72,7 +79,7 @@ def load_model(model_name, weights, from_torchvision=True, model_path=None, modu try: module = __import__(module_name) creator = getattr(module, model_name) - model = creator() + model = creator(**model_params) except ImportError as err: print('Module {} in {} doesn\'t exist. 
Check import path and name'.format(model_name, model_path)) sys.exit(err) @@ -96,8 +103,8 @@ def convert_to_onnx(model, input_shape, output_file, input_names, output_names): model.eval() dummy_input = torch.randn(input_shape) model(dummy_input) - torch.onnx.export(model, dummy_input, str(output_file), - verbose=False, input_names=input_names, output_names=output_names) + torch.onnx.export(model, dummy_input, str(output_file), verbose=False, + input_names=input_names.split(','), output_names=output_names.split(',')) # Model check after conversion model = onnx.load(str(output_file)) @@ -110,7 +117,8 @@ def convert_to_onnx(model, input_shape, output_file, input_names, output_names): def main(): args = parse_args() - model = load_model(args.model_name, args.weights, args.from_torchvision, args.model_path, args.import_module) + model = load_model(args.model_name, args.weights, args.from_torchvision, + args.model_path, args.import_module, args.model_params) convert_to_onnx(model, args.input_shape, args.output_file, args.input_names, args.output_names) From 3171f6198fde906f1c01260c38652a0bb76276d9 Mon Sep 17 00:00:00 2001 From: ezamalie Date: Wed, 16 Oct 2019 16:41:34 +0300 Subject: [PATCH 153/927] Deleted default input --- tools/downloader/pytorch_to_onnx.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/downloader/pytorch_to_onnx.py b/tools/downloader/pytorch_to_onnx.py index 421bb2916e9..5c766f5050a 100644 --- a/tools/downloader/pytorch_to_onnx.py +++ b/tools/downloader/pytorch_to_onnx.py @@ -57,7 +57,7 @@ def parse_args(): return parser.parse_args() -def load_model(model_name, weights, from_torchvision=True, model_path=None, module_name=None, model_params=None): +def load_model(model_name, weights, from_torchvision, model_path, module_name, model_params): """Import model and load pretrained weights""" if from_torchvision: From 013f25549d32a68570c94acc257b0d32ef4b789e Mon Sep 17 00:00:00 2001 From: ezamalie Date: Wed, 16 Oct 2019 16:42:34 +0300 Subject: [PATCH 154/927] Model-params evaluation is safety now --- tools/downloader/pytorch_to_onnx.py | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/tools/downloader/pytorch_to_onnx.py b/tools/downloader/pytorch_to_onnx.py index 5c766f5050a..178e1ee52f8 100644 --- a/tools/downloader/pytorch_to_onnx.py +++ b/tools/downloader/pytorch_to_onnx.py @@ -25,7 +25,16 @@ def positive_int_arg(values): def model_parameters(parameters): if not parameters: return dict() - return dict((param, eval(value)) for param, value in (element.split('=') for element in parameters.split(','))) + params = {} + for element in parameters.split(','): + param, value = element.split('=') + try: + value = eval(value, {}, {}) + except: + pass + params[param] = value + + return params def parse_args(): From 02527208b838cabca1a128efc10497e0dd0aabca Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 17 Oct 2019 14:38:24 +0300 Subject: [PATCH 155/927] AC: fix coco_precision metric (#520) --- .../accuracy_checker/metrics/coco_metrics.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py index 0412de99a48..02e2b6337c0 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_metrics.py @@ -23,7 +23,7 @@ PoseEstimationPrediction, PoseEstimationAnnotation ) -from ..utils import get_or_parse_value +from ..utils 
import get_or_parse_value, finalize_metric_result from .overlap import Overlap from .metric import PerImageEvaluationMetric @@ -107,6 +107,7 @@ def evaluate(self, annotations, predictions): compute_precision_recall(self.thresholds, self.matching_results[i])[0] for i, _ in enumerate(self.labels) ] + precision, self.meta['names'] = finalize_metric_result(precision, self.meta['names']) return precision @@ -125,6 +126,7 @@ def evaluate(self, annotations, predictions): compute_precision_recall(self.thresholds, self.matching_results[i])[1] for i, _ in enumerate(self.labels) ] + recalls, self.meta['names'] = finalize_metric_result(recalls, self.meta['names']) return recalls @@ -216,6 +218,8 @@ def compute_precision_recall(thresholds, matching_results): fps = np.logical_and(np.logical_not(dtm), np.logical_not(dt_ignored)) tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float) fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float) + if npig == 0: + return np.nan, np.nan for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): tp = np.array(tp) fp = np.array(fp) @@ -223,7 +227,6 @@ def compute_precision_recall(thresholds, matching_results): rc = tp / npig pr = tp / (fp + tp + np.spacing(1)) q = np.zeros(num_rec_thresholds) - if num_detections: recall[t] = rc[-1] else: From 434e7a720a6ca41ffb0bd6dad2915d07e62da802 Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 17 Oct 2019 16:02:01 +0300 Subject: [PATCH 156/927] AC: fix subsample_size for small subsample_ratio (#521) --- .../accuracy_checker/dataset.py | 28 ++++- tools/accuracy_checker/tests/test_dataset.py | 108 ++++++++++++++++++ 2 files changed, 130 insertions(+), 6 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index 3203cb876f4..5bb21f5cc0b 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -54,6 +54,25 @@ def __init__(self, config_entry): self._load_annotation() def _load_annotation(self): + def create_subset(subsample_size, subsample_seed): + if isinstance(subsample_size, str): + if subsample_size.endswith('%'): + try: + subsample_size = float(subsample_size[:-1]) + except ValueError: + raise ConfigError('invalid value for subsample_size: {}'.format(subsample_size)) + if subsample_size <= 0: + raise ConfigError('subsample_size should be > 0') + subsample_size *= len(annotation) / 100 + subsample_size = int(subsample_size) or 1 + try: + subsample_size = int(subsample_size) + except ValueError: + raise ConfigError('invalid value for subsample_size: {}'.format(subsample_size)) + if subsample_size < 1: + raise ConfigError('subsample_size should be > 0') + return make_subset(annotation, subsample_size, subsample_seed) + annotation, meta = None, None use_converted_annotation = True if 'annotation' in self._config: @@ -69,13 +88,10 @@ def _load_annotation(self): raise ConfigError('path to converted annotation or data for conversion should be specified') subsample_size = self._config.get('subsample_size') - if subsample_size: + if subsample_size is not None: subsample_seed = self._config.get('subsample_seed', 666) - if isinstance(subsample_size, str): - if subsample_size.endswith('%'): - subsample_size = float(subsample_size[:-1]) / 100 * len(annotation) - subsample_size = int(subsample_size) - annotation = make_subset(annotation, subsample_size, subsample_seed) + + annotation = create_subset(subsample_size, subsample_seed) if self._config.get('analyze_dataset', False): analyze_dataset(annotation, meta) 
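
For reference, the subsampling rule that the `create_subset` hunk above implements can be exercised on its own. The sketch below is only an illustration of that rule, not code from the patch: `resolve_subsample_size` is a hypothetical name, and it raises `ValueError` where the accuracy checker raises `ConfigError`, so that the snippet stays self-contained.

```python
def resolve_subsample_size(subsample_size, dataset_len):
    """Mirror of create_subset above: '%'-suffixed ratios and plain integers, always keeping >= 1 sample."""
    if isinstance(subsample_size, str) and subsample_size.endswith('%'):
        ratio = float(subsample_size[:-1])          # 'aaa%' raises ValueError here
        if ratio <= 0:
            raise ValueError('subsample_size should be > 0')
        return int(ratio * dataset_len / 100) or 1  # tiny ratios such as '0.001%' still keep one sample
    size = int(subsample_size)                      # 'aaa' raises ValueError here
    if size < 1:
        raise ValueError('subsample_size should be > 0')
    return size

assert resolve_subsample_size('0.001%', 2) == 1   # the small-ratio case this patch fixes
assert resolve_subsample_size('50%', 10) == 5
assert resolve_subsample_size(3, 10) == 3
```

The tests added below exercise exactly these branches: zero, negative, and non-numeric values are rejected, while a ratio close to zero still yields a one-element subset.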
diff --git a/tools/accuracy_checker/tests/test_dataset.py b/tools/accuracy_checker/tests/test_dataset.py index 8299260ddcd..f48a48b89e0 100644 --- a/tools/accuracy_checker/tests/test_dataset.py +++ b/tools/accuracy_checker/tests/test_dataset.py @@ -182,6 +182,114 @@ def test_annoation_conversion_subset_more_than_dataset_size(self, mocker): annotation = dataset.annotation assert annotation == converted_annotation + def test_annotation_conversion_with_zero_subset_size(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, + 'subsample_size': 0 + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + with pytest.raises(ConfigError): + Dataset(config) + + def test_annotation_conversion_with_negative_subset_size(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, + 'subsample_size': -1 + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + with pytest.raises(ConfigError): + Dataset(config) + + def test_annotation_conversion_negative_subset_ratio_raise_config_error(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, + 'subsample_size': '-50%' + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + with pytest.raises(ConfigError): + Dataset(config) + + def test_annotation_conversion_zero_subset_ratio_raise_config_error(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, + 'subsample_size': '0%' + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + with pytest.raises(ConfigError): + Dataset(config) + + def test_annotation_conversion_invalid_subset_ratio_raise_config_error(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, + 'subsample_size': 'aaa%' + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + with pytest.raises(ConfigError): + Dataset(config) + + def test_annotation_conversion_invalid_subset_size_raise_config_error(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 
'wider', 'annotation_file': Path('file')}, + 'subsample_size': 'aaa' + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + with pytest.raises(ConfigError): + Dataset(config) + + def test_annotation_conversion_closer_to_zero_subset_ratio(self, mocker): + addition_options = { + 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, + 'subsample_size': '0.001%' + } + config = copy_dataset_config(self.dataset_config) + config.update(addition_options) + converted_annotation = make_representation(['0 0 0 5 5', '0 1 1 10 10'], True) + mocker.patch( + 'accuracy_checker.annotation_converters.WiderFormatConverter.convert', + return_value=ConverterReturn(converted_annotation, None, None) + ) + subset_maker_mock = mocker.patch( + 'accuracy_checker.dataset.make_subset' + ) + Dataset(config) + subset_maker_mock.assert_called_once_with(converted_annotation, 1, 666) + def test_annotation_conversion_subset_with_seed(self, mocker): addition_options = { 'annotation_conversion': {'converter': 'wider', 'annotation_file': Path('file')}, From 8d74cfd3360d8dff32e0c0d9fd52701e1f33441b Mon Sep 17 00:00:00 2001 From: Katya Date: Thu, 17 Oct 2019 18:03:42 +0300 Subject: [PATCH 157/927] bugfix (#523) --- .../accuracy_checker/config/config_reader.py | 2 +- .../accuracy_checker/evaluators/module_evaluator.py | 4 ++-- tools/accuracy_checker/accuracy_checker/metrics/reid.py | 3 ++- .../sequential_action_recognition_evaluator.py | 7 ++++--- 4 files changed, 9 insertions(+), 7 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index b6a1172a1bc..f439e431d2c 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -492,7 +492,7 @@ def _separate_modules_evaluations(modules_config): copy_evaluation['module_config'] = eval_config eval_list.append(copy_evaluation) - modules_config['evaluations'] = eval_list + modules_config['evaluations'] = eval_list mode_func = { 'models': _separate_models_evaluations, diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py index fe1c7989c18..44616bb14b6 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/module_evaluator.py @@ -43,8 +43,8 @@ def load_module(model_cls, python_path=None): model_cls = module_parts[-1] model_path = ".".join(module_parts[:-1]) with append_to_path(python_path): - moodule_cls = importlib.import_module(model_path).__getattribute__(model_cls) - return moodule_cls + module_cls = importlib.import_module(model_path).__getattribute__(model_cls) + return module_cls @contextmanager diff --git a/tools/accuracy_checker/accuracy_checker/metrics/reid.py b/tools/accuracy_checker/accuracy_checker/metrics/reid.py index b7fe9e825db..2344f52d943 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/reid.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/reid.py @@ -195,7 +195,8 @@ def parameters(cls): def configure(self): self.subset_num = self.get_value_from_config('subset_number') 
config_copy = self.config.copy() - config_copy.pop('subset_number') + if 'subset_number' in config_copy: + config_copy.pop('subset_number') self.accuracy_metric = PairwiseAccuracy(config_copy, self.dataset) def evaluate(self, annotations, predictions): diff --git a/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py b/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py index b1906f131b0..d4da7a68560 100644 --- a/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py +++ b/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py @@ -60,13 +60,13 @@ def process_dataset(self, stored_predictions, progress_reporter, *args, ** kwarg if progress_reporter: progress_reporter.reset(self.dataset.size) - for batch_id, batch_annotation in enumerate(self.dataset): + for batch_id, (dataset_indices, batch_annotation) in enumerate(self.dataset): batch_identifiers = [annotation.identifier for annotation in batch_annotation] batch_input = [self.reader(identifier=identifier) for identifier in batch_identifiers] batch_input = self.preprocessing_executor.process(batch_input, batch_annotation) batch_input, _ = extract_image_representations(batch_input) batch_prediction = self.model.predict(batch_identifiers, batch_input) - self.metric_executor.update_metrics_on_batch(batch_annotation, batch_prediction) + self.metric_executor.update_metrics_on_batch(dataset_indices, batch_annotation, batch_prediction) if self.metric_executor.need_store_predictions: self._annotations.extend(batch_annotation) self._predictions.extend(batch_prediction) @@ -107,7 +107,8 @@ def reset(self): self.metric_executor.reset() self.model.reset() - def get_processing_info(self, config): + @staticmethod + def get_processing_info(config): module_specific_params = config.get('module_config') model_name = config['name'] dataset_config = module_specific_params['datasets'][0] From d31fa26652c4976aa9b037038499588efa7c4326 Mon Sep 17 00:00:00 2001 From: Roman Donchenko Date: Thu, 17 Oct 2019 18:46:55 +0300 Subject: [PATCH 158/927] demos/tests/cases.py: prepare for the disappearance of libcpu_extension.so libcpu_extension.so is going to be merged into the CPU plugin in IE 2019 R4, so don't add the -l option if it isn't present. This will allow the test script to work with both R3 and R4. After R4 is released, we can remove the passing of the option entirely. 
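
In other words, the test runner should only pass `-l` when the extension library is actually on disk. A minimal sketch of that pattern is shown here, mirroring the diff below; the function name and paths are illustrative, not part of the repository:

```python
import sys
from pathlib import Path

def python_demo_args(demo_script: Path, build_dir: Path) -> list:
    """Build the demo command line, adding '-l <libcpu_extension.so>' only when the file exists (IE 2019 R3 and older)."""
    cpu_extension = build_dir / 'lib' / 'libcpu_extension.so'
    extension_args = ['-l', str(cpu_extension)] if cpu_extension.exists() else []
    return [sys.executable, str(demo_script), *extension_args]

# e.g. python_demo_args(Path('python_demos/action_recognition/action_recognition.py'), Path('build'))
```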
---
 demos/tests/cases.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/demos/tests/cases.py b/demos/tests/cases.py
index c1e2bb046c8..6219c7836ca 100644
--- a/demos/tests/cases.py
+++ b/demos/tests/cases.py
@@ -52,8 +52,10 @@ def models_lst_path(self, source_dir):
         return source_dir / 'python_demos' / self._name / 'models.lst'
 
     def fixed_args(self, source_dir, build_dir):
+        cpu_extension_path = build_dir / 'lib/libcpu_extension.so'
+
         return [sys.executable, str(source_dir / 'python_demos' / self._name / (self._name + '.py')),
-            '-l', str(build_dir / 'lib/libcpu_extension.so')]
+            *(['-l', str(cpu_extension_path)] if cpu_extension_path.exists() else [])]
 
 def join_cases(*args):
     options = {}

From 876662afc641fbe367c622de8ad13a874f6ffa03 Mon Sep 17 00:00:00 2001
From: Katya
Date: Thu, 17 Oct 2019 19:17:55 +0300
Subject: [PATCH 159/927] AC: fix pipeline evaluator (#525)

---
 .../accuracy_checker/evaluators/pipeline_evaluator.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py
index 8fec0367af0..c9adf1254a5 100644
--- a/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py
+++ b/tools/accuracy_checker/accuracy_checker/evaluators/pipeline_evaluator.py
@@ -189,7 +189,7 @@ def __init__(self, stages):
     @classmethod
     def from_configs(cls, pipeline_config):
         stages = OrderedDict()
-        for stage_config in pipeline_config:
+        for stage_config in pipeline_config['stages']:
             stage_name = stage_config['stage']
             evaluation_stage = PipeLineStage.from_configs(stage_name, stage_config)
             stages[stage_name] = evaluation_stage

From 9f74c65377f2854bc6fd2e2ef556fb85d877ec6f Mon Sep 17 00:00:00 2001
From: Katya
Date: Sat, 19 Oct 2019 09:20:38 +0300
Subject: [PATCH 160/927] AC: keep order print processing info (#527)

* bugfix

* preserve order for printing processing info
---
 .../accuracy_checker/accuracy_checker/main.py |  4 ++--
 .../metrics/coco_orig_metrics.py | 19 ++++++++++++++-----
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/tools/accuracy_checker/accuracy_checker/main.py b/tools/accuracy_checker/accuracy_checker/main.py
index 1c226b8bec0..bc2b85045ef 100644
--- a/tools/accuracy_checker/accuracy_checker/main.py
+++ b/tools/accuracy_checker/accuracy_checker/main.py
@@ -208,9 +208,9 @@ def main():
         if not evaluator_class:
             raise ValueError('Unknown evaluation mode')
         for config_entry in config[mode]:
-            evaluator = evaluator_class.from_configs(config_entry)
-            processing_info = evaluator.get_processing_info(config_entry)
+            processing_info = evaluator_class.get_processing_info(config_entry)
             print_processing_info(*processing_info)
+            evaluator = evaluator_class.from_configs(config_entry)
             evaluator.process_dataset(args.stored_predictions, progress_reporter=progress_reporter)
             evaluator.compute_metrics(ignore_results_formatting=args.ignore_result_formatting)
             evaluator.release()
diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py
index 871433df698..946d37dbb35 100644
--- a/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py
+++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py
@@ -17,7 +17,14 @@
 import os
 import tempfile
 import json
-
+try:
+    from pycocotools.coco import COCO
+except ImportError:
+    COCO = None
+try:
+    from pycocotools.cocoeval import COCOeval
+except ImportError: 
+    COCOeval = None
 from ..representation import (
     DetectionPrediction,
     DetectionAnnotation,
@@ -37,6 +44,7 @@
 if SHOULD_DISPLAY_DEBUG_IMAGES:
     import cv2
 
+
 def box_to_coco(prediction_data_to_store, pred):
     x_mins = pred.x_mins.tolist()
     y_mins = pred.y_mins.tolist()
@@ -52,6 +60,7 @@ def box_to_coco(prediction_data_to_store, pred):
 
     return prediction_data_to_store
 
+
 def segm_to_coco(prediction_data_to_store, pred):
     encoded_masks = pred.mask
 
@@ -110,8 +119,6 @@ def generate_map_pred_label_id_to_coco_cat_id(has_background, use_full_label_map
         return res_map
 
     def _prepare_coco_structures(self):
-        from pycocotools.coco import COCO
-
         annotation_conversion_parameters = self.dataset.config.get('annotation_conversion')
         if not annotation_conversion_parameters:
             raise ValueError('annotation_conversion parameter is not pointed, '
@@ -123,6 +130,8 @@
         use_full_label_map = annotation_conversion_parameters.get('use_full_label_map', False)
         meta = self.dataset.metadata
 
+        if COCO is None:
+            raise ValueError('pycocotools is not installed, please install it')
         coco = COCO(str(annotation_file))
         assert 0 not in coco.cats.keys()
         coco_cat_name_to_id = {v['name']: k for k, v in coco.cats.items()}
@@ -233,8 +242,8 @@ def _debug_printing_and_displaying_predictions(coco, coco_res, data_source, shou
 
     @staticmethod
     def _run_coco_evaluation(coco, coco_res, iou_type='bbox', threshold=None):
-        from pycocotools.cocoeval import COCOeval
-
+        if COCOeval is None:
+            raise ValueError('pycocotools is not installed, please install it before usage')
         cocoeval = COCOeval(coco, coco_res, iouType=iou_type)
         if threshold is not None:
             cocoeval.params.iouThrs = threshold

From 8241268bce39ad524e5d507b399901d0eb027504 Mon Sep 17 00:00:00 2001
From: Daniil Osokin
Date: Fri, 4 Oct 2019 10:37:17 +0300
Subject: [PATCH 161/927] Add 3D human pose estimation python demo

:small_orange_diamond: The main contributor is Mariia Ageeva,
https://github.com/marrmar :fire::top::rocket:.

:small_orange_diamond: Special thanks to Alexey Kruglov for 3d plotter
contribution :shipit:.
--- demos/README.md | 2 + .../human_pose_estimation_3d_demo/README.md | 92 +++++ .../data/extrinsics.json | 30 ++ .../data/human_pose_estimation_3d_demo.jpg | Bin 0 -> 51564 bytes .../human_pose_estimation_3d_demo.py | 148 ++++++++ .../human_pose_estimation_3d_demo/models.lst | 2 + .../modules/__init__.py | 0 .../modules/draw.py | 115 ++++++ .../modules/inference_engine.py | 53 +++ .../modules/input_reader.py | 73 ++++ .../modules/one_euro_filter.py | 65 ++++ .../modules/parse_poses.py | 161 +++++++++ .../modules/pose.py | 107 ++++++ .../pose_extractor/CMakeLists.txt | 34 ++ .../pose_extractor/src/extract_poses.cpp | 66 ++++ .../pose_extractor/src/extract_poses.hpp | 16 + .../pose_extractor/src/human_pose.cpp | 15 + .../pose_extractor/src/human_pose.hpp | 20 ++ .../pose_extractor/src/peak.cpp | 327 ++++++++++++++++++ .../pose_extractor/src/peak.hpp | 56 +++ .../pose_extractor/wrapper.cpp | 92 +++++ .../requirements.txt | 1 + .../human_pose_estimation_3d_demo/setup.py | 86 +++++ .../human-pose-estimation-3d-0001.jpg | Bin 0 -> 51564 bytes .../human-pose-estimation-3d-0001.md | 41 +++ .../human-pose-estimation-3d-0001/model.yml | 55 +++ models/public/index.md | 15 + tools/downloader/license.txt | 210 +++++++++++ 28 files changed, 1882 insertions(+) create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/README.md create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/data/extrinsics.json create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/data/human_pose_estimation_3d_demo.jpg create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/human_pose_estimation_3d_demo.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/models.lst create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/__init__.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/draw.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/inference_engine.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/input_reader.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/one_euro_filter.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/parse_poses.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/modules/pose.py create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/CMakeLists.txt create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.cpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.hpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.cpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.hpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.cpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.hpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/wrapper.cpp create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/requirements.txt create mode 100644 demos/python_demos/human_pose_estimation_3d_demo/setup.py create mode 100644 models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.jpg create mode 100644 models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.md create mode 100644 
models/public/human-pose-estimation-3d-0001/model.yml diff --git a/demos/README.md b/demos/README.md index b41a62cc6da..6cc7dad5775 100644 --- a/demos/README.md +++ b/demos/README.md @@ -5,6 +5,7 @@ The Open Model Zoo demo applications are console applications that demonstrate h The Open Model Zoo includes the following demos: +- [3D Human Pose Estimation Python* Demo](./python_demos/human_pose_estimation_3d_demo/README.md) - 3D human pose estimation demo. - [Action Recognition Python* Demo](./python_demos/action_recognition/README.md) - Demo application for Action Recognition algorithm, which classifies actions that are being performed on input video. - [Crossroad Camera C++ Demo](./crossroad_camera_demo/README.md) - Person Detection followed by the Person Attributes Recognition and Person Reidentification Retail, supports images/video and camera inputs. - [Gaze Estimation C++ Demo](./gaze_estimation_demo/README.md) - Face detection followed by gaze estimation, head pose estimation and facial landmarks regression. @@ -43,6 +44,7 @@ The table below shows the correlation between models, demos, and supported plugi | Model | Demos supported on the model | CPU | GPU | MYRIAD/HDDL | HETERO:FPGA,CPU | |--------------------------------------------------|----------------------------------------------------------------------------------------------------------------|-----------|-----------|-------------|-----------------| +| human-pose-estimation-3d-0001 | [3D Human Pose Estimation Python* Demo](./python_demos/human_pose_estimation_3d_demo/README.md) | Supported | Supported | | | | action-recognition-0001-decoder | [Action Recognition Demo](./python_demos/action_recognition/README.md) | Supported | Supported | | | | action-recognition-0001-encoder | [Action Recognition Demo](./python_demos/action_recognition/README.md) | Supported | Supported | | | | driver-action-recognition-adas-0002-decoder | [Action Recognition Demo](./python_demos/action_recognition/README.md) | Supported | Supported | | | diff --git a/demos/python_demos/human_pose_estimation_3d_demo/README.md b/demos/python_demos/human_pose_estimation_3d_demo/README.md new file mode 100644 index 00000000000..dded96e3362 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/README.md @@ -0,0 +1,92 @@ +# 3D Human Pose Estimation Python* Demo + +This demo demonstrates how to run 3D Human Pose Estimation models using OpenVINO™. The following pre-trained models can be used: + +* `human-pose-estimation-3d-0001`. + +For more information about the pre-trained models, refer to the [model documentation](../../../models/public/index.md). + +> **NOTE**: Only batch size of 1 is supported. +## How It Works + +The demo application expects a 3D human pose estimation model in the Intermediate Representation (IR) format. + +As input, the demo application can take: +* a path to a video file or a device node of a web-camera. +* a list of image paths. + +The demo workflow is the following: + +1. The demo application reads video frames one by one and estimates 3D human poses in a given frame. +2. The app visualizes results of its work as graphical window with 2D poses, which are overlaid on input image, and canvas with corresponding 3D poses. + +> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. 
If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html).
+## Prerequisites
+
+Before running, it is necessary to build the `pose_extractor` module. Your system should have the following installed:
+* Python 3.5 (or above)
+* CMake 3.10 (or above)
+* C++ Compiler (g++ or MSVC)
+
+To build the `pose_extractor` module, run the following in a command line:
+`python setup.py build_ext`
+Then add the build folder to `PYTHONPATH`:
+`export PYTHONPATH=pose_extractor/build/:$PYTHONPATH`
+
+## Running
+
+Run the application with the `-h` option to see the following usage message:
+
+```
+usage: human_pose_estimation_3d_demo.py [-h] -m MODEL [-i INPUT [INPUT ...]]
+                                        [-d DEVICE]
+                                        [--height_size HEIGHT_SIZE]
+                                        [--extrinsics_path EXTRINSICS_PATH]
+                                        [--fx FX] [--no_show]
+
+Lightweight 3D human pose estimation demo. Press esc to exit, "p" to (un)pause
+video or process next image.
+
+Options:
+  -h, --help            Show this help message and exit.
+  -m MODEL, --model MODEL
+                        Required. Path to an .xml file with a trained model.
+  -i INPUT [INPUT ...], --input INPUT [INPUT ...]
+                        Required. Path to input image, images, video file or
+                        camera id.
+  -d DEVICE, --device DEVICE
+                        Optional. Specify the target device to infer on: CPU,
+                        GPU, FPGA, HDDL or MYRIAD. The demo will look for a
+                        suitable plugin for device specified (by default, it
+                        is CPU).
+  --height_size HEIGHT_SIZE
+                        Optional. Network input layer height size.
+  --extrinsics_path EXTRINSICS_PATH
+                        Optional. Path to file with camera extrinsics.
+  --fx FX               Optional. Camera focal length.
+  --no_show             Optional. Do not display output.
+
+```
+
+Running the application with an empty list of options yields the short version of the usage message and an error message.
+
+To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](../../../tools/downloader/README.md) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+
+> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the [Model Optimizer tool](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).
+To run the demo, please provide paths to the model in the IR format, and to an input video or image(s):
+```bash
+python human_pose_estimation_3d_demo.py \
+-m /home/user/human-pose-estimation-3d-0001.xml \
+-i /home/user/video_name.mp4
+```
+
+## Demo Output
+
+The application uses OpenCV to display found poses and current inference performance.
+ +![](./data/human_pose_estimation_3d_demo.jpg) + +## See Also +* [Using Open Model Zoo demos](../../README.md) +* [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) +* [Model Downloader](../../../tools/downloader/README.md) diff --git a/demos/python_demos/human_pose_estimation_3d_demo/data/extrinsics.json b/demos/python_demos/human_pose_estimation_3d_demo/data/extrinsics.json new file mode 100644 index 00000000000..5f5c066dbc8 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/data/extrinsics.json @@ -0,0 +1,30 @@ +{ + "R": [ + [ + 0.1656794936, + 0.0336560618, + -0.9856051821 + ], + [ + -0.09224101321, + 0.9955650135, + 0.01849052095 + ], + [ + 0.9818563545, + 0.08784972047, + 0.1680491765 + ] + ], + "t": [ + [ + 17.76193366 + ], + [ + 126.741365 + ], + [ + 286.3860507 + ] + ] +} \ No newline at end of file diff --git a/demos/python_demos/human_pose_estimation_3d_demo/data/human_pose_estimation_3d_demo.jpg b/demos/python_demos/human_pose_estimation_3d_demo/data/human_pose_estimation_3d_demo.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b151c7a82845ee69de278f9c2e76c9ab4a7e4d24 GIT binary patch literal 51564 zcmeFZXH-*RyDhqqCPffwN{b2zh=3pn(xNC`3`M#S5s?le(lii?fYb;mC<+0Qq6kQr zBB4h_KzfH1=>!si5KZC6Z{K^)82kIqz31Gq&mQ~7S&VlL)?g)hAJ%%GXFhYzm4m4R z3~xejx1a&dBS z9OgO9#r=0<{>ID0&HMM@@0epOK|3IDm<4kvnw8}=a7ciK zRe}`wum>b@dI6pPQOHySjUN(O>(%jf{?sPfSit&tP%*pG(Utt843|Upu>d(Jk8!1*^WmP298!79MUeoB?&*cB^Iw;;jOYB7g}3^EF< zJGi7TSQCZsJQ(H{kx|FWlKzJFkI4S#0*n3s71@6a?7zn~1Msr4FfSgf000IU@YT1; zfc*a%|AGI57S!}?yS^j^f$tw+?K#>fTcx4u_tRE0GU-~2f$Ps4n)_O`@#9b%1{XSuR82MFwoFn zwtFC|Kt>^%_qxlrRI~jag%GOZGlI6eZa4n}w!u#yU;G0Gy zen#$=@3-=CxO>&_vwslurwgYJfKitLAj0;4xEAa~h_hrDdND(-M;P6yGQEP9Qb-Qm zv!*=m%*iRE;S&-ra^?cqrDO+LFS)dg;pAqm4r}dL9Wno?PcZodAT~T|4ozuRlk-X_^gZWQ&2s>po4-6OWt@9{ z9V}mY^0D{Aj6reHgO65S3McWgTGDetmwDeN%LD&5$lKhE-bu0jO~nev%nk)-8efGd zBxWP97UG7PTVXBK^DWspvZbvb=BVd4t&HLfm-{g%%2>O-io+M=RKLbMNZBSFephfP zS^ht7{PSX%eDTls8}RFc{M%^me?OWE!yiLQH%g^auz7qtR;@Yuw#D(a1B!HE7P(&^ zLdc1mB$6l^{wa_q2Ie_EHcFl$*&WMqiVuV>BYKv~kdr+7#C_`u1gODe$sp5JfZ#F@Z8id4ecs zRA|D!Sa7=we>b$jz6X_F9Ucf7y#qPS7-X!aWBrB=5&@}PTkYp` zpALQ3zsbBa?lOi^bt~w=SrCL$PBmeSLIql_FvurPQiUY`-q4wqw7`JE{87^cuAcn@+R0odsJT;N*Hkz$ASFdIJNLSQ#@kN#{Y3p z$m5XH9cPrc^QLOf8$`!f4dwnBVsH^wXjkZ&2Y@eRnyhim6i&=U_llg)had|#L~3>9 z_e2^sJ=b!7tNq)x+6Lb{0H$kk+XNSq*(6r91HmV|Rjkz7n6sY~rr_`XZpxr#OM_?j z)#3D!5uyG*-6yA6`uc!HdN$YtoD8lQE0U7r7A!&7Zy_9RP@WMQF_i zTqqFhS3EFt$Ra;SaH&GQy~if0RBi64JFu9=;L0L4;%(YesO(Ym`m7Wr?y=~_oQ1=a zFK8+seC$nnq+H~v*WQVxwZ@xw>rQQ%sU^ff6ommP`?^dL(lr{{guQmVc*0+=o+aIud}`$oN_YH~gIYD4L@d7<{q_M6 z7XkJ~%=mUMN}f{!d#j{uRM9vo5B>dBDrgaJptfPiji`u_V`i)oO+HXMVZBeo-~4qCXiSqxpfSOv1MWKnVhj>pO+!j4!QD2-1?$I-T99+joT+a9ib?Xnn{Yjy z@6Ruv|3EPG{|+t{|EUT@*#0x{DBwT)-_z{>lWye?nd|6O82wVx@0~Bsj}Xzib0s^> zJ-Q%F=8gUMBGH~_7LsrP2)7=g@_C7>HXwgiF;XWK!qmS5F!2$4?FlpTh zDXvK$XTlk_B%*OV%$XV)exX}mYh>1PM3;x782^}{AZ+fV_}1Z;Pl_Ka>~jXemJj}< z5ynkKKGoMGA$u2-A5C@#3g7B4qr~+z(|e1`%=`6v=D!a}BOeT|>=f`9w}E1)n(&U( zFi#TSq%YWsvz%A8OK{T1V) zL7MF16BU{bamDG)PJCkv!mh?py1BaF`jxqq4&L?8q z+fU}rmvpaOBwTfR4%&`vUUx9vZLpd_OfR9Ze((-ixEJWj$akp?>`)p_iNtBzsQW3w z?(=Zft4Pj2E=hs=h5!?4usMCH83PAy^V6iqw_3|VNed@P{4rY{x|hZurnHzav*&EY zDX^vDQp$#6-bJ#%_Aj+8=WoN`5f^E)_EC8knF7ZIJ8HS&VIyZXBnb z3L^11yzbI|@%B?Pu4h{|+))z<-BgTA56Z}3LWg)JbO1T|WVw1HiPt1WwmZZ4ON~E7 zjXX+2q)zm;q6jZiuGJ!MIKTaTL886&v2`(FW@*?xvhT3A29Xx4T%hqOiQcFvQUpX zl^}C&up1-KnrVt*f%-BgTs0w6W?fhnn&43T3Sb6a44nZyk8f=@adK}rHu$|+IfJ}V2e-( zc!!rvmN}FPfK?-a$sB@GDUEc#A+%>MsT4mh`obBdEELQJt?o<7pSd

+395l-+4ZeOZ?`wx3T>AP zrmE9q;ptGx)(gZhNzVS}Hc*0sZR~Hmpg5(jpG4JTm!vTEC9q(6g+4dTj#67eX>ps_ZOhB3)#3I~geF;{lZ!O1Nz?v`5ngu!zzsFOUc9ncN{q&;-B)(;p zw0n$8@jWHvZ4~+dI0ggbP$E814wzfv+Jevz#(YSSH{M}o$0Ne}xYC%ZcbCcbSpK2v zj-CetBnbIND`*W~lV0TaI1h~D#qVZwR+*75qNwsTF#Sy(gcl0CNUF>e?H2EX`_i12w7(hhoKY7r?& zmThHXfxxHGV;^e|0I<4sSN_}kzPmFbz4JQZ@!aPOldS3AX? z%aLZqV_d+!<|@+&ynWA+K-{$hpx$NRNpAg)DSPosbU>)H!TD+fu}k7hCgYY8d_qCD zT4)>;+28}l`C!a3)dRHT};fouoH%tOJ0FvvXRyI;^YW1Z(+V9RL@YN>q%l*r@1*oD!g*` z2I;zg^AT0ps`Rt#Wg?dX82v0t=df#;Q6qW|%$%wY?@-ot4lJgb5_!T)$?>G` z+O}y!v@;?ZMhu~0nVl`z`!jdvS9~5g6|v*#Y_X3mIS1f^a8J|>Ctjr+c@%b&#Gk4w zTCw4JW}>p*#cq8T$M=$;H_%;i4zY8C|J={X;RAqZ#y_=4PH(1k;fJDghq^wOe4=y_ zZMqr9>wW$24Px)MQN^L&n{O4IgQ)&#&9T}d1gMG^>eC_`CE(LAb0dy3p(=MBdphl- zu5Zae|IrKep64Fz0=aZ)X6*i@Pop^C9hX}VMTs*8oAXO?<_K}GHtu0s7+m9HgT62^ z)kr?jqdHB-Q*(c|@|by&utn>!Z`BEz4@dxvN@~QYpdnL&808A8C}Z#d;I7Ye$BCxx zj>VCqDLI9X8sB;Xo9-R}PmHhU>LO~4)+-`)Jw<~Hk{o6`UbS6lqrV1wMV=Rg6BT;} zSjJEZTbB=jABr`UG^$$EgcEQ*8b2fPhI^yVZ^^v4p)1A@zvr_ioV$WEe(YJebpNZp;+YAYs!AO!= zX^h5eKdGqU;%_f$6E!QJ+8Sg7{EAc43-QvwC*K_aopk!AR_#y=k|*v+J`|T7QgiapoTM6 zLJDvJ^85v0rLI^nmyd01wpDSqF>vi~((nw86cTwQGN}i*Jj+xqBRg zV7nhQnf%)Tbp?!hC#6N7?u0?0e~w>p-P1t<3);zaaSlRc$XXNT{dDq1Mk8+)dG||4wKO0Gnf3#n?#YX z0gR^*%QO=x?yl(z(VR&t1UmkR*<%nJ?J#lfPkvpsFlcrrep78~?a!ecV;l0cM@Y_+ zRzva<D?@5F-+2k8d82^`|D$Hr1xaYuO{-+#d$jr{F)v%4{7irh)BqQK7@k?DExB zff80pyHQPfMPfA{RTqD4?mbw9oxS85vBk5Qd@_eMm_2Q7<+$x1Zh+{cI3b8dZKgaP zXe3;!n(-_^QDA70jqs>7G8R{CLUji*3C^`k5Z?d9c$yF znU66}_=?SE7Y3K8`)W+&L;Gd%j;e{ax7*M2zB_bWV5aGRe~Imdbm{*+t1S^dAd1!H6o|U>szx>!;KRr;8tmAjXss$s_C|$ z@a}%H_UO(0lCIuIsn9>-dXGHOfgqS39SIY}Af}Cb7TMKNLUZAP28f#^(b+x47DuIe zU!q*L%5H>^s!6U8Z~Gfw-eBYfuqfE$b#==2v@lk#+taak;%a}4f*OqS9Z zfpHcw2Bc&f+9pEe{RoCR-ACF@rJsN{l*FTNKqNUjQ!Tw% z;xR)SPn^r0n{lkl5;w}c`b^wA$~p7reg$NDr3YHGn0St6LA~M)=Y-i8AvwKcbsUXT za82_OkDF&?p6Y$TU9M0sOgaU7gPF{wmy*R>mt8b*6vZp?qA+-9hl{h^ z86~+96w{~a>hPo}H@opx+ zSb|#V=C*XKN@J|1>V(!=)r@2$gL{@}-kq;Ve^<8{4dsS;V{{NaG!bGRGWOx;Q>0!b zt7&9_tG^wlN2a~J|NYRTxt+#)k8Yxf=0nrT8y)aE`={Ul}gQ-eXX?1S{ zix>?8u3hJV2=8!IQ_s*sNv^eFRee+D0HgdoJVKTmL9dLgIL&IO^3cq~~OUs8f-9jw# zJvb5yBAS7w6Uq5RPnysu#;6v@AIFfZ{ZvmWEi#RG=yzAu_;|Cy_wP+Cm)9XeEI8z` zHg3k`AO3Fm(Oc6nHA=i+qI1jL%vtj%j17gy`nP;GvS4Fc#F2s8TjpbGo7qN1LbEtZYZpOtQZ7025#YZ(} z2{sNHm@f&Jl$7#Kb7gfo)Yt^dBr@^BEBR{l0=QpGmB@OOLJMgKv70@nvfOkr!*!uc zNN290`+D?;rgqk7cED4PU_o_RO%;KrMQI=ciSs^LjjKN=KhQYom0+)a{zH@%Jb!AN z+&t8o5@zVEC#JLECDB$M-Yd`Nq}!MyHl%Q(<5%wk#eO$>HQ3ih9hbF?-8 zMaGOD<~YP`zKU>o7AB{w-yFTCt<2x2kvVUDTHzKS`_AVR6z|qgUw4l6ZO^C8p4PsfrX4#=doHCT{(wMOn|e z)bFO}tT#^Vvl2i2G44T458WCg)b%e;t=!-bzMr8&hNtcwZ@R~tw#letAs+yQCLBE4 zRf1mUHo?^8q8n9BiS8Ik>=wQa(PRDF%3LI-RNVZEU~hXP^8VA~`DQsMG0DI9r;PFT zo2*1gkLwwFvu-dJdDNNGh#lz(Bd&DLB&H};RcfG)&FfTrndzKJdHM0vK0@Y++$-Vc zqstcc8z`a^3KTUI1<*^Qq$!@=DDL|AB_z)j_qsNO8Ogd!dU`dK0)<2%5!p4j+ODJ@%j(;FJVsfRG3GFYj{?-_E1kp*}A6n ztwq=QJndhm%NZ%U>D`5)_hd37WO8@;lk>&tFKD7f-%codK$Ic{ZDMf^-f16xm-tTf z*tdWpfmyC`<(z&6?f_fy&q!TQ)*Oljv7D?)=^=qkh@1(nrX=MNyjAl|)MCk$u2pYA zYx-8^`6B>Jn$U3p}@rE!ZqpNQ>Pzv=@8ZKDJIXvDApCN_|0hWL|I-Mm%8s7Qkn~z)YOr2$x@`s*XpJM%pNq7i-23=8V89}6? 
zYW2SM7Ez^~PU$&ANWF!kKPA2d!W0w9Hz@-o#cLG*V#biG!su2=g^{9d?j-ud8h7>Y zcW$zUI#yw-D{pgwji^%_&>DEOK6|)5Hot;MMcWC4zfVlw_S@9G>MKKHZCe;K`{f+{ z&D~MPmgWALJ~?SAuiNy5O7zyAN^-Uwy~X0<&nSds2vR&sj?se@iaN9IdTNYzi6pk5 zl^S}H(|qjXbspb9v5v#(mrk>b_NU$85ga%- zI{fL3lQj?tGhI#hvUU9<^i9=k!?)(+(-pH9KI%U4{K^`1{F2s=5k-l5(S>&n*>EQR z9esRESG26>_jCIK4$XFzxEy0K2q7M`G6ZQ@6MF|5oUV37GNqjD5{eud8hCgOUI*@~ zLy@+cBzGJ4oqje9-&)IlJfbx^5bvtq6nHq1t^x;xI#mA<>?|Lj2(^B{HJ)Q1Z;Tou2oB z>fU~@?<#+cxvJai&BwL$=Lcuy8nRY2i6H^+kg87uU8dwZD9H-<&z~De1Mfoj zb6t~4bo1?gONtmM*UL|p%9_2$r_%5k#FPpim8$Df5*{@(dpUc7{71gD`4cHLFKEfg zfiuf5IQw>ab~_tsqzyr@XZE9uPjmSri8kvbPlgEFUy?mguz3o?4J*5 zYSHI>O!nMeH;gd67>6m(@rAVwY78Sa-?c@GG`Isk6V?ePTEP#6xAa2!XhH79;dZkQ zB=(`1UFv7-y-y>7%5SYrxJ|R=+d1E#wB+Q%)lzTJPE*YB6_F^cPU8g41)ubA{;Zmc zowKC=j9q@j7`z{>W)ypV&cph$)<8I#rtV3A7f9B^+(?ABzch#oiE^n5LNC`p%^1t) zqof34z#NoPbVGxb(1_A*-yV-*`RWLR$SOPEao_k=bA*iL;+!FCwBmO>y&UB`aH64) zRMl%{Oq^f80A4%e#acxXW(vj=D4{l`a5%OW<~RP`DZfY}EXx%wgyZq}z5M)lS-8oI zsVw?TB2$GWkl?-GWA$G=4PP5%8Rb}IJbZ0o-02kAmptt3C-3BNBK@d}jf;IJY6eeE z8rJ6wU&M2!Fr>|4Bs%gk@!R6s#0V?={`JhhY{c>}!rNy@A6!uTY5%g3kV8uUxneik zDhb~pFS2+CBDv9pJLRnCZFBysX;2)i$b8f!jvd_ zqMI3HUxX0s>Tq3M_7GY#-g4+MuSc>8;sr~M(IeJ@h+J3@g@wt%o`O!TT$Z8Z-Ees~ z`1>CYvPb4HZ&!r|`WxLyDxBhuD|toLbqSBum;AkOGV%%ysx-%QPGrZde5wCvKU=aR z%RD;s2A@3Z02uTB0I2iatR}TEaTleuf>Me_rc~PBAXS!@I4-O%C^RN;B=Gd*JPgt+ z_qTgM*iR4hY*TD!M{jC`f+I>`H>>5SIgk9?LAl1cwN>FOWDQa%DiU|Z7mMnC?3+UE z3y{m4?a2-f2z&57>ha|Wnf!W(4(bsX#QZu`_#A{9n(P0SE~O3ZRy`3@+04U05~Nfp z!*3}&XuF_WGaw#^`SmD?`b?7m73q5~unt=sWaoz&4w)_2+JBh{FQ*Xu$)>H#vt*{) z9XxXYU^m}Hsov6i1nphqX7uX|54{nrkIcBQyN4BU)oo92W8hEg6(d2N$oh$- zps)z-tKNYjSc@BSnMSui6#1$2$#9obUDusL61)51Ph2HiOzWx=XRhmm8ZuAC=i<)Q z&n*&lN?)#w1{Q}wS=)qQ=O~^Z*THrAiKN-9Bo9;4#{JQ{T8Ol%dgTL`mp?NjP1RW>|A#K3|3J6hKa14=(jB*~ z?V3~O`zWs=?cEGh5}sRQuM^;{%hX}FWo?p)p-hf0)*S#ni}4^{UE_CA-yb16jtKUU z!ABl?JuEDn^Vy$Rs(K~jJZZ)Dq3A*_>x)6looE0HchloEsi)LFCN=hCrzocohpv7m zNsgHH*!jLpm^HZ?8v5DmTRQcK-2}>OF?K;^Vq4b~51y93)`A)fc-Pd{Tu-tq< zb(veXKXJ}-m&$H}k}{v~AKEZ>9E7=WzUO zz~$|oOM_ml1BU-FN+gE$KyjlE*26W)~;zMkvarN}XpQpdHPz~!0a;}HZ9 zM#+qybIp3r@ta3$f=;B7hnHWzyGC=o)dfzxoGOuDCSo+IPn+>RBu^1P)JZUMe24!= z`hIw@`;XbtpSin;qm03sz$f5)6>`|@S5yO-b^^8MQ6Gt;FeI{~-={IuO8UyZ>ffJY z`&#E!&jEeM2QN4>ne(B|$E7Y&qi}vf90U!DCxu&vFPa4&TW3X+ollPT%-zEoNRB-_ z%@>5|jMhUixc8_^Fxzps?`o82_*F|{4*}5yPSh76y3~P*ue^uN#n=Am)rE22$qexT z`}oGia>~33cC{zjkd5hYA3|c<4pYFG10d0Q(G)bJ!9?rgN6cofFU|=_sABK_Zf)_x zcbqu!>XYwV$_7=7W=F3AV+t7Rl#Xz9Q<@GQnlzJL9rD->H=vJwJL>Tz^G=8Ko2G6_ z4z2d8OUzejMol1@MVk_bGlL0@r#j`o{pHx@e$M8pQqIqWQ%jeyr*B#zwtjG)lG`kz zx_Wij!P#~1d}b(GXnWG#2O^W0btRCvLXaam_rfKj^un1XbwQ)Cfe6~%z&Q_`8M3Wu zb=ay`MAlBt8kBYMeGhPsz7P%Zp;?0L^kx#&@x%unJC`9w0BUQyT807+|;)nbVEXQ?d9aJ)SL5>n|^yQ~d? zsrq`2v*Q68zHMMmja6_hgrGpRpk1T%6@`2DK*d7BhEHZ})2_;{qva8h2BUmT*6Cy5 zFmd3~&tzbNUV!q2BcOZ_RXAu$%=I+yGr^Edxk8BB{CO*>^0kp!TTHF2ea0#_uO~RcQx6AS9h}o)lV&<9VHT$ocW?nn&yi zQx?z;LB9~VW34xG*)g_KR3;1U)DM~2?`6W5ug+M{F5jKGIU2Z{jiK6G#}t)njnOG8 z;xwZiM^fn~Q*!*_tuHd7Tu-q3HPmU!MDhMoluk7*%OTZxjw?%l>?@i_m2vZ)s@<_i7oK|&u(Im!rJ{}}vW_yZ>9rSnOQ`*(Z%e#dMP^X`RDQ9Fk6 zQFq4o%;8H9#(*Wt=-&p&2VT9vY=D+WY2M5Rh|{yG1&(=>FH2!3qB~qwNK!9|VfMo( zqJZ!rv{PEEl)PFWFX%SXsLtir)y%t8zwdF!4Y297XnkdhFAg+~#w~Uq5ycK7si$>C za~DdFyiuD9CzGz-Gv>aPB*3s`t0r@NQM^TO(mxcg(*r)%s>)np@eElPm0ta$&_fFg zQ$|wws-XV-ONi@41eJR46=sIe)jyU ze7=iHe5ARFWHy}`XmZ<`HPBJr5I0b=DX}&{bJ8su)BSF(1vUZzI1k_KfoZJ8+9KjIH9ManN7aK@y{cNPFq1 zAQG#sf0*T%pHgY`MWi3ENXGM5!VPr;4{7OU3Tv^j(_aM1Wu!_39-Q{*W0de4{kXz& zPI8H%s)}TB^`qCiz>8CN^?!-&H2KLQ5!0pQ3#0zJHpFcRNs@_X2JV?}5Hu}Y^Vu!Vbda65eo0D* zd{#3l)iL3Wi%I8&dC*^N8EJr~ehqW2&%o0)4x>&kZTw(5hCnka7}%l+Q!oZi&l;1? 
zq+uR=wQ3Zs+lvu)1vM&OsvZFSC$8N+q1($V`mFuUWIiYbR8u_Fv%Q%U5)k?D0C*~@ z_{UuE2PcH+)c+4^&)AsA1_oNqi+MI;C+#yRq`S#83mR$ziivf7H`64aejQZJu%bCJ zeZS%`H%dl1<`)pDgX>OB8gP62`%YmY#&;Co;oY9Kdn7bF=%&<7`pZn$|f z=@jnoorArG*$*9XO@SH9;VhAJFU<(L8O!7-UV6I5#_B_YI4-AKCt^$1p)eWWF_9$_ zss;G9p6EOIu*#5Ltb2s2rkSih^u#7=R8!lG1|AVKV3N{jBk;b@!!nnxNoH3(^G&(h z+uKiQ^#MBxOp-?PR_b4p=1p6iCDFA=lo4HJPm`pc3BQWtPi7+5!xTmGd!hz?t2j*Z zOMP{mb+YM+mw=F7CD)&fy^kvtVd9w;#uaFTN&=#wjkx@t^H)s#qrgopk+TcVk51CR zxQ47tj8@@GD;#ityO2UH4<~5od|f!~YLq)4N=O~tNX`4I&HIc6Ni~59R#S>dPW!e* zW2Uny0X#Jm@}=zsmU@DAH^Y?u#Yb~qv(9UP>R|^=M$>H8#AiXOBn^_+uqLHMlqy8F zkik#2-R5_XoL66=XV9)_YTNg2$L~*5+aL%fOJt{YLI&8KKdsSN>{`Cin4W$Y{)_T; z2FPmwcof#LJuQu2>>7HKuS0r)j3ZR;&3jrPx)wkP$fRFsci53;vgMkp-a##-Noujf^RWxn0Az;ioWXU#jIzmHNInQx=~-5oux}+42*{e??CYeR8`P)-qvr` z%BFqibGB=BJBNwSQd;W$bdB#ktTr_+_t$%{2Xbcnnrsn80q;Pf*)`uH{3%dpDZ&g* z@00tRNCB^~z7SR>05SEyYMC(Q((mz@&2syG%RKSWhZULpXnQxRF)W1s@m!m{Z!@S1 zb%JI#+$!r4#Ganx5g1u^g^f6k{%LrB^{{;?%g+U)V8!6RayX!?O0|NWB{J*cx$rCl z(o%763U;}gNQfU3EgM-EebQd@@h+_McEX*MJAC}s1yPh^f>?oS|LPE?8l4x96vlK zrO4>c&Y8gaQ#b3b-MufIyM&e0=IGV_!3MAAB#2@xSaIesX;5uxbV!;`N9g-^U8iDy z37f}@Z*?1pzhFwi5!D@9y{1( z-0Sc=C(}S`{f8`$;Szn##gxToWcYXdr?h?wESuv%QzcA+r|me*u;u`$f(lQ6=cMm@ z)vqB%bUmo&!#SD2;)%YH50b6b+*hOtAbBwIN+<)GvwhkxE1n=^Yem7HS5Dqi9tOjf zTZcnbrXFp-qA8FJz9N6)Kd$Wq@I^~mKxO&>aBEpOJr;F#s(Mvw^hOh?2<`fUdNtgh z>_lA3Nq~FK+u-?=cY`qmjd#sYDF%vp`#CKE9^Pgy$}%^-?wn=eS#m$aoS^4oBC0wp zHA>EDUBvYy<%yE69$WKP1xIz!v_?~1EmYeu1ONKu#UF1J`tIvY>fqCK$c~gEyo)os z8BI{aIr|bkt#W^D5K#R4z^_XWz+s_thwkge10IMcaAwzh09>J0zI&3NQWnlZP~B5) zne-*1vD;l$u~J7!VT4my{`5zXWVs&?Lr2w3K0fU4cXS)u9xq6SEcO;WPb3^W^6r^C z$LErNnQom>Os(x5O_XAfS)AVKYCGF7J}?4KuAevfrXZb;b1vrcYVoiQnf8nM)DGH> zef)#->E6bF)>B|ElT3+V#g(4RP+B_xM0GU_r+qZ-1RzK2QOW|@4l@^*4xgQJkNacK zV+cI1roV&Von2PjKz+LL;#~TJZ4W%bj5mLU^1cS(G5yxqm8&UqD; z2nVhEvVUY+sV9r*AXm!hZ3vAv?QO$stzc@o#1HbUYw);G6cJLDxG1;_eTv7UQ=O?I zdx0lEjnhOQsJH^$gb?>({9poj;dt!3 z9>mL=MttWre_{^#E_2JXt_CK{;piD426}FQx48s@rD|^iSWg z>y%j%@|tC?X3vn|49gFvjeS)-<9Euw}l zEpC}^9hVVFUDvk}5>Pze_uJ{-S5A$kUyL8%hSG!^v)jqgqZOg#R#XeaTI^djy#nrI zp>t@IoI+}j`JtJt{1T2I5t|zY2O?)drkCpJg}RpyfW196$N^9!T*lf)&w=~!&*=0( zg{7M*F(iAN7UD|6?nYj4chJHW-q!?wceUd+t0p_3{*+k0ySUGKzT}^j-#AIT6}!m* zT@Bd2J-=eber#P|(;diy?*nv|wP}R|3R4Vjh%HRs!2NNN=2df{1mAdeet!U^-o-Q# zvQt*VXIqW2i-&wqP#0&SSU*KBx>Q#HcS;FixcAJQO@w1#?C+ycDb#GCp(oF(!pRjmz?^0#J?yQCKV=s@fpgz$j26!tNe)TJuWOHAqZh&NmKWx^lQ~9Q5H@c3d9yjsDcDrS5G!L0 zUfme0vRx16XP_VTW(C9F-C7&DHBGyk8O$N|a=kM!=+Quqc;ARfY)XW(^};x=sc9->F4{fp;_5&PSi!PX2B86sN7eW< zbG;$GJVeob3QE&^6IFd7doS(4{R( zaCr+w%P>L+wc(Gg8lKN|mpD#c=&3eIl304ll3u?Im5h=>^(>|^T`t|KT**g_E#2J$ zQzJy)KgB$K#Fk_Q$7rly@J6wLJ((@3;-tPLFB`7%d4K_$pMqUh0$DYZAG9GWP|c=#5+S$Cnbc;L1$_JNK;FdPv0E zK=G_Nq*^**|w0O0q8|pU*(# zC4x2)PGW}?0w2DzR2`E~XC;K!b~do>>#NgR%zI{@~-uIu>j;>~r($eO(PdXePfdho0e3Vl6uamD857;UZRe^Z$THne^GHy;-isE(WQb)j$or5 zldYg1As;`!d{CWEVY^fK?5Cjh*J+z)XHOhIo9R_;>X2?f^#p0wJ08Nzt9Wy!p`==Y zrN#D$puu=6w2Yp0<4W|EY&&W-!iLWI=-5?7CkG$>lVgGwqlIcFD+LlP>kzkX*F4M+ zixX2w{ytZ;pw(TQF-m{23mV!jUl1jxzrv=d`npTGRIl9VT7E|T!+*uu+mG))*z8-e ztG~dlk5v+Uc}E~Z4<3P%j#*SoX)`)lTP{>#25##bC#K5SrAlYiU?C)Y^kM(}2SlHtv?F!e2QS>Ufz zmQHCMd82~epxBnZ8{NEAix_$i@Aozam?V}CTzmD2wc8yd!2dmJNTMyqGgxxv*t{U zz(}U~@k&u&`MouR2JPI;i$R(AP|_T z`SeK9XnOrLdgr!=%2eS?Oj#N0bEZ^nN71m)2GU%?rXJ*mf*Q5ze&!SHNHd|cyz0#}ky4=rutHy&(D~w*Gl$=T8xgBlE z8$YOh0Hj4UChK#~LpOeGv>hkq7Z#(12Bf{KyKAPuIwtuK$G-cSHU4?1Kb3n|K0rk& z;bik|Jw86Lc0+RogGKEcAfeMFKkh+O?6W8M(NpSa=bhe5 zrS=-%`*X~=UWIAPzY9%t(ka@|4lS%u_dGi@fPKh0%`pznsJatr_PbaAhdraI47!vY z|LWyZlFZ(j^4YOZ^6xynrG!U!owYU}ZkRvz4eyn1tbcl1eis11W^JF|UG=iE&VOSg 
ze(CoISvk2-)$@7^Yx{<3#cBG`m)7YRS$P8^?^)52oMML~hp(&%tt;F-bl(f$HW^Uv zyrfkMa2`sp1LE8;EWd3WjEsS(o98=vPF>n?Y?wN7AHduUXeVt$!fu6>$=p}FvTakq zmj0nsuUzitxQK>))eoD!NQ+YjZJthXo%*7!qzYRl>HSBzi(tf<+mZYzAs=tXlO7x3K_u)^tK}H!3%Ot@743p44K!acR?B* z-ln<eP)?<1V@=9n)3dY8=erzotCBrlpPz~?A7!=v(?4|e)uq?WBFK_U93 zZ%4P3{LhK{cdrHO>)*Zp+XVg^StT|8_9<_|!~tM=FJJas+|IzMP8Iy(>ZSKABbzrc zZ-2Sbtd*rfR~FZO-ZX~EvISlchBh~0BH0aBGWseC>I9Ws*EKbvZC{!&57{vM1!aOZ zoJ)0$%E9|jPg;L)8j|OsWtiPDj ztSW|T8i?~I!kUt9M-CbWi+NqXVyQjpg4OfhO^Tlrf1sU;2h$%7-@NC4CW@I3gvZYR zi>Ijn?=P?TZ%es=)oMzgzs+fEUCeO45lV8Sk-)EG- znaQFI*~!`X`tp~XmNPxe8NfB9y!O@83D9;aWc)4F_dh4}tHoWo+P;6iXGrf3Fe`5? z)Y2k2vZ%XW`XpFfSy=qF2K?A=4%nMPOEdmd^qartc_zkFf7=*Xf^6JE#MjB=TR%6n ze0HoqJS?>XU^P9xP7r>*3B%{tk2z1j`@dLw>$fKV_kS3qTS}TCAt2qoiAqbSq)2yn zOu9iy=`Jbh?rv!q-3=Q#YOi~r@4s-rk7N9{alm!$d0yvvK1%V85INTgOKrW44Q&yV zs7ZMhvMu{r=aV>5elo6sW_D z*_4~mGqn@OY)ygrLj7>Xv;7k0&j*BaytL$%c}X@ z6{et4q+P1u_m~N)e^fwuIX#uVy)C`|Z#zO8)GRWq>G zOJsixufeF4S&1O02=|u)#X8EoJo4*RJQwEwMm`Hqz(hqx45WR=1g)^J<@wv!qu;7d zL)dTVH%gxqJ~hr&-NcL(HpQ6*Yp*oQzWPfW_#;rZ@;H?pT?wD>JtLd83?;4xO7p zlY`%;f?k~5f;1rBdX)eOkg4PKhmHrfwdPtWFls`1#S%phJyh!*HiK@9qkp5=_W+|d zg#QvBQVNjT?sI57A&<2e8?Zp%m)P!$ztFLiDi5l6S8{Q%Eu>LHF*!}x*CJO5v`4r% zHBH8cS60PlD^kyJpelaG5+%ngLoX)}H26xvkUyxAF8^~+rxqoy6-{!X1+S)A$~KXN zhOzoL~EWiA$ggW4+gU^}ftNGil)7i&!y@|dOqDO(r8R@s(UlvfqI z9HElnx#E08W44@i&}pJx6K{vM+zjeZg$8rlB@a+=jhSQ*_m*abxbGwfW?Esy_} zgNyqLqhsc7kTXQ&&6V(MT)eZY`te42Rpx;3II+p8{_IgR4$?g*@^H2wKv-rgzX;>o zs;<2it*hUhC)(QlxyAdW-sf_HdW_LYYE<>h~|9~DX~ zMJanrQ+jA`ODWKPDan-K6aREYEyVga*Bt9!IB{2)!N4#g#om)BRzjMT;p|FQdq~D^ zh^48Q^TV?w>P7dqm_OKax5n4&0}B~wH2$Gj4e--u4f}5Z0VOcJNP}uX(RH{WzTuKH z9;f$$@5Ae@M{3>x0_FeLzy3oP)GB$c;-#t2K!~P>t_rDdG*WP2;8*y8VJrBR&W^_r zYHrLdeC$@X)=Nu)$5C8gcHJ{&W*>5u6}yWSk6oT*jlW5Fv;^ereDW*^<4pMv#S7qr z)F_iapU&M#u+5N@@cwCm;`q~9aY1Xvjo-aQAh9i8Gk6`Z?(ff1t>{k-rlLzIX6o9$QIQ<$} zvC+w4XZgJ2T&Vinmjl5>OXdD0w9>i%P!{wx3wD~`j_p5z(B(>!I8syX{b##b=|ld~ z_mfBM=pD>auIX>T!~HlKn}RhrTha+*9pD1;(~23ICu(>9o^%zzj}sanjoxJb9KQ?! 
zXIn3%mZf-=8$0lyY)`~a3}mHBCR*1wH#MYroDJaLXfw}!hpq2-1SlbCs2JgInCG%ui9&GZqrIQ$7a^jXhp}|qCL#%&~f^*AG5$NrMwzBIe@TwXZLbf1?HK_~j zrE&0D{k2hh$;FiPdyT}`&A1C0SjD z{y_9GdVapq35AOJmp={IGLN)rRf`ZraQtFNkGSsD&!U$}*=V*&O0)`QH7Ol=7@i|9 z(kd99$QhapO(ji=BIQ^Ws&O#0W5EBIwG`={CJztx@QZ*h#i^|h6CMq;F&6OAp3^9} z`kCjf+f~BVU3Q)VgsT1h%4>c2?z|h07KLN+uMqYfduDNk^tbZ`vz;?UBhQfq(d@DR z3tK;qi(L3SqT$o-Fff{j6QRLL7fr14&dP;yS2LTOJp@v%mHv8nZ+M;>uHW^5RNNTB z(Y7~ZKvsUE&)U#axrK{;*!~*vx-GAmdF-pH4b*Vgvld#T(TG*v+@b)!VMu9&f$q3pGHwER zhJCp<`;6Eox6=!v`_*e(+QW)B|L)b~XuWJFl-Ilox%&>6a(ErUC*=1^R-txb&k(%% zwUUpLTI*e6+BR}=#J;FaJ}dPR6qSp1iwUra(W@IhJQsHs{ST$veu1q6&i=>E0`_VK zM~XWyl{Ubuj6A9!un*!{19b&QwebuADEk=}5W{ws8JoXRXhbM9LTG`qM7Eb5PRQR~ zxzvDHVc8((O}I4~0%FCJ|;8o~an%y#kaS4Uo4L+R zGJldHm&mf+=QQ(VPMJuE}=i%fzm-%-ci z#d!I7%02s!9IdK8y&c1N#yT|MK&t;zkZP&@41D^}Klc)z4uy7(nfjgzCFCy%fI884 zo-OTl!E$b?4w(wTZtnM;MLS9*3E#Tm_0WO~(mfLec8>Xvx`6vdj!BOy&HjwsNdIx4 z`Hz<)8=ztGIF_&wv}V<)fQ8M+M%mPVABBs&AE}LJ8gSo8yv>8-9uVy6#Tn4y$7Ea} zsmd{v5^@W-=kM@tyt5fnc``xnZOCzDIJHCjH>0Tv4qZ9Kv%buFsn#|_`K;-lx9NMv zyIxHMR?Fq*MC%HG&F~HZP-Mi1_E9zuEUB$@?uoNa0XFTDP3JdD4%VfjLwnsFv1`n< zy$PWh6i39oODj5tP}`|}s3N*6|0(19K>}g%jma^Ba!ACLZ#v@?)C@ky+U}HJ@s8Op z32|~Do2Hxi+0VYL8{cI!{ESVwlP&JSLT_q(INP?-jP7~K9(=lwCGi~{Ay?&ts4ZJP z4T20FQqso6Xyi`H_~3(@n`0$qoEFD>T=?&E1b zglj~8QH9q@wp#Yg&*LNte>rpLc#7_ILPP~UOIc6eX`pGo!knzd|uh^NvZTod5#{wI1SxOqg@Zr_&{>+a{R{j|RwhnL}%0 zgupA^`V0P2Ra^x!yW=U$5*&_iJmo-a_ka{5Nh4aa1^sp=d^zZJ3qt%jBRstTey`PW z{KxKfGRCXyZtE~iOcl4-!`SOV|8u(2NwOSMEn|}u)_-P9QZ5^vZU#`V0WOmXq@rx7 zz&yKu`cRb#hw`=bH#t@Y*8mRYXJ!~Gi|K;<8Syo(uxj&KwfMu3k&ewZtyYC>PS(gx zLWHmO6Jj#MIIReRm3oCRX#SAS?2{=F8~f60ya_8XSnhCh!$`IgzSGf}$a~r3h6pj` z-_)eLClL65G^w*59+=6LH}vuBJCLT1TM;#Cp;LIa+w8WvB!x%xb(1&xfr<3Lt~t(I zC|hN_Px3OLayg$JvL*|-AXw*Sjw?+cF>`)BausZB!XaCaOhRy_&I~FlQ@I2&PcYl`V-3pD`3CN$8 zLu_8V!963UHZ3hzM5n6mReW2f+d!K!gZVKzy(rn zB3b<>w7c|~4%H`--eOSXx&7_-pN&jEc?a4-!QY0p5UqI8BfdmTS-Yt&25+Li+}F?i z9-SDWBF1V~O+PUbg@@mdpNPBsx-@Kn(Fj^^WdObW23S!R9gAJ36obLIIfxAh5&a~k zEYp<8LG=#1n5s`t97<8fj&TnB%Ms5-AL*TxW<|gKyaA@v$TQ#9rdc%?8CGFoCYujD_ zLQeX+6+4hrCww6N(I$k{o2XacBctn>M~IF{8O1H$JeBdo-AL67&0Q~?G+#wydH_W5 z#Wrk1m3x%Q(Bt7J2IX*BbxMea{>Y(sOLfHlp4-=;hK*0d716G15D%ph)mdqi63F4X z(-Wir_a~8%v|#|@=NmC)DM7uX$ZH>}(~;4h zYA@MoH|rfHu7vR|&VI)ybCb~EkhfLwaWd$Q!M936zcxl zG3RFJn}G(%tAXyBUW|b#zh>e;6o@w*a(#E4ixhSf?XWv=Io+Ppai}&~@1~fw=8owA z#FHA$f5wvb7 zT}_EmX9uJ!7ZF^o-L!A^vFCb9)mW5{qX%ryg9brNh)jDB)<{C!R#uHZEJf3XrpTh2 z$7WsP3&fg{Au7~A{U+CI)!TF7{4Bb482tX%Ok}6usX=fYW105~s%A)be?C^CKyH6@ zd(kG6NxK?lmMFDMZ!Y8W@q?(Sl(UTWC;Pg;AA|G|+LO~`F>P!st82?^l^JG(=mrC^ zCA6)kYoecB%9XQlwe3v~ZF^mqqtH7gce5Ynj;dQ? zRryrYrha9d)nWj?q4zHxXDAPGj!0lLAx;QMo63|DmT8plnCAC}Fpi)6*%CmYEFyxQ zfbfv7O^GL5mP6!`vW686k+HTx?D85kucB5t2G$&j?Ld6EZ$(7oUg`882oX7t9TW!0TE0|%Osg;4@W`Q?F zI{CCu_SX>H@ByG{oI^@#l6C?MJj-Zxjx~-k+arP_k&uC+`WZo|P)NX{^5FHfD|zcx zRYc++@QRK9DAv;gbl@N7RaG;n>nHm#h+)6GuQT)R1a=BXDpeO|X_BOxV#s_w8bS}9 z;{v}2+zVK?}A0BZ(Y=FLT?QQ z&yOyOV~EtiTUG-XmVZn|HPHyHwC>qOcw)%8&1`gU@6Swa9*&jtH-nyS~plqgNIO{4HGk>-Ddi|wBE%n>Vct&q? zl;h>T=OGqef)k?4cB8X=)#k3=#Au}KSGlF7%Nt&JAXZ<}0lrXgZ( zMSkrRIybxlSSnhqK~Qa;h-+}Q*#wV7ZdidgURTt{rgQi2_dfV-nav`feYud;osCW~ z^!_g9-?JD+#H~pLl6r?F=a(yS{(Y~XR91=c!AI|ooZ%-kD?>Hmwf0+F#mo*5*QW-N zLhT_Ox$+Bo6dRBo0CL|S$N(eEwQ<1u(IM)DJVSU3Ez#3DiJp_)zfmy$6*0$x@dy{z z5wn|r?_zch{A#HDg`x!4Es{GH%Y=Kq*A*zA;saV(tHp#4Hlgy!?$&9Tx-@>bX@QIS zKa{Tv@wIMb6t+uvG-C+86zKZ z!m>{8K)S$xKq6RaKM&2Q!;T4&;n!)Vx<1EzwrXjHpQ;-uk2>06jWwaqeBd3wAuhV? 
ztY*y-Ic{<#J|G;kq1z4=ozXmqgorbco~u%dc0cQXb3ttI9O4vExP9^Wb~pY8@@-}U zRxXaf^o4YtKhat;wBov9eG8|i$;Ap{w2fR7UI3`F6aIL$cirg@W;b@~#eoL%uO(5} zIF6t*>^QkR6y2pEqOj#9|SuC^K@+yW;@sKvLLhpCf8ggL|X_FXatqNE(o~$xjwN9;T@YPA9J!6apW;TTc z4|sg%cUyLrR5AK~Avc;5MTaAF0tCS+YK}D(T$`Q>`H6|?9;Cobh8Xw3LIz^7PPYBH zdN2dc0a@Gg>`VND&N)7zm_6qWw^!oCkgl|sh;BO*h)9p|I|l(Ik&aY|{9kYE=BiO` zYvi{d?dnr#--w4qDQ{{rHWgarV;cMk54y5mG9)o9iu{3%t=`>!0f=85yVbr#P%C{bDB9_Fa{HmPd(Wjr(^aBiQZt&Q>i2@!<_iq1n8r^EtRN{mK3hF?o) ze0I${GfZa4!cQHgBEOwKOK0GHwiKERyVE{?pQumieQl98Nx z%~U+iYaHHyQlaHUS>u^>uep5+$D;2gxf|u#)RyM`ho}AH4fBwmCnKsNiWKHT{ z_gDXS?ok}T`Xz`lRpT25xnkH_4Gta!f17EeKJ#2cSaJ^yhCk zU(4;_F)GI*08vgx(%9T)N#BEV(&?S{q%eHN7W?Fhmw6wTG6E;{*KCm7L9M^744k^B zY~L|mk5H+QqbJeyMl-k2s%1BZLXZ8SRK-Yzf=t7tY&6Yj?Sa6EM_W7&J zLFypuUCxjkaot4t_+1b*B4W>h>-nKX<{=e*3Pd7f0F^tR-r}NWyBV+GIMrgkPmG-Z zb=z;5N*|qk$rW8bO=+E=3(wwO?htb+hHy8WOeIP-@s2j?v47!1OMO~D{fD4EYJ1lT z!}~DfbQU;2yQ1LYfX!r+hAaDwRE5-EABo?91czbWy_;muV4kQdp|4EZ3<(AVfqoBy-5Ctq7QTX59NUZq~*5L9U5TbWqZ&0=PtxHp-T%z z^?*Im<_@O9br^Ffv-XNz466i3lKNyiVlUEsg#)l{_9CG45+(8G*O~VkUo(2H{ZgO) z)!NlAhF^Ww+}S(1*VUzLhi|baKK$ttTR?g~5Ws!PPuntss0ns7ct@CYGC^Mk3K1Aa z*?lWE|EOvl5(TyQi#;gs#e}xNP&L)ay9|>vSovs?Qlq2$X8n+_Pw;^$z1%-()%00R@H!-@*N2qJj&rhB=lF<|Ja;^>_bw{FJzgTQ! z5aRxJ*rcyMkbbh#I}KU&a%z2;kBcX5&@rVGyKE;^49byJL439pi^*UGfb`j|RSoWV zi>wq|2bnuT-ZW}3mShS@XyOQeUbR`=hoe``Dk=(hXusjjyP#L_v|;M0cJ!-hS`ubZ zTVxjbD_+bGr@?k|cMg7j-76LSBCcOn#8uO>UvMSlf{<|ME z>zp@p`v&YO!sUat;Us>qrr(+(E*ssSc<#!IXGNZ-MutL3;u?)q1K~JNul+e*;`~j@ z{k7qC9)5Ji10#@1k$a6b(8?&=+3d((O00yuD8T)3vF4e~ZMZ4$H4>#WTsPu$8Ma1t5}f*@p&pV_xz# z&TA5h@3f4kcwnYFZMOwX%&?95QoIXvKA|2;FZWJ{7;L;r{feW-S9jQY!eMro0cMJa@6MS6LY@Q$9k!f8uQu1i^v)U#DEHQ$RRBNHqelp877cN(A_0`c*W(;EZ z`8T5mDkNO!BPA5~FSDcJT7aA~vex@uQXl`lXq@M|B*CGRexE z*6jo$;rd6h$}X1;{y3QG_4L`yzL=7G&2*Vh^yd6wDZ{Q!rg88Tvrz-1#S-1KKlj*p z7*;0Z@Mjp`2kG^3OcivVh5A5rsMSHU?X)ta{OTH67TzFO;K5wl+>(+0OnLACV}&Z2 z3K2-)3TH6e)x>eV87~{rCq4DSA;&C(W&y=J2aa0@*0F|#c9u;or{4?lCLfEva8y%F zQgOyqM_JmRn>`ik50Enr%80wzC1Z0R^Pn_GeCCJY9lMj8njgZ%nN(Y23d1aJFFR6QARhu{IQxoz&~G*1&uB&{x+dVV;t|H)t|Ev{-^42w~k zrqc7sZR8bGf@bVe#Rl>Vm<<#7Y~V!h4SDo4bTFXI`eYyS_cbza2N+{~@LrTsEjgfD z@a%m2&90w~7b}JNy@K6L)pT0_F1(xfPiHu$sIS+e_6e(mN8@|oEcdCSE!3{k7w<`8 z**jvp`<(Yb6dWLn_q+d4nYjA_g?1=wKu%F{X^Lj zB2;pj>1Bh)cQ$f48uc68i*{8r{b3WZbfksm$Bwtt5s4uwVv1@h3+G;x5EmV{QXf}0 zxeYI1U&=O`7;Ems3AnFfY{o1?%GW32nSJM4r6tNdLL+Ksp4_}e16vB4;flRWU%=t( zmBjZCcJm!7?fmWIxQSmXvA|h?V^xDpH58FAQoOp)@4pH@tOqK?A7{Cf+$lWuT;`T; z$~~#ST-QK0jUzt2{C;-xO^e6#wmn#L2JP297yZCJg|ny20us>76=^pgC1;wVJbP;U3@3gY*}&losmlAQ3_ilUn~T-5M<U6tGHTCIlp=hc;2!QQ< zr0A1=*)u6p4XNW{f`9sWQ0h=%(|54xugbMQVfersm!C;Rllk-2A#=tF$Pz$+-Gxga z>y3ster*0hZbSR6+(}7i1|Vz$yZ$9xcPMH*CJZLC8MiolsG}@B05(@*alsC)o{OSd z1*2aKOB)OG7wfIhhGaL_NADY%KNK|>rR-NnrFR&UV!a3JrSUW*hPpdmrJOaWEY&Zp z_~bB&^nq}`9>^7Z(oCLzk>b>8+FKPlz;kpvEFhh^D8%A zf46zv$!>hky$B5?a8%nIAHjfB87}7jMVjtng(E8?09)CsuOsaZPfO4N%U~^b_dWC@ zQW106Iwa*KIri5z=nQAKI|@j@U`l--!rd@O_M>rOJvm4_XlqAMLoQIP`m#4eCFYMe zo;p0JX57rnrOZ(2u5~MJ)pqCy(QS(}O7CGlbL_S84d+>P;Y8|>J^!&D>ft3IC0xL` zHdC?tAEPxZ(#(>~uaM|=Sr$xv;_5h_-%r`AczA6fL#9i-_`iWcU;aZuma<+uo-?-{ zoz(krC-~R2AFNpv31=@7IpN(5cDI=bY0AgWqh=Gv>ds= zH`Sa0M3trwTmfqHW~?>S71K}w<)_*8f3S6mqWsD()?b=jNv*w7#9nKOyuRTC*f64cgZgldJ z>suarsk6%dcUO>4%}8@@#;*U1ttdJ{#*-Ppne`{2+jx1*TORH4E@=MfeV6)`<5cv< z$kVT7po^i6K_nn<$or_EiwyaZGGkLv);R!%W!2Na{BM9eNv~HH%77r3uRbije6 z&V)iV;0Zg^5jT~NQPm-%|Dm8|y+zgpovt(3re`+7nQNE>k~WsvQA%xXO?36)RI!8jD;IYr^dE6DXkHn-&+ z&~ef>HuTdsizr&~SH&mJG*Mn@Qu&uahrA|WI+_qsZzQr3h4JmI=JIfQoU=%ym0n!p zRDr2g>C?9-^_-ABia$lefrb`uHOhN(BrOv zfz*Nu)o7l$H#C|~0+`*xGmo!o)WgZu#*XGHPgSe)l{I6!HACSJ;N5OaZ;k<9)&5lg 
zg^z(%;28f4Lp3(?SfMw1gtY`+)${CpR^9Q97qTcWcN{{1^1;H>m>KCWws^6y79M@PjN;KR zwwc$E@Oe9zVvTY+;ZO7Mw{G%;C4WR@Ks2 z_JBc78Efn$o8!^XHsjn*84FqM}3rkJ%X zh*aD;SD%&IRXf`TPV&Yo+gSJ~D_3vBfCJ}IP;$-5II!Pud$%sq6gSUxQ;+OB%V!JA zEmB|cmNVrSJRX^5)ZV<*an1@@LEmEtGa{>hs~=mI!lB=*yu_y+nym3`v2gwz|C;Z@ zk<$E0hKs9XQ*SIA9{t@eCk)VC2-XD8q^ z>8NnCkEq-Jct@ni+JJTjXoi&@|3vZNuKLFyAFZiUgZq$&gzV0o>Y@I}AQw|^36T|i zExgU>iCGa8$NSp16ldaC8v9c&p}f$cWm%RL77S%t%YQi4j=-uwk)Vqsiu|FLwc2KCb|BGPx2n0y;6kervO`lUS3MlVj|mw>OW@ivbyn47yw1SHm(HsC=!$ zHdC!~0amd(H(#Dk@SKrG1K`?bv_W&3G8pJ4y$e#YA;+*+MpDX`eS_`A$FWHor(ZZH zI)8>v+n@k`M>V`HDMxJ41cH5GJKlRCK4@jS74eL;#Y=88<_NKt~sa75?Re)(0#p5=Pykt=ys4YmP1i>#uH7fV*=*n#3s8N}&p@5Zcdp;h5~ zq%VAvHD3De>vk--7%Et&XlM8`?4Nxj+?uW#DxA2n8B&=sfrrWyH+;bQrh%3k@us2( zyyeFAG`m8>aZx=JtpjJC?k(2m-6%oIV+`#>q%^gYlU0{ELhY7KFF^Azoq{HRR3E+ literal 0 HcmV?d00001 diff --git a/demos/python_demos/human_pose_estimation_3d_demo/human_pose_estimation_3d_demo.py b/demos/python_demos/human_pose_estimation_3d_demo/human_pose_estimation_3d_demo.py new file mode 100644 index 00000000000..7ea59b6fa7e --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/human_pose_estimation_3d_demo.py @@ -0,0 +1,148 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +""" + +from argparse import ArgumentParser, SUPPRESS +import json +import os + +import cv2 +import numpy as np + +from modules.inference_engine import InferenceEngine +from modules.input_reader import InputReader +from modules.draw import Plotter3d, draw_poses +from modules.parse_poses import parse_poses + + +def rotate_poses(poses_3d, R, t): + R_inv = np.linalg.inv(R) + for pose_id in range(poses_3d.shape[0]): + pose_3d = poses_3d[pose_id].reshape((-1, 4)).transpose() + pose_3d[0:3] = np.dot(R_inv, pose_3d[0:3] - t) + poses_3d[pose_id] = pose_3d.transpose().reshape(-1) + + return poses_3d + + +if __name__ == '__main__': + parser = ArgumentParser(description='Lightweight 3D human pose estimation demo. ' + 'Press esc to exit, "p" to (un)pause video or process next image.', + add_help=False) + args = parser.add_argument_group('Options') + args.add_argument('-h', '--help', action='help', default=SUPPRESS, + help='Show this help message and exit.') + args.add_argument('-m', '--model', + help='Required. Path to an .xml file with a trained model.', + type=str, required=True) + args.add_argument('-i', '--input', + help='Required. Path to input image, images, video file or camera id.', + nargs='+', default='') + args.add_argument('-d', '--device', + help='Optional. Specify the target device to infer on: CPU, GPU, FPGA, HDDL or MYRIAD. ' + 'The demo will look for a suitable plugin for device specified ' + '(by default, it is CPU).', + type=str, default='CPU') + args.add_argument('--height_size', help='Optional. Network input layer height size.', type=int, default=256) + args.add_argument('--extrinsics_path', + help='Optional. 
Path to file with camera extrinsics.', + type=str, default=None) + args.add_argument('--fx', type=np.float32, default=-1, help='Optional. Camera focal length.') + args.add_argument('--no_show', help='Optional. Do not display output.', action='store_true') + args = parser.parse_args() + + if args.input == '': + raise ValueError('Please, provide input data.') + + stride = 8 + inference_engine = InferenceEngine(args.model, args.device, stride) + canvas_3d = np.zeros((720, 1280, 3), dtype=np.uint8) + plotter = Plotter3d(canvas_3d.shape[:2]) + canvas_3d_window_name = 'Canvas 3D' + if not args.no_show: + cv2.namedWindow(canvas_3d_window_name) + cv2.setMouseCallback(canvas_3d_window_name, Plotter3d.mouse_callback) + + file_path = args.extrinsics_path + if file_path is None: + file_path = os.path.join(os.path.dirname(__file__), 'data', 'extrinsics.json') + with open(file_path, 'r') as f: + extrinsics = json.load(f) + R = np.array(extrinsics['R'], dtype=np.float32) + t = np.array(extrinsics['t'], dtype=np.float32) + + frame_provider = InputReader(args.input) + is_video = frame_provider.is_video + base_height = args.height_size + fx = args.fx + + delay = 1 + esc_code = 27 + p_code = 112 + space_code = 32 + mean_time = 0 + for frame in frame_provider: + current_time = cv2.getTickCount() + input_scale = base_height / frame.shape[0] + scaled_img = cv2.resize(frame, dsize=None, fx=input_scale, fy=input_scale) + if fx < 0: # Focal length is unknown + fx = np.float32(0.8 * frame.shape[1]) + + inference_result = inference_engine.infer(scaled_img) + poses_3d, poses_2d = parse_poses(inference_result, input_scale, stride, fx, is_video) + edges = [] + if len(poses_3d) > 0: + poses_3d = rotate_poses(poses_3d, R, t) + poses_3d_copy = poses_3d.copy() + x = poses_3d_copy[:, 0::4] + y = poses_3d_copy[:, 1::4] + z = poses_3d_copy[:, 2::4] + poses_3d[:, 0::4], poses_3d[:, 1::4], poses_3d[:, 2::4] = -z, x, -y + + poses_3d = poses_3d.reshape(poses_3d.shape[0], 19, -1)[:, :, 0:3] + edges = (Plotter3d.SKELETON_EDGES + 19 * np.arange(poses_3d.shape[0]).reshape((-1, 1, 1))).reshape((-1, 2)) + plotter.plot(canvas_3d, poses_3d, edges) + + draw_poses(frame, poses_2d) + current_time = (cv2.getTickCount() - current_time) / cv2.getTickFrequency() + if mean_time == 0: + mean_time = current_time + else: + mean_time = mean_time * 0.95 + current_time * 0.05 + cv2.putText(frame, 'FPS: {}'.format(int(1 / mean_time * 10) / 10), + (40, 80), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255)) + if args.no_show: + continue + cv2.imshow(canvas_3d_window_name, canvas_3d) + cv2.imshow('ICV 3D Human Pose Estimation', frame) + + key = cv2.waitKey(delay) + if key == esc_code: + break + if key == p_code: + if delay == 1: + delay = 0 + else: + delay = 1 + if delay == 0 or not is_video: # allow to rotate 3D canvas while on pause + key = 0 + while (key != p_code + and key != esc_code + and key != space_code): + plotter.plot(canvas_3d, poses_3d, edges) + cv2.imshow(canvas_3d_window_name, canvas_3d) + key = cv2.waitKey(33) + if key == esc_code: + break + else: + delay = 1 diff --git a/demos/python_demos/human_pose_estimation_3d_demo/models.lst b/demos/python_demos/human_pose_estimation_3d_demo/models.lst new file mode 100644 index 00000000000..3dbf0bd42e2 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/models.lst @@ -0,0 +1,2 @@ +# This file can be used with the --list option of the model downloader. 
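+# For example, from an Open Model Zoo checkout (the downloader script location below is an
+# assumption and may differ between releases):
+#   python3 tools/downloader/downloader.py --list models.lst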
+human-pose-estimation-3d-0001 diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/__init__.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/draw.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/draw.py new file mode 100644 index 00000000000..b3a3b12889d --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/modules/draw.py @@ -0,0 +1,115 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +""" + +import math + +import cv2 +import numpy as np + + +previous_position = [] +theta, phi = math.pi / 4, -math.pi / 6 +should_rotate = False +scale_dx = 800 +scale_dy = 800 + + +class Plotter3d: + SKELETON_EDGES = np.array([[11, 10], [10, 9], [9, 0], [0, 3], [3, 4], [4, 5], [0, 6], [6, 7], [7, 8], [0, 12], + [12, 13], [13, 14], [0, 1], [1, 15], [15, 16], [1, 17], [17, 18]]) + + def __init__(self, canvas_size, origin=(0.5, 0.5), scale=1): + self.origin = np.array([origin[1] * canvas_size[1], origin[0] * canvas_size[0]], dtype=np.float32) # x, y + self.scale = np.float32(scale) + self.theta = 0 + self.phi = 0 + axis_length = 200 + axes = [ + np.array([[-axis_length/2, -axis_length/2, 0], [axis_length/2, -axis_length/2, 0]], dtype=np.float32), + np.array([[-axis_length/2, -axis_length/2, 0], [-axis_length/2, axis_length/2, 0]], dtype=np.float32), + np.array([[-axis_length/2, -axis_length/2, 0], [-axis_length/2, -axis_length/2, axis_length]], dtype=np.float32)] + step = 20 + for step_id in range(axis_length // step + 1): # add grid + axes.append(np.array([[-axis_length / 2, -axis_length / 2 + step_id * step, 0], + [axis_length / 2, -axis_length / 2 + step_id * step, 0]], dtype=np.float32)) + axes.append(np.array([[-axis_length / 2 + step_id * step, -axis_length / 2, 0], + [-axis_length / 2 + step_id * step, axis_length / 2, 0]], dtype=np.float32)) + self.axes = np.array(axes) + + def plot(self, img, vertices, edges): + global theta, phi + img.fill(0) + R = self._get_rotation(theta, phi) + self._draw_axes(img, R) + if len(edges) != 0: + self._plot_edges(img, vertices, edges, R) + + def _draw_axes(self, img, R): + axes_2d = np.dot(self.axes, R) + axes_2d = axes_2d * self.scale + self.origin + for axe in axes_2d: + axe = axe.astype(int) + cv2.line(img, tuple(axe[0]), tuple(axe[1]), (128, 128, 128), 1, cv2.LINE_AA) + + def _plot_edges(self, img, vertices, edges, R): + vertices_2d = np.dot(vertices, R) + edges_vertices = vertices_2d.reshape((-1, 2))[edges] * self.scale + self.origin + for edge_vertices in edges_vertices: + edge_vertices = edge_vertices.astype(int) + cv2.line(img, tuple(edge_vertices[0]), tuple(edge_vertices[1]), (255, 255, 255), 1, cv2.LINE_AA) + + def _get_rotation(self, theta, phi): + sin, cos = math.sin, math.cos + return np.array([ + [ cos(theta), sin(theta) * sin(phi)], + [-sin(theta), cos(theta) * sin(phi)], + [ 0, -cos(phi)] + ], dtype=np.float32) # 
transposed + + @staticmethod + def mouse_callback(event, x, y, flags, params): + global previous_position, theta, phi, should_rotate, scale_dx, scale_dy + if event == cv2.EVENT_LBUTTONDOWN: + previous_position = [x, y] + should_rotate = True + if event == cv2.EVENT_MOUSEMOVE and should_rotate: + theta += (x - previous_position[0]) / scale_dx * 2 * math.pi + phi -= (y - previous_position[1]) / scale_dy * 2 * math.pi * 2 + phi = max(min(math.pi / 2, phi), -math.pi / 2) + previous_position = [x, y] + if event == cv2.EVENT_LBUTTONUP: + should_rotate = False + + +body_edges = np.array( + [[0, 1], # neck - nose + [1, 16], [16, 18], # nose - l_eye - l_ear + [1, 15], [15, 17], # nose - r_eye - r_ear + [0, 3], [3, 4], [4, 5], # neck - l_shoulder - l_elbow - l_wrist + [0, 9], [9, 10], [10, 11], # neck - r_shoulder - r_elbow - r_wrist + [0, 6], [6, 7], [7, 8], # neck - l_hip - l_knee - l_ankle + [0, 12], [12, 13], [13, 14]]) # neck - r_hip - r_knee - r_ankle + + +def draw_poses(img, poses_2d): + for pose in poses_2d: + pose = np.array(pose[0:-1]).reshape((-1, 3)).transpose() + was_found = pose[2] > 0 + for edge in body_edges: + if was_found[edge[0]] and was_found[edge[1]]: + cv2.line(img, tuple(pose[0:2, edge[0]].astype(np.int32)), tuple(pose[0:2, edge[1]].astype(np.int32)), + (255, 255, 0), 4, cv2.LINE_AA) + for kpt_id in range(pose.shape[1]): + if pose[2, kpt_id] != -1: + cv2.circle(img, tuple(pose[0:2, kpt_id].astype(np.int32)), 3, (0, 255, 255), -1, cv2.LINE_AA) diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/inference_engine.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/inference_engine.py new file mode 100644 index 00000000000..cffad81ed40 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/modules/inference_engine.py @@ -0,0 +1,53 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+""" + +import os + +import numpy as np + +from openvino.inference_engine import IENetwork, IECore + + +class InferenceEngine: + def __init__(self, net_model_xml_path, device, stride): + self.device = device + self.stride = stride + + net_model_bin_path = os.path.splitext(net_model_xml_path)[0] + '.bin' + self.net = IENetwork(model=net_model_xml_path, weights=net_model_bin_path) + required_input_key = {'data'} + assert required_input_key == set(self.net.inputs.keys()), \ + 'Demo supports only topologies with the following input key: {}'.format(', '.join(required_input_key)) + required_output_keys = {'features', 'heatmaps', 'pafs'} + assert required_output_keys.issubset(self.net.outputs.keys()), \ + 'Demo supports only topologies with the following output keys: {}'.format(', '.join(required_output_keys)) + + self.ie = IECore() + self.exec_net = self.ie.load_network(network=self.net, num_requests=1, device_name=device) + + def infer(self, img): + img = img[0:img.shape[0] - (img.shape[0] % self.stride), + 0:img.shape[1] - (img.shape[1] % self.stride)] + input_layer = next(iter(self.net.inputs)) + n, c, h, w = self.net.inputs[input_layer].shape + if h != img.shape[0] or w != img.shape[1]: + self.net.reshape({input_layer: (n, c, img.shape[0], img.shape[1])}) + self.exec_net = self.ie.load_network(network=self.net, num_requests=1, device_name=self.device) + img = np.transpose(img, (2, 0, 1))[None, ] + + inference_result = self.exec_net.infer(inputs={'data': img}) + + inference_result = (inference_result['features'][0], + inference_result['heatmaps'][0], inference_result['pafs'][0]) + return inference_result diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/input_reader.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/input_reader.py new file mode 100644 index 00000000000..98e38ab20cc --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/modules/input_reader.py @@ -0,0 +1,73 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+""" + +import cv2 + + +class InputReader: + def __init__(self, file_names): + self.is_video = False + self._input_reader = ImageReader(file_names) + # check if video + img = cv2.imread(file_names[0], cv2.IMREAD_COLOR) + if img is None: + self.is_video = True + self._input_reader = VideoReader(file_names[0]) + + def __iter__(self): + return self._input_reader.__iter__() + + def __next__(self): + return self._input_reader.__next__() + + +class ImageReader: + def __init__(self, file_names): + self.file_names = file_names + self.max_idx = len(file_names) + + def __iter__(self): + self.idx = 0 + return self + + def __next__(self): + if self.idx == self.max_idx: + raise StopIteration + img = cv2.imread(self.file_names[self.idx], cv2.IMREAD_COLOR) + if img.size == 0: + raise IOError('Image {} cannot be read'.format(self.file_names[self.idx])) + self.idx = self.idx + 1 + return img + + +class VideoReader: + def __init__(self, file_name): + try: # OpenCV needs int to read from webcam + self.file_name = int(file_name) + except ValueError: + self.file_name = file_name + + def __iter__(self): + self.cap = cv2.VideoCapture(self.file_name) + self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920) + self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080) + if not self.cap.isOpened(): + raise IOError('Video {} cannot be opened'.format(self.file_name)) + return self + + def __next__(self): + was_read, img = self.cap.read() + if not was_read: + raise StopIteration + return img diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/one_euro_filter.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/one_euro_filter.py new file mode 100644 index 00000000000..156279bbea8 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/modules/one_euro_filter.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+""" + +import math + + +def get_alpha(rate=30, cutoff=1): + tau = 1 / (2 * math.pi * cutoff) + te = 1 / rate + return 1 / (1 + tau / te) + + +class LowPassFilter: + def __init__(self): + self.x_previous = None + + def __call__(self, x, alpha=0.5): + if self.x_previous is None: + self.x_previous = x + return x + x_filtered = alpha * x + (1 - alpha) * self.x_previous + self.x_previous = x_filtered + return x_filtered + + +class OneEuroFilter: + def __init__(self, freq=15, mincutoff=1, beta=1, dcutoff=1): + self.freq = freq + self.mincutoff = mincutoff + self.beta = beta + self.dcutoff = dcutoff + self.filter_x = LowPassFilter() + self.filter_dx = LowPassFilter() + self.x_previous = None + self.dx = None + + def __call__(self, x): + if self.dx is None: + self.dx = 0 + else: + self.dx = (x - self.x_previous) * self.freq + dx_smoothed = self.filter_dx(self.dx, get_alpha(self.freq, self.dcutoff)) + cutoff = self.mincutoff + self.beta * abs(dx_smoothed) + x_filtered = self.filter_x(x, get_alpha(self.freq, cutoff)) + self.x_previous = x + return x_filtered + + +if __name__ == '__main__': + filter = OneEuroFilter(freq=15, beta=0.1) + for val in range(10): + x = val + (-1)**(val % 2) + x_filtered = filter(x) + print(x_filtered, x) diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/parse_poses.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/parse_poses.py new file mode 100644 index 00000000000..dc16c238415 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/modules/parse_poses.py @@ -0,0 +1,161 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+""" + +import numpy as np + +from modules.pose import Pose, propagate_ids +from pose_extractor import extract_poses + +AVG_PERSON_HEIGHT = 180 + +# pelvis (body center) is missing, id == 2 +map_id_to_panoptic = [1, 0, 9, 10, 11, 3, 4, 5, 12, 13, 14, 6, 7, 8, 15, 16, 17, 18] + +limbs = [[18, 17, 1], + [16, 15, 1], + [5, 4, 3], + [8, 7, 6], + [11, 10, 9], + [14, 13, 12]] + + +def get_root_relative_poses(inference_results): + features, heatmap, paf_map = inference_results + + upsample_ratio = 4 + found_poses = extract_poses(heatmap[0:-1], paf_map, upsample_ratio) + # scale coordinates to features space + found_poses[:, 0:-1:3] /= upsample_ratio + found_poses[:, 1:-1:3] /= upsample_ratio + + poses_2d = [] + num_kpt_panoptic = 19 + num_kpt = 18 + for pose_id in range(found_poses.shape[0]): + if found_poses[pose_id, 5] == -1: # skip pose if does not found neck + continue + pose_2d = np.ones(num_kpt_panoptic * 3 + 1, dtype=np.float32) * -1 # +1 for pose confidence + for kpt_id in range(num_kpt): + if found_poses[pose_id, kpt_id * 3] != -1: + x_2d, y_2d, conf = found_poses[pose_id, kpt_id * 3:(kpt_id + 1) * 3] + pose_2d[map_id_to_panoptic[kpt_id] * 3] = x_2d # just repacking + pose_2d[map_id_to_panoptic[kpt_id] * 3 + 1] = y_2d + pose_2d[map_id_to_panoptic[kpt_id] * 3 + 2] = conf + pose_2d[-1] = found_poses[pose_id, -1] + poses_2d.append(pose_2d) + poses_2d = np.array(poses_2d) + + keypoint_treshold = 0.1 + poses_3d = np.ones((len(poses_2d), num_kpt_panoptic * 4), dtype=np.float32) * -1 + for pose_id in range(poses_3d.shape[0]): + if poses_2d[pose_id, 2] <= keypoint_treshold: + continue + pose_3d = poses_3d[pose_id] + neck_2d = poses_2d[pose_id, 0:2].astype(np.int32) + # read all pose coordinates at neck location + for kpt_id in range(num_kpt_panoptic): + map_3d = features[kpt_id * 3:(kpt_id + 1) * 3] + pose_3d[kpt_id * 4] = map_3d[0, neck_2d[1], neck_2d[0]] + pose_3d[kpt_id * 4 + 1] = map_3d[1, neck_2d[1], neck_2d[0]] + pose_3d[kpt_id * 4 + 2] = map_3d[2, neck_2d[1], neck_2d[0]] + pose_3d[kpt_id * 4 + 3] = poses_2d[pose_id, kpt_id * 3 + 2] + + # refine keypoints coordinates at corresponding limbs locations + for limb in limbs: + for kpt_id_from in limb: + if poses_2d[pose_id, kpt_id_from * 3 + 2] <= keypoint_treshold: + continue + for kpt_id_where in limb: + kpt_from_2d = poses_2d[pose_id, kpt_id_from * 3:kpt_id_from * 3 + 2].astype(np.int32) + map_3d = features[kpt_id_where * 3:(kpt_id_where + 1) * 3] + pose_3d[kpt_id_where * 4] = map_3d[0, kpt_from_2d[1], kpt_from_2d[0]] + pose_3d[kpt_id_where * 4 + 1] = map_3d[1, kpt_from_2d[1], kpt_from_2d[0]] + pose_3d[kpt_id_where * 4 + 2] = map_3d[2, kpt_from_2d[1], kpt_from_2d[0]] + break + + poses_3d[:, 0::4] *= AVG_PERSON_HEIGHT + poses_3d[:, 1::4] *= AVG_PERSON_HEIGHT + poses_3d[:, 2::4] *= AVG_PERSON_HEIGHT + return poses_3d, poses_2d + + +previous_poses_2d = [] + + +def parse_poses(inference_results, input_scale, stride, fx, is_video=False): + global previous_poses_2d + features = inference_results[0] + poses_3d, poses_2d = get_root_relative_poses(inference_results) + poses_2d_scaled = [] + for pose_2d in poses_2d: + num_kpt = (pose_2d.shape[0] - 1) // 3 + pose_2d_scaled = np.ones(pose_2d.shape[0], dtype=np.float32) * -1 + for kpt_id in range(num_kpt): + if pose_2d[kpt_id * 3] != -1: + pose_2d_scaled[kpt_id * 3] = pose_2d[kpt_id * 3] * stride / input_scale + pose_2d_scaled[kpt_id * 3 + 1] = pose_2d[kpt_id * 3 + 1] * stride / input_scale + pose_2d_scaled[kpt_id * 3 + 2] = pose_2d[kpt_id * 3 + 2] + pose_2d_scaled[-1] = pose_2d[-1] + 
poses_2d_scaled.append(pose_2d_scaled) + + if is_video: # track poses ids + current_poses_2d = [] + for pose_2d_scaled in poses_2d_scaled: + pose_keypoints = np.ones((Pose.num_kpts, 2), dtype=np.int32) * -1 + for kpt_id in range(Pose.num_kpts): + if pose_2d_scaled[kpt_id * 3] != -1.0: # keypoint was found + pose_keypoints[kpt_id, 0:2] = pose_2d_scaled[kpt_id * 3:kpt_id * 3 + 2].astype(np.int32) + pose = Pose(pose_keypoints, pose_2d_scaled[-1]) + current_poses_2d.append(pose) + propagate_ids(previous_poses_2d, current_poses_2d) + previous_poses_2d = current_poses_2d + + translated_poses_3d = [] + # translate poses + for pose_id in range(poses_3d.shape[0]): + pose_3d = poses_3d[pose_id].reshape((-1, 4)).transpose() + pose_2d = poses_2d[pose_id][0:-1].reshape((-1, 3)).transpose() + num_valid = np.count_nonzero(pose_2d[2] != -1) + pose_3d_valid = np.zeros((3, num_valid), dtype=np.float32) + pose_2d_valid = np.zeros((2, num_valid), dtype=np.float32) + valid_id = 0 + for kpt_id in range(pose_3d.shape[1]): + if pose_2d[2, kpt_id] == -1: + continue + pose_3d_valid[:, valid_id] = pose_3d[0:3, kpt_id] + pose_2d_valid[:, valid_id] = pose_2d[0:2, kpt_id] + valid_id += 1 + + pose_2d_valid[0] = pose_2d_valid[0] - features.shape[2]/2 + pose_2d_valid[1] = pose_2d_valid[1] - features.shape[1]/2 + mean_3d = np.expand_dims(pose_3d_valid.mean(axis=1), axis=1) + mean_2d = np.expand_dims(pose_2d_valid.mean(axis=1), axis=1) + numerator = np.trace(np.dot((pose_3d_valid[0:2] - mean_3d[0:2]).transpose(), + pose_3d_valid[0:2] - mean_3d[0:2])).sum() + numerator = np.sqrt(numerator) + denominator = np.sqrt(np.trace(np.dot((pose_2d_valid[0:2] - mean_2d[0:2]).transpose(), + pose_2d_valid[0:2] - mean_2d[0:2])).sum()) + mean_2d = np.array([mean_2d[0, 0], mean_2d[1, 0], fx * input_scale / stride]) + mean_3d = np.array([mean_3d[0, 0], mean_3d[1, 0], 0]) + translation = numerator / denominator * mean_2d - mean_3d + + if is_video: + translation = current_poses_2d[pose_id].filter(translation) + for kpt_id in range(19): + pose_3d[0, kpt_id] = pose_3d[0, kpt_id] + translation[0] + pose_3d[1, kpt_id] = pose_3d[1, kpt_id] + translation[1] + pose_3d[2, kpt_id] = pose_3d[2, kpt_id] + translation[2] + translated_poses_3d.append(pose_3d.transpose().reshape(-1)) + + return np.array(translated_poses_3d), np.array(poses_2d_scaled) diff --git a/demos/python_demos/human_pose_estimation_3d_demo/modules/pose.py b/demos/python_demos/human_pose_estimation_3d_demo/modules/pose.py new file mode 100644 index 00000000000..c98850199b1 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/modules/pose.py @@ -0,0 +1,107 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+""" + +import cv2 +import numpy as np + +from modules.one_euro_filter import OneEuroFilter + + +class Pose: + num_kpts = 18 + kpt_names = ['neck', 'nose', + 'l_sho', 'l_elb', 'l_wri', 'l_hip', 'l_knee', 'l_ank', + 'r_sho', 'r_elb', 'r_wri', 'r_hip', 'r_knee', 'r_ank', + 'r_eye', 'l_eye', + 'r_ear', 'l_ear'] + sigmas = np.array([.79, .26, .79, .72, .62, 1.07, .87, .89, .79, .72, .62, 1.07, .87, .89, .25, .25, .35, .35], + dtype=np.float32) / 10.0 + vars = (sigmas * 2) ** 2 + last_id = -1 + color = [0, 224, 255] + + def __init__(self, keypoints, confidence): + super().__init__() + self.keypoints = keypoints + self.confidence = confidence + found_keypoints = np.zeros((np.count_nonzero(keypoints[:, 0] != -1), 2), dtype=np.int32) + found_kpt_id = 0 + for kpt_id in range(keypoints.shape[0]): + if keypoints[kpt_id, 0] == -1: + continue + found_keypoints[found_kpt_id] = keypoints[kpt_id] + found_kpt_id += 1 + self.bbox = cv2.boundingRect(found_keypoints) + self.id = None + self.translation_filter = [OneEuroFilter(freq=80, beta=0.01), + OneEuroFilter(freq=80, beta=0.01), + OneEuroFilter(freq=80, beta=0.01)] + + def update_id(self, id=None): + self.id = id + if self.id is None: + self.id = Pose.last_id + 1 + Pose.last_id += 1 + + def filter(self, translation): + filtered_translation = [] + for coordinate_id in range(3): + filtered_translation.append(self.translation_filter[coordinate_id](translation[coordinate_id])) + return filtered_translation + + +def get_similarity(a, b, threshold=0.5): + num_similar_kpt = 0 + for kpt_id in range(Pose.num_kpts): + if a.keypoints[kpt_id, 0] != -1 and b.keypoints[kpt_id, 0] != -1: + distance = np.sum((a.keypoints[kpt_id] - b.keypoints[kpt_id]) ** 2) + area = max(a.bbox[2] * a.bbox[3], b.bbox[2] * b.bbox[3]) + similarity = np.exp(-distance / (2 * (area + np.spacing(1)) * Pose.vars[kpt_id])) + if similarity > threshold: + num_similar_kpt += 1 + return num_similar_kpt + + +def propagate_ids(previous_poses, current_poses, threshold=3): + """Propagate poses ids from previous frame results. Id is propagated, + if there are at least `threshold` similar keypoints between pose from previous frame and current. 
+ + :param previous_poses: poses from previous frame with ids + :param current_poses: poses from current frame to assign ids + :param threshold: minimal number of similar keypoints between poses + :return: None + """ + current_poses_sorted_ids = list(range(len(current_poses))) + current_poses_sorted_ids = sorted( + current_poses_sorted_ids, key=lambda pose_id: current_poses[pose_id].confidence, reverse=True) # match confident poses first + mask = np.ones(len(previous_poses), dtype=np.int32) + for current_pose_id in current_poses_sorted_ids: + best_matched_id = None + best_matched_pose_id = None + best_matched_iou = 0 + for previous_pose_id in range(len(previous_poses)): + if not mask[previous_pose_id]: + continue + iou = get_similarity(current_poses[current_pose_id], previous_poses[previous_pose_id]) + if iou > best_matched_iou: + best_matched_iou = iou + best_matched_pose_id = previous_poses[previous_pose_id].id + best_matched_id = previous_pose_id + if best_matched_iou >= threshold: + mask[best_matched_id] = 0 + else: # pose not similar to any previous + best_matched_pose_id = None + current_poses[current_pose_id].update_id(best_matched_pose_id) + if best_matched_pose_id is not None: + current_poses[current_pose_id].translation_filter = previous_poses[best_matched_id].translation_filter diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/CMakeLists.txt b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/CMakeLists.txt new file mode 100644 index 00000000000..5ce6f2f74a0 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/CMakeLists.txt @@ -0,0 +1,34 @@ +# Copyright (C) 2018-2019 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# + +cmake_minimum_required(VERSION 3.10) +project(pose_extractor) + +find_package(PythonInterp 3.5 REQUIRED) +find_package(PythonLibs 3.5 REQUIRED) + +exec_program(${PYTHON_EXECUTABLE} + ARGS "-c \"import numpy; print(numpy.get_include())\"" + OUTPUT_VARIABLE NUMPY_INCLUDE_DIR + RETURN_VALUE NUMPY_NOT_FOUND + ) +if(NUMPY_NOT_FOUND) + message(FATAL_ERROR "NumPy headers not found") +endif() + +find_package(OpenCV 4 REQUIRED) + +set(CMAKE_CXX_STANDARD 11) +set(target_name pose_extractor) +add_library(${target_name} SHARED wrapper.cpp + src/extract_poses.hpp src/extract_poses.cpp + src/human_pose.hpp src/human_pose.cpp + src/peak.hpp src/peak.cpp) +target_include_directories(${target_name} PRIVATE src/ ${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIR} ${OpenCV_INCLUDE_DIRS}) +target_link_libraries(${target_name} ${PYTHON_LIBRARIES} ${OpenCV_LIBS}) +set_target_properties(${target_name} PROPERTIES PREFIX "" OUTPUT_NAME "${target_name}") +if(WIN32) + set_target_properties(${target_name} PROPERTIES SUFFIX ".pyd") +endif() + diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.cpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.cpp new file mode 100644 index 00000000000..cfa871d22f8 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.cpp @@ -0,0 +1,66 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include + +#include "extract_poses.hpp" +#include "peak.hpp" + +namespace human_pose_estimation { +static void resizeFeatureMaps(std::vector& featureMaps, int upsampleRatio) { + for (auto& featureMap : featureMaps) { + cv::resize(featureMap, featureMap, cv::Size(), + upsampleRatio, upsampleRatio, cv::INTER_CUBIC); + } +} + +class 
FindPeaksBody: public cv::ParallelLoopBody { +public: + FindPeaksBody(const std::vector& heatMaps, float minPeaksDistance, + std::vector >& peaksFromHeatMap) + : heatMaps(heatMaps), + minPeaksDistance(minPeaksDistance), + peaksFromHeatMap(peaksFromHeatMap) {} + + virtual void operator()(const cv::Range& range) const { + for (int i = range.start; i < range.end; i++) { + findPeaks(heatMaps, minPeaksDistance, peaksFromHeatMap, i); + } + } + +private: + const std::vector& heatMaps; + float minPeaksDistance; + std::vector >& peaksFromHeatMap; +}; + +std::vector extractPoses( + std::vector& heatMaps, + std::vector& pafs, + int upsampleRatio) { + resizeFeatureMaps(heatMaps, upsampleRatio); + resizeFeatureMaps(pafs, upsampleRatio); + std::vector > peaksFromHeatMap(heatMaps.size()); + float minPeaksDistance = 3.0f; + FindPeaksBody findPeaksBody(heatMaps, minPeaksDistance, peaksFromHeatMap); + cv::parallel_for_(cv::Range(0, static_cast(heatMaps.size())), + findPeaksBody); + int peaksBefore = 0; + for (size_t heatmapId = 1; heatmapId < heatMaps.size(); heatmapId++) { + peaksBefore += static_cast(peaksFromHeatMap[heatmapId - 1].size()); + for (auto& peak : peaksFromHeatMap[heatmapId]) { + peak.id += peaksBefore; + } + } + int keypointsNumber = 18; + float midPointsScoreThreshold = 0.05f; + float foundMidPointsRatioThreshold = 0.8f; + int minJointsNumber = 3; + float minSubsetScore = 0.2f; + std::vector poses = groupPeaksToPoses( + peaksFromHeatMap, pafs, keypointsNumber, midPointsScoreThreshold, + foundMidPointsRatioThreshold, minJointsNumber, minSubsetScore); + return poses; +} +} // namespace human_pose_estimation diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.hpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.hpp new file mode 100644 index 00000000000..e9658656c73 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/extract_poses.hpp @@ -0,0 +1,16 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include + +#include + +#include "human_pose.hpp" + +namespace human_pose_estimation { +std::vector extractPoses( + std::vector& heatMaps, + std::vector& pafs, + int upsampleRatio); +} // namespace human_pose_estimation diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.cpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.cpp new file mode 100644 index 00000000000..05c7e92989c --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.cpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include + +#include "human_pose.hpp" + +namespace human_pose_estimation { +HumanPose::HumanPose(const std::vector& keypoints, + const float& score) + : keypoints(keypoints), + score(score) {} +} // namespace human_pose_estimation + diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.hpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.hpp new file mode 100644 index 00000000000..75abc562096 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/human_pose.hpp @@ -0,0 +1,20 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include + +#include + +namespace human_pose_estimation { +struct HumanPose { + HumanPose(const 
std::vector& keypoints = std::vector(), + const float& score = 0); + + std::vector keypoints; + float score; +}; +} // namespace human_pose_estimation + diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.cpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.cpp new file mode 100644 index 00000000000..e05d9a3ebe1 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.cpp @@ -0,0 +1,327 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include +#include +#include + +#include "peak.hpp" + +namespace human_pose_estimation { +Peak::Peak(const int id, const cv::Point2f& pos, const float score) + : id(id), + pos(pos), + score(score) {} + +HumanPoseByPeaksIndices::HumanPoseByPeaksIndices(const int keypointsNumber) + : peaksIndices(std::vector(keypointsNumber, -1)), + nJoints(0), + score(0.0f) {} + +TwoJointsConnection::TwoJointsConnection(const int firstJointIdx, + const int secondJointIdx, + const float score) + : firstJointIdx(firstJointIdx), + secondJointIdx(secondJointIdx), + score(score) {} + +void findPeaks(const std::vector& heatMaps, + const float minPeaksDistance, + std::vector >& allPeaks, + int heatMapId) { + const float threshold = 0.1f; + std::vector peaks; + const cv::Mat& heatMap = heatMaps[heatMapId]; + const float* heatMapData = heatMap.ptr(); + size_t heatMapStep = heatMap.step1(); + for (int y = -1; y < heatMap.rows + 1; y++) { + for (int x = -1; x < heatMap.cols + 1; x++) { + float val = 0; + if (x >= 0 + && y >= 0 + && x < heatMap.cols + && y < heatMap.rows) { + val = heatMapData[y * heatMapStep + x]; + val = val >= threshold ? val : 0; + } + + float left_val = 0; + if (y >= 0 + && x < (heatMap.cols - 1) + && y < heatMap.rows) { + left_val = heatMapData[y * heatMapStep + x + 1]; + left_val = left_val >= threshold ? left_val : 0; + } + + float right_val = 0; + if (x > 0 + && y >= 0 + && y < heatMap.rows) { + right_val = heatMapData[y * heatMapStep + x - 1]; + right_val = right_val >= threshold ? right_val : 0; + } + + float top_val = 0; + if (x >= 0 + && x < heatMap.cols + && y < (heatMap.rows - 1)) { + top_val = heatMapData[(y + 1) * heatMapStep + x]; + top_val = top_val >= threshold ? top_val : 0; + } + + float bottom_val = 0; + if (x >= 0 + && y > 0 + && x < heatMap.cols) { + bottom_val = heatMapData[(y - 1) * heatMapStep + x]; + bottom_val = bottom_val >= threshold ? 
bottom_val : 0; + } + + if ((val > left_val) + && (val > right_val) + && (val > top_val) + && (val > bottom_val)) { + peaks.push_back(cv::Point(x, y)); + } + } + } + std::sort(peaks.begin(), peaks.end(), [](const cv::Point& a, const cv::Point& b) { + return a.x < b.x; + }); + std::vector isActualPeak(peaks.size(), true); + int peakCounter = 0; + std::vector& peaksWithScoreAndID = allPeaks[heatMapId]; + for (size_t i = 0; i < peaks.size(); i++) { + if (isActualPeak[i]) { + for (size_t j = i + 1; j < peaks.size(); j++) { + if (sqrt((peaks[i].x - peaks[j].x) * (peaks[i].x - peaks[j].x) + + (peaks[i].y - peaks[j].y) * (peaks[i].y - peaks[j].y)) < minPeaksDistance) { + isActualPeak[j] = false; + } + } + peaksWithScoreAndID.push_back(Peak(peakCounter++, peaks[i], heatMap.at(peaks[i]))); + } + } +} + +std::vector groupPeaksToPoses(const std::vector >& allPeaks, + const std::vector& pafs, + const size_t keypointsNumber, + const float midPointsScoreThreshold, + const float foundMidPointsRatioThreshold, + const int minJointsNumber, + const float minSubsetScore) { + const std::vector > limbIdsHeatmap = { + {2, 3}, {2, 6}, {3, 4}, {4, 5}, {6, 7}, {7, 8}, {2, 9}, {9, 10}, {10, 11}, {2, 12}, {12, 13}, {13, 14}, + {2, 1}, {1, 15}, {15, 17}, {1, 16}, {16, 18}, {3, 17}, {6, 18} + }; + const std::vector > limbIdsPaf = { + {31, 32}, {39, 40}, {33, 34}, {35, 36}, {41, 42}, {43, 44}, {19, 20}, {21, 22}, {23, 24}, {25, 26}, + {27, 28}, {29, 30}, {47, 48}, {49, 50}, {53, 54}, {51, 52}, {55, 56}, {37, 38}, {45, 46} + }; + + std::vector candidates; + for (const auto& peaks : allPeaks) { + candidates.insert(candidates.end(), peaks.begin(), peaks.end()); + } + std::vector subset(0, HumanPoseByPeaksIndices(static_cast(keypointsNumber))); + for (size_t k = 0; k < limbIdsPaf.size(); k++) { + std::vector connections; + const int mapIdxOffset = static_cast(keypointsNumber) + 1; + std::pair scoreMid = { pafs[limbIdsPaf[k].first - mapIdxOffset], + pafs[limbIdsPaf[k].second - mapIdxOffset] }; + const int idxJointA = limbIdsHeatmap[k].first - 1; + const int idxJointB = limbIdsHeatmap[k].second - 1; + const std::vector& candA = allPeaks[idxJointA]; + const std::vector& candB = allPeaks[idxJointB]; + const size_t nJointsA = candA.size(); + const size_t nJointsB = candB.size(); + if (nJointsA == 0 + && nJointsB == 0) { + continue; + } else if (nJointsA == 0) { + for (size_t i = 0; i < nJointsB; i++) { + int num = 0; + for (size_t j = 0; j < subset.size(); j++) { + if (subset[j].peaksIndices[idxJointB] == candB[i].id) { + num++; + continue; + } + } + if (num == 0) { + HumanPoseByPeaksIndices personKeypoints(static_cast(keypointsNumber)); + personKeypoints.peaksIndices[idxJointB] = candB[i].id; + personKeypoints.nJoints = 1; + personKeypoints.score = candB[i].score; + subset.push_back(personKeypoints); + } + } + continue; + } else if (nJointsB == 0) { + for (size_t i = 0; i < nJointsA; i++) { + int num = 0; + for (size_t j = 0; j < subset.size(); j++) { + if (subset[j].peaksIndices[idxJointA] == candA[i].id) { + num++; + continue; + } + } + if (num == 0) { + HumanPoseByPeaksIndices personKeypoints(static_cast(keypointsNumber)); + personKeypoints.peaksIndices[idxJointA] = candA[i].id; + personKeypoints.nJoints = 1; + personKeypoints.score = candA[i].score; + subset.push_back(personKeypoints); + } + } + continue; + } + + std::vector tempJointConnections; + for (size_t i = 0; i < nJointsA; i++) { + for (size_t j = 0; j < nJointsB; j++) { + cv::Point2f pt = candA[i].pos * 0.5 + candB[j].pos * 0.5; + cv::Point mid = 
cv::Point(cvRound(pt.x), cvRound(pt.y)); + cv::Point2f vec = candB[j].pos - candA[i].pos; + double norm_vec = cv::norm(vec); + if (norm_vec == 0) { + continue; + } + vec /= norm_vec; + float score = vec.x * scoreMid.first.at(mid) + vec.y * scoreMid.second.at(mid); + int height_n = pafs[0].rows / 2; + float suc_ratio = 0.0f; + float mid_score = 0.0f; + const int mid_num = 10; + const float scoreThreshold = -100.0f; + if (score > scoreThreshold) { + float p_sum = 0; + int p_count = 0; + cv::Size2f step((candB[j].pos.x - candA[i].pos.x)/(mid_num - 1), + (candB[j].pos.y - candA[i].pos.y)/(mid_num - 1)); + for (int n = 0; n < mid_num; n++) { + cv::Point midPoint(cvRound(candA[i].pos.x + n * step.width), + cvRound(candA[i].pos.y + n * step.height)); + cv::Point2f pred(scoreMid.first.at(midPoint), + scoreMid.second.at(midPoint)); + score = vec.x * pred.x + vec.y * pred.y; + if (score > midPointsScoreThreshold) { + p_sum += score; + p_count++; + } + } + suc_ratio = static_cast(p_count / mid_num); + float ratio = p_count > 0 ? p_sum / p_count : 0.0f; + mid_score = ratio + static_cast(std::min(height_n / norm_vec - 1, 0.0)); + } + if (mid_score > 0 + && suc_ratio > foundMidPointsRatioThreshold) { + tempJointConnections.push_back(TwoJointsConnection(static_cast(i), static_cast(j), mid_score)); + } + } + } + if (!tempJointConnections.empty()) { + std::sort(tempJointConnections.begin(), tempJointConnections.end(), + [](const TwoJointsConnection& a, + const TwoJointsConnection& b) { + return (a.score > b.score); + }); + } + size_t num_limbs = std::min(nJointsA, nJointsB); + size_t cnt = 0; + std::vector occurA(nJointsA, 0); + std::vector occurB(nJointsB, 0); + for (size_t row = 0; row < tempJointConnections.size(); row++) { + if (cnt == num_limbs) { + break; + } + const int& indexA = tempJointConnections[row].firstJointIdx; + const int& indexB = tempJointConnections[row].secondJointIdx; + const float& score = tempJointConnections[row].score; + if (occurA[indexA] == 0 + && occurB[indexB] == 0) { + connections.push_back(TwoJointsConnection(candA[indexA].id, candB[indexB].id, score)); + cnt++; + occurA[indexA] = 1; + occurB[indexB] = 1; + } + } + if (connections.empty()) { + continue; + } + + bool extraJointConnections = (k == 17 || k == 18); + if (k == 0) { + subset = std::vector( + connections.size(), HumanPoseByPeaksIndices(static_cast(keypointsNumber))); + for (size_t i = 0; i < connections.size(); i++) { + const int& indexA = connections[i].firstJointIdx; + const int& indexB = connections[i].secondJointIdx; + subset[i].peaksIndices[idxJointA] = indexA; + subset[i].peaksIndices[idxJointB] = indexB; + subset[i].nJoints = 2; + subset[i].score = candidates[indexA].score + candidates[indexB].score + connections[i].score; + } + } else if (extraJointConnections) { + for (size_t i = 0; i < connections.size(); i++) { + const int& indexA = connections[i].firstJointIdx; + const int& indexB = connections[i].secondJointIdx; + for (size_t j = 0; j < subset.size(); j++) { + if (subset[j].peaksIndices[idxJointA] == indexA + && subset[j].peaksIndices[idxJointB] == -1) { + subset[j].peaksIndices[idxJointB] = indexB; + } else if (subset[j].peaksIndices[idxJointB] == indexB + && subset[j].peaksIndices[idxJointA] == -1) { + subset[j].peaksIndices[idxJointA] = indexA; + } + } + } + continue; + } else { + for (size_t i = 0; i < connections.size(); i++) { + const int& indexA = connections[i].firstJointIdx; + const int& indexB = connections[i].secondJointIdx; + bool num = false; + for (size_t j = 0; j < subset.size(); j++) { 
+ if (subset[j].peaksIndices[idxJointA] == indexA) { + subset[j].peaksIndices[idxJointB] = indexB; + subset[j].nJoints++; + subset[j].score += candidates[indexB].score + connections[i].score; + num = true; + } + } + if (!num) { + HumanPoseByPeaksIndices hpWithScore(static_cast(keypointsNumber)); + hpWithScore.peaksIndices[idxJointA] = indexA; + hpWithScore.peaksIndices[idxJointB] = indexB; + hpWithScore.nJoints = 2; + hpWithScore.score = candidates[indexA].score + candidates[indexB].score + connections[i].score; + subset.push_back(hpWithScore); + } + } + } + } + std::vector poses; + for (const auto& subsetI : subset) { + if (subsetI.nJoints < minJointsNumber + || subsetI.score / subsetI.nJoints < minSubsetScore) { + continue; + } + int position = -1; + HumanPose pose(std::vector(keypointsNumber, cv::Point3f(-1.0f, -1.0f, -1.0f)), + subsetI.score * std::max(0, subsetI.nJoints - 1)); + for (const auto& peakIdx : subsetI.peaksIndices) { + position++; + if (peakIdx >= 0) { + pose.keypoints[position].x = candidates[peakIdx].pos.x + 0.5f; + pose.keypoints[position].y = candidates[peakIdx].pos.y + 0.5f; + pose.keypoints[position].z = candidates[peakIdx].score; + } + } + poses.push_back(pose); + } + return poses; +} +} // namespace human_pose_estimation + diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.hpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.hpp new file mode 100644 index 00000000000..823153bee9c --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/src/peak.hpp @@ -0,0 +1,56 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include + +#include + +#include "human_pose.hpp" + +namespace human_pose_estimation { +struct Peak { + Peak(const int id = -1, + const cv::Point2f& pos = cv::Point2f(), + const float score = 0.0f); + + int id; + cv::Point2f pos; + float score; +}; + +struct HumanPoseByPeaksIndices { + explicit HumanPoseByPeaksIndices(const int keypointsNumber); + + std::vector peaksIndices; + int nJoints; + float score; +}; + +struct TwoJointsConnection { + TwoJointsConnection(const int firstJointIdx, + const int secondJointIdx, + const float score); + + int firstJointIdx; + int secondJointIdx; + float score; +}; + +void findPeaks(const std::vector& heatMaps, + const float minPeaksDistance, + std::vector >& allPeaks, + int heatMapId); + +std::vector groupPeaksToPoses( + const std::vector >& allPeaks, + const std::vector& pafs, + const size_t keypointsNumber, + const float midPointsScoreThreshold, + const float foundMidPointsRatioThreshold, + const int minJointsNumber, + const float minSubsetScore); +} // namespace human_pose_estimation + diff --git a/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/wrapper.cpp b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/wrapper.cpp new file mode 100644 index 00000000000..e9d7bfe6243 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/pose_extractor/wrapper.cpp @@ -0,0 +1,92 @@ +// Copyright (C) 2018-2019 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#define PY_SSIZE_T_CLEAN +#include + +#include +#include +#include + +#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION +#include "numpy/arrayobject.h" + +#include + +#include "extract_poses.hpp" + +static std::vector wrap_feature_maps(PyArrayObject* py_feature_maps) { + int num_channels = static_cast(PyArray_SHAPE(py_feature_maps)[0]); + int h = 
static_cast(PyArray_SHAPE(py_feature_maps)[1]); + int w = static_cast(PyArray_SHAPE(py_feature_maps)[2]); + float* data = static_cast(PyArray_DATA(py_feature_maps)); + std::vector feature_maps(num_channels); + for (long c_id = 0; c_id < num_channels; c_id++) { + feature_maps[c_id] = cv::Mat(h, w, CV_32FC1, + data + c_id * PyArray_STRIDE(py_feature_maps, 0) / sizeof(float), + PyArray_STRIDE(py_feature_maps, 1)); + } + return feature_maps; +} + +static PyObject* extract_poses(PyObject* self, PyObject* args) { + PyArrayObject* py_heatmaps; + PyArrayObject* py_pafs; + int ratio; + if (!PyArg_ParseTuple(args, "OOi", &py_heatmaps, &py_pafs, &ratio)) { + return nullptr; + } + std::vector heatmaps = wrap_feature_maps(py_heatmaps); + std::vector pafs = wrap_feature_maps(py_pafs); + + std::vector poses = human_pose_estimation::extractPoses( + heatmaps, pafs, ratio); + + size_t num_persons = poses.size(); + size_t num_keypoints = 0; + if (num_persons > 0) { + num_keypoints = poses[0].keypoints.size(); + } + npy_intp dims[] = {static_cast(num_persons), static_cast(num_keypoints * 3 + 1)}; + PyObject* out_array = PyArray_SimpleNew(2, dims, NPY_FLOAT); + char* out_data = PyArray_BYTES(reinterpret_cast(out_array)); + for (size_t person_id = 0; person_id < num_persons; person_id++) { + float* person_data = reinterpret_cast(out_data + PyArray_STRIDE( + reinterpret_cast(out_array), 0) * person_id); + for (size_t kpt_id = 0; kpt_id < num_keypoints * 3; kpt_id += 3) { + person_data[kpt_id + 0] = poses[person_id].keypoints[kpt_id / 3].x; + person_data[kpt_id + 1] = poses[person_id].keypoints[kpt_id / 3].y; + person_data[kpt_id + 2] = poses[person_id].keypoints[kpt_id / 3].z; + } + person_data[num_keypoints * 3] = poses[person_id].score; + } + return out_array; +} + +PyMethodDef method_table[] = { + {"extract_poses", static_cast(extract_poses), METH_VARARGS, + "Extracts 2D poses from provided heatmaps and pafs"}, + {NULL, NULL, 0, NULL} +}; + +PyModuleDef pose_extractor_module = { + PyModuleDef_HEAD_INIT, + "pose_extractor", + "Module for fast 2D pose extraction", + -1, + method_table +}; + +PyMODINIT_FUNC PyInit_pose_extractor(void) { + PyObject* module = PyModule_Create(&pose_extractor_module); + if (module == nullptr) { + return nullptr; + } + import_array(); + if (PyErr_Occurred()) { + return nullptr; + } + + return module; +} diff --git a/demos/python_demos/human_pose_estimation_3d_demo/requirements.txt b/demos/python_demos/human_pose_estimation_3d_demo/requirements.txt new file mode 100644 index 00000000000..0c697a858e2 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/requirements.txt @@ -0,0 +1 @@ +numpy>=1.17.0 diff --git a/demos/python_demos/human_pose_estimation_3d_demo/setup.py b/demos/python_demos/human_pose_estimation_3d_demo/setup.py new file mode 100644 index 00000000000..6f11d498869 --- /dev/null +++ b/demos/python_demos/human_pose_estimation_3d_demo/setup.py @@ -0,0 +1,86 @@ +#!/usr/bin/env python +""" + Copyright (c) 2019 Intel Corporation + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+""" + +import os +import platform +from setuptools import setup, Extension +from setuptools.command.build_ext import build_ext +import subprocess +import sys + + +PACKAGE_NAME = 'pose_extractor' + + +options = {'--debug': 'OFF'} +if '--debug' in sys.argv: + options['--debug'] = 'ON' + + +class CMakeExtension(Extension): + def __init__(self, name, cmake_lists_dir=PACKAGE_NAME, **kwargs): + Extension.__init__(self, name, sources=[], **kwargs) + self.cmake_lists_dir = os.path.abspath(cmake_lists_dir) + + +class CMakeBuild(build_ext): + def build_extensions(self): + try: + subprocess.check_output(['cmake', '--version']) + except OSError: + raise RuntimeError('Cannot find CMake executable') + + ext = self.extensions[0] + build_dir = os.path.abspath(os.path.join(PACKAGE_NAME, 'build')) + if not os.path.exists(build_dir): + os.mkdir(build_dir) + tmp_dir = os.path.join(build_dir, 'tmp') + if not os.path.exists(tmp_dir): + os.mkdir(tmp_dir) + + cfg = 'Debug' if options['--debug'] == 'ON' else 'Release' + + cmake_args = [ + '-DCMAKE_BUILD_TYPE={}'.format(cfg), + '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(cfg.upper(), build_dir), + '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_{}={}'.format(cfg.upper(), tmp_dir), + '-DPYTHON_EXECUTABLE={}'.format(sys.executable) + ] + + if platform.system() == 'Windows': + platform_type = ('x64' if platform.architecture()[0] == '64bit' else 'Win32') + cmake_args += [ + '-DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=TRUE', + '-DCMAKE_RUNTIME_OUTPUT_DIRECTORY_{}={}'.format(cfg.upper(), build_dir), + ] + + if self.compiler.compiler_type == 'msvc': + cmake_args += [ + '-DCMAKE_GENERATOR_PLATFORM={}'.format(platform_type), + ] + else: + cmake_args += [ + '-G', 'MinGW Makefiles', + ] + + subprocess.check_call(['cmake', ext.cmake_lists_dir] + cmake_args, cwd=tmp_dir) + subprocess.check_call(['cmake', '--build', '.', '--config', cfg], cwd=tmp_dir) + + +setup(name=PACKAGE_NAME, + packages=[PACKAGE_NAME], + version='1.0', + description='Auxiliary C++ module for fast 2d pose extraction from network output', + ext_modules=[CMakeExtension(PACKAGE_NAME)], + cmdclass={'build_ext': CMakeBuild}) diff --git a/models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.jpg b/models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b151c7a82845ee69de278f9c2e76c9ab4a7e4d24 GIT binary patch literal 51564 zcmeFZXH-*RyDhqqCPffwN{b2zh=3pn(xNC`3`M#S5s?le(lii?fYb;mC<+0Qq6kQr zBB4h_KzfH1=>!si5KZC6Z{K^)82kIqz31Gq&mQ~7S&VlL)?g)hAJ%%GXFhYzm4m4R z3~xejx1a&dBS z9OgO9#r=0<{>ID0&HMM@@0epOK|3IDm<4kvnw8}=a7ciK zRe}`wum>b@dI6pPQOHySjUN(O>(%jf{?sPfSit&tP%*pG(Utt843|Upu>d(Jk8!1*^WmP298!79MUeoB?&*cB^Iw;;jOYB7g}3^EF< zJGi7TSQCZsJQ(H{kx|FWlKzJFkI4S#0*n3s71@6a?7zn~1Msr4FfSgf000IU@YT1; zfc*a%|AGI57S!}?yS^j^f$tw+?K#>fTcx4u_tRE0GU-~2f$Ps4n)_O`@#9b%1{XSuR82MFwoFn zwtFC|Kt>^%_qxlrRI~jag%GOZGlI6eZa4n}w!u#yU;G0Gy zen#$=@3-=CxO>&_vwslurwgYJfKitLAj0;4xEAa~h_hrDdND(-M;P6yGQEP9Qb-Qm zv!*=m%*iRE;S&-ra^?cqrDO+LFS)dg;pAqm4r}dL9Wno?PcZodAT~T|4ozuRlk-X_^gZWQ&2s>po4-6OWt@9{ z9V}mY^0D{Aj6reHgO65S3McWgTGDetmwDeN%LD&5$lKhE-bu0jO~nev%nk)-8efGd zBxWP97UG7PTVXBK^DWspvZbvb=BVd4t&HLfm-{g%%2>O-io+M=RKLbMNZBSFephfP zS^ht7{PSX%eDTls8}RFc{M%^me?OWE!yiLQH%g^auz7qtR;@Yuw#D(a1B!HE7P(&^ zLdc1mB$6l^{wa_q2Ie_EHcFl$*&WMqiVuV>BYKv~kdr+7#C_`u1gODe$sp5JfZ#F@Z8id4ecs zRA|D!Sa7=we>b$jz6X_F9Ucf7y#qPS7-X!aWBrB=5&@}PTkYp` zpALQ3zsbBa?lOi^bt~w=SrCL$PBmeSLIql_FvurPQiUY`-q4wqw7`JE{87^cuAcn@+R0odsJT;N*Hkz$ASFdIJNLSQ#@kN#{Y3p 
z$m5XH9cPrc^QLOf8$`!f4dwnBVsH^wXjkZ&2Y@eRnyhim6i&=U_llg)had|#L~3>9 z_e2^sJ=b!7tNq)x+6Lb{0H$kk+XNSq*(6r91HmV|Rjkz7n6sY~rr_`XZpxr#OM_?j z)#3D!5uyG*-6yA6`uc!HdN$YtoD8lQE0U7r7A!&7Zy_9RP@WMQF_i zTqqFhS3EFt$Ra;SaH&GQy~if0RBi64JFu9=;L0L4;%(YesO(Ym`m7Wr?y=~_oQ1=a zFK8+seC$nnq+H~v*WQVxwZ@xw>rQQ%sU^ff6ommP`?^dL(lr{{guQmVc*0+=o+aIud}`$oN_YH~gIYD4L@d7<{q_M6 z7XkJ~%=mUMN}f{!d#j{uRM9vo5B>dBDrgaJptfPiji`u_V`i)oO+HXMVZBeo-~4qCXiSqxpfSOv1MWKnVhj>pO+!j4!QD2-1?$I-T99+joT+a9ib?Xnn{Yjy z@6Ruv|3EPG{|+t{|EUT@*#0x{DBwT)-_z{>lWye?nd|6O82wVx@0~Bsj}Xzib0s^> zJ-Q%F=8gUMBGH~_7LsrP2)7=g@_C7>HXwgiF;XWK!qmS5F!2$4?FlpTh zDXvK$XTlk_B%*OV%$XV)exX}mYh>1PM3;x782^}{AZ+fV_}1Z;Pl_Ka>~jXemJj}< z5ynkKKGoMGA$u2-A5C@#3g7B4qr~+z(|e1`%=`6v=D!a}BOeT|>=f`9w}E1)n(&U( zFi#TSq%YWsvz%A8OK{T1V) zL7MF16BU{bamDG)PJCkv!mh?py1BaF`jxqq4&L?8q z+fU}rmvpaOBwTfR4%&`vUUx9vZLpd_OfR9Ze((-ixEJWj$akp?>`)p_iNtBzsQW3w z?(=Zft4Pj2E=hs=h5!?4usMCH83PAy^V6iqw_3|VNed@P{4rY{x|hZurnHzav*&EY zDX^vDQp$#6-bJ#%_Aj+8=WoN`5f^E)_EC8knF7ZIJ8HS&VIyZXBnb z3L^11yzbI|@%B?Pu4h{|+))z<-BgTA56Z}3LWg)JbO1T|WVw1HiPt1WwmZZ4ON~E7 zjXX+2q)zm;q6jZiuGJ!MIKTaTL886&v2`(FW@*?xvhT3A29Xx4T%hqOiQcFvQUpX zl^}C&up1-KnrVt*f%-BgTs0w6W?fhnn&43T3Sb6a44nZyk8f=@adK}rHu$|+IfJ}V2e-( zc!!rvmN}FPfK?-a$sB@GDUEc#A+%>MsT4mh`obBdEELQJt?o<7pSd

+395l-+4ZeOZ?`wx3T>AP zrmE9q;ptGx)(gZhNzVS}Hc*0sZR~Hmpg5(jpG4JTm!vTEC9q(6g+4dTj#67eX>ps_ZOhB3)#3I~geF;{lZ!O1Nz?v`5ngu!zzsFOUc9ncN{q&;-B)(;p zw0n$8@jWHvZ4~+dI0ggbP$E814wzfv+Jevz#(YSSH{M}o$0Ne}xYC%ZcbCcbSpK2v zj-CetBnbIND`*W~lV0TaI1h~D#qVZwR+*75qNwsTF#Sy(gcl0CNUF>e?H2EX`_i12w7(hhoKY7r?& zmThHXfxxHGV;^e|0I<4sSN_}kzPmFbz4JQZ@!aPOldS3AX? z%aLZqV_d+!<|@+&ynWA+K-{$hpx$NRNpAg)DSPosbU>)H!TD+fu}k7hCgYY8d_qCD zT4)>;+28}l`C!a3)dRHT};fouoH%tOJ0FvvXRyI;^YW1Z(+V9RL@YN>q%l*r@1*oD!g*` z2I;zg^AT0ps`Rt#Wg?dX82v0t=df#;Q6qW|%$%wY?@-ot4lJgb5_!T)$?>G` z+O}y!v@;?ZMhu~0nVl`z`!jdvS9~5g6|v*#Y_X3mIS1f^a8J|>Ctjr+c@%b&#Gk4w zTCw4JW}>p*#cq8T$M=$;H_%;i4zY8C|J={X;RAqZ#y_=4PH(1k;fJDghq^wOe4=y_ zZMqr9>wW$24Px)MQN^L&n{O4IgQ)&#&9T}d1gMG^>eC_`CE(LAb0dy3p(=MBdphl- zu5Zae|IrKep64Fz0=aZ)X6*i@Pop^C9hX}VMTs*8oAXO?<_K}GHtu0s7+m9HgT62^ z)kr?jqdHB-Q*(c|@|by&utn>!Z`BEz4@dxvN@~QYpdnL&808A8C}Z#d;I7Ye$BCxx zj>VCqDLI9X8sB;Xo9-R}PmHhU>LO~4)+-`)Jw<~Hk{o6`UbS6lqrV1wMV=Rg6BT;} zSjJEZTbB=jABr`UG^$$EgcEQ*8b2fPhI^yVZ^^v4p)1A@zvr_ioV$WEe(YJebpNZp;+YAYs!AO!= zX^h5eKdGqU;%_f$6E!QJ+8Sg7{EAc43-QvwC*K_aopk!AR_#y=k|*v+J`|T7QgiapoTM6 zLJDvJ^85v0rLI^nmyd01wpDSqF>vi~((nw86cTwQGN}i*Jj+xqBRg zV7nhQnf%)Tbp?!hC#6N7?u0?0e~w>p-P1t<3);zaaSlRc$XXNT{dDq1Mk8+)dG||4wKO0Gnf3#n?#YX z0gR^*%QO=x?yl(z(VR&t1UmkR*<%nJ?J#lfPkvpsFlcrrep78~?a!ecV;l0cM@Y_+ zRzva<D?@5F-+2k8d82^`|D$Hr1xaYuO{-+#d$jr{F)v%4{7irh)BqQK7@k?DExB zff80pyHQPfMPfA{RTqD4?mbw9oxS85vBk5Qd@_eMm_2Q7<+$x1Zh+{cI3b8dZKgaP zXe3;!n(-_^QDA70jqs>7G8R{CLUji*3C^`k5Z?d9c$yF znU66}_=?SE7Y3K8`)W+&L;Gd%j;e{ax7*M2zB_bWV5aGRe~Imdbm{*+t1S^dAd1!H6o|U>szx>!;KRr;8tmAjXss$s_C|$ z@a}%H_UO(0lCIuIsn9>-dXGHOfgqS39SIY}Af}Cb7TMKNLUZAP28f#^(b+x47DuIe zU!q*L%5H>^s!6U8Z~Gfw-eBYfuqfE$b#==2v@lk#+taak;%a}4f*OqS9Z zfpHcw2Bc&f+9pEe{RoCR-ACF@rJsN{l*FTNKqNUjQ!Tw% z;xR)SPn^r0n{lkl5;w}c`b^wA$~p7reg$NDr3YHGn0St6LA~M)=Y-i8AvwKcbsUXT za82_OkDF&?p6Y$TU9M0sOgaU7gPF{wmy*R>mt8b*6vZp?qA+-9hl{h^ z86~+96w{~a>hPo}H@opx+ zSb|#V=C*XKN@J|1>V(!=)r@2$gL{@}-kq;Ve^<8{4dsS;V{{NaG!bGRGWOx;Q>0!b zt7&9_tG^wlN2a~J|NYRTxt+#)k8Yxf=0nrT8y)aE`={Ul}gQ-eXX?1S{ zix>?8u3hJV2=8!IQ_s*sNv^eFRee+D0HgdoJVKTmL9dLgIL&IO^3cq~~OUs8f-9jw# zJvb5yBAS7w6Uq5RPnysu#;6v@AIFfZ{ZvmWEi#RG=yzAu_;|Cy_wP+Cm)9XeEI8z` zHg3k`AO3Fm(Oc6nHA=i+qI1jL%vtj%j17gy`nP;GvS4Fc#F2s8TjpbGo7qN1LbEtZYZpOtQZ7025#YZ(} z2{sNHm@f&Jl$7#Kb7gfo)Yt^dBr@^BEBR{l0=QpGmB@OOLJMgKv70@nvfOkr!*!uc zNN290`+D?;rgqk7cED4PU_o_RO%;KrMQI=ciSs^LjjKN=KhQYom0+)a{zH@%Jb!AN z+&t8o5@zVEC#JLECDB$M-Yd`Nq}!MyHl%Q(<5%wk#eO$>HQ3ih9hbF?-8 zMaGOD<~YP`zKU>o7AB{w-yFTCt<2x2kvVUDTHzKS`_AVR6z|qgUw4l6ZO^C8p4PsfrX4#=doHCT{(wMOn|e z)bFO}tT#^Vvl2i2G44T458WCg)b%e;t=!-bzMr8&hNtcwZ@R~tw#letAs+yQCLBE4 zRf1mUHo?^8q8n9BiS8Ik>=wQa(PRDF%3LI-RNVZEU~hXP^8VA~`DQsMG0DI9r;PFT zo2*1gkLwwFvu-dJdDNNGh#lz(Bd&DLB&H};RcfG)&FfTrndzKJdHM0vK0@Y++$-Vc zqstcc8z`a^3KTUI1<*^Qq$!@=DDL|AB_z)j_qsNO8Ogd!dU`dK0)<2%5!p4j+ODJ@%j(;FJVsfRG3GFYj{?-_E1kp*}A6n ztwq=QJndhm%NZ%U>D`5)_hd37WO8@;lk>&tFKD7f-%codK$Ic{ZDMf^-f16xm-tTf z*tdWpfmyC`<(z&6?f_fy&q!TQ)*Oljv7D?)=^=qkh@1(nrX=MNyjAl|)MCk$u2pYA zYx-8^`6B>Jn$U3p}@rE!ZqpNQ>Pzv=@8ZKDJIXvDApCN_|0hWL|I-Mm%8s7Qkn~z)YOr2$x@`s*XpJM%pNq7i-23=8V89}6? 
zYW2SM7Ez^~PU$&ANWF!kKPA2d!W0w9Hz@-o#cLG*V#biG!su2=g^{9d?j-ud8h7>Y zcW$zUI#yw-D{pgwji^%_&>DEOK6|)5Hot;MMcWC4zfVlw_S@9G>MKKHZCe;K`{f+{ z&D~MPmgWALJ~?SAuiNy5O7zyAN^-Uwy~X0<&nSds2vR&sj?se@iaN9IdTNYzi6pk5 zl^S}H(|qjXbspb9v5v#(mrk>b_NU$85ga%- zI{fL3lQj?tGhI#hvUU9<^i9=k!?)(+(-pH9KI%U4{K^`1{F2s=5k-l5(S>&n*>EQR z9esRESG26>_jCIK4$XFzxEy0K2q7M`G6ZQ@6MF|5oUV37GNqjD5{eud8hCgOUI*@~ zLy@+cBzGJ4oqje9-&)IlJfbx^5bvtq6nHq1t^x;xI#mA<>?|Lj2(^B{HJ)Q1Z;Tou2oB z>fU~@?<#+cxvJai&BwL$=Lcuy8nRY2i6H^+kg87uU8dwZD9H-<&z~De1Mfoj zb6t~4bo1?gONtmM*UL|p%9_2$r_%5k#FPpim8$Df5*{@(dpUc7{71gD`4cHLFKEfg zfiuf5IQw>ab~_tsqzyr@XZE9uPjmSri8kvbPlgEFUy?mguz3o?4J*5 zYSHI>O!nMeH;gd67>6m(@rAVwY78Sa-?c@GG`Isk6V?ePTEP#6xAa2!XhH79;dZkQ zB=(`1UFv7-y-y>7%5SYrxJ|R=+d1E#wB+Q%)lzTJPE*YB6_F^cPU8g41)ubA{;Zmc zowKC=j9q@j7`z{>W)ypV&cph$)<8I#rtV3A7f9B^+(?ABzch#oiE^n5LNC`p%^1t) zqof34z#NoPbVGxb(1_A*-yV-*`RWLR$SOPEao_k=bA*iL;+!FCwBmO>y&UB`aH64) zRMl%{Oq^f80A4%e#acxXW(vj=D4{l`a5%OW<~RP`DZfY}EXx%wgyZq}z5M)lS-8oI zsVw?TB2$GWkl?-GWA$G=4PP5%8Rb}IJbZ0o-02kAmptt3C-3BNBK@d}jf;IJY6eeE z8rJ6wU&M2!Fr>|4Bs%gk@!R6s#0V?={`JhhY{c>}!rNy@A6!uTY5%g3kV8uUxneik zDhb~pFS2+CBDv9pJLRnCZFBysX;2)i$b8f!jvd_ zqMI3HUxX0s>Tq3M_7GY#-g4+MuSc>8;sr~M(IeJ@h+J3@g@wt%o`O!TT$Z8Z-Ees~ z`1>CYvPb4HZ&!r|`WxLyDxBhuD|toLbqSBum;AkOGV%%ysx-%QPGrZde5wCvKU=aR z%RD;s2A@3Z02uTB0I2iatR}TEaTleuf>Me_rc~PBAXS!@I4-O%C^RN;B=Gd*JPgt+ z_qTgM*iR4hY*TD!M{jC`f+I>`H>>5SIgk9?LAl1cwN>FOWDQa%DiU|Z7mMnC?3+UE z3y{m4?a2-f2z&57>ha|Wnf!W(4(bsX#QZu`_#A{9n(P0SE~O3ZRy`3@+04U05~Nfp z!*3}&XuF_WGaw#^`SmD?`b?7m73q5~unt=sWaoz&4w)_2+JBh{FQ*Xu$)>H#vt*{) z9XxXYU^m}Hsov6i1nphqX7uX|54{nrkIcBQyN4BU)oo92W8hEg6(d2N$oh$- zps)z-tKNYjSc@BSnMSui6#1$2$#9obUDusL61)51Ph2HiOzWx=XRhmm8ZuAC=i<)Q z&n*&lN?)#w1{Q}wS=)qQ=O~^Z*THrAiKN-9Bo9;4#{JQ{T8Ol%dgTL`mp?NjP1RW>|A#K3|3J6hKa14=(jB*~ z?V3~O`zWs=?cEGh5}sRQuM^;{%hX}FWo?p)p-hf0)*S#ni}4^{UE_CA-yb16jtKUU z!ABl?JuEDn^Vy$Rs(K~jJZZ)Dq3A*_>x)6looE0HchloEsi)LFCN=hCrzocohpv7m zNsgHH*!jLpm^HZ?8v5DmTRQcK-2}>OF?K;^Vq4b~51y93)`A)fc-Pd{Tu-tq< zb(veXKXJ}-m&$H}k}{v~AKEZ>9E7=WzUO zz~$|oOM_ml1BU-FN+gE$KyjlE*26W)~;zMkvarN}XpQpdHPz~!0a;}HZ9 zM#+qybIp3r@ta3$f=;B7hnHWzyGC=o)dfzxoGOuDCSo+IPn+>RBu^1P)JZUMe24!= z`hIw@`;XbtpSin;qm03sz$f5)6>`|@S5yO-b^^8MQ6Gt;FeI{~-={IuO8UyZ>ffJY z`&#E!&jEeM2QN4>ne(B|$E7Y&qi}vf90U!DCxu&vFPa4&TW3X+ollPT%-zEoNRB-_ z%@>5|jMhUixc8_^Fxzps?`o82_*F|{4*}5yPSh76y3~P*ue^uN#n=Am)rE22$qexT z`}oGia>~33cC{zjkd5hYA3|c<4pYFG10d0Q(G)bJ!9?rgN6cofFU|=_sABK_Zf)_x zcbqu!>XYwV$_7=7W=F3AV+t7Rl#Xz9Q<@GQnlzJL9rD->H=vJwJL>Tz^G=8Ko2G6_ z4z2d8OUzejMol1@MVk_bGlL0@r#j`o{pHx@e$M8pQqIqWQ%jeyr*B#zwtjG)lG`kz zx_Wij!P#~1d}b(GXnWG#2O^W0btRCvLXaam_rfKj^un1XbwQ)Cfe6~%z&Q_`8M3Wu zb=ay`MAlBt8kBYMeGhPsz7P%Zp;?0L^kx#&@x%unJC`9w0BUQyT807+|;)nbVEXQ?d9aJ)SL5>n|^yQ~d? zsrq`2v*Q68zHMMmja6_hgrGpRpk1T%6@`2DK*d7BhEHZ})2_;{qva8h2BUmT*6Cy5 zFmd3~&tzbNUV!q2BcOZ_RXAu$%=I+yGr^Edxk8BB{CO*>^0kp!TTHF2ea0#_uO~RcQx6AS9h}o)lV&<9VHT$ocW?nn&yi zQx?z;LB9~VW34xG*)g_KR3;1U)DM~2?`6W5ug+M{F5jKGIU2Z{jiK6G#}t)njnOG8 z;xwZiM^fn~Q*!*_tuHd7Tu-q3HPmU!MDhMoluk7*%OTZxjw?%l>?@i_m2vZ)s@<_i7oK|&u(Im!rJ{}}vW_yZ>9rSnOQ`*(Z%e#dMP^X`RDQ9Fk6 zQFq4o%;8H9#(*Wt=-&p&2VT9vY=D+WY2M5Rh|{yG1&(=>FH2!3qB~qwNK!9|VfMo( zqJZ!rv{PEEl)PFWFX%SXsLtir)y%t8zwdF!4Y297XnkdhFAg+~#w~Uq5ycK7si$>C za~DdFyiuD9CzGz-Gv>aPB*3s`t0r@NQM^TO(mxcg(*r)%s>)np@eElPm0ta$&_fFg zQ$|wws-XV-ONi@41eJR46=sIe)jyU ze7=iHe5ARFWHy}`XmZ<`HPBJr5I0b=DX}&{bJ8su)BSF(1vUZzI1k_KfoZJ8+9KjIH9ManN7aK@y{cNPFq1 zAQG#sf0*T%pHgY`MWi3ENXGM5!VPr;4{7OU3Tv^j(_aM1Wu!_39-Q{*W0de4{kXz& zPI8H%s)}TB^`qCiz>8CN^?!-&H2KLQ5!0pQ3#0zJHpFcRNs@_X2JV?}5Hu}Y^Vu!Vbda65eo0D* zd{#3l)iL3Wi%I8&dC*^N8EJr~ehqW2&%o0)4x>&kZTw(5hCnka7}%l+Q!oZi&l;1? 
zq+uR=wQ3Zs+lvu)1vM&OsvZFSC$8N+q1($V`mFuUWIiYbR8u_Fv%Q%U5)k?D0C*~@ z_{UuE2PcH+)c+4^&)AsA1_oNqi+MI;C+#yRq`S#83mR$ziivf7H`64aejQZJu%bCJ zeZS%`H%dl1<`)pDgX>OB8gP62`%YmY#&;Co;oY9Kdn7bF=%&<7`pZn$|f z=@jnoorArG*$*9XO@SH9;VhAJFU<(L8O!7-UV6I5#_B_YI4-AKCt^$1p)eWWF_9$_ zss;G9p6EOIu*#5Ltb2s2rkSih^u#7=R8!lG1|AVKV3N{jBk;b@!!nnxNoH3(^G&(h z+uKiQ^#MBxOp-?PR_b4p=1p6iCDFA=lo4HJPm`pc3BQWtPi7+5!xTmGd!hz?t2j*Z zOMP{mb+YM+mw=F7CD)&fy^kvtVd9w;#uaFTN&=#wjkx@t^H)s#qrgopk+TcVk51CR zxQ47tj8@@GD;#ityO2UH4<~5od|f!~YLq)4N=O~tNX`4I&HIc6Ni~59R#S>dPW!e* zW2Uny0X#Jm@}=zsmU@DAH^Y?u#Yb~qv(9UP>R|^=M$>H8#AiXOBn^_+uqLHMlqy8F zkik#2-R5_XoL66=XV9)_YTNg2$L~*5+aL%fOJt{YLI&8KKdsSN>{`Cin4W$Y{)_T; z2FPmwcof#LJuQu2>>7HKuS0r)j3ZR;&3jrPx)wkP$fRFsci53;vgMkp-a##-Noujf^RWxn0Az;ioWXU#jIzmHNInQx=~-5oux}+42*{e??CYeR8`P)-qvr` z%BFqibGB=BJBNwSQd;W$bdB#ktTr_+_t$%{2Xbcnnrsn80q;Pf*)`uH{3%dpDZ&g* z@00tRNCB^~z7SR>05SEyYMC(Q((mz@&2syG%RKSWhZULpXnQxRF)W1s@m!m{Z!@S1 zb%JI#+$!r4#Ganx5g1u^g^f6k{%LrB^{{;?%g+U)V8!6RayX!?O0|NWB{J*cx$rCl z(o%763U;}gNQfU3EgM-EebQd@@h+_McEX*MJAC}s1yPh^f>?oS|LPE?8l4x96vlK zrO4>c&Y8gaQ#b3b-MufIyM&e0=IGV_!3MAAB#2@xSaIesX;5uxbV!;`N9g-^U8iDy z37f}@Z*?1pzhFwi5!D@9y{1( z-0Sc=C(}S`{f8`$;Szn##gxToWcYXdr?h?wESuv%QzcA+r|me*u;u`$f(lQ6=cMm@ z)vqB%bUmo&!#SD2;)%YH50b6b+*hOtAbBwIN+<)GvwhkxE1n=^Yem7HS5Dqi9tOjf zTZcnbrXFp-qA8FJz9N6)Kd$Wq@I^~mKxO&>aBEpOJr;F#s(Mvw^hOh?2<`fUdNtgh z>_lA3Nq~FK+u-?=cY`qmjd#sYDF%vp`#CKE9^Pgy$}%^-?wn=eS#m$aoS^4oBC0wp zHA>EDUBvYy<%yE69$WKP1xIz!v_?~1EmYeu1ONKu#UF1J`tIvY>fqCK$c~gEyo)os z8BI{aIr|bkt#W^D5K#R4z^_XWz+s_thwkge10IMcaAwzh09>J0zI&3NQWnlZP~B5) zne-*1vD;l$u~J7!VT4my{`5zXWVs&?Lr2w3K0fU4cXS)u9xq6SEcO;WPb3^W^6r^C z$LErNnQom>Os(x5O_XAfS)AVKYCGF7J}?4KuAevfrXZb;b1vrcYVoiQnf8nM)DGH> zef)#->E6bF)>B|ElT3+V#g(4RP+B_xM0GU_r+qZ-1RzK2QOW|@4l@^*4xgQJkNacK zV+cI1roV&Von2PjKz+LL;#~TJZ4W%bj5mLU^1cS(G5yxqm8&UqD; z2nVhEvVUY+sV9r*AXm!hZ3vAv?QO$stzc@o#1HbUYw);G6cJLDxG1;_eTv7UQ=O?I zdx0lEjnhOQsJH^$gb?>({9poj;dt!3 z9>mL=MttWre_{^#E_2JXt_CK{;piD426}FQx48s@rD|^iSWg z>y%j%@|tC?X3vn|49gFvjeS)-<9Euw}l zEpC}^9hVVFUDvk}5>Pze_uJ{-S5A$kUyL8%hSG!^v)jqgqZOg#R#XeaTI^djy#nrI zp>t@IoI+}j`JtJt{1T2I5t|zY2O?)drkCpJg}RpyfW196$N^9!T*lf)&w=~!&*=0( zg{7M*F(iAN7UD|6?nYj4chJHW-q!?wceUd+t0p_3{*+k0ySUGKzT}^j-#AIT6}!m* zT@Bd2J-=eber#P|(;diy?*nv|wP}R|3R4Vjh%HRs!2NNN=2df{1mAdeet!U^-o-Q# zvQt*VXIqW2i-&wqP#0&SSU*KBx>Q#HcS;FixcAJQO@w1#?C+ycDb#GCp(oF(!pRjmz?^0#J?yQCKV=s@fpgz$j26!tNe)TJuWOHAqZh&NmKWx^lQ~9Q5H@c3d9yjsDcDrS5G!L0 zUfme0vRx16XP_VTW(C9F-C7&DHBGyk8O$N|a=kM!=+Quqc;ARfY)XW(^};x=sc9->F4{fp;_5&PSi!PX2B86sN7eW< zbG;$GJVeob3QE&^6IFd7doS(4{R( zaCr+w%P>L+wc(Gg8lKN|mpD#c=&3eIl304ll3u?Im5h=>^(>|^T`t|KT**g_E#2J$ zQzJy)KgB$K#Fk_Q$7rly@J6wLJ((@3;-tPLFB`7%d4K_$pMqUh0$DYZAG9GWP|c=#5+S$Cnbc;L1$_JNK;FdPv0E zK=G_Nq*^**|w0O0q8|pU*(# zC4x2)PGW}?0w2DzR2`E~XC;K!b~do>>#NgR%zI{@~-uIu>j;>~r($eO(PdXePfdho0e3Vl6uamD857;UZRe^Z$THne^GHy;-isE(WQb)j$or5 zldYg1As;`!d{CWEVY^fK?5Cjh*J+z)XHOhIo9R_;>X2?f^#p0wJ08Nzt9Wy!p`==Y zrN#D$puu=6w2Yp0<4W|EY&&W-!iLWI=-5?7CkG$>lVgGwqlIcFD+LlP>kzkX*F4M+ zixX2w{ytZ;pw(TQF-m{23mV!jUl1jxzrv=d`npTGRIl9VT7E|T!+*uu+mG))*z8-e ztG~dlk5v+Uc}E~Z4<3P%j#*SoX)`)lTP{>#25##bC#K5SrAlYiU?C)Y^kM(}2SlHtv?F!e2QS>Ufz zmQHCMd82~epxBnZ8{NEAix_$i@Aozam?V}CTzmD2wc8yd!2dmJNTMyqGgxxv*t{U zz(}U~@k&u&`MouR2JPI;i$R(AP|_T z`SeK9XnOrLdgr!=%2eS?Oj#N0bEZ^nN71m)2GU%?rXJ*mf*Q5ze&!SHNHd|cyz0#}ky4=rutHy&(D~w*Gl$=T8xgBlE z8$YOh0Hj4UChK#~LpOeGv>hkq7Z#(12Bf{KyKAPuIwtuK$G-cSHU4?1Kb3n|K0rk& z;bik|Jw86Lc0+RogGKEcAfeMFKkh+O?6W8M(NpSa=bhe5 zrS=-%`*X~=UWIAPzY9%t(ka@|4lS%u_dGi@fPKh0%`pznsJatr_PbaAhdraI47!vY z|LWyZlFZ(j^4YOZ^6xynrG!U!owYU}ZkRvz4eyn1tbcl1eis11W^JF|UG=iE&VOSg 
ze(CoISvk2-)$@7^Yx{<3#cBG`m)7YRS$P8^?^)52oMML~hp(&%tt;F-bl(f$HW^Uv zyrfkMa2`sp1LE8;EWd3WjEsS(o98=vPF>n?Y?wN7AHduUXeVt$!fu6>$=p}FvTakq zmj0nsuUzitxQK>))eoD!NQ+YjZJthXo%*7!qzYRl>HSBzi(tf<+mZYzAs=tXlO7x3K_u)^tK}H!3%Ot@743p44K!acR?B* z-ln<eP)?<1V@=9n)3dY8=erzotCBrlpPz~?A7!=v(?4|e)uq?WBFK_U93 zZ%4P3{LhK{cdrHO>)*Zp+XVg^StT|8_9<_|!~tM=FJJas+|IzMP8Iy(>ZSKABbzrc zZ-2Sbtd*rfR~FZO-ZX~EvISlchBh~0BH0aBGWseC>I9Ws*EKbvZC{!&57{vM1!aOZ zoJ)0$%E9|jPg;L)8j|OsWtiPDj ztSW|T8i?~I!kUt9M-CbWi+NqXVyQjpg4OfhO^Tlrf1sU;2h$%7-@NC4CW@I3gvZYR zi>Ijn?=P?TZ%es=)oMzgzs+fEUCeO45lV8Sk-)EG- znaQFI*~!`X`tp~XmNPxe8NfB9y!O@83D9;aWc)4F_dh4}tHoWo+P;6iXGrf3Fe`5? z)Y2k2vZ%XW`XpFfSy=qF2K?A=4%nMPOEdmd^qartc_zkFf7=*Xf^6JE#MjB=TR%6n ze0HoqJS?>XU^P9xP7r>*3B%{tk2z1j`@dLw>$fKV_kS3qTS}TCAt2qoiAqbSq)2yn zOu9iy=`Jbh?rv!q-3=Q#YOi~r@4s-rk7N9{alm!$d0yvvK1%V85INTgOKrW44Q&yV zs7ZMhvMu{r=aV>5elo6sW_D z*_4~mGqn@OY)ygrLj7>Xv;7k0&j*BaytL$%c}X@ z6{et4q+P1u_m~N)e^fwuIX#uVy)C`|Z#zO8)GRWq>G zOJsixufeF4S&1O02=|u)#X8EoJo4*RJQwEwMm`Hqz(hqx45WR=1g)^J<@wv!qu;7d zL)dTVH%gxqJ~hr&-NcL(HpQ6*Yp*oQzWPfW_#;rZ@;H?pT?wD>JtLd83?;4xO7p zlY`%;f?k~5f;1rBdX)eOkg4PKhmHrfwdPtWFls`1#S%phJyh!*HiK@9qkp5=_W+|d zg#QvBQVNjT?sI57A&<2e8?Zp%m)P!$ztFLiDi5l6S8{Q%Eu>LHF*!}x*CJO5v`4r% zHBH8cS60PlD^kyJpelaG5+%ngLoX)}H26xvkUyxAF8^~+rxqoy6-{!X1+S)A$~KXN zhOzoL~EWiA$ggW4+gU^}ftNGil)7i&!y@|dOqDO(r8R@s(UlvfqI z9HElnx#E08W44@i&}pJx6K{vM+zjeZg$8rlB@a+=jhSQ*_m*abxbGwfW?Esy_} zgNyqLqhsc7kTXQ&&6V(MT)eZY`te42Rpx;3II+p8{_IgR4$?g*@^H2wKv-rgzX;>o zs;<2it*hUhC)(QlxyAdW-sf_HdW_LYYE<>h~|9~DX~ zMJanrQ+jA`ODWKPDan-K6aREYEyVga*Bt9!IB{2)!N4#g#om)BRzjMT;p|FQdq~D^ zh^48Q^TV?w>P7dqm_OKax5n4&0}B~wH2$Gj4e--u4f}5Z0VOcJNP}uX(RH{WzTuKH z9;f$$@5Ae@M{3>x0_FeLzy3oP)GB$c;-#t2K!~P>t_rDdG*WP2;8*y8VJrBR&W^_r zYHrLdeC$@X)=Nu)$5C8gcHJ{&W*>5u6}yWSk6oT*jlW5Fv;^ereDW*^<4pMv#S7qr z)F_iapU&M#u+5N@@cwCm;`q~9aY1Xvjo-aQAh9i8Gk6`Z?(ff1t>{k-rlLzIX6o9$QIQ<$} zvC+w4XZgJ2T&Vinmjl5>OXdD0w9>i%P!{wx3wD~`j_p5z(B(>!I8syX{b##b=|ld~ z_mfBM=pD>auIX>T!~HlKn}RhrTha+*9pD1;(~23ICu(>9o^%zzj}sanjoxJb9KQ?! 
zXIn3%mZf-=8$0lyY)`~a3}mHBCR*1wH#MYroDJaLXfw}!hpq2-1SlbCs2JgInCG%ui9&GZqrIQ$7a^jXhp}|qCL#%&~f^*AG5$NrMwzBIe@TwXZLbf1?HK_~j zrE&0D{k2hh$;FiPdyT}`&A1C0SjD z{y_9GdVapq35AOJmp={IGLN)rRf`ZraQtFNkGSsD&!U$}*=V*&O0)`QH7Ol=7@i|9 z(kd99$QhapO(ji=BIQ^Ws&O#0W5EBIwG`={CJztx@QZ*h#i^|h6CMq;F&6OAp3^9} z`kCjf+f~BVU3Q)VgsT1h%4>c2?z|h07KLN+uMqYfduDNk^tbZ`vz;?UBhQfq(d@DR z3tK;qi(L3SqT$o-Fff{j6QRLL7fr14&dP;yS2LTOJp@v%mHv8nZ+M;>uHW^5RNNTB z(Y7~ZKvsUE&)U#axrK{;*!~*vx-GAmdF-pH4b*Vgvld#T(TG*v+@b)!VMu9&f$q3pGHwER zhJCp<`;6Eox6=!v`_*e(+QW)B|L)b~XuWJFl-Ilox%&>6a(ErUC*=1^R-txb&k(%% zwUUpLTI*e6+BR}=#J;FaJ}dPR6qSp1iwUra(W@IhJQsHs{ST$veu1q6&i=>E0`_VK zM~XWyl{Ubuj6A9!un*!{19b&QwebuADEk=}5W{ws8JoXRXhbM9LTG`qM7Eb5PRQR~ zxzvDHVc8((O}I4~0%FCJ|;8o~an%y#kaS4Uo4L+R zGJldHm&mf+=QQ(VPMJuE}=i%fzm-%-ci z#d!I7%02s!9IdK8y&c1N#yT|MK&t;zkZP&@41D^}Klc)z4uy7(nfjgzCFCy%fI884 zo-OTl!E$b?4w(wTZtnM;MLS9*3E#Tm_0WO~(mfLec8>Xvx`6vdj!BOy&HjwsNdIx4 z`Hz<)8=ztGIF_&wv}V<)fQ8M+M%mPVABBs&AE}LJ8gSo8yv>8-9uVy6#Tn4y$7Ea} zsmd{v5^@W-=kM@tyt5fnc``xnZOCzDIJHCjH>0Tv4qZ9Kv%buFsn#|_`K;-lx9NMv zyIxHMR?Fq*MC%HG&F~HZP-Mi1_E9zuEUB$@?uoNa0XFTDP3JdD4%VfjLwnsFv1`n< zy$PWh6i39oODj5tP}`|}s3N*6|0(19K>}g%jma^Ba!ACLZ#v@?)C@ky+U}HJ@s8Op z32|~Do2Hxi+0VYL8{cI!{ESVwlP&JSLT_q(INP?-jP7~K9(=lwCGi~{Ay?&ts4ZJP z4T20FQqso6Xyi`H_~3(@n`0$qoEFD>T=?&E1b zglj~8QH9q@wp#Yg&*LNte>rpLc#7_ILPP~UOIc6eX`pGo!knzd|uh^NvZTod5#{wI1SxOqg@Zr_&{>+a{R{j|RwhnL}%0 zgupA^`V0P2Ra^x!yW=U$5*&_iJmo-a_ka{5Nh4aa1^sp=d^zZJ3qt%jBRstTey`PW z{KxKfGRCXyZtE~iOcl4-!`SOV|8u(2NwOSMEn|}u)_-P9QZ5^vZU#`V0WOmXq@rx7 zz&yKu`cRb#hw`=bH#t@Y*8mRYXJ!~Gi|K;<8Syo(uxj&KwfMu3k&ewZtyYC>PS(gx zLWHmO6Jj#MIIReRm3oCRX#SAS?2{=F8~f60ya_8XSnhCh!$`IgzSGf}$a~r3h6pj` z-_)eLClL65G^w*59+=6LH}vuBJCLT1TM;#Cp;LIa+w8WvB!x%xb(1&xfr<3Lt~t(I zC|hN_Px3OLayg$JvL*|-AXw*Sjw?+cF>`)BausZB!XaCaOhRy_&I~FlQ@I2&PcYl`V-3pD`3CN$8 zLu_8V!963UHZ3hzM5n6mReW2f+d!K!gZVKzy(rn zB3b<>w7c|~4%H`--eOSXx&7_-pN&jEc?a4-!QY0p5UqI8BfdmTS-Yt&25+Li+}F?i z9-SDWBF1V~O+PUbg@@mdpNPBsx-@Kn(Fj^^WdObW23S!R9gAJ36obLIIfxAh5&a~k zEYp<8LG=#1n5s`t97<8fj&TnB%Ms5-AL*TxW<|gKyaA@v$TQ#9rdc%?8CGFoCYujD_ zLQeX+6+4hrCww6N(I$k{o2XacBctn>M~IF{8O1H$JeBdo-AL67&0Q~?G+#wydH_W5 z#Wrk1m3x%Q(Bt7J2IX*BbxMea{>Y(sOLfHlp4-=;hK*0d716G15D%ph)mdqi63F4X z(-Wir_a~8%v|#|@=NmC)DM7uX$ZH>}(~;4h zYA@MoH|rfHu7vR|&VI)ybCb~EkhfLwaWd$Q!M936zcxl zG3RFJn}G(%tAXyBUW|b#zh>e;6o@w*a(#E4ixhSf?XWv=Io+Ppai}&~@1~fw=8owA z#FHA$f5wvb7 zT}_EmX9uJ!7ZF^o-L!A^vFCb9)mW5{qX%ryg9brNh)jDB)<{C!R#uHZEJf3XrpTh2 z$7WsP3&fg{Au7~A{U+CI)!TF7{4Bb482tX%Ok}6usX=fYW105~s%A)be?C^CKyH6@ zd(kG6NxK?lmMFDMZ!Y8W@q?(Sl(UTWC;Pg;AA|G|+LO~`F>P!st82?^l^JG(=mrC^ zCA6)kYoecB%9XQlwe3v~ZF^mqqtH7gce5Ynj;dQ? zRryrYrha9d)nWj?q4zHxXDAPGj!0lLAx;QMo63|DmT8plnCAC}Fpi)6*%CmYEFyxQ zfbfv7O^GL5mP6!`vW686k+HTx?D85kucB5t2G$&j?Ld6EZ$(7oUg`882oX7t9TW!0TE0|%Osg;4@W`Q?F zI{CCu_SX>H@ByG{oI^@#l6C?MJj-Zxjx~-k+arP_k&uC+`WZo|P)NX{^5FHfD|zcx zRYc++@QRK9DAv;gbl@N7RaG;n>nHm#h+)6GuQT)R1a=BXDpeO|X_BOxV#s_w8bS}9 z;{v}2+zVK?}A0BZ(Y=FLT?QQ z&yOyOV~EtiTUG-XmVZn|HPHyHwC>qOcw)%8&1`gU@6Swa9*&jtH-nyS~plqgNIO{4HGk>-Ddi|wBE%n>Vct&q? zl;h>T=OGqef)k?4cB8X=)#k3=#Au}KSGlF7%Nt&JAXZ<}0lrXgZ( zMSkrRIybxlSSnhqK~Qa;h-+}Q*#wV7ZdidgURTt{rgQi2_dfV-nav`feYud;osCW~ z^!_g9-?JD+#H~pLl6r?F=a(yS{(Y~XR91=c!AI|ooZ%-kD?>Hmwf0+F#mo*5*QW-N zLhT_Ox$+Bo6dRBo0CL|S$N(eEwQ<1u(IM)DJVSU3Ez#3DiJp_)zfmy$6*0$x@dy{z z5wn|r?_zch{A#HDg`x!4Es{GH%Y=Kq*A*zA;saV(tHp#4Hlgy!?$&9Tx-@>bX@QIS zKa{Tv@wIMb6t+uvG-C+86zKZ z!m>{8K)S$xKq6RaKM&2Q!;T4&;n!)Vx<1EzwrXjHpQ;-uk2>06jWwaqeBd3wAuhV? 
ztY*y-Ic{<#J|G;kq1z4=ozXmqgorbco~u%dc0cQXb3ttI9O4vExP9^Wb~pY8@@-}U zRxXaf^o4YtKhat;wBov9eG8|i$;Ap{w2fR7UI3`F6aIL$cirg@W;b@~#eoL%uO(5} zIF6t*>^QkR6y2pEqOj#9|SuC^K@+yW;@sKvLLhpCf8ggL|X_FXatqNE(o~$xjwN9;T@YPA9J!6apW;TTc z4|sg%cUyLrR5AK~Avc;5MTaAF0tCS+YK}D(T$`Q>`H6|?9;Cobh8Xw3LIz^7PPYBH zdN2dc0a@Gg>`VND&N)7zm_6qWw^!oCkgl|sh;BO*h)9p|I|l(Ik&aY|{9kYE=BiO` zYvi{d?dnr#--w4qDQ{{rHWgarV;cMk54y5mG9)o9iu{3%t=`>!0f=85yVbr#P%C{bDB9_Fa{HmPd(Wjr(^aBiQZt&Q>i2@!<_iq1n8r^EtRN{mK3hF?o) ze0I${GfZa4!cQHgBEOwKOK0GHwiKERyVE{?pQumieQl98Nx z%~U+iYaHHyQlaHUS>u^>uep5+$D;2gxf|u#)RyM`ho}AH4fBwmCnKsNiWKHT{ z_gDXS?ok}T`Xz`lRpT25xnkH_4Gta!f17EeKJ#2cSaJ^yhCk zU(4;_F)GI*08vgx(%9T)N#BEV(&?S{q%eHN7W?Fhmw6wTG6E;{*KCm7L9M^744k^B zY~L|mk5H+QqbJeyMl-k2s%1BZLXZ8SRK-Yzf=t7tY&6Yj?Sa6EM_W7&J zLFypuUCxjkaot4t_+1b*B4W>h>-nKX<{=e*3Pd7f0F^tR-r}NWyBV+GIMrgkPmG-Z zb=z;5N*|qk$rW8bO=+E=3(wwO?htb+hHy8WOeIP-@s2j?v47!1OMO~D{fD4EYJ1lT z!}~DfbQU;2yQ1LYfX!r+hAaDwRE5-EABo?91czbWy_;muV4kQdp|4EZ3<(AVfqoBy-5Ctq7QTX59NUZq~*5L9U5TbWqZ&0=PtxHp-T%z z^?*Im<_@O9br^Ffv-XNz466i3lKNyiVlUEsg#)l{_9CG45+(8G*O~VkUo(2H{ZgO) z)!NlAhF^Ww+}S(1*VUzLhi|baKK$ttTR?g~5Ws!PPuntss0ns7ct@CYGC^Mk3K1Aa z*?lWE|EOvl5(TyQi#;gs#e}xNP&L)ay9|>vSovs?Qlq2$X8n+_Pw;^$z1%-()%00R@H!-@*N2qJj&rhB=lF<|Ja;^>_bw{FJzgTQ! z5aRxJ*rcyMkbbh#I}KU&a%z2;kBcX5&@rVGyKE;^49byJL439pi^*UGfb`j|RSoWV zi>wq|2bnuT-ZW}3mShS@XyOQeUbR`=hoe``Dk=(hXusjjyP#L_v|;M0cJ!-hS`ubZ zTVxjbD_+bGr@?k|cMg7j-76LSBCcOn#8uO>UvMSlf{<|ME z>zp@p`v&YO!sUat;Us>qrr(+(E*ssSc<#!IXGNZ-MutL3;u?)q1K~JNul+e*;`~j@ z{k7qC9)5Ji10#@1k$a6b(8?&=+3d((O00yuD8T)3vF4e~ZMZ4$H4>#WTsPu$8Ma1t5}f*@p&pV_xz# z&TA5h@3f4kcwnYFZMOwX%&?95QoIXvKA|2;FZWJ{7;L;r{feW-S9jQY!eMro0cMJa@6MS6LY@Q$9k!f8uQu1i^v)U#DEHQ$RRBNHqelp877cN(A_0`c*W(;EZ z`8T5mDkNO!BPA5~FSDcJT7aA~vex@uQXl`lXq@M|B*CGRexE z*6jo$;rd6h$}X1;{y3QG_4L`yzL=7G&2*Vh^yd6wDZ{Q!rg88Tvrz-1#S-1KKlj*p z7*;0Z@Mjp`2kG^3OcivVh5A5rsMSHU?X)ta{OTH67TzFO;K5wl+>(+0OnLACV}&Z2 z3K2-)3TH6e)x>eV87~{rCq4DSA;&C(W&y=J2aa0@*0F|#c9u;or{4?lCLfEva8y%F zQgOyqM_JmRn>`ik50Enr%80wzC1Z0R^Pn_GeCCJY9lMj8njgZ%nN(Y23d1aJFFR6QARhu{IQxoz&~G*1&uB&{x+dVV;t|H)t|Ev{-^42w~k zrqc7sZR8bGf@bVe#Rl>Vm<<#7Y~V!h4SDo4bTFXI`eYyS_cbza2N+{~@LrTsEjgfD z@a%m2&90w~7b}JNy@K6L)pT0_F1(xfPiHu$sIS+e_6e(mN8@|oEcdCSE!3{k7w<`8 z**jvp`<(Yb6dWLn_q+d4nYjA_g?1=wKu%F{X^Lj zB2;pj>1Bh)cQ$f48uc68i*{8r{b3WZbfksm$Bwtt5s4uwVv1@h3+G;x5EmV{QXf}0 zxeYI1U&=O`7;Ems3AnFfY{o1?%GW32nSJM4r6tNdLL+Ksp4_}e16vB4;flRWU%=t( zmBjZCcJm!7?fmWIxQSmXvA|h?V^xDpH58FAQoOp)@4pH@tOqK?A7{Cf+$lWuT;`T; z$~~#ST-QK0jUzt2{C;-xO^e6#wmn#L2JP297yZCJg|ny20us>76=^pgC1;wVJbP;U3@3gY*}&losmlAQ3_ilUn~T-5M<U6tGHTCIlp=hc;2!QQ< zr0A1=*)u6p4XNW{f`9sWQ0h=%(|54xugbMQVfersm!C;Rllk-2A#=tF$Pz$+-Gxga z>y3ster*0hZbSR6+(}7i1|Vz$yZ$9xcPMH*CJZLC8MiolsG}@B05(@*alsC)o{OSd z1*2aKOB)OG7wfIhhGaL_NADY%KNK|>rR-NnrFR&UV!a3JrSUW*hPpdmrJOaWEY&Zp z_~bB&^nq}`9>^7Z(oCLzk>b>8+FKPlz;kpvEFhh^D8%A zf46zv$!>hky$B5?a8%nIAHjfB87}7jMVjtng(E8?09)CsuOsaZPfO4N%U~^b_dWC@ zQW106Iwa*KIri5z=nQAKI|@j@U`l--!rd@O_M>rOJvm4_XlqAMLoQIP`m#4eCFYMe zo;p0JX57rnrOZ(2u5~MJ)pqCy(QS(}O7CGlbL_S84d+>P;Y8|>J^!&D>ft3IC0xL` zHdC?tAEPxZ(#(>~uaM|=Sr$xv;_5h_-%r`AczA6fL#9i-_`iWcU;aZuma<+uo-?-{ zoz(krC-~R2AFNpv31=@7IpN(5cDI=bY0AgWqh=Gv>ds= zH`Sa0M3trwTmfqHW~?>S71K}w<)_*8f3S6mqWsD()?b=jNv*w7#9nKOyuRTC*f64cgZgldJ z>suarsk6%dcUO>4%}8@@#;*U1ttdJ{#*-Ppne`{2+jx1*TORH4E@=MfeV6)`<5cv< z$kVT7po^i6K_nn<$or_EiwyaZGGkLv);R!%W!2Na{BM9eNv~HH%77r3uRbije6 z&V)iV;0Zg^5jT~NQPm-%|Dm8|y+zgpovt(3re`+7nQNE>k~WsvQA%xXO?36)RI!8jD;IYr^dE6DXkHn-&+ z&~ef>HuTdsizr&~SH&mJG*Mn@Qu&uahrA|WI+_qsZzQr3h4JmI=JIfQoU=%ym0n!p zRDr2g>C?9-^_-ABia$lefrb`uHOhN(BrOv zfz*Nu)o7l$H#C|~0+`*xGmo!o)WgZu#*XGHPgSe)l{I6!HACSJ;N5OaZ;k<9)&5lg 
zKY^-j;*!AvzPSNJpXZWAC$nIWo08jriq^872hIu#zwk$cdh*0jw5d^d$8lTDU#>wn zv=RXO>d&U0NxwVH5rP!F)YhSVTq?$j8K8{e=5{~W1p*Phl42*buZEEqf4xOe=~3uife$3h5Gz=u0XS~`c~qz6>|;=qxkiF z?o`+JmrhAV79?a^I%aA&-cBIA(Jxc4nRGbK94GjjuE|H~!dXf=z!d7&rwbtHws69? zX-))4-Lj@`b;ATNr%zCxIrhCX3R0=O+JGS(5gc_uUHFu-kq(t^YUY=^w$q$1e3#Er zpg3oK(RNsDsuHAEnyJeyjePUjSa|Y|;|QTv;3sTj?RjG&C_V{ss>^Hhg!&uMVA=QiB2~kiW_S~~lks;EZI_=za z`<%mO!2<)$jqmC2N21O2N7R4#wOhK0&s1>9P2$Fi)A31V_XNUY&z;=?-@PM!4z^F8 zfMJf@6eikwmpNvi&2!Qx_mj5pt{i+M50Nlps|LTmH{Zr9(jw!$w|`x^;fLkVUrG<&Gr-bHTTOB?-o)r^50Y^ zJGq}u`V-syaS2>3uv_V?nE}uSGM&)#TDHaa+WSitEsxS?7Oa+;Iyj(Ga2HLe9MnB6?uNZnK-ko4GTQ)U!wAuJq-Xd4DtEbVY{xVO7{`#<-Fi=GG{@?nh zwy1&@I|<9Lbd=w!zhJJ5)Ie4?ao2Eh5jklPCW}$;W-Zf{OGznCv9P zS5;Le7uk+owWheZ)_Maq=^{5jLaX|4y>VC^O;37c=knpZ>-k-Y*FgtlryWcs^z4rq=&g=W; zro1L9oT`{>^HV__mCk~_-(}w>glX$GBE;P!{Ly@A=S0Ln!r`uvL}2V=t=MG?>4n`X zAQVJ$cGJD$rHL4T8s~ad+vJEfw-}kJC!FIsVN6yqwtd)q>-;ZE!tv=vV(9?Xb1DcU z8dfFvYGR1J>tHPM1&a61NEc>-^kgt8%tey~`FCN#nu9Usd<485oE*r1lEm4#(6(Sz zOV+d9ksr?Pm;bI%R8J|{9cP!hW>Xjh>QuK@U9=A#d$=d$?%ESl`-@E)6K9Jq_P3TZ zFV)}8?`nD!mybC#TG>@)s!(>gz)X~2p&Q~rsg6vw90Br)r9_`mmp9<~4yP3B>3hEB zf4EoOaoy;F#XGnM(Gge?3gX5Xb~iqZWj}!jz%BY!Ny9Whf}%Z(ArjdR=bxbe~d$ zm|akiI5}sTw3Emw5eTVkHqK$;$^z{XGCQDDE(y8fU{P z(C>zpY(3#tj*okVjY}xwGaW9NAfg*pf|2T%4xl`5t=O>l7s-QXs)qD$Fh*-S2&dC_ zSyZ;u5nj0T(mjPh8j6iI?raO&XU{oUH^wf;$1WFgDz^%X0?13Ak@n08N(pJ!~DLd7E zaMOLSgI|A}S2)tU=!>xeQvL;S>S&I)WYJ>xKgZC!9otPBv^m)%>bB^Pe{@Pg`rhb= zyZk;E;%b&Jx_z(!egi3Ba-1K}^y&uhr=)MSPJ(<{uhWrDd#LCCB&$8J4%~}vw)w>5 z9h=kxPw+xY>8S$(m6uL=o?nN6pd*1yy-4aySc1<_@M*tlp6cg-MoJ0q?@}M~SzMPL z0~@BZP>5kezOz_KjyFy$QLed45ZYZ6-EG3qy}AL7%gin4HZHvu z^s3g|(g{zeLU+^WNl$Vcfm4A*b#piJ1xW+GXDW+)Q|+mXYtB|4mq~r`p}{JiKfw|} z26&PN%(31N*^!d0;bni^pGc$NqAvgLGd+d^2xC+4)MjEIJGeY^*=}Ay@U5(Md@io5 zu%hEb6GJx6Mfy(^1B)B>o_9gssRyDXXw`bSP@-j`eh0ToBY3v3Ou~u( z2oir3zSTb+fH-9IodFjPc;oqp_eYP!PB1*a7tF=Vw_9mf9KX8%n21T$XYOMBc8Bim zw*}^=)%)MOj9p>POlRQ&W}YjCE@te=X~J$GkeTz2o)PVgl!HoP;8*f40v+{*V>%1K zJLf+fhM3XudZY+9zF;vUyZEcP6|iiCyCeWr(w?t+rm3_kfSA;+E0YTD2-Lg& z6u*+EBsycgSSiE!Qbom@vI}{V9 zM(d1WBr;m+=B@PU7}?G`o#SlRgaQbkO-*Y+mvJwRgGJztwL}iUvkO?ZgbE9=F-aN%5g#dtrCb%gvFQLBS~t?MM!uvDm#3EI*28epq9>M6BT`uWlv7 z6D33-YbNyaG_v!kVLCvCdqSb;VIMoixWr*W{ezpC4# zf=}Q7#F6~g#(cFE+~JA#s*?$U11m=cS9=rk;GLo}$BQvPNve2TQxfzeo_PKbBFGN> z=POgwze!@WBD7PS(%HY-{$TUOnkP_)7|fE)g_by2^wnUF&a@2}*n zxBkA?FPX?KVlp)Tsz*vG@InR2|8NZa3(wx=lDaV>>ev6$n^)f++q4EAC*vH3nLRFA zR_x>k=TO1e;OvFZY{(o39zuQn@$&T(@v4bqVA%|7E^n04D~F#$nbJM)mj}PY3z5E( zPGRm<>VlVq-d>6=^?y_CZ0sAb74~HpG1bnesajkPm}6WJ)bXoM*?nkkaTCb}f5$LW^~A{IounGOJCHpX1Aw9qsJ zaRSi^b=D?CV5D~W>!`r51I!+5oa)PMz?sz3muF3wPR_q)?gjS%DF@>K6Wt{T zG9<0~+A|f+Y27d>cc`|D3o!?1uq@t^@BqhlF@#JihzysWaM>0)ObPcln6Z{{Ac-C3 zfH$DfXRZ>Y1ulJs)a~#<4YADD^1-~74d6@_kER^eRe|2T=|`@a(StD>*YHw2b)169 zLn&E76TFbBY0fh~DK1mxg(z*enq_x`ZiK^ByhVHYh5sSkkbU z5u!7nxdZOEu#*Rp9GQ@H5NtbP9ZNtskmPJ~85h6uwOxV$H@dv!bZknkV`~4xu0S4G zXpEPx33TUn6lxa=HD*TQc`)(ZANPG#Q7KX>c?zPL*zh;%2>dwKQp1 z&KB#K=;Ldf$hI@*)xDD3%V!cKQtPscb;u9dmD^k0HjOwd{c8G=#R?f`{A)LW zWcG4fb&%+dzUm)?L2r*~vaiRyzxA~V74>^L5|VS!^Pb`x`w+7T%o)TR#J?_4xL>|a zoJGk|4);9gO}lV*v9&{Ct4EYlGfybqHNC3w?Le}fa{^UAJBIw3-7?5&z~B;+rVT67 zyYkm8IC?xISm|0SS^aW|KnN_km#NpN=v5o2NY?5&$faD1Tx_Pa7heiZI`BAPg_N4G z>_k(N8y72jlTs)lilVLhjmHTc6nN9zO#qx@L{cqK|958rnbb zzm;xOxD2L2TU)9ym)9#lvfIB5X-iIp#BJmroD*UHbek_ndmuG#%zM@fV3DbAWJ`T} zZz#Hkbz8&OMWUG|=mA{N4mCle=EEj;8w`Od{tK@SXlq z@3+pZ){^slfy&~GLBKa~FSDfX_B0OMZYU!y@;1{>bZ|d)u_YO0lXC^F^?bJD9)ZgQ zOGENVh=F1zCxQ4aiAv@TO>_IL*3mu%3=^J*iS4;v;?%Iq;0?((sBVOZ*bZglf!=34 z_P(DYm`?s^4>03QXzo@{42%%T0S$f06B4_*jChn;;ip{YT8gaJw0R=WlX6QTQN&zc 
z=tKGDLk&IuxnfrcVLTj-^25Lv;d{uQQucWlXT$gF;*xO~wTJ1V2DDh@YR>s1w_j742Pm;DYQ6Ul>`oXs2?{VZ0tp-{~>^<-LuiU6yxf|nP) zWn2N6r`r){ZZPg!9w7Cz(ISxYbTBmaQ6zy;#MgH`D~P_>I%o$a9{(@COKOe(%dZZu zCl_yl-I{R0+(=KRL3*5}z?q`)olab(043Ec$8X*`pS_n4NsEGMC^`!(64s-|TIQdd1Vd zzA@~t#vQF%4}rw9d6`<*j0yM7{?YO$F4NaFTF~4WK^h`} z%yxiFQ;*nhG>-~#P8Zpk;)Y-iLh>jkIzo9BTh~gAFDl$sI6{*u z>sffSrN_J*K$twE>vOpGPFdU(T+OA90>L(e@2^rZgUy6(C>9 zwyd%p`kN;FLI4>U_v2j?+yZJf2&QRl2`&y#bCdMBBqOGx!Fnk{1#gk9wSHS6STTG? z>iL6X{S`r^U^@Vr;i_VlMWKpd4Em?5q0|RCwAIOL)o0u;2bjUA;3t)kbZ%zi5~(SU zzcB9dfJ)mm->H01x{I5{q-1GDJD(*e>RqiQ-gY~3LH{h#vbjOndf-muZfL(UVi{(1QQbq< zD2;9f7+7mQ!aDr_T085%CjW4aqm-h6fOHH5q(h`@qS6AAqmh>G9!#XWK}l(ll5R$W zv~ivH>$={T{`|dsMhKYJrFiKhYxw-!tFS;yok#aX zS7srYmo5L$66C7`F)_)uErew@`+#rVXs%iNVjC2mh^-nydI8^x!sAObDX@DcE!g~3 zuL9L42GDdaYg^2F_|-yvqUfMHgy`8n8`>;S^%+Z$cmSc`ognyz<9&4N}CqHLUF|f70ez+c>|3t-iJVKHlIexk=Kp zR|%GjzF+9M6j>|!b63Y~CMjx+KF+q%l`>Y&?Syl{6yt&@t3wckpQd*^W*UwMBeed+ zJXqa0E8M+eTo+yS{txHHFhg05Dq`(COqLC7(TvdTDki>-un21mXx(xwYWkC(eYqm_ z`2KsKLl2k?VX>kFFZ9*cF{j&&YuK@Rt?mk8@;KLbKs?^wYVL8NMAjI$h%T54At<5Y z4V@{!dxT<*SJyahg=EJWes=HVI#duw#0(A=rOi^LTx!lw&Mko_Gq-u=_2FoED%;#A z)wm}jZC*vc_BsxGMoNolKWvr!WXy{}gSG>G2Gt;y4$E^Bg>21ar)~_PEphKQxBjSF z0n&m@8*K-2IbsH}{i@b+Lq927vBd4Aq@E$vhYQNR55~zeniuEi9LA)QmdS8+(fTCWJ%BsD#JZ9pG&b ziWXc^gX`-)9nheCDJsWeS{XJ80Y zPcJ$X#6s-f^-#r!_+{o3$}$S!MztgKIF?eQ4ZwY-pHtL4ehpHR8>I+ zB*P2294Zc9Px>)5UsmOX(G7)m@KxJ&aUhza@6&%Y2cJ8!4jS{j3MxH+wCw7?GaVrn zblDka(&~BSorfRv_%H)PB0{>Hw_VkL10cF${A#Dh77sqnv4gEpl!&ckJb#N5 znH9O(M@dqTdVdTApmngus!~I{HiamcmqCp1@ozk~#RiI<$?NP_!_UuxY%L7x)B^vZ zB)*#A2FZXqp-30=%_`voChm|^s&sOLP;2P7difTsxS`vqq{(x9@GZ<`$#%2If5|Lt zS$n{XH(WG&dUuH)FCkM%sTf2ZS)R`Atl3b^SFjE5WaSwFxvBFXr%9o?3ipIoDY;k8 zqbs7-=6{u=*p}Vmxc^|-$fI{tj}$7|R#L0fM^yNS!tEbjpT9I(vj5m7Z%e+$CH9}1xQ+J-6isqu~b$YsmBZktZ^ zfye(sLBcW5z7$T{OOCc%&=tVJj2o?*$iBJQa*f7v4nX7kG!U03Dk;;V8gqF8Azoa{ zG3bFI##9PA`lj}gr zE&#rIgZZ*=gt}%0P(*?6Ukt_%q-IKH+iixe6tIOsPk~rkJ@rQO;)zJWFdyG7W6}r8 zeV$rLd6i4@iLVi;0 z$u1A2#_`#Q4%^QljpE)NVIdFi1QO65*VT?Fd@<81rIQ-VV_#cpaNaXo89%J7UQ~FH zup}E0_zX4}Qgob*S3hUr#QND0u6IZGFY1QiGuEcD34IhPAu$?6F5TaYsgiQ5_ zt;x9CF0L6T?GyjiB!?#~riM~2^R_*J>wn$V@iY}$5)7$9*u39UfteJ9tcSQ}uiL$b zqdMTs*FO`yQ!B7b%DT67&R5m6GPo%Jxva<%$CFaG5icoR;<~5mJUW+Q1`VuWYfH`5 z0YPPo`<>evh2F8c;U?4ge}5wVohK1ObWE*HIrW;lSV@4g@~0Q!oIi#&`b`5P_j%ga zYuS?2)XglQEOsj0=luUkt_5J8=yeATV_5N(C;J^bAe#hvW~n0go!He0=VvTq(kwDD zsKr$N;O1Z&C>SthMN#L6Ayz65Vl(a%l*aee}5YHgSf-=@oB zFtEODWZmA!KlGq9VNC&xggd{)=;naNw(;kFYA6AIvam?Dl;5(Yan_y!6S9lobT3*go zMXC69daiMPov(`!j}5TA0H5z?DV29%M;_qfK=zj$pQx3X6T zH;tWBjws|cSf0&85?{}(uX=O|L4ORW!CIdgk#ME&JJksm%*)H=DsR!Rc+EMnbtD$q zNzyV(mOU4e4YJTjaYTo_Kj&E$mPkzB$X}f7@hq(qu><2t$>J1lUN56^<>lawET{t9Q z4^ZEItI(WaVmKuY04Xp1mT|q-g@f7NFydb4SAtGd)%G<#u_vVd;t)z3#D74Cw{Rg> z%(t-OwY2PDAZm9r;>@pnvE%eJT8vSdwk8W|XBHWQFoJDF1#xYN{Oek1EI+sUt`z-b zUXi!_^6eMy)6ax7xWxg+(cvZ%GFW)E$}}W_0|xefb(cYBjx7s*s;-NT`I~WHYGF?u zaQw)&%NXGf3Pg;_?5RytvWDo{(AQRM&B~-B0q%EqpVALU`5N%rng*cF$`N{-*&g~~N& z6EEj<_cd>+Amoc+lZVtU4!fJ$bZp)aYhZU0PPwEHXb5-~65r>u^c2mr-^X1`d9+bW zJiZat#pzqPIku(Zi;Yp-)E6$70R&d4R@V*Pj`x@Qu~FD9TQe#s?@-!F0BAvl>r2_> z-+}vqjf+&xJ*$eNRr$;>?Grh^)&DBgaU-zP@kU zb}wPnW(3m}5TaA>9_8LN!JD-j6}I@C{9$rbGaYNFwAK0xji`G zo971yrt)V-e*>_}gV4u5l2y;m+?~tg=zA!(2%Mg{B%=UH`Z2bT;f>A3jV7jwz|lnfcgddIFwCb+&2!wCS?(BrgSB9%?8A zY*miXRVDkn)wRX=Tn#5hvvzALZnUsdSBJJMhthXbhx4_3O=->#mcin9*m%wmZ{XsB z+@`88X_RLtIt?0%k(zHwfzXezPu@3GW1(1eE655Z?=*!6l~oe|J^5-$-?o<6i5!Dt zE;X}rBX3o!Hr5Kkkw2{*_@M5s6rIDVY~8h$2FBIgGc6zYlzAE}Md0gS%O#OUq#dLw^(lgT(kh*B+^TmvL10*7FhO!#xF()P4hb zy_=O$PO@jgqVqW_l?`qJX 
zd4{?e?Wd11LUWD{YbxuWGs4KhLg&Nbmb9PqpF+@_p_YVFlkXdkO{v_lb*VG`nc96^ zm|g^NZuyJ5&v)8|Mk?=2LtI0;)B`-LWhrCQjU@HvHL;Uew_(Ok=3Op;5n8dQ{~V&C zyG83a*kLUvpkb$#orl!G3I>ieo@y>zHb0iz6!`{o%8i9JPnm2pc->;R%RY~gmgp}$ zLh;#2aw0k>YD)F*UahFJ479%-r7|X0)J$6MB;Cdy=j_sjd?AMam|%lm{h%-3;q}Bt zf--+xX&6Fc%Nb@jY9S#Fr&7JQrd$m(I-?$*7V(ys!wlj%33Hk<>Mv%eBGTOeSI3~YO#LfNIwEZ8>FCt_g zhG+>vpg{)Kfp=x?XQ-Nyoa5w%?I!~M;Rni>+m8~*ro&BHhpS6I>d~h#pR#$;?q7(& zOVKYi^fNvqp(V;ZDfRC^(#oc&K&m(mU>25!tEC~v4pj4`uD4X-w&@dJ^ZT>BrSeVC zB(6`lI|Hjw!uJ1`(R?#hZ-aM|{jP09;CfI$HkEoVRMlh)76EwzSc>WYe!KNL`}nm1 zmAoU6xwvRg|WDX)Yv}E#;n(YrkDzT$>H_@6XB1i?^!m+vpEULNp-S7mGSDJyU!9WUhB&Do^I zD_f4+ev)&R;da=RU;TMD_$%?7B#5sOzcvEqvqFbh)wx$IFndyEWkWQyx9rC{osSTF zSY*2P=y7D9V~%0o_c6Ow6WVUwWRh)EV}()GS`y*w%-SqR3qH zQBQUfT8HU2W)fJ0csf>hGbH_e7JZj+6B91;cAKfHQKhKsQ|*3>V2Q@V6#PfWF+t3-x72N?OC_)m(GJE-$`2A)i;bzih- zPHhG+w`$;lU0SdT3df&_k8c+7E?;>&4M_~@U;VwhIggz+Yls-AFG8}T-&)HKwW}>S zODR5V4Be6X!)idcnnlSCdJp4*4JP83V&0QKp^oSAAShXf2Wy{e@2v8UUDM9Pu+H^R z1>nbn+j5IQ=$m(5F>ChcDgjruVdUb{RexLmvM;);i^6d@IK*MCc{Im-)xMeAdWpNb z(cMMf*J%HRax~-jV+FXCyWI`v=7FzatMR#+w0`(y@<}9+ht33ZzwACX|K}*EjfbI z z{n@EUoBwMd#apwN5DR*xJIn5~C;owUFZJF@!5hufi*^v}ks24aYm@J+&Icbh(Kzu% zeOX_14@x`u59iO^HFken6PiaQ%e`i#?8KO!L={$4RHoUrQbrvYy^lz$fVrc;xA(~!&XE@@-&E+Sq=_AriZ}0M$@mPJy>#kg zK`S7JVhU*!w8PGLEhj{K=)yE-np1_KT{?l&^5IQx+fFJ@XK@P;9r4c0rPtBWt)hjx zn3G+DL5D&en*^dRKB}^ppG+j<)}VB$6GXx7SCyyl5undj9BDWfN#`Ta>jbaFkB#}* zJx+?UYHcv@&Dq2^bu!i48N+_W6;9qpQvT#GN=OPpMi<{C2$|D~t&1NUcnk`)amh7W z*3}{aH=*S&c4Xd(Ve)HnL#6(=8s7joXl0n1vvOn1Tl{{q>Rkh!talGjne(!1SOK04 zI}DcHVDUD+7$Fm-DygJi7KY3-PiUO1z;tAcQXF%f!egvFHwNLJX!M`6Ybmu<(K+k| zCAZ!ipoGD~*q<+u&-0kax@j6L<;*d6*JDld^aV$SlsVyS?qCCk8LT>e-l$qb?wg|X zec52O?ye3`$26G0pA+ZN2~SUlGGYN+s4TGLur=G}^1ZslEZ=vKpc4^PLyS&`S})|Y z1Jsg4cSFg9ltL40_A@jG6+oduY?yA0WN^v=6LHIQHKQEYZ+nVRz*$Qc&gD#%pVu~F zaH)E>?WjPI+Q2~5&8<}-TR~IXT&wV^p>0-QykD+c)C7-vGqtn~XSsrL>fyv3y_*x) zTm}7K)1w!Kw44({iodY6R*6W>t)GmI0!(%&1NV;)^Xha^))bVJ+Wf zl^oICvF0Uv|7>lo=KI#zJv#ww?f-C;Y=<|PWa*HbK~qe^bG*V?QPpgS__DwDd@JK` z9tsr5Uu_F<(sDQ};c~<~L^UDvwgFE?*Tgn!P0k(Pg z1(qwG3_52@gJsRlIBlBo^O$6i6v)NL1?M~j9{)r%pLCMOKeJzFT{w#`=N!vO-();J z+G_PU^5vNC1UnwAS8g(gG&V|nSp*P@@PxH74&SHsxbUxhhWxWj}`% zaZSO+*Gw5vVrYgIq!;`_SYKfI2IdgLHdAS2u(*jspe~3*Gaeu|bZR6IXUZxB6U|l{ zD8$*`#A*3XSY~PG=$7E@p#PP{zsWnJ6BMV@71Wg}o^`Sw<&s-B1_hS^=;a`)+9+Y%$wYEqL+(;B{Q*@$^DGa0mJ-KFPfoRc%PW?l3g zX3iOBwIW_!ifmz}$`^7HfM6Z)--zLprLfQC%>GJocOmm#bvGk0c9EpT^(pQ06T&=g z6X+@9b~_ZeL;GOD<@v+sPmy>zUwiX1sRwRW}mt~KBb%AyWdw)27OM) zN46pf74L;CNTOhcqaG7<5D5zi?RcTWWxP{And5z1BKLA|3nzw z_#;STrIip}Kb^MkPIv3{-=S2C(^9{Eda#NEk5!t8QE^B-ZYcgcdG7cTLva9D%G+ER zWVG$`X$yVR)OMfK)7CYY>G7QJ^(wD-!8=1whCN&B=SnMy;&**F4@fQqxl4?{TDB>5 zyIq$0#ZJnwQ}nt+Hr1fi{lZb@a(u#Or__nwBUYsuTwF$;KFnM6$J(yTupR|MW&%cg zwj&L)Q}U?^!+tax>4M@pSmsqNLynB_pwbxzq|FOI-7Nb*sOck>*_a3sz^E?9KALDA zVjWcnx?FPF&qACxGy?&O(bp9Q&iZ|ga+a|~aE&M*r!8o_p?53oGp5#}&h}umR`*IU zRuTXJ!n}J6Q|wOuO}5Jz^=6hgE>yGd1#|NV%u?j-@pLsqsi_3W9Cv-k{Rr~Po!XkA zibvkHOrel?p0=loeuVp0Z` zWxidr!ST(%Qmkc&fmEZcSeYS@Z_N7cQq-j0>KhJ4AKgI6(QYu>Yes`Jm$Rot&Sz#c zIF@#kpFDlV1d13ns*qfahMX4;~vTzl)r_)Fj#~?W&96^=ni+=B}TofZ0!gi z>Ai#_lj?~nMO0GwzD*Gb=;U9GO)x#ndm5as?9Imgfgtfo#+U^MzL9pmdwi>^R8 z7_iJFQ1{IpZ)-yFRsm8R_^entGF{||I-BWxCj*;{a@`w}an`o!SJ6W^a>#ZFmyJW# zB&oqBi9}0F7T^vqNEa~_&u!_H@2$V78jC5@_K%QepI;uk87#SPI=9$U`bSG|O0T&Ct7fg_&EI?jlePwS`shm*YZ z2M&ofNGS2OH0cWclz#DIi0E;4%Bh()<{yyhdSj_(i!=mcerjI~ENs_9xe5(;No&3}-V+oe5ORkGfBAkpMC6;q>tQL=BO@bDfQv_mc;XTJ}~l=Xuj z3@TjdR=np}to)erQ7Ujk-9Hg)C28nhxV4zGI`TswF9rd?LeUPMtvNXF-v+N2q;jpS 
zMO{%CuKl1R?33}(fCdJ9f)7?z9n`kQmDJ7bL^0Qo<>f${FkXXT826*DAKQ~3Fqg<9e16GvInA4i>LY4Pd);ME-TV*E~zU~S|vcRER zHK(>He7n4Q{t6g<73Wt0PeTXJ?YHtCHxX^`^CP%dQYltlJ}YWoDE2}_Ohw^XAHpPd z?H6Lro_2}U;)AH?ZV?rY7q7Dwz@#5`0t}oZ%~xc43ZH7iuuOTn2v6#h_PaPprv7I_4%2CyNK{M=^o@zvt$&xwuPd4z~W2k6{vfB-3k>AU3 zyzfO{_NTjIA0n|^qe%wLxG8%RwzD0^6R6q;x=71VZEJhY36K`T+W2(aORz?HrS5Te zAY5w@UfqdkLCYfpHUm1T<>&Gh+B4NkXlo|6Ifyh;2fTMl#3*fz!~Y7k&E-esNQ`J{ z)z53yHD$_o>94trq@C0RupYjlHN`b{qAAua8&qGv)O2@KJDYY-Q9d0zZI|_m>YK0@ zOdcwK)o@-#q5aQFN3HJHnC*JmvXsh-$~Ux)z^b0>KNz7F+>D z(2$t49s>Z6DX>6nLtpuUgJuX_1a6Otf?oK*a1wCyV_7Vc-&Z1=e3>M#Sw* zLnr}cN?(fNZ_LETOA*Q@cul-?%QVIrBtNJ9YUVRjG;OT=Hf^-RCelhex!^TNh>6pKEy3YK%j@lZ#$7;qovHD<;s7jb ztD0H=;4pk(M6hC+vLTGSOQm495O~hb|JP&qjubZ3mltJ9FavBrvmiJOQIe3V8G=IY z!NP}xtc3hZm&J|2<~HCd2)=&5z6MGxd{Bz$cQas3cUI`Ev`cwwd(u1d;RW{{ za&hClzRdIUtFrR~6n~QQ>03`}S030#tSn1g{X;R@hc@w>w9{`(Y&?^_<`2{sKxK?f z*;>!5TvK8H$?D5b+sfClNsjfW{>dc|;{~w`r)t}=RVEv;rT^s*wbr$tWb2!7zVO^! z@o!qytGbm**whG3c@Iq)qC0k(0kc{^6!NldakSy_HE=9=4tN~>p4)}Ks3}5xn)IqZ zx>O4Y5U!iJ%uabTnu5G=Sm#c6Aia)3%TJ8->I!g?h8j<1Y zkrfV%0l~v|SE);;_wdOLKh?x4k>Vl_kqfi4G5K`9UTt$&EX9gOa@~cGEHHd#+znZI zdQ6wFg!OIJ2xl3OV!Cv^US}f9QT)SqG1vh-#v1-Gz7Wy)WNe!LI$l5K%ggr+n>YKm z`(_`o0n^bf(nuhw@GQKfAd=S;3CmOWc_z{-PvpZNbY4jEg!Ryj2Ca&G?f)0Uo$vIQu-s`^S)~cmIYIFo~pbc~lJ{X@$fBSiCCgXbx z!@G|?TX$oX=C^N-LkGH*Yp~cUoAj5ByBdc7z7-PPKXUN-T0tYaTPs>vKI+EuWuLZ2 z_@SsU0GmNs7Yq|xeqE1u@n`3bAKVOx-$hrn>tQ8fo}qOn4QeE-F1$ycFTkA5&9)&- zy$>l$x}}w+^@)xX>7XrUD5uG65wgpC&6;H5645xBrtx#u%~T9dF#ud)BCq$;|Eb0D zGXFkYBO}iv79+BuZ<=LP(mYAD8pvbvGLs@|@tFFt`)fjJJ%`bhSWN0%7e%+ky^ae% z4MX%~YFh12*fR&G_ZX>wUX_o*|2zg^`nGV~>+(v^H4(xm8nbhOe%7l3=B`iflvpJg zp;e&fmf5CV+t@(X|8S0Hc0seYv?yIWn8u^!@h8iP_Dbicsa_aIo)CER5Z3SD0@+%l z@6VN(J-Z5II>K+B+3T=;u35g)Ys$3Eg>$BU>i#J4q$dFw z?yyhT=A|7aaZ_C>ebTBLVh6BAe}n>;w2j_D=2@O^-VCYSiNVx*vOXt0z0*xNG}A!V zYsX*q)!K7+-!;X&!f=yb*Q81ZxSp4d9QoH`yoNx^ncD3(kT9=ylV|#QCY?CgP@f15 zt{ZUlaXa*T=Kto>>;IUycC;5L#7CLf4s4zxeJJFEZYX>Jc(Ia)8>2SkZT z@WQ05w;4Q46}Q`aT&F*gyqoYlG}HHm-Vh^eiqIPKU{ctF{mlD+o;Rcpl}Df0l6|xA zDa}8$X4?`cSEPx{RP3Bb8(Nt%Aw)~J^j$FXO@^UO#~wF6p<`qt=Zjb&)RuPHk9Mys zx8!x2f6#X8UYiIP)Q@2yLO^c?6m3BAvgI$(@II?suz#H?#gS%NYg&<~}<=G*8E;Wq7jvE#JL z-kU`+A$?q%*Jz$q3<+l5j5vrN?EnY$kw)i{Itz#hyjWAq*FP2<{&skA^rkWEV^MZz zLzmwMeKrfr5KqdbX{k_=wHz($e8&ci?YGmy{!W(yv7LGXpOS-J)NjaNRmd()vhKi@#Z$+W+elx&2{_QoQ|wojh`(~&G0B6#ZR zK}x+V`Duq7Fp^Kc;(Wk7o_%L@F?vz!`q#5q1h=Y^Ewx_Isi9*k5HJYoyAh2Zm zeD_XzwES`D2-4c>^MQiS4k#G8egxZP`SbQIcdH}SCL}v^yaB#D0JC|L)AF}>FIwkt z>vUpu*U9Ou;@(t!9duU@%@JSG;&d^bGZbfNIx!FUe7UZXs8ciDAgn>My84#lDLdP8 zcYNCHvM=tKgxxQ^_^?3-vq~7ZVmkkzkZXM9)|VMm?exiAz0K)&?}#oQ_`fxhRF1>@ft<8zvAZ3iTmCgS!HbA*z>F+ny>!Z)*%P(`H0f} z7mz5sYVzL(R0(V$)2<}XCetB5o|QO!=yN)N3}4biO+`q#BsWg@%C0&rFD`>lBZ}XP zR@jMoanX2^$MEn}B6X#IOl7F!o014&Ae;~co9K$W|VZD26*M4=1h>$k>TSg*pn zyR6{9n|i`_n(+iWuUmi3RSEbX&DVFHdv%@%l{yMY4qswZZq(ZX*vL?$(-oiSZ9mpp zUm8iL@S->W+@;!v_r&$_>wOywUtL-b95+-s3IOS$kbVUJusI)540c6K6j%DD8W-O{ z36k8C^sN#N=fo7OxEk<{Hao%vBfgy8~*e0dqp_J7@1}X2lZwGhbsQ$ibB7`Q2G%QaiMQ` z$2Q(7h?-marm(g$=othPX#e*ZLZkkroH>7 zl_00~rVYJo7wz8i)Gu|#bTs;9+TS%}wUp)At&$Dd(xdp&BaAj{LNd*w1O_XD!a}#X z-we9NV$w=CQF+u@cJ74qBg?8$s+6jauk~F8ha4V-#D2#j@Q%1QB4l6tT4LKnx^l{% z?zTq({al_KmrGx$6GF5NHYlWRIbzVNTM)|PXrpH)nk@1PtM8#49&XvE)hbhYjqZVp z)L#L#|Kwi||DOf;{}u63=a{D+)m=F#Q3pNqVd8D0^r6=JdcbrS9yYC)jgjz;=my{m zpvktHE?&1I@D|8T3c*KJcuyN4?YENge%mzTcFLxDuS#lV`;d8v+wNQ(pAeHd?6dYC z4!>wLlP2bt1%$yLx_;{S5mAn7A%@00Er9-?cNqv;K%CD^p}Ipxd9pI==^te_i=1BK zzUwmHv#pN$=$oZJMPc)Q!JFTOxJ=b0eX8{;Pt>cJ=(p0D2$7>OGki-cfxsISJEQ0) z+!r$8o6z7I#mpJ_yz4j&pL_*~nJjJnR!|$)uqS`}Ay!^4G 
zg^z(%;28f4Lp3(?SfMw1gtY`+)${CpR^9Q97qTcWcN{{1^1;H>m>KCWws^6y79M@PjN;KR zwwc$E@Oe9zVvTY+;ZO7Mw{G%;C4WR@Ks2 z_JBc78Efn$o8!^XHsjn*84FqM}3rkJ%X zh*aD;SD%&IRXf`TPV&Yo+gSJ~D_3vBfCJ}IP;$-5II!Pud$%sq6gSUxQ;+OB%V!JA zEmB|cmNVrSJRX^5)ZV<*an1@@LEmEtGa{>hs~=mI!lB=*yu_y+nym3`v2gwz|C;Z@ zk<$E0hKs9XQ*SIA9{t@eCk)VC2-XD8q^ z>8NnCkEq-Jct@ni+JJTjXoi&@|3vZNuKLFyAFZiUgZq$&gzV0o>Y@I}AQw|^36T|i zExgU>iCGa8$NSp16ldaC8v9c&p}f$cWm%RL77S%t%YQi4j=-uwk)Vqsiu|FLwc2KCb|BGPx2n0y;6kervO`lUS3MlVj|mw>OW@ivbyn47yw1SHm(HsC=!$ zHdC!~0amd(H(#Dk@SKrG1K`?bv_W&3G8pJ4y$e#YA;+*+MpDX`eS_`A$FWHor(ZZH zI)8>v+n@k`M>V`HDMxJ41cH5GJKlRCK4@jS74eL;#Y=88<_NKt~sa75?Re)(0#p5=Pykt=ys4YmP1i>#uH7fV*=*n#3s8N}&p@5Zcdp;h5~ zq%VAvHD3De>vk--7%Et&XlM8`?4Nxj+?uW#DxA2n8B&=sfrrWyH+;bQrh%3k@us2( zyyeFAG`m8>aZx=JtpjJC?k(2m-6%oIV+`#>q%^gYlU0{ELhY7KFF^Azoq{HRR3E+ literal 0 HcmV?d00001 diff --git a/models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.md b/models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.md new file mode 100644 index 00000000000..cd35970df88 --- /dev/null +++ b/models/public/human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.md @@ -0,0 +1,41 @@ +# human-pose-estimation-3d-0001 + +## Use Case and High-Level Description + +Multi-person 3D human pose estimation model based on [Lightweight OpenPose](https://arxiv.org/pdf/1811.12004.pdf) and [Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB](https://arxiv.org/pdf/1712.03453.pdf) papers. + +## Example + +![](./human-pose-estimation-3d-0001.jpg) + +## Specification + +| Metric | Value | +|---------------------------------------------------------------|-------------------------| +| MPJPE (mm) | 100.45 | +| GFlops | 18.998 | +| MParams | 5.074 | +| Source framework | PyTorch\* | + +## Performance + +## Inputs + +1. name: "data" , shape: [1x3x256x448] - An input image in the format [BxCxHxW], + where: + + - B - batch size + - C - number of channels + - H - image height + - W - image width + + Expected color order - BGR. + +## Outputs + +1. The net outputs three blobs with shapes: [1, 57, 32, 56], [1, 19, 32, 56], and [1, 38, 32, 56]. The first blob contains coordinates in 3D space, the second one contains keypoint heatmaps and the third is keypoint pairwise relations (part affinity fields). + +## Legal Information +[LICENSE](https://raw.githubusercontent.com/opencv/openvino_training_extensions/develop/LICENSE) + +[*] Other names and brands may be claimed as the property of others. diff --git a/models/public/human-pose-estimation-3d-0001/model.yml b/models/public/human-pose-estimation-3d-0001/model.yml new file mode 100644 index 00000000000..38687eab683 --- /dev/null +++ b/models/public/human-pose-estimation-3d-0001/model.yml @@ -0,0 +1,55 @@ +# Copyright (c) 2019 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +description: >- + Multi-person 3D human pose estimation model based on "Lightweight OpenPose" + and + "Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB" + papers. 
+
+  The model input is a blob that consists of a single image of "1x3x256x448" in
+  BGR order.
+
+  The model outputs three blobs: features_3d, heatmaps, and pafs. The first blob
+  contains coordinates in 3D space, the second one contains keypoint heatmaps,
+  and the third one contains keypoint pairwise relations (part affinity fields).
+task_type: human_pose_estimation
+files:
+  - name: human-pose-estimation-3d-0001.tar.gz
+    sha256: d2b158f07bd2f3d921bde1215829ac99afc7e63868d2d6738b24c7079db54efc
+    size: 18421831
+    source: https://download.01.org/opencv/openvino_training_extensions/models/human_pose_estimation/human-pose-estimation-3d.tar.gz
+postprocessing:
+  - $type: unpack_archive
+    format: gztar
+    file: human-pose-estimation-3d-0001.tar.gz
+framework: pytorch
+conversion_to_onnx_args:
+  - --model-path=$dl_dir
+  - --model-name=PoseEstimationWithMobileNet
+  - --model-params=is_convertible_by_mo=True
+  - --import-module=model
+  - --weights=$dl_dir/human-pose-estimation-3d-0001.pth
+  - --input-shape=1,3,256,448
+  - --input-names=data
+  - --output-names=features,heatmaps,pafs
+  - --output-file=$conv_dir/human-pose-estimation-3d-0001.onnx
model_optimizer_args:
+  - --input=data
+  - --mean_values=data[128.0,128.0,128.0]
+  - --scale_values=data[255.0,255.0,255.0]
+  - --output=features,heatmaps,pafs
+  - --input_model=$conv_dir/human-pose-estimation-3d-0001.onnx
+license: https://raw.githubusercontent.com/opencv/openvino_training_extensions/develop/LICENSE
diff --git a/models/public/index.md b/models/public/index.md
index 7e46f69f07c..abdcaef2601 100644
--- a/models/public/index.md
+++ b/models/public/index.md
@@ -128,6 +128,21 @@ SSD-based and provide reasonable accuracy/performance trade-offs.
 | MobileFaceNet,ArcFace@ms1m-refine-v1 | [MXNet\*](./face-recognition-mobilefacenet-arcface/face-recognition-mobilefacenet-arcface.md) | face-recognition-mobilefacenet-arcface | 0.449 | 0.993 |
 | SphereFace | [Caffe\*](./Sphereface/Sphereface.md) | Sphereface | 3.504 | 22.671 |
 
+## Human Pose Estimation
+
+The human pose estimation task is to predict a pose, that is, a body skeleton consisting
+of keypoints and the connections between them, for every person in an input image or
+video. Keypoints are body joints such as ears, eyes, nose, shoulders, and knees.
+There are two major groups of such methods: top-down and bottom-up. Top-down methods
+detect persons in a given frame, crop or rescale every detection, and then run the pose
+estimation network for each one; these methods are very accurate. Bottom-up methods find
+all keypoints in a given frame first and then group them by person instance, which makes
+them faster, because the network runs only once.
+
+| Model Name | Implementation | OMZ Model Name | GFlops | mParams |
+|------------------------------ | ----------------------------------------------------------------------------------------- | ----------------------------- | ------ | ------- |
+| human-pose-estimation-3d-0001 | [PyTorch\*](./human-pose-estimation-3d-0001/description/human-pose-estimation-3d-0001.md) | human-pose-estimation-3d-0001 | 18.998 | 5.074 |
+
 ## Legal Information
 
 [*] Other names and brands may be claimed as the property of others.
diff --git a/tools/downloader/license.txt b/tools/downloader/license.txt
index 9c5dcf910ec..85d8c146188 100644
--- a/tools/downloader/license.txt
+++ b/tools/downloader/license.txt
@@ -4612,3 +4612,213 @@ License terms:
      limitations under the License.
================================================================================================== + +* human-pose-estimation-3d-0001 - Multi-person 3d human pose estimation model based on + "Lightweight OpenPose" - https://arxiv.org/pdf/1811.12004.pdf and "Single-Shot Multi-Person + 3D Pose Estimation From Monocular RGB" - https://arxiv.org/pdf/1712.03453.pdf papers. + +License terms: + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + +================================================================================================== From aeacca2617f3d6e9302662e7ee8bebc949f818fa Mon Sep 17 00:00:00 2001 From: Katya Date: Mon, 21 Oct 2019 10:37:04 +0300 Subject: [PATCH 162/927] AC: bump version 0.7.4 -> 0.7.5 (#528) --- tools/accuracy_checker/accuracy_checker/__init__.py | 2 +- tools/accuracy_checker/setup.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/__init__.py b/tools/accuracy_checker/accuracy_checker/__init__.py index b132441be98..1ecfbfdc449 100644 --- a/tools/accuracy_checker/accuracy_checker/__init__.py +++ b/tools/accuracy_checker/accuracy_checker/__init__.py @@ -14,4 +14,4 @@ limitations under the License. """ -__version__ = "0.7.4" +__version__ = "0.7.5" diff --git a/tools/accuracy_checker/setup.py b/tools/accuracy_checker/setup.py index 1374ab719e4..d2f35518493 100644 --- a/tools/accuracy_checker/setup.py +++ b/tools/accuracy_checker/setup.py @@ -17,11 +17,11 @@ import importlib import re import sys -from collections import OrderedDict from setuptools import find_packages, setup from setuptools.command.test import test as test_command from pathlib import Path + class PyTest(test_command): user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")] From 906aefcb1384e1919e64077be19aae75d3e36048 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Mon, 21 Oct 2019 13:14:07 +0300 Subject: [PATCH 163/927] Fixes --- CONTRIBUTING.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index b822783b3a1..c1a3d221568 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -109,11 +109,11 @@ For replacement operation: - `file` — Name of file to run replacement in - `pattern` — [Regular expression](https://docs.python.org/3/library/re.html) - `replacement` — Replacement string -- `count` (*optional*) — Maximum number of pattern occurrences to be replaced +- `count` (*optional*) — Exact number of replacements (if number of `pattern` occurrences less then this number, downloading will be aborted) -**`conversion_to_onnx_args`** (*optional*) +**`conversion_to_onnx_args`** (*only for Caffe2\*, PyTorch\* models*) -List of ONNX\* conversion parameters, see `model_optimizer_args` for details. Applicable for Caffe2\* and PyTorch\* frameworks. +List of ONNX\* conversion parameters, see `model_optimizer_args` for details. **`model_optimizer_args`** @@ -193,9 +193,9 @@ Demos are required to support the following keys: - `-i ""`: Required. Input to process. - `-m ""`: Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. - `-d ""`: Optional. Default is CPU. 
- - `-no_show`: Optional. Do not visualize inference results. + - `--no_show`: Optional. Do not visualize inference results. -> **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. Example: `-no-show`. +> **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. Example: `--no-show`. You can also add any other necessary parameters. From a3ea4db951eef3b7d7488b7741673ed377648354 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Mon, 21 Oct 2019 13:14:56 +0300 Subject: [PATCH 164/927] "After this step..." obsoleted --- CONTRIBUTING.md | 12 ------------ 1 file changed, 12 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c1a3d221568..73541afe2f2 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -169,8 +169,6 @@ model_optimizer_args: framework: tf license: https://raw.githubusercontent.com/pudae/tensorflow-densenet/master/LICENSE ``` ----- -*After this step you get the **model.yml** file.* ## Model Conversion @@ -180,8 +178,6 @@ Deep Learning Inference Engine (IE) supports models in the Intermediate Represen > **NOTE 2**: If a model input is a color image, color channel order should be `BGR`. -*After this step you get **conversion parameters** for the Model Optimizer.* - ## Demo A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). @@ -204,8 +200,6 @@ If you add a new demo, provide autotesting support as well: - prepare list of input images in [demos/tests/image_sequences.py](demos/tests/image_sequences.py) Update [demos' README.md](demos/README.md) adding your demo to the list. -___ -*After this step you get a **demo** for your model (if no demo was available).* ## Accuracy Validation @@ -215,9 +209,6 @@ If a model uses a dataset which is not supported by the Accuracy Checker, you al When the configuration file is ready, you must run the Accuracy Checker to obtain metric results. If they match your results, that means conversion was successful and the Accuracy Checker fully supports your model, metric and dataset. Otherwise, recheck the[conversion](#model-conversion) parameters or the validation configuration file. -___ -*After this step you get the accuracy validation configuration file **.yml**.* - ### Example This example uses one of the files from `tools/accuracy_checker/configs` — validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml)\*: @@ -292,9 +283,6 @@ The documentation should contain: Learn the detailed structure and headers naming convention from any model documentation (for example, [alexnet](./models/public/alexnet/alexnet.md)). ---- -*After this step you get **.md** — the documentation file.* - ## Legal Information [\*] Other names and brands may be claimed as the property of others. 
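To make the replacement-operation keys described in the `CONTRIBUTING.md` changes above concrete, the fragment below sketches a hypothetical `model.yml` postprocessing entry. The `$type: regex_replace` entry form, the file name, and the regular expression are assumptions made only for illustration and are not taken from this patch; just the `file`, `pattern`, `replacement`, and `count` keys come from the description above.

```yaml
# Illustrative sketch only. With the clarified semantics of `count`, the downloader
# is expected to perform exactly this many replacements and to abort downloading
# if the pattern matches fewer times.
postprocessing:
  - $type: regex_replace          # assumed entry type for a replacement operation
    file: deploy.prototxt         # placeholder: file to patch after downloading
    pattern: 'input_dim:\s*10'    # placeholder regular expression
    replacement: 'input_dim: 1'
    count: 1                      # exactly one occurrence must be replaced
```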
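Likewise, for the Accuracy Validation step above, a validation configuration follows the `models` / `launchers` / `datasets` layout used by the Accuracy Checker. The sketch below is a minimal illustration only: the model name, IR file names, adapter, dataset name, and metric values are placeholders rather than values taken from this repository; see the referenced AlexNet configuration for a real example.

```yaml
# Minimal illustrative accuracy-validation config (all names are placeholders).
models:
  - name: sample_classification_model
    launchers:
      - framework: dlsdk
        model: sample_model.xml
        weights: sample_model.bin
        adapter: classification
    datasets:
      - name: sample_imagenet_subset
        metrics:
          - name: accuracy@top5
            type: accuracy
            top_k: 5
```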
From 2c3adc89f3062b1bb8629c470b67ac170cfc9fae Mon Sep 17 00:00:00 2001 From: Katya Date: Mon, 21 Oct 2019 15:36:37 +0300 Subject: [PATCH 165/927] AC: fix coco_eval import (#531) --- .../accuracy_checker/metrics/coco_orig_metrics.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py b/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py index 946d37dbb35..ab44f15ae72 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/coco_orig_metrics.py @@ -22,9 +22,9 @@ except ImportError: COCO = None try: - from pycocotools.cocoeval import COCOeval + from pycocotools.cocoeval import COCOeval as coco_eval except ImportError: - COCOEval = None + coco_eval = None from ..representation import ( DetectionPrediction, DetectionAnnotation, @@ -242,9 +242,9 @@ def _debug_printing_and_displaying_predictions(coco, coco_res, data_source, shou @staticmethod def _run_coco_evaluation(coco, coco_res, iou_type='bbox', threshold=None): - if COCOEval is None: + if coco_eval is None: raise ValueError('pycocotools is not installed, please install it before usage') - cocoeval = COCOeval(coco, coco_res, iouType=iou_type) + cocoeval = coco_eval(coco, coco_res, iouType=iou_type) if threshold is not None: cocoeval.params.iouThrs = threshold cocoeval.evaluate() From e198a20126d4c42842de1b3ac5d1f63f5a5ee274 Mon Sep 17 00:00:00 2001 From: Zlobin Vladimir Date: Mon, 21 Oct 2019 17:30:21 +0300 Subject: [PATCH 166/927] demos/tests/image_sequences.py: remove image_net_arg(00000141) --- demos/tests/image_sequences.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/demos/tests/image_sequences.py b/demos/tests/image_sequences.py index 36254421632..51c1a97e1d8 100644 --- a/demos/tests/image_sequences.py +++ b/demos/tests/image_sequences.py @@ -180,8 +180,6 @@ 'smart-classroom-demo': [ image_net_arg('00000074'), - image_net_arg('00000141'), - image_net_arg('00000141'), image_net_arg('00000164'), image_net_arg('00000181'), image_net_arg('00000164'), From caf9c49436cbe6f02e9af2331c505fc3f4945ae4 Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 22 Oct 2019 10:25:17 +0300 Subject: [PATCH 167/927] AC: add env vars and place definitions in config (#533) --- tools/accuracy_checker/README.md | 11 +++ .../accuracy_checker/config/config_reader.py | 17 +++- .../accuracy_checker/accuracy_checker/main.py | 4 - tools/accuracy_checker/configs/Sphereface.yml | 1 + .../age-gender-recognition-retail-0013.yml | 1 + tools/accuracy_checker/configs/alexnet.yml | 1 + tools/accuracy_checker/configs/caffenet.yml | 1 + tools/accuracy_checker/configs/ctpn.yml | 1 + tools/accuracy_checker/configs/deeplabv3.yml | 1 + .../configs/densenet-121-caffe2.yml | 1 + .../configs/densenet-121-tf.yml | 1 + .../accuracy_checker/configs/densenet-121.yml | 1 + .../configs/densenet-161-tf.yml | 1 + .../accuracy_checker/configs/densenet-161.yml | 1 + .../configs/densenet-169-tf.yml | 1 + .../accuracy_checker/configs/densenet-169.yml | 1 + .../accuracy_checker/configs/densenet-201.yml | 1 + .../configs/efficientnet-b0-pytorch.yml | 1 + .../configs/efficientnet-b0.yml | 1 + .../configs/efficientnet-b0_auto_aug.yml | 1 + .../configs/efficientnet-b5-pytorch.yml | 1 + .../configs/efficientnet-b5.yml | 1 + .../configs/efficientnet-b7-pytorch.yml | 1 + .../configs/efficientnet-b7_auto_aug.yml | 1 + .../emotions-recognition-retail-0003.yml | 1 + .../configs/face-detection-adas-0001.yml | 1 + 
.../face-detection-adas-binary-0001.yml | 1 + .../configs/face-detection-retail-0004.yml | 1 + .../configs/face-detection-retail-0005.yml | 1 + .../configs/face-detection-retail-0044.yml | 1 + ...face-recognition-mobilefacenet-arcface.yml | 1 + .../face-recognition-resnet100-arcface.yml | 1 + .../face-recognition-resnet34-arcface.yml | 1 + .../face-recognition-resnet50-arcface.yml | 1 + .../face-reidentification-retail-0095.yml | 1 + .../configs/facenet-20180408-102900.yml | 1 + .../configs/facial-landmarks-35-adas-0002.yml | 1 + ...r_rcnn_inception_resnet_v2_atrous_coco.yml | 1 + .../configs/faster_rcnn_inception_v2_coco.yml | 1 + .../configs/faster_rcnn_resnet101_coco.yml | 1 + .../configs/faster_rcnn_resnet50_coco.yml | 1 + .../configs/gaze-estimation-adas-0002.yml | 1 + .../accuracy_checker/configs/googlenet-v1.yml | 1 + .../accuracy_checker/configs/googlenet-v2.yml | 1 + .../configs/googlenet-v3-pytorch.yml | 1 + .../accuracy_checker/configs/googlenet-v3.yml | 1 + .../accuracy_checker/configs/googlenet-v4.yml | 1 + .../handwritten-score-recognition-0003.yml | 1 + .../head-pose-estimation-adas-0001.yml | 1 + .../configs/human-pose-estimation-0001.yml | 1 + .../configs/image-retrieval-0001.yml | 1 + .../configs/inception-resnet-v2-tf.yml | 1 + .../configs/inception-resnet-v2.yml | 1 + .../inceptionv3-int8-sparse-v1-tf-0001.yml | 1 + .../inceptionv3-int8-sparse-v2-tf-0001.yml | 1 + .../configs/inceptionv3-int8-tf-0001.yml | 1 + .../instance-segmentation-security-0010.yml | 1 + .../instance-segmentation-security-0050.yml | 1 + .../instance-segmentation-security-0083.yml | 1 + .../landmarks-regression-retail-0009.yml | 1 + ...license-plate-recognition-barrier-0001.yml | 1 + ...license-plate-recognition-barrier-0007.yml | 1 + ...k_rcnn_inception_resnet_v2_atrous_coco.yml | 1 + .../configs/mask_rcnn_inception_v2_coco.yml | 1 + .../mask_rcnn_resnet101_atrous_coco.yml | 1 + .../mask_rcnn_resnet50_atrous_coco.yml | 1 + .../configs/mobilenet-ssd.yml | 1 + .../configs/mobilenet-v1-0.25-128.yml | 1 + .../configs/mobilenet-v1-0.50-160.yml | 1 + .../configs/mobilenet-v1-0.50-224.yml | 1 + .../configs/mobilenet-v1-1.0-224-tf.yml | 1 + .../configs/mobilenet-v1-1.0-224.yml | 1 + .../configs/mobilenet-v2-1.0-224.yml | 1 + .../configs/mobilenet-v2-1.4-224.yml | 1 + .../configs/mobilenet-v2-pytorch.yml | 1 + .../accuracy_checker/configs/mobilenet-v2.yml | 1 + .../mobilenetv2-int8-sparse-v1-tf-0001.yml | 1 + .../mobilenetv2-int8-sparse-v2-tf-0001.yml | 1 + .../configs/mobilenetv2-int8-tf-0001.yml | 1 + .../configs/octave-densenet-121-0.125.yml | 1 + .../configs/octave-resnet-101-0.125.yml | 1 + .../configs/octave-resnet-200-0.125.yml | 1 + .../configs/octave-resnet-26-0.25.yml | 1 + .../configs/octave-resnet-50-0.125.yml | 1 + .../configs/octave-resnext-101-0.25.yml | 1 + .../configs/octave-resnext-50-0.25.yml | 1 + .../configs/octave-se-resnet-50-0.125.yml | 1 + ...estrian-and-vehicle-detector-adas-0001.yml | 1 + .../pedestrian-detection-adas-0002.yml | 1 + .../pedestrian-detection-adas-binary-0001.yml | 1 + ...-attributes-recognition-crossroad-0230.yml | 1 + ...rson-detection-action-recognition-0005.yml | 1 + ...rson-detection-action-recognition-0006.yml | 1 + ...ection-action-recognition-teacher-0002.yml | 2 +- ...detection-raisinghand-recognition-0001.yml | 1 + .../configs/person-detection-retail-0002.yml | 1 + .../configs/person-detection-retail-0013.yml | 1 + .../person-reidentification-retail-0031.yml | 1 + .../person-reidentification-retail-0076.yml | 1 + .../person-reidentification-retail-0079.yml 
| 1 + ...-vehicle-bike-detection-crossroad-0078.yml | 1 + ...-vehicle-bike-detection-crossroad-1016.yml | 1 + tools/accuracy_checker/configs/resnet-101.yml | 1 + tools/accuracy_checker/configs/resnet-152.yml | 1 + .../configs/resnet-50-caffe2.yml | 1 + .../resnet-50-int8-sparse-v1-tf-0001.yml | 1 + .../resnet-50-int8-sparse-v2-tf-0001.yml | 1 + .../configs/resnet-50-int8-tf-0001.yml | 1 + .../configs/resnet-50-pytorch.yml | 1 + tools/accuracy_checker/configs/resnet-50.yml | 1 + .../configs/resnet50-binary-0001.yml | 1 + .../configs/road-segmentation-adas-0001.yml | 1 + .../accuracy_checker/configs/se-inception.yml | 1 + .../configs/se-resnet-101.yml | 1 + .../configs/se-resnet-152.yml | 1 + .../accuracy_checker/configs/se-resnet-50.yml | 1 + .../configs/se-resnext-101.yml | 1 + .../configs/se-resnext-50.yml | 1 + .../semantic-segmentation-adas-0001.yml | 1 + .../single-image-super-resolution-1032.yml | 1 + .../single-image-super-resolution-1033.yml | 1 + .../configs/squeezenet1.0.yml | 1 + .../configs/squeezenet1.1-caffe2.yml | 1 + .../configs/squeezenet1.1.yml | 1 + tools/accuracy_checker/configs/ssd300.yml | 1 + tools/accuracy_checker/configs/ssd512.yml | 1 + .../configs/ssd_mobilenet_v1_coco.yml | 1 + .../configs/ssd_mobilenet_v1_fpn_coco.yml | 1 + .../configs/ssd_mobilenet_v2_coco.yml | 1 + .../configs/ssdlite_mobilenet_v2.yml | 1 + .../configs/text-detection-0003.yml | 1 + .../configs/text-detection-0004.yml | 1 + .../text-image-super-resolution-0001.yml | 1 + .../configs/text-recognition-0012.yml | 1 + ...le-attributes-recognition-barrier-0039.yml | 1 + .../configs/vehicle-detection-adas-0002.yml | 1 + .../vehicle-detection-adas-binary-0001.yml | 1 + ...e-license-plate-detection-barrier-0106.yml | 1 + tools/accuracy_checker/configs/vgg16.yml | 1 + .../accuracy_checker/configs/vgg19-caffe2.yml | 1 + tools/accuracy_checker/configs/vgg19.yml | 1 + .../tests/test_config_reader.py | 82 +++++++++++++++++-- 142 files changed, 242 insertions(+), 11 deletions(-) diff --git a/tools/accuracy_checker/README.md b/tools/accuracy_checker/README.md index 3e544c23c6a..6ca20e4aa73 100644 --- a/tools/accuracy_checker/README.md +++ b/tools/accuracy_checker/README.md @@ -88,6 +88,13 @@ You may refer to `-h, --help` to full list of command line options. Some optiona - `-tf, --target_framework` framework for infer. - `-td, --target_devices` devices for infer. You can specify several devices using space as a delimiter. +You are also able to replace some command line arguments with environment variables for path prefixing. Supported following list of variables: +* `DATA_DIR` - equivalent of `-s`, `--source`. +* `MODELS_DIR` - equivalent of `-m`, `--models`. +* `EXTENSIONS` - equivalent of `-e`, `--extensions`. +* `ANNOTATIONS_DIR` - equivalent of `-a`, `--annotations`. +* `BITSTREAMS_DIR` - equivalent of `-b`, `--bitstreams`. + #### Configuration There is config file which declares validation process. @@ -111,6 +118,10 @@ models: datasets: - name: dataset_name ``` +Optionally you can use global configuration. It can be useful for avoiding duplication if you have several models which should be run on the same dataset. +Example of global definitions file can be found [here](dataset_definitions.yml). Global definitions will be merged with evaluation config in the runtime by dataset name. +Parameters of global configuration can be overwritten by local config (e.g. 
if in definitions specified resize with destination size 224 and in the local config used resize with size 227, the value in config - 227 will be used as resize parameter) +You can use field `global_definitions` for specifying path to global definitions directly in the model config or via command line arguments (`-d`, `--definitions`). ### Launchers diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index f439e431d2c..09f71634ccc 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -16,6 +16,7 @@ import copy from pathlib import Path +import os import warnings @@ -84,8 +85,11 @@ def process_config(config, mode='models', arguments=None): @staticmethod def _read_configs(arguments): - global_config = read_yaml(arguments.definitions) if arguments.definitions else None local_config = read_yaml(arguments.config) + definitions = local_config.get('global_definitions') + if definitions: + definitions = read_yaml(arguments.config.parent / definitions) + global_config = read_yaml(arguments.definitions) if arguments.definitions else definitions return global_config, local_config @@ -301,6 +305,17 @@ def _merge_configs_by_identifier(global_config, local_config, identifier): @staticmethod def _merge_paths_with_prefixes(arguments, config, mode='models'): args = arguments if isinstance(arguments, dict) else vars(arguments) + commandline_arg_to_env_var = { + 'source': 'DATA_DIR', + 'annotations': 'ANNOTATIONS_DIR', + 'bitstreams': 'BITSTREAMS_DIR', + 'models': 'MODELS_DIR', + 'extensions': 'EXTENSIONS_DIR' + } + for argument, env_var in commandline_arg_to_env_var.items(): + if argument not in args or args[argument] is None: + env_var_value = os.environ.get(env_var) + args[argument] = Path(env_var_value) if env_var_value is not None else Path.cwd() def process_models(config, entries_paths): for model in config['models']: diff --git a/tools/accuracy_checker/accuracy_checker/main.py b/tools/accuracy_checker/accuracy_checker/main.py index bc2b85045ef..3b7e6637f88 100644 --- a/tools/accuracy_checker/accuracy_checker/main.py +++ b/tools/accuracy_checker/accuracy_checker/main.py @@ -51,21 +51,18 @@ def build_arguments_parser(): '-m', '--models', help='prefix path to the models and weights', type=partial(get_path, is_directory=True), - default=Path.cwd(), required=False ) parser.add_argument( '-s', '--source', help='prefix path to the data source', type=partial(get_path, is_directory=True), - default=Path.cwd(), required=False ) parser.add_argument( '-a', '--annotations', help='prefix path to the converted annotations and datasets meta data', type=partial(get_path, is_directory=True), - default=Path.cwd(), required=False ) parser.add_argument( @@ -85,7 +82,6 @@ def build_arguments_parser(): '-b', '--bitstreams', help='prefix path to bitstreams folder', type=partial(get_path, file_or_directory=True), - default=Path.cwd(), required=False ) parser.add_argument( diff --git a/tools/accuracy_checker/configs/Sphereface.yml b/tools/accuracy_checker/configs/Sphereface.yml index b6981cfc122..4d85cce70bf 100644 --- a/tools/accuracy_checker/configs/Sphereface.yml +++ b/tools/accuracy_checker/configs/Sphereface.yml @@ -25,3 +25,4 @@ models: - type: resize dst_height: 112 dst_width: 96 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/age-gender-recognition-retail-0013.yml 
b/tools/accuracy_checker/configs/age-gender-recognition-retail-0013.yml index 17adf00df69..0c0d4cd22ef 100644 --- a/tools/accuracy_checker/configs/age-gender-recognition-retail-0013.yml +++ b/tools/accuracy_checker/configs/age-gender-recognition-retail-0013.yml @@ -44,3 +44,4 @@ models: prediction_source: age_classification - type: mae +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/alexnet.yml b/tools/accuracy_checker/configs/alexnet.yml index 3d983869944..123ddc31359 100644 --- a/tools/accuracy_checker/configs/alexnet.yml +++ b/tools/accuracy_checker/configs/alexnet.yml @@ -46,3 +46,4 @@ models: - name: acciracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/caffenet.yml b/tools/accuracy_checker/configs/caffenet.yml index 6b7e9a26787..1016bda31b1 100644 --- a/tools/accuracy_checker/configs/caffenet.yml +++ b/tools/accuracy_checker/configs/caffenet.yml @@ -38,3 +38,4 @@ models: size: 256 - type: crop size: 227 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ctpn.yml b/tools/accuracy_checker/configs/ctpn.yml index 1789da89396..3fc6461dd3b 100644 --- a/tools/accuracy_checker/configs/ctpn.yml +++ b/tools/accuracy_checker/configs/ctpn.yml @@ -56,3 +56,4 @@ models: ignore_difficult: True area_recall_constrain: 0.8 area_precision_constrain: 0.4 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/deeplabv3.yml b/tools/accuracy_checker/configs/deeplabv3.yml index 095dccd89f5..0af5c3ee583 100644 --- a/tools/accuracy_checker/configs/deeplabv3.yml +++ b/tools/accuracy_checker/configs/deeplabv3.yml @@ -31,3 +31,4 @@ models: - type: mean_iou use_argmax: false presenter: print_scalar +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-121-caffe2.yml b/tools/accuracy_checker/configs/densenet-121-caffe2.yml index 29c44c2fde9..cba813ab651 100644 --- a/tools/accuracy_checker/configs/densenet-121-caffe2.yml +++ b/tools/accuracy_checker/configs/densenet-121-caffe2.yml @@ -44,3 +44,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-121-tf.yml b/tools/accuracy_checker/configs/densenet-121-tf.yml index 6d552cb39ac..58e469d81ba 100644 --- a/tools/accuracy_checker/configs/densenet-121-tf.yml +++ b/tools/accuracy_checker/configs/densenet-121-tf.yml @@ -22,3 +22,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-121.yml b/tools/accuracy_checker/configs/densenet-121.yml index 583841f147e..c3d4336db1c 100644 --- a/tools/accuracy_checker/configs/densenet-121.yml +++ b/tools/accuracy_checker/configs/densenet-121.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-161-tf.yml b/tools/accuracy_checker/configs/densenet-161-tf.yml index 3d643ce6f39..884e2c61558 100644 --- a/tools/accuracy_checker/configs/densenet-161-tf.yml +++ b/tools/accuracy_checker/configs/densenet-161-tf.yml @@ -22,3 +22,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-161.yml b/tools/accuracy_checker/configs/densenet-161.yml index 74ec8962d4d..f3a46699b52 100644 --- 
a/tools/accuracy_checker/configs/densenet-161.yml +++ b/tools/accuracy_checker/configs/densenet-161.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-169-tf.yml b/tools/accuracy_checker/configs/densenet-169-tf.yml index 5ae3f06e8eb..7076b43611d 100644 --- a/tools/accuracy_checker/configs/densenet-169-tf.yml +++ b/tools/accuracy_checker/configs/densenet-169-tf.yml @@ -22,3 +22,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-169.yml b/tools/accuracy_checker/configs/densenet-169.yml index 4c407a70520..3b26a9242b9 100644 --- a/tools/accuracy_checker/configs/densenet-169.yml +++ b/tools/accuracy_checker/configs/densenet-169.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/densenet-201.yml b/tools/accuracy_checker/configs/densenet-201.yml index e6ab1db0c2e..485daaf227c 100644 --- a/tools/accuracy_checker/configs/densenet-201.yml +++ b/tools/accuracy_checker/configs/densenet-201.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml index 8c8e4d304eb..59df9e95f29 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0-pytorch.yml @@ -64,3 +64,4 @@ models: - type: crop use_pillow: True size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b0.yml b/tools/accuracy_checker/configs/efficientnet-b0.yml index d1f44b2eaef..8dff648e671 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0.yml @@ -53,3 +53,4 @@ models: size: 224 use_pillow: True interpolation: BICUBIC +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml index 44e8f4ae13a..57b84010a11 100644 --- a/tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml +++ b/tools/accuracy_checker/configs/efficientnet-b0_auto_aug.yml @@ -53,3 +53,4 @@ models: size: 224 use_pillow: True interpolation: BICUBIC +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml index 8ceed5791c0..94f3921277c 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5-pytorch.yml @@ -65,3 +65,4 @@ models: - type: crop use_pillow: True size: 456 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b5.yml b/tools/accuracy_checker/configs/efficientnet-b5.yml index efdfc5266a2..fd42f31d047 100644 --- a/tools/accuracy_checker/configs/efficientnet-b5.yml +++ b/tools/accuracy_checker/configs/efficientnet-b5.yml @@ -53,3 +53,4 @@ models: size: 456 use_pillow: True interpolation: BICUBIC +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml index ac195507e3b..25a0fe37243 100644 --- 
a/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7-pytorch.yml @@ -65,3 +65,4 @@ models: - type: crop use_pillow: True size: 600 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml index e18eb906d06..0d3f55297ae 100644 --- a/tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml +++ b/tools/accuracy_checker/configs/efficientnet-b7_auto_aug.yml @@ -53,3 +53,4 @@ models: size: 600 use_pillow: True interpolation: BICUBIC +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/emotions-recognition-retail-0003.yml b/tools/accuracy_checker/configs/emotions-recognition-retail-0003.yml index caed4eacb6f..821215a4254 100644 --- a/tools/accuracy_checker/configs/emotions-recognition-retail-0003.yml +++ b/tools/accuracy_checker/configs/emotions-recognition-retail-0003.yml @@ -31,3 +31,4 @@ models: metrics: - type: accuracy +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-detection-adas-0001.yml b/tools/accuracy_checker/configs/face-detection-adas-0001.yml index 705ced7085e..d24cdd1d52b 100644 --- a/tools/accuracy_checker/configs/face-detection-adas-0001.yml +++ b/tools/accuracy_checker/configs/face-detection-adas-0001.yml @@ -44,3 +44,4 @@ models: include_boundaries: False allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-detection-adas-binary-0001.yml b/tools/accuracy_checker/configs/face-detection-adas-binary-0001.yml index 6ed56b249c7..895b8696d04 100644 --- a/tools/accuracy_checker/configs/face-detection-adas-binary-0001.yml +++ b/tools/accuracy_checker/configs/face-detection-adas-binary-0001.yml @@ -30,3 +30,4 @@ models: include_boundaries: False allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-detection-retail-0004.yml b/tools/accuracy_checker/configs/face-detection-retail-0004.yml index 58dbf3bfcbb..31cdba5487f 100644 --- a/tools/accuracy_checker/configs/face-detection-retail-0004.yml +++ b/tools/accuracy_checker/configs/face-detection-retail-0004.yml @@ -48,3 +48,4 @@ models: include_boundaries: False allow_multiple_matches_per_ignored: False distinct_conf: False +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-detection-retail-0005.yml b/tools/accuracy_checker/configs/face-detection-retail-0005.yml index 78be4cbc3a4..5e90f2af54a 100644 --- a/tools/accuracy_checker/configs/face-detection-retail-0005.yml +++ b/tools/accuracy_checker/configs/face-detection-retail-0005.yml @@ -50,3 +50,4 @@ models: include_boundaries: False allow_multiple_matches_per_ignored: True distinct_conf: False +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-detection-retail-0044.yml b/tools/accuracy_checker/configs/face-detection-retail-0044.yml index 62302acdcff..eb1f4e050dc 100644 --- a/tools/accuracy_checker/configs/face-detection-retail-0044.yml +++ b/tools/accuracy_checker/configs/face-detection-retail-0044.yml @@ -52,3 +52,4 @@ models: include_boundaries: False allow_multiple_matches_per_ignored: False distinct_conf: False +global_definitions: ../dataset_definitions.yml diff --git 
a/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml b/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml index 992d2aadfde..8aa719bf4a4 100644 --- a/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-mobilefacenet-arcface.yml @@ -48,3 +48,4 @@ models: size: 400 - type: resize size: 112 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml index 571f7ab2a7a..9d8f76e75fc 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet100-arcface.yml @@ -42,3 +42,4 @@ models: size: 400 - type: resize size: 112 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml index a303d7e2ba2..ff6e4439443 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet34-arcface.yml @@ -49,3 +49,4 @@ models: size: 400 - type: resize size: 112 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml b/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml index d7e7c6f5ad3..82b53511c15 100644 --- a/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml +++ b/tools/accuracy_checker/configs/face-recognition-resnet50-arcface.yml @@ -49,3 +49,4 @@ models: size: 400 - type: resize size: 112 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml index 4abb1b41393..a539ad76a06 100644 --- a/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml +++ b/tools/accuracy_checker/configs/face-reidentification-retail-0095.yml @@ -24,3 +24,4 @@ models: size: 400 - type: resize size: 128 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/facenet-20180408-102900.yml b/tools/accuracy_checker/configs/facenet-20180408-102900.yml index 0b697926331..637c662b3f0 100644 --- a/tools/accuracy_checker/configs/facenet-20180408-102900.yml +++ b/tools/accuracy_checker/configs/facenet-20180408-102900.yml @@ -24,3 +24,4 @@ models: size: 400 - type: resize size: 160 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/facial-landmarks-35-adas-0002.yml b/tools/accuracy_checker/configs/facial-landmarks-35-adas-0002.yml index d32ac764d9e..fc204842c5b 100644 --- a/tools/accuracy_checker/configs/facial-landmarks-35-adas-0002.yml +++ b/tools/accuracy_checker/configs/facial-landmarks-35-adas-0002.yml @@ -26,3 +26,4 @@ models: calculate_std: True percentile: 90 presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/faster_rcnn_inception_resnet_v2_atrous_coco.yml b/tools/accuracy_checker/configs/faster_rcnn_inception_resnet_v2_atrous_coco.yml index 04ceb8c76fc..9bcdca5c0cd 100644 --- a/tools/accuracy_checker/configs/faster_rcnn_inception_resnet_v2_atrous_coco.yml +++ b/tools/accuracy_checker/configs/faster_rcnn_inception_resnet_v2_atrous_coco.yml @@ -32,3 +32,4 @@ models: - type: 
resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/faster_rcnn_inception_v2_coco.yml b/tools/accuracy_checker/configs/faster_rcnn_inception_v2_coco.yml index ec477c239e5..b5ba18fcefe 100644 --- a/tools/accuracy_checker/configs/faster_rcnn_inception_v2_coco.yml +++ b/tools/accuracy_checker/configs/faster_rcnn_inception_v2_coco.yml @@ -32,3 +32,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/faster_rcnn_resnet101_coco.yml b/tools/accuracy_checker/configs/faster_rcnn_resnet101_coco.yml index 9d8dbb485b0..fba099394ed 100644 --- a/tools/accuracy_checker/configs/faster_rcnn_resnet101_coco.yml +++ b/tools/accuracy_checker/configs/faster_rcnn_resnet101_coco.yml @@ -32,3 +32,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/faster_rcnn_resnet50_coco.yml b/tools/accuracy_checker/configs/faster_rcnn_resnet50_coco.yml index 2aa05c4a28c..1459f174e81 100644 --- a/tools/accuracy_checker/configs/faster_rcnn_resnet50_coco.yml +++ b/tools/accuracy_checker/configs/faster_rcnn_resnet50_coco.yml @@ -32,3 +32,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml b/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml index ab8c6b3f18e..8a3b5ef2492 100644 --- a/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml +++ b/tools/accuracy_checker/configs/gaze-estimation-adas-0002.yml @@ -63,3 +63,4 @@ models: metrics: - type: angle_error presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/googlenet-v1.yml b/tools/accuracy_checker/configs/googlenet-v1.yml index 9b245a2bde1..5b84e9eb46c 100644 --- a/tools/accuracy_checker/configs/googlenet-v1.yml +++ b/tools/accuracy_checker/configs/googlenet-v1.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/googlenet-v2.yml b/tools/accuracy_checker/configs/googlenet-v2.yml index d0d3c7b187b..a8554bb78da 100644 --- a/tools/accuracy_checker/configs/googlenet-v2.yml +++ b/tools/accuracy_checker/configs/googlenet-v2.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/googlenet-v3-pytorch.yml b/tools/accuracy_checker/configs/googlenet-v3-pytorch.yml index 1c1a9cfe231..25e509ce703 100644 --- a/tools/accuracy_checker/configs/googlenet-v3-pytorch.yml +++ b/tools/accuracy_checker/configs/googlenet-v3-pytorch.yml @@ -71,3 +71,4 @@ models: use_pillow: true # Using accuracy metric, achieved result of public model - 77.45% and 93.56% (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/googlenet-v3.yml b/tools/accuracy_checker/configs/googlenet-v3.yml index c68b05972cf..400abb2b29c 100644 --- a/tools/accuracy_checker/configs/googlenet-v3.yml +++ b/tools/accuracy_checker/configs/googlenet-v3.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 299 +global_definitions: ../dataset_definitions.yml diff --git 
a/tools/accuracy_checker/configs/googlenet-v4.yml b/tools/accuracy_checker/configs/googlenet-v4.yml index 9e4c6173290..1a4b10c8983 100644 --- a/tools/accuracy_checker/configs/googlenet-v4.yml +++ b/tools/accuracy_checker/configs/googlenet-v4.yml @@ -40,3 +40,4 @@ models: size: 320 - type: crop size: 299 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml b/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml index acbc6e4b685..2762cd045d0 100644 --- a/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml +++ b/tools/accuracy_checker/configs/handwritten-score-recognition-0003.yml @@ -29,3 +29,4 @@ models: metrics: - type: character_recognition_accuracy +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml b/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml index d15ed015371..fdbf5483ccf 100644 --- a/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml +++ b/tools/accuracy_checker/configs/head-pose-estimation-adas-0001.yml @@ -45,3 +45,4 @@ models: presenter: print_vector annotation_source: roll prediction_source: angle_roll +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/human-pose-estimation-0001.yml b/tools/accuracy_checker/configs/human-pose-estimation-0001.yml index a6cdcd2d22b..20e5ac496fd 100644 --- a/tools/accuracy_checker/configs/human-pose-estimation-0001.yml +++ b/tools/accuracy_checker/configs/human-pose-estimation-0001.yml @@ -59,3 +59,4 @@ models: - name: AP type: coco_precision max_detections: 20 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/image-retrieval-0001.yml b/tools/accuracy_checker/configs/image-retrieval-0001.yml index cf1e34b280c..ddc60a1f2d1 100644 --- a/tools/accuracy_checker/configs/image-retrieval-0001.yml +++ b/tools/accuracy_checker/configs/image-retrieval-0001.yml @@ -27,3 +27,4 @@ models: top_k: 1 - type: reid_map +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/inception-resnet-v2-tf.yml b/tools/accuracy_checker/configs/inception-resnet-v2-tf.yml index 7caf3e21e70..be22759bc27 100644 --- a/tools/accuracy_checker/configs/inception-resnet-v2-tf.yml +++ b/tools/accuracy_checker/configs/inception-resnet-v2-tf.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 299 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/inception-resnet-v2.yml b/tools/accuracy_checker/configs/inception-resnet-v2.yml index 934c08c2ff3..78097dbae94 100644 --- a/tools/accuracy_checker/configs/inception-resnet-v2.yml +++ b/tools/accuracy_checker/configs/inception-resnet-v2.yml @@ -37,3 +37,4 @@ models: size: 320 - type: crop size: 299 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v1-tf-0001.yml b/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v1-tf-0001.yml index a00f321c55d..02dae048fcd 100644 --- a/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v1-tf-0001.yml +++ b/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v1-tf-0001.yml @@ -26,3 +26,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v2-tf-0001.yml 
b/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v2-tf-0001.yml index 683f16411ff..d999897d50a 100644 --- a/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v2-tf-0001.yml +++ b/tools/accuracy_checker/configs/inceptionv3-int8-sparse-v2-tf-0001.yml @@ -26,3 +26,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/inceptionv3-int8-tf-0001.yml b/tools/accuracy_checker/configs/inceptionv3-int8-tf-0001.yml index 4cbfbd5f8df..6f6689765f6 100644 --- a/tools/accuracy_checker/configs/inceptionv3-int8-tf-0001.yml +++ b/tools/accuracy_checker/configs/inceptionv3-int8-tf-0001.yml @@ -27,3 +27,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/instance-segmentation-security-0010.yml b/tools/accuracy_checker/configs/instance-segmentation-security-0010.yml index 3f81db31b2f..dde4619aad9 100644 --- a/tools/accuracy_checker/configs/instance-segmentation-security-0010.yml +++ b/tools/accuracy_checker/configs/instance-segmentation-security-0010.yml @@ -54,3 +54,4 @@ models: - name: AP@boxes type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/instance-segmentation-security-0050.yml b/tools/accuracy_checker/configs/instance-segmentation-security-0050.yml index 16cc200e6d0..2acf666c4e3 100644 --- a/tools/accuracy_checker/configs/instance-segmentation-security-0050.yml +++ b/tools/accuracy_checker/configs/instance-segmentation-security-0050.yml @@ -47,3 +47,4 @@ models: - name: AP@boxes type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/instance-segmentation-security-0083.yml b/tools/accuracy_checker/configs/instance-segmentation-security-0083.yml index a8d33423f8b..e4a0224aa03 100644 --- a/tools/accuracy_checker/configs/instance-segmentation-security-0083.yml +++ b/tools/accuracy_checker/configs/instance-segmentation-security-0083.yml @@ -54,3 +54,4 @@ models: - name: AP@boxes type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml b/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml index 5c3e574dd24..cd85435d43c 100644 --- a/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml +++ b/tools/accuracy_checker/configs/landmarks-regression-retail-0009.yml @@ -32,3 +32,4 @@ models: - type: per_point_normed_error presenter: print_vector - type: normed_error +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/license-plate-recognition-barrier-0001.yml b/tools/accuracy_checker/configs/license-plate-recognition-barrier-0001.yml index df603c0051b..92d7dd61504 100644 --- a/tools/accuracy_checker/configs/license-plate-recognition-barrier-0001.yml +++ b/tools/accuracy_checker/configs/license-plate-recognition-barrier-0001.yml @@ -48,3 +48,4 @@ models: metrics: - type: character_recognition_accuracy +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/license-plate-recognition-barrier-0007.yml b/tools/accuracy_checker/configs/license-plate-recognition-barrier-0007.yml index 4ae2d870ae9..2c1ae56c6c0 100644 --- a/tools/accuracy_checker/configs/license-plate-recognition-barrier-0007.yml +++ b/tools/accuracy_checker/configs/license-plate-recognition-barrier-0007.yml @@ -28,3 +28,4 @@ 
models: metrics: - type: character_recognition_accuracy +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mask_rcnn_inception_resnet_v2_atrous_coco.yml b/tools/accuracy_checker/configs/mask_rcnn_inception_resnet_v2_atrous_coco.yml index 3f963f7ffbf..a3d0cdae663 100644 --- a/tools/accuracy_checker/configs/mask_rcnn_inception_resnet_v2_atrous_coco.yml +++ b/tools/accuracy_checker/configs/mask_rcnn_inception_resnet_v2_atrous_coco.yml @@ -41,3 +41,4 @@ models: metrics: - type: coco_orig_segm_precision - type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mask_rcnn_inception_v2_coco.yml b/tools/accuracy_checker/configs/mask_rcnn_inception_v2_coco.yml index 1926b37e29a..b10b3f87148 100644 --- a/tools/accuracy_checker/configs/mask_rcnn_inception_v2_coco.yml +++ b/tools/accuracy_checker/configs/mask_rcnn_inception_v2_coco.yml @@ -40,3 +40,4 @@ models: metrics: - type: coco_orig_segm_precision - type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mask_rcnn_resnet101_atrous_coco.yml b/tools/accuracy_checker/configs/mask_rcnn_resnet101_atrous_coco.yml index 6598c1deed3..79c58b446be 100644 --- a/tools/accuracy_checker/configs/mask_rcnn_resnet101_atrous_coco.yml +++ b/tools/accuracy_checker/configs/mask_rcnn_resnet101_atrous_coco.yml @@ -41,3 +41,4 @@ models: metrics: - type: coco_orig_segm_precision - type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mask_rcnn_resnet50_atrous_coco.yml b/tools/accuracy_checker/configs/mask_rcnn_resnet50_atrous_coco.yml index 46435658fdd..0ccfe6ff5fa 100644 --- a/tools/accuracy_checker/configs/mask_rcnn_resnet50_atrous_coco.yml +++ b/tools/accuracy_checker/configs/mask_rcnn_resnet50_atrous_coco.yml @@ -41,3 +41,4 @@ models: metrics: - type: coco_orig_segm_precision - type: coco_orig_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-ssd.yml b/tools/accuracy_checker/configs/mobilenet-ssd.yml index e03439346ff..51c4ca5041e 100644 --- a/tools/accuracy_checker/configs/mobilenet-ssd.yml +++ b/tools/accuracy_checker/configs/mobilenet-ssd.yml @@ -41,3 +41,4 @@ models: size: 300 postprocessing: - type: resize_prediction_boxes +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v1-0.25-128.yml b/tools/accuracy_checker/configs/mobilenet-v1-0.25-128.yml index feaca0b2426..36e9609424b 100644 --- a/tools/accuracy_checker/configs/mobilenet-v1-0.25-128.yml +++ b/tools/accuracy_checker/configs/mobilenet-v1-0.25-128.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 128 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v1-0.50-160.yml b/tools/accuracy_checker/configs/mobilenet-v1-0.50-160.yml index e867d6edca7..388cdc95126 100644 --- a/tools/accuracy_checker/configs/mobilenet-v1-0.50-160.yml +++ b/tools/accuracy_checker/configs/mobilenet-v1-0.50-160.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 160 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v1-0.50-224.yml b/tools/accuracy_checker/configs/mobilenet-v1-0.50-224.yml index dbfad1eae21..6a867987b85 100644 --- a/tools/accuracy_checker/configs/mobilenet-v1-0.50-224.yml +++ b/tools/accuracy_checker/configs/mobilenet-v1-0.50-224.yml @@ 
-41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v1-1.0-224-tf.yml b/tools/accuracy_checker/configs/mobilenet-v1-1.0-224-tf.yml index 8e9d94935c6..1d3644273be 100644 --- a/tools/accuracy_checker/configs/mobilenet-v1-1.0-224-tf.yml +++ b/tools/accuracy_checker/configs/mobilenet-v1-1.0-224-tf.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v1-1.0-224.yml b/tools/accuracy_checker/configs/mobilenet-v1-1.0-224.yml index 0cf200d96be..d388ad37511 100644 --- a/tools/accuracy_checker/configs/mobilenet-v1-1.0-224.yml +++ b/tools/accuracy_checker/configs/mobilenet-v1-1.0-224.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v2-1.0-224.yml b/tools/accuracy_checker/configs/mobilenet-v2-1.0-224.yml index 99ef1f0061e..8a370ae22f0 100644 --- a/tools/accuracy_checker/configs/mobilenet-v2-1.0-224.yml +++ b/tools/accuracy_checker/configs/mobilenet-v2-1.0-224.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v2-1.4-224.yml b/tools/accuracy_checker/configs/mobilenet-v2-1.4-224.yml index 28fe8969817..10f5a515302 100644 --- a/tools/accuracy_checker/configs/mobilenet-v2-1.4-224.yml +++ b/tools/accuracy_checker/configs/mobilenet-v2-1.4-224.yml @@ -41,3 +41,4 @@ models: central_fraction: 0.875 - type: resize size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v2-pytorch.yml b/tools/accuracy_checker/configs/mobilenet-v2-pytorch.yml index dc1d3ee2d08..e1db6778b02 100644 --- a/tools/accuracy_checker/configs/mobilenet-v2-pytorch.yml +++ b/tools/accuracy_checker/configs/mobilenet-v2-pytorch.yml @@ -76,3 +76,4 @@ models: use_pillow: true # Using accuracy metric, achieved result of public model - 71.8 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenet-v2.yml b/tools/accuracy_checker/configs/mobilenet-v2.yml index 454c3c2f636..456eedbb414 100644 --- a/tools/accuracy_checker/configs/mobilenet-v2.yml +++ b/tools/accuracy_checker/configs/mobilenet-v2.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v1-tf-0001.yml b/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v1-tf-0001.yml index 819d32e1738..5284c24f97c 100644 --- a/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v1-tf-0001.yml +++ b/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v1-tf-0001.yml @@ -25,3 +25,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v2-tf-0001.yml b/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v2-tf-0001.yml index 8173fa91ee1..c1e9704792f 100644 --- a/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v2-tf-0001.yml +++ b/tools/accuracy_checker/configs/mobilenetv2-int8-sparse-v2-tf-0001.yml @@ -26,3 +26,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git 
a/tools/accuracy_checker/configs/mobilenetv2-int8-tf-0001.yml b/tools/accuracy_checker/configs/mobilenetv2-int8-tf-0001.yml index 953ca511b06..593c5c1ebaf 100644 --- a/tools/accuracy_checker/configs/mobilenetv2-int8-tf-0001.yml +++ b/tools/accuracy_checker/configs/mobilenetv2-int8-tf-0001.yml @@ -26,3 +26,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-densenet-121-0.125.yml b/tools/accuracy_checker/configs/octave-densenet-121-0.125.yml index 03eda9f3cfb..0557486a49b 100644 --- a/tools/accuracy_checker/configs/octave-densenet-121-0.125.yml +++ b/tools/accuracy_checker/configs/octave-densenet-121-0.125.yml @@ -71,3 +71,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 76.1 / 93.0 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-resnet-101-0.125.yml b/tools/accuracy_checker/configs/octave-resnet-101-0.125.yml index 3a83061b690..cb4cb0277d8 100644 --- a/tools/accuracy_checker/configs/octave-resnet-101-0.125.yml +++ b/tools/accuracy_checker/configs/octave-resnet-101-0.125.yml @@ -75,3 +75,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 79.2 / 94.4 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-resnet-200-0.125.yml b/tools/accuracy_checker/configs/octave-resnet-200-0.125.yml index 8baee9ab483..e87dd60c857 100644 --- a/tools/accuracy_checker/configs/octave-resnet-200-0.125.yml +++ b/tools/accuracy_checker/configs/octave-resnet-200-0.125.yml @@ -85,3 +85,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 80.0 / 94.9 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-resnet-26-0.25.yml b/tools/accuracy_checker/configs/octave-resnet-26-0.25.yml index 7f39eceb098..63fa31dbdf7 100644 --- a/tools/accuracy_checker/configs/octave-resnet-26-0.25.yml +++ b/tools/accuracy_checker/configs/octave-resnet-26-0.25.yml @@ -73,3 +73,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 76.1 / 92.6 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-resnet-50-0.125.yml b/tools/accuracy_checker/configs/octave-resnet-50-0.125.yml index b1050e3bd19..7ae4c213d7f 100644 --- a/tools/accuracy_checker/configs/octave-resnet-50-0.125.yml +++ b/tools/accuracy_checker/configs/octave-resnet-50-0.125.yml @@ -71,3 +71,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 78.2 / 93.9 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-resnext-101-0.25.yml b/tools/accuracy_checker/configs/octave-resnext-101-0.25.yml index 4f2b433a283..8a10851ae8f 100644 --- a/tools/accuracy_checker/configs/octave-resnext-101-0.25.yml +++ b/tools/accuracy_checker/configs/octave-resnext-101-0.25.yml @@ -66,3 +66,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 79.6 / 94.5 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-resnext-50-0.25.yml b/tools/accuracy_checker/configs/octave-resnext-50-0.25.yml index 49f4992a94a..03707a2668e 100644 --- 
a/tools/accuracy_checker/configs/octave-resnext-50-0.25.yml +++ b/tools/accuracy_checker/configs/octave-resnext-50-0.25.yml @@ -68,3 +68,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 78.8 / 94.2 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/octave-se-resnet-50-0.125.yml b/tools/accuracy_checker/configs/octave-se-resnet-50-0.125.yml index 4b51e367c7a..e845c66a8c7 100644 --- a/tools/accuracy_checker/configs/octave-se-resnet-50-0.125.yml +++ b/tools/accuracy_checker/configs/octave-se-resnet-50-0.125.yml @@ -76,3 +76,4 @@ models: size: 224 # Using accuracy metric, achieved result of public model - 78.2 / 93.9 (top 1 and top 5 respectively) +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/pedestrian-and-vehicle-detector-adas-0001.yml b/tools/accuracy_checker/configs/pedestrian-and-vehicle-detector-adas-0001.yml index 1a9c0c906c0..2e78fca94b3 100644 --- a/tools/accuracy_checker/configs/pedestrian-and-vehicle-detector-adas-0001.yml +++ b/tools/accuracy_checker/configs/pedestrian-and-vehicle-detector-adas-0001.yml @@ -53,3 +53,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/pedestrian-detection-adas-0002.yml b/tools/accuracy_checker/configs/pedestrian-detection-adas-0002.yml index 57f61a956bd..cec248a0d8c 100644 --- a/tools/accuracy_checker/configs/pedestrian-detection-adas-0002.yml +++ b/tools/accuracy_checker/configs/pedestrian-detection-adas-0002.yml @@ -53,3 +53,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/pedestrian-detection-adas-binary-0001.yml b/tools/accuracy_checker/configs/pedestrian-detection-adas-binary-0001.yml index 470e50b55b5..1979b5f110e 100644 --- a/tools/accuracy_checker/configs/pedestrian-detection-adas-binary-0001.yml +++ b/tools/accuracy_checker/configs/pedestrian-detection-adas-binary-0001.yml @@ -37,3 +37,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-attributes-recognition-crossroad-0230.yml b/tools/accuracy_checker/configs/person-attributes-recognition-crossroad-0230.yml index a73722de920..1b3050e8b4e 100644 --- a/tools/accuracy_checker/configs/person-attributes-recognition-crossroad-0230.yml +++ b/tools/accuracy_checker/configs/person-attributes-recognition-crossroad-0230.yml @@ -33,3 +33,4 @@ models: - type: f1-score calculate_average: False presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-detection-action-recognition-0005.yml b/tools/accuracy_checker/configs/person-detection-action-recognition-0005.yml index bf772ccfcd1..fce0c4082c2 100644 --- a/tools/accuracy_checker/configs/person-detection-action-recognition-0005.yml +++ b/tools/accuracy_checker/configs/person-detection-action-recognition-0005.yml @@ -98,3 +98,4 @@ models: prediction_source: action_prediction label_map: action_label_map ignore_label: 3 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-detection-action-recognition-0006.yml 
b/tools/accuracy_checker/configs/person-detection-action-recognition-0006.yml index 62e329629c2..8d11f0a19ca 100644 --- a/tools/accuracy_checker/configs/person-detection-action-recognition-0006.yml +++ b/tools/accuracy_checker/configs/person-detection-action-recognition-0006.yml @@ -118,3 +118,4 @@ models: prediction_source: action_prediction label_map: action_label_map ignore_label: 6 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-detection-action-recognition-teacher-0002.yml b/tools/accuracy_checker/configs/person-detection-action-recognition-teacher-0002.yml index 56301b10380..47b454f24f3 100644 --- a/tools/accuracy_checker/configs/person-detection-action-recognition-teacher-0002.yml +++ b/tools/accuracy_checker/configs/person-detection-action-recognition-teacher-0002.yml @@ -81,4 +81,4 @@ models: label_map: action_label_map fast_match: True ignore_label: 3 - +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-detection-raisinghand-recognition-0001.yml b/tools/accuracy_checker/configs/person-detection-raisinghand-recognition-0001.yml index 546b7905e73..907accd4927 100644 --- a/tools/accuracy_checker/configs/person-detection-raisinghand-recognition-0001.yml +++ b/tools/accuracy_checker/configs/person-detection-raisinghand-recognition-0001.yml @@ -80,3 +80,4 @@ models: prediction_source: action_prediction label_map: action_label_map ignore_label: 2 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-detection-retail-0002.yml b/tools/accuracy_checker/configs/person-detection-retail-0002.yml index 1ecbf3edc71..93ce26159c5 100644 --- a/tools/accuracy_checker/configs/person-detection-retail-0002.yml +++ b/tools/accuracy_checker/configs/person-detection-retail-0002.yml @@ -67,3 +67,4 @@ models: include_boundaries: False allow_multiple_matches_per_ignored: False distinct_conf: False +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-detection-retail-0013.yml b/tools/accuracy_checker/configs/person-detection-retail-0013.yml index 0731eb23663..6ee1dedd393 100644 --- a/tools/accuracy_checker/configs/person-detection-retail-0013.yml +++ b/tools/accuracy_checker/configs/person-detection-retail-0013.yml @@ -54,3 +54,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: False distinct_conf: False +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml index b86c61936ca..01e4ebc7c39 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0031.yml @@ -42,3 +42,4 @@ models: top_k: 1 - type: reid_map +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml index 4c9e36f67b6..94ca435c4c7 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0076.yml @@ -38,3 +38,4 @@ models: top_k: 1 - type: reid_map +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml b/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml index 
c1187fc8d83..934cfbd7044 100644 --- a/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml +++ b/tools/accuracy_checker/configs/person-reidentification-retail-0079.yml @@ -38,3 +38,4 @@ models: top_k: 1 - type: reid_map +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-0078.yml b/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-0078.yml index f9a6c8f55dc..b84d5dc592e 100644 --- a/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-0078.yml +++ b/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-0078.yml @@ -53,3 +53,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: False +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-1016.yml b/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-1016.yml index b6945fd85da..5e0102c9c92 100644 --- a/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-1016.yml +++ b/tools/accuracy_checker/configs/person-vehicle-bike-detection-crossroad-1016.yml @@ -44,3 +44,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: False +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-101.yml b/tools/accuracy_checker/configs/resnet-101.yml index 24375d1bce5..a05d54a2075 100644 --- a/tools/accuracy_checker/configs/resnet-101.yml +++ b/tools/accuracy_checker/configs/resnet-101.yml @@ -40,3 +40,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-152.yml b/tools/accuracy_checker/configs/resnet-152.yml index 6fce425053f..e8b935e7a03 100644 --- a/tools/accuracy_checker/configs/resnet-152.yml +++ b/tools/accuracy_checker/configs/resnet-152.yml @@ -40,3 +40,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-50-caffe2.yml b/tools/accuracy_checker/configs/resnet-50-caffe2.yml index 512d7d737c9..b336ad40b52 100644 --- a/tools/accuracy_checker/configs/resnet-50-caffe2.yml +++ b/tools/accuracy_checker/configs/resnet-50-caffe2.yml @@ -44,3 +44,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-50-int8-sparse-v1-tf-0001.yml b/tools/accuracy_checker/configs/resnet-50-int8-sparse-v1-tf-0001.yml index c6dd2a9e0b1..5665dec82ec 100644 --- a/tools/accuracy_checker/configs/resnet-50-int8-sparse-v1-tf-0001.yml +++ b/tools/accuracy_checker/configs/resnet-50-int8-sparse-v1-tf-0001.yml @@ -28,3 +28,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-50-int8-sparse-v2-tf-0001.yml b/tools/accuracy_checker/configs/resnet-50-int8-sparse-v2-tf-0001.yml index d7979ab8a24..90bfa327717 100644 --- a/tools/accuracy_checker/configs/resnet-50-int8-sparse-v2-tf-0001.yml +++ b/tools/accuracy_checker/configs/resnet-50-int8-sparse-v2-tf-0001.yml @@ -28,3 +28,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-50-int8-tf-0001.yml 
b/tools/accuracy_checker/configs/resnet-50-int8-tf-0001.yml index deaf6346e29..e02f33d45ba 100644 --- a/tools/accuracy_checker/configs/resnet-50-int8-tf-0001.yml +++ b/tools/accuracy_checker/configs/resnet-50-int8-tf-0001.yml @@ -28,3 +28,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-50-pytorch.yml b/tools/accuracy_checker/configs/resnet-50-pytorch.yml index 0fac50c8e0e..eaa16288b32 100644 --- a/tools/accuracy_checker/configs/resnet-50-pytorch.yml +++ b/tools/accuracy_checker/configs/resnet-50-pytorch.yml @@ -74,3 +74,4 @@ models: size: 224 use_pillow: True # Reference metric from PyTorch (pytorch v1.0.1, torchvision v0.2.2) top-1 76.13% top-5 92.862% +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet-50.yml b/tools/accuracy_checker/configs/resnet-50.yml index f37723fab68..26f91266fb2 100644 --- a/tools/accuracy_checker/configs/resnet-50.yml +++ b/tools/accuracy_checker/configs/resnet-50.yml @@ -40,3 +40,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/resnet50-binary-0001.yml b/tools/accuracy_checker/configs/resnet50-binary-0001.yml index fa319864441..104b315f268 100644 --- a/tools/accuracy_checker/configs/resnet50-binary-0001.yml +++ b/tools/accuracy_checker/configs/resnet50-binary-0001.yml @@ -33,3 +33,4 @@ models: - name: accuracy@top5 type: accuracy top_k: 5 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml b/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml index 1e3853991ec..74cd9cb6e01 100644 --- a/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml +++ b/tools/accuracy_checker/configs/road-segmentation-adas-0001.yml @@ -20,3 +20,4 @@ models: datasets: - name: road_segmentation +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/se-inception.yml b/tools/accuracy_checker/configs/se-inception.yml index 4fd25c0dcac..6439b2f803f 100644 --- a/tools/accuracy_checker/configs/se-inception.yml +++ b/tools/accuracy_checker/configs/se-inception.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/se-resnet-101.yml b/tools/accuracy_checker/configs/se-resnet-101.yml index 6e7dd6a0373..e895441e63b 100644 --- a/tools/accuracy_checker/configs/se-resnet-101.yml +++ b/tools/accuracy_checker/configs/se-resnet-101.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/se-resnet-152.yml b/tools/accuracy_checker/configs/se-resnet-152.yml index d02eb4ffc83..e8d1bfb1473 100644 --- a/tools/accuracy_checker/configs/se-resnet-152.yml +++ b/tools/accuracy_checker/configs/se-resnet-152.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/se-resnet-50.yml b/tools/accuracy_checker/configs/se-resnet-50.yml index 766f65059de..32c60c6c8a4 100644 --- a/tools/accuracy_checker/configs/se-resnet-50.yml +++ b/tools/accuracy_checker/configs/se-resnet-50.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/se-resnext-101.yml 
b/tools/accuracy_checker/configs/se-resnext-101.yml index 12cea5add9a..1b0d308a995 100644 --- a/tools/accuracy_checker/configs/se-resnext-101.yml +++ b/tools/accuracy_checker/configs/se-resnext-101.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/se-resnext-50.yml b/tools/accuracy_checker/configs/se-resnext-50.yml index d968c3824c2..884937eddc1 100644 --- a/tools/accuracy_checker/configs/se-resnext-50.yml +++ b/tools/accuracy_checker/configs/se-resnext-50.yml @@ -39,3 +39,4 @@ models: size: 256 - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml b/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml index 2effe8fb568..093d370a11d 100644 --- a/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml +++ b/tools/accuracy_checker/configs/semantic-segmentation-adas-0001.yml @@ -25,3 +25,4 @@ models: - type: mean_iou use_argmax: False presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml b/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml index 87266d46445..1bc683e4c57 100644 --- a/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml +++ b/tools/accuracy_checker/configs/single-image-super-resolution-1032.yml @@ -55,3 +55,4 @@ models: datasets: - name: super_resolution_x4 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml b/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml index 9871b134e99..128669f9629 100644 --- a/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml +++ b/tools/accuracy_checker/configs/single-image-super-resolution-1033.yml @@ -60,3 +60,4 @@ models: - type: psnr scale_border: 4 presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/squeezenet1.0.yml b/tools/accuracy_checker/configs/squeezenet1.0.yml index 9450d8b7c38..38f5e2e0087 100644 --- a/tools/accuracy_checker/configs/squeezenet1.0.yml +++ b/tools/accuracy_checker/configs/squeezenet1.0.yml @@ -38,3 +38,4 @@ models: size: 256 - type: crop size: 227 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml b/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml index 8bcbe35f181..a997820cb59 100644 --- a/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml +++ b/tools/accuracy_checker/configs/squeezenet1.1-caffe2.yml @@ -42,3 +42,4 @@ models: aspect_ratio_scale: greater - type: crop size: 227 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/squeezenet1.1.yml b/tools/accuracy_checker/configs/squeezenet1.1.yml index 8de59f46113..7444305499e 100644 --- a/tools/accuracy_checker/configs/squeezenet1.1.yml +++ b/tools/accuracy_checker/configs/squeezenet1.1.yml @@ -38,3 +38,4 @@ models: size: 256 - type: crop size: 227 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ssd300.yml b/tools/accuracy_checker/configs/ssd300.yml index f53e94ec0d5..6f9f7f3f61f 100644 --- a/tools/accuracy_checker/configs/ssd300.yml +++ b/tools/accuracy_checker/configs/ssd300.yml @@ -40,3 +40,4 @@ models: size: 300 postprocessing: - type: resize_prediction_boxes 
+global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ssd512.yml b/tools/accuracy_checker/configs/ssd512.yml index 39190060f8f..c23cda33c8a 100644 --- a/tools/accuracy_checker/configs/ssd512.yml +++ b/tools/accuracy_checker/configs/ssd512.yml @@ -40,3 +40,4 @@ models: size: 512 postprocessing: - type: resize_prediction_boxes +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ssd_mobilenet_v1_coco.yml b/tools/accuracy_checker/configs/ssd_mobilenet_v1_coco.yml index 306ed2f471b..b2af3f222b0 100644 --- a/tools/accuracy_checker/configs/ssd_mobilenet_v1_coco.yml +++ b/tools/accuracy_checker/configs/ssd_mobilenet_v1_coco.yml @@ -24,3 +24,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ssd_mobilenet_v1_fpn_coco.yml b/tools/accuracy_checker/configs/ssd_mobilenet_v1_fpn_coco.yml index 6b7f5e3ce5e..c8081012e91 100644 --- a/tools/accuracy_checker/configs/ssd_mobilenet_v1_fpn_coco.yml +++ b/tools/accuracy_checker/configs/ssd_mobilenet_v1_fpn_coco.yml @@ -24,3 +24,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ssd_mobilenet_v2_coco.yml b/tools/accuracy_checker/configs/ssd_mobilenet_v2_coco.yml index efe661303a6..791e6bcac88 100644 --- a/tools/accuracy_checker/configs/ssd_mobilenet_v2_coco.yml +++ b/tools/accuracy_checker/configs/ssd_mobilenet_v2_coco.yml @@ -24,3 +24,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/ssdlite_mobilenet_v2.yml b/tools/accuracy_checker/configs/ssdlite_mobilenet_v2.yml index b9364884565..be13ab03c5c 100644 --- a/tools/accuracy_checker/configs/ssdlite_mobilenet_v2.yml +++ b/tools/accuracy_checker/configs/ssdlite_mobilenet_v2.yml @@ -24,3 +24,4 @@ models: - type: resize_prediction_boxes metrics: - type: coco_precision +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/text-detection-0003.yml b/tools/accuracy_checker/configs/text-detection-0003.yml index 56f2abdff6e..35375a91500 100644 --- a/tools/accuracy_checker/configs/text-detection-0003.yml +++ b/tools/accuracy_checker/configs/text-detection-0003.yml @@ -68,3 +68,4 @@ models: - type: incidental_text_hmean name: f-measure ignore_difficult: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/text-detection-0004.yml b/tools/accuracy_checker/configs/text-detection-0004.yml index 3dc506e3154..30a87082466 100644 --- a/tools/accuracy_checker/configs/text-detection-0004.yml +++ b/tools/accuracy_checker/configs/text-detection-0004.yml @@ -55,3 +55,4 @@ models: - type: incidental_text_hmean name: f-measure ignore_difficult: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml index bec1590d9d2..d0359254fc0 100644 --- a/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml +++ b/tools/accuracy_checker/configs/text-image-super-resolution-0001.yml @@ -31,3 +31,4 @@ models: datasets: - name: text_super_resolution_x3 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/text-recognition-0012.yml 
b/tools/accuracy_checker/configs/text-recognition-0012.yml index 28154833815..4648bb77986 100644 --- a/tools/accuracy_checker/configs/text-recognition-0012.yml +++ b/tools/accuracy_checker/configs/text-recognition-0012.yml @@ -27,3 +27,4 @@ models: metrics: - type: character_recognition_accuracy +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vehicle-attributes-recognition-barrier-0039.yml b/tools/accuracy_checker/configs/vehicle-attributes-recognition-barrier-0039.yml index 71ca136a06f..582a23379b7 100644 --- a/tools/accuracy_checker/configs/vehicle-attributes-recognition-barrier-0039.yml +++ b/tools/accuracy_checker/configs/vehicle-attributes-recognition-barrier-0039.yml @@ -57,3 +57,4 @@ models: annotation_source: type prediction_source: type label_map: type_label_map +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vehicle-detection-adas-0002.yml b/tools/accuracy_checker/configs/vehicle-detection-adas-0002.yml index e18776a06fc..f26fd802d2d 100644 --- a/tools/accuracy_checker/configs/vehicle-detection-adas-0002.yml +++ b/tools/accuracy_checker/configs/vehicle-detection-adas-0002.yml @@ -49,3 +49,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vehicle-detection-adas-binary-0001.yml b/tools/accuracy_checker/configs/vehicle-detection-adas-binary-0001.yml index b273e01b8c2..0f4928c17b8 100644 --- a/tools/accuracy_checker/configs/vehicle-detection-adas-binary-0001.yml +++ b/tools/accuracy_checker/configs/vehicle-detection-adas-binary-0001.yml @@ -33,3 +33,4 @@ models: include_boundaries: True allow_multiple_matches_per_ignored: True use_filtered_tp: True +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vehicle-license-plate-detection-barrier-0106.yml b/tools/accuracy_checker/configs/vehicle-license-plate-detection-barrier-0106.yml index 5578d51c47f..3fc95766721 100644 --- a/tools/accuracy_checker/configs/vehicle-license-plate-detection-barrier-0106.yml +++ b/tools/accuracy_checker/configs/vehicle-license-plate-detection-barrier-0106.yml @@ -55,3 +55,4 @@ models: allow_multiple_matches_per_ignored: True distinct_conf: False presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vgg16.yml b/tools/accuracy_checker/configs/vgg16.yml index 7756740941f..a5bd952e37b 100644 --- a/tools/accuracy_checker/configs/vgg16.yml +++ b/tools/accuracy_checker/configs/vgg16.yml @@ -40,3 +40,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vgg19-caffe2.yml b/tools/accuracy_checker/configs/vgg19-caffe2.yml index 75319cd8ab3..7628ade6d7c 100644 --- a/tools/accuracy_checker/configs/vgg19-caffe2.yml +++ b/tools/accuracy_checker/configs/vgg19-caffe2.yml @@ -43,3 +43,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/vgg19.yml b/tools/accuracy_checker/configs/vgg19.yml index 7f9cdd83616..bc2bc63bbc0 100644 --- a/tools/accuracy_checker/configs/vgg19.yml +++ b/tools/accuracy_checker/configs/vgg19.yml @@ -40,3 +40,4 @@ models: aspect_ratio_scale: greater - type: crop size: 224 +global_definitions: ../dataset_definitions.yml diff --git 
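The run of config hunks above all make the same one-line addition: a `global_definitions` key pointing at `../dataset_definitions.yml`, relative to each per-model config. As a rough illustration only (PyYAML assumed available; the helper name is not part of the tool), this is roughly how such a key can be resolved against the config file's own location, in line with the config_reader change later in this series:

```python
# Sketch: resolve a config's global_definitions entry relative to the config file
# itself. load_dataset_definitions is an illustrative helper, not tool API.
from pathlib import Path

import yaml


def load_dataset_definitions(config_path):
    config_path = Path(config_path)
    with config_path.open() as config_file:
        config = yaml.safe_load(config_file)
    definitions_ref = config.get('global_definitions')
    if not definitions_ref:
        return None
    definitions_path = (config_path.parent / definitions_ref).resolve()
    with definitions_path.open() as definitions_file:
        return yaml.safe_load(definitions_file)
```

With the hunks above, calling this on one of the files under `tools/accuracy_checker/configs/` would resolve to `tools/accuracy_checker/dataset_definitions.yml`.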
a/tools/accuracy_checker/tests/test_config_reader.py b/tools/accuracy_checker/tests/test_config_reader.py index f61b4603d48..4e034306d9b 100644 --- a/tools/accuracy_checker/tests/test_config_reader.py +++ b/tools/accuracy_checker/tests/test_config_reader.py @@ -102,12 +102,12 @@ def setup_method(self): def test_read_configs_without_global_config(self, mocker): config = {'models': [{ 'name': 'model', - 'launchers': [{'framework': 'dlsdk', 'model': Path('/absolute_path'), 'weights': Path('/absolute_path')}], + 'launchers': [{'framework': 'dlsdk', 'model': Path('/absolute_path'), 'weights': Path('/absolute_path'), '_models_prefix': Path.cwd()}], 'datasets': [{'name': 'global_dataset'}] }]} empty_args = Namespace(**{ - 'models': None, 'extensions': None, 'source': None, 'annotations': None, - 'converted_models': None, 'model_optimizer': None, 'bitstreams': None, + 'models': Path.cwd(), 'extensions': Path.cwd(), 'source': Path.cwd(), 'annotations': Path.cwd(), + 'converted_models': None, 'model_optimizer': None, 'bitstreams': Path.cwd(), 'definitions': None, 'config': None, 'stored_predictions': None, 'tf_custom_op_config': None, 'progress': 'bar', 'target_framework': None, 'target_devices': None, 'log_file': None, 'tf_obj_detection_api_pipeline_config_path': None, 'target_tags': None, 'cpu_extensions_mode': None, @@ -403,6 +403,76 @@ def test_expand_relative_paths_in_datasets_config_using_command_line(self, mocke assert config['models'][0]['datasets'][0] == expected + def test_expand_relative_paths_in_datasets_config_using_env_variable(self, mocker): + local_config = {'models': [{ + 'name': 'model', + 'launchers': [{'framework': 'caffe'}], + 'datasets': [{ + 'name': 'global_dataset', + 'dataset_meta': 'relative_annotation_path', + 'data_source': 'relative_source_path', + 'segmentation_masks_source': 'relative_source_path', + 'annotation': 'relative_annotation_path' + }] + }]} + + mocker.patch(self.module + '._read_configs', return_value=( + None, local_config + )) + expected = copy.deepcopy(local_config['models'][0]['datasets'][0]) + with mock_filesystem(['source_2/']) as env_prefix: + mocker.patch('os.environ.get', return_value=str(env_prefix)) + with mock_filesystem(['source/', 'annotations/']) as prefix: + expected['annotation'] = prefix / self.arguments.annotations / 'relative_annotation_path' + expected['dataset_meta'] = prefix / self.arguments.annotations / 'relative_annotation_path' + expected['segmentation_masks_source'] = prefix / self.arguments.source / 'relative_source_path' + expected['data_source'] = prefix / self.arguments.source / 'relative_source_path' + + arguments = copy.deepcopy(self.arguments) + arguments.bitstreams = None + arguments.extensions = None + arguments.source = prefix / arguments.source + arguments.annotations = prefix / self.arguments.annotations + + config = ConfigReader.merge(arguments)[0] + + assert config['models'][0]['datasets'][0] == expected + + def test_not_overwrite_relative_paths_in_datasets_config_using_env_variable_if_commandline_provided(self, mocker): + local_config = {'models': [{ + 'name': 'model', + 'launchers': [{'framework': 'caffe'}], + 'datasets': [{ + 'name': 'global_dataset', + 'dataset_meta': 'relative_annotation_path', + 'data_source': 'relative_source_path', + 'segmentation_masks_source': 'relative_source_path', + 'annotation': 'relative_annotation_path' + }] + }]} + + mocker.patch(self.module + '._read_configs', return_value=( + None, local_config + )) + expected = copy.deepcopy(local_config['models'][0]['datasets'][0]) + with 
mock_filesystem(['source/']) as prefix: + mocker.patch('os.environ.get', return_value=str(prefix)) + expected['dataset_meta'] = prefix / 'relative_annotation_path' + expected['segmentation_masks_source'] = prefix / 'relative_source_path' + expected['data_source'] = prefix / 'relative_source_path' + expected['annotation'] = prefix / 'relative_annotation_path' + expected['dataset_meta'] = prefix / 'relative_annotation_path' + + arguments = copy.deepcopy(self.arguments) + arguments.bitstreams = None + arguments.extensions = None + arguments.source = None + arguments.annotations = None + + config = ConfigReader.merge(arguments)[0] + + assert config['models'][0]['datasets'][0] == expected + def test_not_modify_absolute_paths_in_datasets_config_using_command_line(self): local_config = {'models': [{ 'name': 'model', @@ -630,6 +700,7 @@ def test_merge_launchers_with_definitions(self, mocker): self.global_config, local_config )) expected = copy.deepcopy(self.get_global_launcher('dlsdk')) + expected['_models_prefix'] = Path.cwd() with mock_filesystem(['bitstreams/', 'extensions/']) as prefix: expected['bitstream'] = prefix / self.arguments.bitstreams / expected['bitstream'] expected['cpu_extensions'] = prefix / self.arguments.extensions / expected['cpu_extensions'] @@ -647,11 +718,12 @@ def test_merge_launchers_with_definitions(self, mocker): def test_merge_launchers_with_model_is_not_modified(self, mocker): local_config = {'models': [{ 'name': 'model', - 'launchers': [{'framework': 'dlsdk', 'model': 'custom'}], + 'launchers': [{'framework': 'dlsdk', 'model': '/custom'}], 'datasets': [{'name': 'global_dataset'}] }]} expected = copy.deepcopy(self.get_global_launcher('dlsdk')) - expected['model'] = 'custom' + expected['model'] = Path('/custom') + expected['_models_prefix'] = Path.cwd() mocker.patch(self.module + '._read_configs', return_value=( self.global_config, local_config )) From 6898d16ee8f861d764dd49af3415d73b7a9e1960 Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 22 Oct 2019 11:59:48 +0300 Subject: [PATCH 168/927] AC: add pairwise subsetting logic for make_subset (#534) --- .../accuracy_checker/dataset.py | 30 ++++++++++++++++--- .../accuracy_checker/metrics/reid.py | 9 ++++-- 2 files changed, 33 insertions(+), 6 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/dataset.py b/tools/accuracy_checker/accuracy_checker/dataset.py index 5bb21f5cc0b..3a78ab617a7 100644 --- a/tools/accuracy_checker/accuracy_checker/dataset.py +++ b/tools/accuracy_checker/accuracy_checker/dataset.py @@ -20,8 +20,8 @@ from .annotation_converters import BaseFormatConverter, save_annotation, make_subset, analyze_dataset from .config import ConfigValidator, StringField, PathField, ListField, DictField, BaseField, NumberField, ConfigError -from .utils import JSONDecoderWithAutoConversion, read_json, get_path, contains_all, set_image_metadata -from .representation import BaseRepresentation +from .utils import JSONDecoderWithAutoConversion, read_json, get_path, contains_all, set_image_metadata, OrderedSet +from .representation import BaseRepresentation, ReIdentificationClassificationAnnotation from .data_readers import DataReaderField @@ -158,12 +158,34 @@ def __getitem__(self, item): return batch_ids, self._annotation[batch_start:batch_end] def make_subset(self, ids=None, start=0, step=1, end=None): + pairwise_subset = isinstance(self._annotation[0], ReIdentificationClassificationAnnotation) if ids: - self.subset = ids + self.subset = ids if not pairwise_subset else self.make_subset_pairwise(ids) return if 
not end: end = self.size - self.subset = range(start, end, step) + ids = range(start, end, step) + self.subset = ids if not pairwise_subset else self.make_subset_pairwise(ids) + + def make_subset_pairwise(self, ids, cut_to_final_size=True): + final_size = len(ids) + subsample_set = OrderedSet() + identifier_to_index = {annotation.identifier: index for index, annotation in enumerate(self._annotation)} + for idx in ids: + subsample_set.add(idx) + current_annotation = self._annotation[idx] + positive_pairs = [ + identifier_to_index[pair_identifier] for pair_identifier in current_annotation.positive_pairs + ] + subsample_set |= positive_pairs + negative_pairs = [ + identifier_to_index[pair_identifier] for pair_identifier in current_annotation.positive_pairs + ] + subsample_set |= negative_pairs + subsample_set = list(subsample_set) + if cut_to_final_size: + subsample_set = subsample_set[:final_size] + return subsample_set @staticmethod def set_image_metadata(annotation, images): diff --git a/tools/accuracy_checker/accuracy_checker/metrics/reid.py b/tools/accuracy_checker/accuracy_checker/metrics/reid.py index 2344f52d943..f0c643cb930 100644 --- a/tools/accuracy_checker/accuracy_checker/metrics/reid.py +++ b/tools/accuracy_checker/accuracy_checker/metrics/reid.py @@ -381,10 +381,15 @@ def get_embedding_distances(annotation, prediction, train=False): if train != image1.metadata.get("train", False): continue + if image1.identifier not in image_indexes: + continue + for image2 in image1.positive_pairs: - pairs.append(PairDesc(image_indexes[image1.identifier], image_indexes[image2], True)) + if image2 in image_indexes: + pairs.append(PairDesc(image_indexes[image1.identifier], image_indexes[image2], True)) for image2 in image1.negative_pairs: - pairs.append(PairDesc(image_indexes[image1.identifier], image_indexes[image2], False)) + if image2 in image_indexes: + pairs.append(PairDesc(image_indexes[image1.identifier], image_indexes[image2], False)) embed1 = np.asarray([prediction[idx].embedding for idx, _, _ in pairs]) embed2 = np.asarray([prediction[idx].embedding for _, idx, _ in pairs]) From 5007edd52c1108e6aeed639ad0844153ea660d46 Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 22 Oct 2019 14:45:31 +0300 Subject: [PATCH 169/927] AC: automatic number of infer requests in async mode (#535) --- .../quantization_model_evaluator.py | 11 +++-- .../launcher/dlsdk_launcher.py | 47 ++++++++++++++++--- 2 files changed, 48 insertions(+), 10 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index f60050b33a8..7bf3cdce126 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -68,7 +68,7 @@ def _get_batch_input(self, batch_input, batch_annotation): def process_dataset_async( self, - nreq=2, + nreq=None, subset=None, num_images=None, check_progress=False, @@ -89,6 +89,12 @@ def _create_subset(subset, num_images): elif num_images is not None: self.dataset.make_subset(end=num_images) + def _set_number_infer_requests(nreq): + if nreq is None: + nreq = self.launcher.auto_num_requests() + if self.launcher.num_requests != nreq: + self.launcher.num_requests = nreq + if self.dataset is None or (dataset_tag and self.dataset.tag != dataset_tag): self.select_dataset(dataset_tag) @@ -96,13 +102,12 @@ def _create_subset(subset, num_images): 
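For readers skimming the dataset.py hunk above, here is a self-contained sketch of the pairwise subsetting idea, assuming annotation objects that expose `identifier`, `positive_pairs` and `negative_pairs`. Note that this sketch draws the negative list from `negative_pairs`, while the hunk builds both lists from `positive_pairs`, which appears to be a copy-paste slip; a plain insertion-ordered dict stands in for `OrderedSet`.

```python
# Standalone sketch of make_subset_pairwise-style selection (not the tool's code).
# For every sampled index, also pull in the items it is paired with, then trim
# the result back to the requested size while keeping insertion order.
def make_subset_pairwise(annotations, ids, cut_to_final_size=True):
    final_size = len(ids)
    identifier_to_index = {ann.identifier: idx for idx, ann in enumerate(annotations)}
    subsample = {}  # dict keys preserve insertion order, emulating an ordered set
    for idx in ids:
        subsample[idx] = None
        current = annotations[idx]
        for pair_identifier in current.positive_pairs:
            subsample[identifier_to_index[pair_identifier]] = None
        for pair_identifier in current.negative_pairs:
            subsample[identifier_to_index[pair_identifier]] = None
    subset = list(subsample)
    return subset[:final_size] if cut_to_final_size else subset
```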
progress_reporter = None _create_subset(subset, num_images) + _set_number_infer_requests(nreq) if check_progress: progress_reporter = ProgressReporter.provide('print', self.dataset.size) dataset_iterator = iter(enumerate(self.dataset)) - if self.launcher.num_requests != nreq: - self.launcher.num_requests = nreq free_irs = self.launcher.infer_requests queued_irs = [] wait_time = 0.01 diff --git a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py index 32bfe8ee632..de0f14f8670 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py @@ -15,6 +15,7 @@ """ import subprocess +import multiprocessing from pathlib import Path import os import platform @@ -509,6 +510,21 @@ def _align_data_shape(self, data, input_blob): return data.reshape(input_shape) def create_ie_plugin(self, log=True): + def set_nireq(): + num_requests = self.config.get('num_requests') + if num_requests is not None: + num_requests = get_or_parse_value(num_requests, casting_type=int) + if len(num_requests) != 1: + raise ConfigError('Several values for _num_requests specified') + self._num_requests = num_requests[0] + if self._num_requests != 1 and not self.async_mode: + warning('{} infer requests in sync mode is not supported. Only 1 infer request will be used.') + self._num_requests = 1 + elif not self.async_mode: + self._num_requests = 1 + else: + self._num_requests = self.auto_nreq() + if hasattr(self, 'plugin'): del self.plugin if log: @@ -518,13 +534,8 @@ def create_ie_plugin(self, log=True): else: self.plugin = ie.IEPlugin(self._device) self.async_mode = self.get_value_from_config('async_mode') - num_requests = get_or_parse_value(self.config.get('num_requests', 1), casting_type=int) - if len(num_requests) != 1: - raise ConfigError('Several values for _num_requests specified') - self._num_requests = num_requests[0] - if self._num_requests != 1 and not self.async_mode: - warning('{} infer requests in sync mode is not supported. 
Only 1 infer request will be used.') - self._num_requests = 1 + set_nireq() + if log: print_info('Loaded {} plugin version: {}'.format(self.plugin.device, self.plugin.version)) @@ -541,6 +552,28 @@ def create_ie_plugin(self, log=True): if log_level: self.plugin.set_config({'VPU_LOG_LEVEL': log_level}) + def auto_nreq(self): + concurrency_device = { + 'CPU': 1, + 'GPU': 1, + 'HDDL': 100, + 'MYRIAD': 4, + 'FPGA': 3 + } + platform_list = self._devices_list() + if 'CPU' in platform_list and len(platform_list) == 1: + min_requests = [4, 5, 3] + cpu_count = multiprocessing.cpu_count() + for min_request in min_requests: + if cpu_count % min_request == 0: + return max(min_request, cpu_count / min_request) + if 'GPU' in platform_list and len(platform_list) == 1: + return 2 + concurrency = 0 + for device in platform_list: + concurrency += concurrency_device.get(device, 1) + return concurrency + def _create_multi_device_plugin(self, log=True): async_mode = self.get_value_from_config('async_mode') if not async_mode: From 9048cab2bc6f96e44b3cc816776200c778db0f76 Mon Sep 17 00:00:00 2001 From: Katya Date: Tue, 22 Oct 2019 16:09:59 +0300 Subject: [PATCH 170/927] Ea/fix method name (#536) * AC: automatic number of infer requests in async mode * AC: auto_num_req -> auto_number_requests --- .../evaluators/quantization_model_evaluator.py | 1 + .../accuracy_checker/launcher/dlsdk_launcher.py | 4 ++-- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py index 7bf3cdce126..b06c15ee101 100644 --- a/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py +++ b/tools/accuracy_checker/accuracy_checker/evaluators/quantization_model_evaluator.py @@ -108,6 +108,7 @@ def _set_number_infer_requests(nreq): progress_reporter = ProgressReporter.provide('print', self.dataset.size) dataset_iterator = iter(enumerate(self.dataset)) + free_irs = self.launcher.infer_requests queued_irs = [] wait_time = 0.01 diff --git a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py index de0f14f8670..a9e8fb0814b 100644 --- a/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py +++ b/tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher.py @@ -523,7 +523,7 @@ def set_nireq(): elif not self.async_mode: self._num_requests = 1 else: - self._num_requests = self.auto_nreq() + self._num_requests = self.auto_num_requests() if hasattr(self, 'plugin'): del self.plugin @@ -552,7 +552,7 @@ def set_nireq(): if log_level: self.plugin.set_config({'VPU_LOG_LEVEL': log_level}) - def auto_nreq(self): + def auto_num_requests(self): concurrency_device = { 'CPU': 1, 'GPU': 1, From 3325cb8ab63345d5a461e48fe56428bf0a160556 Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Tue, 22 Oct 2019 16:12:47 +0300 Subject: [PATCH 171/927] FIX --- CONTRIBUTING.md | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 73541afe2f2..048e106aea2 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -29,7 +29,7 @@ Name your model in OMZ according to the following rules: - Use a name that is consistent with an original name, but complete match is not necessary - Use lowercase - Use `-`(preferable) or `_` as delimiters, for spaces are not allowed -- Include a suffix according to an original 
framework (see **`framework`** description in the [configuration file](#configuration-file) section for examples), if you add a reimplementation of an existing model in OMZ from another framework +- Add a suffix according to framework identifier (see **`framework`** description in the [configuration file](#configuration-file) section for examples), if the model is a reimplementation of an existing model from another framework This name will be used for downloading, converting, and other operations. Examples of model names: @@ -40,11 +40,12 @@ Examples of model names: Place your files as shown in the table below: -File | Directory +File | Destination ---|--- -configuration file
documentation file |`models/public/` -validation configuration file|`tools/accuracy_checker/configs` -demo file|`demos` +configuration file | `models/public//.yml` +documentation file | `models/public//.md` +validation configuration file|`tools/accuracy_checker/configs/.yml` +demo|`demos/`
or
`demos/python_demos/` ### Tests @@ -74,7 +75,7 @@ Description of the model. Must match with the description from the model [docume **`task_type`** -[Model task class](tools/downloader/README.md#model-information-dumper-usage). If there is no task class of your model, add a new one to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. +[Model task type](tools/downloader/README.md#model-information-dumper-usage). If there is no task class of your model, add a new one to the list `KNOWN_TASK_TYPES` of the [tools/downloader/common.py](tools/downloader/common.py) file. **`files`** @@ -180,16 +181,16 @@ Deep Learning Inference Engine (IE) supports models in the Intermediate Represen ## Demo -A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](https://docs.openvinotoolkit.org/latest/_demos_README.html) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). +A demo shows the main idea of how to infer a model using IE. If your model solves one of the tasks supported by the Open Model Zoo, try to find an appropriate option from [demos](demos/README.md) or [samples](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html). Otherwise, you must provide your own demo (C++ or Python). The demo's name should end with `_demo` suffix to follow the convention of the project. Demos are required to support the following keys: - `-i ""`: Required. Input to process. - - `-m ""`: Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m`. - - `-d ""`: Optional. Default is CPU. - - `--no_show`: Optional. Do not visualize inference results. + - `-m ""`: Required. Path to an .xml file with a trained model. If the demo uses several models at the same time, use other keys prefixed with `-m_`. + - `-d ""`: Optional. Specifies a target device to infer on. CPU, GPU, FPGA, HDDL or MYRIAD is acceptable. Default must be CPU. If the demo uses several models at the same time, use keys prefixed with `d_` (just like keys `m_*` above) to specify device for each model. + - `-no_show`: Optional. Do not visualize inference results. > **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. Example: `--no-show`. @@ -199,15 +200,15 @@ If you add a new demo, provide autotesting support as well: - add demo launch parameters in [demos/tests/cases.py](demos/tests/cases.py) - prepare list of input images in [demos/tests/image_sequences.py](demos/tests/image_sequences.py) -Update [demos' README.md](demos/README.md) adding your demo to the list. +Add `reamde.md` file, which describes demo usage. Update [demos' README.md](demos/README.md) adding your demo to the list. ## Accuracy Validation -Accuracy validation can be performed by the [Accuracy Checker](./tools/accuracy_checker) tool. This tool can use either IE to run a converted model, or an original framework to run an original model. Accuracy Checker supports lots of datasets, metrics and preprocessing options, what simplifies validation if a task is supported by the tool. You only need to create a configuration file that contains necessary parameters for accuracy validation (specify a dataset and annotation, pre- and post-processing parameters, accuracy metrics to compute and so on). 
For details, refer to [Testing new models](./tools/accuracy_checker#testing-new-models). +Accuracy validation can be performed by the [Accuracy Checker](./tools/accuracy_checker) tool. This tool can use either IE to run a converted model, or an original framework to run an original model. Accuracy Checker supports lots of datasets, metrics and preprocessing options, which simplifies validation if a task is supported by the tool. You only need to create a configuration file that contains necessary parameters for accuracy validation (specify a dataset and annotation, pre- and post-processing parameters, accuracy metrics to compute and so on). For details, refer to [Testing new models](./tools/accuracy_checker#testing-new-models). If a model uses a dataset which is not supported by the Accuracy Checker, you also must provide the license and the link to it and mention it in the PR description. -When the configuration file is ready, you must run the Accuracy Checker to obtain metric results. If they match your results, that means conversion was successful and the Accuracy Checker fully supports your model, metric and dataset. Otherwise, recheck the[conversion](#model-conversion) parameters or the validation configuration file. +When the configuration file is ready, you must run the Accuracy Checker to obtain metric results. If they match your results, that means conversion was successful and the Accuracy Checker fully supports your model, metric and dataset. Otherwise, recheck the [conversion](#model-conversion) parameters or the validation configuration file. ### Example @@ -258,7 +259,7 @@ models: - name: accuracy@top1 type: accuracy top_k: 1 - - name: acciracy@top5 + - name: accuracy@top5 type: accuracy top_k: 5 ``` From 20e5de7f35ae2f90dcc63b99d7bab6b205bf685b Mon Sep 17 00:00:00 2001 From: eizamaliev Date: Tue, 22 Oct 2019 16:40:47 +0300 Subject: [PATCH 172/927] FIX --- CONTRIBUTING.md | 47 +++++++++++++---------------------------------- 1 file changed, 13 insertions(+), 34 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 048e106aea2..67576550826 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -67,7 +67,7 @@ Your PR may be rejected in some cases, for example: The model configuration file contains information about model: what it is, how to download it, and how to convert it to the IR format. This information must be specified in the `model.yml` file that must be located in the model subfolder. -Refer to the detailed descriptions of each file provided below. +The detailed descriptions of file entries provided below. **`description`** @@ -127,8 +127,11 @@ Conversion parameters (learn more in the [Model conversion](#model-conversion) s - --output=prob - --input_model=$conv_dir/googlenet-v3.onnx ``` + > **NOTE:** Do not specify `framework`, `data_type`, `model_name` and `output_dir`, since they are deduced automatically. +> **NOTE:** `$dl_dir` used to substitute download directory (key `-o` or `--output_dir` in `downloader.py` script) and `$conv_dir` used to substitute converter directory (key `-o` or `--output_dir` in `converter.py` script) + **`framework`** Framework of the original model. Examples: `caffe`, `dldt`, `mxnet`, `pytorch`, `tf`. @@ -139,7 +142,7 @@ Path to the model license. ### Example -This example shows how to download the [classification model DenseNet-121*](models/public/densenet-121-tf/model.yml) pretrained in TensorFlow\* from Google Drive\* as an archive. 
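The conversion notes above describe `$dl_dir` and `$conv_dir` as placeholders for the download and conversion directories. A minimal sketch of that kind of substitution, reusing the `googlenet-v3` arguments shown earlier; the `expand_conversion_args` helper and the directory values here are illustrative, not the downloader's actual implementation:

```python
# Sketch: expand $dl_dir/$conv_dir placeholders in model.yml conversion arguments.
from string import Template


def expand_conversion_args(args, download_dir, conversion_dir):
    mapping = {'dl_dir': download_dir, 'conv_dir': conversion_dir}
    return [Template(arg).safe_substitute(mapping) for arg in args]


args = [
    '--reverse_input_channels',
    '--input_shape=[1,299,299,3]',
    '--input=data',
    '--mean_values=data[127.5]',
    '--scale_values=data[127.5]',
    '--output=prob',
    '--input_model=$conv_dir/googlenet-v3.onnx',
]
print(expand_conversion_args(args, 'public/googlenet-v3', 'public/googlenet-v3'))
# last element becomes '--input_model=public/googlenet-v3/googlenet-v3.onnx'
```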
+This example shows how to download the classification model [DenseNet-121*](models/public/densenet-121-tf/model.yml) pretrained in TensorFlow\* from Google Drive\* as an archive. ``` description: >- @@ -192,7 +195,7 @@ Demos are required to support the following keys: - `-d ""`: Optional. Specifies a target device to infer on. CPU, GPU, FPGA, HDDL or MYRIAD is acceptable. Default must be CPU. If the demo uses several models at the same time, use keys prefixed with `d_` (just like keys `m_*` above) to specify device for each model. - `-no_show`: Optional. Do not visualize inference results. -> **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. Example: `--no-show`. +> **TIP**: For Python, it is preferable to use `-` instead of `_` as word separators. You can also add any other necessary parameters. @@ -212,39 +215,23 @@ When the configuration file is ready, you must run the Accuracy Checker to obtai ### Example -This example uses one of the files from `tools/accuracy_checker/configs` — validation configuration file for [AlexNet](tools/accuracy_checker/configs/alexnet.yml)\*: +This example uses one of the files from `tools/accuracy_checker/configs` — validation configuration file for [DenseNet-121](tools/accuracy_checker/configs/densenet-121-tf.yml)\* from TensorFlow\*: ``` models: - - name: alexnet-cf - launchers: - - framework: caffe - model: public/alexnet/alexnet.prototxt - weights: public/alexnet/alexnet.caffemodel - adapter: classification - datasets: - - name: imagenet_1000_classes - preprocessing: - - type: resize - size: 256 - - type: crop - size: 227 - - type: normalization - mean: 104, 117, 123 - - - name: alexnet + - name: densenet-121-tf launchers: - framework: dlsdk tags: - FP32 - model: public/alexnet/FP32/alexnet.xml - weights: public/alexnet/FP32/alexnet.bin + model: public/densenet-121-tf/FP32/densenet-121-tf.xml + weights: public/densenet-121-tf/FP32/densenet-121-tf.bin adapter: classification - framework: dlsdk tags: - FP16 - model: public/alexnet/FP16/alexnet.xml - weights: public/alexnet/FP16/alexnet.bin + model: public/densenet-121-tf/FP16/densenet-121-tf.xml + weights: public/densenet-121-tf/FP16/densenet-121-tf.bin adapter: classification datasets: @@ -253,15 +240,7 @@ models: - type: resize size: 256 - type: crop - size: 227 - - metrics: - - name: accuracy@top1 - type: accuracy - top_k: 1 - - name: accuracy@top5 - type: accuracy - top_k: 5 + size: 224 ``` From 3fdaa3af84baf60390dacbd3e588a394abe7217f Mon Sep 17 00:00:00 2001 From: Aung Naing Date: Thu, 10 Oct 2019 22:12:24 -0700 Subject: [PATCH 173/927] Fix error thrown from SetBatch in VPUs VPU calls base implementation which is to throw error for SetBatch. 
https://github.com/opencv/open_model_zoo/issues/505 Signed-off-by: Aung Naing --- demos/pedestrian_tracker_demo/main.cpp | 13 ++++++++++++- demos/pedestrian_tracker_demo/src/cnn.cpp | 4 +++- 2 files changed, 15 insertions(+), 2 deletions(-) diff --git a/demos/pedestrian_tracker_demo/main.cpp b/demos/pedestrian_tracker_demo/main.cpp index a01528c6636..f98d2b70486 100644 --- a/demos/pedestrian_tracker_demo/main.cpp +++ b/demos/pedestrian_tracker_demo/main.cpp @@ -50,7 +50,18 @@ CreatePedestrianTracker(const std::string& reid_model, if (!reid_model.empty() && !reid_weights.empty()) { CnnConfig reid_config(reid_model, reid_weights); - reid_config.max_batch_size = 16; + reid_config.max_batch_size = 16; // defaulting to 16 + + try { + if (ie.GetConfig(deviceName, CONFIG_KEY(DYN_BATCH_ENABLED)).as() != PluginConfigParams::YES) { + reid_config.max_batch_size = 1; + std::cerr << "Dynamic batch is not supported for " << deviceName << ". Fall back to batch 1." << std::endl; + } + } + catch(const InferenceEngine::details::InferenceEngineException& e) { + reid_config.max_batch_size = 1; + std::cerr << e.what() << " for " << deviceName << ". Fall back to batch 1." << std::endl; + } std::shared_ptr descriptor_strong = std::make_shared(reid_config, ie, deviceName); diff --git a/demos/pedestrian_tracker_demo/src/cnn.cpp b/demos/pedestrian_tracker_demo/src/cnn.cpp index 5056c0531e5..c1ad08e3e57 100644 --- a/demos/pedestrian_tracker_demo/src/cnn.cpp +++ b/demos/pedestrian_tracker_demo/src/cnn.cpp @@ -77,7 +77,9 @@ void CnnBase::InferBatch( matU8ToBlob(frames[batch_i + b], input_blob_, b); } - infer_request_.SetBatch(current_batch_size); + if (1 != batch_size) { + infer_request_.SetBatch(current_batch_size); + } infer_request_.Infer(); fetch_results(outputs_, current_batch_size); From ae4102dd54b9ca61941b58e722978c2542984371 Mon Sep 17 00:00:00 2001 From: Roman Donchenko Date: Tue, 22 Oct 2019 19:09:38 +0300 Subject: [PATCH 174/927] demos/multi_channel: update the program names in the documentation --- demos/multi_channel/face_detection_demo/README.md | 8 ++++---- demos/multi_channel/face_detection_demo/main.cpp | 2 +- demos/multi_channel/human_pose_estimation_demo/README.md | 8 ++++---- demos/multi_channel/human_pose_estimation_demo/main.cpp | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/demos/multi_channel/face_detection_demo/README.md b/demos/multi_channel/face_detection_demo/README.md index cb3d6a005b5..ddd5e1b0b56 100644 --- a/demos/multi_channel/face_detection_demo/README.md +++ b/demos/multi_channel/face_detection_demo/README.md @@ -23,9 +23,9 @@ On the start-up, the application reads command line parameters and loads the spe Running the application with the `-h` option yields the following usage message: ```sh -./multi-channel-face-detection-demo -h +./multi_channel_face_detection_demo -h -multi-channel-face-detection-demo [OPTION] +multi_channel_face_detection_demo [OPTION] Options: -h Print a usage message @@ -56,13 +56,13 @@ To run the demo, you can use public or pre-trained models. 
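The pedestrian tracker patch above keeps the re-identification batch at 16 only when the device reports dynamic batching support, and falls back to batch 1 otherwise. A logic-level sketch of that decision in Python, with `query_config` standing in for the plugin configuration query (the real demo uses the C++ `GetConfig` call shown above):

```python
# Sketch of the batch-size fallback: keep the preferred re-id batch only if the
# device says dynamic batching is enabled; any failure or "NO" answer means 1.
def choose_reid_batch_size(query_config, device_name, preferred_batch=16):
    try:
        dyn_batch_enabled = query_config(device_name, 'DYN_BATCH_ENABLED')
    except RuntimeError as error:
        print('{} for {}. Fall back to batch 1.'.format(error, device_name))
        return 1
    if dyn_batch_enabled != 'YES':
        print('Dynamic batch is not supported for {}. Fall back to batch 1.'.format(device_name))
        return 1
    return preferred_batch
```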
To download the pre-t For example, to run the demo with the pre-trained face detection model on FPGA with fallback on CPU, with one single camera, use the following command: ```sh -./multi-channel-face-detection-demo -m face-detection-retail-0004.xml +./multi_channel_face_detection_demo -m face-detection-retail-0004.xml -l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -nc 1 ``` To run the demo using two recorded video files, use the following command: ```sh -./multi-channel-face-detection-demo -m face-detection-retail-0004.xml +./multi_channel_face_detection_demo -m face-detection-retail-0004.xml -l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -i /path/to/file1 /path/to/file2 ``` Video files will be processed repeatedly. diff --git a/demos/multi_channel/face_detection_demo/main.cpp b/demos/multi_channel/face_detection_demo/main.cpp index e5ce0f87086..b082e994f07 100644 --- a/demos/multi_channel/face_detection_demo/main.cpp +++ b/demos/multi_channel/face_detection_demo/main.cpp @@ -45,7 +45,7 @@ namespace { */ void showUsage() { std::cout << std::endl; - std::cout << "multichannel_face_detection [OPTION]" << std::endl; + std::cout << "multi_channel_face_detection_demo [OPTION]" << std::endl; std::cout << "Options:" << std::endl; std::cout << std::endl; std::cout << " -h " << help_message << std::endl; diff --git a/demos/multi_channel/human_pose_estimation_demo/README.md b/demos/multi_channel/human_pose_estimation_demo/README.md index bec055f1d54..97956e757a8 100644 --- a/demos/multi_channel/human_pose_estimation_demo/README.md +++ b/demos/multi_channel/human_pose_estimation_demo/README.md @@ -23,8 +23,8 @@ On the start-up, the application reads command line parameters and loads the spe Running the application with the `-h` option yields the following usage message: ```sh -./multi-channel-human-pose-estimation-demo -h -multi-channel-human-pose-estimation-demo [OPTION] +./multi_channel_human_pose_estimation_demo -h +multi_channel_human_pose_estimation_demo [OPTION] Options: -h Print a usage message -m "" Required. Path to an .xml file with a trained model. @@ -54,13 +54,13 @@ To run the demo, you can use public or pre-trained models. 
To download the pre-t For example, to run the demo with the pre-trained Human Pose Estimation model on FPGA with fallback on CPU with one camera, use the following command: ```sh -./multi-channel-human-pose-estimation-demo -m /human-pose-estimation-0001.xml +./multi_channel_human_pose_estimation_demo -m /human-pose-estimation-0001.xml -l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -nc 1 ``` To run the demo using two recorded video files, use the following command: ```sh -./multi-channel-human-pose-estimation-demo -m /human-pose-estimation-0001.xml +./multi_channel_human_pose_estimation_demo -m /human-pose-estimation-0001.xml -l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -i /path/to/file1 /path/to/file2 ``` diff --git a/demos/multi_channel/human_pose_estimation_demo/main.cpp b/demos/multi_channel/human_pose_estimation_demo/main.cpp index b5450978602..f0e9a09619a 100644 --- a/demos/multi_channel/human_pose_estimation_demo/main.cpp +++ b/demos/multi_channel/human_pose_estimation_demo/main.cpp @@ -61,7 +61,7 @@ namespace { */ void showUsage() { std::cout << std::endl; - std::cout << "multi-channel-human-pose-estimation-demo [OPTION]" << std::endl; + std::cout << "multi_channel_human_pose_estimation_demo [OPTION]" << std::endl; std::cout << "Options:" << std::endl; std::cout << std::endl; std::cout << " -h " << help_message << std::endl; From f2c01233746cde80aeeda2660507b1897bedf6ac Mon Sep 17 00:00:00 2001 From: Roman Donchenko Date: Tue, 22 Oct 2019 19:28:01 +0300 Subject: [PATCH 175/927] demos: remove references to libcpu_extension.so from the documentation It will be gone in the next IE version. --- demos/multi_channel/face_detection_demo/README.md | 6 ++---- demos/multi_channel/human_pose_estimation_demo/README.md | 6 ++---- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/demos/multi_channel/face_detection_demo/README.md b/demos/multi_channel/face_detection_demo/README.md index ddd5e1b0b56..ef75af6616e 100644 --- a/demos/multi_channel/face_detection_demo/README.md +++ b/demos/multi_channel/face_detection_demo/README.md @@ -56,14 +56,12 @@ To run the demo, you can use public or pre-trained models. To download the pre-t For example, to run the demo with the pre-trained face detection model on FPGA with fallback on CPU, with one single camera, use the following command: ```sh -./multi_channel_face_detection_demo -m face-detection-retail-0004.xml --l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -nc 1 +./multi_channel_face_detection_demo -m face-detection-retail-0004.xml -d HETERO:FPGA,CPU -nc 1 ``` To run the demo using two recorded video files, use the following command: ```sh -./multi_channel_face_detection_demo -m face-detection-retail-0004.xml --l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -i /path/to/file1 /path/to/file2 +./multi_channel_face_detection_demo -m face-detection-retail-0004.xml -d HETERO:FPGA,CPU -i /path/to/file1 /path/to/file2 ``` Video files will be processed repeatedly. diff --git a/demos/multi_channel/human_pose_estimation_demo/README.md b/demos/multi_channel/human_pose_estimation_demo/README.md index 97956e757a8..d5424ddd102 100644 --- a/demos/multi_channel/human_pose_estimation_demo/README.md +++ b/demos/multi_channel/human_pose_estimation_demo/README.md @@ -54,14 +54,12 @@ To run the demo, you can use public or pre-trained models. 
To download the pre-t For example, to run the demo with the pre-trained Human Pose Estimation model on FPGA with fallback on CPU with one camera, use the following command: ```sh -./multi_channel_human_pose_estimation_demo -m /human-pose-estimation-0001.xml --l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -nc 1 +./multi_channel_human_pose_estimation_demo -m /human-pose-estimation-0001.xml -d HETERO:FPGA,CPU -nc 1 ``` To run the demo using two recorded video files, use the following command: ```sh -./multi_channel_human_pose_estimation_demo -m /human-pose-estimation-0001.xml --l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -i /path/to/file1 /path/to/file2 +./multi_channel_human_pose_estimation_demo -m /human-pose-estimation-0001.xml -d HETERO:FPGA,CPU -i /path/to/file1 /path/to/file2 ``` Video files will be processed repeatedly. From 2099db99398596c590b750ff98af23e5f725ee68 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 23 Oct 2019 10:58:37 +0300 Subject: [PATCH 176/927] AC: adopt config reader on case if local config path is string (#541) --- tools/accuracy_checker/accuracy_checker/config/config_reader.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index 09f71634ccc..f56bbe69bcc 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -88,7 +88,7 @@ def _read_configs(arguments): local_config = read_yaml(arguments.config) definitions = local_config.get('global_definitions') if definitions: - definitions = read_yaml(arguments.config.parent / definitions) + definitions = read_yaml(Path(arguments.config).parent / definitions) global_config = read_yaml(arguments.definitions) if arguments.definitions else definitions return global_config, local_config From 9df1380ef30cf6c96296ffb50283829c1a49a8ab Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 23 Oct 2019 11:41:13 +0300 Subject: [PATCH 177/927] AC: move label map check from configure (#542) --- .../accuracy_checker/adapters/text_detection.py | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/tools/accuracy_checker/accuracy_checker/adapters/text_detection.py b/tools/accuracy_checker/accuracy_checker/adapters/text_detection.py index 05ffa80b55c..41d923589ea 100644 --- a/tools/accuracy_checker/accuracy_checker/adapters/text_detection.py +++ b/tools/accuracy_checker/accuracy_checker/adapters/text_detection.py @@ -667,11 +667,9 @@ class LPRAdapter(Adapter): __provider__ = 'lpr' prediction_types = (CharacterRecognitionPrediction,) - def configure(self): + def process(self, raw, identifiers=None, frame_meta=None): if not self.label_map: raise ConfigError('LPR adapter requires dataset label map for correct decoding.') - - def process(self, raw, identifiers=None, frame_meta=None): raw_output = self._extract_predictions(raw, frame_meta) predictions = raw_output[self.output_blob] result = [] From d243d6bdd437b2f3fa01bdcf0c2ebbaa6554995c Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 23 Oct 2019 11:48:47 +0300 Subject: [PATCH 178/927] added config for brain-tumor-segmnetation-0001 (#532) --- .../configs/brain-tumor-segmentation-0001.yml | 33 +++++++++++++++++++ .../accuracy_checker/dataset_definitions.yml | 11 +++++++ 2 files changed, 44 insertions(+) create mode 100644 tools/accuracy_checker/configs/brain-tumor-segmentation-0001.yml diff --git 
a/tools/accuracy_checker/configs/brain-tumor-segmentation-0001.yml b/tools/accuracy_checker/configs/brain-tumor-segmentation-0001.yml new file mode 100644 index 00000000000..dbf5bbd153e --- /dev/null +++ b/tools/accuracy_checker/configs/brain-tumor-segmentation-0001.yml @@ -0,0 +1,33 @@ +models: + - name: brain-tumor-segmentation-0001 + + launchers: + - framework: dlsdk + tags: + - FP32 + model: public/brain-tumor-segmentation-0001/FP32/brain-tumor-segmentation-0001.xml + weights: public/brain-tumor-segmentation-0001/FP32/brain-tumor-segmentation-0001.bin + adapter: + type: brain_tumor_segmentation + + - framework: dlsdk + tags: + - FP16 + device: GPU + model: public/brain-tumor-segmentation-0001/FP16/brain-tumor-segmentation-0001.xml + weights: public/brain-tumor-segmentation-0001/FP16/brain-tumor-segmentation-0001.bin + adapter: + type: brain_tumor_segmentation + + datasets: + - name: BraTS + + metrics: + # ground truth mean [0.9239, 0.7114, 0.8205, 0.7271] + # UNCROPPED: ground truth mean [0.9266, 0.7256, 0.8205, 0.7268] + # ground truth median [0.9316, 0.7714, 0.8535, 0.8456] + # UNCROPPED: ground truth median [0.9339, 0.7918, 0.8603, 0.8576] + - type: dice_index + median: True + presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index c3ae70d86ab..1fc64aa5935 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -489,3 +489,14 @@ datasets: data_source: ILSVRC2012_img_val annotation: handwritten_score_recognition.pickle dataset_meta: handwritten_score_recognition.json + + - name: BraTS + data_source: BraTS + reader: numpy_reader + annotation_conversion: + converter: brats_numpy + data_dir: BraTS + ids_file: BraTS/val_ids.p + labels_file: BraTS/labels + annotation: brats.pickle + dataset_meta: brats.json From 48e9d73390f6d83441935495cf295f3fd0217ad9 Mon Sep 17 00:00:00 2001 From: Katya Date: Wed, 23 Oct 2019 11:54:03 +0300 Subject: [PATCH 179/927] AC: configs for action recognition decoders and encoders (#529) --- .../accuracy_checker/config/config_reader.py | 3 +- .../action-recognition-0001-decoder.yml | 59 ++++++++++++++++ .../action-recognition-0001-encoder.yml | 68 +++++++++++++++++++ ...r-action-recognition-adas-0002-decoder.yml | 57 ++++++++++++++++ ...r-action-recognition-adas-0002-encoder.yml | 66 ++++++++++++++++++ ...sequential_action_recognition_evaluator.py | 11 +-- .../accuracy_checker/dataset_definitions.yml | 18 +++++ 7 files changed, 277 insertions(+), 5 deletions(-) create mode 100644 tools/accuracy_checker/configs/action-recognition-0001-decoder.yml create mode 100644 tools/accuracy_checker/configs/action-recognition-0001-encoder.yml create mode 100644 tools/accuracy_checker/configs/driver-action-recognition-adas-0002-decoder.yml create mode 100644 tools/accuracy_checker/configs/driver-action-recognition-adas-0002-encoder.yml diff --git a/tools/accuracy_checker/accuracy_checker/config/config_reader.py b/tools/accuracy_checker/accuracy_checker/config/config_reader.py index f56bbe69bcc..2e96cfb5f93 100644 --- a/tools/accuracy_checker/accuracy_checker/config/config_reader.py +++ b/tools/accuracy_checker/accuracy_checker/config/config_reader.py @@ -37,7 +37,8 @@ 'cpu_extensions': 'extensions', 'gpu_extensions': 'extensions', 'bitstream': 'bitstreams', - 'affinity_map': 'affinity_map' + 'affinity_map': 'affinity_map', + 'predictions': 'source' }, 'datasets': { 
'segmentation_masks_source': 'source', diff --git a/tools/accuracy_checker/configs/action-recognition-0001-decoder.yml b/tools/accuracy_checker/configs/action-recognition-0001-decoder.yml new file mode 100644 index 00000000000..b1c5e148703 --- /dev/null +++ b/tools/accuracy_checker/configs/action-recognition-0001-decoder.yml @@ -0,0 +1,59 @@ +evaluations: + - name: action-recognition-0001-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + predictions: kinetics/action-recognition-0001-encoder-predictions.pickle + + + decoder: + model: intel/action-recognition-0001-decoder/FP32/action-recognition-0001-decoder.xml + weights: intel/action-recognition-0001-decoder/FP32/action-recognition-0001-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP32 + + datasets: + - name: kinetics-400 + + preprocessing: + - type: resize + size: 224 + aspect_ratio_scale: fit_to_window + - type: crop + size: 224 + + metrics: + - type: clip_accuracy + presenter: print_vector + + - name: action-recognition-0001-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + predictions: kinetics/action-recognition-0001-encoder-predictions.pickle + + decoder: + model: intel/action-recognition-0001-decoder/FP16/action-recognition-0001-decoder.xml + weights: intel/action-recognition-0001-decoder/FP16/action-recognition-0001-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP16 + + datasets: + - name: kinetics-400 + + metrics: + - type: clip_accuracy + presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/action-recognition-0001-encoder.yml b/tools/accuracy_checker/configs/action-recognition-0001-encoder.yml new file mode 100644 index 00000000000..8f6ec3d460c --- /dev/null +++ b/tools/accuracy_checker/configs/action-recognition-0001-encoder.yml @@ -0,0 +1,68 @@ +evaluations: + - name: action-recognition-0001-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + model: intel/action-recognition-0001-encoder/FP32/action-recognition-0001-encoder.xml + weights: intel/action-recognition-0001-encoder/FP32/action-recognition-0001-encoder.bin + + + decoder: + model: intel/action-recognition-0001-decoder/FP32/action-recognition-0001-decoder.xml + weights: intel/action-recognition-0001-decoder/FP32/action-recognition-0001-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP32 + + datasets: + - name: kinetics-400 + + preprocessing: + - type: resize + size: 224 + aspect_ratio_scale: fit_to_window + - type: crop + size: 224 + + metrics: + - type: clip_accuracy + presenter: print_vector + + - name: action-recognition-0001-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + model: intel/action-recognition-0001-encoder/FP16/action-recognition-0001-encoder.xml + weights: intel/action-recognition-0001-encoder/FP16/action-recognition-0001-encoder.bin + + decoder: + model: intel/action-recognition-0001-decoder/FP16/action-recognition-0001-decoder.xml + weights: 
intel/action-recognition-0001-decoder/FP16/action-recognition-0001-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP16 + + datasets: + - name: kinetics-400 + + preprocessing: + - type: resize + size: 224 + aspect_ratio_scale: fit_to_window + - type: crop + size: 224 + + metrics: + - type: clip_accuracy + presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/driver-action-recognition-adas-0002-decoder.yml b/tools/accuracy_checker/configs/driver-action-recognition-adas-0002-decoder.yml new file mode 100644 index 00000000000..1f14a5f3a41 --- /dev/null +++ b/tools/accuracy_checker/configs/driver-action-recognition-adas-0002-decoder.yml @@ -0,0 +1,57 @@ +evaluations: + - name: driver-action-recognition-adas-0002-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + predictions: kinetics/driver-action-recognition-encoder-predictions.pickle + + decoder: + model: intel/driver-action-recognition-adas-0002-decoder/FP32/driver-action-recognition-adas-0002-decoder.xml + weights: intel/driver-action-recognition-adas-0002-decoder/FP32/driver-action-recognition-adas-0002-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP32 + + datasets: + - name: driver_action_recognition_dataset + + preprocessing: + - type: resize + size: 224 + aspect_ratio_scale: fit_to_window + - type: crop + size: 224 + + metrics: + - type: clip_accuracy + + - name: driver-action-recognition-adas-0002-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + predictions: kinetics/driver-action-recognition-encoder-predictions.pickle + + decoder: + model: intel/driver-action-recognition-adas-0002-decoder/FP16/driver-action-recognition-adas-0002-decoder.xml + weights: intel/driver-action-recognition-adas-0002-decoder/FP16/driver-action-recognition-adas-0002-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP16 + + datasets: + - name: driver_action_recognition_dataset + + metrics: + - type: clip_accuracy + presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/configs/driver-action-recognition-adas-0002-encoder.yml b/tools/accuracy_checker/configs/driver-action-recognition-adas-0002-encoder.yml new file mode 100644 index 00000000000..14dc215ae59 --- /dev/null +++ b/tools/accuracy_checker/configs/driver-action-recognition-adas-0002-encoder.yml @@ -0,0 +1,66 @@ +evaluations: + - name: driver-action-recognition-adas-0002-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + model: intel/driver-action-recognition-adas-0002-encoder/FP32/driver-action-recognition-adas-0002-encoder.xml + weights: intel/driver-action-recognition-adas-0002-encoder/FP32/driver-action-recognition-adas-0002-encoder.bin + + decoder: + model: intel/driver-action-recognition-adas-0002-decoder/FP32/driver-action-recognition-adas-0002-decoder.xml + weights: intel/driver-action-recognition-adas-0002-decoder/FP32/driver-action-recognition-adas-0002-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - 
FP32 + + datasets: + - name: driver_action_recognition_dataset + + preprocessing: + - type: resize + size: 224 + aspect_ratio_scale: fit_to_window + - type: crop + size: 224 + + metrics: + - type: clip_accuracy + + - name: driver-action-recognition-adas-0002-encoder + module: custom_evaluators.sequential_action_recognition_evaluator.SequentialActionRecognitionEvaluator + module_config: + network_info: + encoder: + model: intel/driver-action-recognition-adas-0002-encoder/FP16/driver-action-recognition-adas-0002-encoder.xml + weights: intel/driver-action-recognition-adas-0002-encoder/FP16/driver-action-recognition-adas-0002-encoder.bin + + decoder: + model: intel/driver-action-recognition-adas-0002-decoder/FP16/driver-action-recognition-adas-0002-decoder.xml + weights: intel/driver-action-recognition-adas-0002-decoder/FP16/driver-action-recognition-adas-0002-decoder.bin + num_processing_frames: 16 + adapter: classification + + launchers: + - framework: dlsdk + tags: + - FP16 + + datasets: + - name: driver_action_recognition_dataset + + preprocessing: + - type: resize + size: 224 + aspect_ratio_scale: fit_to_window + - type: crop + size: 224 + + metrics: + - type: clip_accuracy + presenter: print_vector +global_definitions: ../dataset_definitions.yml diff --git a/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py b/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py index d4da7a68560..cc07cd4f49e 100644 --- a/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py +++ b/tools/accuracy_checker/custom_evaluators/sequential_action_recognition_evaluator.py @@ -197,10 +197,9 @@ def release(self): def save_encoder_predictions(self): if self._encoder_predictions is not None: - prediction_file = self.network_info['encoder'].get('predictions', Path('encoder_predictions.pickle')) + prediction_file = Path(self.network_info['encoder'].get('predictions', 'encoder_predictions.pickle')) with prediction_file.open('wb') as file: - for representation in self._encoder_predictions: - pickle.dump(representation, file) + pickle.dump(self._encoder_predictions, file) class EncoderModelDLSDKL(BaseModel): @@ -243,7 +242,11 @@ def __init__(self, network_info, launcher): model_bin = str(network_info['weights']) self.network = launcher.create_ie_network(model_xml, model_bin) - self.exec_network = launcher.plugin.load(self.network) + if hasattr(launcher, 'plugin'): + self.exec_network = launcher.plugin.load(self.network) + else: + launcher.load_network(self.network) + self.exec_network = launcher.exec_network self.input_blob = next(iter(self.network.inputs)) self.output_blob = next(iter(self.network.outputs)) self.adapter = create_adapter('classification') diff --git a/tools/accuracy_checker/dataset_definitions.yml b/tools/accuracy_checker/dataset_definitions.yml index 1fc64aa5935..0a871e198be 100644 --- a/tools/accuracy_checker/dataset_definitions.yml +++ b/tools/accuracy_checker/dataset_definitions.yml @@ -490,6 +490,24 @@ datasets: annotation: handwritten_score_recognition.pickle dataset_meta: handwritten_score_recognition.json + - name: kinetics-400 + data_source: kinetics/frames_val + annotation_conversion: + converter: clip_action_recognition + annotation_file: kinetics/kinetics_400.json + data_dir: kinetics/frames_val + annotation: kinetics_action_recognition.pickle + dataset_meta: kinetics_action_recognition.json + + - name: driver_action_recognition_dataset + data_source: kinetics/frames_val + annotation_conversion: + 
converter: clip_action_recognition + annotation_file: kinetics/driver_action_recognition.json + data_dir: kinetics/frames_val + annotation: driver_action_recognition.pickle + dataset_meta: driver_action_recognition.json + - name: BraTS data_source: BraTS reader: numpy_reader From 89418b6ce9a299a10c2763935e3941296d088812 Mon Sep 17 00:00:00 2001 From: "Yen-Chang, Feng" Date: Wed, 25 Sep 2019 00:24:08 +0800 Subject: [PATCH 180/927] add yolo info --- demos/multi_channel/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/demos/multi_channel/README.md b/demos/multi_channel/README.md index 8fc2cbbb6b8..023130c2a2d 100644 --- a/demos/multi_channel/README.md +++ b/demos/multi_channel/README.md @@ -3,3 +3,4 @@ The demos provide an inference pipeline for two multi-channel scenarios: face detection and human pose estimation. For more information, refer to the corresponding pages: * [Multi-Channel Face Detection C++ Demo](./face_detection/README.md) * [Multi-Channel Human Pose Estimation C++ Demo](./human_pose_estimation/README.md) +* [Multi-Channel Yolo v3 C++ Demo](./yolo/README.md) From 3d04cd8796b826c4d520821be9e16b92d1fe9d74 Mon Sep 17 00:00:00 2001 From: "Yen-Chang, Feng" Date: Wed, 25 Sep 2019 00:32:05 +0800 Subject: [PATCH 181/927] update CMakeLists --- demos/multi_channel/CMakeLists.txt | 1 + demos/multi_channel/README.md | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/demos/multi_channel/CMakeLists.txt b/demos/multi_channel/CMakeLists.txt index e942cfdc9c5..feaeabd2752 100644 --- a/demos/multi_channel/CMakeLists.txt +++ b/demos/multi_channel/CMakeLists.txt @@ -23,3 +23,4 @@ endif() add_subdirectory(common) add_subdirectory(face_detection_demo) add_subdirectory(human_pose_estimation_demo) +add_subdirectory(yolo_v3) diff --git a/demos/multi_channel/README.md b/demos/multi_channel/README.md index 023130c2a2d..9a8015f894b 100644 --- a/demos/multi_channel/README.md +++ b/demos/multi_channel/README.md @@ -3,4 +3,4 @@ The demos provide an inference pipeline for two multi-channel scenarios: face detection and human pose estimation. 
For more information, refer to the corresponding pages: * [Multi-Channel Face Detection C++ Demo](./face_detection/README.md) * [Multi-Channel Human Pose Estimation C++ Demo](./human_pose_estimation/README.md) -* [Multi-Channel Yolo v3 C++ Demo](./yolo/README.md) +* [Multi-Channel Yolo v3 C++ Demo](./yolo_v3/README.md) From 3940b1a9daf1e2ac338fade22feb1bfc2fd08226 Mon Sep 17 00:00:00 2001 From: "Yen-Chang, Feng" Date: Wed, 25 Sep 2019 00:37:46 +0800 Subject: [PATCH 182/927] patch for support yolo v3 --- demos/multi_channel/common/graph.cpp | 5 +++-- demos/multi_channel/common/graph.hpp | 5 ++++- demos/multi_channel/common/multichannel_params.hpp | 3 +++ 3 files changed, 10 insertions(+), 3 deletions(-) diff --git a/demos/multi_channel/common/graph.cpp b/demos/multi_channel/common/graph.cpp index 5665b512e0d..9d0e063866d 100644 --- a/demos/multi_channel/common/graph.cpp +++ b/demos/multi_channel/common/graph.cpp @@ -38,7 +38,7 @@ void loadImgToIEGraph(const cv::Mat& img, size_t batch, void* ieBuffer) { } // namespace void IEGraph::initNetwork(const std::string& deviceName) { - InferenceEngine::CNNNetReader netReader; + // InferenceEngine::CNNNetReader netReader; netReader.ReadNetwork(modelPath); netReader.ReadWeights(weightsPath); @@ -235,7 +235,8 @@ std::vector > IEGraph::getBatchData(cv::Size frameSi } if (nullptr != req && InferenceEngine::OK == req->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY)) { - auto detections = postprocessing(req, outputDataBlobNames, frameSize); + // auto detections = postprocessing(req, outputDataBlobNames, frameSize); + auto detections = postprocessing(req, outputDataBlobNames, frameSize, netReader); for (decltype(detections.size()) i = 0; i < detections.size(); i ++) { vframes[i]->detections = std::move(detections[i]); } diff --git a/demos/multi_channel/common/graph.hpp b/demos/multi_channel/common/graph.hpp index 19c2f0cef0a..b3ba38ad206 100644 --- a/demos/multi_channel/common/graph.hpp +++ b/demos/multi_channel/common/graph.hpp @@ -75,7 +75,8 @@ class IEGraph{ using GetterFunc = std::function; GetterFunc getter; - using PostprocessingFunc = std::function(InferenceEngine::InferRequest::Ptr, const std::vector&, cv::Size)>; + // using PostprocessingFunc = std::function(InferenceEngine::InferRequest::Ptr, const std::vector&, cv::Size)>; + using PostprocessingFunc = std::function(InferenceEngine::InferRequest::Ptr, const std::vector&, cv::Size, InferenceEngine::CNNNetReader netReader)>; PostprocessingFunc postprocessing; std::thread getterThread; @@ -118,5 +119,7 @@ class IEGraph{ Stats getStats() const; void printPerformanceCounts(std::string fullDeviceName); + + InferenceEngine::CNNNetReader netReader; }; diff --git a/demos/multi_channel/common/multichannel_params.hpp b/demos/multi_channel/common/multichannel_params.hpp index f3a24ca8d5c..1f360682bba 100644 --- a/demos/multi_channel/common/multichannel_params.hpp +++ b/demos/multi_channel/common/multichannel_params.hpp @@ -14,6 +14,9 @@ static const char help_message[] = "Print a usage message"; /// @brief Message for model argument static const char face_detection_model_message[] = "Required. Path to an .xml file with a trained model."; +/// @brief Message for model argument +static const char yolo_model_message[] = "Required. Path to an .xml file with a trained model."; + /// @brief Message for assigning face detection calculation to a device static const char target_device_message[] = "Optional. Specify the target device for a network (the list of available devices is shown below). 
" \ "Default value is CPU. Use \"-d HETERO:\" format to specify HETERO plugin. " \ From f2f3913ae488065172315921c06879eb6f40bf2b Mon Sep 17 00:00:00 2001 From: "Yen-Chang, Feng" Date: Wed, 25 Sep 2019 00:44:25 +0800 Subject: [PATCH 183/927] upload source code --- .../multichannel_demo/yolo_v3/CMakeLists.txt | 77 +++ demos/multichannel_demo/yolo_v3/README.md | 112 ++++ demos/multichannel_demo/yolo_v3/main.cpp | 584 ++++++++++++++++++ .../yolo_v3/multichannel_yolo_v3_params.hpp | 16 + 4 files changed, 789 insertions(+) create mode 100644 demos/multichannel_demo/yolo_v3/CMakeLists.txt create mode 100644 demos/multichannel_demo/yolo_v3/README.md create mode 100644 demos/multichannel_demo/yolo_v3/main.cpp create mode 100644 demos/multichannel_demo/yolo_v3/multichannel_yolo_v3_params.hpp diff --git a/demos/multichannel_demo/yolo_v3/CMakeLists.txt b/demos/multichannel_demo/yolo_v3/CMakeLists.txt new file mode 100644 index 00000000000..7bb82914c06 --- /dev/null +++ b/demos/multichannel_demo/yolo_v3/CMakeLists.txt @@ -0,0 +1,77 @@ +# Copyright (C) 2018-2019 Intel Corporation + +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at + +# http://www.apache.org/licenses/LICENSE-2.0 + +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set(TARGET_NAME "multi-channel-yolo-v3-demo") + +if( BUILD_DEMO_NAME AND NOT ${BUILD_DEMO_NAME} STREQUAL ${TARGET_NAME} ) + message(STATUS "DEMO ${TARGET_NAME} SKIPPED") + return() +endif() + +# Find OpenCV components if exist +find_package(OpenCV COMPONENTS highgui QUIET) +if(NOT(OpenCV_FOUND)) + message(WARNING "OPENCV is disabled or not found, " ${TARGET_NAME} " skipped") + return() +endif() + +file (GLOB MAIN_SRC + ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp + ) + +file (GLOB MAIN_HEADERS + ${CMAKE_CURRENT_SOURCE_DIR}/*.hpp + ) + +# Create named folders for the sources within the .vcproj +# Empty name lists them directly under the .vcproj +source_group("src" FILES ${MAIN_SRC}) +source_group("include" FILES ${MAIN_HEADERS}) + +# Create library file from sources. 
+add_executable(${TARGET_NAME} ${MAIN_SRC} ${MAIN_HEADERS})
+
+set_target_properties(${TARGET_NAME} PROPERTIES
+    POSITION_INDEPENDENT_CODE ON
+    COMPILE_PDB_NAME ${TARGET_NAME})
+
+if(MULTICHANNEL_DEMO_USE_TBB)
+    find_package(TBB REQUIRED tbb)
+    target_link_libraries(${TARGET_NAME} ${TBB_IMPORTED_TARGETS})
+    target_compile_definitions(${TARGET_NAME} PRIVATE
+        USE_TBB=1
+        __TBB_ALLOW_MUTABLE_FUNCTORS=1)
+
+    if(FALSE) # disable task isolation for now due to bugs in tbb
+        target_compile_definitions(${TARGET_NAME} PRIVATE
+            TBB_PREVIEW_TASK_ISOLATION=1
+            TBB_TASK_ISOLATION=1)
+    endif()
+endif()
+
+target_link_libraries(${TARGET_NAME} IE::ie_cpu_extension ${InferenceEngine_LIBRARIES} gflags ${OpenCV_LIBRARIES} common)
+
+if(UNIX)
+    target_link_libraries( ${TARGET_NAME} pthread)
+endif()
+
+if(COMMAND add_cpplint_target)
+    add_cpplint_target(${TARGET_NAME}_cpplint FOR_TARGETS ${TARGET_NAME})
+endif()
+
+if(NOT TARGET ie_samples)
+    add_custom_target(ie_samples ALL)
+endif()
+
+add_dependencies(ie_samples ${TARGET_NAME})
diff --git a/demos/multichannel_demo/yolo_v3/README.md b/demos/multichannel_demo/yolo_v3/README.md
new file mode 100644
index 00000000000..df33abf000f
--- /dev/null
+++ b/demos/multichannel_demo/yolo_v3/README.md
@@ -0,0 +1,112 @@
+# Multi-Channel Yolo v3 C++ Demo
+
+This demo provides an inference pipeline for multi-channel YOLO v3 object detection. The demo uses the YOLO v3 Object Detection network. You can follow [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page to convert the YOLO v3 and tiny YOLO v3 models into IR format and run this demo with the converted IR models.
+
+Other demo objectives are:
+
+* Up to 16 cameras as inputs, via OpenCV*
+* Visualization of detected objects from all channels on a single screen
+
+
+## How It Works
+
+On start-up, the application reads command-line parameters and loads the specified network. The YOLO v3 Object Detection network is required.
+
+> **NOTES**:
+> * Running the demo requires using at least one web camera attached to your machine.
+> * By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html).
+
+## Running
+
+Running the application with the `-h` option yields the following usage message:
+```sh
+cd /intel64/Release
+./multi-channel-yolo-v3-demo -h
+
+multichannel_yolo_v3 [OPTION]
+Options:
+
+    -h                Print a usage message.
+    -m ""             Required. Path to an .xml file with a trained yolo v3 or tiny yolo v3 model.
+    -l ""             Required for MKLDNN (CPU)-targeted custom layers. Absolute path to a shared library with the kernels impl.
+          Or
+    -c ""             Required for clDNN (GPU)-targeted custom kernels. Absolute path to the xml file with the kernels desc.
+    -d ""             Specify the target device for detection (CPU, GPU, FPGA, HDDL or MYRIAD). The demo will look for a suitable plugin for a specified device.
+    -nc               Maximum number of processed camera inputs (web cams)
+    -bs               Processing batch size, number of frames processed per infer request
+    -n_ir             Number of infer requests
+    -n_iqs            Frame queue size for input channels
+    -fps_sp           FPS measurement sampling period. Duration between timepoints, msec
+    -n_sp             Number of sampling periods
+    -pc               Enables per-layer performance report.
+    -t                Probability threshold for detections.
+    -no_show          No show processed video.
+    -show_stats       Enable statistics output
+    -duplicate_num    Enable and specify number of channels additionally copied from real sources
+    -real_input_fps   Disable input frames caching, for maximum throughput pipeline
+    -i                Specify full path to input video files
+
+```
+
+To run the demo, you can use the public pre-trained YOLO v3 model and follow [this](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html) page to convert it into an IR model.
+
+> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).
+
+For example, to run the demo with the converted YOLO v3 model on FPGA with fallback to CPU, using one camera, use the following command:
+```sh
+./multi-channel-yolo-v3-demo -m $PATH_OF_YOLO_V3_MODEL
+-l /intel64/Release/lib/libcpu_extension.so -d HETERO:FPGA,CPU -nc 1
+```
+
+To run the demo using two recorded video files, use the following command:
+```sh
+./multi-channel-yolo-v3-demo -m $PATH_OF_YOLO_V3_MODEL -l /intel64/Release/lib/libcpu_extension.so -d HDDL -i /path/to/file1 /path/to/file2
+```
+Video files will be processed repeatedly.
+
+To achieve 100% utilization of one Myriad X, the rule of thumb is to run four infer requests on each Myriad X. The option `-n_ir 32` can be added to the above command to use 100% of an HDDL-R card: 32 is 8 (Myriad X devices on the HDDL-R card) x 4 (infer requests per device), as in the following command:
+
+```sh
+./multi-channel-yolo-v3-demo -m $PATH_OF_YOLO_V3_MODEL -l /intel64/Release/lib/libcpu_extension.so -d HDDL -i /path/to/file1 /path/to/file2 /path/to/file3 /path/to/file4 -n_ir 32
+```
+
+You can also run the demo on web cameras and video files simultaneously by specifying both parameters: `-nc -i