Coral #61
Replies: 7 comments
-
Steps:
-
Note that currently our model uses float16 quantization (and another model uses dynamic range quantization). Full integer quantization requires some code tweaks and a representative dataset...
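For reference, full integer quantization with the TFLite converter looks roughly like the sketch below; the saved-model path, input shape, and sample count are placeholder assumptions, not values from this repo.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield ~100 samples matching the model's input shape/dtype (placeholder shape here).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only ops so the Edge TPU compiler can map them.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```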
-
One potential idea is to do full integer quantization and just omit quantizing the output, i.e. the model would run on the TPU except for the last layer, which would run on the CPU.
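A sketch of that idea: only the input is set to int8 and `inference_output_type` is left at its float default, so the trailing dequantize step falls back to the CPU. The paths and the representative dataset below are assumptions, not repo code.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]  # placeholder shape

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
# inference_output_type is left at its default (tf.float32): the output tensor
# stays float, and that dequantize step runs on the CPU at inference time.
tflite_model = converter.convert()
```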
-
Here is the part in the Coral docs referring to float input/output (on an integer-quantized model) running on CPU: https://coral.ai/docs/edgetpu/models-intro/#quantization
-
OK, I got the Edge TPU to work. The steps were:
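As a general sketch (not necessarily the exact steps used here), loading and running an Edge TPU-compiled model from Python with the Edge TPU delegate looks roughly like this; the model filename is a placeholder and the delegate library name assumes Linux.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the Edge TPU delegate and the *_edgetpu.tflite model produced by the compiler.
interpreter = tflite.Interpreter(
    model_path="model_int8_edgetpu.tflite",  # placeholder filename
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],  # Linux library name
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's input shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```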
-
Edge TPU support via quantization-aware training has been fully added via #68.
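For context, quantization-aware training with the TensorFlow Model Optimization toolkit is roughly the following; this is a generic sketch with a placeholder model, not the code added in #68.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Tiny placeholder model; the real architecture lives in this repo.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Wrap the model so fake-quantization ops are inserted during training.
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(optimizer="adam", loss="mse")
# Train q_aware_model as usual, then export through the TFLite converter with
# full integer quantization so the Edge TPU compiler can map the ops.
```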
-
The current setup does not use the Coral because of supply chain limitations... future models could include it, but they would likely need to detect its presence, which was discussed in #98.
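One hedged sketch of such a presence check, using pycoral (just an illustration, not necessarily the approach from #98):

```python
from pycoral.utils.edgetpu import list_edge_tpus

def edge_tpu_available() -> bool:
    # list_edge_tpus() returns one record per attached Edge TPU (USB or PCIe).
    return len(list_edge_tpus()) > 0

if edge_tpu_available():
    print("Coral Edge TPU detected; load the *_edgetpu.tflite model.")
else:
    print("No Edge TPU found; fall back to the CPU TFLite model.")
```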
-
Compile for Edge TPU: https://coral.ai/docs/edgetpu/compiler/
And see how much faster the model is compared to the fp16 model (4s per inference).
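A rough way to measure that comparison once the compiled model is in hand; the filenames are placeholders and the Edge TPU run assumes the libedgetpu delegate is installed.

```python
import time
import numpy as np
import tflite_runtime.interpreter as tflite

def time_inference(model_path, delegates=None, runs=20):
    # Build an interpreter, run a warm-up invoke, then average `runs` timed invokes.
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=delegates or [])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()  # warm-up
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs

# Placeholder filenames for the fp16 CPU model and the Edge TPU-compiled model.
print("fp16 CPU   :", time_inference("model_fp16.tflite"))
print("Edge TPU   :", time_inference("model_int8_edgetpu.tflite",
                                     delegates=[tflite.load_delegate("libedgetpu.so.1")]))
```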