auto-generating sphinx docs
pytorchbot committed May 10, 2024
1 parent 2ab9b22 commit afb14e6
Showing 27 changed files with 57 additions and 57 deletions.
Binary file modified main/_images/sphx_glr_plot_video_api_001.png
Binary file modified main/_images/sphx_glr_plot_video_api_thumb.png
@@ -352,7 +352,7 @@ ffmpeg -f image2 -framerate 30 -i predicted_flow_%d.jpg -loop -1 flow.gif

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 9.613 seconds)
**Total running time of the script:** (0 minutes 9.430 seconds)


.. _sphx_glr_download_auto_examples_others_plot_optical_flow.py:
@@ -417,7 +417,7 @@ Here is an example where we re-purpose the dataset from the
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 2.291 seconds)
**Total running time of the script:** (0 minutes 2.369 seconds)


.. _sphx_glr_download_auto_examples_others_plot_repurposing_annotations.py:
@@ -239,7 +239,7 @@ Since the model is scripted, it can be easily dumped on disk and re-used
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 1.565 seconds)
**Total running time of the script:** (0 minutes 1.575 seconds)


.. _sphx_glr_download_auto_examples_others_plot_scripted_tensor_transforms.py:
4 changes: 2 additions & 2 deletions main/_sources/auto_examples/others/plot_video_api.rst.txt
@@ -549,7 +549,7 @@ We can generate a dataloader and test the dataset.

.. code-block:: none
{'video': ['./dataset/2/SOX5yA1l24A.mp4', './dataset/2/v_SoccerJuggling_g23_c01.avi', './dataset/2/SOX5yA1l24A.mp4', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/v_SoccerJuggling_g23_c01.avi'], 'start': [5.31105498128678, 2.0789014223402416, 9.49854975604133, 6.075792979599469, 4.502403489105758], 'end': [5.839167, 2.6026, 10.01, 6.606599999999999, 5.005], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
{'video': ['./dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/2/v_SoccerJuggling_g23_c01.avi', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/v_SoccerJuggling_g23_c01.avi', './dataset/1/WUzgd7C1pWA.mp4'], 'start': [2.0935120401385636, 0.4507920953003993, 1.475039877877628, 5.620562628198752, 3.11041692018258], 'end': [2.6026, 0.967633, 2.002, 6.139467, 3.636967], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
@@ -607,7 +607,7 @@ Cleanup the video and dataset:
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 4.367 seconds)
**Total running time of the script:** (0 minutes 3.952 seconds)


.. _sphx_glr_download_auto_examples_others_plot_video_api.py:
@@ -1162,7 +1162,7 @@ which we used in the first case, does so too.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 13.952 seconds)
**Total running time of the script:** (0 minutes 14.052 seconds)


.. _sphx_glr_download_auto_examples_others_plot_visualization_utils.py:
12 changes: 6 additions & 6 deletions main/_sources/auto_examples/others/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@

Computation times
=================
**00:31.790** total execution time for 5 files **from auto_examples/others**:
**00:31.379** total execution time for 5 files **from auto_examples/others**:

.. container::

@@ -33,17 +33,17 @@ Computation times
- Time
- Mem (MB)
* - :ref:`sphx_glr_auto_examples_others_plot_visualization_utils.py` (``plot_visualization_utils.py``)
- 00:13.952
- 00:14.052
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_optical_flow.py` (``plot_optical_flow.py``)
- 00:09.613
- 00:09.430
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_video_api.py` (``plot_video_api.py``)
- 00:04.367
- 00:03.952
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_repurposing_annotations.py` (``plot_repurposing_annotations.py``)
- 00:02.291
- 00:02.369
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_scripted_tensor_transforms.py` (``plot_scripted_tensor_transforms.py``)
- 00:01.565
- 00:01.575
- 0.0
@@ -263,7 +263,7 @@ by passing a callable to the ``labels_getter`` parameter. For example:
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 0.178 seconds)
**Total running time of the script:** (0 minutes 0.163 seconds)


.. _sphx_glr_download_auto_examples_transforms_plot_cutmix_mixup.py:
@@ -336,7 +336,7 @@ v2). So don't be afraid to simplify and only keep what you need.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 4.882 seconds)
**Total running time of the script:** (0 minutes 4.836 seconds)


.. _sphx_glr_download_auto_examples_transforms_plot_transforms_e2e.py:
@@ -400,7 +400,7 @@ Either way, the logic will depend on your specific dataset.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 0.815 seconds)
**Total running time of the script:** (0 minutes 0.836 seconds)


.. _sphx_glr_download_auto_examples_transforms_plot_transforms_getting_started.py:
@@ -6,7 +6,7 @@

Computation times
=================
**00:14.748** total execution time for 7 files **from auto_examples/transforms**:
**00:14.709** total execution time for 7 files **from auto_examples/transforms**:

.. container::

@@ -36,13 +36,13 @@ Computation times
- 00:08.854
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py` (``plot_transforms_e2e.py``)
- 00:04.882
- 00:04.836
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` (``plot_transforms_getting_started.py``)
- 00:00.815
- 00:00.836
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_cutmix_mixup.py` (``plot_cutmix_mixup.py``)
- 00:00.178
- 00:00.163
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py` (``plot_tv_tensors.py``)
- 00:00.009
18 changes: 9 additions & 9 deletions main/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@

Computation times
=================
**00:46.538** total execution time for 12 files **from all galleries**:
**00:46.088** total execution time for 12 files **from all galleries**:

.. container::

@@ -33,31 +33,31 @@
- Time
- Mem (MB)
* - :ref:`sphx_glr_auto_examples_others_plot_visualization_utils.py` (``../../gallery/others/plot_visualization_utils.py``)
- 00:13.952
- 00:14.052
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_optical_flow.py` (``../../gallery/others/plot_optical_flow.py``)
- 00:09.613
- 00:09.430
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_transforms_illustrations.py` (``../../gallery/transforms/plot_transforms_illustrations.py``)
- 00:08.854
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_transforms_e2e.py` (``../../gallery/transforms/plot_transforms_e2e.py``)
- 00:04.882
- 00:04.836
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_video_api.py` (``../../gallery/others/plot_video_api.py``)
- 00:04.367
- 00:03.952
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_repurposing_annotations.py` (``../../gallery/others/plot_repurposing_annotations.py``)
- 00:02.291
- 00:02.369
- 0.0
* - :ref:`sphx_glr_auto_examples_others_plot_scripted_tensor_transforms.py` (``../../gallery/others/plot_scripted_tensor_transforms.py``)
- 00:01.565
- 00:01.575
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_transforms_getting_started.py` (``../../gallery/transforms/plot_transforms_getting_started.py``)
- 00:00.815
- 00:00.836
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_cutmix_mixup.py` (``../../gallery/transforms/plot_cutmix_mixup.py``)
- 00:00.178
- 00:00.163
- 0.0
* - :ref:`sphx_glr_auto_examples_transforms_plot_tv_tensors.py` (``../../gallery/transforms/plot_tv_tensors.py``)
- 00:00.009
2 changes: 1 addition & 1 deletion main/auto_examples/others/plot_optical_flow.html
@@ -560,7 +560,7 @@ <h2>Bonus: Creating GIFs of predicted flows<a class="headerlink" href="#bonus-cr
<p>Once the .jpg flow images are saved, you can convert them into a video or a
GIF using ffmpeg with e.g.:</p>
<p>ffmpeg -f image2 -framerate 30 -i predicted_flow_%d.jpg -loop -1 flow.gif</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 9.613 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 9.430 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-auto-examples-others-plot-optical-flow-py">
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/8c66a0169c2c41937d8db4238842fe5b/plot_optical_flow.ipynb"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Jupyter</span> <span class="pre">notebook:</span> <span class="pre">plot_optical_flow.ipynb</span></code></a></p>
@@ -601,7 +601,7 @@ <h2>Converting Segmentation Dataset to Detection Dataset<a class="headerlink" hr
<span class="k">return</span> <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" title="torch.Tensor" class="sphx-glr-backref-module-torch sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">img</span></a><span class="p">,</span> <a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">target</span></a>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 2.291 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 2.369 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-auto-examples-others-plot-repurposing-annotations-py">
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/2753c5fd8349a28b023c919d8ddf3573/plot_repurposing_annotations.ipynb"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Jupyter</span> <span class="pre">notebook:</span> <span class="pre">plot_repurposing_annotations.ipynb</span></code></a></p>
@@ -486,7 +486,7 @@
<span class="k">assert</span> <span class="p">(</span><a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" title="torch.Tensor" class="sphx-glr-backref-module-torch sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">res_scripted_dumped</span></a> <span class="o">==</span> <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" title="torch.Tensor" class="sphx-glr-backref-module-torch sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">res_scripted</span></a><span class="p">)</span><span class="o">.</span><span class="n">all</span><span class="p">()</span>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 1.565 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 1.575 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-auto-examples-others-plot-scripted-tensor-transforms-py">
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/4333051365715be174a783856419631f/plot_scripted_tensor_transforms.ipynb"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Jupyter</span> <span class="pre">notebook:</span> <span class="pre">plot_scripted_tensor_transforms.ipynb</span></code></a></p>
4 changes: 2 additions & 2 deletions main/auto_examples/others/plot_video_api.html
@@ -858,7 +858,7 @@ <h2>3. Building an example randomly sampled dataset (can be applied to training
<span class="nb">print</span><span class="p">(</span><a href="https://docs.python.org/3/library/stdtypes.html#dict" title="builtins.dict" class="sphx-glr-backref-module-builtins sphx-glr-backref-type-py-class sphx-glr-backref-instance"><span class="n">data</span></a><span class="p">)</span>
</pre></div>
</div>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;video&#39;: [&#39;./dataset/2/SOX5yA1l24A.mp4&#39;, &#39;./dataset/2/v_SoccerJuggling_g23_c01.avi&#39;, &#39;./dataset/2/SOX5yA1l24A.mp4&#39;, &#39;./dataset/1/WUzgd7C1pWA.mp4&#39;, &#39;./dataset/2/v_SoccerJuggling_g23_c01.avi&#39;], &#39;start&#39;: [5.31105498128678, 2.0789014223402416, 9.49854975604133, 6.075792979599469, 4.502403489105758], &#39;end&#39;: [5.839167, 2.6026, 10.01, 6.606599999999999, 5.005], &#39;tensorsize&#39;: [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>{&#39;video&#39;: [&#39;./dataset/2/v_SoccerJuggling_g24_c01.avi&#39;, &#39;./dataset/2/v_SoccerJuggling_g23_c01.avi&#39;, &#39;./dataset/1/WUzgd7C1pWA.mp4&#39;, &#39;./dataset/2/v_SoccerJuggling_g23_c01.avi&#39;, &#39;./dataset/1/WUzgd7C1pWA.mp4&#39;], &#39;start&#39;: [2.0935120401385636, 0.4507920953003993, 1.475039877877628, 5.620562628198752, 3.11041692018258], &#39;end&#39;: [2.6026, 0.967633, 2.002, 6.139467, 3.636967], &#39;tensorsize&#39;: [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
</pre></div>
</div>
</section>
@@ -881,7 +881,7 @@ <h2>4. Data Visualization<a class="headerlink" href="#data-visualization" title=
<a href="https://docs.python.org/3/library/shutil.html#shutil.rmtree" title="shutil.rmtree" class="sphx-glr-backref-module-shutil sphx-glr-backref-type-py-function"><span class="n">shutil</span><span class="o">.</span><span class="n">rmtree</span></a><span class="p">(</span><span class="s2">&quot;./dataset&quot;</span><span class="p">)</span>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 4.367 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 3.952 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-auto-examples-others-plot-video-api-py">
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/4b3151295a1f6b75ffd88837fb31f719/plot_video_api.ipynb"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Jupyter</span> <span class="pre">notebook:</span> <span class="pre">plot_video_api.ipynb</span></code></a></p>
2 changes: 1 addition & 1 deletion main/auto_examples/others/plot_visualization_utils.html
@@ -1004,7 +1004,7 @@ <h2>Visualizing segmentation masks<a class="headerlink" href="#visualizing-segme
Most torch keypoint-prediction models return the visibility for every prediction, ready for you to use it.
The <a class="reference internal" href="../../models/generated/torchvision.models.detection.keypointrcnn_resnet50_fpn.html#torchvision.models.detection.keypointrcnn_resnet50_fpn" title="torchvision.models.detection.keypointrcnn_resnet50_fpn"><code class="xref py py-func docutils literal notranslate"><span class="pre">keypointrcnn_resnet50_fpn()</span></code></a> model,
which we used in the first case, does so too.</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 13.952 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 14.052 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-auto-examples-others-plot-visualization-utils-py">
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/95e7320166df5d4ddcbdd5ea64a5c98b/plot_visualization_utils.ipynb"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Jupyter</span> <span class="pre">notebook:</span> <span class="pre">plot_visualization_utils.ipynb</span></code></a></p>
12 changes: 6 additions & 6 deletions main/auto_examples/others/sg_execution_times.html
@@ -357,7 +357,7 @@

<section id="computation-times">
<span id="sphx-glr-auto-examples-others-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this heading"></a></h1>
<p><strong>00:31.790</strong> total execution time for 5 files <strong>from auto_examples/others</strong>:</p>
<p><strong>00:31.379</strong> total execution time for 5 files <strong>from auto_examples/others</strong>:</p>
<div class="docutils container">
<style scoped>
<link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css" rel="stylesheet" />
@@ -379,23 +379,23 @@
</thead>
<tbody>
<tr class="row-even"><td><p><a class="reference internal" href="plot_visualization_utils.html#sphx-glr-auto-examples-others-plot-visualization-utils-py"><span class="std std-ref">Visualization utilities</span></a> (<code class="docutils literal notranslate"><span class="pre">plot_visualization_utils.py</span></code>)</p></td>
<td><p>00:13.952</p></td>
<td><p>00:14.052</p></td>
<td><p>0.0</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="plot_optical_flow.html#sphx-glr-auto-examples-others-plot-optical-flow-py"><span class="std std-ref">Optical Flow: Predicting movement with the RAFT model</span></a> (<code class="docutils literal notranslate"><span class="pre">plot_optical_flow.py</span></code>)</p></td>
<td><p>00:09.613</p></td>
<td><p>00:09.430</p></td>
<td><p>0.0</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="plot_video_api.html#sphx-glr-auto-examples-others-plot-video-api-py"><span class="std std-ref">Video API</span></a> (<code class="docutils literal notranslate"><span class="pre">plot_video_api.py</span></code>)</p></td>
<td><p>00:04.367</p></td>
<td><p>00:03.952</p></td>
<td><p>0.0</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="plot_repurposing_annotations.html#sphx-glr-auto-examples-others-plot-repurposing-annotations-py"><span class="std std-ref">Repurposing masks into bounding boxes</span></a> (<code class="docutils literal notranslate"><span class="pre">plot_repurposing_annotations.py</span></code>)</p></td>
<td><p>00:02.291</p></td>
<td><p>00:02.369</p></td>
<td><p>0.0</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="plot_scripted_tensor_transforms.html#sphx-glr-auto-examples-others-plot-scripted-tensor-transforms-py"><span class="std std-ref">Torchscript support</span></a> (<code class="docutils literal notranslate"><span class="pre">plot_scripted_tensor_transforms.py</span></code>)</p></td>
<td><p>00:01.565</p></td>
<td><p>00:01.575</p></td>
<td><p>0.0</p></td>
</tr>
</tbody>
2 changes: 1 addition & 1 deletion main/auto_examples/transforms/plot_cutmix_mixup.html
@@ -512,7 +512,7 @@ <h2>Non-standard input format<a class="headerlink" href="#non-standard-input-for
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>out[&#39;imgs&#39;].shape = torch.Size([4, 3, 224, 224]), out[&#39;target&#39;][&#39;classes&#39;].shape = torch.Size([4, 100])
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 0.178 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 0.163 seconds)</p>
<div class="sphx-glr-footer sphx-glr-footer-example docutils container" id="sphx-glr-download-auto-examples-transforms-plot-cutmix-mixup-py">
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/66c906e53334aa968837db20de95f3b2/plot_cutmix_mixup.ipynb"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Jupyter</span> <span class="pre">notebook:</span> <span class="pre">plot_cutmix_mixup.ipynb</span></code></a></p>
