diff --git a/README.md b/README.md
index 45fd2bf..2963829 100644
--- a/README.md
+++ b/README.md
@@ -27,6 +27,8 @@ conda activate hls4ml-tutorial
 source /path/to/your/installtion/Xilinx/Vitis_HLS/202X.X/settings64.(c)sh
 ```
 
+Note that part 7 of the tutorial makes use of the `VivadoAccelerator` backend of hls4ml, for which no Vitis equivalent is available yet. For this part of the tutorial it is therefore necessary to install and source Vivado HLS version 2019.2 or 2020.1, which can be obtained [here](https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/vivado-design-tools/archive.html).
+
 ## Companion material
 
 We have prepared a set of slides with some introduction and more details on each of the exercises. Please find them [here](https://docs.google.com/presentation/d/1c4LvEc6yMByx2HJs8zUP5oxLtY6ACSizQdKvw5cg5Ck/edit?usp=sharing).
diff --git a/part7a_bitstream.ipynb b/part7a_bitstream.ipynb
index 1aa91da..32fdcdc 100644
--- a/part7a_bitstream.ipynb
+++ b/part7a_bitstream.ipynb
@@ -7,7 +7,7 @@
    "source": [
     "# Part 7a: Bitstream Generation\n",
     "\n",
-    "In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/)."
+    "In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/). NOTE: This part of the tutorial requires Vivado HLS instead of Vitis."
    ]
   },
   {
@@ -26,7 +26,7 @@
     "_add_supported_quantized_objects(co)\n",
     "import os\n",
     "\n",
-    "os.environ['PATH'] = os.environ['XILINX_Vivado'] + '/bin:' + os.environ['PATH']"
+    "os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']"
    ]
   },
   {
@@ -74,7 +74,7 @@
     "import hls4ml\n",
     "import plotting\n",
     "\n",
-    "config = hls4ml.utils.config_from_keras_model(model, granularity='name', backend='Vitis')\n",
+    "config = hls4ml.utils.config_from_keras_model(model, granularity='name')\n",
     "config['LayerName']['softmax']['exp_table_t'] = 'ap_fixed<18,8>'\n",
     "config['LayerName']['softmax']['inv_table_t'] = 'ap_fixed<18,4>'\n",
     "for layer in ['fc1', 'fc2', 'fc3', 'output']:\n",
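
For context on what these changes enable, here is a minimal sketch of the `VivadoAccelerator` flow the diff switches to. It assumes Vivado HLS 2019.2 or 2020.1 has been sourced (so `XILINX_VIVADO` is set), and the model path and output directory are placeholders for the checkpoint trained in the earlier parts; the conversion and build arguments mirror what part 7a of the tutorial uses.

```python
import os

from qkeras.utils import _add_supported_quantized_objects
from tensorflow.keras.models import load_model

import hls4ml

# Make the sourced Vivado HLS tools visible to hls4ml, as the notebook
# cell does (XILINX_VIVADO is set by settings64.sh).
os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']

# Register QKeras quantizers so the trained model deserializes correctly.
co = {}
_add_supported_quantized_objects(co)
model = load_model('path/to/trained_qkeras_model.h5', custom_objects=co)  # placeholder path

# Per-layer config; no backend argument, so the default Vivado-based
# flow (which VivadoAccelerator builds on) is configured.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['LayerName']['softmax']['exp_table_t'] = 'ap_fixed<18,8>'
config['LayerName']['softmax']['inv_table_t'] = 'ap_fixed<18,4>'

# Target a supported board directly with the VivadoAccelerator backend.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj_pynq',  # illustrative output directory
    backend='VivadoAccelerator',
    board='pynq-z2',
)

# Synthesize the design and generate a bitstream for the pynq-z2.
hls_model.build(csim=False, export=True, bitfile=True)
```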
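A note on the `config_from_keras_model` change: dropping `backend='Vitis'` is deliberate rather than cosmetic. The `VivadoAccelerator` backend extends the Vivado backend, so the configuration should be generated for that flow; a Vitis-flavoured config may carry defaults that the Vivado HLS 2019.2/2020.1 tools do not understand.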