Vitis AI is Xilinx's development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards. It consists of optimized IP, tools, libraries, models, and example designs. The Vitis AI development environment supports the industry's leading deep learning frameworks, such as TensorFlow and Caffe, and offers comprehensive APIs to prune, quantize, optimize, and compile your trained networks to achieve the highest AI inference performance for your deployed application.

Vitis AI is composed of the following key components: the AI Model Zoo, a comprehensive set of pre-optimized models that are ready to deploy on Xilinx devices. Alongside it, open-source, performance-optimized libraries offer out-of-the-box acceleration with minimal to zero code changes to your existing applications, written in C, C++ or Python; you can leverage these domain-specific accelerated libraries as-is, modify them to suit your requirements, or use them as algorithmic building blocks in your custom accelerators. A complete set of graphical and command-line developer tools, including the Vitis compilers, analyzers, and debuggers, lets you build, analyze performance bottlenecks in, and debug accelerated algorithms developed in C, C++, or OpenCL.

Please use caffe-xilinx to test or finetune the Caffe models listed on this page. The following table lists each model, its download link, and the MD5 checksum of its zip archive. Download the model archive and extract it to your working area on the local hard disk (a small checksum-verification sketch appears at the end of this section).

The end-to-end throughput and latency for each model on various boards with different DPU configurations are listed in the following sections. The tables for each board were measured with Vitis AI 1.1 and Vitis AI Library 1.1; the numbers for Ultra96 v1 were measured with the previous AI SDK v2.0.4.

For Ultra96 v1 and v2, the GitHub setup instructions provide additional details. For the Ultra96, plug in a USB-to-Ethernet adapter and simply power up the board; you may also connect over the Ultra96 Wi-Fi, but this will run a bit slower. It also enables Python control and execution of the Vitis AI Xilinx Deep Learning Processing Unit (DPU); a minimal usage sketch follows below. The base layer includes a board and pre-programmed I/O.
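As a concrete illustration of the Python control mentioned above, the sketch below shows one way to program the DPU and run a compiled model through the pynq_dpu package. This is a minimal sketch assuming a VART-based pynq_dpu release; the bitstream and model file names are placeholders, and the class and method names should be verified against the pynq_dpu version installed on the board.

```python
# Minimal sketch of driving the DPU from Python with pynq_dpu.
# File names are placeholders; check the API against your installed release.
import numpy as np
from pynq_dpu import DpuOverlay

overlay = DpuOverlay("dpu.bit")        # program the DPU bitstream
overlay.load_model("model.xmodel")     # placeholder compiled model file

dpu = overlay.runner                   # VART runner for the loaded model
in_tensor = dpu.get_input_tensors()[0]
out_tensor = dpu.get_output_tensors()[0]

# Allocate host buffers matching the tensor shapes reported by the runner.
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.float32)]
output_data = [np.zeros(tuple(out_tensor.dims), dtype=np.float32)]

# ... fill input_data[0] with a preprocessed image here ...

job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)
print(output_data[0].shape)
```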
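To check a downloaded model archive against the MD5 value published in the model table, a plain Python checksum helper is enough. This is a minimal sketch; the archive name and expected checksum below are placeholders to be replaced with the actual file you downloaded and the value listed next to it in the table.

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 hex digest of a file, read in chunks to bound memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder name and checksum -- substitute the downloaded archive
# and the MD5 value published in the model table.
archive = "model_archive.zip"
expected_md5 = "0123456789abcdef0123456789abcdef"

actual_md5 = md5sum(archive)
print(f"{archive}: {actual_md5}")
if actual_md5 != expected_md5:
    raise SystemExit("MD5 mismatch: re-download the model archive.")
```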