I recently presented a talk “Under the Canopy: Exploring Conifer for Low-Latency Decision Forests on FPGAs” at the first FPGA Developer Forum held at CERN.

The conifer library translates Decision Forests (ensembles of Decision Trees) into latency-optimised inference implementations for FPGAs. Developments using conifer for trigger selections at the LHC experiments are reaching maturity in 2024. The tool provides frontends for the most popular DF training libraries, including xgboost, scikit-learn, and yggdrasil, and multiple FPGA inference implementations: VHDL, Xilinx HLS, and the Forest Processing Unit (FPU). The VHDL and HLS implementations map a given DF directly onto FPGA logic, while the FPU is a reconfigurable design, itself implemented with HLS, that supports loading and reloading different DFs with a single implementation. After introducing the tool and some applications, the talk goes “under the canopy” to discuss implementation aspects of wider interest, with perspectives on: programming FPGAs with HDL versus HLS; implementing branching algorithms on FPGAs; and building configurable designs with HLS.
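To give a flavour of what “mapping a DF directly onto FPGA logic” means, here is a minimal illustrative sketch in Python (not conifer’s actual code): a single tree is stored as flat parallel arrays, every node’s threshold comparison is evaluated independently (as it would be, concurrently, in hardware), and the leaf is then selected from those comparison bits. The toy tree and array names below are invented for illustration.

```python
# Illustrative sketch only -- not conifer's implementation.
# A toy decision tree stored as flat parallel arrays: node i tests
# x[feature_idx[i]] <= threshold[i]; children_left/right hold child
# node indices; value holds leaf scores; -1 marks a leaf node.
feature_idx    = [0,   1,   -1, -1, -1]
threshold      = [0.5, 0.3,  0,  0,  0]
children_left  = [1,   3,   -1, -1, -1]
children_right = [2,   4,   -1, -1, -1]
value          = [0,   0,  1.0, -0.5, 0.7]

def predict(x):
    # Step 1: evaluate every internal node's comparison independently.
    # On an FPGA these comparisons have no data dependence on each other,
    # so they can all be computed in parallel.
    comp = [children_left[i] != -1 and x[feature_idx[i]] <= threshold[i]
            for i in range(len(threshold))]
    # Step 2: select the leaf from the comparison bits. This loop is
    # sequential in Python, but with a fixed tree depth it collapses to
    # a small multiplexer network in hardware.
    i = 0
    while children_left[i] != -1:
        i = children_left[i] if comp[i] else children_right[i]
    return value[i]
```

A forest then sums (or votes over) the outputs of many such trees, which again are independent of one another and can be evaluated concurrently.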

You can watch the recording of the talk here:

A related talk was presented by David Reikher on his work using conifer for the ATLAS Run 3 Tau trigger, which you can access here. Nicolò Ghielmetti presented on hls4ml, including some nice results of his on building accelerators, optimising FIFOs for hls4ml dataflow designs, and using hls4ml on Earth Observation satellites for our project Edge SpAIce. You can find Nicolò’s talk here.