Introduction

Today I want to talk about the typical project / repository structure that I use for efficiently managing the build tools on customer projects. Every project is a bit unique in its own way, so this structure is intentionally flexible, allowing for project-specific script edits where needed, while still being general enough to work in 99% of cases for my usual workflow. It’s been slowly developed and tweaked over the years as I add new tools into the mix and make improvements to fix old shortcomings. This structure has evolved with just about every project, so I’m sure this isn’t its final form, but it’s at a place where I’m happy enough with it to share it.

Goals

First, I’ll list out the goals that motivated this structure, set of tools, and build system.

  • Focus on VHDL There are plenty of articles out there that go over the differences between VHDL and Verilog so this isn’t the place to do that. Most of my work has been concentrated in the US defense industry, where VHDL is the standard. I could easily be swayed into picking up Verilog instead, but this is just one of those cases of swimming with the current and taking the path of least resistance. VHDL is a fine language and it works well for me. The ONE major feature I sorely miss that SystemVerilog has over VHDL is interfaces, so once VHDL 2019 interfaces get better tool support, I’ll be a very happy guy. They’re supported by the NVC simulator and by newer Vivado releases, so I hope to start experimenting with them soon. Maybe I’ll make a branch of sblib-open to start playing around with them. Once GHDL support has been implemented and matured a bit I’ll probably start going full-throttle with interfaces, but for now I think it’s still a bit too early.
  • Focus on Xilinx Vivado Until I get an Altera / Lattice / Microsemi project, I’ll keep the scripting focused on Vivado builds. But like I said earlier, this project structure is always evolving, so it wouldn’t be too difficult to integrate other synthesis and implementation scripts in the future.
  • Support continuous integration This is critical. CI is a classic example of short term pain for long term gain, especially when a project is to be managed with more of a flexible sprint approach than a waterfall approach. Without CI, you’ll spend too much time manually managing different build versions, second-guessing whether “v1.1_hardware_tested_1_final.bit” or “v1.1_hardware_tested_real_final.bit” was the right bitstream to release, and being afraid to touch an existing design because “it’s already been hardware validated”. Without CI, small changes to existing designs can become really painful. CI helps you trust your code and gives you confidence while incrementally updating designs.
  • Support single command builds For example - all it takes to go from source to compiled release is running make all from the project root.
  • Support single command simulations Using something like make sim.
  • Only check in source code - No generated or compiled files No 30 GB zip file project archives with zillions of different pre-compiled bitstreams and unnecessary tool-generated files. You might be surprised by how common “the zip file method” is amongst old-school developers.
  • Vivado projects are built with a script Do not check in the project.xpr. Using a script makes managing a large project so much more pleasant over the long run.
  • Support Linux and Windows The build system must be OS-agnostic. I enjoy using Linux, but not everybody does, so I’d never want to force a (present or future) colleague to install a VM. Although - Petalinux / Yocto is a different story, since this requires Linux. But for now, let’s just stay focused on the FPGA repository. I almost always keep the software / OS component of an SoC project in its own repo, independent of the FPGA portion. The only case where I might keep them in the same repository would be if the OS was extremely trivial and guaranteed to be versioned and released at the same cadence as the hardware design, but this is almost never the case. You almost always want the flexibility to version track the OS and applications independently from the FPGA. Alright, that was a bit of a tangent, but still useful.
  • Support for multiple hardware platforms Just about every project I take on is targeted for >1 hardware platform. Usually there is a prototyping stage where I prove out a concept using development boards before moving to a custom board. It is also common to have a few different board variants that necessitate creating several tweaked top-level FPGA designs with different IO, but similar core capabilities. The project structure needs to accommodate that. A long time ago I used to maintain a separate git branch for each hardware platform, but that got messy fast. I’m much happier keeping a single “mainline” development branch with all the different top-level instantiations and constraints in their own platform directory.
  • Support “documentation as code” What I mean by this is that the project documentation should be stored in the same repository as the code, and should be built, released, and versioned the same way as the code. This makes keeping track of which version of the documentation lines up to which version of the code so much easier than maintaining a separate confluence page or word doc that is always out of date with the true state of the code.
  • Should support, but never require Vivado GUI mode People love to bash on using the Vivado GUI, and I agree with the criticism - having a full CLI workflow is of paramount importance to me for the long-term success of a project. However, there are still cases, in my opinion, where the Vivado GUI really shines, so the idea is to take what’s good about the GUI and support it, while also never needing it if you don’t want to use it. It can be really helpful to visualize dangerous CDC paths using the Vivado CDC tool, view elaborated designs to make sure Vivado is interpreting your RTL as you expect, and trace signals through a visual netlist to check critical paths that are troubling your timing reports.
  • Should support Vivado project mode rather than non-project mode There are people who swear by non-project mode, and I’m sure it works great for them, but just about every “non-hardcore” FPGA developer I’ve met expects project mode, and may not even know what non-project mode is. So I think a good middle ground is to use project mode and script the project creation. This makes things more accessible, lets developers enjoy the benefits of project mode, while also keeping everything fully scriptable.
  • Code style should be automatically checked as part of CI While working on a team, code reviews help developers learn from each other while also keeping everyone accountable. If you know someone else is going to be scrutinizing your code, you’ll probably be more likely to produce something of higher quality. Code style is one of those things that should be taken care of before the manual review even happens, because style rules can be strictly defined in such a way that they are automatically checkable by a tool. This gives developers more room during a code review to look for real logical issues rather than being distracted by simple style problems.
  • Control / status registers should be generated by a tool Control and status registers are the main interface between the FPGA and the software that controls the FPGA. One of the most boring, time-consuming, and error-prone processes in FPGA development used to be manually maintaining HDL control / status registers, documentation, and software for those registers. Since the HDL, documentation, and the software interface are essentially just different representations of the exact same register information, this is one of those cases where it makes sense to use code generation.
  • There should be a clear boundary between source files and built files I’ve never liked working with projects that generate files all over the place because this can make it difficult to determine what needs to be checked in to source control and what can be safely deleted between builds.
  • Semantic versioning should be used with respect to the software register interface This helps software developers that depend on your FPGA design understand how changes between FPGA versions will affect their software.

The Tools

I’ve picked up lots of great, mainly open-source, build tools over the years to satisfy some of the goals outlined above.

  • VSG (VHDL Style Guide) For automated code style checking and fixing

This is the only free tool I know of that does the job. Thankfully, it’s great! You just have to define a “rules.yaml” file listing out your custom code style rules and then pass that file, along with your VHDL files, as input arguments to the vsg program. It can generate a structured report with all of the rule violations. It can not only check but also automatically fix most rule violations! This is great not only for your own code, but also in a situation where you’re inheriting someone else’s code and you want to quickly update the style / fix whitespace issues on hundreds or thousands of files with a single command.
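For a taste of what the rules file looks like, here is a small hedged sketch (vsg’s config uses a top-level rule key; the specific rule names and option values below are just examples of its <group>_<number> naming scheme):

```yaml
rule:
  global:
    indentSize: 2        # option applied across all indenting rules
  signal_008:
    disable: True        # example: opt out of a particular naming rule
```

Passing this file via vsg’s config option lets the whole team share one style definition checked into the repo.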

  • VUnit For simulator abstraction, simulation scripting, and a high-quality verification library

I love VUnit’s self-checking testbench verification library. It has BFMs for common components like Wishbone, AXI, AXI-Stream, and UARTs, along with functions for logging failed data comparison checks. One of my other favorite VUnit features is how straightforward it makes testing different combinations of module generics. Imagine you have an async FIFO module with generics for DATA_WIDTH and DEPTH. VUnit lets you define a test matrix for DATA_WIDTH = { 16, 32, 128 } and DEPTH = { 16, 1024 }, then generates 6 testbench instantiations covering all 6 possible combinations. Even better, it makes full use of modern processors with zillions of CPU cores by running all 6 of these simulations in parallel. Check out the simulation scripts in sblib-open for some examples of matrix test generators.
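To make the matrix idea concrete, here is a stdlib-only sketch of the cartesian-product logic (the FIFO generics come from the example above; in a real VUnit run.py each name/generics pair would be registered on the testbench with add_config before handing control to VUnit):

```python
# Stdlib sketch of a VUnit-style generic test matrix. In a real run.py
# each (name, generics) pair would be passed to tb.add_config(...).
from itertools import product

def fifo_test_matrix(widths, depths):
    """Return one (config_name, generics) pair per generic combination."""
    configs = []
    for width, depth in product(widths, depths):
        name = f"width={width},depth={depth}"
        configs.append((name, {"DATA_WIDTH": width, "DEPTH": depth}))
    return configs

configs = fifo_test_matrix([16, 32, 128], [16, 1024])
print(len(configs))  # 3 widths x 2 depths = 6 simulations
```

Each generated config shows up as its own named test in the simulation report, which makes it obvious which generic combination failed.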

  • HDL Registers For control / status register code generation

There are lots of register generator tools out there. They solve one of the most common problems HDL developers face, so it makes sense that lots of different people have tackled the problem. Heck, I’ve even written my own custom janky register generator before, and just about every company I’ve worked for has had their own proprietary (and always janky) variation of a register generator. This is probably one of the areas of HDL development with the highest amount of duplicated effort, so it was only natural that open-source would eventually fly in to save the day.

HDL Registers is not the most popular register generator. That award probably goes to SystemRDL. But after trying and comparing most of the well-known generators, I settled on HDL Registers because it is the fastest and simplest. Check out the HDL Registers project philosophy page for more details. The author, Lukas Vik, is great and very responsive to feedback! He’s always added my feature requests in just a few days.

  • GHDL and NVC For open-source VHDL simulation

All I have to say is that both of these simulators are really great. Good enough for professional use, IMO. The main drawback is that they do not support mixed-language simulation. For that you’ll have to upgrade to one of the paid simulators, like Questasim, Xcelium, or Riviera. These are too expensive for my one-man company, so I make do with the free simulators. I typically design my code hierarchies so that I never need to run mixed-language simulations, and I also maintain my own VHDL libraries for basic building-block components so that I don’t have to rely on Xilinx IP (often written / encrypted in Verilog). Occasionally I really do need to run a mixed-language simulation, so when those rare cases come up, I’ll use the Xilinx xsim simulator. It is very slow, doesn’t support some VHDL-2008 features, and doesn’t integrate with VUnit, so I try to avoid it when possible.

  • vhdl_ls For VHDL language server support

This has been one of the strongest productivity boosters I’ve ever had. A language server is a program that provides code completion, syntax checking, and go-to-reference features in your text editor. I use it with the VSCode extension, but it’s a standalone program that can be integrated into any text editor that supports language servers. You create a file at the root of your repo called vhdl_ls.toml that defines the locations of all the VHDL files in your project, and then the magic starts. It’s context-aware of your full project, so it’s smart enough to let you right-click on an entity instantiation and quickly jump to that entity’s definition. It also alerts you when you’ve written syntactically incorrect code, and supports tab-completion for items that would usually take a long time to manually type. Want to instantiate an entity with 200 signals? You can just start typing the entity’s name and vhdl_ls will give you a tab-completion option to fully instantiate the giant entity with one keystroke.
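A minimal vhdl_ls.toml looks something like the following (the [libraries] table is the real vhdl_ls format; the library names and glob paths here are examples matching this template’s layout):

```toml
# Map VHDL library names to the source files they contain.
[libraries]
sblib.files = ["lib/sblib-open/src/**/*.vhd"]
template_fpga.files = ["src/**/*.vhd", "test/**/*.vhd"]
```

Once this file exists at the repo root, the language server picks it up automatically and indexes the whole project.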

  • Vivado For synthesis and implementation

Vivado is the AMD / Xilinx tool used to “compile” (synthesize / place / route) your set of sources into a binary that can be loaded onto a real FPGA. Other vendors have their own tools that do the same thing, such as Altera Quartus, Lattice Diamond, and Microchip Libero.

In my projects, I have two TCL scripts for Vivado. The first one generates a Vivado project and adds the relevant sources, while the second builds the bitstream and reports. As an added bonus, the second one also parses the timing report and CDC report and will raise an error if the design fails to meet timing or has an unsafe CDC path.
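My version of this check lives in build.tcl, but the idea is simple enough to sketch in a few lines of Python. The “Slack (MET/VIOLATED)” markers below match Vivado’s report_timing text output; treat this as an illustration, not a full report parser:

```python
import re

def timing_passed(report_text: str) -> bool:
    """Return False if any path in a Vivado timing report shows violated
    slack. Vivado's report_timing output prints either 'Slack (MET)' or
    'Slack (VIOLATED)' for each reported path."""
    return re.search(r"Slack \(VIOLATED\)", report_text) is None

def check_timing(report_text: str) -> None:
    """Raise an error so the build / CI fails loudly, mirroring build.tcl."""
    if not timing_passed(report_text):
        raise RuntimeError("Design failed to meet timing")

# Example report fragments: one passing path, one failing path.
print(timing_passed("Slack (MET) :             0.235ns"))      # True
print(timing_passed("Slack (VIOLATED) :       -0.112ns"))      # False
```

Failing the build immediately on a timing or CDC violation means a bad bitstream can never silently make it into a release archive.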

Both of these scripts are generalized to the project structure and expect a known-in-advance file layout, so they work for almost all of my projects without having to be modified. Check out the tools/proj.tcl and tools/build.tcl scripts in the template_fpga repo if you’re interested in the inner workings of these.

One of the most common annoyances people seem to have with Vivado is managing block designs and Xilinx IPs in a clean way with source control. In my experience, by far the best way to handle these is to regenerate them from tcl scripts. You don’t have to write these scripts yourself. Vivado will create them for you with the write_bd_tcl and write_ip_tcl tcl commands. Vivado also has a write_project_tcl command that will regenerate a full project for you, but I prefer maintaining my own project generation script because the one that gets generated by Vivado is usually a buggy mess that doesn’t fit with the way that I like to organize things.
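For reference, exporting and regenerating a block design or IP from the Vivado TCL console looks roughly like this (the file paths and IP name here are examples, not part of the template):

```tcl
# With the project open, export scripts that can recreate the BD / IP:
write_bd_tcl -force ./platforms/basys3/ip/system_bd.tcl
write_ip_tcl -force [get_ips clk_wiz_0] ./platforms/basys3/ip/clk_wiz_0.tcl

# Later, the project-generation script recreates them by sourcing:
source ./platforms/basys3/ip/system_bd.tcl
source ./platforms/basys3/ip/clk_wiz_0.tcl
```

Only the generated .tcl files get checked in; the binary BD / IP output products stay out of the repo entirely.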

  • Github Actions For continuous integration

I usually have two main actions rules. The first one gets triggered whenever a new versioned tag is created and pushed. This rule builds the FPGA bitfile, xsa, documentation, and other related output files, and creates a new Github Release, which archives the build artifacts as a “single source of truth” for an officially released build. Having long-term storage of all your official builds, along with a linked tag for the specific commit that was used to build that release, is incredibly important for the long-term success and maintenance of a project. What happens when a customer asks for “that specific load from a few years ago because it had a weird bug that one of our other legacy systems actually relied on”? These things really do happen, so it’s important to keep an organized archive of all official build artifacts over time.

The only downside of the actions build rule I use is that it has to run on a self-hosted runner, because I don’t think it’s really feasible to get Vivado to run on the free runners provided by Github. Not only are the Github runners not powerful enough, but they would also require that Vivado be reinstalled every time the runner is started up, because they have no memory between runs. This would add a lot of additional time to each build and would also add significant complexity to the actions script. I’ve seen it done before, often with Docker, but the added complexity is not worth the tradeoff to me. Vivado is such a behemoth of a tool that I think this is one of those cases where you’re better off using a pre-configured environment for builds, rather than including the creation of the build environment as part of your build. So for this, I have a lab computer with an Intel i7-1470k, 96 GB of RAM, and a 4 TB NVMe drive set up to always be available on my network. Unfortunately, it’s a security concern to expose self-hosted runners on public repos, so I can’t include the build action in the template_fpga project.

The other action that I usually include is one to run the simulations and check the code style for a project. This action runs on EVERY commit to the main branch to help ensure that any new code does not break existing functionality or violate any style rules. Since my simulations and style checking use all open-source tools, it’s quite straightforward to get these up and running using a free Github-hosted runner - no self-hosting required.

One more tip I want to mention about these actions scripts is that I’ve learned it’s usually better to keep them as lean as possible. Do as much of your scripting work in the actual repo, using bash scripts, Python, Makefiles, whatever, and then make calls to those repo scripts from within the Github actions script. This approach has three benefits.

  1. Makes migrating to a different git hosting service in the future easier because you’ve minimized the dependencies on Github.
  2. There is almost no difference between running a local build on the machine you’re developing on and running a remote build on the Github Actions server. You’ll want the option of running local builds. You don’t want to be forced to connect to a remote server every time you need to run a build. What if the internet is down?
  3. Shorter actions scripts are easier to debug. Debugging Github actions scripts can be a real pain because they often require a push to the central repo to get triggered. This means your repository may end up with a list of commits like “testing actions script 1”, “testing actions script 2”, … Shorter actions scripts are better.
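As an illustration of the “thin workflow” idea, the test action might look something like this (a hypothetical sketch: the workflow layout and make target names are assumptions; actions/checkout is the standard checkout step):

```yaml
# Hypothetical test.yaml: the workflow stays thin and just calls make.
name: test
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check code style
        run: make style-check
      - name: Run simulations
        run: make sim
```

All the real logic lives in the Makefile and tools/ scripts, so the exact same commands work locally and on the runner.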

The Structure

Now that I’ve outlined all of the major goals of the project structure and listed the tools I use within that structure, I’ll walk through the project directories. As a reference, I’ve created the template_fpga repo, so feel free to clone that to follow along and tweak it for your own purposes. Or even send an email to [email protected] if you want to talk about the template with me.

Here is the output of tree -L 3 from the template_fpga root directory after cloning the project and running make all:

├── .github
│   └── workflows
│       ├── build.yaml
│       └── test.yaml
├── build
│   ├── regs_out
│   │   ├── adder
│   │   ├── gpio
│   │   └── stdver
│   ├── sim_report.xml
│   ├── template_fpga_v0.1.0-basys3
│   │   ├── template_fpga_v0.1.0-basys3.bit
│   │   ├── template_fpga_v0.1.0-basys3_build_info.rpt
│   │   ├── template_fpga_v0.1.0-basys3_cdc.rpt
│   │   ├── template_fpga_v0.1.0-basys3_clock_interaction.rpt
│   │   ├── template_fpga_v0.1.0-basys3_impl.log
│   │   ├── template_fpga_v0.1.0-basys3_io.rpt
│   │   ├── template_fpga_v0.1.0-basys3_methodology.rpt
│   │   ├── template_fpga_v0.1.0-basys3.mmi
│   │   ├── template_fpga_v0.1.0-basys3_power.rpt
│   │   ├── template_fpga_v0.1.0-basys3_synth.log
│   │   ├── template_fpga_v0.1.0-basys3_timing.rpt
│   │   ├── template_fpga_v0.1.0-basys3_util.rpt
│   │   └── template_fpga_v0.1.0-basys3.xsa
│   ├── template_fpga_v0.1.0-basys3.tar.gz
│   ├── template_fpga_v0.1.0-genesys-zu5ev
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev.bit
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_build_info.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_cdc.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_clock_interaction.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_impl.log
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_io.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_methodology.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_power.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_synth.log
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_timing.rpt
│   │   ├── template_fpga_v0.1.0-genesys-zu5ev_util.rpt
│   │   └── template_fpga_v0.1.0-genesys-zu5ev.xsa
│   ├── template_fpga_v0.1.0-genesys-zu5ev.tar.gz
│   └── vivado_out
│       ├── template_fpga_basys3
│       └── template_fpga_genesys-zu5ev
├── CHANGELOG.md
├── doc
│   ├── diagrams.drawio
│   ├── requirements.md
│   └── user_guide.md
├── .gitignore
├── lib
│   └── sblib-open
│       ├── CHANGELOG.md
│       ├── doc
│       ├── LICENSE
│       ├── Makefile
│       ├── README.md
│       ├── src
│       ├── test
│       ├── tools
│       └── vhdl_ls.toml
├── Makefile
├── platforms
│   ├── basys3
│   │   ├── cnstr
│   │   ├── hdl
│   │   ├── ip
│   │   └── platform.mk
│   └── genesys-zu5ev
│       ├── cnstr
│       ├── hdl
│       ├── ip
│       └── platform.mk
├── README.md
├── src
│   └── adder
│       ├── hdl
│       └── regs
├── test
│   └── adder
│       └── adder_tb.vhd
├── tools
│   ├── build.tcl
│   ├── proj.tcl
│   ├── regs.py
│   ├── sim.py
│   └── vsg_rules.yaml
└── vhdl_ls.toml

Let’s start at the top and work our way down:

  • .github Holds the Github Actions scripts for building / simulating / testing.
  • build This is the only generated directory, meaning that cleaning up the build artifacts is as simple as running rm -r build.
    • regs_out holds all of the generated register interface code.
    • sim_report.xml is an easily parsable report with the last simulation run results.
    • template_fpga_v0.1.0-basys3 holds the build artifacts from the synthesis / implementation run. The general form of this directory name is <repo_name>_<version>-<platform>. As you can see, this template project supports two different hardware platforms: the Basys3 Artix-7 board and the Genesys ZU-5EV Zynq MPSoC board. Along with the FPGA programming files, these build directories also store build reports that provide information on pins, utilization, timing, and CDC. Storing these reports helps you track the changes in the design over time and can also help you make utilization estimates for future projects without having to fire up Vivado and rebuild old projects for comparison.
    • template_fpga_v0.1.0-basys3.tar.gz This is just a compressed and archived file of the directory we talked about in the last bullet. This is the file that gets stored in your build archives. In my case, I use a Github release for each new version, but there are plenty of other suitable ways of handling this. The one way that is NOT suitable is to store these built releases in your git repo. Git was never meant to store large binary blobs, so you’ll notice your repos becoming more and more sluggish as new releases are added. Someone might make a case for git LFS here, but I’ve never tried it before (I’ve never had a reason to - Github releases already worked well for me and seem to be a natural fit for the problem of long-term build artifact storage).
    • vivado_out Here’s where the Vivado project-generation script creates the Vivado projects for each platform.
  • CHANGELOG.md Use this to keep track of changes to the project as it evolves. See Keep A Changelog
  • doc Project documentation including diagrams, images, schematic PDFs, user guides, and anything else that may be needed.
  • .gitignore Lists the files that shouldn’t be checked in to the repository.
  • lib External source-code libraries that are not expected to significantly change during the life of the project. These could either be copy-pasted in, or linked as git submodules or git subtrees. I’m fond of using git subtrees for libraries because they are simpler to handle than submodules, while still being easy to update, if needed, without having to manually copy and paste anything. Almost all of my customer projects rely on sblib-open as an external dependency and I typically link it as a git subtree.
  • Makefile This has all the major build commands for each project. The usual ones are:
    • make all to fully build all platforms from a freshly checked out repo.
    • make sim to run the self-checking simulations.
    • make style-fix to reformat the entire hdl codebase to conform to the expected style guidelines.
  • platforms Has the platform-specific source and configuration files.
  • README.md Should have general info on the project, including explicitly listing tool dependencies and build instructions.
  • src Has source files that are common among all the platforms. Each major module should have its own subdirectory in src. Then each module subdirectory is further divided using the following directories:
    • cnstr for constraints
    • hdl for VHDL and/or Verilog source code
    • ip for TCL scripts to regenerate IPs and/or IP-Integrator block designs. NEVER use an ip.xpr file to store an IP; ALWAYS use a TCL script to regenerate it. Block design scripts can be created from the Vivado TCL command line using write_bd_tcl and IP scripts can be created using write_ip_tcl.
    • regs for register definition files that are passed to the register generator tool.
  • test keeps the VUnit testbenches for each project submodule.
  • tools holds project scripts for the build tools.
  • vhdl_ls.toml lists the vhdl source files that should be included for analysis by the language server.
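Tying the structure together, the top-level Makefile described above might look roughly like this (a hedged sketch: the script names match the tools/ directory in the tree listing, but the exact Vivado invocation flags and target wiring are assumptions; recipe lines must be tab-indented):

```make
PLATFORMS := basys3 genesys-zu5ev

all: regs
	for p in $(PLATFORMS); do \
		vivado -mode batch -source tools/proj.tcl -tclargs $$p && \
		vivado -mode batch -source tools/build.tcl -tclargs $$p ; \
	done

regs:
	python3 tools/regs.py

sim: regs
	python3 tools/sim.py

style-fix:
	vsg --fix -c tools/vsg_rules.yaml -f $(wildcard src/*/hdl/*.vhd)

clean:
	rm -rf build
```

Because every output lands under build/, the clean target is a single rm, and a fresh checkout plus make all is all it takes to reproduce a release.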

Conclusion

Today we’ve taken a look at the project structure and tools that I like to use as part of my FPGA development and maintenance process. In it, I tried to adopt modern software standards for continuous improvement while also making full use of awesome freely available open-source FPGA automation tools.