Blueshell is a toolkit for building networks on chip using Bluespec System Verilog and various Xilinx components. The purpose of Blueshell is to enable experiments with memory architectures at the system level.

It comprises Bluetiles, Bluetree and a number of implementations found within the SVN. You may be interested in the overnight Blueshell builds.

There are currently a lot of files stored within the project. Please see the Blueshell SVN Structure page for more information on the various components.

Two Components

Bluetiles is a Manhattan grid mesh network. It is intended to be used for communication between CPUs, co-processors and I/O devices. Each device on a Bluetiles network is identified by its location (X, Y). Each device may provide multiple services, which are identified by port numbers.
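As a concrete illustration, a Manhattan grid mesh like this is commonly traversed with dimension-ordered (XY) routing: a packet travels along one axis until its X coordinate matches the destination, then along the other. The sketch below is a hypothetical Python model of that idea, not the actual Bluetiles router logic; the direction names and the convention that "S" means increasing Y are assumptions for illustration.

```python
# Hypothetical model of dimension-ordered (XY) routing on a Manhattan
# grid mesh such as Bluetiles. Each device is addressed by its (x, y)
# position; port numbers select a service once the packet arrives.
# The real Bluetiles routers are written in BSV and may differ.

def next_hop(here, dest):
    """Return the output direction a router at `here` would choose
    for a packet addressed to `dest` (both are (x, y) tuples)."""
    hx, hy = here
    dx, dy = dest
    if dx != hx:                       # correct X first...
        return "E" if dx > hx else "W"
    if dy != hy:                       # ...then Y ("S" = increasing y here)
        return "S" if dy > hy else "N"
    return "LOCAL"                     # arrived: deliver to the device

def route(src, dest):
    """Trace the full hop-by-hop path from src to dest."""
    path = [src]
    here = src
    while here != dest:
        x, y = here
        here = {"E": (x + 1, y), "W": (x - 1, y),
                "S": (x, y + 1), "N": (x, y - 1)}[next_hop(here, dest)]
        path.append(here)
    return path
```

With this convention a packet from (0, 0) to (2, 1) travels two hops east and then one hop south, so the path length is always the Manhattan distance between source and destination.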

Bluetree (previously called XPortMC) is a tree network. It is intended to be used for access to a shared memory. The memory appears at the root of the tree, the leaves are CPUs and co-processors, and non-leaf, non-root nodes may be multiplexers or intelligent components.
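To make the tree structure concrete, here is a hypothetical Python model of a two-input multiplexer node: leaves push memory requests upward, and each non-leaf node merges them toward the single memory at the root. Round-robin arbitration is an assumption for illustration; the real Bluetree components are written in BSV and may arbitrate differently.

```python
# Hypothetical model of a Bluetree multiplexer node. CPUs and
# co-processors at the leaves issue memory requests; non-leaf,
# non-root nodes merge the traffic toward the memory at the root.

from collections import deque

class TreeMux:
    """Two-input, one-output merge node with round-robin arbitration."""

    def __init__(self):
        self.inputs = [deque(), deque()]  # request queues from each child
        self.last = 1                     # index of the child served last

    def put(self, child, request):
        """A child subtree submits a request upward."""
        self.inputs[child].append(request)

    def get(self):
        """Forward one pending request toward the root, alternating
        between children so neither subtree is starved."""
        for _ in range(2):
            child = (self.last + 1) % 2
            self.last = child
            if self.inputs[child]:
                return self.inputs[child].popleft()
        return None  # no pending requests on either input
```

Chaining such nodes gives a binary tree of any depth; the root's output feeds the memory controller, and responses would flow back down a mirrored path.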


There are a number of Bluetiles components (filenames: Tile*.bsv) and Bluetree components (filenames: Bluetree*.bsv) found within SVN. Some components are implemented in pure BSV. Others rely on Xilinx IP and therefore mix BSV with VHDL/Verilog and XPS projects. See the Bluetiles and Bluetree pages for details of the various components.

General advice on building Blueshell components

  • The files in each directory are built by first running the "compile" script (to build BSV code) and then running "rebuild" to run the Xilinx tools.
  • Before you can run either of these commands in any directory, you must run the "setup_dir" script in the "blueshell" directory.
  • You can skip any build step by using the overnight Blueshell builds.
  • The difficulty of porting to new FPGA boards is due to the need to recreate the Xilinx components. Generally speaking, BSV and VHDL/Verilog are easily ported, but more complex IP such as Microblaze and various memory controllers involve FPGA-specific configurations which are not portable. Creating these "board support packages" is a dark art.
  • Pre-6th generation FPGA boards, e.g. ML505, are not supported because of our dependence on AXI buses for external connections. For example, ML505 support would require a Bluetree to PLB bridge.

Board Support

We support four FPGA boards with prebuilt sample projects.

From SVN revision 836 onwards, the recommended tool versions are Xilinx 14.6, Vivado 2014.1 and Bluespec-2012.01.A. We do not yet know whether Bluespec upgrades will cause problems (best guess: unlikely) but Xilinx software upgrades inevitably force changes, at least within the XPS projects. The expected build environment is Linux x86_64, as Bluespec is Linux-only.

Digilent Atlys

We have several of these boards, so the Atlys is ideal for developing new designs. To build for this board, first build the Microblaze core for Atlys and then build the board support package.


Xilinx ML605

We have only one of these boards. To build for it, first build the Microblaze core for ML605 and then build the board support package.
(You can skip any build step by using the overnight Blueshell builds.)


Xilinx VC707

We again have only one of these boards. To build for it, first build the Microblaze core for VC707 and then build the board support package.


Xilinx VC709

We again have only one of these boards. To build for it, first build the Microblaze core for VC709 and then build the board support package.

Important Note for 7-series Devices

Larger designs for 7-series devices (i.e. the VC707) typically fail timing under the standard ISE/XST flow. Despite all attempts to make these designs route properly, the ISE tools could not close timing, whereas the Vivado flow gives much better results. Unfortunately, Vivado 2012.4 (which is semi-recommended for use with ISE 14.4) does not infer BlockRAMs from Bluespec-generated code correctly; this is fixed in Vivado 2013.2. Remember: Vivado only supports 7-series devices!

If larger 7-series designs are required, Vivado 2013.2 is the recommended tool flow. Bear in mind that this will automatically update the included XPS project to 14.6. This should not be an issue in future, since we are planning to update all projects to 14.6 in the coming months.

This requires a few more careful considerations. Vivado does not currently support mixed VHDL/Verilog designs using the work library, so the .vhd files must be changed to add a Verilog component declaration to the top-level VHDL files. See Xilinx AR# 47454 for more details. Additionally, UCF constraints are no longer supported and must be replaced by XDC constraints. This is done automatically for all but the top-level UCF.

This has all been implemented using vc707_4x4, although it is marked as nobuild until all machines can have Vivado 2013.2 installed. Please use this design for guidance when porting designs to Vivado for now.

If Vivado is not an option, it is possible to over-constrain the failing design. Specify a clock with an over-constrained period (say, 8 ns instead of 10 ns), synthesise (and check that the report meets the original timing!), then feed the files back into a normal synthesis using SmartGuide. See the "hack_rebuild" flow of the vc707_4x4 project for pointers on this.
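The arithmetic behind the trick is simple: synthesis is asked for a tighter period than the design really needs, and the over-constrained result is acceptable as long as the period it actually achieves still fits the real clock budget. A small hypothetical checker, using the 8 ns / 10 ns figures from the example above:

```python
def meets_real_constraint(achieved_ns: float, real_period_ns: float = 10.0) -> bool:
    """After over-constrained synthesis (e.g. asking for 8 ns), the design
    is usable as long as the period it actually achieves is within the
    real clock budget (10 ns here, i.e. 100 MHz)."""
    return achieved_ns <= real_period_ns

# A run over-constrained to 8 ns that achieves 9.2 ns still meets the
# original 10 ns constraint; one that achieves 10.4 ns does not, even
# though both were synthesised against the tighter 8 ns target.
```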


Testing

Automatic: the "build_all" script builds bitfiles for each supported board by following the steps listed above. After this, the "test_all" script can be used to check each bitfile. It requires only that suitable virtual lab keys be placed in the blueshell directory, named atlys.key, ml605.key and vc707.key. On success, it prints the last line of each test log, which should read "That's all folks". These tests form the core of the overnight build system. Once all the tests are complete, you can generate a test report by running "python bluetest/lunix/" from the "blueshell" directory.

Manual: each of the component directories includes a "run_test" script which programs the FPGA and runs the test program.

Project Basis is intended to support Gary's work. It has 9 CPUs in a 4x3 NoC, Ethernet, and a DDR memory controller. The toplevel file also shows an example of generating the NoC layout procedurally. Another project currently (revision 760) has a 9x1 NoC with 3 CPUs, an application accelerator co-processor, Ethernet and a DDR memory controller.