Part 1: Memory Mapped

Objective

This tutorial contains an introduction to the memory-mapped bus and how to get started using the AXI BRAM controller.

1. Hardware Design

1.1. System Bus

In computer architecture, the system bus is the interconnect that links the CPU with memory and I/O, as illustrated in the following figure. The system bus consists of control, data, and address lines.

Data can travel in both directions, from the CPU to memory or I/O and vice versa, with the CPU acting as the master.

This figure illustrates the FPGA SoC architecture: an FPGA fabric that can be connected to the CPU via the system bus.

There are various types of system buses: APB, AHB, AXI, Avalon, etc. On the Zynq SoC, the system buses used are APB, AHB, and AXI, all of which belong to the ARM Advanced Microcontroller Bus Architecture (AMBA). APB and AHB are used only inside the PS, while AXI can also be used to connect to the PL.

This is a detailed block diagram of the Xilinx Zynq architecture. It consists of the CPU, controller for DRAM and flash memory, input/output, FPGA, and system bus.

1.2. Memory Mapped Access

The method of CPU access to memory and I/O using addresses is called memory mapping. Each DDR memory location and I/O register has its own address.

The number of addresses is determined by the bit width of the address. If the address bit width is 32, there are 2^32 addresses, or 4 GB of address space. If the address bit width is 40, there are 2^40 addresses, or 1 TB.
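
The relation between address width and addressable space can be checked with a short calculation; this is plain Python, independent of any hardware:

```python
# Addressable bytes for a given address bus width (byte-addressed bus)
def address_space_bytes(width: int) -> int:
    return 2 ** width

GiB = 2 ** 30
TiB = 2 ** 40
print(address_space_bytes(32) // GiB)  # 4  (4 GB, as on the Zynq-7000)
print(address_space_bytes(40) // TiB)  # 1  (1 TB, as on Zynq UltraScale+)
```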

The following is the memory map on the Zynq-7000:

The Zynq-7000 still uses a 32-bit address width, so the maximum total address space is 4 GB. Meanwhile, the Zynq UltraScale+ uses a 40-bit address width, giving 1 TB.

  • Location from 0x0000_0000 for DDR memory

  • Location from 0x4000_0000 for AXI slave port 0 in PL

  • Location from 0x8000_0000 for AXI slave port 1 in PL

  • Location from 0xE000_0000 for IO peripherals such as UART, USB, Ethernet, etc.
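
The memory map above can be sketched as a simple lookup table. Only the base addresses come from the list; the region end boundaries below are assumptions for illustration (the real window sizes are configurable):

```python
# Base addresses from the Zynq-7000 memory map above; the end of each
# region is an illustrative assumption, not the exact window size.
ZYNQ7000_MEMORY_MAP = [
    (0x0000_0000, 0x4000_0000, "DDR memory"),
    (0x4000_0000, 0x8000_0000, "AXI slave port 0 (PL)"),
    (0x8000_0000, 0xE000_0000, "AXI slave port 1 (PL)"),
    (0xE000_0000, 0x1_0000_0000, "I/O peripherals"),
]

def region_of(addr: int) -> str:
    """Return the name of the region an address falls into."""
    for start, end, name in ZYNQ7000_MEMORY_MAP:
        if start <= addr < end:
            return name
    raise ValueError(f"address {addr:#x} is outside the 4 GB map")

print(region_of(0x4000_1000))  # AXI slave port 0 (PL)
```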

In AXI, components are known as masters and slaves. The master initiates read and write transactions; the slave can only respond to them.

The master is usually the CPU, but custom modules that we create in the FPGA can also act as masters, for example when an FPGA module must read from or write to DDR memory.

1.3. Block Memory (BRAM)

Block RAM (BRAM) is a dedicated block on the FPGA. This means that BRAM does not use flip-flop or LUT resources but its own dedicated hardware. BRAM capacity is very limited; on the Kria, it is only 5.1 Mb (637.5 KB).

We can configure block RAM in terms of data width and number of words (depth). The data width is usually limited to multiples of 8 (8, 16, 32, 64) if the BRAM will be connected to the PS.
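
As a quick sanity check, the capacity of a BRAM configuration is simply width times depth. The helper below is our own, for illustration only:

```python
# Capacity in bytes of a BRAM configured as width (bits) x depth (words)
def bram_bytes(width_bits: int, depth: int) -> int:
    return width_bits * depth // 8

print(bram_bytes(32, 2048))  # 8192 -> an 8 KB block
print(bram_bytes(8, 1024))   # 1024 -> a 1 KB block
```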

Single-port BRAM has only one interface for reading and writing, so in a given clock cycle it can either read or write, not both.

Dual-port BRAM has two independent ports. For example, port A can read from address 0 in the same clock cycle in which port B writes to address 200.

RAM blocks can be added to the design using block design (GUI) or with Verilog/VHDL (Xilinx Parameterized Macros, XPM) code.

1.4. System Design

This system design consists of a block memory (BRAM) that is implemented on the FPGA. The memory is connected to the AMBA interconnect via the General-Purpose AXI Ports, which are based on the AXI4 protocol.

In more detail, the Zynq Ultrascale+ IP block shows these ports. There are two master AXI ports: M_AXI_HPM0_FPD and M_AXI_HPM1_FPD. In Zynq-7000, these ports are named M_AXI_GP0 and M_AXI_GP1.

These ports can be configured in the IP configuration dialog.

This is our system block diagram, which consists of PS and BRAM. Here, we use the master AXI_GP_0 to connect to the AXI BRAM controller via the AXI Interconnect. The AXI Interconnect IP connects one or more AXI memory-mapped master devices to one or more memory-mapped slave devices. The AXI BRAM controller translates the AXI4 protocol to the BRAM protocol.

This is the final block design diagram as shown in Vivado.

We can change the memory-mapped base address of this AXI BRAM controller and the BRAM size in the Address Editor:

2. Software Design

2.1. Hardware-Software Partition

This figure shows the hardware-software partition diagram of the PYNQ framework. The PYNQ framework consists of hardware, software, and applications. The hardware is our design that is implemented on the FPGA. The software consists of a Linux OS kernel and PYNQ libraries, together with applications that run on the ARM CPU.

In order to access our BRAM in the FPGA from our application, we can use the MMIO class from the PYNQ library. It performs memory-mapped access through the master AXI port to the FPGA, using the /dev/mem interface of the Linux kernel.
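
As a rough sketch of what such an access path looks like (not PYNQ's actual code), a physical address range can be mapped into user space with Python's mmap module on /dev/mem. This requires root privileges and real hardware, so the mapping function below is only defined, never called here:

```python
import mmap
import os

PAGE = mmap.PAGESIZE

def page_align(phys_addr: int):
    """Split a physical address into a page-aligned base and an offset,
    since mmap can only map at page granularity."""
    base = phys_addr & ~(PAGE - 1)
    return base, phys_addr - base

def map_physical(phys_addr: int, length: int) -> mmap.mmap:
    """Map a physical address range via /dev/mem (needs root).
    This sketches, under our assumptions, the kind of mapping
    that memory-mapped I/O libraries perform internally."""
    base, offset = page_align(phys_addr)
    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    try:
        # Map from the aligned base; callers then index from `offset`.
        return mmap.mmap(fd, length + offset, mmap.MAP_SHARED,
                         mmap.PROT_READ | mmap.PROT_WRITE, offset=base)
    finally:
        os.close(fd)
```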

2.2. User Application

To access the BRAM from the user application, we can create a Python program. First, we create an MMIO object, passing the base address and address range as its input parameters.

from pynq import MMIO

# Access to memory map of the AXI BRAM controller
ADDR_BASE = 0xA0000000
ADDR_RANGE = 0x2000
bram_obj = MMIO(ADDR_BASE, ADDR_RANGE)

To write data to a location in BRAM, we can use the write method of the MMIO object.

# Write data 168 to BRAM address 0x0
bram_obj.write(0x0, 168)

To read data from a location in BRAM, we can use the read method of the MMIO object.

# Read data from BRAM address 0x0
bram_obj.read(0x0)
168
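
Note that MMIO offsets are byte addresses, so consecutive 32-bit words sit 4 bytes apart. The following is a small sketch that fills several words and reads them back, assuming the bram_obj created above; the helper names here are ours, not part of the PYNQ API:

```python
# 32-bit word i lives at byte offset 4*i in the memory-mapped region
def word_offset(index: int) -> int:
    return 4 * index

def fill_and_check(bram, n=8):
    """Write n test words into BRAM, then read them back.
    `bram` is assumed to be an MMIO object like bram_obj above."""
    for i in range(n):
        bram.write(word_offset(i), i * 10)
    return [bram.read(word_offset(i)) for i in range(n)]
```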

3. Full Step-by-Step Tutorial

This video contains detailed steps for making this project.

4. Conclusion

In this tutorial, we covered some of the basics of SoC FPGAs, PYNQ, and memory-mapped access.

