Using both DMA and separate AXI Slave as PCIe requester? #40
I originally chose to use pcie_us_axi_dma [...]. If I refactor my design to use [...].

The design itself needs to utilize the DMA to move data quickly between the host and device, but the AXI slave I'm now trying to add is meant to allow any other AXI master in the design (of which there are many; this is a toy GPU!) to access the CPU's memory space through a page-table mapping similar to GART.

(As an aside, I just wanted to say how awesome your verilog-pcie and verilog-axi libraries are! I'm using them extensively in my design, and after writing some custom SV wrappers for them, they've been nothing but easy to use and a massive time saver compared to dealing with Xilinx's versions. My familiarity with PCIe is pretty limited, and I was still able to drop verilog-pcie into my design and get things stood up incredibly quickly. The benefit these libraries have brought to the HDL community cannot be overstated!)
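As a rough illustration of the GART-style mapping mentioned above, here is a minimal SystemVerilog sketch of a page-table translation stage. The module name, ports, and parameters are made up for illustration and are not part of verilog-pcie; a real design would also need to handle misses, invalidation, and bursts that cross page boundaries.

```systemverilog
// Hypothetical GART-style translation stage (illustration only, not a verilog-pcie module).
// The device-side address is split into a page number and offset; the page number
// indexes a small page table holding the corresponding host (PCIe) page address.
module gart_translate #(
    parameter ADDR_WIDTH      = 32,   // device-side AXI address width
    parameter HOST_ADDR_WIDTH = 64,   // host (PCIe) address width
    parameter PAGE_SHIFT      = 12,   // 4 KiB pages
    parameter ENTRIES         = 1024  // number of page-table entries
) (
    input  logic                                  clk,

    // device-side address to translate
    input  logic [ADDR_WIDTH-1:0]                 dev_addr,
    input  logic                                  dev_addr_valid,

    // translated host address, one cycle later
    output logic [HOST_ADDR_WIDTH-1:0]            host_addr,
    output logic                                  host_addr_valid,

    // simple write port so the driver can program the table
    input  logic                                  table_wr_en,
    input  logic [$clog2(ENTRIES)-1:0]            table_wr_index,
    input  logic [HOST_ADDR_WIDTH-PAGE_SHIFT-1:0] table_wr_page
);

    // page table: device page number -> host page number
    logic [HOST_ADDR_WIDTH-PAGE_SHIFT-1:0] page_table [ENTRIES];

    logic [PAGE_SHIFT-1:0]                 offset_reg;
    logic [HOST_ADDR_WIDTH-PAGE_SHIFT-1:0] host_page_reg;

    always_ff @(posedge clk) begin
        if (table_wr_en) begin
            page_table[table_wr_index] <= table_wr_page;
        end

        // translate: look up the page, carry the offset through unchanged
        host_page_reg   <= page_table[dev_addr[PAGE_SHIFT +: $clog2(ENTRIES)]];
        offset_reg      <= dev_addr[PAGE_SHIFT-1:0];
        host_addr_valid <= dev_addr_valid;
    end

    assign host_addr = {host_page_reg, offset_reg};

endmodule
```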
I've currently got a design set up using the pcie_us_axi_dma as the sole user of the PCIe requester interface. This works just as I'd expect, and I can DMA between the device and the host, but the design has changed and now calls for the device to access the host's memory using individual AXI transactions from a separate AXI slave module.

I realize that having a DMA lets me do essentially the same thing as having the device access CPU memory directly, but the behavior of parts of the system external to the design is forcing my hand a bit here. The device must be able to deal with AXI transactions generated by the design that potentially cross the PCIe interface and end up in the host's address space.
Is there any way using the library as it is now to split the RQ and RC interfaces and have them shared by two separate users? Are there any plans to add a drop-in AXI slave module that would let the design treat the entire PCIe host address space as an AXI bus?
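One conceptual way to share the requester interface is to give each client a disjoint tag range, arbitrate whole requests onto RQ, and demultiplex RC completions back to the issuing client by tag. The sketch below shows only the RC-side demux; the module and port names are hypothetical (this is not an existing verilog-pcie component), and TAG_OFFSET assumes the tag sits at bits [71:64] of the UltraScale RC descriptor, which should be checked against the Xilinx documentation. A usable version would also need an RQ-side arbiter that keeps requests atomic and some output buffering.

```systemverilog
// Hypothetical RC demux by tag range (illustration only, not a verilog-pcie module).
// Requester 0 is assigned tags 0..127 and requester 1 tags 128..255; the tag MSB
// in the completion descriptor on the first beat selects the output, and the
// selection is held until the end of the packet.
module rc_tag_demux #(
    parameter DATA_WIDTH = 256,
    parameter KEEP_WIDTH = DATA_WIDTH/32,
    parameter TAG_OFFSET = 64            // assumed tag position in the RC descriptor
) (
    input  logic                   clk,
    input  logic                   rst,

    // RC stream from the PCIe hard block
    input  logic [DATA_WIDTH-1:0]  s_axis_rc_tdata,
    input  logic [KEEP_WIDTH-1:0]  s_axis_rc_tkeep,
    input  logic                   s_axis_rc_tvalid,
    output logic                   s_axis_rc_tready,
    input  logic                   s_axis_rc_tlast,

    // output 0: DMA engine, output 1: AXI slave bridge
    output logic [DATA_WIDTH-1:0]  m_axis_rc_tdata  [2],
    output logic [KEEP_WIDTH-1:0]  m_axis_rc_tkeep  [2],
    output logic                   m_axis_rc_tvalid [2],
    input  logic                   m_axis_rc_tready [2],
    output logic                   m_axis_rc_tlast  [2]
);

    logic sel;      // selection latched for the current packet
    logic mid_pkt;  // inside a multi-beat completion
    logic sel_cur;

    // on the first beat, pick the output from the tag MSB; hold it until tlast
    assign sel_cur = mid_pkt ? sel : s_axis_rc_tdata[TAG_OFFSET+7];

    always_ff @(posedge clk) begin
        if (rst) begin
            mid_pkt <= 1'b0;
        end else if (s_axis_rc_tvalid && s_axis_rc_tready) begin
            sel     <= sel_cur;
            mid_pkt <= !s_axis_rc_tlast;
        end
    end

    // back-pressure comes from whichever output is currently selected
    assign s_axis_rc_tready = m_axis_rc_tready[sel_cur];

    for (genvar i = 0; i < 2; i++) begin : g_out
        assign m_axis_rc_tdata[i]  = s_axis_rc_tdata;
        assign m_axis_rc_tkeep[i]  = s_axis_rc_tkeep;
        assign m_axis_rc_tlast[i]  = s_axis_rc_tlast;
        assign m_axis_rc_tvalid[i] = s_axis_rc_tvalid && (sel_cur == i);
    end

endmodule
```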