SDRAM driver for P2
David Betz
I've been looking at using the same interface for all of the PropGCC external memory drivers and would like to adopt the one currently used by Chip's SDRAM driver. I have a couple of issues with it though.
First, there is no way to pass any configuration information to the driver. The only parameter passed is the address of the array of mailboxes used to communicate read/write requests. This means there is no way to tell the driver, for example, which pins to use. Also, if I use the same interface for some of the other PropGCC external memories, I may need to pass in additional information, like some indication of how chip select is done. The C3 uses a different scheme than the standard "one CS pin per device", and so do Bill Henning's boards. I'd like to be able to pass in additional configuration information. Would it be possible to have the "S" register point to an array of initialization values, the first of which is the address of the mailboxes, with the rest defined by the specific driver?
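Something along these lines is what I have in mind. This is only a sketch; the struct name and fields are made up to illustrate the idea, not an existing PropGCC type:

#include <stdint.h>

/* Hypothetical init block the "S" register would point at.  Only the
   first field (the mailbox array address) is common to every driver;
   the remaining fields are defined by the specific driver and can be
   ignored by drivers that don't need them. */
typedef struct {
    volatile uint32_t *mailboxes;   /* address of the mailbox array (required, always first) */
    uint32_t cs_scheme;             /* e.g. 0 = one CS pin per device, 1 = C3-style */
    uint32_t pin_config;            /* which pins the driver should use */
    uint32_t extra[4];              /* further driver-defined values */
} xmem_init_t;

The SDRAM driver could read just the first long and ignore the rest, while a driver for the C3 or Bill Henning's boards would pick up the chip-select scheme from the later fields.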
My second observation is that the hub memory buffers used to read/write external memory have to be 16-byte aligned under the current interface. Could we move the 4 control bits to the external address rather than the hub address? For a cache driver the external addresses are always aligned on a cache-line boundary, so requiring 16-byte alignment there is not really a problem, but it would be slightly cumbersome to require that the hub addresses be aligned.
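To make the difference concrete, here is a rough sketch of the two encodings. The helper names and the exact bit layout are assumptions on my part, not the driver's actual format:

#include <stdint.h>

/* Current scheme: the 4 control bits live in the low bits of the hub
   address, which is why the hub buffer must be 16-byte aligned. */
static inline uint32_t cmd_in_hub_addr(uint32_t hub_addr, uint32_t ctrl)
{
    return (hub_addr & ~0xFu) | (ctrl & 0xFu);   /* hub_addr must be 16-byte aligned */
}

/* Proposed scheme: the 4 control bits live in the low bits of the
   external (SDRAM) address instead.  Cache lines are already aligned
   on the external side, so nothing is lost, and the hub buffer no
   longer needs any special alignment. */
static inline uint32_t cmd_in_ext_addr(uint32_t ext_addr, uint32_t ctrl)
{
    return (ext_addr & ~0xFu) | (ctrl & 0xFu);   /* ext_addr is cache-line aligned anyway */
}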
Thanks,
David
Comments
You certainly can change the beginning of the driver code to pass additional parameters, but making the driver configurable is not so easy. I think you would need to patch all the instructions that access the ports directly. The timing is very tight, so it will be hard to make a driver that allows all kinds of SDRAM connections (16/32-bit bus, different ports, different control-line assignments, and so on).
Andy
Thanks for the info on the requirement for quad alignment of hub buffers. I guess that isn't such a bad constraint. Also, I wasn't so much suggesting that the SDRAM driver itself be configurable; I just meant that other drivers might require configuration, and I'd like PropGCC to be able to use the same interface for all of its external memory drivers. The additional parameters could simply be ignored by the SDRAM driver.
It's impressive the lengths to which the C standard goes to avoid prescribing OS-specific behavior in the runtime, but it might help to approach the subject from that perspective. In that context, the SDRAM driver is a critical resource to be shared by concurrent processes, subject to monopolization by the "kernel" as needed (for driving video, audio, etc.).
I foresee a hybrid design where the video and audio drivers have their own copy of the SDRAM driver built in for performance reasons, but the OS also provides a DMA-list type subsystem, similar to how the PS3's Cell processor fetches data on behalf of the SPUs (where it is done in hardware). This would give other languages/runtimes the ability to handle the SDRAM in their own way. An SDRAM-DMA driver could even support some cooperative priority/memory-mapping schemes. It would be easily subverted, but it would be a good electric fence.
Each of those SDRAM drivers (video, audio, mixed-use application DMA) would need to coordinate, and in the case of apps on cogs putting DMA requests in the queue, those apps will need to coordinate access to the DMA queue data structures to keep from clobbering each other's data.
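As a rough sketch of what that coordination could look like, assuming PropGCC's hardware lock primitives (locknew/lockset/lockclr from propeller.h); the queue layout and function names here are hypothetical:

#include <stdint.h>
#include <propeller.h>   /* locknew/lockset/lockclr */

/* Hypothetical DMA request and shared queue layout. */
typedef struct {
    uint32_t ext_addr;   /* external (SDRAM) address */
    uint32_t hub_addr;   /* hub buffer address */
    uint32_t count;      /* bytes to transfer */
    uint32_t flags;      /* read/write, priority, ... */
} dma_req_t;

#define DMA_QUEUE_SIZE 8

typedef struct {
    volatile uint32_t head, tail;        /* ring buffer indices */
    dma_req_t slots[DMA_QUEUE_SIZE];
} dma_queue_t;

static int dma_lock;                     /* hardware lock id, from locknew() at startup */
static dma_queue_t dma_queue;

/* Enqueue a request; returns 0 on success, -1 if the queue is full. */
int dma_enqueue(const dma_req_t *req)
{
    uint32_t next;
    int ret = -1;

    while (lockset(dma_lock))            /* spin until this cog owns the lock */
        ;
    next = (dma_queue.head + 1) % DMA_QUEUE_SIZE;
    if (next != dma_queue.tail) {        /* room available */
        dma_queue.slots[dma_queue.head] = *req;
        dma_queue.head = next;
        ret = 0;
    }
    lockclr(dma_lock);                   /* release so other cogs can queue requests */
    return ret;
}

The SDRAM-DMA cog would dequeue under the same lock, so video, audio, and application cogs could all share one queue without stepping on each other.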
This doesn't address the question of how that memory gets allocated, only how it could be read/written in a coordinated way, which is usually up to the OS/runtime to define in malloc and its variants. What were you thinking along those lines? malloc_sdram()? malloc_long() a la Win16?
tl;dr - I think standardizing the locking behavior of the SDRAM I/O is a good, minimal starting point.