Propeller II update - BLOG - Page 100 — Parallax Forums

Propeller II update - BLOG


Comments

  • SapiehaSapieha Posts: 2,964
    edited 2013-11-28 16:12
    Hi Chip.

    I have bad feelings about that idea!
  • jmgjmg Posts: 15,155
    edited 2013-11-28 16:27
    cgracey wrote: »
    For this to work efficiently, we might need to expand the number of hub slots to 10, allocating turns like so:

    0) cog0
    1) cog1
    2) cog2
    3) cog3
    4) ddr2
    5) cog4
    6) cog5
    7) cog6
    8) cog7
    9) ddr2

    This would give the DDR2 40MHz access to hub RAM with a system clock of 200MHz. If it could transfer 32 bytes (8 longs) per access, that would be 32x40MHz = 1280MB/s transfer. A 1080p/60 screen needs 594MB/s, so we would be able to write and read a whole screen in a screen period.
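    The arithmetic in the quoted scheme can be sanity-checked in a few lines of Python (a sketch; the 594MB/s figure works out if it assumes the full 148.5MHz 1080p60 pixel clock, blanking included, at 32 bits per pixel):

```python
# Hub-slot bandwidth check for the proposed 10-slot rotation.
sys_clk = 200e6          # system clock, Hz
slots = 10               # hub slots per rotation (8 cogs + 2 DDR2)
ddr2_slots = 2           # slots granted to the DDR2 controller
bytes_per_access = 32    # 8 longs per hub access

ddr2_access_rate = sys_clk / slots * ddr2_slots       # 40 MHz
ddr2_bandwidth = ddr2_access_rate * bytes_per_access  # bytes/s

# 1080p60 with full blanking: 2200 x 1125 pixels at 60 Hz = 148.5 MHz
pixel_clk = 2200 * 1125 * 60
screen_bandwidth = pixel_clk * 4                      # 32 bpp

print(ddr2_access_rate / 1e6)   # 40.0  (MHz)
print(ddr2_bandwidth / 1e6)     # 1280.0  (MB/s)
print(screen_bandwidth / 1e6)   # 594.0  (MB/s)
```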

    That's sounding OK, but users might also want to be fetching code, so you may need 4 slots of 12, perhaps?

    The DDR manager needs to read larger blocks, and queue them for passing to the hub-time-slots, as there is reasonable overhead in the address-preambles, so the larger the block is, the less % the address-preamble costs.

    The DDR manager would work like a DMA engine, and some co-operative linking of 4 slots could allow more flexible bandwidth mappings.

    I think that just costs an adder, so after [Start][Count] you skip to [Start+Offset][Count] if you want to interleave two slots for twice the bandwidth (or 3 for 3x)
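    The [Start][Count] -> [Start+Offset][Count] linkage described above can be sketched as a tiny address generator (the function and its names are hypothetical, purely illustrative of the interleave idea):

```python
def interleaved_bursts(start, count, offset, ways):
    """Yield (address, count) burst descriptors: each extra 'way'
    adds another slot at start + n*offset, multiplying bandwidth."""
    for n in range(ways):
        yield (start + n * offset, count)

# Two-way interleave: the second slot picks up at start + offset.
print(list(interleaved_bursts(0x1000, 64, 0x40, 2)))
# [(4096, 64), (4160, 64)]
```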
  • Cluso99Cluso99 Posts: 18,069
    edited 2013-11-28 16:53
    jmg wrote: »
    That's sounding OK, but users might also want to be fetching code, so you may need 4 slots of 12, perhaps?

    The DDR manager needs to read larger blocks, and queue them for passing to the hub-time-slots, as there is reasonable overhead in the address-preambles, so the larger the block is, the less % the address-preamble costs.

    The DDR manager would work like a DMA engine, and some co-operative linking of 4 slots could allow more flexible bandwidth mappings.

    I think that just costs an adder, so after [Start][Count] you skip to [Start+Offset][Count] if you want to interleave two slots for twice the bandwidth (or 3 for 3x)
    I was thinking more about transferring direct to the cog AUX/CLUT memory. This way we could utilise, say, 4 of the 7 unused hub cycles. If we chose 2 cogs only (the ones closest to the DDR2 pins), the new 128-bit bus would be minimised. I suspect the CLUT memory is not fast enough to go 1600MB/s.
  • DL7PNPDL7PNP Posts: 18
    edited 2013-11-28 16:57
    I like the idea of big and fast DDR2 memory, but BGA sounds like a handicap, especially in the prototype and hobby section.
    Is there a possibility to keep the TQFP-128 package and use Port C for the DDR handling (with hardware support)?
    Could some of the GPx and VPx pins be used additionally?

    Then we would have either 92 I/Os OR 64 I/Os with DDR memory.
  • jmgjmg Posts: 15,155
    edited 2013-11-28 17:14
    Cluso99 wrote: »
    I was thinking more about transferring direct to the cog AUX/CLUT memory. This way we could utilise, say, 4 of the 7 unused hub cycles. If we chose 2 cogs only (the ones closest to the DDR2 pins), the new 128-bit bus would be minimised. I suspect the CLUT memory is not fast enough to go 1600MB/s.

    The concern there is COG memory is very precious, and DDR memory likes large blocks, so you take a bite out of already limited space.

    The plus side of DDR-COG pathway is you get closer to execute-in-place code models, but it is less ideal for Data movement.

    The wide transfer Chip was talking about (256 bits?) is probably the best way to manage this, then SW design can determine how much is moved into COG space, and it keeps COGs on an equal footing.
    (plus bumps COG-COG bandwidth, as a bonus )

    Much of the video display flow, is from DDR to some form of video FIFO, but it needs to leave enough spare time-windows for burst writes to DDR.Video space, and burst reads from DDR.Code space.
  • Invent-O-DocInvent-O-Doc Posts: 768
    edited 2013-11-28 17:37
    Hi Chip,

    It does not make sense to have all that DAC circuitry just to connect RAM. I've been wondering that for some time. I'm not sure everyone needs a huge amount of RAM. As a microcontroller, the smaller RAM and all those super pins sound cool. For a computer, if you had some simple digital lines for external RAM and maybe more on-chip RAM, great.

    My advice is this: get the chip, Prop 2, into production. If you are generating income from it, there are lots of cool ideas for Prop 2.5 or 3. Don't design Prop 3 now and skip 2. After 7 years, Prop 1 won't keep bringing in higher income forever. Go for sales to feed your future designs.
  • cgraceycgracey Posts: 14,133
    edited 2013-11-28 20:05
    Invent-O-Doc wrote: »
    Hi Chip,

    It does not make sense to have all that DAC circuitry just to connect RAM. I've been wondering that for some time. I'm not sure everyone needs a huge amount of RAM. As a microcontroller, the smaller RAM and all those super pins sound cool. For a computer, if you had some simple digital lines for external RAM and maybe more on-chip RAM, great.

    My advice is this: get the chip, Prop 2, into production. If you are generating income from it, there are lots of cool ideas for Prop 2.5 or 3. Don't design Prop 3 now and skip 2. After 7 years, Prop 1 won't keep bringing in higher income forever. Go for sales to feed your future designs.

    Remember how many times we've discussed giving cogs more than one hub slot, but it always gets shot down because of the indeterminacy that would result?

    Well, maybe letting a DDR2 controller use any available slots would be the way to give it the bandwidth it needs, and not disrupt cog accesses, or steal cog slots. There would need to be some non-hub-slot means of testing when a cog's request has been completed. As long as no more than half the cogs use their hub slots in a given round, the DDR2 controller would be able to read and write data at the max rate (800M words/s). DDR2 only has burst modes of 4 or 8 transactions, so if we used 8-word bursts, that would match a 4-long quad every 2 system clocks, or every other hub cycle. The DDR2 controller would need to buffer its own quad and read/write it on the next available hub cycle. I think this is the way to fit it in painlessly. I think cog code would have to be written in a way that avoids using every available slot, so that the DDR2 controller would get some cycles in.
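    The burst arithmetic above can be checked: a x16 DDR2 device at a 400MHz clock transfers on both edges (800M words/s), so an 8-word burst is 16 bytes, one 4-long quad, and spans exactly 2 cycles of a 200MHz system clock (a sketch of the numbers only, not of any actual controller):

```python
# Check: an 8-word DDR2 burst matches one 4-long quad every 2 system clocks.
ddr2_rate = 800e6        # 16-bit words/s (400 MHz clock, both edges)
burst_words = 8          # DDR2 burst length
word_bytes = 2           # x16 device

burst_bytes = burst_words * word_bytes   # 16 bytes = 4 longs = 1 quad
burst_time = burst_words / ddr2_rate     # 10 ns per burst
sys_clks = burst_time * 200e6            # system clocks per burst

print(burst_bytes, sys_clks)   # 16 2.0
```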
  • potatoheadpotatohead Posts: 10,260
    edited 2013-11-28 20:22
    Feels like feature creep to me.

    Some thoughts:

    Setting aside the technical merits of the change for a moment, there is the balance between the value of the product once released and how that value diminishes over time. Now Parallax enjoys a long product cycle, so the typical equation isn't entirely appropriate, but still quite useful. As much as half the value returned by a product occurs during the early part of the life cycle. The early majority typically pays a full margin and will do so for a considerable time.

    The later one gets to market, the higher the impact is.

    Now adding features bends the market window some, roping in use cases and prospects that would not be there without the feature and that's worth a lot, but the design as a whole serves as an anchor, limiting how much of this makes sense.

    On that basis, I worry.

    Technically, I think a dedicated RAM connection makes a lot of sense. I'm not all that excited about dedicated COGS. Throughout this, we need a COG to be a COG, or we've stepped away from Propeller and toward something else.

    Dedicating the pins to RAM makes a lot of sense. Really, we end up with two types of pins. People who use the RAM need 'em, and those that don't can still use them for all sorts of things. This is a bit of a step away from symmetry, but I think a good one. Honestly, I see this happening to the design somewhere, somehow in the end.

    The current design won't execute code all that quickly. Some have said a few MIPS at best with complex caching in the general case. Overlays and such, where it's planned out can move more quickly, but that's also going to be case by case too. This will limit big applications.

    This new change seems like it will seriously improve that, and there is a trade-off there between driving a graphics screen or working with big data and executing code, etc... Rather than over-complicate it, I would favor the simplest, least impact thing that lets people decide what they want to do and as we know, software gets better over time. We may well find it's better than we think, particularly if it's simple and flexible as many things Propeller are.

    Going back to the feature chasing discussion, having fast, large RAM puts the Prop 2 well into micro-computer territory, and the comments about an application processor as opposed to a micro-controller, OS or not, ring true to me. This may well bend the market window enough to make it worth doing. In a simple sense, enough value is added to deliver high returns over a greater fraction of the product life cycle.

    Again, I'm skeptical about this.

    Finally, it's not that I don't want the chip to be excellent. I do. We all do. And perhaps this is an ordinary artifact of this open FPGA test, build, test cycle too. Can we have a discussion about possible use cases and how those compare to other competing products out there? Can we have a second one about the current Parallax market niches and how this would impact them?

    Had the error not been made, we may well be working with what we thought were awesome P2 chips. There was some minor league too little too late discussion, but there was a lot more, "can't wait, it's awesome" discussion too.

    Doing it because it would be that much cooler doesn't make a lot of sense, unless there is some basic business target to quantify the coolness. And I hate to say it, because Parallax hasn't worked that way, and perhaps it doesn't have to a lot of the time, but right now with this investment, what I see as a prime market window potentially beginning to close or shift away, I'm seriously torn between maximizing this design and moving to get P2 chips out there.

    It seems to me, given what we know now and how we've come to work in this way, subsequent designs won't take so many years. In fact, making sure they don't really should be on the table right now, because doing that will have made this entire effort seriously worth it.

    Say we picked two year windows for subsequent designs. Think about that for a minute and how it could work with income happening with the work done to date. A new release could be funded nicely, given this design is actually potent enough to deliver the revenue needed.

    Is it? I don't know. Do any of you?

    Again, I want it to kick Smile and take names. I want it because I want to see the success, everybody benefit, have fun, get design wins and fund this group very nicely until we get old. It would be a shame to botch the first iteration and never see what could come to pass.
  • potatoheadpotatohead Posts: 10,260
    edited 2013-11-28 20:30
    cgracey wrote: »
    Remember how many times we've discussed giving cogs more than one hub slot, but it always gets shot down because of the indeterminacy that would result?

    I think this makes a TON of sense.

    I really don't feel good about the irregular HUB cycles. However, if the DDR has a controller of sorts, and it can watch the HUB activity and just stuff things in? Yeah. Would such a controller handle requests then, like a DMA, with data just appearing in the HUB when it can? Would those be queued, or not? Maybe one COG can manage this, leaving people to write their own kernel for that COG to service requests in ways that make sense to them.

    Edit, just saw buffered, so that means a double buffer at the least?
  • jmgjmg Posts: 15,155
    edited 2013-11-28 20:57
    cgracey wrote: »
    Well, maybe letting a DDR2 controller use any available slots would be the way to give it the bandwidth it needs, and not disrupt cog accesses, or steal cog slots. There would need to be some non-hub-slot means of testing when a cog's request has been completed. As long as no more than half the cogs use their hub slots in a given round, the DDR2 controller would be able to read and write data at the max rate (800M words/s). DDR2 only has burst modes of 4 or 8 transactions, so if we used 8-word bursts, that would match a 4-long quad every 2 system clocks, or every other hub cycle. The DDR2 controller would need to buffer its own quad and read/write it on the next available hub cycle. I think this is the way to fit it in painlessly. I think cog code would have to be written in a way that avoids using every available slot, so that the DDR2 controller would get some cycles in.

    Sounds good; it would be rare for all COGs to use all slots on a sustained basis, and the DDR pathways will have FIFOs in video modes to tolerate some access jitter.
    Code that could not tolerate jitter would stay loaded permanently and be given time-slots.

    How many DDR channels would you support (Address,ByteCount,Offset) and how would a COG get control of one (or more) of those ?
    What about refresh ?
  • YanomaniYanomani Posts: 1,524
    edited 2013-11-28 21:01
    cgracey wrote: »
    Remember how many times we've discussed giving cogs more than one hub slot, but it always gets shot down because of the indeterminacy that would result?

    Well, maybe letting a DDR2 controller use any available slots would be the way to give it the bandwidth it needs, and not disrupt cog accesses, or steal cog slots. There would need to be some non-hub-slot means of testing when a cog's request has been completed. As long as no more than half the cogs use their hub slots in a given round, the DDR2 controller would be able to read and write data at the max rate (800M words/s). DDR2 only has burst modes of 4 or 8 transactions, so if we used 8-word bursts, that would match a 4-long quad every 2 system clocks, or every other hub cycle. The DDR2 controller would need to buffer its own quad and read/write it on the next available hub cycle. I think this is the way to fit it in painlessly. I think cog code would have to be written in a way that avoids using every available slot, so that the DDR2 controller would get some cycles in.

    Chip

    Perhaps another port could be crafted, behaving in a similar fashion to port D, but allowing COGs to directly interrogate the RAM controller.
    One could pass, using QUAD operations, enough parameters through that port for the RAM controller to act.
    Answers to completed requests could be received back through the same means, including some progress status, so the requesting COG could evaluate how long it will be waiting until the full operation completes.
    If it needs some air, it can breathe a little without cluttering all its HUB windows.
    And in truth, a COG that has a pending request for some memory block to be serviced by the RAM controller isn't expected to access the HUB very often, I presume, until its request is fully served.
    So there is the window to be taken by the RAM controller, and in the event of using OCTS between the RAM controller and HUB memory, there is also ample room for some performance gain, minimizing bottlenecks.

    Yanomani
  • YanomaniYanomani Posts: 1,524
    edited 2013-11-28 21:20
    I just realized that if there were any means to measure the whole HUB slot usage, accessible to any COG that needs this information, its software could plan its own access-request demands to minimize the whole-system impact.
    In such a way, even trial-and-error procedures could be validated or denied.
    IMHO, a somewhat adaptive memory-demand control could be crafted, based on such a process.
    This will not have any impact in case some COG needs greater memory bandwidth, but it can know the result of each strategy.

    Yanomani
  • jmgjmg Posts: 15,155
    edited 2013-11-28 21:25
    potatohead wrote: »
    Would such a controller handle requests then, like a DMA, with data just appearing in the HUB when it can?

    I'm not sure exactly what Chip plans, but I picture a DDR controller state engine that is very much like a multi-channel DMA.
    N(tbf) copies of
    - Hub Start Address
    - DDR Start Address
    - Byte count
    - Read/Write/(bandwidth/priority?) controls
    With the smallish burst modes of 4 or 8 transactions mentioned, >= two of these would allow Video and Code to stay granular and responsive, and keep out of each others way, on a software basis.
    Combined with the data flows, would be a refresh handler.
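    That channel list maps naturally onto a per-channel descriptor; here is a hypothetical sketch in Python (all field names are assumptions, nothing Chip has specified):

```python
from dataclasses import dataclass

@dataclass
class DdrDmaChannel:
    """One DMA channel of the imagined DDR controller state engine."""
    hub_start: int       # hub RAM start address
    ddr_start: int       # DDR2 start address
    byte_count: int      # bytes remaining to transfer
    write: bool          # True = hub -> DDR2, False = DDR2 -> hub
    priority: int = 0    # optional bandwidth/priority control

# Two channels keep video and code fetch out of each other's way:
video = DdrDmaChannel(hub_start=0x0000, ddr_start=0x00000,
                      byte_count=7680, write=False)
code = DdrDmaChannel(hub_start=0x2000, ddr_start=0x80000,
                     byte_count=512, write=False, priority=1)
print(video.byte_count, code.priority)
```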
  • rod1963rod1963 Posts: 752
    edited 2013-11-28 21:36
    I don't know if morphing the P2 into a hybrid multimedia/microprocessor is a good thing unless Parallax is intent on just focusing on consumer electronics such as hand-held gaming consoles and the like.

    As far as BGA packaging goes, it makes a lot of sense since the P2 from what I've read will be primarily targeted at the commercial side of things.
  • CircuitsoftCircuitsoft Posts: 1,166
    edited 2013-11-28 22:48
    On the subject of feature creep, perhaps having a split and two P2s would be a good idea?

    If you look at Atmel, they have the AVR32A (deprecated) and AVR32U, which stand for Application and Micro. The AVR32A is an Application Processor with external memory bus, and the AVR32U has on-chip RAM and Flash.

    If you had a P2U in the LQFP-128 with 92 I/Os and a P2A in a BGA with 64 I/Os and a DDR2 memory bus, then you could serve more market segments, and you /might/ even be able to do that with only one silicon design.
  • SeairthSeairth Posts: 2,474
    edited 2013-11-28 23:05
    Suppose the following approach:
    1. Implement a 64 I/O version of P2 (hereafter, MK1). As part of this, redesign the hub memory access to use the same interface (whatever that may be) as would be used for accessing external DDR2 RAM.
    2. After MK1 is released, bring out a second one (hereafter, MK2) in a BGA package that gets rid of the hub RAM altogether and instead uses the external DDR2 RAM.

    The benefits (as I see them, at least) would be:
    • With a smaller pin count and less I/O circuitry in MK1, it might be possible to go to a smaller (or more hobbyist-friendly) package as well as increase the hub memory size.
    • MK2 would be better targeted at commercial applications, but would also give a convenient upgrade path for hobbyists.
    • MK1 also becomes a prototyping version for MK2 (especially for the period between MK1 and MK2 releases).
    • Code would not differ between MK1 and MK2, only the amount of addressable memory.
    • Hopefully, MK1's changes would be minimal enough to still get it into production relatively soon. As they say, "one in the hand...".
  • Cluso99Cluso99 Posts: 18,069
    edited 2013-11-28 23:09
    I was thinking more along the lines that the DDR would feed the cog directly, not via the hub at all. But I realise that it's quite likely we will want to put some in/out of the hub.

    Each cog has a maximum bandwidth to hub of 200MHz * 4x4bytes / 8 clocks = 400MB/s which is 25% of the DDR2 excluding setup times.
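    Those figures check out: one quad (16 bytes) per 8-clock hub rotation (a quick sanity check, using the 1600MB/s x16 DDR2 peak mentioned earlier in the thread):

```python
# Cog-to-hub bandwidth vs. DDR2 bandwidth, per the figures above.
sys_clk = 200e6
quad_bytes = 4 * 4                  # 4 longs of 4 bytes each
cog_bw = sys_clk * quad_bytes / 8   # one quad per 8-clock hub rotation
ddr2_bw = 1600e6                    # 800M x16 words/s = 1600 MB/s

print(cog_bw / 1e6)      # 400.0  (MB/s)
print(cog_bw / ddr2_bw)  # 0.25
```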

    What if the DDR fed a dual port block of 1-2KB hub space - it could get swapped out with the lower 1-2KB of hub after powerup, or sit on top of the 128KB hub but that would mean another hub address bit. The second port would only need to be 16 bits (word) wide. DDR could have full bandwidth to the hub dual port block.

    A small state m/c could run the DDR and queue cog requests in a small special fifo register, placing the results of a block read/write in/out of the hub dual port block - sort of like a DMA request system.

    This way, there is no detriment to the current hub cycle system. The 1KB or 2KB block could be allocated as desired by the user.

    Most likely the hardest part of this implementation would be the request logic, aside from the speed issues to the DDR.

    Chip, what would be the maximum data transfer rate to the DDR that the P2 could achieve using the current Onsemi process???
  • SeairthSeairth Posts: 2,474
    edited 2013-11-28 23:09
    Hah. Circuitsoft beat me to the suggestion. That's what I get for typing on a 7" tablet instead of getting my laptop out. :)
  • cgraceycgracey Posts: 14,133
    edited 2013-11-28 23:14
    jmg wrote: »
    I'm not sure exactly what Chip plans, but I picture a DDR controller state engine that is very much like a multi-channel DMA.
    N(tbf) copies of
    - Hub Start Address
    - DDR Start Address
    - Byte count
    - Read/Write/(bandwidth/priority?) controls
    With the smallish burst modes of 4 or 8 transactions mentioned, >= two of these would allow Video and Code to stay granular and responsive, and keep out of each others way, on a software basis.
    Combined with the data flows, would be a refresh handler.

    Yes.

    What I'm imagining is an independent state machine that controls the DDR2 pins and transacts with hub RAM in any slots that cogs are not using. Cogs would interact with it via some direct conduit which could be used to set commands (init/r/w, sdram address, hub address, number of bytes) and get status for that cog's DMA channel (busy/ready/current address). In a sense, it's very simple. And because DMA would be so fast, the state machine could round-robin each DMA request piece-wise, so that nobody waits for data to start moving. Data could move faster than a cog could even RDQUAD it in.
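    The piece-wise round-robin described above could be simulated like this (an illustrative sketch only; the 16-byte burst granularity and the request format are assumptions):

```python
from collections import deque

def round_robin_dma(requests, burst_bytes=16):
    """Service each pending request one burst at a time, round-robin,
    so no cog waits for another's whole transfer to finish."""
    queue = deque(requests)   # entries are (name, bytes_remaining)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                  # move one burst for this request
        remaining -= burst_bytes
        if remaining > 0:
            queue.append((name, remaining)) # re-queue unfinished request
    return order

# Three cogs request 48, 16 and 32 bytes: data starts moving for all
# of them immediately, interleaved burst by burst.
print(round_robin_dma([("cog0", 48), ("cog1", 16), ("cog2", 32)]))
# ['cog0', 'cog1', 'cog2', 'cog0', 'cog2', 'cog0']
```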
  • cgraceycgracey Posts: 14,133
    edited 2013-11-28 23:20
    Cluso99 wrote: »
    I was thinking more along the lines that the DDR would feed the cog directly, not via the hub at all. But I realise that its quite likely we will want to put some in/out of the hub.

    Each cog has a maximum bandwidth to hub of 200MHz * 4x4bytes / 8 clocks = 400MB/s which is 25% of the DDR2 excluding setup times.

    What if the DDR fed a dual port block of 1-2KB hub space - it could get swapped out with the lower 1-2KB of hub after powerup, or sit on top of the 128KB hub but that would mean another hub address bit. The second port would only need to be 16 bits (word) wide. DDR could have full bandwidth to the hub dual port block.

    A small state m/c could run the DDR and queue cog requests in a small special fifo register, placing the results of a block read/write in/out of the hub dual port block - sort of like a DMA request system.

    This way, there is no detriment to the current hub cycle system. The 1KB or 2KB block could be allocated as desired by the user.

    Most likely the hardest part of this implementation would be the request logic, aside from the speed issues to the DDR.

    Chip, what would be the maximum data transfer rate to the DDR that the P2 could achieve using the current Onsemi process???

    Good idea, having a block of dual-port memory in the hub to totally get around the hub-slot issue. That is nice because it allows things to become deterministic.

    I'm sure that a 200MHz system clock and a 400MHz DDR2 clock would be operable together. We'll just have the main PLL do 400MHz and divide it by 2 for the system clock. This keeps everything in phase.
  • potatoheadpotatohead Posts: 10,260
    edited 2013-11-28 23:26
    cgracey wrote: »
    And because DMA would be so fast, the state machine could round-robin each DMA request piece-wise

    :)

    Yeah, when this was discussed long ago, one of the basic issues was the control COG bottleneck. For many cases, PASM can be written with some basic assumptions, true for that COG for sure, and know it will work. Brilliant!
  • cgraceycgracey Posts: 14,133
    edited 2013-11-28 23:27
    Maybe it would be adequate to just have the DMA write into and read from each cog's AUX RAM. That could be deterministic and it would be easy to subsequently move data in and out of hub RAM from AUX. AUX is our already-there dual-port RAM that each cog has one of. You would know when the DMA is happening and avoid RDAUX/WRAUX instructions during that time. That would be really simple - and it's deterministic, unlike hub slot activity. I think that's the way to do it. For video, it's a total shoo-in.
  • potatoheadpotatohead Posts: 10,260
    edited 2013-11-28 23:31
    What happens when a lot of COGS ask for a block of external RAM?
  • cgraceycgracey Posts: 14,133
    edited 2013-11-28 23:36
    potatohead wrote: »
    What happens when a lot of COGS ask for a block of external RAM?

    The state machine gets busy and starts doing burst reads for each cog's request, filling in their AUX RAM at one long per system clock. The AUX RAM could even be redesigned to have 64-bit data paths, so that the DDR2 state machine doesn't need to buffer ANY data, only commands.
  • SapiehaSapieha Posts: 2,964
    edited 2013-11-29 00:21
    Hi Chip.

    I have one question.

    Is it not possible to make those 32 pins parallel with standard I/O pins -- so that with some muxes they can change between standard pins and SDRAM ones?
    That could satisfy both types of users: those that will use the P2 as a single controller without SDRAM, and those that will use SDRAM.
  • ozpropdevozpropdev Posts: 2,792
    edited 2013-11-29 00:33
    cgracey wrote: »
    I think that's the way to do it. For video, it's a total shoo-in.

    I second that idea Chip.
    cgracey wrote: »
    The AUX RAM could even be redesigned to have 64-bit data paths, so that the DDR2 state machine doesn't need to buffer ANY data, only commands.

    SETRACE could benefit from a 64 bit arrangement too. :)
    Additional status flags, values, etc. and/or increased capacity
  • jmgjmg Posts: 15,155
    edited 2013-11-29 01:31
    cgracey wrote: »
    I'm sure that a 200MHz system clock and a 400MHz DDR2 clock would be operable together. We'll just have the main PLL do 400MHz and divide it by 2 for the system clock. This keeps everything in phase.

    Because both edges are used, and timing matters, usually this would be an 800MHz VCO divided by 2 to give a 50%-duty 400MHz clock, and then divided again.

    There is a refresh time on DRAM, so that clock cannot go too slow, but it would be nice to be able to scale the fSYS for power reasons. 400MHz/2N ?
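    On refresh: standard DDR2 needs an average of one REFRESH command per ~7.8µs (64ms retention spread across 8192 rows), so however fSYS is scaled, the controller must still issue refreshes at that wall-clock rate. A rough sketch of the scheduling arithmetic (assuming the standard JEDEC retention figures):

```python
# Refresh scheduling under a scaled system clock (standard DDR2 figures:
# 64 ms retention / 8192 rows -> average refresh interval ~7.8125 us).
t_refi = 64e-3 / 8192

def refresh_interval_clocks(f_sys):
    """System clocks between refresh commands at clock frequency f_sys."""
    return int(t_refi * f_sys)

print(refresh_interval_clocks(200e6))   # 1562
print(refresh_interval_clocks(50e6))    # 390  (fSYS scaled down 4x)
```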
  • Cluso99Cluso99 Posts: 18,069
    edited 2013-11-29 02:56
    How wide is each port on the AUX ram? Could it be modified to permit the DDR to have access from either port side to the AUX ram?

    What I was thinking that the data lines to/from the DDR could be multiplexed to connect to either side of the AUX ram. If AUX is being used for video, then obviously data would enter AUX via the cog side and hence rd/wraux/etc would need to stall on collision or the programmer would need to ensure no aux access during ddr access. If the video was not used, then the video side is available to be used, so the mux would be set to use this, and hence not stall the cog at all.

    Another possibility is to make two video side data paths of 128 aux longs. Both these data paths can be set individually to video or ddr. So you could have the ddr filling the bottom half of aux while video is reading from the top half of aux. Again, there would be no impact to the cog aux access. This method would permit the data paths between ddr and aux to be 32bits - gives 200M longs/sec (800MB/s) which is 50% ddr use. The state m/c could have a few small buffers.

    Aha - there is a gotcha in the above - the ddr cannot write while the cog writes. I will post anyway in case this triggers some other ideas.
  • David BetzDavid Betz Posts: 14,511
    edited 2013-11-29 05:10
    cgracey wrote: »
    I have a question for you all:

    Would it be a good idea to reduce the universal-purpose I/O pin count down to 64, from 92, in order to provide enough fast 1.8V I/Os to talk to a x16 DDR2 SDRAM?

    I ask because once I started playing around with big memory (32MB in a 3.3V SDRAM), I could see right away that the Prop2 would be able to do a lot of exceptional things if we had big, fast, cheap memory. The sweet spot for SDRAM seems to be the 64MB DDR2, which costs about $2. It's a little cheaper than 32MB of 3.3V SDRAM that we are on target for, but can be read and written twice as fast, which makes 1080p all-points-addressable graphics a lot more practical. It needs 1.8V I/O's, though.

    By doing this, we would free up lots of silicon area which is just routing the DAC busses. There's actually as much area committed to routing the DAC busses as the core takes. We could free up 50% more room for the core by cutting the DAC busses in half.

    This may not be practical to do at this point, because of all the manual layout involved, but I'm just thinking about it. What do you guys say about this?
    How would you use the extra silicon area? More hub RAM?
  • TonyDTonyD Posts: 210
    edited 2013-11-29 05:12
    While more RAM, even if it is external, is always welcome, I'm not sure about bringing BGA packaging into the mix. Once you go down the BGA route for RAM or P2 you'll exclude an awful lot of people who make up their own boards.

    Now if the RAM was stacked with the P2 and offered in a LQFP, then that's a different kettle of fish completely

    my 2p worth :-)