Is there any point in adding 10Base-T to the smart pin? - Page 2 — Parallax Forums

Is there any point in adding 10Base-T to the smart pin?


Comments

  • This FPGA image seems pretty stable, Cluso. There are some good smart pin examples in there; it's well worth running it to get a proper handle on where things are at.

    @Peter, there are 'CIO' set bits on each pin that can invert input and/or output; this should allow active-low 'voting' from multiple cogs

  • cgracey wrote: »
    Smart pins are almost done. No need to throw them away to save time. We need them, anyway, to make the chip do fast things.

    A chip that does not-so-fast things is far, far superior to an FPGA image that does fast things!

    Gimme, and I promise that I will never complain about the P2; if sometimes I need more, then I will just wait for the next Prop version. :)


  • Cluso99Cluso99 Posts: 18,069
    Tubular wrote: »
    @Peter, there are 'CIO' set bits on each pin that can invert input and/or output; this should allow active-low 'voting' from multiple cogs
    I don't think this is the same thing.

    Each cog 'OR's its OUT pins with the other cogs. So I would expect there is an output bus going around the chip for the 64 outputs. Then this would drive the output pin and/or the smart pin.
    As I understand it, the inverter is right at the pin for both input and output.

    I would not like to see this method change now. The default of the OUT and DIR is 0 = off. We would not want to default OUT and DIR =1.
  • jmgjmg Posts: 15,145
    ...then just gimme the P2 and pin ANDing rather than ORing for sharing the far more common active low signals.

    This makes more sense for open-drain pull-down bus structures like I2C, but
    it may prove a culture shock for those used to the P1?

    A clean-sheet design would use active-low, so pins within a device behave the same as pins connected between separate devices.
  • Peter JakackiPeter Jakacki Posts: 10,193
    edited 2016-03-03 02:56
    Cluso99 wrote: »

    I would not like to see this method change now. The default of the OUT and DIR is 0 = off. We would not want to default OUT and DIR =1.

    DIR will always be OR'd, but the trouble at present is that I have cogs share I/O, rather than the traditional Spin/PASM method where a cog owned the I/O and Spin would talk to that cog (since a Spin cog cannot run PASM directly). That is always slower than just doing it in the cog that needs the data, such as for the SD card etc. The problem with the current OR'd outputs is that the active-low chip select, and perhaps the clock, have to be left high; but that means another cog cannot override this unless we resort to pull-ups and simply floating the pin in between use.

    By AND'ing outputs instead of OR'ing, if we were to set DIR and make the pin an output, its default would be high rather than low, but any cog could make it low. Much more useful in the real world, where chip selects are active low. I could not think of any software that would rely on a pin defaulting to low rather than high when it is made an output, as normally we set the OUTx register before we set the DIRx register anyway.
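    The shared-pin behaviour under discussion can be modelled in a few lines (a plain-Python sketch, not P2 code; the function names are illustrative):

```python
# Toy model of several cogs sharing one output pin.
def pin_or(outs):
    """OR'd bus: the pin is high if ANY cog drives a 1."""
    result = 0
    for o in outs:
        result |= o
    return result

def pin_and(outs):
    """AND'd alternative: the pin is low if ANY cog drives a 0 -
    handy for active-low chip selects."""
    result = 1
    for o in outs:
        result &= o
    return result

# Active-low chip select: idle (deasserted) state is high.
# With OR'ing, one cog parking CS high blocks another cog from asserting it:
print(pin_or([1, 0]))   # 1 - CS stuck deasserted
# With AND'ing, any single cog can pull CS low:
print(pin_and([1, 0]))  # 0 - CS asserted
```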


  • pjvpjv Posts: 1,903
    jmg wrote: »
    pjv wrote: »
    JMG,

    My needs are 100 megabits per second, continuously, no gaps, all day long. Totally predictable. Occasional communication errors are handled by ignoring the offending byte. Speed and reliability are utmost. I have been wanting to do this in a P1, but it needs some helper hardware such as an FPGA. I'm hoping the P2 with its smart pins can do this directly.
    I don't think Ethernet can do "continuously, no gaps".
    So you have 256 nodes, all sending ? No receive at all ?
    How many bytes per node, and what BUS lengths ?

    A ring bus is one way to reduce turn-around times & the address is implicit.
    (and it is inherently duplex)

    Probably I should explain a little better.

    I'm NOT looking for Ethernet proper, just the low-level signalling, enabled with a shift register for serial I/O in Manchester mode, and a CRC. If the latter is tough, I'll make do with a checksum.

    I will transport 65,536 bits every millisecond, interleaved from 255 nodes. So exact timing is required in order for each node to know and place its data into the stream at exactly the right time. So this consumes 65 Mbits of my 100 Mbit data pack. The balance of the bandwidth is used for ad-hoc bi-directional communications.

    So just Ethernet-like signalling is what I need. I don't even plan to use a Phy.

    Sorry if I came across as asking for Ethernet per se.

    Cheers,

    Peter (pjv)
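    The shift-register-in-Manchester-mode signalling described above can be sketched in Python (a model only, not timing-accurate; this uses the IEEE 802.3 convention, where a 1 bit is a low-to-high mid-bit transition and a 0 is high-to-low, with a simple additive checksum standing in for the CRC):

```python
def manchester_encode(byte):
    """Encode one byte, LSB first: 0 -> (1,0), 1 -> (0,1)."""
    half_bits = []
    for i in range(8):
        bit = (byte >> i) & 1
        half_bits.append((0, 1) if bit else (1, 0))
    return [h for pair in half_bits for h in pair]  # flatten

def manchester_decode(half_bits):
    """Recover one byte from 16 half-bit samples; a pair with no
    transition is a coding violation."""
    byte = 0
    for i in range(8):
        pair = (half_bits[2 * i], half_bits[2 * i + 1])
        if pair == (0, 1):
            byte |= 1 << i
        elif pair != (1, 0):
            raise ValueError("coding violation")
    return byte

def checksum(data):
    """Simple 8-bit additive checksum, the fallback if CRC is tough."""
    return sum(data) & 0xFF

assert manchester_decode(manchester_encode(0xA5)) == 0xA5
```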

  • Cluso99Cluso99 Posts: 18,069
    I actually thought the P1 was refreshing to default to low, both for input and output, rather than following tradition.

    I share an output pin between -CE (SRAM) and, via an inverter, -CS (SD) gated with +EN. By utilising the OR function, I can see if the shared cogs have released the bus. Guess it's no different to pull-up and wired-AND really.

    I have also used a pull-up with the cogs' OUT=0 and setting DIR to 0 or 1. This is effectively wired-AND.
  • jmgjmg Posts: 15,145
    pjv wrote: »

    I will transport 65,536 bits every millisecond, interleaved from 255 nodes. So exact timing is required in order for each node to know and place its data into the stream at exactly the right time. So this consumes 65 Mbits of my 100 Mbit data pack. The balance of the bandwidth is used for ad-hoc bi-directional communications.

    So just Ethernet-like signalling is what I need. I don't even plan to use a Phy.
    If you do not want to use a PHY, what length cables do you expect to drive, and is this on a parallel bus, or a ring ?

    The PHY gives a means for practical bandwidth; 25MHz nibbles likely can stream from a P2.

    However, the P2 has no means to extract a 100MHz clock for Rx, and "exactly the right time" seems to require some very high-precision clock syncing, also outside the P2 envelope.
    It may manage around half your target, which would just need 2 channels?
    50MHz Async and 52MHz BUS transceivers are both likely practical.

  • pjvpjv Posts: 1,903
    jmg wrote: »
    pjv wrote: »

    I will transport 65,536 bits every millisecond, interleaved from 255 nodes. So exact timing is required in order for each node to know and place its data into the stream at exactly the right time. So this consumes 65 Mbits of my 100 Mbit data pack. The balance of the bandwidth is used for ad-hoc bi-directional communications.

    So just Ethernet-like signalling is what I need. I don't even plan to use a Phy.
    If you do not want to use a PHY, what length cables do you expect to drive, and is this on a parallel bus, or a ring ?

    The PHY gives a means for practical bandwidth; 25MHz nibbles likely can stream from a P2.

    However, the P2 has no means to extract a 100MHz clock for Rx, and "exactly the right time" seems to require some very high-precision clock syncing, also outside the P2 envelope.
    It may manage around half your target, which would just need 2 channels?
    50MHz Async and 52MHz BUS transceivers are both likely practical.

    I will use a standard Ethernet transformer, driven from the P2 via some transistors. The cable will be a dual co-ax send/receive ring, reversing direction on completion of every packet. This gives required redundancy, and cable break handling.

    Each node will refresh the data without delay as it passes through, and will also supply power over the co-ax to run any node. The timing at each node is resynced as required to keep everything aligned.

    I expect cable lengths from any node to be a few hundred feet with a total ring length of perhaps a few thousand feet max.

    Cheers,

    Peter (pjv)
  • cgraceycgracey Posts: 14,133
    You can always set the pin to have inverted output. That way, you get effective AND function when inverting OUT bits.
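    In other words, by De Morgan's law, inverting each cog's OUT bit before the wired OR, then inverting again at the pin, yields an AND. A quick plain-Python check (a model, not P2 code):

```python
def wired_or(outs):
    """The pin bus ORs the OUT bits of all cogs."""
    result = 0
    for o in outs:
        result |= o
    return result

def effective_and(outs):
    """Each cog inverts its OUT bit, and the pin is set to inverted
    output: NOT(OR(NOT x0, NOT x1, ...)) == AND(x0, x1, ...)."""
    inverted = [o ^ 1 for o in outs]
    return wired_or(inverted) ^ 1

# Exhaustive check for two cogs sharing a pin:
for a in (0, 1):
    for b in (0, 1):
        assert effective_and([a, b]) == (a & b)
```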
  • jmgjmg Posts: 15,145
    pjv wrote: »
    I will use a standard Ethernet transformer, driven from the P2 via some transistors. The cable will be a dual co-ax send/receive ring, reversing direction on completion of every packet. This gives required redundancy, and cable break handling.

    Each node will refresh the data without delay as it passes through, and will also supply power over the co-ax to run any node. The timing at each node is resynced as required to keep everything aligned.

    I expect cable lengths from any node to be a few hundred feet with a total ring length of perhaps a few thousand feet max.
    Sounds still outside P2 reach, but if you have dual coax, you can run two 50MBd paths, which could be possible.

    Lots of loss in a few hundred feet, so drivers and receivers will likely be needed.

    It's a fundamental clock-sampling issue: USB and similar edge-inserted protocols usually have oversampling of x4 for DPLL sync. (points to ~400MHz here)

    Chip has indicated Async in P2 can run at x3 sampling, so your highest streaming receive is going to be 32bit Async - which a P2 can do, to a projected 50MBd+.

    There could still be buffering issues; as Ozpropdev has found, 8b at 8MHz gets extra stop bits, so 32b at >32MHz is going to have the same timing issues, but the final core speed will be higher.
    6~7MBd in the present 80MHz FPGA 8b space will roughly map to 50MBd 32b in ~160MHz silicon space.


    There are (local clocked) use cases (eg FTDI fast Serial) where it could be useful to run Async at x2 sampling, (phase known) but I'm not sure if that is possible in the P2 silicon ? - Chip ?
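    The oversampled async receive discussed above can be illustrated with a toy receiver: sample the line at 3x the bit rate, find the start-bit edge, then take the middle sample of each bit cell (a plain-Python model only; the real smart pin hardware differs):

```python
def tx(byte, bits=8):
    """Produce a 3x-oversampled async frame: idle high, start bit (0),
    data bits LSB first, stop bit (1)."""
    out = [1, 1, 1]                     # idle line
    out += [0, 0, 0]                    # start bit
    for n in range(bits):
        out += [(byte >> n) & 1] * 3    # each bit cell = 3 samples
    out += [1, 1, 1]                    # stop bit
    return out

def oversample_rx(samples, bits=8):
    """Recover the byte: locate the start-bit falling edge, then sample
    the middle of each subsequent bit cell."""
    i = 0
    while i < len(samples) and samples[i] == 1:
        i += 1                          # i = first sample of start bit
    byte = 0
    for n in range(bits):
        mid = i + 1 + 3 * (n + 1)       # middle sample of data bit n
        byte |= samples[mid] << n
    return byte

assert oversample_rx(tx(0x5A)) == 0x5A
```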

  • Well,
    My first reply was deleted by a moderator.

    Basically all I said was that if you are serious, you could contract someone for the Ethernet portion of your design. While I was at National Semiconductor I led part of the design team for the high-speed communications division dealing with 10/100/1000 Ethernet chip sets. The chip line was the MacPhy series, which stood for Media Access Communication for the Physical layer.


  • jmgjmg Posts: 15,145
    While I was at National Semiconductor I had a lead part of the design team for the high speed communications division dealing with 10/100/1000 Ethernet Chip sets.

    How did they manage Rx clock extract/sync at 100M?
    Was that Analog PLL, or faster clock DPLL ?
  • RaymanRayman Posts: 13,860
    There are little USB to Ethernet adapters that work with embedded Linux:
    http://free-electrons.com/blog/usbeth/

    I think that means there is source code around to make it work.
    If you don't mind GPL license, maybe that's a simple way to add Ethernet...
  • jmg ... I doubt you will be able to get 100M speeds, but you might be able to bit-bang 10M. The 100M did use an analog PLL, but for the higher 1000M (1G) the 4 twisted pairs were each out of phase by 90 deg; that way each pair only had to go to 250MHz (the technology would only allow for 300MHz), so you had to apply tricks to gain higher speeds. ... and then there are signal reflections that need to be dealt with in the analog domain. About 1/4 of the chip was dedicated to nothing but DSP.
  • jmgjmg Posts: 15,145
    Rayman wrote: »
    There are little USB to Ethernet adapters that work with embedded Linux:

    That's also how RaspPi does their Ethernet (only they have 480M USB)
    SiLabs have Parallel to Ethernet, CP220x, and Microchip have USB-Ethernet

    It should be possible to do FS USB to 100M Ethernet, with some caveats ?

  • jmg,

    Signal reflections are going to be your biggest problem... you "can" get 100M, but then you have to perform noise-cancellation techniques that vary with cable length and several other variables that need to be dealt with.
  • Can a moderator please close this thread? Ethernet is not going to be added to the P2. Therefore, any additional posts are not going to be adding any substance to the OP. And there are plenty of other threads to continue discussing USB, pin polarities, etc...
  • jmgjmg Posts: 15,145
    Seairth wrote: »
    Can a moderator please close this thread? Ethernet is not going to be added to the P2. Therefore, any additional posts are not going to be adding any substance to the OP. And there are plenty of other threads to continue discussing USB, pin polarities, etc...

    Changing the title seems smarter than the rather blunt instrument of closing it entirely.
    It has valid discussions on ways to get Ethernet via USB, and Ethernet-like operation desired by some.

    It may even be possible to do 10M Ethernet on P2, as someone mentioned above.
    (not sure how well supported 10M is on routers)


    Microchip LAN9500A ( 100 $3.20) says :
    Supports HS (480 Mbps) and FS (12 Mbps) modes
    Contains an integrated 10/100 Ethernet PHY, USB PHY, Hi-Speed USB 2.0 device
    controller, 10/100 Ethernet MAC, TAP controller, EEPROM controller, and a FIFO controller with a total of 30 KB of internal packet buffering.


    A single P2 is not going to saturate 100M Ethernet, but a dozen P2's could sensibly connect to a 100M backbone this way.
  • 10M Ethernet is on its way out, like hubs; most common routers now do 100M/1000M and rarely support 10M anymore.

    In companies I see more and more 10G Ethernet between servers and 1000M to the clients.

    Not sure about the software layers needed to support Ethernet over USB, but Micah was able to get a USB and Bluetooth stack running on a P1. My main concern is that USB is somehow just standard for some use cases, like keyboard/mouse/file storage and maybe serial.

    Everything else needs a different driver for each product, even if all of them are WiFi or Ethernet adapters. Whereas direct support for 10Base-T would just need one driver and would work out of the box, without binding to a certain product.

    But I think 10M Ethernet is too old/late and 100M not easy on a P2. So sadly it will be better to skip it.

    Mike
  • TorTor Posts: 2,010
    At least with some (if not all) of the hubs I have, connecting a single 10M Ethernet client will force all the other clients to 10M too. We don't want that. I don't want to see 10M anywhere at this point, because of the side effects. Same with 802.11b, for the same reason: it doesn't play along with g or n.
    It's much better to leave it out than to implement something that old.
  • Heater.Heater. Posts: 21,230
    Arrrrgh....No!

    No more features.

    The only feature I want the P2 to have is silicon !

  • cgracey wrote: »
    Smart pins are almost done. No need to throw them away to save time. We need them, anyway, to make the chip do fast things.

    If only they could be used to make "Chip" go faster. ;)