
10BaseT Ethernet

Bill Henning Posts: 6,445
edited 2013-03-13 12:12 in Propeller 2
should be quite feasible in a single cog... perhaps less... on the P2... Just add a mag jack...

Oh wait - IT IS CLEARLY IMPOSSIBLE!

Comments

  • davidsaunders Posts: 1,559
    edited 2013-03-10 15:34
    Bill Henning wrote: »
    should be quite feasible in a single cog... perhaps less... on the P2... Just add a mag jack...

    Oh wait - IT IS CLEARLY IMPOSSIBLE!
    Oh Oh, who is going to have it done in the next week?? :smile:
  • Cluso99 Posts: 18,069
    edited 2013-03-10 21:08
    Bill, agreed. TOTALLY IMPOSSIBLE ;)

    However, IMHO I would rather have a simple WiFi module than cabled Ethernet. IMHO that is the biggest mistake on the Raspberry Pi and why the MK802 etc. are looking good.
  • davidsaunders Posts: 1,559
    edited 2013-03-11 06:20
    What is it about WiFi? The RISC OS Open people talked me into adding an Atheros Driver for USB WiFi dongles to my todo list, and now here it is again. I still strongly prefer wired networking.

    Though yes, 10BaseT Ethernet: completely impossible to implement in software on the Propeller 2 :smile:.
  • potatohead Posts: 10,261
    edited 2013-03-11 08:41
    The appeal of wi-fi is to get rid of the wire! Portable things operate best when they are actually portable.
  • Roy Eltham Posts: 3,000
    edited 2013-03-11 11:13
    I think you guys are wasting the "impossible" thing on stuff here... A 10Mbit/s Manchester-encoded differential signal sent from a 160MHz-200MHz MCU with an effective 1 cycle per instruction, OMG so hard?!?! Maybe it's "impossible" on the Prop 1, but really you need to pick harder stuff for the Prop 2. Like 100Base-TX.
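    For reference, a minimal C sketch (not from the thread) of the Manchester encoding step being described, assuming the IEEE 802.3 convention where a 1 bit is a low-to-high mid-bit transition and a 0 bit is high-to-low. Each data bit becomes two half-bit line levels, so 10Mbit/s of data means 20Mbaud on the wire:
    #include <stdint.h>
    
    // Expand one byte (LSB first, as Ethernet transmits) into 16 half-bit
    // levels, packed LSB-first into the result: 0 = line low, 1 = line high.
    uint16_t manchester_encode(uint8_t byte)
    {
        uint16_t line = 0;
        for (int bit = 0; bit < 8; bit++) {
            int b = (byte >> bit) & 1;
            // 1 -> low then high (rising mid-bit edge); 0 -> high then low
            uint16_t halves = b ? 0x2 : 0x1;  // first half-bit in the low bit
            line |= (uint16_t)(halves << (2 * bit));
        }
        return line;
    }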
  • rod1963 Posts: 752
    edited 2013-03-11 13:38
    100Base-TX and USB-OTG HS should be easily achievable goals for the P2. Look, if you want the P2 to compare favorably to, say, an STM32 M4, you gotta have these.
  • pedward Posts: 1,642
    edited 2013-03-12 00:51
    From my preliminary investigations, the single biggest hindrance to implementing Ethernet, or other protocols, is CRC calculation. There are a lot of cycles involved in just generating a CRC checksum. The bit twiddling is all very straightforward, but CRC-32 takes many cycles just to accumulate a single byte. CRC-16 is easily done with lookup tables; CRC-32 is a different beast.

    I wrote both in SPIN as a comparison, and several moons ago I recommended that Chip add hardware instructions to calculate this. Unfortunately I hadn't given it enough time to come up with an elegant interface, and thus formulate a persuasive argument for its inclusion.

    EDIT: After re-reading the Wikipedia page, I might take a whack at an optimized PASM implementation of CRC-32; it looks like there might be some nice tricks.

    If I had a DE0-nano, I could try making a P2 implementation that used the CLUT for the lookup table, since it needs 256 longs for the lookup. My guess at this point is that it would take around 10 to 12 clocks per byte accumulated.
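    For reference, a minimal sketch (not from the thread) of how that 256-long lookup table would be generated, assuming the standard reflected Ethernet CRC-32 polynomial 0xEDB88320; this is what would be preloaded into the CLUT, one long per byte value:
    #include <stdint.h>
    
    // Build the 256-entry CRC-32 table (reflected Ethernet polynomial).
    void crc32_build_table(uint32_t table[256])
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t crc = i;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
            table[i] = crc;
        }
    }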

    At that rate you can only CRC-sum about 16 MB (128 Mb) per second at 160MHz best case, which would likely require a dedicated COG just to do CRC generation.
  • Heater. Posts: 21,230
    edited 2013-03-12 01:35
    Isn't there the option of checking the CRC after you have all the bytes of a packet in memory? That is:
    1) Receive all the incoming bytes.
    2) Create checksum from the buffer and check it.

    Of course during 2) you might be non-responsive to another incoming packet. But I suspect that may be the case a lot of the time anyway, as we probably don't have a lot of buffer space and we will be wanting to process the packet's content anyway, perhaps slowly in Spin.

    Perhaps during the time of processing the checksum and the packet content the driver could be transmitting something so as to make the far end think there is a collision going on, so it backs off until a later retry.
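    A minimal sketch of that deferred check (names hypothetical; crc32() would be a table-driven routine like the one rogloh posts below, and the usual reflected CRC-32 convention puts the FCS bytes in the buffer least-significant byte first):
    #include <stddef.h>
    #include <stdint.h>
    
    uint32_t crc32(const void *data, size_t length);  // table-driven CRC-32
    
    // Verify a received frame after the fact: the last 4 bytes are the FCS,
    // computed over everything before them.
    int frame_crc_ok(const uint8_t *frame, size_t length)
    {
        if (length < 5)
            return 0;
        uint32_t fcs = (uint32_t)frame[length - 4]
                     | (uint32_t)frame[length - 3] << 8
                     | (uint32_t)frame[length - 2] << 16
                     | (uint32_t)frame[length - 1] << 24;
        return crc32(frame, length - 4) == fcs;
    }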
  • rogloh Posts: 5,786
    edited 2013-03-12 04:35
    According to some fast CRC32 algorithm information identified here: http://create.stephan-brumme.com/crc32/
    #include <stddef.h>
    #include <stdint.h>
    
    extern const uint32_t crc32Lookup[256]; // precomputed 256-entry table
    
    uint32_t crc32_1byte(const void* data, size_t length, uint32_t previousCrc32 = 0) 
    { 
      uint32_t crc = ~previousCrc32; 
      const unsigned char* current = (const unsigned char*) data; 
      
      while (length--) 
        crc = (crc >> 8) ^ crc32Lookup[(crc & 0xFF) ^ *current++]; 
      return ~crc; 
    }
    
    you can probably achieve a CRC32 algorithm overhead of 5 instructions per byte (in an existing byte processing loop). Still getting my head around all the cool new P2 instructions but something like this might work.
    xor    crc, new_byte
    setspa crc
    shr    crc, #8
    popar  temp
    xor    crc, temp
    ... rest of packet data loop...
    
    'where crc is the accumulated crc in the loop
    'new_byte is the new incoming data byte
    'temp is some scratch variable from the CLUT lookup result
    

    If that particular code snippet works out, at 100Mbps it is then just 5 PASM instructions / byte * 12.5MB/s = ~62.5 MIPS to do the CRC computation part, leaving the remaining ~100 MIPS or so of the COG for the rest of the packet processing. You'd need to interleave this stuff in with the required 25MHz pin sampling for 4-bit MII and other decoding operations, and it assumes you can do it all in parallel, which would be ideal if possible. Obviously a 100/150/200MHz clock source would be best for this instead of an 80MHz (sub)multiple, in order to sample the pins at the right clock rate.

    Also I am assuming you'd use an MII-based PHY here too instead of attempting your own MLT-3 encoding at 125Mbps - though maybe the video generator output might be able to stream out the 4B/5B-transformed symbols at that rate using 3 levels with its DAC somehow... interesting idea. OK, Tx perhaps if you are lucky, but the CLUT might be needed there too for the video generator, which would prevent the CRC algorithm from also using it. I really do challenge someone to figure out a way to receive/decode MLT-3 at 125MHz with the P2 and keep up without using a PHY or external circuit for that purpose. I think that one is impossible but would love to be proved wrong. :smile:
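    For anyone tempted by that challenge, a minimal C reference model (only a sketch, not P2 code) of the two 100Base-TX line-coding stages named above: each data nibble maps to a 5-bit 4B/5B code group, and the resulting 125Mbaud bit stream drives a three-level MLT-3 encoder that changes level only on 1 bits. Shifting each 5-bit group out through mlt3_step() yields the ternary waveform the DAC would have to reproduce:
    #include <stdint.h>
    
    // 4B/5B code groups for the 16 data nibbles (100Base-TX / FDDI).
    static const uint8_t code4b5b[16] = {
        0x1E, 0x09, 0x14, 0x15, 0x0A, 0x0B, 0x0E, 0x0F,
        0x12, 0x13, 0x16, 0x17, 0x1A, 0x1B, 0x1C, 0x1D
    };
    
    // MLT-3: the line cycles 0, +1, 0, -1, advancing one step on every 1 bit
    // and holding the current level on 0 bits.
    typedef struct { int phase; } mlt3_t;  // phase 0..3 -> level 0,+1,0,-1
    
    static int mlt3_step(mlt3_t *s, int bit)
    {
        static const int level[4] = { 0, +1, 0, -1 };
        if (bit)
            s->phase = (s->phase + 1) & 3;
        return level[s->phase];
    }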

    Roger.
  • pedward Posts: 1,642
    edited 2013-03-12 08:28
    Heater. wrote: »
    Isn't there the option of checking the CRC after you have all the bytes of a packet in memory? That is:
    1) Receive all the incoming bytes.
    2) Create checksum from the buffer and check it.

    Of course during 2) you might be non-responsive to another incoming packet. But I suspect that may be the case a lot of the time anyway, as we probably don't have a lot of buffer space and we will be wanting to process the packet's content anyway, perhaps slowly in Spin.

    Perhaps during the time of processing the checksum and the packet content the driver could be transmitting something so as to make the far end think there is a collision going on, so it backs off until a later retry.

    You have to send a CRC with every packet; that's where the main overhead is with CRC.
  • pedward Posts: 1,642
    edited 2013-03-12 08:34
    While the accumulate loop may be 5 instructions per byte, you still have hub memory access and a loop counter, and all of that brings the number of clocks per byte up.
  • Heater. Posts: 21,230
    edited 2013-03-12 08:37
    I appreciate that. But it makes no odds, does it?
    If it took 10ms to calculate the CRC, an extreme example, then you would build your packet plus CRC in about 10ms and then "boom" send it at the required bit rate.
    On receive you get the bits in at the required bit rate and then spend a leisurely 10ms checking the CRC. The other end does not care.

    All it means is that you have a 20ms turn around time when replying to a packet.

    I would suspect this is something we could live with if need be.
  • pedward Posts: 1,642
    edited 2013-03-12 09:05
    You are discussing 2 ends of the same problem.

    There is signalling rate and then there is effective throughput.

    Doing some math:

    1500 byte frame size.

    ~16,000 clock cycles to generate CRC checksum.

    160,000,000 / 16,000 = 10,000 frames per second

    10,000 * 1,500 = 15,000,000 bytes per second

    So the raw capability of the Prop 2 chip for CRC generation is around 15 Megabytes per second, and the 100Base-T standard is 100Mbits per second, or ~12.5 Megabytes per second.

    You are left with about 27 million clocks per second to do all of the ethernet overhead (line rate is ~8,300 such frames per second, costing ~133 million clocks in CRC).

    It's very likely that you will have a 2 COG 100Base-T driver and a single COG 10Base-T driver.
  • Sapieha Posts: 2,964
    edited 2013-03-12 10:05
    Hi pedward.

    In your calculations you don't show versions with threading or internal COG-to-COG signaling.

    pedward wrote: »
    You are discussing 2 ends of the same problem.

    There is signalling rate and then there is effective throughput.

    Doing some math:

    1500 byte frame size.

    ~16,000 clock cycles to generate CRC checksum.

    160,000,000 / 16,000 = 10,000 frames per second

    10,000 * 1,500 = 15,000,000 bytes per second

    So the raw capability of the Prop 2 chip for CRC generation is around 15 Megabytes per second, and the 100Base-T standard is 100Mbits per second, or ~12.5 Megabytes per second.

    You are left with about 27 million clocks per second to do all of the ethernet overhead (line rate is ~8,300 such frames per second, costing ~133 million clocks in CRC).

    It's very likely that you will have a 2 COG 100Base-T driver and a single COG 10Base-T driver.
  • pedward Posts: 1,642
    edited 2013-03-12 10:33
    Sapieha wrote: »
    Hi pedward.

    In your calculations you don't show versions with threading or internal COG-to-COG signaling.

    Threading doesn't magically get you more clock cycles. The fact is CRC calculations will take 80% of a COG's clocks; threading isn't going to fix that.

    EDIT: CRC might be folded into some of the overhead of assembling a frame, but it still constitutes the bulk of the cycles.

    You can transmit a frame while accumulating the CRC, but even then, that accumulate is 5 clocks for every 8 bits in a best-case scenario. You then need 2 transfers at 25MHz to send 1 byte to the PHY.

    You might be able to interleave some of that, using the CRC calculation to time the 25MHz signalling rate to the PHY. At 160MHz, you would get around 6 clocks between transfers to the PHY at full rate. In those 6 clocks you need the ethernet frame in the COG memory. There aren't enough clocks to transfer the data from HUB to COG, calculate the CRC, and signal to the PHY. 100Base-T will require 2 COGs to implement.

    I'm confident 10Base-T could be implemented in 1 COG. The same goes for full-speed USB at 12Mbps. If I were writing a driver, I'd have the CRC COG do the data transfer to the PHY, and let another COG handle all of the assembly and overhead in implementing an ethernet interface.
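    A sketch of that two-COG split expressed as a hub-RAM mailbox in C (all names hypothetical; a real driver would do this in PASM, this just shows the hand-off):
    #include <stdint.h>
    
    // Hypothetical hub-RAM mailbox between the frame-assembly COG and the
    // CRC/PHY COG, in the usual Propeller producer/consumer style.
    typedef struct {
        volatile uint32_t length;  // nonzero = frame ready; PHY COG clears it when sent
        volatile uint32_t status;  // set by the PHY COG: done / error
        uint8_t frame[1536];       // assembled frame; FCS appended by the PHY COG
    } eth_mailbox_t;
    
    // Assembly COG side: hand a frame off and let the wire-side COG run.
    void eth_send(eth_mailbox_t *mb, const uint8_t *data, uint32_t len)
    {
        while (mb->length != 0)
            ;                      // previous frame still going out
        for (uint32_t i = 0; i < len; i++)
            mb->frame[i] = data[i];
        mb->length = len;          // publish: PHY COG sees nonzero and starts
    }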
  • Heater. Posts: 21,230
    edited 2013-03-12 13:40
    Wait up a minute,

    Seems we are talking about Ethernet at 100Mbits per second. That's 12.5 million bytes per second, something like 100 times the entire content of HUB RAM per second.
    Let's assume we don't want to transmit the content of HUB over and over again.
    To be interesting we have to get new data in so that we can send it out.
    What applications do you have in mind that will require this? And can it be done in Spin, for example?
    Sure a 100Mbit signaling rate may be nice, but we have those huge pauses in between packets to think about CRCs.
    If you can signal 100Mbit/s from one cog with some pauses for CRC, that's good enough, I would imagine.
  • Bill Henning Posts: 6,445
    edited 2013-03-12 14:23
    I think for many applications 10Mbps - even half duplex - in one cog would be good enough. Full Duplex would be better.

    Just think how nice it would be to have Ethernet at the P2 launch - it would be a major selling feature. (Which is why I invoked the "Impossible" forum magic)

    Same for full speed USB (12Mbps) in a single cog.
  • David Betz Posts: 14,516
    edited 2013-03-12 14:33
    Bill Henning wrote: »
    I think for many applications 10Mbps - even half duplex - in one cog would be good enough. Full Duplex would be better.

    Just think how nice it would be to have Ethernet at the P2 launch - it would be a major selling feature. (Which is why I invoked the "Impossible" forum magic)

    Same for full speed USB (12Mbps) in a single cog.
    If you guys make the PASM drivers I'll work on the C library code to make use of them!
  • pedward Posts: 1,642
    edited 2013-03-12 15:29
    We are arguing the same point.

    For general connectivity, a single COG 10Base-T driver would be sufficient.

    If you were making a device that streamed data to/from external memory (ala SDRAM), then you might want to consider 100Base-T, which would then be a 2 COG driver.

    There is *no* point in implementing 100Base-T if you can't process data at line speed.

    Here is a short description of how Ethernet rate limiting works, courtesy of HP:

    How Fixed Rate Limiting Works

    Fixed Rate Limiting counts the number of bytes that a port either sends or receives, in one second intervals. The
    direction that the software monitors depends on the direction you specify when you configure the rate limit on the
    port. If the number of bytes exceeds the maximum number you specify when you configure the rate, the port
    drops all further packets for the rate-limited direction, for the duration of the one-second interval.


    In essence, it relies on protocols that build on top of the L2 Ethernet layer to re-transmit lost packets. This effectively reduces the bandwidth by causing the sender to be more conservative.
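    The mechanism described above is just a fixed one-second byte window; a minimal sketch, with now_seconds() standing in for a hypothetical clock source:
    #include <stdint.h>
    
    uint32_t now_seconds(void);  // hypothetical one-second tick counter
    
    typedef struct { uint32_t window, bytes, max_per_sec; } rate_limit_t;
    
    // Returns 1 to pass the frame, 0 to drop it for the rest of the interval.
    int rate_limit_pass(rate_limit_t *rl, uint32_t frame_bytes)
    {
        uint32_t now = now_seconds();
        if (now != rl->window) {   // a new one-second interval begins
            rl->window = now;
            rl->bytes = 0;
        }
        rl->bytes += frame_bytes;
        return rl->bytes <= rl->max_per_sec;
    }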

    It doesn't make sense to implement an Ethernet driver which is incapable of operating at line speed; a driver that intermittently communicates at line speed but then has a long turnaround is just not a sensible design. It's better to drop back to a protocol you *can* keep up with.

    Consider that you could probably stuff a 10Base-T driver in there and have a lot of extra cycles, at the L2 layer. You could either implement multiple channels (say 4) or you could fit the rest of a TCP/IP stack into the remaining cycles and have a 1 COG 10Base-T driver with TCP/IP.
  • tonyp12 Posts: 1,951
    edited 2013-03-12 16:25
    Bursts of 100Base-T data may be preferred, as I heard that some routers/switches don't like mixed 10/100 and slow down.
    And that is not good for other 100Base data that is transmitted simultaneously on that same switch.
    Though maybe there's no truth to that.
  • pedward Posts: 1,642
    edited 2013-03-12 17:28
    I have pointed out that for 100Base-T, use 2 COGs to implement a full-spec interface; for a single COG, use 10Base-T.

    I argue that it's *wrong* to emulate a 100Base-T interface and not be able to actually keep up; it's like all of those guys with funny stickers on their cars: sounds fast but isn't.
  • davidsaunders Posts: 1,559
    edited 2013-03-12 18:31
    Have you ever had a network setup in which any endpoint was capable of keeping up with the maximum rate? There is so much dead time that it is ridiculous in 99.8% of cases, and even more so for most TCP applications. As such I would argue that implementing 100Base-TX in a usable way would be quite doable in 3 cogs on the Prop 2, able to keep up with the real data rates it would encounter.

    Second, it may be possible to implement 10BaseT and Full Speed USB (12Mb/s) in a single cog, at least for the basic interface. Maybe even more: I think that using hub mem for PASM2 code may be fast enough (thanks to the RDQUAD ops), which would open up the possibility of implementing entire stacks in a single cog. Even with 2 threads at 160MHz you could maintain 80MIPS per thread, and with a well-written extended memory system for HUB access, 20MIPS running from HUB mem (40MIPS from HUB mem if single-threaded).
  • Dr_Acula Posts: 5,484
    edited 2013-03-12 18:37
    Watching with great interest the clever boffinry going on here. 10baseT would be pretty amazing. Oops, sorry. Impossible.
  • pedward Posts: 1,642
    edited 2013-03-12 18:56
    davidsaunders wrote: »
    Have you ever had a network setup in which any endpoint was capable of keeping up with the maximum rate? There is so much dead time that it is ridiculous in 99.8% of cases, and even more so for most TCP applications. As such I would argue that implementing 100Base-TX in a usable way would be quite doable in 3 cogs on the Prop 2, able to keep up with the real data rates it would encounter.

    Second, it may be possible to implement 10BaseT and Full Speed USB (12Mb/s) in a single cog, at least for the basic interface. Maybe even more: I think that using hub mem for PASM2 code may be fast enough (thanks to the RDQUAD ops), which would open up the possibility of implementing entire stacks in a single cog. Even with 2 threads at 160MHz you could maintain 80MIPS per thread, and with a well-written extended memory system for HUB access, 20MIPS running from HUB mem (40MIPS from HUB mem if single-threaded).

    Yes, 100Base-T is old at this point.

    Something that is perhaps lost in this conversation is that EVERY other device out there has a custom ASIC that does the Ethernet interfacing. The "computer" implements a driver that talks to the ASIC to send and receive packets of data. The ASIC handles the buffering and framing. If you want a bare-metal Ethernet interface for the P2, then you have to be mindful of the real-time nature of an Ethernet network. I guarantee that an Intel EEPRO 10/100 controller can handle the full data rate.

    If you implement a bare-metal driver and it cannot process data at line speed, then you're going to have a boatload of retransmissions, because Ethernet doesn't have any provision for throttling. CSMA/CD is the collision detection mechanism, but it doesn't implement a "Clear To Send" protocol; it just starts transmitting on the wire. If the P2 isn't ready to receive a packet, that packet gets dropped, and the sender (the sender on the other side of the switch) will continue to retransmit until a higher level protocol acknowledges receipt.

    Think of Ethernet like a dead drop: you send info out there and don't have any direct confirmation someone received it. It's only through incidental protocols that you get some higher level of confirmation.

    L2 is dumb and doesn't implement any niceties; it's crudely simple and effective. If you drop packets like a klutz, performance will suffer badly and the overall efficiency of the network will degrade. With 10Base-T, the effective throughput is less, but you can catch every packet and eliminate wasted packet transmissions. The L3+ protocols will handle the bandwidth throttling efficiently, since the window will remain small and the fall-forward/fall-back timers will keep track of your effective throughput. More importantly, the other end won't be wasting precious resources retransmitting packets due to packet loss.
  • rogloh Posts: 5,786
    edited 2013-03-12 22:22
    pedward wrote: »
    Threading doesn't magically get you more clock cycles. The fact is CRC calculations will take 80% of a COG's clocks; threading isn't going to fix that.

    EDIT: CRC might be folded into some of the overhead of assembling a frame, but it still constitutes the bulk of the cycles.

    You can transmit a frame while accumulating the CRC, but even then, that accumulate is 5 clocks for every 8 bits in a best-case scenario. You then need 2 transfers at 25MHz to send 1 byte to the PHY.

    You might be able to interleave some of that, using the CRC calculation to time the 25MHz signalling rate to the PHY. At 160MHz, you would get around 6 clocks between transfers to the PHY at full rate. In those 6 clocks you need the ethernet frame in the COG memory. There aren't enough clocks to transfer the data from HUB to COG, calculate the CRC, and signal to the PHY. 100Base-T will require 2 COGs to implement.

    I'm confident 10Base-T could be implemented in 1 COG. The same goes for full-speed USB at 12Mbps. If I were writing a driver, I'd have the CRC COG do the data transfer to the PHY, and let another COG handle all of the assembly and overhead in implementing an ethernet interface.

    At least for a 200MHz (over)clocked P2, I disagree with your conclusion about not having enough time using a single COG to transfer data from HUB to COG, do CRC, and output to the PHY. Here's why I'm thinking that.

    We would need to send 12.5 MB/s to the PHY, or 200MHz / 16, for line-rate 100Mbps. This number is nice as we can align our nibble transmission with the hub access windows every 8 cycles. The total instructions in the critical loop should include this group (in a suitable sequence to align data transitions relative to the MII output Tx clock edges, which we could probably generate with a counter):

    1 hub byte read operation with PTRA pointer increment
    5 cycles for CRC accumulate using the earlier approach
    1 output 4 lsb bits to pins
    1 shift data right by 4 bits
    1 output 4 msb bits to pins (I checked this MII nibble order is correct)
    1 DJNZD (or initial REPS patched with the length)

    That is a total of 9 or 10 instruction cycles per byte being sent in the frame. We even have some to spare, possibly for clock pin toggling as well (needs 4 more instructions in the loop) should we not use a counter for that purpose. It should just fit nicely. Yeah, there's obviously a few other things left to consider outside the loop for handling start and end of the packet, TxEN, packet preamble, CRC inversion and output, IPG, etc., but I think 1 COG should probably do all Tx functions and still keep up in its critical loop at least. It is getting tight but it fits.
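    A C reference model of that per-byte Tx work (only a sketch, not PASM; mii_put_nibble() is a hypothetical stand-in for driving TXD[3:0] for one TXCLK, and MII takes the low nibble of each byte first):
    #include <stdint.h>
    
    extern const uint32_t crc32Lookup[256];  // the CLUT contents
    void mii_put_nibble(uint8_t nib);        // hypothetical: drive TXD[3:0]
    
    // Per frame byte: CRC-accumulate, then send low nibble, then high nibble.
    // crc runs inverted (init 0xFFFFFFFF); the final inversion happens once,
    // outside the loop, when the FCS is appended.
    uint32_t tx_byte(uint32_t crc, uint8_t byte)
    {
        crc = (crc >> 8) ^ crc32Lookup[(crc ^ byte) & 0xFF];
        mii_put_nibble(byte & 0x0F);         // low nibble first on MII
        mii_put_nibble(byte >> 4);
        return crc;
    }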

    Unfortunately for the Rx side we can't do the same thing with hub memory writes, because it is outside of our control when the MII nibbles arrive relative to the hub window. So I'm thinking you would just need to buffer up to the largest incoming MRU, 1518-1536 bytes or so, internally in the COG mem itself (not the CLUT, which is for CRC and is only 1k), accumulating the CRC as you go, then write everything collected in COG memory back to HUB memory at the end of the packet.

    To have any chance of sustaining line-rate packet reception (in the driver COGs, not necessarily the application!) I think you'd need a second Rx COG to alternate processing packets with the first Rx COG. So that is likely 3 COGs needed in total when doing full-duplex Fast Ethernet at 100Mbps: 1 COG for Tx and 2 for Rx. This COG partitioning may then just be able to keep up with line rate and do the CRCs and transfers to/from hub RAM. I didn't check all the other Rx stuff, and additional nibble shifts and AND/OR masking are necessary to fill each long with the incoming data.

    Packet Rx is really tight and needs to be checked out much more thoroughly to see if it is doable. You've got to initially sync up the nibble reads to the Rx clock edges and data valid signals. This means either being able to resync occasionally in the middle of the packet (not enough cycles are left in the critical loop for that purpose if CRC is also being done) or syncing once with the clock at the start and not drifting off it for the whole packet of ~3100 nibbles or so. The latter means rather tight crystal frequency tolerances like 50ppm would be needed if the PHY runs off a clock independent of the Prop and we drift relative to it: a maximum-length frame is ~3100 nibbles at 25MHz, about 124µs, so even a 100ppm relative drift amounts to ~12ns against a 40ns nibble slot. You've also got to detect the end of the packet within the critical loop itself, to be able to break out of it, by monitoring the MII RXDV signal while incrementing a packet length counter as you go. It is quite a lot harder actually.

    But I'm thinking this set of operations per byte might have a chance of being coaxed to work in the critical loop. It just fits in 16 clocks too, which is nice. Maybe some further optimizations are possible with new P2 instructions. Byte arrivals would need to be grouped into bundles of 4 to load into the local COG RAM longs. This just shows the concept; it is not arranged in the correct order or aligned with correct MII nibble arrivals, but hopefully you get the point (see the C model after the list):

    1 input 4 lsbs from pins
    1 ror 4 bits
    1 or input 4 msbs from pins
    1 rol 4 bits
    1 and to clean out unwanted upper 24 bits of long
    5 crc accumulation instructions on new byte
    1 increment packet length counter
    1 RXDV pin signal detection
    1 conditional branch instruction based on RXDV state to exit loop at end of packet
    1 JMPD to start of loop again (or use a long REPS to allow break out with timeout?)
    1 byte OR operation to the long being constructed / initial register copy for first byte
    1 write long to COG mem (every 4 bytes) with autoincrement using INDA or rotate right by 8 bits (for the other 3 bytes in the long)
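    A C model of the same per-byte Rx flow (only a sketch, not PASM; mii_get_nibble() is a hypothetical stand-in for sampling RXD[3:0] on an RXCLK edge, and the packed[] longs are assumed zeroed beforehand):
    #include <stdint.h>
    
    uint8_t mii_get_nibble(void);            // hypothetical: sample RXD[3:0]
    extern const uint32_t crc32Lookup[256];  // the CLUT contents
    
    // Per frame byte: assemble two nibbles (low nibble arrives first),
    // CRC-accumulate, and pack 4 bytes into each long bound for COG RAM.
    uint32_t rx_byte(uint32_t crc, uint32_t *packed, int index)
    {
        uint8_t byte = mii_get_nibble();             // low nibble
        byte |= (uint8_t)(mii_get_nibble() << 4);    // high nibble
        crc = (crc >> 8) ^ crc32Lookup[(crc ^ byte) & 0xFF];
        packed[index >> 2] |= (uint32_t)byte << ((index & 3) * 8);
        return crc;
    }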

    Roger.
  • Heater. Posts: 21,230
    edited 2013-03-13 01:22
    @pedward,
    There is *no* point in implementing 100Base-T if you can't process data at line speed.
    From the point of view of the Propeller and whatever process it is communicating with, that is true. Like slowing down the baud rate on an RS232 link if you can't actually handle the data so fast.
    However, Ethernet is a shared medium. Many devices can be talking across it. That means a pair of devices working at one tenth the normal speed can be hogging things and slowing down the communications of others.

    Or, am I out of touch? Is it so that modern day switches (it's point to point between me and my switch) will pass traffic on at the higher bit rate if the far end can handle it?

    @Bill

    Yeah, I am out of touch. The idea of "full duplex ethernet" took me aback for a moment. Took me a while to remember there is no "ether" in modern day ethernet. It's all done point to point through switches and routers.

    I kind of liked the old coax ethernet days, so much more convenient. No switch boxes hanging around, no wall warts needed to power them, a lot less wire cluttering the place up. I guess that's why people move to real "ether" with WiFi nowadays.
  • Dr_Acula Posts: 5,484
    edited 2013-03-13 03:02
    Heater. wrote: »
    Or, am I out of touch? Is it so that modern day switches (it's point to point between me and my switch) will pass traffic on at the higher bit rate if the far end can handle it?

    I built my house just at the transition between coax and cat5. Cleverly put coax in the wall spaces while the house was at the frame stage. And then, d'oh, along comes cat5 just after I finished, and I had to crawl around in the roof pulling cat5 around.

    Now of course, things change again. My kids are all on wifi devices. Wires are uncool.

    But getting back to heater's question, yes, if everything goes through a router, and one device is slow and only 10baseT speed, the router should handle that, right? So... I'm more interested in a slow 10baseT system that works, ideally on both the prop I and prop II, than an abstract discussion about a 100 or 1000 speed system that may or may not work.

    For the propeller, I would dearly love to see an internet browser. I think so many things are so close. The touchscreen displays can handle the graphics of the internet. Move the SD driver and display driver off to a second propeller, and I dearly hope that there is then enough room for a TCP stack. I know that code to translate HTML can work, as I have written demo code to do this. Maybe wifi comes later, but starting simple is likely to get there in the end, and 10baseT seems the place to start.
  • Heater. Posts: 21,230
    edited 2013-03-13 05:13
    Having got ethernet up and running on the PII, we are going to need an operating system to manage it.

    Well, now that we have a C compiler there is this: http://www.contiki-os.org/

    http://en.wikipedia.org/wiki/Contiki

    Contains the world's smallest web browser.
  • Dr_Acula Posts: 5,484
    edited 2013-03-13 05:26
    Hope it is small. They say the code fits in a few kilobytes, then give you a one gigabyte download??

    C is going to be the killer app that gets this working, but it does need to be small. Anything over about 500 kilobytes is a pain when stuck in the download/run/debug cycle.

    Does that web browser contain the jpeg decompression algorithm by any chance?
  • average joe Posts: 795
    edited 2013-03-13 06:47
    Heater. wrote: »
    @pedward,
    However, Ethernet is a shared medium. Many devices can be talking across it. That means a pair of devices working at one tenth the normal speed can be hogging things and slowing down the communications of others.

    Or, am I out of touch? Is it so that modern day switches (it's point to point between me and my switch) will pass traffic on at the higher bit rate if the far end can handle it?
    As long as the network is segmented, i.e. each device connected to a ROUTER or SWITCH, it will not matter. If you're using a HUB (basically these are just blobs of solder) then you could have problems. Most networks today would be fine with one device running at 10Base-T and the rest of the network running 100Base-T, or 1000Base-T for that matter.

    This could lead to some VERY interesting projects!